\section{Introduction}
Semantic parsing (SP) is one of the most important tasks in natural language processing (NLP).
It requires both understanding the meaning of natural language sentences and mapping them to meaningful executable queries such as logical forms, SQL queries, and Python code.
Recently, state-of-the-art methods with Seq2Seq architectures have achieved over 80\% exact matching accuracy even on complex benchmarks such as ATIS and GeoQuery, which seems to suggest that most problems in this field have been solved.
\begin{figure}[!t]
\vspace{-1.5mm}\hspace{-1mm}
\centering
\includegraphics[width=0.48\textwidth]{fig/spider_top_fig.pdf}\vspace{-2mm}
\caption{
Our corpus annotates complex questions and SQLs. The example contains joining of multiple tables, a \texttt{GROUP BY} component, and a nested query.
}
\label{fig:task}
\vspace{-6mm}
\end{figure}
However, previous tasks in this field have a simple but problematic task definition: most of these results are obtained by semantic ``matching'' rather than true semantic parsing. Existing SP datasets have two shortcomings.
First, those that contain complex programs \cite{zelle96,li2014constructing,yaghmazadeh2017sqlizer,iyer17} are too small in terms of the number of programs to train modern data-intensive models, and each uses only a single database, meaning that the same database is used for both training and testing the model. More importantly, the number of logical forms or SQL labels is small, and each program is paired with about 4-10 natural language paraphrases to expand the size of the dataset. As a result, the exact same target programs appear in both the train and test sets. Models can achieve decent performance even on very complex programs by memorizing question-program patterns during training and, at test time, decoding the programs exactly as seen during training. \citet{cathy18} split these datasets by program so that no two identical programs appear in both the train and test sets, and show that under this new split, existing models fail to generalize to unseen programs.
Second, existing datasets that are large in terms of the number of programs and databases such as WikiSQL \cite{Zhong2017} contain only simple SQL queries and single tables. In order to test a model's real semantic parsing performance on unseen complex programs and its ability to generalize to new domains, an SP dataset that includes a large amount of complex programs and databases with multiple tables is a must.
However, compared to other large, realistic datasets such as ImageNet for object recognition \cite{imagenet_cvpr09} and SQuAD for reading comprehension \cite{Pranav16}, creating such an SP dataset is even more time-consuming and challenging in several respects.
First, it is hard to find many databases with multiple tables online. Second, given a database, annotators have to understand the complex database schema to create a set of questions such that their corresponding SQL queries cover all SQL patterns. Moreover, it is even more challenging to write different complex SQL queries. Additionally, reviewing and quality-checking of question and SQL pairs takes a significant amount of time. All of these processes require very specific knowledge in databases.
To address the need for a large and high-quality dataset for a new complex and cross-domain semantic parsing task, we introduce {\it Spider}, which consists of 200 databases with multiple tables, 10,181 questions, and 5,693 corresponding complex SQL queries, all written by 11 college students spending a total of 1,000 man-hours.
As Figure \ref{fig:task} illustrates, given a database with multiple tables including foreign keys, our corpus creates and annotates complex questions and SQL queries including different SQL clauses such as joining and nested query.
In order to generate the SQL query given the input question, models need to understand both the natural language question and relationships between tables and columns in the database schema.
In addition, we also propose a new task for the text-to-SQL problem. Since Spider contains 200 databases with foreign keys, we can split the dataset with complex SQL queries in a way that no database overlaps in train and test, which overcomes the two shortcomings of prior datasets, and defines a new semantic parsing task in which the model needs to generalize not only to new programs but also to new databases. Models have to take questions and database schemas as inputs and predict unseen queries on new databases.
To assess the task difficulty, we experiment with several state-of-the-art semantic parsing models. All of them struggle with this task.
The best model achieves only 12.4\% exact matching accuracy in the database split setting.
This suggests that there is ample room for improvement.
\section{Related Work and Existing Datasets}
\label{sec:rel}
Several semantic parsing datasets with different output formats, e.g., logical forms, have been created. These include ATIS \cite{Price90,Dahl94}, GeoQuery \cite{zelle96}, and JOBS \cite{tang01}. They have been studied extensively \cite{zelle96,Zettlemoyer05,wong07,Das10,Liang11,banarescu13,artzi13,Reddy14,Berant14,dong16}. However, they are domain specific, and there is no standard labeling guidance across multiple SQL queries.
Recently, more semantic parsing datasets using SQL as programs have been created. \citet{iyer17} and \citet{Popescu03} labeled SQL queries for ATIS and GeoQuery datasets. Other existing text-to-SQL datasets also include Restaurants \cite{tang2001using,Popescu03}, Scholar \cite{iyer17}, Academic \cite{li2014constructing}, Yelp and IMDB \cite{Yaghmazadeh17}, Advising \cite{cathy18}, and WikiSQL \cite{Zhong2017}. These datasets have been studied for decades in both the NLP community \cite{warren1982efficient,popescu2003towards,popescu2004modern,li2006constructing,giordani2012translating,wang2017synthesizing,iyer17,Zhong2017,Xu2017,Yu18,pshuang2018PT-MAML,2018executionguided,P18-1068,mccann2018natural} and the Database community \cite{li2014constructing, Yaghmazadeh17}. We provide detailed statistics on these datasets in Table \ref{tb:data}.
Most previous work trains models without schemas as input because a single database is used for both training and testing, so the models never need to generalize to new domains. Most importantly, these datasets have a limited number of labeled logical forms or SQL queries; to expand their size and enable neural network approaches, each logical form or SQL query is paired with about 4-10 natural language paraphrases. Most previous studies follow the standard question-based train and test split \cite{Zettlemoyer05}, so the exact same target queries (with similar paraphrases) appear in both the training and test sets. Exploiting this overlap, existing models can achieve decent performance even on complex programs by memorizing database-specific SQL templates. However, this accuracy is artificially inflated because the model merely needs to decide which template to use during testing; \citet{cathy18} show that template-based approaches can achieve even higher results. To avoid this inflation, \citet{cathy18} propose a new, query-based split in which the exact same queries do not appear in both training and testing. They show that under this setting, the performance of all current state-of-the-art semantic parsing systems drops dramatically even on the same database, indicating that these models fail to generalize to unseen queries and that current studies in semantic parsing have serious limitations.
We also want the model to generalize not only to unseen queries but also to unseen databases.
\citet{Zhong2017} published the WikiSQL dataset. In their problem definition, the databases in the test set do not appear in the train or development sets. Also, the task needs to take different table schemas as inputs. Therefore, the model has to generalize to new databases.
However, in order to generate 80,654 question-SQL pairs for 24,241 databases, \citet{Zhong2017} made simplifying assumptions about the SQL queries and databases. Their SQL labels cover only a single \texttt{SELECT} column and aggregation, plus \texttt{WHERE} conditions. Moreover, all the databases contain only single tables; no \texttt{JOIN}, \texttt{GROUP BY}, or \texttt{ORDER BY} clauses are included.
Recently, researchers have constructed some datasets for code generation including IFTTT \cite{quirk2015language}, DJANGO \cite{Oda15}, HEARTHSTONE \cite{ling16}, NL2Bash \cite{nl2bash}, and CoNaLa \cite{yin18msr}. These tasks parse natural language descriptions into a more general-purpose programming language such as Python \cite{Allamanis15,ling16, RabinovichSK17,Yin17}.
\section{Corpus Construction}
\label{sec:data_collection}
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\textwidth]{spider_annotation.pdf}\vspace{-2mm}
\caption{The annotation process of our Spider corpus.}
\label{fig:annotation}\vspace{-3mm}
\end{figure}
All questions and SQL queries were written and reviewed by 11 computer science students. Some of them were native English speakers.
As illustrated in Figure \ref{fig:annotation}, we develop our dataset in five steps, spending around 1,000 hours of human labor in total: \S \ref{sec:database_collection} Database Collection and Creation, \S \ref{sec:question_and_sql_annnotation} Question and SQL Annotation, \S \ref{sec:sql_review} SQL Review, \S \ref{sec:question_review} Question Review and Paraphrase, \S \ref{sec:final_question} Final Question and SQL Review.
\subsection{Database Collection and Creation}
\label{sec:database_collection}
Collecting databases with complex schemas is hard. Although relational databases are widely used in industry and academia, most of them are not publicly available. Only a few databases with multiple tables are easily accessible online.
Our 200 databases covering 138 different domains were collected from three resources. First, we collected about 70 complex databases from different college database courses, SQL tutorial websites, online CSV files, and textbook examples. Second, we collected about 40 databases from DatabaseAnswers\footnote{\url{http://www.databaseanswers.org/}}, which contains over 1,000 data models across different domains. These data models contain only database schemas; we converted them into SQLite, populated them using an online database population tool\footnote{\url{http://filldb.info/}}, and then manually corrected some important fields so that the table contents looked natural. Finally, we created the remaining 90 databases based on WikiSQL. To ensure domain diversity, we selected about 500 tables in about 90 different domains to create these 90 databases. To create each database, we chose several related tables from the WikiSQL dev or test splits and then created a relational database schema with foreign keys based on the selected tables. In some cases, we had to create intersection tables to link several tables together. We did not need to populate most of these databases, since the WikiSQL tables come from Wikipedia and already contain real-world data.
We manually corrected database schemas that had column names that did not make sense or were missing foreign keys. For table and column names, it is common to use abbreviations in databases; for example, `student\_id' might be represented as `stu\_id'. For our task definition, we manually changed each such column name back to regular words so that systems only have to handle semantic parsing issues.
\subsection{Question and SQL Annotation}
\label{sec:question_and_sql_annnotation}
For each database, we ask eight computer science students proficient in SQL to create 20-50 natural questions and their SQL labels.
To make our questions diverse, natural, and reflective of how humans actually use databases,
we did not use any template or script to generate questions and SQL queries.
Our annotation procedure ensures the following three aspects.
\paragraph{A)~ SQL pattern coverage.}
We ensure that our corpus contains enough examples for all common SQL patterns. For each database, we ask annotators to write SQL queries that cover all the following SQL components: \texttt{SELECT} with multiple columns and aggregations, \texttt{WHERE}, \texttt{GROUP BY}, \texttt{HAVING}, \texttt{ORDER BY}, \texttt{LIMIT}, \texttt{JOIN}, \texttt{INTERSECT}, \texttt{EXCEPT}, \texttt{UNION}, \texttt{NOT IN}, \texttt{OR}, \texttt{AND}, \texttt{EXISTS}, \texttt{LIKE} as well as nested queries. The annotators made sure that each table in the database appears in at least one query.
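As an illustration, such a coverage check could be scripted roughly as follows; the keyword list comes from the protocol above, but the regex-based matching is our own sketch, not the authors' actual tooling.

```python
import re

# SQL components each database's query set should collectively cover
# (the list is taken from the annotation protocol above).
REQUIRED = ["SELECT", "WHERE", "GROUP BY", "HAVING", "ORDER BY", "LIMIT",
            "JOIN", "INTERSECT", "EXCEPT", "UNION", "NOT IN", "OR", "AND",
            "EXISTS", "LIKE"]

def missing_components(queries):
    """Return the required SQL components not covered by any query."""
    corpus = " ".join(q.upper() for q in queries)
    return [kw for kw in REQUIRED
            if not re.search(r"\b" + kw.replace(" ", r"\s+") + r"\b", corpus)]

queries = ["SELECT name FROM club WHERE city = 'X'",
           "SELECT city, count(*) FROM club GROUP BY city HAVING count(*) > 1"]
print(missing_components(queries))  # components the annotator still owes
```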
\paragraph{B)~ SQL consistency.}
Some questions have multiple acceptable SQL queries with the same result.
However, giving totally different SQL labels to similar questions can hinder the training of semantic parsing models.
To avoid this issue, we designed the annotation protocol so that
all annotators choose the same SQL query pattern if multiple equivalent queries are possible.
\paragraph{C)~ Question clarity.}
We did not create questions that are (1) vague or too ambiguous, or (2) require knowledge outside the database to answer.
First, ambiguous questions are those that do not provide enough clues to infer which columns to return and which conditions to consider.
For example, we would not ask ``What is the most popular class at University X?'' because the definition of ``popular'' is not clear: it could mean the rating of the class or the number of students taking the course.
Instead, we choose to ask ``What is the name of the class which the largest number of students are taking at University X?'' Here, ``popular'' refers to the size of student enrollment, so the ``student\_enrollment'' column can be used in the condition to answer this question.
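On a hypothetical ``class'' table with a ``student\_enrollment'' column (the schema and data below are our own illustration), the disambiguated question maps to a query such as:

```python
import sqlite3

# Hypothetical schema and data illustrating the disambiguated question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE class (name TEXT, student_enrollment INTEGER)")
conn.executemany("INSERT INTO class VALUES (?, ?)",
                 [("Intro to CS", 300), ("Databases", 120), ("Logic", 45)])

# "What is the name of the class which the largest number of students
#  are taking at University X?"
row = conn.execute("SELECT name FROM class "
                   "ORDER BY student_enrollment DESC LIMIT 1").fetchone()
print(row[0])  # -> Intro to CS
```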
We recognize that ambiguous questions appear in real-world natural language database interfaces.
We agree that future work needs to address this issue through multi-turn interactions between the system and users for clarification. However, our main aim here is to develop a corpus for handling complex queries and generalizing across databases without requiring multi-turn interaction, which no existing semantic parsing dataset supports. Moreover, the low performance of current state-of-the-art models already shows that our task is challenging enough even without ambiguous questions.
In addition, questions are required to specify the exact information to return; otherwise, we would not know whether, say, the class id is also acceptable in the previous example. Most questions in existing semantic parsing datasets are ambiguous in this sense. This is not a serious problem within a single dataset, because enough domain-specific examples exist to learn which columns are the defaults.
However, it becomes a serious problem in cross-domain tasks, since the default return values differ across domains and users.
Second, humans sometimes ask questions that require common sense knowledge outside the given database.
For instance, when people ask ``Display the employee id for the employees who report to John'', the correct SQL is
\begin{quote}
\texttt{SELECT employee\_id \\ FROM employees \\ WHERE manager\_id = (\\ SELECT employee\_id \\ FROM employees \\ \quad WHERE first\_name = `John')}
\end{quote}
which requires the common knowledge that ``X reports to Y'' corresponds to an ``employee-manager'' relation.
We do not include such questions and leave them as a future research direction.
\paragraph{Annotation tools} We open each database on a web-based interface powered by the sqlite\_web\footnote{\url{https://github.com/coleifer/sqlite-web}} tool. It allows the annotators to see the schema and content of each table, execute SQL queries, and check the returned results.
This tool was extremely helpful for the annotators to write executable SQL queries that reflect the true meaning of the given questions and return correct answers.
\input{data.tex}
\subsection{SQL Review}
\label{sec:sql_review}
Once the database is labeled with question-query pairs, we ask a different annotator to check if the questions are clear and contain enough information to answer the query.
For a question with multiple possible SQL translations, the reviewers double check whether the SQL label is correctly chosen under our protocol.
Finally, the reviewers check if all the SQL labels in the current database cover all the common SQL clauses.
\subsection{Question Review and Paraphrase}
\label{sec:question_review}
After SQL labels are reviewed, native English speakers review and correct each question.
They first check if the question is grammatically correct and natural.
Next, they make sure that the question reflects the meaning of its corresponding SQL label. Finally, to
improve the diversity in questions, we ask annotators to add a paraphrased version to some questions.
\subsection{Final Review}
\label{sec:final_question}
Finally, we ask the most experienced annotator to conduct the final question and SQL review. This annotator makes the final decision if multiple reviewers are not sure about some annotation issues. Also, we run a script to execute and parse all SQL labels to make sure they are correct.
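That final executability check could look roughly like the sketch below, assuming each example pairs a gold SQL string with the path of its SQLite database; the pair layout and function name are hypothetical, not the authors' actual script.

```python
import sqlite3

def check_labels(examples):
    """Execute every gold SQL label and collect the ones that fail.

    `examples` is a list of (db_path, sql) pairs; any query that raises
    a sqlite3 error is returned for manual correction.
    """
    bad = []
    for db_path, sql in examples:
        conn = sqlite3.connect(db_path)
        try:
            conn.execute(sql).fetchall()
        except sqlite3.Error:
            bad.append((db_path, sql))
        finally:
            conn.close()
    return bad
```

A label that references a missing table or column surfaces immediately, while valid queries pass silently.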
\section{Dataset Statistics and Comparison}
\label{sec:data_analysis}
We summarize the statistics of Spider and other text-to-SQL datasets in Table \ref{tb:data}.
Compared with other datasets, Spider contains databases with multiple tables and contains SQL queries including many complex SQL components.
For example, Spider contains about twice as many nested queries, and 10 times as many \texttt{ORDER BY (LIMIT)} and \texttt{GROUP BY (HAVING)} components, as all previous text-to-SQL datasets combined.
Spider has 200 distinct databases covering 138 different domains such as college, club, TV show, government, etc.
Most domains have a single database and thus 20-50 questions, while a few domains, such as flight information, have multiple databases with more than 100 questions in total.
On average, each database in Spider has 27.6 columns and 8.8 foreign keys.
The average question length and SQL length are about 13 and 21 tokens, respectively.
Our task uses different databases for training and testing, evaluating the cross-domain performance.
Therefore, Spider is the \textit{only} text-to-SQL dataset that contains both databases with multiple tables in different domains and complex SQL queries.
It tests the ability of a system to generalize to not only new SQL queries and database schemas but also new domains.
\section{Task Definition}
\label{sec:task}
On top of the proposed dataset, we define a text-to-SQL task that is more realistic than prior work. Unlike most previous semantic parsing or text-to-SQL tasks, models are tested on \textit{both different complex SQL queries and different complex databases in different domains} in our task. This ensures that models can make correct predictions only when they truly understand the semantic meaning of the questions, rather than relying on memorization. Also, because our databases cover different domains, our corpus tests a model's ability to generalize to new databases. In this way, model performance on this task can reflect real semantic parsing ability.
In order to make the task feasible and to focus on the more fundamental part of semantic parsing, we make the following assumptions:
\begin{itemize}
\setlength{\leftskip}{-3mm}
\item In our current task, we do not evaluate model performance on generating values. Predicting correct SQL structures and columns is more realistic and critical at this stage, given the low performance of various current state-of-the-art models on our task.
In a real-world situation, users typically need to double-check and finalize condition values over multiple interactions, so it is unrealistic to predict condition values without interacting with users. In practice, most people know what values they want to ask about but do not know the SQL logic; a more reasonable workflow is to let users search for the values through an interface and then ask more specific questions. Also, previous work with value prediction uses a single database for both train and test, which makes it vulnerable to overfitting. However, SQL queries must include values in order to be executed. For value prediction in our task, a list of gold values for each question is given, and models need to fill them into the right slots in their predicted SQL.
\item As mentioned in the previous sections, we exclude queries that require outside knowledge such as common sense inference and math calculation. For example, imagine a table with birth and death year columns. To answer a question like ``How long was X's life?'', we would use \texttt{SELECT death\_year - birth\_year}. Even though this example is easy for humans, it requires common knowledge of what life length means and the use of a math operation, which is not the focus of our dataset.
\item We assume all table and column names in the database are clear and self-contained.
For example, some databases use database specific short-cut names for table and column names such as ``stu\_id'', which we manually converted to ``student id'' in our corpus.
\end{itemize}
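The gold-value protocol of the first assumption can be sketched as follows; marking value slots with the token `value` is our own convention for this illustration, not part of the official format.

```python
def fill_values(sql_with_slots, gold_values):
    """Fill the given gold values into the value slots of a predicted
    query. Slots are marked with the token `value` (our own convention)."""
    out = sql_with_slots
    for v in gold_values:
        out = out.replace("value", repr(v), 1)  # fill slots left to right
    return out

print(fill_values("SELECT name FROM city WHERE population > value", [1000000]))
```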
\section{Evaluation Metrics}
\label{sec:eval}
Our evaluation metrics include Component Matching, Exact Matching, and Execution Accuracy.
In addition, we measure the system's accuracy as a function of the difficulty of a query.
Since our task definition does not involve predicting value strings, our evaluation metrics do not take them into account.
We will release the official evaluation script along with our corpus so that the research community can share the same evaluation platform.
\paragraph{Component Matching}
To conduct a detailed analysis of model performance, we measure the average exact match between the prediction and ground truth on different SQL components. For each of the following components: \vspace{-2mm}
\begin{itemize}
\setlength{\itemsep}{-1mm}
\item \texttt{SELECT} \quad $\bullet$ \texttt{WHERE} \quad $\bullet$ \texttt{GROUP BY}
\item \texttt{ORDER BY} \quad $\bullet$ \texttt{KEYWORDS} (including all SQL keywords without column names and operators)
\end{itemize}\vspace{-2mm}
we decompose each component in the prediction and the ground truth into sets of sub-components, and check whether these two sets match exactly.
For example, to evaluate the \texttt{SELECT} component of \texttt{SELECT avg(col1), max(col2), min(col1)}, we first parse and decompose it into the set \texttt{(avg, min, col1), (max, col2)}, and check whether the gold and predicted sets are the same.
Previous work directly compared decoded SQL with gold SQL.
However, some SQL components do not have order constraints.
In our evaluation, we treat each component as a set so that for example, \texttt{SELECT avg(col1), min(col1), max(col2)} and \texttt{SELECT avg(col1), max(col2), min(col1)} would be treated as the same query.
To report a model's overall performance on each component, we compute F1 score on exact set matching.
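As a sketch, the set decomposition and F1 for the \texttt{SELECT} component could be computed as follows; the simple string parsing here is our illustration, while the official script parses full SQL.

```python
def select_units(select_clause):
    """Decompose 'avg(col1), max(col2), min(col1)' into a set of
    (aggregator, column) units so that ordering is ignored."""
    units = set()
    for item in select_clause.split(","):
        item = item.strip()
        if "(" in item:
            agg, col = item.split("(")
            units.add((agg.strip(), col.rstrip(")").strip()))
        else:
            units.add(("none", item))
    return units

def set_f1(pred, gold):
    """F1 score of exact set matching between prediction and gold."""
    if not pred or not gold:
        return float(pred == gold)
    tp = len(pred & gold)
    p, r = tp / len(pred), tp / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

gold = select_units("avg(col1), max(col2), min(col1)")
pred = select_units("avg(col1), min(col1), max(col2)")
print(set_f1(pred, gold))  # order does not matter -> 1.0
```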
\paragraph{Exact Matching}
We measure whether the predicted query as a whole is equivalent to the gold query.
We first evaluate the SQL clauses as described in the last section. The predicted query is correct only if all of the components are correct. Because we conduct a set comparison in each clause, this exact matching metric can handle the ``ordering issue'' \cite{Xu2017}.
\paragraph{Execution Accuracy\footnote{Please check our website for the latest updates on the task at \url{https://yale-lily.github.io/spider}}}
Since Exact Matching can produce false negatives when the semantic parser generates novel syntactic structures, we also consider Execution Accuracy.
For Execution Accuracy, values are required in order to execute the SQL queries. Instead of generating these values, models are given a list of gold values for each question and need to select and fill them into the right slots of their predicted SQL.
We exclude value prediction from the Component and Exact Matching evaluations and do not provide Execution Accuracy in the current version.
However, it is also important to note that Execution Accuracy can produce false positives when a predicted SQL query returns the same result (for example, `NULL') as the gold SQL while being semantically different; thus, the two metrics complement each other.
Finally, our evaluation also accepts multiple keys when \texttt{JOIN} and \texttt{GROUP BY} are in the query. For example, if ``stu\_id'' in one table refers to ``stu\_id'' in another table, \texttt{GROUP BY} on either is acceptable.
\paragraph{SQL Hardness Criteria}
To better understand the model performance on different queries, we divide SQL queries into 4 levels: easy, medium, hard, extra hard.
We define the difficulty based on the number of SQL components, selections, and conditions, so that queries that contain more SQL keywords (\texttt{GROUP BY}, \texttt{ORDER BY}, \texttt{INTERSECT}, nested subqueries, column selections and aggregators, etc) are considered to be harder.
For example, a query is considered hard if it includes more than two \texttt{SELECT} columns, more than two \texttt{WHERE} conditions, and a \texttt{GROUP BY} over two columns, or if it contains \texttt{EXCEPT} or nested queries. A SQL query with more additions on top of that is considered extra hard.
Figure \ref{fig:example} shows examples of SQL queries in 4 hardness levels.
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{fig/example.pdf}
\caption{SQL query examples in 4 hardness levels.}
\label{fig:example}
\end{figure}
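A rough sketch of such a hardness heuristic is shown below; the component counts and thresholds are our illustrative simplification, not the official Spider criteria.

```python
import re

def hardness(sql):
    """Bucket a query by a crude count of its SQL components."""
    s = sql.upper()
    score = 0
    score += len(re.findall(r"\bJOIN\b", s))
    score += len(re.findall(r"\bGROUP BY\b|\bORDER BY\b|\bHAVING\b|\bLIMIT\b", s))
    score += 2 * len(re.findall(r"\bINTERSECT\b|\bEXCEPT\b|\bUNION\b", s))
    score += 2 * (s.count("SELECT") - 1)              # nested subqueries
    score += len(re.findall(r"\bAND\b|\bOR\b", s))    # extra conditions
    if score == 0:
        return "easy"
    if score <= 2:
        return "medium"
    if score <= 4:
        return "hard"
    return "extra hard"

print(hardness("SELECT name FROM club"))  # -> easy
```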
\input{results_table.tex}
\section{Methods}
\label{sec:methods}
In order to analyze the difficulty and demonstrate the purpose of our corpus, we experiment with several state-of-the-art semantic parsing models.
As our dataset is fundamentally different from prior datasets such as GeoQuery and WikiSQL, we adapted these models to our task as follows.
We created a `big' column list by concatenating the columns of all tables in the database as an input to all models. Also, for each model, we limit the column selection space of each question to the columns of the database the question is asked against, instead of all column names in the whole corpus.
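This input could be assembled roughly as in the sketch below; the schema representation (a table-to-columns mapping) is our own assumption, not the corpus file format.

```python
def build_column_list(schema):
    """Concatenate all columns of one database, qualified by table name,
    into a single list that serves as the model's column selection space.

    `schema` maps table name -> list of column names (hypothetical format).
    """
    columns = ["*"]  # SELECT * is always available
    for table, cols in schema.items():
        columns += [f"{table}.{col}" for col in cols]
    return columns

schema = {"student": ["stu_id", "name"],
          "enrollment": ["stu_id", "class_id"]}
print(build_column_list(schema))
```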
\paragraph{Seq2Seq}
Inspired by neural machine translation \cite{sutskever2014sequence}, we first apply a basic sequence-to-sequence model, \textbf{Seq2Seq}.
Then, we also explore \textbf{Seq2Seq+Attention} from \cite{dong16} by adding an attention mechanism \cite{bahdanau2015neural}.
In addition, we include \textbf{Seq2Seq+Copying} by adding an attention-based copying operation similar to \cite{jia2016}.
The original models do not take the schema into account because the same schema is used for both train and test. We modify them to consider the table schema by passing a vocabulary mask that limits decoding to SQL keywords and the table and column names of the current database.
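Such a mask can be sketched as follows: only SQL keywords plus the current database's table and column names may be emitted at decoding time (the token-level handling here is our own simplification).

```python
def vocab_mask(vocab, sql_keywords, schema_tokens):
    """Boolean mask over the output vocabulary: True for tokens the
    decoder may emit given the current database's schema."""
    allowed = set(sql_keywords) | set(schema_tokens)
    return [tok in allowed for tok in vocab]

vocab = ["select", "from", "where", "singer", "name", "paris", "movie"]
mask = vocab_mask(vocab,
                  sql_keywords=["select", "from", "where"],
                  schema_tokens=["singer", "name"])
print(mask)  # tokens outside SQL grammar and this schema are masked out
```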
\paragraph{SQLNet}
introduced by \citet{Xu2017} uses column attention and employs a sketch-based method that generates SQL as a slot-filling task.
This fundamentally avoids the sequence-to-sequence structure when ordering does not matter in SQL query conditions.
Because it is originally designed for WikiSQL, we extend its \texttt{SELECT} and \texttt{WHERE} modules to \texttt{ORDER BY} and \texttt{GROUP BY} components.
\paragraph{TypeSQL} proposed by \citet{Yu18} improves upon SQLNet with a different training procedure and by utilizing types extracted from either a knowledge graph or table content to help the model better understand entities and numbers in the question.
In our experiment, we use the question type information extracted from database content. We also extend its modules to the \texttt{ORDER BY} and \texttt{GROUP BY} components. TypeSQL is the only model that uses database content.
\section{Experimental Results and Discussion}
We summarize the performance of all models on our test set including accuracy of exact matching in Table \ref{tab:results} and F1 scores of component matching in Table \ref{tab:results_component}.
\paragraph{Data Splits}
For the final training dataset, we also select and include 752 queries and 1,659 questions that follow our annotation protocol from six existing datasets: Restaurants, GeoQuery, Scholar, Academic, IMDB, and Yelp.
We report results under two settings for all models: (1) Example split, where examples are randomly split into 8,659 train, 1,034 dev, and 2,147 test examples; questions for the same database can appear in both train and test. (2) Database split, where the 206 databases are split into 146 train, 20 dev, and 40 test; all questions for the same database are in the same split.
\paragraph{Overall Performance}
The performances of the Seq2Seq-based basic models including Seq2Seq, Seq2Seq+Attention, and Seq2Seq+Copying are very low. However, they are able to generate nested and complex queries because of their general decoding process.
Thus, they can get a few hard and extra hard examples correct.
But in the vast majority of cases, they predict invalid SQL queries with grammatical errors.
The attention and copying mechanisms do not help much either.
In contrast, SQLNet and TypeSQL, which utilize SQL structure information to guide the SQL generation process, significantly outperform the Seq2Seq models. While they can produce valid queries, they are unable to generate nested queries or queries with keywords such as \texttt{EXCEPT} and \texttt{INTERSECT}, because they limit possible SQL outputs to a set of fixed, pre-defined SQL structures.
As the Component Matching results in Table \ref{tab:results_component} show, all models struggle most with \texttt{WHERE} clause prediction. The \texttt{WHERE} clause is more likely to contain multiple columns and operations, which makes it the most challenging to predict. For each component, most prediction errors come from column prediction.
In general, the overall performances of all models are low, indicating that our task is challenging and there is still a large room for improvement.
\paragraph{Example Split vs Database Split}
As discussed in Section \ref{sec:task}, another challenge of the dataset is to generalize to new databases.
To study this, in Table \ref{tab:results} and Table \ref{tab:results_component} we compare model performances under the two settings.
For all models, the performance under database split is much lower than that under example split.
In particular, TypeSQL utilizes column names as question types, and it outperforms the other models by a large margin under the example split. However, its performance drops the most on the database split. This indicates that the model does well on complex SQL prediction but fails to generalize to new databases.
In addition, we observe that all models perform much worse on column selection under the database split than under the example split.
Overall, the result shows that our dataset presents a challenge for the model to generalize to new databases.
\begin{figure}[!t]
\vspace{-1.5mm}\hspace{-1mm}
\centering
\includegraphics[width=0.48\textwidth]{fig/fk_with_acc_typesql_seq2seq_atten.png}\vspace{-3mm}
\caption{Exact matching accuracy as a function of the number of foreign keys.}
\label{fig:complexity}
\vspace{-4mm}
\end{figure}
\paragraph{Complexity of Database Schema}
In order to show how the complexity of the database schema affects model performance, Figure \ref{fig:complexity} plots the exact matching accuracy as a function of the number of foreign keys in a database.
Performance decreases as the number of foreign keys grows.
The first reason is that the model has to choose column and table names from many more candidates in a complex database schema.
Second, a complex database schema makes it harder for the model to capture the relationships between tables linked by foreign keys.
SQL answers to questions on databases with more foreign keys are also more likely to join multiple tables.
This indicates that the task requires more effective methods for encoding the relationships between tables connected by foreign keys.
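As an illustration of how this complexity measure can be computed, the sketch below (with a toy schema of our own invention) counts foreign-key links via SQLite's \texttt{PRAGMA foreign\_key\_list}; each such link is a candidate join path the parser must consider:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE singer  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE concert (id INTEGER PRIMARY KEY, venue TEXT);
CREATE TABLE singer_in_concert (
    singer_id  INTEGER REFERENCES singer(id),
    concert_id INTEGER REFERENCES concert(id));
""")

def count_foreign_keys(cursor):
    """Total number of foreign-key links across all tables in the schema."""
    tables = [r[0] for r in cursor.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    return sum(len(cursor.execute(
        f"PRAGMA foreign_key_list({t})").fetchall()) for t in tables)

n_fk = count_foreign_keys(cur)
print(n_fk)  # 2: singer_in_concert references both singer and concert
```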
\section{Conclusion}
\label{sec:conclusion}
In this paper we introduce Spider, a large, complex and cross-domain semantic parsing and text-to-SQL dataset,
which directly benefits both NLP and DB communities.
Based on Spider, we define a
new challenging and realistic semantic parsing task.
Experimental results of several state-of-the-art models on this task suggest plenty of room for improvement.
\section*{Acknowledgement}
We thank Graham Neubig, Tianze Shi, Catherine Finegan-Dollak, and three anonymous reviewers for their discussion and feedback.
We also thank Barry Williams for providing part of our database schemas from the DatabaseAnswers.
\section{Introduction}
\input{intro}
\section{Background}
\input{background}
\section{Our Approach}
\input{approach}
\section{Experiments}
\input{experiments}
\section{Discussion}
\input{discussion}
\section{Related Work}
\input{related}
\section{Conclusion}
\input{conclusion}
\subsection{Models}
\textbf{SQL+Q: } The model uses only questions as input.
\textbf{SQL+QC: } The model uses questions and column names as inputs.
\textbf{SQLKG+QC: } Our main model, comparable to previous work on the WikiSQL dataset. It uses questions, column names, and type attention, but no database content, as inputs.
\textbf{SQLKG+TC: } The model uses questions, column names, and type attention, with full access to database content.
\fi
Semantic parsing maps natural language to meaningful executable programs. The programs can take a range of representations such as logical forms \cite{zelle96,Zettlemoyer05,wong07,Das10,Liang11,banarescu13,artzi13,Reddy14,Berant14,pasupat2015compositional,Yin15}.
The main focus of our paper is on translating natural language into SQL queries \cite{Poon13,li2014constructing}.
\section{Introduction}
Most previous work on text-to-SQL uses either simple datasets or simple train-test splits on a single database. WikiSQL \cite{Zhong2017} is an example of a simple SQL dataset: its databases contain only single tables, so the corresponding SQL queries have no complex structures such as \texttt{JOIN} and \texttt{GROUP BY}. On the other hand, \newcite{iyer17} applied a basic seq2seq model to a text-to-SQL task with very complex SQL queries. However, their experiments are on one single database and use question-based splits for training and testing. Thus, their model memorizes database-specific SQL templates and only needs to decide which template to use during testing. Unsurprisingly, it performs very badly on unseen SQL queries despite having high test accuracy under the question-split setting on a single database. To avoid this issue, \newcite{cathy18} proposed a new way of splitting datasets so that the same SQL queries do not appear in both train and test data. The template-based approach fails under this query-split setting.
In the real world, however, we would like to know how well a text-to-SQL model performs not only on unseen queries but also on unseen databases. In this project, we introduce the most realistic text-to-SQL task to date and explore new methods to solve this problem. First, we label a new corpus of about 10,000 SQL-question pairs over about 200 databases with multiple tables (about 4,000 SQL-question pairs over 8 databases were already labeled by \newcite{cathy18} and \newcite{iyer17}). In this task, we split the dataset so that different databases are seen during training and testing. Second, we explore new text-to-SQL approaches that take not only question and SQL pairs but also table schemas and database structures as inputs.
How can we learn a text-to-SQL parser that generalizes well to new databases?
Building natural language interfaces to relational databases is an important and challenging problem \cite{li2014constructing,pasupat2015compositional,Yin15,Zhong2017,Yaghmazadeh17,Xu2017,Wang2017}.
It requires a system that is able to understand natural language questions and generate corresponding SQL queries.
Interacting with relational databases through natural language helps users of any background easily query and analyze a vast amount of data. This requires a system that understands users' questions and converts them to SQL queries automatically. However, this research area lacks a large training corpus because of the difficulty of labeling. In this paper we introduce a large human-labeled text-to-SQL corpus to the research community. We also define a more realistic and challenging text-to-SQL task, show that current state-of-the-art methods fail on it, and present a novel approach to this problem.
\section{Related Work}
\label{sec:rel}
Semantic parsing maps natural language to meaningful executable programs. The programs could be a range of representations such as logic forms \cite{zelle96,Zettlemoyer05,wong07,Das10,Liang11,banarescu13,artzi13,Reddy14,Berant14,pasupat2015compositional}. Another area close to our task is code generation. This task parses natural language descriptions into a more general-purpose programming language such as Python \cite{Allamanis15,ling16, RabinovichSK17,Yin17}.
As a sub-task of semantic parsing, the text-to-SQL problem has been studied for decades \cite{warren1982efficient,popescu2003towards,popescu2004modern,li2006constructing,giordani2012translating,wang2017synthesizing}. The methods of the database community \cite{li2014constructing, Yaghmazadeh17} involve more hand-crafted feature engineering and user interaction with the systems. In this work, we focus on recent neural network based approaches \cite{Yin15,Zhong2017,Xu2017,Wang2017,iyer17}. \newcite{dong16} introduce a sequence-to-sequence approach for converting text to logical forms. Most previous work focuses on a specific table schema, using a single database for both training and testing; thus, these systems do not generalize to new databases.
\newcite{Zhong2017} publish the WikiSQL dataset and propose a sequence-to-sequence model with reinforcement learning to generate SQL queries. In the problem definition of the WikiSQL task, the databases in the test set do not appear in the train and development sets. Also, the task needs to take different table schemas into account. \newcite{Xu2017} further improve the results by using a SQL sketch based approach employing a sequence-to-set model.
\section{Semantic Parsing Datasets}
\label{sec:methods}
Previous semantic parsing benchmarks expand their size by paraphrasing many questions for each labeled program. They then split the dataset by questions, so that almost all test programs also appear multiple times in training. To make this problem clearer, \newcite{cathy18} propose a new evaluation that splits the dataset by programs so that train and test share no programs. They show that the performance of all current state-of-the-art semantic parsing systems drops dramatically under this setting.
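The difference between question-based and program-based splitting can be sketched as follows (a simplified illustration with synthetic data, not the actual benchmark code):

```python
import random

def split_by_question(pairs, test_frac=0.25, seed=0):
    """Question split: shuffle pairs; identical programs can land in both sets."""
    rng = random.Random(seed)
    pairs = pairs[:]
    rng.shuffle(pairs)
    k = int(len(pairs) * (1 - test_frac))
    return pairs[:k], pairs[k:]

def split_by_program(pairs, test_frac=0.25, seed=0):
    """Program split: partition by program so train and test share no program."""
    rng = random.Random(seed)
    programs = sorted({p for _, p in pairs})
    rng.shuffle(programs)
    k = int(len(programs) * (1 - test_frac))
    train_progs = set(programs[:k])
    train = [x for x in pairs if x[1] in train_progs]
    test = [x for x in pairs if x[1] not in train_progs]
    return train, test

# Each program has several question paraphrases, as in earlier benchmarks.
pairs = [(f"q{i}_{j}", f"prog{i}") for i in range(8) for j in range(4)]
train, test = split_by_program(pairs)
overlap = {p for _, p in train} & {p for _, p in test}
print(len(overlap))  # 0: no program appears in both sets
```

Under the question split, the paraphrases of a program scatter across both sets, so a model can succeed by memorizing question-program patterns; the program split removes that shortcut.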
\section{Task Definition}
\label{sec:task}
\section{Systems}
\label{sec:systems}
\subsection{Implementation Details}
\section{Results and Discussion}
\subsection{Results}
\subsection{Error Analysis}
\section{Conclusion and Future Work}
\label{sec:conclusion}
\subsection{Annotation Projection}
\input{annotation_projection}
\subsection{Direct Transfer}
\input{direct_transfer}
\subsection{Transfer Scenarios}\label{sec_scenarios}
We conduct a diverse set of experiments to evaluate our proposed methods. We
experiment with three scenarios:
1) \emph{parallel data}: in this setting, we use both the annotation projection and direct transfer approaches. In order to see the effect of domain and genre, we use different translation datasets: out-of-domain religious text, out-of-domain contemporary political text, and relatively in-domain data.
In the direct transfer approach, we produce the intersected alignments from the source and target alignment directions and use the Giza++ \cite{och2000giza} tool to extract translation dictionaries from them.
We restrict ourselves to the most frequent translation for each word in order to prune alignment noise.
2) \emph{translation dictionaries}: we use the manual translation dictionaries extracted from Wiktionary\footnote{\url{https://www.wiktionary.org/}}
in place of the dictionaries generated from parallel translations
for the direct transfer model. The Wiktionary entries are noisy and for some languages, the coverage of the lexicon is very restricted; and 3) \emph{comparable corpora}: we use the model of \newcite{??} to extract cross-lingual word embeddings. We use the cross-lingual embeddings as features for the direct transfer method.
We use only the fixed pre-trained word embedding features $x_{ce}$ for the direct transfer method in this scenario.
\subsection{Datasets, Tools and Settings}
\todo{KM: wording confusing here. Makes it sound as if only Persian is laveled. I assume not.}
\paragraph{Labeled sentiment data} We downloaded tweets labeled with sentiment for 12 languages
from \newcite{tweet_paper}, as well as the SentiPers data\footnote{\url{http://dadegan.ir/catalog/sentipers}}, a set of digital product reviews for Persian (fa), as our rich-resource languages. The languages are Bulgarian (bg), German (de), English (en), Spanish (es), Croatian (hr), Hungarian (hu), Polish (pl), Portuguese (pt), Russian (ru), Slovak (sk), Slovene (sl) and Swedish (sv). We use 80\% of the data for training, 10\% for development, and 10\% for testing. We use all development data sets to train the supervised NBSVM models and tune the parameters for the direct transfer and annotation projection methods on the Persian development data.
We also use labeled Uyghur, Chinese and Arabic for the target language only, given smaller amounts of labeled data. We manually annotated the Uyghur and Chinese data and use the test section of the Arabic Semeval dataset\footnote{\url{http://alt.qcri.org/semeval2017/task4/}}. In the multi-source setting, we hold out the training dataset for each language and use the other training sets for training. Table~\ref{tab_tweet_size} shows the data sizes.
\todo{KM: It sounds odd to say here that use U, C and A as ONLY the target language. Why? . To address this I have edited above although we could be called out on that for Arabic. Let's discuss which are low resource languages.}
\input{tweet_data_stat}
\paragraph{Parallel data}
\todo{KM: One problem with the discussion as is, is that you don't describe the domain of the test data. So the out-of-domain descriptions require the reader to think: what is the domain? I think it should be clearly stated above. We can discuss hwere. What exactly is in-domain? Is it product reviews? Or is it any twitter data?}
We use the following datasets:
\begin{itemize}
\item \emph{Bible and Quran}: We use Quran and Bible translations as our out-of-domain parallel datasets. We use the corpus of \newcite{christodouloupoulos2014massively} for the Bible. This dataset has one translation per language, except for English, for which it has two; we use both English translations. We use the Tanzil translations\footnote{\url{http://tanzil.net/trans/}} for the holy Quran. This dataset has multiple translations for some languages.
We excluded a subset of the translations from Russian and English that were interpretations as opposed to translations.\footnote{For Russian, we use the Krachkovsky, Kuliev, Osmanov, Porokhova, and Sablukov translations and for English, we use the Ahmedali, Arberry, Daryabadi, Itani, Mubarakpuri, Pickthall, Qarai, Qaribullah, Sahih, Sarwar, Shakir, Wahiduddin, and Yusufali translations.}
\item \emph{Europarl }: We use the Europarl data \cite{koehn2005europarl} as out-of-domain contemporary political text. We restricted ourselves to those sentences that are translated to all of the 10 languages (Bulgarian, German, English, Spanish, Hungarian, Polish, Portuguese, Slovak, Slovene, and Swedish). That comprised a total of 294738 sentences for all languages.
\item \emph{Linguistic Data Consortium (LDC) parallel data}: The LDC datasets consist of relatively in-domain parallel data. We use the following English-to-target parallel translations from the LDC data: Chinese (16440 sentences)\footnote{LDC2016E30\_LORELEI\_Mandarin}, Persian (57087 sentences)\footnote{LDC2016E93\_LORELEI\_Farsi}, Hungarian (157931 sentences) \footnote{LDC2016E99\_LORELEI\_Hungarian}, Arabic (49446 sentences)\footnote{LDC2016E89\_LORELEI\_Arabic}, Russian (193967 sentences)\footnote{LDC2016E95\_LORELEI\_Russian}, Spanish (345940 sentences)\footnote{LDC2016E97\_LORELEI\_Spanish}, and Uyghur (99272 sentences) \footnote{LDC2016E57\_LORELEI\_IL3\_Incident\_Language\_Pack\_\\for\_Year\_1\_Eval}.
\end{itemize}
\paragraph{Monolingual text}
We use the Wikipedia dump data set to create monolingual word embeddings and Brown clusters. We also use the monolingual text to train cross-lingual word embeddings and Brown clusters based on the method of \newcite{rasooli_16}: this is done by randomly swapping words in a language with its translation in another language. We replaced X\% of the words in the monolingual corpora with their translations drawn from either the bilingual dictionary automatically generated from the parallel corpus or the manual Wiktionary bilingual dictionary.
\todo{KM: Since it is so prominent here, I think we cannot rely on people reading your other paper to understand what is done. It should be defined here. So the last part of this sentence should be made more precise. I've tried but you should edit.}
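Our reading of the swapping step can be sketched as follows (the sampling details of \newcite{rasooli_16} may differ; the dictionary and swap rate here are toy stand-ins):

```python
import random

def code_switch(tokens, dictionary, swap_prob=0.3, rng=None):
    """Randomly replace words with their translation in another language,
    producing mixed text on which cross-lingual embeddings can be trained."""
    rng = rng or random.Random(0)
    out = []
    for tok in tokens:
        if tok in dictionary and rng.random() < swap_prob:
            out.append(dictionary[tok])
        else:
            out.append(tok)
    return out

# Toy English -> Spanish dictionary (illustrative only).
en_es = {"house": "casa", "white": "blanca", "the": "la"}
sent = "the white house is big".split()
mixed = code_switch(sent, en_es, swap_prob=1.0)
print(" ".join(mixed))  # with swap_prob=1.0 every dictionary word is swapped
```

Training a standard embedding model on such mixed text places translation pairs in shared contexts, which is what pushes their vectors together.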
\paragraph{POS tagging and tokenization} We use the Universal Dependencies corpora \cite{universal_deps} to create training data for sentence and word tokenization, and part-of-speech tagging. We trained sentence tokenization with the OpenNLP toolkit\footnote{\url{https://opennlp.apache.org/}}. We also use the OpenNLP toolkit to train word tokenization for the languages for which the Europarl package does not provide a tokenization script, except Arabic, Persian and Chinese. For Chinese we use the Stanford Chinese segmenter \cite{chinese_segmentor}, for Arabic Madamira \cite{pasha2014madamira} with the ATB tokenization format, and for Persian the Hazm toolkit\footnote{\url{https://github.com/sobhe/hazm}}. We use the perceptron-based part-of-speech tagger of \newcite{collins2002discriminative}\footnote{\url{https://github.com/rasoolims/SemiSupervisedPosTagger}} to train our tagging models.
\todo{KM: A question will come up here: you needed data labeled with POS for each language, correct? Did you have it?}
\paragraph{Comparable data}\todo{MSR: Noura; please fill this in. If you want to add any table for statistics; please put in in another tex file and use \\input\{\} for that}
\paragraph{Embeddings and Brown clusters} We use the Word2vec tool\footnote{\url{https://code.google.com/archive/p/word2vec/}} with its default setting and dimension of 300 for all of our embeddings (monolingual and cross-lingual). We trained monolingual and cross-lingual Brown clusters with the method of \newcite{stratos2014spectral}\footnote{\url{https://github.com/karlstratos/singular}} with 500 clusters.
\paragraph{Neural network model} We have implemented our neural network model by using the Dynet library \cite{neubig2017dynet}. We use seven iterations for the single-source transfer experiments and two epochs for the multi-source transfer (concatenation of multiple sources). We use the following dimensions for all of our experiments: pre-trained word embeddings $d_{ce}=300$, updatable word embeddings $d_{e}=400$, cross-lingual Brown cluster embeddings $d_{cc}=50$, batch size of 10K, LSTM output dimension $d_{rec}=400$, and hidden layer dimension $d_h = 400$.
\subsection{Baseline approach}
We provide a simple baseline against which to compare all of our methods: we translate the Sentiwordnet \cite{baccianella2010sentiwordnet} lexicon to each target language using the dictionaries extracted from the Bible and Quran parallel data. We use a simple threshold to determine the sentiment assignment for each sentence:
\[
l(x)=
\begin{cases}
\text{positive} & s_x[p]-s_x[n]>\delta \\
\text{negative} & s_x[n]-s_x[p]>\delta \\
\text{neutral} & \text{otherwise} \\
\end{cases}
\]
$s_x[p]$ and $s_x[n]$ denote the average positive and negative Sentiwordnet scores for the words in sentence $x$. We chose $\delta = 0.1$ based on performance on the Persian development data.
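The decision rule is a direct transcription of the cases above (the scores passed in are made-up values for illustration):

```python
DELTA = 0.1  # chosen on the Persian development data

def label(pos_score, neg_score, delta=DELTA):
    """Threshold rule over averaged Sentiwordnet scores of a sentence."""
    if pos_score - neg_score > delta:
        return "positive"
    if neg_score - pos_score > delta:
        return "negative"
    return "neutral"

print(label(0.40, 0.10))  # positive
print(label(0.05, 0.30))  # negative
print(label(0.20, 0.15))  # neutral (difference below the threshold)
```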
\subsection{English to Target Transfer Results}
In our first set of experiments, we conduct the transfer experiments by transferring only from the English language. In cases where we have more than one translation for a sentence (such as Quran), we apply majority voting to get the most frequent label. The results are depicted in Table~\ref{tab_single_results}. \todo{MSR: after having all numbers, I will write more. the results for the projection will be updated; some numbers are wrong. Will update it asap}
\input{single_table}
\subsection{Multi-Source Transfer Results}
In this set of experiments, we use all training datasets (except the one for the target language) as source languages. For annotation projection, we apply majority voting on sentences with more than one translation. For direct transfer we apply three different techniques: 1) Concatenation: we concatenate all the training sets and use the result as the training data; 2) Ensemble-flat: for every source language, we train a single-source direct transfer model and run it on the test data; we then take the most frequent label assignment; 3) Ensemble-KL: this is similar to Ensemble-flat, except that we weight every label assignment by the inverse KL-divergence between the target and source POS trigram distributions. Following \newcite{rosa-zabokrtsky:2015:ACL-IJCNLP}, we use the fourth power of the inverted KL-divergence.\footnote{We also tried the cosine similarity but, similar to \cite{rosa-zabokrtsky:2015:ACL-IJCNLP}, we did not see any improvement.}
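The Ensemble-KL weighting can be sketched as follows (the POS-trigram distributions are invented for illustration, and the exact smoothing used in practice may differ):

```python
import math
from collections import defaultdict

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) over a shared support of POS-trigram probabilities."""
    return sum(pi * math.log((pi + eps) / (q.get(k, 0.0) + eps))
               for k, pi in p.items())

def kl_weight(target_dist, source_dist):
    """Fourth power of the inverted KL-divergence, as described above."""
    return (1.0 / kl_divergence(target_dist, source_dist)) ** 4

def weighted_vote(predictions, weights):
    """predictions: {source: label}; weights: {source: weight}."""
    scores = defaultdict(float)
    for src, lab in predictions.items():
        scores[lab] += weights[src]
    return max(scores, key=scores.get)

# Toy POS-trigram distributions (made up for illustration).
target = {"DET NOUN VERB": 0.6, "NOUN VERB ADJ": 0.4}
sources = {
    "close": {"DET NOUN VERB": 0.55, "NOUN VERB ADJ": 0.45},
    "far":   {"DET NOUN VERB": 0.10, "NOUN VERB ADJ": 0.90},
}
weights = {s: kl_weight(target, d) for s, d in sources.items()}
vote = weighted_vote({"close": "positive", "far": "negative"}, weights)
print(vote)  # the typologically closer source dominates the vote
```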
\todo{KM: THink about whether we should include both ensemble-flat and ensemble-KL.}
Table~\ref{tab_multi_source_results} shows the results for different settings.\todo{the results for the projection will be updated; some numbers are wrong. Will update it asap}
\input{multi_source_table}
\subsection{Supervised Sentiment Analysis}\label{sec_sup_sent}
\todo{KM: Agree this section can be removed.}
In a supervised sentiment analysis system, a set of training examples ${\cal X}=\{x_1, x_2, \cdots, x_n\}$ are given with their corresponding labels $\{l_1, l_2, \cdots, l_n\}$. Each label $l_i$ comes from the set of possible sentiment labels ${\cal L}=\{positive, negative, neutral\}$. A machine learning method learns a model ${\cal M}$ from the labeled instances ${\cal X}$. At decoding time, the sentiment analysis system uses a scoring function obtained from ${\cal M}$ and predicts the best label $l^*(x)$ for every sentence $x$:
\[
l^*(x) = \arg\max_{l \in {\cal L}} \theta(\phi(x,l))
\]
where $\theta(\phi(x,l))$ gives the score of label $l$ for the input sentence $x$. The function $\phi(x,l)\in \mathbb{R}^d$ is a generic feature function that extracts features for sentence $x$ and label $l$. The features may include any useful information such as word n-grams, part-of-speech tags or word embeddings.
\subsection{Data Assumption}
Throughout this paper, we assume that we have $k$ source languages $L_1, \cdots, L_k$ and a single target language $L_{k+1}$. We assume the following data:
\begin{itemize}
\item Labeled training examples ${\cal X}_i$ for $i\in \{1,\cdots,k\}$.
\item Small amounts of translation data for all language pairs. From these we can also extract a cross-lingual translation dictionary $t(w, i, j)$ that gives the most frequent translation of word $w$ in language $i$ into language $j$. The frequency of translations is automatically extracted from word alignments (e.g. Giza++~\cite{och2000giza}) run on the translation data.
\item Part-of-speech tagger and word tokenizer for all languages.
\item Monolingual text for all languages. We use the Wikipedia text in this paper.
\end{itemize}
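The dictionary $t(w, i, j)$ can be built by counting alignment links; a minimal sketch (the links below are toy input that would in practice come from Giza++ alignments):

```python
from collections import Counter, defaultdict

def build_dictionary(aligned_pairs):
    """aligned_pairs: iterable of (source_word, target_word) links taken
    from word-aligned parallel sentences. Keeps the most frequent target."""
    counts = defaultdict(Counter)
    for src, tgt in aligned_pairs:
        counts[src][tgt] += 1
    return {src: c.most_common(1)[0][0] for src, c in counts.items()}

links = [("house", "casa"), ("house", "casa"), ("house", "hogar"),
         ("dog", "perro")]
t = build_dictionary(links)
print(t["house"])  # "casa": the most frequent alignment wins
```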
\subsection{A Baseline Approach}\label{sec_baseline}
As a baseline, we use a lexicon-based approach. We use the Sentiwordnet lexicon \cite{baccianella2010sentiwordnet}: this lexicon gives the probability of each word and sense pair for being positive or negative. Formally, we use the lexicon ${\cal S}$ that gives the probability of positive and negative sentiments as a two-dimensional vector ${\cal S}(w,s)\in \mathbb{R}^2$ for a word $w$ and its corresponding sense $s$. Since we do not have word sense information for the target language, we use the averaged vector $\psi(w)\in \mathbb{R}^2$ over all the possible senses of each word $w$:
\[
\psi(w)=
\begin{cases}
\frac{\sum_{s \in \text{senses}(w)} {\cal S}(w,s)}{|\text{senses}(w)|} & \text{if } w\in {\cal S}\\
{[0, 0]} & \text{otherwise}
\end{cases}\]
For each sentence $x$ in a target language $L_t$, we use the translation dictionary to get the corresponding English Sentiwordnet score and average them by the number of seen entries in the Sentiwordnet dictionary:
\[
f(x) = \frac{\sum_{i=1}^{n} \psi(t(w_i, \text{en}, L_t))}{\sum_{i=1}^{n}\mathbb{I}(t(w_i, \text{en}, L_t)\in {\cal S})}
\]
where $\mathbb{I}$ is the indicator function.
Finally we define a simple threshold to assign the label of the sentence:
\[
l(x)=
\begin{cases}
\text{positive} & f(x)[0]-f(x)[1]>\delta \\
\text{negative} & f(x)[1]-f(x)[0]>\delta \\
\text{neutral} & \text{otherwise} \\
\end{cases}
\]
where $f(x)[0]$ and $f(x)[1]$ show the positive and negative scores of sentence $x$ and $\delta$ is a predefined threshold. In this paper we use $\delta = 0.1$.
\end{comment}
\subsection{Annotation Projection}
In single-source annotation projection, we assume that we have translation data where sentences $\{s_1, s_2, \cdots, s_m\}$ from a rich-resource source language $L_s$ are translated to sentences $\{t_1, t_2, \cdots, t_m\}$ in a low-resource language $L_t$; i.e., $s_i$ is a translation of $t_i$ for $i=1,\cdots, m$. We also assume that we have labeled training examples ${\cal X}=\{x_1, x_2, \cdots, x_n\}$ for the source language $L_s$. We first train a supervised system ${\cal M}_{sup}$ on ${\cal X}$ and afterward use the trained system ${\cal M}_{sup}$ to predict the labels of the source side of the parallel text, $\{l_{s_1}, l_{s_2}, \cdots, l_{s_m}\}$. We project those labels to the target-side translations by assuming $l_{s_i} \rightarrow l_{t_i}$. After applying projection, we use the target-side text $\{t_1, t_2, \cdots, t_m\}$ with the projected labels $\{l_{t_1}, l_{t_2}, \cdots, l_{t_m}\}$ to train the target model ${\cal M}_{proj}$. The trained model ${\cal M}_{proj}$ is the final model for analyzing sentiment in the target language.
Multi-source transfer is an extension of single-source transfer. In this setting, there is more than one source language, and the translation data includes a mapping between source sentences in multiple source languages and each target sentence. We apply the same technique for each source language and use majority voting to get the most reliable projected label for each target sentence. Finally, a supervised system is trained on the projected data.
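The projection-plus-voting pipeline can be sketched as follows (the keyword-rule classifier is a trivial stand-in for the trained model ${\cal M}_{sup}$, and the sentences are toy examples):

```python
from collections import Counter

def project_labels(source_sentences, target_sentences, classifier):
    """Label source-side translations with a trained source model and copy
    each label to the aligned target sentence (single-source projection)."""
    return [(t, classifier(s))
            for s, t in zip(source_sentences, target_sentences)]

def majority_vote(labels):
    """Multi-source projection: keep the most frequent projected label."""
    return Counter(labels).most_common(1)[0][0]

# Trivial stand-in classifier: a keyword rule instead of a trained model.
clf = lambda s: "positive" if "good" in s else "negative"

src = ["good movie", "bad movie"]
tgt = ["pelicula buena", "pelicula mala"]
projected = project_labels(src, tgt, clf)
print(projected)
print(majority_vote(["positive", "positive", "negative"]))  # positive
```

The projected target-side pairs then serve as training data for ${\cal M}_{proj}$ exactly as gold-labeled data would.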
\subsection{Direct Transfer}
In direct transfer, we directly use the training instances ${\cal X}_i$ from the source languages $L_i$ ($i=1,\cdots,k$). We can have an unrestricted number of source languages. In other words, a supervised sentiment analysis system can be trained on the concatenation of the $k$ labeled datasets $\cup_{i=1}^{k} {\cal X}_i$, or can be the outcome of an ensemble of $k$ single-source direct transfer models. One benefit of direct transfer is the direct use of gold labels, as opposed to the automatically projected labels used in annotation projection. The main problem with direct transfer is that most common features do not generalize beyond each source language: for example, lexical features in one language may not appear in other languages. To address this problem, we design a deep learning model that makes use of cross-lingual word representations and translation dictionaries. More details are given in \S\ref{sec_direct}.
\subsubsection{Feature Adaptation}
As mentioned in \S\ref{sec_background}, the main challenge in direct transfer is the adaptation of features from the source languages to the target language. When dealing with labeled datasets in other languages, most of the words occurring in the training data do not exist in the target language vocabulary. We apply the following techniques to address this problem:
\begin{itemize}
\item {\bf Word representation features:} We train cross-lingual word representations such that words with similar meanings in different languages have similar representations. We use the method of \newcite[Figure 1]{rasooli_16} to train cross-lingual word embeddings and Brown clusters.
\item {\bf Lexical features:} We use the translation dictionaries $t(w, i, t)$, either extracted from the translation text or taken from manual dictionaries, to translate as many words as possible in the labeled examples ${\cal X}_i$ of language $L_i$ into target language words.
\todo{KM: Mohammad - do you use any lexical features other than the words themselves? Or are you referring to the word embeddings of the translated words? It's slightly unclear what you mean by lexical features. Perhaps you should define it if you're going to use it often?}
The lexical features of the translated words can be used directly when training the target model.
\end{itemize}
\subsubsection{Deep Learning Model}
The input to our model is the sequence of words in the sentence $x = \{x_1, x_2, \cdots, x_n\}$ where $n$ is the number of words. If a dictionary (either extracted from parallel data or a manual dictionary) is provided, the words that exist in the dictionary are translated to the target language. The model uses two representations of the input: a recurrent representation based on long short-term memory (LSTM) networks \cite{hochreiter1997long}, and one based on averaging over all inputs. A graphical depiction of the underlying model is shown in Figure~\ref{fig_neural_fig}.
\todo{KM: In this section, I don't think it is clear that you use code-switched data. Unless perhaps it is in a later section? I am wondering if some examples would help.}
\todo{KM: YOu definitely need an example here. Show the code-switched data and describe how that affects the word embeddings.}
\input{neural_fig}
\paragraph{Embedding Layer}
We use the following features for every word in the sentence:
\begin{itemize}
\item A \emph{fixed} pre-trained cross-lingual word embedding $x_{ce} \in \mathbb{R}^{d_{ce}}$ extracted with the method of \newcite{rasooli_16}.
\item A randomly initialized word embedding $x_{e} \in \mathbb{R}^{d_{e}}$ for every word in the sentence.
If a word has a translation, we use the translation; otherwise we use the original word.
\item The cross-lingual word cluster embedding $x_{cc} \in \mathbb{R}^{d_{cc}}$ to represent the word cluster identity of each word. The word clusters are extracted by the method of \newcite{rasooli_16}.
As with the cross-lingual embeddings, we first look for the translation and otherwise use the original word.
\todo{KM: Are you using sentiwordnet in *all* methods? I thought you were just using it in the baseline.}
\item In the case of single-source transfer from English, we use the fixed two-dimensional Sentiwordnet \cite{baccianella2010sentiwordnet} score $x_{sw} \in \mathbb{R}^{2}$ that represents the likelihood of a word being positive or negative. We translate the Sentiwordnet lexicon to the target language using the translation dictionaries.
\end{itemize}
\paragraph{Intermediate Layer}
The input for every word $x_i$ to intermediate layers is the concatenation of the embedding layers: $f = x_{ce}[i] \circ x_e[i] \circ x_{cc}[i]$ where $\circ$ is the concatenation operator. Thus the dimension of the input is $d_i = d_{ce}+d_{e}+d_{cc}$.\footnote{For the sake of generality, we have not included the Sentiwordnet representation $x_{sw}$ in the notation since it is only used in single-transfer from English to the target and not in the multi-source transfer experiments.} The following representations are used to represent the intermediate layer:
\begin{itemize}
\item {\bf Recurrent layer}: We use a bidirectional LSTM (BiLSTM) to represent the sequence of words in the sentence: a forward pass $LSTM_f(f_{[1:n]}) \in \mathbb{R}^{d_{rec}}$ gives the final representation of the sentence reading from left to right, and the backward pass $LSTM_b(f_{[n:1]}) \in \mathbb{R}^{d_{rec}}$ reads from right to left. The outputs of the two LSTMs are then concatenated as $r(x) \in \mathbb{R}^{2\cdot d_{rec}}$:
\[
r(x) = LSTM_f(f_{[1:n]}) \circ LSTM_b(f_{[n:1]})
\]
\item {\bf Average pool layer}: Since there are many word order inconsistencies between the source and target languages, we also use an average pool layer $p(x) \in \mathbb{R}^{d_i}$ that averages all the input features. This layer represents the bag-of-words information of the sentence without taking sequence information into account:
\[
p(x) = \frac{\sum_{i=1}^{n} f[i]}{n}
\]
\end{itemize}
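The average pool layer is simple enough to state directly in code (plain Python lists stand in for the concatenated feature vectors $f[i]$):

```python
def average_pool(features):
    """Bag-of-words representation: elementwise mean of the per-word
    feature vectors, ignoring word order."""
    n, dim = len(features), len(features[0])
    return [sum(vec[d] for vec in features) / n for d in range(dim)]

# Three words, each with a 2-dimensional feature vector f[i].
f = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(average_pool(f))  # [3.0, 4.0]
```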
\paragraph{Output Layer}
We use a multilayer perceptron (MLP) to get the likelihood of each label: the two intermediate layers $r(x)$ and $p(x)$ are concatenated and fed to a hidden layer $H \in \mathbb{R}^{d_h \times (2\cdot d_{rec} + d_i)}$ activated by rectified linear units (ReLU) \cite{nair2010rectified}:
\[
H(x) = ReLU (H (r(x) \circ p(x)))
\]
Finally the hidden layer output is fed to the output softmax layer with the weight matrix $W \in \mathbb{R}^{|{\cal L}| \times d_h}$ where $|{\cal L}|$ is the number of unique sentiment labels (in our case it is 3):
\[
p(l \mid x) = \frac{e^{(W H(x))_l}}{\sum_{i=1}^{|{\cal L}|} e^{(W H(x))_i}}
\]
We use the sum of log likelihoods as the objective function and the Adam optimizer \cite{adam_paper} to learn the model parameters.
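The output layer amounts to a linear map followed by a softmax; a pure-Python sketch with toy, untrained weights:

```python
import math

def softmax(scores):
    """Numerically stable softmax over the label scores W H(x)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def predict(hidden, W, labels=("positive", "negative", "neutral")):
    """W: one weight row per label; hidden: the MLP hidden layer H(x)."""
    scores = [sum(w * h for w, h in zip(row, hidden)) for row in W]
    probs = softmax(scores)
    return labels[probs.index(max(probs))], probs

W = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # toy 3 x d_h weight matrix
pred_label, probs = predict([2.0, -1.0], W)
print(pred_label)  # "positive": the first row gives the largest score
```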
\subsection{Transfer from a Single Source}
The results from single-source transfer are mixed, but some trends do appear. In general, in-domain data (LDC) enables significant increases in performance for most languages. For projection, the LDC results are better than the projection results using either BQ or EP in all cases except for Uyghur, where the smaller parallel dataset may explain the drop. For direct transfer, the LDC results are better than the results using Wikipedia, BQ, or EP in all cases except for Farsi.
Looking at direct transfer in isolation, we see that the automatically generated dictionaries produce better results than the manual Wikipedia dictionary in eight out of 14 languages when the BQ parallel data is used and in six out of nine languages when the EP data is used. In general, where more parallel data is available, we can expect automatically generated dictionaries to outperform manual ones.
In comparing projection with direct transfer, the results are more mixed. With LDC data, each approach does better or equally well on approximately the same number of languages. With the BQ data (a lower-resource setting), annotation projection does better on average, but that is because of some large differences in a small number of languages (sk, sl, sv, ug, and zh). With the EP data, results are split: direct transfer does better on four out of nine languages, projection on four, and in one case the performance is equivalent.
\section{Evaluation Metrics}
\section{SQL Hardness Criteria}
To better understand the model performance on different queries, we divide SQL queries into 4 levels: easy, medium, hard, extra hard. We define the difficulty as follows.
We first define:
\begin{itemize}
\item SQL components 1: \texttt{WHERE, GROUP BY, ORDER BY, LIMIT, JOIN, OR, LIKE, HAVING}
\item SQL components 2: \texttt{EXCEPT, UNION, INTERSECT, NESTED}
\item Others: number of aggregations $>$ 1, number of select columns $>$ 1, number of where conditions $>$ 1, number of group by clauses $>$ 1 (arithmetic expressions such as col1$-$col2 are not considered)
\end{itemize}
Then different hardness levels are determined as follows.
\begin{itemize}
\item Easy: the SQL contains zero or exactly one keyword from [SQL components 1], satisfies no condition in [Others], and contains no keyword from [SQL components 2].
\item Medium: the SQL satisfies no more than two conditions in [Others], has no more than one keyword from [SQL components 1], and has no keyword from [SQL components 2]; OR the SQL has exactly two keywords from [SQL components 1], satisfies fewer than two conditions in [Others], and has no keyword from [SQL components 2].
\item Hard: the SQL satisfies more than two conditions in [Others], with no more than two keywords from [SQL components 1] and no keyword from [SQL components 2]; OR the SQL has more than two but at most three keywords from [SQL components 1], satisfies no more than two conditions in [Others], and has no keyword from [SQL components 2]; OR the SQL has no more than one keyword from [SQL components 1], satisfies no condition in [Others], but has exactly one keyword from [SQL components 2].
\item Extra Hard: everything else.
\end{itemize}
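The rules above can be expressed as a small classifier. This is a sketch under the assumption that the keyword counts have already been extracted from the parsed SQL; \texttt{comp1}, \texttt{comp2}, and \texttt{others} are hypothetical names for the counts of matched items in [SQL components 1], [SQL components 2], and [Others], respectively.

```python
def hardness(comp1, comp2, others):
    """Map keyword/condition counts to one of the four difficulty levels."""
    if comp2 == 0:
        if comp1 <= 1 and others == 0:
            return "easy"
        if (comp1 <= 1 and others <= 2) or (comp1 == 2 and others < 2):
            return "medium"
        if (comp1 <= 2 and others > 2) or (2 < comp1 <= 3 and others <= 2):
            return "hard"
    elif comp2 == 1 and comp1 <= 1 and others == 0:
        return "hard"
    return "extra hard"
```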
\end{document}
\section{Introduction}
\label{intro}
In all studies of relativistic properties of nuclear matter, mean
field models are usually the models of first choice \cite{SW,NJL}.
These models use a statistical approach based on Boltzmann-Gibbs
(BG) statistics which is, strictly speaking, only correct when the
corresponding heat bath is homogeneous and infinite. These
conditions are by no means met in realistic situations in which
nuclear matter occurs. Usually one encounters some inherent
problems arising, for example, from the smallness of the
collisional systems and their rapid evolution. These, among other
things, render the spatial configuration of the system far from
uniform and prevent global equilibrium from being established (cf.
\cite{departure} and references therein). As a result, some
quantities become non extensive and develop power-law tailed
rather than exponential distributions. The widely used way to
account for these effects is to resort to a nonextensive
statistics, known as $q$-statistics \cite{T}. The new
phenomenological nonextensivity parameter $q$ occurring there is
supposed to account for all possible dynamical factors violating
the assumptions of the usual BG statistics. This is recovered in
the limit of $q \rightarrow 1 $. Because it enters into the
respective formulas of the particular dynamical model used for a
given investigation, it allows for a simple phenomenological check
of the stability of the model against possible deviations from the
BG approach.
So far, applications of the nonextensive approach are numerous and
cover all branches of physics \cite{applications}. These include
high energy multiparticle production processes (cf.,
\cite{multipart}) and different aspects of nuclear and quark
matter \cite{AL,LPQ,BiroQ}. The nonextensive framework can also be
derived from a special treatment of kinetic theory investigating
complex systems in their nonequilibrium stationary states
\cite{qBiro}. Some examples of more specialized topics can be
found in \cite{todos} and references therein. For an illustration,
the Tsallis distribution, $h_q(E)$, and BG distribution, $f(E)$,
are connected as follows:
\begin{eqnarray} h_q(E) &=& \exp_q
\left(-\frac{E}{T}\right) = \frac{2 -
q}{T}\left[1 - (1-q)\frac{E}{T}\right]^{\frac{1}{1-q}} \label{eq:Tsallis}\\
&\stackrel{q \rightarrow 1}{\Longrightarrow}& f(E) =
\frac{1}{T}\exp \left(-\frac{E}{T}\right). \label{eq:BG}
\end{eqnarray}
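A quick numerical check of this limit (a sketch with illustrative values of $E$ and $T$, implementing only the $q>1$, $E>0$ branch of Eq.~(\ref{eq:Tsallis})):

```python
import math

def h_q(E, T, q):
    """Tsallis distribution h_q(E) for q >= 1 and E > 0; BG limit at q = 1."""
    if q == 1.0:
        return math.exp(-E / T) / T        # Boltzmann-Gibbs distribution f(E)
    return (2.0 - q) / T * (1.0 - (1.0 - q) * E / T) ** (1.0 / (1.0 - q))

bg = h_q(1.0, 2.0, 1.0)       # Boltzmann-Gibbs value
ts = h_q(1.0, 2.0, 1.001)     # Tsallis value for q slightly above 1
```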
It is usually argued that, for the $q > 1$ case, $q - 1$ is a
measure of intrinsic fluctuations of the temperature in the system
considered \cite{WW}, whereas $q < 1$ is usually attributed to
some specific correlations limiting the available phase space
\cite{Kodama} or to the possible fractality of the allowed phase
space \cite{fractal} (other possible interpretations were
considered in \cite{todos})\footnote{One must admit at this point
that this approach is subject to a rather heated debate as to whether
it is consistent with equilibrium thermodynamics or whether it is
only a convenient way to phenomenologically describe some
intrinsic fluctuations in the system \cite{debate}. It should be
therefore noticed that it was demonstrated on general grounds
\cite{M} that fluctuation phenomena can be incorporated into a
traditional presentation of thermodynamics. The Tsallis
distribution (\ref{eq:Tsallis}) belongs to the class of general
admissible distributions which satisfy thermodynamical consistency
conditions and which are therefore a natural extension of the
usual BG canonical distribution (\ref{eq:BG}).}.
For our further considerations of importance are recent
applications of nonextensive statistics in description of nuclear
\cite{Pereira} and quarkonic matter \cite{LPQ,JG1,JG2}, the latter
of which we shall continue here. In \cite{Pereira}, the
$q$-version of the Walecka many-body field theory \cite{SW} has
been investigated. It was shown there that $q$-statistics results
in the enhancement of the scalar and vector meson fields in
nuclear matter, in diminishing of the nucleon effective mass and
in hardening of the nuclear equation of state (only the $q > 1$
case was considered there). In \cite{LPQ} the relativistic
equation of state of hadronic matter and a quark-gluon plasma at
finite temperature and baryon density was investigated in the
framework of nonextensive statistical mechanics. In our work
\cite{JG1} we investigated a nonextensive version of another mean
field theory, namely the QCD-based Nambu - Jona-Lasinio (NJL)
model of a many-body field theory describing the behavior of
strongly interacting matter presented recently in \cite{Sousa}.
This time, unlike in \cite{Pereira}, we used the quark rather than
the hadronic degrees of freedom and, because of this, we had to
consider both the $q > 1$ and $q < 1$ cases. This $q$-NJL model
allowed us to discuss the $q$-dependence of the chiral phase
transition in dense quark matter, in particular the quark
condensates and the effective quark masses and their influence on
the masses of $\pi$ and $\sigma$ mesons and on the spinodal
decomposition (cf., \cite{JG1} for details). These results helped
us proceed further and consider critical phenomena in strongly
interaction matter using $q$-statistics (these phenomena are of
interest nowadays, cf., for example, \cite{HK,SFR}, but were so
far not investigated in non-equilibrium environment provided by
$q$-statistics). In particular, we shall now concentrate on the
influence of dynamical factors causing nonextensivity and
represented by the parameter $q$ in the vicinity of the {\it
critical end point} (CEP).
\section{Basic elements of the $q$-NJL model}
\label{sec:II}
First we present the basic elements of the $q$-NJL model
introduced in \cite{JG1} (to which we refer for more details).
\subsection{The usual NJL model}
\label{IIa}
We start with the usual QCD based NJL model based on BG statistics
discussed in \cite{Sousa}. It is the standard $SU(3)$ NJL model
with $U(1)_A$ symmetry, with the usual Lagrangian of
the NJL model used in a form suitable for the bosonization
procedure (with four quarks interactions only), from which we
obtain the gap equations for the constituent quark masses $M_i$:
\begin{eqnarray}
M_i = m_i - 2g_{_S} \big <\bar{q_i}q_i \big > -2g_{_D}\big
<\bar{q_j}q_j\big > \big <\bar{q_k}q_k \big >\,,\label{gap}
\end{eqnarray}
with cyclic permutation of $i,j,k =u,d,s$ and with the quark
condensates given by $\big <\bar{q}_i q_i \big > = -i \mbox{Tr}[
S_i(p)]$ ($S_i(p)$ is the quark Green function); $m_i$ denotes the
current mass of quark of flavor $i$. We consider a system of
volume $V$, temperature $T$ and the $i^{th}$ quark chemical
potential $\mu_i$ characterized by the baryonic thermodynamic
potential of the grand canonical ensemble (with quark density
equal to $\rho_i = N_i/V$, the baryonic chemical potential $\mu_B=
\frac{1}{3} (\mu_u+\mu_d+\mu_s)$ and the baryonic matter density
as $\rho_B = \frac{1}{3}(\rho_u+\rho_d+\rho_s)$),
\begin{equation}
\Omega (T, V, \mu_i )= E- TS - \sum_{i=u,d,s} \mu _{i} N_{i} .
\label{tpot}
\end{equation}
The internal energy, $E$, the entropy, $S$, and the particle
number, $N_i$, are given by \cite{Sousa} (here $E_i = \sqrt{M_i^2
+ p^2}$):
\begin{eqnarray}
E &=&- \frac{ N_c}{\pi^2} V\sum_{i=u,d,s}\left[
\int p^2 dp \frac{p^2 + m_{i} M_{i}}{E_{i}}
(1 - n_{i}- \bar{n}_{i}) \right] - \nonumber\\
&& - g_{S} V \sum_{i=u,d,s}\, \left(\big <
\bar{q}_{i}q_{i}\big > \right)^{2}
- 2 g_{D}V \big < \bar{u}u\big > \big < \bar{d}d\big > \big <
\bar{s}s\big > , \label{eq:energy} \\
S &=& -\frac{ N_c}{\pi^2} V \sum_{i=u,d,s}\int p^2 dp \cdot
\tilde{S}, \label{eq:entropy}\\
&& {\rm where}\quad \tilde{S} = \bigl[ n_{i} \ln n_{i} + (1-n_{i})\ln (1-n_{i})
\bigr] +\nonumber\\
&&~~~~~~~~~~~~~~~~~~~~+ \bigl[ n_{i}\rightarrow 1 - \bar n_{i} \bigr],\nonumber\\
N_i &=& \frac{ N_c}{\pi^2} V \int p^2 dp
\left( n_{i}-\bar n_{i} \right) \label{number}.
\end{eqnarray}
The quark and antiquark occupation numbers are, respectively,
\begin{eqnarray}
n_{i} &=& \frac{1}{\exp\left[\beta \left(E_{i} -
\mu_{i}\right)\right] + 1}, \label{eq:n}\\
\bar n_{i} &=& \frac{1}{\exp\left[\beta \left(E_{i} +
\mu_{i} \right)\right] + 1}, \label{eq:barn}
\end{eqnarray}
and with them one calculates values of the quark condensates
present in Eq. (\ref{gap}),
\begin{eqnarray}
\big <\bar{q}_i q_i \big> = - \frac{ N_c}{\pi^2} \,
\sum_{i=u,d,s}\left[ \int \frac{p^2M_i}{E_i} (1\,-\,n_{i}-\bar
n_{i})\right]dp .\label{gap1}
\end{eqnarray}
Eqs. (\ref{gap}) and (\ref{gap1}) form a self-consistent set of
equations from which one gets the effective quark masses $M_i$ and
values of the corresponding quark condensates.
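In practice, such a self-consistent set is solved iteratively: one starts from a guess for $M_i$, evaluates the condensates, and re-inserts them into the gap equation until convergence. A toy fixed-point illustration (the contraction below is invented for the example and does not represent the actual NJL integrals):

```python
import math

def solve_gap(m, g, M0=0.0, tol=1e-12, max_iter=1000):
    """Fixed-point iteration M <- m + g * c(M) with a toy 'condensate' c(M) = tanh(M)."""
    M = M0
    for _ in range(max_iter):
        M_new = m + g * math.tanh(M)
        if abs(M_new - M) < tol:
            return M_new
        M = M_new
    return M

M = solve_gap(m=0.1, g=0.5)    # converged constituent mass of the toy model
```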
The values of the pressure, $P$, and the energy density,
$\epsilon$, are defined as:
\begin{eqnarray}
P(\mu_i, T) &=& - \frac{\Omega(\mu_i, T)}{V},\qquad
\epsilon(\mu_i, T) = \frac{E(\mu_i, T)}{V} \label{eq:p}\\
~~{\rm with}~~&& P(0,0) = \epsilon(0,0)=0.\nonumber
\end{eqnarray}
\subsection{The $q$ extension of the NJL model - the $q$-NJL}
\label{IIb}
The $q$-statistics is introduced by using the $q$-form of quantum
distributions for fermions $(+1)$ and bosons $(-1)$ in Eqs.
(\ref{eq:n}) and (\ref{eq:barn}). This is done following a
prescription provided in \cite{TPM}, namely by replacing $n$ and
$\bar{n}$ by
\begin{eqnarray}
n_{qi} &=& \frac{1}{\tilde{e}_q(\beta(E_{qi} - \mu_i))\pm
1},\label{nq} \label{TPM}
\end{eqnarray}
(the important point to notice is that one encounters here $E_{qi}
= \sqrt{M^2_{qi} + p^2}$, i.e., that because of $M_{qi}$ also
energy is now a $q$-dependent quantity). Denoting $x = \beta(E
-\mu)$ one has that for $q > 1$
\begin{eqnarray}
\tilde{e}_q(x) &=& \left\{
\begin{array}{l}
~[1+(q-1)x]^{\frac{1}{q-1}}\quad {\rm if}\quad x > 0 \\
\\
~[1+(1-q)x]^{\frac{1}{1-q}}\quad {\rm if}\quad x\leq 0 \\
\end{array}
\right. , \label{qgt1}
\end{eqnarray}
whereas for $ q < 1$
\begin{eqnarray}
\tilde{e}_q(x) &=& \left\{
\begin{array}{l}
~[1+(q-1)x]^{\frac{1}{q-1}}\quad {\rm if}\quad x \leq 0 \\
\\
~[1+(1-q)x]^{\frac{1}{1-q}}\quad {\rm if}\quad x > 0 \\
\end{array}
\right. .\label{qst1}
\end{eqnarray}
This is because only then can one treat quarks and antiquarks
consistently on the same footing (and for all values of $x$). This
reflects the particle-hole symmetry observed in the $q$-Fermi
distribution in a plasma containing both particles and
antiparticles, namely that
\begin{equation}
n_q(E,\beta,\mu,q) = 1 - n_{2-q}(- E,\beta,-\mu).
\label{pap_symmetry}
\end{equation}
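This symmetry can be verified numerically from Eqs. (\ref{qgt1})--(\ref{qst1}) (a sketch with arbitrary illustrative values of $E$, $\beta$, $\mu$, and $q$):

```python
import math

def e_q_tilde(x, q):
    """Piecewise q-exponential for q > 1 and q < 1; reduces to exp(x) at q = 1."""
    if q == 1.0:
        return math.exp(x)
    if (q > 1.0) == (x > 0.0):           # the [1 + (q-1)x]^{1/(q-1)} branch
        return (1.0 + (q - 1.0) * x) ** (1.0 / (q - 1.0))
    return (1.0 + (1.0 - q) * x) ** (1.0 / (1.0 - q))

def n_q(E, beta, mu, q):
    """q-Fermi occupation number: 1 / (e_q(beta (E - mu)) + 1)."""
    return 1.0 / (e_q_tilde(beta * (E - mu), q) + 1.0)

lhs = n_q(0.7, 1.0, 0.3, 1.02)                 # n_q(E, beta, mu)
rhs = 1.0 - n_q(-0.7, 1.0, -0.3, 2.0 - 1.02)   # 1 - n_{2-q}(-E, beta, -mu)
```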
This means, therefore, that in a system containing both particles
and antiparticles (as in our case) both $q$ and $2 - q$ occur (or,
when expressed by a single $q$ only, one can encounter both $q
> 1$ and $q <1$ at the same time). These dual possibilities warn
us that not only $q > 1$ but also $q < 1$ (or $(2 - q) > 1$) have
physical meaning in the systems we are considering. This
differentiates our $q$-NJL model from the $q$-version of the
model presented in \cite{Pereira}. Notice that for $q\rightarrow
1$ one recovers the standard FD distribution, $n(\mu,T)$.
Actually, it is important to realize that for $T\rightarrow0$ one
always gets $n_q(\mu,T)\rightarrow n(\mu,T)$, irrespectively of
the value of $q$ \cite{Pereira}. This means that we can expect any
nonextensive signature only for high enough temperatures (how high
depends on circumstances and on the kind of observable considered,
for illustration of this point see results presented in our paper
\cite{JG1} and Fig. \ref{Figure2} below).
Our $q$-NJL model is then obtained by replacing the formulas of
Section \ref{IIa} with their $q$-counterparts in what concerns the
form of the FD distributions. Additionally, when calculating
energies and condensates we follow \cite{AL,LPQ} and use the
$q$-versions quark condensates, replacing Eqs. (\ref{gap1}),
(\ref{gap}) and (\ref{eq:energy}) by their $q$-forms:
\begin{equation}
\big <\bar{q}_i
q_i \big>_q = - \frac{ N_c}{\pi^2} \sum_{i=u,d,s}\left[ \int
\frac{p^2M_{qi}}{E_{qi}} (1\,-\,n^q_{qi}- \bar{n}^q_{qi})\right]dp
,\label{q_gap1}
\end{equation}
\begin{equation}
M_{qi} = m_i - 2g_{_S} \big <\bar{q_i}q_i \big >_q -2g_{_D}\big
<\bar{q_j}q_j\big >_q \big <\bar{q_k}q_k \big >_q\, ,\label{q_gap}
\end{equation}
\begin{eqnarray}
E_q &=& - \frac{ N_c}{\pi^2} V\!\!\!\sum_{i=u,d,s}\left[
\int p^2 dp \frac{p^2 + m_{i} M_{qi}}{E_{qi}}
(1 - n^q_{qi}- \bar{n}^q_{qi}) \right] - \nonumber\\
&& - g_{S} V \sum_{i=u,d,s} \left(\big <
\bar{q}_{i}q_{i}\big >_q \right)^{2}
- 2 g_{D}V \big < \bar{u}u\big >_q \big < \bar{d}d\big >_q \big <
\bar{s}s\big >_q . \label{q_energy}
\end{eqnarray}
On the other hand, again following \cite{AL,LPQ}, densities which
are given by the $q$-version of Eq. (\ref{number}) are
calculated with $n_q$'s (not with $n_q^q$, as in (\ref{q_energy})
and in (\ref{q_gap1})). The pressure for given $q$ is calculated
using the above $E_q$ and the $q$-entropy version of Eq.
(\ref{eq:entropy}) with (cf. \cite{TPM})
\begin{eqnarray}
\tilde{S}_q &=& \left[ n^q_{qi} \ln_q n_{qi} + (1-n_{qi})^q\ln_q
(1-n_{qi}) \right] + \nonumber\\
&&+ \left\{ n_{qi}\rightarrow 1\! -\! \bar n_{qi} \right\}.
\label{q entropy}
\end{eqnarray}
Eq. (\ref{q_gap1}) together with the $q$-version of the gap
equation, Eq. (\ref{q_gap}), are the basic equations from which
one deduces all results presented here.
\section{Results}
\label{sec:Results}
Before presenting our results concerning nonextensive critical
effects we briefly recall the previous results (cf.,
\cite{JG1,JG2}). In Fig. \ref{Figure1} we present the typical
pressure at critical temperature $T_{cr}$ obtained in a $q$-NJL
model as a function of compression $\rho/\rho_0$ calculated for
different values of the nonextensivity parameter $q$
\begin{figure}[h]
\begin{center}
\resizebox{0.45\textwidth}{!}{\includegraphics{fig1a.eps}}\\
\resizebox{0.45\textwidth}{!}{\includegraphics{fig1b.eps}}
\end{center}
\caption{The pressure at critical temperature $T_{cr}$ as a
function of compression $\rho/\rho_0$ calculated
for different values of the nonextensivity parameter
$q$ (the area marked at the upper panel is shown in
detail in the lower panel).
The dots indicate positions of the inflection
points for which first derivative of pressure by
compression vanishes. As in \cite{Sousa} for $q = 1$
the corresponding compression is $\rho/\rho_0 = 1.67$
(and this leads to $\mu = 318.5$ MeV); it remains the
same for $q > 1$ considered here (but now $\mu = 321$
MeV for $q = 1.01$ and $\mu = 326.1$ MeV for $q = 1.02$)
whereas it is shifted to $\rho/\rho_0 = 1.72$ for
$ q< 1$ ($\mu = 313$ MeV for $q = 0.99$
and $\mu = 307.7$ MeV for $q = 0.98$).
}
\label{Figure1}
\end{figure}
(see \cite{JG1} for more details on spinodal decomposition and
chiral symmetry restoration in $q$-NJL model)\footnote{There is
still an ongoing discussion on the meaning of the temperature in
nonextensive systems. However, in our case the small values of the
parameter $q$ deduced from the data allow us to argue that, to
first approximation, $T_q = T$ used here and in \cite{Pereira}. In
high energy physics it is just the hadronizing temperature (and
instead of the state of equilibrium one deals there with some kind
of stationary state). For a thorough discussion of the temperature
of nonextensive systems, see \cite{Abe}.}. Notice that the effect
is stronger for $ q <1$ and that, essentially, the saddle point
remains at the same value of compression. When one moves away from
the critical temperature, the typical spinodal structure occurs,
which is more pronounced for lower temperatures whereas its
sensitivity to the $q$ parameter gets stronger with increasing
temperature (cf., \cite{JG1}). However, it turns out that, for each
temperature (even a very small one), a $q > 1$ exists for which
there is no more mixed phase and for which the spinodal effect
vanishes. This seems to be a quite natural effect in the scenario
in which $q > 1$ is attributed to the fluctuations of the
temperature in a system considered as proposed in \cite{WW}. On
the contrary, effects like correlations or limitations of the
phase space considered in \cite{Kodama,fractal} work towards an
increase of the $T_{cr}$ and make the spinodal effect more
pronounced.
A few remarks are in order here (for more detailed discussion we
refer to \cite{JG1}). Nonextensive dynamics enter the NJL
calculations through the quark (antiquark) number distribution
functions $n_{qi}$ ($\bar{n}_{qi}$). These functions are connected
with the respective quark (antiquarks) spectral functions in the
NJL model. However, deviations from the exponential shape of
$q$-exponents, as defined in Eqs. (\ref{qgt1}) and (\ref{qst1}),
are negligible for values of $q$ close to unity (in our case $0.98
< q < 1.02$). It is also important to notice that Eqs.
(\ref{qgt1}) and (\ref{qst1}) are symmetric for $q \leftrightarrow
2-q$. The differences between the $q<1$ and $q>1$ cases observed in
our results are then due to our way of defining the energy
(\ref{q_energy}) and entropy (\ref{q entropy}), which, following
\cite{AL,LPQ}, we do by using $n^q{_{qi}}$ and $\bar{n}^q{_{qi}}$
instead of $n_{qi}$ and $\bar{n}_{qi}$ \footnote{It is worth
noticing that in \cite{Pereira}, which considers only the $q>1$ case
and uses number distributions without powers of $q$, the
significant effects were obtained only for much larger values of
the nonextensive parameter $q=1.2$.}. Because for $q<1$ the
distributions $n^q_{qi}$ and $\bar{n}^q_{qi}$ are closer to
unity than $n_{qi}$ and $\bar{n}_{qi}$, the absolute
values of the quark condensates (as given by Eq. (\ref{q_gap1})) begin
to decrease for $q=0.98$ at a lower temperature than in the
$q=1$ case. The corresponding energy is therefore larger, which
means that $q < 1$ introduces some residual attractive
correlations which raise the energy and lead to hadronization
occurring at a lower temperature. On the other hand, $ q
> 1$ introduces fluctuations which decrease the effective
occupations ($n^q{_{qi}}$ and $\bar{n}^q{_{qi}}$) and the energy,
and smears out the chiral phase transition. In Fig. \ref{Figure2}
we present our phase diagram in the $\mu-T$ plane for different
nonextensivity parameters considered here with positions of the
corresponding critical end points (CEP) for different values of
$q$ clearly indicated. The overlap of curves observed in Fig.
\ref{Figure2} (inset) indicates how the critical end point is
smeared to a kind of critical area. This is because fireballs
created in different events can have different values of $q$
(representing, as mentioned before, action of all factors
responsible for the departure of our system from the usual BG
approach - not specified here in detail but, in general, resulting
in specific correlations of quarks or fluctuations of temperature
mentioned before). Therefore when analyzing experimental data one
most probably will encounter such a critical area instead of a
well defined CEP.
\begin{figure}[h]
\begin{center}
\resizebox{0.45\textwidth}{!}{\includegraphics{njlb11.eps}}
\end{center}
\vspace{-0.5cm} \caption{Phase diagram in the $q$-NJL model in $T
- \mu$ plane for values of $q$ considered before:
$q=0.98,~1.0~,~1.02$. Solid and dashed lines denote, respectively,
first order and crossover phase transitions. The results are
presented for three different values of the nonextensivity
parameter $q$ with the vicinity of the ($q$-dependent) critical
end points (CEP) enlarged in the inset. The crossover phase
transition for $q = 0.98$ and for $\mu \rightarrow 0$ takes place
for a smaller temperature $T$.} \label{Figure2}
\end{figure}
\begin{figure*}[t]
\begin{center}
\resizebox{0.45\textwidth}{!}{\includegraphics{fig2_all.eps}}
\resizebox{0.45\textwidth}{!}{\includegraphics{fig2_098.eps}}
\resizebox{0.45\textwidth}{!}{\includegraphics{fig2_100.eps}}
\resizebox{0.45\textwidth}{!}{\includegraphics{fig2_102.eps}}
\end{center}
\caption{The baryon compression $\rho/\rho_0$ (calculated in the vicinity of the
critical values of temperature and density indicated by the corresponding
dotted lines) as function of the chemical potential $\mu$ for different values of the
nonextensivity parameter, $q =0.98, 1.00, 1.02$. The summary presented in the
top-left panel is detailed in the three consecutive panels.
}
\label{Figure3}
\end{figure*}
\begin{figure}[h]
\begin{center}
\resizebox{0.45\textwidth}{!}{\includegraphics{njlb32.eps}}
\resizebox{0.45\textwidth}{!}{\includegraphics{njlb33.eps}}
\end{center}
\caption{Upper panel: the chemical potential ($\mu_B$) dependence
of the light quarks condensate in the vicinity of the
critical region calculated according to Eq. (\ref{q_gap1})
for different values of the nonextensivity parameter $q$:
$q = 0.99$, $1.0$ and $1.01$.
Bottom panel: the $\mu_B$ dependence of the
chemical potential derivative of the light quark mass
$M_{qu}$ calculated according to Eq. (\ref{q_gap})
in the critical region for the same values of $q$ as above.}
\label{Figure4}
\end{figure}
The role of all these factors is shown in more detail in Fig.
\ref{Figure3} which shows the baryon compression $\rho/\rho_0$
(calculated in the vicinity of the critical values of temperature
and density indicated by the corresponding dotted lines) as the
function of the chemical potential $\mu$ for different values of
the nonextensivity parameter, $q =0.98, 1.00, 1.02$. Notice the
remarkable difference of the density derivative at the critical
point: from the smooth transition through the critical point for
$q<1$ to a big jump in density for critical value of chemical
potential for $q>1$. It reflects the infinite values of the baryon
number susceptibility, $\chi_B$:
\begin{equation}
\chi_B = \sum_{i=u,d,s} \left(
\frac{\partial\rho_i}{\partial\mu_B}\right)_T = - \sum_{i=u,d,s}
\left( \frac{\partial^2\Omega}{\partial^2\mu_B}\right)_T.
\label{eq:sus}
\end{equation}
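Numerically, $\chi_B$ is just the slope of the baryon density with respect to $\mu_B$ at fixed $T$; near a first-order transition this slope diverges, while a crossover gives a finite peak. A sketch with an invented smooth density profile (purely illustrative, not an NJL result):

```python
import math

def susceptibility(rho, mu, h=1e-5):
    """chi_B ~ d rho / d mu_B at fixed T, via a central difference."""
    return (rho(mu + h) - rho(mu - h)) / (2.0 * h)

def rho_toy(mu):
    """Toy density with a smooth crossover at mu = 0.3 (illustrative only)."""
    return 0.5 * (1.0 + math.tanh((mu - 0.3) / 0.05))

chi_peak = susceptibility(rho_toy, 0.3)   # maximal slope, right at the crossover
```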
The transition between confined and deconfined phases and/or
chiral phase transition \cite{HK} can be seen by measuring, event
by event, the difference in the magnitude of local fluctuation of
the net baryon number in a heavy ion collision \cite{Hatta}. They
are initiated and driven mainly by the quark number fluctuation,
described here by $\chi_B$, and can survive through the freeze-out
\cite{Hatta}. Consequently, our $q$-NJL model allows us to fine-tune
the magnitude of baryon number fluctuations
(measured, for example, by the charge fluctuations of protons) and
to find the value of the parameter $q$ characteristic for this
system. However, it does not allow us to differentiate between
possible dynamical mechanisms of baryon fluctuation. We close by
noticing that using $q$ dependent $\chi_B$ leads to $q$-dependent
parameter $\epsilon$ of the critical exponents which describe the
behavior of baryon number susceptibilities near the critical point
\cite{Ikeda}. Whereas in the mean field universality class one has
$\epsilon=\epsilon'=2/3$, our preliminary results using the
$q$-NJL model show a smaller value of this parameter for $q>1$
($\epsilon \sim 0.6$ for $q=1.02$) and a greater one for $q<1$
($\epsilon\sim0.8$ for $q=0.98$). It would be interesting to deduce
the corresponding values of $q$ from different models and compare
them with results on a lattice which, by definition, should
correspond to $q=1$ (it should be mentioned at this point that
there are already attempts to apply Monte Carlo methods,
simulating lattice gauge field dynamics as based on non-extensive
rather than extensive thermodynamics, see \cite{Birolat} and
references therein).
\begin{figure}[h]
\begin{center}
\resizebox{0.45\textwidth}{!}{\includegraphics{njlb31.eps}}
\end{center}
\caption{The $\mu_B$ dependence of the baryon number susceptibility,
$\chi_B$, in the vicinity of the critical region, calculated according
to Eq. (\ref{chi}) for different nonextensivity
parameters, $q = 0.99$, $1.0$ and $1.01$. Notice that it is essentially identical
with results presented in the bottom panel of Fig. \ref{Figure4}.}
\label{Figure5}
\end{figure}
In order to further investigate the $q$ dependence of $\chi_B$ let
us rewrite Eq. (\ref{eq:sus}) in the following form (recall that
$\rho = N_q/V$ and $N_q$ is the $q$-version of Eq. (\ref{number})):
\begin{eqnarray}
\chi_B = \frac{ 1}{\pi^2} \sum_{i=u,d,s} \int p^2 dp \left(
\frac{\partial n_{qi}}{\partial\mu_B} - \frac{\partial\bar
n_{qi}}{\partial\mu_B} \right)_T. \label{chi}
\end{eqnarray}
The $q$-versions of occupation numbers, $n_{qi}$ and
$\bar{n}_{qi}$, are taken from Eq. (\ref{TPM}). The $q$-version of
energies there depend on masses $M_{qi}$, which are given by gap
equation (\ref{q_gap}) in a quite involved way. Therefore, the
$q$-dependence enters here in two ways: by the rather straightforward
replacement of $\exp(...)$ by the respective $\tilde{e}_q(...)$ in Eq.
(\ref{TPM}), and by the quite involved $q$-dependence of $M_{qi}$ given
by the gap equation (\ref{q_gap}). Therefore,
\begin{eqnarray}
\chi_B(\mu_B,T) &=& \frac{1}{\pi^2 T} \cdot \left[
\chi\left(\mu_B\right)+\bar{\chi}\left(\mu_B\right)\right]
\label{chi1}
\end{eqnarray}
with
\begin{eqnarray}
{\chi}\left(\mu_B\right)\!\! &=&\!\! \sum_{i=u,d,s}\!\! \left[\int
\! p^2 dp n_{qi}^2 \left(\frac{1 -
n_{qi}}{n_{qi}}\right)^{f(q)}\left(1- \frac{M_{qi}}{E_{qi}}
\frac{\partial M_{qi}}{\partial\mu_B}\right)\right], \nonumber\\
\bar{\chi}\left(\mu_B\right)\!\! &=&\!\! \sum_{i=u,d,s}\!\!
\left[\int\! p^2 dp \bar{n}_{qi}^2 \left(\frac{1 - \bar{n}_{qi}}
{\bar{n}_{qi}}\right)^{f(q)}
\left(1+\frac{M_{qi}}{E_{qi}}\frac{\partial
M_{qi}}{\partial\mu_B}\right)\right] , \nonumber
\end{eqnarray}
where
\begin{eqnarray}
f(q)&=&(2-q) \qquad {\rm if}\quad (q-1)(E_{qi}-\mu_B) > 0, \nonumber \\
f(q)&=&q \qquad \qquad{\rm otherwise}. \nonumber
\end{eqnarray}
Our results are presented in Figs. \ref{Figure4} and
\ref{Figure5}. It turns out that the chiral phase transition
investigated here (Fig. \ref{Figure5}) is mainly driven by the
behavior of the light quark mass derivative, see Fig.
\ref{Figure4}, which in turn is determined by the behavior of the
light condensate, cf., Fig. \ref{Figure3}. Thus the dynamics of the
nonextensive effects is generated not so much by the nonextensive
form of occupation numbers in Eq. (\ref{TPM}) but rather by the
main gap equation (\ref{gap}) where both the condensates and the
effective quark masses are present.
\section{\label{sec:IV}Summary}
We have investigated the sensitivity of critical behavior of the
QCD-based NJL-type mean field theory presented in \cite{Sousa},
the $q$-NJL model, to the departure from the conditions required
by the application of the BG approach by using the Tsallis version
of nonextensive statistical mechanics \cite{T}. All factors
causing this departure are summarily described by the
nonextensivity parameter $q$, such that $q-1$ quantifies departure
from the BG situation (which is recovered for $q \to 1$).
We have investigated two possible scenarios corresponding to $q
> 1$ and $q < 1$, respectively, which, as mentioned, correspond to
different physical interpretations of the nonextensivity
parameter. For $ q <1$ (usually connected with some specific
correlations \cite{Kodama} or with fractal character of the phase
space \cite{fractal}) we observe a decrease of pressure, which
reaches negative values for a broad ($q$-dependent) range of
temperatures, and an increase of the critical temperature
\footnote{It acts therefore in the same way as including the
Polyakov loop into the NJL model \cite{PNJL}.}. For the $q > 1$ case
(usually connected with some specific nonstatistical fluctuations
existing in the system \cite{WW}) we observe a decrease of the
critical temperature, $T_{crit}$, and therefore in the limit of
large $q$ we do not have a mixed phase but rather a quark gas in
the deconfined phase above the critical line (by contrast, the
compression at the critical temperature does not depend on $q$). As in
\cite{Pereira}, the resulting equation of state is stiffer (in the
sense that for a given density we get larger pressure with
increasing $q$). As expected, the effects depend on the
temperature, and tend to vanish when the temperature approaches
zero. Fig. \ref{Figure3} shows that the nonequilibrium statistics
dilutes the border between the crossover and the first order
transition. Finally, Figs. \ref{Figure4} and \ref{Figure5}
demonstrate that the most important $q$-dependence is coming from
the main gap equation (\ref{gap}), where both the condensates and
the effective quark masses are present, rather than from the
nonextensive form of occupation numbers in Eq. (\ref{TPM}).
We would like to end by stressing that our results could be of
interest for investigations aimed at finding the critical point in
high energy heavy ion collisions \cite{departure} or when studying
particularities of the equation of state (EoS) of compact stars
\cite{NSTARS}. The fact that these results depend on the parameter $q$
means that the exact position of such a point, or the type and shape
of the EoS, could be quite different from what is naively expected.
\section*{Acknowledgements}
Partial support of the Ministry of Science and Higher Education
under contract DPN/N97/CERN/2009 for (GW) and under the Research
Project No. N N202046237 for (JR) is acknowledged.
\section{Introduction} \label{sec-intro}
It is well-known that static black holes cannot support scalar hair with a non-negative
potential in asymptotically flat spacetimes \cite{bekenstein1972} (see Ref. \cite{H-R} for a review).
However, there is an exception, namely the Bocharova-Bronnikov-Melnikov-Bekenstein (BBMB)
solution \cite{B-B-M, bekenstein1974}. This is possible because this solution does not satisfy
the regularity condition for the scalar field at the event horizon.
In this paper, we shall address the uniqueness issue of the BBMB black hole solution.
In particular, we will examine whether a static black hole spacetime with conformal scalar
hair is spherically symmetric, as the Schwarzschild spacetime is \cite{Israel}. As a result,
we prove the uniqueness of the photon surface of the BBMB solution; that is, the
region outside the photon surface is uniquely that of the BBMB solution in the Einstein
gravity with a conformally coupled scalar field. In the BBMB solution, the photon surface
corresponds to the unstable circular orbit of null geodesics (see Ref. \cite{Claudel}
for the definition of the photon surface). In the Einstein frame, the system reduces to
the Einstein-massless scalar field system, for which the uniqueness of the region outside
the photon surface has been proven in Ref. \cite{Y}. However, the proof in Ref.
\cite{Y} cannot be applied to the current situation, because the photon surface
of the BBMB black hole in the Jordan frame is singular in the Einstein frame.
Note that it is in general difficult to prove black hole uniqueness except for
vacuum/electrovacuum systems or systems well motivated by string theory \cite{Heusler}.
In this sense, the current result may encourage us to try to prove
the uniqueness of static black holes with hair, although we know that the BBMB black
hole itself is not stable \cite{H-R}.
The rest of this paper is organized as follows. In Sec. 2, we briefly review the BBMB
black hole. In Sec. 3, we describe the basic equations for static spacetimes in the
Einstein gravity with the conformally coupled scalar field. In Sec. 4, we show that
the certain relation between the scalar field and the time lapse function holds once
the scalar field is turned on. Then, in Sec. 5, we will present the proof of the
uniqueness of the photon surface in the current system. Finally, we will give the summary
and discussion in Sec. 6.
\section{BBMB black hole}
Let us consider the Einstein gravity with a conformally coupled scalar field, described by
the action \cite{B-B-M, bekenstein1974},
\begin{eqnarray}
S=\dfrac{1}{2\kappa} \displaystyle \int d^4 x\sqrt{-g} R-\int d^4 x\sqrt{-g} \Big( \dfrac{1}{2} (\nabla \phi)^2
+\dfrac{1}{12} R\phi ^2 \Big), \label{BBMBaction}
\end{eqnarray}
where $\phi$ is the scalar field and $R$ is the Ricci scalar. The field equations are
\begin{eqnarray}
G_{\mu \nu}=\kappa T_{\mu \nu} \label{BBMB-Eineq}
\end{eqnarray}
and
\begin{eqnarray}
\nabla^2 \phi=\dfrac{1}{6}R\phi, \label{BBMB-scalareq}
\end{eqnarray}
where $G_{\mu \nu}$ is the Einstein tensor and
\begin{eqnarray}
T_{\mu \nu}= \nabla_\mu \phi \nabla_\nu \phi -\dfrac{1}{2} g_{\mu \nu} (\nabla \phi)^2 +\dfrac{1}{6} (g_{\mu \nu} \nabla ^2 -\nabla_\mu \nabla_\nu +G_{\mu \nu}) \phi^2. \label{T}
\end{eqnarray}
The trace of Eq. (\ref{BBMB-Eineq}), combined with Eq. (\ref{BBMB-scalareq}), shows us
\begin{eqnarray}
R=0,\label{rs0}
\end{eqnarray}
and
\begin{eqnarray}
\nabla ^2 \phi =0. \label{BBMB-phi=0}
\end{eqnarray}
For the current purpose, it is better to rearrange the Einstein equation as
\begin{eqnarray}
\Bigl(1-\frac{\kappa}{6}\phi^2\Bigr)R_{\mu\nu}=\kappa S_{\mu\nu},\label{einstein2}
\end{eqnarray}
where
\begin{eqnarray}
S_{\mu\nu}:=\frac{2}{3}\nabla_\mu \phi \nabla_\nu \phi-\frac{1}{6}
g_{\mu\nu}(\nabla \phi)^2-\frac{1}{3}\phi \nabla_\mu \nabla_\nu \phi. \label{smunu}
\end{eqnarray}
From this, we can see that one needs a careful treatment at $\phi=\pm {\sqrt {6/\kappa}}=:\phi_p$
in order to have a regular spacetime.
The metric of the BBMB black hole is given by \cite{B-B-M, bekenstein1974}
\begin{eqnarray}
ds^2=-f(r) dt^2 + f^{-1}(r) dr^2 +r^2 d\Omega_2^2,
\end{eqnarray}
where $f(r)=(1-m/r)^2$, $m$ is the mass of the black hole and $d\Omega_2^2$ is
the metric of the unit 2-sphere. The configuration of the scalar field is
\begin{eqnarray}
\phi =\pm \sqrt{\dfrac{6}{\kappa}} \dfrac{m}{r-m}.
\end{eqnarray}
The metric itself is exactly the same as that of the extreme Reissner-Nordstr\"{o}m black hole
spacetime. The event horizon is located at $r=m$, and the scalar field diverges there.
Here we note that the factor $1-\kappa \phi^2/6$ on the left-hand side of Eq.
(\ref{einstein2}) vanishes at $r=2m$, where the unstable circular orbit of null
geodesics is located.
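These statements are easy to cross-check numerically. The following Python sketch (not part of the original derivation; the values $m=\kappa=1$ are illustrative units) verifies that the factor $1-\kappa\phi^2/6$ vanishes exactly at $r=2m$ and that the BBMB scalar profile satisfies the static massless wave equation for this metric.

```python
import math

m, kappa = 1.0, 1.0   # illustrative units with m = kappa = 1

def f(r):             # BBMB lapse squared; identical to the extreme RN one
    return (1 - m/r)**2

def phi(r):           # BBMB scalar profile (plus-sign branch)
    return math.sqrt(6/kappa) * m/(r - m)

def factor(r):        # the coefficient (1 - kappa phi^2/6) of Eq. (einstein2)
    return 1 - kappa*phi(r)**2/6

def flux(r, h=1e-6):
    """r^2 f(r) dphi/dr, evaluated by a central difference; this is
    r-independent iff the static wave equation
    Box phi = r^-2 d/dr [ r^2 f dphi/dr ] = 0 holds for this metric."""
    dphi = (phi(r + h) - phi(r - h)) / (2*h)
    return r**2 * f(r) * dphi
```

The factor changes sign across $r=2m$, which is why a careful treatment is needed there, while the "flux" $r^2 f\,\phi'$ is constant (analytically equal to $-\sqrt{6/\kappa}\,m$), confirming $\nabla^2\phi=0$.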
\section{Basic equations in static spacetimes}
Let us focus on static spacetimes from now on. Then the metric is written as
\begin{eqnarray}
ds^2=-V^2(x^k)dt^2 +g_{ij}(x^k) dx^i dx^j.
\end{eqnarray}
The Latin indices denote the spatial components. The event horizon is located at
$V=0$, and we assume that the spacetime is regular outside the event horizon, that is,
in the domain of outer communication.
In static spacetimes, the non-trivial parts of the Ricci tensor are
\begin{eqnarray}
R_{00}=VD^2 V \label{ricci00}
\end{eqnarray}
and
\begin{eqnarray}
R_{ij} ={}^{(3)} R_{ij} -V^{-1} D_i D_j V, \label{static-Rij}
\end{eqnarray}
where $D_i$ and ${}^{(3)}R_{ij}$ are the covariant derivative and the Ricci tensor
on the $t=$constant hypersurface $\Sigma$, respectively.
$S_{\mu\nu}$ defined in Eq. (\ref{smunu}) is decomposed into
\begin{eqnarray}
& & S_{00}=\frac{1}{6}[V^2(D\phi)^2+2\phi V D^i V D_i \phi], \\
& & S_{ij}=\frac{2}{3}D_i \phi D_j \phi -\frac{1}{6}g_{ij}(D\phi)^2
-\frac{1}{3}\phi D_i D_j \phi.
\end{eqnarray}
The equation for the scalar field (\ref{BBMB-phi=0}) becomes
\begin{eqnarray}
D_i (VD^i \phi)=0. \label{dvdp}
\end{eqnarray}
Since we focus on asymptotically flat cases, the asymptotic behaviors of the metric
are given by
\begin{eqnarray}
V=1-m/r+O(1/r^2)
\end{eqnarray}
and
\begin{eqnarray}
g_{ij}=(1+2m/r)\delta_{ij}+O(1/r^2).
\end{eqnarray}
For the scalar field, we impose
\begin{eqnarray}
\phi=O(1/r).
\end{eqnarray}
From Eqs. (\ref{rs0}), (\ref{ricci00}) and (\ref{static-Rij}), we can see
\begin{eqnarray}
{}^{(3)}R=2V^{-1} D^2 V. \label{static-R=0}
\end{eqnarray}
\section{Scalar field and time lapse function}
In this section, we will show that the scalar field $\phi$ is uniquely expressed in terms of
the time lapse function $V$ if a non-trivial scalar field exists in static spacetimes.
From the $(0,0)$-component of the Einstein equation and Eq. (\ref{dvdp}), we have
\begin{eqnarray}
D_i [(1-\varphi)D^i \Phi]=0,\label{new}
\end{eqnarray}
where $\Phi:=(1+\varphi)V$ and $\varphi:=\pm {\sqrt {\kappa /6}}\, \phi$. Now, in $\Sigma$,
we focus on
the region $\Omega$ which has two boundaries: the surface $S_p$ specified by $\phi=\phi_p$
and the 2-sphere $S_\infty$ at spatial infinity. In $\Omega$, we assume that there are
no event horizons, that is, $V$ is strictly positive ($V>0$).
Since $\varphi$ (equivalently $\phi$) obeys Eq. (\ref{dvdp}),
$\varphi$ is a monotonic function which attains its maximum value $1$ on
$S_p$ and its minimum value $0$ at spatial infinity,
that is, $0 \leq \varphi \leq 1$ in $\Omega$.
Let us consider the conformally transformed space $\tilde \Omega (\subset \tilde \Sigma)$ with the
metric $\tilde g_{ij}=(1-\varphi)^2 g_{ij}$. Then Eq. (\ref{new}) becomes
\begin{eqnarray}
\tilde D^2 \Phi=0.
\end{eqnarray}
The volume integration of the above over $\tilde \Omega$ and
the Gauss theorem give us
\begin{eqnarray}
0=\int_{\tilde \Omega} \tilde D^2 \Phi d \tilde \Sigma =\int_{\tilde S_\infty} \tilde D_i \Phi d \tilde S^i
-\int_{\tilde S_p} \tilde{D}_i \Phi d \tilde S^i.\label{int1}
\end{eqnarray}
We can show that the second term of the right-hand side vanishes as
\begin{eqnarray}
\int_{\tilde S_p}\tilde D_i \Phi d \tilde S^i=\int_{S_p}(1-\varphi)D_i \Phi dS^i=0. \label{sidpps}
\end{eqnarray}
For the last equality in the above, we used the fact that $\varphi|_{S_p}=1$.
Then, Eq. (\ref{int1}) tells us
\begin{eqnarray}
\int_{\tilde S_\infty}\tilde D_i \Phi d \tilde S^i=0. \label{sidpsi}
\end{eqnarray}
Next we consider the volume integration of $\Phi \tilde D^2 \Phi=0$ over $\tilde \Omega$ and then
\begin{eqnarray}
0& = & \int_{\tilde \Omega} \Phi\tilde D^2 \Phi d \tilde \Sigma \nonumber \\
& = & -\int_{\tilde \Omega} (\tilde D \Phi)^2 d \tilde \Sigma
+\int_{\tilde S_\infty}\Phi\tilde D_i \Phi d \tilde S^i
-\int_{\tilde S_p}\Phi \tilde D_i \Phi d \tilde S^i. \nonumber \\
& & \label{sipdp}
\end{eqnarray}
In the second equality, we used the Gauss theorem. As in Eq. (\ref{sidpps}), it is easy to show that
the last term vanishes. Using Eq. (\ref{sidpsi}) and the fact that $\Phi_\infty=1$,
we can see that
\begin{eqnarray}
\int_{\tilde S_\infty}\Phi\tilde D_i \Phi d \tilde S^i=\int_{\tilde S_\infty}\tilde D_i \Phi d \tilde S^i=0.
\end{eqnarray}
Thus, Eq. (\ref{sipdp}) implies
\begin{eqnarray}
\int_{\tilde \Omega} (\tilde D \Phi)^2 d \tilde \Sigma=\int_{\Omega} (D \Phi)^2(1-\varphi) d \Sigma= 0.
\end{eqnarray}
Therefore, $D_i \Phi=0$ has to be satisfied everywhere in $\Omega$. Together with the boundary condition at spatial infinity ($\Phi_\infty=1$),
this means that $\Phi=1$ holds everywhere. Thus we have
the following relation between the time lapse function $V$ and
the scalar field $\phi$;
\begin{eqnarray}
\phi=\pm \sqrt{\dfrac{6}{\kappa}} (V^{-1}-1). \label{pvrel}
\end{eqnarray}
Of course, the BBMB solution satisfies this relation. Note that $\phi=\phi_p$
corresponds to $V=1/2=:V_p$, and the $V=V_p$ surface in the BBMB solution is
composed of the closed circular orbits of photons (null geodesics).
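As a quick numerical cross-check of Eq. (\ref{pvrel}) against the BBMB solution, the following Python sketch (illustrative values $m=1$, $\kappa=2$) verifies the relation pointwise and confirms that $\phi=\phi_p$ is reached precisely where $V=V_p=1/2$, i.e. at $r=2m$.

```python
import math

m, kappa = 1.0, 2.0   # illustrative parameter choices

def V(r):             # lapse of the BBMB solution, V = 1 - m/r
    return 1 - m/r

def phi(r):           # BBMB scalar (plus-sign branch)
    return math.sqrt(6/kappa) * m/(r - m)

# The relation of Eq. (pvrel): phi = sqrt(6/kappa) (1/V - 1),
# checked at a few radii outside the horizon
checks = [abs(phi(r) - math.sqrt(6/kappa)*(1/V(r) - 1))
          for r in (1.5, 2.0, 5.0, 40.0)]
```

Here $V^{-1}-1 = r/(r-m) - 1 = m/(r-m)$, which is exactly the radial profile of the BBMB scalar, so the relation holds identically.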
Now, both Eq. (\ref{dvdp}) and the $(0,0)$-component of the Einstein equation
give us the same equation
\begin{eqnarray}
D^2 v = 0, \label{laplace}
\end{eqnarray}
where $v:=\ln V$. Then, we can use $v$ as a ``radial" coordinate later.
The $(i,j)$-component of the Einstein equation becomes
\begin{eqnarray}
\frac{2V-1}{V^2}\Bigl({}^{(3)}R_{ij}-\frac{1}{V}D_iD_jV \Bigr)
=4\frac{D_iVD_jV}{V^4}-g_{ij}\frac{(DV)^2}{V^4}-2\Bigl(\frac{1}{V}-1 \Bigr)D_i D_j V^{-1}. \label{eij}
\end{eqnarray}
The trace part of the above and Eq. (\ref{laplace}) (or Eqs. (\ref{static-R=0})
and (\ref{laplace})) imply
\begin{eqnarray}
{}^{(3)}R=\frac{2}{V^2}(DV)^2 \geq 0. \label{3riccidv2}
\end{eqnarray}
\section{Uniqueness of the BBMB photon surface}
In this section, we will employ Israel's method to
address the uniqueness of the BBMB black hole.
First, we consider the foliation by $v=$constant 2-surfaces $\lbrace S_v \rbrace$
on $t=$constant hypersurfaces $\Sigma$. We write the induced metric on 2-surfaces as $h_{ij}:
=g_{ij}-n_i n_j$, where $n_i:=\rho D_i v$ is the unit normal vector on the
surface and $\rho :=(D_i v D^i v)^{-1/2}=V(D_iVD^iV)^{-1/2}$. Then we can derive
the following equations;
\begin{eqnarray}
D_i \Bigl( \frac{n^i}{\rho} \Bigr) =0, \label{div1}
\end{eqnarray}
\begin{eqnarray}
D_i \Bigl(\frac{(\rho k-2)n^i}{(2V-1)\rho^{3/2}} \Bigr)
=-\frac{1}{2V-1}(\rho^{-1/2}\tilde k_{ij}\tilde k^{ij}+\rho^{-3/2}{\cal D}^2\rho )
\label{div2}
\end{eqnarray}
and
\begin{eqnarray}
D_i \Bigl( (k\xi+\eta)n^i \Bigr)=-(\tilde k_{ij}\tilde k^{ij}+\rho^{-1}{\cal D}^2 \rho )\xi,\label{div3}
\end{eqnarray}
where $\xi:=(2V-1)\rho^{-1/2}$, $\eta:=2(2V+1)\rho^{-3/2}$, $k_{ij}$ is the extrinsic curvature of $S_v$ in $\Sigma$, $k$ is its trace part and $\tilde k_{ij}:=k_{ij}-(1/2)kh_{ij}$ is its traceless part.
${\cal D}_i$ is the covariant derivative with respect to $h_{ij}$. In the above, we used
the following equations;
\begin{eqnarray}
n^i D_i V=\frac{V}{\rho},
\end{eqnarray}
\begin{eqnarray}
n^i D_i \rho=\rho k
\end{eqnarray}
and
\begin{eqnarray}
n^i D_i k & = & -k_{ij}k^{ij}-\rho^{-1}{\cal D}^2 \rho-{}^{(3)}R_{ij}n^i n^j \nonumber \\
& = & -k_{ij}k^{ij}-\rho^{-1}{\cal D}^2 \rho+\frac{\rho k-4V}{(2V-1)\rho^2}.
\end{eqnarray}
For the derivation of the second equation, we used Eq. (\ref{laplace}).
For the third one, we used Eq. (\ref{eij}) and the formula
\begin{eqnarray}
D_i D_j V = \frac{V}{\rho}\Bigl[ k_{ij}-\frac{1}{\rho}(n_i {\cal D}_j \rho+n_j {\cal D}_i \rho) -n_i n_j \Bigl(k-\frac{1}{\rho} \Bigr) \Bigr]. \label{didjv}
\end{eqnarray}
This comes from the definition of the extrinsic curvature and Eq. (\ref{laplace}).
Now we compute the curvature invariant $R_{\mu\nu}R^{\mu\nu}$ to
check the regularity of spacetime.
It is written as
\begin{eqnarray}
R_{\mu\nu}R^{\mu\nu} & = & \frac{1}{\rho^4}+\frac{1}{(2V-1)^2\rho^2}
\Bigl[ \Bigl( 2(1-V)k_{ij}-\frac{1}{\rho}h_{ij} \Bigr)^2 \nonumber \\
& & ~~+\Bigl(-2(1-V)k+\frac{1+2V}{\rho} \Bigr)^2 +\frac{8(1-V)^2}{\rho^2}({\cal D}\rho)^2
\Bigr].
\end{eqnarray}
In the above, we used the Einstein equation and Eq. (\ref{didjv}).
As commented below Eq. (\ref{smunu}),
one needs a careful treatment at the surface of $V=1/2 ~(\phi=\phi_p)$.
For the regularity of spacetimes at the surface of $V=1/2$, we require the conditions
\begin{eqnarray}
{\cal D}_i \rho|_{S_p}=0 \label{rc1}
\end{eqnarray}
and
\begin{eqnarray}
k_{ij}|_{S_p}=\frac{1}{\rho_p}h_{ij}|_{S_p}. \label{rc2}
\end{eqnarray}
Note that the index ``$p$" indicates the evaluation at $S_p$, e.g. $\rho_p=\rho|_{S_p}$.
These features tell us that $S_p$ is totally umbilic and is the photon surface defined in
Ref. \cite{Claudel} \footnote{Although the theory we consider is different from that in
Ref. \cite{Claudel}, we can show that $S_p$ is indeed the photon surface. In addition,
we can easily see that a null geodesic initially tangent to $S_p \times {\bf R}_t$
remains tangent to $S_p \times {\bf R}_t$.} or the photon sphere \cite{Cederbaum:2014gva}.
From Eqs. (\ref{3riccidv2}) and (\ref{didjv}), the Kretschmann invariant can be written as
\begin{eqnarray}
& & R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} \nonumber \\
& & ~~= \frac{4}{V^2}D_i D_j V D^i D^j V+4{}^{(3)}R_{ij} {}^{(3)} R^{ij}-({}^{(3)}R)^2 \nonumber \\
& & ~~=\frac{4}{\rho^2}\Bigl[k_{ij} k^{ij}+\frac{2}{\rho^2}({\cal D}\rho)^2+\Bigl(k-\frac{1}{\rho} \Bigr)^2\Bigr]
\nonumber \\
& &~~~~+\frac{4}{(2V-1)^2\rho^2}\Bigl[ \Bigl(k_{ij}-\frac{1}{\rho}h_{ij} \Bigr)^2
+\frac{2}{\rho^2}({\cal D}\rho)^2+\Bigl(k-\frac{4V}{\rho} \Bigr)^2
\Bigr]-\frac{4}{\rho^4},
\end{eqnarray}
where we also used
\begin{eqnarray}
R_{0i0j}=VD_iD_j V,
\end{eqnarray}
and
\begin{eqnarray}
{}^{(3)}R_{ij} = \frac{1}{(2V-1)\rho}\Bigl[ k_{ij}-\frac{1}{\rho}h_{ij}
-\frac{1}{\rho}(n_i {\cal D}_j \rho+n_j {\cal D}_i \rho) -n_in_j \Bigl(k-\frac{4V}{\rho} \Bigr) \Bigr].\label{3rij}
\end{eqnarray}
Thus, we can see that the Kretschmann invariant is finite
in $\Omega$ when Eqs. (\ref{rc1}) and (\ref{rc2}) hold on $S_p$.
From now on, we focus on the region $\Omega$ in $\Sigma$ which has the two
boundaries, that is, $S_p$ and the spatial infinity $S_\infty$.
The volume integrations of Eqs. (\ref{div1}), (\ref{div2}) and (\ref{div3}) over $\Omega$ give us
\begin{eqnarray}
A_p=4 \pi m \rho_p, \label{equal1}
\end{eqnarray}
\begin{eqnarray}
8\pi m^{1/2}-\frac{1}{2}\rho_p^{1/2} \int_{S_p}{}^{(2)}RdS =-\int_{1/2}^1dV\frac{1}{V(2V-1)}
\int_{S_v}dS \Bigl( \rho^{1/2}\tilde k_{ij}\tilde k^{ij}
+\frac{1}{2\rho^{3/2}}({\cal D}\rho)^2 \Bigr) \label{equal2}
\end{eqnarray}
and
\begin{eqnarray}
8\pi m^{1/2}-\frac{4A_p}{\rho_p^{3/2}} =-\int_{1/2}^1dV \frac{(2V-1)}{V}
\int_{S_v}dS \Bigl( \rho^{1/2}\tilde k_{ij}\tilde k^{ij}
+\frac{1}{2\rho^{3/2}}({\cal D}\rho)^2 \Bigr) , \label{equal3}
\end{eqnarray}
respectively. $A_p$ and ${}^{(2)}R$ are the area of $S_p$ and the Ricci scalar of $S_v$,
respectively.
Equation (\ref{equal1}) tells us that $m$ is positive. On the left-hand side of Eq. (\ref{equal2}),
we used the relation
\begin{eqnarray}
{}^{(2)}R=\frac{2}{\rho^2}+k^2-k_{ij}k^{ij}+\frac{2(\rho k-4V)}{(2V-1)\rho^2}. \label{2ricci}
\end{eqnarray}
In particular, on $S_p$, we have
\begin{eqnarray}
{}^{(2)}R_p=\lim_{V \to 1/2}\frac{2(\rho k-2)}{(2V-1)\rho^2}. \label{2riccip}
\end{eqnarray}
Then Eq. (\ref{equal2}) shows us
\begin{eqnarray}
16\pi m^{1/2} \leq \rho_p^{1/2}\int_{S_p}{}^{(2)}RdS. \label{miegb}
\end{eqnarray}
Because of $m>0$, inequality (\ref{miegb}) and the Gauss-Bonnet theorem tell
us that the topology of $S_p$ is restricted to $S^2$ (that is, $\int_{S_p}{}^{(2)}RdS=8\pi$), and then we have
the following inequality with the help of Eq. (\ref{equal1});
\begin{eqnarray}
A_p \geq 16 \pi m^2. \label{Sp-geq16}
\end{eqnarray}
Using Eq. (\ref{equal1}), Eq. (\ref{equal3}) gives us the inequality
\begin{eqnarray}
A_p \leq 16\pi m^2. \label{Sp-leq16}
\end{eqnarray}
This may be regarded as a mimic of the Penrose inequality \cite{Penrose:1973um}
for the photon surface.
Thus, we can conclude that the equality holds in inequalities (\ref{Sp-geq16}) and (\ref{Sp-leq16}),
and then
\begin{eqnarray}
\tilde k_{ij}=0, \ {\cal D}_i \rho=0. \label{sp0}
\end{eqnarray}
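The saturated inequalities can be checked directly against the BBMB solution. In the Python sketch below (illustrative value $m=1$; not part of the proof), $\rho = V\,(D_iV D^iV)^{-1/2} = r^2/m$ for the BBMB metric, so $A_p = 16\pi m^2$ together with Eq. (\ref{equal1}), $A_p = 4\pi m \rho_p$, indeed gives $\rho_p = 4m$ at the photon-surface radius $r=2m$.

```python
import math

m = 1.0                    # illustrative BBMB mass parameter

def V(r):                  # BBMB lapse
    return 1 - m/r

def rho(r):
    """rho = V (D_i V D^i V)^{-1/2} for the BBMB metric:
    |DV| = sqrt(g^rr) dV/dr = V m/r^2 (since g^rr = f = V^2),
    hence rho = r^2/m."""
    return V(r) / (V(r) * m/r**2)

r_p = 2*m                  # photon-surface radius
A_p = 4*math.pi*r_p**2     # its area, 16 pi m^2
```

The equality $A_p = 16\pi m^2$ is precisely the saturated form of the Penrose-like bounds (\ref{Sp-geq16}) and (\ref{Sp-leq16}).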
Finally, we use the Codazzi equation,
\begin{eqnarray}
{\cal D}_i k^i_j-{\cal D}_j k={}^{(3)}R_{ik}n^i h^k_j, \label{cdz}
\end{eqnarray}
to show that $k$ is constant on each $S_v$. From Eqs. (\ref{3rij}) and (\ref{sp0}), we see that
the right-hand side vanishes. Then, using Eq. (\ref{sp0}) again, we have
\begin{eqnarray}
{\cal D}_ik=0.
\end{eqnarray}
Because of Eq. (\ref{2ricci}), we can see that ${}^{(2)}R$ is also constant on each $S_v$.
In addition, inequality (\ref{miegb}) tells us that
${}^{(2)}R$ is positive. Thus, we can conclude that each $S_v$ is a maximally symmetric space with
positive curvature, and hence the region $\Omega$ of the spacetime is spherically symmetric.
Therefore, the spacetime in $\Omega$ must be the BBMB solution, because regular
spherically symmetric solutions have been shown to be unique in Ref. \cite{X-Z}.
Note that what we have proven is only the uniqueness of the region outside the photon surface;
we did not discuss the region inside the photon surface.
Therefore, this proof does not establish the uniqueness of the whole BBMB black hole spacetime.
\section{Summary and discussion}
In this paper, we proved that, in the Einstein gravity with a conformally coupled scalar field,
if a single closed surface $S_p$ satisfying $1-\kappa \phi^2/6 =0$
exists, and if there is no horizon in the region $\Omega$ whose only boundaries are $S_p$ and spatial
infinity, then the geometry of $\Omega$ is the same as that outside the BBMB photon surface.
Since we employed Israel's method, which works only for the proof
of the uniqueness of a single object, we cannot exclude the existence of systems with multiple photon surfaces.
Moreover, we did
not prove the uniqueness of the whole BBMB black hole spacetime; we have no definite answer for
the region inside the photon surface. Incidentally, the uniqueness of photon surfaces
has recently been discussed for vacuum, electrovacuum and related systems
\cite{Cederbaum:2014gva,Cederbaum:2015aha,Cederbaum:2015fra,Y,Yoshino}. Therein, the existence
of a photon surface or photon sphere was assumed by hand. By contrast,
in our study of the Einstein gravity with a conformally coupled scalar field, the existence of
the photon surface is automatically required to make the spacetime regular.
Many problems remain. To examine the possibility of configurations with multiple photon
surfaces, one must employ the alternative proof developed by Bunting and Masood-ul-Alam \cite{Bunting}.
One may also be interested in the uniqueness issue for the region inside the photon surface, that is,
the region between the photon surface and the event horizon. Extensions of
our proof to other non-vacuum cases should also be addressed. Finally, one may hope to establish
a Penrose-type inequality for photon surfaces in {\it dynamical} systems in the Einstein gravity
with a conformally coupled scalar field, since a mimic of the Penrose inequality appeared in our static analysis.
\begin{acknowledgments}
This work was initiated through collaboration with Mr. K. Ueda.
T. S. is supported by Grants-in-Aid for Scientific Research from the Ministry of Education,
Science, Sports and Culture of Japan (Nos. 25610055 and 16K05344).
\end{acknowledgments}
\section{Introduction}
The concept of form factors plays an extremely important role in the
studies of the internal structure of composite particles.
The non-trivial dependence of form factors on the momentum transfer $Q^2$
(i.e., its deviation from the constant behavior) is usually
a signal of the non-elementary nature of the investigated particle.
In particular, the pioneering study of the nucleon form factors by Hofstadter
and collaborators
\cite{Mcallister:1956ng}
demonstrated that the nucleons have a finite size
of the order of a fermi. Later, it was observed
that the behavior of the proton electromagnetic form factors,
in a rather wide range of momentum transfers,
is well described by the so-called dipole formula
$G_p(Q^2)/G_p(0) \approx G_D (Q^2) \equiv 1/(1+Q^2/0.71{\rm GeV}^2)^2$,
suggesting a simple $G_p(Q^2) \sim 1/(Q^2)^2$ power law for
their large-$Q^2$ asymptotic behavior.
At the same time, strong evidence was accumulated that
the pion electromagnetic form factor is well described by the
$\rho$-pole fit $F_{\pi} (Q^2) \approx 1/(1+Q^2/m_{\rho}^2)$ indicating that,
in the pion case, the asymptotic behavior looks more like $1/Q^2$.
From the quark model point of view, the faster decrease
of the proton form factor seems rather natural, since
the proton contains more valence constituents
than the pion. Furthermore, it was established that,
if one can treat the hadrons at high momentum transfer
as collinear beams of $N$ valence quarks located at small transverse
separations and exchanging intermediate
gluing particles with which they interact via
a dimensionless coupling constant,
then the spin-averaged form factor behaves asymptotically as $1/(Q^2)^{N-1}$
\cite{Brodsky:1973kr}.
This hard-exchange picture and the resulting
dimensional power counting rules \cite{Brodsky:1973kr,Matveev:1973ra}
can be formally extended onto other hard exclusive processes.
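The two empirical fits and the counting rule can be compared directly. The Python sketch below (the $\rho$-mass value $0.775$ GeV is an assumed input, close to the physical mass; the $0.71$ GeV$^2$ dipole scale is the one quoted above) confirms that $Q^4 G_D(Q^2)$ and $Q^2 F_\pi(Q^2)$ approach constants at large $Q^2$, as the $1/(Q^2)^{N-1}$ rule with $N=3$ and $N=2$ suggests.

```python
def G_dipole(Q2):
    """Dipole fit for the proton form factor (normalized to G_p(0) = 1);
    the 0.71 GeV^2 scale is the one quoted in the text."""
    return 1.0/(1.0 + Q2/0.71)**2

def F_rho(Q2, m_rho2=0.775**2):
    """rho-pole (monopole) fit for the pion form factor; the rho mass
    0.775 GeV is an assumed illustrative input."""
    return 1.0/(1.0 + Q2/m_rho2)
```

At large $Q^2$, $Q^4 G_D \to (0.71\,\mathrm{GeV}^2)^2$ and $Q^2 F_\pi \to m_\rho^2$, while at low $Q^2$ the dipole falls visibly faster than the monopole.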
After the advent of quantum chromodynamics,
this hard-gluon-exchange picture was
formalized with the help of the
QCD factorization approach to exclusive processes
\cite{Chernyak:1977as,Radyushkin:1977gp,Lepage:1979zb}
that presents one of the highlights of perturbative QCD (pQCD).
Within this approach, the hard gluon exchange contribution proves to
be dominant for sufficiently large momentum transfers $Q^2$.
An important ingredient of the asymptotic pQCD formalism for
hard exclusive processes is the concept of hadron distribution amplitudes (DAs).
They are fundamental nonperturbative
functions describing the momentum distribution within rare parton configurations
when the hadron is represented by
a fixed number of Fock constituents.
It was shown that in the $Q^2 \to \infty$ limit,
form factors can be written in a factorized form, as a convolution
of distribution amplitudes related to hadrons in the initial and final state
times a ``short-distance'' coefficient function that
is calculable in QCD perturbation theory.
The leading contribution corresponds to DAs with minimal possible number
of constituents, e.g., 3 for the proton and 2 for the pion.
The essential requirement for the
applicability of the pQCD approach is a high virtuality
of the exchanged gluons and also of the quarks inside the
short distance subprocess.
Since the quarks carry only some fractions $x_iP$, $y_j P'$ of the initial $P$
and final $P'$ momenta, the virtualities of the internal lines
of the subprocess are generically given by $x_i y_j Q^2$, i.e., they
may be essentially smaller than $Q^2$, the nominal
momentum transfer to the hadron. Assuming that $\langle x \rangle \sim 1/N$,
one should expect the reduction factor of $0.1$ for the proton and
$0.2$ for the pion. In the pion case, this expectation
was confirmed by an explicit calculation of the one-loop pQCD
radiative corrections
\cite{Dittes:1981aw}. Absorbing the terms proportional
to the $\beta$-function coefficient $\beta_0$ into the effective coupling constant
$\alpha_s(\mu^2)$
of the hard gluon exchange, one indeed obtains
$\langle x \rangle \langle y \rangle Q^2$ as its argument.
As a result, at accessible $Q^2$, the bulk part of the hard pQCD contribution
comes from the regions where the ``hard'' virtualities are
much smaller than the typical hadronic scale of 1\,GeV$^2$
\cite{Efremov:1980mb,Isgur:1988iw,Radyushkin:1990te}.
According to the pQCD factorization recipe,
contributions from such regions should not be included into the hard term,
which is strongly reduced after such contributions are subtracted. In practice,
the subtraction is never made, and pQCD estimates are
based on the original expressions (which implies, in particular,
that the perturbative $\sim 1/k^2$ behavior of propagators
is trusted even if $k^2 \to 0$).
Despite this, in most cases pQCD results for hadronic form factors
need special efforts to bring their magnitude close to experimental data.
For example, assuming the ``asymptotic'' form $\varphi_{\pi} (x) =6 f_{\pi} x_1 x_2$
for the pion DA gives $Q^2 F^{\rm as}_{\pi}(Q^2) =
8\pi f_{\pi}^2 \alpha_s \approx \alpha_s \times 0.44\,$GeV$^2$
for the pion form factor \cite{Radyushkin:1977gp,Lepage:1979zb}
that agrees with existing data only if one takes an uncomfortably
large value $\alpha_s \approx 1$ for the ``hard'' gluon vertex.
Switching
to a wider Chernyak-Zhitnitsky (CZ) shape
$\varphi_{\pi}^{CZ} (x) =30 f_{\pi} x_1 x_2 (x_1-x_2)^2$
\cite{Chernyak:1981zz,Chernyak:1983ej}
gives $Q^2 F^{\rm CZ}_{\pi}(Q^2) = \frac{200}{9} \pi f_{\pi}^2 \alpha_s$,
which formally agrees with the data for $\alpha_s \approx 0.4$.
However, at accessible $Q^2$ more than 90\% of this contribution comes
from the region of gluon virtualities below (500\,MeV)$^2$ \cite{Isgur:1988iw}.
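The normalizations quoted above are easy to reproduce numerically. In the Python sketch below, the value $f_\pi = 131$ MeV is an assumed input.

```python
import math

f_pi = 0.131   # pion decay constant in GeV (assumed input value)

# Leading-order pQCD normalizations of Q^2 F_pi(Q^2) quoted in the text:
as_norm = 8*math.pi*f_pi**2          # asymptotic DA: ~0.43 GeV^2 x alpha_s
cz_norm = (200/9)*math.pi*f_pi**2    # CZ DA: enhanced by the factor 25/9
```

With $\alpha_s \approx 0.4$, the CZ normalization gives $Q^2 F_\pi \approx 0.48$ GeV$^2$, which is why the CZ shape can formally accommodate the data while the asymptotic DA requires $\alpha_s \approx 1$.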
In the nucleon case, the situation is even worse.
For the asymptotic $\sim x_1 x_2 x_3$ form of the leading twist three-quark
distribution amplitude, the proton form factor $G^p_M$ turns out to be zero
to leading order \cite{Lepage:1979za,Avdeenko}, while the neutron form factor $G_M^n$
is of opposite sign compared to the data.
Assuming the equal sharing DA $\sim \delta (x_1-1/3) \delta (x_2-1/3) \delta (x_3-1/3)$
gives wrong signs both for the proton and neutron form factors
\cite{Aznaurian:1979zz}.
Furthermore, if one takes the QCD sum rule estimate
for the $\langle qqq\,|\, N \rangle $ matrix element, the absolute magnitude
of the form factors in the above examples is too small (by a factor of hundred)
\cite{Belyaev:1982sa} compared to the data.
Just like in the pion case, the magnitude of the formal pQCD result can be
increased, and also the signs of the predicted proton and neutron magnetic
form factors reversed to coincide with the experimental ones,
by using CZ-type DAs \cite{Chernyak:1984bm,King:1986wi,Gari:1986dr,Chernyak:1987nu}
having peaks in a region where the
momentum fraction of one of the quarks is close to 1.
Since the average fractions of the nucleon momentum carried by the two other
quarks are small, this formal result is strongly dominated for
accessible $Q^2$ by regions
of unacceptably small virtualities.
It was argued \cite{Li:1992nu} that higher order Sudakov-type corrections
squeeze the size of the valence quark configuration
participating in the pQCD subprocess. Indeed, in the pion case, there are negative
terms in the one-loop correction to the short-distance
amplitude that
can be written as Sudakov double logarithms in the impact parameter
$b$-space \cite{Musatov:1997pu}.
After resummation to all orders, they produce a factor
like $\exp[-\alpha_s \ln^2 (Q^2b^2)/3 \pi]$ suppressing the contribution
of large transverse separations. These effects
increase the region of the $x,y$ fractions
where the leading-order pQCD expressions are formally applicable,
though they are not strong enough to visibly suppress nonperturbative regions
for accessible momentum transfers,
see e.g. \cite{Bolz:1994hb} for discussion of the nucleon form factor case.
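The quoted Sudakov factor can be evaluated directly. The Python sketch below (with an illustrative fixed $\alpha_s = 0.3$ and the schematic normalization given in the text, ignoring the running of the coupling) shows that the suppression grows only logarithmically in $Q^2 b^2$ and is rather mild at accessible scales.

```python
import math

def sudakov(Q2b2, alpha_s=0.3):
    """Schematic Sudakov factor exp[-alpha_s ln^2(Q^2 b^2)/(3 pi)]
    quoted in the text; illustrative normalization, fixed coupling."""
    return math.exp(-alpha_s * math.log(Q2b2)**2 / (3*math.pi))
```

Even for $Q^2 b^2 = 100$ the factor is only about $0.5$, illustrating why the Sudakov suppression of large transverse separations is not strong enough to eliminate nonperturbative regions at accessible momentum transfers.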
\begin{figure}[ht]
\centerline{\epsfxsize15cm\epsffile{factor_1.ps}}
\caption{\label{figwan}\small
Structure of QCD factorization for baryon form factors.
}
\end{figure}
As already emphasized, according to the standard philosophy of separating large-
and small-virtuality contributions underlying the pQCD factorization formulas,
the low-virtuality contributions of
gluon-exchange diagrams should be treated as a part of the
soft contribution. More precisely, in the case of the nucleon form factors,
the hard pQCD contribution is only the third term of the factorization
expansion. Schematically, one can envisage the expansion of,
say, the Dirac electromagnetic nucleon form factor
$F_1(Q^2)$ of the form (see Fig.~\ref{figwan})
\beq{schema}
F_1(Q^2) \sim A(Q^2)
+ \left ( \frac{\alpha_s(Q^2)}{\pi}\right ) \frac{B(Q^2)}{Q^2}
+ \left ( \frac{\alpha_s(Q^2)}{\pi}\right ) ^2 \frac{C}{Q^4} +
\ldots
\end{equation}
where $C$ is a constant determined by the nucleon DAs, while $A(Q^2)$ and $B(Q^2)$
are form-factor-type functions generated by soft contributions.
Because of their nonperturbative nature, it is impossible
to tell precisely what is their large-$Q^2$ behavior. On general grounds, one may expect
that $A(Q^2)$ and $B(Q^2)/Q^2$ correspond to higher powers
of $1/Q^2$ than the perturbative $1/Q^4$ term.
Perturbative estimates suggest that
$ A(Q^2), B(Q^2)/Q^2 \lesssim 1/Q^6$.
At very large $Q^2$, one may also expect that they are further suppressed by
Sudakov form factor.
The most important feature of the factorization expansion is a numerical suppression
of each hard gluon exchange by the $\alpha_s/\pi$ factor, which is a standard perturbation theory
penalty for each extra loop.
If
$\alpha_s \sim 0.3$,
the pQCD
contribution to baryon form factors is suppressed by a factor of
100 compared to the purely soft term.
Thus, one may expect that
the onset of the perturbative regime is postponed to very large momentum transfers since
the factorizable pQCD contribution ${ O}(1/Q^4)$ has to win over nonperturbative effects
that are suppressed by extra powers of $1/Q^2$, but do not involve small coefficients.
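The numerical weight of the loop-factor argument is easy to make explicit. In the Python sketch below, $\alpha_s = 0.3$ is the representative value used in the text, and two hard gluon exchanges are counted for the three-quark (baryon) hard subprocess.

```python
import math

alpha_s = 0.3   # representative coupling at moderate scales

# Each hard gluon exchange costs a loop factor alpha_s/pi;
# the baryon hard-scattering term of Eq. (schema) carries two of them.
loop = alpha_s / math.pi
suppression = loop**2   # the (alpha_s/pi)^2 prefactor of the 1/Q^4 term
```

With $\alpha_s/\pi \approx 0.1$ per exchange, the two-gluon hard term is suppressed by roughly a factor of a hundred, which is the origin of the expectation that the onset of the perturbative regime is postponed to very large $Q^2$.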
In the light cone formalism, the functions like $A(Q^2)$ and $B(Q^2)$ in the above expansion are determined by
overlap integrals of the soft parts of hadronic wave functions corresponding to large
transverse separations. There is a growing consensus that such ``soft''
contributions play the dominant role at present energies.
Indeed, it has been known for a long time that the use of
QCD-motivated models for the wave functions allows one to
obtain, without much effort, soft contributions comparable in size
to experimentally observed values (see, e.g.~\cite{Isgur:1984jm,Isgur:1988iw,Kroll:1995pv}).
A new trend \cite{Radyushkin:1998rt,Diehl:1998kh} is to use the concept of
generalized parton distributions (GPDs, see
\cite{Goeke:2001tz,Diehl:2003ny,Belitsky:2005qn} for recent extensive reviews on GPDs)
to describe/parametrize soft contributions.
The use of GPDs makes it easy to
describe existing data by soft contributions alone
(the latest attempts can be found in Refs.~\cite{Belitsky:2003nz,Diehl:2004cx,Guidal:2004nd}).
A subtle point for these semi-phenomenological
approaches is to avoid double counting of hard rescattering
contributions ``hidden'' in the model-dependent hadron wave functions
or GPD parametrizations.
The dominant role of the soft contribution for the pion form factor at
moderate momentum transfers, up to $Q^2\sim 2-3$~GeV$^2$, is supported by its
calculation \cite{Ioffe:1982qb,Nesterenko:1982gc}
within the QCD sum rule approach \cite{Shifman:1978bx}
applied to the vacuum average $\langle 0 |T \{ \eta_2 (0) j(z) \eta_1^*(y) \} |0 \rangle$
of three currents, with $j$ representing the electromagnetic probe and the two others $ \eta_1^* , \eta_2$
having quantum numbers of the initial and final hadrons, respectively.
The application of the method at higher $Q^2$ faces the
problem that the inclusion of nonperturbative effects due to vacuum
condensates
through the expansion over inverse powers of the Borel parameter $M^2$
interferes with the large-$Q^2$ expansion of the form factors,
producing a series of $(Q^2/M^2)^n$ type even for a decreasing function
of $Q^2$, like $\exp[-Q^2/M^2]$.
For the nucleon form factors, the usual QCD sum rule approach works only in the region
of small momentum transfers
$Q^2 < 1$~GeV$^2$ \cite{Belyaev:1992xf,Castillo:2003pt}.
To extend the results to higher $Q^2$, it was proposed \cite{Bakulev:1991ps} to sum back
the $(Q^2/M^2)^n$ terms originating from the Taylor expansion
of the same nonlocal condensate, using to this end simple models for it.
Another approach
\cite{Nesterenko:1982gc,Nesterenko:1983ef}
is to use the so-called local quark-hadron duality approximation,
in which the values of the duality intervals are postulated to be
$Q^2$-independent and are taken from the two-point QCD sum rules.
The parameter-free results for the pion
and nucleon form factors obtained in this way are in
a rather good agreement with existing data.
A less assumption-dependent approach that allows one to calculate hadronic form factors
for moderately large $Q^2$ is based
on light-cone sum rules (LCSR) \cite{Balitsky:1989ry,Chernyak:1990ag}.
The basic object of the LCSR approach is the
matrix element $\langle 0| T \{ \eta_2(0) j(z) \} | h_1 \rangle $ in which the
currents $\eta_2$ and $ j$ are the same as in the usual QCD sum rules,
while the initial hadron is explicitly represented by its state vector
$| h_1 \rangle $, see a schematic representation in Fig.~\ref{figsum}.
\begin{figure}[ht]
\centerline{\epsfxsize5cm\epsffile{factor_3.ps}}
\caption{\label{figsum}\small
Schematic structure of the light-cone sum rule for baryon form factors.
}
\end{figure}
When both the momentum transfer $Q^2$ and
the Borel parameter $M^2$ of the $\eta_2$ channel are large,
the asymptotics is governed by the operator product expansion
$T \{ \eta_2(0) j(z) \} \sim \sum C_i(z) {\cal O}_i(0)$ on the
light-cone $z^2=0$. The $z^2$-singularity of a particular perturbatively calculable
short-distance factor $C_i(z)$ is determined by the twist of the relevant
composite operator ${\cal O}_i$, whose matrix element $\langle 0| {\cal O}_i(0)| h_1 \rangle $
accumulates nonperturbative
information about the initial state parametrized in terms of a
distribution amplitude.
The lowest order $O(\alpha_s^0)$ terms of the OPE correspond to
purely soft contributions ordered by twists of the relevant operators. The magnitude
and details of their $Q^2$-dependence are governed by the
form of the corresponding DA. As shown by the LCSR
calculation of the pion form factor \cite{Braun:1994ij},
taking the CZ model for the pion DA, which enhances the
hard contribution, one obtains a soft contribution
that is too large. Hence one should
take DAs that are sufficiently narrow\footnote{Recent studies of the pion DA
\cite{Petrov:1998kg,Schmedding:1999ap,Anikin:2000rq,Bakulev:2001pa,Praszalowicz:2001wy,Bijnens:2002mg,Bakulev:2002uc,Bakulev:2004cu,Ball:2004ye,Agaev:2005gu,Agaev:2005rc}
show a converging consensus that
the integral $\int_0^1 dx \, x^{-1} \varphi_{\pi}(x)$ determining the size
of the leading-order hard contribution has the value close
to that corresponding to the asymptotic wave function,
see Ref. \cite{Bakulev:2005cp} for an updated compilation.}
in order to describe
the experimental magnitude of the pion form factor
at accessible $Q^2$.
The LCSR expansion also contains terms
generating the asymptotic pQCD contributions. They appear
at the proper order in $\alpha_s$, i.e., in the $O(\alpha_s)$ term for the
pion form factor, at the $O(\alpha_s^2)$ order for the nucleon form factors, etc.
In the pion case, it was explicitly demonstrated
\cite{Braun:1999uj,Bijnens:2002mg} that the contribution of hard
rescattering is correctly reproduced in the LCSR
approach as a part of the $O(\alpha_s)$ correction.
It should be noted that the diagrams of LCSR that
contain the ``hard'' pQCD contributions also possess ``soft'' parts,
i.e., one should perform separation of ``hard'' and ``soft''
terms inside each diagram. As a result,
the distinction between ``hard'' and ``soft'' contributions appears to
be scale- and scheme-dependent \cite{Braun:1999uj}.
In recent years there have been numerous applications of LCSRs
to mesons, see \cite{Braun:1997kw,Colangelo:2000dp} for reviews.
Nucleon electromagnetic form factors
were first considered in \cite{Braun:2001tj,Lenz:2003tq}
and the weak decay $\Lambda_b\to p\ell\nu_\ell$ in
\cite{Huang:2004vf}.
In this paper, we use light-cone sum rules to develop an
approach to the calculation of transition form factors
for electroproduction of the $\Delta$-resonance.
All three possibilities for the virtual photon polarization
are allowed in this case, hence
the $\gamma^* N \to \Delta$ transition is described by three
independent form
factors\footnote{The supply of possible choices and definitions of the three form factors
offered in the literature seems inexhaustible.}.
Hence, the challenge is not only to fit
the absolute magnitude of one of them, but also to explain
the relations between different form factors.
In particular, the pQCD prediction
\cite{Carlson:1985mm,Carlson:1988gt} for the
helicity amplitudes is $A_{1/2}\sim 1/Q^3$
and $A_{3/2}\sim 1/Q^5$. For the multipole amplitudes
$M1= -\frac12( A_{1/2} + \sqrt{3} A_{3/2})$ and
$E2= -\frac12(A_{1/2} - A_{3/2}/\sqrt{3})$,
perturbative QCD hence predicts
the same strength of the $E2$ and $M1$
transitions
at asymptotically large $Q^2$.
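This asymptotic degeneracy is easy to illustrate numerically. The following sketch (not taken from the paper; the normalization constants are arbitrary) evaluates the multipole combinations above for toy power-law helicity amplitudes $A_{1/2}\propto 1/Q^3$, $A_{3/2}\propto 1/Q^5$ and shows the ratio $E2/M1$ approaching unity:

```python
import math

def e2_m1_ratio(a12, a32):
    """Multipole ratio E2/M1 built from the helicity amplitudes as in the text."""
    m1 = -0.5 * (a12 + math.sqrt(3.0) * a32)
    e2 = -0.5 * (a12 - a32 / math.sqrt(3.0))
    return e2 / m1

def pqcd_amplitudes(Q2, c12=1.0, c32=1.0):
    """Toy power-law amplitudes; c12, c32 are arbitrary normalization constants."""
    Q = math.sqrt(Q2)
    return c12 / Q**3, c32 / Q**5

# E2/M1 for increasing Q^2 (GeV^2): the ratio drifts toward +1
ratios = [e2_m1_ratio(*pqcd_amplitudes(Q2)) for Q2 in (4.0, 100.0, 10000.0)]
```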
Experimentally, their ratio is negative and extremely close to zero (within a few per cent)
in the whole investigated region, i.e., up to $Q^2 \sim $4\,GeV$^2$
\cite{Beck:1997ew,Frolov:1998pw,Kamalov:1999hs,Kamalov:2000en,Sato:1996gk}.
To explain this phenomenon
within the pQCD framework,
it was suggested \cite{Carlson:1998bg} that the observed small
value of the $E2/M1$ ratio is due to cancellation of the
$A_{1/2}$ and $A_{3/2}$ helicity amplitudes.
It should be emphasized that there is no intrinsic reason inside perturbative QCD
for such a cancellation to happen.
Moreover, given the very strong sensitivity of the pQCD outcome to the
form of the nucleon and $\Delta$-isobar distribution amplitudes
(e.g., by varying
the CZ type DAs without changing their moments within the
limits specified by QCD sum rule estimates,
one can change the result for $A_{1/2}$
by an order of magnitude and even reverse its
sign\footnote{In this connection, we would also like to remark that
the model $A_{1/2}(Q^2) = A_{1/2}(0)/(1+Q^2/\Lambda_1^2)^2$
proposed and used in Ref.~\cite{Carlson:1998bg}, with experimental
{\it negative} value of $A_{1/2}(0)$, is not able
to reproduce the {\it positive} pQCD asymptotic results for
$A_{1/2}(Q^2)$ quoted in Eqs. (5)
and (6) of that paper.}, see Refs.~\cite{Carlson:1988gt,Carlson:1986zs,Stefanis:1992nw})
such a fine-tuned cancellation looks like a miracle.
Furthermore, since $A_{1/2}$ and $A_{3/2}$ are expected to behave
differently with $Q^2$, just a nearly constant value
of $E2/M1$ ratio in a sufficiently wide region of $Q^2$ (say, about $1$\,GeV$^2$ wide)
practically rules out the relevance of pQCD within such a region.
On the other hand, the smallness of $E2/M1$ ratio is a famous prediction of
the quark model.
Forty years ago it was shown \cite{Becchi:1965} that
$E2$ is zero in the nonrelativistic SU(6) quark model,
provided that quarks have zero orbital angular momentum.
Small deviation of $E2$ from zero was later explained either
by $D$-wave admixtures \cite{Gershtein:1981zf,Isgur:1981yz}
or in terms of two-body exchange currents \cite{Buchmann:1996bd}.
In the large $N_c$ limit of QCD, it is possible to show \cite{Jenkins:2002rj}
that the $E2/M1$ ratio has a smallness of order $1/N_c^2$ without making
any assumptions about the angular momentum of the quarks.
Small values for $E2/M1$ in the region $Q^2 < 4$\,GeV$^2$ were
obtained in the relativistic quark model \cite{Aznaurian:1993rk}.
The $\gamma^*N \to \Delta$ transition form factors were also
calculated \cite{Belyaev:1995ya}
within the local quark-hadron duality approach motivated
by QCD sum rules. It gives small values, within $\pm 20\%$,
both for the ratios $E2/M1$ and $C2/M1$ ($C2$ being the
electric quadrupole or Coulombic transition form factor).
Recent lattice calculations
\cite{Alexandrou:2003ea,Alexandrou:2004xn,Alexandrou:2005em}
of the $N \Delta$ transition form factors up to 1.5\,GeV$^2$
give small negative values for both of these ratios.
All these results provide strong evidence that the observed
small value of $E2/M1$ has a purely nonperturbative origin.
Another interesting feature of the $\gamma^*N \to \Delta$ reaction
is that the leading $M1$ magnetic transition
form factor $G_{M} (Q^2)$ decreases with $Q^2$ faster
than the dipole fit (see, e.g., \cite{Stoler:1993yk}).
This was considered a fact favoring pQCD since
it usually gives for $G_{M} (Q^2)$ a result that
is much smaller (by an order of magnitude)
than that for the proton elastic form factor
$G_M^p (Q^2)$.
In the large $N_c$ limit, assuming chiral and isospin
symmetry, it was established
\cite{Frankfurt:1999xe,Goeke:2001tz}
that the transition form factor $G_M (Q^2)$
is expressed through
the isovector component of the
GPD $E$ related to the elastic spin-flip nucleon
form factor $F_2(Q^2)$, which is also known to
drop faster than the dipole fit.
This observation was incorporated in Ref.~\cite{Stoler:2002im}
to describe both $F_2(Q^2)$ and $G_{M} (Q^2)$
using a
model for the GPDs $E^{u,d} (x, Q^2)$
with a Gaussian $\sim \exp[-(1-x)Q^2/4x \lambda^2]$
plus a small power-law tail
ansatz for the $Q^2$-dependence.
A Regge type ansatz
$E^{u,d} (x, Q^2) =e^{u,d}(x) \, x^{\alpha'(0)(1-x)Q^2} $
was used in Ref.~\cite{Guidal:2004nd}.
In both models, to get an accurate fit of the data, one needs
to introduce a rescaling factor $\sim 1.5$
in the relation between elastic and
transition GPDs.
The goal of the present study is to set up the LCSR-based framework
for the calculation
of the $\gamma^*N \to \Delta$ transition form factors.
The light-cone sum rule formalism turns out to be considerably more
cumbersome in this case because of the spin 3/2 of the $\Delta$
resonance. The local interpolating current with
the $\Delta$ quantum numbers also has a nonzero
projection onto spin-1/2 states with
opposite parity, and the necessity to get rid of these contaminating
contributions produces further complications.
Still, as we show, it is possible to derive a general Lorentz
decomposition and the twist expansion of sum rules
for all three form factors in question.
We explicitly calculate
the sum rules to leading order in $\alpha_s$
and compare their consequences with
existing experimental data.
Apart from resolving several technical issues,
our main finding in this work
is that the soft contribution to the $\gamma^*N \to \Delta$
form factors in the intermediate $Q^2$ range
is strongly affected by the valence quark configurations
involving ``minus'' components of the quark fields that do not
have a simple interpretation in terms of the leading-twist amplitude but
rather correspond to contributions of the orbital angular
momentum \cite{Ji:2002xn,Ji:2003yj}.
The same conclusion was reached in \cite{Braun:2001tj}
for the nucleon electromagnetic form factor. In a more general context,
large contributions of the orbital angular momentum can explain why
helicity selection rules in perturbative QCD appear to be badly
broken in hard exclusive processes at present energies.
By construction, the LCSRs use nucleon
distribution amplitudes of the leading and higher twist \cite{Braun:2000kw}
as the main input, and the results are very sensitive to their shape.
Using the asymptotic distribution amplitudes, we obtain a reasonable agreement
with the form factor data in the range $2<Q^2<6$~GeV$^2$.
We believe that the accuracy can be improved significantly by the
calculation of $O(\alpha_s)$ corrections to the sum rules and especially
if lattice data on the moments of higher-twist distribution amplitudes
become available.
A long-term goal of our study is to determine
the leading-twist nucleon distribution amplitudes
from a combined fit to the experimental
data on all existing form factors involving the nucleon.
From this perspective,
the present paper should be viewed as a step in this direction.
The presentation is organized as follows.
In Section 2 we introduce the necessary notation and set up the
general framework.
Section 3 contains the derivation of sum rules including higher
twist corrections, which is our main result.
The numerical analysis of the LCSRs is carried out in Section~4,
together with a summary and discussion.
The paper has three Appendices devoted to technical aspects of the
calculation: In Appendix A we present a complete Lorentz-invariant
decomposition of the correlation function, Appendix B contains
the summary of asymptotic expressions for the nucleon distribution amplitudes,
and in Appendix C the Belyaev-Ioffe
sum rule for the $\Delta$ coupling constant is given and reanalyzed.
\setcounter{equation}{0} \section{General Framework}
\subsection{Definition of the form factors}
The $\gamma^* N \to \Delta$ transition
is described by the matrix element of the electromagnetic current
\beq{EM}
j_{\nu}=e_{u}\overline{u}\gamma_{\nu}u+e_{d}\overline{d}\gamma_{\nu}d
\end{equation}
between the nucleon state with momentum $P$ and the $\Delta$-isobar
state with momentum $P'=P-q$. It can be written as
\beq{Ndelta}
\langle \Delta(P') \mid j_{\nu}(0)\mid N(P)\rangle =
\bar{\Delta}_{\beta}(P')\Gamma_{\beta\nu}\gamma_5 N(P)\, ,
\end{equation}
where $N(P)$ and $\Delta_\beta(P')$ are the nucleon spinor and the
Rarita-Schwinger spinor for the $\Delta$-isobar,
respectively\footnote{We hope that denoting particle states by
the same letters as the corresponding spinors will not
create confusion.}.
The decomposition of the vertex function
\bea{G123}
\Gamma_{\beta\nu} &=& G_{1}(Q^{2})\big[-q_{\beta}\gamma_{\nu}+q\hspace{-0.21cm}\slash g_{\beta\nu}\big]
+G_{2}(Q^{2})\big[-q_{\beta}(P-q/2)_{\nu}+q\cdot(P-q/2)\,g_{\beta\nu}\big]
\nonumber\\&&{}
+G_{3}(Q^{2})\big[q_{\beta}q_{\nu}-q^{2}g_{\beta\nu}\big]
\end{eqnarray}
defines three scalar form factors $G_i (Q^2)$. As usual, $Q^2 =-q^2$.
Following \cite{Jones:1972ky}, one can also define the magnetic dipole $G_{M}$,
electric quadrupole $G_{E}$, and Coulomb quadrupole $G_{C}$ form
factors instead of $G_{1}$, $G_{2}$, $G_{3}$:
\bea{MEQ}
G_{M}(Q^{2})&=&
\frac{m_{P}}{3(m_{P}+m_{\Delta})}\Bigg[((3m_{\Delta}+m_{P})(m_{\Delta}+m_{P})+Q^{2})\frac{G_{1}(Q^{2})}{m_{\Delta}}
\nonumber\\
&&\hspace{2cm}+(m_{\Delta}^{2}-m_{P}^{2})G_{2}(Q^{2})-2Q^{2}G_{3}(Q^{2})\Bigg],
\nonumber\\
G_{E}(Q^{2})&=&
\frac{m_{P}}{3(m_{P}+m_{\Delta})}\Bigg[(m_{\Delta}^{2}-m_{P}^{2}-Q^{2})\frac{G_{1}(Q^{2})}{m_{\Delta}}
\nonumber\\
&&\hspace{2cm}+(m_{\Delta}^{2}-m_{P}^{2})G_{2}(Q^{2})-2Q^{2}G_{3}(Q^{2})\Bigg],
\nonumber\\
G_{C}(Q^{2})&=&
\frac{2m_{P}}{3(m_{\Delta}+m_{P})}\Bigg[2m_{\Delta}G_{1}(Q^{2})
+\frac{1}{2}(3m_{\Delta}^{2}+m_{P}^{2}+Q^{2})G_{2}(Q^{2})
\nonumber\\
&&\hspace{2cm}+(m_{\Delta}^{2}-m_{P}^{2}-Q^{2})G_{3}(Q^{2})\Bigg] \, .
\end{eqnarray}
For a comparison with the literature we also write down
the form factors $G_M^{\rm Ash}$ \cite{Ash}, $G_T$ \cite{Stoler:1993yk} and the ratios
$R_{EM}$ \cite{Jones:1972ky} and $R_{SM}$ (see e.g. \cite{Buchmann:2004ia,Caia:2004pm,Pascalutsa:2005ts})
that are used in experimental papers
\bea{GT}
G_M (Q^{2})&=& G_M^{\rm Ash}(Q^{2})\sqrt{1+\frac{Q^2}{(m_\Delta+m_P)^2}}\,,
\nonumber\\
\left|G_{M}(Q^{2})\right|^{2}+3\left|G_{E}(Q^{2})\right|^{2}&=&
\frac{Q^{2}}{Q^{2}+\nu^{2}}\Bigg(1+\frac{Q^{2}}{(m_{\Delta}+m_{P})^{2}}\Bigg)\left|G_{T}(Q^{2})\right|^{2},
\quad \nu = \frac{m_{\Delta}^{2}-m_{P}^{2}+Q^{2}}{2m_{P}},
\nonumber \\
R_{EM}(Q^2) & = & \frac{E2(Q^{2})}{M1(Q^{2})} = \frac{E_{1+}}{M_{1+}} = - \frac{G_E(Q^2)}{G_M(Q^2)}
\\
R_{SM}(Q^2) & = & \frac{C2(Q^{2})}{M1(Q^{2})} = \frac{S_{1+}}{M_{1+}}
= -\sqrt{Q^2 +\frac{(m_{\Delta}^2- m_P^2-Q^2)^2}{4 m_{\Delta}^2}}
\frac{1}{2 m_{\Delta}} \frac{G_C(Q^2)}{G_M(Q^2)}
\, .
\nonumber
\end{eqnarray}
Note that there is a disagreement in the literature on the overall sign in the relation between $R_{SM}$ and the ratio
of the quadrupole and the magnetic form factors $G_C(Q^2)/G_M(Q^2)$. We follow the definition from \cite{Pascalutsa:2005ts}.
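For orientation, the ratio definitions above can be transcribed directly into code. This is a sketch with standard mass values assumed (in GeV), not part of the sum rule machinery itself:

```python
import math

M_P, M_DELTA = 0.938, 1.232  # assumed nucleon and Delta masses in GeV

def r_em(GE, GM):
    """R_EM = E2/M1 = -G_E/G_M."""
    return -GE / GM

def r_sm(GC, GM, Q2):
    """R_SM = C2/M1 in the sign convention of Pascalutsa et al. adopted here."""
    kin = math.sqrt(Q2 + (M_DELTA**2 - M_P**2 - Q2)**2 / (4.0 * M_DELTA**2))
    return -kin / (2.0 * M_DELTA) * GC / GM
```

With equal-sign $G_C$ and $G_M$ both ratios come out negative, matching the experimental situation described above.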
\subsection{Choice of the correlation function}
To extract information about hadronic form factors within the light-cone sum rule
approach we should analyze a matrix element in which one of the hadrons is represented
by an interpolating field with proper quantum numbers, and another is described
by its explicit state vector, cf. Fig.~\ref{figsum}.
In building the sum rule, one needs the distribution amplitudes of the latter hadron.
The nucleon DAs for all three-quark operators were
introduced and studied in Ref.~\cite{Braun:2000kw}, while
in the case of the $\Delta$-isobar such an analysis is yet to be performed.
Thus, in the present paper, we consider the
correlation function given by the matrix element
\beq{T}
T_{\mu\nu}(P,q)=i\int d^{4}z\, e^{iqz} \langle0
| T\left\{\eta_{\mu}(0)j_{\nu}(z)\right\} | N(P)\rangle
\end{equation}
between the vacuum and a single-nucleon state $| N(P)\rangle$.
The interpolating field for
the $\Delta^{+}$-particle is taken in the form suggested in \cite{Ioffe:1981kw}
\beq{eta}
\eta_{\mu}(0)=
\epsilon^{abc}\left[2(u^{a}(0)C\gamma_{\mu}d^{b}(0))u^{c}(0)+
(u^{a}(0)C\gamma_{\mu}u^{b}(0))d^{c}(0)\right]\, ,
\end{equation}
where $a,b,c$ are color indices and $C$ is the charge conjugation matrix.
The contribution of $\Delta^+$ to the correlation function in \Gl{T} is given by
\beq{Tdelta}
T_{\mu\nu}(P,q)=
\frac{1}{m_{\Delta}^{2}-(P')^{2}}\sum_{s}\langle 0 | \eta_{\mu}(0)| \Delta(P',s)\rangle\langle
\Delta(P',s) | j_{\nu}(0)| N(P)\rangle.
\end{equation}
Parametrizing the matrix element
\beq{lambdaD}
\langle0|\eta_{\mu}(0)| \Delta(P',s)\rangle = \frac{\lambda_{\Delta}}{(2\pi)^{2}} \Delta^{(s)}_{\mu}(P')
\end{equation}
in terms of $\lambda_{\Delta}$,
the coupling constant of the $\Delta^{+}$-particle to the current $\eta_\mu$,
and using the standard spin summation formula for Rarita-Schwinger spinors
\beq{spinsum}
\sum_{s}\Delta_{\mu}^{(s)}(P')\overline{\Delta}_{\beta}^{(s)}(P')
= -(\!\not\!{P}'+m_{\Delta})\left\{g_{\mu\beta}-\frac13\gamma_{\mu}\gamma_{\beta}
-\frac{2P'_{\mu}P'_{\beta}}{3m_{\Delta}^{2}}+\frac{P'_{\mu}\gamma_{\beta}
-P'_{\beta}\gamma_{\mu}}{3m_{\Delta}}\right\} \,
\end{equation}
we write this contribution as
\bea{T32}
\lefteqn{T_{\mu\nu}^{(\Delta)}(P,q)=}
\nonumber\\&=&{}
-\frac{\lambda_{\Delta}}{(2\pi)^{2}}\frac{\!\not\!{P}'+m_{\Delta}}{m_{\Delta}^{2}-(P')^{2}}
\left[g_{\mu\beta}-\frac{1}{3}\gamma_{\mu}\gamma_{\beta}
-\frac{2P'_{\mu}P'_{\beta}}{3m_{\Delta}^{2}}+
\frac{P'_{\mu}\gamma_{\beta}-P'_{\beta}\gamma_{\mu}}{3m_{\Delta}}\right] \Gamma_{\beta\nu}\gamma_5 N(P) \, .
\end{eqnarray}
This expression provides us with one of the starting points of our analysis.
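As a numerical cross-check of the spin summation formula (an illustration, not part of the derivation), one can verify with explicit Dirac matrices that the Rarita-Schwinger spin sum is annihilated by $\gamma^\mu$ and $P'^\mu$ from the left, which is what removes spurious spin-1/2 components:

```python
import numpy as np

# Dirac matrices in the Dirac representation, metric (+,-,-,-)
I2 = np.eye(2)
pauli = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
g = [np.block([[I2, 0*I2], [0*I2, -I2]])] + \
    [np.block([[0*I2, s], [-s, 0*I2]]) for s in pauli]
metric = np.diag([1.0, -1.0, -1.0, -1.0])

def slash(v):
    return sum(metric[m, m] * v[m] * g[m] for m in range(4))

m_D = 1.232                                             # assumed Delta mass (GeV)
P = np.array([np.sqrt(m_D**2 + 1.0), 0.3, -0.4, np.sqrt(0.75)])  # on-shell, P^2 = m_D^2

def Sigma(mu, beta):
    """Spin sum Sigma_{mu beta} (lower indices) as a 4x4 Dirac matrix."""
    Plo = metric @ P
    glo = [metric[m, m] * g[m] for m in range(4)]
    bracket = (metric[mu, beta]*np.eye(4)
               - glo[mu] @ glo[beta] / 3.0
               - 2.0*Plo[mu]*Plo[beta] / (3.0*m_D**2) * np.eye(4)
               + (Plo[mu]*glo[beta] - Plo[beta]*glo[mu]) / (3.0*m_D))
    return -(slash(P) + m_D*np.eye(4)) @ bracket

# contractions gamma^mu Sigma_{mu beta} and P^mu Sigma_{mu beta} must vanish
gamma_contr = [sum(g[m] @ Sigma(m, b) for m in range(4)) for b in range(4)]
P_contr = [sum(P[m] * Sigma(m, b) for m in range(4)) for b in range(4)]
```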
To extract the $\Delta$-related information from
the total correlator $T_{\mu\nu}(P,q)$, we should take into account
the subtlety that the current $\eta_\mu$ couples
not only to isospin $I=\frac32$, spin-parity $\frac32^+$ states,
but also to isospin $I=\frac32$, spin-parity $\frac12^-$ states.
It makes sense to eliminate their contribution
by a clever choice of the Lorentz structures.
For a generic $\frac12^-$ resonance, call it $\Delta^*$,
we define
\beq{12}
\langle0|\eta_{\mu}(0)|\Delta^*(P',s)\rangle=\frac{\lambda_{*}}{(2\pi)^2}
(m_{*}\gamma_{\mu}-4P'_{\mu})\Delta^{*}(P',s)
\end{equation}
with $\lambda_{*}$ being the corresponding coupling and $m_{*}$ the mass. The $\Delta^*$ spinor satisfies
the usual Dirac equation $(\not\!\!P'-m_{*})\Delta^{*}(P')=0$, and the matrix element of the electromagnetic
current between the nucleon and a $\Delta^*$ state takes the form
\beq{EM12}
\langle \Delta^*(P')| j_{\nu}(0) | N(P)\rangle = \overline{\Delta^*}(P')
\left[
(\gamma_{\nu}q^{2}-\not\!q q_{\nu})F^{N\Delta^*}_1(q^2) -i\sigma_{\nu\alpha}q^{\alpha}F^{N\Delta^*}_{2}(q^2)
\right]\gamma_5 N(P)\,.
\end{equation}
The corresponding contribution to the correlation function in \Gl{T} reads
\bea{T12}
\lefteqn{T_{\mu\nu}^{({\Delta^*})}(P,q)=}
\nonumber\\&=&
\frac{\lambda_{*}}{(2\pi)^{2}}\big[m_{*}\gamma_{\mu}-4P'_{\mu}\big]
\frac{\not \! P^{\prime}+m_{*}}{m_{*}^{2}-(P')^{2}}
\left[(\gamma_{\nu}q^{2}-\not\!q q_{\nu})F^{N\Delta^*}_{1}-
i\sigma_{\nu\alpha}q^{\alpha}F^{N\Delta^*}_{2}\right]\gamma_{5}N(P) \, ,
\end{eqnarray}
and the question is whether (and at what cost) the contributions in \Gl{T32} and \Gl{T12} can be separated.
To achieve this, we have to understand the general decomposition of the correlation function in \Gl{T}
into contributions of different Lorentz structures.
To this end, it is convenient to go over to the twist decomposition in the infinite momentum frame.
\subsection{Light-Cone Expansion}
\subsubsection{Kinematics}
Having in mind the practical construction of light-cone sum rules that involve nucleon distribution amplitudes,
we define a light-like vector $n_\mu$ by the condition
\beq{z}
q\cdot n =0\,,\qquad n^2 =0
\end{equation}
and introduce the second light-like vector
\bea{vectors}
p_\mu &=& P_\mu - \frac{1}{2} \, n_\mu \frac{m_P^2}{P\cdot n}\,,~~~~~ p^2=0\,,
\end{eqnarray}
so that $P \to p$ if the nucleon mass can be neglected, $m_P \to 0$.
The photon momentum then can be written as
\begin{eqnarray}
q_{\mu}=q_{\bot \mu}+ n_{\mu}\frac{P\cdot q}{P\cdot n}\, .
\end{eqnarray}
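These kinematical definitions are easy to check numerically. The following sketch (the mass value and momenta are arbitrary choices) verifies $p^2=0$, $q\cdot n=0$ and $P^2=m_P^2$ for a concrete on-shell configuration:

```python
import numpy as np

def dot(a, b):
    """Minkowski product with metric (+,-,-,-)."""
    return a[0]*b[0] - a[1:] @ b[1:]

m_N = 0.938                                            # assumed nucleon mass (GeV)
P = np.array([np.sqrt(m_N**2 + 25.0), 0.0, 0.0, 5.0])  # on-shell nucleon, P^2 = m_N^2
n = np.array([1.0, 0.0, 0.0, -1.0])                    # light-like, n^2 = 0
q_perp = np.array([0.0, 1.3, 0.0, 0.0])                # transverse: q_perp.n = 0
q = q_perp + n * 0.7                                   # q.n = 0 follows from n^2 = 0

# second light-like vector: p = P - n * m_N^2 / (2 P.n)
p = P - 0.5 * n * m_N**2 / dot(P, n)
```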
Assume for a moment that the nucleon moves in the positive
${\bf e_z}$ direction; then $p^+$ and $n^-$ are the only nonvanishing
components of $p$ and $n$, respectively.
The infinite momentum frame can be visualized
as the limit $p^+ \sim Q \to \infty$ with fixed $P\cdot n = p \cdot n\sim 1$
where $Q$ is the large scale in the process.
Expanding the matrix element in powers of $1/p^+$ introduces
the power counting in $Q$. In this language, twist counts the
suppression in powers of $p^+$. Similarly,
the nucleon spinor $N_\gamma(P,\lambda)$ has to be decomposed
in ``large'' and ``small'' components as
\beq{spinor}
N_\gamma(P,\lambda) = \frac{1}{2 p\cdot n} \left(\!\not\!{p}\! \!\not\!{n} +
\!\not\!{n}\!\!\not\!{p} \right) N_\gamma(P,\lambda)= N^+_\gamma(P,\lambda) + N^-_\gamma(P,\lambda)
\; ,
\end{equation}
where we have introduced two projection operators
\beq{project}
\Lambda^+ = \frac{\!\not\!{p}\! \!\not\!{n}}{2 p\cdot n} \quad ,\quad
\Lambda^- = \frac{\!\not\!{n}\! \!\not\!{p}}{2 p\cdot n}
\end{equation}
that project onto the ``plus'' and ``minus'' components of the spinor.
Note the useful relations
\beq{bwgl}
\lash{p} N(P) = m_P N^+(P)\,,\qquad \lash{n} N(P) = \frac{2 p \cdot n}{m_P} N^-(P)
\end{equation}
that follow readily from the Dirac equation $\not\!\!{P} N(P) = m_P N(P)$.
Using the explicit expressions for $N(P)$ it is easy to see
that $\Lambda^+N = N^+ \sim \sqrt{p^+}$ while $\Lambda^-N = N^- \sim 1/\sqrt{p^+}$.
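The algebra of the two projectors follows from $\{\not\!p,\not\!n\}=2\,p\cdot n$ with $p^2=n^2=0$; here is a quick numerical check with explicit Dirac matrices (an illustration, not used in the derivation):

```python
import numpy as np

# Dirac matrices in the Dirac representation, metric (+,-,-,-)
I2 = np.eye(2)
pauli = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
g = [np.block([[I2, 0*I2], [0*I2, -I2]])] + \
    [np.block([[0*I2, s], [-s, 0*I2]]) for s in pauli]

def slash(v):
    return v[0]*g[0] - v[1]*g[1] - v[2]*g[2] - v[3]*g[3]

p = np.array([1.0, 0.0, 0.0, 1.0])    # light-like "plus" direction
n = np.array([1.0, 0.0, 0.0, -1.0])   # light-like "minus" direction
pn = p[0]*n[0] - p[1:] @ n[1:]        # p.n = 2

Lam_plus = slash(p) @ slash(n) / (2.0 * pn)
Lam_minus = slash(n) @ slash(p) / (2.0 * pn)
```

One finds $\Lambda^+ + \Lambda^- = 1$, $(\Lambda^\pm)^2 = \Lambda^\pm$ and $\Lambda^+\Lambda^- = 0$, i.e., a complete set of orthogonal projectors.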
The correlation function $T_{\mu\nu}$ in \Gl{T} can be expanded in contributions with increasing twist.
The leading twist contribution corresponds to the projection
\beq{lt}
\Lambda_+ T_{++}
\end{equation}
and contains the maximum power of the large momentum $p^+$. There exist three projections
of the next-to-leading twist that are suppressed by one power of
the large momentum compared to the leading twist, namely
\bea{t+1}
&& \Lambda_- T_{++}\,,\quad \Lambda_+ T_{\perp +}\,,\quad \Lambda_+ T_{+\perp} \, ,
\end{eqnarray}
and more contributions with loosing two powers of the momentum, etc.
Each light-cone projection in a general situation gives rise to several invariant
Lorentz structures, in particular
\bea{decomp}
\Lambda_+\,n^{\mu}n^{\nu}T_{\mu\nu}&=& (p \cdot n)^2
\Big\{ {\cal A}(Q^2,Pq) + \not\!q_{\bot}\,{\cal B}(Q^2,Pq) \Big\}\gamma_{5}N^{+}(P)\,,
\nonumber\\
\Lambda_-\,n^{\mu}n^{\nu}T_{\mu\nu}&=& (p \cdot n)^2
\Big\{ {\cal C}(Q^2,Pq) + \not\!q_{\bot}\,{\cal D}(Q^2,Pq) \Big\}\gamma_{5}N^{-}(P)\,,
\nonumber\\
\Lambda_+\,n^{\mu}e_{\bot}^{\nu}T_{\mu\nu}&=&
(p \cdot n)\Big\{
\not\!e_{\bot}\,{\cal E}(Q^2,Pq) + \not\!e_{\bot}\not\!q_{\bot}\,{\cal F}(Q^2,Pq)\Big\}\gamma_{5}N^{+}(P)\,,
\nonumber\\
\Lambda_+\,e_{\bot}^{\mu}n^{\nu}T_{\mu\nu}&=&
(p \cdot n)\Big\{
\not\! e_{\bot}\,{\cal G}(Q^2,Pq) + \not\!e_{\bot}\not\!q_{\bot}\,{\cal H}(Q^2,Pq)\Big\}\gamma_{5}N^{+}(P)\,,
\nonumber\\
\Lambda_+\,q_{\bot}^{\mu}n^{\nu}T_{\mu\nu}&=&
(p \cdot n)\Big\{{\cal I}(Q^2,Pq) + \not\!q_{\bot}\,{\cal J}(Q^2,Pq) \Big\}\gamma_{5}N^{+}(P)\,,
\end{eqnarray}
where $e_{\bot}$ is a two-component vector in the transverse plane that is
orthogonal to $q_{\bot}$:
\beq{ee}
e_{\bot}\cdot q_{\bot} =0\,,\quad (e_{\bot})^2 = 1\,.
\end{equation}
The invariant amplitudes ${\cal A}$\,--\,${\cal J}$ are not all independent. First of all, note
that the $n^\mu q_{\bot}^{\nu}T_{\mu\nu}$-projection does not lead to new independent amplitudes
because of the transversality condition $q^\nu T_{\mu\nu}=0$. Second, the
Rarita-Schwinger constraint $\gamma^\mu T_{\mu\nu}=0$ results in two relations
\bea{RScond}
{\cal G} + {\cal J} - m_P\, {\cal C} &=&0\,,
\nonumber\\
{\cal H}- \frac{1}{Q^2}{\cal I} + m_P\, {\cal D} &=&0\,.
\end{eqnarray}
Finally, more relations follow from the Lorentz symmetry. In order to find them, we write
a Lorentz-invariant decomposition of the correlation function $T_{\mu\nu}$ which involves
10 independent amplitudes, see Appendix~A. Taking the necessary light-cone projections,
we obtain two new relations that can be chosen as
\bea{LorSym}
2\, {\cal F} -m_P\, {\cal D} + m_P\, {\cal B} + 2\, {\cal H} &=& 0\,,
\nonumber\\
m_P\,{\cal A} -2(Pq){\cal B}- m_P\,{\cal C} -2\, {\cal E}-2\, {\cal G} &=& 0\,.
\end{eqnarray}
The remaining 6 independent invariant functions produce an overcomplete
set of sum rules of the leading and next-to-leading twist
for the three form factors of the $\gamma^* N \to \Delta$ transition.
\subsubsection{The $J^P=\frac32^+$ contributions}
Now, the contribution of the $\Delta^+$ to the
invariant functions defined above can readily be calculated
using Eq.~(\ref{T32}).
The result has the following form:
\bea{lhs}
{\cal A}^{\Delta}&=&
-\frac{\lambda_{\Delta}/(2\pi)^2}{(m_{\Delta}^{2}-P'^{2})} \frac{Q^2}{3m_{\Delta}^{2}}
\Big[ 2G_{1}+(2m_{\Delta}-m_{P})G_{2} +2(m_{P}-m_{\Delta})G_{3}\Big],
\nonumber\\
{\cal B}^{\Delta} &=&
-\frac{\lambda_{\Delta}/(2\pi)^2}{(m_{\Delta}^{2}-P'^{2})} \frac{1}{3m_{\Delta}^{2}}
\Big[-2m_{P}G_{1}+[m_{\Delta}(m_{\Delta}-m_{P})-Q^{2}]G_{2}+2Q^{2}G_{3}\Big],
\nonumber\\
{\cal C}^{\Delta}&=&
\frac{\lambda_{\Delta}/(2\pi)^2}{(m_{\Delta}^{2}-P'^{2})} \frac{Q^2}{3m_{\Delta}^{2}m_P}
\Big[2(m_{P}+m_{\Delta})G_{1}+(m_{\Delta}^{2}+Q^{2})G_{2}+2[m_{\Delta}(m_{P}-m_{\Delta})-Q^{2}]G_{3}\Big],
\nonumber\\
{\cal D}^{\Delta}&=&-\frac{\lambda_{\Delta}/(2\pi)^2}{(m_{\Delta}^{2}-P'^{2})}
\frac{1}{3m_{\Delta}^{2}m_{P}}
\Big[2(Q^{2}
-m_{\Delta}m_{P})G_{1}
+(m_{\Delta}-m_{P})(m_{\Delta}^{2}+Q^{2})G_{2}
\nonumber\\
&&{}+2m_{P}Q^{2}G_{3}\Big],
\nonumber\\
{\cal E}^{\Delta}&=&-\frac{\lambda_{\Delta}/(2\pi)^2}{(m_{\Delta}^{2}-P'^{2})}
\frac{1}{6m_{\Delta}^{2}}
\Big[2[m_{P}(m_{P}^{2}-m_{\Delta}^{2})+Q^{2}(2m_{\Delta}+m_{P})]G_{1}
\nonumber\\
&&{}+ m_{\Delta} (m_{\Delta}-m_{P})^2 (m_{\Delta}+m_{P}) G_{2}
+2Q^{2}m_{\Delta}(m_{P}-m_{\Delta})G_{3}\Big],
\nonumber\\
{\cal F}^{\Delta}&=&-\frac{\lambda_{\Delta}/(2\pi)^2}{(m_{\Delta}^{2}-P'^{2})}
\frac{1}{6m_{\Delta}^{2}}
\Big[2[Q^{2} -4m_{\Delta}^{2}+ (m_{\Delta}-m_{P})^2]G_{1}
+m_{\Delta}(m_{P}^{2}-m_{\Delta}^{2})G_{2}
\nonumber\\
&&{}+2m_{\Delta}Q^{2}G_{3}\Big],
\nonumber\\
{\cal G}^{\Delta}&=&
-\frac{\lambda_{\Delta}/(2\pi)^2}{(m_{\Delta}^{2}-P'^{2})}
\frac{Q^2}{6m_{\Delta}}
\Big[-2G_{1}+ (m_{P}+m_{\Delta})G_{2}+2(m_{\Delta}-m_{P})G_{3}\Big],
\nonumber\\
{\cal H}^{\Delta}&=&-\frac{\lambda_{\Delta}/(2\pi)^2}{(m_{\Delta}^{2}-P'^{2})}
\frac{1}{6m_{\Delta}}
\Big[2(3m_{\Delta}+m_{P})G_{1}+(2m_{\Delta} (m_{\Delta}-m_{P}) +Q^{2})G_{2}-2Q^{2}G_{3}\Big],
\nonumber\\
{\cal I}^{\Delta}&=&
\frac{\lambda_{\Delta}/(2\pi)^2}{(m_{\Delta}^{2}-P'^{2})}
\frac{Q^2}{6m_{\Delta}^{2}}
\Big[2(m_\Delta m_P -3 m_{\Delta}^2 -2 Q^2)G_{1}
\nonumber\\
&&{}+ (4m_\Delta^2 (m_P-m_\Delta) + Q^2(2 m_P-3m_\Delta))G_2+2Q^2(m_\Delta-2m_P)G_{3}\Big],
\nonumber\\
{\cal J}^{\Delta}&=&
-\frac{\lambda_{\Delta}/(2\pi)^2}{(m_{\Delta}^{2}-P'^{2})}
\frac{Q^2}{6m_\Delta^2}
\Big[-2(2m_{P} +m_{\Delta})G_{1}
-(m_\Delta (m_P +3 m_\Delta) + 2 Q^2)G_{2}
\nonumber\\
&&\hspace{1cm}+2(m_\Delta (m_\Delta- m_P) +2 Q^2)G_{3}\Big].
\end{eqnarray}
\subsubsection{The $J^P=\frac12^-$ contributions}
{}For completeness, we present here the (unwanted)
contributions to the same invariant functions ${\cal A}$\ldots ${\cal J}$
of the negative parity spin-1/2 $\Delta$-resonances,
cf.~(\ref{T12}):
\bea{lhs12}
{\cal A}^{\Delta^*}&=&\frac{8\lambda_{*}/(2\pi)^2}{(m_{*}^{2}-P'^{2})}\, Q^2 F_{1}\,,
\nonumber\\
{\cal B}^{\Delta^*}&=&-\frac{8\lambda_\ast/(2\pi)^2}{(m_{*}^{2}-P'^{2})}F_{2}\,,
\nonumber\\
{\cal C}^{\Delta^*}&=&-\frac{4\lambda_\ast/(2\pi)^2Q^{2}}{(m_{*}^{2}-P'^{2})m_{P}}\Big[m_* F_{1}+2 F_{2}\Big],
\nonumber\\
{\cal D}^{\Delta^*}&=&\frac{4\lambda_\ast/(2\pi)^2}{(m_{*}^{2}-P'^{2})m_{P}}\Big[2 Q^2 F_{1}-m_{*}F_{2}\Big],
\nonumber\\
{\cal E}^{\Delta^*}&=&\frac{4\lambda_\ast/(2\pi)^2}{(m_{*}^{2}-P'^{2})}
\Big[Q^{2}(m_{*}+m_{P})F_1+ (m_{P}^{2}-m_{*}^{2})F_2\Big],
\nonumber\\
{\cal F}^{\Delta^*}&=&\frac{4\lambda_\ast/(2\pi)^2}{(m_{*}^{2}-P'^{2})}\Big[Q^{2}F_1+(m_{P}-m_{*})F_2\Big],
\nonumber\\
{\cal G}^{\Delta^*}&=&-\frac{2\lambda_\ast/(2\pi)^2}{(m_{*}^{2}-P'^{2})}Q^2 \, m_* F_{1}\,,
\nonumber\\
{\cal H}^{\Delta^*}&=&\frac{2\lambda_\ast/(2\pi)^2}{(m_{*}^{2}-P'^{2})}\, m_* F_{2}\,,
\nonumber\\
{\cal I}^{\Delta^*}&=&\frac{2\lambda_\ast/(2\pi)^2}{(m_{*}^{2}-P'^{2})}Q^2 \, \Big[4Q^2F_1-m_*F_2\Big]\,,
\nonumber\\
{\cal J}^{\Delta^*}&=&-\frac{2\lambda_\ast/(2\pi)^2}{(m_{*}^{2}-P'^{2})}Q^2 \,\Big[m_*F_1+4F_2\Big]\,.
\end{eqnarray}
Here $F_{1,2}=F_{1,2}^{N\Delta^*}(Q^2)$ and $m_* = m_{\Delta^*}$ is the mass of the $\Delta^*$ resonance.
Taking into account the relations in (\ref{RScond}) and (\ref{LorSym})
we find that there exist two independent combinations of the invariant functions that are
free from contributions of $J^P=\frac12^-$ resonances {\it with arbitrary mass $m_*$}:
\bea{free12}
&& {\cal G}^{\Delta^*} - {\cal J}^{\Delta^*} + Q^2 {\cal B}^{\Delta^*} =0\,,
\nonumber\\
&& {\cal I}^{\Delta^*} + Q^2\,{\cal H}^{\Delta^*} - Q^2 {\cal A}^{\Delta^*} =0\, .
\end{eqnarray}
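The vanishing of these two combinations for arbitrary $m_*$ and arbitrary $F_{1,2}$ can be confirmed with a few lines of computer algebra. In the sketch below the common factor $2\lambda_*/(2\pi)^2/(m_*^2-P'^2)$ is stripped off, since it cancels in both combinations:

```python
import sympy as sp

Q2, mstar, F1, F2 = sp.symbols('Q2 m_star F1 F2')

# Delta* invariant functions with the common pole factor removed
A = 4*Q2*F1
B = -4*F2
G = -Q2*mstar*F1
H = mstar*F2
I_ = Q2*(4*Q2*F1 - mstar*F2)
J = -Q2*(mstar*F1 + 4*F2)

combo1 = sp.expand(G - J + Q2*B)        # -> 0
combo2 = sp.expand(I_ + Q2*H - Q2*A)    # -> 0
```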
On the other hand, using (\ref{lhs}) one obtains very simple expressions for the $\Delta$-isobar
contributions to the same combinations:
\bea{free32}
{\cal G}^{\Delta} - {\cal J}^{\Delta} + Q^2 {\cal B}^{\Delta} &=&
- \frac{\lambda_\Delta/(2\pi)^2}{(m_\Delta^2 - P'^2)} Q^2 G_2(Q^2)\,,
\nonumber\\
{\cal I}^{\Delta} + Q^2\,{\cal H}^{\Delta} - Q^2\,{\cal A}^{\Delta} &=&
- \frac{\lambda_\Delta/(2\pi)^2}{(m_\Delta^2 - P'^2)}Q^2\big[2 G_1(Q^2)+(m_\Delta-m_P)G_2(Q^2)\big].
\end{eqnarray}
This implies that one can construct light-cone sum rules for the $\gamma^* N \to \Delta$ form factors
$G_1$ and $G_2$ that are free from contamination by negative parity spin-1/2 resonances.
{}For the form factor $G_3$ this separation is not possible, unless one
goes over to Lorentz structures of yet higher twist. Since the accuracy of light-cone sum
rules is expected to decrease with twist, the gain of excluding spin-1/2 resonances in this
case is probably not worth the effort. Note that the difference between the magnetic $G_M$ and
electric $G_E$ form factors only involves $G_1$:
\bea{GM-GE}
G_M(Q^2)-G_E(Q^2) &=& \frac{2m_P}{3m_\Delta}\frac{[(m_\Delta+m_P)^2+Q^2]}{(m_\Delta+m_P)}
G_1(Q^2)
\end{eqnarray}
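This identity can be verified symbolically from the definitions of $G_M$ and $G_E$ given in Section 2; a minimal sympy check (note that the $G_2$ and $G_3$ terms drop out of the difference):

```python
import sympy as sp

mP, mD, Q2, G1, G2, G3 = sp.symbols('m_P m_Delta Q2 G1 G2 G3', positive=True)

pref = mP / (3*(mP + mD))
GM = pref*(((3*mD + mP)*(mD + mP) + Q2)*G1/mD + (mD**2 - mP**2)*G2 - 2*Q2*G3)
GE = pref*((mD**2 - mP**2 - Q2)*G1/mD + (mD**2 - mP**2)*G2 - 2*Q2*G3)

# difference versus the closed form quoted in the text
residual = sp.simplify(GM - GE - 2*mP/(3*mD)*((mD + mP)**2 + Q2)/(mD + mP)*G1)
```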
\setcounter{equation}{0} \section{Calculation of Correlation Functions}
On the other hand, one can calculate the invariant functions ${\cal A}$\ldots ${\cal J}$ for Euclidean
virtuality $(P')^2 = (P-q)^2$ in terms of the nucleon (proton) DAs of increasing twist.
The calculation can be simplified considerably by observing that only the isospin-one
part of the electromagnetic current can initiate $N\Delta$-transitions. Hence, all correlation
functions must be proportional to $e_u-e_d$ and a (simpler) calculation of the $d$-quark
contribution suffices to obtain the complete result. An additional simplification is due to the
fact that, to the leading-order accuracy in QCD coupling, the two $u$-quarks in the $\Delta$-current
(\ref{eta}) remain spectators and retain their position at the origin. It is, therefore, sufficient
to retain only those terms in the general Lorentz decomposition of the three-quark matrix element
between vacuum and the proton state, that are symmetric with respect to the interchange of the momentum
fractions of the two $u$-quarks \cite{Braun:2000kw}:
\bea{zerl}
\lefteqn{ 4 \bra{0} \varepsilon^{ijk} u_\alpha^i(0) u_\be^j(0) d_\gamma^k(z)
\ket{N(P)} =}
\nonumber\\
&=&
\left({\cal V}_1 + \frac{z^2m_P^2}{4} {\cal V}_1^M \right) \left(\!\not\!{P}C \right)_{\alpha \be} \left(\gamma_5 N\right)_\gamma +
{\cal V}_2 m_P \left(\!\not\!{P} C \right)_{\alpha \be} \left(\!\not\!{z} \gamma_5 N\right)_\gamma +
{\cal V}_3 m_P \left(\gamma_\mu C \right)_{\alpha \be}\left(\gamma^{\mu} \gamma_5 N\right)_\gamma
\nonumber \\
&&{} +
{\cal V}_4 m_P^2 \left(\!\not\!{z}C \right)_{\alpha \be} \left(\gamma_5 N\right)_\gamma +
{\cal V}_5 m_P^2 \left(\gamma_\mu C \right)_{\alpha \be} \left(i \sigma^{\mu\nu} z_\nu \gamma_5
N\right)_\gamma
+ {\cal V}_6 m_P^3 \left(\!\not\!{z} C \right)_{\alpha \be} \left(\!\not\!{z} \gamma_5 N\right
)_\gamma\,
\nonumber\\
&&{}+
\left({\cal T}_1+\frac{z^2m_P^2}{4}{\cal T}_1^M\right)
\left(P^\nu i \sigma_{\mu\nu} C\right)_{\alpha \be} \left(\gamma^\mu\gamma_5 N\right)_\gamma
+
{\cal T}_2 m_P \left(z^\mu P^\nu i \sigma_{\mu\nu} C\right)_{\alpha \be} \left(\gamma_5 N\right)_\gamma
\nonumber \\
&&{}
+ {\cal T}_3 m_P \left(\sigma_{\mu\nu} C\right)_{\alpha \be} \left(\sigma^{\mu\nu}\gamma_5 N\right)_\gamma
+ {\cal T}_4 m_P \left(P^\nu \sigma_{\mu\nu} C\right)_{\alpha \be} \left(\sigma^{\mu\varrho} z_\varrho \gamma_5 N\right)_\gamma
\nonumber \\
&&{}
+ {\cal T}_5 m_P^2 \left(z^\nu i \sigma_{\mu\nu} C\right)_{\alpha \be} \left(\gamma^\mu\gamma_5 N\right)_\gamma
+{\cal T}_6 m_P^2 \left(z^\mu P^\nu i \sigma_{\mu\nu} C\right)_{\alpha \be} \left(\!\not\!{z} \gamma_5 N\right)_\gamma
\nonumber \\&&
+{\cal T}_{7} m_P^2 \left(\sigma_{\mu\nu} C\right)_{\alpha \be} \left(\sigma^{\mu\nu} \!\not\!{z} \gamma_5 N\right)_\gamma
+ {\cal T}_{8} m_P^3 \left(z^\nu \sigma_{\mu\nu} C\right)_{\alpha \be} \left(\sigma^{\mu\varrho} z_\varrho \gamma_5 N\right)_\gamma\,.
\end{eqnarray}
The expansion in (\ref{zerl}) should be viewed as an operator product expansion
to the leading order in the strong coupling.
Each of the
functions ${\cal V}_i$ and ${\cal T}_i$
is a function of the scalar product $(P\cdot z)$ and
also depends on the deviation from the light-cone $z^2$ at most
logarithmically. In addition, we take into account the $O(z^2)$ corrections to the
leading-twist-3 structures, denoted by ${\cal V}^M_1$ and ${\cal T}^M_1$.
The invariant functions ${\cal V}_1(Pz),\ldots,{\cal T}_8(Pz)$ can be expressed in terms of the
nucleon distribution
amplitudes $V_1(x_i),\ldots,T_8(x_i)$ with increasing twist, introduced in Ref.~\cite{Braun:2000kw}:
\bea{opev}
\renewcommand{\arraystretch}{1.7}
\begin{array}{lll}
{\cal V}_1 = V_1\,, &~~& 2 \,(pz) {\cal V}_2 = V_1 - V_2 - V_3\,, \\
2\, {\cal V}_3 = V_3\,, &~~& 4 \,(pz) {\cal V}_4 = - 2 V_1 + V_3 + V_4 + 2 V_5\,, \\
4\, (pz)\, {\cal V}_5 = V_4 - V_3\,, &~~&
4 \,(p z )^2 {\cal V}_6 = - V_1 + V_2 + V_3 + V_4 + V_5 - V_6\,, \\
{\cal T}_1 = T_1\,, && 2 \, {\cal T}_2 = T_1 + T_2 - 2 T_3\,,
\\
2 {\cal T}_3 = T_7\,, && 2 \,(pz) {\cal T}_4 = T_1 - T_2 - 2 T_7\,,
\\
2 \,(pz) {\cal T}_5 = - T_1 + T_5 + 2 T_8\,, &&
4 \left( p z\right )^2 {\cal T}_6 = 2 T_2 - 2 T_3 - 2 T_4 + 2 T_5 + 2 T_7 + 2 T_8\,,
\\
4 \, (pz) {\cal T}_7 = T_7 - T_8\,, &&
4 \left( p z \right)^2 {\cal T}_8 = -T_1 + T_2 + T_5 - T_6 + 2 T_7 + 2 T_8 \,.
\\
\end{array}
\renewcommand{\arraystretch}{1.0}
\end{eqnarray}
Each distribution amplitude $F = V_i,T_i$ can be
represented by a Fourier integral
\beq{fourier}
F(pz) = \int_0^1 dx_3\int_0^{1-x_3}\!\!dx_1 \; e^{-i x_3 (p z)}\, F(x_1,1-x_1-x_3,x_3)\,,
\end{equation}
where the functions $F(x_i)$ depend on the dimensionless
variables $x_1,x_2=1-x_1-x_3,x_3$, $\,0 < x_i < 1$ which
correspond to the longitudinal momentum fractions
carried by the quarks inside the nucleon.
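To make the conventions in (\ref{fourier}) concrete, the following illustrative Python sketch (not part of the analysis in this paper) evaluates the Fourier integral by a midpoint rule on the simplex, using the asymptotic leading-twist amplitude $V_1 = 120\,x_1x_2x_3$ quoted later in Eq.~(\ref{conf1}), with $f_N$ set to unity; at $pz=0$ the integral reduces to the normalization $\int dx_1 dx_3\, V_1 = 1$:

```python
import math

def v1_asym(x1, x2, x3):
    # asymptotic leading-twist nucleon DA (f_N factored out), cf. Eq. (conf1)
    return 120.0 * x1 * x2 * x3

def fourier(pz, da, n=400):
    """F(pz) = int_0^1 dx3 int_0^{1-x3} dx1 exp(-i x3 pz) da(x1, 1-x1-x3, x3),
    evaluated by a midpoint rule on the simplex (returns a complex number)."""
    h = 1.0 / n
    total = 0.0j
    for i in range(n):
        x3 = (i + 0.5) * h
        m = max(1, int(n * (1.0 - x3)))
        h1 = (1.0 - x3) / m
        inner = sum(da((j + 0.5) * h1, 1.0 - (j + 0.5) * h1 - x3, x3)
                    for j in range(m)) * h1
        total += complex(math.cos(x3 * pz), -math.sin(x3 * pz)) * inner * h
    return total

print(abs(fourier(0.0, v1_asym)))   # the DA normalization, close to 1
```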
In contrast to the ``calligraphic'' functions ${\cal V}_i(Pz),{\cal T}_i(Pz)$,
each of the distribution amplitudes $V_i(x_i),T_i(x_i)$
has a definite twist, see Table~\ref{tabelle1}, and corresponds
to the matrix element of a (renormalized) three-quark operator with
exactly light-like separations $z^2\to 0$, see Table~2 and Appendix C in
Ref.~\cite{Braun:2000kw} for the details.
\begin{table}
\renewcommand{\arraystretch}{1.3}
\begin{center}
\begin{tabular}{l|l|l|l}
twist-3 & twist-4 & twist-5 & twist-6 \\ \hline
$V_1$ & $V_2\;,\;V_3$ & $V_4\;,\;V_5 $& $V_6$ \\
$T_1$ & $T_2,T_3,T_7$ & $T_4,T_5,T_8 $& $T_6$ \\ \hline
\end{tabular}
\end{center}
\caption[]{\sf Twist classification of the distribution amplitudes
in (\ref{opev}).}
\label{tabelle1}
\renewcommand{\arraystretch}{1.0}
\end{table}
The higher-twist distribution
amplitudes $V_2(x_i),\ldots,V_6(x_i)$ correspond to ``wrong'' components
of the quark spinors and have different helicity structure compared to
the leading twist amplitude. For
baryons these ``bad'' components cannot all be traded for gluons as
in the case of mesons~\cite{Braun:1989iv,Ball:1998sk}.
They are not all independent, but are related to
each other by the exact QCD equations of motion. As a result, to
the leading conformal spin accuracy, the five functions
$V_2(x_i),\ldots,V_6(x_i)$ involve only a single nonperturbative
higher twist parameter.
In the calculations presented
below we use the conformal expansions of higher twist distribution amplitudes
to the next-to-leading order (i.e., including the ``P-wave'' contributions).
This accuracy is consistent with neglecting multiparton components
with extra gluons (quark-antiquark pairs) that are of yet higher spin.
Finally, the invariant functions ${\cal V}_1^M(Pz),{\cal T}^M_1(Pz)$ are twist-5 and can
be calculated using equations of motion in terms of the nucleon distribution amplitudes~\cite{Braun:2001tj}.
Explicit expressions for the distribution amplitudes are collected in Appendix~A.
In the expressions given below we use the following shorthand notations:
\bea{tildefunction}
\widetilde{F}(x_3) &=& \int_1^{x_3} \dd x_3' \int_0^{1-x_3'} \dd x_1
F(x_1,1-x_1-x_3',x_3') \,,
\nonumber \\
\widetilde{\widetilde{F}}(x_3) &=& \int_1^{x_3} \dd x_3'\int_1^{x_3'}
\dd x_3'' \int_0^{1-x_3''} \dd x_1
F(x_1,1-x_1-x_3'',x_3'') \,,
\end{eqnarray}
where $F=V_i,T_i$. These functions result from partial integration in $x_3$,
which is done in order to eliminate the $1/(pz)$ factors that appear in the definitions
of the nucleon distribution amplitudes (\ref{opev}). After this,
the $\int d^4 z$ integration becomes trivial. The surface terms sum up to zero.
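For orientation, the integrals (\ref{tildefunction}) are straightforward to implement numerically. The following illustrative Python sketch (not from the paper) uses nested midpoint rules; for the asymptotic leading-twist amplitude $V_1 = 120\,x_1x_2x_3$ (with $f_N$ set to unity) a short calculation gives the closed forms $\widetilde V_1(x_3)=4\bar x_3^5-5\bar x_3^4$ and $\widetilde{\widetilde V}_1(x_3)=\bar x_3^5-\tfrac23\bar x_3^6$ with $\bar x_3=1-x_3$, which can serve as a cross-check:

```python
def v1_asym(x1, x2, x3):
    # asymptotic leading-twist nucleon DA (f_N factored out)
    return 120.0 * x1 * x2 * x3

def tilde(da, x3, n=300):
    """F-tilde of Eq. (tildefunction): int_1^{x3} dx3' int_0^{1-x3'} dx1 da(...)."""
    h = (x3 - 1.0) / n          # signed step: the x3' integration runs from 1 down to x3
    total = 0.0
    for k in range(n):
        x3p = 1.0 + (k + 0.5) * h
        h1 = (1.0 - x3p) / n
        total += sum(da((j + 0.5) * h1, 1.0 - (j + 0.5) * h1 - x3p, x3p)
                     for j in range(n)) * h1 * h
    return total

def dtilde(da, x3, n=120):
    """F-tilde-tilde: one more x3' integration from 1 down to x3."""
    h = (x3 - 1.0) / n
    return sum(tilde(da, 1.0 + (k + 0.5) * h, n=80) for k in range(n)) * h
```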
After a straightforward but tedious calculation we obtain the desired expansions:
\bea{A}
{\cal A}^{\rm QCD}&=& 4(e_d-e_u)\int\limits_0^1\! dx_3 \Bigg\{
\frac{x_3}{(x_{3}P-q)^2}\int\limits_0^{1-x_3}\!\!\!dx_1 (V_1-T_1)(x_i)+\frac{x_3m_P^2}{(x_{3}P-q)^4}
(V_1^{M(d)}-T_1^{M(d)})
\nonumber\\
&&{}+ \frac{x_3^2m_P^2}{(x_{3}P-q)^4}(
-2 \widetilde V_1 + \widetilde V_2 + \widetilde V_3 + \widetilde V_4 + \widetilde V_5
+2 \widetilde T_1 - \widetilde T_2 - \widetilde T_5 - 2\widetilde T_7 - 2\widetilde T_8)
\nonumber\\
&&{}+ \frac{2x_3^3m_P^4}{(x_{3}P-q)^6}(
\widetilde{\widetilde{V}}_1 -\widetilde{\widetilde{V}}_2 -\widetilde{\widetilde{V}}_3-\widetilde{\widetilde{V}}_4-\widetilde{\widetilde{V}}_5+\widetilde{\widetilde{V}}_6
-\widetilde{\widetilde{T}}_1+\widetilde{\widetilde{T}}_2+\widetilde{\widetilde{T}}_5-\widetilde{\widetilde{T}}_6+2\widetilde{\widetilde{T}}_7+2\widetilde{\widetilde{T}}_8)
\nonumber\\
&&{}+ \left( \frac{x_3m_P^2}{(x_{3}P-q)^4}+ \frac{2x_3m_P^2Q^2}{(x_{3}P-q)^6}\right)(\widetilde{\widetilde{T}}_2-\widetilde{\widetilde{T}}_3-\widetilde{\widetilde{T}}_4+\widetilde{\widetilde{T}}_5+\widetilde{\widetilde{T}}_7+\widetilde{\widetilde{T}}_8)
\Bigg\},
\end{eqnarray}
\bea{B}
{\cal B}^{\rm QCD}&=& 4(e_d-e_u)\int\limits_0^1\! dx_3 \Bigg\{
\frac{x_3 m_P}{(x_{3}P-q)^4}( - \widetilde V_1 + \widetilde V_2 +
\widetilde V_3 + \widetilde T_1 - \widetilde T_3 - \widetilde T_7)
\\
&+& \frac{2x_3^2m_P^3}{(x_{3}P-q)^6}(
\widetilde{\widetilde{V}}_1 -\widetilde{\widetilde{V}}_2 -\widetilde{\widetilde{V}}_3-\widetilde{\widetilde{V}}_4-\widetilde{\widetilde{V}}_5+\widetilde{\widetilde{V}}_6 -\widetilde{\widetilde{T}}_1+\widetilde{\widetilde{T}}_3+\widetilde{\widetilde{T}}_4-\widetilde{\widetilde{T}}_6+ \widetilde{\widetilde{T}}_7+\widetilde{\widetilde{T}}_8)
\Bigg\},
\nonumber
\end{eqnarray}
\bea{C}
{\cal C}^{\rm QCD}&=& 2(e_d-e_u)\int\limits_0^1\! dx_3 \Bigg\{
\frac{x_3}{(x_{3}P-q)^2}\int\limits_0^{1-x_3}\!\!\!dx_1 (-T_1+V_3)(x_i)
- \frac{x_3m_P^2}{(x_{3}P-q)^4} T_1^{M(d)}
\nonumber\\
&&{}
-\frac{1}{(x_{3}P-q)^2}(\widetilde V_1 - \widetilde V_2 - \widetilde V_3)
+ \frac{Q^2}{(x_{3}P-q)^4}(\widetilde T_1 -\widetilde T_3-\widetilde T_7-\widetilde V_1 +
\widetilde V_2 + \widetilde V_3)
\nonumber\\
&&{}
+ \frac{x_3^2m_P^2}{(x_{3}P-q)^4}( \widetilde V_4 - \widetilde V_3 + \widetilde T_1 -
\widetilde T_2 + \widetilde T_3 - \widetilde T_5 - \widetilde T_7 - 2 \widetilde T_8)
\nonumber\\
&&{}
+\left(\frac{2x_3m_P^2}{(x_{3}P-q)^4}+ \frac{4x_3m_P^2Q^2}{(x_{3}P-q)^6}\right)
(\widetilde{\widetilde{V}}_1-\widetilde{\widetilde{V}}_2-\widetilde{\widetilde{V}}_3-\widetilde{\widetilde{V}}_4-\widetilde{\widetilde{V}}_5+\widetilde{\widetilde{V}}_6)
\nonumber\\
&&{}
+\left(\frac{x_3 m_P^2}{(x_{3}P-q)^4}+ \frac{4x_3m_P^2Q^2}{(x_{3}P-q)^6}\right)
(-\widetilde{\widetilde{T}}_1+\widetilde{\widetilde{T}}_2+\widetilde{\widetilde{T}}_5-\widetilde{\widetilde{T}}_6+2\widetilde{\widetilde{T}}_7+2\widetilde{\widetilde{T}}_8)
\nonumber\\
&&{}- \frac{2x_3m_P^2[Q^2-x_3^2 m_P^2]}{(x_{3}P-q)^6}
(\widetilde{\widetilde{T}}_2-\widetilde{\widetilde{T}}_3-\widetilde{\widetilde{T}}_4+\widetilde{\widetilde{T}}_5+\widetilde{\widetilde{T}}_7+\widetilde{\widetilde{T}}_8)
\Bigg\},
\end{eqnarray}
\bea{D}
{\cal D}^{\rm QCD}&=& 2\frac{(e_d-e_u)}{m_P}\int\limits_0^1\! dx_3 \Bigg\{
\frac{1}{(x_{3}P-q)^2}\int\limits_0^{1-x_3}\!\!\!dx_1 V_1(x_i)
+ \frac{m_P^2}{(x_{3}P-q)^4} V_1^{M(d)}
\nonumber\\
&&{}
+ \frac{x_3m_P^2}{(x_{3}P-q)^4}( -3 \widetilde V_1 + \widetilde V_2 + 2\widetilde V_3
+ \widetilde V_4 +2 \widetilde V_5 +2 \widetilde T_1 -
\widetilde T_2 - \widetilde T_5 - 2 \widetilde T_7 - 2 \widetilde T_8)
\nonumber\\
&&{}
+\frac{4x_3^2m_P^4}{(x_{3}P-q)^6}(
\widetilde{\widetilde{V}}_1 -\widetilde{\widetilde{V}}_2 -\widetilde{\widetilde{V}}_3-\widetilde{\widetilde{V}}_4-\widetilde{\widetilde{V}}_5+\widetilde{\widetilde{V}}_6
-\widetilde{\widetilde{T}}_1+\widetilde{\widetilde{T}}_2+\widetilde{\widetilde{T}}_5-\widetilde{\widetilde{T}}_6+2\widetilde{\widetilde{T}}_7+2\widetilde{\widetilde{T}}_8)
\nonumber\\
&&{}
+\left(\frac{m_P^2}{(x_{3}P-q)^4}+ \frac{2m_P^2[Q^2-x_3^2m_P^2]}{(x_{3}P-q)^6}\right)
(\widetilde{\widetilde{T}}_2-\widetilde{\widetilde{T}}_3-\widetilde{\widetilde{T}}_4+\widetilde{\widetilde{T}}_5+\widetilde{\widetilde{T}}_7+\widetilde{\widetilde{T}}_8)
\Bigg\},
\end{eqnarray}
\bea{E}
{\cal E}^{\rm QCD}&=&
(e_d-e_u)\int\limits_0^1\! dx_3\Bigg\{
\frac{2m_Px_3}{(x_3P-q)^2}\int\limits_0^{1-x_3}\!dx_1 (V_1-V_3)
+\frac{2m_P}{(x_3P-q)^2}(\widetilde{T}_1-\widetilde{T}_3-\widetilde{T}_7)
\nonumber\\
&&{}+\frac{2x_3m_P^3}{(x_3P-q)^4} V_1^{M(d)}
+\frac{2x_3^2m_P^3}{(x_3P-q)^4}(-\widetilde{V}_1+\widetilde{V}_3+\widetilde{V}_5)
+\frac{2m_P Q^2}{(x_3P-q)^4}(\widetilde{T}_1-\widetilde{T}_3-\widetilde{T}_7)
\nonumber\\
&&{}+\frac{2x_3m_P^3}{(x_3P-q)^4}(-\widetilde{\widetilde{T}}_1+\widetilde{\widetilde{T}}_3+\widetilde{\widetilde{T}}_4-\widetilde{\widetilde{T}}_6+\widetilde{\widetilde{T}}_7+\widetilde{\widetilde{T}}_8)
\Bigg\},
\end{eqnarray}
\bea{F}
{\cal F}^{\rm QCD}&=&(e_d-e_u)\int\limits_0^1\! dx_3\Bigg\{
\frac{2}{(x_3P-q)^2}\int\limits_0^{1-x_3}\!dx_1(T_1 +V_1)+\frac{2 m_P^2}{(x_3P-q)^4}(V_1^{M(d)}+T_1^{M(d)})
\nonumber\\
&&{}-\frac{2x_3m_P^2}{(x_3P-q)^4}(\widetilde{T}_1-\widetilde{T}_3-\widetilde{T}_7
+\widetilde{V}_1-\widetilde{V}_3-\widetilde{V}_5)
\nonumber\\
&&{}-\frac{2m_P^2}{(x_3P-q)^4}(\widetilde{\widetilde{T}}_2-\widetilde{\widetilde{T}}_3-\widetilde{\widetilde{T}}_4+\widetilde{\widetilde{T}}_5+\widetilde{\widetilde{T}}_7+\widetilde{\widetilde{T}}_8)\Bigg\},
\end{eqnarray}
\bea{G}
{\cal G}^{\rm QCD}&=& (e_d-e_u)\int\limits_0^1\! dx_3 \Bigg\{
\frac{x_3m_P}{(x_3P-q)^2} \int\limits_0^{1-x_3}\!dx_1(V_3-T_1)(x_i)
+\frac{m_P}{(x_3P-q)^2}(-\widetilde{V}_1+\widetilde{V}_2+\widetilde{V}_3)
\nonumber\\
&&{} -\frac{x_3m_P^3}{(x_3P-q)^4}T_1^{M(d)}
+\frac{m_PQ^2}{(x_3P-q)^4}
(-\widetilde{T}_1+\widetilde{T}_3+\widetilde{T}_7-\widetilde{V}_1+\widetilde{V}_2+\widetilde{V}_3)
\nonumber\\
&&{}+\frac{x_3^2m_P^3}{(x_3P-q)^4}
(\widetilde{T}_1-\widetilde{T}_2+\widetilde{T}_3-\widetilde{T}_5-\widetilde{T}_7-2\widetilde{T}_8
-\widetilde{V}_3+\widetilde{V}_4)
\nonumber\\
&&{}+\frac{x_3m_P^3}{(x_3P-q)^4}(-\widetilde{\widetilde{T}} _1+\widetilde{\widetilde{T}}_2+\widetilde{\widetilde{T}}_5-\widetilde{\widetilde{T}}_6+2\widetilde{\widetilde{T}}_7+2\widetilde{\widetilde{T}}_8)
\nonumber\\
&&{}+\frac{2x_3m_P^3}{(x_3P-q)^4}(\widetilde{\widetilde{V}}_1-\widetilde{\widetilde{V}}_2-\widetilde{\widetilde{V}}_3-\widetilde{\widetilde{V}}_4-\widetilde{\widetilde{V}}_5+\widetilde{\widetilde{V}}_6)
\nonumber\\
&&{}+\frac{2x_3m_P^3[Q^2+x_3^2m_P^2]}{(x_3P-q)^6}(\widetilde{\widetilde{T}}_2-\widetilde{\widetilde{T}}_3-\widetilde{\widetilde{T}}_4+\widetilde{\widetilde{T}}_5+\widetilde{\widetilde{T}}_7+\widetilde{\widetilde{T}}_8)
\Bigg\},
\end{eqnarray}
\bea{H}
{\cal H}^{\rm QCD}&=&(e_d-e_u)\int\limits_0^1\! dx_3 \Bigg\{
\frac{-1}{(x_3P-q)^2}\int\limits_0^{1-x_3}\!dx_1(V_1+2T_1)(x_i)-\frac{m_P^2}{(x_3P-q)^4}(V_1^{M(d)}+2T_1^{M(d)})
\nonumber\\
&&{}+\frac{x_3m_P^2}{(x_3P-q)^4}(2\widetilde{T}_1-\widetilde{T}_2-\widetilde{T}_5-2\widetilde{T}_7-2\widetilde{T}_8
+\widetilde{V}_1-\widetilde{V}_2-2\widetilde{V}_3+\widetilde{V}_4)
\nonumber\\
&&{}+\frac{3m_P^2}{(x_3P-q)^4}(\widetilde{\widetilde{T}}_2-\widetilde{\widetilde{T}}_3-\widetilde{\widetilde{T}}_4+\widetilde{\widetilde{T}}_5+\widetilde{\widetilde{T}}_7+\widetilde{\widetilde{T}}_8)
\nonumber\\
&&{}+\frac{2x_3^2m_P^4}{(x_3P-q)^6}(\widetilde{\widetilde{T}}_2-\widetilde{\widetilde{T}}_3-\widetilde{\widetilde{T}}_4+\widetilde{\widetilde{T}}_5+\widetilde{\widetilde{T}}_7+\widetilde{\widetilde{T}}_8)
\nonumber\\
&&{}+\frac{2m_P^2 Q^2}{(x_3P-q)^6}(\widetilde{\widetilde{T}}_2-\widetilde{\widetilde{T}}_3-\widetilde{\widetilde{T}}_4+\widetilde{\widetilde{T}}_5+\widetilde{\widetilde{T}}_7+\widetilde{\widetilde{T}}_8)\Bigg\},
\end{eqnarray}
\bea{I}
{\cal I}^{\rm QCD}&=&(e_d-e_u)Q^2\int\limits_0^1\! dx_3 \Bigg\{
\frac{1}{(x_3P-q)^2}\int\limits_0^{1-x_3}\!dx_1(V_1-2T_1)(x_i)+\frac{m_P^2}{(x_3P-q)^4}(V_1^{M(d)}\!-2T_1^{M(d)})
\nonumber\\
&&{}-\frac{x_3m_P^2}{(x_3P-q)^4}(
{-6\widetilde{T}_1+3\widetilde{T}_2+3\widetilde{T}_5+6\widetilde{T}_7+6\widetilde{T}_8}
+5\widetilde{V}_1-\widetilde{V}_2-2\widetilde{V}_3-3\widetilde{V}_4-4\widetilde{V}_5)
\nonumber\\
&&{}+\frac{5m_P^2}{(x_3P-q)^4}(\widetilde{\widetilde{T}}_2-\widetilde{\widetilde{T}}_3-\widetilde{\widetilde{T}}_4+\widetilde{\widetilde{T}}_5+\widetilde{\widetilde{T}}_7+\widetilde{\widetilde{T}}_8)
\nonumber\\
&&{}-\frac{2x_3^2m_P^4}{(x_3P-q)^6}(4\widetilde{\widetilde{T}}_1-3\widetilde{\widetilde{T}}_2-\widetilde{\widetilde{T}}_3-\widetilde{\widetilde{T}}_4-3\widetilde{\widetilde{T}}_5+4\widetilde{\widetilde{T}}_6-7\widetilde{\widetilde{T}}_7-7\widetilde{\widetilde{T}}_8)
\nonumber\\
&&{}-\frac{8x_3^2m_P^4}{(x_3P-q)^6}(-\widetilde{\widetilde{V}}_1+\widetilde{\widetilde{V}}_2+\widetilde{\widetilde{V}}_3+\widetilde{\widetilde{V}}_4+\widetilde{\widetilde{V}}_5-\widetilde{\widetilde{V}}_6)
\nonumber\\
&&{}-\frac{6m_P^2Q^2}{(x_3P-q)^6}(-\widetilde{\widetilde{T}}_2+\widetilde{\widetilde{T}}_3+\widetilde{\widetilde{T}}_4-\widetilde{\widetilde{T}}_5-\widetilde{\widetilde{T}}_7-\widetilde{\widetilde{T}}_8)\Bigg\},
\end{eqnarray}
\bea{J}
{\cal J}^{\rm QCD}&=&(e_d-e_u)\int\limits_0^1\! dx_3 \Bigg\{
\frac{x_3m_P}{(x_3P-q)^2} \int\limits_0^{1-x_3}\! dx_1 (V_3-T_1)(x_i)
+\frac{m_P}{(x_3P-q)^2}(-\widetilde{V}_1+\widetilde{V}_2+\widetilde{V}_3)
\nonumber\\
&&{}-\frac{x_3m_P^3}{(x_3P-q)^4}T_1^{M(d)}
+\frac{m_P Q^2}{(x_3P-q)^4}
(3\widetilde{T}_1-3\widetilde{T}_3-3\widetilde{T}_7-\widetilde{V}_1+\widetilde{V}_2+\widetilde{V}_3)
\nonumber\\
&&{}+\frac{x_3m_P^3}{(x_3P-q)^4}(-\widetilde{\widetilde{T}}_1+\widetilde{\widetilde{T}}_2+\widetilde{\widetilde{T}}_5-\widetilde{\widetilde{T}}_6+2\widetilde{\widetilde{T}}_7+2\widetilde{\widetilde{T}}_8)
\nonumber\\
&&{}+\frac{2x_3m_P^3}{(x_3P-q)^4}(\widetilde{\widetilde{V}}_1-\widetilde{\widetilde{V}}_2-\widetilde{\widetilde{V}}_3-\widetilde{\widetilde{V}}_4-\widetilde{\widetilde{V}}_5+\widetilde{\widetilde{V}}_6)
\nonumber\\
&&{}+\frac{x_3^2m_P^3}{(x_3P-q)^4}(\widetilde{T}_1-\widetilde{T}_2+\widetilde{T}_3-\widetilde{T}_5-\widetilde{T}_7-2\widetilde{T}_8-\widetilde{V}_3+\widetilde{V}_4)
\nonumber\\
&&{}+\frac{2x_3^3m_P^5}{(x_3P-q)^6}(\widetilde{\widetilde{T}}_2-\widetilde{\widetilde{T}}_3-\widetilde{\widetilde{T}}_4+\widetilde{\widetilde{T}}_5+\widetilde{\widetilde{T}}_7+\widetilde{\widetilde{T}}_8)
\nonumber\\
&&{}+\frac{x_3m_P^3Q^2}{(x_3P-q)^6}(-8\widetilde{\widetilde{T}}_1+2\widetilde{\widetilde{T}}_2+6\widetilde{\widetilde{T}}_3+6\widetilde{\widetilde{T}}_4+2\widetilde{\widetilde{T}}_5-8\widetilde{\widetilde{T}}_6+10\widetilde{\widetilde{T}}_7+10\widetilde{\widetilde{T}}_8)
\nonumber\\
&&{}+\frac{8x_3m_P^3Q^2}{(x_3P-q)^6}(\widetilde{\widetilde{V}}_1-\widetilde{\widetilde{V}}_2-\widetilde{\widetilde{V}}_3-\widetilde{\widetilde{V}}_4-\widetilde{\widetilde{V}}_5+\widetilde{\widetilde{V}}_6)\Bigg\}.
\end{eqnarray}
The Borel transformation and the continuum subtraction
are performed by using the following substitution rules:
\bea{borel0}
\int \dd x \frac{\varrho(x) }{(q-x P)^2} &=&
- \int_0^1 \frac{\dd x }{x} \frac{\varrho(x)}{(s - {P'}^2)}
\nonumber \\ &\to&
- \int_{x_0}^1 \frac{\dd x}{x} \varrho(x)
\exp\left( - \frac{\bar x Q^2}{x M_B^2} - \frac{\bar x m_\Delta^2}{M_B^2}\right)\, ,
\nonumber\\
\int \dd x \frac{\varrho(x) }{(q-x P)^4} &=&
\int_0^1 \frac{\dd x }{x^2} \frac{\varrho(x)}{(s - {P'}^2)^2}
\nonumber \\ &\to&
\frac{1}{M_B^2} \int_{x_0}^1 \frac{\dd x}{x^2} \varrho(x)
\exp\left( - \frac{\bar x Q^2}{x M_B^2} - \frac{\bar x m_\Delta^2}{M_B^2}\right)
+
\frac{\varrho(x_0)\,e^{-s_0 /M_B^2} }{Q^2 + x_0^2 m_\Delta^2} \, ,
\nonumber\\
\int \dd x \frac{\varrho(x) }{(q-x P)^6} &=&
- \int_0^1 \frac{\dd x }{x^3} \frac{\varrho(x)}{(s - {P'}^2 )^3}
\nonumber \\ &\to & -
\frac{1}{2 M_B^4} \int_{x_0}^1 \frac{\dd x}{x^3} \varrho(x)
\exp\left( - \frac{\bar x Q^2}{x M_B^2} - \frac{\bar x m_\Delta^2}{M_B^2}\right)
- \frac12
\frac{\varrho(x_0)\,e^{-s_0 /M_B^2} }{x_0\left(Q^2 + x_0^2 m_\Delta^2\right) M_B^2}
\nonumber \\ &&
+ \frac12 \frac{x_0^2}{Q^2 + x_0^2 m_\Delta^2} \left[\frac{d}{dx_0}
\frac{\varrho(x_0)}{x_0\left(Q^2 + x_0^2 m_\Delta^2\right)} \right]
\,e^{-s_0 /M_B^2} \,,
\end{eqnarray}
where $M_B$ is the Borel parameter,
$s = \frac{1-x}{x} Q^2 + (1-x) m_\Delta^2$ and
$x_0$ is the solution of the corresponding quadratic equation for $s = s_0$:
\bea{x0}
x_0 &=&\bigg[ \sqrt{(Q^2+s_0-m_\Delta^2)^2+ 4 m_\Delta^2 Q^2}-(Q^2+s_0-m_\Delta^2)\bigg]
/(2m_\Delta^2)\,.
\end{eqnarray}
The contributions $\sim e^{-s_0 /M_B^2}$ in \Gl{borel0}
correspond to the ``surface terms'' arising from successive
partial integrations to reduce the power in the denominators
$(q - x P)^{2N} = (s - {P'}^2 )^{2N} (-x)^{2N}$
with $N > 1$ to the usual dispersion representation with the denominator
$\sim (s - {P'}^2 )$. Without continuum subtraction, i.e., in the
limit $s_0 \to \infty$, these terms vanish.
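The threshold $x_0$ in Eq.~(\ref{x0}) is easy to cross-check numerically (this is not part of the paper's analysis): it should solve $s(x_0)=s_0$ with $s=\frac{1-x}{x}Q^2+(1-x)m_\Delta^2$, and approach $1-s_0/Q^2$ at large $Q^2$. A Python sketch, using the values $s_0=3$~GeV$^2$ and $m_\Delta=1.23$~GeV quoted in the text:

```python
import math

M_DELTA2 = 1.23 ** 2   # GeV^2, Delta mass squared (value used in the text)
S0 = 3.0               # GeV^2, continuum threshold (value used in the text)

def s_of_x(x, Q2):
    # duality variable s = (1-x)/x Q^2 + (1-x) m_Delta^2
    return (1.0 - x) / x * Q2 + (1.0 - x) * M_DELTA2

def x0(Q2):
    # Eq. (x0): the root of s(x) = s0 lying in the interval 0 < x < 1
    d = Q2 + S0 - M_DELTA2
    return (math.sqrt(d * d + 4.0 * M_DELTA2 * Q2) - d) / (2.0 * M_DELTA2)

for Q2 in (1.0, 5.0, 10.0):
    print(Q2, x0(Q2), s_of_x(x0(Q2), Q2))   # s(x0) reproduces s0
```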
In addition, in the hadronic representation for the same correlation
functions (\ref{lhs}) one has to make the substitution
\beq{borel5}
\frac{1}{m_\Delta^2-P'^2} \to e^{-m_\Delta^2/M_B^2}.
\end{equation}
\setcounter{equation}{0} \section{Light-cone Sum Rules}
Our general strategy is as follows.
At the first step, we analyze the two light-cone sum rules
that are obtained from the combinations of the correlation functions
that are free from contributions of
negative parity spin-1/2 resonances, cf. (\ref{free12}):
\bea{freeSR}
{\mathbb B}\Big[{\cal G}^{\rm QCD} - {\cal J}^{\rm QCD} + Q^2 {\cal B}^{\rm QCD}
\Big](M_B^2) &=&
- \frac{\lambda_\Delta e^{-m_\Delta^2/M_B^2}}{(2\pi)^2} Q^2 G_2(Q^2)\,,
\\
{\mathbb B}\Big[{\cal I}^{\rm QCD} + Q^2\,{\cal H}^{\rm QCD} - Q^2\,{\cal A}^{\rm QCD}
\Big](M_B^2) &=&
- \frac{\lambda_\Delta e^{-m_\Delta^2/M_B^2}}{(2\pi)^2}Q^2\big[2 G_1(Q^2)+(m_\Delta-m_P)G_2(Q^2)\big].
\nonumber
\end{eqnarray}
Here ${\mathbb B}\big[\ldots\big](M_B^2)$ stands for the Borel transform with respect to $P'^2$.
{}From these sum rules we extract $G_1$ and $G_2$ form factors and also $G_M-G_E\sim G_1$.
At the second step, we determine the form factor $G_3$ from the leading twist
sum rule
\beq{B-SR}
{\mathbb B}\Big[{\cal B}^{\rm QCD}\Big](M_B^2) =
- \frac{\lambda_\Delta e^{-m_\Delta^2/M_B^2}}{(2\pi)^2M_B^2}
\frac{1}{3m_{\Delta}^{2}}
\Big[-2m_{P}G_{1}+[m_{\Delta}(m_{\Delta}-m_{P})-Q^{2}]G_{2}+2Q^{2}G_{3}\Big].
\end{equation}
The rationale for choosing this structure is that in the sum rule for ${\cal A}$ the contribution
of interest is multiplied by a small factor $m_\Delta-m_P$, cf.~(\ref{lhs}).
{}Finally, we rewrite our results in terms of $G_M$, $G_E$ and $G_C$.
\subsection{Asymptotic expansion}
As an example, consider the contribution of the leading-twist nucleon distribution amplitudes to the LCSR
for $G_2(Q^2)$, the first equation in (\ref{freeSR}). Putting everything together, one obtains to this
accuracy
\bea{example1}
{\cal G}^{\rm QCD} - {\cal J}^{\rm QCD} + Q^2 {\cal B}^{\rm QCD} &=&
(e_d-e_u)Q^2\int\limits_0^1\! dx_3 \Bigg\{
-\frac{4m_P }{(x_3P-q)^4}\widetilde{T}_1
+\frac{4x_3m_P }{(x_3P-q)^4}(\widetilde{T}_1-\widetilde{V}_1)
\nonumber\\&&{}
+\frac{x_3\bar x_3 m_P^3}{(x_3P-q)^6}(\widetilde{\widetilde{T}}_1-\widetilde{\widetilde{V}}_1)
\Bigg\}
+{\cal O}(\mbox{\rm twist-4})\,.
\end{eqnarray}
The main effect of the continuum subtraction and/or Borel transformation
is that the integration over the momentum fraction $x_3$
gets restricted to a narrow interval $1 > x_3 > x_0$ with $1-x_0 =s_0/Q^2 \ll 1$. As a result, power
counting of $1-x_3$ factors in the integrand translates into power counting in $1/Q^2$.
The behavior of the nucleon distribution amplitudes close to the end point $x_3\to 1$ is governed by
conformal symmetry \cite{Braun:2003rp}. To the next-to-leading order accuracy in the expansion over the
conformal spin one obtains~\cite{Chernyak:1984bm,Braun:2000kw}
\bea{conf1}
V_1(x_1,x_2,x_3) &=& 120 f_N x_1 x_2 x_3 \Big[1+\frac72(1-3V_1^d)(1-3x_3)+\ldots\Big],
\nonumber\\
T_1(x_1,x_2,x_3) &=& 120 f_N x_1 x_2 x_3 \Big[1+ \frac74(3 A_1^u+3V_1^d-1)(1-3x_3)+\ldots\Big].
\end{eqnarray}
Here $A_1^u$ and $V_1^d$ are scale-dependent parameters that characterize the deviation of the
distribution amplitude from its asymptotic form at $Q^2\to\infty$. For the asymptotic distribution
amplitude $V_1^d=1/3$ and $A_1^u =0$; the Chernyak-Zhitnitsky (CZ) model \cite{Chernyak:1984bm} corresponds to
$V_1^d=0.23$ and $A_1^u =0.38$ at a low scale of a few hundred MeV.
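Note that both models preserve the leading-twist normalization: the P-wave term in Eq.~(\ref{conf1}) is proportional to $(1-3x_3)$ and integrates to zero over the simplex, so the integral of $V_1$ and $T_1$ equals one (with $f_N$ factored out) for any $V_1^d$, $A_1^u$. A quick illustrative Python check (not from the paper), using the CZ values quoted above:

```python
def simplex_int(f, n=400):
    """int_0^1 dx3 int_0^{1-x3} dx1 f(x1, 1-x1-x3, x3), midpoint rule."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x3 = (i + 0.5) * h
        m = max(1, int(n * (1.0 - x3)))
        h1 = (1.0 - x3) / m
        total += sum(f((j + 0.5) * h1, 1.0 - (j + 0.5) * h1 - x3, x3)
                     for j in range(m)) * h1 * h
    return total

V1D, A1U = 0.23, 0.38   # CZ-model values quoted in the text

def V1(x1, x2, x3):
    # Eq. (conf1), f_N factored out
    return 120.0 * x1 * x2 * x3 * (1.0 + 3.5 * (1.0 - 3.0 * V1D) * (1.0 - 3.0 * x3))

def T1(x1, x2, x3):
    return 120.0 * x1 * x2 * x3 * (1.0 + 1.75 * (3.0 * A1U + 3.0 * V1D - 1.0) * (1.0 - 3.0 * x3))

print(simplex_int(V1), simplex_int(T1))   # both close to 1
```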
A simple calculation shows that in the end-point region $\widetilde{V}_1\sim \widetilde{T}_1 \sim (1-x_3)^4$
and $\widetilde{\widetilde{V}}_1\sim \widetilde{\widetilde{T}}_1\sim (1-x_3)^5$.
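This end-point behavior can be verified numerically for the asymptotic amplitude: halving $1-x_3$ should reduce $\widetilde V_1$ by roughly a factor $2^4=16$, up to the subleading $(1-x_3)^5$ term. An illustrative Python sketch (asymptotic $V_1$ with $f_N=1$; not part of the paper):

```python
def v1_asym(x1, x2, x3):
    return 120.0 * x1 * x2 * x3   # asymptotic DA, f_N factored out

def tilde(da, x3, n=400):
    # Eq. (tildefunction), evaluated by nested midpoint rules
    h = (x3 - 1.0) / n            # signed step: integration runs from 1 down to x3
    total = 0.0
    for k in range(n):
        x3p = 1.0 + (k + 0.5) * h
        h1 = (1.0 - x3p) / n
        total += sum(da((j + 0.5) * h1, 1.0 - (j + 0.5) * h1 - x3p, x3p)
                     for j in range(n)) * h1 * h
    return total

ratio = tilde(v1_asym, 0.9) / tilde(v1_asym, 0.95)
print(ratio)   # ~15.3, approaching 2**4 = 16 as x3 -> 1
```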
After some algebra we obtain the contribution of leading-twist nucleon distribution amplitudes to
the light-cone sum rule (\ref{freeSR}) for $G_2(Q^2)$ in the $Q^2\to \infty$ limit:
\bea{example2}
\frac{\lambda_\Delta e^{-m_\Delta^2/M_B^2}}{(2\pi)^2} G^{\rm tw-3}_2(Q^2) &=&
\frac{4(e_u-e_d)m_P}{Q^4}\int\limits_0^{s_0}ds\, e^{-s/M_B^2}
\int\limits_0^{s/Q^2} dx_1\, V_1(x_1,s/Q^2-x_1,1-s/Q^2)
\nonumber\\
&=&\frac{80 f_N m_P}{Q^{10}}\big[1+7(3V_1^d-1)\big]\int\limits_0^{s_0} ds\, s^3 \, e^{-s/M_B^2}
+{\cal O}\left(\frac{1}{Q^{12}}\right).
\end{eqnarray}
The term $\sim (3V_1^d-1)$ corresponds to the contribution of the subleading conformal spin,
cf.~(\ref{conf1}). For the CZ-model this correction is very large,
$\big[1+7(3V_1^d-1)\big]_{\rm CZ} \simeq -1.17$.
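The quoted number is a one-line arithmetic check; in Python (the $V_1^d$ values are those given above):

```python
def conformal_factor(v1d):
    # the bracket [1 + 7 (3 V1^d - 1)] from Eq. (example2)
    return 1.0 + 7.0 * (3.0 * v1d - 1.0)

asym = conformal_factor(1.0 / 3.0)   # asymptotic DA: V1^d = 1/3
cz = conformal_factor(0.23)          # simplified CZ model: V1^d = 0.23
print(asym, cz)                      # 1.0 and about -1.17
```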
It turns out, however, that the contribution of the leading-twist distribution amplitudes
is only subleading for large $Q^2$, and the true large-$Q^2$ asymptotics of the sum rules
is determined by the contribution of higher-twist amplitudes, in particular those corresponding to different
helicity structures (compared to leading twist). Using full expressions, i.e.
taking into account the terms $\sim V_2\ldots V_6$ and $\sim T_2\ldots T_6$,
we obtain
\bea{asymG2}
\frac{\lambda_\Delta e^{-m_\Delta^2/M_B^2}}{(2\pi)^2} G_2(Q^2) &=&
\frac{8 m_P}{Q^8}\big[5 f_N+ 3 \lambda_1\big]\int\limits_0^{s_0} ds\, s^2 \, e^{-s/M_B^2}
+{\cal O}\left(\frac{1}{Q^{10}}\right),
\end{eqnarray}
\bea{asymG12}
\frac{\lambda_\Delta e^{-m_\Delta^2/M_B^2}}{(2\pi)^2}\big[2 G_1(Q^2)+(m_\Delta-m_P)G_2(Q^2)\big]
&=&
\frac{8}{Q^8}\Bigg\{
10 f_N \int\limits_0^{s_0} ds\, s^3 \, e^{-s/M_B^2}
\nonumber\\&&
\hspace*{-4.2cm}{}- m_P^2 \Big[\frac{89}{3}f_N+ 7 \lambda_1\Big] \int\limits_0^{s_0} ds\, s^2 \, e^{-s/M_B^2}\Bigg\}
+{\cal O}\left(\frac{1}{Q^{10}}\right),
\end{eqnarray}
\bea{asymG123}
\frac{\lambda_\Delta e^{-m_\Delta^2/M_B^2}}{(2\pi)^2}\big[2G_3(Q^2) - G_2(Q^2)\big]
&=&
\frac{8m_\Delta^2 m_P}{Q^{10}}\Bigg\{
\big[75 f_N +9 \lambda_1\big]\int\limits_0^{s_0} ds\, s^2 \, e^{-s/M_B^2}
\nonumber\\
&& \hspace*{-2.2cm}{}+
m_P^2 \Big[34 f_N+ \frac{24}{5} \lambda_1\Big] \int\limits_0^{s_0} ds\, s \, e^{-s/M_B^2}\Bigg\}
+{\cal O}\left(\frac{1}{Q^{12}}\right),
\end{eqnarray}
from the three sum rules in Eqs.~(\ref{freeSR}), (\ref{B-SR}), respectively.
{}For simplicity, only the contributions of asymptotic distribution amplitudes are shown, see
Appendix~B.
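The moments $\int_0^{s_0} ds\, s^n e^{-s/M_B^2}$ entering Eqs.~(\ref{asymG2})--(\ref{asymG123}) have a closed form in terms of the lower incomplete gamma function, which provides a convenient cross-check of any numerical implementation. An illustrative Python sketch (not from the paper; $s_0=3$~GeV$^2$ and $M_B^2=2$~GeV$^2$ are the typical values used in the text):

```python
import math

S0, MB2 = 3.0, 2.0   # GeV^2: continuum threshold and Borel parameter

def borel_moment(n, steps=50000):
    """int_0^{s0} ds s^n exp(-s/MB^2), midpoint rule."""
    h = S0 / steps
    return sum(((k + 0.5) * h) ** n * math.exp(-(k + 0.5) * h / MB2)
               for k in range(steps)) * h

def borel_moment_exact(n):
    # closed form via the lower incomplete gamma function (integer n)
    z = S0 / MB2
    tail = sum(z ** k / math.factorial(k) for k in range(n + 1))
    return MB2 ** (n + 1) * math.factorial(n) * (1.0 - math.exp(-z) * tail)

for n in (1, 2, 3):
    print(n, borel_moment(n), borel_moment_exact(n))
```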
To avoid misunderstanding, note that these expressions correspond to the ``soft'' contribution
to the form factors that does not involve any hard gluon exchanges and corresponds, loosely speaking,
to the first term in the expansion in Eq.~(\ref{schema}). In order to observe the true asymptotic behavior
one has to calculate radiative corrections to the sum rules, cf. \cite{Braun:1999uj}.
In this way, the hard gluon exchange contribution considered in \cite{Carlson:1988gt} appears to be
a part of the two-loop ${\cal O}(\alpha^2_s)$ correction.
The corresponding calculation goes beyond the scope of the present work.
In spite of being enhanced asymptotically
by powers of the momentum transfer, the radiative corrections are accompanied
by increasing powers of the (small) QCD coupling at a scale at least that of the Borel parameter, and
we expect that for moderate momentum transfers in the range $1< Q^2 < 10$~GeV$^2$ their contribution is
numerically less important.
\begin{figure}[ht]
\centerline{\epsfysize6.5cm\epsffile{Plotx0.eps}}
\caption[]{\small
The average value $\langle x\rangle$ of the momentum fraction $x_3$ in the sum rules
(\ref{freeSR}), (\ref{B-SR}) as a function of the momentum transfer $Q^2$.
The lower and the upper solid curves correspond to the first and the second sum rule
in (\ref{freeSR}), respectively. The dashed curve corresponds to (\ref{B-SR}).
The value of the threshold $x_0$ (\ref{x0}) is plotted by dots for comparison.}
\label{fig:x0}
\end{figure}
We conclude that the leading-order LCSR calculation predicts a universal $1/Q^8$ falloff of soft contributions
to all the three form factors $G_1$, $G_2$, $G_3$, and to this accuracy $G_2 = 2G_3$.
This translates to the asymptotic behavior of the soft terms
$G_M \sim 1/Q^6$, $G_E \sim 1/Q^6$ and $G_Q \sim 1/Q^8$, for the magnetic, electric and quadrupole form factors
defined in Eq.~(\ref{MEQ}), respectively.
In agreement with the common wisdom, soft contributions in the light-cone sum rules
arise from the integration regions where the quark interacting with
the virtual photon carries almost all of the hadron momentum, i.e., the corresponding momentum fraction $x\to 1$.
To illustrate this feature, we have plotted in Fig.~\ref{fig:x0} the average value $\langle x\rangle$
of the momentum fraction $x_3$ in the integrals in the sum rules (\ref{freeSR}), (\ref{B-SR}).
The main effect here is that
the integration region in the momentum fraction gets restricted
to a narrow interval $x_0 < x < 1$ where $x_0$ is given by Eq.~(\ref{x0}). For asymptotically large $Q^2$
one finds $x_0\simeq 1-s_0/Q^2$ and the integration region shrinks to the end-point. For realistic values
of $Q^2$ in the range $1-10$~GeV$^2$ the average
$\langle x\rangle$ grows slowly and one finds very similar values
for all the three sum rules in question. [The irregular behavior that is seen for the sum rule
(\ref{B-SR}) around $Q^2\sim 2$~GeV$^2$ is due to accidental vanishing of the denominator involved
in taking the average.]
\subsection{Numerical analysis}
The numerical results presented below are obtained using two models of the nucleon distribution
amplitudes \cite{Braun:2000kw}. The first model corresponds to asymptotic distribution amplitudes
of all twists. The corresponding expressions are collected in Appendix~B. The second model
corresponds to taking into account the corrections to the asymptotic distribution amplitudes
that are due to the next-to-leading conformal spin (``P-waves'') with parameters estimated using
QCD sum rules. For the leading twist distribution amplitude this choice corresponds
to a simplified version of the Chernyak-Zhitnitsky (CZ) model:
we have truncated the original CZ expressions \cite{Chernyak:1984bm} leaving out contributions
of the next-to-next-to-leading conformal spin operators (``D-waves'') in order to simplify the calculation
of nucleon mass corrections, see \cite{Braun:2000kw} for details. The explicit expressions of the
distribution amplitudes including the P-wave corrections are rather cumbersome;
they can be found in \cite{Braun:2000kw,Lenz05}.
In addition, for the numerical evaluation of the sum rules we have to specify
the values of the $\Delta$-coupling $\lambda_\Delta$, the continuum threshold $s_0$ and
the Borel parameter $M_B^2$. The usual strategy is to determine all of them from the simplest
QCD sum rule involving the $\Delta$ resonance \cite{Belyaev:1982sa}, see Appendix C.
We obtain from this sum rule $s_0\simeq3$~GeV$^2$ and
$\lambda_\Delta \simeq 2.0$~GeV$^3$. Both numbers are somewhat lower than those obtained
in \cite{Belyaev:1982sa} because we use the experimental input value of
the mass of the $\Delta$-resonance $m_\Delta=1.23$~GeV instead of trying to find it from the sum rule.
\begin{figure}[t]
\centerline{\epsfysize6.5cm\epsffile{Plotstability.eps}}
\caption[]{\small
The Borel parameter dependence of the ratio $G_M/3 G_{\rm dipole}$ where $G_{\rm dipole} = 1/(1+Q^2/0.71)^2$.
The magnetic form factor $G_M$ is defined in (\ref{MEQ}). The solid and the dashed curves correspond
to the calculation using $M_B^2 = 2.5$~GeV$^2$ and $M_B^2 = 1.5$~GeV$^2$, respectively.
In both cases the asymptotic nucleon distribution amplitudes of the
leading and higher twists are used.}
\label{fig:stability}
\end{figure}
A suitable range of Borel parameters for the LCSR can be obtained as follows. On the one hand,
$M_B^2$ has to be small enough in order to guarantee sufficient suppression of higher mass resonances
and the continuum in the hadronic representation of the correlation function. This is the same criterion
that is applied to the Belyaev-Ioffe sum rule (\ref{BI-SR}) for the coupling $\lambda_\Delta$; hence we
would want to take $M_B^2$ as close as possible to 1.5~GeV$^2$, which is the minimum value at
which the stability in (\ref{BI-SR}) sets in.
On the other hand, the Borel parameter in the LCSRs has to be large enough to guarantee convergence of the twist
expansion in the QCD calculation.
Note that for a fixed value of the momentum fraction $x$ the light-cone expansion of the relevant correlation
functions goes generically in powers of $1/(q-xP)^2$ which translates to the expansion in powers of
$1/(x M^2_B)$ after the Borel transformation. The true expansion parameter is therefore
of order $\sim 1/(\langle x\rangle M^2_B)$ where $\langle x\rangle$ is
the average value of the momentum fraction in the corresponding integrals, rather than $1/M_B^2$ itself.
{}In the range $1 < Q^2 < 10$~GeV$^2$ we find $0.4 < \langle x\rangle < 0.8$ (see Fig.~\ref{fig:x0}) so
that preferred values of $M_B^2$ in LCSRs appear from this side to be a factor 1.2--2.5 larger compared
to the two-point sum rule Borel parameter $M^2$.
Note that for fixed $M^2$ there is effectively a bias towards using larger values of $M_B^2$
with decreasing $Q^2$. All in all, it appears that the interval
$1.5 < M^2_B < 2.5$~GeV$^2$ presents a reasonable choice. The stability of the LCSRs
in this range turns out to be better than 10--15\%, see Fig.~\ref{fig:stability}
for an example. In what follows we use the fixed value of the Borel parameter $M^2_B=2$~GeV$^2$
in the center of this fiducial interval.
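The rescaling quoted above follows directly from the observation that the true expansion parameter is $1/(\langle x\rangle M_B^2)$: matching it to $1/M^2$ of the two-point sum rule suggests $M_B^2 \simeq M^2/\langle x\rangle$. A minimal numerical sketch (the function name is ours, for illustration only):

```python
def borel_rescaling(x_avg):
    """Factor by which M_B^2 in the LCSR exceeds the two-point Borel
    parameter M^2, if the effective expansion parameter 1/(<x> M_B^2)
    is matched to 1/M^2 (illustrative; the function name is ours)."""
    return 1.0 / x_avg

# With 0.4 < <x> < 0.8 in the range 1 < Q^2 < 10 GeV^2, this reproduces
# the factor 1.2--2.5 quoted in the text.
print(borel_rescaling(0.8), borel_rescaling(0.4))  # 1.25 2.5
```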
\begin{figure}[t]
\centerline{\epsfysize7cm\epsffile{GMashoverGdipole.eps}}
\caption[]{\small
The ratio $G_M^{\rm Ash}/3 G_{\rm dipole}$ where $G_{\rm dipole} = 1/(1+Q^2/0.71)^2$.
The form factor $G_M^{\rm Ash}$ is defined in (\ref{GT}). The solid and the dashed curves correspond
to the calculation using the asymptotic and the
QCD sum rules motivated nucleon distribution amplitudes of the leading and higher twists, respectively.
The data points are from \cite{Stoler:1993yk} (blue squares), \cite{Stuart:1996zs} (red triangles)
and \cite{Bartel:1968tw,Alder:1972di,Stein:1975yy,Foster:1983kn,Frolov:1998pw} (red squares).}
\label{fig:GMAsh}
\end{figure}
\begin{figure}[t]
\centerline{\epsfysize7cm\epsffile{GMoverGdipole.eps}}
\caption[]{\small
The ratio $G_M/3 G_{\rm dipole}$ where $G_{\rm dipole} = 1/(1+Q^2/0.71)^2$.
The magnetic form factor $G_M$ is defined in (\ref{MEQ}).
The identification of the curves and data points is the same as in Fig.~\ref{fig:GMAsh}.}
\label{fig:GM}
\end{figure}
\begin{figure}[ht]
\centerline{\epsfysize7cm\epsffile{GToverGdipole.eps}}
\caption[]{\small
The ratio $G_T/3 G_{\rm dipole}$ where $G_{\rm dipole} = 1/(1+Q^2/0.71)^2$.
The form factor $G_T$ is defined in (\ref{GT}).
The identification of the curves and data points is the same as in Fig.~\ref{fig:GMAsh}.}
\label{fig:GT}
\end{figure}
The results for the magnetic transition form factor are shown in Fig.~\ref{fig:GMAsh}, Fig.~\ref{fig:GM}
and Fig.~\ref{fig:GT} using three different ways to present the data (in terms of $G_M^{\rm Ash}$, $G_M$ and
$G_T$) accepted in the experimental papers. The difference between $G_M^{\rm Ash}$ and
$G_M$ is in a kinematical factor only, while $G_T$ includes in addition the contribution of the electric
form factor $G_E$ which is negligible.
We observe that the calculation with asymptotic distribution amplitudes (solid curves)
is much closer to the data so that large deviations from the asymptotic shape of distribution amplitudes
suggested by QCD sum rule calculations are
disfavored. A similar conclusion was reached
in Refs.~\cite{Braun:2001tj,Lenz:2003tq} from the LCSR analysis of
the electromagnetic nucleon form factors.
It turns out that the sum rules for $G_M$ are dominated by contributions of subleading twist-4.
To illustrate this issue, we have plotted in Fig.~\ref{fig:decompose} separate contributions to
the sum rule for the ratio $G_M/(3G_{\rm dipole})$ that come from the nonperturbative matrix
element of twist three $\sim f_N$ (dashed curves) and the two existing
matrix elements of twist four: terms in $\lambda_1$ (dotted curves) and $\lambda_2$ (dash-dotted curves),
cf. Appendix B. The full result is given by the sum of all three contributions and is shown by the solid curves.
We see that contributions $\sim \lambda_2$ are numerically very small, the contributions $\sim \lambda_1$
are the dominant ones and remain roughly constant in the considered $Q^2$ range, whereas the contributions
$\sim f_N$ have a stronger $Q^2$ dependence and enter with an opposite sign. QCD sum rule motivated
corrections to the asymptotic distribution amplitudes generally tend to increase both leading- and higher-twist
contributions, so that the cancellation becomes more pronounced.
Note that the uncertainties in the
nonperturbative parameters $f_N,\lambda_1,\lambda_2$ are presently at the level of 20--30\%. Larger leading-twist contributions
$\sim f_N$ would yield a steeper falloff of the form factor which is favored by the data. The same observation
was made in \cite{Braun:2001tj} for the case of the electromagnetic form factors of the nucleon.
Note that the dominant contributions $\sim \lambda_1$ correspond to the operators that include a ``minus'' component
of one of the valence quark fields. They do not have a simple partonic interpretation in terms of quark parton
amplitudes at small transverse separation but rather involve the orbital angular momentum, see
\cite{Ji:2002xn,Ji:2003yj} for the relevant formalism and discussion.
\begin{figure}[ht]
\centerline{\epsfysize5cm\epsffile{DecomposeAsym.eps},\epsfysize5cm\epsffile{DecomposeCZ.eps}}
\caption[]{\small Separate contributions to the light-cone sum rule results (solid curves) for
the ratio $G_M/3 G_{\rm dipole}$ where $G_{\rm dipole} = 1/(1+Q^2/0.71)^2$.
The terms in $f_N$, $\lambda_1$ and $\lambda_2$ are shown by the dashed, dotted and dash-dotted
curves, respectively. The results on the left panel correspond to the asymptotic distribution
amplitudes and the ones on the right panel are obtained including a CZ-like model, see text.}
\label{fig:decompose}
\end{figure}
The results for the electric form factor $G_E$ and quadrupole form factor $G_C$ are shown in Fig.~\ref{fig:GE}
and Fig.~\ref{fig:GC}, respectively. In both cases we plot the experimentally measured quantities $R_{EM}$ and
$R_{SM}$ (\ref{GT}) that are related to the form factor ratios, normalized to the
magnetic form factor $G_M$. Here, again, the asymptotic distribution amplitudes tend to give a better
description.
In the future, one should try to constrain the parameters in the distribution amplitudes by making a combined
fit to the LCSR for the electromagnetic and weak form factors of the nucleon, including $\Delta$-resonance
production etc. In order to make this program fully quantitative one first has to
calculate radiative corrections to the LCSRs, as has been done for the pion
form factor \cite{Braun:1999uj} and B-meson decays \cite{Bdecays}.
The corresponding task goes beyond the scope of this paper.
\begin{figure}[ht]
\centerline{\epsfysize7cm\epsffile{minusGEoverGM.eps}}
\caption[]{\small
The ratio $R_{\rm EM}(Q^2)= - G_E/G_M$ of the electric and the magnetic form factors,
cf.~(\ref{GT}).
The solid and the dashed curves correspond
to the calculation using the asymptotic and the
QCD sum rules motivated nucleon distribution amplitudes of the leading and higher twists, respectively.
The data points are from \cite{Joo:2001tw} (red squares) and \cite{Frolov:1998pw} (blue squares).}
\label{fig:GE}
\end{figure}
\begin{figure}[ht]
\centerline{\epsfysize7cm\epsffile{minusGCoverGM.eps}}
\caption[]{\small
The ratio $R_{SM}\sim -G_C/G_M$ of the quadrupole and the magnetic form factors,
cf.~(\ref{GT}).
The solid and the dashed curves correspond
to the calculation using the asymptotic and the
QCD sum rules motivated nucleon distribution amplitudes of the leading and higher twists, respectively.
The data points are from \cite{Joo:2001tw} (red squares) and \cite{Frolov:1998pw} (blue squares).}
\label{fig:GC}
\end{figure}
\setcounter{equation}{0} \section{Summary and conclusions}
In this paper, we have employed the light-cone QCD sum rule approach to
calculate the purely nonperturbative soft contribution to the $\gamma^* p
\to \Delta^+$ transition. The soft contribution corresponds to the
so-called Feynman mechanism of large momentum transfer,
which is suppressed asymptotically by power(s) of $1/Q^2$ compared to the
perturbative contribution of hard rescattering. We argued from the very beginning that,
due to the two hard gluon exchanges required in the pQCD contribution,
the latter is numerically suppressed by a factor of 1/100
compared to the soft term. Indeed, our results for the dominant magnetic
form factor $G_M (Q^2)$ are rather close to the experimental data in the
region above $Q^2 \sim 2$ GeV$^2$. This confirms the general wisdom that
the soft (end-point) contribution is dominant at the experimentally accessible
momentum transfers. Moreover, inspection shows that the soft contribution
is dominated by quark configurations involving a ``minus'' light-cone projection
of one of the quark field operators, which can be reinterpreted as the importance
of the orbital angular momentum (cf.~\cite{Ji:2002xn,Ji:2003yj}).
In the region of low $Q^2 < 2$~GeV$^2$ our results for the magnetic form factor
appear to be a factor of two below the data. The reason most likely lies in the
so-called ``bilocal power corrections'' that correspond to
long-distance propagation in the $Q^2$ channel.
Such terms were studied in the case of the pion
\cite{Nesterenko:1984tk} and nucleon \cite{Belyaev:1992xf}
form factors,
and they provided a sizable enhancement of form factors at low momentum transfers
as compared to the extrapolation of the large-$Q^2$ results.
Another possibility is that the interpolating current for the $\Delta$-particle
is not good enough and couples strongly to the excited states.
This can be checked by applying the light-cone QCD sum rules
to the correlation function $\langle \Delta | J \eta_N |0 \rangle$ of the
electromagnetic current and a local current $\eta_N$ with nucleon quantum numbers,
while the $\Delta$ is left explicitly in the final state $\langle \Delta |$.
To pursue such a program, one needs as a first step a systematic analysis of the distribution
amplitudes of the $\Delta$, a task which is interesting in its own right.
Our calculations are in agreement with the experimental observation
that the ratios $E2/M1$ and $C2/M1$ are small. The charge form factor $G_E$
appears to be very sensitive to the shape of the nucleon distribution amplitudes
so that the experimental data can easily be fitted by assuming moderate corrections to the asymptotic
form, of the same sign but much smaller than in the CZ model.
However,
both the $G_{E}(Q^2)$ and $G_{C}(Q^2)$ form factors in our calculations
appear as small differences between expressions dominated
by the much larger form factor $G_{M}(Q^2)$, a situation similar to that
encountered in the approach \cite{Belyaev:1995ya} based on the
analysis of the 3-point correlator. For this reason, at this stage we restrict ourselves
to a conservative statement that $G_{E}(Q^2)$ and $G_{C}(Q^2)$
are small compared to $G_{M}(Q^2)$ without insisting on a specific curve
(or even sign) for them.
The inclusion of QCD radiative corrections to the sum rules presents another important issue,
and is needed to make the theoretical studies of the $\gamma^* p \to
\Delta^+$ transition fully quantitative. Such corrections contain both
purely soft contributions and also hard gluon exchanges,
with nontrivial interplay and connections between these two types of
dynamics. For example, for the pion form factor it was found \cite{Braun:1999uj} that
there exists a partial cancellation between soft contributions and hard contributions of higher
twist. A big advantage of the light-cone sum rule technique is that it is free from
double counting: soft and hard contributions can be separated rigorously.
In the present case, such a separation becomes necessary starting at two-loop
$O(\alpha_s^2)$ corrections to the sum rules.
To summarize, we believe that the light-cone sum rule approach currently offers
the best compromise between theoretical rigor and the applicability to present and planned
experiments involving elastic and transition form factors for baryons. This approach
is rigorous as far as the separation of hard and soft dynamics is concerned, and provides
one with a useful tool for the study of the transition region between hard perturbative and soft
nonperturbative QCD dynamics. One goal of such studies is to determine nucleon
distribution amplitudes from the data on form factors, similarly to how parton distributions
are extracted from the measured deep inelastic structure functions. Our work presents
a step in this direction.

{\bf Acknowledgements}

We would like to thank Paul Stoler and Sabit Kamalov for providing us with detailed tables of the
experimental data. The work of A.R. was supported by the US
Department of Energy contract
DE-AC05-84ER40150 under which the Southeastern
Universities Research Association (SURA)
operates the Thomas Jefferson Accelerator Facility,
and by the Alexander von Humboldt Foundation.
\section{Introduction}
\label{sec:introduction}
Structure functions in deep-inelastic scattering (DIS) are among the most
extensively measured observables. Today the combined data from fixed-target
experiments and the HERA collider spans about four orders of magnitude in both
Bjorken-$x$ and the scale $Q^2 = -q^2$ given by the momentum $q$ of the
exchanged electroweak gauge boson \cite{Yao:2006px}.
In this article we focus on the $W\!$-exchange charged-current (CC) case, see
Refs.~\cite {Yang:2000ju,Tzanov:2005kr,Onengut:2005kv} and
\cite{Adloff:2003uh,Chekanov:2003vw,Aktas:2005ju,Chekanov:2006da} for recent
measurements in neutrino DIS and at HERA.
With six structure functions, $F_2^{\,W^\pm}\!$, $F_3^{\, W^\pm}$ and
$F_L^{\,W^\pm}\!$, this case has a far richer structure than, for example,
electromagnetic DIS with only two independent observables, $F_{\:\!2}$
and~$F_L$.
More detailed measurements are required to fully exploit the resulting
potential, for instance at a future neutrino factory, see Ref.~\cite
{Mangano:2001mj}, and the LHeC, the proposed high-luminosity electron-proton
collider at the LHC~\cite{Dainton:2006wd}.
Already now, however, charged-current DIS provides important information on the
parton structure of the proton, e.g., its flavour decomposition and the
valence-quark distributions. Moreover, present results are also sensitive to
electroweak parameters of the Standard Model such as $\sin^2 \theta_W$, see
Ref.~\cite{Zeller:2001hh}, and the space-like $W\!$-boson propagator
\cite{Aktas:2005iv}. As discussed, for example, in Refs.~\cite
{Davidson:2001ji,McFarland:2003jw,Dobrescu:2003ta,Kretzer:2003wy}, a reliable
determination of $\sin^2 \theta_W$ from neutrino DIS requires a detailed
understanding of non-perturbative and perturbative QCD effects.
The perturbative calculations for the unpolarised structure functions in DIS
have almost been completed to the next-to-next-to-leading order (NNLO) of
massless QCD. These results include the splitting functions, controlling the
scale evolution of the parton distributions, to the third order in the strong
coupling constant $\alpha_{\rm s}$~\cite{Moch:2004pa,Vogt:2004mw}, as well as the
hard-scattering coefficient functions for $F_1$, $F_{\:\!2}$ and $F_{\:\!3}$ to
second order in $\alpha_{\rm s}$~\cite{SanchezGuillen:1991iq,vanNeerven:1991nn,%
Zijlstra:1991qc,Zijlstra:1992kj,Moch:1999eb}.
For the longitudinal structure function $F_L = F_{\:\!2} - 2x F_1$ the
third-order coefficient functions are required at NNLO. So far these quantities
have been computed only for electromagnetic (photon-exchange) DIS
\cite{Moch:2004xu,Vermaseren:2005qc}.
In fact, it appears that even the second-order coefficient functions for the
charged-current $F_L$ have not been fully presented in the literature.
It is convenient to consider linear combinations of the charged-current
structure functions $F_a^{\,W^\pm}$ with simple properties under crossing, such
as $F_a^{\,\nu p \pm \bar \nu p}$ ($a = 2,\: 3,\: L$) for neutrino DIS. For all
these combinations either the even or odd moments can be calculated in
Mellin-$N$ space in the framework of the operator product expansion (OPE),
see Ref.~\cite{Buras:1980yt}.
The results for the third-order coefficient functions for the even-$N$
combinations $\, F_{2,L}^{\,\nu p + \bar\nu p}$ can be taken over from
electromagnetic DIS \cite{Moch:2004xu,Vermaseren:2005qc}. Also the coefficient
function for the odd-$N$ based quantity $\, F_3^{\,\nu p +\bar\nu p}$ is
completely known at three-loop accuracy, with the results only published via
compact parametrizations so far \cite{Vogt:2006bt}.
For the remaining combinations $\,F_{2,L}^{\,\nu p - \bar\nu p}$ and
$\,F_3^{\,\nu p - \bar\nu p\!}$, on the other hand, only the first five odd and
even integer moments of the respective coefficient functions have been
calculated to third order in Ref.~\cite{Moch:2007gx} following the
approach of Refs.~\cite{Larin:1994vu,Larin:1997wd,Retey:2000nq} based on the
{\sc Mincer} program \cite{Gorishnii:1989gt,Larin:1991fz}.
The complete results of Refs.~\cite{Moch:2004xu,Vermaseren:2005qc,Vogt:2006bt}
fix all even and odd moments $N$. Hence already the present knowledge is
sufficient to determine also the lowest five moments of the differences of
corresponding even-$N$ and odd-$N$ coefficient functions and to address a
theoretical conjecture \cite{Broadhurst:2004jx} for these quantities.
Furthermore these moments facilitate $x$-space approximations in the style of,
e.g., Ref.~\cite{vanNeerven:2001pe}, which are sufficient for most
phenomenological purposes, including the determination of the third-order QCD
corrections to the Paschos-Wolfenstein relation~\cite{Paschos:1973kj} used for
the extraction of $\sin^2 \theta_W$ from neutrino DIS.
The outline of this article is as follows.
In Section~\ref{sec:2-loop} we briefly specify our notations and write down the
complete second-order results $\delta\:\! c_{a}^{(2)}(x)$ for the above
coefficient-function differences. We discuss their behaviour at the end points
$x = 0$ and $x = 1$, and provide compact but accurate parametrizations for use
in numerical applications.
We then proceed, in Section~\ref{sec:3-loop}, to our new results for the five
lowest odd moments of $\delta\:\! c_{2,L}^{(3)}$ and even moments of
$\delta\:\! c_{3}^{(3)}\!$, as a byproduct deriving the third-order
coefficient-function correction to the Gottfried sum rule. These three-loop
moments are presented in a numerical form and employed to construct $x$-space
approximations valid at $x \gsim 10^{-2}$.
In Section~\ref{sec:applications} we address the numerical implications of
our results. In particular we discuss the higher-order QCD corrections to the
Paschos-Wolfenstein relation.
Our~findings are finally summarized in Section~\ref{sec:summary}. The
lengthy full expressions of the new third-order moments in terms of fractions
and the Riemann \mbox{$\zeta$-function} can be found in the Appendix.
\setcounter{equation}{0}
\section{The complete second-order results}
\label{sec:2-loop}
We define the even-odd differences of the CC coefficient functions $\,C_a\,$
for $\,a = 2,\: 3,\: L\,$ as
\begin{eqnarray}
\label{eq:cdiff}
\delta\, C_{2,L} \; =\; C_{2,L}^{\,\nu p + {\bar \nu} p}
- C_{2,L}^{\,\nu p - {\bar \nu} p} \:\: , \qquad
\delta\, C_3 \; =\; C_3^{\,\nu p - {\bar \nu} p}
- C_3^{\,\nu p + {\bar \nu} p} \:\: .
\end{eqnarray}
The signs are chosen such that the differences are always `even -- odd' in the
moments $\, N$ accessible by the OPE \cite{Buras:1980yt}, and it is understood
that the $d^{\:\!abc}d_{abc}$ part of $\,C_3^{\,\nu p + \bar\nu p}$
\cite{Retey:2000nq,Vogt:2006bt} is removed before the difference is formed. The
non-singlet quantities (\ref{eq:cdiff}) have an expansion in powers of $\alpha_{\rm s}$,
\begin{eqnarray}
\label{eq:cf-exp}
\delta\, C_a \; = \;
\sum_{l=2} \: a_{\rm s}^{\, l}\: \delta\:\! c_{a}^{(l)}
\end{eqnarray}
where, as throughout this and the next section, we have normalized the
expansion parameter as $a_{\rm s} = \alpha_{\rm s} /(4 \pi)$.
There are no first-order contributions to these differences, hence the sums
start at $l = 2\,$ in Eq.~(\ref{eq:cf-exp}).
All known DIS coefficient functions in massless perturbative QCD can be
expressed in terms of the harmonic polylogarithms $H_{m_1,...,\,m_w}(x)$ with
$m_j = 0,\,\pm 1$. Our notation for these functions follows Ref.~\cite
{Remiddi:1999ew} to which the reader is referred for a detailed discussion.
For $w \leq 3$ the harmonic polylogarithms can be expressed in terms of
standard polylogarithms; a complete list can be found in Appendix A of
Ref.~\cite{Moch:1999eb}. A {\sc Fortran} program for these functions up to
weight $w=4$ has been provided in Ref.~\cite{Gehrmann:2001pz}, with an
unpublished extension also covering $w=5$. In the remainder of this section
we employ the short-hand notation
\begin{equation}
\label{eq:habbr}
H_{{\footnotesize \underbrace{0,\ldots ,0}_{\scriptstyle m} },\,
\pm 1,\, {\footnotesize \underbrace{0,\ldots ,0}_{\scriptstyle n} },
\, \pm 1,\, \ldots}(x) \; = \; H_{\pm (m+1),\,\pm (n+1),\, \ldots}(x)
\end{equation}
and additionally suppress the arguments of the harmonic polylogarithms for
brevity.
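For orientation, at weight two the compressed notation reads $H_2(x) \equiv H_{0,1}(x) = \mathrm{Li}_2(x)$ and $H_{-2}(x) \equiv H_{0,-1}(x) = -\mathrm{Li}_2(-x)$. The following sketch (ours; the function names are illustrative) checks a series implementation of the dilogarithm against the known value $\mathrm{Li}_2(1/2) = \pi^2/12 - \ln^2 2/2$:

```python
import math

def li2(x, terms=200):
    """Dilogarithm Li_2(x) = sum_{k>=1} x^k / k^2, valid for |x| <= 1."""
    return sum(x**k / k**2 for k in range(1, terms + 1))

# Weight-2 harmonic polylogarithms in the compressed notation above:
H_2 = lambda x: li2(x)       # H_{0,1}(x) =  Li_2(x)
H_m2 = lambda x: -li2(-x)    # H_{0,-1}(x) = -Li_2(-x)

# Known special value: Li_2(1/2) = pi^2/12 - (ln 2)^2 / 2
print(abs(H_2(0.5) - (math.pi**2 / 12 - math.log(2)**2 / 2)) < 1e-10)  # True
```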
Exact expressions for (moments of) the coefficient functions will be given in
terms of the SU($N_c$) colour factors $\,C_A = N_c\,$ and $\,C_F = (N_c^{\,2}
-1)/(2N_c)$, while we use the QCD values $C_A = 3$ and $C_F = 4/3$ in numerical
results. All our results are presented in the $\overline{\mbox{MS}}$\ scheme for the standard
choice $\,\mu_r =\mu_f^{} = Q\,$ of the renormalization and factorization
scales.
The second-order coefficient functions $\delta\:\! c_2^{(2)}$ and
$\,\delta c_L^{(2)}$ for the even-odd differences of $F_{\:\! 2,L\,}$ read
\begin{eqnarray}
\label{eq:dc2qq2}
\delta c_{2}^{(2)}(x) &\! =\! &
{C^{}_F} \* [{C^{}_F}-{C^{}_A}/2] \* \biggl(
- {324 \over 5}
+ 112 \* (1+x)^{-1} \* \z3
+ {16 \over 5} \* x^{-1}
+ {164 \over 5} \* x
+ {144 \over 5} \* x^2
\nonumber\\
& &\mbox{}
- 40 \* \z3
+ 136 \* \z3 \* x
+ 8 \* \z2
+ 56 \* \z2 \* x
+ 96 \* \z2 \* x^2
- {144 \over 5} \* \z2 \* x^3
- 32 \* \Hh(-2,0)
\nonumber\\
& &\mbox{}
+ 96 \* \Hh(-2,0) \* (1+x)^{-1}
+ 128 \* \Hh(-2,0) \* x
- 128 \* \H(-1) \* (1+x)^{-1} \* \z2
+ 48 \* \H(-1) \* \z2
\nonumber\\
& &\mbox{}
- 144 \* \H(-1) \* \z2 \* x
+ 32 \* \Hhh(-1,-1,0)
- 128 \* \Hhh(-1,-1,0) \* (1+x)^{-1}
- 224 \* \Hhh(-1,-1,0) \* x
\nonumber\\
& &\mbox{}
+ 64 \* \Hh(-1,0)
+ {16 \over 5} \* \Hh(-1,0) \* x^{-2}
+ 64 \* \Hh(-1,0) \* x
+ 96 \* \Hh(-1,0) \* x^2
- {144 \over 5} \* \Hh(-1,0) \* x^3
\nonumber\\
& &\mbox{}
- 64 \* \Hhh(-1,0,0)
+ 160 \* \Hhh(-1,0,0) \* (1+x)^{-1}
+ 160 \* \Hhh(-1,0,0) \* x
+ 64 \* \Hh(-1,2) \* (1+x)^{-1}
\nonumber\\
& &\mbox{}
- 32 \* \Hh(-1,2)
+ 32 \* \Hh(-1,2) \* x
+ {28 \over 5} \* \H(0)
- 32 \* \H(0) \* (1+x)^{-1}
- {16 \over 5} \* \H(0) \* x^{-1}
- {292 \over 5} \* \H(0) \* x
\nonumber\\
& &\mbox{}
+ 32 \* \H(0) \* (1+x)^{-1} \* \z2
+ {144 \over 5} \* \H(0) \* x^2
- 16 \* \H(0) \* \z2
+ 16 \* \H(0) \* \z2 \* x
- 16 \* \Hh(0,0)
- 64 \* \Hh(0,0) \* x
\nonumber\\
& &\mbox{}
- 96 \* \Hh(0,0) \* x^2
+ {144 \over 5} \* \Hh(0,0) \* x^3
+ 24 \* \Hhh(0,0,0)
- 48 \* \Hhh(0,0,0) \* (1+x)^{-1}
- 24 \* \Hhh(0,0,0) \* x
\nonumber\\
& &\mbox{}
- 32 \* \H(1)
+ 32 \* \H(1) \* x
- 16 \* \H(2)
- 16 \* \H(2) \* x
+ 16 \* \H(3)
- 32 \* \H(3) \* (1+x)^{-1}
- 16 \* \H(3) \* x
\biggr)
\:\: ,
\\
\label{eq:dcLqq2}
\delta c_{L}^{(2)}(x) &\! =\! &
{C^{}_F} \* [{C^{}_F}-{C^{}_A}/2] \* \biggl(
{64 \over 5} \* x^{-1}
- {416 \over 5}
+ {256 \over 5} \* x
+ {96 \over 5} \* x^2
+ 64 \* \z3 \* x
+ 32 \* \z2 \* x
+ 64 \* \z2 \* x^2
\nonumber\\
& &\mbox{}
- {96 \over 5} \* \z2 \* x^3
+ 64 \* \Hh(-2,0) \* x
- 64 \* \H(-1) \* \z2 \* x
- 128 \* \Hhh(-1,-1,0) \* x
+ 64 \* \Hh(-1,0)
+ {64 \over 5} \* \Hh(-1,0) \* x^{-2}
\nonumber\\
& &\mbox{}
- 32 \* \Hh(-1,0) \* x^{-1}
+ 64 \* \Hh(-1,0) \* x
+ 64 \* \Hh(-1,0) \* x^2
- {96 \over 5} \* \Hh(-1,0) \* x^3
+ 64 \* \Hhh(-1,0,0) \* x
+ {32 \over 5} \* \H(0)
\nonumber\\
& &\mbox{}
- {64 \over 5} \* \H(0) \* x^{-1}
- {448 \over 5} \* \H(0) \* x
+ {96 \over 5} \* \H(0) \* x^2
- 32 \* \Hh(0,0) \* x
- 64 \* \Hh(0,0) \* x^2
+ {96 \over 5} \* \Hh(0,0) \* x^3
\biggr)
\:\: . \quad
\end{eqnarray}
The corresponding quantity $\delta\:\! c_3^{(2)}$ for the charged-current
structure functions $F_{\:\! 3}$ is given by
\begin{eqnarray}
\label{eq:dc3qq2}
\delta c_{3}^{(2)}(x) &\! =\! & \delta c_{2}^{(2)}(x) \;
- {C^{}_F} \* [{C^{}_F}-{C^{}_A}/2] \* \biggl(
- {624\over 5}
+ {16\over 5} \* x^{-1}
+ {464\over 5} \* x
+ {144\over 5} \* x^2
+ 32 \* \z3
\nonumber \\
& &\mbox{}
+ 96 \* \z3 \* x
- 16 \* \z2
+ 48 \* \z2 \* x
+ 80 \* \z2 \* x^2
- {144\over 5} \* \z2 \* x^3
+ 32 \* \Hh(-2,0)
+ 96 \* \Hh(-2,0) \* x
\nonumber\\
& &\mbox{}
- 32 \* \H(-1) \* \z2
- 96 \* \H(-1) \* \z2 \* x
- 64 \* \Hhh(-1,-1,0)
- 192 \* \Hhh(-1,-1,0) \* x
+ 64 \* \Hh(-1,0)
\nonumber\\
& &\mbox{}
+ {16\over 5} \* \Hh(-1,0) \* x^{-2}
- 16 \* \Hh(-1,0) \* x^{-1}
+ 64 \* \Hh(-1,0) \* x
+ 80 \* \Hh(-1,0) \* x^2
- {144\over 5} \* \Hh(-1,0) \* x^3
\nonumber\\
& &\mbox{}
+ 32 \* \Hh(-1,0,0)
+ 96 \* \Hh(-1,0,0) \* x
- {16\over 5} \* \H(0) \* x^{-1}
- {112\over5} \* \H(0)
- {592\over 5} \* \H(0) \* x
+ {144\over 5} \* \H(0) \* x^2
\nonumber\\
& &\mbox{}
+ 16 \* \Hh(0,0)
- 48 \* \Hh(0,0) \* x
- 80 \* \Hh(0,0) \* x^2
+ {144\over 5} \* \Hh(0,0) \* x^3
\biggr)
\:\: .
\end{eqnarray}
Expressions equivalent to Eqs.~(\ref{eq:dc2qq2}) and (\ref{eq:dc3qq2}) have
first been published in Refs.~\cite{vanNeerven:1991nn} and \cite
{Zijlstra:1992kj}, respectively, and were later confirmed in Ref.~\cite
{Moch:1999eb}. To the best of our knowledge, on the other hand, the function
$\,\delta c_L^{(2)}$ has not been documented in the literature before, see,
e.g., Ref.~\cite{Kazakov:1990fu} and references therein.
It was however calculated by the authors of Refs.~\cite
{vanNeerven:1991nn,Zijlstra:1991qc,Zijlstra:1992kj}, distributed in a
{\sc Fortran} package of the two-loop coefficient functions, and employed for
the parametrizations of Ref.~\cite{vanNeerven:1999ca}. Our expression
(\ref{eq:dcLqq2}) agrees with this unpublished result.
It is instructive to briefly consider the end-point limits of the above
results. Suppressing the ubiquitous factor $\,C_F C_{FA} \equiv
C_F [\,C_F-C_A/2]$, the small-$x$ behaviour of Eqs.~(\ref{eq:dc2qq2}) --
(\ref{eq:dc3qq2}) is
\begin{eqnarray}
\label{eq:c2-smallx}
\delta c_{2}^{(2)}(x) &\! \simeq\! & \mbox{}
- 4\, \ln^3 x \: - \;\: 8\, \ln^2 x - ( 28 - 16\,\z2 ) \ln x
\: - \: 64 \: + \;\: 8\,\z2 + 72\,\z3 \: + \: \ldots
\nonumber \\
\delta c_{3}^{(2)}(x) &\! \simeq\! & \mbox{}
- 4\, \ln^3 x - 16\, \ln^2 x + ( 12 + 16\,\z2 ) \ln x
\: + \: 44 + 24\,\z2 + 40\,\z3 \: + \: \ldots
\nonumber \\[1mm]
\delta c_{L}^{(2)}(x) &\! \simeq\! & \mbox{}
- 32\, \ln x - 48 \: + \: \ldots \:\: .
\end{eqnarray}
Thus the even-odd differences are not suppressed with respect to the $\,\nu p
+ \bar\nu p\,$ two-loop non-singlet coefficient functions for $x\rightarrow 0\, $:
the same powers of $\,\ln x\,$ enter Eqs.~(\ref{eq:c2-smallx}) and those
quantities. At large $x$, on the other hand, all three functions
$\delta\:\! c_{a}^{(2)}$ are suppressed by factors $(1-x)^2$ times logarithms,
reading
\begin{eqnarray}
\label{eq:c2-largex}
\delta c_{2}^{(2)}(x) &\! = \! & - ( 12 - 8\,\z2 )\, [1-x] \: C_F C_{FA}
\: + \: O \left( [1-x]^2 \right)
\nonumber \\
\delta c_{3}^{(2)}(x) &\! = \! & \phantom- ( 20 - 8\,\z2 )\, [1-x] \:
C_F C_{FA} \: + \: O \left( [1-x]^2 \right)
\nonumber \\
\delta c_{L}^{(2)}(x) &\! = \! & ( 32 -16\,\z2 )\, [1-x]^2 \: C_F C_{FA}
\: + \: O \left( [1-x]^3 \right) \:\: .
\end{eqnarray}
\vspace{-2mm}
The differences $\delta\:\! c_{2}^{(2)}(x)$ and $\delta\:\! c_{L}^{(2)}(x)$
(both multiplied by -1 for display purposes) are compared to the corresponding
even-$N$ $\,\nu p +\bar\nu p\,$ coefficient functions in Fig.~\ref{fig:c2diff}.
The quantities (\ref{eq:dc2qq2}) and (\ref{eq:dcLqq2}) are negligible at
$\,x \gsim 0.1\,$ and at $\,x \gsim 0.3\,$, respectively, but indeed comparable
to the even-moment coefficient functions at small $x$. The corresponding
results for $F_{\:\!3}$ are qualitatively similar to those for $F_{\:\!2}$, but
with $\delta\:\! c_{3}^{(2)}(x)$ small down to $\,x \simeq 0.01\,$.
\begin{figure}[th]
\centerline{\epsfig{file=c2diff.eps,width=15cm,angle=0}}
\vspace{-2mm}
\caption{The odd$\,-\,$even non-singlet differences $\, -\, \delta\:\!
c_{2,L}^{(2)}(x)$ of Eqs.~(\ref{eq:dc2qq2}) and (\ref{eq:dcLqq2}), compared at
$x\leq 0.8$ to the corresponding even-$N$ coefficient functions calculated in
Refs.~\cite {SanchezGuillen:1991iq,vanNeerven:1991nn,Moch:1999eb}.
\label{fig:c2diff} }
\vspace*{1mm}
\end{figure}
For certain numerical applications, for instance for use with complex-$N$
packages like Ref.~\cite{Vogt:2004ns}, it is convenient to have
parametrizations of Eqs.~(\ref{eq:dc2qq2}) -- (\ref{eq:dc3qq2}) in terms of
elementary functions. With an error of less than 0.1\% these functions can be
approximated by
\begin{eqnarray}
\label{eq:dc2qq2p}
\delta c_{2}^{(2)}(x) &\! \simeq\! &
\{ - 9.1587 - 57.70\, x + 72.29\, x^2 - 5.689\, x^3
- xL_0\, ( \, 68.804 + 24.40\, L_0
\nonumber \\ & & \mbox{}
\; + 2.958\, L_0^2 \: )
+ 0.249\, L_0 + 8/9\: L_0^2\, (2 + L_0) \, \} \: (1 - x)
\:\: , \nonumber \\
\delta c_{3}^{(2)}(x) &\! \simeq\! &
\{ - 29.65 + 116.05\, x - 71.74\, x^2 - 16.18\, x^3
+ xL_0\, ( \, 14.60 + 69.90\, x
\nonumber \\ & & \mbox{}
- 0.378\, L_0^2 \: )
- 8.560\, L_0 + 8/9\: L_0^2\, (4 + L_0) \, \} \: (1 - x)
\:\: , \nonumber \\
\delta c_{L}^{(2)}(x) &\! \simeq\! &
\{ \, 10.663 - 5.248\, x - 7.500\, x^2 + 0.823\, x^3
+ xL_0\, ( \, 11.10 + 2.225\, L_0
\nonumber \\ & & \mbox{}
\; - 0.128\, L_0^2 \: )
+ 64/9\: L_0 \, \} \: (1 - x)^2
\:\: .
\end{eqnarray}
Here we have employed the short-hand $\, L_0 = \ln x\,$ and inserted the QCD
values of $C_F$ and $C_A$.
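As a cross-check of the parametrizations (our own sketch; the function names are illustrative), one can verify numerically that $\delta c_L^{(2)}$ reproduces the end-point limits of Eqs.~(\ref{eq:c2-smallx}) and (\ref{eq:c2-largex}) once the colour factor $C_F[C_F-C_A/2] = -2/9$ is inserted:

```python
import math

def dc2(x):
    """Parametrization of delta c_2^(2)(x), QCD colour factors inserted."""
    L0 = math.log(x)
    return (-9.1587 - 57.70*x + 72.29*x**2 - 5.689*x**3
            - x*L0*(68.804 + 24.40*L0 + 2.958*L0**2)
            + 0.249*L0 + 8.0/9.0*L0**2*(2.0 + L0)) * (1.0 - x)

def dc3(x):
    """Parametrization of delta c_3^(2)(x)."""
    L0 = math.log(x)
    return (-29.65 + 116.05*x - 71.74*x**2 - 16.18*x**3
            + x*L0*(14.60 + 69.90*x - 0.378*L0**2)
            - 8.560*L0 + 8.0/9.0*L0**2*(4.0 + L0)) * (1.0 - x)

def dcL(x):
    """Parametrization of delta c_L^(2)(x)."""
    L0 = math.log(x)
    return (10.663 - 5.248*x - 7.500*x**2 + 0.823*x**3
            + x*L0*(11.10 + 2.225*L0 - 0.128*L0**2)
            + 64.0/9.0*L0) * (1.0 - x)**2

Z2 = math.pi**2 / 6.0
CFA = (4.0/3.0) * (4.0/3.0 - 3.0/2.0)   # C_F [C_F - C_A/2] = -2/9

# Small-x limit: delta c_L^(2) -> C_F C_FA * (-32 ln x - 48)
x = 1.0e-8
print(abs(dcL(x) - CFA*(-32.0*math.log(x) - 48.0)) < 0.05)

# Large-x limit: delta c_L^(2) -> (32 - 16 zeta_2) (1-x)^2 C_F C_FA
x = 0.999
print(abs(dcL(x)/(1.0 - x)**2 - (32.0 - 16.0*Z2)*CFA) < 0.01)
```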
\setcounter{equation}{0}
\section{Third-order moments and approximations}
\label{sec:3-loop}
Recently the first five odd-integer moments of the third-order coefficient
functions for $\,F_{2,L}^{\,\nu p - \bar\nu p}$ in charged-current DIS have
been computed, together with the corresponding moments
$\,N = 2,\, \ldots ,\, 10\,$ for $\,F_3^{\,\nu p - \bar\nu p\!}$
\cite{Moch:2007gx}. Unlike previous fixed-$N$ calculations, the complete
three-loop results for $F_{2,L}^{\,\nu p + \bar \nu p\,}$
\cite{Moch:2004xu,Vermaseren:2005qc}\footnote
{$\,$The $\alpha_{\rm s}^3$ coefficient functions for this process are those of
photon-exchange DIS, but without the contributions of the $fl_{11}$ flavour
classes, see Fig.~1 of Ref.~\cite{Vermaseren:2005qc}, where the two photons
couple to different quark loops.}
and $F_3^{\,\nu p + \bar \nu p}$ \cite{Vogt:2006bt} facilitate analytic
continuations to these values of $N$. We have performed this continuation
using the $x$-space expressions in terms of harmonic polylogarithms \cite
{Remiddi:1999ew} and the Mellin transformation package provided with version 3
of {\sc Form}~\cite{Vermaseren:2000nd}.
Thus we are in a position to derive the respective lowest five moments of
the hitherto unknown third-order contributions to the even-odd differences
(\ref{eq:cdiff}). These moments represent the main new results of this article.
With one exception (see below) the exact SU($N_c$) expressions are however
deferred to the Appendix.
Here we present numerical results for QCD, using the conventions introduced at
the beginning of Section~\ref{sec:2-loop}, recalling especially $\,a_{\rm s}\, \equiv\,
\alpha_{\rm s}/(4 \pi)\,$ and the scale choice $\,\mu_r = \mu_f^{} = Q\,$.
In addition ${n^{}_{\! f}}$ denotes the number of effectively massless quark flavours,
and we use the notation $\delta\, C_{a,\,N}$ for the $N$-th moment of
$\delta\, C_{a}(x)$. The results for $F_{\:\!2}$ and $F_L$ read
\begin{eqnarray}
\label{eq:dc2ns}
\delta\, C_{2,1} & = &
- 4.378539253
\, a_{\rm s}^2
\: + \:a_{\rm s}^3 \, \* (
- 125.2948456
- 0.6502282123 \* \, {n^{}_{\! f}}
)
\nonumber
\\
\delta\, C_{2,3} & = &
- 0.138066958
\, a_{\rm s}^2
\: + \:a_{\rm s}^3 \, \* (
- 5.554493975
+ 0.1939792023 \* \, {n^{}_{\! f}}
)
\nonumber
\\
\delta\, C_{2,5} & = &
- 0.032987989
\, a_{\rm s}^2
\: + \:a_{\rm s}^3 \, \* (
- 0.707322026
+ 0.0004910378 \* \, {n^{}_{\! f}}
)
\nonumber
\\
\delta\, C_{2,7} & = &
- 0.013235254
\, a_{\rm s}^2
\: + \:a_{\rm s}^3 \, \* (
- 0.008816536
- 0.0201069660 \* \, {n^{}_{\! f}}
)
\nonumber
\\
\delta\, C_{2,9} & = &
- 0.006828983
\, a_{\rm s}^2
\: + \:a_{\rm s}^3 \, \* (
\phantom+ 0.133159220
- 0.0200289710 \* \, {n^{}_{\! f}}
)
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:dcLns}
\delta\, C_{L,1} & = &
- 2.138954096
\, a_{\rm s}^2
\: + \:a_{\rm s}^3 \, \* (
- 106.6667685
+ 3.294301343 \* \, {n^{}_{\! f}}
)
\nonumber
\\
\delta\, C_{L,3} & = &
- 0.078259985
\, a_{\rm s}^2
\: + \:a_{\rm s}^3 \, \* (
- 9.239637919
+ 0.2718024935 \* \, {n^{}_{\! f}}
)
\nonumber
\\
\delta\, C_{L,5} & = &
- 0.016892540
\, a_{\rm s}^2
\: + \:a_{\rm s}^3 \, \* (
- 2.548566852
+ 0.0650677125 \* \, {n^{}_{\! f}}
)
\nonumber
\\
\delta\, C_{L,7} & = &
- 0.006263113
\, a_{\rm s}^2
\: + \:a_{\rm s}^3 \, \* (
- 1.075400460
+ 0.0251053847 \* \, {n^{}_{\! f}}
)
\nonumber
\\
\delta\, C_{L,9} & = &
- 0.003001231
\, a_{\rm s}^2
\: + \:a_{\rm s}^3 \, \* (
- 0.560603262
+ 0.0122952192 \* \, {n^{}_{\! f}}
)
\:\: .
\end{eqnarray}
The lowest even moments for the structure function $F_{\:\! 3}$ are given by
\begin{eqnarray}
\label{eq:dc3ns}
\delta\, C_{3,2} & = &
- 0.1135841071
\, a_{\rm s}^2
\: + \:a_{\rm s}^3 \, \* (
\phantom+ 8.386266870
+ 0.0605431788 \* \, {n^{}_{\! f}}
)
\nonumber
\\
\delta\, C_{3,4} & = &
- 0.0683669250
\, a_{\rm s}^2
\: + \:a_{\rm s}^3 \, \* (
- 1.237248886
+ 0.0971522112 \* \, {n^{}_{\! f}}
)
\nonumber
\\
\delta\, C_{3,6} & = &
- 0.0350849853
\, a_{\rm s}^2
\: + \:a_{\rm s}^3 \, \* (
- 1.370404531
+ 0.0496762716 \* \, {n^{}_{\! f}}
)
\nonumber
\\
\delta\, C_{3,8} & = &
- 0.0208455457
\, a_{\rm s}^2
\: + \:a_{\rm s}^3 \, \* (
- 1.052847874
+ 0.0282541123 \* \, {n^{}_{\! f}}
)
\nonumber
\\
\delta\, C_{3,10} \!\! & = &
- 0.0137316528
\, a_{\rm s}^2
\: + \:a_{\rm s}^3 \, \* (
- 0.798850682
+ 0.0177100327 \* \, {n^{}_{\! f}}
)
\:\: .
\end{eqnarray}
The new $\alpha_{\rm s}^3$ contributions are rather large when compared to the leading
second-order results also included in Eqs.~(\ref{eq:dc2ns}) -- (\ref{eq:dc3ns})
with, e.g., $\,a_{\rm s} = 1/50\,$ corresponding to $\,\alpha_{\rm s} \simeq 0.25$. Except for
the lowest moment for $\,a = 2,L$, on the other hand, the integer-$N$
differences $\delta\, C_{a,N}$ are entirely negligible compared to the
$\,\nu p \pm \bar\nu p\,$ moments of Refs.~\cite{Retey:2000nq,Moch:2007gx}.
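The relative weight of the two known orders can be made explicit with a short
numerical check, added here for illustration only; the coefficients are taken
directly from Eq.~(\ref{eq:dc2ns}), and $a_{\rm s}=1/50$, ${n^{}_{\! f}}=4$ are the
reference values used in the text:

```python
# Size of the a_s^2 and a_s^3 terms of the first moment delta C_{2,1},
# using the numerical coefficients quoted in the text.
a_s, nf = 1.0 / 50.0, 4          # a_s = alpha_s/(4 pi), i.e. alpha_s ~ 0.25

term2 = -4.378539253 * a_s**2                          # alpha_s^2 term
term3 = (-125.2948456 - 0.6502282123 * nf) * a_s**3    # alpha_s^3 term

ratio = term3 / term2
print(term2, term3, ratio)   # the third-order term is almost 60% of the
                             # second-order one
```

For the higher moments the same exercise shows a much smaller ratio, in line
with the statement that the integer-$N$ differences quickly become negligible.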
Before we turn to the $x$-space implications of Eqs.~(\ref{eq:dc2ns}) --
(\ref{eq:dc3ns}), let us briefly discuss some interesting structural features
of our third-order results. For this purpose we consider the exact SU($N_c$)
expression for the lowest moment of $\,\delta\:\! c_2^{(3)}$ given by
\begin{eqnarray}
\label{eq:dc2q1}
\delta c_{2,1}^{(3)} & = &
{C^{}_F} \* {C_{FA}^{\,2}} \* \Biggl(
{175030 \over 81}
- {49216 \over 27} \* \z2
+ {404720 \over 81}\*\z3
- {562784 \over 135} \* \zs2
+ {33200 \over 9}\* \z2 \* \z3
\nonumber\\
& &\mbox{}\qquad\qquad
- {4160 \over 9} \*\z5
- {8992 \over 63} \* \zt2
- {1472 \over 3}\*\zs3
\Biggr)
\nonumber\\[0.5mm]
& &\mbox{{\hspace{-4.5mm}}}
+ {C_{F}^{\,2}} \* {C_{FA}} \* \Biggl(
-{ 303377 \over 162}
+{ 41350 \over 27} \* \z2
-{ 363896 \over 81} \* \z3
+{ 396824 \over 135} \* \zs2
-{ 26000 \over 9} \* \z2 \* \z3
\nonumber\\
& &\mbox{}\qquad\qquad
+{ 25616 \over 9} \* \z5
+{ 1456 \over 3} \* \zs3
-{ 56432 \over 315} \* \zt2
\Biggr)
\\[0.5mm]
& &\mbox{{\hspace{-4.5mm}}}
+ {C^{}_F} \* {C_{FA}} \* {n^{}_{\! f}} \* \Biggl(
{ 8786 \over 81}
- {3056 \over 27} \* \z2
+ {39592 \over 81} \* \z3
+{ 1408 \over 9} \* \z2 \* \z3
-{ 30424 \over 135} \* \zs2
-{ 1792 \over 9} \* \z5
\Biggr)
\nonumber
\:\: .\quad
\end{eqnarray}
Like all other calculated moments of the functions $\delta\:\! c_{a}^{(2)}(x)$
and $\delta\:\! c_{a}^{(3)}(x)$, this result contains an overall factor
${C_{FA}}\,=\, C_F - C_A /2\,=\, -1/(2N_c)$.
Hence the third-order even-odd differences are suppressed in the large-$N_c$
limit, as conjectured to all orders in Ref.~\cite{Broadhurst:2004jx} on the
basis of two-loop results, in particular for the $N=1$ Adler and Gottfried
sum rules; for a recent discussion see also Ref.~\cite{Kataev:2007jz}.
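As a numerical cross-check, which we add here for illustration, the exact
SU($N_c$) expression (\ref{eq:dc2q1}) can be evaluated with the QCD values
$C_F = 4/3$, $C_A = 3$ (hence ${C_{FA}} = -1/6$) and standard numerical
$\zeta$-values; the result has to reproduce the first moment quoted in
Eq.~(\ref{eq:dc2ns}):

```python
from math import pi

# Numerical zeta values; in the notation of the text z2**2 corresponds
# to \zs2, z2**3 to \zt2 and z3**2 to \zs3.
z2 = pi**2 / 6
z3 = 1.2020569031595943
z5 = 1.0369277551433699

CF, CA = 4.0 / 3.0, 3.0
CFA = CF - CA / 2          # = -1/(2 N_c) = -1/6 in QCD

b1 = (175030/81 - 49216/27*z2 + 404720/81*z3 - 562784/135*z2**2
      + 33200/9*z2*z3 - 4160/9*z5 - 8992/63*z2**3 - 1472/3*z3**2)
b2 = (-303377/162 + 41350/27*z2 - 363896/81*z3 + 396824/135*z2**2
      - 26000/9*z2*z3 + 25616/9*z5 + 1456/3*z3**2 - 56432/315*z2**3)
b3 = (8786/81 - 3056/27*z2 + 39592/81*z3 + 1408/9*z2*z3
      - 30424/135*z2**2 - 1792/9*z5)

const = CF * CFA**2 * b1 + CF**2 * CFA * b2   # nf-independent part
cnf   = CF * CFA * b3                         # coefficient of nf

print(const, cnf)   # -125.2948... and -0.650228...
```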
In fact, up to the additional $fl_{11}$ contribution absent in charged-current
DIS (recall Footnote 1),
\begin{eqnarray}
\label{dc2em}
\Delta_{\:\rm e.m.}\, c_{2,1}^{(3)} &\!= \! &
{{d^{abc}d_{abc}}\over{N_c}} \, \biggl( - 288
+ 96\,\z2
+ {1472 \over 3}\, \z3
- {256 \over 5}\, \zs2
- {1280 \over 3}\,\z5 \biggr)
\nonumber \\[1mm]
&\!= \! & - 33.67693293\:{n^{}_{\! f}} \qquad \mbox{in~~QCD} \:\: ,
\end{eqnarray}
Eq.~(\ref{eq:dc2q1}) represents the third-order coefficient-function
correction to the Gottfried sum rule (GSR)\footnote
{$\,$Note that our overall normalization and expansion parameter differ from
those of Ref.~\cite{Broadhurst:2004jx}. Consequently the corresponding GSR
coefficients (\ref{eq:dc2ns}), (\ref{eq:dc2q1}) and (\ref{dc2em}) are
larger by a factor $4^l/3$ at order $\alpha_{\rm s}^{\: l}$ than in their notation.}$\!\!$,
since the Adler sum rule involving the non-singlet coefficient function
$C_{2,1}$ of the $\,\nu p - \bar\nu p \,$ combination does not receive any
perturbative or non-perturbative corrections, see, e.g., Ref.~\cite
{Dokshitzer:1995qm}.
Another interesting feature of the functions $\delta\:\! c_{a=2,3}^{(l)}$ in
Eq.~(\ref{eq:cf-exp}) is the presence of $\zeta$-functions up to weight $2l\,$
in the integer moments, e.g., terms up to $\zt2$ and $\zs3$ occur in the
third-order result (\ref{eq:dc2q1}). This is in contrast to the `natural'
(OPE-based) moments of $C_a^{\nu p \pm \bar\nu p}$ which only include
contributions up to weight $2l\! -\! 1$, see
Refs.~\cite{Larin:1994vu,Larin:1997wd,Retey:2000nq,Moch:2007gx}.
Yet the $x$-space expressions of all these quantities consist of harmonic
polylogarithms up to weight $2l\! -\! 1$ corresponding to harmonic sums up to
weight $2l$. Note also that, in the approach of Refs.~\cite{vanNeerven:1991nn,%
Zijlstra:1991qc,Zijlstra:1992kj}, the absence of weight-$2l\,$ terms in the
natural moments appears to require a cancellation between different diagram
classes.
We now return to the numerical moments (\ref{eq:dc2ns}) -- (\ref{eq:dc3ns})
and investigate their consequences for the $x$-space functions
$\delta\:\! c_{a}^{(3)}(x)$. We follow an approach successfully used, for
instance, in Ref.~\cite{vanNeerven:2001pe} when only the coefficient-functions
moments of Refs.~\cite{Larin:1994vu,Larin:1997wd,Retey:2000nq} were known.
Based on the two-loop end-point behaviour in Eqs.~(\ref{eq:c2-smallx}) and
(\ref{eq:c2-largex}) we expect small-$x$ terms up to $\ln^5 x$ and $\ln^3 x$
in $\,\delta\:\! c_{2,3}^{(3)}(x)\,$ and $\,\delta\:\! c_{L}^{(3)}(x)$,
respectively, and large-$x$ limits including contributions up to
$\,(1-x)^{\eta_a} \ln^2 (1-x)\,$ with $\,\eta_{2,3}^{} = 1\,$ and
$\,\eta_{L}^{} = 2$.
Thus the $x$-space expressions of $\delta\:\! c_{a}^{(3)}$ will be of the form
\begin{equation}
\delta c_{a}^{(3)}(x) \; = \; (1-x)^{\eta_a}\: \bigg(
\sum_{m=1}^2 A_{m}\,\ln^{\,m}(1-x) \, + \, \delta c_{a}^{\:\rm smooth}(x)
\, + \, B_1\, \frac{\ln x}{1-x} \bigg)
\: + \! \sum_{n=2}^{\;\;7-2\eta_a} B_n\, \ln^{\,n} x \quad
\end{equation}
where the functions $\delta\:\! c_{a}^{\:\rm smooth}(x)$ are finite for
$0 \leq x \leq 1$. For moment-based approximations a simple ansatz is chosen
for these functions, and its free parameters are determined from the available
moments together with a reasonably balanced subset of the coefficients $A_m$
and $B_n$. This ansatz and the choice of the non-vanishing end-point parameters
are then varied in order to estimate the remaining uncertainties of $\delta\:\!
c_{a}^{(3)}(x)$. Finally for each value of $a$ two (out of about 50)
approximations, denoted below by $A$ and $B$, are selected which indicate the
widths of the uncertainty bands.
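To make this procedure concrete, the following sketch (our illustration, not
the original fitting code) applies it to the ${n^{}_{\! f}}$-independent part of
$\,\delta\:\! c_{2}^{(3)}$: taking the five basis functions of approximation
$A$ given below in Eq.~(\ref{eq:dc2qq3p}), the odd moments $N = 1,\ldots,9$
of Eq.~(\ref{eq:dc2ns}) fix the free parameters through a $5\times 5$ linear
system, and the published coefficients are recovered:

```python
from math import comb, factorial

def mom_logx(N, m):
    """Exact integral of x^(N-1) ln^m(x) over [0,1]."""
    return (-1)**m * factorial(m) / N**(m + 1)

def mom_log1mx(N, m):
    """Exact integral of x^(N-1) (1-x) ln^m(1-x) over [0,1] (via u = 1-x)."""
    return sum(comb(N - 1, k) * (-1)**k * mom_logx(k + 2, m)
               for k in range(N))

# Moments of the ansatz (1-x)ln^2(1-x), (1-x)ln(1-x), x(1-x), ln x, ln^3 x
def row(N):
    return [mom_log1mx(N, 2), mom_log1mx(N, 1),
            1/(N + 1) - 1/(N + 2), mom_logx(N, 1), mom_logx(N, 3)]

# nf-independent alpha_s^3 moments N = 1, 3, 5, 7, 9 from the text
moments = [-125.2948456, -5.554493975, -0.707322026,
           -0.008816536, 0.133159220]

# Solve the 5x5 system by Gauss-Jordan elimination with partial pivoting
A = [row(N) + [b] for N, b in zip((1, 3, 5, 7, 9), moments)]
for i in range(5):
    p = max(range(i, 5), key=lambda r: abs(A[r][i]))
    A[i], A[p] = A[p], A[i]
    for r in range(5):
        if r != i:
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
coeffs = [A[i][5] / A[i][i] for i in range(5)]
print(coeffs)   # close to (54.478, 304.6, 691.68, 179.14, -0.1826)
```

Varying the ansatz and the retained end-point terms, as described above, then
maps out the uncertainty band.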
For $F_{\:\! 2}$ and $F_L$ these functions are, with $\, L_0 = \ln x\,$,
$x_1 = 1-x$ and $\, L_1 = \ln x_1$,
\begin{eqnarray}
\label{eq:dc2qq3p}
\delta c_{2,\,A}^{(3)}(x) &\! =\! &
( 54.478\,L_1^2 + 304.6\,L_1 + 691.68\, x ) \, x_1
+ 179.14\,L_0 - 0.1826\,L_0^3
\nonumber \\ & & \mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}}\, \{ ( 20.822\, x^2 - 282.1\, (1 + {\textstyle {x \over 2}}) )\, x_1
- (285.58\, x + 112.3 - 3.587\,L_0^2) \, L_0 \}
\:\: , \nonumber \\[0.5mm]
\delta c_{2,\,B}^{(3)}(x) &\! =\! &
- ( 13.378\,L_1^2 + 97.60\,L_1 + 118.12\, x ) \, x_1
- 91.196\,L_0^2 - 0.4644\,L_0^5
\\ & & \mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}}\, \{ (4.522\,L_1 + 447.88\, (1 + {\textstyle {x \over 2}}) ) \, x_1
+ (514.02\, x + 147.05 + 7.386\,L_0)\, L_0 \}
\quad \nonumber
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:dcLqq3p}
\delta c_{L,\,A}^{(3)}(x) &\! =\! &
- ( 495.49\,x^2 + 906.86 ) \, x_1^{\,2} - 983.23\,x x_1 L_0
+ 53.706\,L_0^2 + 5.3059\,L_0^3
\nonumber \\ & & \mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}}\, \{ ( 29.95\, x^3 - 59.087\,x^2 + 379.91 ) \, x_1^{\,2}
- 273.042\, xL_0^2 + 71.482\, x_1L_0 \}
\:\: , \nonumber \\[0.5mm]
\delta c_{L,\,B}^{(3)}(x) &\! =\! &
( 78.306\,L_1 + 6.3838\, x ) \, x_1^{\,2} + 20.809\,x x_1 L_0
- 114.47\,L_0^2 - 22.222\,L_0^3
\\ & & \mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}}\, \{ (12.532\,L_1 + 141.99\,x^2 - 250.62\, x ) \, x_1^{\,2}
- ( 153.586\,x - 0.6569 ) \, x_1L_0 \}
\:\: . \quad \nonumber
\end{eqnarray}
The corresponding results for $F_{\:\! 3}$ read
\begin{eqnarray}
\label{eq:dc3qq3p}
\delta c_{3,\,A}^{(3)}(x) &\! =\! &
( 3.216\,L_1^2 + 44.50\,L_1 - 34.588 ) \, x_1
+ 98.719\,L_0^2 + 2.6208\,L_0^5
\nonumber \\ & & \mbox{{\hspace{-4.5mm}}}
- {n^{}_{\! f}}\, \{ ( 0.186\, L_1 + 61.102\, (1 + x) ) \, x_1
+ 122.51\, xL_0 - 10.914\,L_0^2 - 2.748\,L_0^3 \}
\:\: , \nonumber \\[0.5mm]
\delta c_{3,\,B}^{(3)}(x) &\! =\! &
- ( 46.72\,L_1^2 + 267.26\,L_1 + 719.49\, x ) \, x_1
- 171.98\,L_0 + 9.470\,L_0^3
\\ & & \mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}}\, \{ (0.8489\,L_1 + 67.928\, (1 + {\textstyle {x \over 2}}) ) \, x_1
+ 97.922\, xL_0 - 17.070\,L_0^2 - 3.132\,L_0^3 \}
\:\: . \quad \nonumber
\end{eqnarray}
The resulting approximations for the $\,\nu p -\bar\nu p\,$ odd-$N$ coefficient
functions $c_{2,L}^{(3)}(x)$ are compared in Fig.~\ref{fig:c3diff} to their
exact counterparts \cite{Moch:2004xu,Vermaseren:2005qc} for the even-$N$
non-singlet structure functions. The third-order even-odd differences remain
noticeable to larger values of $x$ than at two loops, e.g., up to $x \simeq
0.3$ for $F_{\:\!2}$ and $x \simeq 0.6$ for $F_{\:\! L}$ in the four-flavour
case shown in the figure. The moments $N = 1,\:3,\:\ldots,\: 9\,$ constrain
$\,\delta\:\! c_{2,L}^{(3)}(x)\,$ very well at $\,x \gsim 0.1$, and
approximately down to $\,x \approx 10^{-2}$.
\begin{figure}[bh]
\vspace{-3mm}
\centerline{\epsfig{file=c3diff.eps,width=15cm,angle=0}}
\vspace{-2mm}
\caption{The exact third-order coefficient functions of the even-$N$ structure
functions $\, F_{2,L}^{\,\nu p + \bar\nu p}$
\cite{Moch:2004xu,Vermaseren:2005qc} for four massless flavours, and the
corresponding odd-moment quantities obtained from these results and the
approximations (\ref{eq:dc2qq3p}) and (\ref{eq:dcLqq3p}) for the even -- odd
differences. \label{fig:c3diff}}
\vspace*{-4mm}
\end{figure}
For some applications, such as the Paschos-Wolfenstein relation addressed in the
next section, one needs the second moments of the functions $\,\delta\:\!
c_{2,L}^{(3)}(x)$. These quantities can now be determined approximately from
the above $x$-space results, yielding
\begin{eqnarray}
\label{eq:dc3mom2}
\delta c_{2,2}^{(3)} &\! =\! & -20.19 \pm 0.39 \: + \: (0.691\pm 0.040) \,{n^{}_{\! f}}
\nonumber \\
\delta c_{L,2}^{(3)} &\! =\! & -24.75 \pm 0.15 \: - \: (0.792\pm 0.014) \,{n^{}_{\! f}}
\:\: .
\end{eqnarray}
Here the central values are given by the respective averages of the
approximations $A$ and $B$ in Eqs.~(\ref{eq:dc2qq3p}) and (\ref{eq:dcLqq3p})
which directly provide the upper and lower limits.
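Since all terms in Eqs.~(\ref{eq:dc2qq3p}) integrate elementarily, this
average is easy to verify; the short check below (added for illustration)
recomputes the second moment of $\,\delta\:\! c_{2}^{(3)}$ using only
$\,\int_0^1 x^{\,a} \ln^{\:\!m}\! x \;dx = (-1)^m\, m!\,/(a+1)^{m+1}$:

```python
from math import factorial

def I(a, m):
    """Exact integral of x^a ln^m(x) over [0,1]."""
    return (-1)**m * factorial(m) / (a + 1)**(m + 1)

# Second moments (integral of x*f(x)) of approximations A and B for
# delta c_2^(3); ln(1-x) terms are mapped to u = 1-x before integrating.
A_const = (54.478*(I(1, 2) - I(2, 2)) + 304.6*(I(1, 1) - I(2, 1))
           + 691.68*(1/3 - 1/4) + 179.14*I(1, 1) - 0.1826*I(1, 3))
A_nf = (20.822*(1/4 - 1/5)
        - 282.1*((I(1, 0) - I(2, 0)) + 0.5*(I(2, 0) - I(3, 0)))
        - 285.58*I(2, 1) - 112.3*I(1, 1) + 3.587*I(1, 3))
B_const = (-13.378*(I(1, 2) - I(2, 2)) - 97.60*(I(1, 1) - I(2, 1))
           - 118.12*(1/3 - 1/4) - 91.196*I(1, 2) - 0.4644*I(1, 5))
B_nf = (4.522*(I(1, 1) - I(2, 1))
        + 447.88*((I(1, 0) - I(2, 0)) + 0.5*(I(2, 0) - I(3, 0)))
        + 514.02*I(2, 1) + 147.05*I(1, 1) + 7.386*I(1, 2))

print((A_const + B_const) / 2, (A_nf + B_nf) / 2)  # ~ -20.19 and ~ 0.691
```

The analogous computation with Eqs.~(\ref{eq:dcLqq3p}) reproduces the
longitudinal result in the same way.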
Returning to $x$-space we recall that
uncertainty bands as in Fig.~\ref{fig:c3diff} do not directly indicate the
range of applicability of these approximations, since the coefficient functions
enter observables only via smoothening Mellin convolutions with
non-perturbative initial distributions. In Fig.~\ref{fig:c3dcnv} we therefore
present the convolutions of all six third-order CC coefficient functions with
a characteristic reference distribution. It turns out that the approximations
(\ref{eq:dc2qq3p}) and (\ref{eq:dcLqq3p}) of the previous figure can be
sufficient down to values even below $x = 10^{-3}$. The uncertainty of
$\,\delta\:\! c_{3}^{(3)}(x)$, on the other hand, becomes relevant already
at larger values, $\,x \lsim 10^{-2}$, as the lowest calculated moment of this
quantity, $\,N=2$, has far less sensitivity to the behaviour at low $x$.
\begin{figure}[th]
\centerline{\epsfig{file=c23ldcnv.eps,width=14cm,angle=0}\qquad}
\vspace*{-1mm}
\caption{Convolution of the six third-order CC coefficient functions for
$F_{\:\!2,\,3,\,L}$ in $\,\nu p + \bar\nu p\,$
\cite{Moch:2004xu,Vermaseren:2005qc,Vogt:2006bt} and $\,\nu p - \bar\nu p\,$
[Eqs.~(\ref{eq:dc2qq3p}) -- (\ref{eq:dc3qq3p})] DIS with a schematic but
typical non-singlet distribution $\!f$. All results have been normalized to
$\!f(x)$, suppressing a large but trivial variation of the absolute
convolutions for small and large values of $x$. \label{fig:c3dcnv} }
\vspace*{-2mm}
\end{figure}
The three-loop corrections to the non-singlet structure functions are rather
small even well below the $x$-values shown in the figure~~--~~recall our small
expansion parameter $a_{\rm s}\,$: the third-order coefficients are smaller by a
factor $2.0\cdot 10^{\,3}$ if the expansion is written in powers of $\alpha_{\rm s}$.
Their sharp rise for $\,x \rightarrow 1\,$ is understood in terms of soft-gluon effects
which can be effectively resummed, if required, to next-to-next-to-next-to-%
leading logarithmic accuracy \cite{Moch:2005ba}. Our even-odd differences
$\,\delta\:\! c_{a}^{(3)}(x)$, on the other hand, are irrelevant at $x > 0.1$
but have a sizeable impact at smaller $x$ in particular on the corrections for
$F_{\:\!2}$ and $F_{\:\!L}$.
\setcounter{equation}{0}
\section{Applications}
\label{sec:applications}
The approximate results for $\,\delta\:\! c_{a}^{(3)}(x)$ facilitate a first
assessment of the perturbative stability of the even-odd differences
(\ref{eq:cdiff}). In Fig.~\ref{fig:c2lexp} we illustrate the known two orders
for $F_{\:\!2}$ and $F_{\:\!L}$ for \mbox{$\alpha_{\rm s} = 0.25$} and ${n^{}_{\! f}} = 4$ massless
quark flavours, employing the same reference quark distribution as in
Fig.~\ref{fig:c3dcnv}.
Obviously our new $\alpha_{\rm s}^{\,3}$ corrections are important wherever these
coefficient-function differences are non-negligible. On the other hand, our
results confirm that these quantities are very small, and thus relevant only
when a high accuracy is required. Presently this condition is fulfilled
only for the determination of the weak mixing angle $\theta_W$ from neutrino
DIS to which we therefore turn now.
\begin{figure}[hbt]
\centerline{\epsfig{file=c2ldexp.eps,width=15.2cm,angle=0}\hspace{3mm}}
\vspace{-1mm}
\caption{The first two approximations, denoted by LO and NLO, of the
differences (\ref{eq:cf-exp}) for $F_{\:\!2}$ and $F_{\:\!L}$ in
charged-current DIS. The results are shown for representative values of
$\alpha_{\rm s}$ and ${n^{}_{\! f}}$ after convolution with the reference distribution $\! f(x)$
also employed in Fig.~3. The dashed curves correspond to the two
approximations in Eqs.~(\ref{eq:dc2qq3p}) and (\ref{eq:dcLqq3p}) for the new
$\alpha_{\rm s}^{\,3}$ contributions. \label{fig:c2lexp} }
\vspace*{-2mm}
\end{figure}
For this purpose one considers the so-called Paschos-Wolfenstein relation
defined in terms of a ratio of neutral-current and charged-current cross
sections for neutrino-nucleon DIS~\cite{Paschos:1973kj},
\begin{equation}
\label{eq:rminus}
R^{-} \; = \:\:
\frac{\sigma(\nu_{\mu\,}N\rightarrow\nu_{\mu\,}X) \: - \:
\sigma(\bar \nu_{\mu\,}N\rightarrow\bar \nu_{\mu\,}X)}
{\sigma(\nu_{\mu\,}N\rightarrow\mu^-X) \: - \:
\sigma(\bar \nu_{\mu\,}N\rightarrow\mu^+X)}
\:\: .
\end{equation}
$R^{-}$ directly measures $\,\sin^2 \theta_W$ if the up and down valence quarks
in the target carry equal momenta, and if the strange and heavy-quark sea
distributions are charge symmetric. At the lowest order of perturbative QCD
one generally finds
\begin{equation}
\label{eq:rminusLO}
R^{-}_{\:\!\rm LO} \:\: = \:\: {1 \over 2} \: - \: \sin^2 \theta_W
\:\: .
\end{equation}
The quantity (\ref{eq:rminus}) has attracted considerable attention in recent
years due to a determination of $\sin^2 \theta_W$ by the NuTeV collaboration
\cite{Zeller:2001hh}: within the Standard Model their result is at variance
with other measurements of this quantity~\cite{Yao:2006px}, see also Refs.~\cite
{Davidson:2001ji,McFarland:2003jw,Dobrescu:2003ta} for detailed discussions.
Beyond the leading order Eq.~(\ref{eq:rminusLO}) receives perturbative QCD
corrections which involve the second moments of coefficient functions for
the $\nu N - \bar\nu N$ neutral- and charged-current structure functions.%
\footnote{$\,$Specifically the ratio $R^{-}$ includes, besides all $\,\nu N
- \bar\nu N$ CC coefficient functions, the neutral-current quantity
$C_3^{\:\rm NC}$ which is equal to its charged-current counterpart
$C_3^{\,\nu N - \bar\nu N}$ at the perturbative orders considered here.}
Armed with the results of Sections \ref{sec:2-loop} and \ref{sec:3-loop} we are
now able to finalize the corresponding $\alpha_{\rm s}^{\,2}$ contribution for massless
quarks \cite{McFarland:2003jw} and to present an accurate numerical result at
order $\alpha_{\rm s}^{\,3}$.
We denote by $q^- \equiv q-\bar{q}\,$ the second Mellin moments of the
valence distributions of the flavours $q = u,\;d,\;s,\;\ldots\,$,
\begin{equation}
\label{eq:pdfmom}
q^- \; = \; \int_0^1\! dx\; x \left( q(x) - \bar{q}(x) \right)\:\: .
\end{equation}
The QCD corrections to $R^{-}$ can be expanded in inverse powers of the
dominant isoscalar combination $\,u^- + d^-$ of the parton distributions~~--~~%
recall that the measurements of this ratio are performed for (almost) isoscalar
targets. After inserting the expansion of the $\overline{\mbox{MS}}$\ coefficient functions in
powers of $\alpha_{\rm s}\,$, the Paschos--Wolfenstein ratio Eq.~(\ref{eq:rminus}) can be
written as
\begin{eqnarray}
\label{eq:rminusNNNLO}
R^{-} &\! =&
g_L^{\:\!2} - g_R^{\:\!2}
\; + \; \frac{u^- - d^- + c^- - s^-}{u^- + d^-} \: \Biggl(
3(g_{Lu}^{\:\!2} - g_{Ru}^{\:\!2}) + (g_{Ld}^{\:\!2} - g_{Rd}^{\:\!2})
\nonumber \\ & & \mbox{}
+ (g_L^{\:\!2} - g_R^{\:\!2}) \: \Biggl\{
\,\frac{8}{9} \frac{\alpha_{\rm s}}{\pi}
\; +\; \frac{\alpha_{\rm s}^2}{\pi^2} \: \Biggl[
\frac{15127}{1944}
- \frac{89}{81}\, \z2
+ \frac{61}{27}\, \z3
- \frac{32}{45}\, \zs2
- \frac{83}{162}\, {n^{}_{\! f}}
\Biggr]
\nonumber\\ & & \mbox{}
+\; \frac{\alpha_{\rm s}^3}{\pi^3} \Biggl[
\frac{5175965}{52488}
- \frac{356}{729}\, \z2
- \frac{586}{27}\, \z3
- \frac{128}{405}\, \zs2
+ \frac{190}{81}\, \z5
- \frac{9062}{729}\, {n^{}_{\! f}}
+ \frac{2}{3}\, {n^{}_{\! f}} \z3
\nonumber\\ & & \mbox{}
+ \frac{226}{729}\, {n^{\:\!2}_{\! f}}
- \frac{1}{32}\, \delta c_{2,2}^{(3)}
+ \frac{1}{128}\, \delta c_{L,2}^{(3)}
\Biggr] \Biggr\} \Biggr)
\; + \; {\cal{O}} \left( (u^- + d^-)^{-2\,} \right)
\; + \; {\cal{O}}\bigl(\alpha_{\rm s}^4\bigr) \:\: .
\end{eqnarray}
Here the left- and right-handed weak couplings $\,g_{Lu}$, $g_{Ld}$, $g_{Ru\,}$
and $g_{Rd\,}$ are related to the weak mixing angle $\sin^2 \theta_W$ by
\begin{equation}
\label{eq:gLgRdef}
g_L^{\:\!2} \;\equiv\; g_{Lu}^{\:\!2} + g_{Ld}^{\:\!2}
\;=\; \frac{1}{2}-\sin^2\theta_W+\frac{5}{9}\sin^4\theta_W
\:\: ,\qquad
g_R^{\:\!2} \;\equiv\; g_{Ru}^{\:\!2} + g_{Rd}^{\:\!2}
\;=\; \frac{5}{9}\sin^4\theta_W
\:\: .
\end{equation}
Beyond the tree level, of course, these relations receive electroweak radiative
corrections, see, e.g., Ref.~\cite{Diener:2005me}. Eq.~(\ref{eq:rminusNNNLO})
shows the well-known fact that the relation (\ref{eq:rminusLO}) receives
corrections if the parton content of the target includes an isotriplet
component, $u^-\not=d^-$, or a quark sea with a $C$-odd component,
$s^-\not=0\,$ or $\,c^-\not=0$.
Notice also that perturbative QCD affects only these corrections.
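The coupling algebra underlying Eqs.~(\ref{eq:rminusNNNLO}) and
(\ref{eq:gLgRdef}) can be verified in a few lines; the check below (an
illustration added here) inserts the tree-level standard-model values
$\,g_{Lq} = T^3_q - e_q \sin^2\theta_W$ and $\,g_{Rq} = -e_q \sin^2\theta_W$,
which are not spelled out in the text:

```python
# Tree-level neutral-current couplings, g_L^q = T3_q - e_q*s and
# g_R^q = -e_q*s with s = sin^2(theta_W); s = 0.2312 is just an
# illustrative numerical value.
s = 0.2312

gLu, gRu = 0.5 - 2*s/3, -2*s/3
gLd, gRd = -0.5 + s/3, s/3

gL2 = gLu**2 + gLd**2      # expected: 1/2 - s + (5/9) s^2
gR2 = gRu**2 + gRd**2      # expected: (5/9) s^2

# Isotriplet prefactor: 3(gLu^2 - gRu^2) + (gLd^2 - gRd^2) = 1 - 7s/3
pref = 3*(gLu**2 - gRu**2) + (gLd**2 - gRd**2)

print(gL2 - gR2, pref)     # 1/2 - s  and  1 - 7s/3
```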
The exact second-order contribution in Eq.~(\ref{eq:rminusNNNLO}) differs
from the result in Ref.~\cite{McFarland:2003jw} where the function
$\,\delta\:\! c_{L}^{(2)}(x)$ of Eq.~(\ref{eq:dcLqq2}) was not included.
The third-order corrections can now be completed in a numerical form, using
our approximations (\ref{eq:dc3mom2}) for the second moments of
$\,\delta\:\! c_{2,L}^{(3)}(x)$. For ${n^{}_{\! f}} = 4\,$ flavours (and disregarding
electroweak corrections) we obtain
\begin{eqnarray}
\label{eq:rminus-numbers}
R^{-} &\! = &
\frac{1}{2} - \sin^2\theta_W
\:\: + \:\: \frac{u^- - d^- + c^- - s^-}{u^- + d^-} \: \Bigg\{
1 - \frac{7}{3}\:\sin^2\theta_W
\; + \; \left( \frac{1}{2} - \sin^2\theta_W \right) \cdot
\nonumber \\ & & \mbox{}
\frac{8}{9} \frac{\alpha_{\rm s}}{\pi} \left[ \,
1
+ 1.689\,\alpha_{\rm s}
+ (3.661 \pm 0.002)\,\alpha_{\rm s}^2 \,
\right]
\Biggr\}
\; + \; {\cal{O}} \left( (u^- + d^-)^{-2\,} \right)
\; + \; {\cal{O}}\bigl(\alpha_{\rm s}^4\bigr)
\:\: . \quad
\end{eqnarray}
The perturbation series in the square brackets appears reasonably well
convergent for relevant values of the strong coupling constant, with the known
terms reading, e.g., 1 + 0.42 + 0.23 for $\alpha_{\rm s} = 0.25$. Thus the $\alpha_{\rm s}^2$ and
$\alpha_{\rm s}^3$ contributions correct the NLO estimate by 65\% in this case. On the
other hand, due to the small prefactor of this expansion, the new third-order
term increases the complete curly bracket in Eq.~(\ref{eq:rminus-numbers}) by
only about 1\%, which can therefore be considered as the new uncertainty of
this quantity due to the truncation of the perturbative expansion. Consequently
previous NLO estimates of the effect of, for instance, the (presumably mainly
non-perturbative, see Refs.~\cite{Catani:2004nc,Lai:2007dq,Thorne:2007bt})
charge asymmetry of the strange sea remain practically unaffected by
higher-order corrections to the coefficient functions.
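The expansion coefficients quoted in Eq.~(\ref{eq:rminus-numbers}) can be
cross-checked against the exact $\alpha_{\rm s}^{\,2}$ term of
Eq.~(\ref{eq:rminusNNNLO}); the snippet below (our illustration) does so for
${n^{}_{\! f}} = 4$ and also reproduces the series values $1 + 0.42 + 0.23$ quoted
above for $\alpha_{\rm s} = 0.25$:

```python
from math import pi

z2 = pi**2 / 6
z3 = 1.2020569031595943
nf = 4

# alpha_s^2 coefficient relative to the leading (8/9)(alpha_s/pi) term
c2 = (15127/1944 - 89/81*z2 + 61/27*z3 - 32/45*z2**2
      - 83/162*nf) * 9 / (8*pi)
print(c2)                        # ~ 1.689

alps = 0.25
terms = [1.0, c2*alps, 3.661*alps**2]
print([round(t, 2) for t in terms])   # [1.0, 0.42, 0.23]
```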
\setcounter{equation}{0}
\section{Summary}
\label{sec:summary}
In this article we have presented new results for the coefficient functions of
inclusive charged-current DIS in the framework of massless perturbative QCD.
We have filled a gap in the two-loop literature by writing down the
corresponding difference $\,\delta\:\! c^{(2)}_L(x)$ of the $\,\nu p +
\bar\nu p\,$ and \mbox{$\,\nu p - \bar\nu p\,$} structure functions $\,F_{L\,}$.
Our main results are the lowest five (even- or odd-integer) Mellin moments of
the third-order corrections $\,\delta\:\! c^{(3)}_a(x)$ for all three structure
functions $F_{a\: =\: 2,\:3,\:L}$ and approximations in Bjorken-$x$ space based
on these moments which are applicable down to at least $x \lsim 10^{-2}$.
As a byproduct we have calculated the related third-order coefficient-function
correction to the Gottfried sum rule in photon-exchange DIS.
All our third-order results are proportional to the `non-planar' colour factor
$C_A-2\,C_F$, thus confirming a conjecture by Broadhurst, Kataev and Maxwell on
the $1/N_c^{\,2}$ suppression of these coefficient-function differences in the
limit of a large number of colours $N_c$. Numerically our $\alpha_{\rm s}^3$ corrections
prove relevant in particular for $F_{\:\!2}$ and $F_L$ wherever the differences
of the $\,\nu p + \bar\nu p\,$ and $\,\nu p - \bar\nu p\,$ coefficient functions
are not negligible. We have employed the above results to derive the second- and
third-order QCD corrections to the Paschos-Wolfenstein ratio $R^-$ used to
determine the weak mixing angle from neutrino deep-inelastic scattering. The
uncertainty due to uncalculated higher-order coefficient functions has been
reduced to a level amply sufficient for the foreseeable future, i.e., 1\% for the
coefficient-function factor multiplying the quark-distribution asymmetries.
{\sc Form} files and {\sc Fortran} subroutines with our results can be obtained
from the preprint server {\tt http://arXiv.org} by downloading the source of
this article. Furthermore they are available from the authors upon request.
\subsection*{Note added}
While this article was being finalized, the 11th moments of the functions
$\,\delta\:\! c^{(3)}_2(x)$ and $\,\delta\:\! c^{(3)}_L(x)$ have been
computed~\cite{Rogalprep}. Both results fall into the bands generated by the
respective $x$-space approximations in Section~\ref{sec:3-loop}, thus confirming
the reliability of these uncertainty estimates.
\subsection*{Acknowledgements}
We would like to thank S.~Alekhin, D.~Broadhurst and A.~Kataev for stimulating
discussions.
The work of S.M. and M.R. has been supported by the Helmholtz Gemeinschaft
under contract VH-NG-105 and in part by the Deutsche Forschungsgemeinschaft in
Sonderforschungs\-be\-reich/Transregio~9. During the final stage of this
research A.V. enjoyed the hospitality of the Instituut-Lorentz of Leiden
University.
\section*{Appendix}
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
Here we present the analytic expressions for the Mellin-space
coefficient-function differences $\delta c_{a,N}^{(3)}$ which were given
numerically in Eqs.~(\ref{eq:dc2ns}) -- (\ref{eq:dc3ns}). We use the
notations and conventions as specified at the beginning of
Section~\ref{sec:2-loop} and above Eq.~(\ref{eq:dc2qq2}).
The first moment of $\delta c_{2}^{(3)}(x)$ has been written down in Eq.~(\ref
{eq:dc2q1}) above. The remaining known moments of this quantity are given by
\begin{eqnarray}
\label{eq:dc2q3}
\delta c_{2,3}^{(3)} &\! =\! &
{C^{}_F} \* {C_{FA}^{\,2}} \* \Biggl(
{1805677051 \over 466560}
- {2648 \over 9} \* \z5
+ {10093427 \over 810} \* \z3
- {1472 \over 3} \* \zs3
- {7787113 \over 1944} \* \z2
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ {55336 \over 9} \* \z2 \* \z3
- {378838 \over 45} \* \zs2
- {8992 \over 63} \* \zt2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {C_{F}^{\,2}} \* {C_{FA}} \* \Biggl(
- {5165481803 \over 1399680}
+ {40648 \over 9} \* \z5
- {9321697 \over 810} \* \z3
+ {1456 \over 3} \* \zs3
+ {8046059 \over 1944} \* \z2
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- 4984 \* \z2 \* \z3
+ {798328 \over 135} \* \zs2
- {56432 \over 315} \* \zt2
\Biggr)
\\
& &\mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}} \* {C^{}_F} \* {C_{FA}} \* \Biggl(
{20396669 \over 116640}
- {1792 \over 9} \* \z5
+ {405586 \over 405} \* \z3
- {139573 \over 486} \* \z2
+ {1408 \over 9} \* \z2 \* \z3
- {50392 \over 135} \* \zs2
\Biggr)
\nonumber\; ,
\quad
\\[2mm]
\label{eq:dc2q5}
\delta c_{2,5}^{(3)} & = &
{C^{}_F} \* {C_{FA}^{\,2}} \* \Biggl(
{18473631996593 \over 3827250000}
- {17584 \over 45} \* \z5
+ {149815672 \over 7875} \* \z3
- {1472 \over 3} \* \zs3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- {291199027 \over 50625} \* \z2
+ {330416 \over 45} \* \z2 \* \z3
- {2577928 \over 225} \* \zs2
- {8992 \over 63} \* \zt2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {C_{F}^{\,2}} \* {C_{FA}} \* \Biggl(
- {16016244428419 \over 3827250000}
+ {47560 \over 9} \* \z5
- {1270840912 \over 70875} \* \z3
+ {1456 \over 3} \* \zs3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ {1321405949 \over 202500} \* \z2
- {89128 \over 15} \* \z2 \* \z3
+{26658224 \over 3375} \* \zs2
- {56432 \over 315} \* \zt2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}} \* {C^{}_F} \* {C_{FA}} \* \Biggl(
{181199822513 \over 765450000}
- {1792 \over 9} \* \z5
+ {6514448 \over 4725} \* \z3
- {1652773 \over 3375} \* \z2
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ {1408 \over 9} \* \z2 \* \z3
- {11888 \over 27} \* \zs2
\Biggr)
\; ,
\quad
\\[2mm]
\label{eq:dc2q7}
\delta c_{2,7}^{(3)} & = &
{C^{}_F} \* {C_{FA}^{\,2}} \* \Biggl(
{177036089007294328733 \over 32934190464000000}
- {27248 \over 63} \* \z5
+ {65397081433 \over 2646000} \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- {1472 \over 3} \* \zs3
- {340303364748629 \over 46675440000} \* \z2
+ {2563996 \over 315} \* \z2 \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- {4570738447 \over 330750} \* \zs2
- {8992 \over 63} \* \zt2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {C_{F}^{\,2}} \* {C_{FA}} \* \Biggl(
- {213694072871074531 \over 45177216000000}
+ {1821772 \over 315} \* \z5
- {438487320707 \over 18522000} \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ {1456 \over 3} \* \zs3
+ {418808510000479 \over 46675440000} \* \z2
- {2071492 \over 315} \* \z2 \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ {6241478743 \over 661500} \* \zs2
- {56432 \over 315 } \* \zt2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}} \* {C^{}_F} \* {C_{FA}} \* \Biggl(
{38079608000704561 \over 117622108800000}
- {1792 \over 9} \* \z5
+ {22115039 \over 13230} \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- {113587875043 \over 166698000} \* \z2
+ {1408 \over 9} \* \z2 \* \z3
- {2296328 \over 4725} \* \zs2
\Biggr)
\; ,
\quad
\\[2mm]
\label{eq:dc2q9}
\delta c_{2,9}^{(3)} & = &
{C^{}_F} \* {C_{FA}^{\,2}} \* \Biggl(
{5676515460744370321603 \over 1000376035344000000}
- {25664 \over 63} \* \z5
+ {11165079556403 \over 375070500} \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- {1472 \over 3} \* \zs3
- {8178803099431493 \over 945177660000} \* \z2
+ {1648352 \over 189} \* \z2 \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- {23488033336 \over 1488375} \* \zs2
- {8992 \over 63} \* \zt2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {C_{F}^{\,2}} \* {C_{FA}} \* \Biggl(
- {32102287673972370020989 \over 6002256212064000000}
+ {1162796 \over 189} \* \z5
- {89153747611 \over 3087000} \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ {1456 \over 3} \* \zs3
+ {342078312478997 \over 30005640000} \* \z2
- {1332820 \over 189} \* \z2 \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ {3187232017 \over 297675} \* \zs2
- {56432 \over 315} \* \zt2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}} \* {C^{}_F} \* {C_{FA}} \* \Biggl(
{21832132134852204299 \over 52400649470400000}
- {1792 \over 9} \* \z5
+ {6271692134 \over 3274425} \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- {1931824297943 \over 2250423000} \* \z2
+ {1408 \over 9} \* \z2 \* \z3
- {164116 \over 315} \* \zs2
\Biggr)
\; .
\end{eqnarray}
The corresponding lowest five odd-integer moments for the longitudinal
structure function read
\begin{eqnarray}
\label{eq:dcLq1}
\delta c_{L,1}^{(3)} & = &
{C^{}_F} \*{C_{FA}^{\,2}} \* \Biggl(
{ 21977 \over 9}
-{ 608 \over3}\*\z5
-{ 2648 \over9}\*\z3
-{ 3068 \over 9}\*\z2
- 448\*\z2\*\z3
- 336\*\zs2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {C_{F}^{\,2}} \* {C_{FA}} \* \Biggl(
-{ 17819 \over 9}
-{ 1568 \over 3}\*\z5
+ {5648 \over 9}\*\z3
+ {1376 \over 9}\*\z2
+ 288\*\z2\*\z3
+ {2304 \over 5}\*\zs2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}} \* {C^{}_F} \* {C_{FA}} \* \Biggl(
{ 1366 \over 9}
-{ 496 \over 9}\*\z3
-{ 328 \over 9}\*\z2
-{ 224 \over 15}\*\zs2
\Biggr)
\; ,
\quad
\\[2mm]
\label{eq:dcLq3}
\delta c_{L,3}^{(3)} & = &
{C^{}_F} \* {C_{FA}^{\,2}} \* \Biggl(
- {12350749 \over 19440}
+ 352 \* \z5
+ {52516 \over 45} \* \z3
+{ 47 \over 27} \* \z2
+ 96 \* \z2 \* \z3
- {7544 \over 15} \* \zs2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {C_{F}^{\,2}} \* {C_{FA}} \* \Biggl(
{10152961 \over 12960}
- 368 \* \z5
- {16412 \over 15} \* \z3
- 242 \* \z2
+ 144 \* \z2 \* \z3
+{ 1168 \over 3} \* \zs2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}} \* {C^{}_F} \* {C_{FA}} \* \Biggl(
- {16757 \over 1620}
+{ 2936 \over 45} \* \z3
- {16 \over 9} \* \z2
- {368 \over 15} \* \zs2
\Biggr)
\; ,
\quad
\\[2mm]
\label{eq:dcLq5}
\delta c_{L,5}^{(3)} & = &
{C^{}_F} \* {C_{FA}^{\,2}} \* \Biggl(
- {735306721 \over 17010000}
- {1888 \over 3} \* \z5
+ {558244 \over 315} \* \z3
- {442783 \over 675} \* \z2
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ 448 \* \z2 \* \z3
- {4160 \over 9} \* \zs2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {C_{F}^{\,2}} \* {C_{FA}} \* \Biggl(
{6741265367 \over 10206000}
- {736 \over 3} \* \z5
- {1285168 \over 945} \* \z3
+ {51493 \over 405} \* \z2
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ 96 \* \z2 \* \z3
+ {69608 \over 225} \* \zs2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}} \* {C^{}_F} \* {C_{FA}} \* \Biggl(
- {2107157 \over 255150}
+ {8816 \over 105} \* \z3
- {11992 \over 405} \* \z2
- {736 \over 45} \* \zs2
\Biggr)
\; ,
\quad
\\[2mm]
\label{eq:dcLq7}
\delta c_{L,7}^{(3)} & = &
{C^{}_F} \* {C_{FA}^{\,2}} \* \Biggl(
{354522585410107 \over 666792000000}
- 1408 \* \z5
+ {47266403 \over 23625} \* \z3
- {1095179473 \over 945000} \* \z2
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ 752 \* \z2 \* \z3
- {147056 \over 375} \* \zs2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {C_{F}^{\,2}} \* {C_{FA}} \* \Biggl(
{11388456807174161 \over 28005264000000}
- 184 \* \z5
- {176925641 \over 132300} \* \z3
+ {4569363329 \over 13230000 }\* \z2
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ 72 \* \z2 \* \z3
+ {663878 \over 2625} \* \zs2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}} \* {C^{}_F} \* {C_{FA}} \* \Biggl(
{369546282989 \over 50009400000}
+ {124282 \over 1575} \* \z3
- {220747 \over 5250} \* \z2
- {184 \over 15} \* \zs2
\Biggr)
\; ,
\quad
\\[2mm]
\label{eq:dcLq9}
\delta c_{L,9}^{(3)} & = &
{C^{}_F} \* {C_{FA}^{\,2}} \* \Biggl(
{1346454911003496947 \over 1323248724000000}
- {10528 \over 5 }\* \z5
+ {13247918 \over 6125} \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- {325373958827 \over 208372500} \* \z2
+ {5184 \over 5} \* \z2 \* \z3
- {296736 \over 875} \* \zs2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {C_{F}^{\,2}} \* {C_{FA}} \* \Biggl(
{17693872049573089 \over 73513818000000}
- {736 \over 5} \* \z5
- {125991917 \over 99225} \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ {10496201057 \over 23152500} \* \z2
+ {288 \over 5} \* \z2 \* \z3
+ {1688888 \over 7875} \* \zs2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}} \* {C^{}_F} \* {C_{FA}} \* \Biggl(
{23852323249607 \over 1444021425000}
+ {1249264 \over 17325 }\* \z3
- {1542176 \over 33075} \* \z2
- {736 \over 75} \* \zs2
\Biggr)
\; .
\end{eqnarray}
Finally the analytic expressions for the moments $\delta c_{3,N}^{(3)}$ in
Eq.~(\ref{eq:dc3ns}) are
\begin{eqnarray}
\label{eq:dc3q2}
\delta c_{3,2}^{(3)} & = &
{C^{}_F} \* {C_{FA}^{\,2}} \* \Biggl(
-{840949 \over 243}
+ {9344 \over 9} \* \z5
- {650360 \over 81 }\* \z3
+ {1472 \over 3} \* \zs3
+ {712328 \over 243} \* \z2
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- {47920 \over 9} \* \z2 \* \z3
+ {30416 \over 5} \* \zs2
+ {8992 \over 63} \* \zt2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {C_{F}^{\,2}} \* {C_{FA}} \* \Biggl(
{15979879 \over 4374}
- {38416 \over 9 }\* \z5
+ {580504 \over 81} \* \z3
- {1456 \over 3} \* \zs3
- {742390 \over 243} \* \z2
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ 4368 \* \z2 \* \z3
- {577264 \over 135} \* \zs2
+ {56432 \over 315} \* \zt2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}} \* {C^{}_F} \* {C_{FA}} \* \Biggl(
-{119522 \over 729}
+ {1792 \over 9} \* \z5
- {57128 \over 81} \* \z3
+ {46112 \over 243 }\* \z2
- {1408 \over 9 }\* \z2 \* \z3
+ {40024 \over 135} \* \zs2
\Biggr)
\; ,
\quad
\\[2mm]
\label{eq:dc3q4}
\delta c_{3,4}^{(3)} & = &
{C^{}_F} \* {C_{FA}^{\,2}} \* \Biggl(
-{21230721185377 \over 4374000000}
+ {23704 \over 45} \* \z5
- {292322783 \over 20250} \* \z3
+ {1472 \over 3} \* \zs3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ {5644168873 \over 1215000} \* \z2
- {100792 \over 15} \* \z2 \* \z3
+ {6477802 \over 675} \* \zs2
+ {8992 \over 63} \* \zt2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {C_{F}^{\,2}} \* {C_{FA}} \* \Biggl(
{19991706724601 \over 4374000000}
- 5208 \* \z5
+ {272933467 \over 20250} \* \z3
- {1456 \over 3} \* \zs3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- {6307524619 \over 1215000} \* \z2
+ {253064 \over 45} \* \z2 \* \z3
- {2500616 \over 375} \* \zs2
+ {56432 \over 315} \* \zt2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}} \* {C^{}_F} \* {C_{FA}} \* \Biggl(
-{15339664501 \over 72900000}
+ {1792 \over 9} \* \z5
- {755894 \over 675} \* \z3
+ {21942049 \over 60750} \* \z2
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- {1408 \over 9} \* \z2 \* \z3
+ {53128 \over 135} \* \zs2
\Biggr)
\; ,
\quad
\\[2mm]
\label{eq:dc3q6}
\delta c_{3,6}^{(3)} & = &
{C^{}_F} \* {C_{FA}^{\,2}} \* \Biggl(
-{172761364527374293 \over 32162295375000}
+ {21200 \over 63} \* \z5
- {3380925064 \over 165375} \* \z3
+ {1472 \over 3} \* \zs3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ {147865501939 \over 24310125} \* \z2
- {2395856 \over 315} \* \z2 \* \z3
+ {75351016 \over 6125} \* \zs2
+ {8992 \over 63} \* \zt2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {C_{F}^{\,2}} \* {C_{FA}} \* \Biggl(
{313157547783370669 \over 64324590750000}
- {1810712 \over 315 }\* \z5
+ {67828543996 \over 3472875} \* \z3
- {1456 \over 3} \* \zs3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- {3604225183081 \over 486202500} \* \z2
+ {667864 \over 105} \* \z2 \* \z3
- {1397140016 \over 165375} \* \zs2
+ {56432 \over 315} \* \zt2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}} \* {C^{}_F} \* {C_{FA}} \* \Biggl(
-{503591542653161 \over 1837845450000}
+ {1792 \over 9} \* \z5
- {16004944 \over 11025} \* \z3
+ {23420609 \over 42875} \* \z2
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- {1408 \over 9} \* \z2 \* \z3
+ {2135888 \over 4725} \* \zs2
\Biggr)
\; ,
\quad
\\[2mm]
\label{eq:dc3q8}
\delta c_{3,8}^{(3)} & = &
{C^{}_F} \* {C_{FA}^{\,2}} \* \Biggl(
-{45882775286477927067311 \over 8003008282752000000}
+ {22640 \over 63} \* \z5
- {38829577931303 \over 1500282000} \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+{1472 \over 3} \* \zs3
+ {5677110453154657 \over 756142128000} \* \z2
- {7858468 \over 945} \* \z2 \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ {43146817871 \over 2976750} \* \zs2
+ {8992 \over 63} \* \zt2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {C_{F}^{\,2}} \* {C_{FA}} \* \Biggl(
{126830527574348410837327 \over 24009024848256000000}
- {5795836 \over 945} \* \z5
+ {4175977929883 \over 166698000} \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- {1456 \over 3} \* \zs3
- {194854342276579 \over 20003760000} \* \z2
+ {6509876 \over 945} \* \z2 \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- {58805551031 \over 5953500} \* \zs2
+ {56432 \over 315} \* \zt2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}} \* {C^{}_F} \* {C_{FA}} \* \Biggl(
-{3380190329263337489 \over 9527390812800000}
+ {1792 \over 9} \* \z5
- {1026540911 \over 595350} \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ {3263620615369 \over 4500846000} \* \z2
- {1408 \over 9} \* \z2 \* \z3
+ {778496 \over 1575} \* \zs2
\Biggr)
\; ,
\quad
\\[2mm]
\label{eq:dc3q10}
\delta c_{3,10}^{(3)} & = &
{C^{}_F} \* {C_{FA}^{\,2}} \* \Biggl(
-{2924815993615556996346598663 \over 483334682604559632000000}
+ {210944 \over 385} \* \z5
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- {3080312718428437 \over 99843767100} \* \z3
+ {1472 \over 3} \* \zs3
+ {13720175530646448109 \over 1537594013340000} \* \z2
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
- {8455904 \over 945} \* \z2 \* \z3
+ {3568998808 \over 218295} \* \zs2
+ {8992 \over 63} \* \zt2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {C_{F}^{\,2}} \* {C_{FA}} \* \Biggl(
{61916581373996975119251441821 \over 10633363017300311904000000}
- {66873844 \over 10395} \* \z5
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ {30055925797598243 \over 998437671000} \* \z3
- {1456 \over 3} \* \zs3
- {3186598606475201011 \over 263587545144000} \* \z2
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ {75900812 \over 10395} \* \z2 \* \z3
- {1995571648453 \over 180093375} \* \zs2
+ {56432 \over 315} \* \zt2
\Biggr)
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}
+ {n^{}_{\! f}} \* {C^{}_F} \* {C_{FA}} \* \Biggl(
-{339629926756418877268603 \over 767197908896126400000}
+{1792 \over 9} \* \z5
- {70469642338 \over 36018675} \* \z3
\nonumber\\
& &\mbox{{\hspace{-4.5mm}}}\qquad\qquad
+ {2677118231310293 \over 2995313013000} \* \z2
- {1408 \over 9} \* \z2 \* \z3
+ {1015276 \over 1925} \* \zs2
\Biggr)
\; .
\end{eqnarray}
\section{Introduction}
\setcounter{equation}{0}
We consider the following model boundary value problem (BVP) for the Helmholtz equation in a plane angle $Q$ of magnitude $\Phi>\pi$ with a complex frequency $\omega\in\mathbb{C}^{+}=\big\lbrace \Im\omega>0\big\rbrace$:
\begin{equation}\label{D}
\left\{ \begin{array}{rcl}
(-\Delta-\omega^2)u(x) =0, & & x\in Q \\\\
B_1 u(x)\Big\vert_{\Gamma_1} = f_1(x), \\\\
B_2 u(x)\Big\vert_{\Gamma_2}=f_2(x). & &
\end{array}
\right.
\end{equation}
Here $\Gamma_{l}$ for $l=1,2$ are the sides of the angle $Q$, $B_{l}=I$ or $B_{l}=\displaystyle\frac{\partial}{\partial n_{l}}$ ($n_l$ is the exterior normal to $\Gamma_l$), and $f_{l}$ are given functions which can be distributions; see Fig.\;\ref{AQ}.
\newpage
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.27]{AQ.pdf}
\caption{BVP in exterior angle}\label{AQ}
\end{figure}
Problems of this type arise in many areas of mathematical physics.
We list some of them.\\
Firstly, such BVPs describe the diffusion of a disintegrating gas \cite[Ch. VII, cf. 2]{Tikh}.\\
Secondly, diffraction problems by the wedge $W=\mathbb{R}^2\setminus Q$ are reduced to such problems for a lossy medium \cite{Teix 1991} or for a slightly conducting medium \cite{ST 1989}.\\
Thirdly, time-dependent diffraction by wedges \cite{Sob34,KB,Ka,Somm,obe,BER JML1,Ro,Rot,kmjv}
is reduced to a problem of this type after the Fourier-Laplace (F-L) transform $t\to \omega$ with respect to time \cite{KMM, km, mm, mesqNN}.\\
Fourthly, the problem of scattering of waves emitted by a point source by $W$ is reduced
to this problem, with
$$
f_{1}(y_1)= e^{ik y_1},\quad y_1>0,\quad f_2=0.
$$
Let us describe this scattering problem in more detail since it was precisely this problem which was the starting point of the present paper.\\
\medskip
\noindent Let us consider the following time-dependent scattering problem:
\begin{equation}\label{we}
\left.
\begin{array}{rcl}
U_{tt}(y,t) =\Delta U(y,t)+\delta(y-y^{\ast}) e^{-i\omega_0 t},\quad y\in Q:=\mathbb{R}^2\setminus W,\quad t\geq 0,\quad \omega_0\geq0\\\\
U(y,t)\Big\vert_{\partial Q} =0.
\end{array}
\right|
\end{equation}
Here $y^{\ast}\in Q$.
After the F-L transform $t\to\omega$: $U(y,t)\longrightarrow \hat{U}(y,\omega)$, problem (\ref{we}) becomes equivalent to
\begin{equation}\label{fwe}
\left.
\begin{array}{rcl}
-\omega^2\hat{U}(y,\omega)&=&\Delta\hat{U}(y,\omega)+\delta(y-y^{\ast})\displaystyle\frac{i}{\omega-\omega_0},\quad\omega_0>0\\\\
\hat{U}(y,\omega)\Big\vert_{\partial Q}&=&0,
\end{array}
\right| \omega\in\mathbb{C}^{+}
\end{equation}
see Fig.\;\ref{W}
\newpage
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{SOU.pdf}
\caption{}\label{W}
\end{figure}
We reduce this problem to a homogeneous Helmholtz equation with nonhomogeneous boundary conditions.
Let $\mathscr{E}(y,\omega)$ be such that
\begin{equation}\label{ge}
-\Delta\mathscr{E}(y,\omega)-\omega^2\mathscr{E}(y,\omega)=\frac{i\delta(y-y^{\ast})}{\omega-\omega_0},\quad y\in\mathbb{R}^2,\quad \Im\omega>0,\quad\omega_0>0.
\end{equation}
Passing to the Fourier transform $y\to\eta$ in (\ref{ge}), we obtain
\begin{equation*}
\tilde{\mathscr{E}}(\eta,\omega)=\frac{i e^{i\eta\cdot y^{\ast}}}{(|\eta|^2-\omega^2)(\omega-\omega_0)}.
\end{equation*}
Hence
\begin{equation*}
\mathscr{E}(y,\omega)=\frac{i}{(\omega-\omega_0)(2\pi)^2}\int\limits_{\mathbb{R}^2}\frac{e^{-i\eta(y-y^{\ast})}}{|\eta|^2-\omega^2}\;d\eta.
\end{equation*}
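For $\Im\omega>0$ the last integral is, up to the factor $i/(\omega-\omega_0)$, the classical outgoing fundamental solution $\frac{i}{4}H_0^{(1)}\big(\omega|y-y^{\ast}|\big)$ of the operator $-\Delta-\omega^2$ in $\mathbb{R}^2$. The following sketch (the frequency and the grid are our illustrative choices, not taken from the text) checks numerically that $u(r)=H_0^{(1)}(\omega r)$ solves the radial Helmholtz equation $u''+u'/r+\omega^2 u=0$ away from the origin:

```python
import numpy as np
from scipy.special import hankel1

omega = 2.0 + 0.5j                 # an illustrative frequency with Im(omega) > 0
r = np.linspace(1.0, 3.0, 2001)    # radial grid away from the origin
h = r[1] - r[0]
u = hankel1(0, omega * r)          # u(r) = H_0^(1)(omega * r)

# central finite differences for u' and u''
u1 = (u[2:] - u[:-2]) / (2 * h)
u2 = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2

# residual of the radial Helmholtz equation u'' + u'/r + omega^2 u = 0
residual = u2 + u1 / r[1:-1] + omega**2 * u[1:-1]
print(np.max(np.abs(residual)))    # small: only O(h^2) discretization error
```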
Define
\begin{equation*}
U_{s}(y,\omega):=\hat{U}(y,\omega)-\mathscr{E}(y,\omega),\quad y\in Q,\quad \omega\in\mathbb{C}^{+}.
\end{equation*}
By (\ref{fwe}), (\ref{ge}) this implies that $U_s(y,\omega)$ satisfies the problem of type (\ref{D}):
\begin{equation}\label{su_s}
\left\{ \begin{array}{rcl}
-\Delta U_{s}(y,\omega)-\omega^2 U_{s}(y,\omega) = 0, & & y\in Q \\\\
U_{s}(y,\omega)\Big\vert_{\partial Q} = -\mathscr{E}(y,\omega)\Big\vert_{\partial Q} & &
\end{array}
\right| \ \omega\in\mathbb{C}^{+}.
\end{equation}
Let us calculate $\mathscr{E}(y,\omega)$ on $\partial Q$. We have
\begin{equation*}
-\mathscr{E}(y, \omega)\Big\vert_{Q_1}=-\mathscr{E}(y_1, 0, \omega)= C(\omega)\displaystyle\int\limits_{\mathbb{R}^2} e^{-i\eta_1 y_1}\; C_1(\eta, \omega,y^{\ast})\;d\eta
\end{equation*}
where $C(\omega)=\displaystyle\frac{i}{(\omega-\omega_0)(2\pi)^2}$; $C_1(\eta, \omega, y^{\ast})= \displaystyle\frac{e^{i\eta\cdot y^{\ast}}}{|\eta|^2-\omega^2}$.
Hence,\\
$-\mathscr{E}(y, \omega)\Big\vert_{Q_2}= C(\omega)\displaystyle\int\limits_{\mathbb{R}^2} e^{-i(\eta_1\cot\Phi+\eta_2)y_2}\; C_1(\eta, \omega,y^{\ast})\;d\eta$.
Thus (\ref{su_s}) is equivalent to
\begin{equation}\label{pU_s}
\left\{ \begin{array}{rcl}
& &-\Delta U_{s}(y,\omega)-\omega^2 U_{s}(y,\omega) = 0, \quad y\in Q \\\\
& & U_{s}(y_1,0,\omega) = C(\omega)\displaystyle\int\limits_{\mathbb{R}^2} e^{-i\eta_1 y_1}\;C_1(\eta, \omega, y^{\ast})\;d\eta, \quad y_1>0\\\\
& & U_{s}(y_2\cot\Phi, y_2,\omega)= C(\omega) \displaystyle\int\limits_{\mathbb{R}^2} e^{-i(\eta_1\cot\Phi+\eta_2)y_2}\;C_1(\eta, \omega, y^{\ast})\;d\eta, \quad y_2>0.
\end{array}
\right.
\end{equation}
Therefore it seems natural to first solve the following model problem corresponding to problem (\ref{pU_s}):
\begin{equation}\label{PU}
\left\{ \begin{array}{rcl}
& &-\Delta U(y,\omega)-\omega^2 U(y,\omega) = 0, \quad y\in Q \\\\
& & U(y_1,0) = e^{-ik_1 y_1}, \quad y_1>0\\\\
& & U(y_2\cot\Phi, y_2)= e^{-\frac{ik_2 y_2}{\sin\Phi}}, \quad y_2>0,
\end{array}
\right.
\end{equation}
where $k_1, k_2\in\mathbb{R}$.\\
Note that this problem is similar to the BVP in \cite[(23)]{KMM} arising in the problem of time-dependent diffraction:
\begin{equation}\label{23}
\left\{ \begin{array}{rcl}
& &\big(-\Delta-\omega^2\big) U(\omega,y) = 0, \quad y\in Q \\\\
& & U(\omega, y) = -g(\omega) e^{i\omega y_1\cos\alpha}, \quad y\in Q_1\\\\
& & U(\omega, y)= -g(\omega) e^{-i\omega y_2 \big(\cos(\alpha+\Phi)/\sin\Phi\big)}, \quad y\in Q_2.
\end{array}
\right.
\end{equation}
However, there are substantial differences. The exponents on the right-hand side of
this problem are complex ($\Im \omega>0$) and thus the corresponding functions decrease exponentially as $y_{1,2}\longrightarrow +\infty$, since $\alpha<\phi<\displaystyle\frac{\pi}{2}$.\\ Moreover, the structure of the boundary conditions in (\ref{23}) is connected with the first equation through the common parameter $\omega$. This gives a unique opportunity to reduce problem (\ref{23}) to a difference equation which is easily solved in explicit form.\\
In contrast, problem (\ref{PU}) has periodic boundary conditions which are independent of the first equation. As a result, the corresponding difference equation cannot be solved as easily as in the previous case, except when $\Phi=\displaystyle\frac{3\pi}{2}$ (see Section\;5).\\
In turn, problem (\ref{PU}) splits into two problems for $u_1, u_2$ such that $U=u_1+u_2$ by linearity:
\begin{equation}\label{u_1}
\left\{ \begin{array}{rcl}
& &-\Delta u_1(y,\omega)-\omega^2 u_1(y,\omega) = 0, \quad y\in Q \\\\
& & u_1(y_1,0) = e^{-ik_1 y_1}, \quad y_1>0\\\\
& & u_1(y_2\cot\Phi, y_2)= 0, \quad y_2>0.
\end{array}
\right.
\end{equation}
\medskip
\begin{equation}\label{u_2}
\left\{ \begin{array}{rcl}
& &-\Delta u_2(y,\omega)-\omega^2 u_2(y,\omega) = 0, \quad y\in Q \\\\
& & u_2(y_1,0) = 0, \quad y_1>0\\\\
& & u_2(y_2\cot\Phi, y_2)= e^{-\frac{ik_2 y_2}{\sin\Phi}}, \quad y_2>0.
\end{array}
\right.
\end{equation}
Here $\omega\in\mathbb{C}^{+}$.
This paper is devoted to solving the model problem (\ref{u_1}). The solution of (\ref{u_2}) is obtained from (\ref{u_1}) by a simple change of variable (see (\ref{u_1''})).\\
Note that BVPs in a right angle $Q$ or in its complement, and in other particular angles whose magnitudes are commensurate with $\pi$, were considered in many papers \cite{Mei87, MPR, Mei93, Mei94, ms, Pen99, Cas08, CK08, caskap, 4, Cas06}.\\
In those papers exact results were obtained by means of operator methods. Boundary data in those papers belong to Sobolev spaces $H_s(\mathbb{R})$, $s>0$. We consider another type of boundary data, namely, periodic functions. We obtain exact solutions in explicit form, namely, in the form of Sommerfeld type integrals. We use the method of automorphic functions (MAF) on complex characteristics \cite{BKM}. This method was developed by A.Komech for $\Phi<\pi$ in \cite{K} and then was extended to $\Phi>\pi$ in \cite[Section\;1.2 and part\;2]{BKM}. It allows us to find all distributional solutions of the BVP for the Helmholtz equation in arbitrary angles with general boundary conditions. It was applied, in particular, to time-dependent diffraction problems by angles \cite{mm, KMJ2014, MZJ2015, KM14, KM15, KMNMV2018}.\\
It should be noted that there is a very effective Sommerfeld-Malyuzhinets method of constructing solutions of diffraction problems in angles; by means of this method many important results were obtained \cite{BLG}. This method allows one to obtain the solution in the form of the Sommerfeld integral. However, it does not allow one to prove uniqueness, which usually is proved on the basis of physical considerations.
We also obtain the solution in Sommerfeld integral form, using the MAF, which also allows us to prove uniqueness in an appropriate functional space (see e.g. \cite{Mei87}).
The paper is organized as follows: in Section 2 we formulate the main result; Sections 3-10 are devoted to its proof. In Section 3 we reduce the boundary value problem to a difference equation and prove necessary and sufficient conditions for the existence of a solution. In Sections 4 and 5 we find the solution of the difference equation for $\Phi\not = \frac{3}{2}\pi$ and $\Phi = \frac{3}{2}\pi$, respectively. In Section 6 we prove the asymptotics of the integrand of the Sommerfeld representation of the solution. In Sections 7 and 8 we give the Sommerfeld-type representation of the solution and prove the boundary conditions. In Section 9 we prove the existence and uniqueness of the solution. In the Appendices we prove some technical assertions.
\section{The main result}\label{MR}
\setcounter{equation}{0}
We will construct the solutions of problems (\ref{u_1}) and (\ref{u_2}) in the form of the well-known Sommerfeld integrals which have the form
\begin{equation*}
\displaystyle\int\limits_{\mathcal{C}} e^{-\omega\rho\sinh w} v(w+i\theta)\;dw,
\end{equation*}
where $\mathcal{C}$ is a certain contour in the complex plane; the correct construction of the factor $v(w)$, which ensures that the integral satisfies the boundary conditions, is the main difficulty of the problem.\\
To formulate the main result we need to describe the integrand $v(w)$ of the Sommerfeld integral. The construction of this integrand is the main content of this paper.\\
Consider $\hat{v}_1(w)$ given by (\ref{check{v}_1}) where $\hat{v}_1^1(w)$ is given by (\ref{hat{v}_1^1}) for $\Phi\neq \displaystyle\frac{3\pi}{2}$ and by (\ref{u_3}) for $\Phi=\displaystyle\frac{3\pi}{2}$ with $\hat{G}$, given by (\ref{G(w)}).
Let $(\rho,\theta)$ be the polar coordinates in $\mathbb{R}_{y}^{2}$,
\begin{equation}\label{PC}
y_1=\rho\cos\theta,\quad y_2=\rho\sin\theta, \quad \rho>0,\quad \theta\in\big[2\pi-\Phi,2\pi\big],
\end{equation}
and $\mathcal{C}$ be the Sommerfeld double-loop contour, see Fig.\;\ref{CS3}.
\begin{defn}
$E$ is the space of $C^{\infty}\big(\overline{Q}\setminus\lbrace 0\rbrace\big)$ functions which are bounded
together with their first derivatives in $\overline{Q}\setminus B_{\varepsilon}(0)$ for every $\varepsilon>0$ and admit the following asymptotics at the origin
\begin{equation}\label{SP}
\left. \begin{array}{rcl}
& & u(\rho,\theta)= C(\theta)+o(1)\\\\
& & \nabla u(\rho,\theta)= C_1(\theta) \rho^{-1}+C_2(\theta)+o(1)
\end{array}
\right| \quad \rho\to0.
\end{equation}
\end{defn}
Our main result consists in the following statement.
\begin{teo}\label{Tu_1u_2}
{\bf i)} Let $\omega\in\mathbb{C}^{+}, k_1, k_2>0$. There exists a solution to problem (\ref{PU})
\begin{equation*}
U(\rho, \theta)=u_1(\rho, \theta)+u_2(\rho, \theta),\quad (\rho, \theta)\in Q,
\end{equation*}
belonging to $E$, where $u_1, u_2$ are solutions to (\ref{u_1}), (\ref{u_2}), respectively, which admit a Sommerfeld integral representation
\begin{equation}\label{u_1'}
u_1(\rho,\theta)=\displaystyle\frac{1}{4\pi\sin\Phi}\int\limits_{\mathcal{C}} e^{-\omega\rho\sinh w}\hat{v}_1(w+i\theta) dw,
\end{equation}
\begin{equation}\label{u_1''}
u_2(\rho,\theta)=\displaystyle\frac{1}{4\pi\sin\Phi}\int\limits_{\mathcal{C}} e^{-\omega\rho\sinh w}\hat{v}_1(w+i\theta_1) dw,
\end{equation}
where $\hat{v}_1$ is constructed according to the algorithm presented below, with $k=k_1$ in (\ref{u_1'}) and $k=k_2$ in (\ref{u_1''}), and
\begin{equation}\label{Ch}
\theta_1= -\theta+4\pi-\Phi.
\end{equation}
{\bf ii)} The solution $U$ is unique in $E$.
\end{teo}
\begin{obs}\label{rbc}
The integrals in (\ref{u_1'}), (\ref{u_1''}) converge absolutely since the infinite part of $\mathcal{C}$ belongs to the region of the superexponential decrease of $e^{-\omega \rho\sinh w}$ (see Fig.\;\ref{C_1C_2}) and $\hat{v}_1$ admits asymptotics (\ref{as hat{v}_1}). The boundary values are understood in the sense of distributions.
\end{obs}
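The superexponential decay invoked above can be illustrated numerically: along a horizontal ray with $\Re w\to+\infty$ lying in the decay region, $\Re(\omega\sinh w)$ grows like $e^{\Re w}$, so $e^{-\omega\rho\sinh w}$ decays superexponentially. A small sketch (the frequency and the ray are our illustrative choices, not from the text):

```python
import cmath

omega = 1.0 + 0.8j                    # illustrative frequency, Im(omega) > 0
exponents = []
for a in (2.0, 4.0, 6.0):             # points on the ray w = a + 0.3i
    w = a + 0.3j
    exponents.append((omega * cmath.sinh(w)).real)
print(exponents)                      # positive and rapidly growing, roughly ~ e^a
```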
\begin{obs}\label{r22}
The representation (\ref{u_1''}) follows easily from (\ref{u_1'}) by the change (\ref{Ch}).
\end{obs}
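The change (\ref{Ch}) is an orientation-reversing map of the angular interval $\big[2\pi-\Phi,\,2\pi\big]$ onto itself which exchanges the two sides of $Q$, as the following one-line check illustrates (the value of $\Phi$ is our illustrative choice):

```python
from math import pi

Phi = 1.4 * pi                        # an illustrative opening, pi < Phi < 2*pi
theta1 = lambda theta: -theta + 4 * pi - Phi

# the side theta = 2*pi is mapped to theta = 2*pi - Phi, and vice versa
print(theta1(2 * pi), 2 * pi - Phi)
print(theta1(2 * pi - Phi), 2 * pi)
```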
\begin{obs}\label{ast1}
Note that the solution $u_{1}$ also admits a slightly different representation where several different Sommerfeld-type contours are used (see (\ref{des})).
\end{obs}
\section{Reduction to a difference equation. Necessary and sufficient conditions for the Neumann data}
\setcounter{equation}{0}
Consider problem (\ref{u_1}). The MAF allows us to reduce this problem to finding the Neumann data of the solution $u_1$; it consists of several steps, which we present in the following subsections.\\
We assume that the solution $u_1\in S'(Q):=\Big\lbrace u\Big\vert_{Q}, u\in S'(\mathbb{R}^2)\Big\rbrace$.
The first step of the MAF is to reduce the problem to the complement of the first quadrant and to extend the solution $u_1$ to the plane, see \cite{BKM, km}.
\subsection{First step: extension of the solution to the whole plane $\mathbb{R}^2$}
Consider the linear transformation
\begin{equation*}
\mathcal{J}(y): x_1=y_1+y_2\cot\Phi,\quad x_2=-\frac{y_2}{\sin\Phi},
\end{equation*}
which sends the angle $Q$ to the right angle $K:=\Big\lbrace (x_1,x_2): x_1<0~{\rm or}~x_2<0\Big\rbrace$. This transformation reduces system (\ref{u_1}) to the problem (\ref{ra1})-(\ref{ra3}) in the complement $K$ of the first quadrant for
\begin{equation*}
v(x):=u_1\big(\mathcal{J}^{-1}(y)\big)
\end{equation*}
\begin{subequations}
\begin{empheq}[left=\empheqlbrace]{align}
\mathscr{H}(D) v(x) &= 0, \qquad\quad~ x\in K
\label{ra1}
\\ \nonumber\\
v(x_1,0) &= e^{-ik x_1}, \quad x_1>0
\label{ra2}
\\ \nonumber\\
v(0,x_2) &= 0, \qquad\quad ~x_2>0,
\label{ra3}
\end{empheq}
\end{subequations}
where
\begin{equation}\label{mathcal}
\mathscr{H}(D)=-\displaystyle\frac{1}{\sin^2\Phi}\Bigg[\Delta-2\cos\Phi\displaystyle\frac{\partial^2}{\partial x_1\partial x_2}\Bigg]-\omega^2.
\end{equation}
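The identity behind (\ref{mathcal}) is the chain rule $\partial_{y_1}=\partial_{x_1}$, $\partial_{y_2}=\cot\Phi\,\partial_{x_1}-\frac{1}{\sin\Phi}\,\partial_{x_2}$; since these operators commute, the computation can be checked at the level of symbols. A symbolic sketch (the symbol names are ours):

```python
import sympy as sp

d1, d2, Phi = sp.symbols('d1 d2 Phi')   # d1, d2 stand for d/dx1, d/dx2 (commuting)

# chain rule for the transformation J: d/dy1 = d1,  d/dy2 = cot(Phi)*d1 - d2/sin(Phi)
Dy1 = d1
Dy2 = sp.cot(Phi) * d1 - d2 / sp.sin(Phi)

laplacian_y = sp.expand(Dy1**2 + Dy2**2)
target = (d1**2 + d2**2 - 2 * sp.cos(Phi) * d1 * d2) / sp.sin(Phi)**2

diff = sp.simplify(laplacian_y - target)
print(diff)   # 0: the Laplacian in y becomes the bracket in the operator above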
By \cite[Lemma\;8.2]{BKM}, if $v(x)\in S'(K)$ is a solution of equation (\ref{ra1}), then there exists an extension $v_0\in S'(\mathbb{R}^2)$ of $v$ by 0, such that $v_0\Big\vert_{K}=v$,
\begin{equation}\label{Pu_0}
\mathscr{H}(D)v_0(x)=\gamma(x),\quad x\in\mathbb{R}^2,
\end{equation}
where $\gamma\in S'(\mathbb{R}^2)$ and has the form
\begin{equation}\label{gamma_a}
\begin{array}{lll}
\gamma (x)&=&\displaystyle\frac{1}{\sin^2\Phi}\Big[\delta(x_2)v_1^1(x_1)+\delta'(x_2)v_1^0(x_1)+\delta(x_1)v_2^1(x_2)+\delta'(x_1)v_2^0(x_2)-\\\\
&-& 2\cos\Phi\;\delta(x_2)\;\partial_{x_{1}} v_1^{0}(x_1)-2\cos\Phi\;\delta(x_1)\partial_{x_2}\;v_2^{0}(x_2)\Big],\quad x\in\mathbb{R}^2;
\end{array}
\end{equation}
$v_{l}^{\beta}(x_{l})\in S'(\overline{\mathbb{R}^{+}}):=\Big\lbrace v\in S'(\mathbb{R}): {\rm supp}\;v\subset \overline{\mathbb{R}^+}\Big\rbrace$.\\
We will use the extension of the Fourier transform $F$ defined on $S(\mathbb{R})\subset S'(\mathbb{R}^2)$, $ \varphi(x_1,x_2)\to \tilde{\varphi}(z_1,z_2)$, $\varphi\in S(\mathbb{R}^2)$ to $S'(\mathbb{R}^2)$ by continuity:
\begin{equation}\label{ft}
F_{x\to z}\big[\varphi\big](z)=F\big[\varphi(x)\big](z)=\tilde{\varphi}(z_1,z_2):=\iint e^{iz_1x_1+iz_2 x_2} \varphi(x_1, x_2)\;dx_1 dx_2,
\end{equation}
and denote this extension by the tilde, $\tilde{v}(z)=F_{x\to z}[v(x)],~v\in S'(\mathbb{R}^2)$. Applying this transform to (\ref{Pu_0}) and using the fact that $\mathscr{H}(z)\neq 0, z\in\mathbb{R}^2$, we obtain
\begin{align}\label{div1}
\tilde{v}_0(z)&=\displaystyle\frac{\tilde{\gamma}(z)}{\mathscr{H}(z)},\quad z\in\mathbb{R}^2,\quad \tilde{v}_0\in S'(\mathbb{R}^2).
\end{align}
Hence,
\begin{equation}\label{8.5}
v_0(x)=F^{-1}_{x\to z}\Bigg[\displaystyle\frac{\tilde{\gamma}(z)}{\mathscr{H}(z)}\Bigg],\quad x\in\mathbb{R}^2,\quad v(x)=v_0(x)\Big\vert_{K}.
\end{equation}
Here $\tilde{\gamma}(z)$ is the Fourier transform of (\ref{gamma_a}), and for $z\in\mathbb{R}^2$
\begin{equation}\label{tilde{gamma}}
\begin{array}{lll}
\tilde{\gamma} (z)
= \displaystyle\frac{1}{\sin^2\Phi}\Big[\tilde{v}_1^1(z_1)-\tilde{v}_1^0(z_1)\big(iz_2-2\cos\Phi\;iz_1\big)+\tilde{v}_2^1(z_2)-
\tilde{v}_2^0(z_2)\big(iz_1-2\cos\Phi\;iz_2\big)\Big],
\end{array}
\end{equation}
where $\tilde{v}_{l}^{\beta}(z_l)$ are the Fourier transforms of $v_{l}^{\beta}(x_{l})$. Thus, if we know $v_{l}^{\beta}(x_{l})$, we know $v$ by (\ref{8.5}), and problem (\ref{ra1})-(\ref{ra3}) is reduced to finding the four functions $v_{l}^{\beta}(x_{l})$, $l=1,2$, $\beta=0,1$.
\begin{obs}\label{rl}
Formula (\ref{gamma_a}) is obtained by direct differentiation (in the sense of
$\mathcal{D}'(\mathbb{R}^2))$ of the discontinuous function
\begin{equation}\label{ex}
v_0(x)=\Big[v(x)\Big]_0:=\left\{
\begin{array}{rcl}
v(x), & & x\in K\\\\
0, & & x\notin K
\end{array}
\right.
\end{equation}
in the case when $v(x)\in C^{\infty}(\overline{K})$. Moreover, the formula
$$
f'_{0}(x)=\Big[f'(x)\Big]_{0}+f(0)\delta(x),\quad x\in\mathbb{R}
$$
is used for $f\in C^{\infty}\Big(\overline{\mathbb{R}^{+}}\Big)$. Obviously, in this case the functions $v_l^{\beta}$ in (\ref{gamma_a}) are the Cauchy data of the function $v$:
\begin{equation}\label{vlb}
\begin{array}{lll}
v_1^0(x_1)= v(x_1,0),~ x_1>0, \qquad \quad v_1^1(x_1) = \displaystyle\frac{\partial}{\partial x_2} v(x_1,0-),\quad~ x_1>0\\\\
v_2^0(x_2) = v(0-,x_2), ~ x_2>0,\qquad v_2^1(x_2) =\displaystyle\frac{\partial}{\partial x_1} v(0-,x_2), ~\quad x_2>0.
\end{array}
\end{equation}
It turns out that formula (\ref{gamma_a}) and representations (\ref{vlb}) remain true for distributional solutions. The following two lemmas describe the solution of equation (\ref{ra1}) in terms of its Cauchy data.
\end{obs}
\begin{lem}\label{8.3}
\cite[Lemma\;8.3]{BKM}. Let $v\in S'(K)$ be a distributional solution to equation (\ref{ra1}) and let $v_0$ be its extension by 0 to $\mathbb{R}^2$ satisfying (\ref{Pu_0}), (\ref{gamma_a}). Then the Cauchy data
\begin{equation*}
\left\{
\begin{array}{rcl}
v_1^{\beta}(x_1):=\partial_2^{\beta} v_0(x_1,0-) & & x_1>0 \\\\
v_2^{\beta}(x_2):=\partial_2^{\beta} v_0(0-,x_2) & & x_2>0
\end{array}
\right| \ \beta=0,1
\end{equation*}
exist (here the limits are understood in the sense $\mathcal{D}'(\mathbb{R}^+):=\mathcal{D}'(\mathbb{R})\Big\vert_{\mathbb{R}^{+}}).$
\end{lem}
\begin{lem}\label{l8.4}
\cite[Lemma\;8.4]{BKM}. Let $v\in S'(K)$ be a distributional solution to equation (\ref{ra1}) given by (\ref{8.5}), where $\gamma$ is defined by (\ref{gamma_a}). Then $v_l^{\beta}\Big\vert_{\mathbb{R}^{+}}$ are the Cauchy data of $v$.
\end{lem}
\begin{obs}
Formula (\ref{8.5}) and Lemma\;\ref{l8.4} show that it suffices to find the Neumann data $v_1^1, v_2^1$ in (\ref{gamma_a}) in order to solve problem (\ref{ra1})-(\ref{ra3}).
\end{obs}
\medskip
Now we use boundary conditions (\ref{ra2}), (\ref{ra3}). Let $v\in S'(K)$ be a solution to (\ref{ra1})-(\ref{ra3}) and $v_0\in\mathcal{S}'(\overline{K})$ be its extension by $0$ satisfying (\ref{Pu_0}); then,
by (\ref{vlb}),
\begin{equation}\label{gamma_l^beta}
\left.
\begin{array}{rcl}
v_1^0(x_1)\Big\vert_{\mathbb{R}^+}= e^{-i k x_1},\quad x_1>0,\qquad
v_2^0(x_2)\Big\vert_{\mathbb{R}^+}= 0,\quad x_2>0.
\end{array}
\right.
\end{equation}
Since ${\rm supp}~v_{l}^{0}(x_{l})\subset \overline{\mathbb{R}^+}$, by distribution theory we have, in general,
\begin{equation}\label{deltam}
\left\{
\begin{array}{rcl}
v_1^0(x_1)&=&\Big[e^{-ik x_1}\Big]_0+c_1^0\delta(x_1)+c_1^1\delta'(x_1)+\cdots +c_1^{m}\delta^{(m)}(x_1),\\\\
v_2^0(x_2)&=&c_2^0\delta(x_2)+c_2^1\delta'(x_2)+\cdots +c_2^{m}\delta^{(m)}(x_2),
\end{array}
\right.
\end{equation}
for some $m\geq0$.
Here $\Big[e^{-ikx_1}\Big]_0$ is defined similarly to (\ref{ex}). Obviously $\Big[e^{-ikx_1}\Big]_0=\Theta(x_1) e^{-ikx_1}$, where $\Theta$ is the Heaviside function.
We will find a solution to (\ref{ra1})-(\ref{ra3}) for
\begin{equation}\label{as}
c_1^0=\cdots =c_1^m=c_2^0=\cdots=c_2^m=0.
\end{equation}
Thus we put
\begin{equation}\label{v_1^0}
v_1^0(x_1)= \Big[e^{-ik x_1}\Big]_0=\left\{
\begin{array}{rcl}
e^{-ikx_1},& & x_1\geq0\\\\
0,& & x_1<0
\end{array}
\right. \ \in S'(\mathbb{R}),\quad v_2^0(x_2)=0.
\end{equation}
\begin{obs}
It is not guaranteed a priori that the solution of (\ref{ra1})-(\ref{ra3}) exists under condition (\ref{as}) because $\tilde{v}_{l}^{\beta}$ should satisfy a certain connection equation (see Section\;3.2). Nevertheless, it turns out that we are able to construct an explicit solution under the condition (\ref{as}). Solutions which correspond to nonzero values of $c_{l}^m$ in (\ref{deltam}) will only contain additional singularities at the origin, and are not of interest.
\end{obs}
\noindent Substituting $v_1^0, v_2^0$ given by (\ref{v_1^0}) in (\ref{Pu_0}), we obtain
\begin{equation}\label{e3}
\mathscr{H}(D) v_0(x)=\gamma(x),
\end{equation}
with $\gamma$ containing only two unknown functions $v_1^1$ and $v_2^1$.\\
The MAF gives the necessary and sufficient conditions for the functions $v_1^1$
and $v_2^1$, which allow us to find these functions in an explicit form. Substituting these functions in (\ref{e3}) we obtain $v_0$ (and so $v$) by (\ref{8.5}), (\ref{gamma_a}).\\
In what follows we consider equation (\ref{e3}).
\subsection{Second step: Fourier-Laplace transform and the lifting to the Riemann surface. Connection equation}
In addition to the (real) Fourier transform (\ref{ft}) we will use the complex Fourier transform (or Fourier-Laplace (F-L) transform). Let
\begin{equation*}
f\in S'(\overline{\mathbb{R}^{+}}):=\Big\lbrace f\in S'(\mathbb{R}): {\rm supp}\;f\subset \overline{\mathbb{R}^{+}}\Big\rbrace.
\end{equation*}
Then by the Paley-Wiener theorem \cite{RS} (see also \cite[Theorem\;5.2]{Ka}), $\tilde{f}(z_1)=F\big[f\big](z_1)$, $z_1\in\mathbb{R}$, admits an analytic continuation $\tilde{f}(z)\in\mathcal{H}\big(\mathbb{C}^+\big)$, $\mathbb{C}^{+}:=\Big\lbrace z\in\mathbb{C}: \Im z>0\Big\rbrace$, and $\tilde{f}(z_1+i\varepsilon)\to \tilde{f}(z_1)$ in $S'(\mathbb{R})$ as $\varepsilon\to 0+$. Since $v_{l}^{\beta}(x_l)\in S'(\overline{\mathbb{R}^{+}})$, there exist their F-L transforms
\begin{equation}\label{atilde{v}_l}
\tilde{v}_{l}^{\beta}(z_l)\in\mathcal{H}\big(\mathbb{C}^+\big),\quad l=1,2;\quad \beta=0,1.
\end{equation}
In particular,
from (\ref{v_1^0}) we have
\begin{equation}\label{tilde{v}_1^0}
\tilde{v}_1^0(z_1)=\frac{i}{z_1-k},\quad z_1\in\overline{\mathbb{C}^{+}},
\end{equation}
where for $z_1\in\mathbb{R}$, $\tilde{v}_1^{0}(z_1)=\displaystyle\lim_{\tau_1\to 0+}\tilde{v}_1^0(z_1+i\tau_1)$ in $S'(\mathbb{R})$.
Hence, using (\ref{tilde{gamma}}) we obtain (since $v_2^0\equiv 0$)
\begin{equation}\label{tilde{gamma}''}
\tilde{\gamma}(z)=\displaystyle\frac{1}{\sin^2\Phi}\Bigg[\tilde{v}_1^1(z_1)+\displaystyle\frac{z_2-2\cos\Phi\;z_1}{z_1-k}+\tilde{v}_2^1(z_2)\Bigg],\quad z\in\mathbb{R}^2.
\end{equation}
In the MAF, the Riemann surface of complex zeros of the symbol of the operator (\ref{mathcal}) plays an essential role, since a necessary condition on $\tilde{\gamma}(z)$ for the existence of the solution can be written in terms of this surface. The symbol of this operator is the polynomial
$$
\mathscr{H}(z)=\displaystyle\frac{1}{\sin^2\Phi}\Big(z_1^2+z_2^2-2z_1 z_2\cos\Phi\Big)-\omega^2,\quad (z_1,z_2)\in\mathbb{C}^2.
$$
Obviously, $\mathscr{H}(z)$ does not have real zeros, but it does have complex ones. Denote the Riemann surface of the complex zeros of $\mathscr{H}$ by
\begin{equation*}
V:=\Big\lbrace z\in\mathbb{C}^2: \mathscr{H}(z)=0\Big\rbrace.
\end{equation*}
It is convenient to parametrize the complex surface $V$ by introducing a parameter $w\in\mathbb{C}$.\\
The Riemann surface $V$ admits a universal covering $\hat{V}$, which is isomorphic to $\mathbb{C}$ (see \cite [Ch. 15]{BKM}). Let $w$ be a parameter on $\hat{V}\cong\mathbb{C}$. Then the formulas
\begin{equation}\label{z_{1,2}(om)}
\left\{
\begin{array}{rcl}
z_1&=&z_1(w)=-i\omega\sinh w\\\\
z_2&=&z_2(w)=-i\omega\sinh(w+i\Phi)
\end{array}
\right| w\in\mathbb{C}
\end{equation}
describe an infinitely sheeted covering of $\mathbb{C}$ onto $V$.\\
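\noindent Indeed, a direct check shows that (\ref{z_{1,2}(om)}) lands in $V$: using the identity $\sinh^2 a+\sinh^2 b-2\sinh a\,\sinh b\,\cosh(b-a)=\sinh^2(b-a)$ with $a=w$, $b=w+i\Phi$, together with $\cosh(i\Phi)=\cos\Phi$ and $\sinh^2(i\Phi)=-\sin^2\Phi$, we obtain
\begin{equation*}
z_1^2(w)+z_2^2(w)-2z_1(w)\,z_2(w)\cos\Phi=-\omega^2\sinh^2(i\Phi)=\omega^2\sin^2\Phi,
\end{equation*}
and hence $\mathscr{H}\big(z_1(w), z_2(w)\big)=\omega^2-\omega^2=0$ for all $w\in\mathbb{C}$.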
Let us ``lift'' the functions $\tilde{v}_{l}^{\beta}(z_l), z_{l}\in\mathbb{C}^{+}$ to $\hat{V}$. For this we must identify
$V_l^{+}:=\Big\lbrace z\in\mathbb{C}^2: \Im z_l>0\Big\rbrace$ with regions on $\hat{V}$. This can be done in many ways. For example, define, for $\omega\in\mathbb{C}^{+}$,
\begin{equation}\label{Gamma_0'}
\Gamma_0=\Gamma_0(\omega):=\Big\lbrace w_1+i\arctan\Big(\displaystyle\frac{\omega_1}{\omega_2}\tanh w_1\Big)\Big| w_1\in\mathbb{R}\Big\rbrace.
\end{equation}
Obviously, for $w\in\Gamma_0$, $\Im \big(z_1(w)\big)=0$. Moreover,
$
\arctan\Big(\displaystyle\frac{\omega_1}{\omega_2}\tanh w_1\Big)\xrightarrow[w_1\to\pm\infty]{}\pm\arctan\displaystyle\frac{\omega_1}{\omega_2}.
$
For $\alpha\in\mathbb{R}$, define
\begin{equation*}
\Gamma_{\alpha}=\Gamma_{\alpha}(\omega):=\Gamma_{0}(\omega)+i\alpha
\end{equation*}
and for $\alpha<\beta$, define
\begin{equation*}
V_{\alpha}^{\beta}:=\Bigg\lbrace w\in\mathbb{C}: \arctan\Big(\displaystyle\frac{\omega_1}{\omega_2}\tanh w_1\Big)+\alpha<\Im w<\arctan\Big(\displaystyle\frac{\omega_1}{\omega_2}\tanh w_1\Big)+\beta\Bigg\rbrace.
\end{equation*}
For $l=1,2$, let us ``lift'' $V_{l}^{+}$ to $\hat{V}$.
Denote this lifting by $\hat{V}_{l}^{+}=\Big\lbrace w\in\hat{V}: \Im\big(z_l(w)\big)>0\Big\rbrace$. Then
\begin{equation*}
\hat{V}_{1}^{+}=\displaystyle\bigcup_{k=-\infty}^{\infty}{V_{2k\pi}^{(2k+1)\pi}},\quad \hat{V}_{1}^{-}=\displaystyle\bigcup_{k=-\infty}^{\infty}{V_{(2k-1)\pi}^{2k\pi}},\quad \hat{V}_{2}^{\pm}=\hat{V}_{1}^{\pm}-i\Phi.
\end{equation*}
Note that $\pm\Im \big(z_{l}(\omega, w)\big)>0,\quad w\in \hat{V}_{l}^{\pm}$.
In what follows we choose the connected components of $\hat{V}_{l}^{+}$ corresponding to the condition $\Im z_{l}>0$: $\hat{V}_1^{+}:= V_0^{\pi}$, $\hat{V}_2^{+}:= V_{-\Phi}^{\pi-\Phi}$
(see Fig.\;\ref{w=i}, where the curves $\Gamma_\alpha(\omega)$ are represented for $\omega_1\geq0$).
\newpage
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.32]{www=i.pdf}
\caption{Connection Equation}\label{w=i}
\end{figure}
Now we ``lift'' $\tilde{v}_{l}^{\beta}(z_{l})$ to $\hat{V}_{l}^{+},~ l=1,2,~ \beta=0,1$, using (\ref{z_{1,2}(om)}). We obtain from (\ref{tilde{v}_1^0}), (\ref{v_1^0})
\begin{align}\label{check{v}_1^0}
\hat{v}_1^0(w)=\displaystyle\frac{i}{-i\omega\sinh w-k},\quad w\in\hat{V}_{1}^{+},\qquad \hat{v}_2^0(w)=0,\quad w\in\hat{V}_{2}^{+}.
\end{align}
Further, $\hat{v}_l^1(w)$ are analytic functions in $\hat{V}_{l}^{+}$ by (\ref{atilde{v}_l}). Our aim is to find the unknown functions $\hat{v}_{l}^1, l=1,2$. Having these functions, we obtain $\tilde{\gamma}(z)$ and the solution $v_{0}(x)$ by (\ref{8.5}), (\ref{div1}).\\
Note that in the case $\Phi>\pi$ the function $\tilde{\gamma}(z)$ given by (\ref{tilde{gamma}''}) is not lifted to $\hat{V}$ since $\hat{\gamma}(w):=\tilde{\gamma}\big(z_1(w), z_2(w)\big)$ is not defined at any point of $\hat{V}$. In fact, $\hat{v}_1^1(w)$ is not defined in $\hat{V}_2^{+}$, $\hat{v}_2^1(w)$ is not defined in $\hat{V}_1^{+}$ since $\hat{V}^{\ast}:=\hat{V}_{1}^{+}\cap \hat{V}_{2}^{+}=\emptyset$, see Fig.\;\ref{w=i}. In the case $\Phi<\pi$ this intersection is not empty and such lifting to $\hat{V}^{\ast}$ is possible \cite{BKM}. Thus, in that case there exists a connection between $\hat{v}_1^1$ and $\hat{v}_2^1$ generated by (\ref{div1}) since $\mathscr{H}(z)$ has zeros in $\hat{V}^{\ast}$ and $\hat{\gamma}(w)$ must vanish for $w\in \hat{V}^{\ast}$.\\
Nevertheless, a similar relation between $\hat{v}_1^1$ and $\hat{v}_2^1$ exists in the case $\Phi>\pi$ too (see \cite[{\rm Chap.\;21}]{BKM}). Let us describe the corresponding construction. The function $\tilde{\gamma}(z)$ splits naturally into two summands, each of which extends analytically to $\hat{V}_{1}^{+}$ and $\hat{V}_{2}^{+}$, respectively.
Namely
\begin{equation*}
\sin^2\Phi\;\tilde{\gamma}(z_1, z_2)=\tilde{v}_{1}(z_1, z_2)+\tilde{v}_{2}(z_1, z_2),
\end{equation*}
where
\begin{equation*}
\left.
\begin{array}{rcl}
\tilde{v}_1(z_1, z_2):=\tilde{v}_1^1(z_1)+\displaystyle\frac{z_2-2\cos\Phi\;z_1}{z_1-k},\qquad \tilde{v}_2(z_1,z_2):= \tilde{v}_2^1(z_2),\quad (z_1, z_2)\in\mathbb{R}^2.
\end{array}
\right.
\end{equation*}
By the Paley-Wiener Theorem the function $\tilde{v}_1(z_1,z_2)$ admits an analytic continuation to $\mathbb{C}_{z_1}^{+}\times \mathbb{C}$, and $\tilde{v}_2(z_1,z_2)$ admits an analytic continuation to $\mathbb{C}\times\mathbb{C}_{z_2}^{+}$, where
$
\mathbb{C}_{z_k}^{+}=\Big\lbrace z_{k}\big\vert\; \Im z_{k}>0\Big\rbrace.
$
Now we can ``lift'' $\tilde{v}_1$ and $\tilde{v}_2$ to the Riemann surface $\hat{V}$ by formulas (\ref{z_{1,2}(om)}).\\
We obtain
\begin{align}\label{check v_1}
\hat{v}_1(w)=\hat{v}_1^1(w)+\omega\sinh(w-i\Phi)\; \hat{v}_1^0(w),\quad w\in \hat{V}_1^{+},\quad\hat{v}_2(w)=\hat{v}_2^1(w), \quad w\in \hat{V}_2^{+}.
\end{align}
Then, by (\ref{check v_1}), (\ref{check{v}_1^0}),
\begin{align}\label{check{v}_1}
\hat{v}_1(w)&= \hat{v}_1^1(w)-\hat{G}(w),\quad w\in\hat{V}_1^{+},
\end{align}
where
\begin{equation}\label{G(w)}
\hat{G}(w):=\displaystyle\frac{i \omega\sinh(w-i\Phi)}{i\omega\sinh w+k},\quad w\in\mathbb{C}.
\end{equation}
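\noindent For the reader's convenience, let us verify (\ref{check{v}_1}), (\ref{G(w)}) by a direct computation. Using $\sinh(w+i\Phi)+\sinh(w-i\Phi)=2\cos\Phi\,\sinh w$ and (\ref{z_{1,2}(om)}), we obtain on $\hat{V}$
\begin{equation*}
z_2(w)-2\cos\Phi\; z_1(w)=-i\omega\Big(\sinh(w+i\Phi)-2\cos\Phi\,\sinh w\Big)=i\omega\sinh(w-i\Phi).
\end{equation*}
Hence, by (\ref{check{v}_1^0}),
\begin{equation*}
\omega\sinh(w-i\Phi)\,\hat{v}_1^0(w)=\displaystyle\frac{i\omega\sinh(w-i\Phi)}{-i\omega\sinh w-k}=-\hat{G}(w),
\end{equation*}
in agreement with (\ref{check{v}_1}).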
In the case $\Phi<\pi$, $\hat{v}_1(w)$ and $\hat{v}_2(w)$ have a common domain $\hat{V}^{\ast}$ which is not empty, and thus the connection equation has the following form:
\begin{equation}\label{cecs}
\hat{v}_1(w)+\hat{v}_2(w)=0,\quad w\in \hat{V}^{\ast}.
\end{equation}
(see \cite[{\rm Chap.\;10}]{BKM}).\\
In the case $\Phi>\pi$ the domain $\hat{V}^{\ast}=\emptyset$ (see Fig.\;\ref{w=i}). Nevertheless, it turns out that in this case too there exists a {\it connection} between $\hat{v}_1$ and $\hat{v}_2$, so that (\ref{cecs}) holds in a slightly different sense: now this equation holds for the analytic continuations of $\hat{v}_1$ and $\hat{v}_2$.
Let us formulate precisely the corresponding theorem.
\begin{defn} Denote
$
\hat{V}_{\Sigma}:=\hat{V}_1^{+}\cup\hat{V}_2^{+}\cup\hat{V}^{\ast},
$
where $\hat{V}^{\ast}=V_{\pi-\Phi}^{0}$. Note that $\hat{V}_{\Sigma}=V_{-\Phi}^{\pi}$, (see Fig.\;\ref{w=i}).
\end{defn}
\begin{teo}\label{CONEX}
(Connection equation in the case $\Phi>\pi$) \cite[Section\;20.1, Theorem\,20.1]{BKM}.
Let $v\in S'(K)$ be any distributional solution to (\ref{ra1}). Then the functions (\ref{check v_1}) admit {\it analytic} continuations $[\hat{v}_{l}]$ along the Riemann surface $\hat{V}$ from $\hat{V}_{l}^{+}$ to $\hat{V}_{\Sigma}$ (see Fig.\;\ref{w=i}) and
\begin{equation}\label{CE}
\Big[\hat{v}_1(w)\Big]+\Big[\hat{v}_2(w)\Big]=0,\quad w\in\hat{V}_{\Sigma}.
\end{equation}
\end{teo}
\begin{obs}\label{r3.7}
Using the connection equation (\ref{CE}) we will find $\hat{v}_1$. The solution $u_1$ of problem (\ref{u_1}) is given by (\ref{u_1'}).
\end{obs}
\subsection{Step\;3: Reduction to a difference equation}
From (\ref{CE}), (\ref{check{v}_1}) it follows that $\hat{v}_1^1(w)$ and $\hat{v}_2^1(w)$ admit meromorphic continuations to $\hat{V}_{\Sigma}$ and
\begin{equation}\label{al}
\hat{v}_1^1(w)+\hat{v}_2^1(w)=\hat{G}(w),\quad w\in\hat{V}_{\Sigma}.
\end{equation}
We will use the following automorphisms on $\hat{V}$ $\Big({\rm see}~ \cite[{\rm Ch.}\;13]{BKM} ~{\rm and}~ \cite[(73)]{KMM}\Big)$:
\begin{equation}\label{h_1h_2}
\left.
\begin{array}{rcl}
h_1 w= -w+\pi i, \qquad h_2 w=-w+\pi i-2i\Phi, \quad w\in\mathbb{C}
\end{array}
\right.
\end{equation}
which are symmetries with respect to $i\displaystyle\frac{\pi}{2}$ and $i\displaystyle\frac{\pi}{2}-i\Phi$, respectively.\\
Sometimes we will use the notation $f^{h_{l}}(w):=f\Big(h_{l}(w)\Big),~l=1,2$.\\
The functions $\hat{v}_1^1$ and $\hat{v}_2^1$ are automorphic functions with respect to $h_1$ and $h_2$, respectively:
\begin{align}\label{h_1}
&\hat{v}_1^{1h_{1}}(w)=\hat{v}_1^1(-w+\pi i)=\hat{v}_1^1(w),\quad w\in\hat{V}_1^{+},
\\ \nonumber\\\label{h_2}
&\hat{v}_2^{1h_{2}}(w)=\hat{v}_2^1(-w+\pi i-2i\Phi)=\hat{v}_2^1(w),\quad w\in\hat{V}_2^{+},
\end{align}
as follows from the fact that $\tilde{v}_{l}^1(z_l)$ depend only on $z_l$, and hence their liftings $\hat{v}_{l}^1(w)$ to $\hat{V}_{l}^{+}$ satisfy (\ref{h_1}), (\ref{h_2}), since $\sinh w$ satisfies (\ref{h_1}) and $\sinh(w+i\Phi)$ satisfies (\ref{h_2}).\\
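\noindent Indeed, (\ref{h_1}), (\ref{h_2}) follow from the elementary identities
\begin{equation*}
\sinh\big(h_1 w\big)=\sinh(-w+\pi i)=\sinh(\pi i)\cosh w-\cosh(\pi i)\sinh w=\sinh w,
\end{equation*}
\begin{equation*}
\sinh\big(h_2 w+i\Phi\big)=\sinh\big(\pi i-(w+i\Phi)\big)=\sinh(w+i\Phi),
\end{equation*}
so that $z_1(h_1 w)=z_1(w)$ and $z_2(h_2 w)=z_2(w)$ by (\ref{z_{1,2}(om)}).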
Thanks to this automorphy we can eliminate one unknown function in the undetermined equation (\ref{al}) and reduce it to an equation with a shift, see \cite{KMM}. The idea of this method is due to Malyshev \cite{Mal}.
\begin{lem}\label{ll-1}
Let $v\in S'(K)$ satisfy (\ref{ra1})-(\ref{ra3}) and let $v_l^1(x_l), l=1,2$, be its Neumann data. Then their liftings to $\hat{V}$, $\hat{v}_{l}^1(w), w\in\hat{V}_{l}^{+}$, admit meromorphic continuations to $\mathbb{C}$ (which we also denote $\hat{v}_{l}^1$) such that for $w\in\mathbb{C}$
\begin{align}\label{3.25}
\hat{v}_1^1(w)+\hat{v}_2^1(w)&=\hat{G}(w),
\end{align}
and they are $h_l$-automorphic functions,
\begin{align}\label{icheck{v}_1^1}
\hat{v}_1^1\big(h_1 (w)\big) &= \hat{v}_1^1(w),
\end{align}
$$
\hat{v}_2^1\big(h_2 (w)\big) = \hat{v}_2^1(-w+\pi i-2i\Phi)= \hat{v}_2^1(w).
$$
\end{lem}
The proof of this lemma is given in Appendix\;\ref{aal}.
\medskip
\noindent Now we reduce system (\ref{3.25})-(\ref{icheck{v}_1^1}) to a difference equation, which is also called the shift equation. This reduction is part of the MAF, which was introduced in \cite{Mal} for difference equations in angles. It uses the automorphy of $\hat{v}_{l}^{\beta}$ on $\hat{V}$ under the automorphisms $h_l$; the term MAF is due to this observation.\\
Define, for $w\in\mathbb{C}$,
\begin{equation}\label{d21}
\hat{G}_2(w):=\hat{G}(w)-\hat{G}\Big(h_2(w)\Big)=\displaystyle\frac{i\omega\sinh(w-i\Phi)}{i\omega\sinh w+k}-\displaystyle\frac{i\omega\sinh(w+3i\Phi)}{i\omega\sinh(w+2i\Phi)+k}.
\end{equation}
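\noindent The second summand in (\ref{d21}) is obtained from the identity $\sinh(-u+\pi i)=\sinh u$:
\begin{equation*}
\hat{G}\Big(h_2(w)\Big)=\displaystyle\frac{i\omega\sinh(-w+\pi i-3i\Phi)}{i\omega\sinh(-w+\pi i-2i\Phi)+k}=\displaystyle\frac{i\omega\sinh(w+3i\Phi)}{i\omega\sinh(w+2i\Phi)+k}.
\end{equation*}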
For a region $U\subset\mathbb{C}$ we denote, here and everywhere below, by $\mathcal{M}(U)$ the set of meromorphic functions on $U$.
\begin{lem}\label{Vin}
Let $v\in S'(K)$ satisfy (\ref{ra1})-(\ref{ra2}). Then the connection equation (\ref{CE}) holds, the function $\hat{v}_1^1$ belongs to $\mathcal{M}(\mathbb{C})\cap \mathcal{H}(\hat{V}_1^{+})$,
satisfies the difference equation
\begin{equation}\label{de.}
\begin{array}{lll}
\hat{v}_1^1(w)-\hat{v}_1^1(w+2i\Phi)=\hat{G}_2(w),
\end{array}
\end{equation}
and the automorphic condition (\ref{icheck{v}_1^1}).
\end{lem}
The proof of this lemma is given in Appendix\;\ref{ade}.
\medskip
Our goal is to find $\hat{v}_1^1(w)\in \mathcal{M}(\mathbb{C})\cap \mathcal{H}\big(\hat{V}_1^{+}\big)$ such that (\ref{de.}), (\ref{icheck{v}_1^1}) and the condition
$
\hat{v}_1(w)\in \mathcal{H}\big(\hat{V}_{\Sigma}\big)
$
hold. Here $\hat{v}_1$ is given by (\ref{check v_1}). In its turn, this condition is equivalent to the condition
\begin{equation}\label{Si2}
\hat{v}_1(w)=\hat{v}_1^1(w)-\hat{G}(w)\in\mathcal{H}\big(\hat{V}_{\Sigma}\big)
\end{equation}
by (\ref{check{v}_1}).\\
In the next section we find the necessary and sufficient conditions for $\hat{v}_1^1$ such that condition (\ref{Si2}) holds.
\subsection{Necessary and sufficient condition for $\hat{v}_1^1$}
The analyticity condition (\ref{Si2}), which follows from the connection equation (\ref{CE}), imposes certain necessary conditions for the poles of the function $\hat{v}_1^1$, more exactly, of its continuation obtained in Lemma\;\ref{ll-1}. This section is devoted to the derivation of these conditions and to the proof of the fact that they are also sufficient for (\ref{CE}) to hold.\\
We will often use the following evident statement.
\begin{lem}\label{lh_1}
Let $A\in\mathcal{M}(\mathbb{C})$, and $A$ satisfy (\ref{icheck{v}_1^1}). Then
$
\overset{}{\underset{w=w_0}{res}}\;A(w)=-\overset{}{\underset{w=-w_0+\pi i}{res}}\;A.
$
\end{lem}
\newpage
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.33]{DiR2.pdf}
\caption{Necessary conditions}\label{AJ}
\end{figure}
Denote
\begin{equation}\label{P}
P:=\Big\lbrace p_1, -p_1\pm \pi i, p_1+2\pi i\Big\rbrace,\qquad p_1:=\sinh^{-1}\Big(\displaystyle\frac{ik}{\omega}\Big)\in\Gamma_0.
\end{equation}
(See Fig.\;\ref{AJ}, where the positions of the curves $\Gamma_\alpha$ correspond to the case $\Re \omega>0$. We will always assume in the following that this is the case; in the case $\Re \omega <0$ the construction is similar.)
Introduce the following two important parameters:
\begin{equation}\label{r1r2}
r_1:=\overset{}{\underset{-p_1\pm \pi i}{res}}\;\hat{G}=-\displaystyle\frac{\sinh(p_1+i\Phi)}{\cosh p_1},\quad r_2:=\overset{}{\underset{p_1}{res}}\;\hat{G}=\displaystyle\frac{\sinh(p_1-i\Phi)}{\cosh p_1}.
\end{equation}
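\noindent Let us check (\ref{r1r2}). Since $\sinh p_1=\displaystyle\frac{ik}{\omega}$, the denominator of $\hat{G}$ vanishes at $w=p_1$: $i\omega\sinh p_1+k=-k+k=0$, and
\begin{equation*}
\overset{}{\underset{p_1}{res}}\;\hat{G}=\displaystyle\frac{i\omega\sinh(p_1-i\Phi)}{i\omega\cosh p_1}=\displaystyle\frac{\sinh(p_1-i\Phi)}{\cosh p_1}=r_2.
\end{equation*}
Similarly, $\sinh(-p_1\pm\pi i)=\sinh p_1$, and since $\displaystyle\frac{d}{dw}\big(i\omega\sinh w+k\big)\Big\vert_{w=-p_1\pm\pi i}=-i\omega\cosh p_1$ and $\sinh(-p_1\pm\pi i-i\Phi)=\sinh(p_1+i\Phi)$, we obtain
\begin{equation*}
\overset{}{\underset{-p_1\pm\pi i}{res}}\;\hat{G}=-\displaystyle\frac{\sinh(p_1+i\Phi)}{\cosh p_1}=r_1.
\end{equation*}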
In the next proposition we give a necessary and sufficient condition for $\hat{v}_1^1$ guaranteeing that condition (\ref{Si2}) holds.
\begin{prop}\label{pns}
Let $\hat{v}_1^1\in\mathcal{M}(\mathbb{C})\cap \mathcal{H}(\hat{V}_{1}^{+})$ satisfy (\ref{de.}) and (\ref{icheck{v}_1^1}). Then $\hat{v}_1\in\mathcal{H}\big(\hat{V}_{\Sigma}\big)$ if and only if
\begin{equation}\label{cu1}
\hat{v}_1^1\in\mathcal{H}\big(V_{-\Phi}^{\Phi+\pi}\setminus P\big)
\end{equation}
and
\begin{equation}\label{u2}
\overset{}{\underset{p_1}{res}}\;\hat{v}_1^1=r_2,\quad \overset{}{\underset{-p_1-\pi i}{res}}\;\hat{v}_1^1=r_1.
\end{equation}
\end{prop}
{\bf Proof}. First, let us prove the necessity of condition (\ref{u2}). By (\ref{G(w)}) the poles of $\hat{G}(w)$ in $\mathbb{C}$ are
$
p_1+2k\pi i, -p_1-\pi i+2k\pi i, k\in\mathbb{Z}.
$
Of all of these poles only $p_1, -p_1-\pi i$ belong to $\hat{V}_{\Sigma}$ (see Fig.\;\ref{AJ}).
Hence formulas (\ref{u2}) follow from (\ref{Si2}), (\ref{r1r2}). Further, since $\hat{v}_1^1\in \mathcal{H}\Big(V_{-\Phi}^{\pi}\setminus \lbrace p_1, -p_1-\pi i\rbrace\Big)$ by (\ref{Si2}), $\hat{v}_1^1\in \mathcal{H}\Big(V_{0}^{\pi+\Phi}\setminus \lbrace p_1+2\pi i, -p_1+\pi i\rbrace\Big)$ by (\ref{icheck{v}_1^1}). Hence (\ref{cu1}) also holds.\\
Let us prove the sufficiency of conditions (\ref{cu1}), (\ref{u2}). From (\ref{u2}) and (\ref{r1r2}) it follows that
$
\overset{}{\underset{w=p_1}{res}}\;\hat{v}_1(w)=\overset{}{\underset{w=-p_1-\pi i}{res}}\;\hat{v}_1(w)=0.
$
Hence, $\hat{v}_1\in\mathcal{H}\big(\hat{V}_{\Sigma}\big)$ since $\hat{v}_1^1\in\mathcal{H}\Big(\hat{V}_{\Sigma}\setminus\lbrace p_1, -p_1-\pi i\rbrace\Big)$ by (\ref{cu1}) and $\hat{G}(w)$ belongs to this space too. The proposition is proven.~~~~~$\blacksquare$
\begin{obs}
Condition (\ref{cu1}) implies $\hat{v}_{1}^1\in\mathcal{H}\big(\hat{V}_1^{+}\big)$.
\end{obs}
\section{$h_1$-automorphic solution of difference equation (\ref{de.}), $\Phi\neq\displaystyle\frac{3\pi}{2}$}
\setcounter{equation}{0}
In this section we construct an $h_1$-automorphic solution of difference equation (\ref{de.}) satisfying all conditions of Proposition\;\ref{pns} for $\Phi\neq\displaystyle\frac{3\pi}{2}$.
This limitation is related to the method of obtaining a solution, which uses a Cauchy-type integral. The kernel of this integral must be analytic on the integration contour. It turns out that it is possible to find such a kernel only when $\Phi\neq\displaystyle\frac{3\pi}{2}$. Fortunately, the case $\Phi=\displaystyle\frac{3\pi}{2}$ does not need an integral of Cauchy type, since the difference equation (\ref{de.}) is solved by elementary methods in this case (see Section\;5).
\subsection{Poles of $\hat{G}_2$ and asymptotics.}
In this subsection we give the properties of $\hat{G}_2$ which are necessary for the solution of the main problem.
Let
\begin{align}\label{pq}
\mathscr{P}:=\Big\lbrace p_1+2\pi i k-2i\Phi m: k\in\mathbb{Z}, m=0,1\Big\rbrace,\quad
\mathscr{Q}:=\Big\lbrace -p_1+\pi i+2\pi i k-2i\Phi m: k\in\mathbb{Z}, m=0,1\Big\rbrace.
\end{align}
Obviously, the poles of $\hat{G}_2$ belong to $\mathscr{P}\cup\mathscr{Q}$ by (\ref{d21}).\\
Denote
$
q_1=-p_1-\pi i+2i\Phi.
$
\begin{lem}\label{lpG2}
i) The poles of $\hat{G}_2$ belonging to $\overline{V_{\frac{\pi}{2}-\Phi}^{\pi-\Phi}}$ are
$-p_1-\pi i~{\rm for}~\Phi\geq\frac{3\pi}{2} ~{\rm and}~ -q_1+\pi i~{\rm for}~\Phi\leq\frac{3\pi}{2}$.\\
The residues of $\hat{G}_2$ at these points are
\begin{equation*}
\overset{}{\underset{-p_1-\pi i}{res}}\;\hat{G}_2= \overset{}{\underset{-q_1+\pi i}{res}}\;\hat{G}_2=r_1.
\end{equation*}
ii) The poles of $\hat{G}_2$ belonging to $\overline{V_{-\frac{\pi}{2}-\Phi}^{-\Phi}}$ are $p_1-2\pi i$ for $\Phi\geq\displaystyle\frac{3\pi}{2}$, $-p_1+\pi i-2i\Phi$ for $\Phi\leq \displaystyle\frac{3\pi}{2}$ and
\begin{equation*}
\overset{}{\underset{p_1-2\pi i}{res}}\;\hat{G}_2=\overset{}{\underset{-p_1+\pi i-2i\Phi}{res}}\;\hat{G}_2=r_2.
\end{equation*}
iii) The function $\hat{G}_2$ admits the asymptotics
\begin{equation}\label{asG2}
\hat{G}_2(w)=\mp 2i\sin\Phi\Big(1+O(e^{\mp \Re w})\Big),\quad \Re w\to \pm\infty
\end{equation}
uniformly with respect to $\Im w$.\\
iv)
\begin{equation}\label{Shat{G}}
\hat{G}_2\Big(h_2(w)\Big)=-\hat{G}_2(w),\quad\hat{G}_2\Big(\displaystyle\frac{\pi i}{2}-i\Phi\Big)=0.
\end{equation}
\end{lem}
{\bf Proof}. The first three assertions follow directly from (\ref{d21}). The last assertion (\ref{Shat{G}}) follows from the fact that $h_2\Big(\displaystyle\frac{\pi i}{2}-i\Phi\Big)=\displaystyle\frac{\pi i}{2}-i\Phi$. ~~~~$\blacksquare$
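\begin{obs}
Since $h_2$ is an involution, the first identity in (\ref{Shat{G}}) is immediate from (\ref{d21}): $\hat{G}_2\Big(h_2(w)\Big)=\hat{G}\Big(h_2(w)\Big)-\hat{G}(w)=-\hat{G}_2(w)$. Applying it at the fixed point $w^{\ast}=\displaystyle\frac{\pi i}{2}-i\Phi$ of $h_2$ gives $\hat{G}_2(w^{\ast})=-\hat{G}_2(w^{\ast})$, i.e. $\hat{G}_2(w^{\ast})=0$.
\end{obs}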
\begin{obs}\label{ras}
In the case $\Phi=\displaystyle\frac{3\pi}{2}$,
$$
\hat{G}_2(w)=\pm 2i\Big(1+O(e^{\mp 2\Re w})\Big),\quad \Re w\to\pm\infty.
$$
However, this will not affect the final results.
\end{obs}
\subsection{Reduction of problem (\ref{de.}), (\ref{icheck{v}_1^1}) to a conjugate problem}
Denote
$$
\hat{\Pi}:=V_{\frac{\pi}{2}-\Phi}^{\frac{\pi}{2}+\Phi},\qquad \hat{\Pi}_{\pm}=\Big\lbrace w\in\hat{\Pi}:\pm\Re w>0\Big\rbrace,\quad \partial\hat{\Pi}_{+}=\hat{\beta}\cup\hat{\gamma}\cup(\hat{\beta}+2i\Phi),
$$
where
\begin{equation*}
\hat{\beta}:=\Big\lbrace w\in\Gamma_{\frac{\pi}{2}-\Phi}: \Re w\geq0\Big\rbrace,\quad \hat{\gamma}:=\Big\lbrace w\big\vert \Re w=0,\quad \Im w\in\Big[\frac{\pi}{2}-\Phi, \frac{\pi}{2}+\Phi\Big]\Big\rbrace.
\end{equation*}
We will look for a solution of the following problem: to find an analytic function in $\hat{\Pi}_{+}$ whose boundary values on $\partial\hat{\Pi}_{+}$,
\begin{align*}
&\hat{a}_{1}(w+i0),\quad w\in \hat{\beta};\quad\hat{a}_{1}(w+2i\Phi-i0),\quad w\in \hat{\beta}; \quad\hat{a}_{1}(w+0),\quad w\in\hat{\gamma},
\end{align*}
exist and are such that they satisfy the following conditions of conjugation
\begin{align}\label{C1}
&\hat{a}_{1}(w+i0)-\hat{a}_{1}(w+2i\Phi-i0)=\hat{G}_2(w),\quad w\in\hat{\beta}
\\\nonumber\\\label{C2}
&\hat{a}_{1}(w+0)=\hat{a}_{1}(-w+\pi i+0),\quad w\in\hat{\gamma},
\end{align}
(see Fig.\;\ref{APi}).
\newpage
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.33]{APi.pdf}
\caption{Reduction to conjugate problem}\label{APi}
\end{figure}
\subsection{Solution of the conjugate problem (\ref{C1}), (\ref{C2}) for $\Phi\neq\displaystyle\frac{3\pi}{2}$}
We now solve problem (\ref{C1}), (\ref{C2}) by reducing it to a Riemann-Hilbert problem. To this end we map $\hat{\Pi}_{+}$ conformally onto $\check{\Pi}:=\mathbb{C}^{\ast}\setminus \check{\beta}$, where $\mathbb{C}^{\ast}$ is the Riemann sphere and $\check{\beta}:=t(\hat{\beta})$. For example, define
\begin{equation}\label{t(w)}
w\mapsto t=t(w)=\coth^2\Big(\displaystyle\frac{\pi}{2\Phi}\Big(w-\displaystyle\frac{\pi i}{2}\Big)\Big),\quad w\in\hat{\Pi}_{+}.
\end{equation}
Denote by $w(t)$ the inverse transform from $\check{\Pi}$ to $\hat{\Pi}_{+}$.\\
Note that when $w\in\hat{\Pi}_{+}$ tends to $\hat{\beta}$, $t(w)$ tends to $\check{\beta}$ from above, and when $w\in\hat{\Pi}_{+}$ tends to $\hat{\beta}+2i\Phi$, $t(w)$ tends to $\check{\beta}$ from below.
Obviously,
\begin{align*}
t(\hat{\gamma})=:\check{\gamma}=[-\infty,0],\quad t\Big(i\Big(\displaystyle\frac{\pi}{2}\pm\Phi\Big)\Big)=0,\quad t(\infty)=1,\quad t\Big(\displaystyle\frac{\pi i}{2}\Big)=\infty
\end{align*}
(see Fig.\;\ref{At}).
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.33]{At.pdf}
\caption{Riemann-Hilbert problem}\label{At}
\end{figure}
\vspace{0.5cm}
Then problem (\ref{C1}), (\ref{C2}) is equivalent to the Riemann-Hilbert problem for $\check{a}_1(t):=\hat{a}_{1}\big(w(t)\big), t\in\check{\Pi}_{+}$, which at the same time is the saltus problem (see \cite[Ch. 16, 18]{BKM})
\begin{equation}\label{RG}
\check{a}_{1}(t+i0)-\check{a}_{1}(t-i0)=\check{G}_2(t),\quad t\in\check{\beta}.
\end{equation}
Here $\check{G}_2(t)=\hat{G}_2\Big(w(t)\Big)$ and $\check{a}_{1}(t):=\hat{a}_{1}\big(w(t)\big)$, $t\in\check{\Pi}$; $\check{a}_{1}(t\pm i0)=\displaystyle\lim_{\varepsilon\to 0+} \check{a}_{1}(t\pm i\varepsilon)$ for $t\in\check{\beta}$. From (\ref{t(w)}) and (\ref{Shat{G}}) it follows that $\check{G}_2(t)$ and $\check{G}'_2(t)$ are continuous on $\overline{\check{\beta}}$ and
\begin{equation}\label{check{G}_2(0)=0}
\check{G}_2(0)=0,\qquad \check{G}_2(1)=-2i\sin\Phi.
\end{equation}
It is well known that a particular solution of (\ref{RG}) is given by the Cauchy type integral
\begin{align}\label{cons}
\check{a}_{1}(t)=\displaystyle\frac{1}{2\pi i}\displaystyle\int\limits_{\check{\beta}}\displaystyle\frac{\check{G}_2(t')}{t'-t}\;dt',\quad t\in\check{\Pi}.
\end{align}
Obviously $\check{a}_{1}(t)\in\mathcal{H}(\check{\Pi})$, and $\check{a}_{1}(t)\xrightarrow[t\to \infty]{}0$. Moreover, by (\ref{check{G}_2(0)=0}) there exists $\displaystyle\lim_{t\to 0}\;\check{a}_1(t), t\notin\check{\beta}$.\\
In the following lemma we establish the asymptotics of (\ref{cons})
at $t=1$; it plays an important role in showing that the
solution belongs to a certain class, and hence in proving its uniqueness.
\begin{lem}\label{l as check{a}_1}
The function $\check{a}_1(t)$ admits the following asymptotics
\begin{align}\label{as hat{a}_1}
\check{a}_{1}(t)=-\displaystyle\frac{\check{G}_2(1)}{2\pi i}\ln\displaystyle\frac{1}{t-1}+C+O(t-1),\quad t\to 1,
\end{align}
where by $\ln\displaystyle\frac{1}{t-1}$ we understand a branch that is single-valued on the plane cut along $\check{\beta}$; moreover,
\begin{equation}\label{4.22'}
\displaystyle\frac{d}{dt}\check{a}_1(t)=\displaystyle\frac{\sin\Phi}{\pi}\;\displaystyle\frac{1}{1-t}+\displaystyle\frac{\check{G}'_2(1)}{2\pi i} \ln(1-t)+C_1+o(1),
\end{equation}
where $C$ and $C_1$ depend only on $\check{G}_2$.
\end{lem}
{\bf Proof.}
Asymptotics (\ref{as hat{a}_1}) follows from (\ref{cons}) (see \cite[\S 16]{Mus}). Let us find the asymptotics of $\displaystyle\frac{d}{dt}\check{a}_1(t)$ as $t\to1$. Formula (\ref{cons}) gives
\begin{align*}
\displaystyle\frac{d}{dt} \check{a}_1(t)=\displaystyle\frac{1}{2\pi i}\displaystyle\int\limits_{\check{\beta}}\displaystyle\frac{\check{G}_2(t') dt'}{(t'-t)^2}.
\end{align*}
Represent $\check{G}_2(t')$ in the form $\check{G}_2(t')=\check{G}_{2}(1)+\check{G}'_{2}(1)(t'-1)+\psi(t')(t'-1)^2, ~ t'\in\check{\beta}$, where $\psi(t')\in C^{\infty}\big(\overline{\check{\beta}}\big)$. This is possible since $\check{G}_{2}(t')\in C^{\infty}\big(\overline{\check{\beta}}\big)$ by (\ref{pq}) if $\Phi\neq\displaystyle\frac{3\pi}{2}$.
Then
\begin{equation*}
\begin{array}{lll}
\displaystyle\frac{d}{dt}\check{a}_1(t)
=\displaystyle\frac{1}{2\pi i}\displaystyle\int\limits_{\check{\beta}}\displaystyle\frac{\psi(t')(t'-1)^2}{(t'-t)^2} dt'+\displaystyle\frac{1}{2\pi i}\displaystyle\int\limits_{\check{\beta}}\displaystyle\frac{\check{G}_2(1)+\check{G}_2'(1)(t'-1)}{(t'-t)^2} dt'.
\end{array}
\end{equation*}
Obviously
$
\displaystyle\frac{1}{2\pi i}\displaystyle\int\limits_{\check{\beta}}\displaystyle\frac{(t'-1)^2\;\psi(t')}{(t'-t)^2} dt'=C_1+o(1),\quad t\to1,
$
where $C_1$ depends only on $\check{G}_2$;
$
\displaystyle\frac{1}{2\pi i}\displaystyle\int\limits_{\check{\beta}}\displaystyle\frac{\check{G}_2(1)}{(t'-t)^2} dt'
= \displaystyle\frac{\sin\Phi}{\pi} \displaystyle\frac{1}{1-t}-\displaystyle\frac{\sin\Phi}{\pi}+o(1),\quad t\to1
$
by (\ref{check{G}_2(0)=0}). Further,
$
\displaystyle\frac{1}{2\pi i}\displaystyle\int\limits_{\check{\beta}}\displaystyle\frac{\check{G}'_2(1) (t'-1)}{(t'-t)^2} dt'
= \displaystyle\frac{\check{G}'_2(1)}{2\pi i}\Big[\ln(1-t)\Big]+C_2+o(1),\quad t\to 1.
$
This implies (\ref{4.22'}).~~~~~$\blacksquare$
\medskip
Now we are able to find a solution to problem (\ref{de.}). First we define this solution in $\overline{\hat{\Pi}}$ and then we extend it to $\mathbb{C}$.\\
Let us define $\hat{a}_1(w)$ for $w\in\hat{\Pi}_{+}$ by the formula
\begin{align}\label{hat{a}_1(w)}
& \hat{a}_{1}(w):=\check{a}_1\big(t(w)\big)+C_2, \quad w\in\hat{\Pi}_{+},
\end{align}
where $\check{a}_1(t)$ is given by (\ref{cons}) and
\begin{equation}\label{defC_1}
C_2=\ln 4\;\displaystyle\frac{\sin\Phi}{\Phi}-C,
\end{equation}
with $C$ taken from (\ref{as hat{a}_1}).\\
Obviously $\hat{a}_1(w)$ satisfies (\ref{C1}) and (\ref{C2}), since a constant is a solution of the homogeneous equation (\ref{C1}) and satisfies (\ref{C2}).
Moreover, the same formula (\ref{hat{a}_1(w)}) defines the analytic function $\hat{a}_1(w)$ in $\overline{\hat{\Pi}}$, since $\hat{a}_1(w)$ satisfies (\ref{C2}). Obviously, (\ref{t(w)}) implies that
\begin{align}\label{au}
& \hat{a}_{1}(w)=\hat{a}_1(-w+\pi i), \quad w\in\overline{\hat{\Pi}}.
\end{align}
Moreover, $\hat{a}_1(w)$ satisfies (\ref{C1}).
In fact, for $w\in\hat{\beta}$, $\Re w>0$ this follows from (\ref{RG}), and for $w\in\hat{\beta}$, $\Re w<0$ this follows from (\ref{RG}) and (\ref{Shat{G}}).
Further, we extend $\hat{a}_{1}(w)$ to $\overline{{V}_{\frac{\pi}{2}-3\Phi}^{\frac{\pi}{2}+3\Phi}}$ by formulas corresponding to the difference equation (\ref{de.}):
\begin{equation}\label{e1}
\hat{a}_{1}(w)=\hat{a}_{1}(w-2i\Phi)-\hat{G}_2(w-2i\Phi),\quad w\in \overline{\hat{\Pi}}+2i\Phi,\quad \overline{\hat{\Pi}}+4i\Phi,\cdots
\end{equation}
and
\begin{equation}\label{3.4'}
\hat{a}_{1}(w)=\hat{a}_{1}(w+2i\Phi)+\hat{G}_2(w),~~{\rm for}~~ w\in\overline{\hat{\Pi}}-2i\Phi,\quad \overline{\hat{\Pi}}-4i\Phi,\cdots
\end{equation}
This extension is meromorphic by (\ref{C1}) and still has property (\ref{au}). Let us prove this.
Let $w\in\hat{\Pi}-2i\Phi$ (see Fig.\;\ref{phi+}, \ref{Phi+1}). Then $w+\pi i\in\hat{\Pi}+2i\Phi$. By (\ref{3.4'}) and (\ref{e1}) we have $\hat{a}_1(w)=\hat{a}_1(w+2i\Phi)+\hat{G}_2(w), ~ \hat{a}_1(-w+\pi i)=\hat{a}_1(-w+\pi i-2i\Phi)-\hat{G}_2(-w+\pi i-2i\Phi)$. But $\hat{a}_1(w+2i\Phi)=\hat{a}_1(-w+\pi i-2i\Phi)$ since $\hat{a}_1\Big(h_1(w)\Big)=\hat{a}_1(w),~w\in\overline{\hat{\Pi}}$ and $\hat{G}_2(w)=-\hat{G}_2(-w+\pi i-2i\Phi)$ by (\ref{Shat{G}}). Hence $\hat{a}_1(w)=\hat{a}_1(-w+\pi i)$ in this case. Similarly this is true for $w\in\hat{\Pi}+2i\Phi$.~~~~~~$\blacksquare$
\begin{obs}
Similarly, $\hat{a}_1(w)$ admits a meromorphic extension to $\mathbb{C}$ which satisfies (\ref{de.}) and (\ref{icheck{v}_1^1}).
\end{obs}
\begin{prop}\label{P_2a_2}
For $\Phi\neq\displaystyle\frac{3\pi}{2}$\\
i) There exists a solution $\hat{a}_{1}$ of problem (\ref{de.}), (\ref{icheck{v}_1^1}), meromorphic in $\mathbb{C}$ and analytic in $\hat{\Pi}$, given by (\ref{hat{a}_1(w)}), (\ref{e1}), (\ref{3.4'}).\\
ii) The function $\hat{a}_1(w)$ admits the following asymptotics
\begin{align}\label{ashat{a}_1}
& \hat{a}_1(w)=\pm\displaystyle\frac{\sin\Phi}{\Phi}\Big(w-\displaystyle\frac{\pi i}{2}\Big)+o\Big(e^{\mp\frac{\pi}{2\Phi}w}\Big),\quad \Re w\to\pm\infty,
\\\nonumber\\\label{4.27'}
& \displaystyle\frac{d}{dw}\hat{a}_1(w)=\pm\displaystyle\frac{\sin\Phi}{\Phi}+o\big(e^{\mp \frac{\pi}{\Phi}w}\big),\quad \Re w\to\pm\infty,
\end{align}
uniformly with respect to $\Im w, w\in\hat{\Pi}_{+}$.\\
iii) The solution $\hat{a}_1$ has poles in $\overline{V_{-\frac{\pi}{2}-\Phi}^{\pi+\Phi}}$ for $\Phi>\displaystyle\frac{3\pi}{2}$ only at $q_1:=-p_1-\pi i+2i\Phi,~ -q_1+\pi i=p_1+2\pi i-2i\Phi$ and $p_1-2\pi i$ with residues
\begin{equation}\label{rcheck_1^1}
\overset{}{\underset{q_1}{res}}\;\hat{a}_{1}=-r_1,\qquad \overset{}{\underset{-q_1+\pi i}{res}}\;\hat{a}_{1}=r_1,\quad \overset{}{\underset{p_1-2\pi i}{res}}\;\hat{a}_{1}=r_2.
\end{equation}
For $\Phi\leq\displaystyle\frac{3\pi}{2}$, $\hat{a}_1$ has poles in $\overline{V_{-\frac{\pi}{2}-\Phi}^{\pi+\Phi}}$ only at $p_1+2\pi i$, $-p_1-\pi i$ and $-p_1+\pi i-2i\Phi$ with residues
\begin{equation}\label{3.4''}
\overset{}{\underset{p_1+2\pi i}{res}}\;\hat{a}_1=-r_1,\qquad \overset{}{\underset{-p_1-\pi i}{res}}\;\hat{a}_1=r_1,\quad \overset{}{\underset{-p_1+\pi i-2i\Phi}{res}}\;\hat{a}_1=r_2.
\end{equation}
\end{prop}
{\bf Proof.} Statement $i)$ is proved above. The asymptotics (\ref{ashat{a}_1}), (\ref{4.27'}) are proved in Appendix\;\ref{AC1}.\\
Statement $iii)$ follows from the difference equation (\ref{de.}), the $h_1$-automorphicity of $\hat{a}_1$, Lemma\;\ref{lh_1} and Lemma\;\ref{lpG2},
since the function $\hat{a}_1$ is analytic in $\hat{\Pi}$ by i), see Fig.\;\ref{phi+}, \ref{Phi+1}.~~~~~$\blacksquare$
\medskip
As we will see below, this asymptotics coincides with the asymptotics of the function $\hat{v}_1^1$ in the case $\Phi=\displaystyle\frac{3\pi}{2}$.
In the following Lemma we describe the poles of the particular meromorphic solution to problem (\ref{de.}), (\ref{icheck{v}_1^1}) constructed above.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.27]{phi+++3.pdf}
\caption{$\Phi>\displaystyle\frac{3\pi}{2}$}\label{phi+}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.33]{PPPhi+2.pdf}
\caption{$\Phi<\displaystyle\frac{3\pi}{2}$}\label{Phi+1}
\end{figure}
\newpage
\subsection{Solution of difference equation, case $\Phi\neq\displaystyle\frac{3\pi}{2}$}
We want to modify $\hat{a}_1$ into $\hat{v}_1^1$, which will satisfy all the conditions of Proposition\;\ref{pns}. To this end, for $\Phi>\displaystyle\frac{3\pi}{2}$, we first add $T_1$ to $\hat{a}_1$, removing the poles $q_1$ and $-q_1+\pi i$ (see (\ref{rcheck_1^1})), since by this proposition $\hat{v}_1^1$ must be analytic at these points. It turns out that $T_1$ can be constructed so that it also produces the pole $-p_1-\pi i$ with the desired residue, as the same proposition requires. \\
Second, we add $T_2$, producing the pole $p_1$ with the residue required by the same proposition.\\
Consider
\begin{equation}\label{T_1}
T_1(w)=\displaystyle\frac{\pi}{2\Phi} r_1\Bigg(\coth\Big(\displaystyle\frac{\pi(w-q_1)}{2\Phi}\Big)+\coth\Big(\displaystyle\frac{\pi(-w+\pi i-q_1)}{2\Phi}\Big)\Bigg),
\end{equation}
where $r_1$ is given by (\ref{r1r2}) and $q_1=-p_1-\pi i+2i\Phi$ as above.\\
It is easy to see that the function $T_1$ satisfies the following conditions:
\begin{equation}\label{PT_1}
T_1(-w+\pi i)=T_1(w),\qquad T_1(w+2i\Phi)=T_1(w).
\end{equation}
The poles of $T_1$ belonging to $\overline{V_{-\Phi}^{\pi+\Phi}}$ are
\begin{equation}\label{pt1}
q_1,\quad -p_1-\pi i,\quad -q_1+\pi i,\quad p_1+2\pi i,
\end{equation}
(see Fig.\;\ref{Phi+1}), and
\begin{equation}\label{resw_1}
\overset{}{\underset{q_1}{res}}\;T_1=\overset{}{\underset{-p_1-\pi i}{res}}\;T_1=r_1;\qquad \overset{}{\underset{-q_1+\pi i}{res}}\;T_1=\overset{}{\underset{p_1+2\pi i}{res}}\;T_1=-r_1.
\end{equation}
Further, we define
\begin{align}\label{T_2}
T_2(w)&:=\displaystyle\frac{\pi}{2\Phi} r_2\Bigg(\coth\Big(\displaystyle\frac{\pi(w-p_1)}{2\Phi}\Big)+\coth\Big(\displaystyle\frac{\pi(-w+\pi i-p_1)}{2\Phi}\Big)\Bigg),
\end{align}
where $r_2$ is defined by (\ref{r1r2}).
Obviously $T_2$ also satisfies (\ref{PT_1}).\\
The poles of $T_2$ in $\overline{V_{-\Phi}^{\pi+\Phi}}$ are only
\begin{equation}\label{polT_2}
p_1,\quad -p_1+\pi i \quad {\rm and }\quad \overset{}{\underset{p_1}{res}}\;T_2=r_2,\quad \overset{}{\underset{-p_1+\pi i}{res}}\;T_2=-r_2.
\end{equation}
Finally, we define
\begin{equation}\label{hat{v}_1^1}
\hat{v}_1^1(w):=\left\{
\begin{array}{rcl}
\hat{a}_1(w)+T_1(w)+T_2(w),& & \Phi>\displaystyle\frac{3\pi}{2}\\\\
\hat{a}_1(w)+T_2(w),& & \Phi<\displaystyle\frac{3\pi}{2},
\end{array}
\right.
\end{equation}
where $\hat{a}_1(w)$ is given in Proposition\;\ref{P_2a_2}.
\newpage
\begin{teo}\label{Theo2}
Let $\Phi\neq\displaystyle\frac{3\pi}{2}$.\\
{\bf i)} The function $\hat{v}_1^1$ satisfies all the hypotheses of Proposition\;\ref{pns}.\\
{\bf ii)} $\hat{v}_1^1(w)\in\mathcal{H}\Big(\overline{V_{\pi}^{\frac{3\pi}{2}}}\setminus\Gamma_{\pi}\Big)$ and it has a unique pole $-p_1+\pi i$ on $\Gamma_{\pi}$ with residue
\begin{align}\label{res}
\overset{}{\underset{-p_1+\pi i}{res}}\;\hat{v}_1^1=-r_2.
\end{align}
{\bf iii)} $\hat{v}_1^1(w)\in\mathcal{M}\Big(\overline{V_{-\frac{\pi}{2}-\Phi}^{-\Phi}}\Big)$ and it has a unique pole at $p_1-2\pi i$ for $\Phi>\displaystyle\frac{3\pi}{2}$,
\begin{equation}\label{ren1}
\overset{}{\underset{p_1-2\pi i}{res}}\;\hat{v}_1^1=r_2.
\end{equation}
\end{teo}
{\bf Proof.} {\bf i)} $\hat{v}_1^1$ satisfies (\ref{de.}) and (\ref{icheck{v}_1^1}) by Proposition\;\ref{P_2a_2}, (\ref{hat{v}_1^1}) and (\ref{PT_1}).\\
Let us prove (\ref{cu1}) and (\ref{u2}).
Consider $\Phi>\displaystyle\frac{3\pi}{2}$. By Proposition\;\ref{P_2a_2}, (\ref{resw_1}), (\ref{pt1}) and (\ref{polT_2}), the possible poles of $\hat{v}_1^1$ in $\overline{V_{-\Phi}^{\pi+\Phi}}$ belong to $\Big\lbrace q_1, -q_1+\pi i\Big\rbrace \cup P$, where $P$ is given by (\ref{P}).
Moreover, $\overset{}{\underset{q_1}{res}}\;\hat{v}_1^1=\overset{}{\underset{-q_1+\pi i}{res}}\;\hat{v}_1^1=0$ by (\ref{rcheck_1^1}), (\ref{resw_1}) and (\ref{polT_2}). Hence, $\hat{v}_1^1$ satisfies (\ref{cu1}). Further, by (\ref{resw_1}) and (\ref{polT_2}),
$\overset{}{\underset{-p_1-\pi i}{res}}\;\hat{v}_1^1=r_1$, $\overset{}{\underset{p_1}{res}}\;\hat{v}_1^1=r_2$. Thus, (\ref{u2}) is proven for
$\Phi>\displaystyle\frac{3\pi}{2}$, and $\hat{v}_1^1$ satisfies all the hypotheses of Proposition\;\ref{pns} in this case (see Fig.\;\ref{phi+}).\\
Consider $\Phi<\displaystyle\frac{3\pi}{2}$. In this case all the poles of $\hat{v}_1^1$ in $\overline{V_{-\Phi}^{\pi+\Phi}}$ belong to $P$ by Proposition\;\ref{P_2a_2}, (\ref{hat{v}_1^1}). Hence, (\ref{cu1}) holds. Moreover, from (\ref{3.4''}), (\ref{polT_2})
$
\overset{}{\underset{p_1+2\pi i}{res}}\;\hat{v}_1^1=\overset{}{\underset{p_1+2\pi i}{res}}\;\hat{a}_1=-r_1.
$
Hence, $\overset{}{\underset{-p_1-\pi i}{res}}\;\hat{v}_1^1=r_1$ by (\ref{icheck{v}_1^1}).
The equality $\overset{}{\underset{p_1}{res}}\;\hat{v}_1^1=r_2$ follows from (\ref{hat{v}_1^1}), (\ref{polT_2}) and the analyticity of $\hat{a}_1$ in $\hat{\Pi}$. Thus $\hat{v}_1^1$ satisfies (\ref{u2}) too (see Fig.\;\ref{Phi+1}).\\
{\bf ii)} By Proposition\;\ref{P_2a_2}, $\hat{a}_1\in\mathcal{H}\Big(\overline{V_{-\frac{\pi}{2}-\Phi}^{\frac{\pi}{2}+\Phi}}\Big)$, which implies $\hat{a}_1\in\mathcal{H}\Big(\overline{V_{\pi}^{\frac{3\pi}{2}}}\Big)$. By (\ref{pt1}), $T_1$ has poles in $\overline{V_{-\Phi}^{\pi+\Phi}}$ only at $q_1$, $-p_1-\pi i$, $-q_1+\pi i$, $p_1+2\pi i$. For $\Phi>\displaystyle\frac{3\pi}{2}$ none of these poles belong to $\overline{V_{\pi}^{\frac{3\pi}{2}}}$. The function $T_2$ has poles in $\overline{V_{-\Phi}^{\pi+\Phi}}$ only at $p_1$, $-p_1+\pi i$ by (\ref{T_2}). Further, $p_1\notin\overline{V_{\pi}^{\frac{3\pi}{2}}}$, $-p_1+\pi i\in\Gamma_{\pi}$, and hence ${\bf ii)}$ holds by (\ref{hat{v}_1^1}), (\ref{polT_2}).\\
{\bf iii)} Consider $\overline{V_{-\frac{\pi}{2}-\Phi}^{-\Phi}}$. By Proposition\;\ref{P_2a_2}, $\hat{a}_1$ has poles here at $p_1-2\pi i$ for $\Phi>\displaystyle\frac{3\pi }{2}$ and at $-p_1+\pi i-2\Phi i$ for $\Phi<\displaystyle\frac{3\pi}{2}$ with residues (\ref{rcheck_1^1}) and (\ref{3.4''}).\\
From (\ref{T_1}) and (\ref{T_2}) it follows that $T_1$ does not have poles in $\overline{V_{-\frac{\pi}{2}-\Phi}^{-\Phi}}$, while $T_2$ has a unique pole there, at $-p_1+\pi i-2i\Phi$, only for $\Phi\leq\displaystyle\frac{3\pi}{2}$, and $\overset{}{\underset{-p_1+\pi i-2i\Phi}{res}}\;T_2=-r_2$.
Hence $\hat{v}_1^1$ has a pole at $p_1-2\pi i$ for $\Phi> \displaystyle\frac{3\pi}{2}$ and a possible pole at $-p_1+\pi i-2i\Phi$ for $\Phi< \displaystyle\frac{3\pi}{2}$.\\
From (\ref{hat{v}_1^1}) we obtain
$$
\overset{}{\underset{p_1-2\pi i}{res}}\;\hat{v}_1^1=\overset{}{\underset{p_1-2\pi i}{res}}\;\hat{a}_1+\overset{}{\underset{p_1-2\pi i}{res}}\;T_1+\overset{}{\underset{p_1-2\pi i}{res}}\;T_2=r_2+0+0=r_2.
$$
Similarly
$$
\overset{}{\underset{-p_1+\pi i-2i\Phi}{res}}\;\hat{v}_1^1=\overset{}{\underset{-p_1+\pi i-2i\Phi}{res}}\;\hat{a}_1+\overset{}{\underset{-p_1+\pi i-2i\Phi}{res}}\;T_1+\overset{}{\underset{-p_1+\pi i-2i\Phi}{res}}\;T_2=r_2+0-r_2=0.
$$
Therefore {\bf iii)} and hence Theorem\;\ref{Theo2} are proven. ~~~~~$\blacksquare$
\begin{coro}\label{cv1}
For $\Phi\neq\displaystyle\frac{3\pi}{2}$ the unique pole of $\hat{v}_1$ belonging to $\overline{V_{-\frac{\pi}{2}-\Phi}^{\frac{3\pi}{2}}}$ is $-p_1+\pi i$, and
\begin{equation}\label{reshat{v}1}
\overset{}{\underset{-p_1+\pi i}{res}}\;\hat{v}_1=2i\sin\Phi.
\end{equation}
\end{coro}
{\bf Proof.} The only pole of $\hat{v}_1^1(w)$ in $\overline{V_{\pi}^{\frac{3\pi}{2}}}$ is $-p_1+\pi i$, by Theorem\;\ref{Theo2}\;ii), with residue (\ref{res}). Hence, $\hat{v}_1$ has a unique pole at $-p_1+\pi i$ in $\overline{V_{\pi}^{\frac{3\pi}{2}}}$ and (\ref{reshat{v}1}) follows from (\ref{Si2}), (\ref{res}) and (\ref{r1r2}).
The function $\hat{v}_1$ is analytic in $\hat{V}_{\Sigma}=V_{-\Phi}^{\pi}$ by Proposition\;\ref{pns}, since $\hat{v}_1^1$ satisfies all the hypotheses of this Proposition by Theorem\;\ref{Theo2}.\\
It remains only to prove that $\hat{v}_1$ is analytic in $\overline{V_{-\frac{\pi}{2}-\Phi}^{-\Phi}}$. By Theorem\;\ref{Theo2} the function $\hat{v}_1^1$ has a unique pole at $p_1-2\pi i$ in $\overline{V_{-\frac{\pi}{2}-\Phi}^{-\Phi}}$ with residue (\ref{ren1}) and the function $\hat{G}$ also has a unique pole at this point with residue $-r_2$ by (\ref{r1r2}). Hence, the function $\hat{v}_1$ is analytic in $\overline{V_{-\frac{\pi}{2}-\Phi}^{-\Phi}}$ for $\Phi\neq\displaystyle\frac{3\pi}{2}$ and the Corollary is proven.~~~~~$\blacksquare$
\section{$h_1$-invariant solution of the difference equation in the case $\Phi=\displaystyle\frac{3\pi}{2}$}
\setcounter{equation}{0}
In the previous sections we have constructed a solution to problem (\ref{de.}), (\ref{icheck{v}_1^1}), satisfying all the conditions of Proposition\;\ref{pns} for $\Phi\neq\displaystyle\frac{3\pi}{2}$.\\
It is possible to construct a solution for $\Phi=\displaystyle\frac{3\pi}{2}$ using the same method. A slight technical inconvenience in this case arises from the fact that the function $\check{G}_2(t)$ has a pole on $\check{\beta}$. Nevertheless, one can obtain a solution with the properties indicated in Theorem\;\ref{Theo2}.\\
However, we prefer to find a solution of the problem in the case $\Phi=\displaystyle\frac{3\pi}{2}$ by another method.
The point is that in this case it is easy to find a solution of the difference equation (\ref{de.}) in an {\it explicit form} without using the Cauchy-type integral.\\
Using Liouville's theorem it is easy to show that this elementary solution coincides with the solution obtained by the Cauchy-type integral.\\
In this section we give a meromorphic $h_1$-invariant solution of (\ref{de.}).
\subsection{Meromorphic solution of the difference equation for $\Phi=\displaystyle\frac{3\pi}{2}$}
In this case the construction of a meromorphic solution of difference equation (\ref{de.}) is simpler than in the case $\Phi\neq\displaystyle\frac{3\pi}{2}$ and $\hat{v}_1^1$ is expressed through elementary functions.
By (\ref{d21}), for $\Phi=\displaystyle\frac{3\pi}{2}$, we have
\begin{equation}\label{G_2}
\begin{array}{lll}
G_2(w) = \displaystyle\frac{i\omega^2\sinh 2w}{\omega^2\sinh^2 w+k^2}.
\end{array}
\end{equation}
Let us solve difference equation (\ref{de.}) in this case.
First, we solve (\ref{de.}) in the class of meromorphic functions. It is easy to guess a solution, using the $3\pi i$-periodicity of $G_2$. Let us define
\begin{equation*}
m_1(w):= \frac{i w \;G_2(w)}{3\pi}.
\end{equation*}
Then, by (\ref{G_2}), $m_1$ satisfies (\ref{de.}).
Of course, this solution is not unique. All the other solutions differ from it by a $3\pi i$-periodic function. Similarly to the case $\Phi\neq\displaystyle\frac{3\pi}{2}$, we will modify this solution in such a way that it will satisfy all the conditions of Proposition\;\ref{pns}.\\
The function $m_1$ is not automorphic with respect to $\hat{h}_1$. Let us symmetrize it.\\
Define
\begin{equation}\label{overline u}
m(w):=\frac{m_1(w)+m_1(-w+\pi i)}{2}.
\end{equation}
Then
\begin{equation}\label{m(w)}
m(w)=\displaystyle\frac{\pi+2i w}{6\pi}\cdot G_2(w).
\end{equation}
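The identity behind the passage from (\ref{overline u}) to (\ref{m(w)}) is $G_2(-w+\pi i)=-G_2(w)$, which follows from (\ref{G_2}) since $\sinh(-w+\pi i)=\sinh w$ and $\sinh\big(2(-w+\pi i)\big)=-\sinh 2w$. A numerical sanity check of the resulting closed form, with hypothetical values of $\omega\in\mathbb{C}^{+}$ and $k>0$:

```python
import cmath, math

omega = 1.0 + 0.4j      # hypothetical value in C^+
k = 0.7                 # hypothetical wavenumber

def G2(w):
    # Formula (G_2) for Phi = 3*pi/2
    return 1j * omega**2 * cmath.sinh(2 * w) / (omega**2 * cmath.sinh(w)**2 + k**2)

def m1(w):
    return 1j * w * G2(w) / (3 * math.pi)

def m_sym(w):
    # Symmetrization (overline u)
    return (m1(w) + m1(-w + math.pi * 1j)) / 2

def m_closed(w):
    # Closed form (m(w))
    return (math.pi + 2j * w) / (6 * math.pi) * G2(w)

for w in (0.3 + 0.2j, -1.1 + 0.9j, 2.0 - 0.4j):
    assert abs(m_sym(w) - m_closed(w)) < 1e-12
```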
\begin{lem}\label{lu}
i) The function $m$ is an $h_1$-automorphic solution to (\ref{de.}).\\
ii) $m$ has poles in $\overline{V_{\frac{-5\pi}{2}}^{\frac{5\pi}{2}}}$ only at the points of the set
\begin{equation}\label{p3}
P_1:=\Big\lbrace \pm p_1, p_1\pm\pi i, -p_1\pm\pi i, p_1\pm2\pi i, -p_1\pm2\pi i\Big\rbrace,
\end{equation}
(see Fig.\;\ref{AAJ}), and
\begin{equation}\label{re20}
\begin{array}{lll}
&&\overset{}{\underset{p_1}{res}}\;m =m_1,\quad \overset{}{\underset{p_1+\pi i}{res}}\;m =m_2,\quad \overset{}{\underset{p_1-\pi i}{res}}\;m =m_3,\quad \overset{}{\underset{p_1+2\pi i}{res}}\;m =m_4,\quad \overset{}{\underset{p_1-2\pi i}{res}}\;m =m_5,\\\\
&& \overset{}{\underset{-p_1}{res}}\;m = -m_2,\quad \overset{}{\underset{-p_1+\pi i}{res}}\;m =-m_1, \quad \overset{}{\underset{-p_1-\pi i}{res}}\;m =-m_4,\quad\overset{}{\underset{-p_1+2\pi i}{res}}\;m =-m_3,\quad \overset{}{\underset{-p_1-2\pi i}{res}}\;m=m_6,
\end{array}
\end{equation}
where
\begin{equation}\label{mk}
\begin{array}{lll}
&& m_1=-\displaystyle\frac{p_1}{3\pi}+\displaystyle\frac{i}{6},\quad m_2=-\displaystyle\frac{p_1}{3\pi}-\displaystyle\frac{i}{6},\quad m_3=-\displaystyle\frac{p_1}{3\pi}+\displaystyle\frac{i}{2},\quad m_4=-\displaystyle\frac{p_1}{3\pi}-\displaystyle\frac{i}{2},\\\\
&& m_5=-\displaystyle\frac{p_1}{3\pi}+\displaystyle\frac{5i}{6},\quad m_6=\displaystyle\frac{p_1}{3\pi}+\displaystyle\frac{5i}{6}.
\end{array}
\end{equation}
\end{lem}
{\bf Proof}. i) The assertion follows from a direct substitution of (\ref{m(w)}) into (\ref{de.}), and (\ref{icheck{v}_1^1}) follows from (\ref{overline u}).
\medskip
ii) The zeros of $\omega^2\sinh^{2} w+k^2$ are
\begin{equation*}
\pm p_1+2k\pi i,\quad \pm p_1+\pi i+2k\pi i,\quad k\in\mathbb{Z},
\end{equation*}
where $p_1$ is defined by (\ref{P}). Obviously, only the poles from $P_1$ belong to $\overline{V_{-\frac{5\pi}{2}}^{\frac{5\pi}{2}}}$, see Fig.\;\ref{AAJ}.\\
Formulas (\ref{re20}) follow from (\ref{m(w)}) and Lemma\;\ref{lh_1}.~~~~$\blacksquare$
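The residues (\ref{re20}), (\ref{mk}) can be confirmed numerically. In the sketch below $\omega$ and $k$ are hypothetical values, and $p_1$ is taken as the root of $\omega^2\sinh^2 w+k^2=0$ with $\sinh p_1=ik/\omega$ (an assumption made only for this check); we verify the residues at $p_1$ and $p_1+\pi i$:

```python
import cmath, math

omega = 1.0 + 0.4j       # hypothetical value in C^+
k = 0.7                  # hypothetical wavenumber
# One zero of omega^2*sinh^2(w) + k^2, assumed to play the role of p_1:
p1 = cmath.asinh(1j * k / omega)

def m(w):
    G2 = 1j * omega**2 * cmath.sinh(2 * w) / (omega**2 * cmath.sinh(w)**2 + k**2)
    return (math.pi + 2j * w) / (6 * math.pi) * G2

def residue(f, z0, r=1e-2, n=4096):
    # (1/(2*pi*i)) * contour integral over a small circle around z0
    s = 0j
    for j in range(n):
        t = 2 * math.pi * j / n
        s += f(z0 + r * cmath.exp(1j * t)) * cmath.exp(1j * t)
    return s * r / n

m1_pred = -p1 / (3 * math.pi) + 1j / 6   # predicted residue at p_1
m2_pred = -p1 / (3 * math.pi) - 1j / 6   # predicted residue at p_1 + pi*i
assert abs(residue(m, p1) - m1_pred) < 1e-8
assert abs(residue(m, p1 + math.pi * 1j) - m2_pred) < 1e-8
```

The remaining residues in (\ref{mk}) can be checked the same way.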
\medskip
Now we modify the function $m(w)$ in such a way that it will satisfy the conditions (\ref{cu1}), (\ref{u2}) of Proposition\;\ref{pns}.
To this end we add to $m$ an appropriate $3\pi i$-periodic function.\\
Since for $\Phi=\displaystyle\frac{3\pi}{2}$, $r_1=r_2=i$, conditions (\ref{cu1}), (\ref{u2}) take the form
\begin{equation*}
\hat{v}_1^1(w)\in\mathcal{H}\Big(V_{-\frac{3\pi}{2}}^{\frac{5\pi}{2}}\setminus P\Big),
\end{equation*}
where $P$ is given by (\ref{P}), and
$
\overset{}{\underset{p_1}{res}}\;\hat{v}_1^1(w)=i, \overset{}{\underset{-p_1-\pi i}{res}}\;\hat{v}_1^1(w)=i.
$
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.25]{AAJ1.pdf}
\caption{Poles of the function $m$, $\Phi=\displaystyle\frac{3\pi}{2}$}\label{AAJ}
\end{figure}
\vspace{0.5cm}
\subsection{Solution of the difference equation for $\Phi=\displaystyle\frac{3\pi}{2}$}
By (\ref{p3}), the function $m$ has 8 poles in $\overline{V_{-\frac{3}{2}\pi}^{\frac{5}{2}\pi}}$ belonging to $P_1$ with residues (\ref{re20}).
We modify $m$ so that (\ref{cu1}) and (\ref{u2}) hold.
To this end we first add to $m$ the functions $Q_1$ and $Q_3$, which ``correct'' the residues $m_1$ at the point $p_1$ and $-m_4$ at the point $-p_1-\pi i$ to the value $i$. Then we add $Q_2$, which together with $Q_3$ annihilates the poles $-p_1$, $p_1+\pi i$ and $p_1-\pi i$, $-p_1+2\pi i$.\\
So, consider the following functions (the periodic supplements)
\begin{align}\label{overline{T}_1}
Q_1(w)&:=\displaystyle\frac{i-m_1}{3}\Bigg[\coth \displaystyle\frac{w-p_1}{3}-\coth \displaystyle\frac{w-(-p_1+\pi i)}{3}\Bigg],\\ \nonumber\\
\label{overline{T}_2}
Q_2(w)&:=-\displaystyle\frac{m_2}{3}\Bigg[\coth \displaystyle\frac{w-(p_1+\pi i)}{3}-\coth \displaystyle\frac{w-(-p_1)}{3}\Bigg],
\\ \nonumber\\ \label{T_3}
Q_3(w)&:=-\displaystyle\frac{m_3}{3}\Bigg[\coth\displaystyle\frac{w-(p_1-\pi i)}{3}-\coth\displaystyle\frac{w-(-p_1-\pi i)}{3}\Bigg],
\end{align}
where $m_{1,2,3}$ are given by (\ref{mk}).
Obviously, the functions $Q_{1,2,3}$ are $3\pi i$-periodic, and $\hat{h}_1$-automorphic:
\begin{equation}\label{PT_{1,2}}
Q_{1,2,3}(w+3\pi i)=Q_{1,2,3}(w),\qquad Q_{1,2,3}\big(\hat{h}_1 w\big)=Q_{1,2,3}(w).
\end{equation}
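Both properties in (\ref{PT_{1,2}}) reduce to the $3\pi i$-periodicity of $\coth(\cdot/3)$ and the oddness of $\coth$. For $Q_1$ they can be checked numerically with arbitrary placeholder values of $m_1$ and $p_1$, writing $\hat{h}_1 w=-w+\pi i$:

```python
import cmath, math

# Placeholders: the identities depend only on the structure of Q_1,
# so m_1 and p_1 are taken as arbitrary complex numbers.
m1 = 0.2 - 0.3j
p1 = 0.5 + 0.2j

def coth(z):
    return cmath.cosh(z) / cmath.sinh(z)

def Q1(w):
    # Definition (overline{T}_1)
    c = (1j - m1) / 3
    return c * (coth((w - p1) / 3) - coth((w - (-p1 + math.pi * 1j)) / 3))

w = 1.1 - 0.7j
assert abs(Q1(w + 3 * math.pi * 1j) - Q1(w)) < 1e-12   # 3*pi*i-periodicity
assert abs(Q1(-w + math.pi * 1j) - Q1(w)) < 1e-12      # h_1-automorphy
```

The checks for $Q_2$ and $Q_3$ are identical.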
Finally, define
\begin{equation}\label{T}
\begin{array}{lll}
Q(w)&:=&Q_1(w)+Q_2(w)+Q_3(w)
=\displaystyle\frac{i-m_1}{3}\coth \displaystyle\frac{w-p_1}{3}-\displaystyle\frac{m_2+m_3}{3} \coth \displaystyle\frac{w-(p_1-\pi i)}{3}-\\\\
&-&\displaystyle\frac{i-m_1}{3} \coth \displaystyle\frac{w-(-p_1+\pi i)}{3}
+\displaystyle\frac{m_2}{3} \coth \displaystyle\frac{w-(-p_1)}{3}+\displaystyle\frac{m_3}{3} \coth\displaystyle\frac{w-(-p_1-\pi i)}{3}.
\end{array}
\end{equation}
From (\ref{overline{T}_1})-(\ref{T}), it follows directly that the set of the poles of
$Q$ in $\overline{V_{-\frac{5\pi}{2}}^{\frac{5\pi}{2}}}$ is $P_1$ given by (\ref{p3}) (see Fig.\;\ref{AAJ}) and
\begin{equation}\label{PolT}
\begin{split}
&\overset{}{\underset{p_1}{res}}\;Q =i-m_1,\quad\overset{}{\underset{p_1+\pi i}{res}}\;Q=\overset{}{\underset{p_1-2\pi i}{res}}\;Q=-m_2,\quad\overset{}{\underset{p_1-\pi i}{res}}\;Q=\overset{}{\underset{p_1+2\pi i}{res}}\;Q=-m_3,\\\\
&\overset{}{\underset{-p_1-2\pi i}{res}}\;Q=\overset{}{\underset{-p_1+\pi i}{res}}\;Q=-(i-m_1), \quad\overset{}{\underset{-p_1}{res}}\;Q=m_2,\quad \overset{}{\underset{-p_1+2\pi i}{res}}\;Q=\overset{}{\underset{-p_1-\pi i}{res}}\;Q=m_3.
\end{split}
\end{equation}
\medskip
Define
\begin{equation}\label{u_3}
\hat{v}_1^1(w):=m(w)+Q(w),
\end{equation}
where $m$ is given by (\ref{m(w)}) and $Q$ is given by (\ref{T}).
\begin{teo}\label{lu_2}
{\bf i)} For $\Phi=\displaystyle\frac{3\pi}{2}$ the function $\hat{v}_1^1$ satisfies all the conditions of Proposition\;\ref{pns}.\\
{\bf ii)} The poles of $\hat{v}_1^1$ in $\overline{V_{-\frac{5\pi}{2}}^{\frac{3\pi}{2}}}$ are
\begin{equation}\label{Pol}
-p_1-\pi i,\quad -p_1+\pi i, \quad p_1,\quad p_1-2\pi i
\end{equation}
with the following residues
\begin{equation}\label{5.17'}
\overset{}{\underset{-p_1-\pi i}{res}}\;\hat{v}_1^1=i,\quad \overset{}{\underset{-p_1+\pi i}{res}}\;\hat{v}_1^1=-i,\quad \overset{}{\underset{p_1}{res}}\;\hat{v}_1^1=i,\quad \overset{}{\underset{p_1-2\pi i}{res}}\;\hat{v}_1^1=i.
\end{equation}
\end{teo}
\noindent {\bf Proof.} {\bf i)} Equations (\ref{de.}) and (\ref{icheck{v}_1^1}) follow from Lemma\;\ref{lu} and (\ref{u_3}).
From (\ref{re20}), (\ref{mk}) and (\ref{PolT}) we obtain that (\ref{u2}) holds.\\
Let us prove (\ref{cu1}). Since all the poles of $\hat{v}_1^1$ in $\overline{V_{-\frac{3\pi}{2}}^{\frac{5\pi}{2}}}$ belong to $P_1$ by Lemma\;\ref{lu}, it suffices to prove that $\hat{v}_1^1$ is analytic in $P_1\setminus P=\Big\lbrace -p_1, p_1\pm \pi i, -p_1+2\pi i\Big\rbrace$.\\
From (\ref{re20}), (\ref{PolT}), (\ref{icheck{v}_1^1}) and Lemma\;\ref{lh_1} we obtain
$
\overset{}{\underset{-p_1}{res}}\;\hat{v}_1^1=\overset{}{\underset{p_1+\pi i}{res}}\;\hat{v}_1^1=\overset{}{\underset{-p_1+2\pi i}{res}}\;\hat{v}_1^1=\overset{}{\underset{p_1-\pi i}{res}}\;\hat{v}_1^1=0.
$
Thus (\ref{cu1}) and i) are proven.\\
{\bf ii)} By (\ref{p3}) and (\ref{T}) the poles of $m$ and $Q$ in $\overline{V_{-\frac{5\pi}{2}}^{\frac{3\pi}{2}}}$ are
$
\pm p_1, p_1\pm \pi i, \pm p_1-2\pi i, -p_1\pm \pi i.
$
From (\ref{re20}), (\ref{mk}), (\ref{PolT}) and (\ref{u_3}) it follows that $\hat{v}_1^1(w)$
does not have poles at $-p_1$, $p_1+\pi i$, $p_1-\pi i$, $-p_1-2\pi i$ and has poles (\ref{Pol}) with residues (\ref{5.17'}).
Statement ii) is also proven.~~~~~$\blacksquare$
\medskip
Now we establish an important property of the function $\hat{v}_1$ similar to Corollary\;\ref{cv1} for the case $\Phi=\displaystyle\frac{3\pi}{2}$.\\
We recall that this function plays the crucial role in the construction of the Sommerfeld-type representation for the solution of the main problem. This representation will be given in the following section.
\begin{coro}\label{rv1}
For $\Phi=\displaystyle\frac{3\pi}{2}$ the function $\hat{v}_1$ given by (\ref{Si2}) has a unique pole at $-p_1+\pi i$ belonging to $\overline{V_{-\frac{5\pi}{2}}^{\frac{3\pi}{2}}}$, and
$
\overset{}{\underset{-p_1+\pi i}{res}}\;\hat{v}_1=-2i.
$
\end{coro}
{\bf Proof.} The function $\hat{v}_1\in\mathcal{H}\big(\hat{V}_{\Sigma}\big)=\mathcal{H}\big(V_{-\frac{3\pi}{2}}^{\pi}\big)$ by Proposition\;\ref{pns} and Theorem\;\ref{lu_2}. It suffices to analyze $\overline{V_{-\frac{5\pi}{2}}^{\frac{3\pi}{2}}}\setminus\hat{V}_{\Sigma}=\overline{V_{\pi}^{\frac{3\pi}{2}}}\cup \overline{V_{-\frac{5\pi}{2}}^{-\frac{3\pi}{2}}}$.\\
First, consider $\overline{V_{\pi}^{\frac{3\pi}{2}}}$. By Theorem\;\ref{lu_2}\;ii), (\ref{G(w)}) and the $2\pi i$-periodicity of $\hat{G}$, the only poles of $\hat{v}_1^1$ and $\hat{G}$ in $\overline{V_{\pi}^{\frac{3\pi}{2}}}$ are $-p_1+\pi i$ and $p_1$, and the only pole of these functions in $\overline{V_{-\frac{5\pi}{2}}^{-\frac{3\pi}{2}}}$ is $p_1-2\pi i$, with residues given by (\ref{5.17'}) and (\ref{r1r2}). Hence the statement follows from (\ref{Si2}). $~~~~~\blacksquare$
\newpage
\section{Asymptotics of $\hat{v}_1$ at infinity}
\setcounter{equation}{0}
To prove (\ref{SP}) we have to find the asymptotics of the integrand $\hat{v}_1(w)$ at infinity.
\subsection{Asymptotics of $\hat{v}_1^1$ at infinity}
\begin{lem}\label{lashat{v}_1^1}
For any $\Phi\in(\pi,2\pi)$ the function $\hat{v}_1^1$ admits the following asymptotics:
\begin{equation}\label{as hat{v}_1^1}
\left.
\begin{array}{rcl}
\hat{v}_1^1(w)&=&\pm \displaystyle\frac{\sin\Phi}{\Phi} \Big(w-\displaystyle\frac{\pi i}{2}\Big)+o\Big(e^{\mp\frac{\pi}{2\Phi}w}\Big),\\\\
\displaystyle\frac{d}{dw}\hat{v}_1^1(w)&=&\pm\displaystyle\frac{\sin\Phi}{\Phi}+o\big(e^{\mp \frac{\pi}{2\Phi}w}\big)
\end{array}
\right| \quad \Re w\to \pm\infty.
\end{equation}
\end{lem}
{\bf Proof.}
From (\ref{T_1}) it follows that $T_1(w)$ admits the following asymptotics
$$
T_1(w)=o\Big(e^{\mp\frac{\pi}{2\Phi}w}\Big),\quad \Re w\to\pm\infty.
$$
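The decay of $T_1$ can also be observed numerically: as $\Re w\to\pm\infty$ the two $\coth$ terms in (\ref{T_1}) tend to limits of opposite sign and cancel, which yields decay of order $e^{-\frac{\pi}{\Phi}|\Re w|}$, stronger than required. With placeholder values of $\Phi$, $r_1$ and $q_1$:

```python
import cmath, math

Phi = 1.7 * math.pi     # hypothetical value in (pi, 2*pi)
r1 = 0.4 + 0.9j         # placeholder for r_1
q1 = 0.6 + 0.3j         # placeholder for q_1

def coth(z):
    return cmath.cosh(z) / cmath.sinh(z)

def T1(w):
    a = math.pi / (2 * Phi)
    return a * r1 * (coth(a * (w - q1)) + coth(a * (-w + math.pi * 1j - q1)))

# coth(z) -> +-1 exponentially as Re z -> +-infinity, so the two terms cancel
# and |T1(w)| is o(exp(-pi*|Re w|/(2*Phi))):
for x in (15.0, -15.0):
    assert abs(T1(x + 0.4j)) < math.exp(-math.pi * abs(x) / (2 * Phi))
```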
Similarly, $T_2(w)$ admits the same asymptotics by (\ref{T_2}) and, by (\ref{hat{v}_1^1}), (\ref{ashat{a}_1}), $\hat{v}_1^1$ satisfies (\ref{as hat{v}_1^1}) in the case $\Phi\neq\displaystyle\frac{3\pi}{2}$.\\
Consider the case $\Phi=\displaystyle\frac{3\pi}{2}$. From (\ref{u_3}), (\ref{m(w)}), (\ref{T}) it follows that the asymptotics (\ref{as hat{v}_1^1}) holds in this case too.
Similarly, differentiating (\ref{u_3}), we obtain the second asymptotics in (\ref{as hat{v}_1^1}).~~~~~$\blacksquare$
\begin{obs}
The asymptotics of $\hat{v}_1^1$ coincide for the cases $\Phi\neq\displaystyle\frac{3\pi}{2}$ and $\Phi=\displaystyle\frac{3\pi}{2}$.
\end{obs}
\subsection{Asymptotics of $\hat{v}_1(w)$}
By (\ref{Si2}),
$
\hat{v}_1(w)=\hat{v}_1^1(w)-\hat{G}(w),\quad w\in\mathbb{C},
$
where $\hat{G}(w)$ is given by (\ref{G(w)}).
Obviously,
\begin{align*}
\hat{G}(w)=\pm e^{\mp i\Phi}+o\big(e^{\mp\frac{\pi}{2\Phi}w}\big),\quad
\displaystyle\frac{d}{dw} \hat{G}(w)=o\big(e^{\mp\frac{\pi}{2\Phi}w}\big),\quad \Re w\to\pm\infty.
\end{align*}
Hence,
\begin{align}\label{as hat{v}_1}
& \hat{v}_1(w)={\rm sign}(\Re w)\cdot\displaystyle\frac{\sin\Phi}{\Phi}\Big(w-\displaystyle\frac{\pi i}{2}\Big) -{\rm sign}(\Re w)\, e^{-{\rm sign}(\Re w)\, i\Phi}+o\Big(e^{-{\rm sign}(\Re w)\frac{\pi w}{2\Phi}}\Big),
\\\nonumber\\\label{as dhat{v}_1}
&\displaystyle\frac{d}{dw}\hat{v}_1(w)={\rm sign}(\Re w)\displaystyle\frac{\sin\Phi}{\Phi}+o\Big(e^{-{\rm sign}(\Re w)\frac{\pi w}{2\Phi}}\Big),\quad\Re w\to\pm\infty.
\end{align}
by (\ref{as hat{v}_1^1}).~~~~~~$\blacksquare$
\section{Sommerfeld-type representation of solution to problem (\ref{u_1}) }
\setcounter{equation}{0}
In this section we give a Sommerfeld-type representation of the solution to problem (\ref{u_1}). Such a representation was introduced by A. Sommerfeld and is widely used in Mathematical Diffraction Theory \cite{Somm}. It is an integral of a specially chosen integrand along a Sommerfeld-type contour, which has a double-loop form as in Fig.\;\ref{CS3}.\\
We first define this curvilinear contour depending on $\omega\in\mathbb{C}^{+}$ (in contrast to the Sommerfeld contour), and then we reduce it to the rectilinear contour which coincides with the Sommerfeld contour $\mathcal{C}$ (Fig.\;\ref{CS3}).
Define $\mathcal{C}(\omega)=\mathcal{C}_1(\omega)\cup\mathcal{C}_2(\omega)$, where
\begin{equation*}
\mathcal{C}_2(\omega):=\Big\lbrace w\in\Gamma_{-\frac{5\pi}{2}}(\omega), w_1\leq -b\Big\rbrace \cup \gamma_{2}(\omega)\cup \Big\lbrace w\in\Gamma_{-\frac{\pi}{2}}(\omega), w_1\leq -b\Big\rbrace,
\end{equation*}
$\gamma_{2}(\omega)$ is the segment of the line $\big\lbrace -b+iw_2, w_2\in\mathbb{R}\big\rbrace$ lying between $\Gamma_{-\frac{5\pi}{2}}$ and $\Gamma_{-\frac{\pi}{2}}$, $\mathcal{C}_1(\omega)=-\mathcal{C}_2(\omega)-5\pi i$ and $b\geq 2|\Re p_1|$ (see Fig.\;\ref{C_1C_2}).\\
In our case the integrand is the Sommerfeld exponential $e^{-\omega\rho\sinh w}$ multiplied by a kernel $\hat{v}_1$ which was constructed in the previous sections.\\
\noindent {\bf Proof of the main Theorem\;\ref{Tu_1u_2}.}
First, we consider the Sommerfeld integral with the contour $\mathcal{C}(\omega)$. We write the integral (\ref{u_1'}) with $\mathcal{C}(\omega)$ instead of $\mathcal{C}$. We keep the notation $u_1$ for this integral because we will see later that these two integrals coincide.\\
So, let
\begin{equation}\label{2.1'}
u_1(\rho,\theta)=\displaystyle\frac{1}{4\pi\sin\Phi}\displaystyle\int\limits_{\mathcal{C}(\omega)} e^{-\omega\rho \sinh w}\;\hat{v}_1(w+i\theta)\;dw,
\end{equation}
where $\mathcal{C}(\omega)$ is defined above.
Here and in the following we will use the following estimate: for $\rho>0$, $\tau\in[\tau_0,\pi-\tau_0]$ with $0<\tau_0\leq\displaystyle\frac{\pi}{2}$, $w=w_1+iw_2\in\Gamma_0$ and $\omega\in\mathbb{C}^{+}$
\begin{equation}\label{eee}
\big|e^{-\omega\rho\sinh(w-i\tau)}\big|\leq e^{-C(\omega,\tau_0)\rho\cosh w_1},
\end{equation}
where $C(\omega,\tau_0)>0$. The proof of this estimate is given in Appendix\;\ref{ic}.
Hence the integral (\ref{2.1'}) converges by the asymptotics (\ref{as hat{v}_1}), (\ref{check{v}_1}), and (\ref{G(w)}), since $\hat{v}_1(w+i\theta)$ does not have poles on $\mathcal{C}(\omega)$ by Corollaries\;\ref{cv1} and \ref{rv1} (see Fig.\;\ref{C_1C_2}, where the exponential decreases superexponentially in the shaded regions).\\
Let us prove that $u_1(\rho,\theta)$ satisfies the first equation of (\ref{u_1}). To this end we rewrite (\ref{2.1'}) as
\begin{equation*}
u_1(\rho,\theta)=\displaystyle\frac{1}{4\pi\sin\Phi}\displaystyle\int\limits_{\mathcal{C}(\omega)+i\theta} e^{-\omega\rho\sinh(w-i\theta)}\; \hat{v}_1(w) dw.
\end{equation*}
Let us fix $\rho>0$ and $\theta\in(\phi,2\pi)$. By the Cauchy Theorem
\begin{equation*}
u_1(\rho,\theta)=\displaystyle\frac{1}{4\pi\sin\Phi}\displaystyle\int\limits_{\mathcal{C}(\omega)+i\theta_0} e^{-\omega\rho\sinh(w-i\theta)}\; \hat{v}_1(w) dw
\end{equation*}
for any $\theta_0$ sufficiently close to $\theta$.\\
Now the differentiation in $(\rho,\theta)$ under the sign of the integral is possible and the first equation in (\ref{u_1}) follows from the formula $\big(\Delta+\omega^2\big) e^{-\omega\rho\sinh (w-i\theta)}=0$.\\
Finally, boundary conditions (\ref{ra2}) and (\ref{ra3}) are proved in the next section. The integral (\ref{2.1'}) is transformed into the integral (\ref{u_1'}) over the contour $\mathcal{C}=\mathcal{C}(i)$, which no longer depends on $\omega$ (see Fig.\;\ref{CS3}).~~~~~$\blacksquare$
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.17]{C_1C_2.pdf}
\caption{Sommerfeld double-loop contour $\mathcal{C}(\omega)$}\label{C_1C_2}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.17]{C_loop.pdf}
\caption{Sommerfeld double-loop contour $\mathcal{C}=\mathcal{C}(i)$}\label{CS3}
\end{figure}
\newpage
\section{Proof of the boundary conditions}
\setcounter{equation}{0}
\subsection{Decomposition of the solution into a plane wave and a wave dispersed by the vertex}
In this section we decompose the solution of problem (\ref{u_1}) given by (\ref{2.1'}) into two parts: the first part is the plane wave generated by the first boundary condition (\ref{u_1}) and the second part is the wave dispersed by the edge of the wedge.
\newpage
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.4]{Pa1.pdf}
\caption{Decomposition of the solution}\label{CC}
\end{figure}
\vspace{0.5cm}
To give this decomposition we recall that a unique pole of $\hat{v}_1(w)$ lying in $\overline{V_{-\frac{\pi}{2}-\Phi}^{\frac{3\pi}{2}}}$ is
\begin{equation}\label{reshat{v}_1}
-p_1+\pi i\quad{\rm and}\quad \overset{}{\underset{-p_1+\pi i}{res}}\;\hat{v}_1(w)=2i\sin\Phi,
\end{equation}
as follows from Corollaries\;\ref{cv1} and \ref{rv1}.\\
Define a plane wave generated by the first boundary condition in (\ref{u_1}):
\begin{equation}\label{u_p}
u_{p}(\rho,\theta):=e^{-\omega\rho \sinh(p_1+i\theta)},\quad \rho>0,\quad \theta\in\mathbb{R},
\end{equation}
where $p_1$ is given by (\ref{P}), and a ``diffracted'' wave
\begin{equation*}
u_{d}(\rho,\theta):=\displaystyle\frac{1}{4\pi\sin\Phi}\displaystyle\int\limits_{\underrightarrow{\Gamma_{-\frac{5\pi}{2}}}\;\cup\; \underleftarrow{\Gamma_{-\frac{\pi}{2}}}} \; e^{-\omega\rho\sinh w} \hat{v}_1(w+i\theta) dw,\quad \theta\in(2\pi-\Phi, 2\pi].
\end{equation*}
The integrand here coincides with the integrand in (\ref{u_1'}), but the contour of integration differs from $\mathcal{C}$ (see Fig.\;\ref{CC}).\\
It turns out that the solution $u_1$ is decomposed into the sum of $u_{p}$ and $u_{d}$ and the corresponding decomposition is more convenient for the proof of boundary conditions.
\begin{teo}\label{td}
The solution of problem (\ref{u_1}) given by (\ref{u_1'}) admits the following representation
\begin{equation}\label{des}
u_1(\rho,\theta)=\left\{ \begin{array}{rcl}
\displaystyle\frac{1}{4\pi\sin\Phi}\displaystyle\int\limits_{\underrightarrow{\Gamma_{-\frac{5\pi}{2}}}\;\cup\; \underleftarrow{\Gamma_{-\frac{\pi}{2}}}} \; e^{-\omega\rho\sinh w} \hat{v}_1(w+i\theta) dw,~~~~~~ \theta\in\Big[2\pi-\Phi,\displaystyle\frac{3\pi}{2}\Big)\\\\
\displaystyle\frac{1}{4\pi\sin\Phi}\displaystyle\int\limits_{\underrightarrow{\Gamma_{-\frac{5\pi}{2}}}\;\cup\; \underleftarrow{\Gamma_{-\frac{\pi}{2}}}} \; e^{-\omega\rho\sinh w} \hat{v}_1(w+i\theta) dw+u_{p}(\rho,\theta), \theta\in\Big(\displaystyle\frac{3\pi}{2},2\pi\Big].
\end{array}
\right.
\end{equation}
\end{teo}
{\bf Proof.} By the Cauchy Theorem, $u_1$ defined by (\ref{2.1'}) admits the representation
\begin{equation}\label{8.2'}
\begin{array}{lll}
u_1(\rho,\theta)=\displaystyle\frac{1}{4\pi \sin\Phi} \displaystyle\int\limits_{\Gamma_{-\frac{5\pi}{2}}\cup\Gamma_{-\frac{\pi}{2}}} \;e^{-\omega\rho\sinh w} \hat{v}_1(w+i\theta) dw-\displaystyle\frac{1}{4\pi \sin\Phi}\displaystyle\int\limits_{\gamma(\omega)} e^{-\omega\rho\sinh w} \;\hat{v}_1(w+i\theta) dw,
\end{array}
\end{equation}
where $\gamma(\omega)$ is the contour bounded by $\gamma_{2}(\omega)$, $\gamma_1(\omega)$ and $\Gamma_{-\frac{\pi}{2}}$, $\Gamma_{-\frac{5\pi}{2}}$.\\
Let us find the poles of $\hat{v}_1(w+i\theta), w\in\Omega$ for any $\theta\in(2\pi-\Phi, 2\pi)$, where $\Omega$ is the region bounded by $\gamma(\omega)$.\\
Let $w^{\ast}(\theta)\in\Omega$ be a pole of $\hat{v}_1(w+i\theta)$. Then $w_{p}:=w^{\ast}+i\theta$ is a pole of $\hat{v}_1(w)$ belonging to $V_{-\frac{\pi}{2}-\Phi}^{\frac{3\pi}{2}}$.
The function $\hat{v}_1(w)$ has a unique pole $w_p=-p_1+\pi i$ in $\overline{V_{-\frac{\pi}{2}-\Phi}^{\frac{3\pi}{2}}}$ with the residue (\ref{reshat{v}_1}).\\
Hence, $w^{\ast}(\theta)=w_{p}-i\theta=-p_1+\pi i-i\theta$. Obviously $w^{\ast}(\theta)\in\Omega$
only for $\theta\in\Big(\displaystyle\frac{3\pi}{2}, 2\pi\Big]$.
Calculating the second integral in (\ref{8.2'}) with the help of residues, we obtain (\ref{des}) for $\theta\in\Big(\displaystyle\frac{3\pi}{2}, 2\pi\Big]$.
Therefore, (\ref{des}) holds.~~~~~$\blacksquare$
\begin{obs}
It may seem that this formula gives a discontinuous solution on the ray $\theta=\displaystyle\frac{3\pi}{2}$, but this is not the case, since (\ref{des}) coincides with (\ref{2.1'}) by construction. Nevertheless, we also give in Appendix\;\ref{AC} an independent proof of the continuity of $u_1(\rho,\theta)$.
\end{obs}
\newpage
\subsection{Boundary values of the solution}
We continue to prove the main theorem.
\begin{prop}
The solution $u_1(\rho,\theta)$ given by (\ref{u_1'}) is a solution to (\ref{u_1}).
\end{prop}
{\bf Proof.}
The fact that $u_1$ satisfies the Helmholtz equation in (\ref{u_1}) has been proven in Section\;6.
Let us prove the boundary conditions in (\ref{u_1}). First, we prove the first boundary condition in (\ref{u_1}), which in polar coordinates takes the form
$$
u_1(\rho,2\pi)=e^{-ik\rho},\quad \rho>0.
$$
Since $u_{p}(\rho,2\pi)=e^{-i k\rho}$ by (\ref{u_p}), (\ref{P}), it suffices to prove that
the ``diffracted'' wave satisfies the homogeneous condition $u_{d}(\rho,2\pi)=0$.
From (\ref{Si2}) we have
\begin{equation*}
\begin{array}{lll}
u_d(\rho,2\pi)
=\displaystyle\frac{1}{4\pi\sin\Phi}\displaystyle\int\limits_{\underrightarrow{\Gamma_{-\frac{5\pi}{2}}}\;\cup\; \underleftarrow{\Gamma_{-\frac{\pi}{2}}}} \; e^{-\omega\rho\sinh w}\Big[\hat{v}_1^1(w+2\pi i)-\hat{G}(w+2\pi i)\Big]dw,
\end{array}
\end{equation*}
since $\hat{G}$ is $2\pi i$-periodic, the integral of the second summand is equal to $0$,
because $\underleftarrow{\Gamma_{-\frac{\pi}{2}}}$ coincides with $\underrightarrow{\Gamma_{-\frac{5\pi}{2}}}+2\pi i$ taken with the opposite orientation.
Thus, it suffices to prove that
\begin{equation}\label{o}
\displaystyle\int\limits_{\underrightarrow{\Gamma_{-\frac{5\pi}{2}}}\;\cup\; \underleftarrow{\Gamma_{-\frac{\pi}{2}}}} \; e^{-\omega\rho\sinh w}\hat{v}_1^1(w+2\pi i) dw=0.
\end{equation}
Making the change of the variable
$
w'=w+2\pi i, w'\in \underrightarrow{\Gamma_{-\frac{\pi}{2}}}\;\cup\; \underleftarrow{\Gamma_{\frac{3\pi}{2}}},
$
we obtain (\ref{o}) since (after the change) the integrand is an $h_1$-automorphic function and
$
h_1\underrightarrow{\Gamma_{-\frac{\pi}{2}}}=\underleftarrow{\Gamma_{\frac{3\pi}{2}}}.
$\\
Let us prove the second boundary condition in (\ref{u_1}), (\ref{des}):
$
u_1(\rho,2\pi-\Phi)=0, \rho>0.
$
From (\ref{CE}), (\ref{check v_1}) (see Remark\;\ref{r3.7}) we have
\begin{equation*}
\begin{array}{lll}
u_1(\rho,2\pi-\Phi)
=-\displaystyle\frac{1}{4\pi\sin\Phi}\displaystyle\int\limits_{\underrightarrow{\Gamma_{-\frac{5\pi}{2}}}\;\cup\; \underleftarrow{\Gamma_{-\frac{\pi}{2}}}} \; e^{-\omega\rho\sinh w}\; \hat{v}_2^1(w+2\pi i-i\Phi) dw.
\end{array}
\end{equation*}
Making the change of the variable $w'=w+2\pi i-i\Phi$, we obtain
$$
u_1(\rho,2\pi-\Phi)=-\displaystyle\frac{1}{4\pi\sin\Phi}\displaystyle\int\limits_{\underrightarrow{\Gamma_{-\frac{\pi}{2}-\Phi}}\;\cup\; \underleftarrow{\Gamma_{-\frac{3\pi}{2}-\Phi}}}\; e^{-\omega\rho\sinh(w-2\pi i+i\Phi)}\; \hat{v}_2^1(w) dw=0,
$$
since $\hat{v}_2^1$ is an $h_2$-automorphic function and
$
h_2\underrightarrow{\Gamma_{-\frac{\pi}{2}-\Phi}}=\underleftarrow{\Gamma_{\frac{3\pi}{2}-\Phi}}.~~~~~~\blacksquare
$
\section{The solution belongs to the functional class $E$ and is unique}
\setcounter{equation}{0}
\subsection{Behavior at infinity}
\begin{lem}\label{lb}
The solution $u_1$ is a $C^{\infty}$-function in $\overline{Q}\setminus \lbrace 0\rbrace$, bounded in $\overline{Q}\cap \Big\lbrace (\rho,\theta), \rho\geq\varepsilon>0\Big\rbrace$ together with all its first derivatives.
\end{lem}
{\bf Proof.} This follows from the superexponential decay of the exponential $e^{-\omega\rho\sinh w}$ in the integral (\ref{2.1'}) (see the shaded region in Fig.\;\ref{C_1C_2}), the analyticity of $\hat{v}_1(w)$ for $|\Re w|>|\Re p_1|$ (see Corollaries\;\ref{cv1}, \ref{rv1}), and the asymptotics (\ref{as hat{v}_1}), (\ref{as dhat{v}_1}).~~~~~~$\blacksquare$
\subsection{Asymptotics of the solution at the origin}
We continue to prove the main theorem, namely we prove that $u_1$ given by (\ref{u_1'}) belongs to $E$.
It remains only to prove the asymptotics (\ref{SP}).
Let us prove the first asymptotics in (\ref{SP}). Represent the contour $\mathcal{C}$ as
\begin{equation*}
\mathcal{C}:=\big(\mathcal{C}_{+}\cup\gamma_{+}\big)\cup\big(\mathcal{C}_{-}\cup\gamma_{-}\big),
\end{equation*}
where
$$
\mathcal{C}_{+}:=\underleftarrow{\Gamma_{+}}\cup \big(\underrightarrow{\Gamma_{+}}-2\pi i\big),\quad \mathcal{C}_{-}:=\underleftarrow{\Gamma_{-}}\cup \big(\underrightarrow{\Gamma_{-}}-2\pi i\big),
$$
and the contours $\Gamma_{\pm}, \gamma_{\pm}$ are shown in Fig.\;\ref{CS3}. Note that the ``finite'' part of the integral (\ref{u_1'}) has a ``good'' asymptotics, since
\begin{equation}\label{as f}
\displaystyle\int\limits_{\gamma_{+}\cup\gamma_{-}} e^{-\omega\rho\sinh w}\;\hat{v}_1(w+i\theta)\;dw=C(\theta)+C_1(\theta)\rho+O(\rho^2),\quad\rho\to0
\end{equation}
by (\ref{as hat{v}_1}).\\
Thus it suffices only to find the principal term of the asymptotics of the ``infinite'' part of $u_1$. Since $\sinh w$ is a $2\pi i$-periodic function and is even on $\Gamma_{-}\cup\Gamma_{+}$, we have
\begin{equation*}
\begin{array}{lll}
u_{1,i}&:=&\displaystyle\int\limits_{\mathcal{C}_{+}\cup \mathcal{C}_{-}} e^{-\omega\rho \sinh w}\;{\rm sign} (\Re w)\Bigg(\displaystyle\frac{\sin\Phi}{\Phi}\Big(w-\displaystyle\frac{\pi i}{2}+i\theta\Big)+ e^{-{\rm sign} (\Re w)i\Phi}\Bigg) dw=0
\end{array}
\end{equation*}
and
$$
u_1(\rho,\theta)=\displaystyle\int\limits_{\mathcal{C}} e^{-\omega\rho\sinh w}\;\hat{v}_1(w+i\theta) dw=C(\theta)+o(1),\quad \rho\to0~~ {\rm by}~~ (\ref{as hat{v}_1}).~~~~~~\blacksquare
$$
\medskip
Let us prove the second asymptotics in (\ref{SP}). Using polar coordinates (\ref{PC}), we have
\begin{align*}
& \nabla u_1(\rho,\theta)=\displaystyle\int\limits_{\mathcal{C}} \big(\partial_{y_1},\partial_{y_2}\big) e^{-\omega \rho\sinh w}\;\hat{v}_1(w+i\theta)\;dw=\displaystyle\int\limits_{\mathcal{C}}\Big(K_1(\rho,\theta,w), K_2(\rho,\theta,w)\Big)\;dw,
\end{align*}
where
$$
K_1(\rho,\theta,w)=\displaystyle\frac{\partial}{\partial y_1}\Bigg[e^{-\omega\rho\sinh w}\;\hat{v}_{1}(w+i\theta)\Bigg]=K_{11}(\rho,\theta,w)+K_{12}(\rho,\theta,w),
$$
\begin{align}\label{K_{11} K_{12}}
K_{11}:=e^{-\omega \rho\sinh w}\;(-\omega \sinh w) \cos\theta\;\hat{v}_1(w+i\theta),\qquad
K_{12}:=-\displaystyle\frac{i}{\rho}\sin\theta\;e^{-\omega \rho\sinh w}\;\partial_{w} \hat{v}_1(w+i\theta);
\end{align}
and
$$
K_2(\rho,\theta,w):=\partial_{y_2}\Big[e^{-\omega\rho\sinh w}\;\hat{v}_1(w+i\theta)\Big]=K_{21}(\rho,\theta,w)+K_{22}(\rho,\theta,w),
$$
where
\begin{align*}
K_{21}:=e^{-\omega \rho\sinh w}\;(-\omega \sinh w) \sin\theta\;\hat{v}_1(w+i\theta);\qquad
K_{22}:=\displaystyle\frac{i}{\rho}\cos\theta\;e^{-\omega \rho\sinh w}\;\partial_{w} \hat{v}_1(w+i\theta).
\end{align*}
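For the reader's convenience, we recall the standard polar-coordinate identities behind this splitting: with $y_1=\rho\cos\theta$, $y_2=\rho\sin\theta$,
\begin{equation*}
\partial_{y_1}=\cos\theta\,\partial_{\rho}-\displaystyle\frac{\sin\theta}{\rho}\,\partial_{\theta},\qquad
\partial_{y_2}=\sin\theta\,\partial_{\rho}+\displaystyle\frac{\cos\theta}{\rho}\,\partial_{\theta},
\end{equation*}
and, since $\theta$ enters $\hat{v}_1$ only through the combination $w+i\theta$,
\begin{equation*}
\partial_{\theta}\,\hat{v}_1(w+i\theta)=i\,\partial_{w}\hat{v}_1(w+i\theta),\qquad
\partial_{\rho}\, e^{-\omega\rho\sinh w}=-\omega\sinh w\; e^{-\omega\rho\sinh w}.
\end{equation*}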
It suffices to find the asymptotics of $\partial_{y_1} u_1(\rho,\theta)$, since the asymptotics of $\partial_{y_2} u_1$ is similar.
\medskip
By (\ref{as dhat{v}_1}),
\begin{align}\label{10.4'}
\displaystyle\int\limits_{\mathcal{C}} K_{11}(\rho,\theta,w)\;dw=\Bigg(\displaystyle\int\limits_{\gamma_{+}\cup\gamma_{-}}+\displaystyle\int\limits_{\mathcal{C}_{+}\cup\mathcal{C}_{-}}\Bigg) K_{11}(\rho,\theta,w)\;dw.
\end{align}
We have
\begin{equation}\label{10.4''}
\displaystyle\int\limits_{\gamma_{+}\cup\gamma_{-}} K_{11}(\rho,\theta,w)\;dw=C(\theta)+C_1(\theta)\rho+O(\rho^2),\quad\rho\to0
\end{equation}
similarly to (\ref{as f}). We need the following simple statement, whose proof is given in Appendix\;\ref{lem10}.
\begin{lem}\label{l8}
Let $\omega\in\mathbb{C}^{+}$,
\begin{equation*}
A(\rho):=\displaystyle\int\limits_{0}^{\infty} e^{i\omega\rho\cosh w}\;\cosh w\;o(e^{-\frac{\pi}{2\Phi}w}) dw.
\end{equation*}
Then
\begin{equation}\label{as A}
A(\rho)\sim C(\omega)\rho^{-1+\frac{\pi}{2\Phi}}+O(\rho^{\frac{\pi}{2\Phi}}),\quad\rho\to0.
\end{equation}
\end{lem}
\medskip
By (\ref{as hat{v}_1}), noting that $e^{-\omega\rho \sinh w}=e^{i\omega\rho\cosh w}, w\in \mathcal{C}_{+} \cup \mathcal{C}_{-}$, using Lemma\;\ref{l8} and the arguments of the proof of (\ref{10.4'}), we obtain
\begin{equation*}
\begin{array}{lll}
\displaystyle\int\limits_{\mathcal{C}_{+}\cup \mathcal{C}_{-}}\;K_{11}(\rho,\theta,w)\;dw
=C\rho^{-1+\frac{\pi}{2\Phi}}+C_1+O(\rho^{\frac{\pi}{2\Phi}}),\quad\rho\to0.
\end{array}
\end{equation*}
Hence, using (\ref{10.4''}),
$$
\displaystyle\int\limits_{\mathcal{C}} K_{11}(\rho,\theta,w)\;dw=C_{11}+D_{11}\rho^{-1+\frac{\pi}{2\Phi}}+O(\rho^{\frac{\pi}{2\Phi}}),\quad \rho\to0.
$$
Consider
\begin{equation}\label{10.5'}
\begin{array}{lll}
\displaystyle\int\limits_{\mathcal{C}} K_{12}(\rho,\theta,w)\;dw &=&\Bigg(\displaystyle\int\limits_{\gamma_{+}\cup\gamma_{-}}+\displaystyle\int\limits_{\mathcal{C}_{+}\cup \mathcal{C}_{-}}\Bigg)\cdot K_{12}(\rho,\theta,w)\;dw.
\end{array}
\end{equation}
Similarly to (\ref{10.4''}) and using (\ref{as hat{v}_1^1}), we have
\begin{equation}\label{9.9'}
\displaystyle\int\limits_{\gamma_{+}\cup \gamma_{-}}\;K_{12}(\rho,\theta,w)\;dw
=\displaystyle\frac{1}{\rho} C(\theta)+C_1(\theta)+O(\rho),\quad\rho\to0.
\end{equation}
Similarly to (\ref{10.4'}), we have
\begin{equation*}
-\displaystyle\frac{i}{\rho}\sin\theta\displaystyle\int\limits_{\mathcal{C}_{+}\cup \mathcal{C}_{-}} e^{-\omega \rho\sinh w}\; \Bigg[{\rm sign} (\Re w)\;\displaystyle\frac{\sin\Phi}{\Phi}\Bigg]\;dw=0.
\end{equation*}
Obviously,
\begin{equation*}
\begin{array}{lll}
-\displaystyle\frac{i}{\rho}\sin\theta\displaystyle\int\limits_{\mathcal{C}_{+}\cup \mathcal{C}_{-}} e^{-\omega \rho\sinh w}\;o\big(e^{-{\rm sign} (\Re w)\frac{w\pi}{2\Phi}}\big) dw
=\displaystyle\frac{1}{\rho} C_{12}+D_{12}+o(1),\quad \rho\to0.
\end{array}
\end{equation*}
Hence, substituting the asymptotics (\ref{as hat{v}_1^1}) for $\displaystyle\frac{d}{dw} \hat{v}_1^1$ into (\ref{K_{11} K_{12}}), we obtain from (\ref{10.5'}), (\ref{9.9'}) that
\begin{equation*}
\begin{array}{lll}
\displaystyle\int\limits_{\mathcal{C}} K_{12}(\rho,\theta,w)\;dw=\displaystyle\frac{1}{\rho} C_{12}+D_{12}+o(1),\quad \rho\to0.
\end{array}
\end{equation*}
Thus,
\begin{equation*}
\displaystyle\frac{\partial}{\partial y_1} u_1(\rho,\theta) = \displaystyle\frac{C_{1}}{\rho}+C_{2}+o(1),\quad\rho\to0,\quad \theta\in[2\pi-\Phi,2\pi].
\end{equation*}
Similar asymptotics for $\partial_{y_2} u_1$ holds and the second asymptotics of (\ref{SP}) is proven.~~~~~$\blacksquare$
\subsection{Uniqueness}
In this section we prove Statement $ii)$ of Theorem\;\ref{Tu_1u_2}.
Obviously, it suffices to prove the uniqueness of solution of problem (\ref{ra1})-(\ref{ra2}) in the same space $E$.
Let $v(x)$, $\sigma(x)$ be two solutions of problem (\ref{ra1})-(\ref{ra3}) belonging to the space $E$, and $v_{l}^{\beta}(x_{l})$, $\sigma_{l}^{\beta}(x_{l})$ be their Cauchy data ($l=1,2; \beta=0,1$). Then $\hat{v}_1^1(w), \hat{\sigma}_1^1(w)$ are $h_1$-automorphic solutions of the difference equation (\ref{de.}) and they have the same poles and residues in $V_{-\Phi}^{\Phi+\pi}$ by Proposition\;\ref{pns}. Hence, their difference $\hat{\varphi}_1^1(w):=\hat{v}_1^1(w)-\hat{\sigma}_1^1(w)$ is an analytic solution of the homogeneous equation (\ref{de.}), that is, an entire periodic function on $\mathbb{C}$.\\
Moreover, since $v_{1}^{1}(x_{1})$ and $\sigma_{1}^{1}(x_{1})$ admit the same asymptotics (\ref{SP}), $\varphi_1^1(x_1)$ also admits the asymptotics (\ref{SP}). Hence its F-L transform satisfies $\tilde{\varphi}_1^1(z_1)=-\ln z_1+C+o(1),~z_1\in\mathbb{C}^{+},~\Re z_1\to+\infty$ and hence,
\begin{equation}\label{ar}
\hat{\varphi}_{1}(w) \sim C w\; {\rm sign}(\Re w),\quad \Re w\to \pm\infty, \quad w\in \hat{V}_{1}^{+}.
\end{equation}
Since $\hat{\varphi}_{1}(w)$ is a periodic function with period $2\Phi i$, the asymptotics (\ref{ar}) holds for $w\in\mathbb{C}$. This implies that
\begin{equation}\label{ct}
\hat{\varphi}_1(w)\equiv {\rm const}.
\end{equation}
In fact, let us apply the conformal mapping $\overline{\hat{\Pi}^{+}}\to \mathbb{C}^{\ast}$ given by (\ref{t(w)}).\\
It is easy to show that
\begin{equation*}
\check{\varphi}_1(t):= \hat{\varphi}_1\big(w(t)\big)\in\mathcal{H}\Big(\mathbb{C}^{\ast}\setminus\lbrace 1\rbrace\Big)
\end{equation*}
and it admits the asymptotics
$
\check{\varphi}_1(t)\sim C\log(t-1)+C, t\to1.
$
Hence it has a removable singularity at $t=1$. By the Liouville Theorem, this implies (\ref{ct}). Therefore,
$
\hat{v}_1^1(w)=\hat{\sigma}_1^1(w)+const
$
and hence by (\ref{al})
$
\hat{v}_2^1(w)=\hat{\sigma}_2^1(w)+const.
$
This implies that
$
\tilde{v}_1^1(z_1)=\tilde{\sigma}_1^1(z_1)+const, \tilde{v}_2^1(z_2)=\tilde{\sigma}_2^1(z_2)+const, z_{1,2}\in\mathbb{C}^{+}
$
and
$
v_1^1(x_1)=\sigma_1^1(x_1),~~ x_1>0; v_2^1(x_2)=\sigma_2^1(x_2), x_2>0.
$
Since $v_1^0(x_1)=\sigma_1^0(x_1)=e^{-ik_1 x_1}, x_1>0$ and $v_2^0(x_2)=\sigma_2^0(x_2)=0$ this means that the Dirichlet and Neumann data of $(v(x)-\sigma(x))$ are zeros. Hence,
\begin{equation*}
v(x)-\sigma(x)=0
\end{equation*}
by the uniqueness of solution of elliptic equations.~~~~~~$\blacksquare$
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.24]{loop33.pdf}
\caption{}\label{loop33}
\end{figure}
\newpage
\section{Appendices}
\setcounter{equation}{0}
\subsection{\;\label{aal} Proof of the algebraic equation}
Since $\hat{v}_1^1(w)$ is meromorphic in $\hat{V}_{\Sigma}$ by (\ref{al}) and is automorphic with respect to $\pi i/2$, we extend $\hat{v}_1^1$ by symmetry with respect to $\pi i/2$ to $h_1\hat{V}_{\Sigma}=\hat{V}_{0}^{\pi+\Phi}$. Namely, define
\begin{equation*}
\overline{v}_1^1(w)=\left\{
\begin{array}{rcl}
\hat{v}_{1}^{1}(w), && w\in\hat{V}_{\Sigma}\\\\
\hat{v}_{1}^{1}(-w+\pi i),&& w\in\hat{V}_{\pi}^{\pi+\Phi}.
\end{array}
\right.
\end{equation*}
Obviously, $\overline{v}_1^1(w)$ is meromorphic on $\Gamma_{\pi}$ since $\hat{v}_1^1$ is meromorphic on $\Gamma_{0}$. We will still use the notation $\hat{v}_1^1$ for $\overline{v}_1^1$. Thus (\ref{icheck{v}_1^1}) holds for this extension too.\\
Since $\hat{G}(w)$ is meromorphic in $\mathbb{C}$, by (\ref{al}) $\hat{v}_2^1$ admits a meromorphic continuation onto $\hat{V}_{-\Phi}^{\pi+\Phi}$ which we also denote $\hat{v}_2^1$. Hence
\begin{equation}\label{3.25''}
\hat{v}_1^1(w)+\hat{v}_2^1(w)= \hat{G}(w),\quad w\in \hat{V}_{-\Phi}^{\pi+\Phi}
\end{equation}
by the uniqueness of analytic continuation.
Let us extend $\hat{v}_2^1(w), w\in \hat{V}_{-\pi}^{\pi+\Phi}$, to $\hat{V}_{-\pi}^{\pi+\Phi}\cup h_2 \hat{V}_{-\pi}^{\pi+\Phi}=V_{-3\Phi}^{\pi-\Phi}$ $\Big({\rm see}~(\ref{h_1h_2})\Big)$. Similarly to $\hat{v}_1^1, \hat{v}_2^1$ is a meromorphic function in $V_{-3\Phi}^{\pi-\Phi}$. Now we extend (\ref{3.25''}) to $V_{-3\Phi}^{\pi-\Phi}$ and obtain
$
\hat{v}_1^1(w)+\hat{v}_2^1(w)=\hat{G}(w), w\in V_{-3\Phi}^{\pi-\Phi}.
$
Hence
$
\hat{v}_{l}^{1}\in \mathcal{H}(\hat{V}_{l}^{+})\cap \mathcal{M}\Big(V_{-3\Phi}^{\pi-\Phi}\Big).
$
\\
Continuing the process of extension of $\hat{v}_{1}^{1}$ and $\hat{v}_{2}^{1}$ by symmetries (\ref{h_1h_2}), (\ref{h_1}), we can extend equation (\ref{3.25''}) to $\mathbb{C}$ and obtain (\ref{3.25})-(\ref{icheck{v}_1^1}). ~~$\blacksquare$
\subsection{\label{ade}Proof of Lemma\;\ref{Vin}}
Let us apply the automorphism $h_2$ to equation (\ref{3.25}). Since by (\ref{check{v}_1}), (\ref{h_2}),
$$
\hat{G}(h_2 w)=\displaystyle\frac{i\omega\sinh(w+3i\Phi)}{i\omega\sinh(w+2i\Phi)+k},
$$
(\ref{al}) gives
\begin{equation}\label{3.25'}
\hat{v}_1^1(h_2 w)+\hat{v}_2^1(h_2 w)=\hat{G}(h_2 w)=\displaystyle\frac{i\omega\sinh(w+3i\Phi)}{i\omega\sinh(w+2i\Phi)+k},\quad w\in\mathbb{C}.
\end{equation}
Hence, subtracting equation (\ref{3.25'}) from equation (\ref{al}), we obtain
$
\hat{v}_1^1(w)-\hat{v}_1^1(h_2 w)=\hat{G}_2(w),
$
where $\hat{G}_2$ is given by (\ref{d21}).\\
Using (\ref{h_1}), we can represent $\hat{v}_1^1\Big(h_2 w\Big)$ as a function with shifted argument. Applying (\ref{h_1h_2}) we have
$$
\hat{v}_1^1(h_2 w)=\hat{v}_1^1(h_1 h_2 w)=\hat{v}_1^1(h w),
$$
where $h(w)$ is a shift since
$
h w=h_1 h_2 w=(-h_2 w)+\pi i=(w-\pi i+2i\Phi)+\pi i=w+2i\Phi.
$
Hence, by (\ref{d21}),
$
\hat{v}_1^1(w)-\hat{v}_1^1(w+2i\Phi)=\hat{G}_2(w).~~~~~\blacksquare
$
\subsection{\label{ic}Proof of the estimate (\ref{eee})}
For $w\in\Gamma_0$, $\omega=\omega_1+i\omega_2, \omega_2>0$ consider
$
e^{-\omega\rho\sinh(w-i\tau)},\quad w\in\Gamma_0,\quad 0<\tau_0\leq\tau<\pi-\tau_0.
$
We have
$e^{-\omega\rho\sinh w\cos\tau}=e^{-i\omega\sinh w (-i\rho\cos\tau)}=e^{z_1(w)\cdot(-i\rho\cos\tau)}$,
where $z_1(w)$ is given by (\ref{z_{1,2}(om)}). We have for $w\in\Gamma_0$ that $z_1(w)\in\mathbb{R}$ by (\ref{Gamma_0'}).
Hence $\big|e^{-\omega\rho\sinh w\cos\tau}\big|=1$
and, since
$
-\omega\rho\sinh(w-i\tau)=-\omega\rho\Big[\sinh w\cos\tau-i\cosh w\sin\tau\Big],
$
we obtain
$
\big|e^{-\omega\rho\sinh(w-i\tau)}\big|=\big|e^{i\omega\rho\cosh w\sin\tau}\big|
= \big|e^{-\omega_2\rho\sin\tau\cosh w_1\cos w_2}\big| \cdot\big|e^{-\omega_1\rho\sin\tau\sinh w_1\sin w_2}\big|.
$\\
Since for $w\in\Gamma_0$, $w_2=\arctan \displaystyle\frac{\omega_1}{\omega_2}\tanh w_1$, (see (\ref{Gamma_0'})), $\cos w_2\geq C(\omega)>0$, we have for $\tau\in[\tau_0,\pi-\tau_0]$
$$
\big|e^{-\omega_2\rho\sin\tau\cosh w_1\cos w_2}\big|\leq e^{-C(\omega,\tau_0)\rho\cosh w_1},\quad C(\omega,\tau_0)>0,\quad w\in\Gamma_0, \quad\omega_2>0.
$$
Moreover, for $w\in\Gamma_0$
$
\big|e^{-\omega_1\rho\sin\tau\sinh w_1\sin w_2}\big|=\Big|e^{-\omega_1^2\rho\sin\tau\;\frac{\sinh^2 w_1}{\sqrt{\omega_2^2\cosh^2 w_1+\omega_1^2\sinh^2 w_1}}}\Big|\leq 1.
$
Hence, (\ref{eee}) follows.~~~~~~$\blacksquare$
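As a side remark, not part of the proof, the two facts used above -- that $z_1(w)=-i\omega\sinh w$ is real on $\Gamma_0$ and that the resulting bound of the form (\ref{eee}) holds -- can be checked numerically. The sketch below assumes the parametrization $w_2=\arctan\big(\frac{\omega_1}{\omega_2}\tanh w_1\big)$ of $\Gamma_0$ from (\ref{Gamma_0'}), and the values $\omega=1+2i$, $\rho=0.7$, $\tau=0.9$ are illustrative choices only.

```python
import cmath
import math

# Illustrative sample values (assumptions of this check): omega in C^+, rho > 0, tau in (0, pi).
omega = 1.0 + 2.0j
rho, tau = 0.7, 0.9

# Gamma_0 parametrized by w = w1 + i*w2(w1) with w2 = arctan((omega_1/omega_2) * tanh(w1)).
w1s = [-6.0 + 0.06 * k for k in range(201)]
w2s = [math.atan((omega.real / omega.imag) * math.tanh(w1)) for w1 in w1s]

# A valid constant C(omega, tau) > 0: |w2| stays below pi/2, so min cos(w2) > 0.
c = omega.imag * math.sin(tau) * min(math.cos(w2) for w2 in w2s)
assert c > 0.0

for w1, w2 in zip(w1s, w2s):
    w = complex(w1, w2)
    # z_1(w) = -i * omega * sinh(w) must be real on Gamma_0.
    z1 = -1j * omega * cmath.sinh(w)
    assert abs(z1.imag) <= 1e-9 * (1.0 + abs(z1))
    # The estimate: |exp(-omega * rho * sinh(w - i*tau))| <= exp(-C * rho * cosh(w1)).
    lhs = abs(cmath.exp(-omega * rho * cmath.sinh(w - 1j * tau)))
    assert lhs <= math.exp(-c * rho * math.cosh(w1)) * (1.0 + 1e-12)
```

The decisive point is that the factor $\big|e^{-\omega_1\rho\sin\tau\sinh w_1\sin w_2}\big|$ never exceeds $1$ on $\Gamma_0$, since $\omega_1\sinh w_1\sin w_2\geq0$ there.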
\subsection{\label{AC} Analysis of the solution near the ray $\theta=\displaystyle\frac{3\pi}{2}$}
We prove that $u_1\Big(\rho, \displaystyle\frac{3\pi}{2}+\delta\Big)$ is continuous in $\delta$ for small $\delta$. By (\ref{des}) it suffices to prove that
\begin{equation}\label{I}
I(\rho,\delta):=\displaystyle\frac{1}{4\pi\sin\Phi}\displaystyle\int\limits_{\Gamma_{-\frac{\pi}{2}}} e^{-\omega\rho\sinh w} \hat{v}_1\Big(w+\displaystyle\frac{3\pi i}{2}+i\delta\Big) dw
\end{equation}
satisfies
\begin{equation}\label{SI}
I(\rho,0-)=I(\rho,0+)+u_{p}\Big(\rho,\displaystyle\frac{3\pi}{2}\Big)
\end{equation}
since $\hat{v}_1$ does not have poles on $\Gamma_{-\pi}$ by (\ref{reshat{v}_1}).\\
Making the change of the variable $w'=w+\displaystyle\frac{3\pi i}{2}$ in (\ref{I}), using (\ref{reshat{v}_1}) and the Sokhotski--Plemelj Theorem, we obtain
\begin{equation*}
I(\rho,0+)=\displaystyle\lim_{\delta\to 0}\;\displaystyle\frac{1}{4\pi\sin\Phi}\displaystyle\int\limits_{\underleftarrow{\Gamma_{\pi}}} e^{-\omega\rho\sinh (w-\frac{3\pi i}{2})}\;\displaystyle\frac{\hat{v}_1(w+i\delta)\big(w-(-p_1+\pi i)+i\delta\big)}{w-(-p_1+\pi i)+i\delta} dw=-\displaystyle\frac{1}{2} e^{i\omega\rho\cosh p_1}+V.P.,
\end{equation*}
where $V.P.$ here and in the following denotes the principal value of the integral (\ref{I}).\\
Similarly,
$$
I(\rho, 0-)=\displaystyle\frac{1}{2} e^{i\omega\rho\cosh p_1}+V.P.
$$
Hence (\ref{SI}) follows, since $u_{p}\Big(\rho,\displaystyle\frac{3\pi}{2}\Big)=e^{i\omega\rho \cosh p_1}$
by (\ref{u_p}).~~~~~~$\blacksquare$
\subsection{\label{AC1} Proof of asymptotics (\ref{ashat{a}_1})}
From (\ref{hat{a}_1(w)}), (\ref{as hat{a}_1}) it follows that
\begin{equation}\label{ashat{a}1}
\hat{a}_1(w)=-\displaystyle\frac{\check{G}_2(1)}{2\pi i}\ln\displaystyle\frac{1}{\coth^2\Big(\displaystyle\frac{\pi}{2\Phi}\Big(w-\displaystyle\frac{\pi i}{2}\Big)\Big)-1}+C+O\big(e^{\mp\frac{\pi}{\Phi}w}\big)+C_1,\quad \Re w\to\pm\infty
\end{equation}
since
\begin{equation*}
O(t(w)-1)=O\Big(e^{\mp\frac{\pi}{\Phi}w}\Big),\quad \Re w\to\pm\infty.
\end{equation*}
Let us prove (\ref{ashat{a}_1}).
We have for $m=\displaystyle\frac{\pi}{2\Phi}\Big(w-\displaystyle\frac{\pi i}{2}\Big)$
\begin{equation*}
\begin{array}{lll}
\ln\displaystyle\frac{1}{\coth^2\displaystyle\frac{\pi}{2\Phi}\Big(w-\displaystyle\frac{\pi i}{2}\Big)-1}
=\pm2m-\ln 4+o\big(e^{\mp m}\big),\quad \Re m\to\pm\infty;
\end{array}
\end{equation*}
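The expansion above follows from the identity $\coth^{2}m-1=\displaystyle\frac{1}{\sinh^{2}m}$ together with $\sinh^{2}m=\displaystyle\frac{e^{\pm2m}}{4}\big(1+O(e^{\mp2m})\big)$, $\Re m\to\pm\infty$, so that
\begin{equation*}
\ln\displaystyle\frac{1}{\coth^{2}m-1}=\ln\sinh^{2}m=\pm2m-\ln4+O\big(e^{\mp2m}\big),\quad \Re m\to\pm\infty.
\end{equation*}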
\begin{equation*}
\begin{array}{lll}
-\displaystyle\frac{\check{G}_2(1)}{2\pi i}\ln\Bigg(\displaystyle\frac{1}{\coth^2\displaystyle\frac{\pi}{2\Phi}\Big(w-\displaystyle\frac{\pi i}{2}\Big)-1}\Bigg)&=&-\displaystyle\frac{\check{G}_2(1)}{2\pi i}\Bigg(\pm\displaystyle\frac{\pi}{\Phi}\Big(w-\displaystyle\frac{\pi i}{2}\Big)-\ln 4\Bigg)+o\big(e^{\mp \frac{\pi}{2\Phi}w}\big)\\\\
&=& \pm \displaystyle\frac{\sin\Phi}{\Phi}\Big(w-\displaystyle\frac{\pi i}{2}\Big)-\displaystyle\frac{\sin\Phi}{\pi} \ln 4+o\big(e^{\mp \frac{\pi}{2\Phi}w}\big).
\end{array}
\end{equation*}
Hence, using (\ref{hat{a}_1(w)}), (\ref{defC_1}), (\ref{ashat{a}1}) and (\ref{asG2}), we obtain (\ref{ashat{a}_1}).\\
Let us prove (\ref{4.27'}). From (\ref{hat{a}_1(w)}) and (\ref{t(w)}) we have
\begin{equation*}
\begin{array}{lll}
\displaystyle\frac{d}{dw}\hat{a}_1(w)=
-\displaystyle\frac{\pi}{\Phi}\;\displaystyle\frac{\cosh\Big(\displaystyle\frac{\pi}{2\Phi}\big(w-\displaystyle\frac{\pi i}{2}\big)\Big)}{\sinh^3\Big(\displaystyle\frac{\pi}{2\Phi}\big(w-\displaystyle\frac{\pi i}{2}\big)\Big)}\;\displaystyle\frac{d}{dt}\check{a}_1(t(w)).
\end{array}
\end{equation*}
Using (\ref{4.22'}) we obtain (\ref{4.27'}).~~~~~~$\blacksquare$
\subsection{\label{lem10} Proof of Lemma\;\ref{l8}}
{\bf Proof.} Obviously, the principal term admits the asymptotics
\begin{align*}
A(\rho)\sim \displaystyle\int\limits_{0}^{\infty} e^{-\omega_2\rho e^{w}}\;e^{w} e^{-\frac{\pi}{2\Phi}w}\; dw,\quad \rho\to0.
\end{align*}
Making the change of the variable $\rho e^{w}=s$, we obtain
\begin{align*}
A(\rho)\sim \rho^{-1+\frac{\pi}{2\Phi}} \displaystyle\int\limits_{\rho}^{\infty} e^{-\omega_2 s}\;s^{-\frac{\pi}{2\Phi}}\;\varphi(s)\, ds,\quad \varphi(s)\to0,\quad s\to\infty.
\end{align*}
Hence (\ref{as A}) follows.~~~~~~$\blacksquare$
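The scaling in (\ref{as A}) can also be checked numerically. The sketch below is a sanity check, not part of the proof: it takes the sample value $\omega=i$ (so that $e^{i\omega\rho\cosh w}=e^{-\rho\cosh w}$), replaces the factor $o(e^{-\frac{\pi}{2\Phi}w})$ by $e^{-\frac{\pi}{2\Phi}w}$ itself, sets $\Phi=\frac{3\pi}{4}$ (so $\frac{\pi}{2\Phi}=\frac{2}{3}$), and measures the growth exponent of $A(\rho)$ as $\rho\to0$.

```python
import math

def A(rho, alpha, W=30.0, n=30000):
    # Composite Simpson approximation of
    #   int_0^W exp(-rho*cosh(w)) * cosh(w) * exp(-alpha*w) dw   (omega = i).
    def f(w):
        x = rho * math.cosh(w)
        return 0.0 if x > 700.0 else math.exp(-x) * math.cosh(w) * math.exp(-alpha * w)
    h = W / n
    s = f(0.0) + f(W)
    for k in range(1, n):
        s += (4.0 if k % 2 else 2.0) * f(k * h)
    return s * h / 3.0

alpha = 2.0 / 3.0                          # alpha = pi/(2*Phi) for Phi = 3*pi/4
p = math.log(A(1e-5, alpha) / A(1e-4, alpha)) / math.log(10.0)
# A(rho) ~ C * rho^{-1 + pi/(2*Phi)}: the measured exponent is close to 1 - alpha = 1/3.
assert abs(p - (1.0 - alpha)) < 0.02
```

With these sample choices the measured exponent agrees with $-1+\frac{\pi}{2\Phi}$ to within a few per cent.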
\section{Conclusion}
As is known, an angle is one of the few regions where the boundary value problems for the Helmholtz equation admit an explicit solution. As far as we know, this has always been done for decreasing boundary data, where the operator methods are normally used with the exception of a very specific boundary value problem associated with the plane incident wave (Sommerfeld's diffraction problem \cite{Somm}). In the presented work, we solve the Dirichlet boundary problem not related to the incidence of a plane wave and we obtain an explicit solution in the form of the Sommerfeld integral.
The proposed method is suitable for the Neumann (NN) and Dirichlet-Neumann (DN) boundary conditions, and for angles less than $\pi$.
We hope that the method is suitable for solving such problems with a real wave number in the Helmholtz operator and also for nonstationary problems.
\vspace{3cm}
\section{Introduction}
Theory and observations support a scenario where galaxy growth is tightly linked to the availability of cold gas. Several key galaxy scaling relations can be explained by an `equilibrium' (or `gas regulator') model, in which galaxy growth self-regulates through accretion of gas from the cosmic web, star formation, and the ejection of gas triggered by star formation and feedback from active galactic nuclei (AGN, see e.\,g. \citealp{Lilly2013, Dave2013}). Galaxy-integrated scaling relations successfully reproduced by this model range from the mass-metallicity relation \citep{Zahid2014,Brown2018}, the baryonic mass fraction of halos \citep{Bouche2010}, and the redshift evolution of the gas contents of galaxies \citep{Saintonge2013}.
The next logical step is to explore how this gas-centric galaxy evolution model performs in explaining the resolved properties of galaxies. This is a particularly timely question as integral field spectroscopic (IFS) surveys continue to provide detailed maps of the stellar and chemical composition of large, homogeneous, representative galaxy samples. The Calar Alto Legacy Integral Field Area survey (CALIFA; \citealp{Sanchez2012}), the Sydney-AAO Multi-object Integral field galaxy survey (SAMI; \citealp{Croom2012}), and the Mapping Nearby Galaxies at Apache Point Observatory survey (MaNGA; \citealp{Bundy2015}), for instance, focus on samples of hundreds to thousands of galaxies in the nearby Universe. Similar surveys at $z>1$ are also now possible with IFS instruments operating in the near-infrared such as KMOS (e.g. the KMOS3D and KROSS surveys, \citealp{Wisnioski2015} and \citealp{Stott2016},
respectively).
Observations of colour and star formation rate (SFR) across the discs of nearby galaxies suggest a scenario in which galaxies form and quench from the inside out; overall, the outskirts of disc galaxies tend to be bluer \citep{deJong1996,Wang2011,Perez2013} and remain star-forming for longer \citep{Belfiore2017b,Medling2018}.
Alongside star formation (SF) profiles, a great deal of insight can be gained by focusing on spatial variations of the chemical composition of the gas. In general, gas at the outskirts of a galaxy tends to be more metal-poor than at its centre \citep[e.g.][]{Searle1971,Shields1974,Sanchez2014}.
Chemo-dynamical models of galaxies suggest different underlying physical processes to explain the formation of metallicity gradients. Early on, \citet{Matteucci1989} found, via Galactic models, that the inflow of metal-poor gas is vital to the formation of metallicity gradients. The chemical evolution models by \citet{Boissier1999} for the Milky Way and by \citet{Boissier2000} for disc galaxies additionally emphasise the role of radial variation of star formation rate and efficiency, as well as inside-out growth, for the formation of metallicity gradients. Furthermore, radial gas flows are found to be vital in reproducing the metallicity gradients of the Milky Way \citep{Schoenrich2009}.
Within the `equilibrium' framework, such metallicity gradients would be explained by the accretion of metal-poor gas onto the outer regions of these galaxies. \citet{Pezzulli2016} showed that metallicity gradients already form in closed-box models due to the fact that denser (i.e. more central) regions of galaxies evolve faster than less dense (outer) regions. However, to arrive at realistic metallicity gradients, their analytical model requires radial gas flows and, to a lesser extent, also inside-out growth.
Metallicity gradients are common and there are many explanations of their presence. In particular, several physical processes (inside-out growth, radial flows, gradients in the star formation efficiency) have been predicted to give rise to metallicity gradients and it is difficult to assess their relative importance. It is also unclear whether or not the direction and strength of metallicity gradients depend on global galaxy properties. For example, \citet{Sanchez-Menguiano2016} and \citet{Ho2015} find no relation between metallicity gradients and the stellar mass of the galaxies. They argue that metallicity at a certain radius is only determined by local conditions and the evolutionary state of the galaxy at that radius, rather than by global properties. There are, however, analyses finding correlations (both positive and negative) between stellar mass and metallicity gradients. \citet{Poetrodjojo2018}, \citet{Belfiore2017a} and \citet{Perez-Montero2016} find hints of flatter metallicity gradients in lower mass galaxies, while \citet{Moran2012} (hereafter M12) find that galaxies at the lower mass end of their sample have steeper gradients. We note, however, that the low-mass end of \citetalias{Moran2012} is at $10^{10}$\,M$_\odot$, where \citet{Belfiore2017a} find a turnover in the strength of the metallicity gradient, such that both more massive and less massive galaxies show flatter metallicity gradients than galaxies with masses around $10^{10}$\,M$_\odot$. In addition, the sample from \citet{Poetrodjojo2018} only includes galaxies with stellar masses below $10^{10.5}$\,M$_\odot$.
As it is based on an extensive longslit spectroscopy campaign rather than IFU maps, the \citetalias{Moran2012} study may lack the full mapping of metallicity across the galaxy discs, but it does benefit from having access to direct measurements of the cold gas contents of the galaxies through the $GALEX$ Arecibo SDSS Survey (GASS) and CO Legacy Database for GASS (COLD\,GASS) surveys \citep{Catinella2010, Saintonge2011a}. Their finding is that metallicity gradients tend to be flat within the optical radius of the galaxies, but that the magnitude of any drop in metallicity in the outskirts is well-correlated with the total atomic hydrogen content of the galaxy. This provides support for a scenario where low-metallicity regions are connected to the infall of metal-poor gas, as also found by \citet{Carton2015}. Indeed, the chemical evolution models of
\citet{Ho2015}, \citet{Kudritzki2015} and \citet{Ascasibar2015} (amongst others) are able to predict metallicity gradients from radial variations in the gas-to-stellar-mass ratio. Using dust extinction maps derived from the Balmer decrement to infer local gas masses, \citet{Barrera-Ballesteros2018} find a relation between the radial profiles of gas to stellar mass ratio and metallicity that is in good agreement with the predictions from the local gas-regulator model (similar to the global model, but on local scales).
In this paper, we revisit the results of \citetalias{Moran2012} but for an increased sample of galaxies, which crucially extends the stellar mass range by an order of magnitude. This is achieved by combining new optical longslit spectra for galaxies in the stellar mass range of $9 < \log \rm{M_{\star} [M_{\odot}]} < 10$ with global cold gas measurements from the xGASS\ and xCOLD\,GASS\ surveys \citep{Catinella2018,Saintonge2017}. The sample studied here, while lacking spatially resolved gas observations, is larger than those studied by \citet{Ho2015}, \citet{Kudritzki2015} and \citet{Carton2015}, and it benefits from direct, homogeneous CO and HI observations.
This paper is organised as follows: In Sect.~\ref{sec:survey}, we present the galaxy sample and auxiliary data from the Sloan Digital Sky Survey \citep{York2000} and the xGASS/xCOLD\,GASS\ surveys. Details on observation, data reduction and data analysis are provided in Sect.~\ref{sec:analysis}. Our results are presented in Sects.~\ref{sec:results1} and \ref{sec:results2}. We discuss these results and offer our conclusions in Sect.~\ref{sec:diss}.
Throughout the paper we assume a standard $\Lambda$CDM cosmology ($H_0= 70$km~s$^{-1}$~Mpc$^{-1}$, $\Omega_M = 0.30$ and $\Omega_{\Lambda} = 0.70$) and a Chabrier initial mass function \citep{Chabrier2003}.
\section{Sample selection and global galaxy properties}
\label{sec:survey}
The extended GASS (xGASS, \citealp{Catinella2018}) and the corresponding extended COLD\,GASS (xCOLD\,GASS, \citealp{Saintonge2017}) surveys are projects designed to provide a complete view of the cold atomic and molecular gas contents across the local galaxy population with stellar masses in excess of $10^9$${\rm M_{\odot}}$. The survey galaxies were randomly selected from the parent sample of objects in the SDSS DR7 spectroscopic catalogue \citep{Abazajian2009}, with
$0.01<z<0.05$ and $\log \rm{M_{\star} [M_{\odot}]} > 9.0$, and located within the footprint of the Arecibo HI ALFALFA survey \citep{Giovanelli2005a,Haynes2018}. No additional selection criteria were applied, making the sample representative of the local galaxy population. As shown in Fig. \ref{fig:sfr_vs_mstar}, it samples the entire SFR--${\rm M_{\star}}$\ plane. The xGASS\ survey provides total HI masses for 1200 galaxies and xCOLD\,GASS\ derived total molecular gas masses from CO(1-0) observations
of a subset of 532 of these. A complete description of the sample selection, observing procedures and data products of xGASS\ and xCOLD\,GASS\ can be found in \citet{Catinella2018} and \citet{Saintonge2017}, respectively.
\begin{figure}
\center
\includegraphics[width=3.15in]{Images/fig1.pdf}
\caption{Distribution of the sample in the stellar mass-SFR plane. Blue open circles represent the xGASS\ sample, light blue crosses the xCOLD\,GASS\ sample, and yellow open squares mark galaxies that were included in the \citetalias{Moran2012} analysis but not in this work. Galaxies marked with a red symbol are included in the sample used in this paper, where diamonds represent galaxies with new observations and filled circles galaxies with observations from \citetalias{Moran2012}. }
\label{fig:sfr_vs_mstar}
\end{figure}
In addition to the \textsc{H\,i}\ and CO\ measurements, optical longslit spectra were obtained for a subset of the xGASS/xCOLD\,GASS\ galaxies. For galaxies with stellar masses $\log \rm{M_{\star} [M_{\odot}]} > 10$, these data were obtained with the 6.5-m MMT telescope on Mount Hopkins, Arizona (182 galaxies) and the 3.5-m telescope at Apache Point Observatory (APO), New Mexico (51 galaxies, \citetalias{Moran2012}). For 27 galaxies in the stellar mass range of $9.0 < \log \rm{M_{\star} [M_{\odot}]} < 10.0$, optical longslit spectra have been obtained with the EFOSC2 spectrograph at the ESO New Technology Telescope (NTT) in La Silla, Chile; these are new observations, presented for the first time. These 27 galaxies were randomly selected from the xGASS\ parent sample. The only selection criterion was observability with the NTT.
In this work we combine the new NTT observations with data from \citetalias{Moran2012}. As can be seen in Fig.~\ref{fig:sfr_vs_mstar}, all low-stellar-mass galaxies with optical spectra from the new NTT observations (red diamonds) are star-forming galaxies, meaning that they are located on or nearby the star formation main sequence (SFMS). To obtain a uniform sample, only star-forming galaxies from the \citetalias{Moran2012} sample are included in this work. This is achieved by selecting only those galaxies that are within $\pm1.5 \sigma$ of the SFMS, as defined by \citet{Catinella2018} and described in more detail by \citet{Janowiecki2020}.
This selection criterion is applied at all stellar masses and is a compromise between including as many low-mass galaxies as possible (20) and including only galaxies near a well-defined SFMS. Combining the high-stellar-mass star-forming sample (86 galaxies) with the 20 new low-mass galaxies results in a sample of 106 galaxies. We note that for three high-mass galaxies, optical longslit data are available from both the MMT and APO. For these galaxies, we chose to use only the MMT spectra, as more reliable metallicity measurements are available at similar or larger galactocentric radii from the MMT data than from the APO data.
\begin{figure*}
\center
\includegraphics[width=6.3in]{Images/fig2.pdf}
\caption{Distribution of stellar mass (\textbf{left panel}), stellar mass surface density (\textbf{middle panel}) and $NUV$-$r$\ colour (\textbf{right panel}) for the galaxy sample of xGASS\ (dark blue), xCOLD\,GASS\ (light blue), \citetalias{Moran2012} (yellow) and this work (red). See Sect.~\ref{sec:grad_fit} for a description of the derivation of these quantities. }
\label{fig:sample2}
\end{figure*}
Figure~\ref{fig:sample2} shows the distribution of stellar mass (left panel), stellar mass surface density (middle panel), and $NUV$-$r$\ colour (right panel). As can be seen here, this work expands the work by \citetalias{Moran2012} to lower stellar masses and includes more galaxies with low $\mu_\star$. As expected, due to the selection of SFMS galaxies, we include fewer high $\mu_\star$ galaxies than in the \citetalias{Moran2012} analysis and no quiescent galaxies.
The overlap between the samples with \textsc{H\,i}, CO, and optical spectroscopic measurements is not perfect because of the timing of the various observing campaigns. Of the 106 galaxies in our sample of main-sequence galaxies with longslit optical spectra, 99 have \textsc{H\,i}\ measurements from Arecibo and 76 have CO\ observations from the IRAM-30m telescope. When correlating measurements from optical longslit spectra with \textsc{H\,i}\ or CO\ observations, those galaxies lacking information are excluded. In Fig.~\ref{fig:fhi_vs_mstar}, the \textsc{H\,i}\ and H$_{2}$\ gas mass-to-stellar mass ratios are shown as a function of stellar mass. The galaxies selected for this study have the typical gas fractions of main-sequence galaxies.
\begin{figure*}
\center
\includegraphics[width=6.3in]{Images/fig3.pdf}
\caption{The \textsc{H\,i}\ (\textbf{left panel}) and H$_{2}$\ (\textbf{right panel}) gas-to-stellar mass ratio as a function of stellar mass. Symbols are as in Fig.~\ref{fig:sfr_vs_mstar}. If a galaxy has not been detected either in \textsc{H\,i}\ or CO, its upper limit is shown as an arrow in the respective panel.}
\label{fig:fhi_vs_mstar}
\end{figure*}
\section{NTT observations, data reduction and analysis}
\label{sec:analysis}
\subsection{Observations}
The optical longslit spectra of the 27 low-mass galaxies were obtained with the EFOSC2 spectrograph at the ESO New Technology Telescope (NTT) in La Silla, Chile in September~2012 and April~2013. The slit size was 1.5\,arcsec by 4\,arcmin and aligned along the major axis of each galaxy. In order to measure all the strong emission lines required for metallicity measurements, ranging in wavelength from [OII]372.7\,nm to H$\alpha$ at 656.3\,nm, two observations of every galaxy were needed, one for the bluer half of the spectrum (368.0\,nm to 550.0\,nm) and one for the redder half (535.0\,nm to 720.0\,nm). The overlap was used to check for consistency in flux calibration across the entire wavelength range. Individual science exposures were observed for 900\,s. The total exposure time varied according to the surface brightness of the galaxy, but amounted on average to 3600\,s per spectrum half. After including a binning factor of 2, the image size is 1024\,pixels by 1024\,pixels. The spectral resolution is 0.123\,nm and 0.113\,nm in the red and blue halves of the spectrum, respectively, which is approximately equivalent to a velocity resolution of 59 and 74\,km\,s$^{-1}$.
The raw spectra were first reduced with standard \textsc{iraf} procedures. Bias images, dome, and sky flats were taken at the beginning of each night. Bias images were subtracted from the science images, as was the dark current, which was estimated from the overscan regions of the science exposures. To obtain the overall flat-field correction, both dome and sky flats were observed. First, the spatial flattening was calculated from dome flats and the spectral flattening from sky flats. These two were then multiplied to obtain a master flat field, which was applied to all the science images from a given night of observing. Since all exposures for one of the two spectral setups of each galaxy were obtained in one night, it was possible to stack individual frames at this point. During the stacking process, cosmic rays were removed by an outlier rejection algorithm, and any remaining ones were then removed manually.
Wavelength and flux calibration, as well as straightening the image along the slit, were then performed on the stacked spectra. For the wavelength calibration, observations of a HeAr lamp were used in addition to the sky lines. The flux calibration was based on observations of multiple standard stars per night. The standard stars for the September 2012 run were EG\,21, Feige\,110 and LTT\,7987, whereas during the April 2013 run, the standard stars EG\,274, Feige\,56, LTT\,3218 and LTT\,6248 were observed.
\subsection{Extraction of spatially resolved optical spectra}
\label{sec:red}
\begin{figure*}
\center
\includegraphics[width=160mm]{Images/fig4.pdf}
\caption{Example of a spatially resolved spectrum. On the left, an SDSS postage-stamp image of the galaxy (GASS 101037) is shown. North is up and east is left. The yellow crosses denote the light-weighted locations of the individual spectra and the dashed line indicates the location of the slit. On the right, the individual spectra are plotted in black. Blue shows the fits to the emission lines and green the continuum fit. The spectrum plotted in the topmost panel is located farthest to the north-east of the galaxy. }
\label{fig:example}
\end{figure*}
To extract spatially resolved, one-dimensional spectra from the reduced two-dimensional spectra, we used the same pipeline as \citetalias{Moran2012}. First, the two spectral halves were merged and possible flux mismatches were removed. Next, a rotation curve was fitted to absorption line measurements, making it possible to conduct all following steps in the rest frame. After that, the spectrum was spatially binned, starting in the centre and moving outwards. The size of the spatial bins was chosen such that a minimum continuum signal-to-noise ratio (S/N) of 5 was reached. This binning procedure resulted in one-dimensional spectra, each covering a certain radial range at a certain radial position of the galaxy (see Fig.~\ref{fig:example}).
The stellar continua of all the one-dimensional spectra extracted in the previous step were then fitted with a superposition of simple stellar population models \citep{Bruzual2003}. The best-fitting continuum model was subtracted from each spectrum and the remaining emission lines were fitted with Gaussian functions \citep{Tremonti2004}. This process results in measurements of emission lines and stellar continuum at different galactocentric radii. An example of a typical spatially resolved spectrum with the fits to stellar continuum and emission lines is given in Fig.~\ref{fig:example}.
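As an illustrative stand-in for the Gaussian line-fitting step (not the actual pipeline of \citetalias{Moran2012}), a continuum-subtracted line can be fitted with a single Gaussian whose area gives the line flux; the function names and initial guesses here are our own assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(wave, amp, centre, sigma):
    """Simple Gaussian line profile."""
    return amp * np.exp(-0.5 * ((wave - centre) / sigma) ** 2)

def emission_line_flux(wave, flux_contsub, centre_guess):
    """Fit a single Gaussian to a continuum-subtracted emission line
    and return the integrated line flux (area under the Gaussian)."""
    p0 = (flux_contsub.max(), centre_guess, 0.2)  # initial guess (assumed)
    popt, _ = curve_fit(gaussian, wave, flux_contsub, p0=p0)
    amp, centre, sigma = popt
    return amp * abs(sigma) * np.sqrt(2.0 * np.pi)
```

In practice the pipeline fits all lines with shared kinematics, but the per-line flux extracted this way is the quantity entering the metallicity calibration below.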
\subsection{Measuring radial metallicity profiles and gradients}
\label{sec:grad_fit}
With emission lines measured at different galactocentric radii, we are able to measure the gas-phase metallicity for each radial bin. After correcting the emission line fluxes for extinction \citepalias{Moran2012}, the gas-phase oxygen abundance was measured from ratios of the [OIII]$\lambda = 500.7$\,nm, H$\beta$, [NII]$\lambda = 658.3$\,nm and the H$\alpha$ emission line fluxes following the prescription by \citet{Pettini2004}:
\begin{equation}
O3N2 = \log \left( \frac{[O III]\lambda 500.7{\rm \,nm} / H\beta }
{[N II] \lambda 658.3{\rm \,nm} / H\alpha}\right)
,\end{equation}
\begin{equation}
12 + \log (O/H) = 8.73 - 0.32 \times O3N2
.\end{equation}
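The two equations above translate directly into code; a minimal sketch (the function name and placeholder flux arguments are our own):

```python
import numpy as np

def o3n2_metallicity(f_oiii, f_hbeta, f_nii, f_halpha):
    """Gas-phase oxygen abundance 12 + log(O/H) from the O3N2 ratio,
    following the Pettini & Pagel (2004) calibration given above."""
    o3n2 = np.log10((f_oiii / f_hbeta) / (f_nii / f_halpha))
    return 8.73 - 0.32 * o3n2
```

Passing arrays of line fluxes per radial bin yields the radial metallicity profile in one call.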
There are many metallicity calibrators with different zero points, which were previously proposed and discussed in the literature (see e.\,g. \citealp{Kewley2008}). Given the available emission lines in our observations and that metallicity calibrators based on the $O3N2$ ratio are considered robust and are widely used in the literature, we focus on these types of metallicity estimators. In addition to the \citet{Pettini2004} prescription, we also use the \citet{Marino2013} $O3N2$ metallicity calibrator to make direct comparisons with\ other works (Sect.~\ref{sec:compare_manga}).
However, since the \citet{Marino2013} $O3N2$ calibrator tends to underestimate high metallicities \citep{Erroz-Ferrer2019}, we used the \citet{Pettini2004} calibrator for the majority of the analysis.
The same data products derived with the same pipeline are available for the \citetalias{Moran2012} galaxies. Thus, all the following analysis steps were performed for both the \citetalias{Moran2012} and the new data.
The analysis procedure described in the previous sections provided radial profiles of the gas-phase metallicity. The radial variation of these profiles was quantified by the slope of a linear fit to the metallicity as a function of radius. To account for the varying sizes of galaxies and their different distances, the galactocentric, light-weighted radius of each radial bin was normalised or converted to kpc. For normalisation, the SDSS 25\,mag\,arcsec$^{-2}$ isophotal radius ($\rm R_{25}$), the Petrosian 90\,percent radius ($\rm r_{90}$) \citep{Petrosian1976} and the effective radius ($\rm r_{eff}$) were used, all measured in the $r$ band. Using $u$- or $i$-band radii, that is, focusing on the young or old stellar population, does not affect the results. We therefore focus on radii measured in the $r$ band only. These radii have been published with SDSS
DR7 \citep{Abazajian2009} and are taken from the MPA-JHU catalogue\footnote{https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/}. The metallicity 12 + log(O/H) was then fitted as a linear function of $r / r_{norm}$:
\begin{equation}
{\rm 12 + log (O/H)} = (\Delta {\rm 12 + log (O/H)}) \times (r / r_{norm}) + a
,\end{equation}
using the \textsc{scipy}\footnote{http://www.scipy.org/} \citep{Virtanen2020} function \texttt{curve\_fit}, which utilises the least-squares-based Levenberg--Marquardt algorithm. The slope $\Delta {\rm 12 + log (O/H)}$, that is, the first derivative of this function, was then defined as the radial gradient of the metallicity.
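The fit described above amounts to a two-parameter linear least-squares problem; a minimal sketch using \texttt{curve\_fit} as in the text (the function name is our own):

```python
import numpy as np
from scipy.optimize import curve_fit

def metallicity_gradient(r_over_rnorm, met):
    """Fit 12 + log(O/H) = grad * (r / r_norm) + a with scipy's
    curve_fit (Levenberg-Marquardt for an unconstrained problem)
    and return the slope (the radial gradient) and intercept."""
    linear = lambda r, grad, a: grad * r + a
    popt, _ = curve_fit(linear, r_over_rnorm, met)
    return popt[0], popt[1]
```

Repeating the fit once per galaxy, with radii normalised by $\rm r_{eff}$, $\rm r_{90}$, $\rm R_{25}$ or left in kpc, gives the four gradient flavours analysed below.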
In order to improve the reliability of the results, some radial bins were excluded from the analysis. We rejected any bin where AGN emission was significantly contributing to the ionisation. Those were identified by using strong emission line ratios to place the measurement in the [NII]/H$\alpha$ versus [OIII]/H$\beta$ Baldwin -- Phillips -- Terlevich diagnostic plot (BPT, \citealp{Baldwin1981}). Any radial bin with measured line ratios falling above the empirical threshold of \citet{Kauffmann2003} was excluded from the analysis. Furthermore, we required a signal-to-noise (S/N) detection of 3 for the four emission lines [O\,III]$\lambda$\,500.7\,nm, [N\,II]$\lambda$\,658.3\,nm, H$\alpha$ and H$\beta$.
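The per-bin quality cut can be sketched as follows, using the \citet{Kauffmann2003} demarcation $\log([\mathrm{OIII}]/\mathrm{H}\beta) < 0.61/(\log([\mathrm{NII}]/\mathrm{H}\alpha) - 0.05) + 1.3$; the function name and argument layout are our own:

```python
import numpy as np

def keep_radial_bin(f_oiii, f_hbeta, f_nii, f_halpha, snr, snr_min=3.0):
    """Quality cut for one radial bin: S/N of at least 3 in all four
    lines and a BPT position below the Kauffmann et al. (2003)
    demarcation, i.e. consistent with star-forming ionisation."""
    if np.any(np.asarray(snr) < snr_min):
        return False
    x = np.log10(f_nii / f_halpha)    # log [NII]/Halpha
    y = np.log10(f_oiii / f_hbeta)    # log [OIII]/Hbeta
    if x >= 0.05:                     # beyond the curve's asymptote: AGN-like
        return False
    return bool(y < 0.61 / (x - 0.05) + 1.3)
```

Only bins passing this cut enter the gradient fit.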
In previous studies (e.\,g. \citealp{Sanchez2014}, \citealp{Ho2015} and \citealp{Sanchez-Menguiano2016}), metallicity gradients were often calculated after discarding measurements within a certain galactocentric radius to avoid contamination by any active nucleus. While we disregard radial bins with AGN-like emission in general, we calculated metallicity gradients twice to allow for fair comparisons with these results: once using all reliable data points and once only considering measurements coming from the region outside of 0.5 times the effective $r$ band radius $\rm r_{eff,r}$. When requiring a minimum of three radial bins for gradient measurement, we measured gradients from the entire radial profile for 88 galaxies, and gradients from the radial profile between 0.5 and 2\,$\rm r_{eff,r}$ for 74 galaxies. Of these galaxies, 75 and 66 galaxies have stellar masses higher
than $10^{10}$\,M$_\odot$, respectively.
In the following sections, we investigate correlations between metallicity gradients and the stellar mass ($\rm \log\,M_{\star}~[M_{\odot}]$), stellar mass surface density ($\rm \log\,\mu_{\star}$, as a proxy for morphology), the concentration index ($\rm c = r_{90} / r_{50}$, a proxy for the bulge-to-total mass ratio), specific star formation rate ($\rm sSFR = SFR / M_\star$), $NUV$-$r$\ colour, atomic and molecular gas mass fraction (gas mass fractions are defined as $\rm \log\,f_{\rm Gas} = \log\,M_{\rm Gas}/M_{\star}$), and the deficiency factor for atomic and molecular gas. Details on the derivation of these quantities are given in \citet{Saintonge2017} and
\citet{Catinella2018}. The deficiency factor is the difference between an estimate of the gas mass fraction from a scaling relation and the actually measured gas fraction. Here we use the best and tightest scaling relations available from the xGASS\ and xCOLD\,GASS\ analysis. For \textsc{H\,i}, this is the relation between $\rm \log\,f_{HI}$\ and $NUV$-$r$\ colour, more specifically the binned medians from Table~1 in \citet{Catinella2018}. For H$_{2}$, we used the scaling relation between $\rm \log\,f_{H2}$\ and
log\,sSFR based on the ``Binning'' values for the entire xCOLD\,GASS\ sample in Table~6 of \citet{Saintonge2017}. In both cases we interpolated between the bins to get an expected gas mass fraction. The deficiency factor is then:
\begin{equation}
\rm def = \log f_{expected} - \log f_{measured}
,\end{equation}
with $f$ the gas-mass-to-stellar-mass fraction. Therefore, a negative deficiency factor indicates that a galaxy is more gas-rich than the average galaxy with a similar $NUV$-$r$\ colour or sSFR.
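The deficiency factor, combined with the interpolation between the published bins, can be sketched as follows; the bin values used in any example are placeholders, not the actual medians from \citet{Catinella2018} or \citet{Saintonge2017}:

```python
import numpy as np

def gas_deficiency(x_value, log_f_measured, x_bins, log_f_medians):
    """Deficiency factor: expected gas mass fraction from binned medians
    of a scaling relation (interpolated between bins) minus the measured
    gas mass fraction. Negative values mean a gas-rich galaxy."""
    log_f_expected = np.interp(x_value, x_bins, log_f_medians)
    return log_f_expected - log_f_measured
```

For \textsc{H\,i}\ the abscissa would be $NUV$-$r$\ colour, for H$_{2}$\ it would be log\,sSFR.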
\section{Results: Metallicity gradients}
\label{sec:results1}
In this section, we present the results of a detailed analysis of the correlation between metallicity gradients and global galaxy properties, in particular, star formation activity and gas content.
We start by analysing and establishing which correlations between metallicity gradient and global galaxy property are of interest in our sample. In this process, we consider both gradients measured from profiles with all radial data points and gradients measured from profiles without data points inside of 0.5\,$\rm r_{eff,r}$. Then we compare our results to the literature and discuss potential differences.
\subsection{Investigating correlations between gradients and global galaxy properties}
\label{sec:correlations}
In order to test for the presence and strength of correlations between gradients and global galaxy properties, we applied multiple methods. Firstly, we calculated Spearman correlation coefficients between metallicity gradients and each global galaxy property.
Secondly, for those global galaxy properties that have correlation coefficients that significantly depart from zero, we obtained (semi-)partial Spearman correlation coefficients. These are correlation coefficients that take into account the intercorrelation between the various global galaxy properties.
Thirdly, through a backward elimination based on the results of a multiple linear regression, we searched for the global galaxy property that is most important in determining the metallicity gradients.
Finally, we trained a random forest model to predict metallicity gradients from those global galaxy properties that have correlation coefficients significantly different from zero. We then asked the model which feature was most important in predicting the metallicity gradient.
\subsubsection{Spearman correlation coefficients}
We measured correlation coefficients between metallicity gradients and global properties as Spearman R values and calculated their errors through bootstrapping: for a sample of $n$ measurements, $0.8\times n$ measurements were randomly drawn from the sample (with replacement) and their correlation coefficient was measured. This process was repeated $0.8\times n$ times. The error of the correlation coefficient was then set to the standard deviation of the sample of $0.8\times n$ correlation coefficients. The median bootstrapping error of all measured correlation coefficients amounts to $0.1$. Therefore, for a relation to be further considered and analysed, an absolute correlation coefficient ${\rm |R|} > 0.3$ was required ($3\,\sigma$ different from 0). The absolute correlation coefficient ${\rm |R|}$ can take values between 0 and 1, where numbers closer to 1 represent tighter and stronger correlations (or anti-correlations if R is negative).
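The bootstrapping scheme described above can be sketched as follows (the function name and fixed random seed are our own):

```python
import numpy as np
from scipy.stats import spearmanr

def bootstrap_spearman(x, y, frac=0.8, seed=0):
    """Spearman R and its bootstrapping error: draw 0.8*n measurements
    with replacement, measure R, repeat 0.8*n times, and take the
    standard deviation of the resampled coefficients as the error."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x), np.asarray(y)
    n_draw = int(frac * len(x))
    r_full, _ = spearmanr(x, y)
    resampled = []
    for _ in range(n_draw):
        idx = rng.integers(0, len(x), size=n_draw)
        r_i, _ = spearmanr(x[idx], y[idx])
        resampled.append(r_i)
    return r_full, float(np.std(resampled))
```

Applied to each gradient/property pair, this yields the R values and error bars shown in Fig.~\ref{fig:corr_coef}.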
\begin{figure*}
\center
\includegraphics[width=6.3in]{Images/fig5.pdf}
\caption{Correlation coefficients calculated for relations between metallicity gradients and global galaxy properties (colour code) with their error bars. The panels show, from left to right, correlation coefficients for metallicity gradients in units of dex\,$\rm r_{eff,r}^{-1}$, dex\,$\rm r_{90,r}^{-1}$, dex\,$\rm R_{25,r}^{-1}$, and dex\,$\rm kpc^{-1}$. We note that all radii have been measured in the $r$ band; similar results can be obtained for radii in the $u$ and $i$ bands. For each global property four different correlation coefficients are presented, marked by the shape of the data point. The correlation coefficients were measured with gradients based on: \textbf{(A)} the entire radial profile for all galaxies in the sample; \textbf{(B)} the entire radial profile for massive galaxies; \textbf{(C)} the radial profile outside of 0.5\,$\rm r_{eff,r}$
for all galaxies in the sample; \textbf{(D)} the radial profile outside of 0.5\,$\rm r_{eff,r}$ for massive galaxies. Black dashed lines mark correlation coefficients of $-0.3$ and 0.3. }
\label{fig:corr_coef}
\end{figure*}
Figure~\ref{fig:corr_coef} shows Spearman R correlation coefficients for metallicity gradients (measured with and without the central 0.5\,$\rm r_{eff,r}$) and global properties. Each data point represents the correlation coefficient between one gradient (e.\,g. the gas-phase metallicity gradient in units of dex\,$\rm r_{90, r}^{-1}$) and one global galaxy property (e.\,g. stellar mass). The shape of the data points indicates the dataset for which the correlation coefficient was measured. We note that diamonds and triangles (correlations with gradients measured from radial profiles outside of 0.5\,$\rm r_{eff,r}$) generally indicate less pronounced correlations than squares and circles (gradients measured from the full radial profile).
The median maximal radius at which we can reliably measure metallicities is 2.5\,$\rm r_{eff,r}$ for massive galaxies, and 1.5\,$\rm r_{eff,r}$ for low-mass galaxies. This means that when measuring metallicity gradients from the radial range between 0.5 and 2.0\,$\rm r_{eff,r}$, we mostly exclude low-mass galaxies because they do not have enough (i.\,e. three or more) radial metallicity measurements between 0.5 and 2.0\,$\rm r_{eff,r}$ to fit a gradient. In order to understand the effect of looking at massive star-forming galaxies only, we also measured correlation coefficients for massive galaxies ($\rm \log\,M_{\star}~[M_{\odot}]$\ $> 10$; cases (B) and (D) in Fig.~\ref{fig:corr_coef}). These correlation coefficients for massive galaxies are usually closer to 0 than the correlation coefficients for the entire sample. This points to a scenario in which the observed trends are amplified by low-mass galaxies.
For now, we focus on all relations for which the correlation coefficient is larger than $0.3$ or smaller than $-0.3$, that is, at least three times the median error and thus significantly different from zero. For gradients measured from the entire radial profile and when considering all galaxies for which a gradient could be measured (circles in Fig.~\ref{fig:corr_coef}), we find that the correlation coefficient is significantly different from zero for relations between the metallicity gradient in units of dex\,$\rm r_{eff, r}^{-1}$ and $\rm \log\,\mu_{\star}$, $\rm \log\,M_{\star}~[M_{\odot}]$, $\rm \log\,f_{HI}$, $NUV$-$r$\ colour, concentration index c, and log~SFR.
The correlation coefficients for the other metallicity gradients, namely in units of dex\,$\rm r_{90, r}^{-1}$, dex\,$\rm R_{25, r}^{-1}$ and dex\,$\rm kpc^{-1}$, show similar but generally weaker trends.
Overall, we have a wide radial coverage all the way out to 2\,$\rm r_{eff,r}$ for massive galaxies but the more central measurements of metallicity often cannot be used for metallicity gradient measurement as their location on the BPT diagnostic plot indicates that the emission lines are excited by AGN-like emission rather than the emission of star-forming regions. Hence, restricting a study only to consider massive galaxies already goes in the direction of analysing correlations for metallicity gradients measured only from data points outside of 0.5\,$\rm r_{eff,r}$.
This analysis points towards a scenario in which the correlations between metallicity gradients and global galaxy properties are affected by the radial location of the metallicity measurements used to fit the gradient. To further test this assumption, we only consider metallicity gradients that were measured on radial profiles excluding the data in the inner 0.5\,$\rm r_{eff,r}$. These data are shown as triangles and diamonds in Fig.~\ref{fig:corr_coef}. In this case, we find no correlation remaining with absolute correlation coefficients ${\rm |R|} > 0.3$, except for the ones between metallicity gradients in units of dex\,$\rm r_{eff}^{-1}$ and $\rm \log\,\mu_{\star}$ for all galaxies, and between metallicity gradients in units of dex\,kpc$^{-1}$ and $\rm \log\,\mu_{\star}$, $\rm \log\,M_{\star}~[M_{\odot}]$ and $NUV$-$r$\ colour.
In the following sections, we delve deeper into a statistical analysis of these correlations. We are especially interested in understanding which correlation is primary and which are secondary effects. We only focus on metallicity gradients in units of dex\,$\rm r_{eff}^{-1}$, because these are widely used in the literature, they yield the tightest correlations and the other metallicity gradients behave similarly.
\subsubsection{(semi-)Partial Spearman correlation coefficients}
To further investigate the correlations found in the last section, we also considered (semi-)partial Spearman correlation coefficients, which provide the same information as the correlation coefficients introduced above but allow us to fold in inter-correlations between the global galaxy properties. This is achieved by holding one measurement constant while looking at the correlation coefficient of two other measurements; in practice, this means computing correlation coefficients between residuals. Since $\rm \log\,\mu_{\star}$\ and $\rm \log\,f_{HI}$\ are correlated \citep{Catinella2018, Brown2015} and both appear to be correlated with the metallicity gradient, we must control for $\rm \log\,\mu_{\star}$\ to evaluate the strength of the 'remaining' correlation between $\rm \log\,f_{HI}$\ and the metallicity gradient. We performed this analysis for the ensemble of $\rm \log\,\mu_{\star}$, $\rm \log\,M_{\star}~[M_{\odot}]$, $\rm \log\,f_{HI}$, $NUV$-$r$\ colour, concentration index, log~sSFR and log~SFR and their correlation to the metallicity gradients in units of dex\,$\rm r_{eff,r}^{-1}$. These global galaxy properties are all those that showed a significant correlation in the previous section. Furthermore, we add the concentration index, c, as it was found to be correlated with the metallicity gradient by \citetalias{Moran2012}. When controlling for these properties using the \texttt{partial\_corr} implementation in the \textsc{pingouin} package\footnote{https://pingouin-stats.org}, we only find a strong correlation between $\rm \log\,\mu_{\star}$\ and metallicity gradients. This result holds for both metallicity gradients measured from the entire radial profile and metallicity gradients measured without data in the central 0.5\,$\rm r_{eff,r}$.
For all other galaxy properties, the (semi-)partial correlation coefficients are significantly smaller than 0.3 and the majority of their 95\,percent confidence intervals includes 0, that is, both a correlation and an anti-correlation would be possible.
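The partial-correlation logic, computing correlation coefficients between residuals, can be reproduced without \textsc{pingouin} by correlating rank residuals; a minimal sketch (variable names are our own):

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

def partial_spearman(x, y, covars):
    """Partial Spearman correlation between x and y controlling for the
    covariates: rank-transform all variables, regress the x- and y-ranks
    on the covariate ranks, and correlate the residuals."""
    rx, ry = rankdata(x), rankdata(y)
    C = np.column_stack([np.ones(len(rx))] +
                        [rankdata(c) for c in covars])
    resid = lambda v: v - C @ np.linalg.lstsq(C, v, rcond=None)[0]
    r, _ = pearsonr(resid(rx), resid(ry))
    return r
```

If two properties correlate with each other and with the gradient, the partial coefficient isolates which of the two carries the intrinsic trend.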
\subsubsection{Backward elimination}
A second method to find the one variable in a set of features that contributes most to predicting a result is backward elimination. We used backward elimination in the following way: we fitted a general ordinary least squares multiple linear regression, such that the metallicity gradient in units of dex\,$\rm r_{eff,r}^{-1}$ was the dependent variable and was described as a linear combination of $\rm \log\,\mu_{\star}$, $\rm \log\,M_{\star}~[M_{\odot}]$, $\rm \log\,f_{HI}$, $NUV$-$r$\ colour, concentration index, log~sSFR, and log~SFR (the same selection of global galaxy properties as in the previous section) plus a constant. We then examined the statistics of this model (as provided by the \texttt{OLS} module of the \textsc{statsmodels} package, \citealp{Seabold2010}). These statistics provide, among other measures, a p-value for the T-statistics of the fit. If this p-value is large for one of the variables, then this variable is likely not useful in the fit. In the context of our backward elimination, we used the p-value in the following, iterative way: after the first multiple linear regression, we eliminated the variable with the largest p-value and then ran the fit again without the eliminated variable. We continued to eliminate variables and rerun the fit until the p-values of all remaining variables were below 0.05, a threshold commonly judged as statistically significant.
Applying this procedure to our data returned the following results: for metallicity gradients measured on the entire radial profile, the backward elimination leaves $\rm \log\,\mu_{\star}$\ and $\rm \log\,f_{HI}$, with the importance (i.\,e. the coefficient) of $\rm \log\,\mu_{\star}$\ twice the one of $\rm \log\,f_{HI}$. For metallicity gradients measured without data points at radii smaller than 0.5\,$\rm r_{eff,r}$, only $\rm \log\,\mu_{\star}$\ remains with a p-value smaller than 0.05.
A caveat of this method is the underlying assumption of linear relations between metallicity gradients and global galaxy properties, which is not necessarily the case. We improve on this caveat in the next section by using a random forest regression.
\subsubsection{Random Forest model}
A random forest \citep{Ho1995} is a non-parametric, supervised machine learning technique made up of a set of decision trees. The result of a random forest is the mean of all decision trees in the forest and is thus generally more robust than a single decision tree. The aim of this analysis is to train a random forest to predict the metallicity gradient in units of dex\,$\rm r_{eff,r}^{-1}$ from $\rm \log\,\mu_{\star}$, $\rm \log\,M_{\star}~[M_{\odot}]$, $\rm \log\,f_{HI}$, $NUV$-$r$\ colour, concentration index c, log~sSFR, and log~SFR. Once the model is fully trained, we can ask how much each feature contributes to predicting the metallicity gradient.
We used the implementation provided by \textsc{scikit-learn} \citep{Pedregosa2011} and trained the model to optimise the mean squared error. We allow for a maximum of 20 leaf nodes in the decision trees, use 160 decision trees, and leave the default settings for all other parameters. As mentioned above, not all galaxies in our sample have all measurements available, and sometimes metallicity gradients could not be measured due to too few radial bins with sufficient emission line detections. Thus, the working samples contain 81 (67) galaxies for metallicity gradients measured on the entire radial profile (only from data points outside of 0.5\,$\rm r_{eff,r}$). Of each sample, we use 80\,percent of the galaxies for training purposes and 20\,percent to test the resulting model. Tests after the training showed that metallicity gradients can be predicted with a mean absolute error of 0.06 (0.07)\,dex\,$\rm r_{eff,r}^{-1}$ for metallicity gradients measured on the entire radial profile (only from data points outside of 0.5\,$\rm r_{eff,r}$), and the most relevant features for the prediction are $\rm \log\,f_{HI}$\ and $\rm \log\,\mu_{\star}$\ in both cases.
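The random-forest setup can be sketched with \textsc{scikit-learn}; only the tree count (160), the leaf-node cap (20), the squared-error objective, and the 80/20 split come from the text, while the data below are synthetic placeholders, not the galaxy sample:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 81 "galaxies" with 7 global properties each; the
# target depends on two of them, mimicking a gradient driven by two features.
rng = np.random.default_rng(1)
X = rng.normal(size=(81, 7))
y = 0.5 * X[:, 0] - 0.3 * X[:, 2] + 0.05 * rng.normal(size=81)

# 80/20 train/test split and the forest configuration from the text.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
forest = RandomForestRegressor(n_estimators=160, max_leaf_nodes=20,
                               random_state=0).fit(X_tr, y_tr)

# Mean absolute error on the held-out set and the feature ranking.
mae = float(np.mean(np.abs(forest.predict(X_te) - y_te)))
ranking = np.argsort(forest.feature_importances_)[::-1]  # most relevant first
```

In the paper's application, \texttt{feature\_importances\_} is what singles out $\rm \log\,f_{HI}$\ and $\rm \log\,\mu_{\star}$\ as the most relevant predictors.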
\subsection{Correlation between the metallicity gradient, stellar mass surface density, stellar mass, and f$_{HI}$}
\label{sec:detailed_corrs}
\begin{figure*}
\center
\includegraphics[width=6.3in]{Images/fig6.pdf}
\caption{Metallicity gradients shown as a function of stellar mass surface density (left), stellar mass (middle), and atomic gas mass fraction (right). Coloured circles connected with dashed lines present the median gradient of individual profiles. The error bars in the y-direction give the bootstrapping error. Coloured diamonds connected by dotted lines show the metallicity gradient of the stacked profiles. The grey data points in the background represent the metallicity gradients measured per galaxy. From top to bottom, the radius used when fitting the gradient is normalised by $\rm r_{eff}$ and $\rm r_{90}$. The text in the upper part of each panel provides the Spearman R correlation coefficient for all galaxies, its bootstrapping error, and the corresponding p-value. The coefficient is calculated between the data points for individual galaxies rather than the binned values. Where available, the yellow squares show an expected average metallicity gradient; see the text for more details.}
\label{fig:grad_with}
\end{figure*}
\begin{figure*}
\center
\includegraphics[width=6.3in]{Images/fig7.pdf}
\caption{Metallicity gradients shown as a function of stellar mass surface
density (left), stellar mass (middle), and atomic gas mass fraction (right), as in Fig.~\ref{fig:grad_with}, but here the gradients have been measured from profiles without the central 0.5\,$\rm r_{eff}$. We note how most correlations from Fig.~\ref{fig:grad_with} turn into scatter plots and gradients of stacked profiles deviate from average gradients of individual profiles.}
\label{fig:grad_wo}
\end{figure*}
With the statistical tests shown in the last sections, a scenario is emerging in which $\rm \log\,\mu_{\star}$\ is the main driver of metallicity gradients and $\rm \log\,f_{HI}$\ may play a secondary role. We further investigate these correlations, as well as the correlation with stellar mass, because the latter is the best studied in the literature. We examined them by dividing the sample into five bins of the global property, such that each bin contained about the same number of galaxies. The average radial gradient in each bin was then estimated in two different ways: (i) a gradient measured from the average stacked metallicity profile based on the radial metallicity measurements of all galaxies in the bin, to be called a 'gradient of a stacked profile', and (ii) the median of all gradients measured from individual galaxy metallicity profiles, to be called 'the average gradient of individual profiles'. To obtain the gradient of the stacked profile, we take all radial data points of all galaxies within one bin and fit a line to all radial metallicity measurements that fulfil our quality criteria. Radial data points for all galaxies are weighted equally and radii are normalised or measured in kpc. The resulting correlations are shown in Fig.~\ref{fig:grad_with} (metallicity gradients measured on the entire radial metallicity profile) and Fig.~\ref{fig:grad_wo} (gradients measured from the radial metallicity profile outside of 0.5\,$\rm r_{eff,r}$).
The strongest correlation between global galaxy properties and metallicity gradients in our samples, in all cases, is observed with $\rm \log\,\mu_{\star}$: generally, galaxies with lower $\mu_\star$ have steeper metallicity gradients than high $\mu_\star$ galaxies, regardless of the radius normalisation or unit. When comparing these correlations to the ones obtained when measuring the metallicity gradients from radial profiles without the inner 0.5\,$\rm r_{eff,r}$ (Fig.~\ref{fig:grad_wo}, left column), an increase in scatter and an overall flattening of the trends can be seen. This is also reflected in the correlation coefficients, which generally decrease from around 0.5 to around 0.2. A notable exception is the correlation coefficient between $\rm \log\,\mu_{\star}$\ and the metallicity gradient in units of dex\,$\rm r_{eff,r}^{-1}$ (top row, left panel in Fig.~\ref{fig:grad_wo}), which is the only correlation coefficient that is larger than 0.3 in the analysis of gradients measured from profiles without the central 0.5\,$\rm r_{eff,r}$.
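The significances quoted below are expressed as offsets from zero in units of $\sigma$; the exact recipe is not spelled out here, but a standard large-sample estimate based on the Fisher $z$-transform of the Spearman coefficient would look as follows (a sketch, not necessarily the procedure behind the paper's numbers):

```python
import numpy as np
from scipy import stats

def spearman_sigma(x, y):
    """Spearman rank correlation and its offset from zero in units of the
    approximate standard error of the Fisher z-transform."""
    rho, _p = stats.spearmanr(x, y)
    z = np.arctanh(rho)                   # Fisher transform of the coefficient
    sigma_z = 1.06 / np.sqrt(len(x) - 3)  # large-sample std. error for Spearman
    return rho, abs(z) / sigma_z
```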
The relation between M$_\star$ and metallicity gradients is weaker than the one between $\rm \log\,\mu_{\star}$\ and metallicity gradients. For gradients measured on the entire radial metallicity profile, we find correlation coefficients larger than 0.3 for all normalising radii (middle column, Fig.~\ref{fig:grad_with}). The scatter is larger for gradients in units of dex\,R$_{25, r}^{-1}$ or dex\,r$_{90, r}^{-1}$. In particular, for gradients in units of dex\,$\rm r_{eff,r}^{-1}$, it can be seen that the trend between stellar mass and metallicity gradients measured from the entire radial profile is driven by low-mass galaxies. Once we remove the inner 0.5\,$\rm r_{eff,r}$ from the metallicity profile for the gradient measurement (middle column, Fig.~\ref{fig:grad_wo}), the resulting gradients no longer correlate with stellar mass: the scatter of gradients from individual profiles increases, and both approaches to measuring binned, average gradients either produce a flat relation or scatter throughout the parameter space.
A further test of whether stellar mass or $\mu_\star$ is the more important factor in determining the metallicity gradient was inspired by Fig.~5 of \citet{Belfiore2017a}, but the results are inconclusive. The yellow symbols and lines in Figs.~\ref{fig:grad_with} and \ref{fig:grad_wo} show the expected gradients. For the relation between $\mu_\star$ and metallicity gradient, we calculate the average stellar mass in each bin of $\mu_\star$ and then interpolate between the nearest M$_\star$ bins to obtain the expected metallicity gradient, and vice versa. As the expected average metallicity gradients match the measured average metallicity gradients, this test does not provide additional insights.
The third global property that we consider here is the \textsc{H\,i}\ gas mass fraction. The strongest correlations with $\rm \log\,f_{HI}$\ are measured for gradients in units of dex\,$\rm r_{eff,r}^{-1}$. While the correlation coefficients for gradients with normalising radius $\rm r_{eff}$ are at least $3\,\sigma$ different from zero, we find that the coefficients for gradients with normalising radii r$_{90}$ and R$_{25}$, and in units of dex\,kpc$^{-1}$, are only 2 to 3\,$\sigma$ different from zero. Once moving from gradients measured on the entire radial metallicity profiles to gradients measured without the central 0.5\,$\rm r_{eff,r}$, we find a similar behaviour as observed for the correlations between stellar mass and metallicity gradients: the scatter of the individual gradients increases, and correlations of binned values either flatten or their scatter increases as well.
The sample selection and the resulting distribution of global galaxy properties can also affect correlations between metallicity gradients and global galaxy properties. As can be seen in Fig.~\ref{fig:sample2}, the stellar mass range $9.0 \le \log {\rm M_{\star} [M_{\odot}]} \le 10.0$, for example, is more sparsely sampled. Hence, individual extreme and low-mass galaxies might significantly drive correlations. To show that this is not the case, we display the individual metallicity gradients in Figs.~\ref{fig:grad_with} and \ref{fig:grad_wo} as small grey symbols.
\subsection{Comparison to MaNGA}
\label{sec:compare_manga}
\begin{figure*}
\center
\includegraphics[width=6.3in]{Images/fig8.pdf}
\caption{Comparison with MaNGA: all panels show the metallicity gradient as a function of global properties: $\rm \log\,\mu_{\star}$, log~M$_\star,$ and $\rm \log\,f_{HI}$\ (from left to right). Trimmed mean gradients agree with median gradients within the standard deviation (error bars), thus we only show the median gradients. Grey, open squares in the background show individual MaNGA metallicity gradients and green, filled squares show MaNGA median gradients. Grey, open circles in the background, and teal filled circles show the data from this work for the case that metallicity gradients were measured from profiles without radial data points inside of 0.5\,$\rm r_{eff}$. The bins within which median gradients were measured, were set to be equidistant in order to mitigate any effects of different distributions of the global properties. We note that we use the M13 $O3N2$ calibration in this figure.}
\label{fig:manga}
\end{figure*}
As indicated in the introduction, the rise of large IFU surveys has provided large samples of local star-forming galaxies for which a metallicity gradient can be measured. In Fig.~\ref{fig:manga}, the results of this paper are compared to data from MaNGA.
The MaNGA data for this comparison comes from the data release 15 and includes two value added catalogues: MaNGA Pipe3D value added catalog: Spatially resolved and integrated properties of galaxies for DR15 \citep{Sanchez2018a,Sanchez2016,Sanchez2016a} and HI-MaNGA Data Release 1 \citep{Masters2019}. The Pipe3D catalogue includes gas-phase metallicity gradients measured in units of dex\,$\rm r_{eff,r}^{-1}$, the local gas-phase metallicity measured at the effective radius, the total stellar mass, and global star formation rates (from H$\alpha$ emission lines). We note that the metallicity estimator used in this MaNGA catalogue is the $O3N2$ estimator by \citet{Marino2013}. While both the
\citet{Marino2013} (M13) and the \citet{Pettini2004} (PP04, used in this paper) metallicity prescription are based on the $O3N2$ line ratio, their normalisation is slightly different. For a consistent comparison, we use the M13 $O3N2$ method in every figure that includes MaNGA data (and note in the figure caption when this is the case).
We combine the Pipe3D catalogue with the SDSS DR7 MPA-JHU catalogue\footnote{https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/} to obtain 50\,percent Petrosian radii for all galaxies. Together with the stellar mass as given by the MaNGA team, we are thus able to calculate $\mu_{\star}$ (see Sect.~\ref{sec:survey}). In addition, we use the \textsc{H\,i}\ masses and upper mass limits as provided by \citet{Masters2019}.
Selecting only those galaxies that have measured metallicity gradients, at least an upper limit for the \textsc{H\,i}\ mass, a match in the MPA-JHU catalogue (for $\mu_{\star}$ measurements), and that lie within $\pm1.5 \sigma$ of the \citet{Catinella2018} SFMS yields a sample of 544\,galaxies from the MaNGA data sets. For simplicity, we treat \textsc{H\,i}\ mass upper limits as their true values. In Fig.~\ref{fig:manga}, we show the different correlations between metallicity gradient and global galaxy properties (from left to right: stellar mass surface density, stellar mass, and \textsc{H\,i}\ mass fraction). As the distributions of our and the MaNGA galaxies in these properties are different, we fix the widths (0.5\,dex) and centres of the bins of global galaxy properties. This approach simulates a flat distribution in stellar mass surface density, stellar mass, and \textsc{H\,i}\ mass fraction for both the MaNGA and our sample. In each of these bins, we only use galaxies within a certain metallicity gradient percentile range (the 16-84 percentile range) to remove extreme outliers, and refer to the corresponding quantities, for example the mean gradient, as 'trimmed'. For each bin of galaxies, we thus calculate a trimmed mean gradient, standard deviation, error of the trimmed mean, and a median gradient.
In Fig.~\ref{fig:manga}, the median gradients are shown at the centre of each bin. The numbers at the bottom indicate how many galaxies contributed to each median. As the trimmed means and medians agree within the standard deviation, we only show the median gradients.
We also use a second, more stringent percentile range (40-60) as a check of the initial, broader range. Although the number of galaxies drops significantly for the 40-60 percentile cut, the mean and median gradients trimmed in this way agree well between the 40-60 and the 16-84 percentile cut methods. This means that the mild cut is sufficient to estimate a robust mean. The resulting trends agree with the observations in Fig.~\ref{fig:grad_wo}. In the MaNGA data, correlations between the metallicity gradients and these global properties can also be seen. In most cases, our metallicity gradients, which were measured without data points at radii smaller than 0.5\,$\rm r_{eff,r}$, agree with the MaNGA data, except for low-mass, low $\mu_\star$ systems. It is interesting to note that the general trend between metallicity gradients and $\rm \log\,\mu_{\star}$\ (galaxies with lower $\mu_\star$ have steeper metallicity gradients) is also seen in the MaNGA data, except for the lowest $\rm \log\,\mu_{\star}$\ bin. Considering the distribution and location of the individual gradient measurements (grey symbols in Fig.~\ref{fig:manga}), our sample covers overall a similar parameter space as the MaNGA measurements. There are three low-mass galaxies with steeper gradients than most MaNGA galaxies at the same stellar mass. We have placed both the MaNGA galaxies and our sample on various scaling relations to understand whether these galaxies are special with respect to star formation or gas content. However, this is not the case here (see also App.~\ref{app:compare_manga} and Fig.~\ref{fig:manga_sample}).
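The per-bin trimming to a percentile range before averaging, as described above, can be sketched as follows (a minimal illustration; the function name is ours):

```python
import numpy as np

def trimmed_bin_stats(gradients, lo=16, hi=84):
    """Keep only the gradients inside the [lo, hi] percentile range of the
    bin, then return the trimmed mean, its error, and the median."""
    p_lo, p_hi = np.percentile(gradients, [lo, hi])
    kept = gradients[(gradients >= p_lo) & (gradients <= p_hi)]
    err = kept.std(ddof=1) / np.sqrt(len(kept))
    return kept.mean(), err, np.median(kept)
```

With the 16-84 percentile cut, a single extreme outlier in a bin no longer dominates the trimmed mean, which is the robustness property checked above with the stricter 40-60 cut.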
\citet{Belfiore2017a} investigated the correlation between metallicity gradients and stellar mass based on MaNGA data and found that the steepest (most negative) metallicity gradients are measured for galaxies with stellar masses around $10^{10}$ to $10^{10.5}$\,M$_{\odot}$. Galaxies at lower and higher stellar masses have flatter radial metallicity profiles. This trend can also be seen in the middle column of Fig.~\ref{fig:manga}. This is particularly interesting as the \citet{Belfiore2017a} results are not based on the Sanchez et al. value-added catalogue that we use in the present work.
\subsection{Radial variation of the gas-phase metallicity}
\label{sec:radial_profs}
\begin{figure*}
\center
\includegraphics[width=6in]{Images/fig9.pdf}
\caption{Median metallicity profiles in bins of different global galaxy properties. Each row of plots corresponds to one global galaxy property, from top to bottom: $\mu_\star$, M$_\star,$ and f$_{HI}$. Each panel in a row shows average radial metallicity profiles of all galaxies within the bin of the global galaxy property, with the range given at the top of the panel. The dark shaded region corresponds to $\rm 0.5 \leq r_{eff,r} \leq 2.0$, which is the radial region within which MaNGA computes metallicity gradients. Circles show the median metallicity profiles of all galaxies and triangles the median metallicity profile of massive galaxies only (M$_\star > 10^{10}$\,M$_\odot$). These profiles have been computed in radial bins, all of which have the same radial width. The small grey dots show the individual radial metallicity data points. The number in the bottom right corner of each panel indicates the percentage of radial data points located within the range $0.5 \leq \rm r_{eff,r} \leq 2.0$. }
\label{fig:metal_profile}
\end{figure*}
One reason why the correlations between metallicity gradient and M$_\star$, f$_{HI}$ (and $\mu_\star$) change depending on how the metallicity gradient is measured and which sub-sample is considered becomes apparent when taking the average shape of the radial metallicity profiles into account. To do so, a median radial metallicity profile was calculated for each bin of $\mu_\star$, M$_\star$, and f$_{HI}$. Each profile is the running median of all radial data points that meet the criteria for inclusion in the gradient fit, pooled over all galaxies within one bin of the global galaxy property. These profiles are shown in Fig.~\ref{fig:metal_profile}. As we can see, it is not only the metallicity gradient, but also the shape and y-axis intercept of the median metallicity profile that vary with $\mu_\star$, M$_\star$, and f$_{HI}$. Overall, galaxies with lower $\mu_\star$, lower stellar masses, and higher \textsc{H\,i}\ mass fractions have lower central metallicities, which is in agreement with the mass--metallicity relation (see e.g. \citealp{Tremonti2004,Bothwell2013,Brown2018}).
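A running median of the pooled radial data points can be sketched as follows (an illustrative implementation with an assumed bin width, not the exact binning used for the figure):

```python
import numpy as np

def running_median_profile(radii, metallicities, width=0.25):
    """Median metallicity in equal-width radial bins, pooling the radial
    data points of all galaxies within one bin of the global property."""
    r = np.concatenate(radii)
    z = np.concatenate(metallicities)
    edges = np.arange(r.min(), r.max() + width, width)
    centres, medians = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (r >= lo) & (r < hi)
        if sel.any():
            centres.append(0.5 * (lo + hi))
            medians.append(np.median(z[sel]))
    return np.array(centres), np.array(medians)
```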
In addition, the median profiles of higher $\mu_\star$ galaxies with higher stellar masses and lower \textsc{H\,i}\ mass fractions show a plateau or even a decrease of metallicity within approximately $0.5\,\rm r_{eff,r}$. When fitting a line to such a profile with a central plateau, the resulting slope will be flatter. Thus, the central metallicity measurements within $0.5\,\rm r_{eff,r}$ affect the resulting metallicity gradient. This effect is enhanced by the fact that only approximately 50\,percent of all radial data points are within the radial range $0.5 \leq \rm r_{eff,r} \leq 2.0$. Another 30-40\,percent of our radial metallicity data points are at radii smaller than 0.5\,$\rm r_{eff,r}$. When measuring metallicity gradients including the inner 0.5\,$\rm r_{eff,r}$, the results are thus significantly affected by these data. This effect has been observed before by, for example, \citet{Rosales-Ortega2009} and \citet{Sanchez2014}, and it is one reason why some studies dismiss central metallicity measurements in their gradient estimation.
We computed these profiles for different subsets of our galaxy sample (circles: all galaxies; triangles: only massive galaxies, i.e. M${_\star}>10^{10}$\,M$_\odot$). For bulgy and relatively \textsc{H\,i}-poor galaxies in particular, the radial profiles are dominated by massive galaxies, and the median profiles of all galaxies and of massive galaxies only are consistent with each other. For the lowest $\mu_\star$, most \textsc{H\,i}-rich galaxies, we find that the median profile of all galaxies differs from the median profile of massive galaxies (see top left and bottom right panels in Fig.~\ref{fig:metal_profile}). Thus, the largest effect of low-mass galaxies on the measurements of average gradients is seen in these bins (smallest $\mu_\star$, most \textsc{H\,i}-rich). This effect, together with low-number statistics and the fact that some of our lowest mass galaxies have relatively steep gradients, contributes to the discrepancy with MaNGA at low stellar mass surface densities and high \textsc{H\,i}\ mass fractions.
At this point, we found that all correlations are to some degree dependent on the sample selection and the definition of the metallicity gradient. Our analysis suggests that there is a correlation between metallicity gradients and $\mu_\star$ for galaxies on the star formation main sequence, especially when measuring the gradient from the entire radial metallicity profile. Before we move on to discussing this finding in greater detail in Sect.~\ref{sec:diss}, we explore the relation between local metallicity and global \textsc{H\,i}\ content.
\section{Results: Local metallicity and global H{\small I} content}
\label{sec:results2}
We now focus on the correlation between local gas-phase metallicity measurements and the global \textsc{H\,i}\ mass fraction as found by \citetalias{Moran2012}. With the new NTT data presented here, we are able to build on the findings of previous works.
\label{sec:local_metal}
\begin{figure}
\center
\includegraphics[width=3.15in]{Images/fig10.pdf}
\caption{Local metallicity at the outskirts of galaxies as a function of the global \textsc{H\,i}\ mass fraction. Circles show metallicity measurements outside of $0.7{\rm r_{90,r}}$ as a function of the global \textsc{H\,i}\ mass fraction. The data points are colour coded according to their galactocentric radius normalised by ${\rm r_{90,r}}$. We note that some galaxies have multiple metallicity measurements outside of $0.7{\rm r_{90,r}}$ and thus appear multiple times. The number in the lower left corner provides the Spearman correlation coefficient. }
\label{fig:outermetal_vs_fhi}
\end{figure}
\begin{figure}
\center
\includegraphics[width=3.15in]{Images/fig11.pdf}
\caption{Local metallicity at and around $\pm10$\,percent of the effective radius as a function of the global \textsc{H\,i}\ mass fraction for MaNGA (dark blue squares and arrows) and our sample (green circles), respectively. The numbers in the bottom left provide the Spearman correlation coefficient R and the number of galaxies used to calculate the statistic. For MaNGA, we only used \textsc{H\,i}\ detections in the computation of R. The yellow and red lines show our model for different ratios of \textsc{H\,i}\ to stellar radius, namely 3.3 and 5.6, respectively. The underlying model of the dashed lines assumes an effective yield of 0.00268 \citep{Pilyugin2004} and the dotted lines a stellar yield of 0.037 (\citealp{Vincenzo2016} and references therein). We note that we use the M13 $O3N2$ calibration in this figure.}
\label{fig:re_metal_vs_fhi}
\end{figure}
Previously, \citetalias{Moran2012} reported a correlation between the local metallicity at the edge of the stellar disc and the global \textsc{H\,i}\ mass fraction. In Fig.~\ref{fig:outermetal_vs_fhi}, we add the data of the new low-mass galaxies and find that the correlation holds. For MaNGA galaxies, only local metallicity measurements at one effective radius are provided in the value-added catalogues. Together with all those galaxies from our sample that have a metallicity measurement within $\pm10$\,percent of the effective radius in the $r$ band, the correlation between the local metallicity around the effective radius and the global \textsc{H\,i}\ mass fraction is shown in Fig.~\ref{fig:re_metal_vs_fhi}. Again, a correlation is recovered. In summary, we find that the local metallicity correlates with the global \textsc{H\,i}\ mass fraction.
One way to explain these correlations between local metallicity and global \textsc{H\,i}\ mass fraction is suggested by the following simple model. We assume an exponential stellar disc, such that the stellar mass surface density is given by:
\begin{equation}
\label{equ:stellar_suf_dens}
\Sigma_\star = \Sigma_{0,\star} \times e^{-r / {\rm r_{0, \star}}},
\end{equation}
and the total stellar mass by:
\begin{equation}
{\rm M_\star} = 2\pi \times \Sigma_{0,\star} \times {\rm r_{0,\star}}^2,
\end{equation}
where $\rm r_{0, \star}$ is the stellar scale length and $\Sigma_{0,\star}$ the central stellar column density. For the \textsc{H\,i}\ disc, we describe the \textsc{H\,i}\ mass by:
\begin{equation}
{\rm M_{HI}} = \pi \times \Sigma_{0,{\rm HI}} \times {\rm r_{HI}}^2
,\end{equation}
where $\rm r_{HI}$ is the \textsc{H\,i}\ disc size and $\Sigma_{0,{\rm HI}}$ the (central and approximately constant) \textsc{H\,i}\ column density. Furthermore, we assume a local closed-box model, where the local metallicity $Z$ at radius $r$ can be described as (see e.\,g. \citealp{Mo2010}):
\begin{equation}
\label{equ:metal_rad}
Z(r) = - y_{eff} \ln \left(\frac{\Sigma_{{\rm HI}}(r) + \Sigma_{{\rm H2}}(r)}{\Sigma_{{\rm HI}}(r) + \Sigma_{{\rm H2}}(r)+\Sigma_\star(r)}\right)
,\end{equation}
with $y_{eff}$ the effective yield and $\Sigma(r)$ the local column densities of \textsc{H\,i}, H$_{2}$\ and stars at radius $r$. To evaluate this equation at the effective radius $\rm r_{eff}$, we take into account that (i) the \textsc{H\,i}\ and H$_{2}$\ column densities are approximately equal at $\rm r_{eff}$ \citep{Bigiel2012}; (ii) the \textsc{H\,i}\ column density at the effective radius is approximately the same as in the centre, as suggested by the tight \textsc{H\,i}\ mass--size relation \citep{Wang2016,Broeils1997}; and (iii) $\rm r_{eff} \approx 1.7 \times r_{0, \star}$ (and use Eq.~\ref{equ:stellar_suf_dens}).
We thus obtain
\begin{align}
Z(r=r_e) &= -y_{eff} \ln \left(\frac{2 \times \Sigma_{{\rm HI}}}{2 \times \Sigma_{{\rm HI}}+0.18 \Sigma_{0,\star}}\right),\\
{} &= -y_{eff} \ln \left(\frac{\pi \times {\rm r_{HI}}^2 \times \Sigma_{{\rm HI}}}{\pi \times {\rm r_{HI}}^2 \times \Sigma_{{\rm HI}}+0.09\pi \times {\rm r_{HI}}^2 \Sigma_{0,\star}}\right),\\
{} &= -y_{eff} \ln \left(\frac{{\rm M}_{{\rm HI}}}{{\rm M}_{{\rm HI}}+0.045 \times {\rm M}_\star \times ({\rm {\rm r_{HI}} / r_{0,\star}})^2 }\right) .
\end{align}
According to \citet{Broeils1997}, for instance, there is a good correlation between the radius of the \textsc{H\,i}\ and the stellar disc for spiral galaxies. Thus, this (local closed-box) model suggests indeed a correlation between the local metallicity at the effective radius and the \textsc{H\,i}\ mass fraction. Since the stellar scale length is also tightly correlated to r$_{90}$, a similar calculation can be carried out for Fig.~\ref{fig:outermetal_vs_fhi}.
In Fig.~\ref{fig:re_metal_vs_fhi}, we added the model prediction assuming a stellar oxygen yield of 0.037 \citep{Vincenzo2016}, based on the \citet{Romano2010} and \citet{Nomoto2013} stellar models with a \citet{Chabrier2003} initial mass function (dotted lines), and an effective oxygen yield of 0.00268 as measured by \citet{Pilyugin2004} in spiral galaxies (dashed lines). Furthermore, we follow \citet{DeVis2017} to convert between metallicity mass fractions and metallicity number density fractions. We note that we show both an example for true stellar yields and one for an effective yield. When using the effective yield, small amounts of in- and outflows are included in this toy model; with the stellar yield, it is a pure closed-box model. In addition, we show two different ratios of $\rm r_{HI} / r_{0,\star}$ = 3.3 and 5.6 in yellow and red, respectively.
These ratios are approximately equivalent to $\rm r_{HI} / R_{25}$ = 1.0 and 1.7, with $\rm r_{HI} / R_{25}$ = 1.7 (red line) being the value preferred by observations of spiral galaxies \citep{Broeils1997}. More recent measurements of this ratio by \citet{Wang2016} suggest a range of values $\rm 0.6\lessapprox r_{HI} / R_{25} \lessapprox 5$.
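The final expression of the toy model above can be evaluated directly as a function of the \textsc{H\,i}\ mass fraction $f_{HI} = {\rm M_{HI}/M_\star}$. The following sketch expresses the result as an oxygen mass fraction and leaves out the \citet{DeVis2017} conversion to $12 + \log({\rm O/H})$; the function name and parameter defaults are ours:

```python
import numpy as np

def metallicity_at_reff(f_hi, y_eff, r_hi_over_r0=5.6):
    """Oxygen mass fraction Z(r_eff) of the local closed-box toy model,
    with f_hi = M_HI / M_star and r_hi_over_r0 = r_HI / r_{0,star}."""
    # Gas fraction term from the last equation of the derivation above.
    gas_fraction = f_hi / (f_hi + 0.045 * r_hi_over_r0**2)
    return -y_eff * np.log(gas_fraction)

# Higher HI mass fractions yield lower metallicities at the effective radius.
f_hi = np.logspace(-2, 0.5, 50)
z_eff = metallicity_at_reff(f_hi, y_eff=0.00268)  # Pilyugin+2004 effective yield
```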
\section{Discussion}
\label{sec:diss}
\subsection{Metallicity gradients}
We analysed the radial metallicity profiles of a sample of star formation main sequence galaxies from the xGASS\ and xCOLD\,GASS\ samples and investigated the correlations with global galaxy properties, such as the \textsc{H\,i}\ and H$_{2}$\ gas mass fractions, stellar mass, morphology, and star formation activity. Depending on the method for measuring the gradients and the radial region in which the gradient is measured, we obtain the following results.
\textbf{Firstly, measuring the metallicity gradient from the entire radial profile:} We find correlations between metallicity gradients and multiple global galaxy properties, with correlation coefficients significantly different from zero. However, the correlation coefficients move closer to zero when considering only massive galaxies (M$_\star$ > 10$^{10}$\,M$_{\odot}$). The correlations between metallicity gradients and global galaxy properties are tightest for metallicity gradients measured in units of dex\,$\rm r_{eff,r}^{-1}$. However, we also recover these correlations when normalising the galactocentric radius with the Petrosian r$_{90}$ or the isophotal radius R$_{25}$, or when measuring the radius in kpc. The correlations are such that less massive, more \textsc{H\,i}-rich galaxies with smaller $\mu_\star$ have steeper metallicity gradients than more massive, \textsc{H\,i}-poorer galaxies with higher $\mu_\star$.
\textbf{Secondly, measuring the metallicity gradient from the radial profile without the central 0.5\,$\rm r_{eff,r}$}: In this case, we only recover a correlation coefficient significantly different from zero for $\rm \log\,\mu_{\star}$\ and metallicity gradient in units of dex\,$\rm r_{eff,r}^{-1}$. All other relations between metallicity gradients and global galaxy properties are either flat or the data are too scattered across the parameter space. This implies that the stellar mass surface density not only shapes the radial metallicity profile in the centre of galaxies but also the steepness of the metallicity decline towards the outskirts.
\begin{figure*}
\center
\includegraphics[width=6.3in]{Images/fig12.pdf}
\caption{Direct comparison to Figure 5 of \citetalias{Moran2012}. The dark crosses in the background are data by \citetalias{Moran2012}, the yellow points our linear fits to the metallicity profiles of massive, quiescent galaxies (off the SFMS) and the orange points our linear fits to the metallicity profiles of massive, star-forming galaxies (on the SFMS).}
\label{fig:moran2012}
\end{figure*}
In both cases, a deeper analysis of the inter-correlations between the global galaxy properties revealed that $\rm \log\,\mu_{\star}$\ is the only property that directly determines metallicity gradients. All other correlations appear to be driven by the relation between $\rm \log\,\mu_{\star}$\ and the other global galaxy properties. \citetalias{Moran2012} found that the concentration c (as a proxy for the bulge-to-total mass ratio) is more closely related to the metallicity gradient than $\mu_\star$. To understand these differences between two studies that use the same underlying data set, we compared the distributions of c and $\rm \log\,\mu_{\star}$\ for the \citetalias{Moran2012} sample and ours (see Fig.~\ref{fig:moran2012}).
While we use the same radially binned spectra for galaxies with stellar masses greater than $10^{10}$\,M$_\odot$ as \citetalias{Moran2012}, in this paper, we use a different method to fit the gradients and we only use galaxies within $\pm1.5 \sigma$ of the star formation main sequence as defined by \citet{Catinella2018}. Generally, we recover the same trends as \citetalias{Moran2012}. As can be seen from the middle and right panel of Fig.~\ref{fig:moran2012}, the correlations between the metallicity gradients and stellar mass surface density $\mu_\star$ or the concentration index c are different for our work than for
\citetalias{Moran2012}. Where the removal of quiescent galaxies emphasises a correlation between metallicity gradients and $\mu_\star$, the same step wipes out the correlation between metallicity gradient and c. Selecting only SFMS galaxies, as we did, preferentially removes from the \citetalias{Moran2012} sample galaxies with flat or positive gradients and large concentration indices, as well as galaxies with all types of gradients and large stellar mass surface densities, which induces slightly different trends. Overall, the results from \citetalias{Moran2012} and our results agree in the sense that 'more bulge-dominated' galaxies have flatter radial metallicity profiles than 'more disc-dominated' galaxies. This is in contrast to results based on CALIFA data, which did not find any correlation between the metallicity gradient and Hubble type \citep{Sanchez2014,Sanchez-Menguiano2016}.
Overall, we observe that the steepness of metallicity gradients and the shape of radial metallicity profiles are driven by the stellar mass surface density. We also see correlations with stellar mass but our statistical tests suggest that stellar mass surface density is a more important driver. To understand what this finding implies for galaxy evolution, we consider two chemo-dynamical models by \citet{Pezzulli2016} and \citet{Boissier2000}.
\citet{Pezzulli2016} consider models with growing exponential stellar discs. They find that in models in which gas accretes from the intergalactic medium (IGM) such that the disc grows with a constant exponential scale length, galaxies form metallicity gradients that are not compatible with observations. Once radial flows are added, the gradients become more realistic. When considering IGM gas accretion plus radial flows plus inside-out growth, realistic gradients are formed and less IGM accretion is needed than in the previous case. Overall, the steepness of their metallicity gradients is driven by the angular momentum misalignment of the accreted gas with respect to the disc: the more misaligned the accreted gas, the larger the radial gas flows and the steeper the metallicity gradients. In the context of our observational findings, these models suggest that galaxies with smaller $\mu_\star$ would have larger radial flows, as indicated by their steeper metallicity gradients. Galaxies with larger $\mu_\star$ have smaller radial flows, which would mean that less and less gas arrives at their centres. Once these galaxies use up the gas in their centres, inside-out quenching would set in. Shortly afterwards, these galaxies would reach equilibrium in their centres. In our observations, this equilibrium state is reflected in the flattening of the radial metallicity profiles towards the galaxy centres. Such a saturation effect has also been suggested, for example, by \citet{Koppen1999}.
The chemo-dynamical models of \citet{Boissier2000} investigate galaxy evolution as a function of the halo spin parameter $\lambda$ and the rotation velocity, which is a proxy for mass. These models rely purely on IGM accretion and the inside-out growth of an exponential disc; no radial flows are implemented. The central surface brightness of their model galaxies is determined by the halo spin parameter, such that galaxies with smaller central surface brightness tend to reside in haloes with larger spins. This is also found in other simulations and models (e.\,g. \citealp{Kim2013}). Quantitatively, their metallicity gradients are steeper than commonly measured. Qualitatively, however, their Fig.~15 shows that their model galaxies form steeper gradients for higher halo spin parameters, and thus lower central surface brightness. Furthermore, galaxies with very low halo spin, and thus high central surface brightness, appear to form a central metallicity plateau. These results agree with our observations. Once more, the flattening of the radial metallicity profile in the centre can be explained by different accretion patterns in lower and higher $\mu_\star$ galaxies. The IGM accretion onto more massive galaxies with higher total surface density is higher in the beginning but shuts down faster than for less massive and less dense galaxies (their Fig.~3). With the decrease in gas supply, the metallicity once more converges towards an equilibrium value, as can be seen in the centres of our high $\mu_\star$ galaxies.
The comparison to these two chemo-dynamical models suggests that our observational finding of steeper metallicity gradients in galaxies with lower $\mu_\star$ can be explained. The recovered relation can be interpreted as the impact of either (i) the halo spin parameter on the inside-out growth of exponential discs or (ii) smaller radial gas flows in galaxies of earlier type.
The correlation between metallicity gradient and stellar mass has often been discussed in the literature. The CALIFA team \citep{Sanchez2014,Sanchez-Menguiano2016} as well as \citet{Kudritzki2015} and \citet{Ho2015} find a universal metallicity gradient, that is, no correlation with stellar mass or morphology. On the other hand, in the MaNGA data, \citet{Belfiore2017a} find the steepest declining metallicity profiles, that is, the steepest metallicity gradients
for galaxies around stellar masses of $10^{10}$ to $10^{10.5}$\,M$_\odot$, and flatter metallicity profiles in lower and higher mass galaxies. We recover these trends in the MaNGA data that we use for comparison with our sample (middle column, Fig.~\ref{fig:manga}). All these studies measure the gradients from radial metallicity profiles within the radial range of $0.5 \leq$ $\rm r_{eff,r}$ $\leq 2.0$. When measuring metallicity gradients for our sample from the radial profiles without the central 0.5\,$\rm r_{eff,r}$, we recover a relatively flat correlation with large scatter (see in particular the middle column in Fig.~\ref{fig:grad_wo}) and thus agree with previous studies. Interestingly, \citet{Bresolin2019} studied metallicity gradients in low-mass spirals with longslit spectroscopy and also found relatively steep metallicity gradients, consistent with or steeper than our measurements (see e.\,g. their Fig.~8).
\citet{Poetrodjojo2018} measured metallicity gradients for a small number of SAMI galaxies using the entire radial metallicity profile and find that low-mass galaxies have flatter metallicity gradients than more massive galaxies. We note, however, that their upper stellar mass limit is 10$^{10.5}$\,M$_\odot$. They furthermore caution that the stellar mass distribution of the sample heavily impacts the observed trends between metallicity gradients and stellar mass. In addition, diffuse ionised gas might pose a problem \citep{Poetrodjojo2019}.
The lower stellar mass limit of the galaxy sample investigated by \citetalias{Moran2012} is at $10^{10}$\,M$_\odot$ and metallicity gradients were also measured on the entire radial metallicity profiles. They observed more massive galaxies to have flatter metallicity gradients. These two results are not mutually exclusive and, given that \citet{Belfiore2017a} observe a change of trend around the stellar mass limits of the \citet{Poetrodjojo2018} and \citetalias{Moran2012} samples, the two results might even be complementary. In this case, we would expect to observe this turnover in our results.
For our sample, however, the stellar mass range 9.0 $<$ $\rm \log\,M_{\star}~[M_{\odot}]$\ $<$ 10.0 is more sparsely sampled than higher stellar masses and we only measured average gradients in one stellar mass bin in this stellar mass range (see in particular middle column, second row from top in Fig.~\ref{fig:grad_with}). Thus, a turnover cannot be robustly recovered from our data. Nonetheless, our analysis shows that in addition to the stellar mass distribution of the sample (as observed by \citealp{Poetrodjojo2018}), the radial location where the metallicity gradient is measured affects results regarding the correlation between gradients and global galaxy properties.
To date, there have only been a few studies, aside from that of \citetalias{Moran2012},
investigating the link between metallicity and \textsc{H\,i}\ content. \citet{Brown2018}, \citet{Bothwell2013}, and \citet{Hughes2013} find that larger \textsc{H\,i}\ content leads to lower (central) metallicities, while \citet{Bothwell2016} reported that molecular gas is more relevant in determining the metallicity.
With respect to gradients (rather than central metallicities as in \citealp{Bothwell2016}), we find that \textsc{H\,i}\ is more tightly correlated with metallicity than H$_{2}$. \citet{Carton2015} investigated metallicity gradients in a sample of massive galaxies and find, in contrast to our results, that more \textsc{H\,i}-rich galaxies have flatter gradients. Their sample, however, covers a smaller stellar mass range than ours, does not reach \textsc{H\,i}\ mass fractions as high as ours, and they use a different metallicity estimator. Hence, the comparison is difficult. Nonetheless, we only observe the same trends as \citet{Carton2015} when considering our analysis of the MaNGA sample: higher \textsc{H\,i}\ mass fractions come with flatter metallicity gradients.
Overall, we find that $\rm \log\,\mu_{\star}$\ determines metallicity gradients in our sample of SFMS galaxies, which reflects predictions from the chemo-dynamical evolution models by \citet{Pezzulli2016} and \citet{Boissier2000}. Correlations with stellar mass and \textsc{H\,i}\ mass fraction are less robust, and a more detailed analysis suggests that these trends are induced by correlations between $\rm \log\,\mu_{\star}$\ and stellar mass as well as $\rm \log\,f_{HI}$.
\subsection{Local metallicity and global HI mass fraction in local closed-box models}
Based on the previous findings of \citetalias{Moran2012}, we investigated the correlation between local metallicity and global \textsc{H\,i}\ mass fraction. Here, we consider local metallicities measured in the vicinity of either $\rm r_{eff,r}$ (for our sample and for a sample of MaNGA galaxies with \textsc{H\,i}\ mass) or r$_{90, r}$ (only for our sample). In both cases, we find a correlation between the \textbf{local} metallicity and the \textbf{global} \textsc{H\,i}\ to stellar mass ratio. When comparing the observed correlation to the relation expected for a local closed-box model utilising the true stellar yield, we find that metallicities, as expected, are significantly overestimated. When using an effective yield, which accounts for in- and outflows and turns the model into a gas regulator model, we find that this model is in better agreement with the data. The detailed choices are discussed below.
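For reference, the underlying closed-box relation (standard chemical evolution theory; the notation here is generic rather than taken from this work) links the local metallicity to the local gas fraction through the yield:

```latex
\begin{equation}
  Z \;=\; y \, \ln\!\left(\frac{1}{f_{\rm gas}}\right),
  \qquad f_{\rm gas} = \frac{M_{\rm gas}}{M_{\rm gas} + M_{\star}},
\end{equation}
```

so that replacing the true stellar yield $y$ with an effective yield $y_{\rm eff} = Z/\ln(1/f_{\rm gas})$ absorbs the net effect of gas in- and outflows and turns the closed box into a gas regulator model.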
Simulations \citep{Forbes2014a} have shown that these radial gas flows are vital for the evolution of galaxies but that they are in equilibrium around redshift 0. Observations of radial flows in the \textsc{H\,i}\ kinematics \citep{Schmidt2016}, which bring metal-poor gas towards the centres of galaxies, show that they exist but are not detected in every galaxy, most likely because they are small. Also, the \citet{Pezzulli2016} model suggests that small radial flows are necessary but not the main driver of metallicity gradients.
To compare the model to the data, we have to make assumptions for the (effective) yield and the ratio of \textsc{H\,i}\ to stellar disc size. The ratio of \textsc{H\,i}\ to stellar disc size has not yet been studied extensively. \citet{Broeils1997} find a remarkably tight correlation between \textsc{H\,i}\ disc size and 25\,mag\,arcsec$^{-2}$ isophotal radius $\rm R_{25}$ for spiral galaxies, with the average radius ratio being 1.7. However, galaxies with higher $\mu_\star$ contain less \textsc{H\,i}\ and, thus, the ratio between \textsc{H\,i}\ and stellar disc size likely decreases. An extensive analysis by \citet{Wang2016} finds a range of radius ratios: $\rm 0.6\lessapprox r_{HI} / R_{25} \lessapprox 5$.
Thus, we also show the model results with an \textsc{H\,i}\ to 25\,mag\,arcsec$^{-2}$ isophotal radius ratio of 1.0. For the yield, we chose two different values: 0.00268, an effective yield obtained by \citet{Pilyugin2004} for spiral galaxies, and 0.037, a stellar yield obtained by \citet{Vincenzo2016} from the \citet{Romano2010} and \citet{Nomoto2013} stellar models assuming a
\citet{Chabrier2003} initial mass function and the average gas phase metallicities of our galaxies. Being a measure of the true stellar yield, the prediction based on the \citet{Vincenzo2016} yield is an upper limit. Thus, outflows of metal-rich gas or inflows of metal-poor gas must indeed have taken place in our sample galaxies. The \citet{Pilyugin2004} yield appears at the lower end of our data, which might imply that in- and outflows in our sample galaxies are less effective or pronounced than in the spiral galaxies analysed by
\citet{Pilyugin2004}. In addition, the differing metallicity estimators between our work and \citet{Pilyugin2004} might induce differences \citep{Vincenzo2016}. Overall, this model works well to explain the correlation between a \textbf{local} metallicity measurement and the \textbf{global} \textsc{H\,i}-to-stellar-mass ratio.
Recent large surveys of the \textsc{H\,i}\ fraction and its correlation to other global properties of galaxies suggest that the morphology (as described by the stellar mass surface density $\mu_\star$) is one defining factor (secondary to $NUV$-$r$\ colour) in setting the \textsc{H\,i}\ mass fraction \citep{Catinella2013,Catinella2018,Brown2015}. Together with the analysis of the primary driver of metallicity gradient, this might explain why $\rm \log\,f_{HI}$\ correlates with metallicity gradients. Another approach might be provided by our simple calculations in Sect.~\ref{sec:local_metal}, which show that the global \textsc{H\,i}\ mass fraction sets the local metallicity at specified radii
(here $\rm r_{eff}$ and $\rm r_{90}$). Once the global \textsc{H\,i}\ mass fraction determines the metallicity at, for example, $\rm r_{eff}$ and $\rm r_{90}$, f$_{HI}$ also determines the rate at which the metallicity changes from $\rm r_{eff}$ to $\rm r_{90}$ and, thus, the metallicity gradients. In this way, our simple model could also explain why the metallicity gradient seems to correlate with \textsc{H\,i}\ mass fraction.
\citet{Barrera-Ballesteros2018} did not look at the correlation between metallicity gradients or local metallicity and global \textsc{H\,i}\ content, but they did report that local metallicity depends on local cold gas mass fractions (estimated from the optical extinction $A_V$). In particular, they found lower metallicities in regions where the ratio of local gas to local total mass is high. As we assume constant \textsc{H\,i}\ column density across an exponentially declining stellar disc, our model also suggests lower metallicities where the \textsc{H\,i}\ to stellar surface density is higher. Thus, both our simple model and our data agree with the findings by \citet{Barrera-Ballesteros2018}. We are furthermore able to specify that \textsc{H\,i}\ is more important than H$_{2}$\ in defining the metallicity. In light of these results, it will be interesting to follow up on these investigations once resolved \textsc{H\,i}\ and metallicity observations are available for a large number of galaxies, in particular, through combinations of surveys such as MaNGA and Apertif (Adams et al. in prep)\footnote{https://www.astron.nl/telescopes/wsrt-apertif/apertif-dr1-documentation/data-access/data-usage-policy/} or WALLABY \citep{Koribalski2020}.
\section{Conclusion}
\label{sec:sum_conclude}
In this work, we present new optical longslit spectra for 27 low-mass galaxies from the xGASS\ \citep{Catinella2018} and xCOLD\,GASS\ surveys \citep{Saintonge2017}. By combining the new data with data from xGASS\ and xCOLD\,GASS, we investigated the relation between gas-phase oxygen abundance, gas content, and star formation. In particular, we focused on metallicity gradients and the local metallicity at different galactocentric radii and their correlation to global galaxy properties. Our findings can be summarised as follows:
\begin{itemize}
\item While a number of global galaxy properties correlate with the metallicity gradient, various statistical analyses suggest that only the stellar mass surface density $\mu_\star$ drives metallicity gradients. Other correlations arise because $\rm \log\,\mu_{\star}$\ correlates with these global galaxy properties.
\item The correlation between $\mu_\star$ and metallicity gradient can be interpreted with the help of chemo-dynamical evolution models of \citet{Pezzulli2016} and \citet{Boissier2000}: The observed correlation can be interpreted as a sign of (i) different spin parameters of the host halo or (ii) different accretion and radial flow patterns in galaxies, depending on their stellar mass surface density.
\item The local metallicity is correlated with the global \textsc{H\,i}\ mass fraction. Although it is surprising that a local measurement should carry information about global galaxy properties, this correlation can be modelled with a simple gas regulator model: a local closed-box model plus an effective yield that accounts for small radial flows.
\item When comparing to metallicity gradients in the literature, in particular MaNGA \citep{Bundy2015,Belfiore2017a,Sanchez2018a,Sanchez2016,Sanchez2016a} and SAMI
\citep{Croom2012, Poetrodjojo2018}, we find that our results agree within the errors for high-mass galaxies. In the lower stellar mass regime we observe relatively steep gradients. These discrepancies cannot be explained by sample selection but potentially by small sample statistics. We expect further discussion in the literature of trends in metallicity gradients for galaxies at stellar masses M$_\star \leq 10^{10}$\,M$_\odot$ or with low stellar mass surface densities (small to no bulges). Furthermore, it is vital that metallicity gradients are measured from metallicities over similar radial regions. Once data points inwards of 0.5\,$\rm r_{eff,r}$ are included in the gradient measurement, which was not done by the MaNGA team, our results start to differ significantly.
\end{itemize}
In particular, the (local) correlation between metallicity and \textsc{H\,i}\ has not yet been studied in great detail across galaxy discs. Upcoming and ongoing surveys such as MaNGA, SAMI, WALLABY, and Apertif, as well as future surveys on MeerKAT will provide more information and further details about local ISM enrichment and radial gas flows.
\begin{acknowledgements}
We would like to thank the referee for a constructive report that helped improve this paper.
KL would like to thank Virginia Kilborn and Gabrielle Pezzulli for valuable discussions during the making of this paper.\\
LC is the recipient of an Australian Research Council Future Fellowship (FT180100066) funded by the Australian Government.\\
Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013.\\
Besides software packages already mentioned in the main body of this paper, this work has also made use of Python\footnote{http://www.python.org} and the Python packages: astropy \citep{AstropyCollaboration2013}, NumPy\footnote{http://www.numpy.org/}, matplotlib\footnote{https://matplotlib.org/} \citep{Hunter2007}, pandas \citep{Reback2020,McKinney2010} and seaborn\footnote{https://seaborn.pydata.org/}. Furthermore, TOPCAT has been used \citep{Taylor2005}. \\
This research has made use of the VizieR catalogue access tool, CDS,
Strasbourg, France \citep{Ochsenbein2000}; the SIMBAD database,
operated at CDS, Strasbourg, France \citep{Wenger2000}; TOPCAT
\citep{Taylor2005}; the "Aladin sky atlas" developed at CDS, Strasbourg
Observatory, France \citep{Bonnarel2000,Boch2014}; NASA’s Astrophysics Data
System Bibliographic Services.\\
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan
Foundation, the Participating Institutions, the National Science
Foundation, the U.S. Department of Energy, the National Aeronautics and Space
Administration, the Japanese Monbukagakusho, the Max Planck Society, and
the Higher Education Funding Council for England. The SDSS Web Site is
http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium for the
Participating Institutions. The Participating Institutions are the American
Museum of Natural History, Astrophysical Institute Potsdam, University of
Basel, University of Cambridge, Case Western Reserve University, University
of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the
Japan Participation Group, Johns Hopkins University, the Joint Institute
for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and
Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences
(LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for
Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New
Mexico State University, Ohio State University, University of Pittsburgh,
University of Portsmouth, Princeton University, the United States Naval Observatory,
and the University of Washington.\\
This project makes use of the MaNGA-Pipe3D dataproducts. We thank the IA-UNAM
MaNGA team for creating this catalogue, and the Conacyt Project CB-285080 for
supporting them.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Building a large scale quantum information processor is a daunting technology
integration challenge. Most current experiments demonstrate static circuits,
where a pre-compiled sequence of gates is terminated by qubit measurements. In
some cases, conditional control flow is emulated by postselecting data on
certain measurement outcomes~\cite{Chow14}, or by gating duplicate hardware
behind a switch to handle a single branch in a pulse
program~\cite{Steffen:2013,Riste2013}. However, because of the need for quantum
error correction~\cite{Gottesman2009}, fault-tolerant quantum computation is
inevitably an actively controlled process. This active control may manifest as:
continuous entropy removal from the system via active reset \cite{Aharonov1996},
active error correction after decoding syndrome measurements, Pauli frame
updates for subsequent pulses after state injection~\cite{Bravyi2005,
Knill2005}, or non-deterministic ``repeat-until-success''~\cite{Paetznick:2014}
gates. The community is now tackling the challenge of dynamically steering an
experiment within the coherence time of the
qubits~\cite{Riste2012,CampagneIbarcq2013,Pfaff2014,Ofek2016}. For
superconducting qubits this coherence time---although continuously
improving---is currently 50--100$\units{\mu s}$. To achieve control fidelities
compatible with expected thresholds for fault-tolerant quantum
computation~\cite{Knill2005,Raussendorf2007b}, the feedback/feedforward time
must be less than 1\% of this coherence time, or on the order of a few hundred
nanoseconds.
Superconducting qubit control systems send a coordinated sequence of microwave
pulses, with durations from tens to hundreds of nanoseconds, down coaxial lines
of a dilution refrigerator to implement both control and readout of the qubits.
Currently, the microwave pulses are produced and recorded at r.f. frequencies by
mixing up or down with a microwave carrier, allowing commonly available
$\approx1$ GS/s digital-to-analog (DAC) and analog-to-digital (ADC) converters
to be used. In the circuit quantum electrodynamics (QED)
platform~\cite{Schuster2007}, the qubit state is encoded in the amplitude and
phase of a measurement pulse that interacts with a microwave cavity coupled
dispersively to the qubit. This microwave pulse is typically captured with a
room temperature receiver, then converted into a qubit state assignment via a
digital signal processing (DSP) pipeline. Programming the control sequences for
dynamic experiments also requires a supporting framework from the pulse
sequencing language and hardware. Conventional arbitrary waveform generator
(AWG) sequence tables are far too restrictive to support control flow beyond
simple repeated sections. The desired control flow requires conditional
execution, loops with arbitrary nesting, and subroutines for code reuse.
The required timescale for active control is beyond the capabilities of a
software solution running on a general purpose operating system; however, it is
within reach of custom gateware running on field programmable gate arrays
(FPGAs) directly connected to analog $\leftrightarrow$ digital converters for
both qubit control and measurement. Many groups in superconducting and ion trap
quantum computing have turned to this approach and started to build a framework
of controllers and actuators. For trapped ions, the Advanced Real-Time
Infrastructure for Quantum physics (ARTIQ)~\cite{artiq} is a complete framework
of hardware, gateware, and software for controlling quantum information
experiments. However, ARTIQ's control flow architecture uses general purpose
CPUs implemented in FPGA fabric, so called \emph{soft-core} CPUs, which cannot
maintain the event rate required by superconducting qubits (gates are 1--2
orders of magnitude slower in ion traps). Researchers at
UCSB/Google~\cite{Chen:2012,Jeffrey:2014}, ETH Zurich~\cite{Steffen:2013}, TU
Delft~\cite{Riste2013,Bultink:2016}, and Yale~\cite{Ofek2016} have also built
superconducting qubit control and/or readout platforms using FPGAs, and even
explored moving them to the cryogenic stages~\cite{Homulle:2016,ConwayLamb2016},
but have generally not made these tools available to the broader quantum
information community.
In this work, we introduce the \texttt{QDSP} framework and Arbitrary Pulse
Sequencer 2 (APS2) for qubit readout and control, respectively. \texttt{QDSP}
implements state assignment and data recording in FPGA gateware for a
commercially available receiver/exciter system (the Innovative Integration
X6-1000M, also used in the Yale work\cite{Ofek2016}). We show how latency can be
minimized for rapid qubit state decisions by consolidating many of the
conventional DSP stages into one. The APS2, shown in Fig.~\ref{fig:APS2}, has
gateware designed to naturally support arbitrary control flow in quantum circuit
sequences on superconducting qubits. For circuits involving multiple qubits,
state information from many qubits must be collated and synthesized into a
steering decision by a controller. To this end we designed the Trigger
Distribution Module (TDM) to capture up to eight channels of qubit state
information, execute arbitrary logic on an FPGA, and then distribute steering
information to APS2 output modules over low-latency serial data links. All the
systems presented here are either commercially available or full source code for
gateware and drivers has been posted under a permissive open-source license.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/APS2-QuVDe.jpg}
\caption{\label{fig:APS2} A fully-populated APS2 system, with 18 analog output
channels (9 APS2 modules) and a trigger distribution module (TDM, far right).
Each APS2 module provides two 14-bit analog output channels with 1.2 GS/s
sampling rate and four digital marker channels. The 6U enclosure provides power
and cooling. Inter-module signaling is handled by the star network of SATA
cables between the TDM and each APS2 output module. Host control is via 1Gb
Ethernet to each module with a combination of a Comblock 5402 TCP stack,
\texttt{APS2-Comms} custom HDL (\url{github.com/BBN-Q/APS2-Comms}) and the
\texttt{libaps2} \protect C\nobreak\hspace{-.05em}\raisebox{.4ex}{\tiny\bf +}\nobreak\hspace{-.10em}\raisebox{.4ex}{\tiny\bf +}~software driver (\url{github.com/BBN-Q/libAPS2}).}
\end{figure}
To validate the developed gateware and hardware we demonstrate multi-qubit
routines and quantum gates that require feedback and feedforward: active qubit
initialization, entanglement generation through measurement, and
measurement-based logic gates. Although these are specific examples, they are
implemented in a general framework that enables arbitrary steering of quantum
circuits. Furthermore, with appropriate quantum hardware, different circuits
are all achieved without re-wiring the control systems, but simply by executing
different programs on the APS2 and TDM.
\section{Qubit State Decisions in Hardware}
\label{sec:qubit-state-decision}
The first requirement for quantum feedback is extracting qubit state decisions
with minimal latency. Typical superconducting qubit measurements involve sending
a microwave pulse to a readout resonator, recording the reflected/transmitted
signal, filtering noise and other out-of-band signals, and reducing the record
to a binary decision about the qubit state. Conventionally, this is accomplished
with a superheterodyne transmitter and receiver operating with an intermediate
frequency (IF) of tens of MHz, which allows the IF stages to be handled
digitally.
Since many measurement channels may be frequency multiplexed onto the same line,
the DSP chain involves several stages of filtering to \emph{channelize} the
signal. This involves mixing the captured record with a continuous wave (CW) IF
signal---produced by a numerically controlled oscillator (NCO)---and several
low-pass filtering and decimation stages to recover a baseband complex-valued
phasor as a function of time (Fig.~\ref{fig:qubit-dsp}). This complex-time
series is then integrated with a kernel, which may be a simple box car filter or
optimized to maximally distinguish the qubit
states~\cite{Gambetta2007,Ryan2015}. A final qubit state is determined by
thresholding the integrated value. These receiver functions, which have
frequently been implemented in software, are ideally suited to DSP resources
available in modern FPGAs. Moving these functions into custom gateware has
additional benefits for parallel processing of simultaneous measurements,
reducing CPU load on the control PC, and greatly reducing latency of qubit state
decisions.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/qubit_dsp_blake}
\caption{ \label{fig:qubit-dsp} Block diagram of typical superheterodyne
receiver qubit decision chain: filtering/decimation, demodulation,
filtering/decimation, integration and thresholding. The filtering and decimating
may be combined into a polyphase decimating filter. This filter may also consist
of multiple stages for stability or efficiency reasons.}
\end{figure}
\subsection{Filter design}
The design of the channel filter for qubit readout is the result of balancing
several considerations:
\begin{enumerate}
\item bandwidth of the channel---should be some small multiple of the resonator
bandwidth, $\kappa$;
\item stopband attenuation sufficient to remove channel crosstalk;
\item numerical stability---particularly when implemented with either single
precision or fixed-point representation;
\item latency;
\item computational resources.
\end{enumerate}
Some of these criteria are in competition with each other. For instance, one may
decrease channel crosstalk by using a higher-order filter, but this comes at the
expense of increased latency and computational cost. Qubit devices used in our
lab have typical resonator bandwidths of $1-3\units{MHz}$. In the high fidelity,
QND readout regime we have noticed harmonic content in the readout signal at
multiples of the dispersive shift, $\chi$, that extends the signal bandwidth by
roughly a factor of 2. Consequently, we have designed channel filters with
$10\units{MHz}$ bandwidth. The downconversion structure of
Fig.~\ref{fig:qubit-dsp} selects symmetric channels around the IF frequency;
thus, a $10\units{MHz}$ channel corresponds to a filter with a $3\units{dB}$
bandwidth of $5\units{MHz}$. We also want sufficient stopband attenuation to
limit channel crosstalk. We have chosen the stopband attenuation such that a
fullscale signal in an adjacent channel is suppressed below the
least-significant bit of the selected channel. Given the signed 12-bit ADCs on
our target platform, this requires $20\log_{10}(1/2^{11}) \approx 66\units{dB}$
stopband attenuation.
The relatively narrow bandwidth of the readout channels compared to the 1 GS/s
sampling rate of the ADC leads to numerical stability problems in fixed-point or
single-precision designs. Re-expressed as a relative bandwidth, the
$f_{3\mathrm{dB}} = 5$ MHz channel described above has $n_{3\mathrm{dB}} =
0.01$. However, it is difficult to construct stable filters with normalized
bandwidth $n_{3\mathrm{dB}}<0.1$. This may be solved by cascading several
polyphase decimating filters to boost the 3 dB bandwidth of the later stages---
this brings an additional benefit of reducing the computational resources.
\subsection{Fast Integration Kernels}
While the complete time-trace of the measurement record is a useful debugging
tool for observing and understanding the cavity response from the two (or more)
qubit states, a conventional channelizer with multiple stages of signal
processing (NCO mixing, filtering and integrating) forces an undesirable
latency. Take a typical example of $10\units{MHz}$ channels spaced
$20\units{MHz}$ apart. A Parks-McClellan \cite{McClellan1973} designed FIR
low-pass filter for a $250\units{MHz}$ sampling rate with a pass band from
$0-5\units{MHz}$ and stop-band from $15-125\units{MHz}$ with $60\units{dB}$
suppression requires at least 86 taps. At a typical FPGA clock speed of
$250\units{MHz}$ this results in 100s of nanoseconds of latency. However, the
qubit state decision reduces the time dimension to a single value with a kernel
integrator. The intermediate filtering stage is thus superfluous if we can
construct an appropriate frequency-selective kernel. This crucial insight
enables us to drive down the signal processing latency to a few clock cycles.
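The tap-count estimate above can be reproduced with SciPy's Parks-McClellan routine; this is an illustrative sketch (the achieved attenuation depends on ripple weighting and grid density), not the filter implemented in the gateware:

```python
import numpy as np
from scipy.signal import remez, freqz

fs = 250e6  # processing sample rate quoted in the text

# Pass band 0-5 MHz, stop band 15-125 MHz, 86 taps as estimated above
taps = remez(86, [0, 5e6, 15e6, fs / 2], [1, 0], fs=fs)

# Evaluate the frequency response and find the worst-case stopband gain
w, h = freqz(taps, worN=8192, fs=fs)
stop_db = 20 * np.log10(np.abs(h[w >= 15e6]).max())
print(f"stopband peak: {stop_db:.1f} dB")  # should be near the -60 dB target
```

At one tap per clock cycle, this filter alone accounts for 86 cycles of latency at 250 MHz, which illustrates why the intermediate filtering stage dominates the latency budget.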
More formally, consider the discrete-time measurement record $v(t_l)$ of total length $L$ samples. Applying the DSP chain of Fig.~\ref{fig:qubit-dsp},
the final complex-valued qubit state signal (before thresholding, and
ignoring decimation for simplicity) is:
\begin{equation}
\label{eq:3stage-dsp}
q = \underbrace{\sum_{l=0}^L k_l}_{\text{kernel}} \left[\underbrace{\sum_{n=0}^N b_n }_{\text{filter}} \left[ \underbrace{e^{-i\omega (t_{l-n})}}_{\text{mix-down}} v(t_{l-n}) \right]\right],
\end{equation}
where the demodulation frequency is $\omega$, the channel is selected with an
$N$-tap FIR filter with coefficients $b_n$ and a final kernel integration $k_l$
is applied for the length of the record $L$. The nested sum and product can be
expanded and the terms collected into a single kernel integration, with a modified kernel
\begin{equation}
\label{eq:modified-kernel}
q = \sum_{l=0}^{L} k_l' v(t_l); \quad k_l' = e^{-i\omega t_l}\sum_{n=0}^N k_{l+n} b_n.
\end{equation}
Thus, the three-stage pipeline of Fig.~\ref{fig:qubit-dsp} is reduced
into a single-stage pipeline consisting solely of the kernel integration
step.
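The equivalence of Eqs.~\ref{eq:3stage-dsp} and \ref{eq:modified-kernel} is easy to verify numerically; the sketch below uses random data, unit sample spacing, and arbitrary filter and kernel coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 256, 16            # record length and FIR tap count (arbitrary)
omega = 2 * np.pi * 0.1   # demodulation frequency in rad/sample
v = rng.standard_normal(L)                                # measurement record
b = rng.standard_normal(N)                                # channel filter taps
k = rng.standard_normal(L) + 1j * rng.standard_normal(L)  # integration kernel

# Three-stage pipeline (Eq. 1): mix down, FIR filter, kernel integrate
q1 = 0j
for l in range(L):
    acc = 0j
    for n in range(N):
        if l - n >= 0:  # v(t) is zero before the record starts
            acc += b[n] * np.exp(-1j * omega * (l - n)) * v[l - n]
    q1 += k[l] * acc

# Single-stage pipeline (Eq. 2): precompute the modified kernel k'
k_pad = np.concatenate([k, np.zeros(N)])  # k is zero past the record end
k_mod = np.array([np.exp(-1j * omega * m) * np.dot(k_pad[m:m + N], b)
                  for m in range(L)])
q2 = np.sum(k_mod * v)

assert np.allclose(q1, q2)  # both pipelines yield the same decision variable
```

Since $k'$ is computed offline, the hardware only ever performs the single multiply-accumulate of the last two lines.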
This reduction of the pipeline to a single stage has substantial advantages for
DSP latency. In particular, the FIR filter block of the three-stage pipeline has
a minimum latency of N clock cycles for a N-tap filter. As discussed above, this
can be 100s of nanoseconds and this single filter stage consumes the entire
latency budget in a single step. By contrast, the DSP pipeline of
Eq.~\ref{eq:modified-kernel} can be achieved with 1-3 clock cycles of latency on
the FPGA, or $\le 15\units{ns}$.
While Eqs.~\ref{eq:3stage-dsp} and \ref{eq:modified-kernel} demonstrate the
mathematical equivalence of the 1-stage and 3-stage DSP pipelines, in practice
it is not necessary to transform a baseband integration kernel via
Eq.~\ref{eq:modified-kernel}. Instead, one can use the average unfiltered (IF)
response at the ADC after preparing a qubit in $\ket{0}$ and $\ket{1}$ to
construct a matched filter \cite{Ryan2015}. The frequency response of the
resulting filter will match that of the measurement pulse itself. Consequently,
as long as the measurement pulse is itself band-limited --- which should always be
the case with an appropriately designed dispersive cavity measurement --- the
resulting matched filter will also optimally ``channelize'' the ADC input and
suppress interference from other multiplexed qubit measurement channels.
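A minimal sketch of this matched-filter construction on synthetic data follows; the IF frequency, state-dependent phase shift, noise level, and shot count are all illustrative choices, not parameters of the hardware described here:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f_if = 1e9, 50e6              # hypothetical 1 GS/s ADC, 50 MHz IF
t = np.arange(1000) / fs          # 1 us measurement record

def shots(phase, n=200):
    """Ensemble of noisy IF records for one prepared qubit state."""
    sig = np.cos(2 * np.pi * f_if * t + phase)
    return sig + 0.5 * rng.standard_normal((n, t.size))

s0, s1 = shots(0.0), shots(0.6)   # cavity phase differs with the qubit state

# Matched filter: conjugate difference of the mean records (cf. Ryan et al.)
kernel = np.conj(s1.mean(axis=0) - s0.mean(axis=0))

# Integrated decision variables and a midpoint threshold
q0, q1 = s0 @ kernel, s1 @ kernel
threshold = 0.5 * (q0.mean() + q1.mean())
print(np.mean(q1 > threshold))    # fraction of |1> shots assigned correctly
```

The kernel is computed once from calibration shots and then loaded into the fast integrators, so no per-shot filtering is needed.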
\subsection{Hardware Implementation}
To minimize overall latency, we implement our \texttt{QDSP} qubit readout system in
custom FPGA gateware (\texttt{QDSP} - \url{github.com/BBN-Q/BBN-QDSP-X6}) and
software drivers (\texttt{libx6} - \url{github.com/BBN-Q/libx6/}) for a
commercially available hardware platform (Innovative Integration X6-1000M). The
X6 hardware provides two 12-bit 1 GS/s ADCs and four 16-bit 500 MS/s DACs.
Although \texttt{QDSP} focuses on the receiver
application, it also provides basic AWG functionality to drive the DACs for
simple waveforms such as measurement pulses. A block diagram of the receiver
section of the \texttt{QDSP} gateware is shown in Fig.~\ref{fig:X6-block-diagram}. The
structure includes a fast path for low-latency qubit state decision output, as
well as a conventional receiver chain for debugging and calibration. The
gateware and drivers allow users to tap the data stream at several points for
data recording or debugging.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/qdsp-blocks}
\caption{ \label{fig:X6-block-diagram} Block diagram of \texttt{QDSP} filter
blocks, with low-latency feedback (upper) and calibration/diagnostic (lower)
paths. The low-latency path drives digital outputs which may be connected to the
control system, such as the TDM (see section~\ref{sec:TDM}). $N$ copies (not
shown) of this low-latency path support multiplexed readout. The clock-domain
crossing (CDC) FIFO on the diagnostic path allows the low-pass filter in the
channelizer to run at a slower clock rate, easing timing closure. Both slow and
fast paths are duplicated four times per ADC in order to handle multiplexed
signals. The user may choose to tap these data streams at various points, and
send the data over PCIe for recording on the host PC.} \end{figure}
The raw ADC values from each ADC are presented to the FPGA four samples wide at
250 MHz when sampling at 1 GS/s (we sample at the maximum rate to minimize noise
aliasing). We immediately decimate by a factor of 4 by summing the four values
so that subsequent processing deals with only one sample per clock. This is
mainly for convenience: the raw integrators could run in parallel and the data
could be serialized for the subsequent filtering. The data is copied to $N$ IF
kernel integrators for multiplexed readout. The outputs of these fast
integrators are connected to variable thresholders which drive digital outs to
make fast qubit state decisions available to the pulse sequencing hardware for
feedback. These values are also available in software as complex values.
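The fast-path decision logic described above amounts to a kernel integration followed by a threshold test. The following Python sketch illustrates the arithmetic; the function names, the toy signal, and the sample rate are illustrative placeholders, not the \texttt{QDSP} implementation:

```python
import numpy as np

def kernel_integrate(record, kernel):
    """Dot product of a raw ADC record with a complex IF kernel,
    mimicking the fast-path kernel integrators."""
    return np.dot(record, np.conj(kernel))

def state_decision(value, threshold, axis_phase=0.0):
    """Rotate the integrated value onto the decision axis and compare
    its real part against a variable threshold."""
    return int((value * np.exp(-1j * axis_phase)).real > threshold)

# Toy example: a 10 MHz IF kernel over a 1 us record at 250 MS/s.
fs = 250e6
t = np.arange(int(1e-6 * fs)) / fs
kernel = np.exp(2j * np.pi * 10e6 * t)
record = np.cos(2 * np.pi * 10e6 * t)    # in-phase response
decision = state_decision(kernel_integrate(record, kernel), threshold=0.0)
```

A signal in phase with the kernel integrates to a positive real value and yields a decision of 1; the inverted signal yields 0.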
For more conventional downconversion, each raw stream is also broadcast to a
channelizer module. The module consists of a numerically controlled oscillator
(NCO) that generates cosine and sine at the chosen frequency. The incoming ADC
data is multiplied with the NCO outputs in a complex multiplier. The mixed
signal is then low-pass filtered by a two-stage decimating finite-impulse
response (FIR) filter chain. Polyphase FIR filters are chosen for each stage to
minimize use of specialized DSP hardware on the FPGA. The FIR filters are
equiripple with the coefficients designed by the Remez exchange algorithm
\cite{McClellan1973}. The number of taps was chosen to optimally fit onto the
DSP blocks of the FPGA (with reuse from hardware oversampling) and to suppress
the stopband by 60 dB, nearly down to the bit level of the 12-bit ADCs. The
low-pass filtered and decimated stream is useful for observing and debugging the
cavity response. Finally, a decision engine using a baseband kernel integrator
is attached to the demodulated stream to complete the conventional DSP chain.
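The conventional chain can be sketched in a few lines of Python/SciPy: an NCO mix, a Remez-designed low-pass FIR, and decimation. The tap count, rates, band edges, and test tone below are illustrative placeholders, not the gateware's actual parameters:

```python
import numpy as np
from scipy.signal import remez, lfilter

def design_lowpass(numtaps, fs, f_pass, f_stop):
    """Equiripple FIR low-pass designed by the Remez exchange algorithm,
    as in the channelizer's decimating filter stages."""
    return remez(numtaps, [0, f_pass, f_stop, 0.5 * fs], [1, 0], fs=fs)

def channelize(adc, fs, f_nco, taps, decim):
    """Digital downconversion: NCO mix, low-pass filter, decimate."""
    n = np.arange(len(adc))
    nco = np.exp(-2j * np.pi * f_nco * n / fs)  # cosine and sine from the NCO
    mixed = adc * nco                            # complex multiplier
    return lfilter(taps, 1.0, mixed)[::decim]    # FIR low-pass + decimation

fs = 250e6                                       # one sample per FPGA clock
taps = design_lowpass(63, fs, f_pass=5e6, f_stop=15e6)
n = np.arange(2500)
adc = np.cos(2 * np.pi * 20e6 * n / fs)          # a 20 MHz IF tone
baseband = channelize(adc, fs, f_nco=20e6, taps=taps, decim=8)
```

Mixing the real tone against the NCO produces a DC term of amplitude 0.5 plus an image at twice the IF, which the low-pass stage suppresses before decimation.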
\section{Dynamic Arbitrary Pulse Sequencing: APS2}
There are demanding requirements on bandwidth, latency and noise for dynamic
pulse sequencing with superconducting qubits. The sequencer should naturally
represent the quantum circuit being applied, i.e., it should be able to apply a
sequence of $\approx 20\units{ns}$ pulses (typical single qubit gate times)
rather than treating the entire sequence as a waveform. Simply concatenating
waveforms together to create a sequence places extreme demands on the size of
waveform memory, and transferring and compiling sequences to the AWG becomes an
experimental bottleneck. The sequencer should be able to respond to real-time
information from qubit measurement results to make dynamic sequence selection
within some small fraction of the relaxation time of the qubits. Finally, the
sequencer output should have sufficiently low noise not to limit gate fidelity.
Typical AWGs rely on a precalculated list of sequences played out in a
predetermined manner, or at best, loops of segments with simple jump responses
to an event trigger. Dynamic sequences that implement quantum algorithms require
more sophisticated control flow with conditional logic and branching in response
to measurement results. In addition to dynamic control flow, the sequencer
should also support code reuse through function calls and looping constructs to
keep memory requirements reasonable for long verification and validation
experiments such as randomized benchmarking~\cite{Knill2008} or gate set
tomography~\cite{Blume-Kohout2013}.
Figure~\ref{fig:dynamic-quantum-circuits} shows some elementary circuits that
require fast feedback or feedforward. A simple and immediately useful primitive
is the active reset of a qubit shown in
Fig.~\ref{fig:dynamic-quantum-circuits}(a). This can remove entropy from the
system by refreshing ancilla qubits or simply improve the duty cycle of an
experiment in comparison to waiting several multiples of $T_1$ for the qubit to
relax to the ground state. With appropriate control flow instructions, reset
with a maximum number of tries is naturally expressed as a looping construct
with conditional branching for breaking out of the loop. Indeed the entire
routine could be wrapped as a function call to be reused at the beginning of
every sequence. Entanglement generation by measurement, shown in
Fig.~\ref{fig:dynamic-quantum-circuits}(b) is another useful primitive for
resource state production that relies on feedforward. The circuit is also a
useful testbench as it is very similar to the circuits for syndrome measurement
in error correcting codes. Finally, Fig.~\ref{fig:dynamic-quantum-circuits}(c)
shows a more sophisticated use of feedforward. Implementing T gates will most
likely dominate the run time of an error corrected quantum circuit
\cite{Raussendorf2007a}. However, if the circuit can be probabilistic then the
average T gate depth can be reduced. These ``repeat-until-success''
circuits~\cite{Paetznick:2014} bring in one or more ancilla qubits and perform a
series of gates and interactions. Then, \emph{conditional} on the result of
measuring the ancilla either the desired gate or an identity operation has been
applied to the data qubit. In the identity case, the gate can be attempted again
by repeating the circuit with a refreshed ancilla.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/dynamic_circuits}
\caption{ \label{fig:dynamic-quantum-circuits} Example circuits with dynamic
steering: (a) active qubit reset; (b) deterministic entanglement creation with
feedforward; (c) ``repeat-until-success'' implementation of a non-Clifford gate
$V_3 = \frac{1+2iZ}{\sqrt{5}}$.}
\end{figure}
The APS2 was constructed to satisfy all these criteria by tailored design of the
sequencer. The sequencing engine processes an instruction set that provides full
arbitrary control flow and can play a new waveform every $6.66\units{ns}$ (two FPGA
clock cycles) to naturally and compactly represent any superconducting qubit
circuit with feedback or feedforward. Real-time state information is fed in via
high-speed serial links from the TDM. A cache controller intermediates access to
deep memory for longer experiments. We now discuss in detail some of the design
choices.
\subsection{Arbitrary Control Flow}
\label{sub:arbitrary-control-flow}
Arbitrary control flow can be fulfilled with three concepts: sequences, loops
(repetition) and conditional execution. We add to this set the concept of
subroutines because of their value in structured programming and memory re-use.
The gateware implements a control unit state machine with four additional
resources: a loadable incrementing program counter indicating the current
address in instruction memory; a loadable decrementing repeat counter; a stack
that holds the repeat and program counter values; and a comparison register that
holds the last comparison boolean result. The specific instruction set supported
is shown in Table~\ref{table:APS2-instruction-set}.
\begin{table*}
\begin{ruledtabular}
\begin{tabular}{r p{1.5\columnwidth} l}
\texttt{WAVEFORM} & dispatch instruction to waveform engine(s) & \rdelim\}{3}{25mm}[output] \\
\texttt{MARKER} & dispatch instruction to marker engine(s) \\
\texttt{MODULATOR} & dispatch instruction to I/Q modulator engine \\
\texttt{WAIT} & broadcast wait command to all output engines & \rdelim\}{2}{25mm}[synchronization] \\
\texttt{SYNC} & wait until all execution engine queues are empty \\
\texttt{LOAD\_REPEAT} & load value into the repeat register & \rdelim\}{9}{25mm}[flow control] \\
\texttt{REPEAT} & if repeat register is 0 continue; otherwise decrement repeat register and jump to given address \\
\texttt{LOAD\_CMP} & load comparison register with next value in serial link FIFO \\
\texttt{CMP} & compare given mask to comparison register with given binary comparison operation ($ =$, $\ne$, $<$, $>$) and store result in comparison result register \\
\texttt{GOTO} & jump to given instruction address (optionally conditionally) \\
\texttt{CALL} & push stack and jump to the given instruction address (optionally conditionally) \\
\texttt{RETURN} & pop stack and return to the call site \\
\texttt{PREFETCH} & prefetch an instruction cache line
\end{tabular}
\end{ruledtabular}
\caption{The APS2 instruction set which enables arbitrary control flow with waveform generation.}
\label{table:APS2-instruction-set}
\end{table*}
The \texttt{WAVEFORM}, \texttt{MARKER} and \texttt{MODULATOR} instructions
enable analog and digital output and are immediately dispatched to output
execution engines (see sections~\ref{subs:superscalar-architecture} and
\ref{subs:output-engines} below). The next two instructions, \texttt{WAIT} and
\texttt{SYNC}, enable synchronization both between output engines on the same
APS2 and between APS2 modules (see section~\ref{subs:superscalar-architecture}
below). The next set of instructions provides arbitrary control-flow:
\texttt{LOAD\_REPEAT} and \texttt{REPEAT} enable looping constructs;
\texttt{LOAD\_CMP} enables access to the real-time steering information fed from
the TDM; \texttt{CMP} and \texttt{GOTO} enable conditional branching;
\texttt{CALL} and \texttt{RETURN} allow for subroutines and recursion, enabling,
for example, nested loops without multiple loop counters. Finally, although not
directly related to control flow, \texttt{PREFETCH} gives hints to the cache
controller to avoid cache misses.
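To illustrate how these instructions compose, the sketch below encodes the bounded-retry active-reset loop of Fig.~\ref{fig:dynamic-quantum-circuits}(a) and runs it through a toy Python interpreter for a subset of the instruction set. The encoding, interpreter, and waveform name are hypothetical, not the APS2 gateware:

```python
def run(program, measurements):
    """Toy interpreter for a subset of the APS2 instruction set."""
    pc, repeat, cmp_value, cmp_flag = 0, 0, 0, False
    meas = iter(measurements)
    pulses = []                          # waveforms dispatched, in order
    while pc < len(program):
        op, *args = program[pc]
        if op == "WAVEFORM":             # dispatch to waveform engine
            pulses.append(args[0])
        elif op == "LOAD_REPEAT":        # load the repeat register
            repeat = args[0]
        elif op == "LOAD_CMP":           # next result from the serial link
            cmp_value = next(meas)
        elif op == "CMP":                # e.g. ("CMP", "==", 0)
            cmp_flag = (cmp_value == args[1]) if args[0] == "==" \
                       else (cmp_value != args[1])
        elif op == "GOTO":               # ("GOTO", target, conditional)
            target, conditional = args
            if not conditional or cmp_flag:
                pc = target
                continue
        elif op == "REPEAT":             # loop back while repeats remain
            if repeat != 0:
                repeat -= 1
                pc = args[0]
                continue
        pc += 1
    return pulses

# Active reset with at most three tries: measure; if already |0>,
# branch past the loop; otherwise play a bit flip and try again.
program = [
    ("LOAD_REPEAT", 2),      # 0: two retries after the first attempt
    ("LOAD_CMP",),           # 1: read the qubit-state bit
    ("CMP", "==", 0),        # 2: already in the ground state?
    ("GOTO", 6, True),       # 3: if so, exit the loop
    ("WAVEFORM", "X180"),    # 4: conditional bit flip
    ("REPEAT", 1),           # 5: loop back while tries remain
]                            # 6: fall through; sequence continues
```

With a measurement stream of all 1s the loop plays exactly three flips before falling through; a leading 0 skips the flip entirely.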
\subsubsection{Super-scalar Architecture}
\label{subs:superscalar-architecture}
Each APS2 module has multiple outputs driven by individual execution engines:
two analog channels and four marker channels. We use dispatch from a single
instruction stream to simplify synchronization of control flow across multiple
output engines (Fig.~\ref{fig:superscalar-architecture}). Since each execution
engine has its own internal FIFO buffer, this also allows the decoder/dispatcher
to greedily look ahead and process instructions (contingent on deterministic
control flow) and potentially dispatch to the execution units. The look ahead
strategy absorbs the pipelining latency due to an instruction counter address
jump after a \texttt{CALL}, \texttt{RETURN} or \texttt{REPEAT} instruction.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{figures/superscalar_architecture_blake}
\caption{ \label{fig:superscalar-architecture} The APS2 has a superscalar
architecture where a linear instruction stream is dispatched to multiple
execution engines which then execute in parallel. The program counter increments
by default sending a stream of instructions to the decoder/dispatcher.
Control-flow instructions can cause the decoder to jump the program counter and
flush the instruction stream coming from memory.}
\end{figure}
The superscalar approach incurs some additional complexity in converting a
serial instruction stream into potentially simultaneous operations in the
execution engines. The APS2 provides two mechanisms to solve this
synchronization task. The first mechanism is a \texttt{WAIT} instruction that
stalls the execution engines until a trigger signal arrives. While the engines
are stalled, the control flow unit/dispatcher continues to load instructions
into the output engine buffers. The execution engines respond synchronously to
trigger signals, so in this mechanism an external signal provides simultaneity
and a method to synchronize multiple modules. The second mechanism, the
\texttt{SYNC} instruction, acts as a fence or barrier to ensure that all
execution engines are at the same point by stalling processing of instructions
until all engines' execution queues are empty. This is also useful for
resynchronizing after a non-deterministic wait time, e.g., an uncertain delay
before a measurement result is valid.
\subsubsection{Output Engines}
\label{subs:output-engines}
Each analog and digital output channel is sequenced by a waveform or marker
``output engine'' that takes a more limited set of instructions.
\paragraph{Waveform Engine}
\label{par:Waveform Engine}
The waveform engines create analog waveforms from the following set of instructions:
\begin{enumerate}
\item \texttt{PLAY} play a waveform starting at a given address for a given count;
\item \texttt{WAIT} stall playback until a trigger arrives;
\item \texttt{SYNC} stall until the main decoder indicates all engines are synchronized;
\item \texttt{PREFETCH} fill a page of the waveform cache from deep memory - see section \ref{subs:waveform-cache} for further details.
\end{enumerate}
Typically, each \texttt{PLAY} instruction corresponds to a pulse implementing a
gate and so it is important that the waveform engine be fed and be able to
process instructions on a timescale commensurate with superconducting qubit
control pulses. The main decoder can dispatch a waveform instruction every
$3.33\units{ns}$ and the waveform engine can jump to a new pulse every
$6.66\units{ns}$. In addition, typical pulse sequences contain idle periods of
zero or constant output. Rather than inefficiently storing repeated values in
waveform memory, the instruction is ``play this waveform value for $n$
samples'' \cite{Morgan1987}. We refer to these as \texttt{Time-Amplitude (TA)}
pairs, and any waveform command can be marked as such.
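TA-pair compression is essentially run-length encoding of constant stretches of a waveform. A minimal Python sketch follows; the entry format and the minimum-run threshold are illustrative, not the APS2 encoding:

```python
def to_ta_pairs(samples, min_run=8):
    """Run-length encode constant stretches as ("TA", value, count)
    entries; short runs are kept as literal ("WF", data) waveforms."""
    out, literal, i = [], [], 0
    while i < len(samples):
        j = i
        while j < len(samples) and samples[j] == samples[i]:
            j += 1                           # extend the constant run
        if j - i >= min_run:
            if literal:                      # flush pending literal data
                out.append(("WF", literal))
                literal = []
            out.append(("TA", samples[i], j - i))
        else:
            literal.extend(samples[i:j])     # too short: store verbatim
        i = j
    if literal:
        out.append(("WF", literal))
    return out

# A short pulse flanked by long idles compresses to two TA entries.
pulse = [0.0] * 100 + [0.3, 0.7, 1.0, 0.7, 0.3] + [0.0] * 100
compressed = to_ta_pairs(pulse)
```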
\paragraph{Marker Engine}
The marker engines create digital outputs from the following set of instructions:
\begin{enumerate}
\item \texttt{PLAY} play marker with a given state for a given count;
\item \texttt{WAIT} stall playback until a trigger arrives;
\item \texttt{SYNC} stall until the main decoder indicates all engines are synchronized.
\end{enumerate}
The natural sample rate for the marker \texttt{PLAY} commands is in terms of
the sequencer FPGA clock which runs at a quarter of the analog output rate. To
provide single sample resolution we route the marker outputs through dedicated
serializer hardware (Xilinx OSERDESE2). For all but the last word, the 4 marker
samples are simply copies of the desired output state. However, the last word is
programmable as part of the \texttt{PLAY} instruction to provide full
$833\units{ps}$ resolution of the marker rising/falling edge.
\subsubsection{Modulation Engine}
\label{subs:modulation-engine}
An APS2 module is typically used to drive the I and Q ports of an I/Q mixer to
modulate the amplitude and phase of a microwave carrier, thus producing the
control or readout signal. To improve the on/off ratio, the carrier is typically
detuned from the qubit or cavity frequency and the I/Q waveforms modulated at
the difference frequency with an appropriate phase shift to single-sideband
(SSB) modulate the carrier up or down to the qubit/cavity frequency. Qubit
control is defined in a rotating frame at the qubit frequency so the phase of
the modulation has to track the detuning frequency. Z-rotations are implemented
as frame updates that shift the phase of all subsequent pulses \cite{Knill2000}.
For deterministic sequences, the modulation and frame changes can be
pre-calculated and stored as new waveforms in the pulse library. However, for
conditional execution or for experiments with non-deterministic delays, this is
not possible and the modulation and frame changes must be done in real-time.
To support both SSB modulation and dynamic frame updates, the APS2 includes a
modulation engine that phase modulates the waveform output and can be
controlled via sequence instructions. The modulation engine contains multiple
NCOs to enable merging multiple ``logical'' channels at different frequencies
onto the same physical channel pair. For example, to control two qubits, two
NCOs can be set to the detuning frequencies of each qubit, and control pulses
can be sent to either qubit with the appropriate NCO selection, while the
hardware tracks the other qubit's phase evolution. The phase applied to each
pulse is the sum of the accumulated phase increment (for frequency detuning), a
fixed phase offset (e.g. for setting an $X$ or $Y$ pulse), and an accumulated
frame (to implement $Z$-rotations). The modulation engine supports the following
instructions
\begin{enumerate}
\item \texttt{WAIT} stall until a trigger is received;
\item \texttt{SYNC} stall until the main decoder indicates all engines are synchronized;
\item \texttt{RESET\_PHASE} reset the phase and frame of the selected NCO(s);
\item \texttt{SET\_PHASE\_OFFSET} set the phase offset of the selected NCO(s);
\item \texttt{SET\_PHASE\_INCREMENT} set the phase increment of the selected NCO(s);
\item \texttt{UPDATE\_FRAME} update the frame of the selected NCO(s);
\item \texttt{MODULATE} select a NCO for a given number of samples.
\end{enumerate}
All NCO phase commands are held until the next instruction boundary, which
is the end of the currently playing \texttt{MODULATE} command or a
synchronization signal being received. The commands are held to allow them to
occur with effectively no delay: for example, the phase should be reset when the
trigger arrives; or a $Z$ rotation should happen instantaneously between two
pulses.
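A Python sketch of this phase bookkeeping follows. The class and method names are hypothetical, but the per-pulse phase is the sum described above: accumulated increment, static offset, and accumulated frame:

```python
import numpy as np

class NCO:
    """Software model of one numerically controlled oscillator."""
    def __init__(self, f_detune, fs):
        self.phase_inc = 2 * np.pi * f_detune / fs  # per-sample increment
        self.accum = 0.0    # accumulated detuning phase
        self.frame = 0.0    # accumulated Z-rotation frame
        self.offset = 0.0   # static offset, e.g. pi/2 for a Y pulse

    def reset_phase(self):
        self.accum = 0.0
        self.frame = 0.0

    def update_frame(self, dphi):
        """Virtual Z rotation: shift the phase of all later pulses."""
        self.frame += dphi

    def modulate(self, iq_samples):
        """Rotate complex waveform samples by the current total phase,
        advancing the accumulator one increment per sample."""
        n = np.arange(len(iq_samples))
        phase = self.accum + self.phase_inc * n + self.offset + self.frame
        self.accum += self.phase_inc * len(iq_samples)
        return iq_samples * np.exp(1j * phase)

nco = NCO(f_detune=100e6, fs=1.2e9)
pulse = np.ones(24, dtype=complex)   # placeholder envelope
first = nco.modulate(pulse)
nco.update_frame(np.pi)              # Z(pi) between the two pulses
second = nco.modulate(pulse)
```

With this detuning the 24-sample pulse spans an integer number of NCO periods, so the frame update alone inverts the second pulse relative to the first.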
In addition, I/Q mixers have imperfections that can be compensated for by
appropriate adjustments to the waveforms. In particular, carrier leakage may be
minimized by adjusting DC offsets, and amplitude/phase imbalance compensated with a
2x2 correction matrix applied to the I/Q pairs. The APS2 includes correction
matrix and offset blocks after the modulator to effect these adjustments, as
shown in Fig.~\ref{fig:modulator}.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{figures/modulator}
\caption{ \label{fig:modulator} Block diagram of the APS2 modulation
capabilities. The modulation engine controls the NCO phase accumulators and
selects the desired NCO on a pulse-by-pulse basis. The complex waveform data is
rotated by the selected NCO's phase and subsequently processed by an arbitrary
2x2 matrix for amplitude and phase imbalance correction, channel scaling, and
offset. To save FPGA resources and reduce latency, the scaling is combined with
the mixer correction.}
\end{figure}
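A common software model of this correction stage is sketched below. Parameterizations of the 2x2 matrix vary; the one here, scaling I and shearing Q by the phase skew, and the parameter values are illustrative only:

```python
import numpy as np

def correct_iq(i, q, amp_imbalance, phase_skew, offsets):
    """Apply a 2x2 amplitude/phase imbalance correction to I/Q sample
    streams, then add DC offsets to null carrier leakage."""
    c = np.array([[amp_imbalance, 0.0],
                  [-np.tan(phase_skew), 1.0 / np.cos(phase_skew)]])
    iq = c @ np.vstack([i, q])
    return iq[0] + offsets[0], iq[1] + offsets[1]

# A detuned I/Q tone with illustrative imbalance and offset values.
t = np.arange(100) / 1.2e9
i = np.cos(2 * np.pi * 50e6 * t)
q = np.sin(2 * np.pi * 50e6 * t)
ic, qc = correct_iq(i, q, amp_imbalance=1.02, phase_skew=0.03,
                    offsets=(0.001, -0.002))
```

With unity amplitude, zero skew, and zero offsets the correction reduces to the identity, which makes a convenient sanity check.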
\subsection{Caching Strategies}
\label{sub:caching-strategies}
Some qubit experiments, e.g. calibration and characterization, require long
sequences and/or many waveform variants. Supporting such sequences requires an
AWG with deep memory. However, AWG sequencers immediately run into a well-known
depth/speed trade-off for memory: SDRAM with many gigabytes of memory has random
access times of 100s of nanoseconds whereas SRAM, or on-board FPGA block RAM,
can have access times of only a few clock cycles but are typically limited to
only a few megabytes. This memory dichotomy drives some of the sequencing
characteristics of commercial AWGs. For example, the Tektronix 5014B requires
$400\units{ns}$ to switch sequence segments and the Keysight M8190A requires a
minimum sequence segment length of $1.37\units{ms}$. These delay times are
incompatible with the typical gate times of 10s of nanoseconds for
superconducting qubits. However, it is possible to borrow from CPU design and
hide this latency by adding instruction and waveform caches to the memory
interface.
The APS2 has $1\units{GB}$ of DDR3 SDRAM to dynamically allocate to a
combination of sequence instructions and waveforms. This corresponds to up to
128 million sequence instructions or 256 million complex waveform points,
sufficient for most current experiments. The sequencer and waveform engines
interface with this deep memory through a cache controller with access to FPGA
block RAM. If the requested data is in the cache, then it can be returned
deterministically within a few clock cycles, whereas if there is a cache miss
the sequencer stalls while the data is fetched from SDRAM. Cache misses during a
sequence are generally catastrophic given superconducting qubit coherence times.
However, with heuristics and \texttt{PREFETCH} hints from the compiler, the
cache controller can ensure data has been preloaded into the block RAM before it
is requested and avoid any cache stalls.
\subsubsection{Instruction Cache}
\label{subs:instruction-cache}
The APS2 instruction cache is split into two parts to support two different
heuristics about how sequences advance through the instruction stream---see
Fig.~\ref{fig:cache-architectures}(a-b). We chose cache line sizes of 128
instructions or 1\,kB, which is significantly larger than those used in a typical
CPU (Intel/AMD processors typically have cache lines of only 64 bytes) but
reflects the lack of a nested cache hierarchy and the more typical linear
playback of quantum gate sequences. The first cache is a circular buffer
centered around the current instruction address that supports the notion that
the most likely direction is forward motion through the instructions, with
potential local jumps to recently played addresses when looping. The controller
greedily prefetches additional cache lines ahead of the current address but
leaves a buffer of previously played cache lines for looping. Function calls, or
subroutines, require random access so the second instruction cache is fully
associative. The associative cache lines are filled in round-robin fashion
with explicit \texttt{PREFETCH} instructions. This first-in-first-out
replacement strategy for the associative cache ignores any information about
cache line usage. Since the cache controller tracks access, a simple extension
would be a Least Recently Used (LRU) or pseudo-LRU algorithm. Explicit
prefetching also places a significant burden on the compiler to insert the
\texttt{PREFETCH} instructions
and group subroutines into cache lines. However, given the severe penalty of a
cache miss it is difficult to envisage a hardware-implemented cache controller
that can alleviate that burden.
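A minimal model of the hit test for the circular sequential cache is sketched below. The line size matches the text, but the resident line count and the forward/backward window split are illustrative, not the gateware's values:

```python
LINE = 128       # instructions per cache line (1 kB), as in the text
NUM_LINES = 32   # hypothetical number of resident lines
BEHIND = 8       # hypothetical lines kept for local backward jumps

def sequential_cache_hit(addr, current_addr):
    """An address hits if its line lies within the prefetch window
    around the currently playing line: up to BEHIND lines back for
    loops, and the remaining lines greedily prefetched ahead."""
    line, cur = addr // LINE, current_addr // LINE
    ahead = NUM_LINES - BEHIND
    return -BEHIND <= line - cur < ahead
```

Forward motion and short backward loops hit; a jump far behind the current address misses and must go to the associative subroutine cache or stall.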
\begin{figure*}
\includegraphics[width=0.95\textwidth]{figures/cache_architectures}
\caption{ \label{fig:cache-architectures} Instruction and waveform cache
architectures. The instruction cache has two parts: (a) a circular sequential
instruction cache that supports continuous playback by prefetching cache lines
(green) following the cache line containing the currently playing address (red)
up to a local jump buffer of previously played lines (blue); (b) a fully
associative subroutine cache that supports jumps to arbitrary addresses and is
explicitly filled by \texttt{PREFETCH} instructions. (c) The waveform cache
supports either single usage of the full 128 ksamples or a ping-pong mode where
while one half is active the other half is filled by a waveform engine
\texttt{PREFETCH} command.}
\end{figure*}
\subsubsection{Waveform Cache}
\label{subs:waveform-cache}
In use cases we have examined, waveform access does not have the nearly linear
structure of sequence instructions. Rather, a sequence tends to require random
access to a small library of short waveforms, where that library may change over
time due to calibration or feedback signals, or the desire to scan a range of
waveforms. The APS2 has a waveform cache of 128 ksamples to support fast access
to a large waveform library. For scenarios demanding that the library change
over time, the cache is split into two pages of 64 ksamples---see
Fig.~\ref{fig:cache-architectures}(c). The cache is composed of dual-port block
RAM and so a sequence can be actively playing waveforms from one page while the
second page is filled from SDRAM. The two pages' roles can then alternate
supporting total waveform library sizes up to the limit of the SDRAM. For this
mode of operation we do not expect to change the waveform library within a
single sequence. Filling an entire waveform cache page takes $\sim180\units{\mu
s}$, meaning that at typical repetition rates of 10s of kHz we can exchange the
waveform library every few sequences.
\section{Synthesizing and Distributing Steering Information}
\label{sec:TDM}
As we move beyond simple single qubit feedback circuits we need to synthesize
steering decisions from multiple qubit measurement results, and then communicate
the steering decision to multiple sequencers. We have designed a dedicated
hardware module, the Trigger Distribution Module (TDM), to take in up to eight qubit
state decisions and send steering information to up to nine pulse
sequencers---see Fig.~\ref{fig:tdm-blocks} for a block diagram.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{figures/tdm_blocks}
\caption{ \label{fig:tdm-blocks} Block diagram of the Trigger Distribution
Module (TDM) functionality. 8 SMA inputs to programmable comparators send qubit
state information to the control logic. High-speed serial connections over SATA
cables provide input from other TDM modules and output to APS2 and TDM modules.
The TDM can send a system-wide trigger for intra-crate synchronization.}
\end{figure}
There are eight SMA inputs that feed variable comparators for reading in qubit
measurement results from \texttt{QDSP}, with one input used as a data valid
strobe. The TDM can communicate to all the APS2 modules in an enclosure via a
high speed serial connection over SATA cables. The star distribution network
also allows us to use the distribution module for synchronization. A reserved
symbol acts as a trigger that can be broadcast to all APS2 modules in an
enclosure for synchronous multi-module output. There is one additional SATA
serial link that can be used for inter-crate communications with other TDMs
for future larger circuits that cannot be controlled with a single crate.
The baseline TDM gateware \texttt{APS2-TDM} (\url{github.com/BBN-Q/APS2-TDM})
currently broadcasts the measurement results to all APS2 modules. As a result,
every APS2 must allow a sequence branch for each result, even when the
controlled qubit is not affected by that particular measurement. A more flexible
decision logic and sequence steering will become critical in larger circuits.
Since all measurement results flow through the TDM, it is natural to consider it
orchestrating the entire experiment. For example, in error correction, syndrome
decoding could be implemented by the TDM and the required qubit corrections sent
to the relevant APS2s only. We see the TDM as a testbed for building out a more
scalable qubit control platform with a hierarchy of controllers, where the TDM
assumes the role of routing measurement results and steering the computation.
\section{Latency}
With all the pieces in place we can examine the latency budget of a closed
feedback loop and highlight potential areas for improvement. A detailed listing
is provided in Table~\ref{table:latency}. The total latency from the end of a
measurement pulse to the next conditional pulse coming out of the APS2 is
$\approx430\units{ns}$. Our test setup incurs an additional
$\approx110\units{ns}$ of latency from cabling to/from the qubit device in the
dilution refrigerator, as well as analog filtering. The total latency is
comparable to 1\% of the qubit relaxation time and to our measurement time, and is
not the limiting factor in our circuit implementation fidelities.
\begin{table}[htb]
\begin{ruledtabular}
\begin{tabular}{ l | l }
Step & Latency (ns) \\
\hline
ADC capture & 32 \\
digital signal processing & 56 (14 clocks) \\
X6 to TDM interface & 10 (1 clock) \\
TDM distribution logic & 10 (1 clock) \\
TDM to APS2 module interface & 210 \\
APS2 address jump & 53 (16 clocks) \\
APS2 waveform signal processing & 30 (9 clocks) \\
DAC output & 29 \\
\hline
Total & 428 \\
\end{tabular}
\end{ruledtabular}
\caption{\label{table:latency}Latency budget for closed loop qubit control.}
\end{table}
However, there are a few areas amenable to improvement. The APS2 design
prioritized instruction throughput and waveform cache size. This required
significant buffering and pipelining. Optimizing instead for latency could
tradeoff those capabilities for reduced latency for an APS2 address jump. The
serial link between the TDM and APS2 is slow due to FIFOs that manage data
transfer through asynchronous clock domains. However, synchronizing the TDM and
APS2 to a common $10\units{MHz}$ reference creates a stable phase relationship
between clocks domains which would allow these FIFOs to be removed and save
$\approx 100\units{ns}$. Modest benefit could be obtained by integrating the
readout system into the TDM, saving two data transfer steps.
While not listed in the table, the delays from cabling and analog filtering are
also non-negligible. Since we digitize data at 1\units{GS/s}, little analog
low-pass filtering is necessary after mixing down to the IF, except to prevent
overloading amplifiers or the ADC. Moving the hardware physically closer to the
top of the dilution refrigerator would save $\approx20\units{ns}$. The reduction
in cable delays is one potential benefit to cryogenic control systems, but is
only a fraction of the total latency budget.
\section{Feedback and feedforward in circuit QED}
The integration of \texttt{QDSP} systems and APS2/TDM modules into a circuit QED
apparatus enables a variety of qubit experiments requiring feedback or
feedforward. \emph{Feedback} indicates that measurements modify control of the
measured qubit, while in \emph{feedforward} the conditional control acts on
different qubits. Here we present some examples of simultaneous dynamic control
of up to three qubits. We emphasize that the hardware system was designed for
flexible multi-qubit experiments that allows for programming different
experiments in software, with minimal or no hardware changes.
The quantum processor used here, first introduced in
Ref.~\onlinecite{Riste2017}, is a five-qubit superconducting device housed in a
dilution refrigerator at $\approx10\units{mK}$. The wiring inside the refrigerator is
very similar to the reference above, with the exception of the addition of a
Josephson parametric amplifier (JPA)~\cite{Hatridge2011} to boost the readout
fidelity of one qubit. The control flow of qubit instructions, previously a
pre-orchestrated sequence of gates and measurements, is now steered in real time
by a TDM. This module receives the digital qubit measurements from \texttt{QDSP} digital
outputs, and distributes the relevant data to the APS2 units which then
conditionally execute sequences.
\subsection{Fast qubit initialization}
As a first test of our control hardware, we start with the simplest closed-loop
feedback scheme --- fast qubit reset~\cite{Riste2012, CampagneIbarcq2013}. A
reliable way to initialize qubit registers is one of the prerequisites for
quantum computation~\cite{DiVincenzo2000}. Conventionally, initialization of
superconducting qubits is accomplished by passive thermalization of the qubit to
the near zero-temperature environment. However, with a characteristic relaxation
time $T_1 = 40\units{\mu s}$ (see Table~\ref{table:qubits} for relaxation time
details), the necessary waiting constitutes the majority of the experiment wall
clock time. Furthermore, passive initialization slows re-use of ancilla qubits
during a computation, a feature that would relieve the need for a continuous
stream of fresh qubits in a fault-tolerant system~\cite{BenOr2013}.
Feedback-based reset aims to remove entropy on demand using measurement and a
conditional bit-flip gate (Fig.~\ref{fig:fast-reset} inset)~\cite{Riste2012}.
This operation ideally resets the qubit state to $\ket{0}$ if the measurement
result is $1$, or leaves it unchanged if $0$, giving an unconditional output
state $\ket{0}$. The effect of reset is evident when comparing the
initialization success probability to that of passive initialization, i.e., no
reset (Fig.~\ref{fig:fast-reset}). As the initialization time is decreased to $T_1$ or
lower, passive initialization becomes increasingly faulty, while active reset is
largely unaffected.
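A toy model of this comparison: starting from the excited state, passive initialization succeeds with probability $1 - e^{-t/T_1}$, while a single reset round is limited mainly by readout and gate fidelity. The fidelity numbers below are illustrative, not fitted to the data:

```python
import numpy as np

T1 = 40e-6   # qubit relaxation time quoted in the text

def passive_success(t_init, p1_start=1.0):
    """P(|0>) after waiting t_init, starting from the excited state."""
    return 1.0 - p1_start * np.exp(-t_init / T1)

def active_success(f_meas=0.95, f_flip=0.99):
    """One measure-and-conditionally-flip round, limited by readout and
    gate fidelity (illustrative numbers)."""
    return f_meas * f_flip

waits = np.array([0.25, 0.5, 1.0, 2.0, 5.0]) * T1
passive = passive_success(waits)
```

As in the figure, the passive curve degrades rapidly once the wait drops to $T_1$ or below, while the active value is independent of the initialization time.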
\begin{figure}
\includegraphics[width=\columnwidth]{figures/fast-reset}
\caption{ \label{fig:fast-reset} Fast qubit initialization. A simple experiment
consisting of a single $X$ gate is repeated with variable initialization time.
Feedback (green) is used to reset the qubit in the ground state $\ket{0}$ faster
than by waiting for its thermal relaxation (blue). The success probability is
defined as the probability to find the qubit in $\ket{0}$ at the end of each
cycle. Inset: gate sequence per cycle, with a dashed box indicating the feedback
loop. Similar to Ref.~\onlinecite{Riste2012}.}
\end{figure}
We extend this protocol to reset a register of three qubits simultaneously. This
is accomplished with no additional hardware beyond that already required for the
open-loop control of the same number of qubits. We exploit frequency
multiplexing to combine two readout signals, so that all signal processing can
be accomplished with the two analog inputs of a single X6-1000M. The control
flow simply replicates the conditional bit-flip logic across the three qubits
$\ket{A}, \ket{B}, \ket{C}$ (Fig.~\ref{fig:multi-reset}a). We assess the
performance of the three-qubit reset by measuring the success probabilities for
resetting each individual qubit starting from the eight computational input states
(Fig.~\ref{fig:multi-reset}b). The deviation in success probabilities is largely due
to the difference in readout fidelities (Table~\ref{table:qubits}), as only
qubit $\ket{C}$ is equipped with a JPA.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/multi-reset}
\caption{ \label{fig:multi-reset} Simultaneous reset of three qubits. (a)
Frequency multiplexed signals are used to independently measure three qubits in
a single \texttt{QDSP} card. A second round of reset can be concatenated to improve
performance. (b) Success probability to reset each qubit measured after one (light
bars) and two (dark bars) rounds. Only one of the readout lines (qubit $C$) is
equipped with a superconducting parametric amplifier~\cite{Hatridge2011},
granting higher readout and reset fidelities.}
\end{figure}
\subsection{Measurement-based $S$ and $T$ gates}
Our hardware is also readily applicable to feedforward scenarios, where the
result of a measurement conditions the control of different qubits. A first
example is the realization of measurement-based gates. In an error-corrected
circuit, gates on a logical qubit can be made fault-tolerant by applying them
transversally to all the underlying physical qubits. However, for any given
code, the gates of a universal set cannot all be implemented
transversally~\cite{Xie2008}. For instance, in the surface code, all Pauli
operations $X$, $Y$, $Z$ are transversal, but partial rotations such as
$Z(\pi/2)$ are not. To fill this gap, fault-tolerant gates can be constructed
with interactions with ancilla qubits and control conditioned on measurement
results~\cite{Bravyi2005}.
Here we demonstrate the basic principle of measurement-based gates, implementing
partial $Z$ rotations on a physical qubit, using an ancilla and feedforward
operations. The initial state of the ancilla, which can be prepared offline to
the computation, determines the rotation angle $\theta$. Typical gates are
denoted with $S$ ($\theta = \pi/4$) and $T$ ($\theta = \pi/8$). An $S$ gate can
be decomposed into an ancilla measurement and a conditional $Z(\pi)$
gate~\cite{Bravyi2005}, which is transversal in the surface code
(Fig.~\ref{fig:Zgates}a). Starting with the ancilla in a superposition state,
$\ket{\psi_0} = (\ket{0} + \ket{1})/\sqrt{2}$, the result of the ancilla
measurement determines whether the final state approximates the desired
$S\ket{\psi_0} = (\ket{0} + i\ket{1})/\sqrt{2}$ (Fig.~\ref{fig:Zgates}d), or the
$\pi$-shifted $ZS\ket{\psi_0}$ (e). In the latter case, a corrective $Z$, applied
as a frame update (see Sec.~\ref{subs:modulation-engine}), gives the intended
state $S\ket{\psi_0}$ deterministically (f). The reduced coherence, indicated by
the length of the arrow, is mainly due to the measurement time ($0.9\units{\mu s}$), with
the addition of $\sim 0.54\units{\mu s}$ decision latency in (f).
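The circuit logic of Fig.~\ref{fig:Zgates}a can be checked with a small state-vector calculation. The following numpy sketch is purely illustrative: the specific gate convention (a CNOT with the data qubit as control and the ancilla as target, ancilla prepared with the phase $\theta$) is our assumption for concreteness and may differ from the experimental pulse sequence.

```python
import numpy as np

# Single-qubit states and gates
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)          # data state |x>
Z = np.diag([1, -1]).astype(complex)
S = np.diag([1, 1j]).astype(complex)

def measure_based_s(psi_data, theta=np.pi / 2, rng=np.random.default_rng(0)):
    """One round of a measurement-based Z(theta) gate: the ancilla is
    prepared in (|0> + e^{i theta}|1>)/sqrt(2), entangled with the
    data qubit by a CNOT (data = control, ancilla = target -- a
    convention chosen here for illustration), the ancilla is measured
    in Z, and a corrective Z is applied to the data on outcome 1."""
    anc = (ket0 + np.exp(1j * theta) * ket1) / np.sqrt(2)
    state = np.kron(psi_data, anc)          # ordering: data (x) ancilla
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    state = cnot @ state
    # Probability of ancilla outcome 1 (amplitudes with ancilla index 1)
    p1 = abs(state[1]) ** 2 + abs(state[3]) ** 2
    outcome = int(rng.random() < p1)
    data = state[[outcome, 2 + outcome]]    # project ancilla, keep data qubit
    data = data / np.linalg.norm(data)
    if outcome == 1:                        # feedforward: conditional Z
        data = Z @ data
    return data, outcome

out, a = measure_based_s(plus)
overlap = abs(np.vdot(S @ plus, out))       # 1 up to numerical error
```

For either measurement outcome, the returned state agrees with $S\ket{x}$ up to a global phase, which is the deterministic behavior shown in panel (f).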
\begin{figure}
\includegraphics[width=\columnwidth]{figures/S_T_gates}
\caption{\label{fig:Zgates} Measurement-based $S$ and $T$ gates. Gate sequence
to implement $S$ (a) and $T$ (b, c) gates with an ancilla and feedforward. To
construct (c), we replace the $S$ gate in (b) with the circuit from (a). (d-i)
Projected state tomography on the $x$-$y$ plane for initial state $\ket{x} =
(\ket{0} + \ket{1})/\sqrt{2}$ and applied $S$ (d-f) and $T$ (g-i). The data are
postselected on the ancilla measurement result $a=0$ (d, g), $a=1$ (e, h), or
not postselected when feedforward is activated (f, i). The $T$ gate can be made
fully transversal by conditionally implementing the $S$ correction as another
feedforward subroutine (c, and pink arrow in i).}
\end{figure}
Similarly, a $T$ gate can be implemented with a different ancilla preparation
and a conditional $S$ gate (Fig.~\ref{fig:Zgates}b). However, as seen before,
the $S$ gate cannot be applied transversally, so it is in turn decomposed into
the feedforward sequence above. The result is a nested feedforward loop with up
to two ancilla measurements and conditional sequences (Fig.~\ref{fig:Zgates}c).
We reuse the same ancilla in the second round, taking advantage of the first
measurement to initialize it in a known state. By using the CLEAR
protocol~\cite{McClure2016}, we reduce the latency before we can reuse the
ancilla (Fig.~\ref{fig:Zgates}g-i).
\subsection{Entanglement generation through measurement}
With three qubits, feedforward control can be used to generate entanglement by
measurement. Two qubits separately interact with a third ancilla qubit to
implement a parity measurement of the first two qubits (Fig.~\ref{fig:ebm}a).
With the first two qubits starting in an equal superposition state, the parity
measurement projects them onto either an even or odd Bell state with the ancilla
measurement result containing the information about which
(Fig.~\ref{fig:ebm}b-c). This parity measurement scenario, with ancillas and
feedforward, is also relevant for syndrome extraction in quantum error
correction schemes~\cite{Bravyi1998, Mermin2007} and has been experimentally
demonstrated in post-selected form~\cite{Saira14, Chow14}. With our hardware we
can go one step further, and deterministically create the odd state by
converting the projected even state into an odd one by a conditional bit-flip on
one of the data qubits (Fig.~\ref{fig:ebm}d). This deterministic protocol has
also been realized in Ref.~\onlinecite{Riste2013}, but with the ancilla qubit
replaced by a cavity mode.
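The deterministic protocol can likewise be checked with a small state-vector sketch. The code below is illustrative only (not the experimental control sequence); the two-CNOT parity circuit is the textbook construction and stands in for the actual data-ancilla interactions.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
odd_bell = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)

def cnot(n, control, target):
    """CNOT on an n-qubit register; qubit 0 is the leftmost kron factor."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[sum(b << (n - 1 - q) for q, b in enumerate(bits)), i] = 1
    return U

def deterministic_odd_bell(rng):
    """Parity measurement of data qubits 0 and 1 through ancilla 2,
    followed by a conditional X on qubit 0 when the even outcome is
    observed (feedforward), yielding the odd Bell state every shot."""
    state = np.kron(np.kron(plus, plus), ket0)   # |+>|+>|0>
    state = cnot(3, 0, 2) @ state                # write parity onto ancilla
    state = cnot(3, 1, 2) @ state
    p_odd = sum(abs(state[i]) ** 2 for i in range(8) if i & 1)
    outcome = int(rng.random() < p_odd)          # ancilla Z measurement
    data = np.array([state[4 * x1 + 2 * x2 + outcome]
                     for x1 in (0, 1) for x2 in (0, 1)])
    data = data / np.linalg.norm(data)
    if outcome == 0:                             # even outcome: flip qubit 0
        data = data[[2, 3, 0, 1]]
    return data, outcome

data, outcome = deterministic_odd_bell(np.random.default_rng(1))
```

Without the conditional flip, each shot yields the even or odd Bell state with probability $1/2$ (panels b, c); with it, every shot ends in the odd state (panel d).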
\begin{figure}
\includegraphics[width=\columnwidth]{figures/ebm}
\caption{ \label{fig:ebm} State tomography of entanglement by measurement and
feedforward. The post-selected result of a two-qubit parity measurement (a)
determines whether the qubits are projected onto an even (b) or odd (c)
entangled state~\cite{Chow14, Saira14}. Programming feedforward control that
conditionally switches the parity from even to odd generates the target
entangled state deterministically~\cite{Riste2013} (d). State tomograms shown in
the Pauli basis. Opaque (transparent) bars indicate the measured (ideal)
expectation values for the two-qubit Pauli operators.}
\end{figure}
\begin{table*}
\begin{ruledtabular}
\begin{tabular}{ l | l | l | l }
 & Fig.~\ref{fig:fast-reset} & Fig.~\ref{fig:multi-reset} & Fig.~\ref{fig:ebm} \\
\hline
Measurement time ($\mu$s) & 2.2 & 4.5 & 2.2 \\
Characteristic relaxation time $T_1$ ($\mu$s) & $\sim20$ & $\sim40$ ($A$), 40 ($B$), 20 ($C$) & $\sim20$\\
Measurement assignment fidelity & 0.95 & 0.70 ($A$), 0.81 ($B$), 0.94 ($C$) & 0.95 \\
\end{tabular}
\end{ruledtabular}
\caption{\label{table:qubits} Relevant measurement and qubit parameters for the experiments in Figs.~\ref{fig:fast-reset}--\ref{fig:ebm}.}
\end{table*}
\section{Conclusion}
The APS2 and \texttt{QDSP} platforms are a complete hardware solution for
dynamic quantum computing systems. They achieve this with tailored gateware and
hardware that enable flexible, low-latency manipulation, thus allowing users to
program generic quantum circuits without hardware reconfiguration. We have
proved this hardware \emph{in situ} with a superconducting quantum processor,
showing a variety of novel dynamic circuits utilizing feedback and feedforward.
To further improve this platform we intend to integrate control and readout into
a unified hardware system, investigate improvements to the APS2 analog output
chain and generalize system synchronization.
Upconversion systems generically require a multitude of components and suffer
from various mixer imperfections, leading to instability and a spectrum polluted
by mixer product spurs. Future hardware revisions may solve these issues by
moving to faster RF DACs that can directly generate microwave tones with a
cleaner spectrum~\cite{Glascott-Jones2014}. Direct RF output also provides greater
frequency agility, enabling channel re-use for both control and measurement.
New DACs with sampling rates from 4--6$\units{GS/s}$ support output modes that
direct power into higher Nyquist zones, removing pressure for ultra-high clock
speeds. Future FPGAs may include many on-chip RF DACs~\cite{Erdmann2017},
potentially drastically increasing channel densities in control systems.
The typical way to achieve system synchronization is by building trigger fanout
trees. This strategy becomes increasingly cumbersome and fragile as system sizes
grow. A more scalable approach consists of sharing frequency and time between
all devices, so that all modules in the system have a synchronous copy of a
global counter. To achieve this, future hardware revisions may incorporate a
time distribution protocol such as White Rabbit~\cite{Serrano2009}. Sharing time
changes the synchronization paradigm from ``go on trigger'' to ``go at time $t$''.
Finally, we are exploring methods to combine real-time computation with dynamic
control-flow on the individual APSs. For example, a controller of a system of
logical qubits must combine information from a logical decoder with program
control-flow. A softcore CPU running on the TDM would enable rapid development
of real-time infrastructure.
\begin{acknowledgments}
Schematic capture and PC board layout for the APS2 and TDM were done by Ray
Zeller and Chris Johnson of \emph{ZRL Inc.}, Bristol, RI. Nick Materise
developed an initial prototype of the \texttt{QDSP} system in VHDL. This was
converted into a Simulink model and tested with MATLAB HDL Coder before finally
being converted back into pure VHDL. The data analysis for the experimental
section was performed using code written in Julia~\cite{Bezanson2014}, and the
figures were made with Seaborn~\cite{seaborn_v0.7.1} and
matplotlib~\cite{Hunter2007}. We used Scipy~\cite{scipy} to construct the filter
coefficients for the \texttt{QDSP} system. The authors would like to thank
George A. Keefe and Mary B. Rothwell for device fabrication, and Nissim Ofek for
discussions about AWG instruction sets. This research was funded by the Office
of the Director of National Intelligence (ODNI), Intelligence Advanced Research
Projects Activity (IARPA), through the Army Research Office contract
No.~W911NF-10-1-0324 and No.~W911NF-14-1-0114. All statements of fact, opinion
or conclusions contained herein are those of the authors and should not be
construed as representing the official views or policies of IARPA, the ODNI, or
the U.S. Government.
\end{acknowledgments}
\section{Introduction}
\noindent{\em Graph games and multi-objectives.}
Two-player games on graphs are central in many applications
of computer science. For example, in the synthesis problem, implementations are
obtained from winning strategies in games with a qualitative objective such as
$\omega$-regular specifications~\cite{RW87,PnueliR89,AbadiLW89}.
In these applications, the games have a qualitative (boolean) objective
that determines which player wins.
On the other hand, games with quantitative objective which are natural models
in economics (where players have to optimize a real-valued payoff)
have also been studied in the context of automated design~\cite{Sha53,Condon92,ZP96}.
In the recent past, there has been considerable interest in the design of
reactive systems that work in resource-constrained environments
(such as embedded systems).
The specifications for such reactive systems are quantitative, and these
give rise to quantitative games.
In most system design problems, there is no unique objective to be optimized,
but multiple, potentially conflicting objectives.
For example, in designing a computer system, one is interested not
only in minimizing the average response time but also the average power
consumption.
In this work we study such multi-objective generalizations of the
two most widely used quantitative objectives in games, namely,
\emph{mean-payoff} and
\emph{energy} objectives~\cite{EM79,ZP96,CAHS03,BFLMS08}.
\smallskip\noindent{\em Generalized mean-payoff games.}
A {\em generalized mean-payoff game} is played on a finite weighted game graph by two players.
The vertices of the game graph are partitioned into positions that belong to
Player~$1$ and positions that belong to Player~$2$.
Edges of the graphs are labeled with $k$-dimensional vectors $w$ of integer values, i.e.,
$w \in \mathbb{Z}^k$. The game is played as follows. A pebble is placed on a
designated initial vertex of the game graph. The game is played in rounds in which
the player owning the position where the pebble lies
moves the pebble to an adjacent position of the graph using an outgoing edge.
The game is played for an infinite number of rounds, resulting in an infinite
path through the graph, called a play. The value associated to a play is the
mean value in each dimension of the vectors of weights labeling the edges of the play.
Accordingly, the winning condition for Player~1
is defined by a vector of integer values $v \in \mathbb{Z}^k$ that specifies a
threshold for each dimension. A play is winning for Player~$1$ if its vector
of mean values is at least~$v$. All other plays are winning for Player~$2$,
thus the game is zero-sum. We are interested in the problem of deciding
the existence of a finite-memory winning strategy for Player~$1$ in generalized
mean-payoff games. Note that in general infinite memory may be required to win
generalized mean-payoff games, but for practical applications such as the
synthesis of reactive systems with multiple resource constraints, the
generalized mean-payoff games with finite-memory strategies are the relevant model.
Moreover, they provide the framework for the
synthesis of specifications defined by~\cite{AlurDMW09,Concur10}, and
the synthesis question for such specifications under \emph{regular (ultimately periodic)}
words correspond to generalized mean-payoff games with finite-memory strategies.
\smallskip\noindent{\em Generalized energy games.}
In generalized energy games, the winning condition for Player~1 requires that,
given an initial credit $v_0 \in \mathbb{N}^k$, the sum of $v_0$ and all the
vectors labeling edges up to position $i$ in the play is nonnegative, for all
$i \in \mathbb{N}$. The decision problem for generalized energy games
asks whether there exists an initial credit $v_0$ and a strategy for Player~1
to maintain the energy nonnegative in all dimensions against all strategies
of Player~2.
\smallskip\noindent{\em Contributions.}
In this paper, we study the strategy complexity and computational complexity
of solving generalized mean-payoff and energy games.
Our contributions are as follows.
\noindent
First, we show that generalized energy and mean-payoff games
are determined when played with finite-memory strategies, however, they are not
determined for memoryless strategies.
For generalized energy games determinacy under finite-memory coincides
with determinacy under arbitrary strategies (each player has a
winning strategy iff he has a finite-memory winning strategy).
In contrast, we show for generalized mean-payoff games that determinacy under finite-memory
and determinacy under arbitrary strategies do not coincide.
Thus, with finite-memory strategies, these games are determined; they correspond
to the synthesis question with ultimately periodic words, and they enjoy pleasant
mathematical properties such as the existence of the limit of the
mean value of the weights. Hence we focus on the study of
generalized mean-payoff and energy games with finite-memory strategies.
\noindent
Second, we show that under the hypothesis that both players play either
finite-memory or memoryless strategies, the generalized mean-payoff game
and the generalized energy game problems are equivalent.
\noindent
Third, our main contribution is the study of the computational complexity of the
decision problems for generalized mean-payoff games and generalized energy games,
both for finite-memory strategies and the special case of memoryless strategies.
Our complexity results can be summarized as follows:
(A)~For finite-memory strategies, we provide a nondeterministic polynomial time
algorithm
for deciding negative instances of the problems\footnote{Negative
instances are those where Player~1 is losing, and by determinacy under finite-memory where Player~2 is winning.}.
Thus we show that the decision problems are in coNP.
This significantly improves the complexity as compared to the EXPSPACE
algorithm that can be obtained by reduction to {\sc Vass} (vector addition systems with states)~\cite{BJK10}.
Furthermore, we establish a coNP lower bound for these problems by reduction
from the complement of the 3SAT problem, hence showing that the problem
is coNP-complete.
(B)~For the case of memoryless strategies, as the games are not determined, we
consider the problem of determining if Player 1 has a memoryless
winning strategy. First, we show that the problem of determining if Player 1
has a memoryless winning strategy is in NP,
and then show that the problem is NP-hard (i)~even when the weights are restricted
to $\{-1,0,1\}$; or
(ii)~when the weights are arbitrary and the dimension is~2.
\noindent{\em Related works.}
Mean-payoff games, which are the one-dimension version of our generalized
mean-payoff games, have been extensively studied starting with the works of
Ehrenfeucht and Mycielski in~\cite{EM79} where they prove memoryless determinacy
for these games. Because of memoryless determinacy, it is easy to show that
the decision problem for mean-payoff games lies in NP~$\cap$~coNP,
but despite large research efforts, no polynomial time algorithm is known for
that problem. A pseudo-polynomial time algorithm has been proposed by Zwick
and Paterson in~\cite{ZP96}, and improved in~\cite{BCDGR09}. The one-dimension
special case of generalized energy games have been introduced in~\cite{CAHS03}
and further studied in~\cite{BFLMS08} where log-space equivalence with
classical mean-payoff games is established.
Generalized energy games can be viewed as games played on {\sc Vass} (vector addition systems with states)
where the objective
is to avoid unbounded decreasing of the counters. A solution to such games
on {\sc Vass} is provided in~\cite{BJK10} (see in particular Lemma 3.4 in~\cite{BJK10}) with a PSPACE
algorithm when the weights are $\{-1,0,1\}$, leading to an EXPSPACE algorithm when the
weights are arbitrary integers.
We drastically improve the EXPSPACE upper-bound by providing a coNP
algorithm for the problem, and we also provide a coNP lower bound even when
the weights are restricted to $\{-1,0,1\}$.
\section{Generalized Mean-payoff and Energy Games}\label{sec:def}
\noindent{\bf Well quasi-orders.}
Let $D$ be a set. A relation $\preceq$ over $D$ is a {\em well quasi-order},
wqo for short, if the following holds: (a)~$\preceq$ is transitive and reflexive;
and (b)~for all $f : \mathbb{N} \rightarrow D$, there exist
$i_1,i_2 \in \mathbb{N}$ such that $i_1 < i_2$ and $f(i_1) \preceq f(i_2)$.
\begin{lemma}\label{wqo}
$(\mathbb{N}^k,\leq)$ is well quasi-ordered.
\end{lemma}
\noindent{\bf Multi-weighted two-player game structures.}
A {\em multi-weighted two-player game structure} is a tuple
$G=(S_1,S_2,s_{{\sf init}},E,k,w)$ where $S_1 \cap S_2 = \emptyset$,
and $S_i$ ($i = 1,2$) is the finite set of {\em Player~$i$ positions},
$s_{{\sf init}} \in S_1$ is the {\em initial position},
$E \subseteq (S_1 \cup S_2) \times (S_1 \cup S_2)$ is
the set of {\em edges} such that for all $s \in S_1 \cup S_2$,
there exists $s' \in S_1 \cup S_2$ such that $(s,s') \in E$, $k \in \mathbb{N}$ is
the {\em dimension} of the multi-weights, and $w : E \rightarrow \mathbb{Z}^k$ is the
{\em multi-weight labeling function}.
$G$ is a multi-weighted {\em one-player} game structure if $S_2 = \emptyset$.
A {\em play} in $G$ is an infinite sequence $\pi=s_0 s_1 \dots s_n \dots$ of positions
such that $(i)$ $s_0=s_{{\sf init}}$, $(ii)$ for all $i \geq 0$ we have $(s_i,s_{i+1}) \in E$.
A play $\pi=s_0 s_1 \dots s_n \dots$ is {\em ultimately periodic} if
it can be decomposed as $\pi=\rho_1 \cdot \rho_2^{\omega}$ where $\rho_1$ and
$\rho_2$ are two finite sequences of positions. The {\em prefix} up to position
$n$ of a play $\pi=s_0 s_1 \dots s_n \dots$ is the finite sequence
$\pi(n)=s_0 s_1 \dots s_n$, its last element $s_n$ is denoted by ${\sf Last}(\pi(n))$.
A prefix $\pi(n)$ belongs to Player~$i$ ($i \in \{1,2\}$) if ${\sf Last}(\pi(n)) \in S_i$.
The set of plays in $G$ is denoted by ${\sf Plays}(G)$,
the corresponding set of prefixes is denoted by ${\sf Prefs}(G)$,
the set of prefixes that belongs to Player~$i$ ($i \in \{1,2\}$) is denoted
by ${\sf Prefs}_i(G)$, and the set of ultimately periodic plays in $G$ is denoted by ${\sf Plays}^{up}(G)$.
The {\em energy level vector} of a prefix of play $\rho=s_0 s_1 \dots s_n$ is
${\sf EL}(\rho)=\sum_{i=0}^{i=n-1} w(s_i,s_{i+1})$, and the {\em mean-payoff vector}
of an ultimately periodic play
$\pi=s_0 s_1 \dots s_n \dots$ is ${\sf MP}(\pi)=\lim_{n\rightarrow \infty} \frac{1}{n} {\sf EL}(\pi(n))$.
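For an ultimately periodic play $\pi = \rho_1 \cdot \rho_2^{\omega}$, the finite prefix $\rho_1$ is averaged away in the limit, so ${\sf MP}(\pi)$ is just the componentwise average weight of the cycle $\rho_2$. A small Python sketch of the two quantities (our illustration, not part of the formal development):

```python
from fractions import Fraction

def energy_level(weights):
    """EL of a prefix: componentwise sum of the k-dimensional edge weights."""
    k = len(weights[0])
    return tuple(sum(w[d] for w in weights) for d in range(k))

def mean_payoff(cycle_weights):
    """MP of an ultimately periodic play rho1 . rho2^omega: the finite
    prefix rho1 vanishes in the limit, so only the cycle rho2 matters."""
    k = len(cycle_weights[0])
    n = len(cycle_weights)
    return tuple(Fraction(sum(w[d] for w in cycle_weights), n) for d in range(k))

# Two-dimensional example: a cycle alternating edges weighted (1,-1) and (-1,2).
cycle = [(1, -1), (-1, 2)]
assert energy_level(cycle) == (0, 1)
assert mean_payoff(cycle) == (Fraction(0), Fraction(1, 2))
```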
\smallskip\noindent{\bf Strategies.}
A strategy for Player~$i$ ($i \in \{1,2\}$) in~$G$ is a function
$\lambda_i : {\sf Prefs}_i(G) \rightarrow S_1 \cup S_{2}$ such that for all $\rho \in {\sf Prefs}_i(G)$ we have
$({\sf Last}(\rho),\lambda_i(\rho)) \in E$. A play $\pi=s_0 s_1 \dots \in {\sf Plays}(G)$
is \emph{consistent} with a strategy $\lambda_i$ of Player~$i$ if
$s_{j+1}=\lambda_i(s_0 s_1 \dots s_j)$ for all $j \geq 0$ such that $s_j \in S_i$.
The {\em outcome of a pair of strategies}, $\lambda_1$ for Player~1 and $\lambda_2$
for Player~2, is the (unique) play which is consistent with both $\lambda_1$ and $\lambda_2$.
We denote $\mathsf{outcome}_G(\lambda_1,\lambda_2)$ this outcome.
A strategy $\lambda_1$ for Player~$1$ has {\em finite-memory} if it can be encoded
by a deterministic Moore machine $(M,m_0,\alpha_u,\alpha_n)$ where $M$ is a finite
set of states (the memory of the strategy), $m_0 \in M$ is the initial memory state,
$\alpha_u: M \times (S_{1} \cup S_2) \to M$ is an update function,
and $\alpha_n : M \times S_{i} \to S_{1} \cup S_2$ is the next-action function.
If the game is in a Player-$1$ position $s \in S_1$ and $m \in M$ is the current memory value,
then the strategy chooses $s' = \alpha_n(m,s)$ as the next position and the memory
is updated to $\alpha_u(m,s)$. Formally, $\tuple{M, m_0, \alpha_u, \alpha_n}$
defines the strategy $\lambda$ such that $\lambda(\rho\cdot s) = \alpha_n(\hat{\alpha}_u(m_0, \rho), s)$
for all $\rho \in (S_1 \cup S_2)^*$ and $s \in S_1$, where $\hat{\alpha}_u$ extends $\alpha_u$ to sequences
of positions as expected. A strategy is \emph{memoryless} if $\abs{M} = 1$.
For a finite-memory strategy $\lambda_1$ of Player~1, let $G_{\lambda_1}$ be the graph obtained as the product
of $G$ with the Moore machine defining $\lambda_1$, with initial vertex $\tuple{m_0,s_{{\sf init}}}$
and where $(\tuple{m,s},\tuple{m',s'})$ is a transition in $G_{\lambda_1}$ if
$m' = \alpha_u(m,s)$, and either $s \in S_1$ and $s'=\alpha_n(m,s)$, or $s \in S_2$ and $(s,s') \in E$.
The set of infinite paths in $G_{\lambda_1}$ and the set of plays consistent with
$\lambda_1$ coincide. A similar definition can be given for the case of Player~2.
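The Moore-machine encoding can be made concrete with a toy example (illustrative Python with hypothetical position names; it directly implements $\lambda(\rho \cdot s) = \alpha_n(\hat{\alpha}_u(m_0,\rho), s)$):

```python
class MooreStrategy:
    """Finite-memory strategy (M, m0, alpha_u, alpha_n): the update
    function folds the visited positions into a memory state, and the
    next-action function picks a successor at Player-1 positions."""
    def __init__(self, m0, alpha_u, alpha_n):
        self.m0, self.alpha_u, self.alpha_n = m0, alpha_u, alpha_n

    def __call__(self, prefix):
        m = self.m0
        for s in prefix[:-1]:          # hat{alpha}_u folded over rho
            m = self.alpha_u(m, s)
        return self.alpha_n(m, prefix[-1])

# Toy two-state strategy on hypothetical positions "s", "t", "u":
# the memory bit remembers the parity of the number of visits to "s",
# so the strategy alternates between the two successors of "s".
alternate = MooreStrategy(
    m0=0,
    alpha_u=lambda m, s: (m + 1) % 2 if s == "s" else m,
    alpha_n=lambda m, s: "t" if m == 0 else "u",
)
assert alternate(["s"]) == "t"
assert alternate(["s", "t", "s"]) == "u"
```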
\smallskip\noindent{\bf Objectives.}
An {\em objective} for Player~$1$ in $G$ is a set of plays $W \subseteq {\sf Plays}(G)$.
A strategy $\lambda_1$ for Player~1 is {\em winning} for $W$ in $G$ if for all plays in
$\pi \in {\sf Plays}(G)$ that are consistent with $\lambda_1$, we have that $\pi \in W$.
A strategy $\lambda_2$ for Player~2 is {\em spoiling} for $W$ in $G$ if for all plays in
$\pi \in {\sf Plays}(G)$ that are consistent with $\lambda_2$, we have that $\pi \not\in W$.
We consider the following objectives:
\begin{itemize}
\item {\em Multi Energy objectives}. Given an initial energy vector
$v_0 \in \mathbb{N}^k$, the {\em multi energy objective}
${\sf PosEnergy}_G(v_0)=\{ \pi \in {\sf Plays}(G) \mid \forall n \geq 0 : v_0 + {\sf EL}(\pi(n)) \in \mathbb{N}^k \}$
requires that the energy level in all dimensions remains always nonnegative.
\item {\em Multi Mean-payoff objectives}. Given a threshold vector
$v \in \mathbb{Z}^k$, the {\em multi mean-payoff objective}
${\sf MeanPayoff}_G(v)=\{ \pi \in {\sf Plays}^{up}(G) \mid {\sf MP}(\pi) \geq v \}$
requires that, for each dimension $j$, the mean-payoff in dimension $j$ is at
least $v(j)$.
\end{itemize}
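A finite-prefix check of the multi energy objective, following the definition above (illustrative Python; deciding the existence of a winning strategy is of course a game-solving problem, not a prefix scan):

```python
def satisfies_pos_energy(v0, weights):
    """Check that v0 + EL(pi(n)) stays in N^k for every prefix pi(n)
    of the given finite sequence of k-dimensional edge weights."""
    level = list(v0)
    for w in weights:
        for d in range(len(level)):
            level[d] += w[d]
            if level[d] < 0:
                return False
    return True

# With initial credit (1, 0), the first edge already violates dimension 2;
# credit (1, 1) suffices for this prefix.
assert not satisfies_pos_energy((1, 0), [(0, -1), (0, 1)])
assert satisfies_pos_energy((1, 1), [(0, -1), (0, 1)])
```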
\smallskip\noindent{\bf Decision problems.}
We consider the following decision problems:
\begin{itemize}
\item The {\em unknown initial credit problem} asks, given a
multi-weighted two-player game structure $G$, to decide whether there exists
an initial credit vector $v_0 \in \mathbb{N}^k$ and a winning strategy $\lambda_1$
for Player~1 for the objective ${\sf PosEnergy}_G(v_0)$.
\item The {\em mean-payoff threshold problem} (for finite memory) asks, given a multi-weighted two-player
game structure $G$ and a threshold vector $v \in \mathbb{Z}^k$, to decide whether there
exists a \emph{finite-memory} strategy $\lambda_1$ for Player~1 such that for all
\emph{finite-memory} strategies
$\lambda_2$ of Player~2, $\mathsf{outcome}_G(\lambda_1,\lambda_2) \in {\sf MeanPayoff}_G(v)$.
\end{itemize}
\noindent Note that in the unknown initial credit problem, we allow arbitrary strategies
(and we show in Theorem~\ref{thrm_gen_energy_fin} that actually finite-memory strategies are sufficient),
while in the mean-payoff threshold problem, we require finite-memory strategies,
which is a restriction (according to Theorem~\ref{thrm_gen_mean}) of the more general problem
of deciding the existence of arbitrary winning strategies.
\smallskip\noindent{\bf Determinacy and determinacy under finite-memory.}
A game $G$ with an objective $W$ is \emph{determined} if either Player~1 has
a winning strategy, or Player~2 has a spoiling strategy.
A game $G$ with an objective $W$ is \emph{determined under finite-memory} if either
(a)~Player~1 has a \emph{finite-memory} strategy $\lambda_1$ such that for all
\emph{finite-memory} strategies $\lambda_2$ of Player~2,
we have $\mathsf{outcome}_G(\lambda_1,\lambda_2) \in W$; or
(b)~Player~2 has a \emph{finite-memory} strategy $\lambda_2$ such that for all
\emph{finite-memory} strategies $\lambda_1$ of Player~1,
we have $\mathsf{outcome}_G(\lambda_1,\lambda_2) \not\in W$.
Games with objectives $W$ are determined (resp. determined under finite-memory) if
all game structures with objectives $W$ are determined (resp. determined under finite-memory).
We say that determinacy and determinacy under finite-memory coincide for a
class of objectives
if, for all objectives in the class and all game structures,
the answers to determinacy and determinacy under finite-memory
coincide (i.e., Player~1 has a winning strategy iff he has a finite-memory
winning strategy, and similarly for Player~2).
Generalized mean-payoff and energy objectives are measurable:
(a)~generalized mean-payoff objectives can be expressed as finite intersection
of mean-payoff objectives and mean-payoff objectives are complete
for the third level of Borel hierarchy~\cite{ChaTCS}; and
(b) generalized energy objectives can be expressed as finite intersection
of energy objectives, and energy objectives are closed sets.
Hence determinacy of generalized mean-payoff and energy games follows
from the result of~\cite{Mar75}.
\begin{theorem}[Determinacy~\cite{Mar75}]
Generalized mean-payoff and energy games are determined.
\end{theorem}
\section{Determinacy under Finite-memory and Inter-reducibility}
In this section, we establish four results. First, we show that to win generalized energy games, it is sufficient for Player 1 to
play {\em finite-memory strategies}. Second, we show that to spoil generalized energy games, it is sufficient for Player 2
to play {\em memoryless strategies}. As a consequence, generalized energy games are determined under finite-memory.
Third, using this finite-memory determinacy result, we show that the decision problems for generalized energy and
mean-payoff games (see Section~\ref{sec:def}) are log-space inter-reducible. Finally,
we show that infinite-memory strategies are more powerful than finite-memory strategies in generalized mean-payoff games.
For generalized energy games, we first show that finite-memory strategies are sufficient for Player~1,
and then that memoryless strategies are sufficient for Player~2.
\begin{lemma}
\label{lem:energy-player1-finite-memory}
For all multi-weighted two-player game structures $G$, the answer to the unknown
initial credit problem is {\sc Yes} iff there exists an initial credit
$v_0 \in \mathbb{N}^k$ and a finite-memory strategy $\lambda^{\sf FM}_1$
for Player 1 such that for all strategies $\lambda_2$ of Player 2,
$\mathsf{outcome}_G(\lambda^{\sf FM}_1,\lambda_2) \in {\sf PosEnergy}_G(v_0)$.
\end{lemma}
\begin{proof}
One direction is trivial. For the other direction, assume that $\lambda_1$ is a
(not necessarily finite-memory) winning strategy for Player~1 in $G$ with initial
credit $v_0 \in \mathbb{N}^k$. We show how to construct from $\lambda_1$
a finite-memory strategy $\lambda_1^{\sf FM}$ which is winning against all
strategies of Player~2 for initial credit $v_0$. For that we consider the
unfolding of the game graph $G$ in which Player~1 plays according to $\lambda_1$.
This infinite tree, noted $T_{G(\lambda_1)}$, has as set of nodes all the
prefixes of plays in $G$ when Player~1 plays according to $\lambda_1$.
We associate to each node $\rho=s_0 s_1 \dots s_n$ in this tree the energy
vector $v_0+{\sf EL}(\rho)$. As $\lambda_1$ is winning, we have that
$v_0+{\sf EL}(\rho) \in \mathbb{N}^k$ for all $\rho$. Now, consider the set
$(S_1 \cup S_2) \times \mathbb{N}^k$, and the relation $\sqsubseteq$ on this set
defined as follows: $(s_1,v_1) \sqsubseteq (s_2,v_2)$ iff $s_1=s_2$ and
$v_1 \leq v_2$ i.e., for all $i$, $1 \leq i \leq k$, $v_1(i) \leq v_2(i)$.
The relation $\sqsubseteq$ is a wqo (easy consequence of Lemma~\ref{wqo}).
As a consequence, on every infinite branch $\pi=s_0 s_1 \dots s_n \dots$
of $T_{G(\lambda_1)}$, there exist two positions $i < j$ such that
${\sf Last}(\pi(i))={\sf Last}(\pi(j))$ and ${\sf EL}(\pi(i)) \leq {\sf EL}(\pi(j))$.
We say that node $j$ subsumes node $i$. Now, let $T^{\sf FM}_{G(\lambda_1)}$
be the tree $T_{G(\lambda_1)}$ where we stop each branch when we reach a node
$n_2$ which subsumes one of its ancestor nodes $n_1$.
Clearly, $T^{\sf FM}_{G(\lambda_1)}$ is finite.
Also, it is easy to see that Player~1 can play in the subtree rooted at $n_2$ as
she plays in the subtree rooted at $n_1$, because her energy level at $n_2$ is
at least her energy level at $n_1$. From $T^{\sf FM}_{G(\lambda_1)}$, we can construct a
Moore machine which encodes a finite-memory strategy $\lambda_1^{\sf FM}$ that
wins the generalized energy game $G$, as it is winning for initial energy
level $v_0$.
\end{proof}
\begin{lemma}[\cite{BJK10}]
\label{lem:player-two-memoryless}
For all multi-weighted two-player game structures $G$, the answer to the
unknown initial credit problem is {\sc No} if and only if there exists
a \emph{memoryless} strategy $\lambda_2$ for Player~$2$, such that
for all initial credit vectors $v_0 \in \mathbb{N}^k$ and all strategies $\lambda_1$
for Player~$1$ we have $\mathsf{outcome}_G(\lambda_1,\lambda_2) \not\in {\sf PosEnergy}_G(v_0)$.
\end{lemma}
\begin{proof}
The proof is given in Lemma~19 of~\cite{BJK10}. Intuitively, consider a Player-$2$
state $s \in S_2$ with two successors $s',s''$.
If an initial credit vector $v'_0$ is sufficient for Player~$1$ to win
against Player-$2$ always choosing $s'$, and $v''_0$ is sufficient against
Player-$2$ always choosing $s''$, then $v'_0+v''_0$ is sufficient against
Player-$2$ arbitrarily alternating between $s'$ and $s''$. This follows from
the fact that if Player~$1$ maintains all energies nonnegative when the initial credit
is $v_0$, then he can maintain all energies above $\Delta$ when the initial credit
is $v_0 + \Delta$ ($\Delta \in \mathbb{N}^k$).
\end{proof}
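The shifting argument at the end of this proof rests on the fact that prefix energy levels are affine in the initial credit: adding $\Delta$ to $v_0$ adds $\Delta$ to every prefix level. A minimal numeric sketch (the weight sequence below is hypothetical, chosen only for illustration):

```python
# Prefix energy levels are affine in the initial credit: shifting the
# credit by Delta shifts every level by Delta. Hypothetical 2-dim. run.

def energy_levels(v0, weights):
    """Return the sequence of prefix energy levels of a play."""
    levels, cur = [], list(v0)
    for w in weights:
        cur = [c + x for c, x in zip(cur, w)]
        levels.append(tuple(cur))
    return levels

run = [(1, -1), (-1, 0), (0, 2)]
base = energy_levels((1, 1), run)        # nonnegative with credit (1, 1)
shifted = energy_levels((2, 3), run)     # credit shifted by Delta = (1, 2)
assert all(min(level) >= 0 for level in base)
assert all(s == (b[0] + 1, b[1] + 2) for s, b in zip(shifted, base))
```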
\noindent As a consequence of the two previous lemmas, we have the following theorem.
\begin{theorem}\label{thrm_gen_energy_fin}
Generalized energy games are determined under finite-memory, and determinacy
coincides with determinacy under finite-memory for generalized energy games.
\end{theorem}
\begin{remark}
Note that even if Player~$2$ can be restricted to play memoryless strategies in generalized energy games, it may be that Player~$1$ is winning with some initial
credit vector $v_0$ when Player~$2$ is memoryless, and is not winning with
the same initial credit vector $v_0$ when Player~$2$ can use arbitrary strategies.
This situation is illustrated in \figurename~\ref{fig:memory-needed} where Player~$1$
(owning round states) can maintain the energy nonnegative in all dimensions
with initial credit $(2,0)$ when Player~$2$ (owning square states) is memoryless.
Indeed, either Player~$2$ chooses the left edge from $q_0$ to $q_1$ and Player~$1$ wins,
or Player~$2$ chooses the right edge from $q_0$ to $q_2$, and Player~$1$ wins as well by
alternating the edges back to $q_0$. Now, if Player~$2$ has memory, then Player~2 wins
by choosing first the right edge to $q_2$, which forces Player~$1$ to come back
to $q_0$ with multi-weight $(-1,1)$. The energy level is now $(1,1)$ in $q_0$ and Player~$2$
chooses the left edge to $q_1$ which is losing for Player~$1$. Note that Player~$1$
wins with initial credit $(2,1)$ and $(3,0)$ (or any larger credit) against all
arbitrary strategies of Player~$2$.
\begin{figure}[!tb]
\begin{center}
\input{figures/memory.tex}
\caption{Player~$1$ (round states) wins with initial credit $(2,0)$ when Player~$2$ (square states) can use memoryless strategies,
but not when Player~$2$ can use arbitrary strategies. \label{fig:memory-needed}}
\end{center}
\end{figure}
\end{remark}
We now show that generalized mean-payoff games (where players are restricted to play finite-memory strategies by definition)
are log-space equivalent to generalized energy games.
First note that the mean-payoff threshold problem with threshold vector $v \in \mathbb{Z}^k$ can
be reduced to the mean-payoff threshold problem with threshold vector $\{0\}^k$,
by shifting all multi-weights in the game graph by $v$ (which has the effect of
shifting the mean-payoff value by $v$). Given this reduction, the following result
shows that the unknown initial credit problem (for multi-energy games) and
the mean-payoff threshold problem (with finite-memory strategies) are equivalent.
\begin{theorem}\label{thrm_inter}
\label{lem:energy-mean-payoff-reduction}
For all multi-weighted two-player game structures $G$ with dimension $k$, the answer to the
unknown initial credit problem is {\sc Yes} if and only if the answer to the
mean-payoff threshold problem (for finite memory) with threshold vector $\{0\}^k$ is {\sc Yes}.
\end{theorem}
\begin{proof}
First, assume that there exists a winning strategy $\lambda_1$ for Player~$1$
in~$G$ for the multi energy objective ${\sf PosEnergy}_G(v_0)$ (for some $v_0$).
Theorem~\ref{thrm_gen_energy_fin} establishes that finite memory is sufficient to win
multi-energy games, so we can assume that $\lambda_1$ has finite memory.
Consider the restriction of the graph $G_{\lambda_1}$ to its reachable vertices;
we show that the energy vector of every simple cycle is nonnegative. By contradiction,
if there exists a simple cycle whose energy vector is negative in one dimension,
then the infinite path that reaches this cycle and loops through it forever
would violate the objective ${\sf PosEnergy}_G(v_0)$ regardless of the vector $v_0$.
Now, this shows that every reachable cycle in $G_{\lambda_1}$ has nonnegative
mean-payoff value in all dimensions, hence $\lambda_1$ is winning for the
multi mean-payoff objective ${\sf MeanPayoff}_G(\{0\}^k)$.
Second, assume that there exists a finite-memory strategy $\lambda_1$ for Player~$1$
that is winning in~$G$ for the multi mean-payoff objective ${\sf MeanPayoff}_G(\{0\}^k)$.
By the same argument as above, all simple cycles in $G_{\lambda_1}$ are nonnegative
and the strategy $\lambda_1$ is also winning for the objective ${\sf PosEnergy}_G(v_0)$
for some $v_0$. Taking $v_0 = \{n W\}^k$, where $n$ is the number of states
in~$G_{\lambda_1}$ (which bounds the length of the acyclic paths) and $W$
is the largest absolute weight in the game, suffices.
\end{proof}
Note that the result of Theorem~\ref{thrm_inter} does not hold for arbitrary
strategies as shown in the following lemma.
\begin{lemma}\label{lemm_inf_power}
In generalized mean-payoff games, infinite memory may be necessary to win
(finite-memory strategies may not be sufficient).
\end{lemma}
\begin{proof}
To show this, we first need to define
the mean-payoff vector of arbitrary plays (because arbitrary strategies, i.e.,
infinite-memory strategies, may produce non-ultimately periodic plays). In particular, the limit
of $\frac{1}{n} \cdot {\sf EL}(\pi(n))$ for $n \to \infty$ may not exist for arbitrary plays~$\pi$.
Therefore, two possible definitions are usually considered, namely either
$\underline{{\sf MP}}(\pi) = \liminf_{n \to \infty} \frac{1}{n} \cdot {\sf EL}(\pi(n))$,
or $\overline{{\sf MP}}(\pi) = \limsup_{n \to \infty} \frac{1}{n} \cdot {\sf EL}(\pi(n))$.
In both cases, better payoff can be obtained with infinite memory:
the example of \figurename~\ref{fig:crazy} shows a game where all states belong to
Player~$1$. We claim that $(a)$ for $\underline{{\sf MP}}$, Player~$1$ can achieve
a threshold vector $(1,1)$, and $(b)$ for $\overline{{\sf MP}}$,
Player~$1$ can achieve a threshold vector $(2,2)$; $(c)$ if we restrict Player~$1$
to use a finite-memory strategy, then it is not possible to win the
multi mean-payoff objective with threshold $(1,1)$
(and thus also not with $(2,2)$). To prove $(a)$, consider the strategy
that visits $q_a$ $n$ times and then $q_b$ $n$ times, and repeats this forever
with increasing values of $n$. This guarantees a mean-payoff
vector $(1,1)$ for $\underline{{\sf MP}}$ because in the long-run roughly half of the
time is spent in $q_a$ and roughly half of the time in $q_b$.
To prove~$(b)$, consider the strategy that
alternates visits to $q_a$ and $q_b$ such that after the $n$th alternation,
the self-loop on the visited state $q$ ($q \in \{q_a,q_b\}$) is taken so
many times that the average frequency of $q$ gets larger than~$1-\frac{1}{n}$
in the current finite prefix of the play.
This is always possible and achieves threshold $(2,2)$ for $\overline{{\sf MP}}$.
Note that the above two strategies require infinite memory. To prove $(c)$,
notice that finite-memory strategies produce an ultimately periodic play
and therefore $\underline{{\sf MP}}$ and $\overline{{\sf MP}}$ coincide with ${\sf MP}$. It is easy to
see that such a play cannot achieve $(1,1)$ because the periodic
part would have to visit both $q_a$ and $q_b$ and then the mean-payoff vector $(v_1,v_2)$
of the play would be such that $v_1 + v_2 < 2$ and thus $v_1 = v_2 = 1$ is
impossible.
\end{proof}
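The two infinite-memory strategies in the proof can be checked numerically. The figure source is external, so the sketch below assumes self-loop multi-weights $(2,0)$ on $q_a$ and $(0,2)$ on $q_b$, which is consistent with the thresholds claimed in the lemma:

```python
# Hedged numeric illustration (assumed loop weights (2,0) on q_a and
# (0,2) on q_b): balanced blocks give averages (1,1); ending a prefix
# with a huge block on one state pushes that coordinate near 2.

def mean_payoff(blocks):
    """Mean-payoff vector of a finite prefix given as (state, reps) blocks."""
    total, steps = [0, 0], 0
    for state, reps in blocks:
        w = (2, 0) if state == "a" else (0, 2)
        total[0] += w[0] * reps
        total[1] += w[1] * reps
        steps += reps
    return (total[0] / steps, total[1] / steps)

# (a) balanced blocks (n visits to q_a, then n to q_b): both averages are 1.
balanced = [(s, n) for n in range(1, 101) for s in ("a", "b")]
assert mean_payoff(balanced) == (1.0, 1.0)

# (b) a prefix ending in a very long q_a block drives the first average
# toward 2 (and symmetrically for q_b), so the limsup approaches (2, 2).
spike = balanced + [("a", 10**6)]
assert mean_payoff(spike)[0] > 1.9
```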
\begin{figure}[!tb]
\begin{center}
\input{figures/crazy.tex}
\caption{A generalized mean-payoff game where infinite memory is necessary to win (Lemma~\ref{lemm_inf_power}).}\label{fig:crazy}
\end{center}
\end{figure}
\noindent Theorem~\ref{thrm_inter} and Lemma~\ref{lemm_inf_power},
along with Theorem~\ref{thrm_gen_energy_fin}, give the following result.
\begin{theorem}\label{thrm_gen_mean}
Generalized mean-payoff games are determined under finite-memory; however,
determinacy and determinacy under finite-memory do not coincide for generalized
mean-payoff games.
\end{theorem}
\section{coNP-completeness for Finite-Memory Strategies}
In this section, we present a nondeterministic polynomial-time algorithm
to recognize the instances for which there is no winning strategy for Player~1
in a multi-energy game.
First, we show that the one-player version of this game can be solved
by checking the existence of a circuit (i.e., a not necessarily simple cycle)
with overall nonnegative effect
in all dimensions. Second, we build on this and the memoryless result
for Player~2 to define a coNP algorithm.
The main result (Theorem~\ref{thrm_complete}) is derived from
Lemma~\ref{lem:coNp-membership} and Lemma~\ref{thrm_hard} below.
\begin{theorem}\label{thrm_complete}
The unknown initial credit and the mean-payoff threshold problems for
multi-weighted two-player game structures are coNP-complete.
\end{theorem}
\smallskip\noindent{\bf coNP upper bound.}
First, we need the following result about finding zero circuits in
multi-weighted directed graphs (a graph is a one-player game).
A zero circuit is a finite sequence $s_0 s_1 \dots s_n$ such that
$s_0 = s_n$, $(s_i,s_{i+1}) \in E$ for all $0 \leq i < n$, and
$\sum_{i=0}^{n-1} w(s_i,s_{i+1}) = (0,0,\dots,0)$. The circuit need
not be simple.
\begin{lemma}[\cite{KS88}]\label{lem:zero-cycle}
Determining if a $k$-dimensional directed graph contains a zero circuit
can be done in polynomial time.
\end{lemma}
\begin{lemma}\label{lem:coNp-membership}
The unknown initial credit and the mean-payoff threshold problems for
multi-weighted two-player game structures are in coNP.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:player-two-memoryless}, we know that Player~2 can be restricted
to play memoryless strategies. A coNP algorithm can guess a memoryless
strategy $\lambda$ and check in polynomial time that it is winning using the following argument.
First, consider the graph $G_{\lambda}$ as a one-player game (in which all states
belong to Player~$1$). We show that if there exists an initial energy level $v_0$ and an
infinite play $\pi=s_0 s_1 \dots s_n \dots$ in $G_{\lambda}$ such that $\pi \in {\sf PosEnergy}(v_0)$,
then there exists a reachable circuit in $G_{\lambda}$ that has nonnegative effect in all
dimensions. To show this, we extend $\pi$ with the energy information as follows:
$\pi'=(s_0,w_0) (s_1,w_1) \dots (s_n,w_n) \dots$ where $w_0=v_0$ and for all
$i \geq 1$, $w_i=v_0+{\sf EL}(\pi(i))$. As $\pi \in {\sf PosEnergy}(v_0)$, we know that
for all $i \geq 0$, $w_i \in \mathbb{N}^k$. So, we can define the following order
on the pairs $(s,w) \in (S_1 \cup S_2) \times \mathbb{N}^k$ in the run:
$(s,w) \sqsubseteq (s',w')$ iff $s=s'$ and $w(j) \leq w'(j)$ for all $1 \leq j \leq k$.
From Lemma~\ref{wqo}, it is easy to show that $\sqsubseteq$ is a wqo.
Then there exist two positions $i_1 < i_2$ in $\pi'$ such that
$(s_{i_1},w_{i_1}) \sqsubseteq (s_{i_2},w_{i_2})$.
The circuit underlying those two positions has nonnegative effect in all dimensions.
Based on this, we can decide if there exists an initial energy vector $v_0$ and an infinite path
in $G_{\lambda}$ that satisfies ${\sf PosEnergy}_G(v_0)$ using the result of Lemma~\ref{lem:zero-cycle}
on a modified version of $G_{\lambda}$ obtained as follows.
In every state of $G_{\lambda}$, we add $k$ self-loops with respective multi-weights $(-1,0,\dots,0)$,
$(0,-1,0,\dots,0)$, $\dots$, $(0,\dots,0,-1)$, i.e., each self-loop removes one unit
of energy in one dimension. It is easy to see that $G_{\lambda}$ has a circuit with nonnegative
effect in all dimensions if and only if the modified $G_{\lambda}$ has a zero circuit, which
can be determined in polynomial time. The result follows.
\end{proof}
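The gadget at the end of the proof can be illustrated on a toy graph (the states and weights below are hypothetical): a circuit with nonnegative effect in $G_{\lambda}$ becomes a zero circuit once the unit-decrement self-loops are available.

```python
# Illustration of the modification used in the coNP membership proof:
# add, at every state, k self-loops that each remove one energy unit in
# one dimension; a nonnegative circuit padded with such loops is a zero
# circuit. Toy graph with hypothetical weights.

def add_unit_loops(edges, states, k):
    """Return the edge list extended with the k decrement self-loops."""
    loops = []
    for s in states:
        for d in range(k):
            w = tuple(-1 if i == d else 0 for i in range(k))
            loops.append((s, s, w))
    return edges + loops

def effect(circuit):
    """Sum the multi-weights along a sequence of edges."""
    k = len(circuit[0][2])
    return tuple(sum(e[2][i] for e in circuit) for i in range(k))

edges = [("s0", "s1", (1, -1)), ("s1", "s0", (0, 2))]
g = add_unit_loops(edges, ["s0", "s1"], 2)
# The original circuit has effect (1, 1) >= (0, 0) ...
circuit = [edges[0], edges[1]]
assert effect(circuit) == (1, 1)
# ... and padding it with one decrement loop per dimension zeroes it out.
zero = circuit + [("s0", "s0", (-1, 0)), ("s0", "s0", (0, -1))]
assert effect(zero) == (0, 0)
```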
\smallskip\noindent{\bf Lower bound: coNP-hardness.}
We show that the unknown initial credit problem for
multi-weighted two-player game structures is coNP-hard.
We present a reduction from the complement of the 3SAT problem
which is NP-complete~\cite{PapaBook}.
\smallskip\noindent\emph{Hardness proof.}
We show that the problem of deciding whether Player~1 has a winning
strategy for the unknown initial credit problem for multi-weighted
two-player game structures is at least as hard as
deciding whether a 3SAT formula is unsatisfiable.
Consider a 3SAT formula $\psi$ in CNF with clauses $C_1,C_2,\ldots,C_k$
over variables $\{x_1, x_2, \ldots, x_n\}$, where each clause consists
of disjunctions of exactly three literals (a literal is a variable or its
complement).
Given the formula $\psi$, we construct a game graph as shown in
Figure~\ref{fig:3sat}.
The game graph is as follows: from the initial position, Player~1 chooses
a clause, then from a clause Player~2 chooses a literal that appears
in the clause (i.e., makes the clause true). From every literal
the next position is the initial position.
We now describe the multi-weight labeling function $w$.
In the multi-weight function there is a component for every literal.
For edges from the initial position to the clause positions, and from
the clause positions to the literals, the weight for every component
is~0.
We now define the weight function for the edges from literals back to the
initial position: for a literal $y$, and the edge from $y$ to the
initial position, the weight for the component of $y$ is~$1$, the weight for
the component of the complement of $y$ is~$-1$, and for all the other
components the weight is~$0$.
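The weight labeling can be sketched directly (the string encoding of literals and the helper names below are ours, not part of the construction): one component per literal, and the back edge from literal $y$ carries $+1$ on $y$'s component and $-1$ on the component of its complement.

```python
# Hedged sketch of the back-edge weights in the 3SAT reduction: the edge
# from literal y to the initial position has +1 on y's component and -1
# on the complement's component, 0 elsewhere.

def literal_components(n):
    """Index the 2n literals x_1..x_n, !x_1..!x_n (hypothetical encoding)."""
    comps = {}
    for i in range(1, n + 1):
        comps[f"x{i}"] = 2 * (i - 1)
        comps[f"!x{i}"] = 2 * (i - 1) + 1
    return comps

def back_edge_weight(lit, comps):
    """Weight vector of the edge from literal `lit` back to the start."""
    w = [0] * len(comps)
    comp = lit[1:] if lit.startswith("!") else "!" + lit
    w[comps[lit]] = 1
    w[comps[comp]] = -1
    return w

comps = literal_components(2)          # variables x1, x2 -> 4 components
assert back_edge_weight("x1", comps) == [1, -1, 0, 0]
assert back_edge_weight("!x1", comps) == [-1, 1, 0, 0]
```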
We now define a few notations related to assignments of truth values
to literals.
We consider \emph{assignments} that assign truth values to all the literals.
An assignment is \emph{valid} if for every literal the truth value assigned
to the literal and its complement are complementary (i.e., for all $1 \leq i
\leq n$, if
$x_i$ is assigned true (resp. false), then the complement $\overline{x}_i$
of $x_i$ is assigned false (resp. true)).
An assignment that is not valid is \emph{conflicting} (i.e., for some
$1 \leq i \leq n$, both $x_i$ and $\overline{x}_i$ are assigned the same
truth value).
If the formula $\psi$ is satisfiable, then there is a valid assignment
that satisfies all the clauses.
If the formula $\psi$ is not satisfiable, then every assignment that satisfies
all the clauses must be conflicting.
We now present two directions of the hardness proof.
\begin{figure}[tb]
\centering
\begin{picture}(65,40)(0,0)
\drawcurve(51,33)(47,44)(10,20)
\drawcurve(51,20)(49,12)(15,15)
\drawcurve(51,5)(49,-2)(10,10)
\node[Nmarks=i](q0)(10,15){}
\node[Nmr=0](q1)(35,33){$C_1$}
\fmark[fangle=20,flength=10](q1)
\fmark[fangle=0,flength=10](q1)
\fmark[fangle=-20,flength=10](q1)
\node[Nmr=0](q2)(35,20){$C_2$}
\fmark[fangle=20,flength=10](q2)
\fmark[fangle=0,flength=10](q2)
\fmark[fangle=-20,flength=10](q2)
\node[Nmr=0](q3)(35,5){$C_k$}
\fmark[fangle=20,flength=10](q3)
\fmark[fangle=0,flength=10](q3)
\fmark[fangle=-20,flength=10](q3)
\drawedge(q0,q1){ }
\drawedge(q0,q2){ }
\drawedge(q0,q3){ }
\put(35,11){$\vdots$}
\put(53,31){{\Huge $\}$}}
\put(53,18){{\Huge $\}$}}
\put(53,3){{\Huge $\}$}}
\put(58,32){ literal}
\put(58,19){ literal}
\put(58,4){ literal}
\end{picture}
\caption{Game graph construction for a 3SAT formula (Lemma~\ref{thrm_hard}).}
\label{fig:3sat}
\end{figure}
\smallskip\noindent\emph{$\psi$ satisfiable implies Player~2 winning.}
We show that if $\psi$ is satisfiable, then Player~2 has a memoryless
winning strategy.
Since $\psi$ is satisfiable, there is a valid assignment $A$ that
satisfies every clause.
The memoryless strategy is constructed from the assignment $A$ as follows:
for a clause $C_i$, the strategy chooses a literal as successor that appears
in $C_i$ and is set to true by the assignment.
Consider an arbitrary strategy for Player~1, and the resulting infinite play:
the literals visited in the play are all assigned the truth value true by $A$,
and the infinite play must visit some literal infinitely often.
Consider a literal $x$ that appears infinitely often in the play; then the
complement literal $\overline{x}$ is never visited, and every
time literal $x$ is visited, the component corresponding to $\overline{x}$
decreases by~$1$. Since $x$ appears infinitely often, it follows that
the play is winning for Player~2 for every finite initial credit.
It follows that the strategy for Player~2 is winning, and the answer to the
unknown initial credit problem is ``No".
\smallskip\noindent\emph{$\psi$ not satisfiable implies Player~1 is winning.}
We now show that
if $\psi$ is not satisfiable, then Player~1 is winning.
By determinacy, it suffices to show that Player~2 is not winning,
and by existence of memoryless winning strategy for Player~2
(Lemma~\ref{lem:player-two-memoryless}), it suffices to show that there is no
memoryless winning strategy for Player~2.
Fix an arbitrary memoryless strategy for Player~2 (i.e., in every clause
Player~2 chooses a literal that appears in the clause).
If we consider the assignment $A$ obtained from the memoryless strategy, then
since $\psi$ is not satisfiable it follows that the assignment $A$ is
conflicting.
Hence there must exist clauses $C_i$ and $C_j$ and a variable $x_k$ such that the
strategy chooses the literal $x_k$ in $C_i$ and the complement literal
$\overline{x}_k$ in $C_j$.
The strategy for Player~1 that alternates at the starting position between
clauses $C_i$ and $C_j$, together with an initial credit of $1$ for the
components of $x_k$ and $\overline{x}_k$ and~$0$ for all other components,
ensures that the strategy for Player~2 is not winning.
Hence the answer to the unknown initial credit problem is ``Yes", and we have
the following result.
\begin{lemma}\label{thrm_hard}
The unknown initial credit and the mean-payoff threshold problems for
multi-weighted two-player game structures are coNP-hard.
\end{lemma}
\noindent Observe that our hardness proof works with weights restricted to the
set $\{-1,0,1\}$.
\begin{comment}
\section{$\Pi^2$ Upper-Bound}
In this section, we present a nondeterministic polynomial time algorithm
that makes one call to a nondeterministic polynomial time oracle to recognize
the instances for which there is no winning strategies for Player~1 in an multi-energy game.
First, we show that the one-player version of this game can be solved
by checking the existence of a circuit in $G$ with overall positive effect
in all the dimensions. Second, we build on this and the memoryless results
for Player~2 recalled in the previous section to define our $\Pi^2$ algorithm
($\Pi^2$ is the second level of the polynomial hierarchy and represents the
class of algorithms described as a coNP algorithm with a NP oracle).
\smallskip\noindent{\bf Finding all-dimension positive circuit in weighted graphs.}
Let $G=(S_1,S_2,s_{\sf init},E,k,w)$ be a one-player multi-weighted game structure (i.e., $S_2 = \emptyset$).
Given a set of edges $F \subseteq E$, we note $S(F)$ the set of states that
are source or target of at least one edge in $F$ i.e.,
$S(F)=\{ s \mid \exists (s_1,s_2) \in F : s=s_1 \lor s=s_2\}$.
A set $F \subseteq E$ of edges is called {\em well-connected} if for all $s,s' \in S(F)$,
there exists a sequence of edges $(s_1,s_1')(s_2,s_2') \dots (s_n,s_n') \in F^{*}$
such that: $(i)$ $s=s_1$, $(ii)$ $s'=s_n'$, and $(iii)$ for all $i$, $1 \leq i < n$,
$s_i'=s_{i+1}$.
Let $F$ be a set of edges, we write $F(s,\cdot)$ for the set $\{ (s_1,s_2)\in F \mid s_1=s \}$,
and $F(\cdot,s)$ for the set $\{ (s_1,s_2) \in F \mid s_2=s \}$. Let $\{ x_e \mid e \in F \}$
be a set of integer variables, one for each edge $e \in F$. For each $s \in S(F)$,
let us note ${\sf Flow}(s)$ the following equation: $\sum_{e \in F(\cdot,s)} x_e = \sum_{e \in F(s,\cdot)} x_e$.
Let $F \subseteq E$ be a well-connected set of edges, we note ${\sf Circuit}(F)$ the
following set of linear constraints:
\begin{equation}
\bigwedge_{s \in F(s)} {\sf Flow}(s) \wedge \bigwedge_{e \in F} x_e > 0
\end{equation}
\begin{lemma}
\label{lem:eurlerian}
Let $G=(S_1,S_2,s_{\sf init},E,k,w)$ be a one-player multi-weighted game structure,
$F \subseteq E$ be a well-connected set of edges, $v : \{ x_e \mid e \in F \} \rightarrow \mathbb{Z}$
be a solution to the system ${\sf Circuit}(F)$. For every $s \in S(F)$,
there exists a path $\rho=(s_1,s_1') (s_2,s_2') \dots (s_n,s_n')$ in $G$ such that:
\begin{enumerate}
\item $\rho \in F^{*}$ i.e., $\rho$ contains only edges in $F$
\item $s_1=s$, and $s_n'=s$ i.e., $\rho$ is a circuit on $s$
\item $v(e)$ is equal to the number of occurrences of $e$ in $\rho$.
\end{enumerate}
\end{lemma}
\begin{proof}
To $F$ and $v$, we associate the following directed graph $H=(V,D)$, where:
\begin{itemize}
\item $V=S(F) \cup \{ (s,((s,s'),i)) \mid (s,s') \in F \land 1 \leq i \leq v(x_{(s,s')}) \}$
i.e., one vertex is associated to each vertex in $F(S)$ and $v(x_e)$ vertices
are associated to each edge $e$;
\item $D= \{ (s,((s,s'),i)) \mid 1 \leq i \leq v(x_{(s,s')}) \} \cup \{ ((s',s),i),s) \mid 1 \leq i \leq v(x_{(s',s)}) \}$
\end{itemize}
The directed graph $H$ is strongly connected and every vertex in the graph has
equal {\em in} and {\em out degrees}, so $H$ contains a Eulerian circuit.
Clearly, from that Eulerian circuit, we can construct a circuit for any position
$s \in F$ in the one-player game structure $G$ such that the multiplicity of
edge $e$ in this circuit is exaclty $v(x_e)$.
\hfill\qed
\end{proof}
The following set of linear constraints, noted ${\sf PosCirsuit}(F)$ expresses
that the circuit must have overall positive effect on all the dimensions:
\begin{equation}
\label{eqn:poscircuit}
{\sf Circuit}(F) \land \bigwedge_{1 \leq i \leq k} \big( \sum_{e \in F} x_e \cdot w(i) \geq 0 \big)
\end{equation}
\begin{corollary}
To any solution $v : F(S) \rightarrow \mathbb{Z}$ to the constraints
in ${\sf PosCirsuit}(F)$ corresponds a circuit through the states in $S(F)$ with
positive effect on all the dimensions.
\end{corollary}
Conversely, we show the following lemma.
\begin{lemma}\label{lemm-converse}
We can associate to any circuit through the states in $S(F)$ with positive effect
on all the dimensions, a solution $v : \{ x_e \mid e \in F \} \rightarrow \mathbb{Z}$ to the constraints
in ${\sf PosCirsuit}(F)$.
\end{lemma}
\begin{proof}
Let $\rho$ be a circuit in $G$ with positive overall effect. Let $F$ the edges
used in this circuit and $S(F) \subseteq S_1 \cup S_2$ the positions appearing
as sources or targets of those edges. Let $v : F \rightarrow \mathbb{N}_0$ the
number of times that each edge appear in the circuit $\rho$. It is not difficult
to see that $v$ is a valuation that satisfies all the constraints in ${\sf PosCirsuit}(F)$.
\hfill\qed
\end{proof}
So, finding a solution to the system ${\sf PosCirsuit}(F)$ allows us to decide
if, in a strongly connected directed graphs, there is a circuit that uses each edge at least
once and has positive effect on all dimensions. Furthermore,
the following lemma states that the system of linear constraints ${\sf PosCirsuit}(F)$
has an integer solution if and only if it has a rational solution. This allow us to solve it
in polynomial time.
\begin{lemma}
The set of constraints ${\sf PosCirsuit}(F)$ has a integer solution iff it has a
rational solution.
\end{lemma}
\begin{proof}
This is because any multiple of a solution to ${\sf PosCirsuit}(F)$ is also a solution.
\hfill\qed
\end{proof}
\end{comment}
\section{NP-completeness for Memoryless Strategies}
In this section we consider the unknown initial credit and the mean-payoff
threshold problems for multi-weighted two-player game structures when
Player~1 is restricted to use memoryless strategies.
We will show NP-completeness for these problems.
\begin{lemma}\label{lemm_memless_1}
The unknown initial credit and the mean-payoff threshold problems for
multi-weighted two-player game structures for memoryless strategies for
Player~1 lie in NP.
\end{lemma}
\begin{proof}
The inclusion in NP is obtained as follows: the polynomial witness is the
memoryless strategy for Player~1, and once the strategy is fixed we obtain
a game graph with choices for Player~2 only.
The verification problem for the unknown initial credit checks that in every
dimension there is no negative cycle, and the verification problem for the
mean-payoff threshold checks that in every dimension every cycle satisfies the
threshold condition.
Both of the above verification problems can be solved in polynomial time by
solving the energy-game and mean-payoff-game problems on graphs with choices
for Player~2 only~\cite{Karp,BFLMS08,CAHS03}.
The desired result follows.
\end{proof}
Lemma~\ref{lemm_memless_2} shows NP-hardness for dimension $k=2$
and arbitrary integral weights, and is obtained by a reduction
from the {\sc Knapsack} problem.
If $k=1$, then the problems reduce to the classical energy and mean-payoff games,
and are in NP $\cap$ coNP~\cite{BFLMS08,CAHS03,ZP96}
(so the hardness result cannot be obtained for $k=1$).
\begin{lemma}\label{lemm_memless_2}
The unknown initial credit and the mean-payoff threshold problems for
multi-weighted two-player game structures for memoryless strategies for
Player~1 are NP-hard, even in one-player game structures with dimension $k=2$ for
the weight function.
\end{lemma}
\begin{proof}
We present a reduction from the {\sc Knapsack} problem.
The {\sc Knapsack} problem consists of a set $I=\{1,2, \ldots,n\}$ of
$n$ items, for each item $i$ there is a profit $p_i \in \mathbb{N}$ and a weight
$w_i \in \mathbb{N}$.
Given a weight bound $B$ and profit bound $P$, the {\sc Knapsack} problem
asks whether there exists a subset $J \subseteq I$ of items such that
(a)~$\sum_{j \in J} w_j \leq B$; and (b)~$\sum_{j \in J} p_j \geq P$
(i.e., a profit of $P$ can be accumulated without exceeding weight $B$).
The {\sc Knapsack} problem is NP-hard~\cite{PapaBook}.
Our reduction is as follows: given an instance of the {\sc Knapsack} problem
we construct a one-player game structure with a weight function of dimension
$2$.
The set of positions is as follows:
$S_1 = I \cup \{(i,j) \mid i \in I, j \in\{Y,N\}\} \cup \{n+1\}$ and $S_2 = \emptyset$.
The set of edges is as follows:
$E=\{(i,(i,Y)), (i,(i,N)) \mid i \in I\} \cup
\{((i,Y),i+1), ((i,N),i+1) \mid i \in I\} \cup \{(n+1,1)\}.$
Intuitively, in the game structure, for every item Player~1 has a choice of
``Yes" (edge from $i$ to $(i,Y)$) to select item $i$, and choice of ``No"
(edge from $i$ to $(i,N)$) to not select item $i$.
From $(i,Y)$ and $(i,N)$ the next position is $i+1$, and from the position
$n+1$ the next position is~1.
The weight function $w:E \to \mathbb{Z}^2$ has two dimensions:
(a)~for edge $e=(i,(i,N))$ we have $w(e)=(0,0)$ (i.e., for the choice of ``No"
all the weights are~0);
(b)~for an edge $e=(i,(i,Y))$ we have $w(e)=(p_i,-w_i)$ (i.e., for the choice
of ``Yes", the first component gains the profit and the second component
loses the weight of item $i$);
(c)~for an edge $e=((i,Y),i+1)$ or $e=((i,N),i+1)$ we have $w(e)=(0,0)$; and
(d)~for the edge $e=(n+1,1)$ we have $w(e)=(-P,B)$ (i.e., there is a loss of
$P$ in the first component and a gain of $B$ in the second component).
The construction is illustrated in Fig.~\ref{figure:knapsack}.
Given a solution $J$ for the {\sc Knapsack} problem, the memoryless strategy
that chooses $(j,(j,Y))$ for $j \in J$, and $(j',(j',N))$ for
$j' \in I \setminus J$, with initial credit $(0,B)$, is a solution for the
unknown initial credit problem.
Conversely, given a memoryless strategy $\lambda_1$ for the unknown initial credit problem,
the set $J=\{j \in I \mid \lambda_1(j)=(j,Y)\}$ is a solution to
the {\sc Knapsack} problem.
The argument for the mean-payoff threshold problem is analogous.
The result follows.
\end{proof}
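One round of the constructed cycle, under the memoryless strategy that selects exactly the items in $J$, has total multi-weight $(\sum_{j\in J} p_j - P,\; B - \sum_{j\in J} w_j)$, so the strategy maintains the energies from initial credit $(0,B)$ iff both components are nonnegative. A small sketch with a hypothetical instance:

```python
# Hedged sketch of the Knapsack reduction: the per-round effect of the
# cycle is (sum of chosen profits - P, B - sum of chosen weights); the
# memoryless strategy for J wins iff both components are >= 0.

def cycle_effect(items, J, P, B):
    """items: list of (profit, weight) pairs; J: indices of chosen items."""
    gain = sum(items[j][0] for j in J)
    load = sum(items[j][1] for j in J)
    return (gain - P, B - load)

items = [(10, 5), (6, 4), (4, 3)]      # hypothetical Knapsack instance
ok = cycle_effect(items, [0, 1], P=15, B=9)
assert ok == (1, 0) and all(c >= 0 for c in ok)   # J = {0, 1} wins
bad = cycle_effect(items, [0], P=15, B=9)
assert bad[0] < 0                                  # profit falls short of P
```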
\begin{figure}[ht]
\begin{center}
\unitlength = 3mm
\begin{picture}(53,16)(0,0)
\gasset{Nw=4,Nh=4,Nmr=2}
\node(x2?)(2,8){$1$}
\node(x2)(7,14){$(1,Y)$}
\node(!x2)(7,2){$(1,N)$}
\node(y2?)(12,8){$2$}
\node(y2)(17,14){$(2,Y)$}
\node(!y2)(17,2){$(2,N)$}
\node(z2?)(22,8){$3$}
\node(n2?)(32,8){$n$}
\node(n2)(37,14){$(n,Y)$}
\node(!n2)(37,2){$(n,N)$}
\node(fin2)(42,8){$n+1$}
\drawedge(x2?,x2){$(p_1,-w_1)$}
\drawedge[ELside=r](x2?,!x2){$(0,0)$}
\drawedge[ELside=r](x2,y2?){$(0,0)$}
\drawedge(!x2,y2?){$(0,0)$}
\drawedge(y2?,y2){$(p_2,-w_2)$}
\drawedge[ELside=r](y2?,!y2){$(0,0)$}
\drawedge[ELside=r](y2,z2?){$(0,0)$}
\drawedge(!y2,z2?){$(0,0)$}
\drawedge(n2?,n2){$(p_n,-w_n)$}
\drawedge[ELside=r](n2?,!n2){$(0,0)$}
\drawedge[ELside=r](n2,fin2){$(0,0)$}
\drawedge(!n2,fin2){$(0,0)$}
\drawedge[dash={0.5}0](z2?,n2?){}
\gasset{Nw=5,Nh=5,Nmr=2.5,curvedepth=15}
\drawline(44,8)(45.5,8)
\node[Nframe=n](edgef)(49,8){$(-P,B)$ to~1}
\end{picture}
\end{center}
\caption{{\sc Knapsack} Reduction.}
\label{figure:knapsack}
\end{figure}
In Lemma~\ref{lemm_memless_3} we show the hardness of the
problem when the weights are in $\{-1,0,1\}$, but the dimension is arbitrary.
It has been shown in~\cite{Cha10} that if the weights are $\{-1,0,1\}$
and the dimension is~2, then the problem can be solved in polynomial time.
\begin{lemma}\label{lemm_memless_3}
The unknown initial credit and the mean-payoff threshold problems for
multi-weighted two-player game structures for memoryless strategies for
Player~1 are NP-hard, even in one-player game structures when weights are
restricted to $\{-1,0,1\}$.
\end{lemma}
\begin{proof}
We present a reduction from the 3SAT problem.
Consider a 3SAT formula $\Phi$ over a set $X=\{x_1,x_2,\ldots,x_n\}$ of
variables, and a set $C_1,C_2,\ldots,C_m$ of clauses such that each
clause has 3-literals (a literal is a variable or its complement).
We construct a one-player game structure with a weight function of
dimension $m$ from $\Phi$.
The set of positions is $S_1 = X \cup \{(x_i,j) \mid x_i \in X, j \in \{T,F\}\} \cup \{x_{n+1}\}$
and $S_2 = \emptyset$.
The set of edges is as follows:
$E=\{(x_i,(x_i,T)), (x_i,(x_i,F)) \mid x_i \in X\} \cup
\{((x_i,T),x_{i+1}), ((x_i,F),x_{i+1}) \mid x_i \in X\} \cup \{(x_{n+1},x_1)\}.$
Intuitively, in the game structure, for every variable Player~1 has a choice to set
$x_i$ as ``True" (edge from $x_i$ to $(x_i,T)$), and choice to set $x_i$ as ``False"
(edge from $x_i$ to $(x_i,F)$).
From $(x_i,T)$ and $(x_i,F)$ the next position is $x_{i+1}$, and from the
position $x_{n+1}$ the next position is~$x_1$.
The construction of the graph is similar to that in Fig.~\ref{figure:knapsack}.
The weight function $w:E \to \mathbb{Z}^m$ has $m$ dimensions:
(a)~for an edge $e=(x_i,(x_i,T))$ (resp. $e=(x_i,(x_i,F))$) and $1\leq k \leq m$,
the $k$-th component of $w(e)$ is~1 if the choice of $x_i$ as ``True" (resp. ``False")
satisfies clause $C_k$, and otherwise the $k$-th component is~0;
(b)~for edges $e=((x_i,j),x_{i+1})$, with $j \in \{T,F\}$, every component of $w(e)$ is~0; and
(c)~for the edge $e=(x_{n+1},x_1)$ and all $1 \leq k \leq m$, the $k$-th component of
$w(e)$ is~$-1$.
If $\Phi$ is satisfiable, then consider a satisfying assignment $A$, and we construct a
memoryless strategy $\lambda_1$ as follows: for a position $x_i$, if $A(x_i)$ is ``True", then
choose $(x_i,T)$, otherwise choose $(x_i,F)$.
The memoryless strategy $\lambda_1$ with initial credit vector $\{0\}^m$ ensures that the
answer to the unknown initial credit problem for memoryless strategies is ``Yes".
Conversely, if there is a memoryless strategy $\lambda_1$ for the unknown initial credit
problem, then the memoryless strategy must satisfy every clause.
A satisfying assignment $A$ for $\Phi$ is as follows: $A(x_i)$ is ``True" if $\lambda_1(x_i)=(x_i,T)$,
and ``False", otherwise.
It follows that $\Phi$ is satisfiable iff the answer to the unknown initial credit problem
for memoryless strategies is ``Yes".
The argument for the mean-payoff threshold problem is analogous.
The desired result follows.
\end{proof}
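The cycle effect of a memoryless choice of truth values can be computed directly: each clause component gains $1$ for every chosen literal occurring in that clause and loses $1$ on the closing edge, so one round is nonnegative in every dimension iff the induced assignment satisfies all clauses. A sketch with hypothetical clauses (the string encoding is ours):

```python
# Hedged sketch of the per-round effect in the 3SAT reduction for
# memoryless strategies: component k accumulates +1 per variable choice
# that satisfies C_k and -1 on the edge (x_{n+1}, x_1).

def cycle_effect(clauses, assignment):
    """clauses: lists of literals like ["x1", "!x2", "x3"];
    assignment: dict variable -> bool (the memoryless choices)."""
    eff = [-1] * len(clauses)               # closing edge (x_{n+1}, x_1)
    for var, val in assignment.items():
        lit = var if val else "!" + var
        for k, clause in enumerate(clauses):
            if lit in clause:
                eff[k] += 1                 # this choice satisfies C_k
    return eff

clauses = [["x1", "x2", "x3"], ["!x1", "x2", "!x3"]]
sat = cycle_effect(clauses, {"x1": True, "x2": True, "x3": False})
assert all(c >= 0 for c in sat)             # satisfying assignment wins
unsat = cycle_effect(clauses, {"x1": True, "x2": False, "x3": True})
assert min(unsat) < 0                       # clause C_2 is left unsatisfied
```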
The following theorem follows from the results of Lemma~\ref{lemm_memless_1}, Lemma~\ref{lemm_memless_2}
and Lemma~\ref{lemm_memless_3}.
\begin{theorem}\label{thrm_memless}
The unknown initial credit and the mean-payoff threshold problems for
multi-weighted two-player game structures for memoryless strategies for
Player~1 are NP-complete.
\end{theorem}
\section{Conclusion}
In this work we considered games with multiple mean-payoff and energy
objectives, and established determinacy under finite-memory, inter-reducibility of
these two classes of games for finite-memory strategies, and improved the
complexity bounds from EXPSPACE to coNP-complete.
Two interesting problems are open: (A)~for generalized mean-payoff games,
the winning strategies with infinite memory are more powerful than finite-memory strategies,
and the complexity of solving generalized mean-payoff games with infinite-memory
strategies remains open.
(B)~it is not known how to compute the exact or
approximate Pareto curve (trade-off curve) for multi-objective mean-payoff and
energy games.
\smallskip\noindent{\bf Acknowledgement.}
We are grateful to Jean Cardinal for pointing out the reference~\cite{KS88}.
\bibliographystyle{plain}
\section{Introduction}
The stellar atmospheres of the first generations of low-mass ($M\leq$
0.8 $M_\odot$) stars are expected to retain, to a large extent, detailed
information on the chemical composition of the nearly pristine gas of
the interstellar medium (ISM) at the time and place of their birth.
Detailed abundance analyses of metal-poor stars thus enable studies of
the formation and evolution of the elements in the early Galaxy. At
these early times, the two light elements carbon and lithium play a major
role in cosmological studies, as well as in our understanding of early
star formation. In addition, the production site(s) of the elements
beyond the iron peak remains a major unanswered question.
Recent studies, such as \citet{carollo2012}, \citet{lee2013}, and
\citet{norris2013b} confirm that carbon-enhanced metal-poor (CEMP)
stars\footnote{Originally defined by \citet{beerschristlieb2005} as metal-poor
($\mathrm{[Fe/H]} \leq -1.0$) stars with $\mathrm{[C/Fe]}\geq +1.0$; a level
of carbon enrichment $\mathrm{[C/Fe]} \geq +0.7$ is used in most contemporary
work.} constitute a large fraction of the most metal-poor stars known and that
the fraction of CEMP stars increases dramatically with decreasing
metallicity, accounting for $\sim$40\% of all stars with
$\mathrm{[Fe/H]} \leq -3.5$. In fact, four of the five stars previously known
with $\mathrm{[Fe/H]}< -4.5$ are confirmed CEMP stars
\citep{christlieb2002,frebel2005, norris2007, caffau2011, keller2014}. In this
paper, we add another confirmed CEMP star with $\mathrm{[Fe/H]} < -4.5$, as
described below.
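Throughout this paper, abundance ratios are quoted in the standard bracket
notation,
\[
\mathrm{[X/Y]} = \log_{10}(N_{\rm X}/N_{\rm Y})_{\star}
              - \log_{10}(N_{\rm X}/N_{\rm Y})_{\odot},
\]
so that, for example, $\mathrm{[C/Fe]} = +1.0$ corresponds to a tenfold
enhancement of carbon relative to iron, compared with the solar ratio.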
\citet{beerschristlieb2005} specify a nomenclature that identifies a
number of subclasses for CEMP stars. The CEMP-$s$ stars exhibit
over-abundances of elements predominantly produced by the so-called slow
neutron-capture process, or $s$ process, such as barium. These stars are
the most commonly observed subclass of CEMP stars; around 80\% of CEMP
stars exhibit $s$-process-element enhancements \citep{aoki2007},
including both the CEMP-$s$ and CEMP-$r/s$ subclass (stars that, in
addition to exhibiting $s$-process element enhancement, are also
enhanced in elements predominantly produced in the rapid neutron-capture
process, or $r$ process, such as europium). The favored scenario for the
production of CEMP-$s$ (and CEMP-$r/s$) stars is mass transfer of
carbon- and $s$-process-enhanced material from the envelope of an
asymptotic giant branch (AGB) star to its (presently observed) binary
companion \citep[e.g., ][]{herwig2005,sneden2008}. Observational
evidence now exists to suggest that the CEMP-$r/s$ stars (and other
$r$-process-element-rich stars) were enhanced in $r$-process elements in
their natal gas clouds by previous generations of supernovae (SNe), and
did not require a contribution of $r$-process elements from a binary
companion \citep[see ][]{thansen2011}.
The CEMP-no subclass includes CEMP stars that exhibit no enhancements in
their neutron-capture elements. It has been shown that at extremely low
metallicity, $\mathrm{[Fe/H]} < -3.0$, the CEMP-no stars are the
dominant subclass \citep{aoki2010,norris2013b}. Different progenitors
have been suggested for the CEMP-no stars, such as pollution by faint SNe that
experienced extensive mixing and fallback during their explosions
\citep{umeda2003,umeda2005,tominaga2007,tominaga2013,ito2009,ito2013,nomoto2013},\footnote{This
model reproduces well the observed elemental-abundance pattern of the
CEMP-no star BD+44$^{\circ}$493, the ninth-magnitude, $\mathrm{[Fe/H]} =
-3.8$ star (with $\mathrm{[C/Fe]} = +1.3$, $\mathrm{[N/Fe]} = +0.3$,
$\mathrm{[O/Fe]} = +1.6$) discussed by \citet{ito2009,ito2013}.}
winds from massive, rapidly rotating, mega metal-poor ($\mathrm{[Fe/H]} <
-6.0$) stars, sometimes referred to as ``spinstars''
\citep{hirschi2006,meynet2006,hirschi2007,meynet2010,cescutti2013}, or mass
transfer from an AGB star companion \citep{suda2004, masseron2010}. This
latter explanation encounters difficulties, however, when confronted
with results from recent radial-velocity monitoring programs.
Radial-velocity data support the expected differences in the
binary nature of CEMP-$s$ (CEMP-$r/s$) and CEMP-no stars.
\citet{lucatello2005} argued that multiple-epoch observations of
CEMP-$s$ stars are consistent with essentially all CEMP-$s$ (CEMP-$r/s$)
stars being members of binary systems. Although more data are desired for
CEMP-no stars, \citet{thansen2013} report that the fraction of binaries
among stars of this class is no higher than expected for random samples
of very metal-poor giants. \citet{norris2013b} reach similar conclusions
for their limited radial-velocity data for a number of CEMP-no stars.
The measured Li abundances of CEMP stars do not show an obvious
correlation with C at the lowest metallicities, but do exhibit a general
downward trend with declining [Fe/H]. \citet{masseron2012} considered
the CEMP stars with measured Li reported in the literature and added 13
new stars to the list; they highlight the large spread in the measured
Li abundances for CEMP stars. In addition to the production of Li during
big bang nucleosynthesis, Li can also be produced via the Cameron--Fowler
mechanism in AGB stars if $^7$Be, created at the bottom of the
convective envelope, captures an electron \citep{sackmann1992}. If CEMP
stars are the result of mass transfer from an AGB companion, then the Li
abundances in CEMP stars will reflect a combination of (1) Galactic
chemical evolution and (2) Li production/destruction in the AGB
companion. Additional data are necessary to explore and test this
hypothesis in more detail. It should also be recalled that
\citet{piau2006} argued that primordial Li could be astrated by
first-generation stars, objects similar in nature to the massive-star
progenitors suggested for CEMP-no stars. In this view, the fact that Li
abundances for CEMP-no stars are always {\it below} the level of the
Spite Li plateau \citep[see e.g., ][]{masseron2012} can be understood as
the result of various degrees of local mixing between Li-astrated
material ejected from first-generation stars and the surrounding gas
having the primordial level of Li.
Most elements beyond the iron peak are produced by neutron capture,
either in the $s$ process or the $r$ process \citep[e.g.,
][]{burbidge1957,sneden2008}. The neutron-capture elements strontium and
barium are those that are most easily measured in low-metallicity
stars. At solar metallicity, these elements are produced in the main
$s$ process in AGB stars \citep{busso1999,kappeler2011}, but at low
metallicity, AGB stars may not have had time to sufficiently enrich the
ISM. Hence, the Sr and Ba abundances observed in low-metallicity stars are
presumably produced via the main $r$ process, most likely occurring in the
final stages of the life of massive stars \citep{truran1981,thielemann2011},
or in the weak $s$ process suggested to occur in spinstars
\citep{pignatari2008,cescutti2013}.
More studies of the lowest metallicity stars are required to gain a
deeper understanding of the nucleosynthesis processes taking place in
the early universe, for both the light and heavy elements, since at
present fewer than 10 stars with $\mathrm{[Fe/H]} \leq -4.2$ have been
analyzed. This paper presents four newly discovered ultra metal-poor (UMP)
stars ($\mathrm{[Fe/H]} < -4.0$), three of which are enhanced in
carbon but not in neutron-capture elements, and are hence classified as
CEMP-no stars. We also detect lithium in the spectra of two of the
stars, one of these being the second most metal-poor star with detected
Li known to date.
\section{Observations and Data Analysis}
The four stars presented in this paper are part of a larger sample of
metal-poor candidates selected from the Hamburg/ESO survey, followed up
with medium-resolution spectroscopy on a variety of 2$-$4 m class
telescopes, then observed at high spectral resolution with Very Large
Telescope (VLT)/UVES \citep{dekker2000}. The complete sample will be presented
in Paper~II of this series, along with a detailed description of the
observations, data reduction procedure, parameter determination, and abundance
analysis. Here, only the key points of the techniques employed are listed.
Figure \ref{fig1} shows the medium-resolution spectra of the
program stars. It is possible to see features such as the Ca\,{\sc ii}~K
line, $H_{\beta}$, $H_{\gamma}$, and $H_{\delta}$, as well as the CH and
CN molecular carbon bands for HE~1310$-$0536. Both the Southern Astrophysical
Research (SOAR) 4.1 m and KPNO/Mayall 4 m data have a wavelength coverage of 3550--5500\,{\AA},
with a resolving power of $R\sim 1500$ and signal-to-noise ratios of
S/N$\sim 30$ per pixel at 4000\,{\AA}. For the ESO~3.6 m data, the
resolving power and signal-to-noise were similar to the SOAR 4.1 m and Mayall
4 m data, but the wavelength range is narrower, covering the interval
3700--5100\,{\AA}.
Medium-resolution spectra obtained with the Wide Field Spectrograph
\citep[WiFeS; ][]{dopita2007} on the Australian National University
2.3 m Telescope at Siding Spring Observatory were used for the
temperature determination.
The high-resolution data were obtained during the nights of 2005 November 17
and 20, and 2006 April 17. The data cover a wavelength range from 3100\,{\AA}
to 9500\,{\AA}, with a resolving power of $R \sim$ 45000. The spectra were
reduced using the UVES reduction pipeline, version 4.9.8. Radial-velocity
shifts, co-addition of the spectra, and continuum normalization were all
performed using IRAF\footnote{IRAF is distributed by the National Optical
Astronomy Observatory, which is operated by the Association of Universities
for Research in Astronomy (AURA) under a cooperative agreement with the National Science
Foundation.}. The average S/N of the reduced spectra is
$\sim 10$, $\sim 30$, and $\sim 55$ pixel$^{-1}$ at 3400\,{\AA},
4000\,{\AA}, and 6700\,{\AA}, respectively.
\begin{figure*}
\begin{center}
\includegraphics[angle=0,width=6.8in]{medres.eps}
\end{center}
\caption{Medium-resolution spectra of our four program stars. The locations of
the Ca\,{\sc ii}~K line, $H_{\beta}$,
$H_{\gamma}$, and $H_{\delta}$ lines are shown. For HE~1310$-$0536, the CH and
CN molecular carbon bands are clearly visible. \label{fig1}}
\end{figure*}
\subsection{Stellar Parameters}
The stellar atmospheric parameters were determined by standard
techniques, generally following the steps outlined in \citet{yong2013}.
Effective temperatures were determined by fitting the spectrophotometric
observations with model atmosphere fluxes \citep{bessell2007,
norris2013a}. LTE model atmosphere fluxes from the MARCS grid
\citep{gustafsson2008}, with $\mathrm{[\alpha/Fe]}=+0.4$, were used for
the model fitting. Estimates of surface gravity were determined from the
$Y^2$ isochrones \citep{demarque2004}, assuming an age of 10 Gyr and an
$\alpha$-element enhancement of $\mathrm{[\alpha/Fe]}=+0.3$. These
isochrones only extend down to $\mathrm{[Fe/H]}=-3.5$; therefore, a linear
extrapolation down to $\mathrm{[Fe/H]}=-4.7$ has been used to obtain the
surface-gravity estimates for our four stars. The average difference
between the listed surface gravities, where the actual $\mathrm{[Fe/H]}$
values have been used, and the surface gravity obtained using the
$\mathrm{[Fe/H]}=-3.5$ isochrone, is rather small (on the order of 0.07
dex). Metallicities were determined from equivalent-width measurements
of the Fe~\textsc{i} lines. Non-LTE (NLTE) effects might be present in the
Fe~\textsc{i} lines, which can affect the derived metallicity
\citep{lind2012}, but no Fe~\textsc{ii} lines were detected in any of the four
program stars. The measured Fe abundance may also be subject to uncertainties
from three-dimensional (3D) effects. \citet{collet2006} report a 3D correction
of $\sim -0.2$ dex for the Fe abundance for two of the most metal-poor stars
known (HE~0107$-$5240 and HE~1327$-$2326), both of which have temperatures and
gravities that are comparable, within the combined error bars, to those of the
stars presented in this paper. A better basis for comparison, at the
same metallicity as our program stars, is clearly desirable.
\citet{bergemann2012} found, however, that departures from LTE will
likely partly compensate such 3D LTE effects, leaving a smaller net
effect. Our stars have several Fe~\textsc{i} lines in common with the study
of \citet{bergemann2012}. A full 3D NLTE study is clearly warranted,
but beyond the scope of the present study.
The microturbulent velocity was computed in the usual way, by forcing
the abundances from Fe~\textsc{i} lines to show no trend with reduced equivalent
width, $\log(W_{\lambda}/\lambda)$. For HE~0233$-$0343, too few Fe~\textsc{i}
lines were present to determine the microturbulent velocity in this way,
so a fixed value of $\xi = 2$~km s$^{-1}$ was used for this star.
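The trend-nulling procedure described above can be sketched as follows. This
is an illustrative outline only, not the actual pipeline: \texttt{abund\_for\_xi}
is a hypothetical stand-in for a call to the spectrum-synthesis code, returning
one Fe~\textsc{i} abundance per line at a trial $\xi$, and \texttt{rew} holds
the corresponding reduced equivalent widths $\log(W_{\lambda}/\lambda)$.

```python
def _slope(xs, ys):
    """Least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

def solve_microturbulence(abund_for_xi, rew, xi_lo=0.5, xi_hi=3.5, tol=1e-3):
    """Bisect on the microturbulence xi (km/s) until the trend of
    line-by-line Fe I abundance with reduced equivalent width vanishes.

    abund_for_xi(xi) -- hypothetical call to the synthesis code giving
                        one abundance per measured Fe I line at trial xi
    rew              -- reduced equivalent widths, log(W_lambda/lambda)
    """
    f = lambda xi: _slope(rew, abund_for_xi(xi))
    lo, hi = xi_lo, xi_hi
    s_lo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        s_mid = f(mid)
        if s_lo * s_mid <= 0:   # zero crossing lies in [lo, mid]
            hi = mid
        else:
            lo, s_lo = mid, s_mid
    return 0.5 * (lo + hi)
```

The bisection assumes the slope decreases monotonically with $\xi$ (too small
a $\xi$ inflates abundances from strong lines, giving a positive trend), which
holds for the line lists typically used in such analyses.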
For the warmer stars, HE~0233$-$0343 and HE~2239$-$5019, two possible
solutions for the surface gravity were found. Several tests were made to
settle on the listed values, both consistent with subgiant, rather than
dwarf, classifications. This aspect will be explored further in Paper~II
of this series. The final stellar parameters and their associated
uncertainties are listed in Table~\ref{tab1}.
\subsection{Abundance Analysis}
The abundance analysis has been carried out by synthesizing individual
spectral lines with the 2011 version of MOOG \citep{sneden1973}, which
includes a proper treatment of continuum scattering \citep{sobeck2011}.
A set of $\alpha$-enhanced ATLAS9 models \citep{castelli2003} have been
used, along with interpolation software tested in \citet{allende2004},
which produces models with the required stellar parameters \citep[e.g.,
][]{reddy2003,allende2004}. For HE~0233$-$0343, the metallicity in the
model atmosphere was $\mathrm{[m/H]} = -4.5$, which differs by 0.18 dex
from the derived metallicity of the star. This offset is well within the
uncertainty of $\mathrm{[Fe/H]}$, and we therefore expect no change in any of
the abundances had a model with $\mathrm{[m/H]} = -4.7$ been used instead.
The {\it Gaia}/ESO line list version 3 has been used (Heiter et al., in
preparation). Atomic data from VALD \citep{kupka2000} were adopted for lines
not included in that line list. Hyperfine splitting was taken into account
for lines of Sc, Mn, and Co, using the data from \citet{kurucz1995}. For
Ba and Li, both hyperfine splitting and isotope shifts are present, and
data from \citet{mcwilliam1998} and \citet{asplund2006} were included,
respectively. The molecular information for CH, CN, and NH was kindly
provided by T. Masseron (private communication).
The derived elemental abundances, along with propagated uncertainties
arising from the effects of uncertain stellar parameters, continuum
placement, and line information, are listed in Table~\ref{tab1}. The
adopted solar abundances are from \citet{asplund2009}. All listed
abundances are derived under one-dimensional (1D) and LTE assumptions. NLTE
effects will be explored in Paper~II.
\begin{deluxetable*}{lrrrr}
\tablecaption{Stellar Parameters and Derived Abundances \label{tab1}}
\tablewidth{0pt}
\tablehead{
\colhead{} & \colhead{\object{HE 0134$-$1519}} & \colhead{\object{HE 0233$-$0343}} &
\colhead{\object{HE 1310$-$0536}} & \colhead{\object{HE 2239$-$5019}}}
\startdata
R.A. & 01 37 05.4 & 02 36 29.7 & 13 13 31.2 & 22 42 26.9\\
Decl. & $-$15 04 24 & $-03$ 30 06 & $-05$ 52 13 & $-50$ 04 01\\
$V$\tablenotemark{a} & 14.47& 15.43& 14.35& 15.85\\
$B-V$\tablenotemark{a} & 0.50 & 0.34 & 0.71 & 0.39 \\
$J-K$\tablenotemark{a} & 0.43 & 0.30 & 0.64 & 0.40 \\
Radial velocity (km s$^{-1}$) & 244 & 64 & 113 & 370 \\
\cutinhead{Parameters}
$T_{\rm eff}$ ($\pm$100~K) & 5500 & 6100 & 5000 & 6100 \\
$\log g$ ($\pm$0.3~dex) & 3.2 & 3.4 & 1.9 & 3.5 \\
$[$Fe/H$]$ ($\pm$0.2~dex) & $-4.0$ & $-4.7$ & $-4.2$ & $-4.2$ \\
$\xi$ ($\pm$0.3~km s$^{-1}$) & 1.5 & 2.0 & 2.2 & 1.8 \\
\cutinhead{Abundances}
$A$(Li) & 1.27 (0.19) & 1.77 (0.18) & $<$0.80\nodata &$<$1.70\nodata \\
$\mathrm{[Fe/H]}$ & $-$3.98 (0.30) & $-$4.68 (0.30) & $-$4.15 (0.30) & $-$4.15 (0.30)\\
$\mathrm{[C/Fe]}$ & $+$1.00 (0.26) & $+$3.48 (0.24) & $+$2.36 (0.23) &$<$$+$1.70\nodata \\
$\mathrm{[N/Fe]}$ &$<$$+$1.00\nodata & $<$$+$2.80\nodata & $+$3.20 (0.37) &$<$$+$2.70\nodata \\
$\mathrm{[Na/Fe]}$ & $-$0.24 (0.15) & $<$$+$0.50\nodata & $+$0.19 (0.14) &$<$$-$0.30\nodata \\
$\mathrm{[Mg/Fe]}$ & $+$0.25 (0.14) & $+$0.59 (0.15) & $+$0.42 (0.16) & $+$0.45 (0.15) \\
$\mathrm{[Al/Fe]}$ & $-$0.38 (0.20) & $<$$+$0.03\nodata & $-$0.39 (0.21) & $-$0.57 (0.21) \\
$\mathrm{[Si/Fe]}$ & $+$0.05 (0.16) & $+$0.37 (0.15) & $<$$+$0.25\nodata & $+$0.06 (0.15) \\
$\mathrm{[Ca/Fe]}$ & $+$0.10 (0.13) & $+$0.34 (0.15) & 0.00 (0.20) & $+$0.23 (0.15) \\
$\mathrm{[Sc/Fe]}$ & $-$0.10 (0.18) & $<$$+$0.20\nodata & $-$0.23 (0.16) & $+$0.26 (0.16) \\
$\mathrm{[Ti/Fe]}$ & $+$0.11 (0.21) & $+$0.18 (0.17) & $+$0.35 (0.18) & $+$0.37 (0.17) \\
$\mathrm{[Cr/Fe]}$ & $-$0.22 (0.18) & $<$$+$0.50\nodata & $-$0.49 (0.26) & 0.00 (0.17) \\
$\mathrm{[Mn/Fe]}$ & $-$1.19 (0.19) & $<$$-$0.10\nodata & $-$1.40 (0.20) &$<$$-$0.60\nodata \\
$\mathrm{[Co/Fe]}$ & $+$0.25 (0.18) & $<$$+$1.60\nodata & $+$0.10 (0.16) &$<$$+$0.70\nodata \\
$\mathrm{[Ni/Fe]}$ & $+$0.19 (0.19) & $<$$+$0.90\nodata & $-$0.12 (0.20) & $+$0.24 (0.17) \\
$\mathrm{[Sr/Fe]}$ & $-$0.30 (0.19) & $+$0.32 (0.19) & $-$1.08 (0.14) &$<$$-$0.60\nodata \\
$\mathrm{[Ba/Fe]}$ &$<$$-$0.50\nodata & $<$$+$0.80\nodata & $-$0.50 (0.15) & $<$0.00\nodata \\
\enddata
\tablenotetext{a}{\citet{beers2007}}
\end{deluxetable*}
\section{Results}
\subsection{Radial Velocity}
Two of the stars listed in Table~\ref{tab1}, HE~0134$-$1519 and
HE~2239$-$5019, exhibit quite high radial velocities, 244~km~s$^{-1}$
and 370~km~s$^{-1}$, respectively. The uncertainty of the listed radial
velocities is on the order of $\sim 1$~km~s$^{-1}$. Such high velocities
may suggest membership in the proposed outer-halo population of the
Milky Way \citep{carollo2007, carollo2010, beers2012}. A kinematic
analysis of the full space motions of our complete program sample,
including the four stars reported on here, will be presented in Paper~II
of this series. In this context, it is interesting that
\citet{carollo2014} present tentative evidence that the CEMP-$s$ and
CEMP-no stars may well be associated with progenitors that belong, in
different proportion, to the suggested inner- and outer-halo populations
of the Milky Way.
\subsection{Elemental Abundances}
Our analysis has produced abundance estimates, or upper limits,
for 17 elements -- Li, C, N, Na, Mg, Al, Si, Ca, Sc, Ti, Cr, Mn, Fe, Co,
Ni, Sr, and Ba. We describe these analyses in detail in the subsections
below.
\subsubsection{Lithium}
We derived lithium abundances from synthesis of the Li~\textsc{i} 6707.8\,{\AA}
doublet. Lithium is detected for two of our program stars: HE~0134$-$1519, with
$A$(Li) = 1.27\footnote {$A$(Li) is defined in the usual manner, $A$(Li) $=
\log(N(\mbox{Li})/N(\mbox{H})) + 12$.}, and HE~0233$-$0343, with $A$(Li) =
1.77. Figure \ref{fig2} shows the spectral region around the Li line for two
of our stars (top: HE~0134$-$1519, and bottom: HE~0233$-$0343), together with
three synthetic spectra computed with $A$(Li) = 1.46, 1.27, and 1.08,
respectively, for HE~0134$-$1519, and $A$(Li) = 1.95, 1.77, and 1.59,
respectively, for HE~0233$-$0343. HE~0233$-$0343 is the second most metal-poor
star with a detected lithium line; lithium was also detected in the most
metal-poor star known, SMSS~J031300.36$-$670839.3 ($\mathrm{[Fe/H]} < -7$;
$A$(Li) = 0.7), recently discovered by \citet{keller2014}. Li is not detected
for the two remaining program stars; we computed upper limits of $A$(Li) $ <
0.8$ and $A$(Li) $ < 1.70$ for HE~1310$-$0536 and HE~2239$-$5019,
respectively. The very low upper limit for HE~1310$-$0536 is expected,
as this star is sufficiently evolved that it has undergone first dredge
up. Its convective zone likely extends down to layers in the atmosphere where
lithium has been destroyed by nuclear burning.
Figure \ref{fig3} displays the Li abundance for our two
CEMP-no stars with Li detections, as a function of their luminosity,
following Figure 16 of \citet{masseron2012}. Luminosities have been determined
in the same way as in \citet{masseron2012}, assuming $M=0.8M_\odot$. For
comparison, we also plot the CEMP-no stars of their sample. The solid line
marks the division between Li-normal (above) and Li-depleted (below)
stars. The line is computed from the Li abundance of non-CEMP stars with
luminosities in the range $-0.2<\log(L/L_\odot)<2.1$. The line follows the
Spite Li plateau for dwarf stars, then exhibits a linear decline in the Li
abundances of giants, where the Li is expected to be gradually depleted
due to convective burning episodes \citep[see ][ for
details]{masseron2012}. Stars outside the above range in luminosity are
expected to have destroyed all their Li. Note that HE~1310$-$0536, with
$\log(L/L_\odot) = 2.11$, falls outside that range. Our two Li detections
both lie above the Li-normal line, but with lithium abundances below the Spite
plateau. Hence, Li has been depleted in these stars, consistent with the
result found by \citet{masseron2012}, that the CEMP-no class {\it only}
contains Li-depleted stars, even at these low metallicities.
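For reference, with the mass fixed, the luminosity follows from the
spectroscopic parameters through the standard relation
\[
\log(L/L_\odot) = \log(M/M_\odot)
 + 4\log(T_{\rm eff}/T_{{\rm eff},\odot}) - (\log g - \log g_\odot),
\]
where we assume solar reference values $T_{{\rm eff},\odot} = 5777$~K and
$\log g_\odot = 4.44$; the exact zero points (and any bolometric corrections)
adopted by \citet{masseron2012} may differ slightly.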
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=3.5in]{UMPsynthLi.eps}
\end{center}
\caption{Li line fit for HE~0134$-$1519 (top) $A$(Li) = 1.46,
1.27, and 1.08 (blue dashed line, solid green line, and red dot-dashed
line, respectively) and HE~0233$-$0343 (bottom) $A$(Li) =
1.95, 1.77, and 1.59 (blue dashed line, solid green line, and red
dot-dashed line, respectively). The blue dashed and red dot-dashed lines
correspond to $A$(Li)$\pm\sigma$(Li), respectively, as listed in
Table~\ref{tab1}. \label{fig2}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=3.5in]{lumLi.eps}
\end{center}
\caption{LTE lithium abundances, $A$(Li), as a function of luminosity, for
HE~0134$-$1519 and HE~0233$-$0343 (green circles), along with the CEMP-no
stars of \citet{masseron2012} (black diamonds). Upper limits are indicated by
arrows. The solid line indicates the division between Li-normal (above) and
Li-depleted (below) stars. \label{fig3}}
\end{figure}
\subsubsection{Carbon}\label{carbon}
Three of our four program stars, HE~0134$-$1519, HE~0233$-$0343, and
HE~1310$-$0536, are carbon enhanced, with $\mathrm{[C/Fe]}\geq +0.7$.
They exhibit no enhancements in their neutron-capture elements
\citep[$\mathrm{[Ba/Fe]}\leq 0.0$;][]{beerschristlieb2005}, and are considered
CEMP-no stars. Technically, the status of HE~0233$-$0343 cannot be confirmed,
as only an upper limit for the Ba abundance of $\mathrm{[Ba/Fe]} < +0.8$ is
found. Considering that the great majority of CEMP stars with
$\mathrm{[Fe/H]}<-3$ are CEMP-no stars \citep{aoki2010}, and the fact that
there are no known CEMP-$s$ stars with [Fe/H] $< -3.5$, there is a high
likelihood that HE~0233$-$0343 also belongs to the CEMP-no class. The last of
the four stars, HE~2239$-$5019, shows no clear carbon enhancement; we compute
an upper limit of $\mathrm{[C/Fe]} < +1.7$ for this star. With no carbon
detected, this star is a potential candidate to be in the same class as
SDSS~J102915+172927, the only star with $\mathrm{[Fe/H]} < -4.5$ found not to
be carbon enhanced \citep{caffau2011}.
Figure~\ref{fig4} shows the spectral range including the CH
$G$ band for SDSS~J102915+172927, HE~2239$-$5019, and HE~0233$-$0343.
HE~0233$-$0343 has stellar parameters similar to those of HE~2239$-$5019, but it is
more iron poor and carbon enhanced. Similar to SDSS~J102915+172927, no
CH features are visible in HE~2239$-$5019. However, the noise level in the
spectrum of HE~2239$-$5019 is quite high, resulting in a high derived
upper limit on the carbon abundance, so it cannot be ruled out as being
a CEMP star.
Since three out of the four stars are carbon enhanced, the oxygen and
nitrogen abundances are also of interest. Nitrogen was detected in only
one star, HE~1310$-$0536, where the abundance listed in Table~\ref{tab1}
is derived from synthesis of the CN band at $3883$\,{\AA}. For the
remaining three stars, upper limits are derived from synthesis of the NH
band at $3360$\,{\AA}. Previous studies, such as \citet{sivarani2006}
and \citet{norris2013b}, have found a correlation of $\mathrm{[N/Fe]}$
with $\mathrm{[C/Fe]}$ for CEMP stars. The N abundance and upper limits
that we derive support this correlation. Oxygen was not detected in any of our
program stars, and the noise levels in the spectra were too high to compute a
meaningful upper limit on its abundance.
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=3.5in]{speccom.eps}
\end{center}
\caption{Spectral range including the CH $G$ band in the spectra of
SDSS~J102915+172927 (top), HE~2239$-$5019 (middle), and HE~0233$-$0343
(bottom). The carbon lines are clearly seen in the spectrum of
HE~0233$-$0343, but are absent in the other two spectra.\label{fig4}}
\end{figure}
\subsubsection{Light Elements and Neutron-Capture Elements}
Since the stars in this sample have been analyzed in a manner similar to
that of \citet{yong2013}, the two samples are directly comparable. In
the top panel of Figure \ref{fig5}, the mean [$\alpha$/Fe] (taken to be
the mean of $\mathrm{[Mg/Fe]}$, $\mathrm{[Ca/Fe]}$, and
$\mathrm{[Ti/Fe]}$) abundance ratios of our four stars is compared to
those of \citet{yong2013}. Their sample includes some of the most
metal-poor stars known to date (HE~0107$-$5240: \citet{christlieb2002};
HE~1327$-$2326: \citet{frebel2005}; and HE~0557$-$4840: \citet{norris2007}).
A small over-abundance of the [$\alpha$/Fe] ratio is seen in the four
new stars, consistent with the existing picture of the $\alpha$-element
abundances in metal-poor stars, reflecting the enrichment from
core-collapse SNe in the early universe. \citet{norris2013b}
found that 50\% of their CEMP stars are more enhanced in the light
elements Na, Mg, Al, and Si, compared to other (C-normal) EMP stars with
similar stellar parameters. Among our program stars, HE~0233$-$0343
exhibits higher abundances of these elements relative to the rest of the
sample. However, none of our stars show over-abundances of
these elements as large as those found for some CEMP stars in the sample of
\citet{norris2013b}. The observed abundances for Al and Mn in our four
stars lie somewhat below the level predicted by the Galactic chemical
evolution models of \citet{nomoto2013}. This may be due to NLTE effects.
\citet{gehren2004} report NLTE corrections of +0.5~dex for Al in a
sample of metal-poor turn-off stars, while \citet{bergemann2008} find
corrections of up to +0.7~dex for Mn in their sample of metal-poor giant
and dwarf stars. This would bring Al to the predicted level, whereas Mn
would stay just below.
The middle and bottom panels of Figure \ref{fig5} display the
$\mathrm{[Sr/Fe]}$ and $\mathrm{[Ba/Fe]}$ abundance ratios,
respectively, as functions of metallicity for our program stars and
those of \citet{yong2013}. Both samples exhibit a large spread in the
$\mathrm{[Sr/Fe]}$ and $\mathrm{[Ba/Fe]}$ ratios. The spread of
abundances for these two elements was also discussed by
\citet{chansen2012,chansen2013} and \citet{yong2013}, all suggesting that
more than one production site exists for Sr and Ba. The scatter in the Sr and
Ba abundances of EMP stars has also been discussed by \citet{aoki2013}, who
studied the [Sr/Ba] ratios in a sample of 260 EMP stars. They detected no
stars with $\mathrm{[Sr/Fe]} > 0.0$ for $\mathrm{[Fe/H]} < -3.6$ (note
that their sample only includes four stars with $\mathrm{[Fe/H]} <
-3.6$). They proposed to explain the distribution of the observed
[Sr/Ba] ratios with a truncated $r$ process taking place in a Type II SN, as
described by \citet{boyd2012}. \citet{aoki2013} also stated that neither the
$r$ process nor the truncated $r$ process is expected to produce stars with
$\mathrm{[Sr/Ba]} < -0.5$. They found six stars in their sample with
$\mathrm{[Sr/Ba]} < -0.5$, but suspected these to be contaminated with
$s$-process material.
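The [Sr/Ba] ratios discussed here follow directly from the tabulated
abundances,
\[
\mathrm{[Sr/Ba]} = \mathrm{[Sr/Fe]} - \mathrm{[Ba/Fe]};
\]
for HE~1310$-$0536, for example, $\mathrm{[Sr/Ba]} = -1.08 - (-0.50) = -0.58$.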
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=3.5in]{AlphaSr.eps}
\end{center}
\caption{Mean [$\alpha$/Fe] (top), [Sr/Fe] (middle), and [Ba/Fe] (bottom)
abundances for our four UMP stars (green circles) and the sample of
\citet{yong2013} (black crosses). Upper limits are indicated by arrows; the
dashed line is the solar value.\label{fig5}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=3.5in]{SrBaH.eps}
\end{center}
\caption{$\mathrm{[Sr/Ba]}$ ratios plotted against Ba abundances,
$\mathrm{[Ba/H]}$, for our three CEMP-no stars (green circles) and the sample
of \citet{yong2013} (black crosses). Arrows indicate upper limits; the dashed
red line indicates $\mathrm{[Sr/Ba]=-0.4}$. Ratios above this line indicate
production of Sr and Ba by the weak $s$ process in massive stars or by the
$r$ process, while those below indicate production by the main $s$ process
in AGB stars.\label{fig6}}
\end{figure}
\section{Discussion}
The lithium abundance in carbon-enhanced stars is a relatively
unexplored chapter in the history of Galactic chemical evolution;
theoretical efforts include \citet{stancliffe2009}. Only a few CEMP
stars have detected lithium and even fewer of these are CEMP-no stars,
though the samples of CEMP-no stars are increasing quickly, in
particular, from dedicated searches for CEMP stars \citep[e.g.,
][]{placco2010,placco2011,placco2013,placco2014}. We have detected Li
in two of the stars in our sample, and the derived abundances indicate
that Li is depleted in these stars relative to the Spite Li
plateau. These detections highlight the need for a progenitor of CEMP-no
stars that produces large amounts of carbon, but only small amounts of
neutron-capture elements, while to some extent depleting the lithium.
\citet{masseron2012} tested how mass transfer from an AGB companion
affects the Li abundance of a CEMP star. They examined a set of different
AGB models and different depletion factors for the transferred material,
but found that none of the models could explain the observed spread in
Li abundances of the CEMP-no stars of their sample. The other suggested
progenitor candidates for the CEMP-no stars include faint SNe
that experienced mixing and fallback, as well as spinstars. If these are
indeed the progenitors, the Li abundance of CEMP-no stars should lie
below the level found in non-carbon-enhanced stars, as Li should be
depleted (or totally destroyed) in such objects. Hence, when the gas
ejected from these objects mixes with the surrounding ISM (and forms the
CEMP-no stars), the overall Li abundance will be lowered
\citep{meynet2010}. In fact, as suggested by \citet{piau2006}, this
process might be responsible for the lowering of the primordial Li
abundance from the level predicted from big bang nucleosynthesis
calculations, the lack of scatter among stars on the plateau at
metallicities $-2.5 < \mathrm{[Fe/H]} < -1.5$, due to complete mixing
\citep[e.g., ][]{ryan1999}, the downturn and increase of scatter in the
Li abundances for stars with $\mathrm{[Fe/H]} < -2.5$, due to incomplete local
mixing \citep{sbordone2010}, and the very low (or absent) Li among the
lowest metallicity stars \citep[e.g., ][]{frebel2005,keller2014}.
The sample of Li measurements for CEMP-no stars is presently very
limited, and at this stage all of the proposed progenitors of CEMP-no
stars involve some variation of mixing. When mixing has occurred, it is
natural that the Li abundance is depleted, leading to lower abundances
of lithium in carbon-enhanced stars. Also, it is uncertain how much of
the Li can be depleted after a possible mass transfer via mixing and
rotation of the CEMP star itself \citep{talon2005,stancliffe2007}. More
Li detections (or strong upper limits) in CEMP-no stars are needed in
order to better understand the nature of the progenitors of these stars.
The carbon enhancement detected for three of our four program stars
is consistent with the picture of carbon enhancement in the early
universe found by other authors \citep[e.g.,][]{carollo2012,
norris2013b}. An enrichment of carbon in the early universe also
supports one of the proposed formation scenarios for low-mass stars,
that gas clouds can fragment as a result of cooling via fine-structure
lines of carbon and oxygen \citep{frebel2007}.
\citet{spite2013} examined the carbon abundances of
dwarfs and turnoff stars, stars in which mixing has not altered the
carbon abundance at the surface of the star. From their sample, they
suggested the presence of two plateaus of the carbon abundances, one for
$\mathrm{[Fe/H]} > -3.0$ at $A$(C) $\sim$ 8.25 and one for
$\mathrm{[Fe/H]} < -3.4$ at $A$(C) $\sim$ 6.8. They point to the
low number of stars observed with $\mathrm{[Fe/H]} < -3.4$, and highlight the
difficulty of observing carbon in warmer, unmixed stars. As a result, they
could not determine whether the lower plateau is just an upper limit on the
detections or an actual plateau. We derive a carbon abundance of
$A$(C) = 7.23 for the unmixed CEMP subgiant in our sample
HE~0233$-$0343, placing it a little above the plateau. Clearly, observations
of additional unmixed CEMP stars are needed to clarify if such a plateau
exists.
The origin of neutron-capture elements in low-metallicity stars is not
yet well-understood. A large spread is seen in the abundances of the
neutron-capture elements Sr and Ba for CEMP-no stars (indistinguishable
from that of non-carbon-rich metal-poor stars). For the CEMP-$s$ stars,
the carbon and $s$-process overabundances are believed to be the result
of mass transfer from a binary AGB companion. Indeed,
\citet{lucatello2005} showed that a significant fraction of these stars
(perhaps all) are in fact in binary systems. For the CEMP-no stars,
however, early results from radial-velocity monitoring do not require
them to be in binary systems
(\citealt{cohen2013,thansen2013,norris2013b,starkenburg2014}; J. Andersen et al.,
in prep.).
Figure \ref{fig6} shows the derived $\mathrm{[Sr/Ba]}$ ratios of our
three CEMP-no stars, together with the ratios for stars from the
\citet{yong2013} sample that had detections of both Sr and Ba. The
dashed red line indicates $\mathrm{[Sr/Ba]=-0.4}$, used as an upper
limit for the main $s$-process signature of AGB stars
\citep{spite2013}. At solar metallicity, Sr is a tracer of the weak
$s$ process in massive stars \citep{heil2009,pignatari2010}, while Ba is
a tracer of the main $s$ process taking place in AGB stars
\citep{busso1999, kappeler2011}. At low metallicity, where the main
$s$ process is not yet active, the picture is different.
To assess the origin of the Sr and Ba detected for our three CEMP-no
stars, the $\mathrm{[Sr/Ba]}$ ratio can be compared to that found in
classical main $s$-process-enhanced metal-poor stars and in
strongly $r$-process-enhanced metal-poor stars.
\citet{lucatello2003} reported on the abundance analysis of
HE~0024$-$2523, a classical main $s$-process-enhanced star with
carbon enhancement. This star was also found to be in a binary system,
and the authors argued that the carbon and $s$-process-element
enhancement is the result of mass transfer from an AGB companion. The
$\mathrm{[Sr/Ba]}$ ratio in this star is $\mathrm{[Sr/Ba]}=-1.12$, a
very low value, due to its high Ba abundance. Although a large spread is
seen in the efficiency of the main $s$-process element production
of the AGB stars \citep{bisterzo2011}, a low $\mathrm{[Sr/Ba]}$ ratio is
observed for $s$-process elements produced in AGB stars;
\citet{spite2013} use $\mathrm{[Sr/Ba] < -0.4}$ as an upper limit, while
$\mathrm{[Sr/Ba]=-0.5}$ was used by \citet{aoki2013}. For our CEMP-no
stars, we find the following [Sr/Ba] ratios: $\mathrm{[Sr/Ba]} > +0.20$
(HE~0134$-$1519), $\mathrm{[Sr/Ba]} > -0.48$ (HE~0233$-$0343), and
$\mathrm{[Sr/Ba]} = -0.58$ (HE~1310$-$0536). The ratios found in
HE~0233$-$0343 and HE~1310$-$0536 could indicate production by the main
$s$ process. However, these stars are CEMP-no stars, i.e., their individual
abundance ratios of Ba relative to iron are low ($\mathrm{[Ba/Fe]} <
0$), and they are also UMP stars ($\mathrm{[Fe/H]} < -4.0$). At such low
metallicity, the Ba is more likely produced in the main $r$ process from
SNe and Sr in the weak $s$ process in massive stars. The following
$\mathrm{[Sr/Ba]}$ ratios have been found in strongly $r$-process-enhanced
metal-poor stars, $\mathrm{[Sr/Ba]}=-0.52$ in CS~31082-001 \citep{hill2002},
$\mathrm{[Sr/Ba]}=-0.41$ for CS~22892$-$052 \citep{sneden2003}, and
$\mathrm{[Sr/Ba]}=-0.46$ for CS~29497$-$004 \citep{christlieb2004}. These ratios
are very similar to those we find for HE~0233$-$0343 and HE~1310$-$0536. The
$\mathrm{[Sr/Ba]}$ ratio found for HE~0134$-$1519 indicates that the Sr and Ba
in this star could have been produced in the weak $s$ process in spinstars.
\citet{cescutti2013} proposed that the
spread in Sr and Ba abundances detected in CEMP-no stars could be
explained by spinstar progenitors. Their model includes a standard
$r$ process (presumably in the natal clouds) plus a contribution from
the weak $s$ process occurring in spinstars. With this combination,
they can model the spread seen in the abundances of Sr and Ba in
metal-poor stars, including the CEMP-no stars, while also reproducing
the low scatter in $\alpha$-elements. They do, however, state that their
models cannot reproduce the $\mathrm{[C/O]}$ and $\mathrm{[N/O]}$ ratios
in the same CEMP-no stars, but point to the scenario of
\citet{meynet2010}, where low-mass stars belonging to the forming
stellar cluster of a spinstar are enriched in carbon via stellar winds
from the spinstar.
\section{Summary}
We have conducted a detailed chemical-abundance analysis of
four new UMP stars, with $\mathrm{[Fe/H]} \leq -4.0$; fewer than 10
such stars were previously known. The Li, C, Sr, and Ba measurements
provide a new observational window to examine nucleosynthesis at the
earliest times in our Galaxy. While one star has an upper limit of
$\mathrm{[C/Fe]} < +1.7$, the remaining three stars are all C-rich, confirming
the prevalence of CEMP stars in the early universe. The detection of
Li in two of the clear CEMP-no stars requires that whatever process(es)
produce(s) the large amount of C (and presumably the N, O often found in
CEMP-no stars, but which could not be detected in our stars), does not
completely destroy Li. In light of the newer data for C and Li for
these, and other recently studied CEMP-no stars, we suggest that it is
worth revisiting the Li astration model described by \citet{piau2006}.
Finally, our detections of Sr and Ba for several additional UMP stars
demonstrate that the process(es) creating these elements are at work even at
very low metallicities, a conclusion also reached by
\citet{roederer2013}. Since there still remain a number of stars at the lowest
metallicities with only upper limits on Sr and/or Ba, increasing the sample
sizes and the quality of the available high-resolution spectroscopy for stars
at these metallicities is an essential step toward understanding
nucleosynthesis at the earliest epochs and ultimately to characterize ``the
frequency and environmental influence of the astrophysical sites of
heavy-element production'' \citep{roederer2013}.
\acknowledgments
We thank E. Caffau for providing the spectrum of SDSS~J102915+172927.
This work was supported by Sonderforschungsbereich SFB 881 ``The Milky
Way System'' (subproject A4) of the German Research Foundation (DFG).
T.C.B. acknowledges partial support from grant PHY 08-22648: Physics
Frontier Center/Joint Institute for Nuclear Astrophysics (JINA),
awarded by the U.S. National Science Foundation. V.M.P. acknowledges
support from the Gemini Observatory. M.A., M.S.B., J.E.N., and D.Y.
acknowledge support from the Australian Research Council (grants
DP0342613, DP0663562, and FL110100012) for studies of the Galaxy's most
metal-poor stars. A.F. acknowledges support from NSF grant AST-1255160.
Furthermore, we thank the referee for helpful comments.
This work made use of the Southern Astrophysical Research (SOAR)
4.1 m Telescope (proposal SO2013B-001),
which is a joint project of the Minist\'{e}rio da Ci\^{e}ncia, Tecnologia, e
Inova\c{c}\~{a}o (MCTI) da Rep\'{u}blica Federativa do Brasil, the U.S.
National Optical Astronomy Observatory (NOAO), the University of North
Carolina at Chapel Hill (UNC), and Michigan State University (MSU).
This work also made use of the Kitt Peak National Observatory Mayall 4 m
telescope (proposal 2013B-0046) of the National Optical Astronomy Observatory,
which is operated by the Association of American Universities for Research in
Astronomy (AURA), under cooperative agreement with the National Science
Foundation.
This work also made use of observations conducted with ESO
Telescopes at the La Silla Observatory and the VLT on Paranal, under
programme ID 69.D-0130(A).
\bibliographystyle{apj}
\section{Introduction}
A well-known conjecture in the theory of random walk on a group asserts that
for the
random walk on the symmetric group generated by all permutations of a common
cycle structure, the mixing time of the walk depends only on the number of fixed
points. Given a measure $\mu$ on $S_n$ and an integer $t \geq 1$, denote by $\mu^{*t}$ its $t$-fold convolution, which is the law after $t$ steps of the random walk on $S_n$ started at the identity with step distribution $\mu$. We measure convergence to uniformity in the total variation metric, which for probabilities $\mu$ and $\nu$ on $S_n$ is given by
\[
\|\mu - \nu\|_{T.V.} = \max_{A \subset S_n} |\mu(A) - \nu(A)|.
\]
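The supremum is attained at $A = \{x : \mu(x) > \nu(x)\}$, so the total variation distance equals half the $L^1$ distance. A small illustrative check of this equivalence in exact rational arithmetic (the toy distributions here are arbitrary, chosen only for the example):

```python
from fractions import Fraction
from itertools import combinations

# Toy check, on a 4-point space, that the supremum over subsets equals
# half the L^1 distance; the sup is attained at A = {x : mu(x) > nu(x)}.
points = range(4)
mu = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 8), Fraction(1, 8)]
nu = [Fraction(1, 4)] * 4

best = max(abs(sum(mu[x] - nu[x] for x in A))
           for r in range(5) for A in combinations(points, r))
half_l1 = sum(abs(mu[x] - nu[x]) for x in points) / 2
assert best == half_l1 == Fraction(1, 4)
```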
A precise statement of the conjecture is as follows.
\begin{conjecture}\label{main_conjecture}
For each $n > 1$ let $C_n$ be a conjugacy class of the symmetric group $S_n$
having $n-k_n< n$ fixed points, and let $\mu_{C_n}$ denote the uniform probability measure on $C_n$. Let $t_2, t_3, t_4, ...$ be
a sequence of positive integer steps and for each $n \geq 2$ let $U_n$ be
uniform measure on the coset of
the alternating group $A_n$ supporting the measure $\mu_{C_n}^{*t_n}$. For any
$\varepsilon > 0$, if eventually $t_n
\geq (1 +\varepsilon) \frac{n}{k_n} \log n$, then
\[
\lim_{n \to \infty} \|\mu_{C_n}^{*t_n} - U_n\|_{T.V.} = 0.
\]
If eventually $t_n \leq (1-\varepsilon) \frac{n}{k_n}
\log n$, then
\[
\lim_{n \to \infty} \|\mu_{C_n}^{*t_n} - U_n\|_{T.V.} = 1.
\]
\end{conjecture}
\noindent This
conjecture aims to generalize the mixing time analysis of Diaconis and
Shahshahani \cite{diaconis_shahshahani} for the random transposition walk. The
formal conjecture seems to have first
appeared in \cite{roichman_1}.
When $k$ grows with $n$ like a constant times $n$ the
conjecture is known to be false because the
walk mixes too rapidly. This was established by a number of authors, beginning
with \cite{lulov_pak}; for later results see \cite{larsen_shalev} and the
references therein. Whenever $k = o(n)$, however, the conjecture
is expected to hold, in part because the proposed lower bound on the mixing
time follows from standard techniques in the field. We give the second moment
method proof in Appendix \ref{lower_bound_appendix}.
Since the initial work of Diaconis and Shahshahani, Conjecture
\ref{main_conjecture} has received quite a bit of attention, see
\cite{flatto_odlyzko_wales}, \cite{lulov_thesis}, \cite{roichman_1},
\cite{roichman_2},
\cite{roussel_1}, \cite{roussel_2}, \cite{vershik_kerov} for cases of conjugacy classes with a bounded number of non-fixed points.
A discussion of ongoing work of Schlage-Puchta towards the general conjecture is contained in \cite{saloff-coste_zuniga}. See
\cite{diaconis_book} and \cite{saloff-coste_book} for broader
perspective.
Recently Berestycki, Schramm and Zeitouni
\cite{berestycki}
have established the conjecture for the walk generated by $k$ cycles
with $k$ fixed as $n \to \infty$. Moreover, they assert that their analysis
will go through to treat the case of any conjugacy class having bounded total
length of non-trivial cycles, and may cover the case when the total cycle
length grows like $o(\sqrt{n})$, although they state that they have not checked carefully the
uniformity in $k$ with respect to $n$.
The purpose of this article is to prove Conjecture \ref{main_conjecture}
for the random $k$ cycle walk in the full range $k = o(n)$.
\begin{theorem}\label{main_theorem}
Conjecture \ref{main_conjecture} holds when $C$ is the conjugacy class of
all $k$ cycles with $k$ permitted to be any function of $n$ that satisfies $k =
o(n)$ as $n \to \infty$. If $k = o\left(\frac{n}{\log n}\right)$ then the
conclusion of Conjecture
\ref{main_conjecture} remains valid when $\varepsilon = \varepsilon(n)$ is any
function satisfying $\varepsilon(n) \log n \to \infty$.
\end{theorem}
\begin{rem}
When $k = o\left(\frac{n}{\log n}\right)$ the window $\varepsilon(n)\log n \to \infty$ is essentially best possible.
\end{rem}
As remarked above, the lower bound was already known. Prior to
\cite{berestycki}, which is purely combinatorial, all approaches to Conjecture
\ref{main_conjecture} have followed the work of Diaconis and Shahshahani in
passing through bounds for characters on the symmetric group. We
return to this previous approach;
in particular, we give an asymptotic
evaluation of character ratios at a $k$ cycle for many
representations, in the range $k = O\left(n^{\frac{1}{2}-\epsilon}\right)$. The
asymptotic formula
appears to be new when $k > 6$, see \cite{ingram} for the smaller cases. A
precise statement of our result appears in Section
\ref{S_n_char_theory_section},
after we introduce
the necessary notation.
The basis for our argument is an old formula of Frobenius \cite{frobenius},
which gives the value of a character of the symmetric group evaluated at a
$k$ cycle as a contour integral of a certain rational function, characterized by
the cycle and the representation. In the special case of a transposition this
formula was already used by Diaconis and Shahshahani. Previous authors in
attempting to extend the
result of \cite{diaconis_shahshahani} had also used Frobenius' formula, but they
had attempted to estimate the sum of residues of the function directly, which
entails significant difficulties since nearby residues of the function are
unstable once the cycle length $k$ becomes somewhat
large.
We avoid these difficulties by estimating the integral itself rather
than the residues in most situations. In doing so, a certain regularity of the
character values becomes evident. For instance, while nearby residues
appear irregular, by grouping clumps of poles inside a common integral we are
able to show that the `amortized' contribution of any pole is slowly
varying.
It may be initially surprising that our analysis becomes greatly simplified as
$k$ grows. For instance, once $k$ is larger than a sufficiently large
constant times $\log n$, essentially trivial bounds for the contour integral
suffice, and the greatest part of the analysis goes into showing that our method
can handle the handful of small cases which had been treated
previously using the character approach. A similar feature occurs in the
related paper \cite{hough_jiang} where a contour integral is also used
to bound character ratios at rotations on the orthogonal group. It seems
that the contour method works best when there is significant oscillation in the
sum of residues, but is difficult to use in the cases where the residues are
generally positive.
In Appendix \ref{contour_appendix} we show that
Frobenius' formula has a
natural generalization to other conjugacy classes on the symmetric group, but
with increasing complexity, since one contour integration enters for each
non-trivial cycle. The analysis here applies more broadly than to
just the class of $k$ cycles, but we have not pushed the method to its limit
because it seems that some new ideas are needed
to obtain the full Conjecture \ref{main_conjecture} when the number of small
cycles in
the cycle decomposition becomes large. From our point of view, the classes
containing $\geq n^{1-\epsilon}$ 2-cycles would appear to pose the greatest
difficulty.
We remark that, as typical in the character ratio approach, our upper
bound is a consequence of a corresponding upper bound for the walk in $L^2$, so
that our result gives a broader set of Markov chains on $S_n$ to which one can
apply the comparison techniques of \cite{diaconis_saloff_coste}.
Regarding notation, it will occasionally be convenient to use the
Vinogradov notation $A \ll B$, with the same meaning as $A = O(B)$. The
implicit constants in notation of both types should be assumed to vary from line
to line.
\subsection*{Acknowledgement} I am grateful to Persi Diaconis and
John Jiang for
stimulating discussions and to K. Soundararajan for drawing my attention to the
paper \cite{larsen_shalev}. I thank the referees for correcting a number of minor errors in the original submission.
\section{Character theory and mixing times}
We recall some basic facts regarding the character theory of a finite group. A good reference for these is
\cite{diaconis_book}.
A conjugation-invariant measure $\mu$ on a finite group $G$ is a class
function, which means that it has a `Fourier expansion' expressing $\mu$ as a
linear combination of the irreducible characters $X(G)$ of $G$. It will
be convenient to normalize the Fourier coefficients by setting
\[
\forall \chi \in X(G), \qquad \hat{\mu}(\chi)= \frac{1}{\chi(1)}\sum_{g \in
G}\mu(g)\overline{\chi(g)}.
\]
By orthogonality of characters, the Fourier expansion takes the form
\begin{equation}\label{fourier_expansion}
\mu = \frac{1}{|G|}\sum_{\chi \in X(G)}\chi(1)
\hat{\mu}(\chi)
\chi,
\end{equation}
since, writing $C_x$ for the conjugacy class of $x \in G$,
\begin{align*}
\frac{1}{|G|} \sum_{\chi \in X(G)} \chi(1)\hat{\mu}(\chi) \chi(x) &=
\frac{1}{|G|}
\sum_{\chi \in X(G)} \sum_{g \in G} \mu(g) \overline{\chi}(g) \chi(x)\\
&= \sum_{g \in C_x} \frac{\mu(g)}{|C_x|}= \mu(x).
\end{align*}
In this setting, the Plancherel identity is
\begin{equation}\label{plancherel}
\|\mu\|_2^2 = \frac{1}{|G|}\sum_{g \in G} |\mu(g)|^2 =
\frac{1}{|G|^2}\sum_{\chi \in X(G)} \chi(1)^2 |\hat{\mu}(\chi)|^2.
\end{equation}
Note the somewhat non-standard factor of the inverse of the dimension
$\chi(1)^{-1}$ in the Fourier coefficients. The advantage of this choice is
that the Fourier map satisfies
the
familiar property of carrying convolution to pointwise multiplication: for
conjugation-invariant measures $\mu_1$ and $\mu_2$
\begin{equation}\label{convolution}
\forall \chi \in X(G), \qquad \widehat{\mu_1 \ast \mu_2}(\chi) =
\hat{\mu}_1(\chi) \cdot \hat{\mu}_2(\chi).
\end{equation}
This is because the regular representation in $L^2(G)$ splits as the direct sum
over irreducible
representations,
\[
L^2(G) = \bigoplus_{\rho} d_\rho M_\rho,
\]
and convolution by $\mu_i$ acts as a scalar multiple of the identity on each
representation space $M_{\rho}$, the scalar of proportionality being
$\hat{\mu}_i(\chi_\rho)$.
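These identities can be verified exactly in small cases. The following illustrative sketch (not part of the argument) checks (\ref{plancherel}) and (\ref{convolution}) on $S_3$, whose character table is classical; all characters of $S_3$ are real, so the conjugation can be dropped:

```python
from fractions import Fraction
from itertools import permutations

G = list(permutations(range(3)))

def fixed_points(p):            # classifies the conjugacy classes of S_3
    return sum(p[i] == i for i in range(3))

# (dimension, character value by number of fixed points: 3 = identity,
# 1 = transposition, 0 = three-cycle); all characters of S_3 are real.
chars = {
    'trivial':  (1, {3: 1, 1: 1,  0: 1}),
    'sign':     (1, {3: 1, 1: -1, 0: 1}),
    'standard': (2, {3: 2, 1: 0,  0: -1}),
}

mu = {p: Fraction(1, 3) for p in G if fixed_points(p) == 1}

def fourier(f, chi):            # hat f(chi) = (1/chi(1)) sum_g f(g) chi(g)
    dim, val = chi
    return sum(f.get(p, Fraction(0)) * val[fixed_points(p)] for p in G) / dim

def convolve(f, g):             # (f * g)(x) = sum_{s t = x} f(s) g(t)
    out = {}
    for s in f:
        for t in g:
            st = tuple(s[t[i]] for i in range(3))
            out[st] = out.get(st, Fraction(0)) + f[s] * g[t]
    return out

# Plancherel identity
lhs = sum(v * v for v in mu.values()) / len(G)
rhs = sum(chi[0] ** 2 * fourier(mu, chi) ** 2
          for chi in chars.values()) / len(G) ** 2
assert lhs == rhs == Fraction(1, 18)

# Convolution becomes pointwise multiplication of Fourier coefficients
mu2 = convolve(mu, mu)
for chi in chars.values():
    assert fourier(mu2, chi) == fourier(mu, chi) ** 2
```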
When $G = S_n$ is the symmetric
group on $n$ letters, the irreducible representations are indexed by
partitions of $n$, with the partition $(n)$ corresponding to the trivial
representation, and partition $(1^n)$ corresponding to the sign.
Given a conjugacy class $C$ on $S_n$ and integer $t
\geq 1$, the measure $\mu_C^{\ast t}$ is conjugation invariant, and supported on
permutations of a fixed sign, odd if both $C$ and $t$ are odd, and otherwise
even. We let $U_t$ be the uniform measure on permutations of this sign:
\[
\forall \sigma \in S_n, \qquad U_t(\sigma) = \frac{1 +
\mathrm{sgn}(\sigma)\cdot\mathrm{sgn}(C)^t}{n!}
\]
with Fourier coefficients
\[
\hat{U_t}(\chi^n) = 1, \qquad \hat{U_t}(\chi^{1^n}) = \mathrm{sgn}(C)^t, \qquad
\hat{U_t}(\chi^\lambda) = 0, \; \lambda \neq n, 1^n.
\]
The total variation distance between $\mu_C^{\ast t}$ and $U_t$ is equal to
\begin{align*}
\|\mu_C^{*t} - U_t\|_{T.V.}&:= \sup_{A \subset S_n} \left|\mu_C^{*t}(A) -
U_t(A)\right| \\&=\frac{1}{2}\sum_{\sigma \in S_n} \left|\mu_C^{*t}(\sigma) -
\frac{1
+ \mathrm{sgn}(\sigma)\cdot\mathrm{sgn}(C)^t}{n!}\right|.
\end{align*}
Thus Cauchy-Schwarz, Plancherel, and the convolution identity give the upper
bound
\begin{equation}\label{upper_bound_lemma}
\|\mu_C^{*t}-U_t\|_{T.V.} \leq \frac{1}{2} \left(\sum_{\lambda \vdash n,
\lambda
\neq n, 1^n} (\chi^\lambda(1))^2
\left(\frac{\chi^\lambda(C)}{\chi^\lambda(1)}\right)^{2t}\right)^{\frac{1}{2}}.
\end{equation}
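As an illustrative check of (\ref{upper_bound_lemma}) (not needed for the proofs), the bound can be compared with the exact total variation distance for the random transposition walk on $S_4$; the character table of $S_4$ gives dimensions $3, 2, 3$ and character ratios $1/3, 0, -1/3$ at a transposition for the partitions $(3,1)$, $(2,2)$, $(2,1,1)$:

```python
from fractions import Fraction
from itertools import permutations

N = 4
elems = list(permutations(range(N)))

def compose(s, t):                      # (s t)(i) = s(t(i))
    return tuple(s[t[i]] for i in range(N))

def sign(p):                            # parity via inversion count
    inv = sum(p[i] > p[j] for i in range(N) for j in range(i + 1, N))
    return (-1) ** inv

transpositions = [p for p in elems if sum(p[i] != i for i in range(N)) == 2]
mu = {p: Fraction(1, len(transpositions)) for p in transpositions}

def convolve(f, g):
    out = {}
    for s, fs in f.items():
        for t, gt in g.items():
            st = compose(s, t)
            out[st] = out.get(st, Fraction(0)) + fs * gt
    return out

# (dimension, character ratio at a transposition) for lambda = (3,1),
# (2,2), (2,1,1) -- all partitions of 4 other than (4) and (1,1,1,1).
ratios = [(3, Fraction(1, 3)), (2, Fraction(0)), (3, Fraction(-1, 3))]

dist = dict(mu)
for t in range(1, 7):
    if t > 1:
        dist = convolve(dist, mu)
    coset = {p for p in elems if sign(p) == (-1) ** t}
    u = Fraction(1, len(coset))
    tv = sum(abs(dist.get(p, Fraction(0)) - (u if p in coset else 0))
             for p in elems) / 2
    bound_sq = sum(f * f * r ** (2 * t) for f, r in ratios) / 4
    assert tv * tv <= bound_sq          # the stated L^2 upper bound holds
```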
The main ingredient in the proof of the upper bound of Theorem
\ref{main_theorem} will thus be the following estimate for character
ratios.
\begin{proposition}\label{char_ratio_prop}
Let $C$ be the class of $k$ cycles
on $S_n$. There exists a constant $\delta > 0$ and a constant $c_1> 0$ such that
uniformly in $n$, partition $\lambda \vdash n$ and all $2 \leq k < \delta n$
\[ \left|\frac{\chi^\lambda(C)}{\chi^\lambda(1)}\right|^{\frac{n}{k}(\log n
+ c_1)}
\leq \frac{1}{\chi^\lambda(1)}.\]
\end{proposition}
For the deduction of Theorem
\ref{main_theorem} we will use the following technical result on the dimensions
of the irreducible representations of $S_n$.
\begin{proposition}\label{dimension_prop}
For a sufficiently large fixed constant $c_2>0$,
\[
\sum_{\lambda \vdash n} (\chi^\lambda(1))^{-\frac{c_2}{\log n}} = O(1),
\]
with the estimate uniform in $n$.
\end{proposition}
\begin{proof}[Deduction of mixing time upper bound of Theorem \ref{main_theorem}]
We use the fact that, apart from the trivial and sign representations, the lowest dimensional irreducible representation of $S_n$ has dimension $n-1$ (so $S_n$ is essentially quasi-random). Let $c_1, c_2$ be as above, and let $C>0$. Then for $t > \frac{n}{k_n}(\log n + c_1)(1 + \frac{c_2 + C}{2\log n})$ the application of Cauchy-Schwarz above gives
\[
\|\mu_C^{*t}-U_t\|_{T.V.}^2 \leq \frac{1}{4} \sum_{\lambda \vdash n,
\lambda
\neq n, 1^n} (\chi^\lambda(1))^{- \frac{c_2 + C}{\log n}} = O(e^{-C}).
\]
\end{proof}
The proof of Proposition \ref{char_ratio_prop} will require some more detailed
information regarding the characters of $S_n$ evaluated at a cycle. We discuss
this in the next section. Proposition \ref{dimension_prop} is deduced from a
very useful approximate dimension formula of Larsen-Shalev
\cite{larsen_shalev}. The rather technical proof of this proposition is given
in Appendix \ref{dimension_bound_appendix}.
\section{Character theory of $S_n$}\label{S_n_char_theory_section}
The irreducible representations of $S_n$ are indexed by partitions of $n$.
Given a partition \[\lambda \vdash n,\qquad \lambda = (\lambda_1 \geq \lambda_2
\geq... \geq \lambda_n \geq 0),\qquad \sum \lambda_i = n,\] the dual partition
$\lambda'$ is found by reflecting the diagram of $\lambda$ along its diagonal.
We write $\chi^\lambda$ for the character of representation $\rho^\lambda$, and
$f^\lambda = \chi^\lambda(1)$ for the dimension. Let $\mu = \lambda + (n-1,
n-2, ..., 0)$. Then the dimension is given by (see e.g. \cite{macdonald})
\begin{equation}\label{dimension_formula}
f^\lambda = \frac{n!}{\mu_1!\mu_2!...\mu_n!}\prod_{i < j} (\mu_i - \mu_j).
\end{equation}
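As a quick sanity check (illustrative only), formula (\ref{dimension_formula}) reproduces the familiar small dimensions and satisfies $\sum_{\lambda \vdash n} (f^\lambda)^2 = n!$:

```python
from math import factorial, prod

def dim_irrep(lam, n):
    """Dimension formula, with mu = lambda + (n-1, n-2, ..., 0)."""
    lam = list(lam) + [0] * (n - len(lam))
    mu = [lam[i] + (n - 1 - i) for i in range(n)]
    return (factorial(n) * prod(mu[i] - mu[j]
                                for i in range(n) for j in range(i + 1, n))
            // prod(factorial(m) for m in mu))

def partitions(n, largest=None):        # partitions as non-increasing tuples
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

assert dim_irrep((3, 1), 4) == 3 and dim_irrep((2, 2), 4) == 2
for n in range(1, 8):
    assert sum(dim_irrep(l, n) ** 2 for l in partitions(n)) == factorial(n)
```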
Setting apart those terms that pertain to $\lambda_1$, we find
\[
f^\lambda = \binom{n}{\lambda_1}\prod_{j=2}^n \frac{\lambda_1-\lambda_j +
j-1}{\lambda_1 + j-1} f^{\lambda-\lambda_1}.
\]
The product is bounded by 1, and $\sum_{\lambda \vdash n}
\left(f^\lambda\right)^2 = n!$,
so that we have the bound employed by Diaconis-Shahshahani
\begin{equation}\label{diaconis_shahshahni_dim_bound}
f^\lambda \leq \binom{n}{\lambda_1}\sqrt{(n-\lambda_1)!}.
\end{equation}
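The bound (\ref{diaconis_shahshahni_dim_bound}) can likewise be checked exhaustively for small $n$ (illustrative only; the comparison is squared so that the arithmetic stays in the integers):

```python
from math import comb, factorial, prod

def dim_irrep(lam, n):                  # dimension formula from the text
    lam = list(lam) + [0] * (n - len(lam))
    mu = [lam[i] + (n - 1 - i) for i in range(n)]
    return (factorial(n) * prod(mu[i] - mu[j]
                                for i in range(n) for j in range(i + 1, n))
            // prod(factorial(m) for m in mu))

def partitions(n, largest=None):
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

for n in range(1, 9):
    for lam in partitions(n):
        f = dim_irrep(lam, n)
        # f^2 <= binom(n, lam_1)^2 * (n - lam_1)!
        assert f * f <= comb(n, lam[0]) ** 2 * factorial(n - lam[0])
```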
The trivial representation is $\rho^n$ while the sign representation is
$\rho^{1^n}$, both one-dimensional. $\rho^{n-1,1}$ corresponds to the standard representation,
which is the irreducible $(n-1)$-dimensional sub-representation of the representation $\rho$ in $\mathbb{R}^n$ given by
\[
\rho(\sigma) \cdot e_i = e_{\sigma(i)}, \qquad \sigma \in S_n.
\] This representation and its dual are the lowest dimensional non-trivial irreducible representations of $S_n$.
Given the character $\chi^\lambda$, the
character of the dual representation is given by \[\chi^{\lambda'}(\sigma) =
\mathrm{sgn}(\sigma)\cdot\chi^\lambda(\sigma).\] The characters corresponding to
partitions with long first piece have relatively simple interpretations. For
instance, if we write $i_1$ for the number of fixed points and $i_2$ for the
number of 2-cycles in permutation $\sigma$ then (\cite{fulton_harris}, ex. 4.15)
\begin{align}\notag
\chi^{n-1,1}(\sigma)& = i_1 -1, \\ \label{small_chars} \chi^{n-2,1,1}(\sigma) &=
\frac{1}{2}(i_1-1)(i_1-2) - i_2,\\ \notag \chi^{n-2,2}(\sigma) &=
\frac{1}{2}(i_1-1)(i_1-2) + i_2-1.
\end{align}
It is now immediate that
\begin{equation}\label{tensor_decomp}
(\chi^{n-1,1})^2 = \chi^{n-2,2} + \chi^{n-2, 1, 1} + \chi^{n-1,1} + \chi^{n}.
\end{equation}
Also,
\[
f^n = 1, \;\; f^{n-1,1} = n-1, \;\; f^{n-2,1,1} = \binom{n-1}{2}, \;\;
f^{n-2,2} = \binom{n-1}{2} - 1.
\]
We use these formulas in the proof of the lower bound of Conjecture
\ref{main_conjecture}.
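The closed forms (\ref{small_chars}) and the decomposition (\ref{tensor_decomp}) can be verified directly; the following illustrative script checks, on $S_6$, that the three functions are orthonormal characters and that the tensor-square identity holds pointwise:

```python
from itertools import permutations

def counts(p):                  # i1 = fixed points, i2 = number of 2-cycles
    n = len(p)
    i1 = sum(p[i] == i for i in range(n))
    i2 = sum(1 for i in range(n) for j in range(i + 1, n)
             if p[i] == j and p[j] == i)
    return i1, i2

n = 6
G = list(permutations(range(n)))
chi = {
    'n-1,1':   lambda i1, i2: i1 - 1,
    'n-2,1,1': lambda i1, i2: (i1 - 1) * (i1 - 2) // 2 - i2,
    'n-2,2':   lambda i1, i2: (i1 - 1) * (i1 - 2) // 2 + i2 - 1,
}
vals = {k: [f(*counts(p)) for p in G] for k, f in chi.items()}

# Orthonormality: sum_g chi_a(g) chi_b(g) = |G| delta_ab
for a in vals:
    for b in vals:
        inner = sum(x * y for x, y in zip(vals[a], vals[b]))
        assert inner == (len(G) if a == b else 0)

# The decomposition of the tensor square holds pointwise
for p in G:
    i1, i2 = counts(p)
    assert (i1 - 1) ** 2 == (chi['n-2,2'](i1, i2) + chi['n-2,1,1'](i1, i2)
                             + chi['n-1,1'](i1, i2) + 1)
```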
Many properties of partitions are most readily evident in Frobenius
notation, and we will use this notation to state our main technical theorem.
In Frobenius notation we identify the partition $\lambda \vdash n$
by
drawing the diagonal, say of length $m$, and measuring the legs that extend
horizontally to the right and vertically below the diagonal:
\[ \lambda = (a_1, a_2, ..., a_m| b_1, b_2, ..., b_m);\]
\[a_i = \lambda_i - i + \frac{1}{2},\qquad b_i = \lambda_i' - i + \frac{1}{2}.\]
Notice that
\[
\sum_{i =1}^m a_i + \sum_{i=1}^m b_i = n.
\]
In Appendix \ref{dimension_bound_appendix} we consider the quantity $\Delta$,
which is the number of boxes contained neither in the square formed by the
diagonal nor in the first row.
The notation used here is summarized in Figure \ref{frobenius_figure}.
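Frobenius coordinates are straightforward to compute; the following illustrative sketch computes $(a|b)$ and checks the identity $\sum a_i + \sum b_i = n$ on two examples:

```python
from fractions import Fraction

def frobenius(lam):
    """(a | b) with a_i = lam_i - i + 1/2 and b_i = lam'_i - i + 1/2
    (1-indexed), i running up to the length m of the diagonal."""
    conj = [sum(1 for part in lam if part > j) for j in range(lam[0])]
    m = sum(1 for i, part in enumerate(lam) if part > i)
    a = [lam[i] - i - Fraction(1, 2) for i in range(m)]
    b = [conj[i] - i - Fraction(1, 2) for i in range(m)]
    return a, b

a, b = frobenius((4, 3, 1, 1))          # a partition of 9
assert a == [Fraction(7, 2), Fraction(3, 2)]
assert b == [Fraction(7, 2), Fraction(1, 2)]
assert sum(a) + sum(b) == 9

a, b = frobenius((2, 1, 1))             # lambda = (2,1,1), a partition of 4
assert (a, b) == ([Fraction(3, 2)], [Fraction(5, 2)])
```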
\begin{figure}
\caption{Partition $\lambda$ in Frobenius
notation.}\label{frobenius_figure}
\centering
\includegraphics[width = 0.7 \textwidth]{frobenius}
\end{figure}
We now state our asymptotic evaluation of the character ratio.
\begin{theorem}\label{character_ratio_theorem}
Let $n$ be large, let $2 \leq k
\leq n$, and let $C$ be the class of $k$ cycles on $S_n$. Let
$\lambda \vdash
n$ be a
partition of $n$ with Frobenius notation $\lambda = (a_1, ..., a_m|b_1, ...,
b_m)$.
\begin{enumerate}
\item[a.] (Long first row) Let $0 < \epsilon < \frac{1}{2}$, let $r = n-\lambda_1$ and suppose
that $r + k + 1 < (\frac{1}{2} - \epsilon)n$. Then
\begin{align}\label{char_ratio_asym_large}
\frac{\chi^\lambda(C)}{f^\lambda} &= \frac{(a_1 -
\frac{1}{2})^{\underline{k}}}{n^{\underline{k}}} \prod_{j=2}^m \frac{a_1 -a_j -
k}{a_1 - a_j} \prod_{j=1}^m \frac{a_1 + b_j}{a_1 + b_j-k} \\\notag&+
O_\epsilon\left(\exp\left(k \left[ \log \frac{(1 + \epsilon)(k+1+r)}{n-k} +
O_\epsilon\left(r^{-\frac{1}{2}}\right)\right]\right)\right) .\end{align} If
$r < k$
then the
error term is actually 0.
\item[b.] (Large $k$, short first row and column)
Let $\theta > \frac{2}{3}$. There exists $\epsilon(\theta)>0$ such that,
for all $n$ sufficiently large, for all $k$ with $6\log n \leq k \leq
\epsilon n$ and for all $\lambda \vdash n$ such that $b_1 \leq a_1 \leq
e^{-\theta}n$,
\[
\left|\frac{\chi^\lambda(C)}{f^\lambda}\right| \leq e^{\frac{-k}{2}}.
\]
\item[c.] (Asymptotic expansion) Let $0<\epsilon < \frac{1}{2}$ and suppose now that $k <
n^{\frac{1}{2}-\epsilon}$.
We
have the approximate formula
\begin{align}\label{char_ratio_asym_small}
\frac{\chi^\lambda(C)}{f^\lambda} = &\sum_{a_i > kn^{\frac{1}{2}}}
\frac{a_i^k}{n^k}\left(1 +
O_\epsilon\left(\frac{kn^{\frac{3}{4}+\epsilon}}{a_i}\right)\right)\\&\notag
\;\;+
(-1)^{k-1}\sum_{ b_i > kn^{\frac{1}{2} }} \frac{b_i^k}{n^k}\left(1 +
O_\epsilon\left(\frac{kn^{\frac{3}{4}+\epsilon}}{b_i}\right)\right) \\ \notag
&\;\;+
O_\epsilon\left(n^{\frac{1}{2}}(\log n)^2 \left(\frac{k \log^2
n}{\sqrt{n}}\right)^k\right).
\end{align}
\end{enumerate}
\end{theorem}
This theorem is the main technical result of the paper. Actually it gives
more than we will need, and we apply the detailed statement
of part c. only in the case when the cycle length $k$ is relatively short, $k
\leq 6 \log n$. The cruder bounds of parts a. and b. suffice when the cycle
length $k$ is larger.
\section{Proof of Theorem \ref{character_ratio_theorem}}
When $C$ is the class of $k$ cycles Frobenius \cite{frobenius} (see
\cite{fulton_harris}, p.52
ex. 4.17 b) proved a famous formula that expresses the character ratio of a
given representation at
$C$ as the `residue at $\infty$' of a meromorphic function depending
upon the representation. In Frobenius notation, and using the
falling power \[z^{\underline{r}} =z(z-1)...(z-r+1),\] the
formula is
\begin{align}\label{contour_short_form}
\frac{\chi^\lambda(C)}{f^\lambda}
&=\frac{-1}{kn^{\underline{k}}}\frac{1}{2\pi i}\oint F_k^{a,b}(z)
dz, \quad
F_k^{a,b}(z) = \left(z+
\frac{k-1}{2}\right)^{\underline{k}}F^a_k(z)F^b_k(z)\\ & F^a_k(z) =
\prod_{j=1}^m \frac{z-a_j-\frac{k}{2}}{z-a_j+\frac{k}{2}} , \quad F^b_k(z) =
\prod_{j=1}^m
\frac{z + b_j + \frac{k}{2}}{z+b_j-\frac{k}{2}}\notag
\end{align}
where the integration has winding number 1
around each (finite) pole of the integrand. [Note that our $(a_i, b_i)$
correspond
to $(b_{m-i+1} + \frac{1}{2}, a_{m-i + 1} + \frac{1}{2})$ of
\cite{fulton_harris}, and replace $y$ there with our $z = y -
\frac{k-1}{2}$.]
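Formula (\ref{contour_short_form}) can be checked in small cases by summing residues directly. The sketch below (exact rational arithmetic; it assumes every pole of the integrand is simple and that no two candidate pole locations coincide, which holds in these examples) recovers known character ratios at a transposition:

```python
from fractions import Fraction

def char_ratio(a, b, k, n):
    """Sum of residues of F_k^{a,b}, divided by -k n^(falling k); valid
    when all poles are simple and no candidate pole locations coincide.
    A candidate pole cancelled by a numerator zero contributes 0."""
    half_k = Fraction(k, 2)

    def F_without(z, skip):     # F_k^{a,b}(z) with one denominator removed
        val = Fraction(1)
        for j in range(k):      # the falling factorial (z + (k-1)/2)^(k)
            val *= z + Fraction(k - 1, 2) - j
        for ai in a:
            val *= z - ai - half_k
            if ('a', ai) != skip:
                val /= z - ai + half_k
        for bi in b:
            val *= z + bi + half_k
            if ('b', bi) != skip:
                val /= z + bi - half_k
        return val

    total = sum(F_without(ai - half_k, ('a', ai)) for ai in a) \
          + sum(F_without(half_k - bi, ('b', bi)) for bi in b)
    n_falling = 1
    for j in range(k):
        n_falling *= n - j
    return -total / (k * n_falling)

# lambda = (2,1,1), a partition of 4, is (3/2 | 5/2); its character ratio
# at a transposition is -1/3 from the character table of S_4.
assert char_ratio([Fraction(3, 2)], [Fraction(5, 2)], 2, 4) == Fraction(-1, 3)

# lambda = (n-1,1) has ratio (n-3)/(n-1) at a transposition; for n = 5
# the coordinates are (7/2 | 3/2) and the ratio is 1/2.
assert char_ratio([Fraction(7, 2)], [Fraction(3, 2)], 2, 5) == Fraction(1, 2)
```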
Our proof of Theorem \ref{character_ratio_theorem} is an asymptotic
estimation of this integral. We first record several properties of $F_k^{a,b}$
in the following lemma.
\begin{lemma}\label{F_properties}
Let $\lambda = (a_1, ..., a_m|b_1, ..., b_m)$ be a partition of $n$ in
Frobenius notation.
\begin{enumerate}
\item Each of $F_k^{a}(z)$ and $F_k^b(z)$ has at most $\sqrt{n}$ poles.
\item If $k > n$ then $F_k^{a,b}(z)$ is holomorphic.
\end{enumerate}
Denote by $\lambda' = \lambda \setminus \lambda_1 = (a_1',
..., a_{m'}'|b_1', ..., b_{m'}')$ the partition of $n-\lambda_1$ found
by deleting the first row of $\lambda$.
\begin{enumerate}
\item[(3)] We have
\[
F_k^{a,b}(z) \frac{z- a_1 +\frac{k}{2}}{z-a_1 - \frac{k}{2}}
= F_k^{a',b'}(z+1).
\]
\item[(4)] If $a_1 > n-k - \frac{1}{2}$ and if $n \geq 2k$ then $F_k^{a,b}(z)$
has a
single
simple pole at $z= a_1 - \frac{k}{2}$.
\end{enumerate}
\end{lemma}
\begin{proof}
The first item follows from the bound for the diagonal $m \leq \sqrt{n}$, since
both $F_k^a$
and $F_k^b$ are products of $m$ terms.
For 2., first observe that $a_i$ and $b_i$ are strictly decreasing so that the
poles of $F_k^a(z)$ are all simple, as are the poles of $F_k^b(z)$. Since $k >
n$, $a_i + b_j - k < 0$, so $a_i - \frac{k}{2} \neq \frac{k}{2} - b_j$. Thus
the poles of $F_k^a(z)F_k^b(z)$ are all simple, and are all cancelled by the
factor of $\left(z + \frac{k-1}{2}\right)^{\underline{k}}$.
For 3., observe that deleting the first row of the diagram for
$\lambda$ shifts the diagonal down one square. Thus, if
$b_m = \frac{1}{2}$ then $m' = m-1$ and
\[
\forall\; 1 \leq i \leq m',\quad a_{i}' = a_{i+1} + 1, \quad b_i' = b_i-1.
\]
In this case, accounting for the lost factor from $b_m$,
\begin{equation}\label{factor_drop}
F^a_k(z)F^b_k(z)\cdot \frac{z - a_1 + \frac{k}{2}}{z-a_1 - \frac{k}{2}} =
F^{a'}_k(z+1)F^{b'}_k(z+1) \frac{z +\frac{k+1}{2}}{z -\frac{k-1}{2}}.
\end{equation}
If instead $b_m > \frac{1}{2}$ then $a_m = \frac{1}{2}$, which implies $m' =
m$, and
\[
\forall\; 1 \leq i \leq m-1, \quad a_{i}' = a_{i+1} + 1,\; a_m' = \frac{1}{2},
\quad \forall\; 1 \leq j \leq m, \quad b_j' = b_j-1.
\]
Thus (\ref{factor_drop}) still holds, the new factor accounting for the
introduction of $a_{m}' = \frac{1}{2}$. Thus, in either case,
\begin{align*}
F_k^{a,b}(z)&\cdot \frac{z - a_1 + \frac{k}{2}}{z-a_1 - \frac{k}{2}} =
\left(z + \frac{k-1}{2}\right)^{\underline{k}}
F^{a'}_k(z+1)F^{b'}_k(z+1) \frac{z +\frac{k+1}{2}}{z -\frac{k-1}{2}}
\\ &=
\left(z+\frac{k+1}{2}\right)^{\underline{k}} F^{a'}_k(z+1)F^{b'}_k(z+1) \\&=
F_k^{a',b'}(z+1).
\end{align*}
For 4., the conditions $a_1 > n-k - \frac{1}{2}$ and $n
\geq 2k$ together imply that $a_1 > k -\frac{1}{2}$. Thus, for all $i$, $a_1 + b_i >
k$, so that $a_1 - \frac{k}{2} \neq \frac{k}{2} - b_i$, and $a_1 -\frac{k}{2}$
is a simple pole of $F_k^a(z)F_k^b(z)$. Furthermore, it is not cancelled by
$\left(z+\frac{k-1}{2}\right)^{\underline{k}}$, so that $a_1 - \frac{k}{2}$ is a
simple pole of $F_k^{a,b}(z)$. It is the only pole: by 3., the remaining poles
are those of $F_k^{a',b'}(z+1)$, and since $\lambda'$ is a partition of
$n-\lambda_1 < k$, that function is holomorphic by 2.
\end{proof}
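As a numerical sanity check of item (3) (not part of the proof), the identity can be verified in Python on a small partition. We take $\lambda = (4,3,1)$, with modified Frobenius coordinates $(\frac{7}{2}, \frac{3}{2}|\frac{5}{2},\frac{1}{2})$, so that $\lambda' = (3,1)$ has coordinates $(\frac{5}{2}|\frac{3}{2})$. The explicit product formula for $F_k^{a,b}$ used below is reconstructed from the factors appearing in this proof, so it is a working assumption rather than a quotation of the definition.

```python
def prodc(factors):
    # product of a sequence of (possibly complex) numbers
    out = 1
    for f in factors:
        out *= f
    return out

def F(a, b, k, z):
    # Reconstructed F_k^{a,b}(z): the a-part has poles at a_i - k/2,
    # the b-part at k/2 - b_j (assumed normalization).
    ff = prodc(z + (k - 1) / 2 - j for j in range(k))
    Fa = prodc((z - ai - k / 2) / (z - ai + k / 2) for ai in a)
    Fb = prodc((z + bj + k / 2) / (z + bj - k / 2) for bj in b)
    return ff * Fa * Fb

# lambda = (4,3,1): coordinates (7/2, 3/2 | 5/2, 1/2); deleting the
# first row gives lambda' = (3,1) with coordinates (5/2 | 3/2).
a, b = [3.5, 1.5], [2.5, 0.5]
ap, bp = [2.5], [1.5]

k, z = 2, 0.3 + 0.2j
lhs = F(a, b, k, z) * (z - a[0] + k / 2) / (z - a[0] - k / 2)
assert abs(lhs - F(ap, bp, k, z + 1)) < 1e-10
```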
Parts a. and b. of Theorem \ref{character_ratio_theorem} are easier to
prove. We
give their proofs immediately, and then establish several lemmas before proving
part c.
\begin{proof}[Proof of Theorem \ref{character_ratio_theorem} part a.] In this part, the first row $\lambda_1$ of the partition $\lambda$ is significantly larger than the remainder of the partition, which has size $r$. We extract a residue contribution from the pole corresponding to $\lambda_1$, and bound the remainder of the character ratio by approximating it with a character ratio on $S_r$.
Observe that $\sum_{i>1} a_i + \sum_i b_i =n-a_1 = r + \frac{1}{2}$. Recall
that we assume $r + k + 1 \leq \left(\frac{1}{2}-\epsilon\right)n$, and that
this part of the theorem is the asymptotic
\begin{align*}
\frac{\chi^\lambda(C)}{f^\lambda} &= \frac{(a_1 -
\frac{1}{2})^{\underline{k}}}{n^{\underline{k}}} \prod_{j=2}^m \frac{a_1 -a_j -
k}{a_1 - a_j} \prod_{j=1}^m \frac{a_1 + b_j}{a_1 + b_j-k} \\\notag&+
O_\epsilon\left(\exp\left(k \left[ \log \frac{(1 + \epsilon)(k+1+r)}{n-k} +
O_\epsilon\left(r^{-\frac{1}{2}}\right)\right]\right)\right),\end{align*}
with the error equal to 0 if $r <k$.
The poles of $F_k^{a,b}$ are among the points $a_i - \frac{k}{2}$, $\frac{k}{2}
- b_i$, $i = 1, ..., m$, some of which may be cancelled by the numerator.
The condition $k+ 1 + r \leq \left(\frac{1}{2}-\epsilon\right) n$
guarantees that \[a_1 - \frac{k}{2} = n - r - \frac{k+1}{2} >
\left(\frac{1}{2} + \epsilon\right)n.\] Since $a_i, b_j \leq r+\frac{1}{2}$ for
$i \geq 2$,
$j \geq 1$, it follows that $a_1 - \frac{k}{2}$ is the furthest pole from zero
of the function $F^a_k(z)F^b_k(z)$, and that this is a simple pole of
$F_k^{a,b}(z)$.
Set $R = (1 + \epsilon)\left(r + \frac{k+1}{2}\right)$ and notice \[R <
\frac{n}{2} < a_1 - \frac{k}{2}, \qquad R > \max_{i \geq 2, j \geq
1}\left(\left|a_i - \frac{k}{2}\right|, \left|b_j -
\frac{k}{2}\right|\right),\] so that $a_1 - \frac{k}{2}$ is outside the loop
$|z| = R$, while all other poles are inside. Thus
\begin{equation}\label{error_integral}
\frac{\chi^\lambda(C)}{f^\lambda} = \frac{-1}{kn^{\underline{k}}}
\mathrm{Res}_{z = a_1 - \frac{k}{2}} F_k^{a,b}(z) + \frac{-1}{k n^{\underline{k}}}
\frac{1}{2\pi i} \int_{|z| = R} F_k^{a,b}(z)dz.
\end{equation}
The residue term
is equal to the main term of the theorem, so it remains to bound the integral.
Recall the notation
$(a_1', ..., a_{m'}'|b_1', ..., b_{m'}') = \lambda
\setminus \lambda_1$, and that \[F_k^{a,b}(z) = \frac{z-a_1 -
\frac{k}{2}}{z-a_1+\frac{k}{2}} F_k^{a',b'}(z+1).\] Thus we express the
integral of
(\ref{error_integral}) as
\begin{equation}\label{residue_extracted}
\frac{-1}{k n^{\underline{k}}} \frac{1}{2\pi i} \int_{|z| = R} F_k^{a,b}(z) dz=
\frac{-1}{kn^{\underline{k}}} \frac{1}{2\pi i} \int_{|z| = R}
F_k^{a',b'}(z+1) \frac{z- a_1 - \frac{k}{2}}{z-a_1 + \frac{k}{2}} dz.
\end{equation}
When $k > r$, $F_k^{a',b'}(z+1)$ is holomorphic, and so
(\ref{residue_extracted})
is zero, which proves the latter claim of the theorem. Thus we may now assume
that $r \geq k$.
On the contour $|z| = R$,
\[\left|z- a_1 +
\frac{k}{2}\right| \geq a_1 - R - \frac{k}{2} \geq n - (2 + \epsilon)\left(r +
\frac{k+1}{2}\right) \gg_\epsilon n,
\]
so that $\left|\frac{z- a_1 - \frac{k}{2}}{z-a_1 + \frac{k}{2}}
-1\right| = O_\epsilon \left(\frac{k}{n}\right).$ Thus, writing $C'$ for the
$k$ cycle class on $S_r$, (\ref{residue_extracted}) is equal to
\begin{equation}\label{contracted_evaluation}
\frac{r^{\underline{k}}}{n^{\underline{k}}} \frac{\chi^{\lambda\setminus
\lambda_1}(C')}{f^{\lambda\setminus \lambda_1}} +
O_\epsilon\left(\frac{k}{n} \frac{1}{ k n^{\underline{k}}}
\int_{|z| = R}
\left|F_{k}^{a',b'}(z+1)\right|
d|z|\right).
\end{equation}
We bound the first term by $\frac{r^{\underline{k}}}{n^{\underline{k}}}$, since
all character ratios are bounded by 1.
To bound the integral, note that $F^{a'}_k(z)$ and $F^{b'}_k(z)$
each have at most $\sqrt{r}$ poles. On the contour $|z| = R$,
\[\left|\frac{z-a_i -\frac{k}{2}}{z-a_i +\frac{k}{2}}\right| \leq 1 +
O_\epsilon\left(\frac{k}{r}\right),\] and the terms in $F^{b'}_k(z)$ are bounded
similarly. It follows that on $|z| = R$, \[|F^{a'}_k(z+1)F^{b'}_k(z+1)| \leq
\exp\left(O_\epsilon\left(\frac{k}{\sqrt{r}}\right)\right).\] Meanwhile, also
on $|z| =
R$,
\[ \left|\left(z + \frac{k+1}{2}\right)^{\underline{k}}\right| \leq \left(|z| +
\frac{k+1}{2}\right)^k \leq
\left((1 + \epsilon)(k+1+r)\right)^k.\]
Putting these bounds together, we deduce a bound for the second term of
(\ref{contracted_evaluation}) of
\begin{align*} \frac{1}{n\cdot n^{\underline{k}}}&\frac{1}{2\pi}\int_{|z| = R}
|F^{a',b'}_k(z+1)| d|z| \leq\frac{1}{n^{\underline{k}} } \sup_{|z| = R}
\left|F_k^{a',b'}(z+1)\right|
\\ & = O_\epsilon\left(\frac{1}{n^{\underline{k}}} \exp\left(k
\left[\log\left((1 +\epsilon)(r + k + 1)\right) +O_\epsilon
\left(\frac{1}{\sqrt{r}}\right)\right] \right)\right).
\end{align*}
To complete the proof of the theorem, use $n^{\underline{k}} \geq
(n-k)^k$ and note that the term $\frac{r^{\underline{k}}}{n^{\underline{k}}}$
from the character ratio in (\ref{contracted_evaluation}) is trivially absorbed
into this error term.
\end{proof}
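As a numerical illustration of the case $r < k$, where the error term vanishes and the character ratio equals the main term exactly: for $\lambda = (9,1)$ in $S_{10}$, the character is that of the standard representation, $\chi(\sigma) = \mathrm{fix}(\sigma) - 1$, so the ratio at a $k$ cycle can be computed independently. The coordinates $(\frac{17}{2}|\frac{3}{2})$ and the check itself are our own worked example.

```python
from math import prod

# lambda = (9,1) in S_10: coordinates (17/2 | 3/2), so r = 1 < k = 2
# and part a. is exact: the character ratio equals the main term.
a, b, n, k = [8.5], [1.5], 10, 2
lead = prod((a[0] - 0.5 - j) / (n - j) for j in range(k))
main = lead * prod((a[0] + bj) / (a[0] + bj - k) for bj in b)
# chi^{(9,1)}(sigma) = fix(sigma) - 1; a transposition in S_10 fixes
# 8 points, and f^{(9,1)} = 9, so the ratio is 7/9.
assert abs(main - 7 / 9) < 1e-12
```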
\begin{proof}[Proof of Theorem \ref{character_ratio_theorem} part b.]
Choose $\theta_1$ with $\frac{2}{3} < \theta_1 < \theta$.
Since the poles of $F^{a,b}_k$ are among $a_i - \frac{k}{2}$ and $
\frac{k}{2}- b_i$, $i = 1, 2, 3, ...$, choosing $\epsilon = \epsilon(\theta)>0
$ sufficiently small, $a_i, b_i <e^{-\theta}n$ and $k \leq \epsilon n$
guarantees that the contour $|z| =R = e^{-\theta_1}
n$
contains all
poles of $F^{a,b}_k(z)$. Thus
\[
\frac{\chi^{\lambda}(C)}{f^\lambda} =
\frac{-1}{kn^{\underline{k}}}\frac{1}{2\pi i} \int_{|z| = R}
F^{a,b}_k(z) dz.
\]
Write $ \frac{1}{k n^{\underline{k}}} F^{a,b}_k(z) =\frac{(z +
\frac{k-1}{2})^{\underline{k}}}{k n^{\underline{k}}} F^a_k(z)F^b_k(z)$. Since
$k
\leq \epsilon n$, for $|z| = R$, \[\left|\frac{(z +
\frac{k-1}{2})^{\underline{k}}}{n^{\underline{k}}}\right| \leq
\frac{\left(R+\frac{k}{2}\right)^k}{(n-k)^k} \leq \exp(k
(-\theta_1 + o(1))),\] with $o(1)$ indicating a quantity that may be made
arbitrarily small with a sufficiently small choice of $\epsilon$. Also,
$F^a_k(z)$
and $F^b_k(z)$ are each composed of at
most $ \sqrt{n}$ factors, each of size at most $1 +
O_{\theta,\theta_1,\epsilon}(\frac{k}{n})$. We deduce
that for $|z| = R$,
\[
\left|\frac{1}{k n^{\underline{k}}}
F^{a,b}_k(z)\right| \leq \exp\left(k\left(-\theta_1 +
o_{\theta,\theta_1}(1) \right)\right),
\]
the error term requiring that first $\epsilon$ be small, and then that $n$ be
large.
The length of the contour
is $O(n)=O\left(\exp\left(\frac{k}{6}\right)\right)$. Since $\theta_1 >
\frac{1}{2} + \frac{1}{6}$, it follows that the bound
$\exp\left(-\frac{k}{2}\right)$ holds for the character ratio for all $n$
sufficiently large.
\end{proof}
We now turn to part c. of Theorem \ref{character_ratio_theorem}. We will
again estimate the integral
\begin{equation}\label{contour_short_form_again}
\frac{\chi^\lambda(C)}{f^\lambda}
=\frac{-1}{kn^{\underline{k}}}\frac{1}{2\pi i}\oint F_k^{a,b}(z)
dz,
\end{equation}
but the idea now
will be to evaluate clumps of
poles together. With an appropriate choice of contour, at any given point
the poles that are sufficiently far away make an essentially constant
contribution to the integrand, and so may be safely removed, simplifying the
integral.
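Formula (\ref{contour_short_form_again}) can be checked numerically on a small example. The sketch below takes $\lambda = (3,1)$ in $S_4$ with $k=2$, for which the character ratio at a transposition is $1/3$ (the standard representation has $\chi(\sigma) = \mathrm{fix}(\sigma) - 1$ and $f^\lambda = 3$); the product formula for $F_k^{a,b}$ is reconstructed from the factors used in the proofs above, so this is an illustration under that assumption.

```python
import cmath

def prodc(factors):
    # product of a sequence of (possibly complex) numbers
    out = 1
    for f in factors:
        out *= f
    return out

def F(a, b, k, z):
    # reconstructed product formula for F_k^{a,b}(z)
    ff = prodc(z + (k - 1) / 2 - j for j in range(k))
    Fa = prodc((z - ai - k / 2) / (z - ai + k / 2) for ai in a)
    Fb = prodc((z + bj + k / 2) / (z + bj - k / 2) for bj in b)
    return ff * Fa * Fb

# lambda = (3,1) in S_4, coordinates (5/2 | 3/2); at a transposition
# the character ratio is 1/3.
a, b, n, k = [2.5], [1.5], 4, 2
nfall = prodc(n - j for j in range(k))  # n^{underline k} = 12
R, N = 5.0, 4000  # contour radius (enclosing all poles), sample count
integral = 0.0
for t in range(N):
    z = R * cmath.exp(2j * cmath.pi * t / N)
    integral += F(a, b, k, z) * 2j * cmath.pi * z / N  # F(z) dz
ratio = (-1 / (k * nfall)) * integral / (2j * cmath.pi)
assert abs(ratio - 1 / 3) < 1e-6
```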
Denote by
\[\mathcal{P} = \left\{a_j - \frac{k}{2}: 1\leq j \leq m\right\} \cup
\left\{\frac{k}{2} - b_j:
1 \leq j \leq
m\right\}\] the multi-set (that is, set counted with multiplicities) of poles of
$F_k^{a}(z)$ and $F_k^b(z)$.
One trivial fact concerning the distribution of poles in $\mathcal{P}$ is the
following bound.
\begin{lemma}\label{trivial_upper_bound} For any $x > 0$,
\[
\#\{p \in \mathcal{P}: |p| > x\} \leq \frac{n + km}{x}.
\]
\end{lemma}
\noindent Indeed, this follows from the fact that
\[
\sum_{i=1}^m \left|a_i - \frac{k}{2}\right| + \sum_{i=1}^m\left|\frac{k}{2} -
b_i\right| \leq km + \sum a_i + \sum b_i = km+n.
\]
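A quick numerical check of this counting bound (not part of the argument), with illustrative modified Frobenius coordinates for $\lambda = (4,3,1)$ and $k=2$:

```python
# modified Frobenius coordinates of lambda = (4,3,1): n = 8, m = 2
a, b, n, k = [3.5, 1.5], [2.5, 0.5], 8, 2
m = len(a)
# the multi-set of poles: a_i - k/2 and k/2 - b_i
poles = [ai - k / 2 for ai in a] + [k / 2 - bi for bi in b]
for x in [0.5, 1.0, 2.0, 3.0]:
    count = sum(1 for p in poles if abs(p) > x)
    assert count <= (n + k * m) / x
```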
A useful consequence is that,
for real $x$ with $|x| > k\sqrt{n}\log n$, we can always find a real point $y$
near $x$ with large distance from all poles in $\mathcal{P}$.
\begin{definition}
Let $x \in \mathbb{R}\setminus \{0\}$ and let $L>0$ be a parameter. We say that a real
number $y$ is $L$\emph{-well-spaced} for $x$ with respect to $\mathcal{P}$ if
the following bound is satisfied:
\begin{equation}\label{well_spaced_condition}
\sum_{p \in \mathcal{P}: |p-x|< \frac{|x|}{2}}\frac{1}{|p-y|} \leq
\frac{L}{|x|}.
\end{equation}
\end{definition}
The following lemma says that if $|x|$ is sufficiently large, then there
are always many points $y$ near $x$ that are well-spaced for $x$ with respect
to $\mathcal{P}$.
\begin{lemma}\label{point_spacing}
Let $n > e^5$ and let $1<k < \sqrt{n}$.
Let $x$ be real with
$|x| > k \sqrt{n}\log n$. Then there exists a real
$y$ with $|y-x| < \sqrt{n}$ which is $4 \sqrt{n} \log n$-well-spaced for $x$
with respect to $\mathcal{P}$.
\end{lemma}
\begin{proof}
Let $I$ be the interval $I = \{y: |y-x| < \sqrt{n}\}$. By Lemma
\ref{trivial_upper_bound}, the interval $I$ contains at most
\[\frac{n+km}{|x|-\sqrt{n}}
\leq \frac{2n}{|x|-\sqrt{n}} \leq
\frac{2.5\sqrt{n}}{k \log n}\] poles of $\mathcal{P}$. Deleting the segment of
radius $k$ around each pole in $\mathcal{P}\cap I$ removes a set of total
length at most
\[
\frac{5 \sqrt{n}}{ \log n} \leq \sqrt{n}\]
leaving $I'$ with $|I'|\geq
\frac{|I|}{2} = \sqrt{n}$. Now
\begin{align}\notag
\frac{1}{|I'|} \int_{y \in I'} \sum_{p \in \mathcal{P}, |p-x|< \frac{|x|}{2}}
\frac{dy}{|y-p|}&\leq \frac{1}{|I'|} \sum_{p \in \mathcal{P}, |p-x|<
\frac{|x|}{2}} \int_{u \in I, |u-p|\geq k} \frac{du}{|u-p|}\\ & \leq
\frac{1}{|I'|} \sum_{p \in \mathcal{P}, |p-x|<
\frac{|x|}{2}}2 \times \int_{k}^{\sqrt{n}} \frac{du}{u} \notag \\&\leq
\frac{\log
n}{|I'|} \left|\left\{p \in \mathcal{P}: |p-x| < \frac{|x|}{2}\right\}\right|.
\label{average_of_p_to_y}
\end{align}
Applying the cardinality bound of Lemma \ref{trivial_upper_bound} a second time (recall
$|I'|\geq \sqrt{n}$) we obtain a bound for (\ref{average_of_p_to_y}) of
\[
\frac{\log n}{\sqrt{n}} \frac{ (n + km)}{\frac{|x|}{2}} \leq \frac{4 \sqrt{n}
\log
n}{|x|}.
\]
It follows that a typical point $y \in I'$ (and in particular at least one such
point) is $4 \sqrt{n}\log n$-well-spaced for $x$.
\end{proof}
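The proof is by averaging, but a well-spaced point can also be located by direct search. The following sketch (our own illustration, with hypothetical pole data) scans the interval $(x - \sqrt{n}, x + \sqrt{n})$ for a point satisfying (\ref{well_spaced_condition}):

```python
from math import sqrt, log

def well_spaced_point(x, poles, n):
    """Scan I = (x - sqrt(n), x + sqrt(n)) for a point y with
    sum over {p : |p - x| < |x|/2} of 1/|p - y| <= 4 sqrt(n) log(n)/|x|."""
    L = 4 * sqrt(n) * log(n)
    near = [p for p in poles if abs(p - x) < abs(x) / 2]
    step = sqrt(n) / 1000
    y = x - sqrt(n) + step / 2
    while y < x + sqrt(n):
        if all(abs(p - y) > 1e-6 for p in near) and \
                sum(1 / abs(p - y) for p in near) <= L / abs(x):
            return y
        y += step
    return None

# hypothetical data: a cluster of 30 poles just above x = k sqrt(n) log n
n, k = 10000, 5
x = k * sqrt(n) * log(n)
poles = [x - 5 + i for i in range(30)]
y = well_spaced_point(x, poles, n)
assert y is not None and abs(y - x) < sqrt(n)
```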
Recall that $k \leq n^{\frac{1}{2}-\epsilon}$. We now assume, as we may, that
$n$ is sufficiently large to guarantee $k \leq \frac{\sqrt{n}}{(\log n)^2}$ and
we choose a sequence of well-spaced points at which we partition our integral.
To find our sequence of points, first set
\[
x_0 = \frac{k}{2}\sqrt{n}(\log n)^2
\]
and
\[
x_j = x_0 + 2j \sqrt{n}, \qquad j = 1,2,...
\]
for those $j$ such that $x_j \leq n + 4 \sqrt{n}$.
In each interval \[[x_j-\sqrt{n}, x_j +
\sqrt{n}), \qquad j = 0, 1, 2, ...\] apply Lemma \ref{point_spacing} to find
$y_j^+$, a
$4\sqrt{n}\log n$-well-spaced point for $x_j$.
Also find a point $y_j^-$ in each interval $(-x_j-\sqrt{n}, - x_j +
\sqrt{n}]$ which is $4\sqrt{n}\log n$-well-spaced for $-x_j$. Thus
we have the
disjoint
intervals
\[I_0 = (y_0^-, y_0^+), \qquad I_j^+ = [y_{j-1}^+,y_j^+), \quad I_j^- =
(y_j^-, y_{j-1}^-], \quad j = 1, 2, 3, ...\] which together cover
\[(-n-\sqrt{n}, n+\sqrt{n}).\] Let $q_j^+$ (resp. $q_j^-, q_0$) be the number of
poles of $\mathcal{P}$ contained in $I_j^+$ (resp. $I_j^-, I_0$).
Around each interval $I_j^+,$ $j \geq 1$ we draw
a
rectangular box
\[
{\mathcal{B}}_j^+ = \left\{z \in \mathbb{C}: y_{j-1}^+ \leq \Re(z) \leq y_j^+, \; |\Im(z)|
\leq k q_j^+\right\}
\]
and similarly around $I_j^-$. If $q_j^\pm = 0$ then the box may be
discarded. We also draw a large box ${\mathcal{B}}_0$ containing the origin, with
corners at $y_0^- \pm i
k\sqrt{n}, y_0^+ \pm i k\sqrt{n}$. A permissible contour with which to apply
integral formula (\ref{contour_short_form_again}) is given by
\[
\partial {\mathcal{B}}_0 \cup \bigcup_{j\geq 1} \partial {\mathcal{B}}_j^+ \cup \bigcup_{j\geq 1} \partial
{\mathcal{B}}_j^-,
\]
each box being positively oriented, see figure \ref{contour_figure}. We use a
somewhat shortened
version of this contour.
\begin{figure}
\caption{Schematic of the boxes ${\mathcal{B}}_j$. Between
successive equally spaced points $x_j, x_{j+1}$ denoted by (+), point $y_j$ is chosen to
be well-spaced from the set of poles (red). Box
${\mathcal{B}}_j$ has vertical edges at $\Re(z) = y_{j-1}, y_j$,
and height proportional to the number of poles
contained.}\label{contour_figure}
\centering
\includegraphics[width = 0.7 \textwidth]{contour}
\end{figure}
In what follows we treat only the integrals around the boxes ${\mathcal{B}}_j^+$ and
${\mathcal{B}}_0$,
and we
drop superscripts (so we write e.g. $q_j$ for $q_j^+$, $I_j$ for $I_j^+$ etc). The argument may be carried out symmetrically for ${\mathcal{B}}_j^-$.
The main proposition is as follows.
\begin{proposition}
Let $j \geq 1$. There exists a piecewise linear contour $\mathcal{C}_j$,
having total length
\[
|\mathcal{C}_j| \leq 12 k q_j n^{\frac{1}{4}},
\]
such that $\mathcal{C}_j$ has winding number 1 around each pole $p \in
\mathcal{P} \cap {\mathcal{B}}_j$ and winding number 0 around all poles $p' \in
\mathcal{P}\setminus {\mathcal{B}}_j$. For $z\in \mathcal{C}_j$,
$F_k^{a,b}(z)$ satisfies
\begin{equation}\label{approximate_formula}
F^{a,b}_k(z) = \left( x_j +
\frac{k-1}{2}\right)^{\underline{k}} \prod_{p \in \mathcal{P} \cap {\mathcal{B}}_j}
\frac{z-p- k}{z-p} +
O\left( k x_j^{k-1} \sqrt{n} \log n\right).
\end{equation}
\end{proposition}
\begin{proof}
Recall that ${\mathcal{B}}_j$ is a box containing $q_j$ poles and surrounding interval
$I_j = [y_{j-1}, y_j)$, with corners at
\[
y_{j-1} \pm ikq_j, \qquad y_j \pm ikq_j.
\]
Recall also that $y_j \in [x_j - \sqrt{n}, x_j+\sqrt{n})$ and
$x_j -
x_{j-1} = 2\sqrt{n}$, so that $|y_{j} - y_{j-1}| \leq 4 \sqrt{n}$.
\begin{figure}
\caption{Schematic of contour ${\mathcal{C}}_j$. Poles in red, box ${\mathcal{B}}_j$ in blue, ${\mathcal{C}}_j$ in green. }\label{C_contour_figure}
\centering
\includegraphics[width = \textwidth]{C_Contour}
\end{figure}
To form $\mathcal{C}_j$ we shorten the contour $\partial {\mathcal{B}}_j$. At each pole
$p
\in {\mathcal{B}}_j$ consider the segment
\[S_p = [p-kq_j, p+kq_j].\] If there exists a subinterval $J$ of interval
$I_j$
that is not covered by $\cup_{p \in \mathcal{P}\cap {\mathcal{B}}_j}S_p$ then such $J$
may
be discarded from the box ${\mathcal{B}}_j$ by drawing vertical segments at the endpoints
of
$J$ and deleting the parts of ${\mathcal{B}}_j$ vertically above and below $J$.
Deleting a subinterval in this way whenever doing so reduces the total
perimeter, and retaining it otherwise, we arrive at a
contour $\mathcal{C}_j$ which is the union of some rectangles, and has total
length at most
\[ |\mathcal{C}_j| \leq \min(8 \sqrt{n} + 4 k q_j, 8kq_j^2),\]
the first term bounding the result of doing nothing, and the second
bounding the length of a contour with an individual box around every pole.
Using $\min(a,b) \leq
\sqrt{ab}$,
we have the bound
\[
|\mathcal{C}_j| \leq 4k q_j + \min(8 \sqrt{n}, 8kq_j^2) \leq 4kq_j + 8 k q_j
n^{\frac{1}{4}} \leq 12 k q_j n^{\frac{1}{4}}.
\]
We now prove the formula (\ref{approximate_formula}). Observe that
by Lemma \ref{trivial_upper_bound}, the number of poles $q_j$ in $\mathcal{P}
\cap {\mathcal{B}}_j$ satisfies
$q_j \ll \frac{n}{x_j} \ll \frac{\sqrt{n}}{k}$, so that each point of
$\mathcal{C}_j$ has
distance $O(\sqrt{n})$ from $x_j$. Thus
\[
\forall z \in \mathcal{C}_j, \qquad\left(z + \frac{k-1}{2}\right)^{\underline{k}} = \left( x_j +
\frac{k-1}{2}\right)^{\underline{k}}\left(1 +
O\left(\frac{k\sqrt{n}}{x_j}\right)\right)
.\]
In particular, it follows that
\begin{equation}\label{first_approximation_j}
\forall z \in \mathcal{C}_j, \qquad F_k^{a,b}(z) = \left(1 + O \left(\frac{k
\sqrt{n}}{x_j}\right)\right) \left( x_j +
\frac{k-1}{2}\right)^{\underline{k}}F_k^a(z)F_k^b(z).
\end{equation}
We prove
\begin{equation}\label{distant_poles_bound}
\forall z \in \mathcal{C}_j, \qquad F_k^a(z)F_k^b(z) = \left( 1 +
O\left(\frac{k\sqrt{n} \log n}{x_j}\right)\right) \cdot \prod_{p \in \mathcal{P}
\cap {\mathcal{B}}_j}\frac{z-p-k}{z-p},
\end{equation}
and
\begin{equation}\label{near_poles_bound}
\forall z \in \mathcal{C}_j, \qquad \prod_{p \in \mathcal{P}
\cap {\mathcal{B}}_j}\frac{z-p-k}{z-p} = O(1).
\end{equation}
Inserted in (\ref{first_approximation_j}), these combine to prove
formula (\ref{approximate_formula}); here we use $\left( x_j +
\frac{k-1}{2}\right)^{\underline{k}} \leq x_j^k$.
We first consider (\ref{distant_poles_bound}). This
estimate is equivalent to \[
\forall z \in \mathcal{C}_j, \qquad \prod_{p \in \mathcal{P} \setminus {\mathcal{B}}_j}
\frac{z - p \mp k}{z-p} = 1 + O\left(\frac{k\sqrt{n} \log n}{x_j}\right),
\]
the sign determined according as $p$ is a pole from $F_k^a$ or $F_k^b$.
Say a pole $p \in \mathcal{P} \setminus {\mathcal{B}}_j$ is `good' if
$|p-x_j| > \frac{x_j}{2}$ and `bad' otherwise. For $z \in \mathcal{C}_j$,
\[|z-x_j| = O(\sqrt{n}) < \frac{x_j}{3}\] if $n$ is sufficiently large, which
implies that for good $p$, $|z-p| \geq \frac{x_j}{6}$. Since the total number
of poles is $O(\sqrt{n})$, the product over good poles evidently satisfies the
bound
\[
\prod_{p \text{ good}} \left(1 \mp \frac{k}{z-p}\right) = 1 +
O\left(\frac{k\sqrt{n}}{x_j}\right).
\]
Since it is not contained in the box ${\mathcal{B}}_j$, a bad pole $p$ necessarily lives
in
the complement $\mathbb{R}
\setminus [y_{j-1},y_j]$, and either satisfies $p > y_j$ and $|p- x_j|\leq
\frac{x_j}{2}$ or
else $p < y_{j-1}$, which entails $\frac{x_j}{2} \leq p \leq x_j$. Since
$\frac{x_{j-1}}{2} < \frac{x_j}{2}$, and $\frac{3 x_{j-1}}{2} > x_j$ (at least
if $n$ is sufficiently large), it follows that in the second case, $|p -
x_{j-1}| \leq \frac{x_{j-1}}{2}$. Thus, for $z \in
\mathcal{C}_j$, the fact that $y_{j-1} \leq \Re(z) \leq y_j$ implies
\[
\sum_{p \text{ bad},\; p > y_j} \frac{1}{|z-p|} \leq \sum_{\substack{p > y_j\\
|p-x_j| < \frac{x_j}{2}}} \frac{1}{p - y_j} \leq \frac{4 \sqrt{n} \log n}{x_j}
\]
and
\[
\sum_{p \text{ bad},\; p< y_{j-1}} \frac{1}{|z-p|} \leq
\sum_{\substack{p<y_{j-1}\\ |p -x_{j-1}| < \frac{x_{j-1}}{2} }} \frac{1}{y_{j-1}
- p} \leq \frac{4 \sqrt{n}\log n}{x_{j-1}} \leq
\frac{8 \sqrt{n} \log n}{x_j},
\]
by invoking the $4 \sqrt{n}\log n$ well-spaced property of $y_{j-1}$ and
$y_j$.
It follows
\[
\prod_{p \text{ bad}} \left(\frac{z-p \mp k}{z-p}\right) = 1 +
O\left(\frac{k\sqrt{n}\log n}{x_j}\right),
\]
proving (\ref{distant_poles_bound}).
To prove (\ref{near_poles_bound}) we consider two cases. When either $\Re(z) =
y_{j-1}$ or $\Re(z) = y_j$, observe that for each $p \in \mathcal{P} \cap
{\mathcal{B}}_j$,
\[|p - x_j| \leq 3 \sqrt{n} \leq \frac{x_j}{2}.\] Also, since $x_j-x_{j-1} =
2\sqrt{n}$, \[|p - x_{j-1}| \leq 4 \sqrt{n} \leq \frac{x_{j-1}}{2} \qquad (n
\text{ large}).\] Thus, invoking the well-spaced property of $y_{j-1}$ and
$y_j$,
\[
\sum_{p \in \mathcal{P} \cap {\mathcal{B}}_j} \frac{1}{|p-y_{j-1}|} \leq \frac{4
\sqrt{n}\log n}{x_{j-1}}, \qquad \sum_{p \in \mathcal{P} \cap {\mathcal{B}}_j}
\frac{1}{|p - y_j|} \leq \frac{4 \sqrt{n} \log n}{x_j}
\]
so that if $\Re(z) = y_{j-1}$,
\[
\prod_{p \in \mathcal{P} \cap {\mathcal{B}}_j} \left(1 - \frac{k}{z-p}\right)
\leq \exp\left(O \left(\frac{k \sqrt{n}\log n}{x_{j-1}}\right)\right) = O(1),
\]
and similarly when $\Re(z) = y_j$.
When $\Re(z) \not \in \{y_{j-1}, y_j\}$ then $|z-p| \geq k q_j$ for each $p \in
\mathcal{P}\cap {\mathcal{B}}_j$, so that in this case the bound
\[
\prod_{p \in \mathcal{P} \cap {\mathcal{B}}_j} \left(1 - \frac{k}{z-p}\right) = O(1)
\]
is immediate from $|\mathcal{P} \cap {\mathcal{B}}_j| = q_j$. Thus
(\ref{near_poles_bound}) holds in either case.
\end{proof}
We now bound the integral around the box ${\mathcal{B}}_0$.
\begin{lemma}\label{cB_0_bound} Assume $n > e^5$.
We have
\[
\frac{-1}{k n^{\underline{k}}} \frac{1}{2\pi i} \oint_{\partial {\mathcal{B}}_0}
F^{a,b}_k(z) dz = O\left(\sqrt{n} (\log
n)^2 \left(\frac{k\log^2
n}{n^{\frac{1}{2}}}\right)^k \right).
\]
\end{lemma}
\begin{proof}
Since ${\mathcal{B}}_0$ is a box with corners at $y_0^{\pm} \pm ik\sqrt{n}$, the contour
has length $O(k\sqrt{n}(\log n)^2)$, so it will suffice to prove
\[
\forall z \in \partial {\mathcal{B}}_0, \qquad \left|\frac{F_k^{a,b}(z)}{k
n^{\underline{k}}} \right| =
O\left(\frac{1}{k}\left(\frac{k\log^2
n}{n^{\frac{1}{2}}}\right)^k\right).
\]
Recall that
\[
F_{k}^{a,b}(z) = \left(z +
\frac{k-1}{2}\right)^{\underline{k}} F_k^a(z)F_k^b(z).
\]
Recall also that $|y_0^{\pm} \mp x_0| \leq \sqrt{n}$ and $x_0 =
\frac{k}{2}\sqrt{n}(\log n)^2$. Thus, on $\partial {\mathcal{B}}_0$ we have $|\Re z| \leq
\frac{k}{2}\sqrt{n} (\log n)^2 + \sqrt{n}$. Hence, using $k < \sqrt{n}$,
\begin{align*}\left|\left(z + \frac{k-1}{2}\right)^{\underline{k}}\right| &\leq
\left(|\Re(z)| + |\Im(z)| + \frac{k-1}{2} \right)^k \\&\leq
\left(\frac{k}{2}\sqrt{n} (\log n)^2 + 2\sqrt{n} + k \sqrt{n}\right)^k \\&\leq
(k \sqrt{n}(\log n)^2)^k,\end{align*} the last bound requiring $k+2 \leq
\frac{k}{2}(\log n)^2$, which plainly holds for $n > e^5$.
Since $n^{\underline{k}} \gg n^k$ when $k < \sqrt{n}$, we obtain the required
bound for $\sup_{z \in \partial {\mathcal{B}}_0}
\left|\frac{F_{k}^{a,b}(z)}{kn^{\underline{k}}}\right|$ by checking that
\[\sup_{z \in
\partial {\mathcal{B}}_0}|F_k^a(z)F_k^b(z)|
= O(1).\]
To do so, for each $p
\in \mathcal{P}$ write the corresponding
factor of $F^a_k(z)$ or $F^b_k(z)$ as
\[
\frac{ z-p\pm k}{z-p} = 1 \pm \frac{k}{z-p}.
\]
For a given $z \in \partial {\mathcal{B}}_0$, say the pole $p \in \mathcal{P}$ is `good'
for $z$ if $|z-p| \geq k \sqrt{n}$. Since the total number of poles in
$\mathcal{P}$ is $O(\sqrt{n})$, the part of the product in $F^a_k(z)F^b_k(z)$
contributed by good poles is bounded by $O(1)$, so we may consider only the
bad poles.
Bad poles exist only if $z$ is on one of the two vertical sides
of the box, that is $\Re(z) = y_0^{\pm}$. Notice that, e.g.
\[
|p-y_0^+| < k\sqrt{n} \qquad \Rightarrow \qquad |p - x_0| < (k+1)\sqrt{n} <
\frac{k}{4}\sqrt{n}(\log n)^2 = \frac{x_0}{2}
\]
(similarly if $p$ is near $y_0^-$), and so bad poles satisfy
$\min(|p + x_0|, |p - x_0|) <
\frac{x_0}{2}$.
By the well-spaced property of $y_0^\pm$, when $\Re(z) = y_0^\pm$ we have
\begin{align*}
\sum_{p \text{ bad}} \frac{1}{|z - p|} &\leq \sum_{p \text{ bad}, p>0}
\frac{1}{|y_0^+ - p|} + \sum_{p \text{ bad}, p < 0} \frac{1}{|y_0^- - p|}\\&
\leq
\frac{8 \sqrt{n} \log n}{x_0} \leq \frac{16}{k \log n}.
\end{align*}
It follows that
\[
\prod_{p \text{ bad}}\left(1 + \frac{k}{|z-p|}\right) = O(1).
\]
\end{proof}
We now have all of the estimates that we need to complete the proof of
Theorem \ref{character_ratio_theorem}. We use the following
integral formula.
\begin{lemma}\label{exact_integral}
Let $p_1, p_2, ..., p_s \in \mathbb{C}$ be a sequence of poles and let $d$ be an
arbitrary
complex number. Then
\[
I(p_1, ..., p_s, d) = \frac{1}{2\pi i}\oint \prod_{j=1}^s \frac{z-p_j
+d}{z-p_j}\, dz = sd
\]
where the contour is such that it has winding number one about each
pole.
\end{lemma}
\begin{proof}
The integral is independent of the choice of admissible contour, so take the contour to
be a large loop containing the poles. We may write the integrand as
$\prod_{j=1}^s \left(1 + \frac{d}{z-p_j}\right).$ Differentiating the integrand
with respect to $p_j$ results in a rational function having total degree $-2$.
It follows that $\frac{\partial}{\partial p_j} I(p_1, ..., p_s, d) = 0$ for each
$j$, since the integral can be made arbitrarily small by taking the contour
to be sufficiently large. Taking $p_j = 0$ for each $j$, the integrand becomes
$\left(1 + \frac{d}{z}\right)^s$, whose residue at $0$ is $sd$.
\end{proof}
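Lemma \ref{exact_integral} is easy to confirm numerically by discretizing a large circular contour; the poles and shift $d$ below are chosen arbitrarily for illustration.

```python
import cmath

# arbitrary illustrative poles and shift d
p = [1 + 2j, -0.5 + 0j, 3 - 1j]
d = 0.7 + 0.3j
s = len(p)
R, N = 10.0, 4000  # contour radius (enclosing all poles), sample count
I = 0.0
for t in range(N):
    z = R * cmath.exp(2j * cmath.pi * t / N)
    term = 1.0
    for pj in p:
        term *= (z - pj + d) / (z - pj)
    I += term * 2j * cmath.pi * z / N  # integrand times dz
I /= 2j * cmath.pi
assert abs(I - s * d) < 1e-6  # the lemma gives exactly s*d
```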
\begin{proof}[Proof of Theorem \ref{character_ratio_theorem} part c.]
Write
\[
\frac{\chi^\lambda(C)}{f^\lambda} = \frac{-1}{kn^{\underline{k}}} \frac{1}{2\pi
i}\left\{
\oint_{\partial {\mathcal{B}}_0} + \sum_{j \geq 1} \oint_{\mathcal{C}_j^+} + \sum_{j \geq
1} \oint_{\mathcal{C}_j^-}\right\} F_k^{a,b}(z) dz.
\]
The contribution from $\partial {\mathcal{B}}_0$ is bounded in Lemma \ref{cB_0_bound}
by
\[\frac{-1}{kn^{\underline{k}}} \frac{1}{2\pi i} \oint_{\partial {\mathcal{B}}_0}
F^{a,b}_k(z) dz = O\left(\sqrt{n} (\log
n)^2 \left(\frac{k\log^2
n}{n^{\frac{1}{2}}}\right)^k \right),
\]
which may be absorbed into the error term of the theorem.
For each $j \geq 1$, write the contribution from the $q_j^+$ poles inside $\mathcal{C}_j^+$ as
\[
\frac{1}{2\pi i} \int_{\mathcal{C}_j^+} \left[ \frac{-( x_j +
\frac{k-1}{2})^{\underline{k}}}{k
n^{\underline{k}}} \prod_{p \in \mathcal{P} \cap {\mathcal{B}}_j} \frac{z-p- k}{z-p} +
O\left( \left(\frac{x_j}{n}\right)^{k-1} \frac{\log n}{\sqrt{n}}\right)\right]
dz.
\]
For the main term of this integral, Lemma \ref{exact_integral} gives the
evaluation
\begin{align*}
\frac{1}{2\pi i} \int_{\mathcal{C}_j^+} \frac{-( x_j +
\frac{k-1}{2})^{\underline{k}}}{k
n^{\underline{k}}}& \prod_{p \in \mathcal{P} \cap {\mathcal{B}}_j} \frac{z-p- k}{z-p} dz
=
q_j \frac{\left(x_j + \frac{k-1}{2}\right)^{\underline{k}}}{n^{\underline{k}}}
\\&= \sum_{a_i: a_i - \frac{k}{2} \in {\mathcal{B}}_j^+} \left(\frac{a_i}{n}\right)^k
\left(1 +
O\left(\frac{k\sqrt{n}}{a_i}\right)\right),
\end{align*}
the error resulting since each of the $k$ factors of
$\left(x_j+\frac{k-1}{2}\right)^{\underline{k}}$ is equal to $a_i + O(\sqrt{n})$
-- we use that $a_i = \Omega(k\sqrt{n})$ here.
Had we considered $\mathcal{C}_j^-$ rather than $\mathcal{C}_j^+$ this main term
would involve a
sum over the $b_i$, with an appropriate sign factor.
The integral of the error term over
$\mathcal{C}_j$ is bounded trivially by using $|\mathcal{C}_j| = O(k q_j
n^{\frac{1}{4}})$, which gives
\[
O\left(k q_j n^{\frac{1}{4}} \cdot \left(\frac{x_j}{n}\right)^{k-1} \frac{\log
n}{\sqrt{n}} \right).
\] The ratio of the
error from $\mathcal{C}_j$
to the corresponding main term is
\[
O\left(\frac{k n^{\frac{3}{4}} \log n}{x_j}\right),
\]
and since each $a_i$ in the sum above is within a constant factor of $x_j$, this
proves the Theorem.
Note that in the
claimed formula of the Theorem, the sums over $a_i$ and $b_j$ contain terms
for which $k\sqrt{n} < a_i \leq y_0^+ + \frac{k}{2}$
and $k \sqrt{n} < b_j \leq -y_0^- + \frac{k}{2}$, which do not appear in
our
evaluation. However, only a bound, rather than an asymptotic, is claimed in this
range, so that the theorem is valid with the extra terms discarded.
\end{proof}
\section{Proof of upper bound for Theorem
\ref{main_theorem}}\label{upper_bound_section}
We now prove the upper bound on mixing time from Theorem \ref{main_theorem} by
proving Proposition \ref{char_ratio_prop}. Recall that this proposition is a
bound for character ratios at the class $C$ of $k$ cycles,
\begin{equation}\label{target_char_ratio_bound}
\left|\frac{\chi^\lambda(C)}{f^\lambda}\right|^{\frac{n}{k}(\log n + c)}
\leq \frac{1}{f^\lambda},
\end{equation}
uniformly for $k$ less than a fixed constant times $ n$ and for all $\lambda
\vdash n$.
Before proving the estimate, we collect together several
observations. First note that the character ratio bound is trivial for the
one-dimensional representations $\lambda = (n), (1^n)$. Also, exchanging
$\lambda$ with its dual $\lambda'$ leaves the dimension $f^\lambda$ unchanged,
and at most changes the sign of $\chi^\lambda(C)$, so we will assume
without loss of generality that $\lambda$ has $a_1 \geq b_1$. We set $r =
n-\lambda_1 = n-a_1 - \frac{1}{2}$.
\begin{lemma}[Criterion lemma]\label{the_target_lemma} To prove the character ratio bound (\ref{target_char_ratio_bound}), it suffices to prove, for all $n$ larger than the
maximum of $k$ and a fixed constant, and for a
sufficiently large $c > 0$, that all non-trivial $\lambda$ with $a_1 \geq
b_1$ satisfy
\[
\log \left|\frac{\chi^\lambda(C)}{f^\lambda}\right| \leq \max \left(
\frac{-kr}{n} + \frac{kr \log r}{2n \log n} + \frac{ckr}{n\log n},
\frac{-k}{2} + \frac{ck}{\log n}\right).
\]
\end{lemma}
\begin{proof}
By the bound
(\ref{diaconis_shahshahni_dim_bound}) for the dimension, we have (use $\log r!
\geq r \log \frac{r}{e}$)
\[
f^\lambda \leq \binom{n}{r} \sqrt{r!} \leq \frac{n^r}{\sqrt{r!}}\leq
\exp\left(r \log n -
\frac{1}{2}\left(r \log \frac{r}{e}\right)\right)
\]
so that a bound of
\begin{equation*}\label{long_target}
\log \left|\frac{\chi^\lambda(C)}{f^\lambda}\right| \leq \frac{-kr}{n} +
\frac{kr \log r}{2 n \log n} + \frac{c kr}{n \log n}
\end{equation*}
is sufficient.
On the other hand, for all $\lambda$, $f^\lambda \leq \sqrt{n!}$, so that a
bound of
\[
\log \left|\frac{\chi^\lambda(C)}{f^\lambda}\right| \leq \frac{-k}{2} +
\frac{ck}{\log n}
\]
also suffices.
\end{proof}
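The dimension bound used in the first step can be sanity-checked numerically via the hook length formula (our own check, not part of the argument); here for $\lambda = (6,3,1)$ in $S_{10}$, with $r = 4$:

```python
from math import comb, factorial, prod, sqrt

def dim(la):
    # f^lambda via the hook length formula
    n = sum(la)
    conj = [sum(1 for row in la if row > j) for j in range(la[0])]
    hooks = prod(
        (la[i] - j) + (conj[j] - i) - 1
        for i in range(len(la)) for j in range(la[i])
    )
    return factorial(n) // hooks

la = [6, 3, 1]
n, r = sum(la), sum(la[1:])
assert dim(la) == 315
# the bound f^lambda <= binom(n, r) sqrt(r!)
assert dim(la) <= comb(n, r) * sqrt(factorial(r))
```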
Our proof splits into two cases depending upon whether $k \geq 6\log n$. The
essential tool in both cases will be the evaluation of the character ratio from
Theorem
\ref{character_ratio_theorem}.
For large $k$ we will use only parts a. and b.
of that theorem. Recall that part a. had a main term equal to
\[
\operatorname{MT} := \frac{(a_1 -
\frac{1}{2})^{\underline{k}}}{n^{\underline{k}}} \prod_{j=2}^m \frac{a_1 -a_j -
k}{a_1 - a_j} \prod_{j=1}^m \frac{a_1 + b_j}{a_1 + b_j-k}.
\]
\begin{lemma}\label{MT_lemma} Assume $k + r+1 <
\frac{n}{2}$. We have the bound
$
\operatorname{MT} \leq \exp\left(\frac{-kr}{n}\right).
$
\end{lemma}
\begin{proof}
Recall $a_1 = n-r-\frac{1}{2}$. We estimate
\begin{align*}
\operatorname{MT}&\leq \prod_{i=1}^k \left(\frac{n-r-i}{n+1-i}\right)\prod_{j=2}^m
\left(1 -
\frac{k}{n-r}\right)\prod_{j=1}^m\left(1 + \frac{k}{n-k-r}\right)
\\& \leq \frac{n-r}{n-k-r} \prod_{i=1}^k \frac{n-r-i}{n-i+1} \\&= \prod_{i=1}^k
\frac{n-r-i+1}{n-i+1} \leq \left(1-\frac{r}{n}\right)^k \leq
\exp\left(\frac{-kr}{n}\right).
\end{align*}
\end{proof}
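A numerical check of Lemma \ref{MT_lemma} on a sample partition satisfying the hypothesis $k + r + 1 < \frac{n}{2}$; the coordinates below, for $\lambda = (16,3,1)$, are chosen as illustration.

```python
from math import exp, prod

def MT(a, b, n, k):
    # the main term from part a. of the character ratio theorem
    lead = prod((a[0] - 0.5 - j) / (n - j) for j in range(k))
    p1 = prod((a[0] - aj - k) / (a[0] - aj) for aj in a[1:])
    p2 = prod((a[0] + bj) / (a[0] + bj - k) for bj in b)
    return lead * p1 * p2

# lambda = (16,3,1): n = 20, r = 4, coordinates (31/2, 3/2 | 5/2, 1/2)
a, b, n, k, r = [15.5, 1.5], [2.5, 0.5], 20, 2, 4
assert k + r + 1 < n / 2  # hypothesis of the lemma
assert MT(a, b, n, k) <= exp(-k * r / n)
```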
\begin{proof}[Proof of Proposition \ref{char_ratio_prop} when $6 \log n \leq k
\leq \delta n$]
When $r > 0.49 n$ we have $a_1 < 0.51 n$. Let $\theta = 0.67$, and note that
$e^{-\theta} > 0.511$. Thus $a_1 < e^{-\theta}n$ so that part
b of Theorem \ref{character_ratio_theorem} guarantees that there exists
$\delta =\epsilon(0.67)>0$, such that, for $n$ larger than a fixed constant, for
all $6 \log n \leq k \leq
\delta n$,
$\left|\frac{\chi^\lambda(C)}{f^\lambda}\right| \leq e^{\frac{-k}{2}}.$ Thus the
second condition of Lemma \ref{the_target_lemma} is satisfied.
So we may suppose that $r \leq 0.49 n$ and appeal to part a. of Theorem
\ref{character_ratio_theorem}. Suppose that $\delta$ is sufficiently small so
that $r + k + 1 <
\frac{n}{2}$. If $r < k$ then the error term of part a. of Theorem
\ref{character_ratio_theorem} is
zero so that the previous lemma implies
\[
\left|\frac{\chi^\lambda(C)}{f^\lambda}\right| = \operatorname{MT} \leq
\exp\left(\frac{-kr}{n}\right).
\]
Thus the first criterion of Lemma \ref{the_target_lemma} is satisfied with $c =
0$.
Assume now that $r \geq k$. Let $\delta$ now be sufficiently small
so that we may choose $\epsilon = \frac{1}{200}$ in part a., that is, $r + k +
1 \leq \left(\frac{1}{2} - \frac{1}{200}\right)n$, and also assume that
$\frac{(1 + \epsilon)(k + r+1)}{n-k} < 0.5 - \eta$ for some fixed $\eta
> 0$.
Then part a. gives $\left|\frac{\chi^\lambda(C)}{f^\lambda}\right| \leq \operatorname{MT} +
\operatorname{ET}$ with
\[
\operatorname{ET} \ll \exp \left(k \left[ \log \frac{(1 + \epsilon) (k+r+1)}{n-k} +
O_\epsilon\left(r^{\frac{-1}{2}}\right)\right]\right) \leq 2^{-k} \; (n
\text{ large}).
\]
Since $r < \frac{n}{2}$ we deduce that
\begin{align*}
\operatorname{MT} + \operatorname{ET} &\leq \exp\left(\frac{-kr}{n}\right) + \exp\left(-k \log 2\right)
\\&\leq \exp\left(\frac{-kr}{n}\right) \left(1 + \exp\left(-k\left(\log 2 -
\frac{1}{2}\right)\right)\right).
\end{align*}
Since $k \geq 6 \log n$ and $6(\log 2 - 0.5) > 1.15$, we deduce
\[
\log \left|\frac{\chi^\lambda(C)}{f^\lambda}\right| \leq \log(\operatorname{MT} + \operatorname{ET}) \leq
\frac{-kr}{n} + O(n^{-1.15}),
\]
so that the first criterion of Lemma \ref{the_target_lemma} is satisfied.
\end{proof}
When $k \leq 6\log n$ we make essential use of the asymptotic evaluation of the
character ratio proved in part c. of Theorem \ref{character_ratio_theorem}. The
next lemma shows that we may restrict attention to only the main term of that
evaluation.
\begin{lemma}[Small $k$ criterion lemma]\label{condition_lemma}
Let $2 \leq k \leq 6 \log n$. We have the bound
\begin{align*}
&\left|\frac{\chi^\lambda(C)}{f^\lambda}\right|\leq\\ &\quad \left(1 + O
\left(\frac{\log
n}{n^{\frac{1}{4}}}\right)\right)\left[\sum_{a_i > k
n^{\frac{1}{2}}} \frac{a_i^k}{n^k} + \sum_{b_i > k n^{\frac{1}{2}}}
\frac{b_i^k}{n^k}\right] +
O\left(\frac{e^{-k}(\log n)^4}{n^{\frac{1}{4}}}\right).
\end{align*}
In particular, if $r > n^{\frac{5}{6}}$ and $n$ is sufficiently large, and if
\begin{align*}
& \log \left(\sum_{a_i > k n^{\frac{1}{2}}} \frac{a_i^k}{n^k} + \sum_{b_i > k
n^{\frac{1}{2}}} \frac{b_i^k}{n^k}\right)\\ & \qquad\qquad\qquad\qquad\leq
\max\left(
\frac{-kr}{n} + \frac{kr \log r}{2n \log n} + \frac{ckr}{n \log n}, \frac{-k}{2}
+ \frac{ck}{\log n}\right)
\end{align*}
then after changing constants, the same estimate holds for
$\log \left| \frac{\chi^\lambda(C)}{f^\lambda}\right|$, so that the condition of
Lemma \ref{the_target_lemma} is satisfied.
\end{lemma}
\begin{proof}
To deduce the second statement from the first, note that
\[\log\left(1 + O\left(\frac{\log n}{n^{\frac{1}{4}}}\right)\right) =
O\left(\frac{\log n}{n^{\frac{1}{4}}}\right),\] which, for $r >
n^{\frac{5}{6}}$, may be absorbed into the RHS of the second statement by
increasing the value of $c$. Regarding
the error of $O\left(\frac{e^{-k}(\log n)^4}{n^{\frac{1}{4}}}\right)$, it is no
loss of generality to assume that \[\sum\frac{a_i^k}{n^k} +
\sum\frac{b_i^k}{n^k} \geq e^{\frac{-k}{2}},\] so that this error term has
relative size $1 + O\left(\frac{(\log
n)^4e^{-\frac{k}{2}}}{n^{\frac{1}{4}}}\right)$.
Again, the logarithm of this error may be absorbed by increasing $c$.
To prove the first statement, part c. of Theorem \ref{character_ratio_theorem}
gives
\begin{align*}
\left|\frac{\chi^\lambda(C)}{f^\lambda}\right| \leq& \sum_{a_i >
kn^{\frac{1}{2}}} \frac{a_i^k}{n^k}\left(1 +
O\left(\frac{kn^{\frac{3}{4}}}{a_i}\right)\right) + \sum_{b_i > k
n^{\frac{1}{2}}} \frac{b_i^k}{n^k}\left(1 + O
\left(\frac{kn^{\frac{3}{4}}}{b_i}\right)\right)\\ & \quad +
O\left(n^{\frac{1}{2}}(\log n)^2\left(\frac{k \log n}{\sqrt{n}}\right)^k\right).
\end{align*}
For $2 \leq k \leq 6 \log n$, the last error term is plainly
$O\left(\frac{e^{-k}(\log n)^4}{\sqrt{n}}\right)$. Split the sum over $a_i$
according to
$a_i \geq \frac{n}{e^2}$. Thus the sum over $a_i$ is bounded by
\begin{align*}
& \left(1 + O\left(\frac{\log n}{n^{\frac{1}{4}}}\right)\right) \sum_{a_i \geq
\frac{n}{e^2}} \frac{a_i^k}{n^k} + \sum_{kn^{\frac{1}{2}} < a_i <
\frac{n}{e^2}} \frac{ a_i^k}{n^k} \\ &\qquad \qquad\qquad\qquad+ O\left(\frac{
e^{-k}}{n^{\frac{1}{4}}}
\sum_{kn^{\frac{1}{2}} < a_i < \frac{n}{e^2}} \frac{k(e
a_i)^{k-1}}{n^{k-1}}\right).
\end{align*}
In the last sum, since $\frac{e a_i}{n} < e^{-1}$, the quantity $k\left(\frac{ e a_i}{n}\right)^{k-1}$ is decreasing in $k$, hence maximized at $k=2$.
Thus
\[
\sum_{kn^{\frac{1}{2}} < a_i < \frac{n}{e^2}} \frac{k(e
a_i)^{k-1}}{n^{k-1}} \leq 2e \sum_{a_i} \frac{a_i}{n} = O(1).
\]
Handling the sum over $b_i$ in the same way proves the lemma.
\end{proof}
Recall that we require $a_1 \geq b_1$ and that $a_1 = n-r-\frac{1}{2}$.
Thinking of $r$ as fixed, set $\delta := \frac{r}{n}$. Since $\sum a_i + \sum
b_i = n$, an upper bound for $\sum \frac{a_i^k}{n^k} + \sum \frac{b_i^k}{n^k}$
is given by the solution to the following optimization problem.
Let $x_1, x_2, x_3, ...$ be real variables and let $k \geq 2$.
\begin{align*}
\text{maximize}:& \qquad \sum x_i^k \\
\text{subject to}:& \qquad 0 \leq x_i \leq 1-\delta, \; \sum x_i = 1 .
\end{align*}
Let $\ell = \left \lfloor (1-\delta)^{-1} \right \rfloor.$ It is easily
checked by varying parameters that an optimal solution of this problem has $x_1
= ... = x_\ell = 1-\delta$, and $x_{\ell+1} = 1 - \ell (1-\delta)$, $x_i = 0$
for $i > \ell+1$, which yields the maximum
\[
\ell (1-\delta)^k + (1- \ell(1-\delta))^k \leq
(1-\delta)^{k-1},
\]
with $(1-\delta)^{k-1}$ being the solution of the continuous analogue of the
optimization problem
\begin{align*}
\text{maximize}:& \qquad \|f\|_{L^k(\mathbb{R})}^k\\
\text{subject to}:& \qquad \|f\|_{L^1(\mathbb{R})} =1, \; \|f\|_{L^\infty(\mathbb{R})} \leq 1-\delta,
\end{align*}
which is less constrained.
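As a quick sanity check of the discrete bound (this example is ours and is not needed for the argument): for $\delta = \frac{1}{3}$ and $k = 2$ we have $\ell = \left\lfloor \frac{3}{2} \right\rfloor = 1$, so the optimal solution is $x_1 = \frac{2}{3}$, $x_2 = \frac{1}{3}$, giving
\[
\left(\frac{2}{3}\right)^2 + \left(\frac{1}{3}\right)^2 = \frac{5}{9} \leq \frac{2}{3} = (1-\delta)^{k-1}.
\]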
\begin{lemma}\label{calculus_lemma}
Let $k \geq 2$ and let $r = n-a_1 -\frac{1}{2}$. Assume $a_1
\geq b_1$. There exists a $c_0 > 0$ such that if $r \leq c_0 n$ then
\[
\sum \frac{a_i^k}{n^k} + \sum \frac{b_i^k}{n^k} \leq e^{-\frac{kr}{n} +
\frac{kr^2}{n^2}}.
\]
For all $r \leq n$ we have
\[
\sum \frac{a_i^k}{n^k} + \sum \frac{b_i^k}{n^k} \leq e^{-\frac{kr}{2n}}.
\]
\end{lemma}
\begin{proof}
Set $\delta =\frac{r}{n} \leq c_0$. We may assume that $c_0 \leq
\frac{1}{2}$. Then the first bound for the maximum reduces to $\delta^k +
(1-\delta)^k$ and we require the statement
\[
\forall\; 0 \leq \delta \leq c_0, \quad (\delta^k + (1-\delta)^k)^{\frac{1}{k}}
\leq e^{-\delta + \delta^2}.
\]
Write the left hand side as $(1-\delta) \left(1 +
\left(\frac{\delta}{1-\delta}\right)^k\right)^{\frac{1}{k}}$. Then it is
readily checked with calculus that this is decreasing in $k$, hence maximized at
$k = 2$. Since
\[
(1-\delta)^2 + \delta^2 = 1 - 2\delta + 2 \delta^2, \qquad e^{-2\delta +2
\delta^2} = 1-2\delta +4\delta^2 + O(\delta^3)
\]
it follows that the left-hand side is bounded by the right-hand side for
$\delta$ sufficiently small.
For the second statement, we use the second bound of the maximum, so that we
need to show $(1-\delta)^{k-1} \leq e^{-\frac{k\delta}{2}}$. The worst case is
$k = 2$, which reduces to the true inequality $(1-\delta) \leq e^{-\delta}$ for
$0 \leq \delta \leq 1$.
\end{proof}
We can now prove the case $2 \leq k \leq 6\log n$ of Proposition
\ref{char_ratio_prop}.
\begin{proof}[Proof of Proposition \ref{char_ratio_prop} for $k \leq 6\log n$]
As usual, assume $a_1 \geq b_1$. As in the proof of the
Proposition in the case $k > 6 \log n$, we write $\lambda_1 = a_1 + \frac{1}{2}
= n-r$.
For $r \leq n^{\frac{5}{6}}$ we appeal to part a. of Theorem
\ref{character_ratio_theorem}, with the main term $\operatorname{MT}$ from Lemma
\ref{MT_lemma}.
This gives
\[
\left|\frac{\chi^\lambda(C)}{f^\lambda}\right| \leq e^{\frac{-kr}{n}} + O
\left(\exp\left(-k\left(\log \frac{n}{r} +O(1)\right)\right)\right).
\]
In this range, $\exp(-\frac{kr}{n}) = 1 - \frac{kr}{n} +
O\left(\left(\frac{kr}{n}\right)^2\right) = 1-o(1)$, so that, since $k \geq 2$,
the error term is
negligible, and the condition of the first criterion lemma, Lemma
\ref{the_target_lemma}, is satisfied.
Now suppose $r > n^{\frac{5}{6}}$.
Let $c_0$ be the constant from Lemma \ref{calculus_lemma}. Let $c_1$ be a
constant, such that, for $r \leq c_1 n$, $\frac{r}{\log r} \leq \frac{n}{2\log
n}$. Notice that this implies $\frac{r^2}{n^2} \leq \frac{r \log r}{2n \log n}$.
For $r \leq \min(c_0, c_1) n$, Lemma \ref{calculus_lemma} implies that
\[
\log\left(\sum \frac{a_i^k}{n^k} + \sum \frac{b_i^k}{n^k}\right) \leq
\frac{-kr}{n} + \frac{k
r^2}{n^2} \leq \frac{-kr}{n} + \frac{kr\log r}{2n\log n},
\]
so that this quantity is bounded by the first term in the maximum of the Small
$k$ criterion lemma, Lemma
\ref{condition_lemma}. Thus
we may assume that $r > \min(c_0, c_1)n$. In this case, using that $\log r =
\log n + O(1)$, the second bound of
Lemma \ref{calculus_lemma} implies that for a sufficiently large constant $c$
\[
\log\left(\sum \frac{a_i^k}{n^k} + \sum \frac{b_i^k}{n^k}\right) \leq
\frac{-kr}{2n} \leq
\frac{-kr}{n} + \frac{kr \log r}{2 n \log n} + \frac{ c kr}{n \log n}.
\]
Thus again this is bounded by the first term in the maximum of Lemma
\ref{condition_lemma}.
\end{proof}
\section{Introduction}
When studying large networks, it is important to understand what sorts of computations can be
performed in a distributed way on a given network.
In particular, it is natural to consider the
setting
where each node acts independently in parallel, and where the network is specified separately from the computation to be performed.
In order to study networks whose size is considerably larger than can be held in memory by the computational unit at any single node, it is often useful to model the network as an infinite graph. (For a discussion of modeling large networks via infinite graphs, see, e.g., \cite{MR2555927}.)
We define a notion of \emph{graph Turing machine} that is meant to capture this setting. This notion generalizes several other well-known models of computation, including ordinary Turing machines, cellular automata,
and parallel graph dynamical systems. Each of these models, in turn, occurs straightforwardly as a special case of a graph Turing machine, suggesting that graph Turing machines capture a natural concept of parallel computation on graphs.
A graph Turing machine (henceforth abbreviated as ``graph machine'') performs computation on a
vertex-labeled edge-colored directed multigraph satisfying certain properties. This notion of computation is
designed to capture the idea that in each timestep, every vertex performs a limited amount of computation (in parallel, independently of the other vertices), and can only distinguish vertices connected to it when they are connected by different colors of edges.
In this paper we study the functions that can be computed using graph machines, which we call \emph{graph computable} functions. As we will see, this parallel notion of computation will yield significantly greater computational strength than ordinary Turing machines, even when we impose constraints on the time and space resources allowed for the graph computation, or when we require the underlying graph to be efficiently computable.
We will see that the computational strength of graph machines is exactly that of
$\mbf{0}^{(\EM{\omega})}$, the Turing degree of true arithmetic (thereby providing another natural construction of this degree).
We also examine the relationship between various properties of the underlying graph (e.g., finiteness of degree) and the computational strength of the resulting graph machines.
\subsection{Main results and overview of the paper}
We begin by introducing the notions of colored graphs, graph machines, and graph computability (including resource-bounded variants) in \cref{Graph Computing Definitions}.
Our main results fall into two classes: bounds on the computational power of arbitrary computable graph machines, and bounds among machines with an underlying graph every vertex of which has finite degree (which we say is of \emph{finite degree}).
Theorem~\ref{Total graph computable functions are computable from 0^w} states that every graph computable function is Turing reducible to $\mbf{0}^{(\EM{\omega})}$.
In the other direction, we show in \cref{omega-jump} that this bound is attained by a single graph Turing machine.
Sitting below $\mbf{0}^{(\EM{\omega})}$ are the arithmetical Turing degrees, i.e., those less than $\mbf{0}^{(n)}$
for some $n \in {\EM{{\mbb{N}}}}$, where $\mbf{0}^{(n)}$ denotes the $n$-fold iterate of the halting problem.
We show in Corollary~\ref{Pointwise computable Turing degrees} that every arithmetical Turing degree contains a
function that is
graph Turing computable, moreover in constant time.
(It remains open whether every degree below $\mbf{0}^{(\EM{\omega})}$ can be achieved.)
We next
show in Corollary~\ref{Finite degree implies computable in 0'} that
functions determined by
graph machines with underlying graph of finite degree
are reducible to
the halting problem, $\mbf{0}'$.
Further, we show in Corollary~\ref{Constant degree implies computable} that if we restrict to graph
machines
where every vertex has the same (finite) degree, then the resulting graph
computable function is computable by an ordinary Turing machine.
We also show in Theorem~\ref{Lower bound on finite degree main result} that every Turing degree below $\mbf{0}'$ is the degree of some linear-space graph computable function with underlying graph of finite degree.
When the Turing degree is
$k$-computably enumerable (for some $k \in {\EM{{\mbb{N}}}}$) then we may further take the graph machine to run in constant time.
In \cref{Other graph-theoretic properties}, we discuss
two properties of the graph machine or its
underlying graph that do not restrict which functions are graph computable.
We show, in \cref{eff-comp}, that we may take the graph machine to be itself efficiently computable, without changing the degree of functions thereby computed.
We also show, in \cref{symmetric-ok},
that the requirement that the graph be directed adds no generality: any function which can be computed by a graph machine can be computed by a graph machine whose underlying graph is symmetric.
In
\cref{representations-sec},
we examine how several other models of computation
relate to graph machines.
A graph Turing machine can be thought of as a generalization of an ordinary Turing machine, where the one-dimensional read-write tape is replaced by an arbitrary graph. In \cref{otm-subsec}, we describe how to simulate an ordinary Turing machine via a graph Turing machine.
One of the fundamental difficulties when generalizing ordinary Turing machines to graph Turing machines is to figure out how to determine the location of the head. We have taken an approach whereby there is no unique head, and hence each node processes its own data. The approach taken by \cite{MR3163227} is to have a unique head, but to allow its location to be nondeterministic, in that it is allowed to move to any vertex connected to the current location that is displaying an appropriate symbol. (Note that they call their different notion a ``graph Turing machine'' as well.)
Our graph machines can also be viewed directly as dynamical systems, or as a sort of network computation. Cellular automata can be simulated by graph machines, as we show in
\cref{cellular-subsec}, and parallel graph dynamical systems are essentially equivalent to the finite case of graph machines (see \cref{gds-subsec}).
Indeed, parallel graph dynamical systems include the above case of cellular automata, and also boolean networks and other notions of network computation, as described in \cite[\S2.2]{MR3332130}.
We conclude with \cref{Possible-extensions} on possible extensions of graph machines, and
\cref{Open Questions} on open questions.
\subsection{Notation}
\label{Notation}
When $f\:A \to B$ is a partial function and $a \in A$, we let $f(a) {\,\!\uparrow}$ signify that $f$ is not defined at $a$, and $f(a) {\,\!\downarrow}$ signify that $f$ is defined at $a$. Suppose $f, g\:A \to B$ are partial functions. For $a \in A$ we say that $f(a) \cong g(a)$ if either ($f(a) {\,\!\uparrow}$ and $g(a) {\,\!\uparrow}$) or ($f(a){\,\!\downarrow}$, $g(a) {\,\!\downarrow}$, and $f(a) = g(a)$). We say that $f \cong g$ when $(\forall a \in A)\ f(a) \cong g(a)$. If $f\:A \to \prod_{i \leq n}B_i$ and $k \leq n\in{\EM{{\mbb{N}}}}$, then we let $f_{[k]}\:A \to B_k$ be the composition of $f$ with the projection map onto the $k$'th coordinate.
Fix an enumeration of computable partial functions, and for $e \in {\EM{{\mbb{N}}}}$, let $\{e\}$ be the $e$'th such function in this list.
If $X$ and $Y$ are sets with $0 \in Y$, let $Y^{<X}= \{\eta\:X \to Y \,:\, |\{a\,:\, \eta(a) \neq 0\}| < \EM{\omega}\}$, i.e., the collection of functions from $X$ to $Y$ for which all but finitely many inputs yield $0$.
(Note that by this notation we do \emph{not} mean partial functions from $X$ to $Y$ supported on fewer than $|X|$-many elements.)
For a set $X$, let $\ensuremath{\mathfrak{P}}_{<\EM{\omega}}(X)$ denote the collection of finite subsets of $X$. Note that the map which takes a subset of $X$ to its characteristic function is a bijection between $\ensuremath{\mathfrak{P}}_{<\EM{\omega}}(X)$ and $\{0,1\}^{<X}$.
When working with computable graphs,
the underlying set of the graph will sometimes be a finite coproduct of finite sets and ${\EM{{\mbb{N}}}}^k$ for $k\in{\EM{{\mbb{N}}}}$. The standard notion of computability for
${\EM{{\mbb{N}}}}$ transfers naturally to such settings, making implicit use of the
computable bijections between ${\EM{{\mbb{N}}}}^k$ and ${\EM{{\mbb{N}}}}$, and between $\coprod_{i \leq k} {\EM{{\mbb{N}}}}$ and ${\EM{{\mbb{N}}}}$, for $k \in {\EM{{\mbb{N}}}}$.
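For concreteness (any standard choice works, and nothing below depends on the particular one), one may take the Cantor pairing function
\[
\langle x, y \rangle = \frac{(x+y)(x+y+1)}{2} + y,
\]
a computable bijection ${\EM{{\mbb{N}}}}^2 \to {\EM{{\mbb{N}}}}$ with computable inverse; iterating it yields computable bijections ${\EM{{\mbb{N}}}}^k \to {\EM{{\mbb{N}}}}$ for each $k \in {\EM{{\mbb{N}}}}$.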
We will sometimes say \emph{computable set} to refer to some computable subset (with respect to these bijections) of such a finite coproduct $X$, and \emph{computable function} to refer to a computable function
having domain and codomain of that form or of the form $F^{<X}$ for some finite set $F$.
For sets $X, Y \subseteq {\EM{{\mbb{N}}}}$,
we write $X \leq_{\mathrm{T}} Y$ when $X$ is Turing reducible to $Y$ (and similarly for functions and other computably presented countable objects).
In several places we make use of (ordinary) Turing machines, described in terms of their state space, transition function, and alphabet of symbols.
For more details on results and notation in computability theory, see \cite{MR882921}.
\section{Graph computing}
\label{Graph Computing Definitions}
In this section we will make precise what we mean by graph Turing machines as well as graph computable functions.
\begin{definition}
\label{Colored graph}
A \defn{colored graph} is a tuple ${\EM{\mc{G}}}$
of the form $(G, (L, V), (C, E), \gamma)$
where
\begin{itemize}
\item $G$ is a set, called the \defn{underlying set} or the \defn{set of vertices},
\item $L$ is a set called the \defn{set of labels} and $V\: G \to L$ is the \defn{labeling function},
\item $C$ is a set called the \defn{set of colors} and $E\: G \times G \to \ensuremath{\mathfrak{P}}_{< \EM{\omega}}(C)$ is called the \defn{edge coloring}, and
\item $\gamma\: L \to \ensuremath{\mathfrak{P}}_{<\EM{\omega}}(C)$ is called the \defn{allowable colors function} and satisfies
\[
(\forall v, w \in G)\ E(v, w) \subseteq \gamma(V(v)) \cap \gamma(V(w)).
\]
\end{itemize}
A \defn{computable colored graph} is a colored graph along with indices witnessing that $G$, $L$, and $C$ are computable sets and that $V$,$E$, and $\gamma$ are computable functions.
\end{definition}
The intuition is that a colored graph is an edge-colored directed multigraph where each vertex is assigned a label, and such that among the edges between any two vertices, there is at most one edge of each color, and only finitely many colors appear (which must be among those given by $\gamma$ applied to the label). Eventually, we will allow each vertex to do some fixed finite amount of computation, and we will want vertices with the same label to perform the same computations.
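As a toy illustration (ours, not used later): let $G = \{u, v\}$, let $L = \{\ell\}$ with $V(u) = V(v) = \ell$, let $C = \{c\}$ with $\gamma(\ell) = \{c\}$, and let $E(u, v) = \{c\}$ while $E$ takes the value $\emptyset$ on all other pairs. The allowable colors condition holds, since $E(u, v) = \{c\} \subseteq \gamma(V(u)) \cap \gamma(V(v))$, and so this is a (finitary) colored graph with a single edge of color $c$ from $u$ to $v$.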
For the rest of the paper by a \emph{graph} we will always mean a colored graph, and will generally write the symbol ${\EM{\mc{G}}}$ (possibly decorated) to refer to graphs.
\begin{definition}
A graph ${\EM{\mc{G}}}$ is \defn{finitary} when its set of labels is finite.
\end{definition}
For a finitary graph, without loss of generality one may further assume that the set of colors is finite and that every color is allowed for every label.
Notice that graphs, as we have defined them, are allowed to have infinitely many labels and edge colors, so long as the edges connecting to any particular vertex are assigned a finite number of colors, depending only on the label. However, there is little harm in the reader assuming that the graph is finitary.
The main thing lost in such an assumption is the strength of various results providing upper bounds on the functions that can be computed using graph machines, as we take care to achieve our lower bounds (\cref{Pointwise computable Turing degrees},
\cref{omega-jump}, and
\cref{Lower bound on finite degree main result})
using finitary graphs.
Let ${\EM{\mc{G}}}$ be a graph with underlying set $G$, and suppose $A \subseteq G$. Define ${\EM{\mc{G}}}|_A$ to be the graph with underlying set $A$ having the same set of labels and set of colors as ${\EM{\mc{G}}}$, such that the labeling function, edge coloring function, and allowable colors function of ${\EM{\mc{G}}}|_A$ are the respective restrictions to $A$.
\begin{definition}
A \defn{graph Turing machine}, or simply \defn{graph machine}, is
a tuple ${\EM{\mathfrak{M}}} = ({\EM{\mc{G}}}, (\EM{\mc{L}}, \{0,1\}), (S, s, \alpha), T)$
where
\begin{itemize}
\item ${\EM{\mc{G}}} = (G, (L, V), (C, E), \gamma)$ is a graph, called the \defn{underlying graph}. We will speak of the components of the underlying graph as if they were components of the graph machine itself.
For example, we will call $G$ the \emph{underlying set} of ${\EM{\mathfrak{M}}}$ as well as of ${\EM{\mc{G}}}$.
\item $\EM{\mc{L}}$ is a finite set, called the \defn{alphabet}, having distinguished symbols $0$ and $1$.
\item
$S$ is a countable set called the \defn{collection of states}.
\item
$\alpha\:L \to \ensuremath{\mathfrak{P}}_{<\EM{\omega}}(S)$ is the \defn{state assignment function}.
\item
$s$ is a distinguished state, called the \defn{initial state}, such that $s \in \alpha(\ell)$ for all
$\ell \in L$.
\item $T\: L \times \ensuremath{\mathfrak{P}}_{<\EM{\omega}}(C) \times \EM{\mc{L}} \times S \to \ensuremath{\mathfrak{P}}_{<\EM{\omega}}(C) \times \EM{\mc{L}} \times S$ is a function, called the \defn{lookup table}, such that for each $\ell \in L$ and each $z \in \EM{\mc{L}}$,
\begin{itemize}
\item if $c \not \subseteq \gamma(\ell)$ or $t \not \in \alpha(\ell)$, then $T(\ell, c, z, t) = (c, z, t)$, i.e., whenever the inputs to the lookup table are not compatible with the structure of the graph machine, the machine acts trivially, and
\item if $t \in \alpha(\ell)$ and $c \subseteq \gamma(\ell)$, then $T_{[1]}(\ell, c, z, t) \subseteq \gamma(\ell)$
and $T_{[3]}(\ell, c, z, t) \in \alpha(\ell)$, i.e.,
whenever the inputs are compatible with the structure, so are the outputs.
\end{itemize}
Further, for all $\ell \in L$, we have $T(\ell, \emptyset, 0, s) = (\emptyset, 0, s)$, i.e., if any vertex is in the initial state, currently displays $0$, and has received no pulses, then that vertex doesn't do anything in the next step. This lookup table can be thought of as specifying a \emph{transition function}.
\end{itemize}
A \defn{${\EM{\mc{G}}}$-Turing machine} (or simply a \defn{${\EM{\mc{G}}}$-machine})
is a graph machine with underlying graph ${\EM{\mc{G}}}$.
A \defn{computable graph machine} is a graph machine along with indices witnessing that ${\EM{\mc{G}}}$ is a computable graph, $S$ is a computable set, and $\alpha$ and $T$ are computable functions.
\end{definition}
If $A$ is a subset of the underlying set of ${\EM{\mathfrak{M}}}$, then ${\EM{\mathfrak{M}}}|_A$ is the graph machine
with underlying graph ${\EM{\mc{G}}}|_A$ having the same alphabet, states, and lookup table as ${\EM{\mathfrak{M}}}$.
The intuition is that a graph machine should consist of a graph where at each timestep, every vertex is assigned a state and an element of the alphabet, which it displays. To figure out how these assignments are updated over time, we apply the transition function determined by the lookup table which tells us, given the label of a vertex, its current state, and its currently displayed symbol, along with the colored \emph{pulses} the vertex has most recently received, what state to set the vertex to, what symbol to display next, and what colored pulses to send to its neighbors.
For the rest of the paper, ${\EM{\mathfrak{M}}} = ({\EM{\mc{G}}}, (\EM{\mc{L}}, \{0,1\}), (S, s, \alpha), T)$ will denote a \emph{computable} graph machine whose underlying (computable) graph is ${\EM{\mc{G}}} = (G, (L, V), (C, E), \gamma)$.
\begin{definition}
A \defn{valid configuration} of ${\EM{\mathfrak{M}}}$ is a function $f\: G \to \ensuremath{\mathfrak{P}}_{<\EM{\omega}}(C) \times \EM{\mc{L}} \times S$ such that for all $v \in G$, we have
\begin{itemize}
\item $f_{[1]}(v) \subseteq \gamma(V(v))$ and
\item $f_{[3]}(v) \in \alpha(V(v))$.
\end{itemize}
\end{definition}
In other words, a valid configuration is an assignment of pulse colors, displayed symbols, and states
that is consistent with the underlying structure of the graph machine, in the sense that the pulses received and the underlying state at each vertex are consistent with what its allowable colors function and state assignment function permit.
\begin{definition}
A \defn{starting configuration} of ${\EM{\mathfrak{M}}}$ is a function $f\: G \to \ensuremath{\mathfrak{P}}_{<\EM{\omega}}(C) \times \EM{\mc{L}} \times S$ such that
\begin{itemize}
\item $(\forall v \in G)\ f_{[1]}(v) = \emptyset$,
\item $f_{[2]} \in \EM{\mc{L}}^{< G}$, and
\item $(\forall v \in G)\ f_{[3]}(v) = s$.
\end{itemize}
We say that $f$ is \defn{supported} on a finite set $A\subseteq G$ if $f_{[2]}(v) = 0$ for all $v\in G\setminus A$.
Note that any starting configuration of ${\EM{\mathfrak{M}}}$ is always a valid configuration of ${\EM{\mathfrak{M}}}$.
\end{definition}
In other words, a starting configuration
is an assignment
in which
all vertices are in the initial state $s$, no pulses have been sent, and only finitely many vertices display a non-zero symbol.
Note that if $A$ is a subset of the underlying set of ${\EM{\mathfrak{M}}}$ and $f$ is a valid configuration for ${\EM{\mathfrak{M}}}$, then $f|_A$ is also a valid configuration for ${\EM{\mathfrak{M}}}|_A$. Similarly, if $f$ is a starting configuration for ${\EM{\mathfrak{M}}}$, then $f|_A$ is a starting configuration for ${\EM{\mathfrak{M}}}|_A$.
\begin{definition}
\label{Run on a valid configuration}
Given a valid configuration $f$ for ${\EM{\mathfrak{M}}}$, the \defn{run} of ${\EM{\mathfrak{M}}}$ on $f$ is the function $\<{\EM{\mathfrak{M}}}, f\>\: G \times {\EM{{\mbb{N}}}} \to \ensuremath{\mathfrak{P}}_{<\EM{\omega}}(C) \times \EM{\mc{L}} \times S$ satisfying, for all $v\in G$,
\begin{itemize}
\item $\<{\EM{\mathfrak{M}}}, f\>(v, 0) = f(v)$ and
\item
$\<{\EM{\mathfrak{M}}}, f\>(v, n+1) = T(V(v), X, z, t)$
for all $n \in {\EM{{\mbb{N}}}}$,
where
\begin{itemize}
\item $z = \<{\EM{\mathfrak{M}}}, f\>_{[2]}(v, n)$ and $t = \<{\EM{\mathfrak{M}}}, f\>_{[3]}(v, n)$, and
\item $X = \bigcup_{w \in G}
\bigl( E(w, v) \cap \<{\EM{\mathfrak{M}}}, f\>_{[1]}(w, n)\bigr)$.
\end{itemize}
\end{itemize}
We say that a run \defn{halts at stage $n$} if
$\<{\EM{\mathfrak{M}}}, f\>(v, n) = \<{\EM{\mathfrak{M}}}, f\>(v, n+1)$ for all $v\in G$.
\end{definition}
A run of a graph machine is the function which takes a valid configuration for the graph machine and a natural number $n$, and returns the result of letting the graph machine process the valid configuration for $n$-many timesteps.
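For a toy example of a run (ours, for illustration only): let ${\EM{\mc{G}}}$ have a single vertex $v$ with label $\ell$ and no colors, and let $\EM{\mc{L}} = \{0, 1\}$ and $S = \{s, h\}$ with $\alpha(\ell) = \{s, h\}$. Define $T(\ell, \emptyset, 1, s) = (\emptyset, 0, h)$ and let $T$ act as the identity on all other inputs. For the starting configuration $f$ with $f(v) = (\emptyset, 1, s)$, we get $\<{\EM{\mathfrak{M}}}, f\>(v, 0) = (\emptyset, 1, s)$ and $\<{\EM{\mathfrak{M}}}, f\>(v, n) = (\emptyset, 0, h)$ for all $n \geq 1$, so the run halts at stage $1$, having erased the displayed $1$.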
The following lemma is immediate from Definition~\ref{Run on a valid configuration}.
\begin{lemma}
\label{run on a valid configuration lemma}
Suppose $f$ is a valid configuration for ${\EM{\mathfrak{M}}}$.
For $n\in{\EM{{\mbb{N}}}}$, define
$f_n {\EM{\ :=\ }} \<{\EM{\mathfrak{M}}}, f\>(\,\cdot\,, n)$.
Then the following hold.
\begin{itemize}
\item
For all $n \in {\EM{{\mbb{N}}}}$, the function
$f_n$ is a valid configuration for ${\EM{\mathfrak{M}}}$.
\item For all $n, m \in {\EM{{\mbb{N}}}}$ and $v \in G$,
\[
f_{n+m}(v) = \<{\EM{\mathfrak{M}}}, f_n\>(v, m) = \<{\EM{\mathfrak{M}}}, f\>(v, n+ m).
\]
\end{itemize}
\end{lemma}
We now describe how a graph machine defines a function.
\begin{definition}
For $x \in \EM{\mc{L}}^{<G}$, let $\widehat{x}$ be the valid configuration such that
$\widehat{x}(v) = (\emptyset, x(v), s)$
for all $v \in G$.
Define
\[
\{{\EM{\mathfrak{M}}}\}\:\EM{\mc{L}}^{<G} \to \EM{\mc{L}}^{G}
\]
to be the partial function such that
\begin{itemize}
\item $\{{\EM{\mathfrak{M}}}\}(x){\,\!\uparrow}$, i.e., is undefined, if the run $\<{\EM{\mathfrak{M}}}, \widehat{x}\>$ does not halt, and
\item $\{{\EM{\mathfrak{M}}}\}(x) = y$ if $\<{\EM{\mathfrak{M}}}, \widehat{x}\>$ halts at stage $n$ and
for all $v \in G$,
\[
y(v) = \<{\EM{\mathfrak{M}}}, \widehat{x}\>_{[2]}(v, n).
\]
\end{itemize}
Note that $\{{\EM{\mathfrak{M}}}\}(x)$
is well defined as $\widehat{x}$ is always a starting configuration for ${\EM{\mathfrak{M}}}$.
\end{definition}
While in general, the output of $\{{\EM{\mathfrak{M}}}\}(x)$ might have infinitely many non-zero elements, for purposes of considering which Turing degrees are graph computable, we will mainly be interested in the case where $\{{\EM{\mathfrak{M}}}\}(x) \in \EM{\mc{L}}^{<G}$, i.e., when all but finitely many elements of $G$ take the value $0$.
When defining a function using a graph machine, it will often be convenient to have extra vertices whose labels don't affect the function being defined, but whose presence allows for a simpler definition.
These extra vertices can be thought of as ``scratch paper'' and play the role of extra tapes (beyond the main input/output tape) in a multi-tape Turing machine. We now make this precise.
\begin{definition}
\label{(G X)-computable}
Let $X$ be an infinite computable subset of $G$.
A function $\zeta\: \EM{\mc{L}}^{<X} \to \EM{\mc{L}}^{X}$ is \defn{$\<{\EM{\mc{G}}}, X\>$-computable} via ${\EM{\mathfrak{M}}}$ if
\begin{itemize}
\item[(a)] $\{{\EM{\mathfrak{M}}}\}$ is total,
\item[(b)] for $x, y \in \EM{\mc{L}}^{<G}$, if
$x|_{X} = y|_{X}$
then $\{{\EM{\mathfrak{M}}}\}(x) = \{{\EM{\mathfrak{M}}}\}(y)$,
\item[(c)] for all $x \in \EM{\mc{L}}^{<G}$, for all $v \in G \setminus X$, we have $\{{\EM{\mathfrak{M}}}\}(x)(v) = 0$, i.e., when $\{{\EM{\mathfrak{M}}}\}(x)$ halts, $v$ displays $0$, and
\item[(d)] for all $x \in \EM{\mc{L}}^{<G}$, we have $\{{\EM{\mathfrak{M}}}\}(x)|_{X} = \zeta(
x|_{X})$.
\end{itemize}
A function is \defn{${\EM{\mc{G}}}$-computable via ${\EM{\mathfrak{M}}}$} if it is
$\<{\EM{\mc{G}}}, X\>$-computable via ${\EM{\mathfrak{M}}}$
for some infinite computable $X\subseteq G$.
A function is \defn{${\EM{\mc{G}}}$-computable} if it is
${\EM{\mc{G}}}$-computable via ${\EM{\mathfrak{M}}}^\circ$ for some computable ${\EM{\mc{G}}}$-machine ${\EM{\mathfrak{M}}}^\circ$.
A function is
\defn{graph Turing computable}, or simply \defn{graph computable}, when it is ${\EM{\mc{G}}}^\circ$-computable for some computable graph ${\EM{\mc{G}}}^\circ$.
\end{definition}
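As a degenerate example of these definitions (ours, for illustration): take $G = X = {\EM{{\mbb{N}}}}$ with a single label, no edges, and a lookup table acting as the identity, $T(\ell, c, z, t) = (c, z, t)$. Every run then halts at stage $0$, so $\{{\EM{\mathfrak{M}}}\}(x) = x$ for all $x \in \EM{\mc{L}}^{<G}$, conditions (a)--(d) hold immediately, and the inclusion $\EM{\mc{L}}^{<{\EM{{\mbb{N}}}}} \to \EM{\mc{L}}^{{\EM{{\mbb{N}}}}}$ is $\<{\EM{\mc{G}}}, {\EM{{\mbb{N}}}}\>$-computable via this machine.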
The following lemma captures the sense in which functions that are
$\<{\EM{\mc{G}}}, X\>$-computable via ${\EM{\mathfrak{M}}}$
are determined by their restrictions to $X$.
\begin{lemma}
Let $X$ be an infinite computable subset of ${\EM{\mc{G}}}$.
There is at most one function $\zeta\: \EM{\mc{L}}^{<X} \to \EM{\mc{L}}^{X}$ that is $\<{\EM{\mc{G}}}, X\>$-computable via ${\EM{\mathfrak{M}}}$, and it must be Turing equivalent to $\{{\EM{\mathfrak{M}}}\}$.
\end{lemma}
\begin{proof}
Suppose there is some $\zeta\: \EM{\mc{L}}^{<X} \to \EM{\mc{L}}^{X}$ that is $\<{\EM{\mc{G}}}, X\>$-computable via ${\EM{\mathfrak{M}}}$.
Then by Definition~\ref{(G X)-computable}(a), $\{{\EM{\mathfrak{M}}}\}$ is total. By Definition~\ref{(G X)-computable}(b), for any $x \in \EM{\mc{L}}^{<G}$, the value of $\{{\EM{\mathfrak{M}}}\}(x)$ only depends on $x|_{X}$, and so $\{{\EM{\mathfrak{M}}}\}$ induces a function $\delta\: \EM{\mc{L}}^{<X} \to \EM{\mc{L}}^{G}$.
By Definition~\ref{(G X)-computable}(d),
the map $\EM{\mc{L}}^{<X} \to \EM{\mc{L}}^{X}$ given by $a\mapsto \delta(a)|_{X}$ is the same as $\zeta$. Therefore there is at most one function $\EM{\mc{L}}^{<X} \to \EM{\mc{L}}^{X}$ that is $\<{\EM{\mc{G}}}, X\>$-computable via ${\EM{\mathfrak{M}}}$.
By Definition~\ref{(G X)-computable}(c), $\{{\EM{\mathfrak{M}}}\}(x)|_{G \setminus X}$ is the constant $0$ function for all $x$.
Therefore $\{{\EM{\mathfrak{M}}}\}$ is Turing equivalent to $\zeta$.
\end{proof}
\subsection{Resource-bounded graph computation}
Just as one may consider
ordinary computability restricted by bounds on the time and space needed for the computation, one may devise and study complexity classes for graph computability.
However, as we will see in
\S\S\ref{infinite-degree-lower-bounds} and \ref{finite-degree-lower-bounds}, unlike with ordinary computability, a great deal can be done with merely \emph{constant time},
and
in the finitary case, our key constructions can be carried out by machines that run in \emph{linear space} --- both of which we define here.
Throughout this subsection, $Q$ will be a collection of functions from ${\EM{{\mbb{N}}}}$ to ${\EM{{\mbb{N}}}}$.
\begin{definition}
A function $\zeta$
is \defn{$Q$-time computable via ${\EM{\mathfrak{M}}}$} if
\begin{itemize}
\item[(a)] $\zeta$ is ${\EM{\mc{G}}}$-computable via ${\EM{\mathfrak{M}}}$, and
\item[(b)]
there is a $q\in Q$ such that
for all finite connected subgraphs $A\subseteq G$ and all starting configurations $f$ of ${\EM{\mathfrak{M}}}$
supported on $A$,
\[
(\forall v\in G) \ \ \<{\EM{\mathfrak{M}}}, f\>\bigl(v, q(|A|)\bigr) = \<{\EM{\mathfrak{M}}}, f\>\bigl(v, q(|A|)+1\bigr)
\]
i.e., ${\EM{\mathfrak{M}}}$ halts in at most $q(|A|)$-many timesteps.
\end{itemize}
A function is \defn{$Q$-time graph computable} if it is $Q$-time computable
via ${\EM{\mathfrak{M}}}^\circ$ for
some
graph machine ${\EM{\mathfrak{M}}}^\circ$.
\end{definition}
In this paper, we consider mainly time bounds where $Q$ is the collection of constant functions $\{\lambda x.n \,:\, n\in{\EM{{\mbb{N}}}}\}$, in which case we speak of \emph{constant-time graph computability}.
When bounding the space used by a computation, we will consider graph computations that depend only on a ``small'' neighborhood of the input.
\begin{definition}
\label{nneigh}
For each $A \subseteq G$ and $n \in {\EM{{\mbb{N}}}}$,
the \defn{$n$-neighborhood} of $A$, written
$\EM{\mathbf{N}}_n(A)$, is
defined by induction as follows. \newline
\ul{Case $1$}: The $1$-neighborhood of $A$ is
\[
\EM{\mathbf{N}}_1(A) {\EM{\ :=\ }} A \cup \{v \in G\,:\, (\exists a \in A)\ E(v,a) \cup E(a,v) \neq \emptyset\}.
\]
\ul{Case $k+1$}: The $(k+1)$-neighborhood of $A$ is
\[
\EM{\mathbf{N}}_{k+1}(A) {\EM{\ :=\ }} \EM{\mathbf{N}}_1(\EM{\mathbf{N}}_k(A)).
\]
\end{definition}
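In programmatic terms, the $n$-neighborhood is simply $n$ iterations of the $1$-neighborhood map. The following Python sketch is an illustration only (the edge representation as a set of ordered pairs is an assumption; edge colors are irrelevant for neighborhoods, so only adjacency is tracked):

```python
def n_neighborhood(edges, A, n):
    """Compute the n-neighborhood N_n(A) by iterating the 1-neighborhood map.

    `edges` is a set of ordered pairs (u, v) meaning E(u, v) is non-empty.
    """
    N = set(A)
    for _ in range(n):
        # 1-neighborhood step: add every vertex sharing an edge, in either
        # direction, with the current set (frozen so we expand by one hop).
        cur = set(N)
        N |= {u for (u, v) in edges if v in cur}
        N |= {v for (u, v) in edges if u in cur}
    return N
```

On the directed path $0 \to 1 \to 2 \to 3$, for instance, the $1$-neighborhood of $\{1\}$ is $\{0,1,2\}$, since both edge directions count.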
\begin{definition}
A graph machine ${\EM{\mathfrak{M}}}$ \defn{runs in $Q$-space} if
$\{{\EM{\mathfrak{M}}}\}$ is total and
there are $p, q\in Q$ such that for any finite connected subgraph $A\subseteq G$ and any starting configuration $f$ of ${\EM{\mathfrak{M}}}$ that is supported on $A$, we have $|\EM{\mathbf{N}}_{p(n)}(A)| \leq q(n)$ and
\[
\<{\EM{\mathfrak{M}}}, f\>(v, \,\cdot\,) =
\<{\EM{\mathfrak{M}}}|_{\EM{\mathbf{N}}_{p(n)}(A)}, f|_{\EM{\mathbf{N}}_{p(n)}(A)}\> (v, \,\cdot\,)\]
for all $v\in A$,
where $n {\EM{\ :=\ }} |A|$.
A function $\zeta$ is
\defn{$Q$-space graph computable} via ${\EM{\mathfrak{M}}}$ if $\zeta$ is
${\EM{\mc{G}}}$-computable
via ${\EM{\mathfrak{M}}}$ where ${\EM{\mathfrak{M}}}$ runs in $Q$-space.
We say that $\zeta$ is \defn{$Q$-space graph computable} if it is $Q$-space graph computable via ${\EM{\mathfrak{M}}}^\circ$ for some graph machine ${\EM{\mathfrak{M}}}^\circ$.
\end{definition}
In this paper, the main space bound we consider is where $Q$ is the collection of linear polynomials, yielding \emph{linear-space graph computability}.
This definition generalizes the standard notion of linear-space computation, and reduces to it in the case of a graph machine that straightforwardly encodes an ordinary Turing machine (for details of the encoding see \cref{otm-subsec}).
For such encodings, the only starting configurations yielding nontrivial computations are those supported on a neighborhood containing the starting location of the Turing machine read/write head.
In the case of arbitrary computable graph machines, computations can meaningfully occur from starting configurations supported on arbitrary connected subgraphs. This is why the bound on the size of the neighborhoods required to complete the computation is required to depend only on the size of a connected subgraph the starting configuration is supported on.
One way to view space-bounded graph computation is as computing functions that need
only a finite amount of a graph to perform their computation, where this amount depends only on the ``size'' of the input (as measured by the size of a connected support of the starting configuration corresponding to the input).
This perspective is especially natural if one thinks of the infinite graph as a finite graph that is larger than is needed for all the desired inputs.
\section{Arbitrary graphs}
\label{arbitrary-sec}
In this section, we consider the possible Turing degrees of total graph computable functions. We begin with
a bound for finite graphs.
\begin{lemma}
\label{lemma-finite-graph}
Suppose $G$ is finite.
Let $\mbf{h}$ be the map which takes a valid configuration $f$ for ${\EM{\mathfrak{M}}}$ and returns $n \in {\EM{{\mbb{N}}}}$ if $\<{\EM{\mathfrak{M}}}, f\>$ halts at stage $n$ (and not earlier), and returns $\infty$ if $\<{\EM{\mathfrak{M}}}, f\>$ doesn't halt. Then
\begin{itemize}
\item $\<{\EM{\mathfrak{M}}}, f\>$ is computable and
\item $\mbf{h}$ is computable.
\end{itemize}
\end{lemma}
\begin{proof}
Because $G$ is finite,
$\<{\EM{\mathfrak{M}}}, f\>$ is computable.
Further, there are only finitely many valid configurations of ${\EM{\mathfrak{M}}}$.
Hence there must be some $n, k\in{\EM{{\mbb{N}}}}$ such that for all vertices $v$ in the underlying set of ${\EM{\mathfrak{M}}}$, we have
$\<{\EM{\mathfrak{M}}}, f\>(v, n) = \<{\EM{\mathfrak{M}}}, f\>(v, n+k)$, and the set of such pairs $(n,k)$ is computable.
Note that
$\<{\EM{\mathfrak{M}}}, f\>$ halts
if and only if
there is some $n$, less than or equal to the number of valid configurations for ${\EM{\mathfrak{M}}}$, for which this holds for $(n,1)$.
Hence
$\mbf{h}$, which searches for the least such $n$, is computable.
\end{proof}
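The pigeonhole argument in this proof is directly implementable. The sketch below is illustrative (the encoding of configurations and of the global step function is an assumption): it computes the map $\mbf{h}$ for a finite machine by iterating the step function until either a fixed point appears (the run halts) or more steps elapse than there are valid configurations (the run has entered a cycle and never halts).

```python
def halting_stage(step, f0, num_configs):
    """Return the least n at which the run from f0 halts (i.e. the
    configuration becomes a fixed point of `step`), or None if it never halts.

    `num_configs` bounds the number of valid configurations, so a run that
    has not reached a fixed point within that many steps must have entered
    a nontrivial cycle.
    """
    f = f0
    for n in range(num_configs + 1):
        if step(f) == f:   # the configuration no longer changes: halted
            return n
        f = step(f)
    return None            # pigeonhole: the run cycles without halting
```

For example, a counter that saturates at $3$ halts at stage $3$, while a counter modulo $3$ never halts.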
We now investigate which Turing degrees are achieved by arbitrary computable graph machines.
\subsection{Upper bound}
We will now show that every graph computable function is computable from $\mbf{0}^{(\EM{\omega})}$.
\begin{definition}
Let $f$ be a valid configuration for ${\EM{\mathfrak{M}}}$, and let $A$ be a finite subset of $G$. We say that $(B_i)_{i \leq n}$ is an \defn{$n$-approximation of ${\EM{\mathfrak{M}}}$ and $f$ on $A$} if
\begin{itemize}
\item $A = B_0$,
\item $B_i \subseteq B_{i+1} \subseteq G$
for all $i < n$, and
\item if $B_{i+1} \subseteq B \subseteq G$ then for all $v\in B_{i+1}$,
\[
\<{\EM{\mathfrak{M}}}|_{B_{i+1}}, f_i|_{B_{i+1}}\>(v, 1) = \<{\EM{\mathfrak{M}}}|_{B}, f_i|_{B}\>(v, 1),
\]
\end{itemize}
where again $f_i {\EM{\ :=\ }} \<{\EM{\mathfrak{M}}}, f\>(\,\cdot\,, i)$.
\end{definition}
The following proposition
(in the case where $\ell = n - n'$)
states that if $(B_i)_{i \leq n}$ is an $n$-approximation of ${\EM{\mathfrak{M}}}$ and $f$ on $A$,
then as long as we are only running ${\EM{\mathfrak{M}}}$ with starting configuration $f$ for $\ell$-many steps, and are only considering the states of elements within $B_{n'}$,
then it suffices to restrict ${\EM{\mathfrak{M}}}$ to $B_{n}$.
\begin{proposition}
\label{Approximation = Full}
The following claim holds
for every $n \in {\EM{{\mbb{N}}}}$:
For every valid configuration $f$ for ${\EM{\mathfrak{M}}}$,
and finite $A\subseteq G$,
\begin{itemize}
\item
there is an $n$-approximation of ${\EM{\mathfrak{M}}}$ and $f$ on $A$, and
\item if $(B_i)_{i \leq n}$ is such an approximation, then
\begin{align*}
\label{Approximation = Full: Equation 1}
\notag
(\forall n' < n)
(\forall \ell \le n-n') (\forall v \in B_{n'})\hspace*{88pt}
\\
\<{\EM{\mathfrak{M}}}|_{B_{n'+\ell}}, f|_{B_{n'+\ell}}\>(v, \ell)
= \<{\EM{\mathfrak{M}}}, f\>(v, \ell). \hspace*{10pt}(\square_n)
\end{align*}
\end{itemize}
\end{proposition}
\begin{proof}
We will prove this by induction on $n$.
\vspace*{5pt}
\noindent \ul{Base case:} The claim is vacuous for $n=0$, as there are no nonnegative values of $n'$ to check.
\vspace*{5pt}
\noindent \ul{Inductive case:} Proof of the claim for $n = k+1$, assuming its truth for $n\le k$.\newline
To establish $(\square_{k+1})$ consider
\begin{align*}
(\forall v \in B_{n'})\ \<{\EM{\mathfrak{M}}}|_{B_{n' + \ell}}, f|_{B_{n' + \ell}}\>(v, \ell)
= \<{\EM{\mathfrak{M}}}, f\>(v, \ell)
\tag{$\dagger$}
\end{align*}
where
$n' < k+1$ and $\ell \le (k+1) - n'$.
If $\ell < (k+1)-n'$ and $k=0$ then
$\ell = 0$, and
$(\dagger)$ holds
trivially.
If $\ell < (k+1)-n'$ and $k>0$ then
$\ell \le k -n'$, and so
$(\dagger)$ holds by the inductive hypothesis $(\square_{k})$.
Hence we may restrict attention to the case where $\ell = (k+1)-n'$.
Let $f$ be a valid configuration for ${\EM{\mathfrak{M}}}$, let
$A \subseteq G$ be finite, and
let $(B_i)_{i \leq k}$ be a $k$-approximation of ${\EM{\mathfrak{M}}}$ and
$f_1$
on $A$.
Let $D_{k+1}$ be a subset of $G$ such that for every $v \in B_k$ and every color $c$, if vertex $v$ receives a pulse (in $\<{\EM{\mathfrak{M}}}, f\>$)
of color $c$
at the start of timestep $1$,
then there is some vertex $d \in D_{k+1}$ which sends a pulse
of color $c$
to $v$
during timestep $1$. Note that because there are only finitely many colors of pulses which elements of $B_k$ can receive, and because $B_k$ is finite, we can assume that $D_{k+1}$ is finite as well.
Now let $B_{k+1} = B_k \cup D_{k+1}$.
Because each vertex in $B_k$
receives the same color pulses in ${\EM{\mathfrak{M}}}$ as it does in ${\EM{\mathfrak{M}}}|_{B}$ for any set $B$ containing $B_{k+1}$, we have that $\<{\EM{\mathfrak{M}}}|_{B}, f|_B\>(b, 1)$ agrees with $\<{\EM{\mathfrak{M}}}, f\>(b, 1)$ whenever $b \in B_k$ and $B_{k+1} \subseteq B$. Therefore $(B_i)_{i \leq k+1}$ is a $(k+1)$-approximation of ${\EM{\mathfrak{M}}}$ and $f$ on $A$, and $(B_k, B_{k+1})$ is a $1$-approximation of ${\EM{\mathfrak{M}}}$ and $f$ on $B_k$.
If $n'=k$, then $\ell = 1$, and so $(\dagger)$ holds by $(\square_1)$ applied to the approximation
$(B_k, B_{k+1})$.
If $n' < k$, then by induction, we may use $(\square_k)$ where the bound variable $f$ is instantiated by $f_1$, the bound variable $n'$ by $n'$, and the bound variable $\ell$ by $\ell - 1$,
to deduce that
\begin{align*}
\<{\EM{\mathfrak{M}}}|_{B_{n'+\ell - 1}}, f_1|_{B_{n'+\ell - 1}}\>(v, \ell - 1)
= \<{\EM{\mathfrak{M}}}, f_1\>(v, \ell - 1)
\end{align*}
for all $v\in B_{n'}$.
By Lemma~\ref{run on a valid configuration lemma},
we have
$\<{\EM{\mathfrak{M}}}, f_1\>(v,\ell - 1)= \<{\EM{\mathfrak{M}}}, f\>(v,\ell)$
and
$\<{\EM{\mathfrak{M}}}|_{B_{n'+\ell - 1}}, f_1|_{B_{n'+\ell - 1}}\>(v, \ell - 1) =
\<{\EM{\mathfrak{M}}}|_{B_{n'+\ell - 1}}, f|_{B_{n'+\ell - 1}}\>(v, \ell)$
for all $v\in B_{n'}$.
Because
$(B_i)_{i \leq k}$ is a $k$-approximation of ${\EM{\mathfrak{M}}}$ and $f_1$ on $A$,
we know that
$\<{\EM{\mathfrak{M}}}|_{B_{n'+\ell - 1 }}, f|_{B_{n'+\ell - 1}}\>(v, \ell) =
\<{\EM{\mathfrak{M}}}|_{B_{n'+\ell}}, f|_{B_{n'+\ell}}\>(v, \ell)$
for all $v\in B_{n'}$.
Therefore
$(\dagger)$ holds, and we have established
$(\square_{k+1})$.
\end{proof}
We now analyze the computability of approximations and of runs.
\begin{proposition}
\label{Computability of approximations to graph machines}
Let $n\in{\EM{{\mbb{N}}}}$.
For all computable graph machines ${\EM{\mathfrak{M}}}$ and configurations $f$ that are valid for ${\EM{\mathfrak{M}}}$,
the following are $\mbf{f}^{(n)}$-computable, where
$\mbf{f}$ is the Turing degree of $f$.
\begin{itemize}
\item The collection $P_n(f) {\EM{\ :=\ }} \{(A, (B_i)_{i \leq n})\,:\, A\subseteq G$ is finite and $(B_i)_{i \leq n}$ is an $n$-approximation of ${\EM{\mathfrak{M}}}$ and $f$ on $A\}$.
\item The function $f_n {\EM{\ :=\ }} \<{\EM{\mathfrak{M}}}, f\>(\,\cdot\,, n)$.
\end{itemize}
Further, these computability claims are uniform in $n$.
\end{proposition}
\begin{proof}
We will prove this by induction on $n$. The uniformity follows, since
for all $n> 1$,
we provide the same reduction to $\mbf{f}^{(n)}$, parametrized by the earlier quantities.
\vspace*{5pt}
\noindent \ul{Base case (a):} Proof of claim for $n = 0$.\newline
Let ${\EM{\mathfrak{M}}}$ be a graph machine and $f$ a valid configuration for ${\EM{\mathfrak{M}}}$.
Given a finite $A\subseteq G$, the sequence $(A)$ is the only $0$-approximation of ${\EM{\mathfrak{M}}}$ and $f$ on $A$. Hence $P_0(f) = \{(A,(A))\,:\, A\subseteq G$ finite$\}$ is computable.
Further, $f_0 = f$
is computable from $\mbf{f}^{(0)} = \mbf{f}$.
\vspace*{5pt}
\noindent \ul{Base case (b):}
Proof of claim for $n = 1$.\newline
Let ${\EM{\mathfrak{M}}}$ be a graph machine and $f$ a valid configuration for ${\EM{\mathfrak{M}}}$.
For each finite $A \subseteq G$ and each finite $B_0, B_1$ containing $A$ we can $\mbf{f}$-compute whether $\<{\EM{\mathfrak{M}}}|_{B_0}, f|_{B_0}\>(v, 1) = \<{\EM{\mathfrak{M}}}|_{B_1}, f|_{B_1}\>(v, 1)$,
and so we can
$\mbf{f}'$-compute $P_1(f)$.
But by Proposition~\ref{Approximation = Full},
we know that if $(A, (A, B)) \in P_1$ then for any $v \in A$ we have $\<{\EM{\mathfrak{M}}}, f\>(v, 1) = \<{\EM{\mathfrak{M}}}|_B, f|_B\>(v, 1)$. Hence we can compute
$f_1$
from $P_1$, and so it is $\mbf{f}'$-computable.
\vspace*{5pt}
\noindent\ul{Inductive case:}
Proof of claim for $n = k+1$ (where $k\ge 1$), assuming it for $n=k$.\newline
Let ${\EM{\mathfrak{M}}}$ be a graph machine and $f$ a valid configuration for ${\EM{\mathfrak{M}}}$.
We know that $(B_i)_{i \leq k+1}$ is a $(k+1)$-approximation of ${\EM{\mathfrak{M}}}$ and $f$ on $A$ if and only if both (i) the sequence $(B_i)_{i \leq k}$ is a $k$-approximation of ${\EM{\mathfrak{M}}}$ and $f_1$ on $A$, and (ii) the sequence $(B_k, B_{k+1})$ is a $1$-approximation of ${\EM{\mathfrak{M}}}$ and $f$ on $B_k$.
We can therefore compute $P_{k+1}(f)$ from
$P_k(f_1)$
and $P_1(f)$. By the inductive hypothesis, $P_1(f)$ is $\mbf{f}'$-computable.
Hence we must show that
$P_k(f_1)$ is
$\mbf{f}^{(k+1)}$-computable.
Also by the inductive hypothesis,
$P_k(f_1)$ is computable
from the $k$'th Turing jump of
$f_1$, and $f_1$
is $\mbf{f}'$-computable.
Hence
$P_k(f_1)$ is
$\mbf{f}^{(k+1)}$-computable.
Finally, by
Proposition~\ref{Approximation = Full},
if $(B_i)_{i \leq k+1}$ is a $(k+1)$-approximation of ${\EM{\mathfrak{M}}}$ and $f$ on $A$ then for any $v \in A$ we have $\<{\EM{\mathfrak{M}}}|_{B_{k+1}}, f|_{B_{k+1}}\>(v, k+1) = \<{\EM{\mathfrak{M}}}, f\>(v, k+1)$. We can therefore compute
$f_{k+1}$
from $P_{k+1}(f)$ (which will find such an approximation). Hence
$f_{k+1}$
is $\mbf{f}^{(k+1)}$-computable.
\end{proof}
We then obtain the following two results.
\begin{corollary}
\label{Run is computable from omega th jump}
If $f$ is a valid configuration for ${\EM{\mathfrak{M}}}$,
then
$f_n$
is $\mbf{f}^{(n)}$-computable and so $\<{\EM{\mathfrak{M}}}, f\>$ is $\mbf{f}^{(\EM{\omega})}$-computable,
where $\mbf{f}$ is the Turing degree of $f$.
\end{corollary}
\begin{proof}
By Proposition~\ref{Approximation = Full}, for each $v \in G$ (the underlying set of ${\EM{\mathfrak{M}}}$) and each $n \in {\EM{{\mbb{N}}}}$, there is an $n$-approximation of ${\EM{\mathfrak{M}}}$ and $f$ on $\{v\}$. Further, by Proposition~\ref{Computability of approximations to graph machines} we can
$\mbf{f}^{(n)}$-compute such an approximation,
uniformly in $v$ and $n$.
But if $(B_i^v)_{i \leq n}$ is an $n$-approximation of ${\EM{\mathfrak{M}}}$ and $f$ on $\{v\}$ then $\<{\EM{\mathfrak{M}}}|_{B^v_n}, f|_{B^v_n}\>(v, n) = \<{\EM{\mathfrak{M}}}, f\>(v, n)$. So $f_n = \<{\EM{\mathfrak{M}}}, f\>(\,\cdot\,, n)$ is $\mbf{f}^{(n)}$-computable, uniformly in $n$. Hence $\<{\EM{\mathfrak{M}}}, f\>$ is $\mbf{f}^{(\EM{\omega})}$-computable.
\end{proof}
\begin{theorem}
\label{Total graph computable functions are computable from 0^w}
Suppose that $\{{\EM{\mathfrak{M}}}\}\:\EM{\mc{L}}^{<G} \to \EM{\mc{L}}^{G}$ is a total function. Then $\{{\EM{\mathfrak{M}}}\}$ is computable from $\mbf{0}^{(\EM{\omega})}$.
\end{theorem}
\begin{proof}
Let $f$ be any starting configuration of ${\EM{\mathfrak{M}}}$. Then $f$ is computable.
Hence
by Corollary~\ref{Run is computable from omega th jump},
$\<{\EM{\mathfrak{M}}}, f\>(v, n+1)$ is $\mbf{0}^{(n+1)}$-computable.
This then implies that the function determining
whether or not $\{{\EM{\mathfrak{M}}}\}(x)$ halts after $n$ steps is $\mbf{0}^{(n+2)}$-computable.
But by assumption, $\{{\EM{\mathfrak{M}}}\}(x)$ halts for every $x \in \EM{\mc{L}}^{<G}$, and so $\{{\EM{\mathfrak{M}}}\}$ is $\mbf{0}^{(\EM{\omega})}$-computable.
\end{proof}
\subsection{Lower bound}
\label{infinite-degree-lower-bounds}
We have seen that every graph computable function is computable from $\mbf{0}^{(\EM{\omega})}$.
In this subsection, we will see that this bound can be attained.
We begin by showing that every arithmetical
Turing degree has an element that
is graph computable in constant time. From this we then deduce
that there is a graph computable function Turing equivalent to $\mbf{0}^{(\EM{\omega})}$.
We first recall the following standard result from computability theory (see {\cite[III.3.3]{MR882921}}).
\begin{lemma}
\label{Description of arithmetical functions}
Suppose $n \in {\EM{{\mbb{N}}}}$ and
$X\subseteq {\EM{{\mbb{N}}}}$. Then the following are equivalent.
\begin{itemize}
\item $X \leq_{\mathrm{T}} \mbf{0}^{(n)}$.
\item There is a computable function $g\: {\EM{{\mbb{N}}}}^{n+1} \to {\EM{{\mbb{N}}}}$ such that
\begin{itemize}
\item $h(\,\cdot\,) {\EM{\ :=\ }} \lim_{x_0 \to \infty}\cdots \lim_{x_{n-1} \to \infty} g(x_0, \dots, x_{n-1}, \,\cdot\,)$ is total.
\item $h \equiv_{\mathrm{T}} X$.
\end{itemize}
\end{itemize}
\end{lemma}
We now give the following construction.
\begin{proposition}
\label{Pointwise computable for iterative limits}
Let $n\in{\EM{{\mbb{N}}}}$ and suppose $g\: {\EM{{\mbb{N}}}}^{n+1} \to {\EM{{\mbb{N}}}}$ is computable such that
\[
h(\,\cdot\,) {\EM{\ :=\ }} \lim_{x_0 \to \infty}\cdots \lim_{x_{n-1} \to \infty} g(x_0, \dots, x_{n-1}, \,\cdot\,)
\]
is total. Then $h$ is
graph computable in constant time $5n+5$, via a graph machine whose labels, colors, alphabets, states, and lookup table are all finite and do not depend on $n$ or $g$.
\end{proposition}
\begin{proof}
The first step in the construction is to define a graph machine which can take the limit of a sequence. We will think of this a subroutine which we can (and will) call several times. Let ${\EM{\mc{G}}}_{\mathfrak{L}}$ be the following graph.
\begin{itemize}
\item The underlying set is ${\EM{{\mbb{N}}}} \cup \{*\}$, where $*$ is some new element.
\item There is only one label, $p$, which all vertices have.
\item The colors of ${\EM{\mc{G}}}_\mathfrak{L}$ are $\{\mbf{B}_0, \mbf{B}_1, \mbf{SB}, \mbf{SF}_0, \mbf{SF}_1, \mbf{A}\}$.
\item The edge coloring is $E_{\mathfrak{L}}$, satisfying the following for all $m,m' \in{\EM{{\mbb{N}}}}$.
\begin{itemize}
\item
$E_{\mathfrak{L}}(m, m') = \emptyset$ and $E_{\mathfrak{L}}(m', m) = \{\mbf{B}_0, \mbf{B}_1\}$
when $m < m'$.
\item $E_{\mathfrak{L}}(m, m) = \{\mbf{B}_0, \mbf{B}_1\}$.
\item $E_{\mathfrak{L}}(*, m) = \{\mbf{SB}\}$ and $E_{\mathfrak{L}}(m, *) = \{\mbf{SF}_0, \mbf{SF}_1\}$.
\item $E_{\mathfrak{L}}(*, *) = \emptyset$.
\end{itemize}
\end{itemize}
Let ${\EM{\mathfrak{M}}}_\mathfrak{L}$ be the following graph machine.
\begin{itemize}
\item The underlying graph is ${\EM{\mc{G}}}_\mathfrak{L}$.
\item The alphabet is $\{0,1\}$.
\item The states of ${\EM{\mathfrak{M}}}_\mathfrak{L}$ are $\{s, a_0, a_1, a_2, u, b\}$.
\item The lookup table $T_{\mathfrak{L}}$ satisfies the following, for all
$z$ in the alphabet, states $x$, and collections $X$ of colors.
\begin{itemize}
\item[(i)] $T_{\mathfrak{L}}(p, \emptyset, z, s) = (\emptyset, 0, s)$.
\item[(ii)] $T_{\mathfrak{L}}(p, X \cup \{\mbf{A}\}, z, x) = (\emptyset, 0, a_0)$.
\item[(iii)] $T_{\mathfrak{L}}(p, X, z, a_0) = (\{\mbf{SB}\}, 0, a_1)$.
\item[(iv)] $T_{\mathfrak{L}}(p, X, z, a_1) = (\emptyset, 0, a_2)$.
\item[(v)] $T_{\mathfrak{L}}(p, X, z, a_2) = (\emptyset, k, u)$ if $\mbf{SF}_{k} \in X$ and $\mbf{SF}_{1-k} \not \in X$ for some $k \in \{0,1\}$.
\item[(vi)] $T_{\mathfrak{L}}(p, X \cup \{\mbf{SB}\}, z, x) = (\{\mbf{B}_{z}\}, 0, b)$ if $x \not \in \{a_0, a_1, a_2\}$.
\item[(vii)] $T_{\mathfrak{L}}(p, X, z, b) = (\{\mbf{SF}_{k}\}, 0, u)$ if $\mbf{B}_{k} \in X$ and $\mbf{B}_{1-k} \not \in X$ for some $k\in\{0,1\}$.
\item[(viii)] $T_{\mathfrak{L}}(p, X, z, b) = (\emptyset, 0, u)$ if $\{\mbf{B}_0, \mbf{B}_1\} \subseteq X$.
\item[(ix)] $T_{\mathfrak{L}}(p, X, z, u) = (\emptyset, z, u)$ if $\mbf{A} \not\in X$.
\item[(x)] $T_{\mathfrak{L}}(p, X, z, x) = (\emptyset, 0, u)$ in all other cases.
\end{itemize}
\end{itemize}
We now describe what this graph machine does, beginning from a starting configuration. First, condition (i) sets everything to a clean slate, i.e., makes sure that at the beginning of the second timestep, every vertex will display $0$.
This ensures that the outcome won't depend on the values initially displayed on any vertex.
Next, by condition (ii), if a vertex receives a pulse of type $\mbf{A}$ then at the next timestep it enters state $a_0$.
We can think of this as signaling that this subroutine
has been ``activated''. This will only ever be sent to the element $*$, which we call the ``activation vertex''.
Then, by conditions (iii) and (iv), once the activation vertex is in state $a_0$ it will send an $\mbf{SB}$-pulse to every other vertex. This signals to them to start calculating the limit. The activation vertex will then pause and enter state $a_1$ and then $a_2$. This pause will give the other vertices an opportunity to calculate their limiting value.
The way the vertices calculate the limiting values is as follows. Once a vertex receives an $\mbf{SB}$-pulse, it sends a pulse to its predecessors (in ${\EM{{\mbb{N}}}}$) announcing its currently displayed symbol (using the encoding $0\mapsto \mbf{B}_0$ and $1\mapsto \mbf{B}_1$). This is described in condition (vi).
Once a vertex has received the collection of those symbols displayed on vertices greater than it, it asks whether both symbols $0$ and $1$ occur in this collection. If so, then it knows that its displayed symbol is not the limit, and so it enters state $u$ and does nothing (i.e., $u$ signifies termination for the subroutine). This is described in condition (viii).
On the other hand, if a vertex sees that there is only one symbol displayed among those vertices larger than itself, then it knows that that symbol is the limiting value of the subroutine. The vertex then passes this information along to the activation vertex, via a pulse of type $\mbf{SF}_0$ or $\mbf{SF}_1$ (depending on the limiting value) and enters the subroutine termination state. This is described in condition (vii).
Finally, if the activation vertex is in $a_2$ and receives exactly one pulse among $\mbf{SF}_0$ and $\mbf{SF}_1$, then it knows that this is the limiting value, and sets its display symbol to that value and enters the subroutine termination state. This is described in condition (v).
Of course, once a vertex is in the termination state, it will stay there, displaying the same symbol, unless and until it receives a pulse of type $\mbf{A}$. This is described in condition (ix).
Condition (x) was added to complete the description of the lookup table, but
because $h$ is total,
this will never occur in any actual run that begins at a starting configuration, even when this graph machine is embedded as a subroutine into a larger graph machine, as we subsequently describe.
Note that this subroutine will always complete its computation within $4$ timesteps.
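The core of the limit subroutine, a vertex announcing its symbol to all smaller vertices and a vertex reporting when it sees only one symbol among itself and its successors, can be mimicked on a finite snapshot. In the sketch below (illustrative only; the finite-array encoding is an assumption), the last entry stands in for the eventually constant tail of the infinite graph:

```python
def limit_of_display(symbols):
    """Find the limiting symbol the M_L subroutine would report.

    Vertex m broadcasts its displayed symbol (as a B_0/B_1 pulse) to every
    m' <= m; a vertex that receives only one symbol colour from itself and
    all larger vertices knows that symbol is the limit and reports it.
    """
    for m in range(len(symbols)):
        received = set(symbols[m:])   # pulses from m itself and all m' > m
        if len(received) == 1:        # a single colour seen: it is the limit
            return received.pop()
    return None                       # cannot happen when a limit exists
```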
We need one more graph machine to operate as a subroutine. The purpose of this subroutine will be to send pulses, in sequence, that activate other vertices.
Let ${\EM{\mc{G}}}_{{\EM{\mathfrak{B}}}}^n$ be the graph satisfying the following.
\begin{itemize}
\item The underlying set is $\{\star_{-5n}, \dots, \star_0\}$.
\item There is only one label $q$, which all elements have.
\item The colors are $\{\mbf{S}, \mbf{R}, \mbf{Q}, \mbf{A}, \mbf{SF}_0, \mbf{SF}_1\}$.
\item The edge coloring is $E_{{\EM{\mathfrak{B}}}}^n$.
\item The only vertex pairs having non-empty sets of edge colors are the following.
\begin{itemize}
\item $E_{{\EM{\mathfrak{B}}}}^n(\star_i,\star_{ i+1}) = \{\mbf{R}\}$
for $-5n \leq i < 0$.
\item $E_{{\EM{\mathfrak{B}}}}^n(\star_0, \star_0) = \{\mbf{S}\}$.
\item $E_{{\EM{\mathfrak{B}}}}^n(\star_{0}, \star_{-5n}) = \{\mbf{Q}\}$.
\end{itemize}
\end{itemize}
We define the graph machine ${\EM{\mathfrak{M}}}_{{\EM{\mathfrak{B}}}}^n$ as follows.
\begin{itemize}
\item The underlying graph is ${\EM{\mc{G}}}_{{\EM{\mathfrak{B}}}}^n$.
\item The alphabet is $\{0,1\}$.
\item The states are $\{s, d, r, u\}$.
\item The lookup table $T_{{\EM{\mathfrak{B}}}}^n$ satisfies the following for all
$z$ in the alphabet, states $x$, and collections $X$ of colors.
\begin{itemize}
\item[(i)] $T_{{\EM{\mathfrak{B}}}}^n(q, X, 0, s) = (\emptyset, 0, s)$ if $\{\mbf{R}, \mbf{S}\} \cap X = \emptyset$.
\item[(ii)] $T_{{\EM{\mathfrak{B}}}}^n(q, X, 1, s) = (\{\mbf{S}\}, 0, s)$.
\item[(iii)] $T_{{\EM{\mathfrak{B}}}}^n(q, X \cup \{\mbf{S}\}, z, s) = (\{\mbf{Q}\}, 0, s)$ if $\mbf{R} \not \in X$.
\item[(iv)] $T_{{\EM{\mathfrak{B}}}}^n(q, X, z, s) = (\{\mbf{A}\}, 0, d)$ if $\{\mbf{Q}, \mbf{R}\} \cap X \neq \emptyset$.
\item[(v)] $T_{{\EM{\mathfrak{B}}}}^n(q, X, z, d) = (\{\mbf{R}\}, 0, r)$.
\item[(vi)]
$T_{{\EM{\mathfrak{B}}}}^n(q, X, z, r) = (\emptyset, k, d)$ if $\mbf{SF}_{k} \in X$ and $\mbf{SF}_{1- k} \not \in X$ for some $k\in\{0,1\}$, and otherwise
$T_{{\EM{\mathfrak{B}}}}^n(q, X, z, r) = (\emptyset, 0, r)$.
\item[(vii)] $T_{{\EM{\mathfrak{B}}}}^n(q, X, z, u) = (X, z, u)$.
\item[(viii)] $T_{{\EM{\mathfrak{B}}}}^n(q, X, z, x) = (\emptyset, 0, u)$ in all other cases.
\end{itemize}
\end{itemize}
We now describe what the graph machine ${\EM{\mathfrak{M}}}_{{\EM{\mathfrak{B}}}}^n$ does. First notice that the only way for a vertex to get out of the initial state $s$ is for it to receive an $\mbf{S}$-pulse, a $\mbf{Q}$-pulse or an $\mbf{R}$-pulse. Only $\mbf{S}$-pulses can be sent from a vertex that is in the initial state and which hasn't received any other pulses. Also, there is only one $\mbf{S}$-edge, namely, a self loop at $\star_0$. Hence the first timestep of the graph machine's behavior is determined by what the vertex $\star_0$ initially displays.
If the vertex $\star_0$ initially displays $0$, then all vertices display $0$ in the next step, and the subroutine does nothing else. This is described in condition (i). If, however, vertex $\star_0$ initially displays $1$, then an $\mbf{S}$-pulse is sent by $\star_0$ to itself. This is described in condition (ii). Once $\star_0$ receives the $\mbf{S}$-pulse, it reverts back to displaying $0$, and sends a $\mbf{Q}$-pulse to $\star_{-5n}$. This is described in condition (iii).
Once vertex $\star_{-5n}$ receives a $\mbf{Q}$-pulse,
the main loop begins. In the main loop, first vertex $\star_{-5n}$ sends out an $\mbf{A}$-pulse and moves to a state $d$, as described in condition (iv). The purpose of the $\mbf{A}$-pulse is to tell the vertex that receives it to \emph{activate} and start calculating a limit. While there are no vertices in ${\EM{\mathfrak{M}}}_{{\EM{\mathfrak{B}}}}^n$ with an $\mbf{A}$-colored edge, we will combine ${\EM{\mathfrak{M}}}_{{\EM{\mathfrak{B}}}}^n$ with copies of ${\EM{\mathfrak{M}}}_{\mathfrak{L}}$, connecting the two graphs using
$\mbf{A}$-colored edges. Note that only vertices of the form $\star_{-5k}$ with $k\le n$ will be connected to copies of ${\EM{\mathfrak{M}}}_{\mathfrak{L}}$ via $\mbf{A}$-colored edges. The other vertices are there to provide additional timesteps in between $\mbf{A}$-pulses to allow the activated copies of ${\EM{\mathfrak{M}}}_{\mathfrak{L}}$ time to complete their computations.
Once a vertex is in state $d$, it sends an $\mbf{R}$-pulse to its ``neighbor to the right'' (the vertex with least index greater than it), and moves to state $r$, where it will stay unless it is $\star_0$, as described in conditions (v) and (vi). Every vertex acts the same way upon arrival of an $\mbf{R}$-pulse as on an $\mbf{Q}$-pulse. Hence each vertex in the sequence sends, in succession, an $\mbf{A}$-pulse, and then enters the state $r$.
Condition (vii) ensures that when a vertex enters the subroutine termination state (which will only ever happen to $\star_0$ in the course of this subroutine's use by the larger program), the displayed symbol remains constant. Condition (viii) also describes a circumstance that happens only when the subroutine is used by the larger program, as does the first clause of condition (vi) (which agrees with condition (v) of ${\EM{\mathfrak{M}}}_\mathfrak{L}$).
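The role of the chain is purely one of timing. On the reading above (a sketch of the timing only, not of the lookup table), the token entering at $\star_{-5n}$ advances one vertex per timestep, so the vertices $\star_{-5k}$ that carry $\mbf{A}$-colored edges fire at timesteps spaced five apart, leaving the intervening timesteps for each activated copy of ${\EM{\mathfrak{M}}}_{\mathfrak{L}}$ (which needs $4$) to finish:

```python
def activation_times(n):
    """Relative timesteps at which the vertices star_{-5k} (k = n, ..., 0)
    carrying A-colored edges fire, measured from the token's arrival at
    star_{-5n}.

    The token moves one vertex per timestep, so consecutive activations
    are 5 steps apart: enough room for the 4-step limit subroutine.
    """
    return [5 * k for k in range(n + 1)]
```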
When we connect up ${\EM{\mathfrak{M}}}_{{\EM{\mathfrak{B}}}}^n$ with copies of ${\EM{\mathfrak{M}}}_{\mathfrak{L}}$, vertex
$\star_0$ participates in calculating the final limit.
Hence, in addition to having edges colored by $\mbf{A}$,
vertex $\star_0$ also has $\mbf{SF}_0$ and $\mbf{SF}_1$-edges. If $\star_0$ receives a pulse of one of those colors, it then displays the corresponding value and moves to a state that keeps this value constant.
We now combine the graph machines for these subroutines to get a graph machine that calculates $h$.
For $e \in {\EM{{\mbb{N}}}}$, define the graph ${\EM{\mc{G}}}_{g,e}$ as follows.
\begin{itemize}
\item The underlying set is ${\EM{{\mbb{N}}}}^n \cup {\EM{{\mbb{N}}}}^{n-1} \cup\cdots \cup {\EM{{\mbb{N}}}}^2 \cup {\EM{{\mbb{N}}}} \cup \{\star_{-5n}, \dots, \star_0\}$.
\item There are two labels, $p$ and $q$. Vertex $\star_i$ is labeled by $q$ (for $-5n \leq i \leq 0$), and every other vertex is labeled by $p$.
\item The colors are $\{\mbf{S}, \mbf{C}_0, \mbf{C}_1, \mbf{B}_0, \mbf{B}_1, \mbf{SB}, \mbf{SF}_0, \mbf{SF}_1, \mbf{A}, \mbf{Q}, \mbf{R}\}$.
\item The edge coloring is $E$, satisfying the following.
\begin{itemize}
\item For each ${\EM{\ol{c}}} \in {\EM{{\mbb{N}}}}^{n-1}$ and $k, m \in {\EM{{\mbb{N}}}}$ with $k \neq m$ we have the following.
\begin{itemize}
\item $E({\EM{\ol{c}}} k, {\EM{\ol{c}}} m) = E_{\mathfrak{L}}(k, m)$.
\item $E({\EM{\ol{c}}} k, {\EM{\ol{c}}} k) = \{\mbf{C}_{g({\EM{\ol{c}}} k e)}, \mbf{B}_0, \mbf{B}_1\}$.
\item $E({\EM{\ol{c}}}, {\EM{\ol{c}}} k) = E_{\mathfrak{L}}(*, k)$ and $E({\EM{\ol{c}}} k, {\EM{\ol{c}}}) = E_{\mathfrak{L}}(k, *)$.
\item $E(\star_{-5n}, {\EM{\ol{c}}} k) = \{\mbf{A}\}$.
\end{itemize}
\item For each ${\EM{\ol{c}}} \in {\EM{{\mbb{N}}}}^i$ for $0 < i < n-1$ and $k, m \in {\EM{{\mbb{N}}}}$ we have the following.
\begin{itemize}
\item $E({\EM{\ol{c}}} k, {\EM{\ol{c}}} m) = E_{\mathfrak{L}}(k, m)$.
\item $E({\EM{\ol{c}}}, {\EM{\ol{c}}} k) = E_{\mathfrak{L}}(*, k)$ and $E({\EM{\ol{c}}} k, {\EM{\ol{c}}}) = E_{\mathfrak{L}}(k, *)$.
\item $E(\star_{-5(n-i+1)}, {\EM{\ol{c}}} k) = \{\mbf{A}\}$.
\end{itemize}
\item For each $k, m\in {\EM{{\mbb{N}}}}$ we have the following.
\begin{itemize}
\item $E(k, m) = E_{\mathfrak{L}}(k, m)$.
\item $E(\star_0, k) = E_{\mathfrak{L}}(*, k)$ and $E(k, \star_0) = E_{\mathfrak{L}}(k, *)$.
\item $E(\star_0, \star_0) = \{\mbf{S}, \mbf{A}\}$.
\end{itemize}
\item For each $-5n \leq k, m < 0$ we have the following.
\begin{itemize}
\item $E(\star_k, \star_m) = E_{{\EM{\mathfrak{B}}}}^n(\star_k, \star_m)$.
\item $E(\star_k, \star_0) = E_{{\EM{\mathfrak{B}}}}^n(\star_0, \star_0)$.
\end{itemize}
\end{itemize}
\end{itemize}
The graph ${\EM{\mc{G}}}_{g,e}$ is such that for any nonempty tuple ${\EM{\ol{c}}} \in {\EM{{\mbb{N}}}}^{<n}$, the set $\{{\EM{\ol{c}}}\} \cup \{{\EM{\ol{c}}} k\}_{k \in {\EM{{\mbb{N}}}}}$ is isomorphic to ${\EM{\mc{G}}}_{\mathfrak{L}}$ (after ignoring the edges $\{\mbf{C}_0, \mbf{C}_1\}$). This allows us to iteratively take limits. Further, each ${\EM{\ol{c}}} \in {\EM{{\mbb{N}}}}^{n}$ has a self-loop which encodes the value of $g({\EM{\ol{c}}} e)$. This will be used to initialize the displayed symbols of vertices in the matrix that we will later use to take the limits.
We define the graph machine ${\EM{\mathfrak{M}}}_{g, e}$ as follows.
\begin{itemize}
\item The underlying graph is ${\EM{\mc{G}}}_{g, e}$.
\item The states are $\{d, s, a_0, a_1, a_2, u, b\}$.
\item The lookup table $T$ is such that the following hold, for all
$z$ in the alphabet, states $t$, and collections $X$ of colors.
\begin{itemize}
\item[(i)] $T(q, X, z, t) = T_{{\EM{\mathfrak{B}}}}^n(q, X, z, t)$.
\item[(ii)] $T(p, X, z, t) = T_{\mathfrak{L}}(p, X, z, t)$ if ($t \neq s$ and $\mbf{A} \not \in X$) or ($t = a_0$ and, for some $k \in \{0, 1\}$, $\mbf{C}_k \in X$ and $\mbf{C}_{1-k} \not \in X$).
\item[(iii)] $T(p, X \cup \mbf{A}, z, s) = (\{\mbf{C}_0, \mbf{C}_1\}, 0, a_0)$.
\item[(iv)] $T(p, X, z, a_0) = (\emptyset, k, u)$ if $\mbf{C}_k \in X$ and $\mbf{C}_{1-k} \not \in X$ for some $k\in\{0, 1\}$.
\item[(v)] $T(p, X, z, t) = (\emptyset, 0, u)$ in all other cases.
\end{itemize}
\end{itemize}
We now describe a run of ${\EM{\mathfrak{M}}}_{g, e}$ on a starting configuration. First, just as with ${\EM{\mathfrak{M}}}_{{\EM{\mathfrak{B}}}}^n$, the computation begins by observing the behavior of $\star_0$.
If $\star_0$ initially displays $0$, then all vertices stay in the initial state and display $0$ on the next timestep.
If $\star_0$ initially displays $1$, then the computation proceeds as in ${\EM{\mathfrak{M}}}_{{\EM{\mathfrak{B}}}}^n$, and
vertex $\star_{-5n}$ sends an $\mbf{A}$-pulse, which \emph{activates} all vertices of the form ${\EM{\ol{c}}} \in {\EM{{\mbb{N}}}}^n$.
Vertices of the form ${\EM{\ol{c}}} \in {\EM{{\mbb{N}}}}^n$ are not designed to compute the limits of anything, and so upon activation their values must be initialized. This is done by each such vertex attempting to send itself a $\mbf{C}_0$-pulse and a $\mbf{C}_1$-pulse. In other words, each such vertex sends a $\mbf{C}_0$-pulse and a $\mbf{C}_1$-pulse along all $\mbf{C}_0$-edges and
$\mbf{C}_1$-edges connected to it, respectively, if any exist (and all $\mbf{C}_0$-edges and $\mbf{C}_1$-edges connected to such a vertex are self-loops).
Because of how the graph ${\EM{\mc{G}}}_{g,e}$ was constructed, each such ${\EM{\ol{c}}}$ will only receive the pulse $\mbf{C}_{g({\EM{\ol{c}}} e)}$.
Hence after the pulse is received, vertex ${\EM{\ol{c}}}$ completes its initialization by
setting its displayed symbol to the index of whichever pulse it receives.
Meanwhile,
vertices $\{\star_{-5n}, \dots, \star_0\}$
are sending $\mbf{R}$-pulses in succession along the sequence, causing each such vertex, in order,
to attempt to send an activation $\mbf{A}$-pulse (i.e., a pulse along all $\mbf{A}$-edges connected to it, of which there will be at most one). However, because only every fifth vertex in the sequence
$\{\star_{-5n}, \dots, \star_0\}$ is connected via an $\mbf{A}$-colored edge, by the time the next $\mbf{A}$-pulse is sent, i.e., along the edge attached to $\star_{-5n+5}$, the initialization procedure has finished.
The next activation pulse is then sent from $\star_{-5n+5}$ to all vertices of the form ${\EM{\ol{c}}} \in {\EM{{\mbb{N}}}}^{n-1}$. It causes these vertices to begin the process of calculating the limit of the sequence currently displayed by the vertices $\{{\EM{\ol{c}}} 0, {\EM{\ol{c}}} 1, \dots\}$. While this limit is being calculated,
the vertices in
$\{\star_{-5n+5}, \dots, \star_0\}$ are attempting to send out
activation pulses,
in sequence. By the time the next $\mbf{A}$-pulse is sent out, at $\star_{-5n+10}$, each vertex ${\EM{\ol{c}}}\in{\EM{{\mbb{N}}}}^{n-1}$ is displaying the limit of the values displayed by $\{{\EM{\ol{c}}} 0, {\EM{\ol{c}}} 1, \dots\}$.
This process then repeats until we get to $\star_0$, which plays a double role. First, once $\star_0$ has received an $\mbf{R}$-pulse, it sends out an $\mbf{A}$-pulse to itself. This signals $\star_0$ to begin the process of calculating the limit of the symbols displayed at $\{0, 1, \dots\}$. Second, when this calculation finishes, $\star_0$ displays
\[
\lim_{x_0 \to \infty}\cdots \lim_{x_{n-1} \to \infty} g(x_0, \dots, x_{n-1}, e),\]
which is the desired value, $h(e)$.
Note that the map $e \mapsto {\EM{\mathfrak{M}}}_{g,e}$ is computable, and that the lookup table $T$ is independent of $e$. Further, the time it takes for ${\EM{\mathfrak{M}}}_{g,e}$ to run is independent of $e$.
Therefore $h$ is
graph computable in constant time.
\end{proof}
In summary, we define a ``subroutine'' graph machine that, on its own, computes the limit of a computable binary sequence.
We then embed $n$ repetitions of this subroutine into a single graph machine that computes the $n$-fold limit of the $(n+1)$-dimensional array given by $g$.
The subroutine graph machine has, as its underlying graph, a countably infinite sequence of vertices together with one special vertex, in which
every vertex is connected to all previous vertices (and all are connected to the special vertex). Each vertex first activates itself, setting its displayed symbol to the appropriate term in the sequence whose limit is being computed. Each vertex then sends a pulse to every previous vertex, signaling its displayed symbol. Any vertex which receives both a $0$-pulse and a $1$-pulse from vertices later in the sequence knows that the sequence alternates at some later index.
Finally, any vertex which only receives a $0$ or $1$ pulse, but not both, sends a pulse corresponding to the one it receives to the special vertex. This special vertex then knows the limiting value of the sequence.
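The pulse logic of this subroutine can be sketched in Python. As an illustrative assumption (not part of the construction), the infinite sequence of vertices is truncated to a finite prefix long enough to contain at least one vertex past the last alternation.

```python
def limit_via_pulses(seq):
    """Pulse-based limit of an eventually constant 0/1 sequence.

    seq is a finite prefix, assumed long enough that at least one
    vertex lies past the last alternation.
    """
    special = set()  # pulses received by the special vertex
    n = len(seq)
    for i in range(n):
        # pulses vertex i receives from vertices later in the sequence
        received = {seq[j] for j in range(i + 1, n)}
        if len(received) == 1:
            # only one of 0/1 was received: forward it to the special vertex
            special.add(received.pop())
        # receiving both 0 and 1 means the sequence alternates at some
        # later index, so vertex i stays silent
    assert len(special) == 1, "prefix too short to determine the limit"
    return special.pop()
```

For instance, `limit_via_pulses([0, 1, 0, 1, 1, 1, 1])` returns `1`: every vertex past the last alternation receives only `1`-pulses and forwards them to the special vertex.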
This technical construction allows us to conclude the following.
\begin{corollary}
\label{Pointwise computable Turing degrees}
Suppose $X\subseteq {\EM{{\mbb{N}}}}$ is such that $X \leq_{\mathrm{T}} \mbf{0}^{(n)}$. Then
$X$ is Turing-equivalent to some function that is
graph computable in constant time by a machine whose underlying graph is finitary.
\end{corollary}
\begin{proof}
By Lemma~\ref{Description of arithmetical functions},
$X$ is Turing equivalent to
the $n$-fold limit of some computable function. By Proposition~\ref{Pointwise computable for iterative limits},
this $n$-fold limit is graph computable in constant time
by a machine whose underlying graph is finitary.
\end{proof}
Not only are all graph computable functions Turing reducible to $\mbf{0}^{(\EM{\omega})}$, but this bound can be achieved.
\begin{theorem}
There is a graph computable function
(via a machine whose underlying graph is finitary)
that
is Turing equivalent to $\mbf{0}^{(\EM{\omega})}$.
\label{omega-jump}
\end{theorem}
\begin{proof}
For each $n\in{\EM{{\mbb{N}}}}$, uniformly choose
a computable function $g_n\:{\EM{{\mbb{N}}}}^{n+1}\to{\EM{{\mbb{N}}}}$ such that its $n$-fold limit $h_n$ satisfies
$h_n \equiv_{\mathrm{T}} \mbf{0}^{(n)}$.
For $e,n \in {\EM{{\mbb{N}}}}$, let the graph machines ${\EM{\mc{G}}}_{g_n, e}$ and ${\EM{\mathfrak{M}}}_{g_n, e}$
be the graphs and graph machines described in the proof of Proposition~\ref{Pointwise computable for iterative limits}.
Let ${\EM{\mc{G}}}_\EM{\omega}$ be the graph which is the (computable) disjoint union of the uniformly computable graphs
$\{{\EM{\mc{G}}}_{g_n, e}\,:\, e, n \in {\EM{{\mbb{N}}}}\}$.
Note that ${\EM{\mc{G}}}_\EM{\omega}$ is finitary as the (finite) number of labels of ${\EM{\mc{G}}}_{g_n, e}$ does not depend on $n$ or $e$.
Further note that for all $e, n \in {\EM{{\mbb{N}}}}$, the graph machines ${\EM{\mathfrak{M}}}_{g_n, e}$ have the same lookup table. We can therefore let ${\EM{\mathfrak{M}}}_\EM{\omega}$ be the graph machine with underlying graph ${\EM{\mc{G}}}_\EM{\omega}$ having this lookup table.
The function $\{{\EM{\mathfrak{M}}}_\EM{\omega}\}$ is total because each
$\{{\EM{\mathfrak{M}}}_{g_n, e}\}$ is (and the corresponding submachines of the disjoint union do not interact).
By the construction of ${\EM{\mathfrak{M}}}_\EM{\omega}$, we have
$\mbf{0}^{(\EM{\omega})} \leq_{\mathrm{T}} \{{\EM{\mathfrak{M}}}_\EM{\omega}\}$.
On the other hand, $\{{\EM{\mathfrak{M}}}_\EM{\omega}\} \leq_{\mathrm{T}} \mbf{0}^{(\EM{\omega})}$ holds
by Theorem~\ref{Total graph computable functions are computable from 0^w}.
Hence $\{{\EM{\mathfrak{M}}}_\EM{\omega}\} \equiv_{\mathrm{T}} \mbf{0}^{(\EM{\omega})}$.
\end{proof}
\section{Finite degree graphs}
\label{finite-deg-sec}
We have seen that
every arithmetical function is graph computable. However, as we will see in this section, if we instead limit ourselves to graphs where each vertex has finite degree, then not only is every graph computable function computable from $\mbf{0}'$, but also we can obtain more fine-grained control over the Turing degree of the function by studying the degree structure of the graph.
\subsection{Upper bound}
Before we move to the specific case of graphs of finite degree (defined below), there is an important general result concerning bounds on graph computability and approximations to computations.
\begin{definition}
Let $\Theta\: \ensuremath{\mathfrak{P}}_{<\EM{\omega}}(G) \to \ensuremath{\mathfrak{P}}_{<\EM{\omega}}(G)$.
We say that $\Theta$ is a \defn{uniform approximation} of ${\EM{\mathfrak{M}}}$ if
for all finite subsets $A \subseteq G$,
\begin{itemize}
\item $A \subseteq \Theta(A)$, and
\item for any valid configuration $f$ for ${\EM{\mathfrak{M}}}$, the pair
$(A, \Theta(A))$ is a $1$-approximation of ${\EM{\mathfrak{M}}}$ and $f$ on $A$.
\end{itemize}
\end{definition}
\begin{lemma}
\label{Uniform approximation implies approximation}
Let $\Theta(A)$ be a uniform approximation of ${\EM{\mathfrak{M}}}$. Then for any finite subset $A$ of $G$, any valid configuration $f$ for ${\EM{\mathfrak{M}}}$, and any $n \in {\EM{{\mbb{N}}}}$, the tuple $(A, \Theta(A), \Theta^2(A), \dots, \Theta^n(A))$ is an $n$-approximation of ${\EM{\mathfrak{M}}}$ and $f$ on $A$.
\end{lemma}
\begin{proof}
We prove this by induction on $n$.
\vspace*{5pt}
\noindent \ul{Base case:} $n = 1$ \newline
We know by hypothesis
that $(A, \Theta(A))$ is a $1$-approximation of ${\EM{\mathfrak{M}}}$ and $f$ on $A$.
\vspace*{5pt}
\noindent \ul{Inductive case:} $n = k+1$\newline
We know that $(A, \Theta(A), \dots, \Theta^k(A))$ is a $k$-approximation of ${\EM{\mathfrak{M}}}$ and $f$ on $A$. It therefore suffices to show that $(\Theta^k(A), \Theta^{k+1}(A))$ is a $1$-approximation of ${\EM{\mathfrak{M}}}$ and $f$ on $A$. But this holds by our assumption on $\Theta$.
\end{proof}
Note that while we will be able to get even better bounds in the case of finite degree graphs, we do have the following bound on computability.
\begin{lemma}
\label{Computability properties of uniform approximations}
Let $\Theta$ be a uniform approximation of ${\EM{\mathfrak{M}}}$. Then for any valid configuration $f$,
\begin{itemize}
\item[(a)] $\<{\EM{\mathfrak{M}}}, f\>$ is computable from $\Theta$ and $f$ (uniformly in $f$), and
\item[(b)] if $\{{\EM{\mathfrak{M}}}\}$ is total, then $\{{\EM{\mathfrak{M}}}\} \leq_{\mathrm{T}} \Theta'$.
\end{itemize}
\end{lemma}
\begin{proof}
Clause (a) follows from Lemma~\ref{Uniform approximation implies approximation} and the definition of an approximation.
If $\{{\EM{\mathfrak{M}}}\}$ is total, then for each starting configuration $f$ of ${\EM{\mathfrak{M}}}$, there is an $n$ such that $f_n = f_{n+1}
= \{{\EM{\mathfrak{M}}}\}(f)$.
Hence $\{{\EM{\mathfrak{M}}}\}$ is computable from the Turing jump of $\<{\EM{\mathfrak{M}}}, \,\cdot\,\>$, and so it is computable from $\Theta'$. Therefore clause (b) holds.
\end{proof}
We now introduce the \emph{degree function} of a graph.
\begin{definition}
For $v \in G$, define the \defn{degree} of $v$ to be the number of vertices incident with it, i.e.,
\[
\deg_{\EM{\mc{G}}}(v){\EM{\ :=\ }} |\{w\,:\, E(v,w) \cup E(w,v) \neq \emptyset\}|,
\]
and call $\deg_{\EM{\mc{G}}}(\,\cdot\,) \: G \to {\EM{{\mbb{N}}}} \cup \{\infty\}$ the \defn{degree function} of ${\EM{\mc{G}}}$.
We say that ${\EM{\mc{G}}}$ has \defn{finite degree} when
$\mathrm{rng}(\deg_{\EM{\mc{G}}}) \subseteq {\EM{{\mbb{N}}}}$,
and say that
${\EM{\mc{G}}}$ has \defn{constant degree} when $\deg_{\EM{\mc{G}}}$ is constant.
\end{definition}
We will see that for a graph ${\EM{\mc{G}}}$ of finite degree, its degree function
bounds the computability of ${\EM{\mc{G}}}$-computable functions.
The following easy lemma will allow us to provide a computability bound on graph Turing machines whose underlying graphs have finite degree.
\begin{lemma}
\label{Degree computable from 0'}
Suppose that
${\EM{\mc{G}}}$ has finite degree. Then $\deg_{\EM{\mc{G}}} \leq_{\mathrm{T}} \mbf{0}'$.
\end{lemma}
\begin{proof}
Let $G$ be the underlying set of ${\EM{\mc{G}}}$.
Because ${\EM{\mc{G}}}$ is computable, for any vertex $v\in G$, the set of neighbors of $v$ is computably enumerable, uniformly in $v$. The size of this set is therefore $\mbf{0}'$-computable, uniformly in $v$, and so $\deg_{\EM{\mc{G}}} \leq_{\mathrm{T}} \mbf{0}'$.
\end{proof}
Recall the definition of $n$-neighborhood (\cref{nneigh}).
\begin{lemma}
\label{Properties of 1-neighborhoods}
Suppose that
${\EM{\mc{G}}}$
has finite degree. Then
\begin{itemize}
\item[(a)] the $1$-neighborhood map $\EM{\mathbf{N}}_1$ is computable from $\deg_{\EM{\mc{G}}}$, and
\item[(b)] for any ${\EM{\mc{G}}}$-machine ${\EM{\mathfrak{M}}}$, the map $\EM{\mathbf{N}}_1$ is a uniform approximation to ${\EM{\mathfrak{M}}}$.
\end{itemize}
\end{lemma}
\begin{proof}
Clause (a) follows from the fact that given the degree of a vertex one can search for all of its neighbors, as this set is computably enumerable (uniformly in the vertex) and of a known finite size.
Clause (b) follows from the fact that if a vertex receives a pulse, it must have come from some element of its $1$-neighborhood.
\end{proof}
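Clause (a) can be illustrated with a small Python sketch. The encoding of the graph as a set of directed pairs with a finite candidate universe is an assumption made for readability, not the paper's formal encoding; the point is that knowing $\deg_{\EM{\mc{G}}}(v)$ tells the neighbor search when to stop.

```python
def neighbors(edges, deg, v, universe):
    """Find the deg[v] neighbors of v by enumerating candidates.

    Knowing the degree in advance bounds the search; this is what
    makes the 1-neighborhood map computable from the degree function.
    """
    found = []
    for w in universe:  # finite stand-in for enumerating all of N
        if (v, w) in edges or (w, v) in edges:
            found.append(w)
        if len(found) == deg[v]:
            break
    return found


def n_neighborhood(edges, deg, A, m, universe):
    """N_m(A), computed as the m-fold iterate of the 1-neighborhood map."""
    cur = set(A)
    for _ in range(m):
        cur |= {w for v in cur for w in neighbors(edges, deg, v, universe)}
    return cur
```

On a path graph $0 - 1 - \cdots - 5$, for example, `n_neighborhood(..., {0}, 2, ...)` returns `{0, 1, 2}`.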
We now obtain the following more precise upper bound on complexity for finite degree graphs.
\begin{theorem}
\label{Computability of Finite Degree Graph}
Suppose that ${\EM{\mc{G}}}$ has finite degree and
$\{{\EM{\mathfrak{M}}}\}$ is total. Then $\{{\EM{\mathfrak{M}}}\}$ is computable from $\deg_{\EM{\mc{G}}}$ and its range is contained in $\EM{\mc{L}}^{<G}$.
\end{theorem}
\begin{proof}
First note that if $f$ is a starting configuration of ${\EM{\mathfrak{M}}}$ and for all $k' > k$ we have $f(k') = 0$, then for any $m \in {\EM{{\mbb{N}}}}$ and any $v \in G \setminus \EM{\mathbf{N}}_m(\{0, \dots, k\})$, we have that $\<{\EM{\mathfrak{M}}}, f\>(v, m) = \<{\EM{\mathfrak{M}}}, f\>(v, 0)$. Therefore $\{{\EM{\mathfrak{M}}}\}(f)$ halts in $m$ steps if and only if $\{{\EM{\mathfrak{M}}}|_{\EM{\mathbf{N}}_m(\{0, \dots, k\})}\}(f|_{\EM{\mathbf{N}}_m(\{0, \dots, k\})})$ halts in $m$ steps.
Therefore we can determine whether or not $\{{\EM{\mathfrak{M}}}\}$ halts in $m$ steps by examining $\{{\EM{\mathfrak{M}}}|_{\EM{\mathbf{N}}_m(\{0, \dots, k\})}\}(f|_{\EM{\mathbf{N}}_m (\{0, \dots, k\})})$, which is uniformly computable from $\EM{\mathbf{N}}_m$ by \cref{lemma-finite-graph}, since $\EM{\mathbf{N}}_m(\{0, \dots, k\})$ is finite.
But $\EM{\mathbf{N}}_m$ is just $\EM{\mathbf{N}}_1^m$, and by Lemma~\ref{Properties of 1-neighborhoods}(a), $\EM{\mathbf{N}}_1$ is computable from $\deg_{\EM{\mc{G}}}$.
For each $m$, we can thus check, uniformly in $m$ and $\deg_{\EM{\mc{G}}}$-computably, whether $\{{\EM{\mathfrak{M}}}\}(f)$ halts at stage $m$.
Hence $\{{\EM{\mathfrak{M}}}\}$ is $\deg_{\EM{\mc{G}}}$-computable as
$\{{\EM{\mathfrak{M}}}\}$ halts on all starting configurations.
\end{proof}
We then obtain the following important corollaries.
\begin{corollary}
\label{Finite degree implies computable in 0'}
Suppose that ${\EM{\mc{G}}}$
has finite degree.
Then any ${\EM{\mc{G}}}$-computable function is $\mbf{0}'$-computable.
\end{corollary}
\begin{proof}
This follows from Theorem~\ref{Computability of Finite Degree Graph} and Lemma~\ref{Degree computable from 0'}.
\end{proof}
\begin{corollary}
\label{Constant degree implies computable}
Suppose that ${\EM{\mc{G}}}$
has
constant degree.
Then any ${\EM{\mc{G}}}$-computable function is computable (in the ordinary sense).
\end{corollary}
\begin{proof}
This follows from Theorem~\ref{Computability of Finite Degree Graph} and the fact that $\deg_{\EM{\mc{G}}}$ is constant (and hence computable).
\end{proof}
\subsection{Lower bound}
\label{finite-degree-lower-bounds}
In this subsection, we consider the possible Turing degrees of graph computable functions where the underlying graph has finite degree. In particular, we show that every Turing degree below $\mbf{0}'$ is the degree of some total graph computable function where the underlying graph has finite degree.
Recall from Lemma~\ref{Description of arithmetical functions} (for $n = 1$) that a set $X \subseteq {\EM{{\mbb{N}}}}$ satisfies $X \leq_{\mathrm{T}} \mbf{0}'$ when the characteristic function of $X$ is the limit of a 2-parameter computable function.
The following standard definition (see \cite[III.3.8]{MR882921})
describes a hierarchy among sets $X \leq_{\mathrm{T}} \mbf{0}'$.
\begin{definition}
Let $A\subseteq {\EM{{\mbb{N}}}}$ be such that $A \leq_{\mathrm{T}} \mbf{0}'$, and let $h$ be the characteristic function of $A$.
For $k \in {\EM{{\mbb{N}}}}$, the set $A$ is
\defn{$k$-computably enumerable},
or $k$-c.e., when there is a computable function $g\:{\EM{{\mbb{N}}}} \times {\EM{{\mbb{N}}}} \to \{0,1\}$ such that
\begin{itemize}
\item $(\forall m \in {\EM{{\mbb{N}}}})\ g(0, m) = 0$,
\item $(\forall m \in {\EM{{\mbb{N}}}})\ h(m) = \lim_{n \to \infty}g(n, m)$, and
\item for each $m \in {\EM{{\mbb{N}}}}$, $|\{\ell\in {\EM{{\mbb{N}}}}\,:\, g(\ell,m) \neq g(\ell+1, m)\}| \le k$, i.e., the uniformly computable approximation to the limit alternates at most $k$-many times for each input.
\end{itemize}
\end{definition}
In particular, the $1$-c.e.\ sets are precisely the c.e.\ sets.
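The alternation condition is checkable on any finite prefix of a column of $g$. A minimal Python sketch follows; the list encoding of the column $[g(0,m), g(1,m), \dots]$ is an assumption for illustration.

```python
def alternations(column):
    """Number of indices n with g(n, m) != g(n+1, m), on a finite prefix."""
    return sum(1 for a, b in zip(column, column[1:]) if a != b)


def consistent_with_k_ce(column, k):
    """Check the k-c.e. conditions on a finite prefix of a column of g:
    the approximation starts at 0 and alternates at most k times."""
    return column[0] == 0 and alternations(column) <= k
```

For example, the column `[0, 1, 0, 1]` alternates three times and so is not consistent with the $2$-c.e.\ bound, while `[0, 1, 1, 0]` is.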
We next construct a collection of graph machines $\{{\EM{\mathcal{M}}}_e\,:\, e\in{\EM{{\mbb{N}}}}\}$ that we will use in \cref{Lower bound on finite degree main result}.
\begin{lemma}
\label{Lower bound piece finite degree}
There is a finite lookup table ${\EM{\mc{F}}}$ and a collection, definable uniformly in $e$, of graph machines ${\EM{\mathcal{M}}}_e$ having edge coloring $E_e$ and common lookup table ${\EM{\mc{F}}}$ such that whenever $e \in {\EM{{\mbb{N}}}}$ satisfies
\begin{itemize}
\item $\{e\} \: {\EM{{\mbb{N}}}} \to \{0, 1\}$ is total, and
\item $\lim_{n \to \infty}\{e\}(n) = m_e$ exists,
\end{itemize}
then the following hold.
\begin{itemize}
\item ${\EM{\mc{G}}}_e$, the underlying graph of ${\EM{\mathcal{M}}}_e$, has only finitely many edges and each vertex is of degree at most 3.
\item If $f$ is a valid configuration of ${\EM{\mathcal{M}}}_e$ where all vertices are in the initial state, then
\begin{itemize}
\item if $f_{[1]}(0) = 0$, i.e., vertex $0$ displays $0$ in the configuration $f$,
then $\<{\EM{\mathcal{M}}}_e,f\>$ halts with every vertex displaying $0$, and
\item if $f_{[1]}(0) = 1$, i.e., vertex $0$ displays $1$ in the configuration $f$, then $\<{\EM{\mathcal{M}}}_e,f\>$ halts with vertex $0$ displaying $\lim_{n \to \infty}\{e\}(n) = m_e$ (and every other vertex displaying $0$).
\end{itemize}
\item If $|\{n\,:\, \{e\}(n) \neq \{e\}(n+1)\}| < k$, then ${\EM{\mathcal{M}}}_e$ halts on any starting configuration in at most
$(2k+4)$-many timesteps.
\end{itemize}
\end{lemma}
\begin{proof}
First we describe the graphs ${\EM{\mc{G}}}_e$. For notational convenience, write $\{e\}(-1) {\EM{\ :=\ }} 1 - \{e\}(0)$. The edges are determined by the following, for $n \ge 0$.
\begin{itemize}
\item $E_e(0, 1) = \{r\}$, and $E_e(2,0) = \{b\}$, and $E_e(1,2) = \{g\}$.
\item If $\{e\}(n) = \{e\}(n+1)$, then vertices $2n+1$ and $2n+2$ have degree $0$, i.e., there are no edges connecting them to any other vertices.
\item If $\{e\}(n) \neq \{e\}(n+1)$, then $E_e(2k+3, 2n+3) = E_e(2n+3, 2n+4) = E_e(2n+4, 2k+4) = \{g\}$, where $k$ is the ``most recent alternation'', i.e., the largest $k$ such that $-1 \le k < n$ and $\{e\}(k) \neq \{e\}(k+1)$.
\end{itemize}
We now define the lookup table ${\EM{\mc{F}}}$.
\begin{itemize}
\item There are two states: $s$ (the initial state) and $a$.
\item If a vertex displays $1$, then it sends an $r$-pulse and sets its display to $0$.
\item If a vertex receives an $r$-pulse or a $g$-pulse, then it sends a $b$-pulse and a $g$-pulse.
\item If a vertex receives a $b$-pulse, then it sets its state to $a$ and alternates its displayed symbol.
\end{itemize}
When ${\EM{\mathcal{M}}}_e$ is run, on the first step every vertex sets its displayed symbol to $0$. If vertex $0$
had initially displayed $0$, then this is all that happens. However, if vertex $0$ had initially displayed $1$, then vertex $0$ will also send out a $r$-pulse to vertex $1$. This is the only $r$-pulse that will ever be sent (as $0$ is the only vertex which is the source of an $r$-colored edge, and it will not send any other $r$-pulses).
At the second stage, this $r$-pulse can be thought of as being converted to a $g$-pulse. Further, $g$-pulses have the property that (1) they propagate forward along directed edges, splitting whenever possible, and (2) they turn into a $b$-pulse when being sent from vertex $2$ to vertex $0$. As such, the number of $b$-pulses that vertex $0$ receives is the number of vertices in ${\EM{\mathcal{M}}}_e$ with two out edges (i.e., the number of times a $g$-pulse \emph{splits} in two). However, by construction, the number of such vertices is the number of times that $\{e\}$ alternates values.
Hence the number of $b$-pulses that vertex $0$ receives is equal to the number of times that $\{e\}$ alternates values. But vertex $0$ alternates the symbols it displays exactly when it receives a $b$-pulse, and so when the graph reaches a halting configuration (in the sense that the subsequent configuration one timestep later is the same),
vertex $0$ will display $m_e$.
\end{proof}
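The counting at the heart of the proof can be sketched as follows. This computes only the number of $\mbf{b}$-pulses that vertex $0$ receives (which, by the argument above, equals the number of alternations of $\{e\}$, counted with the convention $\{e\}(-1) := 1 - \{e\}(0)$); it does not simulate the full pulse dynamics, and it assumes a finite prefix of $\{e\}$ containing all alternations.

```python
def b_pulses_received(e_prefix):
    """Number of b-pulses vertex 0 receives when activated.

    e_prefix is a finite prefix of {e}, assumed long enough to
    contain all alternations; the conventional value
    {e}(-1) := 1 - {e}(0) is prepended before counting.
    """
    seq = [1 - e_prefix[0]] + list(e_prefix)
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)
```

Vertex $0$ alternates its displayed symbol exactly this many times before the machine reaches a halting configuration.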
\begin{theorem}
\label{Lower bound on finite degree main result}
For every $X\: {\EM{{\mbb{N}}}} \to \{0,1\}$ such that $X \leq_{\mathrm{T}} \mbf{0}'$ there is a graph machine ${\EM{\mathcal{N}}}_X$ with lookup table ${\EM{\mc{F}}}$ and finitary underlying graph ${\EM{\mc{H}}}_X$ such that
\begin{itemize}
\item every vertex of ${\EM{\mc{H}}}_X$ has degree at most $3$,
\item $\{{\EM{\mathcal{N}}}_X\}$ is total and Turing equivalent to $X$, and
\item if $X$ is $k$-c.e.\
then $\{{\EM{\mathcal{N}}}_X\}$ halts in $(2k+4)$-many steps on any input.
\end{itemize}
In particular, ${\EM{\mathcal{N}}}_X$ runs in linear space.
\end{theorem}
\begin{proof}
Let $Y$ be a computable function such that $X(m) = \lim_{n \to \infty}Y(n, m)$ for all $m\in{\EM{{\mbb{N}}}}$, and such that
if $X$ is $k$-c.e.,
then $Y$ witnesses this fact. Let $y_m$ be a code such that
$\{y_m\}(n) = Y(n,m)$ for all $n\in{\EM{{\mbb{N}}}}$.
Recall the graphs
${\EM{\mc{G}}}_{e}$ (for $e\in{\EM{{\mbb{N}}}}$)
defined in Lemma~\ref{Lower bound piece finite degree},
and let ${\EM{\mc{H}}}_X$ be the disjoint union of
$\{{\EM{\mc{G}}}_{y_m}\,:\, m\in{\EM{{\mbb{N}}}}\}$
in the following sense:
\begin{itemize}
\item ${\EM{\mc{H}}}_X$ is a graph with underlying set ${\EM{{\mbb{N}}}}\times{\EM{{\mbb{N}}}}$.
\item For each $m \in {\EM{{\mbb{N}}}}$, let ${\EM{\mc{H}}}_{X,m}$ be the subgraph of ${\EM{\mc{H}}}_X$ consisting of elements whose second coordinate is $m$. Then
the map
defined by
$(n, m) \mapsto n$ for $n\in{\EM{{\mbb{N}}}}$
is an isomorphism from ${\EM{\mc{H}}}_{X,m}$
to ${\EM{\mc{G}}}_{y_m}$.
\item There are no edges between any elements of ${\EM{\mc{H}}}_{X,m}$ and ${\EM{\mc{H}}}_{X, m'}$
for distinct $m, m' \in{\EM{{\mbb{N}}}}$.
\end{itemize}
Note that ${\EM{\mc{H}}}_X$ is finitary (and in fact has just one label) by the construction of the graphs ${\EM{\mc{G}}}_{e}$, which have common lookup table ${\EM{\mc{F}}}$.
Suppose that $x \in \{0, 1\}^{{\EM{{\mbb{N}}}} \times {\EM{{\mbb{N}}}}}$ has only finitely many $1$'s. As earlier, let $\widehat{x}$ be the valid configuration satisfying
$\widehat{x}(\ell) = (\emptyset, x(\ell), s)$ for all $\ell \in {\EM{{\mbb{N}}}} \times {\EM{{\mbb{N}}}}$.
Then by Lemma~\ref{Lower bound piece finite degree}, $\<{\EM{\mathcal{N}}}_X, \widehat{x}\>$ halts and satisfies the following.
\begin{itemize}
\item For any $n, m \in {\EM{{\mbb{N}}}}$, the vertex
$(n+1, m)$ displays $0$.
\item For any $m \in {\EM{{\mbb{N}}}}$, if $x(0, m) = 0$ then $\<{\EM{\mathcal{N}}}_X, \widehat{x}\>$ halts with $0$ displayed at vertex $(0, m)$.
\item For any $m \in {\EM{{\mbb{N}}}}$, if $x(0, m) = 1$ then $\<{\EM{\mathcal{N}}}_X, \widehat{x}\>$ halts with $X(m)$ displayed at vertex $(0, m)$.
\end{itemize}
In particular, $\{{\EM{\mathcal{N}}}_X\}$ is a total function that is Turing equivalent to $X$.
Finally note that
given any connected subgraph $A$ on which the starting configuration is supported, the computation is completely determined by the $(2k+4)$-neighborhood of that subgraph.
Further, each vertex has degree at most $3$, and so the size of each such neighborhood is at most $3^{2k+4} \cdot |A|$.
Hence ${\EM{\mathcal{N}}}_X$ runs in linear space.
\end{proof}
\section{Graph-theoretic properties that do not affect computability}
\label{Other graph-theoretic properties}
We now show that the graph machines themselves can be taken to be efficiently computable without affecting the Turing degrees of the functions that can be computed via them.
We also show that, while we have made use of directed edges throughout the above constructions,
from a computability perspective this was just a notational convenience.
The arguments in this section will be sketched, and we refer the reader to a forthcoming extended version of this paper for detailed discussion.
\subsection{Efficiently computable graphs}
\label{eff-comp-graph-subsec}
So far we have considered how resource bounds interact with graph computability, though the graph machines themselves have remained arbitrary computable structures.
Here we show that requiring the graph machines to be efficiently computable has no
effect on the complexity functions that can be graph computed.
Cenzer and Remmel \cite{MR1130218} showed that an arbitrary computable relational structure is computably isomorphic to some polynomial-time structure. We base our result here on their key ideas.
\begin{definition}
A graph machine ${\EM{\mathfrak{M}}}$ is \defn{polynomial-time computable} if (a) its underlying
graph ${\EM{\mc{G}}}$ is polynomial-time computable, in the sense that
the set $G$ is a polynomial-time computable subset of ${\EM{{\mbb{N}}}}$, the
labels $L$ and colors $C$
are polynomial-time computable subsets of ${\EM{{\mbb{N}}}}$, and the functions $V$, $E$, and $\gamma$ are polynomial-time computable, and (b) the functions $S$, $\alpha$, and $T$ are polynomial-time computable, all with respect to a standard encoding of finite powersets.
\end{definition}
\begin{proposition}
For every computable graph machine ${\EM{\mathfrak{M}}}$, there is a computably isomorphic graph machine ${\EM{\mathfrak{M}}}^\circ$ that is polynomial-time computable.
\label{eff-comp}
\end{proposition}
\begin{proofsketch}
We will build the polynomial-time computable graph machine ${\EM{\mathfrak{M}}}^\circ$ along with its computable isomorphism to ${\EM{\mathfrak{M}}}$ simultaneously.
Suppose $\{a_i \,:\, i\in{\EM{{\mbb{N}}}}\}$ is a computable increasing enumeration of $G$.
We will define $\{b_i \,:\, i\in{\EM{{\mbb{N}}}}\}$ by induction so that the map $a_i \mapsto b_i$ is the desired isomorphism.
Suppose we have defined $\{b_i \,:\, i < n\}$. First
consider whether any new labels, colors, or states are needed to describe the relationship between $a_n$ and $\{a_i \,:\, i < n\}$. If there are such labels, colors, or states, we look at the transition table $T$ and consider the lengths of the minimal computations needed to calculate $T$ on these new labels, colors, and states, along with the ones previously identified.
Note that there can be at most finitely many new labels, colors, and states introduced at this stage, and so there is an upper bound on the length of these computations.
We then add corresponding new labels, colors, or states to the structure ${\EM{\mathfrak{M}}}^\circ$ where the natural numbers associated to these labels, colors, or states are large enough to ensure that the computation of $T^\circ$ remains polynomial-time computable.
Similarly, we consider the length of the computation needed to determine
$V(a_n)$, $E(a_i, a_n)$, $E(a_n, a_i)$, and $\gamma(a_n)$ for $i< n$ and choose a natural number to assign to $b_n$ which is large enough to ensure that
the corresponding
$V^\circ(b_n)$, $E^\circ(b_i, b_n)$, $E^\circ(b_n, b_i)$, and $\gamma^\circ(b_n)$ for $i< n$ are all polynomial-time computable.
One can verify that this yields the desired polynomial-time computable graph machine and computable isomorphism.
\end{proofsketch}
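The padding idea can be caricatured numerically: assign each new element a code at least as large as the cumulative work performed so far, so that deciding facts about the element takes time polynomial in the code itself. The following toy sketch (an assumption-laden stand-in, not the full Cenzer--Remmel construction) illustrates the choice of the codes $b_n$.

```python
def pad_codes(stage_costs):
    """Choose strictly increasing codes b_0 < b_1 < ... with b_n at
    least the cumulative computational cost through stage n, so that
    the work done so far is bounded by a polynomial in b_n."""
    codes, total, prev = [], 0, -1
    for cost in stage_costs:
        total += cost                # cumulative work so far
        b = max(prev + 1, total)     # large enough to absorb it
        codes.append(b)
        prev = b
    return codes
```

For instance, stage costs `[5, 2, 100]` yield the codes `[5, 7, 107]`: each code dominates the total work performed up to its stage.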
In particular, because computably isomorphic graph machines compute functions of the same Turing degrees,
in
\cref{Pointwise computable Turing degrees}
and
\cref{omega-jump}
we may obtain the stated graph computable functions via graph machines whose underlying graphs are polynomial-time computable.
Likewise, in \cref{Lower bound on finite degree main result}, we may take the underlying graph ${\EM{\mc{H}}}_X$ to be polynomial-time computable.
\subsection{Symmetric graphs}
\label{symmetric-graphs}
\begin{definition}
The graph ${\EM{\mc{G}}}$
is \defn{symmetric} if
$E(v,w) = E(w,v)$
for all $v, w \in G$.
\end{definition}
\begin{proposition}
Every graph computable function is ${\EM{\mc{G}}}_S$-comput\-able for some symmetric graph ${\EM{\mc{G}}}_S$.
\label{symmetric-ok}
\end{proposition}
\begin{proofsketch}
Given a computable graph ${\EM{\mc{G}}}$ and a ${\EM{\mc{G}}}$-machine ${\EM{\mathfrak{M}}}$,
define ${\EM{\mc{G}}}_S$ to be the \emph{symmetric} graph with underlying set $G_S = G \cup \{e_{(v,w)}, e^*_{(v, w)} \,:\, v, w\in G \}$, set of colors $C$, and edge coloring $E_S$ defined such that
\begin{align*}
E(v,w) &=
E_S(v, e_{(v,w)}) = E_S(e_{(v,w)}, e^*_{(v, w)}) \\
&= E_S(e^*_{(v,w)}, w) = E_S(v, e^*_{(v,w)})
\end{align*}
for all $v, w \in G$.
In particular, whenever $c \in E(v,w)$ we have that (i) $e^*_{(v,w)}$ is connected to $w$ via an edge of color $c$, and (ii) there are paths of length 1 and length 2 (of color $c$) connecting $v$ and $e^*_{(v, w)}$.
We now define ${\EM{\mathfrak{M}}}_S$ to be the ${\EM{\mc{G}}}_S$-machine that simulates ${\EM{\mathfrak{M}}}$ as follows. Every three timesteps of ${\EM{\mathfrak{M}}}_S$ corresponds to one timestep of ${\EM{\mathfrak{M}}}$. State transitions in ${\EM{\mathfrak{M}}}_S$ between elements of $G$ are the same as in ${\EM{\mathfrak{M}}}$ (except that each transition takes three timesteps to be processed). However, any time that
a vertex $v$ sends a $c$-pulse to $w$
within ${\EM{\mathfrak{M}}}$,
the corresponding pulse in ${\EM{\mathfrak{M}}}_S$ instead goes from $v$ to both $e_{(v,w)}$ and $e^*_{(v,w)}$. When $e_{(v,w)}$ receives a $c$-pulse, it sends a $c$-pulse to $e^*_{(v,w)}$. However, $e^*_{(v,w)}$ only sends a $c$-pulse to $w$ after it has itself received $c$-pulses in \emph{two} successive timesteps (i.e., first one from $v$, and then one from $e_{(v,w)}$).
In this way, in ${\EM{\mathfrak{M}}}_S$ when $v$ sends out a $c$-pulse, this causes (three timesteps later) a $c$-pulse to reach all vertices $w \in G$ for which $c\in E(v,w)$. However, even though in ${\EM{\mc{G}}}_S$ the relation $E_S$ is symmetric, when $v$ sends out a $c$-pulse it will not reach any $w \in G$ with $c \in E(w,v)$ but $c \notin E(v,w)$.
Hence the behavior of ${\EM{\mathfrak{M}}}_S$ within $G$ simulates the (slowed-down) behavior of ${\EM{\mathfrak{M}}}$.
\end{proofsketch}
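The edge gadget can be sketched in Python. The tagged-tuple encoding of the gadget vertices $e_{(v,w)}$ and $e^*_{(v,w)}$ is an assumption made for readability; the sketch builds only the symmetric edge coloring $E_S$, not the slowed-down machine ${\EM{\mathfrak{M}}}_S$.

```python
def symmetrize(E):
    """Symmetric edge coloring E_S from a directed edge coloring E.

    E maps ordered pairs (v, w) to sets of colors; gadget vertices
    are encoded as tagged tuples ('e', v, w) and ('e*', v, w).
    """
    ES = {}

    def add(a, b, colors):
        # record the edge in both directions, so E_S is symmetric
        ES.setdefault((a, b), set()).update(colors)
        ES.setdefault((b, a), set()).update(colors)

    for (v, w), colors in E.items():
        mid, out = ('e', v, w), ('e*', v, w)
        add(v, mid, colors)    # E_S(v, e_(v,w))
        add(mid, out, colors)  # E_S(e_(v,w), e*_(v,w))
        add(out, w, colors)    # E_S(e*_(v,w), w)
        add(v, out, colors)    # E_S(v, e*_(v,w)): the length-1 path
    return ES
```

Each original $c$-colored edge from $v$ to $w$ thus yields paths of length $1$ and $2$ from $v$ to $e^*_{(v,w)}$, which is what lets $e^*_{(v,w)}$ detect the direction of the original edge by waiting for pulses on two successive timesteps.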
\section{Representations of other computational models via graph machines}
\label{representations-sec}
We now describe several other models of computation, mainly on graphs, and describe how they can be viewed as special cases of graph machines. These examples provide further evidence for graph machines being a universal model of computation on graphs.
\subsection{Ordinary Turing machines}
\label{otm-subsec}
We begin by showing how to simulate an ordinary Turing machine by a graph Turing machine.
Let $M$ be a Turing machine (with a finite set of states and finite transition function, as usual).
Then there is an equivalent graph machine whose underlying graph
represents a Turing machine tape, as we now describe.
The underlying graph is a one-sided chain with underlying set $\{-1\} \cup {\EM{{\mbb{N}}}}$.
Every vertex in ${\EM{{\mbb{N}}}}$ has degree $2$ (with edges both to and from its predecessor on the left and successor on the right), while $-1$ is connected only to $0$.
There are two labels, one of which holds of all of ${\EM{{\mbb{N}}}}$ and the other of which holds at $-1$.
Any vertex of ${\EM{{\mbb{N}}}}$ which is in the initial state and has not received any pulse does nothing. When the vertex $-1$ is in the initial state and displays $1$, it sends a pulse to $0$ to signal the creation of the read/write head ``above $0$''.
The presence of a head above a vertex is encoded via the state of that vertex.
The lookup table of the graph machine will ensure that there is a unique head, above precisely one vertex, at each timestep following any starting configuration with $1$ displayed at $-1$.
At any subsequent timestep, only the vertex with the head will send out pulses.
Between any two adjacent vertices in ${\EM{{\mbb{N}}}}$, there are sufficiently many edge colors to transmit the current state of $M$, and so at each timestep, the vertex with the head
uses the transition function of $M$ to determine the location of the head at the next time step, and (if the head needs to move according to $M$) transmits the appropriate signal (containing the current state of $M$ as well as signaling that the head is above it now), to its neighbor, and accordingly adjusts its own state. If the head does not need to move, the vertex with the head merely sets its own state and displayed symbol according to the transition function of $M$.
One can show that not only does this embedding yield a graph machine that computes functions of the same Turing degree as
the original Turing machine $M$, but that the output of this graph machine
produces (on ${\EM{{\mbb{N}}}}$) the exact same
function as $M$, moreover via a weak bisimulation (in which the function is computed with
merely a small linear time overhead).
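As a toy, executable illustration of this head-passing mechanism (a sketch of our own, not the formal graph-machine construction above; the machine, its states, and the helper \texttt{run} are all hypothetical), consider the following simulation, in which the head and the current state of $M$ are handed from a cell to its neighbour, playing the role of the pulse sent along the edge colors:

```python
# Toy sketch: the tape is the chain of vertices; the head's position and the
# state of M are handed from a vertex to its neighbour, standing in for the
# edge-colored pulse of the graph machine. The hypothetical machine below
# appends a 1 to a unary string of 1s.

# delta maps (state, read symbol) -> (written symbol, head move, next state)
delta = {("scan", 1): (1, +1, "scan"),   # walk right over the block of 1s
         ("scan", 0): (1, 0, "halt")}    # write a 1 at the first 0 and halt

def run(tape, state="scan", head=0, max_steps=1000):
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = delta[(state, tape[head])]
        tape[head] = write        # the vertex adjusts its displayed symbol
        head += move              # the "pulse" passes the head to a neighbour
    return tape, state
```

Running \texttt{run([1, 1, 1, 0, 0])} walks the head right over the block of $1$s and appends one more, returning the tape \texttt{[1, 1, 1, 1, 0]} in the halting state.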
The doubly-infinite one-dimensional read/write tape of an ordinary Turing machine has cells indexed by ${\EM{{\mbb{Z}}}}$, the free group on one generator, and in each timestep the head moves according to the generator or its inverse. This interpretation of a Turing machine as a ${\EM{{\mbb{Z}}}}$-machine has been generalized to $H$-machines for arbitrary finitely generated groups $H$ by \cite{Aubrun2016}, and our simulation above extends straightforwardly to this setting as well.
One might next consider how cleanly one might embed various extensions of Turing machines where the tape is replaced by a graph, such as Kolmogorov--Uspensky machines \cite{KU}, Knuth's pointer machines \cite[pp.\ 462--463]{MR0286317}, and Sch\"onhage's storage modification machines \cite{MR584506}. For a further discussion of these and their relation to sequential abstract state machines, see \cite{gurevich1993kolmogorov} and \cite{MR1858823}.
\subsection{Cellular automata}
\label{cellular-subsec}
We now consider cellular automata; for background,
see, e.g., the book \cite{toffoli1987cellular}.
Cellular automata (which we take to be always
finite-dimensional, finite-radius
and with finitely-many states) can be naturally simulated by graph machines, moreover of constant degree. In particular,
\cref{Constant degree implies computable}
applies to this embedding.
Not only is the evolution of every such cellular automaton computable (moreover via this embedding as a graph machine), but there are particular automata whose evolution encodes the behavior of a universal Turing machine
(\cite{MR2211290} and \cite{MR2493991}).
Several researchers have also considered the possibility of expressing intermediate Turing degrees via this evolution (\cite{BaldwinReview}, \cite{CohnReview}, and \cite{MR1964811}). Analogously, one might ask which Turing degrees can be expressed in the evolution of graph machines.
We now describe this embedding.
Cells of the automaton are taken to be the vertices of the graph, and each cell is connected to its ``neighbors'' (other cells within the given radius)
by a collection of edges (of the graph) whose labels encode their relative position (e.g., ``1 to the left of'') and all possible cellular automaton states. (In particular, every vertex has the same finite degree.)
The displayed symbol of each vertex encodes the cellular automaton state of that cell.
The rule of the cellular automaton is encoded in the lookup table so as to achieve the following:
At the beginning of each timestep, each cell announces its state by virtue of its vertex sending
out a pulse to the vertices of the neighboring cells, along edges whose labels encode this state.
(If during some timestep a vertex does not receive a pulse from the direction corresponding to a neighboring cell, then it assumes that the vertex corresponding to the neighboring cell is in its initial state and is displaying $0$.)
Each vertex then updates its displayed symbol based on how the corresponding cell should update its state, based on the states of its neighbors, according to the rule of the original cellular automaton.
Note that this encoding produces a bisimulation between the original cellular automaton and the graph machine built based on it.
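This announce-then-update protocol can be sketched in a few executable lines (an illustration of our own, using the elementary automaton commonly called rule 90 on a finite ring in place of a general automaton):

```python
def step(states):
    """One synchronous update of rule 90 (new state = XOR of the two
    neighbours) on a ring, organized as in the embedding above: every
    vertex first announces its state to its neighbours, and only then
    does every vertex update its displayed symbol from the received pulses."""
    n = len(states)
    # phase 1: pulses sent along the edges labelled "left of" / "right of"
    received = [(states[(i - 1) % n], states[(i + 1) % n]) for i in range(n)]
    # phase 2: each vertex updates its displayed symbol from the pulses
    return [left ^ right for (left, right) in received]
```

The two phases mirror the lookup-table behavior described above: no vertex reads its neighbours' states directly; it only reacts to the pulses it received in the previous phase.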
\subsection{Parallel graph dynamical systems}
\label{gds-subsec}
Parallel graph dynamical systems \cite{MR3332130} can be viewed as essentially equivalent to the finite case of graph Turing machines, as we now describe.
Finite cellular automata
can also be viewed as a special case of parallel graph dynamical systems,
as can finite boolean networks,
as noted in \cite[\S2.2]{MR3332130}.
For more on parallel graph dynamical systems, see \cite{MR3339901}, \cite{MR3332130}, and \cite{MR2078971}.
A parallel graph dynamical system specifies a finite (possibly directed) graph and an evolution operator (determining how a given tuple of states for the vertices of the graph transitions into a new tuple). The evolution operator is required to be local, in the sense of determining the new state of a vertex given only its state and those of its adjacencies.
We may embed a parallel graph dynamical system as a graph machine similarly to how we embedded cellular automata above.
However, because different vertices have non-isomorphic neighborhoods, we can no longer label edges connecting a vertex to a given neighbor based on the ``direction'' of this neighbor. We therefore require a different collection of label types for every vertex of the graph, which are used to signal the source of the edge.
This embedding produces a bisimulation between the given parallel graph dynamical system and the graph machine built from it.
Note that this embedding also works for the natural infinite extension of parallel graph dynamical systems, where each vertex is required to have finite in-degree. This restriction on in-degree ensures that each vertex of the corresponding graph machine has only finitely many edge colors, even though the set of all edge colors may be infinite.
In contrast, it is not immediately clear how best to encode an arbitrary (parallel) abstract state machine \cite{MR2002271} as a graph Turing machine (due to the higher arity relations of the ASM).
\section{Possible extensions}
\label{Possible-extensions}
We now describe several possible extensions of graph Turing machines that may be worth studying.
It would be interesting to develop a non-deterministic version of graph Turing machines, where certain pulses are allowed, but not required, to be sent.
We expect that such a framework would naturally encompass an infinitary version of Petri nets \cite{DBLP:journals/csur/Peterson77}, as well as the machines described by
\cite{MR3163227}.
Another interesting extension would be a randomized version of graph Turing machines, where the pulses fire independently according to a specified probability distribution. In this setting, one could then study the joint distribution of overall behavior. Randomized graph Turing machines might encompass dynamical Bayesian networks \cite{MR2704368} and other notions of probabilistic computation on a graph.
Our framework involves underlying graphs that are specified before the computation occurs. Is there a natural way to extend our framework to allow for new nodes to be added, destroyed, duplicated, or merged? Such an extension might naturally encompass infinitary generalizations of
models of concurrency and parallelism based on graph rewrite rules, such as
interaction nets \cite{DBLP:conf/popl/Lafont90} and bigraphs \cite{DBLP:books/daglib/0022395}.
\section{Open questions}
\label{Open Questions}
We conclude with several open questions.
\begin{itemize}
\item Does every Turing degree below $\mbf{0}^{(\EM{\omega})}$ contain a graph computable function? So far, we merely know that every arithmetical degree contains a graph computable function --- but there are Turing degrees below $\mbf{0}^{(\EM{\omega})}$ that are not below $\mbf{0}^{(n)}$ for any $n\in{\EM{{\mbb{N}}}}$.
\item Are there graph computable functions that are not graph computable in constant time? In particular, can $\mbf{0}^{(\EM{\omega})}$ be graph computed in constant time? (The construction in \cref{omega-jump} is linear-time.)
\item Are there graph computable functions that are not graph computable by a graph machine with finitary underlying graph?
\end{itemize}
\section*{Acknowledgements}
The authors would like to thank Tomislav Petrovi\'c, Linda Brown Westrick, and the anonymous referees of earlier versions for helpful comments.
\vspace*{20pt}
\bibliographystyle{amsnomr}
The purpose of the present paper is to derive a non-oscillatory and convergent numerical method for a large class of nonlocal continuity equations of the form
\begin{equation}\label{eq:conslaw}
\begin{split}
\partial_t \mu + \nabla\cdot\big(V[\mu]\mu\big) = 0 &\qquad x\in\xset,\ t>0 \\
\mu(x,0) = \mu_0(x) & \qquad x\in\xset
\end{split}
\end{equation}
on some domain $\xset \subset \mathbb{R}^d$. Here, $\mu(x,t)$ is interpreted as a density of particles at time $t$, and $V[\mu]:\xset\to\mathbb{R}^d$ depends nonlocally on $\mu$, typically the convolution of $\mu$ with some kernel. Conservation laws of this form arise in a large variety of applications, such as the simulation of crowd dynamics, microbiology, flocking of birds, traffic flow and more \cite{Buck,Kuramoto,Winfree}. One particular instance of \eqref{eq:conslaw} which has been studied extensively is the so-called \emph{Kuramoto--Sakaguchi equation}, also called the \emph{kinetic Kuramoto equation},
\begin{equation}\label{eq:ks}
\begin{split}
\partial_{t}f + \partial_{\theta}(\omega[f]f) = 0 &\qquad (\theta,\Omega)\in\mathbb{T}\times\mathbb{R},\ t>0 \\
f(\theta, \Omega, 0) = f_{0}(\theta, \Omega) &\qquad (\theta,\Omega)\in\mathbb{T}\times\mathbb{R}
\end{split}
\end{equation}
where $\mathbb{T} = \mathbb{R}/2\pi$ is the one-dimensional torus and
\[
\omega[f](\theta, \Omega, t):=\Omega - K\int_{\mathbb{T}}\mbox{sin}(\theta-\theta^*)\rho(\theta^*,t) \,d\theta^*, \qquad \rho(\theta,t) := \int_\mathbb{R} f(\theta, \Omega, t)\, d\Omega
\]
This equation arises as the mean-field limit of the \emph{Kuramoto equation}, a system of ordinary differential equations for coupled oscillators which was first studied by Winfree and Kuramoto \cite{KuramotoI,Kuramoto,Winfree}. The unknowns in the Kuramoto equation are the phase $\theta\in\mathbb{T}$ and the natural frequency $\Omega\in\mathbb{R}$ of each oscillator, and the interaction between the oscillators depends on the coupling strength $K>0$ and the relative difference in phase $\theta-\theta^*$ between pairs of oscillators. The mean-field limit as the number of oscillators goes to infinity is a probability distribution $f=f(\theta,\Omega,t)$ obeying the above nonlocal continuity equation; see e.g.~\cite{Dobrushin1979,Neunzert,Chiba}. See also the recent paper \cite{Carrillo} for some qualitative properties of \eqref{eq:ks}.
We will also let $g(\Omega,t):=\int_\mathbb{T} f(\theta,\Omega,t)\,d\theta$ denote the distribution function for natural frequencies. From \eqref{eq:ks} it is easily seen that $g$ is constant in time, $g(\Omega,t)\equiv g(\Omega)$. Here and elsewhere we will assume that $f_0$ (and hence also $g$) is compactly supported.
The above equations are valid for oscillators with non-identical natural frequencies. The situation is somewhat simpler in the case of identical oscillators, i.e.\ oscillators whose natural frequencies coincide. This corresponds to $g$ being a Dirac measure, and without loss of generality we can assume that $g=\delta$, the Dirac measure located at $\Omega = 0$. Consequently,
\[
f(\theta, \Omega, t) = \rho(\theta, t) \delta(\Omega), \qquad \omega[f]f = L[\rho]\rho, \qquad L[\rho](\theta,t):=-K\int_{\mathbb{T}}\mbox{sin}(\theta - \theta^*) \rho(\theta^*,t) \,d\theta^*.
\]
Therefore, \eqref{eq:ks} reduces to the following equation for $\rho$:
\begin{equation}\label{eq:ksident}
\begin{split}
\partial_t\rho + \partial_\theta(L[\rho]\rho) = 0 \\
\rho(\theta, 0) = \rho_{0}(\theta).
\end{split}
\end{equation}
Both equation \eqref{eq:ks} and \eqref{eq:ksident} are instances of general nonlocal conservation laws of the form \eqref{eq:conslaw}.
The purpose of the present paper is to derive and analyze a finite volume numerical method for a large class of equations of the form \eqref{eq:conslaw} (including the kinetic Kuramoto equations \eqref{eq:ks}, \eqref{eq:ksident}). In particular, we prove that the scheme converges strongly to the unique weak solution $\mu$ whenever $\mu_0\in L^1\cap \textrm{BV}$, and in all other cases converges in the sense of measures to the so-called \emph{measure-valued} solution $\mu$. We emphasize that the stability and convergence properties of the scheme are valid even when $\mu_0$ (and hence also the exact solution) has point-mass singularities.
In the recent works \cite{Amadori,AmadoriI}, Amadori et al.~designed and analyzed a front-tracking numerical method for the Kuramoto--Sakaguchi equation for identical oscillators \eqref{eq:ksident}. In contrast to their method, our finite volume method works in any number of dimensions, does not require any regularity on the initial data, and does not impose ``entropy conditions'' on solutions. We also mention the work by Crippa and L\'ecureux-Mercier \cite{CriMer13} where well-posedness of a \emph{system} of nonlocal continuity equations of the form \eqref{eq:conslaw} is established. The extension of our finite volume scheme to such systems should be straightforward.
\subsection{Nonlocal conservation laws}
Nonlocal conservation laws of the form \eqref{eq:conslaw} were first studied by Neunzert \cite{Neunzert} and Dobrushin \cite{Dobrushin1979}. Using techniques which by now are standard, they showed existence and uniqueness of solutions of \eqref{eq:conslaw}. We will consider solutions which are weakly continuous maps $\mu : \mathbb{R}_+ \to {\EuScript{M}}(\xset)$, mapping time $t$ to probability measures $\mu_t\in {\EuScript{P}}(\xset)$. Here, ${\EuScript{M}}(\xset) = (C_0(\xset))^*$ is the space of bounded Radon measures on $\xset$, ${\EuScript{P}}(\xset)$ is the subset of probability measures, and by weakly continuous we mean that $t \mapsto \ip{\mu_t}{\phi} := \int_\xset \phi(x)\,d\mu_t(x)$ is continuous for every $\phi\in C_0(\xset)$. We define the \emph{1-Wasserstein} metric
\[
d_W(\nu_1,\nu_2):=\sup\left\{\int_\xset \psi(x)\,d(\nu_2-\nu_1)(x)\,:\, \psi\in C_b(\xset),\ \|\psi\|_{{\textrm{Lip}}(\xset)}\leq 1 \right\}, \qquad \nu_1,\nu_2\in{\EuScript{P}}(\xset).
\]
It can be shown that $d_W$ metrices the topology of weak (or \emph{narrow}) convergence in ${\EuScript{P}}_1(\xset)$, the set of probability measures $\nu$ with finite first moment, $\int_\xset |x|\,d\nu(x)<\infty$ (see e.g.~\cite{Villani}).
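For measures on the real line, $d_W$ admits the classical representation $d_W(\nu_1,\nu_2)=\int_{\mathbb{R}}|F_{\nu_1}(x)-F_{\nu_2}(x)|\,dx$ in terms of the cumulative distribution functions. The following sketch (our own illustration; the grid and weights are hypothetical inputs) evaluates this formula for two discrete measures supported on a common sorted grid:

```python
def wasserstein_1d(xs, ws1, ws2):
    """1-Wasserstein distance between two discrete probability measures
    with weights ws1, ws2 on the sorted grid xs, using the 1D identity
    d_W(mu, nu) = integral of |F_mu(x) - F_nu(x)| dx."""
    d, F1, F2 = 0.0, 0.0, 0.0
    for i in range(len(xs) - 1):
        F1 += ws1[i]                      # running CDF of the first measure
        F2 += ws2[i]                      # running CDF of the second measure
        d += abs(F1 - F2) * (xs[i + 1] - xs[i])
    return d
```

For instance, the distance between the Dirac measures $\delta_0$ and $\delta_1$ comes out as $1$, the cost of transporting unit mass a unit distance.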
\begin{definition}
$\mu \in C_w(\mathbb{R}_+; {\EuScript{M}}(\xset))$ is a \emph{measure-valued solution} of \eqref{eq:conslaw} if it satisfies \eqref{eq:conslaw} in the sense of distributions, i.e.
\begin{equation}\label{eq:measvalsoln}
\int_0^\infty\int_\xset \partial_t\phi(x,t) + V[\mu_t](x)\cdot\nabla\phi(x,t)\,d\mu_t(x)\,dt + \int_\xset \phi(x,0)\,d\mu_0(x) = 0
\end{equation}
for every $\phi\in C_c^\infty(\xset\times\mathbb{R}_+)$.
\end{definition}
Since $\mu$ is assumed to be weakly continuous in time, one can show that $\mu$ is a measure valued solution \emph{if and only if}
\begin{equation}\label{eq:weaksolndef2}
\ip{\mu_t}{\psi}-\ip{\mu_s}{\psi} = \int_{s}^{t}\int_\xset V[\mu_\tau](x)\cdot\nabla\psi(x)\,d\mu_\tau(x)\,d\tau
\end{equation}
for every $0\leq s<t$ and every $\psi\in C_c^\infty(\xset)$. We say that $\mu$ is a \emph{weak solution} of \eqref{eq:conslaw} if it is a measure-valued solution which is absolutely continuous with respect to Lebesgue measure.
We will henceforth assume that $\xset$ is of the form $\xset = \mathbb{K}_1\times\cdots\times\mathbb{K}_d$, where each $\mathbb{K}_i$ is either the torus $\mathbb{T}$ or the whole real line $\mathbb{R}$. For the well-posedness of \eqref{eq:conslaw} and the convergence of our numerical scheme, we also need some of the following assumptions on $V$:
\begin{enumerate}[label=(A\arabic*)]
\item\label{cond:linfbound} $\exists\ C_1>0$ such that $\|V[\nu]\|_{L^\infty(\xset)} \leq C_1$ for all $\nu\in{\EuScript{P}}(\xset)$
\item\label{cond:spacelipschitz} $\exists\ C_2>0$ such that $|V[\nu](x)-V[\nu](y)| \leq C_2|x-y|$ for all $x,y\in\xset$ and $\nu\in{\EuScript{P}}(\xset)$
\item\label{cond:lipschitz} $\exists\ C_3>0$ such that $\|V[\nu_1]-V[\nu_2]\|_{L^\infty(\xset)} \leq C_3d_W(\nu_1,\nu_2)$ for all $\nu_1,\nu_2\in{\EuScript{P}}(\xset)$
\item\label{cond:c2bound} $\exists\ C_4>0$ such that $\big\|\nabla V[\nu]\big\|_{{\textrm{Lip}}(\xset)} \leq C_4$ for all $\nu\in{\EuScript{P}}(\xset)$.
\end{enumerate}
The main well-posedness result for the nonlocal conservation law is the following.
\begin{theorem}[Neunzert \cite{Neunzert}, Dobrushin \cite{Dobrushin1979}]\label{thm:conslaw}
Let $\mu_0\in{\EuScript{P}}_1(\xset)$ and let $V$ satisfy assumptions \ref{cond:linfbound}, \ref{cond:spacelipschitz}, \ref{cond:lipschitz}. Then there exists a unique measure-valued solution of \eqref{eq:conslaw}. This solution is Lipschitz in time, $d_W(\mu_t,\mu_s)\leq C_1|t-s|$. If $\mu_0 \in L^1(\xset)$ then $\mu_t\in L^1(\xset)$ for all $t>0$.
\end{theorem}
\begin{remark}
It is easy to see that both kinetic Kuramoto equations \eqref{eq:ks} and \eqref{eq:ksident} satisfy assumptions \ref{cond:linfbound}--\ref{cond:c2bound}, and are therefore well-posed by Theorem \ref{thm:conslaw}. Indeed, in the case of \eqref{eq:ks} we let $d=2$, $\xset = \mathbb{T}\times\mathbb{R}$ and $V[\nu] = \big(V^1[\nu],V^2[\nu]\big)$, where $V^2[\nu]\equiv 0$ and
\[
V^1[\nu](\theta,\Omega) = \begin{cases}
\Omega - K\int_\xset \sin(\theta-\theta^*)\,d\nu(\theta^*,\Omega^*) & \text{for } \Omega\in\supp g \\
0 & \text{otherwise.}
\end{cases}
\]
We can then choose $C_1 = K + \max\{|\Omega| : \Omega\in\supp g\}$, $C_2=1+K$ and $C_3=C_4=K$. In the case of \eqref{eq:ksident} we let $d=1$, $\xset=\mathbb{T}$, $V=L$ and $C_1=C_2=C_3=C_4=K$.
\end{remark}
``Kinetic'' PDEs \eqref{eq:conslaw} under our assumptions (A1)--(A3) are rather well-behaved under approximations. For instance, in a \emph{particle approximation} one approximates the solution as a convex combination of Dirac measures, $\mu_t^M = \sum_{i=1}^M \delta_{x_i(t)}\mu_i$, where $x_i(t)$ is the position of the $i$th particle and $\mu_i\in(0,\infty)$ is its mass. It is straightforward to see that $\mu^M$ is in fact a measure-valued solution of \eqref{eq:conslaw} provided $x_i(t)$ satisfy the system of ODEs
\begin{equation}\label{eq:particleapprox}
\frac{dx_i}{dt}(t) = V[\mu^M](x_i(t)).
\end{equation}
Assumptions \ref{cond:linfbound}, \ref{cond:spacelipschitz} guarantee that this system has a unique solution. Taking an approximating sequence $\mu^M_0$ converging weakly to $\mu_0$, assumption \ref{cond:lipschitz} guarantees that the limit is a measure-valued solution. Thus, if one can solve the system of ODEs \eqref{eq:particleapprox} then the question of convergence boils down to the approximation of the initial data by Dirac measures \cite{Neunzert,Dobrushin1979}.
Although the dynamical system \eqref{eq:particleapprox} can be studied qualitatively in certain simple cases, it is computationally infeasible to solve \eqref{eq:particleapprox} in more realistic settings. In the next section we proceed to construct a simple, computationally efficient numerical approximation to the nonlocal PDE \eqref{eq:conslaw}.
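For the Kuramoto velocity field with equal masses $\mu_i=1/M$, the particle system \eqref{eq:particleapprox} reduces to $\dot\theta_i=\Omega_i-\frac{K}{M}\sum_{j}\sin(\theta_i-\theta_j)$. A minimal forward-Euler sketch (our own illustration; any standard ODE integrator could be substituted):

```python
import math

def kuramoto_step(theta, omega, K, dt):
    """One forward-Euler step of the Kuramoto particle approximation;
    every particle carries equal mass 1/M."""
    M = len(theta)
    def V(i):
        # velocity field V[mu^M] evaluated at particle i
        coupling = sum(math.sin(theta[i] - theta[j]) for j in range(M)) / M
        return omega[i] - K * coupling
    return [(theta[i] + dt * V(i)) % (2 * math.pi) for i in range(M)]
```

For identical oscillators ($\Omega_i\equiv0$) and $K>0$, two nearby particles contract toward each other, consistent with the synchronization behavior associated with \eqref{eq:ksident}.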
\section{The Lax--Friedrichs scheme}
\subsection{Derivation of the method}
For the sake of simplicity we derive the method in one space dimension with either $\xset=\mathbb{R}$ or $\xset=\mathbb{T}$, and then simply state the method in multiple space dimensions (Section \ref{sec:multid}). We start by deriving a \emph{staggered} version of the method but---again for the sake of simplicity---we will only analyze the unstaggered version of this method.
Consider a mesh $\ldots < x_{i-\hf} < x_{i+\hf} < \ldots$, where $i$ runs over $\iset:=\mathbb{Z}$ in the unbounded case $\xset=\mathbb{R}$, or over some finite set $\iset:=\{1,\dots,N_x\}$ in the periodic case $\xset=\mathbb{T}$, and such that $\bigcup_i [x_{i-\hf},x_{i+\hf})=\xset$. For simplicity we assume a uniform mesh, $x_{i+\hf}-x_{i-\hf}\equiv {\Delta x}$. We will denote $x_i:=\frac{x_{i-\hf}+x_{i+\hf}}{2}$. Let $0=t^0<t^1<\dots<t^N=T$ be a partition of the time interval $[0,T]$ with uniform step size $t^{n+1}-t^n\equiv{\Delta t}$. Assuming that we are given an approximate solution $\mu^n=\sum_i \delta_{x_i}\mu_i^n$ at time $t^n$, we compute an approximation $\mu^{n+1}$ at $t=t^{n+1}$ as follows. Let $\mu_t$ be the exact solution of
\begin{equation}\label{eq:godunov}
\begin{split}
\partial_t \mu + \partial_x\big(V[\mu]\mu\big) = 0 &\qquad t^n<t<t^{n+1}\\
\mu_{t^n} = \mu^n.
\end{split}
\end{equation}
Define $\mu_{i+\hf}^{n+1}:= \ip{\mu_{t^{n+1}}}{\psi_{i+\hf}}$, where $\psi_{i+\hf}$ are the usual ``witch's hat'' finite element basis functions,
\[
\psi_{i+\hf}(x) := \begin{cases}
\frac{x-x_{i-\hf}}{{\Delta x}} & x_{i-\hf}\leq x<x_{i+\hf} \\
\frac{x_{i+\thf}-x}{{\Delta x}} & x_{i+\hf}\leq x<x_{i+\thf} \\
0 & \text{else,}
\end{cases}
\]
and the index $i$ is taken over all $i\in\iset$ if $n$ is even and over $i\in\iset-{\unitfrac{1}{2}}$ if $n$ is odd. The approximation $\mu^{n+1}$ at time $t^{n+1}$ is then defined to be the projected solution $\mu^{n+1} := \sum_i \mu_{i+\hf}^{n+1}\delta_{x_{i+\hf}}$.
We can derive a simplified expression of $\mu_{i+\hf}^{n+1}$ as follows. Since the initial data $\mu^n$ in \eqref{eq:godunov} is a convex combination of Dirac measures, the solution of \eqref{eq:godunov} can be written as $\mu_t = \sum_i \delta_{x_i(t)}\mu_i^n$, where $x_i$ solve the system of ODEs \eqref{eq:particleapprox}. If we assume the \emph{CFL condition}
\begin{equation}\label{eq:CFLstagg}
\frac{{\Delta t}}{{\Delta x}}\big\|V[\mu]\big\|_{L^\infty(\xset)} \leq \frac{1}{2}
\end{equation}
then the particle with position $x_i(t)$ stays in the interval $(x_{i-\hf},x_{i+\hf})$ for all $t\in[t^n,t^{n+1})$. In particular, the atoms $\delta_{x_i(t)}$ in $\mu_t$ stay away from the kinks in $\psi_{i+\hf}$, and hence we may use the (non-differentiable) function $\psi_{i+\hf}$ as a test function in the weak formulation \eqref{eq:weaksolndef2}. Using the fact that $\ip{\mu^n}{\psi_{i+\hf}} = \frac{\mu_i^n+\mu_{i+1}^n}{2}$, we obtain
\begin{align*}
\mu_{i+\hf}^{n+1} &= \frac{\mu_i^n+\mu_{i+1}^n}{2} + \int_{t^n}^{t^{n+1}}\int_\mathbb{R} V[\mu_t](x)\partial_x\psi_{i+\hf}(x)\,d\mu_t(x)dt \\
&= \frac{\mu_i^n+\mu_{i+1}^n}{2} - \frac{1}{{\Delta x}}\int_{t^n}^{t^{n+1}}V[\mu_t](x_{i+1}(t))\mu_{i+1}^n - V[\mu_t](x_{i}(t))\mu_{i}^n\,dt.
\end{align*}
Approximating $V[\mu_t](x_{i}(t)) \approx V[\mu^n](x_{i})$ yields our final scheme,
\begin{equation}\label{eq:stagglaxfr}
\mu_{i+\hf}^{n+1} = \dfrac{\mu_i^n+\mu_{i+1}^n}{2} - {\Delta t}\dfrac{V[\mu^n]_{i+1}\mu_{i+1}^n - V[\mu^n]_i\mu_{i}^n}{{\Delta x}}
\end{equation}
(where we denote $V[\mu^n]_i := V[\mu^n](x_i)$). The initial data is set as $\mu_i^0 = \ip{\mu_0}{\psi_i}$. We refer to \eqref{eq:stagglaxfr} as the \emph{staggered Lax--Friedrichs method}, after \cite{Tadmor}.
For the sake of simplicity, we will only consider an unstaggered version of \eqref{eq:stagglaxfr}, obtained by inserting an extra mesh point between all pairs of neighboring mesh points. The \emph{unstaggered Lax--Friedrichs method} is then
\begin{equation}\label{eq:unstagglaxfr}
\begin{split}
\mu_i^0 &= \ip{\mu_0}{\psi_i} \\
\mu_i^{n+1} &= \dfrac{\mu_{i-1}^n+\mu_{i+1}^n}{2} - {\Delta t}\dfrac{V[\mu^n]_{i+1}\mu_{i+1}^n - V[\mu^n]_{i-1}\mu_{i-1}^n}{2{\Delta x}}
\end{split}
\end{equation}
where $\mu^n = \sum_i \mu_i^n\delta_{x_i}$.
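As a concrete sketch (our own illustration, with the velocity of the identical-oscillator equation \eqref{eq:ksident} evaluated by direct summation, $V[\mu^n]_i=-K\sum_j\sin(\theta_i-\theta_j)\mu_j^n$), one step of \eqref{eq:unstagglaxfr} on a uniform periodic grid can be written as:

```python
import math

def lax_friedrichs_step(mu, K, dx, dt):
    """One step of the unstaggered Lax-Friedrichs scheme for the
    identical-oscillator equation on a uniform grid of the torus.
    mu[i] is the mass located at theta_i = i*dx; indices wrap periodically."""
    N = len(mu)
    theta = [i * dx for i in range(N)]
    # nonlocal velocity V[mu](theta_i) = -K * sum_j sin(theta_i - theta_j) mu_j
    V = [-K * sum(math.sin(theta[i] - theta[j]) * mu[j] for j in range(N))
         for i in range(N)]
    lam = dt / dx
    return [0.5 * (mu[i - 1] + mu[(i + 1) % N])
            - 0.5 * lam * (V[(i + 1) % N] * mu[(i + 1) % N]
                           - V[i - 1] * mu[i - 1])
            for i in range(N)]
```

Under the CFL condition ${\Delta t}/{\Delta x}\leq C_1^{-1}=K^{-1}$ one observes numerically that the total mass $\sum_i\mu_i^n$ is conserved exactly and positivity is preserved, in line with the stability properties established below.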
\subsection{Properties of the method}
In this section we prove several stability properties of the staggered Lax--Friedrichs method. Since the multi-dimensional method shares the same properties as the one-dimensional method, we will prove these properties for the one-dimensional method \eqref{eq:unstagglaxfr} and merely state the properties for the two-dimensional method \eqref{eq:unstagglaxfr2d} in Section \ref{sec:multid}.
\begin{proposition}\label{prop:stabprop1d}
Consider the unstaggered, one-dimensional Lax--Friedrichs method \eqref{eq:unstagglaxfr} and define the piecewise linear measure-valued map
\begin{equation}\label{eq:approxsolndef}
\mu_t^{\Delta x} := \frac{t^{n+1}-t}{{\Delta t}}\mu^n + \frac{t-t^n}{{\Delta t}}\mu^{n+1} \qquad \text{for } t\in[t^n,t^{n+1})
\end{equation}
where $\mu^n := \sum_i \mu_i^n \delta_{x_i}$. Assume that $V$ satisfies condition \ref{cond:linfbound}, and that the {CFL} condition
\begin{equation}\label{eq:CFL}
\lambda_0 \leq \frac{{\Delta t}}{{\Delta x}} \leq C_1^{-1}
\end{equation}
is satisfied for some $\lambda_0>0$ (where $C_1$ is the constant in \ref{cond:linfbound}). Then for all $t\geq0$
\begin{enumerate}[label=(\roman*)]
\item\label{property:positive} $\mu^{\Delta x}_t \geq 0$
\item\label{property:unitmass} $\|\mu^{\Delta x}_t\|_{{\EuScript{M}}(\mathbb{K})} = 1$
\item\label{property:finitepropspeed} if $\supp\mu_0\subset B_M(0)$ then $\supp\mu^{\Delta x}_t\subset B_{M+\lambda_0^{-1}t}(0)$
\item\label{property:timecontwass} $d_W\big(\mu^{\Delta x}_t,\mu^{\Delta x}_s\big) \leq \lambda_0^{-1}|t-s|$ for all $s\geq0$.
\end{enumerate}
\end{proposition}
\begin{proof}
Writing
\[
\mu_i^{n+1} = \mu_{i-1}^n\frac{1+\frac{{\Delta t}}{{\Delta x}}V[\mu^n]_{i-1}}{2} + \mu_{i+1}^n\frac{1-\frac{{\Delta t}}{{\Delta x}}V[\mu^n]_{i+1}}{2},
\]
it is clear that \eqref{eq:CFL} and \ref{cond:linfbound} ensure that $\mu_i^{n+1}$ is a convex combination of $\mu_{i-1}^n$ and $\mu_{i+1}^n$, whence \ref{property:positive} and \ref{property:unitmass} follow. From \eqref{eq:unstagglaxfr} we see that if $\mu_{i-1}^n=\mu_i^n=\mu_{i+1}^n=0$ then $\mu_i^{n+1}=0$. Hence, after $n$ time steps the support of $\mu_t^{\Delta x}$ can have grown at most a distance $n{\Delta x} \leq n\frac{{\Delta t}}{\lambda_0} = \lambda_0^{-1}t^n$, and \ref{property:finitepropspeed} follows. For \ref{property:timecontwass}, it is clear that by the definition \eqref{eq:approxsolndef} of $\mu^{\Delta x}$, it is enough to prove the claim for $s=t^n$, $t=t^{n+1}$. Let $\psi\in C_b(\mathbb{K})$ be Lipschitz continuous and write $\psi_i=\psi(x_i)$. Then
\begin{align*}
\ip{\mu^{n+1}-\mu^n}{\psi} &= \sum_i \psi_i\left(\frac{\mu_{i-1}^n-2\mu_i^n+\mu_{i+1}^n}{2} - \frac{{\Delta t}}{2{\Delta x}}\big(V[\mu^n]_{i+1}\mu_{i+1}^n-V[\mu^n]_{i-1}\mu_{i-1}^n\big)\right) \\
\intertext{\textit{(summation by parts)}}
&= -\frac{1}{2}\sum_i (\psi_{i+1}-\psi_i)\left(\mu_{i+1}^n-\mu_i^n - \frac{{\Delta t}}{{\Delta x}}\big(V[\mu^n]_{i}\mu_{i}^n+V[\mu^n]_{i+1}\mu_{i+1}^n\big)\right) \\
&= \frac{1}{2}\sum_i (\psi_{i+1}-\psi_i)\left(\mu_{i}^n\left(1 + \frac{{\Delta t}}{{\Delta x}}V[\mu^n]_i\right) + \mu_{i+1}^n\left(\frac{{\Delta t}}{{\Delta x}}V[\mu^n]_{i+1}-1\right)\right) \\
&\leq \frac{1}{2}{\Delta x}\|\psi\|_{\textrm{Lip}}\sum_i \mu_{i}^n\left(1 + \frac{{\Delta t}}{{\Delta x}}V[\mu^n]_i\right) + \mu_{i+1}^n\left(1-\frac{{\Delta t}}{{\Delta x}}V[\mu^n]_{i+1}\right) \\
&= {\Delta x}\|\psi\|_{\textrm{Lip}},
\end{align*}
the last two steps following from the CFL condition and \ref{property:unitmass}. Taking the supremum over $\psi$ with $\|\psi\|_{\textrm{Lip}}\leq1$ yields \ref{property:timecontwass}.
\end{proof}
\subsection{Weak convergence of the method}
We split the proof of convergence of the numerical method to the (unique) measure-valued solution into two parts. First we show that our method is consistent (Lemma \ref{lem:laxwendroff}), in the sense that \emph{if} the method converges, then the limit is the measure-valued solution. Next, we show that the method indeed converges either weakly (Theorem \ref{thm:convmeasval}) or strongly (Theorem \ref{thm:convstrongsoln}), depending on the assumptions on the velocity field $V$ and the initial data $\mu_0$.
\begin{lemma}\label{lem:laxwendroff}
Assume that $V$ satisfies conditions \ref{cond:spacelipschitz}, \ref{cond:lipschitz} and define $\mu^{\Delta x}$ by \eqref{eq:approxsolndef}. Assume that $d_W(\mu_t^{\Delta x},\mu_t) \to 0$ as ${\Delta x}\to0$ uniformly on bounded intervals $t\in[0,T]$, for some $\mu \in C_w([0,\infty);{\EuScript{M}}(\mathbb{K}))$. Then $\mu$ is the measure-valued solution of \eqref{eq:conslaw}.
\end{lemma}
\begin{proof}
The convergence $d_W(\mu_t^{\Delta x},\mu_t) \to 0$ implies that $\mu_t\in{\EuScript{P}}(\xset)$ for all $t$. Let $\phi\in C_c^\infty(\mathbb{R}\times\mathbb{R})$; we want to show that $\mu$ satisfies \eqref{eq:measvalsoln} with $\xset=\mathbb{K}$. Denoting $\phi_i^n=\phi(x_i,t^n)$, we multiply \eqref{eq:unstagglaxfr} by $\phi_i^n$ and sum over $i,n$:
\begin{align*}
0 &= \sum_{n=0}^\infty\sum_i \left(\phi_i^n\mu_i^{n+1}-\phi_i^n\frac{\mu_{i-1}^n+\mu_{i+1}^n}{2} + {\Delta t}\phi_i^n\frac{V[\mu^n]_{i+1}\mu_{i+1}^n-V[\mu^n]_{i-1}\mu_{i-1}^n}{2{\Delta x}}\right) \\
\intertext{\textit{(summation by parts)}}
&= -{\Delta t}\sum_{n=0}^\infty\sum_i \frac{\frac{\phi_{i-1}^n+\phi_{i+1}^n}{2}-\phi_i^{n-1}}{{\Delta t}}\mu_i^{n} - {\Delta t}\sum_{n=0}^\infty\sum_i \frac{\phi_{i+1}^n-\phi_{i-1}^n}{2{\Delta x}}V[\mu^n]_i\mu_i^n - \sum_i\phi_i^{-1}\mu_i^0 \\
&= -\int_0^\infty\int_{\mathbb{K}} \frac{\frac{\phi(x-{\Delta x},t)+\phi(x+{\Delta x},t)}{2}-\phi(x,t-{\Delta t})}{{\Delta t}} + \frac{\phi(x+{\Delta x},t)-\phi(x-{\Delta x},t)}{2{\Delta x}}V[\mu^{\Delta x}_t](x)\,d\mu_t^{\Delta x}(x)dt \\
&\quad - \int_{\mathbb{K}}\phi(x,-{\Delta t})\,d\mu_0^{\Delta x}(x).
\end{align*}
By condition \ref{cond:lipschitz} we know that $V[\mu_t^{\Delta x}] \to V[\mu_t]$ uniformly on $\mathbb{R}\times[0,T]$, and by condition \ref{cond:spacelipschitz}, the functions $V[\mu_t^{\Delta x}]$ are uniformly Lipschitz. It follows that the above integral converges to
\[
-\int_0^\infty\int_{\mathbb{K}} \partial_t\phi(x,t) + \partial_x\phi(x,t)V[\mu_t](x)\,d\mu_t(x)dt - \int_{\mathbb{K}}\phi(x,0)\,d\mu_0(x)
\]
and the proof is complete.
\end{proof}
For the convergence proof we use the following compactness lemma, whose proof is postponed to the appendix.
\begin{lemma}\label{lem:compactness}
Let $K\subset{\EuScript{P}}_1(\mathbb{R}^d)$ be a bounded set, i.e.\ $\sup_{\mu\in K}d_W(\mu,\bar{\mu})<\infty$ for some $\bar{\mu}\in{\EuScript{P}}(\mathbb{R}^d)$. Then $K$ is relatively compact in the metric space $({\EuScript{P}}_1(\mathbb{R}^d),d_W)$.
\end{lemma}
\begin{theorem}\label{thm:convmeasval}
Assume that $V$ satisfies conditions \ref{cond:linfbound}, \ref{cond:spacelipschitz}, \ref{cond:lipschitz} and let $\mu_0\in{\EuScript{P}}(\mathbb{K})$ have compact support. Assume that the CFL condition \eqref{eq:CFL} is satisfied. Then the Lax--Friedrichs method \eqref{eq:unstagglaxfr} converges weakly to the measure-valued solution of \eqref{eq:conslaw}. More precisely, if $\mu^{\Delta x}$ is given by \eqref{eq:approxsolndef} then
\[
\sup_{t\in[0,T]}d_W\big(\mu^{\Delta x}_t,\mu_t\big) \to 0 \qquad \text{as } {\Delta x} \to 0
\]
for any $T>0$.
\end{theorem}
\begin{proof}
By Proposition \ref{prop:stabprop1d} \ref{property:positive}, \ref{property:unitmass} and \ref{property:finitepropspeed}, the measures $\mu^{\Delta x}_t$ stay in ${\EuScript{P}}(\mathbb{R})$ and have uniformly bounded support for all ${\Delta x}>0$, $t\in[0,T]$, and by \ref{property:timecontwass}, the maps $\mu^{\Delta x}:[0,T]\to{\EuScript{P}}(\mathbb{R}^d)$ are Lipschitz. Moreover, $d_W(\mu_0,\mu^{\Delta x}_t) \leq d_W(\mu_0,\mu^{\Delta x}_0)+d_W(\mu^{\Delta x}_0,\mu^{\Delta x}_t) \leq {\Delta x} + \frac{1}{\lambda_0}t$, so by Lemma \ref{lem:compactness} the set
\[
K := \big\{\mu^{\Delta x}_t\ :\ 0<{\Delta x}<1,\ t\in[0,T] \big\}
\]
is relatively compact with respect to $d_W$-convergence. Hence, by Ascoli's theorem, there exists a subsequence ${\Delta x}_k\to0$ as $k\to\infty$ and some Lipschitz map $\mu:[0,T]\to{\EuScript{P}}(\mathbb{R}^d)$ such that $d_W(\mu^{{\Delta x}_k}_t,\mu_t) \to 0$ as $k\to\infty$, uniformly in $t\in[0,T]$. By Lemma \ref{lem:laxwendroff}, the limit $\mu$ is the measure-valued solution of \eqref{eq:conslaw}. But this solution is unique, which implies that the whole sequence $\mu^{\Delta x}$ converges to $\mu$.
\end{proof}
\subsection{Strong convergence of the method}
If the initial data and the velocity field are sufficiently smooth then we can show that the numerical method in fact converges \emph{strongly}. We assume that the initial data is absolutely continuous with respect to Lebesgue measure, and hence has a density function $\mu_0(x)$. This data is sampled by its cell averages,
\[
\hat{\mu}^0_i := \frac{1}{{\Delta x}}\int_{x_{i-\hf}}^{x_{i+\hf}}\mu_0(x)\,dx
\]
and the numerical solution is realized as the linear-in-time, piecewise constant $L^1$ function
\begin{equation}\label{eq:strongapproxsolndef}
\hat{\mu}_t^{\Delta x}(x) = \frac{t^{n+1}-t}{{\Delta t}}\hat{\mu}^n + \frac{t-t^n}{{\Delta t}}\hat{\mu}^{n+1} \qquad \text{for } t\in[t^n,t^{n+1}),
\end{equation}
where
\[
\hat{\mu}^n(x) = \sum_i \mu_i^n \ind_{[x_{i-\hf},x_{i+\hf})}(x).
\]
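To make the sampling concrete, the cell averaging $\hat{\mu}^0_i$ and the piecewise constant reconstruction $\hat{\mu}^n$ can be sketched in Python as follows. The composite midpoint quadrature and the grid handling here are illustrative choices on our part, not prescribed by the scheme:

```python
import numpy as np

def cell_averages(mu0, x_faces, quad_pts=32):
    """Cell averages (1/dx) * int_{x_{i-1/2}}^{x_{i+1/2}} mu0(x) dx,
    approximated by a composite midpoint rule in each cell."""
    avgs = np.empty(len(x_faces) - 1)
    for i in range(len(avgs)):
        h = (x_faces[i + 1] - x_faces[i]) / quad_pts
        mids = x_faces[i] + h * (np.arange(quad_pts) + 0.5)
        avgs[i] = mu0(mids).mean()   # mean of midpoint values = cell average
    return avgs

def reconstruct(avgs, x_faces):
    """Piecewise constant L^1 reconstruction, returned as a callable."""
    def hat_mu(x):
        i = np.clip(np.searchsorted(x_faces, x, side="right") - 1,
                    0, len(avgs) - 1)
        return avgs[i]
    return hat_mu
```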
\begin{proposition}\label{prop:strongstabprop1d}
Consider the unstaggered, one-dimensional Lax--Friedrichs method \eqref{eq:unstagglaxfr}. Assume that $V$ satisfies conditions \ref{cond:linfbound}, \ref{cond:spacelipschitz}, \ref{cond:c2bound}, and that the {CFL} condition \eqref{eq:CFL} is satisfied. Then for all $t\in[0,T]$
\begin{enumerate}[label=(\roman*)]
\item\label{property:positive2} $\hat{\mu}^{\Delta x}_t \geq 0$
\item\label{property:unitmass2} $\|\hat{\mu}^{\Delta x}_t\|_{L^1(\mathbb{K})} = 1$
\item\label{property:tvb} ${\textrm{TV}}(\hat{\mu}^{\Delta x}_t) \leq {\textrm{TV}}(\hat{\mu}_0)e^{C_2t} + (e^{C_2t}-1)\frac{C_4}{2C_2}$
\item\label{property:timecontmeas} $\|\hat{\mu}^{\Delta x}_t-\hat{\mu}^{\Delta x}_s\|_{L^1(\mathbb{K})} \leq C_T|t-s|$ for all $s\geq0$.
\end{enumerate}
(In \ref{property:timecontmeas}, $C_T = C_2+\lambda_0^{-1}c_T$, where $c_T$ is the upper bound from \ref{property:tvb}, $c_T := {\textrm{TV}}(\mu_0)e^{C_2T} + (e^{C_2T}-1)\frac{C_4}{2C_2}$.)
\end{proposition}
\begin{proof}
Properties \ref{property:positive2} and \ref{property:unitmass2} obviously follow from Proposition \ref{prop:stabprop1d} (i) and (ii). To show \ref{property:tvb} we split ${\textrm{TV}}(\hat{\mu}^{n+1})$ as follows:
\begin{align*}
{\textrm{TV}}(\hat{\mu}^{n+1}) &\leq \frac{1}{2}\sum_i \biggl|\big(\mu^{n}_{i+2}-\mu^{n}_{i+1}\big)\left(1-\frac{{\Delta t}}{{\Delta x}}V[\hat{\mu}^n]_{i+2}\right) + \big(\mu^{n}_{i}-\mu^{n}_{i-1}\big)\left(1+\frac{{\Delta t}}{{\Delta x}}V[\hat{\mu}^n]_{i}\right) \\
&\qquad\qquad\quad - \frac{{\Delta t}}{{\Delta x}}\mu^{n}_{i+1}\big(V[\hat{\mu}^n]_{i+2}-V[\hat{\mu}^n]_{i+1}\big) + \frac{{\Delta t}}{{\Delta x}}\mu^{n}_i\big(V[\hat{\mu}^n]_i-V[\hat{\mu}^n]_{i-1}\big)\biggr| \\
&\leq A^{n}+B^{n}
\end{align*}
where
\[
A^{n} = \frac{1}{2}\sum_i \big|\mu^{n}_{i+2}-\mu^{n}_{i+1}\big|\left(1-\frac{{\Delta t}}{{\Delta x}}V[\hat{\mu}^n]_{i+2}\right) + \big|\mu^{n}_{i}-\mu^{n}_{i-1}\big|\left(1+\frac{{\Delta t}}{{\Delta x}}V[\hat{\mu}^n]_{i}\right) \leq {\textrm{TV}}(\hat{\mu}^n)
\]
and
\begin{align*}
B^{n} &= \frac{{\Delta t}}{2{\Delta x}}\sum_i \Big|\mu^n_{i+1}\big(V[\hat{\mu}^n]_{i+2}-V[\hat{\mu}^n]_{i+1}\big) - \mu^n_{i-1}\big(V[\hat{\mu}^n]_{i}-V[\hat{\mu}^n]_{i-1}\big)\Big| \\
&\leq \frac{{\Delta t}}{2}\sum_i \Big[ |\mu^n_{i+1}-\mu^n_i| + |\mu^n_i-\mu^n_{i-1}| \Big] \frac{\big|V[\hat{\mu}^n]_{i+2}-V[\hat{\mu}^n]_{i+1}\big|}{{\Delta x}} \\
& \quad \qquad + \frac{{\Delta t}}{2}\sum_i \mu^n_{i-1}\frac{\big|V[\hat{\mu}^n]_{i+2}-V[\hat{\mu}^n]_{i+1}-V[\hat{\mu}^n]_{i}+V[\hat{\mu}^n]_{i-1}\big|}{{\Delta x}^2}{\Delta x} \\
&\leq C_2{\Delta t}{\textrm{TV}}(\hat{\mu}^n) + \frac{C_4}{2}{\Delta t}.
\end{align*}
Iterating over all $n$ gives
\[
{\textrm{TV}}(\hat{\mu}^{n+1}) \leq (1+C_2{\Delta t}){\textrm{TV}}(\hat{\mu}^n) + \frac{C_4}{2}{\Delta t} \leq \dots \leq (1+C_2{\Delta t})^{n+1}\left({\textrm{TV}}(\mu_0)+\frac{C_4}{2C_2}\right)-\frac{C_4}{2C_2}
\]
and \ref{property:tvb} follows.
Finally, we show the time continuity \ref{property:timecontmeas}:
\begin{align*}
\|\hat{\mu}^{n+1}-\hat{\mu}^n\|_{L^1(\mathbb{K})} &= \sum_i|\mu^{n+1}_i-\mu^n_i|{\Delta x} \\
&= \frac{{\Delta x}}{2}\sum_i \bigg|(\mu_{i+1}^n-\mu_i^n)\left(1-\frac{{\Delta t}}{{\Delta x}}V[\hat{\mu}^n]_{i+1}\right) - (\mu_i^n-\mu_{i-1}^n)\left(1+\frac{{\Delta t}}{{\Delta x}}V[\hat{\mu}^n]_{i-1}\right) \\
&\qquad\qquad - \frac{{\Delta t}}{{\Delta x}}\mu_i^n\big(V[\hat{\mu}^n]_{i+1}-V[\hat{\mu}^n]_{i-1}\big)\bigg| \\
&\leq \frac{{\Delta x}}{2}\sum_i |\mu_{i+1}^n-\mu_i^n|\left(1-\frac{{\Delta t}}{{\Delta x}}V[\hat{\mu}^n]_{i+1}\right) + |\mu_i^n-\mu_{i-1}^n|\left(1+\frac{{\Delta t}}{{\Delta x}}V[\hat{\mu}^n]_{i-1}\right) \\
&\qquad\qquad + \frac{{\Delta t}}{{\Delta x}}\mu_i^n\big|V[\hat{\mu}^n]_{i+1}-V[\hat{\mu}^n]_{i-1}\big| \\
&\leq {\Delta x}{\textrm{TV}}(\hat{\mu}^n) + C_2{\Delta t}.
\end{align*}
Iterating over $n$ yields \ref{property:timecontmeas}.
\end{proof}
\begin{theorem}\label{thm:convstrongsoln}
Assume that $V$ satisfies conditions \ref{cond:linfbound}, \ref{cond:spacelipschitz}, \ref{cond:lipschitz}, \ref{cond:c2bound} and let $\mu_0\in{\EuScript{P}}(\mathbb{K})$ be compactly supported and absolutely continuous with respect to Lebesgue measure with density $\mu_0\in{\textrm{BV}}(\mathbb{K})$. Assume that the CFL condition \eqref{eq:CFL} holds. Then the measure-valued solution $\mu$ of \eqref{eq:conslaw} is absolutely continuous, and the one-dimensional Lax--Friedrichs method \eqref{eq:unstagglaxfr} converges strongly to $\mu$. More precisely, if $\hat{\mu}^{\Delta x}$ is given by \eqref{eq:strongapproxsolndef} then
\[
\sup_{t\in[0,T]}\big\|\hat{\mu}^{\Delta x}_t- \mu_t\big\|_{L^1(\mathbb{K})} \to 0 \qquad \text{as } {\Delta x} \to 0
\]
for any $T>0$.
\end{theorem}
\begin{proof}
By Proposition \ref{prop:strongstabprop1d} (ii) and (iii), the set
\[
K := \big\{\hat{\mu}^{\Delta x}_t\ :\ 0<{\Delta x}<1,\ t\in[0,T] \big\}
\]
is uniformly bounded in ${\textrm{BV}}(\mathbb{K})$, so by Helly's theorem, $K$ is relatively compact in $L^1(\mathbb{K})$. Hence, by Ascoli's theorem there is a subsequence ${\Delta x}_k\to0$ as $k\to\infty$ and some Lipschitz map $\tilde{\mu}:[0,T]\to L^1(\mathbb{K})$ such that $\hat{\mu}^{{\Delta x}_k}_t\to\tilde{\mu}_t$ in $L^1(\mathbb{K})$ as $k\to\infty$ uniformly in $t\in[0,T]$.
Since convergence in $L^1$ implies convergence in $d_W$, also $d_W(\hat{\mu}^{{\Delta x}_k}_t,\tilde{\mu}_t) \to 0$.
On the other hand, Theorem \ref{thm:convmeasval} implies that $d_W(\mu^{{\Delta x}_k}_t,\mu_t) \to 0$, where $\mu$ is the measure-valued solution of \eqref{eq:conslaw}. Noting that $d_W(\hat{\mu}^{{\Delta x}_k}_t,\mu^{{\Delta x}_k}_t) \leq \frac{{\Delta x}_k}{2}$, we can conclude $\mu=\tilde{\mu}$. The conclusion follows.
\end{proof}
\subsection{Multiple dimensions}\label{sec:multid}
The extension to multiple space dimensions is done in a tensorial fashion; we limit ourselves to the two-dimensional variant for the sake of notational simplicity.
As before, let $\mathbb{K}_{1}, \mathbb{K}_{2}$ be either the torus $\mathbb{T}$ or the real line $\mathbb{R}$. Let $(x_{i+\hf})_i$ and $(y_{j+\hf})_j$ be discretizations
of $\mathbb{K}_1$ and $\mathbb{K}_2$ with mesh lengths $x_{i+\hf}-x_{i-\hf}\equiv{\Delta x}$ and $y_{j+\hf}-y_{j-\hf}\equiv{\Delta y}$, respectively. With obvious notation we get the unstaggered, two-dimensional
Lax--Friedrichs method:
\begin{equation}\label{eq:unstagglaxfr2d}
\begin{split}
\mu_{i,j}^0 &= \ip{\mu_0}{\phi_{i,j}} \\
\mu_{i,j}^{n+1} &= \frac{\mu_{i-1,j}^n+\mu_{i+1,j}^n+\mu_{i,j-1}^n+\mu_{i,j+1}^n}{4} - {\Delta t}\frac{V^1[\mu^n]_{i+1,j}\mu_{i+1,j}^n - V^1[\mu^n]_{i-1,j}\mu_{i-1,j}^n}{2{\Delta x}} \\
&\quad - {\Delta t}\frac{V^2[\mu^n]_{i,j+1}\mu_{i,j+1}^n - V^2[\mu^n]_{i,j-1}\mu_{i,j-1}^n}{2{\Delta y}}
\end{split}
\end{equation}
where we denote $\mu^n = \sum_i\sum_j \mu_{i,j}^n\delta_{(x_i,y_j)}$. As before, $\mu$ is interpolated linearly between time steps,
\begin{equation}\label{eq:approxsolndef2d}
\mu_t^{{\Delta x}, {\Delta y}} := \frac{t^{n+1}-t}{{\Delta t}}\mu^n + \frac{t-t^n}{{\Delta t}}\mu^{n+1} \qquad \text{for } t\in[t^n,t^{n+1}).
\end{equation}
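One time step of \eqref{eq:unstagglaxfr2d} can be sketched in Python as follows, assuming the velocity samples $V^1[\mu^n]_{i,j}$ and $V^2[\mu^n]_{i,j}$ have already been computed; the periodic boundary treatment via \texttt{np.roll} is an illustrative choice:

```python
import numpy as np

def lax_friedrichs_2d_step(mu, V1, V2, dt, dx, dy):
    """One step of the unstaggered 2-D Lax-Friedrichs scheme: the average of
    the four neighbours minus central differences of the fluxes V1*mu (in x)
    and V2*mu (in y).  Axis 0 is the x-index i, axis 1 the y-index j;
    boundaries are periodic via np.roll."""
    avg = 0.25 * (np.roll(mu, 1, axis=0) + np.roll(mu, -1, axis=0)
                  + np.roll(mu, 1, axis=1) + np.roll(mu, -1, axis=1))
    Fx, Fy = V1 * mu, V2 * mu
    dFx = (np.roll(Fx, -1, axis=0) - np.roll(Fx, 1, axis=0)) / (2 * dx)
    dFy = (np.roll(Fy, -1, axis=1) - np.roll(Fy, 1, axis=1)) / (2 * dy)
    return avg - dt * (dFx + dFy)
```

With periodic boundaries the central differences telescope, so the total mass $\sum_{i,j}\mu^n_{i,j}$ is conserved exactly, in accordance with \ref{property:unitmass2d}.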
We omit the proofs of the following stability and convergence results, as they are straightforward generalizations of their one-dimensional counterparts.
\begin{proposition}\label{prop:stabprop2d}
Consider the unstaggered, two-dimensional Lax--Friedrichs method \eqref{eq:unstagglaxfr2d}. Assume that $V$ satisfies condition \ref{cond:linfbound}, and that the {CFL} condition
\begin{equation}\label{eq:CFL2d}
\lambda_0\leq\frac{{\Delta t}}{\min({\Delta x},{\Delta y})}C_1 \leq \frac{1}{2}
\end{equation}
is satisfied for some $\lambda_0>0$. Then for all $t\geq0$
\begin{enumerate}[label=(\roman*)]
\item\label{property:positive2d} $\mu^{{\Delta x}, {\Delta y}}_t \geq 0$
\item\label{property:unitmass2d} $\|\mu^{{\Delta x}, {\Delta y}}_t\|_{{\EuScript{M}}(\xset)} = 1$
\item\label{property:timecontwass2d} $d_W\big(\mu^{{\Delta x}, {\Delta y}}_t,\mu^{{\Delta x}, {\Delta y}}_s\big) \leq \frac{{\Delta x}+{\Delta y}}{2{\Delta t}}|t-s|$ for all $s\geq0$.
\end{enumerate}
\end{proposition}
\begin{theorem}\label{thm:convmeasval2d}
Consider the two-dimensional equation \eqref{eq:conslaw}. Assume that $V$ satisfies conditions \ref{cond:linfbound}, \ref{cond:spacelipschitz}, \ref{cond:lipschitz} and let $\mu_0\in{\EuScript{P}}(\mathbb{K}_1\times\mathbb{K}_2)$ have compact support. Assume that the CFL condition \eqref{eq:CFL2d} is satisfied. Then the Lax--Friedrichs method \eqref{eq:unstagglaxfr2d} converges weakly to the measure-valued solution of \eqref{eq:conslaw}. More precisely, if $\mu^{{\Delta x}, {\Delta y}}$ is given by \eqref{eq:approxsolndef2d} then
\[
\sup_{t\in[0,T]}d_W\big(\mu^{{\Delta x}, {\Delta y}}_t,\mu_t\big) \to 0 \qquad \text{as } {\Delta x},{\Delta y} \to 0
\]
for any $T>0$.
\end{theorem}
As in the $1$-dimensional case, sufficient smoothness for the initial data and the velocity field will yield \textit{strong} convergence of the numerical method. Assume the initial datum is absolutely continuous with respect to the Lebesgue measure with density function $\mu_{0}(x, y)$. This initial data is sampled by its cell averages:
\[
\hat{\mu}^0_{i,j} := \frac{1}{{\Delta x} {\Delta y}}\int_{x_{i-\hf}}^{x_{i+\hf}} \int_{y_{j-\hf}}^{y_{j+\hf}}\mu_0(x,y)\,dx dy
\]
Similarly, we define the numerical solution as the linear-in-time, piecewise constant $L^1$ function
\begin{equation}\label{eq:strongapproxsolndef2d}
\hat{\mu}_t^{{\Delta x}, {\Delta y}} = \frac{t^{n+1}-t}{{\Delta t}}\hat{\mu}^n + \frac{t-t^n}{{\Delta t}}\hat{\mu}^{n+1} \qquad \text{for } t\in[t^n,t^{n+1})
\end{equation}
where
\[
\hat{\mu}^n(x, y) = \sum_{i, j} \hat{\mu}_{i, j}^n \ind_{[x_{i-\hf},x_{i+\hf}) \times [y_{j-\hf},y_{j+\hf})}(x, y).
\]
\begin{proposition}\label{prop:strongstabprop2d}
Consider the unstaggered, two-dimensional Lax--Friedrichs method \eqref{eq:unstagglaxfr2d}. Assume that $V$ satisfies the conditions \ref{cond:linfbound}, \ref{cond:spacelipschitz}, \ref{cond:c2bound} and that the CFL condition \eqref{eq:CFL2d} is satisfied. Then for all $t\in[0,T]$
\begin{enumerate}[label=(\roman*)]
\item\label{property:positive2dd} $\hat{\mu}^{{\Delta x}, {\Delta y}}_t \geq 0$
\item\label{property:unitmass2dd} $\|\hat{\mu}^{{\Delta x}, {\Delta y}}_t\|_{L^1(\mathbb{K}_{1} \times \mathbb{K}_{2})} = 1$
\item\label{property:tvb2dd} ${\textrm{TV}}(\hat{\mu}^{{\Delta x}, {\Delta y}}_t) \leq c_T:= {\textrm{TV}}(\hat{\mu}_0)e^{C_2T} + (e^{C_2T}-1)\frac{C_4}{2C_2}$
\item\label{property:timecontmeas2dd} $\|\hat{\mu}^{{\Delta x}, {\Delta y}}_t-\hat{\mu}^{{\Delta x}, {\Delta y}}_s\|_{L^1(\mathbb{K}_{1} \times \mathbb{K}_{2})} \leq C_T|t-s|$ for all $s\geq0$.
\end{enumerate}
(In \ref{property:timecontmeas2dd}, $C_T = C_2+\lambda_0^{-1}c_T$, where $c_T$ is the upper bound from \ref{property:tvb2dd}.)
\end{proposition}
We can now state the two-dimensional counterpart of Theorem \ref{thm:convstrongsoln}:
\begin{theorem}\label{thm:convstrongsoln2d}
Assume that $V$ satisfies conditions \ref{cond:linfbound}, \ref{cond:spacelipschitz}, \ref{cond:lipschitz}, \ref{cond:c2bound} and let $\mu_0\in{\EuScript{P}}(\mathbb{K}_1 \times \mathbb{K}_2)$ be compactly supported and absolutely continuous with respect to Lebesgue measure with density $\mu_0\in{\textrm{BV}}(\mathbb{K}_1 \times \mathbb{K}_2)$. Assume that the CFL condition \eqref{eq:CFL2d} holds. Then the measure-valued solution $\mu$ of \eqref{eq:conslaw} takes values in $L^1(\mathbb{K}_1\times\mathbb{K}_2)$, and the two-dimensional Lax--Friedrichs method \eqref{eq:unstagglaxfr2d} converges strongly to $\mu$. More precisely, if $\hat{\mu}^{{\Delta x}, {\Delta y}}$ is given by \eqref{eq:strongapproxsolndef2d} then
\[
\sup_{t\in[0,T]}\big\|\hat{\mu}^{{\Delta x},{\Delta y}}_t-\mu_t\big\|_{L^1(\mathbb{K}_{1} \times \mathbb{K}_{2})} \to 0 \qquad \text{as } {\Delta x}, {\Delta y} \to 0
\]
for any $T>0$.
\end{theorem}
\section{Numerical experiments}
In this section we illustrate our analytical results by performing several numerical experiments for the Kuramoto equation with identical \eqref{eq:ksident} and non-identical \eqref{eq:ks} natural frequencies. Recall that in these equations, $\theta$ takes values in the periodic domain $\mathbb{T}$, while $\Omega$ lies in $\mathbb{R}$.
\subsection{One-dimensional simulations}
We consider first the one-dimensional Kuramoto equation with identical oscillators \eqref{eq:ksident}. In all experiments we have used the unstaggered Lax--Friedrichs method \eqref{eq:unstagglaxfr} with CFL number 0.4 and with $T=0.5$ as the final time. All simulations were computed on a sequence of meshes with $N=32,64,128,256,512$ grid points, as well as a reference solution with $N=4096$ points. The table in each subsection shows the number of grid points as well as the error and the experimental order of convergence (EOC), computed both in the 1-Wasserstein distance $d_W$ and in $L^1$.
The experiments in this section solve the Kuramoto equation with progressively more singular initial data. As will be seen from the tables, the EOC in the 1-Wasserstein distance is close to 1 in all cases, while in $L^1$ it is close to 1 only for smooth data.
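For reference, on the real line the 1-Wasserstein distance between two measures equals the $L^1$ distance between their cumulative distribution functions, so the tabulated errors are straightforward to reproduce. A Python sketch follows; the assumption that both densities live on a common uniform grid is ours:

```python
import numpy as np

def wasserstein1_1d(mu, nu, dx):
    """1-Wasserstein distance between two piecewise constant densities on a
    common uniform grid, via d_W(mu, nu) = ||F_mu - F_nu||_{L^1}, where F
    denotes the cumulative distribution function."""
    F_mu = np.cumsum(mu) * dx
    F_nu = np.cumsum(nu) * dx
    return np.sum(np.abs(F_mu - F_nu)) * dx

def eoc(errors, Ns):
    """Experimental orders of convergence between consecutive mesh levels."""
    e, N = np.asarray(errors, float), np.asarray(Ns, float)
    return np.log(e[:-1] / e[1:]) / np.log(N[1:] / N[:-1])
```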
\subsubsection{Polynomial initial data}\label{sec:numexp1}
The initial data is taken to be the continuous and piecewise parabolic function
\[
\mu_{0}(\theta) = \begin{cases}
\frac{6}{\pi^{3}}(\frac{3\pi}{2} - \theta) (\theta - \frac{\pi}{2}) & \text{if } \theta\in[\frac{\pi}{2}, \frac{3\pi}{2}) \\
0 & \text{otherwise.}
\end{cases}
\]
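A complete run of this experiment can be sketched in Python as follows, assuming the mean-field Kuramoto velocity $V[\mu](\theta)=\int_{\mathbb{T}}\sin(\theta'-\theta)\,d\mu(\theta')$ with unit coupling constant; the sign and coupling conventions of \eqref{eq:ksident} may differ, so this is an illustration rather than a faithful reproduction of the figures:

```python
import numpy as np

def run_kuramoto_1d(N, T=0.5, cfl=0.4):
    """Unstaggered Lax-Friedrichs on the torus [0, 2*pi) with CFL number 0.4,
    started from the piecewise parabolic density of this subsection."""
    dx = 2 * np.pi / N
    theta = (np.arange(N) + 0.5) * dx              # cell centres
    mu = np.where((theta >= np.pi / 2) & (theta < 3 * np.pi / 2),
                  6 / np.pi**3 * (3 * np.pi / 2 - theta) * (theta - np.pi / 2),
                  0.0)
    t = 0.0
    while T - t > 1e-12:
        # mean-field velocity V[mu](theta_i) = sum_j sin(theta_j - theta_i) mu_j dx
        V = np.array([np.sum(np.sin(theta - th) * mu) * dx for th in theta])
        dt = min(cfl * dx / max(np.max(np.abs(V)), 1e-12), T - t)
        avg = 0.5 * (np.roll(mu, 1) + np.roll(mu, -1))
        flux = V * mu
        mu = avg - dt * (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)
        t += dt
    return theta, mu
```

Because the boundary is periodic, the scheme conserves mass exactly, and under the CFL condition the update is a convex combination of neighbouring cell values, so positivity is preserved as in Proposition \ref{prop:stabprop1d}.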
As shown in Figure \ref{fig:piecewiseparabolic}, the numerical solution is a non-oscillatory and reasonable approximation at all mesh resolutions. Table \ref{tab:piecewiseparabolic} shows that the numerical method seems to converge at a rate close to 1, both in the 1-Wasserstein and the $L^1$ distances.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{Figure3.pdf}
\caption{Comparing solutions on different meshes at $t=0.5$ for smooth initial data.}
\label{fig:piecewiseparabolic}
\end{figure}
\begin{table}[h]
\centering
\begin{tabular}{c|cc|cc}
& \multicolumn{2}{c|}{1-Wasserstein} & \multicolumn{2}{c}{$L^1$} \\
$N$ & Error & EOC & Error & EOC \\ \hline
32 & 0.1351 & & 0.2811 & \\
64 & 0.0634 & 1.09 & 0.1336 & 1.07 \\
128 & 0.0303 & 1.07 & 0.0663 & 1.01 \\
256 & 0.0153 & 0.99 & 0.0336 & 0.98 \\
512 & 0.0073 & 1.07 & 0.0157 & 1.10 \\
\end{tabular}
\caption{Errors and EOC in the $1$-Wasserstein and $L^1$ distances at $t=0.5$ for smooth initial data.}
\label{tab:piecewiseparabolic}
\end{table}
\subsubsection{Piecewise constant initial data}\label{sec:numexp2}
This experiment was taken from \cite{Amadori} and uses piecewise constant initial data,
\[
\mu_{0}(\theta) =
\begin{cases}
\frac{2}{3\pi} & \text{if } \theta \in [\frac{\pi}{2}, \frac{3\pi}{2}) \\
\frac{1}{3\pi} & \text{otherwise.}
\end{cases}
\]
Figure \ref{fig:piecewiseconstant} shows again that the approximation is reasonably accurate even on coarse meshes. Table \ref{tab:piecewiseconstant1d} shows that the rate of convergence in 1-Wasserstein is again 1, while it is around 0.7 in $L^1$.
\begin{figure}
\centering
\includegraphics[width=8cm]{Figure1.pdf}
\caption{Comparing solutions on different meshes for piecewise constant initial data.}
\label{fig:piecewiseconstant}
\end{figure}
\begin{table}
\centering
\begin{tabular}{c|cc|cc}
& \multicolumn{2}{c|}{1-Wasserstein} & \multicolumn{2}{c}{$L^1$} \\
$N$ & Error & EOC & Error & EOC \\ \hline
32 & 0.0517 & & 0.0610 & \\
64 & 0.0258 & 1.00 & 0.0315 & 0.95 \\
128 & 0.0131 & 0.98 & 0.0196 & 0.69 \\
256 & 0.0067 & 0.97 & 0.0123 & 0.67 \\
512 & 0.0034 & 0.98 & 0.0071 & 0.79 \\
\end{tabular}
\caption{Errors and EOC in the $1$-Wasserstein and $L^1$ distances at $t=0.5$ for piecewise constant initial data.}
\label{tab:piecewiseconstant1d}
\end{table}
\subsubsection{Singular initial data}\label{sec:numexp3}
The final experiment uses the singular initial data
\[
\mu_{0} = \frac{1}{4} \big(\delta_{\frac{3\pi}{4}} + \delta_{\frac{5\pi}{4}}\big) + \frac{1}{2} \chi_{[\frac{\pi}{2}, \frac{3\pi}{2}]}
\]
where $\chi_{[a,b]}$ is the indicator function of the interval $[a,b]$. As shown in Figure \ref{fig:diracinitial}, the numerical approximation is nonoscillatory and seems to converge, even in the presence of Dirac singularities. This is confirmed in Table \ref{tab:diracinitial}: The method seems to converge at a rate of around $3/4$ in $d_W$, while it does not converge at all in $L^1$, as expected.
\begin{figure}
\centering
\includegraphics[width=8cm]{Figure2.pdf}
\caption{Comparing solutions on different meshes at $t=0.5$ for initial data with Dirac masses.}
\label{fig:diracinitial}
\end{figure}
\begin{table}
\centering
\begin{tabular}{c|cc|cc}
& \multicolumn{2}{c|}{1-Wasserstein} & \multicolumn{2}{c}{$L^1$} \\
$N$ & Error & EOC & Error & EOC \\ \hline
32 & 0.1737 & & 0.8703 & \\
64 & 0.1119 & 0.63 & 0.7945 & 0.13 \\
128 & 0.0687 & 0.70 & 0.7131 & 0.16 \\
256 & 0.0433 & 0.67 & 0.6153 & 0.21 \\
512 & 0.0260 & 0.74 & 0.4861 & 0.34 \\
\end{tabular}
\caption{Errors and EOC in the $1$-Wasserstein and $L^1$ distances at $t=0.5$ for singular initial data.}
\label{tab:diracinitial}
\end{table}
\subsection{Polynomial initial data in 2-D}\label{sec:numexp4}
The final numerical experiment approximates the two-dimensional Kuramoto equation \eqref{eq:ks}. We consider the piecewise polynomial initial data
\[
\mu_{0}(\theta,\Omega) =
\begin{cases}
\frac{64}{3\pi^{2}} \theta\Omega & \text{if } \theta\in[\frac{\pi}{4},\frac{\pi}{2}) \text{ and } \Omega \in [0,1] \\
0 & \text{otherwise}.
\end{cases}
\]
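The normalising constant $\tfrac{64}{3\pi^2}$ can be verified independently. A short Python check using midpoint quadrature on an illustrative grid aligned with the support (the rule is exact here, since the density is bilinear on each cell):

```python
import numpy as np

def mu0_2d(theta, omega):
    """The density 64/(3*pi^2) * theta * omega on [pi/4, pi/2) x [0, 1]."""
    inside = ((theta >= np.pi / 4) & (theta < np.pi / 2)
              & (omega >= 0.0) & (omega <= 1.0))
    return np.where(inside, 64 / (3 * np.pi**2) * theta * omega, 0.0)

# Midpoint quadrature on a grid whose cell faces align with the support;
# the midpoint rule is exact for bilinear integrands, so mass == 1 exactly
# up to floating-point rounding.
N = 400
dth, dom = np.pi / N, 2.0 / N
th = (np.arange(N) + 0.5) * dth                  # grid on [0, pi]
om = -0.5 + (np.arange(N) + 0.5) * dom           # grid on [-0.5, 1.5]
TH, OM = np.meshgrid(th, om, indexing="ij")
mass = mu0_2d(TH, OM).sum() * dth * dom
```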
Figure \ref{fig:2d} shows the solution at $t=0.5$ and Table \ref{tab:2d} shows the $L^1$ errors. (Due to the complexities of computing the Wasserstein distance for multi-dimensional measures, we only compute the $L^1$ errors in this experiment.) The convergence rate in $L^1$ is seen to be about the same as in the piecewise constant one-dimensional example in Section \ref{sec:numexp2}.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{Figure4.pdf}
\caption{Comparing solutions on different meshes for two-dimensional polynomial initial data.}
\label{fig:2d}
\end{figure}
\begin{table}[h]
\centering
\begin{tabular}{c|c c}
& \multicolumn{2}{c}{$L^1$} \\
\hline
$N$ & Error & EOC \\ \hline
32 & 1.2912 & \\
64 & 1.0054 & 0.36 \\
128 & 0.7346 & 0.45 \\
256 & 0.4925 & 0.58 \\
512 & 0.2999 & 0.72 \\
1024 & 0.1638 & 0.87 \\
\end{tabular}
\caption{Table of $L^{1}$-errors for two-dimensional initial data.}
\label{tab:2d}
\end{table}
\section{Introduction}\label{sec:introduction}
\vspace*{-0.1in}
Initial algebra semantics~\cite{bdm97} is one of the cornerstones of
the modern theory of data types. It has long been known to deliver
practical programming tools --- such as pattern matching, induction
rules, and structured recursion operators --- as well as principled
reasoning techniques --- like relational parametricity~\cite{rey83}
--- for algebraic data types (ADTs). Initial algebra semantics has
also been developed for the syntactic generalization of ADTs known as
nested types~\cite{bm98}, and it has been shown to deliver analogous
tools and techniques for them as well~\cite{jg07}. Generalized
algebraic data types (GADTs)~\cite{pvww06,sp04,xcc04} generalize
nested types --- and thus further generalize ADTs --- syntactically:
\vspace*{-0.05in}
\begin{equation}\label{eq:hier}
\begin{tikzcd}[column sep = huge]
\text{\fbox{$\mathsf{ADTs}$}}\;
\ar[r,hookrightarrow,"\text{syntactically}","\text{generalized by}"']
& \;\text{\fbox{$\mathsf{nested~types}$}}\;
\ar[r,hookrightarrow,"\text{syntactically}","\text{generalized by}"']
& \;\text{\fbox{$\mathsf{GADTs}$}}
\end{tikzcd}
\end{equation}
\noindent
Given their ubiquity in modern functional programming, an important
open question is whether or not an initial algebra semantics exists
for GADTs.\looseness=-1
The starting point for initial algebra semantics is to interpret types
as objects in a suitably structured category $\mathcal C$, and to
interpret open type expressions as endofunctors on this category. An
ADT is interpreted as the least fixpoint of the endofunctor on
$\mathcal C$ interpreting its underlying type expression. For example,
the type expression underlying the standard data
type\footnote{Although our results apply to GADTs in any programming
language, we will use Agda syntax for all code in this paper unless
otherwise specified. But whereas Agda allows type parameters in the
types of GADT data constructors to be implicit, we will always write
all type parameters explicitly. We use {\sf sans serif} font for
code snippets and {\em italic} font for mathematics.}
\vspace*{-0.1in}
\begin{equation}\label{eq:lists}
\begin{array}{l}
\mathsf{data\; List \;:\;Set \to Set\;where}\\
\hspace*{0.2in}\mathsf{nil\;\;\;\;:\; \forall\,A \to List\;A}\\
\hspace*{0.2in}\mathsf{cons\;:\;\forall \,A \to A \rightarrow List\;A
\rightarrow List\;A}
\end{array}
\end{equation}
\vspace*{-0.02in}
\noindent
of lists of data of type $\mathsf{A}$ is $\mathsf{L_A \, X \,=\,1 + A
\times X}$. This is essentially the unfolding of the definition of
a type $\mathsf{X}$ parameterized on $\mathsf{A}$ recognizing that an
element of $\mathsf{X}$ can be constructed either from no data using
the data constructor $\mathsf{nil}$, or from one datum of type
$\mathsf{A}$ and one already-constructed datum of type $\mathsf{X}$
using the data constructor $\mathsf{cons}$. Replacing $\mathsf{X}$ by
$\mathsf{List\,A}$ in~\eqref{eq:lists} gives a recursive equation
defining this type, so if $A$ interprets $\mathsf{A}$ then the least
fixpoint of the endofunctor $L_A X = 1 + A \times X$ on $\mathcal C$
interpreting $\mathsf{L_A}$ interprets
$\mathsf{List\,A}$.\looseness=-1
\pagebreak
Nested types generalize ADTs by allowing their constructors to take as
arguments data whose types involve instances of the nested type other
than the one being defined. The return type of each of its data
constructors must still be precisely the instance being defined,
though. This is illustrated by the following standard definitions of
the nested types $\mathsf{PTree}$ of perfect trees and $\mathsf{Bush}$
of bushes:\looseness=-1
\vspace*{-0.25in}
\begin{equation*}\label{eq:ptrees}
\begin{array}{lll}
\mathsf{data\; PTree \;:\;Set \to Set\;where} & \hspace*{0.3in} &
\mathsf{data\; Bush \;:\;Set \to Set\;where}\\
\hspace*{0.2in}\mathsf{pleaf\;\;\,:\; \forall\,A \to A \to PTree\;A} &
& \hspace*{0.2in}\mathsf{bnil\;\;\;\;:\; \forall\,A \to Bush\;A}\\
\hspace*{0.2in}\mathsf{pnode\;:\;\forall \,A \to PTree\;(A \times A)
\rightarrow PTree\;A} & & \hspace*{0.2in}\mathsf{bcons\;:\;\forall
\,A \to A \rightarrow Bush\;(Bush\;A) \rightarrow Bush\;A}\\
\end{array}
\end{equation*}
\vspace*{-0.13in}
\noindent
A nested type $\mathsf{N}$ is called a {\em truly nested type} if at least
one of its data constructors has an argument type involving an instance of
$\mathsf{N}$ that itself involves an instance of $\mathsf{N}$. The type of
the data constructor $\mathsf{bcons}$ thus
witnesses that $\mathsf{Bush}$ is a truly nested type. Because the
recursive calls to a nested type's type constructor can be at
instances of the type other than the one being defined, a nested type
thus defines an entire family of types that must be constructed
simultaneously. That is, a nested type defines an {\em inductive
family of types}. By contrast, an ADT is usually understood as a
family of inductive types, one for each choice of its type
arguments. This is because every recursive call to an ADT's type
constructor must be at the same instance as the one being
defined.\looseness=-1
Like ADTs, (truly) nested types can still be interpreted as least
fixpoints of endofunctors. But because the recursive calls in a nested
type's definition are not necessarily at the instance being defined,
the endofunctor interpreting its underlying type expression must
necessarily be a {\em higher-order} endofunctor on $\mathcal C$. For
example, the endofunctor interpreting the type expression underlying
$\mathsf{PTree}$ is $P\,F\,X = X + F\,(X \times X)$ and the
endofunctor interpreting the type expression underlying
$\mathsf{Bush}$ is $B\,F\,X = 1 + X \times F\,(F\,X)$. The fact that fixpoints
of higher-order endofunctors are themselves necessarily functors thus
entails that nested types are interpreted as endofunctors on, rather
than elements of, $\mathcal{C}$. This ensures that the fixpoint
interpretation of a nested type has a functorial action and, moreover,
that the map function for a nested type --- such as is required to
establish the nested type as an instance of Haskell's
$\mathsf{Functor}$ class\footnote{We write $\mathsf{map_{D}}$ for the
syntactic function $\mathsf{fmap :: (A \to B) \to (D\,A \to D\,B)}$
witnessing that the type constructor $\mathsf{D}$ is an instance of
Haskell's $\mathsf{Functor}$ class. Such functions are expected to
satisfy syntactic reflections of the functor laws --- i.e.,
preservation of identity functions and composition of functions ---
even though there is no compiler mechanism to enforce this.} --- can
be obtained as its syntactic reflection. For example,
$\mathsf{map_{PTree}}$ is the syntactic reflection of the functorial
action of the fixpoint of $P$, and $\mathsf{map_{Bush}}$ is the
syntactic reflection of the functorial action of the fixpoint of
$B$. Because nested types, including ADTs and truly nested types, are
defined polymorphically, we can think of each element of such a type
$\mathsf{N}$ as a ``container'' for data arranged at various {\em
positions} in the underlying {\em shape} determined by the data
constructors of $\mathsf{N}$ used to build it. Given a function
$\mathsf{f : A \to B}$, the function $\mathsf{map_N\,f}$ is then the
expected shape-preserving-but-possibly-data-changing function that
transforms an element of $\mathsf{N}$ with shape $S$ containing data
of type $\mathsf{A}$ into another element of $\mathsf{N}$ also of
shape $S$ but containing data of type $\mathsf{B}$ by applying
$\mathsf{f}$ to each of its elements. The standard map functions for
ADTs can be obtained in the very same way --- i.e., by interpreting
them as fixpoints of (now trivially) higher-order endofunctors, rather
than of first-order endofunctors, on $\mathcal C$ and reflecting the
functorial actions of those fixpoints back into syntax. For example,
the usual map function $\mathsf{map_{List}}$ for lists is nothing more
than the syntactic reflection of the functorial action of the fixpoint
of the higher-order endofunctor $L'\,F\,X = 1 + X \times F\,X$
underlying $\mathsf{List}$.\looseness=-1
\label{page:seq}
Since GADTs syntactically subsume nested types, they would also
require higher-order endofunctors for their interpretation. We might
therefore expect GADTs to have {\em functorial initial algebra
semantics}, and thus to support
shape-preserving-but-possibly-data-changing map functions, just like
nested types do. But because the shape of an element of a {\em proper}
GADT --- i.e., a GADT that is not a nested type (and thus is not an
ADT) --- is not independent of the data it contains, and is, in fact,
{\em determined by} this data, not all GADTs do. For example, the
GADT\looseness=-1
\vspace*{-0.2in}
\begin{equation*}\label{eq:seq}
\begin{array}{l}
\mathsf{data\; Seq \;:\;Set \to Set\;where}\\
\hspace*{0.3in}\mathsf{const\;:\; \forall A \to A \to Seq\,A}\\
\hspace*{0.3in}\mathsf{pair\;\;\;:\;\forall A\,B \to Seq\,A \to Seq\,B
\rightarrow Seq\,(A \times B)}
\end{array}
\end{equation*}
of sequences does not support a standard
structure-preserving-but-possibly-data-changing map function like ADTs
and nested types do. If it did, then the clause of
$\mathsf{map_{Seq}}$ for an element of $\mathsf{Seq}$ of the form
$\mathsf{pair\,x\,y}$ for $\mathsf{x : A}$ and $\mathsf{y : B}$ would
be such that if $\mathsf{f : (A \times B) \to C}$ then
$\mathsf{map_{Seq}\,f\,(pair\,x\,y) = pair\,u\,v : Seq\,C}$ for some
appropriately typed $\mathsf{u}$ and $\mathsf{v}$. But there is no way
to achieve this unless $\mathsf{C}$ is of the form $\mathsf{A' \times
B'}$ for some $\mathsf{A'}$ and $\mathsf{B'}$, $\mathsf{u :
Seq\,A'}$ and $\mathsf{v : Seq\,B'}$, and $\mathsf{f = f_1 \times
f_2}$ for some $\mathsf{f_1 : A \to A'}$ and $\mathsf{f_2 : B \to
B'}$. The non-uniformity in the type-indexing of proper GADTs ---
which is the very reason a GADT programmer is likely to use GADTs in
the first place --- thus turns out to be precisely what prevents them
from supporting standard map functions.
Despite this, GADTs are currently known to support two different
functorial initial algebra semantics, namely, the discrete semantics
of~\cite{jg08} and the functorial completion semantics
of~\cite{jp19}. The problem is that neither of these leads to a
satisfactory uniform theory of type-indexed data types. On the one
hand, the discrete semantics of~\cite{jg08} interprets GADTs as
fixpoints of higher-order endofunctors on the {\em discretization} of
the category $\mathcal C$ interpreting types, rather than on $\mathcal
C$ itself. In this semantics, the map function for every GADT is
necessarily trivial. Viewing nested types as particular GADTs thus
gives a functorial initial algebra semantics for them that does not
coincide with the expected one. In other words, the discrete
interpretation of~\cite{jg08} results in a semantic situation that
does not reflect the syntactic one depicted in~\eqref{eq:hier}, and is
thus inadequate. On the other hand, the functorial completion
semantics of~\cite{jp19} interprets GADTs as endofunctors on $\mathcal
C$ itself. Each GADT thus, like every nested type, has a non-trivial
map function. This is, however, achieved at the cost of adding new
``junk'' elements, unreachable in syntax but interpreting elements in
the ``map closure'' of its syntax, to the interpretation of every
proper GADT.
Functorial completion for $\mathsf{Seq}$, e.g., adds interpretations
of elements of the form $\mathsf{map\,f\,(pair\,x\,y)}$ even though
these may not be of the form $\mathsf{pair\,u\,v}$ for any terms
$\mathsf{u}$ and $\mathsf{v}$. Importantly, functorial completion adds
no junk to interpretations of nested types or ADTs, so unlike the
semantics of~\cite{jg08}, that of~\cite{jp19} does indeed properly
extend the usual functorial initial algebra semantics for them. But
since the interpretations of~\cite{jp19} are bigger than expected for
proper GADTs, this semantics, too, is unacceptable. Although they are
at the two extremes of the junk vs.~functoriality spectrum, both known
functorial initial algebra semantics for GADTs are fundamentally
unsatisfactory.\looseness=-1
In this paper we pursue a middle ground and ask: how much
functoriality can we salvage for GADTs while still ensuring that their
interpretations contain no junk? We already know that not every
function on a proper GADT's type arguments will be mappable over
it. But this paper answers this question more precisely by developing
an algorithm for detecting exactly which functions are. Our algorithm
takes as input a term $t$ whose type is (an instance of) a GADT
$\mathsf{G}$ and a function $f$ to be mapped over $t$. It then
detects the {\em minimal possible shape} of $t$ as an element of
$\mathsf{G}$, and returns a {\em minimal set of constraints} $f$ must
satisfy in order to be mappable over $t$. The crux of the algorithm is
its ability to separate $t$'s {\em essential structure} as an element
of $\mathsf{G}$ --- i.e., the part of $t$ that is essential for it to
have the shape of an element of $\mathsf{G}$ --- from its {\em
incidental structure} as an element of $\mathsf{G}$ --- i.e., the
part of $t$ that is simply data in the positions of this shape. The
algorithm then guarantees that the constraints ensuring that $f$ is
mappable come only from $t$'s essential structure as an element of
$\mathsf{G}$.
The separation of a term into essential and incidental structure
relative to a given specification is far from trivial, however. In
particular, it is considerably more involved than simply inspecting
the return types of $\mathsf{G}$'s constructors. As for ADTs and other
nested types, a subterm built using one of $\mathsf{G}$'s data
constructors can be an input term to another one (or to itself again),
and this creates a kind of ``feedback loop'' in the well-typedness
computation for the overall term. Moreover, if $\mathsf{G}$ is a
proper GADT, then such a loop can force structure to be essential in
the overall term even though it would be incidental in the subterm if
the subterm were considered in isolation, and this can impose
constraints on the functions mappable over it. This is illustrated in
Examples~\ref{ex:ex2} and~\ref{ex:ex3} below, both of which involve a
GADT $\mathsf{G}$ whose data constructor $\mathsf{pairing}$ can
construct a term suitable as input to $\mathsf{projpair}$.
Our algorithm is actually far more flexible than we have just
described. Rather than simply considering $t$ to be an element of the
top-level GADT in its type, it can instead take as a third argument a
specification, in the form of a perhaps deeper\footnote{An ADT/nested
type/GADT is {\em deep} if it is (possibly mutually inductively)
defined in terms of other ADTs/nested types/GADTs (including,
possibly, itself). For example, $\mathsf{List\,(List\,\mathbb{N})}$ is
a deep ADT, $\mathsf{Bush\,(List\,(PTree\,A))}$ is a deep nested type,
and $\mathsf{Seq\,(PTree\,A)}$ and $\mathsf{List\,(Seq\,A)}$ are deep
GADTs.} data type $\mathsf{D}$, one of whose instances it should be
considered an element of. The algorithm will still return a minimal
set of constraints $f$ must satisfy in order to be mappable over $t$,
but now these constraints are relative to the deep specification
$\mathsf{D}$ rather than to the ``shallow'' specification
$\mathsf{G}\,\beta$. The feedback loops in and between the data types
appearing in the specification $\mathsf{D}$ can, however,
significantly complicate the separation of essential and incidental
structure in terms. For example, if a term's specification is
$\mathsf{G}\,(\mathsf{G}\,\beta)$ then we will first need to compute
which functions are mappable over its relevant subterms relative to
$\mathsf{G}\,\beta$ before we can compute those mappable over the term
itself relative to $\mathsf{G}\,(\mathsf{G}\,\beta)$. Runs of our
algorithm on deep specifications are given in Examples~\ref{ex:ex5}
and~\ref{ex:ex5-again} below, as well as in our accompanying code~\cite{code}.
This paper is organized as follows. Motivating examples highlighting
the delicacies of the problem our algorithm solves are given in
Section~\ref{sec:overview}. Our algorithm is given in
Section~\ref{sec:algorithm}, and fully worked out sample runs of it
are given in Section~\ref{sec:examples}. Our conclusions, related
work, and some directions for future work are discussed in
Section~\ref{sec:conclusion}. Our Agda implementation of our algorithm
is available at~\cite{code}, along with a collection of examples on
which it has been run. This collection includes examples involving
deep specifications and mutually recursively defined GADTs, as well as
other examples that go beyond just the illustrative ones appearing in
this paper.
\section{The Problem and Its Solution: An Overview}\label{sec:overview}
In this section we use well-chosen example instances of the mapping
problem for GADTs and deep data structures both to highlight its
subtlety and to illustrate the key ideas underlying our algorithm that
solves it. For each example, which considers a function $f$ to be mapped
over a term $t$ relative to the essential structure specified by $\mathsf{D}$,
we explain, intuitively, how to obtain the decomposition of $t$ into
the essential and incidental structure specified by $\mathsf{D}$, and what the
minimal constraints are that ensure that $f$ is mappable over $t$
relative to it. By design, we handle the examples only informally in
this section. The results obtained by running our algorithm on their
formal representations are given in
Section~\ref{sec:examples}.\looseness=-1
Our algorithm will treat all GADTs in the class $\mathcal{G}$, whose
elements have the following general form when written in Agda:
\vspace*{-0.1in}
\begin{equation}\label{eq:gadts}
\begin{array}{l}
\mathsf{data\ G : Set}^k
\mathsf{\to Set\ where}\\
\mathsf{\;\;\;\;\;\;\;\;c}_1\, \mathsf{:\, t}_1\\
\;\;\;\;\;\;\;\;\vdots\\
\mathsf{\;\;\;\;\;\;\;\;c}_m\, \mathsf{:\, t}_m\\
\end{array}
\end{equation}
\noindent
where $k$ and $m$ can be any natural numbers, including $0$. Writing
$\ol v$ for a tuple $(v_1,...,v_l)$ when its length $l$ is clear from
context, and identifying a singleton tuple with its only element, each
data constructor $\mathsf{c}_i$, $i \in \{1,...,m\}$, has type
$\mathsf{t}_i$ of the form
\vspace*{-0.1in}
\begin{equation}\label{eq:data-constr-types}
\mathsf{\forall \ol\alpha \to\,}
F_1^{\mathsf{c}_i} \mathsf{\ol \alpha \to ... \to\,}
F_n^{\mathsf{c}_i} \mathsf{\ol \alpha \to G\,}
(K_1^{\mathsf{c}_i} \mathsf{\ol \alpha, ... ,}
K_k^{\mathsf{c}_i} \mathsf{\ol\alpha)}
\end{equation}
\noindent
Here, for each $j \in \{1,...,n\}$, $F_j^{\,\mathsf{c}_i}\ol\alpha$ is
either a closed type, or is $\alpha_d$ for some $d \in
\{1,...,|\ol\alpha|\}$, or is $\mathsf{D}_j^{\mathsf{c}_i}
\,(\ol{\phi_j^{\mathsf{c}_i}\ol\alpha})$ for some user-defined data
type constructor $\mathsf{D}_j^{\mathsf{c}_i}$ and tuple
$\ol{\phi_j^{\mathsf{c}_i}\ol\alpha}$ of type expressions at least one
of which is not closed. The types $F_j^{\,\mathsf{c}_i}\ol\alpha$ must
not involve any arrow types. However, each
$\mathsf{D}_j^{\mathsf{c}_i}$ can be any GADT in $\mathcal{G}$,
including $\mathsf{G}$ itself, and each of the type expressions in
$\ol{\phi_j^{\mathsf{c}_i}\ol\alpha}$ can involve such GADTs as
well. On the other hand, for each $\ell \in \{1,...,k\}$,
$K_\ell^{\mathsf{c}_i} \ol\alpha$ is a type expression whose free
variables come from $\ol\alpha$, and that involves neither $\mathsf{G}$ itself
nor any proper GADTs.\footnote{Formally, a GADT is a {\em proper} GADT
if it has at least one {\em restricted data constructor}, i.e., at
least one data constructor $\mathsf{c}_i$ with type as
in~\eqref{eq:data-constr-types} for which $K_\ell^{\mathsf{c}_i} \ol
\alpha \not = \alpha_\ell$ for at least one $\ell \in \{1,...,k\}$.}
When $|\ol \alpha| = 0$ we suppress the initial quantification over
types in~\eqref{eq:data-constr-types}. All of the GADTs appearing in
this paper are in the class $\mathcal{G}$. All GADTs we are aware of
from the literature whose constructors' argument types do not involve
arrow types are also in $\mathcal{G}$. Our algorithm is easily
extended to GADTs without this restriction provided all arrow types
involved are strictly positive.\looseness=-1
Our first example picks up the discussion for $\mathsf{Seq}$ on
page~\pageref{page:seq}. Because $\mathsf{pair}$ is the only
restricted data constructor for $\mathsf{Seq}$, the feedback
dependencies for $\mathsf{Seq}$ are simple, and this example is
entirely straightforward.
\begin{example}\label{ex:ex1}
The functions $f$ mappable over
\vspace*{-0.05in}
\begin{equation}
t = \mathsf{pair}\,(\mathsf{ pair}\,(\mathsf{{const}\,tt})\,
(\mathsf{{ const}\,2}))\;(\mathsf{{const}\,5}) :
\mathsf{Seq}\,(\,(\mathsf{Bool} \times \mathsf{Int}) \times
\mathsf{Int})
\end{equation}
\vspace*{0.1in}
\noindent
relative to the specification $\mathsf{Seq}\,\beta$ are exactly those
of the form $f = (f_1 \times f_2) \times f_3$ for some $f_1 :
\mathsf{Bool} \to X_1$, $f_2 : \mathsf{Int} \to X_2$, and $f_3 :
\mathsf{Int} \to X_3$, and some types $X_1$, $X_2$, and
$X_3$. Intuitively, this follows from two analyses similar to that on
page~\pageref{page:seq}, one for each occurrence of $\mathsf{pair}$ in
$t$. Writing the part of a term comprising its essential structure
relative to the given specification in {\color{blue} blue} and the
parts of the term comprising its incidental structure in black, our
algorithm also deduces the following essential structure for $t$:
\vspace*{-0.1in}
\[\mathsf{\color{blue} pair}\,(\mathsf{\color{blue}
pair}\,(\mathsf{{\color{blue} const}\,tt})\, (\mathsf{{\color{blue}
const}\,2}))\;(\mathsf{{\color{blue} const}\,5}) :
\mathsf{Seq}\,(\,(\mathsf{Bool} \times \mathsf{Int}) \times
\mathsf{Int})\]
\vspace*{-0.1in}
\end{example}
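The shape analysis underlying Example~\ref{ex:ex1} can be replayed in a few lines of Python (ours, purely illustrative): the essential $\mathsf{pair}$/$\mathsf{const}$ structure of a $\mathsf{Seq}$ term determines the nested product shape that any mappable function must mirror.

```python
# Sketch (ours) of the shape analysis in Example 1. A Seq term is either
# ("const", v) or ("pair", l, r); its shape is "leaf" at a const, where any
# appropriately typed function component is allowed, and a pair of shapes
# at a pair, which a mappable function must mirror.

def shape(term):
    if term[0] == "const":
        return "leaf"
    if term[0] == "pair":
        return (shape(term[1]), shape(term[2]))
    raise ValueError(f"not a Seq term: {term!r}")

t = ("pair", ("pair", ("const", True), ("const", 2)), ("const", 5))
# shape(t) is (("leaf", "leaf"), "leaf"), so a mappable f must have the
# form (f1 x f2) x f3, exactly as computed in Example 1.
```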
The next two examples are more involved because $\mathsf{G}$ has
purposely been crafted so that its data constructor $\mathsf{pairing}$
can construct a term suitable as input to $\mathsf{projpair}$.
\begin{example}\label{ex:ex2}
Consider the GADT
\vspace*{-0.1in}
\begin{equation*}\label{eq:crazy}
\begin{array}{l}
\mathsf{data\; G\;:\;Set \to Set\;where}\\
\hspace*{0.3in}\mathsf{const\;\;\;\;:\; G\,\mathbb{N}}\\
\hspace*{0.3in}\mathsf{flat\;\;\;\;\;\;\;:\;\forall\,A \to
List\,(G\,A) \to G\,(List\,A)}\\
\hspace*{0.3in}\mathsf{inj\;\;\;\;\;\;\;\;\,:\; \forall\,A \to A \to
G\,A}\\
\hspace*{0.3in}\mathsf{pairing\;\;:\;\forall\,A\,B \to G\,A \to G\,B
\rightarrow G\,(A \times B)}\\
\hspace*{0.3in}\mathsf{projpair\;:\;\forall\,A\,B \to G\,(G\,A \times
G\,(B \times B)) \to G\,(A \times B)}
\end{array}
\end{equation*}
The functions mappable over
\vspace*{-0.1in}
\begin{equation*}\label{eq:u-inf}
t = {\mathsf{projpair}} \;(\;{\mathsf{inj}\;} \;
(\mathsf{{inj}\;(cons\,2\,nil)},\; {\mathsf{pairing}}\; ({\mathsf{
inj}\,2})\;{\mathsf{const}}) \;) :
\mathsf{G}\,(\mathsf{List}\,\mathbb{N} \, \times \, \mathbb{N})
\end{equation*}
relative to the specification $\mathsf{G}\,\beta$ are exactly those of
the form $f = f_1 \times \id_\mathbb{N}$ for some type $X$ and
function $f_1 : \mathsf{List}\,\mathbb{N} \to X$. This makes sense
intuitively: The call to $\mathsf{projpair}$ requires that a mappable
function $f$ must at top level be a product $f_1 \times f_2$ for some
$f_1$ and $f_2$, and the outermost call to $\mathsf{inj}$ imposes no
constraints on $f_1 \times f_2$. In addition, the call to
$\mathsf{inj}$ in the first component of the pair argument to the
outermost call to $\mathsf{inj}$ imposes no constraints on $f_1$, and
neither does the call to $\mathsf{cons}$ or its arguments. On the
other hand, the call to $\mathsf{pairing}$ in the second component of
the pair argument to the second call to $\mathsf{inj}$ must produce a
term of type $\mathsf{G\,(\mathbb{N} \times \mathbb{N})}$, so the
argument $\mathsf{2}$ to the rightmost call to $\mathsf{inj}$ and the
call to $\mathsf{const}$ require that $f_2 = \id_{\mathbb{N}}$. Our
algorithm also deduces the following essential structure for $t$:
\vspace*{-0.1in}
\begin{equation}\label{ex:decomp}
{\mathsf{\color{blue} projpair}} \;(\;{\mathsf{\color{blue}
inj}\;} \; (\mathsf{{\color{blue} inj}\;(cons\,2\,nil)},\;
{\mathsf{\color{blue} pairing}}\; ({\mathsf{\color{blue}
inj}\,2})\;{\mathsf{\color{blue} const}}) \;) :
\mathsf{G}\,(\mathsf{List}\,\mathbb{N} \, \times \, \mathbb{N})
\end{equation}
Note that, although the argument to $\mathsf{projpair}$ decomposes
into essential structure and incidental structure as
$\mathsf{\color{blue}inj}\,(\mathsf{{inj}\;(cons\,2\,nil)},\;
{\mathsf{pairing}}\; ({\mathsf{ inj}\,2})\;{\mathsf{const}})$ when
considered as a standalone term relative to the specification
$\mathsf{G}\,\beta$, the feedback loop between $\mathsf{pairing}$ and
$\mathsf{projpair}$ ensures that $t$ has the decomposition in
\eqref{ex:decomp} relative to $\mathsf{G}\,\beta$ when this argument
is considered in the context of $\mathsf{projpair}$. Similar comments
apply throughout this paper.
\end{example}
\begin{example}\label{ex:ex3}
The functions $f$ mappable over
\vspace*{-0.1in}
\begin{equation*}\label{eq:t-inf}
t = {\mathsf{projpair}}\;(\;{\mathsf{inj}\;} \;
({\mathsf{flat}}\;({\mathsf{ cons}}\;{\mathsf{const}}\;{\mathsf{
nil}}),\; {\mathsf{pairing}}\; ({\mathsf{
inj}\,2})\;{\mathsf{const}}) \;) :
\mathsf{G}\,(\mathsf{List}\,\mathbb{N} \, \times \, \mathbb{N})
\end{equation*}
relative to the specification $\mathsf{G}\,\beta$ for $\mathsf{G}$ as
in Example~\ref{ex:ex2} are exactly those of the form $f =
map_{\mathsf{List}}\,\id_{\mathbb{N}} \times \id_\mathbb{N}$. This
makes sense intuitively: The call to $\mathsf{projpair}$ requires that
a mappable function $f$ must at top level be a product $f_1 \times
f_2$ for some $f_1$ and $f_2$, and the outermost call to
$\mathsf{inj}$ imposes no constraints on $f_1 \times f_2$. In
addition, the call to $\mathsf{flat}$ in the first component of the
pair argument to $\mathsf{inj}$ requires that $f_1 =
map_{\mathsf{List}}\,f_3$ for some $f_3$, and the call to
$\mathsf{cons}$ in $\mathsf{flat}$'s argument imposes no constraints
on $f_3$, but the call to $\mathsf{const}$ as $\mathsf{cons}$'s first
argument requires that $f_3 = \id_{\mathbb{N}}$. On the other hand, by
the same analysis as in Example~\ref{ex:ex2}, the call to
$\mathsf{pairing}$ in the second component of the pair argument to
$\mathsf{inj}$ requires that $f_2 = \id_{\mathbb{N}}$. Our algorithm
also deduces the following essential structure for $t$:
\vspace*{-0.1in}
\[{\mathsf{\color{blue}
projpair}}\;(\;{\mathsf{\color{blue} inj}\;} \;
({\mathsf{\color{blue} flat}}\;({\mathsf{\color{blue}
cons}}\;{\mathsf{\color{blue} const}}\;{\mathsf{\color{blue}
nil}}),\; {\mathsf{\color{blue} pairing}}\; ({\mathsf{\color{blue}
inj}\,2})\;{\mathsf{\color{blue} const}}) \;) :
\mathsf{G}\,(\mathsf{List}\,\mathbb{N} \, \times \, \mathbb{N})\]
\end{example}
The feedback loop between constructors in the GADT $\mathsf{G}$ in the
previous two examples highlights the importance of the specification
relative to which a term is considered. But this can already be seen
for ADTs, which feature no such loops. This is illustrated in
Examples~\ref{ex:ex4} and~\ref{ex:ex5} below.
\begin{example}\label{ex:ex4}
The functions $f$ mappable over
\vspace*{-0.1in}
\begin{equation*}
t = \mathsf{cons}\,(\mathsf{cons\,1\,(cons\,2\,nil))\,
(cons\,(cons\,3\,nil)\,nil) : List\,(List\,\mathbb{N})}
\end{equation*}
relative to the specification $\mathsf{List\,\beta}$ are exactly those
of the form $f : \mathsf{List}\,\mathbb{N} \to X$ for some type
$X$. This makes sense intuitively since any function from the element
type of a list to another type is mappable over that list. The
function need not satisfy any particular structural constraints. Our
algorithm also deduces the following essential structure for $t$:
\[\mathsf{{\color{blue}cons}}\,(\mathsf{cons\,1\,(cons\,2\,nil))\,
({\color{blue}cons}\,(cons\,3\,nil)\,{\color{blue}nil})}\]
\end{example}
\begin{example}\label{ex:ex5}
The functions $f$ mappable over
\vspace*{-0.1in}
\begin{equation*}
t = \mathsf{cons\,(cons\,1\,(cons\,2\, nil))\,(cons\,(cons\,3\,
nil)\,nil) : List\,(List\,\mathbb{N})}
\end{equation*}
relative to the specification $\mathsf{List\, (List\,\beta)}$ are
exactly those of the form $f = \mathsf{map_{List}}\,f'$ for some type
$X'$ and function $f' : \mathbb{N} \to X'$. This makes sense
intuitively: The fact that any function from the element type of a
list to another type is mappable over that list requires that $f :
\mathsf{List}\,\mathbb{N} \to X$ for some type $X$ as in
Example~\ref{ex:ex4}. But if the internal list structure of $t$ is
also to be preserved when $f$ is mapped over it, as indicated by the
essential structure $\mathsf{List}\,(\mathsf{List}\,\beta)$, then $X$ must
itself be of the form $\mathsf{List}\,X'$ for some type $X'$. This, in
turn, entails that $f = map_{\mathsf{List}}f'$ for some $f' :
\mathbb{N} \to X'$. Our algorithm also deduces the following essential
structure for $t$:\looseness=-1
\vspace*{-0.1in}
\[\mathsf{{\color{blue}cons}\,({\color{blue}cons}\,1\,({\color{blue}cons}\,2\,
{\color{blue}nil}))\,({\color{blue}cons}\,({\color{blue}cons}\,3\,
{\color{blue}nil})\,{\color{blue}nil}) : List\,(List\,\mathbb{N})}\]
\end{example}
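The contrast between Examples~\ref{ex:ex4} and~\ref{ex:ex5} can be sketched directly in Python (an informal illustration, ours): relative to the shallow specification any function on whole inner lists is mappable, while the deep specification forces the mapped function to factor as a list map.

```python
# Sketch (ours) of the shallow vs. deep specifications for
# t : List (List N), using Python lists for List.

def map_list(f, xs):
    return [f(x) for x in xs]

t = [[1, 2], [3]]

# Shallow specification List b: f may be any function on inner lists,
# with no structural constraint (Example 4).
f_shallow = len
shallow_result = map_list(f_shallow, t)          # [2, 1]

# Deep specification List (List b): f must factor as map_List f' for some
# f' on the elements, preserving the inner list structure (Example 5).
f_prime = lambda n: n + 1
f_deep = lambda xs: map_list(f_prime, xs)
deep_result = map_list(f_deep, t)                # [[2, 3], [4]]
```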
The specification $\mathsf{List}\,(\mathsf{List}\,\beta)$ determining
the essential structure in Example~\ref{ex:ex5} is deep {\em by
instantiation}, rather than {\em by definition}. That is, the inner
occurrence of $\mathsf{List}$ in this specification is not forced by
the definition of the data type $\mathsf{List}$ that specifies its
top-level structure. The quintessential example of a data type that is
deep by definition is the ADT\looseness=-1
\vspace*{-0.15in}
\begin{equation*}\label{eq:rose}
\begin{array}{l}
\mathsf{data\; Rose \;:\;Set \to Set\;where}\\
\hspace*{0.2in}\mathsf{rnil\;\;\;\;:\; \forall\,A \to Rose\;A}\\
\hspace*{0.2in}\mathsf{rnode\;:\;\forall \,A \to A \to List\,(Rose\,A)
\to Rose\;A}
\end{array}
\end{equation*}
of rose trees, whose data constructor $\mathsf{rnode}$ takes as input
an element of another ADT ($\mathsf{List}$) at an instance of $\mathsf{Rose}$ itself.
Reasoning analogous to that in the examples above suggests that no
structural constraints should be required to map appropriately typed
functions over terms whose specifications are given by nested types
that are deep by definition. We will see in Example~\ref{ex:list}
that, although the runs of our algorithm are not trivial on such input
terms, this is indeed the case.
With more tedious algorithmic bookkeeping, results similar to those of
the above examples can be obtained for data types --- e.g.,
$\mathsf{Bush\,(List\,(PTree\,A))}$, $\mathsf{Seq\,(PTree\,A)}$, and
$\mathsf{List\,(Seq\,A)}$ --- that are deep by instantiation~\cite{code}.
\section{The Algorithm}\label{sec:algorithm}
In this section we give our algorithm for detecting mappable
functions. The algorithm $\mathsf{adm}$ takes as input a data
structure $t$, a tuple of functions to be mapped over $t$, and a
specification --- i.e., a (deep) data type --- $\Phi$. It detects the
minimal possible shape of $t$ relative to $\Phi$ and returns a minimal
set $C$ of constraints $\ol f$ must satisfy in order to be mappable
over $t$ viewed as an element of an instance of $\Phi$. A call
\vspace*{-0.1in}
\[\mathsf{adm}\;\:t\;\:\ol f\;\;\Phi\]
is to be made only if there exists a tuple $(\Sigma_1
\ol\beta,...,\Sigma_k\ol\beta)$ of type expressions such that
\begin{itemize}
\item $\Phi = \mathsf{G}\,(\Sigma_1 \ol\beta,...,\Sigma_k\ol\beta)$
for some data type constructor $\mathsf{G} \in \mathcal{G} \cup
\{{\times},{+}\}$ and some type expressions $\Sigma_\ell \ol\beta$,
for $\ell \in\{1,...,k\}$
\end{itemize}
and
\begin{itemize}
\item if $\Phi = \times (\Sigma_1 \ol \beta, \Sigma_2 \ol \beta)$,
then $k = 2$, $t = (t_1, t_2)$, and $\ol f = (f_1, f_2)$
\item if $\Phi = + (\Sigma_1 \ol \beta, \Sigma_2 \ol \beta)$ and $t =
\mathsf{inl}\,t_1$, then $k = 2$, $\ol f = (f_1, f_2)$
\item if $\Phi = + (\Sigma_1 \ol \beta, \Sigma_2 \ol \beta)$ and $t =
\mathsf{inr}\,t_2$, then $k = 2$, $\ol f = (f_1, f_2)$
\item if $\Phi = \mathsf{G}\,(\Sigma_1 \ol\beta,...,\Sigma_k\ol\beta)$
for some $\mathsf{G} \in \mathcal{G}$ then
\begin{enumerate}[label=\arabic*)]
\item $t = \mathsf{c}\,t_1 ... t_n$ for some appropriately typed terms
$t_1,...,t_n$ and some data constructor $\mathsf{c}$ for $\mathsf{G}$
with type of the form in~\eqref{eq:data-constr-types},
\item \label{page:5b} $t : \mathsf{G}\,(K_1^\mathsf{c} \ol w, ... ,
K_k^\mathsf{c} \ol w)$ for some tuple $\ol w =
(w_1,...,w_{|\ol\alpha|})$ of type expressions, and
$\mathsf{G}\,(K_1^\mathsf{c} \ol w, ... , K_k^\mathsf{c} \ol w)$ is
exactly $\mathsf{G}\,(\Sigma_1 \ol s,...,\Sigma_k \ol s)$ for some
tuple $\ol s = (s_1,...,s_{|\ol\beta|})$ of types, and\label{invariant2}
\item for each $\ell \in \{1,...,k\}$, $f_\ell$ has domain
$K_\ell^\mathsf{c} \ol w$
\end{enumerate}
\end{itemize}
These invariants are clearly preserved for each recursive call to
$\mathsf{adm}$.
As an optimization, the free variables in the type expressions
$\Sigma_\ell\ol\beta$ for $\ell \in \{1,...,k\}$ can be taken merely
to be {\em among} the variables in $\ol \beta$, since the calls
$\mathsf{adm}\;t\;\;\ol f\;\;\mathsf{G}\,(\Sigma_1\ol\beta,...,
\Sigma_k\ol\beta)$ and $\mathsf{adm}\;t\;\;\ol
f\;\;\mathsf{G}\,(\Sigma_1\ol{\beta^+},..., \Sigma_k\ol{\beta^+})$
return the same set $C$ (up to renaming) whenever $\ol{\beta}$ is a
subtuple of the tuple $\ol{\beta^+}$. We therefore always take $\ol
\beta$ to have minimal length below.
The algorithm is given as follows by enumerating each of its legal
calls. Each call begins by initializing a set $C$ of constraints to
$\emptyset$.
\begin{itemize}
\item[A.]
$\mathsf{adm}\;(t_1,t_2)\;\;(f_1,f_2)\;\;{\times} (\Sigma_1\ol\beta,
\Sigma_2\ol\beta)$
\begin{enumerate}
\item Introduce a tuple $\ol g = (g_1,...,g_{|\ol\beta|})$ of fresh
function variables, and add the
constraints $\langle \Sigma_1 \ol g, f_1 \rangle$ and $\langle \Sigma_2 \ol
g, f_2 \rangle$ to $C$.
\item For $j \in \{1,2\}$,
if $\Sigma_j\ol\beta = \beta_i$ for some $i$ then do nothing and
go to the next $j$ if there is one. Otherwise, $\Sigma_j\ol\beta =
\mathsf{D}\, (\zeta_1\ol\beta,...,\zeta_r\ol\beta)$, where
$\mathsf{D}$ is a data type constructor in $\mathcal{G} \cup
\{{\times},{+}\}$ of arity $r$, so make the recursive call
$\mathsf{adm}\;t_j\;\;(\zeta_1\ol g,...,\zeta_r\ol
g)\;\;\mathsf{D}\, (\zeta_1\ol\beta,...,\zeta_r\ol\beta)$ and add
the resulting constraints to $C$.
\item Return $C$.
\end{enumerate}
\item[B.] $\mathsf{adm}\; (\mathsf{inl}\, t)\;\; (f_1 , f_2)\;\;
{+} (\Sigma_1\ol\beta, \Sigma_2\ol\beta)$
\begin{enumerate}
\item Introduce a tuple $\ol g = (g_1,...,g_{|\ol\beta|})$ of fresh
function variables, and add the constraints $\langle \Sigma_1 \ol g,
f_1 \rangle$ and $\langle \Sigma_2 \ol g, f_2 \rangle$ to $C$.
\item If $\Sigma_1\ol\beta = \beta_i$ for some $i$ then do nothing.
Otherwise, $\Sigma_1\ol\beta = \mathsf{D}\,
(\zeta_1\ol\beta,...,\zeta_r\ol\beta)$, where $\mathsf{D}$ is a data
type constructor in $\mathcal{G} \cup \{{\times},{+}\}$ of arity
$r$, so make the recursive call $\mathsf{adm}\;t\;\;(\zeta_1\ol
g,...,\zeta_r\ol g)\;\;\mathsf{D}\,
(\zeta_1\ol\beta,...,\zeta_r\ol\beta)$ and add the resulting
constraints to $C$.
\item Return $C$.
\end{enumerate}
\item[C.] $\mathsf{adm}\; (\mathsf{inr}\, t)\;\; (f_1 , f_2)\;\;
{+} (\Sigma_1\ol\beta, \Sigma_2\ol\beta)$
\begin{enumerate}
\item Introduce a tuple $\ol g = (g_1,...,g_{|\ol\beta|})$ of fresh
function variables, and add the constraints $\langle \Sigma_1 \ol g,
f_1 \rangle$ and $\langle \Sigma_2 \ol g, f_2 \rangle$ to $C$.
\item If $\Sigma_2\ol\beta = \beta_i$ for some $i$ then do nothing.
Otherwise, $\Sigma_2\ol\beta = \mathsf{D}\,
(\zeta_1\ol\beta,...,\zeta_r\ol\beta)$, where $\mathsf{D}$ is a data
type constructor in $\mathcal{G} \cup \{{\times},{+}\}$ of arity
$r$, so make the recursive call $\mathsf{adm}\;t\;\;(\zeta_1\ol
g,...,\zeta_r\ol g)\;\;\mathsf{D}\,
(\zeta_1\ol\beta,...,\zeta_r\ol\beta)$ and add the resulting
constraints to $C$.
\item Return $C$.
\end{enumerate}
\item[D.] $\mathsf{adm}\; (\mathsf{c}\, t_1\,...\,t_n)\;\; (f_1,...,f_k)\;\;
\mathsf{G}\, (\Sigma_1 \ol\beta,...,\Sigma_k \ol\beta)$
\begin{enumerate}
\item Introduce a tuple $\ol g = (g_1,...,g_{|\ol\beta|})$ of fresh
function variables and add the constraints $\langle \Sigma_\ell \ol g,
f_\ell \rangle$ to $C$ for each $\ell \in \{1,...,k\}$.
\item If $\mathsf{c}\, t_1\,...\,t_n : \mathsf{G}\,(K_1^\mathsf{c} \ol
w,..., K_k^\mathsf{c} \ol w)$ for some tuple $\ol w =
(w_1,...,w_{|\ol\alpha|})$ of types, let $\ol\gamma =
(\gamma_1,...,\gamma_{|\ol\alpha|})$ be a tuple of fresh type
variables and solve the system of matching problems
\vspace*{-0.2in}
\begin{align*}
&\Sigma_1\ol\beta \equiv K^\mathsf{c}_1\ol\gamma\\
&\Sigma_2\ol\beta \equiv K^\mathsf{c}_2\ol\gamma\\
&\vdots\\
&\Sigma_k\ol\beta \equiv K^\mathsf{c}_k\ol\gamma
\end{align*}
\vspace*{-0.1in}
\noindent
to get a set of assignments, each of the form $\beta \equiv
\psi\ol\gamma$ or $\sigma \ol \beta \equiv \gamma$ for some type
expression $\psi$ or $\sigma$. This yields a (possibly empty) tuple of
assignments $\ol {\beta_i \equiv \psi_i\ol\gamma}$ for each
$i\in\{1,...,|\ol\beta|\}$, and a (possibly empty) tuple of
assignments $\ol {\sigma_{i'}\ol\beta \equiv \gamma_{i'}}$ for each
$i'\in\{1,\dots,\len{\ol\gamma}\}$. Write $\beta_i\equiv
\psi_{i,p}\ol\gamma$ for the $p^{th}$ component of the former and
$\sigma_{i',q}\ol\beta \equiv \gamma_{i'}$ for the $q^{th}$ component
of the latter. An assignment $\beta_i \equiv \gamma_{i'}$ can be seen
as having form $\beta_i \equiv \psi\ol\gamma$ or form
$\sigma\ol\beta \equiv \gamma_{i'}$, but always choose the latter
representation. (This is justified because $\mathsf{adm}$ would
return an equivalent set of assignments --- i.e., a set of assignments
yielding the same requirements on $\ol f$ --- were the former
chosen. The latter is chosen because it may decrease the number of
recursive calls to $\mathsf{adm}$.)
\item For each $i' \in\{1,\dots,\len{\ol\gamma}\}$, define
$\tau_{i'}\ol\beta\ol\gamma$ to be either
$\sigma_{i',1}\ol\beta$ if this exists, or $\gamma_{i'}$ otherwise.
\item Introduce a tuple $\ol h = (h_1,...,h_{\len{\ol\gamma}})$ of fresh
function variables.
\item For each $i\in\{1,\dots,\len{\ol\beta}\}$ and each
constraint $\beta_i \equiv \psi_{i,p}\ol\gamma$, add the
constraint $\langle \psi_{i,p}\ol h , g_i \rangle$ to $C$.
\item For each $i'\in\{1,\dots,\len{\ol\gamma}\}$ and each constraint
$\sigma_{i',q}\ol\beta \equiv \gamma_{i'}$ with $q>1$, add the
constraint $\langle \sigma_{i',q}\ol g , \sigma_{i',1}\ol g \rangle$ to $C$.
\item For each $j\in \{1,\dots,n\}$, let $R_j = F^\mathsf{c}_j
(\tau_1\ol\beta\ol\gamma,...,\tau_{\len{\ol\gamma}}\ol\beta\ol\gamma)$.
\begin{itemize}[label=--]
\item if $R_j$ is a closed type, then do nothing and go to the next
$j$ if there is one.
\item if $R_j = \beta_i$ for some $i$ or $R_j = \gamma_{i'}$ for some
$i'$, then do nothing and go to the next $j$ if there is one.
\item otherwise $R_j = \mathsf{D}\,
(\zeta_{j,1}\ol\beta\ol\gamma,...,\zeta_{j,r}\ol\beta\ol\gamma)$, where
$\mathsf{D}$ is a type constructor in $\mathcal{G}\; \cup\;
\{{\times},{+}\}$ of arity $r$, so make the recursive call
\vspace*{-0.15in}
\[\mathsf{adm}\;t_j\;\;(\zeta_{j,1}\ol g \ol h, ... , \zeta_{j,r}\ol
g\ol h)\;\; R_j\]
\noindent
and add the resulting constraints to $C$.
\end{itemize}
\item Return $C$.
\end{enumerate}
\end{itemize}
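The flavor of cases A--C can be conveyed by a heavily simplified Python sketch (ours; it elides case D entirely and collapses the distinction between $\zeta_j\,\ol g$ and a single fresh variable, so it is not a faithful implementation of $\mathsf{adm}$):

```python
# Heavily simplified sketch (ours) of case A of adm. A specification is
# either a variable name like "b1" or a tuple ("prod", S1, S2); terms for
# product specifications are Python pairs. Each call introduces fresh g's,
# records one constraint <S_j paired with g_j, f_j> per component (step 1),
# and recurses into components that are themselves products (step 2). The
# sum cases B and C are analogous; case D, for GADT constructors with its
# matching problems, is elided.
import itertools

_ids = itertools.count(1)

def adm_prod(term, funs, spec):
    _, S1, S2 = spec
    gs = (f"g{next(_ids)}", f"g{next(_ids)}")             # fresh variables
    C = [((S1, gs[0]), funs[0]), ((S2, gs[1]), funs[1])]  # step 1
    for sub_term, S, g in zip(term, (S1, S2), gs):
        if isinstance(S, tuple):                          # step 2: recurse
            C += adm_prod(sub_term, (g, g), S)
    return C

# A run on a nested pair relative to the specification (b1 x b2) x b3:
cons = adm_prod(((1, 2), 3), ("f1", "f2"),
                ("prod", ("prod", "b1", "b2"), "b3"))
```

Each returned pair plays the role of a constraint $\langle \Sigma\,\ol g, f\rangle$; solving the collected constraints is a separate step, described next in the text.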
We note that the matching problems in Step~(ii) in the last bullet
point above do indeed lead to a set of assignments of the specified
form. Indeed, since invariant \ref{invariant2} on
page~\pageref{page:5b} ensures that $\mathsf{G}\,(K_1^\mathsf{c} \ol
w, ... , K_k^\mathsf{c} \ol w)$ is exactly $\mathsf{G}\,(\Sigma_1 \ol
s,...,\Sigma_k \ol s)$, each matching problem $\Sigma_\ell \ol\beta
\equiv K_\ell \ol\gamma$ whose left- or right-hand side is not already
just one of the $\beta$s or one of the $\gamma$s must necessarily have
left- and right-hand sides that are {\em top-unifiable}~\cite{dj92},
i.e., have identical symbols at every position that is a non-variable
position in both terms. These symbols can be simultaneously peeled
away from the left- and right-hand sides to decompose each matching
problem into a unifiable set of assignments of one of the two forms
specified in Step~(ii). We emphasize that the set of assignments is
not itself unified in the course of running $\mathsf{adm}$.
It is only once $\mathsf{adm}$ is run that the set of constraints it
returns is to be solved. Each such constraint must be either of the
form $\langle \Sigma_\ell \ol g, f_\ell \rangle$, of the form
$\langle \psi_{i,p}\ol h , g_i \rangle$, or of the form
$\langle \sigma_{i',q}\ol g , \sigma_{i',1}\ol g \rangle$. Each constraint
of the first form must have top-unifiable left- and right-hand
components by virtue of invariant \ref{invariant2} on
page~\pageref{page:5b}. It can therefore be decomposed in a manner
similar to that described in the preceding paragraph to arrive at a
unifiable set of constraints. Each constraint of the second form
simply assigns a replacement expression $\psi_{i,p}\ol h$ to each
newly introduced variable $g_i$. Each constraint of the third form
must again have top-unifiable left- and right-hand components. Once
again, invariant \ref{invariant2} on page~\pageref{page:5b} ensures that
these constraints are decomposable into a unifiable set of constraints
specifying replacement functions for the $g$s.\looseness=-1
Performing first-order unification on the entire system of constraints
resulting from the decompositions specified above, and choosing to
replace more recently introduced $g$s and $h$s with ones introduced
later whenever possible, yields a {\em solved system} comprising
exactly one binding for each of the $f$s in terms of those
later-occurring variables. These bindings actually determine the
collection of functions mappable over the input term to $\mathsf{adm}$
relative to the specification $\Phi$. It is not hard to see that our
algorithm delivers the expected results for ADTs and nested types
(when $\Phi$ is the type itself), namely, that all appropriately typed
functions are mappable over each element of such types. (See
Theorem~\ref{thm:ext} below.) For GADTs, however, there is no {\em
existing} understanding of which functions should be mappable over
their terms. We therefore regard the solved system's bindings for the
$f$s as actually {\em defining} the class of functions mappable over a
given term relative to a specification $\Phi$.
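The chain-chasing that produces exactly one binding per $f$ in the solved system can be sketched as follows (our simplification, covering only constraints that are already variable-to-variable bindings, as in the solved systems of Theorem~\ref{thm:ext}):

```python
# Sketch (ours) of the post-processing step: the returned constraints,
# once decomposed, are solved by first-order unification. Here we reduce
# this to chains of variable bindings <g_new, g_old>; following each chain
# from an f yields the single binding per f described above. Constraints
# are (replacement, variable) pairs, e.g. ("g1", "f1") binds f1 to g1.

def solve(constraints):
    binding = dict((var, repl) for repl, var in constraints)
    solved = {}
    for var in binding:
        if var.startswith("f"):          # chase the chain for each f
            repl = binding[var]
            while repl in binding:
                repl = binding[repl]
            solved[var] = repl
    return solved
```

For the constraint set $\{\langle g_1, f_1\rangle, \langle g_2, g_1\rangle\}$, for instance, `solve` yields the single binding $f_1 := g_2$.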
\begin{theorem}\label{thm:ext}
Let $\mathsf{N}$ be a nested type of arity $k$ in $\mathcal{G}$, let
$\ol w = (w_1,\ldots, w_k)$ comprise instances of nested types in
$\mathcal{G}$, let $t : \mathsf{N}\,\ol w$ where $\mathsf{N}\,\ol w$
contains $n$ free type variables, let $\ol\beta =
(\beta_1,\ldots,\beta_n)$, and let $\mathsf{N}\,(\Sigma_1 \ol\beta,
\ldots, \Sigma_k \ol\beta)$ be in $\mathcal{G}$. The solved system
resulting from the call $~\mathsf{adm}\;\;t\;\;(\Sigma_1 \ol
f,\ldots,\Sigma_k \ol f)\;\;\mathsf{N}\,(\Sigma_1 \ol\beta, \ldots,
\Sigma_k \ol\beta)$ for $\ol f = (f_1,\ldots,f_n)$ has the form
$\bigcup_{i=1}^n\{\langle g_{i,1} ,f_i \rangle, \langle g_{i,2}, g_{i,1} \rangle,
\ldots, \langle g_{i,r_i-1}, g_{i,r_i} \rangle \}$, where each $r_i \in
\mathbb{N}$ and the $g_{i,j}$ are pairwise distinct function
variables. It thus imposes no constraints on the functions mappable
over terms of ADTs and nested types.\looseness=-1
\end{theorem}
\begin{proof}
The proof is by cases on the form of the given call to
$\mathsf{adm}$. The constraints added to $\mathcal C$ if this call is
of the form A, B, or C are all of the form $\langle \Sigma_j \ol g,
\Sigma_j\ol f \rangle$ for $j = 1, 2$, and the recursive calls made are
all of the form $\mathsf{adm}\;t'\;\;(\zeta_1\ol g,..., \zeta_r\ol
g)\;\;\mathsf{D}\, (\zeta_1\ol\beta,...,\zeta_r\ol\beta)$ for some
$t'$, some $(\zeta_1,...,\zeta_r)$, and some nested type
$\mathsf{D}$. Now suppose the given call is of the form D. Then Step
(i) adds the constraints $\langle \Sigma_i\ol g, \Sigma_i \ol f \rangle$ for
$i = 1,\dots,k$ to $\mathcal C$. In Step (ii), $|\ol\alpha| = k$, and
$K^{\mathsf{c}}_i \ol w = w_i$ for $i = 1,\ldots,k$ for every data
constructor $\mathsf{c}$ for every nested type, so that the matching
problems to be solved are $\Sigma_i\ol\beta \equiv \gamma_i$ for $i =
1,\ldots,k$. In Step (iii) we therefore have $\tau_i\ol\beta\ol\gamma
= \Sigma_i \ol\beta$ for $i = 1,\ldots,k$. No constraints involving
the variables $\ol h$ introduced in Step (iv) are added to $\mathcal
C$ in Step (v), and no constraints are added to $\mathcal C$ in Step
(vi) since the $\gamma$s are all fresh and therefore pairwise
distinct. For each $R_j$ that is of the form $\mathsf{D}\,
(\zeta_{j,1}\ol\beta\ol\gamma, \ldots ,\zeta_{j,r}\ol\beta\ol\gamma)$,
where $\mathsf{D}$ is a nested type, the recursive call added to
$\mathcal C$ in Step (vii) is of the form
$\mathsf{adm}\;\;t_j\;\;(\zeta_{j,1}\ol g \ol h, \ldots
,\zeta_{j,r}\ol g \ol h)\;\;\mathsf{D}\,(\zeta_{j,1}\ol\beta\ol\gamma,
\ldots ,\zeta_{j,r}\ol\beta\ol\gamma)$, which is again of the same
form as in the statement of the theorem. For $R_j$s not of this form
there are no recursive calls, so nothing is added to $\mathcal
C$. Hence, by induction on the first argument to $\mathsf{adm}$, all
of the constraints added to $\mathcal C$ are of the form $\langle \Psi
\ol \phi, \Psi \ol \psi \rangle$ for some type expression $\Psi$ and some
$\phi$s and $\psi$s, where the $\phi$s and $\psi$s are all pairwise
distinct from one another.
Each constraint of the form $\langle \Psi \ol \phi, \Psi \ol \psi \rangle$
is top-unifiable and thus leads to a sequence of assignments of the
form $\langle \phi_i, \psi_i \rangle$. Moreover, the fact that
$\tau_i\ol\beta\ol\gamma = \Sigma_i \ol\beta$ in Step (iii) ensures
that no $h$s appear in any $\zeta_{j,i} \ol g \ol h$, so the solved
constraints introduced by each recursive call can have as their
right-hand sides only $g$s introduced in the call from which they
spawned. It is not hard to see that the entire solved system resulting
from the original call must comprise the assignments $\langle g_{1,1},
f_1 \rangle,...,\langle g_{1,n}, f_n \rangle$ from the top-level call, as well
as the assignments $\langle g_{j_{i}+1,1}, g_{j_{i},1} \rangle,...,\langle
g_{j_{i}+1,n}, g_{j_{i},n} \rangle$, for $j_i = 0,...,m_i-1$ and $i =
1,...,n$, where $m_i$ is determined by the subtree of recursive calls
spawned by $f_i$.
Re-grouping this ``breadth-first'' collection of assignments
``depth-first'' by the trace of each $f_i$ for $i = 1,...,n$, we get a
solved system of the desired form.\looseness=-1
\end{proof}
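The way a solved system determines exactly one binding for each $f_i$ can be sketched with a small toy script. This is an illustration only, not part of the formal development: each constraint $\langle g_{\mathit{new}}, g_{\mathit{old}}\rangle$ between pairwise distinct function variables is read as replacing the earlier-introduced variable with the later one, and resolving $f_i$ means following its chain of replacements to the end, exactly as in the re-grouping step of the proof. The variable names are hypothetical.

```python
# Toy model of resolving a solved system of constraints.
# Each constraint (newer, older) replaces the older function
# variable with the one introduced later, as in the proof above.

def resolve_bindings(constraints, roots):
    """Follow each root's chain of replacements to its final variable."""
    replace = {older: newer for newer, older in constraints}
    def follow(v):
        while v in replace:
            v = replace[v]
        return v
    return {r: follow(r) for r in roots}

# Chains like those in the theorem: <g11,f1>, <g12,g11> and <g21,f2>
bindings = resolve_bindings(
    [("g11", "f1"), ("g12", "g11"), ("g21", "f2")], ["f1", "f2"])
print(bindings)  # {'f1': 'g12', 'f2': 'g21'}
```

Because every variable occurs in at most one chain, the resolution imposes no constraint relating distinct $f_i$s, mirroring the theorem's conclusion for ADTs and nested types.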
\section{Examples}\label{sec:examples}
\begin{example}
For $t$ as in Example~\ref{ex:ex1}, the call
$~\mathsf{adm}\;t\;\;f\;\;\mathsf{Seq}\,\beta_1~$ results in the
sequence of calls:
\begin{figure}[H]
\centering
\scalebox{.85}{
\begin{tabular}{|l|l l l l|}
\hline
\emph{call 1} & $\mathsf{adm}$ & $t$ & $f$ &
$\mathsf{Seq}\,\beta_1$\\ \hline
\emph{call 2.1} & $\mathsf{adm}$ &
$\mathsf{pair}\;(\mathsf{const\,tt})\;(\mathsf{const\,2})$ & $h^1_1$
& $\mathsf{Seq}\,\gamma^1_1$\\ \hline
\emph{call 2.2} & $\mathsf{adm}$ & $\mathsf{const\,5}$ & $h^1_2$ &
$\mathsf{Seq}\,\gamma^1_2$\\ \hline
\emph{call 2.1.1} & $\mathsf{adm}$ & $\mathsf{const\,tt}$ & $h^{2.1}_1$ &
$\mathsf{Seq}\,\gamma^{2.1}_1$\\ \hline
\emph{call 2.1.2} & $\mathsf{adm}$ & $\mathsf{const\,2}$ & $h^{2.1}_2$ &
$\mathsf{Seq}\,\gamma^{2.1}_2$\\ \hline
\end{tabular}}
\end{figure}
\noindent
The steps of $\mathsf{adm}$ corresponding to these calls are given in
the table below, with the most important components of these steps
listed explicitly:
\begin{figure}[H]
\centering
\scalebox{.85}{
\begin{tabular}{|p{0.3cm}|p{2.4cm}|p{3cm}|p{2.3cm}|p{3.3cm}|p{2.8cm}|}
\hline
\thead{step} & \thead{{matching}} & \thead{$\ol\tau$} & \thead{$\ol
R$} & \thead{$\ol{\zeta}$} & \thead{{constraints}}\\
\thead{no.} & \thead{{problems}} & \thead{ } &\thead{ } & \thead{ } &
\thead{{added to $C$}}\\\hline\hline
\emph{1}
& $\beta_1 \equiv \gamma^1_1 \times \gamma^1_2$
& $\tau_1 \beta_1 \gamma^1_1 \gamma^1_2 = \gamma^1_1$ \newline $\tau_2
\beta_1 \gamma^1_1 \gamma^1_2 = \gamma^1_2$
& $R_1 = \mathsf{Seq}\,\gamma^1_1$ \newline
$R_2 = \mathsf{Seq}\,\gamma^1_2$
& $\zeta_{1,1}\beta_1\gamma^1_1\gamma^1_2 = \gamma^1_1$ \newline
$\zeta_{2,1}\beta_1\gamma^1_1\gamma^1_2 = \gamma^1_2$
& $\langle g^1_1, f \rangle$ \newline $\langle h^1_1 \times h^1_2, g^1_1
\rangle$ \\\hline
\emph{2.1}
& $\gamma^1_1 \equiv \gamma^{2.1}_1 \times \gamma^{2.1}_2$
& $\tau_1 \gamma^1_1 \gamma^{2.1}_1 \gamma^{2.1}_2 =
\gamma^{2.1}_1$ \newline $\tau_2 \gamma^1_1 \gamma^{2.1}_1
\gamma^{2.1}_2 = \gamma^{2.1}_2$
& $R_1 = \mathsf{Seq}\,\gamma^{2.1}_1$ \newline
$R_2 = \mathsf{Seq}\,\gamma^{2.1}_2$
& $\zeta_{1,1}\gamma^1_1\gamma^{2.1}_1\gamma^{2.1}_2 =
\gamma^{2.1}_1$ \newline
$\zeta_{2,1}\gamma^1_1\gamma^{2.1}_1\gamma^{2.1}_2 =
\gamma^{2.1}_2$
& $\langle g^{2.1}_1, h^1_1 \rangle$ \newline $\langle h^{2.1}_1 \times
h^{2.1}_2, g^{2.1}_1 \rangle$
\\\hline
\emph{2.2}
& $\gamma^1_2 \equiv \gamma^{2.2}_1$
& $\tau_1 \gamma^1_2 \gamma^{2.2}_1 = \gamma^1_2$
& $R_1 = \gamma^1_2$
&
& $\langle g^{2.2}_1, h^1_2 \rangle$
\\\hline
\emph{2.1.1}
& $\gamma^{2.1}_1 \equiv \gamma^{2.1.1}_1$
& $\tau_1 \gamma^{2.1}_1 \gamma^{2.1.1}_1 = \gamma^{2.1}_1$
& $R_1 = \gamma^{2.1}_1$
&
& $\langle g^{2.1.1}_1, h^{2.1}_1 \rangle$
\\\hline
\emph{2.1.2}
& $\gamma^{2.1}_2 \equiv \gamma^{2.1.2}_1$
& $\tau_1 \gamma^{2.1}_2 \gamma^{2.1.2}_1 = \gamma^{2.1}_2$
& $R_1 = \gamma^{2.1}_2$
&
& $\langle g^{2.1.2}_1, h^{2.1}_2 \rangle$
\\\hline
\end{tabular}}
\end{figure}
\noindent
Since the solution to the generated set of constraints imposes the
requirement that $f = (g^{2.1.1}_1 \times g^{2.1.2}_1) \times
g^{2.2}_1$, we conclude that the most general functions mappable over
$t$ relative to the specification $\mathsf{Seq}\,\beta_1$ are those of
the form $f = (f_1 \times f_2) \times f_3$ for some types $X_1$,
$X_2$, and $X_3$ and functions $f_1 : \mathsf{Bool} \to X_1$, $f_2 :
\mathsf{Int} \to X_2$, and $f_3 : \mathsf{Int} \to X_3$. This is
precisely the result obtained informally in Example~\ref{ex:ex1}.
\end{example}
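The product shape just derived mirrors the structure of the term's payload $((\mathsf{tt}, 2), 5)$. The following Python sketch (purely illustrative; the particular component functions $f_1$, $f_2$, $f_3$ are hypothetical choices) applies a function of the form $f = (f_1 \times f_2) \times f_3$ componentwise:

```python
# A product of functions acts componentwise on a pair,
# mirroring f = (f1 x f2) x f3 from the example above.
def times(f, g):
    return lambda p: (f(p[0]), g(p[1]))

f1 = lambda b: not b       # Bool -> X1 (hypothetical choice)
f2 = lambda n: n + 1       # Int  -> X2 (hypothetical choice)
f3 = lambda n: str(n)      # Int  -> X3 (hypothetical choice)

f = times(times(f1, f2), f3)
print(f(((True, 2), 5)))   # ((False, 3), '5')
```

Any choice of the three component functions is admissible; the only constraint the solved system imposes is the product shape itself.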
\vspace*{0.05in}
\begin{example}\label{ex:u}
For $\mathsf{G}$ and $t$ as in Example~\ref{ex:ex2} and $f :
\mathsf{List}\,\mathbb{N} \times \mathbb{N} \to X$ the call
$~\mathsf{adm}\;t\;\;f\;\;\mathsf{G}\,\beta_1~$ results in the
sequence of calls:
\begin{figure}[H]
\centering
\scalebox{.85}{
\begin{tabular}{|l|l l l l|}
\hline
\emph{call 1} & $\mathsf{adm}$ & $t$ & $f$ & $\mathsf{G}\beta_1$\\ \hline
\emph{call 2} & $\mathsf{adm}$ & $t_2$ & $\mathsf{G}h^1_1 \times
\mathsf{G}(h^1_2 \times h^1_2)$ & $\mathsf{G}(\mathsf{G}\gamma^1_1
\times \mathsf{G}(\gamma^1_2 \times \gamma^1_2))$\\ \hline
\emph{call 3} & $\mathsf{adm}$ & $t_3$ & $(\mathsf{G}g^2_1,
\mathsf{G}(g^2_2 \times g^2_2))$ & $\mathsf{G}\gamma^1_1 \times
\mathsf{G}(\gamma^1_2 \times \gamma^1_2)$\\ \hline
\emph{call 4.1} & $\mathsf{adm}$ &
${\mathsf{inj}}\;({\mathsf{cons}}\;2\;{\mathsf{nil}})$
& $g^3_1$ & $\mathsf{G}\gamma^2_1$\\ \hline
\emph{call 4.2} & $\mathsf{adm}$ & ${\mathsf{pairing}}\;
({\mathsf{inj}\,2})\;{\mathsf{const}}$ & $g^3_2 \times
g_2^3$ & $\mathsf{G}(\gamma^2_2 \times \gamma^2_2)$\\ \hline
\emph{call 4.2.1} & $\mathsf{adm}$ & $\mathsf{inj}\,2$ &
$g_1^{4.2}$
& $\mathsf{G}\gamma^2_2$\\ \hline
\emph{call 4.2.2} & $\mathsf{adm}$ & $\mathsf{const}$ &
$g_1^{4.2}$
& $\mathsf{G}\gamma^2_2$\\ \hline
\end{tabular}}
\end{figure}
\noindent
where
\[\begin{array}{lll}
t & = & \mathsf{projpair} \;(\;{\mathsf{inj}}\;
(\,{\mathsf{inj}}\;({\mathsf{cons}}\;2\;{\mathsf{nil}}),\;
{\mathsf{pairing}}\; ({\mathsf{inj}\,2})\;{\mathsf{const}}\;)\;)\\
t_2 & = & {\mathsf{inj}}\;
(\;{\mathsf{inj}}\;({\mathsf{cons}}\;2\;{\mathsf{nil}}),\;
{\mathsf{pairing}}\; ({\mathsf{inj}\,2})\;{\mathsf{const}}\;)\\
t_3 & = & (\;{\mathsf{inj}}\;({\mathsf{cons}}\;2\;{\mathsf{
nil}}),\; {\mathsf{pairing}}\;
({\mathsf{inj}\,2})\;{\mathsf{const}}\;)
\end{array}\]
\noindent
The steps of $\mathsf{adm}$ corresponding to these calls are given in
Table~\ref{fig:u}, with the most important components of these steps
listed explicitly. Since the solution to the generated set of
constraints imposes the requirement that $f = g^{4.1}_1
\times \id_\mathbb{N}$, we conclude that the most general functions
mappable over $t$ relative to the specification $\mathsf{G}\,\beta_1$
are those of the form $f = f' \times \id_\mathbb{N}$ for some type $X$
and some function $f' : \mathsf{List}\,\mathbb{N} \to X$. This is
precisely the result obtained intuitively in
Example~\ref{ex:ex2}.\looseness=-1
\begin{sidewaystable}
\begin{minipage}{0.5\textwidth}
\centering
\scalebox{.85}{
\begin{tabular}{|p{1cm}|p{3.5cm}|p{4.5cm}|p{4cm}|p{4.5cm}|p{5.7cm}|}
\hline
\thead{\emph{call}} &
\thead{\emph{matching}}&
\thead{$\ol\tau$} & \thead{$\ol R$} & \thead{$\ol\zeta$} &
\thead{\emph{constraints}}\\
\thead{\emph{no.}} & \thead{\emph{problems}} &\thead{ }
&\thead{ } &\thead{ } &\thead{\emph{added to $C$}}\\\hline\hline
1
& $\beta_1 \equiv \gamma^1_1 \times \gamma^1_2$
& $\tau_1 \beta_1 \gamma^1_1\gamma^1_2 = \gamma^1_1$ \newline
$\tau_2 \beta_1 \gamma^1_1\gamma^1_2 = \gamma^1_2$
& $R_1 = \mathsf{G}\,(\mathsf{G}\gamma^1_1 \times
\mathsf{G}(\gamma^1_2 \times \gamma^1_2))$
& $\zeta_{1,1} \beta_1\gamma^1_1\gamma^1_2 = \mathsf{G}\gamma^1_1
\times \mathsf{G}(\gamma^1_2 \times \gamma^1_2)$
& $\langle g_1^1,f\rangle$ \newline $\langle h_1^1 \times h_2^1, g_1^1 \rangle$
\\\hline
2
& $\mathsf{G}\gamma^1_1 \times \mathsf{G}(\gamma^1_2 \times \gamma^1_2)
\equiv \gamma_1^2$
& $\tau_1\gamma^1_1\gamma^1_2\gamma_1^2 = \mathsf{G}\gamma^1_1 \times
\mathsf{G}(\gamma^1_2 \times \gamma^1_2)$
& $R_1 = \mathsf{G}\gamma^1_1 \times
\mathsf{G}(\gamma^1_2 \times \gamma^1_2)$
& $\zeta_{1,1}\gamma^1_1\gamma^1_2\gamma_1^2 = \mathsf{G}\gamma^1_1$ \newline
$\zeta_{1,2}\gamma^1_1\gamma^1_2\gamma_1^2 = \mathsf{G}(\gamma^1_2
\times \gamma^1_2)$
& $\langle \mathsf{G}g_1^2 \times \mathsf{G}(g_2^2 \times
g_2^2), \mathsf{G}h_1^1 \times \mathsf{G}(h^1_2
\times h^1_2) \rangle$\\ \hline
3
&
&
&
& $\zeta_1\gamma^2_1\gamma^2_2 = \gamma^2_1$ \newline
$\zeta_2\gamma^2_1\gamma^2_2 = \gamma^2_2 \times \gamma^2_2$
& $ \langle \mathsf{G}g_1^3, \mathsf{G}g_1^2 \rangle$ \newline
$ \langle \mathsf{G}(g_2^3 \times g_2^3), \mathsf{G}(g_2^2 \times
g_2^2) \rangle$\\
\hline
4.1
& $\gamma^2_1 \equiv \gamma_1^{4.1}$
& $\tau_1\gamma^2_1\gamma_1^{4.1} = \gamma_1^2$
& $R_1 = \gamma^2_1$
&
& $\langle g_1^{4.1}, g_1^3 \rangle$\\
\hline
4.2
& $\gamma^2_2 \times \gamma^2_2 \equiv \gamma_1^{4.2} \times \gamma_2^{4.2}$
& $\tau_1\gamma^2_2\gamma_1^{4.2}\gamma_2^{4.2} = \gamma^2_2$ \newline
$\tau_2\gamma^2_2\gamma_1^{4.2}\gamma_2^{4.2} = \gamma^2_2$
& $R_1 = \mathsf{G}\gamma^2_2$ \newline
$R_2 = \mathsf{G}\gamma^2_2$
& $\zeta_{1,1}\gamma^2_2\gamma_1^{4.2}\gamma_2^{4.2} = \gamma^2_2$ \newline
$\zeta_{2,1}\gamma^2_2\gamma_1^{4.2}\gamma_2^{4.2} = \gamma^2_2$
& $\langle g_1^{4.2} \times g_1^{4.2}, g_2^3 \times g_2^3 \rangle$
\\
\hline
4.2.1
& $\gamma^2_2 \equiv \gamma_1^{4.2.1}$
& $\tau_1\gamma^2_2\gamma_1^{4.2.1} = \gamma^2_2$
& $R_1 = \gamma^2_2$
&
& $\langle g_1^{4.2.1}, g_1^{4.2} \rangle$
\\
\hline
4.2.2
& $\gamma^2_2 \equiv \mathbb{N}$
&
& $R_1 = 1$
&
& $\langle g_1^{4.2.2},g_1^{4.2} \rangle$ \newline $\langle
\id_{\mathbb{N}},g_1^{4.2.2} \rangle$
\\
\hline
\end{tabular}}
\caption{Calls for Example~\ref{ex:u}}
\label{fig:u}
\end{minipage}
\vspace*{0.5in}
\begin{minipage}{0.5\textwidth}
\centering
\scalebox{.85}{
\begin{tabular}{|p{1cm}|p{3.5cm}|p{4.5cm}|p{4cm}|p{4.5cm}|p{5.7cm}|}
\hline
\thead{\emph{call}} &
\thead{\emph{matching}}&
\thead{$\ol\tau$} & \thead{$\ol R$} & \thead{$\ol\zeta$} &
\thead{\emph{constraints}}\\
\thead{\emph{no.}} & \thead{\emph{problems}} &\thead{ }
&\thead{ } &\thead{ } &\thead{\emph{added to $C$}}\\\hline\hline
1
& $\beta_1 \equiv \gamma^1_1 \times \gamma^1_2$
& $\tau_1 \beta_1 \gamma^1_1\gamma^1_2 = \gamma^1_1$ \newline
$\tau_2 \beta_1 \gamma^1_1\gamma^1_2 = \gamma^1_2$
& $R_1 = \mathsf{G}\,(\mathsf{G}\gamma^1_1 \times
\mathsf{G}(\gamma^1_2 \times \gamma^1_2))$
& $\zeta_{1,1} \beta_1\gamma^1_1\gamma^1_2 = \mathsf{G}\gamma^1_1
\times \mathsf{G}(\gamma^1_2 \times \gamma^1_2)$
& $\langle g_1^1,f\rangle$ \newline $\langle h_1^1 \times h_2^1, g_1^1 \rangle$
\\\hline
2
& $\mathsf{G}\gamma^1_1 \times \mathsf{G}(\gamma^1_2 \times \gamma^1_2)
\equiv \gamma_1^2$
& $\tau_1\gamma^1_1\gamma^1_2\gamma_1^2 = \mathsf{G}\gamma^1_1 \times
\mathsf{G}(\gamma^1_2 \times \gamma^1_2)$
& $R_1 = \mathsf{G}\gamma^1_1 \times
\mathsf{G}(\gamma^1_2 \times \gamma^1_2)$
& $\zeta_{1,1}\gamma^1_1\gamma^1_2\gamma_1^2 = \mathsf{G}\gamma^1_1$ \newline
$\zeta_{1,2}\gamma^1_1\gamma^1_2\gamma_1^2 = \mathsf{G}(\gamma^1_2
\times \gamma^1_2)$
& $\langle \mathsf{G}g_1^2 \times \mathsf{G}(g_2^2 \times
g_2^2), \mathsf{G}h_1^1 \times \mathsf{G}(h^1_2
\times h^1_2) \rangle$\\ \hline
3
&
&
&
& $\zeta_1\gamma^2_1\gamma^2_2 = \gamma^2_1$ \newline
$\zeta_2\gamma^2_1\gamma^2_2 = \gamma^2_2 \times \gamma^2_2$
& $ \langle \mathsf{G}g_1^3, \mathsf{G}g_1^2 \rangle$ \newline
$ \langle \mathsf{G}(g_2^3 \times g_2^3), \mathsf{G}(g_2^2 \times
g_2^2) \rangle$\\
\hline
4.1
& $\gamma^2_1 \equiv \mathsf{List}\,\gamma_1^{4.1}$
& $\tau_1\gamma^2_1\gamma_1^{4.1} = \gamma_1^{4.1}$
& $R_1 = \mathsf{List}\,(\mathsf{G}\gamma_1^{4.1})$
& $\zeta_{1,1}\gamma^2_1\gamma_1^{4.1} = \mathsf{G}\gamma_1^{4.1}$
& $\langle g_1^{4.1}, g_1^3 \rangle$ \newline
$\langle \mathsf{List}\,h_1^{4.1}, g_1^{4.1} \rangle$ \\
\hline
4.2
& $\gamma^2_2 \times \gamma^2_2 \equiv \gamma_1^{4.2} \times \gamma_2^{4.2}$
& $\tau_1\gamma^2_2\gamma_1^{4.2}\gamma_2^{4.2} = \gamma^2_2$ \newline
$\tau_2\gamma^2_2\gamma_1^{4.2}\gamma_2^{4.2} = \gamma^2_2$
& $R_1 = \mathsf{G}\gamma^2_2$ \newline
$R_2 = \mathsf{G}\gamma^2_2$
& $\zeta_{1,1}\gamma^2_2\gamma_1^{4.2}\gamma_2^{4.2} = \gamma^2_2$ \newline
$\zeta_{2,1}\gamma^2_2\gamma_1^{4.2}\gamma_2^{4.2} = \gamma^2_2$
& $\langle g_1^{4.2} \times g_1^{4.2}, g_2^3 \times g_2^3 \rangle$
\\
\hline
4.1.1
& $\mathsf{G}\gamma_1^{4.1} \equiv\gamma_1^{4.1.1}$
& $\tau_1\gamma_1^{4.1}\gamma_1^{4.1.1} = \mathsf{G}\gamma_1^{4.1}$
& $R_1 = \mathsf{G}\gamma_1^{4.1}$ \newline $R_2 =
\mathsf{List}(\mathsf{G}\gamma_1^{4.1})$
& $\zeta_{1,1}\gamma_1^{4.1}\gamma_1^{4.1.1} =
\gamma_1^{4.1}$ \newline
$\zeta_{2,1}\gamma^{4.1}_1\gamma^{4.1.1}_1 = \mathsf{G}\gamma^{4.1}_1$
& $\langle \mathsf{G}g_1^{4.1.1}, \mathsf{G}h_1^{4.1} \rangle$
\\
\hline
4.2.1
& $\gamma^2_2 \equiv \gamma_1^{4.2.1}$
& $\tau_1\gamma^2_2\gamma_1^{4.2.1} = \gamma^2_2$
& $R_1 = \gamma^2_2$
&
& $\langle g_1^{4.2.1}, g_1^{4.2} \rangle$
\\
\hline
4.2.2
& $\gamma^2_2 \equiv \mathbb{N}$
&
& $R_1 = 1$
&
& $\langle g_1^{4.2.2},g_1^{4.2} \rangle$ \newline $\langle
\id_{\mathbb{N}},g_1^{4.2.2} \rangle$
\\
\hline
4.1.1.1
& $\gamma^{4.1}_1 \equiv \mathbb{N}$
&
& $R_1 = 1$
&
& $\langle g_1^{4.1.1.1}, g_1^{4.1.1} \rangle$ \newline
$\langle \id_{\mathbb{N}}, g_1^{4.1.1.1} \rangle$
\\
\hline
4.1.1.2
& $\mathsf{G}\gamma_1^{4.1} \equiv \gamma_1^{4.1.1.2}$
& $\tau_1\gamma_1^{4.1}\gamma_1^{4.1.1.2} = \mathsf{G}\gamma_1^{4.1}$
& $R_1 = 1$
&
& $\langle \mathsf{G}g_1^{4.1.1.2}, \mathsf{G}g_1^{4.1.1} \rangle$
\\
\hline
\end{tabular}}
\caption{Calls for Example~\ref{ex:t}}
\label{fig:t}
\end{minipage}
\end{sidewaystable}
\end{example}
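The shape $f = f' \times \id_{\mathbb{N}}$ obtained in this example can be illustrated concretely: the second component of the pair must be left untouched, while the first may be transformed freely. In the sketch below the particular choice of $f'$ is, of course, an arbitrary hypothetical one.

```python
# f must have the shape f' x id: only the first component may change.
def times(f, g):
    return lambda p: (f(p[0]), g(p[1]))

identity = lambda x: x
f_prime = len                    # List N -> X, hypothetical choice

f = times(f_prime, identity)     # f' x id, as required above
print(f(([4, 0, 4], 7)))         # (3, 7)
```

Replacing `identity` in the second component with any other function would violate the constraint $\langle \id_{\mathbb{N}}, g_1^{4.2.2}\rangle$ generated by call 4.2.2.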
\begin{example}\label{ex:t}
For $\mathsf{G}$ and $t$ as in Example~\ref{ex:ex3}
and $f : \mathsf{List}\,\mathbb{N} \times \mathbb{N} \to X$ we have
\[\begin{array}{lll}
K^{\mathsf{const}} & = & \mathbb{N}\\
K^{\mathsf{flat}}\,\alpha & = & \mathsf{List}\,\alpha\\
K^{\mathsf{inj}}\,\alpha & = & \alpha\\
K^{\mathsf{pairing}}\,\alpha_1\,\alpha_2 & = & \alpha_1 \times \alpha_2\\
K^{\mathsf{projpair}}\,\alpha_1\,\alpha_2 & = & \alpha_1 \times \alpha_2
\end{array}\]
The call $~\mathsf{adm}\;t\;\;f\;\;\mathsf{G}\,\beta_1~$ results in the
sequence of calls:
\begin{figure}[H]
\centering
\scalebox{.85}{
\begin{tabular}{|l|l l l l|}
\hline
\emph{call 1} & $\mathsf{adm}$ & $t$ & $f$ & $\mathsf{G}\beta_1$\\ \hline
\emph{call 2} & $\mathsf{adm}$ & $t_2$ & $\mathsf{G}h^1_1 \times
\mathsf{G}(h^1_2 \times h^1_2)$ & $\mathsf{G}(\mathsf{G}\gamma^1_1
\times \mathsf{G}(\gamma^1_2 \times \gamma^1_2))$\\ \hline
\emph{call 3} & $\mathsf{adm}$ & $t_3$ & $(\mathsf{G}g^2_1,
\mathsf{G}(g^2_2 \times g^2_2))$ & $\mathsf{G}\gamma^1_1 \times
\mathsf{G}(\gamma^1_2 \times \gamma^1_2)$\\ \hline
\emph{call 4.1} & $\mathsf{adm}$ &
${\mathsf{flat}}\;({\mathsf{cons}}\;{\mathsf{const}}\;{\mathsf{nil}})$
& $g^3_1$ & $\mathsf{G}\gamma^2_1$\\ \hline
\emph{call 4.2} & $\mathsf{adm}$ & ${\mathsf{pairing}}\;
({\mathsf{inj}\,2})\;{\mathsf{const}}$ & $g^3_2 \times
g_2^3$ & $\mathsf{G}(\gamma^2_2 \times \gamma^2_2)$\\ \hline
\emph{call 4.1.1} & $\mathsf{adm}$ &
$\mathsf{cons}\,\mathsf{const}\,\mathsf{nil}$ & $\mathsf{G}h_1^{4.1}$ &
$\mathsf{List}\,(\mathsf{G}\gamma_1^{4.1})$\\ \hline
\emph{call 4.2.1} & $\mathsf{adm}$ & $\mathsf{inj}\,2$ &
$g_1^{4.2}$
& $\mathsf{G}\gamma^2_2$\\ \hline
\emph{call 4.2.2} & $\mathsf{adm}$ & $\mathsf{const}$ &
$g_1^{4.2}$
& $\mathsf{G}\gamma^2_2$\\ \hline
\emph{call 4.1.1.1} & $\mathsf{adm}$ & $\mathsf{const}$ &
$g_1^{4.1.1}$ & $\mathsf{G}\,\gamma_1^{4.1}$\\ \hline
\emph{call 4.1.1.2} & $\mathsf{adm}$ & $\mathsf{nil}$ &
$\mathsf{G}g_1^{4.1.1}$ & $\mathsf{List}(\mathsf{G}\gamma_1^{4.1})$\\ \hline
\end{tabular}}
\end{figure}
\noindent
where
\[\begin{array}{lll}
t & = & \mathsf{projpair} \;(\;{\mathsf{inj}}\;
(\,{\mathsf{flat}}\;({\mathsf{cons}}\;{\mathsf{const}}\;{\mathsf{
nil}}),\; {\mathsf{pairing}}\;
({\mathsf{inj}\,2})\;{\mathsf{const}}\;)\;)\\
t_2 & = & {\mathsf{inj}}\;
(\;{\mathsf{flat}}\;({\mathsf{cons}}\;{\mathsf{const}}\;{\mathsf{
nil}}),\; {\mathsf{pairing}}\;
({\mathsf{inj}\,2})\;{\mathsf{const}}\;)\\
t_3 & = & (\;{\mathsf{flat}}\;({\mathsf{cons}}\;{\mathsf{const}}\;{\mathsf{
nil}}),\; {\mathsf{pairing}}\;
({\mathsf{inj}\,2})\;{\mathsf{const}}\;)
\end{array}\]
\noindent
The steps of $\mathsf{adm}$ corresponding to these calls are given in
Table~\ref{fig:t}, with the most important components of these steps
listed explicitly. Since the solution to the generated set of
constraints imposes the requirement that $f =
\mathsf{map_{List}}\,\id_{\mathbb{N}} \; \times \; \id_\mathbb{N}$,
we conclude that the only function mappable over $t$ relative to the
specification $\mathsf{G}\,\beta_1$ is this $f$. This is precisely
the result obtained informally in Example~\ref{ex:ex3}.
\end{example}
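In contrast with the previous examples, here the solved system leaves no freedom at all: the constraints pin $f$ down to $\mathsf{map_{List}}\,\id_{\mathbb{N}} \times \id_{\mathbb{N}}$, which behaves as the identity on the term's payload. The following toy sketch makes this explicit:

```python
# The only mappable f is map_List id x id, which can only copy its input.
def times(f, g):
    return lambda p: (f(p[0]), g(p[1]))

identity = lambda x: x
map_list_id = lambda xs: [identity(x) for x in xs]   # map_List id

f = times(map_list_id, identity)
print(f(([1, 2], 7)))    # ([1, 2], 7)
```

That the single admissible function is essentially the identity is the algorithmic counterpart of the informal observation in Example~\ref{ex:ex3}.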
\begin{example}\label{ex:list}
For $t$ as in Example~\ref{ex:ex4} the call
$~\mathsf{adm}\;t\;\;f\;\;\mathsf{List}\,\beta_1~$ results in the
sequence of calls:
\begin{figure}[H]
\centering
\scalebox{.85}{
\begin{tabular}{|l|l l l l|}
\hline
\emph{call 1} & $\mathsf{adm}$ & $t$ & $f$ &
$\mathsf{List}\,\beta_1$\\ \hline
\emph{call 2} & $\mathsf{adm}$ &
$\mathsf{cons}\,(\mathsf{cons\,3\,nil})\,\mathsf{nil}$ &
$g^1_1$ & $\mathsf{List}\,\beta_1$\\ \hline
\emph{call 2.1} & $\mathsf{adm}$ &
$\mathsf{nil}$ & $g^2_1$ & $\mathsf{List}\,\beta_1$\\ \hline
\end{tabular}}
\end{figure}
\noindent
The steps of $\mathsf{adm}$ corresponding to these calls are given in
the table below, with the most important components of these steps
listed explicitly:
\begin{figure}[H]
\centering
\scalebox{.85}{
\begin{tabular}{|p{0.3cm}|p{1.5cm}|p{2.3cm}|p{2cm}|p{2.1cm}|p{0.5cm}|}
\hline
\thead{step} & \thead{{matching}} & \thead{$\ol\tau$} & \thead{$\ol
R$} & \thead{$\ol{\zeta}$} & \thead{{constraints}}\\
\thead{no.} & \thead{{problems}} & \thead{ } &\thead{ } & \thead{ } &
\thead{{added to $C$}}\\\hline\hline
\emph{1}
& $\beta_1 \equiv \gamma^1_1$
& $\tau_1 \beta_1 \gamma^1_1 = \beta_1$
& $R_1 = \beta_1$ \newline
$R_2 = \mathsf{List}\,\beta_1$
& $\zeta_{2,1}\beta_1\gamma^1_1 = \beta_1$
& $\langle g^1_1, f \rangle$
\\\hline
\emph{2}
& $\beta_1 \equiv \gamma^2_1$
& $\tau_1 \beta_1 \gamma^2_1 = \beta_1$
& $R_1 = \beta_1$ \newline
$R_2 = \mathsf{List}\,\beta_1$
& $\zeta_{2,1}\beta_1\gamma^2_1 = \beta_1$
& $\langle g^2_1, g^1_1 \rangle$
\\\hline
\emph{2.1}
& $\beta_1 \equiv \gamma^{2.1}_1$
& $\tau_1 \beta_1 \gamma^{2.1}_1 = \beta_1$
& $R_1 = 1$
&
& $\langle g^{2.1}_1, g^2_1 \rangle$
\\\hline
\end{tabular}}
\end{figure}
\noindent
Since the solution to the generated set of constraints imposes the
requirement that $f = g^{2.1}_1$, we conclude that any function $f :
\mathsf{List}\,\mathbb{N} \to X$ (for some type $X$) is mappable over
$t$ relative to the specification $\mathsf{List}\,\beta_1$.
\end{example}
\begin{example}\label{ex:ex5-again}
For $t$ as in Example~\ref{ex:ex5} the call
$~\mathsf{adm}\;t\;\;f\;\;\mathsf{List\,(List\,\beta_1)}~$ results in
the following sequence of calls:\looseness=-1
\vspace*{0.1in}
\begin{figure}[H]
\centering
\scalebox{.85}{
\begin{tabular}{|l|l l l l|}
\hline
\emph{call 1} & $\mathsf{adm}$ & $t$ & $f$ &
$\mathsf{List}\,(\mathsf{List}\,\beta_1)$\\ \hline
\emph{call 2.1} & $\mathsf{adm}$ &
$\mathsf{cons\,1}\,(\mathsf{cons\,2\,nil})$ & $g^1_1$ &
$\mathsf{List}\,\beta_1$\\ \hline
\emph{call 2.2} & $\mathsf{adm}$ &
$\mathsf{cons}\,(\mathsf{cons\,3\,nil})\,\mathsf{nil}$ &
$\mathsf{List}\,g^1_1$ &
$\mathsf{List}\,(\mathsf{List}\,\beta_1)$\\ \hline
\emph{call 2.1.1} & $\mathsf{adm}$ &
$\mathsf{cons\,2\,nil}$ & $g^{2.1}_1$ &
$\mathsf{List}\,\beta_1$\\ \hline
\emph{call 2.2.1} & $\mathsf{adm}$ &
$\mathsf{cons\,3\,nil}$ & $g^{2.2}_1$ &
$\mathsf{List}\,\beta_1$\\ \hline
\emph{call 2.2.2} & $\mathsf{adm}$ &
$\mathsf{nil}$ & $\mathsf{List}\,g^{2.2}_1$ &
$\mathsf{List}\,(\mathsf{List}\,\beta_1)$\\ \hline
\emph{call 2.1.1.1} & $\mathsf{adm}$ &
$\mathsf{nil}$ & $g^{2.1.1}_1$ &
$\mathsf{List}\,\beta_1$\\ \hline
\emph{call 2.2.1.1} & $\mathsf{adm}$ &
$\mathsf{nil}$ & $g^{2.2.1}_1$ &
$\mathsf{List}\,\beta_1$\\ \hline
\end{tabular}}
\end{figure}
\vspace*{0.1in}
\noindent
The steps of $\mathsf{adm}$ corresponding to these calls are given in
the table below, with the most important components of these steps
listed explicitly:
\vspace*{0.1in}
\begin{figure}[H]
\centering
\scalebox{.85}{
\begin{tabular}{|p{1cm}|p{2.4cm}|p{2.9cm}|p{2.8cm}|p{3.5cm}|p{2.8cm}|}
\hline
\thead{step} & \thead{{matching}} & \thead{$\ol\tau$} & \thead{$\ol
R$} & \thead{$\ol{\zeta}$} & \thead{{constraints}}\\
\thead{no.} & \thead{{problems}} & \thead{ } &\thead{ } & \thead{ } &
\thead{{added to $C$}}\\\hline\hline
\emph{1}
& $\mathsf{List}\,\beta_1 \equiv \gamma^1_1$
& $\tau_1 \beta_1 \gamma^1_1 = \mathsf{List}\,\beta_1$
& $R_1 = \mathsf{List}\,\beta_1$ \newline
$R_2 = \mathsf{List}\,(\mathsf{List}\,\beta_1)$
& $\zeta_{1,1}\beta_1\gamma^1_1 = \beta_1$ \newline
$\zeta_{2,1}\beta_1\gamma^1_1 = \mathsf{List}\,\beta_1$
& $\langle \mathsf{List}\,g^1_1, f \rangle$
\\\hline
\emph{2.1}
& $\beta_1 \equiv \gamma^{2.1}_1$
& $\tau_1 \beta_1 \gamma^{2.1}_1 = \beta_1$
& $R_1 = \beta_1$ \newline
$R_2 = \mathsf{List}\,\beta_1$
& $\zeta_{2,2}\beta_1\gamma^{2.1}_1 = \beta_1$
& $\langle g^{2.1}_1, g^1_1 \rangle$
\\\hline
\emph{2.2}
& $\mathsf{List}\,\beta_1 \equiv \gamma^{2.2}_1$
& $\tau_1 \beta_1 \gamma^{2.2}_1 = \mathsf{List}\,\beta_1$
& $R_1 = \mathsf{List}\,\beta_1$ \newline
$R_2 = \mathsf{List}\,(\mathsf{List}\,\beta_1)$
& $\zeta_{1,1}\beta_1\gamma^{2.2}_1 = \beta_1$ \newline
$\zeta_{2,1}\beta_1\gamma^{2.2}_1 = \mathsf{List}\,\beta_1$
& $\langle \mathsf{List}\, g^{2.2}_1, \mathsf{List}\,g^1_1 \rangle$
\\\hline
\emph{2.1.1}
& $\beta_1 \equiv \gamma^{2.1.1}_1$
& $\tau_1 \beta_1 \gamma^{2.1.1}_1 = \beta_1$
& $R_1 = \beta_1$ \newline
$R_2 = \mathsf{List}\,\beta_1$
& $\zeta_{2,2}\beta_1\gamma^{2.1.1}_1 = \beta_1$
& $\langle g^{2.1.1}_1, g^{2.1}_1 \rangle$
\\\hline
\emph{2.2.1}
& $\beta_1 \equiv \gamma^{2.2.1}_1$
& $\tau_1 \beta_1 \gamma^{2.2.1}_1 = \beta_1$
& $R_1 = \beta_1$ \newline
$R_2 = \mathsf{List}\,\beta_1$
& $\zeta_{2,2}\beta_1\gamma^{2.2.1}_1 = \beta_1$
& $\langle g^{2.2.1}_1, g^{2.2}_1 \rangle$
\\\hline
\emph{2.2.2}
& $\mathsf{List}\,\beta_1 \equiv \gamma^{2.2.2}_1$
& $\tau_1 \beta_1 \gamma^{2.2.2}_1 = \mathsf{List}\,\beta_1$
& $R_1 = 1$
&
& $\langle \mathsf{List}\,g^{2.2.2}_1, \mathsf{List}\,g^{2.2}_1 \rangle$
\\\hline
\emph{2.1.1.1}
& $\beta_1 \equiv \gamma^{2.1.1.1}_1$
& $\tau_1 \beta_1 \gamma^{2.1.1.1}_1 = \beta_1$
& $R_1 = 1$
&
& $\langle g^{2.1.1.1}_1, g^{2.1.1}_1 \rangle$
\\\hline
\emph{2.2.1.1}
& $\beta_1 \equiv \gamma^{2.2.1.1}_1$
& $\tau_1 \beta_1 \gamma^{2.2.1.1}_1 = \beta_1$
& $R_1 = 1$
&
& $\langle g^{2.2.1.1}_1, g^{2.2.1}_1 \rangle$
\\\hline
\end{tabular}}
\end{figure}
\noindent
Since the solution to the generated set of constraints imposes the
requirement that $f = \mathsf{List}\,g^{2.2.1.1}_1$, we conclude that
the most general functions mappable over $t$ relative to the
specification $\mathsf{List}\,(\mathsf{List}\,\beta_1)$ are those of
the form $f = \mathsf{map_{List}}\,f'$ for some type $X$ and function
$f' : \mathbb{N} \to X$.
\end{example}
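Reading the term off the calls above, $t$ is the list of lists $[[1,2],[3]]$, and relative to the specification $\mathsf{List}\,(\mathsf{List}\,\beta_1)$ the mappable functions act on the inner lists and must do so elementwise, i.e.\ $f = \mathsf{map_{List}}\,f'$. A sketch, with an arbitrary hypothetical choice of $f'$:

```python
# Relative to List (List b), f must be map_List f' on the inner lists.
def map_list(g):
    return lambda xs: [g(x) for x in xs]

f = map_list(lambda n: n * n)    # f = map_List f', with f' n = n*n
t = [[1, 2], [3]]                # the term read off from the calls

print([f(xs) for xs in t])       # [[1, 4], [9]]
```

Note the contrast with Example~\ref{ex:list}: there, relative to the shallower specification $\mathsf{List}\,\beta_1$, the inner lists themselves were the elements and $f$ was unconstrained.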
\section{Conclusion and Future
Directions}\label{sec:conclusion}
The work reported here is part of a larger effort to develop a single,
unified categorical theory of data types. In particular, it can be
seen as a first step toward a properly functorial initial algebra
semantics for GADTs that specializes to the standard functorial
initial algebra semantics for nested types (which itself subsumes the
standard such semantics for ADTs) whenever the GADT in question is a
nested type (or ADT).\looseness=-1
Categorical semantics of GADTs have been studied in \cite{hf11} and
\cite{jg08}. Importantly, both of these works interpret a GADT as a
fixpoint of a higher-order endofunctor $\funccat {U} \Set \to \funccat
{U} \Set$, where the category $U$ is {\em discrete}. As discussed in
Section~\ref{sec:introduction}, this destroys one of the main benefits
of interpreting a data type $\mathsf{D}$ as a fixpoint $\mu
F_\mathsf{D}$ of a higher-order endofunctor $F_\mathsf{D}$, namely the
existence of a non-trivial map function. Indeed, the action on
morphisms of $\mu F_\mathsf{D}$ should interpret the map function
$\mathsf{map_D}$ standardly associated with $\mathsf{D}$. But in the
discrete settings of \cite{hf11} and \cite{jg08}, the resulting
endofunctor $\mu F_\mathsf{D} : U \to \Set$ has very little to say
about the interpretation of $\mathsf{map_{D}}$, since its functorial
action need only specify the result of applying $\mathsf{map_{D}}$ to
a function $\mathsf{f : A \to B}$ when $\mathsf{B}$ is $\mathsf{A}$
and $\mathsf{f}$ is the identity function on $\mathsf{A}$. In
addition,~\cite{hf11} cannot handle truly nested data types such as
$\mathsf{Bush}$ or the GADT $\mathsf{G}$ from Example~\ref{ex:ex2}.
The resulting discrete initial algebra semantics for GADTs thus do not
recover the usual functorial initial algebra semantics of nested types
(including ADTs and truly nested types) when instantiated to these
special classes of GADTs.
In~\cite{fio12} an attempt is made to salvage the method
from~\cite{hf11} while taking the aforementioned issues into
account. The overall idea is to relax the discreteness requirement on
the category $U$, and to replace dependent products and sums in the
development of~\cite{hf11} with left and right Kan extensions,
respectively. But then the domain of $\mu F_{\mathsf{D}}$ must be the
category of all interpretations of types and all morphisms between
them, which in turn leads to the inclusion of unwanted junk elements
obtained by map closure, as already described in
Section~\ref{sec:introduction} of~\cite{jp19}. So this solution also
fails to bring us closer to a semantics of the kind we are aiming for.
{\em Containers}~\cite{aag03,aag05} provide an entirely different
approach to describing the functorial action of an ADT or nested type.
In this approach an element of such a type is described first by its
structure, and then by the data that structure contains. That is, a
ADT or nested type $\type D$ is seen as comprising a set
$S_{\mathsf{D}}$ of {\em shapes} and, for each shape $s\in
S_{\mathsf{D}}$, a set $P_{\,\mathsf{D},s}$ of {\em positions} in
$s$. If $\mathsf{A}$ is a type, then an element of $\type D\,\type A$
consists of a choice of a shape $s$ and a labeling of each of position
in $s$ by elements of $\type A$. Thus, if $A$ interprets $\type A$,
then $\type D\,\type A$ is interpreted as a labeling $\sum_{s\in
S_{\type D}}(P_{\,\type D,s} \to A)$. The interpretation $D$ for
$\mathsf{D}$ simply abstracts this interpretation over $\mathsf{D}$'s
input type, and, for any morphism $f : A \to B$, the functorial action
$D\,f : \sum_{s\in S_{\type D}}(P_{\,\type D,s} \to A) \,\to\, \sum_{s\in
S_{\type D}}(P_{\,\type D,s} \to B)$ is obtained by post-composition.
This functorial action does indeed interpret $\mathsf{map_{D}}$: given
a shape and a labeling of its positions by elements of $\type A$, we
get automatically a data structure of the same shape whose positions
are labeled by elements of $\type B$ as soon as we have a function
$\mathsf{f : A \to B}$ to translate elements of $\type A$ to elements
of $\type B$.\looseness=-1
GADTs that go beyond ADTs and nested types have been studied from the
container point of view as {\em indexed containers}, both in
\cite{agh+15} and again in \cite{hf11}. The authors of \cite{agh+15}
propose encoding strictly positive indexed data types in terms of some
syntactic combinators they consider ``categorically
inspired''. However, as far as we understand their claim, map
functions and their interpretations as functorial actions are not
worked out for indexed containers. The encoding in~\cite{agh+15}
nevertheless remains essential to understanding GADTs and other
inductive families as ``structures containing data''. With respect to
it, our algorithm can be understood as determining how ``containery''
a GADT $\mathsf{D}$ written in, say, Haskell or Agda is. Indeed,
given a term $t$ whose type is an instance of $\mathsf{D}$, our
algorithm can determine $t$'s shape and positions, so there is no
longer any need to guess or otherwise divine them. Significantly,
there appears to be no general technique for determining the shapes
and positions of the elements of a data type just from the type's
programming language definition, and the ability to determine
appropriate shapes and position sets usually comes only with a deep
understanding of, and extensive experience with, the data structures
at play.\looseness=-1
We do not know of any other careful study of the functorial action of
type-indexed strictly positive inductive families. The work reported
here is the result of such a study for a specific class of such types,
namely the GADTs described in Equations~(\ref{eq:gadts}) and
(\ref{eq:data-constr-types}). Our algorithm defines map functions for
GADTs that coincide with the usual ones for GADTs that are ADTs and
nested types. The map functions computed by our algorithm will guide
our ongoing efforts to give functorial initial algebra semantics for
GADTs that subsume the usual ones for ADTs and nested types as
fixpoints of higher-order endofunctors.
\vspace*{0.1in}
\noindent
{\bf Acknowledgments} This research was supported in part by NSF award
CCR-1906388. It was performed while visiting Aarhus University's Logic
and Semantics group, which provided additional support via Villum
Investigator grant no.~25804, Center for Basic Research in Program
Verification.
Discussions of the spatial forms of physical materials use in a natural way geometrical and topological concepts. It is to be expected that arrangements of matter should form patterns that are described by pre-existing mathematical structures drawn from geometry and topology. But theoretical physicists also deal with abstract entities, which do not have an actual material presence. Still geometrical and topological considerations are relevant to these ephemeral theoretical constructs. I have in mind fields, both classical and quantum, which enter into our theories of fundamental processes. These fields $\phi (x)$ provide a mapping from a ``base" space or space-time on which they are defined into the field ``target" manifold on which they range. The base and target spaces, as well as the mapping, may possess some non-trivial topological features, which affect the fixed time description and the temporal evolution of the fields, thereby influencing the physical reality that these fields describe. Quantum fields of a quantum field theory are operator valued distributions whose relevant topological properties are obscure. Nevertheless, topological features of the corresponding classical fields are important in the quantum theory for a variety of reasons: (i) Quantized fields can undergo local (space-time dependent) transformations (gauge transformations, coordinate diffeomorphisms) that involve classical functions whose topological properties determine the allowed quantum field theoretic structures. (ii) One formulation of quantum field theory uses a functional integral over classical fields, and classical topological features become relevant. (iii) Semi-classical (WKB) approximations to the quantum theory rely on classical dynamics, and again classical topology plays a role in the analysis.
Topological effects in quantum electrodynamics were first appreciated by Dirac in his study of the quantum mechanics for (hypothetical) magnetic monopoles. This analysis leads directly to the modern analysis of Yang-Mills theory -- the contemporary generalization of Maxwell's electrodynamics -- and has yielded several significant results: the discovery of the $\theta$-vacuum angle; the recognition that c-number parameters in the theory may require quantization for topological reasons (like Dirac's monopole strength); the realization that the chiral anomaly equation is just the local version of the celebrated Atiyah-Singer index theorem.
Here I shall not describe the Yang-Mills investigations; they are too technical and too specialized for this general audience. Rather I shall show you how a topological effect in a condensed matter situation leads to charge fractionalization. This phenomenon has a physical realization in 1-dimensional (lineal) polymers, like polyacetylene, and in 2-dimensional (planar) systems, like the Hall effect.
The polyacetylene story is especially appealing, because it can be told in several ways: in pictorial terms which only involves counting, or in the first quantized formalism for quantum mechanical equations, or in the second quantized formalism of a quantum field theory \cite{rj1}.
\section{The Polyacetylene Story (Counting Argument)}
Polyacetylene is a material consisting of parallel chains of carbon atoms, with electrons moving primarily along the chains, while hopping between chains is strongly suppressed. Consequently, the system is effectively 1-dimensional. The distance between carbon atoms is about 1\AA.
If the atoms are considered to be completely stationary, {\it i.e.} rigidly attached to their equilibrium lattice sites, electron hopping along the chain is a structureless phenomenon.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.26]{jackiw_scan003.eps}
\caption{(a) The rigid lattice of polyacetylene; (O) the carbon atoms are equally spaced 1\,\AA\ apart. (b), (c) The effect of Peierls' instability is to shift the carbon atoms 0.04\,\AA\ to the right (A) or to the left (B), thus giving rise to a double degeneracy.}
\label{fig:example1}
\end{figure}
However, the atoms can oscillate around their rigid lattice positions for a variety of reasons, like zero-point motion, thermal excitation, etc. It might be thought that these effects merely give rise to a slight fuzzing of the undistorted-lattice situation.
In fact this is not correct; something more dramatic takes place. Rather than oscillating about the rigid-lattice sites, the atoms first shift a distance of about 0.04\,\AA\ and then proceed to oscillate around the new, slightly distorted location. That this should happen was predicted by Peierls, and is called the Peierls instability. Due to reflection symmetry, there is no difference between a shift to the right or a shift to the left; the material chooses one or the other, thus breaking spontaneously the reflection symmetry, and giving rise to doubly degenerate vacua, called A and B.
If the displacement is described by a field $\phi$ which depends on the position x along the lattice, the so-called phonon field, then Peierls' instability, as well as detailed dynamical calculations indicate that the energy density $V(\phi)$, as a function of constant $\phi$, has a double-well shape.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.16]{En_densA.eps}
\caption{Energy density $V(\phi)$ as a function of a constant phonon field $\phi$. The symmetric stationary point, $\phi = 0$, is unstable. Stable vacua are at $\phi = +|\phi_0|$ (A) and $\phi = -|\phi_0|$ (B).}
\label{fig:example2}
\includegraphics[scale=.22]{En_densB.eps}
\caption{The two constant fields, $\pm|\phi_0|$, correspond to the two vacua (A and B). The two kink fields, $\pm \phi_s$, interpolate between the vacua and represent domain walls.}
\label{fig:En}
\end{figure}
The symmetric point $\phi=0$ is unstable; the system in its ground state must choose one of the two equivalent ground states $\phi= \pm |\phi_0| = \pm 0.04$\,\AA. In the ground states, the phonon field has uniform values, independent of x.
By now it is widely appreciated that whenever the ground state is degenerate there frequently exist additional stable states of the system, for which the phonon field is non-constant. Rather, as a function of x, it interpolates, when x passes from negative to positive infinity, between the allowed ground states. These are the famous solitons, or kinks. For polyacetylene they correspond to domain walls which separate regions with vacuum A from those with vacuum B, and vice versa. One represents the chemical bonding pattern by a double bond connecting atoms that are closer together, and a single bond connecting those that are further apart.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.38]{dots.eps}
\caption{Polyacetylene states. The equally spaced configuration (O) possesses a left-right symmetry, which however is energetically unstable. Rather, in the ground states the carbon atoms shift a distance $\mu$ to the left or right, breaking the symmetry and producing two degenerate vacua (A, B). A soliton (S) is a defect in the alternation pattern; it provides a domain wall between configurations (A) and (B).
}
\label{fig:example4}
\end{figure}
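As a small numerical aside (the units and the quartic potential below are our illustrative choices, not specified in the text): with a double-well energy density $V(\phi)=\frac{1}{2}(\phi^2-\phi_0^2)^2$, the static kink $\phi_s(x)=\phi_0\tanh(\phi_0 x)$ solves the domain-wall equation $\phi''=dV/d\phi$ and interpolates between the two vacua, which can be checked by finite differences:

```python
import numpy as np

# Sketch (our illustrative model): double-well density V = (phi^2 - phi0^2)^2 / 2,
# static kink phi_s(x) = phi0 * tanh(phi0 * x) interpolating between -phi0 and +phi0.
phi0 = 1.0
x = np.linspace(-5, 5, 2001)
h = x[1] - x[0]
phi = phi0 * np.tanh(phi0 * x)

# Check phi'' = dV/dphi pointwise, away from the endpoints
lhs = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / h**2    # phi''
rhs = 2 * phi[1:-1] * (phi[1:-1]**2 - phi0**2)       # dV/dphi
print(np.max(np.abs(lhs - rhs)))   # ~ 0 (discretization error only)
```

The residual is set purely by the $O(h^2)$ error of the centered second difference.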
Consider now a polyacetylene sample in the A vacuum, but with two solitons along the chain. Let us count the number of links in the sample without solitons and compare it with the number of links when two solitons are present. It suffices to examine the two chains only in the region where they differ, {\it i.e.} between the two solitons. Vacuum A exhibits 5 links, while the addition of two solitons decreases the number of links to 4. The two soliton state exhibits a deficit of one link. If now we imagine separating the two solitons a great distance, so that they act independently of one another, then each soliton carries a deficit of half a link, and the quantum numbers of the link, for example the charge, are split between the two states. This is the essence of fermion fractionization.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.25]{chembond.eps}
\caption{(a), (b) Pattern of chemical bonds in vacua A and B. (c) Two solitons inserted into vacuum A.}
\label{fig:example5}
\end{figure}
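The counting can be mimicked in a few lines of code (a toy model; the bond strings below are schematic and chosen by us, not taken from lattice data):

```python
# '=' marks a double bond, '-' a single bond, over the same stretch of bonds.
vacuum_A     = "=-=-=-=-=-"   # pure A-phase dimerization: 5 double bonds
two_solitons = "=--=-=--=-"   # same stretch with two alternation defects: 4

deficit = vacuum_A.count("=") - two_solitons.count("=")
per_soliton = deficit / 2      # far-separated solitons share the deficit equally
print(deficit, per_soliton)    # 1 0.5
```

Each widely separated defect thus carries half a link's worth of quantum numbers.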
It should be emphasized that we are not here describing the familiar situation of an electron moving around a two-center molecule, spending ``half" the time with one nucleus and ``half" with the other. Then one might say that the electron is split in half, on the average; however fluctuations in any quantity are large. But in our soliton example, the fractionization is without fluctuations; in the limit of infinite separation one achieves an eigenstate with fractional eigenvalues.
We must however remember that the link in fact corresponds to two states: an electron with spin up and another with spin down. This doubling obscures the dramatic charge $\frac{1}{2}$ effect, since everything must be multiplied by 2 to account for the two states. So in polyacetylene, a soliton carries a charge deficit of one unit of electric charge. Nevertheless charge fractionization leaves a spur: the soliton state has net charge, but no net spin, since all of the electron spins are paired. If an additional electron is inserted into the sample, the charge deficit is extinguished, and one obtains a neutral state, but now there is a net spin. These spin-charge assignments (charged -- without spin, neutral -- with spin) are unexpected, but in fact have been observed, and provide experimental verification for the soliton picture and fractionalization in polyacetylene.
Notice that in this simple counting argument no mention is made of topology. This feature emerges only when an analytic treatment is given. I now turn to this.
\section{The Polyacetylene Story (Quantum Mechanics)}
I shall now provide a calculation which shows how charge $1/2$ arises in the quantum mechanics of fermions in interaction with solitons. The fermion dynamics are governed by a one-dimensional Dirac Hamiltonian, $H(\phi)$, which also depends on a background phonon field $\phi$, with which the fermions interact. The Dirac Hamiltonian arises not because the electrons are relativistic. Rather it emerges in a certain well-formulated approximation to the microscopic theory, which yields a quantal equation that is a $2\times2$ matrix equation, like a Dirac equation. In the vacuum sector, $\phi$ takes on a constant value $\phi_0$, appropriate to the vacuum. When a soliton is present, $\phi$ becomes the appropriate, static soliton profile $\phi_s$. We need not be any more specific. We need not insist on any explicit soliton profile. All that we require is that the topology [{\it i.e.} the large distance behavior] of the soliton profile be non-trivial.
In the present lineal case the relevant topology is that infinity corresponds to two points, the end points of the line, and the phonon field in the soliton sector behaves differently at the points at infinity.
To analyze the system we need the eigenmodes, both in the vacuum and soliton sectors.
\begin{eqnarray}
H(\phi_0) \psi^v_E &=& E \psi^v_E \label{fig1}\\
H(\phi_s) \psi^s_E &=& E \psi^s_E
\label{fig2}
\end{eqnarray}
The Dirac equation is like a matrix-valued ``square root" of the wave equation. Because a square root is involved, there will be in general negative energy solutions and positive energy solutions. The negative energy solutions correspond to the states in the valence band; the positive energy ones, to the conduction band. In the ground state, all the negative energy levels are filled, and the ground state charge is the integral over all space of the charge density $\rho (x)$, which in turn is constructed from all the negative energy wave functions.
\begin{equation}
\rho(x) = \int\limits_{-\infty}^{0} dE \, \rho_E \, (x), \qquad
\rho_E (x) = \psi^\ast_E \, (x)\, \psi_E\, (x) \label{fig3}
\end{equation}
Of course integrating (\ref{fig3}) over $x$ will produce an infinity; to renormalize we measure all charges relative to the ground state in the vacuum sector. Thus the soliton charge is
\begin{equation}
Q = \int \, dx \, \int\limits_{-\infty}^{0} dE\, \{\rho^s_E\, (x) - \rho^v_E\, (x)\}
\label{fig4}
\end{equation}
Eq. (\ref{fig4}) may be completely evaluated without explicitly specifying the soliton profile or actually solving for the negative energy modes, provided H possesses a further property. We assume that there exists a conjugation symmetry which takes positive energy solutions of (\ref{fig1}) and (\ref{fig2}) into negative energy solutions. (This is true for polyacetylene.) That is, we assume that there exists a unitary $2\times2$ matrix $M$, such that
\begin{equation}
M \psi_E = \psi_{- E}
\label{fig5}
\end{equation}
An immediate consequence, crucial to the rest of the argument, is that the charge density at $E$ is an even function of $E$.
\begin{equation}
\rho_{\scriptscriptstyle E} (x) = \rho_{\scriptscriptstyle - E}(x)
\label{fig6}
\end{equation}
Whenever one solves a conjugation symmetric Dirac equation, with a topologically interesting background field, like a soliton, there always are, in addition to the positive and negative energy solutions related to each other by conjugation, self-conjugate, normalizable zero-energy solutions. That this is indeed true can be seen by explicit calculation. However, the occurrence of the zero mode is also predicted by very general mathematical theorems about differential equations. These so-called ``index theorems'' count the zero eigenvalues, and ensure that the number is non-vanishing whenever the topology of the background is non-trivial. We shall assume that there is just one zero mode, described by the normalized wave function $\psi_0$.
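A quick numerical illustration (our own lattice discretization; the text commits to no profile, so we take $\phi_s(x)=\tanh x$): in the chiral basis, $H=\sigma_1 p+\sigma_2\phi(x)$ is off-diagonal, built from the operator $A=d/dx+\phi(x)$, and a forward-difference $A$ maps $N$ sites to $N-1$ links, so it has a one-dimensional kernel: the single zero mode bound to the kink, as the index theorem predicts.

```python
import numpy as np

# Our discretization (not from the text): H = [[0, -iA], [iA^T, 0]] with
# A = d/dx + phi(x).  The -i phases do not affect the spectrum, so we
# diagonalize the equivalent real symmetric form.
N, L = 401, 20.0
x = np.linspace(-L, L, N)
a = x[1] - x[0]
phi = np.tanh(x)

A = np.zeros((N - 1, N))
for j in range(N - 1):
    mid = 0.5 * (phi[j] + phi[j + 1])          # phi at the link midpoint
    A[j, j]     = -1.0 / a + 0.5 * mid
    A[j, j + 1] =  1.0 / a + 0.5 * mid

H = np.block([[np.zeros((N - 1, N - 1)), A],
              [A.T, np.zeros((N, N))]])

E = np.linalg.eigvalsh(H)                      # sorted ascending
print(np.min(np.abs(E)))                       # ~ 0: the single zero mode
print(np.max(np.abs(E + E[::-1])))             # ~ 0: E -> -E conjugation symmetry
```

The zero eigenvalue is exact here because $A$ is an $(N-1)\times N$ matrix, a finite-dimensional shadow of the index theorem.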
To evaluate the charge $Q$ in (\ref{fig4}), we first recall that the wave functions are complete, both in the soliton sector and in the vacuum sector.
\begin{equation}
\int\limits_{-\infty}^{\infty} dE\, \psi^\ast_E \, (x)\, \psi_E (y) = \delta (x-y)
\label{fig7}
\end{equation}
As a consequence, it follows that
\begin{subequations}
\begin{equation}
\int\limits_{-\infty}^{\infty} dE\, [\rho^s_E \, (x) \, -\rho^v_E\, (x)] = 0
\label{fig8}
\end{equation}
In the above completeness integral over all energies, we record separately the negative energy contributions, the positive energy contributions, and for the soliton, the zero-energy contribution. Since the positive energy charge density is equal to the negative one, by virtue of (\ref{fig6}), we conclude that (\ref{fig8}) may be equivalently written as an integral over negative $E$.
\begin{equation}
\int\limits_{-\infty}^{0} dE\, [2\rho^s_E\, (x) - 2\rho^v_E\, (x)] + \psi^\ast_0\, (x) \, \psi_0 \, (x) =0
\end{equation}
\end{subequations}
Rearranging terms gives
\begin{equation}
Q= \int d x \int\limits_{-\infty}^{0} dE\, [\rho^s_E (x) - \rho^v_E (x)] = -\frac{1}{2} \int d x\, \psi^\ast_0 (x)\, \psi_0 (x) = -\frac{1}{2}
\label{fig9}
\end{equation}
This is the final result: the soliton's charge is $-\frac{1}{2}$; a fact that follows from completeness (\ref{fig7}) and conjugation symmetry (\ref{fig6}). It is seen in (\ref{fig9}) that the zero-energy mode is essential to the conclusion. The existence of the zero mode in the conjugation symmetric case is assured by the non-trivial topology of the background field. The result is otherwise completely general.
\section{The Polyacetylene Story (Quantum Field Theory)}
The quantum mechanical derivation that I just presented does not address the question of whether the fractional half-integer charge is merely an uninteresting expectation value or whether it is an eigenvalue. To settle this, we need a quantum field theory approach; that is, we need to second-quantize the field. For this, we expand $\Psi$, which now is an anti-commuting quantum field operator, in eigenmodes of our Dirac equation in the soliton sector as
\begin{eqnarray}
\Psi &=& \sum\limits_E (b_E \, \psi^s_E + d^\dagger_E \, \psi^s_{-E}) + a \psi_0 \nonumber\\
\Psi^\dagger&=& \sum\limits_E (b^\dagger_E \, \psi^{s \ast}_E + d_E \, \psi^{s \ast}_{-E}) + a^\dagger \psi^\ast_0
\label{fig10}
\end{eqnarray}
The important point is that while the finite energy modes $\psi^s_{\pm E}$ enter with particle (conduction band) annihilation operators $b_E$ and anti-particle (valence band) creation operators $d^\dagger_E$, the zero mode does not have a partner and is present in the sum simply with the operator $a$. The zero energy state is therefore doubly degenerate. It can be empty, $\mid->$, or filled, $\mid+>$, and the $a, a^\dagger$ operators are realized as
\begin{equation}
a \mid+> = \mid->,\ a^\dagger \mid+> = 0,\ a \mid-> = 0,\ a^\dagger \mid-> = \mid+>
\label{eq11}
\end{equation}
The charge operator $Q = \int d x\, \Psi^\dagger \Psi$ must be properly defined to avoid infinities. This is done, according to Schwinger's prescription in the vacuum sector, by replacing the formal expression by
\begin{equation}
Q = \frac{1}{2} \int d x \, (\Psi^\dagger \Psi - \Psi \Psi^\dagger)
\label{eq12}
\end{equation}
We adopt the same regularization prescription for the soliton sector and insert our expansion (\ref{fig10}) into (\ref{eq12}). We find with the help of the orthonormality of wave functions
\begin{eqnarray}
Q &=& \frac{1}{2} \, \sum\limits_E \, (b^\dagger_E \, b_E + d_E\, d^\dagger_E - b_E \, b^\dagger_E - d^\dagger_E \, d_E) + \frac{1}{2} (a^\dagger a - a a^\dagger)\nonumber \\
&=& \sum\limits_E (b^\dagger_E\, b_E - d^\dagger_E \, d_E) + a^\dagger a - \frac{1}{2}
\label{eq13}
\end{eqnarray}
Therefore the eigenvalues for $Q$ are
\begin{equation}
Q \mid-> = -\frac{1}{2} \mid->, \ \ Q \mid+> = \frac{1}{2} \mid+> \text{!}
\label{eq14}
\end{equation}
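The zero-mode sector of the charge operator, the $a^\dagger a - \frac{1}{2}$ term in (\ref{eq13}), can be checked with a minimal $2\times2$ matrix representation (our toy realization of the algebra (\ref{eq11})):

```python
import numpy as np

# Basis ordering (|->, |+>): a|+> = |->, a|-> = 0.
aop = np.array([[0.0, 1.0],
                [0.0, 0.0]])

# Fermionic algebra {a, a^dagger} = 1
assert np.allclose(aop @ aop.T + aop.T @ aop, np.eye(2))

Q0 = aop.T @ aop - 0.5 * np.eye(2)   # zero-mode part of the charge
print(np.diag(Q0))                   # [-0.5  0.5]: fractional eigenvalues
```

The two eigenvalues $\mp\frac{1}{2}$ are exact, with no fluctuations, just as in the field theoretic argument.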
\section{Conclusion}
This then concludes my polyacetylene story, which has experimental realization and confirmation. And the remarkable effect arises from the non-trivial topology of the phonon field in the soliton sector.
Many other topological effects have been found in the field theoretic descriptions of condensed matter and particle physics. Yet we must notice that mostly these arise in phenomenological descriptions, not in the fundamental theory. In condensed matter the fundamental equation is the many-body Schr\"{o}dinger equation with Coulomb interactions. This does not show any interesting topological structure. Only when it is replaced by effective, phenomenological equations do topological considerations become relevant for the effective description. Fundamental (condensed matter) Nature is simple!
Similarly in particle physics, our phenomenological, effective theories, like the Skyrme model, enjoy a rich topological structure. Moreover, even the Yang-Mills theory of our fundamental ``standard particle physics model'' supports non-trivial topological structure, which leads to the QCD vacuum angle. In view of my previous observation, can we take this as indirect evidence that this Yang-Mills based theory also is a phenomenological, effective description, and that at a more fundamental level -- yet to be discovered -- we shall find a simpler description that does not have any elaborate mathematical structure? Perhaps in this final theory Nature will be described by simple counting rules -- like my first polyacetylene story. Surely this will not be the behemoth of string theory.
This work is supported in part by funds provided by the U.S. Department of Energy (D.O.E.) under cooperative research agreement DE-FC02-94ER40818.
\section{#1}}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\newcommand{\begin{displaymath}}{\begin{displaymath}}
\newcommand{\end{displaymath}}{\end{displaymath}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\begin{eqnarray*}}{\begin{eqnarray*}}
\newcommand{\end{eqnarray*}}{\end{eqnarray*}}
\renewcommand{\k}{\kappa}
\newcommand{\mu}{\mu}
\newcommand{\nu}{\nu}
\newcommand{\phi}{\phi}
\renewcommand{\r}{\rho}
\newcommand{\varrho}{\varrho}
\newcommand{{\cal H}}{{\cal H}}
\newcommand{{\cal N}}{{\cal N}}
\newcommand{{\cal D}}{{\cal D}}
\newcommand{{\cal O}}{{\cal O}}
\newcommand{{\cal L}}{{\cal L}}
\newcommand{\scriptscriptstyle}{\scriptscriptstyle}
\newcommand{\labell}[1]{\label{#1}\qquad_{#1}}
\newcommand{\reef}[1]{(\ref{#1})}
\newcommand{\nonumber}{\nonumber}
\newcommand{\partial}{\partial}
\newcommand{\ti}[1]{\tilde{#1}}
\newcommand{\textrm{d}}{\textrm{d}}
\newcommand{\mt}[1]{\textrm{\tiny #1}}
\newcommand{{\cal F}}{{\cal F}}
\newcommand{{\cal A}}{{\cal A}}
\newcommand{{\cal B}}{{\cal B}}
\newcommand{\ell_s}{\ell_s}
\newcommand{\zD}{\ensuremath{z_{D7}}}
\newcommand{\tzD}{\ensuremath{\zeta_m}}
\newcommand{\ensuremath{\tilde{\rho}}}{\ensuremath{\tilde{\rho}}}
\newcommand{\ensuremath{\tilde{z}}}{\ensuremath{\tilde{z}}}
\newcommand{\Rl}{\ensuremath{(R/l_s)^2}}
\newcommand{\tc}{\ensuremath{\sqrt{g_s N}}}
\newcommand{\stc}{\ensuremath{(g_sN)^{\frac{1}{4}}}}
\newcommand{\mq}{\ensuremath{m_q}}
\newcommand{\ensuremath{\mbox{\small eff.}}}{\ensuremath{\mbox{\small eff.}}}
\newcommand{\mbox{${\cal N}$}}{\mbox{${\cal N}$}}
\newcommand{\ensuremath{{\cal Y}}}{\ensuremath{{\cal Y}}}
\newcommand{\ensuremath{{\cal Y}^{\ell,\pm}}}{\ensuremath{{\cal Y}^{\ell,\pm}}}
\newcommand{\ensuremath{\frac{\ell}{2}}}{\ensuremath{\frac{\ell}{2}}}
\newcommand{\ensuremath{SU(2)_R\times SU(2)_L}}{\ensuremath{SU(2)_R\times SU(2)_L}}
\newcommand{\ensuremath{\bar{\rho}}}{\ensuremath{\bar{\rho}}}
\newcommand{\ensuremath{{SU(N)}} }{\ensuremath{{SU(N)}} }
\newcommand{\ensuremath{\frac{1}{2}}}{\ensuremath{\frac{1}{2}}}
\newcommand{\ensuremath{M_{\pi}}}{\ensuremath{M_{\pi}}}
\newcommand{\ensuremath{\Lambda_{\mbox{\small QCD}}}}{\ensuremath{\Lambda_{\mbox{\small QCD}}}}
\newcommand{\ensuremath{\chi+i\,e^{-\phi}}}{\ensuremath{\chi+i\,e^{-\phi}}}
\newcommand{\ensuremath{SL(2,\bbz{})}}{\ensuremath{SL(2,\bbz{})}}
\newcommand{\ensuremath{{\mathcal Im}}}{\ensuremath{{\mathcal Im}}}
\newcommand{\ensuremath{\bar{1}}}{\ensuremath{\bar{1}}}
\newcommand{\ensuremath{\bar{2}}}{\ensuremath{\bar{2}}}
\newcommand{\ensuremath{\bar{\imath}}}{\ensuremath{\bar{\imath}}}
\newcommand{\ensuremath{\bar{\jmath}}}{\ensuremath{\bar{\jmath}}}
\newcommand{\ensuremath{\bar{k}}}{\ensuremath{\bar{k}}}
\newcommand{\ensuremath{\bar{l}}}{\ensuremath{\bar{l}}}
\newcommand{\ensuremath{\bar{a}}}{\ensuremath{\bar{a}}}
\newcommand{\ensuremath{\bar{b}}}{\ensuremath{\bar{b}}}
\newcommand{\ensuremath{\bar{c}}}{\ensuremath{\bar{c}}}
\newcommand{\ensuremath{\bar{d}}}{\ensuremath{\bar{d}}}
\newcommand{\ensuremath{\bar{z}}}{\ensuremath{\bar{z}}}
\newcommand{\ensuremath{\bar{w}}}{\ensuremath{\bar{w}}}
\newcommand{\ensuremath{\bar{\zeta}}}{\ensuremath{\bar{\zeta}}}
\newcommand{\ensuremath{\bar{\tau}}}{\ensuremath{\bar{\tau}}}
\newcommand{\ensuremath{\bar{A}}}{\ensuremath{\bar{A}}}
\newcommand{\ensuremath{\bar{B}}}{\ensuremath{\bar{B}}}
\newcommand{\ensuremath{\bar{C}}}{\ensuremath{\bar{C}}}
\newcommand{\ensuremath{\bar{D}}}{\ensuremath{\bar{D}}}
\newcommand{\N}[1]{\ensuremath{{\cal N}=#1}}
\newcommand{\ensuremath{\tilde{K}}}{\ensuremath{\tilde{K}}}
\newcommand{{\bf Ai}}{{\bf Ai}}
\newcommand{{\bf I}}{{\bf I}}
\newcommand{{\bf J}}{{\bf J}}
\newcommand{{\bf K}}{{\bf K}}
\newcommand{\ensuremath{\tilde{\eta}}}{\ensuremath{\tilde{\eta}}}
\newcommand{\ensuremath{\bar{\partial}}}{\ensuremath{\bar{\partial}}}
\def\tilde{\lambda} {\tilde{\lambda}}
\def\tilde{r} {\tilde{r}}
\def\tilde{\rho} {\tilde{\rho}}
\def r_\mt{vac}{r_\mt{vac}}
\newcommand{\ensuremath{\vec{n}}}{\ensuremath{\vec{n}}}
\newcommand{\ensuremath{\tilde{\lambda}}}{\ensuremath{\tilde{\lambda}}}
\newcommand{\ensuremath{\cos\theta}}{\ensuremath{\cos\theta}}
\newcommand{\ensuremath{\sin\theta}}{\ensuremath{\sin\theta}}
\newcommand{\ensuremath{\partial_\sigma}}{\ensuremath{\partial_\sigma}}
\newcommand{\ensuremath{\dot{\theta}}}{\ensuremath{\dot{\theta}}}
\newcommand{\ensuremath{\dot{\varphi}}}{\ensuremath{\dot{\varphi}}}
\newcommand{\ensuremath{\varphi}}{\ensuremath{\varphi}}
\newcommand{\ensuremath{\partial_t}}{\ensuremath{\partial_t}}
\newcommand{\ensuremath{\partial_{\tau}}}{\ensuremath{\partial_{\tau}}}
\newcommand{\ensuremath{\tilde{\sigma}}}{\ensuremath{\tilde{\sigma}}}
\newcommand{\ensuremath{\varepsilon_i}}{\ensuremath{\varepsilon_i}}
\newcommand{\ensuremath{\sigma_0}}{\ensuremath{\sigma_0}}
\newcommand{\ensuremath{\mathrm{N}}}{\ensuremath{\mathrm{N}}}
\newcommand{\ensuremath{\NC^{rs}_{mn}}}{\ensuremath{\ensuremath{\mathrm{N}}^{rs}_{mn}}}
\newcommand{\ensuremath{\NC^{rs}_{mn}(\ei,\sz)}}{\ensuremath{\ensuremath{\mathrm{N}}^{rs}_{mn}(\ensuremath{\varepsilon_i},\ensuremath{\sigma_0})}}
\newcommand{\ensuremath{1+\ensuremath{\sin\frac{\sz}{2}}}}{\ensuremath{1+\ensuremath{\sin\frac{\sz}{2}}}}
\newcommand{\ensuremath{\sin\frac{\sz}{2}}}{\ensuremath{\sin\frac{\ensuremath{\sigma_0}}{2}}}
\newcommand{\ensuremath{\cos\frac{\sz}{2}}}{\ensuremath{\cos\frac{\ensuremath{\sigma_0}}{2}}}
\newcommand{\ensuremath{\mathrm{P}^l_m(\cos\sz)}}{\ensuremath{\mathrm{P}^l_m(\cos\ensuremath{\sigma_0})}}
\newcommand{\ensuremath{\mathrm{sign}}}{\ensuremath{\mathrm{sign}}}
\newcommand{\ensuremath{\hat{P}}}{\ensuremath{\hat{P}}}
\newcommand{\ensuremath{\mathbb{I}}}{\ensuremath{\mathbb{I}}}
\newcommand{{\cal E }}{{\cal E }}
\newcommand{\ensuremath{\mbox{arccosh}}}{\ensuremath{\mbox{arccosh}}}
\newcommand{\ensuremath{\mbox{cotan}}}{\ensuremath{\mbox{cotan}}}
\newcommand{\ensuremath{\mathcal{U}}}{\ensuremath{\mathcal{U}}}
\renewcommand{\Re}{\ensuremath{\mathrm{Re}}}
\renewcommand{\Im}{\ensuremath{\mathrm{Im}}}
\begin{document}
\title{\LARGE \bf Minimal area surfaces in \ads{3} through integrability}
\author{
Yifei He\thanks{E-mail: \texttt{[email protected]}} ,
Martin Kruczenski\thanks{E-mail: \texttt{[email protected]}} \\
Department of Physics and Astronomy, Purdue University, \\
525 Northwestern Avenue, W. Lafayette, IN 47907-2036.}
\maketitle
\begin{abstract}
Minimal area surfaces in \ads{3} ending on a given curve at the boundary are dual to planar Wilson loops in \N{4} SYM. In previous work
it was shown that the problem of finding such surfaces can be recast as that of finding an appropriate parameterization of the boundary contour
that corresponds to conformal gauge. A. Dekel was able to find such a reparameterization in a perturbative expansion around a circular contour.
In this work we show that for more general contours such a reparameterization can be found using a numerical procedure that does not rely on a perturbative expansion.
This provides further checks and applications of the integrability method. An interesting property of the method is that it uses as data the Schwarzian derivative of the contour
and therefore it has manifest global conformal invariance. Finally, we apply the Shanks transformation to extend the near-circular expansion
to larger deformations. The results are in agreement with the new method.
\end{abstract}
\clearpage
\newpage
\section{Introduction}
According to the AdS/CFT correspondence \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj}, the expectation value of a Wilson loop in $SU(N)$ \N{4} SYM theory, for large $N$ and at large 't Hooft coupling, can be computed by finding a minimal surface in AdS space \cite{Maldacena:1998im,Rey:1998ik} ending at the boundary on the Wilson loop. In the case of \ads{3}, a standard method to find such surfaces is Pohlmeyer reduction \cite{Pohlmeyer}. The equation of motion is simplified to a linear problem accompanied by a generalized cosh-Gordon/sinh-Gordon equation. Once that equation is solved, one can construct the surface and calculate the area. Over the years, much work has been done on the computation of Wilson loops of various shapes. In Minkowski signature, the most interesting cases are Wilson loops with light-like cusps \cite{Kruczenski:2002fb} due to their relation with scattering amplitudes \cite{Alday:2007hr,Alday:2009yn}. In Euclidean signature, the well-studied cases include circular Wilson loops \cite{Berenstein:1998ij}, wavy Wilson loops \cite{Semenoff:2004qr}, the cusp \cite{DGO} and, more generally, solutions in terms of Riemann theta functions \cite{Ishizeki:2011bf,Kruczenski:2013bsa}. Although Pohlmeyer reduction allows one to find solutions, in general, given an arbitrary smooth contour, it is not known how to find the minimal surface ending on it and compute the area. Essentially, the complication is that we have to solve an elliptic problem for an integrable system instead of a time evolution problem as is more common. In this paper, we focus on this problem. We consider the Euclidean case, namely a Euclidean Wilson loop confined to a plane, such that the dual surface is contained in an $\mathbb{H}_{3}$ subspace of $AdS_5$.
A formalism for approaching this problem in the Euclidean case was recently introduced in \cite{Kruczenski:2014bla}, where the calculation of the area of the minimal surface ending on a given boundary contour was reduced to finding a parameterization of the contour in terms of the conformal angle $\theta$ on the corresponding worldsheet. Once the conformal angle is found, one can express the area in terms of the Schwarzian derivative of the boundary contour with respect to the conformal angle. This formalism was used by A. Dekel in \cite{Dekel:2015bla} to study contours perturbatively around the circular contour, where the area was given as a series expansion in the perturbative parameter to high order. In \cite{Irrgang:2015txa}, the method was generalized to the Minkowski case, and in \cite{Huang:2016atz}, solutions given by Mathieu functions were found. However, a general analytical or numerical solution to the problem of finding the conformal parameter for a given contour is not known. In this paper we use the formalism given in \cite{Kruczenski:2014bla,Dekel:2015bla} and provide a numerical solution to the problem. Our main objective is to provide a check and an application of the integrability ideas that were used to develop the method. In particular, an important aspect of integrability that is manifest in this method is the existence of a one-parameter family of curves with the same area, related by a symmetry that changes the spectral parameter \cite{Ishizeki:2011bf,Kruczenski:2014bla}, known as $\lambda$-deformations \cite{Dekel:2015bla} or ``master'' symmetry \cite{Klose:2016uur,Klose:2016qfv}. In fact, in this last work it was shown that such a symmetry can be used to construct the non-local Yangian charges from the global symmetries.
The paper is organized as follows. In section \ref{setup}, we give a brief review of the general setup and previous results for studying minimal surfaces in $\mathbb{H}_{3}$. Following the formalism in \cite{Kruczenski:2014bla}, we describe how the boundary condition of the cosh-Gordon equation is encoded in the boundary contour and how the problem is reduced to finding the correct parameterization of the boundary contour. In the following section, we describe the numerical method used to find such a parameterization and give examples for
various contours. In section \ref{shanks}, we extend the perturbative results given in \cite{Dekel:2015bla} to regions where the original expansion diverges and reproduce the results of the new method as a check. In section \ref{areaforzeros}, we provide an area formula for contours where the Pohlmeyer holomorphic function $f(z)$ has zeros, to which the area formula given in \cite{Kruczenski:2014bla} does not apply. The last section gives our conclusions. It should be noted that it is also possible to attempt to solve for the minimal surface directly, see {\it e.g.}\ \cite{FT} and more recently \cite{Klose:2016uur,Klose:2016qfv}, where the $\lambda$-deformations were also constructed numerically. Here we concentrate on understanding the integrability properties of the system and use the numerical solutions as a check of the integrability ideas. Also it should be noted that, although our numerical method is general, in practice it converges slowly if the contour is irregular and cannot be described accurately by interpolating through a relatively small set of points.
\section{General setup}\label{setup}
In this section, we review the general setup for studying minimal surfaces in $\mathbb{H}_{3}$. We briefly describe the method given in \cite{Kruczenski:2014bla} which reduces the problem of calculating the area to finding the conformal parametrization and explain how it was used in \cite{Dekel:2015bla} to find solutions perturbatively around the circular contour.
\subsection{Minimal surfaces in $\mathbb{H}_{3}$}
In $\mathbb{R}^{1,3}$, $\mathbb{H}_{3}$ is embedded as a hyperboloid $X\cdot X=-1$, where $X=(X_{0},X_{1},X_{2},X_{3})$. The metric in $\mathbb{R}^{1,3}$ is
\begin{equation}
ds^2=-dX_0^2+dX_1^2+dX_2^2+dX_3^2.
\end{equation}
The Poincar\'{e} coordinates are given by
\begin{equation}
Z=\frac{1}{X_{0}-X_{3}}, \quad X=\frac{X_{1}+iX_{2}}{X_{0}-X_{3}}, \quad \bar{X}=\frac{X_{1}-iX_{2}}{X_{0}-X_{3}}.
\end{equation}
and the metric is
\begin{equation}
ds^2=\frac{dZ^2+dXd\bar{X}}{Z^2}.
\end{equation}
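As a quick consistency check (the inverse map below is our reconstruction; the text quotes only the forward map), one can verify numerically that a point specified by Poincar\'{e} data $(Z,X)$ lies on the hyperboloid $X\cdot X=-1$: $X_0-X_3=1/Z$, while the hyperboloid condition fixes $X_0+X_3=Z+|X|^2/Z$.

```python
# Inverse of the Poincare-coordinate map (our reconstruction):
# X0 - X3 = 1/Z, and X.X = -1 fixes X0 + X3 = Z + |X|^2 / Z.
Z, X = 0.7, 0.3 - 1.2j                    # an arbitrary bulk point, Z > 0
X0mX3 = 1.0 / Z
X0pX3 = Z + abs(X) ** 2 / Z
X0, X3 = (X0pX3 + X0mX3) / 2, (X0pX3 - X0mX3) / 2
X1, X2 = (X / Z).real, (X / Z).imag

print(-X0**2 + X1**2 + X2**2 + X3**2)              # -1: lies on the hyperboloid
print(1 / (X0 - X3), (X1 + 1j * X2) / (X0 - X3))   # recovers (Z, X)
```

Applying the forward map to the reconstructed $X_\mu$ returns the original $(Z,X)$, as it must.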
A Euclidean surface in $\mathbb{H}_{3}$ can be described as a map $X(r,\theta)$, $Z(r,\theta)$ from the unit disk on the complex plane parameterized as $z=re^{i \theta}$ ($r\le1$) and we assume
a conformal parameterization, namely the induced metric is
\begin{equation}
ds^2 = 4 e^{2\alpha}\, dz\,d\bar{z}
\label{indmetric}
\end{equation}
for some real function $\alpha(z,\bar{z})$. As $r\to 1$, one approaches the boundary of the surface, where
\begin{equation}
Z(r=1,\theta)=0, \quad X(r=1,\theta)=X(s(\theta)).
\end{equation}
$X(s)$ is a given closed curve defined on the boundary of $\mathbb{H}_{3}$ with an arbitrary parameter $s$, which is related to the conformal angle $\theta$ by an unknown reparametrization $s(\theta)$.
The string action (area) is given by
\begin{equation}
S=\frac{1}{2}\int{d\sigma d\tau (\partial X\cdot \bar{\partial} X+\Lambda(X\cdot X+1))},
\end{equation}
where $\Lambda$ is a Lagrange multiplier and we also need to impose the Virasoro constraints
\begin{equation}
\bar{\partial} X\cdot\bar{\partial} X=\partial X\cdot\partial X=0.
\end{equation}
The equation of motion is
\begin{equation}
\partial\bar{\partial}X-\Lambda X=0,\ \ \ \ \Lambda=\bar{\partial} X\cdot\partial X.
\end{equation}
Using the equivalence $SO(1,3)\simeq SL(2,\mathbb{C})$, one can write
$\mathbb{X}= X_0+X_i\sigma^i$ where $\sigma^i$ are the Pauli matrices. The equation of motion and the Virasoro constraints become
\begin{equation}
\det\mathbb{X}=1, \quad \partial\bar{\partial}\mathbb{X}=\Lambda \mathbb{X}, \quad \det(\partial \mathbb{X})=\det(\bar{\partial}\mathbb{X})=0.
\end{equation}
The matrix $\mathbb{X}$ satisfies the reality condition $\mathbb{X}^\dagger=\mathbb{X}$ that can be solved by writing
\begin{equation}
\mathbb{X}=\mathbb{AA}^{\dagger},
\end{equation}
with
\begin{equation}
\det\mathbb{A}=1, \quad \mathbb{A}\in SL(2,\mathbb{C}).
\end{equation}
The matrix $\mathbb{A}$ satisfies the linear problem
\begin{equation}\label{linearequation}
\partial \mathbb{A}=\mathbb{A} J,\quad \bar{\partial} \mathbb{A}=\mathbb{A} \bar{J},
\end{equation}
where
\begin{equation}
J=\begin{pmatrix}
-\frac{1}{2}\partial\alpha&fe^{-\alpha}\\[6pt]
\lambda e^{\alpha}&\frac{1}{2}\partial\alpha
\end{pmatrix}, \quad
\bar{J}=\begin{pmatrix}
\frac{1}{2}\bar{\partial}\alpha&\frac{1}{\lambda} e^{\alpha}\\[6pt]
-\bar{f}e^{-\alpha}&-\frac{1}{2}\bar{\partial}\alpha
\end{pmatrix}.
\end{equation}
Here $J,\bar{J}$ are the components of the current
\begin{equation}
j=\mathbb{A}^{-1}d\mathbb{A}=Jdz+\bar{J}d\bar{z}.
\end{equation}
The consistency condition requires $f$, $\bar{f}$ to be holomorphic and anti-holomorphic, and $\alpha(z,\bar{z})$ to satisfy the generalized cosh-Gordon equation:
\begin{equation}\label{coshGordon}
\partial\bar{\partial}\alpha=e^{2\alpha}+f\bar{f}e^{-2\alpha}.
\end{equation}
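The simplest solution, corresponding to the circular contour, is $f=0$ with $\alpha=\ln\frac{1}{1-z\bar{z}}$, and since $\partial\bar{\partial}=\frac{1}{4}(\partial_x^2+\partial_y^2)$, the equation can be verified numerically with finite differences. A small sketch (the sample point and step size are arbitrary choices):

```python
import math

# Circular-contour solution of the cosh-Gordon equation: f = 0,
# alpha = -ln(1 - |z|^2)
def alpha(x, y):
    return -math.log(1.0 - (x*x + y*y))

# del delbar = (1/4) * (2D Laplacian); central finite differences at one point
x, y, h = 0.3, 0.2, 1e-4
lap = (alpha(x+h, y) + alpha(x-h, y) + alpha(x, y+h) + alpha(x, y-h)
       - 4.0*alpha(x, y)) / h**2
residual = 0.25*lap - math.exp(2.0*alpha(x, y))   # f = 0, so no f*fbar term
print(abs(residual))   # -> close to zero, consistent with the equation
```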
The expressions for $J$ and $\bar{J}$ include a spectral parameter $\lambda$. When $|\lambda|=1$ we obtain a one-parameter family of minimal surfaces satisfying the equation of motion with different boundary contours but the same area. The $\lambda$-deformation of the original contour plays an important role in understanding the integrability of the problem and has been studied recently in \cite{Klose:2016uur,Klose:2016qfv}. In this paper we take $\lambda=1$; after obtaining such a solution it is possible to change $\lambda$ and study the full $\lambda$-deformed family of contours, but we leave that for future work.
To use the above formalism, we have to find the function $f(z)$ associated with the particular contour $X(s)$ we are interested in, then solve the cosh-Gordon equation, write down the current $J,\bar{J}$ and solve for $\mathbb{A}$ from which we can reconstruct the minimal surface. To calculate the area, consider the induced metric \eqref{indmetric}, then the area is given by the integral
\begin{equation}
{\cal A}=4\int_{D}e^{2\alpha}d\sigma d\tau.
\end{equation}
After regularization, the finite part of the area is (see {\it e.g.} \cite{Kruczenski:2014bla})
\begin{equation}\label{regarea}
{\cal A}_f=-2\pi-4\int_{D} f\bar{f}e^{-2\alpha}d\sigma d\tau,
\end{equation}
where $D$ is the unit disk on the complex plane.
\subsection{Boundary data}
Near the boundary, $r\to 1$ and it is convenient to define a world-sheet coordinate
\begin{equation}
\xi=1-r^2.
\end{equation}
Then, $\alpha(z,\bar{z})$ has the expansion
\begin{equation}\label{alphaexpand}
\alpha(\xi,\theta)\simeq-\ln\xi+\beta_2(\theta)(1+\xi)\xi^2+O(\xi^4),
\end{equation}
where $\beta_2(\theta)$ can be defined as
\begin{equation}
\beta_2(\theta)=\frac{1}{6}e^{2i\theta}(\partial^2\alpha-(\partial\alpha)^2)\big|_{r\to1}.
\end{equation}
All the higher order coefficients in \eqref{alphaexpand} are fixed by $\beta_2(\theta)$ and $f(\theta)=f(e^{i\theta})$.
Going back to the linear problem \eqref{linearequation} and writing $\mathbb{A}$ in terms of two linear independent vectors $\psi=(\psi_1,\psi_2)$ and $\tilde{\psi}=(\tilde{\psi}_1,\tilde{\psi}_2)$ as
\begin{equation}
\mathbb{A}=\left(\begin{array}{cc}
\psi_1&\psi_2\\
\tilde{\psi}_1&\tilde{\psi}_2
\end{array}\right),
\end{equation}
the linear equations \eqref{linearequation} are reduced to
\begin{equation}
\partial\psi=\psi J, \quad \bar{\partial}\psi=\psi\bar{J},
\end{equation}
and the same equations for $\tilde{\psi}$. Taking this linear problem to the boundary, it follows that \cite{Kruczenski:2014bla}
\begin{equation}\label{schwarzian}
\{X_\lambda(\theta),\theta\}=\frac{1}{2}-12\beta_{2}(\theta)-2\lambda f(\theta) e^{2i\theta}+\frac{2}{\lambda}\bar{f}(\theta)e^{-2i\theta},
\end{equation}
where $\{X_\lambda(\theta),\theta\}$ is the Schwarzian derivative of the boundary contour $X_\lambda(\theta)$ associated to a given value $\lambda$ of the spectral parameter. The original contour corresponds to $\lambda=1$, and one has
\begin{equation}\label{schwarzianRI}
\begin{aligned}
&\text{Re}\{X(\theta), \theta\}=\frac{1}{2}-12\beta_2(\theta),\\
&\text{Im}\{X(\theta), \theta\}=-4\text{Im}(e^{2i\theta}f(\theta)).
\end{aligned}
\end{equation}
Hence, if we know the contour in terms of the conformal angle $\theta$, we can calculate the Schwarzian derivative with respect to $\theta$ and then obtain $\beta_{2}(\theta)$ and $f(\theta)$ from its real and imaginary parts. With such information, we can find out $f(z)$ by analytic continuation and plug it into the cosh-Gordon equation \eqref{coshGordon} to solve for $\alpha(z,\bar{z})$. Finally we can calculate the regularized area using \eqref{regarea}.
For a Wilson loop, the contour $X(s)$ is given in terms of an arbitrary parameter $s$ (instead of the conformal angle $\theta$). Using the property of the Schwarzian derivative
\begin{equation}\label{schwarzianproperty}
\{F,\theta\}=\{s,\theta\}+(\partial_{\theta}s)^2\{F,s\},
\end{equation}
eq.\eqref{schwarzianRI} gives
\begin{equation}
\begin{aligned}
\{s,\theta\}+(\partial_{\theta}s)^2\text{Re}\{X(s), s\}&=\frac{1}{2}-12\beta_2(\theta),\\
(\partial_{\theta}s)^2\text{Im}\{X(s), s\}&=-4\text{Im}(e^{2i\theta}f(\theta)).
\end{aligned}
\end{equation}
If we know the reparametrization $s(\theta)$, then we can use the boundary data $\{X(s), s\}$ to find $\beta_{2}$ and $f$. The remaining problem is how to find the reparametrization $s(\theta)$ for a given boundary contour. In the following subsection we describe how to find such a reparametrization for contours close to circular.
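The chain-rule property \eqref{schwarzianproperty} used above can be checked numerically with finite differences. In the sketch below, the reparametrization $s(\theta)$ and the function $F$ are illustrative choices, not solutions of any particular contour problem:

```python
import math

def schwarzian(F, t, h=1e-3):
    # {F, t} = F'''/F' - (3/2) (F''/F')^2, via central differences
    d1 = (F(t+h) - F(t-h)) / (2*h)
    d2 = (F(t+h) - 2*F(t) + F(t-h)) / h**2
    d3 = (F(t+2*h) - 2*F(t+h) + 2*F(t-h) - F(t-2*h)) / (2*h**3)
    return d3/d1 - 1.5*(d2/d1)**2

# Illustrative reparametrization and function
s  = lambda th: th + 0.3*math.sin(th)
ds = lambda th: 1.0 + 0.3*math.cos(th)     # analytic s'(theta)
F  = lambda x: math.tan(0.5*x)

th = 1.0
lhs = schwarzian(lambda t: F(s(t)), th)                  # {F(s(theta)), theta}
rhs = schwarzian(s, th) + ds(th)**2 * schwarzian(F, s(th))
print(abs(lhs - rhs))   # -> small: the composition rule holds
```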
\subsection{Perturbation around the circular contour}
In some very interesting work \cite{Dekel:2015bla}, Dekel applied the above method to contours which are small perturbations of the circular contour. We review those results here since we extend them later using the Shanks transformation and use them to check the results of the new method.
The shape of the contours is taken to be of the form
\begin{equation}
X(\theta)=e^{is(\theta)+\sum_{n=1}^\infty\epsilon^n \xi_n(s(\theta))}
\end{equation}
where $\epsilon$ is the perturbation parameter. Correspondingly, $f(z)$ and $\alpha(z,\bar{z})$ have the expansion:
\begin{equation}
\begin{aligned}
f(z)&=\sum _{n=1}^{\infty}f_n(z)\epsilon^n,\\
\alpha(z,\bar{z})&=\text{ln}(\frac{1}{1-z\bar{z}})+\sum _{n=2}^{\infty}\alpha_n(z,\bar{z})\epsilon^n,
\end{aligned}
\end{equation}
and the correct reparametrization $s(\theta)$ has the expansion
\begin{equation}
s(\theta)=\theta+\sum _{n=1}^{\infty} s_n(\theta)\epsilon^n.
\end{equation}
When $\epsilon=0$, $X(\theta)$, $s(\theta)$, $f(z)$ and $\alpha(z,\bar{z})$ reduce to the results of the circular contour. Given the boundary contour $X(s(\theta))$, one can first calculate the real and imaginary parts of the Schwarzian derivative expressed in terms of the unknown $s_n(\theta)$. Next one expands the LHS of the equations \eqref{schwarzianRI} with the parameter $\epsilon$ and extract $f(\theta)$ and $\beta_2(\theta)$ order by order. Plugging $f(z)$ into the generalized cosh-Gordon equation to solve for $\alpha(z,\bar{z})$ and expanding the solution near the boundary, one gets $\beta_2(\theta)$ which can then be used to compare with the first equation of \eqref{schwarzianRI} to fix $s_n(\theta)$. In the end, one can plug these $s_n(\theta)$ into $f(z)$ and $\alpha(z,\bar{z})$ to calculate the area using \eqref{regarea}.
In \cite{Dekel:2015bla}, this procedure was applied to various contours and the areas were given as a series expansion in terms of $\epsilon$. Here we cite the area formulas for elliptical and symmetric contours $X(s)=e^{is+\epsilon\sin ps}$ \cite{Dekel:2015bla}:
\begin{equation}
\begin{aligned}
{\cal A}_{\text{ellipse}}=&-2\pi-\frac{3\pi\epsilon^2}{4}+\frac{3\pi\epsilon^3}{4}-\frac{237\pi\epsilon^4}{320}+\frac{117\pi\epsilon^5}{160}-\frac{64881\pi\epsilon^6}{89600}\\
&+\frac{64443\pi\epsilon^7}{89600}-\frac{14373577\pi\epsilon^8}{20070400}+\frac{3584953\pi\epsilon^9}{5017600}-\frac{110314688219\pi\epsilon^{10}}{154542080000}\\
&+\frac{22064732579\pi\epsilon^{11}}{30908416000}-\frac{6630907488364381\pi\epsilon^{12}}{9281797324800000}\\
&+\frac{1106373532973931\pi\epsilon^{13}}{1546966220800000}-\frac{40943000996733445243\pi\epsilon^{14}}{57175871520768000000}\\
&+\frac{1952095942839819321\pi\epsilon^{15}}{2722660548608000000}-\frac{157750690929831538029244697\pi\epsilon^{16}}{219774901986388869120000000}\\
&+\frac{19736906966190071806502297\pi\epsilon^{17}}{27471862748298608640000000}\\
&-\frac{801650044535506237372382994066703\pi\epsilon^{18}}{1115068403809909423032238080000000}+\mathcal{O}(\epsilon^{20}), \label{ellipse}
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
{\cal A}_{\text{symmetric,p=2}}=&-2\pi-\frac{3\pi\epsilon^2}{4}+\frac{93\pi\epsilon^4}{20}-\frac{50143\pi\epsilon^6}{4200}
+\frac{510139\pi\epsilon^8}{14400}\\
&-\frac{65754318359\pi\epsilon^{10}}{582120000}+\frac{1195458440855851\pi\epsilon^{12}}{3178375200000}\\
&-\frac{61047851487256409\pi\epsilon^{14}}{47344547250000}+\frac{45707069078388982419341507\pi\epsilon^{16}}{10124976097716480000000
}\\
&-\frac{52566325973037148254959546391187\pi\epsilon^{18}}{3273637646841985463040000000}+\mathcal{O}(\epsilon^{20}),
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
{\cal A}_{\text{symmetric,p=13}}=&-2\pi-1092\pi\epsilon^2+\frac{1660932\pi\epsilon^4}{25}\\
&-\frac{3887594024353\pi\epsilon^6}{570000}+\frac{679687975645852511\pi\epsilon^8}{821712000}\\
&-\frac{2652706006393624451200787779\pi\epsilon^{10}}{24329522800000000}+\mathcal{O}(\epsilon^{12}).
\end{aligned}
\end{equation}
These results will be used to compare with the area calculations in the following sections.
\section{Finding the reparametrization $s(\theta)$}
Instead of finding the parameterization $s(\theta)$ as a series expansion near a circular contour, we can implement a numerical procedure that is in principle defined for any contour. The idea is simple: for a given contour $X(s)$, one proposes a reparametrization $s(\theta)$ and then calculates the Schwarzian derivative of $X(\theta)$ with respect to $\theta$. Thus, potential values for $\beta_2(\theta)$ and $\text{Im}(e^{2i\theta}f(\theta))$ are found from eq.\eqref{schwarzianRI}. This data can be analytically continued to find $f(z)$ inside the unit disk, and then the generalized cosh-Gordon equation can be solved numerically by a procedure described in the appendix. Next we expand the resulting $\alpha$ near the boundary, extract the $\beta_2$ according to \eqref{alphaexpand}, and call it $\tilde{\beta}_2$. If $\theta$ is the conformal angle, i.e., $s(\theta)$ is the correct reparametrization, we should have $\beta_2(\theta)=\tilde{\beta}_2(\theta)$. If not, we compute the error as
\begin{equation}\label{dbeta2}
B_2[s(\theta)]=\int_0^{2\pi}\!\!d\theta (\beta_2(\theta)-\tilde{\beta}_2(\theta))^2
\end{equation}
Now we can use standard numerical procedures to find the minimum of $B_2$ as a functional of $s(\theta)$. In practice we define $s(\theta)$ by its values at fixed angles $\theta_j= j\, \frac{2\pi}{M} $, $j=0,\dots,M-1$, and use Powell's multidimensional minimization method as described in chapter 10 of \cite{numrec}. The larger the number of interpolating points needed, the more complicated the numerical calculations.
Once we find the minimum of the function \eqref{dbeta2}, the corresponding $s(\theta)$ will be the best value for the reparametrization of the contour and $B_2$ will be a measure of the error. The procedure is illustrated in Figure \ref{procedure}.
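The structure of this minimization loop can be illustrated with a toy version: a simple derivative-free coordinate search (standing in for Powell's method) applied to an artificial mismatch functional in place of the actual $B_2$. All names and target data below are illustrative, not part of the actual computation:

```python
import math

M = 8
target = [math.sin(2*math.pi*j/M) for j in range(M)]   # stand-in boundary data

def B2(s):
    # Toy error functional mimicking the discretized B2[s(theta)]:
    # sum of squared mismatches at the M sample angles
    return sum((math.sin(s[j]) - target[j])**2 for j in range(M))

def coordinate_search(s, step=0.5):
    # Derivative-free minimization over the values s_j (a crude stand-in
    # for Powell's direction-set method)
    best = B2(s)
    while step > 1e-6:
        improved = False
        for j in range(M):
            for d in (step, -step):
                trial = list(s)
                trial[j] += d
                v = B2(trial)
                if v < best:
                    s, best, improved = trial, v, True
        if not improved:
            step *= 0.5
    return s, best

s_opt, val = coordinate_search([0.0]*M)
print(val)   # -> essentially 0: the s_j make the two profiles agree
```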
\begin{figure}[t]
\centering
\includegraphics[trim=0cm 0cm 0cm 3cm, clip=true, width=1.1 \textwidth]{procedureB}
\caption{The procedure for finding the reparametrization $s(\theta)$. In the expression of $f(\theta)$, $\mathcal{P}$ projects onto positive frequencies.}
\label{procedure}
\end{figure}
As a test, we applied this minimization procedure to various contours including ellipses, symmetric contours, etc., and found $s(\theta)$ in each case. Then we calculated the areas using \eqref{regarea}. For the contours which have near-circular shapes, we checked the results with the perturbative area formula given in \cite{Dekel:2015bla} and found agreement. For each contour, we also calculated the number $n$ of zeros of $f(z)$ using the formula
\begin{equation}
n=\frac{1}{2\pi}\oint \frac{f'(e^{i\theta})}{f(e^{i\theta})}e^{i\theta}d\theta.
\end{equation}
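This contour integral (the argument principle in the angular parametrization, since $dz=ie^{i\theta}d\theta$) is easy to evaluate with the trapezoid rule. A small sketch on two illustrative test functions, not the $f(z)$ of any actual contour:

```python
import cmath, math

def count_zeros(f, df, M=4096):
    # n = (1/2 pi) * integral of (f'/f) e^{i theta} d theta over the circle,
    # evaluated by the trapezoid rule (spectrally accurate for periodic data)
    total = 0j
    for k in range(M):
        z = cmath.exp(2j*math.pi*k/M)
        total += df(z)/f(z) * z
    return round((total/M).real)

# f(z) = z^2 - 1/4 has two zeros (z = +-1/2) inside the unit disk
n1 = count_zeros(lambda z: z*z - 0.25, lambda z: 2*z)
# f(z) = z - 2 has no zeros inside the unit disk
n2 = count_zeros(lambda z: z - 2.0, lambda z: 1.0)
print(n1, n2)   # -> 2 0
```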
For the cases where $f(z)$ has no zeros inside the unit disk, we confirm the results of the area calculation with the area formula given in \cite{Kruczenski:2014bla}:
\begin{equation}
{\cal A}_{f}=-2\pi-\bigg|\frac{i}{2}\oint \frac{\text{Re}\{X(\theta), \theta\}-\{\chi,\theta\}}{\partial_{\theta}\ln \chi}d\theta\bigg|,
\label{Areg}
\end{equation}
where
\begin{equation}
\chi(z)=\int^z \sqrt{f}dz.
\end{equation}
For the cases where $f(z)$ has zeros inside the unit disk, another formula for calculating the minimal surface area is given in section \ref{areaforzeros}. In the following subsections, we describe the results for the reparametrization $s(\theta)$ and the area we obtained for various contours as well as the comparison with calculations using other methods.
\subsection{Ellipse}
The elliptical contour is given by
\begin{equation}
X(s)=\cos(s)+iR\sin(s),
\end{equation}
where $R$ is the ratio of the two axes. As input for the procedure we use the Schwarzian derivative given by
\begin{equation}
\{X(s),s\}=\frac{5-5R^2+(1+R^2)\cos(2s)+4iR\cos(s)\sin(s)}{4(R\cos(s)+i\sin(s))^2}.
\end{equation}
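This closed form can be cross-checked against a direct finite-difference evaluation of the Schwarzian derivative; the sample values of $R$ and $s$ below are arbitrary:

```python
import cmath

def schwarzian(F, s, h=1e-3):
    # {F, s} = F'''/F' - (3/2) (F''/F')^2, central differences (complex-valued)
    d1 = (F(s+h) - F(s-h)) / (2*h)
    d2 = (F(s+h) - 2*F(s) + F(s-h)) / h**2
    d3 = (F(s+2*h) - 2*F(s+h) + 2*F(s-h) - F(s-2*h)) / (2*h**3)
    return d3/d1 - 1.5*(d2/d1)**2

R, s = 1.6, 0.8
X = lambda t: cmath.cos(t) + 1j*R*cmath.sin(t)   # elliptical contour
closed_form = (5 - 5*R**2 + (1 + R**2)*cmath.cos(2*s)
               + 4j*R*cmath.cos(s)*cmath.sin(s)) \
              / (4*(R*cmath.cos(s) + 1j*cmath.sin(s))**2)
diff = abs(schwarzian(X, s) - closed_form)
print(diff)   # -> small: numerics agree with the closed form
```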
Given the symmetry of the contour, $s(\theta)$ should have rotation and reflection symmetries. Therefore, to look for the reparameterization $s(\theta)$, we only need to minimize the function \eqref{dbeta2} for values of $\theta\in[0,\frac{\pi}{2}]$, which greatly reduces the calculation.
We applied the procedure described in this section to elliptical contours with $1.2\le R \le 2.2$ at $0.2$ intervals and found the reparametrizations $s(\theta)$ and the areas. Writing $R=1+\epsilon$, we can compare the areas with the formula given in \cite{Dekel:2015bla}. See Figure \ref{ellipserepara} and Figure \ref{ellipsearea}, where we find agreement.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{ellipserepara}
\caption{The reparametrization function for various ellipses. Here we plot the difference between $s(\theta)$ and $\theta$ for $0<\theta<\pi$. We consider contours with values $1.2\le R\le 2.2$ at $0.2$ intervals.}
\label{ellipserepara}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{ellipsearea}
\caption{The areas of the elliptical contours compared with the perturbative calculations from $\epsilon^2$ up to $\epsilon^{18}$. Notice however that the perturbative results are obtained by using conformal invariance to map $\epsilon$ to $\tilde{\epsilon}=-\frac{\epsilon}{1+\epsilon}$ and computing the better behaved series ${\cal A}_{\text{ellipse}}(\tilde{\epsilon})$. The original series \eqref{ellipse} cannot be used for values $\epsilon\gtrsim 0.8$. Such a trick is not available for the other contours.}
\label{ellipsearea}
\end{figure}
\subsection{Symmetric contours}
The symmetric contours in \cite{Dekel:2015bla} are defined by
\begin{equation}
X(s)=e^{i s+a\sin{p s}},
\end{equation}
with $p$ a positive integer (see Figure \ref{symmetriccontours}).
\begin{figure}[t]
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{symmetric2contour}
\caption{}
\label{p2}
\end{subfigure}
%
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\textwidth]{symmetric13contour}
\caption{}
\label{p13}
\end{subfigure}
\caption{\ref{p2}: symmetric contours with $p=2$ and $0.1<a<1$ at $0.1$ intervals. \ref{p13}: symmetric contours with $p=13$ and $0.02<a<0.16$ at $0.02$ intervals. They have $\mathbb{Z}_p$ rotational, reflection and inversion symmetries.}
\label{symmetriccontours}
\end{figure}
Such contours have $p$-fold rotational symmetry $X\rightarrow e^{\frac{2\pi i}{p}} X$, reflection symmetry $X\rightarrow e^{\frac{i\pi }{p}} \bar{X}$ and inversion symmetry $X\rightarrow X^{-1}$. As a result, the generalized cosh-Gordon equation, and therefore $s(\theta)$ and $\alpha(z,\bar{z})$ have $2p$-fold rotational symmetry. However, $f(z)$ does not have such symmetry. In fact, it can be seen that $f(z)$ has a multiple zero at $z=0$ and if we write $f(z)$ as
\begin{equation}
f(z)=z^{p-2}\tilde{f}(z),
\end{equation}
then $\tilde{f}(z)$ has $2p$-fold rotational symmetry. When solving for the reparametrization for the symmetric contours, we impose the symmetry condition on the minimization procedure, namely we divide the unit disk into $2p$ wedges and solve the problem on a single wedge.
We perform the calculations for $p=2$ and $p=13$ with different values of $a$. Setting $a=\epsilon$, we compare the areas with the results from \cite{Dekel:2015bla} in the region where the series expansion for the area converges and find agreement. In Figs. \ref{symmetric2repara} and \ref{symmetric13repara}, we show the reparametrization functions for various symmetric contours, and in Figs. \ref{symmetric2area} and \ref{symmetric13area}, we illustrate the comparison between the area calculations.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{symmetric2repara}
\caption{The reparametrization functions for symmetric contours with $p=2$. Here we plot the difference between $s(\theta)$ and $\theta$ for $0\le\theta\le\frac{2\pi}{4}$. The contours we consider are $0.1\le a\le 1$ at $0.1$ intervals.}
\label{symmetric2repara}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{symmetric13repara}
\caption{The reparametrization functions for symmetric contours with $p=13$. Here we plot the difference between $s(\theta)$ and $\theta$ for the relevant region $0<\theta<\frac{2\pi}{26}$. The contours we consider are $0.02\le a\le 0.16$ at $0.02$ intervals.}
\label{symmetric13repara}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{symmetric2area}
\caption{The areas of the symmetric contours with $p=2$. We plot a few partial sums (continuous curves) of the perturbative calculations up to the 18th order as well as the results from our calculation (dots), which goes beyond the range of $a$ where the perturbative series converges.}
\label{symmetric2area}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{symmetric13area}
\caption{The areas of the symmetric contours with $p=13$. We plot a few partial sums (continuous curves) of the perturbative calculations up to the 10th order as well as the results from our calculation (dots), which goes beyond the range of $a$ where the perturbative series converges.}
\label{symmetric13area}
\end{figure}
\section{Shanks transformation}\label{shanks}
As already discussed, in \cite{Dekel:2015bla} the area formula for various shapes is given as a series expansion in the perturbative parameter $\epsilon$. However, the series diverges beyond certain values of $\epsilon$. There are various methods to accelerate the convergence of such a series. We found particularly useful the so-called Shanks transformation \cite{Shanks}, based on the partial sums of the series
\begin{equation}
A_{N}=\sum_{n=0}^N a_n \epsilon^{n},
\end{equation}
and defined as:
\begin{equation}
S(A_N)=\frac{A_{N+1}A_{N-1}-A_N^2}{A_{N+1}+A_{N-1}-2A_{N}}.
\end{equation}
By using repeated Shanks transformations for the symmetric contours and the ellipse we observe great improvement of convergence as shown in the following examples.
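A minimal implementation of the repeated transformation, applied here to the slowly convergent alternating series for $\ln 2$ rather than to the area series, purely to illustrate the acceleration:

```python
import math

def shanks(A):
    # One Shanks transformation of a list of partial sums A_0, ..., A_N
    return [(A[n+1]*A[n-1] - A[n]**2) / (A[n+1] + A[n-1] - 2*A[n])
            for n in range(1, len(A) - 1)]

# Partial sums of ln 2 = 1 - 1/2 + 1/3 - ... (a standard test series)
partial, s = [], 0.0
for n in range(1, 12):
    s += (-1)**(n + 1) / n
    partial.append(s)

S = partial
for _ in range(4):            # repeated transformations: S, S^2, S^3, S^4
    S = shanks(S)

err0 = abs(partial[-1] - math.log(2))
err4 = abs(S[-1] - math.log(2))
print(err0, err4)   # err4 is orders of magnitude smaller than err0
```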
\subsection{Symmetric contour}
For symmetric contours, the series expansion of the area has only even powers of $\epsilon$. We therefore write the area series as
\begin{equation}
{\cal A}_{sym,N}=\sum_{n=0}^{N} a_{sym,n} \epsilon^{2n},
\end{equation}
and perform the Shanks transformation on it. As can be seen in Figures \ref{symmetric2area} and \ref{symmetric13area}, the series diverges at around $\epsilon \sim 0.4$ and $\epsilon \sim 0.13$ for contours with $p=2$ and $p=13$ respectively. After the Shanks transformation with the coefficients given in \cite{Dekel:2015bla}, we manage to find the areas for the values of $a$ used in the previous section and find good agreement. See Table \ref{shanks0.7} and Figure \ref{areacomparison} for the comparison.
\begin{center}
\begin{tabular}{l*{5}{c}}
$N$ & $A_N$ & $S(A_N)$ & $S^2(A_N)$ & $S^3(A_N)$ & $S^4(A_N)$\\
\hline
\\
1 & -10.9013 & & & & \\
2 & -7.39385 & -9.34802 & & & \\
3 & -11.8065 & -9.19200 & -9.25671 & & \\
4 & -5.39056 & -9.30258 & -9.24927 & -9.25187 & \\
5 & -15.4146 & -9.19965 & -9.25326 & -9.25161 & -9.25169\\
6 & 0.940615 & -9.31152 & -9.25044 & -9.25173 & \\
7 & -26.5334 & -9.17697 & -9.25281 & & \\
8 & 20.59769 & -9.35077 & & & \\
9 & -61.5493 & & & & \\
\end{tabular}
\captionof{table}{Shanks transformation of the area series for $p=2$, $a=0.7$. The result obtained by finding the reparametrization, shown in Figure \ref{symmetric2area}, is ${\cal A}_{f}(a=0.7)=-9.25174$.}
\label{shanks0.7}
\end{center}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{areacomparison}
\caption{Comparison of different area calculations for symmetric contours with $p=2$. {\color{red}{$\times$}} indicates the areas calculated by finding the reparametrization, and {\color{blue}{$\square$}} indicates the areas calculated through Shanks transformation of the perturbative expansion. The results agree.}
\label{areacomparison}
\end{figure}
\subsection{Ellipse}
For elliptical contours, Dekel used conformal symmetry to relate an ellipse with perturbative parameter $\epsilon$ to one with $\tilde{\epsilon}=-\frac{\epsilon}{1+\epsilon}$. While $0<\epsilon<\infty$, we have $-1<\tilde{\epsilon}<0$. Therefore, we can consider the areas for ellipses with $-1<\tilde{\epsilon}<0$, where the perturbative formula has better convergence. However, the approximation fails for $\epsilon\gtrsim 4$. We apply the Shanks transformation to the area formula for the ellipse with $-1<\tilde{\epsilon}<0$. The convergence of the series accelerates drastically. In Table \ref{shank10}, we show the acceleration of convergence for one elliptical contour beyond the range of convergence of the original formula ($\epsilon=10$). In fact we see convergence up to $\epsilon \sim 100$. We plot those results in Fig.~\ref{shanksellipse}.
\begin{sidewaystable}
\begin{tabular}{*{11}{r}}
$N$ & $A_N$ & $S(A_N)$ & $S^2(A_N)$ & $S^3(A_N)$ & $S^4(A_N)$ & $S^5(A_N)$ & $S^6(A_N)$ & $S^7(A_N)$& $S^8(A_N)$& $S^9(A_N)$\\
\hline \\
0 & -6.2832 & & & & & & & & & \\
1 & -6.2832 & -6.2832 & & & & & & & & \\
2 & -8.2305 & -27.7031 & -25.7380 & & & & & & & \\
3 & -10.0007 & -25.5395 & -25.5175 & -25.5365 & & & & & & \\
4 & -11.5899 & -25.5177 & -25.5383 & -25.5174 & -25.5365 & & & & & \\
5 & -13.0163 & -25.8864 & -45.5321 & -36.0704 & -31.2791 & -28.8814 & & & & \\
6 & -14.3004 & -26.2483 & -27.5703 & -27.3039 & -27.1627 & -26.8718 & -26.9845 & & & \\
7 & -15.4599 & -26.5324 & -27.3078 & -27.1649 & -26.8919 & -26.9915 & -27.1289 & -27.0797 & & \\
8 & -16.5095 & -26.7404 & -27.2153 & -27.0728 & -27.0495 & -27.0552 & -27.0542 & -27.0547 &-27.0548 & \\
9 & -17.4614 & -26.8850 & -27.1592 & -27.0542 & -27.0550 & -27.0542 & -27.0547 & -27.0548 &-27.0550 & -27.0553 \\
10 & -18.3260 & -26.9797 & -27.1226 & -27.0550 & -27.0541 & -27.0552 & -27.0546 & -27.0546 &-27.0550 & \\
11 & -19.1121 & -27.0366 & -27.0989 & -27.0600 & -27.0494 & -27.0538 & -27.0552 & -27.0546 & & \\
12 & -19.8272 & -27.0664 & -27.0842 & -27.0693 & -27.1378 & -27.1017 & -27.0858 & & & \\
13 & -20.4780 & -27.0775 & -27.0768 & -27.0775 & -27.0768 & -27.0779 & & & & \\
14 & -21.0704 & -27.0767 & -27.0776 & -27.0767 & -27.0779 & & & & & \\
15 & -21.6097 & -27.0690 & -27.0943 & -27.0742 & & & & & & \\
16 & -22.1004 & -27.0578 & -27.1938 & & & & & & & \\
17 & -22.5470 & -27.0457 & & & & & & & & \\
18 & -22.9532 & & & & & & & & & \\
\end{tabular}
\captionof{table}{Shanks transformation of the area series for ellipse with $\epsilon=R-1=10$. We used $\tilde{\epsilon}=-\frac{\epsilon}{1+\epsilon}$ and Shanks transform the series for $\tilde{\epsilon}=-\frac{10}{11}$. The area shows good convergence after nine Shanks transformations.}
\label{shank10}
\end{sidewaystable}
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{epsilonShanks}
\caption{Plots of minimal surface area for an ellipse boundary as a function of $\epsilon=R-1$ after performing nine Shanks transformations of the series expansion \eqref{ellipse} in terms of $\tilde{\epsilon}=-\frac{\epsilon}{1+\epsilon}$. See Table \ref{shank10}.}
\label{shanksellipse}
\end{figure}
\section{Area formula for $f(z)$ with zeros}\label{areaforzeros}
If we define the one-forms $j$ and $w$, and the function $\chi$, on the world-sheet \cite{Alday:2010vh,Irrgang:2015txa}
\begin{eqnarray}
j &=& 4 f \sqrt{\bar{f}} e^{-2\alpha} dz + \frac{2}{\sqrt{f}} \left(\bar{\partial}^2\alpha-(\bar{\partial}\alpha)^2\right) d\bar{z} \\
\chi &=& \int_{A}^{z} w, \ \ \ w=\sqrt{\bar{f}} d\bar{z}
\end{eqnarray}
where $A$ is any point on the disk, usually at the boundary, then the current $j$ satisfies $dj=0$ as follows from the generalized cosh-Gordon equation \eqref{coshGordon}.
With these definitions, the area can be written as
\begin{equation}
{\cal A}_f+ 2\pi = -\int dz\,d\bar{z}\ e^{-2\alpha} f\bar{f} = \int j\wedge w = \int j\wedge d\chi = -\int d(\chi j) = -\oint \chi j
\end{equation}
Since $\chi$ is uniquely defined only in a simply connected domain, when $\sqrt{f(z)}$ has cuts, namely when $f(z)$ has zeros, such a domain has to go around the cuts.
For example, in Fig.~\ref{fzeros} we show how to cut the disk following the black lines along a contour labeled by successive segments $1$ to $9$. The lines are separated for clarity, but lines $2,9$ actually overlap, as well as $4,7$, etc.
A straightforward calculation leads to
\begin{eqnarray}
{\cal A}_f+ 2\pi &=& -\frac{i}{4}\left[ \sum_\ell \left(\oint_{a_\ell}w\oint_{b_\ell} j-\oint_{b_\ell} w \oint_{a_\ell} j \right) - \oint_{1} w\oint_1 j \right. \\
&&\ \ \left. + 2\left(\oint_1 w\int_2j-\int_2 w\oint_1 j\right)+2\oint_1\left(\int_{A}^{z}w\right)j\right]
\end{eqnarray}
where $a_\ell,b_{\ell}$ is a basis of cycles for the disk with cuts and $1$ is the boundary of the disk. The path $2$ connects one cut to the boundary as in Fig.~\ref{fzeros}. This formula is similar to the one known for light-like Wilson loops \cite{Alday:2010vh} except that it contains a contribution from the boundary of the disk. Recall that in the case of \cite{Alday:2010vh} the world-sheet was the whole plane and there was no contribution from infinity. We checked that when $f(z)$ has no zeros it reduces to formula \eqref{Areg}, and we also checked numerically the validity of the formula for some examples. It should be noted that this formula requires knowing the values of $\alpha$ and $f$ inside the disk, in which case it might be more convenient to directly use the definition \eqref{regarea} as we did previously.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{cuts}
\caption{If the holomorphic function $f(z)$ has zeros, the formula for the area gets an extra contribution from integrals around the non-trivial contours.}
\label{fzeros}
\end{figure}
\section{Conclusions}
In this work we have shown that the integrability ideas to find minimal area surfaces discussed in \cite{Kruczenski:2014bla,Dekel:2015bla} can be implemented numerically. The method is in principle valid for any contour but in practice it becomes numerically difficult if the contour is not reasonably smooth.
Although one can also try to find the minimal surface by direct minimization of the area functional, using integrability has some advantages. First the method is manifestly invariant under global conformal transformations, second it reconstructs the Pohlmeyer analytic function $f(z)$
and therefore it makes it easier to obtain the $\lambda$-deformed contours. More generically, the idea is to understand the role of integrability in the minimal area problem rather than find actual solutions. Along these lines the idea of $\lambda$-deformations \cite{Ishizeki:2011bf,Kruczenski:2014bla,Dekel:2015bla} or ``master'' symmetry \cite{Klose:2016uur,Klose:2016qfv} seems quite powerful.
\section{Acknowledgments}
We are grateful to A. Dekel, T. Klose, S. Komatsu, J. Toledo and P. Vieira for various discussions and suggestions on the minimal area problem. This work was supported in part by DOE through grant DE-SC0007884.
\section*{Acknowledgment}
JPS would like to acknowledge the support of NSF DMR-1719490.
MW acknowledges support from the Simons Foundation Grant (No.~454953 Matthieu Wyart) and from the
SNSF under Grant No.~200021-165509.
\bibliographystyle{ws-rv-van}
\section{Introduction}
\setcounter{equation}{0}
Quantization of the Nambu-Poisson bracket has been a long-standing problem since Y. Nambu proposed it in 1973 \cite{Nambu}. The Zariski quantization, proposed in 1996 by G. Dito, M. Flato, D. Sternheimer, and L. Takhtajan, is one of the strong candidates for a quantization of the Nambu-Poisson bracket \cite{DitoFlatoSternheimerTakhtajan}. The Zariski quantization consists of two steps. First, instead of a direct quantization of the original Nambu-Poisson bracket, they deform spaces on which the Nambu-Poisson bracket acts at the classical level, and define a new Nambu-Poisson bracket that has the same Nambu-Poisson structure as the original one. The Nambu-Poisson structure is defined by the Leibniz rule, the skew-symmetry, and the Fundamental Identity, which is a generalization of the Jacobi identity. Second, they define a deformation quantization of the new Nambu-Poisson bracket. Because the Zariski quantized Nambu-Poisson bracket reduces not to the original Nambu-Poisson bracket but to the new Nambu-Poisson bracket in the classical limit, the Zariski quantization is regarded as different from usual quantizations \cite{Yoneya}. For this reason, applications to physics developed little for a long time.
In this paper, we study a relation between the original and the new Nambu-Poisson brackets and clarify the physical meaning of the Zariski quantization. In subsection 2.1, we summarize the definitions and mathematical properties of the classical Zariski product and the spaces $\mathcal{M}$ on which it acts. We also define the new Nambu-Poisson bracket on $\mathcal{M}$ and show that it possesses the same Nambu-Poisson structure as the original one, whereas G. Dito et al. defined it on $\mathcal{A}_0$, which are small subspaces of $\mathcal{M}$. It is necessary for the later discussion to define the Zariski quantization not on $\mathcal{A}_0$ but on $\mathcal{M}$. In subsection 2.2, we construct a natural metric on $\mathcal{M}$ in order to apply the Zariski quantization to field theories. We show that the metric is invariant under a gauge transformation generated by the new Nambu-Poisson bracket, although 3-algebras equipped with invariant metrics are restricted in general. In subsection 2.3, we deform superstring and supermembrane theories by the classical Zariski product, the spaces $\mathcal{M}$, and the metric. This deformation can be applied to any first quantized field theory, even if it does not possess the Nambu-Poisson structure. We show that the deformed actions reduce to the actions before the deformation if we restrict their fields to one-body states. We also show that the deformed actions possess flat directions. These results imply that the deformation of first quantized theories by the classical Zariski product, the spaces $\mathcal{M}$, and the metric is a many-body deformation. As a corollary, we find that the new Nambu-Poisson bracket is the many-body deformation of the original Nambu-Poisson bracket.
In subsection 3.1, we summarize the definitions and mathematical properties of the deformation quantization used in the Zariski quantization. By performing the deformation quantization of the classical Zariski product and $\mathcal{M}$, we obtain the quantum Zariski product and $\mathcal{M}_{\hbar}$. We can define the Zariski quantized Nambu-Poisson bracket by the deformation quantization of the many-body deformation of the original Nambu-Poisson bracket. In subsection 3.2, we construct a natural metric for $\mathcal{M}_{\hbar}$. This metric is invariant under a gauge transformation generated by the Zariski quantized Nambu-Poisson bracket. In subsection 3.3, we perform the Zariski quantization of first quantized field theories and study their general features by using a simple model. The Zariski quantization is applicable to any first quantized field theory, which need not possess a Nambu-Poisson structure. We define theories by path-integrals of Zariski quantized actions, which are $\hbar$ deformations of classical actions. As a result, we find that pair creations and annihilations occur among the many bodies that are introduced by the many-body deformation. Therefore, by performing the Zariski quantization, which consists of the many-body deformation and the deformation quantization, we obtain second quantized theories from first quantized field theories. The Zariski quantization preserves the supersymmetries of first quantized field theories, because the quantum Zariski product is Abelian, associative, and distributive, and admits a commutative derivative satisfying the Leibniz rule. Therefore, by performing the Zariski quantization of superstring and supermembrane theories, we can obtain second quantized theories of superstrings and supermembranes.
\vspace{1cm}
\section{Classical Zariski Product and Many-Body Deformation}
\setcounter{equation}{0}
In this section, we show that actions representing single-body systems become actions representing many-body systems when the product and the spaces on which it acts are replaced with the classical Zariski product and $\mathcal{M}$. As a corollary, we find that the new Nambu-Poisson bracket, defined by using the classical Zariski product and $\mathcal{M}$, is a many-body deformation of the original Nambu-Poisson bracket.
\subsection{Definitions and Mathematical Properties}
In this subsection, we summarize the definitions and mathematical properties of the classical Zariski product \cite{DitoFlatoSternheimerTakhtajan} and the spaces $\mathcal{M}$ on which it acts. We also define a Nambu-Poisson bracket deformed by the classical Zariski product on $\mathcal{M}$, which will later be interpreted as many-body spaces, instead of on the small subspaces $\mathcal{A}_0$ defined in \cite{DitoFlatoSternheimerTakhtajan}.
First, we define elements of linear spaces $\mathcal{M}$ by
\begin{equation}
\bold{X}=\sum_{u} Y_{u}(\sigma) Z_{u},
\end{equation}
where the basis elements $Z_{u}$ are labeled by polynomials $u=u(x_1,x_2)$ in the variables $x_1$, $x_2$ with real coefficients. $Z_{u}$ satisfies $Z_{au}=aZ_{u}$, where $a$ is a real number. The coefficients $Y_{u}(\sigma)$ are functions over a $p$-dimensional space. Addition is defined naturally, so that $\mathcal{M}$ are linear spaces.
The classical Zariski product $\bullet$ is defined on $\mathcal{M}$ by
\begin{eqnarray}
\bold{X} \bullet \bold{X}'&=& (\sum_{u} Y_{u}(\sigma) Z_{u}) \bullet (\sum_{v} Y'_{v}(\sigma) Z_{v}) \n&=& \sum_{u, v} Y_{u}(\sigma) Y'_{v}(\sigma) Z_{uv}.
\end{eqnarray}
The classical Zariski product is Abelian, distributive and associative as follows:
\begin{equation}
\bold{X} \bullet \bold{X}'= \sum_{u, v} Y_{u}(\sigma) Y'_{v}(\sigma) Z_{uv}
=\sum_{v, u} Y'_{v}(\sigma) Y_{u}(\sigma) Z_{vu}
=\bold{X}' \bullet \bold{X}, \label{classicalabelian}
\end{equation}
\begin{eqnarray}
\bold{X} \bullet (\bold{X}'+\bold{X}'')&=& \sum_{u} Y_{u}(\sigma) Z_{u} \bullet
(\sum_{v} Y'_{v}(\sigma) Z_{v}+ \sum_{w} Y''_{w}(\sigma) Z_{w}) \n
&=&\sum_{u, v} Y_{u}(\sigma) Y'_{v}(\sigma) Z_{uv}
+\sum_{u, w} Y_{u}(\sigma) Y''_{w}(\sigma) Z_{uw} = \bold{X} \bullet \bold{X}'+\bold{X} \bullet \bold{X}'',
\end{eqnarray}
and
\begin{eqnarray}
(\bold{X} \bullet \bold{X}') \bullet \bold{X}''
&=&(\sum_{u, v} Y_{u}(\sigma) Y'_{v}(\sigma) Z_{uv})
\bullet(\sum_{w} Y''_{w}(\sigma) Z_{w}) \n
&=& \sum_{u, v, w} Y_{u}(\sigma) Y'_{v}(\sigma) Y''_{w}(\sigma) Z_{uvw}
=\bold{X} \bullet (\bold{X}' \bullet \bold{X}'').
\end{eqnarray}
We define derivatives on $\mathcal{M}$ by derivatives with respect to $\sigma^i$ ($i=1,2, \cdots, p$). These derivatives are commutative:
\begin{equation}
\frac{\partial}{\partial \sigma^i}\frac{\partial}{\partial \sigma^j}\bold{X}
=\sum_{u} (\frac{\partial}{\partial \sigma^i}\frac{\partial}{\partial \sigma^j}Y_{u}(\sigma)) Z_{u}
=\sum_{u} (\frac{\partial}{\partial \sigma^j}\frac{\partial}{\partial \sigma^i}Y_{u}(\sigma)) Z_{u}
= \frac{\partial}{\partial \sigma^j}\frac{\partial}{\partial \sigma^i}\bold{X},
\end{equation}
and the derivatives of the classical Zariski product satisfy the Leibniz rule:
\begin{eqnarray}
\frac{\partial}{\partial \sigma^i}(\bold{X} \bullet \bold{X}')
&=&\sum_{u, v} \frac{\partial}{\partial \sigma^i}(Y_{u}(\sigma) Y'_{v}(\sigma)) Z_{uv} \n
&=&\sum_{u, v} (\frac{\partial}{\partial \sigma^i}Y_{u}(\sigma) Y'_{v}(\sigma) +Y_{u}(\sigma) \frac{\partial}{\partial \sigma^i}Y'_{v})Z_{uv} \n
&=& \sum_{u} \frac{\partial}{\partial \sigma^i}Y_{u}(\sigma) Z_{u}
\bullet \sum_{v} Y'_{v}(\sigma) Z_{v}
+\sum_{u} Y_{u}(\sigma) Z_{u}
\bullet \sum_{v} \frac{\partial}{\partial \sigma^i}Y'_{v}(\sigma) Z_{v} \n
&=&\frac{\partial}{\partial \sigma^i}\bold{X} \bullet \bold{X}'+ \bold{X} \bullet \frac{\partial}{\partial \sigma^i}\bold{X}'. \label{classicalLeibnitz}
\end{eqnarray}
We define a Nambu-Poisson bracket deformed by the classical Zariski product and $\mathcal{M}$ by
\begin{eqnarray}
[\bold{X}, \bold{X}', \bold{X}'']_{\bullet}
&:=&\epsilon^{ijk}
\frac{\partial}{\partial\sigma^i} \bold{X} \bullet
\frac{\partial}{\partial\sigma^j} \bold{X}'\bullet
\frac{\partial}{\partial\sigma^k} \bold{X}'' \n
&=&
\sum_{u,v,w}
\epsilon^{ijk}
\frac{\partial}{\partial\sigma^i} Y_{u}(\sigma)
\frac{\partial}{\partial\sigma^j} Y'_{v}(\sigma)
\frac{\partial}{\partial\sigma^k} Y''_{w}(\sigma)
Z_{uvw},
\end{eqnarray}
where $i, j, k$ run from 1 to 3.
By definition, the bracket is skew-symmetric, that is, totally antisymmetric in all three entries. By using (\ref{classicalabelian}) - (\ref{classicalLeibnitz}), one can easily show that it satisfies the Leibniz rule and the Fundamental Identity,
\begin{equation}
[\bold{A}, \bold{B}, [\bold{X}, \bold{Y}, \bold{Z}]_{\bullet}]_{\bullet}
=[[\bold{A},\bold{B},\bold{X}]_{\bullet},\bold{Y},\bold{Z}]_{\bullet}
+[\bold{X},[\bold{A},\bold{B},\bold{Y}]_{\bullet},\bold{Z}]_{\bullet}
+[\bold{X},\bold{Y},[\bold{A},\bold{B},\bold{Z}]_{\bullet}]_{\bullet},
\end{equation}
for any $\bold{A}, \bold{B}, \bold{X}, \bold{Y}, \bold{Z} \in \mathcal{M}$. Thus, the deformed Nambu-Poisson bracket possesses the same Nambu-Poisson structure as the original one.
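For single-term elements $Y_u(\sigma) Z_u$, the deformed bracket multiplies the classical Jacobian of the coefficient functions by the concatenated label, so the Fundamental Identity reduces to the classical identity for Jacobian determinants. The following sympy sketch is our own illustrative spot-check of that classical identity, with arbitrarily chosen polynomial coefficient functions:

```python
import sympy as sp

s1, s2, s3 = sp.symbols('sigma1 sigma2 sigma3')

def nambu(f, g, h):
    """Classical 3-bracket eps^{ijk} d_i f d_j g d_k h, i.e. the Jacobian
    determinant of (f, g, h) with respect to (sigma1, sigma2, sigma3)."""
    return sp.Matrix([[sp.diff(F, s) for s in (s1, s2, s3)]
                      for F in (f, g, h)]).det()

# arbitrary concrete coefficient functions (illustrative choices)
A, B = s1*s2, s2 + s3**2
X, Y, Z = s1**2, s2*s3, s1 + s3

lhs = nambu(A, B, nambu(X, Y, Z))
rhs = (nambu(nambu(A, B, X), Y, Z)
       + nambu(X, nambu(A, B, Y), Z)
       + nambu(X, Y, nambu(A, B, Z)))
assert sp.expand(lhs - rhs) == 0  # Fundamental Identity holds
```

By multilinearity, the check extends to multi-term elements label by label.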
\subsection{Metric}
In this subsection, we construct a natural metric for $\mathcal{M}$, in order to apply the classical Zariski product and $\mathcal{M}$ to field theories.
We define a metric for $\bold{X}, \bold{X}' \in \mathcal{M}$ by
\begin{eqnarray}
<\bold{X}, \bold{X}'>
&=&
<\bold{X} \bullet \bold{X}'> \n
&=&
\int d^p\sigma <<\bold{X} \bullet \bold{X}'>> \n
&=&
\sum_{u,v} \int d^p \sigma Y_{u}(\sigma) Y'_{v}(\sigma) <<Z_{uv}>>.
\end{eqnarray}
$<<Z_{w}>>$ is defined by
\begin{equation}
<<Z_{w}>>=a \quad \mbox{if } w=a z^2, \qquad <<Z_{w}>>=0 \quad \mbox{otherwise},
\end{equation}
where $a$ is a real number and $z$ is a normalized polynomial, that is, a polynomial whose monomial of the highest total degree has coefficient 1.
This metric is invariant under a gauge transformation generated by the $p$-dimensional deformed Nambu-Poisson bracket,
$[\bold{X}^1, \bold{X}^2, \cdots, \bold{X}^p]_{\bullet}
:=\epsilon^{i_1 i_2 \cdots i_p}
\frac{\partial}{\partial\sigma^{i_1}} \bold{X}^1 \bullet
\frac{\partial}{\partial\sigma^{i_2}} \bold{X}^2 \bullet \cdots \bullet
\frac{\partial}{\partial\sigma^{i_p}} \bold{X}^p$.
Here we show the $p=3$ case as an example; the $p=2$ case (the Poisson bracket) and the $p>3$ cases are treated in a similar way. The condition for the invariance of the metric is given by
\begin{equation}
\delta <\bold{X}^1, \bold{X}^2>
=
<\delta \bold{X}^1, \bold{X}^2> + <\bold{X}^1, \delta \bold{X}^2> =0.
\end{equation}
This is equivalent to
\begin{equation}
<[\bold{X}^3, \bold{X}^4, \bold{X}^1]_{\bullet}, \bold{X}^2> + <\bold{X}^1, [\bold{X}^3, \bold{X}^4, \bold{X}^2]_{\bullet}>=0.
\end{equation}
The left hand side is
\begin{eqnarray}
&&<\sum_{u_3,u_4,u_1}
\epsilon^{ijk}
\frac{\partial}{\partial\sigma^i} Y^3_{u_3}(\sigma)
\frac{\partial}{\partial\sigma^j} Y^4_{u_4}(\sigma)
\frac{\partial}{\partial\sigma^k} Y^1_{u_1}(\sigma)
Z_{u_3 u_4 u_1},
\sum_{u_2} Y^2_{u_2}(\sigma) Z_{u_2}> \n
&&+
<\sum_{u_1} Y^1_{u_1} (\sigma) Z_{u_1},
\sum_{u_3,u_4,u_2}
\epsilon^{ijk}
\frac{\partial}{\partial\sigma^i} Y^3_{u_3}(\sigma)
\frac{\partial}{\partial\sigma^j} Y^4_{u_4}(\sigma)
\frac{\partial}{\partial\sigma^k} Y^2_{u_2}(\sigma)
Z_{u_3 u_4 u_2}> \n
&=&
\sum_{u_1, u_2, u_3, u_4}
\int d^3 \sigma (\epsilon^{ijk}
\frac{\partial}{\partial\sigma^i} Y^3_{u_3}(\sigma)
\frac{\partial}{\partial\sigma^j} Y^4_{u_4}(\sigma)
\frac{\partial}{\partial\sigma^k} Y^1_{u_1}(\sigma)
Y^2_{u_2}(\sigma) \n
&& \qquad \qquad \qquad+
Y^1_{u_1} (\sigma)
\epsilon^{ijk}
\frac{\partial}{\partial\sigma^i} Y^3_{u_3}(\sigma)
\frac{\partial}{\partial\sigma^j} Y^4_{u_4}(\sigma)
\frac{\partial}{\partial\sigma^k} Y^2_{u_2}(\sigma))
<<Z_{u_1 u_2 u_3 u_4}>> \n
&=&
\sum_{u_1, u_2, u_3, u_4}
\int d^3 \sigma (
\frac{\partial}{\partial\sigma^k}
(\epsilon^{ijk}
\frac{\partial}{\partial\sigma^i} Y^3_{u_3}(\sigma)
\frac{\partial}{\partial\sigma^j} Y^4_{u_4}(\sigma)
Y^1_{u_1}(\sigma)
Y^2_{u_2}(\sigma)) )
<<Z_{u_1 u_2 u_3 u_4}>> \n
&=&0.
\end{eqnarray}
\subsection{Flat Directions}
In this subsection, we deform first quantized field theories, taking superstring and supermembrane theories as examples, by using the classical Zariski product and $\mathcal{M}$. We study ground states and find flat directions, which indicates that the deformed theories describe many-body systems.
We can deform any first quantized action $S=\int d^p \sigma \mathcal{L}(X)$ into $S=< \mathcal{L}(\bold{X})_{\bullet}>$. The action need not possess a Nambu-Poisson structure. If we restrict $\bold{X} = \sum_{u} Y_u(\sigma) Z_u$ to $\bold{X}=Y_v(\sigma) Z_v$, then $S=< \mathcal{L}(\bold{X})_{\bullet}>$ reduces to $S=\int d^p \sigma \mathcal{L}(Y_v)$ because $<<Z_{v^2}>>$ is a non-zero constant. This implies that each $Y_u(\sigma) Z_u$ in $\sum_{u} Y_u(\sigma) Z_u$ should be interpreted as a single-body state. Irreducible polynomials can label single-particle states, whereas reducible polynomials can label bound states.
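This one-body reduction can be sketched numerically (our own illustration, at a fixed $\sigma$ and dropping the $\sigma$-integral): representing labels as multisets of irreducible factors, $<<Z_w>>$ is nonzero exactly when every factor has even multiplicity, i.e. when $w$ is a perfect square, and then a single-term $\bold{X}=Y_v Z_v$ gives $<\bold{X} \bullet \bold{X}> = Y_v^2$.

```python
# Illustrative sketch (our own representation, at fixed sigma): labels are
# sorted tuples of irreducible-factor names; <<Z_w>> = 1 iff w is a perfect
# square, i.e. every irreducible factor occurs an even number of times.
from collections import Counter

def zariski(X, Xp):
    """Classical Zariski product on dict-valued elements."""
    out = {}
    for u, Yu in X.items():
        for v, Yv in Xp.items():
            uv = tuple(sorted(u + v))
            out[uv] = out.get(uv, 0.0) + Yu * Yv
    return out

def bracket(X):
    """<<X>>: keep only basis terms whose label is a perfect square."""
    return sum(Y for w, Y in X.items()
               if all(m % 2 == 0 for m in Counter(w).values()))

def metric(X, Xp):
    return bracket(zariski(X, Xp))

Y = 0.75
X_single = {("x1", "x2"): Y}          # one-body state Y_v Z_v with v = x1 x2

# <X bullet X> reduces to Y_v^2, since v^2 is a perfect square
assert metric(X_single, X_single) == Y * Y

# a sixth power of a one-body state also survives: v^6 = (v^3)^2
X6 = X_single
for _ in range(5):
    X6 = zariski(X6, X_single)
assert bracket(X6) == Y**6
```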
In order to examine whether $S=< \mathcal{L}(\bold{X})_{\bullet}>$ represents a many-body system, we study ground states in the deformed superstring and supermembrane theories as examples. The bosonic part of the superstring Hamiltonian in a light-cone gauge is given by
\begin{equation}
H=\frac{l}{4\pi\alpha'p^+} \int_0^l d \sigma
(\frac{2\pi\alpha' (p^+)^2}{l^2}(\partial_{\tau} X^i)^2
+\frac{1}{2\pi \alpha'}(\partial_{\sigma} X^i)^2).
\end{equation}
After the deformation, we obtain
\begin{equation}
H_{\bullet}=\frac{l}{4\pi\alpha'p^+}
< \frac{2\pi\alpha' (p^+)^2}{l^2} \partial_{\tau} \bold{X}^i \bullet \partial_{\tau} \bold{X}^i
+\frac{1}{2\pi \alpha'} \partial_{\sigma} \bold{X}^i \bullet \partial_{\sigma} \bold{X}^i >,
\end{equation}
where $\bold{X}^i = \sum_u Y^i_u (\tau, \sigma) Z_u$.
$H_{\bullet}=0$ for
\begin{equation}
\bold{X}^i = \sum_u Y^i_u Z_u,
\end{equation}
where $Y^i_u$ are constants. This means that $\{Y^i_u\}$ represent flat directions. One can show that these flat directions are preserved after the deformation quantization that will be defined in the next section. Moreover, the deformed theory preserves the supersymmetry of the superstring theory, because the classical and quantum Zariski products are Abelian, distributive, and associative, and admit commutative derivatives satisfying the Leibniz rule. Thus, quantum corrections to the flat directions should be suppressed. Therefore, this theory possesses a continuous spectrum and describes a many-body system. The flat directions correspond to the positions of many-body superstrings.
One can also apply the deformation to the supermembrane Hamiltonian in a light-cone gauge
and obtain
\begin{equation}
H_{\bullet}=\frac{\nu T}{4} < (\partial_{\tau} \bold{X}^i)^2_{\bullet}
+\frac{2}{\nu^2} \{ \bold{X}^i, \bold{X}^j \}^2_{\bullet}
-\frac{2}{\nu} \Theta^T \bullet \gamma_i \{\bold{X}^i, \Theta \}_{\bullet}>,
\end{equation}
where $\bold{X}^i = \sum_u Y^i_u (\tau, \sigma) Z_u$ and
$\Theta = \sum_u \theta_u (\tau, \sigma) Z_u$.
$H_{\bullet}=0$ for
\begin{equation}
\bold{X}^i = \sum_u Y^i_u Z_u, \qquad \Theta=0,
\end{equation}
where $Y^i_u$ are constants.
Thus, $\{Y^i_u\}$ are flat directions, which correspond to positions of many-body membranes.
From these general features, we conclude that the deformations of first quantized field theories by the classical Zariski product and $\mathcal{M}$ are many-body deformations.
\section{Zariski Quantization as Second Quantization}
\setcounter{equation}{0}
In this section, we perform a deformation quantization of the classical Zariski product and $\mathcal{M}$, and obtain Zariski quantized actions from the deformed actions. We define quantum theories by path-integrals of the Zariski quantized actions. We find that pair creations and annihilations occur among the many bodies that are introduced by the many-body deformation. This indicates that first quantized field theories become second quantized theories under the Zariski quantization.
\subsection{Definitions and Mathematical Properties}
In this subsection, we summarize the definitions and mathematical properties of the deformation quantization used in the Zariski quantization \cite{DitoFlatoSternheimerTakhtajan}.
The deformation quantization of $\mathcal{M}$ is defined by
\begin{equation}
\bold{X}_{\hbar}=\sum_{r=0}^{\infty} \alpha^r \sum_{u_r} Y^r_{u_r}(\sigma) Z_{u_r} \in \mathcal{M}_{\hbar},
\end{equation}
where $\alpha$ is a deformation parameter related to $\hbar$. We will determine the relation later.
The quantum Zariski product $\bullet_{\hbar}$ is defined by a deformation quantization of the classical Zariski product as
\begin{eqnarray}
\bold{X}_{\hbar} \bullet_{\hbar} \bold{X}_{\hbar}'&=& (\sum_{r=0}^{\infty} \alpha^r \sum_{u_r} Y^r_{u_r}(\sigma) Z_{u_r})\bullet_{\hbar} (\sum_{s=0}^{\infty} \alpha^s \sum_{v_s} Y'^s_{v_s}(\sigma) Z_{v_s}) \n
&=& (\sum_{u_0} Y^0_{u_0}(\sigma) Z_{u_0})
\bullet_{\hbar}
(\sum_{v_0} Y'^0_{v_0}(\sigma) Z_{v_0}) \n
&=& \sum_{u_0, v_0} Y^0_{u_0}(\sigma) Y'^0_{v_0}(\sigma) Z_{u_0} \bullet_{\hbar} Z_{v_0}. \label{quantumproduct1}
\end{eqnarray}
Any polynomial can be decomposed uniquely as $u=a u_1 u_2 \cdots u_M$, where $a$ is a real number and $u_i$ are irreducible normalized polynomials. $Z_u \bullet_{\hbar} Z_v$ is defined by
\begin{equation}
Z_u \bullet_{\hbar} Z_v
=
ab \zeta((u_1 u_2 \cdots u_M) \times_{\hbar} (u_{M+1}u_{M+2} \cdots u_{N})), \label{quantumproduct2}
\end{equation}
where $v=b u_{M+1} u_{M+2} \cdots u_N$.
$\times_{\hbar}$ is defined by
\begin{eqnarray}
&&(u_1 u_2 \cdots u_M) \times_{\hbar} (u_{M+1}u_{M+2} \cdots u_{N}) \n
&:=&T(u_1 \otimes u_2 \otimes \cdots \otimes u_{M} \otimes u_{M+1} \otimes \cdots \otimes u_{N}) \n
&:=&
\frac{1}{N!}\sum_{\sigma \in S_N} u_{\sigma_1} * u_{\sigma_2}* \cdots * u_{\sigma_N}, \label{quantumproduct3}
\end{eqnarray}
where $u_1 \otimes u_2 \otimes \cdots \otimes u_{N}$ is the symmetric tensor product. $S_N$ is the permutation group of $\{1,2, \cdots, N\}$. $*$ is the Moyal product defined by
\begin{equation}
f*g=\sum_{r=0}^{\infty} \frac{\alpha^r}{r!}
\epsilon^{i_1 j_1} \epsilon^{i_2 j_2} \cdots \epsilon^{i_r j_r}
\frac{\partial}{\partial x_{i_1}} \frac{\partial}{\partial x_{i_2}} \cdots \frac{\partial}{\partial x_{i_r}} f
\frac{\partial}{\partial x_{j_1}} \frac{\partial}{\partial x_{j_2}} \cdots \frac{\partial}{\partial x_{j_r}}g,
\end{equation}
where $i_r$ and $j_r$ run from 1 to 2.
$\zeta$ is defined by
\begin{equation}
\zeta(\sum_{r=0}^{\infty} \alpha^r u_r)
=
\sum_{r=0}^{\infty} \alpha^r Z_{u_r}.
\end{equation}
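As an illustrative aid (our own, not part of the original construction), the Moyal product above can be implemented symbolically; the series terminates on polynomials, and, taking $\epsilon^{12}=1$, the convention gives $x_1 * x_2 = x_1 x_2 + \alpha$ and an associative product. The helper names below are our own choices.

```python
import sympy as sp

x1, x2, alpha = sp.symbols('x1 x2 alpha')

def dmix(f, n1, n2):
    """Apply d/dx1 n1 times and d/dx2 n2 times."""
    for _ in range(n1):
        f = sp.diff(f, x1)
    for _ in range(n2):
        f = sp.diff(f, x2)
    return f

def moyal(f, g):
    """f*g = sum_r (alpha^r/r!) eps^{i1 j1}...eps^{ir jr} d_{i..}f d_{j..}g,
    with eps^{12} = 1. Expanding the eps contractions gives the binomial
    form below; the series terminates once r exceeds the degree of f."""
    f, g = sp.expand(f), sp.expand(g)
    rmax = sp.Poly(f, x1, x2).total_degree()
    total = sp.Integer(0)
    for r in range(rmax + 1):
        Pr = sum((-1)**k * sp.binomial(r, k)
                 * dmix(f, r - k, k) * dmix(g, k, r - k)
                 for k in range(r + 1))
        total += alpha**r / sp.factorial(r) * Pr
    return sp.expand(total)

# the convention: x1 * x2 = x1 x2 + alpha, x2 * x1 = x1 x2 - alpha
assert moyal(x1, x2) == sp.expand(x1*x2 + alpha)
assert moyal(x2, x1) == sp.expand(x1*x2 - alpha)
# associativity on a sample
assert moyal(moyal(x1**2, x2), x1*x2) == moyal(x1**2, moyal(x2, x1*x2))
```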
The quantum Zariski product is also Abelian, distributive and associative as follows. It is Abelian because
\begin{eqnarray}
Z_u \bullet_{\hbar} Z_v
&=&
ab \zeta((u_1 u_2 \cdots u_M) \times_{\hbar} (u_{M+1}u_{M+2} \cdots u_{N})) \n
&=&
ba \zeta( (u_{M+1}u_{M+2} \cdots u_{N}) \times_{\hbar} (u_1 u_2 \cdots u_M) ) \n
&=&
Z_v \bullet_{\hbar} Z_u,
\end{eqnarray}
and
\begin{eqnarray}
\bold{X}_{\hbar} \bullet_{\hbar} \bold{X}_{\hbar}'
&=& \sum_{u_0, v_0} Y^0_{u_0}(\sigma) Y'^0_{v_0}(\sigma) Z_{u_0} \bullet_{\hbar} Z_{v_0} \n
&=& \sum_{v_0, u_0} Y'^0_{v_0}(\sigma) Y^0_{u_0}(\sigma) Z_{v_0} \bullet_{\hbar} Z_{u_0} \n
&=&\bold{X}_{\hbar}' \bullet_{\hbar} \bold{X}_{\hbar}.
\end{eqnarray}
It is distributive because
\begin{eqnarray}
\bold{X}_{\hbar} \bullet_{\hbar} (\bold{X}_{\hbar}'+\bold{X}_{\hbar}'') &=& (\sum_{r=0}^{\infty} \alpha^r \sum_{u_r} Y^r_{u_r}(\sigma) Z_{u_r})
\bullet_{\hbar}
(\sum_{s=0}^{\infty} \alpha^s \sum_{v_s} Y'^s_{v_s}(\sigma) Z_{v_s}
+
\sum_{s=0}^{\infty} \alpha^s \sum_{w_s} Y''^s_{w_s}(\sigma) Z_{w_s}) \n
&=& (\sum_{u_0} Y^0_{u_0}(\sigma) Z_{u_0})
\bullet_{\hbar}
(\sum_{v_0} Y'^0_{v_0}(\sigma) Z_{v_0}
+
\sum_{w_0} Y''^0_{w_0}(\sigma) Z_{w_0}) \n
&=& \sum_{u_0, v_0} Y^0_{u_0}(\sigma) Y'^0_{v_0}(\sigma) Z_{u_0} \bullet_{\hbar} Z_{v_0}
+
\sum_{u_0, w_0} Y^0_{u_0}(\sigma) Y''^0_{w_0}(\sigma) Z_{u_0} \bullet_{\hbar} Z_{w_0} \n
&=&
\bold{X}_{\hbar} \bullet_{\hbar} \bold{X}_{\hbar}'+ \bold{X}_{\hbar} \bullet_{\hbar} \bold{X}_{\hbar}''.
\end{eqnarray}
Associativity is verified as
\begin{eqnarray}
(Z_u \bullet_{\hbar} Z_v) \bullet_{\hbar} Z_w
&=&
ab \zeta((u_1 u_2 \cdots u_M) \times_{\hbar} (u_{M+1}u_{M+2} \cdots u_{N}))|_{\alpha=0} \bullet_{\hbar} Z_w \n
&=&
Z_{uv} \bullet_{\hbar} Z_w \n
&=&
abc \zeta((u_1 u_2 \cdots u_N) \times_{\hbar} (u_{N+1}u_{N+2} \cdots u_{L}))\n
&=&
abc \zeta(T(u_1 \otimes \cdots \otimes u_L)) \n
&=&
abc \zeta((u_1 u_2 \cdots u_M) \times_{\hbar} (u_{M+1}u_{M+2} \cdots u_{L}))\n
&=&
Z_{u} \bullet_{\hbar} Z_{vw} \n
&=&
Z_u \bullet_{\hbar} (Z_v \bullet_{\hbar} Z_w),
\end{eqnarray}
where $w=c u_{N+1} u_{N+2} \cdots u_{L}$, and thus
\begin{eqnarray}
(\bold{X}_{\hbar} \bullet_{\hbar} \bold{X}_{\hbar}') \bullet_{\hbar} \bold{X}_{\hbar}''
&=& \sum_{u_0, v_0, w_0} Y^0_{u_0}(\sigma) Y'^0_{v_0}(\sigma) Y''^0_{w_0}(\sigma) (Z_{u_0} \bullet_{\hbar} Z_{v_0}) \bullet_{\hbar} Z_{w_0} \n
&=& \sum_{u_0, v_0, w_0} Y^0_{u_0}(\sigma) Y'^0_{v_0}(\sigma) Y''^0_{w_0}(\sigma) Z_{u_0} \bullet_{\hbar} (Z_{v_0} \bullet_{\hbar} Z_{w_0}) \n
&=&\bold{X}_{\hbar} \bullet_{\hbar} (\bold{X}_{\hbar}' \bullet_{\hbar} \bold{X}_{\hbar}'').
\end{eqnarray}
Derivatives are defined as in the previous section by
\begin{equation}
\frac{\partial}{\partial \sigma^i} \bold{X}_{\hbar}=\sum_{r=0}^{\infty} \alpha^r \sum_{u_r} \frac{\partial}{\partial \sigma^i} Y^r_{u_r}(\sigma) Z_{u_r}.
\end{equation}
These derivatives are also commutative:
\begin{eqnarray}
\frac{\partial}{\partial \sigma^i} \frac{\partial}{\partial \sigma^j} \bold{X}_{\hbar}
&=&
\sum_{r=0}^{\infty} \alpha^r \sum_{u_r} \frac{\partial}{\partial \sigma^i} \frac{\partial}{\partial \sigma^j} Y^r_{u_r}(\sigma) Z_{u_r} \n
&=&
\sum_{r=0}^{\infty} \alpha^r \sum_{u_r} \frac{\partial}{\partial \sigma^j} \frac{\partial}{\partial \sigma^i} Y^r_{u_r}(\sigma) Z_{u_r} \n
&=&
\frac{\partial}{\partial \sigma^j} \frac{\partial}{\partial \sigma^i} \bold{X}_{\hbar},
\end{eqnarray}
and the derivatives of the quantum Zariski product also satisfy the Leibniz rule:
\begin{eqnarray}
\frac{\partial}{\partial \sigma^i} (\bold{X}_{\hbar} \bullet_{\hbar} \bold{X}_{\hbar}')
&=&
\sum_{u_0, v_0} \frac{\partial}{\partial \sigma^i} (Y^0_{u_0}(\sigma) Y'^0_{v_0}(\sigma)) Z_{u_0} \bullet_{\hbar} Z_{v_0} \n
&=&
\sum_{u_0, v_0}( \frac{\partial}{\partial \sigma^i} Y^0_{u_0}(\sigma) Y'^0_{v_0}(\sigma) + Y^0_{u_0}(\sigma) \frac{\partial}{\partial \sigma^i} Y'^0_{v_0}(\sigma)) Z_{u_0} \bullet_{\hbar} Z_{v_0} \n
&=&
\frac{\partial}{\partial \sigma^i} \bold{X}_{\hbar} \bullet_{\hbar} \bold{X}_{\hbar}'
+ \bold{X}_{\hbar} \bullet_{\hbar}
\frac{\partial}{\partial \sigma^i} \bold{X}_{\hbar}'.
\end{eqnarray}
We define the Zariski quantized Nambu-Poisson bracket by
\begin{eqnarray}
[\bold{X}_{\hbar}, \bold{X}_{\hbar}', \bold{X}_{\hbar}'']_{\bullet_{\hbar}}
&:=&\epsilon^{ijk}
\frac{\partial}{\partial\sigma^i} \bold{X}_{\hbar} \bullet_{\hbar}
\frac{\partial}{\partial\sigma^j} \bold{X}_{\hbar}' \bullet_{\hbar}
\frac{\partial}{\partial\sigma^k} \bold{X}_{\hbar}'' \n
&=&
\sum_{u_0,v_0,w_0}
\epsilon^{ijk}
\frac{\partial}{\partial\sigma^i} Y^0_{u_0}(\sigma)
\frac{\partial}{\partial\sigma^j} Y'^0_{v_0}(\sigma)
\frac{\partial}{\partial\sigma^k} Y''^0_{w_0}(\sigma)
Z_{u_0} \bullet_{\hbar} Z_{v_0} \bullet_{\hbar} Z_{w_0},
\end{eqnarray}
where $i, j, k= 1, 2, 3$.
By definition, the bracket is skew-symmetric. One can show that it satisfies the Fundamental Identity and the Leibniz rule in exactly the same way as in the previous section, by using the above properties. Thus, the Zariski quantized Nambu-Poisson bracket, which is a deformation quantization of the many-body deformation of the original Nambu-Poisson bracket, has the same Nambu-Poisson structure as the original Nambu-Poisson bracket.
We can show that a quantum Zariski product of two elements depends only on $\alpha^2$ in the following way.
Because
\begin{eqnarray}
&&v_{\sigma^1} * v_{\sigma^2} * \cdots * v_{\sigma^N} \n
&=&
\sum_{n_1, \cdots, n_{N-1}} \alpha^{n_1+ \cdots + n_{N-1}}\frac{1}{n_1! \cdots n_{N-1}!}(\epsilon^{i_{1, 1}j_{1, 1}} \cdots \epsilon^{i_{1, n_1} j_{1, n_1}})
(\epsilon^{i_{2, 1}j_{2, 1}} \cdots \epsilon^{i_{2, n_2} j_{2, n_2}}) \cdots
\epsilon^{i_{N-1, n_{N-1}} j_{N-1, n_{N-1}}} \n
&&\partial_{i_{N-1, 1}} \cdots \partial_{i_{N-1, n_{N-1}}}( \cdots (\partial_{i_{2, 1}} \cdots \partial_{i_{2, n_2}}(\partial_{i_{1, 1}} \cdots \partial_{i_{1, n_1}} v_{\sigma^1}
\partial_{j_{1, 1}} \cdots \partial_{j_{1, n_1}} v_{\sigma^2})
\partial_{j_{2, 1}} \cdots \partial_{j_{2, n_2}} v_{\sigma^3}) \n
&& \cdots )\partial_{j_{N-1, 1}} \cdots \partial_{j_{N-1, n_{N-1}}} v_{\sigma^N} \n
&=&
\sum_{n_1, \cdots, n_{N-1}} (-\alpha)^{n_1+ \cdots + n_{N-1}}\frac{1}{n_1! \cdots n_{N-1}!}(\epsilon^{j_{N-1, 1}i_{N-1, 1}} \cdots \epsilon^{j_{N-1, n_{N-1}} i_{N-1, n_{N-1}}}) \cdots
(\epsilon^{j_{1, 1}i_{1, 1}} \cdots \epsilon^{j_{1, n_1} i_{1, n_1}}) \n
&&\partial_{j_{N-1, 1}} \cdots \partial_{j_{N-1, n_{N-1}}}v_{\sigma^N} \partial_{i_{N-1, 1}} \cdots \partial_{i_{N-1, n_{N-1}}}(\partial_{j_{N-2, 1}} \cdots \partial_{j_{N-2, n_{N-2}}} v_{\sigma^{N-1}}
\partial_{i_{N-2, 1}} \cdots \partial_{i_{N-2, n_{N-2}}}( \n
&&
\partial_{j_{N-3, 1}} \cdots \partial_{j_{N-3, n_{N-3}}} v_{\sigma^{N-2}}
\partial_{i_{N-3, 1}} \cdots \partial_{i_{N-3, n_{N-3}}}( \cdots
(\partial_{j_{1, 1}} \cdots \partial_{j_{1, n_1}}
v_{\sigma^2}
\partial_{i_{1, 1}} \cdots \partial_{i_{1, n_1}}
v_{\sigma^1})) \cdots ), \n
\end{eqnarray}
and
\begin{eqnarray}
&&v_{\sigma^N} * v_{\sigma^{N-1}} * \cdots * v_{\sigma^1} \n
&=&
\sum_{n_1, \cdots, n_{N-1}} (\alpha)^{n_1+ \cdots + n_{N-1}}\frac{1}{n_1! \cdots n_{N-1}!}(\epsilon^{j_{N-1, 1}i_{N-1, 1}} \cdots \epsilon^{j_{N-1, n_{N-1}} i_{N-1, n_{N-1}}}) \cdots
(\epsilon^{j_{1, 1}i_{1, 1}} \cdots \epsilon^{j_{1, n_1} i_{1, n_1}}) \n
&&\partial_{j_{N-1, 1}} \cdots \partial_{j_{N-1, n_{N-1}}}v_{\sigma^N} \partial_{i_{N-1, 1}} \cdots \partial_{i_{N-1, n_{N-1}}}(\partial_{j_{N-2, 1}} \cdots \partial_{j_{N-2, n_{N-2}}} v_{\sigma^{N-1}}
\partial_{i_{N-2, 1}} \cdots \partial_{i_{N-2, n_{N-2}}}( \n
&&
\partial_{j_{N-3, 1}} \cdots \partial_{j_{N-3, n_{N-3}}} v_{\sigma^{N-2}}
\partial_{i_{N-3, 1}} \cdots \partial_{i_{N-3, n_{N-3}}}( \cdots
(\partial_{j_{1, 1}} \cdots \partial_{j_{1, n_1}}
v_{\sigma^2}
\partial_{i_{1, 1}} \cdots \partial_{i_{1, n_1}}
v_{\sigma^1})) \cdots ), \n
\end{eqnarray}
we obtain
\begin{eqnarray}
&&\frac{1}{N!} \sum_{\sigma \in S_N} (v_{\sigma^1} * \cdots *v_{\sigma^N}) \n
&=& \frac{1}{2} \frac{1}{N!} \sum_{\sigma \in S_N} (v_{\sigma^1} * \cdots *v_{\sigma^N}+
v_{\sigma^N} * \cdots *v_{\sigma^1}) \n
&=& \frac{1}{N!} \sum_{\sigma \in S_N}
\sum_{n_1+ \cdots +n_{N-1}=2n} (\alpha)^{2n}\frac{1}{n_1! \cdots n_{N-1}!}(\epsilon^{j_{N-1, 1}i_{N-1, 1}} \cdots \epsilon^{j_{N-1, n_{N-1}} i_{N-1, n_{N-1}}}) \cdots
(\epsilon^{j_{1, 1}i_{1, 1}} \cdots \epsilon^{j_{1, n_1} i_{1, n_1}}) \n
&&\partial_{j_{N-1, 1}} \cdots \partial_{j_{N-1, n_{N-1}}}v_{\sigma^N} \partial_{i_{N-1, 1}} \cdots \partial_{i_{N-1, n_{N-1}}}(\partial_{j_{N-2, 1}} \cdots \partial_{j_{N-2, n_{N-2}}} v_{\sigma^{N-1}}
\partial_{i_{N-2, 1}} \cdots \partial_{i_{N-2, n_{N-2}}}( \n
&&
\partial_{j_{N-3, 1}} \cdots \partial_{j_{N-3, n_{N-3}}} v_{\sigma^{N-2}}
\partial_{i_{N-3, 1}} \cdots \partial_{i_{N-3, n_{N-3}}}( \cdots
(\partial_{j_{1, 1}} \cdots \partial_{j_{1, n_1}}
v_{\sigma^2}
\partial_{i_{1, 1}} \cdots \partial_{i_{1, n_1}}
v_{\sigma^1})) \cdots ). \n \label{alpha=hbar^2}
\end{eqnarray}
From (\ref{quantumproduct1}), (\ref{quantumproduct2}), (\ref{quantumproduct3}), and (\ref{alpha=hbar^2}), the statement is proved. Therefore, we identify $\alpha^2$ with $\hbar$.
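The parity argument can also be confirmed with a short symbolic computation. The sketch below is our own illustration: it re-implements the Moyal product of this subsection (taking $\epsilon^{12}=1$; helper names are ours), symmetrizes a three-factor star product over all permutations, and checks that the odd powers of $\alpha$ cancel, leaving a series in $\alpha^2=\hbar$.

```python
import sympy as sp
from itertools import permutations

x1, x2, alpha = sp.symbols('x1 x2 alpha')

def dmix(f, n1, n2):
    for _ in range(n1):
        f = sp.diff(f, x1)
    for _ in range(n2):
        f = sp.diff(f, x2)
    return f

def moyal(f, g):
    """Moyal product in the convention of this subsection, eps^{12} = 1;
    the series terminates on polynomials."""
    f, g = sp.expand(f), sp.expand(g)
    rmax = sp.Poly(f, x1, x2).total_degree()
    total = sp.Integer(0)
    for r in range(rmax + 1):
        Pr = sum((-1)**k * sp.binomial(r, k)
                 * dmix(f, r - k, k) * dmix(g, k, r - k)
                 for k in range(r + 1))
        total += alpha**r / sp.factorial(r) * Pr
    return sp.expand(total)

def T(*us):
    """Symmetrized star product (1/N!) sum over permutations of star chains."""
    tot = sp.Integer(0)
    for perm in permutations(us):
        acc = perm[0]
        for u in perm[1:]:
            acc = moyal(acc, u)
        tot += acc
    return sp.expand(tot / sp.factorial(len(us)))

# reversing each chain flips the sign of every odd power of alpha, so the
# symmetrized product contains only even powers of alpha:
sym = T(x1**2, x1*x2, x2**2)
assert sym.coeff(alpha, 1) == 0 and sym.coeff(alpha, 3) == 0
# the alpha^0 part is the plain commutative product of the factors
assert sym.coeff(alpha, 0) == sp.expand(x1**3 * x2**3)
```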
\subsection{Metric}
In this subsection, we construct a natural metric for $\mathcal{M}_{\hbar}$. In particular, the metric is invariant under a gauge transformation generated by the Zariski quantized Nambu-Poisson bracket.
We define a metric for $\bold{X}_{\hbar}, \bold{X}_{\hbar}' \in \mathcal{M}_{\hbar}$ by
\begin{eqnarray}
<\bold{X}_{\hbar}, \bold{X}_{\hbar}'>
&=&
<\bold{X}_{\hbar} \bullet_{\hbar} \bold{X}_{\hbar}'> \n
&=&
\int d^p\sigma <<\bold{X}_{\hbar} \bullet_{\hbar} \bold{X}_{\hbar}'>> \n
&=&
\sum_{u_0, v_0} \int d^p \sigma Y^0_{u_0}(\sigma) Y'^0_{v_0}(\sigma) <<Z_{u_0} \bullet_{\hbar} Z_{v_0}>> \n
&=&
\sum_{u_0, v_0} \int d^p \sigma Y^0_{u_0}(\sigma) Y'^0_{v_0}(\sigma) \sum_{r=0}^{\infty} \alpha^r \sum_{w_r} <<Z_{w_r}>>,
\end{eqnarray}
where $<<Z_{w_r}>>$ are defined in the same way as in subsection 2.2.
This metric is invariant under a gauge transformation generated by the $p$-dimensional Zariski quantized Nambu-Poisson bracket. Here we show the $p=3$ case as an example; the $p=2$ and $p>3$ cases are shown in a similar way. The condition for the invariance of the metric is given by
\begin{equation}
<[\bold{X}_{\hbar}^3, \bold{X}_{\hbar}^4, \bold{X}_{\hbar}^1]_{\bullet_{\hbar}}, \bold{X}_{\hbar}^2> + <\bold{X}_{\hbar}^1, [\bold{X}_{\hbar}^3, \bold{X}_{\hbar}^4, \bold{X}_{\hbar}^2]_{\bullet_{\hbar}}>=0,
\end{equation}
in the same way as in the previous case.
The left hand side is
\begin{eqnarray}
&&<\sum_{(u_3)_0,(u_4)_0,(u_1)_0}
\epsilon^{ijk}
\partial_i (Y^3)^0_{(u_{3})_0}
\partial_j (Y^4)^0_{(u_4)_0}
\partial_k (Y^1)^0_{(u_1)_0}
Z_{(u_3)_0} \bullet_{\hbar} Z_{(u_4)_0} \bullet_{\hbar} Z_{(u_1)_0},
\sum_{(u_2)_0} (Y^2)^0_{(u_2)_0} Z_{(u_2)_0}> \n
&+&
<\sum_{(u_1)_0} (Y^1)^0_{(u_1)_0} Z_{(u_1)_0},
\sum_{(u_3)_0,(u_4)_0,(u_2)_0}
\epsilon^{ijk}
\partial_i (Y^3)^0_{(u_3)_0}
\partial_j (Y^4)^0_{(u_4)_0}
\partial_k (Y^2)^0_{(u_2)_0}
Z_{(u_3)_0} \bullet_{\hbar} Z_{(u_4)_0} \bullet_{\hbar} Z_{(u_2)_0}> \n
&=&
\sum_{(u_1)_0, (u_2)_0, (u_3)_0, (u_4)_0}
\int d^3 \sigma (\epsilon^{ijk}
\partial_i (Y^3)^0_{(u_3)_0}
\partial_j (Y^4)^0_{(u_4)_0}
\partial_k (Y^1)^0_{(u_1)_0}
(Y^2)^0_{(u_2)_0} \n
&& \qquad \qquad +
(Y^1)^0_{(u_1)_0}
\epsilon^{ijk}
\partial_i (Y^3)^0_{(u_3)_0}
\partial_j (Y^4)^0_{(u_4)_0}
\partial_k (Y^2)^0_{(u_2)_0})
<<Z_{(u_1)_0} \bullet_{\hbar} Z_{(u_2)_0} \bullet_{\hbar} Z_{(u_3)_0} \bullet_{\hbar} Z_{(u_4)_0}>> \n
&=&
\sum_{(u_1)_0, (u_2)_0, (u_3)_0, (u_4)_0}
\int d^3 \sigma (
\partial_k
\epsilon^{ijk}
\partial_i (Y^3)^0_{(u_3)_0}
\partial_j (Y^4)^0_{(u_4)_0}
(Y^1)^0_{(u_1)_0}
(Y^2)^0_{(u_2)_0}) \n
&& \qquad \qquad \qquad \qquad \qquad \quad <<Z_{(u_1)_0} \bullet_{\hbar} Z_{(u_2)_0} \bullet_{\hbar} Z_{(u_3)_0} \bullet_{\hbar} Z_{(u_4)_0}>> \n
&=&0.
\end{eqnarray}
\subsection{Pair Creation and Annihilation}
In this subsection, we study general features of Zariski quantized theories by using a simple model. We start with a first quantized action,
\begin{equation}
S_0=\frac{1}{2} X^2
+ \lambda X^6, \label{firstquantizedaction}
\end{equation}
where $X \in \bold{R}$ represents a target coordinate.
If we deform it by the classical Zariski product and $\mathcal{M}$, we obtain
\begin{equation}
S=<\frac{1}{2} \bold{X} \bullet \bold{X}
+ \lambda (\bold{X})_{\bullet}^6>, \label{classicalaction}
\end{equation}
where $\bold{X}=\sum_u Y_u Z_u$ and
$Y_u \in \bold{R}$.
After the Zariski quantization, we obtain
\begin{equation}
S_{\hbar}=<\frac{1}{2} \bold{X} \bullet_{\hbar} \bold{X} + \lambda (\bold{X})^6_{\bullet_{\hbar}}>. \label{Zariskiquantizedaction}
\end{equation}
We define a theory by a path-integral as
\begin{equation}
Z= \int \mathcal{D}Y \exp{(\frac{i}{\hbar}<\frac{1}{2} \bold{X} \bullet_{\hbar} \bold{X}
+ \lambda (\bold{X})_{\bullet_{\hbar}}^6>)}. \label{pathint}
\end{equation}
(\ref{classicalaction}) is the classical action of this theory because it dominates the path-integral (\ref{pathint}) in the $\hbar \to 0$ limit.
We have typical interaction terms in (\ref{Zariskiquantizedaction}):
\begin{eqnarray}
&&\lambda (Y_{x_1} Z_{x_1}) \bullet_{\hbar} (Y_{x_1} Z_{x_1}) \bullet_{\hbar} (Y_{x_1^2} Z_{x_1^2}) \bullet_{\hbar} (Y_{x_2} Z_{x_2}) \bullet_{\hbar} (Y_{x_2} Z_{x_2}) \bullet_{\hbar} (Y_{x_2^2} Z_{x_2^2}) \n
&=& \lambda Y_{x_1} Y_{x_1} Y_{x_1^2} Y_{x_2} Y_{x_2} Y_{x_2^2} Z_{x_1} \bullet_{\hbar} Z_{x_1} \bullet_{\hbar} Z_{x_1^2} \bullet_{\hbar} Z_{x_2} \bullet_{\hbar} Z_{x_2} \bullet_{\hbar} Z_{x_2^2} \n
&=& \lambda Y_{x_1} Y_{x_1} Y_{x_1^2} Y_{x_2} Y_{x_2} Y_{x_2^2} \zeta(T(x_1 \otimes x_1 \otimes x_1 \otimes x_1 \otimes x_2 \otimes x_2 \otimes x_2 \otimes x_2)) \n
&=& \lambda Y_{x_1} Y_{x_1} Y_{x_1^2} Y_{x_2} Y_{x_2} Y_{x_2^2}
\zeta(x_1^4 x_2^4 + \frac{16}{5} \hbar x_1^2x_2^2 + O(\hbar^2) ) \n
&=& \lambda Y_{x_1} Y_{x_1} Y_{x_1^2} Y_{x_2} Y_{x_2} Y_{x_2^2}
(Z_{x_1}Z_{x_1}Z_{x_1^2}Z_{x_2}Z_{x_2}Z_{x_2^2}
+ \frac{16}{5} \hbar Z_{x_1}Z_{x_1}Z_{x_2}Z_{x_2}+O(\hbar^2)) \n
&=& \lambda (Y_{x_1} Z_{x_1}) (Y_{x_1} Z_{x_1}) (Y_{x_1^2} Z_{x_1^2}) (Y_{x_2} Z_{x_2}) (Y_{x_2} Z_{x_2}) (Y_{x_2^2} Z_{x_2^2}) \n
&&+ \frac{16}{5} \hbar \lambda (Y_{x_1^2}Y_{x_2^2}) (Y_{x_1} Z_{x_1}) (Y_{x_1} Z_{x_1}) (Y_{x_2} Z_{x_2}) (Y_{x_2} Z_{x_2}) +O(\hbar^2). \label{interactions}
\end{eqnarray}
In (\ref{interactions}), $\lambda (Y_{x_1} Z_{x_1}) (Y_{x_1} Z_{x_1}) (Y_{x_1^2} Z_{x_1^2}) (Y_{x_2} Z_{x_2}) (Y_{x_2} Z_{x_2}) (Y_{x_2^2} Z_{x_2^2})$ is a classical interaction term, whereas $\frac{16}{5} \hbar \lambda (Y_{x_1^2}Y_{x_2^2}) (Y_{x_1} Z_{x_1}) (Y_{x_1} Z_{x_1}) (Y_{x_2} Z_{x_2}) (Y_{x_2} Z_{x_2})$ is a quantum correction. Because each $Z_u$ in the interactions is a basis element for a one-body state, the classical interactions represent 6-body interactions, as shown in Fig.\ref{classical}, whereas the quantum correction represents a 4-body interaction. Because the quantum correction is $O(\hbar)$, it should be interpreted as a one-loop correction, as shown in Fig.\ref{quantum}. This implies that pair creations and annihilations occur among the many bodies introduced by the many-body deformation. Therefore, we conclude that first quantized field theories become second quantized theories after the Zariski quantization.
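As a small consistency check (our own, not from the original), both interaction labels displayed in (\ref{interactions}) are perfect squares of normalized polynomials, so both terms survive the $<<\cdot>>$ evaluation that enters the action. Representing a label by the multiset of its irreducible factors:

```python
from collections import Counter

def is_perfect_square(label):
    """A label w is (a real number times) a perfect square exactly when
    every irreducible factor occurs an even number of times, which is the
    condition for <<Z_w>> to be nonzero."""
    return all(m % 2 == 0 for m in Counter(label).values())

# Z_{x1} Z_{x1} Z_{x1^2} Z_{x2} Z_{x2} Z_{x2^2}: factors x1^4 x2^4 = (x1^2 x2^2)^2
classical_label = ("x1",) * 4 + ("x2",) * 4
# O(hbar) term Z_{x1} Z_{x1} Z_{x2} Z_{x2}: factors x1^2 x2^2 = (x1 x2)^2
quantum_label = ("x1",) * 2 + ("x2",) * 2

assert is_perfect_square(classical_label)
assert is_perfect_square(quantum_label)
# an odd label such as x1^3 x2^2 would be projected out
assert not is_perfect_square(("x1",) * 3 + ("x2",) * 2)
```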
\begin{figure}
\begin{center}
\subfigure[a classical interaction]{
\psfrag{x1}{$Y_{x_1} Z_{x_1}$}
\psfrag{x2}{$Y_{x_1} Z_{x_1}$}
\psfrag{x3}{$Y_{x_2} Z_{x_2}$}
\psfrag{x4}{$Y_{x_2} Z_{x_2}$}
\psfrag{x5}{$Y_{x_2^2} Z_{x_2^2}$}
\psfrag{x6}{$Y_{x_1^2} Z_{x_1^2}$}
\includegraphics[height=5cm, keepaspectratio, clip]{feynman1.eps}
\label{classical}
}
\hfill
\subfigure[a quantum correction]{
\psfrag{x1}{$Y_{x_1} Z_{x_1}$}
\psfrag{x2}{$Y_{x_1} Z_{x_1}$}
\psfrag{x3}{$Y_{x_2} Z_{x_2}$}
\psfrag{x4}{$Y_{x_2} Z_{x_2}$}
\psfrag{x5}{$Y_{x_1^2}Y_{x_2^2}$}
\psfrag{x6}{$\hbar$}
\includegraphics[height=5cm, keepaspectratio, clip]{feynman2.eps}
\label{quantum}
}
\end{center}
\caption{Typical interactions}
\end{figure}
\section{Examples}
The Zariski quantization is applicable to any first quantized field theory and preserves its supersymmetries. In this section, we present relevant examples.
The Zariski quantized type IIB superstring action \cite{GreenSchwarz} is given by
\begin{eqnarray}
S_{s}&=&-\frac{1}{2 \pi \alpha'} < \sqrt{-\frac{1}{2}(\epsilon^{ij}\bold{\Pi}^{\mu}_i \bullet_{\hbar} \bold{\Pi}^{\nu}_j)^2_{\bullet_{\hbar}}}_{\bullet_{\hbar}}
+i \epsilon^{ij} \partial_i \bold{X}^{\mu} \bullet_{\hbar} (\bar{\bold{\Theta}}^1 \bullet_{\hbar} \Gamma_{\mu} \partial_j \bold{\Theta}^1 - \bar{\bold{\Theta}}^2 \bullet_{\hbar} \Gamma_{\mu} \partial_j \bold{\Theta}^2) \n
&&-\epsilon^{ij}\bar{\bold{\Theta}}^1 \bullet_{\hbar} \Gamma^{\mu} \partial_i \bold{\Theta}^1 \bullet_{\hbar} \bar{\bold{\Theta}}^2 \bullet_{\hbar} \Gamma_{\mu} \partial_j \bold{\Theta}^2>,
\end{eqnarray}
where
$i, j = 0, 1$, $\mu, \nu =0, \cdots, 9$,
$\bold{\Pi}^{\mu}_i = \partial_i \bold{X}^{\mu} -i \bar{\bold{\Theta}}^1 \bullet_{\hbar} \Gamma^{\mu} \partial_i \bold{\Theta}^1 -
i \bar{\bold{\Theta}}^2 \bullet_{\hbar} \Gamma^{\mu} \partial_i \bold{\Theta}^2$. $\bold{\Theta}^1$ and $\bold{\Theta}^2$ are $SO(1, 9)$ Majorana-Weyl fermions that possess the same chirality.
The Zariski quantized supermembrane action \cite{BergshoeffSezginTownsend} is given by
\begin{eqnarray}
S_{M}&=&
\Bigl<
\sqrt{-det\bold{G}_{\bullet_{\hbar}}}_{\bullet_{\hbar}}
+\frac{i}{4} \epsilon^{i j k} \bar{\bold{\Psi}} \bullet_{\hbar} \Gamma_{MN} \partial_{i} \bold{\Psi} \bullet_{\hbar}
(\bold{\Pi}_{j}^{\,\,\, M} \bullet_{\hbar} \bold{\Pi}_{k}^{\,\,\, N}
+\frac{i}{2}\bold{\Pi}_{j}^{\,\,\, M} \bullet_{\hbar} \bar{\bold{\Psi}} \bullet_{\hbar} \Gamma^N \partial_{k}\bold{\Psi} \n
&& \qquad \qquad \qquad \qquad \qquad \qquad \qquad -\frac{1}{12} \bar{\bold{\Psi}} \bullet_{\hbar} \Gamma^M \partial_{j}\bold{\Psi} \bullet_{\hbar} \bar{\bold{\Psi}} \bullet_{\hbar} \Gamma^N \partial_{k}\bold{\Psi})
\Bigr>,
\end{eqnarray}
where $i, j, k = 0, 1, 2$, $M, N=0, \cdots, 10$, $\bold{G}_{i j}= \bold{\Pi}_{i}^{\,\,\, M} {\bullet_{\hbar}} \bold{\Pi}_{j M}$ and $\bold{\Pi}_{i}^{\,\,\, M}= \partial_{i} \bold{X}^{M}
-\frac{i}{2} \bar{\bold{\Psi}} {\bullet_{\hbar}} \Gamma^{M} \partial_{i} \bold{\Psi}$.
$\bold{\Psi}$ is an $SO(1, 10)$ Majorana fermion.
These two theories are expected to be second quantized covariant theories of superstrings and supermembranes.
By performing the Zariski quantization of the type IIB superstring in the Schild gauge \cite{Schild}, which is equivalent to the IIB matrix model \cite{IKKT} with the area preserving diffeomorphism symmetry, we obtain
\begin{equation}
S_{IIB} = < -\frac{1}{4} \{ \bold{X}^{\mu}, \bold{X}^{\nu} \}_{\bullet_{\hbar}}^2
- \frac{1}{2} \bar{\bold{\Theta}} {\bullet_{\hbar}} \Gamma^{\mu} \{\bold{X}_{\mu}, \bold{\Theta} \}_{\bullet_{\hbar}}>, \label{IIBaction}
\end{equation}
where $\bold{\Theta}$ is an $SO(1, 9)$ Majorana-Weyl fermion.
By performing the Zariski quantization of the supermembrane action in a semi-light-cone gauge \cite{MModel}, which is equivalent to the 3-algebra model of M-theory with the volume preserving diffeomorphism symmetry \cite{MModel, LorentzianM}, we obtain
\begin{eqnarray}
S_{3algM}
&=&
\Bigl<
-\frac{1}{12}\{\bold{X}^I, \bold{X}^J, \bold{X}^K\}_{\bullet_{\hbar}}^2
-\frac{1}{2}(\bold{A}^{u}_{\alpha a b} {\bullet_{\hbar}} \{\varphi^a_u, \varphi^b_u, \bold{X}^{I}\})_{\bullet_{\hbar}}^2 \n
&& \qquad \qquad
-\frac{1}{3} E^{\alpha \beta \gamma}
\bold{A}^u_{\alpha a b} {\bullet_{\hbar}} \bold{A}^v_{\beta c d} {\bullet_{\hbar}} \bold{A}^w_{\gamma e f}
\{\varphi^a_u, \varphi^c_v, \varphi^d_v\}\{\varphi^b_u, \varphi^e_w, \varphi^f_w\} \n
&& \qquad \qquad
-\frac{i}{2}\bar{\bold{\Psi}} {\bullet_{\hbar}} \Gamma^{\alpha} \bold{A}^u_{\alpha a b} {\bullet_{\hbar}} \{\varphi^a_u, \varphi^b_u, \bold{\Psi}\}
+\frac{i}{4}\bar{\bold{\Psi}} {\bullet_{\hbar}} \Gamma_{IJ}\{\bold{X}^I, \bold{X}^J, \bold{\Psi}\}_{\bullet_{\hbar}}
\Bigr>,
\label{continuumaction}
\end{eqnarray}
where $\alpha, \beta, \gamma = 0, 1, 2$, $I, J, K = 3, \cdots, 10$ and $\varphi^a$ form a complete basis of functions in three dimensions.
$E^{\alpha \beta \gamma}$ is the Levi-Civita symbol in three dimensions.
$\bold{\Psi}$ is an $SO(1,2) \times SO(8)$ Majorana-Weyl fermion satisfying
\begin{eqnarray}
&&\Gamma^{012}\bold{\Psi}=-\bold{\Psi} \label{kappa}, \\
&& \bold{\Psi}^{\dagger} = \bold{\Psi}^{T}.
\end{eqnarray}
These theories are also expected to be second quantized theories of superstrings and supermembranes. It is rather easy to study their relations to matrix models and string field theories because (\ref{IIBaction}) and (\ref{continuumaction}) possess large gauge symmetries, the area and volume preserving diffeomorphism symmetries, respectively, which should correspond to the large gauge symmetries of the matrix models and string field theories.
\section{Conclusion and Discussion}
\setcounter{equation}{0}
In this paper, we clarified the physical meaning of the Zariski quantization. We found that we can obtain second quantized theories by performing the Zariski quantization, which consists of the many-body deformation and deformation quantization, of first quantized field theories, such as superstring and supermembrane theories. The Zariski quantization preserves the supersymmetries of the first quantized theories, because the quantum Zariski product is Abelian, associative and distributive, and admits a commutative derivative satisfying the Leibniz rule. Therefore, by performing the Zariski quantization of superstring and supermembrane theories, we can obtain second quantized theories of superstrings and supermembranes.
We discuss the origin of the difference between the physical consequences in the paper by G. Dito et al. and in our paper. In our paper, we deformed the first quantized field theories by $\mathcal{M}$, which depend on both $\sigma$-spaces and $x$-spaces. In $\mathcal{M}$, $u(x)$ are labels on many bodies and $Y_u(\sigma)$ are their fields. As a result, Zariski quantized theories are second quantized theories. On the other hand, G. Dito et al. studied the Zariski quantization on small subspaces $\mathcal{A}_0$, which depend essentially only on $x$-spaces, since the basis elements $J(Z_u) \in \mathcal{A}_0$ satisfy $\partial_{\sigma}J(Z_u)= J(Z_{\partial_{x}u})$ \cite{DitoFlatoSternheimerTakhtajan}. Then, there are no degrees of freedom of $Y_u(\sigma)$ in $\mathcal{A}_0$. Because one needs to define physical observables by $u$ themselves in $\mathcal{A}_0$, the Zariski quantized theories cannot be second quantized theories, and the Zariski quantization was interpreted as ``sesqui-quantization,'' halfway between first and second quantization.
String field theories and several matrix models are known to describe many-body strings or membranes, although they have not yet been proven to formulate non-perturbative string theory. We hope that the non-perturbative dynamics of string theory will be derived from the Zariski quantized superstring and supermembrane theories. One reasonable way is to study the relations among the Zariski quantized theories, the string field theories and the matrix models.
\vspace*{0cm}
\section{Introduction}
\label{sec:aintro}
As a consequence of the gravitational diffraction of light \cite{Turyshev:2017,Turyshev-Toth:2017}, electromagnetic (EM) waves traveling from distant sources in the close proximity of the Sun are focused by the solar gravitational field at heliocentric distances beyond $\overline z\simeq b^2/(2r_g)\gtrsim 547.6 \,(b/R_\odot)^2$ astronomical units (AU), where $b$ is a light ray's impact parameter, $r_g=2GM_\odot/c^2$ is the Schwarzschild radius of the Sun and $R_\odot$ is its radius. This diffraction process is characterized by truly remarkable properties: At optical or near infrared wavelengths, it offers light amplification of up to a factor of $4\pi^2 r_g/\lambda\simeq 2.1\times 10^{11}\, (1\,\mu{\rm m}/\lambda)$, and angular resolution of up to $\simeq0.38\,{\lambda}/{b}=0.10\,({\lambda}/{1\,\mu{\rm m}})(R_\odot/b)$ nanoarcseconds (nas) \cite{Turyshev:2017,Turyshev-Toth:2017,Turyshev-Toth:2019-extend}.
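The onset of the focal region quoted above can be checked directly; a minimal numerical sketch (using standard values of $G$, $M_\odot$, $c$ and $R_\odot$, so the last digit may differ slightly from the text):

```python
# Sketch: reproduce the focal-distance estimate zbar ~ b^2/(2 r_g) for b = R_sun
G     = 6.674e-11         # m^3 kg^-1 s^-2
M_sun = 1.989e30          # kg
c     = 2.99792458e8      # m/s
R_sun = 6.957e8           # m
AU    = 1.495978707e11    # m

r_g = 2 * G * M_sun / c**2              # Schwarzschild radius of the Sun, ~2954 m
z_focus_au = R_sun**2 / (2 * r_g) / AU  # onset of the SGL focal region, ~547.6 AU
```

This recovers $\overline z\simeq 547.6$ AU for rays grazing the solar limb ($b=R_\odot$), with larger impact parameters focusing proportionally farther out.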
The resulting solar gravitational lens (SGL) allows for extraordinary observational capabilities, including, for instance, direct high-resolution imaging and spectroscopy of Earth-like exoplanets \cite{Turyshev-etal:2018}. We can benefit from this unique natural `instrument' with the help of a meter-class telescope, equipped with a solar coronagraph (which is needed to block the solar light), and positioned in the strong interference region of the SGL (see Fig.~\ref{fig:regions}) with respect to the intended imaging target. Until recently such deep space missions were hard to contemplate, but with recent reports on the Voyager 1 spacecraft reaching distances beyond 140 AU while still transmitting valuable data after more than 42 years of continuous operation, and with advances in spacecraft miniaturization and progress in propulsion technologies, efforts to explore the space outside our solar system have intensified \cite{KISS:2015,Turyshev-etal:2018}.
Recognizing its value for astronomy and astrophysics, we recently investigated the optical properties of the SGL and developed its wave-optical treatment \cite{Turyshev-Toth:2017,Turyshev-Toth:2019,Turyshev-Toth:2019-extend}. With this knowledge, we studied photometric imaging with the SGL \cite{Turyshev-Toth:2019-blur}, estimating the total power that is incident on the aperture of an imaging telescope, thus measuring the amplitude of the incident signal. As part of that investigation, we established that imaging of extended sources with the SGL is affected by blurring, due to the SGL's inherent spherical aberration. With these results at hand, we investigated the process of image formation of point sources using an optical telescope placed in the SGL focal region \cite{Turyshev-Toth:2019-image}. We derived analytical expressions that can be used to model extended sources using numerical tools.
In the present paper, we investigate the image formation process by an optical telescope in the SGL focal region, viewing an extended, resolved source positioned at a large, but finite distance from the Sun. This investigation of the imaging process requires knowledge not only of the amplitude of the signal, but also its phase. Our objective is to derive analytical expressions that may be used to evaluate signals from realistic targets, which is important for a variety of potential astronomical applications of the SGL. To assess realistic observing scenarios in the context of a potential deep space mission, we also study the process of deconvolving blurred SGL images under realistic conditions in the presence of various sources of noise. We provide the theoretical foundation to address these important questions. Our ultimate goal is to offer analytical tools to compute photon fluxes from realistic sources, to estimate detection SNRs and the required integration times for a given observing scenario, to evaluate the quality of reconstructed images and, by doing so, to move the concept of imaging with the SGL from the domain of theoretical physics to the mainstream of astronomy and astrophysics.
Our paper is organized as follows:
Section~\ref{sec:im-form} introduces the SGL and the solution for the EM field in the image plane in the strong interference region behind the Sun.
Section \ref{sec:image-sens} discusses the modeling of the intensity distribution observed in the focal plane behind the convex lens. We present the total signal received from the extended source as consisting of two parts: the signal from the directly imaged region of the source and the blur received from the rest of the source. Although our basic results are generic, to allow for the analytic evaluation of realistic observing scenarios, we model the source as a uniformly illuminated disk. This approach allows us to develop analytical expressions to estimate the total photon flux received by the telescope.
In Section \ref{sec:weak-int} we study image formation in the geometric optics and weak interference regions, thus extending our results to all the optical regions behind the Sun and demonstrating the compatibility of our results with known microlensing models.
In Section~\ref{sec:power} we derive the power deposited in the focal plane of the imaging telescope from the directly imaged region of the target object, the rest of the target and also light contamination from off-target sources. We estimate the photon flux received at the detector from a realistic distant target for various cases of the image-telescope geometries. We estimate the resulting SNRs in the presence of light from the solar corona, which is the dominant source of noise.
In Section \ref{sec:convolve} we develop an approach to evaluate the ``deconvolution penalty'', the amount by which measurement noise is amplified by the deconvolution process that is used to recover a high-quality image from observations blurred by the SGL. We evaluate the integration times needed to obtain direct, high-quality resolved images of exoplanets, and demonstrate the superiority of the SGL compared to exoplanet imaging scenarios unaided by the SGL.
In Section \ref{sec:disc} we discuss results and explore avenues for the next phase of our investigation of imaging and spectroscopy of exoplanets with the SGL.
Finally, Appendix \ref{sec:model} contains a brief analysis of the solar corona using the same methodology applied in the rest of the paper, offering a suitable basis for comparison. In Appendix~\ref{sec:PSF-average} we derive a form of the point-spread function of the SGL that is averaged over the aperture of an optical telescope and discuss the properties of this averaged formulation.
\begin{figure}
\includegraphics[scale=0.25]{regions}
\caption{\label{fig:regions}The different optical regions of the SGL
(adapted from \cite{Turyshev-Toth:2019-extend}).
}
\end{figure}
\section{Image formation process with the SGL}
\label{sec:im-form}
\subsection{The EM field in the strong interference region}
\label{sec:EM-field}
In \cite{Turyshev-Toth:2019-extend}, we considered light from an extended source at a finite distance, $z_0$, from the Sun. We parameterize the problem using a heliocentric spherical coordinate system $(r,\theta,\phi)$ that is aligned with a preferred axis: a line connecting a preselected (e.g., central) point in the source to the center of the Sun, as shown in Fig.~\ref{fig:imaging-geom}. We also make use of a cylindrical coordinate system $(\rho,z,\phi)$, with the $z$-axis corresponding to the preferred axis. Furthermore, we characterize points in the image plane and the source plane (both perpendicular to the $z$-axis) using 2-dimensional vector coordinates $\vec{x}$ and $\vec{x}'$, respectively.
\begin{figure}[h]
\includegraphics[scale=0.7]{imaging-geom}
\caption{\label{fig:imaging-geom}The geometry of imaging a point source with the SGL. A point source with coordinates $(x',y')$ is positioned in the source plane, at the distance $z_0$ from the Sun. The SGL image plane is at the heliocentric distance ${\overline z}$. Rays with different optical paths produce a diffraction pattern in the SGL image plane that is observed by an imaging telescope.}
\end{figure}
Considering light, modeled as a monochromatic high-frequency EM wave (i.e., neglecting terms $\propto(kr)^{-1}$, where $k=2\pi/\lambda$ is the wavenumber), emitted by a source at the distance of $r_0=(z_0^2+|{\vec x}'|^2)^\frac{1}{2}\simeq z_0\gg r_g$ from the Sun (see Fig.~\ref{fig:imaging-geom}) and received on the opposite side of it at the heliocentric distance of $r=(\overline z^2+|{\vec x}|^2)^\frac{1}{2}\simeq \overline z\gg r_g$, we derived the components of the EM field near the optical axis in the strong interference region of the SGL (see Fig.~\ref{fig:regions}). Up to terms of ${\cal O}(\rho^2/z^2, \sqrt{2r_g\overline z}/z_0)$, the components of this EM field take the form \cite{Turyshev-Toth:2019-extend,Turyshev-Toth:2019-blur,Turyshev-Toth:2019-image}
{}
\begin{eqnarray}
\left( \begin{aligned}
{E}_\rho& \\
{H}_\rho& \\
\end{aligned} \right) = \left( \begin{aligned}
{H}_\phi& \\
-{E}_\phi& \\
\end{aligned} \right)&=&
\frac{E_0}{z_0} \sqrt{2\pi kr_g}e^{i\sigma_0}
J_0\Big(\frac{2\pi}{\lambda}
\sqrt{\frac{2r_g}{\overline z}}
|{\vec x}+\frac{\overline z}{{ z}_0}{\vec x'}|\Big)
e^{i\big(k(r+r_0+r_g\ln 2k(r+r_0))-\omega t\big)}
\left( \begin{aligned}
\cos\phi& \\
\sin\phi& \\
\end{aligned} \right),
\label{eq:DB-sol-rho}
\end{eqnarray}
where the $z$-components of the EM wave behave as $({E}_z, {H}_z)\sim {\cal O}({\rho}/{z}, \sqrt{2r_g\overline z}/z_0)$. The quantity $\overline z=z(1+z/z_0+{\cal O}(z^2/z_0^2))$ denotes heliocentric distances along the line connecting the point source and the center of the Sun (see Fig.~\ref{fig:imaging-geom}). Note that these expressions are valid for forward scattering when $\theta+ \sqrt{2r_g\overline z}/z_0\approx 0$, or when $0\leq \rho\leq r_g$.
We can describe the imaging of an extended source. For that, we use the solution for the EM field (\ref{eq:DB-sol-rho}) and study the Poynting vector, ${\vec S}=(c/4\pi)\big<\overline{[\Re{\vec E}\times\Re{\vec H}]}\big>$, that describes the energy flux in the image plane \cite{Wolf-Gabor:1959,Richards-Wolf:1959,Born-Wolf:1999}. Normalizing this flux to the time-averaged value that would be observed if the gravitational field of the Sun were absent, $|\overline{\vec S}_0|=(c/8\pi)E_0^2/z_0^2$, we define the amplification factor of the SGL, ${ \mu}_{\tt SGL}=|{\vec S}|/|\overline{\vec S}_0|$:
{}
\begin{eqnarray}
{ \mu}_{\tt SGL}({\vec x},{\vec x}')&=&
\mu_0J^2_0\Big(\frac{2\pi}{\lambda}
\sqrt{\frac{2r_g}{\overline z}}
|{\vec x}+\frac{\overline z}{{ z}_0}{\vec x'}|\Big),
\qquad {\rm with} \qquad
\mu_0=\frac{4\pi^2}{1-e^{-4\pi^2 r_g/\lambda}}\frac{r_g}{\lambda}\simeq1.17\times 10^{11}\,
\Big(\frac{1\,\mu{\rm m}}{\lambda}\Big).
\label{eq:S_z*6z-mu2}
\end{eqnarray}
The angular resolution of the SGL is determined by the first zero of the Bessel function $J_0(x)$ in (\ref{eq:S_z*6z-mu2}), which occurs at $x=2.4048$ and yields
{}
\begin{eqnarray}
R_{\tt SGL}=\big|\frac{{\vec x}}{\overline z}+\frac{{\vec x'}}{{ z}_0}\big|=0.38 \frac{\lambda}{\sqrt{2r_g\overline z}}=0.10\Big(\frac{\lambda}{1\,\mu{\rm m}}\Big)\Big(\frac{650\,{\rm AU}}{\overline z}\Big)^\frac{1}{2}~{\rm nas}.
\label{eq:S_=}
\end{eqnarray}
Note that by setting ${\vec x}'=0$ in (\ref{eq:S_=}), we recover the SGL's resolution for point sources \cite{Turyshev-Toth:2017}. Let us compare the SGL to a conventional optical telescope with aperture $d$ and focal length $f$. Its light amplification is known to be \cite{Born-Wolf:1999,Goodman:2017} (see also the relevant derivations in Appendix~\ref{sec:model}, for instance, (\ref{eq:pow-cor})):
{ }
\begin{eqnarray}
\mu_{\tt tel}({\vec x},{\vec x}')&=&
i_0 \Big( \frac{2
J_1\big(u\frac{1}{2}d\big)}{u\frac{1}{2}d}\Big)^2, \qquad {\rm with}\qquad i_0= \Big(\frac{kd^2}{8f}\Big)^2 \qquad {\rm and} \qquad u =\frac{\pi d}{\lambda}\big|\frac{{\vec x}}{\overline z}+\frac{{\vec x'}}{{ z}_0}\big|.
\label{eq:amp=*}
\end{eqnarray}
As is well known, it is the first zero of the Bessel function $J_1(x)$ at $x=1.220\pi$ in (\ref{eq:amp=*}) that determines the telescope's resolution:
{}
\begin{eqnarray}
R_{\tt tel}=\big|\frac{{\vec x}}{\overline z}+\frac{{\vec x'}}{{ z}_0}\big|=1.22\, \frac{\lambda}{d}=0.21\Big(\frac{\lambda}{1\,\mu{\rm m}}\Big)\Big(\frac{1\,{\rm m}}{d}\Big)~{\rm as},
\label{eq:S_=0}
\end{eqnarray}
which is more than $2\times10^9$ times less than that of the SGL. Again, by setting ${\vec x}'=0$ in (\ref{eq:S_=0}), we recover the familiar expression for the angular resolution of an optical telescope for point sources \cite{Born-Wolf:1999,Goodman:2017}.
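As a cross-check, the on-axis amplification $\mu_0$ in (\ref{eq:S_z*6z-mu2}) and the resolution advantage implied by (\ref{eq:S_=}) and (\ref{eq:S_=0}) can be reproduced numerically. The following minimal sketch uses standard values of the physical constants (so the last digits may differ slightly from those quoted in the text) and the fiducial parameters $\lambda=1~\mu$m, $d=1$ m, $\overline z=650$ AU:

```python
import math

# standard constants; the text's own values may differ in the last digit
G, M_sun, c = 6.674e-11, 1.989e30, 2.99792458e8
AU  = 1.495978707e11      # m
lam = 1.0e-6              # wavelength, m
d   = 1.0                 # telescope aperture, m
zbar = 650 * AU           # heliocentric distance of the image plane, m

r_g = 2 * G * M_sun / c**2                      # Schwarzschild radius of the Sun
# on-axis amplification factor mu_0; the exponential term is utterly negligible here
mu_0 = 4 * math.pi**2 / (1 - math.exp(-4 * math.pi**2 * r_g / lam)) * r_g / lam

# angular resolution of the SGL and of a conventional telescope, in radians
R_sgl = 0.38 * lam / math.sqrt(2 * r_g * zbar)
R_tel = 1.22 * lam / d
R_sgl_nas = R_sgl / (math.pi / 180 / 3600 * 1e-9)   # radians -> nanoarcseconds
ratio = R_tel / R_sgl                               # SGL resolution advantage
```

This yields $\mu_0\simeq1.17\times10^{11}$, $R_{\tt SGL}\simeq0.10$ nas, and a resolution ratio of about $2.4\times10^9$, consistent with the statement that the SGL outresolves a 1 m telescope by more than $2\times10^9$.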
However, the impressive amplification and angular resolution of the SGL (\ref{eq:S_=}) come at a price, which is the spherical aberration inherent in the SGL's optical properties \cite{Turyshev-Toth:2019-blur}. To discuss the impact of this aberration on prospective imaging with the SGL, it is convenient to introduce its point-spread function (PSF), given by ${\rm PSF}={ \mu}_{\tt SGL}({\vec x},{\vec x}')/\mu_0=J^2_0\big(({2\pi}/{\lambda}) \sqrt{{2r_g}/{\overline z}} |{\vec x}+({\overline z}/{{ z}_0}){\vec x'}|\big)$. Thus, expression (\ref{eq:S_z*6z-mu2}) is the PSF of the SGL, scaled by the amplification factor on the optical axis, $\mu_0$. (Note that (\ref{eq:amp=*}) does the same, by scaling the PSF of an optical telescope, $\propto (2J_1(x)/x)^2$, with the intensity at the center, $i_0$.)
The PSF concept is used in Fourier optics to describe the properties of an imaging system characterized by its diffraction pattern \cite{Born-Wolf:1999,Goodman:2017}. In fact, the imaging system's resolution can be limited either by aberration or by diffraction causing blurring of the image. These two phenomena have different origins and are unrelated. The PSF describes the interplay between diffraction and aberration: the smaller the aperture of a lens the more likely the PSF is dominated by diffraction. As was discussed in \cite{Turyshev-Toth:2019-extend}, the PSF of the SGL is rather broad, behaving as $\propto 1/\rho$, as the distance from the optical axis, $\rho=|{\vec x}+({\overline z}/{{ z}_0}){ \vec x'}|$, increases. The PSF of an optical telescope (\ref{eq:amp=*}) falls off much faster, behaving as $\propto 1/\rho^3$. It is this behavior of the monopole SGL that is responsible for the considerable blurring of any image that forms in the SGL's image plane. However, given that the PSF of the SGL is known, its inverse can be used to reconstruct the original image \cite{Turyshev-etal:2018}. Below we will consider the impact of the SGL blur on the image quality.
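The $\propto1/\rho$ falloff of the SGL PSF follows directly from the large-argument behavior of $J_0^2$: averaged over an oscillation period, $J_0^2(x)\to1/(\pi x)$, so quadrupling the argument reduces the period-averaged PSF by a factor of four (in contrast to the much steeper $\propto1/\rho^3$ falloff of the telescope PSF). A small self-contained numerical check, with $J_0$ evaluated from its integral representation:

```python
import math

def j0(x, n=2000):
    # Bessel J0 from the integral representation J0(x) = (1/pi) int_0^pi cos(x sin t) dt,
    # evaluated with the composite midpoint rule
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) / n

def mean_psf(x0, m=200):
    # average of J0(x)^2 over one full oscillation period (length 2*pi) centered on x0
    h = 2 * math.pi / m
    return sum(j0(x0 - math.pi + (k + 0.5) * h)**2 for k in range(m)) / m

# the PSF's first zero sits near x = 2.4048, as used in the resolution estimate above
near_zero = abs(j0(2.4048))

# period-averaged PSF falls off as 1/x: quadrupling x divides the mean by ~4
falloff = mean_psf(20.0) / mean_psf(80.0)
```

The computed ratio is close to 4, illustrating the slow $1/\rho$ decay responsible for the SGL's blur.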
Examining (\ref{eq:S_z*6z-mu2}) and recognizing that (\ref{eq:S_=}) is extremely small, we see that a monopole gravitational lens acts as a convex lens by focusing light, according to
{}
\begin{equation}
{\vec x}=-\frac{\overline z}{z_0}{\vec x}' \qquad \rightarrow \qquad x=-\frac{\overline z}{z_0}x', \qquad y=-\frac{\overline z}{z_0}y'.
\label{eq:mapping}
\end{equation}
These expressions imply that the SGL focuses light in the opposite quadrant in the image plane while also reducing the size of the image compared to the source by a factor of ${\overline z}/{z_0}\sim1.0\times 10^{-4}\,({\overline z}/650 ~{\rm AU}) (30~{\rm pc}/z_0)$. For an exoplanet with radius $R_\oplus$, positioned at a distance of $z_0$ from the Sun, the image of this target at a heliocentric distance of ${\overline z}$, will be compressed to a cylinder with radius
{}
\begin{equation}
r_\oplus=\frac{\overline z}{z_0}R_\oplus=669.98\,\Big(\frac{\overline z}{650 ~{\rm AU}}\Big) \Big(\frac{30~{\rm pc}}{z_0}\Big)~{\rm m}.
\label{eq:rE}
\end{equation}
A telescope with aperture $d\ll r_\oplus$ would have to scan this image by traversing and sampling the image plane at multiple locations to recover the image.
\begin{figure}
\includegraphics[scale=0.9]{image-general}
\caption{\label{fig:dir-image}Imaging of extended resolved sources with the SGL. The SGL is a convex lens, producing inverted images of a source.}
\end{figure}
Consider the process of imaging an extended, resolved source. In the most widely considered practical scenario, the kilometer-scale image plane is sampled by a telescope with a meter-scale aperture. Such a telescope has the resolution required to employ a coronagraph, but it is otherwise used as a photometric detector,
measuring the brightness of the Einstein ring that forms around the Sun from light originating from the exoplanet. First, we recognize that the telescope's aperture is much smaller than the image size, $d\ll 2r_\oplus$. This leads us to separate the received signal into two parts: the signal received from the directly imaged region that corresponds to the telescope location, and the blur due to light received from the rest of the source. Based on the SGL's mapping (\ref{eq:mapping}), for a given point $(x_0,y_0)$ in the image plane (Fig.~\ref{fig:dir-image}), the directly imaged region will be in the vicinity of the point $(x'_0,y_0')=-(z_0/\overline z)(x_0,y_0)$ in the source plane. Furthermore, given the telescope aperture $d$, the directly imaged region in the source plane has the diameter
{}
\begin{equation}
D=\frac{z_0}{\overline z}d =9.52\,\Big(\frac{d}{1 ~{\rm m}}\Big) \Big(\frac{650 ~{\rm AU}}{\overline z}\Big) \Big(\frac{z_0}{30~{\rm pc}}\Big)~{\rm km},
\label{eq:Dd}
\end{equation}
centered at $(x'_0,y_0')$. The signal received from the areas of the source outside of $D$ causes the blur \cite{Turyshev-Toth:2019-blur}. Using (\ref{eq:rE}) and (\ref{eq:Dd}), we see that a telescope with the aperture $d$ could resolve an exoplanet whose radius is $R_{\tt exo}$ with $N_d$ linear resolution elements (see Fig.~\ref{fig:dir-image}) given by
{}
\begin{equation}
N_d=\frac{2R_\oplus}{D}\Big(\frac{R_{\tt exo}}{R_\oplus}\Big)=\frac{2r_\oplus}{d}\Big(\frac{R_{\tt exo}}{R_\oplus}\Big)=1339.95\,\Big(\frac{1 ~{\rm m}}{d}\Big) \Big(\frac{\overline z}{650 ~{\rm AU}}\Big) \Big(\frac{30~{\rm pc}}{z_0}\Big)\Big(\frac{R_{\tt exo}}{R_\oplus}\Big).
\label{eq:res-el}
\end{equation}
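The geometric quantities in (\ref{eq:rE})--(\ref{eq:res-el}) follow from the mapping (\ref{eq:mapping}) alone and are easy to verify; a short numerical sketch for the fiducial geometry ($\overline z=650$ AU, $z_0=30$ pc, $d=1$ m, using the equatorial Earth radius):

```python
# image-plane geometry of an Earth-sized exoplanet seen through the SGL
AU, pc = 1.495978707e11, 3.0857e16      # m
R_earth = 6.3781e6                      # equatorial Earth radius, m

zbar = 650 * AU                         # heliocentric distance of the image plane
z0   = 30 * pc                          # distance to the source
d    = 1.0                              # telescope aperture, m

demag   = zbar / z0                     # image reduction factor, ~1.05e-4
r_image = demag * R_earth               # image radius, ~670 m
D_src   = d / demag                     # directly imaged region on the source, ~9.5 km
N_d     = 2 * r_image / d               # linear resolution elements, ~1340
```

These values reproduce the estimates quoted above: an image radius of $\sim670$ m, a directly imaged region of $\sim9.5$ km, and $\sim1340$ linear resolution elements across the image.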
\subsection{Image formation by an optical telescope in the SGL image plane}
\label{sec:image-form-Fourier}
To produce images of faint, distant objects with the SGL, we represent an imaging telescope by a convex lens with aperture $d$ and focal distance $f$; see Fig.~\ref{fig:imaging-sensor}. We position the telescope at a point with coordinates ${\vec x}_0$ in the image plane in the strong interference region of the lens (Fig.~\ref{fig:regions}) \cite{Nambu:2013,Kanai-Nambu:2013,Nambu:2013b,Born-Wolf:1999,Turyshev-Toth:2019-extend}. To stay within the image, ${\vec x}_0$ is within the range: $|{\vec x}_0|+d/2\leq r_\oplus$. The amplitude of the EM wave just in front of the telescope aperture, from (\ref{eq:DB-sol-rho}), is given as
{}
\begin{eqnarray}
{\cal A}({\vec x},{\vec x}_0, {\vec x}')&=&
\sqrt{\mu_0} J_0\Big(k
\sqrt{\frac{2r_g}{\overline z}} |{\vec x}+{\vec x}_0+\frac{\overline z}{{ z}_0}{\vec x}'|\Big).
\label{eq:amp-w}
\end{eqnarray}
\begin{figure}
\includegraphics[scale=0.65]{imaging-sensor}
\caption{\label{fig:imaging-sensor}Imaging a point source with the SGL with a telescope. The telescope is positioned on the optical axis that connects the source and the Sun and it ``sees'' the Einstein ring. The telescope is represented by a convex lens with a diameter $d$ and a focal length $f$. Positions in the SGL image plane, $(x,y)$, and the optical telescope's focal plane, $(x_i,y_i)$, are also shown. }
\end{figure}
The presence of a convex lens is equivalent to a Fourier transform of the wave (\ref{eq:amp-w}). The focal plane of the optical telescope is located at the focal distance $f$ of the lens, centered on ${\vec x}_0$. Using the Fresnel--Kirchhoff diffraction formula, the amplitude of the image field in the optical telescope's focal plane at a location ${\vec x}_i=(x_i,y_i)$ is given by \cite{Wolf-Gabor:1959,Richards-Wolf:1959,Born-Wolf:1999}:
{}
\begin{eqnarray}
{\cal A}({\vec x}_i,{\vec x}_0, {\vec x}')=\frac{i}{\lambda}\iint \displaylimits_{|{\vec x}|^2\leq (d/2)^2} \hskip -7pt {\cal A}({\vec x},{\vec x}_0, {\vec x}')e^{-i\frac{k}{2f}|{\vec x}|^2}\frac{e^{iks}}{s}d^2{\vec x}.
\label{eq:amp-w-f0}
\end{eqnarray}
The function $e^{-i\frac{k}{2f}|{\vec x}|^2}=e^{-i\frac{k}{2f}(x^2+y^2)}$ represents the action of the convex lens that transforms incident plane waves to spherical waves, focusing at the focal point. Assuming that the focal length is sufficiently larger than the radius of the lens, we may approximate the optical path $s$ as $s=\sqrt{(x-x_i)^2+(y-y_i)^2+f^2}\sim f+\big((x-x_i)^2+(y-y_i)^2\big)/2f$. This allows us to present (\ref{eq:amp-w-f0}) as
{}
\begin{eqnarray}
{\cal A}({\vec x}_i,{\vec x}_0,{\vec x}')&=&
- \sqrt{\mu_0} \frac{e^{ikf(1+{{\vec x}_i^2}/{2f^2})}}{i\lambda f}\iint\displaylimits_{|{\vec x}|^2\leq (\frac{1}{2}d)^2} d^2{\vec x}
J_0\Big(k
\sqrt{\frac{2r_g}{\overline z}} |{\vec x}+{\vec x}_0+\frac{\overline z}{{ z}_0}{\vec x}'|\Big) e^{-i\frac{k}{f}({\vec x}\cdot{\vec x}_i)}.
\label{eq:amp-w-f}
\end{eqnarray}
To account for the propagation distance between the source and the image plane, we recognize that the field strength, $E_0/z_0$, of the plane wave in (\ref{eq:DB-sol-rho}) is a function of the coordinates on the source plane, namely $E_0({\vec x}')/{\bar r}$, where $\bar r$ is the distance between a point on the source plane with coordinates $({\vec x}',-z_0)$ and a point on the image plane with coordinates $({\vec x}+{\vec x}_0,\overline z)$, namely $\bar r=\big(({\vec x}+{\vec x}_0-{\vec x}')^2+({\overline z}+z_0)^2\big)^\frac{1}{2}$. Given the fact that $z_0\gg \{|{\vec x}'|, {\overline z},|{\vec x}+{\vec x}_0|\}$, we may approximate $\bar r\simeq z_0+{\cal O}({\overline z}^2/z_0^2)$, yielding the transformation of the field strength as $E_0/z_0\rightarrow E_0({\vec x}')/z_0$. Note that we do not approximate the phase of the EM wave (\ref{eq:DB-sol-rho}), only its amplitude. This is because the phase is the quantity of primary interest for the SGL; thus, we need to know it with the highest available precision.
Next, with the amplitude ${\cal A}({\vec x}_i,{\vec x}_0,{\vec x}')$ given by (\ref{eq:amp-w-f}), the EM field (\ref{eq:DB-sol-rho}) in the focal plane of the telescope (indicated by subscript ${\vec x}_i$) produced by a point source positioned in the source plane at coordinates ${\vec x}'$ (Figs.~\ref{fig:imaging-geom}, \ref{fig:dir-image}) is given as
{}
\begin{eqnarray}
\left( \begin{aligned}
{E}_\rho& \\
{H}_\rho& \\
\end{aligned} \right)_{\hskip -3pt {\vec x}_i} = \left( \begin{aligned}
{H}_\phi& \\
-{E}_\phi& \\
\end{aligned} \right)_{\hskip -3pt \vec x_i} &=&\frac{{E}_0({\vec x}')}{z_0}
{\cal A}({\vec x}_i,{\vec x}_0,{\vec x}')
e^{i\big(k(r+r_0+r_g\ln 2k(r+r_0))-\omega t\big)}
\left( \begin{aligned}
\cos\phi& \\
\sin\phi& \\
\end{aligned} \right).
\label{eq:DB-sol-rho2}
\end{eqnarray}
With this expression, we may compute the Poynting vector of the EM field that originates at a point source at coordinates $\vec x'$ in the source plane, is captured by a telescope with aperture $d$ in the image plane centered on coordinates ${\vec x}_0$, and is finally received in the telescope's image plane at ${\vec x}_i$. Given the form (\ref{eq:DB-sol-rho2}) of the EM field, the Poynting vector will have only one nonzero component, $S_z$. With overline and brackets denoting time-averaging and ensemble averaging (over the source's surface), respectively, and defining $\Omega(t)=k(r+r_0+r_g\ln 2k(r+r_0))-\omega t$, we compute $S_z$ as
{}
\begin{eqnarray}
S_z({\vec x}_i,{\vec x}_0,{\vec x}')=\frac{c}{4\pi}\big<\overline{[\Re{\vec E}\times\Re{\vec H}]}_z\big>=\frac{c}{4\pi}\frac{E_0^2}{z_0^2}
\big<\overline{\big(\Re\big[{\cal A}({\vec x}_i,{\vec x}_0,{\vec x}')e^{i\Omega(t)}\big]\big)^2}\big>.
\label{eq:Pv}
\end{eqnarray}
Dividing this expression by the time-averaged Poynting vector of a spherical EM wave propagating in the absence of gravity that would be received at the same location but before entering the telescope \cite{Born-Wolf:1999}, $|\overline{{\vec S}}_0|=({c}/{8\pi}) E_0^2/z_0^2,$ we obtain the amplification factor, $\mu({\vec x}_i,{\vec x}_0,{\vec x}')=S_z({\vec x}_i,{\vec x}_0,{\vec x}')/|\overline{{\vec S}}_0|$ of the optical system consisting of the SGL and an imaging telescope, i.e., the convolution of the PSF of the SGL with that of an optical telescope:
{}
\begin{eqnarray}
\mu({\vec x}_i,{\vec x}_0,{\vec x}')=2
\big<\overline{\big(\Re\big[{\cal A}({\vec x}_i,{\vec x}_0,{\vec x}')e^{i\Omega(t)}\big]\big)^2}\big>.
\label{eq:psf}
\end{eqnarray}
To compute the intensity distribution corresponding to the light received from the entire extended source and received in the focal plane of the imaging telescope, we need to form a product of the source's surface brightness per unit area, $B_{\tt s}({\vec x}')\propto {E}^2_0({\vec x}')$ with dimensions of ${\rm W \,m}^{-2}{\rm sr}^{-1}$, and the PSF from (\ref{eq:psf}), and integrate the result over the entire surface of the source. Therefore, the intensity distribution on the detector at the focal plane of the optical telescope that is positioned on the image plane in the strong interference region of the SGL, may be presented as
{}
\begin{eqnarray}
I({\vec x}_i,{\vec x}_0) =\frac{1}{z^2_0} \iint d^2{\vec x}' B_{\tt s}({\vec x}') \mu({\vec x}_i,{\vec x}_0,{\vec x}'),
\label{eq:power}
\end{eqnarray}
which accounts for the fact that the EM field originating at the extended source is not spatially coherent.
As a result, to compute the power received by a detector in the focal plane of an imaging telescope positioned in the SGL image plane, we need to first compute the Fourier transform of the complex amplitude of the EM field (\ref{eq:amp-w-f}) and then follow the process that is outlined above and is captured by (\ref{eq:psf}) and (\ref{eq:power}). This approach allows one to employ the powerful tools of Fourier optics (e.g., \cite{Goodman:2017}) to develop practical applications of the SGL.
\section{Modeling the signal in the focal plane of an optical telescope}
\label{sec:image-sens}
In the previous section we obtained expressions that characterize the intensity distribution of light originating at a distant, extended source and received by an imaging telescope in the image plane. We now consider the intensity distribution in the focal plane of an optical telescope. We recognize that an actual astrophysical telescope is a complex instrument and has physical limitations related to its design and manufacturing specifications. In our present analysis, we use an idealized model in the form of an optically perfect convex thin lens. This is sufficient to study the principles of image formation in the telescope image plane.
\subsection{Complex amplitude in the focal plane}
\label{sec:extended-image}
Expression (\ref{eq:amp-w-f}) is rather complex and cannot be evaluated analytically in the general case. Such expressions are usually evaluated numerically instead, often in the spatial frequency domain after a Fourier-transform \cite{Goodman:2017}. However, some useful analytical approximations do exist, which we explore here.
To simplify the discussion, it is convenient to express the position ${\vec x}_0$ of the telescope in the SGL image plane via the coordinates ${\vec x}_0'$ of the corresponding central position of the directly imaged region in the source plane (see Fig.~\ref{fig:dir-image}). Using the mapping (\ref{eq:mapping}), this can be done as
{}
\begin{equation}
{\vec x}_0=-\frac{\overline z}{z_0}{\vec x}_0'.
\label{eq:mapping*}
\end{equation}
As a result, (\ref{eq:amp-w-f}) takes the following equivalent form:
{}
\begin{eqnarray}
{\cal A}({\vec x}_i,{\vec x}_0',{\vec x}')&=&-\sqrt{\mu_0}
\frac{e^{ikf(1+{{\vec x}_i^2}/{2f^2})}}{i\lambda f}
\iint\displaylimits_{|{\vec x}|^2\leq (\frac{1}{2}d)^2}\hskip -8pt
d^2{\vec x}
J_0\Big(k
\sqrt{\frac{2r_g}{\overline z}} |{\vec x}+\frac{\overline z}{{z}_0}({\vec x}'-{\vec x}_0')|\Big) e^{-i\frac{k}{f}({\vec x}\cdot{\vec x}_i)}.
\label{eq:amp-w-fd3*}
\end{eqnarray}
Because the spatial frequency $\alpha$ is high, the Bessel function $J_0(\alpha\rho)$ in (\ref{eq:amp-w-fd3*}) oscillates rapidly as the distance from the optical axis $\rho$ increases, while its envelope diminishes only slowly, $\propto 1/\sqrt{\rho}$. This behavior of $J_0$ in the complex amplitude of the EM wave (\ref{eq:amp-w-fd3*}) is the source of a significant imaging blur \cite{Turyshev-Toth:2019-extend,Turyshev-Toth:2019-blur}. In other words, a telescope with aperture $d\ll r_\oplus$ in the focal region of the SGL receives light not only from the directly imaged region with diameter $D=(z_0/{\overline z}) d \leq R_\oplus$ on the surface of a resolved source, but also from the rest of that surface that lies outside the region with the diameter $D$.
Following \cite{Turyshev-Toth:2019-blur}, we recognize that for any given location of the telescope in the image plane, the total EM field at the telescope's focal plane from an exoplanet, ${\cal A}_{\tt source}$, is the sum of two contributions: the EM field received from the directly imaged region, ${\cal A}_{\tt dir}$, and the blur from the rest of the source, ${\cal A}_{\tt blur}$. We therefore need to evaluate the integral in (\ref{eq:amp-w-fd3*}) in these two regions:
{ }
\begin{eqnarray}
{\cal A}_{\tt source}({\vec x}_i,{\vec x}_0',{\vec x}')&=&{\cal A}_{\tt dir}({\vec x}_i,{\vec x}_0',{\vec x}')+{\cal A}_{\tt blur}({\vec x}_i,{\vec x}_0',{\vec x}').
\label{eq:amp-total}
\end{eqnarray}
In this expression, the directly imaged region is given by expression (\ref{eq:amp-w-fd3*}) for all the points on the source, ${\vec x}'$, that lie within the range $|{\vec x}'-{\vec x}'_0|\leq \frac{1}{2}D$.
In addition, blur from the rest of the source is also given by expression (\ref{eq:amp-w-fd3*}), but for $|{\vec x}'-{\vec x}'_0|\geq \frac{1}{2}D$, $|{\vec x}'|<\rho_\oplus$, where $\rho_\oplus$ is the radius of the source, as measured from the origin of the coordinate system.
Although the expressions for ${\cal A}_{\tt dir}({\vec x}_i,{\vec x}_0',{\vec x}')$ and ${\cal A}_{\tt blur}({\vec x}_i,{\vec x}_0',{\vec x}')$ have identical analytical form, the amplitudes of the EM waves in these expressions correspond to different regions with different intensities, ${E}^{\tt dir}_0$ and ${E}^{\tt blur}_0$. Radiation received from these two regions is spatially incoherent, $\big<{E}^{\tt dir}_0{E}^{\tt blur}_0\big>=0$, where $\big<...\big>$ denotes spatial averaging.
To compute ${\cal A}_{\tt dir}$ and ${\cal A}_{\tt blur}$, we need to evaluate the double integral over $d^2 {\vec x}$ for two different regions. To do this, we introduce two-dimensional coordinates to describe points in the source plane, $\vec x'$; the position of the telescope in the image plane, $\vec x_0$; points in the image plane within the telescope's aperture, $\vec x$; and points in the optical telescope's focal plane ${\vec x}_i$. These are given as follows:
\begin{eqnarray}
\{{\vec x}'\}&\equiv& (x',y')=\rho'\big(\cos\phi',\sin\phi'\big)=\rho'{\vec n}',
\label{eq:x'}\\
\{{\vec x}_0\}&\equiv& (x_0,y_0)=\rho_0\big(\cos\phi_0,\sin\phi_0\big)=\rho_0{\vec n}_0, \label{eq:x0}\\
\{{\vec x}\}&\equiv& (x,y)=\rho\big(\cos\phi,\sin\phi\big)=\rho\,{\vec n}, \label{eq:x}\\
\{{\vec x}_i\}&\equiv& (x_i,y_i)=\rho_i\big(\cos\phi_i,\sin\phi_i\big)=\rho_i{\vec n}_i.
\label{eq:p}
\end{eqnarray}
We introduce the following notations for the two relevant spatial frequencies and a useful ratio for convenience:
{}
\begin{eqnarray}
\alpha=k \sqrt{\frac{2r_g}{\overline z}}, \qquad \eta_i=k\frac{\rho_i}{f}, \qquad \beta=\frac{\overline z}{{z}_0}.
\label{eq:alpha-mu}
\end{eqnarray}
The quantities $\alpha$ and $\eta_i$ are the spatial frequencies involved in the image formation process with the SGL using a convex lens at the image plane. The frequency $\alpha$ is fixed and is determined by the chosen observation wavelength and the heliocentric distance. The frequency $\eta_i$ is variable: in addition to the observing wavelength and the focal length of the optical telescope, the subscript $i$ serves as a reminder that it depends also on the position $\vec{x}_i$ in the optical telescope's focal plane. The quantity $\beta$ is a scale factor that accounts for the finite distance to the source and heliocentric distance to the image plane.
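As an aside, the typical magnitudes of these quantities may be illustrated with a short numerical sketch. The wavelength, distances, and focal length below are assumed, representative values, not parameters prescribed by the text; the Einstein-ring radius uses the condition $\eta_i=\alpha$ discussed below:

```python
# Representative magnitudes of the spatial frequencies in Eq. (alpha-mu).
# All parameter values below are illustrative assumptions.
import math

lam  = 1.0e-6                       # observing wavelength, m (assumed)
k    = 2.0 * math.pi / lam          # wavenumber
r_g  = 2953.25                      # Schwarzschild radius of the Sun, m
AU   = 1.495978707e11               # astronomical unit, m
zbar = 650.0 * AU                   # heliocentric distance, m (assumed)
z0   = 30.0 * 3.0857e16             # distance to the source: 30 pc (assumed)
f    = 10.0                         # telescope focal length, m (assumed)

alpha = k * math.sqrt(2.0 * r_g / zbar)   # fixed SGL spatial frequency
beta  = zbar / z0                         # scale factor z_bar / z_0

def eta(rho_i):
    """Variable spatial frequency at radius rho_i in the focal plane."""
    return k * rho_i / f

# The Einstein ring appears in the focal plane where eta(rho_i) = alpha:
rho_ER = alpha * f / k

print(f"alpha  = {alpha:.2f} 1/m")            # about 49 1/m at 650 AU, 1 um
print(f"beta   = {beta:.3e}")                 # about 1e-4
print(f"rho_ER = {rho_ER * 1e6:.1f} microns")
```

With these assumed values, the image of the source shrinks by the factor $\beta\sim10^{-4}$, and the Einstein ring sits at a radius of tens of microns in the focal plane.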
With the notations (\ref{eq:alpha-mu}), the integral in (\ref{eq:amp-w-fd3*}) present in the expressions for both complex amplitudes takes the form
{}
\begin{eqnarray}
\iint\displaylimits_{|{\vec x}|^2\leq (\frac{1}{2}d)^2}\hskip -8pt
d^2{\vec x}\,
J_0\Big(\alpha \big|{\vec x}+\beta({\vec x}'-{\vec x}'_0)\big|\Big) e^{-i\eta_i({\vec x}\cdot{\vec n}_i)}.
\label{eq:amp-w-d*}
\end{eqnarray}
By evaluating this integral for different regions in the source plane, we can compute the amplitudes ${\cal A}_{\tt dir}({\vec x}_i,{\vec x}_0',{\vec x}')$ and ${\cal A}_{\tt blur}({\vec x}_i,{\vec x}_0',{\vec x}')$ that are needed to evaluate the signal received from the entire source.
\subsection{Complex amplitude of the EM field received from the directly imaged region}
\label{sec:power-dim}
We first consider the directly imaged region (see Fig.~\ref{fig:regions}). Assuming that $\beta|{\vec x'}-{\vec x}'_0|\ll |{\vec x}|$ everywhere in this region, we may evaluate (\ref{eq:amp-w-d*}) by keeping only the leading term in the series expansion with respect to the small parameter $\beta|{\vec x}'-{\vec x}'_0|/|{\vec x}|$, which implies that the EM field here may be approximated by light coming from the central point, ${\vec x}'={\vec x}'_0$, of that unresolved spot of diameter $D$ in the source plane. With this assumption and notations (\ref{eq:x}), the integral (\ref{eq:amp-w-d*}) may be easily evaluated:
{}
\begin{eqnarray}
\int_0^{\frac{1}{2}d}\hskip -8pt \rho d\rho
\int_0^{2\pi} \hskip -8pt d\phi\,
J_0(\alpha \rho) e^{-i\eta_i \rho\cos(\phi-\phi_i)}=
\pi\Big(\frac{d}{2}\Big)^2 \frac{2}{(\alpha^2-\eta_i^2){\textstyle\frac{1}{2}}d} \Big(\alpha J_0(\eta_i {\textstyle\frac{1}{2}}d) J_1(\alpha {\textstyle\frac{1}{2}}d)-\eta_i J_0(\alpha {\textstyle\frac{1}{2}}d) J_1(\eta_i {\textstyle\frac{1}{2}}d)\Big),
\label{eq:amp-B-res}
\end{eqnarray}
where $\alpha$ and $\eta_i$ are given by (\ref{eq:alpha-mu}).
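The closed form (\ref{eq:amp-B-res}) may be verified numerically. The sketch below (with arbitrary, assumed test values for $\alpha$, $\eta_i$, and $d$, and a simple quadrature of the integral representation of $J_n$ in place of library Bessel functions) compares the radial integral on the left-hand side with the Lommel-type combination on the right:

```python
# Numerical sanity check of Eq. (amp-B-res): the angular integration of the
# plane-wave phase yields 2*pi*J0(eta_i*rho); the remaining radial integral
# matches the combination of J0 and J1 on the right-hand side.
import math

def J(n, x, N=800):
    # Integral representation J_n(x) = (1/pi) int_0^pi cos(n t - x sin t) dt,
    # evaluated with the trapezoid rule.
    h = math.pi / N
    s = 0.5 * (1.0 + math.cos(n * math.pi))
    for i in range(1, N):
        t = i * h
        s += math.cos(n * t - x * math.sin(t))
    return s * h / math.pi

alpha, eta_i, d = 3.0, 1.7, 2.0   # arbitrary test values (assumed)
a = 0.5 * d

# Left-hand side: 2*pi * int_0^a rho J0(alpha rho) J0(eta_i rho) d rho
N = 500
h = a / N
lhs = 0.5 * a * J(0, alpha * a) * J(0, eta_i * a)  # endpoint; rho=0 term vanishes
for i in range(1, N):
    rho = i * h
    lhs += rho * J(0, alpha * rho) * J(0, eta_i * rho)
lhs *= 2.0 * math.pi * h

# Right-hand side of Eq. (amp-B-res)
rhs = (math.pi * a * a * 2.0 / ((alpha**2 - eta_i**2) * a)
       * (alpha * J(0, eta_i * a) * J(1, alpha * a)
          - eta_i * J(0, alpha * a) * J(1, eta_i * a)))

print(lhs, rhs)   # the two values agree
```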
This result allows us to present the complex amplitude of the EM field received from the directly imaged region, ${\cal A}_{\tt dir}({\vec x}_i,{\vec x}_0',{\vec x}')$, which can be derived from (\ref{eq:amp-w-fd3*}) in the following form:
{ }
\begin{eqnarray}
{\cal A}_{\tt dir}({\vec x}_i,{\vec x}_0',{\vec x}')&=&i
\sqrt{\mu_0}e^{ikf(1+{{\vec x}_i^2}/{2f^2})}
\Big(\frac{kd^2}{8 f}\Big) \frac{2}{(\alpha^2-\eta_i^2){\textstyle\frac{1}{2}}d} \Big(\alpha J_0(\eta_i {\textstyle\frac{1}{2}}d) J_1(\alpha {\textstyle\frac{1}{2}}d)-\eta_i J_0(\alpha {\textstyle\frac{1}{2}}d) J_1(\eta_i {\textstyle\frac{1}{2}}d)\Big).
\label{eq:amp-dir3}
\end{eqnarray}
We can now compute the Poynting vector of a plane wave that travels through the gravitational field of the Sun and is received in the focal plane of a convex lens placed in the focal region of the SGL. For this, we substitute the result (\ref{eq:amp-dir3}) into (\ref{eq:Pv}). After temporal averaging, we obtain the following expression for the Poynting vector of an EM wave, which depends only on the radial position $\rho_i$ in the focal plane of the lens (where, from (\ref{eq:alpha-mu}), $\eta_i=\eta_i(\rho_i)$):
{}
\begin{eqnarray}
S_{\tt dir}(\rho_i)=\frac{c}{8\pi}
E_0^2
\Big(\frac{kd^2}{8f}\Big)^2 \mu_0\Big(\frac{2}{(\alpha^2-\eta_i^2){\textstyle\frac{1}{2}}d} \Big(\alpha J_0(\eta_i {\textstyle\frac{1}{2}}d) J_1(\alpha {\textstyle\frac{1}{2}}d)-\eta_i J_0(\alpha {\textstyle\frac{1}{2}}d) J_1(\eta_i {\textstyle\frac{1}{2}}d)\Big)\Big)^2.
\label{eq:Pv1}
\end{eqnarray}
Substituting this result in (\ref{eq:psf}), we derive the PSF of an imaging system that relies on the SGL and a convex lens, scaled by the Fresnel number and the gain of the SGL on the optical axis:
{}
\begin{eqnarray}
\mu_{\tt dir}(\rho_i)=\mu_0 \Big(\frac{kd^2}{8f}\Big)^2\Big(\frac{2}{(\alpha^2-\eta_i^2){\textstyle\frac{1}{2}}d} \Big(\alpha J_0(\eta_i {\textstyle\frac{1}{2}}d) J_1(\alpha {\textstyle\frac{1}{2}}d)-\eta_i J_0(\alpha {\textstyle\frac{1}{2}}d) J_1(\eta_i {\textstyle\frac{1}{2}}d)\Big)\Big)^2,
\label{eq:psf1}
\end{eqnarray}
with $\alpha$ and $\eta_i$ given by (\ref{eq:alpha-mu}). This imaging PSF is a result of a convolution of two point-spread functions: the PSF of the SGL (\ref{eq:S_z*6z-mu2}) and that of the convex lens, behaving as $\propto (2J_1(x)/x)^2$. This expression shows that the PSF for an unresolved source does not depend on the source's position in the source plane; nor does it depend on the telescope's position in the image plane. It is determined entirely by the parameters of the imaging telescope \cite{Turyshev-Toth:2019-image}.
Substituting result (\ref{eq:psf1}) into (\ref{eq:power}), we derive the intensity distribution for light received from the directly imaged region, which is determined by the following expression:
{}
\begin{eqnarray}
I_{\tt dir}(\rho_i) =\frac{1}{z^2_0}
\int_0^{2\pi} \hskip -8pt d\phi' \int^{\textstyle\frac{D}{2}}_0\hskip -8pt \rho' d\rho'
B_{\tt s}({\vec x}') \mu_{\tt dir}(\rho_i).
\label{eq:pow-dir}
\end{eqnarray}
Assuming that the surface brightness within the directly imaged region is uniform, $ B_{\tt s}({\vec x}')=B_{\tt s}$, the integrals in (\ref{eq:pow-dir}) are easily computed. As a result, we obtain the following intensity distribution for the light received from this region:
{}
\begin{eqnarray}
I_{\tt dir}(\rho_i) =\pi B_{\tt s} \Big(\frac{kd^2}{8f}\Big)^2\frac{\mu_0d^2}{4{\overline z}^2}
\Big(\frac{2}{(\alpha^2-\eta_i^2){\textstyle\frac{1}{2}}d} \Big(\alpha J_0(\eta_i {\textstyle\frac{1}{2}}d) J_1(\alpha {\textstyle\frac{1}{2}}d)-\eta_i J_0(\alpha {\textstyle\frac{1}{2}}d) J_1(\eta_i {\textstyle\frac{1}{2}}d)\Big)\Big)^2,
\label{eq:pow-dirD}
\end{eqnarray}
where we accounted for (\ref{eq:Dd}). We note that (\ref{eq:pow-dirD}) agrees with a similar expression given by Eq. (15) in \cite{Turyshev-Toth:2019-image} (which was obtained for imaging a point source), by extending it to the case of an extended source at a large, but finite distance. Fig.~\ref{fig:images} (left) shows the characteristic behavior\footnote{See also \url{https://youtu.be/wdFEM9KiMZU} for a video simulation.} captured by (\ref{eq:pow-dirD}).
\begin{figure}
\includegraphics[width=\linewidth]{fig5}
\caption{\label{fig:images}Top row: Density plots simulating images that appear in the focal plane of the optical telescope. Left: the directly imaged region, in accordance with Eq.~(\ref{eq:pow-dirD}). The brightness of this image is exaggerated to ensure that the Einstein ring and diffraction artifacts remain visible. Center: Light from the rest of the source, in accordance with Eq.~(\ref{eq:P-blur*2*}). This is the dominant light contribution, yielding a much brighter Einstein ring with less prominent diffraction artifacts. Right: image contamination due to a nearby source of light, in accordance with Eq.~(\ref{eq:P-blur*off4*}), showing light from another uniformly illuminated disk of the same size, offset horizontally by ten radii. Bottom row: corresponding dimensionless intensities depicted on a decimal logarithmic scale. The contribution from the directly imaged region is ${\cal O}(10^3)$ less than the contribution from the rest of the source. Contribution from a nearby object is of similar intensity, but confined to narrow sections of the Einstein ring.
}
\end{figure}
To study the behavior of (\ref{eq:pow-dirD}) at the Einstein ring, we take the limit $\eta_i\rightarrow \alpha$, which results in
{}
\begin{eqnarray}
I_{\tt dir}(\rho_i^{\tt ER}) =\pi B_{\tt s} \Big(\frac{kd^2}{8f}\Big)^2\frac{\mu_0d^2}{4{\overline z}^2}
\Big(J^2_0(\alpha {\textstyle\frac{1}{2}}d)+J^2_1(\alpha {\textstyle\frac{1}{2}}d)\Big)^2.
\label{eq:pow-dirD+}
\end{eqnarray}
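This limit may be checked numerically; the sketch below (with arbitrary assumed test values for $\alpha$ and $d$, and a quadrature-based $J_n$) confirms that the Lommel-type ratio in (\ref{eq:psf1}) indeed tends to $J_0^2(\alpha{\textstyle\frac{1}{2}}d)+J_1^2(\alpha{\textstyle\frac{1}{2}}d)$ as $\eta_i\rightarrow\alpha$:

```python
# Numerical check that the Lommel-type ratio in Eq. (psf1) tends to
# J0^2(alpha d/2) + J1^2(alpha d/2) as eta_i -> alpha (the Einstein-ring
# limit). The values alpha = 5 and d = 2 are arbitrary test choices.
import math

def J(n, x, N=800):
    # Integral representation of J_n, evaluated with the trapezoid rule
    h = math.pi / N
    s = 0.5 * (1.0 + math.cos(n * math.pi))
    for i in range(1, N):
        t = i * h
        s += math.cos(n * t - x * math.sin(t))
    return s * h / math.pi

alpha, d = 5.0, 2.0
a = 0.5 * d

def lommel_ratio(eta):
    return (2.0 / ((alpha**2 - eta**2) * a)
            * (alpha * J(0, eta * a) * J(1, alpha * a)
               - eta * J(0, alpha * a) * J(1, eta * a)))

limit = J(0, alpha * a)**2 + J(1, alpha * a)**2
near = lommel_ratio(alpha * (1.0 - 1.0e-4))   # eta_i slightly below alpha
print(near, limit)   # the ratio approaches J0^2 + J1^2
```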
To take the next step, we use well-known approximations for the Bessel functions for large arguments \cite{Abramovitz-Stegun:1965}, given as
\begin{eqnarray}
J_0(x)\simeq \sqrt{\frac{2}{\pi x}}\cos(x-{\textstyle\frac{\pi}{4}})+{\cal O}\big(x^{-1}\big)
\qquad {\rm and} \qquad
J_1(x)\simeq \sqrt{\frac{2}{\pi x}}\sin(x-{\textstyle\frac{\pi}{4}})+{\cal O}\big(x^{-1}\big).
\label{eq:BF}
\end{eqnarray}
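The quality of these asymptotic forms is easily verified numerically; in the sketch below (the test value $x=50$ is an arbitrary choice), the approximations agree with the exact values to better than $10^{-3}$ in absolute terms:

```python
# Quick numerical check of the large-argument asymptotics (eq:BF) for J0, J1.
import math

def J(n, x, N=800):
    # Integral representation J_n(x) = (1/pi) int_0^pi cos(n t - x sin t) dt
    h = math.pi / N
    s = 0.5 * (1.0 + math.cos(n * math.pi))
    for i in range(1, N):
        t = i * h
        s += math.cos(n * t - x * math.sin(t))
    return s * h / math.pi

x = 50.0                                   # assumed test value
j0_exact = J(0, x)
j1_exact = J(1, x)
amp = math.sqrt(2.0 / (math.pi * x))       # common slowly decaying envelope
j0_approx = amp * math.cos(x - math.pi / 4.0)
j1_approx = amp * math.sin(x - math.pi / 4.0)

print(abs(j0_exact - j0_approx))   # small compared to the envelope amp
print(abs(j1_exact - j1_approx))
```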
These approximations lead to the following approximation for (\ref{eq:pow-dirD}), which describes the intensity distribution on the Einstein ring resulting from light originating in the directly imaged region:
{}
\begin{eqnarray}
I_{\tt dir}(\rho_i^{\tt ER}) =B_{\tt s} \Big(\frac{kd^2}{8f}\Big)^2 \frac{4\mu_0}{\pi\alpha^2{\overline z}^2}=B_{\tt s} \Big(\frac{kd^2}{8f}\Big)^2\frac{4}{ k{\overline z}},
\label{eq:pow-dirD2}
\end{eqnarray}
where we used the definitions for $\mu_0$ and $\alpha$ given by (\ref{eq:S_z*6z-mu2}) and (\ref{eq:alpha-mu}), correspondingly.
We note that the intensity distribution of light received from the directly imaged region does not explicitly depend on the Schwarzschild radius of the gravitational lens, which is implicitly encoded in the position of the Einstein ring in the focal plane. In addition, there is no dependence on the distance to the source or on the position of the telescope in the image plane. However, as expected, the distribution strongly depends on the telescope aperture and decreases slowly with increasing heliocentric distance.
\subsection{Amplitude of the EM field received from outside the directly imaged region}
\label{sec:power-blur}
We now consider light originating from the areas within the source that are outside the directly imaged region (Fig.~\ref{fig:regions}), but still deposited in the focal plane of the optical lens because of the PSF (\ref{eq:S_z*6z-mu2}). This process is represented by the complex amplitude ${\cal A}_{\tt blur}({\vec x}_i,{\vec x}_0',{\vec x}')$ in (\ref{eq:amp-total}). To compute ${\cal A}_{\tt blur}$, we again use (\ref{eq:amp-w-fd3*}), but this time, we assume that the directly imaged region is very small compared to the rest of the planet, so that outside the directly imaged region the following inequality holds $|{\vec x}|\ll\beta|{\vec x}'-{\vec x}'_0|$. For most of this region, in (\ref{eq:amp-w-d*}), the Bessel function, $J_0$, may be approximated by taking its asymptotic behavior for large arguments (\ref{eq:BF}), yielding
{}
\begin{eqnarray}
J_0\Big(\alpha|{\vec x}+\beta({\vec x}'-{\vec x}'_0)|\Big)
=\frac{1}{\sqrt{2\pi \alpha |{\vec x}+\beta({\vec x}'-{\vec x}'_0)|}}\Big(e^{i\big(\alpha |{\vec x}+\beta({\vec x}'-{\vec x}'_0)|-\frac{\pi}{4}\big)}+e^{-i\big(\alpha |{\vec x}+\beta({\vec x}'-{\vec x}'_0)|-\frac{\pi}{4}\big)}\Big).
\label{eq:bf0}
\end{eqnarray}
To evaluate (\ref{eq:bf0}), we rely on (\ref{eq:x'})--(\ref{eq:p}), slightly redefined by introducing
{}
\begin{eqnarray}
\{({\vec x}'-{\vec x}'_0)\}={\vec x}''=\rho''{\vec n}''= \rho''(\cos\phi'',\sin\phi'').
\label{eq:coord2}
\end{eqnarray}
Next, given that $|{\vec x}|\ll \beta|{\vec x}'-{\vec x}'_0|$, we expand $ |{\vec x}+\beta({\vec x}'-{\vec x}'_0)|$ to first order in ${\vec x}$:
{}
\begin{eqnarray}
|{\vec x}+\beta({\vec x}'-{\vec x}'_0)|=\beta |{\vec x}'-{\vec x}'_0|+({\vec x}\cdot {\vec n}'') +{\cal O}(\rho^2)=\beta \rho''+\rho\cos(\phi-\phi'') +{\cal O}(\rho^2).
\label{eq:mod}
\end{eqnarray}
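The accuracy of this first-order expansion may be illustrated numerically (the numbers below are arbitrary test choices satisfying $\rho\ll\beta\rho''$); the residual is indeed of second order, bounded by $\rho^2/\beta\rho''$:

```python
# Numerical illustration of the first-order expansion (eq:mod): for
# |x| << beta*rho'', the error of
# |x + beta x''| ~ beta rho'' + rho cos(phi - phi'')
# is of order rho^2 / (beta rho''). All values are assumed test choices.
import math

beta_rho = 100.0          # beta * rho''
phi_pp   = 0.7            # phi''
rho, phi = 1.5, 2.1       # a sample aperture point

# exact |x + beta x''|
vx = rho * math.cos(phi) + beta_rho * math.cos(phi_pp)
vy = rho * math.sin(phi) + beta_rho * math.sin(phi_pp)
exact = math.hypot(vx, vy)

approx = beta_rho + rho * math.cos(phi - phi_pp)
err = abs(exact - approx)
print(err, rho**2 / beta_rho)   # err is bounded by the second-order scale
```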
With these definitions, the double integral (\ref{eq:amp-w-d*}) takes the form
{}
\begin{eqnarray}
\iint\displaylimits_{|{\vec x}|^2\leq (\frac{1}{2}d)^2}\hskip -8pt
d^2{\vec x}
J_0\Big(\alpha|{\vec x}+\beta({\vec x}'-{\vec x}'_0)|\Big) e^{-i\frac{k}{f}({\vec x}\cdot{\vec x}_i)}&=&\frac{1}{\sqrt{2\pi \alpha \beta \rho''}}
\int_0^{2\pi} \hskip -8pt d\phi \int_0^{d/2} \hskip -8pt \rho d\rho\,
\Big(1-\frac{\rho\cos(\phi-\phi'')}{\beta \rho''}\Big)\times\nonumber\\
&&\hskip -110pt
\times\,
\Big(e^{i\big(\alpha \beta \rho''-\frac{\pi}{4}+\alpha \rho\cos(\phi-\phi'')\big)}+e^{-i\big(\alpha \beta \rho''-\frac{\pi}{4}+\alpha \rho\cos(\phi-\phi'')\big)}\Big)e^{-i\rho \eta_i\cos(\phi-\phi_i)}+{\cal O}(\rho^2).~~~
\label{eq:am-3*2*}
\end{eqnarray}
The phases of these two integrals may be given as
{}
\begin{eqnarray}
\varphi_\pm({\vec x})&=&
\pm(\alpha \beta \rho''-{\textstyle\frac{\pi}{4}})+u_\pm\,\rho\cos\big(\phi-\epsilon_\pm\big)+{\cal O}(\rho^2),
\label{eq:ph4}
\end{eqnarray}
where $u_\pm$ has the form
{}
\begin{eqnarray}
u_\pm=\sqrt{\alpha^2\mp2\alpha\eta_i\cos\big(\phi''-\phi_i\big)+\eta_i^2},
\label{eq:upm}
\end{eqnarray}
and the angles $\epsilon_\pm$ are given by the following relationships:
{}
\begin{eqnarray}
\cos\epsilon_\pm=\frac{\pm\alpha \cos\phi''-\eta_i\cos\phi_i}{u_\pm}, \qquad
\sin\epsilon_\pm=\frac{\pm\alpha \sin\phi''-\eta_i\sin\phi_i}{u_\pm}.
\label{eq:eps}
\end{eqnarray}
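These definitions may be checked numerically: the vector with components $(\pm\alpha\cos\phi''-\eta_i\cos\phi_i,\,\pm\alpha\sin\phi''-\eta_i\sin\phi_i)$ has norm $u_\pm$, and the combined phase term $\pm\alpha\cos(\phi-\phi'')-\eta_i\cos(\phi-\phi_i)$ collapses to $u_\pm\cos(\phi-\epsilon_\pm)$. A sketch, with arbitrary numeric test values:

```python
# Numerical consistency check of Eqs. (upm) and (eps): the defining vector
# has norm u_pm, and the combined phase term equals u_pm * cos(phi - eps_pm).
import math

alpha, eta_i = 4.3, 2.6    # assumed test values
phi_pp, phi_i = 0.9, 2.4   # phi'' and phi_i
phi = 1.2                  # azimuth of a sample aperture point

checks = []
for sign in (+1.0, -1.0):
    # u_pm from Eq. (upm); sign = +1 corresponds to the upper sign
    u = math.sqrt(alpha**2 - sign * 2.0 * alpha * eta_i
                  * math.cos(phi_pp - phi_i) + eta_i**2)
    # components defining eps_pm in Eq. (eps)
    cx = sign * alpha * math.cos(phi_pp) - eta_i * math.cos(phi_i)
    cy = sign * alpha * math.sin(phi_pp) - eta_i * math.sin(phi_i)
    eps = math.atan2(cy, cx)
    norm_err = abs(math.hypot(cx, cy) - u)
    # phase identity: sign*alpha*cos(phi-phi'') - eta_i*cos(phi-phi_i)
    lhs = sign * alpha * math.cos(phi - phi_pp) - eta_i * math.cos(phi - phi_i)
    rhs = u * math.cos(phi - eps)
    checks.append((norm_err, abs(lhs - rhs)))

print(checks)   # all discrepancies at machine precision
```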
With this, the two integrals present in (\ref{eq:am-3*2*}) may be evaluated as
{}
\begin{eqnarray}
I^\pm({\vec x}_i, {\vec x}'')&=& \pi\Big(\frac{d}{2}\Big)^2\frac{1}{\sqrt{2\pi \alpha \beta \rho''}} e^{\pm i\big(\alpha \beta \rho''-{\textstyle\frac{\pi}{4}}\big)}\Big\{\Big( \frac{2
J_1\big(u_\pm\frac{1}{2}d\big)}{u_\pm\frac{1}{2}d}\Big)-i\frac{d\cos(\phi''-\epsilon_\pm)}{2\beta \rho''}\Big(\frac{2J_2\big(u_\pm\frac{1}{2}d\big)}{u_\pm\frac{1}{2}d}\Big)\Big\}.
\label{eq:I120}
\end{eqnarray}
Substituting expressions (\ref{eq:I120}) in (\ref{eq:am-3*2*}) and then using the result in (\ref{eq:amp-w-fd3*}), we derive the amplitude ${\cal A}_{\tt blur}({\vec x}_i,{\vec x}'_0,{\vec x}')$:
{ }
\begin{eqnarray}
{\cal A}_{\tt blur}({\vec x}_i,{\vec x}'')&=&
ie^{ikf(1+{{\vec x}_i^2}/{2f^2})}
\Big(\frac{kd^2}{8f}\Big) \frac{\sqrt{\mu_0}}{\sqrt{2\pi \alpha \beta \rho''}} \times\nonumber\\
&&\hskip 0pt\times\,
\bigg\{e^{i\big(\alpha \beta \rho''-{\textstyle\frac{\pi}{4}}\big)}
\Big[\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)-i\frac{d\cos(\phi''-\epsilon_+)}{2\beta \rho''}\Big(\frac{2J_2\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)\Big]+
\nonumber\\
&&\hskip 20pt\,
+e^{-i\big(\alpha \beta \rho''-{\textstyle\frac{\pi}{4}}\big)}
\Big[\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)-i\frac{d\cos(\phi''-\epsilon_-)}{2\beta \rho''}\Big(\frac{2J_2\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)\Big]
\bigg\}.
\label{eq:amp-blur3}
\end{eqnarray}
We may now compute the Poynting vector of a plane wave originating from outside the directly imaged region, traveling through the gravitational field in the vicinity of the Sun, arriving in the focal plane of an imaging telescope. For this, similarly to the derivation of (\ref{eq:Pv1}), we substitute (\ref{eq:amp-blur3}) into (\ref{eq:Pv}). After temporal averaging, we obtain the following expression (similar to that obtained in \cite{Turyshev-Toth:2019-image} for point sources):
{}
\begin{eqnarray}
S_{\tt blur}({\vec x}_i,{\vec x}'')&=&\frac{c}{8\pi}
{E}_{\tt blur}^2({\vec x}')
\Big(\frac{kd^2}{8f}\Big)^2\frac{\mu_0}{2\pi \alpha \beta \rho''}
\times\nonumber\\
&&\hskip -40pt \times\,
\bigg\{\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)^2+\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)^2+
2\sin(2\alpha\beta\rho'')\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)-\nonumber\\
&-&
\frac{d\cos(2\alpha\beta\rho'')}{\beta\rho''}\Big\{
\frac{\alpha-\eta_i\cos(\phi''-\phi_i)}{u_+}\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)\Big( \frac{2
J_2\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)+
\nonumber\\
&&\hskip 80pt +\,
\frac{\alpha+\eta_i\cos(\phi''-\phi_i)}{u_-}\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)\Big( \frac{2
J_2\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)\Big\}+{\cal O}\Big(\frac{d^2}{\beta^2\rho''^2}\Big)
\bigg\}.
\label{eq:Sblur}
\end{eqnarray}
Since the ratio $d/(\beta\rho'')$ is very small outside the directly imaged region, we may neglect the corresponding terms in the expression above. Substituting the result in (\ref{eq:psf}), we compute the PSF for the SGL's blur for a resolved source:
{}
\begin{eqnarray}
\mu_{\tt blur}({\vec x}_i,{\vec x}'')=\frac{\mu_0}{2\pi \alpha \beta \rho''} \Big(\frac{kd^2}{8f}\Big)^2
\bigg\{\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)^2+\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)^2+
2\sin(2\alpha\beta\rho'')\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)
\bigg\}.~~
\label{eq:psf-bl1}
\end{eqnarray}
Using this result (\ref{eq:psf-bl1}) in (\ref{eq:power}), we derive the expression that may be used to determine the intensity distribution for the signal received from the area outside the directly imaged region:
{}
\begin{eqnarray}
I_{\tt blur}({\vec x}_i,{\vec x}_0) =\frac{1}{z^2_0}
\iint d^2{\vec x}'' B_{\tt s}({\vec x}'') \mu_{\tt blur}({\vec x}_i,{\vec x}'').
\label{eq:pow-blur}
\end{eqnarray}
This integral must be evaluated for two different regions, corresponding to the telescope pointing within the image and outside of it, as was done in \cite{Turyshev-Toth:2019-blur}, where we considered the photometric signal (i.e., the power of the signal just before the telescope's aperture).
\subsubsection{Intensity distribution for light from outside of the directly imaged region}
\label{sec:blur-in}
Expression (\ref{eq:pow-blur}) allows us to compute the signal received from the area of the resolved source lying outside the directly imaged region. To do that, we introduce a new coordinate system in the source plane, ${\vec x}''$, with the origin at the center of the directly imaged region: ${\vec x}'-{\vec x}'_0={\vec x}''$. As the vector ${\vec x}'_0$ is constant, $dx'dy'=dx''dy''$. Next, in the new coordinate system, we use polar coordinates $(x'',y'')\rightarrow (\rho'',\phi'')$. In these coordinates, the circular edge of the source of radius $R_\oplus$ is no longer described by a constant radial coordinate but by a curve, $\rho_\oplus(\phi'')$, whose radial distance is given by the following relation:
{}
\begin{eqnarray}
\rho_\oplus(\phi'')
&=&\sqrt{R_\oplus^2-{\rho'_0}^2\sin^2\phi''}-\rho'_0\cos\phi''.
\label{eq:rho+}
\end{eqnarray}
For an actual astrophysical source, $B_{\tt s}({\vec x}')$ is, of course, an arbitrary function of the coordinates ${\vec x}'$, and thus the integral can only be evaluated numerically. However, we can obtain an analytic result in the simple case of a disk of uniform brightness, characterized by $B_{\tt s}({\vec x}') =B_{\tt s}$. In this case, we integrate (\ref{eq:pow-blur}):
{}
\begin{eqnarray}
I_{\tt blur}({\vec x}_i,{\vec x}_0) &=&\frac{1}{z^2_0}
\int_0^{2\pi} \hskip -6pt d\phi'' \int_{\textstyle\frac{D}{2}}^{\rho_\oplus}\hskip -4pt \rho'' d\rho''
B_{\tt s}({\vec x}'') \mu_{\tt blur}({\vec x}_i,{\vec x}'')=\frac{B_{\tt s}}{z^2_0} \Big(\frac{kd^2}{8f}\Big)^2\frac{\mu_0}{2\pi \alpha \beta} \times\nonumber\\
&&\hskip-40pt
\times\,
\int_0^{2\pi} \hskip -8pt d\phi'' \int_{\textstyle\frac{D}{2}}^{\rho_\oplus}\hskip -4pt d\rho''
\bigg\{\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)^2+\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)^2+
2\sin(2\alpha\beta\rho'')\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)
\bigg\}.
\label{eq:P-blur*=0}
\end{eqnarray}
The integral over $d\rho''$ in (\ref{eq:P-blur*=0}) can be easily evaluated, resulting in
{}
\begin{eqnarray}
I_{\tt blur}({\vec x}_i,{\vec x}_0) &=&
\frac{B_{\tt s}}{{\overline z}^2} \Big(\frac{kd^2}{8f}\Big)^2\frac{\mu_0 d}{2\alpha} \times
\nonumber\\
&&\hskip-50pt\times\,
\bigg\{\frac{1}{2\pi}
\int_0^{2\pi} \hskip -8pt d\phi'' \bigg(\frac{2r_\oplus}{d}
\Big(\sqrt{1-\big(\frac{\rho_0}{r_\oplus}\big)^2\sin^2\phi''}-\frac{\rho_0}{r_\oplus}\cos\phi''\Big)-1\bigg)
\bigg(\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)^2+\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)^2\bigg)-\nonumber\\
&&\hskip-60pt
-\,\frac{2}{\alpha d}\,\frac{1}{2\pi}
\int_0^{2\pi} \hskip -8pt d\phi'' \Big(\cos\Big[2\alpha r_\oplus
\Big(\sqrt{1-\big(\frac{\rho_0}{r_\oplus}\big)^2\sin^2\phi''}-\frac{\rho_0}{r_\oplus}\cos\phi''\Big)\Big]-\cos[\alpha d]\Big)\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)
\bigg\}.~~~
\label{eq:P-blur*}
\end{eqnarray}
We observe that the ratios involving the Bessel functions in expression (\ref{eq:P-blur*}) reach their maximum value of 1 at vanishing argument, $2J_1(x)/x\big|_{x=0}=1$. Given that the spatial frequency $\alpha$ is quite high, for most values of the argument these ratios are negligible. In addition, the last term in this expression is at most $\propto 2/\alpha d$, which is negligibly small even compared to the smaller term of the first integral (i.e., the term that does not contain $r_\oplus$). Therefore, the last term can be omitted, and expression (\ref{eq:P-blur*}) takes the form
{}
\begin{eqnarray}
I_{\tt blur}({\vec x}_i,{\vec x}_0) &=&
\frac{B_{\tt s}}{{\overline z}^2} \Big(\frac{kd^2}{8f}\Big)^2\frac{\mu_0 d}{2\alpha}
\frac{1}{2\pi}
\int_0^{2\pi} \hskip -8pt d\phi'' \bigg(\frac{2r_\oplus}{d}
\sqrt{1-\big(\frac{\rho_0}{r_\oplus}\big)^2\sin^2\phi''}-1\bigg)
\bigg(\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)^2+\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)^2\bigg),~~~~~
\label{eq:P-blur*2*}
\end{eqnarray}
where we obtained the final form of the equation by dropping the $(\rho_0/r_\oplus)\cos\phi''$ term in the first integral in (\ref{eq:P-blur*}), as this term, multiplied by the squared Bessel-function terms that have the same periodicity by virtue of the dependence of $u_\pm$ on $\phi''$, vanishes identically when integrated over a full $2\pi$ period.
Expression (\ref{eq:P-blur*2*}) describes the blur contribution to the intensity distribution in the focal plane, corresponding to the image of an object of uniform brightness. Fig.~\ref{fig:images} (center) shows the characteristic behavior captured by this expression. This result is in good agreement with a similar one given by Eq.~(33) of \cite{Turyshev-Toth:2019-image}, but extends the latter to the case of an extended, resolved source positioned at a large but finite distance from the SGL. Considering the terms remaining in (\ref{eq:P-blur*2*}), we note that the spatial frequency $u_\pm$, as a function of $\phi''$ and $\phi_i$, is given by expression (\ref{eq:upm}) as $u_\pm=(\alpha^2\mp2\alpha\eta_i\cos\big(\phi_i-\phi''\big)+\eta_i^2)^\frac{1}{2}$. To study the behavior of $I_{\tt blur}({\vec x}_i,{\vec x}_0)$ at the Einstein ring, we take the limit $\eta_i\rightarrow \alpha$ to present the ratios of the Bessel functions as
{}
\begin{eqnarray}
\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)^2+\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)^2 ~~~\rightarrow~~~
\Big( \frac{2
J_1\big(\alpha d\sin{\textstyle\frac{1}{2}}(\phi_i-\phi'')\big)}{\alpha d\sin{\textstyle\frac{1}{2}}(\phi_i-\phi'')}\Big)^2+\Big( \frac{2
J_1\big(\alpha d\cos{\textstyle\frac{1}{2}}(\phi_i-\phi'')\big)}{\alpha d\cos{\textstyle\frac{1}{2}}(\phi_i-\phi'')}\Big)^2.
\label{eq:Bess-rat}
\end{eqnarray}
Given that $\alpha d \gg 1$, these expressions show that for any value of $\phi''$, the ratios peak sharply at the values of $\phi_i$ for which $u_\pm=0$, so that the arguments of the Bessel functions vanish. When this happens, the ratios of the Bessel functions reach their maximal value of 1, resulting in two peaks positioned at the azimuthal angles $\phi_i=\phi''$ and $\phi_i=\phi''+\pi$ (a similar observation was made in \cite{Turyshev-Toth:2019-image}).
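This two-peak structure is easily reproduced numerically. The sketch below (with an assumed value $\alpha d=50$ and a quadrature-based $J_1$) evaluates the bracket in (\ref{eq:Bess-rat}) at the two peaks and midway between them:

```python
# Numerical illustration of the two azimuthal peaks on the Einstein ring
# described by (eq:Bess-rat). The value alpha*d = 50 is an assumption.
import math

def J1(x, N=800):
    # Integral representation J_1(x) = (1/pi) int_0^pi cos(t - x sin t) dt
    h = math.pi / N
    s = 0.5 * (1.0 + math.cos(math.pi))
    for i in range(1, N):
        t = i * h
        s += math.cos(t - x * math.sin(t))
    return s * h / math.pi

def airy_ratio(x):
    # 2 J1(x)/x, with its limiting value 1 at x = 0
    return 1.0 if abs(x) < 1e-9 else 2.0 * J1(x) / x

alpha_d = 50.0    # alpha * d (assumed)
phi_pp = 0.6      # phi'' of the source point (assumed)

def ring_intensity(phi_i):
    half = 0.5 * (phi_i - phi_pp)
    return (airy_ratio(alpha_d * math.sin(half))**2
            + airy_ratio(alpha_d * math.cos(half))**2)

peak1 = ring_intensity(phi_pp)                 # phi_i = phi''
peak2 = ring_intensity(phi_pp + math.pi)       # phi_i = phi'' + pi
side  = ring_intensity(phi_pp + math.pi / 2)   # midway between the peaks
print(peak1, peak2, side)   # two unit-height peaks, tiny in between
```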
This observation greatly simplifies (\ref{eq:P-blur*2*}) (and (\ref{eq:P-blur*})), resulting in the following compact form for the intensity distribution for light received from the Einstein ring in the focal plane of the telescope:
{}
\begin{eqnarray}
I_{\tt blur}(\rho_i^{\tt ER},\rho_0) &=&
\frac{B_{\tt s}}{{\overline z}^2} \Big(\frac{kd^2}{8f}\Big)^2\frac{\mu_0 d}{\alpha}
\Big(\frac{2r_\oplus}{d}\epsilon(\rho_0)-1\Big)=\pi
B_{\tt s} \Big(\frac{kd^2}{8f}\Big)^2\frac{d}{{\overline z}}
\sqrt{\frac{2r_g}{{\overline z}}}\Big(\frac{2r_\oplus}{d}\epsilon(\rho_0)-1\Big),
\label{eq:P-blur*2}
\end{eqnarray}
where the blur factor $\epsilon(\rho_0)$ is given by the following expression \cite{Turyshev-Toth:2019-blur} (see also Fig.~\ref{fig:epsilon-beta}):
{}
\begin{eqnarray}
\epsilon(\rho_0)
&=&\frac{1}{2\pi}\int_0^{2\pi} \hskip -3pt d\phi''\sqrt{1-\Big(\frac{\rho_0}{r_\oplus}\Big)^2\sin^2\phi''}=\frac{2}{\pi}{\tt E}\Big[\Big(\frac{\rho_0}{r_\oplus}\Big)^2\Big],
\label{eq:eps_r0}
\end{eqnarray}
where ${\tt E}[x]$ is the elliptic integral \cite{Abramovitz-Stegun:1965}.
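The blur factor is straightforward to evaluate numerically; the sketch below computes the angular average in (\ref{eq:eps_r0}) by direct quadrature and recovers the two known endpoint values, $\epsilon(0)=1$ and $\epsilon(r_\oplus)=2/\pi$:

```python
# Numerical sketch of the blur factor (eq:eps_r0): direct quadrature of the
# angular average, checked against the known endpoint values.
import math

def eps_blur(ratio, N=20000):
    # ratio = rho_0 / r_earth, the fractional off-center telescope position
    h = 2.0 * math.pi / N
    s = 0.0
    for i in range(N):
        phi = (i + 0.5) * h   # midpoint rule
        s += math.sqrt(1.0 - (ratio * math.sin(phi))**2)
    return s * h / (2.0 * math.pi)

e0, e_half, e1 = eps_blur(0.0), eps_blur(0.5), eps_blur(1.0)
print(e0, e_half, e1)   # 1 at the center, 2/pi at the edge
```

The factor decreases monotonically as the telescope moves from the center of the image toward its edge, consistent with Fig.~\ref{fig:epsilon-beta}.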
As a result, we see that the intensity distribution describing the signal received in the focal plane of the telescope is given as the sum of the intensity received from the directly imaged region (\ref{eq:pow-dirD2}) and that received from the rest of the source (\ref{eq:P-blur*2}) (similarly to the result derived in \cite{Turyshev-Toth:2019-blur} for photometric signals), which takes the form
{}
\begin{eqnarray}
I_{\tt source}(\rho_i^{\tt ER}, \rho_0)&=&I_{\tt dir}(\rho_i^{\tt ER},0)+I_{\tt blur}(\rho_i^{\tt ER},\rho_0)=\nonumber\\
&=&\pi
B_{\tt s}\Big(\frac{kd^2}{8f}\Big)^2\Big\{\frac{2R_\oplus}{z_0}\sqrt{\frac{2r_g}{\overline z}}\epsilon(\rho_0)+\frac{4}{\pi k{\overline z}}-\frac{d}{{\overline z}}
\sqrt{\frac{2r_g}{{\overline z}}}\Big\}\simeq \pi
B_{\tt s}\Big(\frac{kd^2}{8f}\Big)^2\frac{2R_\oplus}{z_0}\sqrt{\frac{2r_g}{\overline z}}\epsilon(\rho_0),
\label{eq:dir+blur}
\end{eqnarray}
where we neglected the two terms in the middle expression, as their magnitudes are negligible in comparison to the leading term.
\subsubsection{Blur at an off-image telescope position}
\label{sec:extend-photo-vic}
As discussed in \cite{Turyshev-Toth:2019-blur}, in the case of the SGL, blur from an extended source is present even outside the direct image of the source. Therefore, even a telescope positioned at $\rho_0\geq r_\oplus$ will receive light from the source. In this case, the blur for the off-image position, $I_{\tt off}({\vec x}_0)$, is obtained by integrating (\ref{eq:pow-blur}) over the surface of the source as it is seen from an off-image coordinate system.
The same conditions used to derive (\ref{eq:P-blur*}) remain valid, so the power received by the telescope takes the same form. The only difference comes from the fact that we are outside the image; thus, the integration limits change. First, we note that the circular edge of the source, $R_\oplus$, is described by a curve, $\rho_\oplus(\phi'')$, whose radial distance in this polar coordinate system is given as
\begin{eqnarray}
\rho_\oplus(\phi'') &=&\pm \sqrt{R_\oplus^2-{\rho'_0}^2\sin^2\phi''}+\rho'_0\cos\phi'',
\label{eq:rho++}
\end{eqnarray}
with the angle $\phi''$ in this case defined so that $\phi''=0$ when pointing at the center of the source. The angle $\phi''$ varies only within the range $\phi''\in [\phi_-,\phi_+]$, with $\phi_\pm=\pm \arcsin ({R_\oplus}/{\rho'_0})$. Given the sign in front of the square root in (\ref{eq:rho++}), for any angle $\phi''$ there will be two solutions for $\rho_\oplus(\phi'')$, given as $(\rho^-_\oplus,\rho^+_\oplus)$.
\begin{figure}
\rotatebox{90}{\hskip 40pt {\footnotesize $\epsilon(\rho_0)+\beta(\rho_0)$}}
\includegraphics[width=0.40\linewidth]{int_elliptic-both-data}
\caption{\label{fig:epsilon-beta}Combined behavior of $\epsilon(\rho_0)$ (\ref{eq:eps_r0}), for $0\leq \rho_0/r_\oplus\leq 1$ (solid red curve) and $\beta(\rho_0)$ (\ref{eq:beta_r0}), for $\rho_0/r_\oplus\geq 1$ (dashed green curve). The horizontal axis is in units of $\rho_0/r_\oplus$. The dots represent values obtained from a numerical simulation of (\ref{eq:power}) with (\ref{eq:amp-w-d*}).
}
\end{figure}
Assuming that the brightness of the source in this region is uniform, $B(x',y')=B_{\tt s}$, we use (\ref{eq:rho++}) and evaluate (\ref{eq:pow-blur}) for this set of conditions:
{}
\begin{eqnarray}
I_{\tt off}({\vec x}_i,{\vec x}_0) &=&\frac{1}{z^2_0}
\int_{\phi_-}^{\phi_+} \hskip -3pt d\phi''
\int_{\rho^-_\oplus}^{\rho^+_\oplus}\hskip -3pt \rho'' d\rho''
B_{\tt s}({\vec x}'') \mu_{\tt blur}({\vec x}_i,{\vec x}'')=\frac{B_{\tt s}}{z^2_0} \Big(\frac{kd^2}{8f}\Big)^2\frac{\mu_0}{2\pi \alpha \beta} \times\nonumber\\
&&\hskip-40pt
\times\,
\int_{\phi_-}^{\phi_+} \hskip -3pt d\phi''
\int_{\rho^-_\oplus}^{\rho^+_\oplus}\hskip -3pt d\rho''
\bigg\{\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)^2+\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)^2+
2\sin(2\alpha\beta\rho'')\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)
\bigg\}.~~~
\label{eq:P-blur*off}
\end{eqnarray}
The integral over $d\rho''$ can be easily evaluated, resulting in
{}
\begin{eqnarray}
I_{\tt off}({\vec x}_i,{\vec x}_0) &=&
\frac{B_{\tt s}}{z_0^2} \Big(\frac{kd^2}{8f}\Big)^2\frac{2\mu_0}{\alpha\beta} \times
\nonumber\\
&&\hskip-70pt\times\,
\bigg\{\frac{R_\oplus}{2\pi}
\int_{\phi_-}^{\phi_+} \hskip -8pt d\phi'' \Big(\sqrt{1-\big(\frac{{\rho}_0}{r_\oplus}\big)^2\sin^2\phi''}\Big)
\bigg(\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)^2+\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)^2\bigg)-\nonumber\\
&&\hskip-55pt
-\,\frac{1}{\alpha \beta}\,\frac{1}{2\pi}
\int_{\phi_-}^{\phi_+} \hskip -8pt d\phi''\sin\big(2\alpha \rho_0\cos\phi''\big)
\sin\Big[2\alpha r_\oplus
\Big(\sqrt{1-\big(\frac{\rho_0}{r_\oplus}\big)^2\sin^2\phi''}\Big)\Big]\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)
\bigg\}.
\label{eq:P-blur*off4}
\end{eqnarray}
Similarly to the approach that we used in evaluating the magnitude of the terms in (\ref{eq:P-blur*}), we may drop the second term in this expression, transforming (\ref{eq:P-blur*off4}) into
{}
\begin{eqnarray}
I_{\tt off}({\vec x}_i,{\vec x}_0) &=&
\frac{B_{\tt s}}{z_0^2} \Big(\frac{kd^2}{8f}\Big)^2\frac{2\mu_0R_\oplus}{\alpha\beta}
\frac{1}{2\pi}
\int_{\phi_-}^{\phi_+} \hskip -8pt d\phi'' \Big(\sqrt{1-\Big(\frac{\rho_0}{r_\oplus}\Big)^2\sin^2\phi''}\Big)
\bigg(\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)^2+\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)^2\bigg).~~~
\label{eq:P-blur*off4*}
\end{eqnarray}
Fig.~\ref{fig:images} (right) shows the behavior captured in this expression, which is characterized by two peaks of light deposited at the Einstein ring. Such behavior is expected for sources of light external to the target, including its parent star. Specifically, the light from the parent star is not a significant source of light contamination, as its signal will be deposited in just two compact spots on the image plane (as seen in Fig.~\ref{fig:images} (right)), which can be easily blocked.
Next, using arguments similar to those that led to result (\ref{eq:Bess-rat}) (but taking only one of the ratios), we present (\ref{eq:P-blur*off4*}) as
{}
\begin{eqnarray}
I_{\tt off}(\rho_i^{\tt ER},\rho_0) &=&\pi B_{\tt s}
\Big(\frac{kd^2}{8f}\Big)^2\frac{2R_\oplus}{z_0} \frac{\mu_0}{\pi\alpha \overline z} \beta(\rho_0)=\pi
B_{\tt s} \Big(\frac{kd^2}{8f}\Big)^2\frac{2R_\oplus}{z_0} \sqrt{\frac{2r_g} {\overline z}}\beta(\rho_0),
\label{eq:P-blur*off4=}
\end{eqnarray}
with the factor $\beta (\rho_0)$ given by the following expression:
{}
\begin{eqnarray}
\beta (\rho_0)
&=&\frac{1}{\pi}\int_{\phi_-}^{\phi_+} \hskip -3pt d\phi''\sqrt{1-\Big(\frac{\rho_0}{r_\oplus}\Big)^2\sin^2\phi''}=\frac{2}{\pi}{\tt E}\Big[\arcsin \frac{r_\oplus}{\rho_0},\Big(\frac{\rho_0}{r_\oplus}\Big)^2\Big],
\label{eq:beta_r0}
\end{eqnarray}
where ${\tt E}[a,x]$ is the incomplete elliptic integral \cite{Abramovitz-Stegun:1965}. This result is also similar to that obtained for the case of photometric imaging with the SGL discussed in \cite{Turyshev-Toth:2019-blur}. The combined behavior of this factor and $\epsilon(\rho_0)$ (given by Eq.~(\ref{eq:eps_r0})) is shown in Fig.~\ref{fig:epsilon-beta}.
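A direct quadrature of (\ref{eq:beta_r0}) (which sidesteps any ambiguity in incomplete-elliptic-integral conventions) confirms that $\beta(\rho_0)$ joins $\epsilon(\rho_0)$ continuously at the limb, $\rho_0=r_\oplus$, where both equal $2/\pi$, and then falls off monotonically, as seen in Fig.~\ref{fig:epsilon-beta}:

```python
import numpy as np
from scipy.integrate import quad

def beta_quad(x):
    """Off-image factor beta(rho_0) by quadrature of Eq. (eq:beta_r0); x = rho_0/r_earth >= 1."""
    phi_max = np.arcsin(1.0 / x)
    val, _ = quad(lambda p: np.sqrt(max(1.0 - x**2 * np.sin(p)**2, 0.0)),
                  -phi_max, phi_max)
    return val / np.pi

# Continuity at the limb: beta(1) = 2/pi, the value of eps(rho_0) at rho_0 = r_earth.
assert abs(beta_quad(1.0) - 2.0 / np.pi) < 1e-6
# The factor falls off monotonically as the telescope moves away from the image:
vals = [beta_quad(x) for x in (1.0, 1.5, 2.0, 4.0)]
assert all(a > b for a, b in zip(vals, vals[1:]))
print("beta(rho_0) check passed")
```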
Expressions (\ref{eq:dir+blur}) and (\ref{eq:P-blur*off4=}) are our main results that may be used to evaluate the signals expected for imaging with the SGL. They describe the intensity distribution in the focal plane of an imaging telescope positioned in the image plane in the strong interference region of the SGL. As such, these results are helpful for the ongoing instrument and mission design studies \cite{Turyshev-etal:2018}.
\section{Image formation in the geometric optics and weak interference regions}
\label{sec:weak-int}
As the optical telescope is moved farther away from the optical axis, it enters the weak interference region and eventually the region of geometric optics. It is important to study the image formation process in these regions, as modeling the magnitude of the signals detected there is useful for developing realistic SNR estimates that account for background noise. These models can also be used in the development of autonomous navigation algorithms required to navigate a space-based telescope towards the SGL's optical axis with respect to an imaging target such as an exoplanet \cite{Turyshev-etal:2018}.
\subsection{EM field in the geometric optics and weak interference regions}
\label{sec:EM-field-gowi}
The solution for the EM field in the geometric optics and weak interference regions consists of a combination of the gravity-modified incident wave and also the scattered wave that results from the diffraction of the incident wave on the solar gravity field \cite{Turyshev-Toth:2017,Turyshev-Toth:2019-extend}. Following the approach presented in \cite{Turyshev-Toth:2017,Turyshev-Toth:2019-extend,Turyshev-Toth:2019-image}, we use the method of stationary phase to develop a solution for the incident and scattered EM fields that in the spherical coordinate system $(r,\theta,\phi)$, to the order of ${\cal O}\big(r_g^2, \theta^2, \sqrt{2r_g\tilde r}/z_0\big)$, take the form
{}
\begin{eqnarray}
\left( \begin{aligned}
{D}_\theta& \\
{B}_\theta& \\
\end{aligned} \right)_{\tt \hskip -2pt in/sc} = \left( \begin{aligned}
{B}_\phi& \\
-{D}_\phi& \\
\end{aligned} \right)_{\tt \hskip -2pt in/sc}&=&
\frac{E_0}{z_0}
{\cal A}_{\tt in/sc} (\tilde r,\theta) e^{i\big(k(r+r_0+r_g\ln 4k^2rr_0)-\omega t\big)}
\left( \begin{aligned}
\cos \phi& \\
\sin \phi& \\
\end{aligned} \right),
\label{eq:DB-sol-in}
\end{eqnarray}
with the complex amplitudes ${\cal A}_{\tt in}$ and ${\cal A}_{\tt sc}$ (shorthanded as ${\cal A}_{\tt in/sc}$, with the upper and lower signs corresponding to the ``{\tt in}'' and ``{\tt sc}'' waves, respectively) given as
{}
\begin{eqnarray}
{\cal A}_{\tt in/sc}(\tilde r,\theta)&=&
a_{\tt in/sc}(\tilde r,\theta)
\exp\Big[{-ik\Big\{{\textstyle\frac{1}{4}} \theta \big(\tilde r \theta\pm \sqrt{\tilde r^2 \theta^2+8r_g \tilde r}\big)-r_g+2r_g\ln {\textstyle\frac{1}{2}} k \big(\tilde r \theta\pm \sqrt{\tilde r^2 \theta^2+8r_g \tilde r}\big)\Big\}}\Big],
\label{eq:DB-sol-inA}
\end{eqnarray}
where the real-valued amplitude factors $a_{\tt in}$ and $a_{\tt sc}$ have the form
{}
\begin{eqnarray}
a^2_{\tt in/sc}(\tilde r,\theta)&=&
\frac{\big({\textstyle\frac{1}{2}} (\sqrt{1+{8r_g}/{\tilde r\theta^2}}\pm1)\big)^2}{\sqrt{1+{8r_g}/{\tilde r\theta^2}}},
\label{eq:a_insc*}
\end{eqnarray}
with the radial components of both EM waves behaving as $({E}_r, {H}_r)_{\tt \hskip 0pt in/sc} \sim {\cal O}({\rho}/{r},\sqrt{2r_g\tilde r}/r_0)$. Also, the effective distance $\tilde r$ is given as $\tilde r=z_0{\overline z}/(z_0+{\overline z})$ (see details in \cite{Turyshev-Toth:2019-extend}). Note that for large angles $\theta\gg \sqrt{2r_g/\tilde r}$, expression (\ref{eq:a_insc*}) results in the known forms of the amplitude factors $a^2_{\tt in}(\tilde r,\theta)=1+{\cal O}(r_g\theta^2,r_g^2)$ and $a^2_{\tt sc}(\tilde r,\theta) =(2r_g/\tilde r\theta^2)^2 +{\cal O}(r_g\theta^2,r_g^2)$, see \cite{Turyshev-Toth:2019-extend}. However, expression (\ref{eq:a_insc*}) allows studying the case when $\theta\simeq \sqrt{2r_g/\tilde r}$.
Since we are concerned with the EM field in the image plane, it is convenient to transform solution (\ref{eq:DB-sol-in}) to cylindrical coordinates $(\rho,\phi,z)$, as was done in \cite{Turyshev-Toth:2017,Turyshev-Toth:2019-extend}. As a result, the components of this EM field, to ${\cal O}(r_g^2, \theta^2)$, take the form
{}
\begin{eqnarray}
\left( \begin{aligned}
{E}_\rho& \\
{H}_\rho& \\
\end{aligned} \right)_{\tt \hskip -2pt in/sc} = \left( \begin{aligned}
{H}_\phi& \\
-{E}_\phi& \\
\end{aligned} \right)_{\tt \hskip -2pt in/sc}&=&
\frac{E_0}{z_0}
{\cal A}_{\tt in/sc}\big(\tilde r,\theta \big)e^{i\big(k(r+r_0+r_g\ln k^2rr_0)-\omega t\big)}
\left( \begin{aligned}
\cos \phi& \\
\sin \phi& \\
\end{aligned} \right),
\label{eq:DB-sol-in-cc}
\end{eqnarray}
where the $z$-components of the EM waves behave as $({E}_z, {H}_z)_{\tt \hskip 0pt in/sc} \sim {\cal O}({\rho}/{z},b/z_0)$.
Expressing the combination $\tilde r\theta$ via the angle $\delta=b/r_0$ and generalizing the resulting expression to the 3-dimensional case, as was done in \cite{Turyshev-Toth:2019-extend}, we have
{}
\begin{eqnarray}
\tilde r\theta&=&r\big(\theta+\delta\big)+{\cal O}(r^3/r_0^2)=|\vec{x}+{\vec x}_0+\beta{\vec x}'|+{\cal O}(r^3/r_0^2),
\label{eq:tildebeta}
\end{eqnarray}
where $\beta={\overline z}/{z_0}$ is from (\ref{eq:alpha-mu}).
This allows us to express the complex amplitudes ${\cal A}_{\tt in/sc}(r,\theta,r_0)\rightarrow {\cal A}_{\tt in/sc}({\vec x},{\vec x}_0,{\vec x}')$ as
{}
\begin{eqnarray}
{\cal A}_{\tt in/sc}({\vec x},{\vec x}_0,{\vec x}')&=&a_{\tt in/sc}({\vec x},{\vec x}_0,{\vec x}')
\exp\Big[-ik\Big\{\frac{1}{4\overline z}\big|\vec{x}+{\vec x}_0\big|\Big(\big|\vec{x}+{\vec x}_0+\beta{\vec x}'\big|\pm \sqrt{\big(\vec{x}+{\vec x}_0+\beta{\vec x}'\big)^2+8r_g \tilde r}\Big)-\nonumber\\
&&\hskip 80pt -\, r_g+2r_g\,
\ln \frac{k}{2}\Big(\big|\vec{x}+{\vec x}_0+\beta{\vec x}'\big|\pm\sqrt{\big(\vec{x}+{\vec x}_0+\beta{\vec x}'\big)^2+8r_g \tilde r}\Big)\Big\}\Big],
\label{eq:Ain-3d}\\
a^2_{\tt in/sc}({\vec x},{\vec x}_0,{\vec x}')&=&\frac{\big({\textstyle\frac{1}{2}} (\sqrt{1+{8r_g \tilde r}/{(\vec{x}+{\vec x}_0+\beta{\vec x}')^2}}\pm1)\big)^2}{\sqrt{1+{8r_g\tilde r}/{(\vec{x}+{\vec x}_0+\beta{\vec x}')^2}}}.
\label{eq:a_insc2}
\end{eqnarray}
Clearly, these are rather complex expressions. However, in the case when displacements are large, $\rho_0 \gg \rho$ and $\beta\rho' \ll \rho_0$, we may use the approximation (\ref{eq:mod}), which allows us to expand (\ref{eq:Ain-3d}) and (\ref{eq:a_insc2}), to the first order in $\rho/\rho_0$ and $\beta\rho'/\rho_0$, yielding the following results:
{}
\begin{eqnarray}
{\cal A}_{\tt in}({\vec x},{\vec x}_0,{\vec x}')&=&
a_{\tt in}(\rho_0,\tilde r) \exp\Big[-i\Big(
\big(\xi_{\tt in}({\vec x}\cdot{\vec n}_0)+\eta_i({\vec x}\cdot{\vec n}_i)\big)+{\textstyle\frac{1}{2}}\xi_{\tt in}\beta\,({\vec x}'\cdot{\vec n}_0)\Big)\Big]e^{i\delta\varphi_{\tt in} (\rho_0,\tilde r)},
\label{eq:amp-Ain}\\
{\cal A}_{\tt sc} ({\vec x},{\vec x}_0,{\vec x}')&=&
a_{\tt sc}(\rho_0,\tilde r) \exp\Big[i\Big( {\big(\xi_{\tt sc}({\vec x}\cdot{\vec n}_0)-\eta_i({\vec x}\cdot{\vec n}_i)\big)}+{{\textstyle\frac{1}{2}}\xi_{\tt sc}\beta\,({\vec x}'\cdot{\vec n}_0)}\Big)\Big]e^{i\delta\varphi_{\tt sc} (\rho_0,\tilde r)}, \label{eq:amp-Asc}
\end{eqnarray}
where the real-valued factors $a^2_{\tt in/sc}$ and phases $\delta\varphi_{\tt in/sc}(\rho_0,\tilde r)$ are given as
\begin{eqnarray}
a^2_{\tt in/sc}(\rho_0,\tilde r) &=&
\frac{\big[{\textstyle\frac{1}{2}} (\sqrt{1+{8r_g\tilde r}/{\rho_0^2}}\pm1)\big]^2}{\sqrt{1+{8r_g\tilde r}/{\rho_0^2}}},
\label{eq:a_insc}\\
\delta\varphi_{\tt in/sc}
(\rho_0,\tilde r) &=& -k\Big(\frac{\rho^2_0}{4\tilde r}\Big(1\pm\sqrt{1+\frac{8r_g \tilde r}{\rho_0^2}}-\frac{4r_g \tilde r}{\rho_0^2}\Big)+2r_g\ln k\rho_0{\frac{1}{2}}\Big(\sqrt{1+\frac{8r_g \tilde r}{\rho^2_0}}\pm1\Big)\Big).
\label{eq:Ain-d_ph}
\end{eqnarray}
Also, the spatial frequencies $\xi_{\tt in}$ and $\xi_{\tt sc}$ in (\ref{eq:amp-Ain}) and (\ref{eq:amp-Asc}), are defined as
{}
\begin{eqnarray}
\xi_{\tt in/sc}&=&k\Big(\sqrt{1+\frac{8r_g \tilde r}{\rho^2_0}}\pm1\Big)\frac{\rho_0}{2\tilde r},
\label{eq:betapm}
\end{eqnarray}
where, again, the upper sign is for $\xi_{\tt in}$ and the lower sign is for $\xi_{\tt sc}$.
Note that in the case when angles $\theta$ are large, $\theta\gg\sqrt{2r_g/\tilde r}$ or $\rho_0\gg \sqrt{2r_g\tilde r}$, the amplitude factors (\ref{eq:a_insc}) reduce to the known values (see, for instance, \cite{Turyshev-Toth:2019-extend}), namely
\begin{equation}
a_{\tt in}=1+{\cal O}(r_g^2), \qquad a_{\tt sc}=\frac{2r_g\tilde r}{\rho_0^2}+{\cal O}(r_g^2), \qquad \rho_0 \gg \sqrt{2r_g\tilde r}.
\label{eq:a0_insc}
\end{equation}
However, the form of the expression (\ref{eq:a_insc}) allows us to study the case when $\rho_0\simeq \sqrt{2r_g\tilde r}$ and $\rho_0\lesssim \sqrt{2r_g\tilde r}$, which offers a description of the gravitational scattering of light in the transition region between the region of geometric optics and the weak interference region, and then toward the optical axis. This allows us to describe the entire process of gravitational scattering of light from the wave-optical standpoint.
To further emphasize the point above, we relate the results that we obtained for the amplification factors $a^2_{\tt in/sc}$ and the spatial frequencies $\xi_{\tt in/sc}$ to the models that are used to describe gravitational microlensing. As we discussed in \cite{Turyshev-Toth:2019-image}, the spatial frequencies $\xi_{\tt in/sc}$ can be expressed as
{}
\begin{eqnarray}
\xi_{\tt in/sc}&=&k\Big(\sqrt{1+\frac{8r_g \tilde r}{\rho^2_0}}\pm1\Big)\frac{\rho_0}{2\tilde r}=k\theta_\pm, \qquad
\theta_\pm= {\textstyle\frac{1}{2}} \Big(\sqrt{\theta^2+4\theta_E^2}\pm\theta\Big),
\label{eq:xi}
\end{eqnarray}
where $\theta_E=\sqrt{{2r_g}/{\tilde r}}$ is the Einstein deflection angle and $\theta=\rho_0/{\tilde r}$. The angles $\theta_\pm$ are the angles corresponding to the positions of the observed major and minor images \cite{Liebes:1964,Refsdal:1964,Schneider-Ehlers-Falco:1992}.
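The equivalence of the two forms of the spatial frequencies in (\ref{eq:xi}) can be verified numerically; in this sketch the values of $r_g$, $\tilde r$, $\rho_0$ and $k$ are arbitrary illustrative assumptions, as the identity holds for any consistent choice:

```python
import math

# Arbitrary illustrative values; the identity holds for any consistent choice.
r_g, r_tilde, rho_0, k = 3.0e3, 5.0e13, 1.0e9, 6.0e6

theta = rho_0 / r_tilde                   # source-lens angular separation
theta_E = math.sqrt(2.0 * r_g / r_tilde)  # Einstein deflection angle

for sign in (+1.0, -1.0):
    # Left: xi_in (upper sign) / xi_sc (lower sign) as defined in Eq. (eq:betapm).
    xi = k * (math.sqrt(1.0 + 8.0 * r_g * r_tilde / rho_0**2) + sign) * rho_0 / (2.0 * r_tilde)
    # Right: k * theta_pm, with theta_pm the positions of the major/minor images.
    theta_pm = 0.5 * (math.sqrt(theta**2 + 4.0 * theta_E**2) + sign * theta)
    assert abs(xi - k * theta_pm) <= 1e-9 * abs(xi)
print("xi_in/sc = k*theta_pm verified")
```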
Furthermore, our results match the expressions used to describe the light amplification observed in microlensing experiments.
If the source is offset from the optical axis by a small amount, it is lensed into two images that appear in line with the source and the lens, and close to the Einstein ring. Because the size of the Einstein ring is so small, the two images of the source are unresolved and the primary observable is their combined amplification. Using (\ref{eq:a_insc}) we obtain the combined light amplification, $A$, by adding the two amplification factors of the major and minor images, which yields the familiar expression
\begin{eqnarray}
A=a^2_{\tt in}+a^2_{\tt sc}&=&
\frac{1+{4r_g\tilde r}/{\rho_0^2}}{\sqrt{1+{8r_g\tilde r}/{\rho_0^2}}}=\frac{u^2+2}{u\sqrt{u^2+4}},\qquad {\rm where}\qquad u=\frac{\theta}{\theta_E}.
\label{eq:a12_amp}
\end{eqnarray}
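As a sanity check, the following sketch verifies that the wave-optical amplitude factors (\ref{eq:a_insc}), rewritten with $8r_g\tilde r/\rho_0^2=4/u^2$, reproduce the classic point-lens amplification curve of (\ref{eq:a12_amp}):

```python
import math

def A_wave(u):
    """Combined amplification from the wave-optical amplitude factors (eq:a_insc),
    written with 8 r_g r_tilde / rho_0^2 = 4 / u^2."""
    s = math.sqrt(1.0 + 4.0 / u**2)
    a_in2 = (0.5 * (s + 1.0))**2 / s   # major image
    a_sc2 = (0.5 * (s - 1.0))**2 / s   # minor image
    return a_in2 + a_sc2

def A_micro(u):
    """Classic point-lens microlensing amplification, Eq. (eq:a12_amp)."""
    return (u**2 + 2.0) / (u * math.sqrt(u**2 + 4.0))

for u in (0.1, 0.5, 1.0, 3.0, 10.0):
    assert abs(A_wave(u) - A_micro(u)) < 1e-12 * A_micro(u)
print(round(A_micro(1.0), 4))  # -> 1.3416, the textbook amplification at u = 1
```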
Expressions (\ref{eq:xi})--(\ref{eq:a12_amp}) establish the correspondence between our analysis in this section and the well-known models of microlensing \cite{Liebes:1964,Refsdal:1964,Schneider-Ehlers-Falco:1992}. Using our approach, we were able to present a previously unavailable description of microlensing phenomena based on Maxwell's vector theory of the EM field. Our modeling approach can be further extended to incorporate other important features that allow for a better description of the source, the lens, and the backgrounds, including polarization of the incident EM wave, non-linear propagation effects, dispersion in the interstellar medium, the contribution of the zodiacal background, and other effects that are not yet available in models of microlensing phenomena.
\subsection{Image EM field and intensity in the focal plane of the telescope}
\label{sec:image-EM-field}
With the expressions above, we may now develop the EM field that constitutes the image and evaluate its intensity in the focal plane of an imaging telescope.
To derive the amplitudes of the EM field in the focal plane of the telescope that correspond to (\ref{eq:amp-Ain}) and (\ref{eq:amp-Asc}), we substitute these expressions into (\ref{eq:amp-w-f}). The corresponding integrals over $d^2{\vec x}$ are easy to evaluate. As a result, similarly to \cite{Turyshev-Toth:2019-image}, we derive the amplitudes of the two EM waves in the optical telescope's focal plane in the following form:
{}
\begin{eqnarray}
{\cal A}_{\tt in}({\vec x}_i,{\vec x}_0,{\vec x}')&=&
\Big(\frac{kd^2}{8f}\Big)\, \Big\{a_{\tt in}\Big(\frac{
2J_1(v_+\frac{1}{2}d)}{v_+ \frac{1}{2}d}\Big)+{\cal O}(r_g^2)\Big\}e^{i\big(kf(1+{{\vec x}_i^2}/{2f^2})+\delta\varphi_{\tt in}(\rho_0,\tilde r) +\frac{\pi}{2}-{\textstyle\frac{1}{2}}\xi_{\tt in}\beta\rho'\cos(\phi'-\phi_0)\big)},
\label{eq:amp-Aind}\\
{\cal A}_{\tt sc}({\vec x}_i,{\vec x}_0,{\vec x}')&=&
\Big(\frac{kd^2}{8f}\Big)\Big\{
a_{\tt sc}\Big(
\frac{
2J_1(v_-\frac{1}{2}d)}{v_- \frac{1}{2}d}\Big)+{\cal O}(r^2_g)\Big\}e^{i\big(kf(1+{{\vec x}_i^2}/{2f^2})+\delta\varphi_{\tt sc}(\rho_0,\tilde r)
+\frac{\pi}{2}+{\textstyle\frac{1}{2}}\xi_{\tt sc}\beta\rho'\cos(\phi'-\phi_0)\big)},
\label{eq:amp-Ascd}
\end{eqnarray}
where the spatial frequencies $v_\pm$ are defined as
{}
\begin{eqnarray}
v_+=\sqrt{\xi_{\tt in}^2+2\xi_{\tt in}\eta_i\cos\big(\phi_i-\phi_0\big)+\eta_i^2}\qquad {\rm and} \qquad
v_-=\sqrt{\xi_{\tt sc}^2-2\xi_{\tt sc}\eta_i\cos\big(\phi_i-\phi_0\big)+\eta_i^2}.\label{eq:vpms}
\end{eqnarray}
Remembering the time-dependent phase from (\ref{eq:DB-sol-in-cc}), we substitute this expression in (\ref{eq:Pv}) and, after time averaging, derive the Poynting vector of the EM wave in the focal plane of the imaging telescope. As a result, in the region of geometric optics, where only the incident EM wave is present, the intensity of the EM field at the focal plane sensor is derived using (\ref{eq:amp-Aind}), resulting in an expression independent of $\rho'$ and $\phi'$:
{}
\begin{eqnarray}
S_{\tt geom.opt.}({\vec x}_i,{\vec x}_0,{\vec x}')&=&
\frac{c}{8\pi}\frac{E_0^2}{z_0^2}
\Big(\frac{kd^2}{8f}\Big)^2\Big\{a^2_{\tt in}\Big(\frac{
2J_1(v_+\frac{1}{2}d)}{v_+\frac{1}{2}d}\Big)^2+{\cal O}(r^2_g)\Big\}.
\label{eq:FI-go}
\end{eqnarray}
As in the region of weak interference both incident and scattered waves are present, the field intensity in the focal plane of the imaging telescope is derived using the sum of the two solutions, (\ref{eq:amp-Aind}) and (\ref{eq:amp-Ascd}), yielding
{}
\begin{eqnarray}
S_{\tt weak.int.}({\vec x}_i,{\vec x}_0,{\vec x}')&=&
\frac{c}{8\pi}\frac{E_0^2}{z_0^2}
\Big(\frac{kd^2}{8f}\Big)^2\Big\{a^2_{\tt in}\Big(\frac{
2J_1(v_+\frac{1}{2}d)}{v_+\frac{1}{2}d}\Big)^2+
a^2_{\tt sc}\Big(\frac{
2J_1(v_-\frac{1}{2}d)}{v_-\frac{1}{2}d}\Big)^2+\nonumber\\
&&\hskip -80pt +\,
2\cos\Big(\frac{k\rho_0}{2\tilde r}\sqrt{\rho_0^2+8r_g \tilde r}+2kr_g\ln \frac{\sqrt{\rho^2_0+8r_g \tilde r}+\rho_0}{\sqrt{\rho_0^2+8r_g \tilde r}-\rho_0}\Big)
a_{\tt in}a_{\tt sc}\Big(\frac{
2J_1(v_+\frac{1}{2}d)}{v_+\frac{1}{2}d}\Big)\Big(\frac{
2J_1(v_-\frac{1}{2}d)}{v_-\frac{1}{2}d}\Big)+{\cal O}(r_g^2)
\Big\},~~~
\label{eq:FI-ir}
\end{eqnarray}
also independent of $\rho'$ and $\phi'$. Similar simplifying assumptions, based on the behavior of the ratios involving the Bessel function $2J_1(v_\pm\frac{1}{2}d)/{v_\pm\frac{1}{2}d}$ in these regions \cite{Turyshev-Toth:2019-image}, are applicable here. Therefore, the intensity distribution pattern in the weak interference region takes the following simplified form:
{}
\begin{eqnarray}
S_{\tt weak.int.}({\vec x}_i,{\vec x}_0,{\vec x}')&=&
\frac{c}{8\pi}\frac{E_0^2}{z_0^2}
\Big(\frac{kd^2}{8f}\Big)^2\Big\{
a^2_{\tt in}\Big(\frac{
2J_1(v_+\frac{1}{2}d)}{v_+\frac{1}{2}d}\Big)^2+
a^2_{\tt sc}\Big(\frac{
2J_1(v_-\frac{1}{2}d)}{v_-\frac{1}{2}d}\Big)^2+{\cal O}(r^2_g)
\Big\}.
\label{eq:FI-ir+}
\end{eqnarray}
Substituting the resulting expressions (\ref{eq:FI-go}) and (\ref{eq:FI-ir+}) in (\ref{eq:psf}), we compute the convolved PSFs for the two regions:
{}
\begin{eqnarray}
\mu_{\tt geom.opt.}({\vec x}_i,{\vec x}_0,{\vec x}')&=&
\Big(\frac{kd^2}{8f}\Big)^2\Big\{a^2_{\tt in}\Big(\frac{
2J_1(v_+\frac{1}{2}d)}{v_+\frac{1}{2}d}\Big)^2+{\cal O}(r^2_g)\Big\},
\label{eq:FI-go-mu}\\
\mu_{\tt weak.int.}({\vec x}_i,{\vec x}_0,{\vec x}')&=&
\Big(\frac{kd^2}{8f}\Big)^2\Big\{a^2_{\tt in}\Big(\frac{
2J_1(v_+\frac{1}{2}d)}{v_+\frac{1}{2}d}\Big)^2+a^2_{\tt sc}\Big(\frac{
2J_1(v_-\frac{1}{2}d)}{v_-\frac{1}{2}d}\Big)^2+{\cal O}(r_g^2)\Big\}.
\label{eq:FI-ir+-mu}
\end{eqnarray}
\begin{figure}
\includegraphics[width=0.3\linewidth]{fig6}
\caption{\label{fig:images-go}Density plot simulating the image seen by the optical telescope when it is positioned in the region of weak interference, $\rho_0\gtrsim R_\odot$ from the optical axis, with the resulting minor and major images shown in accordance with Eq.~(\ref{eq:FI-ir+-Int}). The Sun is indicated with a dashed line, while the Einstein ring is shown as a solid line. Note that in the region of geometric optics only the major image remains, as described by Eq.~(\ref{eq:FI-go-Int}).
}
\end{figure}
Substituting this result (\ref{eq:psf-bl1}) into (\ref{eq:power}), we derive the expression that may be used to determine the intensity distribution for the signals received in these two regions. Again assuming uniform surface brightness, and noticing that (\ref{eq:FI-go-mu}) and (\ref{eq:FI-ir+-mu}) do not depend on $\rho'$ and $\phi'$, we can easily evaluate the integral. This results in the following intensities to be observed in the focal plane of the imaging telescope:
{}
\begin{eqnarray}
I_{\tt geom.opt.}({\vec x}_i,{\vec x}_0)&=&\pi B_{\tt s}
\Big(\frac{kd^2}{8f}\Big)^2 \frac{R^2_\oplus}{z_0^2}
\Big\{a^2_{\tt in}\Big(\frac{
2J_1(v_+\frac{1}{2}d)}{v_+\frac{1}{2}d}\Big)^2+{\cal O}(r^2_g)\Big\},
\label{eq:FI-go-Int}\\
I_{\tt weak.int.}({\vec x}_i,{\vec x}_0)&=&\pi
B_{\tt s} \Big(\frac{kd^2}{8f}\Big)^2 \frac{R^2_\oplus}{z_0^2}\Big\{
a^2_{\tt in}\Big(\frac{
2J_1(v_+\frac{1}{2}d)}{v_+\frac{1}{2}d}\Big)^2+
a^2_{\tt sc}\Big(\frac{
2J_1(v_-\frac{1}{2}d)}{v_-\frac{1}{2}d}\Big)^2+{\cal O}(r^2_g)\Big\}.
\label{eq:FI-ir+-Int}
\end{eqnarray}
Eqs.~(\ref{eq:FI-go-Int})--(\ref{eq:FI-ir+-Int}) describe the intensity distributions that correspond to imaging in two different optical regions behind the Sun. They describe the spots of light corresponding to the incident and scattered waves, given by the terms containing $v_+$ and $v_-$, respectively. Examining (\ref{eq:FI-go-Int}) and (\ref{eq:FI-ir+-Int}) in conjunction with (\ref{eq:vpms}), we see that these expressions nearly vanish for most values of $v_\pm$, except when $v_\pm$ becomes zero, which happens when $\eta_i \rightarrow \xi_{\tt in/sc}$. When this happens, we observe one spot outside the Einstein ring (for $\eta_i \rightarrow \xi_{\tt in}$) describing the major image and another inside the ring (for $\eta_i \rightarrow \xi_{\tt sc}$) describing the minor image. This approach provides a wave-optical treatment of the microlensing phenomena that are usually described by invoking the language of geometric optics \cite{Schneider-Ehlers-Falco:1992}.
Examining (\ref{eq:vpms}), we see that because the combinations $\xi_{\tt in}{\textstyle\frac{1}{2}}d$ and $\xi_{\tt sc}{\textstyle\frac{1}{2}}d$ are rather large, expression (\ref{eq:FI-go-Int}) is almost zero everywhere except for one point where the argument of the Bessel function vanishes. Taking the limit $\eta_i\rightarrow \xi_{\tt in}$ in (\ref{eq:FI-go-Int}), we obtain:
{}
\begin{eqnarray}
I_{\tt geom.opt.}({\vec \xi}^{\tt in}_i,{\vec x}_0)&=&\pi
B_{\tt s}\Big(\frac{kd^2}{8f}\Big)^2 \frac{R^2_\oplus}{z_0^2}
\Big\{\Big(\frac{
2J_1\big(\xi_{\tt in}d\cos{\textstyle\frac{1}{2}}(\phi_i-\phi_0)\big)}{\xi_{\tt in}d\cos{\textstyle\frac{1}{2}}(\phi_i-\phi_0)}\Big)^2+{\cal O}(r^2_g)\Big\},
\label{eq:FI-go+}
\end{eqnarray}
where to show the dominant behavior of this expression in the geometric optics region, we used the value for $a_{\tt in}$ from (\ref{eq:a0_insc}).
This expression describes one peak corresponding to the incident wave, whose intensity is not amplified by the SGL. It corresponds to the major image associated with $\xi_{\tt in}$, which always appears outside the Einstein ring. Similarly to (\ref{eq:FI-go+}), we take the limit $\eta_i\rightarrow \xi_{\tt sc}$ in expression (\ref{eq:FI-ir+-Int}) and obtain
{}
\begin{eqnarray}
I_{\tt weak.int.}({\vec \xi}^{\tt sc}_i,{\vec x}_0)&=&\pi
B_{\tt s} \Big(\frac{kd^2}{8f}\Big)^2\frac{R^2_\oplus}{z_0^2}\Big\{\Big(\frac{
2J_1\big(\xi_{\tt in}d\cos{\textstyle\frac{1}{2}}(\phi_i-\phi_0)\big)}{\xi_{\tt in}d\cos{\textstyle\frac{1}{2}}(\phi_i-\phi_0)}\Big)^2+ \Big(\frac{2r_g{\tilde r}}{\rho^2_0} \Big)^2 \Big(\frac{
2J_1\big(\xi_{\tt sc}d\sin{\textstyle\frac{1}{2}}(\phi_i-\phi_0)\big)}{\xi_{\tt sc}d\sin{\textstyle\frac{1}{2}}(\phi_i-\phi_0)}\Big)^2\Big\},~~~
\label{eq:FI-ir+*}
\end{eqnarray}
where to explicitly demonstrate the behavior of $I_{\tt weak.int}$, we used the values for $a_{\tt in/sc}$ from (\ref{eq:a0_insc}).
Eq.~(\ref{eq:FI-ir+*}) describes two images of uneven brightness: one depending on $v_+$ from (\ref{eq:vpms}), characteristic of the incident wave, that appears outside the Einstein ring, and the other given by the $v_-$-dependent term and scaled by the factor $(2r_g{\tilde r}/\rho_0^2)^2$, corresponding to the scattered wave, that appears inside the Einstein ring.
\section{Power received at the image of the Einstein ring}
\label{sec:power}
Fig.~\ref{fig:images} shows the signals from the directly imaged region and from the rest of the source, as received at the Einstein ring in the focal plane of an optical telescope. The thickness of the Einstein ring is determined by the resolution of the diffraction-limited telescope, given as $\sim\lambda/d$ (from (\ref{eq:S_=0})). Eqs.~(\ref{eq:pow-dirD}) and (\ref{eq:P-blur*off4*}) describe the intensities of light received from the directly imaged region, $I(\rho_i)$, and the blur from the rest of the planet, $I_{\tt blur}({\vec x}_i,{\vec x}_0)$, respectively.
In determining the useful area in the focal plane of an optical telescope, we observe that a meter-class telescope positioned in the strong interference region of the SGL will not be able to resolve the thickness of the Einstein ring, given as $2r_\oplus=({\overline z}/z_0)2R_\oplus$; for that, a telescope aperture of $2r_\oplus$ would be required. However, a meter-class telescope will be able to resolve the circumference of the ring, $\ell_{\tt ER}=2\pi \sqrt{2r_g/\overline z}$, at an angular resolution characterized by $\lambda/d$.
There are two natural ways to use the information present in the Einstein ring:
\begin{inparaenum}[1)]
\item to use the total power deposited within the Einstein ring, as seen by the diffraction-limited telescope, or
\item to measure brightness variations of the Einstein ring along its circumference.
\end{inparaenum}
Measuring the total power allows for a straightforward signal estimation. Measuring brightness variations along the Einstein ring represents another valuable observable that can help improve image quality and also reduce unwanted light contamination from nearby off-image sources. Here, we focus on measuring the total power; we leave the topic of measuring brightness variations for a separate discussion.
As shown in Fig.~\ref{fig:images}, the Einstein ring is seen in the focal plane of an imaging telescope as an annulus of unresolved width, with radius determined from (\ref{eq:alpha-mu}) as $\alpha=\eta_i$, yielding $\rho_{\tt ER}=f\sqrt{2r_g/{\overline z}}$. Therefore, the useful signal in the focal plane of a diffraction-limited telescope comes from the entire circumference of the Einstein ring, which occupies the annulus between the two radii, $\rho_{\tt ER}^\pm$, given as
{}
\begin{eqnarray}
\rho_{\tt ER}^\pm=f\Big(\sqrt{\frac{2r_g}{\overline z}}\pm\frac{\lambda}{2d}\Big).
\label{eq:ER-rho}
\end{eqnarray}
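For orientation, a rough numerical estimate of these focal-plane scales may be useful; the parameter values below ($r_g$ for the Sun, ${\overline z}=650$~AU, and an assumed $f=10$~m, $\lambda=1~\mu$m, $d=1$~m telescope) are illustrative assumptions, not values prescribed above:

```python
import math

# Assumed, illustrative parameters (not prescribed by the text):
r_g = 2.95e3                    # solar Schwarzschild radius [m]
z_bar = 650.0 * 1.496e11        # heliocentric distance of the image plane [m]
f, lam, d = 10.0, 1.0e-6, 1.0   # focal length, wavelength, aperture [m]

rho_ER = f * math.sqrt(2.0 * r_g / z_bar)  # Einstein-ring radius in the focal plane
half_w = f * lam / (2.0 * d)               # half-width of the annulus, Eq. (eq:ER-rho)

# With these assumptions the ring radius is tens of microns, while the annulus
# is only a few microns wide -- a ring of unresolved thickness.
print(f"rho_ER ~ {rho_ER*1e6:.1f} um, annulus half-width ~ {half_w*1e6:.1f} um")
```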
As a result, to estimate the power received in the focal plane of a diffraction-limited telescope from a distant, extended and resolved source, we need to integrate the intensities (\ref{eq:pow-dirD}) and (\ref{eq:P-blur*off4*}) over the focal plane corresponding to the annulus between the radii (\ref{eq:ER-rho}).
\subsection{Power in the focal plane from the directly imaged region}
\label{sec:power-dir}
Before considering the power deposited at the annulus around the Einstein ring corresponding to the signal received from the directly imaged region, we first compute the total power deposited by this signal in the entire focal plane. For this, we take (\ref{eq:pow-dirD}) and derive the following:
{}
\begin{eqnarray}
P^0_{\tt fp.dir}&=&\int^{2\pi}_0 \hskip -4pt d\phi_i \int_0^\infty
\hskip 0pt I_{\tt dir}(\rho_i) \rho_i d\rho_i = \nonumber\\
&=& \pi B_{\tt s} \Big(\frac{kd^2}{8f}\Big)^2\frac{\mu_0d^2}{4{\overline z}^2}
\int^{2\pi}_0 \hskip -4pt d\phi_i \int_0^\infty
\hskip 0pt \rho_i d\rho_i \Big(\frac{2}{(\alpha^2-\eta_i^2){\textstyle\frac{1}{2}}d} \Big(\alpha J_0(\eta_i {\textstyle\frac{1}{2}}d) J_1(\alpha {\textstyle\frac{1}{2}}d)-\eta_i J_0(\alpha {\textstyle\frac{1}{2}}d) J_1(\eta_i {\textstyle\frac{1}{2}}d)\Big)\Big)^2.
\label{eq:pow-fp0}
\end{eqnarray}
To evaluate this integral, we remember the identity
{}
\begin{eqnarray}
\int_0^{d/2} \hskip -5pt \rho d \rho J_0(\alpha \rho)J_0(\eta_i \rho)= \Big(\frac{d}{2}\Big)^2
\frac{1}{(\alpha^2-\eta_i^2){\textstyle\frac{1}{2}}d}
\Big(
\alpha J_0(\eta_i {\textstyle\frac{1}{2}}d) J_1(\alpha {\textstyle\frac{1}{2}}d)-\eta_i J_0(\alpha {\textstyle\frac{1}{2}}d) J_1(\eta_i {\textstyle\frac{1}{2}}d)\Big).
\label{eq:amp-w-2+}
\end{eqnarray}
With the help of (\ref{eq:amp-w-2+}) and (\ref{eq:alpha-mu}), we present (\ref{eq:pow-fp0}) as
{}
\begin{eqnarray}
P^0_{\tt fp.dir}&=&\pi B_{\tt s} \frac{\mu_0d^2}{4{\overline z}^2}
\int_0^{d/2} \hskip -5pt \rho d \rho J_0(\alpha \rho)
\int_0^{d/2}\hskip -5pt \rho' d \rho' J_0(\alpha \rho')
\int^{2\pi}_0 \hskip -4pt d\phi_i \int_0^\infty
\hskip 0pt \eta_i d\eta_i J_0(\eta_i \rho)J_0(\eta_i \rho').
\label{eq:pow-fp1}
\end{eqnarray}
The last integral in (\ref{eq:pow-fp1})
is just the semi-infinite integral of a Fourier-Bessel transform (Hankel transform) that is bounded at $\rho\rightarrow 0$ and vanishes at $\rho\rightarrow\infty$, constituting the orthogonality relation on a semi-infinite interval \cite{deLeon:2014}:
{}
\begin{eqnarray}
\int_0^\infty \hskip -3pt q dq J_n\big(q \rho\big)J_n\big(q\rho'\big)=\frac{\delta(\rho-\rho')}{\rho'}.
\label{eq:fb}
\end{eqnarray}
Using (\ref{eq:fb}) in (\ref{eq:pow-fp1}), we have
{}
\begin{eqnarray}
P^0_{\tt fp.dir}&=&
B_{\tt s} \frac{\mu_0 \pi^2d^4}{16{\overline z}^2}\Big(J^2_0(\alpha{\textstyle\frac{1}{2}}d)+J^2_1(\alpha{\textstyle\frac{1}{2}}d)\Big)
\equiv P_{\tt dir}=\frac{B_s}{z_0^2}\pi({\textstyle\frac{1}{2}}d)^2\pi({\textstyle\frac{1}{2}}D)^2\frac{4 \sqrt{2r_g\overline z}}{d},
\label{eq:pow2*}
\end{eqnarray}
where $P_{\tt dir}$ is the power of the EM field received from the directly imaged region of the resolved target and measured at the entrance of the telescope (just in front of the convex lens) as was derived in \cite{Turyshev-Toth:2019-blur} by integrating the energy density over the aperture. Eq.~(\ref{eq:pow2*}) confirms that in the case of imaging with the SGL, the total energy is conserved. This is despite the fact that the PSF (\ref{eq:S_z*6z-mu2}) diminishes as $\propto 1/\rho$ as the distance from its optical axis, $\rho$, increases \cite{Turyshev-Toth:2019-extend}.
Now we can estimate the power deposited at the annulus around the Einstein ring corresponding to the signal received from the directly imaged region, $P_{\tt fp.dir}$. For this, we take (\ref{eq:pow-dirD}) and integrate it over the area seen by the diffraction-limited telescope
{}
\begin{eqnarray}
P_{\tt fp.dir}&=&\int^{2\pi}_0 \hskip -4pt d\phi_i \int_{\rho^-_{\tt ER}}^{\rho_{\tt ER}^+}
\hskip 0pt I_{\tt dir}(\rho_i) \rho_i d\rho_i = \nonumber\\
&=& \pi B_{\tt s} \Big(\frac{kd^2}{8f}\Big)^2\frac{\mu_0d^2}{4{\overline z}^2}
\int^{2\pi}_0 \hskip -4pt d\phi_i \int_{\rho^-_{\tt ER}}^{\rho_{\tt ER}^+}
\hskip 0pt \rho_i d\rho_i \Big(\frac{2}{(\alpha^2-\eta_i^2){\textstyle\frac{1}{2}}d} \Big(\alpha J_0(\eta_i {\textstyle\frac{1}{2}}d) J_1(\alpha {\textstyle\frac{1}{2}}d)-\eta_i J_0(\alpha {\textstyle\frac{1}{2}}d) J_1(\eta_i {\textstyle\frac{1}{2}}d)\Big)\Big)^2.
\label{eq:pow-fp}
\end{eqnarray}
To consider practical applications of the SGL, it is convenient to represent $P_{\tt fp.dir}$ as a fraction of the total power incident at the telescope entrance, $P_{\tt dir}$, namely:
{}
\begin{eqnarray}
P_{\tt fp.dir}&=&\epsilon_{\tt dir}P_{\tt dir}.
\label{eq:pow-frac}
\end{eqnarray}
The quantity $\epsilon_{\tt dir}$ is the encircled energy ratio that describes the ratio of the power deposited within the first few Airy rings of the diffraction pattern seen at the focal plane of a convex lens to the total energy incident on a telescope. Similarly, in our case, $\epsilon_{\tt dir}$ describes the fraction of the total energy incident on the telescope from the directly imaged region that is deposited around the Einstein ring as seen by a diffraction-limited telescope.
To evaluate $\epsilon_{\tt dir}$, we introduce a new variable, $p_i$, and new integration limits corresponding to (\ref{eq:ER-rho}):
{}
\begin{eqnarray}
p_i=\eta_i {\textstyle\frac{1}{2}}d=\frac{\pi d}{\lambda f}\rho_i, \qquad {\rm and}
\qquad
p^\pm_{\tt ER}=\alpha {\textstyle\frac{1}{2}}d \pm {\textstyle\frac{\pi}{2}},
\label{eq:p-eta}
\end{eqnarray}
where $\eta_i$ and $\alpha$ are from (\ref{eq:alpha-mu}). Then, from (\ref{eq:pow2*}) and (\ref{eq:pow-fp}), we have:
{}
\begin{eqnarray}
\epsilon_{\tt dir}&=&\frac{1}{2\Big(J^2_0(\alpha{\textstyle\frac{1}{2}}d)+J^2_1(\alpha{\textstyle\frac{1}{2}}d)\Big)}\int_{p^-_{\tt ER}}^{p_{\tt ER}^+}
\hskip 0pt p_i dp_i \Big(\frac{2}{(\alpha{\textstyle\frac{1}{2}}d)^2-p_i^2} \Big(\alpha{\textstyle\frac{1}{2}}d J_0(p_i) J_1(\alpha {\textstyle\frac{1}{2}}d)-p_i J_0(\alpha {\textstyle\frac{1}{2}}d) J_1(p_i)\Big)\Big)^2.
\label{eq:een*}
\end{eqnarray}
\begin{figure}[t]
\includegraphics[scale=1.1]{s9099}
\caption{\label{fig:s9099}Encircled energy and its normalized distribution for the directly imaged region (\ref{eq:een*}) and the rest of the source (\ref{eq:een*b}). Horizontal axis is in seconds of arc, as seen by a telescope positioned at ${\overline z}=650$~AU. The peak at $\sim 1.6''$ corresponds to the location of the Einstein ring.
}
\end{figure}
As the quantity $(\alpha{\textstyle\frac{1}{2}}d)$ is rather large, $\alpha{\textstyle\frac{1}{2}}d\simeq 24.49\,\big({1\,\mu{\rm m}}/{\lambda}\big)\big({d}/{1\,{\rm m}}\big)\big({650\,{\rm AU}}/{\overline z}\big)^\frac{1}{2}$, we may simplify (\ref{eq:een*}) by using the asymptotic approximation of the Bessel functions (\ref{eq:BF}), which results in the following:
{}
\begin{eqnarray}
\epsilon_{\tt dir}&=&\frac{1}{\pi}\int_{\alpha {\textstyle\frac{1}{2}}d - {\textstyle\frac{\pi}{2}}}^{\alpha {\textstyle\frac{1}{2}}d + {\textstyle\frac{\pi}{2}}}
\hskip 0pt dp_i \Big(\frac{\sin\big(\alpha{\textstyle\frac{1}{2}}d-p_i\big)}{\alpha{\textstyle\frac{1}{2}}d-p_i} -\frac{\cos\big(\alpha{\textstyle\frac{1}{2}}d+p_i\big)}{\alpha{\textstyle\frac{1}{2}}d+p_i} \Big)^2\simeq 0.77,
\label{eq:een*appr}
\end{eqnarray}
which indicates that only $\sim77\%$ of the energy incident on the telescope from the directly imaged region is deposited within the annulus of thickness $\lambda/d$ centered at the Einstein ring.
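As a quick numerical cross-check of (\ref{eq:een*appr}), the integral can be evaluated directly. The sketch below (our own variable names and rounded SI constant values, not part of the original analysis) reproduces both $\alpha d/2\simeq24.49$ and $\epsilon_{\tt dir}\simeq0.77$ for $\lambda=1~\mu$m, $d=1$~m, ${\overline z}=650$~AU:

```python
import math

# Cross-check of Eq. (een*appr). All constants and variable names are ours
# (SI units, rounded); r_g = 2*G*M_sun/c^2, consistent with the Einstein-ring
# impact parameter sqrt(2 r_g zbar) used in the text.
G, M_sun, c = 6.674e-11, 1.989e30, 2.998e8
AU = 1.496e11
r_g = 2 * G * M_sun / c**2                     # ~2954 m

lam, d, zbar = 1.0e-6, 1.0, 650 * AU           # wavelength, aperture, heliocentric distance
a = (math.pi * d / lam) * math.sqrt(2 * r_g / zbar)   # alpha*d/2, quoted as ~24.49

# Midpoint rule over the annulus p in [a - pi/2, a + pi/2]
n = 20000
h = math.pi / n
eps_dir = 0.0
for k in range(n):
    p = a - math.pi / 2 + (k + 0.5) * h
    eps_dir += (math.sin(a - p) / (a - p) - math.cos(a + p) / (a + p))**2
eps_dir *= h / math.pi                          # ~0.77
```

The midpoint nodes never coincide with $p=a$, so the removable singularity of $\sin(a-p)/(a-p)$ causes no numerical trouble.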
As a result, the power received from the directly imaged region on a resolved exoplanet and measured at the Einstein ring in the focal plane of a diffraction-limited telescope, with $\epsilon_{\tt dir} $ from (\ref{eq:een*appr}), may be given as
{}
\begin{eqnarray}
P_{\tt fp.dir}&=&\epsilon_{\tt dir} B_{\tt s} \frac{\mu_0\pi^2d^4}{16{\overline z}^2}\Big(J^2_0(\alpha{\textstyle\frac{1}{2}}d)+J^2_1(\alpha{\textstyle\frac{1}{2}}d)\Big)\simeq \epsilon_{\tt dir} B_{\tt s} \frac{\pi^2 d^3}{4{\overline z}}\sqrt{\frac{2r_g}{\overline z}},
\label{eq:pow**}
\end{eqnarray}
where we used the approximations (\ref{eq:BF}) and the definitions (\ref{eq:alpha-mu}). We note that the power (\ref{eq:pow**}) is independent of the observing wavelength and the distance to the target; however, it is a strong function of the telescope's aperture, as expected.
\subsection{Power in the focal plane due to blur from the rest of the planet}
\label{sec:foc-power-blur}
Similarly to the discussion on the signal from the directly imaged region, we first compute the total power deposited in the focal plane from the rest of the extended, resolved exoplanet. For this, we take (\ref{eq:P-blur*2*}) and form the quantity
{}
\begin{eqnarray}
P^0_{\tt fp.blur}({\vec x}_0)&=&\int^{2\pi}_0 \hskip -4pt d\phi_i \int_0^\infty
\hskip 0pt I_{\tt blur}({\vec x}_i,{\vec x}_0) \rho_i d\rho_i = \frac{B_{\tt s}}{{\overline z}^2} \Big(\frac{kd^2}{8f}\Big)^2\frac{\mu_0 d}{2\alpha}\times\nonumber\\
&& \hskip -50pt \times\,\frac{1}{2\pi}
\int_0^{2\pi} \hskip -8pt d\phi'' \bigg(\frac{2r_\oplus}{d}
\sqrt{1-\big(\frac{\rho_0}{r_\oplus}\big)^2\sin^2\phi''}-1\bigg)
\int^{2\pi}_0 \hskip -4pt d\phi_i \int_0^\infty
\hskip 0pt \rho_i d\rho_i \bigg(\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)^2+\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)^2\bigg).
\label{eq:pow-fp0b}
\end{eqnarray}
Using the variable $p_i$ given by (\ref{eq:p-eta}), which yields
\begin{eqnarray}
u_\pm {\textstyle\frac{1}{2}}d=\sqrt{(\alpha {\textstyle\frac{1}{2}}d)^2\mp2 \alpha {\textstyle\frac{1}{2}}d \,p_i \cos(\phi_i-\phi'')+p_i^2},
\label{eq:upm+}
\end{eqnarray}
the last integral in the expression (\ref{eq:pow-fp0b}) is evaluated as
{}
\begin{eqnarray}
\int^{2\pi}_0 \hskip -4pt d\phi_i \int_0^\infty
\hskip 0pt \rho_i d\rho_i \Big( \frac{2
J_1\big(u_\pm\frac{1}{2}d\big)}{u_\pm\frac{1}{2}d}\Big)^2&=& \Big(\frac{\lambda f}{\pi d}\Big)^2\int^{2\pi}_0 \hskip -4pt d\phi_i \int_0^\infty
\hskip 0pt p_i dp_i \Big( \frac{2
J_1\big(u_\pm\frac{1}{2}d\big)}{u_\pm\frac{1}{2}d}\Big)^2
=4\pi \Big(\frac{\lambda f}{\pi d}\Big)^2.
\label{eq:int-fp0*}
\end{eqnarray}
The integrand in (\ref{eq:int-fp0*}) effectively behaves like a delta function, as it predominantly selects points on the Einstein ring, as shown in (\ref{eq:Bess-rat}). This result allows us to express (\ref{eq:pow-fp0b}) as
{}
\begin{eqnarray}
P^0_{\tt fp.blur}({\vec x}_0)&=&B_{\tt s} \frac{\pi^2 d^3}{4 {\overline z}^2}\frac{\mu_0}{\pi\alpha} \Big(\frac{2r_\oplus}{d}\epsilon(\rho_0)-1\Big)\equiv P_{\tt blur}({\vec x}_0),
\label{eq:pow-fp0+}
\end{eqnarray}
where $\epsilon(\rho_0)$ is given by (\ref{eq:eps_r0}) and $P_{\tt blur}({\vec x}_0)$ is the total integrated flux (i.e., power) received from the area on the source which is outside the directly imaged region, as given by Eq.~(30) of \cite{Turyshev-Toth:2019-blur}. Therefore, our results describing the intensity distribution due to the blur at the focal plane of an imaging telescope (\ref{eq:P-blur*2*}) and those derived for photometric imaging in \cite{Turyshev-Toth:2019-blur}, where we estimated the total power incident on the aperture of that telescope, are also equivalent.
Now, similarly to (\ref{eq:pow-fp0b}), we can estimate the power deposited at the annulus around the Einstein ring corresponding to the blur signal, $P_{\tt blur}$. For this, we take (\ref{eq:P-blur*2*}) and integrate it over the area seen by the diffraction-limited telescope:
{}
\begin{eqnarray}
P_{\tt fp.blur}({\vec x}_0)&=&\int^{2\pi}_0 \hskip -4pt d\phi_i \int_{\rho^-_{\tt ER}}^{\rho_{\tt ER}^+}
\hskip 0pt I_{\tt blur}({\vec x}_i,{\vec x}_0) \rho_i d\rho_i = \frac{B_{\tt s}}{{\overline z}^2} \Big(\frac{kd^2}{8f}\Big)^2\frac{\mu_0 d}{2\alpha}\times\nonumber\\
&& \hskip -50pt \times\,\frac{1}{2\pi}
\int_0^{2\pi} \hskip -8pt d\phi'' \bigg(\frac{2r_\oplus}{d}
\sqrt{1-\big(\frac{\rho_0}{r_\oplus}\big)^2\sin^2\phi''}-1\bigg)
\int^{2\pi}_0 \hskip -4pt d\phi_i \int_{\rho^-_{\tt ER}}^{\rho_{\tt ER}^+}
\hskip 0pt \rho_i d\rho_i \bigg(\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)^2+\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)^2\bigg).
\label{eq:pow-fp0*}
\end{eqnarray}
To simplify (\ref{eq:pow-fp0*}), similarly to (\ref{eq:pow-frac}), it is convenient to introduce the encircled energy factor, $\epsilon_{\tt blur}$, for the blur contribution
{}
\begin{eqnarray}
P_{\tt fp.blur}({\vec x}_0)&=&\epsilon_{\tt blur}P_{\tt blur}({\vec x}_0).
\label{eq:pow-frac-bl}
\end{eqnarray}
As we integrate over $d\phi_i$ for the entire period $[0,2\pi]$, the factor $\epsilon_{\tt blur}$ may be given in a very concise form. Thus, with the help of (\ref{eq:pow-fp0+}), (\ref{eq:pow-fp0b}) and the variable $p_i$ from (\ref{eq:p-eta}), yielding $u_\pm {\textstyle\frac{1}{2}}d$ given by (\ref{eq:upm+}), after numerical integration, we have
{}
\begin{eqnarray}
\epsilon_{\tt blur}&=&\frac{1}{8\pi}\int^{2\pi}_0 \hskip -4pt d\phi_i \int_{p^-_{\tt ER}}^{p_{\tt ER}^+}
\hskip 0pt p_i dp_i \bigg(\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)^2+\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)^2\bigg)\simeq 0.69,
\label{eq:een*b}
\end{eqnarray}
independent of the angle $\phi''$ present in (\ref{eq:upm+}). This result suggests that only $\sim69\%$ of the energy incident on the telescope from the area outside the directly imaged region is deposited within the annulus of thickness $\lambda/d$ centered at the Einstein ring. Because of the diffraction within the telescope, a significant part of the remaining energy is deposited at the center of the focal plane and in the side lobes of the diffraction pattern, as seen in Fig.~\ref{fig:images}.
Therefore, the power received from outside the directly imaged region of a resolved source and measured at the Einstein ring in the focal plane of a diffraction-limited telescope, with $\epsilon_{\tt blur} $ from (\ref{eq:een*b}), is given as
{}
\begin{eqnarray}
P_{\tt fp.blur}(\rho_0)&=&\epsilon_{\tt blur} B_{\tt s} \frac{\pi^2 d^3}{4 {\overline z}^2}\frac{\mu_0}{\pi\alpha} \Big(\frac{2r_\oplus}{d}\epsilon(\rho_0)-1\Big)\simeq \epsilon_{\tt blur} B_{\tt s} \frac{\pi^2 d^3}{4{\overline z}}\sqrt{\frac{2r_g}{\overline z}}\Big(\frac{2R_\oplus}{d}\frac{\overline z}{z_0}\epsilon(\rho_0)-1\Big),
\label{eq:pow*b*}
\end{eqnarray}
where we used (\ref{eq:BF}) and (\ref{eq:alpha-mu}) to simplify the result. We note that the power (\ref{eq:pow*b*}) is also independent of the observing wavelength, but is inversely proportional to the distance to the source.
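Because $\alpha d/2\gg1$, the annulus in (\ref{eq:een*b}) is locally indistinguishable from a straight strip of half-width $\pi/2$ in the variable $p_i$, so $\epsilon_{\tt blur}$ reduces to the fraction of the Airy pattern's total energy ($4\pi$) that falls within such a strip. The numerical sketch below of this large-$\alpha d/2$ limit (our own discretization and variable names; $J_1$ is evaluated from its integral representation) recovers a value close to the quoted $0.69$:

```python
import numpy as np

# Large-(alpha*d/2) limit of Eq. (een*b): fraction of the Airy pattern's
# energy (total 4*pi) inside a straight strip of half-width pi/2.
# Discretization and variable names are ours.

def j1(x):
    # Bessel J1 from J1(x) = (1/2pi) * int_0^{2pi} cos(t - x sin t) dt (midpoint rule)
    t = (np.arange(400) + 0.5) * 2 * np.pi / 400
    return np.cos(t[None, :] - np.outer(x, np.sin(t))).mean(axis=1)

def airy(v):
    v = np.maximum(v, 1e-9)                    # (2 J1(v)/v)^2 -> 1 as v -> 0
    return (2 * j1(v) / v)**2

# Integrate one quadrant of the strip |y| <= pi/2 and multiply by 4;
# truncating |x| at 30 loses only a few parts in 1e4 of the total energy.
nx, ny, xmax = 600, 32, 30.0
dx, dy = xmax / nx, (np.pi / 2) / ny
x = (np.arange(nx) + 0.5) * dx
y = (np.arange(ny) + 0.5) * dy
X, Y = np.meshgrid(x, y)
r = np.hypot(X, Y).ravel()
eps_blur = 4 * airy(r).sum() * dx * dy / (4 * np.pi)   # close to the quoted 0.69
```

The equally spaced midpoint rule is spectrally accurate for the periodic Bessel integrand, so 400 nodes suffice for arguments up to $\sim30$.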
As a result, the total power received from the entire exoplanet,
\begin{eqnarray}
P_{\tt fp.exo}(\rho_0)=P_{\tt fp.dir}+P_{\tt fp.blur}(\rho_0),
\label{eq:pow*tot}
\end{eqnarray}
at the location of the Einstein ring in the focal plane of a diffraction-limited telescope with the help of (\ref{eq:pow**}) and (\ref{eq:pow*b*}) is given as
{}
\begin{eqnarray}
P_{\tt fp.exo}(\rho_0)&=&
B_{\tt s} \frac{\pi^2 d^3}{4{\overline z}}\sqrt{\frac{2r_g}{\overline z}}\Big(\epsilon_{\tt dir} +\epsilon_{\tt blur} \Big(\frac{2r_\oplus}{d}\epsilon(\rho_0)-1\Big)\Big)\simeq
\epsilon_{\tt blur} B_{\tt s}\pi^2 d^2 \frac{R_\oplus}{2z_0}\sqrt{\frac{2r_g}{\overline z}}\epsilon(\rho_0), \qquad 0\leq \rho_0 \leq r_\oplus,~~~~
\label{eq:pow*exo*}
\end{eqnarray}
which is similar to the result obtained in \cite{Turyshev-Toth:2019-blur} for the case of photometric imaging of extended objects with the SGL.
\subsection{Power in the focal plane from an off-image source}
\label{sec:foc-power-off}
Similarly to (\ref{eq:pow-fp0*}), we may evaluate the energy received at the focal plane corresponding to intensity (\ref{eq:P-blur*off4*}). We can do that by integrating (\ref{eq:P-blur*off4*}) over the focal plane of the imaging telescope, as we did for (\ref{eq:pow-dirD}) and (\ref{eq:P-blur*2*}), namely
{}
\begin{eqnarray}
P_{\tt fp.off}(\rho_0)&=&\int^{2\pi}_0 \hskip -4pt d\phi_i \int_{\rho^-_{\tt ER}}^{\rho_{\tt ER}^+}
\hskip 0 pt I_{\tt blur}({\vec x}_i,{\vec x}_0) \rho_i d\rho_i=\frac{B_{\tt s}}{{z_0}^2} \Big(\frac{kd^2}{8f}\Big)^2\frac{2\mu_0 R_\oplus}{\alpha\beta}\times\nonumber\\
&&\hskip -30pt \times\,
\frac{1}{2\pi}
\int_{\phi_-}^{\phi_+} \hskip -8pt d\phi''
\Big(\sqrt{1-\big(\frac{\rho_0}{r_\oplus}\big)^2\sin^2\phi''}\Big)
\int^{2\pi}_0 \hskip -4pt d\phi_i \int_{\rho^-_{\tt ER}}^{\rho_{\tt ER}^+}
\rho_i d\rho_i
\bigg(\Big( \frac{2
J_1\big(u_+\frac{1}{2}d\big)}{u_+\frac{1}{2}d}\Big)^2+\Big( \frac{2
J_1\big(u_-\frac{1}{2}d\big)}{u_-\frac{1}{2}d}\Big)^2\bigg).
\label{eq:pow*2+}
\end{eqnarray}
Similarly to the derivation of $P_{\tt fp.blur}(\rho_0)$ above, this expression results in the following
{}
\begin{eqnarray}
P_{\tt fp.off}(\rho_0)&=&\epsilon_{\tt blur} B_{\tt s} \pi^2 d^2 \frac{R_\oplus}{2z_0}\sqrt{\frac{2r_g}{\overline z}}\beta(\rho_0),\qquad \rho_0 \geq r_\oplus,
\label{eq:pow*2++}
\end{eqnarray}
which is equivalent to $\epsilon_{\tt blur} P_{\tt off}(\rho_0)$, where $P_{\tt off}$ is the power received for off-source pointing, as given by Eq.~(38) of \cite{Turyshev-Toth:2019-blur}, and $\beta(\rho_0)$ is from (\ref{eq:beta_r0}). Therefore, our results describing the intensity distribution due to the blur at the focal plane of an imaging telescope for off-source pointing (\ref{eq:P-blur*off4*}) and those derived for photometric imaging in \cite{Turyshev-Toth:2019-blur} are complementary.
Result (\ref{eq:pow*2++}) may be used, in particular, to model light contamination from the parent star which, as shown in Fig.~\ref{fig:images} (right), contributes two spots at the Einstein ring that may be masked by an appropriate management of the focal plane.
\subsection{Power in the focal plane at a large distance from the optical axis}
\label{sec:power-gowi}
Once we move far away from the optical axis, the power deposited in the focal plane of the optical telescope is computed with the intensity distributions (\ref{eq:FI-go-Int}) and (\ref{eq:FI-ir+-Int}) for the geometric optics and weak interference regions, respectively. When we integrate over the focal plane, we see from Fig.~\ref{fig:images} that the two images corresponding to the incident and scattered waves are seen in the focal plane as unresolved circles, with radii determined from (\ref{eq:alpha-mu}) and (\ref{eq:betapm}) as $\eta_i=\xi_{\tt in/sc}$. Therefore, the useful signal received in the focal plane of a diffraction-limited telescope occupies the annuli between the radii $\rho_{\tt in/sc}^\pm$, which from (\ref{eq:betapm}) are given as
{}
\begin{eqnarray}
\rho_{\tt in}^\pm=f\Big(\Big(\sqrt{1+\frac{8r_g \tilde r}{\rho^2_0}}+1\Big)\frac{\rho_0}{2\tilde r}\pm\frac{\lambda}{2d}\Big)
\qquad{\rm and}\qquad
\rho_{\tt sc}^\pm=f\Big(\Big(\sqrt{1+\frac{8r_g \tilde r}{\rho^2_0}}-1\Big)\frac{\rho_0}{2\tilde r}\pm\frac{\lambda}{2d}\Big).
\label{eq:insc-rho}
\end{eqnarray}
As a result, the variable $p_i$ from (\ref{eq:p-eta}) varies within different limits:
{}
\begin{eqnarray}
p^\pm_{\tt in}=\xi_{\tt in} {\textstyle\frac{1}{2}}d \pm {\textstyle\frac{\pi}{2}}
\qquad {\rm and} \qquad
p^\pm_{\tt sc}=\xi_{\tt sc} {\textstyle\frac{1}{2}}d \pm {\textstyle\frac{\pi}{2}}.
\label{eq:p-eta-gi}
\end{eqnarray}
Following the approach that was developed in Sec.~\ref{sec:foc-power-blur}, with the help of (\ref{eq:FI-go-Int}) and (\ref{eq:FI-ir+-Int}), we compute the power deposited in the focal plane in the geometric optics and weak interference regions, which take the form
{}
\begin{eqnarray}
P_{\tt fp.geom.opt}({\vec x}_0)&=&\int^{2\pi}_0 \hskip -4pt d\phi_i \int_{\rho^-_{\tt in}}^{\rho_{\tt in}^+}
\hskip 0pt I_{\tt geom.opt}({\vec x}_i,{\vec x}_0) \rho_i d\rho_i = \epsilon_{\tt geom.opt} B_{\tt s} \pi^2d^2\frac{R^2_\oplus}{4z_0^2} a^2_{\tt in},
\label{eq:fp.pow-go}\\
P_{\tt fp.weak.int}({\vec x}_0)&=&\int^{2\pi}_0 \hskip -4pt d\phi_i \int_{\rho^-_{\tt sc}}^{\rho_{\tt sc}^+}
\hskip 0pt I_{\tt weak.int}({\vec x}_i,{\vec x}_0) \rho_i d\rho_i = \epsilon_{\tt weak.int} B_{\tt s} \pi^2 d^2\frac{R^2_\oplus}{4z_0^2} \big(a^2_{\tt in}+a^2_{\tt sc}\big),
\label{eq:fp.pow-wi}
\end{eqnarray}
where the encircled energies for these regions with the help of (\ref{eq:int-fp0*}) are given as
{}
\begin{eqnarray}
\epsilon_{\tt geom.opt}&=&\frac{1}{4\pi}\int^{2\pi}_0 \hskip -4pt d\phi_i \int_{p^-_{\tt in}}^{p_{\tt in}^+}
\hskip 0pt p_i dp_i \Big( \frac{2
J_1\big(v_+\frac{1}{2}d\big)}{v_+\frac{1}{2}d}\Big)^2\simeq 0.69,
\label{eq:ee-go}\\
\epsilon_{\tt weak.int}&=&
\frac{1}{4\pi (a^2_{\tt in}+a^2_{\tt sc})}\Big\{a^2_{\tt in}\int^{2\pi}_0 \hskip -4pt d\phi_i \int_{p^-_{\tt in}}^{p_{\tt in}^+}
\hskip 0pt p_i dp_i \Big( \frac{2
J_1\big(v_+\frac{1}{2}d\big)}{v_+\frac{1}{2}d}\Big)^2+a^2_{\tt sc}\int^{2\pi}_0 \hskip -4pt d\phi_i \int_{p^-_{\tt sc}}^{p_{\tt sc}^+}
\hskip 0pt p_i dp_i \Big( \frac{2
J_1\big(v_-\frac{1}{2}d\big)}{v_-\frac{1}{2}d}\Big)^2\Big\}\simeq\nonumber\\[4pt]
&&\simeq 0.69.
\label{eq:ee-wi}
\end{eqnarray}
We see that the power deposited at the focal plane of the optical telescope is amplified by the factors $a^2_{\tt in}$ and $a^2_{\tt sc}$ which, according to (\ref{eq:a12_amp}), become larger as the deviation from the optical axis, $\rho_0$, decreases. Thus, as we move closer to the optical axis, the amplification grows, and once we enter the strong interference region it is given by (\ref{eq:pow*exo*}).
Finally, we mention that sources at moderate distances from the parent star do not contribute to the signal measured at the Einstein ring. As their diffraction-limited images will be centered at the angles $\xi_{\tt in/sc}$ given by (\ref{eq:betapm}), they will not bring light contamination to the Einstein ring and thus, they may be ignored in the relevant SNR analysis.
\subsection{Anticipated signals for imaging an exo-Earth}
\label{sec:signal-estimates}
We may now estimate the signals that could be expected from realistic targets when they are imaged with the SGL. We consider a planet identical to our Earth that orbits a star identical to our Sun. The total flux received by such a target is the same as the solar irradiance at the top of Earth's atmosphere, given as $I_0=1{,}366.83~{\rm W/m}^2$. Approximating the planet as a Lambertian sphere illuminated from the viewing direction yields a Bond spherical albedo \cite{Lester-etal:1979} of $2/(3\pi)$, and the target's average surface brightness becomes $B_{\tt s}=({2}/{3\pi})\alpha I_0$, where we take Earth's broadband albedo to be $\alpha=0.3$ and assume that we see a fully illuminated planet at zero phase angle.
With these parameters, the power, $P_{\tt fp.dir}$, and the photon flux, $Q_{\tt fp.dir}=P_{\tt fp.dir}(\lambda/hc)$, corresponding to the signal received from the directly imaged region of the planet is estimated from (\ref{eq:pow**}) to be
{}
\begin{eqnarray}
P_{\tt fp.dir}&=& \epsilon_{\tt dir} \alpha I_0 \frac{\pi d^3}{6{\overline z}}\sqrt{\frac{2r_g}{\overline z}}=1.33\times 10^{-17}\,\Big(\frac{d}{1\,{\rm m}}\Big)^3\Big(\frac{650\,{\rm AU}}{\overline z}\Big)^\frac{3}{2}~{\rm W},
\label{eq:dir-sP}\\
Q_{\tt fp.dir}&=&
66.71\, \Big(\frac{d}{1\,{\rm m}}\Big)^3\Big(\frac{650\,{\rm AU}}{\overline z}\Big)^\frac{3}{2}\Big(\frac{\lambda}{1\,\mu{\rm m}}\Big)~{\rm photons/s},
\label{eq:dir-s}
\end{eqnarray}
where we assumed that all light is transmitted at $\lambda=1~\mu$m and used $\epsilon_{\tt dir}=0.77$.
Similarly, assuming that the planet is positioned at $z_0=30$~pc away from us, with the help of (\ref{eq:pow*b*}) (or, equivalently, from (\ref{eq:pow*exo*})) and using $\epsilon_{\tt blur}=0.69$,
we estimate the signal from the rest of the planet as
{}
\begin{eqnarray}
P_{\tt fp.blur}(\rho_0)&=& \epsilon_{\tt blur} \alpha I_0 {\pi d^2}\frac{R_\oplus}{3z_0}\sqrt{\frac{2r_g}{\overline z}}\epsilon(\rho_0)= 1.59\times 10^{-14}\epsilon(\rho_0)
\Big(\frac{d}{1\,{\rm m}}\Big)^2\Big(\frac{650\,{\rm AU}}{\overline z}\Big)^\frac{1}{2}\Big(\frac{30\,{\rm pc}}{z_0}\Big)~{\rm W},
\label{eq:blur-sP}\\
Q_{\tt fp.blur}(\rho_0)&=&
8.01\times 10^4\epsilon(\rho_0) \Big(\frac{d}{1\,{\rm m}}\Big)^2\Big(\frac{650\,{\rm AU}}{\overline z}\Big)^\frac{1}{2}\Big(\frac{30\,{\rm pc}}{z_0}\Big)
\Big(\frac{\lambda}{1\,\mu{\rm m}}\Big)~{\rm photon/s}.~~~~~
\label{eq:blur-s}
\end{eqnarray}
For comparison, we can also compute the power observed by a regular telescope (unaided by the SGL). Using (\ref{eq:fp.pow-go}) and positioning the telescope at the distance $\rho_0=10R_\odot$ (so that $a^2_{\tt in}=1$) from the SGL optical axis, which corresponds to the geometric optics regime typical of modern astronomical observations, we have (with $\epsilon_{\tt geom.opt}=0.69$):
{}
\begin{eqnarray}
P_{\tt fp.geom.opt}(\rho_0)&=& \epsilon_{\tt geom.opt} \alpha I_0 {\pi d^2} \frac{R^2_\oplus}{6z_0^2} a^2_{\tt in}\simeq
7.03\times 10^{-21}~
\Big(\frac{d}{1\,{\rm m}}\Big)^2\Big(\frac{30\,{\rm pc}}{z_0}\Big)^2~{\rm W}, \label{eq:pow-go+phP}\\
Q_{\tt fp.geom.opt}(\rho_0)&=&
3.54\times 10^{-2}\Big(\frac{d}{1\,{\rm m}}\Big)^2\Big(\frac{30\,{\rm pc}}{z_0}\Big)^2\Big(\frac{\lambda}{1\,\mu{\rm m}}\Big)~{\rm photons/s}.
\label{eq:pow-go+ph}
\end{eqnarray}
Using this estimate, we can compare the performance of a conventional telescope against one aided by the SGL. The angular resolution (\ref{eq:S_=0}) needed to resolve features of size $D$ given by (\ref{eq:Dd}) in the target plane requires a telescope with aperture $d_D\sim1.22\, (\lambda/D) z_0=1.22\, (\lambda/d) {\overline z}\simeq 1.19 \times 10^5~{\rm km}=18.60 R_\oplus$, which is not realistic. The photon flux of a $d=1$~m telescope can be calculated by scaling the result (\ref{eq:pow-go+ph}) by a factor of $(D/2R_\oplus)^2\simeq 5.57 \times 10^{-7}$, yielding the value of $1.97 \times 10^{-8}$
photons/s, which is extremely small. Comparing this flux with (\ref{eq:dir-s}), we see that the SGL, used in conjunction with a $d=1$~m telescope, amplifies the light from the directly imaged region (i.e., an unresolved source) by a factor of
$\sim 3.38\times10^9\,(d/1{\rm m})({650\,{\rm AU}}/{\overline z})^\frac{3}{2}(z_0/{30\,{\rm pc}})^2$.
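This comparison can also be verified numerically. The sketch below uses our own variable names and rounded SI constants; $D=d\,z_0/{\overline z}$ follows from the quoted equality $1.22\,(\lambda/D)\,z_0=1.22\,(\lambda/d)\,{\overline z}$:

```python
import math

# Numerical check of the SGL-vs-conventional-telescope comparison.
# SI constants and variable names are ours.
G, M_sun, c, h = 6.674e-11, 1.989e30, 2.998e8, 6.626e-34
AU, pc, R_earth = 1.496e11, 3.0857e16, 6.371e6
r_g = 2 * G * M_sun / c**2

lam, d, zbar, z0 = 1.0e-6, 1.0, 650 * AU, 30 * pc
I0, albedo, eps_go, eps_dir = 1366.83, 0.3, 0.69, 0.77

d_D = 1.22 * (lam / d) * zbar                 # aperture needed without the SGL, ~18.6 R_earth

# Conventional telescope (geometric optics, a_in^2 = 1), Eq. (pow-go+ph)
P_go = eps_go * albedo * I0 * math.pi * d**2 * R_earth**2 / (6 * z0**2)
Q_go = P_go * lam / (h * c)                   # ~3.54e-2 photons/s from the whole planet

# Flux from a directly-imaged-region-sized patch, and the SGL amplification
D = d * z0 / zbar                             # size of the directly imaged region
Q_conv = Q_go * (D / (2 * R_earth))**2
ring = math.sqrt(2 * r_g / zbar)
Q_dir = (eps_dir * albedo * I0 * math.pi * d**3 / (6 * zbar) * ring) * lam / (h * c)
amplification = Q_dir / Q_conv                # ~3.4e9
```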
\subsection{Noise from the solar corona and detection SNR}
\label{sec:sol-cor}
The Einstein ring corresponding to a distant target, as observed from a position in the SGL focal region, is seen through the bright solar corona, which represents an important noise contribution that must be considered. Noise from the solar corona can be mitigated by letting as little light from the corona reach the instrument as possible. This is achieved by employing a suitably designed solar coronagraph, needed in any case to block the direct light from the Sun, but which can also be used to reduce the noise from the solar corona.
Solar coronagraphy was invented by Lyot \cite{Lyot:1932} to study the solar corona by blocking out the Sun, reproducing solar eclipses artificially. Coronagraphs are also considered as a means to block out light from point sources, such as the host star of an exoplanet imaged with a conventional telescope \cite{Traub:2010}. The SGL coronagraph is different, as it needs to block the light from the Sun and the solar corona, leaving visible only those areas where the Einstein ring appears.
\begin{figure}
\includegraphics[scale=0.833]{foc-plane-detector}
\caption{\label{fig:foc-plane-det} The annular coronagraph concept. The coronagraph blocks light from both within and outside the Einstein ring. The thickness of the exposed area is determined by the diffraction limit of the optical telescope at its typical observational wavelength.}
\end{figure}
The already available design for the SGL coronagraph \cite{Zhou:2018} rejects sunlight with a contrast ratio of $\sim 10^7$. At this level of rejection, the light from the solar disk is blocked to a level comparable to the brightness of the solar corona.
Taking a further step, we consider two possible coronagraph concepts: a conventional coronagraph (which we call a ``disk coronagraph'') that blocks light only from the solar disk and the solar corona up to the inner boundary, $\theta_{\tt cor}^-$, of the $\lambda/d$ annulus centered on the Einstein ring, and a coronagraph that also blocks light outside the outer boundary, $\theta_{\tt cor}^+$, of that annulus (the ``annular coronagraph,'' shown in Fig.~\ref{fig:foc-plane-det}). Fig.~\ref{fig:sol-cor-bright} shows the relative angular sizes of the Sun and the Einstein ring as the heliocentric distance increases.
Compared to the disk coronagraph, the annular coronagraph reduces the noise contribution from the solar corona by an additional $\sim 10\%$. As the solar corona is quite bright compared to the Einstein ring, the use of an annular coronagraph is preferred for an SGL imaging instrument. Consequently, in the estimates that we develop for the corona contribution, we assume an annular coronagraph design.
In Appendix~\ref{sec:model}, we estimate the contribution from the solar corona. Integrating (\ref{eq:model-th}) over the observed width and circumference of the Einstein ring annulus, we obtain (\ref{eq:pow-fp=+*4}), which yields the following estimate (with $\epsilon_{\tt cor}\simeq0.60$):
{}
\begin{eqnarray}
P_{\tt fp.cor}
&=&19.48\,\epsilon_{\tt cor}\, \pi^2 \lambda d\,\frac{R_\odot}{\overline z} \Big(\frac{R_\odot}{\sqrt{2r_g\overline z}}\Big)^{6.8}\Big[1+1.89 \Big(\frac{R_\odot}{\sqrt{2r_g\overline z}}\Big)^{10.2}
+0.0284\Big(\frac{\sqrt{2r_g\overline z}}{R_\odot}\Big)^{5.3}
\Big]
=\nonumber\\
&=&4.56 \times 10^{-10}\,\Big[1+0.79 \Big(\frac{650\,{\rm AU}}{\overline z}\Big)^{5.1}
+0.05\Big(\frac{\overline z}{650\,{\rm AU}}\Big)^{2.65}
\Big]\Big(\frac{d}{1\,{\rm m}}\Big)\Big(\frac{650\,{\rm AU}}{\overline z}\Big)^{4.4}\Big(\frac{\lambda}{1\,\mu{\rm m}}\Big)
~~ {\rm W}.
\label{eq:pow-fp=+*4+}
\end{eqnarray}
This corresponds to the corona photon flux, which is estimated to be
{}
\begin{align}
Q_{\tt fp.cor}
=2.29 \times 10^{9}\,\Big[1+0.79 \Big(\frac{650\,{\rm AU}}{\overline z}\Big)^{5.1}
+0.05\Big(\frac{\overline z}{650\,{\rm AU}}\Big)^{2.65}
\Big]\Big(\frac{d}{1\,{\rm m}}\Big)\Big(\frac{650\,{\rm AU}}{\overline z}\Big)^{4.4}\Big(\frac{\lambda}{1\,\mu{\rm m}}\Big)^2
~~ {\rm photons/s}.
\label{eq:pow-fp=+*4+2}
\end{align}
Assuming that the contribution of the solar corona is removable (e.g., by observing the corona from a slightly different vantage point) and only stochastic (shot) noise remains, we estimate the resulting ${\rm SNR}_{\tt C}$ of detecting the signal (convolved with the SGL, hence the subscript `{\tt C}') in the solar corona dominated regime as
{}
\begin{equation}
{\rm SNR}_{\tt C}=\frac{Q_{\tt fp.blur}}{\sqrt{Q_{\tt fp.cor}}}=\frac{1.68\,\epsilon(\rho_0) }{\sqrt{1+0.79 \Big(\dfrac{650\,{\rm AU}}{\overline z}\Big)^{5.1}+0.05\Big(\dfrac{\overline z}{650\,{\rm AU}}\Big)^{2.65}
}}\Big(\frac{d}{1\,{\rm m}}\Big)^\frac{3}{2}\Big(\frac{30\,{\rm pc}}{z_0}\Big)\Big(\frac{\overline z}{650\,{\rm AU}}\Big)^{1.7}\,\sqrt{\frac{t}{1\,{\rm s}}}.
\label{eq:snr-cor}
\end{equation}
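The numerical coefficient in (\ref{eq:snr-cor}) follows directly from the photon fluxes (\ref{eq:blur-s}) and (\ref{eq:pow-fp=+*4+2}); a one-line check (quoted coefficients, our variable names):

```python
import math

# Coefficient of Eq. (snr-cor) from the quoted photon fluxes at zbar = 650 AU,
# z0 = 30 pc, d = 1 m, eps(rho0) = 1 (variable names are ours).
Q_blur = 8.01e4                          # photons/s, Eq. (blur-s)
Q_cor = 2.29e9                           # photons/s, corona prefactor, Eq. (pow-fp=+*4+2)
snr_per_root_s = Q_blur / math.sqrt(Q_cor)   # ~1.68 per sqrt(second) of integration
```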
\begin{figure}
\includegraphics[width=0.40\linewidth]{foc-sol-cor-ER-dist}
\caption{\label{fig:sol-cor-bright} Angular sizes of the Sun and the diffraction-limited view of the Einstein ring as functions of heliocentric distance (for $\lambda=0.6~\mu$m.) As the heliocentric distance increases, the Einstein ring (together with the entire imaged region) further separates from the Sun. A coronagraph may have to be able to compensate for decreasing angular sizes.}
\end{figure}
It is worth considering the behavior of the ${\rm SNR}_{\tt C}$ of (\ref{eq:snr-cor}) with respect to the several parameters involved:
\begin{inparaenum}[1)]
\item It does not depend on the wavelength. This is because for this estimate we assumed the presence of an annular coronagraph. The width of the annulus of such a coronagraph is $\propto \lambda/d$, thus canceling out the wavelength dependence. (A disk coronagraph would increase the noise contribution from the corona by $\sim 10$\% with a weak wavelength dependence.)
\item Within heliocentric ranges of interest, the ${\rm SNR}_{\tt C}$ improves almost linearly with the heliocentric distance. Although the angular size of the Einstein ring decreases as $\propto 1/\sqrt{\overline z}$, the plasma contribution diminishes much faster, as $\propto 1/{\overline z}^{4.4}$. Combining these two factors results in the overall $\propto{\overline z}^{1.7}$ behavior of the ${\rm SNR}_{\tt C}$.
\item The ${\rm SNR}_{\tt C}$ has a rather strong dependence on the telescope aperture, behaving as $\propto d^\frac{3}{2}$. This is, again, due to our use of the annular coronagraph in deriving the estimate of the solar corona signal.
\end{inparaenum}
\section{Image reconstruction with the SGL}
\label{sec:convolve}
In the preceding sections we developed analytical tools that are needed to estimate the signal levels from various distant targets. The next step is to understand how these signals can be measured and used to reconstruct the images of those targets. We also need to understand the actual circumstances of signal acquisition, the inevitable noise that accompanies these observations, and the implied constraints such as minimum integration times that are required to acquire signals of sufficient quality.
To address these questions, we need to study the role of the SGL PSF, ${ \mu}_{\tt SGL}$, from (\ref{eq:S_z*6z-mu2}) in image formation and how knowledge of the PSF makes image reconstruction possible.
\subsection{Image convolution by the SGL}
\label{sec:conv-integral}
We consider a photometric imaging process, in which a telescope is used to measure the power (yielding the signal amplitude) of the signal that enters a telescope with aperture diameter $d$. To compute the total power of the signal that is amplified by the SGL and is received by the telescope, we convolve the surface brightness of the source, $B_{\tt s}({\vec x}')$, by the amplification factor of the SGL, ${ \mu}_{\tt SGL}$, given by (\ref{eq:S_z*6z-mu2}) and integrate over the aperture by way of the following quadruple integral (as was first given by Eq.~(8) in \cite{Turyshev-Toth:2019-blur}):
{}
\begin{eqnarray}
P({\vec x}_0)&=&
\frac{\mu_0}{z_0^2}\iint\displaylimits_{-\infty}^{+\infty}d^2{\vec x}'\, B_{\tt s}({\vec x}')
\hskip -5pt\iint\displaylimits_{|{\vec x}|^2\leq (\frac{1}{2}d)^2}\hskip -5pt d^2{\vec x}
\,J^2_0\big(\alpha|{\vec x}_0+{\vec x}+\beta{\vec x}'|\big),
\label{eq:power_rec2*}
\end{eqnarray}
where $\alpha$ and $\beta$ are given by (\ref{eq:alpha-mu}) and ${\vec x}_0$, as before, is the telescope's position in the image plane.
Equation (\ref{eq:power_rec2*}) describes the convolution of the extended source with the SGL and may be used to estimate the power of the anticipated photometric signals (see Sec.~\ref{sec:power} and \cite{Turyshev-Toth:2019-image}). It describes a typical power transmission from an extended source through the medium with the gain of ${ \mu}_{\tt SGL}$, and with the $1/z_0^2$ distance dependence.
We observe that integration over $d^2\vec{x}$ in (\ref{eq:power_rec2*}) amounts to averaging of the SGL PSF (which is given after (\ref{eq:S_=0}) as ${ \mu}_{\tt SGL}/\mu_0=J^2_0\big(\alpha|{\vec x}_0+{\vec x}+\beta{\vec x}'|\big)$) over the telescope aperture, namely:
{}
\begin{eqnarray}
{\overline {\rm PSF}}(|{\vec x}_0+\beta{\vec x}'|)&=&\frac{1}{\pi({\textstyle\frac{1}{2}}d)^2 }
\hskip -5pt\iint\displaylimits_{|{\vec x}|^2\leq (\frac{1}{2}d)^2}\hskip -5pt d^2{\vec x}
\,J^2_0\big(\alpha|{\vec x}+{\vec x}_0+\beta{\vec x}'|\big).
\label{eq:power_rec2}
\end{eqnarray}
As the telescope aperture is expected to be significantly larger than the spatial wavelength of the PSF (i.e., $\alpha d\gg 1$, see relevant discussion in \cite{Turyshev-Toth:2019-image}), the integral in (\ref{eq:power_rec2*}) can be easily evaluated. For this, it is instructive to express the coordinates on the source plane, ${\vec x}'$, via those measured on the image plane, ${\vec x}''$, which can be done with the help of (\ref{eq:mapping}) and (\ref{eq:alpha-mu}), resulting in ${\vec x}'=-{\vec x}''/\beta$. Next, following \cite{Turyshev-Toth:2019-blur}, we split the argument of the Bessel function into two intervals $|{\vec x}_0-{\vec x}''|\ll |{\vec x}|< \frac{1}{2}d$ and $|{\vec x}_0-{\vec x}''|\geq \frac{1}{2}d$, which is equivalent to separating the integration over the directly-imaged region and the rest of the exoplanet, as done in the preceding sections. Using the approach demonstrated in Appendix~\ref{sec:PSF-average}, we present the averaged SGL PSF in the form of (\ref{eq:psf-mu}):
{}
\begin{eqnarray}
{\overline {\rm PSF}}(|{\vec x}_0+\beta{\vec x}'|)\equiv {\overline {\rm PSF}}\big(|{\vec x}_0-{\vec x}''|\big)&=&\frac{1}{\pi\alpha} \frac{4}{d}\, \mu(|{\vec x}_0-{\vec x}''|),
\label{eq:power_x02}
\end{eqnarray}
with the factor $\mu(|{\vec x}_0-{\vec x}''|)$ having the following form:
{}
\begin{eqnarray}
\mu(|{\vec x}_0-{\vec x}''|)&=&
\bigg\{ \begin{aligned}
\epsilon(|{\vec x}_0-{\vec x}''|), \hskip 10pt 0\leq |{\vec x}_0-{\vec x}''|\leq {\textstyle\frac{1}{2}}d& \\
\beta(|{\vec x}_0-{\vec x}''|), \hskip 30pt |{\vec x}_0-{\vec x}''| > {\textstyle\frac{1}{2}}d& \\
\end{aligned}\, ,
\label{eq:power_rec9*a}
\end{eqnarray}
and where $\epsilon(|{\vec x}_0-{\vec x}''|)$ and $\beta(|{\vec x}_0-{\vec x}''|)$ are from (\ref{eq:av3b}) and (\ref{eq:av3bb}), correspondingly:
{}
\begin{eqnarray}
\epsilon(|{\vec x}_0-{\vec x}''|)&=&\frac{2}{\pi}{\tt E}\Big[\Big(\frac{2|{\vec x}_0-{\vec x}''|}{d}\Big)^2\Big] \qquad{\rm and}\qquad
\beta(|{\vec x}_0-{\vec x}''|)=\frac{2}{\pi}{\tt E}\Big[\arcsin \Big(\frac{d}{2|{\vec x}_0-{\vec x}''|}\Big),\Big(\frac{2|{\vec x}_0-{\vec x}''|}{d}\Big)^2\Big],~~~~
\label{eq:av9b}
\end{eqnarray}
with ${\tt E}[x]$ and ${\tt E}[a,x]$ being the elliptic and incomplete elliptic integrals \cite{Abramovitz-Stegun:1965}, respectively.
With this, (\ref{eq:power_rec2*}) transforms equivalently:
\begin{eqnarray}
P({\vec x}_0)&=&
\frac{\mu_0}{z_0^2\beta^2} \pi({\textstyle\frac{1}{2}}d)^2 \frac{1}{\pi\alpha } \frac{4}{d} \iint\displaylimits_{-\infty}^{+\infty}d^2{\vec x}''\, B_{\tt s}\big(\hskip -2pt-{\vec x}''/\beta\big)\mu(|{\vec x}_0-{\vec x}''|).
\label{eq:power_rec3*}
\end{eqnarray}
Assuming uniform irradiance at the top of the exoplanet's atmosphere, $B_{\tt s}$, we may present the surface brightness of the source as $B_{\tt s}({\vec x}')=B_{\tt s}\alpha_{\tt s}({\vec x}')$, where $\alpha_{\tt s}({\vec x}')$ is the exoplanetary albedo. With this, (\ref{eq:power_rec3*}) takes the form
{}
\begin{eqnarray}
P({\vec x}_0)&=&P_{\tt dir}
\iint\displaylimits_{-\infty}^{+\infty} d^2{\vec x}''\, {\hat \alpha}_{\tt s}\big(\hskip -2pt-{\vec x}''/\beta\big)\mu(|{\vec x}_0-{\vec x}''|),
\label{eq:power_rec7a*}
\end{eqnarray}
where ${\hat \alpha}_{\tt s}\big(\hskip -2pt-{\vec x}''/\beta\big)={ \alpha}_{\tt s}\big(\hskip -2pt-{\vec x}''/\beta\big)/(\pi({\textstyle\frac{1}{2}}d)^2)$ is the albedo surface density within the source area selected by the telescope and $P_{\tt dir}$ is the power that would be received by the telescope at a particular position in the image plane from the source area with the diameter $D=b/\beta$ (as in (\ref{eq:pow**})):
{}
\begin{eqnarray}
P_{\tt dir}&=&
\frac{\mu_0}{z_0^2\beta^2} \pi({\textstyle\frac{1}{2}}d)^2 \frac{4}{d}\frac{1}{\pi\alpha } \pi({\textstyle\frac{1}{2}}d)^2 {B}_{\tt s}=B_{\tt s} \frac{\pi^2 d^3}{4{\overline z}}\sqrt{\frac{2r_g}{\overline z}}.
\label{eq:power_rec5a*}
\end{eqnarray}
Expression (\ref{eq:power_rec7a*}), together with (\ref{eq:power_rec9*a}), exhibits essentially the same structure as (\ref{eq:pow*tot}), where the total power received by the telescope is a sum of two components: the power received from the directly-imaged region and that from the rest of the planet. At any particular telescope position in the image plane, ${{\vec x}_0}_i$, the signal from the directly imaged region $P_{\tt dir}{\alpha_{\tt s}}_i$ is overwhelmed by the blur from the rest of the exoplanet and it is therefore not directly observable. However, as we shall discuss in the next subsection, it is recoverable after deconvolution.
For imaging purposes, we are interested in reconstructing the surface albedo, $\alpha({\vec x}')$, from a series of measurements of $P({\vec x}_0)$. This requires inverting the {\em convolution operator}, represented by the double integral in (\ref{eq:power_rec7a*}).
Computationally, this is best accomplished by way of the Fourier quotient method, taking advantage of the convolution theorem \cite{FILTERS07}, according to which the inverse can be carried out using simple division after a two-dimensional Fourier transform into the spatial frequency domain. This approach also makes it easy to make use of deblurring and spatial filtering algorithms that exist and are applicable for many deconvolution or image deblurring problems \cite{Hansen-etal:2006}.
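The Fourier quotient method can be sketched in a few lines. The following is an illustrative toy only: a synthetic scene, a stand-in PSF with a long $1/r$ tail (not the actual SGL PSF), circular convolution, and no noise or regularization:

```python
import numpy as np

# Toy Fourier-quotient deconvolution: blur a synthetic scene by circular
# convolution with a known PSF, then recover it by simple division in the
# spatial-frequency domain. Noise-free, so the recovery is essentially exact.
n = 64
rng = np.random.default_rng(1)
scene = rng.random((n, n))
y, x = np.indices((n, n))
r = np.hypot(x - n // 2, y - n // 2)
psf = 1.0 / (4.0 * np.maximum(r, 0.5))    # sharply peaked, slowly decaying
psf /= psf.sum()                          # unit-sum kernel
otf = np.fft.fft2(np.fft.ifftshift(psf))  # optical transfer function
blurred = np.fft.ifft2(np.fft.fft2(scene) * otf).real
recovered = np.fft.ifft2(np.fft.fft2(blurred) / otf).real
```

With measurement noise present, the plain quotient would amplify high-frequency noise, which is why the regularizing filters mentioned above become necessary.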
Our present goal is more modest: We wish to estimate the ``deconvolution penalty'', the amount by which the deconvolution process amplifies noise.
\subsection{Deconvolution in matrix form and noise}
\label{sec:conv-matr}
To understand the effect of deconvolution on signal and noise, we first discretize the integral in (\ref{eq:power_rec7a*}) by replacing the infinite integration limits with a finite integration area that fully covers the source, $r^{\tt im}_\oplus \geq r_\oplus$ and then dividing this area into $N$ equal non-overlapping area elements of size $\sim d^2$, thus
$N=\pi r^{\tt im\, 2}_\oplus/(\pi({\textstyle\frac{1}{2}}d)^2)=(2r^{\tt im}_\oplus/d)^2$. We characterize the positions of each of these source elements projected in the image plane as $\vec{x}''_j$ ($1\le j\le N$). We define the mean surface albedo ${\overline \alpha}_{{\tt s}j}$ for the $j$-th surface element defined by the ${|{\vec x}''_j-{\vec x}''|< \frac{1}{2}d}$ distance from position ${\vec x}''_j$ as
{}
\begin{eqnarray}
{\overline \alpha}_{{\tt s}j}=\iint\displaylimits_{{|{\vec x}''_j-{\vec x}''|< \frac{1}{2}d}}d^2{\vec x}''\, {\hat \alpha}_{\tt s}\big(\hskip -2pt-{\vec x}''/\beta\big)\equiv \frac{1}{\pi({\textstyle\frac{1}{2}}d)^2 }\iint\displaylimits_{{|{\vec x}''_j-{\vec x}''|< \frac{1}{2}d}}d^2{\vec x}''\, { \alpha}_{\tt s}\big(\hskip -2pt-{\vec x}''/\beta\big).
\label{eq:alb}
\end{eqnarray}
Next, we choose $N$ measurement locations ${\vec{x}_0}_i$ in the image plane that satisfy ${\vec{x}_0}_i-\vec{x}_i''=0.$
With these notations, a discretized version of Eq.~(\ref{eq:power_rec7a*}) may be given as
{}
\begin{eqnarray}
P({{\vec x}_0}_i)&=&
P_{\tt dir}
\sum_{j=1}^N \Big(\delta_{ij}+\beta(|{{\vec x}_0}_i-{\vec x}_j''|)(1-\delta_{ij})\Big){\overline \alpha}_{{\tt s}j} =
P_{\tt dir}
\sum_{j=1}^N C_{ij}{\overline \alpha}_{{\tt s}j},~~~~
\label{eq:power0_r}
\end{eqnarray}
where we introduced the {\em convolution matrix}
\begin{align}
C_{ij}=\delta_{ij}+\beta(|{{\vec x}_0}_i-{\vec x}_j''|)(1-\delta_{ij}),
\label{eq:Cij+}
\end{align}
which, with the help of (\ref{eq:psf-mu*2}), may be given in the following approximate form:
\begin{align}
C_{ij}=\delta_{ij}+\frac{d}{4|{{\vec x}_0}_i-{\vec x}_j''|}(1-\delta_{ij})\qquad {\rm or} \qquad
C_{ij}=
\delta_{ij}\Big(1-\frac{d}{4|{{\vec x}_0}_i-{\vec x}_j''|}\Big)+\frac{d}{4|{{\vec x}_0}_i-{\vec x}_j''|}.
\label{eq:Cij}
\end{align}
The quantity $|{{\vec x}_0}_i-{\vec x}_j''|$ here is the distance between the $i$-th telescope location ${{\vec x}_0}_i$ and the projected directly imaged location ${\vec x}_j''$ (as introduced in Sec.~\ref{sec:conv-integral}) of the $j$-th source surface element, both located in the image plane.
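For illustration, the convolution matrix of (\ref{eq:Cij}) can be built directly for a small grid; the grid size and spacing below are arbitrary choices for the sketch:

```python
import numpy as np

# Illustrative construction of C_ij for an n x n grid of image-plane
# positions spaced by the aperture d. Diagonal elements are exactly 1; the
# off-diagonal elements are d/(4 r_ij), largest (1/4) for adjacent elements.
n, d = 8, 1.0
xs, ys = np.meshgrid(np.arange(n) * d, np.arange(n) * d)
pts = np.column_stack([xs.ravel(), ys.ravel()])
r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
# np.maximum(r, d) only guards the diagonal (r = 0); off-diagonal r >= d.
C = np.where(r > 0.0, d / (4.0 * np.maximum(r, d)), 1.0)
```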
As the relationship between the $P({{\vec x}_0}_i)$ and $\alpha_{{\tt s}j}$ is linear, recovering the latter from the former, that is, deconvolution, is accomplished easily in principle using matrix inversion:
\begin{align}
\alpha_{{\tt s}i}=\frac{1}{P_{\tt dir}}\sum_{j=1}^N\, C^{-1}_{ij}P({{\vec x}_0}_j).
\label{eq:CijDeconv}
\end{align}
In practice, this is not a viable approach given the extreme size of the convolution matrix (e.g., $10^{12}$ elements for a megapixel image) and the resulting computational burden and numerical instabilities. However, this representation of the deconvolution process permits us to study its properties and, in particular, its impact on noise.
We model measurement noise as uniform, uncorrelated Gaussian noise of magnitude $\sigma$. The contribution of noise is introduced in (\ref{eq:CijDeconv}) using root-mean-square addition, where the estimate for $\hat \alpha_{{\tt s}i}$ is obtained as
{}
\begin{align}
\hat \alpha_{{\tt s}i}
=\frac{1}{P_{\tt dir}}\bigg(\sum_{j=1}^N\, C^{-1}_{ij}P({{\vec x}_0}_j)+\Big({\sum_{j=1}^N\,(C^{-1}_{ij})^2}\Big)^\frac{1}{2}~\sigma\bigg)
=\alpha_{{\tt s}i}+\frac{1}{P_{\tt dir}}\Big({\sum_{j=1}^N\,(C^{-1}_{ij})^2}\Big)^\frac{1}{2}~\sigma,
\label{eq:CijDeconv1}
\end{align}
where $\hat \alpha_{{\tt s}i}$ now represents the estimate of the recovered signal in the presence of noise. We need to understand how this deconvolution process treats the signal $P({{\vec x}_0}_i)$ and the noise $\sigma$ differently. Specifically, given the observed SNR (again, as in (\ref{eq:snr-cor}), denoted with the subscript {\tt C} for convolved),
{}
\begin{align}
{\rm SNR}_{\tt C}=\frac{\langle P({{\vec x}_0}_i)\rangle}{\sigma},
\end{align}
we wish to estimate the {\rm SNR} of the recovered signal (denoted using the subscript {\tt R}) after deconvolution:
\begin{align}
{\rm SNR}_{\tt R}=\frac{\langle \alpha_{{\tt s}i}\rangle}{\displaystyle\frac{1}{P_{\tt dir}}\Big({\sum_{j=1}^N\,(C^{-1}_{ij})^2}\Big)^\frac{1}{2}~\sigma}.
\end{align}
To do so, we need to be able to estimate the behavior of the deconvolution matrix $C^{-1}_{ij}$.
\subsection{Approximating the deconvolution matrix to compute the SNR}
To approximate $C_{ij}$ (\ref{eq:Cij}), we first observe that its diagonal elements are identically 1. Its off-diagonal elements are all less than 1. The largest off-diagonal element is determined by the distance $d$ between adjacent area elements yielding the value $1/4$. The rest of the off-diagonal elements of $C_{ij}$ are smaller than this value. This leads us to approximate $C_{ij}$ by the form
\begin{align}
C_{ij}\rightarrow {\widetilde C}_{ij}= \mu\delta_{ij}+\nu U_{ij}, \qquad {\rm with} \qquad \mu=1-\nu,
\label{eq:fffs2s}
\end{align}
where $\nu\ll1$ is a constant, $\delta_{ij}$ is the unit matrix and $U_{ij}$ is the ``everywhere one'' matrix, every element of which is equal to 1. (Note that (\ref{eq:fffs2s}) resembles the structure of (\ref{eq:Cij})). We choose $\nu$ to be
\begin{align}
\nu=\langle C_{ij}\rangle_{i\ne j},
\end{align}
that is to say, $\nu$ is the average value of the off-diagonal elements of $C_{ij}$. We can easily compute $\nu$ for large $N$ by replacing the summation with an integral over the observable image area $A=N\pi({\textstyle\frac{1}{2}}d)^2$ (or $A=Nd^2$ if a square imaging area is used) corresponding to the source coordinates ${\vec x}''_i$ and the corresponding area $A$ for the image coordinates ${{\vec x}_0}_i$. Using the relevant components of the PSF from the matrix form (\ref{eq:Cij}) and that from (\ref{eq:psf-mu*2}), we compute
{}
\begin{eqnarray}
\nu=\frac{1}{N(N-1)}\Big(\sum_{i=1}^N\sum_{j=1}^NC_{ij}-\sum_{i=1}^NC_{ii}\Big)=
\frac{1}{A^2}\iint_{A} d^2{\vec{x}_0}\iint_{A} d^2\vec{x}''\frac{d}{4|{\vec x}_0-{\vec x}''|}\sim \frac{1}{a\sqrt{N}},
\end{eqnarray}
where the value of $a$ depends on the shape of the integration area $A$. For a circular integration area, $a= 1.18$, while for a square integration area, it is $a= 1.35$.
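A small numerical sketch supports this estimate: computing $\nu$ as the mean off-diagonal element for a square grid of elements reproduces the quoted $a\approx 1.35$ to within a few percent (the residual deviation reflects the finite grid):

```python
import numpy as np

# Estimate nu = <C_ij>_{i != j} for an n x n square grid of elements with
# spacing d = 1, using the off-diagonal form d/(4 r_ij), and infer the
# geometric constant a from nu ~ 1/(a sqrt(N)).
n = 40
N = n * n
xs, ys = np.meshgrid(np.arange(n, dtype=float), np.arange(n, dtype=float))
pts = np.column_stack([xs.ravel(), ys.ravel()])
r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
nu = (1.0 / (4.0 * r[~np.eye(N, dtype=bool)])).mean()
a = 1.0 / (nu * np.sqrt(N))        # should be close to 1.35 (square area)
```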
The inverse of $\widetilde {C}_{ij}$ from (\ref{eq:fffs2s}) is easily computed:
\begin{align}
\widetilde{C}_{ij}^{-1}=\frac{1}{\mu}\delta_{ij}-\frac{\nu}{\mu(\mu+\nu N)}U_{ij}.
\end{align}
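This closed-form inverse can be verified directly for arbitrary example values of $N$ and $\nu$ (with $\mu=1-\nu$, as in the approximation):

```python
import numpy as np

# Direct check that (1/mu) I - nu/(mu (mu + nu N)) U inverts mu I + nu U,
# where U is the all-ones matrix. N and nu below are arbitrary test values.
N, nu = 50, 0.05
mu = 1.0 - nu
C_tilde = mu * np.eye(N) + nu * np.ones((N, N))
C_tilde_inv = (1.0 / mu) * np.eye(N) \
    - (nu / (mu * (mu + nu * N))) * np.ones((N, N))
```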
This form allows us to estimate the effect of deconvolution on signal and noise. For this, we assume a uniform signal $P({\vec{x}_0}_i)={\langle P({{\vec x}_0}_i)\rangle}\equiv \langle P\rangle$ in (\ref{eq:CijDeconv1}):
{}
\begin{eqnarray}
\hat \alpha_{{\tt s}i}
=\frac{1}{P_{\tt dir}}\bigg(\sum_{j=1}^N\, C^{-1}_{ij}\langle P\rangle+\Big({\sum_{j=1}^N\,(C^{-1}_{ij})^2}\Big)^\frac{1}{2}~\sigma\bigg),
\label{eq:CijDeconv1*}
\end{eqnarray}
and thus the post-deconvolution ${\rm SNR}_{\tt R}$ is calculated as
\begin{eqnarray}
{\rm SNR}_{\tt R}=\frac{\displaystyle\frac{1}{N}\sum_{i=1}^N\sum_{j=1}^N\, C^{-1}_{ij}}{\Big({\displaystyle\frac{1}{N}\sum_{i=1}^N\sum_{j=1}^N\,(C^{-1}_{ij})^2}\Big)^\frac{1}{2}~}\frac{\langle P\rangle}{\sigma}.
\end{eqnarray}
Replacing $C_{ij}^{-1}$ with $\widetilde{C}_{ij}^{-1}$, we estimate the deconvolution penalty in the limit of large $N$:
\begin{eqnarray}
\frac{{\rm SNR}_{\tt R}}{{\rm SNR}_{\tt C}}=
\frac{\displaystyle\frac{1}{N}\sum_{i=1}^N\sum_{j=1}^N\, \widetilde{C}^{-1}_{ij}}{\Big({\displaystyle\frac{1}{N}\sum_{i=1}^N\sum_{j=1}^N\,(\widetilde{C}^{-1}_{ij})^2}\Big)^\frac{1}{2}~}
= \frac{\mu}{\nu N}\sim\frac{a}{\sqrt N}.
\label{eq:penalty}
\end{eqnarray}
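The penalty estimate can be checked numerically against the rank-one model itself; with the example values below, the directly evaluated ratio agrees with $a/\sqrt{N}$ to within a few percent, with the agreement improving as $N$ grows:

```python
import numpy as np

# Evaluate SNR_R / SNR_C from the rank-one model C~ = mu I + nu U directly,
# using the sums that define the penalty, and compare with a / sqrt(N).
# N and a (square integration area) are example values.
a, N = 1.35, 2500
nu = 1.0 / (a * np.sqrt(N))
mu = 1.0 - nu
C_inv = np.linalg.inv(mu * np.eye(N) + nu * np.ones((N, N)))
penalty = (C_inv.sum() / N) / np.sqrt((C_inv**2).sum() / N)
```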
This deconvolution penalty arises unavoidably, as a consequence of how the deconvolution process affects signal versus noise. However, the estimate (\ref{eq:penalty}) with either $a=1.18$ or $a=1.35$ is rather conservative. Our numerical simulations confirm that even a simple filter in the frequency domain, introduced as part of the deconvolution algorithm, especially when applied to realistic planetary images, can improve the result such that $a={\cal O}(10)$ or better. Further improvements are expected with the use of advanced spatial filtering and deblurring techniques. These are currently being investigated and results, when available, will be reported. For now, we treat $a=10$ as a conservative estimate and use it in the next section to evaluate realistic SNRs and corresponding integration times.
\subsection{Towards realistic imaging of exoplanets}
\label{sec:SNR-estim}
To assess the value of the estimates obtained in the preceding sections, we need to consider them in the context of realistic imaging scenarios.
We take (\ref{eq:blur-s}) to represent the estimate of the total convolved signal received from a uniformly illuminated source and measured at a particular location in the image plane, namely $\left<Q_i\right>=Q_{\tt fp.blur}(\rho_0).$ Accounting for the fact that photons obey Poisson statistics, we estimate the variance of the signal as being $\sigma(Q_i)=\sqrt{Q_{\tt fp.blur}(\rho_0)}$, resulting in the {\rm SNR} of the convolved image as ${\rm SNR}^0_{\tt C}=\left<Q_i\right>/\sigma(Q_i)=\sqrt{Q_{\tt fp.blur}(\rho_0)}.$ Using this result in (\ref{eq:penalty}) with $a=10$, we obtain the SNR of the deconvolved signal:
{}
\begin{eqnarray}
{\rm SNR}_{\tt R}\geq
\frac{10}
{\sqrt{N}}\sqrt{Q_{\tt fp.blur}(\rho_0)}\sqrt{\frac{t}{1\,{\rm s}}}. \label{eq:snr*o*}
\end{eqnarray}
Given the desired ${\rm SNR}_{\tt R}$, equation (\ref{eq:snr*o*}) allows us to estimate the per-pixel integration time, $t_{\tt pix}$:
{}
\begin{eqnarray}
t_{\tt pix}\leq 10^{-2}
N \frac{{\rm SNR}^2_{\tt R}}{Q_{\tt fp.blur}}&=&1.25\times 10^{-7} \,
N \, {\rm SNR}^2_{\tt R}\, \Big(\frac{1\,{\rm m}}{d}\Big)^2\Big(\frac{\overline z}{650\,{\rm AU}}\Big)^\frac{1}{2}\Big(\frac{30\,{\rm pc}}{z_0}\Big)\Big(\frac{1\,\mu{\rm m}}{\lambda}\Big)~{\rm s}.
\label{eq:tin0_pix}
\end{eqnarray}
Therefore, from (\ref{eq:tin0_pix}) we determine that in the signal-dominated regime it takes $\sim11\,{\rm s}$ of integration time to reach ${\rm SNR}_{\tt R} = 7$. With $t_{\tt tot}= t_{\tt pix} N$ being the total integration time needed to collect data for the entire $N$-pixel image, using (\ref{eq:tin0_pix}) we see that to recover a high-resolution image with $N=1024\times 1024$ pixels, we need $\sim 4.5$ months of integration time. A 2-m telescope would complete this task in less than 50 days.
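These numbers follow from (\ref{eq:tin0_pix}) once a wavelength is fixed; the sketch below assumes $\lambda=0.6~\mu$m (the value used for Fig.~\ref{fig:sol-cor-bright}), with all other factors at their nominal values:

```python
# Evaluate the per-pixel time of Eq. (tin0_pix) for the quoted case:
# SNR_R = 7, d = 1 m, zbar = 650 AU, z0 = 30 pc, lambda = 0.6 um (assumed).
snr_r, lam_um, d_m, zbar_au, z0_pc = 7.0, 0.6, 1.0, 650.0, 30.0
N = 1024 * 1024                           # pixels in the reconstructed image
t_pix = (1.25e-7 * N * snr_r**2 * (1.0 / d_m)**2
         * (zbar_au / 650.0)**0.5 * (30.0 / z0_pc) * (1.0 / lam_um))  # s
t_tot_months = t_pix * N / (86400.0 * 30.0)
t_tot_days_2m = t_tot_months * 30.0 / 4.0  # 1/d^2 scaling for a 2-m aperture
```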
The short integration times resulting from (\ref{eq:snr*o*}) are possible for bright exoplanets or other luminous objects, where the solar corona contribution is not a significant part of the overall noise budget. However, as we discussed in Sec.~\ref{sec:sol-cor}, the brightness of the solar corona affects the performance of the SGL in a significant way. Thus, in the presence of the solar corona, an estimate similar to (\ref{eq:snr*o*}) may be obtained directly from the SNR of the signal in the presence of the solar corona, ${\rm SNR}_{\tt C}$, given by (\ref{eq:snr-cor}). Using this result in (\ref{eq:penalty}), we obtain an estimate for the SNR of the deconvolved image in the presence of the solar corona as
{}
\begin{eqnarray}
{\rm SNR}_{\tt R}\geq
\frac{10}{\sqrt{N}}\frac{Q_{\tt fp.blur}}{\sqrt{Q_{\tt fp.cor}}}\sqrt{\frac{t}{1\,{\rm s}}}. \label{eq:snr*rec_cor*}
\end{eqnarray}
This expression yields the following per-pixel integration time, $t_{\tt pix}$, in the presence of the solar corona noise:
{}
\begin{eqnarray}
t_{\tt pix}&\leq& 10^{-2} N
\frac{{Q_{\tt fp.cor}}{\rm SNR}^2_{\tt R}}{Q^2_{\tt fp.blur}}=\nonumber\\
&=& 3.54\times 10^{-3} \,N\, {\rm SNR}^2_{\tt R}\Big(1+0.79 \Big(\dfrac{650\,{\rm AU}}{\overline z}\Big)^{5.1}
+0.05\Big(\dfrac{\overline z}{650\,{\rm AU}}\Big)^{2.65}\Big)
\Big(\frac{1\,{\rm m}}{d}\Big)^3\Big(\frac{z_0}{30\,{\rm pc}}\Big)^2\Big(\frac{650\,{\rm AU}}{\overline z}\Big)^{3.4}~{\rm s}.~~~~~
\label{eq:tin_cor_pix}
\end{eqnarray}
Result (\ref{eq:tin_cor_pix}) suggests that for $d=1$~m it could take up to $\sim 3\times 10^3$ sec of integration time per pixel to reach ${\rm SNR}_{\tt R}=7$ for an image of $N=100\times 100=10^4$ pixels. For ${\overline z}=650\,{\rm AU}$, this translates into $t_{\tt tot}= t_{\tt pix}N\sim 1$ year of total integration time needed to recover the entire $100\times 100$ pixel image of an exoplanet at 30 pc. Using for this purpose a larger telescope, say $d=2\,{\rm m}$, the per-pixel integration time drops to $390$ sec, reducing the time required to recover an image with the same number of pixels to $\lesssim 1.5$ months. Use of a 5~m telescope implies a per-pixel integration time of $\sim 150$~s on a $250\times 250$ pixel image, for a total integration time of $\sim 110$ days. Collecting more, redundant data will allow us to account for the diurnal rotation of the exoplanet and its variable cloud cover. To compensate for the diurnal rotation, we may also benefit from a multitelescope architecture that can reduce the total integration time \cite{Turyshev-etal:2018}, while matching the temporal behavior of the target. However, if direct spectroscopy of an exoplanet atmosphere is the main mission objective, this can be achieved with a single spacecraft.
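These corona-limited numbers can be reproduced directly from (\ref{eq:tin_cor_pix}); the sketch below evaluates the quoted $d=1$~m case and the $1/d^3$ scaling to a 2-m aperture:

```python
# Evaluate Eq. (tin_cor_pix) for the quoted case: d = 1 m, zbar = 650 AU,
# z0 = 30 pc, SNR_R = 7, and an N = 100 x 100 pixel image.
N, snr_r, d_m, zbar_au, z0_pc = 100 * 100, 7.0, 1.0, 650.0, 30.0
corona = 1.0 + 0.79 * (650.0 / zbar_au)**5.1 + 0.05 * (zbar_au / 650.0)**2.65
t_pix = (3.54e-3 * N * snr_r**2 * corona
         * (1.0 / d_m)**3 * (z0_pc / 30.0)**2 * (650.0 / zbar_au)**3.4)  # s
t_tot_years = t_pix * N / (86400.0 * 365.25)
t_pix_2m = t_pix / 2.0**3                  # 1/d^3 scaling for d = 2 m
```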
We emphasize that direct imaging and spectroscopy of an exoplanet at such resolutions are impossible using any of the conventional astronomical instruments, either telescopes or interferometers; the SGL is the only means to obtain such results.
\subsection{Image reconstruction in the presence of noise}
Our estimate for the SNR deconvolution penalty (\ref{eq:penalty})
can be directly compared against simulated exoplanet image reconstruction at various levels of noise. Since the PSF of the SGL is known, convolution and deconvolution of a simulated image are relatively straightforward processes \cite{Toth-Turyshev:2020-deconv}.
\begin{figure}[t]
\includegraphics[width=0.3\linewidth]{earthBW128x128}~\includegraphics[width=0.3\linewidth]{earthCC128x128}~\includegraphics[width=0.3\linewidth]{earthDC128x128}
\vskip 1pt
\includegraphics[width=0.3\linewidth]{earth1024x1024}~\includegraphics[width=0.3\linewidth]{earthCC1024x1024}~\includegraphics[width=0.3\linewidth]{earthDD1024x1024}\\
\caption{\label{fig:earth}A simulation of the effects of the monopole solar gravitational lens on an Earth-like exoplanet image.
Top row, left: a monochrome image, sampled at 128$\times$128 pixels; center: blurred image; right: deconvolution at ${\rm SNR}\sim 4.5$. From \cite{Toth-Turyshev:2020-deconv}.
Bottom row, left: original RGB color image with a 1024$\times$1024 pixel resolution; center: image blurred by the SGL; right: the result of image deconvolution at an SNR of $\sim$5.2 per color channel, or combined SNR of $\sim$9.
}
\end{figure}
In Fig.~\ref{fig:earth}, we show the results of a simulated convolution of an Earth-like exoplanet image with the SGL PSF and subsequent deconvolution. The top row depicts the result of deconvolution of a monochrome image of an exo-Earth, using modest image resolution ($128\times 128$ image pixels), reconstructed with an ${\rm SNR}\sim 4.5$ after deconvolution. According to Eq.~(\ref{eq:tin_cor_pix}), an image of this quality may be achievable in $\sim 1.1$ years of cumulative integration time even for a source at a distance of 30~pc, using only a single $d=1$~m telescope, situated at 650~AU from the Sun.
Clearly, the SNR and the resulting image quality can be much improved by using a larger telescope, conducting an observational campaign at a greater distance from the Sun, and of course, using multiple instruments. A much more ambitious image reconstruction is depicted in the bottom row of Fig.~\ref{fig:earth}: a high-resolution (megapixel) RGB-color image of an exo-Earth, reconstructed at ${\rm SNR}\sim 5.2$ per color channel, for a combined ${\rm SNR}\sim 9$ for the color image. Even this image quality is within the realm of the feasible if we consider a target at $z_0=3$~pc, observed through the SGL using $d=2.5$~m telescopes at 1000~AU from the Sun. The cumulative integration time needed to obtain this image is less than 8 years with a single instrument.
These estimates demonstrate that utilizing the SGL to obtain a good quality resolved image of an exoplanet of interest within 30~pc from the Earth is firmly within the realm of the possible.
\section{Discussion and Conclusions}
\label{sec:disc}
We investigated the image formation process with the SGL. For that, we analyzed the EM field originating from an extended, resolved source and received in the focal plane of an imaging telescope, represented by a thin convex lens.
The complex amplitude of the EM signal in the telescope's focal plane can be modeled by splitting the signal into two parts: light from the directly imaged region (the spot on the distant source that geometrically corresponds to the imaging telescope's aperture) and the blur signal that is received by the telescope from the rest of the source. Assuming uniform surface brightness within the directly imaged spot, (\ref{eq:pow-dirD+}) describes the image of an Einstein ring in the imaging telescope's focal plane, as expected. The expression for blur (\ref{eq:pow-blur}) is given in integral form and cannot be evaluated analytically in the general case, when the surface brightness of the imaged source is nonuniform and an arbitrary function of the source plane coordinates. We have, however, endeavored to evaluate this integral in the special case when the source is a disk of uniform surface brightness. Being able to estimate the magnitude of the blur in this case in the form of expression (\ref{eq:P-blur*2}) provides useful limits when evaluating the magnitude of the signal and the anticipated SNR of measurements to be performed with the SGL.
Far away from the SGL's optical axis, in the region of weak interference, we recovered an expression that, as expected, corresponds to two spots of light of uneven brightness that are seen by the imaging telescope: one outside, one inside the nominal radius of the Einstein ring (known as the major and minor images, respectively; see \cite{Schneider-Ehlers-Falco:1992}). These correspond to the incident and scattered wavefronts, respectively, that are produced by the SGL. In the geometric optics region, the spot corresponding to the scattered wavefront (i.e., the minor image) vanishes, as this light is blocked by the opaque spherical Sun.
The results in this paper extend those obtained in \cite{Turyshev-Toth:2019-image} where a similar analysis was performed for the case of imaging of point sources. The new results extend our understanding of the image formation process to the case of extended, resolved sources positioned at large, but finite distances from the Sun. In addition, these results are also in good agreement with those reported in \cite{Turyshev-Toth:2019-blur} for the case of photometric imaging where the goal is to measure the total power received by a telescope as it is positioned at various locations in the SGL image plane (i.e., the ``light bucket'' approach). Here we extended those results all the way to the focal plane of an optical telescope.
An azimuthally resolved picture of the Einstein ring due to an extended source opens new possibilities. If the surface brightness of the source is not uniform, this can produce variations in brightness along the Einstein ring (as described by (\ref{eq:pow-dirD}) and (\ref{eq:P-blur*2*})). This information on the azimuthally varying brightness of the Einstein ring may help improve the effectiveness of image deconvolution. Similarly, light contamination due to nearby off-image sources (e.g., the parent star of an exoplanet being imaged) can contribute to the Einstein ring at specific spots (a case that is captured by (\ref{eq:P-blur*off4*})). In these cases, it makes sense to collect light not from the entirety of the Einstein ring but only from specific sections that are less affected by contamination (Fig.~\ref{fig:images}). Similarly, light not coming from the immediate vicinity of the Einstein ring can be largely ignored by appropriately sampling the Einstein ring in the telescope focal plane.
We were also able to investigate the most significant source of noise, the solar corona. We have shown that it is possible to obtain a detailed image of a distant exoplanet with integration times consistent with a realistic SGL mission even in the presence of this noise. We developed a semianalytical model of the deconvolution process in order to understand the impact of deconvolution on noise. We showed that deconvolution amplifies measurement noise, thus reducing sensitivity. Nevertheless, even for very distant exoplanets located up to 30~pc from us, a telescope located in the strong interference region of the SGL can obtain multipixel images within realistic mission lifetimes. We also note that with the use of multiple spacecraft, integration times can be significantly reduced, allowing investigations even in the presence of temporal variability of the target due to diurnal rotation or changing surface features (e.g., varying cloud cover). At the same time, even a single spacecraft may be sufficient to obtain spectroscopic data that can be used to confirm the presence of active organic processes on that exoplanet.
The analytical tools developed here may be used to evaluate the anticipated signal levels from various targets of interest and sources of local light contamination, as well as compare these signals against background noise. These results are important for the design of future imaging missions to the focal region of the SGL, as they provide important insight into the various factors that may affect the performance of these projects.
The properties of the exoplanet (size, distance, albedo, parent star brightness, etc.), telescope parameters (aperture size, optical throughput, etc.), coronagraph parameters (annular vs. disk, contrast ratio, etc.), increasing heliocentric distance (as the spacecraft travels along the optical axis), use of multiple telescopes, spectral filtering and other factors may improve the SNR estimates. However, already at this level, the analysis that we presented demonstrates that utilizing the SGL for the purposes of resolved imaging of distant exoplanets is feasible, providing unique capabilities not available through other means. As such, the SGL should be further investigated to determine its optimal practical applications. This work is ongoing and results, when available, will be reported elsewhere.
\begin{acknowledgments}
This work in part was performed at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
VTT acknowledges the generous support of Plamen Vasilev and other Patreon patrons.
\end{acknowledgments}
\section{Introduction and calculation}
The lattice numerical calculation of the matrix element of any
operator $\cal{A}$ with mass dimension $d$ yields $a^d\cal{A}$, where
$a$ is the lattice spacing, up to scaling violating contributions of
order $O(a^{d+1})$. The calculation of $a$ as a function of the coupling
constant can be improved
if an effective scheme is used \cite{parisi1,parisi2}. A nice example of such
improvement has been shown to occur in pure Yang-Mills
theories \cite{npb} and is known to work rather well in other scalar theories
(see for example \cite{wolff,abc} for the 2D spin models).
In this paper we calculate the 3-loop internal energy in full QCD with
Wilson fermions on the lattice. The 3-loop beta function for this theory
has been recently calculated \cite{harisettore,proc97} and we can compute the
corresponding corrections to asymptotic scaling in both the standard
and effective schemes.
We will use the Wilson action $S$ with $N_f$ flavours and gauge group
$SU(N)$. This action is
\begin{eqnarray}
S &=& S_W + S_f , \nonumber \\
S_W &=& \beta \sum_\Box E_W(\Box ) , \\
S_f &=& \sum_x E_f(x) \nonumber
\label{Swilson}
\end{eqnarray}
with
\begin{eqnarray}
E_W(\Box ) &=& 1 - {1\over N} \hbox{Re Tr} \left( \Box \right) , \nonumber \\
E_f(x) &=& \sum_{\rm flavours}
a^4 \Bigg[ \left(m + {4 r \over a} \right)
\overline\psi_x \psi_x - \\
& & {1 \over {2 a}} \sum_\mu
\left( \overline\psi_{x+\hat\mu} \left( r +
\gamma_\mu \right) U^\dagger_\mu(x) \psi_x +
\overline\psi_x \left(r - \gamma_\mu \right)
U_\mu(x) \psi_{x+\hat\mu} \right) \Bigg] . \nonumber
\label{energy}
\end{eqnarray}
$\beta$ is the coupling on the lattice $\beta\equiv 2N/g^2$ and
$r$ the Wilson parameter.
$\Box$~stands for the plaquette and $U_\mu(x)\equiv \exp
\left(i a g A_\mu(x)\right)$ for the link at the
site $x$ pointing towards $x +\hat\mu$.
We will parametrize the results in terms of $N$, $N_f$ and
the pair $(\kappa,r)$, where $\kappa$ is the usual hopping parameter
\begin{equation}
\kappa = { 1 \over {8 r + 2 am}}.
\label{kappa}
\end{equation}
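For quick numerical cross-checks, Eq.~(\ref{kappa}) and its inverse can be scripted directly (a trivial sketch; the helper names are ours, not from the paper):

```python
def kappa(am, r=1.0):
    """Hopping parameter of Eq. (kappa): kappa = 1 / (8 r + 2 a m)."""
    return 1.0 / (8.0 * r + 2.0 * am)

def bare_mass(kap, r=1.0):
    """Inverse of Eq. (kappa): a m = (1 / kappa - 8 r) / 2."""
    return (1.0 / kap - 8.0 * r) / 2.0

# At zero bare mass and r = 1, kappa takes its free-field value 1/8.
print(kappa(0.0))        # 0.125
print(bare_mass(0.125))  # 0.0
```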
The average of $E_f$ can be straightforwardly computed to all orders
by rescaling the fermionic action $S_f$ by a factor $\epsilon$
under the fermionic path integral $Z^f$
\begin{equation}
Z^f(\epsilon) \equiv \int {\cal{D}}\overline\psi(x)
{\cal{D}}\psi(x) \exp\left(
- \epsilon S_f \right) = \epsilon^{4 V N N_f}
Z^f(\epsilon =1)
\label{Zf}
\end{equation}
and by using
\begin{equation}
\langle E_f \rangle = - {\partial \over {\partial \epsilon}}
\left({{\ln Z^f(\epsilon)} \over V}
\right)_{\epsilon =1} = - 4 N N_f.
\label{ef}
\end{equation}
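Explicitly, the $\epsilon$-scaling in Eq.~(\ref{Zf}) gives

```latex
\[
\ln Z^f(\epsilon) = 4 V N N_f \, \ln\epsilon + \ln Z^f(\epsilon = 1)
\;\;\Longrightarrow\;\;
- {\partial \over {\partial \epsilon}}
\left({{\ln Z^f(\epsilon)} \over V}\right)_{\epsilon =1}
= - {4 V N N_f \over V} = - 4 N N_f .
\]
```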
On the other hand, the average of $E_W$ in the presence of the
action Eq.(\ref{Swilson}) is calculated in perturbation theory
\begin{equation}
\langle E_W \rangle = c_1 \; g^2 + c_2 \; g^4 + c_3 \; g^6 + \cdots
\label{expansion}
\end{equation}
The $n$-loop coefficient can be written as $c_n = c^g_n + c^f_n$ where
$c^g_n$ is the pure Yang-Mills contribution, known
since ref.~\cite{gcrossi,c3feo}
up to 3-loops, and $c_n^f$ is the fermionic contribution.
To calculate $c_n^f$ we will first compute the free energy
$-(\ln Z)/V$ up to 3~loops, $Z$ being the full partition function
\begin{equation}
Z \equiv \int {\cal D}U_\mu(x) {\cal D}\overline\psi(x)
{\cal D}\psi(x) \exp(-S) .
\label{Z}
\end{equation}
The average of $E_W$ is then extracted as follows
\begin{equation}
\langle E_W \rangle = - {1 \over 6}\, {\partial \over {\partial \beta}}\,
\left( {\ln Z \over V} \right) .
\label{e}
\end{equation}
The Feynman diagrams necessary
to calculate the fermion contribution to the
free energy up to 3~loops are shown in Fig.~1.
We have used the Feynman gauge. The involved algebra of the lattice
perturbation theory was carried out by making use of the computer code
developed by us~\cite{npbus}.
After grouping the diagrams into several infrared-finite sets, we
calculated the resulting finite integrals on finite lattice-sizes $L$ and then
the results were extrapolated to infinite size.
The extrapolating function was of the type~\cite{luscher,caracc2}
\begin{equation}
a_0 + \sum_{i \leq 2j ,\; j=1,2,\cdots}
a_{ij} {{\left(\ln L\right)^i } \over L^{2j}}.
\label{extrapolation}
\end{equation}
We used a broad spectrum of such functional forms, analyzed the
quality of each extrapolation, and assigned a weight to each one
in order to produce a reliable estimate of the systematic error.
The quality of an extrapolation was judged by how accurately the
fitted functional form reproduced known results at finite
lattice sizes.
The different functional forms were
obtained by truncating the series in Eq.(\ref{extrapolation}) at
different values of $j$ and assuming vanishing coefficients $a_{ij}$
for some $i$ and $j$.
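The fitting procedure can be illustrated with a minimal least-squares sketch (synthetic data and hypothetical coefficient values; the series in Eq.~(\ref{extrapolation}) is truncated at $j=1$):

```python
import numpy as np

# Synthetic finite-L "data" from a known truncation of Eq. (extrapolation):
# f(L) = a0 + (a01 + a11 ln L + a21 (ln L)^2) / L^2   (the i <= 2j, j = 1 terms).
true = dict(a0=0.25, a01=-0.8, a11=0.3, a21=-0.05)
L = np.arange(8, 65, 4, dtype=float)
lnL = np.log(L)
y = true["a0"] + (true["a01"] + true["a11"] * lnL + true["a21"] * lnL**2) / L**2

# Least-squares fit of the same functional form; the intercept a0
# is the infinite-volume estimate.
A = np.stack([np.ones_like(L), 1 / L**2, lnL / L**2, lnL**2 / L**2], axis=1)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef[0])  # ~0.25
```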
Recall \cite{gcrossi,c3feo} that the pure gluonic contributions
are~(presented here with improved accuracy)
\begin{eqnarray}
c_1^g &=& {{N^2 - 1} \over 8 \;N} , \nonumber \\
c_2^g &=& \left( N^2 - 1 \right) \left(0.0051069297 -
{1 \over {128 \; N^2}} \right) , \\
c_3^g &=& \left( N^2 - 1 \right) \Bigg( {0.0023152583(50) \over N^3} -
{0.002265487(17) \over N} + \nonumber \\
& & \; \; \; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
0.000794223(19) \; N \Bigg) . \nonumber
\label{cg}
\end{eqnarray}
The 2-loop gluonic coefficient $c_2^g$ can be written in closed form as
\begin{equation}
\left( N^2 - 1 \right) \left( \
{1\over {384}} + {{{ P_1}}\over {24}} - {{{{{ P_1}}^2}}\over {12}} -
{{{ Q_1}}\over {24}} - {{{ Q_2}}\over {288}} - {1 \over {128 \; N^2}}
\right),
\end{equation}
where $P_1$, $Q_1$ and $Q_2$ are finite integrals defined and evaluated
in~\cite{lusweisz}.
In tables~I and~II we show $c_2^f$ and $c_3^f$ for several pairs
$(\kappa,r)$. They are parametrized in terms of four constants
$h_2$, $h_{30}$, $h_{31}$ and $h_{32}$ as follows
\begin{eqnarray}
c_1^f &=& 0 \; ,\nonumber \\
c_2^f &=&\left(N^2 - 1\right) h_2 \; {{ N_f} \over N} , \\
c_3^f &=& \left(N^2 - 1\right)
\left( h_{30} \; N_f + h_{31} \; {N_f \over N^2} +
h_{32} \; {N_f^2 \over N} \right) . \nonumber
\label{cf}
\end{eqnarray}
In table~III we show the result for $c_2=c_2^g + c_2^f$ and
$c_3=c_3^g + c_3^f$ for $r=1$, $N_f=3$
and $N=2$. In table~IV the result for $r=1$, $N_f=3$ and $N=3$ is shown.
\section{Corrections to asymptotic scaling}
The beta function in QCD with $N_f$ Wilson fermions can be written as
\begin{equation}
{\overline\beta}^L(g) \equiv - a {{d g} \over {d a}}|_{g_R,\;\mu} =
-b_0 g^3 - b_1 g^5 - b_2 g^7 - \cdots
\label{beta}
\end{equation}
The non-universal 3-loop coefficient $b_2$ has been recently
calculated in ref.~\cite{harisettore}, (see also~\cite{proc97}).
In terms of the bare coupling $g$, the lattice spacing $a$ approaches
the continuum limit as
\begin{equation}
a\Lambda_L = \exp\left( - {1 \over {2 b_0 g^2}} \right)
\left( b_0 g^2 \right)^{- b_1 / 2 b_0^2}
\left( 1 + q \; g^2 +\cdots \right) ,
\label{ascaling}
\end{equation}
where $q$ is the 3-loop correction to asymptotic scaling
\begin{equation}
q \equiv {{b_1^2 - b_2 b_0} \over {2b_0^3}} .
\end{equation}
Other couplings can be defined.
A popular effective coupling in terms of the plaquette
energy is~\cite{parisi1,parisi2}
\begin{equation}
g_{E_W}^2 \equiv {1 \over c_1} \; \langle E_W \rangle_{MC} ,
\label{ge}
\end{equation}
where $\langle \cdot \rangle_{MC}$ indicates Monte Carlo average.
In terms of $g_{E_W}$ the approach of the lattice spacing to the
continuum limit is written as in Eq.(\ref{ascaling}) with $q_{E_W}$
instead of $q$. This 3-loop correction to asymptotic scaling is
\begin{equation}
q_{E_W} = q - {{b_0 \, c_3 - b_1 \, c_2 - b_0 \, c_2^2/c_1} \over
{2 \, c_1 \, b_0^2}} .
\label{qe}
\end{equation}
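For concreteness, the scheme comparison can be scripted from the definition of $q$ above and Eq.~(\ref{qe}). In the sketch below, $b_0$ and $b_1$ are the standard universal continuum coefficients; the non-universal $b_2$ and the plaquette coefficients $c_n$ must be supplied externally (e.g., from \citep{harisettore} and the tables), and the numerical arguments in the checks are placeholders, not values from the paper:

```python
import math

def beta0(N, Nf):
    """Universal 1-loop coefficient b0 of Eq. (beta)."""
    return (11.0 * N / 3.0 - 2.0 * Nf / 3.0) / (16.0 * math.pi**2)

def beta1(N, Nf):
    """Universal 2-loop coefficient b1 of Eq. (beta)."""
    return (34.0 * N**2 / 3.0
            - (13.0 * N / 3.0 - 1.0 / N) * Nf) / (16.0 * math.pi**2)**2

def q_standard(b0, b1, b2):
    """q = (b1^2 - b2 b0) / (2 b0^3), the 3-loop correction to scaling."""
    return (b1**2 - b2 * b0) / (2.0 * b0**3)

def q_energy(q, b0, b1, c1, c2, c3):
    """Eq. (qe): the same correction in the <E_W> scheme."""
    return q - (b0 * c3 - b1 * c2 - b0 * c2**2 / c1) / (2.0 * c1 * b0**2)
```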
Moreover the lattice Lambda parameters are related by the equation
\begin{equation}
\Lambda_{E_W} = \exp \left( {c_2 \over {2\, b_0 \, c_1}} \right) \;
\Lambda_L .
\label{lambda}
\end{equation}
We give the coefficients $q_{E_W}$ for $SU(2)$ in table~V and
for $SU(3)$ in table~VI. In both cases we show the result for
$r=1$ and for several choices of $\kappa$.
These results must be compared with the value of $q$ in the standard
bare coupling scheme. This is~\cite{harisettore}
\begin{eqnarray}
SU(2) \;\; \rightarrow & & q(N_f =1)=0.12617(4) \;\;\;\; q(N_f =3)=0.2547(1)
, \nonumber \\
SU(3) \;\; \rightarrow & & q(N_f =1)=0.23956(4) \;\;\;\; q(N_f =3)=0.3681(2) ,
\label{q}
\end{eqnarray}
(notice that in the standard bare coupling scheme, $q$ does not
depend on the fermion mass~\cite{harisettore}).
The 3-loop correction to asymptotic scaling for $N_f=3$, $N=3$
is $\sim 37\%$ in the standard scheme and $\sim 18-21\%$ in
the~$\langle E_W \rangle$ scheme. Apparently this improvement
is not as dramatic as it was for the pure gluonic case \cite{npb}
(for $SU(3)$ $q\sim 19\%$ in the standard scheme and $q\sim 1\%$ in
the energy scheme). The improvement therefore seems to be more
efficient in the quenched case. This fact can also be seen from tables~V
and~VI where the approach to the quenched case (either
$N_f \longrightarrow 0$ or $\kappa \longrightarrow 0$) is
accompanied by the lowering of $q_{E_W}$.
However, the only relevant test for the
improvement in asymptotic scaling from
the standard to some energy scheme is the practical use of the effective
scheme. For example, the 2D spin models
\cite{wolff,abc} show a definite improvement in spite of the
behaviour of the 3-loop coefficient $q$ which for the $O(3)$
models passes from $q=-0.09138$ in the standard scheme to~$0.1694$
in the energy scheme.
\section{Discussion}
We have calculated the free energy up to 3~loops in QCD with
Wilson fermions. We have given the result of the internal energy
average $\langle E_W\rangle$
as a function of the number of fermions $N_f$, their masses and the
Wilson parameter $r$. These expansions have been used to study
the 3-loop corrections to asymptotic scaling in this theory. We have
found that at 3~loops the corresponding energy scheme \cite{parisi1,parisi2}
provides a moderate improvement with respect to the standard scheme.
However, only the practical use of this scheme in particular problems
will reveal how useful it is.
We can numerically compute the set of coefficients $c_n^f$
and $q$ also for other choices of $(\kappa, r)$, $N_f$ and $N$.
\section{Acknowledgements}
We thank CNUCE (Pisa) for qualified technical
assistance in the use of their IBM--SP2.
We would also like to thank Andrea Pelissetto and Ettore Vicari for
useful discussions.
\newpage
\section{Background} \label{sec:ext_background}
In the following, we provide further information and review related literature on concepts discussed throughout this work.
\subsection{Unbalanced Optimal Transport} \label{sec:unot_add}
\looseness -1 Unbalanced optimal transport is a generalization of the classical OT formulation \eqref{eq:ot}, and as such allows mass to be created and destroyed throughout the transport. This relaxation has found recent use cases in various domains ranging from biology \citep{schiebinger2019optimal, yang2018scalable}, imaging \citep{lee2019parallel}, shape registration \citep{bonneel2019spot}, domain adaption \citep{fatras2021unbalanced}, positive-unlabeled learning \citep{chapel2020partial}, to general machine learning \citep{janati2020spatio, frogner2015learning}.
Problem \eqref{eq:ub-ot} provides a general framework of the unbalanced optimal transport problem, which can recover related notions introduced in the literature:
Choosing for ${\mathcal{D}}_f$ the Kullback-Leibler divergence, one recovers the so-called squared \citeauthor{hellinger1909neue} distance. Alternatively, with ${\mathcal{D}}_f = \ell_2$ norm, we arrive at \citet{benamou2003numerical}, while an $\ell_1$ norm retrieves a concept often referred to as partial OT \citep{figalli2010optimal}. The latter comprises approaches which do not rely on a relaxation of the marginal constraints as in \eqref{eq:ub-ot}. In particular, some strategies of partial OT expand the original problem by adding \emph{virtual} mass to the marginals \citep{pele2009fast, caffarelli2010free, gramfort2015fast}, or by extending the OT map by \emph{dummy} rows and columns \citep{sarlin2020superglue} onto which excess mass can be transported.
A further review is provided in \citep[Chapter 10.2]{Peyre2019computational}.
Recent work has furthermore developed alternative computational schemes \citep{chapel2021unbalanced, sejourne2022faster} as well as provided a computational complexity analysis \citep{pham2020unbalanced} of the generalized Sinkhorn algorithm solving entropic regularized unbalanced OT \citep{chizat2018scaling}. With the exception of \citet{yang2018scalable}, however, these approaches neither parameterize the unbalanced problem nor allow for the out-of-sample generalization that we consider in this work.
\subsection{Cycle-Consistent Learning} \label{sec:cyclecon}
The principle of cycle-consistency has been widely used for learning bi-directional transformations between two domains of interest. Cycle-consistency thereby assumes that both the forward and backward mapping are roughly inverses of each other.
In particular, given unaligned points $x \in {\mathcal{X}}$ and $y \in {\mathcal{Y}}$, as well as maps $g : {\mathcal{X}} \mapsto {\mathcal{Y}}$ and $f : {\mathcal{Y}} \mapsto {\mathcal{X}}$, cycle-consistency reconstruction losses enforce $\| x - f(g(x)) \|$ as well as $\| y - g(f(y)) \|$ using some notion of distance $\| \cdot \|$, assuming that there exists such a ground truth bijection $g = f^{-1}$ and $f = g^{-1}$.
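A minimal numerical illustration of the two reconstruction terms (toy linear maps chosen for the example, not the maps used in this work):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(128, 2))  # unpaired samples from domain X
y = rng.normal(size=(128, 2))  # unpaired samples from domain Y

# Toy maps: g is an invertible linear map and f is its exact inverse, so
# both reconstruction terms vanish; for learned maps they are penalized.
A = np.array([[2.0, 0.3], [0.0, 1.5]])
g = lambda x: x @ A.T
f = lambda y: y @ np.linalg.inv(A).T

def cycle_loss(f, g, x, y):
    """||x - f(g(x))||^2 + ||y - g(f(y))||^2, averaged over samples."""
    lx = np.mean(np.sum((x - f(g(x))) ** 2, axis=1))
    ly = np.mean(np.sum((y - g(f(y))) ** 2, axis=1))
    return lx + ly

print(cycle_loss(f, g, x, y))  # ~0 for exact inverses
```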
The advantage of validating \emph{good} matches by cycling between \emph{unpaired} samples becomes evident through the numerous use cases to which cycle-consistency has been applied: Originally introduced within the field of computer vision \citep{kalal2010forward} and applied to image-to-image translation tasks \citep{zhu2017unpaired}, it has been quickly adapted to multi-modal problems \citep{zhu2017toward}, domain adaptation \citep{hoffman2018cycada}, and natural language processing \citep{shen2017style}.
The original principle has been further generalized to settings requiring a many-to-one or surjective mapping between domains \citep{guo2021fork} via conditional variational autoencoders, dynamic notions of cycle-consistency \citep{zhang2020learning}, or to time-varying applications \citep{dwibedi2019temporal}.
These classical approaches enforce cycle-consistency by \emph{explicitly} composing both maps and penalizing for any deviation from this bijection. In this work, we treat cycle-consistency differently. It is enforced implicitly by coupling the two distributions of interest through a sequence of reversible transformations: re-weighting, transforming, and re-weighting (Eq.~\eqref{eq:proxy_measure_constraints} and Fig.~\ref{fig:overview}).
Similarly to our work, \citet{zhang2022cycle} and \citet{hur2021reversible} establish a notion of cycle-consistency (reversibility) for a pair of pushforward operators to align two unpaired measures.
Both methods rely on the Gromov-Monge distance \citep{memoli2018distance}, a divergence to compare probability distributions defined on different ambient spaces ${\mathcal{X}}$ and ${\mathcal{Y}}$---a setting not considered in this work.
They proceed by defining a reversible metric through replacing the single Monge map by a pair of two Monge maps, i.e., $f: {\mathcal{X}} \mapsto {\mathcal{Y}}$ and $g: {\mathcal{Y}} \mapsto {\mathcal{X}}$, minimizing the objective
\begin{equation} \label{eq:gromov-monge}
\mathrm{GM}(\mu, \nu):=\inf _{\substack{f: {\mathcal{X}} \mapsto {\mathcal{Y}}, f_{\sharp} \mu=\nu \\ g: {\mathcal{Y}} \mapsto {\mathcal{X}}, g_{\sharp} \nu=\mu}} \Delta_{{\mathcal{X}}}^{p}(f ; \mu)+\Delta_{{\mathcal{Y}}}^{p}(g ; \nu)+ \Delta_{{\mathcal{X}}, {\mathcal{Y}}}^{p}(f, g ; \mu, \nu),
\end{equation}
\begin{align*}
\Delta_{{\mathcal{X}}}^{p}(f ; \mu) &= \left( \mathbb{E} \left[ | c_{\mathcal{X}}(x, x') - c_{\mathcal{Y}}(f(x), f(x')) |^p \right] \right)^\frac{1}{p} \\
\Delta_{{\mathcal{Y}}}^{p}(g ; \nu) &= \left( \mathbb{E} \left[ | c_{\mathcal{X}}(y, y') - c_{\mathcal{Y}}(g(y), g(y')) |^p \right] \right)^\frac{1}{p} \\
\Delta_{{\mathcal{X}}, {\mathcal{Y}}}^{p}(f, g ; \mu, \nu) &= \left( \mathbb{E} \left[ | c_{\mathcal{X}}(x, g(y)) - c_{\mathcal{Y}}(f(x), y) |^p \right] \right)^\frac{1}{p}.
\end{align*}
Problem \eqref{eq:gromov-monge} shows similarities to the classical cycle-consistency objective of \citet{zhu2017unpaired}, where cycle-consistency is indirectly enforced through $\Delta_{{\mathcal{X}}, {\mathcal{Y}}}^{p}$. \citet{zhang2022cycle} parameterize both Monge maps through neural networks in a similar fashion as done in \citep{yang2018scalable, fan2021scalable}.
Our approach differs from \citet{zhang2022cycle, hur2021reversible} as we model the problem through a single Monge map with duals $f, g$, allowing us to map back-and-forth between measures $\mu$ and $\nu$, and using a different parametrization approach (ICNNs). More importantly, the approaches presented by \citet{zhang2022cycle, hur2021reversible} do not generalize to the unbalanced case. While \citet{zhang2022cycle} proposed an unbalanced version of \eqref{eq:gromov-monge} by relaxing the marginals as done in \citet{chizat2018scaling}, they require the unbalanced sample sizes to be known (i.e., $n$ and $m$ need to be fixed). In our application of interest, particle counts of the target population are, however, not known \emph{a priori}.
\subsection{Convex Neural Architectures} \label{sec:icnn}
Input convex neural networks \citep{amos2017input} are a class of neural networks that approximate the family of convex functions $\psi$ with parameters $\theta$, i.e., whose outputs $\psi_{\theta}(x)$ are convex w.r.t. the input $x$. This property is realized by placing certain constraints on the network's parameters $\theta$. More specifically, an ICNN is an $L$-layer feed-forward neural network, where each layer $l \in \{0, \dots, L-1\}$ is given by
\begin{equation} \label{eq:icnn}
z_{l+1} = \sigma_l(W^x_l x + W^z_l z_l + b_l) \text{ and } \psi_\theta(x) = z_L,
\end{equation}
where $\sigma_l$ are convex non-decreasing activation functions, and $\theta = \{W^x_l, W^z_l, b_l\}_{l=0}^{L-1}$ is the set of parameters, with all entries in $W^z_l$ being non-negative and the convention that $z_0$ and $W^z_0$ are $0$.
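A minimal numpy sketch of the forward pass in Eq.~\eqref{eq:icnn}; non-negativity of the $W^z_l$ entries is enforced here by clipping (one common choice; softplus reparametrizations are another), and the activation is ReLU, which is convex and non-decreasing:

```python
import numpy as np

def icnn_forward(x, params):
    """Eq. (icnn): z_{l+1} = sigma(Wx_l x + Wz_l z_l + b_l), with z_0 = 0.
    Clipping Wz to be non-negative preserves convexity of the output."""
    z = None
    for Wx, Wz, b in params:
        pre = Wx @ x + b
        if z is not None:
            pre = pre + np.maximum(Wz, 0.0) @ z  # non-negative Wz entries
        z = np.maximum(pre, 0.0)                 # convex, non-decreasing sigma
    return float(z.sum())                        # scalar potential psi_theta(x)

rng = np.random.default_rng(1)
d, h = 2, 8
params = [
    (rng.normal(size=(h, d)), None, rng.normal(size=h)),  # layer 0: Wz_0 = 0
    (rng.normal(size=(h, d)), rng.normal(size=(h, h)), rng.normal(size=h)),
    (rng.normal(size=(1, d)), rng.normal(size=(1, h)), rng.normal(size=1)),
]

# Numerical midpoint-convexity check along random segments:
for _ in range(5):
    x1, x2 = rng.normal(size=d), rng.normal(size=d)
    mid = icnn_forward(0.5 * (x1 + x2), params)
    assert mid <= 0.5 * (icnn_forward(x1, params)
                         + icnn_forward(x2, params)) + 1e-9
```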
As mentioned above and through the connection established in \S~\ref{sec:background}, convex neural networks have been utilized to approximate the Monge map $T$ \eqref{eq:monge} via the convex Brenier potential $\psi$ connected to the primal and
dual optimal transport problem. In particular, they have been used to model convex dual functions \citep{makkuva2020optimal} as well as normalizing flows derived from convex potentials \citep{huang2021convex}. The expressivity and universal approximation properties of ICNNs have been further studied by~\citet{chen2018optimal}, who show that any convex function over a compact convex domain can be approximated in sup norm by an ICNN.
To improve convergence and robustness of ICNNs---known to be notoriously difficult to train \citep{richter2021input}---different initialization schemes have been proposed: \citet{bunne2022supervised} derive two initialization schemes ensuring that \emph{upon initialization} $\nabla \psi$ mimics an affine Monge map $T$ mapping either the source measure onto itself (identity initialization) or providing a map between Gaussian approximations of measures $\mu$ and $\nu$ (Gaussian initialization). Further, \citet{korotin2020wasserstein} proposed to use quadratic layers as well as a pre-training pipeline to initialize ICNN parameters to encode an identity map.
\section{Additional Experimental Results} \label{sec:add_exps}
\subsection{Synthetic Data}
In our synthetic two-dimensional dataset, the source and target distribution are mixtures of Gaussians with varying proportions (see Fig. \ref{fig:fig_toy}). Both source and target consist of three corresponding clusters, and by changing the proportions of each cluster, we illustrate a scenario in which subpopulations grow and shrink at different rates. Table~\ref{tab:toy_setup} shows the shares of the three clusters in the source and target distributions. In order to match the target distribution without transporting mass across non-corresponding clusters, the clusters have to be re-scaled with the factors presented in column 'True Scaling Factor'. The last two columns show the mean weights per cluster obtained by \textsc{NubOT} and \textsc{ubOT GAN}, respectively. \textsc{ubOT GAN} captures only the general trend in growth and shrinkage; the exact weights do not scale the cluster proportions appropriately. In contrast, the weights obtained by \textsc{NubOT} match the required scaling factors very closely. Fig.~\ref{fig:toy_mmd_w} shows the weighted MMD between the predicted and the target distribution, confirming the superior performance of \textsc{NubOT}.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{figures/fig_toy_mmd_w.pdf}
\caption{Distributional fit of the predicted samples to the target samples on synthetic data, measured by a weighted version of kernel MMD.}
\label{fig:toy_mmd_w}
\end{figure}
\begin{table}[h]
\centering
\caption{Setup of the synthetic mixture of Gaussians dataset, showing the proportions of the three clusters in source and target distribution in three different settings (\textbf{a.}, \textbf{b.}, \textbf{c.}) as well as the required scaling factor per cluster needed to match the target without transporting points to non-corresponding clusters. The last two columns show the mean weights obtained by \textsc{NubOT} and \textsc{ubOT GAN}.}
\label{tab:toy_setup}
\footnotesize
\begin{tabular}{p{0.3cm}|p{1cm}p{2cm}p{2cm}p{2cm}|p{2cm}p{2cm}}
\toprule & \textbf{Cluster} &
\textbf{Source Proportions} ($p$) & \textbf{Target Proportions} ($q$) & \textbf{True Scaling Factor ($q/p$)} & \textbf{Mean Weight \textsc{NubOT}} & \textbf{Mean Weight \textsc{ubOT GAN}} \\
\midrule
\textbf{a.} &1 & 0.33 & 0.45 & 1.35 & 1.32 & 1.02 \\
& 2&0.33 & 0.45 & 1.35 & 1.36& 0.99 \\
& 3&0.33 & 0.10 & 0.30 & 0.26 & 0.8\\
\midrule
\textbf{b.} &1& 0.33 & 0.70 & 2.10 & 2.07 & 1.18 \\
&2& 0.33 & 0.20 & 0.60 & 0.64 & 0.88 \\
&3& 0.33 & 0.10 & 0.30 & 0.29 & 0.81 \\
\midrule
\textbf{c.}&1 & 0.45 & 0.10 & 0.22 & 0.23 & 0.79 \\
&2& 0.45 & 0.45 & 1.00 & 0.98 & 0.94 \\
&3& 0.10 & 0.45 & 4.50 & 4.60 & 1.44 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Single-Cell Perturbation Responses}
As we lack ground truth for the correspondence of control and perturbed cells, we assess the biological meaningfulness of our predictions by comparing the weights to ClCasp3 and Ki67 intensity, the apoptosis and proliferation markers, respectively. Figures \ref{fig:umap_trametinib}, \ref{fig:umap_ixazomib} and \ref{fig:umap_vindesine} show UMAP projections computed on control cells for the drugs Trametinib, Ixazomib and Vindesine.
In Figure~\ref{fig:umap_ixazomib}~c.,~d., and Figure~\ref{fig:umap_vindesine}~c.,~d., regions of low predicted weights accurately correspond to regions of increased ClCasp3 intensity. Additionally, we compare predicted weights between the two cell types, and contrast them with observed cell counts.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{figures/fig_umaps_trametinib.pdf}
\caption{UMAP projections computed on control cells for Trametinib at $t=24h$. High predicted weights in the MelA$^+$ cell type suggest proliferation, while the Sox9$^+$ population shows higher levels of cell death. This prediction is confirmed by the relative cell counts, where MelA$^+$ cell counts increase and Sox9$^+$ counts decrease, demonstrating opposite response behaviors to Trametinib for each subpopulation, i.e., MelA$^+$ cells show proliferation and Sox9$^+$ cells death.}
\label{fig:umap_trametinib}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/fig_umaps_ixazomib.pdf}
\caption{UMAP projections computed on control cells for Ixazomib for $t=8h$, and $t=24h$, colored by protein marker intensities \textbf{a.} MelA and \textbf{b.} Sox9, markers for the two subpopulations, as well as ClCasp3, a marker for cell death, at \textbf{c.} 8h and \textbf{d.} 24h. The UMAPs confirm the measured relative cell counts of each subpopulation. After 8h \textbf{a.}-\textbf{c.}, neither MelA$^+$ nor Sox9$^+$ cells are affected by the treatment, i.e., we mainly predict weights around 1. \textbf{d.} After 24h, we observe low weights in regions of high predicted apoptosis marker intensities (ClCasp3), especially at $t=24h$, where the observed cell counts suggest death predominantly in the MelA$^+$ cell cluster.}
\label{fig:umap_ixazomib}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/fig_umaps_vindesine.pdf}
\caption{UMAP projections computed on control cells for Vindesine for $t=8h$, and $t=24h$, colored by protein marker intensities \textbf{a.} MelA and \textbf{b.} Sox9, markers for the two subpopulations, as well as ClCasp3, a marker for cell death, at \textbf{c.} 8h and \textbf{d.} 24h. The predicted weights (left) at \textbf{c.} 8h and \textbf{d.} 24h match the observed effects on each subpopulation, as initially only Sox9$^+$ cells are affected by treatment with Vindesine, and only after 24h MelA$^+$ cells show increased cell death. }
\label{fig:umap_vindesine}
\end{figure}
\section{Datasets} \label{sec:datasets}
We evaluate \textsc{NubOT} on several tasks including synthetic data as well as perturbation responses of single cells. In both settings, we are provided with unpaired measures $\mu$ and $\nu$ and aim to recover map $T$ which describes how source $\mu$ transforms into target $\nu$. While in the synthetic data setting we are provided with a ground truth matching, this is not the case for the single-cell data as measuring a cell requires destroying it. In the following, we describe generation and characteristics of both datasets, as well as introduce additional biological insights allowing us to shed light on the learned matching $T$.
\subsection{Synthetic Data} \label{sec:synth_datasets}
To evaluate \textsc{NubOT} in a simple and low-dimensional setup with known ground truth, we generate a synthetic example: we model a source population with clear subpopulation structure through a mixture of Gaussians. Next, we generate a second (target) population aligned to the source population. We then simulate an intervention to which the subpopulations respond differently, including different levels of growth and death.
Specifically, we generate batches of 400 samples with three clusters with different proportions before and after the intervention. Table~\ref{tab:toy_setup} shows the proportions of the three clusters in the source and target distribution, as well as the required weight-factor and the obtained results from \textsc{NubOT} and \textsc{ubOT GAN}.
\subsection{Single-Cell Data} \label{sec:cell_datasets}
\paragraph{Biological experiment.}
The single-cell dataset used in this work was generated by the a multiplexed microscopy technology called Iterative Indirect Immunofluorescence Imaging (4i) \citep{gut2018multiplexed}, which is capable of measuring the abundance and localization of many proteins in cells. By iteratively adding, imaging and removing fluorescently tagged antibodies, a multitude of protein markers is captured for each cell. Additionally, cellular and morphological characteristics are extracted from microscopical images, such as the cell and nucleus area and circularity.
This spatially resolved phenotypic dataset is rich in molecular information and provides insights into heterogeneous responses of thousands of cells to various drugs.
Measuring different morphological and signaling features captures pre-existing cell-to-cell variability which might influence the perturbation effect, resulting in a variety of responses. Some of these markers are of particular importance, as they provide insights into the level of a cell's growth or death as well as subpopulation identity.
We utilized a mixture of two melanoma tumor cell lines (M130219 and M130429) at a ratio of 1:1. The cell lines can be differentiated by the mutually exclusive expression of marker proteins: the former is positive for Sox9, the latter for a set of four proteins which are all recognized by an antibody called MelA \citep{raaijmakers2015new}. Cells were seeded in a 384-well plate and incubated at $37^\circ$C and 5\% CO$_2$ overnight. Next, the cells were exposed to multiple cancer drugs and dimethyl sulfoxide (DMSO) as a vehicle control for 8h and 24h, after which the cells were fixed and six cycles of 4i were performed. TissueMAPS and the scikit-image library \citep{van2014scikit} were used to process and analyze the acquired images, and to perform feature extraction and quality control steps using semi-supervised random forest classifiers.
\paragraph{Data generation and processing.}
Our datasets contain high-dimensional single-cell data of control and drug-treated cells measured at two time points (8 and 24 hours). For both the 8h-dataset and the 24h-dataset, we normalized the extracted intensity and morphological features by dividing each feature by its 75th percentile, computed on the control cells. Additionally, values were transformed by a $log1p$ function ($x \leftarrow log(x+1)$). In total, our datasets consist of 48 features, of which 26 are protein marker intensities and the remaining 22 are morphological features. For each treatment, we have measured between 2000 and 3000 cells. For training the models, we perform a 80/20 train/test split. We trained all models on control and treated cells for each time step and each drug separately. The considered drugs as well as their inhibition type can be found in Table~\ref{tab:drugs}.
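The normalization described above amounts to two numpy operations; a minimal sketch (hypothetical helper name, synthetic data):

```python
import numpy as np

def normalize_features(X, X_control):
    """Divide each feature by its 75th percentile computed on control
    cells, then apply log1p, i.e., x <- log(x + 1)."""
    q75 = np.percentile(X_control, 75, axis=0)
    return np.log1p(X / q75)

controls = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]])
# A value equal to the control 75th percentile maps to log(2):
print(normalize_features(np.array([[3.25, 32.5]]), controls))
```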
\begin{figure}
\centering
\includegraphics[width=1.05\textwidth]{figures/fig_celltyping_8h_drugs.pdf}
\caption{Classification of cells into cell types (MelA$^+$, Sox9$^+$) based on protein marker intensities of MelA and Sox9, for all drugs, at $t=8h$ (see \S~\ref{sec:cell_datasets}). Each tile represents one drug. MelA$^+$ cells colored in red, Sox9$^+$ in blue.}
\label{fig:cell_types_check_8h}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/fig_cellnumbers_both_celltypes.pdf}
\caption{\looseness -1 Drug treatment-induced change in cell counts in the two cell types compared to the cell count of the respective cell types in the control condition. \textbf{a.} Cell count change for cell types Sox9$^+$ (top) and MelA$^+$ (bottom) at $t=8h$. \textbf{b.} Cell count change for cell types Sox9$^+$ (top) and MelA$^+$ (bottom) at $t=24h$.}
\label{fig:cell_counts_relative}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.05\textwidth]{figures/fig_celltyping_24h_drugs.pdf}
\caption{Classification of cells into cell types (MelA$^+$, Sox9$^+$) based on protein marker intensities of MelA and Sox9, for all drugs, at $t=24h$ (see \S~\ref{sec:cell_datasets}). MelA$^+$ cells colored in red, Sox9$^+$ in blue.}
\label{fig:cell_types_check_24h}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/fig_cellnumbers_drugs.pdf}
\caption{Observed cell counts of drug-treated cells normalized to control cell counts, per drug and time point. 8h treatment in light blue, 24h treatment in dark blue.}
\label{fig:cell_counts_total}
\end{figure}
\begin{table}[]
\centering
\footnotesize
\caption{\textbf{Overview of all treatments and their inhibition type considered in this work.} Abbreviations PROTi (Proteasome inhibitor), DNASynthi (DNA synthesis inhibitor), panKi (pan kinase inhibitor), ImmuneMod. (Immune modulatory compound), MTDisruptor (Microtubule disruptor), ApopInducer (Apoptosis inducer).}
\label{tab:drugs}
\begin{tabular}{lc||lc}
\toprule
\textbf{Drug Name} & \textbf{Inhibitor Type} & \textbf{Drug Name} & \textbf{Inhibitor Type} \\
\midrule
Ixazomib & PROTi & Olaparib & PARPi \\
Sorafenib & RAFi & Paclitaxel & MTDisruptor \\
Dabrafenib & BRAFi & Melphalan & Alkylator \\
Everolimus & mTORi & Regorafenib & panKi \\
Hydroxyurea & DNASynti & Vindesine & MTDisruptor \\
Midostaurin & panKi & Cisplatin & Alkalyting \\
Dexamethasone & ImmuneMod. & Ulixertinib & ERKi \\
Temozolomide & Alkylator & Staurosporine & ApopInducer \\
Decitabine & DNAMeti & Lenalidomide & ImmuneMod. \\
Dasatinib & SRCi-ABLi & Crizotinib & METi \\
Trametinib & MEKi & Imatinib & KITi-PDGFRi-ABLi \\
Erlotinib & EGFRi & Palbociclib & CDK4/6i \\
Dacarbazine & Alkylator \\
\bottomrule
\end{tabular}
\end{table}
\paragraph{Cell type assignment.}
We assigned M130219 and M130429 cells to the Sox9 and MelA cell types, respectively, by first training a two-component Gaussian mixture model on the features 'intensity-cell-MelA-mean' and 'intensity-nuclei-Sox9-mean' of the control cells. Next, we used the aforementioned features and the labels provided by the mixture model to train a nearest neighbor classifier, which we then used to predict the cell type labels of the drug-treated cells. The procedure was performed separately for the 8h and 24h datasets. Results of the classification can be found in Figure \ref{fig:cell_types_check_8h} and Figure \ref{fig:cell_types_check_24h}, respectively.
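This two-step assignment can be sketched in a simplified numpy-only form; here a two-means clustering stands in for the Gaussian mixture model, a 1-nearest-neighbor rule stands in for the classifier, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "control" cells in the (MelA, Sox9) mean-intensity plane:
# two well-separated clusters mimicking the two cell lines.
mela_ctrl = rng.normal([3.0, 0.5], 0.2, size=(200, 2))
sox9_ctrl = rng.normal([0.5, 3.0], 0.2, size=(200, 2))
control = np.vstack([mela_ctrl, sox9_ctrl])

def two_means(X, iters=20):
    """Simplified stand-in for the 2-component Gaussian mixture;
    deterministic initialization from the first and last sample."""
    centers = np.stack([X[0], X[-1]])
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == k].mean(0) for k in (0, 1)])
    return labels, centers

def nn_transfer(treated, control, labels):
    """1-nearest-neighbor label transfer from control to treated cells."""
    d2 = ((treated[:, None] - control[None]) ** 2).sum(-1)
    return labels[np.argmin(d2, axis=1)]

labels, _ = two_means(control)
treated = np.array([[2.9, 0.6], [0.4, 3.1]])  # one MelA-like, one Sox9-like
print(nn_transfer(treated, control, labels))  # [0 1]
```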
\section{Experimental Details} \label{sec:exp_details}
\textsc{NubOT} consists of several modules and its performance is compared against several baselines. In the following, we provide additional background on experimental details, including a description of the evaluation metrics and baselines considered, as well as further information on the parameterization and hyperparameter choices made for \textsc{NubOT}.
\subsection{Evaluation Metrics} \label{sec:eval_metrics}
We evaluate our model by analyzing the distributional similarity between the predicted and observed perturbed distributions. For this, we compute the kernel maximum mean discrepancy (MMD) \citep{gretton2012kernel}. To take mass variation into account, we compute a weighted version of MMD by weighting each predicted point with its associated normalized weight.
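A minimal sketch of the weighted MMD described above, using a single RBF kernel; the kernel choice and the bandwidth `gamma` are illustrative assumptions, not the settings used in the experiments:

```python
import numpy as np

def weighted_mmd(X, Y, w_x=None, w_y=None, gamma=1.0):
    """Weighted kernel MMD^2 with an RBF kernel exp(-gamma * ||a - b||^2).

    w_x, w_y are nonnegative weights normalized to sum to one, so that
    predicted points can carry the mass learned by the model."""
    w_x = np.full(len(X), 1 / len(X)) if w_x is None else w_x / w_x.sum()
    w_y = np.full(len(Y), 1 / len(Y)) if w_y is None else w_y / w_y.sum()

    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    return w_x @ k(X, X) @ w_x - 2 * w_x @ k(X, Y) @ w_y + w_y @ k(Y, Y) @ w_y

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (100, 2))
Y = rng.normal(0, 1, (100, 2))   # same distribution -> small MMD
Z = rng.normal(3, 1, (100, 2))   # shifted distribution -> large MMD
```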
\subsection{Baselines} \label{sec:baselines}
We compare \textsc{NubOT} against several baselines, comprising a balanced OT-based method \citep[\textsc{CellOT}]{bunne2021learning} and an unbalanced OT-based method \citep[\textsc{ubOT GAN}]{yang2018scalable}, i.e., current state-of-the-art methods, as well as ablations of our work. In the following, we briefly motivate and introduce each baseline.
\paragraph{\textsc{CellOT}.}
By introducing reweighting functions $\eta$ and $\zeta$, \textsc{NubOT} recovers a balanced problem parameterized by dual potentials $f$ and $g$. An important ablation study to consider is thus to compare its performance to its balanced counterpart. Ignoring the fact that the original problem includes cell death and growth, and thus varying cell numbers, we apply ideas developed in \citet{makkuva2020optimal, bunne2021learning} and learn a balanced OT problem via duals $f$ and $g$. These duals are parameterized by two ICNNs and optimized in objective \eqref{eq:not-minmax} via an alternating min-max scheme.
\paragraph{\textsc{ubOT GAN}.}
Using \eqref{eq:ub-ot}, \citet{yang2018scalable} propose to model mass variation in unbalanced OT via a relaxation of the marginals.
Similar to \citet{fan2021scalable}, \citet{yang2018scalable} reformulate the constrained Monge problem~\eqref{eq:monge} as a saddle point problem with Lagrange multiplier $h$ for the constraint $T_\sharp \mu = \nu$, i.e.,
\begin{align*}
\sup_h \inf_T &\int_{{\mathcal{X}}} c(x, T(x)) \mu(x) d x+\int_{{\mathcal{X}}} h(y)\left(\nu-T_{\sharp} \mu\right) d y \\
= &\int_{{\mathcal{X}}}[c(x, T(x))-h(T(x))] \mu(x) d x+\int_{{\mathcal{X}}} h(y) \nu(y) d y,
\end{align*}
parameterizing $T$ and $h$ via neural networks.
To allow mass to be created and destroyed, \citet{yang2018scalable} introduce a scaling factor $\xi: \mathcal{X} \rightarrow \mathbb{R}^+$ that scales the mass of each source point $x_i$. The optimal solution then needs to balance the cost of mass variation against the cost of transport, potentially measured through different cost functions $c_1: \mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R}^+$ (cost of mass transport) and $c_2: \mathbb{R}^+ \rightarrow \mathbb{R}^+$ (cost of mass variation). Parameterizing the transport map $T_{\theta}$, the scaling factors $\xi_{\phi}$, and the penalty $h_{\omega}$ with neural networks, the resulting objective is
\begin{equation*} \label{eq:gan-loss}
l(\theta, \phi, \omega) \coloneqq \frac{1}{n} \sum_{i=1}^{n} \left[ c_1(x_i,T_{\theta}(x_i)) \xi_{\phi}(x_i) + c_2(\xi_{\phi}(x_i)) + \xi_{\phi}(x_i) h_{\omega}(T_{\theta}(x_i)) - \Psi^*(h_{\omega}(y_i)) \right],
\end{equation*}
which is optimized via alternating gradient updates, with $\Psi^*$ approximating the divergence term of the relaxed marginal constraints (see \eqref{eq:ub-ot}).
\paragraph{\textsc{Identity}.}
A trivial baseline is to compare the predictions to a map which does not model any perturbation effect. The \textsc{Identity} baseline thus applies the identity map and provides an \emph{upper bound} on the error any useful method should stay below, also considered in \citet{bunne2021learning}.
\paragraph{\textsc{Observed}.}
In a similar fashion, we may ask for a \emph{lower bound} on the achievable error. As a ground truth matching is not available, we construct a baseline for a comparison on the distributional level from a held-out set of observed perturbed cells, which differs from the true target population only up to experimental noise. The closer a method comes to the \textsc{Observed} baseline, the more accurately it fits the perturbed cell population.
\subsection{Hyperparameters} \label{sec:hyperparams}
We parameterize the duals $f$ and $g$ using ICNNs with 4 hidden layers, each of size 64, using ReLU as activation function between the layers. We choose the identity initialization scheme introduced by \citet{bunne2022supervised} such that $\nabla g$ and $\nabla f$ resemble the identity function in the first training iteration.
As suggested by \citet{makkuva2020optimal}, we relax the convexity constraint on ICNN $g$ and instead penalize its negative weights $W^z_l$
\begin{equation*}
R\left(\theta\right)=\lambda \sum_{W^z_l \in \theta}\left\|\max \left(-W^z_l, 0\right)\right\|_{F}^{2}.
\end{equation*}
The convexity constraint on ICNN $f$ is enforced after each update by setting the negative weights of all $W^z_l \in \theta_f $ to zero. Duals $g$ and $f$ are trained with an alternating min-max scheme where each model is trained at the same frequency.
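The penalty $R(\theta)$ and the hard convexity enforcement for $f$ amount to only a few lines; a sketch with plain numpy arrays standing in for the ICNN weight matrices $W^z_l$:

```python
import numpy as np

def negative_weight_penalty(weights, lam=1.0):
    """R(theta) = lam * sum_l || max(-W_l^z, 0) ||_F^2
    (soft penalty on g's negative z-path weights)."""
    return lam * sum((np.maximum(-W, 0.0) ** 2).sum() for W in weights)

def clip_to_convex(weights):
    """Hard-enforce convexity for f by zeroing negative z-path weights
    after each update."""
    return [np.maximum(W, 0.0) for W in weights]

W = [np.array([[0.5, -0.2],
               [-1.0, 0.3]])]
# the penalty counts only the negative entries: 0.2^2 + 1.0^2 = 1.04
clipped = clip_to_convex(W)
```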
Further, both reweighting functions $\eta$ and $\zeta$ are represented by a multi-layer perceptron (MLP) with two hidden layers of size 64 for the single-cell and of size 32 for the synthetic dataset, with ReLU activation functions. The final output is further passed through a softplus activation function as we do not assume negative weights.
For the unbalanced Sinkhorn algorithm, we choose an entropy regularization of $\varepsilon = 0.005$ and a marginal relaxation penalty of $0.05$.
We train the pair $f$ and $g$ as well as $\eta$ and $\zeta$ with Adam, using learning rates of $10^{-4}$ and $10^{-3}$, respectively, and $\beta_1 = 0.5$, $\beta_2 = 0.9$. We parameterize both baselines with networks of similar size and follow the implementations proposed by \citet{yang2018scalable} and \citet{bunne2021learning}.
\section{Reproducibility}
The code will be made public upon publication of this work.
\section{Background} \label{sec:background}
\subsection{Optimal Transport} \label{sec:background_OT}
\looseness -1 For two probability measures $\mu, \nu$ in $\mathcal{P}(\mathcal{X})$ with $\mathcal{X} = \mathbb{R}^d$ and a real-valued continuous cost function $c\in\mathcal{C}(\mathcal{X}^2)$, the optimal transport problem \citep{kantorovich1942transfer} is defined as
\begin{equation} \label{eq:ot}
\textup{OT}(\mu, \nu) \vcentcolon= \inf_{\gamma\in \Gamma(\mu,\nu)}\int_{{\mathcal{X}}^2} c(x,y) \gamma(dx, dy),
\end{equation}
where $\Gamma(\mu, \nu) = \{ \gamma \in {\mathcal{M}}_+({\mathcal{X}}^2), \text{ s.t. } (\Proj_1)_\sharp \gamma = \mu, (\Proj_2)_\sharp \gamma = \nu \}$ is the set of couplings in the cone of nonnegative Radon measures ${\mathcal{M}}_+({\mathcal{X}}^2)$ with respective marginals $\mu, \nu$. When instantiated on finite discrete measures, such as $\mu=\sum_{i=1}^n u_i\delta_{{\mathbf{x}}_i}$ and $\nu=\sum_{j=1}^m v_j\delta_{{\mathbf{y}}_j}$, with $\mathbf{u} \in \Sigma_n, \mathbf{v} \in \Sigma_m$ this problem translates to a linear program, which can be regularized using an entropy term~\citep{Peyre2019computational}. For $\varepsilon\geq0$, set
\begin{equation}\label{eq:reg-ot}
\textup{OT}_{\varepsilon}(\mu, \nu) \vcentcolon= \min_{\mathbf{P}\in U({\mathbf{u}},{\mathbf{v}})} \dotp{\mathbf{P}}{[c({\mathbf{x}}_i, {\mathbf{y}}_j)]_{ij}} \,-\varepsilon H(\mathbf{P}),
\end{equation}
where $H(\mathbf{P}) \vcentcolon= -\sum_{ij} \mathbf{P}_{ij} (\log \mathbf{P}_{ij} - 1)$ and the polytope $U({\mathbf{u}}, {\mathbf{v}})$ is the set of matrices $\{\mathbf{P}\in\mathbb{R}^{n \times m}_+, \mathbf{P}\mathbf{1}_m ={\mathbf{u}}, \mathbf{P}^\top\mathbf{1}_n={\mathbf{v}}\}$. For clarity, we will sometimes write $\textup{OT}_{\varepsilon}({\mathbf{u}}, {\mathbf{v}}, \{{\mathbf{x}}_i\}, \{{\mathbf{y}}_j\})$.
Notice that the definition above reduces to \eqref{eq:ot} when $\varepsilon=0$. Setting $\varepsilon>0$ yields a faster and differentiable proxy to approximate $\textup{OT}$ and allows fast numerical approximation via the Sinkhorn algorithm \citep{cuturi2013sinkhorn}, but introduces a bias, since in general $\textup{OT}_{\varepsilon}(\mu,\mu)\ne 0$.
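For concreteness, the entropy-regularized problem \eqref{eq:reg-ot} can be solved with a few lines of Sinkhorn scaling. This is a minimal dense-matrix sketch; the data and $\varepsilon$ are illustrative, and a log-domain implementation is needed for small $\varepsilon$:

```python
import numpy as np

def sinkhorn(u, v, C, eps=1.0, n_iter=2000):
    """Entropy-regularized OT: returns coupling P with marginals ~ (u, v)."""
    K = np.exp(-C / eps)        # Gibbs kernel
    a = np.ones_like(u)
    b = np.ones_like(v)
    for _ in range(n_iter):
        a = u / (K @ b)         # scale rows toward marginal u
        b = v / (K.T @ a)       # scale columns toward marginal v
    return a[:, None] * K * b[None, :]

rng = np.random.default_rng(0)
x, y = rng.normal(0, 1, (30, 2)), rng.normal(1, 1, (40, 2))
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared Euclidean cost
u, v = np.full(30, 1 / 30), np.full(40, 1 / 40)
P = sinkhorn(u, v, C)
```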
\paragraph{Neural optimal transport.}
To parameterize \eqref{eq:ot} and predict how a measure evolves from $\mu$ to $\nu$, we introduce an alternative formulation known as the \citeauthor{Monge1781} problem (\citeyear{Monge1781}), given by
\begin{equation} \label{eq:monge}
\textup{OT}(\mu, \nu)=\inf _{T: T_{\sharp} \mu=\nu} \int_{\mathcal{X}}c(x, T(x)) d \mu(x),
\end{equation}
with pushforward operator $\sharp$ and the optimal solution $T^*$ known as the \citeauthor{Monge1781} map between $\mu$ and $\nu$.
\looseness -1 When cost $c$ is the quadratic Euclidean distance, i.e., $c = \| \cdot\|^2_2$, \citeauthor{Brenier1987}'s theorem (\citeyear{Brenier1987}) states that this \citeauthor{Monge1781} map is necessarily the gradient $\nabla \psi$ of a convex potential $\psi : \mathcal{X} \mapsto \mathbb{R}$ such that $\nabla \psi_\sharp \mu = \nu$, i.e., $T^*(x) = \nabla \psi(x)$.
This connection has far-reaching impact and is a central component of recent neural optimal transport solvers \citep{makkuva2020optimal, bunne2022proximal, alvarez2021optimizing, korotin2020wasserstein, bunne2022supervised, fan2021barycenter}.
Instead of (indirectly) learning the Monge map $T$ \citep{yang2018scalable, fan2021scalable}, it is sufficient to restrict the computational effort to learning a \emph{good} convex potential $\psi_\theta$, parameterized via an input convex neural network (ICNN) \citep{amos2017input}, s.t. $\nabla \psi_{\theta\, \sharp}\, \mu = \nu$.
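A minimal ICNN sketch illustrating the construction: convexity in $x$ follows from elementwise nonnegative $W^z$ matrices combined with a convex, nondecreasing activation (ReLU here); the layer sizes and random weights are arbitrary:

```python
import numpy as np

def icnn_forward(x, Wx, Wz, b):
    """Forward pass of a small input convex neural network f(x).
    f is convex in x as long as every W^z matrix is elementwise
    nonnegative and the activation is convex and nondecreasing."""
    z = np.maximum(Wx[0] @ x + b[0], 0.0)
    for l in range(1, len(Wx)):
        z = np.maximum(Wz[l - 1] @ z + Wx[l] @ x + b[l], 0.0)
    return z.item()

rng = np.random.default_rng(0)
d, h = 2, 8
Wx = [rng.normal(size=(h, d)), rng.normal(size=(h, d)), rng.normal(size=(1, d))]
Wz = [np.abs(rng.normal(size=(h, h))), np.abs(rng.normal(size=(1, h)))]  # W^z >= 0
b = [rng.normal(size=h), rng.normal(size=h), rng.normal(size=1)]
f = lambda x: icnn_forward(x, Wx, Wz, b)
```

Midpoint convexity, $f(\tfrac{a+c}{2}) \leq \tfrac{1}{2}(f(a)+f(c))$, can be checked numerically for arbitrary pairs of inputs.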
Alternatively, parameterizations of such maps can be carried out via the dual formulation of \eqref{eq:ot} \citep[Proposition 1.11, Theorem 1.39]{santambrogio2015optimal}, i.e.,
\begin{equation}
\textup{OT}(\mu, \nu) \vcentcolon= \sup _{\substack{f, g \in \mathcal{C}(\mathcal{X}) \\ f \oplus g \leq c}} \int f d \mu+\int g d \nu,
\end{equation}
where the dual potentials $f, g$ are continuous functions from $\mathcal{X}$ to $\mathbb{R}$, and $(f \oplus g)(x, y) \coloneqq f(x) + g(y)$.
Based on \citet{Brenier1987}, \citet{makkuva2020optimal} derive an approximate min-max optimization scheme parameterizing the duals $f, g$ via two convex functions. The objective thereby reads
\begin{equation} \label{eq:not-minmax}
\textup{OT}_\text{n}(\mu, \nu)=\sup _{f \text{ convex}}\mathop{\mathrm{inf}\vphantom{\mathrm{sup}}} _{g \text{ convex}} \underbrace{\frac{1}{2}\mathbb{E}\left[\|x\|^2+\|y\|^2\right]}_{\mathcal{C}_{\mu, \nu}} - \underbrace{\left(\mathbb{E}_{\mu}[f(x)]+\mathbb{E}_{\nu}[\langle y, \nabla g(y)\rangle-f(\nabla g(y))]\right)}_{\mathcal{V}_{\mu, \nu}(f, g)}.
\end{equation}
When parameterizing $f$ and $g$ via a pair of ICNNs with parameters $\theta_f$ and $\theta_g$, this neural OT scheme then allows predicting $\nu$ or $\mu$ via $\nabla g_{\theta_g \sharp} \mu$ or $\nabla f_{\theta_f \sharp}\nu$, respectively.
We further discuss neural primal \citep{fan2021scalable, yang2018scalable} and dual approaches \citep{makkuva2020optimal, korotin2020wasserstein, bunne2021learning} in \S\ref{sec:baselines}.
\subsection{Unbalanced optimal transport.}
A major constraint of problem \eqref{eq:ot} is its restriction to a pair of probability distributions $\mu$ and $\nu$ of equal mass.
Unbalanced optimal transport \citep{benamou2003numerical, liero2018optimal, chizat2018unbalanced} lifts this requirement and allows a comparison between unnormalized measures, i.e., via
\begin{equation} \label{eq:ub-ot}
\inf _{\gamma \in \mathcal{M}_{+}({\mathcal{X}}^2)} \int_{{\mathcal{X}}^2} c(x, y) \gamma(dx, dy) +\tau_1 \mathcal{D}_{f_1}((\Proj_1)_\sharp \gamma \mid \mu)+ \tau_2 \mathcal{D}_{f_2}((\Proj_2)_\sharp \gamma \mid \nu),
\end{equation}
with $f$-divergences $\mathcal{D}_{f_1}$ and $\mathcal{D}_{f_2}$ induced by $f_1, f_2$, and parameters $(\tau_1, \tau_2)$ controlling how strongly mass variations are penalized as opposed to transportation of mass.
When introducing an entropy regularization as in \eqref{eq:reg-ot}, the unbalanced OT problem between discrete measures ${\mathbf{u}}$ and ${\mathbf{v}}$, i.e.,
\begin{equation} \label{eq:reg-ub-ot}
\textup{UBOT}({\mathbf{u}}, {\mathbf{v}}) \vcentcolon= \min _{\Gamma \in \mathbb{R}_{+}^{n \times m}}\dotp{\Gamma}{[c(x_i, y_j)]_{ij}} +\tau_1 {\mathcal{D}}_{f_1}(\Gamma \mathds{1}_m \mid {\mathbf{u}})+\tau_2 {\mathcal{D}}_{f_2}(\Gamma^{\top} \mathds{1}_n \mid {\mathbf{v}}) \,-\varepsilon H(\Gamma),
\end{equation}
can be efficiently solved via generalizations of the Sinkhorn algorithm \citep{chizat2018scaling, cuturi2013sinkhorn, benamou2015iterative}.
We describe alternative formulations of the unbalanced OT problem in detail, review recent applications, and provide a broader literature review in the Appendix (\S\ref{sec:unot_add}).
\section{Conclusion} \label{sec:conclusion}
\looseness -1 This work presents a novel formulation of the unbalanced optimal transport problem that bridges two previously disjoint perspectives on the topic: a theoretical one based on semi-couplings and a practical one based on recent neural estimation of OT maps. The resulting algorithm, \textsc{NubOT}, is scalable, efficient, and robust, and it is effective at modeling processes that involve population growth or death, as demonstrated through various experimental results on both synthetic and real data. On the challenging single-cell perturbation task, \textsc{NubOT} successfully predicts perturbed cell states while explicitly modeling death and proliferation. This is an unprecedented capability in the field of single-cell biology, which currently relies on markers to approximate the survival state of a cell population upon drug treatment. Explicitly modeling proliferation and death at the single-cell level as part of the drug response makes it possible to link cellular properties observed prior to drug treatment to therapy outcomes. The application of \textsc{NubOT} in drug discovery and personalized medicine could thus have great implications, as it allows identifying cellular properties predictive of drug efficacy.
\section{Introduction}\label{sec:introduction}
Modeling change is at the core of various problems in the natural sciences, from dynamical processes driven by natural forces to population trends induced by interventions. In all these cases, the gold standard is to track particles or individuals across time, which allows for immediate estimation of individual (or aggregate) effects. But maintaining these pairwise correspondences across interventions or time is not always possible, for example, when the same sample cannot be measured more than once. This is typical in biomedical sciences, where the process of measuring is often altering or destructive. For example, single-cell biology profiling methods destroy the cells and thus cannot be used repeatedly on the same cell. In these situations, one must rely on comparing \textit{different} replicas of a population and, absent a natural identification of elements across the populations, infer these correspondences from data in order to model evolution or intervention effects.
\looseness -1 The problem of inferring correspondences across unpaired samples in biology has been traditionally tackled by relying on average and aggregate perturbation responses \citep{green2016systems, zhan2019mek, sheldon2007collective} or by applying mechanistic or linear models \citep{yuan2021cellbox, dixit2016perturb} in, potentially, a learned latent space \citep{lotfollahi2019scgen}. Cellular responses to treatments are, however, highly complex and heterogeneous. To effectively predict the drug response of a patient during treatment and capture such cellular heterogeneity, it is necessary to learn nonlinear maps describing such perturbation responses on the level of single cells.
Assuming perturbations incrementally alter molecular profiles of cells, such as gene expression or signaling activities, recent approaches have utilized optimal transport to predict changes and alignments \citep{schiebinger2019optimal, bunne2022recovering, tong2020trajectorynet}. By returning a coupling between control and perturbed cell states which overall minimizes the cost of matching, optimal transport can solve this matching problem and reconstruct incremental changes in cell states over time.
Despite the advantages mentioned above, the classic formulation of OT is ill-suited to model processes where the population changes in \textit{size}, e.g., where elements might be created or destroyed over time. This is the case, for example, in single-cell biology, where interventions of interest typically promote proliferation of certain cells and death of others. Such scenarios violate the assumption of conservation of mass that the classic OT problem relies upon. Relaxing this assumption yields a generalized formulation, known as the \textit{unbalanced} OT (UBOT) problem, for which recent work has studied its properties \citep{liero2018optimal, chizat2018scaling}, numerical solution \citep{chapel2021unbalanced}, and has applied it successfully to problems in single cell biology \citep{yang2018scalable}. Yet, these methods typically scale poorly with sample size, are prone to unstable solutions, or make limiting assumptions, e.g., only allowing for destruction but not creation of mass.
In this work, we address these shortcomings by introducing a novel formulation of the unbalanced OT problem that relies on the formalism of semi-couplings introduced by \citet{chizat2018unbalanced}, while still obtaining an explicit transport map that models the transformation between distributions. The advantage of the latter is that it allows mapping new out-of-sample points, and it provides an interpretable characterization of the underlying change in distribution. Since the unbalanced OT problem does not directly admit a Monge (i.e., mapping-based) formulation, we propose to \textit{learn} to jointly `re-balance' the two distributions, thereby allowing us to estimate a map between their rescaled versions. To do so, we leverage prior work \citep{makkuva2020optimal, korotin2021neural} that learns the transport map as the gradient of a convex dual potential \citep{Brenier1987} parameterized via an input convex neural network \citep{amos2017input}. In addition, we derive a simple update rule to learn the rescaling functions. Put together, these components yield a reversible, parameterized, and computationally feasible implementation of the semi-coupling unbalanced OT formulation (Fig.~\ref{fig:overview}).
In short, the main \textbf{contributions} of this work are: \begin{enumerate*}[label=(\roman*)]
\item A novel formulation of the unbalanced optimal transport problem that weaves together the theoretical foundations of semi-couplings with the practical advantage of transport maps;
\item A general, scalable, and efficient algorithmic implementation for this formulation based on dual potentials parameterized via convex neural network architectures; and
\item An empirical validation on the challenging task of predicting perturbation responses of single cells to multiple cancer drugs, where our method successfully predicts cell proliferation and death, in addition to faithfully modeling the perturbation responses on the level of single cells.
\end{enumerate*}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figures/fig_overview.pdf}
\caption{\looseness -1 \textbf{a.} A semi-coupling pair ({\color{pink} $\gamma_1$}, {\color{darkblue} $\gamma_2$}) consists of two couplings that together solve the \emph{unbalanced} OT problem. Intuitively, {\color{pink} $\gamma_1$} describes where mass goes as it leaves from $\mu$, and {\color{darkblue} $\gamma_2$} where it comes from as it arrives in $\nu$. \textbf{b.} \textsc{NubOT} parameterizes the semi-couplings ({\color{pink} $\gamma_1$}, {\color{darkblue} $\gamma_2$}) as the composition of reweighting functions $\eta$ and $\zeta$ and the dual potentials $f$ and $g$ between the then \emph{balanced} problem.}
\label{fig:overview}
\end{figure}
\section{A Neural Unbalanced Optimal Transport Model}
The method we propose weaves together a rigorous formulation of the unbalanced optimal transport problem based on semi-couplings (introduced below) with a practical and scalable OT mapping estimation method based on input convex neural network parameterization of the dual OT problem.
\paragraph{Semi-coupling formulation.}
\citet{chizat2018unbalanced} introduced a class of distances that generalize optimal transport for the unbalanced setting. They introduce equivalent dynamic and static formulations of the problem, the latter of which relies on \textit{semi-couplings} to allow for variations of mass. These are generalizations of couplings whereby only one of the projections coincides with a prescribed measure. Formally, the set of semi-couplings between measures $\mu$ and $\nu$ is defined as
\begin{equation}\label{eq:semicoupling_set}
\Gamma\left(\mu, \nu\right) \stackrel{\text { def. }}{=}\left\{\left(\gamma_0, \gamma_1\right) \in\left(\mathcal{M}_{+}\left({\mathcal{X}}^2\right)\right)^2:\left(\Proj_1\right)_{\sharp} \gamma_0=\mu,\left(\Proj_2\right)_{\sharp} \gamma_1=\nu\right\}.
\end{equation}
With this, the unbalanced Kantorovich OT problem can be written as $C_k(\mu, \nu) = \inf_{(\gamma_0, \gamma_1) \in \Gamma\left(\mu, \nu\right) } \int c(x, \tfrac{\gamma_0}{\gamma}, y, \tfrac{\gamma_1}{\gamma})\mathop{}\!\mathrm{d} \gamma(x,y)$, where $\gamma$ is any joint measure for which $\gamma_0, \gamma_1 \ll \gamma$.
Although this formulation lends itself to formal theoretical treatment, it has at least two limitations. First, it does not explicitly model a mapping between measures. Indeed, no analogue of the celebrated Brenier's theorem is known for this setting. Second, deriving a computational implementation of this problem is challenging by the very nature of the semi-couplings: being undetermined along one marginal makes it hard to model the space in \eqref{eq:semicoupling_set}.
\paragraph{Rebalancing with proxy measures.}
To turn the semi-coupling formulation of unbalanced OT into a computationally feasible method, we propose to conceptually break the problem into balanced and unbalanced subproblems, each tackling a different aspect of the difference between measures: feature transformation and mass rescaling. These in turn imply a decomposition of the semi-couplings of \eqref{eq:semicoupling_set}, as we will show later. Specifically, we seek proxy measures $\tilde{\mu}$ and $\tilde{\nu}$ with equal mass (i.e., $\tilde{\mu}({\mathcal{X}}) = \tilde{\nu}({\mathcal{X}})$) across which to solve a \textit{balanced} OT problem through a Monge/Brenier formulation. To decouple measure scaling from feature transformation, we propose to choose $\tilde{\mu}$ and $\tilde{\nu}$ simply as rescaled versions of $\mu$ and $\nu$. Thus, formally, we seek $\tilde{\mu}, \tilde{\nu} \in \mathcal{M}_+(\mathcal{X})$ and $T,S: \mathcal{X} \rightarrow \mathcal{X}$ such that
\begin{equation}\label{eq:proxy_measure_constraints}
\tilde{\mu} = \eta \cdot \mu, \quad \tilde{\nu} = \zeta \cdot \nu, \quad T_\sharp \tilde{\mu} = \tilde{\nu}, \quad S_\sharp \tilde{\nu} = \tilde{\mu},
\end{equation}
where $\eta,\zeta: {\mathcal{X}} \rightarrow \mathbb{R}^+$ are scalar fields, $\eta \cdot \mu$ denotes the measure with density $\eta(x)\mathop{}\!\mathrm{d} \mu(x)$ (analogously for $\zeta \cdot \nu)$, and $T,S$ are a pair of forward/backward optimal transport maps between $\tilde{\mu}$ and $\tilde{\nu}$ (Fig.~\ref{fig:overview}b).
Devising an optimization scheme to find all relevant components in \eqref{eq:proxy_measure_constraints} is challenging. In particular, it involves solving an OT problem whose marginals are not fixed, but change as the reweighting functionals $\eta,\zeta$ are updated. We propose an alternating minimization approach, whereby we alternately solve for $\eta,\zeta$ (through an approximate scaling update) and for $T,S$ (through gradient updates on ICNN convex potentials, as described in Section \ref{sec:background_OT}).
\paragraph{Updating rescaling functions.}
Given current estimates of $\eta$ and $T$, we consider the UBOT problem \eqref{eq:ub-ot} between $T_\sharp (\eta \cdot \mu) = T_\sharp \tilde{\mu}$ and $\nu$. Although in general these two measures will not be balanced (hence the need for UBOT instead of standard OT), our goal is to eventually achieve this. To formalize this, let us use the shorthand notation $\pi_{\textup{UB}}^*(\alpha, \beta) \eqdef \argmin_{\pi} \textup{UBOT} (\pi; \alpha, \beta)$, where \textup{UBOT} is defined in \eqref{eq:reg-ub-ot}. For a fixed $T$, our goal is to find $\eta$ such that $(\Proj_1)_\sharp [\pi_{\textup{UB}}^*(T_\sharp (\eta \cdot \mu), \nu)] = T_\sharp (\eta \cdot \mu)$. For the discrete setting (finite samples), this corresponds to finding a vector $\mathbf{e}$ satisfying:
\begin{equation}\label{eq:UBOT_fixed_point}
\sum_{j=1}^m [\Gamma]_{ij} = \mathbf{e} \odot \mathbf{u}, \quad \text{where} \medspace \Gamma= \argmin \textup{UBOT} (\mathbf{e} \odot \mathbf{u},T({\mathbf{x}}_i), \mathbf{v}, {\mathbf{y}}_j) .
\end{equation}
For a fixed $T$, the vector ${\mathbf{e}}^*$ satisfying this system can be found via a fixed-point iteration. In practice, we approximate it instead with a single-step update using the solution to the unscaled problem:
\[ \Gamma \gets \argmin \textup{UBOT}(\mathbf{u},T({\mathbf{x}}_i), \mathbf{v}, {\mathbf{y}}_j) ; \qquad \mathbf{e} \gets \Gamma \mathds{1} \oslash \mathbf{u};\]
which empirically provides a good approximation of the optimal ${\mathbf{e}}^*$ but is significantly more efficient. Apart from requiring only a single update, whenever $\mathbf{u}$ and $\mathbf{v}$ are uniform (as in most applications, where the samples are assumed to be drawn i.i.d.), solving this problem between unscaled histograms is faster and more stable than solving its scaled (and therefore likely non-uniform) counterpart in \eqref{eq:UBOT_fixed_point}.
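The single-step update itself is just an elementwise division of the coupling's row marginal by $\mathbf{u}$; a toy numeric example with a hand-made stand-in coupling (not produced by an actual UBOT solve):

```python
import numpy as np

# Hand-made stand-in for a coupling returned by an unbalanced OT solve
# between the mapped source points T(x_i) (weights u) and targets y_j.
u = np.array([0.25, 0.25, 0.25, 0.25])
G = np.array([[0.30, 0.00],
              [0.10, 0.05],
              [0.00, 0.20],
              [0.05, 0.05]])
# single-step rescaling update: e <- (Gamma 1) / u, elementwise
e = G.sum(axis=1) / u
# by construction, e * u reproduces the coupling's first marginal
```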
Analogously, for a given $S$, we choose $\zeta$ that ensures $(\Proj_2)_\sharp [\pi_{\textup{UB}}^*(S_\sharp (\zeta \cdot \nu), \mu)] = S_\sharp (\zeta \cdot \nu)$. For empirical measures, this yields the update:
\[ \Gamma \gets \argmin \textup{UBOT}(\mathbf{v},S({\mathbf{y}}_j), \mathbf{u}, {\mathbf{x}}_i); \qquad \mathbf{z} \gets \Gamma \mathds{1} \oslash \mathbf{v}. \]
In order to be able to predict mass changes for new samples, we will use the discrete $\mathbf{e}, \mathbf{z}$ to fit continuous versions of $\eta, \zeta$ via functions parameterized as neural networks trained to achieve $\eta({\mathbf{x}}_i) \approx e_i \medspace \forall i \in \{1, \dots, n\}$ and $\zeta({\mathbf{y}}_j) \approx z_j \medspace \forall j \in \{1, \dots, m\}$.
\paragraph{Updating mappings.}
For a fixed pair $\eta, \zeta$, we want $T$ and $S$ to be a pair of optimal OT maps between $\tilde{\mu}$ and $\tilde{\nu}$. Since these are guaranteed to be balanced by the argument above, we can use a standard (balanced) OT formulation to find them. In particular, we use the formulation of \citet{makkuva2020optimal} to fit them. That is, $T = \nabla g$ and $S = \nabla f$ for convex potentials $f$ and $g$, parameterized as ICNNs with parameters $\theta_f$ and $\theta_g$. The corresponding objective for these two potentials is:
\begin{align*}
\mathcal{L}(f,g) &= \Exp_{x\sim \tilde{\mu}} \left[ f (\nabla g(x)) - \langle x, \nabla g (x) \rangle \right ] - \Exp_{y\sim \tilde{\nu}} \left[ f(y) \right] \\&= \int_\mathcal{X} \bigl[ f(\nabla g (x)) - \langle x, \nabla g (x) \rangle \bigr ] \eta(x) \mathop{}\!\mathrm{d} \mu(x)- \int f(y) \zeta(y) \mathop{}\!\mathrm{d} \nu(y).
\end{align*}
In the finite sample setting, this objective becomes: \vspace{-0.2cm}
\begin{equation}
\mathcal{L}(f,g) = \frac{1}{n}\sum_{i=1}^n e_i \left[ f (\nabla g ({\mathbf{x}}_i)) - \langle {\mathbf{x}}_i, \nabla g ({\mathbf{x}}_i) \rangle \right ] - \frac{1}{m}\sum_{j=1}^m z_j f({\mathbf{y}}_j). \vspace{-0.2cm}
\end{equation}
The optimization procedure is summarized in Algorithm~\ref{algo}.
\vspace{-0.2cm}
\paragraph{Transforming new samples.}
After learning $f, g, \eta, \zeta$, we can use these functions to transform (map and rescale) new samples, i.e., beyond those used for optimization. For a given source datapoint ${\mathbf{x}}$ with mass $u$, we transform it as $({\mathbf{x}}, u) \mapsto (\nabla g ({\mathbf{x}}), \eta({\mathbf{x}}) \cdot u \cdot \zeta(\nabla g ({\mathbf{x}}))^{-1})$. Analogously, target points can be mapped back to the source domain using $({\mathbf{y}}, v) \mapsto (\nabla f ({\mathbf{y}}), \zeta({\mathbf{y}}) \cdot v \cdot \eta(\nabla f ({\mathbf{y}}))^{-1})$.
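A sketch of this out-of-sample transformation with toy stand-ins for the learned $\nabla g$, $\eta$, and $\zeta$ (a constant shift map and constant reweighting functions, chosen only to make the mass bookkeeping visible):

```python
import numpy as np

# Toy stand-ins for the learned components
grad_g = lambda x: x + 1.0                 # forward Monge map T = grad g
eta = lambda x: np.full(len(x), 1.5)       # source rescaling
zeta = lambda y: np.full(len(y), 0.75)     # target rescaling

def transform(x, u):
    """(x, u) -> (grad_g(x), eta(x) * u / zeta(grad_g(x)))."""
    y = grad_g(x)
    return y, eta(x) * u / zeta(y)

x = np.zeros((4, 2))
y, w = transform(x, np.full(4, 0.25))
# each point's mass grows by the factor eta / zeta = 2
```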
\vspace{-0.2cm}
\paragraph{Recovering semi-couplings.}
Let us define $\tilde{\Gamma}_1 \eqdef \diag({\mathbf{e}}^{-1})^\top\Gamma_1$ and $\tilde{\Gamma}_2 \eqdef \diag({\mathbf{z}}^{-1})^\top \Gamma_2$, where $\Gamma_1, \Gamma_2$ are the solutions of the UBOT problems computed in Algorithm~\ref{algo} (lines 7 and 9, respectively). It is easy to see that $(\tilde{\Gamma}_1, \tilde{\Gamma}_2^\top)$ is a valid pair of semi-couplings between $\mu$ and $\nu$ (cf.~Eq.~\ref{eq:semicoupling_set}).
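The recovery step can be checked numerically: rescaling the rows of $\Gamma_1$ by ${\mathbf{e}}^{-1}$ makes its first marginal equal to $\mathbf{u}$ exactly, as required of a semi-coupling in \eqref{eq:semicoupling_set}. A toy example with a hand-made coupling:

```python
import numpy as np

# Toy stand-in coupling from the forward UBOT problem, with source weights u
u = np.array([0.5, 0.5])
G1 = np.array([[0.2, 0.1],
               [0.0, 0.4]])
e = G1.sum(axis=1) / u            # rescaling vector, as in the update rule
gamma0 = np.diag(1.0 / e) @ G1    # candidate first semi-coupling
# the first marginal of gamma0 is exactly u
```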
\begin{algorithm}[t]
\KwIn{$f, g$: ICNNs, initialized s.t. $\nabla g(x) \approx x$ and $\nabla f(y) \approx y$ ; $\eta, \zeta$: NNs}
\caption{Neural Unbalanced Optimal Transport (\textsc{NubOT})}\label{algo}
\For{t in \texttt{epochs}} {
Sample batch $\{x_i\}_{i=1}^{n} \sim \mu$ and $\{y_j\}_{j=1}^{m} \sim \nu$
$\hat{y} \gets \nabla g(x)$
$\hat{x} \gets \nabla f(y)$
$\Gamma_1 \gets \texttt{unbalanced.sinkhorn}(\hat{y}, \tfrac{1}{n}\mathds{1}_n, y, \tfrac{1}{m}\mathds{1}_m)$
$e_i \gets \frac{\sum_j [\Gamma_1]_{ij}}{\sum_{ij} [\Gamma_1]_{ij}} \cdot n$
$\Gamma_2 \gets \texttt{unbalanced.sinkhorn}(\hat{x}, \tfrac{1}{m}\mathds{1}_m, x, \tfrac{1}{n}\mathds{1}_n)$
$z_j \gets \frac{\sum_i [\Gamma_2]_{ji}}{\sum_{ij} [\Gamma_2]_{ij}} \cdot m$
$J(\theta_g, \theta_f) = \frac{1}{n} \sum_{i=1}^{n} e_i \left[ f(\nabla g(x_i)) - \langle x_i, \nabla g(x_i) \rangle \right] - \frac{1}{m} \sum_{j=1}^{m} z_jf(y_j)$
$L_{\eta}(\theta_{\eta}) = $ \texttt{MSE}$(\mathbf{e}, \eta(x))$
$L_{\zeta}(\theta_{\zeta}) = $ \texttt{MSE}$(\mathbf{z}, \zeta(y))$
Update $\theta_g$ to minimize $J$, $\theta_{\eta}$ to minimize $L_{\eta}$, $\theta_{\zeta}$ to minimize $L_{\zeta}$, and $\theta_f$ to maximize $J$}
\end{algorithm}
\section{Additional Derivation Notes}
The construction above decomposes the unbalanced problem into a sequence of measure scaling, mapping, and measure scaling operations. A natural question is why we do not directly compute the unbalanced OT coupling between $\mu$ and $\nu$ to obtain the required weights. The issue is that we do not want to naively find \textit{any} reweighting that yields the right marginals: comparing $\mu$ and $\nu$ without first mapping them would likely lead to uninformative unbalanced couplings, and these weights would remain static throughout training. Instead, we want to refine our estimates of the mass correspondence (and hence of the required reweighting) as we nail down the feature-to-feature correspondence between the measures.
Notation: let $\hat{\mu} = \nabla g_\sharp \mu$, $\tilde{\mu} = \eta \cdot \mu$, and $\hat{\tilde{\mu}} = \nabla g_\sharp (\eta \cdot \mu)$, where $\eta \cdot \mu$ denotes the measure with density $\eta(x) \mathop{}\!\mathrm{d} \mu(x)$. Analogously, for the target distribution we have $\hat{\nu} = \nabla f_\sharp \nu$, $\tilde{\nu} = \zeta \cdot \nu$, and $\hat{\tilde{\nu}} = \nabla f_\sharp (\zeta \cdot \nu)$.
In the Algorithm (L4) we compute $\hat{\mu} \eqdef \nabla g_\sharp \mu$ and solve UBOT between it and $\nu$, obtaining $\Gamma_1$ (L6). Since this is a UBOT problem, $\Gamma_1$'s marginals are typically not equal to $(\hat{\mu},\nu)$. Its actual marginals can be written (using the projection notation of \citet{chizat2018unbalanced}) as $((\Proj_0)_\sharp \Gamma_1, (\Proj_1)_\sharp \Gamma_1)$. We define $\tilde{\mu}$ as the first of these, hence $\Gamma_1$ is a valid (balanced) coupling between $\tilde{\mu}$ and $(\Proj_1)_\sharp \Gamma_1$. In L7 we define $\eta$ to satisfy $\eta \cdot \mu = (\Proj_0)_\sharp \Gamma_1 = \tilde{\mu}$, that is, the reweighting needed to conform $\mu$ to $\Gamma_1$'s first marginal. Note that we do not explicitly compute $\tilde{\mu}$; we only know it can be obtained from $\eta$ and $\mu$. Similarly, we implicitly define $\hat{\tilde{\mu}} = \nabla g_\sharp (\eta \cdot \mu)$.
Analogously, we compute $\hat{\nu} \eqdef \nabla f_\sharp \nu $ (L5), and solve UBOT between it and $\mu$, obtaining $\Gamma_2$ (L8). $\Gamma_2$'s marginals are $((\Proj_0)_\sharp \Gamma_2, (\Proj_1)_\sharp \Gamma_2) \eqdef (\tilde{\nu}, (\Proj_1)_\sharp \Gamma_2)$. In L9 we're defining $\zeta$ to satisfy $\zeta \cdot \nu = (\Proj_0)_\sharp \Gamma_2 = \tilde{\nu}$. Finally, we implicitly define $\hat{\tilde{\nu}} = \nabla f_\sharp (\zeta \cdot \nu)$.
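A minimal discrete sketch of the reweighting step (L6--L7 above), assuming a hand-rolled entropic unbalanced Sinkhorn solver with KL marginal penalties (in practice one could instead call POT's \texttt{ot.unbalanced.sinkhorn\_unbalanced}); the sample sizes and regularization values below are illustrative, not the ones used in our experiments:

```python
import numpy as np

def sinkhorn_unbalanced(a, b, M, reg=0.5, reg_m=1.0, n_iter=200):
    """Entropic unbalanced OT with KL marginal penalties (Chizat-style scaling)."""
    K = np.exp(-M / reg)
    fi = reg_m / (reg_m + reg)              # damping induced by the KL terms
    u, v = np.ones(len(a)), np.ones(len(b))
    for _ in range(n_iter):
        u = (a / (K @ v)) ** fi
        v = (b / (K.T @ u)) ** fi
    return u[:, None] * K * v[None, :]      # coupling Gamma

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 2))                # stand-in for nabla g_# mu (pushed source)
y = rng.normal(size=(60, 2)) + 1.0          # target samples from nu
a = np.full(50, 1.0 / 50)                   # mass vector of mu
b = np.full(60, 1.0 / 60)                   # mass vector of nu
M = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared-Euclidean cost

G1 = sinkhorn_unbalanced(a, b, M)           # UBOT coupling Gamma_1 (L6)
mu_tilde = G1.sum(axis=1)                   # first marginal (Proj_0)_# Gamma_1
eta = mu_tilde / a                          # reweighting with eta * mu = mu_tilde (L7)
```

The second half-step ($\Gamma_2$, $\zeta$) is symmetric, swapping the roles of the pushed-forward target and the source.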
The objective in L10 corresponds to (Makkuva's formulation of) the 2-Wasserstein objective between $\tilde{\mu}$ and $\tilde{\nu}$, which can be seen by writing its population version as:
\begin{align*}
\mathcal{L}(f,g) &= \int_\mathcal{X} \bigl[ f(\nabla g (x)) - \langle x, \nabla g (x) \rangle \bigr ] \eta(x) \mathop{}\!\mathrm{d} \mu(x)- \int f(y) \zeta(y) \mathop{}\!\mathrm{d} \nu(y) \\
&= \Exp_{x\sim \tilde{\mu}} \left[ f (\nabla g(x)) - \langle x, \nabla g (x) \rangle \right ] - \Exp_{y\sim \tilde{\nu}} \left[ f(y) \right]
\end{align*}
Thus, at optimality $(f,g)$ are the Kantorovich potentials, so we have
\begin{align*}
\hat{\tilde{\mu}} \eqdef \nabla g_\sharp \tilde{\mu} \approx \tilde{\nu} \qquad \text{and} \qquad \hat{\tilde{\nu}} \eqdef \nabla f_\sharp \tilde{\nu} \approx \tilde{\mu}
\end{align*}
Other observations
\begin{itemize}
\item This derivation provides a plausible explanation for Frederike's observation that $\tilde{\mu} = \eta \cdot \mu \approx \zeta \cdot \nu = \tilde{\nu}$ for every cluster. At optimality we have $\tilde{\nu} \approx \nabla g_\sharp {\tilde{\mu}}$ (see diagram). Let $A$ be a set (e.g., cluster) for which $\nabla g$ preserves mass (formally, it is a measure-preserving map when restricted to $A$). Then
\[ \tilde{\nu}(A) = \int_{A} \mathop{}\!\mathrm{d} \tilde{\nu}
\stackrel{\text{(optimality)}}{\approx} \int_A \mathop{}\!\mathrm{d} \bigl(\nabla g_\sharp {\tilde{\mu}}\bigr) \stackrel{\text{(def.~of }\sharp)}{=} \int_{\nabla g^{-1}(A)} \mathop{}\!\mathrm{d}\tilde{\mu} \stackrel{\text{(measure preserv.)}}{=} \int_A \mathop{}\!\mathrm{d} \tilde{\mu} = \tilde{\mu}(A)\]
Whenever our Monge maps are reasonable, they should be measure preserving within any cluster, so the above would hold (perhaps approximately) when looking at every cluster.
\item From the above, we can define a valid pair of semi-couplings between $\mu$ and $\nu$ as follows. Let $\Gamma^*_a$ be the trivial coupling between $\mu$ and $\tilde{\mu}$, i.e., $\Gamma^*_a(x,x') = \mu(x')$ if $x=x'$ and zero otherwise (in the discrete case, it would be a diagonal matrix $\text{diag}(\mathbf{a}))$. Then chain together (need to formalize) $\nabla g$ and $1/\zeta$ to get $\Gamma^*_b$ with right-marginal $\nu$. Note that this pair satisfies $(\text{Proj}_1)_\sharp \Gamma^*_a = (\text{Proj}_0)_\sharp \Gamma^*_b$, which is not necessary. We can also define a non-matching pair of couplings as follows. Take $\Gamma^*_a$ as the chaining (formalize) of $\eta$ and $\Gamma_1$, so that it has marginals $\mu$ and $(\text{Proj}_1)_\sharp \Gamma_1$. Analogously define $\Gamma^*_b$ by chaining $\Gamma_2^\top$ and $\zeta$ so that it has marginals $(\text{Proj}_0)_\sharp \Gamma_2$ and $\nu$. Boom! Done!
\end{itemize}
\dam{Not sure if we need this but typing it up for the purposes of discussion}
\begin{equation}
\sup_f \inf_g \mathcal{V}_{\mu,\nu}(f,g) + C_{\mu,\nu} = \sup_f \inf_g -\Exp_{\mu}[f(X)] - \Exp_{\nu}\bigl[ \langle Y, \nabla g(Y) \rangle - f(\nabla g(Y)) \bigr] + C_{\mu, \nu}
\end{equation}
\begin{equation*}
J(\theta_g, \theta_f) = \mathbb{E}_{x\sim \tilde{\mu}} \bigl[ f(\nabla g(x)) - \langle x, \nabla g(x)) \bigr] - \mathbb{E}_{y \sim \tilde{\nu}} f(y)
\end{equation*}
...
\section{Evaluation} \label{sec:evaluation}
We illustrate the effectiveness of \textsc{NubOT} on different tasks, including a synthetic setup for which a ground truth matching is available, as well as an important but challenging task to predict single-cell perturbation responses to a diverse set of cancer drugs with different modes of actions.
\paragraph{Baselines.}
To put \textsc{NubOT}'s performance into perspective, we compare it to several baselines: First, we consider a balanced neural optimal transport method \textsc{CellOT} \citep{bunne2021learning}, based on the neural dual formulation of \citet{makkuva2020optimal}. Further, we benchmark \textsc{NubOT} against the current state-of-the-art \textsc{ubOT GAN}, an unbalanced OT formulation proposed by \citet{yang2018scalable}, which simultaneously learns a transport map and a scaling factor for each source point in order to account for variation of mass. Additionally, we consider two naive baselines: \textsc{Identity}, simulating the identity matching and modeling cell behavior in absence of a perturbation, and \textsc{Observed}, a random permutation of the observed target samples and thus a \emph{lower bound} when comparing predictions to observed cells on the distributional level. More details can be found in the Appendix \S\ref{sec:baselines}.
\subsection{Synthetic Data} \label{sec:exp_synth}
\looseness -1 Populations are often heterogeneous and consist of different subpopulations. Upon intervention, these subpopulations might exhibit heterogeneous responses. Besides a change in their feature profile, the subpopulations may also show changes in their particle counts. To simulate such heterogeneous intervention responses, we generate a dataset containing a two-dimensional mixture of Gaussians with three clusters in the source distribution $\mu$. The target distribution $\nu$ consists of the same three clusters, but with different cluster proportions. Further, each particle has undergone a constant shift in space upon intervention. We consider three scenarios with increasing imbalance between the three clusters (see Fig.~\ref{fig:fig_toy}a-c). We evaluate \textsc{NubOT} on the task of predicting the distributional shift from source to target, while at the same time correctly rescaling the clusters such that no mass is transported across non-corresponding clusters.
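A sketch of how such a dataset can be generated (the cluster centers, proportions, noise scale, and shift below are illustrative placeholders; the exact values we use are given in Table~\ref{tab:toy_setup}):

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 4.0]])  # three cluster centers
shift = np.array([1.0, 1.0])             # constant shift applied upon intervention
p_src = np.array([1 / 3, 1 / 3, 1 / 3])  # source cluster proportions
p_tgt = np.array([0.6, 0.3, 0.1])        # target proportions (imbalanced scenario)

def sample(n, props):
    labels = rng.choice(len(props), size=n, p=props)
    return means[labels] + 0.3 * rng.normal(size=(n, 2)), labels

source, _ = sample(600, p_src)           # mixture-of-Gaussians source mu
target, _ = sample(600, p_tgt)           # same clusters, different proportions
target = target + shift                  # every particle shifted upon intervention
```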
\begin{figure}
\centering
\includegraphics[width=1.05\textwidth]{figures/fig_toy_plot_numbers.pdf}
\caption{\looseness -1 \textbf{Unbalanced sample mapping}. In all three scenarios (a,b,c), the source (gray) and target (blue) datasets share structure but have different shifts and per-cluster sampling proportions. Tasked with mapping from source to target, \textsc{NubOT} and \textsc{ubOT GAN} predict the locations (middle pane, red) and weights (right pane) of the transported samples. The number next to the weights denotes the mean weights per cluster. While both methods map the samples to the correct location, \textsc{NubOT} more accurately predicts the weights needed to match the target distribution, creating mass (dark blue) or destroying it (red) as needed.}
\label{fig:fig_toy}
\end{figure}
\paragraph{Results.}
\looseness -1 The results (setup, predicted Monge maps and weights) are displayed in Fig.~\ref{fig:fig_toy}. Both \textsc{NubOT} and \textsc{ubOT GAN} correctly map the points to the corresponding target clusters without transporting mass across clusters. \textsc{NubOT} also accurately models the change in cluster sizes by predicting the correct weights for each point. In contrast, \textsc{ubOT GAN} only captures the general trend of cluster growth and shrinkage, but does not learn the exact weights required to re-weight the cluster proportions appropriately. The exact setup and calculation of weights can be found in the \S\ref{sec:add_exps} (see Table~\ref{tab:toy_setup}).
\subsection{Single-cell Perturbation Responses} \label{sec:exp_cells}
Through the measurement of genomic, transcriptomic, proteomic or phenotypic profiles of cells, and the identification of different cell types and cellular states based on such measurements, biologists have revealed perturbation responses and response mechanisms which would have remained obscured in bulk analysis approaches \citep{green2016systems, liberali2014hierarchical, kramer2022multimodal}. However, single-cell measurements typically require the destruction of the cells in the course of recording.
\begin{wrapfigure}{r}{0.34\textwidth}
\begin{center}
\vspace{-10pt}
\includegraphics[width=0.4\textwidth]{figures/fig_corr_both_weights-fraction_celltypes.pdf}
\end{center}
\caption{Given the ground truth on the known subpopulation (MelA (red) and Sox9 (blue)) sizes for each drug, we analyze their level of correlation to our predicted weights after \textbf{a.} 8h and \textbf{b.} 24h. With increasing difficulty of the task and certain drugs completely removing both or one of the subpopulations, the level of correlation reduces from 8 to 24h.}
\label{fig:summed_weights}
\vspace{-0.75cm}
\end{wrapfigure}
\looseness -1 Thus, each measurement provides us only with a \textit{snapshot} of cell populations, i.e., samples of a probability distribution that evolves over the course of the perturbation, from control $\mu$ (source) to perturbed cell states $\nu$ (target).
Using \textsc{NubOT} and the considered baselines, we learn a map $T$ that reconstructs how individual cells respond to a treatment.
The effect of a single perturbation frequently varies depending on the cell type or cell state, and may include the induction of cell death or proliferation. In the following, we evaluate whether \textsc{NubOT} is able to capture and predict heterogeneous proliferation and cell death rates of a co-culture of two melanoma cell lines through $\eta$ and $\zeta$ in response to 25 drug treatments.
\begin{figure}[t]
\centering
\includegraphics[width=1.05\textwidth]{figures/fig_comp_mmd_weighted.pdf}
\caption{\looseness -1 Distributional fit of the predicted perturbed cell states to the observed perturbed cell states for each drug and timestep, measured by a weighted version of kernel MMD on a set of held-out control cells. For \textsc{NubOT} and \textsc{ubOT GAN}, MMD is weighted by the predicted weights, while for the other baselines it is computed with uniform weights. \textsc{Observed} corresponds to a random permutation of the observed control cells, i.e., its distribution is approximately the same as the observed cells.}
\label{fig:comp_mmd_weighted}
\end{figure}
\looseness -1 The single-cell measurements used for this task were generated using the imaging technology 4i \citep{gut2018multiplexed} over the course of 24 hours, resulting in three different unaligned snapshots ($t=0h$, $t=8h$ and $t=24h$) for each of the drug treatments.
The control cells, i.e., the source distribution $\mu$, consist of cells taken from a mixture of melanoma cell lines at $t=0h$ that are exposed to dimethyl sulfoxide (DMSO) as a vehicle control. Further, we consider two different target populations $\nu$ capturing the perturbed populations after $t=8h$ and $t=24h$ of treatment, respectively.
As both cancer cell lines exhibit different sensitivities to the drugs \citep{raaijmakers2015new}, their proportion (Fig.~\ref{fig:cell_counts_relative}) as well as the total cell counts (Fig.~\ref{fig:cell_counts_total}) vary over the time points.
Both cell lines are characterized by the expression of mutually exclusive protein markers, i.e., one cell line strongly expresses a set of proteins detected by an antibody called MelA (MelA$^+$ cell type), while the other is characterized by high levels of the protein Sox9 (Sox9$^+$ cell type). An evaluation of this cell line annotation can be found in Fig.~\ref{fig:cell_types_check_8h} (8h) and Fig.~\ref{fig:cell_types_check_24h} (24h).
Contrary to the synthetic task in \S~\ref{sec:exp_synth}, the nature of these measurements destroys the ground truth matching.
We thus use insights from the number of cells after 8 and 24 hours of treatment (Fig.~\ref{fig:cell_counts_relative},~\ref{fig:cell_counts_total}), as well as the cell type annotation for each cell to further evaluate \textsc{NubOT}'s performance.
A detailed description of the dataset can be found in \S~\ref{sec:cell_datasets}.
\paragraph{Results.}
\looseness -1 We split the dataset into a train and test set and train \textsc{NubOT} as well as the baselines on unaligned unperturbed (control) and perturbed cell populations for each drug. During evaluation, we then predict out-of-sample the perturbed cell state from held-out control cells. Details on the network architecture and hyperparameters can be found in \S~\ref{sec:hyperparams}.
\textsc{NubOT} and \textsc{ubOT GAN} additionally predict the weight associated with the perturbed predicted cells, giving insights into which cells have proliferated or died in response to the drug treatments.
First, we compare how well each method fits the observed perturbed cells on the level of the entire distribution. For this, we measure the weighted version of kernel maximum mean discrepancy (MMD) between predictions and observations. More details on the evaluation metrics can be found in \S~\ref{sec:eval_metrics}.
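As an illustration, a biased estimator of the squared kernel MMD in which the predicted source-side weights enter as (normalized) sample weights could look as follows; the Gaussian kernel and its bandwidth are placeholder choices, and the exact estimator we use is specified in \S~\ref{sec:eval_metrics}:

```python
import numpy as np

def weighted_mmd2(X, Y, w, sigma=1.0):
    """Biased squared-MMD estimate with a Gaussian kernel; the predicted
    weights w reweight the samples in X, while Y uses uniform weights."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                      # normalize predicted weights
    v = np.full(len(Y), 1.0 / len(Y))    # uniform weights for observations

    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    return w @ gram(X, X) @ w - 2 * w @ gram(X, Y) @ v + v @ gram(Y, Y) @ v
```

For the uniformly weighted baselines, passing `w = np.ones(len(X))` recovers the standard biased MMD estimate.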
The results are displayed in Fig.~\ref{fig:comp_mmd_weighted}. \textsc{NubOT} outperforms all baselines in almost all drug perturbations, showing its effectiveness in predicting OT maps and local variation in mass.
\looseness -1 In the absence of a ground truth and in particular, given our inability to measure (i.e., observe) cells which have died upon treatment, we are required to base further analysis of \textsc{NubOT}'s predictions on changes in cell count for each subpopulation (MelA$^+$, Sox9$^+$).
Fig.~\ref{fig:cell_counts_relative} clearly shows that drug treatments lead to substantially different cell numbers for each of the subpopulations compared to control. For example, Ulixertinib leads to the proliferation of both subpopulations after 8h, but to pronounced cell death in Sox9$^+$ and strong proliferation in MelA$^+$ cells after 24h.
We thus expect that the weights predicted by \textsc{NubOT} for all drugs correlate with the change in cell counts for each cell type (here measured as population fractions).
This is indeed the case: Fig.~\ref{fig:summed_weights} shows a high correlation between the observed cell counts of the two cell types and the sum of the predicted weights of the respective cell types after 8h of treatment for all drugs.
After 24 hours, treatment-induced cell death (in at least one cell type) caused by some drugs can be so severe that the number of observed perturbed cells becomes too low for accurate predictions and for evaluating the task (Fig.~\ref{fig:cell_counts_total}). Further, we find that drugs influence the abundance of the cell line markers MelA and Sox9, complicating cell type classification (see Fig.~\ref{fig:cell_types_check_8h} and Fig.~\ref{fig:cell_types_check_24h}).
We exclude drugs falling into these categories and find that, while the correlation between predicted weights and observed cell counts is reduced after 24h (see Fig.~\ref{fig:summed_weights}b), \textsc{NubOT} still captures the overall trend.
\begin{figure}[t]
\centering
\includegraphics[width=1.05\textwidth]{figures/fig_umaps_ulixertinib.pdf}
\caption{UMAP projections of the control cells for Ulixertinib at \textbf{a.} 8h and \textbf{b.} 24h. Cells are colored by the observed and predicted protein marker values (Ki67, MelA), and the predicted weights. \textsc{NubOT} thereby correctly predicts weights $\geq 1$ for proliferating cells in the MelA$^+$ population (\textbf{a.} and \textbf{b.}, right panel), and increased levels of cell death in the Sox9$^+$ population after 24h via weights $\leq 1$ (\textbf{b.}, right panel), confirmed by the experimental observations (see Fig.~\ref{fig:cell_counts_relative}). \vspace{-15pt}}
\label{fig:umap_ulixertinib}
\end{figure}
\looseness -1 The data further provides insights into biological processes such as apoptosis, a form of programmed cell death induced by enzymes called Caspases (ClCasp3). While dead cells become invisible in the cell state space (they cannot be measured), \textit{dying} cells are still present in the observed perturbed sample and can be recognized by high levels of ClCasp3 (the apoptosis marker).
Conversely, the protein Ki67 marks proliferating cells.
Analyzing the correlation of ClCasp3 and Ki67 intensity with the predicted weights provides an additional assessment of the biological meaningfulness of our results.
For example, upon Ulixertinib treatment, the absolute cell counts show an increase in Sox9$^+$ cells, and a decline of MelA$^+$ cells at 24h (Fig.~\ref{fig:cell_counts_relative}). Fig.~\ref{fig:umap_ulixertinib} shows UMAP projections of the control cells at both time points, colored by the observed and predicted protein marker values and the predicted weights.
At 8h, \textsc{NubOT} predicts only little change in mass, but assigns high weights to a few proliferative cells in areas marked by high values of the proliferation marker Ki67. At 24h, our model predicts cell death in the Sox9$^+$ (MelA$^-$) cell type, and proliferation in the MelA$^+$ cell type, which matches the observed changes in cell counts per cell type, as seen in Fig.~\ref{fig:cell_counts_relative} in \S~\ref{sec:add_exps}.
We identify similar results for Trametinib (Fig.~\ref{fig:umap_trametinib}), Ixazomib (Fig.~\ref{fig:umap_ixazomib}), and Vindesine (Fig.~\ref{fig:umap_vindesine}) which can be found in \S~\ref{sec:add_exps}.
These experiments thus demonstrate that \textsc{NubOT} accurately predicts heterogeneous drug responses at the single-cell level, capturing both cell proliferation and death.
\section*{Acknowledgments}
C.B. was supported by the NCCR Catalysis (grant number 180544), a National Centre of Competence in Research funded by the Swiss National Science Foundation. L.P. is supported by the European Research Council (ERC-2019-AdG-885579), the Swiss National Science Foundation (SNSF grant 310030\_192622), the Chan Zuckerberg Initiative, and the University of Zurich. G.G. received funding from the Swiss National Science Foundation and InnoSuisse as part of the BRIDGE program, as well as from the University of Zurich through the BioEntrepreneur Fellowship.
\section{Background}
\label{sec:background}
\subsection{The Spiking Neuron Model}
The discrete time leaky integrate-and-fire~(LIF) model follows:
\begin{equation}
U^{(l+1)}[t+1] = U^{(l+1)}[t]\left(1-\frac{1}{\tau_m}\right)\left(1-S^{(l+1)}[t]\right)+\mathbf{W}^{(l)}a^{(l)}[t]
\label{eq:1}
\end{equation}
\begin{equation}
a^{(l+1)}[t+1] = a^{(l+1)}[t]\left(1-\frac{1}{\tau_s}\right)+\frac{1}{\tau_s}S^{(l+1)}[t+1]
\label{eq:2}
\end{equation}
\begin{equation}
S^{(l+1)}[t] = H\left(U^{(l+1)}[t]-\vartheta\right), H(x)=\begin{cases}
0,~ x<0 \\ 1,~ x\geq 0
\end{cases}
\label{eq:3}
\end{equation}
The compute graph is shown on the left side of Figure~\ref{fig:compute_graph}. We can safely eliminate the variable $S$ by substituting equation~(\ref{eq:3}) into equations~(\ref{eq:1}) and~(\ref{eq:2}), which gives:
\begin{equation}
U^{(l+1)}[t+1] = U^{(l+1)}[t]\left(1-\frac{1}{\tau_m}\right)\left[1-H\left(U^{(l+1)}[t]-\vartheta\right)\right]+\mathbf{W}^{(l)}a^{(l)}[t]
\label{eq:4}
\end{equation}
\begin{equation}
a^{(l+1)}[t+1] = a^{(l+1)}[t]\left(1-\frac{1}{\tau_s}\right)+\frac{1}{\tau_s}H\left(U^{(l+1)}[t+1]-\vartheta\right)
\label{eq:5}
\end{equation}
This turns the compute graph into the one shown on the right side of Figure~\ref{fig:compute_graph}.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{compute_graph.jpg}
\caption{The two compute graphs}
\label{fig:compute_graph}
\end{figure}
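A direct simulation of Eqs.~(\ref{eq:1})--(\ref{eq:3}) for a single layer reads as follows (a minimal sketch; the weights and inputs in any usage are arbitrary illustrations):

```python
import numpy as np

def lif_forward(W, a_in, tau_m=6.0, tau_s=2.0, theta=1.0):
    """Simulate one LIF layer. a_in has shape (T, n_in): the synaptic traces
    a^{(l)}[t] of the previous layer; W has shape (n_out, n_in)."""
    T, n_out = a_in.shape[0], W.shape[0]
    U = np.zeros(n_out)                  # membrane potential U^{(l+1)}
    a = np.zeros(n_out)                  # synaptic trace a^{(l+1)}
    S = np.zeros(n_out)                  # spikes from the previous step
    spikes = []
    for t in range(T):
        # Eq. (1): leak, reset where the last step spiked, integrate input
        U = U * (1 - 1 / tau_m) * (1 - S) + W @ a_in[t]
        # Eq. (3): Heaviside threshold
        S = (U >= theta).astype(float)
        # Eq. (2): low-pass filter the spike train into the trace
        a = a * (1 - 1 / tau_s) + S / tau_s
        spikes.append(S.copy())
    return np.array(spikes)
```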
\subsection{The Loss Function}
We use the van Rossum distance with kernel $\epsilon$ to measure the difference between the desired output spike train $\boldsymbol{d}$ and the actual output spike train $\boldsymbol{S}$:
\begin{equation}
L = \sum_{k=0}^{N_t}E_{t_k} = \sum_{k=0}^{N_t}\frac{1}{2}\left((\epsilon\ast\boldsymbol{d})[t_k]-(\epsilon\ast\boldsymbol{S})[t_k]\right)^2
\label{eq:6}
\end{equation}
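A discrete sketch of this loss, taking $\epsilon$ to be the exponential kernel implied by the trace filter of Eq.~(\ref{eq:2}) (this particular kernel choice is an assumption for illustration):

```python
import numpy as np

def van_rossum_loss(d, S, tau_s=2.0):
    """Eq. (6): half the summed squared distance between filtered spike trains.
    d, S: binary arrays of length N_t."""
    n = len(d)
    # Discrete exponential kernel matching Eq. (2): eps[k] = (1/tau_s)(1-1/tau_s)^k
    eps = (1 / tau_s) * (1 - 1 / tau_s) ** np.arange(n)
    fd = np.convolve(d, eps)[:n]   # (eps * d)[t_k]
    fS = np.convolve(S, eps)[:n]   # (eps * S)[t_k]
    return 0.5 * np.sum((fd - fS) ** 2)
```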
\section{Conclusion and future works}
\label{sec:conslusion}
In this short article, we add the previously missed reset dependency between time steps using the surrogate gradient. The new term yields better performance in the single-neuron experiment when the learning rate is larger than 0.005. However, on real datasets including MNIST, NMNIST, and CIFAR-10, adding this reset dependency does not help.\par
\section{Experiments}
\label{sec:experiments}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{toy_experiment.pdf}
\caption{Single neuron example, and corresponding different compute phases of BP gradient}
\label{fig:toy_experiment}
\end{figure}
\subsection{Toy Experimental Setup}
\label{sec:toy_setup}
We simulate a single neuron with fixed random input spike currents and a fixed fictitious random target spike train. We train the neuron with the standard BP algorithm for 200 iterations. All parameters are summarized in Table~\ref{table:1}:\par
\begin{table}[h]
\centering
\caption{Important parameters of the single neuron back-propagation experiment}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\#Input neuron&50&Desired spike-train $p$&1/20&$\tau_m$&6&Sigmoid $T$&0.3\\
\hline
\#Time steps&100&Input spike-train $p$&1/10&$\tau_s$&2&Default lr&0.005\\
\hline
\end{tabular}
\label{table:1}
\end{table}
In Table~\ref{table:1}, the variable $p$ denotes the probability of generating a spike at each time step when initializing the fictitious input or the target output spike trains.
As shown on the right side of Figure~\ref{fig:toy_experiment}, the neuron's output successfully converged to the desired output around the 30th iteration.\par
On the left side, four figures show the four computing phases of the gradients (all 200 iterations overlapped). Figure A shows the difference between the current and the desired output. This difference is then propagated to the spike trains in Figure B. Since all previous spikes contribute to later currents, each spike in the new graph gains a left tail compared to Figure A. Multiplying by the surrogate gradient of $\partial S/\partial U$ turns Figure B into the $grad_{u_s}$ shown in Figure C. Finally, Figure D adds the timing dependency from the missing term $grad_{u_t}$ mentioned above onto $grad_{u_s}$. \par
\subsection{Toy Experiment Results}
We repeat this toy experiment multiple times and record the loss curves. Figure~\ref{fig:toy_compare} compares the loss curves before and after adding the reset term. The solid lines are the averaged losses, and the dashed lines are one standard deviation above/below the averages. After adding the reset term, the algorithm performs better when the learning rate is $>0.005$, as shown in the first row of Figure~\ref{fig:toy_compare}, and has competitive performance with smaller learning rates.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{example_comp.pdf}
\caption{Toy example comparison between with/without the reset dependency}
\label{fig:toy_compare}
\end{figure}
\subsection{Larger datasets}
We summarize our current results in Table~\ref{table:2}. The parameters of the CNNs we used are shown in Table~\ref{table:3} below:
\vspace{-0.3cm}
\begin{table}[h]
\centering
\caption{Performance comparison on larger datasets}
\begin{tabular}{c|c|c}
Datasets& With reset term accuracy(\%) & Without reset term accuracy(\%)\\
\hline
NMNIST&99.29&99.28\\
MNIST&99.49&99.50\\
CIFAR10& 88.96 & 88.98\\
\end{tabular}
\label{table:2}
\end{table}
\vspace{-0.5cm}
\begin{table}[h]
\centering
\caption{CNN's parameters}
\begin{tabular}{c|c|c}
Datasets& Network Size & Time steps\\
\hline
NMNIST & 12C5-P2-64C5-P2 & 5\\
MNIST & 15C5-P2-40C5-P2-300 & 30\\
CIFAR10 & 96C3-256C3-P2-384C3-P2-384C3-256C3-1024-1024 & 5\\
\end{tabular}
\label{table:3}
\end{table}
\section{Introduction}
SNNs are regarded as the third generation of artificial neural networks, and their potential is yet to be discovered. Despite their high power efficiency compared with traditional ANNs, their application is constrained by the low accuracy caused by the non-continuous spiking behavior of SNNs. How to boost SNN performance and find suitable application scenarios are the main focuses of current research.
\section{Methods}
\label{sec:methods}
Following the computation graph without the variable $S$ and the basic back-propagation-through-time (BPTT) strategy, we get:
\begin{equation}
\frac{\partial L}{\partial U^{(l)}[t]}=\underbrace{\frac{\partial L}{\partial a^{(l)}[t]}\frac{\partial a^{(l)}[t]}{\partial U^{(l)}[t]}}_{\textbf{Spatial}}+\underbrace{\frac{\partial L}{\partial U^{(l)}[t+1]}\frac{\partial U^{(l)}[t+1]}{\partial U^{(l)}[t]}}_{\textbf{Temporal}}
\label{eq:7}
\end{equation}
Activation-based methods such as SLAYER\cite{shrestha2018slayer}, STBP\cite{wu2018spatio}, SuperSpike\cite{zenke2018superspike}, and ANTLR \cite{kim2020unifying} choose different approaches to approximate the \textbf{Spatial} term. As for the \textbf{Temporal} part, we derive it from equation~(\ref{eq:4}) as follows:
\begin{equation}
\frac{\partial U^{(l)[t+1]}}{\partial U^{(l)[t]}} = \left(1-\frac{1}{\tau_m}\right)\left\{\left[1-H\left(U^{(l)}[t]-\vartheta\right)\right] + \color{red}{U^{(l)}[t]\left[-\frac{\partial H\left(U^{(l)}[t]-\vartheta\right)}{\partial U^{(l)}[t] }\right]}\color{black}{}\right\}
\end{equation}
The red colored term is missed in all previous works. This term is ill-defined in the same way as the term ${\partial a^{(l)}[t]}/{\partial U^{(l)}[t]}$ in the \textbf{Spatial} term: the derivative of $H(x)$ with respect to $x$ is the Dirac delta function, which is zero almost everywhere and infinite only at $x=0$. However, this term encodes an important dependency of the membrane potential between time steps.
Like all previous works, we use the surrogate gradient method to approximate the Heaviside step function's gradient:
\begin{equation}
H\left(U^{(l)}[t]-\vartheta\right) \approx \sigma\left.\left(U^{(l)}[t]-\vartheta\right)\right|_T = \frac{1}{1+e^{\frac{-\left(U^{(l)}[t]-\vartheta\right)}{T}}}
\label{eq:9}
\end{equation}
\begin{equation}
\frac{\partial H\left(U^{(l)}[t]-\vartheta\right)}{\partial U^{(l)}[t] } \approx \frac{1}{T} \left.\sigma\left(U^{(l)}[t]-\vartheta\right)\right|_T * \left(1-\sigma\left.\left(U^{(l)}[t]-\vartheta\right)\right|_T\right)
\label{eq:10}
\end{equation}
The new surrogate gradient term describes the timing dependency introduced by the reset mechanism: when the membrane potential is near the threshold, a small perturbation towards the threshold can change the membrane potential of the next time step dramatically.
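The surrogate terms of Eqs.~(\ref{eq:9}) and (\ref{eq:10}), together with the temporal derivative including the red reset term, can be sketched as follows (using the toy-experiment parameters $\vartheta=1$, $T=0.3$, $\tau_m=6$ as defaults):

```python
import numpy as np

def sigmoid_surrogate(u, theta=1.0, T=0.3):
    """Eq. (9): sigmoid relaxation of H(u - theta) with temperature T."""
    return 1.0 / (1.0 + np.exp(-(u - theta) / T))

def surrogate_grad(u, theta=1.0, T=0.3):
    """Eq. (10): dH/du approximated by the sigmoid derivative."""
    s = sigmoid_surrogate(u, theta, T)
    return s * (1.0 - s) / T

def temporal_factor(u, theta=1.0, T=0.3, tau_m=6.0):
    """dU[t+1]/dU[t] = (1 - 1/tau_m) * [(1 - H) - u * dH/du];
    the second product is the reset term missed in previous works."""
    return (1 - 1 / tau_m) * ((1 - sigmoid_surrogate(u, theta, T))
                              - u * surrogate_grad(u, theta, T))
```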
\section{Results and discussion}
{\it Energetic stability and dynamical stability.} We identify only six BHC (AlPt, AuBa, AuLi, AuRb, LiPt, LuPt) and 13 BSQ (AlPt, AuBa, AuCa, AuCs, AuK, AuLi, AuRb, AuSr, BaPt, LuPt, PtSc, PtSr, PtY) structures having negative $E_j$, and find that these compounds, except for BHC AlPt and LiPt, are dynamically unstable. For example, BSQ LuPt has the lowest value of $E_{\rm BSQ}=-0.47$ eV among the 19 compounds. Despite this, imaginary frequencies are observed around the point X and the middle of the $\Gamma$-M line, which confirms the instability, as shown in Fig.~\ref{fig2}(a).
Next, we focus on BHC AuCu. Although $E_{\rm BHC}({\rm AuCu})$ has a positive value (0.84 eV), BHC AuCu is dynamically stable, as shown in Fig.~\ref{fig2}(b). This supports the recent experiment \cite{zagler}, in which BHC AuCu was synthesized on a graphene substrate. Note that using the GGA-PBE functional \cite{pbe} for AuCu is known to underestimate the formation energy of the ordered phases (L1$_0$ and L1$_2$) by a factor of 2. The use of a nonlocal functional is necessary to predict the value of the formation energy correctly \cite{nonlocalpbe}. It will be more important to study whether the substrate can produce a negative formation energy in this system, although such an investigation is beyond the scope of this work.
Through the phonon dispersion calculations on BSQ LuPt and BHC AuCu, we point out that negative $E_j$ is neither a sufficient nor necessary condition for obtaining dynamically stable compounds in the structure $j$. Similar stability properties have been reported in silicene and germanene \cite{cahangirov} and 2D simple metals \cite{ono2020}.
\begin{figure}[tt]
\center
\includegraphics[scale=0.45]{Fig_3.eps}
\caption{Plots of $E_j(XY)$ for 3D structures $j$ as a function of $E_{\rm BHC}(XY)$.} \label{fig3}
\end{figure}
\begin{table*}
\begin{center}
\caption{List of compounds that satisfy the conditions (C1) and (C2) described in text. ``SE'' and ``NR'' indicate that the B$_h$ structure has been ``synthesized experimentally'' and has ``not been reported'', respectively. ``DS'' and ``U'' indicate that the BHC structure is ``dynamically stable'' and ``unstable'', respectively. The figure in the parentheses is the number of compounds that match the conditions imposed. }
{
\begin{tabular}{ll}\hline\hline
C1 / C2 \hspace{10mm} & Compounds \\
\hline
B$_h$ SE / BHC DS \hspace{10mm} & IrLi \cite{varma_LiIr}, LiPd \cite{vucht_LiPd}, LiPt \cite{bronger_LiPt}, and LiRh \cite{sidhu_LiRh} (4) \\
B$_h$ SE / BHC U \hspace{10mm} & (0) \\
B$_h$ NR / BHC DS \hspace{10mm} & AgAl, AgAu, AgPd, AgPt, AuCd, CdMg, CrIr, CrRh, FeGa, GaMg, IrMo, IrRe, IrRu, IrTc, IrW \\
& \hspace{0mm} IrZn, MgSn, MoRh, OsRe, OsRu, ReTc, RhTc, RuTc, and ScZr (24) \\
B$_h$ NR / BHC U \hspace{10mm} & AgZn, AlTi, AuTi, CaZn, GaTc, GaV, LiMg, MgZn, MoPt, OsV, PtRe, and PtTc (12) \\
\hline\hline
\end{tabular}
}
\label{table1}
\end{center}
\end{table*}
\begin{figure}[tt]
\center
\includegraphics[scale=0.45]{Fig_4.eps}
\caption{The phonon dispersion curves of (a) BHC AlCu and (b) B$_h$ AlSn. } \label{fig4}
\end{figure}
{\it Synthesizability and dynamical stability.} Although the energetic stability is not a good indicator of dynamical stability, it can be used to study the similarity of compounds between different structures \cite{nevalaita}. Figure \ref{fig3} shows the relationship of $E_j$ between the BHC, B2, L1$_0$, and B$_h$ structures. The value of $E_{\rm BHC}$ is correlated with that of the 3D structures. In particular, a strong correlation between the BHC and B$_h$ structures is observed: the correlation coefficients for the linear fit are 0.58 (B2), 0.59 (L1$_0$), and 0.78 (B$_h$). In a previous study \cite{ono2020}, we demonstrated that AlCu in the B$_h$ structure is dynamically stable, although such a structure has not yet been synthesized. In the present study, we calculate the phonon dispersion curves of AlCu in the BHC structure. As shown in Fig.~\ref{fig4}(a), no imaginary frequencies are observed. The fact that both structures are dynamically stable must be due to the structural similarity between BHC and B$_h$, as both structures are constructed by stacking the hexagonal lattice.
In order to study the metastability of B$_h$ compounds in the BHC structure, we choose binary compounds that can have a B$_h$ structure (space group of P$\bar{6}$m2), a negative formation energy, and zero band gap, referring to the Materials Project database \cite{materialsproject} and using the \texttt{pymatgen} code \cite{pymatgen}. We find that 40 compounds satisfy these conditions and identify that among them 28 compounds are dynamically stable. Table \ref{table1} lists the number of compounds satisfying the conditions: (C1) The B$_h$ structure has been synthesized experimentally and (C2) the BHC structure is dynamically stable. The Li-based compounds of IrLi, LiPd, LiPt, and LiRh satisfy both conditions (C1) \cite{varma_LiIr,vucht_LiPd,bronger_LiPt,sidhu_LiRh} and (C2). No compounds are found for satisfying only the condition (C1). Therefore, this establishes the structure-stability relationship: If a compound in the B$_h$ structure has been synthesized experimentally, that in the BHC structure is dynamically stable (B$_h\rightarrow$ BHC). We can also find that 12 compounds are unstable (see Table \ref{table1}), and those in the B$_h$ structure have not been synthesized experimentally \cite{materialsproject}. Therefore, the contraposition of ``B$_h\rightarrow$ BHC'' also holds. The phonon dispersions of these compounds in the BHC structure are provided in the SM \cite{SM}.
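The database screening described above amounts to a simple filter, sketched below on mock entries; the dictionaries and field names are illustrative stand-ins for Materials Project records (in practice one would fetch them with \texttt{pymatgen}), not the actual API:

```python
# Mock records standing in for Materials Project entries; the formulas,
# energies, and field names here are illustrative, not real database values.
entries = [
    {"formula": "LiPt", "spacegroup": "P-6m2", "e_form": -0.1, "band_gap": 0.0},
    {"formula": "AlSn", "spacegroup": "P-6m2", "e_form": 0.05, "band_gap": 0.0},
    {"formula": "GaN", "spacegroup": "P6_3mc", "e_form": -1.2, "band_gap": 1.7},
]

def screen(entries):
    """Keep B_h-type (space group P-6m2) compounds with negative formation
    energy and zero band gap (metallic), as in the text."""
    return [e["formula"] for e in entries
            if e["spacegroup"] == "P-6m2"
            and e["e_form"] < 0
            and e["band_gap"] == 0.0]

print(screen(entries))  # -> ['LiPt']
```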
This confirms again that a negatively large (or positively small) formation energy is neither a sufficient nor a necessary condition for the dynamical stability of compounds. The values of $E_{\rm BHC}$ for the stable Li-based compounds IrLi, LiPd, LiPt, and LiRh are $E_{\rm BHC}=0.67, 0.08, -0.21$, and $0.74$ eV, respectively. Among the 40 compounds listed in Table \ref{table1}, BHC FeGa has the largest value, $E_{\rm BHC}=2.06$ eV, irrespective of its dynamical stability.
We investigate the dynamical stability of 46 Li-based compounds (Li$X$) in the BHC structure by calculating the phonon dispersions. We confirm that 15 BHC Li$X$ compounds are dynamically stable when $X$ is a group 3 (Sc, Y, and Lu), group 9 (Co, Rh, and Ir), group 10 (Ni, Pd, and Pt), group 11 (Cu and Ag), or group 13 (Al, Ga, and Tl) metal (BHC Li is also stable \cite{ono2020}), as provided in the SM \cite{SM}. This strongly suggests that these dynamical stabilities are correlated with the group in the periodic table.
When the condition of negative formation energy is not imposed in the search for B$_h$-type compounds \cite{pymatgen}, we find 96 compounds that consist of only metallic elements. In addition to the Li-based compounds mentioned above, only B$_h$ AlSn has been synthesized experimentally \cite{kane_AlSn}. However, B$_h$ AlSn is not a compound but a solid solution, which is consistent with the instability of AlSn in the B$_h$ structure, as shown in Fig.~\ref{fig4}(b). We also investigate the dynamical stability of AlSn in three different BHC structures: non-buckled, low-buckled, and high-buckled. As expected, all three structures are unstable. The phonon dispersion curves for these structures are shown in the SM \cite{SM}. When the PBE functional revised for solids (PBEsol) \cite{pbesol} is used, AlSn in the B$_h$ and BHC structures is also unstable.
The present investigation implies that there are two possible scenarios for the dynamical stability of compounds in the BHC structure. One is a 2D analog of the B$_h$ structure (that is, the hexagonal symmetry is preferred), as in the four Li-based compounds listed in Table \ref{table1}. The other may be more complex: although the ground state structure is different from the B$_h$ structure, a few or more metastable structures that can be correlated with the BHC structure are hidden in the potential energy surface, which can yield a dynamically stable BHC structure. While this is still a phenomenological discussion, we believe that the latter scenario explains the stability of the other 24 compounds (see Table \ref{table1}), as well as AuCu \cite{zagler} and AlCu \cite{ono2020}.
In order to investigate the possibility that the BHC phase transforms into the BSQ phase, we also study the dynamical stability of the BSQ phase for AlPt, AlCu, the 15 Li-based compounds above, and the 24 compounds listed in Table \ref{table1}. When the BSQ phase is dynamically stable, we include the contribution from the vibrational free energy. We confirm that no phase transition between BHC and BSQ occurs with increasing temperature (see the SM for details \cite{SM}).
Before closing, we discuss how to synthesize 2D compounds experimentally. An appropriate substrate may be needed to create 2D systems that have either ordered, disordered, or more complex structures \cite{overbury,nielsen,yuhara_Rh,dhaka,yuhara_BiSn,yuhara_Ru,yuhara_Ag,yuhara_Al,sad,zagler}. The strain stored \cite{tersoff} and the coordination number of atoms \cite{nielsen,ono2021} around the surface will be modulated by the presence of the substrate, yielding stable 2D systems. For example, Pb and Sn, which are immiscible with each other, can form ordered structures on Rh(111) and Ru(0001) \cite{yuhara_Rh,yuhara_Ru}, disordered structures on Ag(111) \cite{yuhara_Ag}, and no mixed phase on Al(111) surfaces \cite{yuhara_Al}.
{\it Conclusion and future prospects.} We have demonstrated that (i) a negative formation energy is neither a sufficient nor a necessary condition for yielding dynamically stable 2D compounds, as in LuPt and AuCu; and (ii) given the synthesizability of a compound in the B$_h$ structure, the BHC structure is dynamically stable. We have identified 41 different binary compounds as candidates having the BHC structure (see the SM \cite{SM}).
The present strategy for finding 2D compounds, relating different structures in different dimensions, can be extended to other 2D structures. For example, the instability of LuPt (see Fig.~\ref{fig2}(a)) is due to the different ground state structure of the 3D crystal: Lu and Pt form the L1$_2$ structure. We consider that, as a counterpart of L1$_2$, other 2D structures must be present. In this respect, the origin of the dynamical stability of the 24 compounds listed in Table \ref{table1}, as well as of L1$_0$ AuCu, shown in Fig.~\ref{fig2}(b) and reported in Ref.~\cite{zagler}, remains unclear. In the present investigation, we have focused on compounds that consist of only metallic elements. We expect that the stability relationship between the B$_h$ and BHC structures can also be applied to other B$_h$ compounds that include other elements such as C, N, and S (230 compounds or alloys \cite{pymatgen}) and that further analyses, with the help of 2D materials databases \cite{ashton,choudhary,haastrup,feng}, can lead to the establishment of another stability-synthesizability relationship.
\begin{acknowledgments}
The authors thank M. Aoki for fruitful discussions. The computation was carried out using the facilities of the Supercomputer Center, the Institute for Solid State Physics, the University of Tokyo, and the supercomputer ``Flow'' at the Information Technology Center, Nagoya University.
\end{acknowledgments}
\section{Introduction}
The combination of chiral perturbation theory ($\chi$PT) and non-perturbative methods inspired by
$S$-matrix theory plays an important role in the study of resonances nowadays~\cite{uchpt1}.
A straightforward application of this approach is to determine resonance properties
such as masses and widths. A deeper understanding of resonance structure can be obtained
by combining the $1/N_C$ expansion of QCD with $\chi$PT~\cite{uchpt2}.
A novel ingredient in our current study is the singlet $\eta_1$, which is massive
even in the chiral limit due to the $U_A(1)$ anomaly. This ingredient has commonly been ignored
in previous works on the $N_C$ trajectories of resonance poles~\cite{uchpt2}.
Since the $\eta_1$ becomes the ninth pseudo-Goldstone boson in the combined chiral
and large-$N_C$ limit, and hence a relevant degree of freedom of low-energy QCD at large $N_C$,
it is necessary to take it into account when studying resonance properties at large $N_C$.
In our study, this gap is filled by using $U(3)$ $\chi$PT, which incorporates the massive singlet $\eta_1$
as an explicit degree of freedom.
\section{Theoretical setup}
The current study of resonance properties is based on the complete one-loop
calculation of meson-meson scattering within $U(3)$ $\chi$PT by explicitly
including the tree level exchanges of scalar and vector resonances.
The perturbative calculation incorporates not only the genuine meson-meson scattering diagrams, including the
loops and the resonance exchanges, but also the contributions from the wave function renormalizations, mass renormalizations,
pseudo-Goldstone weak decay constants and $\eta-\eta'$ mixing.
The master formula used to unitarize the perturbative $U(3)$ $\chi$PT results comes from
a simplified version of the N/D method derived in \cite{oller99prd}
\begin{eqnarray}\label{defnd}
{T_J^{I}}(s)^{-1} = N_J^I(s)^{-1}+ g(s)~,
\end{eqnarray}
where $T_J^{I}(s)$ denotes the unitarized partial-wave amplitude with well-defined isospin $I$
and angular momentum $J$. $N_J^{I}(s)$ collects the crossed-channel
cuts and can be constructed from the perturbative results
\begin{eqnarray}\label{defnfunction}
N_J^I(s) = {T_J^I}(s)^{\rm (2)+Res+Loop}+ T_J^I(s)^{(2)} \,\, g(s) \, \, T_J^I(s)^{(2)} \,,
\end{eqnarray}
where $T_J^I(s)^{\rm (2)+ Res + Loop}$ stands for the partial wave amplitude from the
perturbative calculation, with the superscripts ${\rm(2),~ Res}$ and Loop
denoting the leading order amplitudes, resonance exchanges and
loop contributions, respectively. The explicit expressions from the perturbative calculation can be
found in Ref.~\cite{guo11prd}. $g(s)$ contains the right-hand cuts, and its explicit
expression can also be found in \cite{guo11prd} and references therein.
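The master formula can be made concrete with a single-channel toy: given stand-ins for the leading-order amplitude, the full perturbative amplitude, and the loop function $g(s)$, Eqs.~(\ref{defnfunction}) and (\ref{defnd}) reduce to a few lines. The numerical inputs below are illustrative assumptions, not the actual $U(3)$ $\chi$PT amplitudes:

```python
# Single-channel sketch of the unitarization in Eqs. (1)-(2): the crossed-cut
# function N(s) is built from the perturbative amplitude, and the unitarized
# amplitude follows from T(s)^{-1} = N(s)^{-1} + g(s). All input functions
# below are illustrative toys.

def unitarize(T2, T_pert, g, s):
    N = T_pert(s) + T2(s) * g(s) * T2(s)  # Eq. (2)
    return 1.0 / (1.0 / N + g(s))         # Eq. (1)

T2 = lambda s: 0.5 * s                    # leading-order stand-in
T_pert = lambda s: 0.5 * s + 0.05 * s**2  # "(2)+Res+Loop" stand-in
g = lambda s: -0.02 - 0.10j               # loop function above threshold

T = unitarize(T2, T_pert, g, 1.2)
# Switching the loop function off must recover the perturbative input:
assert abs(unitarize(T2, T_pert, lambda s: 0.0, 1.2) - T_pert(1.2)) < 1e-12
```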
\section{Results and Discussions}
From the unitarized meson-meson scattering amplitudes, one can construct the phase shifts,
the modulus of the $S$-matrix, and invariant mass distributions~\cite{guo11prd}.
We then fit these quantities
to the experimental data to determine the unknown parameters in our model.
The resonance poles are found on the unphysical Riemann sheets, which are obtained by continuation
of the $T$-matrix from the physical sheet (the first sheet) to the unphysical ones.
In our study, the pole positions of three vector resonances are obtained: the $\rho(770)$ with $IJ=1\,1$,
the $K^*(892)$ with $IJ=\frac{1}{2}\,1$ and the $\phi(1020)$ with $IJ=0\,1$.
For the scalar resonances, we find the $\sigma$, $f_0(980)$ and $f_0(1370)$ in the $IJ=0\,0$ case,
the $a_0(980)$ and $a_0(1450)$ in the $IJ=1\,0$ case, and the $\kappa$ and $K^*(1430)$ for $IJ=\frac{1}{2}\,0$.
The masses and widths of these resonances are consistent with the PDG values~\cite{pdg}.
We have also calculated the couplings between the resonances and the pseudoscalar mesons.
For detailed results, see Ref.~\cite{guo11prd}.
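The continuation to unphysical sheets amounts to finding complex zeros of $T_J^I(s)^{-1}$; a generic sketch of such a pole search by Newton iteration, with a toy function standing in for the second-sheet inverse amplitude, is:

```python
# Pole search as a complex root-finding problem: a resonance pole on an
# unphysical sheet is a zero of f(s) = T(s)^{-1} = N(s)^{-1} + g_II(s).
# Here f is a toy function with a known zero at s = 0.6 - 0.1j; the
# derivative is taken numerically.

def newton_complex(f, s0, steps=50, h=1e-6):
    s = s0
    for _ in range(steps):
        df = (f(s + h) - f(s - h)) / (2.0 * h)
        s = s - f(s) / df
    return s

f = lambda s: (s - (0.6 - 0.1j)) * (1.0 + 0.3 * s)  # toy inverse amplitude
pole = newton_complex(f, s0=0.5 - 0.05j)
print(pole)  # converges to (0.6 - 0.1j)
```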
We then extrapolate $N_C$ from 3 to larger values and study the corresponding
behavior of the resonance poles. In this way, one can learn whether a resonance is a standard $\bar{q}q$
state by plotting the $N_C$ trajectories of its pole position. The basic criterion is that, in
large-$N_C$ QCD, the mass of a standard $\bar{q}q$ resonance is constant while its decay width decreases as $1/N_C$
when $N_C$ approaches infinity.
For $\rho(770)$, $K^*(892)$, $f_0(980)$, $f_0(1370)$, $a_0(1450)$ and $K^*(1430)$,
we plot $\Gamma*N_C$ ($\Gamma$ denotes the width) and the masses of the resonances as functions
of $N_C$ in Figs.~\ref{fig:ncwidth} and \ref{fig:resmass}, respectively. It is clear that the resonances
appearing in these two figures all behave as standard $\bar{q}q$ resonances, since their
masses are more or less stable when varying $N_C$ and their decay widths decrease as $1/N_C$ for large values
of $N_C$. However, for the very broad resonances $\sigma$ and $\kappa$, we find that their pole positions in the
complex $s$ plane approach the negative real axis as $N_C$ increases.
In this case, it is not appropriate to interpret the imaginary part of $\sqrt{s}$ as the decay width.
In Fig.~\ref{fig:sigkappa}, we plot the real and imaginary parts of $s_\sigma$ and $s_\kappa$ as functions
of $N_C$, from which one can conclude that the $\sigma$ and $\kappa$ resonances in our study do not correspond to
standard $\bar{q}q$ resonances at large $N_C$.
For $a_0(980)$, both the width and the mass increase with increasing $N_C$, indicating that
it does not correspond to a standard $\bar{q}q$ resonance in our current study either.
\begin{figure}[Htb]
\begin{center}
\includegraphics[width=0.6\textwidth]{fig-ncwidth.eps}
\caption{Decay widths of resonances as functions of $N_C$. For $a_0(1450)$, we plot
$\Gamma*N_C/2$; for the others, we plot $\Gamma*N_C$.}
\label{fig:ncwidth}
\end{center}
\end{figure}
\begin{figure}[Htb]
\begin{center}
\includegraphics[width=0.6\textwidth]{fig-resmass.eps}
\caption{Masses of resonances as functions of $N_C$.}
\label{fig:resmass}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.8\textwidth]{fig-sigkappa-reims-midsimple.eps}
\caption{$N_C$ trajectories for the imaginary and real parts of $\sigma$ pole $s_\sigma$ and $\kappa$ pole $s_\kappa$. }
\label{fig:sigkappa}
\end{center}
\end{figure}
\acknowledgements{%
This work is partially funded by the DGICYT grant FPA2010-17806,
the Fundaci\'on S\'eneca grant with Ref.~11871/PI/09,
the EU-Research Infrastructure Integrating Activity ``Study of Strongly Interacting Matter" (HadronPhysics2, grant No.227431)
under the Seventh Framework Programme of EU, the Consolider-Ingenio 2010 Programme CPAN (CSD2007-00042),
Natural Science Foundation of Hebei Province with contract No. A2011205093
and Doctor Foundation of Hebei Normal University with contract No. L2010B04.
}
\section{Introduction}
A number of indicators point to the period from $z\approx 3$ to
$z\approx 1$ as the time when galaxies
assembled. This period of some 3 billion years witnessed
a maximum in the rate at which stars have formed and a
peak in number of quasars and powerful radio galaxies.
The luminous star forming galaxies that
trace the rise and fall of star formation through this epoch
represent a tiny fraction
of the protogalactic systems that exist at this time,
since neutral hydrogen clouds are known to have been abundant,
amounting to far more mass than was then in stars.
This leaves the bulk of the baryons destined to join
galaxies below the threshold for
viewing by today's telescopes, and it means that our perception of
this important epoch of history lacks a clear observational
picture of the sequence and timing of events that have occurred
in the coalescence of mass to form galactic potential wells and
their present contents of stars, gas and dust.
Theoretical simulations, which are successfully tuned to produce the $z=0$
large scale structure starting from initial conditions that are consistent
with the level of fluctuation in the microwave background, predict
mass distribution functions for the protogalaxies through this
period of galaxy formation.
These simulations, which are based on the gravitational collapse
of a Cold Dark Matter dominated Universe, demonstrate a hierarchical
clustering that leads to the desired $z\approx 0$ large-scale behavior and
shows galaxies forming by the dissipationless
merging of low mass dark matter mini-halos halos
and the subsequent accretion of condensing, dissipational gas
(cf. White \& Frenk 1991, Kauffman 1996, Ma et al 1997).
The postponement of the assembly of massive galaxies in the
CDM models is somewhat at odds with observations showing
powerful radio galaxies at redshift above $z=5$ (van Breugel et al 1999)
and evidence for large gaseous disks in well-formed potentials
at $z\approx 2.5$ (Prochaska \& Wolfe 1999). A crucial test of
the formation ideas will be to measure the sizes of galaxies and
their total dynamical masses as a function of redshift in order
to define time sequence of galaxy formation.
The next
section reviews the considerable indirect evidence that bears
on this epoch, leading to the conclusion that a large radio telescope,
capable of sensing the abundant cool gas confined to the
evolving potential wells, will clarify the history of galaxies
through this critical period. Subsequent sections address the
specific observational tests.
\section{Evolution of global properties $z=4$ to $z=0$}
\begin{figure}
\begin{center}
\includegraphics[width=15cm ]{hi_civ_q_sf.PS}
\caption{\small Cosmological density of neutral gas, incidence
of CIV absorption, comoving density of luminous QSOs, and mean
star formation rate as a function of redshift.
{\it Top panel.} Mean cosmological density of neutral gas,
$\Omega_g$, normalized to the critical density (Storrie-Lombardi et al
1996; Lanzetta et al 1995; Zwaan et al 1997 ($z=0$);
Turnshek 1998: dashed error bars).
{\it Upper middle panel.} Number of CIV metal-line absorption systems per unit redshift, $n(z)$ (Steidel 1990).
Hatched areas indicate the range ($0<q_o<1/2$) for unevolving
cross sections since $z=1.5$, beyond which redshift CIV can be
measured with ground-based telescopes.
{\it Lower middle panel.} Comoving density of optically selected QSOs:
filled squares from Schmidt et al 1994; open circles from Hewitt et al 1993.
$H_o = 50 $km~s$^{-1}$Mpc$^{-1}$, $q_o =1/2$
{\it Bottom panel.} Comoving star formation rate density
$M_{\odot}$yr$^{-1}$Mpc$^{-3}$ from Madau (1998) and references therein.
}
\label{hi_civ_q_sf.fig}
\end{center}
\end{figure}
A number of indicators trace global properties of the Universe
through the epoch of most vigorous assembly of galaxies.
Fig. 1 shows four of these plotted as a function of
redshift. (1)~The star formation rate SFR density computed for
color-selected, star forming galaxies (cf Madau et al 1996, Calzetti
and Heckman 1999) shows a steep rise with redshift to $z\approx 1$ with
a modest decline at higher redshifts. More recent observational
evidence (Steidel et al 1999) favors a flat SFR density
above $z=1$ to at least $z=4$, once corrections have been made for
extinction. (2)~The comoving space density of
luminous optically selected quasars reaches a maximum at $z\approx 2$,
implying that mass is being redistributed within the evolving
galaxies to efficiently feed nuclear activity.
(3)~Studies of absorption lines against the ultraviolet continua of bright
high redshift quasars provide probes of the comoving density of neutral
gas, through studies of the damped Lyman-$\alpha$ (DLa) line of HI (Wolfe
et al 1986), as
well as (4)~the ionized galaxy halo gas that is sensed in CIV (Steidel 1990).
Quasar absorption lines are especially relevant for the studies of
normal galaxies, since they are not biased toward the luminous objects
at the peak of the luminosity function for a chosen redshift, but
rather the quasar absorption lines provide a democratic selection of the
common, gas rich objects that represent the
less rapidly evolving protogalaxies
that are contain the bulk of the baryons destined to
eventually be locked into galaxies at $z=0$.
The DLa lines are especially relevant to the discussion
here, since the quantities of cool neutral gas sensed in these
high redshift absorbers exceed the local HI comoving density $\Omega_g(z=0)$
by a factor of at least five (Wolfe et al 1995, Lanzetta 1995), and
the HI is a viable target for radio studies in the 21cm line of HI.
The DLa class comprises neutral gas layers with H$^{0}$ column densities
above $2{\times}10^{20}$cm$^{-2}$, while the MgII selection chooses
column densities of H$^{0}$ down to levels ${\sim}10^{18}$cm$^{-2}$.
The HI content of the DLa population of absorbing clouds suffers from
uncertainty in the high-redshift regime. The same clouds that
absorb in the Lyman-$\alpha$ line, also show absorption by common metals
that originated in a prior generation of stars. The same generation of
stars will have produced dust, and the resulting extinction may
cause lines of sight with DLa absorbers to be selected against when
samples of high $z$ quasars are composed for spectroscopic study
(Pei et al 1999). A detailed appraisal of the extinction leads Pei
et al (1999) to conclude that the DLa statistics may underestimate the
neutral gas density at $z\approx 2.5$ by a factor of two to three, for
a total factor of 10 to 15 above the present density $\Omega_g(z=0)$.
A large uncertainty also remains in the low $0<z<1.7$
DLa measures of $\Omega_g(z)$
since the Lyman-$\alpha$ line is not shifted in the optical window
observable with ground based telescopes until $z>1.7$. This is a
period of galaxy evolution when the SFR subsided, perhaps in response
to depleted supplies of neutral gas from which to form stars. During this
period, star formation may have changed from being fueled with largely
primordial material to star formation relying on reprocessing an interstellar
medium built from the stellar mass loss (Kennicutt et al 1994) of
earlier generations of relatively long lived stars. An understanding
of this transition is important in predicting the gas-consumption time
(Roberts 1963) and the duration of star formation into the future.
These uncertainties could be eliminated by using a SKA class telescope
to observe the HI emission directly in a survey to redshift $z=1.5$
to perform the kind of HI census that is now being made for the local
Universe by surveys conducted with telescopes
such as Arecibo (Zwaan et al 1997) and
Parkes (Staveley-Smith et al 1996).
Currently operational telescopes map
the detailed gas kinematics of nearby spiral galaxies in
the 21cm emission line,
and the analyses show that galaxies rich in neutral gas
have well ordered rotation of a cool disk, much like the disk
of the Milky Way. The dynamical analysis of the velocity fields of
the quiescent rotating disks has been a important tool in the
measurement of the total mass in galaxies and has served to specify the
presence and distribution of dark matter in galaxies
(cf van Albada et al 1985,
Broeils 1992). In this application, the HI is a highly diagnostic tracer
of the galactic potential. Even the crude resolution obtained with
single-dish telescopes yields a ``mass indicator'' by measurement
of the profile width. The settled and quiescent nature of
neutral gas layers makes them a more reliable tracer of
gravitational potentials than emission line gas in HII regions
around star forming regions, where stellar winds and expanding
shock fronts compete with gravitation in determining the
gas kinematics.
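The profile-width ``mass indicator'' mentioned above can be sketched as follows; the inclination correction and all numerical values are illustrative assumptions, not measurements from the text:

```python
import math

# Dynamical mass from a 21cm profile width W and disk radius R:
# V_rot ~ W / (2 sin i), and M_dyn(<R) ~ V_rot^2 R / G.
G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def dynamical_mass(W_kms, incl_deg, R_kpc):
    v_rot = W_kms / (2.0 * math.sin(math.radians(incl_deg)))
    return v_rot ** 2 * R_kpc / G

# A Milky-Way-like disk: width 440 km/s at 60 deg inclination, R = 30 kpc
print(f"{dynamical_mass(440.0, 60.0, 30.0):.2e} M_sun")  # ~4.5e11 M_sun
```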
The apparent absence of isolated neutral gas clouds without an associated
galaxy in the nearby Universe (Zwaan et al 1997, Spitzak and
Schneider 1999) suggests that the neutral gas relies on
a confining potential to maintain the gas at sufficient density that the
it can remain neutral in the face of an ionizing background
radiation field. Perhaps under the conditions of ``adequate'' confinement
for preservation of neutrality, it is difficult to avoid the
instabilities that lead to cooling, collapse and star formation.
In such a picture, the shallower potential wells of the lower mass
dwarf galaxies would be the places where the HI is more gently
confined and evolutionary processes would generally proceed
more slowly. Indeed, this is the domain inhabited by the dimmest
of the gas-rich LSB galaxies. The expectation is that the
DLa clouds at high redshift must also be confined in order to
remain neutral, and they too will be tracers of their confining
potentials.
In many respects, the neutral clouds giving rise to
the DLa absorbers have similar properties to the interstellar media of gas
rich galaxies. They typically have a mix of cold clouds and a thinner,
turbulent medium (Briggs \& Wolfe 1983, Lanzetta \& Bowen 1992,
Prochaska \& Wolfe 1997) whose physical conditions vary
from mildly ionized to the more highly
ionized ``halo'' gas characterized by the CIV absorption lines
and represented by well studied lines of sight through the
Milky Way halo. Metal abundances in the DLa clouds show considerable
variance around the expected trend of enrichment over time (Pettini
et al 1997), and there is now evidence that the DLa clouds are a distinct
population (Pettini et al 1999)
apart from the active star forming galaxies found through
color selection (Steidel et al 1999). The onset of the CIV absorption
(see Fig.~\ref{hi_civ_q_sf.fig}) is another symptom of wide-spread
star formation, as metals are produced and redistributed in the ISM and halos
(Steidel 1990, Sembach et al 1999).
\section{Gas-rich clouds vs. active star forming galaxies}
\begin{figure}
\hsize .927in
\vglue .3in
\hglue 11cm\epsfxsize=\hsize\epsffile{ssa22_cw_c16.PS}
\hsize 1.5in
\vglue 1.6in
\hsize 4.05in
\vglue -3.2in
\hglue +1.2cm\epsfxsize=\hsize\epsffile{crosssection.PS}
\hsize 14.5cm
\caption{ Comparison of quasar absorption-line
cross sections for CIV, MgII-Lyman Limit, and damped
Lyman$-\alpha$ lines with the physical size of the optical emission
from a color-selected galaxy at $z\approx 3$ {\it top right}
(Giavalisco et al 1996a). The
$z\approx 3$ galaxy is centered in a 5$''$ diameter circle that
subtends 37.5 kpc ($H_o=75$~km~s$^{-1}$Mpc$^{-1}$, $\Omega=0.2$).
}
\label{crosssection.fig}
\end{figure}
An interesting comparison can be made between the observed sizes of the
high-$z$ star forming galaxies (Giavalisco et al 1996a) and the
interception cross-sections for uv absorption by different ions
(cf. Steidel 1993). The Lyman-break color-selection technique
for identifying the star forming galaxies produces candidates
with a density on the sky of ${\sim}1$~arcmin$^{-2}$ for objects
with redshifts predominantly in the range $2.6\leq z\leq 3.4$
(Steidel et al 1998).
The comoving density of $L\geq L_*$ galaxy ``sites,'' computed for this redshift
range, amounts to ${\sim}2$~arcmin$^{-2}$ (for a cosmological model
with $\Omega_o=0.2$). Fig.~\ref{crosssection.fig} shows
the cross section for absorption lines that every
$L_*$ galaxy site would necessarily present, if ordinary galaxies
are to explain the observed incidence of absorption lines.
Thus, absorption line statistics indicate $\sim$2 times
the absorption cross sections shown in Fig.~\ref{crosssection.fig}
for every Lyman-break galaxy. At low and intermediate redshifts ($z<1$),
the association of the metal line absorbers with the outer regions
of galaxies has been well established for the MgII class of
absorber by observation of galaxies close to the lines of sight
to the background quasars (Le Brun et al 1993). The CIV selected systems are
consistent with galaxy halo cloud properties (Petijean \& Bergeron
1994, Guillemin \& Bergeron 1997),
and ionized high velocity cloud (HVC) analogs for the
CIV cloud population exist in the halo of the Milky Way
(Sembach et al 1999). A few of the
DLa absorbers are also identified with galaxies at intermediate redshift
(Steidel et al 1994, Steidel et al 1995). The optical identification
of the DLa systems has gone more slowly than for the MgII, for
example, because DLa absorbers are rarer (as indicated by the
relative cross sections), and the majority of the surveys for
DLa systems have been conducted with ground-based telescopes, which
find $z>1.7$ systems that are difficult to associate with
galaxies. Curiously, some of the studies of the lowest redshift DLa absorbers
have failed to provide any optical identification to sensitive
limits (Rao \& Turnshek 1998).
The overall implication of a comparison between the physical
sizes of the Lyman-break galaxies and the absorption-line
cross sections for the
high-$z$ Universe is that there is a substantial population
of metal-enriched gaseous objects,
possibly accompanied by a tiny pocket of
stellar emission, that can well go unnoticed
in deep optical images. The nature of these invisible absorbers remains
a puzzle. Do the basic skeletons of today's $L^*$ galaxies
exist already at redshifts greater than 3 as partially filled
gravitational potential wells of dark matter -- each well binding
a star forming nuclear region, a large disk of neutral gas,
and an extended halo structure of
ionized, metal-enriched gas? Or,
are the statistical cross sections for absorption plotted
in Fig.~\ref{crosssection.fig} actually the integrals of many much
smaller absorption cross sections created by much
less strongly bound, small clouds that
coalesce steadily since $z\approx 5$ to form the large galaxies at $z=0$?
In the latter case, the star-forming Lyman Break population would
represent a very tiny fraction of the protogalaxy population. Will all
galaxies pass through such a star-forming phase, or will they be
accreted onto the LB objects? Would the tiny DLa clouds need to
be bound in dark matter mini-halos in order to avoid
photoionization as has occurred in the intergalactic medium?
Are the DLa clouds clustered together or are they bound to the outskirts of
the LB galaxies? These are important questions since the
DLa population appears to be a gravitationally confined population
containing enough baryons to produce the stellar matter at $z=0$,
and if these clouds are largely cool and unevolved, their
kinematic and dynamic properties can be
studied directly in no way other than at radio wavelengths.
\section{The big questions}
\begin{figure}
\begin{center}
\includegraphics[width=13cm ]{dla_comp.PS}
\caption{\small Comparison of cross sections presented
in neutral gas. These are views of the coverage of the sky by
DLa clouds out to $z=4$. {\it Left} Large disk galaxies. {\it Right} Tiny
protogalactic mini-halos. Galaxies at $z=0$ to 2 are lightly
shaded; $z=2$ to 3 are most heavily shaded; $z=3$ to 4 have medium
shading. Panels are 1 arcmin square, and the top axis gives the
physical scale that applies at $z=2$.
($H_o=75$~km~s$^{-1}$Mpc$^{-1}$, $\Omega=0.2$)
}
\label{schematic.fig}
\end{center}
\end{figure}
The big question is centered on the sequence for construction of
the galaxy population. Do galaxies form in a hierarchical manner
as described by the CDM simulations? Can merging and accretion be
gentle enough to produce the cool flattened disk galaxies like
the spiral population at $z\approx 0$?
Fig.~\ref{schematic.fig} is a schematic view of the observational
consequences for the two scenarios -- one with large objects already
in place at high $z$ and one relying on hierarchical merging.
In order to build up the cross section
required by the DLa statistics
at $z\approx 2.5$, there must be several disks per unit
$\Delta z$ per square arcminute along a randomly chosen direction.
The left panel in Fig.~\ref{schematic.fig} illustrates this
idea with disks whose comoving number density is constant over time
from
$z=4$ to $z=0$
and whose diameters are adjusted to
match the DLa interception probabilities (Lanzetta et al 1995).
There is an ${\sim}$50\% probability of interception by a DLa
over a redshift path from 0 to 4.
The right panel in Fig.~\ref{schematic.fig} has individual
spherically shaped clouds that decline in radius toward higher
redshifts as $r\propto (1+0.7z)^{-3/2}$, while maintaining the
same integral interception cross section per unit redshift as in the
left panel. This requires that there be many more clouds to
build up the required integral cross section.
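The bookkeeping behind the right panel can be sketched directly: if cloud radii shrink as $r\propto(1+0.7z)^{-3/2}$ while the covering cross section per unit redshift is held fixed, the required number of clouds grows as the inverse square of the radius (the $z=0$ normalization below is arbitrary):

```python
# If cloud radii shrink as r(z) = r0 (1 + 0.7 z)^(-3/2) while the covering
# cross section per unit redshift, n * pi * r^2, is held fixed, the number
# of clouds must scale as n(z)/n(0) = (1 + 0.7 z)^3.

def radius(z, r0=1.0):
    return r0 * (1.0 + 0.7 * z) ** -1.5

def number_ratio(z):
    """n(z)/n(0) required for a constant covering cross section."""
    return (radius(0.0) / radius(z)) ** 2

for z in (0.0, 2.0, 4.0):
    print(z, round(number_ratio(z), 1))  # 1.0, 13.8, 54.9
```

At $z=4$ the clouds must therefore be roughly 55 times more numerous than at $z=0$.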
The metal line systems, such as CIV, have larger cross section, as shown in
Fig.~\ref{crosssection.fig}, and a similar diagram plotted for CIV statistics
would provide near complete coverage of the sky, with a significant
probability of multiple CIV absorbers along a single line of sight,
as is observed (Steidel 1990).
The angular size scales of the DLa absorbers range
over a few arcseconds, which is a good match to the sub-arcsecond
resolution obtained by a radio interferometer array with baselines
of a few kilometers to a few hundred kilometers. As depicted in
Fig.~\ref{crosssection.fig}, these objects are everywhere in the
sky, so a sufficiently sensitive telescope would find them in
surveys of randomly chosen fields. A further consequence of
the existence of vast numbers of these objects is that they
should frequently be seen in absorption against background
radio sources. Roughly one half of the radio galaxies and
quasars at $z \geq 4$ will lie behind DLa clouds.
The prospects for making definitive emission
and absorption experiments are discussed in the following
sections. These observations include detection and mapping
of HI emission from individual high redshift $z>3$ galaxies,
as well as statistical studies of the HI content contained in
sub-classes of optically selected objects. Measuring 21cm
absorption against background sources might be an effective
way to settle the question of spatial extent and dynamical
mass content of the DLa population; the absorption experiments
could be tackled during the next few years -- with existing
telescope arrays.
A consequence of hierarchical formation models is that galaxies
should undergo repeated merging and accretion events. At $z\approx 0$
the most noticeable merging systems are also bright far infrared
emitters and often are hosts for OH megamasers. The brightest megamasers
are so luminous that they could be seen throughout the $z<5$ Universe.
This may provide a means to directly measure the galaxy merging rate as
a function of time, without bias due to dust obscuration.
\section{Monitoring the HI content of the Universe}
\begin{figure}
\begin{center}
\includegraphics[width=13cm ]{SKA_detec.PS}
\caption{\small Detection rate for HI and OH emission
from high redshift galaxies, assuming no evolution, as a function
of observed frequency. Top axes indicate corresponding redshifts for the
HI 21cm and OH 18cm emission lines. Left vertical
axis indicates detections ster$^{-1}$MHz$^{-1}$, and right vertical
axis has detections deg$^{-2}$ per 20 MHz. Detection rates are computed
for detection thresholds of 1000, 200, 5, and 0.75$\mu$Jy.
Cosmological models are: {\it dotted} $\Omega_m= 1$, $\Omega_{\Lambda}= 0$,
{\it solid} $\Omega_m= 0.2$, $\Omega_{\Lambda}= 0$,
{\it dashed} $\Omega_m=0.02$, $\Omega_{\Lambda}= 0$,
{\it dot-dash} $\Omega_m=0.2$, $\Omega_{\Lambda}= 0.8$.
Large dots are drawn for the $\Omega_m= 0.2$, $\Omega_{\Lambda}= 0$
models to indicate the maximum redshifts where $M_{HI}^*$ and $0.1M_{HI}^*$
galaxies could be detected in the HI line.
}
\label{detecrate.fig}
\end{center}
\end{figure}
The expectation for detection of HI-rich galaxies at different
redshifts is straightforward to calculate. To provide the
framework for these estimates, Fig.~\ref{detecrate.fig} summarizes
simulations of the
number of detected signals per unit of survey solid angle per
unit of radio frequency. These units are chosen to facilitate
comparison between the sky density of OH megamasers and more
common but less luminous emission from ordinary galaxies in
the 21cm line. The detection rates are computed for a range
of cosmological models. Computed sensitivity levels assume
signal profile widths of 300 km~s$^{-1}$ with optimally smoothed
spectra. The detection threshold levels are: 1 mJy, 0.2 mJy, 5$\mu$Jy,
and 0.75$\mu$Jy. The levels at 5$\mu$Jy and 0.75$\mu$Jy are 7$\sigma$
for 8 and 360 hour integrations respectively with a telescope with the nominal
specifications suggested for SKA (Taylor \& Braun 1999).
The calculation is based on the number density of
galaxies of given HI mass described by the HI mass function of
Zwaan et al (1999), truncated at the low mass end at $10^5$M$_{\odot}$.
This mass function follows a Schechter form, which has a power-law rise
toward low masses with index $\alpha=1.2$ and a knee at
$M_{HI}^* = 10^{9.8}$M$_{\odot}$ ($H_o=75$ km~s$^{-1}$~Mpc$^{-1}$)
above which the mass function
cuts off sharply with exponential dependence.
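The Schechter form just described, together with the $t^{-1/2}$ radiometer scaling that links the quoted 8 and 360 hour thresholds, can be sketched in a few lines (the normalization $\phi^*$ is a placeholder, since only the shape of the mass function is used here):

```python
import math

M_STAR = 10 ** 9.8    # knee of the HI mass function in Msun (H0 = 75 km/s/Mpc)
ALPHA = 1.2           # low-mass power-law index quoted in the text
PHI_STAR = 1.0        # normalization (placeholder; not a value from this paper)

def hi_mass_function(m_hi):
    """Schechter form: space density per unit (M/M*) interval."""
    x = m_hi / M_STAR
    return PHI_STAR * x ** (-ALPHA) * math.exp(-x)

def rms_noise(sigma_ref, t_ref, t):
    """Radiometer equation: rms noise scales as t**-0.5."""
    return sigma_ref * math.sqrt(t_ref / t)
```

The scaling reproduces the quoted thresholds: $5\,\mu$Jy in 8 hours implies $5\sqrt{8/360} \approx 0.75\,\mu$Jy in 360 hours.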
A SKA Deep Field HI survey would yield ${\sim}10^5$ gas-rich
galaxies from a 1 square degree field. The most numerous
detections would probably fall in the redshift range $0.8<z<2$. These
objects would be excellent tracers of large scale structure.
\subsection{Low redshift: $z<1$}
For low redshifts, the Zwaan et al mass function should provide a reasonably
reliable estimate of the detection rate.
Through the redshift range from $z\approx 1$ to $z=0$, the Deep Field
survey would observe the decline of the HI mass content
of the Universe, as mass is increasingly locked up in stars.
The difference between the star formation rate density (as measured in
optical surveys) and gas consumption rate will specify the
role of stellar mass loss in replenishing the ISM and prolonging
star formation into the future.
\subsection{Direct detection of protogalaxies at $z\geq 2$}
As a framework for discussion, the calculation of Fig.~\ref{detecrate.fig}
has adopted a constant
comoving density of non-evolving galaxies.
This cannot be correct
at high redshifts.
We know that there is more
neutral mass at $z\approx 2.5$ than at present. Whether this will also
translate into an increased number of detections above what is
specified by these calculations is not clear.
Whether the HI is parceled in large or small masses will be the
deciding factor.
In order to illustrate the difficulty in
measuring the small HI masses predicted by the hierarchical clustering
scheme, the figure has dots drawn to indicate the highest
redshift at which non-evolved $M_{HI}^*(z=0)$ and 0.1$M_{HI}^*(z=0)$
galaxies would be detected, for the $\Omega_m= 0.2$, $\Omega_{\Lambda}= 0$
cosmology. For a SKA Deep Field requiring 360 hours of integration,
the $M_{HI}^*$ galaxies are detected to redshift $z\approx 2.6$ and
a 0.1$M_{HI}^*$ galaxy to $z\approx 0.9$.
The goal of observing redshifts around $z\approx 2.5$ would
be to resolve the considerable uncertainty in the HI mass distribution.
The large excess of HI that we know exists could simply increase
all galaxy sites by a factor of 10 in mass, implying that a
0.1$M_{HI}^*$ galaxy at present evolved from a system of
1$M_{HI}^*$ at $z=2.5$, and these would be easily detected,
along with the 10$M_{HI}^*$ around the knee in the mass function.
On the other hand, a hierarchical picture would have increased
numbers, say by a factor of 10, of objects with some fraction
of the HI mass now measured in galaxies nearby. Note that as
galaxies merge and stars form, the stellar
populations form a
sink for neutral baryons -- merging ten $M_{HI}=10^9$M$_{\odot}$
protogalaxies is likely to produce a luminous $z=0$ galaxy
with only ${\sim}10^9$M$_{\odot}$ or less of HI mass, since
much of the mass is destined to be consumed in star formation.
Individual protogalactic clumps with $M_{HI}<10^8$M$_{\odot}$
would not be detected by even the long integrations of a
Deep Field survey. However, the coalescing clumps may be
clustered sufficiently to create detectable signals.
\subsection{A statistical measure of HI in the Lyman Break Population}
The detection of the small, protogalactic HI masses common to
hierarchical models at redshifts around $z\approx 3$ will
be difficult, even with a SKA class telescope. On the
other hand, considerable progress might be made with a straightforward
statistical method, much sooner than the
construction of a new radio telescope. Several current generation
aperture synthesis telescopes (the Westerbork Synthesis Radio Telescope
and the Giant Metrewave Radio Telescope in India)
are equipped to observe the 21cm line redshifted to $z\approx 3$.
The field of view of these telescopes can survey several square
degrees of sky in a single integration with sufficient angular
resolution to avoid confusion among the LB galaxies. If an adequate
catalog of LB galaxies could be constructed for such a synthesis field,
with of order $10^4$ identified LB objects
with celestial coordinates and redshifts,
then the radio signals could be stacked, to obtain a statistical measure
of the HI content of the LB population. This would allow the
``average HI content per LB galaxy site'' to be determined.
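The gain from stacking is easy to demonstrate with synthetic data: averaging $N$ spectra, each aligned to the rest frame using its catalogued optical redshift, suppresses the noise by $\sqrt{N}$ while the common line signal survives. A toy sketch (all numerical values are illustrative, not survey parameters):

```python
import random

random.seed(1)

N_GAL, N_CHAN = 400, 64          # catalogue size and channels per spectrum
LINE, NOISE = 0.3, 1.0           # line amplitude and per-channel rms (arbitrary units)
CENTRE = N_CHAN // 2

# Each galaxy: pure noise plus a weak line, already shifted to the common
# rest frame using its (assumed known) catalogue redshift.
spectra = []
for _ in range(N_GAL):
    spec = [random.gauss(0.0, NOISE) for _ in range(N_CHAN)]
    spec[CENTRE] += LINE
    spectra.append(spec)

# Stack: straight channel-by-channel average over the catalogue.
stack = [sum(s[i] for s in spectra) / N_GAL for i in range(N_CHAN)]

# Off-line channels estimate the residual noise, roughly NOISE / sqrt(N_GAL).
off = [v for i, v in enumerate(stack) if i != CENTRE]
rms = (sum(v * v for v in off) / len(off)) ** 0.5
```

The line, invisible in any single spectrum, stands out above the stacked rms, which is the sense in which the ``average HI content per LB galaxy site'' becomes measurable.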
\begin{figure}
\begin{center}
\includegraphics[width=10cm ]{SKA_snr.PS}
\caption{\small Signal to noise ratio for spectral measurements
of an $M_{HI}^*=10^{9.8}$M$_{\odot}$ galaxy as a function of
redshift. S/N ratios are computed for 30~km~s$^{-1}$ channels observing
galaxies with 300~km~s$^{-1}$ wide emission profiles.
Two cases are considered: {\it lower curve} a 8 hour SKA integration
reaching $\sigma= 3\mu$Jy and {\it upper curve} a 360 hour SKA
Deep Field integration.
}
\label{snr.fig}
\end{center}
\end{figure}
\section{Sizes and kinematics of the DLa Clouds}
\subsection{Emission observations}
For nearby galaxies, synthesis mapping techniques provide
measurements of the extent of the HI emission and kinematics
that help to clarify the structure of the galaxies and the
distribution of their dark matter component. These elegant maps and
subsequent analysis typically describe the nearby galaxies with peak
flux densities in the integral profiles of a few 100 mJy with
sensitivity levels around $\sigma \approx 1$~mJy per velocity
channel ($\sim$10 km~s$^{-1}$ wide).
The prospects for obtaining this level of sensitivity
with $S/N \geq 100$ in high redshift galaxies are not good.
Fig.~\ref{snr.fig} summarizes the expected $S/N$ for
$M_{HI}^*=10^{9.8}$M$_{\odot}$ galaxies as a function of
redshift. For this example, the galaxies have total
velocity spreads of 300~km~s$^{-1}$, which is observed with
channel spacing of 30 km~s$^{-1}$. A SKA 360 hour
integration cannot achieve $S/N>50$ for redshifts
greater than 0.6 and hits $S/N=10$ for $z=1.25$.
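The steep decline is mostly an inverse-square distance effect, which can be sketched with the standard $M_{HI} = 2.36\times10^{5}\,D^2 \int S\,dv$ relation. The cosmology below is an Einstein-de Sitter toy model chosen purely because its distance is analytic, not one of the cosmologies used in the figures:

```python
import math

C_KMS = 2.998e5    # speed of light, km/s
H0 = 75.0          # km/s/Mpc, as adopted in the text

def lum_dist_mpc(z):
    """Luminosity distance in an Einstein-de Sitter (Omega_m = 1) toy cosmology."""
    d_c = (2.0 * C_KMS / H0) * (1.0 - 1.0 / math.sqrt(1.0 + z))
    return (1.0 + z) * d_c

def peak_flux_mjy(m_hi_msun, z, width_kms=300.0):
    """Peak flux density (mJy) of a flat-topped profile of the given width,
    from M_HI = 2.36e5 * D_Mpc^2 * S_int, with S_int in Jy km/s."""
    s_int = m_hi_msun / (2.36e5 * lum_dist_mpc(z) ** 2)
    return 1e3 * s_int / width_kms
```

In this toy model an $M_{HI}^*$ galaxy yields ${\sim}0.5$ mJy at $z=0.1$ but only a few $\mu$Jy at $z=1$, which is why mapping emission at high redshift demands $\mu$Jy-level sensitivity.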
\subsection{Absorption observations}
Fortunately, definitive measurements can be obtained through
high spatial resolution observations of absorption against
extended background radio sources. Fig.~\ref{schematic.fig}
shows that roughly half of the radio sources with redshift
greater than 4 will lie behind DLa absorbing layers. SKA
sensitivities will permit absorption experiments against
random high redshift radio sources in every field. In fact,
a standard tool for determining a lower limit to redshifts
for optically blank field radio sources may be to examine the radio
spectrum for narrow absorption lines of redshifted HI.
Indeed, considerable progress in assessing the extent and kinematics of
the DLa class of quasar absorption line system can be made
before the commissioning of SKA
with minimal technical adaptation of existing radio facilities.
The technique requires background radio quasars or high redshift
radio galaxies with extended radio continuum emission. Some
effort needs to be invested in surveys to find redshifted
21cm line absorption against these types of sources. These
surveys can either key on optical spectroscopy of the quasars to
find DLa systems for subsequent inspection in the 21cm line, or
they can make blind spectral surveys in the 21cm line directly,
once the new wideband spectrometers being constructed
at Westerbork and the new Green Bank Telescope are completed. Then
radio interferometers with suitable angular resolution at the redshifted
21cm line frequency must be used to map the absorption against the
extended background source. This would involve interferometer
baselines of only a few hundred kilometers -- shorter than is
typically associated with VLBI techniques, but longer than
the VLA and GMRT baselines. The shorter spacings in the European
VLBI Network and the MERLIN baselines
would form an excellent basis for these
experiments, although considerable effort will be required to
observe at the interference riddled frequencies outside the
protected radio astronomy bands.
\begin{figure}
\begin{center}
\includegraphics[width=13cm]{3c196.PS}
\caption{\small Absorption by an intervening disk galaxy
against an extended background radio quasar. {\it Top:}
Contours of radio continuum emission (Lonsdale 1983 with
the outer radio contour taken from
the map of Oren as shown by Cohen et al 1996).
{\it Upper middle:} The velocity field and emission
profile expected for a disk galaxy.
{\it Lower middle:} A spectrum restricted to gas
lying in front of the background continuum; in principle, sensitive
mapping could measure the distribution and kinematics for these
clouds in absorption across the face of the radio source.
{\it Bottom:} The integral absorption spectrum obtained by
observing this source with a low angular resolution telescope.
}
\label{3c196.fig}
\end{center}
\end{figure}
Fig.~\ref{3c196.fig} shows an example of how these experiments might work.
The top panel shows contours for the radio source 3C196.
Brown and Mitchell (1983) discovered a 21cm line in absorption at $z=0.437$
against this source in a blind spectral survey. The object has
been the target of intensive optical and UV spectroscopy (summarized by
Cohen et al 1996), as well as HST imaging to identify
the intervening galaxy responsible for the absorption (Cohen et al
1996, Ridgway and Stockton 1997). Fig.~\ref{3c196.fig} includes a dashed
ellipse in the top panel to indicate the approximate
extent and orientation of the galaxy identification.
The second panel from the top in Fig.~\ref{3c196.fig} illustrates the 21cm line
emission spectrum typical of nearby HI-rich disk galaxies, observed
by a low resolution (``single-dish'') beam that does not resolve
the gaseous structure in the galaxy. The rotation of a galaxy
with a flat rotation curve produces the velocity field shown
to the left of the spectrum.
For disk systems observed in absorption, the information accessible to
the observer is more limited, since we can only learn about the
gas opacity and kinematics for regions that fall in front of the background
continuum. This restricts our knowledge to zones outlined in the
third panel of Fig.~\ref{3c196.fig}. The ``restricted emission'' spectrum is
drawn to illustrate what fraction of the galaxy's gas content might
be sensed by a sensitive synthesis mapping observation. A comparison
to the total gas content in the upper spectrum suggests that much of
the important information (velocity spread, for example)
would be measured by a synthesis map of the absorption against the
background source.
The single-dish spectrum of the absorption lines observed for an
object like 3C196 is weighted by the regions where the background
continuum has the highest brightness. As shown in the lower panel,
this weighting emphasizes the bright spots in the radio lobes.
Clearly, sensitive mapping will better recover the information
lost in the integral spectrum produced by a low angular
resolution observation. A preliminary look at recent observations
of the $z=0.437$ absorber in 3C196 can be found in de Bruyn et al
(1997).
\section{Direct measurement of the merger rate}
OH megamasers occur in the nuclear regions of merging and heavily
interacting
galaxies. The galaxies are characterized by disturbed morphologies,
strong far infrared emission and heavy extinction at the
center. The brighter OH megamasers can be detected easily at cosmological
distances. Due to the heavy obscuration, these objects are not
especially eye-catching, but their strong FIR flux has led to the
identification of a representative sample.
Briggs (1998) argued that the OH megamasers may be a useful tool
in direct measurement of the galaxy merger rate over time,
since these sources would be expected to turn up in radio
spectroscopic surveys, such as a SKA Deep Field. Simply counting the
number of OH megamasers per volume as a function of redshift would
specify the number of galaxies in the merging phase at that time.
The selection is both immune to obscuration and unbiased with respect
to redshift since the entire radio spectrum (aside from regions of
strong RFI) can be covered with a single telescope.
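Counting megamasers per comoving volume then amounts to multiplying an assumed comoving density of hosts by the volume element per redshift interval. A sketch, again using an Einstein-de Sitter distance purely because it is analytic (the host density is an input assumption, not a measured value):

```python
import math

C_KMS = 2.998e5    # speed of light, km/s
H0 = 75.0          # km/s/Mpc

def comoving_dist_mpc(z):
    """Comoving distance in an Einstein-de Sitter toy cosmology (Mpc)."""
    return (2.0 * C_KMS / H0) * (1.0 - 1.0 / math.sqrt(1.0 + z))

def dv_dz_mpc3(z):
    """All-sky comoving volume per unit redshift, dV/dz = 4 pi D_C^2 dD_C/dz."""
    d_c = comoving_dist_mpc(z)
    dd_dz = (C_KMS / H0) * (1.0 + z) ** -1.5
    return 4.0 * math.pi * d_c * d_c * dd_dz

def mergers_per_dz(n_comoving_mpc3, z):
    """Detections per unit redshift for a constant comoving density of
    megamaser hosts (hypothetical density supplied by the caller)."""
    return n_comoving_mpc3 * dv_dz_mpc3(z)
```

Any evolution in the merger rate then appears directly as a departure of the observed counts from this constant-density baseline.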
Fig.~\ref{detecrate.fig} shows estimates for the detection
rate for OH megamaser sources for comparison with the HI emission
from ordinary galaxies. The OH calculations use a constant
comoving density of OH megamaser hosts and the local OH megamaser
luminosity function. For the sensitivity levels reached by
current radio telescopes (detection levels 0.2 to 1 mJy), the
OH detections should dominate for frequencies below $\sim$1000 MHz.
Given the expectation that mergers were more numerous in the
past, there may be a much higher detection rate for OH emission
through the range $1<z<3$ than shown in the figure (Briggs 1998).
\section{Conclusion}
Radio mapping in the redshifted HI line
with modest spatial resolution radio interferometers
promises to resolve basic questions about how galaxies
assemble and evolve. By observing the cool neutral gas that
traces gravitational potential wells of forming galaxies,
the 21cm line provides not only a measure of the neutral
gas content of the Universe over cosmic time scales but also
a method to weigh the dark matter halos.
\section*{Acknowledgements}
The author is grateful to A.G. de Bruyn and J.M. van der Hulst
for valuable discussions.
\section{Introduction}
Long-duration Gamma-Ray Bursts (GRBs) are thought to originate from the death of massive stars \citep{2006ARA&A..44..507W}. It is widely recognized that the study of GRBs provides important knowledge on the final evolutionary stage in the life of massive stars. Although the nature of GRBs remains elusive, one viable scenario to produce a GRB is the neutrino-driven collapsar model \citep{1993ApJ...405..273W,1999ApJ...524..262M}. The gravitational collapse of the rapidly rotating core is believed to create a rapidly spinning Kerr black hole. The copious neutrinos and antineutrinos emitted from the hot accretion disk annihilate and create an electron-positron fireball around the rotational axis \citep{1989Natur.340..126E}. The baryon-starved fireball is believed to give rise to a collimated relativistic outflow, which eventually produces a GRB after the jet has broken free from the stellar progenitor.
Over the years, substantial work has been done towards understanding whether neutrino-driven collapsar jets can produce the required relativistic outflow (see e.g. \citet{1999ApJ...524..262M,2003ApJ...588L..25F,2006ApJ...641..961L,2007ApJ...659..512N,2008ApJ...673L..43D,2009ApJ...692..804L,2010ApJ...720..614H,2011ApJ...737....6S,2011MNRAS.410.2385T}). However, there are still many open questions. In terms of the central engine, one of the largest uncertainties is the efficiency of energy deposition by neutrinos. Although numerical studies are a very powerful method to investigate the energy deposition rate by neutrinos and the subsequent evolution of the jet, one would need to solve general relativistic neutrino radiation hydrodynamics with microphysics for several tens of seconds after black hole formation. This is certainly challenging, as the typical lifetime of the central engine is roughly six orders of magnitude longer than the dynamical timescale of the nascent black hole. Computational resources are not yet available to perform such numerical studies. Because of these difficulties, a number of studies have thus far employed steady-state approximations \citep{1999ApJ...518..356P,1994ApJ...428L..13N,2002ApJ...577..311K,2002ApJ...579..706D,2005ApJ...629..341K,2006ApJ...643L..87G,2007ApJ...657..383C,2010ApJ...709..851L,2011MNRAS.410.2302Z,2012Ap&SS.337..711L} or performed hydrodynamic (or magnetohydrodynamic) simulations \citep{2005ApJ...632..421L,2007ApJ...664.1011J,2007PThPh.118..257S,2010ApJ...720..614H} with simplified microphysical treatments. It is interesting to note that \citet{2011MNRAS.410.2302Z} recently conducted general relativistic ray-tracing neutrino radiation transfer calculations and found that the energy deposition by neutrinos can be well described by a simple analytic formula. Thanks to these studies, the jet luminosity can be estimated qualitatively without the need for expensive numerical simulations.
It should be noted, however, that even if the neutrino-driven jet is successfully launched, this does not guarantee the production of a GRB. As a minimum requirement, the jet needs to penetrate the stellar envelope successfully; otherwise it would become non-relativistic and thus incapable of producing a GRB. Ever since the neutrino-driven collapsar model was proposed, a number of numerical studies of jet propagation have been performed (see e.g. \citep{2000ApJ...531L.119A,2003ApJ...586..356Z,2007ApJ...665..569M,2009ApJ...699.1261M,2011ApJ...731...80N}). In these studies, however, the jet luminosity was for simplicity assumed either to be constant or to directly follow the mass accretion rate (see e.g. \citet{2001ApJ...550..410M,2012ApJ...754...85N}). In order to judge whether neutrino-driven jets are capable of successfully breaking out of their progenitors, and to explore the effects of rotation, one needs to take into account how the jet power scales with the neutrino energy deposition generated by the accompanying accretion. In this paper, we present, for the first time, the propagation of neutrino-driven jets employing the neutrino energy deposition rates calculated by \citet{2011MNRAS.410.2302Z}. The evolution of the mass accretion rate and of the black hole mass and spin, all of which are necessary to evaluate the energy deposition by neutrinos, is followed here using an inner boundary condition in the simulation, although the current calculation is still not self-consistent since it fails to resolve the accretion disk and assumes no feedback (e.g. a disk wind). The purpose of this study is (1) to clarify whether neutrino-driven jets can successfully break out from the stellar surface and (2) to determine the progenitor's rotation rate necessary for successful jet breakout.
As we will show in this paper, the final outcome of the explosion depends sensitively on the rate of rotation, and differences in rotation rate could be responsible for the observational differences seen between GRBs, low-luminosity GRBs (LLGRBs) and failed GRBs.
\section{Methods and Models}
\begin{figure}
\vspace{15mm}
\epsscale{1.2}
\plotone{f1.eps}
\caption{Specific angular momentum (SAM) distribution of the models (Red: Mref, Green: M150, Blue: M70, Pink: M50). The cross marks denote the intersection between the SAM of the star and the SAM at the last stable orbit for a black hole with the mass and angular momentum enclosed inside the indicated coordinate.
\label{f1}}
\end{figure}
We perform two-dimensional, axisymmetric (and equatorially symmetric) relativistic hydrodynamical simulations of the accretion and subsequent jet propagation. The numerical code employed in this paper is essentially the same as those used in previous papers \citep{2011ApJ...731...80N,2012ApJ...754...85N}. The initial stellar density distribution is fixed to the 16TI model of \citet{2006ApJ...637..914W}. As in previous studies, we excise the inner portion of the star inside a certain radius. The self-gravity of matter in the active numerical region is calculated by solving the Poisson equation, and the monopole gravity of the excised region is added as a point mass. The mass accretion rate ($\dot{M}$) is estimated from the mass flow through the inner boundary (see Eq.~(1) in \citet{2012ApJ...754...85N}). The mass and angular momentum in the excised region are identified with the mass and angular momentum of the black hole. The time evolution of the black hole mass and spin is calculated by integrating the mass and angular momentum fluxes crossing the inner boundary. It should be noted that when the specific angular momentum (SAM) at the inner boundary in the equatorial region becomes larger than the SAM at the innermost stable circular orbit (ISCO) (see cross marks in Figure~\ref{f1}), we alter our prescription for calculating the angular momentum flux to:
\begin{eqnarray}
f_a = \dot{M} \times J_{ISCO} \label{eq:anguinteg},
\end{eqnarray}
where $f_a$ and $J_{ISCO}$ denote the angular momentum flux and SAM at the ISCO, respectively. This treatment comes from the fact that the angular momentum of matter in the disk is transported outwards due to the turbulent viscosity or non-axisymmetric waves, and finally the matter falls into a black hole with $\sim J_{ISCO}$.
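The $J_{ISCO}$ entering Eq.~(\ref{eq:anguinteg}) follows from the standard Kerr circular-orbit formulae of Bardeen, Press \& Teukolsky (1972). A minimal sketch for prograde orbits, with radii in units of $GM_{bh}/c^2$ and SAM in units of $GM_{bh}/c$ (the function names are ours, not from the paper):

```python
import math

def r_isco(a):
    """Prograde ISCO radius in units of GM/c^2 (Bardeen, Press & Teukolsky 1972)."""
    z1 = 1.0 + (1.0 - a * a) ** (1.0 / 3.0) * (
        (1.0 + a) ** (1.0 / 3.0) + (1.0 - a) ** (1.0 / 3.0))
    z2 = math.sqrt(3.0 * a * a + z1 * z1)
    return 3.0 + z2 - math.sqrt((3.0 - z1) * (3.0 + z1 + 2.0 * z2))

def j_isco(a):
    """SAM of a prograde circular orbit at the ISCO, in units of GM/c."""
    r = r_isco(a)
    return (r * r - 2.0 * a * math.sqrt(r) + a * a) / (
        r ** 0.75 * math.sqrt(r ** 1.5 - 3.0 * math.sqrt(r) + 2.0 * a))
```

For a non-rotating hole this gives $r_{ISCO}=6$ and $J_{ISCO}=2\sqrt{3}$; $J_{ISCO}$ decreases monotonically as the hole spins up.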
According to \citet{2011MNRAS.410.2302Z}, the jet luminosity from the neutrino process is given by:
\begin{eqnarray}
&L_{j} = 1.1 \times 10^{52} x_{ms}^{-4.8}( \frac{M_{bh}}{3 M_{\odot}} )^{-\frac{3}{2}} \nonumber \\
& \hspace{8mm} \times \begin{cases} 0 & (\dot{M} < \dot{M}_{ign}) \\
\dot{m}^{\frac{9}{4}} & ( \dot{M}_{ign} < \dot{M} < \dot{M}_{trap}) \\
\dot{m}_{trap}^{\frac{9}{4}} & (\dot{M} > \dot{M}_{trap}) \label{eq:neutrinolumi}
\end{cases}
\end{eqnarray}
where $\dot{m} \equiv \dot{M}/(M_{\odot}/s)$ and $x_{ms} \equiv r_{ms}/(2GM_{bh}/c^2)$ ($r_{ms}$ denotes the marginally stable orbit). $G$ and $c$ denote the gravitational constant and the speed of light, respectively. The characteristic mass accretion rates $\dot{M}_{ign}$ and $\dot{M}_{trap}$ are given as functions of the viscosity parameter ($\alpha$) and the Kerr parameter ($a$) (see \citet{2011MNRAS.410.2302Z}).
Throughout this paper, we set $\alpha=0.1$, and the Kerr-parameter dependence of these accretion rates is linearly interpolated in $x_{ms}$.
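For reference, Eq.~(\ref{eq:neutrinolumi}) can be evaluated with a few lines of code. This is a sketch only: the thresholds $\dot{M}_{ign}$ and $\dot{M}_{trap}$ must be supplied externally (hypothetical inputs here; \citet{2011MNRAS.410.2302Z} give them as functions of $\alpha$ and $a$):

```python
def jet_luminosity(mdot, m_bh, x_ms, mdot_ign, mdot_trap):
    """Neutrino-annihilation jet luminosity of Eq. (2), in erg/s.

    mdot, mdot_ign, mdot_trap are in Msun/s; m_bh is in Msun; x_ms is the
    marginally stable radius in units of 2*G*M_bh/c^2.
    """
    if mdot < mdot_ign:               # disk too cool to ignite neutrino emission
        return 0.0
    mdot_eff = min(mdot, mdot_trap)   # luminosity saturates once neutrinos are trapped
    return 1.1e52 * x_ms ** -4.8 * (m_bh / 3.0) ** -1.5 * mdot_eff ** 2.25
```

For a Schwarzschild hole ($x_{ms}=3$), a $3\,M_{\odot}$ black hole accreting at $1\,M_{\odot}$~s$^{-1}$ gives $L_j \sim 6\times 10^{49}$~erg/s; the steep $x_{ms}^{-4.8}$ dependence is what makes spin-up so effective at boosting the jet power.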
\begin{deluxetable*}{lccccccccccccc}
\tabletypesize{\scriptsize}
\tablecaption{Summary of our results \label{tab:model}}
\tablewidth{0pt}
\startdata
\hline\hline
Model~~ &
~Breakout~ &
~$t_{i}$~ &
~$t_{br}$~ &
~$L_{p}$~ &
~$E_{dg}$~ &
~$E_{j}$~ &
~$E_{j>L_{50}}$~ &
~$E_{j>L_{49.5}}$~ &
~$T_{j>L_{50}}$~ &
~$T_{j>L_{49.5}}$~ &\\
& & (s) & (s) & ($10^{50}$ erg/s) & ($10^{51}$erg)& ($10^{51}$erg) & ($10^{51}$erg) & ($10^{51}$erg) & (s) & (s) \\
\hline
Mref & yes & 10.9 & 27.8 & 1.9 & 1.4 & 7.4 & 4.4 & 5.6 & 27.8 & 46.2 \\
M150 & yes & 2.2 & 17.6 & 3.2 & 1.6 & 11.7 & 8.8 & 9.8 & 40.0 & 55.0 \\
M70 & yes & 15.5 & 45.5 & 1.0 & 0.9 & 3.6 & - & 1.7 & - & 21.7 \\
M50 & no & 21.6 & - & 0.6 & - & - & - & - & - & - \\
\enddata
\end{deluxetable*}
The spherically symmetric density distribution is mapped onto spherical coordinates. The computational domain covers from $10^{8}$ to $4 \times 10^{10}$cm. Note that the location of the inner boundary in the present study is ten times smaller than in previous jet propagation studies (see e.g. \cite{2009ApJ...700L..47L,2011ApJ...732...26M,2011ApJ...731...80N}). The evolution of the mass accretion rate, which is sensitive to the location of the inner boundary (see \cite{2012ApJ...754...85N}), can thus be better captured by our simulations. However, as a result, these simulations become rather computationally expensive, and we are only able to run them until the jet bow shock reaches the stellar surface or the black hole mass reaches $10 M_{\odot}$. The evolution of the post-breakout phase is then analyzed using a simple analytic formalism (see Eqs.~(\ref{eq:timeextrapo})-(\ref{eq:acrateana})). The results of an extended numerical simulation will nonetheless be compared with the analytical approach for the reference model in order to confirm that the analytical approach qualitatively captures the evolution of the jet dynamics (see Section~\ref{sec:subsecpostbreak} and Figure~\ref{f4}).
We employ the gamma-law equation of state with $\gamma = 4/3$. The jet injection parameters, such as the Lorentz factor and the specific internal energy, are the same as those used in the standard model of \citet{2012ApJ...754...85N}, where the initial Lorentz factor and specific internal energy are fixed to $\Gamma=400$ and $\epsilon=0.01$, respectively. It should be noted that for fixed $\Gamma$ and $\epsilon$, the overall jet dynamics depend solely on $\theta_{op}$ (see \citet{2012ApJ...754...85N}). In this study, we assume $\theta_{op} = 9^{\circ}$, which agrees well with the opening angles deduced for long GRBs \citep{2011arXiv1101.2458G}. The dependence of our results on $\theta_{op}$ will be discussed in Section~\ref{sec:results}.
A total of 1000 non-uniform radial grid points covers the entire computational region, while the meridional section is covered by 60 uniform grid points. A 3-level adaptive mesh refinement technique, similar to that used in \citet{2011ApJ...731...80N,2012ApJ...754...85N}, is also employed to reduce the computational cost.
We set up the stellar rotation in a manner similar to that used in \citet{2010ApJ...716.1308L}. The SAM distribution is separated into radial and polar components as $J(r,\theta) = j(r) \Theta(\theta)$, where $r$ and $\theta$ are the spherical radius and polar angle, respectively. In the reference model (Mref), $j(r)$ is given by the 16TI model. For the M150, M70 and M50 models, $j(r)$ is multiplied by 1.5, 0.7 and 0.5, respectively (see Figure~\ref{f1} for the SAM distribution of our models). The polar angle component corresponds to rigid-body rotation on shells, i.e., $\Theta(\theta) = \sin^2{\theta}$.
It is important to highlight that the simulations in this study do not cover the black hole accretion disk system. Even if the simulations covered the full domain, our numerical calculations could not treat the disk evolution appropriately, since the general relativistic effects and microphysics, which are important in determining the disk evolution, are not incorporated. However, the analytical formula proposed by \citet{2011MNRAS.410.2302Z} allows us to estimate the neutrino luminosity without resolving the black hole accretion disk system. Owing to this prescription, we can study the jet penetration phase as determined by the neutrino-driven energy injection. The study of the coupling between the black hole and the accretion disk system is beyond the scope of this paper.
We also note that the injection of the jet is delayed in the simulation as a result of the inner core not possessing enough angular momentum to create a disk \citep{2006ApJ...637..914W,2006ApJ...641..961L,2011MNRAS.410.2302Z}. Based on the standard neutrino-driven collapsar model (and the assumptions used in \citet{2007ApJ...657..383C,2011MNRAS.410.2302Z}), the central engine starts to operate after an accretion disk has formed around the black hole. Therefore, in our simulations, the jet is injected only when the SAM of matter at the inner boundary in the equatorial plane exceeds $J_{ISCO}$.
\section{Results}
\label{sec:results}
\subsection{Basic features and jet penetrability}
\label{sec:subsecbasic}
\begin{figure}
\vspace{15mm}
\epsscale{1.2}
\plotone{f2.eps}
\caption{The density contour (log scale) in the meridian section for each model at $t=17.6$s (left), $t=27.8$s (middle) and $t=46.5$s (right). Each time corresponds to the time of jet breakout for M150, Mref and M70, respectively (see also $t_{br}$ in Table~\ref{tab:model}). The spatial size of each box is $-5 \times 10^{9}$cm $< x < 5 \times 10^{9}$cm and $0 < z < 1 \times 10^{10}$cm. From top to bottom, the panels correspond to the M150, Mref, M70 and M50 models. Note that some panels for M150 and Mref are missing since we stopped the simulation at the time of breakout.
\label{f2}}
\end{figure}
The overall evolution of the collapse of the progenitor seen in our simulations is, not surprisingly, similar to that found in \citet{2011ApJ...731...80N,2012ApJ...754...85N}. The infall of the stellar envelope generates a rarefaction wave, which propagates outwards. The envelope contraction is almost identical among all models and is nearly spherical, since the centrifugal force plays a minor role. Note that we find the density distribution at the inner boundary to be slightly oblate, but this does not affect the subsequent jet propagation, although it might have consequences for the jet production (which is not properly simulated here).
During the jet propagation phase, on the other hand, the jet evolution differs greatly among the models (see Table~\ref{tab:model} and Figure~\ref{f2}). Our results are summarized in Table~\ref{tab:model}. For model M50, the forward shock wave does not move out and almost stagnates around the inner boundary despite the successful operation of the central engine ($t_{i}$ in Table~\ref{tab:model} denotes the time of initiation of the central engine). In fact, no collimated feature can be seen for M50 in the lowest panels of Figure~\ref{f2}. This is attributed to the fact that the jet power does not exceed the ram pressure of the inflowing material, so the forward shock wave stagnates or is advected inwards. For models with successful jet breakout, on the other hand, the jet also cannot move forward quickly after the initiation of the central engine. However, due to an increase in the Kerr parameter of the black hole over time, the jet power eventually exceeds the ram pressure of the inflowing material (Figure~\ref{f3}). Once the forward shock wave is able to move out, the jet interacts with the stellar mantle and gives rise to a cocoon. The hot cocoon aids jet collimation and helps to preserve the jet's strong outgoing momentum and energy flux, eventually leading to a successful breakout.
\begin{figure}
\vspace{15mm}
\epsscale{1.1}
\plotone{f3.eps}
\caption{From top to bottom: hemispherical neutrino luminosity, mass accretion rate, black hole mass, spin parameter and conversion efficiency from accretion energy to neutrino luminosity ($\eta \equiv L_{j}/\dot{M} c^2$) as a function of time from the onset of the collapse. Each color represents a different model. Note that model names with ``extend'' indicate results extrapolated by the analytic treatment (see text for more details).
\label{f3}}
\end{figure}
Figure~\ref{f3} shows the evolution of the hemispherical neutrino luminosity, mass accretion rate, black hole mass, Kerr parameter, and conversion efficiency from accretion energy to neutrino luminosity ($\eta \equiv L_{j}/\dot{M} c^2$) as a function of time from the onset of the collapse. The chief cause of the different jet propagation behavior is the sensitive dependence of the neutrino luminosity on the Kerr parameter (see Eq.~(\ref{eq:neutrinolumi})). For the rapidly rotating model, the angular momentum of the black hole is very large and increases with time (see the 4th panel in Figure~\ref{f3}), which produces a powerful jet as a result of the large neutrino energy deposition rates. It is also important to note that the onset time of the central engine ($t_i$) greatly affects the outcome of the explosion. As shown in Table~\ref{tab:model} and illustrated in Figure~\ref{f2}, jet production is significantly delayed for more slowly rotating models. This is because the neutrino luminosity is weaker for both smaller accretion rates and larger black hole masses (see Eq.~(\ref{eq:neutrinolumi}) for the dependence on $\dot{m}$ and $M_{bh}$). In fact, for M50, although the Kerr parameter reaches $\gtrsim 0.9$ by the end of our simulation, the neutrino luminosity is never large enough to push the forward shock wave outward.
Based on these results, we infer that neutrino-driven jets may not penetrate progenitors with extended envelopes, since a significantly large mass may accrete onto the black hole before jet breakout. The weak neutrino luminosity resulting from such a sizable increase in black hole mass could turn the jet into non-relativistic ejecta. A compact progenitor is therefore an inevitable requirement for a successful neutrino-driven jet breakout. For progenitors with massive envelopes, such as Pop III stars or red (blue) supergiants, a different process might be required to power GRBs (see also discussions in \citet{2011ApJ...726..107S}).
The most important result of this study is that the jet succeeds in breaking out of the star in all models except M50, which has the slowest rotation rate. The neutrino-driven jet with $\theta_{op}=9^{\circ}$ produced in a rapidly rotating compact Wolf-Rayet star can thus potentially give rise to a GRB. We also find that the time-averaged accretion-to-jet conversion efficiency among successful breakout models is roughly $\eta \sim 10^{-3}$, while $\eta$ for M50 never reaches $10^{-3}$ and the jet never breaks out. This result is roughly consistent with our previous work \citep{2012ApJ...754...85N}. The analytical criterion in \citet{2012ApJ...754...85N} also shows an opening-angle dependence for successful jet breakout: the threshold $\eta$ increases as $(\theta_{op})^2$. We therefore caution that jets with wider opening angles penetrate the star less easily than in the present results, which indicates that the threshold progenitor rotation rate depends on the jet opening angle.
\subsection{Post breakout phase: Analytical Formula}
\label{sec:subsecpostbreak}
As already mentioned, our numerical simulations are terminated at the time of jet breakout. However, it is interesting to extend the results of our numerical simulations to the post-breakout stage, which allows us to estimate the expected observational differences among the computed models (see Section~\ref{sec:subsecobser}).
We employ the following analytic approximations to follow the subsequent evolution after the jet breakout:
\begin{eqnarray}
t &=& t_{(b)} + \int_{r_{(b)}}^{r} \frac{dt_{ff}}{dr} (r) dr \label{eq:timeextrapo}\\
M_{bh} &=& M_{bh(b)} + \int_{r_{(b)}}^{r} \frac{dM_{r}}{dr} (r) dr \label{eq:massextrapo}
\end{eqnarray}
where
\begin{eqnarray}
t_{ff}(r) &=& \beta \sqrt{\frac{r^3}{GM_{r}}} \label{eq:freefallana}, \\
\dot{M}_{ff}(r) &=& \frac{dM_{r}/dr}{dt_{ff}/dr} \nonumber \\
&=& \frac{1}{\beta ^2}
\frac{8 \pi G M_{r}^2 t_{ff} \rho }
{3 M_{r} - 4 \pi r^3 \rho } \label{eq:acrateana}.
\end{eqnarray}
The above analytical estimate is essentially similar to the approach presented in \citet{2012ApJ...754...85N,2011ApJ...726..107S}. The free-fall time $t_{ff}(r)$ is determined under the assumption of a spherically symmetric envelope contraction.
Here, $t_{(b)}$ denotes the time of jet breakout. Note also that the functions $M_{r}(r)$ and $\rho(r)$ are extracted from the table of the 16TI model. $r_{(b)}$ is determined by the condition $M_{bh} (t_{(b)}) = M_r (r_{(b)})$. The non-dimensional parameter $\beta$ is chosen so that $\dot{M}(t_{(b)})$ equals $\dot{M}_{ff}(r_{(b)})$. With this procedure, the neutrino luminosity and mass accretion rate connect smoothly to the results of the numerical simulations. The evolution of the Kerr parameter is determined by Eq.~(\ref{eq:anguinteg}). Note that, since model M50 does not achieve jet breakout, there is no post-breakout phase for this model. Results of this analytical extension are also shown in Figure~\ref{f3}.
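As a concrete illustration of the extrapolation procedure, the sketch below evaluates the free-fall time and the resulting accretion rate on a toy power-law envelope standing in for the tabulated 16TI profile, using the standard free-fall scaling $t_{ff}\propto\sqrt{r^3/(GM_r)}$ and a numerical derivative in place of the closed-form $\dot{M}_{ff}$. The density normalization, power-law exponent, remnant mass, and $\beta=1$ are illustrative assumptions, not values taken from the simulations.

```python
import numpy as np

G = 6.674e-8  # gravitational constant in CGS, cm^3 g^-1 s^-2

# Toy power-law envelope standing in for the tabulated 16TI profile
# (grid range, rho normalization, and exponent are illustrative assumptions).
r = np.logspace(9.0, 10.8, 400)             # radius, cm
rho = 1.0e5 * (r / 1.0e9) ** (-2.5)         # density, g cm^-3

# Enclosed mass M_r from dM/dr = 4 pi r^2 rho, on top of an assumed
# 3 Msun central remnant.
dMdr = 4.0 * np.pi * r**2 * rho
shell = 0.5 * (dMdr[1:] + dMdr[:-1]) * np.diff(r)   # trapezoidal shell masses
M_r = 3.0 * 1.989e33 + np.concatenate(([0.0], np.cumsum(shell)))

# Free-fall time of each mass shell; beta would be calibrated so that
# Mdot_ff(r_b) matches the simulated Mdot(t_b) at breakout (here beta = 1).
beta = 1.0
t_ff = beta * np.sqrt(r**3 / (G * M_r))

# Accretion-rate history Mdot_ff = (dM_r/dr) / (dt_ff/dr), evaluated numerically
Mdot_ff = np.gradient(M_r, r) / np.gradient(t_ff, r)
```

With a declining density profile the outer shells arrive later and the accretion rate decays, which is the qualitative behavior the extrapolated curves in Figure~\ref{f3} show after breakout.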
\begin{figure}
\vspace{15mm}
\epsscale{1.0}
\plotone{f4.eps}
\caption{Same as Figure~\ref{f3}, but comparing results from the extended numerical simulations with the analytical approach for the reference model.
\label{f4}}
\end{figure}
Figure~\ref{f4} compares the results of the extended numerical simulations with the analytical estimates for the reference model. The extended numerical simulations are performed until $t=40$~s, which corresponds to about ten seconds after the jet breakout. Note that we do not enlarge the computational region, since the stellar contraction in the outer parts of the star is not affected by the jet dynamics. As shown in this figure, the time evolution of the black hole mass ($M_{bh}$), Kerr parameter ($a$), and conversion efficiency ($\eta$) is almost identical between the extended numerical simulation and the analytical calculation. For the luminosity ($L_{j}$) and mass accretion rate ($\dot{M}$), on the other hand, the analytical values are slightly larger than the numerical results. This may be because the analytical approach neglects stellar rotation, which increases the mass accretion rate and consequently overestimates the neutrino luminosity. However, these differences are within ten percent. We therefore confirm that the above analytical approach describes the time evolution of the jet dynamics qualitatively well. In the following subsection, we discuss the observational consequences with the aid of the analytical approach in the post-breakout phase.
\subsection{Observational consequences}
\label{sec:subsecobser}
We first divide the energetics of the jet into two parts: the relativistic jet component ($E_j$) and the cocoon component ($E_{dg}$) (see \citet{2011ApJ...726..107S,2003MNRAS.345..575M}). $E_j$ is calculated under the assumption that all the energy injected after the jet breakout goes into the relativistic component, i.e., $E_j = \int_{t_b}^{\infty} (L_{j}/2)\, dt$. The factor of $1/2$ comes from the assumption of equatorial symmetry.
\begin{figure}
\vspace{15mm}
\epsscale{1.2}
\plotone{f5.eps}
\caption{$E_{j(iso)}$--$L_{p(iso)}$ relation among models with successful jet breakout. $E_{j(iso)}$ is calculated from $E_{j}$ (red), $E_{j>L_{49.5}}$ (green), and $E_{j>L_{50}}$ (blue), respectively.
\label{f5}}
\end{figure}
As shown in Table~\ref{tab:model}, $E_j$ increases with increasing stellar rotation. To study the outcome of the explosion in more detail, we further divide the energy of the relativistic jet into $E_{j>L_{50}}$ and $E_{j>L_{49.5}}$. $E_{j>L_{50}}$ is calculated in the same manner as $E_j$, except that the integration is carried out only while $L_j/2 > 10^{50}$erg/s, while $E_{j>L_{49.5}}$ is calculated with the condition $L_j/2 > 5 \times 10^{49}$erg/s. In addition, we list the corresponding durations of each component as $T_{j>L_{50}}$ and $T_{j>L_{49.5}}$ in Table~\ref{tab:model}. In all cases, these timescales are several tens of seconds, comparable to the typical timescale of the prompt phase of GRBs. It should be noted, however, that according to \citet{2012ApJ...749..110B}, the observed duration of the prompt phase may be modified by the duration of jet penetration, i.e., $t_{\gamma} = t_{e} - t_{b}$ (see also Eq.~(2) in \citet{2012ApJ...749..110B}), where $t_{\gamma}$, $t_{e}$, and $t_{b}$ denote the observed duration of the prompt phase, the central engine working time, and the duration of the jet penetration phase, respectively. The actual observed duration will therefore be smaller than $T_{j}$, and it is substantially modified especially for more slowly rotating models (e.g., M70), since $t_{b}$ is larger for slower rotation.
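The thresholded integrations that define $E_{j>L}$ and $T_{j>L}$ can be sketched as follows. The exponential light-curve shape, its normalization, and decay time are illustrative assumptions standing in for the simulated engine history, not results from the models.

```python
import numpy as np

# Toy one-sided jet light curve L_j/2 (erg/s) after engine ignition; the
# exponential shape and normalization are illustrative assumptions only.
t = np.linspace(10.0, 60.0, 5001)               # time since collapse onset, s
L_half = 4.0e50 * np.exp(-(t - 10.0) / 15.0)    # hemispherical luminosity, erg/s

def banded_energy(t, L, threshold):
    """Energy and duration of the part of the light curve with L > threshold,
    analogous to the (E_{j>L}, T_{j>L}) entries of the model table."""
    dt = t[1] - t[0]
    mask = L > threshold
    return np.sum(L[mask]) * dt, np.count_nonzero(mask) * dt

E_50, T_50 = banded_energy(t, L_half, 1.0e50)      # L_j/2 > 1e50 erg/s
E_495, T_495 = banded_energy(t, L_half, 5.0e49)    # L_j/2 > 5e49 erg/s
```

For this toy curve the $L_j/2 > 10^{50}$erg/s phase lasts $15\ln 4 \approx 21$~s, so both durations come out at several tens of seconds, as in the simulated models.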
It is also interesting to note that model M70, the slowest rotator among the successful breakout models, produces a weak explosion and has no $E_{j>L_{50}}$ component owing to the low luminosity of the jet. Note also that this luminosity is an upper limit on the observed luminosity, since we neglect the conversion from hydrodynamical energy to gamma-rays. We therefore infer that a neutrino-driven jet from a compact Wolf-Rayet star whose rotation rate lies between models M70 and M50 may produce a very low-luminosity burst, possibly an LLGRB. These results are qualitatively consistent with previous studies (see e.g. \citet{2009ApJ...692..804L,2010ApJ...713..800L,2012ApJ...744..103M}). It should be noted, however, that some LLGRBs are observed with extremely long durations, a population that cannot be explained by the present results. Therefore, the neutrino-driven jet may contribute only to LLGRBs with a typical prompt duration ($\sim 10$s).
On the other hand, the energy of the cocoon component can be estimated as the diagnostic energy at $t=t_b$ (see \citet{2012ApJ...754...85N}). It is important to note that, as shown in Table~\ref{tab:model}, $E_{dg}$ is typically $\sim 10^{51}$erg for models with successful jet breakout.
This may be attributed to the fact that a jet from a slowly rotating core tends to have weaker power and spends a longer time penetrating the star. Therefore, despite its low luminosity, a large fraction of the jet energy is consumed in sweeping aside the stellar mantle and accumulates as cocoon energy, which eventually reaches $\sim 10^{51}$erg. The cocoon material is expected to contribute to the subsequent explosive event after the prompt phase \citep{2007ApJ...657L..77T,2012ApJ...750...68L} and to the afterglow phase \citep{2002MNRAS.337.1349R}.
The neutrino-deposition results presented in this paper cannot discern whether sufficiently large nickel production takes place to explain hypernova explosions (see also discussions of cocoon propagation in \citet{2001ApJ...556L..37M,2002MNRAS.337.1349R,2003MNRAS.345..575M}). If nickel is not effectively produced in the jet interaction region, alternative pathways such as a viscously heated disk wind or a magnetically driven wind from the central engine would be required to explain the GRB-hypernova connection.
We further calculate the isotropic-equivalent energy ($E_{j(iso)}$) for $E_{j}$, $E_{j>L_{49.5}}$, and $E_{j>L_{50}}$, as well as the isotropic-equivalent peak luminosity $L_{p(iso)}$; these are shown in Figure~\ref{f5}. In this calculation, the jet opening angle is assumed to be $\theta_{op}= 9^{\circ}$, the same as at the root of the injected jet in our simulations. Note again that we neglect the conversion efficiency from hydrodynamic energy to radiation, so our results remain qualitative and give only an upper limit on the GRB radiation. In addition, the time evolution of the neutrino luminosity does not capture the rapid time variability of the central engine, since our numerical simulations do not incorporate the black hole accretion disk system. Owing to these uncertainties, the neutrino luminosity histories shown in the upper panel of Figure~\ref{f3} will differ from the actually observed light curves. Our analysis is nevertheless meaningful in constraining the neutrino luminosity and energy as upper limits. Under the above assumptions, we find $E_{j(iso)} \sim 10^{54}$erg and $L_{p(iso)} \sim 10^{52}$erg/s, which are sufficiently large to explain GRBs. We would like to point out that, for the jet from the rapidly rotating progenitor (M150), a large fraction of the energy is emitted in the high-luminosity phase ($L_j > 10^{50}$erg/s), while more than half of the jet energy for M70 would be emitted in the low-luminosity phase ($L_j < 5 \times 10^{49}$erg/s). Based on these results, we suggest that neutrino-driven jets are capable of producing several types of bursts depending on the progenitor rotation, which may be the origin of observationally distinct bursts such as GRBs and XRFs. Failed GRBs could also be explained by a neutrino-driven central engine when the progenitor rotates slowly.
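The conversion from one-sided jet energy to isotropic-equivalent values for a cone of half-opening angle $\theta_{op}$ can be sketched as below. The input jet energy of $6\times 10^{51}$~erg is an illustrative number, and no radiative efficiency factor is applied, consistent with treating the result as an upper limit.

```python
import numpy as np

def isotropic_equivalent(E_onesided, theta_op_deg):
    """Isotropic-equivalent energy of a one-sided conical jet:
    E_iso = E * 4*pi / Omega with Omega = 2*pi*(1 - cos(theta_op))."""
    theta = np.deg2rad(theta_op_deg)
    return E_onesided * 2.0 / (1.0 - np.cos(theta))

# With theta_op = 9 deg the beaming factor is ~160, so an illustrative
# one-sided jet energy of ~6e51 erg maps to E_iso ~ 1e54 erg.
E_iso = isotropic_equivalent(6.0e51, 9.0)
```

The same beaming factor applies to the peak luminosity, so a hemispherical $L_j/2$ of a few $\times 10^{49}$--$10^{50}$erg/s corresponds to $L_{p(iso)} \sim 10^{52}$erg/s.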
\section{Summary}
We present numerical results of neutrino-driven jet propagation in a rotating Wolf-Rayet star. By changing the progenitor rotation rate, we discuss jet penetrability and the observational consequences with the aid of an analytic extrapolation in the post-breakout phase. We show that every model except M50 succeeds in breaking out of the star. In particular, Mref and M150, which correspond to models with sufficiently rapid rotation, produce a relativistic outflow component with $L_{p(iso)}\sim 10^{52}$erg/s and $E_{j(iso)}\sim 10^{54}$erg, sufficiently large to explain GRBs. On the other hand, the energy in the cocoon component, $E_{dg}$, is $\sim 10^{51}$erg for models with successful jet breakout, although it remains an open question whether the jet or the cocoon expansion could give rise to enough nickel production to explain the GRB-hypernova connection. Another important result of this study is that model M50, the most slowly rotating model, cannot achieve jet breakout (a failed GRB). Therefore, the threshold specific angular momentum (SAM) distribution for successful jet penetration lies between models M70 and M50. It should be noted, however, that this threshold rotation no doubt depends strongly on the jet opening angle, and our results apply only to the canonical opening angle $\theta_{op} = 9^{\circ}$. Although there are some limitations in this study, we suggest that neutrino-driven jets are capable of producing several types of bursts (including the failure branch, i.e., failed GRBs) depending on the progenitor rotation.
Finally, we note that the results presented in this paper are optimistic. Among the large uncertainties, the actual mass accretion rate would be smaller than obtained here, since some fraction of the mass is expected to escape from the disk rather than accrete onto the black hole owing to neutrino winds or viscous heating \citep{1999ApJ...524..262M}. In addition, the growth rate of the Kerr parameter would be slower than in the current results, since the SAM of the inflowing matter is smaller than the ISCO value owing to the pressure gradient; the disk wind also extracts angular momentum from the accreting matter. Since the neutrino deposition rate depends sensitively on the mass accretion rate and Kerr parameter, the jet dynamics would be affected by these effects. Note also that the neutrino-driven jet cannot explain bursts of extremely long duration, and other mechanisms are necessary to explain these peculiar events. Other factors such as the viewing angle may also cause observational differences among the GRB population (see e.g. \citet{2002ApJ...571L..31Y,2003ApJ...594L..79Y,2005ApJ...630.1003G}). More quantitative discussions will be conducted in our forthcoming paper.
\acknowledgments
H.N. is grateful to Andrew Macfadyen, Andrei Beloborodov, Philipp Podsiadlowski, Kunihito Ioka, Yudai Suwa, and Shoichi Yamada for useful comments and discussions. H.N. would also like to thank Eriko Nagakura for proofreading. This work was supported by a Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan (24740165) and the HPCI Strategic Program of Japanese MEXT.
\newcommand{\Section}[1]{\vspace{-1mm} \section{#1} \vspace{0mm}}
\newcommand{\SubSection}[1]{\vspace{-1mm} \subsection{#1} \vspace{-1mm}}
\newcommand{\SubSubSection}[1]{\vspace{-0mm} \subsubsection{#1} \vspace{0mm}}
\def\cvprPaperID{9944}
\def\confYear{CVPR 2021}
\begin{document}
\title{Mitigating Face Recognition Bias via Group Adaptive Classifier}
\author{Sixue Gong\quad\quad Xiaoming Liu\quad\quad Anil K. Jain\\
Michigan State University, East Lansing MI 48824\\
{\tt\small \{gongsixu, liuxm, jain\}@msu.edu}
}
\maketitle
\begin{abstract}
Face recognition is known to exhibit bias - subjects in a certain demographic group can be better recognized than other groups.
This work aims to learn a fair face representation, where faces of every group could be more equally represented.
Our proposed group adaptive classifier mitigates bias by using adaptive convolution kernels and attention mechanisms on faces based on their demographic attributes.
The adaptive module comprises kernel masks and channel-wise attention maps for each demographic group so as to activate different facial regions for identification, leading to more discriminative features pertinent to their demographics.
Our introduced automated adaptation strategy determines whether to apply adaptation to a certain layer by iteratively computing the dissimilarity among demographic-adaptive parameters.
A new de-biasing loss function is proposed to mitigate the gap of average intra-class distance between demographic groups.
Experiments on face benchmarks (RFW, LFW, IJB-A, and IJB-C) show that our work is able to mitigate face recognition bias across demographic groups while maintaining competitive accuracy.
\end{abstract}
\section{Introduction}
Face recognition (FR) systems are known to exhibit discriminatory behaviors against certain demographic groups~\cite{howard2019effect, klare2012face, grother2019frvt}. The $2019$ NIST Face Recognition Vendor Test~\cite{grother2019frvt} shows that all $106$ tested FR algorithms exhibit varying biased performances on gender, race, and age groups of a mugshot dataset.
Deploying biased FR systems to law enforcement is potentially unethical~\cite{creager2019flexibly}.
Given the implication of automated FR-driven decisions, it is crucial to develop fair and unbiased FR systems to avoid the negative societal impact.
Note that, differing from the inductive bias in machine learning~\cite{dietterich1995machine}, we define FR bias as the {\it uneven} recognition performance w.r.t.~demographic groups.
\begin{figure}[t]
\captionsetup{font=footnotesize}
\centering
\begin{subfigure}[b]{0.68\linewidth}
\includegraphics[width=\linewidth]{figs/fig1b.pdf}
\vspace{-6mm}
\caption{{\footnotesize}}
\label{fig:fig1a}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.31\linewidth}
\includegraphics[width=0.9\linewidth]{figs/fig1a.pdf}
\vspace{-1mm}
\caption{{\footnotesize}}
\label{fig:fig1b}
\end{subfigure}\\
\vspace{-1mm}
\caption{\footnotesize{(a) Our proposed group adaptive classifier (GAC) automatically chooses between non-adaptive (``N'') and adaptive (``A'') layer in a multi-layer network, where the latter uses demographic-group-specific kernel and attention. (b) Compared to the baseline with the $50$-layer ArcFace backbone, GAC improves face verification accuracy in most groups of RFW dataset~\cite{wang2019racial}, especially under-represented groups, leading to mitigated FR bias, {\it e.g.}, GAC reduces biasness from $1.11$ to $0.60$.} \vspace{-4mm}}
\label{fig:teaser}
\end{figure}
State-of-the-art (SOTA) FR algorithms~\cite{liu2017sphereface, wang2018cosface, Deng_2019_CVPR} rely on convolutional neural networks (CNNs) trained on large-scale face datasets.
The public FR training datasets, {\it e.g.}, CASIA-WebFace~\cite{yi2014learning}, VGGFace2~\cite{cao2018vggface2}, and MS-Celeb-1M~\cite{guo2016ms}, are collected by scraping images off the web, with inevitable demographic bias~\cite{wang2020mitigating}.
Biases in data are transmitted to the FR models through network learning. For example, to minimize the overall loss, a network tends to learn a better representation for faces in the majority group whose number of faces dominate the training set, resulting in unequal discriminabilities.
The imbalanced demographic distribution of face data is, nevertheless, not the only trigger of FR bias.
Prior works have shown that even using a demographic balanced dataset~\cite{wang2020mitigating} or training separate classifiers for each group~\cite{klare2012face}, the performance on some groups is still inferior to the others.
By studying non-trainable FR algorithms, \cite{klare2012face} introduced the notion of \textit{inherent bias}, {\it i.e.}, certain groups are inherently more susceptible to errors in the face matching.
To tackle the dataset-induced bias, traditional methods re-weight either the data proportions~\cite{chawla2002smote} or cost values~\cite{akbani2004applying}.
Such methods are limited when applied to large-scale imbalanced datasets.
Recent imbalance learning methods focus on novel objective functions for class-skewed datasets.
For instance, Dong~\etal~\cite{dong2018imbalanced} propose a Class Rectification Loss to incrementally optimize on hard samples of the classes with under-represented attributes.
Alternatively, researchers strengthen the decision boundary to impede perturbation from other classes by enforcing margins between hard clusters via adaptive clustering~\cite{huang2019deep}, or between rare classes via Bayesian uncertainty estimates~\cite{khan2019striking}.
To adapt the aforementioned methods to racial bias mitigation, Wang~\etal~\cite{wang2020mitigating} modify the large margin based loss functions by reinforcement learning.
However,~\cite{wang2020mitigating} requires two auxiliary networks, an offline sampling network and a deep Q-learning network, to generate adaptive margin policy for training the FR network, which hinders the learning efficiency.
To mitigate FR bias, our main idea is to optimize the face representation learning on every demographic group in a single network, despite demographically imbalanced training data.
Conceptually, we may categorize face features into two types of patterns: \textit{general pattern} is shared by all faces; \textit{differential pattern} is relevant to demographic attributes.
When the differential pattern of one specific demographic group dominates training data, the network learns to predict identities mainly based on that pattern as it is more convenient to minimize the loss than using other patterns, leading to bias towards that specific group.
One mitigation is to give the network more capacity to broaden its scope for multiple face patterns from different groups.
An unbiased FR model shall rely on both unique patterns for recognition of different groups, and general patterns of all faces for improved generalizability.
Accordingly, as in Fig.~\ref{fig:teaser}, we propose a \textit{group adaptive classifier} (GAC) to explicitly learn these different feature patterns.
GAC includes two modules: the adaptive layer and automation module.
The adaptive layer comprises adaptive convolution kernels and channel-wise attention maps where each kernel and map tackle faces in {\it one} demographic group.
We also introduce a new objective function to GAC, which diminishes the variation of average intra-class distance between demographic groups.
Prior works on dynamic CNNs introduce adaptive convolutions either in every layer~\cite{kang2017incorporating, yang2019cross, wang2019eca}, or in manually specified layers~\cite{lu2019see, hou2019cross, su2019multi}. In contrast, we propose an automation module to choose which layers to apply adaptations to.
As we observed, not all convolutional layers require adaptive kernels for bias mitigation (see Fig.~\ref{fig:kernel_step}).
At any layer of GAC, only kernels expressing high dissimilarity are considered as demographic-adaptive kernels.
For those with low dissimilarity, their average kernel is shared by all inputs in that layer.
Thus, the proposed network progressively learns to select the optimal structure for the demographic-adaptive learning.
Both non-adaptive layers with shared kernels and adaptive layers are jointly learned in a unified network.
The contributions of this work include: 1) A new face recognition algorithm that reduces demographic bias and tailors representations for faces in every demographic group by adopting adaptive convolutions and attention techniques; 2) A new adaptation mechanism that automatically determines the layers to employ dynamic kernels and attention maps; 3) The proposed method achieves SOTA performance on a demographic-balanced dataset and three benchmarks.
\Section{Related Work}
\textbf{Fairness Learning and De-biasing Algorithms.} A variety of fairness techniques are proposed to prevent machine learning models from utilizing statistical bias in training data, including adversarial training~\cite{alvi2018turning, hendricks2018women, wang2019balanced, pmlr-v80-madras18a}, subgroup constraint optimization~\cite{kearns2019empirical, zhao2017men, wang2019towards}, data pre-processing ({\it e.g.}, weighted sampling~\cite{grover2019bias}, and data transformation~\cite{calmon2017optimized}), and algorithm post-processing~\cite{kim2019multiaccuracy, pleiss2017fairness}.
Another promising approach learns a fair representation to preserve all discerning information about the data attributes or task-related attributes but eliminate the prejudicial effects from sensitive factors~\cite{moyer2018invariant, song2019learning, zemel2013learning, creager2019flexibly, hardt2016equality}. Locatello~\etal~\cite{locatello2019fairness} show the feature disentanglement is consistently correlated with increasing fairness of general purpose representations by analyzing $12,600$ SOTA models.
Accordingly, a disentangled representation is learned to de-bias both FR and demographic attribute estimation~\cite{gong2020jointly}. Other studies address the bias issue in FR by leveraging unlabeled faces to improve the performance in minority groups~\cite{qin2020asymmetric, wang2019racial}. Wang~\etal~\cite{wang2020mitigating} propose skewness-aware reinforcement learning to mitigate racial bias.
Unlike prior work, our GAC is designed to customize the classifier for each demographic group, which, if successful, would lead to mitigated bias.
\begin{figure*}[t!]
\captionsetup{font=footnotesize}
\centering
\begin{subfigure}[b]{0.27\linewidth}
\includegraphics[width=0.95\linewidth]{figs/dynamic_kernel.pdf}
\caption{{\footnotesize Adaptive Kernel}}
\label{fig:adap_kernel}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.27\linewidth}
\includegraphics[width=0.95\linewidth]{figs/attention_map.pdf}
\caption{{\footnotesize Attention Map}}
\label{fig:attention_map}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=0.95\linewidth]{figs/gac.pdf}
\caption{{\footnotesize GAC}}
\label{fig:gac}
\end{subfigure}\\
\vspace{-3mm}
\caption{\footnotesize{A comparison of approaches in adaptive CNNs.}}
\label{fig:related_work}
\end{figure*}
\textbf{Adaptive Neural Networks.} Three types of CNN-based adaptive learning techniques are related to our work: adaptive architectures, adaptive kernels, and attention mechanism.
Adaptive architectures design new performance-based neural functions or structures, {\it e.g.}, neuron selection hidden layers~\cite{hu2018learning} and automatic CNN expansion for FR~\cite{zhang2016adaptive}.
As CNN advances many fields, prior works propose dynamic kernels to realize content-adaptive convolutions.
Li~\etal~\cite{li2015shape} propose a shape-driven kernel for facial trait recognition where each landmark-centered patch has a unique kernel.
A convolution fusion for graph neural networks is introduced by~\cite{du2017topology} where varying-size filters are used per layer.
The works of~\cite{ding2018automatic,li2019selective} use a kernel selection scheme to automatically adjust the receptive field size based on inputs.
To better suit input data, ~\cite{ding2017convolutional} splits training data into clusters and learns an exclusive kernel per cluster.
Li~\etal~\cite{li2017adaptive} introduce an adaptive CNN for object detection that transfers pre-trained CNNs to a target domain by selecting useful kernels per layer.
Alternatively, one may feed input images into a kernel function to dynamically generate kernels~\cite{su2019pixel, zamora2019adaptive, klein2015dynamic, jia2016dynamic}.
Despite its effectiveness, such individual adaptation may not be suitable given the diversity of faces in demographic groups.
Our work is most related to the side information adaptive convolution~\cite{kang2017incorporating}, where in each layer a sub-network inputs auxiliary information to generate filter weights.
We mainly differ in that GAC automatically learns where to use adaptive kernels in a multi-layer CNN (see Figs.~\ref{fig:adap_kernel} and~\ref{fig:gac}), thus more efficient and capable in applying to a deeper CNN.
As the human perception naturally selects the most pertinent piece of information, attention mechanisms are designed for many tasks, {\it e.g.}, detection~\cite{zhang2018progressive}, recognition~\cite{chen2018learning}, image captioning~\cite{chen2017sca}, tracking~\cite{chen2019multi}, pose estimation~\cite{su2019multi}, and segmentation~\cite{lu2019see}.
Normally, attention weights are estimated by feeding images or feature maps into a shared network, composed of convolutional and pooling layers~\cite{bastidas2019channel, chen2018learning, ling2020attention, sindagi2019ha} or multi-layer perceptron (MLP)~\cite{hu2018squeeze, woo2018cbam, sadiq2019facial, linsley2019learning}.
Apart from feature-based attention, Hou~\etal~\cite{hou2019cross} propose a correlation-guided cross attention map for few-shot classification where the correlation between the class feature and query feature generates attention weights.
The work of~\cite{yang2019cross} introduces a cross-channel communication block to encourage information exchange across channels.
To accelerate channel interaction, Wang~\etal~\cite{wang2019eca} propose a $1$D convolution across channels for attention prediction.
Different from prior work, our attention maps are constructed by demographic information (see Figs.~\ref{fig:attention_map},~\ref{fig:gac}), which improves the robustness of face representations in every demographic group.
\Section{Methodology}
\SubSection{Overview}
Our goal is to train a FR network that is impartial to individuals in different demographic groups.
Unlike image-related variations, {\it e.g.}, large-pose or low-resolution faces being harder to recognize, demographic attributes are subject-related properties with no apparent impact on the recognizability of identity, at least from a layman's perspective.
Thus, an unbiased FR system should be able to obtain equally salient features for faces across demographic groups.
However, due to imbalanced demographic distributions and inherent face differences between groups, it was shown that certain groups achieve higher performance even with hand-crafted features~\cite{klare2012face}.
Thus, it is impractical to extract features from different demographic groups that exhibit equal discriminability.
Despite such disparity, a FR algorithm can still be designed to {\it mitigate} the difference in performance.
To this end, we propose a CNN-based group adaptive classifier to utilize dynamic kernels and attention maps to boost FR performance in all demographic groups considered here.
Specifically, GAC has two main modules, an adaptive layer and an automation module. In adaptive layer, face images or feature maps are convolved with a unique kernel for each demographic group, and multiplied with adaptive attention maps to obtain demographic-differential features for faces in a certain group. The automation module determines in which layers of the network adaptive kernels and attention maps should be applied. As shown in Fig.~\ref{fig:overview}, given an aligned face, and its identity label $y_{ID}$, a pre-trained demographic classifier first estimates its demographic attribute $y_{Demo}$. With $y_{Demo}$, the image is then fed into a recognition network with multiple demographic adaptive layers to estimate its identity. In the following, we present these two modules.
\begin{figure*}[t!]
\captionsetup{font=footnotesize}
\centering
\includegraphics[width=0.96\linewidth]{figs/overview.pdf}
\caption{\footnotesize{Overview of the proposed GAC for mitigating FR bias. GAC contains two major modules: the adaptive layer and the automation module. The adaptive layer consists of adaptive kernels and attention maps. The automation module is employed to decide whether a layer should be adaptive or not.}}
\label{fig:overview} {\vspace{-3mm}}
\end{figure*}
\SubSection{Adaptive Layer}
\textit{Adaptive Convolution.} For a standard convolution operation in CNN, an image or feature map from the previous layer $I_{F} \in \mathbb{R}^{ic \times ih \times iw}$ is convolved with a single kernel matrix $K \in \mathbb{R}^{kc \times ic \times kh \times kw}$, where $ic$ is the number of input channels, $kc$ the number of filters, $ih$ and $iw$ the input size, and $kh$ and $kw$ the filter size. Such an operation shares the kernel with every input that goes through the layer, and is thus agnostic to demographic content, resulting in limited capacity to represent faces of minority groups.
To mitigate the bias in convolution, we introduce a trainable matrix of kernel masks $K_{M} \in \mathbb{R}^{nd \times ic \times kh \times kw}$, where $nd$ is the number of demographic groups. During the forward pass, the demographic label $y_{Demo}$ and kernel matrix $K_{M}$ are fed into the adaptive convolutional layer to generate demographic adaptive filters.
Let $K^c \in \mathbb{R}^{ic \times kh \times kw}$ denote the $c^{th}$ channel filter, and the adaptive filter weights for $c^{th}$ channel are:
\begin{equation}
K_{y_{Demo}}^{c} = K^c \mbox{\small $\bigotimes$} K^{j}_{M},
\end{equation}
where $K^{j}_{M} \in \mathbb{R}^{ic \times kh \times kw}$ is the ${j}^{th}$ kernel mask for group $y_{Demo}$, and $\bigotimes$ denotes element-wise multiplication. Then the $c^{th}$ channel of the output feature map is given by $O_F^{c} = f(I_{F} * K_{y_{Demo}}^{c})$, where $*$ denotes convolution and $f(\cdot)$ is the activation function. Unlike the conventional convolution, samples in every demographic group have a unique kernel $K_{y_{Demo}}$.
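To make the operation concrete, the following is a minimal NumPy sketch of the adaptive convolution described above. The naive loop-based convolution, the valid padding, and the tensor shapes are our illustrative simplifications, not the actual training code.

```python
import numpy as np

def adaptive_conv(I_F, K, K_M, y_demo):
    """Demographic-adaptive convolution (illustrative; valid padding, stride 1).

    I_F:    input feature map, shape (ic, ih, iw)
    K:      shared kernel,     shape (kc, ic, kh, kw)
    K_M:    kernel masks,      shape (nd, ic, kh, kw)
    y_demo: demographic group index j in [0, nd)
    """
    kc, ic, kh, kw = K.shape
    # Element-wise modulation: every channel filter K^c shares the
    # same group-specific mask K_M[y_demo].
    K_adapt = K * K_M[y_demo][None, :, :, :]       # (kc, ic, kh, kw)
    ih, iw = I_F.shape[1:]
    oh, ow = ih - kh + 1, iw - kw + 1
    O_F = np.zeros((kc, oh, ow))
    for c in range(kc):
        for y in range(oh):
            for x in range(ow):
                O_F[c, y, x] = np.sum(I_F[:, y:y + kh, x:x + kw] * K_adapt[c])
    return np.maximum(O_F, 0.0)                    # ReLU as the activation f(.)
```

In this sketch, changing `y_demo` selects a different mask and hence a different effective kernel, while the underlying shared weights `K` are the same for all groups.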
\textit{Adaptive Attention.} Each channel filter in a CNN plays an important role in every dimension of the final representation, which can be viewed as a semantic pattern detector~\cite{chen2017sca}. In the adaptive convolution, however, the values of a kernel mask are broadcast along the channel dimension, indicating that the weight selection is spatially varied but channel-wise joint. Hence, we introduce a channel-wise attention mechanism to enhance the face features that are demographic-adaptive. First, a trainable matrix of channel attention maps $M \in \mathbb{R}^{nd \times kc}$ is initialized in every adaptive attention layer. Given $y_{Demo}$ and the current feature map $O_F \in \mathbb{R}^{kc \times oh \times ow}$, where $oh$ and $ow$ are the height and width of $O_F$, the $c^{th}$ channel of the new feature map is calculated by:
\begin{equation}
O^{c}_{y_{Demo}} = \textrm{Sigmoid}({M}^{jc}) \cdot O_F^c,
\end{equation}
where ${M}^{jc}$ is the entry in the $j^{th}$ row of $M$ for the demographic group $y_{Demo}$ at $c^{th}$ column.
In contrast to the adaptive convolution, elements of each demographic attention map ${M}^{j}$ diverge in a channel-wise manner, while the single attention weight ${M}^{jc}$ is spatially shared by the entire matrix $O_F^c \in \mathbb{R}^{oh \times ow}$. The two adaptive matrices, $K_M$ and
${M}$, are jointly tuned with all the other parameters supervised by the classification loss.
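A minimal sketch of the channel-wise adaptive attention above, again in NumPy and with our own simplified shapes, could look as follows:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_attention(O_F, M, y_demo):
    """Channel-wise demographic attention (illustrative sketch).

    O_F:    feature map, shape (kc, oh, ow)
    M:      attention matrix, shape (nd, kc); row j serves group j
    y_demo: demographic group index j in [0, nd)
    """
    w = sigmoid(M[y_demo])            # (kc,): one weight per channel
    # Each weight is spatially shared by the whole oh x ow channel map.
    return O_F * w[:, None, None]
```

Each channel is rescaled by a single group-specific weight in $(0,1)$, so the attention is channel-wise varied but spatially shared, mirroring the formulation above.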
Unlike dynamic CNNs~\cite{kang2017incorporating} where additional networks are engaged to produce input-variant kernel or attention map, our adaptiveness is yielded by a simple thresholding function directly pointing to the demographic group with no auxiliary networks. Although the kernel network in~\cite{kang2017incorporating} can generate continuous kernels without enlarging the parameter space, further encoding is required if the side inputs for kernel network are discrete variables. Our approach, in contrast, divides kernels into clusters so that the branch parameter learning can stick to a specific group without interference from individual uncertainties, making it suitable for discrete domain adaptation. Further, the adaptive kernel masks in GAC are more efficient in terms of the number of additional parameters. Compared to a non-adaptive layer, the number of additional parameters of GAC is $nd \times ic \times kh \times kw$, while that of ~\cite{kang2017incorporating} is $id \times kc \times ic \times kh \times kw$ if the kernel network is a one-layer MLP,
where $id$ is the dimension of input side information.
Thus, for one adaptive layer, ~\cite{kang2017incorporating} has $\frac{id\times kc}{nd}$ times more parameters than ours, which can be substantial given the typical large value of $kc$, the number of filters.
\SubSection{Automation Module}
Though faces in different demographic groups are adaptively processed by various kernels and attention maps, it is inefficient to use such adaptations in {\it every} layer of a deep CNN.
To relieve the burden of unnecessary parameters and avoid empirical trimming, we adopt a similarity fusion process to automatically determine the adaptive layers.
Since the same fusion scheme can be applied to both types of adaptation, we take the adaptive convolution as an example to illustrate this automatic scheme.
First, a matrix composed of $nd$ kernel masks is initialized in every convolutional layer. As training continues, each kernel mask is updated independently to reduce classification loss for each demographic group. Second, we reshape the kernel masks into $1$D vectors $\mathbf{V} = [\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_{nd}]$, where $\mathbf{v}_i \in \mathbb{R}^{l}, l=ic \times kw \times kh$ is the kernel mask of the $i^{th}$ demographic group. Next, we compute Cosine similarity between two kernel vectors,
$\theta_{ij} = \frac{\mathbf{v}_i}{\|\mathbf{v}_i\|} \cdot \frac{\mathbf{v}_j}{\|\mathbf{v}_j\|}$,
where $1\leq i,j\leq nd$. The average similarity of all pair-wise similarities is obtained by $\overline{\theta} = \frac{2}{nd(nd-1)}\sum_i \sum_j \theta_{ij}, i\neq j$. If $\overline{\theta}$ is higher than a pre-defined threshold $\tau$, the kernel parameters in this layer reveal the demographic-agnostic property. Hence, we merge the $nd$ kernels into a single kernel by taking the average along the group dimension. In the subsequent training, this single kernel can still be updated separately for each demographic group, since the kernel may become demographic-adaptive in later epochs. We monitor the similarity trend of the adaptive kernels in each layer until $\overline{\theta}$ is stable.
\SubSection{De-biasing Objective Function}
Apart from the objective function for face identity classification, we also adopt a regression loss to narrow the gap in intra-class distance between demographic groups. Let $g(\cdot)$ denote the inference function of GAC, and $I^{i}_{jg}$ the $i^{th}$ image of subject $j$ in group $g$. The feature representation of image $I^{i}_{jg}$ is then given by $\mathbf{r}_{jg}^i = g(I^{i}_{jg}, \mathbf{w}_r)$, where $\mathbf{w}_r$ denotes the GAC parameters.
Assuming the feature distribution of each subject is a Gaussian with a diagonal covariance matrix (axis-aligned hyper-ellipsoid), we utilize the Mahalanobis distance as the intra-class distance of each subject. In particular, we first compute the center point of each identity ellipsoid:
\begin{equation}
\boldsymbol{\mu}_{jg} = \frac{1}{N} \sum_{i=1}^N g(I^{i}_{jg}, \mathbf{w}_r),
\end{equation}
where $N$ is the total number of face images of subject $j$. The average intra-class distance of subject $j$ is as follows:
\begin{equation}
Dist_{jg} = \frac{1}{N} \sum_{i=1}^N (\mathbf{r}_{jg}^i - \boldsymbol{\mu}_{jg})^T (\mathbf{r}_{jg}^i - \boldsymbol{\mu}_{jg}).
\end{equation}
We then compute the intra-class distance for all subjects in group $g$ as $Dist_{g} = \frac{1}{Q} \sum_{j=1}^{Q} Dist_{jg}$, where $Q$ is the number of total subjects in group $g$. This allows us to lower the difference of intra-class distance by:
\begin{equation}
\mathcal{L}_{bias} = \frac{\lambda}{Q \times nd} \sum_{j=1} ^ {Q \times nd} \Bigl | Dist_{jg} - \frac{1}{nd} \sum_{g=1} ^ {nd} Dist_{g} \Bigr |,
\end{equation}
where $\lambda$ is the coefficient for the de-biasing objective.
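The de-biasing loss above can be sketched as follows. This is an illustrative NumPy version that, for simplicity, assumes the same number of subjects $Q$ per group; the nested-list data layout is our own convention, not the paper's code.

```python
import numpy as np

def debias_loss(reps, lam=0.1):
    """Intra-class-distance de-biasing loss (illustrative sketch).

    reps[g][j]: array of shape (N_j, d) holding the representations of
                subject j in demographic group g (equal Q per group assumed).
    """
    nd = len(reps)
    dist_sub = []                                     # (group g, Dist_{jg})
    for g in range(nd):
        for r in reps[g]:
            mu = r.mean(axis=0)                       # subject center mu_{jg}
            d = ((r - mu) ** 2).sum(axis=1).mean()    # average intra-class dist
            dist_sub.append((g, d))
    # Per-group average intra-class distance Dist_g.
    dist_g = np.array([np.mean([d for g_, d in dist_sub if g_ == g])
                       for g in range(nd)])
    mean_over_groups = dist_g.mean()
    # Mean absolute deviation of each subject's distance from the group mean.
    devs = [abs(d - mean_over_groups) for _, d in dist_sub]
    return lam * float(np.mean(devs))
```

The loss is zero when every subject's intra-class spread already matches the cross-group average, and grows as the groups' feature compactness diverges.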
\Section{Experiments}
\textbf{Datasets}: Our bias study uses RFW dataset~\cite{wang2019racial} for testing and BUPT-Balancedface dataset~\cite{wang2020mitigating} for training.
RFW consists of faces in four race/ethnic groups: White, Black, East Asian, and South Asian~\footnote{ RFW~\cite{wang2019racial} uses Caucasian, African, Asian, and Indian to name demographic groups. We adopt these groups and accordingly rename to White, Black, East Asian, and South Asian for clearer race/ethnicity definition.}.
Each group contains $\sim$$10$K images of $3$K individuals for face verification. BUPT-Balancedface contains $1.3$M images of $28$K celebrities and is approximately race-balanced with $7$K identities per race.
Other than race, we also study gender bias.
We combine IMDB~\cite{rothe2018deep}, UTKFace~\cite{zhifei2017cvpr}, AgeDB~\cite{moschoglou2017agedb}, AAF~\cite{cheng2019exploiting}, AFAD~\cite{niu2016ordinal} to train a gender classifier, which estimates gender of faces in RFW and BUPT-Balancedface.
All face images are cropped and resized to $112 \times 112$ pixels via landmarks detected by RetinaFace~\cite{deng2019retinaface}.
\textbf{Implementation Details}: We train a baseline network and GAC on BUPT-Balancedface, using the $50$-layer ArcFace architecture~\cite{Deng_2019_CVPR}.
The classification loss is the additive Cosine margin of CosFace~\cite{wang2018cosface}, with scale $s=64$ and margin $m=0.5$. Training uses a batch size of $256$. The learning rate starts from $0.1$ and drops to $0.0001$ following the schedule at epochs $8$, $13$, $15$ for the baseline, and $5$, $17$, $19$ for GAC.
We set $\lambda=0.1$ for the intra-distance de-biasing. $\tau=-0.2$ is chosen for automatic adaptation in GAC. Our FR models are trained to extract a $512$-dim representation. Our demographic classifier uses a $18$-layer ResNet~\cite{he2016deep}. Comparing the GAC and baseline, the average feature extraction speed per image on Nvidia $1080$Ti GPU is $1.4$ms and $1.1$ms, and the number of model parameters is $44.0$M and $43.6$M, respectively.
\textbf{Performance Metrics}:
Common group fairness criteria such as demographic parity distance are ill-suited to evaluating the fairness of learnt representations, since they are typically designed to measure independence properties of random variables.
However, in FR the sensitive demographic characteristics are tied to identities, making these two variables correlated.
The NIST report proposes to use false negative and false positive for each demographic group to measure the fairness~\cite{grother2019frvt}.
Instead of plotting false negatives vs.~false positives, we adopt a compact quantitative metric, {\it i.e.}, the standard deviation (STD) of the performance across demographic groups, previously introduced in~\cite{wang2020mitigating,gong2020jointly} and termed ``biasness''.
We also report average accuracy (Avg) to show the overall FR performance.
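As an illustration of the metric, the GAC row of Tab.~\ref{tab:bias_rfw_sota} can be reproduced as below; the population form of the STD (dividing by the number of groups rather than by groups minus one) matches the reported values, though the paper does not state which form is used.

```python
import numpy as np

# GAC verification accuracy (%) per group, from Tab. 1:
# White, Black, East Asian, South Asian.
acc = np.array([96.20, 94.77, 94.87, 94.98])

avg = acc.mean()       # overall FR performance ("Avg")
biasness = acc.std()   # population STD across groups ("biasness")
```

With these numbers, `avg` rounds to $95.21$ and `biasness` to $0.58$, matching the table.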
\SubSection{Results on RFW Protocol}
We follow RFW face verification protocol with $6$K pairs per race/ethnicity.
The models are trained on BUPT-Balancedface with ground truth race and identity labels.
\textbf{Compare with SOTA.} We compare GAC with four SOTA algorithms on the RFW protocol, namely, ACNN~\cite{kang2017incorporating}, RL-RBN~\cite{wang2020mitigating}, PFE~\cite{shi2019probabilistic}, and DebFace~\cite{gong2020jointly}. Since the approach in ACNN~\cite{kang2017incorporating} is related to GAC, we re-implement it and apply it to the bias mitigation problem. First, we train a race classifier with the cross-entropy loss on BUPT-Balancedface. Then the softmax output of our race classifier is fed to a filter manifold network (FMN) to generate adaptive filter weights. Here, FMN is a two-layer MLP with a ReLU in between. Similar to GAC, race probabilities are considered as auxiliary information for face representation learning. We also compare with the SOTA approach PFE~\cite{shi2019probabilistic} by training it on BUPT-Balancedface. As shown in Tab.~\ref{tab:bias_rfw_sota},
GAC is superior to SOTA w.r.t.~average performance and feature fairness.
Compared to the kernel masks in GAC, the FMN in ACNN~\cite{kang2017incorporating} contains more trainable parameters, and applying it to each convolutional layer is prone to overfitting. In fact, eight layers are empirically chosen to use FMN-based convolution.
As the race data is a four-element input in our case, using extra kernel networks adds complexity to the FR network, which degrades the verification performance.
Even though PFE performs the best on standard benchmarks (Tab.~\ref{tab:lfw_ijba_ijbc}), it still exhibits high biasness. Our GAC outperforms PFE on RFW in both biasness and average performance. Compared to DebFace~\cite{gong2020jointly}, in which demographic attributes are disentangled from the identity representations, GAC achieves higher verification performance by optimizing the classification for each demographic group, with a lower biasness as well.
\begin{table}[t]
\captionsetup{font=footnotesize}
\centering
\caption{\footnotesize Performance comparison with SOTA on the RFW protocol~\cite{wang2019racial}. The results marked by (*) are directly copied from~\cite{wang2020mitigating}.}
\label{tab:bias_rfw_sota}
\scalebox{0.69}{
\begin{tabular}{@{}c cccc cc@{}}
\toprule
Method & White & Black & East Asian & South Asian & Avg ($\uparrow$) & STD ($\downarrow$)\\
\midrule
RL-RBN~\cite{wang2020mitigating} & $96.27$ & $95.00$ & $94.82$ & $94.68$ & $95.19$ & $0.63$ \\
ACNN~\cite{kang2017incorporating} & $96.12$ & $94.00$ & $93.67$ & $94.55$ & $94.58$ & $0.94$\\
PFE~\cite{shi2019probabilistic} & $\mathbf{96.38}$ & $\mathbf{95.17}$ & $94.27$ & $94.60$ & $95.11$ & $0.93$\\
ArcFace~\cite{Deng_2019_CVPR} & $96.18^*$ & $94.67^*$ & $93.72^*$ & $93.98^*$ & $94.64$ & $0.96$\\
CosFace~\cite{wang2018cosface} & $95.12^*$ & $93.93^*$ & $92.98^*$ & $92.93^*$ & $93.74$ & $0.89$\\
DebFace~\cite{gong2020jointly} & $95.95$ & $93.67$ & $94.33$ & $94.78$ & $94.68$ & $0.83$\\
GAC & $96.20$ & $94.77$ & $\mathbf{94.87}$ & $\mathbf{94.98}$ & $\mathbf{95.21}$ & $\mathbf{0.58}$\\
\bottomrule
\end{tabular}}
\end{table}
\textbf{Ablation.} To investigate the efficacy of the adaptive layers, automation module, and de-biasing loss in GAC, we conduct three ablation studies: adaptive mechanisms, number of convolutional layers, and demographic information. For adaptive mechanisms, since deep feature maps contain both spatial and channel-wise information, we study the relationship among adaptive kernels, spatial and channel-wise attentions, and their impact on bias mitigation. We also study the impact of $\tau$ in our automation module. Apart from the baseline and GAC, we ablate eight variants: (1) GAC-Channel: channel-wise attention for race-differential features; (2) GAC-Kernel: adaptive convolution with race-specific kernels; (3) GAC-Spatial: only spatial attention is added to the baseline; (4) GAC-CS: both channel-wise and spatial attention; (5) GAC-CSK: adaptive convolution combined with spatial and channel-wise attention; (6,7,8) GAC-($\tau=*$): set $\tau$ to $*$.
\begin{table}[t]
\captionsetup{font=footnotesize}
\centering
\caption{\footnotesize Ablation of adaptive mechanisms on the RFW protocol~\cite{wang2019racial}.}
\label{tab:bias_rfw_adap}
\scalebox{0.67}{
\begin{tabular}{@{}c cccc cc@{}}
\toprule
Method & White & Black & East Asian & South Asian & Avg ($\uparrow$) & STD ($\downarrow$)\\
\midrule
Baseline & $96.18$ & $93.98$ & $93.72$ & $94.67$ & $94.64$ & $1.11$ \\
GAC-Channel & $95.95$ & $93.67$ & $94.33$ & $94.78$ & $94.68$ & $0.83$ \\
GAC-Kernel & $96.23$ & $94.40$ & $94.27$ & $94.80$ & $94.93$ & $0.78$ \\
GAC-Spatial & $95.97$ & $93.20$ & $93.67$ & $93.93$ & $94.19$ & $1.06$ \\
GAC-CS & $96.22$ & $93.95$ & $94.32$ & $\mathbf{95.12}$ & $94.65$ & $0.87$ \\
GAC-CSK & $96.18$ & $93.58$ & $94.28$ & $94.83$ & $94.72$ & $0.95$ \\
GAC-($\tau=0$) & $96.18$ & $93.97$ & $93.88$ & $94.77$ & $94.70$ & $0.92$ \\
GAC-($\tau=-0.1$) & $\mathbf{96.25}$ & $94.25$ & $94.83$ & $94.72$ & $95.01$ & $0.75$ \\
GAC-($\tau=-0.2$) & $96.20$ & $\mathbf{94.77}$ & $\mathbf{94.87}$ & $94.98$ & $\mathbf{95.21}$ & $\mathbf{0.58}$\\
\bottomrule
\end{tabular}}
\end{table}
\begin{figure}[t]
\centering
\captionsetup{font=footnotesize}
\includegraphics[width=0.7\linewidth]{figs/fail_examples.pdf}
\captionof{figure}{\footnotesize{$8$ false positive and false negative pairs on RFW given by the baseline but successfully verified by GAC.}}
\label{fig:failexamples}
\end{figure}
Given the ablation results in Tab.~\ref{tab:bias_rfw_adap},
we make several observations: (1) The baseline model is the most biased across race groups.
(2) Spatial attention mitigates the race bias at the cost of verification accuracy, and is less effective at learning fair features than the other adaptive techniques. This is probably because spatial contents, especially local layout information, only reside at earlier CNN layers, where the spatial dimensions are gradually reduced by the later convolutions and pooling.
Thus, semantic details like demographic attributes are hardly encoded spatially.
(3) Compared to GAC, combining adaptive kernels with both spatial and channel-wise attention increases the number of parameters, lowering the performance.
(4) As $\tau$ determines the number of adaptive layers in GAC, it has a great impact on the performance. A small $\tau$ may introduce redundant adaptive layers, while a large $\tau$ may leave the adaptive layers lacking in capacity.
Fig.~\ref{fig:failexamples} shows pairs of false positives (two faces falsely verified as the same identity) and false negatives
(two faces falsely verified as different identities) produced by the baseline but successfully verified by GAC.
\begin{table}[t]
\captionsetup{font=footnotesize}
\centering
\caption{\footnotesize Ablation of CNN depths and demographics on RFW protocol~\cite{wang2019racial}.}
\label{tab:bias_rfw_depth}
\scalebox{0.67}{
\begin{tabular}{@{}c cccc cc@{}}
\toprule
Method & White & Black & East Asian & South Asian & Avg ($\uparrow$) & STD ($\downarrow$)\\
\midrule
\multicolumn{7}{c}{Number of Layers}\\
ArcFace-34 & $96.13$ & $93.15$ & $92.85$ & $93.03$ & $93.78$ & $1.36$ \\
GAC-ArcFace-34 & $96.02$ & $94.12$ & $94.10$ & $94.22$ & $94.62$ & $0.81$ \\
ArcFace-50 & $96.18$ & $93.98$ & $93.72$ & $94.67$ & $94.64$ & $1.11$ \\
GAC-ArcFace-50 & $96.20$ & $94.77$ & $94.87$ & $94.98$ & $95.21$ & $0.58$\\
ArcFace-100 & $96.23$ & $93.83$ & $94.27$ & $94.80$ & $94.78$ & $0.91$ \\
GAC-ArcFace-100 & $96.43$ & $94.53$ & $94.90$ & $95.03$ & $95.22$ & $0.72$ \\
\midrule
\multicolumn{7}{c}{Race/Ethnicity Labels}\\
Ground-truth & $96.20$ & $94.77$ & $94.87$ & $94.98$ & $95.21$ & $0.58$\\
Estimated & $96.27$ & $94.40$ & $94.32$ & $94.77$ & $94.94$ & $0.79$ \\
Random & $95.95$ & $93.10$ & $94.18$ & $94.82$ & $94.50$ & $1.03$ \\
\bottomrule
\end{tabular}}
\end{table}
\begin{table*}[t!]
\captionsetup{font=footnotesize}
\centering
\scalebox{0.75}{
\begin{tabular}{c cccccc c}
\toprule
Method & Gender & White & Black & East Asian & South Asian & Avg ($\uparrow$) & STD ($\downarrow$)\\
\midrule
\multirow{2}{*}{Baseline} & Male & $97.49 \pm 0.08$ & $96.94 \pm 0.26$ & $97.29 \pm 0.09$ & $97.03 \pm 0.13$ & \multirow{2}{*}{$96.96 \pm 0.03$} & \multirow{2}{*}{$0.69 \pm 0.04$}\\
& Female & $97.19 \pm 0.10$ & $97.93 \pm 0.11$ & $95.71 \pm 0.11$ & $96.01 \pm 0.08$ & &\\
\multirow{2}{*}{AL+Manual} & Male & $98.57 \pm 0.10$ & $98.05 \pm 0.17$ & $98.50 \pm 0.12$ & $\mathbf{98.36 \pm 0.02}$ & \multirow{2}{*}{$98.09 \pm 0.05$} & \multirow{2}{*}{$0.66 \pm 0.07$}\\
& Female & $98.12 \pm 0.18$ & $\mathbf{98.97 \pm 0.13}$ & $96.83 \pm 0.19$ & $97.33 \pm 0.13$ & &\\
\multirow{2}{*}{GAC} & Male & $\mathbf{98.75 \pm 0.04}$ & $\mathbf{98.18 \pm 0.20}$ & $\mathbf{98.55 \pm 0.07}$ & $98.31 \pm 0.12$ & \multirow{2}{*}{$\mathbf{98.19 \pm 0.06}$} & \multirow{2}{*}{$\mathbf{0.56 \pm 0.05}$}\\
& Female & $\mathbf{98.26 \pm 0.16}$ & $98.80 \pm 0.15$ & $\mathbf{97.09 \pm 0.12}$ & $\mathbf{97.56 \pm 0.10}$ & &\\
\bottomrule
\end{tabular}}
{\vspace{-3mm}}
\caption{\footnotesize Verification Accuracy (\%) of $5$-fold cross-validation on $8$ groups of RFW~\cite{wang2019racial}.}
\label{tab:bias_gender_race}
\end{table*}
\begin{figure*}[t!]
\captionsetup{font=footnotesize}
\centering
\begin{subfigure}[b]{0.65\linewidth}
\includegraphics[width=\linewidth]{figs/layer_epoch.pdf}
\vspace{-5mm}
\caption{{\footnotesize}}
\label{fig:kernel_step}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.32\linewidth}
\includegraphics[width=\linewidth]{figs/pcso_histogram.pdf}
\vspace{-5mm}
\caption{{\footnotesize}}
\label{fig:pcso_histogram}
\end{subfigure}\\
\vspace{-3mm}
\caption{\footnotesize{ (a) For each of the three $\tau$ in automatic adaptation, we show the average similarities of pair-wise demographic kernel masks, {\it i.e.}, $\overline{\theta}$, at $1$-$48$ layers (y-axis), and $1$-$20K$ training steps (x-axis). The number of adaptive layers in three cases, {\it i.e.}, $\sum_{1}^{48}(\overline{\theta}>\tau)$ at $20K^{{th}}$ step, are $12$, $8$, and $6$, respectively.
(b) With two race groups (White, Black in PCSO~\cite{klare2012face}) and two models (baseline, GAC), for each of the four combinations, we compute pair-wise correlation of face representations using any two of $1$K subjects in the same race, and plot the histogram of correlations. GAC reduces the difference/bias of two distributions. } \vspace{-2mm}}
\label{fig:heatmap}
\end{figure*}
Both the adaptive layers and de-biasing loss in GAC can be applied to a CNN of any depth. In this ablation, we train both the baseline and GAC with the ArcFace architecture at three different depths: (1) ArcFace-34/GAC-ArcFace-34: $34$-layer CNN; (2) ArcFace-50/GAC-ArcFace-50: $50$-layer CNN; (3) ArcFace-100/GAC-ArcFace-100: $100$-layer CNN.
As the training of GAC relies on demographic information, the error and bias in demographic labels might influence the FR bias reduction of GAC. Thus, we conduct three experiments with different demographic information, (1) ground-truth: the race/ethnicity labels provided by RFW; (2) estimated: the labels predicted by a pre-trained race estimation model; (3) random: the demographic label randomly assigned to each face image.
Tab.~\ref{tab:bias_rfw_depth} reports the ablation results on depths and demographic information. Compared to the baseline models, GAC reduces the performance STD for all the backbones, regardless of the number of layers. We see that the model with the fewest layers exhibits the most bias, and also benefits from the largest reduction in biasness by GAC. The noise in demographic labels does, however, impair the performance of GAC. With estimated demographic information, the biasness is higher than that of the model supervised with ground-truth labels. Meanwhile, the model trained with randomly assigned demographics has the highest biasness.
\SubSection{Results on Gender and Race Groups}
\label{sec:gender_race}
We now extend demographic attributes to both gender and race. First, we train two classifiers that predict gender and race/ethnicity of a face image. The classification accuracy of gender and race/ethnicity is $85\%$ and $81\%$\footnote{This seemingly low accuracy is mainly due to the large dataset we assembled for training and testing gender/race classifiers. Our demographic classifier has been shown to perform comparably as SOTA on common benchmarks. While demographic estimation errors impact the training, testing, and evaluation of bias mitigation algorithms, the evaluation is of the most concern as demographic label errors may greatly impact the biasness calculation. Thus, future development may include either manually cleaning the labels, or designing a biasness metric robust to label errors.}, respectively. Then, these fixed classifiers are affiliated with GAC to provide demographic information for learning adaptive kernels and attention maps. We merge BUPT-Balancedface and RFW, and split the subjects into $5$ sets for each of $8$ demographic groups. In $5$-fold cross-validation, each time a model is trained on $4$ sets and tested on the remaining set.
Here we demonstrate the efficacy of the automation module for GAC.
We compare to a manual design scheme (AL+Manual) that adds adaptive kernels and attention maps to a fixed subset of layers.
Specifically, the first block in every residual unit is chosen to be the adaptive convolution layer, and channel-wise attentions are applied to the feature map output by the last block in each residual unit.
As we use $4$ residual units and each block has $2$ convolutional layers, the manual scheme involves $8$ adaptive convolutional layers and $4$ groups of channel-wise attention maps.
As in Tab.~\ref{tab:bias_gender_race}, automatic adaptation is more effective in enhancing the discriminability and fairness of face representations.
Figure~\ref{fig:kernel_step} shows how the dissimilarity of kernel masks in the convolutional layers changes during training under three thresholds $\tau$. A lower $\tau$ results in more adaptive layers.
We see the layers that are determined to be adaptive do vary across both layers (vertically) and training time (horizontally), which shows the importance of our automatic mechanism.
\begin{figure*}[t!]
\captionsetup{font=footnotesize}
\captionsetup[subfigure]{labelformat=empty}
\centering
\vspace{-3mm}
\begin{subfigure}[b]{0.8\linewidth}
\centering
\begin{minipage}[c]{0.06\textwidth}
\caption{{\tiny Average Image}}
\end{minipage}\hfill
\begin{minipage}[c]{0.94\textwidth}
\begin{subfigure}[b]{0.125\linewidth}
\caption{{\tiny East Asian female}}
\includegraphics[width=\linewidth]{figs/avgface/af.jpg}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.125\linewidth}
\caption{{\tiny East Asian male}}
\includegraphics[width=\linewidth]{figs/avgface/am.jpg}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.125\linewidth}
\caption{{\tiny Black female}}
\includegraphics[width=\linewidth]{figs/avgface/bf.jpg}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.125\linewidth}
\caption{{\tiny Black male}}
\includegraphics[width=\linewidth]{figs/avgface/bm.jpg}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.125\linewidth}
\caption{{\tiny White female}}
\includegraphics[width=\linewidth]{figs/avgface/cf.jpg}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.125\linewidth}
\caption{{\tiny White male}}
\includegraphics[width=\linewidth]{figs/avgface/cm.jpg}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.125\linewidth}
\caption{{\tiny South Asian female}}
\includegraphics[width=\linewidth]{figs/avgface/if.jpg}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.125\linewidth}
\caption{{\tiny South Asian male}}
\includegraphics[width=\linewidth]{figs/avgface/im.jpg}
\end{subfigure}
\end{minipage}
\end{subfigure}
\vspace{3px}
\begin{subfigure}[b]{0.8\linewidth}
\centering
\begin{minipage}[c]{0.06\textwidth}
\caption{{\tiny Heatmap +Image -GAC}}
\end{minipage}\hfill
\begin{minipage}[c]{0.94\textwidth}
\includegraphics[width=0.125\linewidth]{figs/grad/af/heatimg.pdf}\hfill
\includegraphics[width=0.125\linewidth]{figs/grad/am/heatimg.pdf}\hfill
\includegraphics[width=0.125\linewidth]{figs/grad/bf/heatimg.pdf}\hfill
\includegraphics[width=0.125\linewidth]{figs/grad/bm/heatimg.pdf}\hfill
\includegraphics[width=0.125\linewidth]{figs/grad/cf/heatimg.pdf}\hfill
\includegraphics[width=0.125\linewidth]{figs/grad/cm/heatimg.pdf}\hfill
\includegraphics[width=0.125\linewidth]{figs/grad/if/heatimg.pdf}\hfill
\includegraphics[width=0.125\linewidth]{figs/grad/im/heatimg.pdf}
\end{minipage}
\end{subfigure}\\
\begin{subfigure}[b]{0.8\linewidth}
\centering
\begin{minipage}[c]{0.06\textwidth}
\caption{{\tiny Heatmap +Image -Base}}
\end{minipage}\hfill
\begin{minipage}[c]{0.94\textwidth}
\includegraphics[width=0.125\linewidth]{figs/grad/af/heatimg_base.pdf}\hfill
\includegraphics[width=0.125\linewidth]{figs/grad/am/heatimg_base.pdf}\hfill
\includegraphics[width=0.125\linewidth]{figs/grad/bf/heatimg_base.pdf}\hfill
\includegraphics[width=0.125\linewidth]{figs/grad/bm/heatimg_base.pdf}\hfill
\includegraphics[width=0.125\linewidth]{figs/grad/cf/heatimg_base.pdf}\hfill
\includegraphics[width=0.125\linewidth]{figs/grad/cm/heatimg_base.pdf}\hfill
\includegraphics[width=0.125\linewidth]{figs/grad/if/heatimg_base.pdf}\hfill
\includegraphics[width=0.125\linewidth]{figs/grad/im/heatimg_base.pdf}
\end{minipage}
\end{subfigure}
\caption{\footnotesize The first row shows the average faces of different groups in RFW. The next two rows show gradient-weighted class activation heatmaps~\cite{selvaraju2017grad} at the $43^{rd}$ convolutional layer of the GAC and baseline. The higher diversity of heatmaps in GAC shows the variability of parameters in GAC across groups.}
\label{fig:8groupsheatmap}
{\vspace{-3mm}}
\end{figure*}
\begin{table}[t]
\centering
\scalebox{0.62}{
\begin{tabularx}{1.6\linewidth}{X c | X c c c c}
\toprule
\multirow{2}{*}{Method} & \multirow{2}{*}{LFW (\%)} & \multirow{2}{*}{Method} & IJB-A (\%) & \multicolumn{3}{c}{IJB-C @ FAR (\%)}\\
\cline{5-7}
& & & 0.1\% FAR & 0.001\% & 0.01\% & 0.1\% \\
\midrule
DeepFace+~\cite{taigman2014deepface} & $97.35$ & Yin~\etal~\cite{yin2017multi} & $73.9 \pm 4.2$ & - & - & 69.3 \\
CosFace~\cite{wang2018cosface} & $99.73$ & Cao~\etal~\cite{cao2018vggface2} & $90.4 \pm 1.4$ & $74.7$ & $84.0$ & $91.0$ \\
ArcFace~\cite{Deng_2019_CVPR} & $\mathbf{99.83}$ & Multicolumn~\cite{xie2018multicolumn} & $\mathit{92.0 \pm 1.3}$ & $\underline{77.1}$ & $\underline{86.2}$ & $\underline{92.7}$ \\
PFE~\cite{shi2019probabilistic} & $\mathit{99.82}$ & PFE~\cite{shi2019probabilistic} & $\mathbf{95.3 \pm 0.9}$ & $\mathbf{89.6}$ & $\mathbf{93.3}$ & $\mathbf{95.5}$ \\
\midrule
Baseline & $99.75$ & Baseline & $90.2 \pm 1.1$ & $80.2$ & $88.0$ & $92.9$ \\
GAC & $\underline{99.78}$ & GAC & $\underline{91.3 \pm 1.2}$ & $\mathit{83.5}$ & $\mathit{89.2}$ & $\mathit{93.7}$\\
\bottomrule
\end{tabularx}}
\caption{ \footnotesize{Verification performance on LFW, IJB-A, and IJB-C. [Key: \textbf{Best}, \textit{Second}, \underline{Third} Best]}}
\label{tab:lfw_ijba_ijbc}
{\vspace{-3mm}}
\end{table}
\SubSection{Results on Standard Benchmark Datasets}
While our GAC mitigates bias, we also hope it can perform well on standard benchmarks.
Therefore, we evaluate GAC on standard benchmarks without considering demographic impacts, including LFW~\cite{huang2008labeled}, IJB-A~\cite{klare2015pushing}, and IJB-C~\cite{maze2018iarpa}.
These datasets exhibit imbalanced distribution in demographics. For a fair comparison with SOTA, instead of using ground truth demographics, we train GAC on Ms-Celeb-1M~\cite{guo2016ms} with the demographic attributes estimated by the classifier pre-trained in Sec.~\ref{sec:gender_race}.
As in Tab.~\ref{tab:lfw_ijba_ijbc}, GAC outperforms the baseline and performs comparable to SOTA.
\SubSection{Visualization and Analysis on Bias of FR}
\textbf{Visualization:} To understand the adaptive kernels in GAC, we visualize the feature maps at an adaptive layer for faces of various demographics, via a Pytorch visualization tool~\cite{uozbulak_pytorch_vis_2019}.
We visualize important face regions pertaining to the FR decision by using a gradient-weighted class activation mapping (Grad-CAM)~\cite{selvaraju2017grad}.
Grad-CAM uses the gradients back-propagated from the final layer corresponding to an input identity, and guides the target feature map to highlight important regions for identity prediction.
Figure~\ref{fig:8groupsheatmap} shows that, compared to the baseline, the salient regions of GAC demonstrate more diversity on faces from different groups. This illustrates the variability of network parameters in GAC across different groups.
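The Grad-CAM weighting behind these visualizations can be sketched in a few lines. The following is a minimal stdlib illustration of the arithmetic only (channel weights from globally averaged gradients, then a ReLU of the weighted channel sum), assuming activations and gradients have already been extracted from the target layer; it is not the actual Pytorch tooling of~\cite{uozbulak_pytorch_vis_2019}.

```python
def grad_cam(activations, gradients):
    """Grad-CAM map from per-channel activations and gradients.

    activations, gradients: lists of channels, each an H x W list of lists,
    assumed to come from the same target layer for one input identity.
    """
    num_ch = len(activations)
    h, w = len(activations[0]), len(activations[0][0])
    # Global-average-pool the gradients to get one importance weight per channel.
    weights = [sum(sum(row) for row in gradients[k]) / (h * w)
               for k in range(num_ch)]
    # Weighted sum of the channels.
    cam = [[0.0] * w for _ in range(h)]
    for k in range(num_ch):
        for i in range(h):
            for j in range(w):
                cam[i][j] += weights[k] * activations[k][i][j]
    # ReLU: keep only regions that positively support the identity.
    return [[max(v, 0.0) for v in row] for row in cam]
```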
\textbf{Bias via local geometry:}
In addition to STD, we also explain the bias phenomenon by using the local geometry of a given face representation in each demographic group.
We assume that the statistics of neighbors of a given point (representation) reflects certain properties of its manifold (local geometry).
Thus, we illustrate the pair-wise correlation of face representations.
To minimize variations caused by other latent variables, we use constrained frontal faces of a mug shot dataset, PCSO~\cite{klare2012face}, to show the demographic impact on face features.
We randomly select $1$K White and $1$K Black subjects from PCSO, and compute their pair-wise correlation within each race.
In Fig.~\ref{fig:pcso_histogram}, we observe that Base-White representations have lower inter-class correlation than Base-Black, {\it i.e.}, faces in the White group are better represented by the baseline than faces in the Black group.
In contrast, GAC-White and GAC-Black show more similar correlation histograms.
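The pair-wise correlation measurement above can be sketched as follows; a minimal stdlib illustration (function names are ours, not from any released code) that computes all within-group cosine correlations and bins them over $[-1,1]$:

```python
import itertools
import math

def cosine(u, v):
    """Cosine correlation between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def correlation_histogram(features, bins=10):
    """Histogram of all pair-wise cosine correlations within one
    demographic group, binned uniformly over [-1, 1]."""
    corrs = [cosine(u, v) for u, v in itertools.combinations(features, 2)]
    hist = [0] * bins
    for c in corrs:
        idx = min(int((c + 1.0) / 2.0 * bins), bins - 1)
        hist[idx] += 1
    return hist
```

Comparing such histograms for, e.g., the White and Black groups under the baseline and under GAC is the comparison reported in Fig.~\ref{fig:pcso_histogram}.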
\begin{table}[t]
\centering
{\vspace{-3mm}}
\scalebox{0.7}{
\begin{tabular}{cc ccc ccc cc}
\toprule
\multirow{2}{*}{Race} && \multicolumn{2}{c}{Mean} && \multicolumn{2}{c}{StaD} && \multicolumn{2}{c}{Relative Entropy}\\
\cline{3-4} \cline{6-7} \cline{9-10}
&& Baseline & GAC && Baseline & GAC && Baseline & GAC \\
\midrule
White && $1.15$ & $1.17$ && $0.30$ & $0.31$ && $0.0$ & $0.0$ \\
Black && $1.07$ & $1.10$ && $0.27$ & $0.28$ && $0.61$ & $0.43$ \\
East Asian && $1.08$ & $1.10$ && $0.31$ & $0.32$ && $0.65$ & $0.58$ \\
South Asian && $1.15$ & $1.18$ && $0.31$ & $0.32$ && $0.19$ & $0.13$ \\
\bottomrule
\end{tabular}}
\caption{\footnotesize{Distribution of ratios between minimum inter-class distance and maximum intra-class distance of face features in $4$ race groups of RFW. GAC exhibits higher ratios, and more similar distributions to the reference.}}
\label{tab:bias_distribute}
{\vspace{-3mm}}
\end{table}
As PCSO has few Asian subjects, we use RFW to design a second way to examine the local geometry in $4$ race groups.
That is, after normalizing the representations, we compute the pair-wise Euclidean distance and measure the ratio between the minimum distance of inter-subject pairs and the maximum distance of intra-subject pairs.
We compute the mean and standard deviation (StaD) of ratio distributions in $4$ groups, by two models.
Also, we gauge the relative entropy to measure the deviation of distributions from each other.
For simplicity, we choose White group as the reference distribution.
As shown in Tab.~\ref{tab:bias_distribute}, while GAC yields only a minor improvement over the baseline in the mean, it gives smaller relative entropy in the other $3$ groups, indicating that the ratio distributions of other races in GAC are more similar to the reference distribution, {\it i.e.}, less biased.
These results demonstrate the capability of GAC to increase fairness of face representations.
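The two quantities above can be sketched as follows. This is a minimal stdlib illustration in which the ratio is computed per subject (one plausible reading of the protocol) and the relative entropy is the discrete KL divergence between normalized ratio histograms; the smoothing constant is our own choice:

```python
import math

def euclid(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def inter_intra_ratio(subjects):
    """For each subject (a list of normalized features), the ratio between
    its minimum inter-subject distance and maximum intra-subject distance."""
    ratios = []
    for i, feats in enumerate(subjects):
        intra = max(euclid(u, v) for u in feats for v in feats)
        inter = min(euclid(u, v)
                    for j, other in enumerate(subjects) if j != i
                    for u in feats for v in other)
        ratios.append(inter / intra)
    return ratios

def relative_entropy(p, q, eps=1e-12):
    """KL divergence between two normalized histograms (q is the reference,
    e.g. the White group); eps avoids log(0) on empty bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
```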
\Section{Conclusion}
This paper tackles the issue of demographic bias in face recognition by learning a fair face representation. A group adaptive classifier (GAC) is proposed to improve robustness of representations for every demographic group. Both adaptive convolution kernels and channel-wise attention maps are introduced to GAC.
We further add an automation module to determine whether to use adaptations in a given layer. Our findings suggest that faces can be better represented by using layers adaptive to different demographic groups, leading to more balanced performance gain for all groups.
\newcommand{\Section}[1]{\vspace{-0mm} \section{#1} \vspace{0mm}}
\newcommand{\SubSection}[1]{\vspace{-0mm} \subsection{#1} \vspace{0mm}}
\newcommand{\SubSubSection}[1]{\vspace{-0mm} \subsubsection{#1} \vspace{0mm}}
\def\cvprPaperID{9944}
\def\confYear{CVPR 2021}
\begin{document}
\title{Mitigating Face Recognition Bias via Group Adaptive Classifier \\ (Supplementary Material)}
\author{Sixue Gong\quad\quad Xiaoming Liu\quad\quad Anil K. Jain\\
Michigan State University, East Lansing MI 48824\\
{\tt\small \{gongsixu, liuxm, jain\}@msu.edu}
}
\maketitle
\thispagestyle{empty}
In this supplementary material we include: (1) Section \ref{sec:datasets}: the statistics of datasets used in the experiments; (2) Section \ref{sec:classifier}: the performance of the pre-trained gender and race/ethnicity classifiers that provide GAC with demographic information; (3) Section \ref{sec:demog}: a study on the demographic proportions in the training set and the intrinsic bias.
\section{Datasets}
\label{sec:datasets}
\begin{table*}[h]
\centering
\captionsetup{font=footnotesize}
\caption{Statistics of training and testing datasets for the experiments in the paper}
\label{tab:datasets}
\scalebox{1.0}{
\begin{tabularx}{0.9\linewidth}{c c c c c c c}
\toprule
Datasets && \# of Images && \# of Subjects && Demographic Annotations \\
\midrule
IMDB~\cite{rothe2018deep} && $460,723$ && $20,284$ && Gender, Age \\
UTKFace~\cite{zhifei2017cvpr} && $24,106$ && - && Gender, Age, Race/ethnicity \\
AgeDB~\cite{moschoglou2017agedb} && $16,488$ && $567$ && Gender, Age \\
AFAD~\cite{niu2016ordinal} && $165,515$ && - && Gender, Age, Ethnicity (East Asian) \\
AAF~\cite{cheng2019exploiting} && $13,322$ && $13,322$ && Gender, Age \\
RFW~\cite{wang2019racial} && $665,807$ && - && Race/Ethnicity \\
BUPT-Balancedface~\cite{wang2020mitigating} && $1,251,430$ && $28,000$ && Race/Ethnicity\\
IMFDB-CVIT~\cite{imfdb} && $34,512$ && $100$ && Gender, Age Groups, Ethnicity (South Asian) \\
MS-Celeb-1M~\cite{guo2016ms} && $5,822,653$ && $85,742$ && No Demographic Labels \\
PCSO~\cite{deb2017face} && $1,447,607$ && $5,749$ && Gender, Age, Race/Ethnicity \\
LFW~\cite{huang2008labeled} && $13,233$ && $5,749$ && No Demographic Labels \\
IJB-A~\cite{klare2015pushing} && $25,813$ && $500$ && Gender, Age, Skin Tone \\
IJB-C~\cite{maze2018iarpa} && $31,334$ && $3,531$ && Gender, Age, Skin Tone \\
\bottomrule
\end{tabularx}}
\end{table*}
Tab.~\ref{tab:datasets} summarizes the datasets we adopt for the experiments, reporting the total number of face images and subjects (identities), and the types of demographic annotations. Tab.~\ref{tab:cross_valid} reports the statistics of each data fold for the cross-validation experiment on the BUPT-Balancedface and RFW datasets.
\begin{table*}[h]
\centering
\captionsetup{font=footnotesize}
\caption{Statistics of Dataset Folds in the Cross-validation Experiment}
\label{tab:cross_valid}
\scalebox{1.0}{
\begin{tabular}{cc ccc ccc ccc cc}
\toprule
\multirow{2}{*}{Fold} && \multicolumn{2}{c}{White (\#)} && \multicolumn{2}{c}{Black (\#)} && \multicolumn{2}{c}{East Asian (\#)} && \multicolumn{2}{c}{South Asian (\#)} \\
\cline{3-4} \cline{6-7} \cline{9-10} \cline{12-13}
&& Subjects & Images && Subjects & Images && Subjects & Images && Subjects & Images \\
\midrule
$1$ && $1,991$ & $68,159$ && $1,999$ & $67,880$ && $1,898$ & $67,104$ && $1,996$ & $57,628$ \\
$2$ && $1,991$ & $67,499$ && $1,999$ & $65,736$ && $1,898$ & $66,258$ && $1,996$ & $57,159$ \\
$3$ && $1,991$ & $66,091$ && $1,999$ & $65,670$ && $1,898$ & $67,696$ && $1,996$ & $56,247$ \\
$4$ && $1,991$ & $66,333$ && $1,999$ & $67,757$ && $1,898$ & $65,341$ && $1,996$ & $57,665$ \\
$5$ && $1,994$ & $68,597$ && $1,999$ & $67,747$ && $1,898$ & $68,763$ && $2,000$ & $56,703$ \\
\bottomrule
\end{tabular}}
\end{table*}
\section{Demographic Attribute Estimation}
\label{sec:classifier}
We train a gender classifier and a race/ethnicity classifier to provide GAC with demographic information during both training and testing. We use the same datasets as in~\cite{gong2020jointly} for training and evaluating the two demographic attribute classifiers. The combination of IMDB, UTKFace, AgeDB, AFAD, and AAF is used for gender estimation, and the collection of AFAD, RFW, IMFDB-CVIT, and PCSO is used for race/ethnicity estimation. Fig.~\ref{fig:stats} shows the total number of images in each demographic group of the training and testing sets. Fig.~\ref{fig:acc} shows the performance of demographic attribute estimation on the testing set. For gender estimation, the performance in the male group is better than that in the female group. For race/ethnicity estimation, the White group outperforms the other race/ethnicity groups.
\begin{figure}[t]
\captionsetup{font=footnotesize}
\centering
\begin{subfigure}[b]{0.53\linewidth}
\includegraphics[width=\linewidth]{figs/gender_distr.pdf}
\vspace{-6mm}
\caption{{\footnotesize} Gender Distribution}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.46\linewidth}
\includegraphics[width=\linewidth]{figs/race_distr.pdf}
\vspace{-2mm}
\caption{{\footnotesize} Race Distribution}
\end{subfigure}\\
\vspace{-1mm}
\caption{\footnotesize{Statistics of the datasets for training and testing demographic attribute estimation networks. (a) The number of images in each gender group of the datasets for gender estimation; (b) The number of images in each race/ethnicity group of the datasets for race/ethnicity estimation.} \vspace{-4mm}}
\label{fig:stats}
\end{figure}
\begin{figure}[t]
\captionsetup{font=footnotesize}
\centering
\begin{subfigure}[b]{0.53\linewidth}
\includegraphics[width=\linewidth]{figs/gender_acc.pdf}
\vspace{-6mm}
\caption{{\footnotesize} Gender Estimation}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.46\linewidth}
\includegraphics[width=\linewidth]{figs/race_acc.pdf}
\vspace{-2mm}
\caption{{\footnotesize} Race/Ethnicity Estimation}
\end{subfigure}\\
\vspace{-1mm}
\caption{\footnotesize{Performance of the demographic attribute estimation networks. (a) The classification accuracy in each gender group; (b) The classification accuracy in each race/ethnicity group. The red dashed line shows the average performance.} \vspace{-4mm}}
\label{fig:acc}
\end{figure}
\begin{table}[t]
\captionsetup{font=footnotesize}
\centering
\caption{\footnotesize Verification accuracy ($\%$) on the RFW protocol~\cite{wang2019racial} with varying race/ethnicity distribution.}
\label{tab:bias_rfw_ratio}
\scalebox{0.7}{
\begin{tabular}{@{}c cccc cc@{}}
\toprule
Training Ratio & White & Black & East Asian & South Asian & Avg ($\uparrow$) & STD ($\downarrow$)\\
\midrule
7:7:7:7 & $96.20$ & $94.77$ & $94.87$ & $94.98$ & $95.21$ & $0.58$\\
5:7:7:7 & $96.53$ & $94.67$ & $94.55$ & $95.40$ & $95.29$ & $0.79$ \\
3.5:7:7:7 & $96.48$ & $94.52$ & $94.45$ & $95.32$ & $95.19$ & $0.82$ \\
1:7:7:7 & $95.45$ & $94.28$ & $94.47$ & $95.13$ & $94.83$ & $0.48$ \\
0:7:7:7 & $92.63$ & $92.27$ & $92.32$ & $93.37$ & $92.65$ & $0.44$ \\
\bottomrule
\end{tabular}}
\end{table}
\section{Analysis on Intrinsic Bias and Data Bias}
\label{sec:demog}
For all the algorithms listed in Tab.~1 of the main paper, the performance in the White group is higher than in the other three groups, even though all the models are trained on a demographically balanced dataset, BUPT-Balancedface~\cite{wang2020mitigating}. In this section, we further investigate the intrinsic bias of face recognition between demographic groups and the impact of the data bias in the training set. \textit{Are non-White faces inherently more difficult for existing algorithms to recognize? Or, are face images in BUPT-Balancedface (the training set) and RFW~\cite{wang2019racial} (the testing set) biased towards the White group?}
To this end, we train our GAC network using training sets with different race/ethnicity distributions and evaluate them on RFW. In total, we conduct four experiments, in which we gradually reduce the total number of subjects in the White group from the BUPT-Balancedface dataset. To construct a new training set, subjects from the non-White groups in BUPT-Balancedface remain the same, while a subset of subjects is randomly picked from the White group. As a result, the ratios between non-White groups are consistently the same, and the ratios of White, Black, East Asian, South Asian are $\{5:7:7:7\}$, $\{3.5:7:7:7\}$, $\{1:7:7:7\}$, $\{0:7:7:7\}$ in the four experiments, respectively. In the last setting, we completely remove White from the training set.
Tab.~\ref{tab:bias_rfw_ratio} reports the face verification accuracy on RFW of models trained with different race/ethnicity distributions. For comparison, we also include our results on the balanced dataset (ratio $\{7:7:7:7\}$), where all images in BUPT-Balancedface are used for training. From the results, we make several observations: (1) The White group still outperforms the non-White groups in the first three experiments. Even without any White subjects in the training set, the accuracy on the White testing set is still higher than on the testing images in the Black and East Asian groups. This suggests that White faces are either intrinsically easier to verify or that face images in the White group of RFW are less challenging. (2) As the total number of White subjects declines, the average performance declines as well. In fact, the performance in all groups suffers from the decrease in the number of White faces. This indicates that face images in the White group help boost the performance of face recognition algorithms for faces from both White and non-White groups. In other words, faces from the White group benefit the learning of global patterns for face recognition in general. (3) Contrary to our intuition, the bias (STD) is lower with fewer White faces, even though the data bias is actually increased by making the training set unbalanced.
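The construction of the ratio-controlled training sets can be sketched as follows; an illustrative stdlib snippet (the `groups` mapping and key names are hypothetical placeholders, not the actual BUPT-Balancedface loader) in which non-White groups are kept whole and the White group is randomly subsampled:

```python
import random

def subsample_training_set(groups, white_units, seed=0):
    """Build a training set with race ratio {white_units : 7 : 7 : 7}.

    groups: dict mapping race name -> list of subject ids (assumed to have
    equal sizes per group, as in a balanced source set).
    white_units: 7 keeps the set balanced; 5, 3.5, 1, 0 reproduce the
    reduced-White settings; subjects are drawn without replacement.
    """
    rng = random.Random(seed)
    # Non-White groups remain unchanged.
    subset = {race: list(ids) for race, ids in groups.items() if race != "White"}
    # Randomly pick the prescribed fraction of White subjects.
    n_white = round(len(groups["White"]) * white_units / 7.0)
    subset["White"] = rng.sample(groups["White"], n_white)
    return subset
```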
\subsection{Formulation of problems\label{sect1.1}}
In the union $\Pi_{l}^{\varepsilon}$, fig. \ref{f1}, b and a, of the unit
straight strip
\begin{equation}
\Pi=\left\{ x=\left( x_{1},x_{2}\right) \in\mathbb{R}^{2},\text{ }x_{1}
\in\mathbb{R},\text{ }x_{2}\in\left( 0,1\right) \right\}  \label{00}
\end{equation}
and a rectangle of length $2l>0$ and a small width $\varepsilon>0$,
\begin{equation}
\varpi_{l}^{\varepsilon}=\left\{ x:\left\vert x_{1}\right\vert <l,\text{
\ }x_{2}\in\left( -\varepsilon,0\right] \right\} , \label{0}
\end{equation}
we consider the spectral Neumann problem
\begin{align}
-\Delta u^{\varepsilon}\left( x\right) & =\lambda^{\varepsilon
}u^{\varepsilon}\left( x\right) ,\text{ \ }x\in\Pi_{l}^{\varepsilon}=\Pi
\cup\varpi_{l}^{\varepsilon},\label{1}\\
\partial_{\nu}u^{\varepsilon}\left( x\right) & =0,\ x\in\partial\Pi
_{l}^{\varepsilon}, \label{2}
\end{align}
\begin{figure}
[ptb]
\begin{center}
\includegraphics[height=0.9634in,width=3.992in]{NewCaDuNa.eps}
\caption{The waveguide with a box-shaped perturbation (a) and its fragments
(b).}
\label{f1}
\end{center}
\end{figure}
where $\Delta=\nabla\cdot\nabla$ is the Laplace operator, $\nabla
=\operatorname{grad},$ $\lambda^{\varepsilon}$ is the spectral parameter and
$\partial_{\nu}=\nu\cdot\nabla$ is the directional derivative, $\nu$ stands
for the unit outward normal defined everywhere at the boundary $\partial
\Pi_{l}^{\varepsilon},$ except for corner points, i.e. vertices of the
rectangle (\ref{0}). Since a solution of the problem (\ref{1}), (\ref{2}) may
get singularities at these points, the problem ought to be reformulated as the
integral identity \cite{Lad}
\begin{equation}
\left( \nabla u^{\varepsilon},\nabla v^{\varepsilon}\right) _{\Pi
_{l}^{\varepsilon}}=\lambda^{\varepsilon}\left( u^{\varepsilon}
,v^{\varepsilon}\right) _{\Pi_{l}^{\varepsilon}},\ \forall v^{\varepsilon}\in
H^{1}\left( \Pi_{l}^{\varepsilon}\right) , \label{3}
\end{equation}
where $\left( \ ,\ \right) _{\Pi_{l}^{\varepsilon}}$ is the natural scalar
product in the Lebesgue space $L^{2}\left( \Pi_{l}^{\varepsilon}\right) $
and $H^{1}\left( \Pi_{l}^{\varepsilon}\right) $ stands for the Sobolev
space. The symmetric bilinear form on the left-hand side of (\ref{3}) is
closed and positive in $H^{1}\left( \Pi_{l}^{\varepsilon}\right) $ so that
problem (\ref{1}), (\ref{2}) is associated \cite[Ch 10]{BiSo} with a positive
self-adjoint operator $A_{l}^{\varepsilon}$ in $L^{2}\left( \Pi
_{l}^{\varepsilon}\right) $ whose spectrum $\wp=\wp_{co}$ is continuous and
covers the closed positive semi-axis $\overline{\mathbb{R}}_{+}=\left[
0,+\infty\right) .$ The domain $\mathcal{D}\left( A_{l}^{\varepsilon
}\right) $ of $A_{l}^{\varepsilon},$ of course, belongs to $H^{1}\left(
\Pi_{l}^{\varepsilon}\right) $ but is bigger than $H^{2}\left( \Pi
_{l}^{\varepsilon}\right) $ due to singularities of solutions at the corner
points, see, e.g., \cite[Ch.2]{NaPl}. The point spectrum $\wp_{po}$ of
$A_{l}^{\varepsilon}$ can be non-empty and the main goal of our paper is to
single out a particular value of the length parameter $l$ such that the
operator $A_{l}^{\varepsilon}$ wins an eigenvalue $\lambda_{l}^{\varepsilon
}\in\wp_{po}$ embedded into the continuous spectrum. The corresponding
eigenfunction $u_{l}^{\varepsilon}\in H^{1}\left( \Pi_{l}^{\varepsilon
}\right) $ decays exponentially at infinity and is called a trapped mode, cf.
\cite{LM} and \cite{Urcell}.
Our central result formulated below in Theorem \ref{TheoremEX}, roughly
speaking, demonstrates that an eigenvalue $\lambda_{l}^{\varepsilon}$ exists
in the interval $\left( 0,\pi^{2}\right) \subset\wp_{co}$ for
$l^{\varepsilon}\approx\pi k$ with $k\in\mathbb{N}=\left\{ 1,2,3,...\right\}
$ only. Asymptotics of $\lambda^{\varepsilon}$ and $l^{\varepsilon}$ are
constructed, too.
Problem (\ref{1}), (\ref{2}) is a model of an acoustic waveguide with hard
walls, cf. \cite{Acous}, but is also related in a natural way to the linear
theory of surface water-waves, cf. \cite{KuMaVa}. Indeed, the velocity
potential $\Phi^{\varepsilon}\left( x,z\right) $ satisfies the Laplace
equation in the channel $\Xi_{l,d}^{\varepsilon}=\Pi_{l}^{\varepsilon}
\times\left( -d,0\right) \subset\mathbb{R}^{3}\ni\left( x,z\right) $ of
depth $d>0$ with the Neumann condition (no normal flow) at its vertical walls
and horizontal bottom as well as the spectral Steklov condition (the kinetic
one) on the free horizontal surface
\[
\partial_{z}\Phi^{\varepsilon}\left( x,0\right) =\Lambda^{\varepsilon}
\Phi^{\varepsilon}\left( x,0\right) ,\text{ \ }x\in\Pi_{l}^{\varepsilon}.
\]
After factoring out the dependence on the vertical variable $z$,
\begin{equation}
\Phi^{\varepsilon}\left( x,z\right) =u^{\varepsilon}\left( x\right)
\left( e^{z\lambda^{\varepsilon}}+e^{-\left( z+2d\right) \lambda
^{\varepsilon}}\right) , \label{4}
\end{equation}
see, e.g., \cite{Vas1} and \cite{LM}, the water-wave problem reduces to the
two-dimensional Neumann problem (\ref{1}), (\ref{2}) for the function
$u^{\varepsilon}$ in (\ref{4}) and the parameter $\lambda^{\varepsilon}$
determined from the equation
\[
\Lambda^{\varepsilon}=\lambda^{\varepsilon}\frac{1-e^{-2d\lambda^{\varepsilon
}}}{1+e^{-2d\lambda^{\varepsilon}}}=\lambda^{\varepsilon}\tanh\left(
d\lambda^{\varepsilon}\right) .
\]
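The last equality is the elementary identity $(1-e^{-2t})/(1+e^{-2t})=\tanh t$ with $t=d\lambda^{\varepsilon}$; the following is only a quick numerical sanity check of this algebraic step, not part of the analysis:

```python
import math

def capital_lambda(lam, d):
    """Lambda in the exponential form displayed above, for spectral
    parameter lam and depth d."""
    e = math.exp(-2.0 * d * lam)
    return lam * (1.0 - e) / (1.0 + e)

# In the deep-water limit d -> infinity, capital_lambda(lam, d) -> lam.
```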
We will not discuss separately this interpretation of our problem but in the
next section present some asymptotic formulas for eigenvalues of the Laplace
operator with either Dirichlet, or mixed boundary conditions.
\subsection{Asymptotics of eigenvalues\label{sect1.2}}
Imposing the Dirichlet condition
\begin{equation}
u^{\varepsilon}\left( x\right) =0,\text{ \ \ }x\in\partial\Pi_{l
^{\varepsilon}, \label{5}
\end{equation}
instead of the Neumann condition (\ref{2}), creates the positive cut-off value
$\lambda_{\dagger}=\pi^{2}$ of the continuous spectrum $\wp^{D}=\left[
\pi^{2},+\infty\right) $ of the Dirichlet problem (\ref{1}), (\ref{5}) which
provides an adequate model of a quantum waveguide, cf. \cite{Keijo}. The
interval $\left( 0,\pi^{2}\right) $ stays now below the continuous spectrum
and therefore may contain eigenvalues composing the discrete spectrum
$\wp_{di}^{D}$ of the problem. As follows from a result in \cite{Sim}, the
multiplicity $\#\wp_{di}^{D}$ is equal to $1$ for a small $\varepsilon>0$.
Although the paper \cite{Sim} deals with a regular (smooth) perturbation of
the wall, it is possible to select two smooth shallow pockets, one enveloping
and one entering the box as in fig. \ref{f2}, a and b, and to extend the
existence and uniqueness result in \cite{Sim} to the box-shaped perturbations
by means of the max-min principle, see, e.g., \cite[Thm 10.2.2]{BiSo}.
However, the attendant asymptotic formula
\begin{equation}
\lambda_{l}^{\varepsilon}=\pi^{2}-4\pi^{4}\varepsilon^{2}l^{2}+O\left(
\varepsilon^{3}\right) ,\text{ \ }\varepsilon\rightarrow0, \label{6}
\end{equation}
cannot be supported by these results because an application in \cite{Sim} of a
change of variables which transforms $\Pi_{l}^{\varepsilon}$ into $\Pi$
requires certain smoothness properties of the boundary $\partial\Pi
_{l}^{\varepsilon}$ which are naturally absent, fig. \ref{f1}, b. In Section
\ref{sect8.3} we will explain how our approach helps to justify formula
(\ref{6}). Local perturbations of quantum waveguides in $\mathbb{R}^{n}$,
$n\geq2$, have been intensively investigated during the last two decades and many
important results on the existence and asymptotic behavior of their discrete
spectrum have been published. We mention a few of them, namely \cite{ExDu, Ex,
Masl} for the slightly curved and twisted cylindrical waveguides,
\cite{sha1,na480,na561} for cranked waveguides and \cite{Gad, Gru} for the
Laplacian perturbed by a small second-order differential operator with
compactly supported coefficients. We also refer to \cite{na534} for non-local
perturbations, fig. \ref{f3}, a and to \cite{BoBuCa1, BoBuCa4, Sim} for
alternation of the Dirichlet and Neumann boundary conditions.
\begin{figure}
[ptb]
\begin{center}
\includegraphics[height=0.7014in,width=4.3716in]{CaDuNa2.eps}
\caption{The box-shaped perturbation enveloping (a) and entering (b) a regular
perturbation.}
\label{f2}
\end{center}
\end{figure}
In the literature one finds far fewer results on eigenvalues embedded into the
continuous spectrum, cf. the review papers \cite{BBD1, BBD2, LM}. First of all
we describe an elegant method \cite{Vas1} which is based on imposing an
artificial Dirichlet condition and has become rather widely used for proving
the existence of embedded eigenvalues, but only in symmetric waveguides.
\begin{figure}
[ptb]
\begin{center}
\includegraphics[height=1.2496in,width=3.5777in]{CaDuNa3.eps}
\caption{A non-local perturbation (a) and a symmetric box-shaped perturbation
(b).}
\label{f3}
\end{center}
\end{figure}
Let us consider an auxiliary mixed boundary value problem and supply the
Helmholtz equation (\ref{1}) with the Neumann condition on the lower jagged
wall and the Dirichlet condition on the upper straight wall, see fig.
\ref{f1}, b:
\begin{equation}
u^{\varepsilon}\left( x_{1},1\right) =0,\text{ \ \ }x_{1}\in\mathbb{R}
,\ \ \ \ \ \ \partial_{\nu}u^{\varepsilon}\left( x\right) =0,\text{
\ \ }x\in\partial\Pi_{l}^{\varepsilon},\text{ }x_{2}<1. \label{7}
\end{equation}
Problem (\ref{1}), (\ref{7}), has the continuous spectrum $\wp_{co}
^{M}=\left[ \pi^{2}/4,+\infty\right) $ and in Section \ref{sect8.1} we will
show the existence of only one eigenvalue
\begin{equation}
\lambda_{l}^{\varepsilon}=\frac{\pi^{2}}{4}(1-\pi^{2}l^{2}\varepsilon
^{2})+O(\varepsilon^{3}\left( 1+\left\vert \ln\varepsilon\right\vert \right)
^{2}),\text{ \ }\varepsilon\rightarrow+0, \label{8}
\end{equation}
in the discrete spectrum $\wp_{di}^{M}\subset\left( 0,\pi^{2}/4\right) $.
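For orientation, the leading term of (\ref{8}) is easily evaluated; the snippet below (illustrative only, the remainder $O(\varepsilon^{3}(1+\left\vert \ln\varepsilon\right\vert )^{2})$ is ignored) shows that for small $\varepsilon$ the eigenvalue sits just below the cut-off $\pi^{2}/4$:

```python
import math

# Cut-off value of the continuous spectrum of the mixed problem (1), (7).
PI2_OVER_4 = math.pi ** 2 / 4.0

def lambda_mixed_leading(eps, l):
    """Leading term of the expansion (8): (pi^2/4)(1 - pi^2 l^2 eps^2)."""
    return PI2_OVER_4 * (1.0 - math.pi ** 2 * l ** 2 * eps ** 2)
```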
Following \cite{Vas1} we extend the corresponding eigenfunction $u_{l}
^{\varepsilon}\left( x_{1},x_{2}\right) $ as an odd function in $x_{2}-1$
from $\Pi_{l}^{\varepsilon}$ onto the bigger waveguide $\widehat{\Pi}
_{l}^{\varepsilon}$ drawn in fig. \ref{f3}, b and obtained as the union of the
strip $\mathbb{R}\times\left( 0,2\right) $ and the box $\left( -l,l\right)
\times\left( -\varepsilon,2+\varepsilon\right) $. Owing to the Dirichlet
condition in (\ref{7}) at the midline of $\widehat{\Pi}_{l}^{\varepsilon}$,
this extension $\widehat{u}_{l}^{\varepsilon}\left( x_{1},x_{2}\right) $ is
a smooth function everywhere in $\widehat{\Pi}_{l}^{\varepsilon},$ except at
corner points and inherits from $u_{l}^{\varepsilon}\left( x_{1}
,x_{2}\right) $ an exponential decay as $x_{1}\rightarrow\pm\infty$. Clearly
\begin{equation}
-\Delta\widehat{u}_{l}^{\varepsilon}\left( x\right) =\lambda_{l}
^{\varepsilon}\widehat{u}_{l}^{\varepsilon}\left( x\right) ,\text{ \ }
x\in\widehat{\Pi}_{l}^{\varepsilon},\text{ \ }\partial_{\nu}\widehat{u}
_{l}^{\varepsilon}\left( x\right) =0,\text{ \ \ \ }x\in\partial\widehat{\Pi
}_{l}^{\varepsilon}, \label{888}
\end{equation}
and, thus, $\widehat{u}_{l}^{\varepsilon}$ is an eigenfunction of the Neumann
problem (\ref{888}) while the corresponding eigenvalue (\ref{8}) belongs to
the continuous spectrum $\widehat{\wp}_{co}=\left[ 0,+\infty\right) $ of
this problem.
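For completeness, the reflection can be written explicitly; the following display is our own brief check of the extension argument:
\[
\widehat{u}_{l}^{\varepsilon}\left( x_{1},x_{2}\right) =\left\{
\begin{array}{ll}
u_{l}^{\varepsilon}\left( x_{1},x_{2}\right) , & x_{2}\leq1,\\
-u_{l}^{\varepsilon}\left( x_{1},2-x_{2}\right) , & x_{2}>1.
\end{array}
\right.
\]
Indeed, the Dirichlet condition in (\ref{7}) makes this extension continuous
across the midline $\left\{ x_{2}=1\right\} $, the one-sided normal
derivatives at the midline coincide by the oddness, and the Helmholtz equation
is preserved under the reflection $x_{2}\mapsto2-x_{2}$.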
We emphasize that the method \cite{Vas1} requires the mirror symmetry of the
waveguide and cannot be applied to the asymmetric waveguide $\Pi
_{l}^{\varepsilon}$ in fig. \ref{f1}, b. The detected embedded eigenvalue
$\lambda_{l}^{\varepsilon}$ of the Neumann problem (\ref{888}) is stable with
respect to small symmetric perturbations of the waveguide walls but any
violation of the symmetry may drive it out of the spectrum and turn it into a
point of complex resonance, cf. \cite{Vas4} and, e.g., \cite{na546}.
The intrinsic instability of embedded eigenvalues calls for special
techniques to detect them as well as to construct their asymptotics. In the
present paper we use a criterion for the existence of trapped modes (see
\cite{na275} and Theorem \ref{TheoremA} below) and a concept of enforced
stability of eigenvalues in the continuous spectrum, cf. \cite{na489, na546}.
\subsection{Reduction of the problem\label{sect1.3}}
In view of the mirror symmetry about the $x_{2}$-axis (notice the difference
with the above-mentioned assumption in \cite{Vas1}), we truncate the waveguide
$\Pi_{l}^{\varepsilon}$ and consider the Neumann problem
\begin{align}
-\Delta u_{+}^{\varepsilon}\left( x\right) & =\lambda_{+}^{\varepsilon
}u_{+}^{\varepsilon}\left( x\right) ,\text{ \ }x\in\Pi_{l+}^{\varepsilon
},\label{9}\\
\partial_{\nu}u_{+}^{\varepsilon}\left( x\right) & =0,\text{ \ \ \ }
x\in\partial\Pi_{l+}^{\varepsilon}, \label{10}
\end{align}
in its right half (overshaded in fig. \ref{f1}, b)
\begin{equation}
\Pi_{l+}^{\varepsilon}=\left\{ x\in\Pi_{l}^{\varepsilon}:x_{1}>0\right\}
=\left\{ x:x_{2}\in(-\varepsilon,0)\text{ for }x_{1}\in\left( 0,l\right)
,\text{ }x_{2}\in(0,1)\text{ for }x_{1}\geq l\right\} . \label{11}
\end{equation}
Clearly, the even in $x_{1}$ extension of an eigenfunction $u_{+}
^{\varepsilon}$ of problem (\ref{9}), (\ref{10}) becomes an eigenfunction of
the original problem (\ref{1}), (\ref{2}). Searching for an eigenvalue
\begin{equation}
\lambda^{\varepsilon}\in\left( 0,\pi^{2}\right) , \label{12}
\end{equation}
we will show in Section \ref{sect7} that, first, problem (\ref{9}), (\ref{10})
cannot have more than one eigenvalue in $\left( 0,\pi^{2}\right) $ and,
second, the mixed boundary value problem in (\ref{11}) with the Dirichlet
condition at the truncation segment $\left\{ x:x_{1}=0,\text{ \ }x_{2}
\in(-\varepsilon,1)\right\} $ instead of the Neumann condition as in
(\ref{10}), does not have eigenvalues (\ref{12}). This means that an
eigenfunction of problem (\ref{1}), (\ref{2}) associated with the eigenvalue
(\ref{12}) is always even in the variable $x_{1}$. In this way, we will be
able to describe the part $\wp_{po}\cap\left( 0,\pi^{2}\right) $ of the
point spectrum in the entire waveguide $\Pi_{l}^{\varepsilon}$. In what
follows we skip the subscript $l$. Hence, we regard (\ref{3}) as an integral
identity serving for problem (\ref{9}), (\ref{10}) in $\Pi_{+}^{\varepsilon
}:=\Pi_{l+}^{\varepsilon}$ and denote by $A_{+}^{\varepsilon}$ the corresponding
self-adjoint operator in $L^{2}\left( \Pi_{+}^{\varepsilon}\right) $, cf.
Section \ref{sect1.1}.
\subsection{Embedded eigenvalues\label{sect1.4}}
In the absence of the mirror symmetry about a midline of a waveguide the
modern literature gives far fewer results on the existence of embedded
eigenvalues. A distinguishing feature of an eigenvalue in the continuous
spectrum is its intrinsic instability with respect to a variation of the
waveguide shape while all eigenvalues in the discrete spectrum stay stable.
In this way to detect an eigenvalue in the Neumann waveguide $\Pi
_{l}^{\varepsilon}$, see (\ref{1}), (\ref{2}), a fine tuning of the
parameter-dependent shape is needed, namely the length $2l=2l(\varepsilon)$ of
the perturbation box (\ref{0}) must be chosen specifically in dependence of
its height $\varepsilon$.
A method to construct particular waveguide shapes which support embedded
eigenvalues, was developed in \cite{na246, na252, na476, na489, na546} on the
basis of a sufficient condition \cite{na275} for the existence of
exponentially decaying solutions (trapped modes in waveguides) to elliptic
problems in domains with cylindrical outlets to infinity. As a result, several
examples of eigenvalues in the continuous spectrum were constructed without
requiring a geometrical symmetry of waveguides which are obtained from the
straight unit strip by either singular \cite{na246, na252, na533}, or regular
\cite{na476, na489, na546} perturbations of the boundary, see fig. \ref{f5}, a
and b, respectively; for a non-local smooth perturbation, see \cite{na521}. To
this end, a notion of the augmented scattering matrix \cite{na275} was used
together with certain traditional asymptotic procedures in domains with small
holes and cavities, cf. \cite[Ch.4, 5 and 2]{MaNaPl}, or in domains with
smoothly varied boundaries, cf. \cite[Ch. XII, \S 6.5]{Kato}.
\begin{figure}
[ptb]
\begin{center}
\includegraphics[height=0.6512in,width=4.235in]{CaDuNa5.eps}
\caption{Singular (a) and regular (b) perturbations.}
\label{f5}
\end{center}
\end{figure}
It is very important in the above-mentioned approach to detect embedded
eigenvalues that the asymptotic procedures in use are completed in such a way
that they provide both a formal derivation of asymptotic expansions and an
operator reformulation of the diffraction problem in the framework of the
perturbation theory of linear operators, see, e.g., \cite{HilPhi, Kato}, which
helps to conclude a smooth (actually analytic) dependence of scattering
matrices on geometrical parameters describing the waveguide shape. Moreover,
this permits us to reformulate the sufficient condition \cite{na275} for the
existence of trapped modes as a nonlinear abstract equation and to fulfil the
condition by means of the contraction principle, cf. Sections \ref{sect2.3}
and \ref{sect4.1}, \ref{sect4.2} below.
The box-shaped perturbation (\ref{0}) of the strip (\ref{00}) can be regarded
as a combination of regular and singular perturbations, respectively outside
and inside neighborhoods of the corner points but unfortunately the authors do
not know a tool to reduce the problem (\ref{1}), (\ref{2}) (or (\ref{3}) in
the variational form) to an abstract equation in a fixed (independent of
$\varepsilon$) Banach space and to confirm necessary properties of the
scattering matrices in the waveguide $\Pi_{l}^{\varepsilon}=\Pi\cup\varpi
_{l}^{\varepsilon}$. Thus, in order to support each step of our procedure to
detect an embedded eigenvalue and to establish its uniqueness, we have to
obtain a certain new result for the problems (\ref{1}), (\ref{2}) and/or
(\ref{9}), (\ref{10}) which cannot be deduced from the general perturbation
theory. Although several approaches and tricks proposed in our paper work also
for other shapes like in fig. \ref{f6}, we focus the analysis on the
particular shape in fig. \ref{f1}, b.
\subsection{Architecture of the paper\label{sect.1.6}}
We proceed in Section \ref{sect2} with introducing different waves in $\Pi
_{+}^{\varepsilon}$: oscillatory and exponential for $\lambda^{\varepsilon}\in\left( 0,\pi^{2}\right) $ and linear in $x_{1}$ at the threshold
$\lambda^{\varepsilon}=\pi^{2}$. Then on the basis of the Mandelstam energy
principle, cf. \cite[\S 5.3]{NaPl}, we perform the classification $\left\{
\text{incoming/outgoing}\right\} $ for the introduced waves and impose two
types, physical and artificial, of radiation conditions at infinity. The
corresponding diffraction problems give rise to two scattering matrices
$s^{\varepsilon}$ and $S^{\varepsilon}$. Due to the restriction of the
boundary value problem (\ref{1}), (\ref{2}) onto the semi-infinite waveguide
(\ref{11}), the matrix $s^{\varepsilon}$ reduces to the classical scalar reflection
coefficient, but the augmented scattering matrix $S^{\varepsilon}$ is of size
$2\times2$ because the artificial radiation conditions involve the exponential
waves in addition to the oscillatory waves. The above-mentioned criterion for
the existence of embedded eigenvalues is formulated in terms of the matrix
$S^{\varepsilon}$, see Theorem \ref{TheoremA} and note that its proof is
completed by Theorem \ref{TheoremMA} about solutions of the problem (\ref{9}),
(\ref{10}) with a fast exponential decay. In Section \ref{sect3} we construct
formal asymptotic expansions of the augmented scattering matrix which are
justified in\ Section \ref{sect6.4}. In order to detect an embedded eigenvalue
in Section \ref{sect4} we need the main asymptotic and first correction terms.
The two-fold nature of the box-shaped perturbation manifests itself in
different ans\"{a}tze for the diagonal entries $S_{11}^{\varepsilon}$ and
$S_{00}^{\varepsilon}$ of the matrix $S^{\varepsilon}$. In the first case the
asymptotic procedure looks as for a regular perturbation of the boundary,
that is, the boundary layer phenomenon does not influence the main asymptotic
term $S_{11}^{0}$ in the expansion
\begin{equation}
S_{11}^{\varepsilon}=S_{11}^{0}+\widehat{S}_{11}^{\varepsilon}. \label{S11}
\end{equation}
In the second case the correction term $S_{00}^{\prime}$ in the
expansion $S_{00}^{\varepsilon}=1+\varepsilon S_{00}^{\prime}+\widehat{S}_{00}^{\varepsilon}$ results from the boundary layer phenomenon while the
regular expansion affects higher-order terms only. It should be emphasized
that the augmented scattering matrix is unitary and symmetric and the main
asymptotic term in the expansion
\begin{equation}
S_{01}^{\varepsilon}=S_{10}^{\varepsilon}=\varepsilon^{1/2}S_{10}^{\prime}+\widehat{S}_{10}^{\varepsilon} \label{S01}
\end{equation}
of the anti-diagonal entries can be computed by means of both the approaches.
In Section \ref{sect4} we first reduce the criterion $S_{11}^{\varepsilon}=-1$
from Theorem \ref{TheoremA} to an abstract equation and second solve it with
the help of the contraction principle. Finally we formulate Theorems
\ref{TheoremEX} and \ref{TheoremUN} on the existence and uniqueness of the
embedded eigenvalue. These assertions are proved in the next three sections.
In Section \ref{sect5} formulations of the problem (\ref{1}), (\ref{2}) in the
Kondratiev spaces (Theorem \ref{TheoremKA}) and weighted spaces with detached
asymptotics (Theorem \ref{TheoremKMM}) are presented as well as the operator
formulation of the radiation condition at infinity. At the same time, key
results for the particular box-shaped perturbation (\ref{0}) are displayed in
Sections \ref{sect5.3} and \ref{sect5.5} where we verify the absence of
trapped modes with a fast decay and clarify the dependence of majorants in a
priori estimates for solutions on the small and spectral parameters
$\varepsilon\in\left( 0,\varepsilon_{0}\right) $ and $\lambda\in\left(
0,\pi^{2}\right) $.
In Section \ref{sect6} we evaluate remainders in the asymptotic formulas
(\ref{S11})-(\ref{S01}) for the augmented scattering matrix (Theorem
\ref{TheoremASY}); the boundary layer phenomenon brings additional powers
of $\left\vert \ln\varepsilon\right\vert $ into bounds of estimates. Other
necessary properties of the matrix are clarified in Section \ref{sect7} where
the uniqueness of the embedded eigenvalue is verified, too. Again kinks of the
perturbation profile seriously complicate the analysis.
Conclusive remarks are collected in Section \ref{sect8} where, in particular,
we discuss essential simplifications of the analysis within the discrete
spectrum and a certain hardship for detecting eigenvalues near higher
thresholds of the continuous spectrum.
\section{Augmented scattering matrix and a criterion for trapped
modes\label{sect2}}
\subsection{Classification of waves.\label{sect2.1}}
For the spectral problem (\ref{12}), the limit ($\varepsilon=0$) problem
(\ref{1}), (\ref{2}) in the straight strip $\Pi=\mathbb{R}\times\left(
0,1\right) $ has two oscillating waves
\begin{equation}
w_{0}^{\varepsilon\pm}\left( x\right) =\left( 2k^{\varepsilon}\right) ^{-1/2}e^{\pm ik^{\varepsilon}x_{1}},\text{ \ }k^{\varepsilon}=\sqrt{\lambda^{\varepsilon}}.
\label{13}
\end{equation}
By the Sommerfeld principle, see, e.g., \cite{Keijo, Acous, Wilcox}, the waves
$w_{0}^{\varepsilon+}$ and $w_{0}^{\varepsilon-}$ are outgoing and incoming,
respectively, in the one-sided waveguide (\ref{11}) according to their
wavenumbers $+k^{\varepsilon}$ and $-k^{\varepsilon}.$ In this way, the
inhomogeneous problem (\ref{9}), (\ref{10})
\begin{equation}
-\Delta u^{\varepsilon}\left( x\right) -\lambda^{\varepsilon}u^{\varepsilon}\left( x\right) =f^{\varepsilon}\left( x\right) ,\text{ \ }x\in\Pi_{+}^{\varepsilon},\ \ \ \ \ \ \partial_{\nu}u^{\varepsilon}\left( x\right) =0,\text{\ \ }x\in\partial\Pi_{+}^{\varepsilon}, \label{14}
\end{equation}
with, for example, compactly supported right-hand side $f^{\varepsilon}$, is
supplied with the radiation condition
\begin{equation}
u^{\varepsilon}\left( x\right) =c_{0}^{\varepsilon}w_{0}^{\varepsilon+}\left( x\right) +\widetilde{u}^{\varepsilon}\left( x\right) ,\text{ \ \ }\widetilde{u}^{\varepsilon}\left( x\right) =O(e^{-x_{1}\sqrt{\pi^{2}-\lambda^{\varepsilon}}}). \label{15}
\end{equation}
In Section \ref{sect5} we will give an operator formulation of problem
(\ref{14}), (\ref{15}).
At the threshold
\begin{equation}
\lambda=\pi^{2} \label{16}
\end{equation}
in addition to the oscillating waves
\begin{equation}
w_{0}^{0\pm}\left( x\right) =\left( 2\pi\right) ^{-1/2}e^{\pm i\pi x_{1}}
\label{17}
\end{equation}
there appear the standing and growing waves
\[
w_{1}^{0}\left( x\right) =\cos\left( \pi x_{2}\right) ,\ w_{1}^{1}\left(
x\right) =x_{1}\cos\left( \pi x_{2}\right)
\]
which cannot be classified by the Sommerfeld principle because of the zero
wavenumber. However, as was observed in \cite[\S 5.3]{NaPl}, first, the
waves (\ref{13}) verify the relations
\begin{equation}
q_{R}\left( w_{p}^{\varepsilon\pm},w_{q}^{\varepsilon\pm}\right) =\pm
i\delta_{p,q},\text{ \ \ \ }q_{R}\left( w_{p}^{\varepsilon\pm},w_{q}^{\varepsilon\mp}\right) =0 \label{19}
\end{equation}
and, second, the linear combinations
\begin{equation}
w_{1}^{0\pm}\left( x\right) =\left( x_{1}\mp i\right) \cos\left( \pi
x_{2}\right) \label{20}
\end{equation}
together with waves (\ref{17}) fulfil formulas (\ref{19}) at $\varepsilon=0$
as well. Here, $\delta_{p,q}$ is the Kronecker symbol, $p,q=0$ in the first
case and $p,q=0,1$ in the second case, and $q_{R}$ is a symplectic, that is
sesquilinear and anti-Hermitian for
\[
q_{R}\left( w,v\right) =\int_{0}^{1}\left( \overline{v\left(
R,x_{2}\right) }\frac{\partial w}{\partial x_{1}}\left( R,x_{2}\right)
-w\left( R,x_{2}\right) \overline{\frac{\partial v}{\partial x_{1}}\left(
R,x_{2}\right) }\right) dx_{2}.
\]
This form emerges from the Green formula in the truncated waveguide $\Pi_{+}^{\varepsilon}\left( R\right) =\{x\in\Pi_{+}^{\varepsilon},\ x_{1}\in\left( 0,R\right) \}$ and therefore does not depend on the length
parameter $R>l$ for any of introduced waves and their linear combinations.
Hence, we skip the subscript $R$ in (\ref{19}) and (\ref{20}).
For the waves (\ref{13}), the sign of $\operatorname{Im}q\left( w_{0}^{\varepsilon\pm},w_{0}^{\varepsilon\pm}\right) $ coincides with the sign of
the wavenumber and therefore indicates the propagation direction. Analogously,
we call the wave $w_{1}^{0+}$ outgoing and the wave $w_{1}^{0-}$ incoming in
the waveguide $\Pi_{+}^{\varepsilon}$ so that the problem (\ref{14}) with
$\lambda^{\varepsilon}=\pi^{2}$ ought to be supplied with the following
threshold radiation condition
\begin{equation}
u^{\varepsilon}\left( x\right) =c_{0}^{\varepsilon}w_{0}^{0+}\left( x\right) +c_{1}^{\varepsilon}w_{1}^{0+}\left( x\right) +\widetilde{u}^{\varepsilon}\left( x\right) ,\text{ \ \ }\widetilde{u}^{\varepsilon}\left( x\right) =O(e^{-\sqrt{3}\pi x_{1}}). \label{23}
\end{equation}
In Section \ref{sect5} we will prove that this formulation of the problem at
the threshold (\ref{16}) provides an isomorphism in its operator setting.
As was demonstrated in \cite[\S \ 5.6]{NaPl}, the form $q$ is closely related
to the Umov-Poyting vector \cite{Poynt, Umov} so that both radiation
conditions (\ref{15}) and (\ref{23}) arise from the Mandelstam (energy)
principle, see \cite[\S 5.3]{NaPl}.
\subsection{Scattering matrices and exponential waves\label{sect2.2}}
In the case (\ref{12}) the incoming wave in (\ref{13}) generates the following
solution of the problem (\ref{9}), (\ref{10}):
\begin{equation}
\zeta_{0}^{\varepsilon}\left( x\right) =w_{0}^{\varepsilon-}\left( x\right) +s_{00}^{\varepsilon}w_{0}^{\varepsilon+}\left( x\right) +\widetilde{\zeta}_{0}^{\varepsilon}\left( x\right) . \label{24}
\end{equation}
Here, the remainder $\widetilde{\zeta}_{0}^{\varepsilon}$ decays as
$O(e^{-x_{1}\sqrt{\pi^{2}-\lambda^{\varepsilon}}})$ and $s_{00}^{\varepsilon}$
is the reflection coefficient which satisfies $\left\vert s_{00}^{\varepsilon
}\right\vert =1$ due to conservation of energy.
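The latter equality can be derived from the relations (\ref{19}): the Green
formula in $\Pi_{+}^{\varepsilon}\left( R\right) $ gives $q\left( \zeta_{0}^{\varepsilon},\zeta_{0}^{\varepsilon}\right) =0$ for every $R$, and sending
$R\rightarrow+\infty$ removes the contribution of the decaying remainder, so that
\[
0=q\left( w_{0}^{\varepsilon-}+s_{00}^{\varepsilon}w_{0}^{\varepsilon+},\,w_{0}^{\varepsilon-}+s_{00}^{\varepsilon}w_{0}^{\varepsilon+}\right) =-i+i\left\vert s_{00}^{\varepsilon}\right\vert ^{2},
\]
whence indeed $\left\vert s_{00}^{\varepsilon}\right\vert =1$.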
In the same way, in the case (\ref{16}) we can determine the solutions
\[
\zeta_{p}^{\varepsilon}\left( x\right) =w_{p}^{0-}\left( x\right)
+s_{0p}^{0}w_{0}^{0+}\left( x\right) +s_{1p}^{0}w_{1}^{0+}\left( x\right)
+\widetilde{\zeta}_{p}^{0}\left( x\right)
\]
where $p=0,1$, $\widetilde{\zeta}_{p}^{0}\left( x\right) =O(e^{-x_{1}\sqrt{3}\pi})$ and the coefficients $s_{qp}^{0}$ form the (threshold) scattering
matrix $s^{0}$ of size $2\times2.$ According to the normalization and
orthogonality conditions (\ref{19}) for waves (\ref{17}), (\ref{20}) and the
relation $w_{p}^{0+}\left( x\right) =\overline{w_{p}^{0-}\left( x\right)
}, the matrix $s^{0}$ is unitary and symmetric (cf. \cite[\S 2]{na489}), that
is,
\begin{equation}
\left( s^{0}\right) ^{-1}=\left( s^{0}\right) ^{\ast},\text{ \ \ }s^{0}=\left( s^{0}\right) ^{\top}, \label{26}
\end{equation}
where $\top$ stands for transposition and $\left( s^{0}\right) ^{\ast}=\left( \overline{s^{0}}\right) ^{\top}$ is the adjoint matrix.
The reflection coefficient $s_{00}^{\varepsilon}$ ought to be regarded as a
scattering matrix of size $1\times1$ because (\ref{13}) is the only couple of
waves able to drive energy along the waveguide (\ref{11}). For
example, dealing with the next couple of waves
\begin{equation}
v_{1}^{\varepsilon\pm}\left( x\right) =\left( k_{1}^{\varepsilon}\right) ^{-1/2}e^{\pm k_{1}^{\varepsilon}x_{1}}\cos\left( \pi x_{2}\right) ,\text{ \ \ \ }k_{1}^{\varepsilon}=\sqrt{\pi^{2}-\lambda^{\varepsilon}}, \label{27}
\end{equation}
which are decaying $(-)$ and growing $\left( +\right) ,$ one readily finds
that
\begin{equation}
q\left( v_{1}^{\varepsilon\pm},v_{1}^{\varepsilon\pm}\right) =0 \label{28}
\end{equation}
but
\begin{equation}
q\left( v_{1}^{\varepsilon+},v_{1}^{\varepsilon-}\right) =-q\left(
v_{1}^{\varepsilon-},v_{1}^{\varepsilon+}\right) =1. \label{29}
\end{equation}
As was observed in \cite[\S 5.6]{NaPl} and mentioned above, formula (\ref{28})
annihilates the projection on the $x_{1}$-axis of the Umov-Poynting vector and
therefore the waves (\ref{27}) cannot drive energy. In the papers \cite{na246,
na252} (see \cite{na275} for general elliptic systems) the linear combinations
of exponential waves
\begin{equation}
w_{1}^{\varepsilon\pm}\left( x\right) =2^{-1/2}\left( v_{1}^{\varepsilon+}\left( x\right) \mp iv_{1}^{\varepsilon-}\left( x\right) \right)
\label{30}
\end{equation}
were introduced. It is remarkable that, thanks to (\ref{28}) and (\ref{29}),
waves (\ref{13}) and (\ref{30}) enjoy conditions (\ref{19}) with $p,q=0,1.$
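Indeed, by the sesquilinearity of $q$ (linear in the first and antilinear in
the second argument) together with (\ref{28}) and (\ref{29}),
\begin{align*}
q\left( w_{1}^{\varepsilon\pm},w_{1}^{\varepsilon\pm}\right) & =\tfrac{1}{2}\left( \overline{(\mp i)}\,q\left( v_{1}^{\varepsilon+},v_{1}^{\varepsilon-}\right) +(\mp i)\,q\left( v_{1}^{\varepsilon-},v_{1}^{\varepsilon+}\right) \right) =\tfrac{1}{2}\left( \pm i+(\mp i)(-1)\right) =\pm i,\\
q\left( w_{1}^{\varepsilon\pm},w_{1}^{\varepsilon\mp}\right) & =\tfrac{1}{2}\left( \overline{(\pm i)}\,q\left( v_{1}^{\varepsilon+},v_{1}^{\varepsilon-}\right) +(\mp i)\,q\left( v_{1}^{\varepsilon-},v_{1}^{\varepsilon+}\right) \right) =\tfrac{1}{2}\left( \mp i+(\mp i)(-1)\right) =0.
\end{align*}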
The latter allows us to determine the solutions
\begin{equation}
Z_{p}^{\varepsilon}\left( x\right) =w_{p}^{\varepsilon-}\left( x\right) +S_{0p}^{\varepsilon}w_{0}^{\varepsilon+}\left( x\right) +S_{1p}^{\varepsilon}w_{1}^{\varepsilon+}\left( x\right) +\widetilde{Z}_{p}^{\varepsilon}\left( x\right) ,\ \widetilde{Z}_{p}^{\varepsilon}\left( x\right) =O(e^{-x_{1}\sqrt{4\pi^{2}-\lambda^{\varepsilon}}}),\text{\ }p=0,1,
\label{31}
\end{equation}
to compose the coefficient matrix $S^{\varepsilon}=\left( S_{qp}^{\varepsilon}\right) $ and to assure its unitarity, cf.
(\ref{26}). Moreover, since $w_{p}^{\varepsilon+}\left( x\right) =\overline{w_{p}^{\varepsilon-}\left( x\right) }$, this matrix is symmetric,
see \cite[\S 2]{na489} again.
In Section \ref{sect5} we will give an operator formulation of the problem
(\ref{14}) at $\lambda^{\varepsilon}\in\left( 0,\pi^{2}\right) $ with the
radiation condition
\begin{equation}
U^{\varepsilon}\left( x\right) =c_{0}^{\varepsilon}w_{0}^{\varepsilon+}\left( x\right) +c_{1}^{\varepsilon}w_{1}^{\varepsilon+}\left( x\right) +\widetilde{U}^{\varepsilon}\left( x\right) ,\ \ \ \widetilde{U}^{\varepsilon}\left( x\right) =O(e^{-x_{1}\sqrt{4\pi^{2}-\lambda^{\varepsilon}}}). \label{32}
\end{equation}
We recognize this condition as artificial because the right-hand side of
(\ref{32}) involves the exponentially growing wave $w_{1}^{\varepsilon
+}\left( x\right) $, see (\ref{30}) and (\ref{27}).
\subsection{A criterion for trapped modes.\label{sect2.3}}
A reason to consider problem (\ref{14}), (\ref{32}) and the augmented
scattering matrix $S^{\varepsilon}$ is explained by the following assertion.
\begin{theorem}
\label{TheoremA} Problem (\ref{9}),(\ref{10}) with the spectral parameter
(\ref{12}) has a trapped mode $u^{\varepsilon}\in H^{1}\left( \Pi
_{+}^{\varepsilon}\right) $ if and only if
\begin{equation}
S_{11}^{\varepsilon}=-1. \label{33}
\end{equation}
\end{theorem}
In other words, equation (\ref{33}) provides a criterion for the existence of
a trapped mode in the one-sided waveguide (\ref{11}).
A proof of Theorem \ref{TheoremA} can be found, e.g., in \cite{na275} and
\cite[Thm 2]{na489} but, since the criterion (\ref{33}) plays the central role
in our analysis, we give here a condensed proof.
The unitary property of $S^{\varepsilon}$ demonstrates that
\begin{equation}
S_{11}^{\varepsilon}=-1\qquad\Rightarrow\qquad S_{10}^{\varepsilon}=S_{01}^{\varepsilon}=0. \label{34}
\end{equation}
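Indeed, the unitarity of $S^{\varepsilon}$ gives $\left\vert S_{10}^{\varepsilon}\right\vert ^{2}+\left\vert S_{11}^{\varepsilon}\right\vert ^{2}=1$, so that
\[
S_{11}^{\varepsilon}=-1\ \ \Rightarrow\ \ \left\vert S_{10}^{\varepsilon}\right\vert ^{2}=1-\left\vert S_{11}^{\varepsilon}\right\vert ^{2}=0,
\]
and the symmetry $S^{\varepsilon}=\left( S^{\varepsilon}\right) ^{\top}$ implies $S_{01}^{\varepsilon}=S_{10}^{\varepsilon}=0$.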
Thus, the solution (\ref{31}) with $p=1$ becomes a trapped mode because
formulas (\ref{30}) and (\ref{27}) assure that
\[
Z_{1}^{\varepsilon}\left( x\right) =w_{1}^{\varepsilon-}\left( x\right) -w_{1}^{\varepsilon+}\left( x\right) +\widetilde{Z}_{1}^{\varepsilon}\left( x\right) =-2^{1/2}iv_{1}^{\varepsilon-}\left( x\right) +\widetilde{Z}_{1}^{\varepsilon}\left( x\right) =O(e^{-x_{1}k_{1}^{\varepsilon}}).
\]
Hence, (\ref{33}) is a sufficient condition. To verify the necessity, we first
assume that the decomposition
\begin{equation}
U^{\varepsilon}\left( x\right) =c^{\varepsilon}v_{1}^{\varepsilon-}\left( x\right) +\widetilde{U}^{\varepsilon}\left( x\right) \label{35}
\end{equation}
of a trapped mode $U^{\varepsilon}\in H^{1}\left( \Pi_{+}^{\varepsilon}\right) $ has a coefficient $c^{\varepsilon}\neq0.$
Then $U^{\varepsilon}$ becomes a linear combination of solutions (\ref{31}),
namely, according to (\ref{30}), we have
\begin{align*}
U^{\varepsilon} & =C_{0}^{\varepsilon}Z_{0}^{\varepsilon}+C_{1}^{\varepsilon}Z_{1}^{\varepsilon}=C_{0}^{\varepsilon}w_{0}^{\varepsilon-}+\left( S_{00}^{\varepsilon}C_{0}^{\varepsilon}+S_{01}^{\varepsilon}C_{1}^{\varepsilon}\right) w_{0}^{\varepsilon+}+\\
& +2^{-1/2}\left( v_{1}^{+}-iv_{1}^{-}\right) C_{1}^{\varepsilon}+2^{-1/2}\left( v_{1}^{+}+iv_{1}^{-}\right) \left( S_{10}^{\varepsilon}C_{0}^{\varepsilon}+S_{11}^{\varepsilon}C_{1}^{\varepsilon}\right) +\widetilde{U}^{\varepsilon}.
\end{align*}
Owing to the exponential decay of $U^{\varepsilon}$, the coefficients of the
oscillating waves $w_{0}^{\varepsilon\pm}$ must vanish so that $C_{0}^{\varepsilon}=0$ and
$S_{01}^{\varepsilon}C_{1}^{\varepsilon}=0.$ Moreover, the coefficients of the
exponential waves $v_{1}^{\varepsilon+}$ and $v_{1}^{\varepsilon-}$,
respectively, are $2^{-1/2}\left( S_{11}^{\varepsilon}+1\right)
C_{1}^{\varepsilon}=0$\ and\ $2^{-1/2}\left( S_{11}^{\varepsilon}-1\right)
C_{1}^{\varepsilon}=c^{\varepsilon}.$ Recalling our assumption $c^{\varepsilon}\neq0,$ we see that $C_{1}^{\varepsilon}=-2^{-1/2}c^{\varepsilon}\neq0$ and,
therefore, (\ref{33}) holds true.
If $c^{\varepsilon}=0$ in (\ref{35}), the trapped mode $U^{\varepsilon}\left( x\right) $ gains the very fast decay rate $O(e^{-x_{1}\sqrt{4\pi^{2}-\lambda^{\varepsilon}}}).$ In Section \ref{sect5.3} we will show with a new
argument that such trapped modes do not exist for small $\varepsilon.$
\begin{remark}
\label{RemarkSO} The relationship between the augmented scattering matrix and
the reflection coefficient in (\ref{24}) looks as follows
\begin{equation}
s_{00}^{\varepsilon}=S_{00}^{\varepsilon}-S_{01}^{\varepsilon}\left( S_{11}^{\varepsilon}+1\right) ^{-1}S_{10}^{\varepsilon}, \label{36}
\end{equation}
see, e.g., \cite[Thm 3]{na489}. Note that, in view of (\ref{34}), the last
term in (\ref{36}) becomes null in the case $S_{11}^{\varepsilon}=-1$ when
$s_{00}^{\varepsilon}=S_{00}^{\varepsilon}.\ \ \ \ \ \boxtimes$
\end{remark}
\section{Formal asymptotics of the augmented scattering matrix\label{sect3}}
\subsection{Step-shaped perturbation of boundaries.\label{sect3.1}}
In this section we derive asymptotic expansions by means of a formal
asymptotic analysis and postpone their justification to Section \ref{sect6}.
Perturbation of the straight waveguide drawn in fig. \ref{f1},b and in fig.
\ref{f3}, b, ought to be regarded as a combination of regular and singular
perturbations, see, e.g., \cite[Ch.XII, \S \ 6.5]{Kato} and \cite[Ch. 2 and
4]{MaNaPl}, respectively. For a regular perturbation of the boundary, an
appropriate change of variables, which differs from the identity in magnitude
$O\left( \varepsilon\right) $ only, is usually applied in order to convert
the perturbed domain into the reference one. In this way, differential
operators in the problem gain small perturbations but asymptotics can be
constructed with the help of standard iterative procedures like decompositions
of a perturbed operator in the Neumann series.
Singular perturbations of boundaries need much more delicate analysis because
they require a description of the asymptotics in the stretched coordinates
which, for the domain $\Pi_{+}^{\varepsilon}=\Pi_{l+}^{\varepsilon}$, see
(\ref{11}), take the form
\begin{equation}
\xi=\left( \xi_{1},\xi_{2}\right) =\varepsilon^{-1}\left( x_{1}-l,x_{2}\right) . \label{38}
\end{equation}
Notice that the change $x\longrightarrow\xi$ and setting $\varepsilon=0$
transform $\Pi_{+}^{\varepsilon}$ into the upper half-plane $\mathbb{R}_{+}^{2}$ with a semi-infinite step, fig. \ref{f4}, a,
\begin{equation}
\Xi=\left\{ \xi\in\mathbb{R}^{2}:\xi_{2}>0\text{ \ for \ }\xi_{1}\leq0\text{
and }\xi_{2}>-1\text{ \ for \ }\xi_{1}>0\right\} . \label{39}
\end{equation}
\begin{figure}
[ptb]
\begin{center}
\includegraphics[height=1.6475in, width=3.7948in]{CaDuNa4.eps}
\caption{A distorted half-plane (a) to describe the boundary layer near the
ledge (b)}
\label{f4}
\end{center}
\end{figure}
As a result, the singular perturbation of the waveguide wall gives rise to the
boundary layer phenomenon described by solutions to the following problem
\begin{equation}
-\Delta_{\xi}v\left( \xi\right) =0,\text{ \ }\xi\in\Xi,\text{ \ \ }\partial_{\nu\left( \xi\right) }v\left( \xi\right) =g\left( \xi\right) ,\text{ \ \ \ }\xi\in\partial\Xi. \label{40}
\end{equation}
The Laplace equation is caused by the relation $\Delta_{x}+\lambda
=\varepsilon^{-2}\Delta_{\xi}+\lambda$ which singles out the Laplacian as the
main asymptotic part of the Helmholtz operator. The Neumann problem (\ref{40})
with a compactly supported datum $g$ admits a solution $v\left( \xi\right)
=O(\left\vert \xi\right\vert ^{-1})$ as $\left\vert \xi\right\vert
\longrightarrow+\infty$ provided $\int_{\partial\Xi}g\left( \xi\right)
ds_{\xi}=0.$ Otherwise, a solution grows at infinity like $c\ln\left\vert
\xi\right\vert $ and loses the intrinsic decay property of a boundary layer so
that traditional asymptotic procedures become much more sophisticated, see
\cite[Ch. 2, 4]{MaNaPl} and \cite{Ilin}. However, we will see that in our
particular problem the boundary perturbation does not affect the main
asymptotic term and the first correction term does not include a boundary layer.
\subsection{Asymptotic procedure\label{sect3.2}}
We search for an eigenvalue of the problem (\ref{9}), (\ref{10}) in the form
\begin{equation}
\lambda^{\varepsilon}=\pi^{2}-\varepsilon^{2}\mu, \label{42}
\end{equation}
where the correction coefficient $\mu>0$ is to be found in Section
\ref{sect4}. Recalling the normalization factors in (\ref{27}) and (\ref{13}),
\begin{equation}
\left( k_{1}^{\varepsilon}\right) ^{-1/2}=\varepsilon^{-1/2}\mu^{-1/4}+O(\varepsilon^{1/2}),\text{ \ \ \ }\left( 2k^{\varepsilon}\right) ^{-1/2}=\left( 2\pi\right) ^{-1/2}+O(\varepsilon^{2}), \label{43}
\end{equation}
we guess at the following asymptotic ans\"{a}tze for entries of the augmented
scattering matrix
\begin{equation}
S_{11}^{\varepsilon}=S_{11}^{0}+\varepsilon S_{11}^{\prime}+\widetilde{S}_{11}^{\varepsilon},\text{ \ \ }S_{01}^{\varepsilon}=\varepsilon^{1/2}S_{01}^{0}+\varepsilon^{3/2}S_{01}^{\prime}+\widetilde{S}_{01}^{\varepsilon},
\label{44}
\end{equation}
but aim to calculate the terms $S_{p1}^{0}$ and $S_{p1}^{\prime}$ only. We
further estimate the remainders in Section \ref{sect6}.
Using definitions of the waves (\ref{13}) and (\ref{27}), (\ref{30}), we take
relations (\ref{43}) and (\ref{44}) into account and rewrite the decomposition
(\ref{31}) of the special solution $Z_{1}^{\varepsilon}$ as follows:
\begin{align}
Z_{1}^{\varepsilon}\left( x\right) & =\varepsilon^{-1/2}\left( 4\mu\right) ^{-1/4}\cos\left( \pi x_{2}\right) \left( 1+i+S_{11}^{0}\left( 1-i\right) +\right. \label{45}\\
& \left. +\varepsilon\left( S_{11}^{\prime}\left( 1-i\right) +x_{1}\sqrt{\mu}\left( 1-i+S_{11}^{0}\left( 1+i\right) \right) +...\right) \right) +\nonumber\\
& +\varepsilon^{1/2}\left( 2\pi\right) ^{-1/2}\left( S_{01}^{0}+\varepsilon S_{01}^{\prime}+...\right) \left( e^{i\pi x_{1}}+...\right) .\nonumber
\end{align}
Here and everywhere in this section, ellipses stand for lower-order terms
inessential in our formal asymptotic analysis. In (\ref{45}), the Taylor
formula
\begin{equation}
e^{k_{1}^{\varepsilon}x_{1}}\mp ie^{-k_{1}^{\varepsilon}x_{1}}=\left( 1\mp i\right) +\varepsilon x_{1}\sqrt{\mu}\left( 1\pm i\right) +O\left( \varepsilon^{2}x_{1}^{2}\right) \label{46}
\end{equation}
was applied so that expansion (\ref{46}) becomes meaningful under the
restriction $x_{1}<R$ with a fixed $R$, i.e. for $x\in\Pi^{\varepsilon}\left(
R\right) .$
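Formula (\ref{46}) follows immediately from the ansatz (\ref{42}), which gives
the exact value $k_{1}^{\varepsilon}=\sqrt{\pi^{2}-\lambda^{\varepsilon}}=\varepsilon\sqrt{\mu}$, so that
\[
e^{\pm k_{1}^{\varepsilon}x_{1}}=1\pm\varepsilon x_{1}\sqrt{\mu}+O\left( \varepsilon^{2}x_{1}^{2}\right) \quad\text{and}\quad
e^{k_{1}^{\varepsilon}x_{1}}\mp ie^{-k_{1}^{\varepsilon}x_{1}}=\left( 1\mp i\right) +\varepsilon x_{1}\sqrt{\mu}\left( 1\pm i\right) +O\left( \varepsilon^{2}x_{1}^{2}\right) .
\]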
In view of the above observation we employ the method of matched asymptotic
expansions, cf. \cite{VanDyke,Ilin}, in the interpretation \cite{na489,
na457}. Namely, we regard (\ref{45}) as an outer expansion and introduce the
inner expansion
\begin{equation}
Z_{1}^{\varepsilon}\left( x\right) =\varepsilon^{-1/2}Z_{1}^{0}\left( x\right) +\varepsilon^{1/2}Z_{1}^{\prime}\left( x\right) +... \label{47}
\end{equation}
At the same time, coefficients of $\varepsilon^{-1/2}$ and $\varepsilon^{1/2}$
on the right-hand side of (\ref{45}) exhibit the behavior at infinity of the
terms $Z_{1}^{0}$ and $Z_{1}^{\prime}$ in (\ref{47}) because the upper bound
$R$ for $x_{1}$ can be chosen arbitrarily large. Thus, they must satisfy
\begin{align}
Z_{1}^{0}\left( x\right) & =\left( 4\mu\right) ^{-1/4}\cos\left( \pi x_{2}\right) \left( 1+i+S_{11}^{0}\left( 1-i\right) \right) +...,\label{Z0}\\
Z_{1}^{\prime}\left( x\right) & =\left( 4\mu\right) ^{-1/4}\cos\left( \pi x_{2}\right) \left( S_{11}^{\prime}\left( 1-i\right) +x_{1}\sqrt{\mu}\left( 1-i+S_{11}^{0}\left( 1+i\right) \right) \right) +S_{01}^{0}\left( 2\pi\right) ^{-1/2}e^{i\pi x_{1}}+... \label{Z1}
\end{align}
The formal passage to $\varepsilon=0$ transforms the waveguide (\ref{11}) into
the semi-infinite strip $\Pi_{+}^{0}=\mathbb{R}_{+}\times\left( 0,1\right) $
while due to (\ref{42}) the Neumann problem (\ref{9}), (\ref{10}) converts
into
\begin{align}
-\Delta u^{0}\left( x\right) & =\pi^{2}u^{0}\left( x\right) ,\text{ \ }x\in\Pi_{+}^{0},\label{481}\\
\partial_{\nu}u^{0}\left( x\right) & =0,\text{ \ \ }x\in\partial\Pi_{+}^{0}. \label{482}
\end{align}
This limit problem has two bounded solutions
\begin{align}
u_{0}^{0}\left( x\right) & =\frac{1}{2}\left( e^{-i\pi x_{1}}+e^{i\pi x_{1}}\right) =\cos\left( \pi x_{1}\right) ,\label{491}\\
u_{1}^{0}\left( x\right) & =\cos\left( \pi x_{2}\right) . \label{492}
\end{align}
Comparing with (\ref{Z0}), we see that
\begin{equation}
Z_{1}^{0}\left( x\right) =\left( 4\mu\right) ^{-1/4}\left( 1+i+S_{11}^{0}\left( 1-i\right) \right) \cos\left( \pi x_{2}\right) . \label{50}
\end{equation}
Since $\lambda^{\varepsilon}=\pi^{2}+O\left( \varepsilon^{2}\right) ,$ the
function $Z_{1}^{\prime}$ also satisfies the homogeneous equation (\ref{481})
but the Neumann condition becomes inhomogeneous because of the boundary
perturbation. For $p=1$, we have
\begin{equation}
-\Delta Z_{p}^{\prime}\left( x\right) =\pi^{2}Z_{p}^{\prime}\left( x\right) ,\text{ \ \ }x\in\Pi_{+}^{0},\ \ \ \ \ \ \ \ \ \ \ \ \partial_{\nu}Z_{p}^{\prime}\left( x\right) =g_{p}\left( x\right) ,\text{ \ \ }x\in\partial\Pi_{+}^{0}. \label{51}
\end{equation}
To determine the datum $g_{1},$ we observe that the function (\ref{50})
satisfies the Neumann condition (\ref{10}) everywhere on $\partial
\Pi^{\varepsilon}$, except at the lower side $\Upsilon^{\varepsilon}=\left\{
x:x_{1}\in\left( 0,l\right) ,\text{ \ \ }x_{2}=-\varepsilon\right\} $ of
the box-shaped perturbation $\varpi_{+}^{\varepsilon}=\left( 0,l\right)
\times\left( -\varepsilon,0\right] $ in (\ref{11}). Furthermore, we obtain
\begin{align}
\partial_{\nu}Z_{1}^{0}\left( x_{1},-\varepsilon\right) & =-\partial_{2}Z_{1}^{0}\left( x_{1},-\varepsilon\right) =\left( 4\mu\right) ^{-1/4}\pi\sin\left( -\pi\varepsilon\right) \left( 1+i+S_{11}^{0}\left( 1-i\right) \right) =\label{5111}\\
& =-\varepsilon\left( 4\mu\right) ^{-1/4}\pi^{2}\left( 1+i+S_{11}^{0}\left( 1-i\right) \right) +O\left( \varepsilon^{3}\right) =:-\varepsilon G_{1}^{\prime}+O\left( \varepsilon^{3}\right) \nonumber
\end{align}
and, hence,
\begin{equation}
g_{p}\left( x\right) =\left\{
\begin{array}
[c]{l}
0,\text{ \ \ }x\in\partial\Pi_{+}^{0}\setminus\overline{\Upsilon^{0}},\\
G_{p}^{\prime},\text{ \ \ }x\in\Upsilon^{0}.
\end{array}
\right. \label{52}
\end{equation}
Although the Neumann datum (\ref{52}) is not smooth and has a jump at the
point $x=\left( l,0\right) ,$ the problem (\ref{51}) with $p=1$ has a
solution in $H_{loc}^{1}(\overline{\Pi_{+}^{0}})$ such that
\begin{equation}
Z_{p}^{\prime}\left( x\right) =C_{p}e^{i\pi x_{1}}+\left( C_{p}^{0}+x_{1}C_{p}^{1}\right) \cos\left( \pi x_{2}\right) +\widetilde{Z}_{p}^{\prime}\left( x\right) ,\text{ \ }\widetilde{Z}_{p}^{\prime}\left( x\right) =O(e^{-x_{1}\sqrt{3}\pi}). \label{5222}
\end{equation}
This fact is a direct consequence of the elliptic theory in domains with
cylindrical outlets to infinity (see the key works \cite{AgNi, Ko, MaPl1,
MaPl2} and, e.g., the monographs \cite{KoMaRo, NaPl}), but it may also be derived
by the Fourier method after splitting $\Pi_{+}^{0}$ into the rectangle
$\left( 0,l\right) \times\left( 0,1\right) $ and the semi-strip $\left(
l,+\infty\right) \times\left( 0,1\right) .$ A simple explanation of how to
apply the above-mentioned theory can be found in the introductory Chapter 2 of
\cite{NaPl}, the review \cite{na417} and Section \ref{sect5} of this paper.
The solution (\ref{5222}) is defined up to the term $c\cos\left( \pi
x_{2}\right) $ and, therefore, the coefficient $C_{p}^{0}$ can be taken
arbitrarily. The other coefficients in (\ref{5222}) are computed by applying
the Green formula in the long ($R$ is big) rectangle $\Pi_{+}^{0}\left( R\right) =\left( 0,R\right) \times\left( 0,1\right) .$
Indeed, we send $R$ to $+\infty$ and obtain
\begin{align}
0 & =\underset{R\rightarrow+\infty}{\lim}\int_{\Pi_{+}^{0}\left( R\right) }\left( u_{1}^{0}\left( x\right) \left( \Delta+\pi^{2}\right) Z_{1}^{\prime}\left( x\right) -Z_{1}^{\prime}\left( x\right) \left( \Delta+\pi^{2}\right) u_{1}^{0}\left( x\right) \right) dx=\label{54}\\
& =\underset{R\rightarrow+\infty}{\lim}\int_{0}^{1}\cos\left( \pi x_{2}\right) \partial_{1}Z_{1}^{\prime}\left( R,x_{2}\right) dx_{2}-\int_{0}^{l}\cos\left( \pi0\right) \partial_{2}Z_{1}^{\prime}\left( x_{1},0\right) dx_{1}=\nonumber\\
& =\frac{1}{2}C_{1}^{1}+\left( 4\mu\right) ^{-1/4}\pi^{2}l\left( 1+i+S_{11}^{0}\left( 1-i\right) \right) .\nonumber
\end{align}
In the same way we deal with the functions (\ref{491}) and (\ref{5222}), which
results in the equality
\begin{gather}
0=\underset{R\rightarrow+\infty}{\lim}
{\displaystyle\int_{0}^{1}}
\left. \left( \cos\left( \pi x_{1}\right) \partial_{1}Z_{1}^{\prime}\left( x\right) -Z_{1}^{\prime}\left( x\right) \partial_{1}\cos\left( \pi x_{1}\right) \right) \right\vert _{x_{1}=R}dx_{2}-\label{55}\\
-{\displaystyle\int_{0}^{l}}
\cos\left( \pi x_{1}\right) \partial_{2}Z_{1}^{\prime}\left( x_{1},0\right) dx_{1}=i\pi C_{1}+\left( 4\mu\right) ^{-1/4}\pi^{2}\left( 1+i+S_{11}^{0}\left( 1-i\right) \right)
{\displaystyle\int_{0}^{l}}
\cos\left( \pi x_{1}\right) dx_{1}.\nonumber
\end{gather}
Comparing (\ref{Z1}) and (\ref{5222}), we arrive at the relations
\begin{equation}
\left( 2\pi\right) ^{-1/2}S_{01}^{0}=C_{1},\text{ \ \ }\left( 4\mu\right) ^{-1/4}\sqrt{\mu}\left( 1-i+S_{11}^{0}\left( 1+i\right) \right) =C_{1}^{1},\text{ \ \ }S_{11}^{\prime}\left( 1-i\right) =C_{1}^{0},
\label{56}
\end{equation}
which together with our calculations (\ref{54}) and (\ref{55}) give us the
following formulas:
\begin{align}
S_{01}^{0} & =\left( 4\mu\right) ^{-1/4}\left( 2\pi\right) ^{1/2}\pi
i\left( 1+i+S_{11}^{0}\left( 1-i\right) \right) \int_{0}^{l}\cos\left(
\pi x_{1}\right) dx_{1}=\label{57}\\
& =-\left( 4\mu\right) ^{-1/4}\left( 2\pi\right) ^{1/2}\left(
1-i-S_{11}^{0}\left( 1+i\right) \right) \sin\left( \pi l\right)
\nonumber\\
\sqrt{\mu}\left( 1-i+S_{11}^{0}\left( 1+i\right) \right) & =-2\pi
^{2}l\left( 1+i+S_{11}^{0}\left( 1-i\right) \right) \Rightarrow
\label{58}\\
& \Rightarrow S_{11}^{0}=-\frac{\sqrt{\mu}\left( 1-i\right) +2\pi
^{2}l\left( 1+i\right) }{\sqrt{\mu}\left( 1+i\right) +2\pi^{2}l\left(
1-i\right) }=-\frac{4\pi^{2}l\sqrt{\mu}+i\left( 4\pi^{4}l^{2}-\mu\right)
}{4\pi^{4}l^{2}+\mu}.\nonumber
\end{align}
We emphasize that $\mu=4\pi^{4}l^{2}\Rightarrow S_{11}^{0}=-1.$
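As an elementary sanity check (not part of the asymptotic procedure; the sample values of $\mu$ and $l$ below are arbitrary), the following Python sketch verifies that the leading term computed in (\ref{58}) satisfies $\left\vert S_{11}^{0}\right\vert =1$ and degenerates to $-1$ exactly when $\mu=4\pi^{4}l^{2}$:

```python
import math

def S11_0(mu, l):
    """Leading reflection coefficient from (58)."""
    D = 4 * math.pi**4 * l**2 + mu
    return -(4 * math.pi**2 * l * math.sqrt(mu)
             + 1j * (4 * math.pi**4 * l**2 - mu)) / D

# Unit modulus for any mu, l > 0, in accordance with unitarity.
for mu, l in [(0.5, 1.0), (3.0, 2.2), (10.0, 0.7)]:
    assert abs(abs(S11_0(mu, l)) - 1.0) < 1e-12

# The degenerate case: mu = 4 pi^4 l^2 gives S_11^0 = -1 exactly.
l = 1.3
assert abs(S11_0(4 * math.pi**4 * l**2, l) + 1.0) < 1e-12
```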
The necessary computations are completed. It should be mentioned that, to
determine the correction terms $S_{11}^{\prime}$ and $S_{01}^{\prime}$ in the
ans\"{a}tze (\ref{44}), one has to make another step in our asymptotic
procedure, see the next section, but they are of no further use.
\subsection{The detailed asymptotic procedure\label{sect3.3}}
Let us construct asymptotics of the entries
\begin{equation}
S_{00}^{\varepsilon}=S_{00}^{0}+\varepsilon S_{00}^{\prime}+\widetilde{S}
_{00}^{\varepsilon}\text{, \ \ \ }S_{10}^{\varepsilon}=\varepsilon^{1/2}
S_{10}^{0}+\varepsilon^{3/2}S_{10}^{\prime}+\widetilde{S}_{10}^{\varepsilon}
\label{60}
\end{equation}
in the augmented scattering matrix. These asymptotics are not of much
importance in our analysis of eigenvalues but are more instructive
than the analogous formulas (\ref{44}) because the ansatz for $Z_{0}
^{\varepsilon}$ indeed involves the boundary layer concentrated near the ledge
of the box-shaped perturbation in (\ref{11}).
Using (\ref{43}), (\ref{60}) and (\ref{46}), we rewrite the decomposition
(\ref{31}) of $Z_{0}^{\varepsilon}\left( x\right) $ as follows
\begin{align}
Z_{0}^{\varepsilon}\left( x\right) & =\left( 2\pi\right) ^{-1/2}\left(
e^{-i\pi x_{1}}+S_{00}^{0}e^{i\pi x_{1}}+\varepsilon S_{00}^{\prime}e^{i\pi
x_{1}}+...\right) +\label{61}\\
& +\left( 4\mu\right) ^{-1/4}\left( S_{10}^{0}+\varepsilon S_{10}^{\prime
}+...\right) \cos\left( \pi x_{2}\right) \left( 1-i+\varepsilon x_{1}
\sqrt{\mu}\left( 1+i\right) +...\right) .\nonumber
\end{align}
Then we adopt in a finite part of $\Pi_{+}^{\varepsilon}$ the ansatz
\begin{equation}
Z_{0}^{\varepsilon}\left( x\right) =Z_{0}^{0}\left( x\right) +\varepsilon
Z_{0}^{\prime}\left( x\right) +... \label{62}
\end{equation}
Referring to (\ref{61}), we fix the behavior of its terms at infinity
\begin{align}
Z_{0}^{0}\left( x\right) & =\left( 2\pi\right) ^{-1/2}\left( e^{-i\pi
x_{1}}+S_{00}^{0}e^{i\pi x_{1}}\right) +\left( 4\mu\right) ^{-1/4}
S_{10}^{0}\left( 1-i\right) \cos\left( \pi x_{2}\right) +...,\label{Z00}\\
Z_{0}^{\prime}\left( x\right) & =\left( 2\pi\right) ^{-1/2}
S_{00}^{\prime}e^{i\pi x_{1}}+\left( 4\mu\right) ^{-1/4}\cos\left( \pi
x_{2}\right) (S_{10}^{\prime}\left( 1-i\right) +x_{1}\sqrt{\mu}S_{10}
^{0}\left( 1+i\right) )+... \label{Z01}
\end{align}
Comparing (\ref{Z00}) with (\ref{491}), (\ref{492}), we conclude that
\begin{equation}
Z_{0}^{0}\left( x\right) =\left( 2\pi\right) ^{-1/2}\left( e^{-i\pi
x_{1}}+e^{i\pi x_{1}}\right) +\left( 4\mu\right) ^{-1/4}S_{10}^{0}\left(
1-i\right) \cos\left( \pi x_{2}\right) \label{63}
\end{equation}
and, hence,
\begin{equation}
S_{00}^{0}=1. \label{64}
\end{equation}
Let us describe the correction terms in (\ref{60}). The function (\ref{63})
satisfies the equation (\ref{9}) with $\lambda^{\varepsilon}=\pi^{2}$ and
leaves the discrepancies
\begin{gather}
\partial_{\nu}Z_{0}^{0}\left( x_{1},-\varepsilon\right) =-\partial_{2}
Z_{0}^{0}\left( x_{1},-\varepsilon\right) =-\varepsilon\left( 4\mu\right)
^{-1/4}\pi^{2}S_{10}^{0}\left( 1-i\right) +O\left( \varepsilon^{3}\right)
=\label{Z22}\\
=-\varepsilon G_{0}^{\prime}+O\left( \varepsilon^{3}\right) ,\text{
\ \ }x_{1}\in\left( 0,l\right) ,\nonumber\\
\partial_{\nu}Z_{0}^{0}\left( l,x_{2}\right) =\partial_{1}Z_{0}^{0}\left(
l,x_{2}\right) =-\left( 2\pi\right) ^{1/2}\sin\left( \pi l\right) ,\text{
\ \ }x_{2}\in\left( -\varepsilon,0\right) \label{Z02}
\end{gather}
in the boundary condition (\ref{10}) on the big $\Upsilon^{\varepsilon}$ and
small $\upsilon^{\varepsilon}=\left\{ x:x_{1}=l,\text{ }x_{2}\in\left(
-\varepsilon,0\right) \right\} $ sides of the rectangle $\varpi
_{+}^{\varepsilon}$, respectively. The discrepancy (\ref{Z22}) is similar to
(\ref{5111}) and appears as the datum (\ref{52}) in the problem (\ref{51})
with $p=0.$ To compensate for (\ref{Z02}), we need the boundary layer
\begin{equation}
V_{0}^{0}\left( \xi\right) =\left( 2\pi\right) ^{1/2}\sin\left( \pi
l\right) v\left( \xi\right) \label{Z03}
\end{equation}
where $\xi$ are stretched coordinates (\ref{38}) and $v$ is a solution of the
Neumann problem (\ref{40}) in the unbounded domain (\ref{39}) with the
right-hand side
\[
g\left( \xi\right) =\left\{
\begin{array}
[c]{l
0,\text{ \ \ }\xi_{1}\neq0,\\
1,\text{ \ \ }\xi_{1}=0,\text{ \ \ }\xi_{2}\in\left( -1,0\right) ,
\end{array}
\right. \ \text{for }\xi\in\partial\Xi.
\]
This solution, of course, can be constructed by an appropriate conformal
mapping, but we only need its representation at infinity
\begin{equation}
v\left( \xi\right) =(B/\pi)\ln(1/\left\vert \xi\right\vert )+c+O\left(
1/\left\vert \xi\right\vert \right) ,\text{ \ \ \ \ }\left\vert
\xi\right\vert \rightarrow\infty. \label{Z05}
\end{equation}
The constant $c$ is arbitrary but the coefficient $B$ can be computed by the
Green formula in the truncated domain $\Xi\left( R\right) =\left\{ \xi
\in\Xi:\left\vert \xi\right\vert <R\right\} $ with $R\rightarrow+\infty$:
\begin{align}
0 & =\underset{R\rightarrow+\infty}{\lim}
{\displaystyle\int_{0}^{\pi+\arcsin\left( 1/R\right) }}
\dfrac{\partial v}{\partial\rho}\left( \xi\right) d\varphi+
{\displaystyle\int_{0}^{1}}
\dfrac{\partial v}{\partial\xi_{1}}\left( 0,\xi_{2}\right) d\xi
_{2}=\label{Z08}\\
& =-\dfrac{B}{\pi}
{\displaystyle\int_{0}^{\pi}}
d\varphi+\int_{0}^{1}d\xi_{2}=-B+1\Rightarrow B=1.\nonumber
\end{align}
Here, $\left( \rho,\varphi\right) $ is the polar coordinate system.
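The normalization $B=1$ hinges only on the fact that the logarithmic far field (\ref{Z05}) carries an $R$-independent flux through large semicircles. A small numeric sketch (purely illustrative; the constant $c$ is set to zero and the radii are arbitrary):

```python
import math

B = 1.0   # the coefficient computed in (Z08)

def v(rho):
    # far field (Z05) of the boundary-layer solution, with c = 0
    return (B / math.pi) * math.log(1.0 / rho)

def flux_through_semicircle(R, h=1e-4):
    """int_0^pi (dv/drho)(R) R dphi; the integrand does not depend on phi,
    and the radial derivative is taken by a central difference."""
    dvdrho = (v(R + h) - v(R - h)) / (2.0 * h)
    return dvdrho * R * math.pi

# The flux equals -B for every radius; since the segment {xi_1 = 0} carries
# the flux +1, the Green formula forces B = 1.
for R in (10.0, 100.0, 1000.0):
    assert abs(flux_through_semicircle(R) + B) < 1e-6
```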
We fix $c=-\pi^{-1}\ln\varepsilon$ in (\ref{Z05}) and observe that
\begin{equation}
v\left( \xi\right) =(1/\pi)\left( \ln\left( 1/\rho\right) -\ln
\varepsilon\right) +O\left( 1/\rho^{2}\right) =(1/\pi)\ln\left(
1/r\right) +O\left( \varepsilon^{2}/r^{2}\right) \label{Z06}
\end{equation}
in the polar coordinate system $\left( r,\varphi\right) $ centered at the
point $x=\left( l,0\right) \in\partial\Pi^{\varepsilon},$ see fig. \ref{f4}, b.
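The matching identity used in (\ref{Z06}) is elementary: with the stretched radius $\rho=r/\varepsilon$ and the choice $c=-\pi^{-1}\ln\varepsilon$, the small parameter drops out. A two-line numeric confirmation (the sample values of $\varepsilon$ and $r$ are arbitrary):

```python
import math

# With rho = r/eps, (1/pi)(ln(1/rho) - ln eps) equals (1/pi) ln(1/r).
for eps in (1e-1, 1e-3):
    for r in (0.05, 0.7, 2.0):
        rho = r / eps
        lhs = (1.0 / math.pi) * (math.log(1.0 / rho) - math.log(eps))
        rhs = (1.0 / math.pi) * math.log(1.0 / r)
        assert abs(lhs - rhs) < 1e-12
```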
Applying the method of matched asymptotic expansions in the traditional
manner, cf. \cite{Ilin, VanDyke}, \cite[Ch. 2]{MaNaPl}, as well as \cite[Ch.
5]{MaNaPl}, \cite{na178} for ledge-shaped perturbations of domains, we consider
(\ref{62}) as an outer expansion in a finite part of the waveguide while
$\varepsilon V_{0}^{0}\left( \xi\right) $ becomes the main term of the inner
expansion in the vicinity of the ledge of the box-shaped perturbation in
(\ref{11}). In view of (\ref{Z03}) and (\ref{Z06}), the standard matching
procedure proposes $Z_{0}^{\prime}$ as a singular solution of the homogeneous
problem (\ref{481}), (\ref{482}) with the following asymptotic condition at
the point $x=\left( l,0\right) \in\partial\Pi^{0}$:
\begin{equation}
Z_{0}^{\prime}\left( x\right) =\left( 2/\pi\right) ^{1/2}\sin\left( \pi
l\right) \ln(1/r)+c+O\left( r\right) \text{ \ \ \ \ for }r\rightarrow+0.
\label{Z07}
\end{equation}
Arguing in the same way as for the function $Z_{1}^{\prime},$ we conclude that
the problem (\ref{51}), (\ref{Z07}) has a solution $Z_{0}^{\prime}$ which
admits the representation (\ref{5222}) with $p=0$ under the restriction
$x_{1}\geq R>l$ needed due to the logarithmic singularity in (\ref{Z07}). The
coefficient $C_{0}^{0}$ is arbitrary but $C_{0}$ and $C_{0}^{1}$ can be
computed again by means of the Green formula in the domain $\Pi_{+}^{0}\left(
R,\delta\right) =\left\{ x\in\Pi_{+}^{0}:x_{1}<R,\text{\ }r>\delta\right\}
$ and the limit passage $R\rightarrow+\infty,$ $\delta\rightarrow+0,$ cf.
(\ref{54}), (\ref{55}) and (\ref{Z08}). Dealing with $u_{0}^{0}$ and
$Z_{0}^{\prime},$ we take into account the equality $\partial_{2}u_{0}
^{0}\left( x_{1},0\right) =0$ and obtain that
\begin{gather}
0=
{\displaystyle\int_{\Pi\left( R,\delta\right) }}
\cos\left( \pi x_{1}\right) \left( \Delta Z_{0}^{\prime}\left( x\right)
+\pi^{2}Z_{0}^{\prime}\left( x\right) \right)
dx=\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \label{ZC}\\
{\displaystyle\int_{0}^{1}}
\left. \left( \cos\left( \pi x_{1}\right) \dfrac{\partial Z_{0}^{\prime
}{\partial x_{1}}\left( x\right) +\pi\sin\left( \pi x_{1}\right)
Z_{0}^{\prime}\left( x\right) \right) \right\vert _{x_{1}=R}dx_{2}
-\nonumber\\
\delta
{\displaystyle\int_{0}^{\pi}}
\left. \left( \cos\left( \pi x_{1}\right) \dfrac{\partial Z_{0}^{\prime
}{\partial r}\left( x\right) -Z_{0}^{\prime}\left( x\right) \dfrac
{\partial}{\partial r}\cos\left( \pi x_{1}\right) \right) \right\vert
_{r=\delta}d\varphi=\nonumber\\
=i\pi C_{0}+\left( \dfrac{\pi}{2}\right) ^{1/2}\sin\left( 2\pi l\right)
+o\left( 1\right) \Rightarrow C_{0}=i\left( 2\pi\right) ^{-1/2}\sin\left(
2\pi l\right) .\nonumber
\end{gather}
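The last implication in (\ref{ZC}) can be confirmed numerically: the value $C_{0}=i(2\pi)^{-1/2}\sin(2\pi l)$ annihilates $i\pi C_{0}+(\pi/2)^{1/2}\sin(2\pi l)$ for every $l$ (the sample values below are arbitrary):

```python
import math

for l in (0.3, 1.7, 2.9):
    C0 = 1j * (2 * math.pi)**(-0.5) * math.sin(2 * math.pi * l)
    # residual of the relation i*pi*C0 + sqrt(pi/2)*sin(2*pi*l) = 0
    residual = 1j * math.pi * C0 + math.sqrt(math.pi / 2) * math.sin(2 * math.pi * l)
    assert abs(residual) < 1e-12
```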
Inserting $u_{1}^{0}$ and $Z_{0}^{\prime}$ into the Green formula in $\Pi
_{+}^{0}\left( R,\delta\right) $, we take into account the inhomogeneous
boundary condition on $\Upsilon^{0}$ and derive that
\begin{equation}
\begin{array}
[c]{c}
0=
{\displaystyle\int_{0}^{1}}
\left. \cos\left( \pi x_{2}\right) \partial_{1}Z_{0}^{\prime}\left(
x\right) \right\vert _{x_{1}=R}dx_{2}-
{\displaystyle\int_{0}^{l-\delta}}
\left. \cos\left( \pi x_{2}\right) \partial_{2}Z_{0}^{\prime}\left(
x\right) \right\vert _{x_{2}=0}dx_{1}-\\
\delta
{\displaystyle\int_{0}^{\pi}}
\left. \left( \cos\left( \pi x_{2}\right) \partial_{r}Z_{0}^{\prime
}\left( x\right) -Z_{0}^{\prime}\left( x\right) \partial_{r}\cos\left(
\pi x_{2}\right) \right) \right\vert _{r=\delta}d\varphi=\\
=\dfrac{1}{2}C_{0}^{1}+\left( 4\mu\right) ^{-1/4}\pi^{2}lS_{10}^{0}\left(
1-i\right) +\left( 2\pi\right) ^{1/2}\sin\left( \pi l\right) +o\left(
1\right) .
\end{array}
\label{ZC1}
\end{equation}
We now compare the coefficients in the expansions (\ref{5222}), $p=0$, and
(\ref{Z01}). According to the calculations (\ref{ZC}) and (\ref{ZC1}), we
derive the formulas
\begin{align}
C_{0} & =\left( 2\pi\right) ^{-1/2}S_{00}^{\prime}\Rightarrow
S_{00}^{\prime}=i\sin\left( 2\pi l\right) ,\nonumber\\
C_{0}^{1} & =\left( 4\mu\right) ^{-1/4}\sqrt{\mu}S_{10}^{0}\left(
1+i\right) \Rightarrow\label{anti}\\
S_{10}^{0} & =-\dfrac{\left( 4\mu\right) ^{1/4}2\left( 2\pi\right)
^{1/2}\sin\left( \pi l\right) }{\sqrt{\mu}\left( 1+i\right) +2\pi
^{2}l\left( 1-i\right) }=-\left( 4\mu\right) ^{1/4}\left( 2\pi\right)
^{1/2}\frac{\sqrt{\mu}\left( 1-i\right) +2\pi^{2}l\left( 1+i\right) }
{4\pi^{4}l^{2}+\mu}\sin\left( \pi l\right) .\nonumber
\end{align}
Calculation of the coefficients in the ans\"{a}tze (\ref{44}) and (\ref{60}) is
completed. It is worth underlining that the expression (\ref{anti}) for the
main asymptotic term of $\varepsilon^{-1/2}S_{10}^{\varepsilon}=\varepsilon
^{-1/2}S_{01}^{\varepsilon}$ can be derived from the cumbersome relations
(\ref{57}) and (\ref{58}) as well.
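This remark admits a direct numeric verification: the two independent expressions (\ref{57}) and (\ref{anti}) for the leading off-diagonal term must produce the same number, in accordance with the symmetry of the augmented scattering matrix. A Python sketch (the sample values of $\mu$ and $l$ are arbitrary):

```python
import math

PI = math.pi

def S11_0(mu, l):
    # leading term from (58)
    D = 4 * PI**4 * l**2 + mu
    return -(4 * PI**2 * l * math.sqrt(mu) + 1j * (4 * PI**4 * l**2 - mu)) / D

def S01_0(mu, l):
    # second representation in formula (57)
    return (-(4 * mu)**(-0.25) * math.sqrt(2 * PI)
            * (1 - 1j - S11_0(mu, l) * (1 + 1j)) * math.sin(PI * l))

def S10_0(mu, l):
    # rationalized form in formula (anti)
    num = math.sqrt(mu) * (1 - 1j) + 2 * PI**2 * l * (1 + 1j)
    return (-(4 * mu)**0.25 * math.sqrt(2 * PI) * num
            / (4 * PI**4 * l**2 + mu) * math.sin(PI * l))

for mu, l in [(1.0, 0.4), (7.5, 1.9), (0.2, 3.1)]:
    assert abs(S01_0(mu, l) - S10_0(mu, l)) < 1e-10
```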
\section{Detection of a trapped mode\label{sect4}}
\subsection{Reformulation of the criterion\label{sect4.1}}
We opt for the form
\begin{equation}
\mu=4\pi^{4}l^{2}+\mathcal{\vartriangle}\mu,\text{ \ \ }l=\pi
k+\mathcal{\vartriangle}l \label{T1}
\end{equation}
of the spectral and length parameters which support a trapped mode. Here,
$k\in\mathbb{N}$ is fixed, while the small increments $\mathcal{\vartriangle}\mu$
and $\mathcal{\vartriangle}l$ are to be determined. If $\mathcal{\vartriangle}
\mu=0$ and $\mathcal{\vartriangle}l=0$, the equalities $S_{11}^{0}=-1$ and
$S_{01}^{0}=0$ hold due to (\ref{57}) and (\ref{anti}).
We aim to choose the small increments $\mathcal{\vartriangle}\mu$ and
$\mathcal{\vartriangle}l$ in (\ref{T1}) such that the criterion (\ref{33}) for
the existence of a trapped mode is satisfied. Since $S_{11}^{\varepsilon}$ is
complex, the criterion furnishes two equations for the two real parameters
$\mathcal{\vartriangle}\mu$ and $\mathcal{\vartriangle}l.$ It is convenient to
consider instead the equations
\begin{equation}
\operatorname{Im}S_{11}^{\varepsilon}=0,\text{ \ \ \ }\operatorname{Re}
S_{01}^{\varepsilon}=0 \label{T2}
\end{equation}
which, for small $\varepsilon$ and $\mathcal{\vartriangle}\mu,$
$\mathcal{\vartriangle}l,$ are equivalent to $S_{11}^{\varepsilon}=-1.$
Indeed, from the formulas (\ref{58}), (\ref{anti}) and (\ref{64}) together
with the estimates (\ref{est}) it follows that
\begin{equation}
\left\vert S_{11}^{\varepsilon}+1\right\vert +\left\vert S_{00}^{\varepsilon
}-1\right\vert \leq c\left( \varepsilon+\left\vert \mathcal{\vartriangle}
\mu\right\vert +\left\vert \mathcal{\vartriangle}l\right\vert \right)
^{\delta},\text{ \ \ \ }\delta\in\left( 0,1\right) . \label{T3}
\end{equation}
Since $S^{\varepsilon}$ is unitary and symmetric, see Section \ref{sect2.2},
the second equation in (\ref{T2}) means that $S_{01}^{\varepsilon}
=S_{10}^{\varepsilon}=i\sigma$ with some $\sigma\in\mathbb{R}$ and,
furthermore,
\begin{equation}
0=\overline{S_{00}^{\varepsilon}}S_{01}^{\varepsilon}+\overline{S_{10}
^{\varepsilon}}S_{11}^{\varepsilon}=2i\sigma+O\left( \left\vert
\sigma\right\vert \left( \varepsilon+\left\vert \mathcal{\vartriangle}
\mu\right\vert +\left\vert \mathcal{\vartriangle}l\right\vert \right)
^{\delta}\right) . \label{T999}
\end{equation}
Hence, $\sigma=0$ when $\mathcal{\vartriangle}\mu$, $\mathcal{\vartriangle}l$
and $\varepsilon>0$ are small, so that $S_{01}^{\varepsilon}=0$ and, by
unitarity, $\left\vert S_{11}^{\varepsilon}\right\vert =1$; together with
(\ref{T3}) and (\ref{T2}) this yields $S_{11}^{\varepsilon}=-1$. We have
proved that (\ref{T2})$\Rightarrow$(\ref{33}); the inverse implication
(\ref{33})$\Rightarrow$(\ref{T2}) is obvious.
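The algebraic step behind (\ref{T999}) is nothing but the orthogonality of the columns of a unitary matrix. The following sketch builds a symmetric unitary $2\times2$ matrix $S=R^{\mathsf{T}}\mathrm{diag}(e^{i\alpha},e^{i\beta})R$ with a real rotation $R$ (the parameter values are arbitrary) and checks the identity $\overline{S_{00}}S_{01}+\overline{S_{10}}S_{11}=0$:

```python
import cmath, math

def sym_unitary(theta, alpha, beta):
    """S = R^T diag(e^{i*alpha}, e^{i*beta}) R with a real rotation R(theta);
    such S is unitary and symmetric by construction."""
    c, s = math.cos(theta), math.sin(theta)
    d0, d1 = cmath.exp(1j * alpha), cmath.exp(1j * beta)
    S00 = c * c * d0 + s * s * d1
    S01 = c * s * (d1 - d0)
    S11 = s * s * d0 + c * c * d1
    return S00, S01, S01, S11   # (S00, S01, S10, S11), S10 = S01

for theta, alpha, beta in [(0.3, 1.1, -0.4), (1.2, 2.5, 0.9)]:
    S00, S01, S10, S11 = sym_unitary(theta, alpha, beta)
    # column orthogonality: the first equality in (T999)
    assert abs(S00.conjugate() * S01 + S10.conjugate() * S11) < 1e-12
```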
\subsection{Solving the system of transcendental equations\label{sect4.2}}
By virtue of (\ref{58}), (\ref{S11}), and (\ref{T1}), the first equation in
(\ref{T2}) turns into
\begin{equation}
\mathcal{\vartriangle}\mu=-\varepsilon\left( 8\pi^{4}l^{2}
+\mathcal{\vartriangle}\mu\right) \operatorname{Im}\widehat{S}_{11}
^{\varepsilon}. \label{T4}
\end{equation}
The formulas (\ref{anti}), (\ref{S01}), and a simple algebraic calculation
convert the second equation in (\ref{T2}) into
\[
\sin l=\left( 4\mu\right) ^{-1/4}\left( 2\pi\right) ^{-1/2}\frac{4\pi
^{4}l^{2}+\mu}{4\pi^{2}l\sqrt{\mu}+2\mu}\varepsilon\operatorname{Re}
\widehat{S}_{01}^{\varepsilon}
\]
and thus
\begin{equation}
\mathcal{\vartriangle}l=\arcsin\left( \left( -1\right) ^{k}\left(
4\mu\right) ^{-1/4}\left( 2\pi\right) ^{-1/2}\frac{4\pi^{4}l^{2}+\mu}
{4\pi^{2}l\sqrt{\mu}+2\mu}\varepsilon\operatorname{Re}\widehat{S}_{01}
^{\varepsilon}\right) . \label{T5}
\end{equation}
We search for a solution $\left( \mathcal{\vartriangle}\mu
,\mathcal{\vartriangle}l\right) $ of the transcendental equations (\ref{T4}),
(\ref{T5}) in the closed disk
\begin{equation}
\overline{\mathbb{B}_{\varrho}}=\left\{ \left( \mathcal{\vartriangle}
\mu,\mathcal{\vartriangle}l\right) \in\mathbb{R}^{2}:\left\vert
\mathcal{\vartriangle}\mu\right\vert ^{2}+\left\vert \mathcal{\vartriangle
}l\right\vert ^{2}\leq\varrho^{2}\right\} \label{T6}
\end{equation}
and rewrite them in the condensed form
\begin{equation}
\left( \mathcal{\vartriangle}\mu,\mathcal{\vartriangle}l\right)
=T^{\varepsilon}\left( \mathcal{\vartriangle}\mu,\mathcal{\vartriangle
}l\right) \text{ \ \ in }\overline{\mathbb{B}_{\varrho}} \label{T7}
\end{equation}
where $T^{\varepsilon}$ is a nonlinear operator involving asymptotic
remainders from formulas (\ref{S11}) and (\ref{S01}) for the augmented
scattering matrix $S^{\varepsilon}=S^{\varepsilon}\left(
\mathcal{\vartriangle}\mu,\mathcal{\vartriangle}l\right) $. The estimates
(\ref{est}) and Proposition \ref{PropositionANAL} below demonstrate that the
operator is smooth in $\overline{\mathbb{B}_{\varrho}}$ with $\varrho
\leq\varrho_{0}$, $\varrho_{0}>0,$ and, furthermore
\[
\left\vert T^{\varepsilon}\left( \mathcal{\vartriangle}\mu
,\mathcal{\vartriangle}l\right) \right\vert \leq c_{\varrho}\varepsilon\left(
1+\left\vert \ln\varepsilon\right\vert \right) ^{2}\text{ \ \ for }\left(
\mathcal{\vartriangle}\mu,\mathcal{\vartriangle}l\right) \in\overline
{\mathbb{B}_{\varrho}}.
\]
Hence, for any $\varrho\leq\varrho_{0}$, there exists $\varepsilon\left(
\varrho\right) >0$ such that $T^{\varepsilon}$ with $\varepsilon\in\left(
0,\varepsilon\left( \varrho\right) \right) $ is a contraction operator in
the disk (\ref{T6}). By the Banach contraction principle, the abstract
equation (\ref{T7}) has a unique solution $\left( \mathcal{\vartriangle}
\mu,\mathcal{\vartriangle}l\right) \in\overline{\mathbb{B}_{\varrho}}$ and
the estimate $\left\vert \mathcal{\vartriangle}\mu\right\vert +\left\vert
\mathcal{\vartriangle}l\right\vert \leq C\varepsilon\left( 1+\left\vert
\ln\varepsilon\right\vert \right) ^{2}$ is valid. This solution depends on
$\varepsilon$ and determines the spectral and length parameters (\ref{T1})
supporting a trapped mode in the perturbed waveguide (\ref{11}) according to
the criterion (\ref{33}) from Theorem \ref{TheoremA} reformulated as (\ref{T2}).
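The contraction argument can be mimicked numerically by the usual successive approximations. In the sketch below, $T$ is a toy stand-in for $T^{\varepsilon}$ with the correct smallness and Lipschitz constant (the actual operator of the paper involves the asymptotic remainders and is not reproduced here):

```python
import math

eps = 1e-3
bound = eps * (1 + abs(math.log(eps)))**2   # admissible size of the increments

def T(dmu, dl):
    """Toy contraction mimicking T^eps: maps the disk of radius `bound`
    into itself with a small Lipschitz constant."""
    return (0.5 * bound * math.cos(dl), 0.5 * bound * math.sin(dmu + 1.0))

# Successive approximations (the Banach contraction principle in action).
dmu, dl = 0.0, 0.0
for _ in range(100):
    dmu, dl = T(dmu, dl)

# The iterates converge to the unique fixed point inside the disk.
fmu, fl = T(dmu, dl)
assert abs(dmu - fmu) < 1e-14 and abs(dl - fl) < 1e-14
assert math.hypot(dmu, dl) <= bound
```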
\subsection{The main results\label{sect4.3}}
Based on the performed formal calculations, we will prove in the next three
sections the following existence and uniqueness theorems.
\begin{theorem}
\label{TheoremEX}Let $k\in\mathbb{N}$. There exist $\varepsilon_{k}$,
$c_{k}>0$ and $\mathcal{\vartriangle}\mu_{k}\left( \varepsilon\right) $,
$\mathcal{\vartriangle}l_{k}\left( \varepsilon\right) ,$ such that, for any
$\varepsilon\in\left( 0,\varepsilon_{k}\right) $, the estimate
\[
\left\vert \mathcal{\vartriangle}\mu_{k}\left( \varepsilon\right)
\right\vert +\left\vert \mathcal{\vartriangle}l_{k}\left( \varepsilon\right)
\right\vert \leq c_{k}\varepsilon\left( 1+\left\vert \ln\varepsilon
\right\vert \right) ^{2}
\]
is valid and the problem (\ref{1}), (\ref{2}) in the waveguide $\Pi_{l_{k}\left(
\varepsilon\right) }^{\varepsilon}=\Pi\cup\varpi_{l_{k}\left( \varepsilon
\right) }^{\varepsilon}$ with the box-shaped perturbation (\ref{0}) of length
$2l_{k}\left( \varepsilon\right) =2\left( \pi k+\mathcal{\vartriangle}l_{k}
\left( \varepsilon\right) \right) $ has the eigenvalue
\begin{equation}
\lambda_{k}^{\varepsilon}=\pi^{2}-\varepsilon^{2}(4\pi^{4}\left( \pi
k+\mathcal{\vartriangle}l_{k}\left( \varepsilon\right) \right)
^{2}+\mathcal{\vartriangle}\mu_{k}\left( \varepsilon\right) ). \label{T10}
\end{equation}
The eigenvalue (\ref{T10}) is unique in the interval $\left( 0,\pi
^{2}\right) $.
\end{theorem}
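For orientation, formula (\ref{T10}) is easy to evaluate for sample data (the increments $\mathcal{\vartriangle}\mu_{k}$, $\mathcal{\vartriangle}l_{k}$ are only known to be of size $O(\varepsilon(1+\left\vert \ln\varepsilon\right\vert )^{2})$, so they are set to zero as hypothetical placeholders): the eigenvalue lies just below the threshold $\pi^{2}$ of the continuous spectrum.

```python
import math

def eigenvalue(k, eps, dmu=0.0, dl=0.0):
    """lambda_k^eps from (T10); dmu, dl stand for the small increments
    (zero placeholders here)."""
    return math.pi**2 - eps**2 * (4 * math.pi**4 * (math.pi * k + dl)**2 + dmu)

# The eigenvalue sits in (0, pi^2), just below the cutoff for small eps.
for k in (1, 2, 3):
    for eps in (1e-2, 1e-3):
        lam = eigenvalue(k, eps)
        assert 0.0 < lam < math.pi**2
assert math.pi**2 - eigenvalue(1, 1e-3) < 1e-2
```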
\begin{theorem}
\label{TheoremUN}Let $k\in\mathbb{N}$ and $\delta>0.$ There exists
$\varepsilon_{k}^{\delta}>0$ such that, for any $\varepsilon\in\left(
0,\varepsilon_{k}^{\delta}\right) $, the waveguide $\Pi_{l}^{\varepsilon}$
with the length parameter
\begin{equation}
l\in\left[ \pi\left( k-1\right) +\delta,\pi\left( k+1\right)
-\delta\right] \label{T11}
\end{equation}
does not support a trapped mode in the case $l\neq l_{k}\left( \varepsilon
\right) $ where $l_{k}\left( \varepsilon\right) $ is taken from Theorem
\ref{TheoremEX}.
\end{theorem}
\section{Weighted spaces with detached asymptotics\label{sect5}}
\subsection{Reformulation of the problem\label{sect5.1}}
Let $W_{\beta}^{1}\left( \Pi_{+}^{\varepsilon}\right) $ be the Kondratiev
(weighted Sobolev) space composed from functions $u^{\varepsilon}$ in
$H_{loc}^{1}\left( \overline{\Pi_{+}^{\varepsilon}}\right) $ with the finite
norm
\begin{equation}
\left\Vert u^{\varepsilon};W_{\beta}^{1}\left( \Pi_{+}^{\varepsilon}\right)
\right\Vert =\left\Vert e^{\beta x_{1}}u^{\varepsilon};H^{1}\left( \Pi
_{+}^{\varepsilon}\right) \right\Vert \label{K1}
\end{equation}
where $\beta\in\mathbb{R}$ is the exponential weight index. If $\beta>0$,
functions in $W_{\beta}^{1}\left( \Pi_{+}^{\varepsilon}\right) $ decay at
infinity in the semi-infinite waveguide (\ref{11}) but in the case $\beta<0$ a
certain exponential growth is permitted while the rate of decay/growth is
ruled by $\beta.$ Clearly, $W_{0}^{1}\left( \Pi_{+}^{\varepsilon}\right)
=H^{1}\left( \Pi_{+}^{\varepsilon}\right) $. The space $C_{c}^{\infty
}\left( \overline{\Pi_{+}^{\varepsilon}}\right) $ of smooth compactly
supported functions is dense in $W_{\beta}^{1}\left( \Pi_{+}^{\varepsilon
}\right) $ for any $\beta.$
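The role of the weight index can be illustrated on a one-dimensional model: for $u(x_{1})=e^{-\alpha x_{1}}$ on $(0,\infty)$, the norm (\ref{K1}) is finite precisely when $\beta<\alpha$. A crude quadrature sketch (the exponent $\alpha$ and the discretization are arbitrary):

```python
import math

def weighted_L2_sq(alpha, beta, X=200.0, n=200000):
    """Midpoint quadrature of int_0^X |e^{beta x} e^{-alpha x}|^2 dx."""
    h = X / n
    return sum(math.exp(2 * (beta - alpha) * (i + 0.5) * h) * h
               for i in range(n))

# beta < alpha: the integral converges, here to 1/(2(alpha - beta)) = 1.
val = weighted_L2_sq(alpha=1.0, beta=0.5)
assert abs(val - 1.0) < 1e-2

# beta > alpha: the truncated integral blows up as X grows.
assert weighted_L2_sq(alpha=1.0, beta=1.1) > 1e6
```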
By a solution of the problem (\ref{14}) in $W_{\sigma}^{1}\left( \Pi
_{+}^{\varepsilon}\right) $, $\sigma\in\mathbb{R}$, we understand a function
$u^{\varepsilon}\in W_{\sigma}^{1}\left( \Pi_{+}^{\varepsilon}\right) $
satisfying the integral identity
\begin{equation}
\left( \nabla u^{\varepsilon},\nabla v^{\varepsilon}\right) _{\Pi
^{\varepsilon}}-\lambda^{\varepsilon}\left( u^{\varepsilon},v^{\varepsilon
}\right) _{\Pi^{\varepsilon}}=F^{\varepsilon}\left( v^{\varepsilon}\right)
\text{ \ \ }\forall v^{\varepsilon}\in W_{-\sigma}^{1}\left( \Pi
_{+}^{\varepsilon}\right) \label{K2}
\end{equation}
where $F^{\varepsilon}\in W_{-\sigma}^{1}\left( \Pi_{+}^{\varepsilon}\right)
^{\ast}$ is an (anti)linear continuous functional on $W_{-\sigma}^{1}\left(
\Pi_{+}^{\varepsilon}\right) $ and $\left( \ ,\ \right) _{\Pi^{\varepsilon
}}$ is an extension of the Lebesgue scalar product to a duality pairing between
an appropriate couple of weighted spaces. In view of (\ref{K1}), all terms in
(\ref{K2}) are well defined, so that the problem (\ref{K2}) is associated
with the continuous mapping
\[
W_{\sigma}^{1}\left( \Pi_{+}^{\varepsilon}\right) \ni u^{\varepsilon}
\mapsto\mathcal{A}_{\sigma}^{\varepsilon}\left( \lambda^{\varepsilon}\right)
u^{\varepsilon}=F^{\varepsilon}\in W_{-\sigma}^{1}\left( \Pi_{+}
^{\varepsilon}\right) ^{\ast}.
\]
If $f^{\varepsilon}\in L_{\sigma}^{2}\left( \Pi_{+}^{\varepsilon}\right) $,
that is, $e^{\sigma x_{1}}f^{\varepsilon}\in L^{2}\left( \Pi_{+}^{\varepsilon
}\right) $, then the functional
\[
v^{\varepsilon}\mapsto F^{\varepsilon}\left( v^{\varepsilon}\right) =\left(
f^{\varepsilon},v^{\varepsilon}\right) _{\Pi_{+}^{\varepsilon}}
\]
belongs to $W_{-\sigma}^{1}\left( \Pi_{+}^{\varepsilon}\right) ^{\ast}$.
Clearly, $\mathcal{A}_{-\sigma}^{\varepsilon}\left( \lambda^{\varepsilon
}\right) $ is the adjoint operator for $\mathcal{A}_{\sigma}^{\varepsilon
}\left( \lambda^{\varepsilon}\right) $.
\subsection{The Fredholm property, asymptotics and the index\label{sect5.2}}
Let us formulate some well-known results of the theory of elliptic problems in
domains with cylindrical outlets to infinity (see the key papers \cite{Ko,
MaPl1, MaPl2} and, e.g., the monographs \cite{NaPl, KoMaRo}). This theory
mainly deals with the classical (differential) formulation of boundary value
problems; however, as was observed in \cite{na417}, passing to the weak
formulation involving integral identities of type (\ref{K2}) does not meet any
visible obstacle. The only disputable point, namely the dependence of the
constants on the small parameter $\varepsilon$, will be discussed in
Section \ref{sect5.5}.
\begin{theorem}
\label{TheoremKA}(see \cite{Ko}) Let $\lambda^{\varepsilon}\in\left(
0,\pi^{2}\right] $.
1) The operator $\mathcal{A}_{\beta}^{\varepsilon}\left( \lambda
^{\varepsilon}\right) $ is Fredholm if and only if
\begin{equation}
\beta\neq\beta_{0}:=0,\text{ \ \ }\beta\neq\beta_{\pm j}:=\pm\sqrt{\pi
^{2}j^{2}-\lambda^{\varepsilon}},\text{ \ \ }j\in\mathbb{N}. \label{K4}
\end{equation}
In the case $\beta=\beta_{p}$ with $p\in\mathbb{Z}$ the range $\mathcal{A}
_{\beta}^{\varepsilon}\left( \lambda^{\varepsilon}\right) W_{\beta}
^{1}\left( \Pi_{+}^{\varepsilon}\right) $ is not a closed subspace of
$W_{-\beta}^{1}\left( \Pi_{+}^{\varepsilon}\right) ^{\ast}.$
2) Let $\gamma\in\left( \beta_{1},\beta_{2}\right) $ and let $u^{\varepsilon
}\in W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) $ be a solution of
problem (\ref{K2}) with the weight index $\sigma=-\gamma$ and the right-hand
side $F^{\varepsilon}\in W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right)
^{\ast}\subset W_{\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) ^{\ast}$.
Then the asymptotic decomposition
\begin{equation}
u^{\varepsilon}\left( x\right) =\widetilde{u}^{\varepsilon}\left( x\right)
+\sum_{\pm}\left( a_{\pm}^{\varepsilon}w_{0}^{\varepsilon\pm}\left(
x\right) +b_{\pm}^{\varepsilon}w_{1}^{\varepsilon\pm}\left( x\right)
\right) \label{K5}
\end{equation}
and the estimate
\begin{equation}
\left( \left\Vert \widetilde{u}^{\varepsilon};W_{\gamma}^{1}\left(
\Pi^{\varepsilon}\right) \right\Vert ^{2}+\underset{\pm}{\sum}\left(
\left\vert a_{\pm}^{\varepsilon}\right\vert ^{2}+\left\vert b_{\pm
}^{\varepsilon}\right\vert ^{2}\right) \right) ^{1/2}\leq c_{\varepsilon
}\left( \left\Vert F^{\varepsilon};W_{-\gamma}^{1}\left( \Pi^{\varepsilon
}\right) ^{\ast}\right\Vert +\left\Vert u^{\varepsilon};W_{-\gamma}
^{1}\left( \Pi^{\varepsilon}\right) \right\Vert \right) \label{K6}
\end{equation}
are valid, where $\widetilde{u}^{\varepsilon}\in W_{\gamma}^{1}\left(
\Pi^{\varepsilon}\right) $ is the asymptotic remainder, $a_{\pm}
^{\varepsilon}$ and $b_{\pm}^{\varepsilon}$ are coefficients depending on
$F^{\varepsilon}$ and $u^{\varepsilon}$, the waves $w_{0}^{\varepsilon\pm}$
are given by (\ref{13}) and $w_{1}^{\varepsilon\pm}$ by (\ref{20}) for
$\lambda^{\varepsilon}=\pi^{2}$ but by (\ref{30}) for $\lambda^{\varepsilon
}\in\left( 0,\pi^{2}\right) $. The factor $c_{\varepsilon}$ in (\ref{K6}) is
independent of $F^{\varepsilon}$ and $u^{\varepsilon}$ but may depend on
$\varepsilon\in\left[ 0,\varepsilon_{0}\right] .$
\end{theorem}
\bigskip
As was mentioned above, $\mathcal{A}_{\gamma}^{\varepsilon}\left( \lambda
^{\varepsilon}\right) ^{\ast}=\mathcal{A}_{-\gamma}^{\varepsilon}\left(
\lambda^{\varepsilon}\right) $ and, hence, the kernels and co-kernels of these
operators are in the relationship
\begin{equation}
\ker\mathcal{A}_{\pm\gamma}^{\varepsilon}\left( \lambda^{\varepsilon}\right)
=\text{\textrm{coker}}\mathcal{A}_{\mp\gamma}^{\varepsilon}\left(
\lambda^{\varepsilon}\right) . \label{K7}
\end{equation}
In the next assertion we compare the indices Ind$\mathcal{A}_{\pm\gamma
}^{\varepsilon}\left( \lambda^{\varepsilon}\right) =\dim\ker\mathcal{A}
_{\pm\gamma}^{\varepsilon}\left( \lambda^{\varepsilon}\right) -\dim
$\textrm{coker}$\mathcal{A}_{\pm\gamma}^{\varepsilon}\left( \lambda
^{\varepsilon}\right) ;$ notice that Ind$\mathcal{A}_{\gamma}^{\varepsilon
}\left( \lambda^{\varepsilon}\right) =-$Ind$\mathcal{A}_{-\gamma
}^{\varepsilon}\left( \lambda^{\varepsilon}\right) $ according to (\ref{K7}).
\begin{theorem}
\label{TheoremKB} (see \cite[Thm. 3.3.3, 5.1.4 (4)]{NaPl}) If $\gamma
\in\left( \beta_{1},\beta_{2}\right) ,$ see (\ref{K4}), then
\begin{equation}
\text{Ind}\mathcal{A}_{-\gamma}^{\varepsilon}\left( \lambda^{\varepsilon
}\right) =\text{Ind}\mathcal{A}_{\gamma}^{\varepsilon}\left( \lambda
^{\varepsilon}\right) +4 \label{K8}
\end{equation}
\end{theorem}
We emphasize that the number $4$ is nothing but the number of waves detached in
(\ref{K5}). Since (\ref{K7}) implies Ind$\mathcal{A}_{-\gamma}^{\varepsilon
}\left( \lambda^{\varepsilon}\right) =-$Ind$\mathcal{A}_{\gamma
}^{\varepsilon}\left( \lambda^{\varepsilon}\right) $, combining this with
(\ref{K8}) gives $2\,$Ind$\mathcal{A}_{-\gamma}^{\varepsilon}\left(
\lambda^{\varepsilon}\right) =4$, that is,
\begin{equation}
\text{Ind}\mathcal{A}_{-\gamma}^{\varepsilon}\left( \lambda^{\varepsilon
}\right) =-\text{Ind}\mathcal{A}_{\gamma}^{\varepsilon}\left( \lambda
^{\varepsilon}\right) =2 \label{K9}
\end{equation}
\subsection{Absence of trapped modes with a fast decay rate.\label{sect5.3}}
In this section we prove that, for $\lambda\in(0,\pi^{2}]$ and $\gamma
\in\left( \beta_{1},\beta_{2}\right) $, there holds the formula
\begin{equation}
\dim\ker\mathcal{A}_{\gamma}^{\varepsilon}\left( \lambda\right) =0
\label{M1}
\end{equation}
which, in particular, completes the proof of Theorem \ref{TheoremA}, cf. our
assumption $c_{\varepsilon}\neq0$ for the trapped mode (\ref{35}), while
$c_{\varepsilon}=0$ leads to $U^{\varepsilon}=\widetilde{U}^{\varepsilon}
\in\ker\mathcal{A}_{\gamma}^{\varepsilon}\left( \lambda\right) $. Clearly,
$\ker\mathcal{A}_{\beta}^{0}\left( \lambda\right) =0$ for any $\beta>0$, that
is, the limit problem (\ref{9}), (\ref{10}) in the straight semi-strip $\Pi
_{+}^{0}$ cannot have a trapped mode. However, as was mentioned in Section
\ref{sect1.4}, formula (\ref{M1}) does not follow by a standard perturbation
argument and, moreover, $\dim\ker\mathcal{A}_{\beta}^{\varepsilon}\left(
\lambda^{\varepsilon}\right) >0$ for some $\beta\in\left( 0,\beta
_{1}\right) $ and $\lambda^{\varepsilon}\in(0,\pi^{2})$.
\begin{theorem}
\label{TheoremMA}Let $\gamma\in\left( \beta_{1},\beta_{2}\right) $ be fixed.
There exists $\varepsilon_{0}>0$ such that, for $\varepsilon\in\left(
0,\varepsilon_{0}\right) $ and $\lambda\in(0,\pi^{2}]$, formula (\ref{M1}) is valid.
\end{theorem}
\textbf{Proof.} Let us assume that, for some $\lambda\in\left( 0,\pi
^{2}\right] $ and an infinitesimal positive sequence $\left\{ \varepsilon
_{k}\right\} _{k\in\mathbb{N}}$, the homogeneous problem (\ref{9}),
(\ref{10}) has a solution $u^{\varepsilon_{k}}\in W_{\gamma}^{1}\left(
\Pi_{+}^{\varepsilon}\right) $. We denote by $u_{0}^{\varepsilon_{k}}$ the
restriction of $u^{\varepsilon_{k}}$ onto the semi-strip $\Pi_{+}
^{0}=\mathbb{R}_{+}\times\left( 0,1\right) $. Under the normalization condition
\begin{equation}
\left\Vert u^{\varepsilon_{k}};L^{2}\left( \Pi_{+}^{\varepsilon}\left(
2l\right) \right) \right\Vert =1, \label{M0}
\end{equation}
we are going to perform the limit passage $\varepsilon_{k}\rightarrow+0$ in
the integral identity
\begin{equation}
\left( \nabla u^{\varepsilon_{k}},\nabla v^{\varepsilon}\right) _{\Pi
_{+}^{\varepsilon}}=\lambda\left( u^{\varepsilon_{k}}
,v^{\varepsilon}\right) _{\Pi_{+}^{\varepsilon}}, \label{M2}
\end{equation}
where $v^{\varepsilon}$ is obtained from a test function $v\in C_{c}^{\infty
}\left( \overline{\Pi_{+}^{0}}\right) $ by the even extension over the
$x_{1}$-axis. If we prove that
$(i)$ $u_{0}^{\varepsilon_{k}}$ converges to $u_{0}^{0}\in W_{-\gamma}
^{1}\left( \Pi_{+}^{0}\right) $ weakly in $W_{-\gamma}^{1}\left( \Pi
_{+}^{0}\right) $ and, therefore, strongly in $L^{2}\left( \Pi_{+}
^{0}\left( 2l\right) \right) ;$
$(ii)$ $\left\Vert u^{\varepsilon_{k}};L^{2}\left( \Pi_{+}^{\varepsilon_{k}
}\setminus\Pi_{+}^{0}\right) \right\Vert \rightarrow0;$
$(iii)$ $\left( \nabla u^{\varepsilon_{k}},\nabla v\right) _{\Pi
^{\varepsilon}\setminus\Pi^{0}}\rightarrow0$ with any smooth function $v$ in
the rectangle $\left[ 0,l\right] \times\left[ -1,1\right] ,$
\noindent then the limit passage in (\ref{M2}) and (\ref{M0}) gives
\begin{gather}
\left( \nabla u_{0}^{0},\nabla v\right) _{\Pi_{+}^{0}}=\lambda\left(
u_{0}^{0},v\right) _{\Pi_{+}^{0}}\text{ \ \ }\forall v\in C_{c}^{\infty
}(\overline{\Pi_{+}^{0}}),\label{M3}\\
\left\Vert u_{0}^{0};L^{2}\left( \Pi_{+}^{0}\left( 2l\right) \right)
\right\Vert =1. \label{M4}
\end{gather}
By a density argument, the integral identity (\ref{M3}) is valid for any
$v\in W_{-\gamma}^{1}\left( \Pi_{+}^{0}\right) $ and, therefore, $u_{0}
^{0}=0$ because the limit problem in $\Pi_{+}^{0}$ cannot have a non-trivial
trapped mode.
Let us confirm facts $(i)-(iii)$. We write $\varepsilon$ instead of
$\varepsilon_{k}.$
First, we apply a local estimate, see, e.g., \cite{ADN1}, to the solution
$u^{\varepsilon}$ of the problem (\ref{9}), (\ref{10}) with $\lambda
u^{\varepsilon}$ as a given right-hand side:
\begin{equation}
\left\Vert u^{\varepsilon};H^{2}\left( \varpi^{\prime}\right) \right\Vert
\leq c\lambda\left\Vert u^{\varepsilon};L^{2}\left( \varpi^{\prime\prime
}\right) \right\Vert . \label{M5}
\end{equation}
Here, $\varpi^{\prime}=\left( 4l/3,5l/3\right) \times\left( 0,1\right) $
and $\varpi^{\prime\prime}=\left( l,2l\right) \times\left( 0,1\right) $
are rectangles such that $\varpi^{\prime}\subset\varpi^{\prime\prime}
\subset\Pi^{\varepsilon}\left( 2l\right) $ and, therefore, the right-hand
side of (\ref{M5}) is less than $c\lambda$ according to (\ref{M0}).
Second, we split $u^{\varepsilon}$ as follows
\begin{equation}
u^{\varepsilon}=u_{l}^{\varepsilon}+u_{\infty}^{\varepsilon},\text{ \ \ }
u_{l}^{\varepsilon}=\left( 1-\chi\right) u^{\varepsilon},\text{
\ \ }u_{\infty}^{\varepsilon}=\chi u^{\varepsilon} \label{M55}
\end{equation}
where $\chi\in C^{\infty}\left( \mathbb{R}\right) $ is a cut-off function,
$\chi\left( x_{1}\right) =1$ for $x_{1}\geq5l/3$ and $\chi\left(
x_{1}\right) =0$ for $x_{1}\leq4l/3.$ The components in (\ref{M55}) satisfy
the integral identities
\begin{gather}
\left( \nabla u_{l}^{\varepsilon},\nabla v_{l}\right) _{\Pi_{+}
^{\varepsilon}\left( 2l\right) }=\lambda\left( \left( 1-\chi\right)
u^{\varepsilon},v_{l}\right) _{\Pi_{+}^{\varepsilon}\left( 2l\right)
}+\left( \nabla u^{\varepsilon},v_{l}\nabla\chi\right) _{\varpi^{\prime}
}-\label{M6}\\
-\left( u^{\varepsilon}\nabla\chi,\nabla v_{l}\right) _{\varpi^{\prime}
}\text{ }\forall v_{l}\in H^{1}\left( \Pi_{+}^{\varepsilon}\left( 2l\right)
\right) ,\nonumber\\
\left( \nabla u_{\infty}^{\varepsilon},\nabla v_{\infty}\right)
_{\Pi_{\infty}\left( l\right) }-\lambda\left( u_{\infty}^{\varepsilon
},v_{\infty}\right) _{\Pi_{\infty}\left( l\right) }=\label{M7}\\
=F_{\infty}^{\varepsilon}\left( v_{\infty}\right) :=\left( u^{\varepsilon
}\nabla\chi,\nabla v_{\infty}\right) _{\varpi^{\prime}}-\left( \nabla
u^{\varepsilon},v_{\infty}\nabla\chi\right) _{\varpi^{\prime}}\text{
\ }\forall v_{\infty}\in W_{-\gamma}^{1}\left( \Pi_{\infty}\left( l\right)
\right) .\nonumber
\end{gather}
Third, inserting $v_{l}=u_{l}^{\varepsilon}$ into (\ref{M6}) and taking
(\ref{M0}), (\ref{M5}) into account yields
\begin{equation}
\left\Vert \nabla u_{l}^{\varepsilon};L^{2}\left( \Pi_{+}^{\varepsilon
}\left( 2l\right) \right) \right\Vert \leq c. \label{M8}
\end{equation}
The problem (\ref{M7}) needs a bit more advanced argument. It is posed in the
semi-strip $\Pi_{\infty}\left( l\right) $ independent of $\varepsilon$ and,
thus, the following a priori estimate in the Kondratiev space, see \cite{Ko}
and, e.g., \cite[Thm 5.1.4 (1)]{NaPl},
\begin{align}
\left\Vert u_{\infty}^{\varepsilon};W_{-\gamma}^{1}\left( \Pi_{\infty}\left(
l\right) \right) \right\Vert & \leq c_{1}\left( \left\Vert F_{\infty
}^{\varepsilon};W_{\gamma}^{1}\left( \Pi_{\infty}\left( l\right) \right)
^{\ast}\right\Vert +\left\Vert u_{\infty}^{\varepsilon};L^{2}\left( \Pi
_{+}^{\varepsilon}\left( 2l\right) \cap\Pi_{\infty}\left( l\right)
\right) \right\Vert \right) \label{M9}\\
& \leq c_{2}\left( \left\Vert u^{\varepsilon};L^{2}\left( \varpi^{\prime
}\right) \right\Vert +\left\Vert \nabla u^{\varepsilon};L^{2}\left(
\varpi^{\prime}\right) \right\Vert +\left\Vert u^{\varepsilon};L^{2}\left( \Pi
_{+}^{\varepsilon}\left( 2l\right) \right) \right\Vert \right) \nonumber
\end{align}
involves some constants $c_{m}$ independent of $\varepsilon$. In this way,
formulas (\ref{M8}) and (\ref{M9}), (\ref{M5}), (\ref{M1}) assure that
\begin{equation}
\left\Vert u^{\varepsilon};W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon
}\right) \right\Vert \leq c. \label{M10}
\end{equation}
Thus, the convergence in $\left( i\right) $ occurs along a subsequence which
is still denoted by $\left\{ \varepsilon_{k}\right\} $.
The last step of our consideration uses integration in $t\in\left(
-\varepsilon,0\right) $ and $x_{1}\in\left( 0,l\right) $ of the
Newton--Leibniz formula
\[
\left\vert u^{\varepsilon}\left( x_{1},t\right) \right\vert ^{2}=-\int
_{t}^{t+1/2}\frac{\partial}{\partial x_{2}}\left( \chi_{0}\left(
x_{2}\right) \left\vert u^{\varepsilon}\left( x_{1},x_{2}\right)
\right\vert ^{2}\right) dx_{2}
\]
where $\chi_{0}\in C^{\infty}\left( \mathbb{R}\right) $ is a cut-off
function, $\chi_{0}\left( x_{2}\right) =1$ for $x_{2}<1/6$ and $\chi
_{0}\left( x_{2}\right) =0$ for $x_{2}>1/3$. As a result, we obtain the
estimate
\[
\int_{\Pi_{+}^{\varepsilon}\setminus\Pi_{+}^{0}}\left\vert u^{\varepsilon
}\left( x\right) \right\vert ^{2}dx\leq c\varepsilon\int_{\Pi_{+}
^{\varepsilon}\left( l\right) }\left( \left\vert \nabla u^{\varepsilon
}\left( x\right) \right\vert ^{2}+\left\vert u^{\varepsilon}\left(
x\right) \right\vert ^{2}\right) dx\leq C\varepsilon
\]
while referring to (\ref{M10}) again; here, the elementary bound
$\vert\partial_{x_{2}}\left( \chi_{0}\vert u^{\varepsilon}\vert^{2}\right)
\vert\leq c\left( \vert u^{\varepsilon}\vert^{2}+\vert\nabla u^{\varepsilon
}\vert^{2}\right) $ is used and the integration in $t\in\left(
-\varepsilon,0\right) $ brings the factor $\varepsilon$. This provides (ii)
as well as (iii) because
\begin{align*}
\left\vert \int_{0}^{l}\int_{-\varepsilon}^{0}\nabla u^{\varepsilon}\left(
x_{1},x_{2}\right) \cdot\nabla v\left( x_{1},x_{2}\right) dx_{2}
dx_{1}\right\vert & \leq\underset{x\in\overline{\Pi_{+}^{\varepsilon}
}\setminus\Pi_{+}^{0}}{\max}\left\vert \nabla v\left( x\right) \right\vert
\left( \text{meas}_{2}\left( \Pi_{+}^{\varepsilon}\setminus\Pi_{+}^{0}\right)
\right) ^{1/2}\left\Vert \nabla u^{\varepsilon};L^{2}\left( \Pi
_{+}^{\varepsilon}\setminus\Pi_{+}^{0}\right) \right\Vert \leq\\
& \leq c_{v}\varepsilon^{1/2}l^{1/2}\left\Vert u^{\varepsilon};W_{-\gamma
}^{1}\left( \Pi_{+}^{\varepsilon}\right) \right\Vert \leq C_{v}
\varepsilon^{1/2}.
\end{align*}
Theorem \ref{TheoremMA} is proved. $\boxtimes$
\subsection{Radiation conditions\label{sect5.4}}
Let $\lambda^{\varepsilon}\in\left( 0,\pi^{2}\right) $ and $\gamma\in\left(
\beta_{1},\beta_{2}\right) $, cf. Theorem \ref{TheoremKA}. The pre-image
$\mathfrak{W}_{\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) $ of the
subspace $W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) ^{\ast}$ in
$W_{\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) ^{\ast}$ for the operator
$\mathcal{A}_{-\gamma}^{\varepsilon}\left( \lambda^{\varepsilon}\right) $
consists of functions in the form (\ref{K5}). Introducing the norm $\left\Vert
u^{\varepsilon};\mathfrak{W}_{\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right)
\right\Vert $ as the left-hand side of (\ref{K6}) makes $\mathfrak{W}_{\gamma
}^{1}\left( \Pi_{+}^{\varepsilon}\right) $ a Hilbert space but this Hilbert
structure is of no use in our paper.
The restriction $\mathcal{B}_{\gamma}^{\varepsilon}\left( \lambda
^{\varepsilon}\right) $ of $\mathcal{A}_{-\gamma}^{\varepsilon}\left(
\lambda^{\varepsilon}\right) $ onto $\mathcal{W}_{\gamma}^{1}\left( \Pi
_{+}^{\varepsilon}\right) \subset W_{-\gamma}^{1}\left( \Pi_{+}
^{\varepsilon}\right) $ inherits all properties of $\mathcal{A}_{-\gamma
}^{\varepsilon}\left( \lambda^{\varepsilon}\right) ,$ in particular,
$\mathcal{B}_{\gamma}^{\varepsilon}\left( \lambda^{\varepsilon}\right) $ is
a Fredholm operator with Ind$\mathcal{B}_{\gamma}^{\varepsilon}\left(
\lambda^{\varepsilon}\right) =$Ind$\mathcal{A}_{-\gamma}^{\varepsilon}\left(
\lambda^{\varepsilon}\right) =2,$ see (\ref{K9}). Thus, the restriction
$\mathfrak{A}_{\gamma}^{\varepsilon}\left( \lambda^{\varepsilon}\right)
_{out}$ of $\mathfrak{A}_{\gamma}^{\varepsilon}\left( \lambda^{\varepsilon
}\right) $ onto the subspac
\begin{equation}
\mathcal{W}_{\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) _{out}=\left\{
u^{\varepsilon}\in\mathcal{W}_{\gamma}^{1}\left( \Pi_{+}^{\varepsilon
}\right) :a_{-}^{\varepsilon}=b_{-}^{\varepsilon}=0\text{ in (\ref{K5})}\right\}
\label{K11}
\end{equation}
of codimension $2$ becomes of index zero.
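Indeed, the restriction of a Fredholm operator onto a closed subspace of
finite codimension $d$ remains Fredholm while its index diminishes exactly by
$d$; in the present situation
\[
\text{Ind}\,\mathcal{B}_{\gamma}^{\varepsilon}\left( \lambda^{\varepsilon}\right) -\text{codim}\,\mathcal{W}_{\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) _{out}=2-2=0.
\]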
\begin{theorem}
\label{TheoremKMM}Let $\lambda$, $\gamma$ and $\varepsilon$ be the same as in
Theorem \ref{TheoremMA}. Then the operator $\mathcal{B}_{\gamma}^{\varepsilon
}\left( \lambda^{\varepsilon}\right) $ actualizes the isomorphism
\[
\mathcal{W}_{\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) _{out}\approx
W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) ^{\ast}.
\]
\end{theorem}
Since the decomposition (\ref{K5}) of a function $u^{\varepsilon}
\in\mathfrak{W}_{\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) _{out}$
loses the incoming waves $w_{0}^{\varepsilon-}$ and $w_{1}^{\varepsilon-}$ due
to the restriction in (\ref{K11}), $\mathcal{B}_{\gamma}^{\varepsilon}\left(
\lambda^{\varepsilon}\right) $ has to be interpreted as an operator of the
problem (\ref{K2}) with the radiation condition (\ref{23}) at $\lambda
=\pi^{2}$ and (\ref{32}) at $\lambda\in(0,\pi^{2}).$ Theorem \ref{TheoremKMM}
says that such a problem is uniquely solvable, while its solution in the form
\begin{equation}
u^{\varepsilon}\left( x\right) =\widetilde{u}^{\varepsilon}\left( x\right)
+a_{+}^{\varepsilon}w_{0}^{\varepsilon+}\left( x\right) +b_{+}^{\varepsilon
}w_{1}^{\varepsilon+}\left( x\right) \label{KK1}
\end{equation}
obeys the estimate
\begin{equation}
\left\Vert \widetilde{u}^{\varepsilon};W_{\gamma}^{1}\left( \Pi
_{+}^{\varepsilon}\right) \right\Vert +\left\vert a_{+}^{\varepsilon
}\right\vert +\left\vert b_{+}^{\varepsilon}\right\vert \leq C_{\varepsilon
}\left\Vert F^{\varepsilon};W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon
}\right) ^{\ast}\right\Vert . \label{KK2}
\end{equation}
\subsection{Remark on the dependence of bounds on the small parameter
$\varepsilon$\label{sect5.5}}
If
\begin{equation}
\lambda^{\varepsilon}\in\left[ \delta,\pi^{2}-\delta\right] \label{KK3}
\end{equation}
with a fixed $\delta>0,$ the coefficient in the estimate (\ref{K6}) can be
chosen independent of $\varepsilon\in\left[ 0,\varepsilon\left(
\delta\right) \right] $ with some $\varepsilon\left( \delta\right) >0$.
This fact originates in the smooth dependence of the waves (\ref{13}) and
(\ref{27}), (\ref{30}) on the parameter (\ref{KK3}) and the following
consideration. By multiplying $u^{\varepsilon}$ with the same cut-off function
$\chi$ as in (\ref{M55}), we reduce the problem (\ref{K2}) onto the semi-strip
$\Pi_{\infty}\left( l\right) $, namely, inserting $v^{\varepsilon}=\chi
v_{\infty}$ with any $v_{\infty}\in W_{\gamma}^{1}\left( \Pi_{\infty}\left(
l\right) \right) $ as a test function, we obtain for $u_{\infty
}^{\varepsilon}=\chi u^{\varepsilon}$ the integral identity
\begin{gather}
\left( \nabla u_{\infty}^{\varepsilon},\nabla v_{\infty}\right)
_{\Pi_{\infty}\left( l\right) }-\lambda^{\varepsilon}\left( u_{\infty
}^{\varepsilon},v_{\infty}\right) _{\Pi_{\infty}\left( l\right) }
=F_{\infty}^{\varepsilon}\left( v_{\infty}\right)
:=\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \label{KK4}\\
:=F^{\varepsilon}\left( \chi v_{\infty}\right) -\left( \nabla
u^{\varepsilon},v_{\infty}\nabla\chi\right) _{\Pi_{\infty}\left( l\right)
}+\left( u^{\varepsilon}\nabla\chi,\nabla v_{\infty}\right) _{\Pi_{\infty
}\left( l\right) }.\nonumber
\end{gather}
Moreover,
\begin{align*}
\left\Vert F_{\infty}^{\varepsilon};W_{-\gamma}^{1}\left( \Pi_{\infty}\left(
l\right) \right) ^{\ast}\right\Vert & \leq\left\Vert F^{\varepsilon
}\left( \chi\cdot\right) ;W_{-\gamma}^{1}\left( \Pi_{\infty}\left(
l\right) \right) ^{\ast}\right\Vert +c_{\chi}\left\Vert u^{\varepsilon}
;H^{1}\left( \Pi_{\infty}\left( l\right) \cap\Pi_{+}^{\varepsilon}\left(
2l\right) \right) \right\Vert \leq\\
& \leq c\left( \left\Vert F^{\varepsilon};W_{-\gamma}^{1}\left( \Pi
_{+}^{\varepsilon}\right) \right\Vert +\left\Vert u^{\varepsilon};W_{-\gamma
}^{1}\left( \Pi_{+}^{\varepsilon}\right) \right\Vert \right) ,
\end{align*}
cf. the right-hand side of (\ref{K6}). By restriction (\ref{KK3}),
$\lambda^{\varepsilon}$ stays at a distance from the thresholds $\lambda
_{0}^{\dagger}=0$ and $\lambda_{1}^{\dagger}=\pi^{2}$ so that we may choose
the same weight index $\gamma$ for all admissible $\lambda^{\varepsilon}$.
Hence, a general result in \cite{Ko}, see also \cite[\S \ 3.2]{NaPl}, on the
basis of a perturbation argument provides a common factor $c^{\varepsilon
}=const$ in the estimate (\ref{K6}) for ingredients of the asymptotic
representation (\ref{K5}) of the solution $u_{\infty}^{\varepsilon}=\chi
u^{\varepsilon}$ to the problem (\ref{KK4}) in the $\varepsilon$-independent
domain $\Pi_{\infty}\left( l\right) .$ Since the weight $e^{\gamma x_{1}}$
is uniformly bounded in $\Pi_{+}^{\varepsilon}\left( 2l\right) =\Pi
_{+}^{\varepsilon}\setminus\Pi_{\infty}\left( 2l\right) $, the evident
relation
\[
\left\Vert \widetilde{u}^{\varepsilon};W_{\gamma}^{1}\left( \Pi
_{+}^{\varepsilon}\left( 2l\right) \right) \right\Vert \leq c\left\Vert
u^{\varepsilon};W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\left( 2l\right)
\right) \right\Vert +\sum_{\pm}\left( \left\vert a_{\pm}^{\varepsilon
}\right\vert +\left\vert b_{\pm}^{\varepsilon}\right\vert \right)
\]
allows us to extend the above-mentioned estimate over the whole waveguide
$\Pi_{+}^{\varepsilon}$.
Similarly, in the case (\ref{KK3}) the factor $C^{\varepsilon}$ in (\ref{KK2})
can be fixed independent of $\varepsilon$, too.
The desired eigenvalue (\ref{42}) is located in the vicinity of the threshold
$\lambda_{1}^{\dagger}=\pi^{2}$ and the above consideration becomes
inapplicable. Moreover, the normalization factor $\left( \pi^{2}
-\lambda^{\varepsilon}\right) ^{-1/4}$ in (\ref{27}) is big so that the
independence property of $c^{\varepsilon}$ and $C^{\varepsilon}$ is surely
lost. Thus, our immediate objective is to modify the estimates in order to
make them uniform in all small $\varepsilon>0$. We emphasize that a
modification of the normalization factor does not suffice because the waves
$e^{\pm k_{1}^{\varepsilon}x_{1}}\cos\left( \pi x_{2}\right) $ in (\ref{27})
become equal at $\varepsilon=0$.
We follow a scheme in \cite[\S 3]{na489} and define for $\lambda^{\varepsilon
}\in(0,\pi^{2})$ the linear combinations of the exponential waves (\ref{27})
\begin{equation}
\mathbf{w}_{1}^{\pm}\left( \lambda^{\varepsilon};x\right) =(1/2)\cos\left(
\pi x_{2}\right) ((1/k_{1}^{\varepsilon})(e^{k_{1}^{\varepsilon}x_{1}
}-e^{-k_{1}^{\varepsilon}x_{1}})\mp i(e^{k_{1}^{\varepsilon}x_{1}}
+e^{-k_{1}^{\varepsilon}x_{1}})), \label{W1}
\end{equation}
cf. (\ref{30}). A direct calculation demonstrates that the new waves
(\ref{W1}) together with the old waves (\ref{13})
\begin{equation}
\mathbf{w}_{0}^{\pm}\left( \lambda^{\varepsilon};x\right) =w_{0}
^{\varepsilon\pm}\left( x\right) =\left( 2k^{\varepsilon}\right) ^{-1/2}e^{\pm
ik^{\varepsilon}x_{1}}, \label{W2}
\end{equation}
still satisfy the normalization and orthogonality conditions (\ref{19}) and,
in addition, obey the relations
\[
\mathbf{w}_{0}^{\pm}\left( \lambda^{\varepsilon};x\right) -w_{1}^{0\pm
}\left( x\right) =O((\pi^{2}-\lambda^{\varepsilon})x_{1}),\ \ \ \ \mathbf{w}
_{1}^{\pm}\left( \lambda^{\varepsilon};x\right) -w_{0}^{0\pm}\left(
x\right) =O((\pi^{2}-\lambda^{\varepsilon})^{1/2}x_{1}).
\]
In other words, the waves (\ref{W1}) and (\ref{W2}) smoothly become the waves
(\ref{20}) and (\ref{13}) introduced in Section \ref{sect2.1} at the threshold
$\lambda^{\varepsilon}=\pi^{2}$. The first property of $\mathbf{w}_{p}^{\pm
}\left( \lambda^{\varepsilon};x\right) $ allows us to repeat considerations
in Sections \ref{sect5.4}, \ref{sect2.2} and compose the space $\mathbf{W}
_{\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) _{out}$ of functions
satisfying the new, so-called artificial, radiation condition
\begin{equation}
\mathbf{u}^{\varepsilon}\left( x\right) =\widetilde{\mathbf{u}}
^{\varepsilon}\left( x\right) +\mathbf{a}_{+}^{\varepsilon}\mathbf{w}
_{0}^{+}\left( \lambda^{\varepsilon};x\right) +\mathbf{b}_{+}^{\varepsilon
}\mathbf{w}_{1}^{+}\left( \lambda^{\varepsilon};x\right) ,\text{
\ \ }\widetilde{\mathbf{u}}^{\varepsilon}\in\mathbf{W}_{\gamma}^{1}\left(
\Pi_{+}^{\varepsilon}\right) \label{U5}
\end{equation}
cf. (\ref{KK1}), to determine the solutions $\mathbf{Z}_{p}^{\varepsilon
}\left( \lambda^{\varepsilon};\cdot\right) \in\mathbf{W}_{-\gamma}
^{1}\left( \Pi_{+}^{\varepsilon}\right) $ of the homogeneous problem
(\ref{K2}), $\sigma=-\gamma$,
\begin{gather}
\mathbf{Z}_{p}^{\varepsilon}\left( \lambda^{\varepsilon};x\right)
=\widetilde{\mathbf{Z}}_{p}^{\varepsilon}\left( \lambda^{\varepsilon
};x\right) +\mathbf{w}_{p}^{-}\left( \lambda^{\varepsilon};x\right)
+\mathbf{S}_{0p}^{\varepsilon}\left( \lambda^{\varepsilon}\right)
\mathbf{w}_{0}^{+}\left( \lambda^{\varepsilon};x\right) +\mathbf{S}
_{1p}^{\varepsilon}\left( \lambda^{\varepsilon}\right) \mathbf{w}_{1}
^{+}\left( \lambda^{\varepsilon};x\right) ,\text{ }\label{U6}\\
\widetilde{\mathbf{Z}}_{p}^{\varepsilon}\left( \lambda^{\varepsilon}
;\cdot\right) \in\mathbf{W}_{\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right)
,\text{ }p=0,1,\nonumber
\end{gather}
cf. (\ref{31}) and to detect a unitary and symmetric artificial scattering
matrix $\mathbf{S}^{\varepsilon}\left( \lambda^{\varepsilon}\right) =\left(
\mathbf{S}_{qp}^{\varepsilon}\left( \lambda^{\varepsilon}\right) \right)
_{q,\text{ }p=0,1}$. At the same time, the second property of $\mathbf{w}
_{p}^{\pm}\left( \lambda^{\varepsilon};x\right) $ assures that, for a fixed
$\varepsilon$, the operator
\begin{equation}
\mathbf{B}_{\gamma}^{\varepsilon}\left( \lambda^{\varepsilon}\right)
_{out}:\mathbf{W}_{\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right)
_{out}\rightarrow W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) ^{\ast}
\label{U7}
\end{equation}
of the problem (\ref{K2}), $\sigma=-\gamma$, with the radiation condition
(\ref{U5}) depends continuously on the spectral parameter $\lambda
^{\varepsilon}\in\left( \pi^{2}-\delta,\pi^{2}\right] $, $\delta>0$, when
the domain of $\mathbf{B}_{\gamma}^{\varepsilon}\left( \lambda^{\varepsilon
}\right) _{out}$ is equipped with the norm
\begin{equation}
\left\Vert \mathbf{u}^{\varepsilon};\mathbf{W}_{\gamma}^{1}\left( \Pi
_{+}^{\varepsilon}\right) \right\Vert =\left\Vert \widetilde{\mathbf{u}}
}^{\varepsilon};W_{\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right)
\right\Vert +\left\vert \mathbf{a}_{+}^{\varepsilon}\right\vert +\left\vert
\mathbf{b}_{+}^{\varepsilon}\right\vert \label{U10}
\end{equation}
of a weighted space with detached asymptotics, cf. the left-hand side of
(\ref{KK2}).
Recalling our reasoning in Section \ref{sect5.3} and the beginning of this
section, we conclude that the operator (\ref{U7}) is an isomorphism while its
norm and the norm of the inverse are uniformly bounded in
\begin{equation}
\lambda\in\left[ \pi^{2}-\delta,\pi^{2}\right] ,\text{ \ \ }\varepsilon
\in\left[ 0,\varepsilon_{0}\right] . \label{U8}
\end{equation}
Furthermore, by the Fourier method, entries of the matrix $\mathbf{S}
^{\varepsilon}\left( \lambda\right) $ can be expressed as weighted integrals
of solutions (\ref{U6}), this matrix is continuous in both arguments
(\ref{U8}) and the limit matrix
\begin{equation}
\mathbf{S}^{0}\left( \pi^{2}\right) =\text{diag}\left\{ 1,-1\right\}
\label{U9}
\end{equation}
is nothing but the augmented scattering matrix at the threshold, and its
diagonal form is due to the explicit solutions (\ref{491}) and (\ref{492}) in
the semi-strip $\Pi^{0}$:
\[
Z_{0}^{0}\left( x\right) =(1/\sqrt{2\pi})\left( e^{i\pi x_{1}}+e^{-i\pi
x_{1}}\right) ,\ \ Z_{1}^{0}\left( x_{1}\right) =\cos\left( \pi
x_{2}\right) =(1/2i)\left( \left( x_{1}+i\right) \cos\left( \pi
x_{2}\right) -\left( x_{1}-i\right) \cos\left( \pi x_{2}\right) \right)
.
\]
Summing up the above considerations, we find a unique solution
$\mathbf{u}^{\varepsilon}\in\mathbf{W}_{\gamma}^{1}\left( \Pi_{+}
^{\varepsilon}\right) _{out}\subset W_{-\gamma}^{1}\left( \Pi_{+}
^{\varepsilon}\right) $ of the problem (\ref{K2}) with $\sigma=-\gamma$,
$F^{\varepsilon}\in W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right)
^{\ast}$ and the artificial radiation condition (\ref{U5}). Moreover, the
estimate
\begin{equation}
\left\Vert \mathbf{u}^{\varepsilon};\mathbf{W}_{\gamma}^{1}\left( \Pi
_{+}^{\varepsilon}\right) \right\Vert \leq c\left\Vert F^{\varepsilon
};W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) ^{\ast}\right\Vert
\label{U12}
\end{equation}
is valid, where $c$ is independent of both parameters (\ref{U8}).
We now search for a solution $u^{\varepsilon}\in\mathcal{W}_{\gamma}^{1}\left( \Pi
_{+}^{\varepsilon}\right) _{out}$ of the same integral identity but with the
radiation condition from Section \ref{sect5.4}, in the form
\begin{equation}
u^{\varepsilon}=\mathbf{u}^{\varepsilon}+\mathbf{c}_{0}^{\varepsilon
}\mathbf{Z}_{0}^{\varepsilon}+\mathbf{c}_{1}^{\varepsilon}\mathbf{Z}
_{1}^{\varepsilon}. \label{U13}
\end{equation}
The unknown coefficients $\mathbf{c}_{p}^{\varepsilon}$ should be fixed such
that the decomposition (\ref{KK1}) is satisfied. To this end, we insert into
the right-hand side of (\ref{U13}) formulas (\ref{U5}), (\ref{U6}) and
(\ref{W1}), (\ref{W2}), we compare the resultant coefficients of the waves
(\ref{13}), (\ref{27}) in (\ref{U13}) with those in (\ref{KK1}) and arrive at
the following systems of linear algebraic equations for the unknowns
$\mathbf{c}_{0}^{\varepsilon}$, $\mathbf{c}_{1}^{\varepsilon}$ and
$a_{+}^{\varepsilon}$, $b_{+}^{\varepsilon}$:
\begin{align}
a_{+}^{\varepsilon} & =\mathbf{a}_{+}^{\varepsilon}+\mathbf{S}
_{01}^{\varepsilon}\mathbf{c}_{1}^{\varepsilon},\text{ \ \ }0=\mathbf{c}
_{0}^{\varepsilon},\label{U14}\\
\left( 2k_{1}^{\varepsilon}\right) ^{1/2}b_{+}^{\varepsilon} & =\left(
1-ik_{1}^{\varepsilon}\right) \mathbf{b}_{+}^{\varepsilon}+\left( \left(
1+ik_{1}^{\varepsilon}\right) +\left( 1-ik_{1}^{\varepsilon}\right)
\mathbf{S}_{11}^{\varepsilon}\right) \mathbf{c}_{1}^{\varepsilon}
,\label{U15}\\
\left( 2k_{1}^{\varepsilon}\right) ^{1/2}b_{+}^{\varepsilon} & =\left(
1+ik_{1}^{\varepsilon}\right) \mathbf{b}_{+}^{\varepsilon}+\left( \left(
1-ik_{1}^{\varepsilon}\right) +\left( 1+ik_{1}^{\varepsilon
}\right) \mathbf{S}_{11}^{\varepsilon}\right) \mathbf{c}_{1}^{\varepsilon
}.\nonumber
\end{align}
Solving the system (\ref{U15}) with the help of Cramer's rule, a simple
calculation gives the determinant
\[
\left( 2k_{1}^{\varepsilon}\right) ^{3/2}\left( 1-\mathbf{S}_{11}
^{\varepsilon}\right) =\left( 2\sqrt{\pi^{2}-\lambda^{\varepsilon}}\right)
^{3/2}i\left( 1-\mathbf{S}_{11}^{\varepsilon}\right)
\]
and the estimate
\begin{equation}
\left\vert b_{+}^{\varepsilon}\right\vert \leq c\left( \pi^{2}-\lambda
^{\varepsilon}\right) ^{-1/2}\left\vert \mathbf{b}_{+}^{\varepsilon
}\right\vert ,\text{ \ \ \ }\left\vert \mathbf{c}_{1}^{\varepsilon}\right\vert
\leq c\left\vert \mathbf{b}_{+}^{\varepsilon}\right\vert \label{U16}
\end{equation}
because $2\geq\left\vert 1-\mathbf{S}_{11}^{\varepsilon}\right\vert \geq1/2$
due to (\ref{U9}) and (\ref{U8}). In view of the first relation in (\ref{U14})
we obtain that
\[
\left\vert a_{+}^{\varepsilon}\right\vert \leq c\left( \left\vert
\mathbf{a}_{+}^{\varepsilon}\right\vert +\left\vert \mathbf{b}_{+}
^{\varepsilon}\right\vert \right) .
\]
Collecting formulas (\ref{U15}), (\ref{U16}) and (\ref{U12}), (\ref{U10}),
we adjust the inequality (\ref{KK2}) and Theorem \ref{TheoremKMM} as follows.
\begin{theorem}
\label{TheoremKUM} Let $\lambda^{\varepsilon}\in\left[ \pi^{2}-\delta,\pi
^{2}\right] $, $\varepsilon\in\left( 0,\varepsilon_{0}\right] $ and
$\gamma\in\left( \beta_{1},\beta_{2}\right) $. The solution (\ref{KK1}) of
the problem (\ref{K2}) with $\sigma=-\gamma$ and $F^{\varepsilon}\in
W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) ^{\ast}$ admits the
estimate
\begin{equation}
\left\Vert \widetilde{u}^{\varepsilon};W_{\gamma}^{1}\left( \Pi
_{+}^{\varepsilon}\right) \right\Vert +\left\vert a_{+}^{\varepsilon
}\right\vert +\left( \pi^{2}-\lambda^{\varepsilon}\right) ^{1/4}\left\vert
b_{+}^{\varepsilon}\right\vert \leq C\left\Vert F^{\varepsilon};W_{-\gamma
}^{1}\left( \Pi_{+}^{\varepsilon}\right) ^{\ast}\right\Vert \label{U18}
\end{equation}
where $C$ does not depend on $\lambda^{\varepsilon}$, $\varepsilon$ and
$F^{\varepsilon}$.
\end{theorem}
\section{Justification of asymptotics\label{sect6}}
\subsection{The global asymptotic approximation\label{sect6.1}}
The reformulation (\ref{T2}) of the criterion (\ref{32}) involves the
coefficients $S_{11}^{\varepsilon}$ and $S_{10}^{\varepsilon}$ in the
decomposition (\ref{31}) of the special solution $Z_{1}^{\varepsilon}$ of the
problem (\ref{1}), (\ref{2}), and this section is devoted to the justification
of the formal asymptotic expansions (\ref{44}). We emphasize that the similar
expansions (\ref{60}) of the other entries of the augmented scattering matrix
$S^{\varepsilon}$ can be verified in the same way; however, in Section
\ref{sect4.1} we actually used only the much simpler relation (\ref{T3}).
In Section \ref{sect3} we applied the method of matched asymptotic expansions
and our immediate objective is to compose a global approximate solution
from the inner and outer expansions (\ref{47}) and (\ref{Z1}). To this end, we
employ several smooth cut-off functions
\begin{align}
X_{\varepsilon}\left( x\right) & =1\text{ for }x_{1}\leq l+1/\varepsilon
,\text{ \ \ }X_{\varepsilon}\left( x\right) =0\text{ for }x_{1}
\geq2l+1/\varepsilon,\label{J1}\\
\chi_{\infty}\left( x\right) & =1\text{ for }x_{1}\geq2l\text{
},\text{\ \ \ \ \ \ \ \ }\chi_{\infty}\left( x\right) =0\text{ for }
x_{1}\leq3l/2,\nonumber\\
\chi_{\varepsilon}\left( r\right) & =1\text{ for }r\leq2\varepsilon,\text{
\ \ \ \ \ \ \ \ \ }\chi_{\varepsilon}\left( r\right) =0\text{ for }
r\geq3\varepsilon,\nonumber
\end{align}
where $r=\left( \left\vert x_{1}-l\right\vert ^{2}+x_{2}^{2}\right) ^{1/2}.$
We set
\begin{align}
\mathfrak{Z}^{\varepsilon} & =\chi_{\infty}\mathfrak{Z}^{out}+X_{\varepsilon
}\mathfrak{Z}^{in}-\chi_{\infty}X_{\varepsilon}\mathfrak{Z}^{mat},\label{J2}\\
\mathfrak{Z}^{out}\left( x\right) & =w_{1}^{\varepsilon-}\left( x\right)
+S_{11}^{0}w_{1}^{\varepsilon+}\left( x\right) +\varepsilon^{1/2}S_{01}
^{0}w_{0}^{\varepsilon+}\left( x\right) ,\label{J3}\\
\mathfrak{Z}^{in}\left( x\right) & =\varepsilon^{-1/2}Z_{1}^{0}\left(
x\right) +\varepsilon^{1/2}\left( \left( 1-\chi_{\varepsilon}\left(
r\right) \right) \widehat{Z}_{1}^{\prime}\left( x\right) +\chi
_{\varepsilon}\left( r\right) Z_{1}^{\prime}\left( l,0\right) \right)
,\label{J4}\\
\mathfrak{Z}^{mat}\left( x\right) & =\varepsilon^{-1/2}\left(
4\mu\right) ^{-1/4}\cos\left( \pi x_{2}\right) \left( 1+i+S_{11}
^{0}\left( 1-i\right) +\varepsilon x_{1}\sqrt{\mu}\left( 1-i+S_{11}
^{0}\left( 1+i\right) \right) \right) +\label{J5}\\
& +\varepsilon^{1/2}S_{01}^{0}\left( 2\pi\right) ^{-1/2}e^{i\pi x_{1}
}.\nonumber
\end{align}
This construction needs explanations. First, the expressions (\ref{J3}) and
(\ref{J4}) of the outer and inner types are multiplied with the cut-off
functions $\chi_{\infty}$ and $X_{\varepsilon}$ whose supports overlap so that
the sum (\ref{J5}) of terms matched in Section \ref{sect3.2} enters the global
approximation (\ref{J2}) twice, i.e. in $\chi_{\infty}\mathfrak{Z}^{out}$ and
$X_{\varepsilon}\mathfrak{Z}^{in}$, but we compensate for this duplication by
subtracting $\chi_{\infty}X_{\varepsilon}\mathfrak{Z}^{mat}$. Moreover, the
formula for commutators $\left[ \Delta,\chi_{\infty}X_{\varepsilon}\right]
=\left[ \Delta,\chi_{\infty}\right] +\left[ \Delta,X_{\varepsilon}\right]
$ demonstrates that
\begin{gather}
\left( \Delta+\lambda^{\varepsilon}\right) \mathfrak{Z}^{\varepsilon}
=\chi_{\infty}\left( \Delta+\lambda^{\varepsilon}\right) \mathfrak{Z}
^{out}+X_{\varepsilon}\left( \Delta+\lambda^{\varepsilon}\right)
\mathfrak{Z}^{in}-\chi_{\infty}X_{\varepsilon}\left( \Delta+\lambda
^{\varepsilon}\right) \mathfrak{Z}^{mat}+\label{J6}\\
+\left[ \Delta,\chi_{\infty}\right] \left( \mathfrak{Z}^{out}
-\mathfrak{Z}^{mat}\right) +\left[ \Delta,X_{\varepsilon}\right] \left(
\mathfrak{Z}^{in}-\mathfrak{Z}^{mat}\right) :=\nonumber\\
:=\mathcal{F}^{\varepsilon}=\chi_{\infty}\mathcal{F}^{out}+X_{\varepsilon
}\mathcal{F}^{in}-\chi_{\infty}X_{\varepsilon}\mathcal{F}^{mat}+\mathcal{F}
^{oma}+\mathcal{F}^{ima}.\nonumber
\end{gather}
Second, the function $Z_{1}^{0}$ is properly defined by the formula (\ref{50})
in the whole waveguide but $Z_{1}^{\prime}$ needs an extension from $\Pi
_{+}^{0}$ onto $\Pi_{+}^{\varepsilon}$ denoted by $\widehat{Z}_{1}^{\prime}$
in (\ref{J4}). Since the Neumann datum (\ref{52}) in the problem (\ref{51}),
$p=1$, has a jump at the point $(l,0)\in\partial\Pi_{+}^{0}$, the solution
$Z_{1}^{\prime}$ acquires a singular behavior near this point. A simple
calculation based on the Kondratiev theory \cite{Ko} (see also \cite[Ch.
2]{NaPl}) demonstrates that
\begin{equation}
Z_{1}^{\prime}\left( x\right) =\pi^{-1}G_{1}^{\prime}r\left( \ln
r\cos\varphi-\varphi\sin\varphi\right) +\breve{Z}_{1}^{\prime}\left(
x\right) \label{J7}
\end{equation}
where $\left( r,\varphi\right) \in\mathbb{R}_{+}\times\left( 0,\pi\right)
$ are polar coordinates in Fig. \ref{f4},b and $\breve{Z}_{1}^{\prime}$ is a
smooth function in the closed rectangle $\overline{\Pi_{+}^{0}\left(
R\right) }$ of any fixed length $R$. We emphasize that the solution
$Z_{1}^{\prime}$ has no singularities at the corner points $\left(
0,0\right) $ and $\left( 0,1\right) ,$ cf. \cite[Ch. 2]{NaPl}, but third
derivatives of $\breve{Z}_{1}^{\prime}$ are not bounded when $r\rightarrow+0$.
The extension $\widehat{Z}_{1}^{\prime}$ is defined by (\ref{J7}), where $\breve{Z}
_{1}^{\prime}$ is smoothly continued through the segment $\left\{ x:x_{1}
\in\left[ 0,l\right] ,\text{\ }x_{2}=0\right\} $.
Finally, we mention that the correction term $Z_{1}^{\prime}$ in (\ref{47})
was determined in Section \ref{sect3.2} up to the addendum $C_{1}^{0}
\cos\left( \pi x_{1}\right) $ but putting $C_{1}^{0}=0$ in the expansion
(\ref{5222}) defines uniquely the function $Z_{1}^{\prime}$ as well as its
value $Z_{1}^{\prime}\left( l,0\right) $ according to (\ref{J7}). Notice
that we also must take $S_{11}^{\prime}=0$ by virtue of (\ref{56}). The
extension $\widehat{Z}_{1}^{\prime}$ of $Z_{1}^{\prime}$ is smooth everywhere
in a neighborhood of $\overline{\Pi_{+}^{0}}$ except at the point $\left(
l,0\right) $ where it inherits a singularity from (\ref{J7}). Using the
partition of unity $\left\{ 1-\chi_{\varepsilon},\chi_{\varepsilon}\right\}
$ makes the last term in (\ref{J4}) smooth in $\overline{\Pi_{+}^{\varepsilon
}}$ but produces additional discrepancy in the Helmholtz equation (\ref{1}).
\subsection{Estimating discrepancies\label{sect6.2}}
First of all, we observe that $\mathcal{F}^{out}=0$ in $\Pi_{+}^{0}$ according
to definition of waves in (\ref{13}) and (\ref{30}). In view of the factor
$\chi_{\infty}$ from (\ref{J1}), the first term on the right-hand side of
(\ref{J6}) vanishes. Moreover, the Taylor formulas (\ref{43}) and (\ref{46})
assure that
\[
\left\vert \mathfrak{Z}^{out}\left( x\right) -\mathfrak{Z}^{mat}\left(
x\right) \right\vert +\left\vert \nabla\mathfrak{Z}^{out}\left( x\right)
-\nabla\mathfrak{Z}^{mat}\left( x\right) \right\vert \leq c\varepsilon^{3/2}
\]
on the rectangle $\left[ 3l/2,2l\right] \times\left[ 0,1\right] $, where
the supports of the coefficients of the commutator $\left[ \Delta,\chi_{\infty
}\right] $ are located. Hence,
\begin{equation}
\left\vert \mathcal{F}^{oma}\left( x\right) \right\vert \leq c\varepsilon
^{3/2},\ \ \ \ \mathcal{F}^{oma}\left( x\right) =0\text{ \ \ for }x_{1}
\geq2l. \label{J9}
\end{equation}
Let us consider the sum
\begin{equation}
\mathcal{F}^{inm}\left( x\right) =X_{\varepsilon}\mathcal{F}^{in}
-X_{\varepsilon}\chi_{\infty}\mathcal{F}^{mat}. \label{J10}
\end{equation}
Outside the finite domain $\Pi_{+}^{\varepsilon}\left( 3l/2\right) $ it is
equal to
\begin{gather}
\varepsilon^{-1/2}X_{\varepsilon}\left( x\right) \left( \left( \Delta
+\pi^{2}\right) Z_{1}^{0}\left( x\right) +\varepsilon\left( \Delta+\pi
^{2}\right) Z_{1}^{\prime}\left( x\right) -\chi_{\infty}\left(
\Delta+\pi^{2}\right) \mathfrak{Z}^{mat}\left( x\right) \right)
+\label{J11}\\
+\varepsilon^{-1/2}X_{\varepsilon}\left( x\right) \left( \lambda
^{\varepsilon}-\pi^{2}\right) \left( Z_{1}^{0}\left( x\right) +\varepsilon
Z_{1}^{\prime}\left( x\right) -\chi_{\infty}\left( x\right)
\mathfrak{Z}^{mat}\left( x\right) \right) =\nonumber\\
=0+\varepsilon^{3/2}\mu(Z_{1}^{0}\left( x\right) -\chi_{\infty
}\left( x\right) \left( 4\mu\right) ^{-1/4}\cos\left( \pi x_{2}\right)
\left( 1+i+S_{11}^{0}\left( 1-i\right) \right) -\nonumber\\
-\varepsilon(Z_{1}^{\prime}\left( x\right) -\chi_{\infty}\left(
x\right) (\left( 4\mu\right) ^{-1/4}\cos\left( \pi x_{2}\right)
x_{1}\sqrt{\mu}\left( 1-i+S_{11}^{0}\left( 1+i\right) \right) +S_{01}
^{0}\left( 2\pi\right) ^{-1/2}e^{i\pi x_{1}}))).\nonumber
\end{gather}
Here formulas (\ref{42}) and (\ref{T1}) are taken into account. We now use
the representations (\ref{50}) and (\ref{5222}) to conclude that
\begin{equation}
\left\vert \mathcal{F}^{inm}\left( x\right) \right\vert \leq c\varepsilon
^{3/2}e^{-x_{1}\sqrt{3}\pi}\text{ \ \ \ for }x_{1}\geq3l/2. \label{J12}
\end{equation}
Inside $\Pi_{+}^{\varepsilon}\left( 3l/2\right) $ we have
\[
\mathcal{F}^{inm}=-\varepsilon^{2}\mu\mathfrak{Z}^{in}+\varepsilon
^{1/2}\left( 1-\chi_{\varepsilon}\right) \left( \Delta+\pi^{2}\right)
\widehat{Z}_{1}^{\prime}-\varepsilon^{1/2}\left[ \mathcal{\vartriangle
,\chi_{\varepsilon}\right] (\widehat{Z}_{1}^{\prime}-Z_{1}^{\prime}\left(
l,0\right) ).
\]
The inequality
\[
\varepsilon^{2}\mu\left\vert \mathfrak{Z}^{in}\left( x\right) \right\vert
\leq c\varepsilon^{3/2}\text{ \ \ \ in }\overline{\Pi_{+}^{\varepsilon}\left(
3l/2\right) }
\]
is evident. Because of the singularity $O\left( r\left\vert \ln r\right\vert
\right) $ in (\ref{J7}), the estimates of the other two terms in (\ref{J11})
involve the weight function
\begin{equation}
\rho\left( x\right) =r+\left( 1+\left\vert \ln r\right\vert \right)
\label{J14}
\end{equation}
cf. the Hardy inequality (\ref{hardy}). Since $\widehat{Z}_{1}^{\prime}
=Z_{1}^{\prime}$ in $\Pi_{+}^{0}$ satisfies the Helmholtz equation from
(\ref{51}), we have
\begin{align*}
\varepsilon^{1/2}\left( 1-\chi_{\varepsilon}\left( r\right) \right)
\left( \mathcal{\Delta}+\pi^{2}\right) \widehat{Z}_{1}^{\prime}\left(
x\right) & =0,\text{ \ \ }x\in\Pi_{+}^{0}\left( 3l/2\right) \\
\varepsilon^{1/2}\left\vert \left( 1-\chi_{\varepsilon}\left( r\right)
\right) \left( \mathcal{\Delta}+\pi^{2}\right) \widehat{Z}_{1}^{\prime
}\left( x\right) \right\vert & \leq c\varepsilon^{1/2}\left\vert
x_{1}\right\vert \left( \varepsilon+r\right) ^{-2}\rho\left( x\right)
,\text{ \ \ }x\in\varpi_{+}^{\varepsilon}=\Pi_{+}^{\varepsilon}\setminus
\Pi_{+}^{0}.
\end{align*}
Observing that, according to the third line in (\ref{J1}), the coefficients in the
commutator $\left[ \Delta,\chi_{\varepsilon}\right] =2\nabla\chi
_{\varepsilon}\cdot\nabla+\Delta\chi_{\varepsilon}$ get the orders $\varepsilon
^{-1}$ and $\varepsilon^{-2},$ respectively, but vanish outside the set
$\Cap^{\varepsilon}=\left\{ x\in\Pi_{+}^{\varepsilon}:2\varepsilon
<r<3\varepsilon\right\} $, we conclude that
\begin{align}
\varepsilon^{1/2}\left[ \Delta,\chi_{\varepsilon}\right] \left( \widehat
{Z}_{1}^{\prime}\left( x\right) -Z_{1}^{\prime}\left( l,0\right) \right)
& =0,\text{ \ \ }x\in\Pi_{+}^{\varepsilon}\left( 3l/2\right) \setminus
\Cap^{\varepsilon},\label{J16}\\
\varepsilon^{1/2}\left\vert \left[ \Delta,\chi_{\varepsilon}\left(
r\right) \right] \left( \widehat{Z}_{1}^{\prime}\left( x\right)
-Z_{1}^{\prime}\left( l,0\right) \right) \right\vert & \leq
c\varepsilon^{1/2}\left( \varepsilon^{-1}\left\vert \ln r\right\vert
+\varepsilon^{-2}r\left\vert \ln r\right\vert \right) \leq\nonumber\\
& \leq c\varepsilon^{1/2}\left\vert x_{1}\right\vert \left( \varepsilon
+r\right) ^{-2}\rho\left( x\right) ,\text{ }x\in\Cap^{\varepsilon
}.\nonumber
\end{align}
Finally, we mention that the support of the term $\mathcal{F}^{ima}$ in
(\ref{J6}) belongs to the rectangle $\left[ l+1/\varepsilon,2l+1/\varepsilon
\right] \times\left[ 0,1\right] ,$ see the first line of (\ref{J1}), where
the remainder $\widetilde{Z}_{1}^{\prime}\left( x\right) $ in (\ref{5222})
gets the exponential small order $O\left( e^{-\sqrt{3}\pi/\varepsilon
}\right) $, and hence
\begin{equation}
\left\vert \left[ \Delta,X_{\varepsilon}\right] \left( \mathfrak{Z}
^{in}\left( x\right) -\mathfrak{Z}^{mat}\left( x\right) \right)
\right\vert =\varepsilon^{1/2}\left\vert \left[ \Delta,X_{\varepsilon
}\right] \widetilde{Z}_{1}^{\prime}\left( x\right) \right\vert \leq
c\varepsilon^{1/2}e^{-\sqrt{3}\pi/\varepsilon}. \label{J17}
\end{equation}
It remains to consider discrepancies in the Neumann condition (\ref{2}). Since
the cut-off functions $X_{\varepsilon}$ and $\chi_{\infty}$ can be taken
dependent on the longitudinal coordinate $x_{1}$ only, the asymptotic solution
(\ref{J2}) satisfies the homogeneous Neumann condition everywhere on
$\partial\Pi_{+}^{0}$, except on the sides $\Upsilon^{\varepsilon}$ and
$\upsilon^{\varepsilon}$ of the rectangle (\ref{0}), cf. Sections
\ref{sect3.2} and \ref{sect3.3}. Furthermore, $Z_{1}^{0}$ does not depend on
$x_{1}$ and $Z_{1}^{\prime}$ is multiplied in (\ref{J4}) with the cut-off
function $\mathfrak{\chi}_{\varepsilon}$ in the radial variable $r.$ Thus,
$\partial_{1}\mathfrak{Z}^{\varepsilon}=0$ on the short side $\upsilon
^{\varepsilon}.$ Regarding the trace $\mathcal{G}^{\varepsilon}$ of
$\partial_{\nu}\mathfrak{Z}^{\varepsilon}=-\partial_{2}\mathfrak{Z}^{as}$ on
the long side $\Upsilon^{\varepsilon}$ we take the formulas
(\ref{51})-(\ref{52}) into account and, similarly to (\ref{J14}) and (\ref{J16}),
obtain
\begin{equation}
\left\vert \mathcal{G}^{\varepsilon}\left( x_{1},-\varepsilon\right)
\right\vert \leq c\varepsilon^{3/2}\left( \varepsilon+r\right) ^{-1}.
\label{J18}
\end{equation}
We emphasize that differentiation in $x_{2}$ eliminates $\ln r$ in the first
term of (\ref{J7}).
\subsection{Comparing the approximate and true solutions.\label{sect6.3}}
First of all, we observe that $\mathfrak{Z}^{as}\left( x\right)
=\mathfrak{Z}^{out}\left( x\right) $ as $x_{1}>2l$ by virtue of the
definition (\ref{J1}) of $X_{\varepsilon}$ and $\mathfrak{\chi}_{\infty}$.
Thus, in view of (\ref{31}) and (\ref{J2}), (\ref{J3}) the difference
$\mathcal{R}^{\varepsilon}=Z^{\varepsilon}-\mathfrak{Z}^{\varepsilon}$ loses
the incoming waves $w_{p}^{\varepsilon-}$ and falls into the space $W_{\gamma
}^{1}\left( \Pi_{+}^{\varepsilon}\right) _{out}$. Moreover, the
decomposition (\ref{KK1}) of $\mathcal{R}^{\varepsilon}\left( x\right) $
contains the coefficients $a_{+}^{\varepsilon}=\widehat{S}_{10}^{\varepsilon}$
and $b_{+}^{\varepsilon}=\widehat{S}_{11}^{\varepsilon}$ defined in
(\ref{S11}) and (\ref{S01}). The integral identity (\ref{K2}) with
$\sigma=-\gamma$ serving for $\mathcal{R}^{\varepsilon}$ involves the
functional
\begin{equation}
F^{\varepsilon}\left( v^{\varepsilon}\right) =\left( \mathcal{F}
^{\varepsilon},v^{\varepsilon}\right) _{\Pi_{+}^{\varepsilon}}-\left(
\mathcal{G}^{\varepsilon},v^{\varepsilon}\right) _{\Upsilon_{\varepsilon}}
\label{J20}
\end{equation}
where $\mathcal{F}^{\varepsilon}$ is given in (\ref{J6}) and $\mathcal{G}
^{\varepsilon}=-\partial_{\nu}\mathfrak{Z}^{\varepsilon}$. If we prove the
inclusion $F^{\varepsilon}\in W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon
}\right) ^{\ast}$, then the estimate (\ref{U18}) adjusted by the weighting
factor $\left( \pi^{2}-\lambda^{\varepsilon}\right) ^{1/4}=\varepsilon
^{1/2}\mu^{1/4}$ demonstrates that
\[
\left\vert \widehat{S}_{10}^{\varepsilon}\right\vert +\varepsilon
^{1/2}\left\vert \widehat{S}_{11}^{\varepsilon}\right\vert \leq c\left\Vert
F^{\varepsilon},W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right)
\right\Vert .
\]
We fix some test function $v^{\varepsilon}\in W_{\gamma}^{1}\left( \Pi
_{+}^{\varepsilon}\right) $. The classical one-dimensional Hardy inequality
\[
\int_{0}^{l}r^{-2}\left\vert \ln\frac{r}{l}\right\vert ^{-2}\left\vert V\left(
r\right) \right\vert ^{2}rdr\leq4\int_{0}^{l}\left\vert \frac{dV}{dr}\left(
r\right) \right\vert ^{2}rdr,\text{ \ \ }V\in C_{0}^{\infty}\left[
0,l\right) ,
\]
in a standard way, cf. \cite[Ch.1, \S 4]{MaNaPl}, leads to the relation
\begin{equation}
\left\Vert \rho^{-1}v^{\varepsilon};L^{2}\left( \Pi_{+}^{\varepsilon}\left(
2l\right) \right) \right\Vert ^{2}\leq c\left\Vert v^{\varepsilon}
;H^{1}\left( \Pi_{+}^{\varepsilon}\left( 2l\right) \right) \right\Vert
^{2}\leq c_{\gamma}\left\Vert v^{\varepsilon};W_{\gamma}^{1}\left( \Pi
_{+}^{\varepsilon}\right) \right\Vert ^{2} \label{hardy}
\end{equation}
where $\rho$ is the weight factor (\ref{J14}). Moreover, introducing the new
weight factor $\rho_{1}\left( x\right) =r\left( 1+\left\vert \ln
r\right\vert \right) ^{2}$, we derive the weighted trace inequality
\begin{align}
\int_{\Upsilon^{\varepsilon}}\rho_{1}^{-1}\left\vert v^{\varepsilon
}\right\vert ^{2}dx_{1} & =\int_{\Pi_{+}^{\varepsilon}\left( l\right)
}\dfrac{\partial}{\partial x_{2}}\left( \mathfrak{\chi}_{0}\rho_{1}
^{-1}\left\vert v^{\varepsilon}\right\vert ^{2}\right) dx\leq\nonumber\\
& \leq c\int_{\Pi_{+}^{\varepsilon}\left( l\right) }\left( \left\vert
\dfrac{\partial v^{\varepsilon}}{\partial x_{2}}\right\vert \rho_{1}
^{-1}\left\vert v^{\varepsilon}\right\vert +\left( 1+\dfrac{\partial
}{\partial x_{2}}\rho_{1}^{-1}\right) \left\vert v^{\varepsilon}\right\vert
^{2}\right) dx\leq\nonumber\\
& \leq c\int_{\Pi_{+}^{\varepsilon}\left( l\right) }\left( \left\vert
\nabla v^{\varepsilon}\right\vert ^{2}+\rho^{-2}\left\vert v^{\varepsilon
}\right\vert ^{2}\right) dx\leq c_{\gamma}\left\Vert v^{\varepsilon
};W_{\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) \right\Vert ^{2}.
\label{J22}
\end{align}
Here, we took into account that $\left\vert \nabla\rho_{1}\left( x\right)
^{-1}\right\vert \leq c\rho\left( x\right) ^{-2}$ and used a cut-off
function $\chi_{0}\in C^{1}\left( \mathbb{R}\right) $, $\chi_{0}\left(
x_{2}\right) =1$ for $x_{2}\leq1/3$ and $\chi_{0}\left( x_{2}\right) =0$
for $x_{2}\geq2/3$. The inclusion $F^{\varepsilon}\in W_{-\gamma}^{1}\left(
\Pi_{+}^{\varepsilon}\right) ^{\ast}$ is obvious because $\mathcal{F}
^{\varepsilon}$ has a compact support. To estimate the norm $\left\Vert
\mathcal{F}^{\varepsilon};W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right)
^{\ast}\right\Vert $ we apply inequalities obtained in the previous section.
Since $\gamma\in(0,\sqrt{3}\pi)$, the estimates (\ref{J17}) give
\[
\left\vert \left( \mathcal{F}^{ima},v^{\varepsilon}\right) _{\Pi
_{+}^{\varepsilon}}\right\vert \leq c\varepsilon^{1/2}e^{\left( \gamma
-\sqrt{3}\pi\right) /\varepsilon}\int_{l+1/\varepsilon}^{2l+1/\varepsilon
}\int_{0}^{1}e^{-\gamma x_{1}}\left\vert v^{\varepsilon}\left( x\right)
\right\vert dx\leq c\varepsilon^{3/2}\left\Vert v^{\varepsilon};W_{-\gamma
}^{1}\left( \Pi_{+}^{\varepsilon}\right) \right\Vert .
\]
By the formula (\ref{J9}), we have
\[
\left\vert \left( \mathcal{F}^{oma},v^{\varepsilon}\right) _{\Pi
_{+}^{\varepsilon}}\right\vert \leq c\varepsilon^{3/2}\left\Vert
v^{\varepsilon};L^{1}\left( \Pi_{+}^{\varepsilon}\left( 3l/2\right)
\right) \right\Vert \leq c\varepsilon^{3/2}\left\Vert v^{\varepsilon
};W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) \right\Vert .
\]
Recalling (\ref{J10}), the estimates (\ref{J12}), (\ref{J14}) and (\ref{J16}) yield
\begin{gather*}
\left\vert \left( \mathcal{F}^{inm},v^{\varepsilon}\right) _{\Pi
_{+}^{\varepsilon}}\right\vert \leq\left( c\varepsilon^{3/2}\int_{\Pi
_{\infty}\left( 3l/2\right) }e^{-x_{1}\sqrt{3}\pi}\left\vert v^{\varepsilon
}\left( x\right) \right\vert dx+\varepsilon^{1/2}\int_{\varpi_{+}
^{\varepsilon}}\dfrac{\left\vert x_{1}\right\vert \rho\left( x\right)
}{\left( \varepsilon+r\right) ^{2}}\left\vert v^{\varepsilon}\left(
x\right) \right\vert dx+\right. \\
+\left. \varepsilon^{1/2}\int_{\Cap^{\varepsilon}}\dfrac{\rho\left(
x\right) }{\left( \varepsilon+r\right) ^{2}}\left\vert v^{\varepsilon
}\left( x\right) \right\vert dx\right) \leq\\
\leq c(\varepsilon^{3/2}\left( \int_{3l/2}^{+\infty}e^{2\left( \gamma
-\sqrt{3}\pi\right) x_{1}}dx_{1}\right) ^{1/2}\left( \int_{\Pi
_{+}^{\varepsilon}\left( 3l/2\right) }e^{-2\gamma x_{1}}\left\vert
v^{\varepsilon}\left( x\right) \right\vert ^{2}dx\right) ^{1/2}+\\
+\varepsilon^{1/2}\left( \int_{\varpi_{+}^{\varepsilon}}\left\vert
x_{1}\right\vert ^{2}\dfrac{\left( \rho\left( x\right) \right) ^{4}
}{\left( \varepsilon+r\right) ^{4}}dx+\int_{\Cap^{\varepsilon}}
\dfrac{\left( \rho\left( x\right) \right) ^{4}}{\left( \varepsilon
+r\right) ^{4}}dx\right) ^{1/2}\left\Vert \rho^{-1}v^{\varepsilon}
;L^{2}\left( \Pi_{+}^{\varepsilon}\left( l\right) \right) \right\Vert
)\leq\\
\leq c\varepsilon^{3/2}\left( 1+\left\vert \ln\varepsilon\right\vert \right)
^{2}\left\Vert v^{\varepsilon};W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon
}\right) \right\Vert .
\end{gather*}
Finally, we derive from (\ref{J18}) and (\ref{J22}) the following estimate of
the last scalar product in (\ref{J20})
\[
\left\vert \left( \mathcal{G}^{\varepsilon},v^{\varepsilon}\right) _{\Upsilon
_{\varepsilon}}\right\vert \leq c\varepsilon^{3/2}\left( \int_{\Upsilon
_{\varepsilon}}\left( \varepsilon+r\right) ^{-2}\rho_{1}dx_{1}\right)
^{1/2}\left\Vert \rho_{1}^{-1/2}v^{\varepsilon};L^{2}\left( \Upsilon
_{\varepsilon}\right) \right\Vert \leq c\varepsilon^{3/2}\left( 1+\left\vert
\ln\varepsilon\right\vert \right) ^{3/2}\left\Vert v^{\varepsilon}
;W_{-\gamma}^{1}\left( \Pi^{\varepsilon}\right) \right\Vert .
\]
Collecting the obtained inequalities we conclude that the functional in
(\ref{J20}) meets the estimate
\begin{equation}
\left\Vert F^{\varepsilon};W_{-\gamma}^{1}\left( \Pi^{\varepsilon}\right)
^{\ast}\right\Vert \leq c\varepsilon^{3/2}\left( 1+\left\vert \ln
\varepsilon\right\vert \right) ^{2}. \label{J23}
\end{equation}
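For the reader's convenience, we sketch a standard proof of the one-dimensional
Hardy inequality used above. The substitution $t=1+\ln\left( l/r\right) $,
$W\left( t\right) =V\left( r\right) $ yields $dt=-dr/r$ and $\left\vert
dW/dt\right\vert ^{2}=r^{2}\left\vert dV/dr\right\vert ^{2}$, so that
\[
\int_{0}^{l}r^{-2}\left( 1+\ln\frac{l}{r}\right) ^{-2}\left\vert V\left(
r\right) \right\vert ^{2}rdr=\int_{1}^{+\infty}t^{-2}\left\vert W\left(
t\right) \right\vert ^{2}dt\leq4\int_{1}^{+\infty}\left\vert \frac{dW}
{dt}\left( t\right) \right\vert ^{2}dt=4\int_{0}^{l}\left\vert \frac{dV}
{dr}\left( r\right) \right\vert ^{2}rdr,
\]
where the middle estimate is the classical Hardy inequality applied to $W$ with
$W\left( 1\right) =V\left( l\right) =0$, extended by zero to $\left(
0,1\right) $.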
\subsection{Asymptotics of the augmented scattering matrix\label{sect6.4}}
We are in a position to formulate the main technical result of the paper. Since
the norm $||\mathcal{R}^{\varepsilon};W_{\gamma}^{1}\left( \Pi_{+}
^{\varepsilon}\right) _{out}||$ in the space with detached asymptotics
contains the coefficients $\widehat{S}_{10}^{\varepsilon}$ and $\widehat
{S}_{11}^{\varepsilon}$ in the representation (\ref{KK1}) of $\mathcal{R}
^{\varepsilon}$, estimates of the asymptotic remainders in (\ref{S01}) and
(\ref{S11}) follow directly from (\ref{J23}) and (\ref{U18}), (\ref{42}). A
similar outcome for $S_{00}^{\varepsilon}$ can be obtained by repeating word
for word the calculations in the previous section based on the formal asymptotics
(\ref{60}), (\ref{64}). We only mention that the discrepancy (\ref{Z02}) on
the small side $\upsilon^{\varepsilon}$ of the box $\varpi_{+}^{\varepsilon}$
can be considered as follows:
\begin{align*}
\left( 2\pi\right) ^{1/2}\left\vert \sin\left( \pi l\right) \int
_{-\varepsilon}^{0}v^{\varepsilon}\left( l,x_{2}\right) dx_{2}\right\vert
& \leq c\varepsilon^{1/2}\left( 1+\left\vert \ln\varepsilon\right\vert
\right) \left\Vert r^{-1/2}\left( 1+\left\vert \ln r\right\vert \right)
^{-1}v^{\varepsilon};L^{2}\left( \upsilon^{\varepsilon}\right) \right\Vert
\leq\\
& \leq c\varepsilon\left( 1+\left\vert \ln\varepsilon\right\vert \right)
\left\Vert v^{\varepsilon};W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon
}\right) \right\Vert ,
\end{align*}
where a weighted trace inequality of type (\ref{J22}) is applied.
\begin{theorem}
\label{TheoremASY}Remainders in the asymptotic forms (\ref{S11})-(\ref{S01})
enjoy the estimate
\begin{equation}
\left\vert \widehat{S}_{11}^{\varepsilon}\right\vert +\varepsilon
^{-1/2}\left\vert \widehat{S}_{01}^{\varepsilon}\right\vert +\left\vert
S_{00}^{\varepsilon}-1\right\vert \leq c\varepsilon\left( 1+\left\vert
\ln\varepsilon\right\vert \right) ^{2}. \label{est}
\end{equation}
\end{theorem}
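The first two bounds in (\ref{est}) can be displayed explicitly: combining the
estimate of the coefficients obtained in Section \ref{sect6.3} with (\ref{J23})
gives
\[
\left\vert \widehat{S}_{10}^{\varepsilon}\right\vert +\varepsilon^{1/2}
\left\vert \widehat{S}_{11}^{\varepsilon}\right\vert \leq c\left\Vert
F^{\varepsilon};W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) ^{\ast
}\right\Vert \leq c\varepsilon^{3/2}\left( 1+\left\vert \ln\varepsilon
\right\vert \right) ^{2},
\]
whence $\left\vert \widehat{S}_{11}^{\varepsilon}\right\vert \leq
c\varepsilon\left( 1+\left\vert \ln\varepsilon\right\vert \right) ^{2}$ and
$\varepsilon^{-1/2}\left\vert \widehat{S}_{10}^{\varepsilon}\right\vert \leq
c\varepsilon\left( 1+\left\vert \ln\varepsilon\right\vert \right) ^{2}$.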
We finally note that formulas (\ref{T3}), (\ref{T999}) are the only place in
the paper which deals with $S_{00}^{\varepsilon}$, and they require much less
accurate information.
\subsection{Dependence on $\bigtriangleup l$ and $\bigtriangleup\mu
$\label{sect6.5}}
Let $\varepsilon$ be fixed, small, and positive. We take $l=\pi k+\bigtriangleup
l$ and make the change of coordinates
\begin{equation}
x\rightarrow\mathbf{x}=\left( \mathbf{x}_{1},\mathbf{x}_{2}\right) =\left(
x_{1},x_{2}\right) ,\ \ \ \ \ \ \mathbf{x}_{1}=\left( 1-\chi_{k}\left(
x_{1}\right) \right) x_{1}+\chi_{k}\left( x_{1}\right) \left(
x_{1}-\bigtriangleup l\right) \label{J51}
\end{equation}
where $\chi_{k}$ is a smooth cut-off function, $\chi_{k}\left( x_{1}\right)
=1$ for $\left\vert x_{1}-\pi k\right\vert <\pi/3$ and $\chi_{k}\left(
x_{1}\right) =0$ for $\left\vert x_{1}-\pi k\right\vert >2\pi/3$. If
$\bigtriangleup l$ is small, this change is nonsingular. Moreover, it
transforms $\Pi_{l}^{\varepsilon}$ into $\Pi_{\pi k}^{\varepsilon}$ and turns
the Helmholtz operator $\bigtriangleup+\pi^{2}-\varepsilon^{2}\left(
\mu+\bigtriangleup\mu\right) $ into the second-order differential operator
$L^{\varepsilon}\left( \bigtriangleup l,\bigtriangleup\mu;\mathbf{x}
,\nabla_{\mathbf{x}}\right) $ whose coefficients depend smoothly on
$\bigtriangleup\mu$ and $\bigtriangleup l.$ Clearly, $L^{\varepsilon}\left(
0,0;\mathbf{x},\nabla_{\mathbf{x}}\right) =\bigtriangleup_{\mathbf{x}}
+\pi^{2}-\varepsilon^{2}\mu$. Using the Fourier method, we can rewrite the
element $S_{11}^{\varepsilon}=S_{11}^{\varepsilon}\left( \bigtriangleup
\mu,\bigtriangleup l\right) $ of the augmented scattering matrix as the
integral
\begin{equation}
S_{11}^{\varepsilon}\left( \bigtriangleup\mu,\bigtriangleup l\right)
=\alpha_{11}^{\varepsilon}\left( \bigtriangleup\mu\right) \int_{Q_{k}}
Z_{11}^{\varepsilon}\left( \bigtriangleup\mu,\bigtriangleup l;x\right) dx
\label{J53}
\end{equation}
over the rectangle $Q_{k}=\left( \pi\left( k+1\right) ,\pi\left(
k+2\right) \right) \times\left( 0,1\right) $ where $\mathbf{x}=x$
according to (\ref{J51}). Due to the general result in the perturbation theory
of linear operators, see, e.g., \cite{HilPhi, Kato}, the special solution
$Z_{11}^{\varepsilon}\left( x\right) =$ $Z_{11}^{\varepsilon}\left(
\bigtriangleup\mu,\bigtriangleup l;x\right) $ rewritten in the coordinates
$\mathbf{x}$ depends smoothly on $\left( \bigtriangleup\mu,\bigtriangleup
l\right) \in\overline{\mathbb{B}_{\rho}}.$ The coefficient $\alpha
_{11}^{\varepsilon}\left( \bigtriangleup\mu\right) $ in (\ref{J53}) is also
a smooth function whose exact form is not needed here. Thus, the element
(\ref{J53}) inherits this smooth dependence while the remainder $\widetilde
{S}_{11}^{\varepsilon}\left( \bigtriangleup\mu,\bigtriangleup l\right) $ in
the representation (\ref{S11}) gets the same property according to the formula
(\ref{58}) for $S_{11}^{0}\left( \bigtriangleup\mu,\bigtriangleup l\right) $
written in the variables (\ref{T1}).
Similar operations apply to $S_{01}^{\varepsilon}\left( \bigtriangleup
\mu,\bigtriangleup l\right) $ and $\widetilde{S}_{01}^{\varepsilon}\left(
\bigtriangleup\mu,\bigtriangleup l\right) .$
Finally, recalling our examination in Section \ref{sect5.5} and Theorem
\ref{TheoremASY}, we formulate the result.
\begin{proposition}
\label{PropositionANAL} The remainders in the asymptotic formulas (\ref{S11}),
(\ref{58}) and (\ref{S01}), (\ref{anti}) satisfy the inequality
\[
\left\vert \nabla_{\left( \bigtriangleup\mu,\bigtriangleup l\right)
}\widetilde{S}_{11}^{\varepsilon}\left( \bigtriangleup\mu,\bigtriangleup
l\right) \right\vert +\varepsilon^{-1/2}\left\vert \nabla_{\left(
\bigtriangleup\mu,\bigtriangleup l\right) }\widetilde{S}_{01}^{\varepsilon
}\left( \bigtriangleup\mu,\bigtriangleup l\right) \right\vert \leq
c\varepsilon\left( 1+\left\vert \ln\varepsilon\right\vert \right) ^{2}
,\text{ \ \ }\left( \bigtriangleup\mu,\bigtriangleup l\right) \in
\overline{\mathbb{B}_{\rho}}.
\]
\end{proposition}
\section{The uniqueness assertions\label{sect7}}
\subsection{Eigenvalues in the vicinity of the threshold $\pi^{2}.$\label{sect7.1}}
Let us adapt a trick from \cite[\S 7]{na489} for the box-shaped perturbation
(\ref{0}) and conclude with the uniqueness mentioned in Theorem
\ref{TheoremEX}.
Assume that there exists an infinitesimal sequence $\left\{ \varepsilon
_{k}\right\} _{k\in\mathbb{N}}$ such that the problem (\ref{9}), (\ref{10})
in the semi-infinite waveguide $\Pi_{l_{k}}^{\varepsilon}$ has two eigenvalues
$\lambda_{1}^{\varepsilon_{k}}$ and $\lambda_{2}^{\varepsilon_{k}}$ while
\begin{equation}
\varepsilon_{k}\rightarrow+0,\text{ \ \ }l_{k}\rightarrow l_{0}
>0,\ \ \ \ \ \ \ \ \lambda_{j}^{\varepsilon_{k}}=\pi^{2}+\widehat{\lambda}
_{j}^{\varepsilon_{k}}\in(0,\pi^{2}],\text{ \ \ }\widehat{\lambda}
_{j}^{\varepsilon_{k}}\rightarrow0,\text{ \ \ }j=1,2. \label{U0}
\end{equation}
In what follows we write $\varepsilon$ instead of $\varepsilon_{k}.$ The
corresponding eigenfunctions $u_{1}^{\varepsilon}$ and $u_{2}^{\varepsilon}$
are subject to the normalization and orthogonality conditions
\begin{equation}
\left\Vert u_{j}^{\varepsilon};L^{2}\left( \Pi_{+}^{\varepsilon}\left(
2l\right) \right) \right\Vert =1,\text{ \ \ }\left( u_{1}^{\varepsilon
},u_{2}^{\varepsilon}\right) _{\Pi_{+}^{\varepsilon}}=0, \label{U00}
\end{equation}
cf. (\ref{M0}). Repeating with evident changes our arguments in Section
\ref{sect5.3} we observe that the restrictions $u_{j0}^{\varepsilon}$ of
$u_{j}^{\varepsilon}$ onto $\Pi_{+}^{0}$ converge to $u_{j0}^{0}$ weakly in
$W_{-\gamma}^{1}\left( \Pi_{+}^{0}\right) $ and strongly in $L^{2}\left(
\Pi_{+}^{0}\left( 2l\right) \right) $. Furthermore, the limits satisfy the
formula (\ref{M4}) and the following integral identity, see (\ref{M5})
\begin{equation}
\left( \nabla u_{j0}^{0},\nabla v\right) _{\Pi_{+}^{0}}=\pi^{2}\left(
u_{j0}^{0},v\right) _{\Pi_{+}^{0}},\text{ \ \ }v\in C_{c}^{\infty
(\overline{\Pi_{+}^{0}}). \label{U1}
\end{equation}
Any solution in $W_{-\gamma}^{1}\left( \Pi_{+}^{0}\right) $ with $\gamma
\in\left( \beta_{1},\beta_{2}\right) $ of the homogeneous Neumann problem
(\ref{U1}) in the semi-strip $\Pi_{+}^{0}=\left( 0,+\infty\right)
\times\left( 0,1\right) $ is a linear combination of two bounded solutions
(\ref{491}) and (\ref{492})
\begin{equation}
u_{j}^{0}\left( x\right) =c_{j1}\cos\left( \pi x_{1}\right) +c_{j2}
\cos\left( \pi x_{2}\right) . \label{U2}
\end{equation}
Let us prove that $c_{11}=c_{21}=0$ in (\ref{U2}). Since the trapped mode
$u_{j}^{\varepsilon}$ has an exponential decay at infinity, the Green formula
in $\Pi_{\infty}\left( 3l/2\right) $ with it and the bounded function
$e^{\pm ix_{1}\sqrt{\lambda_{j}^{\varepsilon}}}$ assures that
\begin{equation}
\int_{0}^{1}e^{\pm ix_{1}\sqrt{\lambda_{j}^{\varepsilon}}}\left. \left(
\partial_{1}u_{j}^{\varepsilon}\left( x\right) \mp i\sqrt{\lambda
_{j}^{\varepsilon}}u_{j}^{\varepsilon}\left( x\right) \right) \right\vert
_{x_{1}=3l/2}dx_{2}=0. \label{U3}
\end{equation}
The local estimate (\ref{M5}) in $\varpi^{\prime}\ni\left( 3l/2,x_{2}\right)
,$ $x_{2}\in\left( 0,1\right) ,$ and formulas in (\ref{U0}), (\ref{U00})
allow us to compute the limit of the left-hand side of (\ref{U3}) and obtain
that
\begin{equation}
e^{\pm i3l\pi/2}\int_{0}^{1}\left( \dfrac{\partial u_{j0}^{0}}{\partial
x_{1}}\left( \frac{3}{2}l,x_{2}\right) \pm i\pi u_{j0}^{0}\left( \frac
{3}{2}l,x_{2}\right) \right) dx_{2}=0. \label{U4}
\end{equation}
Inserting (\ref{U2}) into (\ref{U4}), we see that $c_{j1}=0,$ indeed.
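In detail, the nonzero factor $e^{\pm i3l\pi/2}$ in (\ref{U4}) may be dropped
and, since $\int_{0}^{1}\cos\left( \pi x_{2}\right) dx_{2}=0$, the
$c_{j2}$-term of (\ref{U2}) does not contribute, so that (\ref{U4}) reduces to
\[
-\pi c_{j1}\sin\left( \frac{3}{2}\pi l\right) \pm i\pi c_{j1}\cos\left(
\frac{3}{2}\pi l\right) =0.
\]
Taking the sum and the difference of the relations with the upper and the lower
signs shows that $c_{j1}\sin\left( \frac{3}{2}\pi l\right) =c_{j1}\cos\left(
\frac{3}{2}\pi l\right) =0$, and $c_{j1}=0$ because $\sin$ and $\cos$ cannot
vanish simultaneously.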
\begin{remark}
\label{RemarkUU1} If $\widehat{\lambda}_{j}^{\varepsilon}>0$ and
$\lambda_{j}^{\varepsilon}>\pi^{2}$ in (\ref{U0}), one may use the Green
formula in $\Pi_{\infty}\left( 3l/2\right) $ with four bounded functions
$e^{\pm ix_{1}\sqrt{\lambda_{j}^{\varepsilon}}}$ and $e^{\pm ix_{1}
\sqrt{\lambda_{j}^{\varepsilon}-\pi^{2}}}\cos\left( \pi x_{2}\right) $. In
this way one derives the equalities $c_{j1}=c_{j2}=0$ (see \cite[\S 7]{na489}
for details) and concludes that a small neighborhood of the threshold $\pi
^{2}$ can contain only eigenvalues indicated in (\ref{U0}). The same
reasoning shows that the problem (\ref{9}), (\ref{10}) cannot have an eigenvalue
$\lambda^{\varepsilon}\rightarrow+0$ as $\varepsilon\rightarrow+0$.
\ \ $\boxtimes$
\end{remark}
Since $c_{j1}=0,$ the limit normalization (\ref{M4}) shows that $u_{j}
^{0}\left( x\right) =l^{-1/2}\cos\left( \pi x_{2}\right) ,$\ $j=1,2.$
Moreover, Theorem \ref{TheoremKA} (2) applied to the trapped mode
$u_{j}^{\varepsilon}\in H^{1}\left( \Pi_{+}^{\varepsilon}\right) \subset
W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) $ gives the formula
\begin{equation}
u_{j}^{\varepsilon}\left( x\right) =b_{j}^{\varepsilon}e^{-x_{1}\sqrt
{\pi^{2}-\lambda_{j}^{\varepsilon}}}\cos\left( \pi x_{2}\right)
+\widetilde{u}_{j}^{\varepsilon}\left( x\right) ,\ \ \ \ \ \ \left\vert
b_{j}^{\varepsilon}\right\vert +\left\Vert \widetilde{u}_{j}^{\varepsilon
};W_{\gamma}^{1}\left( \Pi_{+}^{\varepsilon}\right) \right\Vert \leq
c\left\Vert u_{j}^{\varepsilon};W_{-\gamma}^{1}\left( \Pi_{+}^{\varepsilon
}\right) \right\Vert , \label{U6bis}
\end{equation}
where $\gamma\in\left( \beta_{1},\beta_{2}\right) ,$ $c$ is independent of
$\varepsilon$ according to the content of Section \ref{sect5.5} and the waves
$w_{0}^{\varepsilon\pm}$ in (\ref{13}) and $v_{1}^{\varepsilon+}$ in
(\ref{27}) do not appear in the expansion of $u_{j}^{\varepsilon}$ due to the
absence of decay at infinity. Since the right-hand side of (\ref{U6bis}) is
uniformly bounded in $\varepsilon=\varepsilon_{k}$, $k\in\mathbb{N}$ (see
Section \ref{sect5.5} again), we have
\[
b_{j}^{\varepsilon}\rightarrow l^{-1/2},\text{ \ \ \ \ \ }\widetilde{u}
_{j0}^{\varepsilon}\rightarrow0\text{\ weakly in }W_{\gamma}^{1}\left(
\Pi_{+}^{\varepsilon}\right)
\]
along a subsequence of $\left\{ \varepsilon_{k}\right\} _{k\in\mathbb{N}}.$
Moreover, the last equality in (\ref{U00}) turns into
\[
0=\left( u_{1}^{\varepsilon},u_{2}^{\varepsilon}\right) _{\Pi_{+}
^{\varepsilon}}=b_{1}^{\varepsilon}\overline{b_{2}^{\varepsilon}}\int_{\Pi
_{+}^{\varepsilon}}\cos^{2}\left( \pi x_{2}\right)
e^{-x_{1}\Lambda_{\varepsilon}}dx+\left( u_{1}^{\varepsilon}-\widetilde
{u}_{1}^{\varepsilon},\widetilde{u}_{2}^{\varepsilon}\right) _{\Pi
_{+}^{\varepsilon}}+\left( \widetilde{u}_{1}^{\varepsilon},u_{2}
^{\varepsilon}\right) _{\Pi_{+}^{\varepsilon}}=\frac{1}{2}\Lambda
_{\varepsilon}^{-1}b_{1}^{\varepsilon}\overline{b_{2}^{\varepsilon}}+O\left(
1\right) .
\]
We multiply this relation with $\Lambda_{\varepsilon}=\sqrt{\pi^{2}
-\lambda_{1}^{\varepsilon}}+\sqrt{\pi^{2}-\lambda_{2}^{\varepsilon}}
\rightarrow0$ as $\varepsilon\rightarrow0$ and derive from (\ref{U7}) the
absurd formula $o(1)=b_{1}^{\varepsilon}\overline{b_{2}^{\varepsilon}}
\rightarrow l^{-1}.$ Thus, there can exist at most one eigenvalue indicated
in (\ref{U0}).
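In the last display, the term $\frac{1}{2}\Lambda_{\varepsilon}^{-1}
b_{1}^{\varepsilon}\overline{b_{2}^{\varepsilon}}$ comes from the product of
the main parts of the decompositions of $u_{1}^{\varepsilon}$ and
$u_{2}^{\varepsilon}$; up to a contribution absorbed into the bounded
remainder, it equals
\[
b_{1}^{\varepsilon}\overline{b_{2}^{\varepsilon}}\int_{\Pi_{+}^{0}}\cos
^{2}\left( \pi x_{2}\right) e^{-x_{1}\Lambda_{\varepsilon}}dx=b_{1}
^{\varepsilon}\overline{b_{2}^{\varepsilon}}\int_{0}^{+\infty}e^{-x_{1}
\Lambda_{\varepsilon}}dx_{1}\int_{0}^{1}\cos^{2}\left( \pi x_{2}\right)
dx_{2}=\frac{1}{2}\Lambda_{\varepsilon}^{-1}b_{1}^{\varepsilon}\overline
{b_{2}^{\varepsilon}}.
\]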
\subsection{Absence of eigenfunctions which are odd in $x_{1}.$\label{sect7.2
}
In Section \ref{sect1.3} we have changed the original problem (\ref{1}),
(\ref{2}) in $\Pi^{\varepsilon}$ for the Neumann problem (\ref{9}), (\ref{10})
in the half $\Pi_{+}^{\varepsilon}$ of the waveguide while assuming that an
eigenfunction is even in $x_{1}$. Replacing (\ref{10}) by the mixed boundary
conditions
\begin{equation}
\partial_{\nu}u^{\varepsilon}\left( x\right) =0,\text{ \ }x\in\partial
\Pi_{+}^{\varepsilon},\text{ \ \ }x_{1}>0,\text{ \ \ }u_{+}^{\varepsilon
}\left( x\right) =0,\text{ }x\in\partial\Pi_{+}^{\varepsilon},\text{
\ \ }x_{1}=0, \label{U8bis}
\end{equation}
we deal with the alternative, namely an eigenfunction is odd in $x_{1}$ and
therefore vanishes at $\Gamma^{\varepsilon}=\left\{ x:x_{1}=0,\ x_{2}
\in\left( -\varepsilon,1\right) \right\} .$ The variational formulation of
the problem (\ref{9}), (\ref{U8bis})
\[
\left( \nabla u_{+}^{\varepsilon},\nabla v^{\varepsilon}\right) _{\Pi
_{+}^{\varepsilon}}=\lambda_{+}^{\varepsilon}\left( u_{+}^{\varepsilon
},v^{\varepsilon}\right) _{\Pi_{+}^{\varepsilon}},\text{ \ }v^{\varepsilon
}\in H_{0}^{1}\left( \Pi_{+}^{\varepsilon};\Gamma^{\varepsilon}\right)
,
\]
involves a subspace of functions in $H^{1}\left( \Pi_{+}^{\varepsilon}\right) $ which
are null on $\Gamma^{\varepsilon}.$ Evident modifications of considerations in
Sections \ref{sect5} and \ref{sect6} adapt all our results to the mixed
boundary value problem (\ref{9}), (\ref{U8bis}). The only but important
difference is that the solutions (\ref{491}), (\ref{492}) of the limit Neumann
problem in the half-strip $\Pi_{+}^{0}=\mathbb{R}_{+}\times\left(
0,1\right) $ now turn into the following ones:
\[
u_{0}^{0}\left( x\right) =i\sin\left( \pi x_{1}\right) ,\text{ \ \ }
u_{1}^{0}\left( x\right) =x_{1}\cos\left( \pi x_{2}\right) .
\]
Thus, supposing that, for an infinitesimal sequence $\left\{ \varepsilon
_{k}\right\} _{k\in\mathbb{N}},$ the problem (\ref{9}), (\ref{U8bis}) in
$\Pi_{l_{k}+}^{\varepsilon}$ has an eigenvalue $\lambda_{1}^{\varepsilon_{k}}$
with the properties (\ref{U0}) at $j=1,$ we obtain that a non-trivial limit
$u_{10}^{0}$, cf. (\ref{U2}), of the corresponding eigenfunction
$u_{1}^{\varepsilon_{k}}$ becomes
\[
u_{10}^{0}\left( x\right) =c_{11}\sin\left( \pi x_{1}\right) +c_{12}
x_{1}\cos\left( \pi x_{2}\right) .
\]
Now, in contrast to Section \ref{sect7.1}, we may insert $u_{1}^{\varepsilon
_{k}}$ into the Green formula in $\Pi_{\infty}\left( 3l/2\right) $ together
with one of three bounded functions $e^{\pm i\sqrt{\lambda_{1}^{\varepsilon
_{k}}}x_{1}}$ and $e^{-\sqrt{\pi^{2}-\lambda_{1}^{\varepsilon_{k}}}x_{1}}
\cos\left( \pi x_{2}\right) $. Similarly to (\ref{U3}) and (\ref{U4}), these
possibilities allow us to conclude that $c_{11}=0$, $c_{12}=0$ and, hence,
$u_{10}^{0}=0$. The observed contradiction and Remark \ref{RemarkUU1} which
remains true for the problem (\ref{9}), (\ref{U8bis}), confirm the absence of
eigenvalues in a small neighborhood of the threshold $\lambda^{\varepsilon
}=\pi^{2}.$
\subsection{Absence of eigenvalues at a distance from the
threshold.\label{sect7.3}}
For any $\lambda\in\left( 0,\pi^{2}\right) $, the limit Neumann problem in
$\Pi_{+}^{0}=\mathbb{R}_{+}\times\left( 0,1\right) $ has the
solutions
\begin{align}
Z_{0}^{00}\left( \lambda,x\right) & =\left( 2k\left( \lambda\right)
\right) ^{-1/2}\left( e^{-ik\left( \lambda\right) x_{1}}+e^{ik\left(
\lambda\right) x_{1}}\right) ,\label{U11bis2}\\
Z_{1}^{00}\left( \lambda,x\right)  & =\left( 2k_{1}\left( \lambda\right)
\right) ^{-1/2}\left( \left( e^{k_{1}\left( \lambda\right) x_{1}}+ie^{-k_{1}
\left( \lambda\right) x_{1}}\right) \cos\left( \pi x_{2}\right) +i\left(
e^{k_{1}\left( \lambda\right) x_{1}}-ie^{-k_{1}\left( \lambda\right)
x_{1}}\right) \cos\left( \pi x_{2}\right) \right) =\nonumber\\
& =\left( 2k_{1}\left( \lambda\right) \right) ^{-1/2}\left( 1+i\right)
\left( e^{k_{1}\left( \lambda\right) x_{1}}+e^{-k_{1}\left( \lambda\right)
x_{1}}\right) \cos\left( \pi x_{2}\right) \nonumber
\end{align}
where $k\left( \lambda\right) =\sqrt{\lambda}$, $k_{1}\left( \lambda
\right) =\sqrt{\pi^{2}-\lambda}$ and, thus, the augmented scattering matrix
takes the form
\begin{equation}
S^{00}=\left(
\begin{array}
[c]{cc}
1 & 0\\
0 & i
\end{array}
\right) . \label{U12bis}
\end{equation}
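The second equality in the display above for $Z_{1}^{00}$ uses the elementary
identity
\[
\left( e^{s}+ie^{-s}\right) +i\left( e^{s}-ie^{-s}\right) =\left(
1+i\right) e^{s}+\left( 1+i\right) e^{-s}=\left( 1+i\right) \left(
e^{s}+e^{-s}\right) ,\text{ \ \ }s=k_{1}\left( \lambda\right) x_{1}.
\]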
To fulfill the criterion (\ref{33}), the perturbation $\varpi_{+}
^{\varepsilon}$ in the waveguide $\Pi_{+}^{\varepsilon}$ has to turn the
bottom right element of the matrix (\ref{U12bis}) into $-1$, which cannot
happen, e.g., for any $\lambda\in\left[ 0,\pi^{2}-c\sqrt{\varepsilon
}\right] ,$ $c>0$, and a small $\varepsilon$. The latter fact may be easily
verified by either constructing asymptotics in Sections \ref{sect3},
\ref{sect6}, or applying a perturbation argument as in Section \ref{sect5.5}.
Notice that we have succeeded in Sections \ref{sect3.2}, \ref{sect4.2} in
constructing the asymptotics (\ref{S11}) with the main term $S_{11}^{0}=-1$ because
the spectral parameter (\ref{42}) stays too close to the threshold and the
augmented scattering matrix in $\Pi_{+}^{0}$ is not continuous at $\lambda
=\pi^{2}$, cf. Section \ref{sect5.5}. In the case of the mixed boundary value
problem (\ref{9}), (\ref{U8bis}) evident changes in solutions (\ref{U11bis2})
give the matrix $S^{00}=\mathrm{diag}\left\{ -1,-i\right\} $ instead of
(\ref{U12bis}) but our final conclusion remains the same.
\subsection{Inferences on the uniqueness\label{sect7.4}}
The material of the previous three sections proves the last assertion in
Theorem \ref{TheoremEX}. The interval $\left( 0,\pi^{2}\right) $ where the
eigenvalue (\ref{T10}) is unique in the waveguide $\Pi_{l}^{\varepsilon}$ with
$l=l_{k}\left( \varepsilon\right) $ and a fixed $\varepsilon\in\left(
0,\varepsilon_{k}\right) $ can be enlarged up to $\left( 0,\pi^{2}
+c\sqrt{\varepsilon}\right) ,$ $c>0,$ due to Remark \ref{RemarkUU1}.
Moreover, enhancing our consideration in Section \ref{sect7.3} by dealing with
the exponential waves $e^{\pm x_{1}\sqrt{4\pi^{2}-\lambda}}\cos\left( 2\pi
x_{2}\right) $ and the augmented scattering matrix of size $3\times3$, cf.
\cite{na489}, confirms that any $\lambda\in\left[ \pi^{2}+c\sqrt{\varepsilon
},4\pi^{2}-c\sqrt{\varepsilon}\right) $ cannot be an eigenvalue as well. We
will discuss the higher thresholds $\pi^{2}k^{2}$ with $k=2,3,...$ in Section
\ref{sect8.2}.
To confirm Theorem \ref{TheoremUN}, we use a similar reasoning. To this
end, we recall the asymptotic formulas (\ref{S01}), (\ref{anti}) and
(\ref{est}) and observe that $S_{01}^{\varepsilon}$ cannot vanish for a small
$\varepsilon$ when the length parameter (\ref{T11}) stays outside the segment
\begin{equation}
\left[ \pi k-c\varepsilon\left( 1+\left\vert \ln\varepsilon\right\vert
\right) ^{2},\pi k+c\varepsilon\left( 1+\left\vert \ln\varepsilon\right\vert
\right) ^{2}\right] . \label{U13bis}
\end{equation}
If the length parameter (\ref{T11}) belongs to (\ref{U13bis}), the uniqueness of the
solution $\left( \bigtriangleup\mu,\bigtriangleup l\right) $ of the abstract
equation (\ref{T7}) which is equivalent to the criterion in Theorem
\ref{TheoremA} follows from the contraction principle.
\section{Available generalizations\label{sect8}}
\subsection{Eigenvalues in the discrete spectrum\label{sect8.1}}
Let us consider the mixed boundary value problem (\ref{1}), (\ref{7}). As in
Section \ref{sect1.3} we reduce it to the half (\ref{11}) of the perturbed
waveguide $\Pi^{\varepsilon}=\Pi\cup\varpi^{\varepsilon}$, cf. (\ref{9}),
(\ref{10})
\begin{align}
-\Delta u_{+}^{\varepsilon}\left( x\right) & =\lambda_{+}^{\varepsilon
}u_{+}^{\varepsilon}\left( x\right) ,\text{ \ \ }x\in\Pi_{+}^{\varepsilon
},\text{ }u_{+}^{\varepsilon}\left( x_{1},1\right) =0,\text{\ \ }
x_{1}>0,\label{X1}\\
\partial_{\nu}u_{+}^{\varepsilon}\left( x\right) & =0,\text{ \ }
x\in\partial\Pi_{+}^{\varepsilon},\text{ \ \ }x_{2}<1.\nonumber
\end{align}
If $\lambda^{\varepsilon}\in\left( 0,\pi^{2}/4\right) $ stays below the
continuous spectrum $\wp_{co}^{M}=\left[ \pi^{2}/4,+\infty\right) $ of the
problem (\ref{X1}), there is no oscillating wave but we deal with the
exponential waves
\[
v_{1/2}^{\varepsilon\pm}\left( x\right) =\left( k_{1/2}^{\varepsilon
}\right) ^{-1/2}e^{\pm k_{1/2}^{\varepsilon}x_{1}}\cos\left( \frac{\pi}
{2}x_{2}\right) ,\text{ \ \ }k_{1/2}^{\varepsilon}=\sqrt{\frac{\pi^{2}}
{4}-\lambda^{\varepsilon}}
\]
and, similarly to (\ref{27}), (\ref{30}), compose the linear combination
\[
w_{1/2}^{\varepsilon\pm}\left( x\right) =2^{-1/2}\left( v_{1/2}
^{\varepsilon+}\left( x\right) \mp v_{1/2}^{\varepsilon-}\left( x\right)
\right) .
\]
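Let us note, for the reader's convenience, that a direct substitution verifies these waves: since
\[
\Delta\left( e^{\pm k_{1/2}^{\varepsilon}x_{1}}\cos\left( \frac{\pi}{2}x_{2}\right) \right) =\left( \left( k_{1/2}^{\varepsilon}\right) ^{2}-\frac{\pi^{2}}{4}\right) e^{\pm k_{1/2}^{\varepsilon}x_{1}}\cos\left( \frac{\pi}{2}x_{2}\right) =-\lambda^{\varepsilon}e^{\pm k_{1/2}^{\varepsilon}x_{1}}\cos\left( \frac{\pi}{2}x_{2}\right) ,
\]
while the factor $\cos\left( \pi x_{2}/2\right) $ vanishes at $x_{2}=1$ and has a vanishing normal derivative at $x_{2}=0$, so that both boundary conditions in (\ref{X1}) are satisfied.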
The conditions (\ref{28}), (\ref{29}) and (\ref{19}) with $p,q=1/2$ are
satisfied and we may determine the augmented scattering matrix $S^{\varepsilon
}$ which now is a scalar. Theorem \ref{TheoremA} remains valid and, therefore,
the equality
\begin{equation}
S^{\varepsilon}=-1 \label{X2}
\end{equation}
states a criterion for the existence of a trapped mode. Constructing
asymptotics of $S^{\varepsilon}$ and solving the equation (\ref{X2}) yield the
relation (\ref{8}) for an eigenvalue in the discrete spectrum of the problems
(\ref{X1}) and (\ref{1}), (\ref{7}). Repeating arguments from Sections
\ref{sect5} - \ref{sect7} proves estimates of the asymptotic remainders as
well as the uniqueness of the eigenvalue $\lambda_{+}^{\varepsilon}\in
\wp_{d_{i}}^{M},$ however, for any $l>0.$ The latter conclusion requires us to
explain a distinction between the analysis of isolated and embedded eigenvalues.
The main difference is caused by the application of the criterion (\ref{33})
which in the case of the scalar $S^{\varepsilon}$ turns into just one
equation
\begin{equation}
\operatorname{Re}S^{\varepsilon}=-1 \label{X3}
\end{equation}
which is equivalent to (\ref{X2}) because $\left\vert S^{\varepsilon
}\right\vert =1.$ As a result we may satisfy (\ref{X3}) by choosing
$\bigtriangleup\mu$ and do not need the additional parameter $\bigtriangleup
l$ in (\ref{T1}) which was used in Section \ref{sect4} to solve the system
(\ref{T2}) of two transcendental equations. In other words, the absence of
oscillating waves crucially restricts a possible position of $S_{11}
^{\varepsilon}=S^{\varepsilon}$ to the unit circle on the complex plane while
the entry $S_{11}^{\varepsilon}$ in the previous unitary matrix
$S^{\varepsilon}$ of size $2\times2,$ see Section \ref{sect2.2}, can step
aside from $\mathbb{S}$ and a fine tuning by means of $\bigtriangleup l$ is
necessary to assure the equality (\ref{33}).
\subsection{Higher thresholds\label{sect8.2}}
A straightforward modification of our approach may be used for an attempt to
construct embedded eigenvalues near the thresholds $\pi^{2}k^{2},$ $k=2,3,...$
of the continuous spectrum $\wp_{co}$ of the problem (\ref{9}), (\ref{10}) in
$\Pi_{+}^{\varepsilon}.$ At the same time, the number of oscillating outgoing
waves at the threshold $\pi^{2}k^{2}$ equals $k$ and, therefore, the size of the
augmented scattering matrix becomes $\left( k+1\right) \times\left(
k+1\right) .$ In this case the fine tuning needs at least $k$ free
parameters, cf. \cite{na489, na546}, instead of only one $\bigtriangleup l$ in
Section \ref{sect4}. Additional parameters can be easily introduced when the
perturbed wall is a broken line like in fig. \ref{f6}, a, with $l,$ $L$ and
$k$. The amplification of the augmented scattering matrix does not affect the
criterion (\ref{33}) in Theorem \ref{TheoremA} and the analysis in Sections
\ref{sect3} and \ref{sect4}.
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
height=1.0179in,
width=4.5057in
]{CaDuNa6.eps}
\caption{Other type of piecewise smooth perturbations}
\label{f6}
\end{center}
\end{figure}
If the mirror symmetry with respect to the line $\left\{ x:x_{1}=0\right\} $
is denied, see fig. \ref{f6}, b, then we have to analyze the problem
(\ref{1}), (\ref{2}) in the intact waveguide $\Pi^{\varepsilon}$ where the
augmented scattering matrix grows in size even in the case $\lambda
^{\varepsilon}\leq\pi^{2}.$ In this sense the box-shaped perturbation is
optimal because it demonstrates all technicalities but reduces the
computational details to the necessary minimum. A preliminary assessment
predicts that embedded eigenvalues of the problem (\ref{9}), (\ref{10}) in
$\Pi_{+}^{\varepsilon}$ do not appear near any threshold $\pi^{2}k^{2}$ with
$k>1$ but we are not able to verify this fact rigorously.
\subsection{The Dirichlet boundary condition\label{sect8.3}}
All procedures described above can be applied to detect eigenvalues of the
Helmholtz equation (\ref{1}) in the quantum waveguide $\Pi^{\varepsilon}$, cf.
\cite{Sim}, with the Dirichlet condition (\ref{5}). However, the asymptotic
structures must be modified a bit due to the following observation. The
correction term $Z^{\prime}$ in the inner asymptotic expansion
\[
Z^{\varepsilon}\left( x\right) =\sin\left( \pi x_{2}\right) +\varepsilon
Z^{\prime}\left( x\right) +\dots,
\]
cf. (\ref{62}), must be found out from the mixed boundary value problem in the
semi-strip
\begin{align*}
-\Delta Z^{\prime}\left( x\right) & =\pi^{2}Z^{\prime}\left( x\right)
,\text{\ }x\in\Pi_{+}^{0},\text{ \ \ \ }-\partial_{1}Z^{\prime}\left(
0,x_{2}\right) =0,\text{\ }x_{2}\in\left( 0,1\right) ,\\
Z^{\prime}\left( x_{1},1\right) & =0,\text{\ }x_{1}>0,\text{
\ \ \ }Z^{\prime}\left( x_{1},0\right) =\pi^{2},\text{\ }x_{1}\in\left(
0,l\right) ,\text{ \ \ \ \ }Z^{\prime}\left( x_{1},0\right) =0,\text{\ }
x_{1}>l.
\end{align*}
According to the Kondratiev theory \cite{Ko}, see also \cite[Ch. 2]{NaPl}, a
solution of this problem admits the representation
\begin{equation}
Z^{\prime}\left( x\right) =\left( C^{0}+x_{1}C^{1}\right) \sin\left( \pi
x_{2}\right) +\widetilde{Z}^{\prime}\left( x\right) \label{X10}
\end{equation}
where $\widetilde{Z}^{\prime}\left( x\right) $ has the decay $O\left(
e^{-\sqrt{3}\pi x_{1}}\right) ,$ is smooth everywhere in $\overline{\Pi
_{+}^{0}}$ except at the point $P=\left( l,0\right) $ and behaves as
\begin{equation}
Z^{\prime}\left( x\right) =\pi\varphi+O\left( r\right) ,\text{
\ \ }r\rightarrow0, \label{X11}
\end{equation}
where $\left( r,\varphi\right) \in\mathbb{R}_{+}\times\left( 0,\pi\right)
$ is the polar coordinate system centered at $P$. The singularity in
(\ref{X11}) takes the function $\widetilde{Z}^{\prime}$ out of the Sobolev
space $H^{1}\left( \Pi_{+}^{0}\right) $. Nevertheless, the solution
$Z^{\prime}$ still lives in an appropriate Kondratiev space with a weighted norm
so that the coefficient $C^{1}$ in (\ref{X10}) can be computed by inserting
$Z^{\prime}\left( x\right) $ and $\sin\left( \pi x_{2}\right) $ into the
Green formula in $\Pi_{+}^{0}\left( R\right) $. To compensate for the
singularity, one may construct a boundary layer as a solution of the Dirichlet
problem in the unbounded domain (\ref{39}) in fig. \ref{f4}, a.
The above commentary exhibits all changes in the asymptotic analysis in
Section \ref{sect3.2}. As for the justification scheme in Section \ref{sect6},
it should be noted that, due to the Dirichlet condition (\ref{5}), the
inequality (\ref{hardy}) of Hardy's type takes the form
\[
\left\Vert r^{-1}v^{\varepsilon};L^{2}\left( \Pi_{+}^{\varepsilon}\left(
2l\right) \right) \right\Vert ^{2}\leq c\left\Vert v^{\varepsilon}
;H^{1}\left( \Pi_{+}^{\varepsilon}\left( 2l\right) \right) \right\Vert
^{2}
\]
and removes the factor $1+\left\vert \ln r\right\vert $ from the weight function
(\ref{J14}). As a result, the factor $\left( 1+\left\vert \ln\varepsilon
\right\vert \right) ^{2}$, occurring in (\ref{8}) and (\ref{est}) for the
Neumann case, disappears from the asymptotic remainder in (\ref{6}) for the
Dirichlet condition.
\bigskip
\section{Introduction}
\label{sect:intro}
We have to take binary and multiple systems into account when we study the formation of stars and exoplanets and the further dynamical evolution of stellar groups and systems of exoplanets. This fact is reflected in many works on this topic (e.g.~\citealt{2014prpl.conf..267R}). That is why the search for multiple stars is a relevant observational problem, especially in the context of studies on exoplanet systems (e.g.~\citealt{2019MNRAS.490.5088M}). The search for binary and multiple stars is a traditional topic for Pulkovo Observatory (e.g.~\citealt{2006Ap.....49..397G}). We present the results of the study on the visual binary star ADS~9346 (WDS~14410+5757, HD~129580, HIP~71782), which has been a part of the 26-inch Pulkovo refractor observational program since 1979.
This paper continues the series of studies on ADS~9346 (\citealt{2008AstL...34..405K, 2010AstL...36..204K}). In the first work, based on all observations from the WDS catalog, the radius of the curvature of the observed arc and the dynamical mass of the binary star with the elliptical orbit were calculated. The value of the dynamical mass ($4.2\pm1.6\,M_\odot$) is significantly different from that expected according to the photometric data ($\approx 2.2\,M_\odot$). In the second work, using the Apparent Motion Parameters (AMP) method, we obtained three possible orbits with small inclination to the plane of projection and orbital periods $\approx 2000$ years. However, to achieve agreement with the expected mass we had to increase the parallax from Hipparcos ($18.9\pm0.9$ mas, ESA 1997) up to 24~mas. Also, based on the Pulkovo photographic (1979-2005) and CCD (2003-2007) observations, the perturbation in the separation between the components was discovered, and the assumption was made that the system might have another companion with an orbital period of 4 years (or a multiple of 4).
Nowadays new data have become available: the series of Pulkovo CCD observations has been extended by 12 years; the parallaxes from Gaia~DR2~\citep{2018A&A...616A...1G} ($17.8573 \pm 0.0318$~mas for A-star and $17.8652 \pm 0.0384$~mas for B-star) and Gaia~EDR3~\citep{2021A&A...649A...1G} ($17.8383 \pm 0.0161$~mas for A-star and $17.8720 \pm 0.0429$~mas for B-star) are even smaller than the parallax published in Hipparcos. Therefore, we have to take another look at our previous results.
In this work we study the discovered perturbations in the relative motion and define the preliminary inner orbit of the possible companion.
\section{Observational data analysis}
\label{sect:obs_data_analysis}
Our study is based on the CCD observations (77 series) obtained with the 26-inch refractor of the Pulkovo Observatory during 2003-2019~(\citealt{2010AstL...36..349I, 2020AN....341..762I}), which are available online in the Pulkovo database\footnote{\url{http://izmccd.puldb.ru/vds.htm}} and the Strasbourg astronomical Data Center (CDS)\footnote{\url{https://cdsarc.unistra.fr/viz-bin/cat/J/AN/341/762}}.
The photographic observations from 1979 to 2005 (36 plates, 11 average annual positions) obtained with the same telescope supplement the CCD observations and, to a first approximation, reflect the orbital motion of the wide pair at this section of the orbit (\citealt{2014ARep...58...78K}). In the framework of this study we attempted to remeasure the astronegatives, which is described in Section~\ref{sect:astronegative_remeasurements}. The series of Pulkovo observations from 1979 to 2019 is presented in Fig.~\ref{fig:pul}.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{ms2021-0211fig1.pdf}
\caption{Observations obtained with the 26-inch refractor during 1979-2019: open circles --- photographic, crosses --- CCD, open stars --- Hipparcos, Gaia~DR2 and EDR3, solid line --- the ephemeris of the orbit of the wide pair. }
\label{fig:pul}
\end{figure}
The results of 102 observations from 1830 to 2015 are presented in the WDS catalog~(\citealt{2016yCat....102026M}). The first observation by John Herschel is very different from the following measurements, so we will use the observation by Wilhelm Struve, also obtained in 1830, as the first one. Besides, we omit one observation from 1908 as it is an obvious mistake. We use the observations from WDS to refine the orbit of the wide pair obtained with the AMP method.
Gaia~DR2 catalog~(\citealt{2018A&A...616A...1G}) contains observations of the radial velocity only for the A-star. According to CDS, the spectroscopic observations of the radial velocities for ADS~9346 are not presented in published works. However, there is one high-precision observation of the A-star obtained with the Hobby-Eberly telescope in 2006~(\citealt{2018A&A...615A..31D}). There are not any observations of the radial velocity of the component B in those works. The only long-term series of observations of both components is published in the paper (\citealt{2010AstL...36..204K}). The velocities were measured in 2000-2008 with the CORAVEL -- a correlation radial velocity spectrometer constructed by A.~Tokovinin~(\citealt{1987SvA....31...98T}), mounted on the 1-m telescope of the Crimean Astrophysical Observatory. An assumption was made that according to the Tokovinin criteria~(\citealt{1988Ap.....28..173T}) the A-star might have a companion. Fig.~\ref{fig:VR} shows the results of those measurements and their dependency over time. Lines represent linear trends corresponding to the radial accelerations for each component on the given interval. The observations cannot reflect the orbital motion of the external pair because the acceleration of the more massive component $\dot{V}_{rA}=-0.160\pm0.060$ km/s/year is greater than $\dot{V}_{rB}=-0.047\pm0.061$ km/s/year. This fact confirms the possibility of the A-star having an additional long-period companion.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{ms2021-0211fig2.pdf}
\caption{The radial velocity observations of the components: full circles -- the A-star velocity, open circles --- the B-star velocity, times sign -- Hobby-Eberly observation, a star -- Gaia observation. Lines show the linear trends for the A-star (solid line) and the B-star (dashes).}
\label{fig:VR}
\end{figure}
Gaia~DR2 contains the parallaxes of both components with the same accuracy ($0.03\,mas$). In the Gaia~EDR3 the error of the parallax is $0.016\,mas$ for the A-star and $0.043\,mas$ for the B-star. The parallax of the component B is measured with a lower accuracy and there are no observations of its radial velocity.
The fact that in the Gaia~EDR3 the parallax of the A-star is measured with a higher accuracy suggests that the orbital period of the possible companion should be significantly larger than the observational period of Gaia, the orbit should have a larger eccentricity, and during observations the companion should be located near the apoastron.
\section{Astronegative remeasurements}
\label{sect:astronegative_remeasurements}
As already mentioned in the Introduction, the appearance of new high-precision data motivated us to conduct this research: the denser series of Pulkovo CCD observations, the Gaia observations, the radial velocity measurements. The methodology of astronegative measurements has also improved in the recent decade. That is why it seems reasonable to use new resources and repeat the digitization of the astronegatives and the analysis of the scans to improve the accuracy of the photographic series for ADS~9346. The main goal of the procedure is to make sure that the quasi-periodic perturbation that takes place in the CCD series is also present in the old observations and is not a systematic effect associated with the CCD series only.
Previously we used the flatbed scanner to digitize the photographic plates with the images of ADS~9346. The main imperfection of this scanner is the unstable field of systematic errors from one scanning to another (and even during the process of digitization). In the past few years, the MDD measuring complex has been actively used at the Pulkovo Observatory (for detailed information see \citealt{2016AstL...42...41I, 2021A&A...645A..76K}).
\begin{figure}
\centering
\includegraphics[width=\textwidth]{ms2021-0211fig3.png}
\caption{Images of the visual binary ADS~9346 at the fragment of the plate 10356 obtained with the 26-inch refractor of Pulkovo Observatory on May 13, 1979.}
\label{fig:plate10356}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{ms2021-0211fig4.png}
\caption{The image structure of ADS~9346 at the plate 10356.}
\label{fig:plate10356_3D}
\end{figure}
Figures \ref{fig:plate10356} and~\ref{fig:plate10356_3D} show the fragment of the photoplate with the images of the ADS~9346 components and the structure of those images. This gives us a common understanding of the quality of the astronegatives. All measurements were made using the technique described in the work~(\citealt{2021A&A...645A..76K}). To define the orientation and the scale we used reference stars whose positions at the epoch of observations were taken from the Gaia~EDR3 catalog. In total we analyzed 37 astronegatives. The reference stars with the necessary signal-to-noise ratio were present at 27 of those astronegatives. Only these plates were analyzed later on. There were from 10 to 22 exposures taken on each plate, therefore we were able to estimate the intrinsic accuracy of the measurements: for $\rho$ -- $7\,mas$, for $\theta$ -- $0.05\,deg$. These estimates correspond to the level of accuracy of $1.5\,\mu m$ which is in agreement with the data from (\citealt{2021A&A...645A..76K}).
Fig.~\ref{fig:pul} represents the relative motion in the system ADS~9346. The initial series of photographic observations is systematically inconsistent with the series of CCD observations of $\rho$. It is difficult to come up with the reason for such a divergence. The photographic series were obtained with the filter ZhS-18. The response curves of the photographic emulsion and the CCD sensor differ significantly. This could lead to a shift due to atmospheric dispersion, since the color indexes of the components are very different. Therefore, for the purposes of our research, the correction to the photographic series was determined empirically. The CCD series are shifted by $30\,mas$ in $\rho$, which corresponds to $1\sigma$ for the photographic series. The series for $\rho(t)$ are characterized by noticeable quasi-periodic perturbations. They are more evident in the CCD observations and can be noticed in a similar range in the photographic data. This allows us to assume that the attempt to repeat the digitization process was successful. The existence of variations greater than the standard error of $\rho$ suggests that this can be associated with an invisible low-mass companion in the system ADS~9346. More strictly it can be shown using the Fourier analysis of the CCD series and of the sum of the photographic and the CCD series. Fig.~\ref{fig:power_sp} illustrates the power spectra. As can be seen, using all observations we can obtain a period estimate close to 20 years. We should note that these series are quite irregular, therefore later this estimate will be improved in a way more natural for the given task.
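The power spectra in Fig.~\ref{fig:power_sp} can be computed for such irregular series with a least-squares periodogram: for each trial period a sinusoid is fitted to the $\rho(t)$ series and the fraction of the variance it removes is taken as the power. A schematic implementation with synthetic data (the actual series and normalization may differ):

```python
import numpy as np

def ls_periodogram(t, y, periods):
    """Least-squares periodogram for an unevenly sampled series:
    power(P) = fraction of variance removed by the best-fitting
    model  a + b*cos(2*pi*t/P) + c*sin(2*pi*t/P)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    tss = np.sum((y - y.mean()) ** 2)            # total sum of squares
    power = []
    for P in periods:
        w = 2.0 * np.pi / P
        A = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = np.sum((y - A @ coef) ** 2)        # residual sum of squares
        power.append(1.0 - rss / tss)
    return np.array(power)

# Irregular epochs spanning 1979-2019 with a synthetic 20-yr perturbation
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(1979.0, 2019.0, 60))
rho = 5.0 + 0.01 * np.sin(2.0 * np.pi * t / 20.0)
periods = np.arange(5.0, 30.0, 0.25)
best = periods[np.argmax(ls_periodogram(t, rho, periods))]
```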
\begin{figure}
\centering
\includegraphics[width=\textwidth]{ms2021-0211fig5a.png}
\includegraphics[width=\textwidth]{ms2021-0211fig5b.png}
\caption{Power spectra for the series of $\rho(t)$. The top plot takes into account only the CCD observations. The bottom plot uses the whole series (photographic plus CCD observations).}
\label{fig:power_sp}
\end{figure}
\section{Photometric-based component masses}
\label{sect:photometric_masses}
The a priori mass estimate is one of the requirements of the AMP method. The high-precision photometric data allow us to hope for a sufficient accuracy of the mass determination. Taking into account the trigonometric parallax and the apparent magnitude estimates from Gaia~EDR3 along with the metallicity and the interstellar extinction data, we can easily put the components of ADS~9346 on the color-magnitude diagram. The PARSEC\footnote{\url{http://stev.oapd.inaf.it}}~(\citealt{2012MNRAS.427..127B}) tool makes it possible to plot the necessary isochrones (see Fig.~\ref{fig:cmd}). Then, we can interpolate and choose the isochrones and the points on those isochrones that best fit the positions of the components A and B on the diagram.
As a result of the spectrum analysis during the radial velocity determination for the A-star (\citealt{2018A&A...615A..31D}), the values of the key parameters for this component were obtained. For instance, the metallicity $[Fe/H]=0.22$, the apparent color index $(B-V) = 0.9$~mag. The spectrum gives the intrinsic color index $(B-V)_0=0.86$~mag, meaning that the interstellar reddening $E(B-V)=0.04$~mag. Thus, we can easily estimate the interstellar extinction and reddening for the Gaia~EDR3 bands using the data from \citep{2019ApJ...877..116W}. The obtained values are the following: $A_G = 0.105$~mag, $E(G_{BP}-G_{RP}) = 0.055$~mag. On the other hand, there are direct measurements of the interstellar extinction and reddening in Gaia~DR2~(\citealt{2018A&A...616A...1G}): $A_G = 0.249$~mag and $E(G_{BP}-G_{RP}) = 0.123$~mag. The formal standard errors for these values do not exceed $0.01$~mag.
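The two extinction estimates can be compared directly: with $E(B-V)=0.04$~mag, the values adopted from \citep{2019ApJ...877..116W} imply the effective ratios $A_G/E(B-V)\approx2.6$ and $E(G_{BP}-G_{RP})/E(B-V)\approx1.4$, while the direct Gaia~DR2 values are more than twice as large. A trivial check of this arithmetic (the ratios are implied by the numbers quoted above, not tabulated coefficients):

```python
# Ratios implied by the extinction values quoted in the text (E(B-V) = 0.04 mag)
E_BV = 0.04
A_G_wang, E_bprp_wang = 0.105, 0.055    # adopted from Wang & Chen (2019)
A_G_dr2, E_bprp_dr2 = 0.249, 0.123      # direct Gaia DR2 estimates

ratio_AG = A_G_wang / E_BV              # effective A_G / E(B-V), about 2.6
ratio_E = E_bprp_wang / E_BV            # effective E(GBP-GRP) / E(B-V), about 1.4
excess = A_G_dr2 / A_G_wang             # DR2 extinction is over twice larger
```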
The results of the mass and age determinations for the A-star are presented in the work~(\citealt{2018A&A...615A..31D}): $m_A = 1.08~M_\odot$, $\log(t)=9.96$. These values were obtained using the technique similar to the one described in the first paragraph of this section. However, the authors based their research on the trigonometric parallaxes from Hipparcos2~(\citealt{2007A&A...474..653V}): $p_t=18.03\pm0.69$~mas. They suggest that the estimated value $p_t=22.2\pm4.23$~mas corresponds to the spectral data, which is not in agreement with the weighted average from Gaia~EDR3 ($p_t=17.86\pm0.04$~mas) within the limits of $1\sigma$. We suppose that the independent measurements from Hipparcos2 and Gaia are trustworthy, meaning that the mass determination needs to be reconsidered. The credibility of the results becomes debatable due to the uncertainties in the interstellar extinction and reddening estimates mentioned above (see Fig.~\ref{fig:cmd}). The middle ground can be found in the mass estimates about $1.21\pm0.1$~$M_\odot$ for the A-star and $1.05\pm0.15$~$M_\odot$ for the B-star.
Thus, the analysis of the photometric and spectral data gives us 2.26~$M_\odot$ for the total mass and $\log(t)=9.76$ for age, which obviously corresponds better to the high metallicity. The age determination in the paper (\citealt{2018A&A...615A..31D}) seems too ambitious. A star that is older than 9~Gy with metallicity $[Fe/H] = 0.22$ is less likely to be located in the solar neighbourhood than a star with the same metallicity and the age that is a little over 5.7~Gy.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{ms2021-0211fig6.png}
\caption{PARSEC-isochrones in the vicinity of ADS~9346 at the color-magnitude diagram corresponding to different ages and the metallicity $[Fe/H] = 0.22$ (taking the interstellar extinction and reddening into account). Black circles show the positions of the components according to the interstellar extinction from the works~(\citealt{2018A&A...615A..31D, 2019ApJ...877..116W}). Grey circles show the positions according to the Gaia~DR2 data.}
\label{fig:cmd}
\end{figure}
\section{Calculation of the orbital parameters of the photocenter of the A-star plus the invisible low-mass companion}
\label{sect:a-component-sat-orb}
The observational results obtained with the 26-inch refractor of the Pulkovo Observatory during 1979-2019 are presented in Fig.~\ref{fig:pul}. As mentioned above, there is a perturbation in the relative motion with a period of around 20 years. To get more reliable orbital period estimates for the possible invisible companion, we added the Hipparcos and Gaia~DR2 data at the epoch 2015.5 and the Gaia~EDR3 data at the epoch 2016.0. The weights for the observations were calculated according to the standard measurement errors.
We cannot determine which star has a companion using the relative positions, but the fact that the A-star, being more massive, also has a larger acceleration than the B-star gives us a reason to assume that only the A-star can have the long-period companion. Moreover, the list of stars with possible low-mass companions from the work~(\citealt{2019A&A...623A..72K}) contains the A-star, which is another confirmation of the existence of the companion.
The relative motion of the binary star with one companion can be expressed with the following equations:
\begin{equation}
\label{eq:one}
x_i=x_0+\dot{x}(t_i-t_0)+\ddot{x}(t_i-t_0)^2/2+BX_{i}+GY_{i}
\end{equation}
\begin{equation}
\label{eq:two}
y_i=y_0+\dot{y}(t_i-t_0)+\ddot{y}(t_i-t_0)^2/2+AX_{i}+FY_{i}
\end{equation}
Here $x_i=\rho\sin\theta$, \quad $y_i=\rho\cos\theta$ are the apparent coordinates in the tangential plane, \quad $X_i=\cos{E_i}-e$, \quad $Y_i=\sqrt{1-e^2}\sin{E_i}$ are
the orbital coordinates calculated using the orbital period $P$, the periastron epoch $T$ and the eccentricity $e$ for each companion. $A$, $B$, $F$, $G$ are the unknown Thiele-Innes elements which define the geometrical elements of the Kepler orbit of the photocenter $a_{ph}, i, \omega, \Omega$. The unknown parameters $x_0$, $\dot{x}$, $\ddot{x}$, $y_0$, $\dot{y}$, $\ddot{y}$ reflect the relative motion of the center of mass of the B-star in relation to the center of mass of the system A-star plus the invisible companion. Using these parameters, we can calculate the apparent motion parameters and the orbit of the external pair.
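The geometrical elements are recovered from the Thiele-Innes constants by the standard relations $A+G=a(1+\cos i)\cos(\omega+\Omega)$, $B-F=a(1+\cos i)\sin(\omega+\Omega)$, $A-G=a(1-\cos i)\cos(\omega-\Omega)$, $B+F=-a(1-\cos i)\sin(\omega-\Omega)$. A sketch of this conversion (sign conventions for the node may differ between catalogues, and astrometry alone leaves $\omega$ and $\Omega$ ambiguous by a simultaneous $180^\circ$ shift):

```python
import numpy as np

def thiele_innes_to_campbell(A, B, F, G):
    """Convert Thiele-Innes constants to the Campbell elements
    (a, i, omega, Omega); angles are returned in degrees."""
    u = np.arctan2(B - F, A + G)                 # omega + Omega
    v = np.arctan2(-(B + F), A - G)              # omega - Omega
    k1 = np.hypot(A + G, B - F)                  # a * (1 + cos i)
    k2 = np.hypot(A - G, B + F)                  # a * (1 - cos i)
    a = 0.5 * (k1 + k2)
    inc = np.degrees(np.arccos((k1 - k2) / (k1 + k2)))
    omega = np.degrees(0.5 * (u + v)) % 360.0
    Omega = np.degrees(0.5 * (u - v)) % 360.0
    return a, inc, omega, Omega

# Thiele-Innes values of Table 1: a and i reproduce the tabulated elements,
# while (omega, Omega) come out up to the 180-degree ambiguity.
a, inc, om, Om = thiele_innes_to_campbell(2.39, -8.33, -3.41, -7.97)
```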
First of all we considered periods in the range from 10 to 15 years with a step size of 1 year, because the series of CCD observations is 16 years long. $T$ and $e$ were obtained using the brute-force search method with a step size of 0.1 year for $T$ and 0.1 for $e$. The criterion of the method is the minimal standard deviation $\sigma_{xy}=\sqrt{\sigma_x^2+\sigma_y^2}$. Here $\sigma_x=\sqrt{\sum{p_i(dx_i)^2}/n}$, $p_i$ are the weights corresponding to the observational errors, and $\sigma_y$ is calculated in the same way.
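The brute-force search over $(P, T, e)$ can be organized so that, for every trial triple, the eccentric anomalies are found from the Kepler equation, after which the model (\ref{eq:one}), (\ref{eq:two}) becomes linear in the remaining unknowns and is solved by weighted least squares. A schematic sketch of this procedure (illustrative grids and variable names; not the original code):

```python
import numpy as np

def eccentric_anomaly(M, e):
    """Solve the Kepler equation E - e*sin(E) = M (Newton's method with
    the robust starting point E = pi, valid for 0 <= e < 1)."""
    M = np.mod(M + np.pi, 2.0 * np.pi) - np.pi      # wrap M into (-pi, pi]
    sgn, Ma = np.sign(M), np.abs(M)
    E = np.full_like(Ma, np.pi)
    for _ in range(50):
        E -= (E - e * np.sin(E) - Ma) / (1.0 - e * np.cos(E))
    return sgn * E

def sigma_xy(t, x, y, w, P, T, e, t0=2010.0):
    """For a fixed trial (P, T, e) the model is linear in x0, xdot, xddot,
    B, G (and the analogous y-parameters); solve it by weighted least
    squares and return the weighted rms residual."""
    E = eccentric_anomaly(2.0 * np.pi * (t - T) / P, e)
    X, Y = np.cos(E) - e, np.sqrt(1.0 - e * e) * np.sin(E)
    tau = t - t0
    D = np.column_stack([np.ones_like(t), tau, tau ** 2 / 2.0, X, Y])
    sw = np.sqrt(w)
    res = 0.0
    for obs in (x, y):
        coef, *_ = np.linalg.lstsq(sw[:, None] * D, sw * obs, rcond=None)
        res += np.sum(w * (obs - D @ coef) ** 2)
    return np.sqrt(res / t.size)

def grid_search(t, x, y, w, periods, epochs, eccs):
    """Brute-force minimization of sigma_xy over the (P, T, e) grid."""
    trials = ((sigma_xy(t, x, y, w, P, T, e), (P, T, e))
              for P in periods for T in epochs for e in eccs)
    return min(trials, key=lambda s: s[0])[1]
```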
According to the given criterion, all orbits with periods from 11 to 15 years fit the observational data equally well. Solving the equations \ref{eq:one} and \ref{eq:two}, we can obtain the Thiele-Innes elements (that define the orbit of the companion) with great confidence. Hence, we use the independent observations of the radial velocities as a criterion: $t=2006.03, \Delta{V_{rA}}=V_{rA}-V_{rA\gamma}=-0.632\pm0.027$~km/s~\citep{2018A&A...615A..31D} and $t=2015.5, \Delta{V_{rA}}=-0.15\pm0.12$~km/s (Gaia~DR2), if we set $V_{rA\gamma}\approx-14.0$~km/s (see Fig.~\ref{fig:os_AB}).
Additionally, we compare the trend obtained using the ephemerides from 2000-2008 with the observed radial velocity acceleration of the A-star ($-0.16\pm0.06$ km/s/year).
We get the best solution (an agreement within the confidence intervals) for the orbit with $P=15\pm0.5$~years, $e=0.86\pm0.01$, $T=2009.5\pm0.1$. The corresponding ephemerides: $t=2006.0, \Delta{V_{rA}}=-0.637$~km/s; $t=2015.5, \Delta{V_{rA}}=-0.07$~km/s; $\dot{V}_{rA}=-0.11$~km/s/year.
The solution of the equations \ref{eq:one} and \ref{eq:two}, as well as the radial velocity semi-amplitude of the visible component, the semi-major axis of the relative orbit and the elements of the Kepler orbit corresponding to the Thiele-Innes elements, are presented in Table~\ref{tab:orb-ti}.
\begin{table}
\bc
\begin{minipage}[]{100mm}
\caption[]{The parameters of the relative motion of the photocenter of A-star plus low-mass companion at 2010.0.\label{tab:orb-ti}}\end{minipage}
\setlength{\tabcolsep}{2.5pt}
\small
\begin{tabular}{rr}
\hline\noalign{\smallskip}
$P_{in}$, yr & $15\pm0.5$ \\
$x_0$, mas & $5695.1\pm 1.3 $ \\
$y_0$, mas & $4960.3\pm 1.5 $ \\
$\dot{x}$, mas/yr & $5.07\pm0.14$ \\
$\dot{y}$, mas/yr & $-6.63\pm0.17$ \\
$\ddot{x}$, mas/yr$^2$ & $-0.1130\pm0.0302$ \\
$\ddot{y}$, mas/yr$^2$ & $0.0968\pm0.0345$ \\
\hline\noalign{\smallskip}
$T$, yr & $2009.5\pm0.1$ \\
$e$ & $0.86\pm0.01$ \\
\hline $B$, mas & $-8.33\pm1.30$ \\
$A$, mas & $2.39\pm1.51$ \\
$G$, mas & $-7.97\pm1.72$ \\
$F$, mas & $-3.41\pm2.01$ \\
\hline $a_{ph}$, mas & $11.55(-0.23)\pm2.46(2.45)$ \\
$i, ^\circ$ & $110.8(-15.8)\pm22.0(14.3)$ \\
$\omega, ^\circ$ & $314.7(+17.5)\pm32.1(28.6)$ \\
$\Omega, ^\circ$ & $266.1(+7.5)\pm21.1(19.9)$ \\
\hline\noalign{\smallskip}
$a$, AU & 7.0 \\
$K_{ph}$, km/s & 2.4 \\
\hline\noalign{\smallskip}
$M_1, M_\odot$ & 1.21 \\
$m_s, M_\odot$ & 0.129 \\
\hline $\sigma_x$, mas & 9.20 \\
$\sigma_y$, mas & 9.39 \\
$\sigma_{xy}$, mas & 13.14 \\
\noalign{\smallskip}\hline
\end{tabular}
\ec
\tablecomments{0.86\textwidth}{The Kepler elements correspond to the Thiele-Innes elements. The shift of the mean solution and its error are given in brackets. $a_{ph}$ --- the semi-major axis of the photocentric orbit, $a$ --- the semi-major axis of the relative orbit, $K_{ph}$ --- the semi-amplitude of the photocentric radial velocity. The minimal mass of the companion $m_s$ is calculated using the parallax from Gaia~EDR3 and the mass of the visible component ($M_1$).}
\end{table}
If we believe that the photocenter coincides with the visible component, we can calculate the minimal mass of the companion $m_s$ which depends on the parallax and the mass of the visible component $M_1$. Here we use the weighted average parallax of the A-star from Gaia~EDR3: $p_t = 17.838\pm0.016$~mas.
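The tabulated minimal mass can be reproduced from the photocentric orbit under the assumption that the companion contributes no light, so that the photocentric orbit coincides with the barycentric orbit of the A-star and $a_{ph}^{3}/P^{2}=m_{s}^{3}/(M_{1}+m_{s})^{2}$ (with $a_{ph}$ in AU, $P$ in years and the masses in solar units). A sketch of this computation:

```python
# Minimal companion mass from the photocentric orbit, assuming a dark
# companion:  a_ph^3 / P^2 = m_s^3 / (M1 + m_s)^2  (a_ph in AU, P in yr).
a_ph_mas, plx_mas, P, M1 = 11.55, 17.838, 15.0, 1.21
a_ph = a_ph_mas / plx_mas                 # photocentric semi-major axis, AU
lhs = a_ph ** 3 / P ** 2                  # astrometric mass function
m_s = 0.1                                 # fixed-point iteration for m_s
for _ in range(100):
    m_s = (lhs * (M1 + m_s) ** 2) ** (1.0 / 3.0)
# m_s converges to ~0.129 M_sun, in line with Table 1
```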
The residual errors of the Kepler orbital elements were obtained with the Monte-Carlo method in the following way. All residuals were randomly altered 50 times with a dispersion of 9~mas, which corresponds to the variations (see Tab.~\ref{tab:orb-ti}). We calculated the mean solution shifted in relation to the initial (model) solution. The errors were calculated in relation to the model solution. The shift and the errors of the mean solution are presented in brackets.
The graphical representation of the observations and the ephemerides of the right ascension $(dx)$, the declination $(dy)$ and the radial velocity ($\Delta{V_r}$) are shown in Fig.~\ref{fig:os_AB}.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{ms2021-0211fig7a.pdf}
\includegraphics[width=0.7\textwidth]{ms2021-0211fig7b.pdf}
\includegraphics[width=0.7\textwidth]{ms2021-0211fig7c.pdf}
\caption{The comparison between the ephemerides and the positional observations of the companion in right ascension (a), declination (b) and the observed radial velocity (c). For the positional observations: crosses --- CCD observations, open circles --- Gaia and Hipparcos. For the radial velocity observations: full circles --- CORAVEL, times signs --- Hobby-Eberly observations, a star --- Gaia~DR2.}
\label{fig:os_AB}
\end{figure}
There is a systematic difference between space observations and the observations with the 26-inch refractor which is significant for some stars. For this star the difference between the Gaia~DR2 system and the CCD observations is 6~mas in $\rho$. Therefore, it is worth mentioning that in the $dx(t)$ plot the changes in the Gaia~DR2 (2015.5) and the Gaia~EDR3 positions are in full agreement with the ephemerides for this area.
We can conclude that the effect is significant ($a\approx11$~mas), the inclination of the orbit is close to $90^\circ$, and in the tangential plane the companion moves along the line of the orbital node ($\Omega\approx270^\circ$), which is in agreement with the $dx(t)$ plot. There is still an unaccounted effect with a possible period of 4 years in the $dy(t)$ plot.
Fig.~\ref{fig:dev} shows the residuals after taking into account the companion with the orbital period of 15 years.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{ms2021-0211fig8a.pdf}
\includegraphics[width=\textwidth]{ms2021-0211fig8b.pdf}
\caption{The residuals (after taking into account the companion) of the right ascension (a) and the declination (b). Full circles --- CCD observations, open circles --- the average annual positions of the photographic observations, stars --- the Hipparcos and Gaia data.}
\label{fig:dev}
\end{figure}
The calculated orbit with the period of 15 years can be improved or changed when the radial velocity observations in the area of periastron become available.
\section{Estimation of the values of the AB pair orbital elements}
\label{sect:a-b-pair-orb}
One of the best ways to determine the initial orbit of a visual binary from a short arc in the framework of the two-body problem is the AMP method. It has been described many times since 1980 (e.g., \citealt{1980AZh....57.1227K, 2020AstL...46..555K}). After taking the motion of the companion into account (see Tab.~\ref{tab:orb-ti}), we calculated the apparent motion parameters at $t_0=2010.0$ (the separation between the components $\rho$ in arcseconds, the positional angle $\theta$ in degrees, the apparent relative motion $\mu$ in mas per year and the positional angle of the apparent motion $\psi$ in degrees) and their errors using the following simple expressions:
$$\rho=\sqrt{x^2+y^2}~, \qquad \theta=\arctan\frac{x}{y},$$
$$\mu=\sqrt{\dot{x}^2+\dot{y}^2}~, \qquad \psi=\arctan\frac{\dot{x}}{\dot{y}}.$$
$$\varepsilon_{\rho}=\sqrt{(x\varepsilon_{x})^2+(y\varepsilon_{y})^2}/\rho~, \qquad
\varepsilon_{\theta}=\sqrt{(y\varepsilon_{x})^2+(x\varepsilon_{y})^2}/\rho^2
$$
$$\varepsilon_{\mu}=\sqrt{(\dot{x}\varepsilon_{\dot{x}})^2+(\dot{y}\varepsilon_{\dot{y}})^2}/\mu~, \qquad
\varepsilon_{\psi}=\sqrt{(\dot{y}\varepsilon_{\dot{x}})^2+(\dot{x}\varepsilon_{\dot{y}})^2}/\mu^2
$$
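These expressions translate directly into code. The sketch below is illustrative: the input position and motion are rounded values close to solution 1 of Tab.~\ref{tab:pvd}, and the assumed coordinate errors are hypothetical.

```python
import numpy as np

def apparent_motion_parameters(x, y, vx, vy, ex, ey, evx, evy):
    """AMP and errors from the relative position (x, y) in arcsec and
    relative motion (vx, vy) in arcsec/yr; angles returned in degrees."""
    rho = np.hypot(x, y)
    theta = np.degrees(np.arctan2(x, y)) % 360.0   # PA measured from +y (north)
    mu = np.hypot(vx, vy)
    psi = np.degrees(np.arctan2(vx, vy)) % 360.0
    e_rho = np.hypot(x * ex, y * ey) / rho
    e_theta = np.degrees(np.hypot(y * ex, x * ey) / rho**2)
    e_mu = np.hypot(vx * evx, vy * evy) / mu
    e_psi = np.degrees(np.hypot(vy * evx, vx * evy) / mu**2)
    return rho, theta, mu, psi, e_rho, e_theta, e_mu, e_psi

# Rounded inputs close to solution 1 of Tab. 4; the coordinate errors
# (0.001'', 0.0002''/yr) are hypothetical illustration values.
out = apparent_motion_parameters(5.695, 4.961, 0.00504, -0.00659,
                                 0.001, 0.001, 0.0002, 0.0002)
```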
In addition, we obtain from observations the parallax, the relative radial velocity and the total mass of the system.
We use the weighted average of the parallax $p_t$ from Gaia~EDR3, which is $17.842\pm0.046$~mas.
We set $V_{rA\gamma}=-14$~km/s. Then, according to the radial velocity observations during 2000--2008, the relative radial velocity is $V_{rB}-V_{rA}\approx0\pm0.5$~km/s. For a star with an orbital period of several thousand years this value will not change significantly in 10 years.
The radius of curvature $\rho_c$ can be calculated using the following equation:
\begin{equation}
\label{eq:three}
\rho_c=\left|\frac{(\dot{x}^{2}+\dot{y}^{2})^{1.5}}{\dot{x}\ddot{y}-\dot{y}\ddot{x}}\right|
\end{equation}
However, the determination of $\rho_c$ with Eq.~(\ref{eq:three}) is affected by large uncertainties due to the large errors of $\ddot{x}$ and $\ddot{y}$ ($\rho_c=2.2''\pm4.0''$); as a result, we cannot determine the distance between the components $r$ at the moment $t_0$ using Eq.~(\ref{eq:four}):
\begin{equation}
\label{eq:four}
r^3=4\pi^2(M_A+M_B)\frac{\rho\rho_c}{\mu^2}\left|\sin(\psi-\theta)\right|
\end{equation}
That is why for all variations we obtain a set of elliptical orbits satisfying the following condition:
\begin{equation}
\label{eq:five} \frac{\rho}{p_t}\leq{r}\leq\frac{8\pi^2}{v^2}(M_A+M_B)
\end{equation}
Here $v$ is the space velocity of the B-star relative to the A-star.
From this set of orbits we choose the one that best fits all observations of the binary since its discovery. The solution is parameterized by the angle $$\beta=\pm\arccos\frac{\rho}{p_tr}$$ between the direction to the companion and its projection onto the tangential plane, as suggested in \cite{1996ARep...40..795K}.
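A minimal sketch of condition (\ref{eq:five}) and the angle $\beta$, assuming units of AU, years and solar masses (so that $4\pi^2$ is the Kepler constant). In the test values below the space velocity is taken as purely tangential ($v=\mu/p_t$), consistent with $\Delta V_r\approx0$; this simplification is ours.

```python
import numpy as np

FOUR_PI2 = 4.0 * np.pi**2  # Kepler constant in AU, yr, M_sun units

def r_bounds(rho, p_t, v, mass):
    """Allowed separation range for elliptical orbits (Eq. 5):
    rho/p_t <= r <= 8 pi^2 (M_A + M_B) / v^2, with rho and the parallax
    p_t in arcsec, v in AU/yr, mass in M_sun; r returned in AU."""
    return rho / p_t, 2.0 * FOUR_PI2 * mass / v**2

def beta_angle(rho, p_t, r):
    """Angle between the direction to the companion and its projection
    onto the tangential plane, in degrees."""
    return np.degrees(np.arccos(rho / (p_t * r)))

# rho and p_t as in Tab. 4 / Gaia EDR3; v = mu / p_t (tangential only).
lo, hi = r_bounds(7.5524, 0.017842, 0.0083 / 0.017842, 2.4)
```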
The total mass of all visible components and the companion equals $2.26~M_\odot$. Taking into account the influence of a possible additional companion, we treat the total mass of the system as unknown and correct its value using all available observations. When choosing the best fit, we do not compare the observations directly, but use the agreement between the Thiele-Innes elements (A, B, F, G) instead. They can be calculated from the geometrical orbital elements ($a, i, \omega, \Omega$) without observations, and also from the dynamical elements ($P, T, e$) combined with observations separated in time (\citealt{1983AZh....60.1208K}). The criterion of this method is the minimum of the function:
\begin{equation}
\label{eq:six} S=\sqrt{\Delta{A}^2+\Delta{B}^2+\Delta{F}^2+\Delta{G}^2}
\end{equation}
Here $\Delta{A}, \Delta{B}, \Delta{F}, \Delta{G}$ are the differences between the Thiele-Innes elements obtained in the two ways. As opposed to the direct comparison between the observations and the ephemerides, in this case we do not need to assign weights to particular heterogeneous observations, which inevitably brings in some subjectivity. However, it is important to have several reliable points spaced along the entire arc and near the middle of the observational sector.
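The comparison can be sketched as follows. The conversion from the geometric elements uses the standard Thiele-Innes definitions; the dynamical-side computation from $(P, T, e)$ and observations is omitted, and only the criterion of Eq.~(\ref{eq:six}) is shown.

```python
import numpy as np

def thiele_innes(a, inc, omega, Omega):
    """Thiele-Innes constants (A, B, F, G) from the geometric elements:
    a in arcsec, angles in degrees (standard definitions)."""
    i, w, W = np.radians([inc, omega, Omega])
    A = a * (np.cos(w) * np.cos(W) - np.sin(w) * np.sin(W) * np.cos(i))
    B = a * (np.cos(w) * np.sin(W) + np.sin(w) * np.cos(W) * np.cos(i))
    F = a * (-np.sin(w) * np.cos(W) - np.cos(w) * np.sin(W) * np.cos(i))
    G = a * (-np.sin(w) * np.sin(W) + np.cos(w) * np.cos(W) * np.cos(i))
    return np.array([A, B, F, G])

def S(abfg_geo, abfg_dyn):
    """Agreement criterion of Eq. (6): Euclidean distance between the two
    sets of Thiele-Innes constants."""
    return float(np.sqrt(np.sum((abfg_geo - abfg_dyn)**2)))

ti = thiele_innes(5.0, 0.0, 0.0, 0.0)  # face-on orbit: A = G = a, B = F = 0
```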
As the scatter of the observations from the WDS catalog is too large, we calculated an additional reference series through the middle of the observational sector. All observations before 1991 (Hipparcos) were split into 30--40~year intervals, and the apparent motion parameters ($\rho, \theta$) were calculated for the mean moment of each interval. In addition, the reference series includes the first observation conducted by Struve, the Hipparcos and Gaia data, and the AMP obtained from the photographic Pulkovo observations. Comparing the AMP-orbits with the reference series, we obtain the best formal solution.
The results --- AMP, $\beta_1$, and the total mass $M_{A+B}$, which correspond to the minimum of $S_1$, $\beta_2$ and $S_2$ with the fixed mass $M_{A+B}=2.4M_\odot$ --- are presented in Tab.~\ref{tab:pvd}. Two solutions are shown: the AMP obtained using the equations \ref{eq:one} and \ref{eq:two} (see Tab.~\ref{tab:orb-ti}) taking into account the companion with the period 15~years (solution 1) and homogeneous CCD observations without the companion (solution 2).
\begin{table}
\bc
\begin{minipage}[]{100mm}
\caption[]{The parameters of the relative motion of the A-B pair, calculated for different conditions\label{tab:pvd}}\end{minipage}
\setlength{\tabcolsep}{2.5pt}
\small
\begin{tabular}{ccrrrrrccrc}
\hline\noalign{\smallskip}
Var. & $t_0$ & $\rho, ''$ & $\theta, ^\circ$ & $\mu, ''$/yr
& $\psi, ^\circ$ & $\beta_1, ^\circ$ & $M_{A+B}, M_\odot$ &
$S_1, ''$ & $\beta_2, ^\circ$ & $S_2, ''$ \\
\hline\noalign{\smallskip}
1 & 2010.0 & 7.5524 & 48.945 & 0.0083 & 142.6 & 0 & 4.4 & 0.175 & 0 & 0.524 \\
& & $\pm0.0014$ & 0.010 & 0.0002 & 1.0 & 10 & & & 6 & \\
\hline\noalign{\smallskip}
2 & 2010.0 & 7.5560 & 48.989 & 0.0085 & 141.5 & $\pm14$ & 4.1 & 0.098 & 0 & 0.381 \\
& & $\pm0.0008$ & 0.004 & 0.0001 & 1.3 & 7 & & & 10 & \\
\noalign{\smallskip}\hline
\end{tabular}
\ec
\tablecomments{0.86\textwidth}{Two solutions are shown: 1 -- using the CCD observations with the companion ($P_{in}=15$~years); 2 -- using the CCD observations without the companion. The best fit corresponds to the minimum of $S_1$, if the mass
$M_{A+B}$ is undefined, or $S_2$, if $M_{A+B}=2.4~M_\odot$.}
\end{table}
\begin{table}
\bc
\begin{minipage}[]{100mm}
\caption[]{The orbital parameters of the A-B pair\label{tab:orbout}}\end{minipage}
\setlength{\tabcolsep}{2.5pt}
\small
\begin{tabular}{crrrrrrr}
\hline\noalign{\smallskip}
ref
&\multicolumn{2}{c}{This work,} & \multicolumn{2}{c}{This work,}
& \multicolumn{2}{c}{\cite{2019AstL...45...30I}} & \cite{2010AstL...36..204K}\\
&\multicolumn{2}{c}{ solution 1} & \multicolumn{2}{c}{solution 2}
& \multicolumn{2}{c}{ } & \\
\hline\noalign{\smallskip}
P, yr & 5435 & 2455 & 5725 & 2749 & 2547 & 4130 & 2644 \\
&$\pm519$ & 213 & 556 & 220 & 1031 & 2230 & 2239\\
\hline\noalign{\smallskip}
T, yr & 3612 & 3165 & 3170 & 3323 & 1528 & 1526 & 3100\\
&$\pm128$ & 103 & 599 & 814 & 398 & 954 & 2241\\
\hline\noalign{\smallskip}
e & 0.07 & 0.45 & 0.05 & 0.39 & 0.75 & 0.80 & 0.34\\
&$\pm.02$ & .04 & .04 & .04 & .20 & .18 & .22\\
\hline\noalign{\smallskip}
$a, ''$ & 7.38 & 5.24 & 7.64 & 5.56 & 11.48 & 15.57 & 5.78\\
&$\pm.47$ & .30 & .49 & .30 & 5.20 & 7.72 & 2.39\\
\hline\noalign{\smallskip}
$i, ^\circ$ & 0 & 0 & 0 & 14 & 71 & 74 & 19\\
&$\pm 17$ & 20 & 16 & 8 & 11 & 11 & 8\\
\hline\noalign{\smallskip}
$\Omega+\omega, ^\circ$ & 162 & 224 & 127 & 225 & - & - & 212\\
& $\pm128$ & 86 & 126 & - & - & - & 22 \\
\hline\noalign{\smallskip}
$\Omega, ^\circ$ & - & - & - & 142 & 16 & 18 & 159\\
& $\pm$- & - & - & 43 & 38 & 61 & - \\
\hline\noalign{\smallskip}
$\omega, ^\circ$ & - & - & - & 83 & 273 & 283 & 53\\
&$\pm$ - & - & - & 36 & 18 & 20 & - \\
\hline\noalign{\smallskip}
$M_{A+B}$, $M_\odot$ & 2.4 & 4.2 & 2.4 & 4.1 & 41 & 38 & 4.8 \\
\hline $\sigma_\rho$, mas & 113.5 & 112.4 & 112.8 & 112.3 & 114.0 & 112.6 & 122.2 \\
$\sigma_\tau$, mas & 131.2 & 131.1& 130.5 & 130.4 & 131.8 & 132.4 & 132.4 \\
\noalign{\smallskip}\hline
\end{tabular}
\ec
\tablecomments{0.86\textwidth}{For the orbits determined in this paper, $\Delta{V_r}=0$~km/s is fixed. Total masses for all orbits correspond to the parallax from Gaia~EDR3 (17.8425~mas). $\omega$ and $\Omega$ can differ by $180^\circ$. Due to the small inclinations of the orbits relative to the tangential plane, in 2010 we could obtain only $\omega+\Omega$. Now, using the Gaia data, we can separate these parameters.}
\end{table}
We obtained the orbital elements for both variants with the expected total mass $2.4\,M_\odot$ and the mass corresponding to the minimum of function S. The results are presented in Tab.~\ref{tab:orbout}.
For comparison we also show the orbits from \cite{2019AstL...45...30I} with periods of 2547~years (without weights) and 4130~years (with weights), as well as the orbit obtained in \cite{2010AstL...36..204K} from the homogeneous photographic observations and the AMP at the epoch 1990.0, for which the obtained total mass was also too high. Total masses for all orbits correspond to the parallax from Gaia~EDR3. Within the limits of errors, all three AMP-orbits agree well, with masses exceeding $4 M_\odot$.
The errors of the AMP-orbits are determined by the errors of the initial data. The errors of the elements obtained in 2010 are too large due to the large parallax error used at that time.
Since $\Delta{V_r}\approx0.0$~km/s, the location of the node can be determined only to within $180^\circ$. Since $\beta$ is small, we can conclude that the plane of the inner orbit is oriented close to the tangential plane. A small inclination is characteristic of all AMP-orbits. With the expected mass $2.4 M_\odot$ we obtain almost circular orbits in the tangential plane.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{ms2021-0211fig9a.pdf}
\includegraphics[width=0.7\textwidth]{ms2021-0211fig9b.pdf}
\includegraphics[width=0.7\textwidth]{ms2021-0211fig9c.pdf}
\caption{The comparison between the orbits of the external pair AB and observations. Crosses --- WDS observations, open squares --- the reference series, full and open circles --- the Pulkovo photographic and CCD observations, a triangle --- the first observation by Struve, stars --- the Hipparcos and Gaia~EDR3 observations, bold full line --- the orbit with 2.4$M_\odot$ (var.~2), bold dashes --- the orbit with 4.1$M_\odot$, thin full line --- the orbit from \cite{2019AstL...45...30I} with weights, thin dashes --- without weights.}
\label{fig:out}
\end{figure}
Since the time span of the observation series is much longer than the orbital period of the possible companion, the effect of its motion is smeared out and does not affect the calculation of the AMP. Taking our preliminary orbit into account does not improve the agreement with observations, but it does not distort the result either. The new orbits obtained in this work do not contradict the previous studies. Fig.~\ref{fig:out} presents observations and ephemerides of radically different orbits.
\section{Discussion}
\label{sect:discussion}
The question of the presence of the low-mass companion can be finally resolved only with additional observations. Let us consider all currently known arguments confirming or refuting the presence of the low-mass companion.
\begin{enumerate}
\item Signs of a companion for the A-star were obtained using the positional data but are in agreement with the independent radial velocity observations as well.
\item In Gaia~EDR3 there is a parameter that characterizes the quality of the astrometric observations and their agreement with the model of linear motion of the photocenter of a star, the PSF, and other effects. This parameter is \texttt{astrometric\_excess\_noise\_sig}, or $D$ as in the original paper~(\citealt{2012A&A...538A..78L}).
A significant excess of the residual variance over the sum of the squared errors of individual effects ($D>2$) might be caused by different reasons or their combination. For instance, the images of bright stars are often overexposed. Another reason that increases the value of $D$ is the difference between the real motion of the star and the linear (5-parameter) model. In the case of ADS~9346, $D=12.7$ for the A-star and $D=142.8$ for the B-star. The impact a companion with separation less than $0.2''$ might have on the total brightness is likely insignificant. Therefore, the large value of the parameter $D$ for the ADS~9346 components might be associated with overexposure, as the majority of stars surrounding ADS~9346 with magnitudes in the range from $7^m$ to $8.5^m$ have $D$ of the same order. Thus, we can conclude that the additional Gaia~EDR3 data limit the mass of a possible companion to 0.5~$M_\odot$; otherwise, the parameter $D$ would be larger (the difference in magnitudes between the A-star and its companion would be $5^m$).
\item Another argument confirming that the A-star has a companion follows from the analysis of the spectral energy distribution (SED) based on the photometric observations conducted within different observational programs in different spectral ranges from Gaia to IRAS (Gaia~DR2, Gaia~EDR3, 2MASS~\citep{2006AJ....131.1163S}, WISE~\citep{2010AJ....140.1868W}, IRAS~\citep{2015A&C....10...99A}). The SEDs of the components of ADS~9346 are presented in Fig.~\ref{fig:ir-excess}. As can be seen from the figure, there is an IR excess for the A-star at 100~$\mu m$ (obtained by IRAS). According to \cite{2007ApJ...658.1289T}, this can be considered an indication of duplicity. This emission is generated by dust clouds near a relatively old star. The analysis performed by Trilling and coauthors~(\citealt{2007ApJ...658.1289T}) shows that stable large-scale dust structures near stars like the A-component of ADS~9346 can exist in regions of stable resonances in the case of stellar multiplicity.
\end{enumerate}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{ms2021-0211fig10a.png}
\includegraphics[width=\textwidth]{ms2021-0211fig10b.png}
\caption{SED for the components of ADS~9346 (top --- for the A-star, bottom --- for the B-star). The data are from Gaia~DR2, Gaia~EDR3, 2MASS, WISE, IRAS.}
\label{fig:ir-excess}
\end{figure}
\section{Conclusions}
\label{sect:conclusions}
The main result of our study is the confirmation of perturbation in the relative motion of A-B components of ADS~9346 based on observations conducted with the 26-inch Pulkovo refractor. The observed variations can be explained by the existence of a low-mass (0.129~$M_\odot$) companion that revolves around the A-star with the period of 15 years. The orbit of the companion is highly eccentric, the passage of the periastron in 2009 coincided with active observations with the 26-inch refractor, but there were no radial velocity observations that could confirm or disprove our conclusion. We calculated the preliminary orbit of the photocenter of the A-star plus the invisible companion consistent with the available radial velocity observations.
The presence of a companion is indirectly confirmed by the IR excess in the emission of the A-star. For stars older than 5~Gyr, dust radiation at a wavelength of 100~$\mu m$ can be caused by dust structures confined in regions of stable resonances.
Using the AMP method, we calculated new orbits of the external pair with a total mass given according to estimates based on photometric data and a mass obtained in best agreement with astrometric observations.
There are two possible reasons why these mass estimates differ:
\begin{itemize}
\item The accuracy of astrometric observations collected over 200 years is low. An orbit that is in agreement with all observations can correspond to almost any mass (see Tab.~\ref{tab:orbout} and Fig.~\ref{fig:os_AB}). The Thiele-Innes elements cannot be strictly determined on short arcs, so the formal best solution (4.1~$M_\odot$) is only one of the possible solutions. However, it contradicts other data. That is why we give two solutions: one with the formally obtained mass and one with the expected mass.
\item There is still a possibility that component B has another short-period companion which we cannot detect from our observations. Indirect evidence for such a scenario is that, while there is a radial velocity measurement for component A in Gaia, there is none for component B; the parallax error of component B in Gaia~EDR3 is 3 times greater than that of component A; and, as can be seen from Fig.~\ref{fig:cmd}, component B deviates strongly from the isochrone toward the blue side. All this gives reason to believe that the properties of this star do not fit the formal model.
\end{itemize}
The AMP method allows the consistent use of observational results obtained by different techniques and therefore leads to more reliable results, as confirmed by the comparison with the orbits published in \cite{2019AstL...45...30I}. For the bulk determination of orbits for statistical studies, formal methods are acceptable, but on short arcs they cannot give a reliable result. For the study of individual stars, the AMP method is preferable, supplemented by agreement with all available photometric data and radial velocity measurements.
\normalem
\begin{acknowledgements}
The reported study was funded by RFBR according to the research projects 19-02-00843~A and 20-02-00563~A and with partial support of the grant 075-15-2020-780 of the Government of the Russian Federation and the Ministry of Higher Education and Science.
This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.
\end{acknowledgements}
\bibliographystyle{raa}
\section{Introduction}
Hydrodynamics is important because it provides a connection between
microscopic models of particle interactions and experimentally observable
behavior. Its experimental significance is due to the fact that
hydrodynamics describes the evolution of fundamental quantities, including
local mass, momentum and energy density which are important in a variety of
applications from microfluidics to cosmology. At a microscopic level, these
quantities are ``slow'' variables which evolve on a timescale which is well
separated from faster, microscopic variables so that the effect of the
latter can be adequately encapsulated in the various transport coefficients
appearing in the hydrodynamic description. Recently, much interest has
centered on the hydrodynamics of granular media which are characterized by a
loss of energy during collisions between the grains\cite%
{GranularPhysicsToday}, \cite{GranularGases}, \cite{GranularGasDynamics}.
The simplest microscopic model of granular materials consists of hard-sphere
grains which lose a fixed fraction $\varepsilon $ of the part of the kinetic
energy associated with the longitudinal, or normal, component of the
velocities at the moment of collision while the transverse components of the
velocities of the colliding particles are unchanged. The transport
properties of this model system, which will be referred to as a ''simple
granular gas'' below, have been worked out in some detail\cite%
{DuftyGranularTransport},\cite{HCS_Mix}. However, there is interest in more
complex models in which the fractional energy loss $\varepsilon $ is itself
a function of the normal kinetic energy as such models are apparently more
realistic\cite{Brilliantov},\cite{Poshel},\cite{McNamara}. Furthermore,
there are a wealth of phenomena, such as endo- and exo-thermic chemical
reactions, in which kinetic energy gets converted to some other form and
thus couples, e.g., chemical reactions and hydrodynamics. A good example is
sonoluminescence\cite{SomnoReview},\cite{Bubbles}, where classical
hydrodynamics is often used, with some success\cite{Gaspard}, to try to
understand the complex processes taking place under extreme conditions.
Recently, the possible scattering laws for dissipative collisions consistent
with conservation of momentum and angular momentum have been discussed and
formally exact expressions for the balance equations of mass, momentum,
energy and species have been formulated and the Enskog approximation
discussed\cite{LutskoJCP}. The purpose of the present work is to show that
the Chapman-Enskog expansion\cite{McLennan} can be applied to that kinetic
theory for the case of a one-component fluid in $D$ dimensions with an
arbitrary model for the normal energy loss and reduced to a relatively
simple form thereby providing convenient expressions for the transport
coefficients covering this entire class of models. These results will
therefore include as special cases the known results for elastic
hard-spheres in 2 and 3 dimensions\cite{Gass},\cite{McLennan} and for the
simple granular gas in 3 dimensions\cite{DuftyGranularTransport} as well as
such interesting models as those with velocity-dependent coefficient of
restitution\cite{Brilliantov},\cite{Poshel},\cite{McNamara} for which the
Enskog equation has not previously been solved.
The emphasis on instantaneous interactions is due to their unique
properties. Instantaneous hard-core interactions can be described in the
Enskog approximation which is a finite density approximation known to give
reasonable results for moderately dense fluids\cite{McLennan} and which can
even describe hard-core solids\cite{KDEP}. For most other interactions, only
the Boltzmann description, a low density approximation, is available. The
fact that the Enskog kinetic model is applicable to solids means that there
is scope for application to dense granular systems, which might involve
jamming, as well as to extreme conditions such as occur during
sonoluminescence. Thus, despite its artificial nature, the hard-core model
is an important tool in understanding the real world.
In Section II of this paper, the possible scattering laws, exact balance
laws and consequent Enskog equations are reviewed. The general form of the
Chapman-Enskog expansion is also discussed. The zeroth-order Chapman-Enskog
result is just the exact description of a spatially homogeneous system and
is discussed in Section III. For an equilibrium system, this is the Maxwell
distribution but when there is energy loss, the fluid cools and the
resulting homogeneous but non-stationary state is known as the Homogeneous
Cooling State (HCS). Section III formulates the description of the HCS in
terms of a generating function formalism and gives all information needed to
calculate the simplest corrections to the Maxwell distribution for arbitrary
energy-loss models. In Section IV, the Chapman-Enskog expansion is extended
to first order which is sufficient to get the Navier-Stokes transport
properties. The general formalism is illustrated by application to the
simple granular gas in $D$ dimensions. The paper concludes with a discussion
of the approximations made and comparison to more complete calculations.
\section{Theoretical Background}
\subsection{Kinetic theory with energy loss}
We consider a collection of particles having mass $m$ and hard sphere
diameter $\sigma $ which interact via instantaneous collisions. The position
and velocity of the $i$-th particle will be denoted by $\overrightarrow{q}%
_{i}$ and $\overrightarrow{v}_{i}$, respectively, and the combined phase
variable $\left( \overrightarrow{q}_{i},\overrightarrow{v}_{i}\right) $ will
sometimes be denoted as $x_{i}$. The only scattering law allowing for energy
loss that is still consistent with conservation of total momentum and
angular momentum\cite{LutskoJCP} is that two particles having relative
velocity $\overrightarrow{v}_{12}=\overrightarrow{v}_{1}-\overrightarrow{v}%
_{2}$ prior to collision must have relative velocity
\begin{equation}
\overrightarrow{v}_{12}^{\prime }=\overrightarrow{v}_{12}-\widehat{q}%
_{12}\left( \overrightarrow{v}_{12}\cdot \widehat{q}_{12}+sgn\left(
\overrightarrow{v}_{12}\cdot \widehat{q}_{12}\right) \sqrt{\left(
\overrightarrow{v}_{12}\cdot \widehat{q}_{12}\right) ^{2}-\frac{4}{m}\delta E%
}\right)
\end{equation}%
after the collision, where $\widehat{q}_{12}$ is a unit vector pointing from
the center of the first atom to the center of the second atom. The energy
loss is $\delta E$ which we allow in general to be a function of the normal
relative velocity%
\begin{equation}
\delta E=\Delta \left( \overrightarrow{v}_{12}\cdot \widehat{q}\right) .
\end{equation}%
and the center of mass velocity $\overrightarrow{V}_{12}=\overrightarrow{v}%
_{1}+\overrightarrow{v}_{2}$ is unchanged. It is easy to confirm that the
change of energy upon collision is%
\begin{equation}
\frac{1}{2}mv_{1}^{\prime 2}+\frac{1}{2}mv_{2}^{\prime 2}=\frac{1}{2}%
mv_{1}^{2}+\frac{1}{2}mv_{2}^{2}-\delta E.
\end{equation}%
The simple granular fluid model is based on the energy loss function%
\begin{equation}
\Delta \left( x\right) =\left( 1-\alpha ^{2}\right) \frac{m}{4}x^{2}
\end{equation}%
where the constant $\alpha $ is the coefficient of restitution. More complex
models involve a velocity dependent $\alpha $ while other choices of energy
loss function, involving e.g. a thresholds for energy loss, would be
appropriate for chemical reactions. In the following, it is convenient to
introduce the momentum transfer operator $\widehat{b}$ defined for any
function of the velocities $g\left( \overrightarrow{v}_{1},\overrightarrow{v}%
_{2};t\right) $ by%
\begin{equation}
\widehat{b}g\left( \overrightarrow{v}_{1},\overrightarrow{v}_{2};t\right)
=g\left( \overrightarrow{v}_{1}^{\prime },\overrightarrow{v}_{2}^{\prime
};t\right) .
\end{equation}%
In some applications, the energy loss may not occur for all collisions but
rather might be a random occurrence. We will therefore also consider
throughout a somewhat generalized model in which for any particular
collision, the energy loss function is randomly chosen from a set of
possible functions $\left\{ \Delta _{a}\left( x\right) \right\} $ with
probability $K_{a}\left( \widehat{q}_{12}\cdot \overrightarrow{v}%
_{12}\right) $ which may itself, as indicated by the notation, depend on the
dynamic variable $\widehat{q}_{12}\cdot \overrightarrow{v}_{12}$. A simple
case would be one in which a fixed fraction $K_{1}=1-p$ of collisions are
elastic, with energy loss function $\Delta _{1}\left( x\right) =0$, while the
remainder, occurring with probability $K_{2}=p$, are inelastic with energy loss $%
\Delta _{2}\left( x\right) \neq 0$. In any case, it is assumed that $%
\sum_{a}K_{a}\left( x\right) =1$ for all $x$. The momentum transfer operator
for the type $a$ collisions will be written as $\widehat{b}_{a}$.
The kinetic theory, Liouville equation and the Enskog approximation, for
arbitrary energy loss function has been discussed in ref. \cite{LutskoJCP}.
The one body distribution function $f\left( \overrightarrow{q}_{1},%
\overrightarrow{v}_{1};t\right) $, giving the probability to find a particle
at position $\overrightarrow{q}_{1}$ with velocity $\overrightarrow{v}_{1}$
at time $t$, satisfies, in the Enskog approximation, the kinetic equation%
\begin{equation}
\left( \frac{\partial }{\partial t}+\overrightarrow{v}_{1}\cdot \frac{%
\partial }{\partial \overrightarrow{q}_{1}}\right) f\left( x_{1};t\right) =J%
\left[ f,f\right]
\end{equation}%
where the shorthand notation $x_{1}=\left( \overrightarrow{q}_{1},%
\overrightarrow{v}_{1}\right) $ is used. The collision operator is%
\begin{equation*}
J\left[ f,f\right] =-\int dx_{2}\;\overline{T}_{-}\left( 12\right) \chi
\left( \overrightarrow{q}_{1},\overrightarrow{q}_{2};\left[ n\right] \right)
f\left( \overrightarrow{q}_{1},\overrightarrow{v}_{1};t\right) f\left(
\overrightarrow{q}_{2},\overrightarrow{v}_{2};t\right)
\end{equation*}%
where $\chi \left( \overrightarrow{q}_{1},\overrightarrow{q}_{2};\left[ n%
\right] \right) $ is the local equilibrium pair distribution function which
is in general a functional of the local density. The binary collision
operator $\overline{T}_{-}\left( 12\right) $ is
\begin{equation}
\overline{T}_{-}\left( 12\right) =\left[ \sum_{a}J_{a}\left( \overrightarrow{%
v}_{1},\overrightarrow{v}_{2}\right) \left( \widehat{b}_{a}\right)
^{-1}K_{a}\left( \widehat{q}_{12}\cdot \overrightarrow{v}_{12}\right) -1%
\right] \Theta \left( -\overrightarrow{v}_{12}\cdot \overrightarrow{q}%
_{12}\right) \delta \left( q_{12}-\sigma \right) \overrightarrow{v}%
_{12}\cdot \widehat{q}_{12}
\end{equation}%
where $\Theta \left( x\right) $ is the step function, $\left( \widehat{b}%
_{a}\right) ^{-1}$ is the inverse of the momentum exchange operator and $%
J_{a}$ is the Jacobian of the inverse collision%
\begin{equation}
J_{a}\left( \overrightarrow{v}_{1},\overrightarrow{v}_{2}\right) =\left|
\frac{\partial \left( \widehat{b}_{a}\overrightarrow{v}_{1},\widehat{b}_{a}%
\overrightarrow{v}_{2}\right) }{\partial \left( \overrightarrow{v}_{1},%
\overrightarrow{v}_{2}\right) }\right| ^{-1}.
\end{equation}%
It will not be necessary to work much with this complicated operator as most
calculations can make use of its simpler adjoint $T_{+}(12)$ defined for
arbitrary functions of the phase variables $A\left( x_{1},x_{2}\right) $ and
$B\left( x_{1},x_{2}\right) $ by%
\begin{equation}
\int dx_{1}dx_{2}\;A\left( x_{1},x_{2}\right) \overline{T}_{-}\left(
12\right) B\left( x_{1},x_{2}\right) =-\int dx_{1}dx_{2}\;B\left(
x_{1},x_{2}\right) T_{+}\left( 12\right) A\left( x_{1},x_{2}\right)
\end{equation}%
so that%
\begin{equation}
T_{+}\left( 12\right) =\Theta \left( -\overrightarrow{v}_{12}\cdot
\overrightarrow{q}_{12}\right) \delta \left( q_{12}-\sigma \right) \left( -%
\overrightarrow{v}_{12}\cdot \widehat{q}_{12}\right) \left[
\sum_{a}K_{a}\left( \widehat{q}_{12}\cdot \overrightarrow{v}_{12}\right)
\widehat{b}_{a}-1\right] .
\end{equation}
\subsection{Hydrodynamic fields and balance equations}
The hydrodynamic fields are the local mass density $\rho \left(
\overrightarrow{r},t\right) $, the local velocity field $\overrightarrow{u}(%
\overrightarrow{r},t)$ and the local temperature field $T\left(
\overrightarrow{r},t\right) $. They are defined in terms of the distribution
by%
\begin{eqnarray}
\rho \left( \overrightarrow{r},t\right) &=&mn\left( \overrightarrow{r}%
,t\right) =m\int d\overrightarrow{v}_{1}\;f\left( \overrightarrow{r},%
\overrightarrow{v}_{1};t\right) \label{fields} \\
\rho \left( \overrightarrow{r},t\right) \overrightarrow{u}(\overrightarrow{r}%
,t) &=&m\int d\overrightarrow{v}_{1}\;\overrightarrow{v}_{1}f\left(
\overrightarrow{r},\overrightarrow{v}_{1};t\right) \notag \\
\frac{D}{2}n\left( \overrightarrow{r},t\right) k_{B}T\left( \overrightarrow{r%
},t\right) &=&\frac{1}{2}m\int d\overrightarrow{v}_{1}\;V_{1}^{2}f\left(
\overrightarrow{r},\overrightarrow{v}_{1};t\right) \notag
\end{eqnarray}%
where $n\left( \overrightarrow{r},t\right) $ is the local number density and
$D$ is the number of dimensions. In the third equation $\overrightarrow{V}%
_{1}=\overrightarrow{v}_{1}-\overrightarrow{u}(\overrightarrow{r},t)$. Their
time evolution follows from that of the distribution and is given by\cite%
{LutskoJCP}%
\begin{eqnarray}
\frac{\partial }{\partial t}n+\overrightarrow{\nabla }\cdot \left(
\overrightarrow{u}n\right) &=&0 \label{balance} \\
\frac{\partial }{\partial t}\rho \overrightarrow{u}+\overrightarrow{\nabla }%
\cdot \left( \rho \overrightarrow{u}\overrightarrow{u}\right) +%
\overrightarrow{\nabla }\cdot \overleftrightarrow{P} &=&0 \notag \\
\left( \frac{\partial }{\partial t}+\overrightarrow{u}\cdot \overrightarrow{%
\nabla }\right) T+\frac{2}{Dnk_{B}}\left[ \overleftrightarrow{P}:%
\overrightarrow{\nabla }\overrightarrow{u}+\overrightarrow{\nabla }\cdot
\overrightarrow{q}\right] &=&\frac{2}{Dnk_{B}}\xi \notag
\end{eqnarray}%
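As a quick consistency check of the definitions in eq.(\ref{fields}), the following SymPy sketch (illustrative only, and not part of the derivation) verifies that the velocity moments of a three-dimensional Maxwellian with $\overrightarrow{u}=0$, so that $\overrightarrow{V}_{1}=\overrightarrow{v}_{1}$, reproduce the number density and the temperature:

```python
import sympy as sp

# Illustrative check of eq. (fields) for a D = 3 Maxwellian with u = 0:
# the zeroth moment returns the number density n and the kinetic-energy
# moment returns (3/2) n k_B T.
n, m, kB, T, V = sp.symbols('n m k_B T V', positive=True)
fM = n * (m / (2 * sp.pi * kB * T)) ** sp.Rational(3, 2) \
     * sp.exp(-m * V**2 / (2 * kB * T))

# radial integration in three dimensions: d^3V -> 4 pi V^2 dV
norm = sp.integrate(4 * sp.pi * V**2 * fM, (V, 0, sp.oo))
kin = sp.integrate(sp.Rational(1, 2) * m * V**2 * 4 * sp.pi * V**2 * fM,
                   (V, 0, sp.oo))

assert sp.simplify(norm - n) == 0
assert sp.simplify(kin - sp.Rational(3, 2) * n * kB * T) == 0
```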
The pressure tensor is the sum of two contributions $\overleftrightarrow{P}=%
\overleftrightarrow{P}^{K}+\overleftrightarrow{P}^{C}$ with the kinetic part
being
\begin{equation}
\overleftrightarrow{P}^{K}\left( \overrightarrow{r},t\right) =m\int d%
\overrightarrow{v}_{1}\;f_{l}\left( \overrightarrow{r},\overrightarrow{v}%
_{1},t\right) \overrightarrow{V}_{1}\overrightarrow{V}_{1}, \label{flux1}
\end{equation}%
and the collisional contribution being%
\begin{eqnarray}
\overleftrightarrow{P}^{C}\left( \overrightarrow{r},t\right) &=&-\frac{m}{4V}%
\sigma \sum_{a}\int dx_{1}dx_{2}\;\widehat{q}_{12}\widehat{q}_{12}\left(
\widehat{q}_{12}\cdot \overrightarrow{v}_{12}\right) \delta \left(
q_{12}-\sigma \right) \Theta \left( -\widehat{q}_{12}\cdot \overrightarrow{v}%
_{12}\right) \label{flux2} \\
&&\times \chi \left( \overrightarrow{q}_{1},\overrightarrow{q}_{2};\left[ n%
\right] \right) f\left( x_{1};t\right) f\left( x_{2};t\right) K_{a}\left(
\widehat{q}_{12}\cdot \overrightarrow{v}_{12}\right) \int_{0}^{1}dy\;\delta
\left( \overrightarrow{r}-y\overrightarrow{q}_{1}-\left( 1-y\right)
\overrightarrow{q}_{2}\right) \notag \\
&&\times \left( -\overrightarrow{v}_{12}\cdot \widehat{q}_{12}-\mathrm{sgn}\left(
\overrightarrow{v}_{12}\cdot \widehat{q}_{12}\right) \sqrt{\left(
\overrightarrow{v}_{12}\cdot \widehat{q}_{12}\right) ^{2}-\frac{4}{m}\Delta
_{a}\left( \widehat{q}_{12}\cdot \overrightarrow{v}_{12}\right) }\right) .
\notag
\end{eqnarray}%
Similarly, the heat flux has a kinetic contribution%
\begin{equation}
\overrightarrow{q}^{K}\left( \overrightarrow{r},t\right) =\frac{1}{2}m\int d%
\overrightarrow{v}_{1}\;f\left( \overrightarrow{r},\overrightarrow{v}%
_{1},t\right) \overrightarrow{V}_{1}V_{1}^{2} \label{flux3}
\end{equation}%
and a collisional contribution%
\begin{eqnarray}
\overrightarrow{q}^{C}\left( \overrightarrow{r},t\right) &=&-\frac{m}{4V}%
\sigma \sum_{a}\int dx_{1}dx_{2}\;\widehat{q}_{12}\left( \widehat{q}%
_{12}\cdot \overrightarrow{v}_{12}\right) \delta \left( q_{12}-\sigma
\right) \Theta \left( -\widehat{q}_{12}\cdot \overrightarrow{v}_{12}\right)
\label{flux4} \\
&&\times \chi \left( \overrightarrow{q}_{1},\overrightarrow{q}_{2};\left[ n%
\right] \right) f\left( x_{1};t\right) f\left( x_{2};t\right) K_{a}\left(
\widehat{q}_{12}\cdot \overrightarrow{v}_{12}\right) \int_{0}^{1}dy\;\delta
\left( \overrightarrow{r}-y\overrightarrow{q}_{1}-\left( 1-y\right)
\overrightarrow{q}_{2}\right) \notag \\
&&\times \frac{1}{2}\left( \overrightarrow{V}_{1}+\overrightarrow{V}%
_{2}\right) \cdot \widehat{q}_{12}\left( -\overrightarrow{v}_{12}\cdot
\widehat{q}_{12}-\mathrm{sgn}\left( \overrightarrow{v}_{12}\cdot \widehat{q}%
_{12}\right) \sqrt{\left( \overrightarrow{v}_{12}\cdot \widehat{q}%
_{12}\right) ^{2}-\frac{4}{m}\Delta _{a}\left( \widehat{q}_{12}\cdot
\overrightarrow{v}_{12}\right) }\right) . \notag
\end{eqnarray}%
Finally, because of the possibility of energy loss, the equation for the
temperature includes a source term given by%
\begin{eqnarray}
\xi \left( \overrightarrow{r},t\right) &=&\frac{1}{2V}\sum_{a}\int
dx_{1}dx_{2}\;\left( \widehat{q}_{12}\cdot \overrightarrow{v}_{12}\right)
\delta \left( q_{12}-\sigma \right) \Theta \left( -\widehat{q}_{12}\cdot
\overrightarrow{v}_{12}\right) \label{flux5} \\
&&\times K_{a}\left( \widehat{q}_{12}\cdot \overrightarrow{v}_{12}\right)
\Delta _{a}\left( \widehat{q}_{12}\cdot \overrightarrow{v}_{12}\right) \chi
\left( \overrightarrow{q}_{1},\overrightarrow{q}_{2};\left[ n\right] \right)
f\left( x_{1};t\right) f\left( x_{2};t\right)
\delta \left( \overrightarrow{r}-\overrightarrow{q}_{1}\right) . \notag
\end{eqnarray}%
All of these expressions are exact, given the Enskog approximation, and show
that the hydrodynamics of the system is completely specified once the
one-body distribution is known.
\subsection{Chapman-Enskog expansion}
The Chapman-Enskog expansion is, in essence, a gradient expansion of the
kinetic equation that assumes a particular form for the solution. Specifically,
one attempts to construct a so-called normal solution in which all space and
time dependence occurs through the hydrodynamic fields%
\begin{equation}
f\left( \overrightarrow{r},\overrightarrow{v};t\right) =f\left(
\overrightarrow{v}|\overrightarrow{r},\psi _{t}\right)
\end{equation}%
where the compact notation for the set of hydrodynamic fields $\psi
_{t}\left( \overrightarrow{r}\right) =\left\{ n\left( \overrightarrow{r}%
,t\right) ,T\left( \overrightarrow{r},t\right) ,\overrightarrow{u}\left(
\overrightarrow{r},t\right) \right\} $ has been introduced and the notation
indicates that the distribution is a functional of the hydrodynamic fields
at time $t$. This means that time derivatives will be evaluated as
\begin{equation}
\frac{\partial }{\partial t}f\left( \overrightarrow{r},\overrightarrow{v}%
;t\right) =\sum_{i}\int d\overrightarrow{r}^{\prime }\frac{\partial \psi
_{t,i}\left( \overrightarrow{r}^{\prime }\right) }{\partial t}\;\frac{\delta
}{\delta \psi _{t,i}\left( \overrightarrow{r}^{\prime }\right) }f\left(
\overrightarrow{r},\overrightarrow{v};t\right) . \label{time-deriv}
\end{equation}%
To order the terms in the gradient expansion, we introduce a uniformity
parameter $\epsilon $ and replace $\overrightarrow{\nabla }$ with $\epsilon
\overrightarrow{\nabla }$ and order terms in $\epsilon $. Since the space
and time derivatives are related by the balance equations, we also introduce
an expansion of the time derivative $\frac{\partial }{\partial t}=\partial
_{t}^{(0)}+\epsilon \partial _{t}^{(1)}+...$ as well as of the distribution
itself
\begin{equation}
f\left( \overrightarrow{q}_{1},\overrightarrow{v}_{1},t\right) =f_{0}\left[
\overrightarrow{v}_{1}|\overrightarrow{q}_{1},\psi _{t}\right] +\epsilon
f_{1}\left[ \overrightarrow{v}_{1}|\overrightarrow{q}_{1},\psi _{t}\right]
+...
\end{equation}%
Finally, in the Enskog approximation the collision operator is nonlocal and
so must also be expanded (see Appendix \ref{AppExpandOperator}) as $J\left[
f,f\right] =J_{0}\left[ f,f\right] +\epsilon J_{1}\left[ f,f\right] +...$.
Substituting these expansions into the Enskog equation and equating terms
order by order in $\epsilon $ gives a set of equations for the distribution,
the first two of which are%
\begin{eqnarray}
\partial _{t}^{(0)}f_{0}\left( x_{1};t\right) &=&J_{0}\left[ f_{0},f_{0}%
\right] \label{expansion} \\
\left( \partial _{t}^{(1)}+\overrightarrow{v}_{1}\cdot \frac{\partial }{%
\partial \overrightarrow{q}_{1}}\right) f_{0}\left( x_{1};t\right)
+\partial _{t}^{(0)}f_{1}\left( x_{1};t\right) &=&J_{0}\left[ f_{0},f_{1}%
\right] +J_{0}\left[ f_{1},f_{0}\right] +J_{1}\left[ f_{0},f_{0}\right] .
\notag
\end{eqnarray}%
Since all time and space dependence of the distribution occurs via the
hydrodynamic fields, the balance equations must also be expanded giving at
zeroth order%
\begin{eqnarray}
\partial _{t}^{(0)}n &=&0 \label{f0} \\
\partial _{t}^{(0)}n\overrightarrow{u} &=&0 \notag \\
\partial _{t}^{(0)}T &=&\frac{2}{Dnk_{B}}\xi ^{(0)} \notag
\end{eqnarray}%
and at first order%
\begin{eqnarray}
\partial _{t}^{(1)}\rho +\overrightarrow{\nabla }\cdot \left(
\overrightarrow{u}\rho \right) &=&0 \label{f1} \\
\partial _{t}^{(1)}\rho \overrightarrow{u}+\overrightarrow{\nabla }\cdot
\left( \rho \overrightarrow{u}\overrightarrow{u}\right) +\overrightarrow{%
\nabla }\cdot \overleftrightarrow{P}^{(0)} &=&0 \notag \\
\left( \partial _{t}^{(1)}+\overrightarrow{u}\cdot \overrightarrow{\nabla }%
\right) T+\frac{2}{Dnk_{B}}\left[ \overleftrightarrow{P}^{(0)}:%
\overrightarrow{\nabla }\overrightarrow{u}+\overrightarrow{\nabla }\cdot
\overrightarrow{q}^{(0)}\right] &=&\frac{2}{Dnk_{B}}\xi ^{(1)} \notag
\end{eqnarray}%
where, as noted, the fluxes and sources must also be expanded accordingly
(see Appendix \ref{AppExpandFluxes}). The logic of the normal solution is
that these balance equations define the meaning of the time derivatives so
that the time derivatives in eq.(\ref{expansion}) are evaluated using eq.(%
\ref{f1}) and eq.(\ref{time-deriv}). Together with the expressions for the
fluxes, eqs.(\ref{flux1})-(\ref{flux5}) suitably expanded, this gives a
closed set of integro-differential equations for the distribution function.
\section{Chapman-Enskog at zeroth order: the Homogeneous Cooling State}
\subsection{Expansion of the zeroth-order distribution}
At zeroth order in the gradient expansion, the distribution $f_{0}\left(
x_{1};t\right) $ must be a local function of the hydrodynamic fields so eqs.(%
\ref{expansion}) and (\ref{f0}) give%
\begin{equation}
\left( \frac{2}{Dnk_{B}}\xi ^{(0)}\right) \frac{\partial }{\partial T}%
f_{0}\left( x_{1};t\right) =J_{0}\left[ f_{0},f_{0}\right] \label{ce0}
\end{equation}%
with
\begin{eqnarray}
\xi ^{(0)}\left( \overrightarrow{r},t\right) &=&\frac{1}{2}\sum_{a}\int d%
\overrightarrow{v}_{1}d\overrightarrow{v}_{2}d\widehat{q}\;\left( \widehat{q}%
\cdot \overrightarrow{v}_{12}\right) \Theta \left( -\widehat{q}\cdot
\overrightarrow{v}_{12}\right) K_{a}\left( \sigma \widehat{q}\cdot
\overrightarrow{v}_{12}\right) \Delta _{a}\left( \sigma \overrightarrow{q},%
\overrightarrow{v}_{12}\right) \label{s0} \\
&&\times \chi _{0}\left( \sigma ;n\left( \overrightarrow{r}\right) \right)
f_{0}\left( \overrightarrow{r},\overrightarrow{v}_{1};t\right) f_{0}\left(
\overrightarrow{r},\overrightarrow{v}_{2};t\right) . \notag
\end{eqnarray}%
where $\chi _{0}\left( \sigma ;n\right) $ is the pair distribution function
for a uniform equilibrium fluid of density $n$. Notice that this is not only
the zeroth order component of the Chapman-Enskog expansion, but that it is
also an exact (within the Enskog approximation) equation for the
distribution of a spatially homogeneous system. If there is no energy loss,
the solution will simply be the Maxwell distribution. When energy is lost in
collisions, and in the absence of external forcing, the system cools; this
solution is commonly known as the Homogeneous Cooling State (HCS).
To solve for the HCS, we expand the velocity dependence about an equilibrium
distribution by writing it as%
\begin{equation}
f_{0}(x_{1})=f_{M}\left( v_{1};\psi _{t}\right) \sum_{i}c_{i}\left( \psi
_{t}\right) S_{i}\left( \frac{m}{2k_{B}T}v_{1}^{2}\right)
\label{hsc-distribution}
\end{equation}%
where the Maxwellian distribution is $f_{M}\left( v_{1};\psi _{t}\right) =n\pi
^{-D/2}\left( \frac{2k_{B}T}{m}\right) ^{-D/2}\exp \left( -\frac{m}{2k_{B}T}%
v_{1}^{2}\right) $; it is important to note that this depends on the
exact local fields $\psi _{t}(\overrightarrow{r})$. The functions $\left\{
S_{i}\left( x\right) \right\} _{i=0}^{\infty }$ comprise a complete set of
polynomials which are orthogonal under a Gaussian measure so that%
\begin{equation}
\int d\overrightarrow{v}_{1}\;f_{M}\left( v_{1};\psi _{t}\right) S_{i}\left(
\frac{m}{2k_{B}T}v_{1}^{2}\right) S_{j}\left( \frac{m}{2k_{B}T}%
v_{1}^{2}\right) =A_{i}\delta _{ij} \label{6}
\end{equation}%
where $A_{i}$ is a normalization constant. In fact, these can be written in
terms of the Sonine, or associated Laguerre, polynomials%
\begin{equation}
L_{k}^{\alpha }\left( x\right) =\sum_{m=0}^{k}\frac{\Gamma \left( \alpha
+k+1\right) \left( -x\right) ^{m}}{\Gamma \left( \alpha +m+1\right) \left(
k-m\right) !m!} \label{7}
\end{equation}%
and satisfy%
\begin{equation}
\int_{0}^{\infty }dx\;x^{\alpha }L_{k}^{\alpha }\left( x\right) L_{m}^{\alpha }\left(
x\right) \exp \left( -x\right) =\frac{\Gamma \left( \alpha +k+1\right) }{%
\Gamma \left( k+1\right) }\delta _{mk} \label{8}
\end{equation}%
so that in $D$ dimensions
\begin{equation}
S_{k}\left( x\right) =L_{k}^{\frac{D-2}{2}}\left( x\right) \label{9}
\end{equation}%
and%
\begin{equation}
A_{k}=\frac{\Gamma \left( \frac{1}{2}D+k\right) }{\Gamma \left( \frac{1}{2}%
D\right) \Gamma \left( k+1\right) }.
\end{equation}%
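The orthogonality relation, eq.(\ref{8}), and the normalization $A_{k}$ are easy to confirm symbolically. The following SymPy fragment (purely illustrative) checks the first few orders for $\alpha =0$ ($D=2$) and $\alpha =1/2$ ($D=3$):

```python
import sympy as sp

# Check the Laguerre orthogonality, eq. (8), for alpha = 0 (D = 2) and
# alpha = 1/2 (D = 3), with normalization Gamma(alpha+k+1)/Gamma(k+1).
x = sp.symbols('x', positive=True)

def overlap(k, m_, alpha):
    Lk = sp.assoc_laguerre(k, alpha, x)
    Lm = sp.assoc_laguerre(m_, alpha, x)
    return sp.integrate(x**alpha * Lk * Lm * sp.exp(-x), (x, 0, sp.oo))

for alpha in (sp.Integer(0), sp.Rational(1, 2)):
    for k in range(3):
        for m_ in range(3):
            expected = sp.gamma(alpha + k + 1) / sp.gamma(k + 1) if k == m_ else 0
            assert sp.simplify(overlap(k, m_, alpha) - expected) == 0
```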
Substituting eq.(\ref{hsc-distribution}) into the differential equation, eq.(%
\ref{ce0}), multiplying through by $L_{k}^{\frac{D-2}{2}}\left( \frac{m}{%
2k_{B}T}v_{1}^{2}\right) $ and integrating gives%
\begin{equation}
\left( \frac{2}{Dnk_{B}T}\xi ^{(0)}\left( \psi _{t}\right) \right) \left[ T%
\frac{\partial }{\partial T}c_{k}\left( \psi _{t}\right) +k\left(
c_{k}\left( \psi _{t}\right) -c_{k-1}\left( \psi _{t}\right) \right) \right]
=\sum_{rs}I_{k,rs}\left( \psi _{t}\right) c_{r}\left( \psi _{t}\right)
c_{s}\left( \psi _{t}\right) \label{a0}
\end{equation}%
with%
\begin{eqnarray}
I_{k,rs}\left( \psi _{t}\right) &=&-n^{-1}A_{k}^{-1}\int d\overrightarrow{v}%
_{1}dx_{2}\;L_{k}^{\frac{D-2}{2}}\left( \frac{m}{2k_{B}T}v_{1}^{2}\right)
\overline{T}_{-}\left( 12\right) \left( \frac{2k_{B}T}{m}\right)
^{-D}f_{M}\left( v_{1};\psi _{t}\right) \label{13} \\
&&\times f_{M}\left( v_{2};\psi _{t}\right) L_{r}^{\frac{D-2}{2}}\left(
\frac{m}{2k_{B}T}v_{1}^{2}\right) L_{s}^{\frac{D-2}{2}}\left( \frac{m}{%
2k_{B}T}v_{2}^{2}\right) . \notag
\end{eqnarray}
Notice that since $\xi ^{(0)}\sim n^{2}$ and $I_{k,rs}\sim n$, the
coefficients $c_{k}$ can only depend on temperature so $c_{k}\left( \psi
_{t}\right) =c_{k}\left( T(t)\right) $. Furthermore, it must be the case
that $c_{0}=1$ and $c_{1}=0$ in order to satisfy the definitions of the
hydrodynamic fields. It is easy to show that $I_{0,rs}=0$ so that the $k=0$
equation is trivial. Suppressing the dependence on $\psi _{t}$, the $k=1$
equation gives%
\begin{equation*}
-\frac{2}{Dnk_{B}T}\xi ^{(0)}=\sum_{rs}I_{1,rs}c_{r}c_{s}
\end{equation*}%
and it may be confirmed that this is consistent with eq.(\ref{s0}). The
first nontrivial approximation is to take $c_{k}=0$ for $k>2$ and to use the
$k=2$ equation to get
\begin{eqnarray}
-\frac{2}{Dnk_{B}T}\xi ^{(0)} &=&I_{1,00}+\left( I_{1,20}+I_{1,02}\right)
c_{2} \label{a2} \\
\left( \frac{2}{Dnk_{B}T}\xi ^{(0)}\right) \left[ T\frac{\partial }{\partial
T}c_{2}+2c_{2}\right] &=&I_{2,00}+\left( I_{2,20}+I_{2,02}\right) c_{2}
\end{eqnarray}%
where the quadratic terms on the right are typically neglected, since they
are of the same order as the quartic terms already dropped. For a granular fluid,
there is no quantity with the units of energy except for the temperature, so
the coefficients of the expansion, which are dimensionless, are
temperature-independent. For systems with additional energy scales, the
coefficients must be determined by solving the resultant ordinary
differential equations with appropriate boundary conditions. For example, if
the energy loss were bounded, then at high temperatures it should be
insignificant and one would expect $\lim_{T\rightarrow \infty }c_{k}\left(
T\right) =\delta _{k0}$ to be the boundary condition.
\subsection{Generating function formalism}
The calculation of the integrals which define the coefficients on the left
in eq.(\ref{a0}) can be formulated in terms of a generating function.
Specifically, it is shown in Appendix \ref{AppGeneratingFunction} that
\begin{eqnarray}
I_{k,rs}\left( \psi _{t}\right) &=&-n^{\ast }\frac{\Gamma \left( \frac{1}{2}%
D\right) \Gamma \left( k+1\right) }{\Gamma \left( \frac{1}{2}D+k\right) }%
\left( \frac{2k_{B}T}{m\sigma ^{2}}\right) ^{1/2} \label{I} \\
&&\times \frac{1}{r!s!k!}\lim_{z_{1}\rightarrow 0}\lim_{z_{2}\rightarrow
0}\lim_{x\rightarrow 0}\frac{\partial ^{r}}{\partial z_{1}^{r}}\frac{%
\partial ^{s}}{\partial z_{2}^{s}}\frac{\partial ^{k}}{\partial x^{k}}\left[
\sum_{a}G_{a}\left( \psi _{t}|\Delta _{a}\right) -G_{0}\right] \notag
\end{eqnarray}%
where $n^{\ast }=n\sigma ^{D}$. The generating functions are
\begin{eqnarray}
G_{a}\left( \psi _{t}|\Delta _{a}\right) &=&-\frac{1}{2}\pi
^{-1/2}S_{D}\left( 1-z_{1}x\right) ^{-\frac{1}{2}D}\allowbreak \left( \frac{%
1-z_{1}x}{2-x-z_{2}-z_{1}+xz_{1}z_{2}}\right) ^{\frac{1}{2}} \\
&&\times \int_{0}^{\infty }du\;K_{a}^{\ast }\left( \sqrt{u}\right) \exp
\left( \frac{\left( 2-z_{2}-z_{1}\right) x}{2-x-z_{2}-z_{1}+xz_{1}z_{2}}%
\frac{1}{2}\Delta _{a}^{\ast }\left( \sqrt{u}\right) \right) \notag \\
&&\times \exp \left( -\frac{1-z_{2}x}{2-x-z_{2}-z_{1}+xz_{1}z_{2}}u\right)
\notag \\
&&\times \exp \left( -\frac{1}{2}\frac{\left( z_{2}-z_{1}\right) x}{%
2-x-z_{2}-z_{1}+xz_{1}z_{2}}\left( u-\sqrt{u}\sqrt{u-2\Delta _{a}^{\ast
}\left( \sqrt{u}\right) }\right) \right) \notag
\end{eqnarray}%
and%
\begin{equation}
G_{0}=-\frac{1}{2}\pi ^{-1/2}S_{D}\left( 1-z_{1}x\right) ^{-\frac{D+1}{2}%
}\left( 2-x-z_{2}-z_{1}+xz_{1}z_{2}\right) ^{\frac{1}{2}}.
\end{equation}%
with $S_{D}$ the area of the $D$ dimensional unit hard sphere,
\begin{equation}
S_{D}=\frac{2\pi ^{D/2}}{\Gamma \left( D/2\right) }
\end{equation}%
which is, e.g., $4\pi $ in three dimensions and $2\pi $ for $D=2$. The
scaled probabilities and energy loss functions are
\begin{eqnarray}
K^{\ast }\left( x\right) &=&K\left( x\sqrt{\frac{2}{m\beta }}\right) \\
\Delta ^{\ast }\left( x\right) &=&\beta \Delta \left( x\sqrt{\frac{2}{m\beta
}}\right) . \notag
\end{eqnarray}%
In general, $K^{\ast }\left( x\right) $ and $\Delta ^{\ast }\left( x\right) $
are functions of temperature and, hence, time but in order to keep the
resulting expressions below from becoming too cumbersome, these arguments
will be suppressed. The utility of this generating function, which is
admittedly complex, is that the limits and derivatives needed to evaluate
eq.(\ref{I}) are easily programmed using symbolic manipulation packages.
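As an illustration of this remark, the following SymPy sketch implements the derivative-and-limit machinery of eq.(\ref{I}) for the elastic function $G_{0}$ alone (the inelastic $G_{a}$ differ only by the additional $u$-integration); the helper name \texttt{G0\_coeff} is of course not part of the formalism:

```python
import sympy as sp

# Derivative-and-limit machinery of eq. (I), demonstrated on the elastic
# generating function G0; D and S_D are kept symbolic.
x, z1, z2, D, SD = sp.symbols('x z_1 z_2 D S_D', positive=True)

G0 = -sp.Rational(1, 2) * SD / sp.sqrt(sp.pi) \
     * (1 - z1 * x) ** (-(D + 1) / 2) \
     * sp.sqrt(2 - x - z2 - z1 + x * z1 * z2)

def G0_coeff(k, r, s):
    """(1/(r! s! k!)) d^r/dz1^r d^s/dz2^s d^k/dx^k of G0 at z1 = z2 = x = 0."""
    expr = G0
    for sym, cnt in ((z1, r), (z2, s), (x, k)):
        if cnt:
            expr = sp.diff(expr, sym, cnt)
    expr = expr.subs({z1: 0, z2: 0, x: 0})
    return sp.simplify(expr / (sp.factorial(r) * sp.factorial(s) * sp.factorial(k)))

# lowest term: G0 evaluated at the origin is -S_D sqrt(2)/(2 sqrt(pi))
assert sp.simplify(G0_coeff(0, 0, 0) + SD * sp.sqrt(2) / (2 * sp.sqrt(sp.pi))) == 0
```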
To complete the description of the uniform fluid, I give the quantities
necessary to calculate the lowest order correction. These are written
conveniently as
\begin{eqnarray}
I_{k,rs} &=&n^{\ast }\chi \frac{S_{D}}{2D\left( D+2\right) \sqrt{\pi }}%
\left( \frac{k_{B}T}{m\sigma ^{2}}\right) ^{1/2}I_{k,rs}^{\ast } \\
I_{k,rs}^{\ast } &=&I_{k,rs}^{\ast E}+\sum_{a}\int_{0}^{\infty }K_{a}\left(
\sqrt{u}\right) I_{k,rs}^{\ast I}e^{-\frac{1}{2}u}du \notag
\end{eqnarray}%
where $I_{2,02}^{\ast E}+I_{2,20}^{\ast E}=-8\left( D-1\right) $ and all
other elastic contributions are zero, and the inelastic kernels are
\begin{eqnarray}
I_{1,00}^{\ast I} &=&\left( D+2\right) \Delta ^{\ast }\left( \sqrt{u}\right)
\\
I_{2,00}^{\ast I} &=&\left( \Delta ^{\ast }\left( \sqrt{u}\right)
+3-u\right) \Delta ^{\ast }\left( \sqrt{u}\right) \notag \\
I_{1,02}^{\ast I}+I_{1,20}^{\ast I} &=&\frac{D+2}{16}\left(
u^{2}-6u+3\right) \Delta ^{\ast }\left( \sqrt{u}\right) \notag \\
I_{2,02}^{\ast I}+I_{2,20}^{\ast I} &=&\frac{1}{16}\left(
\begin{array}{c}
\Delta ^{\ast }\left( \sqrt{u}\right) \left( \Delta ^{\ast }\left( \sqrt{u}%
\right) \left( u^{2}-6u+3\right) -u^{3}+9u^{2}-\left( 8D+49\right)
u+8D+37\right) \allowbreak \\
+16\left( D-1\right) \left( u-\sqrt{u}\sqrt{\left( u-2\Delta ^{\ast }\left(
\sqrt{u}\right) \right) }\right)%
\end{array}%
\right) \notag
\end{eqnarray}
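These kernels can be cross-checked against the constant-coefficient-of-restitution results quoted in the next subsection. The following SymPy sketch (illustrative) performs the $u$-integrals with $K^{\ast }=1$ and $\Delta ^{\ast }\left( \sqrt{u}\right) =\frac{1}{2}\left( 1-\alpha ^{2}\right) u$:

```python
import sympy as sp

# u-integrals of the inelastic kernels for a constant coefficient of
# restitution: K* = 1 and Delta*(sqrt(u)) = (1 - alpha^2) u / 2.
u, alpha, D = sp.symbols('u alpha D', positive=True)
Delta = (1 - alpha**2) * u / 2
w = sp.exp(-u / 2)

I1_00 = sp.integrate((D + 2) * Delta * w, (u, 0, sp.oo))
I2_00 = sp.integrate((Delta + 3 - u) * Delta * w, (u, 0, sp.oo))
I1_sum = sp.integrate((D + 2) / 16 * (u**2 - 6 * u + 3) * Delta * w,
                      (u, 0, sp.oo))

assert sp.simplify(I1_00 - 2 * (D + 2) * (1 - alpha**2)) == 0
assert sp.simplify(I2_00 - 2 * (1 - 2 * alpha**2) * (1 - alpha**2)) == 0
assert sp.simplify(I1_sum - sp.Rational(3, 8) * (D + 2) * (1 - alpha**2)) == 0
```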
The pressure can similarly be expressed in the generating function
formalism, but here we just give for later use the expression for the
pressure (see Appendix \ref{AppExpandFluxes}) including the lowest order
corrections to the Gaussian distribution
\begin{eqnarray}
\frac{p^{\left( 0\right) }}{nk_{B}T} &=&1+n^{\ast }\chi \frac{S_{D}}{2D}
\label{pressure} \\
&&+n^{\ast }\chi \frac{S_{D}}{2D}\frac{1}{\sqrt{2\pi }}%
\sum_{a}\int_{0}^{\infty }vK_{a}^{\ast }\left( -v\right) g\left( v,\Delta
_{a}^{\ast }\left( -v\right) \right) \left( 1+\frac{D}{16}c_{2}\left(
v^{4}-6v^{2}+3\right) \right) \exp \left( -\frac{1}{2}v^{2}\right) dv.
\notag
\end{eqnarray}%
where the function $g\left( v,\Delta \right) $ is defined as%
\begin{equation*}
g\left( v,\Delta \right) =\mathrm{sgn}\left( v\right) \sqrt{v^{2}-2\Delta }-v.
\end{equation*}
\subsection{The simple granular fluid in D dimensions}
For a simple granular fluid with constant coefficient of restitution one has
$\Delta ^{\ast }\left( v\right) =\beta \Delta \left( v\sqrt{\frac{2k_{B}T}{m}%
}\right) =\left( 1-\alpha ^{2}\right) \frac{1}{2}v^{2}$ and $K^{\ast }\left(
x\right) =1$ giving
\begin{eqnarray}
G\left( \psi _{t}|\Delta \right) &\rightarrow &-\pi ^{-1/2}S_{D}\left(
1-z_{1}x\right) ^{-\frac{1}{2}D}\allowbreak \left( 1-z_{1}x\right) ^{\frac{1%
}{2}}\left( 2-x-z_{2}-z_{1}+xz_{1}z_{2}\right) ^{1/2} \\
&&\times \left[ -\frac{1}{2}\left( 2-z_{2}-z_{1}\right) x\left( 1-\alpha
^{2}\right) +2-2z_{2}x+\left( z_{2}-z_{1}\right) x\left( 1-\alpha \right) %
\right] ^{-1}. \notag
\end{eqnarray}%
and
\begin{eqnarray}
I_{1,00}^{\ast } &=&2\left( D+2\right) \left( 1-\alpha ^{2}\right) \\
I_{2,00}^{\ast } &=&\allowbreak 2\left( 1-2\alpha ^{2}\right) \left(
1-\alpha ^{2}\right) \notag \\
I_{1,02}^{\ast }+I_{1,20}^{\ast } &=&\frac{3\left( D+2\right) }{8}\left(
1-\alpha ^{2}\right) \notag \\
I_{2,02}^{\ast }+I_{2,20}^{\ast } &=&-8\left( D-1\right) +\frac{1}{8}\left(
\alpha -1\right) \left( 30\alpha ^{3}+30\alpha ^{2}+24D\alpha +105\alpha
+137-8D\right) \notag \\
&=&\allowbreak \allowbreak \frac{1}{8}\left( \alpha +1\right) \left(
30\alpha ^{3}-30\alpha ^{2}+24D\alpha +105\alpha -56D-73\right) \allowbreak
\notag
\end{eqnarray}%
so that the lowest order correction to the Gaussian is
\begin{eqnarray}
c_{2} &=&\frac{I_{2,00}}{-2I_{1,00}-\left( I_{2,20}+I_{2,02}\right) } \\
&=&\frac{16\left( 1-2\alpha ^{2}\right) \left( 1-\alpha \right) }{%
24D+9-\left( 41-8D\right) \alpha +30\alpha ^{2}\left( 1-\alpha \right) }
\notag
\end{eqnarray}%
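As a consistency check (illustrative, using SymPy), this closed form can be recovered directly from the $I^{\ast }$ values listed above, since the common prefactor of the $I$'s cancels in the ratio:

```python
import sympy as sp

# Recover the closed form of c_2 from the I* values; the common prefactor
# n* chi S_D/(2 D (D+2) sqrt(pi)) (k_B T/(m sigma^2))^(1/2) cancels.
alpha, D = sp.symbols('alpha D', positive=True)

I1_00 = 2 * (D + 2) * (1 - alpha**2)
I2_00 = 2 * (1 - 2 * alpha**2) * (1 - alpha**2)
I2_sum = -8 * (D - 1) + sp.Rational(1, 8) * (alpha - 1) * (
    30 * alpha**3 + 30 * alpha**2 + 24 * D * alpha + 105 * alpha + 137 - 8 * D)

c2_ratio = I2_00 / (-2 * I1_00 - I2_sum)
c2_closed = 16 * (1 - 2 * alpha**2) * (1 - alpha) / (
    24 * D + 9 - (41 - 8 * D) * alpha + 30 * alpha**2 * (1 - alpha))

assert sp.simplify(c2_ratio - c2_closed) == 0
```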
The cooling rate is%
\begin{equation}
\xi ^{(0)}=-\left( 1-\alpha ^{2}\right) n^{\ast }\chi \frac{S_{D}}{2\sqrt{%
\pi }}\left( \frac{k_{B}T}{m\sigma ^{2}}\right) ^{1/2}nk_{B}T\left[ 1+\frac{3%
}{16}c_{2}\right] \label{s1}
\end{equation}%
and for a simple granular fluid, $g\left( v,\Delta ^{\ast }\left( -v\right)
\right) =v\left( \alpha -1\right) $ gives
\begin{equation}
p^{\left( 0\right) }=nk_{B}T\left\{ 1+n^{\ast }\chi \frac{S_{D}}{4D}\left(
1+\alpha \right) \right\}
\end{equation}%
so it is seen that the second order terms do not contribute.
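Both steps of this reduction are easily verified symbolically. The SymPy sketch below (illustrative) checks that $g\left( v,\Delta ^{\ast }\left( -v\right) \right) =v\left( \alpha -1\right) $ for $v>0$ and that the $c_{2}$ term of the pressure integral vanishes, leaving a correction of $\left( \alpha -1\right) /2$ that combines with the leading term to give the factor $\left( 1+\alpha \right) /2$:

```python
import sympy as sp

# (i) g(v, Delta*(-v)) = v (alpha - 1) for v > 0; (ii) the c_2 term of the
# pressure integral vanishes, leaving a correction (alpha - 1)/2.
v, alpha, c2, D = sp.symbols('v alpha c_2 D', positive=True)

Delta_star = (1 - alpha**2) * v**2 / 2
g = sp.sqrt(sp.factor(v**2 - 2 * Delta_star)) - v    # sgn(v) = +1 for v > 0
assert sp.simplify(g - v * (alpha - 1)) == 0

integrand = v * (v * (alpha - 1)) \
    * (1 + D / 16 * c2 * (v**4 - 6 * v**2 + 3)) \
    * sp.exp(-v**2 / 2) / sp.sqrt(2 * sp.pi)
corr = sp.integrate(integrand, (v, 0, sp.oo))
assert sp.simplify(corr - (alpha - 1) / 2) == 0      # no c_2 dependence left
```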
This completes the discussion of the zeroth order Chapman-Enskog solution
which is also the HCS. The simplest approximation is to take the
distribution to be Maxwellian with the temperature obeying eq.(\ref{f0}).
The first nontrivial approximation is to keep $c_{2}$ which is given by eq.(%
\ref{a2}) with coefficients that depend on the energy loss model.
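Although not needed for what follows, it is worth noting the physical content of eqs.(\ref{f0}) and (\ref{s1}): since $\xi ^{(0)}\propto -T^{3/2}$ at fixed density, the HCS temperature obeys $dT/dt=-\omega _{0}T^{3/2}$ and therefore cools algebraically. The SymPy sketch below (the symbol $\omega _{0}$ is illustrative shorthand for the collected constant factors) verifies the resulting cooling law:

```python
import sympy as sp

# The HCS cooling law implied by eqs. (f0) and (s1): dT/dt = -omega0 T^(3/2)
# (omega0 > 0 collects the constant factors) is solved by
# T(t) = T0 / (1 + omega0 sqrt(T0) t / 2)^2.
t, omega0, T0 = sp.symbols('t omega_0 T_0', positive=True)
Tsol = T0 / (1 + omega0 * sp.sqrt(T0) * t / 2) ** 2

residual = sp.diff(Tsol, t) + omega0 * Tsol ** sp.Rational(3, 2)
assert sp.simplify(residual) == 0
assert sp.simplify(Tsol.subs(t, 0) - T0) == 0   # initial condition
```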
\section{Chapman-Enskog at first order: the Navier-Stokes equations}
The first order equation can be written as%
\begin{equation}
\partial _{t}^{(0)}f_{1}-\mathcal{L}_{0}\left[ f_{1}\right] =J_{1}\left[
f_{0},f_{0}\right] -\left( \partial _{t}^{(1)}+\overrightarrow{v}\cdot
\overrightarrow{\nabla }\right) f_{0}
\end{equation}%
where $\mathcal{L}_{0}\left[ f_{1}\right] =\left( J_{0}\left[ f_{1},f_{0}%
\right] +J_{0}\left[ f_{0},f_{1}\right] \right) $ is the linearized
Boltzmann operator. The first order balance equations are used to eliminate
the time derivative on the right. It is convenient to divide the first order
heat source into two parts%
\begin{equation}
\xi _{1}=\xi _{0}\left[ f_{1}\right] +\xi _{1}\left[ f_{0}\right]
\end{equation}%
where the first term on the right is, as indicated, a linear operator acting
on the first order distribution and the second is of first order in the
gradients and depends solely on the zeroth order distribution. Furthermore,
since it is a scalar, $\xi _{1}\left[ f_{0}\right] $ must be proportional to
the only scalar gradient, namely $\overrightarrow{\nabla }\cdot
\overrightarrow{u}$ so that we will write $\xi _{1}\left[ f_{0}\right] =$ $%
\xi _{1}^{\nabla u}\left[ f_{0}\right] \overrightarrow{\nabla }\cdot
\overrightarrow{u}$ (see appendix \ref{AppExpandFluxes} for more details).
Then, the first order equation becomes%
\begin{equation}
\partial _{t}^{(0)}f_{1}+\frac{2}{Dnk_{B}T}\xi _{0}\left[ f_{1}\right]
\left( T\frac{\partial }{\partial T}f_{0}\right) -\mathcal{L}_{0}\left[ f_{1}%
\right] =J_{1}\left[ f_{0},f_{0}\right] -\frac{2}{Dnk_{B}T}\xi _{1}^{\nabla
u}\left[ f_{0}\right] \left( \overrightarrow{\nabla }\cdot \overrightarrow{u}%
\right) \left( T\frac{\partial }{\partial T}f_{0}\right) -\sum_{\alpha
}\sum_{i}B_{i}^{\alpha }\left( \overrightarrow{V};\left[ f_{0}\right]
\right) \partial _{i}\psi _{t,\alpha } \label{ce1}
\end{equation}%
with
\begin{eqnarray}
B_{i}^{n} &=&\left( n^{-1}f_{0}+\frac{1}{nk_{B}T}\frac{\partial p^{(0)}}{%
\partial n}\left[ \frac{\partial }{\partial z_{1}}f_{0}\right] _{T}\right)
V_{1i} \\
B_{i}^{T} &=&\left( \frac{1}{nk_{B}T}\frac{\partial p^{(0)}}{\partial T}%
\left[ \frac{\partial }{\partial z_{1}}f_{0}\right] _{T}+\frac{\partial }{%
\partial T}f_{0}\right) V_{1i} \notag \\
B_{i}^{u_{j}} &=&\frac{2}{Dnk_{B}}\left( -p^{(0)}\frac{\partial }{\partial T}%
f_{0}-\frac{mnV_{1}^{2}}{2T}\left[ \frac{\partial }{\partial z_{1}}f_{0}%
\right] _{T}-\frac{Dnk_{B}}{2}f_{0}\right) \delta _{ij} \notag \\
&&+\frac{m}{2k_{B}T}\left( V_{1i}V_{1j}-\frac{1}{D}V_{1}^{2}\delta
_{ij}\right) \left[ -\frac{\partial }{\partial z_{1}}f_{0}\right] _{T}
\notag
\end{eqnarray}%
where the variables are $z_{1}=\frac{m}{2k_{B}T}V_{1}^{2}$ and $z=\frac{m}{2k_{B}T}V^{2}$.
It is shown in Appendix \ref{AppExpandOperator} that $J_{1}\left[ f_{0},f_{0}%
\right] $ can be written as%
\begin{equation}
J_{1}\left[ f_{0},f_{0}\right] =\sum_{\gamma ,i}\left( \partial _{i}\psi
_{t,\gamma }\left( \overrightarrow{r}\right) \right) \cdot \left( J_{i}^{(0)}%
\left[ f_{0},\frac{\partial }{\partial \psi _{\gamma }}f_{0}\right] +\frac{1%
}{2}\delta _{\gamma n}\frac{\partial \ln \chi }{\partial n}J_{i}^{(0)}\left[
f_{0},f_{0}\right] \right)
\end{equation}%
where the detailed form of the operator $J_{i}^{(0)}$ is given in the
Appendix. The right hand side of eq.(\ref{ce1}) is therefore expressed
entirely in terms of the gradients of the hydrodynamic fields so that the
first order correction to the distribution must also be proportional to the
gradients. Since the only vector available is $\overrightarrow{V}$ and the
only tensors are the unit tensor and the symmetric traceless tensor $%
V_{i}V_{j}-\frac{1}{D}\delta _{ij}V^{2}$, the first order distribution must
take the form%
\begin{equation}
f_{1}=f_{0}\left( x_{1}\right) \left[
\begin{array}{c}
A^{\left( n\right) }\left( \overrightarrow{V}_{1}\right) V_{1i}\partial
_{i}n+A^{\left( T\right) }\left( \overrightarrow{V}_{1}\right)
V_{1i}\partial _{i}T+A^{\left( \nabla u\right) }\left( \overrightarrow{V}%
_{1}\right) \overrightarrow{\nabla }\cdot \overrightarrow{u} \\
+\sqrt{\frac{D}{D-1}}A^{\left( \partial u\right) }\left( \overrightarrow{V}%
_{1}\right) \left( V_{1i}V_{1j}-\frac{1}{D}\delta _{ij}V^{2}\right) \left(
\partial _{i}u_{j}+\partial _{j}u_{i}-\frac{2}{D}\delta _{ij}\overrightarrow{%
\nabla }\cdot \overrightarrow{u}\right)%
\end{array}%
\right] . \label{ff1}
\end{equation}%
Then, both sides of the kinetic equation are expressed in terms of the
gradients of the hydrodynamic fields and since those gradients can vary
independently, their coefficients must vanish individually giving%
\begin{align}
& \phi _{I_{\alpha }}^{\alpha }\left( \overrightarrow{V}_{1}\right) \left[
\frac{2}{Dnk_{B}T}\xi _{0}\left[ f_{0}\right] \frac{\partial }{\partial T}%
f_{0}A^{\left( \alpha \right) }+\sum_{\gamma }K_{\gamma }^{\alpha }\left[
A^{\left( \gamma \right) }\right] \right] -\mathcal{L}_{0}\left[
f_{0}A^{\left( \alpha \right) }\phi _{I_{\alpha }}^{\alpha }\right]
\label{XX} \\
& =\Omega _{I_{\alpha }}^{\alpha }\left[ f_{0},f_{0}\right] -C^{\alpha
}\left( \overrightarrow{V}|f_{0}\right) \phi _{I_{\alpha }}^{\alpha }\left(
\overrightarrow{V}_{1}\right) , \notag
\end{align}%
where Greek indices range over the four values $n,T,\nabla u$ and $\partial
u $. In this equation, the capitalized index, $I_{\alpha }$, is a
super-index corresponding to a set of Cartesian indices as illustrated by
the definition
\begin{equation}
\phi _{I_{\alpha }}^{\alpha }\left( \overrightarrow{V}\right) =\left(
V_{i},V_{i},1,\sqrt{\frac{D}{D-1}}\left( V_{i}V_{j}-\frac{1}{D}V^{2}\delta
_{ij}\right) \right) .
\end{equation}%
The linear functional $K_{\gamma }^{\alpha }$ encapsulates contributions
coming from the action of the functional derivative on the non-local term $%
\overrightarrow{\nabla }T$ as well as terms coming from $\xi ^{(0)}\left[
f_{1}\right] $ and is given by
\begin{eqnarray}
K_{\gamma }^{\alpha }\left[ A^{\left( \gamma \right) }\right] &=&A^{(T)}f_{0}%
\left[ \delta _{\alpha n}\frac{\partial }{\partial n}+\delta _{\alpha T}%
\frac{\partial }{\partial T}\right] \left( \frac{2}{Dnk_{B}T}\xi _{0}\left[
f_{0}\right] \right) \\
&&-\delta _{\alpha \nabla u}\frac{1}{Dnk_{B}T}\xi _{0}\left[ f_{0}A^{(\nabla
u)}\right] \frac{\partial }{\partial T}f_{0}. \notag
\end{eqnarray}%
The source terms on the right are, after some manipulation, given by%
\begin{eqnarray}
C^{n}\left( \overrightarrow{V}|f_{0}\right) &=&n^{-1}f_{0}+\frac{1}{nk_{B}T}%
\frac{\partial p^{(0)}}{\partial n}\left[ \frac{\partial }{\partial z}f_{0}%
\right] _{T} \\
C^{T}\left( \overrightarrow{V}|f_{0}\right) &=&\frac{1}{nk_{B}T}\frac{%
\partial p^{(0)}}{\partial T}\left[ \frac{\partial }{\partial z}f_{0}\right]
_{T}+\frac{\partial }{\partial T}f_{0} \notag \\
C^{\nabla u}\left( \overrightarrow{V}|f_{0}\right) &=&-\frac{2}{Dnk_{B}T}%
\left( \xi _{1}^{\nabla u}\left[ f_{0}\right] -p^{(0)C}\right) \left( \frac{D%
}{2}f_{0}+\frac{mV^{2}}{2T}\left[ \frac{\partial }{\partial z}f_{0}\right]
_{T}\right) \notag \\
C^{\partial u}\left( \overrightarrow{V}|f_{0}\right) &=&-\frac{m}{2k_{B}T}%
\left[ \frac{\partial }{\partial z}f_{0}\right] _{T} \notag
\end{eqnarray}%
and%
\begin{eqnarray}
\Omega _{i}^{n}\left[ f_{0},f_{0}\right] &=&\frac{1}{2}\frac{\partial \ln
n^{2}\chi }{\partial n}J_{i}^{(0)}\left[ f_{0},f_{0}\right] \label{Omega} \\
\Omega _{i}^{T}\left[ f_{0},f_{0}\right] &=&J_{i}^{(0)}\left[ f_{0},\frac{%
\partial }{\partial T}f_{0}\right] \notag \\
\Omega ^{\nabla u}\left[ f_{0},f_{0}\right] &=&\frac{1}{D}\sum_{i}J_{i}^{(0)}%
\left[ f_{0},\frac{\partial }{\partial u_{i}}f_{0}\right] \notag \\
\Omega _{ij}^{\partial u}\left[ f_{0},f_{0}\right] &=&\frac{1}{4}\left(
\begin{array}{c}
J_{j}^{(0)}\left[ f_{0},\frac{\partial }{\partial u_{i}}f_{0}\right]
+J_{i}^{(0)}\left[ f_{0},\frac{\partial }{\partial u_{j}}f_{0}\right] \\
-\frac{2}{D}\delta _{ij}\sum_{l}J_{l}^{(0)}\left[ f_{0},\frac{\partial }{%
\partial u_{l}}f_{0}\right]%
\end{array}%
\right) . \notag
\end{eqnarray}
We conclude the discussion of the first order approximation with some
general remarks concerning the solution of eqs.(\ref{XX})-(\ref{Omega}).
First, the hydrodynamic fields are defined by eq.(\ref{fields}) which can be
written as
\begin{equation}
\psi _{t,i}\left( \overrightarrow{r}\right) =\int \widehat{\psi }_{i}\left(
\overrightarrow{V}\right) f\left( \overrightarrow{r},\overrightarrow{V}%
;t\right) d\overrightarrow{V} \label{f3}
\end{equation}%
with the array of velocity moments $\widehat{\psi }\left( \overrightarrow{V}%
\right) =\left( 1,\frac{m}{2}V^{2},\overrightarrow{V}\right) $. However,
from the definition eq.(\ref{hsc-distribution}), it is clear that the zeroth
order distribution satisfies%
\begin{equation}
\psi _{t,i}\left( \overrightarrow{r}\right) =\int \widehat{\psi }_{i}\left(
\overrightarrow{V}\right) f_{0}\left( \overrightarrow{r},\overrightarrow{V}%
;t\right) d\overrightarrow{V}
\end{equation}%
so that it must be the case that all higher order contributions to the
distribution give%
\begin{equation}
0=\int \widehat{\psi }_{i}\left( \overrightarrow{V}\right) f_{j}\left(
\overrightarrow{r},\overrightarrow{V};t\right) d\overrightarrow{V}
\end{equation}%
for all $i$ and $j$. Since the gradients of the hydrodynamic fields are
arbitrary, this means that in the case of the first order distribution, the
coefficients of the gradients must be orthogonal to the first three velocity
moments under the measure $f_{0}\left( \overrightarrow{V}\right) $ or%
\begin{equation}
0=\int \widehat{\psi }_{i}\left( \overrightarrow{V}\right) A^{\left( \alpha
\right) }\left( \overrightarrow{V}\right) f_{0}\left( \overrightarrow{r},%
\overrightarrow{V};t\right) d\overrightarrow{V}. \label{orthog}
\end{equation}
Second, it is clear that eq.(\ref{XX}) is a linear equation in the
coefficients $A^{\left( \alpha \right) }\left( \overrightarrow{V}\right) $
so that the conditions for the existence of a solution follows the usual
theory of linear operators. In particular, defining a Hilbert space with
measure $f_{0}\left( \overrightarrow{V}\right) $, the Fredholm alternative
applies: for a linear operator $L$ and source term $B$, the equation $%
LV=B$ has a solution if and only if $B$ is orthogonal to the null space of $%
L$. We expect that $\widehat{\psi }\left( \overrightarrow{V}\right) $ is in
the null space of the operator defined by the right hand side of eq.(\ref{XX}%
). In fact, it is clear that multiplying by $\widehat{\psi }_{i}\left(
\overrightarrow{V}_{1}\right) $ and integrating over velocities gives, on
the right hand side,
\begin{equation}
-\delta _{\alpha \nabla u}\frac{1}{Dnk_{B}T}\xi _{0}\left[ f_{0}A^{(\nabla
u)}\right] \frac{\partial }{\partial T}\int d\overrightarrow{V}_{1}\;%
\widehat{\psi }_{i}\left( \overrightarrow{V}\right) \phi _{I_{\alpha
}}^{\alpha }\left( \overrightarrow{V}_{1}\right) f_{0}-\int d\overrightarrow{%
V}_{1}\;\widehat{\psi }_{i}\left( \overrightarrow{V}_{1}\right) \mathcal{L}%
_{0}\left[ f_{0}A^{\left( \alpha \right) }\phi _{I_{\alpha }}^{\alpha }%
\right]
\end{equation}%
Now, the $\mathcal{L}_{0}$ term vanishes for $\;\widehat{\psi }_{i}=1$ and $%
\overrightarrow{V}$ due to the conservation of particle number and total
momentum respectively. For the last choice, $\widehat{\psi }_{i}=\frac{m}{2}%
V^{2}$, it is only nonzero for $\alpha =\nabla u$ due to rotational symmetry
(for other choices of $\alpha $, $\phi _{I_{\alpha }}^{\alpha }$ is a vector
or traceless tensor). Thus the only non-vanishing element of this system of
equations is that for $\alpha =\nabla u$ and $\widehat{\psi }_{i}=\frac{m}{2}%
V^{2}$ which becomes
\begin{equation}
-\frac{1}{2T}\xi _{0}\left[ f_{0}A^{(\nabla u)}\right] -\int d%
\overrightarrow{V}_{1}\;\frac{m}{2}V^{2}\mathcal{L}_{0}\left[ f_{0}A^{\left(
\nabla u\right) }\right]
\end{equation}%
and which is seen to vanish from the definition of $\xi _{0}\left[ g\right] $%
, eq.(\ref{psi0}) and $\mathcal{L}_{0}\left[ g\right] $. Thus, $\widehat{%
\psi }$ is indeed in the null space of the linear operator and a necessary
condition for the existence of a solution is that the right hand side is
orthogonal to $\widehat{\psi }$ as well. That this is indeed the case is
easily verified.
\subsection{Approximate solution of the integral equations}
The integral equations summarized by eq.s(\ref{XX})-(\ref{Omega}) will be
solved by expanding the unknown functions $A^{\left( \gamma \right) }$ in
associated Laguerre polynomials as%
\begin{equation}
A^{\left( \alpha \right) }\left( \overrightarrow{V}\right)
=\sum_{s}a_{s}^{\left( \alpha \right) }L_{s}^{\left( \frac{D-2}{2}+\lambda
_{\alpha }\right) }\left( \frac{m}{2k_{B}T}V^{2}\right) \label{cex}
\end{equation}%
where the coefficients are in general functions of the hydrodynamic fields, $%
a_{s}^{\left( \alpha \right) }=a_{s}^{\left( \alpha \right) }\left( \psi
_{t}\right) $ although, for clarity, this dependence will be suppressed
below. It is interesting to note that for a simple granular gas, Garzo and
Dufty\cite{DuftyGranularTransport} wrote the first order distribution as in
eq.(\ref{ff1}) but with $f_{0}$ replaced by the Maxwellian $f_{M}\left(
v_{1};\psi _{t}\right) $. The use of $f_{0}$ here is motivated by the fact
that the source term in the first order equations, eq.(\ref{XX}), is
proportional to $f_{0}$, so that it seems natural to use it in the
definition of the first order correction, although this is not necessary. In fact,
one clearly has that%
\begin{eqnarray}
f_{0}\left( \overrightarrow{V}\right) A^{\left( \alpha \right) }\left(
\overrightarrow{V}\right) &=&f_{0}\left( \overrightarrow{V}\right)
\sum_{s}a_{s}^{\left( \alpha \right) }L_{s}^{\left( \frac{D-2}{2}+\lambda
_{\alpha }\right) }\left( \frac{m}{2k_{B}T}V^{2}\right) \\
&=&f_{M}\left( V;\psi _{t}\right) \sum_{s}\overline{a}_{s}^{\left( \alpha
\right) }L_{s}^{\left( \frac{D-2}{2}+\lambda _{\alpha }\right) }\left( \frac{%
m}{2k_{B}T}V^{2}\right) \notag
\end{eqnarray}%
for new coefficients $\overline{a}_{s}^{\left( \alpha \right) }$ which are
linear combinations of the $a_{s}^{\left( \alpha \right) }$ and in
particular, if $a_{s_{0}}^{\left( \alpha \right) }$ is the first
non-vanishing coefficient in the sum, then $a_{s_{0}}^{\left( \alpha \right)
}=\overline{a}_{s_{0}}^{\left( \alpha \right) }$. In the development to
follow, the usual approximation will be made wherein only the lowest
non-vanishing coefficient is retained (the ``lowest Sonine approximation'')\
so that the two choices should be equivalent. However, in this
approximation, the exact relation $a_{s_{0}}^{\left( \alpha \right) }=%
\overline{a}_{s_{0}}^{\left( \alpha \right) }$ is only preserved if terms of
the form $c_{2}a_{s_{0}}^{\left( \alpha \right) }$ are systematically
neglected. This therefore motivates the simplifying approximation made here
whereby such terms are in fact neglected throughout. In the Conclusions, the
present approximation is evaluated for the special case of a simple granular
fluid in 3 dimensions by comparison to expressions given in ref. \cite%
{DuftyGranularTransport} where all dependence on $c_{2}$ is retained.
In order to solve the integral equations, the expansions eq.(\ref{cex}) are
substituted into eq.(\ref{XX}) and the $\alpha $-th equation is multiplied by
$\phi _{I_{\alpha }}^{\alpha }\left( \overrightarrow{V}_{1}\right)
L_{k}^{\left( \frac{D-2}{2}+\lambda _{\alpha }\right) }\left( \frac{m}{%
2k_{B}T}V_{1}^{2}\right) $, the tensorial indices are contracted, and $%
\overrightarrow{V}_{1}$ is integrated. The left hand side of these equations is
found to be simplified by the choices $\lambda _{n}=\lambda _{T}=1$, $%
\lambda _{\nabla u}=0$ and $\lambda _{\partial u}=2$ which are made
henceforth. The result after some simplification can be written as%
\begin{equation}
\frac{2}{Dk_{B}}\xi _{0}\frac{\Gamma \left( \frac{D}{2}+\lambda _{\alpha
}+k\right) }{\Gamma \left( D/2\right) \Gamma \left( k+1\right) }\left( \frac{%
2k_{B}T}{m}\right) ^{\lambda _{\alpha }}\left[ \frac{\partial }{\partial T}%
a_{k}^{\left( \alpha \right) }+\frac{1}{T}\left( \lambda _{\alpha }+k\right)
a_{k}^{\left( \alpha \right) }-\frac{1}{T}ka_{k-1}^{\left( \alpha \right) }%
\right] +\sum_{l}I_{kl}^{\alpha }a_{l}^{\left( \alpha \right)
}-\sum_{l}K_{kl}^{\alpha \alpha ^{\prime }}a_{l}^{\left( \alpha ^{\prime
}\right) }=\Lambda _{k}^{\alpha }+\Omega _{k}^{\alpha }
\end{equation}%
where the contributions from the Boltzmann operator are%
\begin{equation}
I_{kl}^{\alpha }=-\sum_{I_{\alpha }}\int d\overrightarrow{V}_{1}\;\phi
_{I_{\alpha }}^{\alpha }\left( \overrightarrow{V}_{1}\right) L_{k}^{\left(
\frac{D-2}{2}+\lambda _{\alpha }\right) }\left( \frac{m}{2k_{B}T}%
V_{1}^{2}\right) \mathcal{L}_{0}\left[ L_{l}^{\left( \frac{D-2}{2}+\lambda
_{\alpha }\right) }\left( \frac{m}{2k_{B}T}V^{2}\right) \phi _{I_{\alpha
}}^{\alpha }\left( \overrightarrow{V}\right) \right] ,
\end{equation}%
and the last coefficient on the left is%
\begin{equation}
K_{kl}^{\alpha \alpha ^{\prime }}=\int d\overrightarrow{V}_{1}\;\phi
_{I_{\alpha }}\left( \overrightarrow{V}_{1}\right) L_{k}^{\left( \frac{D-2}{2%
}+\lambda _{\alpha }\right) }\left( \frac{m}{2k_{B}T}V_{1}^{2}\right)
K_{\alpha ^{\prime }}^{\alpha }\left[ \phi _{I_{\alpha ^{\prime }}}\left(
\overrightarrow{V}_{1}\right) L_{l}^{\left( \frac{D-2}{2}+\lambda _{\alpha
^{\prime }}\right) }\left( \frac{m}{2k_{B}T}V^{2}\right) \right]
\end{equation}%
The source terms on the right are%
\begin{equation}
\Lambda _{k}^{\alpha }=-\int d\overrightarrow{V}_{1}\;\phi _{I_{\alpha
}}\left( \overrightarrow{V}_{1}\right) L_{k}^{\left( \frac{D-2}{2}+\lambda
_{\alpha }\right) }\left( \frac{m}{2k_{B}T}V_{1}^{2}\right) C^{\alpha
}\left( \overrightarrow{V}|f_{0}\right) \phi _{I_{\alpha }}\left(
\overrightarrow{V}_{1}\right)
\end{equation}%
and%
\begin{equation}
\Omega _{k}^{\alpha }=\int d\overrightarrow{V}_{1}\;\phi _{I_{\alpha
}}\left( \overrightarrow{V}_{1}\right) L_{k}^{\left( \frac{D-2}{2}+\lambda
_{\alpha }\right) }\left( \frac{m}{2k_{B}T}V_{1}^{2}\right) \Omega
_{i}^{\alpha }\left[ f_{0},f_{0}\right] .
\end{equation}
A straightforward evaluation using the orthogonality and standard recursion
relations of the associated Laguerre polynomials\cite{AbramStegun} gives%
\begin{eqnarray}
K_{kl}^{\alpha \alpha ^{\prime }} &=&\frac{\Gamma \left( \frac{1}{2}%
D+k+1\right) }{\Gamma \left( \frac{1}{2}D\right) \Gamma \left( k+1\right) }%
n\left( \frac{2k_{B}T}{m}\right) \delta _{\alpha ^{\prime }T}\delta
_{kl}\left( \delta _{\alpha n}\frac{\partial }{\partial n}+\delta _{\alpha T}%
\frac{\partial }{\partial T}\right) \frac{2}{Dnk_{B}}\xi _{0} \\
&&+\delta _{k1}\delta _{\alpha \nabla u}\delta _{\alpha ^{\prime }\nabla u}%
\frac{1}{2nk_{B}T}\xi ^{(0)}\left[ f_{0}L_{l}^{\left( \frac{D-2}{2}\right)
}\left( \frac{m}{2k_{B}T}V^{2}\right) \right] \left( \frac{1}{T}\right) n
\notag
\end{eqnarray}%
and
\begin{equation}
\Lambda _{k}^{\alpha }=\frac{\Gamma \left( \frac{D-2}{2}+k+1\right) }{\Gamma
\left( \frac{D}{2}\right) \Gamma \left( k+1\right) }\left(
\begin{array}{c}
\frac{2k_{B}T}{m}\left( \frac{1}{2}D+k\right) \left[ \frac{1}{k_{B}T}\frac{%
\partial p^{(0)C}}{\partial n}c_{k}+c_{k+1}\right] \\
-\frac{n}{T}\frac{2k_{B}T}{m}\left( \frac{1}{2}D+k\right) \left( \left( -%
\frac{1}{nk_{B}T}T\frac{\partial p^{(0)}}{\partial T}+2k+1\right)
c_{k}-\left( k+1\right) c_{k+1}-kc_{k-1}+\frac{\partial c_{k}}{\partial T}-%
\frac{\partial c_{k+1}}{\partial T}\right) \\
\frac{2}{Dk_{B}T}\left( \xi _{1}^{\nabla u}\left[ f_{0}\right]
-p^{(0)c}\right) \left( k\left( c_{k-1}-c_{k}\right) +T\frac{\partial c_{k}}{%
\partial T}\right) \\
\frac{2nk_{B}T}{m}\sqrt{\frac{D-1}{D}}\frac{\left( D+k\right) \left(
D+k+2\right) }{4}\left( c_{k+1}-c_{k}\right)%
\end{array}%
\right)
\end{equation}
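The orthogonality and normalization relations underlying these evaluations, $\int_{0}^{\infty }x^{a}e^{-x}L_{k}^{\left( a\right) }\left( x\right) L_{l}^{\left( a\right) }\left( x\right) dx=\delta _{kl}\Gamma \left( k+a+1\right) /k!$, can be checked numerically. The sketch below (plain Python with NumPy; not part of the original derivation) uses a hand-coded three-term recurrence and Gauss-Laguerre quadrature:

```python
import math
import numpy as np

def genlaguerre_eval(k, a, x):
    """Associated Laguerre polynomial L_k^(a)(x) via the three-term recurrence."""
    Lm1, L = 0.0, 1.0  # L_{-1} = 0, L_0 = 1
    for n in range(k):
        Lm1, L = L, ((2 * n + 1 + a - x) * L - (n + a) * Lm1) / (n + 1)
    return L

def laguerre_inner(k, l, a, nodes=40):
    """<L_k^(a), L_l^(a)> with weight x^a e^{-x} on [0, inf), by Gauss-Laguerre
    quadrature; exact up to rounding for integer a, since the integrand is then
    the Gauss-Laguerre weight times a polynomial."""
    x, w = np.polynomial.laguerre.laggauss(nodes)
    f = x**a * genlaguerre_eval(k, a, x) * genlaguerre_eval(l, a, x)
    return float(np.sum(w * f))

a = 1  # plays the role of (D-2)/2 + lambda_alpha, e.g. D = 4, lambda_alpha = 0
# orthogonality: the inner product vanishes for k != l
assert abs(laguerre_inner(1, 2, a)) < 1e-10
# normalization: Gamma(k + a + 1)/k! for k == l
k = 2
assert abs(laguerre_inner(k, k, a) - math.gamma(k + a + 1) / math.factorial(k)) < 1e-10
```

The surviving diagonal norms are precisely the $\Gamma $-factor prefactors appearing in the equations above.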
Finally, it is useful to note that the lowest order coefficients are related
to the kinetic parts of the transport coefficients (see Appendix \ref%
{AppExpandFluxes}). Specifically, the first order contribution to the
pressure tensor takes the usual form
\begin{equation}
P_{ij}^{(1)}=-\eta \left( \partial _{i}u_{j}+\partial _{j}u_{i}-\frac{2}{D}%
\delta _{ij}\left( \overrightarrow{\nabla }\cdot \overrightarrow{u}\right)
\right) -\gamma \delta _{ij}\left( \overrightarrow{\nabla }\cdot
\overrightarrow{u}\right)
\end{equation}%
where the shear viscosity, $\eta $, has a kinetic contribution given by%
\begin{equation}
\eta ^{K}=-2nk_{B}T\left( \frac{k_{B}T}{m}\right) \sqrt{\frac{D}{D-1}}%
a_{0}^{\partial u},
\end{equation}%
and the bulk viscosity $\gamma $ has no kinetic contribution. The first
order contribution to the heat flux vector is
\begin{equation}
\overrightarrow{q}^{\left( 1\right) }\left( \overrightarrow{r},t\right)
=-\mu \overrightarrow{\nabla }\rho -\kappa \overrightarrow{\nabla }T
\end{equation}%
where $\kappa $ is the coefficient of thermal conductivity and $\mu $ is a
new transport coefficient characterizing the way in which density gradients
can cause heat flow due to differential cooling rates. It vanishes in the
elastic limit. The kinetic parts of these transport coefficients are given
by
\begin{eqnarray}
\mu ^{K} &=&\left( nk_{B}T\left( \frac{k_{B}T}{m}\right) \frac{D+2}{2}%
\right) a_{1}^{\rho } \\
\kappa ^{K} &=&\left( nk_{B}T\left( \frac{k_{B}T}{m}\right) \frac{D+2}{2}%
\right) a_{1}^{T}. \notag
\end{eqnarray}%
These expressions are exact if $f_{0}$ is replaced by a Gaussian in eq.(\ref%
{ff1}); if the first order correction is instead written in terms of $f_{0}$,
then there are, in principle, additional terms in $c_{2}$ (as discussed in the
Appendix), but these would in any case be dropped here since
they are of order $c_{2}a_{s_{0}}^{\left( \gamma \right) }$. The collisional
contributions will be discussed below.
\subsection{Lowest order approximations}
The simplest nontrivial approximation is to keep only the lowest order
nonzero coefficient in each expansion in eq.(\ref{cex}): namely, $%
a_{1}^{\left( n\right) },a_{1}^{\left( T\right) },a_{2}^{\left( \nabla
u\right) }$ and $a_{0}^{\left( \partial u\right) }$. Since the transport
coefficients are more interesting than the distribution itself, we write
these equations in terms of the kinetic parts of the transport coefficients
giving%
\begin{equation}
\xi _{0}\left( D+2\right) \frac{1}{m}T\frac{\partial \mu ^{K}}{\partial T}%
+I_{11}^{n}\mu ^{K}-\xi _{0}\left( D+2\right) \frac{1}{m}T\left( \frac{%
\partial \ln n\chi _{0}}{\partial n}\right) \kappa ^{K}=\left( nk_{B}T\left(
\frac{k_{B}T}{m}\right) \frac{D+2}{2}\right) \left( \Omega _{1}^{n}+\frac{%
k_{B}T}{m}\frac{D\left( D+2\right) }{2}c_{2}\right) \label{l1}
\end{equation}%
\begin{equation*}
\xi _{0}\left( D+2\right) \frac{1}{m}\left[ T\frac{\partial \kappa ^{K}}{%
\partial T}+\left( T\frac{\partial }{\partial T}\ln \xi _{0}\right) \kappa
^{K}\right] +I_{11}^{T}\kappa ^{K}=mn\left( \frac{k_{B}T}{m}\right) ^{2}%
\frac{D+2}{2}\left( \Omega _{1}^{T}+\frac{n}{T}\frac{2k_{B}T}{m}\frac{1}{4}%
D\left( D+2\right) \left( 1+2c_{2}+\frac{\partial c_{2}}{\partial T}\right)
\right)
\end{equation*}%
\begin{align}
\frac{1}{4}\xi _{0}\frac{D+2}{k_{B}T}\left[ T\frac{\partial }{\partial T}%
a_{2}^{\left( \nabla u\right) }+2a_{2}^{\left( \nabla u\right) }\right]
+I_{22}^{\nabla u}a_{2}^{\left( \nabla u\right) }& =\Omega _{2}^{\nabla u}+%
\frac{D+2}{4k_{B}T}\left( \xi _{1}^{\nabla u}\left[ f_{0}\right]
-p^{(0)c}\right) \left( -2c_{2}+T\frac{\partial c_{2}}{\partial T}\right)
\notag \\
\xi _{0}\frac{D+2}{m}\left( \frac{2k_{B}T}{m}\right) T\frac{\partial \eta
^{K}}{\partial T}+I_{00}^{\partial u}\eta ^{K}& =mn\left( \frac{k_{B}T}{m}%
\right) ^{2}\left[ n\left( \frac{k_{B}T}{m}\right) D\left( D+2\right) -2%
\sqrt{\frac{D}{D-1}}\Omega _{0}^{\partial u}\right] \notag
\end{align}
The Boltzmann integrals and the source terms are evaluated in a
straightforward manner and the present evaluations were performed as
described in Appendix \ref{AppEvaluation}, making frequent use of symbolic
manipulation. The results can be written as%
\begin{eqnarray}
I_{rs}^{\gamma } &=&I_{rs}^{\gamma E}\left[ 1+\sum_{a}\int_{0}^{\infty
}K_{a}^{\ast }\left( -v\right) e^{-\frac{1}{2}v^{2}}v\left( \Delta
_{a}^{\ast }\left( -v\right) S_{rs}^{\gamma }\left( v\right) +\frac{1}{4}%
vg\left( v,\Delta _{a}^{\ast }\left( -v\right) \right) \right) dv\right]
\label{l2} \\
\Omega _{rs}^{\gamma } &=&\Omega _{rs}^{\gamma E}+\chi n^{2}\sigma ^{D}S_{D}%
\frac{1}{\sqrt{2\pi }}\sum_{a}\int_{0}^{\infty }K_{a}^{\ast }\left(
-v\right) e^{-\frac{1}{2}v^{2}}v\left( T_{rs}^{\gamma }\left( v\right)
+U_{rs}^{\gamma }\left( v\right) c_{2}+V_{rs}^{\gamma }\left( v\right) \frac{%
dc_{2}}{dT}\right) dv \notag
\end{eqnarray}%
with the elastic contributions%
\begin{eqnarray}
I_{11}^{nE} &=&I_{11}^{TE}=n^{2}\sigma ^{D-1}S_{D}\chi \left( \frac{k_{B}T}{m%
}\right) ^{3/2}\frac{2\left( D-1\right) }{\sqrt{\pi }} \label{l3} \\
I_{22}^{\nabla uE} &=&n^{2}\sigma ^{D-1}S_{D}\chi \left( \frac{k_{B}T}{m}%
\right) ^{1/2}\frac{\left( D-1\right) }{2\sqrt{\pi }} \notag \\
I_{00}^{\partial uE} &=&\chi n^{2}\sigma ^{D-1}S_{D}\left( \frac{k_{B}T}{m}%
\right) ^{5/2}\frac{4D}{\sqrt{\pi }} \notag
\end{eqnarray}%
and the inelastic kernels%
\begin{eqnarray}
S_{11}^{n}\left( v\right) &=&S_{11}^{T}\left( v\right) =-\frac{D+8}{16\left(
D-1\right) }\left( v^{2}-1\right) \label{l4} \\
S_{22}^{\nabla u}\left( v\right) &=&\frac{1}{64\left( D-1\right) }\left(
v^{6}-9v^{4}+\left( 8D+49\right) \allowbreak v^{2}-37-8D\right) \notag \\
&&-\frac{1}{64\left( D-1\right) }\left( v^{4}-6\allowbreak v^{2}+3\right)
\Delta ^{\ast }\left( v\right) \notag \\
S_{00}^{\partial u}\left( v\right) &=&\frac{1}{4D}\left( v^{2}-1\right)
\notag
\end{eqnarray}%
The elastic contributions to the sources are%
\begin{eqnarray}
\Omega _{1}^{nE} &=&\frac{1}{2}\frac{\partial n^{2}\chi }{\partial n}\sigma
^{D}S_{D}\left( \frac{k_{B}T}{m}\right) \frac{D+5}{4}c_{2} \label{l6} \\
\Omega _{1}^{TE} &=&-n^{2}\sigma ^{D}\chi S_{D}\frac{k_{B}T}{m}\frac{1}{T}%
\frac{3}{4}\left( 1+2c_{2}+\frac{dc_{2}}{dT}\right) \notag \\
\Omega _{1}^{\nabla uE} &=&n^{2}\sigma ^{D}\chi S_{D}\frac{D-7}{8D}c_{2}
\notag \\
\Omega _{1}^{\partial uE} &=&-n^{2}\sigma ^{D}\chi S_{D}\frac{k_{B}T}{m}%
\frac{1}{2}\sqrt{\frac{D-1}{D}} \notag
\end{eqnarray}%
and the inelastic kernels are%
\begin{eqnarray}
T_{11}^{n}\left( v\right) &=&\frac{1}{2}\frac{\partial \ln n^{2}\chi }{%
\partial n}\left( \frac{k_{B}T}{m}\right) \frac{1}{4}\left( \left(
v^{2}-3\right) g\left( v,\Delta _{a}^{\ast }\left( -v\right) \right)
-2\Delta ^{\ast }\left( v\right) \left( g\left( v,\Delta _{a}^{\ast }\left(
-v\right) \right) +v\right) \right) \label{l7} \\
T_{11}^{T}\left( v\right) &=&\frac{k_{B}}{16m}\left( 2\left( \left(
v^{2}-1\right) g\left( v,\Delta _{a}^{\ast }\left( -v\right) \right)
+v^{3}+5v\right) \Delta _{a}^{\ast }\left( -v\right) -\left(
9-4v^{2}+v^{4}\right) g\left( v,\Delta _{a}^{\ast }\left( -v\right) \right)
\right) \notag \\
T_{22}^{\nabla u}\left( v\right) &=&\frac{1}{8D}\left( \left( 2\Delta ^{\ast
}\left( v\right) +3-v^{2}\right) g\left( v,\Delta _{a}^{\ast }\left(
-v\right) \right) +v\Delta _{a}^{\ast }\left( -v\right) \left(
v^{2}-1-\Delta _{a}^{\ast }\left( -v\right) \right) \right) \notag \\
T_{00}^{\partial u}\left( v\right) &=&\frac{1}{2}\sqrt{\frac{D-1}{D}}\left(
\frac{k_{B}T}{m}\right) \left( v\Delta _{a}^{\ast }\left( -v\right) -g\left(
v,\Delta _{a}^{\ast }\left( -v\right) \right) \right) \notag
\end{eqnarray}%
and%
\begin{eqnarray}
U_{11}^{n}\left( v\right) &=&\frac{1}{2}\frac{\partial \ln n^{2}\chi }{%
\partial n}\left( \frac{k_{B}T}{m}\right) \frac{1}{64}\left(
v^{6}-9v^{4}+\left( 49+8D\right) v^{2}-37-8D\right) g\left( v,\Delta
_{a}^{\ast }\left( -v\right) \right) \\
&&+\frac{1}{2}\frac{\partial \ln n^{2}\chi }{\partial n}\left( \frac{k_{B}T}{%
m}\right) \frac{1}{32}\Delta _{a}^{\ast }\left( -v\right) \left(
-v^{4}+6v^{2}-3\right) \left( g\left( v,\Delta _{a}^{\ast }\left( -v\right)
\right) +v\right) \notag \\
U_{11}^{T}\left( v\right) &=&\frac{k_{B}}{m}\frac{1}{256}\left(
-v^{8}+14v^{6}+\left( -8D-88\right) v^{4}+\left( 126+48D\right)
v^{2}-\allowbreak 24D+33\right) g\left( v,\Delta _{a}^{\ast }\left(
-v\right) \right) \notag \\
&&+\frac{k_{B}}{m}\frac{1}{128}\Delta _{a}^{\ast }\left( -v\right) \left(
\left( v^{6}-11v^{4}+21v^{2}-3\right) g\left( v,\Delta _{a}^{\ast }\left(
-v\right) \right) +v\left( v^{6}-5v^{4}+9v^{2}-57\right) \right) \notag \\
U_{22}^{\nabla u}\left( v\right) &=&\frac{1}{128D}v\Delta _{a}^{\ast }\left(
-v\right) \left( \Delta _{a}^{\ast }\left( -v\right) \left(
-v^{4}+10v^{2}-15\right) +v^{6}-11v^{4}+v^{2}\left( 61+8D\right)
-123-24D\right) \allowbreak \allowbreak \notag \\
&&+\frac{1}{128D}g\left( v,\Delta _{a}^{\ast }\left( -v\right) \right)
\left( 2\Delta _{a}^{\ast }\left( -v\right) \left( v^{4}-6v^{2}+3\right)
-v^{6}+9v^{4}+\left( 8D-65\right) v^{2}-8D+53\right) \allowbreak \\
U_{00}^{\partial u}\left( v\right) &=&\sqrt{\frac{D-1}{D}}\left( \frac{k_{B}T%
}{m}\right) \frac{1}{32}\left( v\Delta _{a}^{\ast }\left(
v^{4}-10v^{2}+15\right) -g\left( v,\Delta _{a}^{\ast }\left( -v\right)
\right) \left( v^{4}-6v^{2}+3\right) \right) . \notag
\end{eqnarray}%
The only non-vanishing coefficient of the temperature derivative is
\begin{eqnarray}
V_{11}^{T}\left( v\right) &=&\frac{1}{128}g\left( v,\Delta _{a}^{\ast
}\left( -v\right) \right) \allowbreak \left( -v^{6}+9v^{4}-57v^{2}+45\right)
\\
&&+\frac{1}{64}\Delta _{a}^{\ast }\left( -v\right) \left( \left(
v^{4}-6v^{2}+3\right) g\left( v,\Delta _{a}^{\ast }\left( -v\right) \right)
+\left( v^{4}+6v^{2}-33\right) v\right) . \notag
\end{eqnarray}
The collisional contributions to the transport coefficients are, in this
approximation,%
\begin{eqnarray}
\eta ^{C} &=&\frac{2}{3}\theta \eta ^{K}+\frac{D}{D+2}\gamma _{1} \label{c1}
\\
\gamma &=&\gamma _{1}-\left( nk_{B}T\right) a_{2}^{\nabla u}\frac{%
S_{D}n^{\ast }\chi }{32D\sqrt{2\pi }}\sum_{a}\int_{0}^{\infty }K_{a}^{\ast
}\left( -v\right) e^{-\frac{1}{2}v^{2}}v\left( 3-6v^{2}+v^{4}\right) g\left(
v,\Delta _{a}^{\ast }\left( -v\right) \right) dv \notag \\
\mu ^{C} &=&\theta \mu ^{K} \notag \\
\kappa ^{C} &=&\theta \kappa ^{K}+\frac{D}{2}\frac{k_{B}}{m}\gamma _{1}
\notag \\
&&-m\chi n^{2}\sigma ^{D+1}\left( \frac{k_{B}T}{m}\right) ^{3/2}\frac{1}{4T%
\sqrt{\pi }}\frac{S_{D}}{D}c_{2}\left[ 1+\frac{1}{4}\sum_{a}\int_{0}^{\infty
}K_{a}^{\ast }\left( -v\right) v^{2}e^{-\frac{1}{2}v^{2}}\left(
v^{2}-3\right) g\left( v,\Delta _{a}^{\ast }\left( -v\right) \right) dv%
\right] \notag
\end{eqnarray}%
with%
\begin{eqnarray}
\gamma _{1} &=&m\sigma n\left( \frac{k_{B}T}{m}\right) ^{\frac{1}{2}}\frac{%
S_{D}n^{\ast }\chi }{D^{2}\sqrt{\pi }} \label{c2} \\
&&\times \left[ 1-\frac{1}{16}c_{2}+\frac{1}{4}\sum_{a}\int_{0}^{\infty
}K_{a}^{\ast }\left( -v\right) e^{-\frac{1}{2}v^{2}}v^{2}g\left( v,\Delta
_{a}^{\ast }\left( -v\right) \right) \left( 1+\frac{1}{16}c_{2}\left(
v^{4}-10v^{2}+15\right) \right) dv\right] \notag \\
\theta &=&\frac{3S_{D}}{2D\left( D+2\right) }n^{\ast }\chi \left[ 1+\frac{1}{%
2\sqrt{2\pi }}\sum_{a}\int_{0}^{\infty }K_{a}^{\ast }\left( -v\right) ve^{-%
\frac{1}{2}v^{2}}\left( v^{2}-1\right) g\left( v,\Delta _{a}^{\ast }\left(
-v\right) \right) dv\right] \notag
\end{eqnarray}%
Finally, the first order corrections to the heat source are%
\begin{eqnarray}
\xi _{0}\left[ f_{1}\right] &=&-\left( \overrightarrow{\nabla }\cdot
\overrightarrow{u}\right) a_{2}^{\nabla u}n^{2}\sigma ^{D}\chi S_{D}\left(
\frac{k_{B}T}{m\sigma ^{2}}\right) ^{1/2}\frac{k_{B}T}{32\sqrt{\pi }}%
\sum_{a}\int_{0}^{\infty }K_{a}^{\ast }\left( -v\right) ve^{-\frac{1}{2}%
v^{2}}\Delta _{a}^{\ast }\left( -v\right) \left( v^{4}-6v^{2}+3\right) dv
\label{h1} \\
\xi _{1}\left[ f_{0}\right] &=&\left( \overrightarrow{\nabla }\cdot
\overrightarrow{u}\right) n^{2}\sigma ^{D}\chi S_{D}\frac{k_{B}T}{2\sqrt{%
2\pi }D}\sum_{a}\int_{0}^{\infty }\;K_{a}^{\ast }\left( -v\right) \Delta
_{a}^{\ast }\left( -v\right) v^{2}e^{-\frac{1}{2}v^{2}}dv \notag \\
&&+c_{2}\left( \overrightarrow{\nabla }\cdot \overrightarrow{u}\right)
n^{2}\sigma ^{D}\chi S_{D}\frac{k_{B}T}{32D\sqrt{2\pi }}\sum_{a}\int_{0}^{%
\infty }\;K_{a}^{\ast }\left( -v\right) \Delta _{a}^{\ast }\left( -v\right)
e^{-\frac{1}{2}v^{2}}\left( 15-10v^{2}+v^{4}\right) v^{2}dv \notag
\end{eqnarray}%
Equations (\ref{l1})-(\ref{h1}) are the primary results of this paper. They
give a prescription for the evaluation of the transport properties for an
arbitrary model of energy dissipation at the Navier-Stokes level and in the
usual, lowest Sonine approximation. In the next Subsection, these are
illustrated by using them to give the transport properties of a simple
granular fluid.
\subsection{Transport in simple granular fluids}
For the simple granular fluid, recall that $\Delta ^{\ast }\left( v\right)
=\left( 1-\alpha ^{2}\right) \frac{1}{2}v^{2}$ and $g\left( v,\Delta \right)
=v\left( \alpha -1\right) $. Since there is no other energy scale, the
coefficients of the first order solution must scale with temperature as
\begin{eqnarray}
a_{1}^{\left( n\right) } &\sim &T^{-1/2} \\
a_{1}^{\left( T\right) } &\sim &T^{-3/2} \notag \\
a_{2}^{\left( \nabla u\right) } &\sim &T^{-1/2} \notag \\
a_{0}^{\left( \partial u\right) } &\sim &T^{-3/2} \notag \\
\xi _{0} &\sim &T^{3/2} \notag
\end{eqnarray}%
giving%
\begin{gather}
\left[ \xi _{0}\left( D+2\right) \frac{3}{2m}+I_{11}^{n}\right] \mu ^{K}+\xi
_{0}\left( D+2\right) \frac{1}{m}T\left( \frac{\partial \ln n\chi _{0}}{%
\partial n}\right) \kappa ^{K}=\left( nk_{B}T\left( \frac{k_{B}T}{m}\right)
\frac{D+2}{2}\right) \left( \Omega _{1}^{n}+\frac{D\left( D+2\right) }{2}%
\frac{k_{B}T}{m}c_{2}\right) \\
\left[ \xi _{0}\left( D+2\right) \frac{2}{m}+I_{11}^{T}\right] \kappa
^{K}=mn\left( \frac{k_{B}T}{m}\right) ^{2}\frac{D+2}{2}\Omega _{1}^{T}+\frac{%
1}{T}mn^{2}\left( \frac{k_{B}T}{m}\right) ^{3}\frac{D\left( D+2\right) ^{2}}{%
4}\left( 1+2c_{2}\right) \notag \\
\left( \frac{3}{8}\xi _{0}\frac{D+2}{k_{B}T}+I_{22}^{\nabla u}\right)
a_{2}^{\left( \nabla u\right) }=\Omega _{2}^{\nabla u}-\frac{D+2}{2k_{B}T}%
\left( \xi _{1}^{\nabla u}\left[ f_{0}\right] -p^{(0)c}\right) c_{2} \notag
\\
\left( \xi _{0}\frac{D+2}{m}\left( \frac{k_{B}T}{m}\right) +I_{00}^{\partial
u}\right) \eta ^{K}=mn\left( \frac{k_{B}T}{m}\right) ^{2}\left[ n\left(
\frac{k_{B}T}{m}\right) D\left( D+2\right) -2\sqrt{\frac{D}{D-1}}\Omega
_{0}^{\partial u}\right] . \notag
\end{gather}%
From the zeroth order solution, one has
\begin{equation}
\xi _{0}\left[ f_{0}\right] =-\left( 1-\alpha ^{2}\right) n^{\ast }\chi
\frac{S_{D}}{2\sqrt{\pi }}\left( \frac{k_{B}T}{m\sigma ^{2}}\right)
^{1/2}nk_{B}T
\end{equation}%
Equations (\ref{l2}) are easily evaluated, giving the Boltzmann integrals
\begin{eqnarray}
I_{11}^{n} &=&I_{11}^{T}=-n^{2}\sigma ^{D-1}S_{D}\chi \left( \frac{k_{B}T}{m}%
\right) ^{3/2}\frac{1}{8\sqrt{\pi }}\left( \alpha +1\right) \left( 3\alpha
\left( D+8\right) -11D-16\right) \\
I_{22}^{\nabla u} &=&-n^{2}\sigma ^{D-1}S_{D}\chi \left( \frac{k_{B}T}{m}%
\right) ^{1/2}\frac{1}{128\sqrt{\pi }}\left( \alpha +1\right) \left(
30\alpha ^{3}-30\alpha ^{2}+105\alpha +24\alpha D-56D-73\right) \notag \\
I_{00}^{\partial u} &=&n^{2}\sigma ^{D-1}S_{D}\chi \left( \frac{k_{B}T}{m}%
\right) ^{5/2}\frac{1}{\sqrt{\pi }}\left( 1+\alpha \right) \left( 3-3\alpha
+2D\right) \notag
\end{eqnarray}%
and sources%
\begin{eqnarray}
\Omega _{1}^{n} &=&\frac{1}{2}\frac{\partial n^{2}\chi }{\partial n}\sigma
^{D}S_{D}\left( \frac{k_{B}T}{m}\right) \frac{1}{16}\left( 1+\alpha \right) %
\left[ 6\alpha \left( 1-\alpha \right) -\left( 3\alpha ^{2}-3\alpha
+10+2D\right) c_{2}\right] \\
\Omega _{1}^{T} &=&\frac{3}{16}n^{2}\sigma ^{D}\chi S_{D}\frac{k_{B}}{m}%
\left( \alpha +1\right) ^{2}\left[ 2\alpha -1+\left( \alpha +1\right) c_{2}%
\right] \notag \\
\Omega _{2}^{\nabla u} &=&\frac{3}{64D}n^{2}\sigma ^{D}\chi S_{D}\left(
1+\alpha \right) \left( \left( 5\alpha -1\right) \left( 1-\alpha \right)
\left( 1+\alpha \right) -\frac{1}{6}c_{2}\left( 15\alpha ^{3}-3\alpha
^{2}+3\left( 4D+15\right) \alpha -\left( 20D+1\right) \right) \right) \notag
\\
\Omega _{0}^{\partial u} &=&-\frac{1}{8}n^{2}\sigma ^{D}\chi S_{D}k_{B}T%
\sqrt{\left( \frac{D-1}{D}\right) }\left( \alpha +1\right) \frac{3\alpha -1}{%
m} \notag
\end{eqnarray}%
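As a sanity check (not part of the original derivation), the elastic limit $\alpha \rightarrow 1$ of these integrals should reproduce the elastic contributions $I^{E}$ of eq.(\ref{l3}). Dividing out the common prefactor $n^{2}\sigma ^{D-1}S_{D}\chi $ and the powers of $k_{B}T/m$, a brief numeric check in plain Python:

```python
import math

# Boltzmann integrals with the common prefactor n^2 sigma^(D-1) S_D chi and the
# appropriate power of k_B T/m divided out
def I11_reduced(D, alpha):
    return -(alpha + 1) * (3 * alpha * (D + 8) - 11 * D - 16) / (8 * math.sqrt(math.pi))

def I22_reduced(D, alpha):
    return -(alpha + 1) * (30 * alpha**3 - 30 * alpha**2 + 105 * alpha
                           + 24 * alpha * D - 56 * D - 73) / (128 * math.sqrt(math.pi))

def I00_reduced(D, alpha):
    return (1 + alpha) * (3 - 3 * alpha + 2 * D) / math.sqrt(math.pi)

for D in (2, 3, 4, 5):
    # elastic limit alpha -> 1 must reproduce the reduced elastic values I^E
    assert abs(I11_reduced(D, 1.0) - 2 * (D - 1) / math.sqrt(math.pi)) < 1e-12
    assert abs(I22_reduced(D, 1.0) - (D - 1) / (2 * math.sqrt(math.pi))) < 1e-12
    assert abs(I00_reduced(D, 1.0) - 4 * D / math.sqrt(math.pi)) < 1e-12
```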
The low density (Boltzmann)\ transport coefficients in the elastic ($\alpha
=1$) limit can be read off and are%
\begin{eqnarray}
\kappa _{0} &=&k_{B}\left( \frac{k_{B}T}{m}\right) ^{1/2}\frac{\sqrt{\pi }}{%
8\sigma ^{D-1}S_{D}}\frac{D\left( D+2\right) ^{2}}{D-1} \\
\eta _{0} &=&\sqrt{\frac{mk_{B}T}{\pi }}\frac{\left( D+2\right) \pi }{4S_{D}}%
\sigma ^{1-D} \notag \\
\mu _{0} &=&\gamma _{0}=0. \notag
\end{eqnarray}%
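These $D$-dimensional expressions can be compared with the classic three-dimensional Chapman-Enskog results $\eta _{0}=\frac{5}{16\sigma ^{2}}\sqrt{mk_{B}T/\pi }$ and $\kappa _{0}=\frac{75}{64\sigma ^{2}}k_{B}\sqrt{k_{B}T/\pi m}$. A short numerical sketch (plain Python, not part of the original text; it assumes the convention $S_{D}=2\pi ^{D/2}/\Gamma \left( D/2\right) $ for the solid-angle factor):

```python
import math

def S(D):
    """Surface area of the unit sphere in D dimensions (assumed convention)."""
    return 2 * math.pi ** (D / 2) / math.gamma(D / 2)

def eta0(D, m, kB, T, sigma):
    # low-density elastic shear viscosity quoted in the text
    return math.sqrt(m * kB * T / math.pi) * (D + 2) * math.pi / (4 * S(D)) * sigma ** (1 - D)

def kappa0(D, m, kB, T, sigma):
    # low-density elastic thermal conductivity quoted in the text
    return (kB * math.sqrt(kB * T / m) * math.sqrt(math.pi)
            / (8 * sigma ** (D - 1) * S(D)) * D * (D + 2) ** 2 / (D - 1))

m = kB = T = sigma = 1.0
# classic D = 3 hard-sphere Chapman-Enskog values
eta_ref = (5.0 / 16.0) * math.sqrt(m * kB * T / math.pi) / sigma ** 2
kappa_ref = (75.0 / 64.0) * kB * math.sqrt(kB * T / (math.pi * m)) / sigma ** 2
assert abs(eta0(3, m, kB, T, sigma) - eta_ref) < 1e-12
assert abs(kappa0(3, m, kB, T, sigma) - kappa_ref) < 1e-12
```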
To facilitate comparison of the finite density transport coefficients to
those of ref.\cite{DuftyGranularTransport}, it is useful to introduce
dimensionless quantities%
\begin{eqnarray}
\nu _{T}^{\ast } &=&\chi \frac{D-1}{2D}\left( \alpha +1\right) \left( 1+%
\frac{3\left( D+8\right) \left( 1-\alpha \right) }{8\left( D-1\right) }%
\right) \\
\nu _{\eta }^{\ast } &=&\chi \left( 1-\frac{1}{4D}\left( 1-\alpha \right)
\left( 2D-3\alpha -3\right) \right) \notag \\
\nu _{\gamma }^{\ast } &=&-\frac{1}{48}\chi \left( \frac{\alpha +1}{2}%
\right) \left( 30\alpha ^{3}-30\alpha ^{2}+105\alpha +24\alpha
D-56D-73\right) \notag \\
\zeta ^{\ast } &=&\frac{D+2}{4D}\chi \left( 1-\alpha ^{2}\right) \notag
\end{eqnarray}%
so that
\begin{eqnarray}
I_{11}^{n} &=&I_{11}^{T}=\frac{2D}{\sqrt{\pi }}S_{D}nn^{\ast }\left( \frac{%
k_{B}T}{m}\right) \left( \frac{k_{B}T}{m\sigma ^{2}}\right) ^{1/2}\nu
_{T}^{\ast } \\
I_{00}^{\partial u} &=&\frac{4D}{\sqrt{\pi }}S_{D}nn^{\ast }\left( \frac{%
k_{B}T}{m}\right) ^{2}\left( \frac{k_{B}T}{m\sigma ^{2}}\right) ^{1/2}\nu
_{\eta }^{\ast } \notag \\
I_{22}^{\nabla u} &=&\frac{3}{4\sqrt{\pi }}S_{D}nn^{\ast }\left( \frac{k_{B}T%
}{m\sigma ^{2}}\right) ^{1/2}\nu _{\gamma }^{\ast } \notag \\
\frac{1}{m}\xi ^{(0)} &=&-\frac{2D}{\sqrt{\pi }\left( D+2\right) }%
S_{D}nn^{\ast }\frac{k_{B}T}{m}\left( \frac{k_{B}T}{m\sigma ^{2}}\right)
^{1/2}\zeta ^{\ast } \notag
\end{eqnarray}%
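These identifications can be checked directly: with the common dimensional prefactors divided out, the integrals $I_{11}^{n}$, $I_{00}^{\partial u}$, $I_{22}^{\nabla u}$ and the cooling rate quoted earlier must coincide with $\frac{2D}{\sqrt{\pi }}\nu _{T}^{\ast }$, $\frac{4D}{\sqrt{\pi }}\nu _{\eta }^{\ast }$, $\frac{3}{4\sqrt{\pi }}\nu _{\gamma }^{\ast }$ and $-\frac{2D}{\sqrt{\pi }\left( D+2\right) }\zeta ^{\ast }$ for all $\alpha $ and $D$. A brief numeric verification (plain Python, not part of the original text):

```python
import math

def nuT_star(D, alpha, chi=1.0):
    return chi * (D - 1) / (2 * D) * (alpha + 1) * (
        1 + 3 * (D + 8) * (1 - alpha) / (8 * (D - 1)))

def nu_eta_star(D, alpha, chi=1.0):
    return chi * (1 - (1 - alpha) * (2 * D - 3 * alpha - 3) / (4 * D))

def nu_gamma_star(D, alpha, chi=1.0):
    return -chi / 48 * (alpha + 1) / 2 * (
        30 * alpha**3 - 30 * alpha**2 + 105 * alpha + 24 * alpha * D - 56 * D - 73)

def zeta_star(D, alpha, chi=1.0):
    return (D + 2) / (4 * D) * chi * (1 - alpha**2)

# Boltzmann integrals and cooling rate with the common dimensional prefactors
# divided out (chi = 1 below)
def I11_red(D, alpha):
    return -(alpha + 1) * (3 * alpha * (D + 8) - 11 * D - 16) / (8 * math.sqrt(math.pi))

def I00_red(D, alpha):
    return (1 + alpha) * (3 - 3 * alpha + 2 * D) / math.sqrt(math.pi)

def I22_red(D, alpha):
    return -(alpha + 1) * (30 * alpha**3 - 30 * alpha**2 + 105 * alpha
                           + 24 * alpha * D - 56 * D - 73) / (128 * math.sqrt(math.pi))

def xi0_red(D, alpha):
    return -(1 - alpha**2) / (2 * math.sqrt(math.pi))

for D in (2, 3, 4, 5):
    for al in (0.3, 0.7, 1.0):
        rt = math.sqrt(math.pi)
        assert abs(I11_red(D, al) - 2 * D / rt * nuT_star(D, al)) < 1e-12
        assert abs(I00_red(D, al) - 4 * D / rt * nu_eta_star(D, al)) < 1e-12
        assert abs(I22_red(D, al) - 3 / (4 * rt) * nu_gamma_star(D, al)) < 1e-12
        assert abs(xi0_red(D, al) + 2 * D / (rt * (D + 2)) * zeta_star(D, al)) < 1e-12
```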
and then%
\begin{eqnarray}
\kappa ^{K} &=&\kappa _{0}\frac{D-1}{D}\left[ \nu _{T}^{\ast }-2\zeta ^{\ast
}\right] ^{-1}\left( 1+2c_{2}+\frac{3}{8D\left( D+2\right) }n^{\ast }\chi
S_{D}\left( \alpha +1\right) ^{2}\left( 2\alpha -1+\left( \alpha +1\right)
c_{2}\right) \right) \\
\mu ^{K} &=&2\kappa _{0}\frac{T}{n}\left[ \nu _{T}^{\ast }-2\zeta ^{\ast }%
\right] ^{-1} \notag \\
&&\times \left( \kappa ^{K\ast }\zeta ^{\ast }+\frac{D-1}{D}c_{2}+\frac{1}{2}%
\frac{\partial n^{\ast 2}\chi }{\partial n^{\ast }}S_{D}\frac{3\left(
D-1\right) }{8D^{2}\left( D+2\right) }\left( 1+\alpha \right) \left(
-2\alpha \left( 1-\alpha \right) +\frac{1}{6}\left( 3\alpha ^{2}-3\alpha
+10+2D\right) 2c_{2}\right) \right) \notag \\
\eta ^{K} &=&\eta _{0}\left[ \nu _{\eta }^{\ast }-\frac{1}{2}\zeta ^{\ast }%
\right] ^{-1}\left( 1+\frac{S_{D}}{4D\left( D+2\right) }n^{\ast }\chi \left(
\alpha +1\right) \left( 3\alpha -1\right) \right) \notag \\
a_{2}^{\nabla u} &=&\frac{\eta _{0}}{nk_{B}T}\frac{1}{4D\left( D+2\right) }%
\left[ \nu _{\gamma }^{\ast }-D\zeta ^{\ast }\right] ^{-1}\left( \chi
n^{\ast }S_{D}\lambda ^{\ast }+\frac{16D\left( D+2\right) }{nk_{B}T}%
p^{(0)c}\left( \frac{1}{3}-\alpha \right) c_{2}\right) \notag
\end{eqnarray}%
where $\kappa ^{K\ast }=\kappa ^{K}/\kappa _{0}$ and
\begin{equation}
\lambda ^{\ast }=\left( 1+\alpha \right) \left[ \left( 5\alpha -1\right)
\left( 1-\alpha \right) \left( 1+\alpha \right) -\frac{1}{6}c_{2}\left(
15\alpha ^{3}-3\alpha ^{2}+3\left( 4D+15\right) \alpha -\left( 20D+1\right)
\right) \right]
\end{equation}
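In the elastic, dilute limit ($\alpha =1$, $c_{2}=0$, $n^{\ast }\rightarrow 0$, $\chi \rightarrow 1$) the kinetic parts above should reduce to $\kappa _{0}$ and $\eta _{0}$. The following sketch (plain Python, not part of the original text; it transcribes $\nu _{T}^{\ast }$, $\nu _{\eta }^{\ast }$ and $\zeta ^{\ast }$ from the expressions above and assumes the convention $S_{D}=2\pi ^{D/2}/\Gamma \left( D/2\right) $) checks this:

```python
import math

def nu_T(chi, D, alpha):
    return chi * (D - 1) / (2 * D) * (alpha + 1) * (
        1 + 3 * (D + 8) * (1 - alpha) / (8 * (D - 1)))

def nu_eta(chi, D, alpha):
    return chi * (1 - (1 - alpha) * (2 * D - 3 * alpha - 3) / (4 * D))

def zeta_star(chi, D, alpha):
    return (D + 2) / (4 * D) * chi * (1 - alpha ** 2)

D, alpha, chi, nstar, c2 = 3, 1.0, 1.0, 0.0, 0.0
SD = 2 * math.pi ** (D / 2) / math.gamma(D / 2)  # assumed convention for S_D
# kinetic parts relative to the dilute elastic values kappa_0 and eta_0
kappaK = (D - 1) / D / (nu_T(chi, D, alpha) - 2 * zeta_star(chi, D, alpha)) * (
    1 + 2 * c2 + 3 * nstar * chi * SD / (8 * D * (D + 2))
    * (alpha + 1) ** 2 * (2 * alpha - 1 + (alpha + 1) * c2))
etaK = (1 + SD * nstar * chi * (alpha + 1) * (3 * alpha - 1) / (4 * D * (D + 2))) \
       / (nu_eta(chi, D, alpha) - 0.5 * zeta_star(chi, D, alpha))
assert abs(kappaK - 1.0) < 1e-12  # kappa^K -> kappa_0
assert abs(etaK - 1.0) < 1e-12    # eta^K -> eta_0
```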
The collisional parts of the transport coefficients are%
\begin{eqnarray}
\eta ^{C} &=&\eta ^{K}\frac{S_{D}}{D\left( D+2\right) }n^{\ast }\chi \left(
\frac{1+\alpha }{2}\right) +\frac{D}{D+2}\gamma \\
\gamma &=&\eta _{0}\frac{4S_{D}^{2}}{\pi \left( D+2\right) D^{2}}\left(
\frac{1+\alpha }{2}\right) n^{\ast 2}\chi \left( 1-\frac{1}{16}c_{2}\right)
\notag \\
\mu ^{C} &=&\frac{3S_{D}}{2D\left( D+2\right) }n^{\ast }\chi \left( \frac{%
1+\alpha }{2}\right) \mu ^{K} \notag \\
\kappa ^{C} &=&\frac{3S_{D}}{2D\left( D+2\right) }n^{\ast }\chi \left( \frac{%
1+\alpha }{2}\right) \kappa ^{K}+\kappa _{0}\left( 1+\alpha \right) \frac{%
2S_{D}^{2}\left( D-1\right) }{\pi D^{2}\left( D+2\right) ^{2}}\allowbreak
n^{\ast 2}\chi \left( 1-\frac{7}{16}c_{2}\right) \notag
\end{eqnarray}%
and the first order correction to the cooling rate is%
\begin{eqnarray}
\xi ^{(1)} &=&\xi _{0}\left[ f_{1}\right] +\xi _{1}\left[ f_{0}\right] \\
\xi _{0}\left[ f_{1}\right] &=&-a_{2}^{\nabla u}\left( \overrightarrow{%
\nabla }\cdot \overrightarrow{u}\right) \left( \frac{3}{2}nk_{B}T\right)
\frac{nk_{B}T}{\eta _{0}}\frac{D+2}{64}\left( 1-\alpha ^{2}\right) \chi
\notag \\
\xi _{1}\left[ f_{0}\right] &=&\left( \overrightarrow{\nabla }\cdot
\overrightarrow{u}\right) \left( \frac{3}{2}nk_{B}T\right) \frac{S_{D}}{4D}%
n^{\ast }\chi \left( 1-\alpha ^{2}\right) \notag
\end{eqnarray}%
Taking into account that the quantities $\zeta =-\frac{2}{3nk_{B}T}\xi ,$ $%
c^{\ast }=2c_{2}$ and $c_{D}=\frac{1}{2}a_{2}^{\nabla u}$ are used in ref.%
\cite{DuftyGranularTransport}, it is easy to verify that the present
expressions agree for the special case of $D=3$ with those of ref.\cite%
{DuftyGranularTransport} up to terms of order $c^{\ast }a_{s_{0}}^{\left(
\gamma \right) }$ except for the expression for $a_{2}^{\nabla u}$. The
expressions given for this quantity in ref.\cite{DuftyGranularTransport} are
incorrect; according to ref.\cite{priv:2005}, the correct expressions agree
with those given here except for the terms $-\frac{1}{6}c_{2}\left( 3\left(
4D+15\right) \alpha -\left( 20D+1\right) \right) $ in the expression for $%
\lambda ^{\ast }$ which give $-\frac{1}{6}c_{2}\left( 81\alpha -61\right) $
for $D=3$ whereas ref.~\cite{priv:2005} gives $\frac{1}{12}c^{\ast }\left(
159\alpha -19\right) $.
\section{Conclusions}
A normal solution to the Enskog approximation for a hard-sphere gas with
energy loss has been determined using the Chapman-Enskog procedure to first
order in the gradients, and the transport properties were given to second order,
thus specifying the Navier-Stokes hydrodynamic description. The zeroth order
distribution function, which describes the homogeneous cooling state, was
expanded about equilibrium, eq.(\ref{hsc-distribution}), and the equations
for the coefficients were given. The required collision integrals were expressed
in terms of a generating function allowing for evaluation using symbolic
mathematical packages, and the integrals needed
to determine the first correction to the Gaussian approximation were given
explicitly in the form of one-dimensional integrals. The expressions for the
transport properties have similarly been reduced to simple quadratures for
the standard lowest-Sonine approximation. The Navier-Stokes equations for
such a system thus take the usual form, eq.(\ref{balance}), with pressure
tensor%
\begin{equation}
P_{ij}=p^{(0)}\delta _{ij}-\eta \left( \partial _{i}u_{j}+\partial _{j}u_{i}-%
\frac{2}{D}\delta _{ij}\left( \overrightarrow{\nabla }\cdot \overrightarrow{u%
}\right) \right) -\gamma \delta _{ij}\left( \overrightarrow{\nabla }\cdot
\overrightarrow{u}\right)
\end{equation}%
and heat-flux vector%
\begin{equation}
\overrightarrow{q}\left( \overrightarrow{r},t\right) =-\mu \overrightarrow{%
\nabla }\rho -\kappa \overrightarrow{\nabla }T
\end{equation}%
where $\mu $ represents a new transport coefficient that is not present when the
collisions conserve energy. Equations (\ref{l1})-(\ref{l7}) determine the
kinetic parts of the transport coefficients and eqs.~(\ref{c1}) and (\ref{c2})
determine their collisional parts. The pressure, $p^{(0)}$, is given in
eq.(\ref{pressure}) and the source term in the temperature equation, which
accounts for the cooling, is given by eq.(\ref{s1}) and eq.(\ref{h1}).
Finally, as a simple application, the transport properties for a granular
fluid in $D$ dimensions were given and previous results for the special case
$D=3$ recovered.
\begin{figure*}[tbp]
\includegraphics[angle=0,scale=0.4]{fig1}
\caption{The four transport coefficients for a simple granular fluid in 3
dimensions at reduced density $n\protect\sigma^{3}=0.5$. The lines are the
results of ref.\protect\cite{DuftyGranularTransport}, the circles are the
results of this paper and the broken line is the Gaussian approximation.}
\label{fig1}
\end{figure*}
One question which has been left unanswered is whether we can judge the
effect of neglecting some of the contributions to the transport coefficients
due to the non-Gaussian part of the zeroth order distribution. To answer
this, we show in Fig.~\ref{fig1} the four transport coefficients for a simple
granular fluid in three dimensions at a moderately high density of $n\sigma
^{3}=0.5$. The values are calculated from the expressions of ref.\cite%
{DuftyGranularTransport} which include all contributions in $c_{2}$, those
given in the previous Section and the results of the Gaussian approximation,
$c_{2}=0$. All three approximations are in agreement for the shear and bulk
viscosity, but the Gaussian approximation gives quantitatively poor results
for the thermal conductivity and the new transport coefficient. On the other
hand, the expressions given above are in good agreement with the full
expressions for the entire range of the coefficient of restitution thus
giving some justification for the approximations used here.
\begin{acknowledgements}
This work was supported in part by the European Space Agency
under contract number C90105.
\end{acknowledgements}
\section{Introduction}
Since its introduction by Grothendieck, local cohomology has become a major part of commutative algebra that has been studied from different points of view. When $R$ is a polynomial or power series ring over a field $k$, each local cohomology module $\lc^j_I(R)$ admits a natural module structure over $D(R,k)$, the ring of $k$-linear differential operators ($D(R,k)$-modules will be reviewed in \S\ref{section: preliminaries}). In characteristic 0, \cite{LyubeznikFinitenessLocalCohomology} shows that $\lc^j_I(R)$ has finite length as a $D(R,k)$-module, though $\lc^j_I(R)$ is rarely finitely generated as an $R$-module. To this day, using the finite length property of $\lc^j_I(R)$ in the category of $D(R,k)$-modules is still the only way to prove that $\lc^j_I(R)$ has finitely many associated primes in characteristic 0. In characteristic $p$, Frobenius action on local cohomology modules was used with great success by a number of authors, {\it e.g.}, \cite{PeskineSzpiroDimensionProjective}, \cite{HartshorneSpeiserLocalCohomologyInCharacteristicP} and \cite{HunekeSharpBassnumbersoflocalcohomologymodules}.
Lyubeznik (\cite{LyubeznikFModulesApplicationsToLocalCohomology})
conceptualizes the previous work to develop a theory of $F$-modules in characteristic $p$
(the reader may find an overview in \S\ref{section: preliminaries}).
As we have seen, in characteristic $p$, local cohomology modules $\lc^j_I(R)$ can be viewed as both $D(R,k)$-modules and $F$-modules; \cite{LyubeznikFModulesApplicationsToLocalCohomology} compares these two points of view. It is shown that each $F$-module $M$ admits a natural $D(R,k)$-module structure and that its length as an $F$-module, $l_{F_R}(M)$, is no more than its length as a $D(R,k)$-module, $l_{D(R,k)}(M)$. The comparison of these two points of view was continued in \cite{BlickleDmodulestructureofRFmodules}, where Blickle shows that over an algebraically closed field
the $D(R,k)$-module length is equal to the $F^{\infty}$-module length ($F^{\infty}$-modules will be reviewed in \S\ref{section: preliminaries}). The fact that local cohomology modules have finite length as $D(R,k)$-modules has found many applications; for instance \cite{NunezBetancourtWittGeneralizedLyubeznikNumbers} introduces numerical invariants of local rings using the length of local cohomology modules as $D(R,k)$-modules and shows that there are close connections between these invariants and $F$-singularities.
Despite the importance of the finiteness of the length of local cohomology modules as $D(R,k)$-modules and $F$-modules, finding the actual length of local cohomology modules as such modules remains an intriguing and difficult open question. In this paper we provide partial answers to this question in characteristic $p$.
\begin{theorem}[Theorems \ref{theorem--D-module length for isolated singularities} and \ref{theorem--lower bounds of F-module length for F-pure}]
Let $R=k[[x_1,\dots, x_n]]$ (or $k[x_1,\dots,x_n]$) with $\m=(x_1,\dots,x_n)$, where $k$ is a field of characteristic $p>0$. Let $A=R/I$ be reduced and equidimensional (respectively, graded reduced and equidimensional) of dimension $d\geq 1$.
\begin{enumerate}
\item If $A$ has an isolated non-$F$-rational point at $\fm$ ({\it e.g.}, $A$ has an isolated singularity at $\m$), then
\[l_{D(R,k)}(\lc^{n-d}_I(R))=\dim_k(0^*_{\lc^d_{\fm}(A)})_s+c=\dim_k((\lc_{\fm}^d(A))_s)+c\]
where $c$ is the number of minimal primes of $A$. Moreover, if $k$ is separably closed, then
\[l_{F_R}(\lc^{n-d}_I(R))=l_{F_R^e}(\lc^{n-d}_I(R))= l_{F_R^\infty}(\lc^{n-d}_I(R))= l_{D_R}(\lc^{n-d}_I(R))=\dim_k(0^*_{\lc^d_{\fm}(A)})_s+c.\]
\item If $A$ is $F$-pure and quasi-Gorenstein, then $l_{F_R}(\lc_I^{n-d}(R))$ is exactly the number of special primes of $\lc_\m^d(A)$.
\end{enumerate}
\end{theorem}
One ought to remark that there is an effective algorithm to compute special primes of $\lc_\m^d(A)$ (\cite{KatzmanZhangAlgorithmAnnihilatorAritinian}), hence the result above provides a practical tool to compute $l_{F_R}(\lc_I^{n-d}(R))$ when $A=R/I$ is $F$-pure and quasi-Gorenstein.
We also construct the first example of a local cohomology module over an algebraically closed field whose $D(R,k)$-module length disagrees with its $F$-module length.
\begin{theorem}[Proposition \ref{proposition--D-length bigger than F-length 2}]
\label{theorem: main example}
Let $R = \overline{\mathbb{F}}_{p}[x, y, z, t]$ with $p \equiv 4 \pmod 7$ and $f = tx^7 + ty^7 + z^7$. Then
\[
l_{F_R}(\lc^1_{f}(R)) = 3 < 7 = l_{F^\infty_R}(\lc^1_{f} (R))=l_{D(R,\overline{\mathbb{F}}_p)} (\lc^1_{f} (R)).
\]
\end{theorem}
While finding the actual length remains elusive in its full generality, we provide both upper and lower bounds.
\begin{theorem}[Theorem \ref{theorem: length of localization}]
\label{theorem: general bound on hypersurface}
Let $R=k[x_1,\dots,x_n]$ be a polynomial ring over a field $k$ and $f\in R$ be a polynomial of degree $d$. Then
\[l_{D(R,k)}(\lc^1_{f}(R))\leq (d+1)^n-1.\]
\end{theorem}
\begin{theorem}[Theorem \ref{theorem--Upper bounds for D-module length when singular locus has dimension 1}]
\label{theorem: formulas and bounds}
Let $R=k[x_1,\dots,x_n]$ and $\m=(x_1,\dots,x_n)$. Let $I$ be a homogeneous reduced and equidimensional ideal of $R$. Set $A=R/I$ with $\dim A=d\geq 2$. Suppose the non-$F$-rational locus of $A$ has dimension $\leq1$ (e.g., the nonsingular locus has dimension $\leq1$). Then we have
\begin{eqnarray*}
l_{D(R,k)}(\lc_I^{n-d}(R))&\leq &c+\sum_{\dim R/P=1}\dim_{\kappa(P)} (\lc_{P\widehat{A_P}}^{d-1}(\widehat{A_P}))_s+\dim_k({\lc^d_{\fm}(A)})_s\\
&=&c+\sum_{\dim R/P=1} \dim_{\kappa(P)}(0^*_{\lc_{P\widehat{A_P}}^{d-1}(\widehat{A_P})})_s+\dim_k(0^*_{\lc^d_{\fm}(A)})_s
\end{eqnarray*}
where $c$ is the number of minimal primes of $I$.
\end{theorem}
\begin{theorem}[Theorem \ref{theorem--lower bounds of F-module length for F-pure}]
\label{theorem: F-length for F-pure}
Let $k$ be a field of positive characteristic, let $R$ denote the local ring $k[[x_1, \ldots, x_n]]$
(or the graded ring $k[x_1, \ldots, x_n]$), and $\m = (x_1, \ldots, x_n)$.
Let $A=R/I$ be reduced, equidimensional
(or, respectively, graded reduced and equidimensional), and $F$-pure of dimension $d\geq 1$.
Then $l_{F_R}(\lc_I^{n-j}(R))$ is at least the number of special primes of $\lc_\m^j(A)$.
\end{theorem}
The upper and lower bounds on the $D(R,k)$-module and $F_R$-module length in the above results are sharp in many cases (see \S \ref{section: formulas and bounds}, \S \ref{section: F-pure}, \S \ref{section: diagonal hypersurfaces}),
and we can explicitly describe an $F_R$-submodule filtration of $\lc_I^{n-d}(R)$ in terms of the generating morphisms when $R/I$ is Cohen-Macaulay, which is maximal when $R/I$ is Gorenstein and $F$-pure (see Theorems \ref{Theorem: F-length for CM rings} and \ref{Theorem: F-length for Gorenstein rings}).
We also construct an example (Example \ref{example: completion of simple no longer simple}) of a simple $D(R,k)$-module whose completion at a prime ideal $P$ is not a simple $D(\widehat{R}_P,\kappa(P))$-module.
Our paper is organized as follows. In Section \ref{section: preliminaries}, we recall some basic notions and results regarding $D(R,k)$-modules, $F$-modules, and tight closure theory. Section \ref{section: general bound} is concerned with Theorem \ref{theorem: general bound on hypersurface}. Section \ref{section: formulas and bounds} is devoted to proving Theorem \ref{theorem: formulas and bounds}. In Section \ref{section: F-pure} we prove Theorem \ref{theorem: F-length for F-pure}, and we also describe explicitly the maximal $F_R$-module filtration of $\lc_I^{n-d}(R)$ in terms of the generating morphisms when $R/I$ is Gorenstein and $F$-injective (the Cohen-Macaulay $F$-injective case will also be discussed). In Section \ref{section: diagonal hypersurfaces}, we compute the dimension of the stable part (under the natural Frobenius action) of the top local cohomology of Fermat hypersurfaces. \S \ref{section: examples} proves Theorem \ref{theorem: main example} and related results. Examples and remarks showing the sharpness of our bounds will be given throughout.
\subsection*{Acknowledgements}
This material is based partly upon work supported by the National Science Foundation under Grant No.1321794;
some of this work was done at the Mathematics Research Community (MRC) in Commutative Algebra in June 2015.
The second, third and fourth authors would like to thank the staff and organizers of the MRC and the American Mathematical Society for the support provided.
The first author thanks the Department of Mathematics of the University of Illinois at Chicago for its hospitality while working on this and other projects. The authors thank the referee for valuable comments.
\section{Preliminaries}
\label{section: preliminaries}
Throughout this paper, we always assume that $R=k[[x_1,\dots, x_n]]$ or $R=k[x_1,\dots,x_n]$ where $k$ is a field (not necessarily algebraically closed or perfect), and $A$ is a reduced and equidimensional quotient or graded reduced and equidimensional quotient of $R$ of dimension $d\geq 1$. We set $\fm=(x_1,\dots,x_n)$. In this section we collect some notations and preliminary results from \cite{LyubeznikFModulesApplicationsToLocalCohomology} and \cite{BlickleIntersectionhomologyDmodule}.
\subsection*{$D$-modules} The differential operators $\delta$: $R\to R$ of order $\leq n$ can be defined inductively as follows. A differential operator of order $0$ is just multiplication by an element of $R$. A differential operator of order $\leq n$ is an additive map $\delta$: $R\to R$ such that for every $r\in R$, the commutator $[\delta, r]=\delta\circ r-r\circ\delta$ is a differential operator of order $\leq n-1$. The differential operators form a ring with the multiplication defined via the composition. We denote this ring by $D_R$.
We denote by $D(R,k)\subseteq D_R$ the subring of $D_R$ consisting of all $k$-linear differential operators. Since $R=k[[x_1,\dots, x_n]]$ or $R=k[x_1,\dots,x_n]$, it can be verified that $D(R, k)$ is generated by all operators of the form $\frac{1}{j!}\frac{\partial^{j}}{\partial x_i^{j}}$. By a {\it $D_R$-module} or a {\it $D(R,k)$-module} we mean a left module over $D_R$ or $D(R,k)$.
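For instance, a routine computation from the definition shows that these generators act on monomials by the binomial rule
\[\frac{1}{j!}\frac{\partial^{j}}{\partial x_i^{j}}\,x_i^{m}=\binom{m}{j}x_i^{m-j},\]
which is meaningful in characteristic $p$ as well: the binomial coefficient $\binom{m}{j}$ is an integer, even though the factor $\frac{1}{j!}$ alone need not make sense.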
When $k$ is a field of characteristic $p$, it is not hard to show that every differential operator of order $\leq p^e-1$ is $R^{p^e}$-linear, where $R^{p^e}\subseteq R$ is the subring consisting of the $p^e$-th powers of the elements of $R$. In other words, $D_R$ is always a subring of $\cup_e\Hom_{R^{p^e}}(R,R)$.\footnote{In fact $D_R=\cup_{e}\Hom_{R^{p^e}}(R,R)$ when $R$ is $F$-finite, i.e., when $R$ is finitely generated as an $R^{p}$-module \cite{YekutieliExplicitconstructionofResidueComplex}.} In particular, all differential operators are automatically $k$-linear if $k$ is perfect; thus $D_R=D(R,k)$ in this case.
\subsection*{$F$-modules} Assume that $k$ is a field of characteristic $p$. The notion of $F$-modules was introduced in \cite{LyubeznikFModulesApplicationsToLocalCohomology}, and further investigated and generalized in \cite{BlickleThesis, BlickleDmodulestructureofRFmodules}. We use $R^{(e)}$ to denote the target ring of the $e$-th Frobenius map $F^e$: $R\rightarrow R$. We shall let $F^e(-)$ denote the Peskine-Szpiro's Frobenius functor from $R$-modules to $R$-modules. In detail, $F^e(M)$ is given by base change to $R^{(e)}$ and then identifying $R^{(e)}$ with $R$, i.e., $F^e(M)=R^{(e)}\otimes_R M$.
An {\it $F^e_R$-module} is an $R$-module $M$ equipped with an $R$-linear isomorphism $\theta$: $M\rightarrow F^e(M)$ which we call the structure morphism of $M$. A homomorphism of $F^e_R$-modules is an $R$-module homomorphism $f$: $M\rightarrow M'$ such that the following diagram commutes
\[\xymatrix{
M\ar[r]^{f}\ar[d]^{\theta} & M'\ar[d]^{\theta'}\\
F^e(M)\ar[r]^{F^e(f)}&F^e(M')\\
}\]
When $e=1$ we simply say $M$ is an $F_R$-module (or $F$-module if $R$ is clear from the context). It is easy to see that every $F^e_R$-module is also an $F^{er}_R$-module for every $r\geq 1$ by iterating the structure isomorphism $r$ times. The union of the categories of $F^e_R$-modules over all $e$ forms what we called the category of $F^\infty_R$-modules.\footnote{In \cite{BlickleThesis, BlickleDmodulestructureofRFmodules, BlickleIntersectionhomologyDmodule},
$F^e_R$-modules and $F^\infty_R$-modules are called unit $R[F^e]$-modules and unit $R[F]$-modules respectively. In this paper we will use Lyubeznik's notation \cite[Remark 5.6]{LyubeznikFModulesApplicationsToLocalCohomology} since we think this is more natural in comparison with the usual $F_R$-modules.} With these definitions, the categories of $F^e_R$-modules and $F^\infty_R$-modules are abelian categories.
When $R=k[x_1,\dots,x_n]$ and $M$ is a graded $R$-module, there is a natural grading on $F(M)=R^{(e)}\otimes_R M$ given by $\deg(r\otimes m)=\deg r+p^e\cdot\deg m$ for homogeneous elements $r\in R$ and $m\in M$. With this grading, a {\it graded $F^e_R$-module} is an $F^e_R$-module $M$ such that the structure isomorphism $\theta$ is degree-preserving. A morphism of graded $F^e_R$-modules is a degree-preserving morphism of $F^e_R$-modules. It is not hard to see that graded $F^e_R$-modules form an abelian subcategory of the category of $F^e_R$-modules. Graded $F^e_R$-modules, at least for $e=1$, were introduced and studied in detail in \cite{ZhangYGradedFModules} and \cite{MaZhangEuleriangradedDmodules}.
A generating morphism of an $F^e_R$-module $M$ is an $R$-module homomorphism $\beta$: $M_0\rightarrow F^e(M_0)$, where $M_0$ is some $R$-module, such that $M$ is the limit of the inductive system in the top row of the commutative diagram
\[\xymatrix{
M_0\ar[r]^{\beta}\ar[d]^{\beta} & F^e(M_0)\ar[d]^{F^e(\beta)}\ar[r]^{F^e(\beta)}& F^{2e}(M_0)\ar[r]^{\quad F^{2e}(\beta)}\ar[d]^{F^{2e}(\beta)}&{\cdots}\\
F^e(M_0)\ar[r]^{F^e(\beta)}& F^{2e}(M_0)\ar[r]^{F^{2e}(\beta)}&F^{3e}(M_0)\ar[r]^{\quad F^{3e}(\beta)}&{\cdots}\\
}\]
and $\theta$: $M\rightarrow F^e(M)$, the structure isomorphism of $M$, is induced by the vertical arrows in this diagram. $M_0$ is called a {\it root} of $M$ if $\beta$: $M_0\to F^e(M_0)$ is injective. An $F^e_R$-module $M$ is called {\it $F$-finite} if $M$ has a generating morphism $\beta$: $M_0\rightarrow F^e(M_0)$ with $M_0$ a finitely generated $R$-module. When $M_0$ is graded and $\beta$ is degree-preserving, we say that $M$ is {\it graded $F$-finite $F^e_R$-module}.
It is a fundamental result of Lyubeznik (\cite{LyubeznikFModulesApplicationsToLocalCohomology}) that local cohomology modules $\lc_I^i(R)$ have a natural structure of $F$-finite $F^e_R$-modules for every $e\geq 1$.
Moreover, when $R=k[x_1,\dots,x_n]$ and $I$ is a homogeneous ideal of $R$, $\lc_I^i(R)$ are graded $F$-finite $F^e_R$-modules \cite{ZhangYGradedFModules}.
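As a simple illustration, for $0\neq f\in R$ the localization $R_f$ is an $F$-finite $F_R$-module with root $R$ and generating morphism given by multiplication by $f^{p-1}$,
\[R\xrightarrow{\ f^{p-1}\ }F(R)\cong R;\]
one checks that the resulting direct limit
\[R\xrightarrow{f^{p-1}}R\xrightarrow{f^{p^2-p}}R\xrightarrow{f^{p^3-p^2}}\cdots\]
is $R_f$, and $\lc^1_{f}(R)=R_f/R$ then inherits its $F_R$-module structure as a quotient of $R_f$ by the $F_R$-submodule $R$.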
Following \cite{LyubeznikFModulesApplicationsToLocalCohomology}, for any $F$-finite $F_R$-module $M$, there exists a smallest $F_R$-submodule $N\subseteq M$ with the property that $M/N$ is supported only at $\m$. Hence $M/N$ is isomorphic (as an $R$-module) to $E^{\oplus r}$ where $E=E_R(k)$ denotes the injective hull of $k$. We define $\crk(M)$, the {\it corank of $M$}, to be $r$.
One important feature of $F_R$-modules is that they have a natural structure of $D_R$-modules, and thus of $D(R,k)$-modules. We briefly recall this here, and we refer to \cite[Section 5]{LyubeznikFModulesApplicationsToLocalCohomology} for details. Let $M$ be an $F_R$-module with structure isomorphism $\theta$. We set $\theta_e$ to be the $e$-th iterate of $\theta$, i.e., $$\theta_e=F^{e-1}(\theta)\circ\cdots\circ F(\theta)\circ\theta: M\to F^e(M). $$ Now every element $\delta\in\Hom_{R^{p^e}}(R,R)$ acts on $F^e(M)=R^{(e)}\otimes_RM$ via $\delta\otimes\id_M$. We let $\delta$ act on $M$ via $\theta_e^{-1}\circ(\delta\otimes\id_M)\circ\theta_e$. It is not very difficult to check that this action is well-defined. Moreover, an entirely similar construction shows that every $F^e_R$-module also has a canonical structure of a $D_R$-module. It follows that $F_R^\infty$-modules are naturally $D_R$-modules, and thus $D(R,k)$-modules. In sum, we have the following inclusions of abelian categories:
\[
\{F_R-\mbox{modules} \}\subseteq \{F_R^e-\mbox{modules}\} \subseteq \{F_R^\infty-\mbox{modules}\}\subseteq \{D_R-\mbox{modules}\} \subseteq \{D(R,k)-\mbox{modules}\}.
\]
Therefore for any $F_R$-module $M$ we have the following inequalities on its length considered in the corresponding categories:
\begin{equation}
\label{equation--basic relation on length in different categories}
l_{F_R}(M)\leq l_{F_R^e}(M)\leq l_{F_R^\infty}(M)\leq l_{D_R}(M)\leq l_{D(R,k)}(M).
\end{equation}
\subsection*{$A\{f\}$-modules}
Recall that we always assume $A$ is a reduced and equidimensional or graded reduced and equidimensional quotient of $R$ of dimension $d\geq 1$. Assume also that $k$ is a field of characteristic $p$. Let $M$ be a (in most cases Artinian) module over $A$. We say that $M$ is an $A\{f\}$-module if there is an additive map $f\colon M\to M$ such that $f(am)=a^pf(m)$. We will use $M_s = \bigcap k\langle f^i(M)\rangle$ to denote the Frobenius stable part of $M$; note that this is a $k$-vector space that depends not only on $M$ but also on the action of $f$ on $M$. It is well-known that when $M$ is either Noetherian or Artinian over $A$, $M_s$ is a finite dimensional $k$-vector space \cite{HartshorneSpeiserLocalCohomologyInCharacteristicP}, \cite{LyubeznikFModulesApplicationsToLocalCohomology}.
Let $\scr{H}_{R,A}$ denote the Lyubeznik functor introduced in \cite{LyubeznikFModulesApplicationsToLocalCohomology}: for every Artinian $A\{f\}$-module $M$, we have a natural induced map $\alpha$: $F_R(M)\to M$, since $F_R(M)^\vee\cong F_R(M^\vee)$ by \cite[Lemma 4.1]{LyubeznikFModulesApplicationsToLocalCohomology}, we define $$\scr{H}_{R,A}(M)=\varinjlim(M^\vee\xrightarrow{\alpha^\vee} F_R(M^\vee)\xrightarrow{F(\alpha^\vee)} F_R^2(M^\vee)\to\cdots),$$ which is an $F$-finite $F_R$-module. In the graded case we have a similar functor ${}^*\scr{H}_{R,A}$ that takes a graded Artinian $A\{f\}$-module to a graded $F$-finite $F_R$-module: one needs to replace Matlis dual by graded Matlis dual in the construction of ${}^*\scr{H}_{R,A}$, (see \cite{LyubeznikSinghWaltherlocalcohomologysupportedatdeterminantalideals} for details). One important example that we will use repeatedly is that $\scr{H}_{R, A}(\lc_{\fm}^d(A))\cong \lc_I^{n-d}(R)$, and in the graded case we also have ${^*}\scr{H}_{R, A}(\lc_{\fm}^d(A))\cong \lc_I^{n-d}(R)$ \cite[Example 4.8]{LyubeznikFModulesApplicationsToLocalCohomology}, \cite[Proposition 2.8]{LyubeznikSinghWaltherlocalcohomologysupportedatdeterminantalideals}.
\subsection*{Tight closure and $F$-singularities} Tight closure theory was introduced by Hochster-Huneke in \cite{HochsterHunekeTC1}. In this article, we only need some basic properties of tight closure of zero in the top local cohomology module, $0^*_{\lc_\m^d(A)}$. Under mild conditions, for example when $(A,\m)$ is an excellent local domain of dimension $d$, $0^*_{\lc_\m^d(A)}$ is the largest proper $A$-submodule of $\lc_\m^d(A)$ that is stable under the natural Frobenius action on $\lc_\m^d(A)$ (cf.~\cite{SmithFRatImpliesRat}). A local ring $(A,\m)$ is called {\it $F$-rational} if $A$ is Cohen-Macaulay and $0^*_{\lc_\m^d(A)}=0$.\footnote{This is not the original definition of $F$-rationality, but is shown to be equivalent (\cite{SmithFRatImpliesRat}).} Under mild conditions on the ring, for example when $(A,\m)$ is an excellent, reduced and equidimensional local ring, $A$ is $F$-rational on the punctured spectrum if and only if $0^*_{\lc_\m^d(A)}$ has finite length.
A local ring $(A,\m)$ of characteristic $p>0$ is called {\it $F$-pure} if the Frobenius endomorphism $F$: $A\rightarrow A$ is pure.\footnote{A map of $A$-modules $N\rightarrow N'$ is pure if for every $A$-module $M$ the map $N\otimes_AM\rightarrow N'\otimes_AM$ is injective. This implies that $N\rightarrow N'$ is injective, and is weaker than the condition that $0\rightarrow N\rightarrow N'$ be split.} Under mild conditions, for example when the Frobenius map $A\xrightarrow{F}A$ is a finite map or when $A$ is complete, $F$-purity of $A$ is equivalent to the condition that the Frobenius endomorphism $A\xrightarrow{F} A$ is split \cite[Corollary 5.3]{HochsterRobertsFrobeniusLocalCohomology}. The Frobenius endomorphism on $A$ induces a natural Frobenius action on each local cohomology module $\lc_\m^i(A)$ and we say $(A,\m)$ is {\it $F$-injective} if this natural Frobenius action on $\lc_\m^i(A)$ is injective for every $i$ (\cite{FedderFPureRat}). This holds if $A$ is $F$-pure \cite[Lemma 2.2]{HochsterRobertsFrobeniusLocalCohomology}. For some other basic properties of $F$-pure and $F$-injective singularities, see \cite{HochsterRobertsFrobeniusLocalCohomology}, \cite{FedderFPureRat}, \cite{EnescuHochsterTheFrobeniusStructureOfLocalCohomology}.
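As a simple example, $A=k[[x,y]]/(xy)$ is $F$-pure: by Fedder's criterion \cite{FedderFPureRat}, $F$-purity of a hypersurface $k[[x_1,\dots,x_n]]/(g)$ is equivalent to $g^{p-1}\notin(x_1^p,\dots,x_n^p)$, and here
\[(xy)^{p-1}=x^{p-1}y^{p-1}\notin (x^p,y^p)\]
since $x^{p-1}y^{p-1}$ is divisible by neither $x^p$ nor $y^p$.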
\section{A general bound of local cohomology modules as $D$-modules}
\label{section: general bound}
In this section, we will establish a bound on the length of $\lc^j_I(R)$ as a $D(R,k)$-module when $R=k[x_1,\dots,x_n]$ is a polynomial ring over a field $k$ (of any characteristic) in terms of the degrees of the generators of $I$. To this end, we begin by recalling the notions of the Bernstein filtration and of a $k$-filtration.
Denote $\frac{1}{t!}\frac{\partial^t}{\partial x^t_i}$ by $D_{t,i}$ for $t\in\mathbb{N},1\leq i\leq n$. Then $D(R,k)=R\langle D_{t,i}\mid t\in \mathbb{N}, 1\leq i\leq n\rangle$. We set $\scr{F}_j$ to be the $k$-linear span of the set of products
\[\{x^{i_1}_1\cdots x^{i_n}_n\cdot D_{t_1,1}\cdots D_{t_n,n}|i_1+\cdots+i_n+t_1+\cdots+t_n\leq j\}.\]
Then the {\it Bernstein} filtration on $D(R,k)$ (\cite[Definition 2.6]{LyubeznikCharFreeHolonomic}) is defined to be $k=\scr{F}_0\subset \scr{F}_1\subset \scr{F}_2\subset \cdots$.
\begin{definition}[Definition 3.2 in \cite{LyubeznikCharFreeHolonomic}]
A {\it $k$-filtration} on a $D(R,k)$-module $M$ is an ascending chain of finite-dimensional $k$-vector spaces $M_0\subset M_1\subset \cdots$ such that $\cup_i M_i=M$ and $\scr{F}_i M_j\subset M_{i+j}$ for all $i$ and $j$.
\end{definition}
We need the following result of Lyubeznik. We note that in characteristic $0$, the $D$-module length of holonomic $D$-modules has been studied before, see \cite{BernsteinModulesoverRingofDifferential}, \cite{BjorkRingsofDifferentialOperators}.
\begin{theorem}[Theorem 3.5 in \cite{LyubeznikCharFreeHolonomic}]
\label{theorem: bound by multiplicity}
Let $M$ be a $D(R,k)$-module with a $k$-filtration $M_0\subset M_1\subset \cdots$. Assume there is a constant $C$ such that $\dim_k(M_i)\leq Ci^n$ for sufficiently large $i$. Then the length of $M$ as a $D(R,k)$-module is at most $n!C$.
\end{theorem}
The statement of \cite[Theorem 3.5]{LyubeznikCharFreeHolonomic} assumes that there is a constant $C$ such that $\dim_k(M_i)\leq Ci^n$ for {\it all} $i\geq 0$; however, the proof of \cite[Theorem 3.5]{LyubeznikCharFreeHolonomic} only uses the fact that there is a constant $C$ such that $\dim_k(M_i)\leq Ci^n$ for {\it sufficiently large} $i$. Hence the proof of Theorem \ref{theorem: bound by multiplicity} is identical to that of \cite[Theorem 3.5]{LyubeznikCharFreeHolonomic}, and is omitted.
To illustrate the advantage of only requiring $\dim_k(M_i)\leq Ci^n$ for {\it sufficiently large} $i$, we consider a simple example.
\begin{example}
\label{example: length of R}
Set $R_i$ to be the $k$-span of monomials in $x_1,\dots,x_n$ of degree at most $i$. It is clear that $R_0\subset R_1\subset \cdots$ is a $k$-filtration of $R$. It is well-known that $\dim_k(R_i)=\binom{n+i}{i}$, which is a polynomial in $i$ of degree $n$ with leading coefficient $\frac{1}{n!}$. Hence, given any $\varepsilon>0$, we have $\dim_k(R_i)\leq \frac{1+\varepsilon}{n!}i^n$ for sufficiently large\footnote{We should remark that how large $i$ needs to be certainly depends on $\varepsilon$.} $i$. Since the length of a module is an integer, it follows from Theorem \ref{theorem: bound by multiplicity} that the length of $R$ in the category of $D(R,k)$-modules is 1. On the other hand, if one requires $\dim_k(R_i)\leq Ci^n$ for {\it all} $i$, then one will need $C\geq \frac{n+1}{n!}$ (consider the case when $i=1$) and consequently can not deduce the correct length of $R$ from \cite[Theorem 3.5]{LyubeznikCharFreeHolonomic}.
\end{example}
\begin{remark}
\label{remark: induced bound on localization}
In the proof of \cite[Corollary 3.6]{LyubeznikCharFreeHolonomic}, the following statement is proved: {\it Let $M$ be a $D(R,k)$-module with a $k$-filtration $M_0\subseteq M_1\subseteq \cdots$ and $f\in R$ be a polynomial of degree $d$. If there is a constant $C$ such that $\dim_k(M_i)\leq Ci^n$ for sufficiently large $i$, then $M'_i=\{\frac{m}{f^i}\mid m\in M_{i(d+1)}\}$ induces a $k$-filtration of $M_f$ such that $\dim_k(M'_i)\leq C(d+1)^ni^n$ for sufficiently large $i$.}
\end{remark}
\begin{theorem}
\label{theorem: length of localization}
Let $f\in R$ be a polynomial of degree $d$. Then the length of $R_f$ in the category of $D(R,k)$-modules is at most $(d+1)^n$. Moreover, the length of $\lc^1_{f}(R)$ is at most $(d+1)^n-1$.
\end{theorem}
\begin{proof}
Combining Example \ref{example: length of R} and Remark \ref{remark: induced bound on localization}, we see that $R_f$ admits a $k$-filtration $R'_0\subset R'_1\subset\cdots$ such that, for any $\varepsilon>0$, one has $\dim_k(R'_i)\leq \frac{1+\varepsilon}{n!}(d+1)^ni^n$ for sufficiently large $i$. According to Theorem \ref{theorem: bound by multiplicity}, the length of $R_f$ in the category of $D(R,k)$-modules is at most $(1+\varepsilon)(d+1)^n$ for any $\varepsilon>0$; it follows that the length of $R_f$ in the category of $D(R,k)$-modules is at most $(d+1)^n$.
The second conclusion follows from the exact sequence $0\to R\to R_f\to \lc^1_{f}(R)\to 0$ and the fact that the length of $R$ is 1.
\end{proof}
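To see how coarse this bound can be, take $f=x_1$, so that $d=1$: Theorem \ref{theorem: length of localization} gives $l_{D(R,k)}(R_{x_1})\leq 2^n$, while the exact sequence
\[0\to R\to R_{x_1}\to \lc^1_{x_1}(R)\to 0,\]
together with the fact that $\lc^1_{x_1}(R)$ is a simple $D(R,k)$-module (one checks that any nonzero element generates it, using the operators $D_{t,1}$ and multiplication by monomials), shows that the actual length is $2$.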
\begin{corollary}
\label{corollary: length of local cohomology in terms of degree}
Let $I$ be an ideal of $R$. If $I$ is generated by $f_1,\dots,f_t$ with $\deg(f_i)=d_i$, then the length of $\lc^j_I(R)$ in the category of $D(R,k)$-modules is at most
\[\sum_{1\leq i_1\leq \cdots\leq i_j\leq t}(d_{i_1}+\cdots+d_{i_j}+1)^n-1.\]
\end{corollary}
\begin{proof}
It is clear from Theorem \ref{theorem: length of localization} that the length of $\bigoplus R_{f_{i_1}\cdots f_{i_j}}$ is at most $$\sum_{1\leq i_1\leq \cdots\leq i_j\leq t}(d_{i_1}+\cdots+d_{i_j}+1)^n.$$ Our corollary follows from the fact that $\lc^j_I(R)$ is a proper subquotient of $\bigoplus R_{f_{i_1}\cdots f_{i_j}}$ in the category of $D(R,k)$-modules.
\end{proof}
The bounds in Theorem \ref{theorem: length of localization} and Corollary \ref{corollary: length of local cohomology in terms of degree}, though general, are very coarse. In the rest of the paper, we will focus on the length of $\lc^c_I(R)$ where $c$ is the height of $I$ and $k$ is of prime characteristic $p$, where $F$-module theory and tight closure theory can be used to produce sharper bounds.
\section{Formulas and upper bounds on the $D$-module and $F$-module length}
\label{section: formulas and bounds}
\textbf{Notation}: Henceforth $R$ denotes $k[[x_1,\dots, x_n]]$ or $k[x_1,\dots,x_n]$ where $k$ is a field of characteristic $p>0$,
$\m=(x_1,\dots,x_n)$, and we let $A=R/I$ be reduced and equidimensional or graded reduced and equidimensional of dimension $d\geq 1$.
We first analyze the case when $A$ has an isolated non-$F$-rational point at $\{\m\}$. We start with a few lemmas.
\begin{lemma}
\label{lemma--left exactness of taking stable part}
Let $0\rightarrow L\xrightarrow{\alpha} M\xrightarrow{\beta} N\rightarrow 0$ be an exact sequence of Artinian $A\{f\}$-modules. Let $f_L, f_M, f_N$ denote the
Frobenius actions on $L,M,N$ respectively. Then the stable parts form a left exact sequence of finite dimensional vector spaces: $0\rightarrow
L_s\xrightarrow{\alpha} M_s\xrightarrow{\beta} N_s$.
\end{lemma}
\begin{proof}
Let $\mathbb{K}$ be the perfect closure of $k$; define $A^{\mathbb{K}}=A\otimes_k \mathbb{K}$
and for any $A\{f\}$-module $X$, let $X^{\mathbb{K}}$ be the $A^{\mathbb{K}}\{ f \}$-module
$A^{\mathbb{K}} \otimes_k X$ where the action of $f$ is given by
$f\left( \sum_{i=1}^\mu x_i \otimes \lambda_i \right)= \sum_{i=1}^\mu f(x_i) \otimes \lambda_i^p$
for $x_1, \dots, x_\mu\in X$ and $\lambda_1, \dots, \lambda_\mu\in \mathbb{K}$.
\cite[Proposition 4.9]{LyubeznikFModulesApplicationsToLocalCohomology} implies that $X^{\mathbb{K}}_s=X_s \otimes_k \mathbb{K}$, so if we can show that
$0\rightarrow L_s^{\mathbb{K}} \xrightarrow{\alpha^{\mathbb{K}}} M_s^{\mathbb{K}}\xrightarrow{\beta^{\mathbb{K}}} N_s^{\mathbb{K}}$ is exact, then the fact that
${\mathbb{K}}$ is faithfully flat over $k$ implies the exactness of
$0\rightarrow L_s\xrightarrow{\alpha} M_s\xrightarrow{\beta} N_s$.
Therefore we may assume that $k$ is a perfect field.
The exactness at $L_s$ and $\beta\alpha=0$ are obvious. Hence to prove exactness it suffices to show that $\ker\beta \subseteq \im \alpha$. Since $N$ is Artinian, the ascending chain $\ker f_N\subseteq \ker f_N^2\subseteq \cdots$ stabilizes by \cite[Proposition 4.4]{LyubeznikFModulesApplicationsToLocalCohomology}, so $\cup_{r\geq1}\ker f_N^r=\ker f_N^{r_0}$ for some sufficiently large $r_0$.
Pick $x \in M_s$ such that $\beta(x)=0$. Since we are assuming that $k$ is perfect, the $k$-vector space generated by $f^r(M)$ is just $f^r(M)$ for all $r\geq 0$;
therefore, for each $r\geq r_0$, there exists $y \in M$ such that $x=f_M^r(y)$. We have
$ f_N^r(\beta(y))=\beta( f_M^r(y))=0$. Since $\ker f_N^r=\ker f_N^{r_0}$, we get $f_N^{r_0}(\beta(y))=0$, i.e., $\beta(f_M^{r_0}(y))=0$, and hence $z:=f_M^{r_0}(y)\in \ker\beta=\alpha(L)$. Therefore for each $r\geq r_0$ there exists $z\in L$ such that
$x= f_M^{r-r_0}(z)$; since $f_M$ restricts to $f_L$ on $L$, it follows that $x\in L_s$.
\end{proof}
\begin{lemma}
\label{lemma--stable parts of tight closure of top local cohomology modules}
Suppose $\dim A=d \geq 1$, then $(0^*_{\lc_{\fm}^d(A)})_s\cong (\lc_{\fm}^d(A))_s$.
\end{lemma}
\begin{proof}
We have
$0\to 0^*_{\lc_{\fm}^d(A)}\to \lc_{\fm}^d(A)\to \lc_{\fm}^d(A)/0^*_{\lc_{\fm}^d(A)}\to 0$.
By Lemma \ref{lemma--left exactness of taking stable part}, it suffices to show that $(\lc_{\fm}^d(A)/0^*_{\lc_{\fm}^d(A)})_s=0$. But by \cite[Proposition 4.10]{LyubeznikFModulesApplicationsToLocalCohomology}, $\dim(\lc_{\fm}^d(A)/0^*_{\lc_{\fm}^d(A)})_s= \crk\scr{H}_{R,A}(\lc_{\fm}^d(A)/0^*_{\lc_{\fm}^d(A)})$.
Let $P_1,\dots, P_c$ be all the minimal primes of $A$. By \cite[Theorem 4.3 and 4.4]{BlickleIntersectionhomologyDmodule}, $\scr{H}_{R,A}(\lc_{\fm}^d(A)/0^*_{\lc_{\fm}^d(A)})$ is a direct sum of simple $F_R$-modules, each of which has some $P_i$ as its unique associated prime. This implies $\crk\scr{H}_{R,A}(\lc_{\fm}^d(A)/0^*_{\lc_{\fm}^d(A)})=0$ because $\dim A\geq 1$, and hence $(\lc_{\fm}^d(A)/0^*_{\lc_{\fm}^d(A)})_s=0$ as desired.
\end{proof}
\begin{theorem}
\label{theorem--D-module length for isolated singularities}
Let $R=k[[x_1,\dots, x_n]]$ (or $k[x_1,\dots,x_n]$) with $\m=(x_1,\dots,x_n)$, where $k$ is a field of characteristic $p>0$. Let $A=R/I$ be reduced and equidimensional (respectively, graded reduced and equidimensional) of dimension $d\geq 1$. Assume that $A$ has an isolated non-$F$-rational point at $\fm$ (e.g., $A$ has an isolated singularity at $\m$). Then $$l_{D(R,k)}(\lc^{n-d}_I(R))=\dim_k(0^*_{\lc^d_{\fm}(A)})_s+c=\dim_k((\lc_{\fm}^d(A))_s)+c$$ where $c$ is the number of minimal primes of $A$. Moreover, if $k$ is separably closed, then we also have $$l_{F_R}(\lc^{n-d}_I(R))=l_{F_R^e}(\lc^{n-d}_I(R))= l_{F_R^\infty}(\lc^{n-d}_I(R))= l_{D_R}(\lc^{n-d}_I(R))=\dim_k(0^*_{\lc^d_{\fm}(A)})_s+c.$$
\end{theorem}
\begin{proof} The second equality follows immediately from Lemma \ref{lemma--stable parts of tight closure of top local cohomology modules}. Thus it suffices to show $l_{D(R,k)}(\lc^{n-d}_I(R))=\dim_k(0^*_{\lc^d_{\fm}(A)})_s+c$ and $l_{F_R}(\lc^{n-d}_I(R))=\dim_k(0^*_{\lc^d_{\fm}(A)})_s+c$ when $k$ is separably closed, since the other equalities would follow from (\ref{equation--basic relation on length in different categories}).
The short exact sequence $0\to 0^*_{\lc_{\fm}^d(A)}\to \lc_{\fm}^d(A)\to \lc_{\fm}^d(A)/0^*_{\lc_{\fm}^d(A)}\to 0$ induces:
\[0\to \scr{H}_{R, A}(\lc_{\fm}^d(A)/0^*_{\lc_{\fm}^d(A)})\to \scr{H}_{R, A}(\lc_{\fm}^d(A))\cong \lc_I^{n-d}(R)\to \scr{H}_{R, A}(0^*_{\lc_{\fm}^d(A)})\to 0.\]
Now by \cite[Corollary 4.2 and Theorem 4.4]{BlickleIntersectionhomologyDmodule}, $\scr{H}_{R, A}(\lc_{\fm}^d(A)/0^*_{\lc_{\fm}^d(A)})$ is a direct sum of simple $D(R,k)$-modules, each supported at a different minimal prime of $A$, so its $D(R,k)$-module length is $c$; in fact each of these simple $D(R,k)$-modules is a (simple) $F_R$-module \cite[Theorem 4.3]{BlickleIntersectionhomologyDmodule}. Thus $l_{D(R,k)}(\scr{H}_{R, A}(\lc_{\fm}^d(A)/0^*_{\lc_{\fm}^d(A)}))=l_{F_R}(\scr{H}_{R, A}(\lc_{\fm}^d(A)/0^*_{\lc_{\fm}^d(A)}))=c$.
Since $A$ has an isolated non-$F$-rational point at $\m$, $0^*_{\lc^d_{\fm}(A)}$ has finite length as an $A$-module. This implies that $\scr{H}_{R,A}(0^*_{\lc^d_{\fm}(A)})$ is supported only at $\fm$. Hence $\scr{H}_{R,A}(0^*_{\lc^d_{\fm}(A)})$, {\it as a $D(R,k)$-module}, is a direct sum of finitely many copies of $\lc^n_{\fm}(R)$ by
\cite[Lemma (c)]{LyubeznikInjectivedimensionofDmodulescharacteristicfreeapproach}.
Moreover, {\it when $k$ is separably closed}, $\scr{H}_{R,A}(0^*_{\lc^d_{\fm}(A)})$ is a direct sum of copies of $E=\lc^n_{\fm}(R)$ even {\it as an $F_R$-module} \cite[Lemma 4.3]{MaThecategoryofF-moduleshasfiniteglobaldimension}. The number of copies of $\lc^n_{\fm}(R)$ is exactly its $D(R,k)$-module length ($F_R$-module length when $k$ is separably closed). However, since $\scr{H}_{R,A}(0^*_{\lc^d_{\fm}(A)})$ is an $F_R$-module supported only at $\m$, the number of copies of $\lc^n_{\fm}(R)$ is by definition $\crk\scr{H}_{R,A}(0^*_{\lc^d_{\fm}(A)})$, which is $\dim_k(0^*_{\lc^d_{\fm}(A)})_s$ by \cite[Proposition 4.10]{LyubeznikFModulesApplicationsToLocalCohomology}. Hence $l_{D(R,k)}(\scr{H}_{R,A}(0^*_{\lc^d_{\fm}(A)}))=\dim_k(0^*_{\lc^d_{\fm}(A)})_s$, and $l_{F_R}(\scr{H}_{R,A}(0^*_{\lc^d_{\fm}(A)}))=\dim_k(0^*_{\lc^d_{\fm}(A)})_s$ when $k$ is separably closed.
\end{proof}
\begin{remark}
\label{remark--F-module length for isolated singularities}
When $k$ is not separably closed, $l_{F_R}(\lc^{n-d}_I(R))=\dim_k(0^*_{\lc^d_{\fm}(A)})_s+c$ fails to hold in general, see Corollary \ref{corollary--D-length bigger than F-length 1}. However, if $k$ is a {\it finite field}, then we always have $l_{F_R^\infty}(\lc^{n-d}_I(R))=\dim_k(0^*_{\lc^d_{\fm}(A)})_s+c$, see Proposition \ref{proposition--F-module length for isolated sing over finite field}.
\end{remark}
\begin{remark}
\label{remark--D-module length for graded isolated singularities}
When $I$ is a homogeneous reduced and equidimensional ideal in $R=k[x_1,\dots,x_n]$ (i.e., $A$ is a graded reduced and equidimensional ring), it is easy to check that $(0^*_{\lc^d_{\fm}(A)})_s=({\lc^d_{\fm}(A)})_s=({\lc^d_{\fm}(A)}_0)_s$ since the stable part is concentrated in degree 0. Therefore, we have an upper bound on the $D(R,k)$-module length of $\lc^{n-d}_I(R)$ in the graded isolated singularity case: it is at most $\dim_k(\lc^d_{\fm}(A)_0)+c$. Geometrically, it is at most $\dim_k(\lc^{d-1}(X,\cO_X))+c$ where $X=\Proj(A)$.
\end{remark}
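For instance (a direct specialization of this remark), if $A$ is the homogeneous coordinate ring of a smooth projectively normal curve $X\subseteq\mathbb{P}^{n-1}$ of genus $g$, then $d=2$, $c=1$, and $A$ has an isolated singularity at $\m$, so the bound reads
\[l_{D(R,k)}(\lc^{n-2}_I(R))\leq \dim_k(\lc^{1}(X,\cO_X))+1=g+1.\]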
We have the following application:
\begin{example}
\label{example--Calabi-Yau hypersurface}
Let $A=k[x_1,\dots,x_n]/(f)$ where $k$ is a field of prime characteristic $p$ and $\deg(f)=n$. Write $R=k[x_1,\dots,x_n]$. Then there is a commutative diagram of short exact sequences
\[
\xymatrix{
0 \ar[r] & \lc^{n-1}_{\fm}(A)_0 \ar[r] \ar[d]^F &\lc^{n}_{\fm}(R)_{-n} \ar[r]^f \ar[d]^{f^{p-1}F} & \lc^{n}_{\fm}(R)_0=0 \ar[r] \ar[d]^F & 0\\
0 \ar[r] & \lc^{n-1}_{\fm}(A)_0 \ar[r] &\lc^{n}_{\fm}(R)_{-n} \ar[r]^f & \lc^{n}_{\fm}(R)_0=0 \ar[r] & 0
}
\]
where $F$ denotes the natural Frobenius maps. It follows that $\dim_k(\lc^{n-1}_{\fm}(A)_0)=\dim_k(\lc^{n}_{\fm}(R)_{-n})=1$. It also follows from the diagram that $F:\lc^{n-1}_{\fm}(A)_0\to \lc^{n-1}_{\fm}(A)_0$ is injective if and only if so is
$f^{p-1}F\colon \lc^{n}_{\fm}(R)_{-n}\to \lc^{n}_{\fm}(R)_{-n}$ which holds if and only if $f^{p-1}\notin \fm^{[p]}$.
Assume further that $f$ is irreducible with an isolated singularity at $\fm$ over a perfect field $k$, {\it i.e.}, $A$ is a Calabi-Yau hypersurface over $k$. Then by Fedder's criterion, $F\colon\lc^{n-1}_{\fm}(A)_0\to \lc^{n-1}_{\fm}(A)_0$ is injective if and only if $A$ is $F$-pure. Consequently, $\dim_k(\lc^{n-1}_{\fm}(A)_0)_s=1$ if and only if $A$ is $F$-pure. Thus it follows from Theorem \ref{theorem--D-module length for isolated singularities} that:
\[l_{D(R,k)}(\lc^1_{f}(R))=\begin{cases}1& A\ {\rm is\ not\ }F{\rm -pure}\\2& {\rm otherwise}\end{cases}\]
In particular, when $\Proj(A)$ happens to be an elliptic curve, there are infinitely many primes $p$ such that $\lc^1_{f}(R)$ is a simple $D(R,k)$-module and infinitely many primes $p$ such that $\lc^1_{f}(R)$ has length 2 as a $D(R,k)$-module.
\end{example}
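To make the dichotomy concrete, consider the Fermat cubic $f=x^3+y^3+z^3\in R=k[x,y,z]$ with $k$ perfect of characteristic $p>3$. By Fedder's criterion, $A=R/(f)$ is $F$-pure if and only if $f^{p-1}\notin(x^p,y^p,z^p)$. Expanding $f^{p-1}$, the only monomial that can survive modulo $(x^p,y^p,z^p)$ is $(xyz)^{p-1}$, which occurs (with nonzero multinomial coefficient) precisely when $3\mid p-1$. Hence
\[l_{D(R,k)}(\lc^1_{f}(R))=\begin{cases}2& p\equiv 1 \pmod 3\\1& p\equiv 2 \pmod 3,\end{cases}\]
recovering the classical fact that the Fermat cubic is supersingular exactly at the primes $p\equiv 2\pmod 3$.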
Next we partially generalize Theorem \ref{theorem--D-module length for isolated singularities} to the case where the non-$F$-rational locus of $R/I$ has dimension $1$. We first prove a lemma, which should be well known to experts.
\begin{lemma}
\label{lemma--localizing and completing F-module}
Let $M$ be an ($F$-finite) $F_R$-module (resp., $F^\infty_R$-module). Then $M_P$ and $M\otimes \widehat{R_P}$ are ($F$-finite) $F_{R_P}$ and $F_{\widehat{R_P}}$-modules (resp., $F^\infty_{R_P}$ and $F^\infty_{\widehat{R_P}}$-modules). Moreover, if $M$ is a simple $F_R$-module (resp., simple $F^\infty_R$-module), then $M_P$ and $M\otimes \widehat{R_P}$, if not zero, are simple as $F_{R_P}$ and $F_{\widehat{R_P}}$-modules (resp., simple as $F^\infty_{R_P}$ and $F^\infty_{\widehat{R_P}}$-modules).
\end{lemma}
\begin{proof}
The conclusion for $F$-modules follows from Proposition 2.7 and Corollary 2.9 of \cite{BlickleIntersectionhomologyDmodule}. The argument for $F^\infty_R$-modules is very similar; we omit the details.
\end{proof}
\begin{theorem}
\label{theorem--Upper bounds for D-module length when singular locus has dimension 1}
Let $R=k[x_1,\dots,x_n]$ and $\m=(x_1,\dots,x_n)$. Let $I$ be a homogeneous reduced and equidimensional ideal of $R$. Set $A=R/I$ with $\dim A=d\geq 2$. Suppose the non-$F$-rational locus of $A$ has dimension $\leq1$ (e.g., the singular locus has dimension $\leq1$). Then we have
\begin{eqnarray*}
l_{D(R,k)}(\lc_I^{n-d}(R))&\leq &c+\sum_{\dim R/P=1}\dim_{\kappa(P)} (\lc_{P\widehat{A_P}}^{d-1}(\widehat{A_P}))_s+\dim_k({\lc^d_{\fm}(A)})_s\\
&=&c+\sum_{\dim R/P=1} \dim_{\kappa(P)}(0^*_{\lc_{P\widehat{A_P}}^{d-1}(\widehat{A_P})})_s+\dim_k(0^*_{\lc^d_{\fm}(A)})_s
\end{eqnarray*}
where $c$ is the number of minimal primes of $I$.
\end{theorem}
\begin{proof}
Clearly the second equality follows from Lemma \ref{lemma--stable parts of tight closure of top local cohomology modules}. Therefore it suffices to prove the first inequality. We may assume that the dimension of the non-$F$-rational locus is $1$, since otherwise the result follows from Theorem \ref{theorem--D-module length for isolated singularities} (the second term is 0). We begin with the following claim:
\begin{claim}
\label{claim--graded F-module filtration}
There exist graded $F_R$-submodules $$0\subseteq L\subseteq M\subseteq \lc_I^{n-d}(R)$$ such that every $D(R,k)$-module composition factor of $L$ is supported at a minimal prime of $A$, every $D(R,k)$-module composition factor of $M/L$ is supported at a dimension $1$ prime, and $\lc_I^{n-d}(R)/M$ is supported only at $\m$.
\end{claim}
\begin{proof}[Proof of Claim]
We have a short exact sequence
\[0\to 0^*_{\lc_{\fm}^d(A)}\to \lc_{\fm}^d(A)\to \lc_{\fm}^d(A)/0^*_{\lc_{\fm}^d(A)}\to 0\]
that induces:
\[0\to {}^*\scr{H}_{R, A}(\lc_{\fm}^d(A)/0^*_{\lc_{\fm}^d(A)})\to {}^*\scr{H}_{R, A}(\lc_{\fm}^d(A))\cong \lc_I^{n-d}(R)\to {}^*\scr{H}_{R, A}(0^*_{\lc_{\fm}^d(A)})\to 0.\]
By the graded version of \cite[Corollary 4.2 and Theorem 4.4]{BlickleIntersectionhomologyDmodule}, ${}^*\scr{H}_{R, A}(\lc_{\fm}^d(A)/0^*_{\lc_{\fm}^d(A)})$ is a direct sum of simple $D(R,k)$-modules, each supported at a different minimal prime of $A$. So we set $L={}^*\scr{H}_{R, A}(\lc_{\fm}^d(A)/0^*_{\lc_{\fm}^d(A)})$. The existence of $M$ follows by applying \cite[Theorem 2.9 (3)]{LyubeznikSinghWaltherlocalcohomologysupportedatdeterminantalideals} to ${}^*\scr{H}_{R, A}(0^*_{\lc_{\fm}^d(A)})$, which is a graded $F$-finite $F_R$-module. Note that the support of each $D(R,k)$-module composition factor of $M/L$ has dimension $1$. This is because the support of ${}^*\scr{H}_{R, A}(0^*_{\lc_{\fm}^d(A)})$ has dimension $1$ since the dimension of the non-$F$-rational locus is $1$.
\end{proof}
We know that $l_{D(R,k)}(L)=c$. Moreover, it is clear from the above claim that $M$ is the smallest $F_R$-submodule of $\lc_I^{n-d}(R)$ such that $\lc_I^{n-d}(R)/M$ is only supported at $\m$. So $\lc_I^{n-d}(R)/M$ is isomorphic, {\it as a $D(R,k)$-module}, to $E^{\oplus r}\cong\lc_\m^d(R)^{\oplus r}$ where $r=\crk\lc_I^{n-d}(R)$ by \cite[Lemma (c)]{LyubeznikInjectivedimensionofDmodulescharacteristicfreeapproach} and the definition of corank. Thus we have $l_{D(R,k)}(\lc_I^{n-d}(R)/M)=\crk\lc_I^{n-d}(R)=\dim_k(\lc_\m^d(A))_s$ by \cite[Proposition 4.10]{LyubeznikFModulesApplicationsToLocalCohomology}.
It remains to estimate the $D(R,k)$-module length of $M/L$. Suppose we have
\begin{equation}
\label{equation--D-module filtration}
0\subseteq L=M_0\subseteq M_1\subseteq M_2\subseteq\cdots\subseteq M_t=M\subseteq \lc_I^{n-d}(R)
\end{equation}
such that each $N_i=M_i/M_{i-1}$ is a simple $D(R,k)$-module. We know that each $N_i$ has a unique associated prime $P$ with $\dim R/P=1$ and $A_P$ not $F$-rational.
\begin{claim}
\label{claim--number of D-module composite factor}
The number of $N_i$ such that $\Ass(N_i)=P$ is at most $\crk \lc_{I\widehat{R_P}}^{n-d}(\widehat{R_P})$.
\end{claim}
\begin{proof}[Proof of Claim]
We localize (\ref{equation--D-module filtration}) at $P$ and complete. We have
\[
0\subseteq L\otimes\widehat{R_P}=M_0\otimes\widehat{R_P}\subseteq M_1\otimes \widehat{R_P}\subseteq\cdots \subseteq M_t\otimes \widehat{R_P}
=\lc_{I\widehat{R_P}}^{n-d}(\widehat{R_P})
\]
with successive quotients $N_i\otimes \widehat{R_P}$ (the last equality holds because $\lc_I^{n-d}(R)/M$ is supported only at $\m$). Each $N_i\otimes \widehat{R_P}$ is either $0$ or a $D(\widehat{R_P},k)$-module supported only at $P\widehat{R_P}$ (and thus a direct sum of copies of $E(\widehat{R_P}/P\widehat{R_P})$), depending on whether $\Ass(N_i)\neq P$ or $\Ass(N_i)= P$. Therefore $\lc_{I\widehat{R_P}}^{n-d}(\widehat{R_P})/(L\otimes\widehat{R_P})$, at least as an $\widehat{R_P}$-module, is isomorphic to $E(\widehat{R_P}/P\widehat{R_P})^{\oplus r}$ for some $r$. The number of $N_i$ such that $\Ass(N_i)=P$ is thus $\leq r$.
But $L\otimes\widehat{R_P}$ is a direct sum of simple $F_{\widehat{R_P}}$-submodules of $\lc_{I\widehat{R_P}}^{n-d}(\widehat{R_P})$ supported at minimal primes of $\widehat{A_P}$ by Lemma \ref{lemma--localizing and completing F-module}, so we have $r= \crk{\lc_{I\widehat{R_P}}^{n-d}(\widehat{R_P})}$ by the definition of corank. This finishes the proof of the claim.
\end{proof}
Applying the above claim to (\ref{equation--D-module filtration}) we get:
\[l_{D(R,k)}(M/L)\leq \sum_{\dim R/P=1} \crk \lc_{I\widehat{R_P}}^{n-d}(\widehat{R_P}).\]
Because $\scr{H}_{\widehat{R_P}, \widehat{A_P} }(\lc_{P\widehat{A_P}}^{d-1}(\widehat{A_P}))
\cong \lc_{I\widehat{R_P}}^{n-d}(\widehat{R_P})$ (the indices match because $I$ is equidimensional),
by \cite[Proposition 4.10]{LyubeznikFModulesApplicationsToLocalCohomology}, we have
\[
\crk \lc_{I\widehat{R_P}}^{n-d}(\widehat{R_P})=
\dim_{\kappa(P)} (\lc_{P\widehat{A_P}}^{d-1}(\widehat{A_P}))_s.
\]
Finally, summing up the ${D(R,k)}$-module length of $L$, $M/L$, and $\lc_I^{n-d}(R)/M$, we have:
$$l_{D(R,k)}(\lc_I^{n-d}(R))\leq c+\sum_{\dim R/P=1}\dim_{\kappa(P)} (\lc_{P\widehat{A_P}}^{d-1}(\widehat{A_P}))_s+\dim_k({\lc^d_{\fm}(A)})_s.$$
\end{proof}
We end this section with some remarks and questions regarding Theorem \ref{theorem--Upper bounds for D-module length when singular locus has dimension 1}:
\begin{remark}
It is clear that the sum $\sum_{\dim R/P=1}\dim_{\kappa(P)} (\lc_{P\widehat{A_P}}^{d-1}(\widehat{A_P}))_s$ in Theorem \ref{theorem--Upper bounds for D-module length when singular locus has dimension 1} is a finite sum: in fact we only need to consider those primes $P$ such that $A_P$ is not $F$-rational (which form a finite set by our assumption). We ought to point out that, more generally, without any assumption on the $F$-rational locus, \cite[Proposition 4.14]{LyubeznikFModulesApplicationsToLocalCohomology} shows that there are only finitely many prime ideals $P$ such that $(\lc_{P\widehat{A_P}}^{j}(\widehat{A_P}))_s\neq 0$.
\end{remark}
\begin{remark}
We do not know whether the inequality in Theorem \ref{theorem--Upper bounds for D-module length when singular locus has dimension 1} is an equality. This is due to the fact that, in the proof of Claim \ref{claim--number of D-module composite factor}, we do not know whether the $D(\widehat{R_P},k)$-module $N_i\otimes \widehat{R_P}$ is isomorphic to a {\it single copy} of $E(\widehat{R_P}/P\widehat{R_P})$.
\end{remark}
In general, a simple $D(R,k)$-module may not stay simple as a $D(\widehat{R}_P,\kappa(P))$-module after taking localization and completion. We point out the following example which is derived from \cite[Example 5.1]{BlickleDmodulestructureofRFmodules}.
\begin{example}
\label{example: completion of simple no longer simple}
Let $R=k[x]$ where $k$ is an algebraically closed field of positive characteristic. Let $M=R\oplus R$ be a free $R$-module of rank $2$. We give $M$ an $F_R$-module structure by setting the composition map
\[R\oplus R\xrightarrow{\theta_M}F(R)\oplus F(R)\xrightarrow{\theta_R^{-1}\oplus\theta_R^{-1}} R\oplus R\]
to be the map represented by the matrix:
\[
\begin{pmatrix}
-x & 1 \\
1 & 0
\end{pmatrix}
\]
where $\theta_R$ denotes the standard isomorphism $R\cong F(R)$.
Next we pick a nonzero simple $F_R^\infty$-module $N\subseteq M$ (note that the only associated prime of $N$ is $0$). By \cite[Corollary 4.7]{BlickleDmodulestructureofRFmodules}, $N$ must be a simple $D(R,k)$-module since $k$ is algebraically closed. However, after we localize at $0$, that is, tensor with the fraction field $k(x)$ of $R$, $M\otimes_R k(x)$ becomes a simple $F_{k(x)}^\infty$-module because $M\otimes_R k(x)^{1/p^\infty}$ is a simple $F_{k(x)^{1/p^\infty}}^\infty$-module by \cite[Example 5.1]{BlickleDmodulestructureofRFmodules}.\footnote{Note that the matrix we used here is the inverse of the matrix as in \cite[Example 5.1]{BlickleDmodulestructureofRFmodules}. This is because we are describing the matrix representing the $F$-module structure on $M\otimes_Rk(x)^{1/p^\infty}$ while Blickle was working with the matrix representing the Frobenius action on $M\otimes_Rk(x)^{1/p^\infty}$. We leave the reader to check that they define the same $F$-module structure on $M\otimes_Rk(x)^{1/p^\infty}$.} Thus we must have $N\otimes_R k(x)\cong M\otimes_R k(x)$, but $M\otimes_R k(x)$ is not a simple $D(k(x), k(x))$-module because obviously every one-dimensional $k(x)$-subspace is a nontrivial $D(k(x), k(x))$-submodule.
\end{example}
\begin{remark}
One approach to generalizing Theorem \ref{theorem--Upper bounds for D-module length when singular locus has dimension 1} is to find an $F$-submodule $M$ of $\lc^{n-d}_I(R)$ such that none of the composition factors of $M$ has 0-dimensional support and the support of $\lc^{n-d}_I(R)/M$ is contained in $\{\fm\}$. In the graded case, the existence of such an $M$ follows from the proof of \cite[Theorem 2.9]{LyubeznikSinghWaltherlocalcohomologysupportedatdeterminantalideals}. We do not know whether \cite[Theorem 2.9]{LyubeznikSinghWaltherlocalcohomologysupportedatdeterminantalideals} can be extended to the non-graded case.
\end{remark}
Hence it is natural to ask the following:
\begin{question}
Does there always exist an $F$-submodule $M$ of $\lc^{n-d}_I(R)$ such that none of the composition factors of $M$ has 0-dimensional support while the support of $\lc^{n-d}_I(R)/M$ is contained in $\{\fm\}$?
\end{question}
Despite the above remarks and questions, we still expect that there should be an analogue of Theorem \ref{theorem--D-module length for isolated singularities} and Theorem \ref{theorem--Upper bounds for D-module length when singular locus has dimension 1} or similar estimates in the local case and without the restriction on the non-$F$-rational locus.
\section{A lower bound on $F$-module length of local cohomology modules}
\label{section: F-pure}
In this section we will give lower bounds on $l_{F_R}(\lc_I^c(R))$. Throughout this section we still assume that $R=k[[x_1,\dots, x_n]]$ or $k[x_1,\dots,x_n]$ with $\m=(x_1,\dots,x_n)$, where $k$ is a field of characteristic $p>0$, and we let $A=R/I$ be reduced and equidimensional or graded reduced and equidimensional of dimension $d\geq 1$.
Henceforth in this section $E=E_R(k)$ will denote the injective hull of the residue field of $R$ and
$E_A=E_A(k)= \Ann_E I$ will denote the injective hull of the residue field of $A$.
We first collect definitions and facts from \cite{SharpGradedAnnihilatorsofModulesovertheFrobeniusSkewpolynomialRingandTC} and \cite{KatzmanParameterTestIdealOfCMRings}.
Given an Artinian $A\{f\}$-module $W$, a {\it special ideal} of $W$ is an ideal of $A$ that is also the annihilator of some $A\{f\}$-submodule $V\subseteq W$, a {\it special prime} is a special ideal that is also a prime ideal (note that the special ideals depend on the $A\{f\}$-module structure on $W$, i.e., the Frobenius action $f$ on $W$).
An important result of Sharp \cite[Corollary 3.7]{SharpGradedAnnihilatorsofModulesovertheFrobeniusSkewpolynomialRingandTC} and Enescu-Hochster
\cite[Theorem 3.6]{EnescuHochsterTheFrobeniusStructureOfLocalCohomology} shows that, when $f$ acts injectively on $W$, the number of special primes of $W$ is finite.
The module of inverse polynomials $E$ comes equipped with a natural Frobenius map $T$ given by $T(\lambda x_1^{-\alpha_1} \dots x_n^{-\alpha_n})=\lambda^p x_1^{-p\alpha_1} \dots x_n^{-p\alpha_n}$
for all $\lambda\in k$ and $\alpha_1, \dots, \alpha_n\geq 0$.
Any Frobenius map on $E$ has the form $uT$ where $u\in R$ and
any Frobenius map on $E_A$ has that form with
$u\in (I^{[p]}:I)$. (cf.~\cite[Proposition 3.36]{BlickleThesis}).
Such an action $uT$ on $E_A$ is injective if and only if $u\notin \m^{[p]}$, and is nonzero if and only if $u\notin I^{[p]}$.
If we now specialize the notion of special ideals to the $A\{f\}$-module $E_A$ where $f=uT$, we see that these are
ideals $J$ such that $u\in J^{[p]}:J$
and we refer to these as {\it $u$-special ideals}
(\cite[Theorem 4.3]{KatzmanParameterTestIdealOfCMRings}). A {\it $u$-special prime} is a $u$-special ideal that is also a prime ideal.
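As a small illustration of these notions, let $R=k[[x,y]]$, $I=(xy)$, and $A=R/I$. Since $I$ is principal and $R$ is a UFD, $(I^{[p]}:I)=((xy)^{p-1})$, so we may take $u=x^{p-1}y^{p-1}$. As $u\notin\m^{[p]}=(x^p,y^p)$, the action $uT$ on $E_A$ is injective, i.e., $A$ is $F$-pure. The primes of $R$ containing $I$ are $(x)$, $(y)$, and $\m$, and one checks directly that each is $u$-special: for instance, $u\cdot x=x^py^{p-1}\in(x)^{[p]}$ and $u\cdot \m\subseteq(x^p,y^p)=\m^{[p]}$. Since $A$ is a hypersurface (hence Gorenstein) and $F$-pure, Theorem \ref{theorem--lower bounds of F-module length for F-pure} below gives $l_{F_R}(\lc^1_{I}(R))=3$.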
\subsection{$F$-pure case} Our main result in this subsection is the following:
\begin{theorem}
\label{theorem--lower bounds of F-module length for F-pure}
Assume $A=R/I$ is reduced and equidimensional of dimension $d\geq 1$.
Suppose $A$ is $F$-pure. Then for every $j$, $l_{F_R}(\lc_I^{n-j}(R))$ is at least the number of special primes of $\lc_\m^j(A)$.
Moreover, when $A$ is quasi-Gorenstein, $l_{F_R}(\lc_I^{n-d}(R))$ is exactly the number of special primes of $\lc_\m^d(A)$.
\end{theorem}
\begin{proof}
Let $P$ be a special prime of $\lc_\m^j(A)$. Take an $A\{f\}$-submodule $N\subseteq \lc_\m^j(A)$ such that $\Ann N=P$. Recall that the Frobenius action on $N$ induces a map $F(N)\to N$. We claim that this map is surjective: let $N'\subseteq N$ be its image; then $N'\subseteq N\subseteq \lc_\m^j(A)$ are $A\{f\}$-submodules such that the Frobenius action on $N/N'$ is nilpotent. But $\lc_\m^j(A)$ is anti-nilpotent (i.e., the Frobenius action on $\lc_\m^j(A)/N'$ is injective for every $A\{f\}$-submodule $N'$) by \cite[Theorem 3.7]{MaFinitenesspropertyoflocalcohomologyforFpurerings}. So we must have $N'=N$ and thus $F(N)\twoheadrightarrow N$ is surjective.
Taking the Matlis dual, we get $N^\vee\hookrightarrow F(N)^\vee\cong F(N^\vee)$. This shows that $N^\vee$ is a root of $\scr{H}_{R, A}(N)$ (recall that by definition, $\scr{H}_{R,A}(N)=\varinjlim(N^\vee\to F(N^\vee)\to F^2(N^\vee)\to\cdots)$). In particular, we know that the set of associated primes of $N^\vee$ is the same as the set of associated primes of $\scr{H}_{R, A}(N)$ (this follows easily from the argument in \cite[Remark 2.13]{LyubeznikFModulesApplicationsToLocalCohomology}). But $\Ann N=\Ann N^\vee=P$ and $N^\vee$ is a finitely generated $R$-module, thus $P$ is a minimal associated prime of $N^\vee$ and hence a minimal associated prime of $\scr{H}_{R,A}(N)$. This implies that $\scr{H}_{R,A}(N)$ must have a simple $F_R$-module composition factor with $P$ its unique associated prime. But we have $\lc_I^{n-j}(R)
\cong\scr{H}_{R,A}(\lc_\m^j(A))\twoheadrightarrow \scr{H}_{R,A}(N)$, hence for every special prime $P$ of $\lc_\m^j(A)$, $\lc_I^{n-j}(R)$ has a simple $F_R$-module composition factor with $P$ its unique associated prime. This proves that $l_{F_R}(\lc_I^{n-j}(R))$ is at least the number of special primes of $\lc_\m^j(A)$.
Finally, when $A$ is quasi-Gorenstein, $\lc_\m^d(A)\cong E_A$, the injective hull of the residue field. So there is a one-one correspondence between $A\{f\}$-submodules of $\lc_\m^d(A)$ and their annihilator ideals. Let $P_1,\dots,P_m$ be all the special primes with $\height P_1\geq\height P_2\geq\cdots\geq \height P_m$. Let $Q_j=P_1\cap P_2\cap\cdots\cap P_j$. We have an ascending chain of $A\{f\}$-submodules of $\lc_\m^d(A)=E_A$: $$0\subsetneqq \Ann_EQ_1\subsetneqq\Ann_EQ_2\subsetneqq\cdots\subsetneqq\Ann_EQ_m=E_A.$$ It suffices to show that $\scr{H}_{R,A}(\Ann_EQ_j/\Ann_EQ_{j-1})$ is a nonzero simple $F$-module. It is nonzero because the Frobenius action on $\Ann_EQ_j/\Ann_EQ_{j-1}$ is not nilpotent (in fact it is injective because $\lc_\m^d(A)$ is anti-nilpotent). If it is not simple, then there exists another $A\{f\}$-submodule $M$ such that $\Ann_EQ_{j-1}\subsetneqq M\subsetneqq \Ann_EQ_{j}$, which implies $Q_j\subsetneqq \Ann M\subsetneqq Q_{j-1}$. But since the Frobenius action on $\lc_\m^d(A)=E_A$ is injective and $M$ is an $A\{f\}$-submodule, $\Ann M$ is an intersection of the special primes by \cite[Corollary 3.7]{SharpGradedAnnihilatorsofModulesovertheFrobeniusSkewpolynomialRingandTC} or \cite[Theorem 3.6]{EnescuHochsterTheFrobeniusStructureOfLocalCohomology}. This is then impossible by our height hypothesis on $P_i$ and the definition of $Q_j$.
\end{proof}
\begin{remark}
When $A$ is not $F$-pure, the number of special primes of $\lc_\m^j(A)$ is not necessarily a lower bound of $l_{F_R}(\lc_I^{n-j}(R))$. In fact, in Example \ref{example--Calabi-Yau hypersurface}, when $A=R/f$ is a Calabi-Yau hypersurface that is not $F$-pure, then $l_{F_R}(\lc_f^{1}(R))=1$ while the number of special primes of $\lc_\m^1(A)$ is 2: $(f)$ and $\m$ are both special primes of $\lc_\m^1(A)$. So the first conclusion of Theorem \ref{theorem--lower bounds of F-module length for F-pure} need not hold when $A$ is not $F$-pure.
\end{remark}
\begin{example}
Let $A=k[x_1,\dots,x_n]/(x_ix_j \mid 1\leq i<j\leq n)=R/I$. Then $A$ is a one-dimensional $F$-pure ring, and $A$ is not Gorenstein when $n\geq 3$. A straightforward computation using \cite[Theorem 5.1]{EnescuHochsterTheFrobeniusStructureOfLocalCohomology} shows that the special primes of $\lc_\m^1(A)$ are $P_i=(x_1,\dots,\widehat{x}_i,\dots,x_n)$ and $\m$ (thus there are $n+1$ special primes) but $l_{F_R}(\lc_I^{n-1}(R))=2n-1$. Hence $l_{F_R}(\lc_I^{n-1}(R))$ is strictly bigger than the number of special primes when $n\geq 3$ (and the difference can be arbitrarily large when $n\gg0$). This shows the second conclusion of Theorem \ref{theorem--lower bounds of F-module length for F-pure} need not hold when $A$ is not quasi-Gorenstein.
\end{example}
\subsection{Gorenstein case: a second approach}
In this subsection we assume that $A=R/I$ is Gorenstein and $F$-injective (equivalently, Gorenstein and $F$-pure). In this case $\lc^i_I(R)$ vanishes unless $i=n-d$, and we already know from Theorem \ref{theorem--lower bounds of F-module length for F-pure} that $l_{F_R}(\lc_I^{n-d}(R))$ is equal to the number of special primes of $\lc^{d}_{\mathfrak{m}}(A)$.
Our goal here is to give a more detailed analysis of the $F_R$-submodules of $\lc_I^{n-d}(R)$ in terms of their generating morphisms; in particular, we recover the second conclusion of Theorem \ref{theorem--lower bounds of F-module length for F-pure}.
Since $A$ is Gorenstein, $E_A\cong\lc^{d}_{\mathfrak{m}}(A)$ and thus there is a natural Frobenius action on $E_A$.
In this case the module $\frac{I^{[p]}:I}{I^{[p]}}$ is a cyclic $A$-module, and the natural Frobenius action on $E_A$ is given up to sign by
$uT$, where we fix $u\in I^{[p]}:I$ whose image in $\frac{I^{[p]}:I}{I^{[p]}}$ generates it as an $A$-module (and $T$ denotes the natural Frobenius on $E$). The $u$-special ideals (resp. $u$-special primes) are thus the special ideals (resp. special primes) and they are finite by
\cite[Corollary 3.7]{SharpGradedAnnihilatorsofModulesovertheFrobeniusSkewpolynomialRingandTC} or \cite[Theorem 3.6]{EnescuHochsterTheFrobeniusStructureOfLocalCohomology}.
Following the construction in \cite[section 4]{LyubeznikFModulesApplicationsToLocalCohomology}, we obtain a generating morphism for $\lc^{n-d}_I(R)$ of the form
$R/I \xrightarrow{u} R/I^{[p]}$ with $u$ as above.
To obtain a root, we let $K=\cup_{e\geq 1} (I^{[p^e]} : u^{1+\dots+p^{e-1}} )$ and
now $R/K \xrightarrow{u} R/K^{[p]}$ is a root of $\lc^{n-d}_I(R)$.
\begin{lemma}\label{Lemma: submodules of F-finite F-modules}
The proper $F$-finite $F$-submodules of $\lc^{n-d}_I(R)$
have roots $J/K \xrightarrow{u} J^{[p]}/K^{[p]}$
as $J$ ranges over all proper $u$-special ideals; furthermore, distinct $u$-special ideals $J$
define distinct $F$-finite $F$-submodules of $\lc^{n-d}_I(R)$.
\end{lemma}
\begin{proof}
\cite[Corollary 2.6]{LyubeznikFModulesApplicationsToLocalCohomology} establishes a bijection between $F$-finite $F$-submodules $\mathcal{N}$ of $\lc^{n-d}_I(R)$
and $R$-submodules of the root $R/K$ which is given by
$\mathcal{N} \mapsto \mathcal{N} \cap R/K$.
Fix such $\mathcal{N}$ and write $J/K=\mathcal{N} \cap R/K$.
The fact that $\mathcal{N}$ is an $F$-finite $F$-submodule of $\lc^{n-d}_I(R)$ implies that
the image of the restriction of the map $R/K \xrightarrow{u} R/K^{[p]}$ to $J/K$ is contained in $F(J/K)=J^{[p]}/K^{[p]}$, and hence
$J$ is a $u$-special ideal. Clearly, any such $u$-special ideal defines an $F$-finite $F$-submodule of $\lc^{n-d}_I(R)$, namely
$\varinjlim \left(J/K \xrightarrow{u} J^{[p]}/K^{[p]}\xrightarrow{u^p} J^{[p^2]}/K^{[p^2]} \xrightarrow{u^{p^2}} \cdots \right)$.
To finish the proof we need to show that any two distinct $u$-special ideals $J_1$ and $J_2$ define different
$F$-finite $F$-submodules of $\lc^{n-d}_I(R)$. If this is not the case, then for some $e\geq 1$ we have
$u^{\nu_e} J_1/ K^{[p^e]} = u^{\nu_e} J_2/ K^{[p^e]}$, where $\nu_e=1+p+\dots + p^{e-1}$; equivalently,
$u^{\nu_e} J_1 + K^{[p^e]}= u^{\nu_e} J_2 + K^{[p^e]}$.
The fact that $f$ is injective on $E_A$ is equivalent to $u^{\nu_e}\notin L^{[p^e]}$ for all $e\geq 1$ and all
proper ideals $L\subsetneq R$ (\cite[Theorem 4.6]{KatzmanParameterTestIdealOfCMRings}), and in particular
$u^{\nu_e}\notin J_1^{[p^e]}$ and $u^{\nu_e}\notin J_2^{[p^e]}$.
But now
$u^{\nu_e} J_1 \subseteq u^{\nu_e} J_2 + K^{[p^e]} \subseteq J_2^{[p^e]}$ and $J_2^{[p^e]}$ is a primary ideal (because $R$ is regular),
hence $J_1 \subseteq \sqrt{ J_2^{[p^e]} } = J_2$. Similarly, also $J_2 \subseteq J_1$, contradicting the fact that $J_1\neq J_2$.
\end{proof}
\begin{theorem}
\label{Theorem: F-length for Gorenstein rings}
Let $A=R/I$ be Gorenstein and $F$-injective where $R=k[[x_1,\dots, x_n]]$ or $k[x_1,\dots,x_n]$ with $\m=(x_1,\dots,x_n)$. Let $\{ P_1, \dots, P_m \}$ be the set of all the special prime ideals of $\lc^{d}_{\mathfrak{m}}(A)$ which contain $K$, and assume that these were ordered so that $\height P_1 \geq \height P_2 \geq \dots \geq \height P_m$. Write $Q_j=P_1 \cap \dots \cap P_j$ for all $1\leq j \leq m$.
The chain of roots
\[\xymatrix@C=5pt{
\displaystyle
0 & \subset & \frac{Q_m}{K} \ar[d]^{u} & \subset & \frac{Q_{m-1}}{K} \ar[d]^{u} & \subset & \dots & \subset & \frac{Q_1}{K} \ar[d]^{u} & \subset & \frac{R}{K} \ar[d]^{u}\\
0 & \subset & \frac{Q_m^{[p]}}{K^{[p]}} & \subset & \frac{Q_{m-1}^{[p]}}{K^{[p]}} & \subset & \dots & \subset & \frac{Q_1^{[p]}}{K^{[p]}} & \subset & \frac{R}{K^{[p]}} \\
}\]
corresponds to a maximal filtration of $\lc^{n-d}_I(R)$ in the category of $F_R$-modules.
\end{theorem}
\begin{proof}
Since $u Q_j \subseteq Q_j^{[p]}$ and $u K \subseteq K^{[p]}$ the vertical maps are well defined, and
the diagram is clearly commutative.
To show that the factors are non-zero, note that if $Q_{j+1}=P_1 \cap \dots \cap P_{j+1}=P_1 \cap \dots \cap P_j=Q_j$, then $P_{j+1} \supseteq P_1 \cap \dots \cap P_j$, and hence $P_{j+1} \supseteq P_i$ for some $1\leq i \leq j$.
But the ordering of $P_1, \dots, P_m$ implies that $\height P_{j+1} \leq \height P_i$, giving $P_i=P_{j+1}$, a contradiction.
If the factors are not simple, then for some $1\leq j \leq m$ there exists a special ideal $J$ such that
$P_1 \cap \dots \cap P_j \cap P_{j+1} \subsetneq J \subsetneq P_1 \cap \dots \cap P_j$.
Being special, $J$ is radical and has the form
$P_1 \cap \dots \cap P_j \cap P_{k_1} \cap P_{k_2}\cap \dots \cap P_{k_s}$ for
$j< k_1, \dots, k_s \leq m$. Now for every $1\leq \ell\leq s$,
$P_{k_\ell} \supseteq P_1 \cap \dots \cap P_{j+1}$ so $P_{k_\ell} \supseteq P_w$ for some $1\leq w\leq j+1$,
and the height condition implies $P_{k_\ell} = P_w$ and we conclude that
$J\subseteq P_1 \cap \dots \cap P_{j+1}$, a contradiction.
It remains to show that the $F_R$-submodules defined by the roots $Q_j/K \xrightarrow{u} Q_j^{[p]}/K^{[p]}$ are distinct:
this follows from Lemma \ref{Lemma: submodules of F-finite F-modules}.
\end{proof}
\begin{corollary}
Suppose $A=R/I$ is Gorenstein and $F$-injective. The length of $\lc^{n-d}_I(R)$ in the category of $F_R$-modules equals the number of $u$-special primes of $\lc^{d}_{\mathfrak{m}}(A)$ that contain $K$.
\end{corollary}
\begin{remark}
Suppose $A=R/I$ is Gorenstein and $F$-injective. We can actually prove that the length of $\lc^{n-d}_I(R)$ in the category of $F^e_R$-modules for every $e$ (and hence in the category of $F_R^\infty$-modules) equals the number of $u$-special primes of $\lc^{d}_{\mathfrak{m}}(A)$ that contain $K$. It suffices to prove that every $F_R^e$-submodule of $\lc^{n-d}_I(R)$ is already an $F_R$-submodule of $\lc^{n-d}_I(R)$. By the same argument as in Lemma \ref{Lemma: submodules of F-finite F-modules}, all $F_R^e$-submodules of $\lc^{n-d}_I(R)$ have roots of the form $J/K$ where $J$ is an ideal containing $K$ with the property $u^{1+p+\cdots+p^{e-1}} J \subseteq J^{[p^e]}$. Since $A=R/I$ is Gorenstein and $F$-injective (and hence $F$-pure), $u$ is a generator of $(I^{[p]}:I)/I^{[p]}$ as an $R/I$-module and $u\notin\m^{[p]}$ by Fedder's criterion \cite{FedderFPureRat}. We will show that these imply $uJ\subseteq J^{[p]}$ and thus $J/K$ already generates an $F_R$-submodule of $\lc^{n-d}_I(R)$. Since $u\notin\m^{[p]}$ and $R$ is regular, there is a $p^{-1}$-linear map (i.e., a Frobenius splitting) $\phi$: $R\to R$ such that $\phi(u)=1$. Therefore $u^{1+p+\cdots+p^{e-1}} \in J^{[p^e]}:J$ implies $$u^{1+p+\cdots+p^{e-2}}=\phi(u^{p(1+p+\cdots+p^{e-2})}\cdot u)=\phi(u^{1+p+\cdots+p^{e-1}})\in \phi(J^{[p^e]}:J)\subseteq\phi(J^{[p^e]}:J^{[p]})=J^{[p^{e-1}]}:J$$
and thus by an easy induction we have $u\in J^{[p]}:J$ (note that we have used $(J^{[p^{e-1}]}:J)^{[p]}=J^{[p^e]}:J^{[p]}$ because $R$ is regular so the Frobenius endomorphism is flat).
\end{remark}
\subsection{Cohen-Macaulay case}
In this subsection we assume that $A=R/I$ is reduced and Cohen-Macaulay. In this case the canonical module of $A$ can be identified with an ideal $\omega\subseteq A$; let $\Omega$ be the pre-image of $\omega$ in $R$, that is, $\Omega/I=\omega \subseteq A$.
The inclusion $\omega\subseteq A$ is compatible with the Frobenius endomorphism, and the short exact sequence
$0 \rightarrow \omega \rightarrow A \rightarrow A/\omega \rightarrow 0$
induces an exact sequence
$$0 \rightarrow \lc_\mathfrak{m}^{d-1} (A/\omega) \rightarrow \lc_\mathfrak{m}^{d}(\omega) \rightarrow \lc_\mathfrak{m}^{d}(A) \rightarrow \lc_\mathfrak{m}^{d} (A/\omega) \rightarrow 0$$
(the initial zero uses $\lc_\mathfrak{m}^{d-1}(A)=0$, which holds since $A$ is Cohen-Macaulay).
Now each of the Artinian $A$-modules is equipped with a Frobenius map induced by the Frobenius endomorphism acting on the short exact sequence, and $\lc_\mathfrak{m}^{d} (A/\omega)$ vanishes since $\dim A/\omega < \dim A=d$. So we obtain a short exact sequence of $A\{f\}$-modules
$$0 \rightarrow \lc_\mathfrak{m}^{d-1} (A/\omega) \rightarrow \lc_\mathfrak{m}^{d}(\omega) \rightarrow \lc_\mathfrak{m}^{d}(A) \rightarrow 0 .$$
We can now identify $ \lc_\mathfrak{m}^{d}(\omega) $ with $E_A=\Ann_E I$.
Note that the annihilator of $\lc_\mathfrak{m}^{d-1} (A/\omega)$ in $A$ is $\omega$, hence the annihilator of $\lc_\mathfrak{m}^{d-1} (A/\omega)$ in $R$ is $\Omega$ (since $R/I=A$ and $\Omega/I=\omega$). Thus we may identify $\lc_\mathfrak{m}^{d-1} (A/\omega)$ with $\Ann_E \Omega$.
We now have a short exact sequence
$$0 \rightarrow \Ann_E \Omega \rightarrow \Ann_E I \rightarrow \lc_\mathfrak{m}^{d}(A) \rightarrow 0$$
of $A\{f\}$-modules, and we can also write $\lc_\mathfrak{m}^{d}(A)= \Ann_E I/\Ann_E \Omega$.
Recall that any Frobenius action on $\Ann_E I$ has the form $u T$ where $T$ is the natural Frobenius on
$E$ and $u\in(I^{[p]} : I)$.
Fix $u\in R$ to be such that $u T$ is the Frobenius action on $\Ann_E I$ in the exact sequence above.
We now can obtain an $F_R$-module filtration of $\lc^{n-d}_I(R)=\scr{H}_{R,A}(\Ann_E I/\Ann_E \Omega)$ by applying
the Lyubeznik functor $\scr{H}_{R,A}$ to a chain of surjections
$$\Ann_E I/\Ann_E \Omega \rightarrow \Ann_E I/\Ann_E J_1 \rightarrow \dots \rightarrow \Ann_E I/\Ann_E J_m $$
where $J_1, \dots, J_m$ are $u$-special ideals such that $\Omega \supseteq J_1 \supseteq \dots \supseteq J_m \supseteq I$. We let $K=\cup_{e\geq 1} (I^{[p^e]} : u^{1+p+\dots+p^{e-1}})$ so that $\Omega/K \xrightarrow{u} \Omega^{[p]}/K^{[p]}$ is a root for $\lc^{n-d}_I(R)=\scr{H}_{R,A}(\Ann_E I/\Ann_E \Omega)$.
\begin{theorem}
\label{Theorem: F-length for CM rings}
Assume $A=R/I$ is Cohen-Macaulay.
Let $\{ P_1, \dots, P_m \}$ be the set of all the $u$-special prime ideals
$P\supseteq K$
such that $P\nsupseteqq \Omega$ and $u\notin P^{[p]}$, and assume that these were ordered so that $\height P_1 \geq \height P_2 \geq \dots \geq \height P_m$.
Write $Q_j=\Omega \cap P_1 \cap \dots \cap P_j$ for all $1\leq j \leq m$.
The chain of roots
\[\xymatrix@C=5pt{
\displaystyle
0 & \subset & \frac{Q_m}{K} \ar[d]^{u} & \subset & \frac{Q_{m-1}}{K} \ar[d]^{u} & \subset & \dots & \subset & \frac{Q_1}{K} \ar[d]^{u} & \subset & \frac{\Omega}{K} \ar[d]^{u}\\
0 & \subset & \frac{Q_m^{[p]}}{K^{[p]}} & \subset & \frac{Q_{m-1}^{[p]}}{K^{[p]}} & \subset & \dots & \subset & \frac{Q_1^{[p]}}{K^{[p]}} & \subset & \frac{\Omega^{[p]}}{K^{[p]}} \\
}\]
corresponds to a filtration of $\lc^{n-d}_I(R)$ in the category of $F$-modules with non-zero factors.
\end{theorem}
\begin{proof}
The homomorphic images $\Ann_E I/\Ann_E Q_1, \dots, \Ann_E I/\Ann_E Q_m$ of
$\Ann_E I / \Ann_E \Omega$ are preserved by the natural Frobenius action on
$\lc^{d}_{\mathfrak{m}}(A)\cong \Ann_E I / \Ann_E \Omega$
because each $Q_i$, being an intersection of $u$-special ideals, is itself $u$-special.
An application of the Lyubeznik functor $\scr{H}_{R,A}$ to the chain of surjections
$$\Ann_E I/\Ann_E \Omega \rightarrow \Ann_E I/\Ann_E Q_1 \rightarrow \dots \rightarrow \Ann_E I/\Ann_E Q_m $$
yields a filtration of
$\lc^{n-d}_I(R)$
whose generating morphisms are the vertical maps in the following commutative diagram
\[\xymatrix@C=5pt{
\displaystyle
0 & \subset & \frac{Q_m}{I} \ar[d]^{u} & \subset & \frac{Q_{m-1}}{I} \ar[d]^{u} & \subset & \dots & \subset & \frac{Q_1}{I} \ar[d]^{u} & \subset & \frac{\Omega}{I} \ar[d]^{u}\\
0 & \subset & \frac{Q_m^{[p]}}{I^{[p]}} & \subset & \frac{Q_{m-1}^{[p]}}{I^{[p]}} & \subset & \dots & \subset & \frac{Q_1^{[p]}}{I^{[p]}} & \subset & \frac{\Omega^{[p]}}{I^{[p]}} \\
}.
\]
We can replace these generating morphisms by their corresponding roots, and obtain the commutative diagram
\[\xymatrix@C=5pt{
\displaystyle
0 & \subset & \frac{Q_m}{K} \ar[d]^{u} & \subset & \frac{Q_{m-1}}{K} \ar[d]^{u} & \subset & \dots & \subset & \frac{Q_1}{K} \ar[d]^{u} & \subset & \frac{\Omega}{K} \ar[d]^{u}\\
0 & \subset & \frac{Q_m^{[p]}}{K^{[p]}} & \subset & \frac{Q_{m-1}^{[p]}}{K^{[p]}} & \subset & \dots & \subset & \frac{Q_1^{[p]}}{K^{[p]}} & \subset & \frac{\Omega^{[p]}}{K^{[p]}} \\
}\]
where now all vertical maps are roots and, once we show that the inclusions in this diagram are strict, this
gives a filtration of $\lc^{n-d}_I(R)$ with non-zero factors.
We need to show that for all $e\geq 1$, and all $1\leq i<m$,
$$ (Q_{i+1}^{[p^e]} : u^{\nu_e}) \subsetneq (Q_{i}^{[p^e]} : u^{\nu_e}) $$
where $\nu_e=1+p+\dots+p^{e-1}$. If we have equality, we may take radicals of both sides to obtain
$$
\sqrt{(\Omega^{[p^e]} : u^{\nu_e})} \cap \bigcap_{j=1}^{i+1} \sqrt{(P_{j}^{[p^e]} : u^{\nu_e}) }
=
\sqrt{(\Omega^{[p^e]} : u^{\nu_e})} \cap \bigcap_{j=1}^i \sqrt{(P_{j}^{[p^e]} : u^{\nu_e})}
$$
and so
$$\sqrt{(P_{i+1}^{[p^e]} : u^{\nu_e})}\supseteq
\sqrt{(\Omega^{[p^e]} : u^{\nu_e})} \cap \bigcap_{j=1}^i \sqrt{(P_{j}^{[p^e]} : u^{\nu_e})} .$$
We claim that $u^{\nu_e} \notin P_{j}^{[p^e]}$ for each $j$ and we will use induction on $e$ to prove this. When $e=1$, this is precisely our assumption that $u\notin P^{[p]}_j$. Assume that $u^{\nu_e} \notin P_{j}^{[p^e]}$ and we wish to prove $u^{\nu_{e+1}} \notin P_{j}^{[p^{e+1}]}$. If $u^{\nu_{e+1}} \in P_{j}^{[p^{e+1}]}$, then we would have
\[u^{\nu_e}\in (P_{j}^{[p^{e+1}]}:u^{p^e})=(P^{[p]}_j:u)^{[p^e]}\subseteq P^{[p^e]}_j\]
a contradiction, where the last inclusion follows from the fact that $P^{[p]}_j$ is $P_j$-primary and our assumption that $u\notin P^{[p]}_j$.
Consequently $(P_{i+1}^{[p^e]} : u^{\nu_e})\neq R$ and we must have $(P_{i+1}^{[p^e]} : u^{\nu_e})\subseteq P_{i+1}$ and
so
$P_{i+1} \supseteq \sqrt{(\Omega^{[p^e]} : u^{\nu_e})} \cap \bigcap_{j=1}^i \sqrt{(P_{j}^{[p^e]} : u^{\nu_e})}$
and hence $P_{i+1}$ must contain one of the ideals in the intersection. But each of these ideals is either the unit ideal, one of $P_1, \dots, P_i$, or $\sqrt{(\Omega^{[p^e]} : u^{\nu_e})}\supseteq \Omega$, and $P_{i+1}$ contains none of these: the height condition rules out $P_1, \dots, P_i$, and $P_{i+1}\nsupseteqq \Omega$ by assumption. This is impossible.
\end{proof}
We have the following immediate corollary of Theorem \ref{Theorem: F-length for CM rings}.
\begin{corollary}
Let $A,R,I,u,K,\Omega$ be as in Theorem \ref{Theorem: F-length for CM rings}.
The length of $\lc^{n-d}_I(R)$ in the category of $F$-modules is at least the number of $u$-special prime ideals
$P\supseteq K$ of $\lc^{d}_{\mathfrak{m}}(A)$ such that $P\nsupseteqq \Omega$ and $u\notin P^{[p]}$.
\end{corollary}
\begin{remark}
If $A$ is quasi-Gorenstein and we take $\Omega=R$,
the prime special ideals in the statement of Theorem \ref{Theorem: F-length for CM rings}
are the prime special ideals $P\supseteq I$ of $\lc_\m^d(A)$ such that $P\supseteq K$, and $u\notin P^{[p]}$.
The set of all such primes has been known to be finite (\cite[Remark 5.3]{KatzmanSchwedeAlgorithmForComputing}) and
if $A$ is also $F$-injective, we obtain the same set of primes as in Theorem \ref{Theorem: F-length for Gorenstein rings};
thus Theorem \ref{Theorem: F-length for CM rings} generalizes Theorem \ref{Theorem: F-length for Gorenstein rings}.
\end{remark}
\section{A computation of Fermat hypersurfaces}
\label{section: diagonal hypersurfaces}
We have seen from Theorem \ref{theorem--D-module length for isolated singularities} and Remark \ref{remark--D-module length for graded isolated singularities} that the problem of computing the $D(R,k)$-module length of $\lc_I^{n-d}(R)$ when $A=R/I$ is a graded isolated singularity comes down to computing the dimension of the Frobenius stable part of $\lc_\m^d(A)_0$. In this section we study this problem for Fermat hypersurfaces $A=k[x_0, \ldots, x_d]/(x_0^n+x_1^n+\cdots +x_d^n)$ with $d\geq 2$. We express the dimension of the Frobenius stable part of $\lc_\m^{d}(A)$ explicitly in terms of the number of solutions to a system of inequalities on remainders. These results generalize earlier computations of Blickle in \cite[Examples 5.26--5.29]{BlickleThesis}.
\begin{remark}
Let $A = k[x_0,x_1, \dots,x_d]/(x_0^n + x_1^n +\cdots + x_d^n)$.
Then the degree $0$ part of the top local cohomology
$\lc^d_{\fm}(A)$
has a $k$-basis consisting of
the elements of the form $\frac{x_0^c}{x_1^{a_1}\cdots x_d^{a_d}}$, where $a_1,\dots, a_d$ are positive integers and $a_1+\cdots+a_d=c\leq n-1$.
Therefore, its dimension is $\binom{n-1}{d}$.
\end{remark}
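As an illustrative aside (not part of the original argument), this count can be verified by brute-force enumeration; the function name below is ours, introduced only for this check:

```python
from itertools import product
from math import comb

def basis_size(n, d):
    """Count tuples (a_1, ..., a_d) of positive integers with
    a_1 + ... + a_d <= n - 1; these index the k-basis of the degree-0
    part of the top local cohomology described in the remark above."""
    return sum(1 for a in product(range(1, n), repeat=d)
               if sum(a) <= n - 1)

# For instance, n = 7, d = 2 gives binom(6, 2) = 15 basis elements.
```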
In the following we will use $s\% t$ to denote the remainder of $s$ mod $t$.
\begin{remark}
\label{remark--elementary number theory}
We want to record an elementary observation. Let $n\geq 2$ be an integer and $p$ be a prime.
Let $p = nk + r$ where $0\leq r<n$ is the remainder. We claim that if $n\mid ar$ for some positive integer $a < n$, then $p \mid n$.
Indeed, $n\mid ar$ and $a<n$ imply that $n$ and $r$ have a nontrivial common divisor. Any common divisor of $n$ and $r$ also divides $p = nk + r$; since $p$ is prime, the only nontrivial common divisor $n$ and $r$ could have is $p$, in which case $p$ divides $n$.
\end{remark}
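A quick brute-force sanity check of this observation (an illustrative sketch of ours; the function name is hypothetical):

```python
def remainder_claim_holds(n, p):
    """Check the claim of the remark: with r = p % n, if p does not
    divide n, then n never divides a*r for any positive a < n."""
    r = p % n
    return all((a * r) % n != 0 for a in range(1, n))

# Over small primes p not dividing n the claim always holds, while for
# p dividing n it can fail (e.g. n = 10, p = 5, a = 2 gives 10 | 10).
```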
\begin{theorem}\label{diagonal}
\label{theorem--dimension of the stable part for diagonal hypersurface}
Let $A = k[x_0, x_1, \ldots, x_d]/(x_0^n+x_1^n + \cdots + x_d^n)$ with $d\geq 2$.
Suppose $p \equiv r \mod n$.
If $p$ does not divide $n$, then the dimension of the stable part of $\lc^{d}_{\fm}(A)$
can be computed as the number of solutions of the following system of inequalities
in integers $1 \leq a_i \leq n-1$:
\[\begin{cases}
a_1 + \cdots + a_d < n\\
(ra_1)\% n + \cdots + (ra_d)\% n < n \\
(r^2a_1)\% n + \cdots + (r^2a_d)\% n < n \\
\qquad\vdots\\
(r^{\varphi(n)-1}a_1)\% n + \cdots + (r^{\varphi(n)-1}a_d)\% n < n
\end{cases}.\]
\end{theorem}
\begin{proof}
A basis of the degree $0$ part of $\lc_\m^d(A)$ is formed by the elements
\[\frac{x_0^c}{x_1^{a_1}\cdots x_d^{a_d}}\]
where $a_1 + \cdots + a_d = c < n$ and $a_i \geq 1$.
On such an element, Frobenius acts as
\[
\frac{x_0^c}{x_1^{a_1}\cdots x_d^{a_d}} \mapsto \frac{x_0^{cp}}{x_1^{a_1p}\cdots x_d^{a_dp}}
= \frac{x_0^{(cr)\% n} \cdot (- x_1^n - \cdots - x_d^n)^{\lfloor \frac{cp}{n} \rfloor }}{x_1^{a_1p}\cdots x_d^{a_dp}}.
\]
After expanding the expression we obtain the sum of monomials of the form
\[
(-1)^{\lfloor \frac{cp}{n} \rfloor} \binom {\lfloor \frac{cp}{n} \rfloor}{\alpha_1, \ldots, \alpha_d}
\frac{x_0^{(cr)\% n} \cdot (x_1^{n\alpha_1}\cdots x_d^{n\alpha_d})}{x_1^{a_1p}\cdots x_d^{a_dp}}
\]
for $\alpha_1 + \cdots + \alpha_d = \lfloor \frac{cp}{n} \rfloor$.
This element will be zero unless $\alpha_i n < a_i p$ for all $i$.
Hence it is zero if
$\alpha_i > \lfloor \frac{a_ip}{n} \rfloor$ for some $i$.
In particular, the element $\frac{x_0^c}{x_1^{a_1}\cdots x_d^{a_d}}$ is in the kernel of the Frobenius map if
\[
\alpha_1 + \cdots + \alpha_d = \left \lfloor \frac{cp}{n} \right\rfloor >
\left\lfloor \frac{a_1p}{n} \right\rfloor + \cdots + \left \lfloor \frac{a_dp}{n} \right\rfloor, \]
i.e., $(a_1p) \% n + \cdots + (a_dp) \% n=(a_1r) \% n + \cdots + (a_dr) \% n \geq n$.
Similarly, if
\[
\left\lfloor \frac{cp}{n} \right\rfloor =
\left\lfloor \frac{a_1p}{n} \right\rfloor + \cdots + \left\lfloor \frac{a_dp}{n} \right\rfloor ,
\]
the only term that can possibly survive is
\footnotesize
\[
\binom {\lfloor \frac{cp}{n} \rfloor}{{\lfloor \frac{a_1p}{n} \rfloor}, \ldots, {\lfloor \frac{a_dp}{n} \rfloor}}
\frac {(-1)^{\lfloor \frac{cp}{n} \rfloor} x_0^{(cr)\%n} \cdot (x_1^{\lfloor \frac{a_1p}{n} \rfloor}\cdots x_d^{\lfloor \frac{a_dp}{n} \rfloor})^n}
{x_1^{a_1p}\cdots x_d^{a_dp}}
= \binom {\lfloor \frac{cp}{n} \rfloor}{{\lfloor \frac{a_1p}{n} \rfloor}, \ldots, {\lfloor \frac{a_dp}{n} \rfloor}}
\frac{(-1)^{\lfloor \frac{cp}{n} \rfloor} x_0^{(cr)\% n}}{x_1^{(a_1r)\% n}\cdots x_d^{(a_dr)\% n}}.
\]
\normalsize
Since $\lfloor \frac{cp}{n} \rfloor < p$, the binomial coefficient $ \binom {\lfloor \frac{cp}{n} \rfloor}{{\lfloor \frac{a_1p}{n} \rfloor}, \ldots, {\lfloor \frac{a_dp}{n} \rfloor}}$ is nonzero. Thus the only remaining possibility for the above term to be zero is that $a_ir$ is divisible by $n$ for some $i$. But this cannot happen, as explained in Remark~\ref{remark--elementary number theory} (since it would imply $p \mid n$).
In sum, an element of the basis
\[\frac{x_0^c}{x_1^{a_1}\cdots x_d^{a_d}}\]
is not in the kernel of the Frobenius map if and only if
\[(ra_1)\% n + \ldots + (ra_d)\% n < n.\]
The claim now follows by considering further iterates of Frobenius (beyond the first, only $\varphi(n)-1$ further iterates need to be considered, because $r^{\varphi(n)}\equiv 1$ mod $n$ by Euler's theorem, so later iterates repeat this pattern): if a basis element is not in the kernel of any iterate of the Frobenius map, then it contributes to an element of $(\lc_\m^d(A)_0)_s$.
\end{proof}
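The criterion of the theorem lends itself to direct machine verification. The following brute-force sketch is our own illustrative code (the name `stable_dim` is made up); it assumes $p \nmid n$, so that $r$ is a unit mod $n$, and iterates over the multiplicative powers of $r$ mod $n$, which is equivalent to the $\varphi(n)$ rows of the system:

```python
from itertools import product

def stable_dim(n, d, r):
    """Count tuples (a_1, ..., a_d) with 1 <= a_i <= n-1 satisfying
    (r^j a_1) % n + ... + (r^j a_d) % n < n for every j; by the theorem
    this is the dimension of the Frobenius-stable part of the degree-0
    top local cohomology of x_0^n + ... + x_d^n when p = r (mod n)."""
    # multiplicative powers of r modulo n (r is a unit since p does not divide n)
    powers, s = [], 1
    while True:
        powers.append(s)
        s = (s * r) % n
        if s == 1:
            break
    return sum(
        1 for a in product(range(1, n), repeat=d)
        if all(sum((q * ai) % n for ai in a) < n for q in powers)
    )
```

For $n = 7$, $d = 2$ this reproduces the dimensions $15$, $6$, $0$ appearing in Section 6 (for $r = 1$; $r = 2, 4$; other residues), and for $n = 11$, $d = 2$ it returns $0$ for every $r \neq 1$, consistent with the remark following the corollaries below.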
\begin{corollary}\label{injective roots}
Suppose $p \nmid n$. Then Frobenius acts injectively on $\lc_\m^{d}(A)_0$ if and only if $p \equiv 1\mod n$. As a consequence, $(\lc_\m^{d}(A)_0)_s=\lc_\m^{d}(A)_0$ if and only if $p \equiv 1\mod n$.
\end{corollary}
\begin{proof}
If $p \equiv 1 \mod n$ the claim immediately follows from
Theorem~\ref{theorem--dimension of the stable part for diagonal hypersurface}.
For the other direction, we need to show that there are integers $a_i\geq 1$ such
that $a_1 + \cdots + a_d < n$ but $(ra_1)\% n + \cdots + (ra_d) \% n \geq n$.
Since $r$ is invertible modulo $n$, we may take $1 \leq a_1 < n-1$ such that $a_1r \equiv -1 \mod n$, i.e., $(ra_1)\% n = n-1$.
Then since $(ra_i)\% n \geq 1$ for all $i > 1$, we always have $(ra_1)\% n + \ldots + (ra_d)\% n \geq n$ as $d\geq 2$.
\end{proof}
\begin{corollary}\label{nilpotent roots}
If $p^h \equiv -1 \mod n$ for some $h$, then Frobenius acts nilpotently on $\lc_\m^{d}(A)_0$. As a consequence, $(\lc_\m^d(A)_0)_s=0$ in this case.
\end{corollary}
\begin{proof}
Consider the inequality
\[(-a_1)\% n + \cdots + (-a_d)\% n < n\]
corresponding to $r^h$ (which is $\equiv p^h\equiv -1 \mod n$).
Since $1 \leq a_i \leq n-1$, we have $(-a_i)\% n = n - a_i$, so the inequality becomes
\[
dn - a_1 - \cdots - a_d < n.
\]
But this inequality has no solution: since $a_1 + \cdots + a_d < n$ and $d\geq 2$, we have $dn - a_1 - \cdots - a_d > (d-1)n \geq n$.
\end{proof}
\begin{remark}
The converse to the last corollary does not hold. For example, if $n = 11$ and $d = 2$, then a direct computation shows that
Frobenius acts nilpotently on $\lc^{2}_{\fm}(A)_0$ unless $p \equiv 1 \mod n$.
\end{remark}
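For context, the hypothesis of the corollary genuinely fails here for several residues. A small sketch (our own helper, with a made-up name) identifies which residues $r$ mod $11$ have some power $\equiv -1$; for $p \equiv 3, 4, 5, 9 \pmod{11}$ no power of $p$ is $\equiv -1$, so the corollary does not apply even though Frobenius is still nilpotent:

```python
def has_minus_one_power(r, n):
    """Check whether some power of r is congruent to -1 modulo n."""
    s = r % n
    seen = set()
    while s not in seen:
        if s == n - 1:
            return True
        seen.add(s)
        s = (s * r) % n
    return False

# Modulo 11, only r in {2, 6, 7, 8, 10} have a power congruent to -1.
```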
\section{$D$-module length vs. $F$-module length}
\label{section: examples}
We continue to use the notation as in the beginning of Section 4 and Section 5. In \cite{BlickleDmodulestructureofRFmodules}, Blickle made a deep study of the comparison of $D$-module and $F$-module length. For example, in \cite[Theorem 1.1 or Corollary 4.7]{BlickleDmodulestructureofRFmodules} it was proved that if $k$ is algebraically closed, then for every $F$-finite $F^\infty_R$-module $M$ we have $l_{F^\infty_R}(M)=l_{D_R}(M)$, which also equals $l_{D(R,k)}(M)$ since $D_R=D(R,k)$ when $k$ is perfect. Moreover, when $k$ is perfect but not algebraically closed, Blickle constructed an example \cite[5.1]{BlickleDmodulestructureofRFmodules} of a simple $F_R^\infty$-module that is not $D_R$-simple (equivalently, not $D(R,k)$-simple since $k$ is perfect). In particular, even the $F_R^\infty$-module length may differ from the $D(R,k)$-module length in general.
However, it is not clear whether these pathologies are artificial, i.e., whether they can occur for local cohomology modules with their natural $F_R$-module structure. In this section we will construct an example of a local cohomology module of $R$, with $k$ algebraically closed, such that its $F_R$-module length is strictly less than its $D_R$-module length.
To begin, let $V$ be a vector space over a field $k$ of positive characteristic $p$.
Then we can describe an $e$-th Frobenius action $f$ on $V$ in the following way.
Choose a basis $e_1, \ldots, e_n$ of $V$. Let $f(e_i) = a_{1i}e_1 + \ldots + a_{ni}e_n$.
Then for any element $b = (b_1, \ldots, b_n)^T$ written in the basis $e_i$, we can write
\[
f(b) = A b^{[p^e]}
\]
where $A = (a_{ij})$ and $b^{[p^e]}$ raises all entries of $b$ to the $p^e$-th power.
Or explicitly,
\[
f(b) = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
\begin{pmatrix}
b_1^{p^e} \\ \vdots \\ b_n^{p^e}
\end{pmatrix}.
\]
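To make this description concrete, here is a small illustrative sketch over $\mathbb{F}_p$ (the function name and the sample matrix are ours, not from the text), together with a check of the $p^e$-semilinearity $f(cb) = c^{p^e} f(b)$:

```python
def frob_action(A, b, p, e):
    """Apply f(b) = A b^[p^e] over F_p: raise each entry of b to the
    p^e-th power, then multiply by the matrix A, all modulo p."""
    q = p ** e
    b_q = [pow(bi, q, p) for bi in b]
    return [sum(aij * bj for aij, bj in zip(row, b_q)) % p
            for row in A]

# Sample check of semilinearity with p = 5, e = 1:
A = [[1, 2], [3, 4]]
b, c = [2, 3], 2
lhs = frob_action(A, [(c * bi) % 5 for bi in b], 5, 1)              # f(c*b)
rhs = [(pow(c, 5, 5) * fi) % 5 for fi in frob_action(A, b, 5, 1)]   # c^p * f(b)
assert lhs == rhs
```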
Via this description it is very easy to see the following result.
\begin{lemma}
\label{lemma--change of basis}
Let $k$ be a field of positive characteristic $p$ and let $V$ be a finite dimensional vector space with an $e$-th Frobenius action $f$.
Let $A$ be a matrix describing the action of $f$ in some basis $e_i$.
Then in a new basis obtained by an orthogonal matrix $O$, $f$ is represented by the matrix $O A (O^{\tau})^{[p^e]}$,
where all entries of the transpose $O^{\tau}$ are raised to the $p^e$-th power.
\end{lemma}
\begin{proposition}
\label{proposition--Fmatrix}
Let $R = k[x_1, \dots, x_n]$ or $k[[x_1, \ldots, x_n]]$ and let $V$ be a $k$-vector space with an $e$-th Frobenius action $f$.
Then $l_{F^e_R} (\scr{H}_{R,R}(V))$ is the length of any longest flag of $f$-subspaces
\[
0 \subset V_1 \subset V_2 \subset \ldots \subset V_m = V
\]
such that $f$ is not nilpotent on $V_i/V_{i-1}$ for every $i$.
In particular, $l_{F^e_R} (\scr{H}_{R,R}(V)) = \dim V$ if and only if there is a basis of $V$
such that, with the notation as in Lemma \ref{lemma--change of basis}, $f$ can be represented by an upper-triangular matrix $A$ with nonzero entries on the main diagonal.
\end{proposition}
\begin{proof}
The first claim is \cite[Theorem 4.7]{LyubeznikFModulesApplicationsToLocalCohomology}.
If $l_{F^e_R}(\scr{H}_{R,R}(V)) = \dim V$, then we must have $\dim V_i = i$. Choose a basis compatible with the flag,
i.e., $V_i = k\langle e_1, \ldots, e_i\rangle$. Since $f(V_i) \subseteq V_i$, we have $f(e_i)= a_{1i}e_1 + \ldots + a_{ii}e_i$. Thus the matrix representing $f$ is upper-triangular. Moreover, since $f$ is not nilpotent on $V_i/V_{i-1}$, we must have $a_{ii}\neq 0$, so the matrix has nonzero entries on the main diagonal.
Conversely, if the matrix is upper-triangular with nonzero entries on the main diagonal in some basis, it is easy to see that the subspaces $V_i = k\langle e_1, \ldots, e_i\rangle$ form a flag of $f$-subspaces such that $f$ is not nilpotent on each $V_i/V_{i-1}$.
\end{proof}
\begin{remark}\label{cycle eigenvectors}
Before proceeding further, we need a simple result in linear algebra.
Over a finite field $\mathbb{F}_p$ where $p \neq 3$, consider the matrix
\[
M=\begin{pmatrix}
0 & 0 & a \\
a & 0 & 0 \\
0 & a & 0
\end{pmatrix}
\]
where $a\neq 0$ in $\mathbb{F}_p$. The characteristic polynomial of this matrix is
\[
\lambda^3 - a^3 = (\lambda - a)(\lambda^2 + \lambda a + a^2).
\]
The discriminant of the quadratic polynomial is $D = -3a^2$.
Thus if $-3$ is a quadratic residue in $\mathbb{F}_p$, then the characteristic polynomial has three distinct roots.
On the other hand, if $-3$ is not a quadratic residue, then $(1, 1, 1)$ spans the only eigenspace, and the restriction of the matrix $M$ to the ($M$-invariant) subspace of $\mathbb{F}_p^3$ orthogonal to $(1,1,1)$ can be expressed as
\[
\begin{pmatrix}
0 & -a \\
a & -a
\end{pmatrix},
\]
which has no eigenvalues over $\mathbb{F}_p$.
By quadratic reciprocity, $-3$ is a quadratic residue if and only if $p \equiv 1 \pmod 3$.
Thus if $p \not\equiv 1 \pmod 3$, this matrix has no eigenvalues over $\mathbb{F}_{p}$ and thus cannot be transformed into upper-triangular form by a change of basis.
Otherwise, it has three distinct eigenvalues and can be transformed into upper-triangular form.
\end{remark}
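This dichotomy is easy to confirm numerically; the following brute-force sketch (an illustrative check of ours) counts the roots of $\lambda^3 - a^3$ over $\mathbb{F}_p$:

```python
def num_eigenvalues(p, a):
    """Count the roots in F_p of lambda^3 - a^3, the characteristic
    polynomial of the 3x3 cyclic matrix M with nonzero entry a."""
    return sum(1 for lam in range(p) if (lam ** 3 - a ** 3) % p == 0)

# For p = 1 (mod 3) there are three distinct eigenvalues; otherwise only
# the eigenvalue a itself (with eigenvector (1, 1, 1)).
```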
Our first example shows that $l_{F_R}(\lc_I^{n-c}(R))$ can be strictly less than $l_{F^\infty_R}(\lc_I^{n-c}(R))$ and thus strictly less than $l_{D(R,k)}(\lc_I^{n-c}(R))$ by (\ref{equation--basic relation on length in different categories}) if $k$ is not separably closed, even when $A=R/I$ has isolated singularities.
\begin{corollary}
\label{corollary--D-length bigger than F-length 1}
Let $p$ be a prime number, $R = \mathbb{F}_{p}[x, y, z]$, and $f = x^7 + y^7 + z^7$.
Then
\[
l_{F_R^\infty}(\lc^1_{f} (R))=l_{D_R}(\lc^1_{f} (R))=
\begin{cases}
16 &\text { if } p \equiv 1 \pmod 7,\\
7 &\text { if } p \equiv 2 \text { or } 4 \pmod 7,\\
1 &\text { otherwise.}
\end{cases}
\]
On the other hand,
\[
l_{F_R}(\lc^1_{f} (R))=
\begin{cases}
16 &\text { if } p \equiv 1 \pmod 7,\\
7 &\text { if } p \equiv 2 \text { or } 4\pmod 7 \text{ and } p \equiv 1 \pmod 3,\\
5 &\text { if } p \equiv 2 \text { or } 4\pmod 7 \text{ and } p \not\equiv 1 \pmod 3,\\
1 &\text { otherwise.}
\end{cases}
\]
In particular,
$ l_{D_R}(\lc^1_{f} (R)) \neq l_{F_R}(\lc^1_{f} (R))$ for any $p \equiv 11 \pmod {21}$.
\end{corollary}
\begin{proof}
Let $V$ denote the $k$-vector space $(0^*_{\lc_{\fm}^2 (R/f)})_s=(\lc_{\fm}^2 (R/f)_0)_s$.
By Corollary~\ref{injective roots} and Corollary~\ref{nilpotent roots},
Frobenius acts injectively on $\lc_{\fm}^2 (R/f)_0$ if $p \equiv 1 \pmod 7$ and nilpotently if $p \equiv 3,5,6 \pmod 7$.
When $p \equiv 2 \text { or } 4\pmod 7$, using the algorithm described in Theorem \ref{theorem--dimension of the stable part for diagonal hypersurface},
it can be checked that $V$ is spanned by two $3$-cycles of basis elements under the Frobenius map (i.e., $e=1$).
If $p \equiv 4 \pmod 7$, the cycles are
\[
\frac{z^3}{x^2y} \to \frac{z^5}{xy^4} \to \frac{z^6}{x^4y^2}
\text { and }
\frac{z^3}{xy^2} \to \frac{z^5}{x^4y} \to \frac{z^6}{x^2y^4}.
\]
While if $p \equiv 2 \pmod 7$, the cycles become
\[
\frac{z^3}{x^2y} \to \frac{z^6}{x^4y^2} \to \frac{z^5}{xy^4}
\text{ and }
\frac{z^3}{xy^2} \to \frac{z^6}{x^2y^4} \to \frac{z^5}{x^4y}.
\]
In particular, one obtains that
\[
\dim V=
\begin{cases}
15 &\text { if } p \equiv 1 \pmod 7,\\
6 &\text { if } p \equiv 2 \text { or } 4 \pmod 7,\\
0 &\text { otherwise.}
\end{cases}
\]
Since $R/(f)$ is an isolated singularity, $l_{D_R}(\lc^1_{f} (R)) = \dim V + 1$ by Theorem \ref{theorem--D-module length for isolated singularities}.
In the cases when Frobenius acts injectively or nilpotently on $\lc_{\fm}^2 (R/f)_0$, we also deduce from Proposition \ref{proposition--Fmatrix}
that $l_{F_R}(\lc^1_{f} (R)) = \dim V + 1$ (note that when Frobenius acts injectively on $\lc_{\fm}^2 (R/f)_0$, the proof of Theorem \ref{theorem--dimension of the stable part for diagonal hypersurface} shows that Frobenius sends each canonical basis element of $\lc_{\fm}^2 (R/f)_0$ to a multiple of itself, so the representing matrix is diagonal).
In the remaining cases, in order to compute its $F_R^e$-module length we will study a matrix
which represents the Frobenius map on $V$ by Proposition \ref{proposition--Fmatrix}.
We can use the proof of Theorem \ref{theorem--dimension of the stable part for diagonal hypersurface} to describe the Frobenius action on the cycles.
If $p = 7k + 4$, one obtains that the Frobenius action on both cycles is described by the matrix
\[
A = \begin{pmatrix}
0 & 0 & (-1)^{6k + 3} \binom{6k+3}{2k+1} \\
(-1)^{3k + 1} \binom{3k+1}{k} & 0 & 0 \\
0 & (-1)^{5k + 2} \binom{5k+2}{k} & 0
\end{pmatrix},
\]
where we have chosen the natural bases, e.g.
$e_1 = \frac{z^3}{x^2y}, e_2 = \frac{z^5}{xy^4}, e_3 = \frac{z^6}{x^4y^2}$ for the first cycle.
Similarly, if $p = 7k + 2$, the matrix is
\[
\begin{pmatrix}
0 & 0 & (-1)^{5k + 1} \binom{5k + 1}{4k + 1} \\
(-1)^{3k} \binom{3k}{k} & 0 & 0 \\
0 & (-1)^{6k + 1} \binom{6k+1}{4k + 1} & 0
\end{pmatrix}.
\]
We claim that the non-zero entries of $A$ are equal.
Observe that by Wilson's theorem
\[
(n-1)! \, (p - n)! = (n-1)! \, (p - n)(p - n - 1) \cdots 1 \equiv (n-1)! \, (-n)(-n - 1) \cdots (-p+1) \equiv (-1)^{n} \pmod p.
\]
Furthermore, because $p = 7k + 4$ is odd, $k$ is odd.
Thus we can rewrite
\[
\binom{5k + 2}{k} =
\frac{(5k+2)!}{(k)!(4k+2)!}
= \frac{(p - 2k - 2)!}{(p - 6k - 4)!(4k + 2)!} \equiv (-1)^{4k + 2} \binom{6k + 3}{2k + 1} = \binom{6k + 3}{2k + 1} \pmod{p}
\]
and
\[
\binom{5k + 2}{k} =
\frac{(5k+2)!}{(k)!(4k+2)!}
= \frac{(p - 2k - 2)!}{k!(p - 3k - 2)!} \equiv (-1)^{5k} \binom{3k + 1}{k} = - \binom{3k + 1}{k} \pmod{p}.
\]
The case of $p = 7k +2$ is identical.
Since $a^{p} = a$ for any element $a \in \mathbb{F}_{p}$, the Frobenius action is linear.
Thus by Lemma \ref{lemma--change of basis} and Remark~\ref{cycle eigenvectors}, the matrix associated to the Frobenius map on the chosen basis can be transformed into upper-triangular form if and only if $p \equiv 1 \pmod 3$. Thus by Proposition \ref{proposition--Fmatrix}, in the case $p=7k+4$ or $p=7k+2$, we obtain that $l_{F_R}(\lc^1_{f} (R)) = l_{F_R}(\scr{H}_{R,R}(V)) + 1 = 7$ when $p \equiv 1 \pmod 3$, and otherwise $l_{F_R}(\lc^1_{f} (R) )= 5$.
Lastly, it is easy to see that the third iterate of the Frobenius map on $V$ can be represented by a diagonal matrix, hence
\[l_{F^3_R}(\scr{H}_{R,R}(V))=l_{F_R^3}(\scr{H}_{R,R}(0^*_{\lc_{\fm}^2 (R/f)}))=l_{F_R^\infty}(\scr{H}_{R,R}(0^*_{\lc_{\fm}^2 (R/f)}))=6\]
and $l_{F_R^\infty}(\lc^1_{f} (R))=7$.
\end{proof}
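The Wilson-theorem manipulations in the proof reduce to concrete binomial congruences that can be spot-checked. The sketch below (illustrative code of ours, using the first few primes $p = 7k+4$) verifies $\binom{5k+2}{k} \equiv \binom{6k+3}{2k+1} \equiv -\binom{3k+1}{k} \pmod p$:

```python
from math import comb

def binomial_identities_hold(p):
    """For a prime p = 7k + 4, check the congruences
    C(5k+2, k) = C(6k+3, 2k+1) = -C(3k+1, k)  (mod p)
    derived via Wilson's theorem in the proof above."""
    assert p % 7 == 4
    k = (p - 4) // 7
    b = comb(5 * k + 2, k)
    return ((b - comb(6 * k + 3, 2 * k + 1)) % p == 0
            and (b + comb(3 * k + 1, k)) % p == 0)
```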
Finally, we exhibit an example of a local cohomology module of $R$, with $k$ algebraically closed, such that its $D(R,k)$-module length (equivalently, its $F_R^\infty$-module length) is strictly bigger than its $F_R$-module length. Recall that by Theorem \ref{theorem--D-module length for isolated singularities}, this cannot happen if $A=R/I$ has isolated singularities.
\begin{proposition}
\label{proposition--D-length bigger than F-length 2}
Let $p = 7k + 4$ be a prime number.
Let $R = \overline{\mathbb{F}}_{p}[x, y, z, t]$ and let $f = tx^7 + ty^7 + z^7$.
Then
\[
l_{F_R}(\lc^1_{f}(R)) = 3 < 7 = l_{F^\infty_R}(\lc^1_{f} (R))=l_{D(R,\overline{\mathbb{F}}_p)} (\lc^1_{f} (R)).
\]
\end{proposition}
\begin{proof}
Denote $R/(f)$ by $A$. First we claim that $(x,y,z)A$ is the only height-2 prime ideal of $A$ that contains the test ideal $\tau(A)$ of $A$. By \cite[Theorem 6.4]{SchwedeTuckerAsurveyoftestideals}, we have
\[\tau(A)=\sum_e\sum_{\phi\in \Hom_A(A^{1/p^e},A)}\phi(c^{1/p^e}),\]
where $c$ is a test element for $A$. According to \cite[Theorem on page 184]{HochsterFoundationsofTC}, $z^6$ is a test element for $A$ and we may set $c=z^6$. Since $A=R/(f)$, we have
\[\tau(A)=\Big(\sum_e\sum_{\varphi\in \Hom_R(R^{1/p^e},R)}\varphi((f^{p^e-1}z^6)^{1/p^e})\Big)A.\]
It is straightforward to check that $(x^6,y^6,z^6)\subset \tau(A)$ and $\tau(A)$ has height 2. Hence $(x,y,z)A$ is the only height-2 prime ideal that contains the test ideal $\tau(A)$ of $A$. Consequently, $\widehat{A}_P$ is $F$-rational for each height-2 prime $P\neq (x,y,z)A$, or equivalently $0^*_{\lc^{2}_{P\widehat{A}_P}(\widehat{A}_P)}=0$.
Next we calculate the stable part of $\lc^3_{\fm}(A)$ where $\fm=(x,y,z,t)$. To this end, we assign the grading $\deg(x)=\deg(y)=1$, $\deg(z)=2$, and $\deg(t)=7$ to $R$; consequently, $f$ is homogeneous (of degree $14$). It is straightforward to check that $\lc^3_{\fm}(A)_0$ has a $\overline{\mathbb{F}}_{p}$-basis:
\[
\left [\frac{z^5}{tx^2y} \right ], \left [\frac{z^5}{txy^2}\right ], \left [\frac{z^6}{tx^4y} \right],
\left [\frac{z^6}{tx^3y^2} \right ], \left [\frac{z^6}{tx^2y^3}\right ], \left [\frac{z^6}{txy^4}\right].
\]
Because the degree of $z$ is larger than the degrees of $x$ and $y$,
each of these elements is nilpotent under the natural Frobenius action.
For example, raising the last generator to the power $p = 7k + 4$, we get
\[
\left [\frac{z^{42k + 24}}{t^{7k + 4}x^{7k + 4}y^{28k + 16}}\right]
= \left [\frac{z^3(z^7)^{6k + 3}}{t^{7k + 4}x^{7k + 4}y^{28k + 16}}\right]
= \left [\frac{(-1)^{6k+3}z^3(x^7 + y^7)^{6k + 3}}{t^{k + 1}x^{7k + 4}y^{28k + 16}}\right]
\]
which necessarily equals $0$ since the degree of any monomial in $x,y$ in the numerator is $7(6k+3)$ which is greater than $(7k+4)+(28k+16)$.
Hence $\lc^3_{\fm}(A)_s= (\lc^3_{\fm}(A)_0)_s=0$.
Given the grading on $R$, we are in the situation of Theorem \ref{theorem--Upper bounds for D-module length when singular locus has dimension 1}. By Claim \ref{claim--graded F-module filtration}, there exists a graded $F_R$-module filtration $0\subseteq L\subseteq M\subseteq \lc_f^1(R)$ where $L$ is supported at $(f)$, each $D(R,\overline{\mathbb{F}}_p)$-module (equivalently, $D_R$-module or $F_{R}^\infty$-module) composition factor of $M/L$ is supported at $(x,y,z)$, and $\lc_f^1(R)/M$ is supported only at $\m=(x,y,z,t)$.
By \cite[Corollary 4.2 and Theorem 4.4]{BlickleIntersectionhomologyDmodule}, $$l_{D(R,\overline{\mathbb{F}}_p)}(L)=l_{F_R^\infty}(L)=l_{F_R}(L)=1,$$ because there is only one minimal prime $(f)$ of $A$. Moreover, we have
\[
l_{D(R,\overline{\mathbb{F}}_p)}(\lc_f^1(R)/M)=l_{F_R^\infty}(\lc_f^1(R)/M)=l_{F_R}(\lc_f^1(R)/M)=0
\]
since $\dim_{\overline{\mathbb{F}}_p}(\lc_\m^3(A))_s=0$. Thus we actually have $M=\lc_f^1(R)$ in this example.
It remains to compute $l_{D(R,\overline{\mathbb{F}}_p)}(M/L)$, $l_{F_R^\infty}(M/L)$, and $l_{F_R}(M/L)$. Note that the first two are equal because we are working over an algebraically closed field \cite[Theorem 1.1]{BlickleDmodulestructureofRFmodules}. Moreover, take an $F_{R}^\infty$-module (resp. $F_R$-module) filtration of $M/L$, say
\[
L=M_0\subseteq M_1\subseteq M_2\subseteq\cdots\subseteq M_l=M=\lc_f^1(R),
\]
such that each $N_i=M_i/M_{i-1}$ is a simple $F_R^\infty$-module (resp. simple $F_R$-module) supported at $P=(x,y,z)$. Then, localizing at $P$ and completing, we have
\[
L\otimes\widehat{R_P}=M_0\otimes\widehat{R_P}\subseteq M_1\otimes \widehat{R_P}\subseteq\cdots \subseteq M_l\otimes \widehat{R_P}
=\lc_{f}^1(\widehat{R_P})
\]
such that each successive quotient $N_i\otimes\widehat{R_P}$ is still simple as an $F_{\widehat{R_P}}^\infty$-module (resp. simple as an $F_{\widehat{R_P}}$-module) by Lemma \ref{lemma--localizing and completing F-module}.
Observe that
\[
\widehat{R_P}/(f)\cong\overline{\mathbb{F}}_p(t)[[x,y,z]]/(tx^7+ty^7+z^7)
\]
which is an isolated singularity. Hence by Proposition \ref{proposition--Fmatrix} (and the proof of Theorem \ref{theorem--D-module length for isolated singularities}), the $F_{\widehat{R_P}}^\infty$-module (resp. $F_{\widehat{R_P}}$-module) length of $\lc_{f}^1(\widehat{R_P})/(L\otimes\widehat{R_P})$ is the longest flag of Frobenius-stable subspaces of
\[
V= \left (0^*_{\lc_{\fm}^2 (\overline{\mathbb{F}}_p(t)[[x,y,z]]/(tx^7+ty^7+z^7))} \right)_s.
\]
Via a direct computation similar to the proof of Theorem \ref{theorem--dimension of the stable part for diagonal hypersurface}
one can show that $\dim V = 6$ and $V$ is a direct sum of two three-dimensional Frobenius-stable subspaces. In the natural bases as in the proof of Corollary \ref{corollary--D-length bigger than F-length 1}, the Frobenius action on each cycle is represented by the matrix:
\[
A = \begin{pmatrix}
\!0 & \!0 & \!\binom {3k + 1}{k}t^{6k + 3} \\
\!\binom {3k + 1}{k}t^{3k + 1} & \!0 & \!0 \\
\!0 & \!\binom {3k + 1}{k}t^{5k + 2} & \!0
\end{pmatrix}
=
\binom {3k + 1}{k} \!\begin{pmatrix}
\!0 & \!0 & \!t^{6k + 3} \\
\!t^{3k + 1} & \!0 & \!0 \\
\!0 & \!t^{5k + 2} & \!0
\end{pmatrix}.
\]
We can easily see that the third iterate of the Frobenius map on $V$ can be represented by a diagonal matrix,
hence by Proposition \ref{proposition--Fmatrix}
\[l_{F_{\widehat{R_P}}^3}(\lc_{f}^1(\widehat{R_P})/(L\otimes\widehat{R_P}))=l_{F_{\widehat{R_P}}^\infty}(\lc_{f}^1(\widehat{R_P})/(L\otimes\widehat{R_P}))=6,
\]
and thus by the above discussion
\[l_{D(R,\overline{\mathbb{F}}_p)}(\lc_f^1(R))=l_{F_R^\infty}(\lc_f^1(R))=1+6=7.\]
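The claim that the third Frobenius iterate acts diagonally can be verified mechanically. The sketch below (for the sample value $k=1$, so $p=11$, a choice made only for illustration) composes the $p$-semilinear action via the twisted product $B\cdot B^{[p]}\cdot B^{[p^2]}$, tracking only the exponents of $t$ since the scalar $\binom{3k+1}{k}$ does not affect the support; the cyclic pattern makes the same computation work for every $k$.

```python
# Matrix entries are monomials in t, encoded by their exponent
# (None = zero entry).  For a p-semilinear Frobenius action the matrix
# of F^e is B * B^[p] * ... * B^[p^(e-1)], where B^[q] raises each
# entry to the q-th power, i.e. multiplies each exponent by q.
def twist(M, q):
    return [[None if x is None else x * q for x in row] for row in M]

def mat_mul(A, B):
    n = len(A)
    C = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for l in range(n):
                if A[i][l] is not None and B[l][j] is not None:
                    assert C[i][j] is None  # single-term products here
                    C[i][j] = A[i][l] + B[l][j]
    return C

k, p = 1, 11  # sample value; p = 7k + 4
B = [[None,      None,      6*k + 3],
     [3*k + 1,   None,      None   ],
     [None,      5*k + 2,   None   ]]

F3 = mat_mul(mat_mul(B, twist(B, p)), twist(B, p * p))
off_diag_zero = all(F3[i][j] is None for i in range(3) for j in range(3) if i != j)
diag_nonzero = all(F3[i][i] is not None for i in range(3))
print(off_diag_zero and diag_nonzero)  # True: the third iterate is diagonal
```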
Finally let us show that each of the two three-dimensional Frobenius-stable summands of $V$ contains no proper nonzero Frobenius-stable subspace. If $U$ is such a subspace and $0\neq v \in U$, then $\langle v, F(v), F^2(v) \rangle \subseteq U$ is a proper subspace of the summand, so $v$, $F(v)$, and $F^2(v)$ are linearly dependent.
Thus if $v = (a, b, c)$ in the standard basis, then
\[
\det
\begin{pmatrix}
a & t^{6k + 3} c^p & t^{6k + 3 + (5k + 2)p } b^{p^2} \\
b & t^{3k + 1} a^p & t^{3k + 1 + (6k + 3)p } c^{p^2} \\
c & t^{5k + 2} b^p & t^{5k + 2 + (3k + 1)p } a^{p^2}
\end{pmatrix}
= 0
\]
Observe that if $w = \lambda v$ then
\[
\det (w, Fw, F^2w) = \lambda^{p^2 + p + 1}\det (v, Fv, F^2v),
\]
so we can multiply $v$ by the common denominator of $a, b, c$ and assume that $a, b, c \in \overline{\mathbb{F}}_p[t]$.
By a direct computation, we have
\begin{align*}
\Delta = &\det
\begin{pmatrix}
a & t^{6k + 3} c^p & t^{6k + 3 + (5k + 2)p } b^{p^2} \\
b & t^{3k + 1} a^p & t^{3k + 1 + (6k + 3)p } c^{p^2} \\
c & t^{5k + 2} b^p & t^{5k + 2 + (3k + 1)p } a^{p^2}
\end{pmatrix}\\
= & t^{(3k + 1)p + 8k + 3} a^{p^2 + p + 1} - t^{(6k + 3)p + 8k + 3} ab^pc^{p^2} - t^{(3k + 1)p + 11k + 5} a^{p^2}bc^p \\
& + t^{(6k + 3)p + 9k + 4} c^{p^2 + p + 1}+ t^{(5k + 2)p + 11k + 5} b^{p^2 + p + 1} - t^{(5k + 2)p + 9k + 4} a^p b^{p^2} c.
\end{align*}
Note that we may factor out $t^{(3k + 1)p + 8k + 3}$ and obtain $\Delta=t^{(3k + 1)p + 8k + 3}\Delta'$, where
\begin{multline*}
\Delta' = a^{p^2 + p + 1} - t^{(3k + 2)p} ab^pc^{p^2} -
t^{3k + 2} a^{p^2}bc^p \\ + t^{(3k + 2)p + k + 1} c^{p^2 + p + 1}
+ t^{(2k + 1)p + 3k + 2} b^{p^2 + p + 1} - t^{(2k + 1)p + k + 1} a^p b^{p^2} c.
\end{multline*}
Let $\deg a = \alpha, \deg b = \beta, \deg c = \gamma$. Then we have
\begin{align*}
\deg a^{p^2 + p + 1} &= p^2 \alpha + p \alpha + \alpha\\
\deg t^{(3k + 2)p} ab^pc^{p^2} &= p^2 \gamma + p (\beta + 3k + 2) + \alpha\\
\deg t^{3k + 2} a^{p^2}bc^p & = p^2 \alpha + p \gamma + \beta + 3k + 2\\
\deg t^{(3k + 2)p + k + 1} c^{p^2 + p + 1} &= p^2 \gamma + p (\gamma + 3k + 2) + \gamma + k + 1\\
\deg t^{(2k + 1)p + 3k + 2} b^{p^2 + p + 1} &= p^2 \beta + p (\beta + 2k + 1) + \beta + 3k + 2\\
\deg t^{(2k + 1)p + k + 1} a^p b^{p^2} c &= p^2 \beta + p (\alpha + 2k + 1) + \gamma + k + 1.
\end{align*}
Now we prove that $\Delta'$ always has a nonzero term of strictly highest degree (hence $\Delta'$ cannot be zero). We consider the following three cases:
\begin{enumerate}
\item $\gamma\geq\max\{\alpha, \beta\}$.
\item $\beta>\gamma$ and $\beta\geq \alpha$.
\item $\alpha>\max\{\beta, \gamma\}$.
\end{enumerate}
The point is that in case (1) (and similarly in cases (2) and (3)), $\deg t^{(3k + 2)p + k + 1} c^{p^2 + p + 1}$
(resp. $\deg t^{(2k + 1)p + 3k + 2} b^{p^2 + p + 1}$, $\deg a^{p^2 + p + 1}$)
is strictly bigger than the degrees of the other five terms. We give a detailed verification in case (2) and leave the other cases to the reader (they are all very similar). In case (2), since $p=7k+4$, we have:
\begin{align*}
&\deg t^{(2k + 1)p + 3k + 2} b^{p^2 + p + 1}-\deg a^{p^2 + p + 1} \geq 3k+2 >0,\\
&\begin{multlined}[\textwidth]
\!\deg t^{(2k + 1)p + 3k + 2} b^{p^2 + p + 1}-\deg t^{(3k + 2)p} ab^pc^{p^2}\\
> p^2(\beta-\gamma)-p(k+1)\geq p^2-p(k+1) >0,
\end{multlined}\\
&\deg t^{(2k + 1)p + 3k + 2} b^{p^2 + p + 1}-\deg t^{3k + 2} a^{p^2}bc^p >p^2(\beta-\alpha)+p(\beta-\gamma)>0,\\
&\begin{multlined}[\textwidth]
\!\deg t^{(2k + 1)p + 3k + 2} b^{p^2 + p + 1}-\deg t^{(3k + 2)p + k + 1} c^{p^2 + p + 1} \\
>(p^2+p)(\beta-\gamma)-p(k+1)\geq p^2+p-p(k+1)>0,
\end{multlined}\\
&\deg t^{(2k + 1)p + 3k + 2} b^{p^2 + p + 1}-\deg t^{(2k + 1)p + k + 1} a^p b^{p^2} c =p(\beta-\alpha)+2k+1>0.
\end{align*}
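The six degree formulas and the dominance claim in case (2) can also be spot-checked numerically; the sketch below (illustrative, with small sample values of $k$ and of the degrees $\alpha,\beta,\gamma$) verifies that the $b$-term strictly dominates whenever $\beta>\gamma$ and $\beta\geq\alpha$.

```python
import itertools

# The six degrees listed above, as functions of (alpha, beta, gamma, k),
# with p = 7k + 4.
def degrees(alpha, beta, gamma, k):
    p = 7 * k + 4
    return {
        'a_term': (p**2 + p + 1) * alpha,
        'abc_1':  p**2 * gamma + p * (beta + 3*k + 2) + alpha,
        'abc_2':  p**2 * alpha + p * gamma + beta + 3*k + 2,
        'c_term': p**2 * gamma + p * (gamma + 3*k + 2) + gamma + k + 1,
        'b_term': p**2 * beta + p * (beta + 2*k + 1) + beta + 3*k + 2,
        'abc_3':  p**2 * beta + p * (alpha + 2*k + 1) + gamma + k + 1,
    }

# Case (2): beta > gamma and beta >= alpha; the b-term must dominate.
ok = True
for k in range(1, 4):
    for alpha, beta, gamma in itertools.product(range(6), repeat=3):
        if beta > gamma and beta >= alpha:
            d = degrees(alpha, beta, gamma, k)
            b = d.pop('b_term')
            ok = ok and all(b > v for v in d.values())
print(ok)  # True
```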
This finishes the proof that the two summands of $V$ contain no proper nonzero Frobenius-stable subspaces, and thus $$l_{F_{\widehat{R_P}}}(\lc_{f}^1(\widehat{R_P})/(L\otimes\widehat{R_P}))=2,$$ hence by the above discussion $$l_{F_R}(\lc_f^1(R))=1+2=3.$$
\end{proof}
\begin{remark}
Blickle \cite[Section 5]{BlickleDmodulestructureofRFmodules} showed that for a general $F$-finite $F_R^\infty$-module $M$ the two lengths
$l_{F^\infty_R}(M)$ and $l_{D(R,k)} (M)$ might be different if the residue field $k$ is perfect but not algebraically closed. However, we do not know whether this can happen when $M=\lc_I^c(R)$. The Fermat hypersurfaces cannot provide such an example: the natural Frobenius
action decomposes into cycles and thus the matrix representing some large iterate of the Frobenius action will be a diagonal matrix, hence Theorem \ref{theorem--D-module length for isolated singularities} and Proposition \ref{proposition--Fmatrix} shows that the two lengths always coincide in this case.
\end{remark}
We end with a slight generalization of Blickle's result (\cite[Theorem 1.1]{BlickleDmodulestructureofRFmodules}) to local cohomology modules of rings with an isolated non-$F$-rational point over finite fields:
\begin{proposition}
\label{proposition--F-module length for isolated sing over finite field}
Let $(R,\m,k)$ and $A=R/I$ be as in the Notation at the beginning of Section 4. Assume $k$ is a finite field and $A$ has an isolated non-$F$-rational point at $\fm$. Then $l_{F^\infty_R}(\lc_I^c(R))=l_{D(R,k)}(\lc_I^c(R))$.
\end{proposition}
\begin{proof}
Following the proof of Theorem \ref{theorem--D-module length for isolated singularities} and Proposition \ref{proposition--Fmatrix}, it is enough to show that there exists $e>0$ such that the matrix $B_e$ representing the $e$-th Frobenius action on $(0^*_{\lc_\m^d(A)})_s$ is upper triangular. We will show that in fact $B_e$ is the identity matrix for $e\gg0$ sufficiently divisible. It is easy to observe that $B_e=B_1\cdot B_1^{[p]}\cdots B_1^{[p^{e-1}]}$, where $B^{[p^i]}$ denotes the matrix obtained by raising each entry of $B$ to its $p^i$-th power. Suppose $k=\mathbb{F}_{p^n}$, so that $B_1^{[p^n]}=B_1$. Hence $B_{ne}=(B_n)^e$, where $B_n=B_1\cdot B_1^{[p]}\cdots B_1^{[p^{n-1}]}$ is a fixed invertible matrix over $\mathbb{F}_{p^n}$. Now the result follows from the elementary fact that every invertible matrix over a finite field has finite multiplicative order: passing to the Jordan normal form over $\overline{\mathbb{F}}_p$, taking a large $p^m$-th power makes each Jordan block diagonal, and the diagonal entries lie in a finite field and hence have finite multiplicative order, so a further power is the identity matrix.
\end{proof}
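The elementary fact invoked at the end of the proof, that an invertible matrix over a finite field has finite multiplicative order, can be illustrated by brute force; the following sketch finds the order of a sample matrix over $\mathbb{F}_7$ (the matrix and the modulus are arbitrary illustrative choices).

```python
# Brute-force the multiplicative order of an invertible 2x2 matrix
# over F_7; by Lagrange's theorem the order divides
# |GL_2(F_7)| = (7^2 - 1)(7^2 - 7) = 2016, so the loop terminates.
p = 7
B = ((1, 2), (3, 4))   # det = -2 = 5 mod 7, hence invertible
I = ((1, 0), (0, 1))

def mul(A, C, m):
    # 2x2 matrix product with entries reduced mod m
    return tuple(
        tuple(sum(A[i][l] * C[l][j] for l in range(2)) % m for j in range(2))
        for i in range(2)
    )

M, order = B, 1
while M != I:
    M = mul(M, B, p)
    order += 1

print(order)  # a divisor of |GL_2(F_7)| = 2016
```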
\bibliographystyle{skalpha}
\section{Introduction}
The determination of black hole (BH) masses is a major focus in the study of supermassive black holes and the active galaxies where they reside. Most BH mass estimates are based on correlations between the BH mass and the stellar bulge velocity dispersion, i.e., the M-$\sigma$ relation \citep[e.g.][]{ferrarese_merritt2000,gultekin+09}, or the AGN continuum luminosity, i.e., the mass-luminosity relation, by which the optical, UV and X-ray luminosities are found to correlate with the size of the Broad Line Region (BLR) \citep[e.g.][and references therein]{koratkar_gaskell1991,kaspi_etal2005,landt+13}. While the use of the M-$\sigma$ relation requires measuring $\sigma$, this is not always easy, particularly in AGNs, wherein the strong continuum from the nuclear region dilutes the stellar absorption lines. In order to overcome this difficulty, a number of alternative scaling relations have been proposed, using emission lines such as [\ion{O}{iii}]~$\lambda$5007 to measure the mass of the bulge
\citep[e.g.][]{nelson_whittle1996}, [\ion{O}{ii}]~$\lambda$3727 \citep[e.g.][]{salviander_etal2006}, H$\beta$ or H$\alpha$ \citep[e.g.][]{kaspi_etal2005,greene_ho2005} to infer the BLR size, or [\ion{Fe}{ii}] in the near-infrared \citep[e.g.][]{riffel_etal2013} to infer the stellar $\sigma$.
Coronal Lines (CLs) are emission lines with high ionisation potentials (IPs ranging from $\sim$50 eV up to a few hundred eV), which makes them excellent tracers of the ionising continuum. Although often fainter than the classical medium-ionisation lines used for photoionisation diagnostics, high-angular-resolution observations of nearby AGN have shown that CLs, particularly in the near-infrared (NIR), are among the most conspicuous features \citep[e.g.][]{marconi1994, muller-sanchez+11, rodriguez+17, gravity2020}. In this work, we highlight the dependence of the BH mass on one such CL, [\ion{Si}{vi}]~1.963$\mu$m (IP [\ion{Si}{vi}] = 167 eV) in the NIR\footnote{This CL is among the most common and brightest ones observed in spectra of AGNs \citep{rodriguez+11, lamperti+17}. We refer the readers to \citet{2020arXiv201000075R} for a broad overview of the results presented here.}, after normalising it to the nearest broad \ion{H}{i} line (in this case Br$\gamma$). A tight correlation between BH mass and the CL ratio [\ion{Si}{vi}]/Br$\gamma_{\rm broad}$ is observed.
\section{Coronal line diagnostic diagrams}
Objects in this work are selected to have BH masses determined by reverberation mapping and single-epoch optical and/or near-IR spectra with accurate CL measurements. The first criterion restricts the sample to Type-1 sources only; the second avoids variability issues. The final working sample contains 21 AGNs with well-defined [\ion{Si}{vi}]~1.963$\mu$m/Br$\gamma_{\rm broad}$ estimates \citep[see Table 1 in][]{2020arXiv201000075R}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\columnwidth]{si6brgbroad.pdf}
\caption{Observed [\ion{Si}{vi}]~1.963$\mu$m/Br$\gamma_{\rm broad}$ ratio versus black hole mass for the objects in our sample. The black line is the linear best-fit to the data and the red-dashed and -dotted lines show the 1$\sigma$ and 2$\sigma$ deviation, respectively. Figure courtesy: \citet{2020arXiv201000075R}.}
\label{fig:my_label}
\end{figure}
Fig.~\ref{fig:my_label} presents a new diagnostic diagram in which the BH mass for the objects in our sample is plotted against the [\ion{Si}{vi}]~1.963$\mu$m/Br$\gamma_{\rm broad}$ ratio, which shows a clear trend with $M_{\rm BH}$ over three orders of magnitude in BH mass. A linear regression yields:
\begin{equation}
\log M_{\rm BH} = (6.40\pm 0.17) - (1.99\pm 0.37) \times \log \left(\rm{\frac{[Si\,{\sc VI}]}{Br\gamma_{\rm broad}}}\right),
\end{equation}
and a 1$\sigma$ dispersion of 0.47 dex. The regression analysis uses the {\sc LtsFit} package\footnote{\href{http://www-astro.physics.ox.ac.uk/~mxc/software/\#lts}{http://www-astro.physics.ox.ac.uk/mxc/software/lts}} \citep{capellari+13}, which accounts for the errors in all variables. The Pearson correlation coefficient is $r = -0.76$, with a \textit{p}-value of 3.8$\times 10^{-5}$.
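For convenience, Eq.~(1) can be packaged as a small calculator; the function below (an illustrative sketch, not released code from this work) returns the implied $\log(M_{\rm BH}/M_\odot)$ together with the 1$\sigma$ calibration scatter of 0.47 dex.

```python
import math

# Eq. (1): log M_BH = (6.40 +/- 0.17) - (1.99 +/- 0.37) * log10(ratio),
# where ratio is the [Si VI] 1.963um / Br-gamma(broad) flux ratio.
def log_mbh_from_si6_brg(ratio):
    if ratio <= 0:
        raise ValueError("flux ratio must be positive")
    return 6.40 - 1.99 * math.log10(ratio), 0.47  # (log mass, 1-sigma dex)

# A ratio of unity lands at the calibration's zero point.
log_m, scatter = log_mbh_from_si6_brg(1.0)
print(log_m)  # 6.4
```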
\section{Concluding remarks}
With a final compendium of 21 AGNs, the dispersion in BH mass in the proposed calibration is 0.47 dex (1$\sigma$). In comparison, a dispersion of 0.44 dex is inferred from the $M-\sigma$ relation in 49 galactic bulges with direct dynamical BH mass estimates \citep{gultekin+09}. The intrinsic scatter in the mass--luminosity relations is in the 40\% range \citep{kaspi_etal2005}, mostly driven by differences in the optical--UV continuum shape.
The present BH mass scaling relation is restricted to Type-1 AGN, including narrow-line Seyfert galaxies. The limitation is driven by the imposition of including {\it bona-fide} BH masses only, and the need to normalise to broad \ion{H}{i} gas. We are nonetheless examining possibilities to extend it to Type-2 sources. The new scaling offers an economical and physically motivated alternative for BH mass estimation using single-epoch spectra, avoiding large investments of telescope time (reverberation mapping) or absolute flux calibration (the continuum luminosity method). With the James Webb Space Telescope and large surveys in the IR region, the black holes in large samples of AGNs could be weighed using this approach. For a full account of the observed spectra of the sources, and the detailed photoionisation modelling that confirms the origin of the observed scaling relation, we refer the readers to \citet{2020arXiv201000075R}.
\acknowledgements{The project was partially supported by the Polish Funding Agency National Science Centre, project 2017/26/\-A/ST9/\-00756 (MAESTRO 9), MNiSW grant DIR/WK/2018/12 and acknowledges partial support from CNPq Fellowship (164753/2020-6).}
\bibliographystyle{ptapap}
\section{Introduction}
\label{sec:intro}
Cosmic rays with energies up to at least $\sim 10^{15}$~eV are thought to originate in supernova remnants (SNRs). They are high-energy particles that have a simple power-law energy spectrum that extends over five decades in energy. The need for an efficient acceleration mechanism in SNRs has motivated the development of the theory of diffusive shock acceleration (DSA), according to which particles are accelerated at shock fronts \citep[see~][for a comprehensive review]{2001MalkovDrury}.
Young supernova remnants are ideal locations for studying this process, because of the high shock velocity, and the presence of a few nearby young remnants for which detailed observations are available \citep[e.g.~][]{2004Hwangetal, 2005Bambaetal, 2009Aceroetal}. The presence of high-energy electrons spiralling in a $\sim 10\ \umu$G magnetic field has been established from the emission of synchrotron radiation from radio wavelengths to X-rays. Both the magnetic field strength and the typical particle energy are inferred from synchrotron theory and can be used to compare with theoretical predictions \citep{1994Achterbergetal,2003VinkLaming, 2005Voelketal, 2005Vink}. Even though synchrotron emission only indicates the presence of relativistic electrons, the presence of energetic protons is suggested by the observations of TeV gamma rays \citep[e.g.~][]{2009Aharonianetal}, and by indications of magnetic field amplification beyond what is expected from simple shock compression \citep{2005Warrenetal, 2008CassamChenaietal}. In addition, there are indications that the SNR blast waves are not simple hydrodynamic blast waves in a single-component gas. They behave in a way that indicates that a significant fraction of the pre-shock energy density resides in cosmic rays \citep{2000Decourchelleetal,2009Helderetal}.
Various groups are working towards an integrated picture of the interaction between SNR shocks, magnetic fields, and the associated particle acceleration \citep[e.g.~][]{1999BerezhkoEllison,2005KangJones,2005AmatoBlasi, 2006Vladimirovetal,2006BerezhkoVoelk,2007ZirakashviliAharonian,2010Ferrandetal}. The difficulty is that the process is inherently non-linear.
The spectral slope of the particles is determined mainly by the difference between the plasma velocities on the two sides of the shock. This difference depends on the compression ratio, which in turn is determined by the effective equation of state of the gas-cosmic ray mixture: the presence of cosmic rays tends to soften the equation of state around the shock, which in turn increases the total compression.
In addition, the gradient in cosmic ray pressure slows down the incoming flow in the cosmic-ray precursor to the shock. High-energy particles with a larger scattering mean free path probe a larger region around the shock and `feel' a higher compression ratio. For these reasons the spectrum flattens at the high-energy end in fully non-linear models. An additional nonlinearity arises when the magnetic fields are amplified by the streaming of cosmic rays.
Cosmic rays isotropise by scattering off Alfv\'en waves. In gyro-resonant scattering the scattering rate depends on the cosmic ray energy through the slope of the spectrum of the scattering waves. It is often assumed that the diffusion rate is described by Bohm diffusion, where the diffusion coefficient scales as $\kappa_{\rm B} \propto E$, with $E$ the energy of the particle. These waves are self-generated by the streaming of cosmic rays away from the shock, through a resonant instability \citep[e.~g.][]{1975Skilling}, and / or through a non-resonant instability \citep{2001BellLucek, 2004Bell, 2009LuoMelrose}.
Various authors focus on the feedback of cosmic rays onto the hydrodynamics near the shock \citep{1997Malkov, 2007Blasietal, 2009Patnaudeetal,2010Ferrandetal}. The distribution function of the particles is calculated, from which the cosmic ray pressure can be determined. The pressure term alters the equation of state and feeds back on the hydrodynamics. Alternatively, a standard power law distribution is assumed, simplifying and speeding up the process, making it fast enough for application on larger scales \citep{2007Ensslinetal}. The disadvantage of this approach is that -- generally speaking-- the time-dependence or the influence of a complicated magnetic field geometry can not be taken into account.
Our work focuses on the time and position dependence of cosmic ray spectra in the supernova remnant, where we use the test-particle approximation, neglecting feedback of the particles onto the plasma. The high-energy particles isotropise on both sides of the shock, due to scattering off waves, and are accelerated by repeated crossing cycles at the shock. The acceleration of relativistic particles can be described by a set of stochastic differential equations (SDEs) \citep{1992AchterbergKruells,1994KruellsAchterberg}, and this has been applied successfully by a number of authors \citep{1999MarcowithKirk, 2004vanderSwaluwAchterberg, 2010MarcowithCasse} in various hydrodynamic codes.
We use the adaptive-mesh-refinement magnetohydrodynamics (MHD) code AMRVAC \citep{2003Keppensetal, 2007HolstKeppens} as the framework for our particle acceleration method.
We calculate not only the acceleration and deceleration of test particles due to compression and expansion of the flow, but also model the energy-dependent diffusion and radiative losses, while keeping computational costs down with the adaptive mesh strategy. Our approach has the advantage that it is able to tackle a more complicated circumstellar density profile
than other models. The disadvantage is (for now) having to neglect the nonlinear feedback of the cosmic rays onto the plasma.
We describe the different models we use to calculate the particle spectrum in supernova remnants in \S~\ref{sec:1Dmodels}. In \S~\ref{sec:theory} we will discuss the theory of diffusive shock acceleration and the evolution of the supernova remnant, and derive some analytical estimates for the expected cosmic ray spectrum. The method and set-up of the simulations will be described in \S~\ref{sec:method}. In \S~\ref{sec:testmodels} we will describe some test models and results obtained with this method. We will subsequently present the results for the particle spectra from the SNR models in \S~\ref{sec:snr} and conclude with a discussion and summary in \S~\ref{sec:discussion}.
\section{SNR models}
\label{sec:1Dmodels}
We simulate the evolution of the SNR and concurrently calculate the cosmic ray distribution in phase-space for the various models. In these models we evaluate the effect of: {\em i}): the approximation that is used for the diffusion coefficient, {\em ii}): the adopted equation of state,
{\em iii}): the density profile of the background into which the supernova remnant evolves, and {\em iv}): the strength of the magnetic field, both in the surrounding medium, and in the ejecta.
Further details of the latter three can be found below.
We summarize the entire set of models used in our simulations in Table~\ref{table:models}.
\subsection{Influence of the equation of state}
The compression ratio at the shock depends on the adiabatic index (specific heat ratio) of the plasma.
Since the expected slope of the spectral energy distribution of the cosmic rays depends on the compression ratio, by varying the equation of state we can test our code and see if the results change according to expectation.
Besides providing a nice test for the code, varying the adiabatic index is also physically relevant for systems in which efficient cosmic ray acceleration occurs. The adiabatic index parametrises the equation of state of the plasma. The value of $\gamma = 5/3$ corresponds to a mono-atomic ideal gas, and is what we use for simulations in which the contribution of cosmic rays is neglected. The value of $\gamma = 4/3$ corresponds to the adiabatic index of a relativistic gas. A plasma in which a large part of the pressure is made up by cosmic rays has an effectively softer equation of state, with a value of $4/3 < \gamma < 5/3$. When cosmic rays escape from the system, they may drain the shock region of energy, softening the equation of state further.
Approximating the hydrodynamics with a lower, but fixed, adiabatic index is only a first step to see how the slope of the cosmic ray spectrum is affected by a softer equation of state.
In reality, the cosmic ray pressure gradient in the shock precursor slows down the incoming flow, which in turn results in a concave cosmic ray spectrum if the scattering mean free path increases with energy. Additionally magnetic field amplification may partly reverse the effect by stiffening the equation of state.
\subsection{The explosion environment}
We consider two situations for the environment into which we let the ejecta expand.
{\em i}): The case of a homogeneous medium, such as may be applicable to Type Ia SNe. These models will be referred to as interstellar medium (ISM) models.
{\em ii}): A $\rho \propto R^{-2}$ profile for the plasma, as will arise from a steady stellar wind. This situation is more applicable to core collapse supernovae whose
progenitors have strong stellar winds. These models will be referred to as circumstellar medium (CSM) models.
We set the density of the homogeneous ISM to a value of $2.34 \times 10^{-24}$~g~cm$^{-3}$.
The wind parameters for the $\rho \propto R^{-2}$ CSM are chosen such that the volume containing an amount of mass equal to the ejecta mass is equal in both cases.
The ejecta mass that we use is $3.5$~M$_\odot$ (Sect.~\ref{sec:ejecta}). The radius containing an equal amount of mass in the ISM is $R_{3.5 {\rm M}_\odot}=\left(3 M/4 \pi \rho\right)^{1/3} \sim 3$~pc.
We employ mass loss parameters that are typical for the winds from red supergiants, with a value for the mass loss rate of $\dot M=1.71\times10^{-5}$~M$_\odot$~yr$^{-1}$
and a wind velocity equal to $v_{\rm wind}(10^{16} {\rm cm})=10$~km~s$^{-1}$. In all cases we set the initial temperature of the CSM/ISM and the ejecta to $T=10^4$~K.
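The quoted numbers can be reproduced directly; the sketch below (CGS units, standard constants) recovers $R_{3.5 {\rm M}_\odot}\sim 3$~pc from the ISM density and evaluates the wind density $\rho = \dot M/(4\pi R^2 v_{\rm wind})$ at $10^{16}$~cm.

```python
import math

# CGS constants
M_SUN = 1.989e33        # g
PC = 3.086e18           # cm
YEAR = 3.156e7          # s

# Radius of a sphere of ISM (rho = 2.34e-24 g/cm^3) enclosing a mass
# equal to the 3.5 M_sun ejecta mass.
M_ej = 3.5 * M_SUN
rho_ism = 2.34e-24
R = (3.0 * M_ej / (4.0 * math.pi * rho_ism)) ** (1.0 / 3.0)
print(R / PC)           # ~2.9 pc, matching the quoted ~3 pc

# Steady-wind density profile rho = Mdot / (4 pi R^2 v_wind), with the
# red-supergiant parameters given in the text.
mdot = 1.71e-5 * M_SUN / YEAR   # g/s
v_wind = 10e5                    # 10 km/s in cm/s

def rho_wind(r_cm):
    return mdot / (4.0 * math.pi * r_cm**2 * v_wind)

print(rho_wind(1e16))   # wind density at 10^16 cm, in g/cm^3
```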
\subsection{Magnetic field}
For the magnetic field we look at the case of a constant magnetic field strength, with values of $3$, $10$, and $20$ $\umu$G.
We restrict ourselves for now to parallel shocks, where the magnetic field is aligned with the shock normal, for reasons mentioned in \S~\ref{sec:method}.
In this case the magnetic field is the same on both sides of the shock and the diffusion coefficient depends only on the particle energy in the case of Bohm diffusion.
While the code has an MHD solver and we can follow the magnetic field dynamically, this only becomes interesting in two dimensions, something we defer until a later work.
We solve the Euler equations and parametrise the magnetic field strength for use in the SDEs.
We find (see \S~\ref{sec:rev}) that particles diffuse far enough downstream to reach the reverse shock such that they are re-accelerated, provided that the magnetic field in the ejecta is strong enough to confine the particles.
We therefore investigate two different cases for the magnetic field in the ejecta: we either parametrise the magnetic field strength such that it decays with radius as $R^{-2}$ as to represent the expanding fossil field of the progenitor, frozen into the plasma while conserving magnetic flux,
or we keep the magnetic field fixed at the same value as in the ISM/CSM. In the first case, the magnetic field strength for reasonable progenitor fields is very weak, making the gyroradius (and thus the diffusion length-scale) of the particles very large when Bohm diffusion is assumed.
In the latter case, the diffusion in the ejecta proceeds at a similar rate as in the ISM/CSM, so particles that encounter the reverse shock can be re-accelerated, as we will illustrate in Sec.~\ref{sec:rev}.
\begin{table*}
\centering
\caption{\textrm{Overview of the supernova remnant models.} }
\label{table:models}
\begin{tabular}{c c c c c c c}
\hline\hline
MODEL: & ISM$\kappa_{\rm c}$ & ISM$\kappa_{\rm B}$ & CSM$\kappa_{\rm B}$ & ISM$\kappa_{\rm B}$r & CSM$\kappa_{\rm B}$r & CSM$\kappa_{\rm B}$s
\\
\hline
$\rho$ background (g~cm$^{-3}$)& $2.34\times 10^{-24}$ & $2.34\times 10^{-24}$ & $\propto r^{-2}$& $2.34\times 10^{-24}$ & $\propto r^{-2}$ & $\propto r^{-2}$\\
diffusion coefficient & constant & Bohm & Bohm & Bohm & Bohm & Bohm\\
magnetic field strength ($\umu$G) & $10$ & 3, 10, 20 & 3, 10, 20 & 20 & 20 & 20 \\
reverse shock acceleration & no & no & no & yes & yes & no\\
equation of state & normal ($\gamma=5/3$) & normal & normal & normal & normal & soft ($\gamma=4/3$)\\
\hline
\end{tabular}
\end{table*}
\section{Theory}
\label{sec:theory}
\subsection{Diffusive Shock Acceleration}
At the forward shock of SNRs, and possibly also at the reverse shock \citep{2008HelderVink, 2010ZirakashviliAharonian},
particles are believed to be accelerated to relativistic energies, up to $10^{15}$~eV for protons.
The favoured mechanism is diffusive shock acceleration (DSA), where every time the particle crosses the shock interface it experiences a Lorentz boost, after which the momentum direction is randomised in the local frame. Since the plasma rest frames on either side of the shock differ in speed, this results in a systematic increase of the particle momentum every time the particle cycles from upstream to downstream and back \citep{1978Bella}.
\citet{1977Krymskii, 1977Axfordetal, 1978Bella,1978Bellb, 1978BlandfordOstriker, 1983Drury} derive the spectrum of shock-accelerated particles, based on the energy gain per cycle versus the probability of being advected away and lost from the acceleration process. The escape probability is
given by $P_{\rm esc}=4 V_{\rm s}/(rc)$, where $V_{\rm s}$ is the shock velocity, $r$ the shock compression ratio, and $c$ the velocity of the cosmic rays, effectively the speed of light.
The mean time $\Delta t$ for a relativistic particle to complete a cycle probing the upstream and the downstream plasma and the corresponding boost in momentum $p$ is given by \citep{1983LagageCesarsky,1983Drury}:
\begin{eqnarray}
\Delta t =\frac{4}{c V_{\rm s}}(\kappa_1+r \kappa_2),\\
\Delta p =\left(\frac{r-1}{3r}\right)\frac{4V_s}{c}p.
\end{eqnarray}
where $\kappa$ is the diffusion coefficient; we use the subscript $1$ for upstream and $2$ for downstream properties.
The test particle limit results in a power-law spectrum in momentum, with the number density of particles per unit momentum following the distribution
\begin{eqnarray}
F(p)\equiv\frac{\partial {\cal N}}{\partial p}\propto p^{-q},
\end{eqnarray}
with
\begin{eqnarray}
\label{eq:q}
q=1+\frac{\ln(1/P_{\rm ret})}{\ln((p+\Delta p)/p)}=\frac{r+2}{r-1}
\end{eqnarray}
the slope. Here $P_{\rm ret}=1-P_{\rm esc}$ is the return probability in a shock crossing cycle.
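The identification of the two expressions for $q$ holds in the limit $V_{\rm s}/c \ll 1$; the short numerical sketch below (with illustrative compression ratios) confirms that the crossing-cycle formula converges to $(r+2)/(r-1)$, giving the canonical $q=2$ for a strong shock with $r=4$.

```python
import math

# Slope from the return-probability argument: q = 1 + ln(1/P_ret)/ln(1 + dp/p),
# with P_esc = 4 V_s / (r c) and dp/p = 4 V_s (r - 1) / (3 r c).
def slope(r, beta_s):
    """beta_s = V_s / c, assumed small."""
    p_esc = 4.0 * beta_s / r
    dp_over_p = 4.0 * beta_s * (r - 1.0) / (3.0 * r)
    return 1.0 + math.log(1.0 / (1.0 - p_esc)) / math.log(1.0 + dp_over_p)

for r in (4.0, 7.0):
    q_num = slope(r, 1e-6)          # crossing-cycle formula, V_s/c -> 0
    q_ana = (r + 2.0) / (r - 1.0)   # test-particle limit
    print(r, round(q_num, 4), round(q_ana, 4))
# r = 4 gives the canonical q = 2; a higher compression ratio
# (softer equation of state) flattens the spectrum.
```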
\subsection{Acceleration time-scales}
\label{sec:time-scalesanalytical}
The acceleration rate of relativistic protons or electrons of given energy $E = pc \gg m_{\rm p}c^2\sim 1$~GeV is:
\begin{eqnarray}
\label{eq:accrate}
\left(\frac{{\rm d}E}{{\rm d}t}\right)_{\rm dsa}=\left(\frac{r-1}{3r}\right)\frac{V_{\rm s}^2}{(\kappa_1+r\kappa_2)}E,
\end{eqnarray}
giving a typical time-scale for acceleration time of
\begin{eqnarray}
t_{\rm acc} \equiv \left(\frac{1}{E} \frac{dE}{dt}\right)^{-1} =
\frac{3r(\kappa_1+r\kappa_2)}{(r-1)V_{\rm s}^2}\label{eq:tacc}.
\end{eqnarray}
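A rough numerical illustration of this time-scale, assuming Bohm diffusion at a parallel shock ($\kappa_1=\kappa_2=cE/3ZeB$, as discussed below) and sample values $B=10\ \umu$G, $V_{\rm s}=5000$ km s$^{-1}$ chosen purely for illustration:

```python
# Acceleration time-scale t_acc = 3 r (kappa_1 + r kappa_2) / ((r-1) V_s^2)
# for Bohm diffusion, kappa = c E / (3 Z e B), in CGS units.
C_LIGHT = 2.998e10      # cm s^-1
E_CHARGE = 4.803e-10    # esu
YEAR = 3.156e7          # s

def t_acc_bohm(E_erg, B_gauss, v_shock, r=4.0, Z=1):
    kappa = C_LIGHT * E_erg / (3.0 * Z * E_CHARGE * B_gauss)
    return 3.0 * r * (kappa + r * kappa) / ((r - 1.0) * v_shock**2)

E_tev = 1.602           # 1 TeV in erg
t = t_acc_bohm(E_tev, 1e-5, 5e8)   # B = 10 uG, V_s = 5000 km/s
print(t / YEAR)         # ~8 yr to reach 1 TeV at this shock speed
```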
It will be useful to consider the case of Bohm diffusion for comparison with the simulations, and to look at the combined effect of acceleration and synchrotron losses, which is important for electrons. The change in energy at the shock is then given by:
\begin{eqnarray}
\label{eq:lossrate}
\left(\frac{{\rm d}E}{{\rm d}t}\right)_{\rm acc+synch}=\frac{3 Z e B V_{\rm s}^2}{(1+r)r c}-\frac {\lambda_s B^2 E^2}{m c^2},
\end{eqnarray}
for a parallel shock, where $\kappa_1=\kappa_2=\kappa_{\rm B} = c E/(3ZeB)$ is the Bohm diffusion coefficient,
which depends on the gyroradius of the particles; here $B$ is the magnetic field strength, $Z$ the charge number, $e$ the elementary electric charge and $m$ the mass of the particle.
We have introduced a term that accounts for synchrotron losses with
\begin{eqnarray}
\label{eq:betas}
\lambda_s=\frac{\sigma_T}{6 \pi m c} \propto m^{-3}.
\end{eqnarray}
The mass dependence makes this term negligible for protons. High-energy electrons on the other hand lose a substantial fraction of their energy by synchrotron radiation.
With a Thomson cross-section of $\sigma_T=6.65\times10^{-25}$~cm$^2$ for electrons, this gives $\lambda_s=1.29\times10^{-9}$~cm~s~g$^{-1}$.
The same loss-term can be used to account for inverse Compton losses, where the energy that is lost in upscattering photons from the microwave background corresponds
to synchrotron losses in an equivalent magnetic field $B_{\rm CMB}=3.27\ \umu$G \citep{1998Reynolds}. When the actual magnetic field is stronger than this value, synchrotron losses will dominate over inverse Compton losses unless there is another source of photons that boosts the inverse Compton scattering rate. For now we neglect Compton losses, which means that we slightly over-estimate the maximum energy
for electrons in our models with $B \sim 3\ \umu$G.
For oblique shocks the compression of the magnetic field and the change in residence times upstream and downstream have to be taken into account.
\subsection{Maximum energy of the cosmic rays}
The maximum attainable energy $E_{\rm max}$ for cosmic rays depends on the size of the accelerator,
the time they have spent there, and the strength of adiabatic- and synchrotron losses. In order to determine this, we follow the evolution of the supernova remnant. Initially, the ejecta expand freely into the surrounding medium. The expansion slowly decelerates as the ejecta sweep up mass from the CSM. The deceleration of the blast wave drives a reverse shock into the ejecta, heating the material to millions of Kelvin and making the SNR prominent in X-rays.
By the time the swept-up mass equals the ejecta mass the Sedov-Taylor phase of SNR evolution begins.
Energy conservation gives us typical length- and time-scales,
and determines the deceleration radius where the transition from free expansion phase to the Sedov-Taylor phase takes place \citep{1959Sedov,1995McKeeTruelove}.
Figure~\ref{fig:vxshock} shows the analytical solution for the evolution of the blast wave of a supernova remnant derived by \citet{1999TrueloveMcKee}.
We show two different cases, one in which the density of the ejecta is constant with radius ($n=0$-model), and one in which the ejecta consist of a constant density core with an envelope in which the density decreases with radius as $R^{-9}$ ($n=9$-model, see also our description of the ejecta in the hydrodynamics in \S~\ref{sec:method}).
The early evolution of SNR radius and blast wave velocity is quite different in the two cases.
The evolution of the blast wave radius and velocity according to \citet{1999TrueloveMcKee} for a $n=9$ power law ejecta envelope into a $1$~cm$^{-3}$ ISM is given by:
\begin{eqnarray}
\label{eq:truelovemckee}
\tilde R_{\rm s}(\tilde t<\tilde t_{\rm ST})&=&1.12\ \tilde t^{2/3}\\\nonumber
\tilde R_{\rm s}(\tilde t>\tilde t_{\rm ST})&=&1.42\ (\tilde t-0.297)^{2/5}\\\nonumber
\tilde v_{\rm s}(\tilde t<\tilde t_{\rm ST})&=&0.75\ \tilde t^{-1/3}\label{eq:shockanalytical}\\\nonumber
\tilde v_{\rm s}(\tilde t>\tilde t_{\rm ST})&=&0.569\ (\tilde t-0.297)^{-3/5},
\end{eqnarray}
with $\tilde R=R/R_{\rm ch}$ and $\tilde v=v/v_{\rm ch}$. The characteristic radius and velocity are given by
$R_{\rm ch}=3.07 (M_{\rm ej}/{\rm M}_\odot)^{1/3}$~pc, with $M_{\rm ej}$ the ejecta mass, and $v_{\rm ch}=R_{\rm ch}/t_{\rm ch}$ with $t_{\rm ch}=423 (M_{\rm ej}/{\rm M}_\odot)^{5/6}$~yr.
The transition time is given by $\tilde t_{\rm ST}=0.523$.
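The scaled solution can be evaluated directly, taking the post-transition velocity as the time derivative of the corresponding radius branch (a Python sketch, using the ejecta mass adopted later in our models, $M_{\rm ej}=3.5\,{\rm M}_\odot$):

```python
import numpy as np

# Dimensionless Truelove & McKee (1999) blast wave for n = 9 ejecta, s = 0.
T_ST = 0.523  # dimensionless Sedov-Taylor transition time

def radius(t):
    """Scaled radius R / R_ch as a function of t / t_ch."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    # np.abs only guards the branch that np.where discards.
    return np.where(t < T_ST,
                    1.12 * t**(2.0 / 3.0),
                    1.42 * np.abs(t - 0.297)**(2.0 / 5.0))

def velocity(t):
    """Scaled velocity v / v_ch = d(radius)/dt."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return np.where(t < T_ST,
                    0.75 * t**(-1.0 / 3.0),
                    0.569 * np.abs(t - 0.297)**(-3.0 / 5.0))

# Characteristic scales for the ejecta mass used in our models:
M_EJ = 3.5                                           # Msun
R_CH = 3.07 * M_EJ**(1.0 / 3.0)                      # pc
T_CH = 423.0 * M_EJ**(5.0 / 6.0)                     # yr
V_CH = R_CH * 3.086e18 / (T_CH * 3.156e7) / 1.0e5    # km/s
print(f"R_ch = {R_CH:.2f} pc, t_ch = {T_CH:.0f} yr, v_ch = {V_CH:.0f} km/s")
```

In each phase the quoted velocity prefactor is, to the stated precision, the time derivative of the corresponding radius branch, which the sketch verifies numerically.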
For a maximum residence time of the particles of $t_{\rm max} \sim R_{\rm s}/V_{\rm s}$, a simple approximation for the maximum energy for protons follows from Eq.~\ref{eq:accrate}:
\begin{eqnarray}
E_{\rm max}=\frac{3 Z e B V_{\rm s}R_{\rm s}}{\xi_\sigma c},
\end{eqnarray}
with $\xi_\sigma$ a factor that depends on the compression of the density and of the magnetic field:
$\xi_\sigma=20$ for a shock where the magnetic field is parallel to the shock normal, and $\xi_\sigma=8$ for a shock with the magnetic field perpendicular to the shock normal.
The acceleration efficiency is highest around the Sedov-Taylor transition phase.
For a simple ejecta profile and a homogeneous ISM ($n=0$, $s=0$, with $n$ the power-law index of the density in the ejecta envelope and $s$ that of the density of the surrounding medium), the corresponding typical radius of the blast wave and time of transition into this phase are given by:
\begin{eqnarray}
R_{\rm ST}& \approx & 2.23 \left(\frac{M_{\rm ej}}{{\rm M}_\odot}\right)^{1/3}n_0^{-1/3} {\rm pc}\\\nonumber
t_{\rm ST}& \approx & 209\ E_{51}^{-1/2}\left(\frac{M_{\rm ej}}{{\rm M}_\odot}\right)^{5/6}n_0^{-1/3} {\rm yr}.
\end{eqnarray}
At this point in the evolution of the SNR the maximum attainable energy for protons is:
\begin{eqnarray}
E_{\rm ST} \approx 107\ E_{51}^{1/2}\left(\frac{M_{\rm ej}}{{\rm M}_\odot}\right)^{-1/6} \left(\frac{B}{10\ \umu{\rm G}}\right) n_0^{-1/3}\; {\rm TeV},
\end{eqnarray}
with $n_0$ the number density of the surrounding medium, $E_{51}$ the explosion energy in units of $10^{51}$erg,
and where the corresponding radius and age of the supernova remnant are taken from \citet{1995McKeeTruelove}.
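These scalings are simple enough to wrap in helper functions for later use (a sketch that directly encodes the formulae above, in the quoted units):

```python
# Sedov-Taylor transition scalings for the n = 0, s = 0 model.
def r_st_pc(m_ej=1.0, n0=1.0):
    """Transition radius [pc] for ejecta mass m_ej [Msun], density n0 [cm^-3]."""
    return 2.23 * m_ej**(1.0 / 3.0) * n0**(-1.0 / 3.0)

def t_st_yr(m_ej=1.0, n0=1.0, e51=1.0):
    """Transition time [yr]; e51 is the explosion energy in 10^51 erg."""
    return 209.0 * e51**(-0.5) * m_ej**(5.0 / 6.0) * n0**(-1.0 / 3.0)

def e_st_tev(m_ej=1.0, n0=1.0, e51=1.0, b_mug=10.0):
    """Maximum proton energy [TeV] at the Sedov-Taylor transition."""
    return 107.0 * e51**0.5 * m_ej**(-1.0 / 6.0) * (b_mug / 10.0) * n0**(-1.0 / 3.0)

print(f"R_ST = {r_st_pc():.2f} pc, t_ST = {t_st_yr():.0f} yr, "
      f"E_ST = {e_st_tev():.0f} TeV")
```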
For electrons the maximum energy at sufficiently late times is limited by synchrotron losses and follows from the balance of the acceleration term
and the synchrotron loss term in Eq.~\ref{eq:lossrate}. This occurs when the electron energy equals:
\begin{eqnarray}
\label{eq:Eemax}
E_{\rm sync}&=&\sqrt{\frac{18 \pi e}{\xi_\sigma \sigma_{\rm T} B}} \: m_{\rm e} c V_{\rm s} \\\nonumber
& \approx & 11.6 \left(\frac{V}{6000\ {\rm km/s}}\right) \left(\frac{B}{10\ \umu{\rm G}}\right)^{-1/2} {\rm TeV},
\end{eqnarray}
which corresponds to a synchrotron loss time-scale of \citep{2004vanderSwaluwAchterberg}:
\begin{eqnarray}
\label{eq:tsync}
t_{\rm sync} & \approx & 800 \left(\frac{B}{10\ \umu{\rm G}}\right)^{-2} \left(\frac{E}{100\ {\rm TeV}}\right)^{-1} {\rm yr.}
\end{eqnarray}
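Both scalings, Eqs.~\ref{eq:Eemax} and \ref{eq:tsync}, are convenient to have as small helper functions (a sketch that simply encodes the numerical scalings quoted above):

```python
def e_sync_tev(v_s_kms, b_mug):
    """Synchrotron-limited electron energy [TeV] for shock speed v_s [km/s]
    and magnetic field b_mug [microgauss]."""
    return 11.6 * (v_s_kms / 6000.0) * (b_mug / 10.0)**(-0.5)

def t_sync_yr(b_mug, e_tev):
    """Synchrotron loss time-scale [yr] at electron energy e_tev [TeV]."""
    return 800.0 * (b_mug / 10.0)**(-2.0) * (e_tev / 100.0)**(-1.0)

# Example: a 6000 km/s shock in a 20 microgauss field.
print(f"E_sync = {e_sync_tev(6000.0, 20.0):.1f} TeV, "
      f"t_sync(100 TeV) = {t_sync_yr(20.0, 100.0):.0f} yr")
```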
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{vxshock}
\caption[ ] {Early-time evolution of the radius (black) and velocity (coloured) of a SNR in a homogeneous ISM.
The solid line corresponds to the $n=9$ model from \citet{1999TrueloveMcKee} and the dashed line to the $n=0$ model.
\label{fig:vxshock}}
\end{figure}
In Fig.~\ref{fig:pmaxtimen0n9} we show the solution of Eq.~\ref{eq:lossrate} for the analytical estimate of the maximum energy as a function of the blast wave radius.
From the equation we can see that the influence of the shock velocity on $p_{\rm max}$ is very strong.
The different evolution of the shock velocity in the $n=0$ versus the $n=9$ model therefore is reflected in the strong differences of the maximum energy in the two models,
as can be seen in Fig.~\ref{fig:pmaxtimen0n9}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{pmaxtimen0n9}
\caption[ ] {Maximum energy of relativistic electrons (solid) and protons (dashed) for different magnetic field strengths.
The black lines corresponds to the $n=9$ model from \citet{1999TrueloveMcKee} and the coloured lines to the $n=0$ model.
As long as the maximum energy is not limited by radiative losses, the influence of the shock velocity dominates the evolution of the maximum energy.
The curves with the highest energies correspond to a magnetic field strength of $3\ \umu$G.
The higher magnetic field strengths of $10$, $20$ and $50\ \umu$G limit the electron energy to subsequently lower values (normalized to the magnetic field strength).
\label{fig:pmaxtimen0n9}}
\end{figure}
\section{Method}
\label{sec:method}
\subsection{Stochastic differential equations}
We use the method of \citet{1992AchterbergKruells} to calculate the distribution function of cosmic rays.
We describe the method below (details of the derivation for 2D spherical geometry can be found in Appendix~\ref{app:spherical}) and outline the set-up used in the hydro-simulation.
The propagation of relativistic particles through the plasma can be described by a random walk. To mimic scattering off MHD waves, such as Alfv\'en waves,
the mean free path is taken to depend on the particle momentum and the strength of the magnetic field.
The presence of a magnetic field may cause the diffusion to be anisotropic, with diffusion along the field proceeding more rapidly than across the field.
The acceleration and propagation of relativistic particles can be described by a phase space advection-diffusion equation \citep{1975Skilling, 1990Jones}:
\begin{eqnarray}
\frac{\partial F}{\partial t} = -\nabla_Z \cdot \left({\bf U} F\right)+\nabla_Z \cdot \left({\bf \kappa} \cdot \nabla_Z F \right) \; ,
\label{eq:advdiff}
\end{eqnarray}
where $F({\bf Z},t)$ is the particle distribution in phase space, $\nabla_{Z}$ is the gradient operator in phase space,
${\bf Z}\equiv ({\bf x},{\bf p})$ is the phase-space position vector, $\bf U = {\rm d} {\bf Z}/{\rm d}t$ is the phase space velocity and ${\bf \kappa}$ the diffusion tensor in phase space.
By reordering the operators we obtain a Fokker-Planck equation of the standard form:
\begin{eqnarray}
\label{eq:fokkerplanck}
\frac{\partial F({\bf Z},t)}{\partial t} = -\nabla_Z \cdot \left[{\bf \dot Z}F({\bf Z},t)-
\nabla_Z \cdot \left({\bf \kappa}F({\bf Z},t)\right)\right].
\end{eqnarray}
Here
\begin{equation}
{\bf \dot Z} \equiv {\bf U} + \nabla_{Z} \cdot \kappa
\end{equation}
is an effective velocity that includes a drift term due to diffusivity gradients.
Equation~\ref{eq:fokkerplanck} is mathematically equivalent to a set of stochastic differential equations (SDEs) of the It\^o form \citep{1994Gardiner,1985Saslaw,1992AchterbergKruells}:
\begin{eqnarray}
\label{eq:ito}
d{\bf Z} = {\bf \dot Z} \: {\rm d}t + \sqrt{2{\bf \kappa}} \cdot {\rm d} {\bf W} \; .
\end{eqnarray}
The SDEs contain, apart from a regular (deterministic) term $\propto {\rm d}t$, a stochastic term that models the random walk. The Wiener process ${{\rm d} \bf W}$ in the stochastic term satisfies $\langle dW_i \rangle=0$ and $\langle dW_i dW_j \rangle=\delta_{ij}dt$.
The square root $\sqrt{{\bf \kappa}}$ is to be read as a tensor ${\bf T}$ that formally satisfies $T_{im}T_{mj}= \kappa_{ij}$.
A large number of statistically independent integrations of the SDEs creates a distribution of particles in phase space that corresponds to the solution of the corresponding Fokker-Planck equation \citep{1994Gardiner}.
These SDEs can be used to calculate particle acceleration in a time-dependent flow, as first proposed by \citet{1992AchterbergKruells},
and have since been applied in various hydrodynamic codes \citep[e.g.][]{1994KruellsAchterberg, 1999MarcowithKirk, 2004vanderSwaluwAchterberg, 2010MarcowithCasse}.
In 2D spherical geometry, the set of equations that needs to be solved to update the position of the particles in phase space (as we derive in Appendix~\ref{app:spherical}) is:
\begin{eqnarray}
{\rm d}u &=&-\frac{1}{3}({\bf \nabla \cdot V}) \: {\rm d}t - \lambda_s B^2 \sqrt{1+e^{2 u}} \: {\rm d}t \; , \nonumber \\
&& \nonumber \\
{\rm d}R&=&\left( V_R+\frac{1}{R^2}\frac{\partial (R^2 \kappa)}{\partial R}\right) \: {\rm d}t + \sqrt{2 \kappa \: {\rm d}t}\: \xi_R \; , \\
& & \nonumber \\
R \: {\rm d}\theta&=&\left( V_\theta+\frac{2 \kappa}{R \tan \theta}+\frac{1}{R}\frac{\partial \kappa}{\partial \theta}\right) \: {\rm d}t -\sqrt{2 \kappa \: {\rm d}t}\: \xi_\mu \; , \nonumber
\end{eqnarray}
where $V$ is the plasma velocity, $u = \ln(p/mc)$ and $\mu=\cos\theta$.
The stochastic term now contains a random number $\xi_i$ drawn from a unit normal distribution such that its statistical average satisfies $\langle \xi_i \rangle=0$ and $\langle \xi_i \xi_j \rangle=\delta_{ij}$, where $i,j$ stand for $R$ and $\mu$. Note that these equations solve for $\tilde F=R^2 F$ rather than for $F$.
In 1D slab geometry with flow velocity $V(x \: , \: t)$ these equations simplify to:
\begin{eqnarray}
\label{1DslabSDE}
{\rm d}u&=&-\frac{1}{3} \left( \frac{\partial V}{\partial x} \right) \: {\rm d}t - \lambda_s B^2 \sqrt{1+e^{2 u}} \: {\rm d}t \; , \nonumber \\
& & \\
{\rm d}x&=&\left( V+\frac{\partial \kappa}{\partial x}\right) {\rm d}t +\sqrt{2 \kappa \: {\rm d}t} \: \xi_x\; . \nonumber
\end{eqnarray}
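A minimal sketch of one Euler-Maruyama update of these slab equations, restricted to protons (i.e.\ without the synchrotron term); the profiles $V$, ${\rm d}V/{\rm d}x$, $\kappa$ and ${\rm d}\kappa/{\rm d}x$ are user-supplied callables:

```python
import numpy as np

def sde_step(x, u, dt, V, dVdx, kappa, dkappadx, rng):
    """One Euler-Maruyama step of the 1D slab SDEs (protons, no losses).

    x, u are arrays over particles; u = ln(p / m c).
    """
    xi = rng.standard_normal(x.shape)  # <xi> = 0, <xi^2> = 1
    du = -(1.0 / 3.0) * dVdx(x) * dt
    dx = (V(x) + dkappadx(x)) * dt + np.sqrt(2.0 * kappa(x) * dt) * xi
    return x + dx, u + du
```

In a uniform flow with constant $\kappa$ the step reduces to pure advection plus diffusion with variance $2\kappa\,\Delta t$, which provides a simple consistency check.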
Some caution should be exercised when translating these equations into numerical schemes in the case that large magnetic field gradients are present.
When Bohm diffusion is assumed, gradients in the magnetic field strength induce gradients in the diffusion coefficient.
This creates the following problem: In Eq.~\ref{eq:Roperator} we see that the divergence of the diffusion tensor shows up in the effective velocity of the scattering centre of our cosmic rays.
In simple schemes the strong gradient in the magnetic field causes the particles to under-sample the shock statistically, giving rise to a momentum spectrum of the accelerated particles that is steeper than can be expected in reality.
A more sophisticated numerical scheme is needed to make sure that the spectrum reflects the physics, rather than mathematical artifacts (Achterberg \& Schure, in preparation). Its implementation will be presented in a follow-up paper.
Here we will only consider parallel shocks where the magnetic field strength is constant across the shock.
The resulting power-law spectrum should approach the analytical solution of a strong shock,
where the compression ratio depends only on the specific heat ratio $\gamma$ of the plasma, and the slope of the distribution depends only on $r$.
The compression ratio is given by $r=\rho_2/\rho_1=(\gamma+1)/(\gamma-1)$, where $\rho$ denotes the plasma density.
For a value of $\gamma=5/3$, $r=4$, whereas for $\gamma=1.1$, $r=21$. In terms of $u = \ln(p/mc)$ one has:
\begin{eqnarray}
F(u) = pF(p) \propto p^{-(q-1)},
\end{eqnarray}
with $q$ given by Eq.~\ref{eq:q}.
\subsection{Cosmic ray injection}
\label{sec:injection}
As mentioned in \S~\ref{sec:intro}, the injection of cosmic rays at the shock front is still a poorly understood process.
Observations hint at efficient injection, particularly for parallel shocks \citep{2008BerezhkoVoelk},
but it remains problematic to accelerate electrons from the thermal pool to sufficiently high energies for them to be picked up by the DSA process.
Since we can only model the highest-energy end of the particle distribution, we leave this problem to others (see \citet{2009SironiSpitkovsky} for relativistic shocks),
and assume that the rate of particle injection is proportional to the mass that is swept up per unit time by the blast wave.
Since we treat the cosmic rays in the test-particle limit,
the exact number that is injected is not so important, but the relative number of injected cosmic rays
with time influences the momentum distribution and the average acceleration time of the cosmic rays.
For a homogeneous ISM, the number of particles that is injected as the shock expands a distance ${\rm d}R$ in radius scales as:
\begin{eqnarray}
{\rm d}N(p)\propto R^2 \: {\rm d}R \: \delta(p-p_0),
\end{eqnarray}
We assume mono-energetic injection with $p_0$ the injection momentum, which for Bohm diffusion is restricted for numerical reasons, as will be explained below (Eq.~\ref{eq:pinj}).
For a CSM that is shaped by the stellar wind of the progenitor, the density scales as $\rho \propto R^{-2}$.
In this case the injection proceeds uniformly with respect to the blast wave radius:
\begin{eqnarray}
{\rm d}N(p)\propto {\rm d}R \: \delta(p-p_0).
\end{eqnarray}
\subsubsection{Minimum injection momentum}
In our models, the particles are injected with relativistic energies.
They need to be able to cross the shock in one scattering mean free path, otherwise they will be adiabatically accelerated and we will not see the desired effect of DSA.
The particles are all injected with the same (relativistic) energy and the power law naturally follows provided the steps used for the integration of the SDEs satisfy
\begin{eqnarray}
\label{eq:deltaxcriterion}
\Delta x_{\rm adv} \simeq V_{s} \: \Delta t_{\rm dsa} \ll \Delta x_{\rm s} \ll \Delta x_{\rm diff} = \sqrt{2 \kappa \: \Delta t_{\rm dsa}}.
\end{eqnarray}
Here $\Delta t_{\rm dsa}$ is the time step used to integrate the SDEs, and $\Delta x_{\rm s}$ is the width of the shock transition, typically a few grid cells.
In the case of Bohm diffusion, the particles are inserted with an energy that makes it possible to meet the criterion in Eq.~\ref{eq:deltaxcriterion}
for the simulation of DSA from the earliest injection time ($t_0=31$~yr) onwards.
This puts the following constraint on the diffusion coefficient:
\begin{eqnarray}
\kappa \gg \frac{\Delta x_{\rm s}^2}{2 \Delta t_{\rm dsa}} \; ,\label{eq:xdiff}
\end{eqnarray}
with
\begin{eqnarray}
\kappa=\frac{r_g c}{3}=\frac{p c^2}{3 Z e B_{\rm max}} \quad {\rm (cgs)},\label{eq:bohm}
\end{eqnarray}
where $Z$ is the charge number of the particle, which we take to be $1$ since we only consider protons and electrons.
The requirement on the advective step, $\Delta x_{\rm adv} < \Delta x_{\rm s}$, in practice translates to a time-step requirement \begin{eqnarray}
\Delta t_{\rm dsa} &< &\frac{\Delta x_{\rm s}}{v_r}.
\label{eq:dtdsa}
\end{eqnarray}
Since the shock thickness is typically resolved in about 5 grid cells, the time step for calculating the diffusion can be larger than the MHD time step.
The latter is limited by the Courant condition, for which we normally use $\Delta t_{\rm hydro}=0.8 \Delta x_{\rm grid}/v$.
We therefore `supercycle' (i.e.~increase the timestep) the diffusion of the particles to save computational time, and calculate the appropriate time step $\Delta t_{\rm dsa}$
for diffusion throughout the simulation.
The constraint on the timestep can be used to evaluate the valid range for the diffusion coefficient by combining Eqns.~\ref{eq:xdiff} to \ref{eq:dtdsa}.
The minimum injection energy for this model is then given by:
\begin{eqnarray}
\label{eq:pinj}
p_{\rm inj} = \frac{3 Z e B \kappa}{c^2} > \frac{3 Z e B v_{\rm max}^2 \Delta t_{\rm dsa}}{2 c^2}.
\end{eqnarray}
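The bound of Eq.~\ref{eq:pinj} is easy to evaluate numerically; in the sketch below (cgs units) the time step and maximum velocity are illustrative placeholders, not values taken from our runs:

```python
# Lower bound on the injection momentum, Eq. (pinj), in cgs units.
C = 2.998e10          # speed of light [cm/s]
E_CHARGE = 4.803e-10  # elementary charge [esu]
ERG_PER_TEV = 1.602

def p_inj_min(b_gauss, v_max, dt_dsa, Z=1):
    """Minimum injection momentum [g cm/s] for field b_gauss, maximum
    velocity v_max [cm/s] and DSA time step dt_dsa [s]."""
    return 3.0 * Z * E_CHARGE * b_gauss * v_max**2 * dt_dsa / (2.0 * C**2)

# Hypothetical example: B = 10 microgauss, v_max = 10^4 km/s, dt_dsa = 1 yr.
p = p_inj_min(1.0e-5, 1.0e9, 3.156e7)
print(f"E_inj > {p * C / ERG_PER_TEV:.1f} TeV")
```

With these (hypothetical) numbers the bound lands at a few TeV, consistent with the $\sim 1$~TeV injection energy discussed below.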
The scaling of the injection energy with magnetic field also means that our resolution requirement depends on the magnetic field strength assumed in the simulation.
We want the injection energy to be at least as low as $\sim 1$~TeV, such that it is well below the maximum energy as can be roughly determined by Eq.~\ref{eq:Eemax}.
At lower energies, the spectrum inside the SNR will be a featureless power law as the scattering mean free path of the particles is much less than, say,
the radius of curvature of the shock, and losses are unimportant.
The maximum number of particles injected at the end of the simulation is $10^6$.
Particle splitting is applied when the particle energy increases by a factor of $e$, in order to reduce Poisson noise in the distribution at high energies.
\subsection{The hydrodynamics}
\label{sec:ejecta}
We have incorporated the equations for calculating diffusive shock acceleration into the AMRVAC framework.
This allows us to solve the particle spectrum concurrently with the (magneto-)hydrodynamics of the flow, in this case the evolution of the supernova remnant.
We model the system in 1D spherical coordinates. We use a radial range of $3.0\times10^{19}$~cm ($\sim 10$~pc) and a base grid of 180 cells.
Refinement by a factor of two is applied dynamically where strong density and velocity gradients are present, up to 7 to 10 levels. The effective resolution is then $46080-184320$~cells, or $1.6\times10^{14}-2.6\times10^{15}$~cm. This is sufficient for our simulations of the system with $B$ up to $20\ \umu$G.
The Euler equations are solved conservatively using a TVDLF scheme with minmod limiting on the primitive variables \citep{1996TothOdstrcil}.
Although this particular scheme does not resolve the shock within the least possible number of grid cells, the scheme is robust and does not have problems in dealing with the initial conditions for the supernova ejecta, such as large density gradients.
The supernova ejecta are inserted in the inner $0.1$~pc of the grid. The density and velocity profile are modelled according to the self-similar solution of \citet{1984Chevalier, 1999TrueloveMcKee}. The velocity increases linearly to the outer edge of the ejecta, whereas the density profile consists of a constant density core and a powerlaw envelope with a slope of $n=9$, interpolated as:
\begin{eqnarray}
\rho_{\rm ej}(r) =\frac{\rho_{\rm core}}{1 + (r/r_{\rm core})^{n} }.
\end{eqnarray}
The core radius and central density are such that the mass and energy of the ejecta are respectively $M_{\rm ej}=3.5$~M$_\odot$ and $E_{\rm ej}=10^{51}$~erg.
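This normalization can be carried out numerically: fix the shape of the profile, scale $\rho_{\rm core}$ to the target mass, and then choose the maximum ejecta velocity so that the kinetic energy (with $v\propto r$) matches the explosion energy. In the sketch below the core-to-ejecta radius ratio is a hypothetical choice for illustration, not a value quoted in the text:

```python
import numpy as np

MSUN, PC = 1.989e33, 3.086e18

def trapz(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

R_ej = 0.1 * PC           # ejecta inserted in the inner 0.1 pc of the grid
r_core = 0.5 * R_ej       # hypothetical core radius for this sketch
n = 9                     # envelope power-law index
M_target, E_target = 3.5 * MSUN, 1.0e51

r = np.linspace(1e-4 * R_ej, R_ej, 20000)
shape = 1.0 / (1.0 + (r / r_core)**n)          # unnormalized density profile

rho_core = M_target / trapz(4.0 * np.pi * r**2 * shape, r)
rho = rho_core * shape

# Kinetic energy for v(r) = v_max * r / R_ej, solved for v_max:
e_unit = trapz(0.5 * rho * (r / R_ej)**2 * 4.0 * np.pi * r**2, r)
v_max = np.sqrt(E_target / e_unit)

print(f"rho_core = {rho_core:.3e} g cm^-3, v_max = {v_max / 1.0e5:.0f} km/s")
```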
In Fig.~\ref{fig:rhovism} we show the density and velocity profile for a SNR, $600$~yr after explosion into a homogeneous medium. In this simple one-dimensional model four different regions can be identified: From the inside out we first encounter the freely expanding ejecta, separated from the shocked ejecta by the reverse shock. Outside the contact discontinuity is the compressed ISM and the blast wave that separates the SNR from the undisturbed ISM. In Fig.~\ref{fig:rhovcsm} we show the same for the supernova remnant in a CSM. The evolution of the SNR is substantially different in the two cases. The separation between the blast wave and the ejecta is larger in the CSM model. This is due to the deceleration of the blast wave, whose velocity is initially high in the CSM model but soon becomes lower than in the ISM case, as can also be seen in Fig.~\ref{fig:rvshocktimemulti}, which will be discussed later in Sect.~\ref{sec:csmism}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{rhovhdismem0010}
\caption[ ] {
Density and velocity profile of supernova ejecta at a time $t=600$~yr after explosion into a homogeneous medium. The shown profiles are for a plasma with an adiabatic index of $\gamma=5/3$, yielding the expected compression ratio $r=4$ at the shocks.
\label{fig:rhovism}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{rhovhdbck0010}
\caption[ ] {
Density and velocity profile of supernova ejecta at a time $t=600$~yr after explosion into a $\rho \propto R^{-2}$ medium. Compared to the evolution of a SNR in a homogeneous ISM, the reverse shock is much farther inwards and the distance between the contact discontinuity and the forward shock much larger.
\label{fig:rhovcsm}}
\end{figure}
\section{Test models}
\label{sec:testmodels}
In this section we will present the results from some test cases, in order to compare the numerical results with the analytical ones,
and to determine the dependence of the spectrum on the chosen geometry.
\subsection{Analytical shock profile}
We will first present the results from a calculation where we describe the shock using a hypertangent profile in plane-parallel geometry.
In the shock's reference frame the velocity $V(x)$ along the shock normal is
\begin{eqnarray}
V(x)&=&\left(\frac{r+1}{2r}\right)-\left(\frac{r-1}{2r}\right)\tanh\left(\frac{x-x_{\rm s}}{\Delta x_{\rm s}}\right) \; .\label{eq:tanh}
\end{eqnarray}
The shock has a thickness $\Delta x_{\rm s}$ (in our case: $3.75\times10^{-2}$ in arbitrary units) and is located at $x=x_{\rm s}$ (in our case at $x=9$).
The shock compression ratio is chosen to be $r = 4$, the value for a strong shock in a $\gamma = 5/3$ gas. The time step is chosen to satisfy the condition in Eq.~\ref{eq:deltaxcriterion}: $\Delta t= 1.35\times 10^{-2}$ and the shock velocity is normalized to $V_{\rm s}=1$.
The diffusion coefficient is constant: $\kappa=0.28$. In this case, the numerically obtained cosmic ray distribution should closely match the analytical one.
In Fig.~\ref{fig:tanhFptime} we show the spectrum of the particles that are located near the shock, and that of all the particles. The number of particles that is introduced at the shock increases linearly with time, and they are all introduced with the same energy. The maximum number of particles used in these simulations is $10^6$. The time evolution shows that it takes time before the numerical slope approaches the analytical steady-state result, which in our figures would correspond to a horizontal line (meaning $q=2$ for our chosen compression ratio of $r=4$). While the spectrum at the shock indeed approximates the expected result, the spectrum of the whole particle population is steeper.
This is to be expected, because the overall spectrum consists of a superposition of spectra with different maximum acceleration times (for which Fig.~\ref{fig:tanhFptime}, lower panel, gives a sample), and hence different cut-off energies. The overall spectrum is therefore steeper than the spectrum at the shock front.
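The essential behaviour of this test can be reproduced with a compact Monte Carlo sketch of the SDEs of Eq.~\ref{1DslabSDE} for the hypertangent profile. For brevity all particles are injected at $t=0$ rather than at a constant rate, so the sketch mimics a single injection epoch rather than the full simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
r, kap, dt = 4.0, 0.28, 1.35e-2   # compression ratio, diffusivity, time step
x_s, dx_s = 9.0, 3.75e-2          # shock position and thickness

def V(x):
    """Hypertangent velocity profile in the shock frame (V_s = 1)."""
    z = np.clip((x - x_s) / dx_s, -25.0, 25.0)
    return (r + 1) / (2 * r) - (r - 1) / (2 * r) * np.tanh(z)

def dVdx(x):
    z = np.clip((x - x_s) / dx_s, -25.0, 25.0)  # clip avoids cosh overflow
    return -(r - 1) / (2 * r * dx_s) / np.cosh(z)**2

n_part, n_step = 20000, 3000
x = np.full(n_part, x_s)          # inject all particles at the shock
u = np.zeros(n_part)              # u = ln(p / p_0)

for _ in range(n_step):
    u += -(1.0 / 3.0) * dVdx(x) * dt
    x += V(x) * dt + np.sqrt(2.0 * kap * dt) * rng.standard_normal(n_part)

print(f"<u> = {u.mean():.2f}, max u = {u.max():.2f} e-foldings")
```

A histogram of $u$ then falls off roughly exponentially below the time-limited cut-off; for $r=4$ the shock-local slope should tend to $q-1=1$ in $N(u)$, with the all-particle spectrum somewhat steeper, as discussed above.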
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{tanhFptimec}
\includegraphics[width=0.5\textwidth]{tanhFpshtimec}
\caption[ ] {Spectrum as a function of time for accelerated particles in slab geometry and with a hypertangent velocity profile. The top panel shows the total spectrum, the bottom panel the spectrum at the location of the shock.
\label{fig:tanhFptime}}
\end{figure}
\subsection{Effects of the shock geometry: planar and spherical}
In this test model we explore the difference in the slope of the energy spectrum of the particles that arises as a result of the adopted geometry.
We will compare this to the analytical model, which assumes a steady state solution for a plane parallel shock.
We set up the supernova ejecta in a homogeneous density medium and let it evolve for 1500~yr. We artificially induce slab or spherical geometry, taking into account the volume- and surface elements depending on the chosen geometry, such that adiabatic losses are properly accounted for.
In a SNR the radius of curvature is much larger than the typical diffusion path of a particle, and slab geometry therefore is normally considered to be a good approximation.
However, we find that there are differences in the slope of the spectrum when spherical geometry is taken into account, and argue that geometry cannot simply be neglected.
Figure~\ref{fig:slabsphereFpshtime} shows the differences in the energy spectrum that arise
due to the choice of geometry. In plane-parallel geometry the spectrum is closer to the analytical test case as presented in the previous section.
In spherical geometry, the spectrum is somewhat steeper, with a slope $q\approx2.15$. This is to be expected since particles downstream of the shock experience adiabatic losses that are proportional to $V/r$. This reduces the mean energy gain they experience at the shock, which steepens the spectral slope.
Additionally, the exact slope in a time-dependent calculation depends on the injection rate and its dependence on time.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{slabsphereFpc}
\caption[ ] {Proton spectrum for the case of a constant diffusion coefficient. The black line shows the spectrum for a shock in plane parallel geometry.
The coloured line shows the case for spherical geometry. The maximum energy extends up to unrealistically high energies for which physically the particles would not be confined to the supernova remnant.
\label{fig:slabsphereFpshtime}}
\end{figure}
\section{Results}
\label{sec:snr}
\subsection{Influence of the choice of diffusion coefficient}
\label{sec:kappa}
The particle spectrum that results from the diffusive shock acceleration process in supernova remnants depends on a number of factors as described by the equations in the previous sections.
In this section we show how the assumption for the diffusion coefficient leaves its imprint on the slope of the spectrum.
\subsubsection{Constant diffusivity}
First, we consider the case of proton and electron acceleration with a constant diffusion coefficient at a spherical shock that expands into a constant density ISM.
We fix $\kappa$ to the value for which the particles satisfy criterion (\ref{eq:deltaxcriterion}) for the diffusion length.
In Fig.~\ref{fig:kconstFptime} we show the integrated distribution of protons and electrons as function of energy, represented as $p^2 \: F(p)$, for different times.
At late times, the solution well below the cut-off energy should approach the analytical solution (corresponding to a horizontal line in this representation). The slope remains steeper because spherical geometry is taken into account, which the analytical solution does not consider. As found in the previous section, the spectral slope of cosmic rays in spherical geometry approximates 2.15 rather than 2. The same slope is found here.
For protons, the typical acceleration time-scale now only depends on the shock velocity (Eq.~\ref{eq:accrate}).
For electrons, the maximum energy is limited by the balance between acceleration rate and loss rate.
The acceleration rate decreases with the shock velocity as $V_{\rm s}^2$ and, hence, with time. The loss rate increases with energy.
The effect is therefore strongest at late times, which can be seen in Fig.~\ref{fig:kconstFptime}, where the late-time curves have the maximum electron energy at a lower value than the early-time curves.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{kconstpFptimec}
\includegraphics[width=0.5\textwidth]{kconsteFptimec}
\caption[ ] {Spectrum as a function of time for relativistic protons (top) and electrons (bottom) for a SNR in a constant density ISM. The maximum energy of the electrons is limited by synchrotron losses for a field of $B=10\ \umu$G. The diffusion coefficient is a constant. The top curves are for an SNR age of $t\approx 1500$~yr. The subsequent lower and lighter coloured curves are for times $t\approx 300$, $150$, and $30$~yr.
\label{fig:kconstFptime}}
\end{figure}
\subsubsection{Bohm diffusion}
When Bohm diffusion with $\kappa \propto E$ is assumed, the acceleration time-scale increases, for a given shock velocity, with particle energy as $t_{\rm acc} \propto E$ (Eq.~\ref{eq:tacc}).
It therefore takes longer for the higher energy particles to assume the equilibrium power law slope predicted by steady-state calculations.
As can be seen in Fig.~\ref{fig:ismn10Fptime} this is visible in the spectrum as a smooth roll-over of the spectrum beyond a certain energy.
For sufficiently `old' electrons synchrotron losses lower the maximum energy and lead to a steeper cut-off than for protons.
The different curves show the spectrum for different ages of the SNR. While for protons the cut-off energy steadily increases with time, this is not the case for the electrons.
As long as the electron spectrum is determined by the time available for acceleration (the age of the system), it closely resembles the proton spectrum.
However, once the electrons reach an energy where the synchrotron loss time-scale becomes comparable to the acceleration time-scale and/or the age of the system,
the spectrum deviates more and more from the proton spectrum, resulting in a lower cut-off energy and a steeper roll-over.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{ismn10pFptimec}
\includegraphics[width=0.5\textwidth]{ismn10eFptimec}
\caption[ ] {Spectrum as a function of time for relativistic protons (top) and electrons (bottom) for a SNR in the ISM.
The maximum energy of the electrons is limited by synchrotron losses for a field of $B=10\ \umu$G. The approximation of Bohm diffusion is used for the scattering of the particles.
As can be seen from the abrupt cut-off at the low-energy end of the spectrum, the injection energy of the particles is around $1$~TeV.
\label{fig:ismn10Fptime}}
\end{figure}
The spectral slope at the lower-energy end of the electron and proton spectrum is the same and does not depend on the choice of diffusion coefficient.
It equals $q\approx 2.15$, the same value we have found in the test case with spherical geometry.
This is to be expected, since the injection rate in both the test-case and the supernova remnant ISM models is the same,
and adiabatic losses operate in the same manner.
In the case of a constant diffusion coefficient, $\kappa$ was set to the value equal to the
Bohm diffusion coefficient for the lowest energy particles (at $p=p_0$).
The acceleration time for the high-energy particles is therefore relatively short compared
to the $\kappa_{\rm B}$ models, moving the cut-off to higher energies.
\subsection{Choice of equation of state}
In Fig.~\ref{fig:eosFp49} we show the energy spectrum for protons and electrons accelerated at a shock in a gas with adiabatic index $\gamma=4/3$.
The shock propagates into a CSM with a $\rho \propto R^{-2}$ density profile.
We represent the spectrum as $p^{3/2} \: F(p)$ and compare the results to that of a $\gamma=5/3$ plasma model.
For $\gamma=4/3$ the expected spectral slope at a steady shock is $q=1.5$ (see Eq.~\ref{eq:q}).
We see that the simulations bear out this expectation.
The other model parameters are $B=20\ \umu$G, $t_{\rm max}\approx 1500$~yr, and a CSM background.
For a shock propagating into a constant-density ISM the same change in the spectral slope will result.
Varying $\gamma$ is a useful test of whether the slope changes according to expectations, but it also indicates in which direction the results change when
cosmic ray acceleration is efficient enough to change the equation of state.
In reality, the cosmic ray pressure that causes the softening of the equation of state is localized in the region around the shock, where most of the relativistic particles are located.
We do not include this position dependence and keep the adiabatic index fixed to the same value over the entire grid.
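The quoted slopes follow from the standard test-particle result for a strong shock, $q=(r+2)/(r-1)$ for $N(E)\propto E^{-q}$, with compression ratio $r=(\gamma+1)/(\gamma-1)$. A quick check:

```python
# Sketch: test-particle DSA slope q (for N(E) ~ E^-q) from the
# strong-shock compression ratio r = (gamma + 1) / (gamma - 1).
def compression(gamma):
    return (gamma + 1.0) / (gamma - 1.0)

def slope(gamma):
    r = compression(gamma)
    return (r + 2.0) / (r - 1.0)

print(f"gamma = 5/3: r = {compression(5/3):.1f}, q = {slope(5/3):.3f}")
print(f"gamma = 4/3: r = {compression(4/3):.1f}, q = {slope(4/3):.3f}")
```

This reproduces $q=2$ for $\gamma=5/3$ ($r=4$) and $q=1.5$ for $\gamma=4/3$ ($r=7$).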
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{eosFp49c}
\caption[ ] {The proton (solid line) and electron (dashed line) spectra as resulting from a SNR evolving into a CSM with a softer equation of state: the CSM$\kappa_B20$s model (coloured lines). The maximum energy of the electrons is limited by synchrotron losses because of the high magnetic field strength ($B=20\ \umu$G). The black lines indicate the spectra for the same model but with $\gamma=5/3$.
\label{fig:eosFp49}}
\end{figure}
In our approach the softer equation of state results in a higher compression at the shock and a slower blast wave, as can be seen in Fig.~\ref{fig:eosshocktime}.
The cut-off energy for protons therefore lies at a lower value than in the $\gamma=5/3$ model.
Since we assume a fairly high magnetic field both in this simulation and in the $\gamma = 5/3$ model used for comparison, the electron energy is limited by synchrotron losses.
Therefore, in both models the electrons have a similar value of $E_{\rm max}$.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{eosshocktime}
\caption[ ] {The radius of the blast wave (solid line) and the reverse shock (dashed line) for the CSM model with $\gamma=4/3$ (coloured lines), compared to the CSM $\gamma=5/3$ model (black). The higher compression ratio of the $\gamma=4/3$ model leads to a shorter distance between the blast wave and the reverse shock. The blast wave radius, and hence its velocity, is typically smaller, resulting in a lower $E_{\rm max}$ for the protons accelerated in this simulation.
\label{fig:eosshocktime}}
\end{figure}
\subsection{CSM versus ISM models}
\label{sec:csmism}
In this section we compare the results from the CSM and ISM models with $B=3\ \umu$G. Bohm diffusion is assumed in both cases.
As we already saw in the analytical calculations in Sect.~\ref{sec:time-scalesanalytical},
the maximum particle energy depends strongly on the shock velocity $V_{\rm s}$.
The density of the medium in which the SNR expands affects the shock velocity and therefore leads to differences in the cosmic-ray acceleration rate between the CSM and the ISM models.
A blast wave expanding into the CSM hits a relatively dense medium early on in its evolution. As a result, the initial velocity is smaller but the deceleration proceeds more slowly,
as the swept-up mass increases with radius as $M_{\rm sw} \propto R$, as opposed to $M_{\rm sw} \propto R^3$ in the ISM case.
Ultimately this results in a shock with a higher velocity at the end of the simulation ($1500$~yr).
In the ISM model the initial shock velocity is higher, but the deceleration is much more severe, and the maximum attainable particle energy is lower, at
least for the model parameters used here.
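The different deceleration histories follow directly from the swept-up mass scalings quoted above; a sketch with illustrative (not the simulations') density normalisations:

```python
# Sketch: swept-up mass vs radius for a uniform ISM and an r^-2 wind
# (CSM). The normalisations (n_ism, Mdot, v_w) are illustrative.
import numpy as np

pc, m_p, Msun = 3.086e18, 1.673e-24, 1.989e33   # cm, g, g

rho_ism = 0.5 * m_p                  # n = 0.5 cm^-3 (assumed)
Mdot = 1e-5 * Msun / 3.156e7         # 1e-5 Msun/yr wind (assumed), g/s
v_w = 1.0e6                          # 10 km/s wind speed (assumed)

def M_ism(R):                        # M ~ R^3
    return 4.0 / 3.0 * np.pi * rho_ism * R**3

def M_csm(R):                        # rho = Mdot/(4 pi v_w R^2)  =>  M ~ R
    return Mdot * R / v_w

for R_pc in (0.5, 1.0, 2.0, 4.0):
    R = R_pc * pc
    print(f"R = {R_pc:3.1f} pc:  M_ism = {M_ism(R)/Msun:7.3f} Msun,"
          f"  M_csm = {M_csm(R)/Msun:7.3f} Msun")
```

With these (assumed) normalisations the wind mass dominates at small radii, illustrating the earlier onset of deceleration in the CSM case.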
In Fig.~\ref{fig:rvshocktimemulti} we show the evolution of the radius of the forward and the reverse shock in the top panel,
and in the bottom panel the velocity of the blast wave for the SNR in the CSM and ISM models.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{rshocktimemulti}
\includegraphics[width=0.5\textwidth]{vshocktimemulti}
\caption[ ] {Top: evolution of the radius of the forward (solid line) and the reverse (dashed line) shock of the SNR in the CSM (black) and ISM (yellow) models. Bottom: the corresponding velocity of the blast wave and the reverse shock (relative to the upstream velocity).
\label{fig:rvshocktimemulti}}
\end{figure}
The injection rate of particles is taken to be proportional to the mass that is swept up per unit time
by the blast wave.
As a result the age distribution of cosmic rays in the CSM and ISM models differs.
This difference affects both the maximum energy and the shape of the overall spectrum.
This is shown in Fig.~\ref{fig:ismcsmFptime}. The proton/electron spectrum in the CSM model, represented as $p^2 \: F(p)$, is slightly concave.
This arises because of the higher fraction of `old' particles in the CSM model compared to the ISM model. On average, these older particles have a higher energy and
(for Bohm diffusion) a larger diffusivity. Low-energy particles on the other hand are more rapidly swept away from the shock. This simulation shows the differences
that arise in a time-dependent calculation with respect to the results at a steady (unchanging) shock.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{ismcsm10peFpc}
\caption[ ] {Spectra for protons (solid) and electrons (dashed) for CSM (black) and ISM (coloured) models with $B=10\ \umu$G.
\label{fig:ismcsmFptime}}
\end{figure}
In Fig.~\ref{fig:pmaxismcsm} we show the maximum energy of the particles as extracted from the simulations.
Since the slope of the overall spectrum in this case is about $q=2.15$, we define $E_{\rm max}$ as the e-folding energy,
where $p^{2.15} \: F(p)$ for the cumulative spectrum decreases to a value smaller than $1/e$ times the value at lower energies.
The higher average velocity of the blast wave when it evolves into a CSM increases $E_{\rm max}$ by a factor of $2-4$ relative to the ISM models.
Due to the low magnetic field strength, the synchrotron loss time for this model is significantly longer than the running time of the simulations ($\sim 4\times 10^4$~yr versus $1500$~yr).
Therefore there is no significant difference between the proton and the electron spectrum. In Fig.~\ref{fig:pmaxismcsm} we overplot the maximum energy as calculated analytically in Sect.~\ref{sec:method} for protons (because of the low magnetic field strength in this particular model, the value for electrons is about the same).
There is a clear difference between $E_{\rm max}$ derived from the analytical estimate and that from the simulations. For the CSM, the analytical model underestimates this value,
whereas for the ISM the analytical estimate is significantly larger than the value we derive from the simulations.
This conclusion holds both when we calculate $E_{\rm max}$ as the e-folding energy of the spectrum at the shock and when we use the spectrum of all particles in the SNR.
We attribute this to the different age distributions in the CSM and ISM models, as explained above.
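The e-folding definition of $E_{\rm max}$ used above can be sketched on a synthetic spectrum (the cut-off energy and the grid are arbitrary choices for illustration):

```python
# Sketch: extracting the e-folding energy from a synthetic q = 2.15
# power law with an exponential cut-off (cut-off energy assumed).
import numpy as np

q, E_cut = 2.15, 1.0e13                   # eV
E = np.logspace(9, 15, 601)               # eV
F = E**(-q) * np.exp(-E / E_cut)

compensated = E**q * F                    # flat below the cut-off
plateau = compensated[0]
# first energy where the compensated spectrum drops below plateau / e
E_fold = E[np.argmax(compensated < plateau / np.e)]
print(f"recovered E_max = {E_fold:.3e} eV (input cut-off {E_cut:.1e} eV)")
```

For a simple exponential cut-off the e-folding energy recovers the input cut-off energy to within the grid resolution.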
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{pmaxismcsm}
\caption[ ] {$E_{\rm max}$ for protons (solid) and electrons (dashed) for CSM (black) and ISM (coloured) models with $\kappa_B$ and $B=3\ \umu$G. The analytical solution for the CSM (ISM) model (without radiation losses) is plotted with dash-dot-dots in black (coloured).
\label{fig:pmaxismcsm}}
\end{figure}
\subsection{Results for different magnetic field strengths}
\label{sec:magneticfield}
In the previous section we showed results for a magnetic field strength of $3\ \umu$G.
While this value is too low to induce significant differences between proton and electron spectra in these models, differences arise for stronger magnetic fields.
The maximum proton energy increases as $E_{\rm max} \propto B$ in the radiation loss-free case that applies here. For electrons, $E_{\rm max}$ decreases roughly as $1/\sqrt{B}$ if synchrotron losses are important.
In Fig.~\ref{fig:ismcsmpmax} we compare the maximum particle energy for three different magnetic field strengths ($3$, $10$, and $20\ \umu$G) for the case of a CSM (dashed curves)
and the ISM model (solid curves). Bohm diffusion is assumed in both cases. For clarity, we plot the energy divided by the magnetic field strength to scale away the loss-free $E_{\rm max} \propto B$ behaviour.
From Eqs.~\ref{eq:Eemax} and \ref{eq:tsync} we find typical synchrotron loss time-scales of $7\times10^3$~yr for $B=10\ \umu$G, and $2\times10^3$~yr for $B=20\ \umu$G for the ISM models.
For the ISM model with $B=20\ \umu$G we see a significant difference between the maximum proton and electron energy. For the ISM models with lower magnetic field strengths, synchrotron losses are not very important for the energies reached in the simulations. For the CSM models, however,
the maximum cosmic ray energy at a given shock radius is higher due to the larger shock velocity. The higher energies decrease the synchrotron loss time-scales for electrons.
As can be seen in Fig.~\ref{fig:ismcsmpmax} (both for the $B=10\ \umu$G and the $B=20\ \umu$G case) the maximum electron energy is essentially determined by synchrotron losses for a
shock radius $R > 4$ pc.
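The quoted loss time-scales follow from $t_{\rm loss}=6\pi m_{\rm e}^2c^3/(\sigma_{\rm T}B^2E)$; the electron energy below is an assumed fiducial value, chosen so that the numbers roughly reproduce the values quoted above:

```python
# Sketch: synchrotron loss time t_loss = 6 pi m_e^2 c^3 / (sigma_T B^2 E)
# for the model field strengths; the electron energy is an assumed
# fiducial value. CGS units.
import numpy as np

m_e, c, sigma_T = 9.109e-28, 2.998e10, 6.652e-25
eV, yr = 1.602e-12, 3.156e7

def t_sync_yr(E_eV, B):
    return 6.0 * np.pi * m_e**2 * c**3 / (sigma_T * B**2 * E_eV * eV) / yr

E_eV = 2.0e13    # ~10^13.3 eV (assumed)
for B_uG in (3.0, 10.0, 20.0):
    print(f"B = {B_uG:4.1f} uG:  t_sync = {t_sync_yr(E_eV, B_uG*1e-6):.2e} yr")
```

The $t_{\rm loss}\propto 1/B^2$ scaling makes losses unimportant at $3\ \umu$G but decisive at $20\ \umu$G on a $\sim 10^3$~yr time-scale.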
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{ismmultipmax}
\includegraphics[width=0.5\textwidth]{csmmultipmax}
\caption[ ] {Maximum energy as a function of blast wave radius for relativistic electrons (dashed) and protons (solid). The top (bottom) panel shows the case for the SNR in an ISM (CSM), for values of the magnetic field of $B=3,10,20\ \umu$G, assuming Bohm diffusion. The dotted lines show the steady-state solution for electrons by \citet{2007ZirakashviliAharonian} for the different magnetic field strengths.
\label{fig:ismcsmpmax}}
\end{figure}
When electron acceleration operates in a regime where the synchrotron losses roughly balance the acceleration,
we can compare $E_{\rm max}$ to the exact asymptotic solutions for the cut-off region of the spectrum in the case of a steady, plane parallel shock,
as derived analytically by \citet{2007ZirakashviliAharonian}. In our notation:
\begin{eqnarray}
E_{\rm max}=\sqrt{\displaystyle \frac{9 \pi e}{\sigma_{\rm T} B}} \: \frac{m_{\rm e} c V_{\rm s}}{(q+2)}.
\end{eqnarray}
At early times, when $E_{\rm max}$ is age-limited, this solution is not valid.
Later (typically for a shock radius $R > 3-4$ pc) we find this steady-state result slightly underestimates $E_{\rm max}$ for electrons.
Asymptotically, our value for the electron $E_{\rm max}$ seems to converge to the steady state result.
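Evaluating this expression for the field strengths used in this paper, with an assumed fiducial shock velocity of $5000$ km s$^{-1}$:

```python
# Sketch: evaluating the steady-state loss-limited electron cut-off
# above for the model field strengths; V_s = 5000 km/s is an assumed
# fiducial value. CGS units.
import numpy as np

e_cgs, m_e, c = 4.803e-10, 9.109e-28, 2.998e10
sigma_T, eV = 6.652e-25, 1.602e-12

def E_max_loss(V_s, B, q=2.0):
    """sqrt(9 pi e / (sigma_T B)) * m_e c V_s / (q + 2), in erg."""
    return np.sqrt(9.0 * np.pi * e_cgs / (sigma_T * B)) * m_e * c * V_s / (q + 2.0)

for B_uG in (3.0, 10.0, 20.0):
    E = E_max_loss(5.0e8, B_uG * 1e-6) / eV
    print(f"B = {B_uG:4.1f} uG:  E_max,e = {E:.2e} eV")
```

The $\propto 1/\sqrt{B}$ scaling of the loss-limited electron energy is evident, and for these parameters the cut-offs land at a few times $10^{13}-10^{14}$~eV.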
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Nxu40csmn20p}
\includegraphics[width=0.5\textwidth]{Nxu40csmn20e}
\caption[ ] {Cosmic ray protons (top) and electrons (bottom) for the CSM$\kappa_B20$ model for a SNR age of $\sim 1300$~yr. The diagonally shaded area indicates the region $> 4.6$ diffusion lengths upstream of the shock, as calculated with the advection time-scale. The horizontally shaded area shows the region limited by $4.6$ diffusion lengths when the time-scale is determined by synchrotron losses. The dotted line shows the evolution of $p_{\rm max}$ as a function of radius and the thick black dot indicates $p_{\rm max}$ at the shock.
\label{fig:nxu_norev}}
\end{figure}
We illustrate the difference between age-limited and loss-limited cosmic ray acceleration in Fig.~\ref{fig:nxu_norev}.
There we show the cosmic-ray distribution in energy-radius phase space for a SNR with an age of $\sim 1300$~yr, using the CSM model with $B=20\ \umu$G.
The relativistic particles are accelerated at the forward shock, where the particles reach the highest energy.
The maximum energy of protons is limited by the age of the remnant, and therefore still increases at the end of the simulation.
Sufficiently far downstream of the shock reside the `older' cosmic rays that have been advected away from it.
The maximum energy of this older population is lower, as reflected in the distribution of particles.
The dotted line in the figure shows the cut-off energy as a function of radius,
and the thick black dot marks the intersection of the shock location with the cut-off energy in the energy-radius plane.
The majority of cosmic rays is located between the forward shock and the
contact discontinuity.
Upstream, cosmic rays diffuse ahead of the shock over a typical distance set by the diffusion length $L_{\rm d} \sim \kappa_{\rm B}(E)/V_{\rm s} \propto E$.
The diagonally shaded area is the region more than $4.6$ diffusion lengths ahead of the shock, beyond which less than 1 per cent of the particles should be
found, since $F(p)=F_0(p)\,{\rm e}^{-\Delta x/L_{\rm d}}$ and ${\rm e}^{-4.6}\approx 0.01$.
In this model the attainable electron energy is limited mostly by synchrotron losses, and considerably less than the proton energy at an age of $\sim 1300$~yr.
The high-energy electrons therefore diffuse over smaller distances compared to protons.
The horizontally shaded region in the electron plot shows the region more than $4.6$ loss-limited diffusion lengths ahead
of the shock ($L_{\rm loss,d} =\sqrt{2 \kappa t_{\rm loss}}$, with $t_{\rm loss}=6 \pi m_{\rm e}^2 c^3/(\sigma_{\rm T} B^2 E)$).
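The sizes of these upstream regions can be estimated directly from the Bohm diffusion coefficient; the shock velocity and field strength below are illustrative:

```python
# Sketch: upstream diffusion lengths for Bohm scattering, and the
# exp(-4.6) ~ 1 per cent level that defines the shaded regions.
# Shock velocity and field strength are illustrative.
import numpy as np

e_cgs, c, eV, pc = 4.803e-10, 2.998e10, 1.602e-12, 3.086e18
B, V_s = 20e-6, 5.0e8          # 20 uG; 5000 km/s (assumed)

def L_diff(E_eV):
    """Advective diffusion length kappa_B(E) / V_s, kappa_B = E c/(3 e B)."""
    return (E_eV * eV) * c / (3.0 * e_cgs * B) / V_s

for E_eV in (1e12, 1e13, 1e14):
    L = L_diff(E_eV)
    print(f"E = {E_eV:.0e} eV:  L_d = {L/pc:.4f} pc,  4.6 L_d = {4.6*L/pc:.4f} pc")

print(f"exp(-4.6) = {np.exp(-4.6):.4f}")   # ~1 per cent beyond 4.6 L_d
```

Even for the highest-energy particles the precursor remains a small fraction of a parsec, i.e.\ thin compared with the shock radius.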
In Fig.~\ref{fig:nxu_time} we follow the population of protons that were injected during the first 300 yr of the simulation, and track their distribution in the energy-radius plane
as the simulation progresses.
At the end of the simulation (at $t\approx 1500$~yr), they constitute about 10 per cent of the total proton population.
The maximum energy in the distribution of these protons turns out to be similar to that of the entire proton population.
This is due to the fact that for this particular model (ISM$\kappa_B$10p), the Sedov-Taylor time is roughly equal to the time at which we start tracking the
population of particles. As we discussed earlier, the acceleration efficiency is highest around the Sedov-Taylor transition,
after which the maximum energy of the particles only increases by a factor of a few.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Nxutimeismn10p}
\caption[ ] {
The evolution of the proton distribution in the energy-radius plane as a function of time for the ISM$\kappa_B10$ model.
The filled contours show the total particle distribution for times (top to bottom) $381$, $635$, $953$, and $1587$~yr.
The contour lines track the population of particles injected up to $t=381$~yr; after that time this early population
remains subject to diffusion and acceleration.
\label{fig:nxu_time}}
\end{figure}
We have determined that the overall spectrum is somewhat steeper than the canonical $q=2$ power law.
We attribute this to the losses that are not taken into account in the analytical calculations for planar, steady shocks.
It is therefore interesting to look at the spectrum at the shock, one diffusion length $L_{\rm d}(E_{\rm max})$ upstream and downstream,
and compare it to the spectrum of all particles in the SNR.
In Fig.~\ref{fig:spectimeloc} we show these spectra for the CSM$\kappa_B$10 model for electrons and protons.
It becomes apparent that the spectrum at the shock follows the analytically predicted power law slope quite closely.
Upstream of the shock only the most energetic particles can be found. Since we probe the region upstream at a diffusion length for $E_{\rm max}$,
this is where the peak of the upstream spectrum is found. At the same distance downstream, the slope of the spectrum is slightly steeper than at the shock.
The spectrum is loss-limited for the electrons, as is evident from the sharper cut-off.
In (semi-)analytical models, the spectrum at the shock in the presence of losses is often described by \citep[e.g.][]{2001MalkovDrury}:
\begin{eqnarray}
F(p) \propto p^{-q} \: {\rm e}^{-\left(p/p_{\rm max}\right)^\alpha},
\end{eqnarray}
with $\alpha=1$ for protons and $\alpha=2$ for electrons. We find that the cumulative particle spectra follow this
exponential cut-off prescription quite closely.
At the shock, the cut-off for protons is slightly sharper than usually assumed: we find $\alpha \approx 1.2-1.3$.
For electrons, the cut-off is less sharp, but still sharper than for protons: it closely follows $\alpha \approx 1.7$.
This depends on the extent to which the electron spectrum is terminated by synchrotron losses.
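The cut-off sharpness $\alpha$ can be recovered from a spectrum by noting that $\ln[-\ln(p^{q}F)]=\alpha\ln(p/p_{\rm max})$; a sketch on a synthetic spectrum (the true $\alpha$ is set by hand, no simulation data are used):

```python
# Sketch: recovering the cut-off sharpness alpha from a synthetic
# spectrum F(p) ~ p^-q exp(-(p/p_max)^alpha); true alpha set by hand.
import numpy as np

q, alpha_true, p_max = 2.0, 1.3, 1.0e14
p = np.logspace(11, 15, 400)
F = p**(-q) * np.exp(-(p / p_max)**alpha_true)

comp = p**q * F                        # = exp(-(p/p_max)^alpha)
sel = comp < 0.5                       # points well inside the cut-off
# ln(-ln comp) = alpha * (ln p - ln p_max): the slope gives alpha
slope_alpha, _ = np.polyfit(np.log(p[sel]), np.log(-np.log(comp[sel])), 1)
print(f"recovered alpha = {slope_alpha:.3f}")   # → 1.300
```

Applied to simulated spectra, the same linearisation gives the $\alpha\approx1.2-1.3$ (protons) and $\alpha\approx1.7$ (electrons) values quoted above.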
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{specloccsmn10pbw}
\includegraphics[width=0.5\textwidth]{specloccsmn10ebw}
\caption[ ] {Proton (top) and electron (bottom) spectra (CSM$\kappa_B10$ model) at the shock and at locations of one diffusion length
(for an energy $E_{\rm max}$) upstream and downstream of the shock, with $\Delta x_{\rm diff}=\sqrt{2\kappa t_{\rm adv}}$ and $t_{\rm adv}=\kappa/V_{\rm sh}^2$.
The thin solid line shows the cumulative spectrum.
\label{fig:spectimeloc}}
\end{figure}
\subsection{Re-acceleration at the reverse shock}
\label{sec:rev}
At the forward shock, the magnetic field is likely amplified by the Bell-Lucek mechanism.
Whether a seed magnetic field as weak as expected at the reverse shock of an expanding SNR would be sufficient to trigger magnetic field amplification,
and whether this amplification ultimately leads to a field strength high enough to confine particles at the reverse shock,
is still a matter of discussion \citep{2005Ellisonetal}.
Some studies require higher seed values for the field strength, while others observe magnetic field amplification from an almost vanishing field \citep{2008Changetal}.
We explore what happens when the magnetic field in the ejecta is strong enough to confine cosmic rays when Bohm diffusion is assumed. For a stronger magnetic field, the particles can undergo many shock crossings because of their smaller mean free path, and can effectively be re-accelerated at the reverse shock. A small mean free path also makes the expansion losses in the flow relatively less important. For simplicity, we consider the case where the magnetic field strength in the ejecta is equal to that in the CSM/ISM.
In our simulations we observe that very energetic cosmic rays diffuse far enough downstream into the remnant to reach the reverse shock.
Re-acceleration can then occur if the local magnetic field is sufficiently strong.
Fig.~\ref{fig:nxu_rev} shows that the distribution in the energy-radius plane now has two peaks in energy. The highest energy (around $10^{14}$ eV)
is still attained at the forward shock,
but acceleration at the reverse shock now causes an additional peak in the energy spectrum at its location ($R \sim 3$ pc for $t \sim 700$~yr).
\begin{figure}
\centering
\includegraphics[width=0.23\textwidth]{Nxu40ismr20e} \includegraphics[width=0.23\textwidth]{Nxu40ismr20p}
\includegraphics[width=0.23\textwidth]{Nxu40csmr20e} \includegraphics[width=0.23\textwidth]{Nxu40csmr20p}
\caption[ ] {
The distribution of protons (right) and electrons (left) for a SNR in an ISM (top) or CSM (bottom), when the magnetic field at the reverse shock is amplified
to the same level as at the forward shock. We assume a magnetic field strength of $20\ \umu$G, and the distribution is plotted for a remnant age of about $700$~yr.
The black dot marks $E_{\rm max}$ at the shock location.
\label{fig:nxu_rev}}
\end{figure}
If magnetic fields turn out to be indeed high enough to confine cosmic rays at the reverse shock, the picture sketched here is over-simplified.
Apart from the cosmic rays that have diffused far into the SNR from the forward shock, an additional cosmic ray component may arise if higher-Z elements from the supernova ejecta are
accelerated at the reverse shock. Figure~\ref{fig:rev_norevspec} shows the influence of re-acceleration on the shape of the spectrum.
The proton spectrum is slightly flatter, and extends to a slightly higher energy. The electron spectrum is hardly modified as synchrotron losses lead to a cut-off around
$10^{13.5}$ eV.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{csmrevnorevFptimec}
\caption[ ] {Spectra when the magnetic field at the reverse shock is equally strong as at the forward shock for the CSM$\kappa_B20$r model (coloured).
The solid (dashed) lines show the proton (electron) spectrum for a remnant age of $t=1587$~yr.
The black curves show the spectra when there is no re-acceleration at the reverse shock.
\label{fig:rev_norevspec}}
\end{figure}
Figure~\ref{fig:rev_norev} shows that when re-acceleration takes place, the maximum proton energy increases compared to the situation
without re-acceleration at the reverse shock. For electrons this initially is also the case, as long as the particle energy is limited by the time spent in the source.
However, in the loss-limited regime, the maximum electron energy is lower than in the case of weak fields near the reverse shock, because the larger volume with a
high magnetic field leads to additional synchrotron losses that are apparently not compensated by re-acceleration at the reverse shock.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{csmrevpmax}
\includegraphics[width=0.5\textwidth]{ismrevpmax}
\caption[ ] {Maximum energy as a function of time for both relativistic electrons (dashed) and protons (solid) in a SNR where the magnetic field at the reverse shock is $20\ \umu$G, as it is in the rest of the remnant (coloured), compared to the model with negligible magnetic field in the ejecta (black). As in the case of no re-acceleration, the energy the particles obtain in the CSM models (top) is higher than in the ISM models (bottom). While for protons the energy is higher in the `reacceleration' models, for the electrons this turns out mostly not to be the case. Since the added magnetic field induces synchrotron losses also downstream of the contact discontinuity, the maximum energy in these models in fact turns out to be lower when the SNR is in its loss-limited regime.
\label{fig:rev_norev}}
\end{figure}
\section{Discussion and Conclusions}
\label{sec:discussion}
In this paper, we have calculated diffusive shock acceleration through the first-order Fermi mechanism, using stochastic differential equations.
We treat the cosmic rays as test-particles and follow their acceleration and propagation along with the evolution of a supernova remnant.
We have extended the model as described by \citet{1992AchterbergKruells} \citep[and used by e.g.~][]{2004vanderSwaluwAchterberg}
to account for spherical geometry, something that is relevant when using this model to simulate cosmic ray acceleration in supernova remnants.
The model is set up generically so it can use local magnetic field strengths to calculate the diffusion coefficient of the test-particles.
However, in this paper we employ a constant magnetic field strength in the ISM/CSM. The ejecta magnetic field is parametrised in such a way that it either decays as expected for an expanding field that is frozen into the plasma, or is fixed at the same strength as used for the ISM/CSM. This is because the numerical method that we use is not applicable in
the presence of strong gradients in the diffusion coefficient, such as may result from gradients in the magnetic field.
The unique set-up of the calculation of the acceleration of the cosmic rays concurrent with the evolution of the SNR allows us to model the time- and location-dependent spectrum more accurately than other models. The disadvantage of our method is that it does not (yet) include feedback of the cosmic ray pressure onto the local plasma.
Our calculations show the following results.
\subsubsection*{Energy spectrum and spectral slope:}
With our method we can accurately model the spectrum of particles, where the slope is close to $q=2$ for an adiabatic index of $5/3$, and to $q=1.5$ for an adiabatic index of $4/3$.
The slope of the spectrum depends on the location: near the shock the slope of the spectrum is closest to the analytically predicted value for steady and planar shocks,
while the slope of the cumulative spectrum (all particles in the source) is steeper. The cumulative spectrum is steeper than that at the shock because it also contains particle populations advected downstream of the shock, which spent less time in the acceleration process and hence have lower cut-off energies.
We also find that in spherical geometry the overall spectrum is slightly steeper when compared to a simulation in slab geometry. The main reason for this is the inclusion of adiabatic expansion, which causes the acceleration process to be slightly less effective: particles lose a fraction of their energy in the downstream part of the shock crossing cycle, reducing the mean energy gain per cycle. The analytical solution for the steady state parallel geometry where adiabatic losses are excluded therefore does not strictly apply in spherical geometry.
The detailed shape of the spectrum also depends on the injection rate of particles as a function of time, which differs in the ISM and the CSM models that we employ.
\subsubsection*{Maximum particle energy:}
There are additional differences between the results of our approach and those obtained using steady-state analytical models.
The cut-off energy $E_{\rm max}$ is different from that obtained analytically from the balance between acceleration and losses.
For protons the $E_{\rm max}$ is determined by the age of the source. The CSM simulations show a higher $E_{\rm max}$ than the analytical estimate,
whereas for the ISM models the trend is the other way around.
We attribute this to the difference in cosmic ray age distribution, where, since we assume the injection rate to be proportional to the amount of swept-up mass,
the fraction of `older' particles, which are accelerated for a longer time, is relatively high in the CSM model and relatively low for the ISM model.
For Bohm diffusion, the high-energy end of the spectrum shows a distinct cut-off, either caused by the finite age of the SNR or, in the case of electrons,
by synchrotron losses.
\subsubsection*{Shape of the cut-off in the energy spectrum:}
The shape of the cut-off region for the cumulative spectrum follows quite nicely the quasi-exponential drop that is often assumed.
However, if one looks at the spectrum in the close vicinity of the shock, the proton spectrum falls off slightly sharper than for the overall proton spectrum,
while for electrons in the loss-limited regime the cut-off of the spectrum at the shock is more gradual than the cut-off in the overall spectrum.
\subsubsection*{CSM versus ISM:}
The strong dependence of the acceleration rate on the shock velocity causes the cosmic ray distribution to become sensitive to the surrounding medium.
The density profile of the environment into which the supernova explodes determines the shock velocity and its evolution.
The shock velocity (together with the magnetic field) therefore determines the acceleration rate and the maximum particle energy that can be attained.
Because in our models the average shock velocity is much higher for a SNR expanding into a CSM, and because the average cosmic ray age in the remnant is larger in the CSM case, $E_{\rm max}$ is larger in our CSM models than in the ISM models.
This suggests that
core-collapse SNe in dense environments, such as expected around a red supergiant, may be the most efficient particle accelerators and therefore the dominant
contributors to cosmic rays up to the ``knee''-energy. In the absence of a significant magnetic field in the ejecta,
the particles are mostly located between the blast wave and the contact discontinuity. The distance between those is also sensitive to the
velocity-evolution of the SNR and therefore the surrounding medium, and determines how long electrons are subjected to synchrotron losses.
\subsubsection*{Re-acceleration at the reverse shock:}
If the magnetic field is sufficiently amplified at the reverse shock, cosmic rays that are advected away from the blast wave can be re-accelerated at the reverse shock.
This has important consequences for the maximum energy and the distribution of the cosmic rays.
The maximum attainable energy for protons becomes significantly higher if re-acceleration at the reverse shock takes place,
whereas for electrons the additional synchrotron losses in the now strongly magnetised SNR interior can have the opposite effect.
In reality it is conceivable that due to localized magnetic field amplification, the net effect is a higher maximum energy for electrons, too.
\bigskip
Overall, we conclude that a time-dependent calculation of diffusive shock acceleration in SNRs shows significant differences compared with steady-state plane-parallel analytical models. The environment of the SNR has a large impact on the maximum attainable energy of the cosmic rays. The age distribution of the cosmic rays determines whether a time-dependent approach yields higher or lower maximum attainable energies.
\section*{Acknowledgements}
This study has been financially supported by J.V.'s Vidi grant from the Netherlands Organisation for Scientific Research (NWO). This work was sponsored by the Stichting Nationale Computerfaciliteiten (National Computing Facilities Foundation, NCF) for the use of supercomputer facilities, with financial support from the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (Netherlands Organization for Scientific Research, NWO). K.M.S. acknowledges the hospitality of the astronomy department at the University of Florida, where part of this work was performed.
\begin{appendix}
\section{SDEs in spherical geometry}
\label{app:spherical}
In the equation for the distribution function of relativistic particles we need to take account of geometrical effects in order to describe adiabatic losses in a spherical shock
geometry.
The advection-diffusion equation (Eq.~\ref{eq:advdiff}) can be written, in spherical coordinates $(R \: , \: \theta \: , \: \phi)$ with axial symmetry
($\partial/\partial \phi = 0$), as:
\begin{eqnarray}
\label{eq:ad2d}
\frac{\partial}{\partial t} F(R \: , \: \theta \: , \: p \: , \: t) &=& -S_R-S_\theta-S_p,
\end{eqnarray}
with
\begin{eqnarray}
S_R=\frac{1}{R^2}\frac{\partial}{\partial R}\left[R^2\left(V_R F-\kappa \frac{\partial F}{\partial R}\right)\right],
\end{eqnarray}
\begin{eqnarray}
S_\theta=\frac{1}{R \sin \theta}\frac{\partial}{\partial \theta}\left[\sin \theta \left(V_\theta F-\frac{\kappa}{R} \frac{\partial F}{\partial \theta}\right)\right],
\end{eqnarray}
and
\begin{eqnarray}
S_p=\frac{\partial}{\partial p}\left[\frac{d p}{dt} F\right],
\ {\rm
with} \quad
\frac{d p}{dt}= - \frac{p}{3} ({\bf \nabla \cdot V})
\end{eqnarray}
and
\begin{eqnarray}
{\bf \nabla \cdot V} = \frac{1}{R^2} \frac{\partial}{\partial R} \left( R^2 \: V_R \right) + \frac{1}{R \: \sin \theta} \frac{\partial}{\partial \theta} \left( \sin \theta \: V_\theta \right) \; .
\end{eqnarray}
We assume isotropic spatial diffusion: ${\bf \kappa}=\kappa {\bf I}$, with ${\bf I}$ the unit matrix in configuration space.
The magnetic field orientation is not taken into account for the diffusion rate, and diffusion in momentum space (Fermi-II acceleration) is neglected.
In order to be able to apply the It\^o method (Eq.~\ref{eq:ito}), we have to re-order the differential operators in $S_R$ and $S_\theta$,
such that they conform to the Fokker-Planck standard form (cf. Eq.~\ref{eq:fokkerplanck}).
We substitute $\tilde F=R^2\ F$ into Eq.~\ref{eq:ad2d} to get:
\begin{eqnarray}
\frac{\partial \tilde F}{\partial t} = -\tilde S_R-\tilde S_\theta-\tilde S_{p}.
\end{eqnarray}
The operator $\tilde S_R=R^2 S_R$ can be rewritten in the standard form as follows:
\begin{eqnarray}
\label{eq:Roperator}
\tilde S_R&&=\frac{\partial}{\partial R} \left[ V_R \tilde F- R^2 \kappa \frac{\partial}{\partial R} \left( \frac{\tilde F}{R^2} \right) \right] \nonumber \\
&& \\
&&=\frac{\partial}{\partial R}\left[\left(V_R +\frac{1}{R^2}\frac{\partial (R^2 \kappa )}{\partial R}\right)\tilde F- \frac{\partial}{\partial R}\left(\kappa \tilde F \right)\right].
\nonumber
\end{eqnarray}
This now conforms with the standard form with an effective radial velocity
\begin{eqnarray}
V_R^{\rm eff}=V_R+\frac{1}{R^2}\frac{\partial (R^2 \kappa)}{\partial R}.
\end{eqnarray}
This is the velocity that enters the equivalent SDE.
In a similar fashion, $\tilde S_\theta=R^2 S_\theta$ can be rewritten by changing the independent variable
to $\mu = \cos\theta$ (so that $\partial/\partial \theta=-\sin \theta \partial/\partial \mu$):
\begin{eqnarray}
\tilde S_\theta=&&\frac{1}{R \sin \theta}\frac{\partial}{\partial \theta}\left[\sin \theta \left(V_\theta \tilde{F}-\frac{\kappa}{R} \frac{\partial \tilde{F}}{\partial \theta}\right)\right] \nonumber \\
&& \nonumber \\
=&&-\frac{\partial}{\partial \mu}
\left[ \frac{V_\theta \sin \theta}{R} \tilde F+ \frac{\kappa \sin^2 \theta}{R^2}\frac{\partial \tilde F}{\partial \mu} \right] \nonumber \\
& & \\
=&&-\frac{\partial}{\partial \mu}
\left[ \left( \frac{V_\theta \sin \theta}{R} +\frac{2 \mu \kappa}{R^2} -\frac{(1-\mu^2)}{R^2}\frac{\partial \kappa}{\partial \mu} \right) \tilde F \right. \nonumber \\
&&+ \left. \frac{\partial}{\partial \mu}\left(\frac{(1-\mu^2)\kappa \tilde F}{R^2}\right) \right]. \nonumber
\end{eqnarray}
In short, this gives us:
\begin{eqnarray}
\tilde S_\theta=\frac{\partial}{\partial \mu}
\left[ V_\mu^{\rm eff} \tilde F - \frac{\partial}{\partial \mu}\left( D_\mu \tilde F\right) \right]
\end{eqnarray}
with
\begin{eqnarray}
V_\mu^{\rm eff} = -\frac{V_\theta \sin \theta}{R}+\frac{\partial}{\partial \mu}D_\mu \nonumber
\end{eqnarray}
and
\begin{eqnarray}
D_\mu = \left(\frac{1-\mu^2}{R^2}\right) \kappa = \frac{\kappa \sin^2 \theta}{R^2}.\nonumber
\end{eqnarray}
For $\tilde S_p$ we substitute $u=\ln (p/mc)$ and $dp=p\ du$ to obtain the equivalent equation for $u$:
\begin{eqnarray}
\tilde S_u=p \tilde{S}_{p} = -\frac{\partial}{\partial u}\left[\left(\frac{1}{3} {\bf \nabla \cdot V} +\lambda_s B^2 \sqrt{1+e^{2 u}}\right) \tilde F\right].
\end{eqnarray}
We have added on the right-hand side the effect of synchrotron losses, with $\lambda_s$ given by Eq.~\ref{eq:betas}.
As stated before: synchrotron losses may be neglected for protons but are important for electrons.
We now have the advection-diffusion equation in the standard form of the Fokker-Planck equation to which we can
apply the It\^o method to calculate the particle distribution function in a stochastic manner (Eq.~\ref{eq:ito}).
We solve the following equations numerically to update the position of the particles in phase space:
\begin{eqnarray}
{\rm d}u&=&-\frac{1}{3}({\bf \nabla \cdot V}) \: {\rm d}t - \lambda_s B^2 \sqrt{1+e^{2 u}} \: {\rm d}t \; , \nonumber \\
& & \nonumber \\
{\rm d}R&=&V_R^{\rm eff} \: {\rm d}t +\sqrt{2 \kappa {\rm d}t}\: \xi_R \nonumber \\
&=&\left( V_R+\frac{1}{R^2}\frac{\partial (R^2 \kappa)}{\partial R}\right) {\rm d}t + \sqrt{2 \kappa {\rm d}t} \: \xi_R \; , \\
& & \nonumber \\
{\rm d}\mu&=& V_\mu^{\rm eff} \: {\rm d}t +\sqrt{2 D_\mu {\rm d}t} \: \xi_\mu \nonumber \\
&=&\left( -\frac{V_\theta \sin \theta}{R}+\frac{\partial D_\mu}{\partial \mu}\right) {\rm d}t + \sqrt{2 D_\mu {\rm d}t}\: \xi_\mu. \nonumber
\end{eqnarray}
If we revert from $\mu$ to $\theta$ the last equation becomes:
\begin{eqnarray}
R \: {\rm d} \theta&=&\left( V_\theta+\frac{2 \kappa}{R \tan \theta}+\frac{1}{R}\frac{\partial \kappa}{\partial \theta}\right) {\rm d}t -\sqrt{2 \kappa {\rm d}t}\: \xi_\mu.
\end{eqnarray}
The stochastic terms $\xi_i$ should satisfy $\langle \xi_i \rangle=0$ and $\langle \xi_i \xi_j \rangle=\delta_{ij}$, where $i,j$ stand for $R$ and $\mu$.
Note that these equations solve for $\tilde F=R^2 F$ rather than for $F$.
In a slab geometry with flow velocity $V(x \: , \: t)$ along the shock normal, the corresponding equations simplify to Eq.~(\ref{1DslabSDE}) of the main paper.
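As an illustrative sketch (not the production code used in this work), the update equations above can be implemented as a single Euler-Maruyama step. The function below assumes a spatially constant isotropic $\kappa$, so that $R^{-2}\,\partial(R^2\kappa)/\partial R = 2\kappa/R$ and $\partial D_\mu/\partial\mu = -2\mu\kappa/R^2$; the names and signature are our own.

```python
import math, random

def sde_step(R, mu, u, dt, V_R, V_theta, divV, kappa, lam_sB2=0.0, rng=random):
    """One Euler-Maruyama update of the SDEs above for constant isotropic kappa.
    lam_sB2 = lambda_s * B^2 (synchrotron term; zero for protons)."""
    xi_R, xi_mu = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    sin_th = math.sqrt(max(0.0, 1.0 - mu * mu))
    D_mu = (1.0 - mu * mu) * kappa / R**2              # angular diffusion coefficient
    u_new = u - (divV / 3.0 + lam_sB2 * math.sqrt(1.0 + math.exp(2.0 * u))) * dt
    # constant kappa: effective radial drift V_R + 2 kappa / R, dD_mu/dmu = -2 mu kappa / R^2
    R_new = R + (V_R + 2.0 * kappa / R) * dt + math.sqrt(2.0 * kappa * dt) * xi_R
    mu_new = (mu + (-V_theta * sin_th / R - 2.0 * mu * kappa / R**2) * dt
              + math.sqrt(2.0 * D_mu * dt) * xi_mu)
    return R_new, min(1.0, max(-1.0, mu_new)), u_new
```

The stochastic terms are drawn as independent unit normals, matching $\langle \xi_i \rangle=0$ and $\langle \xi_i \xi_j \rangle=\delta_{ij}$.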
\end{appendix}
\section{Introduction}
The origin of Saturn's rings has been an ongoing puzzle since Galileo's
discovery of the system four hundred years ago.
Current theories that explain the rings' origin
are constrained by the mass, composition and age of the rings
\citep{harris84,dones91,charnoz09a,canup10,charnoz09b,cuzzi10} (see \citet{charnoz17} for a comprehensive review of the origin of planetary rings).
Observations of density waves in the A and B rings and comparisons of
simulated optical depths to Cassini observations for the denser B ring find a total ring mass
comparable to that of Saturn's moon Mimas \citep{esposito83,robbins10,hedman16}, with uncertainties due to the clumpiness of matter in the rings
leading to a range of $0.2-1.0 \times 10^{20}\;{\rm kg}$ (cf. $M_{Mimas}=3.75\times 10^{19}\;{\rm kg}$).
The rings are mainly composed of ice with an ice mass fraction greater
than 90\% \citep{cuzzi10}, a measurement that seems inconsistent with
the mass-weighted average value of 60\% ice/40\% rock
for Saturn's 5 inner moons \citep{matson09}. If
the rings and moons form coevally in Saturn's early history
and are born from a massive spreading ring
\citep{charnoz09b,crida12},
one needs to explain their different compositions.
One proposed solution to this problem
is a scenario in which a differentiated moon
with a mass comparable to Titan falls into Saturn at early times and
loses its icy mantle via tidal stripping to form a nearly pure ice
ring \citep{canup10}.
The final constraint on ring origin theories comes from the age of
the rings. Two arguments suggest that the rings may be as young as a few
hundred million years, causing problems for theories requiring a primordial origin.
The viscosity arising from dissipative particle collisions implies
that a ring will spread to conserve angular momentum while losing orbital
energy \citep{goldreichtremaine82}.
The viscous timescale is $t_{\nu} \sim \Delta R^2/\nu$ where $\Delta R$ is
the current radial width of the rings
and $\nu$ is the viscosity with the underlying assumption that the ring is born
as a radially thin object. This timescale
is a few hundred million years based on the current estimates of ring viscosity
for the A ring at least \citep{esposito86}.
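To make the timescale concrete, a back-of-the-envelope evaluation of $t_{\nu}$ for the A ring gives a few hundred Myr. The effective viscosity adopted below (of order $10^2\;{\rm cm^2/s}$) is an assumed illustrative value; the actual ring viscosity is uncertain.

```python
dR = 1.37e8 - 1.22e8     # A ring radial width [m]
nu = 2.0e-2              # assumed effective viscosity [m^2/s] (= 200 cm^2/s)
t_nu_yr = dR**2 / nu / 3.156e7   # viscous spreading time in years
```

This yields $t_{\nu} \sim 4\times 10^8$ yr, consistent with the few-hundred-Myr estimate quoted above.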
The more massive B ring could be much older based on similar arguments, and more detailed treatments of the sources of viscosity
in the denser B system may imply longer evolution and survival times, with the rings perhaps as old as the Solar System itself,
supporting primordial origin theories \citep{daisaka01,salmon10}.
Meteoroid bombardment of the rings can also change the
composition of the rings over time through contamination from
mixing with non-icy constituents. Modeling of the spectral evolution of the
rings under exposure to meteoroids also implies a young age of only
a few hundred million years, again confounding primordial origin theories
\citep{cuzzi98,estrada15}.
Scenarios where the rings form recently include
collisional disruption of a moon or moons within the Roche zone through impacts with infalling comets \citep{harris84},
tidal disruption of a massive icy comet on a heliocentric orbit \citep{dones91}
and the onset of orbital instabilities, driven by more
rapid tidal evolution, that lead to recent moon-moon collisions \citep{cuk16}.
The collisional disruption theory is problematic since it is hard to form or move moons within the Roche zone.
The tidal disruption theory also seems improbable given the large mass required for the interloping comet,
while the orbital instability theory still has difficulty creating rings with a high ice fraction.
In summary, the mass and composition of the rings create tension between theories
that propose a primordial and recent origin and there is no strong consensus
on a formation mechanism.
In this paper, we propose a modified version of the collisional disruption mechanism
of ring formation \citep{harris84} that is consistent with the mass and composition of the rings.
In this model, a comet on a heliocentric orbit with
sufficient mass and impact energy hits and disrupts a ring parent moon
and the resulting
debris spreads along the moon's orbit to form the ring system.\footnote{Perhaps a suitable name for this moon is Menoetius: a
Titan, son of Iapetus and brother of Prometheus, struck down by Zeus with a thunderbolt for his hubris and cast into
the underworld during the battle of the Titans (Hesiod, Theogony, 507).}
This mechanism is likely at play in the formation of the less massive rings
of Jupiter, Uranus and Neptune but faces difficulties in making Saturn's rings \citep{colwell94} because of their relatively large mass and
high ice fraction.
To work for Saturn, the mass and composition of the hypothetical ring parent moon must again be consistent
with Saturn's rings and moons (see Table \ref{table:moons}).
\begin{table*}
\centering
\begin{tabular}{rrrrr}
\hline
Object & a ($10^5 km$) & Mass ($10^{20}\;{\rm kg}$) & $\bar{\rho}$ (${\rm g/cm^3}$) & Ice Fraction\\
\hline
A Ring & 1.22-1.37 & 0.06 & - & 90-95\% \\
B Ring & 0.92-1.18 & 0.2-1.0 & - & 90-95\% \\
Janus & 1.51 & 0.020 & 0.61 & {\em porous} \\
Mimas & 1.85 & 0.375 & 1.15 & 72\% \\
Enceladus & 2.38 & 1.08 & 1.61 & 37\% \\
Tethys & 2.95 & 6.17 & 0.97 & 95\% \\
Dione & 3.77 & 10.95 & 1.48 & 45\% \\
Rhea & 5.27 & 23.07 & 1.23 & 64\% \\
\hline
Ring Parent & 1.40 & 0.750 & 1.05 & 84\% \\
\hline
\end{tabular}
\caption{The properties of Saturn's rings \citep{robbins10,cuzzi10,hedman16} and mid-sized moons \citep{matson09} compared
to a hypothetical ring parent moon.
Theories of a coeval origin of the rings and moons have difficulty in explaining a high
ice mass fraction compared to the moons. The properties of the proposed ring parent moon are included for comparison.
}
\label{table:moons}
\end{table*}
Since the mass of the rings is comparable to the mass of Saturn's moon Mimas \citep{robbins10},
the disrupted moon must be at least this large and one further needs to find a way to
create a ring made of nearly pure ice.
The ice fractions of Saturn's 5 mid-sized moons vary from
37\% (Enceladus) to 95\% (Tethys) with a mass-weighted
average value of about 60\%.
The disruption of a moon with the current average composition of the extant moons
would create a ring inconsistent with the observations
though perhaps a parent moon with the composition of Tethys might be adequate.
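The tabulated ice fractions follow from inverting the two-component density relation $1/\bar{\rho} = f_{ice}/\rho_{ice} + (1-f_{ice})/\rho_{rock}$. The sketch below is illustrative only; it adopts $\rho_{ice}=0.935\;{\rm g/cm^3}$ and $\rho_{rock}=2.8\;{\rm g/cm^3}$ (the component densities used later in our moon model) and reproduces the values in Table \ref{table:moons}.

```python
def ice_mass_fraction(rho_mean, rho_ice=0.935, rho_rock=2.8):
    """Invert 1/rho = f/rho_ice + (1 - f)/rho_rock for the ice mass fraction f."""
    return (1.0 / rho_mean - 1.0 / rho_rock) / (1.0 / rho_ice - 1.0 / rho_rock)
```

For example, Mimas's mean density of $1.15\;{\rm g/cm^3}$ gives an ice fraction of 72\%, and Enceladus's $1.61\;{\rm g/cm^3}$ gives 37\%.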
A further problem with this collisional scenario is the initial orbital radius of
a hypothetical ring parent moon.
To remain stable in the presence of Saturn's tidal forces,
the parent moon must be located outside of
the \citet{roche1847} radius $R_{Roche}=2.45 R_S (\rho_S/\rho_{moon})^{1/3}$
where $R_S$ and $\rho_S$ are the radius and mean density of Saturn and $\rho_{moon}$ is the moon's density.
However, the parent moon cannot be too far away from the Roche radius so
that some of the collisional debris can migrate inwards to form the
rings before reassembling into a new moon.
For a moon composed of pure ice, $R_{Roche}\approx 130000\;{\rm km}$, consistent with the
rings' current radial extent of $R<137000\;{\rm km}$, so the parent moon
should ideally be located near this radius.
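As a quick numerical check (illustrative, using Saturn's mean radius $R_S=58232$ km and mean density $\rho_S=0.687\;{\rm g/cm^3}$):

```python
R_S, rho_S = 58232.0, 0.687     # Saturn mean radius [km] and mean density [g/cm^3]

def roche_radius(rho_moon):     # fluid Roche limit as used in the text
    return 2.45 * R_S * (rho_S / rho_moon) ** (1.0 / 3.0)

r_ice = roche_radius(0.935)     # pure ice: ~129000 km, near the A ring's outer edge
```

The pure-ice value lands just inside the outer edge of the A ring at $137000\;{\rm km}$.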
The co-orbital moons Janus and Epimetheus are already near the current edge of the rings but their collisional disruption
would create a ring with a mass that is an order of magnitude too small.
Assuming the disrupting parent moon is massive enough,
then the debris will settle into a ring promptly and then spread radially
on the viscous timescale discussed above.
If the disruption occurred recently, say within the past 500 Myr, the primordial existence of a Mimas-sized moon located
just near the Roche radius might seem unlikely, since tidal friction \citep{goldreich66} would have caused a moon
of this mass to migrate away from this position to Mimas's current orbital radius within the age of the Solar System,
based on the conventional values of
Saturn's Love number $k_{2,S}$ and tidal dissipation function $Q_S$, which have a ratio $k_{2,S}/Q_S=2.0\times 10^{-5}$.
Recent analysis of a century of astrodynamical measurements combined with Cassini's observations
of Saturn's moons imply values $k_{2,S}/Q_S\approx 1.6\times 10^{-4}$ reducing
the tidal evolution timescale by an order of magnitude making it even more difficult
to consider a moon existing at this position for long \citep{lainey12,lainey17} and in fact
implying short timescales for catastrophic orbital instabilities for the entire inner moon system \citep{cuk16}.
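These migration timescales can be checked with the standard constant-$Q$ tidal torque integration, $a^{13/2}(t) = a_0^{13/2} + \frac{39}{2}(k_{2,S}/Q_S)(m/M_S)R_S^5\sqrt{GM_S}\,t$. The sketch below is an illustrative estimate under this standard formula, not the detailed calculations of the cited works.

```python
import math

GM_S, M_S = 3.793e16, 5.683e26      # Saturn: GM [m^3 s^-2] and mass [kg]
R_S, M_MIMAS = 5.8232e7, 3.75e19    # Saturn mean radius [m], Mimas mass [kg]

def migration_time(a0, a1, k2_over_Q, m=M_MIMAS):
    """Years for a moon of mass m to tidally migrate outward from a0 to a1 [m]."""
    C = 19.5 * k2_over_Q * (m / M_S) * R_S**5 * math.sqrt(GM_S)
    return (a1**6.5 - a0**6.5) / C / 3.156e7

t_conv = migration_time(1.40e8, 1.854e8, 2.0e-5)   # conventional k2/Q: ~4 Gyr
t_fast = migration_time(1.40e8, 1.854e8, 1.6e-4)   # Lainey et al. value: ~0.5 Gyr
```

For a Mimas-mass moon moving from $140000$ km to $185400$ km, the conventional ratio gives $\sim 4$ Gyr, comparable to the age of the Solar System, while the larger measured ratio shortens this by a factor of 8.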
However, \citet{fuller16} have argued that Saturn's effective $Q$ may be intermittently diminished
by resonant locking between dynamical tidal oscillations commensurate with orbital frequencies of satellites
while the long term average of $Q_S$ would be closer to the conventional value.
The implications of an evolving value of $Q_S$ have not been fully explored. For this paper, we will assume
the lower conventional value for the tidal ratio $k_{2,S}/Q_S$.
The only way to hold a moon in place near the Roche radius over the age of the solar system
prior to a recent disruption is via
$e$-type mean motion orbital resonant trapping with the other icy moons.
Another difficulty is that such destructive events may be rare, with the disruption
of a ring parent moon of the mass of Mimas happening once every 20 Gyr at the current
cometary flux rate, giving a probability of destruction
in the last 500 Myr of $\sim 2\%$ \citep{lissauer88,dones09,charnoz09b}.
The rarity of an event in itself does not necessarily rule out its occurrence if no alternative
more probable mechanisms can be found.
The scenario proposed in this paper overcomes the above difficulties
and provides a modified collisional scenario for a recent origin of Saturn's rings.
The strange fact that the mass of the rings is close to the
mass of Mimas may not be a coincidence but rather a clue pointing to a collisional disruption
origin.\footnote{It should be noted that primordial origin theories that posit more complex viscosity evolution
for the rings also result in a ring with a mass comparable to Mimas \citep{salmon10}.}
Now consider the collisional disruption of a ring parent moon by a passing comet
of roughly twice the mass of Mimas located
just outside the current edge of the rings near the Roche radius
at a radius $a=140000\;{\rm km}$.
After disruption, half of the collisional
debris migrates inwards within the Roche zone to become a new ring while
the remainder of the debris migrates outwards
along with any remnant of the collision and likely reassembles to become a new moon following the mechanisms
described by \citet{crida12}.
One might identify this new moon with Mimas itself.
In this picture, Mimas forms coevally with the rings and migrates to its
current position from the time the rings formed.
To support this hypothesis, imagine taking the total mass of
Saturn's A and B rings and Mimas itself and placing it within a single body: a hypothetical ring parent moon.
Using reasonable estimates of the ring surface density \citep[e.g.,][]{robbins10}
and assuming there is no significant mass loss from the Saturn system during the disruption,
one can calculate the total mass and angular
momentum of this combined system and compute the orbital radius $a_{parent}$ for this hypothetical ring parent moon:
\begin{equation}
a_{parent} = (G M_S)^{-1} \left( \frac{J_A + J_B + J_{MIMAS}}{M_A + M_B + M_{MIMAS}} \right) ^2
\label{eq-aparent}
\end{equation}
Assuming a uniform surface density $\Sigma$ over the radial range $R_0 < R < R_1$, the ring mass and angular momentum are:
\begin{eqnarray}
M_{ring} &=& \pi \Sigma (R_1^2 - R_0^2) \\
J_{ring} &=& \frac{4\pi}{5} (G M_S)^{1/2} \Sigma (R_1^{5/2} - R_0^{5/2})
\end{eqnarray}
For Saturn's $A$ and $B$ rings, the analyses of \citet{robbins10,hedman16} set limits
on the ring surface densities with $\Sigma_A=42-52\;{\rm g/cm^2}$ and
$\Sigma_B=40-480\;{\rm g/cm^2}$ though earlier estimates by \citet{esposito83} give $\Sigma_A=50\;{\rm g/cm^2}$ and
$\Sigma_B\approx 100\;{\rm g/cm^2}$.
With the A and B ring radial ranges
of $122000-137000$ km and $92000-118000$ km, one can compute a range of ring masses and angular momenta using the equations above.
Using Mimas's mass and orbital radius of $M_{MIMAS}=3.75\times 10^{19}\;{\rm kg}$ and $a_{MIMAS}=185400\;{\rm km}$, one can
find the orbital angular momentum of Mimas and
so determine a mass and orbital radius for a ring parent moon from equation \ref{eq-aparent}.
The result is $M_{parent}=1.3-3.4 M_{MIMAS}$ with $a_{parent} = 128000-167000\;{\rm km}$.
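This calculation can be sketched numerically. The evaluation below (our own illustration) adopts the representative surface densities $\Sigma_A=50\;{\rm g/cm^2}$ and $\Sigma_B=100\;{\rm g/cm^2}$ of \citet{esposito83}, which give a parent mass of $\sim 1.6\,M_{MIMAS}$ and $a_{parent}\approx 155000\;{\rm km}$, inside the quoted ranges.

```python
import math

GM_S = 3.793e16                 # Saturn's gravitational parameter [m^3 s^-2]

def ring_mass_J(Sigma, R0, R1):
    """Mass and angular momentum of a uniform ring (equations above); SI units."""
    M = math.pi * Sigma * (R1**2 - R0**2)
    J = 0.8 * math.pi * math.sqrt(GM_S) * Sigma * (R1**2.5 - R0**2.5)
    return M, J

M_A, J_A = ring_mass_J(500.0, 1.22e8, 1.37e8)    # Sigma_A = 50 g/cm^2
M_B, J_B = ring_mass_J(1000.0, 0.92e8, 1.18e8)   # Sigma_B = 100 g/cm^2
M_m, a_m = 3.75e19, 1.854e8                      # Mimas mass [kg] and orbit [m]
J_m = M_m * math.sqrt(GM_S * a_m)                # circular-orbit angular momentum
a_parent = ((J_A + J_B + J_m) / (M_A + M_B + M_m))**2 / GM_S
```

Varying $\Sigma_B$ over the full allowed range of $40-480\;{\rm g/cm^2}$ produces the wider intervals quoted in the text.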
Note that the mass of Enceladus is $M_{ENCELADUS}=2.9 M_{MIMAS}$,
so the proposed ring parent moon is consistent with the properties of the innermost moons.
This orbital radius straddles the Roche radius defining the edge of the rings so in principle an ice+rock moon could be located
in this position. This orbital radius is also
fortuitously very close to the current 4:2:1 mean motion resonant orbital radius
with Enceladus and Dione located at $a\approx 150000\;{\rm km}$.
Mean motion resonances between satellites are common in the solar system and multiple
resonances have a precedent in Jupiter's moons with Io, Europa and Ganymede \citep{goldreich65} arranged
in the same 4:2:1 hierarchy as originally observed by Laplace. A reasonable hypothesis is that a similar resonant
arrangement occurred with a ring parent moon, Enceladus and Dione when the inner icy moons of Saturn formed.
Such a system of icy moons may have formed by the viscous spreading of a primordial massive
ring composed of ice and rock
in the early phases of the formation of Saturn and its mid-sized moons \citep{canup10,charnoz10}.
Perhaps the last of this primordial ring material ended up in a moon just beyond the Roche radius.
If this hypothetical ring parent moon was trapped in resonance in the past,
its outward tidal migration would be slowed down significantly
and it could remain in this vulnerable position prior to a collision that would eventually recreate a ring in recent
times.
A final requirement for this scenario to work is a mechanism that leads to the
nearly pure icy composition
of Saturn's rings.
The hypothetical parent moon is likely differentiated from the strong tidal heating
expected from close proximity to Saturn and resonant interactions with Enceladus and Dione.
If we imagine a parent moon that is twice the mass of Mimas but with the same rock mass as Mimas, then its ice
fraction is 84\%, a plausible value ranking it between Mimas and Tethys.
One might expect a partial disruption to liberate the mantle to form ice rings while
leaving behind the rocky core that migrates outward within a newly forming Mimas
or small moon that subsequently interacts with Mimas.
In this paper, we explore this collisional scenario using simulations of hypervelocity cometary impacts with a
hypothetical differentiated ring parent moon located near Saturn's Roche radius. We first describe
methods based on a new collisional N-body rubble pile code using simulations with $10^7$ particles.
We present results that show how a ring of nearly pure ice can be created along with a remnant rocky
moon straddling the Roche radius. We argue that the subsequent evolution of this system can
plausibly transform into Saturn's ring system and a recently formed Mimas over a few hundred million years.
We also discuss the implications of this scenario for the rest of Saturn's inner satellite system.
\section{Methods}
\subsection{Collisional N-body methods}
We model the moon as system of gravitating hard colliding spheres using a new collisional N-body code based
upon a parallelized N-body treecode originally developed for galactic dynamics \citep{dubinski96}.
The code has been modified to follow collisions between particles treated as hard spheres and has similarities to other codes used
to model rubble piles \citep[e.g.,][]{richardson00}. The new code uses a parallelization
strategy that permits the simulation of N-body systems with as many as $10^8$ particles
on modern parallel supercomputers. We describe the code in detail in Appendix A.
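The core collision operation of such a code can be sketched as an impulse-based hard-sphere bounce between two particles (a simplified single-pair illustration, not the actual parallel implementation described in Appendix A):

```python
import math

def resolve_collision(x1, v1, m1, x2, v2, m2, eps=1.0):
    """Impulse-based hard-sphere bounce; eps = 1 is elastic, eps < 1 dissipative."""
    d = [b - a for a, b in zip(x1, x2)]
    r = math.sqrt(sum(c * c for c in d))
    n = [c / r for c in d]                                  # unit normal, 1 -> 2
    v_n = sum((a - b) * c for a, b, c in zip(v1, v2, n))    # approach speed along n
    if v_n <= 0.0:
        return list(v1), list(v2)                           # already separating
    J = (1.0 + eps) * v_n / (1.0 / m1 + 1.0 / m2)           # impulse magnitude
    v1p = [a - (J / m1) * c for a, c in zip(v1, n)]
    v2p = [b + (J / m2) * c for b, c in zip(v2, n)]
    return v1p, v2p
```

Momentum is conserved exactly, and the normal relative velocity is reversed and scaled by the coefficient of restitution $\epsilon$.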
\subsection{Initial conditions}
\subsubsection{Ring parent moon}
We model the ring parent moon as a rubble pile of $N$ frictionless hard spheres of equal radius.
Following similar methods applied to models of asteroids \citep[e.g.,][]{leinhardt00}, we set up systems of
hard spheres within an initially spherical distribution with small random velocities.
For sufficiently large numbers of particles, rubble pile simulations of impacts
behave like a fluid described by a hard-sphere equation of state (EOS) and can therefore be used as a
proxy for more computationally expensive hydrodynamics simulations using equations
of state for ice and rock. We justify this approximation for our problem in more detail below.
We describe the methods for setting up equilibrium gravitating systems
of colliding hard spheres based on a hard-sphere EOS in Appendix B.
The ring parent moon model consists of 10M hard colliding spheres
divided between a rocky core containing
500K particles and an icy mantle containing 9.5M particles.
The particles within the rocky core have masses three times those of the icy mantle
so that mean densities of rock and ice are $\rho_{rock}=2.8\;{\rm g/cm^3}$
and $\rho_{ice}=0.935\;{\rm g/cm^3}$ - the density of ice at the ambient temperature of $T=70 K$ near Saturn.
The total mass of the moon is $M_{parent} = 2 M_{MIMAS}$ with the core mass intentionally
set to be the same value as the implied mass of rock within Mimas based on its mean density of $\rho=1.15\;{\rm g/cm^3}$.
The radius of the ring parent moon is $R=259\;{\rm km}$ leading to a mean density of $\rho=1.05\;{\rm g/cm^3}$ and
ice fraction of 84\%. The density profile of our model is presented in Figure \ref{fig-den}.
The particle radii for this model are $R=1.05\;{\rm km}$.
The moon is placed in a circular orbit at $a=140000\;{\rm km}$ with
an orbital velocity $v_m=16.5\;{\rm km/s}$ and period $P=14.9$ hrs about Saturn.
For a moon this close to Saturn, one expects significant tidal flattening and the
moon should take the form of a Roche ellipsoid. In principle, the exact shape of a Roche ellipsoid
can be computed for homogeneous bodies. Since our model is differentiated, we find
the equilibrium configuration by setting the initially spherical model described above
in synchronous rotation on its orbit and letting it relax dynamically
within Saturn's tidal field over one orbit.
The final Roche ellipsoid has a surface axis ratio
a:b:c=1:0.69:0.65 and a major axis length of $a=320\;{\rm km}$
while the inner rocky core remains nearly spherical (Figure \ref{fig-moon}).
\subsubsection{Comet}
We similarly construct a model comet as a spherical rubble pile made of 5000 ice particles.
The comet's mass is $M_c=3.75\times 10^{16}\;{\rm kg}$ with a radius $R=21\;{\rm km}$ comparable to the
size and mass of Comet Hale-Bopp \citep{weissman07}.
The comet falls in on a heliocentric zero energy orbit
corresponding to an excess speed $v_\infty = 15\;{\rm km/s}$ in the vicinity of Saturn.
The comet speed before impact is
$v_{c}=(v_e^2 + v_\infty^2)^{1/2}\approx 27\;{\rm km/s}$ where $v_e$ is the escape velocity at the orbital radius $a$.
With the moon's orbital velocity of $v_m=16.5\;{\rm km/s}$, the relative velocity of impact $v_{rel}$ is
in the range $11 < v_{rel} < 44\;{\rm km/s}$. In this paper, we consider three collisional trajectories corresponding
to the case of a rear-end, side-on and head-on collisions with relative velocities of $v_{rel}=11$, 27 and 44 km/s to explore a
range of impactor energies. In all three cases, the impacts are direct with no inclination to the surface.
\begin{figure}
\includegraphics[width=3.5in]{figures/fig-den.pdf}
\caption{The density profile of the ring parent moon model. The moon has a rocky core with radius $R=100$ km and an icy mantle extending to $R=300$ km.
The dashed line describes the idealized density for a moon with a homogeneous rocky core and icy mantle. The solid black line
describes the density profile for the hydrostatic equilibrium calculated from the hard-sphere equation of state with velocity dispersions of
$\sigma = 6\;{\rm m/s}$ in the core and $\sigma=11\;{\rm m/s}$ in the mantle.
The red line describes the density profile after
the model has been dynamically evolved for 2.8 hours showing that the model is almost in exact equilibrium.
The discontinuity in density at $R=240\;{\rm km}$ is due to the phase change from
a hexagonally close packed crystal solid to a more random glass-like fluid phase.}
\label{fig-den}
\end{figure}
\section{Collisional disruption}
An impacting comet deposits kinetic energy explosively at the surface of the target
moon creating an inward propagating shockwave
that leads to complete or partial disruption of the moon \citep[e.g.,][]{holsapple93}.
Collisional disruption is quantified by the parameter $Q_D^*$,
the energy per unit mass of the target
required to unbind half of the total mass \citep{benz99}.
For isolated pure ice targets, simulations show the scaling relation
$Q_D^*=0.05 (R/{\rm m})^{1.188}\;{\rm J/kg}$ where $R$ is the radius
of the target in meters \citep{movshovitz15}.
The typical mass of an impacting comet $M_{c}$ with
relative velocity $v_{rel}$ that will disrupt the moon is found by equating the
disruption energy to the kinetic energy.
\begin{equation}
m_C = \frac{2 M_{moon} Q_D^*}{v_{rel}^2}.
\end{equation}
A typical relative velocity is 27 km/s as discussed above corresponding to a pure ice
comet impactor mass, $m_c \approx 4\times 10^{16}\;{\rm kg}$ with
radius $R\approx 22\;{\rm km}$. We use a comet mass comparable to this value in our simulations.
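Putting in numbers (an illustrative evaluation with the parent moon radius $R=259\;{\rm km}$ and $v_{rel}=27\;{\rm km/s}$; the values quoted in the text are rounded upward slightly):

```python
import math

def Q_D_star(R_m):                 # Movshovitz et al. (2015) scaling for ice targets
    return 0.05 * R_m ** 1.188     # J/kg, target radius R_m in metres

M_moon, v_rel = 7.5e19, 2.7e4      # 2 M_Mimas [kg]; typical relative speed [m/s]
m_c = 2.0 * M_moon * Q_D_star(2.59e5) / v_rel**2          # disrupting comet mass [kg]
R_c = (3.0 * m_c / (4.0 * math.pi * 935.0)) ** (1.0/3.0)  # pure-ice comet radius [m]
```

This gives $Q_D^* \approx 1.35\times 10^5\;{\rm J/kg}$ and a disrupting comet mass of a few $\times 10^{16}\;{\rm kg}$ with a radius of roughly 20 km.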
\begin{figure*}
\includegraphics[width=7.0in]{figures/fig-moon.pdf}
\caption{A cutaway rendering of the spherical differentiated ring parent moon and the tidally adjusted Roche ellipsoid model with the comet impactor to scale.
The moon model is composed of a rocky core containing $520K$ particles and icy mantle made of $9.5M$ particles. All particles have a radius $R=1.05$ km.
The rock particles (orange) are $3\times$ the mass of the ice particles (blue). The comet is made of 5000 ice particles. The moon has a total mass
$M_{moon}=2 M_{MIMAS}$ while the comet has a mass $M_{comet}=10^{-3} M_{MIMAS}$.}
\label{fig-moon}
\end{figure*}
The use of a collisional N-body code to follow this process needs to be justified since hypervelocity
disruptive impacts are complex and are usually treated with a hydrodynamics code using equations of state
for ice and rock that span behavior from the solid state through melting to vaporization \citep[e.g.,][]{benz99,kraus11}.
Collisional particle simulations do not
treat the detailed hydrodynamics and shock-heating
induced phase transitions at the impact point
but we argue here that their application is a reasonable approximation since
the bulk of the energy goes into simply unbinding and shattering the moon.
One expects the shockwave from the impact to lead
to the melting and vaporization of some
of the target mass for impact velocities
$> 10\;{\rm km/s}$ \citep{kraus11}.
For ice on ice hypervelocity collisions, the expected mass of
melt plus vapor is $\sim 100$ times
the mass of the impactor \citep{kraus11}.
The relatively small mass of the impactor in these simulations implies that around
5\% of the mass of the moon is melted or vaporized, so
while collisional N-body simulations miss these details of the complex
behavior at the point of impact, most of the impact energy goes into
disrupting the moon.
As the pressure wave from impact propagates away from the
impact point, the overpressure exceeds the local
hydrostatic pressure and causes the moon
to unbind. A solid icy moon with a rocky core therefore
shatters into fragments
and depending on the impact energy, the moon can completely or partially unbind.
For lower impact energies, a fraction of the mass
can remain bound after the impact.
The process of shattering is also not well-defined and depends on the structural
properties of the moon \citep{ballouz15} but in this rubble-pile approximation
the moon simply shatters into millions of equal mass fragments represented by
the particles.
These models use $10^{2-3}$ times as many particles as are typically used in rubble pile
collision and SPH simulations and so permit a small impactor-to-target mass ratio.
In hypervelocity collisions with rubble piles, it is necessary to use small enough timesteps
to accurately distribute the impact energy throughout the target body and so conserve energy
through the disruption process. Typically, the initial timestep needs
to be $\delta t \approx R_p/v$ where $R_p$ is the radius of the particles and $v$ is the relative
velocity at impact. For our largest simulations, $R_p=1.05$ km and $v < 44\;{\rm km/s}$ the initial $\delta t \sim 0.05 s$.
Once the impact energy has distributed throughout the target body, the particle collision rate drops off as the density
decreases and the timestep can be increased so speeding up the calculation.
To validate this approach, we carried
out a study of 20 isolated disruptive collisions of the putative ring parent moon using models composed of 1M particles with a range of impact
energies varying both the mass ratio and relative velocity to explore the disruption process for comparison to hydrodynamics
calculations.
During the impact and disruption, the particle collisions are elastic
and non-dissipative with a coefficient of restitution $\epsilon=1$.
We are in effect using the rubble-pile code as a proxy to model a nearly incompressible dense fluid following a hard-sphere EOS
rather than an actual collection of boulders bouncing off of each other at low relative velocities in a non-destructive
but dissipative manner. For $\epsilon \approx 0.9$, as is often used in simulations of asteroid collisions,
hypervelocity impacts dissipate energy at an artificially high rate.
We find in the end that the total energy of the system is conserved to within 1\% for our choice of timestep with elastic collisions.
In real impacts, the main mode of
dissipation is radiative which accounts for a small amount of energy loss and is usually ignored in hydrodynamic simulations.
In rubble-pile simulations,
the heating manifests itself as an increase in the velocity dispersion and decrease in particle
number density as the hard-sphere ``gas" absorbs the energy.
After the disruption and dispersal of the mass along a ring, we use a coefficient of restitution $\epsilon < 1$ to follow the
dissipative evolution of the ring over 30 orbits.
From the scaling relation of \citet{movshovitz15}, one expects $Q_D^*=1.36\times 10^5\;{\rm J/kg}$ for the ring parent moon model.
We used 5 relative velocities between 11-44 km/s in steps of $2^{1/2}\times$
and 4 comet masses ranging from $0.94-7.5 \times 10^{16}\;{\rm kg}$ in steps of $2\times$ corresponding to impact energies per unit mass ranging
from $Q_D=0.015-3.8\times 10^6\;{\rm J/kg}$. In each case, we followed the collision well past the impact and partial
unbinding of mass to the re-accretion of material into a bound body.
We estimated the mass of the bound remnant using a friends-of-friends particle grouping method
similar to algorithms used in finding halos in cosmological simulations.
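A minimal friends-of-friends grouping can be sketched with a union-find over particle pairs closer than a linking length (a brute-force $O(N^2)$ illustration of the idea; the production version used here relies on tree-based neighbor searches):

```python
def fof_groups(pos, link_len):
    """Friends-of-friends: particles closer than link_len share a group label."""
    n, parent = len(pos), list(range(len(pos)))
    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if sum((a - b)**2 for a, b in zip(pos[i], pos[j])) < link_len**2:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[rj] = ri
    return [find(i) for i in range(n)]
```

The bound remnant is then identified as the largest group, and its mass is the sum of the member particle masses.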
Figure \ref{fig-q} plots the collision energy per unit mass $Q_D$ versus the remnant
bound mass for the different comet:moon mass ratios. The plot shows a grouping of simulations
near the predicted value of $Q_D^*=1.36\times 10^5\;{\rm J/kg}$ and
a remnant mass of 50\% of the initial mass. The implication is that rubble pile simulations with hypervelocity
impacts reproduce the results of hydrodynamic simulations of disruption.
More complex hydrodynamic simulations should be done eventually to validate these findings
but this set of experiments gives us confidence that the use
of rubble piles in this context is a reasonable approximation.
\begin{figure}
\includegraphics[width=3.5in]{figures/fig-q.pdf}
\caption{The impact energy per unit mass versus the remnant bound mass for a range of hypervelocity impacts on the putative ring parent moon model. Both the
mass and relative velocity of the impactor are varied to explore the behavior of the collisional disruption process.
The groups of points at different impact energies reveal that
the controlling factor in collisional disruption is the energy per unit mass.
The impactor to target mass ratio is less important as long as the ratio is
small.
The plot also reveals that critical disruption energy $Q_D^*$ for the ring parent moon agrees with the scaling relation of \citet{movshovitz15}.}
\label{fig-q}
\end{figure}
\section{Ring parent moon disruption in orbit about Saturn}
We now proceed to simulate the disruption of the ring parent moon modeled as a Roche ellipsoid in orbit about Saturn.
Saturn's gravitational field is modeled as a background potential within the N-body code. The potential is
represented as a spherical harmonic expansion using
Saturn's mass and zonal harmonics from Table 3 of \citet{jacobson06} to quadrupole order.
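A minimal sketch of such a background potential, truncated at quadrupole ($J_2$) order as described above; the numerical values of $GM$, $R_S$ and $J_2$ are nominal approximations to the \citet{jacobson06} values, not taken from the text:

```python
import numpy as np

GM_S = 3.7931e16     # m^3/s^2, Saturn GM (nominal value, assumed here)
R_S  = 60330e3       # m, Saturn reference radius (assumed)
J2   = 1.6291e-2     # Saturn zonal harmonic J2 (assumed)

def potential(r, theta):
    """Axisymmetric potential to quadrupole order; theta is the colatitude."""
    x = np.cos(theta)
    P2 = 0.5 * (3.0 * x**2 - 1.0)   # Legendre polynomial P_2
    return -(GM_S / r) * (1.0 - J2 * (R_S / r)**2 * P2)

a = 140000e3  # m, orbital radius of the ring parent moon
# Oblateness deepens the potential in the equatorial plane and
# weakens it over the poles relative to a point mass.
print(potential(a, np.pi / 2), potential(a, 0.0), -GM_S / a)
```

Acceleration components for the N-body integration follow from the gradient of this expression.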
We examine three representative collisional scenarios where the comet impacts the
moon within the orbital plane from the rear-end (trailing face), side-on and
head-on (leading face) directions corresponding to different relative velocities and impact energies (Table \ref{table-sim}).
\begin{table*}
\centering
\begin{tabular}{rrrrr}
\hline
Collision & $v_{rel}$ (km/s) & $Q_D$ ($10^5$ J/kg) & $m_{remnant}/m_{moon}$ & Ring ice fraction \\
\hline
rear-end & 11.2 & 0.3 & 0.51 & 1.0 \\
side-on & 22.4 & 1.2 & 0.37 & 1.0 \\
head-on & 44.2 & 4.9 & 0.08 & 0.91 \\
\hline
\end{tabular}
\caption{The mass of the comet is $m_{comet} = 3.75\times 10^{16}\;{\rm kg}=5\times 10^{-4} m_{moon}$ and it is
100\% ice.
The remnant mass is measured at the end of the simulation. The rocky core remains intact for the rear-end
and side-on collisions while about half of the rocky core in the head-on collision ends up in the ring. One expects the
remnant to accrete about half of the mass of the ring as it evolves during the long term spreading of the ring. The optimal impact energy
for creating Mimas with the right mass is somewhere between $1.2-4.9\times 10^5\;{\rm J/kg}$.}
\label{table-sim}
\end{table*}
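The impact energies in Table \ref{table-sim} can be checked directly, assuming $Q_D$ here is the comet kinetic energy per unit target mass (an interpretation consistent with the tabulated values):

```python
# Sanity check of Table values: Q_D = (1/2) m_comet v_rel^2 / m_moon.
m_moon  = 7.5e19     # kg, ring parent moon
m_comet = 3.75e16    # kg, comet impactor (5e-4 m_moon)
for v_rel, q_table in [(11.2e3, 0.3e5), (22.4e3, 1.2e5), (44.2e3, 4.9e5)]:
    q = 0.5 * m_comet * v_rel**2 / m_moon
    print(f"v_rel = {v_rel/1e3:5.1f} km/s  Q_D = {q:.2e} J/kg  (table: {q_table:.1e})")
```

The three cases bracket the critical disruption energy $Q_D^*\approx 1.4\times 10^5\;{\rm J/kg}$ from below and above.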
Figure \ref{fig-saturnseq}
and the accompanying animation
present the results of the head-on simulation and illustrate the disruption after impact
and the ring formation process.
\begin{figure*}
\includegraphics[width=7.0in]{figures/saturnseq4}
\caption{The formation of a ring around Saturn from the disruption of a parent icy moon after a collision with a comet.
The moon is initially on a circular orbit with radius $a=140000\;{\rm km}$ with an orbital period of 14.9 hours.
The moon has a mass of $M=7.5\times 10^{19}\;{\rm kg}$ and is differentiated with an icy mantle containing 84\%
of the mass with a rocky core containing the remaining mass.
In this example, a comet of mass $M_{comet}=3.75\times 10^{16}\;{\rm kg}$ collides with the moon head-on
within the orbital plane with relative velocity
$v_{rel}=44\;{\rm km/s}$ and disrupts the moon leaving a remnant rocky core containing 8\% of the original mass.
The debris spreads along the orbital radius with inelastic collisions between particles leading to a thin ring in
Saturn's equatorial plane within a few weeks. At the end of the simulation, the ring is composed
of 91\% ice with a radial width of approximately 10,000 km. See accompanying animation at this link: https://youtu.be/UtVnftTd1tA}
\label{fig-saturnseq}
\end{figure*}
In all cases, the comet is obliterated on impact and deposits its energy in a point-like explosion
on the surface of the target moon. The impact ejecta escapes the moon and spreads out along the orbit (Figure \ref{fig-rings-phasemix}).
\begin{figure*}
\includegraphics[width=7.0in]{figures/fig-rings-phasemix-v3.pdf}
\caption{The distribution of collisional debris and the formation of a ring in the three scenarios where the comet impactor collides
with the ring parent moon from the rear, the side and head on. The comet orbit is coplanar with the target moon. The polar and edge-on views
of the debris are shown 7 days or approximately 11 orbits after the impact. The central circle or ellipse represents Saturn to scale.
In the rear-end collision, the impact ejecta trails the orbit and falls into Saturn and phase mixes.
Particles pile up on orbital turning points leading to grooves in the distribution. For the side-on collision, debris trails and leads the orbit leading to grooves inside and
outside of the orbit. In the head-on collision, the debris leads the orbit and the grooves are outside the orbit with a small fraction of the mass being ejected from the Saturn
system. Over time, the groove features are erased as the particles on eccentric orbits circularize after successive collisions with the forming ring.
The simulations end after about 20 days or 30 orbits and a broad featureless ring develops by this time. See the animation of this process at this link: https://youtu.be/t8GtvzyD0xw }
\label{fig-rings-phasemix}
\end{figure*}
In the head-on collision, the moon is almost completely disrupted
and the impact ejecta leads the orbit ending up on eccentric trajectories with apogees larger than the initial orbital
radius. After one orbit, only 8\% of the mass remains in a bound object composed of about half of the original rocky core
with all of the icy mantle liberated to form a ring (Fig. \ref{fig-remnant}).
In the side-on and rear-end collisions, a significant fraction of the icy mantle becomes unbound but the rocky core remains
intact with a remnant mass of 37\% and 51\% of the original moon.
The remnant bound mass depends on the
impact energy as shown in collisional disruption studies \citep{movshovitz15}, though for the lowest energy
impact, corresponding to the rear-end collision, significantly more mass is lost than in isolated simulations.
In the case of a moon near the Roche radius in orbit around Saturn,
more mass can become unbound for a given impact energy than for isolated collisions
after the collision because of the small size of the Hill sphere.
In these simulations, the Hill radius of the parent moon
is only $700\;{\rm km}$ -- just a few times the moon's radius.
Debris that would normally remain bound to a disrupted moon and re-accrete can instead
escape when it moves beyond the Hill radius and mix with the forming ring system.
Lower energy impacts can therefore still be effective at unbinding a large fraction of the mass and forming a ring.
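An order-of-magnitude check of the quoted Hill radius, using the standard formula $r_H = a\,(m/3M_{\rm Saturn})^{1/3}$ with nominal values assumed here; this gives roughly 500 km, the same order as the $\sim$700 km quoted (the small difference may reflect the simulation's own definition or parameters):

```python
# Hill radius of the ring parent moon at a = 140000 km (nominal values).
M_saturn = 5.683e26   # kg (assumed)
m_moon   = 7.5e19     # kg
a        = 140000e3   # m
r_hill = a * (m_moon / (3.0 * M_saturn))**(1.0 / 3.0)
print(f"r_H ~ {r_hill / 1e3:.0f} km")
```

Either way, $r_H$ is only a few moon radii, which is the point: debris easily leaves the Hill sphere.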
After the moon disrupts, the debris settles along the orbit to form an
initial toroidal distribution of particles. The evolution of this system is followed for 30 orbits.
Once the debris disperses on the orbit, we allow the collisions to be inelastic using
coefficient of restitution $\epsilon=0.9$ with a slight modification to permit elastic collisions
with $\epsilon=1.0$ for low relative velocities with $v<50\;{\rm m/s}$ to prevent runaway clumping
of particles (see Appendix A).
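The velocity-dependent restitution rule just described can be sketched as a simple piecewise function applied to the normal component of the relative velocity at each collision:

```python
def restitution(v_rel):
    """Normal coefficient of restitution: elastic below 50 m/s (to prevent
    runaway clumping), dissipative epsilon = 0.9 above, as described in the text."""
    return 1.0 if abs(v_rel) < 50.0 else 0.9

def bounce_normal_speed(v_n):
    # The normal relative velocity component is reversed and scaled by epsilon.
    return -restitution(v_n) * v_n

print(bounce_normal_speed(30.0))    # elastic regime: speed preserved
print(bounce_normal_speed(500.0))   # dissipative regime: 10% of speed lost
```

The tangential component is left unchanged in this hard-sphere picture, so all dissipation acts on the normal motion.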
\begin{figure*}
\includegraphics[width=7.0in]{figures/fig-remnant.png}
\caption{Zooming into the collisional remnant in the head-on collision
simulation one orbit after the impact.
Ice particles are rendered in white and
rock in red. The width of the images from left to right are 288000, 72000,
18000, and 4500 km. The red central spheroid of the remnant is
approximately 300 km across.
This simulation is the most
energetic of the three and almost disrupts the moon entirely.
All of the icy mantle of the ring parent moon is ejected
while leaving behind the rocky core. Rock debris leaks away from the Roche
lobes of the nearly disrupted moon and mixes with the forming ring
leading to an ice fraction of 91\% consistent with the inferred composition
of the ring.}
\label{fig-remnant}
\end{figure*}
To save computation time, the remnant moon is extracted after one orbit and replaced with a single sink particle
moving along the same trajectory. Particles that collide with this sink particle are removed from the simulation
and their mass and momentum are absorbed.
The initial velocity dispersion is $\sigma \sim 0.5\;{\rm km/s}$ so collisions of particles in the forming
rings are energetic enough to initiate a collisional cascade \citep{dohnanyi69} that will break up the particles
into smaller pieces as they dissipate energy inelastically.
The number of particles should grow and lead to an increase in the ring optical depth and collision rate until most of
the initial random kinetic energy from the disruption is dissipated in inelastic collisions.
It is not possible to simulate a collisional cascade with the existing rubble pile code so instead we
use an {\em ad hoc} method to approximate the growing optical depth $\tau = N \pi r_p^2$ with $N$ the
particle surface number density and $r_p$ the simulation particle radius.
We increase the optical depth of the forming rings artificially by
growing the cross sectional area (increasing the radius) of the particles
in two stages after the debris settles into a ring.
In this way, the simulation particles are transformed into
scaled particles, raising the ring optical depth and collision rate to $\tau \approx 1$
with the available fixed number of particles \citep{rein10}.
At each stage, the particle cross sectional areas are increased by $10\times$
leading to a similar increase in the optical depth and collision rate.
This greater collision rate emulates the effect of a collisional cascade and speeds up the
dissipation of energy and relaxation of the ring.
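The staging procedure amounts to the following bookkeeping for $\tau = N \pi r_p^2$: each stage multiplies the particle cross section by $10\times$, i.e. the radius by $\sqrt{10}$ (the surface density and radius values below are purely illustrative):

```python
import math

# tau = N * pi * r_p^2 with N the particle surface number density.
N_surf = 1.0e-8          # particles per m^2 (illustrative)
r_p    = 1.0e3           # m, scaled-particle radius (illustrative)
tau0 = N_surf * math.pi * r_p**2
tau = tau0
for stage in (1, 2):
    r_p *= math.sqrt(10.0)           # 10x cross section per stage
    tau = N_surf * math.pi * r_p**2
    print(f"stage {stage}: tau/tau0 = {tau / tau0:.1f}")
```

Two stages thus raise the optical depth and collision rate by a factor of 100 without adding particles.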
The dissipation of energy from
inelastic collisions between the particles causes the ring to spread radially
and become vertically thinner \citep{brahic77}.
The ring particle
orbits also undergo orbital phase mixing spreading azimuthally to
form a ring with an approximate Gaussian
surface density profile (Figure \ref{fig-rings-phasemix}).
Figure \ref{fig-ringprofiles} shows the surface density profile, scale height and velocity dispersion of the ring at three times for the
head-on collision and reveals the rapid transformation of the spreading debris cloud from a point-like
collisional disruption into a ring system.
The final state of these simulations can be thought of as the initial conditions for the subsequent
long term phase of viscous spreading of the rings into their current state as described by \citet{salmon10}.
\begin{figure*}
\includegraphics[width=7.0in]{figures/fig-2}
\caption{Structure and kinematics of the evolving ring for the three collision cases: rear-end, side-on and head-on
at three times during the evolution of the ring.
The structure is measured from the azimuthally averaged surface density and the thickness as measured by the RMS of
the particles vertical position with respect to the equatorial plane. The kinematics are described by the
azimuthally averaged velocity ellipsoid in different radial bins. Within a few weeks, the ring surface density
has a Gaussian shape with maximum centered on the initial orbital radius and a full width half maximum of
about 10000 km. Collisional dissipation leads to the circularization of the ring particle orbits and the decline
in velocity dispersion to $0.05-0.1\;{\rm km/s}$.
In each case, the disruption is not complete and a remnant remains in orbit embedded within the debris
- the radial position of the remnant's semi-major axis is shown by the black dot. For the rear-end and side-on
collisions the remnant contains nearly half of the original rocky core and the ring is pure ice. For the head-on
collision, the disruption is nearly complete but a remnant core containing half the rock remains with the rocky
debris (shown in red in the surface density plot) mixing with the ring.}
\label{fig-ringprofiles}
\end{figure*}
The three simulations span a range of impact energies and each forms a ring system with a high ice fraction (see Table \ref{table-sim}).
For lower impact energies, a significant bound mass made predominantly of the original rocky core can survive
while the debris is composed of the icy mantle. This is a simple way to explain the
icy purity of the rings though it requires an initial moon that is differentiated with a relatively high ice fraction
albeit in the expected range for Saturn's moons. A more thorough study involving moons with different ice
fractions and cometary impact energies can set stronger constraints on the range of plausible scenarios.
However, the 3 simulations presented here demonstrate that it is relatively easy to find simulation parameters
that preserve a rocky core while liberating an icy mantle.
The remnant also acts as a new seed for the re-accretion of debris to form a new moon, which one might expect to clear
out the debris within its orbital radius on a timescale as short as $10^3$ years \citep{crida12}.
The newly formed Mimas will then
experience resonant interactions with the particles in the spreading ring. The resulting torques lead to outward
migration with the orbital radius $a$ reaching its current value with a timescale:
\begin{equation}
\tau_{migration} = \left ( \frac{1}{a}\frac{da}{dt} \right )^{-1} = 0.60 \frac{M_{Saturn}^2}{a^2 \Sigma \Omega M_{Mimas}}\left |\frac{a-r}{a}\right |^3
\end{equation}
where $\Sigma$ is the surface density of the ring, $\Omega$ is the orbital frequency and $r$ is the radius of the
outer edge of the ring \citep{goldreichtremaine82}.
From these simulations, the initial central surface density after settling down into a thin ring
is $\Sigma \approx 5000\;{\rm kg/m^2}$. With the ring's outer edge at $r=140000\;{\rm km}$ and the current
radius of Mimas' orbit at $a=185400\;{\rm km}$, the migration
timescale is $\tau_{migration} = 120\;{\rm Myrs}$. Since the masses of the ring and Mimas are comparable, the
outward migration of Mimas will result in a back reaction from the ring which will cause it to spread inward more rapidly
than expected from collisional viscous effects alone and its mean surface density will decline as the ring spreads
to the current estimated value of $\Sigma = 1000\;{\rm kg/m^2}$ for the B ring.
This sets a more conservative timescale of a few times $\tau_{migration}$ because of the inverse dependence on $\Sigma$.
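The migration timescale quoted above can be reproduced numerically from the torque formula; evaluating $\Omega$ at the ring's outer edge (an assumption here, since the text does not say where $\Omega$ is taken) recovers the $\sim$120 Myr figure with the stated parameters:

```python
import math

G        = 6.674e-11
M_saturn = 5.683e26      # kg (nominal)
M_mimas  = 3.75e19       # kg
a        = 185400e3      # m, current orbital radius of Mimas
r        = 140000e3      # m, outer edge of the ring
sigma    = 5000.0        # kg/m^2, initial central surface density
omega    = math.sqrt(G * M_saturn / r**3)   # orbital frequency at the ring edge
tau = 0.60 * M_saturn**2 / (a**2 * sigma * omega * M_mimas) * abs((a - r) / a)**3
print(f"tau_migration ~ {tau / 3.156e13:.0f} Myr")
```

Since $\tau_{migration} \propto \Sigma^{-1}$, letting the surface density decline toward the B-ring value stretches this to a few times longer, as noted above.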
Determining the more detailed dynamical evolution of this system requires more work
but the timescale for moving Mimas out from the edge of the rings to its current position is
a few hundred million years consistent with the assumption of a recent formation time for the rings.
\section{Discussion}
The mechanism responsible for the creation of Saturn's rings is intertwined with the origin and subsequent dynamical evolution
of Saturn's icy mid-sized moons. In this section, we discuss some of the implications for the Saturn system
from the proposed collisional disruption mechanism.
\subsection{The age of Mimas}
The scenario proposed in this paper suggests a coeval origin of Saturn's rings and Mimas within the past few hundred million years.
This is contrary to the conventional view that Mimas formed primordially with Saturn and the other inner mid-sized icy moons more than 4 billion years ago.
The heavily cratered surface of Mimas is consistent with a very old age based on measurements of the size distribution of craters if they originate
from objects on heliocentric orbits \citep{zahnle03}.
However, measurements of the libration of Mimas by the Cassini images \citep{tajeddine14} are at odds with a primordial origin.
If Mimas formed primordially, one expects it to be either a homogeneous or differentiated body in hydrostatic equilibrium but
direct measurement of libration show anomalies inconsistent with these states.
Two explanations for the anomaly are 1) an internal ocean or 2)
a non-hydrostatic ellipsoidal rocky core. We have shown that an icy ring can form from the partial disruption of a differentiated ring parent moon leaving
behind a rocky core that subsequently re-accretes to become Mimas while being pushed out to its current position through a combination of resonant
interactions with the ring and tidal dissipation. It seems plausible that the disturbed post-impact remnant rocky core would not have time to relax
to hydrostatic equilibrium, supporting this hypothesis as the origin of the anomalous libration. Recent models of the thermal, structural and orbital
evolution of Mimas are also consistent with a late, layered formation scenario similar to the one proposed here \citep{neveu17}.
\subsection{Mean motion resonances and the heating of Enceladus}
The ring forming scenario has a further implication that may help in understanding another puzzle in Saturn's system of moons:
the heat source of Enceladus' endogenic activity. \citet{spencer06} have measured the current heat output from Enceladus as $5.8\pm1.9\;{\rm GW}$ which is
much larger than the heat input expected from either radiogenic sources in the rock or tidal heating from the current MMR with Dione.
If a ring parent moon is in MMR
with Dione and Enceladus prior to disruption, one expects the transfer of angular momentum outward from tidal friction
to excite a larger eccentricity in the orbit of Enceladus which in turn increases the rate of tidal heating \citep{squyres83,peale99}.
It has been argued before that
the outward transfer of angular momentum from a possible past orbital resonance with Janus may have pumped up the
eccentricity of Enceladus thus providing a heat source \citep{lissauer84}. The mass of Janus proved to be too small
for this heating mechanism to work but the ring parent moon proposed here may have enough mass to increase
the tidal heating rate to values significantly above the current rate.
To illustrate this, we assume that the ring parent moon, Enceladus and Dione are in MMR prior
to the comet collision with orbital frequencies in the ratio of 4:2:1.
While in resonance, tidal friction will cause the moons to migrate outwards in tandem while maintaining the ratio of orbital frequencies and semi-major axes.
For 3 moons in MMR, the time for the inner most moon to migrate from semi-major axis $a_i$ to $a_f$ is given by:
\begin{equation}
t = t_{migrate} \left [ 1 - \left ( \frac{a_i}{a_f} \right )^{13/2} \right ]
\end{equation}
where the time constant is:
\begin{equation}
t_{migrate} = \left [ \frac{39}{2} \frac{m_1}{M_S} \left ( \frac{R_S}{a_1} \right )^5 \frac{k_{2,S}}{Q_S} n_1 \eta\right ]^{-1}
\end{equation}
where
\begin{equation}
\eta = \frac{1 + (m_2/m_1)^2 (a_2/a_1)^{-6} + (m_3/m_1)^2 (a_3/a_1)^{-6}}{1 + (m_2/m_1)(a_2/a_1)^{1/2} + (m_3/m_1)(a_3/a_1)^{1/2}}
\end{equation}
and $m_1$, $m_2$, $m_3$ are the masses of the moons, $a_1$, $a_2$, and $a_3$ are semi-major axes of the moons' orbits,
$n_1$ is the orbital frequency of the inner most moon, $R_S$ and
$M_S$ are the radius and mass of Saturn and $k_{2,S}$ and $Q_S$ are Saturn's Love number and dissipation
function \citep[e.g.,][]{murray00}.
The conventional value for Saturn's dissipation term is $k_{2,S}/Q_S=2\times 10^{-5}$ calculated under the assumption that Mimas
migrates from the synchronous radius to its current radius over 4.6 Gyr (the age of the solar system) by tidal evolution \citep{goldreich66}.
With this value, the time for the ring parent moon considered here in MMR to migrate from an orbital radius of $a=130000$ km to 150000
km is 4.5 Gyr showing that it is possible to trap such a moon near the Roche radius for the age of the solar system.
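The migration time can be evaluated directly from the three equations above. The satellite masses below are nominal values assumed here (the text does not list them), and the result is sensitive to those choices; with these inputs the timescale comes out at the Gyr level, broadly consistent with the 4.5 Gyr quoted:

```python
import math

G   = 6.674e-11
M_S = 5.683e26           # kg, Saturn (nominal)
R_S = 60330e3            # m (nominal)
k2_over_Q = 2.0e-5       # Saturn's k_{2,S}/Q_S from the text

# Nominal masses (assumed): ring parent moon, Enceladus, Dione.
m1, m2, m3 = 7.5e19, 1.08e20, 1.095e21
a1 = 130000e3
a2_a1, a3_a1 = 2.0**(2.0 / 3.0), 4.0**(2.0 / 3.0)   # 4:2:1 frequency ratios

eta = (1.0 + (m2 / m1)**2 * a2_a1**-6 + (m3 / m1)**2 * a3_a1**-6) / \
      (1.0 + (m2 / m1) * a2_a1**0.5 + (m3 / m1) * a3_a1**0.5)
n1 = math.sqrt(G * M_S / a1**3)
t_migrate = 1.0 / (19.5 * (m1 / M_S) * (R_S / a1)**5 * k2_over_Q * n1 * eta)
t = t_migrate * (1.0 - (130000.0 / 150000.0)**6.5)
print(f"t ~ {t / 3.156e16:.1f} Gyr")
```

The key qualitative point survives any reasonable parameter choice: the resonant lock slows the parent moon's tidal migration by the large Dione term in $\eta$, trapping it near the Roche radius for Gyr timescales.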
Assuming nearly circular orbits and conservation of energy and angular momentum,
one can also calculate the net heat flow into the moons from orbital binding
energy as:
\begin{equation}
H = \sum_i n_i T_i + \frac{2\sum_i T_i \times \sum_i E_i}{\sum_i L_i}
\label{eq-H}
\end{equation}
where $n_i$ are the satellite orbital frequencies, $T_i$ are the torques on the satellites due to tidal evolution, $E_i$ and $L_i$ are the satellite orbital
binding energies and angular momenta \citep{lissauer84,meyer07}.
The torque from tidal evolution is given by:
\begin{equation}
T = \frac{3}{2} \frac{G M_m^2 R_S^5}{a^6} \frac{k_{2S}}{Q_S}
\label{eq-torque}
\end{equation}
where $M_m$ is the mass of the moon, $a$ is the orbital radius, $R_S$ is the radius of Saturn and $k_{2S}/Q_S$ is Saturn's tidal dissipation term.
A system of satellites in MMR will excite orbital eccentricities in the moons with equilibrium values that
balance the heat gain from the tidal evolution of the orbits and the heat loss from internal dissipation
in the moons caused by the varying tidal field on the
eccentric orbit.
The dissipation rate for a synchronously rotating moon is:
\begin{equation}
\frac{dE}{dt}=\frac{21}{2}\frac{k_{2,m}}{Q_m} \frac{G M_S^2 n R_m^5}{a^6}e^2
\label{eq-peale}
\end{equation}
where properties of the moon are denoted with the subscript $m$ and $M_S$ is the mass of Saturn, $a$ is the orbital radius of the moon and $e$ is the
eccentricity \citep{peale99}.
The satellite Love number can be estimated by:
\begin{equation}
k_{2,m} = \frac{3/2}{1 + 19\mu/(2\rho g R)}
\label{eq-love}
\end{equation}
with rigidity $\mu = 4\times 10^9\;{\rm N m^{-2}}$ and $\rho$ is the density, $g$ is the surface acceleration and $R$ is the radius with the
the satellite dissipation term in the range $Q_m=20-100$ \citep{meyer07}.
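Equations \ref{eq-peale} and \ref{eq-love} can be combined into a short numerical sketch. The Enceladus radius, mass and density below are nominal assumed values, and the eccentricity $e=0.08$ is the equilibrium value found in the integrations later in this section; the result lands at tens of GW, comparable to the $\approx 64$ GW reported there (the exact number is sensitive to the adopted density and orbital radius):

```python
import math

G   = 6.674e-11
M_S = 5.683e26   # kg, Saturn (nominal)

def love_number(mu, rho, g, R):
    # k_{2,m} for a homogeneous satellite (equation for the Love number above).
    return 1.5 / (1.0 + 19.0 * mu / (2.0 * rho * g * R))

def tidal_heating(R_m, M_m, rho, a, e, Q_m=20.0, mu=4.0e9):
    g  = G * M_m / R_m**2                 # surface gravity
    k2 = love_number(mu, rho, g, R_m)
    n  = math.sqrt(G * M_S / a**3)        # orbital frequency
    return 10.5 * (k2 / Q_m) * G * M_S**2 * n * R_m**5 / a**6 * e**2

# Enceladus (R = 252 km, M = 1.08e20 kg, rho = 1.61 g/cm^3 -- assumed values)
# at the 2:1 resonance with a parent moon at 135000 km, with e = 0.08.
P = tidal_heating(252e3, 1.08e20, 1610.0, 135000e3 * 2.0**(2.0 / 3.0), 0.08)
print(f"~{P / 1e9:.0f} GW")
```

The steep $e^2$ and $a^{-6}$ dependences are what make the resonance-excited state so much hotter than the present-day configuration.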
For Enceladus and Dione in a 2:1 resonance, the above analysis can be used to estimate a heating rate of 1.1 GW \citep{meyer07} which is much less than the
measured heat output of Enceladus \citep{spencer06}. If a ring parent moon existed in the past, this heating rate would be significantly larger.
Let us consider the parent moon twice the mass of Mimas $M=7.5\times 10^{19}\;{\rm kg}$ used in our simulation that begins
at an orbital radius of $a=135000$ km just beyond the Roche radius with Enceladus and Dione in MMR with orbital frequencies in the ratio 4:2:1.
Using the above equations with $Q_m=20$ for the moons, one finds a heat flow from equation \ref{eq-H} into the 3 moons of $H=107\;{\rm GW}$. This is 2
orders of magnitude larger than the current heating rate estimated above.
Prior to the ring parent moon's destruction, this higher heating rate on Enceladus implies
that its icy fraction may have been completely molten creating a smooth crater-free surface \citep{smith82}.
After the disruption of the parent moon and the formation of
the ring, the eccentricity of
Enceladus would decay to its current value with a time constant of order $10^8$ years \citep{meyer07}
with the heating rate falling off and Enceladus refreezing.
It has been argued that a previous molten state driven by a more eccentric orbit in the past
is needed to explain the observed heat output
under the more modest tidal heating rate existing today \citep{ojakangas86}.
To further understand this scenario, we carried out orbital integrations of an ideal system containing
the ring parent moon, Enceladus and Dione in MMR to understand both the long term stability of this system
and heating rates. We modified the Mercury N-body code \citep{chambers99}
to include velocity dependent fictitious forces to mimic tidal evolution for the moons \citep[e.g.,][]{lee02}. Torques from
tidal evolution and internal dissipation are included allowing an exploration of the evolution of the semi-major axis, eccentricity and tidal heating of the
satellites. We compute the satellite Love numbers from equation \ref{eq-love} and assume $Q_m=20$ for all of the moons.
For Saturn, we use $k_{2,S}/Q_S=2\times 10^{-5}$ for the
tidal dissipation term. Since the changes from tidal evolution are very small compared to the orbital period, we speed up the calculations by increasing
the $Q$ factors by $1000\times$ and rescaling the timescale in the plots \citep[e.g.,][]{meyer08}.
We begin the integration with the parent moon on a nominally circular co-planar orbit with $a=135000$ km and Enceladus and Dione with
orbital radii slightly larger than their $2:1$ and $4:1$ resonant positions.
Figures \ref{fig-a} and \ref{fig-ecc} show the evolution of the semi-major axes and eccentricities of the ideal 3 moon system after the lock into resonance.
Initially the semi-major axis of the ring parent grows rapidly in response to tidal friction with Saturn but the 3 moons quickly lock into mean motion resonance
slowing down the ring parent's outward migration significantly. The ring parent moon can therefore stay near the Roche zone for the age of the solar system and the 3 moon
system is stable for at least 0.5 Gyr. The resonance excites eccentricities in the 3 moons and equilibrium values for $e=0.005$, 0.08, and 0.02 are reached
for the ring parent, Enceladus and Dione respectively leading to heating rates from equation \ref{eq-peale} (in the same order) of $H=5$, 64, 37 GW. The total heating rate is
slightly larger than the value estimated from \ref{eq-H} because of the significant eccentricities excited in the moons. For the smaller ring parent
moon, the heating rate is sufficient to melt the ice and lead to a differentiated moon as assumed in the model. The heating rate for Enceladus is very high by comparison and
implies the existence of an oceanic mantle and volcanically active past. One can imagine significant mass loss during this phase which is consistent with the higher mean
density of $\rho=1.65\;{\rm g/cm^3}$ measured for Enceladus. Even Dione with 10 times the mass of Enceladus would experience significant heating under these circumstances
implying a large subsurface ocean in the past and perhaps more surface geological activity than one might expect from the current state.
\begin{figure}
\includegraphics[width=3.5in]{figures/fig-a.pdf}
\caption{The evolution of the semi-major axes of ideal system of 3 moons
consisting of the ring parent moon, Enceladus and Dione. Tidal
dissipation in Saturn causes the semi-major axis to grow. The initial
semi-major axes are intentionally set to values slightly out of resonance
with values smaller than the current values.
The ring parent moon first locks into a 2:1 mean motion resonance with
Enceladus. This system shortly after enters a mean motion resonance with
Dione that remains stable to the end of the integration of 0.5 Gyr. This
calculation suggests that a ring parent moon could be trapped in mean
motion resonance near the Roche radius for a long time - perhaps as
long as the age of the Solar system.}
\label{fig-a}
\end{figure}
\begin{figure}
\includegraphics[width=3.5in]{figures/fig-ecc.pdf}
\caption{The evolution of the eccentricities of the ideal system of 3 moons
consisting of the ring parent moon, Enceladus and Dione. When the moons
lock into mean motion resonance, the eccentricities of Enceladus and Dione
grow to equilibrium values of $e=0.08$ and $e=0.02$ respectively - values
an order of magnitude larger than the current values of $e=0.0047$
and $e=0.0022$. The ring parent's eccentricity settles to $e=0.005$.
These equilibrium values depend on the Love numbers and
tidal dissipation terms $Q$ mentioned in the paper and are subject to
some uncertainties. Nevertheless, since the tidal heating is proportional
to $e^2$ \citep{peale99}, we expect internal heating rates about 2 orders
of magnitude larger than today in this scenario. Prior to the removal of
the ring parent satellite, one therefore expects the icy mantle of Enceladus
to be largely molten.}
\label{fig-ecc}
\end{figure}
Finally, we examined the orbital evolution of Enceladus and Dione after the disruption of the satellite. We simply remove the ring parent moon from the system in MMR with
equilibrium eccentricities and continue the orbital integration. Figures \ref{fig-a-postcollision} and \ref{fig-ecc-postcollision} show the evolution of the semi-major axes
and eccentricities after the impact. The tidal migration of Enceladus and Dione stalls for a few hundred million years but the two moons eventually settle back into a 2:1 MMR.
The eccentricities decay with the timescales expected when resonant forcing is removed and a small eccentricity is re-excited when they get back into resonance. The timescale
for Enceladus to reach its current eccentricity is approximately 0.3 Gyr, which coincides with the expected timescale for the newly formed Mimas to migrate to its current
orbital radius. One still expects remnant heat from the previous highly eccentric orbit of Enceladus, which may explain why it is currently still quite active despite the low
amount of tidal heating.
\begin{figure}
\includegraphics[width=3.5in]{figures/fig-a-postcollision.pdf}
\caption{The evolution of the semi-major axes of Enceladus and Dione
after the collisional disruption of the ring parent moon shown to
occur at $t=0$. The tidal evolutionary growth of the semi-major axes
stalls for 0.1 Gyr but Enceladus and Dione re-enter a MMR
about 0.25 Gyr after the loss of the ring parent moon.}
\label{fig-a-postcollision}
\end{figure}
\begin{figure}
\includegraphics[width=3.5in]{figures/fig-ecc-postcollision.pdf}
\caption{The evolution of the eccentricities of Enceladus and Dione after
the collisional disruption of the ring parent moon at $t=0$. The
eccentricities decay rapidly in the absence of the resonance but settle
into values comparable to the currently measured values. While the choices
of the satellite Love numbers and tidal dissipation functions $Q$ are
only estimates, in this scenario Enceladus and Dione enter the current
resonance and orbital configuration about 0.25 Gyr after the collision.
This is close to the timescale one expects for a newly formed Mimas to
reach its current orbital radius after formation from the ring debris.}
\label{fig-ecc-postcollision}
\end{figure}
\section{Conclusions}
In summary, the collisional disruption mechanism proposed in this paper presents a solution to the origin of
Saturn's rings that
is consistent with the mass, age and composition of the rings. A differentiated moon
with a mass twice that of Mimas and an ice fraction consistent with the other icy moons located near the
Roche radius can be disrupted by a heliocentric comet with a radius of $\sim 20$ km. Representative simulations
show that the icy mantle can be liberated in the collision, forming a nearly pure ice ring system with a mass comparable
to Mimas while leaving behind a rocky remnant. It is argued that the viscous spreading of the ring of debris will lead to the current
ring system and a new satellite composed of the remnant and re-accreted debris that we identify with Mimas.
While this event could have occurred any time during the Saturn system's life, the short timescales for viscous spreading of the rings
and contamination from meteoroid bombardment suggest that the event happened within the last few hundred million years.
The probability that an impact with sufficient destructive energy occurs in this timeframe is $\sim 0.01$ making it a $2-3\sigma$ event -
perhaps unlikely but not wildly so.
The model therefore simultaneously accounts for a mass of the rings comparable to Mimas, a young age and a nearly pure ice composition.
A further requirement for this model to work is the necessity that the ring parent moon is locked in MMR with Enceladus
and Dione so that it remains trapped near the Roche radius prior to destruction. We have demonstrated that this phase would
tidally heat Enceladus at a much greater rate than now. Remnant heat might still be present explaining the
existence of Enceladus' current heat output, subsurface ocean and surface activity.
The Cassini spacecraft during its Grand Finale tour in 2017 flew close enough to the rings to measure a
gravitational perturbation using the radio science experiment
and so determine a definitive mass for the rings \citep{charnoz17}.
The scenario proposed here only works
if the ring mass is comparable to the mass of Mimas though Cassini may find that the rings are much more
massive as claimed by some. If the mass of the rings is close to the upper range of current estimates - about $3\times$
the mass of Mimas - this scenario seems less likely since it becomes increasingly difficult
to keep such a large ring parent moon within the Roche zone because of the stronger torques from tidal friction
and the lower probability for disruption by an impact.
However, if the mass for the ring system is found to
be comparable to the mass of Mimas or much less, this scenario is strongly constrained and it should be possible to narrow
the range of collision energies, impact geometries and the initial composition of the ring parent moon using more
sophisticated simulation methods.
\section*{Acknowledgments}
I acknowledge Peter Goldreich, Chris Thompson, Yanqin Wu and Luke Dones
for discussions that have inspired and clarified points in this paper.
I also thank Bj{\"o}rn J{\'o}nsson for permission to use his texture
map of Saturn for Figure \ref{fig-saturnseq} and the accompanying
animation. Simulations were carried out at the computing facilities
of the Canadian Institute for Theoretical Astrophysics. This research
was financially supported by the Natural Sciences and Engineering Research
Council of Canada.
\section{Introduction}
\label{sec:introduction}
Let us adopt the following notation for integer intervals: $[a \,.\,.\, b] \defn [a, b] \cap \field[N] = \{a, a+1, \dotsc, b\}$ with the special case $[n] \defn [1 \,.\,.\, n]$.
Let $S \subset \field[N]$ contain $n$ elements. A \emph{permutation} of $S$ is a linear ordering $\sigma_1, \sigma_2, \dotsc, \sigma_n$ of the elements of $S$.
We denote the set of all $n!$ permutations of~$S$ by~$\mathfrak{S}_S$ and by~$\mathfrak{S}_n$ if $S = [n]$.
Almost all statements and proofs below are given for the special case $S = [n]$ and the obvious generalization to generic finite subsets of $\field[N]$ is implicit.
It is customary to write the permutation as the word $\sigma = \sigma_1 \sigma_2 \,\cdots\, \sigma_n$ and we can identify the permutation~$\sigma$ with the bijection $\sigma(i) = \sigma_i$.
Replacing the linear ordering by a circular one, we arrive at the notion of \emph{circular permutations} of~$S$, \latinabbr{i.e.}, the arrangements of the elements $\sigma_1, \sigma_2, \dotsc, \sigma_n$ of $S$ around an oriented circle (turning the circle over generally produces a different permutation).
In this case one can always choose $\sigma_1$ to be the smallest element of~$S$ so that $\sigma_1 = 1$ if $S = [n]$.
It is clear that the set $\mathfrak{C}_S$ of all circular permutations of $S$ contains $(n-1)!$ elements; if $S = [n]$, we denote it by $\mathfrak{C}_n$.
Circular permutations may be written as words of the form $\sigma = \dot \sigma_1 \sigma_2 \,\cdots\, \dot \sigma_n$, where we emphasize the circular symmetry by a dot above the outermost characters of the word.\footnote{This is in analogy with the notation for repeating decimals when representing rational numbers.}
Moreover, due to the circular symmetry we can define $\sigma(0) \defn \sigma(n)$ and $\sigma(n+1) \defn \sigma(1)$.
Let us introduce yet another class of permutations.
We denote by $\mathfrak{A}^\pm_S \subset \mathfrak{S}_S$ the rising and falling \emph{`atomic'} permutations\footnote{The rationale for this naming should become clear later when we see that arbitrary permutations can be decomposed into atomic permutations, but no further.} of an $n$-element subset $S \subset \field[N]$.
We define $\mathfrak{A}^+_S$ as those permutations that, when written as a word, begin with the smallest and end with the largest element of $S$.
The falling atomic permutations $\mathfrak{A}^-_S$ are reversed rising atomic permutations, \latinabbr{i.e.}, they begin with the largest element of $S$ and end with the smallest.
Naturally, the cardinality of $\mathfrak{A}^\pm_S$ is $(n-2)!$.
If $S = [n]$, we write $\mathfrak{A}^\pm_n$ and see that it is the set of permutations of the form $1\,\dotsm\,n$ (\latinabbr{i.e.}, $\sigma_1 = 1$ and $\sigma_n = n$) or $n\,\dotsm\,1$ (\latinabbr{i.e.}, $\sigma_1 = n$ and $\sigma_n = 1$) respectively.
We say that $i$ is a \emph{descent} of a (circular) permutation $\sigma$ if $\sigma(i) > \sigma(i+1)$ and that it is an \emph{ascent} if $\sigma(i) < \sigma(i+1)$.
For example, the permutation \perm{52364178} has the descents $1, 4, 5$ and the ascents $2, 3, 6, 7$ whereas the circular permutation \cperm{14536782} has the descents $3, 7, 8$ and the ascents $1, 2, 4, 5, 6$.
We can collect all the descents of a (circular) permutation $\sigma$ in the \emph{descent set}
\begin{equation*}
D(\sigma) \defn \{ i \mid \sigma(i) > \sigma(i+1) \}.
\end{equation*}
It is an elementary exercise in enumerative combinatorics to count the number of permutations of~$[n]$ whose descent set is given by a fixed $S \subseteq [n-1]$.
Let $S = \{s_1 < s_2 < \dotsb < s_k \}$ be a subset of~$[n-1]$ and write every subset $T = \{t_1 < t_2 < \dotsb < t_j\} \subseteq S$ in increasing order; then
\begin{equation}\label{eq:num_permations_descent_set}
\beta(S)
\defn \big|\{ \sigma \in \mathfrak{S}_n \mid D(\sigma) = S \}\big|
= \sum_{T \subseteq S} (-1)^{|S \setminus T|} \binom{n}{t_1, t_2 - t_1, t_3 - t_2, \dotsc, n - t_j},
\end{equation}
see, for example, \cite[Theorem 1.4]{bona:2012}.
This result is easily adapted to circular permutations.
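The inclusion-exclusion count is easy to check mechanically. The following Python sketch (all function names are ours) computes $\beta(S)$ and compares it against brute-force enumeration for small $n$:

```python
from itertools import combinations, permutations
from math import factorial

def multinomial(n, parts):
    """Multinomial coefficient n! / (parts[0]! * parts[1]! * ...)."""
    m = factorial(n)
    for p in parts:
        m //= factorial(p)
    return m

def beta(n, S):
    """Number of permutations of [n] with descent set exactly S, by inclusion-exclusion."""
    S = tuple(sorted(S))
    total = 0
    for r in range(len(S) + 1):
        for T in combinations(S, r):
            # successive differences t_1, t_2 - t_1, ..., n - t_j
            parts = [b - a for a, b in zip((0,) + T, T + (n,))]
            total += (-1) ** (len(S) - len(T)) * multinomial(n, parts)
    return total

def descent_set(sigma):
    """D(sigma) for a permutation given as a tuple."""
    return frozenset(i + 1 for i in range(len(sigma) - 1) if sigma[i] > sigma[i + 1])

# brute-force cross-check over all descent sets for n = 5
n = 5
for r in range(n):
    for S in combinations(range(1, n), r):
        count = sum(1 for w in permutations(range(1, n + 1)) if descent_set(w) == frozenset(S))
        assert beta(n, S) == count
```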
Related to the notions of \emph{ascents} and \emph{descents} are the concepts of \emph{peaks} and \emph{valleys}:
A peak occurs at position $i$ if $\sigma(i-1) < \sigma(i) > \sigma(i+1)$, whereas a valley occurs in the opposite situation $\sigma(i-1) > \sigma(i) < \sigma(i+1)$.
The peaks and valleys of a (circular) permutation $\sigma$ split it into \emph{runs}, also called \emph{sequences} or \emph{alternating runs}.
\begin{definition}
A \emph{run}~$r$ of a (linear or circular) permutation $\sigma$ is an interval $[i \,.\,.\, j]$ such that $\sigma(i) \gtrless \sigma(i+1) \gtrless \dotsb \gtrless \sigma(j)$ is a monotone sequence, either increasing or decreasing, and so that it cannot be extended in either direction; its length is defined to be $j-i$.
If $\sigma$ is a permutation of an $n$-element set, the collection of the lengths of all runs gives a partition~$p$ of $n-1$ (linear permutations) or $n$ (circular permutations).
The partition $p$ is called the \emph{run structure} of~$\sigma$.
\end{definition}
It follows that a run starts and ends at peaks, valleys or at the outermost elements of a permutation.
For example, the permutation \perm{52364178} has runs $[1 \,.\,.\, 2]$, $[2 \,.\,.\, 4]$, $[4 \,.\,.\, 6]$, $[6 \,.\,.\, 8]$ with lengths $1, 2, 2, 2$, whereas the circular permutation \cperm{14536782} has runs $[1 \,.\,.\, 3]$, $[3 \,.\,.\, 4]$, $[4 \,.\,.\, 7]$, $[7 \,.\,.\, 9]$, of lengths $2, 1, 3, 2$.
Representing these runs by their image under the permutation, they are more transparently written as $\textls[100]{52, 236, 641, 178}$ and $\textls[100]{145, 53, 3678, 821}$ respectively.
The runs of (circular) permutations can also be neatly represented as directed graphs as shown in Figure~\ref{fig:run_graphs}.
In these graphs the peaks and valleys correspond to double sinks and double sources.
\begin{figure}
\centering
\begin{tikzpicture}
[ scale=1
, every node/.style={circle, draw, inner sep=1pt, minimum size=11pt}
, every path/.style={thick}
]
\node (5) at (0,0) {$5$};
\node (2) at (1,0) {$\mathbf{2}$} edge [->] (5);
\node (3) at (2,0) {$3$} edge [<-] (2);
\node (6) at (3,0) {$\mathbf{6}$} edge [<-] (3);
\node (4) at (4,0) {$4$} edge [->] (6);
\node (1) at (5,0) {$\mathbf{1}$} edge [->] (4);
\node (7) at (6,0) {$7$} edge [<-] (1);
\node (8) at (7,0) {$8$} edge [<-] (7);
\node[name=c, circle, minimum size=65pt, draw=none] at (11,0) {};
\node (1) at (c.north) {$\mathbf{1}$};
\node (4) at (c.north west) {$4$} edge [<-, bend left=10] (1);
\node (5) at (c.west) {$\mathbf{5}$} edge [<-, bend left=10] (4);
\node (3) at (c.south west) {$\mathbf{3}$} edge [->, bend left=10] (5);
\node (6) at (c.south) {$6$} edge [<-, bend left=10] (3);
\node (7) at (c.south east) {$7$} edge [<-, bend left=10] (6);
\node (8) at (c.east) {$\mathbf{8}$} edge [<-, bend left=10] (7);
\node (2) at (c.north east) {$2$} edge [->, bend left=10] (8)
edge [<-, bend right=10] (1);
\end{tikzpicture}
\caption{The two directed graphs representing the runs of the permutation \protect\perm{52364178} (left) and the circular permutation \protect\cperm{14536782} (right). Peaks and valleys are indicated by boldface numbers.}
\label{fig:run_graphs}
\end{figure}
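The run structure of a given permutation is easily extracted by grouping the signs of successive steps; the Python sketch below (with names of our choosing) reproduces the partitions quoted above for \perm{52364178} and \cperm{14536782}:

```python
def run_structure(sigma, circular=False):
    """Sorted run lengths (the run structure) of a linear or circular permutation.

    For a linear permutation of n elements the lengths sum to n - 1;
    for a circular one they sum to n."""
    word = list(sigma)
    if circular:
        # rotate so that the smallest element comes first; it has a descent
        # into it and an ascent out of it, so no run wraps around the circle
        i = word.index(min(word))
        word = word[i:] + word[:i]
        word.append(word[0])  # close the circle: sigma(n + 1) := sigma(1)
    # +1 for an ascent, -1 for a descent, one sign per adjacent pair
    signs = [1 if a < b else -1 for a, b in zip(word, word[1:])]
    lengths, current = [], 1
    for s, t in zip(signs, signs[1:]):
        if s == t:
            current += 1
        else:
            lengths.append(current)
            current = 1
    lengths.append(current)
    return sorted(lengths)

assert run_structure((5, 2, 3, 6, 4, 1, 7, 8)) == [1, 2, 2, 2]
assert run_structure((1, 4, 5, 3, 6, 7, 8, 2), circular=True) == [1, 2, 2, 3]
```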
Motivated by a problem in mathematical physics \cite{fewster:2012b} (see also section~\ref{sec:app}), we are interested in the following issue, which we have not found discussed in the literature.
By definition, the run structure associates each permutation $\sigma \in \mathfrak{C}_n$ with a partition~$p$ of~$n$.
For example, \cperm{14536782} and \cperm{13452786} both correspond to the same partition $1 + 2 + 2 + 3$ of $8$.
Our interest is in the inverse problem: given a partition $p$ of $n$, we ask for the number~$Z_{\mathfrak{C}}(p)$ of circular permutations whose run structure is given by~$p$.
One may consider similar questions for other classes of permutations, with slight changes; for example, note that the run structure of a permutation $\sigma \in \mathfrak{S}_n$ is a partition of $n-1$.
The literature we have found focuses on issues such as the number of permutations in which all runs have length one, \latinabbr{e.g.}, André's original study~\cite{andre:1895}, or on the enumeration question where the order of run lengths is preserved, so obtaining a map to compositions, rather than partitions, of $n$.
See, for example, \cite{brown:2007}, which studies this for the ordinary (noncircular) permutations.
In principle our question could be obtained by specialising this more difficult problem, but here we take a direct approach.
Another approach, not followed here, would be to take \eqref{eq:num_permations_descent_set} as a starting point and to sum over all descent sets corresponding to a particular run structure.
However, this would neither be straightforward from the theoretical side, nor would it make for efficient computation.
By contrast, our direct method was designed to facilitate computation; for the application in \cite{fewster:2012b} calculations were taken up to $n=65$ using exact integer arithmetic in Maple\texttrademark~\cite{maple:16}.
Sections~\ref{sec:atomic}, \ref{sec:circular} and \ref{sec:linear} deal, respectively, with the enumeration of the run structure of atomic, circular and linear permutations.
Using a suitable decomposition, this is accomplished in each case by reducing the enumeration problem to that for atomic permutations.
In section~\ref{sec:counting_valleys} we apply and extend the methods developed in the preceding sections to enumerate the valleys of permutations, thereby reproducing a result of Kitaev~\cite{kitaev:2007}.
Finally, in section~\ref{sec:app}, we discuss the original motivation for our work and other applications.
\section{Atomic permutations}
\label{sec:atomic}
Let us first discuss the significance of the atomic permutations.
We say that a permutation $\sigma \in \mathfrak{S}_S$ of~$S$ \emph{contains an atomic permutation} $\pi \in \mathfrak{A}_T$, $T \subset S$, if $\pi$ can be considered a subword of~$\sigma$.
The atomic permutation~$\pi$ in~$\sigma$ is called \emph{inextendible} if $\sigma$ contains no other atomic permutation $\pi' \in \mathfrak{A}_{T'}$, $T' \subset S$, such that $T \subsetneq T'$.
In particular, any permutation $\sigma \in \mathfrak{S}_S$ of~$S$ with $\abs{S} \geq 2$ contains an inextendible atomic permutation $\pi \in \mathfrak{A}_T$ of a subset $T \subset S$ that contains both the smallest and the largest element of~$S$.
That is, if $S = [n]$ and we consider $\sigma$ as a word, it contains a subword~$\pi$ of the form $1 \,\dotsm\, n$ or $n \,\dotsm\, 1$.
The permutation $\pi$ will be called the \emph{principal atom} of~$\sigma$.
\begin{proposition}\label{prop:decomposition}
Any permutation $\sigma \in \mathfrak{S}_S$ of a finite set $S \subset \field[N]$ can be uniquely decomposed into a tuple $(\pi^1, \dotsc, \pi^k)$ of inextendible atomic permutations $\pi^i \in \mathfrak{A}_{T_i}$, $T_i \subset S$ (non-empty) such that $\pi^i_{\abs{T_i}} = \pi^{i+1}_1$ for all $i < k$ and $\cup_i T_i = S$.
We call $\pi^i$ the \emph{atoms} of $\sigma$.
\end{proposition}
\begin{proof}
\emph{Existence:}
It is clear that any permutation of a set of $1$~or $2$~elements is an atomic permutation.
Suppose, for some $n\ge 3$, that all permutations of $n-1$ elements or fewer can be decomposed into inextendible atomic permutations.
Without loss of generality take $S = [n]$ and let $\sigma \in \mathfrak{S}_n$ be non-atomic.
Regarding $\sigma$ as a word, if the largest element $n$ occupies an interior position we can write $\sigma = \alpha \cdot n \cdot \omega$, where $\alpha$ and $\omega$ are non-empty subwords.
Otherwise $n$ occupies the first or last position and, since $\sigma$ is not atomic, the smallest element $1$ must occupy an interior position, so that we may write $\sigma = \alpha \cdot 1 \cdot \omega$ with $\alpha$ and $\omega$ non-empty.
In the first case the permutations $\alpha \cdot n$ and $n \cdot \omega$ (in the second, $\alpha \cdot 1$ and $1 \cdot \omega$) have unique decompositions by assumption.
Since an atomic permutation begins or ends with its largest element and, equally, with its smallest, a decomposition of~$\sigma$ into inextendible atomic permutations is given by combining these two decompositions.
\emph{Uniqueness:} This is clear from the definition of inextendibility.
\end{proof}
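The inductive argument translates directly into a recursive procedure. The following Python sketch (a direct transcription, with our naming) splits at an interior maximum where possible and at an interior minimum otherwise:

```python
def atoms(sigma):
    """Decompose a permutation, given as a tuple of distinct integers,
    into its inextendible atomic permutations (consecutive atoms share
    their boundary letter)."""
    w = tuple(sigma)
    rising = w[0] == min(w) and w[-1] == max(w)
    falling = w[0] == max(w) and w[-1] == min(w)
    if len(w) <= 2 or rising or falling:
        return [w]  # already atomic
    i = w.index(max(w))
    if not 0 < i < len(w) - 1:
        # the maximum sits at the boundary, so the minimum is interior
        i = w.index(min(w))
    return atoms(w[: i + 1]) + atoms(w[i:])

# example: 52364178 decomposes into four atoms sharing endpoints
assert atoms((5, 2, 3, 6, 4, 1, 7, 8)) == [(5, 2), (2, 3, 6), (6, 4, 1), (1, 7, 8)]
```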
We now begin the enumeration of atomic permutations according to their run structure.
That is, for every partition~$p$ of ${n-1}$, we aim to find the number $Z_{\mathfrak{A}}(p)$ of rising (equivalently, falling) atomic permutations of length~$n$ whose run structure is given by~$p$.
Observe that any $\sigma \in \mathfrak{A}^+_{n}$ can be extended to a permutation in $\mathfrak{A}^+_{n+1}$ by replacing $n$ by $n+1$ and reinserting $n$ in any position after the first and before the last.
Thus, \perm{13425} can be extended to \perm{153426}, \perm{135426}, \perm{134526} or \perm{134256}.
Every permutation in $\mathfrak{A}^+_{n+1}$ arises in this way, as can be seen by reversing the procedure.
The effect on the run lengths can be described as follows.
\begin{description}[leftmargin=*]
\item[Case 1:]\phantomsection\label{item:case_1}
The length of one of the runs can be increased by one by inserting $n$ either at
\begin{enumerate}[leftmargin=*]
\item
the end of an increasing run if it does not end in $n+1$, thereby increasing its length (\latinabbr{e.g.}, \perm{13425} $\to$ \perm{134526})
\\[.5em]
\begin{tikzpicture}
[ scale=.82
, every node/.style={circle, draw=black!60, fill=white, inner sep=1pt, minimum size=11pt}
, every path/.style={thick}
]
\tikzstyle{dashsolid} = [dash pattern=on 4.5mm off .5mm on .5mm off .5mm on .5mm off .5mm on .5mm off .5mm on 4.5mm]
\draw[dotted, path fading=west] (0,0) -- (1,0) ++(0,1pt);
\draw[dotted, path fading=east] (5,0) -- (6,0) ++(0,1pt);
\node (A) at (1,0) {};
\node (B) at (2,0) {} edge [->] (A);
\node (C) at (4,0) {} edge [<-, dashsolid] (B);
\node (D) at (5,0) {} edge [->] (C);
\draw[->] (6.5,0) -- (7,0);
\draw[dotted, path fading=west] (7.5,0) -- (8.5,0) ++(0,1pt);
\draw[dotted, path fading=east] (13.5,0) -- (14.5,0) ++(0,1pt);
\node (A) at (8.5,0) {};
\node (B) at (9.5,0) {} edge [->] (A);
\node (C) at (11.5,0) {} edge [<-, dashsolid] (B);
\node[draw=black] (N) at (12.5,0) {$n$} edge [<-] (C);
\node (D) at (13.5,0) {} edge [->] (N);
\end{tikzpicture}
\item
the penultimate position of an increasing run, thereby increasing its own length if it ends in $n+1$ (\latinabbr{e.g.}, \perm{13425} $\to$ \perm{135426}) or increasing the length of the following decreasing run otherwise (\latinabbr{e.g.}, \perm{13425} $\to$ \perm{134256})
\\[.5em]
\begin{tikzpicture}
[ scale=.82
, every node/.style={circle, draw=black!60, fill=white, inner sep=1pt, minimum size=11pt}
, every path/.style={thick}
]
\tikzstyle{dashsolid} = [dash pattern=on 4.5mm off .5mm on .5mm off .5mm on .5mm off .5mm on .5mm off .5mm on 4.5mm]
\draw[dotted, path fading=west] (0,0) -- (1,0) ++(0,1pt);
\draw[dotted, path fading=east] (5,0) -- (6,0) ++(0,1pt);
\node (A) at (1,0) {};
\node (B) at (2,0) {} edge [<-] (A);
\node (C) at (4,0) {} edge [->, dashsolid] (B);
\node (D) at (5,0) {} edge [<-] (C);
\draw[->] (6.5,0) -- (7,0);
\draw[dotted, path fading=west] (7.5,0) -- (8.5,0) ++(0,1pt);
\draw[dotted, path fading=east] (13.5,0) -- (14.5,0) ++(0,1pt);
\node (A) at (8.5,0) {};
\node[draw=black] (N) at (9.5,0) {$n$} edge [<-] (A);
\node (B) at (10.5,0) {} edge [->] (N);
\node (C) at (12.5,0) {} edge [->, dashsolid] (B);
\node (D) at (13.5,0) {} edge [<-] (C);
\draw[dotted, path fading=west] (0,-0.75) -- (1,-0.75) ++(0,1pt);
\node (A) at (1,-0.75) {};
\node (B) at (2,-0.75) {} edge [->] (A);
\node (C) at (4,-0.75) {} edge [<-, dashsolid] (B);
\node[ellipse] (D) at (5.45,-0.75) {$n+1$} edge [<-] (C);
\draw[->] (6.5,-0.75) -- (7,-0.75);
\draw[dotted, path fading=west] (7.5,-0.75) -- (8.5,-0.75) ++(0,1pt);
\node (A) at (8.5,-0.75) {};
\node (B) at (9.5,-0.75) {} edge [->] (A);
\node (C) at (11.5,-0.75) {} edge [<-, dashsolid] (B);
\node[draw=black] (N) at (12.5,-0.75) {$n$} edge [<-] (C);
\node[ellipse] (D) at (13.95,-0.75) {$n+1$} edge [<-] (N);
\end{tikzpicture}
\end{enumerate}
\item[Case 2:]\phantomsection\label{item:case_2}
Any run of length $i+j \geq 2$ becomes three runs of lengths $1$, $i$ and $j$ if we insert $n$ either after
\begin{enumerate}[leftmargin=*]
\item
$i$ elements of an increasing run (\latinabbr{e.g.}, \perm{13425} $\to$ \perm{153426} exemplifies $i=1$, $j=1$)
\\[.5em]\hspace*{-15pt}
\begin{tikzpicture}
[ scale=.82
, every node/.style={circle, draw=black!60, fill=white, inner sep=1pt, minimum size=11pt}
, every path/.style={thick}
]
\tikzstyle{dashsolid} = [dash pattern=on 4.5mm off .5mm on .5mm off .5mm on .5mm off .5mm on .5mm off .5mm on 4.5mm]
\tikzstyle{shortdashsolid} = [dash pattern=on 2.2mm off .5mm on .5mm off .5mm on .5mm off .5mm on .5mm off .5mm on 2.2mm]
\draw[dotted, path fading=west] (0,0) -- (1,0) ++(0,1pt);
\draw[dotted, path fading=east] (5,0) -- (6,0) ++(0,1pt);
\node (A) at (1,0) {};
\node (B) at (2,0) {} edge [->] (A);
\node (C) at (4,0) {};
\path[->, dashsolid] (B) edge node[above, rectangle, draw=none, fill=none, minimum size=0] {\footnotesize$i+j$} (C);
\node (D) at (5,0) {} edge [->] (C);
\draw[->] (6.5,0) -- (7,0);
\draw[dotted, path fading=west] (7.5,0) -- (8.5,0) ++(0,1pt);
\draw[dotted, path fading=east] (14.5,0) -- (15.5,0) ++(0,1pt);
\node (A) at (8.5,0) {};
\node (B) at (9.5,0) {} edge [->] (A);
\node[draw=black] (N) at (11,0) {$n$};
\path[->, shortdashsolid] (B) edge node[above, rectangle, draw=none, fill=none, minimum size=0] {\footnotesize$i\vphantom{j}$} (N);
\node (M) at (12,0) {} edge [->] (N);
\node (C) at (13.5,0) {};
\path[->, shortdashsolid] (M) edge node[above, rectangle, draw=none, fill=none, minimum size=0] {\footnotesize$j$} (C);
\node (D) at (14.5,0) {} edge [->] (C);
\end{tikzpicture}
\item
$i+1$ elements of a decreasing run (\latinabbr{e.g.}, \perm{14325} $\to$ \perm{143526} for $i=1$, $j=1$)
\\[.5em]\hspace*{-15pt}
\begin{tikzpicture}
[ scale=.82
, every node/.style={circle, draw=black!60, fill=white, inner sep=1pt, minimum size=11pt}
, every path/.style={thick}
]
\tikzstyle{dashsolid} = [dash pattern=on 4.5mm off .5mm on .5mm off .5mm on .5mm off .5mm on .5mm off .5mm on 4.5mm]
\tikzstyle{shortdashsolid} = [dash pattern=on 2.2mm off .5mm on .5mm off .5mm on .5mm off .5mm on .5mm off .5mm on 2.2mm]
\draw[dotted, path fading=west] (0,0) -- (1,0) ++(0,1pt);
\draw[dotted, path fading=east] (5,0) -- (6,0) ++(0,1pt);
\node (A) at (1,0) {};
\node (B) at (2,0) {} edge [<-] (A);
\node (C) at (4,0) {};
\path[->, dashsolid] (C) edge node[above, rectangle, draw=none, fill=none, minimum size=0] {\footnotesize$i+j$} (B);
\node (D) at (5,0) {} edge [<-] (C);
\draw[->] (6.5,0) -- (7,0);
\draw[dotted, path fading=west] (7.5,0) -- (8.5,0) ++(0,1pt);
\draw[dotted, path fading=east] (14.5,0) -- (15.5,0) ++(0,1pt);
\node (A) at (8.5,0) {};
\node (B) at (9.5,0) {} edge [<-] (A);
\node (M) at (11,0) {};
\path[->, shortdashsolid] (M) edge node[above, rectangle, draw=none, fill=none, minimum size=0] {\footnotesize$i\vphantom{j}$} (B);
\node[draw=black] (N) at (12,0) {$n$} edge [<-] (M);
\node (C) at (13.5,0) {};
\path[->, shortdashsolid] (C) edge node[above, rectangle, draw=none, fill=none, minimum size=0] {\footnotesize$j$} (N);
\node (D) at (14.5,0) {} edge [<-] (C);
\end{tikzpicture}
\end{enumerate}
\end{description}
An analogous argument can be made for the falling atomic permutations $\mathfrak{A}^-_n$.
Notice that partitions of positive integers can be represented by the monomials in the ring of polynomials\footnote{If one wants to encode also the order of the run (\latinabbr{e.g.}, to obtain a map from permutations of length $n$ to the compositions of $n$), one can exchange the polynomial ring with a noncommutative ring. Alternatively, if one wants to encode the direction of a run, one could study instead the ring $\field[Z][x_1, y_1, x_2, y_2, \dotsc]$, where $x_i$ denotes an increasing run of length $i$ and $y_j$ encodes a decreasing run of length $j$.} $\field[Z][x_1, x_2, \dotsc]$ in infinitely many variables $x_1, x_2, \dotsc$ and with integer coefficients.
That is, we express a partition $p = p_1 + p_2 + \dotsb + p_m$ as $x_{p_1} x_{p_2} \,\cdots\, x_{p_m}$
(\latinabbr{e.g.}, the partition $1 + 2 + 2 + 3$ of $8$ is written as $x_1 x_2^2 x_3$).
Now, let $p$ be a partition of $n-1$ and $X$ the corresponding monomial.
To this partition there correspond $Z_{\mathfrak{A}}(p)$ permutations in $\mathfrak{A}^\pm_n$ which can be extended to permutations in $\mathfrak{A}^\pm_{n+1}$ in the manner described above.
Introducing the (formally defined) differential operator
\begin{equation}\label{eq:diff}
\mathcal{D} \defn \mathcal{D}_0 + \mathcal{D}_+
\quad\text{with}\quad
\mathcal{D}_0 \defn \sum_{i=1}^\infty x_{i+1} \pd{}{x_i},
\;\;
\mathcal{D}_+ \defn \sum_{i,j \,\geq\, 1} x_1 x_i x_j \pd{}{x_{i+j}},
\end{equation}
we can describe this extension in terms of the action of $\mathcal{D}$ on $X$.
We say that $\mathcal{D}_0$ is the \emph{degree-preserving} part of $\mathcal{D}$; it represents the \hyperref[item:case_1]{case 1} of increasing the length of a run: the differentiation $\partial/\partial x_i$ removes one of the runs of length $i$ and replaces it by a run of length $i+1$, keeping account of the number of ways in which this can be done. Similarly,
\hyperref[item:case_2]{case 2} of splitting a run into $3$ parts is represented by the \emph{degree-increasing} part $\mathcal{D}_+$.
For example, each of the $7$ atomic permutations corresponding to the partition $1 + 1 + 3$ can be extended as
\begin{equation*}
\mathcal{D} x_1^2 x_3 = 2 x_1 x_2 x_3 + x_1^2 x_4 + x_1^4 x_2,
\end{equation*}
\latinabbr{i.e.}, each can be extended to two atomic permutations corresponding to the partitions $1 + 2 + 3$, one corresponding to $1 + 1 + 4$ and one to $1 + 1 + 1 + 1 + 2$.
Therefore, starting from the trivial partition $1$ of $1$, represented as $x_1$, we can construct a recurrence relation for polynomials $A_n = A_n(x_1, x_2, \dotsc, x_n)$ which, at every step $n \geq 1$, encode the number of atomic permutations $Z_{\mathfrak{A}}(p)$ of length $n+1$ with run structure given by a partition~$p$ of~$n$ as the coefficients of the corresponding monomial in $A_n$.
The polynomial~$A_n$, accordingly defined by
\begin{equation}\label{eq:A_rel_Z}
A_n = \sum_{p \vdash n} Z_{\mathfrak{A}}(p) \prod_{i=1}^n x_i^{p(i)},
\end{equation}
where the sum is over all partitions $p$ of $n$ and $p(i)$ denotes the multiplicity of $i$ in the partition $p$, can thus be computed from the recurrence relation
\begin{subequations}\label{eq:A_recurrence}
\begin{align}
A_1 & \defn x_1, \\
A_n & \defn \mathcal{D} A_{n-1}, \qquad (n \geq 2).
\end{align}
\end{subequations}
We say that the polynomials~$A_n$ enumerate the run structure of the atomic permutations.
We summarize these results in the following proposition:
\begin{proposition}\label{prop:Z_A}
The number $Z_{\mathfrak{A}}(p)$ of rising or falling atomic permutations of length $n+1$ corresponding to a given run structure (\latinabbr{i.e.}, a partition $p$ of $n$) is determined by the polynomial $A_n$ via~\eqref{eq:A_rel_Z}.
The polynomials $A_n$ satisfy the recurrence relation~\eqref{eq:A_recurrence}.
\end{proposition}
\noindent
Note that atomic permutations always contain an odd number of runs and thus $Z_{\mathfrak{A}}(p)$ is zero for partitions $p$ with an even number of parts.
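The recurrence~\eqref{eq:A_recurrence} is also convenient computationally. In the following Python sketch (our representation, not tied to any computer-algebra system), a monomial $x_{p_1} \dotsm x_{p_m}$ is stored as the sorted tuple $(p_1, \dotsc, p_m)$ and a polynomial as a dictionary from monomials to integer coefficients:

```python
from math import factorial

def apply_D(poly):
    """One application of D = D_0 + D_+ from equation (2) to a polynomial
    stored as {sorted tuple of run lengths: integer coefficient}."""
    new = {}
    for part, c in poly.items():
        for idx, v in enumerate(part):
            rest = part[:idx] + part[idx + 1:]
            # D_0: lengthen a run of length v to length v + 1
            q = tuple(sorted(rest + (v + 1,)))
            new[q] = new.get(q, 0) + c
            # D_+: split a run of length v = i + j into runs of lengths 1, i, j
            for i in range(1, v):
                q = tuple(sorted(rest + (1, i, v - i)))
                new[q] = new.get(q, 0) + c
    return new

def A(n):
    """The polynomial A_n as {run structure partition: Z_A(partition)}."""
    poly = {(1,): 1}  # A_1 = x_1
    for _ in range(n - 1):
        poly = apply_D(poly)
    return poly

# reproduces the listed coefficients of A_5; the coefficients of A_n sum to (n-1)!
assert A(5) == {(5,): 1, (1, 1, 3): 7, (1, 2, 2): 11, (1, 1, 1, 1, 1): 5}
assert all(sum(A(n).values()) == factorial(n - 1) for n in range(2, 9))
```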
It will prove useful to combine all generating functions $A_n$ into the formal series
\begin{equation*}
\mathcal{A}(\lambda)
\defn \sum_{n=0}^\infty A_{n+1} \frac{\lambda^n}{n!}
= \sum_{n=0}^\infty \mathcal{D}^n A_1 \frac{\lambda^n}{n!},
\end{equation*}
which can be expressed compactly as the exponential
\begin{equation*}
\mathcal{A}(\lambda) = \exp(\lambda \mathcal{D}) A_1.
\end{equation*}
The first few $A_n$ are given by
\begin{align*}
A_2 & = x_2 \\
A_3 & = x_3 + x_1^3 \\
A_4 & = x_4 + 5 x_2 x_1^2 \\
A_5 & = x_5 + 7 x_3 x_1^2 + 11 x_2^2 x_1 + 5 x_1^5 \\
A_6 & = x_6 + 9 x_4 x_1^2 + 11 x_2^3 + 38 x_3 x_2 x_1 + 61 x_2 x_1^4.
\end{align*}
For example, we can read off from $A_5$ that there is $1$ permutation in $\mathfrak{A}^\pm_6$ corresponding to the trivial partition $5 = 5$, $7$ corresponding to the partition $5 = 1 + 1 + 3$, $11$ corresponding to $5 = 1 + 2 + 2$ and $5$ corresponding to $5 = 1 + 1 + 1 + 1 + 1$.
As a check, we note that $1 + 7 + 11 + 5 = 24$, which is the total number of elements of $\mathfrak{A}^\pm_6$; similarly, the coefficients in the expression for $A_6$ sum to $120$, the cardinality of $\mathfrak{A}^\pm_7$. A direct check that the coefficients in $A_n$ sum to $(n-1)!$ for all $n$ will be given in the last paragraph of section \ref{sec:counting_valleys}.
We now consider what can be said concerning the degree $k$ part, $A^{(k)}_n$, of the polynomials $A_n$. The first degree term $A^{(1)}_n$ of $A_n$ is $x_n$ as can be seen by a trivial induction using $A^{(1)}_n = \mathcal{D}_0 A^{(1)}_{n-1}$, which follows from the recurrence relation~\eqref{eq:A_recurrence}.
Therefore $Z_{\mathfrak{A}}(n) = 1$.
For $A_n^{(k)}$ with $k > 1$ the effect of $\mathcal{D}_+$ has to be taken into account as well, complicating matters considerably.
Nevertheless, the general procedure is clear: once $A_m^{(k-2)}$ is known for all $m < n$, $A_n^{(k)}$ can be obtained as
\begin{equation*}
A_n^{(k)} = \mathcal{D}_0 A_{n-1}^{(k)} + \mathcal{D}_+ A_{n-1}^{(k-2)} = \sum_{m=k-1}^{n-1} \mathcal{D}_0^{n-m-1} \mathcal{D}_+ A_m^{(k-2)}.
\end{equation*}
Here one can make use of the following relation.
Applying $\mathcal{D}_0$ repeatedly to any monomial $x_{i_1} x_{i_2} \dotsm x_{i_k}$ of degree $k$ yields, as a consequence of the Leibniz rule,
\begin{equation}\label{eq:repeated_D}
\mathcal{D}_0^n x_{i_1} x_{i_2} \dotsm x_{i_k} = \sum_{\substack{j_1, j_2, \dotsc, j_k \,\geq\, 0 \\ j_1 + j_2 + \dotsb + j_k = n}} \binom{n}{j_1, j_2, \dotsc, j_k}\, x_{i_1 + j_1} x_{i_2 + j_2} \dotsm x_{i_k + j_k}.
\end{equation}
This observation provides the means to determine the third degree term $A_n^{(3)}$.
Applying $\mathcal{D}_+$ to any $A_m^{(1)} = x_m$ with $m \geq 2$ produces $x_1 x_p x_q$ with $p+q = m$ and $p, q \geq 1$.
Moreover, the repeated action of $\mathcal{D}_0$ on $x_1 x_p x_q$ is described by~\eqref{eq:repeated_D} and thus
\begin{equation*}
A_n^{(3)} = \sum_{\substack{p, q, r, s, t \,\geq\, 0 \\ 1 + p + q + r + s + t = n}} \binom{n - p - q - 1}{r, s, t}\, x_{1+r} x_{p+s} x_{q+t}.
\end{equation*}
After some algebra this yields
\begin{proposition}
The third degree term $A_n^{(3)}$ of the polynomial $A_n, n \geq 3,$ is given by
\begin{equation}\label{eq:third_degree}
A_n^{(3)}
= \sum_{\substack{i, j, k \,\geq\, 1 \\ i + j + k = n}} \sum_{q=1}^{k} \frac{n-q-1}{n-q-j}\, \binom{n - q - 2}{i - 1, j - 1, k - q}\, x_i x_j x_k.
\end{equation}
\end{proposition}
The equation~\eqref{eq:third_degree} for the third degree term $A_n^{(3)}$ can be rewritten into a formula for $Z_{\mathfrak{A}}(p_1 + p_2 + p_3)$, \latinabbr{i.e.}, the number of permutations of $[n+1]$ that start with $1$, end with $n+1$ and have three runs of lengths $p_1, p_2, p_3$, by changing the first sum to a sum over the distinct orderings $(i, j, k)$ of the multiset $\{ p_1, p_2, p_3 \}$.
In particular, this gives rise to three integer series for the special cases
\begin{equation*}
Z_{\mathfrak{A}}(n+n+n), \quad Z_{\mathfrak{A}}(1+n+n), \quad Z_{\mathfrak{A}}(1+1+n),
\end{equation*}
with $n \in \field[N]$.
The first series
\begin{align*}
Z_{\mathfrak{A}}(n+n+n)
& = \sum_{q=1}^n \frac{3n-q-1}{2n-q}\, \binom{3n-q-2}{n-1,n-1,n-q}\\
& = 1, 11, 181, 3499, 73501, 1623467, \dotsc \qquad (n \geq 1)
\end{align*}
gives the number of atomic permutations with three runs of equal length $n$.
It does not appear to be known in the literature, nor can it be found in the OEIS~\cite{oeis}, and the authors were not able to express it in a closed form.
For the second series, however, a simple closed form can be found:
\begin{align*}
Z_{\mathfrak{A}}(1+n+n)
& = \sum_{q=1}^n \bigg( \binom{2n-q}{n-1} + \binom{2n-q-1}{n-1} \bigg) + \frac{1}{2}\, \binom{2n}{n}\\
& = 2\, \binom{2n}{n} - 1
= 11, 39, 139, 503, 1847, \dotsc, \qquad (n \geq 2)
\end{align*}
is the number of atomic permutations in $\mathfrak{A}^\pm_{2n+2}$ with two runs of length $n$. One may understand this directly: there are $\binom{2n}{n}$
permutations in which the length $1$ run is between the others and $\binom{2n}{n}-1$ in which it is either first or last.
The third series, $Z_{\mathfrak{A}}(1+1+n)$, \latinabbr{i.e.}, the number of atomic permutations in $\mathfrak{A}^\pm_{n+3}$ with two runs of length $1$, is given by the odd numbers bigger than $3$:
\begin{equation*}
Z_{\mathfrak{A}}(1+1+n) = 2n + 1 = 5, 7, 9, 11, 13, 15, \dotsc, \qquad (n \geq 2).
\end{equation*}
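These specialisations are straightforward to verify numerically. The sketch below (our code; exact arithmetic via \texttt{fractions.Fraction}) evaluates the coefficient formula obtained from~\eqref{eq:third_degree} by summing over the distinct orderings of $\{p_1, p_2, p_3\}$:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def multinomial(n, parts):
    """Multinomial coefficient n! / (parts[0]! * parts[1]! * ...)."""
    m = factorial(n)
    for p in parts:
        m //= factorial(p)
    return m

def Z3(p1, p2, p3):
    """Z_A(p1 + p2 + p3): atomic permutations with three runs of these lengths."""
    n = p1 + p2 + p3
    total = Fraction(0)
    for i, j, k in set(permutations((p1, p2, p3))):
        for q in range(1, k + 1):
            total += Fraction(n - q - 1, n - q - j) \
                * multinomial(n - q - 2, (i - 1, j - 1, k - q))
    assert total.denominator == 1  # the sum is always an integer
    return int(total)

assert Z3(1, 1, 3) == 7 and Z3(1, 2, 2) == 11           # coefficients in A_5
assert [Z3(m, m, m) for m in (1, 2, 3)] == [1, 11, 181]  # first terms of the series
assert all(Z3(1, 1, m) == 2 * m + 1 for m in range(2, 10))
assert all(Z3(1, m, m) == 2 * multinomial(2 * m, (m, m)) - 1 for m in range(2, 8))
```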
Observe that terms of the form $x_1^n$ in $A_n$ encode alternating permutations,
which were already investigated by André in the 1880's \cite{andre:1881}.
As a consequence of his results, we find that the alternating atomic permutations are enumerated by the \emph{secant numbers} $S_n$, the coefficients of the Maclaurin series of $\sec x = S_0 + S_1 x^2/2! + S_2 x^4/4! + \dotsb$,
\begin{equation*}
Z_{\mathfrak{A}}\bigg(\sum_{i=1}^{2n+1} 1\bigg) = S_n = 1, 1, 5, 61, 1385, 50521, \dotsc \quad \text{($n \geq 0$, OEIS series \oeis{A000364})}.
\end{equation*}
This is due to the fact that all alternating atomic permutations of $[2n]$ can be understood as the reverse alternating permutations of $[2 \,.\,.\, 2n-1]$ with a prepended $1$ and an appended $2n$.
Moreover, since any $x_1^{2n+1}$ can only be produced through an application of $\mathcal{D}$ on $x_2 x_1^{2(n-1)}$, we also have $Z_{\mathfrak{A}}\big(2 + \sum_{i=1}^{2(n-1)} 1\big) = S_n$.
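The secant numbers quoted here are easily regenerated from the defining Maclaurin series by exact power-series division; the following is an illustrative Python sketch:

```python
from fractions import Fraction
from math import factorial

def secant_numbers(count):
    """S_0, ..., S_{count-1} from sec x = sum_n S_n x^(2n) / (2n)!,
    computed by dividing 1 by the power series of cos x."""
    # coefficients of x^(2k) in cos x, as exact rationals
    cos = [Fraction((-1) ** k, factorial(2 * k)) for k in range(count)]
    sec = []
    for n in range(count):
        # the x^(2n) coefficients satisfy sum_{k <= n} cos[k] * sec[n - k] = [n == 0]
        s = Fraction(1 if n == 0 else 0)
        for k in range(1, n + 1):
            s -= cos[k] * sec[n - k]
        sec.append(s / cos[0])
    return [int(s * factorial(2 * n)) for n, s in enumerate(sec)]

assert secant_numbers(6) == [1, 1, 5, 61, 1385, 50521]
```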
\section{Circular permutations}
\label{sec:circular}
The methods developed in the last section to enumerate atomic permutations can also be applied to find the number of circular permutations~$Z_{\mathfrak{C}}(p)$ with a given run structure~$p$.
Indeed, any circular permutation in $\mathfrak{C}_{n-1}$ can be extended to a permutation in $\mathfrak{C}_{n}$ by inserting $n$ at any position after the first (\latinabbr{e.g.}, \cperm{14532} can be extended to \cperm{164532}, \cperm{146532}, \cperm{145632}, \cperm{145362} or \cperm{145326}).
As in the case of atomic permutations, this extension either increases the length of a run or splits a run into three runs.
Namely, we can increase the length of one run by inserting $n$ at the end or the penultimate position of an increasing run or we can split a run of length $i+j \geq 2$ into three runs of lengths $i, j$ and $1$ by inserting $n$ after $i$ elements of an increasing run or after $i+1$ elements of a decreasing run.
We introduce polynomials $C_n$ representing the run structures of all elements of $\mathfrak{C}_n$, by analogy with the polynomials $A_n$ in the previous section:
\begin{equation}\label{eq:C_rel_Z}
C_n = \sum_{p \vdash n} Z_\mathfrak{C}(p) \prod_{i=1}^n x_i^{p(i)}
\end{equation}
and we say that the polynomials~$C_n$ enumerate the run structure of the circular permutations.
In the last paragraph we saw that we can use the differential operator $\mathcal{D}$ introduced in~\eqref{eq:diff} to find a recurrence relation similar to~\eqref{eq:A_recurrence}.
Namely,
\begin{subequations}\label{eq:C_recurrence}
\begin{align}
C_2 & \defn x_1^2, \\
C_n & \defn \mathcal{D} C_{n-1}, \qquad (n \geq 3)
\end{align}
\end{subequations}
giving in particular
\begin{align*}
C_3 & = 2 x_2 x_1 \\
C_4 & = 2 x_2^2 + 2 x_3 x_1 + 2 x_1^4 \\
C_5 & = 2 x_4 x_1 + 6 x_3 x_2 + 16 x_1^3 x_2 \\
C_6 & = 2 x_5 x_1 + 8 x_4 x_2 + 6 x_3^2 + 62 x_1^2 x_2^2 + 26 x_1^3 x_3 + 16 x_1^6
\end{align*}
from which we can read off that there are $2$ permutations in $\mathfrak{C}_5$ corresponding to $5 = 4 + 1$, $6$ corresponding to the partition $5 = 3 + 2$ and $16$ corresponding to $5 = 2 + 1 + 1 + 1$.
As a check, we note that $6 + 16 + 2 = 24$, which is the total number of elements of $\mathfrak{C}_5$; similarly, the coefficients in the expression for $C_6$ sum to $120$, the cardinality of $\mathfrak{C}_6$.
More on this can be found in the last paragraph of section \ref{sec:counting_valleys}.
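The recurrence~\eqref{eq:C_recurrence} is easy to mechanise. In the sketch below, polynomials are stored as dictionaries mapping a monomial (the sorted tuple of run lengths) to its coefficient; the operator implements the two insertion moves described above (lengthen a run, $x_m \mapsto x_{m+1}$, or split a run, $x_m \mapsto x_1 x_i x_j$ with $i+j=m$ summed over ordered pairs). Since~\eqref{eq:diff} is stated in the previous section and not reproduced here, this explicit form is an assumption reconstructed from the verbal description; it does reproduce $C_3, \dotsc, C_6$ exactly as printed.

```python
from collections import Counter, defaultdict

# Polynomials in x_1, x_2, ... are dicts {monomial: coefficient}, where a
# monomial is the sorted tuple of run lengths, e.g. 16*x_1^3*x_2 -> {(1,1,1,2): 16}.
def D(poly):
    # Reconstructed form of the operator of eq. (eq:diff) -- an assumption
    # based on the two insertion moves described in the text.
    out = defaultdict(int)
    for mono, c in poly.items():
        for m, e in Counter(mono).items():
            rest = list(mono)
            rest.remove(m)                         # differentiate one factor x_m
            out[tuple(sorted(rest + [m + 1]))] += c * e        # lengthen a run
            for i in range(1, m):                  # split run m into i, m-i, 1
                out[tuple(sorted(rest + [1, i, m - i]))] += c * e
    return dict(out)

C = {2: {(1, 1): 1}}                               # C_2 = x_1^2
for n in range(3, 7):
    C[n] = D(C[n - 1])
```

The coefficient sums $2+8+6+62+26+16 = 120 = 5!$ confirm the cardinality check made in the text for $\mathfrak{C}_6$.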
In summary, we have a result analogous to proposition~\ref{prop:Z_A}:
\begin{proposition}\label{prop:Z_C}
The number $Z_\mathfrak{C}(p)$ of circular permutations of length $n$ corresponding to a given run structure $p$ is determined by the polynomial $C_n$ via~\eqref{eq:C_rel_Z}.
The polynomials~$C_n$ satisfy the recurrence relation~\eqref{eq:C_recurrence}.
\end{proposition}
\noindent
Note that circular permutations, in contrast to atomic permutations, always contain an even number of runs, and thus $Z_\mathfrak{C}(p)$ is zero for every partition $p$ with an odd number of parts.
The enumeration of circular and atomic permutations is closely related.
In fact, introducing a generating function $\mathcal{C}$ as the formal series
\begin{equation*}
\mathcal{C}(\lambda)
\defn \sum_{n=0}^\infty C_{n+2} \frac{\lambda^n}{n!}
= \sum_{n=0}^\infty \mathcal{D}^n C_2 \frac{\lambda^n}{n!}
= \exp(\lambda \mathcal{D}) C_2,
\end{equation*}
one can show the following:
\begin{proposition}\label{prop:square}
The formal power series $\mathcal{C}$ is the square of a formal series $\mathcal{A}$; namely,
\begin{equation}\label{eq:square}
\mathcal{C}(\lambda) = \mathcal{A}(\lambda)^2 = \big( \exp(\lambda \mathcal{D}) A_1 \big)^2,
\end{equation}
where $A_1 \defn x_1$.
\end{proposition}
\begin{proof}
This may be seen in various ways, but the most convenient is to study the first-order partial differential equation (in infinitely many variables)
\begin{equation}\label{eq:pde}
\frac{\partial\mathcal{C}}{\partial \lambda} - \mathcal{D}\mathcal{C} = 0,
\quad
\mathcal{C}(0) = C_2
\end{equation}
satisfied by $\mathcal{C}$.
We can now apply the method of characteristics to this problem.
Since it has no inhomogeneous part, the p.d.e.~\eqref{eq:pde} asserts that $\mathcal{C}$ is constant along its characteristics.
So, given $\lambda$ and $x_1, x_2, \dotsc$, let $\chi_1(\mu), \chi_2(\mu), \dotsc$ be solutions to the characteristic equations with $\chi_r(\lambda) = x_r$, \latinabbr{i.e.}, $\chi_1(\mu), \chi_2(\mu), \dotsc$ are the characteristic curves which emanate from the point $(\lambda, x_1, x_2, \ldots)$.
Then,
\begin{equation*}
\mathcal{C}(\lambda)|_{x_\bullet} = \mathcal{C}(0)|_{\chi_\bullet(0)} = C_2 \big( \chi_1(0) \big) = \chi_1(0)^2.
\end{equation*}
Applying the same reasoning again to $\mathcal{A}$, which obeys the same p.d.e.\ as $\mathcal{C}$ but with initial condition $\mathcal{A}(0) = A_1$,
\begin{equation*}
\mathcal{A}(\lambda)|_{x_\bullet} = \mathcal{A}(0)|_{\chi_\bullet(0)} = A_1 \big( \chi_1(0) \big) = \chi_1(0).
\end{equation*}
Therefore, proposition~\ref{prop:square} follows by patching these two equations together.
\end{proof}
\noindent
As a consequence, the polynomials $A_n$ and $C_n$ are also related via
\begin{equation}\label{eq:relation}
C_n = \sum_{m=1}^{n-1} \binom{n-2}{m-1}\, A_m A_{n-m}.
\end{equation}
It then follows that the second degree part of $C_n$ is given by
\begin{equation*}
C^{(2)}_n = \sum_{m=1}^{n-1} \binom{n-2}{m-1}\, x_m x_{n-m}
\end{equation*}
and, applying \eqref{eq:third_degree}, that the fourth degree part can be written as
\begin{equation*}
C_n^{(4)} = \sum_{\substack{i, j, k, l \,\geq\, 1 \\ i + j + k + l = n}} \sum_{q=1}^{k} 2\, \frac{n-l-q-1}{n-l-q-j}\, \binom{n-2}{n-l-1} \binom{n - l - q - 2}{i - 1, j - 1, k - q}\, x_i x_j x_k x_l.
\end{equation*}
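Relation~\eqref{eq:relation} can be checked directly on the polynomial representations. The sketch below carries the same caveat as before: the explicit form of $\mathcal{D}$ is reconstructed from the verbal insertion description (it is validated by the fact that it reproduces the printed $A_3$, $A_5$ and $C_3, \dotsc, C_6$), and $A_{n+1} = \mathcal{D}^n A_1$ as in~\eqref{eq:square}.

```python
from collections import Counter, defaultdict
from math import comb

def D(poly):
    # run-structure operator (reconstructed; monomials are sorted run-length tuples)
    out = defaultdict(int)
    for mono, c in poly.items():
        for m, e in Counter(mono).items():
            rest = list(mono); rest.remove(m)
            out[tuple(sorted(rest + [m + 1]))] += c * e
            for i in range(1, m):
                out[tuple(sorted(rest + [1, i, m - i]))] += c * e
    return dict(out)

def mul(p, q):
    # product of two run-structure polynomials
    out = defaultdict(int)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            out[tuple(sorted(m1 + m2))] += c1 * c2
    return dict(out)

N = 8
A = {1: {(1,): 1}}                    # A_1 = x_1, A_{n+1} = D A_n
for n in range(2, N):
    A[n] = D(A[n - 1])
C = {2: {(1, 1): 1}}                  # C_2 = x_1^2, C_n = D C_{n-1}
for n in range(3, N):
    C[n] = D(C[n - 1])

for n in range(2, N):                 # C_n = sum_m binom(n-2, m-1) A_m A_{n-m}
    rhs = defaultdict(int)
    for m in range(1, n):
        for mono, c in mul(A[m], A[n - m]).items():
            rhs[mono] += comb(n - 2, m - 1) * c
    assert dict(rhs) == C[n]
```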
Similar to the atomic permutations, we find that the alternating circular permutations satisfy (\latinabbr{cf.}\ \cite[\S41]{andre:1895})
\begin{equation*}
Z_{\mathfrak{C}}\bigg(\sum_{i=1}^{2n} 1\bigg) = T_n = 1, 2, 16, 272, 7936, 353792, \dotsc \quad \text{($n \geq 1$, OEIS series \oeis{A000182})}
\end{equation*}
and also $Z_{\mathfrak{C}}\big(2 + \sum_{i=1}^{2n-3} 1\big) = T_n$, where $T_n$ are the \emph{tangent numbers}, the coefficients of the Maclaurin series of $\tan x = T_1 x + T_2 x^3/3! + T_3 x^5/5! + \dotsb$.
Furthermore, from \eqref{eq:relation} we find the relation
\begin{equation*}
T_{n+1} = \sum_{m=0}^n \binom{2n}{2m}\, S_m S_{n-m},
\end{equation*}
which can be traced back to $\tan' x = \sec^2 x$.
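This convolution identity is straightforward to verify numerically from the Seidel--Entringer--Arnold triangle (a standard construction for the up/down numbers, used here only as a convenient source of the $S_m$ and $T_n$):

```python
from math import comb

def zigzag(count):
    # up/down numbers Z_n, with sec x + tan x = sum_n Z_n x^n / n!
    zs, row = [], [1]
    for _ in range(count):
        zs.append(row[-1])
        new = [0]
        for v in reversed(row):
            new.append(new[-1] + v)
        row = new
    return zs

Z = zigzag(14)
S = Z[0::2]              # secant numbers S_0, S_1, ... = 1, 1, 5, 61, ...
T = [None] + Z[1::2]     # tangent numbers T_1, T_2, ... = 1, 2, 16, 272, ...

# T_{n+1} = sum_m binom(2n, 2m) S_m S_{n-m}, i.e. tan' x = sec^2 x
for n in range(6):
    assert T[n + 1] == sum(comb(2 * n, 2 * m) * S[m] * S[n - m] for m in range(n + 1))
```

For instance, $T_4 = \binom{6}{0}\cdot61 + \binom{6}{2}\cdot5 + \binom{6}{4}\cdot5 + \binom{6}{6}\cdot61 = 61+75+75+61 = 272$.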
To conclude this section, we note that the argument of proposition~\ref{prop:square} proves rather more: namely, that $\exp(\lambda \mathcal{D})$ defines a ring homomorphism from $\field[C][x_1, x_2, \dotsc]$ to the ring of formal power series $\field[C][[x_1, x_2, \dotsc]]$. This observation can be used to accelerate computations:
for example, the fact that $A_3 = x_3 +x_1^3$ implies that
\begin{equation*}
\mathcal{A}''(\lambda) = \mathcal{A}(\lambda)^3 + \exp(\lambda\mathcal{D})x_3,
\end{equation*}
which reduces computation of $A_{n+3}=\mathcal{D}^{n+2}x_1$ to the computation of $\mathcal{D}^n x_3$. Once $\mathcal{A}$ is obtained,
we may of course determine $\mathcal{C}$ by squaring.
\section{Linear permutations}
\label{sec:linear}
In the last section we studied the run structures of circular permutations $\mathfrak{C}_n$ and discovered that they can be enumerated in terms of the polynomials $A_n$.
One might ask what the underlying reason for this is.
Circular permutations of $[n]$ have the same run structure as the linear permutations of the multiset $\{1,1,2,\dotsc,n\}$ which begin and end with $1$.
These permutations can then be split into two atomic permutations at the occurrence of their maximal element.
For example, the circular permutation \cperm{14532} can be split into the two atomic permutations \perm{145} of $\{1,4,5\}$ and \perm{5321} of $\{1,2,3,5\}$.
This also gives us the basis of a combinatorial argument for the fact that $\mathcal{C} = \mathcal{A}^2$.
Similarly it is in principle possible to encode the run structures of any subset of permutations using the polynomials $A_n$.
The goal of this section is to show how this may be accomplished for $\mathfrak{S}_S$ for any $S \subset \field[N]$.
As in sections~\ref{sec:atomic} and~\ref{sec:circular}, we want to find polynomials
\begin{equation*}
L_n = \sum_{p \vdash n} Z_\mathfrak{S}(p) \prod_{i=1}^n x_i^{p(i)}
\end{equation*}
that enumerate the run structure of the permutations~$\mathfrak{S}_{n+1}$.
This may be achieved in a two step procedure.
Since every permutation has a unique decomposition into inextendible atomic permutations, we can enumerate the set of permutations according to this decomposition.
The enumeration of permutations by their run structure follows because the enumeration of atomic permutations has already been achieved in section~\ref{sec:atomic}.
The key to our procedure is to understand the factorisation of the run structure into those of atomic permutations. Considering $\sigma \in \mathfrak{S}_n$ as a word, we can write it as the concatenation $\sigma = \alpha \cdot \pi \cdot \omega$, where $\pi$ is the principal atom of~$\sigma$ (see beginning of section~\ref{sec:atomic}) and $\alpha, \omega$ are (possibly empty) subwords of~$\sigma$.
Since the decomposition of~$\sigma$ into its atoms also decomposes its run structure, the complete runs of~$\sigma$ are determined by the runs of
$\alpha \cdot 1$, $\pi$ and $n \cdot \omega$ if $\pi$ is rising, or of
$\alpha \cdot n$, $\pi$ and $1 \cdot \omega$ if $\pi$ is falling.
Let $S_\omega$ be the set of letters in $\omega$ and define $\rho: S_\omega \to S_\omega$ to be the involution mapping the $i$'th smallest element of $S_\omega$ to the $i$'th largest, for all $1 \leq i \leq |S_\omega|$.
Then the run structure of $n \cdot \omega$ is identical to that of $1 \cdot \rho(\omega)$, where $\rho(\omega)$ is obtained by applying $\rho$ letterwise to $\omega$.
Furthermore, in the case $\pi = 1 \,\dotsm\, n$, the combined run structures of $\alpha \cdot 1$ and $n \cdot \omega$ are precisely the run structure of $\alpha \cdot 1 \cdot \rho(\omega)$, while, if $\pi = n \,\dotsm\, 1$, the combined run structures of $\alpha \cdot n$ and $1 \cdot \omega$ precisely form the run structure of $\alpha\cdot n \cdot \rho(\omega)$.
We refer to $\alpha\cdot 1\cdot\rho(\omega)$ or $\alpha\cdot n\cdot\rho(\omega)$ as the \emph{residual permutation}.
Summarising, the run structure of~$\sigma$ may be partitioned into that of~$\pi$ and either $\alpha \cdot 1 \cdot \rho(\omega)$ or $\alpha \cdot n \cdot \rho(\omega)$; accordingly, the monomial for~$\sigma$ factorises into that for the principal atom~$\pi$ and that for the residual permutation.
Therefore, the polynomial enumerating linear permutations by run structure can be given in terms of those enumerating atomic permutations of the same or shorter length and of linear permutations of strictly shorter length.
This argument can be used to give a recursion relation for~$L_n$, which enumerates permutations of $[n+1]$ by their run structure.
Taking into account that the principal atom consists of $m+1$ letters, where $1 \leq m\leq n$, of which $m-1$ may be chosen freely from the set $[2 \,.\,.\, n]$, and that it might be rising or falling, and that the residual permutation may be any linear permutation on a set of cardinality $n-m+1$, we obtain the recursion relation
\begin{equation*}
L_n = 2 \sum_{m=1}^{n} \binom{n-1}{m-1} A_m L_{n-m},
\qquad
L_0=1.
\end{equation*}
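This recursion can be iterated mechanically. The sketch below reuses the dictionary representation of run-structure polynomials (with the same caveat as before: the explicit form of $\mathcal{D}$ is reconstructed from the verbal description, and is confirmed by the printed $A_3$ and $A_5$), and reproduces the explicit expansions of the $L_n$ listed at the end of this section:

```python
from collections import Counter, defaultdict
from math import comb, factorial

def D(poly):
    # run-structure operator (reconstructed; monomials are sorted run-length tuples)
    out = defaultdict(int)
    for mono, c in poly.items():
        for m, e in Counter(mono).items():
            rest = list(mono); rest.remove(m)
            out[tuple(sorted(rest + [m + 1]))] += c * e
            for i in range(1, m):
                out[tuple(sorted(rest + [1, i, m - i]))] += c * e
    return dict(out)

def mul(p, q):
    out = defaultdict(int)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            out[tuple(sorted(m1 + m2))] += c1 * c2
    return dict(out)

A = {1: {(1,): 1}}
for n in range(2, 7):
    A[n] = D(A[n - 1])

L = {0: {(): 1}}                       # L_0 = 1
for n in range(1, 7):
    Ln = defaultdict(int)
    for m in range(1, n + 1):          # L_n = 2 sum_m binom(n-1, m-1) A_m L_{n-m}
        for mono, c in mul(A[m], L[n - m]).items():
            Ln[mono] += 2 * comb(n - 1, m - 1) * c
    L[n] = dict(Ln)
```

The coefficient sums equal $(n+1)!$, the cardinality of $\mathfrak{S}_{n+1}$, as expected.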
Passing to the generating function,
\begin{equation*}
\mathcal{L}(\lambda) \defn \sum_{n=0}^\infty L_n \frac{\lambda^n}{n!},
\end{equation*}
we may deduce that
\begin{equation}\label{eq:linear_de}
\frac{\partial\mathcal{L}}{\partial\lambda} = 2\mathcal{A}(\lambda)\mathcal{L}(\lambda).
\end{equation}
Our main result in this section is:
\begin{proposition}\label{prop:linear}
The run structure of all permutations in $\mathfrak{S}_{n+1}$ is enumerated by
\begin{equation}\label{eq:linear}
L_n = \sum_{p \vdash n} \frac{2^{\abs{p}}}{\ord p}\, \binom{n}{p} \prod_{i=1}^{\abs{p}} A_{p_i},
\quad
L_0 = 1,
\end{equation}
where the sum is over all partitions $p = p_1 + p_2 + \dotsb$ of~$n$, $\abs{p}$ is the number of parts of the partition~$p$, $\ord p$ is the symmetry order of the parts of~$p$ (\latinabbr{e.g.}, for $p = 1+1+2+3+3$ we have $\ord p = 2! 2!$) and $\binom{n}{p}$ is the multinomial with respect to the parts of~$p$.
The generating function for the~$L_n$ is
\begin{equation}\label{eq:linear_gen}
\mathcal{L}(\lambda) \defn \sum_{n=0}^\infty L_n \frac{\lambda^n}{n!}
= \exp \left( 2 \int_0^\lambda \mathcal{A}(\mu)\, \mathrm{d}\mu\right).
\end{equation}
\end{proposition}
\begin{proof}
Equation~\eqref{eq:linear_gen} follows immediately from \eqref{eq:linear_de}, as $\mathcal{L}(0)=1$, whereupon Faà di Bruno's formula~\cite[Eq.~(1.4.13)]{olver:2010} yields~\eqref{eq:linear}.
\end{proof}
To conclude this section, we remark that the first few $L_n$ are given by
\begin{align*}
L_1 & = 2 A_1 \\
L_2 & = 4 A_1^2 + 2 A_2 \\
L_3 & = 8 A_1^3 + 12 A_1 A_2 + 2 A_3 \\
L_4 & = 16 A_1^4 + 48 A_1^2 A_2 + 12 A_2^2 + 16 A_1 A_3 + 2 A_4 \\
L_5 & = 32 A_1^5 + 160 A_1^3 A_2 + 120 A_1 A_2^2 + 80 A_1^2 A_3 + 40 A_2 A_3 + 20 A_1 A_4 + 2 A_5\\
L_6 &= 64 A_1^6 + 480 A_1^4 A_2 + 320 A_1^3 A_3 + 720 A_1^2 A_2^2 + 120 A_1^2 A_4 + 480 A_1 A_2 A_3 + 120 A_2^3 \\&\qquad + 24 A_1 A_5 +
60 A_2 A_4 + 40 A_3^2 + 2A_6.
\end{align*}
Expanding the $A_k$ and writing the $L_n$ instead in terms of $x_i$, we obtain from these
\begin{align*}
L_1 & = 2 x_1 \\
L_2 & = 4 x_1^2 + 2 x_2 \\
L_3 & = 10 x_1^3 + 12 x_1 x_2 + 2 x_3 \\
L_4 & = 32 x_1^4 + 58 x_1^2 x_2 + 12 x_2^2 + 16 x_1 x_3 + 2 x_4 \\
L_5 & = 122 x_1^5 + 300 x_1^3 x_2 + 142 x_1 x_2^2 + 94 x_1^2 x_3 + 40 x_2 x_3 + 20 x_1 x_4 + 2 x_5\\
L_6 & = 544 x_1^6 + 1682 x_1^4 x_2 + 568x_1^3 x_3 + 1284 x_1^2 x_2^2 + 138 x_1^2 x_4 + 556 x_1 x_2 x_3 + 142 x_2^3 \\&\qquad + 24 x_1 x_5 + 60 x_2 x_4 + 40 x_3^2 + 2x_6,
\end{align*}
which show no obvious structure, thereby making proposition \ref{prop:linear} that much more remarkable.
\section{Counting valleys}
\label{sec:counting_valleys}
Instead of enumerating permutations by their run structure, we can count the number of valleys of a given (circular) permutation.
Taken together, the terms of $C_n$ involving a product of $2k$ of the $x_i$ relate precisely to the circular permutations $\mathfrak{C}_n$ with $k$ valleys.
Since any circular permutation in $\mathfrak{C}_n$ can be understood as a permutation of $[3 \,.\,.\, n+1]$ with a prepended $1$ and an appended $2$ (\latinabbr{cf.}\ beginning of section~\ref{sec:linear}), $C_n$ may also be used to enumerate the valleys of ordinary permutations of $[n-1]$.
Namely, terms of $C_{n+1}$ with a product of $2(k+1)$ variables $x_i$ relate to the permutations of $\mathfrak{S}_n$ with $k$ valleys (\latinabbr{i.e.}, terms of $L_{n+1}$ which are a product of $2k$ of the $x_i$).
Let $V(n, k)$ count the number of permutations of $n$ elements with $k$ valleys.
Then we see that the generating function for $V(n, k)$ for each fixed $n \geq 1$ is
\begin{equation*}
K_n(\kappa) \defn \sum_{k=0}^n \kappa^k V(n, k)
= \frac{1}{\kappa} C_{n+1}(\sqrt{\kappa}, \dotsc, \sqrt{\kappa})
\end{equation*}
and we define $K_0(\kappa) \defn 1$.
The first few $K_n$ are
{\savebox\strutbox{$\vphantom{\kappa^2}$}
\begin{align*}
K_1(\kappa) & = 1 \\
K_2(\kappa) & = 2 \\
K_3(\kappa) & = 4 + 2 \kappa \\
K_4(\kappa) & = 8 + 16 \kappa \\
K_5(\kappa) & = 16 + 88 \kappa + 16 \kappa^2 \\
K_6(\kappa) & = 32 + 416 \kappa + 272 \kappa^2,
\end{align*}}%
which coincide with the results in~\cite{rieper:2000}.
In particular, the constants are clearly the powers of $2$, the coefficients of $\kappa$ give the sequence \oeis{A000431} of the OEIS~\cite{oeis} and the coefficients of $\kappa^2$ are given by the sequence \oeis{A000487}.
Likewise, the coefficients of $\kappa^3$ may be checked against the sequence \oeis{A000517}.
In fact, the same polynomials appear in André's work, in which he obtained a generating function
closely related to \eqref{eq:kitaev} below; see \cite[\S158]{andre:1895} (his final formula contains a number of sign errors, and is given in a form in which all quantities are real for $\kappa$ near $0$; there is also an offset, because his polynomial $A_n(\kappa)$ is our $K_{n-1}(\kappa)$).
\begin{proposition}
The bivariate generating function, \latinabbr{i.e.}, the generating function for arbitrary $n$, is
\begin{equation*}
\mathcal{K}(\nu, \kappa) = \sum_{n=0}^\infty K_n(\kappa) \frac{\nu^n}{n!} = 1 + \frac{1}{\kappa} \int_0^\nu \mathcal{C}(\mu)|_{x_1 = x_2 = \dotsb = \sqrt{\kappa}}\; \mathrm{d}\mu
\end{equation*}
and is given in closed form by
\begin{equation}\label{eq:kitaev}
\mathcal{K}(\nu, \kappa) = 1 - \frac{1}{\kappa} + \frac{\sqrt{\kappa - 1}}{\kappa} \tan\big( \nu \sqrt{\kappa - 1} + \arctan(1/\sqrt{\kappa - 1}) \big).
\end{equation}
\end{proposition}
\noindent
This result was found by Kitaev~\cite{kitaev:2007} and in the remainder of this section we will show how it may be derived from the recurrence relation~\eqref{eq:C_recurrence} of $C_n$.
To this end, we first note that $C_{n+1}$ satisfies the useful scaling relation
\begin{equation*}
\lambda^{n+1} C_{n+1}(x_1, x_2, \dotsc, x_n) = C_{n+1}(\lambda x_1, \lambda^2 x_2, \dotsc, \lambda^n x_n).
\end{equation*}
Setting $x_i = x / \lambda = \sqrt{\kappa}$ for all $i$, this implies
\begin{equation*}
\lambda^{n+1} C_{n+1}(\sqrt{\kappa}, \dotsc, \sqrt{\kappa}) = C_{n+1}(x, \lambda x, \dotsc, \lambda^{n-1} x)
\end{equation*}
and we find, by inserting the recurrence relations~\eqref{eq:C_recurrence} and applying the chain rule, that with this choice of variables
\begin{equation*}
\frac{1}{x^2} C_{n+1}(x, \lambda x, \dotsc, \lambda^{n-1} x) = \lambda x \pd{C_n}{x} + x^2 \pd{C_n}{\lambda} + 2 \lambda C_n.
\end{equation*}
Hence, in turn, $K_n(\kappa) = \kappa^{-1} C_{n+1}(\sqrt{\kappa}, \dotsc, \sqrt{\kappa})$ satisfies the recurrence relation
\begin{equation}\label{eq:Kn}
K_n(\kappa) = 2 \kappa (1 - \kappa) K_{n-1}'(\kappa) + \big( 2 + (n - 2) \kappa \big) K_{n-1}(\kappa)
\end{equation}
for $n \geq 2$.
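The recurrence~\eqref{eq:Kn} can be checked against the list of $K_n$ above. A minimal sketch, representing polynomials in $\kappa$ as coefficient lists:

```python
# Polynomials in kappa are coefficient lists [c0, c1, ...].  Starting from
# K_1 = 1, the recurrence K_n = 2k(1-k) K'_{n-1} + (2 + (n-2)k) K_{n-1}
# reproduces the valley-counting polynomials listed in the text.
def next_K(K, n):
    out = [0] * (len(K) + 2)
    dK = [i * c for i, c in enumerate(K)][1:]      # derivative K'_{n-1}
    for i, c in enumerate(dK):                     # 2k(1-k) K'_{n-1}
        out[i + 1] += 2 * c
        out[i + 2] -= 2 * c
    for i, c in enumerate(K):                      # (2 + (n-2)k) K_{n-1}
        out[i] += 2 * c
        out[i + 1] += (n - 2) * c
    while out and out[-1] == 0:                    # trim trailing zeros
        out.pop()
    return out

K = {1: [1]}
for n in range(2, 7):
    K[n] = next_K(K[n - 1], n)
```

In particular, the coefficients of $K_6$ sum to $720 = 6!$, consistent with $K_n(1) = n!$ below.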
For the bivariate generating function $\mathcal{K}$ this, together with $K_0 = K_1 = 1$, implies the p.d.e.
\begin{equation*}
(1 - \nu \kappa) \pd{\mathcal{K}}{\nu} + 2 \kappa (\kappa - 1) \pd{\mathcal{K}}{\kappa} + (\kappa - 2) \mathcal{K} = \kappa - 1,
\end{equation*}
which is to be solved subject to the initial condition $\mathcal{K}(0, \kappa) = 1$.
The above equation may be solved as follows: first, we note that there is a particular integral $1 - 1/\kappa$, so it remains to solve the homogeneous equation.
In turn, using an integrating factor, the latter may be rewritten as
\begin{equation}\label{eq:homog}
(1 - \nu \kappa) \pd{}{\nu} \frac{\kappa \mathcal{K}}{\sqrt{\kappa - 1}} + 2 \kappa (\kappa - 1) \pd{}{\kappa} \frac{\kappa \mathcal{K}}{\sqrt{\kappa - 1}} = 0,
\end{equation}
for which the characteristics obey
\begin{equation*}
\od{\nu}{\kappa} = \frac{1 - \nu \kappa}{2 \kappa (\kappa - 1)}.
\end{equation*}
Solving this equation, we find that
\begin{equation*}
\nu \sqrt{\kappa - 1} + \arctan \frac{1}{\sqrt{\kappa - 1}} = \text{const}
\end{equation*}
along characteristics; as~\eqref{eq:homog} asserts that $\kappa \mathcal{K}/\sqrt{\kappa - 1}$ is constant on characteristics, this gives
\begin{equation*}
\mathcal{K}(\nu, \kappa) = 1 - \frac{1}{\kappa} + \frac{\sqrt{\kappa - 1}}{\kappa} f\big( \nu \sqrt{\kappa - 1} + \arctan(1/\sqrt{\kappa - 1}) \big)
\end{equation*}
for some function $f$.
Imposing the condition $\mathcal{K}(0, \kappa) = 1$, it is plain that $f = \tan$, and we recover Kitaev's generating function~\eqref{eq:kitaev}.
To close this section, we note that \eqref{eq:Kn} has the consequence that $K_n(1) = n K_{n-1}(1)$ for all $n \geq 2$ and hence that $C_{n+1}(1,\dotsc,1) = K_n(1) = n!$ for such $n$, and indeed all $n\geq 1$, because $C_2(1, 1) = K_1(1) = 1$. The generating function obeys
\begin{equation*}
\mathcal{C}(\lambda)|_{x_\bullet=1}
= \sum_{n=0}^\infty (n+1)! \frac{\lambda^n}{n!}
= (1-\lambda)^{-2}
\end{equation*}
for all non-negative $\lambda < 1$ from which it also follows that
\begin{equation}\label{eq:geom_A}
\mathcal{A}(\lambda)|_{x_\bullet=1} = (1-\lambda)^{-1}
\end{equation}
(as $A_1(1) = 1$, we must take the \emph{positive} square root) and hence $A_{n}|_{x_\bullet=1} = (n-1)!$ for all $n \geq 1$.
This gives a consistency check on our results: the coefficients in the expression for $A_n$ sum to $(n-1)!$, the cardinality of $\mathfrak{A}^\pm_{n+1}$, while those in $C_n$ sum to the cardinality of $\mathfrak{C}_n$.
Furthermore, inserting \eqref{eq:geom_A} into the generating function $\mathcal{L}(\lambda)$ in \eqref{eq:linear_gen}, we find
\begin{equation*}
\mathcal{L}(\lambda)|_{x_\bullet=1}
= \sum_{n=0}^\infty L_n(1,\dotsc,1) \frac{\lambda^n}{n!}
= (1-\lambda)^{-2},
\end{equation*}
and thus $L_n(1,\dotsc,1) = (n+1)!$, which is the cardinality of $\mathfrak{S}_{n+1}$.
\section{Other applications}
\label{sec:app}
The original motivation for this work arose in quantum field theory, in computations related to the probability distribution of measurement outcomes for quantities such as averaged energy densities~\cite{fewster:2012b}. One actually
computes the cumulants $\kappa_n$ ($n \in \field[N]$) of the distribution: $\kappa_1 = 0$, while for each $n \geq 2$, $\kappa_n$ is given as a sum indexed by circular permutations $\sigma$ of $[n]$ such that $\sigma(1) = 1$ and $\sigma(2) < \sigma(n)$, in which each permutation contributes a term that is a multiplicative function of its run structure:
\begin{equation*}
\kappa_n = \sum_{\sigma} \Phi(\sigma)
\end{equation*}
where $\Phi(\sigma)$ is a product over the runs of $\sigma$, with each run of length $r$ contributing a factor $y_r$. Owing to the restriction $\sigma(2) < \sigma(n)$, precisely half of the circular permutations are admitted, and so $\kappa_n=\frac{1}{2}C_n(y_1,y_2,\ldots,y_n)$.
Thus the cumulant generating function is
\begin{align*}
W(\lambda)
&\defn \sum_{n=2}^\infty \kappa_n \frac{\lambda^n}{n!} = \frac{1}{2} \int_0^\lambda \mathrm{d}\mu\, (\lambda - \mu) \mathcal{C}(\mu)|_{x_\bullet=y_\bullet} \\
&= \frac{1}{2} \int_0^\lambda \mathrm{d}\mu\, (\lambda - \mu) \mathcal{A}(\mu)|_{x_\bullet=y_\bullet}^2
\end{align*}
and the moment generating function is $\exp W(\lambda)$ in the usual way. This expression makes sense as a formal power series, but also as
a convergent series within an appropriate radius of convergence.
The values of $y_n$ depend on the physical quantity involved and the way it is averaged.
In one case of interest
\begin{align*}
y_n
&= 8^n \int_{(\field[R]^+)^{\times n}} \mathrm{d} k_1\, \mathrm{d} k_2\, \cdots\, \mathrm{d} k_n\, k_1 k_2 \,\cdots\, k_n \exp \left[ -k_1-\left(\sum_{i=1}^{n-1} |k_{i+1}-k_i|\right)-k_{n} \right] \\
&= 2^n \sum_{r_{n-1}=0}^{2} \sum_{r_{n-2}=0}^{2+r_{n-1}} \cdots \sum_{r_1=0}^{2+r_2} \prod_{k=1}^{n-1} (1+r_k) \\
&=2, 24, 568, 20256, 966592, \dotsc \qquad (n \geq 1)
\end{align*}
(the sums of products must be interpreted as an overall factor of unity in the case $n=1$).
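The second, purely combinatorial form of $y_n$ is easy to evaluate; a small sketch reproducing the listed values:

```python
def y(n):
    # nested sums: 2^n * sum_{r_{n-1}=0}^{2} ... sum_{r_1=0}^{2+r_2} prod_k (1+r_k);
    # rec(k, upper) sums over the variables r_k, ..., r_1
    def rec(k, upper):
        if k == 0:
            return 1                # empty product, overall factor of unity (n = 1)
        return sum((1 + r) * rec(k - 1, 2 + r) for r in range(upper + 1))
    return 2 ** n * rec(n - 1, 2)
```

For instance, $y_3 = 8\,(1\cdot6 + 2\cdot10 + 3\cdot15) = 8 \cdot 71 = 568$.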
Numerical investigation leads to a remarkable identity
\begin{equation*}
\mathcal{A}(\lambda)|_{x_\bullet=y_\bullet} = \frac{2}{1-12\lambda} \qquad \textrm{(conjectured)}
\end{equation*}
with exact agreement for all terms so far computed (checked up to $n=65$).
We do not have a proof for this statement, but the conjecture seems fairly secure. For example, we have shown above that $A_5= x_5 + 7 x_3 x_1^2 + 11 x_2^2 x_1 + 5 x_1^5$; substituting for $x_n$ the values of $y_n$ obtained above, we find $A_5= 995328$ which coincides with the fourth order coefficient in the expansion
\begin{equation*}
\frac{2}{1-12\lambda}= 2+ 24\lambda + 576\frac{\lambda^2}{2!}+20736\frac{\lambda^3}{3!} +995328\frac{\lambda^4}{4!}+O(\lambda^5).
\end{equation*}
In~\cite{fewster:2012b}, this conjecture was used to deduce
\begin{equation*}
\exp\big(W(\lambda)\big) = \mathrm{e}^{-\lambda/6}(1-12\lambda)^{-1/72} \qquad \textrm{(conjectured)},
\end{equation*}
which is the moment generating function of a shifted Gamma distribution.
The other generating functions of interest, with these values for the $x_k$ are
\begin{equation*}
\mathcal{C}(\lambda)|_{x_\bullet=y_\bullet} = \frac{4}{(1-12\lambda)^2},
\qquad
\mathcal{L}(\lambda)|_{x_\bullet=y_\bullet} = (1-12\lambda)^{-1/3}
\qquad \textrm{(conjectured)}.
\end{equation*}
For example, we have $C_5 = 2 x_4 x_1 + 6 x_3 x_2 + 16 x_1^3 x_2 =
165888$ and $L_5 = 122 x_1^5 + 300 x_1^3 x_2 + 142 x_1 x_2^2 + 94 x_1^2 x_3 + 40 x_2 x_3 + 20 x_1 x_4 + 2 x_5 =3727360$, to be compared with the terms of order $\lambda^3$ and $\lambda^5$, respectively, in
the expansions
\begin{align*}
\frac{4}{(1-12\lambda)^2} &= 4+ 96\lambda+ 3456\frac{\lambda^2}{2!}+ 165888\frac{\lambda^3}{3!}+ 9953280\frac{\lambda^4}{4!}+ O(\lambda^5), \\
(1-12\lambda)^{-1/3} &=
1+ 4\lambda+ 64\frac{\lambda^2}{2!}+ 1792\frac{\lambda^3}{3!}+ 71680\frac{\lambda^4}{4!}+ 3727360\frac{\lambda^5}{5!}+O(\lambda^6).
\end{align*}
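These comparisons amount to simple integer arithmetic. A sketch, with the series coefficients written in closed form from the conjectured generating functions:

```python
from math import factorial

# substituted values y_1, ..., y_5 from the nested-sum formula above
y1, y2, y3, y4, y5 = 2, 24, 568, 20256, 966592

# the printed polynomials A_5, C_5 and L_5, evaluated at x_k = y_k
A5 = y5 + 7 * y3 * y1**2 + 11 * y2**2 * y1 + 5 * y1**5
C5 = 2 * y4 * y1 + 6 * y3 * y2 + 16 * y1**3 * y2
L5 = (122 * y1**5 + 300 * y1**3 * y2 + 142 * y1 * y2**2 + 94 * y1**2 * y3
      + 40 * y2 * y3 + 20 * y1 * y4 + 2 * y5)

# Maclaurin coefficients (of u^n/n!) of the conjectured closed forms:
#   2/(1-12u):       2 * 12^n * n!
#   4/(1-12u)^2:     4 * (n+1) * 12^n * n!
#   (1-12u)^(-1/3):  4^n * 1*4*7*...*(3n-2)
assert A5 == 2 * 12**4 * factorial(4)            # order lambda^4 of A
assert C5 == 4 * 4 * 12**3 * factorial(3)        # order lambda^3 of C
assert L5 == 4**5 * 1 * 4 * 7 * 10 * 13          # order lambda^5 of L
```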
A natural question is whether there are other sequences that can be substituted for the $x_k$ to produce generating functions with simple closed forms.
To close, we give three further examples, with the corresponding generating functions computed.
The first has already been encountered in section \ref{sec:counting_valleys} and corresponds to the case $x_k = 1$ for all $k \in \field[N]$.
The second utilizes the alternating Catalan numbers: setting
\begin{equation*}
x_{2k+1}= \frac{(-1)^k}{k+1} \binom{2k}{k}, \quad(k \geq 0),
\qquad
x_{2k}=0, \quad (k \geq 1)
\end{equation*}
and thus $A_{2k} = 0$, we obtain, again experimentally,
\begin{equation*}
\mathcal{C}(\lambda) = \mathcal{A}(\lambda) = 1,
\quad
\mathcal{L}(\lambda) = \mathrm{e}^{2\lambda}
\qquad \textrm{(conjectured)}
\end{equation*}
with exact agreement checked up to permutations of length $n = 65$.
For example, one sees easily that with $x_1=1$, $x_3=-1$, $x_5=2$ and $x_2=x_4=0$, the expressions $A_k$ (for $2\le k\le 6$) and $C_k$ (for $3\le k\le 6$) given in sections~\ref{sec:atomic} and~\ref{sec:circular} vanish,
while $A_1=C_2=1$; likewise, $L_k=2^k$ for $1\le k\le 6$.
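The alternating-Catalan substitution is also a matter of direct evaluation; the following sketch plugs $x_1=1$, $x_2=0$, $x_3=-1$, $x_4=0$, $x_5=2$, $x_6=0$ into the polynomials printed earlier:

```python
x1, x2, x3, x4, x5, x6 = 1, 0, -1, 0, 2, 0   # alternating Catalan numbers

# printed polynomials from the atomic and circular sections
A3 = x3 + x1**3
A5 = x5 + 7 * x3 * x1**2 + 11 * x2**2 * x1 + 5 * x1**5
C3 = 2 * x2 * x1
C4 = 2 * x2**2 + 2 * x3 * x1 + 2 * x1**4
C5 = 2 * x4 * x1 + 6 * x3 * x2 + 16 * x1**3 * x2
C6 = (2 * x5 * x1 + 8 * x4 * x2 + 6 * x3**2 + 62 * x1**2 * x2**2
      + 26 * x1**3 * x3 + 16 * x1**6)
assert A3 == A5 == C3 == C4 == C5 == C6 == 0

# the printed expansions of L_1, ..., L_6 evaluate to 2, 4, ..., 64
L = [
    2 * x1,
    4 * x1**2 + 2 * x2,
    10 * x1**3 + 12 * x1 * x2 + 2 * x3,
    32 * x1**4 + 58 * x1**2 * x2 + 12 * x2**2 + 16 * x1 * x3 + 2 * x4,
    122 * x1**5 + 300 * x1**3 * x2 + 142 * x1 * x2**2 + 94 * x1**2 * x3
        + 40 * x2 * x3 + 20 * x1 * x4 + 2 * x5,
    544 * x1**6 + 1682 * x1**4 * x2 + 568 * x1**3 * x3 + 1284 * x1**2 * x2**2
        + 138 * x1**2 * x4 + 556 * x1 * x2 * x3 + 142 * x2**3
        + 24 * x1 * x5 + 60 * x2 * x4 + 40 * x3**2 + 2 * x6,
]
assert L == [2, 4, 8, 16, 32, 64]
```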
Third, André's classical result on alternating permutations (\latinabbr{cf.}\ the last paragraph of section~\ref{sec:atomic} and the penultimate paragraph of section~\ref{sec:circular}) gives the following: setting
\begin{equation*}
x_1=1, \qquad
x_{k}=0, \quad (k \geq 2)
\end{equation*}
we have, using \eqref{eq:square} and \eqref{eq:linear_gen},
\begin{equation*}
\mathcal{A}(\lambda) = \sec\lambda,
\quad
\mathcal{C}(\lambda)=\sec^2\lambda,
\quad
\mathcal{L}(\lambda) = (\sec\lambda + \tan\lambda)^2.
\end{equation*}
It seems highly likely to us that many other examples can be extracted from the structures we have described.
Moreover, we remark that it is possible to implement a merge-type sorting algorithm, called \emph{natural merge sort} \cite[Chap. 5.2.4]{knuth:1998}, based upon splitting permutations of an ordered set $S$ into their runs, which are monotone (alternately ascending and descending) sequences $S_i \subset S$.
Repeatedly merging these subsequences, one ultimately obtains an ordered sequence.
For example, first, we split the permutation \perm{542368719} into \perm{542}, \perm{368}, \perm{71} and \perm{9}.
Then, we reverse every second sequence (namely, the descending ones): \perm{542} $\mapsto$ \perm{245} and \perm{71} $\mapsto$ \perm{17}.
Depending on the implementation of the merging in the following step, this `reversal' step can be avoided.
Last, we merge similarly to the standard merge sort: \perm{245} $\vee$ \perm{368} $\mapsto$ \perm{234568}, \perm{17} $\vee$ \perm{9} $\mapsto$ \perm{179} and finally \perm{234568} $\vee$ \perm{179} $\mapsto$ \perm{123456789}.
Natural merge sort is a fast sorting algorithm for data with preexisting order.
Using the methods developed above to enumerate permutations by their run structure, it is in principle possible to give average (instead of best- and worst-case) complexity estimates for such an algorithm.
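A compact sketch of such an algorithm (one possible implementation of the scheme sketched above, not taken from \cite{knuth:1998}; the merging step simply reuses the standard library's lazy two-way merge):

```python
from heapq import merge

def natural_merge_sort(a):
    """Natural merge sort: split into maximal monotone runs (reversing the
    descending ones), then repeatedly merge adjacent runs pairwise."""
    if not a:
        return []
    runs, i, n = [], 0, len(a)
    while i < n:                                   # 1. split into runs
        j = i + 1
        if j < n and a[j] < a[i]:                  # descending run
            while j + 1 < n and a[j + 1] < a[j]:
                j += 1
            runs.append(a[i:j + 1][::-1])          # the 'reversal' step
            i = j + 1
        else:                                      # ascending run
            while j < n and a[j] > a[j - 1]:
                j += 1
            runs.append(a[i:j])
            i = j
    while len(runs) > 1:                           # 2. pairwise merge passes
        paired = [list(merge(runs[k], runs[k + 1]))
                  for k in range(0, len(runs) - 1, 2)]
        if len(runs) % 2:
            paired.append(runs[-1])
        runs = paired
    return runs[0]
```

On the example in the text, the first pass yields the runs $[2,4,5]$, $[3,6,8]$, $[1,7]$, $[9]$, and the merge passes produce $[2,3,4,5,6,8]$, $[1,7,9]$ and finally the sorted sequence.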
\begin{acknowledgments}
D.~S. is very grateful for the kind hospitality extended by the Department of Mathematics at the University of York where this work was carried out.
C.~J.~F. thanks Nik Ruškuc and Vincent Vatter for useful comments on an early version of the text.
\end{acknowledgments}
\small
\section{Missing Proofs}
\begin{proof}[Proof of~\autoref{thm:main1}]
Let $\nodes_\bound = \{x_1, \dots, x_k\}$. Recall that an access request $\ba = (a_1, \dots, a_k)$ corresponds to the query $\varphi[a_1/x_1, \dots, a_k/x_k]$; in other words, we substitute each occurrence of a bound variable $x_i$ with the constant $a_i$. Define the hypergraph $\hgraph_\bound = (\nodes_\bound, \edges_\bound)$, where $\edges_\bound = \{F \cap \nodes_\bound \mid F \in \edges\}$. We say that an access request $\ba$ is {\em valid} if it is an answer for the query $\varphi_\bound$ corresponding to $\hgraph_\bound$, i.e. $\ba \in \varphi_\bound(D)$. We can construct hash indexes of linear size $O(|D|)$ during the preprocessing phase so that we can check whether any access request is valid in constant time $O(1)$.
For every relation $R_F$ in the query, let $R_F(\ba) = \sigma_{x_i =a_i \mid x_i \in \nodes_\bound \cap F}(R_F)$. In other words, $R_F(\ba)$ is the subrelation that we obtain once we filter out the tuples that satisfy the selection condition implied by the access request.
If $\slack$ is the slack for the fractional edge cover $\bu$, define $\hat{u}_F = u_F / \slack$ for every $F \in \edges$. As we have discussed earlier, $\hat{\bu} = \{\hat{u}_F\}_{F \in \edges}$ is a fractional edge cover for the query $\varphi[a_1/x_1, \dots, a_k/x_k]$: indeed, it is necessary to cover only the non-bound variables, since all bound variables are replaced by constants in the query. Hence, using a worst-case optimal join algorithm, we can answer the access request $\varphi[a_1/x_1, \dots, a_k/x_k]$ in time
%
$$ T(\ba) = \prod_{F \in \edges} |R_F(\ba)|^{u_F/\slack}.$$
We can now describe the preprocessing phase and the data structure we build. The data structure simply creates a hash index. Let $J$ be the set of valid access requests such that $T(\ba) > T$. For every $\ba \in J$, we add to the hash index a key-value entry, where the key is $\ba$ and the value is the (Boolean) answer to the access request $\varphi[a_1/x_1, \dots, a_k/x_k]$.
We claim that the answer time using the above data structure is at most $O(T)$. Indeed, we first check whether $\ba$ is valid, which we can do in constant time. If it is not valid, we simply output no. If it is valid, we probe the hash index. If $\ba$ exists in the hash index, we obtain the answer in time $O(1)$ by reading the value of the corresponding entry. Otherwise, we know that $T(\ba) \leq T$ and hence we can compute the answer to the access request in time $O(T)$.
It remains to bound the size of the data structure we constructed during the preprocessing phase. Since the size is $O(|J|)$, we will bound the size of $J$. Indeed, we have:
%
\allowdisplaybreaks
\begin{align*}
\hspace{7em} T \cdot |J| \leq \sum_{\ba \in J} T(\ba)
& = \sum_{\ba \in J} \prod_{F \in \edges} |R_F(\ba)|^{u_F/\slack} \\
& = \sum_{\ba \in J} 1^{1-1/\slack} \cdot \left( \prod_{F \in \edges} |R_F(\ba)|^{u_F} \right)^{1/\slack} \\
& \leq \left(\sum_{\ba \in J} 1 \right)^{1-1/\slack} \cdot \left( \sum_{\ba \in J} \prod_{F \in \edges} |R_F(\ba)|^{u_F} \right)^{1/\slack} \\
& \leq |J|^{1-1/\slack} \cdot \prod_{F \in \edges} |R_F|^{u_F/\slack}
\end{align*}
Here, the first inequality follows directly from the definition of the set $J$. The second inequality is H{\"o}lder's inequality. The third inequality is an application of the query decomposition lemma from~\cite{skewstrikesback}. Dividing both sides by $|J|^{1-1/\slack}$ and raising to the power $\slack$, we obtain the desired bound $|J| \leq T^{-\slack} \prod_{F \in \edges} |R_F|^{u_F}$.
\end{proof}
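The preprocessing/answering scheme of this proof can be illustrated by a toy sketch for a single bound variable. The query, relation names and cost measure below are hypothetical simplifications (a Boolean query $\exists y\, R(a, y) \wedge S(y)$, with plain filtering standing in for a worst-case optimal join); only the structure of the argument is retained: precompute answers for the "heavy" access requests, and evaluate the "light" ones directly within the time budget.

```python
from collections import defaultdict

def preprocess(R, S, threshold):
    """Index heavy access requests: precompute the Boolean answer for every
    valid a whose direct evaluation cost |sigma_{x=a} R| exceeds the threshold."""
    by_x = defaultdict(list)
    for x, y in R:
        by_x[x].append(y)
    S_set = set(S)
    heavy = {a: any(y in S_set for y in ys)
             for a, ys in by_x.items() if len(ys) > threshold}
    return by_x, S_set, heavy

def answer(a, by_x, S_set, heavy):
    if a not in by_x:                  # invalid request: output "no" immediately
        return False
    if a in heavy:                     # O(1) hash-index lookup
        return heavy[a]
    # light request: direct evaluation touches at most `threshold` tuples
    return any(y in S_set for y in by_x[a])
```

The space/time trade-off is visible directly: raising the threshold shrinks the index but lengthens the worst-case direct evaluation, exactly as in the bound on $|J|$ above.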
\begin{proof}[Proof of~\autoref{thm:main2}]
We introduce some new notation. Let $\tree = (\htree, A)$ denote the $\nodes_\bound$-connex tree decomposition with $f$ as its $\delta$-width, and $h$ as its $\delta$-height. For each node $t \in \htree \setminus A$, we denote by $\manc(t)$ the union of all the bags for the nodes that are the ancestors of $t$, and set $\nodes_\bound^t = \bag_t \cap \manc(t)$ and $\nodes_\free^t = \bag_t \setminus \nodes_\bound^t$. Intuitively, $\nodes_\bound^t$ (resp.\ $\nodes_\free^t$) are the bound (resp.\ free) variables for the bag $t$ as we traverse the tree top-down.
For example, in~\autoref{fig:cfhw}\textbf{(b)}, $A = \bag_{t_1}$, $\manc(t_2) = \{x_1, x_6\},$ $\nodes_\bound^{t_2} = \{x_1, x_6\}$ and $\nodes_\free^{t_2} = \{x_2, x_5\}$. For node $t_3$, $\manc(t_3) = \{x_2, x_5, x_1, x_6\}$, $\nodes_\bound^{t_3} = \{x_2, x_5\}$ and $\nodes_\free^{t_3} = \{x_3, x_4\}$.
\paragraph*{Data Structure Construction} We apply~\autoref{thm:main1} to each bag (except the root bag) in $\htree$ with the following parameters: $(i)$ $\mH^t = (\nodes^t, \edges^t)$ where $\nodes^t = \mB_{t}$ and $\edges^t = \edges_{\mB_{t}}$; $(ii)$ bound variables are $\nodes_\bound^t = \manc(t) \cap \mB_{t}$; and $(iii)$ the fractional edge cover $\mathbf{u}$ corresponding to bag $\mB_{t}$. The space requirement for the data structure corresponding to bag $\mB_{t}$ is $S = O(|D| + |D|^{\rho_t(\delta)}) \leq O(|D| + |D|^f)$. This follows directly from the definition of the $\delta$-width of bag $t$, which is assumed to be at most $f$. Recall that the data structure stores a list of all access requests $\mathbf{a}$ defined over the schema $\nodes_\bound^t$ whose answering time exceeds $O(|D|^{\delta(t)})$. We call this stored list $\mL(t)$.
\paragraph*{Query Answering} We now describe the query answering algorithm. Let $C = \{x_1, \dots , x_k\}$ and access request $\mathbf{a} = (a_1, \dots, a_k)$. We first need to check whether $\mathbf{a}$ is valid. If the request is not valid, we can simply output no. This can be done in constant time after creating hash indexes of size $O(|D|)$ during the preprocessing phase. If the access request is valid, the second step is to check whether $Q(\mathbf{a})$ is true or false. Let $\mathcal{P}$ denote the set of bags that are children of the root bag. Then, for each bag $\mB_t \in \mathcal{P}$, we check whether $\pi_{\nodes_\bound^t} (\mathbf{a}) \in \mL(t)$. If it is stored, it means that the running time of $\pi_{\nodes_\bound^t} (\mathbf{a})$ is greater than $O(|D|^{\delta(t)})$. If the entry for $\pi_{\nodes_\bound^t} (\mathbf{a})$ is false in the data structure, we can output false immediately, since we know that no output tuple can be formed by the subtree rooted at bag $\mB_t$.
If there is no entry for $\pi_{\nodes_\bound^t} (\mathbf{a})$ in $\mL(t)$, this means that the answering time for evaluating the join at node $t$ is $T \leq O(|D|^{\delta(t)})$. Thus, we can evaluate the join for the bag by fixing $\nodes_\bound^t$ to $\pi_{\nodes_\bound^t} (\mathbf{a})$ using any worst-case optimal join algorithm, which guarantees that the total running time is at most $O(|D|^{\delta(t)})$. If no output is generated, the algorithm outputs false, since no output tuple can be formed by the subtree rooted at $\mB_t$. If output is generated, then there can be at most $O(|D|^{\delta(t)})$ tuples. For each of these tuples, we recursively proceed to the children of bag $\mB_t$ and repeat the algorithm; each fixing of variables at bag $t$ acts as the bound variables for its children. In the worst case, all bags in $\htree$ may require join processing. Since the query size is a constant, the number of root-to-leaf paths is also constant. Hence, the answering time is dominated by the maximum-weight root-to-leaf path $P$, i.e., the $\delta$-height of the decomposition, giving $T = O(|D|^{\sum_{t \in P} \delta(t)}) = O(|D|^h)$.
\end{proof}
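To make the top-down answering procedure concrete, the following is a minimal Python sketch for the $5$-path query: the middle bag caches answers for access requests whose on-the-fly evaluation cost exceeds a threshold (playing the role of $O(|D|^{\delta(t)})$). The function names and the cost estimate are illustrative assumptions, not the paper's implementation.

```python
def index(R, pos=0):
    """Hash a binary relation on column `pos`."""
    idx = {}
    for t in R:
        idx.setdefault(t[pos], set()).add(t[1 - pos])
    return idx

def build_path5(R1, R2, R3, R4, R5, threshold):
    """Structure for phi^{bb}(x1,x6) = R1(x1,x2) ^ ... ^ R5(x5,x6).
    Bag t2 binds (x1,x6) and produces (x2,x5); bag t3 checks the middle
    join.  Requests to t3 costlier than `threshold` are precomputed."""
    adj1, adj2, adj3, adj4 = index(R1), index(R2), index(R3), index(R4)
    radj5 = index(R5, pos=1)                     # x6 -> {x5}

    def mid_join(x2, x5):
        # bag t3: does there exist x3, x4 with R2(x2,x3), R3(x3,x4), R4(x4,x5)?
        return any(x5 in adj4.get(x4, ())
                   for x3 in adj2.get(x2, ())
                   for x4 in adj3.get(x3, ()))

    x5_dom = {b for _, b in R4}
    cache = {}
    for x2 in adj2:
        cost = sum(len(adj3.get(x3, ())) for x3 in adj2[x2])
        if cost > threshold:                     # "heavy" access requests
            for x5 in x5_dom:
                cache[(x2, x5)] = mid_join(x2, x5)

    def answer(a, b):
        # top-down traversal: fix x2 from R1(a,.) and x5 from R5(.,b)
        for x2 in adj1.get(a, ()):
            for x5 in radj5.get(b, ()):
                hit = cache.get((x2, x5))
                if hit is True or (hit is None and mid_join(x2, x5)):
                    return True
        return False
    return answer
```

Setting `threshold = 0` caches every non-trivial middle join, recovering the fully materialized extreme of the tradeoff.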
\section{Optimizing Tree Decompositions} \label{appendix:optimizations}
{In this section, we discuss two key optimizations that lead to improved tradeoffs. The first optimization relates to the space usage of the root bag. Observe that the space requirement of~\autoref{thm:main2} was defined as the maximum over all bags except $A$ (which is the connected subset of tree nodes whose union is exactly equal to $C$). Our key observation is that if the space budget is $S = O(|D|^{\max_{t \in A} \rho^*_t})$, then we can achieve an answering time of $T = O(1)$. This is because we have materialized all possible valid access requests (and their true/false answer) and, similar to the query answering phase, given an access request, we can check in constant time whether the request is valid and find its Boolean answer.
\begin{figure}
\centering
\tikzset{every picture/.style={line width=0.75pt}}
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw (320,24.69) -- (200,215) ;
\draw (320,24.69) -- (444,212.69) ;
\draw (200,215) -- (444,212.69) ;
\draw (319,72.69) -- (245,189.69) ;
\draw (319,72.69) -- (395,188.69) ;
\draw (245,189.69) -- (395,188.69) ;
\draw (320,24.69) -- (319,72.69) ;
\draw (245,189.69) -- (200,215) ;
\draw (395,188.69) -- (444,212.69) ;
\draw (324,7) node [anchor=north west][inner sep=0.75pt] [align=left] {$\displaystyle \textcolor[rgb]{0.82,0.01,0.11}{x}$};
\draw (187,211) node [anchor=north west][inner sep=0.75pt] [align=left] {$\displaystyle \textcolor[rgb]{0.82,0.01,0.11}{y}$};
\draw (451,207) node [anchor=north west][inner sep=0.75pt] [align=left] {$\displaystyle \textcolor[rgb]{0.82,0.01,0.11}{z}$};
\draw (331,68) node [anchor=north west][inner sep=0.75pt] [align=left] {$\displaystyle p$};
\draw (245,192.69) node [anchor=north west][inner sep=0.75pt] [align=left] {$\displaystyle q$};
\draw (390,192) node [anchor=north west][inner sep=0.75pt] [align=left] {$\displaystyle r$};
\end{tikzpicture}
\caption{Query graph with all relations of size $|E|$. $x,y,z$ are the bound variables.} \label{triangle}
\end{figure}
\begin{example}
Consider the query $\varphi^{\bound \bound \bound}(x,y,z) = R(x,y) \land S(y,z) \land T(x,z) \land U(p,q) \land V(q,r) \land W(p,r) \land$ $D(x,p) \land E(y,p) \land F(r,z)$ as shown in~\autoref{triangle}. Applying our main result to this query gives a tradeoff $S = O(|E|^3/\tau)$ and $T=O(\tau)$ by assigning an edge cover of $1/2$ to the relations forming the two triangles. The $C$-connex decomposition will consist of the root bag containing variables $C = \{x,y,z\}$ and a child bag containing free variables $p,q,r$ and bound variables $x,y,z$. However, note that materializing the root bag, which consists of relations $R,S,T$, takes space only $|E|^{3/2}$. Thus, the tradeoff is only useful when $S = O(|E|^3/\tau) \leq |E|^{3/2}$.
\end{example}
Our second optimization relates to complete materialization of all tuples over free variables in a bag. Consider a bag $t \notin A$ in the $C$-connex decomposition.~\autoref{thm:main2} sets the space usage of $t$ to $S = O(|D|^{\rho_t(\delta)})$ and the answering time to $T = O(|D|^{\delta(t)})$. The key observation is that one can achieve $S = O(|D|^{\rho^*_t(\nodes^t_\free)})$ and $T = O(|D|^{\rho^*_t(\nodes^t_\free)})$, where $\rho^*_t(\nodes^t_\free)$ is the fractional edge cover number over the free variables in $t$, by materializing the join over the free variables of bag $t$ and storing a bit that tells whether a tuple $s$ participates in some join for the bags below it in the decomposition. Thus, in the query answering phase, we can check whether some tuple $s$ is true/false by iterating over all materialized tuples and returning the answer. We demonstrate this with the following example.
\begin{example}
Consider the query $\varphi^{\bound \bound \bound \bound}(x_1, x_2,x_3, x_4) = R(x_1,y) \land S(x_2,y) \land T(y,z) \land U(x_3,z) \land$ $V(x_4,z)$. The $C$-connex decomposition contains two bags: the root bag with variables $C = \{x_1, x_2,x_3, x_4\}$ and a child bag containing free variables $y,z$ and bound variables $x_1, x_2,x_3, x_4$. The tradeoff obtained is $S = O(|E|^4/\tau^2)$ (note that slack $\alpha=2$) and $T= O(\tau)$. However, applying our optimization here, we observe that the fractional edge cover number for free variables $y,z$ is $1$. Thus, during the query answering phase, given an access request, we can iterate over all $O(|E|)$ tuples in relation $T$ and check whether the access request and some tuple $s \in T$ join or not, a constant time operation for each $s$. Thus, we can achieve answering time $T = O(|E|)$ using linear space $S = O(|E|)$, which is an improvement over the tradeoff obtained directly from~\autoref{thm:main2}.
\end{example}
Thus, for each bag $t \notin A$, we can set $\rho_t(\delta) = \rho^*_t(\nodes^t_\free)$ and $\delta(t) = \rho^*_t(\nodes^t_\free)$ to achieve the best possible width and height for the decomposition.
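A minimal Python sketch of this optimization for the query $\varphi^{\bound \bound \bound \bound}$ from the example above (illustrative code under our own representation choices, not the paper's implementation):

```python
def build_star_path(R, S, T, U, V):
    """Sketch for phi(x1,x2,x3,x4) = R(x1,y) ^ S(x2,y) ^ T(y,z) ^
    U(x3,z) ^ V(x4,z).  The join over the free variables {y,z} is just
    relation T here; each access request is answered by one O(|T|) scan
    of the materialized tuples, with O(1) hash lookups per tuple."""
    Rs, Ss, Us, Vs = set(R), set(S), set(U), set(V)
    materialized = list(set(T))        # join over free variables y, z
    def answer(a1, a2, a3, a4):
        return any((a1, y) in Rs and (a2, y) in Ss and
                   (a3, z) in Us and (a4, z) in Vs
                   for y, z in materialized)
    return answer
```

This realizes the claimed linear-space, linear-time point of the tradeoff: $S = O(|E|)$ for the hash sets plus the materialized list, and $T = O(|E|)$ per request.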
}
\section{General Space-Time Tradeoffs}
\label{sec:main}
\subsection{Space-Time Tradeoffs via Worst-case Optimal Algorithms}
Let $\varphi^{\eta}$ be an adorned query, and let $\hgraph = (\nodes,\edges)$ be the corresponding hypergraph. Let $\nodes_\bound$ denote the bound variables in the head of the query. For any fractional edge cover $\bu$ of $\nodes$, we define the {\em slack} of $\bu$ as:
$ \slack(\bu) := \min_{x \in \nodes \setminus \nodes_\bound} \left( \sum_{F: x \in F} u_F\right). $
In other words, the slack is the maximum factor by which we can scale down the fractional cover $\bu$ so that it remains a valid edge cover of the non-bound variables in the query\footnote{We will omit the parameter $\bu$ from the notation of $\alpha$ whenever it is clear from the context.}. Hence $\{u_F/\slack(\bu)\}_{F \in \edges}$ is a fractional cover of the nodes in $S = \nodes \setminus \nodes_\bound$. We always have $\slack(\bu) \geq 1$.
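The definition translates directly into a small helper; the hypergraph representation (edges as frozensets, a cover as a dict) is our own choice for illustration.

```python
def slack(edges, cover, bound):
    """Slack of a fractional edge cover, per the definition above: the
    minimum, over non-bound variables x, of the total cover weight on
    edges containing x.  `edges` is an iterable of frozensets, `cover`
    maps each edge to its weight u_F, and `bound` is V_bound."""
    free = {x for F in edges for x in F} - set(bound)
    return min(sum(cover[F] for F in edges if x in F) for x in free)
```

For the star query below with unit weights on all $k$ atoms, the helper returns $k$, matching the example.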
\begin{example}
Consider $\varphi^{\bound \dots \bound}(y_1, \dots, y_k) = R_1(x,y_1)\land R_2(x,y_2) \land \dots \land R_k(x, y_k)$ with the optimal fractional edge cover $\bu$, where $u_i=1$ for $i \in \{1, \dots, k\}$. The slack is $\slack(\bu) = k$, since the fractional edge cover $\hat{\bu}$, where $\hat{u}_i = u_i/k = 1/k$, covers the only non-bound variable $x$.
\end{example}
We can now state our first main theorem.
\begin{theorem}\label{thm:main1}
Let $\varphi^{\eta}$ be a Boolean adorned query with hypergraph $(\nodes,\edges)$. Let $\bu$ be any fractional edge cover of $\nodes$.
Then, for any input database $D$, we can construct a data structure that answers any access request in time $O(T)$ and takes space
$$S = {O}\left(|D| + \prod_{F \in \edges} |R_F|^{u_F} / T^\slack \right)$$
\end{theorem}
We should note that~\autoref{thm:main1} applies when the relation sizes are different; this gives us sharper upper bounds compared to the case where each relation is bounded by the total size of the input. Indeed, using $|D|$ as an upper bound on each relation, we obtain a space requirement of $O(|D|^{\rho^*}/T^\slack)$ for achieving answering time $O(T)$, where $\rho^*$ is the fractional edge cover number.
Since $\slack \geq 1$, this gives us at worst a linear tradeoff between space and time, i.e., $S \cdot T = O(|D|^{\rho^*})$. For cases where $\slack > 1$, we can obtain a much better tradeoff.
\begin{example}
Continuing the star query example from earlier, $\varphi^{\bound \dots \bound}(y_1, \dots, y_k) = R_1(x,y_1) \land R_2(x,y_2) \land \dots \land R_k(x, y_k)$, we obtain an improved tradeoff: $S \cdot T^{k} = O(|D|^k)$. Note that this result matches the best-known space-time tradeoff for the $k$-\textsf{Set Disjointness } problem~\cite{goldstein2017conditional}. (Note that all atoms use the same relation symbol $R$, so $|R_i| = |D|$ for every $i=1, \dots, k$.)
\end{example}
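The tradeoff in this example is the one achieved by the classic heavy/light technique for set disjointness. A sketch for $k=2$ (our own illustrative code, with `tau` playing the role of the time budget $T$):

```python
from itertools import combinations

def build_disjointness(sets, tau):
    """Heavy/light sketch of the S * T^2 = O(N^2) tradeoff for 2-Set
    Disjointness: answers for pairs of "heavy" sets (size > tau) are
    precomputed; any query touching a light set is answered by scanning
    that set in O(tau) time with hash lookups into the other set."""
    heavy = [i for i, s in enumerate(sets) if len(s) > tau]
    cache = {(i, j): bool(sets[i] & sets[j])
             for i, j in combinations(heavy, 2)}
    def intersects(i, j):
        i, j = min(i, j), max(i, j)
        if (i, j) in cache:
            return cache[(i, j)]
        small, big = sorted((sets[i], sets[j]), key=len)
        return any(x in big for x in small)   # a light set: O(tau) scan
    return intersects
```

With $N$ the total input size, there are at most $N/\tau$ heavy sets, so the cache holds $O((N/\tau)^2)$ bits, matching $S \cdot T^2 = O(N^2)$.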
We next present a few more applications of~\autoref{thm:main1}.
\begin{example}[Edge Triangles Detection] For the Boolean version, it was shown in~\cite{goldstein2017conditional} that -- conditioned on the strong set disjointness conjecture -- any data structure that achieves answering time $T$ needs space $S = \Omega(|E|^2/T^2)$. A matching upper bound can be constructed by using a fractional edge cover $\mathbf{u}(R,S,T) = (1,1,0)$ with slack $\slack = 2$. Thus,~\autoref{thm:main1} can be applied to achieve answering time $T$ using space $S = O(|E|^2/T^2)$. Careful inspection reveals that a different fractional edge cover $\mathbf{u}(R,S,T) = (1/2,1/2,1/2)$, with slack $\slack = 1$, achieves a better tradeoff. Hence,~\autoref{thm:main1} can be applied to obtain the following corollary.
\end{example}
\begin{corollary}
For a graph $G=(V,E)$, there exists a data structure of size $S = O(|E|^{3/2}/T)$ that can answer the edge triangles detection problem in $O(T)$ time.
\end{corollary}
The data structure implied by~\autoref{thm:main1} is always better when $T \leq \sqrt{|E|}$ \footnote{All answering times $T > \sqrt{|E|}$ are trivial to achieve using linear space by using the data structure for $T' = \sqrt{|E|}$ and holding the result back until time $T$ has passed. However, in certain practical settings, such as transmitting a data structure over a network, it is beneficial to construct sublinear-sized structures. In those settings, $T > \sqrt{|E|}$ is useful.}, thus refuting the conditional lower bound in~\cite{goldstein2017conditional}. We should note that this does not imply that the strong set disjointness conjecture is false, as we have observed an error in the reduction used by~\cite{goldstein2017conditional}.
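A heavy/light sketch of an edge-triangle detection structure in this spirit (illustrative Python; the degree threshold `tau` plays the role of the answering time):

```python
def build_edge_triangle(E, tau):
    """Given a query edge (u,v), does some triangle contain it?  Edges
    whose cheaper endpoint has more than tau neighbors are precomputed;
    the rest are answered by scanning the smaller adjacency list in
    O(tau) time with hash lookups into the larger one."""
    adj = {}
    for u, v in E:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    def common(u, v):
        small, big = sorted((adj.get(u, set()), adj.get(v, set())),
                            key=len)
        return any(w in big for w in small)
    cache = {frozenset(e): common(*e) for e in E
             if min(len(adj[e[0]]), len(adj[e[1]])) > tau}
    def in_triangle(u, v):
        key = frozenset((u, v))
        return cache[key] if key in cache else common(u, v)
    return in_triangle
```

Since there are at most $O(|E|/\tau)$ heavy endpoints, the cache size stays sublinear in the materialize-everything extreme, consistent with the $S = O(|E|^{3/2}/T)$ corollary when $\tau = T$.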
\begin{example}[Square Detection]
Beyond triangles, we consider the edge square detection problem, which checks
whether a given edge belongs in a square pattern in a graph $G = (V,E)$,
$ \varphi^{\bound\bound}_\square(x_1, x_2) = R_1(x_1,x_2) \land R_2(x_2,x_3) \land R_3(x_3,x_4) \land R_4(x_4,x_1).$
Using the fractional edge cover $\bu = (1/2,1/2,1/2,1/2)$ with slack $\slack =1$,
we obtain a tradeoff $S = O(|E|^2/T)$.
\end{example}
\subsection{Space-Time Tradeoffs via Tree Decompositions}
\autoref{thm:main1} does not always give us the optimal tradeoff. For the $k$-reachability problem with the adorned query $\varphi^{\bound \bound} (x_1, x_{k+1}) = R_1(x_1, x_2) \land \dots \land R_k(x_k, x_{k+1})$, \autoref{thm:main1} gives a tradeoff $S \cdot T = O(|D|^{\lceil (k+1)/2\rceil})$, by taking the optimal fractional edge cover number $\rho^* = \lceil (k+1)/2\rceil$ and slack $\slack=1$, which is far from efficient. In this section, we will show how to leverage tree decompositions to further improve the space-time tradeoff in~\autoref{thm:main1}.
Again, let $\varphi^{\eta}$ be an adorned query, and let $\hgraph = (\nodes,\edges)$ be the corresponding hypergraph. Given a set of nodes $C \subseteq \nodes$, a $C$-connex tree decomposition of $\hgraph$ is a pair $(\htree,A)$, where $(i)$ $\htree$ is a tree decomposition of $\hgraph$, and $(ii)$ $A$ is a connected subset of the tree nodes such that the union of their variables is exactly $C$. For our purposes, we choose $C = \nodes_\bound$. Given a $\nodes_\bound$-connex tree decomposition, we orient the tree from some node in $A$. We then define the bound variables for the bag $t$, $\nodes^t_\bound$ as the variables in $\bag_t$ that also appear in the bag of some ancestor of $t$. The free variables for the bag $t$ are the remaining variables in the bag, $\nodes^t_\free = \bag_t \setminus \nodes^t_\bound$.
\begin{example}
Consider the $5$-path query $\varphi^{\bound \bound} (x_1, x_{6}) = R_1(x_1, x_2) \land \dots \land R_5(x_5, x_{6})$. Here, $x_1$ and $x_6$ are the bound variables.~\autoref{fig:cfhw} shows the unconstrained decomposition as well as the $C$-connex decomposition for $\varphi^{\bound \bound} (x_1, x_{6})$, where $C = \{x_1, x_6\}$. The root bag contains the bound variables $x_1, x_6$. Bag $\mB_{t_2}$ contains $x_1, x_6$ as bound variables and $x_2, x_5$ as free variables. Bag $\mB_{t_3}$ contains $x_2, x_5$ as bound variables and $x_3, x_4$ as free variables.
\begin{figure}[t]
\begin{subfigure}{0.45\linewidth}
\centering
\scalebox{.9}{\begin{tikzpicture}
\tikzset{edge/.style = {->,> = latex'},
vertex/.style={circle, thick, minimum size=5mm}}
\def0.25{0.25}
\begin{scope}[fill opacity=1]
\draw[] (0,-2) ellipse (0.8cm and 0.33cm) node {\small $x_1, x_2$};
\draw[] (0,-3) ellipse (0.8cm and 0.33cm) node {\small $x_2, x_3$};
\draw[] (0,-4) ellipse (0.8cm and 0.33cm) node {\small $x_3, x_4$};
\draw[] (0,-5) ellipse (0.8cm and 0.33cm) node {\small $x_4, x_5$};
\draw[] (0,-6) ellipse (0.8cm and 0.33cm) node {\small $x_5, x_6$};
\draw[edge] (0,-2.33) -- (0,-2.65);
\draw[edge] (0,-3.33) -- (0,-3.65);
\draw[edge] (0,-4.33) -- (0,-4.65);
\draw[edge] (0,-5.33) -- (0,-5.65);
\end{scope}
\end{tikzpicture}}
\label{subfig:1}
\end{subfigure}
\begin{subfigure}{0.45\linewidth}
\vspace{2em}
\centering
\scalebox{.9}{\begin{tikzpicture}
\tikzset{edge/.style = {->,> = latex'},
vertex/.style={circle, thick, minimum size=5mm}}
\def0.25{0.25}
\begin{scope}[fill opacity=1]
\draw[fill=black!10] (5,-2) ellipse (0.8cm and 0.33cm) node {\small ${\color{red} x_1, x_6}$};
\node[vertex] at (6.1,-2) {$\bag_{t_1}$};
\draw[] (5,-3.5) ellipse (1.2cm and 0.33cm) node {\small \small $\prq{x_2, x_5}{\color{red} x_1, x_6}$};
\node[vertex] at (6.5,-3.5) {$\bag_{t_2}$};
\draw[] (5,-5) ellipse (1.2cm and 0.33cm) node {\small \small $\prq{x_3, x_4}{\color{red} x_2, x_5}$};
\node[vertex] at (6.5,-5) {$\bag_{t_3}$};
\draw[edge] (5,-2.33) -- (5,-3.2);
\draw[edge] (5,-3.83) -- (5,-4.65);
\end{scope}
\end{tikzpicture}
}
\end{subfigure}
\caption{The hypergraph $\hgraph$ for a length-5 path query,
along with two tree decompositions: the left is unconstrained, and the right is a $C$-connex one with
$C = \{x_1, x_6 \}$. Bound variables are colored red. Nodes in $A$ are colored grey.}
\label{fig:cfhw}
\end{figure}
\end{example}
Next, we define a parameterized notion of width for the $\nodes_\bound$-connex tree decomposition. The width is parameterized by a function $\delta$ that maps each node $t$ in the tree to a non-negative number, such that $\delta(t)=0$ whenever $t \in A$. The intuition here is that we will spend $O(|D|^{\delta(t)})$ time in node $t$ while answering the access request. The parameterized width of a bag $\bag_t$ is now defined as:
$ \rho_t(\delta) = \min_{\bu} \left( \sum_{F} u_F - \delta(t) \cdot \slack \right)$
where $\bu$ is a fractional edge cover of the bag $\bag_t$, and $\slack$ is the slack (on the bound variables of the bag). The $\delta$-width of the decomposition is then defined as $\max_{t \notin A} \rho_t(\delta)$. Finally, we define the $\delta$-height as the maximum-weight path from the root to any leaf, where the weight of a path $P$ is $\sum_{t \in P} \delta(t)$. We now have all the necessary machinery to state our second main theorem.
\begin{theorem}\label{thm:main2}
Let $\varphi^\eta$ be a Boolean adorned query with hypergraph $\hgraph = (\nodes,\edges)$. Consider any $\nodes_\bound$-connex tree decomposition of $\hgraph$. For some parametrization $\delta$ of the decomposition, let $f$ be its $\delta$-width, and $h$ be its $\delta$-height. Then, for any input database $D$, we can construct a data structure that answers any access request in time $T = O(|D|^h)$ using space $S = O(|D| + |D|^f)$.
\end{theorem}
The function $\delta$ allows us to trade off between time and space. If we set $\delta(t) = 0$ for every node $t$ in the tree, then the $\delta$-height becomes $O(1)$, while the $\delta$-width equals the fractional hypertree width of the decomposition. As we increase the values of $\delta$ in each bag, the $\delta$-height increases while the $\delta$-width decreases, i.e., the answering time $T$ increases while the space decreases. Additionally, we note that the tradeoff from~\autoref{thm:main2} is at least as good as the one from~\autoref{thm:main1}. Indeed, we can always construct a tree decomposition where all variables reside in a single node of the tree; in this case, we recover exactly the tradeoff from~\autoref{thm:main1}. Due to a lack of space, we refer the reader to~\autoref{appendix:optimizations} for details.
\begin{example}
We continue with the $5$-path query. Since $\bag_{t_1} = \{x_1, x_6\} \in A$, we assign $\delta(t_1)=0$. For $\bag_{t_2} = \{x_1,x_2,x_5,x_6\}$, the only valid fractional edge cover assigns weight 1 to both $R_1,R_5$ and has slack 1. Hence, if we assign $\delta(t_2) = \tau$ for some parameter $\tau$, the width is $2-\tau$. For $\bag_{t_3} = \{x_2,x_3,x_4, x_5\}$, the only fractional cover also assigns weight 1 to both $R_2, R_4$, again with slack $1$. Assigning $\delta(t_3) = \tau$, the width becomes $2-\tau$ for $t_3$ as well. Hence, the $\delta$-width of the tree decomposition is $2-\tau$, while the $\delta$-height is $2 \tau$. Plugging this into~\autoref{thm:main2} gives us a tradeoff with answering time $T = O(|E|^{2\tau})$ and space usage $S = O(|E|+|E|^{2-\tau})$, which matches the state-of-the-art result in~\cite{goldstein2017conditional}. The above argument generalizes to the $k$-path query, with answering time $T = O(|E|^{(k-1)\tau/2})$ and space usage $S = O(|E|+|E|^{2-\tau})$.
\end{example}
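The bookkeeping in this example can be checked mechanically. A small helper (our own, assuming a fixed cover per bag rather than minimizing over all covers):

```python
def delta_width(covers, slacks, delta):
    """delta-width: max over non-root bags t of sum_F u_F - delta(t)*slack.
    `covers` maps each non-root bag to its total cover weight, `slacks`
    to its slack, and `delta` to delta(t)."""
    return max(covers[t] - delta[t] * slacks[t] for t in covers)

def delta_height(children, delta, root):
    """delta-height: maximum root-to-leaf sum of delta(t), computed by a
    recursive traversal of the child map."""
    sub = [delta_height(children, delta, c) for c in children.get(root, [])]
    return delta[root] + max(sub, default=0)
```

For the $5$-path decomposition above with $\delta(t_2) = \delta(t_3) = \tau$, the helpers return width $2-\tau$ and height $2\tau$, as in the example.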
\begin{example} Consider a variant of the square detection problem: given two vertices, the goal is to decide whether they occur at two opposite corners of a square, which can be captured by the following adorned query:
$$ \varphi^{\bound \bound}(x_1,x_3) = R_1(x_1, x_2) \land R_2(x_2, x_3) \land R_3(x_3, x_4) \land R_4(x_4,x_1).$$
\autoref{thm:main1} gives a tradeoff with answering time $O(T)$ and space $O(|E|^2/T)$. But we can obtain a better tradeoff using~\autoref{thm:main2}. Indeed, consider the tree decomposition with a root bag $t_1$ with $\bag_{t_1} = \{x_1,x_3\}$, and two children $t_2, t_3$ of $t_1$ with $\bag_{t_2} = \{x_1,x_2, x_3\}$ and $\bag_{t_3} = \{x_1,x_3, x_4\}$. For $\bag_{t_2}$, we can see that if we assign a weight of $1$ to both hyperedges, we get a slack of $2$. Hence, if $\delta(t_2) = \tau$, the $\delta$-width is $2-2\tau$. Similarly for $t_3$, we assign $\delta(t_3) = \tau$, for a $\delta$-width of $2-2\tau$. Applying~\autoref{thm:main2}, we obtain a tradeoff with time $T = O(|E|^{\tau})$ (since both root-leaf paths have only one node) and space $S = O(|E| + |E|^{2-2\tau})$. So the space usage can be improved from $O(|E|^2/T)$ to $O(|E|^2/T^2)$.
\end{example}
\subsection{Extension to CQs with Negation}
\begin{figure}
\tikzset{every picture/.style={line width=0.75pt}}
\centering
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw[fill=black!10] (267,49) .. controls (267,37.95) and (288.71,29) .. (315.5,29) .. controls (342.29,29) and (364,37.95) .. (364,49) .. controls (364,60.05) and (342.29,69) .. (315.5,69) .. controls (288.71,69) and (267,60.05) .. (267,49) -- cycle ;
\draw (215,114) .. controls (215,102.95) and (260.22,94) .. (316,94) .. controls (371.78,94) and (417,102.95) .. (417,114) .. controls (417,125.05) and (371.78,134) .. (316,134) .. controls (260.22,134) and (215,125.05) .. (215,114) -- cycle ;
\draw (337,180) .. controls (337,168.95) and (358.71,160) .. (385.5,160) .. controls (412.29,160) and (434,168.95) .. (434,180) .. controls (434,191.05) and (412.29,200) .. (385.5,200) .. controls (358.71,200) and (337,191.05) .. (337,180) -- cycle ;
\draw (205,179) .. controls (205,167.95) and (226.71,159) .. (253.5,159) .. controls (280.29,159) and (302,167.95) .. (302,179) .. controls (302,190.05) and (280.29,199) .. (253.5,199) .. controls (226.71,199) and (205,190.05) .. (205,179) -- cycle ;
\draw[->] (315.5,69) -- (316,94) ;
\draw[->] (314,134) -- (253.5,159) ;
\draw[->] (314,134) -- (388.5,160) ;
\draw (300,45) node [anchor=north west][inner sep=0.75pt] [align=left] {$\displaystyle \color{red}{x_{1} ,x_{6}}$};
\draw (240,104) node [anchor=north west][inner sep=0.75pt] [align=left] {$\displaystyle \textcolor[rgb]{0,0,0}{\ x_{2} ,\ x_{3} ,\ x_{4} ,\ x_{5} \ |}\color{red} { \hspace{0.6em} x_1, x_6}$};
\draw (233,169) node [anchor=north west][inner sep=0.75pt] [align=left] {$\displaystyle \textcolor[rgb]{0,0,0}{|} \color{red} {\hspace{0.6em} x_2, x_3}$};
\draw (371,171) node [anchor=north west][inner sep=0.75pt] [align=left] {$\displaystyle \textcolor[rgb]{0,0,0}{|} \color{red} { \hspace{0.6em} x_4, x_5}$};
\end{tikzpicture}
\caption{$C$-connex decomposition for~\autoref{ex:2}.} \label{fig:decom}
\end{figure}
{In this section, we present a simple but powerful extension of our result to adorned Boolean CQs with negation. Given a query $\varphi \in CQ^\neg$, we build the data structure from~\autoref{thm:main2} for $\varphi^+$, but impose two constraints on the decomposition: $(i)$ no leaf node contains any free variable, and $(ii)$ for every negated relation $R^-$, all variables of $R^-$ must appear together as bound variables in some leaf node. In other words, there exists a leaf node whose bag contains $\vars{R^-}$. It is easy to see that such a decomposition always exists. Indeed, we can fix the root bag to be $C = \mathcal{V}_\bound$, its child bag to have free variables $\vars{\varphi^+} \setminus C$ and bound variables $C$, and the leaf bag, connected to the child of the root, to have bound variables $\vars{\varphi^-}$ and no free variables. Observe that the bag containing the free variables of $\varphi^+$ can be covered using only the positive atoms, since $\varphi$ is safe. The intuition is the following: during the query answering phase, we wish to find the join result over all variables in $\mathcal{V}_\free$ before reaching the leaf nodes; then, we can check in $O(1)$ time whether each tuple satisfies the negated atoms. The next example shows the application of the algorithm to adorned path queries containing negation.
\begin{example} \label{ex:2}
Consider the query $Q^{\bound \bound}(x_1, x_6) = R(x_1, x_2) \land \neg S(x_2, x_3) \land T(x_3, x_4) \land \neg U(x_4,x_5) \land$ $V(x_5, x_6)$. Using the decomposition in~\autoref{fig:decom}, we can now apply~\autoref{thm:main2} to obtain the tradeoff $S = O(|D|^3/\tau)$ and $T = O(\tau)$. Both leaf nodes only require linear space since a single atom covers the variables. Given an access request $x_1 \leftarrow a, x_6 \leftarrow b$, we check whether the answer for this request has been materialized or not. If not, we proceed to the query answering phase and find at most $O(\tau)$ answers after evaluating the join in the middle bag. For each of these answers, we can now check in constant time whether the tuples formed by values for $x_2, x_3$ and $x_4, x_5$ are not present in relations $S$ and $U$ respectively.
\end{example}
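A Python sketch of the answering phase in this example: the positive atoms are joined top-down, and the negated relations are hashed and checked in $O(1)$ per candidate tuple. In the positive join, $x_2$ and $x_3$ (and likewise $x_4$ and $x_5$) are connected only through a negated atom, hence the cross product. This is illustrative code, not the paper's implementation.

```python
def answer_neg_path(R, S, T, U, V, a, b):
    """Q^{bb}(x1,x6) = R(x1,x2) ^ !S(x2,x3) ^ T(x3,x4) ^ !U(x4,x5) ^
    V(x5,x6), for access request x1 <- a, x6 <- b."""
    S, U = set(S), set(U)              # hash the negated relations
    adjR, radjV = {}, {}
    for x, y in R:
        adjR.setdefault(x, set()).add(y)
    for x, y in V:
        radjV.setdefault(y, set()).add(x)
    for x2 in adjR.get(a, ()):
        for x3, x4 in T:
            if (x2, x3) in S:          # negated atom S rejects this pair
                continue
            for x5 in radjV.get(b, ()):
                if (x4, x5) not in U:  # negated atom U is satisfied
                    return True
    return False
```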
For adorned queries where $\mathcal{V}_\bound \subseteq \vars{\varphi^-}$, we can further simplify the algorithm. In this case, we no longer need to create a constrained decomposition since the check to see if the negated relations are satisfied or not can be done in constant time at the root bag itself. Thus, we can directly build the data structure from~\autoref{thm:main2} using the query $\varphi^+$.
\begin{example}[Open Triangle Detection] \label{ex:3} Consider the query $\varphi^{\bound \bound}(x_2, x_3) = R_1(x_1, x_2) \land \neg R_2(x_2, x_3) \land R_3(x_1, x_3)$, where $\varphi^-$ is $\neg R_2(x_2, x_3)$ and $\varphi^+$ is $R_1(x_1, x_2) \land R_3(x_1, x_3)$, with the adorned view $\varphi^{+\bound \bound}(x_2, x_3) = R_1(x_1, x_2) \land R_3(x_1, x_3)$. Observe that $\{x_2, x_3\} \subseteq \vars{\varphi^-}$. We apply~\autoref{thm:main2} to obtain the tradeoff $S = O(|E|^2/ \tau^2)$ and $T = O(\tau)$, with root bag $C = \{x_2, x_3\}$, its child bag with $\mathcal{V}_\bound = C$ and $\mathcal{V}_\free = \{x_1\}$, and the leaf bag with $\mathcal{V}_\bound = C$ and $\mathcal{V}_\free = \emptyset$. Given an access request $x_2 \leftarrow a, x_3 \leftarrow b$, we check whether the answer for this request has been materialized. If not, we traverse the decomposition and evaluate the join to check whether there exists a connecting value for $x_1$. For the last bag, we simply check whether $(a,b)$ exists in $R_2$ in $O(1)$ time.
\end{example}
\introparagraph{A note on optimality} It is easy to see that the algorithm obtained for Boolean CQs with negation is conditionally optimal, assuming the optimality of~\autoref{thm:main2}. Indeed, if all negated relations are empty, the join query is equivalent to $\varphi^+$, and the algorithm simply applies~\autoref{thm:main2} to $\varphi^+$. In~\autoref{ex:3}, assuming relation $R_2$ is empty, the query is equivalent to set intersection, whose tradeoff is conjectured to be optimal.
}
\section{Conclusion} \label{sec:conclusion}
In this paper, we investigated the tradeoffs between answering time and the space required by a data structure to answer Boolean queries. Our main contribution is a unified algorithm that recovers the best known results for several Boolean queries of practical interest. We then apply our main result to improve upon the state-of-the-art algorithms for the four-path query, which is subsequently used to improve the tradeoff for all path queries of length greater than four and to show conditional lower bounds. Several questions remain open; we describe the ones we find particularly engaging.
\textbf{Unconditional lower bounds.} It remains an open problem to prove unconditional lower bounds on the space required for answering Boolean star and path queries in the RAM model. For instance, $2$-\textsf{Set Disjointness } can be answered in constant time by materializing all answers using $\Theta(|D|^2)$ space, but no lower bound rules out achieving this with sub-quadratic space.
\textbf{Improved approximate distance oracles.} It would be interesting to investigate whether our ideas can be applied to existing algorithms for constructing distance oracles to improve their space requirement.
\textbf{More general space-time bounds.} The first question is to study the tradeoff between space and answering time (and delay guarantees) for arbitrary non-Boolean hierarchical queries and path queries. Using some of our techniques, it may be possible to smartly materialize a certain subset of joins that could be used to achieve better answering time guarantees.
\section{Preliminaries}
\label{sec:framework}
In this section we present the basic notation and terminology.
\renewcommand{\mS}{\mathbf{s}}
\introparagraph{Data Model} A schema $\bx = (x_1, \dots , x_n)$ is a non-empty ordered set of distinct variables. Each variable $x_i$ has a discrete domain $\domain(x_i)$. A tuple $t$ over schema $\bx$ is an element from $\domain(\bx) = \domain(x_1) \times \dots \times \domain(x_n)$. A relation $R$ over schema $\bx$ (denoted $R(\bx)$) is a function $R : \domain(\bx) \rightarrow \mathbb{Z}$ such that the multiplicity $R(t)$ is non-zero for finitely many $t$. A tuple $t$ exists in $R$, denoted by $t \in R$, if $R(t) > 0$. The size of relation $R$, denoted as $|R|$, is the size of set $\setof{t}{t \in R}$. A database $D$ is a set of relations and the size of the database $|D|$ is the sum of sizes of all its relations. Given a tuple $t$ over schema $\bx$ and a set of variables $\mS \subseteq \bx$, $t[\mS]$ denotes the restriction of $t$ to $\mS$ and the values of $t[\mS]$ follows the same variable ordering as $\mS$. We also define the {\em selection} operator $\sigma_{\mS = t} (R) = \setof{u \in R}{ u[\mS] = t}$ and {\em projection} operator $\pi_{\mS} (R) = \setof{u[\mS]}{ u \in R}$.
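For concreteness, the selection and projection operators can be sketched in Python, with tuples over an ordered schema and set semantics (multiplicities ignored); the representation choices are ours.

```python
def project(R, schema, keep):
    """pi_keep(R): restrict every tuple of R to the variables in `keep`,
    in the order given by `keep`.  `schema` is the ordered variable list."""
    idx = [schema.index(v) for v in keep]
    return {tuple(t[i] for i in idx) for t in R}

def select(R, schema, assignment):
    """sigma_{S=t}(R): keep the tuples of R that agree with `assignment`,
    a dict mapping a subset of the schema's variables to constants."""
    idx = {schema.index(v): c for v, c in assignment.items()}
    return {t for t in R if all(t[i] == c for i, c in idx.items())}
```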
\smallskip
\introparagraph{Conjunctive Queries} A {\em Conjunctive Query} (CQ) is an expression of the form
$$ \label{eq:q}
\varphi(\by) = \exists_\bz R_1(\bx_1) \land R_2(\bx_2) \land \ldots \land R_n(\bx_n).
$$
Expressions $\varphi(\by), R_1(\bx_1), R_2(\bx_2), \ldots, R_n(\bx_n)$ are called {\em atoms} or {\em relations}. The atom $\varphi(\by)$ is the {\em head} of the query, while the atoms $R_i(\bx_i)$ form the {\em body}. Here $\vars{\varphi}$ is the set of all variables occurring in $\varphi$, i.e., $\by \cup \bz \cup \bx_1 \cup \dots \cup \bx_n$. The {\em existentially} quantified variables $\bz$ are the variables $\vars{\varphi} \setminus \by$. Throughout the paper, we will omit the existentially quantified part whenever $\by$ and the $\bx_i$ are mentioned in the query. A CQ is {\em full} if every variable in the body also appears in the head (a.k.a. {\em quantifier-free}), and {\em Boolean} if the head contains no variables, i.e., it is of the form $\varphi()$ (a.k.a. {\em fully-quantified}). We will typically use the symbols $x,y,z,\dots$ to denote variables, and $a,b,c,\dots$ to denote constants.
Given an input database $D$, we use $\varphi(D)$ to denote the result of the query $\varphi$ over the database. In this paper, we will consider CQs that have no constants and no repeated variables in the same atom. Such a query can be represented equivalently as a {\em hypergraph} $\mathcal{H}_\varphi = (\nodes_\varphi, \edges_\varphi)$, where $\nodes_\varphi$ is the set of variables, and for each hyperedge $F \in \edges_\varphi$ there exists a relation $R_F$ with variables $F$.
\begin{example}
Suppose that we have a directed graph $G$ that is represented through a binary relation $R(x,y)$: this means that there exists an edge from node $x$ to node $y$. We can compute the pairs of nodes that are connected by a directed path of length $k$ using the following CQ, which we call a {\em path query}: $
P_{k}(x_1, x_{k+1}) = R(x_1, x_2) \land R(x_2, x_3) \land \dots \land R(x_k, x_{k+1}).
$
\end{example}
A CQ with negation, denoted $CQ^\neg$, is a CQ where some of the atoms can be negative, i.e., $\neg R_i(\bx_i)$ is allowed. For $\varphi \in CQ^\neg$, we denote by $\varphi^+$ the conjunction of the positive atoms in $\varphi$. A $CQ^\neg$ is said to be \emph{safe} if every variable appears in at least one positive atom. In this paper, we restrict our scope to the class of safe $CQ^\neg$, a standard assumption~\cite{wei2003containment, nash2004processing} ensuring that query results are well-defined and do not depend on domains.
\smallskip
\introparagraph{Join Size Bounds}
Let $\mathcal{H} = (\nodes, \edges)$ be a hypergraph.
A weight assignment $\bu = (u_F)_{F \in \edges}$
is called a {\em fractional edge cover} of $S \subseteq \nodes$ if
$(i)$ for every $F \in \edges, u_F \geq 0$ and $(ii)$ for every
$x \in S, \sum_{F:x \in F} u_F \geq 1$.
The {\em fractional edge cover number} of $S$, denoted by
$\rho^*_{\mathcal{H}}(S)$ is the minimum of
$\sum_{F \in \edges} u_F$ over all fractional edge covers of $S$.
We write $\rho^*(\mathcal{H}) = \rho^*_{\mathcal{H}}(\nodes)$. In a celebrated result, Atserias, Grohe and Marx~\cite{AGM} proved that
for every fractional edge cover $\bu$ of $\nodes$, the size
of the join result is bounded by the {\em AGM inequality}:
$$
|\Join_{F \in \edges} R_F| \leq \prod_{F \in \edges} |R_F|^{u_F}
$$
The above bound is constructive~\cite{skewstrikesback,ngo2012worst}: there exists an algorithm that computes the result of $\Join_{F \in \edges} R_F$ in $O(\prod_{F \in \edges} |R_F|^{u_F})$ time for every fractional edge cover $\bu$ of $\nodes$.
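As a concrete sanity check, the following Python sketch (purely illustrative; the toy relations and the naive nested-loop join are our own choices, not the constructive algorithm cited above) evaluates the triangle query $\varphi(x,y,z) = R(x,y) \land S(y,z) \land T(z,x)$ and compares its output size against the AGM bound for the fractional edge cover $u_R = u_S = u_T = 1/2$:

```python
from itertools import product

# Toy instance (our own choice): the triangle query
#   phi(x, y, z) = R(x, y) /\ S(y, z) /\ T(z, x)
# with fractional edge cover u_R = u_S = u_T = 1/2, which covers
# every variable and yields the AGM bound (|R| * |S| * |T|)^(1/2).
dom = range(3)
R = S = T = {(i, j) for i in dom for j in dom}   # complete binary relation

def triangle_join(R, S, T):
    """Naive nested-loop evaluation of the triangle join (sketch only)."""
    return {(x, y, z)
            for (x, y), (y2, z) in product(R, S)
            if y == y2 and (z, x) in T}

result = triangle_join(R, S, T)
agm_bound = (len(R) * len(S) * len(T)) ** 0.5
print(len(result), agm_bound)  # 27 27.0 -- the bound is tight here
```

On this complete instance the bound holds with equality, illustrating that the AGM inequality cannot be improved in the worst case.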
\smallskip
\introparagraph{Tree Decompositions}
Let $\mathcal{H} = (\nodes, \edges)$ be a hypergraph of a CQ $\varphi$.
A {\em tree decomposition} of $\mathcal{H}$ is a tuple
$(\htree, (\bag_t)_{t \in V(\htree)})$ where $\htree$ is a tree, and
every $\bag_t$ is a subset of $\nodes$, called the {\em bag} of $t$, such that
%
\begin{itemize}
\item Each edge in $\edges$ is contained in some bag; and
\item For each variable $x \in \nodes$, the set of nodes $\{t \mid x \in \bag_t\}$ forms a connected subtree of $\htree$.
\end{itemize}
The {\em fractional hypertree width} of a decomposition is
defined as $\max_{t \in V(\htree)} \rho^*(\bag_t)$, where
$\rho^*(\bag_t)$ is the fractional edge cover number of the vertices in $\bag_t$.
The fractional hypertree width of a query $\varphi$, denoted $\fhw(\varphi)$, is the minimum
fractional hypertree width among all tree decompositions of its hypergraph.
We say that a query is {\em acyclic} if $\fhw(\varphi)=1$.
\smallskip
\introparagraph{Computational Model}
To measure the running time of our algorithms, we will use the uniform-cost RAM model~\cite{hopcroft1975design}, where data values and pointers to databases are of constant size. Throughout the paper, all complexity results are with respect to data complexity, where the query is assumed fixed. Each relation $R$ over schema $\bx$ is implemented via a data structure that stores all entries $t \in R$ in $O(|R|)$ space and supports look-up, insertion, and deletion of entries in $O(1)$ time. For a schema $\mS \subseteq \bx$, we use an index structure that, for some $t$ defined over $\mS$, can $(i)$ check if $t \in \pi_{\mS} (R)$; and $(ii)$ return $|\sigma_{\mS = t} (R)|$ in constant time.
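As an illustration of this computational model, the following Python sketch (class name and toy data are our own, hypothetical choices) implements such an index with hashing, supporting membership in $\pi_{\mS}(R)$ and the count $|\sigma_{\mS = t}(R)|$ in constant expected time:

```python
from collections import defaultdict

# Hypothetical hash index over a relation R on a sub-schema S,
# given as the tuple positions `key`. Supports, in O(1) expected time:
#   (i)  t in pi_S(R)        via `contains`
#   (ii) |sigma_{S=t}(R)|    via `count`
class HashIndex:
    def __init__(self, tuples, key):
        self.buckets = defaultdict(list)
        for t in tuples:
            self.buckets[tuple(t[i] for i in key)].append(t)

    def contains(self, t):
        return t in self.buckets

    def count(self, t):
        return len(self.buckets.get(t, ()))

# Index R(x, y) on the sub-schema {x}:
R = [(1, 'a'), (1, 'b'), (2, 'a')]
idx = HashIndex(R, key=(0,))
print(idx.contains((1,)), idx.count((1,)))  # True 2
```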
\section{Framework} \label{sec:framework2}
In this section, we discuss the concept of adorned queries and present our framework.
\subsection{Adorned Queries}
In order to model different access patterns, we will use the concept of {\em adorned queries} introduced by~\cite{ullman1986approach}. In an adorned query, each variable in the head of the query is associated with a binding type, which can be either {\em bound} ($\bound$) or {\em free} ($\free$). We denote this as $\varphi^\eta(x_1, \dots, x_k)$, where $\eta \in \{\bound,\free\}^k$ is called the {\em access pattern}. The access pattern tells us for which variables the user must provide a value as input. Concretely, let $x_1,x_2, \dots, x_\ell$ be the bound variables. An instantiation of the bound variables to constants, such as $\{x_i \gets a_i: i \in \{1,2,\cdots,\ell\}\}$, is an {\em access request}: we need to return the query results where we have replaced each bound variable $x_i$ with the corresponding constant $a_i$. In the next few examples, we demonstrate how to capture several data structure problems by adorned queries.
\begin{example}[Set Disjointness and Set Intersection]
In the set disjointness problem, we are given $m$ sets $S_1, \dots, S_m$ drawn from the same universe $U$. Let $N = \sum_{i=1}^m |S_i|$ be the total size of the input sets. Each access request is a pair of indexes $(i,j)$, $1 \leq i,j \leq m$, for which we need to decide whether $S_i \cap S_j$ is empty or not.
To cast this problem as an adorned query, we encode the family of sets as a binary relation $R(x,y)$, such that element $x$ belongs to set $y$. Note that the relation will have size $N$. Then, the set disjointness problem corresponds to:
$$\varphi^{\bound\bound}(y,z) = R(x,y) \land R(x,z).$$
An access request in this case specifies two sets $y=S_i, z=S_j$, and issues the (Boolean) query $\varphi() = R(x,S_i) \land R(x, S_j)$.
In the related set intersection problem, given a pair of indexes $(i,j)$ with $1 \leq i,j \leq m$, we instead want to enumerate the elements in the intersection $S_i \cap S_j$, which can be captured by the following adorned query:
$ \varphi^{\bound\bound\free}(y,z,x) = R(x,y) \land R(x,z)$.
\end{example}
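To make the encoding concrete, here is a small Python sketch (the toy sets and the scan-and-probe baseline are our own assumptions, not the data structures studied later): the family of sets is stored as the binary relation $R(x,y)$, and an access request $(i,j)$ is answered by scanning the smaller set and probing the other:

```python
from collections import defaultdict

# Hypothetical encoding: sets S_1..S_m as a binary relation R(x, y)
# meaning "element x belongs to set y".
sets = {1: {1, 2, 3}, 2: {3, 4}, 3: {5}}
R = {(x, y) for y, elems in sets.items() for x in elems}

# Group R by the set id so a request (i, j) can scan the smaller
# set and probe the other -- a naive baseline with no preprocessing.
by_set = defaultdict(set)
for x, y in R:
    by_set[y].add(x)

def disjoint(i, j):
    a, b = sorted((by_set[i], by_set[j]), key=len)
    return all(x not in b for x in a)

print(disjoint(1, 2), disjoint(1, 3))  # False True
```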
\begin{example}[$k$-Set Disjointness]
The $k$-set disjointness problem is a generalization of the $2$-set disjointness problem, where each request asks whether the intersection of $k$ sets is empty or not. Again, we can cast this problem into the following adorned query:
$$ \varphi^{\bound \dots \bound}(y_1, \dots, y_k) = R(x,y_1) \land R(x,y_2) \land \dots \land R(x,y_k)$$
\end{example}
\begin{example}[$k$-Reachability]
Given a directed graph $G$, the $k$-reachability problem asks, given a pair of vertices $(u,v)$, to check whether they are connected by a path of length $k$. Representing the graph as a binary relation $S(x,y)$ (which means that there is an edge from $x$ to $y$), we can model this problem through the following adorned query:
%
$$
\varphi^{\bound\bound}(x_1, x_{k+1}) = S(x_1, x_2) \land S(x_2, x_3) \land \dots \land S(x_k, x_{k+1})
$$
Observe that we can also check whether there is a path of length at most $k$ by combining the results of $k$ such queries (one for each length $1, \dots, k$).
\end{example}
\begin{example}[Edge Triangles Detection]
Given a graph $G = (V, E)$, this problem asks, given an edge $(u,v)$ as the request, whether $(u,v)$ participates in a triangle or not. This task can be expressed as the following adorned query $$\varphi^{\bound \bound}_\triangle (x,z) = R(x,y) \land R(y,z) \land R(x,z)$$
In the reporting version, the goal is to enumerate all triangles in which edge $(x,z)$ participates, which can also be expressed by the following adorned query
$
\varphi^{\bound \bound \free}_\triangle (x,z,y) = R(x,y) \land R(y,z) \land R(x,z)
$.
\end{example}
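For intuition, a naive baseline for the Boolean version of this problem (our own sketch, treating the graph as undirected and doing no preprocessing beyond neighbour lists) answers a request $(u,v)$ by intersecting the two neighbour sets:

```python
# Toy baseline for edge-triangle detection (illustrative only):
# build neighbour sets, then an access request (u, v) checks whether
# u and v share a common neighbour, i.e. whether (u, v) closes a triangle.
def build(edges):
    nbr = {}
    for a, b in edges:
        nbr.setdefault(a, set()).add(b)
        nbr.setdefault(b, set()).add(a)
    return nbr

def in_triangle(nbr, u, v):
    return bool(nbr.get(u, set()) & nbr.get(v, set()))

nbr = build([(1, 2), (2, 3), (1, 3), (3, 4)])
print(in_triangle(nbr, 1, 2), in_triangle(nbr, 3, 4))  # True False
```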
We say that an adorned query is {\em Boolean} if every head variable is bound. In this case, the answer for every access request is also Boolean, i.e., true or false.
\subsection{Problem Statement}
Given an adorned query $\varphi^\eta$ and an input database $D$, our goal is to construct a data structure, such that we can answer any access request that conforms to the access pattern $\eta$ as fast as possible. In other words, an algorithm can be split into two phases:
\begin{itemize}
\item {\bf Preprocessing phase:} we compute a data structure that uses space $S$.
\item {\bf Answering phase:} given an access request, we compute the answer using the data structure built in the preprocessing phase, within time $T$.
\end{itemize}
In this work, our goal is to study the relationship between the space of the data structure $S$ and the answering time $T$ for a given adorned query $\varphi^\eta$. We will focus on Boolean adorned queries, where the output is just true or false.
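The two extremes of this tradeoff are easy to see on the $2$-\textsf{Set Disjointness } example; the following Python sketch (class names and toy sets are hypothetical, and the constructions are only the trivial endpoints, not the data structures developed later) contrasts full materialization, which pays $O(m^2)$ space for $O(1)$ answering time, with no preprocessing at all:

```python
# Two trivial extremes of the space/time tradeoff for 2-Set Disjointness
# (illustration only):
#   - FullMaterialize: stores all m^2 answers, answers in O(1).
#   - NoPreprocess:    stores nothing extra, answers in O(|S_i| + |S_j|).
sets = [{1, 2}, {2, 3}, {4}]

class FullMaterialize:
    def __init__(self, sets):
        m = len(sets)
        self.table = {(i, j): sets[i].isdisjoint(sets[j])
                      for i in range(m) for j in range(m)}
    def answer(self, i, j):
        return self.table[(i, j)]

class NoPreprocess:
    def __init__(self, sets):
        self.sets = sets
    def answer(self, i, j):
        return self.sets[i].isdisjoint(self.sets[j])

for ds in (FullMaterialize(sets), NoPreprocess(sets)):
    print(ds.answer(0, 1), ds.answer(0, 2))  # False True
```

The data structures in this paper interpolate between these two endpoints.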
\section{Introduction} \label{sec:intro}
Recent work has made remarkable progress in developing data structures and algorithms for answering set intersection problems~\cite{goldstein2017conditional}, reachability oracles and directed reachability~\cite{agarwal2011approximate, agarwal2014space, cohen2010hardness}, histogram indexing~\cite{chan2015clustered, kociumaka2013efficient}, and problems related to document retrieval~\cite{afshani2016data, larsen2015hardness}. This class of problems splits an algorithmic task into two phases: the {\em preprocessing phase}, which computes a space-efficient data structure, and the {\em answering phase}, which uses the data structure to answer the requests to minimize the answering time. A fundamental algorithmic question related to these problems is the tradeoff between the space $S$ necessary for data structures and the answering time $T$ for requests.
For example, consider the $2$-\textsf{Set Disjointness } problem: given a universe of elements $U$ and a collection of $m$ sets $C_1, \dots, C_m \subseteq U$, we want to create a data structure such that for any pair of integers $1 \leq i,j \leq m$, we can efficiently decide whether $C_i \cap C_j$ is empty or not. Previous work~\cite{cohen2010hardness, goldstein2017conditional} has shown that the space-time tradeoff for $2$-\textsf{Set Disjointness } is captured by the equation $S \cdot T^2 = N^2$, where $N$ is the total size of all sets. The data structure obtained is conjectured to be optimal~\cite{goldstein2017conditional}, and its optimality was used to develop conditional lower bounds for other problems, such as approximate distance oracles~\cite{agarwal2011approximate, agarwal2014space}. Similar tradeoffs have been independently established for other data structure problems as well. In the $k$-\textsf{Reachability} problem~\cite{goldstein2017conditional, Cohen2010}, we are given as input a directed graph $G = (V,E)$; for an arbitrary pair of vertices $u, v$, the goal is to decide whether there exists a path of length $k$ between $u$ and $v$. In the \textsf{edge triangle detection} problem~\cite{goldstein2017conditional}, we are given an input undirected graph $G = (V,E)$, and the goal is to develop a data structure that takes space $S$ and can answer in time $T$ whether a given edge $e \in E$ participates in a triangle or not. Each of these problems has been studied in isolation and, as a result, the algorithmic solutions do not generalize due to the lack of a comprehensive framework.
In this paper, we cast many of the above problems into answering {\em Conjunctive Queries (CQs)} over a relational database. CQs are a powerful class of relational queries with widespread applications in data analytics and graph exploration~\cite{graphgen2015, graphgen2017,deep2018compressed}. For example, by using the relation $R(x,y)$ to encode that element $x$ belongs to set $y$, $2$-\textsf{Set Disjointness } can be captured by the following CQ: $\varphi (y_1, y_2) = \exists_x R(x,y_1) \land R(x,y_2)$. As we will see later, $k$-\textsf{Reachability} can also be naturally captured by a CQ.
The insight of casting data structure problems into CQs over a database allows for a unified treatment for developing algorithms within the same framework, which in turn allows for improved algorithms and data structures. In particular, we can leverage the techniques developed by the data management community through a long line of research on efficient join evaluation~\cite{yannakakis1981algorithms, skewstrikesback, ngo2012worst}, including worst-case optimal join algorithms~\cite{ngo2012worst} and tree decompositions~\cite{gottlob2014treewidth,robertson1986graph}. The use of these techniques has been a subject of previous work~\cite{abo2020decision, greco2013structural, deep2018compressed, olteanu2016factorized, kara19, kara2019counting} for enumerating query results under static and dynamic settings. In this paper, we build upon the aforementioned techniques to develop a framework that allows us to obtain general space-time tradeoffs for any Boolean CQ (a Boolean CQ is one that outputs only true or false). As a consequence, we recover state-of-the-art tradeoffs for several existing problems (e.g., $2$-\textsf{Set Disjointness } as well as its generalization $k$-\textsf{Set Disjointness } and $k$-\textsf{Reachability}) as special cases of the general tradeoff. We can even obtain improved tradeoffs for some specific problems, such as edge triangles detection, thus falsifying existing conjectures.
\smallskip
\introparagraph{Our Contribution} We summarize our main technical contributions below.
\begin{enumerate}
\item \introparagraph{A Comprehensive Framework} We propose a unified framework that captures several widely-studied data structure problems. More specifically, we resort to the formalism of CQs and the notion of {\em Boolean adorned queries}, where the values of some variables in the query are fixed by the user (denoted as an {\em access pattern}), and aim to evaluate the Boolean query. We then show how this framework captures the $2$-\textsf{Set Disjointness } and $k$-\textsf{Reachability} problems. Our first main result (\autoref{thm:main1}) is an algorithm that builds a data structure to answer any Boolean CQ under a specific access pattern. Importantly, the data structure can be tuned to trade off space for answering time, thus capturing the full continuum between optimal space and answering time. At one extreme, the data structure achieves constant answering time by explicitly storing all possible answers. At the other extreme, the data structure stores nothing, but we execute each request from scratch. We show how to recover existing and new tradeoffs using this general framework. The first main result may sometimes lead to suboptimal tradeoffs since it does not take into account the structural properties of the query. Our second main result (\autoref{thm:main2}) combines tree decompositions of the query structure with access patterns to improve space efficiency. We then show how this algorithm can handle Boolean CQs with negation.
\item \introparagraph{Improved Algorithms} In addition to the main result above, we explicitly improve the best-known space-time tradeoff for the $k$-\textsf{Reachability} problem for $k \geq 4$. For any $k \geq 2$, the tradeoff of $S \cdot T^{2/(k-1)} = O(|E|^2)$ was conjectured to be optimal by~\cite{goldstein2017conditional}, where $|E|$ is the number of edges in the graph, and was used to conditionally prove other lower bounds on space-time tradeoffs. We show that for a regime of answer time $T$, it can be improved to $S \cdot T^{2/(k-2)} = O(|E|^2)$, thus breaking the conjecture.
To the best of our knowledge, this is the first non-trivial improvement for the $k$-\textsf{Reachability} problem. We also refute a lower bound conjecture for the edge triangles detection problem established by~\cite{goldstein2017conditional}.
\item \introparagraph{Conditional Lower Bounds} Finally, we show a reduction between lower bounds for the problem of $k$-\textsf{Set Disjointness } for different values of $k$, which generalizes the $2$-\textsf{Set Disjointness } to computing the intersection between $k$ given sets, for $k \ge 2$.
\end{enumerate}
\smallskip
\introparagraph{Organization} We introduce the basic terminology and problem definition in~\autoref{sec:framework} and~\autoref{sec:framework2}. We present our main results for Boolean adorned queries in~\autoref{sec:main} and our improved result for path queries in~\autoref{sec:path}. We discuss the lower bounds and related work in~\autoref{sec:lowerbound} and~\autoref{sec:related}, and finally conclude in~\autoref{sec:conclusion} with promising future research directions and open problems.
\section{Lower Bounds} \label{sec:lowerbound}
In this section, we study lower bounds for adorned star and path queries. We first present conditional lower bounds for the $\ell$-\textsf{Set Disjointness } problem using the conditional optimality of $k$-\textsf{Set Disjointness } for $\ell < k$. First, we review the known results from~\cite{goldstein2017conditional}, starting with the conjecture for $k$-\textsf{Set Disjointness }.
\begin{conjecture}[due to \cite{goldstein2017conditional}] \label{conjecture:star}
Any data structure for $k$-\textsf{Set Disjointness } problem that answers queries in time $T$ must use space $S = \Omega(|D|^k/T^k)$.
\end{conjecture}
\autoref{conjecture:star} was shown to be conditionally optimal based on a conjectured lower bound for the $(k+1)$-\textsf{Sum Indexing} problem; however, that lower bound was subsequently shown to be false~\cite{kopelowitz2019strong}, which implies that~\autoref{conjecture:star} remains an open problem. \autoref{conjecture:star} can be further generalized to the case when the input relations are of unequal sizes as follows.
\begin{conjecture} \label{conjecture:star:unequal:relation}
Any data structure for $ \varphi_*^{\bound \dots \bound}(y_1, \dots, y_k) = R_1(x,y_1) \land \dots \land R_k(x,y_k)$ that answers queries in time $T$ must use space $S = \Omega(\Pi_{i=1}^k |R_i|/T^k)$.
\end{conjecture}
We now state the main result for star queries.
\begin{theorem} \label{thm:lower:bound}
Suppose that any data structure for $ \varphi_*^{\bound \dots \bound}(y_1, \dots, y_k)$ with answering time $T$ must use space $S = \Omega(\Pi_{i=1}^{k} |R_i|/T^k)$. Then, any data structure for $\varphi_*^{\bound \dots \bound}(y_1, \dots, y_\ell)$ with answering time $T$ must use space $S = \Omega(\Pi_{i=1}^{\ell} |R_i|/T^\ell)$, for $2 \leq \ell < k$.
\end{theorem}
\begin{proof}
Let $\Delta = T$ be the degree threshold for the $k$ bound variables $y_1, \dots, y_k$. If any of the $k$ variables is light (i.e., $|\sigma_{y_i=a[y_i]} R_i(y_i, x)| \leq \Delta$), then we can check whether the intersection between the $k$ sets is empty or not in time $O(T)$ by indexing all relations in a linear-time preprocessing phase. The remaining case is when all $k$ variables are heavy. We now create $\ell$ views $V_1, \dots, V_\ell$ by arbitrarily partitioning the $k$ relations into the $\ell$ views and materializing the join of all relations in each view. Let view $V_i$ contain the join of $k_i$ relations. Then, $|V_i| = O(\Pi_{R \in J_i} |R|/T^{k_i-1})$, where $J_i$ is the set of all relations assigned to view $V_i$.
We have now reduced the $k$-star query where all $k$ variables are heavy into an instance of the $\ell$-star query over the input relations $V_1, \dots, V_\ell$. Suppose that there exists a data structure that can answer queries in time $T$ using space $S = o(\Pi_{i=1}^{\ell} |V_i|/T^\ell)$. Then, we can use such a data structure to answer the original query where all variables are heavy. The space used by this data structure is
%
\begin{align*}
\hspace{8em} S &= o(\Pi_{i=1}^{\ell} |V_i|/T^\ell) = o((\Pi_{i=1}^\ell \Pi_{R \in J_i} |R|/T^{{k_i}-1}) \cdot (1 / T^\ell)) \\
&= o((\Pi_{i=1}^{k} |R_i| / T^{k - \ell}) \cdot (1 / T^\ell)) = o(\Pi_{i=1}^{k} |R_i|/T^k)
\end{align*}
which contradicts the space lower bound for $k$-star.
\end{proof}
\autoref{thm:lower:bound} creates a hierarchy for $k$-\textsf{Set Disjointness }, where the optimality of smaller set disjointness instances depends on larger set disjointness instances. Next, we show conditional lower bounds on the space requirement of path queries. We begin by proving a simple result for optimality of $P^{\bound \bound}_2$ (equivalent to $2$-Set Disjointness) assuming the optimality of $P^{\bound \bound}_3$ query.
\begin{theorem} \label{thm:lb:path}
Suppose that any data structure for $P^{\bound \bound}_3$ that answers queries in time $T$ must use space $S$ such that $S \cdot T = \Omega(|D|^2)$. Then, any data structure for $P^{\bound \bound}_2$ that uses space $S = O(|D|^2/T^2)$ must have answering time $\Omega(T)$.
\end{theorem}
\begin{proof}
Let $\Delta = T$ be the degree threshold for all vertices. If both bound variables in $P^{\bound \bound}_3$ are heavy, then we can answer the query in constant time using space $\Theta(|D|^2/T^2)$ by materializing the answers to all heavy-heavy queries. In the remaining cases, at least one of the bound valuations is light. Without loss of generality, suppose $x_1$ is light. Then, we can make $\Delta$ calls to the oracle for the query $P^{\bound \bound}_2(x_2, x_4) = R_2(x_2, x_3) \land R_3(x_3, x_4)$.
Suppose that there exists a data structure with space $O(|D|^2/T^2)$ for $P^{\bound \bound}_2(x_2, x_4)$ and answering time $o(T)$. Then, we can answer $P^{\bound \bound}_3$ with light $x_1$ in time $o(T^2)$. This improves the tradeoff for $P^{\bound \bound}_3$ since the product of space usage and answering time is $o(|D|^2)$ for any non-constant $T$, coming to a contradiction.
\end{proof}
Using a similar argument, it can be shown that the conditional optimality of~\autoref{thm:path} for $k=4$ implies that the $S \cdot T = \Omega(|D|^2)$ tradeoff for $P^{\bound \bound}_3$ is also optimal (but only for the range $T \leq \sqrt{|D|}$ where the result is applicable).
\section{Path Queries} \label{sec:path}
In this section, we present an algorithm for the adorned query $P_k^{\bound \bound}(x_1, x_{k+1}) = R_1(x_1,x_2) \land \dots \land R_k(x_k, x_{k+1})$ that improves upon the conjectured optimal solution. Before diving into the details, we first state the upper bound on the tradeoff between space and query time.
\begin{theorem} [due to \cite{goldstein2017conditional}] \label{thm:path:existing}
There exists a data structure for solving $P_k^{\bound \bound}(x_1, x_{k+1})$ with space $S$ and answering time $T$ such that $S \cdot T^{2/(k-1)} = O(|D|^2)$.
\end{theorem}
Note that for $k=2$, the problem is equivalent to \textsf{SetDisjointness}, with the space/time tradeoff $S \cdot T^2 = O(N^2)$~\cite{goldstein2017conditional}. The authors of~\cite{goldstein2017conditional} also conjectured that this tradeoff is essentially optimal.
\begin{conjecture}[due to \cite{goldstein2017conditional}] \label{conjecture:path}
Any data structure for $P_k^{\bound \bound}(x_1, x_{k+1})$ with answering time $T$ must use space $S = \tilde{\Omega}(|D|^2/T^{2/(k-1)})$.
\end{conjecture}
If $k$ is not a constant,~\autoref{conjecture:path} implies that $\Theta(|D|^2)$ space is needed for achieving $O(1)$ answering time. Building upon~\autoref{conjecture:path},~\cite{goldstein2017conditional} also showed a result on the optimality of approximate distance oracles. Our results imply that~\autoref{thm:path:existing} can be improved further, thus refuting~\autoref{conjecture:path}. The first observation is that the tradeoff in~\autoref{thm:path:existing} is only useful when $T \leq |D|$. Indeed, we can always answer any Boolean path query in linear time using breadth-first search.
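This linear-time baseline is simply a layered traversal; the sketch below (function name and toy edges are our own, hypothetical choices) checks for a path of length exactly $k$ with $O(k \cdot |E|)$ work per request and no preprocessing:

```python
# Layered traversal: frontier_i holds the nodes reachable from u by a
# path of length exactly i, so after k rounds we just test membership of v.
def k_reachable(edges, u, v, k):
    succ = {}
    for a, b in edges:
        succ.setdefault(a, set()).add(b)
    frontier = {u}
    for _ in range(k):
        nxt = set()
        for a in frontier:
            nxt |= succ.get(a, set())
        frontier = nxt
    return v in frontier

E = [(1, 2), (2, 3), (3, 4)]
print(k_reachable(E, 1, 4, 3), k_reachable(E, 1, 4, 2))  # True False
```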
Surprisingly, it is also possible to improve~\autoref{thm:path:existing} for the regime of small answering time as well. In what follows, we will show the improvement for paths of length 4; we will generalize the algorithm for any length in the next section.
\subsection{Length-4 Path}
\begin{lemma} \label{lem:basic}
There exists a parameterized data structure for solving $P_4^{\bound \bound}(x_1, x_{5})$ that uses space $S$ and achieves answering time $T \leq \sqrt{|D|}$, satisfying the tradeoff $S \cdot T = O(|D|^2)$.
\end{lemma}
For $k=4$,~\autoref{thm:path:existing} gives us the tradeoff $S \cdot T^{2/3} = O(|D|^2)$ which is always worse than the tradeoff in~\autoref{lem:basic}. We next present our algorithm in detail.
\smallskip
\introparagraph{Preprocessing Phase} Consider $P_4^{\bound \bound}(x_1, x_{5}) = R(x_1, x_2) \land S(x_2, x_3) \land T(x_3, x_4) \land U(x_4, x_5)$. Let $\Delta$ be a degree threshold. We say that a constant $a$ is {\em heavy} if its frequency on attribute $x_3$ is greater than $\Delta$ in both relations $S$ and $T$; otherwise, it is {\em light}. In other words, $a$ is heavy if $|\sigma_{x_3=a}(S)| > \Delta$ and $|\sigma_{x_3=a}(T)| > \Delta$. We distinguish two cases based on whether a constant for $x_3$ is heavy or light. Let $\mL_{\textsf{heavy}}(x_3)$ denote the unary relation that contains all heavy values, and $\mL_{\textsf{light}}(x_3)$ the one that contains all light values. Observe that we can compute both of these relations in time $O(|D|)$ by simply iterating over the active domain of variable $x_3$ and checking the degree in relations $S$ and $T$. We compute two views:
\begin{align*}
\hspace{10em} V_1(x_1,x_3) &= R(x_1, x_2) \land S(x_2, x_3) \land \mL_{\textsf{heavy}}(x_3) \\
\hspace{10em} V_2(x_3,x_5) &= \mL_{\textsf{heavy}}(x_3) \land T(x_3, x_4) \land U(x_4, x_5)
\end{align*}
We store the views as a hash index that, given a value of $x_1$ (or $x_5$), returns all matching values of $x_3$.
Both views take space $O(|D|^2/ \Delta)$. Indeed, $|\mL_{\textsf{heavy}}| \leq |D|/\Delta$. Since we can construct a fractional edge cover for $V_1$ by assigning a weight of 1 to $R$ and $\mL_{\textsf{heavy}}$, this gives us an upper bound of $|D| \cdot (|D|/\Delta)$ for the query output. The same argument holds for $V_2$. We also compute the following view for light values: $V_3(x_2, x_4) = S(x_2, x_3) \land \mL_{\textsf{light}}(x_3) \land T(x_3, x_4).$ This view requires space $O(|D| \cdot \Delta)$, since the degree of the light constants is at most $\Delta$. We can now rewrite the original query as $P_4^{\bound \bound}(x_1, x_{5}) = R(x_1, x_2) \land V_3(x_2,x_4) \land U(x_4, x_5).$
The rewritten query is a 3-path query. Hence, we can apply~\autoref{thm:main1} to create a data structure with answering time $T = O(|D|/\Delta)$ and space $S = O(|D|^2/(|D|/\Delta)) = O(|D| \cdot \Delta)$.
\smallskip
\introparagraph{Query Answering} Given an access request, we first check whether there exists a 4-path that goes through some heavy value in $\mL_{\textsf{heavy}}(x_3)$. This can be done in time $O(|D|/\Delta)$ using the views $V_1$ and $V_2$. Indeed, we obtain at most $O(|D|/\Delta)$ values for $x_3$ using the index for $V_1$, and $O(|D|/\Delta)$ values for $x_3$ using the index for $V_2$. We then intersect the results in time $O(|D|/\Delta)$ by iterating over the $O(|D|/\Delta)$ values for $x_3$ and checking whether the bound values for $x_1$ and $x_5$ form a tuple with it in $V_1$ and $V_2$ respectively.
If we find no such 4-path, we check for a 4-path that uses a light value for $x_3$. From the data structure we have constructed in the preprocessing phase, we can do this in time $O(|D|/ \Delta)$.
\smallskip
\introparagraph{Tradeoff Analysis}
From the above, we can compute the answer in time $T = O(|D|/ \Delta)$. From the analysis in the preprocessing phase, the space needed is $S = O(|D|^2/\Delta + |D| \cdot \Delta)$. Thus, whenever $\Delta \geq \sqrt{|D|}$, the space becomes $S = O(|D| \cdot \Delta)$, completing our analysis.
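Putting the phases together, the following Python sketch mirrors the construction above on a toy instance (naive set comprehensions stand in for the indexed views; the tiny relations and the choice $\Delta = 0$ are our own illustrative assumptions, so the asymptotics above do not apply to this sketch):

```python
# Sketch of the heavy/light construction for P4 (illustration only).
# Relation names follow the text: R, S, T, U along the 4-path.
def preprocess(R, S, T, U, Delta):
    deg_S, deg_T = {}, {}
    for (_, x3) in S:
        deg_S[x3] = deg_S.get(x3, 0) + 1
    for (x3, _) in T:
        deg_T[x3] = deg_T.get(x3, 0) + 1
    heavy = {a for a in deg_S
             if a in deg_T and deg_S[a] > Delta and deg_T[a] > Delta}
    # V1(x1, x3) and V2(x3, x5), restricted to heavy x3 values
    V1 = {(x1, x3) for (x1, x2) in R for (x2b, x3) in S
          if x2 == x2b and x3 in heavy}
    V2 = {(x3, x5) for (x3, x4) in T for (x4b, x5) in U
          if x4 == x4b and x3 in heavy}
    # V3(x2, x4) joins S and T through the light x3 values
    V3 = {(x2, x4) for (x2, x3) in S for (x3b, x4) in T
          if x3 == x3b and x3 not in heavy}
    return heavy, V1, V2, V3

def answer(a1, a5, R, U, V1, V2, V3):
    # Heavy case: intersect the two candidate lists of x3 values.
    if ({x3 for (x1, x3) in V1 if x1 == a1} &
            {x3 for (x3, x5) in V2 if x5 == a5}):
        return True
    # Light case: go through the materialized view V3(x2, x4).
    mids = {x2 for (x1, x2) in R if x1 == a1}
    ends = {x4 for (x4, x5) in U if x5 == a5}
    return any(x2 in mids and x4 in ends for (x2, x4) in V3)

R, S, T, U = {(1, 2)}, {(2, 3)}, {(3, 4)}, {(4, 5)}
_, V1, V2, V3 = preprocess(R, S, T, U, Delta=0)
print(answer(1, 5, R, U, V1, V2, V3))  # True
```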
\subsection{General Path Queries}
We can now use the algorithm for the 4-path query to improve the space-time tradeoff for general path queries of length greater than four.
\begin{theorem} \label{thm:path}
Let $D$ be an input instance.
For $k \geq 4$, there is a data structure for $P_k^{\bound \bound}(x_1, x_{k+1})$ with space $S = O(|D|\cdot \Delta)$ and answering time $T= O\left((\frac{|D|}{\Delta})^{\frac{k-2}{2}}\right)$ for $\Delta \geq \sqrt{|D|}$.
\end{theorem}
\begin{proof}
Fix some $\Delta \geq \sqrt{|D|}$. We construct the data structure for a path of length $k$ recursively. The base case is when $k=4$, with answer time $T_4 = |D|/\Delta$ and space $S_4 = |D| \cdot \Delta$.
In the recursive step, similar to the previous section, we set $\sqrt{\frac{|D|}{\Delta}}$ as the degree threshold for any constant that variables $x_{1}$ and $x_{k+1}$ can take. Let $\mL_{\textsf{heavy}}^1, \mL_{\textsf{heavy}}^{k+1}$ be unary relations that store the heavy values for $x_1, x_{k+1}$ respectively. We compute and store the result of
%
$$V(x_1, x_{k+1}) = \mL_{\textsf{heavy}}^1(x_1) \land R_1(x_1, x_2) \land \dots \land R_k(x_k, x_{k+1}) \land \mL_{\textsf{heavy}}^{k+1}(x_{k+1}).$$
%
This view has size bounded by $\left(|D|\cdot \sqrt{\frac{\Delta}{|D|}}\right)^2 = |D| \cdot \Delta$. We consider the following queries:
%
$$ V_1^{\bound\bound}(x_2, x_{k+1}) = R_2(x_2, x_3) \land \dots \land R_k(x_k, x_{k+1}). $$
$$V_2^{\bound\bound}(x_1, x_{k}) = R_1(x_1, x_2) \land \dots \land R_{k-1}(x_{k-1}, x_{k}).$$
both of which correspond to the $(k-1)$-path, so we can recursively apply the data structure. Let $S_k, T_k$ be the space and answering time for the $k$-path. For space, we have the following recurrence: %
$$ S_k = |D| \cdot \Delta + S_{k-1}$$
As $S_4 = |D|\cdot \Delta$, we obtain $S_k = O(|D|\cdot \Delta)$.
Given an access request, we answer it by distinguishing two cases. If both $x_1$ and $x_{k+1}$ are heavy, we probe the stored view $V(x_1,x_{k+1})$ in time $O(1)$. If one of them is light (say, w.l.o.g., $x_1$), we recursively call the data structure for $V_1$ for every one of the $\leq \sqrt{|D|/\Delta}$ values connected with $x_1$. This gives us the following recurrence formula for the answering time:
%
$$ T_k = (|D|/\Delta)^{1/2} \cdot T_{k-1}$$
Unrolling the recurrence down to the base case gives $T_k = (|D|/\Delta)^{(k-4)/2} \cdot T_4 = (|D|/\Delta)^{(k-2)/2}$.
\end{proof}
The space-time tradeoff obtained from ~\autoref{thm:path} is $S \cdot T^{2/(k-2)} = O(|D|^2)$, but only for $T \leq |D|^{(k-2)/4}$. To compare it with the tradeoff of $S \cdot T^{2/(k-1)} = O(|D|^2)$ obtained from~\autoref{thm:path:existing}, it is instructive to look at Figures~\ref{fig:k4} and~\ref{fig:k6}, which plot the space-time tradeoffs for $k=4$ and $k=6$ respectively. For $k=4$, we can see that the new tradeoff is better for $T \leq |D|^{1/2}$. Once $T$ exceeds $|D|^{1/2}$, it is still better to use the data structure from~\autoref{thm:path} until \autoref{thm:path:existing} takes over. For $k=6$, the switch point also happens at $T = |D|^{1/2}$ but requires more space. In general, as $k$ grows, the new tradeoff line becomes flatter and approaches \autoref{thm:path:existing}.
\begin{figure}
\begin{subfigure}{0.45\linewidth}
\includegraphics[scale=0.45]{k4.pdf}
\caption{Tradeoff for $P^{\bound \bound}_4(x_1, x_5)$} \label{fig:k4}
\end{subfigure}
\hspace{2em}
\begin{subfigure}{0.5\linewidth}
\includegraphics[scale=0.45]{k6.pdf}
\caption{Tradeoff for $P^{\bound \bound}_6(x_1, x_7)$} \label{fig:k6}
\end{subfigure}
\caption{Space/time tradeoffs for path query of length $k=4,6$. The line in blue (dashed) shows the tradeoff obtained from~\autoref{thm:path:existing}. The highlighted portion in brown shows the improved tradeoff using BFS. The red curve is the new tradeoff obtained using~\autoref{thm:path}. The green portion of the original curve is still the best possible when~\autoref{thm:path} is not applicable.}
\end{figure}
\section{Related Work} \label{sec:related}
The study of fine-grained space/time tradeoffs for query answering is a relatively recent effort in the algorithmic community. The study of distance oracles over graphs was first initiated by~\cite{patrascu2010distance}, where lower bounds are shown on the size of a distance oracle for sparse graphs based on a conjecture about the best possible data structure for a set intersection problem.~\cite{cohen2010hardness} also considered the problem of set intersection and presented a data structure that can answer Boolean set intersection queries, which is conditionally optimal~\cite{goldstein2017conditional}. Another line of work looks at the problem of approximate distance oracles. Agarwal et al.~\cite{agarwal2011approximate, agarwal2014space} showed that stretch-2 and stretch-3 oracles can achieve $S \cdot T = O(|D|^2)$ and $S \cdot T^2 = O(|D|^2)$, respectively. They also showed that for any integer $k$, a stretch-$(1+1/k)$ oracle exhibits the tradeoff $S \cdot T^{1/k} = O(|D|^2)$. Unfortunately, no lower bounds are known for non-constant query time.~\cite{goldstein2017conditional} used~\autoref{thm:path:existing} to conjecture that the tradeoff $S \cdot T^{2/(k-1)} = O(|D|^2)$ is optimal, which would also imply that the stretch-$(1+1/k)$ oracle tradeoff is optimal. A different line of work has considered the problem of enumerating the results of a non-Boolean query.~\cite{cohen2010hardness} presented a data structure to enumerate the intersection of two sets with guarantees on the total answering time. This result was generalized to incorporate {\em full} adorned views over CQs~\cite{deep2018compressed}. Our work extends these results to the setting where the join variables are projected away from the query result (i.e., the adorned views are {\em non-full}) and makes the connection between several different algorithmic problems that have been studied independently. Further, we also consider Boolean CQs that may contain negation. In the non-static setting,~\cite{berkholz2017answering} initiated the study of answering conjunctive queries under updates. More recently,~\cite{kara2019counting} presented an algorithm for counting the number of triangles under updates.
There have also been some exciting developments in the space of enumerating query results with delay for a proper subset of CQs known as {\em hierarchical queries}.~\cite{kara19} presented a tradeoff between preprocessing time and delay for enumerating the results of any (not necessarily full) hierarchical queries under static and dynamic settings. It remains an interesting problem to find improved algorithms for more restricted set of CQs such as hierarchical queries. Prior work in the space of differential privacy~\cite{dong2021residual, meehan2021shuffling, roy2020crypte, chowdhury2020data} also require repeated querying and may benefit from the techniques in this paper. |
1,108,101,565,951 | arxiv | \section{\label{sec1} Introduction}
Detection of gravitational waves (GWs) from compact binary coalescences (CBCs) provides ideal tests of general relativity (GR) in the strong-field regime~\citep{LIGOScientific:2016lio,LIGOScientific:2017ycc,LIGOScientific:2017bnn,LIGOScientific:2018dkp,LIGOScientific:2019fpa,LIGOScientific:2020tif,LIGOScientific:2021sio}. The latest GW event catalogs contain nearly 100 CBC events~\citep{LIGOScientific:2021djp, LIGOScientific:2021usb}, based on which various tests have been performed~\citep{LIGOScientific:2020tif,LIGOScientific:2021sio}. No concrete evidence of deviation from GR has been found yet.
In the next decades, the third-generation (3G) ground-based GW detectors are expected to detect $>\mathcal{O}(10^5)$ CBC events per year, with signal-to-noise ratios (SNRs) up to thousands~\citep{maggiore2020_ScienceCaseEinstein,himemoto2021_ImpactsOverlappingGravitationalwave,samajdar2021_BiasesParameterEstimation,relton2021_ParameterEstimationBias,oguri2018_EffectGravitationalLensing}. Since the statistical uncertainty shrinks when the SNR increases or a catalog of events is combined, observations from 3G GW detectors will be able to place much tighter constraints on gravity theories.
The enlarged detection catalogs and higher SNRs, however, bring new challenges to data analysis. For the purpose of testing GR (or any other theory), one needs to make sure the systematic errors are small enough that the analysis does not favor the wrong theory. The parameterized tests of GR~\citep{meidam2018_ParameterizedTestsStrongfield} suffer from the same problems as parameter estimation (PE), which have been investigated in many works (e.g., \citep{cutler2007_LISADetectionsMassive,antonelli2021_NoisyNeighboursInference}).
For instance, inaccurate waveform models may have already caused some tensions in current GW observations~\citep{hu2022_AssessingModelWaveform, Williamson:2017evr} and are expected to be more important in future high-SNR detections~\citep{cutler2007_LISADetectionsMassive,PhysRevD.103.124015,purrer2020_GravitationalWaveformAccuracy}. Additionally, 3G detectors will be able to detect multiple signals at the same time. Detected overlapping signals cannot be perfectly removed from the data and could have a non-negligible impact on PE when the merger times of overlapping signals are close~\citep{himemoto2021_ImpactsOverlappingGravitationalwave,samajdar2021_BiasesParameterEstimation,relton2021_ParameterEstimationBias,antonelli2021_NoisyNeighboursInference,Pizzati:2021apa}. Undetected overlapping signals, i.e., signals that are too faint to be detected, may also contribute to the systematic error~\citep{antonelli2021_NoisyNeighboursInference,Reali:2022aps}. These types of errors are inevitable in 3G detectors, and repeated biased estimates for each event might end up supporting a wrong conclusion in the catalog-level analysis~\citep{PhysRevD.105.L061301,moore2021_TestingGeneralRelativity}.
In this work, we perform parameterized post-Newtonian (PPN) coefficient tests~\citep{meidam2018_ParameterizedTestsStrongfield} with simulated event catalogs and inaccurate waveforms. Our results show that systematic errors do accumulate and can lead to a false measurement of deviation from GR. We find that overlapping signals can magnify the effects of waveform systematics and cause wrong conclusions even if the waveform meets future accuracy requirements~\citep{purrer2020_GravitationalWaveformAccuracy}. Even worse, selected high-SNR events without known overlapping signals (so-called ``golden events'') may be more vulnerable to biased conclusions.
\section{Systematic biases in PPN tests \label{sec2}}
\subsection{Estimating systematic errors\label{sec2a}}
The generic formalism for estimating PE systematic errors was first proposed by \citet{cutler2007_LISADetectionsMassive} and then generalized and validated by \citet{antonelli2021_NoisyNeighboursInference}. Let $\vec{\theta}$ be the parameters of a GW source; the frequency-domain data of a GW detector is denoted as $d(\vec{\theta})$. We simply have
\begin{equation}
d(\vec{\theta}) = h(\vec{\theta}) + n,
\end{equation}
where $n$ is the detector noise, and $h(\vec{\theta})$ is the GW detected by the detector.
Under the assumption that the noise is stationary and Gaussian, the likelihood in GW PE is
\begin{equation}
L(\vec{\theta}) \propto e^{-\frac{1}{2} (d-h|d-h)} = e^{-\frac{1}{2} (n|n)},
\end{equation}
where $(\dots | \dots)$ is the inner product~\citep{finn1992_DetectionMeasurementGravitational}.
The optimal SNR is $\rho = \sqrt{(h|h)}$. For more than one data stream, the inner product should be replaced by the sum of the inner products calculated individually for each data stream.
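As an illustration, the noise-weighted inner product and optimal SNR can be evaluated on a discrete frequency grid. The sketch below uses a flat toy PSD and a power-law toy signal (both our own stand-ins, not the ET-B curve or the waveform models used later in this work):

```python
import numpy as np

def inner_product(a, b, psd, df):
    """Noise-weighted inner product (a|b) = 4 Re[sum a(f) b*(f) / S_n(f)] df."""
    return 4.0 * df * np.real(np.sum(a * np.conj(b) / psd))

# Toy frequency grid and a flat stand-in PSD (illustrative values only).
f = np.linspace(10.0, 1024.0, 4096)
df = f[1] - f[0]
psd = np.full_like(f, 1e-46)

# Toy frequency-domain signal: power-law amplitude, smooth phase.
h = 1e-23 * (f / 100.0) ** (-7.0 / 6.0) * np.exp(2j * np.pi * f * 0.01)

snr = np.sqrt(inner_product(h, h, psd, df))
```

The same `inner_product` helper is reused in the later sketches; for multiple data streams one would simply sum such inner products over detectors.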
Consider a maximum likelihood estimator (equivalent to Bayesian estimation with a flat prior); the maximum point $\vec{\theta}_\mathrm{ML}$ satisfies
\begin{equation}
\label{eq:ml}
\partial_i \ln L \mid_{\vec{\theta}= \vec{\theta}_\mathrm{ML}} = (\partial_i h | d-h) \mid_{\vec{\theta}= \vec{\theta}_\mathrm{ML}}=0,
\end{equation}
where $\partial_i$ denotes the derivative with respect to the $i$-th parameter. The data $d$ is known, but the real parameters $\vec{\theta}_\mathrm{real}$ and the GW signal in the detector $h(\vec{\theta}_\mathrm{real})$ are unknown. In practice, they are replaced by a waveform model $h_\mathrm{m}(\vec{\theta}_\mathrm{ML})$. By doing this, errors are introduced to $d-h$:
\begin{align}
d-h = n + \delta H + \Delta \theta^j \partial_j h_\mathrm{m}. \label{eq:dminush}
\end{align}
The first term $n$ is what $d-h$ is supposed to be: the noise arising in the detector. The second term $\delta H = h(\vec{\theta}_\mathrm{real}) - h_\mathrm{m}(\vec{\theta}_\mathrm{real})$ is the excess strain, which represents the difference between the real signal(s) in the data and the model used to subtract them. Inaccurate waveforms and overlapping signals can both contribute to this term. The third term comes from the linear expansion of $h_\mathrm{m}(\vec{\theta}_\mathrm{real}) - h_\mathrm{m}(\vec{\theta}_\mathrm{ML})$, where $\Delta \theta^j$ is the error of the $j$-th parameter from the maximum likelihood estimator, and we adopt the Einstein summation convention for sums over parameters. Substituting Eq.~\ref{eq:dminush} into Eq.~\ref{eq:ml} and approximating all derivatives at $\vec{\theta}_\mathrm{ML}$, we get
\begin{equation}
\label{eq:errors}
\Delta \theta^i \approx (\Gamma^{-1})^{ij} (\partial_j h_\mathrm{m} | n + \delta H) = \Delta \theta^i_\mathrm{stat} + \Delta \theta^i_\mathrm{sys},
\end{equation}
where $\Gamma_{ij} = (\partial_i h_\mathrm{m} | \partial_j h_\mathrm{m})$ is the Fisher matrix~\citep{cutler1994_GravitationalWavesMerging}. $\Delta \theta^i_\mathrm{stat} = (\Gamma^{-1})^{ij} (\partial_j h_\mathrm{m} | n)$ is the error induced by the detector noise. Since $\langle\Delta \theta^i_\mathrm{stat}\rangle=0$, the maximum likelihood estimator is unbiased if $\delta H=0$; moreover, $\langle\Delta \theta^i_\mathrm{stat}\Delta \theta^j_\mathrm{stat}\rangle = (\Gamma^{-1})^{ij}$, which is consistent with the Fisher matrix formalism.
The $\Delta \theta^i_\mathrm{sys} = (\Gamma^{-1})^{ij} (\partial_j h_\mathrm{m} | \delta H)$ is the systematic error. Any effect that contributes to $\delta H$ could be a source of systematic bias in PE.
We will use $\sqrt{(\Gamma^{-1})^{ii}}$ as statistical uncertainty and $\Delta \theta^i_\mathrm{sys}$ as the predicted systematic error.
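Eq.~\ref{eq:errors} maps directly onto a few lines of linear algebra: build $\Gamma_{ij}$ from numerical waveform derivatives, then project the excess strain. The sketch below exercises this with a toy two-parameter frequency-domain model (an amplitude and one inspiral-like phase coefficient); the model, grid, and PSD are illustrative assumptions of ours, not the waveforms used in this analysis:

```python
import numpy as np

def inner(a, b, psd, df):
    return 4.0 * df * np.real(np.sum(a * np.conj(b) / psd))

def waveform(theta, f):
    # Toy model: overall amplitude A and one inspiral-like phase coefficient c.
    A, c = theta
    return A * (f / 100.0) ** (-7.0 / 6.0) * np.exp(1j * c * (f / 100.0) ** (-5.0 / 3.0))

def fisher_and_sys(theta, f, psd, df, dH, rel_step=1e-6):
    """Fisher matrix from central differences, then the systematic-error
    projection Delta theta_sys = Gamma^{-1} (partial_j h | delta H)."""
    n = len(theta)
    derivs = []
    for i in range(n):
        dp = np.zeros(n)
        dp[i] = rel_step * max(abs(theta[i]), 1.0)
        derivs.append((waveform(theta + dp, f) - waveform(theta - dp, f)) / (2 * dp[i]))
    gamma = np.array([[inner(di, dj, psd, df) for dj in derivs] for di in derivs])
    proj = np.array([inner(di, dH, psd, df) for di in derivs])
    return gamma, np.linalg.solve(gamma, proj)

f = np.linspace(10.0, 512.0, 2048)
df = f[1] - f[0]
psd = np.full_like(f, 1e-46)
theta = np.array([1e-23, 2.0])

# Excess strain proportional to h itself: equivalent to a 0.1% amplitude error,
# so the recovered bias should land almost entirely on the amplitude parameter.
dH = 1e-3 * waveform(theta, f)
gamma, dtheta_sys = fisher_and_sys(theta, f, psd, df, dH)
```

For this choice of $\delta H$ the projection correctly attributes the bias to the amplitude parameter and leaves the phase coefficient essentially unbiased, which is a useful sanity check of the machinery.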
\subsection{PPN formalism, choices of parameters and waveforms \label{sec2b}}
The test of parameterized post-Newtonian coefficients~\citep{meidam2018_ParameterizedTestsStrongfield} is a generic formalism for testing deviations from GR. We use the waveform model \texttt{IMRPhenomPv2}~\citep{Husa:2015iqa,Khan:2015jqa}, whose phase is characterized by a set of parameters $\{p_i \}$, including inspiral phase parameters $\{\phi_0,\dots, \phi_7 \}$ and $\{\phi_{5l}, \phi_{6l} \}$, phenomenological coefficients $\{\beta_0,\dots, \beta_3 \}$, and merger-ringdown parameters $\{\alpha_0,\dots, \alpha_5 \}$. Deviations $p_i \rightarrow (1+\delta \hat{p}_i)p_i$ are introduced as violations of GR; $\delta \hat{p}_i=0$ recovers GR. Testing GR is thus converted into estimating the testing parameters $\delta \hat{p}_i$. Although a specific modified gravity theory could create deviations in more than one testing parameter, previous works have shown that including one testing parameter at a time is sufficient, and even more efficient at finding violations of GR, because it avoids the correlations between testing parameters and GR parameters~\citep{meidam2018_ParameterizedTestsStrongfield,Sampson:2013lpa}. In this work, we choose $\delta \hat{\phi}_0$ as the example testing parameter. We assume GR is the correct theory and focus on whether the PPN test falsely indicates deviations from GR.
As a qualitative investigation, we only consider three GR parameters (chirp mass $\mathcal{M}$, mass ratio $q$ and coalescence time $t_\mathrm{c}$) in PE (i.e., in Fisher matrix), and other parameters are treated as perfectly known. This choice captures the parameter that appears in the leading PN term and the corresponding PPN modifications. The decisive parameter in the analysis of overlapping signals, $t_\mathrm{c}$, is also included.
We introduce a non-zero $\delta \hat{\beta}_2$ to mimic inaccurate waveform models. $\delta \hat{\beta}_2$ is a phenomenological coefficient for the intermediate regime between inspiral and merger and has an insignificant correlation with the testing parameter $\delta \hat{\phi}_0$. We take $\delta \hat{\beta}_2=0$ as our model waveform, while the ``real'' waveform could have $\delta \hat{\beta}_2=0$, $5\times 10^{-2}$, or $5\times 10^{-4}$. The first case means our model waveform is perfect, and all systematic errors come from overlapping signals. The second case generates waveform mismatches around $10^{-4} - 10^{-3}$, which corresponds to the current waveform accuracy~\citep{pratten2021_ComputationallyEfficientModels,ossokine2020_MultipolarEffectiveonebodyWaveforms}. The last case produces mismatches around $10^{-7} - 10^{-6}$ and corresponds to expectations for future waveform accuracy~\citep{purrer2020_GravitationalWaveformAccuracy,hu2022_AssessingModelWaveform}. The excess strain from waveform inaccuracy can be written as
\begin{equation}
\delta H_{\mathrm{wf}} = h(\vec{\theta}_\mathrm{real}) \mid_{\delta \hat{\beta}_2\neq 0} - h(\vec{\theta}_\mathrm{real})\mid_{\delta \hat{\beta}_2=0}
\end{equation}
We can use the approximation $h(\vec{\theta}_\mathrm{real}) - h_\mathrm{m}(\vec{\theta}_\mathrm{real}) \approx h(\vec{\theta}_\mathrm{ML}) - h_\mathrm{m}(\vec{\theta}_\mathrm{ML})$, as the error introduced is a higher-order term.
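The mismatch values quoted above follow the standard overlap-based definition $1 - (h_1|h_2)/\sqrt{(h_1|h_1)(h_2|h_2)}$; for a rough illustration we omit the usual maximization over time and phase. A minimal sketch with a toy waveform and a small fractional phase perturbation (all numbers are illustrative, not \texttt{IMRPhenomPv2} values):

```python
import numpy as np

def inner(a, b, psd, df):
    return 4.0 * df * np.real(np.sum(a * np.conj(b) / psd))

def mismatch(h1, h2, psd, df):
    # 1 - normalized overlap (no maximization over time/phase here).
    norm = np.sqrt(inner(h1, h1, psd, df) * inner(h2, h2, psd, df))
    return 1.0 - inner(h1, h2, psd, df) / norm

f = np.linspace(10.0, 512.0, 2048)
df = f[1] - f[0]
psd = np.full_like(f, 1e-46)

amp = 1e-23 * (f / 100.0) ** (-7.0 / 6.0)
phase = 2.0 * (f / 100.0) ** (-5.0 / 3.0)

h_model = amp * np.exp(1j * phase)
h_real = amp * np.exp(1j * phase * (1.0 + 5e-4))  # small fractional phase error

mm = mismatch(h_real, h_model, psd, df)
```

With this toy phasing, a fractional phase perturbation of a few $\times 10^{-4}$ already produces a mismatch in the broad range quoted for current-generation models; the exact value depends on the assumed amplitude and PSD weighting.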
\subsection{Overlapping signals and mock catalogs\label{sec2c}}
When the data contains more than one signal, different signals may affect the analysis of each other~\citep{himemoto2021_ImpactsOverlappingGravitationalwave,samajdar2021_BiasesParameterEstimation,relton2021_ParameterEstimationBias,antonelli2021_NoisyNeighboursInference}. It is known that the correlation between signals is weak unless the merger times are very close, so we regard two signals as overlapping only if the merger time difference satisfies $|\Delta t|<4$\,s.
Overlapping signals can be classified into two types: detected and undetected. The former are strong enough to be detected and should be subtracted from the data in the analysis of other signals (or, the ``main'' signal). The latter are too faint to be recognized by the detector and may have unnoticed impacts on PE. In this work, the SNR threshold for detection is set to 8, below which GWs are assumed to be undetected.
For a detected overlapping signal, its contribution to excess strain comes from its imperfect removal from the data, i.e.,
\begin{equation}
\label{eq:do}
\delta H_{\mathrm{DO}} = h'(\vec{\theta}_\mathrm{real}) - h_\mathrm{m}'(\vec{\theta}_\mathrm{ML}) \approx \Delta \theta'^{i} \partial_i h'_m + \delta H'_{\mathrm{wf}},
\end{equation}
where $'$ denotes variables of the detected overlapping signal. The first term arises from the inaccurate estimation of the overlapping signal's parameters, which is random since the error is partly caused by the random noise. As a conservative estimate of the errors in the overlapping signal's PE, and following \citet{antonelli2021_NoisyNeighboursInference}, we ignore waveform systematic errors in $\theta'^{i}$ and adopt the lowest-order approximation for its correlation with the main signal. Substituting it into Eq.~\ref{eq:errors}, one obtains the covariance of the first term of the systematic error
\begin{equation}
\label{eq:do1}
\langle\Delta \theta^i_{\mathrm{DO1}} \Delta \theta^j_{\mathrm{DO1}}\rangle = \left( \Gamma^{-1} \Gamma_{\mathrm{mix}} \Gamma^{'-1} \Gamma_{\mathrm{mix}}^\mathrm{T} (\Gamma^{-1})^\mathrm{T} \right)_{ij},
\end{equation}
where $(\Gamma_{\mathrm{mix}})_{ij} = (\partial_i h | \partial_j h')$ encodes the correlation between the two signals and $\Gamma^{'}_{ij} = (\partial_i h' | \partial_j h')$ is the Fisher matrix of the overlapping signal. The second term in Eq.~\ref{eq:do} represents the inaccurate waveform model used to subtract signals, and can be calculated in the same way as the waveform systematics, yielding $\Delta \theta^i_{\mathrm{DO2}} = (\Gamma^{-1})^{ij} (\partial_j h_\mathrm{m} | \delta H'_{\mathrm{wf}})$. In this work, the systematic error from detected overlapping signals is calculated as $\Delta \theta^i_{\mathrm{DO2}}$ plus a random sample drawn from a zero-mean multivariate Gaussian distribution with covariance matrix given by Eq.~\ref{eq:do1}. For more than one detected overlapping signal, Eq.~\ref{eq:do} can be extended by defining $h'$ as the sum of all GWs in the data~\citep{antonelli2021_NoisyNeighboursInference}, which enlarges the dimensions of $\Gamma_{\mathrm{mix}}$ and $\Gamma^{'}$.
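The covariance of Eq.~\ref{eq:do1} and the corresponding random draw are a few lines of linear algebra once $\Gamma$, $\Gamma'$, and $\Gamma_{\mathrm{mix}}$ are in hand. The sketch below uses random positive-definite stand-ins purely to show the matrix plumbing and the sampling step (the matrices are not derived from any waveform):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    """Random symmetric positive-definite stand-in for a Fisher matrix."""
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

gamma = random_spd(3)                      # Fisher matrix of the main signal
gamma_prime = random_spd(3)                # Fisher matrix of the overlapping signal
gamma_mix = rng.standard_normal((3, 3))    # (partial_i h | partial_j h')

gi = np.linalg.inv(gamma)
# Cov = Gamma^{-1} Gamma_mix Gamma'^{-1} Gamma_mix^T Gamma^{-T}
cov = gi @ gamma_mix @ np.linalg.inv(gamma_prime) @ gamma_mix.T @ gi.T

# One realization of the random part of the detected-overlap error.
dtheta_do1 = rng.multivariate_normal(np.zeros(3), cov)
```

By construction the resulting covariance is symmetric and positive semidefinite, so the Gaussian draw is always well defined.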
The undetected overlapping signal simply contributes to systematic error by $\delta H_\mathrm{UO} = \sum_{\mathrm{undetected}} h''(\vec{\theta}_\mathrm{real})$. It is accessible in our simulation but unknown in real data analysis.
We consider binary black hole (BBH) and binary neutron star (BNS) sources, and assume their distribution in redshift $z$ follows the analytical approximation~\citep{oguri2018_EffectGravitationalLensing,samajdar2021_BiasesParameterEstimation}
\begin{equation}
R_{\mathrm{GW}}(z)=\frac{a_1 e^{a_2 z}}{e^{a_3 z}+a_4} \mathrm{Gpc^{-3}yr^{-1}},
\end{equation}
which is then converted to the observable event rate by multiplying by the factor $\frac{1}{1+z}\frac{dV_\mathrm{c}}{dz}$. Here $V_\mathrm{c}$ is the comoving volume, and we employ the Planck15 cosmology~\citep{Planck:2015fie}. Note that ``observable'' GWs still need to achieve an SNR of 8 to be ``detectable''. $a_{\{1,2,3,4\}}$ are model parameters. We set $a_2 = 1.6$, $a_3=2.1$, and $a_4 = 30$ to mimic a peak at $z\sim 2$. $a_1$ is scaled based on the local merger rates given by \citet{LIGOScientific:2020kqk} ($\mathcal{R}_{\mathrm{BNS}}=320_{-240}^{+490}$ and $\mathcal{R}_{\mathrm{BBH}}=23.9_{-8.6}^{+14.3} \mathrm{Gpc}^{-3} \mathrm{yr}^{-1}$) such that $R_{\mathrm{GW}}(z=0) = \mathcal{R}_{\mathrm{BNS/BBH}}$. We choose three values of $a_1$, corresponding to the lower, median, and higher estimates of the local merger rate, respectively.
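The expected number of observable mergers per year follows from integrating $R_{\mathrm{GW}}(z)\,\frac{1}{1+z}\frac{dV_\mathrm{c}}{dz}$ over redshift. A minimal numerical sketch for the median BBH rate, using a hand-coded flat $\Lambda$CDM volume element with assumed values of $H_0$ and $\Omega_m$ (close to, but not exactly, the Planck15 cosmology used in the paper):

```python
import numpy as np

H0 = 67.7              # km/s/Mpc, assumed (close to Planck15)
Om = 0.31              # assumed matter density parameter
c_light = 299792.458   # km/s

def E(z):
    return np.sqrt(Om * (1.0 + z) ** 3 + 1.0 - Om)

z = np.linspace(0.0, 10.0, 2001)
dz = z[1] - z[0]

# Comoving distance D_c(z) in Mpc via cumulative trapezoidal integration.
drdz = c_light / (H0 * E(z))
Dc = np.concatenate(([0.0], np.cumsum(0.5 * (drdz[1:] + drdz[:-1]) * dz)))

# Comoving volume element dV_c/dz = 4 pi D_c^2 dD_c/dz, in Mpc^3.
dVdz = 4.0 * np.pi * Dc ** 2 * drdz

def rate_density(z, a1, a2=1.6, a3=2.1, a4=30.0):
    return a1 * np.exp(a2 * z) / (np.exp(a3 * z) + a4)  # Gpc^-3 yr^-1

a1 = 23.9 * (1.0 + 30.0)   # scaled so R_GW(0) = 23.9 Gpc^-3 yr^-1 (median BBH)

# Observed (time-dilated) rate per unit z, converting Mpc^3 -> Gpc^3.
dNdz = rate_density(z, a1) * dVdz * 1e-9 / (1.0 + z)
n_per_year = np.sum(0.5 * (dNdz[1:] + dNdz[:-1]) * dz)
```

With these assumed cosmological parameters the integral lands at order $10^5$ observable BBHs per year, comparable to the median-catalog count in Tab.~\ref{tab1}; the precise number depends on the adopted cosmology.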
The masses of BBHs are generated by the PowerLaw $+$ Peak model in \citet{LIGOScientific:2020kqk}, while all BNS systems are set to be identical: $1.45+1.4\, M_\odot$, $\Lambda_1 = \Lambda_2 = 425$. \texttt{IMRPhenomPv2\_NRTidal}~\citep{Dietrich:2018uni} is used to generate BNS waveforms with the same $\delta \hat{\beta}_2$ as for BBHs. We perform tests of GR with all BBH events and use BNS events as a background: BNS events enter the calculation only as overlapping signals.
We assume zero spins, isotropically distributed inclination and sky location, and uniformly distributed coalescence time, phase, and polarization angle.
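The overlap statistics in Tab.~\ref{tab1} can be qualitatively reproduced by drawing merger times uniformly over one year and counting neighbors within the 4\,s window. The sketch below lumps detected and undetected overlaps together and treats all other mergers as a single background population, which is a simplification of our actual bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(1)
year = 365.25 * 24.0 * 3600.0

n_bbh, n_bns = 88_300, 1_144_354   # median-catalog observable counts

t_bbh = rng.uniform(0.0, year, n_bbh)
# All other mergers (remaining BBHs plus all BNSs) as one background
# population of merger times -- a simplification for this sketch.
t_other = np.sort(rng.uniform(0.0, year, n_bbh - 1 + n_bns))

# For each BBH merger, count background mergers with |dt| < 4 s.
lo = np.searchsorted(t_other, t_bbh - 4.0)
hi = np.searchsorted(t_other, t_bbh + 4.0)
n_overlaps = hi - lo
frac_isolated = np.mean(n_overlaps == 0)
```

For these counts the expected number of neighbors per 8\,s window is about $0.3$, so roughly three quarters of BBH events have no overlap at all, in line with the per-type fractions listed in Tab.~\ref{tab1}.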
A summary of the low, median, and high merger rate catalogs is shown in Tab.~\ref{tab1}. It shows that most BBH events will not have an overlapping signal near their merger times, which implies that overlapping signals contribute to systematic errors less frequently than waveform systematics. Undetected overlapping signals occur more often than detected ones: this unnoticeable confusion background has drawn attention in recent works~\citep{Wu:2022pyg,Reali:2022aps} and needs further investigation.
\begin{table*}[]
\begin{ruledtabular}
\begin{tabular}{|c|cc|cc|cc|}
\hline
& \multicolumn{2}{c|}{\# of observable binaries} & \multicolumn{2}{c|}{Detected overlaps on BBH events} & \multicolumn{2}{c|}{Undetected overlaps on BBH events} \\ \hline
& \multicolumn{1}{c|}{BBH} & BNS & \multicolumn{1}{c|}{\# of overlaps} & \# (fraction) of events & \multicolumn{1}{c|}{\# of overlaps} & \# (fraction) of events \\ \hline
\multirow{4}{*}{Low} & \multicolumn{1}{c|}{\multirow{4}{*}{56526}} & \multirow{4}{*}{286088} & \multicolumn{1}{c|}{0} & 48380 (98\%) & \multicolumn{1}{c|}{0} & 45991 (93\%) \\ \cline{4-7}
& \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{1} & 937 (1.9\%) & \multicolumn{1}{c|}{1} & 3217 (6.5\%) \\ \cline{4-7}
& \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{2} & 11 (0.022\%) & \multicolumn{1}{c|}{2} & 119 (0.24\%) \\ \cline{4-7}
& \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{3} & 1 (0.0020\%) \\ \hline
\hline
\multirow{6}{*}{Median} & \multicolumn{1}{c|}{\multirow{6}{*}{88300}} & \multirow{6}{*}{1144354} & \multicolumn{1}{c|}{0} & 73224 (95\%) & \multicolumn{1}{c|}{0} & 58921 (77\%) \\ \cline{4-7}
& \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{1} & 3574 (4.6\%) & \multicolumn{1}{c|}{1} & 15658 (20\%) \\ \cline{4-7}
& \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{2} & 74 (0.096\%) & \multicolumn{1}{c|}{2} & 2108 (2.7\%) \\ \cline{4-7}
& \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{3} & 1 (0.0010\%) & \multicolumn{1}{c|}{3} & 174 (0.23\%) \\ \cline{4-7}
& \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{4} & 10 (0.013\%) \\ \cline{4-7}
& \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{5} & 2 (0.0030\%) \\ \hline
\hline
\multirow{8}{*}{High} & \multicolumn{1}{c|}{\multirow{8}{*}{143349}} & \multirow{8}{*}{2896647} & \multicolumn{1}{c|}{0} & 112745 (90\%) & \multicolumn{1}{c|}{0} & 63932 (51\%) \\ \cline{4-7}
& \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{1} & 11496 (9.2\%) & \multicolumn{1}{c|}{1} & 42931 (34\%) \\ \cline{4-7}
& \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{2} & 589 (0.47\%) & \multicolumn{1}{c|}{2} & 14143 (11\%) \\ \cline{4-7}
& \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{3} & 19 (0.015\%) & \multicolumn{1}{c|}{3} & 3208 (2.6\%) \\ \cline{4-7}
& \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{4} & 540 (0.43\%) \\ \cline{4-7}
& \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{5} & 86 (0.069\%) \\ \cline{4-7}
& \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{6} & 6 (0.0050\%) \\ \cline{4-7}
& \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{} & & \multicolumn{1}{c|}{7} & 3 (0.0020\%) \\ \hline
\end{tabular}
\caption{\label{tab1} A summary of the three mock catalogs. From left to right: catalog type; observable BBH and BNS mergers per year (note that not all of these are detectable); the number of detected overlapping signals and the number (and fraction) of detected BBH events with that many detected overlapping signals; and the same statistics for undetected overlapping signals.}
\end{ruledtabular}
\end{table*}
Several simplifications have been adopted in our mock catalogs: we regard BNSs as a background and use only BBHs as test sources; we ignore neutron star--black hole (NSBH) mergers and other possible source types; and we use an analytical merger rate that peaks at $z\sim 2$, ignoring binaries from Pop III stars. Our catalogs aim to provide an appropriate merger rate for the study of systematic error accumulation rather than to accurately model the astrophysical population. To this end, we also adjust the merger rate to different levels and expect the real situation to lie somewhere between our lowest and highest estimates.
Signals are injected into the third-generation GW detector Einstein Telescope with the ET-B PSD~\citep{Punturo:2010zz} and Gaussian noise realizations. The frequency band is set to 10--4096\,Hz.
\section{Results\label{sec3}}
\subsection{Single events\label{sec3a}}
We first present an example event. The main signal is from a BBH with $\mathcal{M}_\mathrm{c} = 32 \mathrm{M}_\odot$, $q=0.9$, and a network SNR of 50.3. The overlapping signal is an equal-mass BBH with $\mathcal{M}_\mathrm{c} = 20 \mathrm{M}_\odot$. We scale its SNR from $\sim 37$ down to $\lesssim 8$ to make it detectable or undetectable. We vary the merger time difference (in steps of 0.1\,s) and calculate the total systematic error for different waveform models. Note that, throughout this section, ``systematic error'' refers to that of the testing parameter $\delta \hat{\phi}_0$. We define the error ratio as the absolute value of the ratio of the systematic error to the statistical uncertainty. The PPN coefficient test is subject to false deviations from GR when this ratio is greater than one.
The result is shown in Fig.~\ref{pic:example_event}. It shows the sum of systematic errors from all sources, including the waveform systematics of the main signal and the overlapping excess strains; the latter may also partly arise from waveform systematics if the overlapping signal is detected.
\begin{figure}
\includegraphics[width=0.5\textwidth]{example_event.pdf}
\centering
\caption{\label{pic:example_event} The error ratio of $\delta \hat{\phi}_0$ varies with merger time difference. The main signal has $\mathcal{M}_\mathrm{c} = 32 \mathrm{M}_\odot$, $q=0.9$, and SNR of 50.33. The overlapping signal is an equal mass BBH with $\mathcal{M}_\mathrm{c} = 20 \mathrm{M}_\odot$. SNR of the overlapping signal is adjusted by changing luminosity distance: detected overlap is shown in upper panel, and the undetected is the lower one. We use three kinds of waveforms mentioned in Sec.~\ref{sec2b}: perfect waveform (solid line), ``current'' waveform (dashed line), and ``future waveform'' (faint dotted-dashed line). }
\end{figure}
The error from the overlapping signal oscillates as $\Delta t$ changes, due to the repeated alignment and misalignment of the phases of the two GWs. The overlap error is not symmetric around $\Delta t=0$ because the waveforms of the two sources are not symmetric, but the peak is always located in the region $|\Delta t|\leq 1$\,s, which means the overlapping signal only produces a large influence when the two mergers are very close. These characteristics are consistent with previous works~\citep{himemoto2021_ImpactsOverlappingGravitationalwave,samajdar2021_BiasesParameterEstimation,relton2021_ParameterEstimationBias,antonelli2021_NoisyNeighboursInference}. When $|\Delta t|$ is large, waveform inaccuracy dominates the systematic error.
Comparing the detected and undetected overlapping signals, the former produces a larger systematic error when the waveform is inaccurate, because waveform systematics are also involved in the signal subtraction. This implies that different types of systematic errors are correlated and can magnify each other, as expected from Eq.~\ref{eq:do}. However, we emphasize that it is also possible for undetected signals to produce significant systematic errors in our simulation.
The statistical error $\Delta \theta^i_\mathrm{stat} = (\Gamma^{-1})^{ij} (\partial_j h | n) \propto 1/|h| \propto 1/\mathrm{SNR}$, while the systematic error $\Delta \theta^i_\mathrm{sys} = (\Gamma^{-1})^{ij} (\partial_j h | \delta H)$ does not necessarily shrink when the SNR increases. An example is the waveform systematics of the main signal. Therefore, systematic errors may dominate in the high-SNR scenario. We calculate systematic errors for each BBH event in our mock catalogs and plot the absolute error and error ratio against SNR in Fig.~\ref{pic:error_vs_snr}. The error ratio can exceed one for the ``current'' waveform, and this happens more often when SNR$>30$, despite the fact that high-SNR events are rarer. Error ratios for the perfect waveform and the ``future'' waveform are usually below one. However, as pointed out by \citet{moore2021_TestingGeneralRelativity}, a false deviation can be reached even though the estimates for individual events are accurate. We investigate this in more detail in the next subsection.
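This scaling argument can be checked numerically: scaling the same source up in loudness leaves $\Delta\theta_\mathrm{sys}$ unchanged when $\delta H \propto h$, while the statistical width shrinks, so the error ratio grows linearly with SNR. A one-parameter toy sketch (toy waveform and excess strain of our own choosing):

```python
import numpy as np

def inner(a, b, psd, df):
    return 4.0 * df * np.real(np.sum(a * np.conj(b) / psd))

f = np.linspace(10.0, 512.0, 2048)
df = f[1] - f[0]
psd = np.full_like(f, 1e-46)

phase = 2.0 * (f / 100.0) ** (-5.0 / 3.0)
h0 = 1e-23 * (f / 100.0) ** (-7.0 / 6.0) * np.exp(1j * phase)

ratios = []
for scale in (1.0, 4.0, 16.0):                  # louder copies of the same source
    h = scale * h0
    dhdc = 1j * phase * h                       # derivative wrt the phase coefficient
    dH = 1j * 0.01 * (f / 100.0) ** (-1.0) * h  # excess strain proportional to h
    gamma = inner(dhdc, dhdc, psd, df)          # one-parameter Fisher "matrix"
    stat = 1.0 / np.sqrt(gamma)                 # statistical width ~ 1/SNR
    sys = inner(dhdc, dH, psd, df) / gamma      # systematic error, SNR-independent
    ratios.append(abs(sys) / stat)
```

Each factor of 4 in loudness quadruples the error ratio in this toy model, which is exactly the mechanism that makes high-SNR events vulnerable.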
\begin{figure*}
\includegraphics[width=0.9\textwidth]{error_with_SNR_v2.png}
\centering
\caption{\label{pic:error_vs_snr} Absolute error (first column) and error ratio (second column) of $\delta \hat{\phi}_0$ vs SNR for low (uppermost row), median (median row), high (bottom row) merger rate catalogs. Each point represents a BBH event. Blue points are for perfect waveform and all systematic errors come from overlapping signals; red points stand for ``current waveform'' case and yellow points for ``future waveform'' case. Grey points in the first column are statistical errors. This plot shows error ratios greater than 1 are mostly from ``current waveform'' and high SNR events. }
\end{figure*}
\subsection{Error accumulation in a catalog\label{sec3b}}
In this subsection, we will combine the results from all BBHs in a catalog and show how systematic error in testing GR accumulates. There are several ways of combining results from multiple events~\citep{zimmerman2019_CombiningInformationMultiple,isi2019_HierarchicalTestGeneral}. We employ two straightforward methods: multiplying likelihoods (equivalently, multiplying posteriors if priors are flat) and multiplying Bayes factors. The former assumes the modification parameter is the same for all events, while the latter allows the modification parameter to vary across events.
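Multiplying Gaussian likelihoods amounts to adding precisions and precision-weighting the (biased) means. The toy simulation below, with hand-picked distributions for the per-event statistical width and systematic offset, shows how the error ratio can climb as events are combined:

```python
import numpy as np

rng = np.random.default_rng(2)
n_events = 2000

sigma = rng.uniform(0.05, 0.5, n_events)   # per-event statistical width (toy)
bias = rng.normal(0.02, 0.05, n_events)    # per-event systematic offset (toy)

# Multiplying Gaussian likelihoods: precisions add; means combine
# precision-weighted, so opposite-sign biases can partially cancel.
w = 1.0 / sigma ** 2
prec = np.cumsum(w)
mean = np.cumsum(w * bias) / prec
ratio = np.abs(mean) * np.sqrt(prec)       # |combined mean| / combined width
```

Because the biases here have a small common component, the combined mean settles near it while the combined width keeps shrinking, so the error ratio eventually crosses one; with zero-mean biases the cancellation would be much more effective.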
We assume the posterior follows a multivariate Gaussian distribution with covariance matrix $\Gamma^{-1}$ and mean equal to the injected values plus the systematic errors. Starting from the first event in a catalog, we multiply in the posteriors of new events one by one and calculate the error ratio. Note that the product of two Gaussian distributions is still Gaussian, and its mean (systematic error) is a linear combination of the two original means. Therefore, a positive and a negative systematic error can cancel when we combine events by multiplying likelihoods. The Bayes factor, on the other hand, takes the form~\citep{moore2021_TestingGeneralRelativity}
\begin{equation}
\begin{aligned}
\mathcal{B} &\sim \frac{Z_\mathrm{nonGR}}{Z_\mathrm{GR}} = \frac{\int \mathrm{d}\vec{\theta}_\mathrm{nonGR} L_\mathrm{nonGR}}{\int \mathrm{d}\vec{\theta}_\mathrm{GR} L_\mathrm{GR}}\\
&= \sqrt{2\pi} e^{\frac{1}{2}\Gamma_{\delta \hat{\phi}_0 \delta \hat{\phi}_0} \Delta\theta_{\mathrm{sys}}^2} \sqrt{\frac{\det \Gamma_\mathrm{GR}}{\det \Gamma_\mathrm{nonGR}}},
\end{aligned}
\end{equation}
where $\Gamma_\mathrm{nonGR}$ is the Fisher matrix including the testing parameter, while $\Gamma_\mathrm{GR}$ includes only the GR parameters. $\Gamma_{\delta \hat{\phi}_0 \delta \hat{\phi}_0} = (\partial h/\partial\delta \hat{\phi}_0|\partial h/\partial\delta \hat{\phi}_0)$, and $\Delta\theta_{\mathrm{sys}}$ is the systematic error of $\delta \hat{\phi}_0$. Here the systematic error appears as a quadratic term, so errors with different signs can still accumulate.
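The quadratic dependence on $\Delta\theta_{\mathrm{sys}}$ means biases of either sign push $\ln\mathcal{B}$ in the same direction. A sketch of per-event log Bayes factors accumulated over a toy catalog (the Fisher elements, the bias distribution, and the constant standing in for the determinant-ratio term are all assumptions of ours):

```python
import numpy as np

def log_bayes_factor(gamma_pp, dtheta_sys, occam=-1.0):
    """Per-event ln B from the Gaussian evidence ratio above. `occam` is a
    hand-set constant standing in for the log determinant-ratio term."""
    return 0.5 * np.log(2.0 * np.pi) + 0.5 * gamma_pp * dtheta_sys ** 2 + occam

rng = np.random.default_rng(3)
gamma_pp = rng.uniform(50.0, 500.0, 1000)   # toy Fisher elements
bias = rng.normal(0.0, 0.05, 1000)          # systematic errors, signs vary

ln_b_biased = np.cumsum(log_bayes_factor(gamma_pp, bias))
ln_b_unbiased = np.cumsum(log_bayes_factor(gamma_pp, np.zeros(1000)))
# The biased catalog drifts toward "non-GR" even though the biases average to zero.
```

Even with zero-mean biases, the biased cumulative $\ln\mathcal{B}$ pulls away from the unbiased one because each event contributes $\frac{1}{2}\Gamma_{\delta\hat\phi_0\delta\hat\phi_0}\Delta\theta_{\mathrm{sys}}^2 \geq 0$ regardless of the bias sign.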
Considering the arbitrary ordering of events, we permute the sequence 200 times and extract the ensemble average and 68\% confidence interval of the error ratio and Bayes factor. The results are shown in Fig.~\ref{pic:error_accu_all}.
\begin{figure*}
\includegraphics[width=\textwidth]{error_accu_all.png}
\centering
\caption{\label{pic:error_accu_all} Systematic error accumulation with the increasing number of events. The first column shows the absolute error of $\delta \hat{\phi}_0$ and the second column shows the error ratio. The third column is the Bayes factor. Zoom-in windows show the behavior of the first several events. Solid red lines are the ensemble average for the ``current waveform'', dashed red lines are for the ``future waveform'', and blue lines stand for the perfect waveform. The shadows along the lines are 68\% confidence intervals. The first, second, and third rows are for the low, median, and high merger rate catalogs, respectively. The black dotted-dashed line is the threshold above which a false deviation from GR is claimed.
In both methods, false deviations can be reached as the number of events increases, but multiplying Bayes factors does so faster due to the quadratic term. The ``future waveform'' in the third column shows that a high overlap rate can amplify waveform inaccuracies.}
\end{figure*}
Let $N_\mathrm{event}$ be the number of events. When multiplying likelihoods, the statistical uncertainty shrinks as $1/\sqrt{N_\mathrm{event}}$. The absolute error of the testing parameter also decreases, but at a slower pace due to perturbations from the systematic errors of newly added events. It would also follow $1/\sqrt{N_\mathrm{event}}$ if there were no systematic errors, and we observe that the test with the perfect waveform in the low merger rate catalog approximately does so. In most simulations it is the waveform inaccuracy that keeps contributing to the systematic error. The slower decay of the systematic error results in a climb of the error ratio. At some point (typically $\sim 10^3$ events) this leads to a false deviation from GR for the ``current'' waveform. For the better waveform, the error ratio climbs as well, but it stays below the statistical level until $10^5$--$10^6$ events.
Multiplying Bayes factors, however, is more sensitive to systematic errors in PE because of the quadratic term. The ``current'' waveform can lead to a strong false deviation from GR with only tens of events. Intriguingly, this method differs from multiplying likelihoods in the behavior of tests with the ``future'' waveform in the high merger rate catalog: multiplying likelihoods does not lead to a false deviation in this case, but multiplying Bayes factors eventually claims a deviation from GR after some oscillations over the first $\sim 5000$ events. Comparing with the lower merger rates, we conclude that the frequent inaccurate subtraction of detected overlapping signals amplifies the effects of waveform systematics, even if we use a relatively accurate waveform model with mismatches around $10^{-7}$--$10^{-6}$.
\subsection{Golden events\label{sec3c}}
We combined all detected BBH events in the previous subsection. It is also interesting to test GR with only the ``golden events'', i.e., the GW events with high SNR and clean data that contribute most of the information in the whole-catalog test. This idea is widely used in many analyses, such as the recent GWTC-3 tests of GR~\citep{LIGOScientific:2021sio} and cosmology~\citep{LIGOScientific:2021aug}. Since the noise is Gaussian in our simulation, we select golden events with only two criteria: an SNR above a chosen threshold (50 or 200) and no detected overlapping signals. Results for the error ratio and Bayes factor are shown in Fig.~\ref{pic:golden_events}.
\begin{figure*}
\includegraphics[width=1\textwidth]{golden_events.png}
\centering
\caption{\label{pic:golden_events} Similar to Fig.~\ref{pic:error_accu_all}, the error ratio and Bayes factor accumulation. The left two columns show results from SNR$>$50 events, the right two columns are for SNR$>$200 events. Compared with Fig.~\ref{pic:error_accu_all}, it shows that tests with high SNR events are more likely to make a false deviation from GR.}
\end{figure*}
It turns out that high SNR events are more vulnerable to systematic errors. Fewer events are needed to create a false deviation for the ``current'' waveform model, and the ``future'' waveform is also able to produce false deviations in all three catalogs. There is no qualitative difference between the results for different merger rates, because we have removed the events with detected overlapping signals, which would otherwise magnify waveform systematic effects. As mentioned in Sec.~\ref{sec3a}, statistical uncertainty decreases as $1/\mathrm{SNR}$ while systematic errors do not, as long as the waveform is not perfect. From this angle the false deviation for golden events is not surprising, but it does call for more attention and an appropriate solution in future data analysis.
\section{Conclusions and discussions\label{sec4}}
We have investigated how systematic errors in testing GR accumulate under the influence of overlapping signals and inaccurate waveforms. We have considered different levels of waveform inaccuracies and event rates, and employed two approaches to combining the results.
We confirm that systematic errors can accumulate when combining multiple events, and can lead to incorrectly disfavoring GR. Since overlapping signals do not always occur, it is waveform inaccuracies that keep contributing to the systematic error in the catalog tests. An accurate waveform model is effective at preventing false deviations in most cases, while a worse one can lead to biased conclusions. We additionally find that overlapping signals can enlarge the effect of waveform systematics: by increasing the merger rate (and therefore the number of overlaps), we obtain a false deviation from GR that does not occur at a lower merger rate. One can avoid this correlated error by selecting events with no detected overlapping signals, and, if one prefers, with high SNR as well. However, we have shown that these events produce biases much faster, because waveform systematics dominate in the high-SNR regime.
We re-emphasize that systematic errors can accumulate when combining multiple events and lead to incorrect scientific conclusions. This problem is universal: in addition to tests of GR, any analysis based on a GW catalog is faced with this issue, such as constraints on cosmological models, neutron star models \citep{PhysRevD.105.L061301}, and astrophysical population inference. Furthermore, there are more sources of systematic errors than those investigated in this work: instrumental calibration~\citep{Sun:2020wke, Hall:2017off}, glitches~\citep{Powell:2018csz,Pankow:2018qpo}, missing physical effects~\citep{Pang:2018hjb,Saini:2022igm} and so forth. A full analysis of these contributions, and their relative importance, will be essential in designing analysis strategies for 3G detectors.
An obvious solution to these issues is to continue improving waveform model accuracy and instrumental stability, but we believe more effort is needed on the data analysis side. A proper estimate of the confusion background may be necessary \citep{Reali:2022aps}, and new techniques might be required, such as accounting for waveform systematic errors during PE~\citep{moore2014_NovelMethodIncorporating}.
\begin{acknowledgments}
The authors would like to thank Chris Messenger and Christian Chapman-Bird for helpful discussions and suggestions. We are grateful for computational resources provided by Cardiff University, and funded by STFC grant ST/I006285/1. QH is supported by CSC. JV is supported by STFC grant ST/V005634/1.
\end{acknowledgments}
\section{Introduction and main result}\label{sec:intro}
We consider the approximation of a nonnegative, integer-valued random variable $W$ by a compound geometric distribution. We say that $Y$ has a compound geometric distribution if it is equal in distribution to $\sum_{i=1}^NX_i$, where $X,X_1,X_2,\ldots$ are i$.$i$.$d$.$ and $N\sim\mbox{Geom}(p)$ has a geometric distribution with $\mathbb{P}(N=k)=p(1-p)^k$ for $k=0,1,2,\ldots$. In this work we will only consider the case where $X$ takes values in $\mathbb{N}=\{1,2,\ldots\}$. As usual, the empty sum is treated as zero, so that $\mathbb{P}(Y=0)=\mathbb{P}(N=0)=p$.
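For concreteness, such a $Y$ can be simulated directly from this definition. The following sketch (illustrative only; the function name is ours) draws $N\sim\mbox{Geom}(p)$ as the number of failures before a first success and sums $N$ independent copies of $X$:

```python
import random

def compound_geometric(p, sample_x, rng=random):
    """Draw Y = X_1 + ... + X_N with N ~ Geom(p) supported on {0, 1, 2, ...}.

    sample_x() must return a positive integer.  The empty sum is zero,
    so P(Y = 0) = P(N = 0) = p, matching the convention in the text.
    """
    total = 0
    # N counts the failures before the first success of a p-coin.
    while rng.random() >= p:
        total += sample_x()
    return total
```

With $X\equiv1$ this reduces to an ordinary $\mbox{Geom}(p)$ draw, a special case used repeatedly below.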
Such compound geometric distributions arise in a number of applications in a variety of fields, including reliability, queueing theory and risk theory: see \cite{k97} for an overview. It is well-known that a compound geometric distribution converges to an exponential distribution as $p\rightarrow0$. Explicit bounds in exponential approximation for compound geometric distributions have been given by Brown \cite{b90,b15}, Bon \cite{b06}, and Pek\"oz and R\"ollin \cite{pr11}. Brown's work takes advantage of reliability properties of such compound geometric distributions; such properties will also prove useful in our work. The bounds given by Pek\"oz and R\"ollin \cite{pr11} apply more generally than to compound geometric distributions, relaxing the assumptions that $N$ have a geometric distribution and that the $X_i$ be independent. Pek\"oz, R\"ollin and Ross \cite{prr13} give bounds in the geometric approximation of compound geometric distributions. Note that some of the above-mentioned bounds apply in the case where $N$ is supported on $\{0,1,\ldots\}$, and some in the case where $N$ has support $\{1,2,\ldots\}$.
Here we will consider the approximation of $W$ by our compound geometric random variable $Y$ using the total variation distance, defined by
$$
d_{TV}(\mathcal{L}(W),\mathcal{L}(Y))=\sup_{A\subseteq\mathbb{Z}^+}\left|\mathbb{P}(W\in A)-\mathbb{P}(Y\in A)\right|\,,
$$
where $\mathbb{Z}^+=\{0,1,2,\ldots\}$. Some work on compound geometric approximation in total variation distance has been done by Daly \cite{d10}, whose main application is to hitting times of Markov chains in a quite general setting. We build upon that work by presenting bounds which are more straightforward to evaluate, but which require some knowledge about the behaviour of the failure rate of the random variable $W$.
We recall that the failure rate (or hazard rate) of a nonnnegative, integer-valued random variable $W$ is defined to be
$$
r_W(j)=\frac{\mathbb{P}(W=j)}{\mathbb{P}(W>j)}\,,\qquad j\in\mathbb{Z}^+\,.
$$
Some authors use an alternative definition, taking the failure rate of $W$ to be
$$
\widetilde{r}_W(j)=\frac{\mathbb{P}(W=j)}{\mathbb{P}(W\geq j)}\,,\qquad j\in\mathbb{Z}^+\,.
$$
The failure rate of a continuous random variable may be defined analogously, by replacing the mass function in the numerators of the above with a density function.
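Both conventions are easy to evaluate numerically; as a simple illustration (ours, not part of the text), for $W\sim\mbox{Geom}(p)$ one finds the constant rates $r_W(j)=p(1-p)^{-1}$ and $\widetilde{r}_W(j)=p$, a fact used implicitly in Section \ref{sec:ifr}:

```python
def failure_rate(pmf, j, strict=True, support=2000):
    """r_W(j) = P(W=j)/P(W>j) if strict, else P(W=j)/P(W>=j).

    pmf is the mass function of a Z^+-valued W; the tail sum is
    truncated at `support`, which must carry negligible mass.
    """
    start = (j + 1) if strict else j
    return pmf(j) / sum(pmf(k) for k in range(start, support))

p = 0.3
geom = lambda k: p * (1 - p) ** k  # Geom(p) on {0, 1, 2, ...}
```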
Note that our bounds may be applied in conjunction with bounds for exponential or geometric approximation of compound geometric distributions discussed above.
In our main result, Theorem \ref{thm:main} below, we will assume that we have $\delta>0$ such that $r_W(j)\geq\delta$ for all $j$. Given such a $\delta$, the total variation distance between $W$ and a compound geometric distribution may be effectively bounded by computing their expectations. This is in contrast to the bounds presented in \cite{d10}, where more detailed information must be known about $W$ to allow them to be computed.
We note the work by Brown and Kemperman \cite{bk09} and Brown \cite{b14}, who find bounds on the distribution function and variance of a random variable, respectively, under bounds on its failure rate. Explicit bounds in probability approximation for a random variable with a bounded failure rate have also been derived in the recent work of Brown \cite{b15}. He gives sharp bounds in exponential approximation for random variables whose failure rate may be bounded from above, with applications to compound geometric distributions, and first passage times of birth-death processes and other reversible Markov chains (in continuous time). Note that here we are working with discrete random variables, under the assumption of a lower bound on the failure rate. Our results complement, but do not overlap with, Brown's work.
After stating our main theorem, a first application (approximating the equilibrium number of customers in an M/G/1 queueing system) will be presented to illustrate our bound. Further applications will be given in Sections \ref{sec:ifr} and \ref{sec:dfr}.
In Section \ref{sec:ifr} we will consider the well-studied problem of geometric approximation for random variables with increasing failure rate. We will consider two straightforward applications of our Theorem \ref{thm:main} (to Poisson processes and the P\'olya distribution) which allow us to explicitly compare our bound with a similar result from \cite{o81}. In Section \ref{sec:dfr} we consider compound geometric approximation for random variables with decreasing failure rate. In particular, we consider the number of customers served during a busy period of an M/G/1 queue, and the time to extinction of a discrete birth-death process.
The proof of our Theorem \ref{thm:main} is given in Section \ref{sec:proof}. The proof uses Stein's method (see \cite{bc05} and references therein), building upon previous work on Stein's method for geometric \cite{p96} and compound geometric \cite{d10} approximation. Finally, in Section \ref{sec:hr} we give related results which we illustrate with short examples.
For future use, for any $0\leq p\leq1$ and positive, integer-valued random variable $X$ with $u=1-d_{TV}(\mathcal{L}(X),\mathcal{L}(X+1))$, we define
\begin{equation}\label{eq:steinfactor}
H_p(X)=\min\left\{p+(1-p)\mathbb{P}(X>1),p\left(1+\sqrt{\frac{-2}{u\log(1-p)}}\right)\right\}\,.
\end{equation}
We now state our main result.
\begin{thm}\label{thm:main}
Let $W$ be a nonnegative, integer-valued random variable with $\mathbb{P}(W=0)=p\in(0,1)$ and $r_W(j)\geq\delta>0$ for all $j$. Let $Y=\sum_{i=1}^NX_i$, where $N\sim\mbox{Geom}(p)$ and $X,X_1,X_2,\ldots$ are i$.$i$.$d$.$ positive, integer-valued random variables. If
$$
\mathbb{E}X\geq\frac{p}{(1-p)\delta}\,,
$$
then
$$
d_{TV}(\mathcal{L}(W),\mathcal{L}(Y))\leq H_p(X)\left(\mathbb{E}Y-\mathbb{E}W\right)\,.
$$
\end{thm}
Note that under the conditions of this theorem, it is straightforward to show that $\mathbb{E}W\leq\delta^{-1}$, so that the resulting upper bound is nonnegative, as expected.
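One way to verify this bound on the mean (a standard tail estimate, sketched here for completeness) is to note that $r_W(j)\geq\delta$ gives $\mathbb{P}(W\geq j)=\mathbb{P}(W=j)+\mathbb{P}(W>j)\geq(1+\delta)\mathbb{P}(W>j)$, so that, by induction on $j$,
$$
\mathbb{P}(W>j)\leq(1+\delta)^{-(j+1)}\,,\qquad\mbox{and hence}\qquad\mathbb{E}W=\sum_{j=0}^\infty\mathbb{P}(W>j)\leq\sum_{j=0}^\infty(1+\delta)^{-(j+1)}=\delta^{-1}\,.
$$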
\subsection{Application to the number of customers in an M/G/1 queue}\label{sec:queue}
Consider an M/G/1 queueing system in equilibrium, with customers arriving at rate $\lambda$ and with i$.$i$.$d$.$ service times having the same distribution as the random variable $S$. Letting $\rho=\lambda\mathbb{E}[S]$, we assume throughout that $\rho<1$.
Let $W$ be the number of customers in the system. It is well-known that
$$
\mathbb{P}(W=0)=1-\rho\,,\qquad\mbox{and}\qquad
\mathbb{E}W=\rho+\frac{\rho^2\mathbb{E}[S^2]}{2(1-\rho)(\mathbb{E}S)^2}\,.
$$
See, for example, page 281 of \cite{a03}.
Let $R_j$ denote the residual service time of the customer currently being served in the queue, conditional on the event $\{W=j\}$. Then Ross \cite{r06} shows that
$$
r_W(j)=\frac{1-\rho}{\lambda\mathbb{E}R_j}\,,\qquad\mbox{ and }\qquad\mathbb{E}R_j\leq\sup_{t\in\mathbb{R}^+}\mathbb{E}\left[S-t|S\geq t\right]\,.
$$
We may thus apply Theorem \ref{thm:main} with the choice
$$
\delta=\frac{1-\rho}{\lambda\sup_{t\in\mathbb{R}^+}\mathbb{E}\left[S-t|S\geq t\right]}\,.
$$
The random variable $S$ is said to be new better than used in expectation (NBUE) if we have that $\mathbb{E}\left[S-t|S\geq t\right]\leq\mathbb{E}S$ for all $t\geq0$. In this case we may take $\delta=\rho^{-1}(1-\rho)$ and, in the notation of Theorem \ref{thm:main}, we may then take $X=1$ a$.$s$.$, so that $Y$ simply has a geometric distribution and $H_p(X)=p$. We thus obtain the following.
\begin{cor}\label{prop:q1}
Let $W$ be the number of customers in an M/G/1 queueing system in equilibrium as above. If $S$ is NBUE then
$$
d_{TV}(\mathcal{L}(W),\mbox{Geom}(1-\rho))\leq\rho^2\left(1-\frac{\mathbb{E}[S^2]}{2(\mathbb{E}S)^2}\right)\,.
$$
\end{cor}
Note that, as expected, this upper bound is zero if the service time $S$ has an exponential distribution (which is indeed NBUE).
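As a quick sanity check (ours, with hypothetical parameter values), the corollary's bound can be evaluated for Gamma service times, which are IFR and hence NBUE when the shape parameter $k\geq1$; there $\mathbb{E}[S^2]/(\mathbb{E}S)^2=(k+1)/k$, so the bound becomes $\rho^2(k-1)/(2k)$:

```python
def mg1_bound(lam, k, beta):
    """Bound of the corollary for S ~ Gamma(k, beta) with k >= 1 (so S is NBUE).

    Uses E[S^2]/(E S)^2 = (k + 1)/k, giving rho^2 * (k - 1) / (2k).
    """
    rho = lam * k / beta
    assert rho < 1, "equilibrium requires rho < 1"
    return rho ** 2 * (1 - (k + 1) / (2 * k))
```

For $k=1$ (exponential service) the bound vanishes, as it must, since $W$ is then exactly geometric.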
Finally in this section, we refer the interested reader to the book by M\"uller and Stoyan \cite{ms02}, who prove many stochastic comparison and monotonicity results for queueing models (and in many other applications), and derive associated bounds on quantities such as the mean waiting time and mean busy period for stationary queues. Some of their work also takes advantage of reliability properties of the underlying random variables, as we have done here.
\section{Geometric approximation for IFR distributions}\label{sec:ifr}
In the notation of Theorem \ref{thm:main}, since $p=\mathbb{P}(W=0)$ we have that $r_W(0)=p(1-p)^{-1}$. If the failure rate $r_W(j)$ is increasing in $j$, this may clearly serve as the lower bound $\delta$. In this case, we may let the random variable $X$ be 1 almost surely, so that $Y$ has a geometric distribution. Noting that $H_p(X)=p$ in this case, we obtain the following.
\begin{cor}\label{cor:ifr}
Let $W$ be a nonnegative, integer-valued random variable with $\mathbb{P}(W=0)=p$. If $W$ has increasing failure rate (IFR) then
$$
d_{TV}(\mathcal{L}(W),\mbox{Geom}(p))\leq 1-p(1+\mathbb{E}W)\,.
$$
\end{cor}
Note that we do not need the monotonicity of $r_W$ to obtain such a bound; it suffices to have $r_W(j)\geq r_W(0)$ for all $j\in\mathbb{Z}^+$.
Geometric approximation theorems for IFR random variables are well-known. We use the remainder of this section to give two explicit examples in which we can compare our Corollary \ref{cor:ifr} to the main theorem of Obretenov \cite{o81}. Obretenov does not use total variation distance $d_{TV}$, but employs the Kolmogorov distance $d_K$ defined by
$$
d_K(\mathcal{L}(W),\mathcal{L}(Y))=\sup_{j\in\mathbb{Z}^+}\left|\mathbb{P}(W\leq j)-\mathbb{P}(Y\leq j)\right|\,.
$$
Since total variation distance is stronger than Kolmogorov distance, Corollary \ref{cor:ifr} also bounds the Kolmogorov distance between $W$ and our geometric distribution, and thus Obretenov's bound may be compared with ours.
Obretenov \cite{o81} shows that if $W$ is a nonnegative, integer-valued, IFR random variable with $\mathbb{E}W=\mu$ then
\begin{equation}\label{eq:o}
d_K\left(\mathcal{L}(W),\mbox{Geom}\left((1+\mu)^{-1}\right)\right)\leq\frac{\mu}{1+\mu}\left(1-\frac{\mbox{Var}(W)}{\mu(1+\mu)}\right)\,.
\end{equation}
Note that Obretenov chooses a geometric distribution having the same expectation as $W$, while we have chosen ours to have the same probability of being zero.
\subsection{Application to the P\'olya distribution}
Suppose $m$ balls are distributed randomly among $d\geq2$ urns, in such a way that all assignments are equally likely. Let $W$ count the number of balls in the first urn. Then $W\sim\mbox{Pya}(m,d)$ has a P\'olya distribution, with
$$
\mathbb{P}(W=k)=\frac{\binom{d+m-k-2}{m-k}}{\binom{d+m-1}{m}}\,,\qquad 0\leq k\leq m\,.
$$
It is straightforward to show that, with this definition,
$$
\mathbb{P}(W=k)^2\geq\mathbb{P}(W=k-1)\mathbb{P}(W=k+1)\,,
$$
for $1\leq k\leq m$. Hence, $W$ is IFR. See, for example, page 177 of \cite{o81}.
We may thus apply Corollary \ref{cor:ifr}, which gives
\begin{equation}\label{eq:pold}
d_{TV}\left(\mathcal{L}(W),\mathcal{L}(Y)\right)\leq\frac{m}{d(d+m-1)}\,,
\end{equation}
where $Y\sim\mbox{Geom}\left(\frac{d-1}{d+m-1}\right)$. In this case, Obretenov's bound (\ref{eq:o}) is
\begin{equation}\label{eq:polo}
d_{K}\left(\mathcal{L}(W),\mbox{Geom}\left(\frac{d}{d+m}\right)\right)\leq\frac{2m}{(d+1)(d+m)}\,.
\end{equation}
Our bound is better than (\ref{eq:polo}) for $d$ large enough (specifically, $d^2+dm-3d-m>0$). Note, however, that (\ref{eq:pold}) bounds the total variation distance, while (\ref{eq:polo}) bounds only the weaker Kolmogorov distance. Our (\ref{eq:pold}) also improves upon bounds for geometric approximation of the P\'olya distribution in Example 3.1 of \cite{d10} and Section 4 of \cite{pw00}.
A simple lower bound corresponding to (\ref{eq:pold}) is given by
\begin{equation}\label{eq:poll}
d_{TV}(\mathcal{L}(W),\mathcal{L}(Y))\geq|\mathbb{P}(W=1)-\mathbb{P}(Y=1)|=\frac{m(d-1)}{(d+m-2)(d+m-1)^2}\,.
\end{equation}
In the case where $d$ is of order $O(m)$, this lower bound is of the same order as each of the upper bounds (\ref{eq:pold}) and (\ref{eq:polo}). Some numerical comparison of the bounds (\ref{eq:pold})--(\ref{eq:poll}) is given in Table \ref{tab:polya}.
\begin{table}
\caption{Geometric approximation for $W\sim\mbox{Pya}(m,d)$. Values of $p=(d-1)(d+m-1)^{-1}$ given to 4 d$.$p$.$; total variation distance between $W$ and $Y\sim\mbox{Geom}(p)$, and bounds (\ref{eq:pold})--(\ref{eq:poll}) given to 4 s$.$f.}
\label{tab:polya}
\begin{center}
\begin{tabular}{ccccccc}
\hline
$m$ & $d$ & $p$ & $d_{TV}(\mathcal{L}(W),\mathcal{L}(Y))$ & (\ref{eq:pold}) & (\ref{eq:polo}) & (\ref{eq:poll})\\
\hline
200 & 200 & 0.4987 & 0.0009458 & 0.002506 & 0.004975 & 0.0006281\\
200 & 10 & 0.0431 & 0.03055 & 0.09569 & 0.1732 & 0.0001981\\
10 & 10 & 0.4737 & 0.02255 & 0.05263 & 0.09091 & 0.01385\\
10 & 200 & 0.9522 & 0.0002190 & 0.0002392 & 0.0004738 & 0.0002190\\
\hline
\end{tabular}
\end{center}
\end{table}
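The entries of Table \ref{tab:polya} can be reproduced from the exact mass functions. The sketch below (ours, for illustration) computes the total variation distance directly and checks it against the bounds (\ref{eq:pold}) and (\ref{eq:poll}):

```python
from math import comb

def polya_pmf(m, d, k):
    """P(W = k) for W ~ Pya(m, d), 0 <= k <= m."""
    return comb(d + m - k - 2, m - k) / comb(d + m - 1, m)

def dtv_polya_geom(m, d):
    """d_TV between Pya(m, d) and Geom(p) with p = (d - 1)/(d + m - 1)."""
    p = (d - 1) / (d + m - 1)
    diff = sum(abs(polya_pmf(m, d, k) - p * (1 - p) ** k) for k in range(m + 1))
    diff += (1 - p) ** (m + 1)  # geometric mass beyond the support of W
    return diff / 2
```

For $m=d=10$ the computed distance lies between the bounds $0.01385$ and $0.05263$ of the table's third row.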
\subsection{Application to Poisson processes}\label{subsec:pp}
Let $\{N(t):t\geq0\}$ be a homogeneous Poisson process of rate $\lambda$ and let $T$ be an IFR random variable independent of $\{N(t):t\geq0\}$. By Corollary 5.2 of \cite{rsz05}, $N(T)$ is also IFR. Since $\mathbb{P}(N(T)=0)=\mathbb{E}e^{-\lambda T}$, $\mathbb{E}N(T)=\lambda\mathbb{E}T$, and $\mbox{Var}(N(T))=\lambda\mathbb{E}T+\lambda^2\mbox{Var}(T)$, we have from our Corollary \ref{cor:ifr} that
\begin{equation}\label{eq:ppd}
d_{TV}\left(\mathcal{L}(N(T)),\mbox{Geom}\left(\mathbb{E}e^{-\lambda T}\right)\right)\leq 1-(\mathbb{E}e^{-\lambda T})\left(1+\lambda\mathbb{E}T\right)\,,
\end{equation}
while Obretenov's result (\ref{eq:o}) gives
\begin{equation}\label{eq:ppo}
d_{K}\left(\mathcal{L}(N(T)),\mbox{Geom}\left((1+\lambda\mathbb{E}T)^{-1}\right)\right)\leq\frac{\lambda\mathbb{E}T}{1+\lambda\mathbb{E}T}\left(1-\frac{\mathbb{E}T+\lambda\mbox{Var}(T)}{\mathbb{E}T(1+\lambda\mathbb{E}T)}\right)\,.
\end{equation}
To give an explicit example where we may compare these bounds, suppose that $T\sim\Gamma(\alpha,\beta)$ has a gamma distribution with density function $\phi(x)$ proportional to $x^{\alpha-1}e^{-\beta x}$ for some $\alpha>1$ and $\beta>0$. Then $T$ is IFR.
Since
$$
\mathbb{E}e^{-\lambda T}=\left(1+\frac{\lambda}{\beta}\right)^{-\alpha}\,,\qquad\mathbb{E}T=\frac{\alpha}{\beta}\,,\qquad\mbox{Var}(T)=\frac{\alpha}{\beta^2}\,,
$$
the bounds (\ref{eq:ppd}) and (\ref{eq:ppo}) become, respectively,
\begin{equation}\label{eq:gammad}
d_{TV}\left(\mathcal{L}(N(T)),\mbox{Geom}\left(\left(1+\frac{\lambda}{\beta}\right)^{-\alpha}\right)\right)\leq 1-\left(1+\frac{\lambda}{\beta}\right)^{-\alpha}\left(1+\frac{\alpha\lambda}{\beta}\right)\,,
\end{equation}
and
\begin{equation}\label{eq:gammao}
d_{K}\left(\mathcal{L}(N(T)),\mbox{Geom}\left(\frac{\beta}{\beta+\alpha\lambda}\right)\right)\leq\frac{\alpha(\alpha-1)\lambda^2}{(\beta+\alpha\lambda)^2}\,.
\end{equation}
To compare (\ref{eq:gammad}) and (\ref{eq:gammao}), we use Taylor's theorem to note that for small $\lambda$ the upper bound of (\ref{eq:gammad}) is approximately equal to
$\alpha(\alpha-1)\lambda^2\left(2\beta^2\right)^{-1}$,
which is smaller than the upper bound of (\ref{eq:gammao}) whenever $(\sqrt{2}-1)\beta>\alpha\lambda$. Finally, we again emphasise that (\ref{eq:gammad}) bounds the total variation distance, while (\ref{eq:gammao}) bounds only the weaker Kolmogorov distance.
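This comparison is easy to check numerically; the following sketch (ours, with hypothetical parameter values satisfying $(\sqrt{2}-1)\beta>\alpha\lambda$) evaluates both bounds:

```python
def bound_tv(alpha, beta, lam):
    """Right-hand side of (eq:gammad)."""
    return 1 - (1 + lam / beta) ** (-alpha) * (1 + alpha * lam / beta)

def bound_k(alpha, beta, lam):
    """Right-hand side of (eq:gammao)."""
    return alpha * (alpha - 1) * lam ** 2 / (beta + alpha * lam) ** 2
```

For $\alpha=2$, $\beta=1$, $\lambda=0.1$ one finds $0.00826\ldots$ for (\ref{eq:gammad}) against $0.01388\ldots$ for (\ref{eq:gammao}), close to the Taylor approximation $\alpha(\alpha-1)\lambda^2(2\beta^2)^{-1}=0.01$.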
We return to further applications of our results to Poisson processes in Section \ref{sec:hr}.
\section{Approximation for DFR distributions}\label{sec:dfr}
In this section we present some further applications of our main result, Theorem \ref{thm:main}. We will consider random variables which have the decreasing failure rate (DFR) property, so that the lower bound $\delta$ may be taken to be $\lim_{j\rightarrow\infty}r_W(j)$. The applications we will consider will be to the number of customers served in a busy period of an M/G/1 queue, and to the time to extinction of a discrete birth-death process. In each case we will construct the relevant random variable $W$ as the time at which a particular Markov chain on $\mathbb{Z}^+$ first visits the origin. In this case, Shanthikumar \cite{s88} gives sufficient conditions for the DFR property to hold and an expression for the failure rate which will allow us to apply our Theorem \ref{thm:main}.
Let $\{Z_n:n\geq-1\}$ be a discrete-time Markov chain with state space $\mathbb{Z}^+$ and transition matrix $P=(p_{ij})$. Let the entries of the matrix $P^+=(p^+_{ij})$ be given by $p^+_{ij}=\sum_{k=j}^\infty p_{ik}$ for $i,j\in\mathbb{Z}^+$. Assume that the Markov chain starts at $Z_{-1}=1$ and define the hitting time
\begin{equation}\label{eq:hit}
W=\min\{n\geq0:Z_n=0\}\,.
\end{equation}
Without loss of generality in what follows, we may assume that the state $0$ is absorbing. We have chosen to start our Markov chain at time $-1$ so that the support of $W$ matches that of our compound geometric distributions.
We say that the matrix $P^+$ is $\mbox{TP}_2$ if $p^+_{ik}p^+_{jl}\geq p^+_{il}p^+_{jk}$ for all $i<j$ and $k<l$. Theorem 3.1 of \cite{s88} states that if $P^+$ is $\mbox{TP}_2$ then $W$ is DFR. From the proof of that theorem, we also have that for such DFR hitting times $W$, $\widetilde{r}_W(j)\geq\widetilde{\delta}$ for all $j\in\mathbb{Z}^+$, where
\begin{equation}\label{eq:delta}
\widetilde{\delta}=\sum_{i=1}^\infty p_{i0}\lim_{j\rightarrow\infty}\mathbb{P}(Z_j=i|Z_j\geq1)\,.
\end{equation}
In order to evaluate this expression, we will therefore need an expression for the limiting distribution of our Markov chain conditional on non-absorption.
To translate a lower bound on $\widetilde{r}_W(j)$ into a lower bound on $r_W(j)$, we will use the following lemma, whose proof is straightforward and therefore omitted.
\begin{lem}\label{lem:delta}
Let $W$ be a nonnegative, integer-valued random variable with $\widetilde{r}_W(j)\geq\widetilde{\delta}$ for all $j\in\mathbb{Z}^+$. Then $r_W(j)\geq\widetilde{\delta}\left(1-\widetilde{\delta}\right)^{-1}$ for all $j\in\mathbb{Z}^+$.
\end{lem}
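For completeness, the omitted argument is the one-line computation
$$
r_W(j)=\frac{\mathbb{P}(W=j)}{\mathbb{P}(W\geq j)-\mathbb{P}(W=j)}=\frac{\widetilde{r}_W(j)}{1-\widetilde{r}_W(j)}\geq\frac{\widetilde{\delta}}{1-\widetilde{\delta}}\,,
$$
where the final inequality uses $\widetilde{r}_W(j)\geq\widetilde{\delta}$ together with the fact that $x\mapsto x(1-x)^{-1}$ is increasing on $[0,1)$.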
\subsection{Customers served during a busy period of an M/G/1 queue}
Consider the M/G/1 queue of Section \ref{sec:queue}, with customers arriving at rate $\lambda$ and i$.$i$.$d$.$ service times with the same distribution as $S$. Again letting $\rho=\lambda\mathbb{E}S$, we will assume throughout that $\rho<1$. We will also assume that $S$ is IFR, so that, by Shanthikumar's \cite{s88} Theorem 5.1, the number of customers served during a busy period is DFR.
Consider the embedded Markov chain $\{Z_n:n\geq-1\}$, where $Z_{-1}=1$ and $Z_n$ represents the number of customers in the system after the departure of customer $n$ (with customers labelled $0,1,2,\ldots$). Then $W+1$, with the hitting time $W$ given by (\ref{eq:hit}), is the number of customers served during a busy period of the queue.
This Markov chain has the transition probabilities $p_{00}=1$ and $p_{ij}=g(j+1-i)$, where
\begin{displaymath}
g(k)=\left\{ \begin{array}{cl}
\frac{1}{k!}\mathbb{E}\left[e^{-\lambda S}(\lambda S)^k\right] & \textrm{if } k\geq0\,,\\
0 & \textrm{if }k<0\,.
\end{array} \right.
\end{displaymath}
Hence,
\begin{equation}\label{eq:p}
p=\mathbb{P}(W=0)=p_{10}=\mathbb{E}e^{-\lambda S}\,.
\end{equation}
We also have that $\mathbb{E}W=\rho(1-\rho)^{-1}$. See, for example, page 217 of \cite{k75}.
Since $p_{i0}=0$ for $i>1$, the lower bound $\widetilde{\delta}$ given by (\ref{eq:delta}) becomes $\widetilde{\delta}=p\theta$, where $\theta=\lim_{j\rightarrow\infty}\mathbb{P}(Z_j=1|Z_j\geq1)$. To find an expression for $\theta$ we use a formula due to Kyprianou \cite{k72}. Suppose the density of the service time $S$ has Laplace transform $\varphi$, and let $\xi$ be the real solution of $1+\lambda\varphi^\prime(s)=0$ nearest the origin. By a result on page 829 of \cite{k72}, we then have that
\begin{equation}\label{eq:theta}
\theta=\frac{\xi-\lambda+\lambda\varphi(\xi)}{(\xi-\lambda)\varphi(\lambda)}\,.
\end{equation}
Using Lemma \ref{lem:delta}, we may then take the lower bound $\delta=p\theta(1-p\theta)^{-1}$ in Theorem \ref{thm:main} and we obtain the following.
\begin{thm}\label{thm:queue}
Let $W+1$ be the number of customers served in a busy period of an M/G/1 queue with arrival rate $\lambda$ and service time $S$. Let $p$ and $\theta$ be given by (\ref{eq:p}) and (\ref{eq:theta}), respectively. Suppose that $S$ is IFR and that $\rho=\lambda\mathbb{E}S<1$. Let $N\sim\mbox{Geom}(p)$ and $Y=\sum_{i=1}^NX_i$, where $X,X_1,X_2,\ldots$ are i$.$i$.$d$.$ with $(1-p)\theta\mathbb{E}X\geq1-p\theta$. Then
$$
d_{TV}(\mathcal{L}(W),\mathcal{L}(Y))\leq H_p(X)\left(\frac{(1-p)\mathbb{E}X}{p}-\frac{\rho}{1-\rho}\right)\,.
$$
\end{thm}
The number of customers served in a busy period of this queueing system is closely related to the total progeny of a certain branching process, and so our Theorem \ref{thm:queue} may also be applied in that setting. If we define the offspring of a customer to be the other customers who arrive while he is being served, the number of customers served during a busy period has the same distribution as the total progeny of the customer initiating the busy period. See page 284 of \cite{a03} for further details.
To illustrate our Theorem \ref{thm:queue}, we consider the example where $S\sim\Gamma(k,\beta)$ has an Erlang distribution for some integer $k\geq1$ and some $\beta>0$. In this case, $S$ is indeed IFR. Since $\mathbb{E}S=k\beta^{-1}$, our condition on $\rho$ requires that $k\lambda<\beta$.
Using (\ref{eq:p}), $p=\left(1+\lambda\beta^{-1}\right)^{-k}$. The Erlang density has Laplace transform $\varphi(s)=\beta^k(s+\beta)^{-k}$, and since $k\lambda<\beta$ it is straightforward to check that $\xi=(\lambda k\beta^k)^{1/(k+1)}-\beta$ and that therefore, by (\ref{eq:theta}),
$$
\theta=\left(\frac{\beta+\lambda}{\beta}\right)^k\left(1-\frac{1}{A}\right)\,,\mbox{ where }A=(\beta+\lambda)\left(\frac{k^k}{\lambda\beta^k}\right)^{1/(k+1)}-k\,.
$$
Theorem \ref{thm:queue} thus requires that we choose
\begin{equation}\label{eq:xq}
\mathbb{E}X\geq\frac{\beta^k}{(A-1)((\beta+\lambda)^k-\beta^k)}\,.
\end{equation}
If we choose $X$ such that equality holds in (\ref{eq:xq}), the upper bound of Theorem \ref{thm:queue} becomes $H_p(X)U\leq U$, where
\begin{equation}\label{eq:u}
U=\frac{1}{A-1}-\frac{k\lambda}{\beta-k\lambda}\,.
\end{equation}
Some numerical illustration of this bound is given in Table \ref{tab:erlang}.
\begin{table}
\caption{Some values of the upper bound $U$ of (\ref{eq:u}) for the Erlang service time example. Invalid parameter choices are marked with --.}
\label{tab:erlang}
\begin{center}
\begin{tabular}{ccccccc}
\hline
\multirow{2}{*}{$k$}& \multirow{2}{*}{$\lambda$} & \multicolumn{5}{c}{$\beta$}\\
\cline{3-7}
& & 0.1 & 0.5 & 1 & 1.5 & 10 \\
\hline
\multirow{5}{*}{1} & 0.001 & 0.1134 & 0.0470 & 0.0327 & 0.0265 & 0.0101 \\
& 0.005 & 0.3183 & 0.1134 & 0.0769 & 0.0617 & 0.0229 \\
& 0.01 & 0.5652 & 0.1714 & 0.1134 & 0.0901 & 0.0327 \\
& 0.05 & $>1$ & 0.5652 & 0.3183 & 0.2388 & 0.0769 \\
& 0.1 & -- & $>1$ & 0.5652 & 0.3978 & 0.1134 \\
\hline
\multirow{5}{*}{5} & 0.001 & 0.3784 & 0.1985 & 0.1588 & 0.1406 & 0.0846 \\
& 0.005 & $>1$ & 0.3783 & 0.2781 & 0.2378 & 0.1294 \\
& 0.01 & $>1$ & 0.5619 & 0.3784 & 0.3137 & 0.1588 \\
& 0.05 & -- & $>1$ & $>1$ & 0.8366 & 0.2781 \\
& 0.1 & -- & -- & $>1$ & $>1$ & 0.3784 \\
\hline
\multirow{5}{*}{10} & 0.001 & 0.5777 & 0.2827 & 0.2271 & 0.2025 & 0.1282 \\
& 0.005 & $>1$ & 0.5777 & 0.4027 & 0.3402 & 0.1874 \\
& 0.01 & -- & 0.9890 & 0.5777 & 0.4614 & 0.2272 \\
& 0.05 & -- & -- & $>1$ & $>1$ & 0.4027 \\
& 0.1 & -- & -- & -- & $>1$ & 0.5777 \\
\hline
\end{tabular}
\end{center}
\end{table}
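The entries of Table \ref{tab:erlang} follow from a direct evaluation of (\ref{eq:u}); a sketch (ours) of the computation:

```python
def erlang_bound(k, lam, beta):
    """The bound U of (eq:u) for Erlang(k, beta) service times.

    Valid when rho = k*lam/beta < 1; cells marked '>1' in the table
    are parameter choices for which U exceeds 1.
    """
    assert k * lam < beta, "need rho < 1"
    A = (beta + lam) * (k ** k / (lam * beta ** k)) ** (1 / (k + 1)) - k
    return 1 / (A - 1) - k * lam / (beta - k * lam)
```

For instance, $k=1$, $\lambda=0.001$, $\beta=0.1$ gives $A=9.1$ and $U\approx0.1134$, matching the first cell of the table.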
\subsection{Time to extinction of a birth-death process}
We let $\{Z_n:n\geq-1\}$ be the Markov chain with $Z_{-1}=1$, $p_{00}=1$, and
\begin{displaymath}
p_{ij}=\left\{ \begin{array}{cl}
p_i & \textrm{if } j=i+1\,,\\
q_i & \textrm{if }j=i-1\,,\\
r_i & \textrm{if }j=i\,,\\
0 & \textrm{otherwise}\,,
\end{array} \right.
\end{displaymath}
for $i\geq1$. Let $W$ be the hitting time defined by (\ref{eq:hit}): the time when this discrete birth-death process becomes extinct.
Clearly we have that $p=\mathbb{P}(W=0)=q_1$ and, from (\ref{eq:delta}),
\begin{equation}\label{eq:bddelta}
\widetilde{\delta}=q_1\lim_{j\rightarrow\infty}\mathbb{P}(Z_j=1|Z_j\geq1)\,.
\end{equation}
To find an expression for this limit, we use the famous Karlin--McGregor \cite{km59} representation of the $n$-step transition probabilities of this chain. Define $\pi_1=1$ and
$$
\pi_j=\frac{p_1\cdot p_2\cdots p_{j-1}}{q_2\cdot q_3\cdots q_j}\,,
$$
for $j\geq2$. Then Karlin and McGregor \cite{km59} show that there is a unique positive measure $\psi$, of total mass 1, supported on $[-1,1]$ such that
$$
p_{ij}(n)=\mathbb{P}(Z_n=j|Z_0=i)=\pi_j\int_{-1}^1x^nQ_i(x)Q_j(x)\,d\psi(x)\,,\qquad\mbox{for }i,j\geq1\,,
$$
where $\{Q_j:j\geq1\}$ is a sequence of polynomials (orthogonal with respect to $\psi$) satisfying the relations $Q_1(x)=1$, $p_1Q_2(x)=x-r_1$, and
$$
xQ_j(x)=q_jQ_{j-1}(x)+r_jQ_j(x)+p_jQ_{j+1}(x)\,,
$$
for $j\geq2$. Following the notation of van Doorn and Schrijner \cite{vds95}, $Q_{j+1}$ has $j$ distinct zeros which we denote $x_{1j}<x_{2j}<\cdots<x_{jj}$. We write $\eta=\lim_{k\rightarrow\infty}x_{kk}$ and
$$
C_n(\psi)=\frac{\int_{-1}^0(-x)^n\,d\psi(x)}{\int_0^1x^n\,d\psi(x)}\,.
$$
In what follows we make the following assumptions:
\begin{eqnarray}
\label{eq:bd1}\sum_{k=1}^\infty(p_k\pi_k)^{-1}&=&\infty\,,\\
\label{eq:bd2}\eta&<&1\,,\\
\label{eq:bd3}\lim_{n\rightarrow\infty}C_n(\psi)&=&0\,,\\
\label{eq:bd4}r_j&\geq&\frac12\mbox{ for all }j\geq1\,.
\end{eqnarray}
Assumption (\ref{eq:bd1}) guarantees that the birth-death process does eventually reach extinction: see Section 4 of \cite{vds95}. Assumptions (\ref{eq:bd2}) and (\ref{eq:bd3}) are used to ensure that the limit (\ref{eq:bddelta}) exists, and are taken from Lemma 4.1 of \cite{vds95}. Finally, assumption (\ref{eq:bd4}) is sufficient to guarantee that the transition matrix of our birth-death chain is $\mbox{TP}_2$, and hence that the extinction time $W$ is DFR. See page 6 of \cite{fj11} and Remark 3.2 of \cite{s88}.
We note that Section 3 of \cite{vds95} gives several conditions under which the assumption (\ref{eq:bd3}) holds and which may be used to check its validity in practice.
Under the assumptions (\ref{eq:bd1})--(\ref{eq:bd4}), Lemma 4.1 of \cite{vds95} gives us that $\widetilde{\delta}=1-\eta$, and so (by Lemma \ref{lem:delta}) we may take $\delta=\eta^{-1}(1-\eta)$ in Theorem \ref{thm:main}. Applying that result, we then obtain the following.
\begin{thm}
Let $W$ be the time to extinction of the discrete birth-death process defined above. Assume (\ref{eq:bd1})--(\ref{eq:bd4}) hold. Let $N\sim\mbox{Geom}(q_1)$ and $Y=\sum_{i=1}^NX_i$, where $X,X_1,X_2,\ldots$ are i$.$i$.$d$.$ with $(1-q_1)(1-\eta)\mathbb{E}X\geq q_1\eta$. Then
$$
d_{TV}(\mathcal{L}(W),\mathcal{L}(Y))\leq H_{q_1}(X)\left(\frac{(1-q_1)\mathbb{E}X}{q_1}-\mathbb{E}W\right)\,.
$$
\end{thm}
Finally, note that Brown \cite{b15} considers exponential approximation for hitting times of birth-death processes in continuous time, taking advantage of monotonicity of the failure rate in his work. See also the references within Brown's work.
\section{Proof of Theorem \ref{thm:main}}\label{sec:proof}
We use this section to give the proof of our main result, Theorem \ref{thm:main}. The proof is based on Stein's method for compound geometric approximation. Stein's method was first applied to the problem of approximation by a geometric distribution by Barbour and Gr\"ubel \cite{bg95} and Pek\"oz \cite{p96}. More recent developments in Stein's method for geometric approximation are given in \cite{pw00} and \cite{prr13}. Stein's method has previously been used in the compound geometric case in \cite{d10}, and compound geometric distributions have appeared in conjunction with Stein's method in papers by Bon \cite{b06}, Pek\"oz and R\"ollin \cite{pr11}, and Pek\"oz, R\"ollin and Ross \cite{prr13}. The interested reader is also referred to \cite{bc05} and references therein for an introduction to Stein's method more generally.
Throughout this section we will let $W$ and $Y$ be as defined in Theorem \ref{thm:main}, and $H_p(X)$ be given by (\ref{eq:steinfactor}). We define the random variable $V$ to be such that $V+X=_{st}(W|W>0)$, where $=_{st}$ denotes equality in distribution.
We will employ the usual stochastic ordering in what follows. Recall that for any two random variables $T$ and $U$, $T$ is said to be stochastically smaller than $U$ (written $T\leq_{st}U$) if $\mathbb{P}(T>j)\leq\mathbb{P}(U>j)$ for all $j$.
\begin{lem}\label{lem:ord1}
Let $W$ be a nonnegative, integer-valued random variable with $\mathbb{P}(W=0)=p$ and $r_W(j)\geq\delta>0$ for all $j\in\mathbb{Z}^+$. Let $V$ be as above and suppose that
$\mathbb{E}X\geq\frac{p}{(1-p)\delta}$. Then $V+X\leq_{st}W+X$.
\end{lem}
\begin{proof}
From the definition of $V$, the required stochastic ordering will follow if
\begin{equation}\label{eq:ord1}
(1-p)\mathbb{P}(W+X>j)\geq\mathbb{P}(W>j)\,,
\end{equation}
for all $j\in\mathbb{Z}^+$.
Conditioning on $X$ (which is independent of $W$), we write
$$
\mathbb{P}(W+X>j)=\mathbb{P}(W>j)+\mathbb{E}\left[\sum_{k=j+1-X}^j\mathbb{P}(W=k)\right]\,.
$$
Using this, we rearrange (\ref{eq:ord1}) to obtain that the required stochastic ordering holds if
\begin{equation}\label{eq:ord2}
\frac{1}{\mathbb{P}(W>j)}\mathbb{E}\left[\sum_{k=j+1-X}^j\mathbb{P}(W=k)\right]\geq\frac{p}{1-p}\,.
\end{equation}
Now, if $r_W(k)\geq\delta$ for all $k$ then
$$
\frac{1}{\mathbb{P}(W>j)}\mathbb{E}\left[\sum_{k=j+1-X}^j\mathbb{P}(W=k)\right]\geq\mathbb{E}\left[\sum_{k=j+1-X}^jr_W(k)\right]\geq\delta\mathbb{E}X\,.
$$
Hence, if $\mathbb{E}X\geq\frac{p}{(1-p)\delta}$ then (\ref{eq:ord2}) holds and our lemma follows.
\end{proof}
The proof of Theorem \ref{thm:main} then goes along similar lines to that of Proposition 3.1 in \cite{d10}. For $A\subseteq\mathbb{Z}^+$ we let $f_A:\mathbb{Z}^+\mapsto\mathbb{R}$ be such that $f_A(0)=0$ and
\begin{equation}\label{eq:stein}
I(j\in A)-\mathbb{P}(Y\in A)=(1-p)\mathbb{E}f_A(j+X)-f_A(j)\,,
\end{equation}
$I(\cdot)$ denoting an indicator function. We then note the following property of $f_A$.
\begin{lem}\label{lem:stein}
Let $f_A$ be as above. Then $\sup_{j\in\mathbb{Z}^+}\left|f_A(j+1)-f_A(j)\right|\leq\frac{1}{p}H_p(X)$.
\end{lem}
\begin{proof}
From the definition (\ref{eq:stein}), it is easy to check that
\begin{equation}\label{eq:proof0}
f_A(j)=-\sum_{i=0}^\infty(1-p)^i\left[\mathbb{P}(j+X_1+\cdots+X_i\in A)-\mathbb{P}(Y+X_1+\cdots+X_i\in A)\right]\,,
\end{equation}
from which it follows that $|f_A(j+1)-f_A(j)|$ may be bounded by
\begin{equation}\label{eq:proof1}
\sum_{i=0}^\infty(1-p)^i\left|\mathbb{P}(j+1+X_1+\cdots+X_i\in A)-\mathbb{P}(j+X_1+\cdots+X_i\in A)\right|\,.
\end{equation}
To complete the proof, we bound (\ref{eq:proof1}) in two different ways. Firstly, letting $N\sim\mbox{Geom}(p)$, we write this as
\begin{multline}\label{eq:proof2}
\frac{1}{p}|\mathbb{P}(j+1+X_1+\cdots+X_N\in A)-\mathbb{P}(j+X_1+\cdots+X_N\in A)|\\
\leq\frac{1}{p}d_{TV}(\mathcal{L}(Y),\mathcal{L}(Y+1))\leq1+\frac{(1-p)}{p}\mathbb{P}(X>1)\,,
\end{multline}
where the final inequality uses Theorem 3.1 of \cite{vc96}. Alternatively, we have that
\begin{multline*}
\left|\mathbb{P}(j+1+X_1+\cdots+X_i\in A)-\mathbb{P}(j+X_1+\cdots+X_i\in A)\right|\\
\leq d_{TV}(\mathcal{L}(X_1+\cdots+X_i),\mathcal{L}(X_1+\cdots+X_i+1))\,.
\end{multline*}
We may then follow the analysis of Theorem 3.1 of \cite{prr13} to obtain
$$
|f_A(j+1)-f_A(j)|\leq1+\sqrt{\frac{-2}{u\log(1-p)}}\,,
$$
where $u=1-d_{TV}(\mathcal{L}(X),\mathcal{L}(X+1))$. This completes the proof.
\end{proof}
\begin{remark}
\emph{Lemma \ref{lem:stein} improves upon part of Theorem 2.1 of \cite{d10}, by presenting a sharper bound and removing a restriction on the support of $X$. This may, in turn, be used to improve on Proposition 3.1 of \cite{d10}. For a general $X$, Lemma \ref{lem:stein} gives the bound $|f_A(j+1)-f_A(j)|\leq p^{-1}$ (which is the same bound given in \cite{d10}), but also shows that a better bound is possible when $\mathbb{P}(X=1)$ is large (informally, when $Y$ is close to a geometric distribution) or when $X$ is smooth (in the sense that the total variation distance between $X$ and $X+1$ is small). Note that the bound $|f_A(j+1)-f_A(j)|\leq p^{-1}$ is the best possible without imposing restrictions on $X$. To see this, suppose that $X=2$ a$.$s$.$ and let $A=2\mathbb{Z}^+$. In this case, (\ref{eq:proof0}) gives $f_A(j+1)-f_A(j)=(-1)^jp^{-1}$. The inequalities (\ref{eq:proof2}) are also sharp, in that if $Y\sim\mbox{Geom}(p)$ then $d_{TV}(\mathcal{L}(Y),\mathcal{L}(Y+1))=p$.}
\end{remark}
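As an aside (our check, not part of the original text), the sharp example in the remark above is easy to verify numerically from the explicit solution (\ref{eq:proof0}). With $X=2$ almost surely, $Y=2N$ is always even, so for $A=2\mathbb{Z}^+$ every term $\mathbb{P}(Y+X_1+\cdots+X_i\in A)$ equals one and the series collapses; the increments of $f_A$ then alternate between $\pm p^{-1}$. The value $p=0.3$ below is an arbitrary choice.

```python
# Truncated evaluation of the series for f_A in the sharp example:
# X = 2 a.s. and A the even integers, so P(j + 2i in A) depends only on
# the parity of j, while P(Y + 2i in A) = 1 for every i.
p = 0.3  # arbitrary illustrative value, not from the text

def f_A(j, terms=2000):
    even = 1.0 if j % 2 == 0 else 0.0
    return -sum((1 - p) ** i * (even - 1.0) for i in range(terms))

increments = [f_A(j + 1) - f_A(j) for j in range(8)]
# increments[j] equals (-1)^j / p up to truncation error
```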
Using the definitions of $f_A$ and $V$, we may write
\begin{multline}\label{eq:ord3}
\mathbb{P}(W\in A)-\mathbb{P}(Y\in A)=(1-p)\mathbb{E}\left[f_A(W+X)-f_A(V+X)\right]\\
=(1-p)\sum_{j=0}^\infty\left[f_A(j+1)-f_A(j)\right]\left[\mathbb{P}(W+X>j)-\mathbb{P}(V+X>j)\right]\,.
\end{multline}
Now, under the conditions of Theorem \ref{thm:main}, Lemma \ref{lem:ord1} gives us that $V+X\leq_{st}W+X$. Hence, bounding (\ref{eq:ord3}) using Lemma \ref{lem:stein} gives
$$
|\mathbb{P}(W\in A)-\mathbb{P}(Y\in A)|\leq\frac{(1-p)}{p}H_p(X)\mathbb{E}\left[W-V\right]=H_p(X)\mathbb{E}\left[Y-W\right]\,,
$$
where the final equality follows from the definition of $V$. We have thus established Theorem \ref{thm:main}.
\begin{remark}
\emph{The techniques of this section may also be used to bound the Wasserstein distance $d_W(\mathcal{L}(W),\mathcal{L}(Y))=\sup_h\left|\mathbb{E}h(W)-\mathbb{E}h(Y)\right|$, where the supremum is taken over all 1-Lipschitz functions $h$. Under the conditions of Theorem \ref{thm:main}, we follow the above methods to obtain the bound $d_W(\mathcal{L}(W),\mathcal{L}(Y))\leq\mathbb{E}Y-\mathbb{E}W$. This bound is sharp. Suppose, for example, that $W$ is IFR and $Y\sim\mbox{Geom}(p)$. Then $W\leq_{st}Y$ (\cite[Theorem 1.B.1]{ss07}) and so $d_W(\mathcal{L}(W),\mathcal{L}(Y))=\mathbb{E}Y-\mathbb{E}W$. See Theorem 1.A.11 of \cite{ss07}.}
\end{remark}
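The sharpness claim in this remark can also be checked numerically (our sketch, with arbitrary parameters): for integer-valued random variables the Wasserstein distance is $\sum_{j\geq0}|\mathbb{P}(W>j)-\mathbb{P}(Y>j)|$, and for an IFR choice such as $W\sim\mbox{Bin}(n,q)$ with $Y\sim\mbox{Geom}(p)$, $p=\mathbb{P}(W=0)$, this sum should equal $\mathbb{E}Y-\mathbb{E}W$ exactly.

```python
from math import comb

# W ~ Binomial(n, q) is IFR; Y ~ Geom(p) with p = P(W = 0).  By the
# remark, d_W(L(W), L(Y)) = E Y - E W.  The values n, q are arbitrary.
n, q = 5, 0.3
pmf_W = [comb(n, k) * q ** k * (1 - q) ** (n - k) for k in range(n + 1)]
p = pmf_W[0]

d_W, surv_W = 0.0, 1.0
for j in range(500):
    if j <= n:
        surv_W -= pmf_W[j]                   # P(W > j)
    d_W += abs((1 - p) ** (j + 1) - surv_W)  # |P(Y > j) - P(W > j)|

gap = (1 - p) / p - n * q                    # E Y - E W
```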
\section{Some further results}\label{sec:hr}
In this section we note two results, closely related to Theorem \ref{thm:main}, which may prove useful in applications; each is illustrated with a short example.
\subsection{Approximation for translated distributions}
Let $W$ be as in Theorem \ref{thm:main}, and let $W_m=_{st}(W-m|W\geq m)$ for some $m\in\mathbb{Z}^+$. In many cases it is more natural to seek a compound geometric approximation for $W_m$ (for some $m\geq1$) than for $W$. This may be achieved in a straightforward way using Theorem \ref{thm:main}. We note that the failure rate of $W_m$ may be bounded from below by
\begin{equation}\label{eq:deltam}
\delta_m=\min_{j\geq m}r_W(j)\geq\delta\,,
\end{equation}
and that if $W$ has monotone failure rate, $W_m$ inherits this property. Letting
\begin{equation}\label{eq:pm}
p_m=\mathbb{P}(W_m=0)=\frac{\mathbb{P}(W=m)}{\mathbb{P}(W\geq m)}\,,
\end{equation}
we may apply Theorem \ref{thm:main} to $W_m$ to obtain the following corollary.
\begin{cor}\label{cor:shift}
Let $W$ be a nonnegative, integer-valued random variable, $m\in\mathbb{Z}^+$, and $W_m=_{st}(W-m|W\geq m)$. Let $\delta_m$ and $p_m$ be given by (\ref{eq:deltam}) and (\ref{eq:pm}), respectively. Let $Y=\sum_{i=1}^NX_i$, where $N\sim\mbox{Geom}(p_m)$ and $X,X_1,X_2,\ldots$ are i$.$i$.$d$.$ with
$$
\mathbb{E}X\geq\frac{p_m}{(1-p_m)\delta_m}\,.
$$
Then
$$
d_{TV}(\mathcal{L}(W_m),\mathcal{L}(Y))\leq H_{p_m}(X)\left(\mathbb{E}Y-\mathbb{E}W_m\right)\,.
$$
\end{cor}
To illustrate one situation in which such a result would be useful, consider the Markov chain $\{Z_n:n\geq-1\}$ with state space $\{0,1,2\}$, $Z_{-1}=2$, and transition matrix
\[ \left( \begin{array}{ccc}
1 & 0 & 0 \\
\alpha_1 & \beta_1 & \epsilon_1 \\
\alpha_2 & \beta_2 & \epsilon_2 \end{array} \right)\,,\]
where, for $i\in\{1,2\}$, we have that $\alpha_i,\beta_i,\epsilon_i\in(0,1)$ with $\alpha_i+\beta_i+\epsilon_i=1$ and where we consider $\epsilon_i$ to be small.
Letting $W$ be the hitting time $W=\min\{n\geq0:Z_n=0\}$, the most natural geometric-type approximation in this setting is to approximate $W_1$ by a geometric distribution with parameter close to $\alpha_1$. We will show that this is easily achieved using Corollary \ref{cor:shift}. Elementary calculations show that, since $\mathbb{P}(W=0)=\alpha_2$,
$$
p_1=\frac{\alpha_1\beta_2+\alpha_2\epsilon_2}{1-\alpha_2}\,,\;\;\mathbb{E}W_1=\frac{\mathbb{E}W}{1-\alpha_2}-1\,,\;\mbox{and}\;\;\mathbb{E}W=\frac{(1-\beta_1)\epsilon_2+\beta_2(1+\epsilon_1)}{(1-\beta_1)(1-\epsilon_2)-\beta_2\epsilon_1}\,.
$$
For simplicity in what follows, we will assume that $\alpha_1\geq\alpha_2$ and that $\beta_1(\beta_2+\epsilon_2)\geq\beta_2(\beta_1+\epsilon_1)$. These conditions are sufficient to guarantee that $W$ is IFR (using Theorem 4.1 of \cite{rsz05}). In this case we may take $X=1$ a$.$s$.$ in Corollary \ref{cor:shift}, as we did in the IFR examples we have previously considered.
Corollary \ref{cor:shift} then gives us the bound
$d_{TV}(\mathcal{L}(W_1),\mbox{Geom}(p_1))\leq A+B+C$, where
$$
A=\frac{\alpha_1\beta_2+\alpha_2\epsilon_2}{\alpha_2(1-\alpha_2)+(\alpha_1-\alpha_2)(1-\alpha_2-\epsilon_2)}\,,\;\;
B=\frac{(\alpha_1\beta_2+\alpha_2\epsilon_2)(1+\epsilon_1-\epsilon_2)}{(1-\alpha_2)(\alpha_1-\epsilon_1(\alpha_1+\alpha_2))}\,,
$$
and
$$
C=\frac{\epsilon_2(\alpha_1-\alpha_2)(\alpha_1\beta_2+\alpha_2\epsilon_2)}{(1-\alpha_2)^2(\alpha_2\epsilon_1+\alpha_1(1-\epsilon_2))}\,.
$$
We conclude this illustration by noting that if either $\epsilon_1=\epsilon_2=0$ or $\alpha_1=\alpha_2$ then our upper bound is zero, as expected.
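To make the illustration concrete, the sketch below (ours, with arbitrarily chosen transition probabilities satisfying the two sufficient conditions above) computes the law of $W_1$ exactly, its total variation distance to $\mbox{Geom}(p_1)$, and the bound of Corollary \ref{cor:shift} evaluated directly as $p_1(\mathbb{E}Y-\mathbb{E}W_1)$ rather than through the $A+B+C$ decomposition.

```python
# Exact check of the three-state Markov chain illustration; the
# transition probabilities are arbitrary choices satisfying the two
# sufficient conditions, not values from the text.
a1, b1, e1 = 0.5, 0.45, 0.05
a2, b2, e2 = 0.3, 0.6, 0.1
assert a1 >= a2 and b1 * (b2 + e2) >= b2 * (b1 + e1)

pmf = []            # pmf[n] = P(W = n), starting from Z_{-1} = 2
v = [0.0, 1.0]      # current mass on the transient states (1, 2)
for _ in range(2000):
    pmf.append(v[0] * a1 + v[1] * a2)
    v = [v[0] * b1 + v[1] * b2, v[0] * e1 + v[1] * e2]

q = [pk / (1.0 - pmf[0]) for pk in pmf[1:]]  # pmf of W_1 = (W - 1 | W >= 1)
p1 = q[0]
EW1 = sum(k * qk for k, qk in enumerate(q))
bound = (1.0 - p1) - p1 * EW1                # p_1 (E Y - E W_1), X = 1 a.s.
dtv = 0.5 * sum(abs(qk - p1 * (1.0 - p1) ** k) for k, qk in enumerate(q))
```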
\subsection{Hazard rate ordering}
In this section we will need the hazard rate ordering. For two nonnegative random variables $T$ and $U$, $T$ is said to be smaller than $U$ in the hazard rate order (denoted $T\leq_{hr}U$) if $r_T(j)\geq r_U(j)$ for all $j$. See, for example, Section 1.B of \cite{ss07}.
In proving Theorem \ref{thm:main}, Lemma \ref{lem:ord1} gave conditions under which $V+X\leq_{st}W+X$, which then allowed us to deduce a compound geometric approximation bound. In the case of geometric approximation, we use the hazard rate order to express conditions under which this stochastic ordering holds.
If $X=1$ a$.$s$.$ (so we are in the geometric approximation case, and $H_p(X)=p$), then (\ref{eq:ord2}) tells us that $V+1\leq_{st}W+1$ if
$$
r_W(j)\geq\frac{p}{1-p}=r_N(j)\,,
$$
where $p=\mathbb{P}(W=0)$ and $N\sim\mbox{Geom}(p)$. That is, if $W\leq_{hr}N$ then $V+1\leq_{st}W+1$ and the bound of Theorem \ref{thm:main} holds with $X=1$ almost surely. A similar argument shows that if $N\leq_{hr}W$ then $W+1\leq_{st}V+1$ and we obtain an analogous geometric approximation result. In fact, we have the following.
\begin{thm}\label{cor:hr}
Let $W$ be a nonnegative, integer-valued random variable with $p=\mathbb{P}(W=0)$. Let $N\sim\mbox{Geom}(p)$ and suppose that either $W\leq_{hr}N$ or $N\leq_{hr}W$. Then
$$d_{TV}(\mathcal{L}(W),\mathcal{L}(N))\leq\left|1-p(1+\mathbb{E}W)\right|\,.$$
\end{thm}
To illustrate this result, we return to the setting of Section \ref{subsec:pp}, and let $\{N(t):t\geq0\}$ be a Poisson process of rate $\lambda$ and $T$ be a nonnegative random variable independent of $\{N(t):t\geq0\}$. We have the following.
\begin{cor}
Let $\{N(t):t\geq0\}$ and $T$ be as above. Let $p=\mathbb{E}e^{-\lambda T}$ and $\mu=\lambda p(1-p)^{-1}$. Let $\eta\sim\mbox{Exp}(\mu)$ have an exponential distribution with mean $\mu^{-1}$ and suppose that either $T\leq_{hr}\eta$ or $\eta\leq_{hr}T$. Then
$d_{TV}(\mathcal{L}(N(T)),\mbox{Geom}(p))\leq\lambda p\left|\mu^{-1}-\mathbb{E}T\right|$.
\end{cor}
\begin{proof}
We note that $p=\mathbb{P}(N(T)=0)$, $\mathbb{E}N(T)=\lambda\mathbb{E}T$, and that $N(\eta)\sim\mbox{Geom}(p)$. The bound follows from Theorem \ref{cor:hr} if either $N(T)\leq_{hr}N(\eta)$ or $N(\eta)\leq_{hr}N(T)$.
Consider first the inequality $N(T)\leq_{hr}N(\eta)$. Using Theorem 1.B.14 of \cite{ss07}, this holds if $T\leq_{hr}\eta$. To see this, we need to verify that if $Z_\alpha\sim\mbox{Po}(\alpha)$ has a Poisson distribution with mean $\alpha$ then $Z_\alpha\leq_{hr}Z_\beta$ whenever $\alpha\leq\beta$. This is most easily checked by noting that $\mathbb{P}(Z_\beta=j)\mathbb{P}(Z_\alpha=j)^{-1}$ is increasing in $j$, and then using Theorem 1.C.1 of \cite{ss07} to get the required hazard rate ordering.
Similarly, if $\eta\leq_{hr}T$ then $N(\eta)\leq_{hr}N(T)$ and the stated upper bound holds.
\end{proof}
\subsection*{Acknowledgements}
The author thanks Mark Brown for providing a preprint of his paper \cite{b15}, and an anonymous referee for pointing out useful material in the literature and suggestions which led to improvements in the results presented.
\section{Introduction}\label{sec:intro}
Empirical stellar spectral libraries are one of the most universal
tools in modern astronomy. They have applications in both
extragalactic and stellar studies. The former include the
modelling of unresolved stellar populations
\citep[e.g.][]{2016A&A...589A..73R}, matching and removing continua
to reveal weak emission lines \citep[e.g.][]{1998ApJ...505..639E},
usage as templates to measure the stellar line-of-sight velocity
dispersions in galaxies
\citep[][]{1977ApJ...212..326S,2015MNRAS.452....2K,2018MNRAS.480.3215J,2018A&A...612A..66M,2019A&A...623A..87N}.
The stellar applications include measuring stellar parameters such
as effective temperatures \citep[e.g. ][]{2015MNRAS.454.4054B} and
surface gravities \citep[e.g.][]{2015ApJ...802L..10T} by template
matching or indices, measuring radial velocities
\citep[e.g.][]{2016MNRAS.456.4315S}, and verifying theoretical
stellar models, which sometimes are not as good as one might expect.
For example,
\citet{2013MNRAS.435..952S} found discrepancies in the Balmer
lines, suggesting that the theoretical spectral libraries may not
be as reliable a source of stellar spectra as the empirical ones.
The lists of applications given here are far from complete.
We can add a number of open issues related to the libraries: the
need to derive homogeneous and self-consistent stellar parameters
of the library stars -- right now the stellar parameters are
typically assembled from multiple sources. This requires a two-step
process: first, to derive global solutions of stellar parameters
$T_{\rm eff}$/[Fe/H]/$\log g$ versus spectral indices, and then
to invert these relations and to derive a new uniform set of stellar
parameters for all stars
\citep[e.g.][]{2016A&A...585A..64S,2019A&A...627A.138A}.
Another issue is to define optimal indices, most sensitive to one
or another stellar parameter \citep[e.g.][]{2013A&A...549A.129C}.
A particular problem related to galaxy models is the contribution
of the AGB stars \citep[e.g.][]{2005MNRAS.362..799M}.
The most widely used theoretical libraries today are the BaSeL
\citep{1992IAUS..149..225K,1997A&AS..125..229L,1998A&AS..130...65L,2002A&A...381..524W}
and the PHOENIX \citep{1999ApJ...525..871H,2012RSPTA.370.2765A,2013A&A...553A...6H},
but there have been
problems with the treatment of molecules, as shown early on by
\citet{1997A&A...318..841C}, that occasionally lead to poorly
predicted broad band colors. Among the empirical libraries the
work of \citet{1998PASP..110..863P} was the most widely used. It
includes 131 flux calibrated stars, but for the vast majority of
them the resolving power was below $R$=1000, which is relatively
low even for extragalactic applications where the intrinsic
velocity dispersion of galaxies require $R$$\sim$2000 or higher.
Other sets of spectra with better quality have become available:
ELODIE \citep{1998A&AS..133..221S,2001A&A...369.1048P,2004A&A...425..881L},
STELIB \citep{2003A&A...402..433L},
Indo-US \citep{2004ApJS..152..251V},
MILES \citep{2006MNRAS.371..703S}, and
CaT \citep{2001MNRAS.326..959C, 2007MNRAS.374..664C}.
More recently, a single-order library with a large number of stars
was reported by \citet{2018arXiv181202745Y}, but it has been
obtained with the 3\,arcsec fibers of the SDSS spectrograph
\citep{2017AJ....154...28B} and therefore does not completely
avoid the slit-loss problem. \citet{2011MNRAS.418.2785M}
incorporated some of the libraries listed here in a comprehensive
stellar population model at high spectral resolution.
The X-shooter Spectral Library \citep[XSL;][]{2014A&A...565A.117C}
is the latest and most comprehensive effort in this direction. At
the time of writing, only the optical spectra of Data Release 1
were available. It contains 237 stars and, when completed, the
library will cover the 0.3--2.5\,$\mu$m range at
a resolving power of $R$$\sim$7000--11000. The XSL is a good
example of the problems that increasing resolution and multi-order
cross-dispersed spectrographs bring in: the synthetic broad band
optical ($UBV$) colors agree poorly with the observed colors
from the Bright Star Catalog \citep[on average at $\sim$7\,\%
level, see Table\,5 and Fig.\,26 in][]{2014A&A...565A.117C}. The
differences are partially related to pulsating variable stars
having been observed in different phases. Slit losses are another
issue: for many stars it is caused by the lack, or the poor quality,
of wide-slit observations. Despite these problems, the narrow features in
the XSL spectra are self-consistent, e.g. observations in different
orders agree well \citep[see Fig.\,8 in][]{2014A&A...565A.117C},
and there is a good agreement between features and theoretical
models and other empirical libraries \citep[for a comparison with
the UVES-POP see Figs.\,31--34 in][]{2014A&A...565A.117C}.
In other words, we are facing again a familiar problem: the old
theoretical libraries used to predict colors inconsistent with
the observations; nowadays, the newest empirical libraries do the
same, despite -- or because of -- the excellent quality of the new
data, which has made the problem more apparent.
To address this issue we embarked on a project to build a {\it
slitloss-less} empirical spectral library with the MUSE
\citep[Multi-Unit Spectroscopic Explorer;][]{2010SPIE.7735E..08B}
integral field unit, spanning all major sequences on the
Hertzsprung-Russell diagram, with the specific goal of adjusting
and verifying the shapes of the spectra in other libraries, both
theoretical and empirical. The final product are spectra, suitable
for galactic modeling, stellar classification, and other
applications. Here we report the first subset of 35 MUSE stellar
spectra.
The next two sections describe the sample and the data, respectively.
Section\,\ref{sec:analysis} presents the analysis of our spectra and
Sec.\,\ref{sec:summary} summarizes this work.
\section{Sample}\label{sec:sample}
Our initial sample numbered 33 targets selected among the XSL
stars\footnote{\url{http://xsl.u-strasbg.fr/}}. We aimed to
populate the Hertzsprung-Russell diagram as homogeneously as
possible with $\sim$3--6 bright stars per spectral type, ensuring a
high signal-to-noise ratio (S/N$>$70--200), except for
the O-type, where only a single star was available.
Spectra of two additional stars were obtained: HD\,193256 and
HD\,193281B. They serendipitously fell inside the field of view
during the observations of the project target HD\,193281A. An IFU
campaign covering the entire XSL is planned, but we made sure to
select stars over various spectral types, making this trimmed-down
library adequate for some applications, such as stellar
classification and template fitting of galaxy spectra.
The SIMBAD spectral types as listed in \citet{2014A&A...565A.117C},
and complemented for the two extra targets, together with effective
temperatures $T_{\rm eff}$, surface gravities {\it log g} and
metallicities [Fe/H] collected from the literature, if available,
are listed in Table\,\ref{tab:params} and shown in
Fig.\,\ref{fig:params}\footnote{Stellar parameters from XSL also became available
after the submission of this paper: \citet{2019A&A...627A.138A}}.
The covered range of $T_{\rm eff}$ is 2600--33000\,K, of {\it log g}:
0.6--4.5, and of [Fe/H]: from $-$1.22 to 0.55, as far as the stellar
parameters are known. When multiple literature sources of
equal quality were available for a certain parameter, we adopted the
average value; if a given source had significantly smaller errors
than the others, we adopted the value from that source.
\begin{figure}[!htb]
\centering
\includegraphics[width=7cm]{par01.eps}\\
\caption{Properties of the stars in our sample.
{\it Top}: Surface gravity {\it log g} versus effective temperature
$T_{\rm eff}$ for stars with [Fe/H]$\leq$$-$0.5\,dex (crosses),
$-$0.5$<$[Fe/H]$<$0.0\,dex (open circles) and [Fe/H]$\geq$0.0\,dex
(solid dots).
{\it Bottom}: Distributions of the stars by spectral type.
}\label{fig:params}
\end{figure}
\begin{table*}
\caption{Physical parameters of the program stars. The columns
contain: (1) object ID (asterisks mark non-XSL objects); (2)
SIMBAD spectral type; (3-4) radial velocity and reference; (5-8)
effective temperature, surface gravity, iron abundance and
reference. Our estimated spectral type and effective temperature
for HD\,193281B are also listed.}\label{tab:params}
\begin{center}
\begin{small}
\begin{tabular}{@{ }l@{ }c@{}c@{}l@{ }c@{}c@{}c@{}l@{ }}
\hline\hline
IDs & Sp.\,Type & ~~~V$_{\rm rad}$,\,km\,s$^{-1}$~~~~ & Reference~~~~~~~~~~~~~~~~~~~~~~~~~ & ~~~~~~~$T_{\rm eff}$,\,K~~~~~~ & ~~~~log\,$g$~~~~~ & ~~~~~~[Fe/H]~~~~~~ & Reference\\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\
\hline
HD\,057060 & O7e... & 20.0$\pm$1.7 & \protect{\citet{2004A&A...424..727P}} & 32508$\pm$1928 & 3.39$\pm$0.26 & 0.24$\pm$0.14 & \protect{\citet{2012A&A...538A.143K}} \\
& & & & 33215$\pm$2674 & 3.28$\pm$0.16 & $-$0.03$\pm$0.20 & \protect{\citet{2011A&A...531A.165P}} \\
HD\,064332 & S & $-$1.3$\pm$0.5 & \protect{\citet{2006AstL...32..759G}} & 3399$\pm$44 & 0.61$\pm$0.40 & $-$0.04$\pm$0.18 & \protect{\citet{2011A&A...531A.165P}} \\
HD\,067507 & CNv... & 23$\pm$10 & \protect{\citet{1953GCRV..C......0W}} & 2680 & ... & ... & \protect{\citet{2002A&A...390..967B}} \\
HD\,085405 & C & 3.50$\pm$1.6 & \protect{\citet{2006AstL...32..759G}} & 2769 & ... & $-$0.10 & \protect{\citet{2016A&A...591A.118S}} \\
& & & & 2645 & ... & ... & \protect{\citet{2002A&A...390..967B}} \\
HD\,096446 & B2IIIp & 6.1$\pm$0.8 & \protect{\citet{2006AstL...32..759G}} & 20086$\pm$530 & 3.59$\pm$0.08 & 0.06$\pm$0.04 & \protect{\citet{2012A&A...538A.143K}} \\
HD\,099648 & G8Iab & $-$8.82$\pm$0.19 & \protect{\citet{2018yCat.1345....0G}} & 4970$\pm$75 & 2.25$\pm$0.43 & $-$0.01$\pm$0.15 & \protect{\citet{2012A&A...538A.143K}} \\
& & & & 4977$\pm$49 & 2.24$\pm$0.12 & $-$0.03$\pm$0.06 & \protect{\citet{2011A&A...531A.165P}} \\
HD\,099998 & K3.5III & 18.43$\pm$0.37 & \protect{\citet{2018yCat.1345....0G}} & 4001$\pm$32 & 1.56$\pm$0.20 & $-$0.24$\pm$0.07 & \protect{\citet{2011A&A...531A.165P}} \\
HD\,100733 & M3III & 21.07$\pm$0.29 & \protect{\citet{2018yCat.1345....0G}} & 3530 & ... & ... & \protect{\citet{2003AJ....125..359W}} \\
HD\,306799 & M0Iab & $-$16.38$\pm$0.19 & \protect{\citet{2008A&A...485..303M}} & 3650 & ... & ... & \protect{\citet{2003AJ....125..359W}} \\
HD\,101712 & M3Iab & $-$0.70$\pm$1.23 & \protect{\citet{2008A&A...485..303M}} & 3200 & ... & ... & \protect{\citet{2003AJ....125..359W}} \\
HD\,102212 & M1III & 50.28$\pm$0.09 & \protect{\citet{2009A&A...498..627F}} & 3738$\pm$6 & 1.55$\pm$0.10 & $-$0.41$\pm$0.05 & \protect{\citet{2012A&A...538A.143K}} \\
HD\,114960 & K5III & 7.35$\pm$0.16 & \protect{\citet{2018yCat.1345....0G}} & 4000 & ... & ... & \protect{\citet{2003AJ....125..359W}} \\
IRAS\,15060+0947 & M9III & $-$8.2$\pm$2.6 & \protect{\citet{2015A&A...582A..68E}} & 3281 & ... & ... & \protect{\citet{2018A&A...616A...1G}} \\
HD\,147550 & B9V & $-$24.1$\pm$0.9 & \protect{\citet{2006AstL...32..759G}} & 9830$\pm$279 & 3.70$\pm$0.66 & $-$0.38$\pm$0.11 & \protect{\citet{2012A&A...538A.143K}} \\
HD\,160365 & F6III & 8.14$\pm$2.31 & \protect{\citet{2008AJ....135..209M}} & 6009 & ... & ... & \protect{\citet{2016A&A...591A.118S}} \\
HD\,160346 & K3V & 17.856$\pm$0.784 & \protect{\citet{2017AJ....153...75K}} & 4808$\pm$65 & 4.53$\pm$0.22 & 0.03$\pm$0.10 & \protect{\citet{2012A&A...538A.143K}} \\
HD\,163810 & G3V & 185.99$\pm$0.22 & \protect{\citet{2002AJ....124.1144L}} & 5818$\pm$15 & 4.35$\pm$0.06 & $-$1.20$\pm$0.04 & \protect{\citet{2012A&A...538A.143K}} \\
HD\,164257 & A0 & 5.5$\pm$0.9 & \protect{\citet{2006AstL...32..759G}} & 9792$\pm$691 & 3.70$\pm$2.11 & 0.41$\pm$0.30 & \protect{\citet{2012A&A...538A.143K}} \\
\lbrack B86\rbrack\,133 & M4 & 44$\pm$5 & this work & 4637 & ... & $-$0.21 & \protect{\citet{2018A&A...616A...1G}}, \\
& & & & 2645 & ... & ... & \protect{\citet{2004ApJS..151..387I}} \\
HD\,167278 & F2 & $-$14.7$\pm$0.9 & \protect{\citet{2006AstL...32..759G}} & 6563$\pm$18 & 4.14$\pm$0.08 & $-$0.21$\pm$0.04 & \protect{\citet{2012A&A...538A.143K}} \\
HD\,170820 & K0III & 2.84$\pm$0.06 & \protect{\citet{2008A&A...485..303M}} & 4707$\pm$57 & 1.65$\pm$0.13 & 0.17 & \protect{\citet{2011A&A...531A.165P}} \\
HD\,172230 & A5 & $-$36.8$\pm$0.8 & \protect{\citet{2006AstL...32..759G}} & 7772$\pm$102 & 3.76$\pm$0.44 & 0.55$\pm$0.14 & \protect{\citet{2012A&A...538A.143K}} \\
HD\,173158 & K0 & 14.06$\pm$0.32 & \protect{\citet{2018yCat.1345....0G}} & 5164$\pm$121 & 0.87$\pm$0.43 & 0.04$\pm$0.20 & \protect{\citet{2012A&A...538A.143K}} \\
HD\,174966 & A3 & 5.6$\pm$0.9 & \protect{\citet{2018yCat.1345....0G}} & 7874$\pm$57 & 4.09$\pm$0.16 & 0.03$\pm$0.10 & \protect{\citet{2012A&A...538A.143K}} \\
HD\,175640 & B9III & $-$26.0$\pm$4.3 & \protect{\citet{2006AstL...32..759G}} & 12067$\pm$326 & 4.07$\pm$0.55 & 0.22$\pm$0.18 & \protect{\citet{2012A&A...538A.143K}} \\
& & & & 12077$\pm$453 & 3.94$\pm$0.21 & 0.17$\pm$0.15 & \protect{\citet{2011A&A...531A.165P}} \\
HD\,179821 & G5Ia & 81.78$\pm$3.71 & \protect{\citet{2018yCat.1345....0G}} & 6997 & 0.62 & 0.44 & \protect{\citet{2016A&A...591A.118S}} \\
& & & & 7107 & 1.00 & 0.45 & \protect{\citet{2016A&A...591A.118S}} \\
HD\,232078 & K3IIp & $-$388.34$\pm$0.27 & \protect{\citet{2008A&A...480...91S}} & 4295$\pm$48 & 0.82$\pm$0.27 & $-$1.08$\pm$0.11 & \protect{\citet{2012A&A...538A.143K}} \\
& & & & 4014$\pm$48 & 0.81$\pm$0.20 & $-$1.22$\pm$0.11 & \protect{\citet{2011A&A...531A.165P}} \\
HD\,193256$^*$ & A8Vn... & 6$\pm$2 & this work & 7860 & 3.74 & $-$0.95 & \protect{\citet{2016A&A...591A.118S}} \\
HD\,193281A & A2III & 0.3$\pm$0.5 & \protect{\citet{2006AstL...32..759G}} & 8623$\pm$345 & 4.30$\pm$0.33 & $-$0.68$\pm$0.28 & \protect{\citet{2012A&A...538A.143K}} \\
& & & & 8597$\pm$218 & 4.11$\pm$0.14 & $-$0.37$\pm$0.13 & \protect{\citet{2011A&A...531A.165P}} \\
HD\,193281B$^*$ & F5:V: & $-$43.13$\pm$0.97 & \protect{\citet{2018yCat.1345....0G}} & 8080 & 3.58 & $-$1.00 & \protect{\citet{2016A&A...591A.118S}} \\
& & & & 8414 & ... & ... & \protect{\citet{2016A&A...591A.118S}} \\
& K2III & & & 4354$\pm$57 & ... & ... & this work \\
HD\,193896 & G5IIIa & $-$15.23$\pm$0.18 & \protect{\citet{2018yCat.1345....0G}} & 4900 & ... & ... & \protect{\citet{2003AJ....125..359W}} \\
HD\,196892 & F6V & $-$34.498$\pm$0.004 & \protect{\citet{2011A&A...526A.112S}} & 6028$\pm$22 & 4.17$\pm$0.10 & $-$0.99$\pm$0.07 & \protect{\citet{2012A&A...538A.143K}} \\
HD\,200081 & G0 & 7.67$\pm$0.27 & \protect{\citet{2008A&A...480...91S}} & 5526$\pm$71 & 3.25$\pm$0.43 & 0.02$\pm$0.12 & \protect{\citet{2012A&A...538A.143K}} \\
HD\,204155 & G5 & $-$84.60$\pm$0.16 & \protect{\citet{2002AJ....124.1144L}} & 5704$\pm$28 & 3.89$\pm$0.16 & $-$0.70$\pm$0.07 & \protect{\citet{2012A&A...538A.143K}} \\
& & & & 5718$\pm$56 & 3.93$\pm$0.11 & $-$0.69$\pm$0.06 & \protect{\citet{2011A&A...531A.165P}} \\
HD\,209290 & M0.5V & 18.144$\pm$0.069 & \protect{\citet{2013A&A...552A..64S}} & 4031 & ... & $-$0.06 & \protect{\citet{2006ApJ...638.1004A}} \\
\hline
\end{tabular}
\end{small}
\end{center}
\end{table*}
\section{Observations and Data Reduction}\label{sec:reduction}
The spectra were obtained with MUSE at the European Southern
Observatory (ESO) Very Large Telescope, Unit Telescope 4, on
Cerro Paranal, Chile.
Table\,\ref{tab:obs_log} gives the observing log. We obtained
six exposures for each target, except for HD\,204155 which was
observed 12 times. To maximize the data yield most of the data
were obtained under non-photometric conditions, so the absolute
flux calibration is uncertain, but the ``true'' intrinsic shape
is preserved, because there is no ``stitching'' of multiple
orders and no variable slit losses due to atmospheric refraction.
We placed the science targets at the same spaxels as the
spectrophotometric standards, to minimize the instrument
systematics that might arise from residual spaxel-to-spaxel
variations.
The data reduction was performed with the ESO MUSE pipeline (ver.
2.6) within the ESO
Reflex\footnote{\url{https://www.eso.org/sci/software/esoreflex/}}
environment \citep{2013A&A...559A..96F}.
The 1-dimensional spectra were extracted within a circular aperture
with a radius of 6\,arcsec. This number was selected after some
experiments with apertures of different sizes, to guarantee that
``aperture'' losses would lead to a change in the overall slope of
the spectra of $<$1\,\% from the blue to the red end. The sky emission
was estimated within an annulus of an inner radius 7\,arcsec and a
width of 4\,arcsec. This step of the analysis was performed with an
IRAF\footnote{IRAF is distributed by the NOAO, which is operated by
the AURA Inc., under contract to the NSF.}/PyRAF tool
\citep{1986SPIE..627..733T,1993ASPC...52..173T,2012ascl.soft07011S}.
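For illustration, the aperture-plus-annulus extraction can be sketched as below; this is our minimal reimplementation, not the IRAF/PyRAF tool actually used, and the function name, defaults, and the assumption of a flat sky are ours (radii in arcsec, with the 0.2\,arcsec MUSE spaxel scale assumed).

```python
import numpy as np

def extract_1d(cube, x0, y0, r_ap=6.0, r_in=7.0, width=4.0, scale=0.2):
    """Sum a circular aperture and subtract a median annulus sky,
    per wavelength plane, from a (n_wave, ny, nx) cube.

    Illustrative sketch only; radii are in arcsec and `scale` is the
    spaxel size (0.2 arcsec for MUSE).
    """
    ny, nx = cube.shape[1:]
    y, x = np.mgrid[:ny, :nx]
    r = np.hypot(x - x0, y - y0) * scale
    ap = r <= r_ap
    ann = (r >= r_in) & (r < r_in + width)
    sky_per_spaxel = np.median(cube[:, ann], axis=1)   # (n_wave,)
    return cube[:, ap].sum(axis=1) - sky_per_spaxel * ap.sum()
```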
Three stars were treated differently.
For [B86]\,133 we reduced the extraction aperture radius to
4\,arcsec (keeping the sky annulus the same as for the majority
of the targets) to avoid contamination from nearby sources --
because the object is located in a crowded Milky Way bulge field.
HD\,193256 is close to the edge of the MUSE field of view, and
the extraction apertures had to be smaller, with a radius of
4.6\,arcsec; the sky annulus had an inner radius of 4.6\,arcsec
and a width of 2\,arcsec.
HD\,193281 is a binary with $\sim$3.8\,arcsec separation and the
components cross-contaminate each other. To separate the two
spectra we first extracted a combined spectrum of the two stars
together with the same aperture and annulus as for the bulk of
the stars.
Next, we rotated each plane of the data cube by 180\degr\ around
the centre of the primary and subtracted the rotated plane from
the original non-rotated
plane, to remove the contribution of the primary at
the location of the secondary. Then, we extracted the spectrum of
the secondary with an aperture with a radius of 1.2\,arcsec and a
sky annulus with an inner radius of 1.8\,arcsec and a width of
4\,arcsec. Finally, we decontaminated the spectrum of the primary
by subtracting the spectrum of the secondary from the combined
spectrum of the binary.
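The rotate-and-subtract step can be sketched in a few lines; the version below assumes the cube has been cropped to odd spatial dimensions with the primary exactly on the central spaxel (a simplification -- the actual procedure rotates about the measured centre of the primary) and that the point spread function is symmetric under the rotation.

```python
import numpy as np

def remove_primary(cube):
    """Subtract a 180-degree-rotated copy of each plane from itself.

    With the primary on the central spaxel of an odd-sized frame,
    reversing both spatial axes rotates each plane by 180 degrees about
    the primary, so a symmetric primary cancels while the secondary
    survives (plus a negative ghost at the mirrored position).
    """
    return cube - cube[:, ::-1, ::-1]
```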
Experiments with apertures of different sizes indicated that the
continuum shape of [B86]\,133 still changed at the $<$1\,\% level across
the entire wavelength range, despite the narrower extraction
aperture. The spectra of the two other objects are less reliable
and in the case of HD\,193281B a change in the radius of a few
spaxels (0.2\,arcsec) leads to a flux change of $\sim$3\,\% over
the entire wavelength range. However, the spectrum of HD\,193281A
is still stable at $<$1\,\% because the secondary contributes
$\sim$1 and $\sim$11\,\% to the total flux at the blue and at the
red ends of the spectrum, respectively, so this $\sim$3\,\%
uncertainty is reduced by factors of $\sim$100 and $\sim$9,
respectively, and the spectrum of HD\,193281A can be considered
reliable according to our criterion of $<$1\,\% stability across the
entire spectral range.
The telluric features were removed by running {\it molecfit} ver.
1.5.7 \citep{2015A&A...576A..77S,2015A&A...576A..78K} separately
on each of the six (12 for HD\,204155) target spectra themselves.
The agreement of individual solutions is excellent: typically the
fits yield a precipitable water estimate identical to within
$<$0.1\,mm.
The final spectrum for each target is the average of the 1-D spectra
derived from the six individual observations, and the error is the
r.m.s. of that averaging. An example of the data products is
plotted in Fig.\,\ref{fig:spectra_example}. The complete sample is
shown in Fig.\,\ref{fig:spectra_full_sample}. All final spectra are
given in Table\,\ref{table:full_spectra} and are available in
machine readable form at the journal's website.
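The co-addition step is simple enough to sketch directly; below is an illustrative fragment in which synthetic arrays stand in for the six real extracted 1-D spectra (the array sizes and noise level are our assumptions, not pipeline values):

```python
import numpy as np

# Synthetic stand-ins for the six extracted 1-D spectra of one star,
# all resampled onto a common wavelength grid.
rng = np.random.default_rng(42)
n_exp, n_pix = 6, 3681  # MUSE: ~4750-9350 A at 1.25 A per pixel
spectra = 1.0 + 0.01 * rng.standard_normal((n_exp, n_pix))

# Final spectrum = average of the individual exposures;
# error spectrum = r.m.s. scatter about that average.
final_spectrum = spectra.mean(axis=0)
error_spectrum = spectra.std(axis=0, ddof=1)
```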
\begin{figure*}[!htb]
\centering
\includegraphics[height=15cm]{B86133.eps}
\caption{An example of the MUSE spectra (black line) of \lbrack
B86\rbrack\,133 and of the corresponding XSL DR1 spectrum (red line;
normalized to match the MUSE spectrum flux). The plot title lists the
spectral type and the measured median S/N per resolution element over
the entire spectrum.
The upper sub-panels show the spectra extracted from each individual
exposure (shifted up for clarity) and the average spectra of the
object at its true flux level.
The bottom sub-panels show the standard deviation of the average
spectrum.
The spectra of the other sample stars are presented in the electronic
edition only (Fig.\,\ref{fig:spectra_full_sample}).}\label{fig:spectra_example}
\end{figure*}
\section{Analysis}\label{sec:analysis}
A direct comparison of the MUSE and XSL spectra for eight randomly
selected stars across the spectral type sequence is shown with some
zoomed-in spectral regions in Fig.\,\ref{fig:MUSE_XSL} (for the
rest of our spectra see Figs.\,\ref{fig:spectra_example} and
\ref{fig:spectra_full_sample}). Notably, the XSL spectra take
their continuum shape from 5\,arcsec wide-slit observations.
In most cases the agreement on a
scale of a few hundred pixels--in other words, within the same
X-shooter order--is excellent. However, on wider scales we find
deviations between the XSL and MUSE spectra, as can be seen in
Fig.\,\ref{fig:spectra_ratios}. The exceptions are usually late
type stars -- \lbrack B86\rbrack\,133 and IRAS\,15060+0947 are
examples -- where the low signal-to-noise in the blue ($\sim$10
or below) and the variability that only occurs with extremely
red stars may account for the problem.
Furthermore, the ratios of many spectra show a gradual change,
despite their apparently high signal-to-noise: HD\,147550 and
HD\,167278 are examples where the amplitude of the ratio within
the MUSE wavelength range reaches 10-15\,\%. We fitted
second-order polynomials to the ratios and extrapolated them over
the full wavelength range covered by the XSL library to demonstrate
that if these trends hold, the overall peak-to-peak flux
differences can easily reach $\sim$20\,\%, so the overall
continuum of the cross-dispersed spectra is somewhat ill-defined.
The coefficients of the polynomial fits are listed in
Table\,\ref{tab:ratio_fits} and can be used to correct the shape
of the XSL spectra. We are far from criticizing
\citet{2014A&A...565A.117C} for the quality of their data
reduction; rather, we point out that high signal-to-noise
observations show how difficult it is to process cross-dispersed
spectra. Indeed, problems that may not be obvious with poor
quality data become apparent at signal-to-noise of 100-200.
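The proposed shape correction is just a quadratic fit to the flux ratio, extrapolated over the XSL coverage. A schematic version follows; the ratio here is synthetic and the coefficients are invented for illustration (the real ones are those of Table\,\ref{tab:ratio_fits}):

```python
import numpy as np

# Hypothetical XSL/MUSE flux ratio over the MUSE range (wavelength in A)
wave = np.linspace(4750.0, 9350.0, 500)
ratio = 1.0 + 1.5e-5 * (wave - 7000.0) + 2.0e-9 * (wave - 7000.0) ** 2

# Second-order polynomial fit to the ratio
coeffs = np.polyfit(wave, ratio, deg=2)

# Extrapolate the correction over the full XSL wavelength coverage;
# dividing an XSL spectrum by `correction` applies the shape fix.
xsl_wave = np.linspace(3000.0, 25000.0, 200)
correction = np.polyval(coeffs, xsl_wave)
```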
\begin{figure*}[!htb]
\centering
\includegraphics[height=9cm]{sp06.eps} \includegraphics[height=9cm]{sp07.eps} \\
\caption{Comparison of a subset of our MUSE spectra (black lines)
with the XSL spectra (red lines; boxcar smoothed over 8 pixels).
The spectra
are normalized to unity between the two vertical dotted lines
shown on the left panel, and shifted vertically for display
purposes. The left panel shows the entire MUSE spectral range,
the right panel zooms onto the H$\beta$, H$\alpha$, and Ca triplet
wavelength ranges (left to right). No radial velocity corrections
are applied.}\label{fig:MUSE_XSL}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=15cm]{Ratios_11e.eps} \\
\includegraphics[width=15cm]{Ratios_12.eps} \\
\caption{Ratios of the XSL spectra to our MUSE spectra (blue;
covers only the MUSE wavelength range), normalized to unity
and median smoothed for display purposes with a 5-element
wide median filter. Two ratios are shown for HD\,101712 --
for the two XSL spectra of this star.
Second-order polynomial fits spanning the wavelength range of XSL
are also shown in blue. The labels on the top of each panel contain
the name of the object, the normalization factor that indicates
the flux ratio of the independently flux-calibrated MUSE and
XSL spectra, and a standard deviation of the fits residuals.
The coefficients of polynomial fits are listed in
Table\,\ref{tab:ratio_fits}.}\label{fig:spectra_ratios}
\end{figure*}
The question remains, however, whether the MUSE spectra have a
more reliable shape than the XSL spectra, because strictly speaking
so far we have only demonstrated the good internal agreement
between the six (or 12) individual MUSE observations. To provide
an external check we followed \citet{2014A&A...565A.117C}, and
calculated synthetic SDSS colors from both our and the XSL
spectra (Fig.\,\ref{fig:synth_colors}) using the {\it pyphot}
tool\footnote{\url{http://mfouesneau.github.io/docs/pyphot/}}.
The XSL spectra were median smoothed to remove outliers, e.g. due
to poorly removed cosmic ray hits. The MUSE sequences are slightly
tighter than the XSL ones, confirming that the MUSE spectra have
more reliable shapes. This is expected, because of the slit losses
and the imperfect order stitching of the XSL spectra. Furthermore,
X-shooter has three arms -- in effect, three different instruments,
and some of the colors mix fluxes from different arms, which
may contribute to the larger scatter. A better spectral shape
verification will be possible in the future with the {\it Gaia}
low-resolution spectra.
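Synthetic photometry of this kind amounts to a photon-weighted integral of the spectrum over the filter transmission curve. Below is a self-contained sketch; the top-hat passband and the function are our simplified stand-ins (the actual computation used the real SDSS curves via {\it pyphot}):

```python
import numpy as np

def synth_ab_mag(wave, flux_lam, filt_wave, filt_trans):
    """AB magnitude of f_lambda [erg/s/cm^2/A] through a transmission curve."""
    c_angstrom = 2.998e18  # speed of light in A/s
    trans = np.interp(wave, filt_wave, filt_trans, left=0.0, right=0.0)
    # photon-weighted mean flux density within the band
    # (uniform grid assumed, so the wavelength step cancels) ...
    f_lam = np.sum(flux_lam * trans * wave) / np.sum(trans * wave)
    # ... converted to f_nu at the pivot wavelength
    pivot_sq = np.sum(trans * wave) / np.sum(trans / wave)
    f_nu = f_lam * pivot_sq / c_angstrom
    return -2.5 * np.log10(f_nu) - 48.60

# Sanity check: a flat-f_nu source has AB magnitude 0 in any band
wave = np.linspace(4000.0, 9000.0, 2000)
flux_lam = 3.631e-20 * 2.998e18 / wave**2  # f_lambda of an AB = 0 source
mag = synth_ab_mag(wave, flux_lam,
                   np.array([4700.0, 5500.0]), np.array([1.0, 1.0]))
```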
\begin{figure*}[!htb]
\centering
\includegraphics[width=16cm]{cm05.eps}\\
\caption{Synthetic SDSS color-color diagrams derived from the MUSE
(open circles) and XSL (open triangles) spectra. Larger open
circles mark known variables, according to the SIMBAD database.
Although many stars are variable, some distinct outliers are not.
Sequences for Solar abundance dwarfs (red line) and giant (green
line) stars from \citet{1998ApJS..119..121L} are also shown.
The extreme red outliers are IRAS\,15060+0947 (V*\,FV\,Boo) -- a
known Mira variable. There are two points for this object on
the right panel -- they correspond to the XSL and the MUSE
spectra.}\label{fig:synth_colors}
\end{figure*}
The Lick indices \citep{1994ApJS...94..687W} that fall within the
wavelength range covered by MUSE were measured in the new spectra
(Table\,\ref{tab:indices}). These include Fe5015, Fe5270, Fe5335,
Fe5406, Fe5709, Fe5782, H$\beta$, Mg$_{\rm 1}$, Mg$_{\rm 2}$, Mg\,b,
Na\,D, TiO$_{\rm 1}$ and TiO$_{\rm 2}$. As designed by our target
selection, the measured values occupy the same locus as the Lick
library (Fig.\,\ref{fig:indices}).
In the course of the analysis we noticed that the Lick indices of
HD\,193281B correspond to a later type than the F5:V: reported in
Simbad. We derived a new spectral type of K2III using as templates
our spectra of HD\,170820 and HD\,099998 and we adopted for this
star the average of their effective temperatures,
$T_{\rm eff}$=4354\,K with a tentative uncertainty of 57\,K -- the
larger of the uncertainties of the $T_{\rm eff}$ for these two
stars.
The metal features of HD\,179821 are stronger than for other stars
with similar temperature, but this is probably due to the supersolar
abundance of this star \citep{2016A&A...591A.118S}. Some Lick
indices of late-M and C/S stars also deviate from the locus, but the
spectra of these stars are dominated by broad molecular features,
making the atomic indices, such as Fe, Mg and H, meaningless.
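An atomic index of this kind is essentially an equivalent width measured against a pseudo-continuum interpolated between two side bands. A minimal sketch follows; the band limits approximate the Fe5270 definition of \citet{1994ApJS...94..687W}, and the test spectrum is synthetic:

```python
import numpy as np

def index_ew(wave, flux, blue, band, red):
    """Equivalent width (A) of `band`, with a linear pseudo-continuum
    drawn through the mean fluxes of the `blue` and `red` side bands."""
    def band_mean(lo, hi):
        m = (wave >= lo) & (wave <= hi)
        return wave[m].mean(), flux[m].mean()
    (wb, fb), (wr, fr) = band_mean(*blue), band_mean(*red)
    m = (wave >= band[0]) & (wave <= band[1])
    cont = fb + (fr - fb) * (wave[m] - wb) / (wr - wb)
    dl = wave[1] - wave[0]  # uniform grid assumed
    return np.sum(1.0 - flux[m] / cont) * dl

# Gaussian absorption line of known area on a flat continuum
wave = np.linspace(5100.0, 5500.0, 4001)
depth, sigma = 0.5, 3.0
flux = 1.0 - depth * np.exp(-0.5 * ((wave - 5270.0) / sigma) ** 2)
bands = ((5233.2, 5248.2), (5245.7, 5285.7), (5285.7, 5318.2))  # ~Fe5270
ew = index_ew(wave, flux, *bands)
```

A featureless spectrum returns a zero index, while the Gaussian line returns its analytic area, depth$\,\times\,\sigma\sqrt{2\pi}$.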
\begin{figure}[!htb]
\centering
\includegraphics[width=9.0cm]{i03e.eps}\\
\caption{Lick indices for the stars in our sample (red dots) and in
the sample of \citet[][black dots]{1994ApJS...94..687W}. Following
their definitions, Mg$_{\rm 1}$, Mg$_{\rm 2}$, TiO$_{\rm 1}$ and
TiO$_{\rm 2}$ are in magnitudes, and the rest are equivalent widths
in units of \AA. The two coldest objects that often deviate from the
M-star dominated sequences are the carbon stars HD\,067507 and
HD\,085405.}\label{fig:indices}
\end{figure}
\section{Summary and Conclusions}\label{sec:summary}
We present high signal-to-noise (S/N$>$70--200) MUSE spectra of
35 stars across the spectral type sequence. The comparison with
higher resolution existing data and spectral index measurements
show reasonably good agreement, except for differences in the
continuum shape that point to the real difficulties of obtaining
high-resolution spectra with wide spectral coverage: the
instruments that deliver such data spread the light over
many orders, and their combination is not trivial. Importantly,
the integral field unit that we use does not suffer from slit
losses.
The sample of spectra presented here is relatively limited in
terms of the number of stars, and to make this library more useful
we need to populate the parameter space more densely. In
particular, the metallicity range needs to be expanded. Our data
suffer from the high blue wavelength limit of MUSE, missing some
important CN, Ca and Fe spectral features in the 4100--4800\,\AA\
range. This is a hardware limitation that can only be addressed
with other/future instruments. Further accurate broad band
photometry is needed to extend the external verification of the
continuum shape -- so far {\it Gaia}, SDSS and other photometric
surveys provide measurements only for about a quarter of our
sample stars -- mostly because our program stars are too bright.
Expanding the MUSE library towards fainter stars will increase
this fraction and make such a test statistically significant.
Despite these issues, our MUSE spectral library can be a useful
tool for both stellar and galaxy research. This project started
as a simple effort to complement the XSL DR1 library, but our
spectra can be applied to various MUSE-based studies -- they
have the extra advantage of being obtained with the same
instrument, so the data format is the same, and any low-level
instrumental signatures that might have remained in the data
could cancel out. We plan to expand the number of library stars
in the future.
\begin{acknowledgements}
This paper is based on observations made with the ESO Very Large
Telescope at the La Silla Paranal Observatory under program
099.D-0623.
We have made extensive use of the SIMBAD Database at CDS (Centre
de Donn\'ees astronomiques) Strasbourg and of the VizieR catalog
access tool, CDS, Strasbourg, France.
E.M.C., E.D.B., L.M., and A.P. acknowledge financial support from
Padua University through grants DOR1715817/17, DOR1885254/18,
DOR1935272/19, and BIRD164402/16.
We thank the referee for the comments that helped to improve the
paper.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
In a tidal disruption event (TDE), a star wanders toward the center of a galactic nucleus and is tidally disrupted by a supermassive black hole (SMBH). A bright, multi-wavelength flare is produced when the stellar debris falls back and accretes toward the black hole. Observing such incidents is an important way to probe the otherwise dormant SMBHs in the center of many non-active galaxies and to study the feeding process of SMBHs.
An unsettled issue still remains regarding the physical origin of the observed emission (i.e., where the emission is produced). Initially it was expected that the emission is produced in the accretion disk, which should peak in soft X-rays \citep{Rees1988}. However, most of the observed TDEs are X-ray weak\footnote{Note that a separate class of TDEs are non-thermal X-ray dominated, and they are thought to originate from a relativistic jet and require special orientation in order to be seen \citep{giannios11,burrows11,bloom11,levan11,zauderer11,cenko12,brown15}.}, only a small fraction of TDE sample have been detected with X-ray emission in follow-up observations: GALEX D1-9 and D3-13 \citep{Gezari08}, ASASSN-14li \citep{holoien16b}, AT2018fyk \citep{Wevers19} and ASASSN-15oi \citep{holoien16a}, with upper limits in a few cases: PS1-10jh \citep{Gezari12}, iPTF16fnl \citep{Blagorodnova17,Brown18}, and iPTF16axa \citep{Hung17}, while the majority were discovered in optical or UV \citep[e.g.,][]{Gezari12,Chornock2014,holoien14,van VelzenFarrar2014,Arcavi2014}.
Two categories of models are proposed to explain the optical emissions. One involves ejected mass which reprocesses the high energy emission from the center to lower energies \citep{Ulmer1997,StrubbeQuataert2009}. The other focuses on the stream-stream interaction at the apocenter as the major energy dissipation site \citep{Piran2015,Jiang16}. Recently \cite{Dai2018} explain the X-ray/optical dichotomy by considering the inclination effect and the angular distribution of the mass outflow's property such as density, speed and temperature.
Some TDEs have been discovered to have surprisingly low blackbody temperatures of 1 $\sim$ 3 $\times 10^{4}$ K through wide-field UV and optical surveys \citep{Gezari09,Gezari12,van Velzen11,Arcavi2014,holoien14,holoien16a,holoien16b,Blagorodnova17,Hung17}, which can not be explained through radiation from traditional accretion process. These are attributed to larger radii associated with a reprocessing layer \citep{Loeb and Ulmer1997,Guillochon14,Roth16}, which form from a radiatively driven wind \citep{Miller15,Strubbe and Murray15,metzger16} or the radiation from stream-stream collisions during the circularization to form the disk \citep{Lodato12,Piran2015,Shiokawa15,Jiang16,Krolik16,Bonnerot17,Wevers17}. On the other hand, one should expect the X-rays to show up eventually, once the wind subsides or the circularization finishes.
In this paper, we analyse the observations of a recent, nearby TDE candidate AT2019azh which shows a long (by $\sim$ 200 days) delayed X-ray brightening with respect to its UV/optical peak, a pattern similar to that seen in the past TDE candidate ASASSN-15oi \citep{gezari17}. In \S\ref{sec2} we describes the observations and data reduction in optical, UV and X-ray bands. Then we analyze its multi-wavelength light curves in \S\ref{sec3}, present the spectral energy distribution (SED) fit and show the source's bolometric behavior (evolution of temperature and photospheric radius) in \S\ref{sec4}. We discuss three physical scenarios in TDEs in order to explain its multi-waveband behavior in \S\ref{sec5}. We summarize the results and conclude in \S\ref{sec:con}.
\section{Observations} \label{sec2}
\subsection{Optical discovery}
The bright nuclear transient AT2019azh was discovered by the All-Sky Automated Survey for Supernovae (ASAS-SN) on UT 2019 Feb 22.02 \citep{brimacombe19} and by the Zwicky Transient Facility (ZTF) on 2019 Feb 12.40 \citep{vanvelzen19} in the center of an E+A galaxy at $z$ = 0.022 (luminosity distance $D= 96$ Mpc). Follow-up spectroscopic observations by NUTS \citep{NUTS19} and ePESSTO \citep{ePESSTO19} show a featureless blue spectrum.
\cite{vanvelzen19} reported the ZTF and \textit{Swift} UVOT photometry, which show a $\sim$15-day long slowly-rising or plateau phase starting from 2 days after the ASAS-SN trigger. However, later monitoring showed that the flux continued to rise until it peaked at $g$ = 14.4, about 31 days after the trigger (see Figure \ref{fig:mag}). The ZTF and host-subtracted UVOT photometry indicate a temperature of $\log(T) = 4.5$ $\pm$ 0.1, and the ZTF photometry confirms that the transient is consistent with originating from the center of its host galaxy, with a mean offset of 0.07 $\pm$ 0.31 arcsec \citep{vanvelzen19}. \textit{Swift} XRT observations on 2019 Mar 11.45 detected 5 soft photons, corresponding to a luminosity of $L_X= 2.5\times10^{41}$ erg s$^{-1}$ \citep{vanvelzen19}.
Based on its nuclear position, persistent blue color, high blackbody temperature and lack of spectroscopic features associated with a supernova or AGN, \cite{vanvelzen19} identified AT2019azh as a TDE, which we will follow hereafter.
We collect the publicly available ASAS-SN\footnote{\href{https://asas-sn.osu.edu/light_curves/07988c67-2399-46f1-a9dc-3608c7e8141c}{https://asas-sn.osu.edu/light\_curves/07988c67-2399-46f1-a9dc-3608c7e8141c}} $g$ and $V$ band, ZTF\footnote{\href{https://lasair.roe.ac.uk/object/ZTF17aaazdba/}{https://lasair.roe.ac.uk/object/ZTF17aaazdba/}} $g$ and $r$ band, {\it Swift} UVOT, and Gaia $G$ band photometry data. We plot the host-subtracted source light curves in Figure \ref{fig:mag}.
\begin{figure}
\begin{center}
\includegraphics[width=9.0cm]{mag}
\caption{Observed light curves of AT2019azh in optical, Swift UV and X-ray bands. The host contribution has already been subtracted in the optical and UV bands ({\it top}). The {\it bottom} panel shows the XRT 0.3-2 keV count rate light curve. The two data points with downward arrows are upper limits.} \label{fig:mag}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=11.5cm]{1hardband_img}
\includegraphics[width=11.6cm]{1softband_img}
\caption{Stacked {\it Swift} XRT images in the 2-10 keV (upper panels) and 0.3-2 keV (lower panels) bands.}
\label{fig:stickingimg}
\end{center}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm, angle=0]{x-ray}
\caption{The evolution of the X-ray hardness of AT2019azh. Asterisks and crosses are the stacked count rates in 0.3-2 keV and 2-10 keV, respectively. The hardness ratio is defined as the count rate ratio between the two bands: $HR$ = (2-10 keV) / (0.3-2 keV).} \label{fig:x-ray}
\end{center}
\end{figure}
\subsection{Swift UV and X-rays}
AT2019azh has been observed 36 times with the {\it Swift} Observatory since March 2, 2019 (up to November 11, 2019). We download and reduce the {\it Swift} observations with the software HEASoft v6.26 and the latest calibration files of {\it Swift}. The UVOT telescope observed the source with multi-wavelength filters (V, B, U, UVM2, UVW1, UVW2). We extract the source photometry from a source region of radius 3'', and the background from a source-free region with radius 20'' near the source position, using the task `uvotsource'. The UVOT photometry results are presented in Figure \ref{fig:mag}.
For {\it XRT}, we reprocess the event files with the task `xrtpipeline', and select the event files taken in Photon Counting mode. The source is detected in almost all observations, with a count rate in the range of 0.001 - 0.1 cts s$^{-1}$ in 0.3 - 2 keV. The source events are extracted using a source region of radius 30''. The background is estimated in an annulus region centered on the source position, with an inner radius of 60'' and an outer radius of 90''. Due to the low counts, we only extract the 0.3-2 keV band count rate for single observations. They are listed in Table \ref{tab:1} and plotted in Figure \ref{fig:mag}.
We also stack the {\it XRT} observations into five groups, listed in Table \ref{tab:1} and marked as O01\_09, O10\_18, O20\_31, O01\_31, and O32\_40. The stacked 0.3-2 and 2-10 keV band images for each group are presented in Figure \ref{fig:stickingimg}. We find that most of the X-ray photons are detected in the soft band (0.3-2 keV). Interestingly, about two dozen X-ray photons are also detected in the 2-10 keV band during the observations before June 4, 2019 (O01\_31), while half of the hard band X-ray photons are detected during the 2019 April observations (O10\_18). We extract the source and background files from the stacked event files. The stacked count rates in 0.3-2 keV and 2-10 keV are listed in Table \ref{tab:1}, and are plotted in Figure \ref{fig:x-ray} as well.
Interestingly, the soft band X-ray count rate shows a factor of $\sim$ 25 increase from O01\_31 to O32\_40, while the hard band X-ray count rate drops below detectability, as shown in Figure \ref{fig:x-ray}. We will come back to this strong hardness evolution in \S \ref{sec:X-rays}.
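The hardness ratio quoted here is simply the count-rate ratio of the two bands; with Poisson counts its uncertainty follows from standard error propagation. A small sketch (the counts below are illustrative, not the measured values):

```python
import numpy as np

def hardness_ratio(hard_counts, soft_counts):
    """HR = (2-10 keV rate) / (0.3-2 keV rate), with a Poisson error.
    The common exposure time cancels in the ratio of rates."""
    hr = hard_counts / soft_counts
    # relative errors of Poisson counts add in quadrature for a ratio
    rel_err = np.sqrt(1.0 / hard_counts + 1.0 / soft_counts)
    return hr, hr * rel_err

hr, hr_err = hardness_ratio(24.0, 240.0)
```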
\subsection{Radio} \label{sec:radio}
\cite{Perez-Torres19} reported radio detections of the source at 90 - 110 days after the ASAS-SN detection, from which we take the data and plot it in Figure \ref{fig:radio}. For comparison, we also plot the data of another radio bright TDE ASASSN-14li.
The host galaxy of AT2019azh is a non-active galaxy, based on the lack of related spectroscopic features \citep{brimacombe19}. Therefore, its radio emission ($\sim 10^{37}$ erg s$^{-1}$) is unlikely to be due to AGN activity. It is commonly believed that radio emission from TDEs is produced by the interaction of a relativistic jet or non-relativistic outflow with the ambient medium. We discuss this later in \S \ref{sec5}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=7.5cm, angle=0]{radio}
\caption{Radio light curve of AT2019azh. Data is from \cite{Perez-Torres19}. Its optical discovery date is set to be MJD58536. Another TDE ASASSN-14li \citep[data from][discovery date MJD56983]{Alexander16} is plotted for comparison.} \label{fig:radio}
\end{center}
\end{figure}
\section{Temporal Behavior} \label{sec3}
The ASAS-SN and ZTF data make AT2019azh one of the few TDE candidates with a well-sampled rising light curve. It rises in the ASAS-SN $g$, $V$ and ZTF $g$, $r$ bands to the peak within 35 days, then decays gradually in all UV/optical filters. In addition, there is a minor re-brightening in the ASAS-SN $g$ and $V$ bands at $t \approx 90$ d.
Two features of the UV/optical light curves shown in Figure \ref{fig:mag} are notable. First, the data in the various filters indicate very little color evolution over a time span of $\sim$ 100 days, except for the earliest 10 days of the rise. This is supported by the inferred photospheric temperature evolution shown later in Figure \ref{fig:T}. It is also in line with the earlier discovery report by \cite{vanvelzen19} and is a characteristic of most previously found TDEs \citep[e.g.,][]{holoien19}. Second, the post-peak magnitudes decay linearly with time, which means the flux decays exponentially with time rather than as a power law.
The X-ray count rate light curve in Figure \ref{fig:mag} shows a very different behavior. Apparently there are two slow flares during the optically bright phase, each varying by a factor of $\sim$ 10 with a duration of $\sim$ 50 days. Most importantly, the X-rays brighten by a factor of $\sim$ 30 at around 250 days after discovery. This late X-ray brightening is unusual for TDEs; the only similar past case is ASASSN-15oi \citep{gezari17}.
The behaviors in X-rays and in UV/optical are quite different. The early ($t < 120$ d) X-rays show significant variations while the UV/optical bands rise and fall smoothly. Later, the UV/optical flux decays monotonically for the rest of the time, but the X-rays rise to a peak in about 250 days. These facts strongly suggest that the two emission components are probably produced at different locations and by different dissipation processes. This is helpful in discerning the most appropriate physical scenario in \S \ref{sec5}.
\begin{figure}
\begin{center}
\includegraphics[height= 6.cm, width=8cm, angle=0]{sed}
\caption{Blackbody fits to the multi-epoch SEDs composed of UV and optical photometry data. The numbers mark the time in days since the optical detection.} \label{fig:sed}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm, angle=0]{T}
\caption{Temperature evolution of AT2019azh from blackbody fits to the UV/Optical SED.}
\label{fig:T}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm, angle=0]{R}
\caption{Evolution of the photospheric radius derived from the blackbody SED fits to the UV/Optical.}
\label{fig:R}
\end{center}
\end{figure}
\section{Spectral analysis} \label{sec4}
\subsection{UV / optical}
To better quantify the physical parameters of the system, we modeled the UV and optical SEDs of AT2019azh at multiple epochs, as shown in Figure \ref{fig:sed}; the blackbody model provides good fits to the data. The resulting temperature evolution in days from detection is shown in Figure \ref{fig:T}. Additionally, the evolution of the effective radius for the UV/optical and X-ray emission is shown in Figure \ref{fig:R}. The bolometric luminosity is shown in Figure \ref{fig:L}.
The blackbody fits indicate that after a rise between 10 and 15 days after detection, the temperature of AT2019azh holds relatively constant around $T \simeq$ 26,000 K for about 90 days. This temperature is similar to those of other TDEs.
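Each epoch's fit is a two-parameter blackbody fit (temperature and a normalization that carries the radius) to broad-band fluxes. A minimal sketch with synthetic photometry follows; the band wavelengths and flux values are illustrative assumptions, not our measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16  # Planck, c, Boltzmann (cgs)

def bb_fnu(nu, T, norm):
    """Blackbody f_nu; `norm` absorbs the solid angle ~ (R/D)^2."""
    return norm * 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

# Synthetic UV/optical photometry of a 26,000 K blackbody
band_A = np.array([2000.0, 2500.0, 3500.0, 4700.0, 6200.0])  # angstroms
nu = C / (band_A * 1e-8)                                     # Hz
true_T, true_norm = 2.6e4, 1.0e-22
fnu_obs = bb_fnu(nu, true_T, true_norm)

# Recover (T, norm) from the photometry alone
(T_fit, norm_fit), _ = curve_fit(bb_fnu, nu, fnu_obs, p0=(2.0e4, 1.0e-22))
```

With real data the observed fluxes carry errors, which enter `curve_fit` through its `sigma` argument, and the photospheric radius follows from the fitted normalization and the known distance.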
\begin{figure*}
\centerline{
\includegraphics[width=0.4\textwidth]{spec_o01_31}
\includegraphics[width=0.4\textwidth]{spec_o32_40}
}
\caption{Stacked {\it Swift} XRT spectra and best-fit models. The data are shown in black, the best-fit models in blue, the blackbody component as a red dotted line (the second blackbody in O32\_40 as a pink dotted line), and the power-law component as a green dotted line.} \label{fig:xrayspec}
\end{figure*}
\subsection{X-rays} \label{sec:X-rays}
For the Swift X-ray data, we fit the two stacked spectra, grouped into before and after June 4, 2019 (O01\_31 and O32\_40). We group the data to have at least 4 counts in each bin, and adopt the C-statistic for the Swift spectral fittings, which are performed using XSPEC (v.12.9; Arnaud 1996). We try to fit the spectra with five different models: a single power law, a double power law, a single blackbody, a double blackbody, and a single power law plus a single blackbody. For Galactic absorption, we adopt a column density of $N_{H}$ = 4.15 $\times$ $10^{20}$ cm$^{-2}$ \citep{HI4PI Collaboration2016} in the direction of AT2019azh.
The fitting results are listed in Table \ref{tab:2}. We find that the best-fit and accepted model for O01\_31 is a single power law with photon index $\Gamma = 1.9 \pm 0.6$, plus a single blackbody of temperature $kT=56 \pm$ 9 eV. The best-fit and accepted model for O32\_40 is a double blackbody, with $kT_1=51 \pm 4$ eV and $kT_2= 120 \pm 28$ eV. They are shown in Figure\,\ref{fig:xrayspec}.
Based on the best-fit results, we estimate the X-ray fluxes of the two stacked spectra. The 0.3-2 and 2-10 keV band unabsorbed fluxes in O01\_31 are $2.34^{-0.37}_{+0.10} \times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$ and $2.5^{-0.8}_{+1.0} \times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$, respectively. The 0.3-2 keV band unabsorbed flux in O32\_40 is $7.10^{-0.50}_{+0.08} \times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$, while the 0.3-2 keV band flux of the second blackbody component is only $\sim 7\%$ of that of the first one. Using the stacked spectral fitting results, we convert the count rate of each XRT exposure to the 0.3 - 10 keV flux. Replacing the blackbody component in the best-fit models with a multi-temperature accretion disk model (`diskbb'), we also derive the X-ray blackbody radii, which are $\sim 7 \times 10^{10}$, $6\times 10^{11}$ and $1\times 10^{10}$ cm, respectively, for the blackbody component in O01\_31 and the first and second components in O32\_40. These radii are 0.07-4 times $R_g$ ($R_g = GM /c^2$), assuming an SMBH mass of $10^{6} M_{\sun}$.
As shown in Figure \ref{fig:x-ray}, the X-ray hardness ratio drops substantially toward later times, and the spectrum becomes softer when the source is brighter, which is reminiscent of the state transition behavior typically seen in BH X-ray binaries \citep[e.g.,][]{remillard06}. Thus, the X-rays here can be considered a signature of BH accretion: the early X-rays probably correspond to a low, harder accretion state, whereas the late X-ray brightening at $t=$250 d corresponds to a high accretion state in which the hard photons disappear.
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm, angle=0]{L}
\caption{UV/optical luminosity evolution of AT2019azh from the blackbody SED fits (red) and X-ray luminosity evolution from Swift (black).} \label{fig:L}
\end{center}
\end{figure}
\subsection{Comparison between UV/optical and X-rays}
Figure \ref{fig:L} shows the UV/optical and X-ray luminosity evolution of AT2019azh. In the figure we also show $t^{-5/3}$ power-law and $e^{-t/\tau}$ exponential fits to the data. However, it is hard to distinguish between the two fitting functions because of the large errors, although in Figure \ref{fig:mag} the exponential fits the data well.
In addition, we plot the evolution of the X-ray-to-UV/optical luminosity ratio $L_X/L_{\rm opt}$ in Figure \ref{fig:LxL}. It shows that AT2019azh and ASASSN-15oi are generally very similar, in that this ratio rises from 0.001-0.01 during the early optically bright phase to $\sim 1$ later. They differ from ASASSN-14li, which shows an almost constant $L_X/L_{\rm opt} \sim 1$.
\begin{figure}
\begin{center}
\includegraphics[width=8.cm, angle=0]{LxL}
\caption{Evolution of the luminosity ratio of X-rays over UV/optical. The data sources for the other three TDEs are: ASASSN-14li and ASASSN-15oi from \cite{gezari17}, and AT2018fyk from \cite{Wevers19}.}
\label{fig:LxL}
\end{center}
\end{figure}
\section{Physical interpretation} \label{sec5}
In order to compare the evolution of AT2019azh to the relevant timescales for a TDE, we adopt a mass-radius relation $r_*\approx m_{*}^{0.89}$ for low-mass main sequence stars \citep{Torres10}, where $R_*=r_{*}\times R_{\odot}$ and $M_{*} = m_{*}\times M_{\odot}$ are the star's radius and mass. The characteristic timescale for a TDE is set by the orbital period of the most tightly bound debris, known as the fallback time
\begin{equation}\label{eq:tfb}
t_{\rm fb}=41M_{6}^{1/2}m_{*}^{0.34} \; d
\end{equation}
where $M = M_6 \times10^{6}$ $M_{\odot}$ is the mass of the black hole. The rise time of a TDE light curve could be a rough estimate of $t_{\rm fb}$; in the case of AT2019azh, from Figure \ref{fig:mag} we estimate $t_{\rm fb}\approx$ 30 d. After disruption, the less bound debris follows the most bound debris in returning, at a rate that drops with time as \citep{Rees1988,Phinney1989,Lodato09,RamirezRuizRosswog2009,Guillochon13}
\begin{equation} \label{eq:mdot}
\dot{M}_{fb}\simeq\dot{M}_{peak} \left({\frac{t}{t_{\rm fb}}}\right)^{-5/3}.
\end{equation}
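For orientation, Eqs. (\ref{eq:tfb}) and (\ref{eq:mdot}) can be transcribed directly; the fiducial case $M_6 = m_* = 1$ reproduces the 41-day fallback time quoted above (a plain numerical sketch, with the peak rate left in arbitrary units):

```python
def t_fallback_days(M6, m_star):
    # Orbital period of the most tightly bound debris (fallback time)
    return 41.0 * M6**0.5 * m_star**0.34

def mdot_fallback(t_days, M6, m_star, mdot_peak=1.0):
    # t^(-5/3) decline of the fallback rate after the peak
    return mdot_peak * (t_days / t_fallback_days(M6, m_star)) ** (-5.0 / 3.0)

t_fb = t_fallback_days(1.0, 1.0)  # 41 d for the fiducial case
```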
In the following we discuss three possible scenarios for interpreting the UV/optical and X-ray behavior of AT2019azh.
\subsection{Super-Eddington accretion for the UV/optical peak} \label{sec:super}
The optically bright stage could be a super-Eddington accretion phase which peaks in the UV/optical bands, during which various energy dissipation processes produce winds or outflows \citep{StrubbeQuataert2009,LodatoRossi2011,metzger16} that regulate the luminosity \citep{KrolikPiran2012} and block the X-rays or reprocess them into the UV/optical \citep[e.g.,][]{Dai2018}. After the accretion rate drops below the Eddington rate, the bolometric luminosity falls and X-rays from the inner accretion disk start to be seen \citep{Chen and Shen18}. This transition time can be estimated from Eq. (\ref{eq:mdot}) as
\begin{equation}\label{eq:tedd}
t_{\rm Edd} \simeq 2.2 \,\eta^{3/5}_{0.1}M^{2/5}_{6}r^{3/5}_{*}m^{1/5}_{*}~\mbox{yr},
\end{equation}
where $\eta= 0.1 \times \eta_{0.1}$ is the efficiency of converting accretion power to luminosity. AT2019azh is visible in X-rays during the early, optical bright phase. This suggests that, in \cite{Dai2018}'s model, the line of sight is somewhat close to the pole direction.
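Evaluated for fiducial parameters, Eq. (\ref{eq:tedd}) gives a transition time of order two years (a plain transcription of the formula, with all parameters scaled to unity):

```python
def t_edd_years(eta01=1.0, M6=1.0, r_star=1.0, m_star=1.0):
    # Time for the t^(-5/3) fallback rate to drop to the Eddington rate
    return 2.2 * eta01**0.6 * M6**0.4 * r_star**0.6 * m_star**0.2

t_edd_days = 365.25 * t_edd_years()  # ~800 d for the fiducial case
```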
However, there are two pieces of evidence against this early super-Eddington accretion scenario. First, the X-ray and UV/optical light curves of AT2019azh behave very differently during the early phase ($t <$ 100 d), as shown in Figure \ref{fig:mag} and mentioned in \S \ref{sec3}. They show no sign of the temporal correlation between the two bands that one should expect to see in this scenario, since there the two bands are produced by the same accretion process\footnote{We notice that the early UV/optical and X-rays in AT2018fyk do show a weak temporal correlation \citep{Wevers19}. Therefore, AT2018fyk may belong to the early super-Eddington accretion scenario.}.
Second, as shown in Figure \ref{fig:x-ray} and mentioned in \S \ref{sec:X-rays}, hard X-ray photons appeared at early times but disappeared later. The resemblance of this particular behavior to the state-transition pattern of BH X-ray binaries suggests that those X-rays, both the early and the late ones, are signatures of accretion. The early X-rays probably correspond to a low accretion state, which is harder, whereas the late X-ray brightening at $t \simeq$ 250 d corresponds to a high accretion rate, so the hard photons disappear. This is in sharp contradiction with the early UV/optical peak being a super-Eddington accretion phase.
\subsection{Stream-stream collision followed by delayed accretion} \label{sec:acc}
Next we consider a two-process scenario, in which the late X-ray brightening comes from the delayed accretion through a recently formed accretion disk, and the early UV/optical emission is from the stream-stream collisions \citep[e.g.,][]{Piran2015,Jiang16} before the major body of the accretion disk is formed.
\cite{Bonnerot16} estimated the circularization timescale, which is driven by relativistic apsidal precession of the debris streams, to be
\begin{equation}
t_{\rm cir}=8.3~ t_{\rm fb} M^{-5/3}_{6} \beta^{-3},
\end{equation}
where $\beta = R_{T}/R_{P}$ and $R_{T} = R_{*}(M/M_{*})^{1/3}$ \citep{Rees1988,Phinney1989}. Once a disk is formed with radius $R$, the viscous inflow timescale for a standard $\alpha$-disk model \citep{ShakuraSunyaev73} is
$t_{\rm vis}=[\alpha \Omega_K(R)]^{-1} (h/r)^{-2}$, where $\alpha$ is the standard viscosity parameter, $\Omega_K(R)$ is the Keplerian angular speed, and $h/r$ is the disk's scale-height to radius ratio. Since the disk forms at a radius of about $2R_p$, the ratio of $t_{\rm vis}$ to $t_{\rm fb}$ is
\begin{equation}
\frac{t_{\rm vis}}{t_{\rm fb}}= 0.1 \beta^{-3/2} \left(\frac{0.01}{\alpha}\right) \left(\frac{m_*}{M_6}\right)^{1/2} \left(\frac{h}{r}\right)^{-2}.
\end{equation}
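As a sanity check of these scalings (a sketch with illustrative default values, not taken from the original analysis), the ratio can be coded directly:

```python
def tvis_over_tfb(beta=1.0, alpha=0.01, m_star=1.0, m6=1.0, h_over_r=1.0):
    """Viscous-to-fallback timescale ratio for an alpha-disk formed at ~2 R_p:
    t_vis/t_fb = 0.1 * beta^(-3/2) * (0.01/alpha) * (m_*/M_6)^(1/2) * (h/r)^(-2)."""
    return 0.1 * beta**-1.5 * (0.01 / alpha) * (m_star / m6)**0.5 * h_over_r**-2

print(tvis_over_tfb())              # 0.1: a thick disk (h/r ~ 1) drains quickly
print(tvis_over_tfb(h_over_r=0.1))  # ~10: a geometrically thin disk is much slower
```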
In the following we attempt to model how the energy release rate from the stream-stream collisions evolves. During the fallback process, the orbit of the most bound debris has a semi-major axis $a_{\rm mb} \simeq R^2_T/(2R_*)$, an orbital period of $t_{\rm fb}$ and a specific energy $E_0= -GM/(2 a_{\rm mb})$ upon its first pericenter passage. During subsequent passages its orbit shrinks (circularizes) due to self-crossing and collisions, with its semi-major axis $a(t)$ ever decreasing. We suppose the specific energy dissipation rate is
\begin{equation}
-\dot{q}=\frac{dE}{dt}=\delta(a)\frac{E(a)}{t(a)}
\end{equation}
where the efficiency factor $\delta\ll1$ represents the fraction of orbital energy dissipated per orbit, and the orbital period is $t(a)=t_{0} (a/a_0)^{3/2}$. We assume $\delta(a)=\delta_0 (a/a_0)^{\gamma}$ with $\gamma > 0$, i.e., the dissipation efficiency drops progressively as the stream is being circularized. After some algebra, the equation becomes
\begin{equation}
\frac{d}{dt}\left(\frac{a}{a_0}\right)=-\frac{\delta_0}{t_0}\left(\frac{a}{a_0}\right)^{\gamma-1/2}.
\end{equation}
The solution is
\begin{equation}
a(t)=a_0\left[1+\left(\gamma-\frac{3}{2}\right)\frac{\delta_0}{t_0}(t-t_0)\right]^{-\frac{1}{\gamma-3/2}}.
\end{equation}
Therefore, we get the specific energy $E(t)=-\frac{GM}{2a}$, and the dissipation rate
\begin{equation}
\dot{q}(t)=\frac{GM}{2a_0}\frac{\delta_0}{t_0}\left[1+\left(\gamma-\frac{3}{2}\right)\frac{\delta_0}{t_0}(t-t_0)\right]^{\frac{1}{\gamma-3/2}-1}.
\end{equation}
If $\gamma > 5/2$, then $\dot{q}(t)$ decreases with time as $t^{-\alpha}$ at late times, with $\alpha=(\gamma-5/2)/(\gamma-3/2)$. We assume the luminosity is related to the energy release rate by $L(t)=\lambda \dot{q}(t)c^2$,
where $\lambda$ is the energy conversion efficiency and $c$ is the speed of light. We take $\gamma =10$ and plot $L(t)$ in Figure \ref{fig:L}, which roughly shows the feasibility of matching the radiation from stream-stream collisions to the UV/optical data.
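A minimal numerical sketch of this toy model (the normalization and the value of $\delta_0$ below are illustrative assumptions) confirms the late-time power-law decline:

```python
import math

def qdot(t, t0=1.0, delta0=0.05, gamma=10.0, gm_over_2a0=1.0):
    """Specific dissipation rate from progressive circularization:
    a/a0 = [1 + (gamma - 3/2)(delta0/t0)(t - t0)]^(-1/(gamma - 3/2)),
    qdot = (GM/2a0)(delta0/t0)(a/a0)^(gamma - 5/2)."""
    x = (1.0 + (gamma - 1.5) * (delta0 / t0) * (t - t0)) ** (-1.0 / (gamma - 1.5))
    return gm_over_2a0 * (delta0 / t0) * x ** (gamma - 2.5)

# The late-time logarithmic slope approaches -alpha = -(gamma - 5/2)/(gamma - 3/2)
t1, t2 = 1e4, 1e5
slope = math.log(qdot(t2) / qdot(t1)) / math.log(t2 / t1)
print(slope)  # ~ -0.88 for gamma = 10
```

For $\gamma=10$ this gives $\alpha=(10-5/2)/(10-3/2)\simeq0.88$.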
What produced the early ($t <$ 100 d) low-level X-ray activity? During the stream-stream collisions, the hydrodynamic simulations of \cite{Shiokawa15} show that a minor fraction of the debris loses a significant fraction of its angular momentum, such that its pericenter radius shrinks significantly and it can form an early, low-mass accretion disk. If so, one naturally expects the early accretion rate to be low (i.e., sub-Eddington). The hard X-ray photons appearing during this phase might suggest that a hot corona forms during this `low hard' state, similar to what happens in BH X-ray binaries.
The above scenario is naturally consistent with the absence of temporal correlation between early X-rays and UV/opticals, since they are produced by different dissipation processes and at very different locations.
Once the stream-stream collision process was over, the major body of the disk should have formed, around $t\sim$ 250 d. The accretion rate has risen to its peak, and so has the soft X-ray flux, while the hard X-ray photons disappear, probably together with the hot corona.
\subsection{CIO's reprocessing of X-rays}
\cite{Lu and Bonnerot} argue that during the stream-stream collisions, a considerable fraction of the shocked gas will become unbound and be ejected as the so-called collision-induced outflow (CIO), which could reprocess the early X-rays (presumably coming from an inner disk formed in the same way as described in \S \ref{sec:acc}) into the optical bands.
This scenario is disfavored for the UV/optical peak in AT2019azh for the following reasons. If the CIO is massive enough to cover the whole sphere around the source, then the early X-rays should not be visible, which is clearly not the case. If the CIO coverage is partial and it does not fully block the line of sight, then one should expect: 1) the observed X-ray flux to exceed, or at least be comparable to, the UV/optical flux, not the opposite, because only a portion of $L_X$ gets reprocessed into $L_{\rm opt}$; 2) the UV/optical to show a temporal correlation with the early X-rays, which is in contradiction with what the early light curves show in Figures \ref{fig:mag} and \ref{fig:L}.
Note that AT2019azh is detected in radio at $t \sim$ 100 d (see \S \ref{sec:radio} and Figure \ref{fig:radio}). This suggests that the CIO may actually exist and that its interaction with the ambient medium produced the radio emission \citep{Lu and Bonnerot}. However, the CIO's reprocessing of X-rays cannot be the origin of the UV/optical peak.
\section{Summary and Conclusion} \label{sec:con}
We present and analyze a large data set of light curves of the TDE candidate AT2019azh in the optical/UV and X-ray bands from ASAS-SN, ZTF, {\it Swift} and Gaia. We highlight a rare case in which the late X-rays brightened by a factor of $\sim$ 30-100 around 250 days after discovery, while the UV/optical emission continuously decayed. The early X-rays show two flaring episodes, temporally uncorrelated with the early UV/optical emission. In addition, we present the evolution of the temperature and photospheric radius from SED fitting. We find a clear evolution of the X-ray hardness ratio, which drops substantially toward later times: the spectrum becomes softer when the source is brighter.
The drastically different temporal behaviors in the X-rays and UV/optical suggest that the two bands are physically distinct emission components, probably arising from different locations. The hard X-ray (2--10 keV) photons found during $t <$ 100 d suggest that the early X-rays must be of accretion origin as well.
Putting all pieces together, we conclude that the full data are best explained by a two-process scenario, in which the UV/Optical peak is produced by the stream-stream collisions during the circularization phase; the same process causes some low angular momentum, shocked gas to form an early, low-mass accretion disk which emits the early X-rays. The major body of the disk is formed after the circularization finishes, at $t\sim$ 250 d. Its enhanced rate of accretion toward the black hole produces the late X-ray brightening.
AT2019azh is the second case, after ASASSN-15oi \citep{gezari17}, of a TDE that shows a clear sign of delayed accretion. However, the early detection and full multi-waveband coverage make AT2019azh the first strong case in which the emission signature of stream-stream collisions is identified and the early steps of disk formation can be inferred. At the time of writing, AT2019azh is still detectable in X-rays, so a deeper and broader understanding of this event is within reach.
\section{Introduction}
Algorithms that detect anomalies have to learn normal behaviour to be able to identify anomalous behaviour. Sometimes we do know what types of anomalies we need to search for, and then use supervised Machine Learning (ML) methods to find them. As anomalies are, by definition, rarer than normal events, these supervised techniques need to be adapted to unbalanced datasets and made robust against fluctuations in the dominant {\it normal} or {\it in-distribution} dataset.
Oftentimes we do not know the whole set of possible anomalies we could encounter in data, or we cannot obtain a dataset with enough examples of anomalies. Supervised methods may perform well with known anomalies, but when applied to new ones would typically not identify them. To design procedures to detect unknown anomalies we then resort to unsupervised learning, trying to identify anomalies in a dataset as a function of some form of {\it distance} within the dataset. This procedure is quite heuristic, often starting with a visualization of the data and some form of dimensional reduction, followed with some intuitive understanding of the problem. This is a hit-and-miss method and in general the unsupervised strategies are substantially less powerful than a possible supervised method of detecting anomalies.
Here we present a different strategy, somewhat mid-way between supervised and unsupervised. We use the framework of a classification task (supervised learning) on a dataset with normal events, to introduce a concept of {\it awareness} of possible anomalies. We then use the output of the classification task to define a region where anomalies would concentrate.
We will show that this algorithm is effective at identifying generic anomalies, even those the algorithm has not been previously made aware of. In other words, this anomaly awareness procedure is robust, i.e. independent of the origin of the anomaly. In this sense our Anomaly Awareness algorithm is a hybrid method of learning, neither fully supervised nor unsupervised. Note that there is a good body of literature in Computer Science proposing deep learning semi-supervised methods to detect anomalies in images including medical applications, based mostly on AutoEncoders and GANs, see e.g. Refs.~\cite{schlegl2017unsupervised,akcay2018ganomaly,lu2018anomaly,singh2019development,minhas2019anomaly}. Anomaly Awareness is a new type of semi-supervised procedure, applicable to input images but also to other types of information.
We will exemplify the use of this method in a non-trivial task in our field domain, Particle Physics. In the context of Large Hadron Collider (LHC) searches for new phenomena, we show how Anomaly Awareness can help make these searches more robust, less dependent on the specific scenarios one has in mind. This {\it model independence} of LHC searches is particularly important now that the traditional ways of thinking in Particle Physics are challenged by the absence of expected discoveries at the LHC. A handful of recent Particle Physics studies~\cite{anomalydetectionCwola, anomalyreso, anomalydetectionWulzer, anomalydetectionSimone, QCDorwhat, DeepAEShih, anomalyVAE, Dillon:2019cqt, Roy:2019jae, Blance:2019ibf, DAgnolo:2019vbw, Andreassen:2020nkr, Nachman:2020lpy, Amram:2020ykb, Romao:2020ocr, Cheng:2020dal} proposing ML algorithms to perform model-independent searches~\footnote{Besides model-independent searches (in an unsupervised fashion), recent studies have also proposed ML methods for specific tasks to extract maximum information from Particle Physics experimental data, see e.g. \cite{Radovic:2018dip}.} show impressive reach for the toy examples considered.
This paper is organised as follows. In Sec.~\ref{sec:Algo} we describe the general Anomaly Awareness algorithm as well as the Convolutional Neural Network (CNN) architecture we employ in the subsequent analysis. We then show how we use it in Particle Physics in Sec.~\ref{sec:LHC} and conclude in Sec.~\ref{sec:conclusions}.
\section{Algorithm description}~\label{sec:Algo}
We now explain in detail the Anomaly Awareness algorithm, see the Algorithm description~\ref{alrc_algorithm}. The starting point of the algorithm is a classification task and in its simplest form, a binary classification task. In the application to LHC searches of Sec.~\ref{sec:LHC} the input to this analysis will be in the form of 2D images of jet spatial structure, hence the neural network architecture is made of a few convolutional layers and a final classification layer, see Fig.~\ref{fig:CNN} for the specific choice we made.
In this initial run ({\it prior run}), the algorithm learns to classify only {\it normal} classes, and is not yet aware of the presence of anomalies. The end result of this run would be a trained algorithm with some choice of optimal hyper-parameters, which will be used to initialize the next run, the {\it anomaly awareness run}.
In the second run the algorithm will now see some anomalies. The new loss function contains the same term as in the prior run, e.g. cross-entropy for a binary classification task, but has a new term (Anomaly Awareness) which distributes the anomalous samples uniformly across the classes, e.g. assigning 50\% probability of belonging to each class in a binary task.
The Anomaly Awareness term is modulated by a parameter $\lambda_{AA}$, which sets the ratio of the number of anomalous examples with respect to the normal samples in the loss function. So far this algorithm is similar to Outlier Exposure~\cite{hendrycks2018deep}, but in our case the AA term will contain an array of different anomalies, which, as we will show later, is crucial to allow the algorithm to detect unknown anomalies. Another component of Anomaly Awareness, not present in Outlier Exposure, is the use of the classifier output $p$ to obtain an optimal window [$p_{An}^{min}$, $p_{An}^{max}$] to detect anomalies over a large background of normal events.
\begin{algorithm}[h!]
\caption{Anomaly Awareness (AA). \\Important parameters are $\lambda_{AA}$, $p_{An}^{min}$, $p_{An}^{max}$.}
\textbf{Prior Run}
\begin{algorithmic}
\STATE Initialize test:train splitting of {\it Normal} ($N$) dataset
\STATE Initialize hyper parameters
\STATE Initialize Model (CNN architecture)
\FOR{Training over the epochs}
\STATE Cross entropy loss
\STATE Update model parameters.
\ENDFOR
\STATE Get accuracy for $D_{test}$ and $D_{train}$ \\
This run sets the hyper-parameters for the AA run
\end{algorithmic}
\textbf{Anomaly Detection Run}
\begin{algorithmic}
\STATE Load the {\it Anomaly} ($An$) dataset
\STATE Initialize amount of data w.r.t. the {\it Normal} dataset
\STATE Initialize $\lambda_{AA}$
\FOR{Training over the epochs}
\STATE $l_1$ = Cross entropy loss ({\it Normal} dataset)
\STATE $l_2$ = Cross entropy loss ({\it Anomaly} dataset with Uniform Distribution)
\STATE Loss = $l_1+ \lambda_{AA} l_2$
\ENDFOR
\STATE Get softmax probabilities for all the datasets, \\
$p_i$, $i= N$, $An$
\STATE Select datapoints in a range [$p_{An}^{min}$, $p_{An}^{max}$], \\
range optimized to select anomaly over normal events
\end{algorithmic}
\label{alrc_algorithm}
\end{algorithm}
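For concreteness, the two loss terms of the Anomaly Detection Run can be sketched in plain NumPy (a minimal illustration: the CNN itself is omitted, and the logit arrays below are toy stand-ins for its output):

```python
import numpy as np

def log_softmax(logits):
    # Numerically stable log-softmax over the class axis
    z = logits - logits.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def aa_loss(normal_logits, normal_labels, anomaly_logits, lam_aa=0.5):
    """l1: cross-entropy on the Normal dataset (usual classification loss).
    l2: cross-entropy of anomaly predictions against a uniform target,
    i.e. each anomalous event is assigned probability 1/K for each class.
    Total loss = l1 + lambda_AA * l2."""
    logp_n = log_softmax(normal_logits)
    l1 = -logp_n[np.arange(len(normal_labels)), normal_labels].mean()
    logp_a = log_softmax(anomaly_logits)
    k = anomaly_logits.shape[1]
    l2 = -(logp_a / k).sum(axis=1).mean()
    return l1 + lam_aa * l2

# Toy batch: two confidently classified normal events, two anomalies.
normal = np.array([[5.0, -5.0], [-5.0, 5.0]])
labels = np.array([0, 1])
anomaly = np.array([[0.0, 0.0], [1.0, -1.0]])
print(aa_loss(normal, labels, anomaly, lam_aa=0.5))
```

The uniform-target term is minimized when the network outputs equal probabilities for anomalous events, which is what pushes them towards the centre of the classifier output.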
\begin{figure}[h!]
\centering
\includegraphics[width=0.97\columnwidth]{figures/2cCNNmodel.pdf}
\caption{CNN Architecture used in this study. Input images are passed through a set of convolutional layers and end in a linear layer providing predictions for the classification problem. }
\label{fig:CNN}
\end{figure}
\section{A non-trivial example of Anomaly Awareness: Boosted hadronic phenomena}~\label{sec:LHC}
We now describe how this algorithm could be used in the area of Particle Physics, in particular in searches for new phenomena at the Large Hadron Collider (LHC). This description and the results we provide exemplify a {\it use case} in our field domain, but are not to be taken as the optimal procedure to follow in a realistic analysis. The LHC's environment is very complex, and modelling the behaviour of collisions requires a sophisticated machinery that we are only approximating here.
Let us first motivate the problem. The aim of Particle Physics is to understand the Laws of Nature at the most fundamental level. These laws do take very simple forms when described in terms of the right mathematical objects, but in terms of empirical probes they take tremendously complicated manifestations.
A perfect example of this complexity is the LHC, one of the best probes of Nature we currently have, where massive amounts of data are collected and analysed to test the so-called Standard Model (SM) of Particle Physics.
At the LHC, collision data is transformed into many measurements of SM observables, providing precise tests (per-cent and even per-mille precision) of the validity of the SM as a paradigm to explain Nature. And so far the SM has passed all these tests with flying colours.
Yet we know the SM paradigm, albeit very successful, is not complete. The SM does not explain the Universe as we see it, where 95\% is made of dark stuff (Dark Energy and Dark Matter) that the SM does not account for, and for the remaining 5\% we do not understand how antimatter got out of the way, or how neutrinos became massive.
Thus to answer the question {\it 'How does the Universe work?'} many experiments are looking for ways to find new phenomena, beyond the SM framework. At the LHC these searches take many forms, and here we are going to focus on certain types events where high energy jets are produced and new phenomena (beyond SM) could be found.
The SM interactions do produce these jets, for example in the form of quarks and gluons which then hadronise in the detector. We will denote these {\it normal} events as two classes: {\it Top} and {\it QCD}, and later add a third SM class, {\it W-jet}.
Searches are focused on finding some anomaly in the behaviour of these jets which indicates the presence of a new set of laws at play. Here we simulate anomalies produced by new particles, which we will denote as {\it resonances} leading to jets with 2-, 3- or 4-prongs ($R_{2,3,4}$), or new effective interactions which we denote as {\it EFT}~\footnote{EFT interactions correspond to Higgs production in association with a Z-boson as described in Ref.~\cite{Freitas:2019hbk} and switching on a coefficient $c_{HW}$ as defined in Ref.~\cite{Degrande:2016dqg} within the limits obtained in Ref.~\cite{Ellis:2018gqa}. The $R_{2,3,4}$ examples were generated with a RS model decaying into boosted $ZZ$, boosted $t\bar t$ and $hh$ with $h\to$ 4 jet, respectively.}.
\subsection{The input information}
To study anomalies in these events we will represent them as follows: all the information on directionality, timing and energy deposition of the event is reduced to gathering the largest amount of energy collected around a cluster ({\it leading fat jet}) in the hadronic calorimeter of the detector. We then represent the angular distribution ($\eta$, $\phi$) of the energy depositions inside the fat jet with a color coding that encodes relative amounts of energy (in GeV). The typical distributions for these events are shown in Figures~\ref{average} and ~\ref{average2}, where we see the differences among the sources of SM fat jets and possible new phenomena. These images are an average of all the events we have simulated ($\sim$ 50K events per scenario) and one should note there is substantial variability among events from the same source~\footnote{These images have been produced by running Monte Carlo simulations of 13 TeV LHC collisions at parton-level with aMC@NLO~\cite{Alwall:2011uj,Alwall:2014hca} and then showering and hadronizing with Pythia~\cite{Sjostrand:2006za,Sjostrand:2007gs}. We then used Pythia 8 SlowJet program for clustering. The main cuts applied to the events were finding a leading anti-$k_T$ jet of $R=1$, $p_T > 750$ GeV and $m_J \in$ [50,300] GeV.}.
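The preprocessing just described can be sketched as follows (a simplified illustration, not the actual analysis code; the pixel count and window size are assumptions):

```python
import numpy as np

def jet_image(eta, phi, energy, n_pix=32, half_width=1.0):
    """Pixelate the calorimeter deposits of a fat jet on an (eta, phi) grid,
    weighting each pixel by the deposited energy (GeV). Coordinates are
    assumed to be pre-centred on the jet axis."""
    img, _, _ = np.histogram2d(
        eta, phi,
        bins=n_pix,
        range=[[-half_width, half_width], [-half_width, half_width]],
        weights=energy,
    )
    return img

# Three toy deposits; the last one falls outside the image window.
eta = np.array([0.0, 0.5, 2.0])
phi = np.array([0.0, -0.5, 0.0])
e = np.array([100.0, 50.0, 10.0])
img = jet_image(eta, phi, e)
print(img.shape, img.sum())  # (32, 32) 150.0
```

Averaging such images over many events of a given class produces the distributions shown in the figures.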
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\columnwidth]{figures/top_av.png}
\includegraphics[width=0.49\columnwidth]{figures/qcd_av.png}
\includegraphics[width=0.49\columnwidth]{figures/wjet_av.png}
\caption{Average jet images for SM processes. From top to bottom and left to right: Top, QCD and W-jet.}
\label{average}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\columnwidth]{figures/efthiggs_av.png}
\includegraphics[width=0.49\columnwidth]{figures/rszz_av.png}
\includegraphics[width=0.49\columnwidth]{figures/rstop_av.png}
\includegraphics[width=0.49\columnwidth]{figures/rshh_av.png}
\caption{ Average jet images for New Physics processes. From top to bottom and left to right: EFT, resonance into boosted $ZZ$, resonance into boosted tops, and resonance into boosted pairs of Higgs bosons. }
\label{average2}
\end{figure}
\subsection{The prior run}
Using these images as input, we run an initial classification task on two classes in the {\it normal} distribution (Top {\it vs} QCD) using the convolutional architecture in Figure~\ref{fig:CNN} with batch size 100, 100 epochs and ReLU as activation function in all the layers~\footnote{Note that CNNs have been used for the Top {\it vs} QCD jet classification problem in Refs. \cite{Macaluso:2018tck,Kasieczka:2017nvn}.}.
Once trained over examples of Top and QCD events, the algorithm gives an event-by-event prediction of the probability of belonging to the Top or QCD class. If we plot the predictions for true Top and QCD events, e.g. in terms of the Top probability, a good algorithm should place Top events near 1 and QCD events near 0, leading to two sharp peaks in the probability distribution function (PDF).
But what about any other types of events? We can also run the algorithm over anomalous examples and see where these are distributed along the Top probability axis. This is shown in Figure~\ref{fig:baseline2D}, where we observe that other scenarios would typically be misclassified as Top or QCD. In other words, the classification task specializes in the characteristics of Top and QCD events, and any new type of scenario, which could exhibit other event characteristics, is mostly placed into one of the two classes. These {\it anomalies} are mis-identified as the {\it normal} classes, QCD or Top.
\begin{figure}[h!]
\centering
\includegraphics[width=0.97\columnwidth]{figures/2CBp2.png}
\caption{ Output of the binary classifier (Top {\it vs} QCD) on events from different sources (Top, QCD, W-jet, EFT and Resonances).}
\label{fig:baseline2D}
\end{figure}
\subsection{Adding Anomaly Awareness}
Now let us introduce Anomaly Awareness and, for the moment, introduce awareness of just a single new type of events, W-jets. In terms of the classification results, the effect of adding an AA term is not substantial (see Fig.~\ref{fig:classROC}), and this result does not change when adding more AA terms.
\begin{figure}[h!]
\centering
\includegraphics[width=0.97\columnwidth]{figures/RocCompareWonly.png}
\caption{ ROC classification curve for the prior run, and for a run including AA of W-jet. Similar curves are obtained when adding more AA types. }
\label{fig:classROC}
\end{figure}
However the effect on the anomalies, all of them, is substantial. As the algorithm becomes aware of possible anomalies, even when exposed to only one type, it does also become better at separating QCD/Top from other types. This is shown in Fig.~\ref{fig:WjetAA}, where now the probability distribution of anomalies gathers towards the centre of the distribution, i.e. they are classified neither as Top nor as QCD.
\begin{figure}[h!]
\centering
\includegraphics[width=0.97\columnwidth]{figures/2CAAWp2.png}
\caption{Output of the Anomaly Awareness binary classifier (Top {\it vs} QCD) on events from different sources (Top, QCD, W-jet, EFT and Resonances). An Anomaly Awareness term has been included with only W-jets. }
\label{fig:WjetAA}
\end{figure}
One could think of using this behaviour to assign an {\it anomaly probability} to events, such as
\begin{equation}\label{eq:naive}
P_{An} \to 1- P_{Top}-P_{QCD} \ ,
\end{equation}
but as we will see below, this definition would be too na\"ive for Particle Physics purposes. In reality, a sample of LHC events would contain a variable number of QCD and Top events, depending on where on the ROC curve we set our analysis. Moving towards the right in Figure~\ref{fig:WjetAA} corresponds to different choices of working points for the acceptance of Top and QCD events, i.e. an {\it efficiency} to collect or reject these events. An equation like Eq.~\ref{eq:naive} does not take into account the overall number of QCD and Top events left after setting the threshold. We discuss this issue in the last part of this Section.
As we introduce awareness of more types of anomalies, this behaviour continues to hold and improves, up to a point. This can be seen in Figure~\ref{fig:2to5}. In the top panel we observe the effect of adding a second example to the awareness term: on top of W-jet, awareness of $R_4$, a resonance leading to a high-energy jet with 4 prongs. The improvement over Figure~\ref{fig:WjetAA} is clear, signalling that the awareness procedure improves with a greater variety of examples. We checked that the improvement is roughly independent of the choice of anomaly examples, which indicates the procedure is robust.
Nevertheless, this improvement does not imply that awareness can be arbitrarily enhanced by just adding more examples. Indeed, we find that the power of the procedure {\it saturates}. This can be seen in the bottom panels of Fig.~\ref{fig:2to5}, where going from awareness of four different anomalies to five does not change the overall picture.
This saturation is to be expected: the amount of information in the images we created is limited intrinsically and by design, as we are selecting just the leading jet in the event and plotting only angular distributions of energy depositions. Some additional information could be added to the analysis, as even in that leading jet one could add more information, like probability of the presence of a b-jet. And beyond the leading jet, important correlations with the other parts of the event could be added in this analysis. Hence we would expect a more detailed analysis to lead to better performance, although this is not the main focus of this work, which is presenting the idea of Anomaly Awareness and how it would qualitatively work in an LHC set-up.
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\columnwidth]{figures/2CAAWrshhp2.png}\\
\includegraphics[width=0.49\columnwidth]{figures/2CAAWH1rshhRStopp2.png}
\includegraphics[width=0.49\columnwidth]{figures/2CAAall5.png}
\caption{ Output of the binary classification (Top {\it vs} QCD) on events from different sources (Top, QCD, W-jet, EFT and Resonances). Different Anomaly Awareness terms have been included: W-jet and $R_4$, a resonance leading to a high energy jet with 4 prongs ({\it top figure}), plus EFT and $R_3$, a resonance leading to a jet with 3 prongs ({\it bottom-left figure}) and finally adding to the former $R_2$, a resonance leading to a jet with 2 prongs ({\it bottom-right figure}). }
\label{fig:2to5}
\end{figure}
Let us finish by discussing the effect of the modulation term $\lambda_{AA}$. This term sets the ratio of the number of normal examples shown to the algorithm in the cross-entropy function versus the number of anomalous examples subject to a uniform distribution. We can think of two limiting cases. On one hand, a very small value of $\lambda_{AA}$ would lead to the same result as the prior run, and would not bring the anomalies to the centre of the classification output. On the other hand, a large value of this parameter would degrade the prior classification task, broadening the PDFs for Top and QCD, the backgrounds we are fighting against. Somewhere in between, with a moderate amount of awareness, lies the optimal performance. In this work we have not optimized this value: we simply set a near-optimal value ($\lambda_{AA}$=0.5) and found the expected behaviour for $\lambda_{AA}=0.3$ and 0.8, but in a detailed analysis this parameter should be studied further.
\subsection{Generalization to more than two categories}
So far we have shown results based on a binary classification problem (Top {\it vs} QCD), but Anomaly Awareness could be generalized to classification problems with more than two classes. The only difference in the algorithm~\ref{alrc_algorithm} would be in the AA term, where the Uniform Distribution would be along all the classes. To illustrate this procedure, we repeat the analysis, now with three SM classes: Top, QCD and W-jet.
After training with a {\it normal} dataset with equal amounts of Top, QCD and W-jet, the algorithm can provide for each new event a probability of belonging to each class. In Figure~\ref{fig:base3C} we represent the PDF of events within these three categories (P(Top Jet), P(QCD), P(W-jet)). True top events (in red) are mostly gathered around values of one for P(Top Jet) and zero for P(W-jet) and P(QCD). Similarly true W-jet events (green) gather around values of one for P(W-jet) and zero for the others. This plot is 2D, but if we had plotted P(QCD), we would observe a similar behaviour: most true QCD events would be correctly classified.
\begin{figure}[h!]
\centering
\includegraphics[width=0.97\columnwidth]{figures/3CBp2p3.png}
\caption{ Probability distributions for the {\it normal} classes in the three-class example: Top (red), QCD (blue) and W (green). The axes are the probability of an event to belong to class Top Jet (x-axis) or W Jet (y-axis). }
\label{fig:base3C}
\end{figure}
As in the two-class case discussed before, the prior classification algorithm, when faced by new types of events, would likely misclassify them as one of the known categories. For example, EFT anomalies would be mainly misclassified as W-jets. This is shown in Fig.~\ref{fig:AA3C}, where the black distribution represents the PDF of EFT events.
If we then run the model with Anomaly Awareness of all the anomalies discussed before (except EFT), the EFT events move towards the center of the PDF plane, see the pink blob in Fig.~\ref{fig:AA3C}. In other words, despite not being aware of EFT-type anomalies, exposure to other anomalies does help separate EFT fat jets from SM sources. We checked that adding EFT to the AA term on top of the other cases does not change this picture qualitatively, again indicating a {\it saturation} of the amount of information in these events, which seems to be already covered by the diversity of $R_{2,3,4}$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.97\columnwidth]{figures/3CAABEFT23all3.png}
\caption{Probability distribution of EFT events after the prior run (black), and the effect of Anomaly Awareness on the distribution of EFT events (pink), when the algorithm is made aware of all the anomaly classes except EFT. Axes are the same as in Fig.~\ref{fig:base3C}. }
\label{fig:AA3C}
\end{figure}
\subsection{Anomaly detection}
So far we have discussed the effect of AA in the classification task. On one hand, it maintains the target of the prior run which is to identify correctly {\it normal} classes. This can be seen from the comparison of ROC curves in Fig.~\ref{fig:classROC}, where the overall effect of adding AA terms is negligible. But the effect on the anomalous events is substantial, bringing the distribution of predictions for anomalous events farther from the region of the {\it normal} classes, which gather around 0 and 1, see Figs.~\ref{fig:WjetAA} and ~\ref{fig:2to5}.
Now we want to discuss how this separation could be used in practice in an LHC search for anomalies. Note, though, that the following quantitative discussion is intended for illustration and not to be taken as a full-blown analysis of anomalies in high-$p_T$ fat jets at the LHC. As mentioned before, the LHC environment is complex, and modelling the behaviour of hadronic final states requires a sophisticated machinery that we are just approximating with simple theoretical simulation tools. Moreover, we have only considered information on the leading jet, missing then important correlations with other hadronic activity or correlated channels.
With all these caveats in mind, we describe a procedure one could follow to use AA in order to increase detection of anomalies.
To claim an anomaly detection we need a statistical criterion determining how many anomalous events ($N_{An}$) over SM events ($N_{SM}$) are required. A typical criterion is:
\begin{equation}
\textrm{criterion = }N_{An}/\sqrt{N_{SM}} ,
\end{equation}
where one can choose a benchmark value, $N_{An}/\sqrt{N_{SM}} =$ 5, to claim that the significance of the anomalies is above fluctuations in the SM background. The numbers of events $N_{An, \, SM}$ depend on how often these types of events are produced in LHC collisions, i.e.\ on the cross sections $\sigma_{An, \, SM}$. They also depend on the thresholds we choose when applying the algorithm, i.e.\ on how many of these events we reject and collect.
In Figures~\ref{fig:WjetAA} and~\ref{fig:2to5}, one could implement such a criterion as a window in the output probability of the classifier
\begin{equation}
p \in [\, p^{min}, \, p^{max}]
\end{equation}
and scan different windows to obtain the maximum efficiency to collect anomalies and reject SM events.
The effect of this scan is shown in Fig.~\ref{fig:Ratio}, where we plot the following quantity
\begin{equation}
\label{Rval}
R= \frac{\epsilon_{An}}{\sqrt{\sigma_{QCD} \, \epsilon_{QCD}+ \sigma_{t\bar t} \, \epsilon_{t \bar t}}} .
\end{equation}
In this equation, $\epsilon$ denotes the area of the corresponding PDF curve of Figures~\ref{fig:WjetAA} and~\ref{fig:2to5} within a given window.
Note how the QCD and Top cross sections are weighted into $R$. Right after the high-$p_T$ selection cuts, the total QCD cross section is much larger than the Top one. But one can use the output of the classifier to impose a threshold on P(Top Jet), and drastically reduce the amount of QCD events, bringing it closer to the amount of Top.
In anomaly detection, the task of identifying anomalies means fighting against QCD and/or Top, depending on where in the output classifier region our window lies. Towards the left, P(Top Jet) $\ll 1$, QCD is the dominant contribution to the denominator in $R$, and at the other end, Top is dominant. Somewhere in between these two extremes we should find the best window for anomaly detection. In Fig.~\ref{fig:Ratio} we see exactly that behaviour. $R$ is very small on both ends of the plot, where the QCD and Top backgrounds are overwhelming. As we move our window $[\, p^{min}, \, p^{max}]$ towards the center, both QCD and Top drop, allowing an optimal identification of anomalies at the maximum of $R$, $R_{max}$. The parameter $\delta$ in this plot corresponds to the width of the window, $\delta = p^{max}-p^{min}$ and one can see the value of $R_{max}$ does not depend strongly on the choice of $\delta$ as long as it is $\sim {\cal O}(0.1)$.
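The window scan just described is simple enough to sketch in a few lines of code. The following is our own toy illustration, not the analysis code of the paper: the classifier outputs, the cross-section values and the toy distributions are invented stand-ins for the real PDFs of Figures~\ref{fig:WjetAA} and~\ref{fig:2to5}.

```python
import numpy as np

def best_window(p_anom, p_qcd, p_top, sigma_qcd, sigma_top, delta=0.1, step=0.01):
    """Scan windows [p_min, p_min + delta] of the classifier output P(Top Jet)
    and return the window maximising
    R = eps_An / sqrt(sigma_QCD * eps_QCD + sigma_Top * eps_Top)."""
    best_win, best_r = None, 0.0
    for p_min in np.arange(0.0, 1.0 - delta + 1e-9, step):
        p_max = p_min + delta
        # efficiency = fraction of events falling inside the window
        in_win = lambda p: np.mean((p >= p_min) & (p < p_max))
        eps_an, eps_q, eps_t = in_win(p_anom), in_win(p_qcd), in_win(p_top)
        denom = np.sqrt(sigma_qcd * eps_q + sigma_top * eps_t)
        if denom > 0 and eps_an / denom > best_r:
            best_win, best_r = (p_min, p_max), eps_an / denom
    return best_win, best_r

# Toy distributions: anomalies pile up mid-range, backgrounds near 0 and 1.
rng = np.random.default_rng(0)
p_anom = rng.normal(0.5, 0.1, 10_000).clip(0.0, 1.0)
p_qcd = rng.beta(0.5, 5.0, 10_000)   # peaked near P(Top Jet) = 0
p_top = rng.beta(5.0, 0.5, 10_000)   # peaked near P(Top Jet) = 1
window, r_max = best_window(p_anom, p_qcd, p_top, sigma_qcd=5e4, sigma_top=1e3)
```

As in Fig.~\ref{fig:Ratio}, $R$ vanishes at both ends of the scan, where one of the backgrounds is overwhelming, and peaks in between.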
\begin{figure}[h!]
\centering
\includegraphics[width=0.97\columnwidth]{figures/ratio.png}
\caption{Value of $R$, defined in Eq.~\ref{Rval}, as a function of the classifier output P(Top Jet) for EFT events, in an AA run with awareness of all the anomalies except EFT. The three curves correspond to window widths $\delta =$ 0.1, 0.08 and 0.12. }
\label{fig:Ratio}
\end{figure}
After determining $R_{max}$, one can turn the discovery criterion $N_{An}/\sqrt{N_{SM}} = 5$ into a minimum value of the anomaly cross section one would be able to detect. This value depends on the amount of data collected at the LHC (i.e.\ the luminosity ${\cal L}$), hence on the time it runs. Indeed, note that
\begin{equation}
\frac{N_{An}}{\sqrt{N_{SM}}}= R \, \sigma_{An} \, \sqrt{{\cal L}}
\end{equation}
hence
\begin{equation}
\sigma_{An}^{min} = \frac{5}{R_{max} \, \sqrt{{\cal L}}} \,
\end{equation}
which is shown in Fig.~\ref{fig:lumi} for the EFT case, where AA is applied to all anomalies but EFT. We repeated the same analysis for the other anomalies, $R_{2,3,4}$, with similar results, as expected from a procedure which aims at model independence.
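Numerically, turning $R_{max}$ into a minimum detectable cross section is a one-line computation. In this sketch (ours, for illustration) the value of $R_{max}$ is an invented placeholder, not a number extracted from Fig.~\ref{fig:Ratio}:

```python
import math

def sigma_an_min(r_max, lumi_fb, n_sigma=5.0):
    """Minimum detectable anomaly cross section (in fb) for the discovery
    criterion N_An / sqrt(N_SM) = n_sigma, given R_max (in fb^{-1/2})
    and the integrated luminosity in fb^{-1}."""
    return n_sigma / (r_max * math.sqrt(lumi_fb))

# Placeholder R_max; 3000 fb^-1 is the expected HL-LHC luminosity.
print(sigma_an_min(r_max=0.01, lumi_fb=3000.0))  # ~9.13 fb
```

With these (assumed) numbers one lands on the $\sim 10$ fb scale quoted below for the HL-LHC luminosity.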
\begin{figure}[h!]
\centering
\includegraphics[width=0.97\columnwidth]{figures/LumiVsSig.png}
\caption{ Value of $\sigma_{An}^{min}$ (in fb) as a function of LHC luminosity (in fb$^{-1}$). The value of 3000 fb$^{-1}$ corresponds to the expected luminosity of the HL-LHC run. }
\label{fig:lumi}
\end{figure}
As a reference, the QCD cross section after the selection cuts is of the order of 50$\times 10^3$ fb, and Fig.~\ref{fig:lumi} shows we would be able to detect anomaly cross sections of the order of 10 fb, a 1:5000 ratio of anomalous over in-distribution events.
We do not want to finish this section without stressing once more that the results shown in Figs.~\ref{fig:Ratio} and~\ref{fig:lumi} should be taken as a qualitative illustration of how to use AA for anomaly detection. A better simulation and analysis, including more information on the events and more types of anomalies, would likely lead to substantially better results than those shown here.
\section{Conclusions}~\label{sec:conclusions}
In this paper we have described a new method of anomaly detection, based on adding some level of anomaly awareness to a classification task defined on a multiclass in-distribution.
As a use case for this method we have addressed a non-trivial task of anomaly detection in Particle Physics. Using information from LHC events, we have studied how Anomaly Awareness can help to establish a more model-independent strategy to search for new phenomena at high energies. We observe that Anomaly Awareness does not substantially interfere with the underlying classification task, see Fig.~\ref{fig:classROC}, but adds power to detect anomalies.
We found that awareness of {\it any} anomaly helped in detecting others, and that adding more anomalies to the AA term improved the detection of new, unknown situations. We did notice, though, that this procedure levels off after exposure to a few examples, likely indicating that the feature extraction of the algorithm has saturated.
Although we constructed jet images as input for the algorithm, Anomaly Awareness could be used with any type of input. For example, for the LHC application we could have used instead images of the leading and subleading jets simply pasted together, as proposed in Ref.~\cite{Khosa:2019qgp}, event information in terms of a set of kinematic variables, mixed input, or even lower-level event information (closer to the raw output of the detector).
Finally, our discussion on LHC anomaly detection should be understood as a {\it proof-of-concept} on the use of Anomaly Awareness, and not as a dedicated study. We nevertheless find promising results, despite using just a part of the information available in the LHC events. And although we showed results with the EFT as the unseen anomaly, we found similar results for the other anomaly examples. Compared with supervised ML methods for EFTs~\cite{Freitas:2019hbk}, we find that our estimated limit on the anomalous cross section, Fig.~\ref{fig:lumi}, is of the same order of magnitude and motivates a more systematic study.
\section{Acknowledgements}
\noindent CKK is supported by the Newton Fellowship programme at the Royal Society (UK). VS acknowledges support from the UK Science and Technology Facilities Council (grant number ST/L000504/1), the GVA project PROMETEO/2019/083, as well as the national grant FPA2017-85985-P. The authors gratefully acknowledge the computer resources at Artemisa and the technical support provided by the Instituto de F\'isica Corpuscular, IFIC (CSIC-UV). Artemisa is co-funded by the European Union through the 2014-2020 ERDF Operative Programme of Comunitat Valenciana, project IDIFEDER/2018/048.
\bibliographystyle{unsrt}
\section{Definitions and Rough classification}
\begin{definition}An abstract semigroup is a set $S$ equipped with an associative composition law
$\mu :S\times S\to S $.
When $S$ is a variety and $\mu$ is a morphism, we say that $(S,\mu)$ is an algebraic semigroup.
\end{definition}
\begin{theorem} (M.Brion ~\cite {Michel. Brion}, Section 4.3, Theorem 6.) \\Let $S$ be a complete
variety, and $\mu:S\times S\longrightarrow S$ a morphism. Then $(S,\mu)$ is an algebraic semigroup if
and only if there exist complete varieties $X$ , $Y$, an abelian variety $(A,+ )$ and morphisms
($\sigma$, $\pi$)
making the following diagram commutative:
\begin{displaymath}\xymatrix{X\times A \times Y \ar[r]^-\sigma\ar[d]^-{id}&S\ar[dl]^-\pi\\X\times A
\times Y}\end{displaymath}
and satisfying $\mu(s_1,s_2)=\sigma\Big(\nu\big(\pi(s_1),\pi(s_2)\big)\Big)$, where
\[\nu\big((x_1,a_1,y_1),(x_2,a_2,y_2)\big)=\big(x_1,a_1+a_2,y_2\big).\] In particular, $\sigma$ is a
closed immersion and a section of $\pi$.
\end {theorem}
\textbf{Notation}: We call $X\times A\times Y$ the kernel of $(S,\mu)$ and $A$ the associated abelian
variety of $(S,\mu)$.
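As a quick sanity check (ours, not part of Brion's statement), the law $\nu$ on the kernel is associative because addition on $A$ is:
\begin{align*}
\nu\big(\nu((x_1,a_1,y_1),(x_2,a_2,y_2)),(x_3,a_3,y_3)\big)
&=\nu\big((x_1,a_1+a_2,y_2),(x_3,a_3,y_3)\big)\\
&=(x_1,(a_1+a_2)+a_3,y_3)=(x_1,a_1+(a_2+a_3),y_3)\\
&=\nu\big((x_1,a_1,y_1),\nu((x_2,a_2,y_2),(x_3,a_3,y_3))\big).
\end{align*}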
\subsection{First steps in classification}
Note that $\dim S=2$ and $\sigma:X\times A\times Y\hookrightarrow S$ is a closed immersion, so $\dim
X+\dim Y+\dim A\le 2$.\\Then we list all the possibilities as follows:\\
\begin{enumerate}[1)]
\item $\dim X=\dim Y=0 , \dim A=2$. Then $(S,\mu)=(A,+)$ is an abelian surface.
\item $\dim X=\dim Y=0 , \dim A=1 $. Then $(A,+)$ is an elliptic curve and we have the following
commutative diagram:\\
$$\xymatrix{(A,+)\ar[r]^-\sigma\ar[d]^-{id}&(S,\mu)\ar[dl]^-\pi\\(A,+)}$$\\
where the semigroup law is given by the formula: \[\mu(s_1,s_2)=\sigma\
\big(\pi(s_1)+\pi(s_2)\big).\]
\item $\dim X=\dim Y=\dim A=0$ and $\mu$ is constant.
\item $\dim X=1$, $\dim Y=0$, $\dim A=0$ and $\mu(s_1,s_2)=\sigma(\pi(s_1))$.
\item (similar to 4)) $\dim X=0$, $\dim Y=1$, $\dim A=0$ and $\mu(s_1,s_2)=\sigma(\pi(s_2))$.
\item $\dim X=1$, $ \dim Y=0$, $\dim A=1$. Then $S=X\times A$, and the semigroup law is given by
$\mu((x_1,a_1),(x_2,a_2))=(x_1,a_1+a_2)$.
\item (similar to 6)) $\dim X=0$, $\dim Y=1$, $\dim A=1$. Then $S=Y\times A$ and the semigroup law
is given by $\mu((y_1,a_1),(y_2,a_2))=(y_2,a_1+a_2)$.
\item $\dim X=2$, $\dim Y=0$, $\dim A=0$ . Then $S=X$, and the semigroup law is given by
$\mu(s_1,s_2)=s_1$.
\item (similar to 8)) $\dim X=0$, $\dim Y=2$, $\dim A=0$. Then $S=X$, and the semigroup law is
given by $\mu(s_1,s_2)=s_2$.
\item $\dim X=\dim Y=1$. Then $S= X \times Y$ and the semigroup law is given by
$\mu((x_1,y_1),(x_2,y_2))=(x_1,y_2)$.
\end{enumerate}
\textbf{Remark 1):} \\We call cases 1) 3) 6) 7) 8) 9) 10) \textbf {trivial} and they will not be
considered any more.\\
In cases 2) 4) 5), we observe that $S$ always maps to a curve $C$ via some morphism
$\pi:S\longrightarrow C $ with a section $\sigma: C\longrightarrow S$, and there is always an
algebraic semigroup law $\tilde\mu$ on $C$, such that the semigroup law $\mu$ on $S$ is given
by:\begin{equation} \label{law}
\mu(s_1,s_2)=\sigma\big(\tilde\mu(\pi(s_1),\pi(s_2))\big).\end{equation} To be concrete, in case 2),
$(C,\tilde\mu)=(A,+)$; in case 4), $C=X$, and for all $ x_1,x_2\in X$, $ \tilde\mu(x_1,x_2)=x_1$; in
case 5), $C=Y$, and for all $ y_1,y_2\in Y$, $ \tilde\mu(y_1,y_2)=y_2$. We call the semigroup laws on
$S$ in these cases \textbf {nontrivial}. In the following, we deal with non-trivial algebraic
semigroup structures, and we denote the associated contraction morphism $\pi$ and its section
$\sigma$, by the following diagram \xymatrix{S\ar[r]|\pi &C\ar@/_/@{>}[l]|\sigma}.\begin{lemma}If
\xymatrix{S\ar[r]|\pi &C\ar@/_/@{>}[l]|\sigma}, then $\pi_*\mathcal O_S=\mathcal O_C$.\end{lemma}
\begin{proof} ~\cite {Michel. Brion}, Section 4.3, Lemma 1.\end{proof}
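For completeness (a one-line check, not spelled out in the source), any law of the form (\ref{law}) is associative as soon as $\tilde\mu$ is, because $\pi\circ\sigma=id_C$:
\begin{align*}
\mu(\mu(s_1,s_2),s_3)
&=\sigma\Big(\tilde\mu\big(\pi\circ\sigma(\tilde\mu(\pi(s_1),\pi(s_2))),\pi(s_3)\big)\Big)\\
&=\sigma\Big(\tilde\mu\big(\tilde\mu(\pi(s_1),\pi(s_2)),\pi(s_3)\big)\Big)
=\mu(s_1,\mu(s_2,s_3)).
\end{align*}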
From this lemma, we deduce that $\pi$ is a fibration (which means every fibre of $\pi$ is connected), and that the projective curve $C$ is normal, hence smooth.
Now in order to solve the problem, our task becomes the following one: \\classify all
\xymatrix{S\ar[r]|\pi &C\ar@/_/@{>}[l]|\sigma}, where $\pi$ is a fibration and $\sigma$ is a section
of $\pi$.
\subsection{Reduction of the problem to the minimal model}
In this subsection, we keep the assumption that \xymatrix{S\ar[r]|\pi &C\ar@/_/@{>}[l]|\sigma}. When
$g(C)\ge1$, we prove that $S$ has a non-trivial algebraic semigroup structure if and only if its
minimal model has one. This will reduce our problem to the case that $S$ is minimal. In Proposition
1.4, we consider the blowing-up of a point on $S$, and we show that any non-trivial algebraic
semigroup structure can be lifted to this blowing-up. Then we give an example to show that in some
cases, after blowing-down an exceptional rational curve on $S$, the non-trivial semigroup structures
are not preserved. In Proposition 1.5, we show that, if $g(C)\ge1$, any non-trivial algebraic
semigroup structure is preserved after contracting an exceptional rational curve.\\
The main results of this subsection are Proposition \ref {P1} and Proposition \ref {P2}.
\begin{proposition}\label{P1}
Assume that the non-trivial semigroup law $\mu$ on $S$ is given by the formula (1) in Remark 1). Let
$\xymatrix{\varphi: S'\ar[r] &S}$ denote the blowing-up of an arbitrary point $P$ on $S$. Then $S'$
has a unique non-trivial algebraic semigroup structure such that $\varphi$ is a homomorphism.
\end{proposition}Now let us begin the proof of Proposition 1.4.
\begin{proof} First, we construct a morphism from $S'$ to $C$, and a section of this morphism; then
we construct a non-trivial algebraic semigroup structure on $S'$, and verify that $\varphi$ is a
homomorphism. \\ Composing $\pi$ with $\varphi$, we get a morphism from $S'$ to $C$, denoted by
$\pi'=\pi\circ\varphi:S'\longrightarrow C$.\\ We now construct a section of $\pi'$. Since $\sigma$
maps $C$ isomorphically to its image $\sigma(C)$, in order to construct a morphism from $C$ to $S'$,
we consider the strict transform of $\sigma (C)$, and denote it by $C'$. \\Then
$\varphi_{|C'}:C'\longrightarrow \sigma(C)$ is the blowing-up of the point $P $ on $\sigma( C)$.
Since $\sigma(C)$ is smooth, $\varphi_{|C'}$ is an isomorphism. Let
$\sigma'=\varphi_{|C'}^{-1}\circ\sigma$, then $\sigma'$ is a morphism from $C$ to $S'$. Next let us
verify that $\sigma'$ is a section of $\pi'$ :\begin{equation}\label{2}
\pi'\circ\sigma'=\pi\circ\varphi\circ\sigma'=\pi\circ\sigma= id_C.\end{equation} Thus we construct a
semigroup law $\mu'$ on $S'$ by using the formula (1) in Remark 1). We now verify that $\varphi$ is a
homomorphism of semigroups. Let $s'_1,s'_2 \in S'$, then \begin{equation}
\mu(\varphi(s'_1),\varphi(s'_2))
=\sigma\big(\tilde\mu(\pi\circ\varphi(s'_1),\pi\circ\varphi(s'_2))\big)
=\sigma\big(\tilde\mu(\pi'(s'_1),\pi'(s'_2))\big)\end{equation}
and we also have the following equations:\begin{equation}
\varphi(\mu'(s'_1,s'_2))=\varphi\big(\sigma'(\tilde\mu(\pi'(s'_1),\pi'(s'_2)))\big)=\sigma\big(\tilde\mu(\pi'(s'_1),\pi'(s'_2))\big).\end{equation}
So we get $\mu(\varphi(s'_1),\varphi(s_2'))=\varphi(\mu'(s'_1,s'_2))$, which means that $\varphi$ is
a homomorphism of semigroups.\\ Finally the uniqueness of the semigroup law making $\varphi$ a
homomorphism is due to the fact that $\xymatrix{S'\ar[r]^\varphi &S}$ is birational. \end{proof}
We already know that blowing-up of a point on $S$ preserves a non-trivial algebraic semigroup
structure, what happens if we perform a blowing-down? The next example shows that the non-trivial
semigroup structures are not necessarily preserved.
\begin{example} Consider the blowing-down morphism:$$ \varphi:Proj_{\mathbb P^1}(\mathcal
O\oplus\mathcal O(-1))\longrightarrow \mathbb P^2. $$ We show that there exist non-trivial algebraic
semigroup structures on \\$Proj_{\mathbb P^1}(\mathcal O\oplus\mathcal O(-1))$, but every algebraic
semigroup structure on $ \mathbb P^2 $ is trivial. \\ Note that $Proj_{\mathbb P^1}(\mathcal
O\oplus\mathcal O(-1))$ is ruled over $\mathbb P^1$ and the ruling morphism $\pi$ has sections. For
each section, we can define a semigroup law on $Proj_{\mathbb P^1}(\mathcal O\oplus\mathcal O(-1))$
as in Remark 1). For example, recall that the projection morphism\\ \xymatrix{\mathcal O\oplus\mathcal
O(-1)\ar[r]& \mathcal O\ar[r]&0} on $\mathbb P^1$ corresponds to a section:\\ \xymatrix{\sigma:
\mathbb P^1\ar[r]& Proj_{\mathbb P^1}(\mathcal O\oplus\mathcal O(-1))}. We can thus define a
non-trivial semigroup law $\mu$ on $Proj_{\mathbb P^1}(\mathcal O\oplus\mathcal O(-1))$ by
$\mu(x_1,x_2)=\sigma\circ \pi (x_1)$.
\\ Also recall that for an arbitrary smooth projective curve $C$, there are only constant morphisms
from $\mathbb P^2$ to $C$, which means that there will be no retraction from $\mathbb P^2$ to a curve.
It follows that the only possible algebraic semigroup structures on $\mathbb P^2 $ are trivial.
\end{example}\begin{proposition}\label{P2}Let $(S,\mu)$ satisfy the same assumptions as in
Proposition \ref{P1}. Let $\varphi: S\longrightarrow \tilde S$ denote the blowing-down of an
exceptional rational curve $E$ on $S$. Furthermore, we require that $\pi(E)$ is a point $P$. Then
there exists a unique non-trivial semigroup law $\tilde \mu$ on $\tilde S$ such that $\varphi$ is a
homomorphism. \end{proposition}
Now let us begin the proof of Proposition 1.5.
\begin{proof}First we construct a morphism from $C$ to $\tilde S$, then we construct a contraction of
this morphism. After defining a non-trivial semigroup structure on $\tilde S$, we verify that
$\varphi$ is a homomorphism.\\
Composing $\varphi$ with $\sigma$, we get a morphism $\tilde
\sigma=\varphi\circ\sigma:C\longrightarrow \tilde S$.\\
Since $\varphi: S-E \longrightarrow \tilde S-P $ is an isomorphism, there is a set-theoretical map
$\tilde \pi$ from $\tilde S$ to $C$ satisfying $\pi=\tilde \pi\circ\varphi$.\\
Let us verify that $\tilde \pi$ is a morphism. It is easy to see $\tilde \pi$ is continuous, because
the preimage of any finite set is closed in $\tilde S$. We now define $\tilde \pi ^ {\sharp}: \mathcal
O_C\longrightarrow \tilde \pi_*\mathcal O_{\tilde S}$. For any open subset $U$ of $C$, there is a ring
homomorphism $\pi_{U}^{\sharp}:\mathcal O_C(U)\longrightarrow \mathcal O_S(\pi^{-1}(U))$. Since
$\varphi_*\mathcal O_S=\mathcal O_{\tilde S}$, there is an isomorphism $\varphi_{\tilde
\pi^{-1}(U)}^{\sharp}:\mathcal O_{\tilde S}(\tilde \pi^{-1}(U))\longrightarrow \mathcal
O_S(\pi^{-1}(U))$. We define $\tilde \pi ^ {\sharp}_U: \mathcal O_C (U)\longrightarrow \mathcal
O_{\tilde S}(\tilde \pi^{-1}(U))$ by $\tilde \pi ^ {\sharp}_U=\varphi_{\tilde
\pi^{-1}(U)}^{\sharp-1}\circ \pi_{U}^{\sharp}$. Then $(\tilde \pi,\tilde \pi^{\sharp})$ is a morphism
of ringed spaces, so we get a morphism $\tilde \pi$ from $\tilde S$ to $C$.\\
Now we verify $\tilde \sigma$ is a section of $\tilde \pi$: \begin{equation} \tilde \pi\circ\tilde
\sigma=\tilde \pi\circ\varphi\circ\sigma=\pi\circ\sigma=id_C.\end{equation}\\ Thus we have
constructed a non-trivial algebraic semigroup structure $\tilde \mu$ on $\tilde S$ as in Remark
1).\\ Let us verify that $\varphi$ is a homomorphism of semigroups: for any $s_1,s_2\in S$,
\begin{equation}
\varphi(\mu(s_1,s_2))=\varphi\circ\sigma(\tilde\mu(\pi(s_1),\pi(s_2))) \end{equation}and
\begin{equation}\tilde \mu(\varphi(s_1),\varphi(s_2))=\tilde \sigma\circ\tilde\mu(\tilde
\pi\circ\varphi(s_1),\tilde \pi\circ\varphi(s_2)). \end{equation}
Since \begin{equation} \tilde \pi\circ\varphi=\pi \end{equation}and \begin{equation}\tilde
\sigma=\varphi\circ\sigma \end{equation}we have $\tilde\mu(\varphi(s_1),\varphi(s_2))=\varphi(\mu(s_1,s_2))$, which means that $\varphi$ is a homomorphism.\\Finally, $S$ and $\tilde S$ are
birationally equivalent, so there is a unique non-trivial algebraic semigroup structure on $\tilde S$
such that $\varphi$ is a homomorphism.\end{proof}
In view of the last two propositions, when $g(C)\ge 1$, we can reduce our problem to the case where $S$ is minimal; the case $g(C)=0$ needs further study. In any case, in what follows we always assume that $S$ is minimal.
\section{The case $\kappa(S)=-\infty$}In this section, we always assume $\kappa(S)=-\infty$, and our
aim is to prove:
\begin{theorem}[Main theorem]\label {M1}
$$\textrm{Assume }\xymatrix{S\ar[r]|\pi &C\ar@/_/@{>}[l]|\sigma}. $$\begin{enumerate}[1)]
\item If $g(C)\ge 1$, then $\pi$ is a ruling morphism.
\item If $g(C)=0$ and the general fibre of $\pi$ is rational,\\then $S\simeq Proj_{\mathbb
P^1}(\mathcal O\oplus\mathcal O(-d)) $ for some $d\ne 1$ and $\pi$ is the ruling morphism.
\item If $g(C)=0$, i.e. $C\simeq \mathbb P^1$ and the general fibre of $\pi$ is not rational,\\then
$S\simeq \mathbb P^1\times X$, where $X$ is a projective smooth curve satisfying $g(X)\ge1$ and
$\pi=pr_{1}$ is the first projection to $\mathbb P^1$.\end{enumerate}\end{theorem}
The parts 1) and 2) of Theorem \ref {M1} are more or less trivial. In order to prove 3), we analyze
the cone of curves of $S$, $NE(S)$. To be more precise, we show that $NE(S)$ is two-dimensional, and
give an explicit description of the extremal rays of this convex cone.
Before proving Theorem \ref{M1}, we need some general facts about ruled surfaces.
\begin{definition}{(definition and notation)}\begin {enumerate}[a)]\item (definition of normalised
sheaf).\\If $\pi:S\longrightarrow C$ is a ruled surface, then there exists a rank two locally free
sheaf $\mathcal E$ on $C$ s.t. $S\simeq \mathbb P_C(\mathcal E)$, $H^0(\mathcal E)\ne 0$ and for an
arbitrary invertible sheaf $\mathcal L$ with $deg(\mathcal L)<0$ on $C$, $H^0(\mathcal E\otimes
\mathcal L)=0$ (the existence of such $\mathcal E$ can be found in ~\cite {Hartshone}, Chapter 5,
Proposition 2.8). We call such $\mathcal E$ normalised. \item (definition of invariant $e$)\\For any
normalised sheaf $\mathcal E$, we define $e$ by \begin{equation} e=-deg(\mathcal E)=-deg(\bigwedge^2
\mathcal E). \end{equation}This number is a well-defined invariant of $S$, not depending on the
$\mathcal E$ we choose.\\(The proof that $e$ is a well-defined invariant, can be found in ~\cite
{Hartshone}, Chapter 5, Proposition 2.8).\item (some notation)\\In $Pic(S)$, we let $C_0$ denote the
linear equivalence class of $\mathcal O_{\mathbb P_C(\mathcal E)}(1)$. And we let $f$ denote the
numerical equivalence class of any general fibre of $\pi$. \end{enumerate}\end{definition}
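As a simple example of these notions (ours, not from the text): the trivial bundle $\mathcal E=\mathcal O\oplus\mathcal O$ on $X$ is normalised, the surface $S=\mathbb P_X(\mathcal E)\simeq \mathbb P^1\times X$ has invariant $e=-deg(\mathcal E)=0$, and the class $C_0$ satisfies $C_0^2=-e=0$.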
We quote the following lemmas without proof, they can be found in the book ~\cite{Hartshone},
Proposition $2.6$, Proposition $2.20$ and Proposition $2.21$. We will use these lemmas to describe
$NE(S)$.
\begin {lemma}Let $S\simeq \mathbb P_C(\mathcal E)$ be a ruled surface over $C$, for some normalised
sheaf $\mathcal E$. Then there is a one-to-one correspondence between sections of $\pi$ and
surjections $\mathcal E\longrightarrow \mathcal L\longrightarrow 0$, where $\mathcal L$ is an
invertible sheaf on $C$.
There exists a section $\sigma$ such that $\sigma(C)=\mathcal O_{\mathbb P_C(\mathcal E)}(1)$ in
$Pic(S)$ ($\sigma$ may not be unique), $\sigma$ is called a canonical section of $\pi$.\end{lemma}
\begin{lemma}If $\pi:S\longrightarrow C$ is a ruled surface, then \\a) $Pic(S)=\mathbb Z C_0\oplus
\pi ^* Pic(C) $. \\ b) The self-intersection number satisfies $C_0^2=-e$.\\
c) The canonical line bundle of $S$ satisfies $K_S\sim_{num}-2C_0+(2g(C)-2-e)f$.\end{lemma}
\begin{lemma}\label{5}If $\pi:S\longrightarrow C$ is a ruled surface, with $e\ge 0$
then:\begin{enumerate}[a)]\item For any irreducible curve $Y$ on $S$, $Y$ is numerically equivalent
to $a C_0+b f$, for some $a,b\in \mathbb Z$. If $Y$ is not numerically equivalent to $C_0$ or $ f$,
then $a>0$ , $b\ge ae$.
\item A divisor $D\sim_{num}aC_0+bf$ is ample if and only if $a>0$ and $b>ae$.\end{enumerate}
\end{lemma}
\begin{lemma}\label{6}If $\pi:S\longrightarrow C$ is a ruled surface, with $e< 0$ then
:\begin{enumerate}[a)]\item For any irreducible curve $Y$ on $S$, $Y$ is numerically equivalent to $a
C_0+b f$ for some $a,b\in \mathbb Z$. If $Y$ is not numerically equivalent to $C_0$ or $ f$, then
either $a=1$, $b>0$ or $a\ge2$, $2b\ge a e$.\item A divisor $D\sim_{num}a C_0+ b f$ is ample if and
only if $a>0$, $2b>a e$. \end{enumerate}\end {lemma}
The next lemma is a version of the ``Rigidity lemma'' (see Lemma 1.15 in ~\cite {Olivier Debarre}). For
convenience, we give a detailed proof in the following.
\begin{lemma}{(Rigidity lemma)}\\ Assume that there are two fibrations $\pi_1:S\longrightarrow C_1$,
$\pi_2:S\longrightarrow C_2$ where $C_1$, $C_2$ are smooth curves. If $\pi_2$ contracts one fibre
$F_1$ of $\pi_1$, then it contracts all fibres of $\pi_1$ and there exists an isomorphism
$\theta:C_1\longrightarrow C_2$, s.t. $\pi_2=\theta\circ\pi_1$, which means $\pi_1$ and $\pi_2$ are
isomorphic as fibrations. \end{lemma}
\begin{proof}First we prove that $\pi_2$ contracts all fibres of $\pi_1$. Observe that $\pi_2$
contracts $F_1$ if and only if $\pi_{2*}(F_1)\sim_{num}0$. Since the fibres of $\pi_1$ are
parameterized by $C_1$, they are all numerically equivalent. So for an arbitrary fibre $F$ of $\pi_1$,
we have $\pi_{2*}(F)\sim_{num}0$, which implies that $\pi_2$ also contracts $F$.\\
Then $\pi_2$ factors through $\pi_1$ set-theoretically, i.e. there exists a map $ \theta $ s.t.
$\pi_2=\theta\circ\pi_1$.\\ We now show that $\theta$ is a morphism. \\Pick an arbitrary open set $U$
of $C_2$, we have $\pi_2^{-1}(U)=\pi_1^{-1}\circ\theta^{-1}(U)$. Since $\pi_1$ is surjective,
$\pi_1(\pi_2^{-1}(U))=\theta^{-1}(U)$. Because $\pi_1$ is flat, it is an open map. Since $\pi_1$
maps the open set $\pi_2^{-1}(U)$ onto $\theta^{-1}(U)$, the set $\theta^{-1}(U)$ is open in $C_1$. Now
we have proved that $\theta$ is continuous.\\ Then we construct $\theta^ { \sharp}:\mathcal
O_{C_2}\longrightarrow \theta_*\mathcal O_{C_1}$. Since both $\pi_i$ have connected fibres,
$\pi_{i*}\mathcal O_S=\mathcal O_{C_i}$. So for an arbitrary open subset $U$ of $C_2$, both
$\pi_{2}^{\sharp}:\mathcal O_{C_2}(U)\longrightarrow \mathcal O_S(\pi_2^{-1}(U))$ and
$\pi_{1}^{\sharp}:\mathcal O_{C_1}(\theta^{-1}(U))\longrightarrow \mathcal O_S (\pi_2^{-1}(U))$ are
isomorphisms. We define $\theta_{U}^{\sharp}:\mathcal O_{C_2}(U)\longrightarrow \mathcal
O_{C_1}(\theta^{-1}(U))$ by $\theta_{U}^{\sharp}=\pi_{1}^{\sharp-1}\circ\pi_{2}^{\sharp}$, then
$(\theta, \theta^ { \sharp})$ is a morphism of ringed spaces. Now we have proved that $\theta$ is a
morphism of varieties.\\ Then $\theta$ is a finite morphism between smooth projective curves. Since
both $\pi_i$ have connected fibres, $deg(\theta)=1$, which implies $\theta$ is an isomorphism.
\end{proof} Recall the following result from the book ~\cite{BPV}, Theorem 18.4, Chapter 3:
\begin{theorem}{(Iitaka's conjecture $C_{2,1}$).} Let $\varphi:S\longrightarrow C$ be a fibration, then
the following inequality holds for any general fibre $S_c$ :
\begin{equation} \kappa(S_c)+\kappa(C)\le \kappa(S). \end{equation}\end{theorem}
Now let us begin the proof of Theorem \ref{M1}.
\begin{proof}
\begin{enumerate}[1)]\item Assume that $g(C)\ge 1$. Then for any general fibre $F_0 $ of $\pi$, by
Theorem 2.1, $\kappa(F_0)+\kappa(C)\le \kappa(S)=-\infty$. Then $\kappa(F_0)=-\infty $, which means
$F_0$ is smooth rational, so $\pi$ is a ruling morphism.
\item If $g(C)=0$ and the general fibres of $\pi$ are smooth rational curves, then $S$ is ruled over
$C\simeq \mathbb P^1$, which implies that $S$ is rational. Since $S$ is minimal, $S\simeq
\mathbb P^2$ or $S\simeq Proj_{\mathbb P^1}(\mathcal O\oplus\mathcal O(-d)) $ for some $d\ne
1$. But every morphism from $\mathbb P^2$ to $\mathbb P^1$ is constant, so there is no
fibration from $\mathbb P^2$ to $\mathbb P^1$ at all. Hence $S\simeq \mathbb P^2$ is impossible, and we get
$S\simeq Proj_{\mathbb P^1}(\mathcal O\oplus\mathcal O(-d)) $ for some $d\ne 1$.
\item
In this paragraph, we show that $S$ is ruled over some smooth curve $X$ and introduce some notation.
Since $\kappa(S)=-\infty$ and $S$ is minimal, $S$ is a ruled surface over some projective smooth
curve $X$. We denote the ruling morphism by $\varphi:S\longrightarrow X$, and write $S$ as $\mathbb
P_X(\mathcal E)$ for some normalised sheaf $\mathcal E$ on $X$. Using the notation of Definition
2.3, we denote the canonical section of $\varphi$ by $\tau$. Then we have the following diagram
involving two morphisms with sections:\\$$\xymatrix{S\ar[r]|\varphi
\ar[d]|\pi&X\ar@/_/@{>}[l]|\tau\\\mathbb P^1\ar@/_/@{>}[u]|\sigma}$$
\\We now show that $NE(S)$ is closed. Let $F_0$ be a general fibre of $\pi$ and denote its
numerical equivalence class by $f_0$. We show that $F_0$ can't be contracted by $\varphi$.
Otherwise, by the ``Rigidity lemma'', $\varphi$ will factor through $\pi$. Then $\varphi$ ,
$\pi$ will be isomorphic as fibrations, the general fibres of $\pi$ will be rational, which
contradicts our assumptions of 3). So $f_0$ is not a multiple of $f$, which implies that the
extremal lines generated by them are different. Since the second Betti-number, $b_2(S)=2$, the
cone $NE(S)$ is two-dimensional, and its boundary $\partial NE(S)$ consists of two extremal
lines. Since the self-intersection numbers $f_0^2$ and $f^2$ are zero, by the ``cone theorem for
surfaces''(Lemma 6.2, Chapter 6, ~\cite {Olivier Debarre} ), we know that both $f_0$ and $f$
lie in $\partial NE(S)$. It follows that $\partial NE(S)$ consists exactly of the two extremal
lines generated by $f_0$ and $f$. Since $f_0$ and $f$ belong to $NE(S)$, $\partial
NE(S)\subset NE(S) $, which means $NE(S)$ is closed.\\In the following, we aim to prove that the
intersection number $f_0\cdot f=1$. \\The two cases $e< 0$ , $e\ge0$ will be dealt with
separately.\\
a)
The case $e<0$ :\\First, we determine the ample cone and the nef cone of $S$.\\By Lemma
\ref{6}, we have \\$Amp(S)=\{aC_0+b f\mid a>0,\quad 2b>a e\}$.\\
Taking the closure of $Amp(S)$, we get the nef cone:\\ $Nef(S)=\{aC_0+bf|a\ge0,\quad 2b\ge
ae\}$.\\ \\Then, we determine the numerical equivalence class of $f_0$.\\Let $f_0=a_1C_0+b_1f$,
where $a_1=1,\quad b_1\ge 0$ or $a_1\ge 2,\quad 2b_1\ge a_1 e$. \\
If $a_1=1$ and $f_0=C_0+b_1f$, $b_1\ge 0$, then $f_0\in Amp(S)$. This contradicts $f_0^2=0$.\\
So $f_0=a_1C_0+b_1f$, where $a_1\ge2$, $ 2b_1\ge a_1e $.\\ Let us determine the numbers $a_1$ and
$b_1$ by calculating the intersection numbers $f_0^2$ and $f_0\cdot\sigma(\mathbb P^1)$.\\ Since
\begin{equation} 0=f_0^2=a_1^2C_0^2+2a_1b_1=2a_1b_1-ea_1^2\end{equation} we get $2b_1=a_1e$,
which implies $f_0= \frac{a_1}{2}(2C_0+ef)$.\\Let $d=(2C_0+ef)\cdot\sigma(\mathbb P^1)$, since
$f_0\cdot\sigma(\mathbb P^1)=1$, $a_1d=2$. Note that $a_1\ge2$, so $a_1=2$, $b_1=e$. Hence
\begin{equation} f_0=2C_0+ef.\end{equation}\\Then we show that $f_0\cdot f=1$.\\ If
$\sigma(\mathbb P^1)= f$, then $f_0\cdot f=f_0\cdot\sigma(\mathbb P^1)=1$, we are done.\\Our aim
is to exclude the case $\sigma(\mathbb P^1)\ne f$. Since $f_0\cdot\sigma(\mathbb P^1)=1$,
$\sigma(\mathbb P^1)$ is not a multiple of $f_0$. If $\sigma(\mathbb P^1)\ne f$, by Lemma
\ref{6}, $\sigma(\mathbb P^1)=a_2C_0+b_2f$, where $a_2=1,\quad b_2\ge 0$ or $a_2\ge 2,\quad
2b_2> a_2 e$, in both cases, $(a_2,b_2)$ satisfies $a_2>0,\quad 2b_2>a_2 e$, which means
$\sigma(\mathbb P^1)$ lies in $Amp(S)$. \\
Consider the canonical divisor $K_S$: \begin {align}
K_S&=-2C_0+(2g(X)-2-e)f\\&=(-2C_0-ef)+(2g(X)-2)f\\ &=-f_0+(2g(X)-2)f.\end{align}
\\So by the adjunction formula:\begin {align} &g(\sigma(\mathbb P^1))\\&=1+
\frac{1}{2}(\sigma(\mathbb P^1)^2+\sigma(\mathbb P^1)\cdot K_S)\\&=1+\frac{1}{2}\sigma(\mathbb
P^1)^2+\frac{1}{2}\{-f_0+(2g(X)-2)f\}\cdot\sigma(\mathbb P^1).\end{align}\\ Recall a result of
Nagata ~\cite{Nagata}, Theorem 1:\\``Let $ S$ be a $\mathbb P^1 $-bundle over a smooth curve $X$
of genus $g(X)$, then the invariant $e\ge-g(X)$.''\\So $g(X)\ge -e>0$. Since $\sigma(\mathbb
P^1)$ is ample, $\sigma(\mathbb P^1)^2>0$ and $\sigma(\mathbb P^1)\cdot f >0$. We have
\begin{equation}g(\sigma(\mathbb P^1))>1-\frac{1}{2}f_0\cdot\sigma(\mathbb
P^1)=1-\frac{1}{2}>0\end{equation} which is impossible, since $\sigma(\mathbb P^1)$ is a rational curve and hence has genus zero.\\
So finally we get $\sigma(\mathbb P^1)=f$ and $f\cdot f_0=\sigma(\mathbb P^1)\cdot f_0=1$.\\
b) The case $e\ge 0$ \\We now show $f_0=C_0$.\\
By Lemma \ref{5} a), we have $f_0=a_1C_0+b_1f$ where $a_1\ge 0$ and $b_1\ge a_1e$ or
$a_1=1,b_1=0$.\\
If $b_1>a_1e$, then by Lemma \ref{6} b), $f_0$ is ample, which is impossible.\\
So $b_1=a_1e$ and $f_0=a_1(C_0+ef)$. Since $f_0^2=0$, $a_1^2(C_0^2+2e)=a_1^2 e=0$; and since
$\sigma(\mathbb P^1)\cdot f_0=1$ forces $a_1\ne 0$, we get $e=0$, so $b_1=a_1e=0$ and $f_0=a_1C_0$.
But $\sigma(\mathbb P^1)\cdot f_0=1$, so $a_1 \sigma(\mathbb
P^1)\cdot C_0=1$, which implies that $a_1$ must be 1. So $f_0=C_0$ and $f_0\cdot f=1$. \\ In
conclusion of our analysis of cases a) and b), we get that $f_0\cdot f=1$.\\
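For reference (this computation is compressed above), case b) uses only the standard intersection numbers on a ruled surface, $C_0^2=-e$, $C_0\cdot f=1$, $f^2=0$:
\[ \big(a_1(C_0+ef)\big)^2=a_1^2\big(C_0^2+2e\,C_0\cdot f+e^2f^2\big)=a_1^2(-e+2e)=a_1^2e. \]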
Define $\alpha :S \longrightarrow \mathbb P^1 \times X$ by $\alpha=(\pi,\varphi)$; we show that
$\alpha$ is an isomorphism. \\
Assume there is an irreducible curve $C'$ on $S$ contracted by $\alpha$, then $C'$ will be
contracted by both $\pi$ and $\varphi$. Since every fibre of $\varphi$ is integral, $C'$ will be
some fibre $F'$ of $\varphi$. But we already know that no fibre of $\varphi$ can be contracted
by $\pi$. So $\alpha$ contracts no irreducible curves on $S$. This implies that $\alpha$ is a
quasi-finite morphism. Since $\alpha$ is also projective, it is a finite morphism.\\Since
$deg(\alpha)=f_0\cdot f=1$, $\alpha$ is an isomorphism. So we get $S\simeq \mathbb P^1\times X$
and $\pi=pr_1.$\end {enumerate}
\end{proof}For a given surface $S$, it is natural to ask how many algebraic semigroup laws
exist on it? In the next theorem we solve this problem for minimal rational surfaces.
\begin{theorem}If $S$ is a minimal rational surface, then there are finitely many algebraic
semigroup laws on $S$ modulo $Aut(S)$. \end{theorem}
\begin{proof}Let $(S,\mu)$ be a non-trivial algebraic semigroup law; we denote its associated
contraction by the following diagram: \xymatrix{S\ar[r]|\pi & C\ar@/_/@{>}[l]|\sigma}, where $C$
is the kernel of $\mu$. Since ($S$, $\mu$) is non-trivial, $C$ is a curve. Since $S$ is
rational, $C\simeq \mathbb P^1 $ and $\pi$ is a ruling morphism. So $S\simeq Proj_{\mathbb
P^1}(\mathcal O\oplus\mathcal O(-d))$ for some $d\ne 1$. Then sections of the ruling morphism
$\pi$ are in one-to-one correspondence with surjections \xymatrix{\mathcal O\oplus\mathcal
O(-d)\ar[r]^{\quad p}& \mathcal L\ar[r]&0}, where $\mathcal L$ is an invertible sheaf on
$\mathbb P^1$. Consider the kernel of $p$, we denote it by $ \mathcal N$, then $\mathcal N$ is
also an invertible sheaf. Now there is an exact sequence \xymatrix{0\ar[r]&\mathcal
N\ar[r]&\mathcal O\oplus\mathcal O(-d)\ar[r]^{\quad p}& \mathcal L\ar[r]&0}. Observe that
$deg(\mathcal N)\le 0$; we let $\mathcal N=\mathcal O(-d_2)$ and $\mathcal L=\mathcal O(d_1)$,
where $d_2\ge 0$ and both $d_i$ are integers.\\ Case 1), $d_2>0$. \\We consider the long exact
sequence:\\
sequence:\\
$$\xymatrix{0\ar[r]&H^0(\mathbb P^1, \mathcal O (-d_2))\ar[r]&H^0(\mathbb P^1, \mathcal O\oplus
\mathcal O (-d))\ar[r]^{\quad p_1}&H^0(\mathbb P^1, \mathcal L)}.$$ Since $\dim H^0(\mathbb P^1,
\mathcal O (-d_2))=0$, $p_1$ is injective. So $\dim H^0(\mathbb P^1, \mathcal L)\ge 1$, which
implies that $deg(\mathcal L)\ge 0$. Consider \begin{align*}
Ext^1_{\mathbb P^1}(\mathcal N, \mathcal L) & = Ext^1_{\mathbb P^1}(\mathcal O(-d_2), \mathcal
O(d_1)) \\
& = Ext^1_{\mathbb P^1}(\mathcal O , \mathcal O(d_1+d_2))\\&=H^1(\mathbb P^1, \mathcal
O(d_1+d_2))\\&=H^0(\mathbb P^1, \mathcal O(-2-d_1-d_2)).
\end{align*}Since $d_1=deg(\mathcal L)\ge 0$ and $d_2>0$, $\dim H^0(\mathbb P^1, \mathcal
O(-2-d_1-d_2))=0$. So any extension of $\mathcal N$ by $\mathcal L$ is trivial, which implies
$\mathcal N \oplus \mathcal L\simeq \mathcal O \oplus \mathcal O(-d) $. Since $\mathcal O \oplus
\mathcal O(-d) $ is normalised, $deg(\mathcal L)\le 0$, but we already know that $deg(\mathcal L)\ge
0$, so $\mathcal L \simeq \mathcal O$. Observe that \begin{align*}\Lambda ^2(\mathcal
N\oplus\mathcal L)= \mathcal N\otimes \mathcal L=\mathcal O(-d).\end{align*}So $\mathcal N\simeq
\mathcal O(-d)$ and there exists a commutative diagram:\\
$$\xymatrix{0\ar[r]&\mathcal O(-d)\ar[r]\ar[d]_{\simeq}&\mathcal O\oplus\mathcal
O(-d)\ar[r]\ar[d]_{\theta}&\mathcal O\ar[r]\ar[d]_{\simeq}&0\\0\ar[r]&\mathcal N\ar[r]&\mathcal
O\oplus\mathcal O(-d)\ar[r]&\mathcal L\ar[r]&0} $$where $\theta$ is an automorphism. \\Case 2),
$d_2=0$.\\We have $\mathcal N\simeq \mathcal O$ and $\mathcal L =\mathcal O(-d)$. Consider the
surjection $$\xymatrix{\mathcal O\oplus\mathcal O(-d)\ar[r]^{\quad p}& \mathcal L\ar[r]&0},$$ then
$p\in Hom(\mathcal O \oplus \mathcal O(-d), \mathcal O(-d))=Hom(\mathcal O(-d), \mathcal
O(-d))=\mathbb C$, so $p$ is multiplication by some non-zero scalar $\lambda\in \mathbb C$.
Consider the isomorphism $\alpha=(id, \times \lambda): \mathcal O \oplus \mathcal
O(-d)\longrightarrow \mathcal O \oplus \mathcal O(-d)$, then it fits into the following commutative
diagram:\\
$$\xymatrix{\mathcal O \oplus \mathcal O(-d)\ar[r]^p \ar[d] _{\alpha}&\mathcal
L\ar[d]_{\simeq}\ar[r]&0\\\mathcal O \oplus \mathcal O(-d)\ar[r]^{p_2}&\mathcal O(-d)\ar[r]&0 },
$$where $p_2$ is the second projection. \\ In conclusion, modulo $Aut(\mathcal O \oplus \mathcal
O(-d))$, there are finitely many surjections $\mathcal O \oplus \mathcal O(-d)\longrightarrow
\mathcal L$. So modulo $Aut(S)$, there are finitely many sections of $\pi$.
\end{proof}
\section{Classification in the case $\kappa(S)=0$} In this section, we assume that the Kodaira
dimension of $S $ equals zero.\\ First we state a classification theorem, which is part of
``Enriques Classification Theorem''. We recall some useful notations: $p_g=\dim H^0(S, \mathcal
O_S(K_S))$, $q= \dim H^1(S,\mathcal O_S)$.
\begin{theorem}{(Classification theorem)}\label {3.1}\\If $\kappa(S)=0$, then $S$ is one of the
following surfaces:
\begin{enumerate}[1)]\item $K3$ surface, $p_g=1, q=0, K_S=0$.
\item Enriques surface, $p_g=0, q=0, 2K_S=0$.
\item Bielliptic surface.\\This means there are two elliptic curves $E$, $F$ and a finite
group $G$ of translations of $E$ acting also on $F$ such that $F/G\simeq \mathbb P^1$,
and $S\simeq (E\times F)/G$. In this case, $p_g=0$, $q=1$.
\item Abelian surface. In this case $p_g=1$, $q=2.$ \end{enumerate}\end{theorem}
\begin{proof}~\cite {Beauville}, Chapter 8, Theorem 2.\end{proof}
Observe that in cases 1) and 2), $q=0$; in cases 3) and 4), $q>0$. We consider the case $q>0$ first,
then in the second part of this section, we consider the case $q=0$.
\subsection{The case $q>0$}First let us recall the classification of bielliptic surfaces in
the following lemma. The main results of this subsection are Theorem \ref{M2} and Theorem \ref{M3}.
\begin{lemma}With the notation of Theorem \ref{3.1}, every bielliptic surface is of one of the
following types:
\begin{enumerate}[1)]
\item $(E\times F)/G$, $G=\mathbb Z/{2\mathbb Z}$, acting on $F$ by $x\mapsto -x$.
\item $(E\times F)/G$, $G=\mathbb Z/{2\mathbb Z}\oplus \mathbb Z/{2\mathbb Z}$, acting on $F$ by
$x\mapsto -x, x\mapsto x+\varepsilon$, where $\varepsilon$ is a point of order $2$ of $F$.
\item $(E\times F_i)/G$, where $F_i=\mathbb C/(\mathbb Z\oplus \mathbb Zi)$, and $G=\mathbb
Z/{4\mathbb Z}$, acting on $F_i$ by $ x \mapsto i x$.
\item $(E\times F_i)/G$, $G=\mathbb Z/{4\mathbb Z}\oplus \mathbb Z/{2\mathbb Z}$, acting on $F_i$
by $x\mapsto ix,\quad x\mapsto x+\frac{1+i}{2}$.
\item $(E\times F_{\rho})/G$, where $F_{\rho}=\mathbb C/({\mathbb Z\oplus \mathbb Z\rho})$, $\rho
$ is a primitive cube root of unity, $G=\mathbb Z/{3\mathbb Z}$ acting on $F_{\rho}$ by
$x\mapsto \rho x$.
\item $(E\times F_{\rho})/G$, and $G=\mathbb Z/{3\mathbb Z}\oplus \mathbb Z/{3\mathbb Z}$, acting
on $F_{\rho}$ by $x\mapsto \rho x,\quad x\mapsto x+\frac{1-\rho}{3} $.
\item $(E\times F_{\rho})/G$, and $G=\mathbb Z/{6\mathbb Z}$, acting on $F_{\rho}$ by $x\mapsto -\rho
x$.\end{enumerate}\end{lemma}
\begin{proof}~\cite{Beauville}, Chapter 6, Proposition 6.20.\end{proof}
\begin{theorem}\label{M2}Keep the notation of Lemma 3.2. \\If $S$ is bielliptic, and
\xymatrix{S\ar[r]|\pi &C\ar@/_/@{>}[l]|\sigma}, then:
\begin{enumerate}[a)] \item $E/G\simeq C$.\item the action of $G$ on $F$ has a fixed point
$P$.\item
$S$ is of type 1), 3), 5) or 7). Moreover, $\pi=p_1$ is the first projection and $\sigma:
E/G\longrightarrow S\simeq (E\times F)/G$ is of the form $\sigma(x)=(x,P)$.
\end{enumerate} \end{theorem}
\begin{theorem}\label{M3}Let $S$ be an abelian surface, and \xymatrix{S\ar[r]|\pi
&C\ar@/_/@{>}[l]|\sigma} . Then $S\simeq C\times E$ for some elliptic curve $E$, $\pi=pr_1$ is the
first projection and $\sigma: C\longrightarrow C\times E$, is defined by $x\longmapsto (x,y_0)$ for
some $y_0\in E$.\end{theorem}
In the following, we analyze the Albanese variety $Alb(S)$ and show that $C$ is elliptic. Then we
use this result to prove Theorem \ref{M3}. For Theorem \ref{M2}, we reduce our problem to the
existence of $G$-equivariant morphisms from $E$ to $F$. Then by a detailed calculation, we find
that the action of $G$ on $F$ must have a fixed point.\\
In the following lemma, we show that $C$ is elliptic.\\Before starting the proof, we recall the
definition and the universal property of the Albanese variety (see ~\cite{Beauville}, Chapter 5,
Remark 14):\\
Let $X$ be a smooth projective variety. There exists an abelian variety $A$ and a morphism
$\alpha_X:X\longrightarrow A$ with the following universal property:
for any complex torus $T$ and any morphism $f:X\longrightarrow T$, there exists a unique morphism
$\tilde f: A\longrightarrow T$ such that $\tilde f\circ \alpha_X=f$. The abelian variety $A$ is
called the Albanese variety of $X$ and denoted by $Alb(X)$; the morphism $\alpha_X$ is called the
Albanese morphism.
\begin{lemma} If $q(S)>0$ and there exists a curve $C$ such that \xymatrix{S\ar[r]|\pi
&C\ar@/_/@{>}[l]|\sigma}, then $C$ must be elliptic. \end{lemma}
\begin{proof}By Theorem \ref{3.1}, $S$ is bielliptic or abelian.\\
a) The case when $S$ is abelian. Firstly, $C$ is not rational, because an abelian variety
contains no rational curve.
Now we prove $g(C)\le 1$. \\Recall that for an arbitrary smooth projective variety $X$, $Alb(X)$,
as a group, is generated by the image $\alpha_X(X)$. \\So the surjective morphism
$\pi:S\longrightarrow C$ induces a surjective morphism $\tilde \pi :Alb(S)\longrightarrow
Alb(C)$ such that the following diagram:\\
$$ \xymatrix {S\ar@{->>}[r]^{\pi}\ar[d]^{\alpha_S}\ar[rd]|{\tilde \pi
\circ\alpha_S}&C\ar[d]^{\alpha_C}\\Alb(S)\ar@{->>}[r]^{\tilde\pi}&Alb(C)}\\$$is commutative.
Since $\alpha_S$ is an isomorphism, $\tilde \pi \circ\alpha_S$ is surjective. Note that it factors
through the curve $C$, so $g(C)= \dim(Alb(C))\le1$.\\
We conclude that $C$ must be elliptic.\\
b) The case $S$ is bielliptic.\\
Using the notation of Theorem 3.1, we write $S$ as $(E\times F)/G$.\\
There is a diagram:\\
$$\xymatrix{E\times F\ar[r]^{\varphi\quad} & (E\times F)/G \ar[r]^{\qquad \pi} & C},$$ where
$\varphi$ is the quotient morphism. Setting $f=\pi\circ\varphi$, we get a morphism from the
abelian surface $E\times F$ to $C$. Then $f$ induces a surjective morphism $\tilde f:E\times
F\longrightarrow Alb(C) $, such that the diagram:\\
$$ \xymatrix{E\times F \ar[r]^f \ar@{->>}[d]^{\tilde f}& C \ar[dl]^{\alpha_C} \\ Alb(C)}$$\\is
commutative.
Since $f$ factors through the curve $C$, $\dim(Alb(C))\le 1$, so $g(C)\le 1$.\\
If $g(C)=0$, we consider the cartesian square:\\
$$ \xymatrix{Y\ar[r]^{\sigma'\quad}\ar[d]&E\times F\ar[d]^{\varphi}\\ C\simeq \mathbb P^1
\ar[r]^{\sigma\quad}& (E\times F)/G}$$\\ Since $G$ acts freely on $E\times F$, the
quotient morphism $\varphi:E\times F\longrightarrow (E\times F)/G$ is \'{e}tale, so $Y$ is
also an \'{e}tale cover of $\mathbb P^1$. Because $\mathbb P^1$ is simply-connected, $Y$ must
be a disjoint union of finitely many copies of $\mathbb P^1$.
So there is a closed immersion $\sigma': \coprod \mathbb P^1\longrightarrow E\times F$. Since
$E\times F$ is an abelian surface, and there are no rational curves on it, we get a
contradiction. So $C$ is an elliptic curve and this completes our proof.
\end{proof}Now let us complete the proof of Theorem \ref{M3}.
\begin{proof} By the previous lemma, $C$ is an elliptic curve. Pick a point $P_0$ on $C$ as the
origin, so that $C$ becomes a 1-dimensional algebraic group, and pick $\sigma(P_0)$ as the origin
of $S$. Then $\pi$ and $\sigma$ both send origin to origin, hence are homomorphisms of abelian
varieties. Let $E=ker(\pi)$; then $s\longmapsto (\pi(s),\, s-\sigma(\pi(s)))$ gives an isomorphism
$S\simeq C\times E$.\end{proof}
In the rest of this part, we assume $S$ is bielliptic. \begin{lemma}If
$\xymatrix{S\ar[r]|\pi &C\ar@/_/@{>}[l]|\sigma}$, and $g(C)=1$, then $C\simeq Alb(S)$ and
$\alpha_C\circ \pi=\alpha_S$. \end{lemma}
\begin{proof}
By the universal property of the Albanese variety, $\pi$ and $\sigma$ induce the diagram:
$$\xymatrix{Alb(S)\ar[r]|{\pi' }&Alb(C)\ar@/_/@{>}[l]|{\sigma'}}$$ \\such that $ \pi ' \circ
\sigma '=id_{Alb(C)}$. So $Alb(S)\simeq Alb(C)\times ker(\pi').$ Since $C $ is elliptic, $\dim
Alb(C)=1$ and $ Alb(C)\simeq C $. Observe that $q(S)=\dim Alb(S)=1$, so $\dim Alb(S)=\dim Alb(C)$.
Then $Alb(S)\simeq Alb(C)\simeq C$. \end{proof}
\begin{lemma} Using the notation of Theorem \ref{3.1}, we write $S$ as $(E\times F)/G$. If
\xymatrix{S\ar[r]|\pi &C\ar@/_/@{>}[l]|\sigma}, then $C\simeq E/G$ and $\pi=p_1$. \end{lemma}
\begin{proof}By Lemma 3.5, $C$ is elliptic, so $C$ satisfies all the assumptions of Lemma 3.6, hence
$C\simeq Alb(S)$. \\
Recall that $G$ is a group of translations of $E$. Also $E/G$ is elliptic, so that $Alb(E/G)=E/G$.
By the universal property of the Albanese variety, the first projection $p_1$ induces a morphism
$\widetilde {p_1}$ such that the diagram\\
$$\xymatrix{S \ar[r]^{p_1}\ar[d]_{\pi}&E/G\\C\ar[ur]_{\widetilde {p_1 }}}$$ is commutative.\\ Note
that $\widetilde {p_1}$ is a finite morphism between projective curves and $p_1$ has connected
fibres, so $deg(\widetilde{p_1})=1$. Then $\widetilde{p_1}$ is an isomorphism, which completes our
proof. \end{proof}
In the following lemma, we reduce our problem to the existence of $G$-morphisms from $E$ to $F$.
\begin{lemma} There exists a section $\sigma$ of $p_1:S\longrightarrow E/G$ if and only if there
exists a $G$-morphism $h:E\longrightarrow F$.\end{lemma}
\begin{proof} The ``if'' part is trivial, we now prove the ``only if'' part.\\
The quotient morphism $\varphi: E\longrightarrow E/G$ induces a cartesian square:\\
$$\xymatrix{E\times F\ar[r]^{\tilde\varphi}\ar[d]^{p_1'}&(E\times
F)/G\ar[d]^{p_1}\\E\ar[r]^{\varphi}&E/G}$$where $p_1'$ is the first projection, and $\tilde\varphi$
is the quotient morphism. \\
By the above cartesian square, any section $\sigma$ of $p_1$ induces a section of $p_1'$. We denote
it by $\delta:E\longrightarrow E\times F$; for all $ x\in E$, $\delta(x)=(x,h(x))$, where
$h=p_2\circ\delta$.\\
Now we illustrate all the morphisms in the following commutative
diagram:\\$$\xymatrix{E\ar[dr]|{\delta}\ar[drr]|{\sigma\circ\varphi}\ar[ddr]|{id_E}\\ & E\times F
\ar[r]^{\tilde\varphi\quad} \ar[d]|{p_1'}& (E\times F)/G\ar[d]|{p_1}\\ &E\ar[r]^{\varphi\quad} & E/G
\ar@/_/@{>}[u]_{\sigma}}$$\\and verify that $h$ is a $G$-morphism.
For any $x\in E$ and $ g\in G$, we denote the action of $g $ on $x$ by $g\cdot x$. We aim to show
that $\delta(g\cdot x)=g\cdot\delta(x)$. It suffices to verify that \begin{equation}
\sigma\circ\varphi(g\cdot x)=g\cdot(\sigma\circ\varphi(x)).\end{equation}Since $G$ acts on $(E\times
F)/G$ trivially, \begin{equation} g\cdot(\sigma\circ\varphi(x))=\sigma\circ\varphi(x).\end{equation}
Observe that $\varphi$ is a quotient morphism, so \begin{equation} \varphi(g\cdot
x)=\varphi(x).\end{equation} So \begin{equation} \sigma\circ\varphi(g\cdot
x)=\sigma\circ\varphi(x)=g\cdot(\sigma\circ\varphi(x)).\end{equation} Equation (21) holds, so
$\delta$ is a $G$-morphism, hence so is $h$. \end{proof}
Now we show that the action of $G$ on $F$ has a fixed point.
\begin{lemma}If $p_1: S\longrightarrow E/G$ has a section $\sigma$, then the action of $G$ on $F$
has a fixed point.\end{lemma}
\begin{proof} By Lemma 3.8, $\sigma$ induces a $G$-morphism $h: E\longrightarrow F$. We write it as
$h(x)=Ax+a$ for all $ x\in E$, where $A$ is the linear part of $h$. Since $G$ acts on $E$
by translations, we denote its action by $g\cdot x= x+x_g$ for all $ x\in E$. We denote the
action of $G$ on $F$ by $g\cdot y=l(y)+ \varepsilon _g$ for all $ y \in F$, where $l$ is the
linear part.\\
Since $h(g\cdot x)=g\cdot h(x)$, we have \begin{equation}h(x+x_g)=g\cdot (Ax+a).\end{equation} The
left hand of (25) equals \begin{equation}h(x+x_g)=A(x+x_g)+a.\end{equation} The right hand of (25)
equals \begin{align}&g\cdot(Ax+a)\\=&l(Ax+a)+\varepsilon _g\\=&(l(Ax)+\varepsilon
_g)+(l(a)+\varepsilon _g)-\varepsilon _g\\=&g\cdot (Ax)+g\cdot a-\varepsilon _g.\end{align}
So \begin{equation}A(x+x_g)+a=g\cdot (Ax)+g\cdot a-\varepsilon _g.\end{equation}
Let $x=0$ in the above equation (31), we have \begin{equation}A(x_g)+a=(g\cdot 0-\varepsilon
_g)+g\cdot a.\end{equation} But \begin{equation}g\cdot 0=l(0)+\varepsilon _g=\varepsilon
_g.\end{equation} So \begin{equation}A(x_g)+a=g\cdot a. \end{equation} Subtracting equation (34)
from equation (31), we get \begin{equation}Ax=g\cdot Ax-\varepsilon _g, \end{equation} which
implies that \begin{equation}g\cdot Ax=Ax+\varepsilon _g.\end{equation}\\
If $h:E\longrightarrow F$ is constant, i.e. $h$ maps $E$ to a single point $P\in F$, then the
$G$-equivariance $g\cdot h(x)=h(g\cdot x)$ implies that $g\cdot P= P$, which means the action of
$G$ on $F$ has a fixed point.\\
Otherwise $h$ is surjective. Then its linear part $A$ is a linear automorphism of the complex plane
which maps the lattice of $E$ into the lattice of $F$. So by equation (36), for all $ y \in F$, we
have $g\cdot y=y+\varepsilon _g $, which means that $G$ acts on $F$ by translations. \\
Note that for all types of bielliptic surfaces in Lemma 3.2, the action of $G$ on $F$ has a
non-trivial linear part. So $h$ is not surjective, hence it is a constant morphism, and the action
of $G$ on $F$ has a fixed point. \\
\end{proof}
Observe that, for surfaces of types $2)$, $4)$, $6)$ in Lemma 3.2, $G$ has no fixed point on $F$.
This completes the proof of Theorem \ref{M2}.
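As a concrete check of this fixed-point dichotomy: in type 1), the fixed points of the involution on $F$ are the solutions of
\[ x=-x,\qquad\text{i.e.}\qquad 2x=0, \]
namely the four points of order dividing $2$, so a fixed point exists; in type 2), the generator $x\mapsto x+\varepsilon$ is a translation by a non-zero point, hence fixes no point of $F$, so $G$ has no common fixed point.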
\\Now recall a known result (see ~\cite {Michel. Brion}, Section 4.5, {Remark 16}):
Consider the functor of composition laws on a variety $S$, i.e. the contravariant functor from
schemes to sets given by $T\longmapsto Hom(S\times S \times T, S)$, then the families of algebraic
semigroup laws yield a closed subfunctor, and this subfunctor is represented by a closed subscheme
$SL(S)\subseteq Hom(S\times S, S)$. \\For a given algebraic law $\mu_{t_0}$, we denote its
associated contraction by $\xymatrix{S\ar[r]|\pi &C\ar@/_/@{>}[l]|\sigma}$ and its associated
abelian variety by $A$, then the connected component of $\mu_{t_0}$ in $SL(S)$ is identified with
the closed subscheme of $Hom(C, S)\times A$ consisting of those pairs $(\varphi, g)$ such that
$\varphi$ is a section of $\pi : S\longrightarrow C$. We denote the scheme of sections of $\pi$ by
$Mor_{\pi}(C, S)$; it is isomorphic to an open subscheme of $Hilb(S)$ by assigning every section
to its image in $S$. By the local study of $Hilb(S)$, for any section $\sigma$ of $\pi$, the
dimension of the tangent space of $Hilb(S)$ at $\sigma(C)$ is $h^0(\sigma(C), \mathcal
N_{\sigma(C)/S})$, and the obstruction lies in $H^1(\sigma(C), \mathcal N_{\sigma(C)/S})$ (where
$\mathcal N_{\sigma(C)/S}$ is the normal bundle of $\sigma (C)$ in $S$).
(For the discussion of local study of $Hilb(S)$, see ~\cite{Mumford2}.)\\In the following, we want
to study the structure of $Mor_{\pi}(C,S)$.
In the case $\kappa(S)=0$, we have the following two theorems.
\begin{theorem} Assume that $S$ is a bielliptic surface, if there is a curve $C$ satisfying
\xymatrix{S\ar[r]|\pi &C\ar@/_/@{>}[l]|\sigma}, then $Mor_{\pi}(C, S)$ consists of reduced isolated
points. \end{theorem}
\begin{theorem} If $S$ is an abelian surface, and there is a curve $E$ satisfying
\xymatrix{S\ar[r]|\pi & E\ar@/_/@{>}[l]|\sigma}, then $Mor_{\pi}(E,S)= S/E =ker (\pi)$.\end{theorem}
We postpone the proofs of Theorem 3.10 and Theorem 3.11 to Section 6. There we will determine the
structure of $Mor_{\pi}(C,S)$ for any smooth elliptic fibration $\pi$ in a more uniform way, see
Theorem 6.1. Then Theorem 3.10 and Theorem 3.11 are just direct corollaries of Theorem 6.1.
\subsection{The case $q(S)=0$}{The main result of this part is Theorem \ref {M4}.
\begin{lemma}If $q(S)=0$, and there exists \xymatrix{S\ar[r]|\pi &C\ar@/_/@{>}[l]|\sigma} , then $C$
is rational and $\pi$ is an elliptic fibration.\end{lemma}
\begin{proof} If $q(S)=0$, then by Theorem \ref {3.1}, $S$ is $K3$ or Enriques. In both cases,
$K_S\sim_{num}0$. Pick an arbitrary general fibre $F$ of $\pi$. By the genus formula, we have
\begin{equation}g(F)=1+\frac{1}{2}(F\cdot K_S+ F^2)=1,\end{equation} so $\pi$ is an elliptic
fibration. On the other hand, $\pi$ induces a surjective morphism $\tilde\pi: Alb(S)\longrightarrow
Alb(C)$. So \begin{equation}g(C)=\dim Alb(C)\le \dim Alb(S)=q(S)=0.\end{equation}Then $C$ is
rational. \end{proof}
\begin{lemma}Every elliptic fibration $\pi:S\longrightarrow \mathbb P^1$ of an Enriques surface $S$
has exactly two multiple fibres, $2F$ and $2F'$. \end{lemma}
\begin{proof}~\cite {BPV}, Chapter 8, Lemma 17.2.\end{proof}
\begin{theorem}\label {M4}If $S$ is an Enriques surface or a general $K3$ surface, there is no
\xymatrix{S\ar[r]|\pi &C\ar@/_/@{>}[l]|\sigma}. (Hence there is no non-trivial semigroup structure
on $S$.)\end{theorem}
\begin{proof}If $S$ is an Enriques surface and such a diagram exists, then $C$ is rational and
$\pi$ is an elliptic fibration. By the previous lemma, $\pi$ has a multiple fibre $2F$. For a
general smooth fibre $F_0$, $\sigma(C)\cdot F_0=1$; but since all fibres of $\pi$ are numerically
equivalent, $\sigma(C)\cdot F_0=\sigma(C)\cdot 2 F\ge2$, which is a contradiction. \\
For the case where $S$ is a general $K3$ surface, see ~\cite{Daniel Huybrechts}, Proposition 7.1.3.
\end{proof}}
\begin{theorem} For any $K3$ surface $S$ and any fibration $\pi:S\longrightarrow C$,
$Mor_{\pi}(C,S)$ consists of reduced isolated points.
\end{theorem}\begin{proof}Since $q(S)=0$, $C\simeq \mathbb P^1$. Let us determine the structure of
$Mor_{\pi}(\mathbb P^1, S)$. For any section $\sigma$ of $\pi$, the tangent space of $Hilb(S)$ at
$\sigma(\mathbb P^1)$ is $H^0(\mathbb P^1, \mathcal N_{\sigma(\mathbb P^1)/S})$, where $ \mathcal
N_{\sigma(\mathbb P^1)/S}$ is the normal bundle of $\sigma(\mathbb P^1)$ in $S$. Since $\mathbb
P^1$ and $S$ are both smooth, by the adjunction formula, $\mathcal N_{\sigma(\mathbb P^1)/S} =
\omega^{-1}_S\mid _{\sigma(\mathbb P^1)}\otimes \omega _{\mathbb P^1}$. By the definition of $K3$,
$\omega_S$ is trivial, so $ \mathcal N_{\sigma(\mathbb P^1)/S}$ is $\mathcal O(-2)$, and $\dim
H^0(\mathbb P^1, \mathcal N_{\sigma(\mathbb P^1)/S})=0$. So $ \dim T_{\sigma(\mathbb P^1)}(Hilb
(S))=0$, hence the dimension of $Mor_{\pi}(\mathbb P^1, S)$ at $\sigma(\mathbb P^1)$ is zero.
\end{proof}
\section{The case $\kappa(S)=1$}{
In this section, we always assume $\kappa(S)=1$. The main result is Theorem \ref{M5}.
\begin{lemma} There exists an elliptic fibration $\varphi: S\longrightarrow B$, where $B$ is a
smooth projective curve.\end{lemma}
\begin{proof}~\cite {Beauville}, Chapter 9, Proposition 2.\end{proof}
So we have the following diagram:\\
$$\xymatrix{S\ar[r]|\pi\ar[d]_{\varphi} &C\ar@/_/@{>}[l]|\sigma\\B}.$$
\textbf{Remark 2):} If $\pi=\varphi $, then $\varphi$ is an elliptic fibration with a section. This
implies that the generic fibre $F$ of $\varphi$ is a smooth curve of genus 1 over the function
field of $B$, and the set of its rational points is nonempty. So $F$ is an elliptic curve and it can
be embedded into $\mathbb P^2_{k(B)}$ as a cubic curve. For any two minimal surfaces $S_i$ and two
elliptic fibrations $\varphi _i: S_i\longrightarrow B$, if their generic fibres are isomorphic, then
the two fibrations $\varphi _i$ are isomorphic. So to give a minimal surface $S$ and an elliptic
fibration $\varphi: S\longrightarrow B$ is equivalent to give a homogeneous cubic equation in
$\mathbb P^2_{k(B)}$. So in what follows, we focus on the case $\pi \ne \varphi$.
\begin{theorem}\label{M5}If $g(C)\ge 1$, then $S\simeq B\times C$ and $\varphi$ is the first
projection, $\pi$ is the second projection. \end{theorem}
The key point in proving Theorem \ref{M5} is to show that $\sigma $ maps $C$ into a fibre of
$\varphi$. First we want to determine the singular fibres of the elliptic fibration $\varphi$. For
this, we recall ``Kodaira's table of singular fibres''.
\begin{lemma}{(Kodaira's table of singular fibres)}\\
Let $f:S\longrightarrow \Delta$ be an elliptic fibration over the unit disk $\Delta$, such that all
fibres $S_x, x\ne0$, are smooth. We list all possibilities for the central fibre $S_0$:
\begin{enumerate}[a)]
\item $S_0$ is irreducible. Then $S_0$ is either smooth elliptic, or rational with a node, or
rational with a cusp.
\item $S_0$ is reducible. Then every component $C_i$ of $S_0=\sum n_iC_i$ is a smooth rational
curve with $C_i^2=-2$.
\item $S_0$ is a multiple fibre; we write it as $S_0=mS_0'$. Then $S_0'$ is smooth elliptic, or
rational with a node, or of type b). \end{enumerate}\end{lemma}
\begin{proof} ~\cite {BPV}, Chapter 5, Proposition 8.1.\end{proof}
\begin{lemma}If $g(C)\ge1$, then $g(C)$ must be $1$. Moreover any fibre of $\varphi$ is a multiple
of a smooth elliptic curve.\end{lemma}
\begin{proof} Assume $g(C)\ge 2$. For an arbitrary smooth fibre $F$ of $\varphi$, \begin{equation}
1=g(F)<g(C),\end{equation} so $\pi$ contracts $F$. By the ``Rigidity lemma'', $\pi$ will factor
through $\varphi$, which contradicts our assumptions. So $g(C)=1$. \\ Let $F_0$ be a singular
fibre. If $F_0$ is not a multiple of a smooth elliptic curve, then by ``Kodaira's table of singular
fibres'', $F_0$ is a sum of irreducible rational curves. Then $F_0$ will be contracted by $\pi$, and
by the ``Rigidity lemma'' again, $\pi$ will factor through $\varphi$. This contradicts our
assumptions.
So every fibre of $\varphi$ is a smooth elliptic curve, or a multiple of a smooth elliptic curve.
\end{proof}
In view of Lemma 4.4, the elliptic fibration $\varphi$ is ``almost smooth'': its singular fibres
are only multiples of smooth curves. In the following lemma, we will see that after performing a
base change, we can eliminate all the multiple fibres, and get a smooth elliptic fibration.
\begin{lemma}Let $p: S\longrightarrow B$ be a morphism from a surface onto a smooth curve
whose fibres are multiples of smooth curves. Then there are a ramified Galois cover
$q:B'\longrightarrow B$ with Galois group $G$, a surface $S'$ and a commutative cartesian square: \\
$$\xymatrix{S'\ar[r]^{q'}\ar[d]^{p'}&S\ar[d]^{p}\\B'\ar[r]^q &B}$$ such that the action of $G$ on
$B'$ lifts to $S'$, $q'$ induces an isomorphism $S'/G\simeq S$ and $p'$ is smooth.\end{lemma}
\begin{proof} Just use the cyclic covering trick, ~\cite {Beauville}, Chapter 6, Lemma 7.\end{proof}
In some cases, we can even get a trivial elliptic fibration, after performing successive base
changes.
\begin{lemma}Let $p:S\longrightarrow B$ be a smooth morphism from a surface to a curve, and $F$ a
fibre of $p$. Assume either that $g(B)=1$ and $g(F)\ge 1$, or that $g(F)=1$. Then there exists an
\'{e}tale cover $B'$ of $B$, such that the fibration $p':S'=S\times _B B'$ is trivial, i.e.
$S'\simeq B'\times F$. Furthermore, we can take the cover $B'\longrightarrow B$ to be Galois with
group $G$, say, so that $S\simeq (B'\times F)/G$.\end{lemma}
\begin{proof} ~\cite {Beauville}, Chapter 6, Proposition 8.\end{proof}
\begin{lemma} Assume that $S$ and $C$ satisfy all the assumptions of Theorem \ref{M5}.
Then $\sigma $ maps $C$ into a fibre of $\varphi$.\end{lemma}
\begin{proof} If not, $\varphi\circ\sigma:C\longrightarrow B$ will be a finite morphism between
curves. By Hurwitz's Theorem, $g(B)\le 1$. Let
$\varphi_C=\varphi|_{\sigma(C)}:\sigma(C)\longrightarrow B$; we now determine the degree of the
ramification divisor $R$ of $\varphi_C$. \\
If $g(B)=1$, assume that $\varphi$ has a multiple fibre $F_b$, and write $F_b=mF_b'$. Let
$\sigma(C)$ intersect with $F_b'$ at a point $P$, then the ramification index of $\varphi_C$ at $P$
satisfies \begin{equation}e_P\ge m>1.\end{equation} Then we get a ramified morphism $\varphi_C$
between elliptic curves, which is impossible.\\So $\varphi: S\longrightarrow B$ is a smooth elliptic
fibration.\\
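The impossibility in the case $g(B)=1$ above is a standard consequence of Hurwitz's formula: with $g(C)=g(B)=1$,
\[ 2g(C)-2=deg(\varphi_C)\,(2g(B)-2)+deg(R) \]
forces $deg(R)=0$, while the ramification point $P$ with $e_P>1$ gives $deg(R)\ge e_P-1>0$.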
Then by Lemma 4.6, there exists an \'{e}tale cover $B'\longrightarrow B$ such that $S'=B'\times_BS$
is a trivial elliptic fibration.
Then\begin{equation}Kod(S')=Kod(B'\times F)=Kod(B')+Kod(F)\end{equation} where $F$ is a general
fibre of $\varphi$. Since $B'$ and $F$ are both elliptic curves, $Kod(S')=0$. But $Kod(S')\ge
Kod(S) \ge 1$ , which is a contradiction. \\
So $g(B)=0$.
Assume that at points $b_1,\cdots,b_s$ of $B$, $\varphi$ has multiple fibres $F_i=m_iF_i'$ ($1\le
i\le s$) and each $F_i'$ intersects with $\sigma (C)$ at points $P_{i,1},\cdots ,P_{i,j_i}$ with
multiplicities $n_{i,1},\cdots, n_{i,j_i}$. Then $deg(R)$ satisfies:
\begin {align}
deg(R)&\ge \sum_{i=1}^{s}\sum_{k=1}^{j_i}(e_{P_{i,k}}-1)\\ & =
\sum_{i=1}^{s}\sum_{k=1}^{j_i}(m_in_{i,k}-1)\\&=\sum_{i=1}^{s}\{m_i(\sum_{k=1}^{j_i}n_{i,k})-j_i\}\\
&=\sum_{i=1}^{s}(m_iF_i'\cdot
\sigma(C)-j_i)\\&=\sum_{i=1}^{s}(F_i\cdot\sigma(C)-j_i)\\&=\sum_{i=1}^{s}(deg(\varphi_C)-j_i)\end
{align}
Since $\sum_{k=1}^{j_i}n_{i,k}=F_i'\cdot \sigma(C)$ and each $n_{i,k}\ge 1$, we have \begin{equation}
j_i\le F_i'\cdot \sigma(C)=deg(\varphi_C)/m_i.\end {equation} So \begin{equation}
deg(R)\ge\sum_{i=1}^{s}(deg(\varphi_C)-j_i)\ge \sum_{i=1}^{s}deg(\varphi_C)(1-\frac{1}{m_i}).\end
{equation} Using Hurwitz's Theorem, we calculate $deg(R)$ as:\begin{align}
&deg(R)\\=&2g(C)-2-deg(\varphi_C)(2g(B)-2)\\=&2deg(\varphi_C).\end{align} Now by comparing (49) and
(52), we get \begin{equation}\sum_{i=1}^{s}(1-\frac{1}{m_i})-2\le0.\end {equation}
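Spelled out, this comparison combines the two bounds on $deg(R)$ and divides by $deg(\varphi_C)>0$:
\[ \sum_{i=1}^{s}deg(\varphi_C)\Big(1-\frac{1}{m_i}\Big)\le deg(R)=2\,deg(\varphi_C)
\ \Longrightarrow\ \sum_{i=1}^{s}\Big(1-\frac{1}{m_i}\Big)\le 2. \]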
Recall Lemma 7.1 in ~\cite {Wall}: for any elliptic fibration $\varphi$, we define an invariant by
\begin{equation}
\delta(\varphi)=\chi(\mathcal O_S)+2g(B)-2+\sum_{i=1}^{s}(1-\frac{1}{m_i}),\end {equation} then
$Kod(S)=1$ if and only if \begin{equation}\delta(\varphi)>0 .\end {equation}In our situation:
\begin{equation}\delta(\varphi)=\chi(\mathcal O_S)-2+\sum_{i=1}^{s}(1-\frac{1}{m_i}).\end {equation}
By the inequality (53), $\delta(\varphi)\le \chi(\mathcal O_S) $.\\
Now let us determine $\chi(\mathcal O_S)$. Recall Noether's Formula:\begin{equation}\chi(\mathcal
O_S)=\frac{1}{12}(K_S^2+\chi_{top}(S)).\end {equation}\\Since $S$ is minimal and $Kod(S)=1$, we have
$K_S^2=0$. So \begin{equation}\chi(\mathcal O_S)=\frac{1}{12}\chi_{top}(S).\end {equation}
Recall that
\begin{equation}\chi_{top}(S)=\chi_{top}(B)\chi_{top}(F)+\sum_{i=1}^{s}(\chi_{top}(F_{i})-\chi_{top}(F)),
\end{equation} where $F_{i}$ are the singular fibres and $F$ is a general smooth fibre. In our case,
$F_i$ is a multiple of a smooth elliptic curve. So $\chi_{top}(F_{i})=\chi_{top}(F)=0$, hence
$\chi_{top}(S)=0$, which implies \begin{equation}\delta(\varphi)\le 0.\end {equation} This
contradicts the assumption that $Kod(S)=1$. \\
In conclusion, the assumption that $\varphi\circ\sigma:C\longrightarrow B$ is a finite morphism
always leads to a contradiction, so $\sigma$ maps $C$ into a fibre of $\varphi$.
\end{proof}
Now we begin the proof of Theorem \ref{M5}.
\begin{proof} We first consider the case when $\varphi$ is smooth.\\Define
$\alpha=(\pi,\varphi):S\longrightarrow C\times B$; we show that $\alpha$ is an isomorphism. By Lemma
4.7, $\sigma$ maps $C$ into a fibre $F_b$ of $\varphi$. If $\varphi$ is smooth, $F_b$ is integral,
so $\sigma (C)=F_{b}$. For an arbitrary fibre $F$ of $\pi$, since $\sigma$ is a section of $\pi$ ,
$F\cdot F_{b}=F\cdot\sigma(C)=1$. Then $\alpha$ is birational. If there exists an irreducible
curve $X$ on $S$ contracted by $\alpha$, then $X$ will be a fibre of $\varphi$, because $\varphi$
has smooth integral fibres. So $\pi$ contracts one fibre of $\varphi$. By the ``Rigidity lemma'',
$\pi$ will factor through $\varphi$, which contradicts our assumptions. So $\alpha$ is a birational,
quasi-finite, projective morphism, hence an isomorphism. \\Now we deal with the general
case, in which $\varphi$ may have singular fibres. By Lemma 4.5, there exists
$q:B'\longrightarrow B$, such that $\varphi' :S'\simeq B'\times_B S\longrightarrow B'$ is smooth.
Note that $\varphi\circ \sigma:C\longrightarrow B$ maps $C$ to a single point $b\in B$. Then
there exists a constant morphism $f:C\longrightarrow B'$ lifting $\varphi\circ\sigma$, such that the
diagram\\
$$\xymatrix{&B'\ar[d]^q\\C\ar[r]^{\varphi\circ\sigma}\ar@{.>}[ur]^f&B}$$ is commutative.
By the universal property of the fibre product,
there exists a morphism $\sigma':C\longrightarrow S'$ satisfying $q'\circ \sigma'=\sigma$ and
$\varphi'\circ \sigma'=f$. \\We illustrate all the morphisms in the following diagram:\\
$$\xymatrix{S'\ar[r]_{q'}\ar[d]_{\varphi'}&S\ar[d]^{\varphi}\ar[r]|\pi &C\ar@/^/@{>}[l]|\sigma
\ar@/_/@{>}[ll]_{\sigma'}\\B'\ar[r]^q&B}$$
and verify that $\sigma'$ is a section of $\pi\circ q'$:\begin{equation}(\pi\circ
q')\circ\sigma'=\pi\circ(q' \circ\sigma')=\pi\circ\sigma=id_C.\end{equation}
So there is a diagram:\\
$$\xymatrix{S'\ar[r]|{\pi'}\ar[d]_{\varphi'} &C\ar@/_/@{>}[l]|{\sigma'}\ar[dl]^f\\B'}.$$ \\
such that \begin{equation}
\pi'=\pi\circ q'\end{equation}\begin{equation}\varphi'\circ \sigma'=f.\end{equation} \\Then
$\varphi'\circ \sigma'(C)=f(C)=\{b'\}$, which means $\sigma'$ maps $C$ into a fibre $F_{b'}$ of
$\varphi'$. Now $S'$ is a smooth fibration over $C$ and $\sigma'$ maps $C$ into a fibre $F_{b'}$ of
$\varphi'$. \\
According to what we have discussed about our problem in the smooth case,
$$\alpha'=(\pi',\varphi'):S'\longrightarrow C\times B'$$ is an isomorphism.\\
Let $G$ be the Galois group of the covering $q:B'\longrightarrow B$, then $S\simeq S'/G\simeq
(C\times B')/G$ and the diagram:\\
$$\xymatrix{S'\simeq C\times B'\ar[d]^{q'}\ar[dr]^{p_1}\\S\simeq (C\times B')/G
\ar[r]^{\qquad\quad\pi}&C}$$\\is commutative.
We denote the action of $G$ on $C\times B'$ by \begin{equation}
g(x,b')=\big(\phi_{b'}(g)(x), g b'\big).\end{equation} Since $p_1$ factors through the quotient
morphism $q'$, \begin{equation}p_1\big(g(x,b')\big)=p_1\big((x,b')\big).\end{equation} This means
that for all $ g\in G$ and for all $ x\in C$, $\phi_{b'}(g)(x)=x$. So $G$ acts trivially on the
factor $C$. Then we have $$S\simeq (C\times B')/G\simeq C\times(B'/G)\simeq C\times B.$$
The proof of Theorem 4.2 is completed.\end{proof}
Now we prove a result about sections of non-smooth elliptic fibrations.
\begin{theorem} If there is a curve $C$ such that $\xymatrix{S\ar[r]|\pi
&C\ar@/_/@{>}[l]|\sigma}$, i.e.\ $\pi$ admits a section $\sigma$, and $\pi$ is not a smooth
elliptic fibration, then $Mor_{\pi}(C,S)$ consists of reduced isolated points.\end{theorem}
\begin{proof}Consider the diagram we introduced in Lemma 4.1,
$$\xymatrix{S\ar[r]|\pi\ar[d]_{\varphi} &C\ar@/_/@{>}[l]|\sigma\\B}.$$ Then there are three
possibilities:
\begin{enumerate}[1)]\item $(B,\varphi)\simeq (C,\pi).$\item $(B,\varphi)\not\simeq (C,\pi)$ and
$g(C)\ge 1$.\item $(B,\varphi)\not\simeq (C,\pi)$ and $g(C)=0$.\end{enumerate}
For case 2), by Theorem 4.2, $S\simeq B\times C$ and $\pi$ is the second projection, hence it is a
smooth elliptic fibration, which contradicts our assumption. \\For case 3), $C\simeq \mathbb P^1$. By
the ``adjunction formula'', $$0=g(C)=1+\frac{1}{2}(K_S\cdot C+C^2).$$ Since $S$ is an elliptic
surface over $B$ and $Kod(S)=1$, $K_S \sim mF$, where $F$ is a fibre of $\varphi$ and $m$ is
positive. So $K_S$ is nef, which implies that $K_S\cdot C\ge 0$ and $C^2<0$. So $deg(\mathcal
N_{C/S})=C^2<0$, which implies that $\dim H^0(C,\mathcal N_{C/S})=0$.\\
For case 1), $\pi$ is an elliptic fibration over $C$ with a section. \\Then $$K_S\sim^{num}
(\chi(\mathcal O_S)+2g(C)-2)F,$$ where $F$ is a fibre of $\pi$.\\ So $$K_S^{-1}\cdot C=
-(\chi(\mathcal O_S)+2g(C)-2),$$ $$deg(\mathcal N_{C/S})=deg (K_S^{-1}\arrowvert _C\otimes \omega
_C)=-\chi(\mathcal O_S).$$By Noether's formula, \begin{equation}\chi(\mathcal
O_S)=\frac{1}{12}(K_S^2+\chi_{top}(S)).\end {equation} Since $K_S^2=0$,
\begin{equation}\chi(\mathcal O_S)=\frac{1}{12}\chi_{top}(S).\end {equation}
Recall that
\begin{equation}\chi_{top}(S)=\chi_{top}(B)\chi_{top}(F)+\sum_{i=1}^{s}(\chi_{top}(F_{i})-\chi_{top}(F)),\end
{equation} where $F_{i}$ are the singular fibres and $F$ is a general smooth fibre. If $\pi$ is not
smooth, $\chi_{top}(S)>0$, which implies that $deg(\mathcal N_{C/S})<0$, so $\dim H^0(C,\mathcal
N_{C/S})=0.$\end{proof}
In the above theorem we have left out the case in which $\pi$ is a smooth fibration; we will give
an answer to this case in Theorem 6.1, which will complete our discussion of $Mor_{\pi}(C,S)$ when
$\kappa(S)=1.$
\section{The case $\kappa(S)=2$}
In this section, we always assume that $\kappa(S)=2$. For any non-trivial fibration
$\pi:S\longrightarrow C$, we want to study $Mor_{\pi}(C,S)$ by counting the rational points on the
generic fibre of $\pi$. This idea is illustrated in the following lemma and our main result of this
section is Theorem 5.2.
\begin{lemma}Consider a fibration $\pi: S\longrightarrow C$. Then sections of $\pi$ are in
one-to-one correspondence with $k(C)$-rational points of the generic fibre. \end{lemma}
\begin{proof}Let $E$ denote the generic fibre of $\pi$ and consider the following cartesian square:
$$\xymatrix{E \ar[r]^{\pi' \quad} \ar[d]^{\tau}& Spec ~k(C)\ar[d]_{\tau'}\\S
\ar[r]_{\pi}&C~~,}$$ then any section of $\pi$ induces a section of $\pi'$.
Conversely, given a section $\sigma'$ of $\pi'$, we pick an arbitrary point $P\in C$,
then consider the following commutative diagram:
$$\xymatrix{Spec ~k(C)\ar[r]^{ \quad\tau \circ \sigma'}\ar[d]_f&S\ar[d]^{\pi}\\Spec
~\mathcal O_{C,P}\ar[r]^g&C~~.}$$ Since $\mathcal O_{C,P}$ is a discrete valuation ring
and $\pi$ is a projective morphism, by the ``Valuative Criterion of Properness'', there
is a unique morphism $h:Spec \mathcal O_{C,P} \longrightarrow S$ satisfying $h\circ
f=\tau\circ\sigma'$ and $\pi\circ h =g$. As $P$ varies along $C$, and $C$ is a projective
smooth curve, we get a morphism $\sigma: C\longrightarrow S$ satisfying $\pi\circ
\sigma=id_C$.\end{proof}
\begin{theorem}If there is a fibration $\pi:S\longrightarrow C$ and $S$ is not a product $C\times
C'$ for any smooth curve $C'$, then there are only finitely many sections of $\pi$.\end{theorem}
\begin{proof} If there are infinitely many sections of $\pi$, we consider the generic fibre $F$ of
$\pi$, then by Lemma 5.1, there are infinitely many rational points on $F$. Now recall a theorem
of Manin (see Theorem 3 in \cite{Manin}): \\
Theorem: Let $K$ be a regular extension of the field $k$ of characteristic zero and let $C$ be a
curve of genus at least $2$ defined over $K$. If the set of points of $C$ defined over
$K$ is infinite, then there is a curve $C'$ which is birationally equivalent to $C$ over $K$ and
defined over $k$. All the points of $C_K$ except a finite number are images of points of $C'_{k}$.
\\ Since $\kappa (S)=2$, the arithmetic genus of every fibre of $\pi$ is at least $2$,
we can apply this theorem to $F$ and in our case, $k=\mathbb C$ and $K=k(C)$, then there is a
curve $C'$ defined over $\mathbb C$ and a birational map $\theta: S\longrightarrow C\times C'$.
Since $\kappa(S)=2$ and $S$ is minimal, by a corollary of ``Castelnuovo's Theorem'', $\theta$ is
defined everywhere and is an isomorphism (see Theorem 19, Chapter 5 of \cite{Beauville}). This
contradicts our assumption. \end{proof}
\section{Sections of elliptic fibrations with smooth fibres}
Now we fix a smooth elliptic fibration $\pi:S\longrightarrow C$ and we want to study
$Mor_{\pi}(C,S).$ By Lemma 4.6, there is an \'{e}tale cover $D\longrightarrow C$ satisfying that
$S\times_C D$ is a trivial elliptic fibration. So in what follows, we always assume that $S\simeq
(D\times E)/G$ and consider the elliptic fibration $\pi : S\longrightarrow D/G.$
\begin {theorem} Assume that $S\simeq (D\times E)/G$ where
\begin{enumerate}[1)]\item $D$ and $E$ are projective smooth curves, $g(D)\ge1$ and $g(E)=1$.
\item $G$ is a finite group, which acts faithfully on $D$ and $E$, and freely on $D$ and $D\times
E$. \end{enumerate}Now consider $\pi: S\longrightarrow D/G$, where $\pi$ is the first
projection. \\Then:
\begin{enumerate}[1)] \item $Mor_{\pi}(D/G, S)\simeq E/G$ if every element of $G$ acts on $E$
only as translations.\item $Mor_{\pi}(D/G, S)$ consists of reduced isolated points if some
element $g\ne id_G$ of $G$ has fixed points on $E$.\end{enumerate}\end{theorem}
\begin{proof}First we prove claim 1). \\We view $Mor_{\pi}(D/G, S)$ as an open subscheme of
$Hilb(S)$ and by our assumption for all $ g \in G$ and $ y \in E$, $g\cdot y=y+y_g$, where
$y_g$ is a constant. Note that $E$ is an algebraic group, we want to define its action on
$Hilb(S)$ and aim to show that $Mor_{\pi}(D/G, S)$, as an open subscheme of $Hilb(S)$, consists
of only one orbit.
\\ Now for all $ y_0 \in E$, we define $t_{y_0}: D\times E\longrightarrow D\times E$ as
$t_{y_0}(x,y)=(x,y+y_0)$. Since $G$ acts on $E$ as translations and $E$ is an abelian group,
$t_{y_0}$ is a $G-$morphism. So it induces an automorphism $\alpha_{y_0}$ of the quotient
$(D\times E)/G$, i.e. $\alpha_{y_0}\in Aut (S)$.\\
Then we define the action of $E$ on $Hilb(S)$ by $$\alpha: E\times Hilb(S)\longrightarrow
Hilb(S), $$ $\alpha(y_0,P)=\alpha_{y_0}(P)$, where $P$ is any closed subvariety of $S$.\\
For a section $\sigma$, we want to show that $\pi\circ \alpha_{y_0}\circ\sigma=id_{D/G}$, then
sections of $\pi$ can move by the action of $E$. Observe that $\sigma$ induces a $G-$morphism
$f_{\sigma}:D\longrightarrow E$, so for all $ \bar {x}\in D/G$, $\sigma(\bar {x})=~\overline{(x,
f_{\sigma}(x))},$ then $$\pi\circ \alpha_{y_0}\circ\sigma(\bar {x})=\pi\circ
\alpha_{y_0}\overline{(x, f_{\sigma}(x))}=\pi\overline{(x,f_{\sigma}(x)+y_0)}=\bar{x}.$$
This implies that the orbit of $\sigma(D/G)$, we denote it by $O$, is contained in
$Mor_{\pi}(D/G, S)$. \\ Now let us determine the isotropy group of $\sigma(D/G)$. If
$\alpha_{y_0}\circ \sigma=\sigma$, then $y_0=y_g$ for some $g\in G$. Since $G$ acts faithfully
on $E$, the isotropy group is isomorphic to $G$ and $O\simeq E/G.$ \\Note that $\pi$ is a smooth
elliptic fibration with a section $\sigma$. Then $\chi(\mathcal O_S)=0$, and $K_S\sim^{num}
(\chi(\mathcal O_S)+2g(D/G)-2)F \sim^{num} (2g(D/G)-2)F$, where $F$ is a fibre of $\pi$. So
$deg(\mathcal N_{\sigma (D/G)/S})=deg (K_S^{-1}\arrowvert _{\sigma (D/G)}\otimes \omega _{\sigma
(D/G)})=0$, which implies that $\dim H^0(\sigma(D/G),\mathcal N_{\sigma (D/G)/S})\le 1.$ Then
$\dim Mor_{\pi}(D/G, S)\le 1$.
\\But we already have a closed immersion: $$E/G\hookrightarrow Mor_{\pi}(D/G, S),$$ so
$Mor_{\pi}(D/G, S)$ is locally isomorphic to $E/G$, hence smooth everywhere as $\sigma$ varies,
which implies $Mor_{\pi}(D/G, S)\simeq E/G.$
\\We now prove claim 2), our aim is to calculate $K_S\arrowvert _{\sigma(D/G)}. $ \\First, we
have the following commutative diagram:
$$\xymatrix {D\ar[r]^f \ar[d]^k &D\times E \ar[d]_h\\D/G \ar[r]^{\sigma}&(D\times E)/G~~,}$$
where $k$, $h$ are quotient morphisms and $\sigma$ induces a morphism $f_{\sigma}:
D\longrightarrow E$ satisfying that for all $ x\in D$, $f(x)=(x,f_{\sigma}(x)).$ \\So
$k^*\circ\sigma^*(K_S)=f^*\circ h^*(K_S)$, we denote this sheaf by $\mathcal L$, then the sheaf
we want is $K_S\arrowvert _{\sigma(D/G)}=\sigma^*(K_S)=(k_*\mathcal L)^G.$ \\Since $G$ acts
freely on $D\times E$, $h$ is \'{e}tale so the differential morphism $$dh: h^*\Omega
_S\longrightarrow \Omega _{D\times E}$$ is an isomorphism, furthermore, $dh$ is a
$G-$morphism.\\
Since $\Omega _{D\times E}\simeq p^*\Omega_D\oplus q^*\Omega_E$, we obtain
$$h^*K_S\simeq^{G}p^*\Omega_D\otimes q^*\Omega_E$$ where ``$\simeq^G$'' means isomorphic as
$G-$sheaves. \\Then apply $f^*$ to both sides, we obtain $$\mathcal L=f^*\circ h^*K_S\simeq^G
f^*(p^*\Omega_D\otimes q^*\Omega_E)=\Omega_D\otimes_{\mathcal O_D} f_{\sigma}^*\Omega _E.$$ Then
$$\sigma^*K_S\simeq(k_*\mathcal L)^G\simeq k_*(\Omega_D\otimes_{\mathcal O_D} f_{\sigma}^*\Omega
_E)^G.$$ Since $k: D\longrightarrow D/G$ is \'{e}tale, $\Omega_D\simeq^G k^* \Omega_{D/G}.$\\ By
the projection formula, $k_*(\Omega_D\otimes_{\mathcal O_D} f_{\sigma}^*\Omega
_E)\simeq\Omega_{D/G}\otimes_{\mathcal O_{D/G}}k_* f_{\sigma}^*\Omega_E,$ so
$$\sigma^*K_S\simeq(\Omega_{D/G}\otimes_{\mathcal O_{D/G}}k_* f_{\sigma}^*\Omega_E)^G.$$ Since
$E$ is an abelian variety, $\Omega_E\simeq ^G\mathcal O_E\otimes _{\mathbb C}H^0(E,\Omega_E)$,
so $$f^*_{\sigma}\Omega_E\simeq^G f^*_{\sigma}\mathcal O_E\otimes _{\mathbb
C}H^0(E,\Omega_E)\simeq^G \mathcal O_D\otimes_{\mathbb C}H^0(E,\Omega_E).$$
Now we have $$\sigma^*K_S\simeq (\Omega_{D/G}\otimes_{\mathcal O_{D/G}}k_* \mathcal
O_D\otimes_{\mathbb C}H^0(E,\Omega_E))^G.$$ Since rank $\Omega_{D/G}=1$ and $\dim
H^0(E,\Omega_E)=1,$ we can denote a $G-$invariant element of $\Omega_{D/G}\otimes_{\mathcal
O_{D/G}}k_* \mathcal O_D\otimes_{\mathbb C}H^0(E,\Omega_E)$ by a pure tensor $a\otimes b\otimes
c$.\\
Then for all $ g \in G$, consider the action of $g$ on $a\otimes b\otimes c$:\\
$g(a\otimes b\otimes c)=a\otimes g b \otimes g c=a\otimes b\otimes c$. Since $G$ acts on
$H^0(E,\Omega_E)$ by a character $\chi\in Hom(G, \mathbb C^*)$, i.e. $g c=\chi(g)c$, then
$$a\otimes gb\otimes gc=a\otimes gb \otimes \chi(g)c=a\otimes b\otimes c.$$ So
$gb=\chi(g)^{-1}b$, let $\mathcal L_{\alpha}=\{a\in k_*\mathcal O_D\arrowvert ga=\alpha(g)a\}$,
then $$\sigma^*K_S\simeq \Omega_{D/G}\otimes_{\mathcal O_{D/G}}\mathcal L_{\chi^{-1}}.$$ Recall
a known theorem (see Chapter 2, Section 7, Proposition 3 in \cite{Mumford}):
\begin{theorem}Assume $G$ acts freely on $X$, and $Y=X/G$. Then for all characters $\alpha:
G\longrightarrow \mathbb C^*$, $\mathcal L_{\alpha}$ is an invertible sheaf, and the
multiplication in $\pi_*(\mathcal O_X)$ induces an isomorphism $\mathcal L_{\alpha}\otimes
\mathcal L_{\beta}\simeq\mathcal L_{\alpha\beta}.$ The assignment $\alpha\mapsto \mathcal
L_{\alpha}$ defines an isomorphism $\hat{G}\simeq ker(PicY\longrightarrow PicX).$
\end{theorem}Now apply Theorem 6.2 to $X=D$ and $Y=D/G$, then $\mathcal
L_{\chi^{-1}}^{-1}=\mathcal L_{\chi}.$ \\By the adjunction formula:$$\mathcal N_{\sigma
(D/G)/S}\simeq \sigma^*K_S^{-1}\otimes \Omega_{D/G}\simeq \Omega_{D/G}^{-1}\otimes \mathcal
L_{\chi}\otimes \Omega_{D/G}\simeq \mathcal L_{\chi}.$$ By our assumption, some element $g\ne
id_G$ of $G$ has fixed points on $E$; since $G$ acts on $E$ faithfully, the linear part of $g$
is non-trivial, which implies that $\chi\ne 1$. By Theorem 6.2, $\mathcal L_{\chi}\ne\mathcal
O_{D/G}\in$ $Pic ~(D/G)$, i.e. $\mathcal N_{\sigma (D/G)/S} \simeq \mathcal L_{\chi}$ is not trivial. By
the proof of claim 1), we know that $deg\mathcal N_{\sigma (D/G)/S}=0$. If $H^0(D/G, \mathcal
N_{\sigma (D/G)/S})\ne0$, then $\mathcal N_{\sigma (D/G)/S}$ is linearly equivalent to an
effective divisor of degree 0, hence $ \mathcal N_{\sigma (D/G)/S}$ is trivial, we get a
contradiction. So $H^0(D/G, \mathcal N_{\sigma (D/G)/S})=0$, and this completes our
proof.\end{proof}
Now it is easy to see that Theorem 3.10 and Theorem 3.11 are direct corollaries of Theorem 6.1.
Furthermore, Theorem 6.1 completes our study of $Mor_{\pi}(C,S)$, when $\kappa (S)=1$.
\section{Introduction}
Consider a dynamical system $(X,G)$ consisting of a compact Hausdorff topological space $X$ equipped with a continuous action of a group $G$.
A point $x \in X$ is called \emph{periodic} if its orbit is finite.
There are many classical examples of ``chaotic" dynamical systems
(e.g., the system generated by Arnold's cat map on the $2$-torus or, more generally, by any Anosov diffeomorphism on the $n$-torus) in which periodic points are dense.
In fact, density of periodic points often appears as one of the axioms in the various definitions of chaos that may be found in the mathematical literature.
This is the case in Devaney's definition of chaos, which requires density of periodic points together with topological transitivity
(cf. \cite{devaney}, \cite{kontorovich}, \cite{cc-chaos}, and the references therein).
When $X$ is metrizable, the density of periodic points in $(X,G)$ has an important ergodic consequence, namely the existence of a Borel invariant probability measure on $X$ with full support.
Indeed, if $(\OO_n)_{n \geq 1}$ is a sequence of disjoint finite orbits whose union is dense in $X$
and $(a_n)_{n \geq 1}$ are positive real numbers satisfying $\sum_{n \geq 1} a_n = 1$,
then
$$
\mu = \sum_{n \geq 1} \frac{a_n}{\vert \OO_n \vert}\left(\sum_{x \in \OO_n} \delta_x\right)
$$
is an invariant Borel probability measure whose support is $X$
(we use $\delta_x$ to denote the Dirac measure at $x$ and
$\vert \cdot \vert$ to denote cardinality of finite sets).
\par
In this paper, we investigate the question of the density of periodic points for certain symbolic dynamical systems.
Before stating our main results, let us briefly recall some basic definitions from symbolic dynamics.
\par
Let $A$ be a finite set, called the \emph{alphabet}, and $G$ a group.
Here we do not make the assumption that $G$ is finitely generated nor even that it is countable.
The set $A^G := \{x \colon G \to A\}$,
equipped with its \emph{prodiscrete topology}, i.e., the product topology obtained by taking the discrete topology on each factor $A$ of $A^G$, is a totally disconnected compact Hausdorff space,
called the space of \emph{configurations}.
It is metrizable unless $G$ is uncountable and $A$ contains more than one element.
The group $G$ acts on $A^G$ by the $G$-\emph{shift}, which is defined by the formula
$gx(h) := x(g^{-1}h)$ for all $g,h \in G$ and $x \in A^G$.
A configuration $x \in A^G$ is called \emph{periodic} if it is periodic with respect to the $G$-shift.
\par
A closed $G$-invariant subset of $A^G$ is called a \emph{subshift}.
Let $X \subset A^G$ be a subshift. A map $\tau \colon X \to X$ is called a \emph{cellular automaton} (or an \emph{endomorphism} of $X$) if $\tau$ is continuous (with respect to the prodiscrete topology) and $G$-equivariant (i.e., such that $\tau(gx) = g \tau(x)$ for all $(g,x) \in G \times X$).
One says that the subshift $X$ is \emph{surjunctive} if every injective cellular automaton $\tau \colon X \to X$ is surjective
(cf. \cite{gottschalk}, \cite{gromov-esav}, \cite{fiorenzi-periodic},
\cite{ca-and-groups-springer}).
The set consisting of all bijective cellular automata $\tau \colon X \to X$ is a group for composition of maps.
This group is called the \emph{automorphism group} of $X$ and will be denoted by $\Aut(X)$.
Recall that a group is called \emph{residually finite} if the intersection of its subgroups of finite index is reduced to the identity element.
The class of residually finite groups includes all virtually polycyclic groups (and in particular all finitely generated nilpotent groups and hence all finitely generated abelian groups), all free groups, and all finitely generated linear groups.
\par
It is a well known fact
(see Proposition \ref{p:properties-density} below)
that if $X \subset A^G$ is a subshift containing a dense set of periodic configurations then:
(i) $X$ is surjunctive,
(ii) the group $\Aut(X)$ is residually finite, and
(iii) if the action of $G$ on $X$ is faithful then the group $G$ itself is residually finite.
\par
One says that a subshift $X \subset A^G$ is \emph{of finite type} if there exist a finite subset
$\Omega \subset G$ and a subset $\PP \subset A^\Omega$ such
that $X$ consists of all $x \in A^G$ for which the restriction of the configuration $g x$ to
$\Omega$ belongs to $\PP$ for all $g \in G$.
One says that a subshift $X \subset A^G$ is \emph{strongly irreducible} if there exists a finite subset $\Delta \subset G$ such that,
if $\Omega_1$ and $\Omega_2$ are finite subsets of $G$ which are sufficiently far apart in the sense that the sets
$\Omega_1$ and $\Omega_2\Delta$ do not meet,
then, given any two configurations $x_1$ and $x_2$ in $X$, there exists a configuration $x \in X$ which coincides with $x_1$ on $\Omega_1$ and with $x_2$ on $\Omega_2$.
\par
We shall establish the following result.
\begin{theorem}
\label{t:density-periodic}
Let $G$ be a residually finite group and let $A$ be a finite set.
Suppose that $X \subset A^G$ is a strongly irreducible subshift of finite type
and that there exists a periodic configuration in $X$.
Then $X$ contains a dense set of periodic configurations.
\end{theorem}
As an immediate consequence, we get the following statement.
\begin{corollary}
Let $G$ be a residually finite group and let $A$ be a finite set.
Suppose that $X \subset A^G$ is a strongly irreducible subshift of finite type containing a periodic configuration.
Then $X$ is surjunctive and its automorphism group $\Aut(X)$ is residually finite.
If in addition $G$ is countable, then $X$ admits an invariant Borel probability measure with full support.
\qed
\end{corollary}
Theorem \ref{t:density-periodic} was previously obtained by Lightwood \cite[Lemma 5.4]{lightwood} in the case $G = \Z^d$ is a free abelian group of finite rank.
Lightwood \cite[Lemma 9.2]{lightwood} also
proved that any strongly irreducible subshift of finite type over $\Z^2$ contains a dense set of periodic configurations.
According to \cite[Section 9]{lightwood}, \cite[Section 10]{lightwood-2}, and \cite{hochman},
the question whether every strongly irreducible subshift of finite type over $\Z^d$
contains a dense set of periodic configurations
remains open for $d \geq 3$.
Robinson \cite{robinson} (see also \cite[Example 3]{schmidt}) constructed, using Wang tiles, nonempty subshifts of finite type over $\Z^2$ without periodic configurations.
On the other hand, it is well known that every nonempty subshift of finite type over $\Z$ must contain a periodic configuration and that every irreducible subshift of finite type, and even every irreducible sofic subshift, over $\Z$ contains a dense set of periodic configurations (see
\cite{fiorenzi-periodic} and Section \ref{sec:strongly-over-Z} below).
It is also known that a subshift of finite type over $\Z$ is topologically mixing if and only if it is strongly irreducible (see for example \cite[Corollary 6.1]{cc-myhill}).
In fact, when $G = \Z$, strong irreducibility is enough to guarantee density of periodic configurations.
More precisely, we have the following result whose proof was kindly communicated to us by Benjy Weiss.
\begin{theorem}
\label{t:benjy}
Let $A$ be a finite set and let $X \subset A^\Z$ be a strongly irreducible subshift.
Then $X$ contains a dense set of periodic configurations.
\end{theorem}
\begin{corollary}
Let $A$ be a finite set and
let $X \subset A^\Z$ be a strongly irreducible subshift.
Then $X$ is surjunctive and its automorphism group $\Aut(X)$ is residually finite.
Moreover, $X$ admits an invariant Borel probability measure with full support.
\qed
\end{corollary}
Let us note that the surjunctivity of strongly irreducible subshifts over $\Z$ also follows
from the more general fact that if $G$ is an amenable group then any strongly irreducible subshift over $G$ is surjunctive (see \cite[Corollary 4.4]{cc-myhill}).
\par
Observe that Theorem \ref{t:density-periodic} applies in particular to quiescent strongly irreducible subshifts of finite type over residually finite groups (a subshift is called \emph{quiescent} if it contains a constant configuration, i.e., a configuration that is fixed by the shift action).
Examples of such subshifts are provided by the \emph{golden mean subshift}
which is the subshift over the group $\Z$ consisting of all bi-infinite sequences of $0$'s and $1$'s with no consecutive $1$'s, and by all \emph{generalized golden mean subshifts}
over residually finite groups
(see Example \ref{ex:golden} for the definition of generalized golden mean subshifts).
In fact, the golden mean subshift and its generalizations are quiescent and have bounded propagation in the sense of Gromov \cite{gromov-esav}
(see Definition \ref{def:bounded-propagation} below) and it turns out that every subshift with bounded propagation is strongly irreducible and of finite type
(cf. \cite[Proposition 4.10]{fiorenzi-strongly} and Proposition \ref{p:bounded-prop-implies-ft} below).
The paper is organized as follows.
In Section \ref{s:background}, we fix notation and present some background material on shifts and subshifts.
Section \ref{sec:density} contains the proof of Theorem \ref{t:density-periodic}.
The class of W-subshifts, which is a class of subshifts over $\Z$ strictly containing the class of irreducible sofic subshifts and the class of strongly irreducible subshifts,
is introduced in Section \ref{s:W-subshifts}.
In Section \ref{sec:strongly-over-Z},
we establish the density of periodic configurations in W-subshifts
(Theorem \ref{density-for-W}).
In the final section, we have collected some additional remarks and a list of open questions.
\section{Background material }
\label{s:background}
\subsection{Group actions}
Let $X$ be a set equipped with an action of a group $G$, i.e., a map
$(g,x) \mapsto gx$ from $G \times X$ into $X$ such that
$g_1 (g_2 x) = (g_1 g_2)x$ and $1_G x = x$ for all $g_1,g_2 \in G$ and $x \in X$ (we denote by $1_G$ the identity element of $G$).
The orbit of a point $x \in X$ is the subset
$G x := \{g x : g \in G\}\subset X$ and its stabilizer is the subgroup
$\Stab_G(x) := \{g \in G : g x = x \} \subset G$.
Given a subgroup $H$ of $G$, one says that a point $x \in X$ is \emph{fixed} by $H$ if $H \subset \Stab_G(x)$, i.e., if $hx = x$ for all $h \in H$.
A point $x \in X$ is called \emph{periodic} if its orbit is finite.
The following conditions are equivalent: (1) $x$ is periodic; (2) the subgroup $\Stab_G(x)$ is of finite index in $G$;
(3) there exists a subgroup of finite index $H \subset G$ fixing $x$.
\par
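The equivalence of these three conditions is just the orbit–stabilizer relation, spelled out here for convenience:

```latex
% Orbit-stabilizer: the orbit Gx is in bijection with the coset space
% G/\Stab_G(x), so
\vert Gx \vert = [G : \Stab_G(x)].
% Hence the orbit is finite iff \Stab_G(x) has finite index, i.e.
% (1) <=> (2); (2) => (3) by taking H = \Stab_G(x); and (3) => (2)
% because H \subset \Stab_G(x) forces
% [G : \Stab_G(x)] \le [G : H] < \infty.
```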
Suppose now that $X$ is a topological space. The action of $G$ on $X$ is said to be
\emph{continuous} if, for each $g \in G$, the map $x \mapsto g x$ is continuous.
\subsection{Subshifts}
Throughout this paper, $G$ is a group and $A$ is a finite set.
\par
If $\Omega \subset G$ is a finite subset, an element $p \in A^\Omega$ is called a \emph{pattern} with \emph{support} $\Omega$.
For $x \in A^G$, we denote by $x\vert_\Omega$ the pattern with support $\Omega$ obtained
by restricting $x$ to $\Omega$.
Given subsets $X \subset A^G$ and $\Omega \subset G$, we define the subset
$X\vert_\Omega \subset A^\Omega$ by
$X\vert_\Omega = \{x\vert_\Omega : x \in X\}$.
\par
It follows from the definition of the prodiscrete topology that a subset $X \subset A^G$ is closed in $A^G$ if and only if it satisfies the following condition:
if a configuration $x \in A^G$ is such that $x\vert_\Omega \in X\vert_\Omega$ for every finite subset $\Omega \subset G$, then $x \in X$.
\par
The following fact is well known. We give a proof for completeness.
\begin{proposition}
\label{p:properties-density}
Let $G$ be a group and let $A$ be a finite set.
Suppose that $X \subset A^G$ is a subshift containing a dense set of periodic configurations. Then
\begin{enumerate}[\rm (i)]
\item
$X$ is surjunctive;
\item
the automorphism group $\Aut(X)$ is residually finite;
\item
if the action of $G$ on $X$ is faithful then the group $G$ itself is residually finite.
\end{enumerate}
\end{proposition}
\begin{proof}
(i)
Suppose that $\tau \colon X \to X$ is a cellular automaton.
Let $x \in X$ be a periodic configuration and $H_x := \Stab_G(x)$.
Let
$$
F_x := \{y \in X: h y = y \text{ for all } h \in H_x\}
$$
denote the set of configurations in $X$ that are fixed by $H_x$.
Observe that the set $F_x$ is finite
since $H_x$ is of finite index in $G$ and every element $y \in F_x$ is entirely determined by its restriction to a complete set of representatives of the right cosets of $H_x$ in $G$.
On the other hand, $\tau$ sends $F_x$ into itself since $\tau$ is $G$-equivariant.
If $\tau$ is injective, we deduce that $\tau(F_x) = F_x$.
Since $x \in F_x$, it follows that $x$ is in the image of $\tau$.
As $\tau(X)$ is closed in $X$ by compactness, the density of periodic points implies that $\tau(X) = X$. This shows that $X$ is surjunctive.
\par
(ii)
Let $x \in X$ be a periodic configuration.
Keeping the notation introduced in the proof of (i),
we see that each $\tau \in \Aut(X)$ induces by restriction a permutation of the set $F_x$.
This yields a group homomorphism $\rho_x$ from $\Aut(X)$ into the symmetric group of $F_x$.
As $F_x$ is finite, the kernel $\Ker(\rho_x)$ is of finite index in $\Aut(X)$.
Since $x \in F_x$ and periodic configurations are dense in $X$,
the intersection of the subgroups $\Ker(\rho_x)$, where $x$ runs over all periodic configurations in $X$, is reduced to the identity map on $X$.
It follows that $\Aut(X)$ is residually finite.
\par
(iii)
If a group $G$ acts faithfully and continuously on
a Hausdorff space $X$ and periodic points are dense in $X$, then
$G$ is residually finite (see for example
\cite[Theorem 2.7.1]{ca-and-groups-springer}).
\end{proof}
\begin{remark}
Assertions (i) and (ii) of Proposition \ref{p:properties-density} are proved in \cite{fiorenzi-periodic} under the hypothesis that $G$ is finitely generated.
\end{remark}
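As a small computational aside (added here, not part of the paper): for a subshift of finite type over $\Z$ with adjacency matrix $M$, the number of configurations fixed by the subgroup $n\Z$ equals $\operatorname{tr}(M^n)$. The sketch below checks this for the golden mean subshift; all function names are ours.

```python
# Count configurations of the golden mean subshift over Z that are fixed
# by the subgroup nZ, i.e. cyclic binary words of length n with no two
# consecutive 1's (read cyclically).

def mat_mult(a, b):
    # Multiply two square integer matrices given as lists of rows.
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def periodic_count(n):
    # Transfer-matrix count: trace of M^n, where M is the adjacency
    # matrix of the golden mean shift (the only forbidden word is 11).
    m = [[1, 1], [1, 0]]
    p = m
    for _ in range(n - 1):
        p = mat_mult(p, m)
    return p[0][0] + p[1][1]

def brute_force_count(n):
    # Direct enumeration of all binary words of length n, for checking.
    count = 0
    for w in range(2 ** n):
        bits = [(w >> i) & 1 for i in range(n)]
        if all(not (bits[i] and bits[(i + 1) % n]) for i in range(n)):
            count += 1
    return count
```

The counts $1, 3, 4, 7, 11, \dots$ are the Lucas numbers; their unbounded growth is consistent with the abundance of periodic configurations asserted by Theorem \ref{t:benjy}.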
\subsection{Subshifts of finite type and strongly irreducible subshifts}
Let $\Omega$ be a finite subset of $G$.
Given a subset $\PP \subset A^\Omega$,
the subset $X(\Omega,\PP) \subset A^G$ defined by
$$
X(\Omega,\PP) := \{x \in A^G : (g x)\vert_\Omega \in \PP \text{ for all } g \in G \}
$$
is clearly a subshift of $A^G$.
One says that a subshift $X \subset A^G$ is \emph{of finite type} if there exist a finite subset $\Omega \subset G$ and a subset $\PP \subset A^\Omega$ such
that $X = X(\Omega,\PP)$.
The subset $\Omega$ is then called a \emph{defining window} and $\PP$ a \emph{set of defining patterns} for $X$.
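For instance, the golden mean subshift recalled in the introduction is of finite type: with the notation above one may take

```latex
G=\Z, \qquad A=\{0,1\}, \qquad \Omega=\{0,1\}\subset\Z, \qquad
\PP=\{00,\ 01,\ 10\}\subset A^{\Omega},
```

so that $X(\Omega,\PP)$ consists precisely of the bi-infinite binary sequences containing no two consecutive $1$'s.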
\par
One says that a dynamical system $(X,G)$ is a \emph{factor} of a dynamical system $(Y,G)$ if there
exists a surjective continuous $G$-equivariant map $f \colon Y \to X$.
Note that if $(X,G)$ is a factor of $(Y,G)$ and periodic points are dense in $Y$,
then periodic points are also dense in $X$.
\par
One says that a subshift $X \subset A^G$ is \emph{sofic} if $X$ is a factor of some subshift of finite type, i.e., if there exists a finite set $B$, a subshift of finite type $Y \subset B^G$, and a surjective continuous
$G$-equivariant map $f \colon Y \to X$.
\par
Let $\Delta$ and $\Omega$ be finite subsets of $G$.
The $\Delta$-\emph{neighborhood} of $\Omega$ is defined as being the subset of $G$
consisting of all elements $g \in G$ such that the set $g \Delta$ meets $\Omega$.
Thus we have
\begin{equation*}
\Omega^{+\Delta} := \{g \in G : g\Delta \cap \Omega \not= \varnothing \} = \Omega\Delta^{-1}.
\end{equation*}
Note that we have $\Omega^{+\Delta} \subset \Omega^{+ \Delta'}$ if $\Delta \subset \Delta' \subset G$.
Observe also that we have $\Omega \subset \Omega^{+\Delta}$ whenever $1_G \in \Delta$.
\begin{definition}
\label{def:delta-irred-subshift}
Let $\Delta$ be a finite subset of $G$.
A subshift $X \subset A^G$ is said to be $\Delta$-\emph{irreducible} if it satisfies the following condition:
if $\Omega_1$ and $\Omega_2$ are finite subsets of $G$ such that
\begin{equation}
\label{e:delta-neighbor-disjoint}
\Omega_1^{+\Delta} \cap \Omega_2 = \varnothing,
\end{equation}
then, given any two configurations $x_1$ and $x_2$ in $X$, there exists a configuration $x \in X$ which satisfies
$x\vert_{\Omega_1} = x_1\vert_{\Omega_1}$ and $x\vert_{\Omega_2} = x_2\vert_{\Omega_2}$.
\par
A subshift $X \subset A^G$ is called \emph{strongly irreducible} if there exists a finite subset
$\Delta$ of $G$ such that $X$ is $\Delta$-irreducible.
\end{definition}
Note that if a subshift $X \subset A^G$ is $\Delta$-irreducible for some finite subset $\Delta \subset G$, then
$X$ is $\Delta'$-irreducible for any finite subset $\Delta' \subset G$ such that $\Delta \subset \Delta'$.
\begin{remark}
An easy compactness argument (cf. \cite[Lemma~4.6]{cc-myhill})
shows that we get an equivalent definition of $\Delta$-irreducibility if the sets
$\Omega_1$ and $\Omega_2$ are allowed to be infinite
in Definition \ref{def:delta-irred-subshift}.
\end{remark}
\begin{remark}
For $G = \Z^d$, strongly irreducible subshifts were introduced in \cite[Definition 1.10]{burton-steif}, in \cite[Section 2]{ward} (under the name \emph{subshifts with strong specification}), and
in \cite[Definition 2.4]{lightwood} (under the name \emph{square-mixing subshifts}).
They were also introduced in \cite[Definition 4.1]{fiorenzi-strongly} for finitely generated
groups $G$.
\end{remark}
Suppose that a group $G$ acts continuously on a topological space $X$.
One says that the action of $G$ on $X$ is \emph{topologically mixing} if, for any pair of nonempty open subsets $U$ and $V$ of $X$, there exists a finite subset $F \subset G$ such that $U \cap gV
\neq \varnothing$ for all $g \in G \setminus F$.
\par
One says that a subshift $X \subset A^G$ is \emph{topologically mixing} if the action of $G$ on $X$ is topologically mixing.
This amounts to saying that $X$ satisfies the following condition:
for any finite subset $\Omega \subset G$ and any two configurations $x_1, x_2 \in X $,
there exists a finite subset $F \subset G$ such that, for all $g \in G \setminus F$, there exists a configuration $x \in X$ such that $x\vert_\Omega = x_1\vert_{\Omega}$ and $ x\vert_{g\Omega} = x_2\vert_{g\Omega}$.
\par
Every strongly irreducible subshift $X \subset A^G$ is topologically mixing
(see for example \cite[Proposition 3.3]{cc-myhill}).
\begin{remark}
The Ledrappier subshift, i.e., the subshift $X \subset \{0,1\}^{\Z^2}$ consisting of all $x\in \{0,1\}^{\Z^2}$ such that $x(m,n) + x(m+1,n) + x(m,n+1)$ is even for all $(m,n) \in \Z^2$, provides an example of a quiescent subshift of finite type over
$\Z^2$ which is topologically mixing but not strongly irreducible
(cf. \cite[Example 1.8]{burton-steif}).
On the other hand, it is known that every topologically mixing sofic subshift over $\Z$ is strongly irreducible (see for example \cite[Corollary 1.3]{cc-myhill}).
\end{remark}
\begin{definition}
\label{def:bounded-propagation}
Let $G$ be a group and let $A$ be a finite set.
\par
Let $\Delta$ be a finite subset of $G$.
A subshift $X \subset A^G$ is said to have $\Delta$-\emph{propagation} if it satisfies the following condition:
if $\Omega$ is a finite subset of $G$ and $p \in A^\Omega$ is a pattern whose
restriction to $\Omega \cap g\Delta$ belongs to $X\vert_{\Omega \cap g\Delta}$ for all $g \in G$,
then $p \in X\vert_\Omega$.
\par
A subshift $X \subset A^G$ is said to have \emph{bounded propagation} if there exists a finite
subset $\Delta \subset G$ such that $X$ has $\Delta$-propagation.
\end{definition}
Subshifts with bounded propagation were introduced by Gromov \cite[p. 160]{gromov-esav}
(see also \cite{fiorenzi-strongly}).
\begin{example}[Generalized golden mean subshifts]
\label{ex:golden}
Let $G$ be a group and take $A = \{0,1,\dots,m\}$, where $m$ is a positive integer.
Suppose that we are given a finite sequence $E_1,\dots,E_k$ of finite subsets of $G$.
Consider the subshift $X \subset A^G$ consisting of all $x \in A^G$ such that, for all $g \in G$ and $1 \leq i \leq k$, there exists an element $h \in gE_i$ satisfying
$x(h) = 0$.
This is a quiescent subshift since the identically-$0$ configuration is in $X$.
Moreover, $X$ has $\Delta$-propagation for $\Delta = E_1 \cup \dots \cup E_k$.
Indeed, suppose that $\Omega \subset G$ is a finite subset and that $p \in A^\Omega$ is an element whose
restriction to $\Omega \cap g\Delta$ belongs to $X\vert_{\Omega \cap g\Delta}$ for all $g \in G$.
Then the configuration $x \colon G \to A$ defined by
$$
x(g) =
\begin{cases}
p(g) & \text{ if }g \in \Omega \\
0 & \text{ if } g \in G \setminus \Omega
\end{cases}
$$
satisfies $x \in X$ and $x\vert_\Omega = p$.
\par
For $G = \Z$, $m = k = 1$, and $E_1 = \{0,1\}$, we recover the usual golden mean subshift
$X \subset \{0,1\}^\Z$
consisting of all bi-infinite sequences of $0$'s and $1$'s with no consecutive $1$'s.
\par
For $G = \Z^2$, $m = 1$, $k = 2$, $E_1 = \{(0,0),(1,0)\}$ and $E_2 = \{(0,0),(0,1)\}$,
we obtain the generalized golden mean subshift $X \subset \{0,1\}^{\Z^2}$ also known as the \emph{hard sphere model}
(see \cite[Example 3.2]{lind}).
\par
For $G = \Z^d$, $k = d$, and $E_i = \{0,e_i\}$, $1 \leq i \leq d$, where $e_1,\dots,e_d$ denotes the canonical basis of $\Z^d$, we get the \emph{generalized hard-sphere model}
$X \subset \{0,1,\dots,m\}^{\Z^d}$ described in
\cite[Example 1.6]{burton-steif}.
\end{example}
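As a small computational aside (ours, not part of the text), the language of the usual golden mean subshift can be enumerated directly; the number of admissible words of length $n$ follows the Fibonacci recurrence.

```python
from itertools import product

def golden_mean_words(n):
    """Words of length n over {0,1} with no two consecutive 1's,
    i.e. the length-n words in the language of the golden mean subshift."""
    return [w for w in product((0, 1), repeat=n)
            if all(not (a and b) for a, b in zip(w, w[1:]))]

counts = [len(golden_mean_words(n)) for n in range(1, 9)]
# The counts satisfy the Fibonacci recurrence c(n+1) = c(n) + c(n-1).
print(counts)  # [2, 3, 5, 8, 13, 21, 34, 55]
```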
The following result was proved by Fiorenzi \cite[Proposition 4.10]{fiorenzi-strongly} in the case when $G$ is finitely generated.
\begin{proposition}
\label{p:bounded-prop-implies-ft}
Let $G$ be a group and let $A$ be a finite set.
Then every subshift $X \subset A^G$ with bounded propagation
is strongly irreducible and of finite type.
\end{proposition}
\begin{proof}
Suppose that $X \subset A^G$ is a subshift with bounded propagation.
Choose a finite subset $\Delta \subset G$ such that $X$ has $\Delta$-propagation and
consider the set $\PP = X\vert_\Delta \subset A^\Delta$. If $x \in X$ then $(g^{-1}x)\vert_\Delta \in \PP$ for
all $g \in G$ since $X$ is $G$-invariant.
Thus, we have $X \subset X(\Delta,\PP)$. Conversely, suppose that $x \in X(\Delta,\PP)$.
Let $\Omega$ be a finite subset of $G$.
For each $g \in G$, we have $g^{-1}x \in X(\Delta,\PP)$ since $X(\Delta,\PP)$ is $G$-invariant. Therefore, there exists $y \in X$ such that $(g^{-1}x)\vert_\Delta = y\vert_\Delta$. This implies
$x\vert_{g\Delta} = (gy)\vert_{g\Delta}$ and hence
$x\vert_{\Omega \cap g\Delta} = (gy)\vert_{\Omega \cap g\Delta}$. Thus, we have
$x\vert_{\Omega \cap g\Delta} \in X\vert_{\Omega \cap g\Delta}$ for all $g \in G$.
As $X$ has $\Delta$-propagation,
it follows that $x\vert_\Omega \in X\vert_\Omega$.
Since $X$ is closed in $A^G$, we deduce that $x \in X$.
Thus, we have $X = X(\Delta,\PP)$. This shows that $X$ is of finite type.
\par
After replacing $\Delta$ by $\Delta \cup \{1_G\}$, we can assume that $1_G \in \Delta$.
Suppose that $\Omega_1$ and $\Omega_2$ are finite subsets of $G$ such that
\begin{equation}
\label{e:delta-neighbor-disjoint-2}
\Omega_1^{+\Delta} \cap \Omega_2 = \varnothing.
\end{equation}
Note that this implies $\Omega_1 \cap \Omega_2 = \varnothing$ since $\Omega_1 \subset \Omega_1^{+\Delta}$.
Let $x_1,x_2 \in X$.
Consider the finite subset $\Omega = \Omega_1 \cup \Omega_2 \subset G$
and the element $p \in A^\Omega$ given by $p\vert_{\Omega_1} = x_1\vert_{\Omega_1}$
and $p\vert_{\Omega_2} = x_2\vert_{\Omega_2}$ (observe that $p$ is well defined since the sets $\Omega_1$
and $\Omega_2$ are disjoint).
For all $g \in G$, we have $\Omega \cap g\Delta \subset \Omega_1$
or $\Omega \cap g\Delta \subset \Omega_2$ by \eqref{e:delta-neighbor-disjoint-2}.
This implies that
$p\vert_{\Omega \cap g\Delta} = x_1\vert_{\Omega_1 \cap g\Delta}$ or
$p\vert_{\Omega \cap g\Delta} = x_2\vert_{\Omega_2 \cap g\Delta}$.
It follows that $p\vert_{\Omega \cap g\Delta} \in X\vert_{\Omega \cap g\Delta}$ for all $g \in G$.
As $X$ has $\Delta$-propagation,
we deduce that $p \in X\vert_{\Omega}$. Thus, there is an element $x \in X$ such that
$x\vert_{\Omega_1} = x_1\vert_{\Omega_1}$ and $x\vert_{\Omega_2} = x_2\vert_{\Omega_2}$.
This shows that $X$ is $\Delta$-irreducible and hence strongly irreducible.
\end{proof}
\begin{remark}
In \cite[Section 4]{fiorenzi-strongly}, Fiorenzi gave an example of a strongly irreducible subshift of finite type
$X \subset \{0,1\}^\Z$ without bounded propagation, namely the subshift admitting $\{010,111\}$ as a defining set of forbidden words. Note that $X$ is quiescent since the identically-$0$ configuration is in $X$.
\end{remark}
\begin{remark}
It is true that any subshift which is a factor of a strongly irreducible subshift is itself strongly irreducible (see \cite[Proposition 3.4]{cc-myhill}).
On the other hand, a subshift which is a factor of a subshift with bounded propagation may fail to have bounded propagation.
For example, the even subshift $X \subset \{0,1\}^\Z$ (i.e., the subshift formed by all bi-infinite sequences of $0$s and $1$s in which every chain of $1$s which is bounded by two $0$s has even length) is sofic.
Indeed, $X$
is a factor of the golden mean subshift $Y \subset \{0,1\}^\Z$.
A factor map $\tau \colon Y \to X$ is obtained by associating with each sequence
$y \in Y$ the sequence $\tau(y) \in X$ deduced from $y$ by replacing each occurrence of the word $10$ by the word $11$.
The even subshift does not have bounded propagation since it is not of finite type.
\end{remark}
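The factor map $\tau$ described above is a sliding-block code: with a one-sided convention suitable for finite windows, $\tau(y)(n) = 1$ exactly when $y(n) = 1$ or $y(n-1) = 1$. The following small check (our own illustration) verifies on finite golden-mean words that every run of $1$'s bounded by two $0$'s in the image has even length.

```python
from itertools import product

def tau(y):
    """Sliding-block code replacing each occurrence of 10 by 11:
    x_i = 1 iff y_i = 1 or y_{i-1} = 1 (with y_{-1} taken to be 0)."""
    return [1 if (y[i] or (i > 0 and y[i - 1])) else 0 for i in range(len(y))]

def interior_runs_of_ones(x):
    """Lengths of maximal runs of 1's flanked by 0's on both sides."""
    runs, i = [], 0
    while i < len(x):
        if x[i] == 1:
            j = i
            while j < len(x) and x[j] == 1:
                j += 1
            if i > 0 and j < len(x):  # bounded by 0's inside the word
                runs.append(j - i)
            i = j
        else:
            i += 1
    return runs

# every golden-mean word maps to a word whose interior runs of 1's are even
for n in range(1, 10):
    for y in product((0, 1), repeat=n):
        if any(a and b for a, b in zip(y, y[1:])):
            continue  # not a golden-mean word
        assert all(r % 2 == 0 for r in interior_runs_of_ones(tau(list(y))))
```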
\section{Proof of Theorem \ref{t:density-periodic}}
\label{sec:density}
Choose a periodic configuration $x_0 \in X$ and a finite-index normal subgroup $K$ of $G$ such that $K \subset \Stab(x_0)$.
\par
Let $\Delta$ be a finite subset of $G$ such that
$X$ is $\Delta$-irreducible. Up to enlarging $\Delta$ if necessary, we can further assume that $\Delta$ is a defining window for $X$ and that we have $1_G \in \Delta$.
\par
Let $x \in X$ and let $\Omega$ be a finite subset of $G$.
Let us show that there exists a periodic configuration $y \in X$ which coincides with $x$
on $\Omega$. This will prove the density of periodic configurations in $X$.
\par
Consider the finite subsets of $G$
defined by
$$
\Omega_1 := \Omega^{+\Delta} =\Omega\Delta^{-1} \quad \text{and} \quad
\Omega_2 := \Omega_1^{+\Delta^{-1}\Delta} = \Omega_1\Delta^{-1}\Delta.
$$
Observe that we have $\Omega \subset \Omega_1 \subset \Omega_2$ since $1_G \in \Delta$.
As $X$ is $\Delta$-irreducible, we can find a configuration $z \in X$
which coincides with $x$ on $\Omega$ and with $x_0$ on $\Omega_2 \setminus \Omega_1$.
\par
Since $G$ is residually finite, we can find a finite-index normal subgroup $L$ of $G$
such that the restriction to $\Omega_2$ of the canonical epimorphism $\rho_L \colon G \to G/L$ is injective.
Consider now the finite-index normal subgroup $H$ of $G$ given by $H := K \cap L$.
Let $\rho_H \colon G \to G/H$ denote the canonical group epimorphism.
Define the configuration $y \in A^G$ by
$$
y(g) =
\begin{cases}
z(g') &\text{ if } \rho_H(g) = \rho_H(g') \text{ for some } g' \in \Omega_2, \\
x_0(g) & \text{ otherwise.}
\end{cases}
$$
First observe that $y$ is well defined since the restriction to $\Omega_2$ of $\rho_H$ is injective.
It is clear that $h y = y$ for all $h \in H$ so that $y$ is periodic.
On the other hand, the configurations $y$ and $x$ coincide on $\Omega$,
since $y$ coincides with $z$ on $\Omega_2$ and $z$ coincides with $x$ on $\Omega \subset \Omega_2$.
Let us prove that $y \in X$.
As $X$ is of finite type with defining window $\Delta$,
it suffices to show that, for each $g \in G$, there exists a configuration $w_g \in X$ such that $y$ coincides with $w_g$ on $g\Delta$.
If the set $\rho_H(g\Delta)$ does not meet $\rho_H(\Omega_1)$, then $y$ coincides with $x_0$
on $g\Delta$ and we can take $w_g = x_0$.
Suppose now that $\rho_H(g\Delta)$ meets $\rho_H(\Omega_1)$.
This means that there exist $h \in H$ and $\delta_0 \in \Delta$ such that
$h^{-1}g\delta_0 \in \Omega_1$.
Then, for all $\delta \in \Delta$, setting $k := g\delta$, we have that
$$
h^{-1}k = (h^{-1}g\delta_0)\delta_0^{-1}\delta \in \Omega_1\Delta^{-1}\Delta = \Omega_2.
$$
We deduce that
$$
y(k) = h y(k) = y(h^{-1}k) = z(h^{-1}k) = hz(k).
$$
Therefore, we can take $w_g = h z$ in that case.
This shows that $y \in X$ and completes the proof that periodic configurations are dense in $X$.
\qed
\section{W-subshifts}
\label{s:W-subshifts}
In this section we introduce a class of irreducible subshifts over $\Z$, the class of W-subshifts,
and prove that it strictly contains the class of irreducible sofic subshifts and the class of strongly irreducible subshifts.
Let us first introduce some additional notation and recall the definition of the language associated with a subshift over $\Z$.
\par
Let $A$ be a finite set.
We denote by $A^*$ the free monoid based on $A$.
Thus, $A^*$ is the set consisting of all finite words $w = a_1 a_2 \cdots a_n$, where $n \geq 0$ and $a_i \in A$ for $1 \leq i \leq n$,
and the monoid operation on $A^*$ is the concatenation of words.
The \emph{length} of the word $w = a_1 a_2 \cdots a_n$ is the integer $\vert w \vert := n$.
The identity element in $A^*$ is the empty word. It is the only word of length $0$.
\par
Given a word $w = a_1 a_2 \cdots a_n \in A^*$ of length $n$, we define a periodic configuration $w^\infty \in A^\Z$ by setting
$w^\infty(i + k n) = a_i$ for all $1 \leq i \leq n$ and $k \in \Z$.
\par
One says that a word $w \in A^*$ appears as a \emph{subword} of a configuration $x \in A^\Z$ if either $w$ is the empty word or there exist $i,j \in \Z$ with $i \leq j$ such that
$w = x(i)x(i+1) \cdots x(j) $.
\par
Consider now a subshift $X \subset A^\Z$.
The \emph{language} of $X$ is the subset $L(X) \subset A^*$ consisting of all words $w \in A^*$
such that $w$ appears as a subword of some configuration in $X$.
\par
One says that a subshift $X \subset A^\Z$ is \emph{irreducible} if for all words $u,v \in L(X)$ there exists a word $w \in L(X)$ such that $uwv \in L(X)$.
It is clear that every topologically mixing subshift over $\Z$ is irreducible.
\begin{proposition}
\label{p:conditions-W}
Let $A$ be a finite set and let $X \subset A^\Z$ be a subshift.
Let $n_0 \geq 0$ be an integer.
Then the following conditions are equivalent:
\begin{enumerate}[\rm (a)]
\item
for all $u,v \in L(X)$
there exists a word $c \in L(X)$ with length $\vert c \vert \leq n_0$ such that $ucv \in L(X)$;
\item
the subshift $X$ is irreducible and
for every $u \in L(X)$ there exists a word $c \in L(X)$ with length $\vert c \vert \leq n_0$ such that $ucu \in L(X)$.
\end{enumerate}
\end{proposition}
\begin{proof}
Condition (a) trivially implies (b).
Conversely, suppose that condition (b) is satisfied and let $u,v \in L(X)$.
By irreducibility, we can find a word $w \in L(X)$ such that $vwu \in L(X)$.
Moreover, there exists a word $c \in L(X)$ with length $\vert c \vert \leq n_0$ such that $vwucvwu \in L(X)$.
This implies $ucv \in L(X)$. Thus, condition (a) is satisfied.
\end{proof}
\begin{definition}
\label{def:weiss-subshift}
Let $A$ be a finite set.
We say that a subshift $X \subset A^\Z$ is a \emph{W-subshift} if there exists an integer
$n_0 \geq 0$ satisfying one of the two equivalent conditions of Proposition \ref{p:conditions-W}.
\end{definition}
The following statement is well known.
\begin{lemma}
\label{l:property-irr-sofic}
Let $A$ be a finite set and let $X \subset A^\Z$ be an irreducible sofic subshift.
Then there exists an integer $n_0 \geq 0$ satisfying the following property:
for every $u \in L(X)$, there exists a word $c \in L(X)$ with length $\vert c \vert \leq n_0$ such that the periodic configuration $(uc)^\infty$ is in $X$.
\end{lemma}
\begin{proof}
By Lemma 3.3.10 in \cite{lind-marcus},
there exists a strongly connected finite directed graph $\GG$ whose edges are labelled by elements of $A$ such that $X$ is the set of labels of bi-infinite paths in $\GG$.
If $v_1$ and $v_2$ are vertices of $\GG$, denote by $N(v_1,v_2)$ the minimal length of a finite directed path going from $v_1$ to $v_2$. Then $X$ clearly satisfies the statement by taking $n_0 = \max N(v_1,v_2)$, where $v_1$ and $v_2$ run over all vertices of $\GG$.
\end{proof}
\begin{proposition}
\label{p:irr-sft-W}
Let $A$ be a finite set.
Then every irreducible sofic subshift $X \subset A^\Z$ is a W-subshift.
In particular, every irreducible subshift of finite type $X \subset A^\Z$ is a W-subshift.
\end{proposition}
\begin{proof}
This is an immediate consequence of Lemma \ref{l:property-irr-sofic} since the word $ucu$ appears
in the configuration $(uc)^\infty$, so that condition (b) of Proposition \ref{p:conditions-W}
is satisfied.
\end{proof}
\begin{proposition}
\label{p:si-W}
Let $A$ be a finite set.
Then every strongly irreducible subshift $X \subset A^\Z$ is a W-subshift.
\end{proposition}
\begin{proof}
Suppose that $X \subset A^\Z$ is strongly irreducible.
Choose a positive integer $n_0$ large enough so that $X$ is $\Delta$-irreducible for
$\Delta = \{-n_0, - n_0 + 1, \dots, n_0\}$.
Then, for all $u,v \in L(X)$, we can find $c \in L(X)$ of length $n_0$ such that $ucv \in L(X)$.
Therefore, condition (a) of Proposition \ref{p:conditions-W} is satisfied.
\end{proof}
\begin{example}
\label{e:not-s-i}
The subshift $X \subset \{0,1\}^\Z$ that is reduced to the two periodic configurations $(01)^\infty$ and $(10)^\infty$ is an irreducible subshift of finite type.
This implies that $X$ is a W-subshift.
However,
$X$ is not topologically mixing and therefore not strongly irreducible.
Thus, the class of strongly irreducible subshifts is strictly contained in the class of W-subshifts.
\end{example}
\begin{example}
\label{e:not-sofic}
Consider the subshift $X \subset \{0,1,2\}^\Z$
consisting of all $x \in \{0,1,2\}^\Z$ such that no words of the form $01^m2^n0$
with $m \not= n$
appear in $x$.
Clearly $X$ is strongly irreducible and hence a W-subshift.
However, it can be shown by means of a pumping argument \cite[Example 3.1.7]{lind-marcus} that $X$ is not sofic.
It follows that the class of irreducible sofic subshifts over $\Z$ is strictly contained in the class of
W-subshifts.
\end{example}
There are W-subshifts which are neither sofic nor strongly irreducible.
In fact, we have the following result.
\begin{proposition}
\label{p:W-not-is-nor-si}
There exist uncountably many W-subshifts $X \subset \{0,1\}^\Z$ which are neither sofic nor strongly irreducible.
\end{proposition}
\begin{proof}
We first observe that there are only countably many sofic subshifts $X \subset \{0,1\}^\Z$.
Indeed, each sofic subshift $X \subset \{0,1\}^\Z$ is presented by a finite directed graph
with edges labelled by $0$ and $1$ (cf. \cite[Theorem 3.2.1]{lind-marcus}), and there are only countably many such graphs up to isomorphism.
Thus, it suffices to prove the existence of uncountably many W-subshifts $X \subset \{0,1\}^\Z$ which are not strongly irreducible.
Let $\N$ denote the set of nonnegative integers. With every
$\sigma \in \{2,4\}^\N$ we associate the infinite set of positive odd integers
$$
S(\sigma) = \{1+\sum_{k=0}^{n-1} \sigma(k): n \in \N\}
$$
and the subshift $X_\sigma \subset \{0,1\}^\Z$ consisting of all $x \in \{0,1\}^\Z$
such that no words of the form $01^m0$ with $m \in \N \setminus S(\sigma)$ appear in $x$.
Observe that $X_\sigma = X_{\sigma'}$ if and only if $\sigma = \sigma'$. Thus the set $\{X_\sigma: \sigma \in \{2,4\}^\N\}$ is uncountable.
Let us fix $\sigma \in \{2,4\}^\N$.
Note that for every $m \in \N$ there exists $p_m \in \{0,1,2,3\}$
such that $m+p_m \in S(\sigma)$.
Let us show that $X_\sigma$ satisfies condition (a) in Proposition \ref{p:conditions-W} with
$n_0 = 3$. Let $u,v \in L(X_\sigma)$.
If $u = 1^m$ or $v = 1^n$, with $m,n \in \N$, we can take $c$ to be the empty word. Otherwise, we have
$u = u'01^m$ and $v = 1^n0v'$ for some words $u',v' \in \{0,1\}^*$ and $m,n \in \N$, and we can take $c = 1^{p_{m+n}}$. This shows that $X_\sigma$ is a W-subshift.
On the other hand, if we take $u=v=0$, we have $u,v \in L(X_\sigma)$, but every $w \in
\{0,1\}^*$ such that $uwv \in L(X_\sigma)$ must have odd length. It follows that
the subshift $X_\sigma$ is not topologically mixing and therefore not strongly
irreducible.
\end{proof}
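A quick numerical check (our own, for one sample choice of $\sigma$) confirms the claim used in the proof that every $m \in \N$ admits some $p_m \in \{0,1,2,3\}$ with $m + p_m \in S(\sigma)$: the point is that $1 \in S(\sigma)$ and consecutive elements of $S(\sigma)$ differ by $\sigma(n) \in \{2,4\}$.

```python
def S(sigma, n_terms):
    """First n_terms elements of S(sigma) = {1 + sum_{k<n} sigma(k) : n >= 0}."""
    out, s = [], 1
    for k in range(n_terms):
        out.append(s)
        s += sigma[k]
    return out

sigma = [2, 4] * 50          # the periodic sequence 2,4,2,4,... as a sample choice
Ssigma = set(S(sigma, 100))
# every m up to a safe bound admits p_m in {0,1,2,3} with m + p_m in S(sigma)
for m in range(200):
    assert any(m + p in Ssigma for p in range(4))
```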
\begin{figure}
\begin{center}
\begin{picture}(360,220)
\thicklines
\put(0,20){\line(1,0){340}}
\put(0,210){\line(1,0){340}}
\put(10,150){\line(1,0){250}}
\put(10,30){\line(1,0){250}}
\put(10,30){\line(0,1){120}}
\put(260,30){\line(0,1){120}}
\put(0,20){\line(0,1){190}}
\put(340,20){\line(0,1){190}}
\put(20,40){\line(1,0){230}}
\put(20,40){\line(0,1){70}}
\put(20,110){\line(1,0){230}}
\put(180,200){\line(1,0){150}}
\put(250,40){\line(0,1){70}}
\put(180,200){\line(0,-1){150}}
\put(180,50){\line(1,0){150}}
\put(330,200){\line(0,-1){150}}
\put(5,195){{\bf W-subshifts}}
\put(15,135){irreducible sofic subshifts}
\put(185,185){strongly irreducible}
\put(185,172){subshifts}
\put(25,95){irreducible subshifts}
\put(25,83){of finite type}
\put(194,140){{\tiny even subshift}}
\put(194,100){{\tiny full subshift}}
\put(185,160){$\bullet$}
\put(194,160){{\tiny the subshift in Example \ref{e:not-sofic}}}
\put(185,140){$\bullet$}
\put(185,100){$\bullet$}
\put(15,125){$\bullet$}
\put(25,73){$\bullet$}
\put(33,73){{\tiny the subshift in Example \ref{e:not-s-i}}}
\put(5,185){$\bullet$}
\put(13,185){{\tiny the subshifts in Proposition \ref{p:W-not-is-nor-si}}}
\put(22,125){{\tiny odd subshift}}
\end{picture}
\end{center}
\caption{Relations among classes of W-subshifts (the \emph{odd subshift} is the
subshift $X \subset \{0,1\}^\Z$ consisting of all configurations $x \in \{0,1\}^\Z$
such that no words of the form $01^n0$ with $n$ even appear in $x$)}
\label{fig:relations}
\end{figure}
\section{Density of periodic configurations in W-subshifts}
\label{sec:strongly-over-Z}
An immediate consequence of Lemma \ref{l:property-irr-sofic}
is that every irreducible sofic subshift over $\Z$ contains a dense set of periodic configurations (cf. \cite{fiorenzi-periodic}).
More generally, we have the following result.
\begin{theorem}
\label{density-for-W}
Let $A$ be a finite set and
let $X \subset A^\Z$ be a W-subshift.
Then $X$ contains a dense set of periodic configurations.
\end{theorem}
\begin{proof}
(Benjy Weiss)
It suffices to show that every word $w \in L(X)$ appears as a subword of some periodic configuration of $X$.
\par
As $X$ is a W-subshift, we can find an integer $n_0 \geq 0$ satisfying condition (b) of Proposition \ref{p:conditions-W}.
For each $u\in L(X)$, let $F(u)$
denote the finite set consisting of all words $c \in A^*$ with length $\vert c \vert \leq n_0$ such that $ucu \in L(X)$.
Take $u_0\in L(X)$ such that $F(u_0)$ has minimal cardinality.
Observe that $F(u_0)$ is not empty by our hypothesis on $n_0$.
Let us fix an arbitrary word $c_0 \in F(u_0)$.
\par
Suppose that $v \in A^*$ is such that $u_0vu_0\in L(X)$.
We then have $F(u_0vu_0 ) \subset F(u_0)$ and hence
\begin{equation*}
\label{e:fillers}
F(u_0 v u_0) = F(u_0)
\end{equation*}
by minimality.
In particular, we have $c_0 \in F(u_0 v u_0)$, that is, $u_0 v u_0 c_0 u_0 v u_0 \in L(X)$.
By induction, it follows that the sequence of words $(v_k)_{k \geq 1}$ defined by
$v_1 = u_0 v u_0$ and $v_{k + 1} = v_k c_0 v_k$ for $k \geq 1$, satisfies $v_k \in L(X)$ for all $k \geq 1$.
As
$$
v_{k + 1} = (u_0vu_0c_0)^{2^k - 1}u_0 v u_0
$$
for all $k \geq 1$,
we deduce that
$$
(u_0 v u_0 c_0)^n \in L(X)
$$
for all $n \geq 0$.
As $X$ is shift-invariant and closed in $A^\Z$, this implies that the periodic configuration $(u_0 v u_0 c_0)^\infty$ is in $X$.
This shows in particular that $v$ appears as a subword of a periodic configuration in $X$.
\par
Consider now an arbitrary word $w \in L(X)$.
Since $X$ is irreducible,
we can find
$u_1 \in A^*$ such that $u_0u_1w \in L(X)$ and $u_2 \in L(X)$ such that $ u_0 u_1 w u_2 u_0 = (u_0 u_1 w) u_2 u_0 \in L(X)$.
Applying our previous argument to $v = u_1 w u_2$, we conclude that $u_1 w u_2$ and hence $w$ appears as a subword of a periodic configuration of $X$.
This shows that the periodic configurations are dense in $X$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{t:benjy}]
The statement immediately follows from
Proposition \ref{p:si-W} and Theorem \ref{density-for-W}.
\end{proof}
\section{Concluding remarks and open questions}
\label{s:questions}
As already mentioned in the introduction,
the question whether every strongly irreducible subshift of finite type over $\Z^d$
contains a dense set of periodic configurations
remains open for $d \geq 3$. It may be extended as follows.
\begin{question}
Let $G$ be a residually finite group.
Does every strongly irreducible subshift of finite type over $G$ contain a dense set of periodic configurations?
\end{question}
Note that, by Theorem \ref{t:density-periodic},
it would suffice to prove that such a subshift contains a periodic configuration unless it is empty.
\par
Let us observe that Theorem \ref{t:density-periodic} becomes false if the hypothesis saying that $X$ is strongly irreducible is replaced by the weaker condition that $X$ is topologically mixing.
Indeed, Weiss \cite[p. 358]{weiss-sgds} described a quiescent topologically mixing subshift of finite type over $\Z^2$ which is not surjunctive.
\par
As mentioned in the introduction, when $G$ is an amenable group (e.g., $G = \Z^d$), it is known that every strongly irreducible subshift over $G$ is surjunctive
(see \cite[Corollary 4.4]{cc-myhill}).
Weiss \cite[p. 358]{weiss-sgds} gave an example of a quiescent subshift of finite type over $\Z$ which is not surjunctive.
It consists of all bi-infinite sequences of $0$'s, $1$'s, and $2$'s in which only $00$, $01$, $11$, $12$, and $22$ are allowed to appear among the subwords of length $2$.
Gromov \cite{gromov-esav} and Weiss \cite{weiss-sgds} proved that if $G$ is a sofic group then the full shift $A^G$ is surjunctive for any finite alphabet $A$
(see also \cite[Chapter 7]{ca-and-groups-springer} for an introduction to sofic groups and a presentation of the Gromov-Weiss theorem).
The class of sofic groups is known to be very large.
It includes in particular all residually amenable groups and hence all residually finite groups and all amenable groups.
Actually, no example of a non-sofic group has been found up to now, although
it is widely believed that such examples do exist (cf. \cite[p. 359]{weiss-sgds}).
On the other hand, it is unknown whether the full shift $A^G$ is surjunctive for any group $G$ and any finite alphabet $A$ (Gottschalk's conjecture).
\begin{question}
Let $G$ be a residually finite group (resp. a sofic group, resp. a group without any additional assumption).
Is every strongly irreducible subshift over $G$ surjunctive?
\end{question}
In Theorem \ref{t:benjy}, one cannot replace the hypothesis saying that $X$ is strongly irreducible by the weaker hypothesis that $X$ is topologically mixing.
Indeed, Petersen \cite{petersen} constructed a topologically mixing minimal subshift over $\Z$ and such a subshift cannot contain periodic configurations by minimality.
On the other hand, minimality clearly implies surjunctivity.
We are therefore led to the following questions.
\begin{question}
Is every topologically mixing subshift over $\Z$ surjunctive?
\end{question}
\begin{question}
Does every strongly irreducible subshift over $\Z^2$ contain a dense set of periodic configurations?
\end{question}
Boyle, Lind and Rudolph \cite{boyle-lind-rudolph} proved that the automorphism group of every subshift of finite type over $\Z$ is residually finite.
They also gave an example \cite[Example 3.9]{boyle-lind-rudolph}
of a minimal subshift over $\Z$ whose automorphism group is not residually finite (it contains a subgroup isomorphic to $\Q$).
Note that such a subshift contains no periodic configurations by minimality.
On the other hand, Hochman \cite[Examples 1.4 and 1.5]{hochman-automorphism} gave examples of a positive entropy subshift of finite type
and of a topologically mixing subshift of finite type, both over $\Z^2$,
whose automorphism groups are not residually finite.
It seems that the following question is open.
\begin{question}
Is there a strongly irreducible subshift over $\Z^2$ whose automorphism group is not residually finite?
\end{question}
Observe that a positive answer to Question 5 would give a negative answer to Question 4.
\section{Introduction}
Given a graph $G$, let $\alpha_k = \alpha_k(G)$ denote the size of the largest set of vertices such that any two vertices in the set are at distance larger than $k$. Thus, with this notation, $\alpha_1$ is just the independence number $\alpha$ of a graph.
The parameter $\alpha_k(G)$ therefore represents the largest number of vertices that can be spread out in $G$ so that any two of them are at distance at least $k+1$. It is known that determining $\alpha_k$ is NP-hard in general \cite{kZ1993}.
The $k$-independence number of a graph is directly related to other combinatorial parameters such as the average distance \cite{FH1997}, packing chromatic number \cite{GHHHR2008}, injective chromatic number \cite{HkSS2002}, and strong chromatic index \cite{M2000}. Upper bounds on the $k$-independence number directly give lower bounds on the corresponding distance or packing chromatic number \cite{AM2002}, as well as necessary conditions for the existence of perfect codes.
In this note we generalize and improve the known spectral upper bounds for the $k$-independence number from \cite{Fiolkindep}, \cite{act16} and \cite{acf19}. For some cases, we also show that our bounds are sharp. Let $G=(V,E)$ be a graph with $n=|V|$ vertices, $m=|E|$ edges, and adjacency matrix $\A$ with spectrum
$
\spec G=\{\theta_0,\theta_1^{m_1},\ldots,\theta_d^{m_d}\},
$
where the different eigenvalues are in decreasing order, $\theta_0>\theta_1>\cdots>\theta_d$, and the superscripts stand for their multiplicities.
When the eigenvalues are presented with possible repetitions, we shall indicate them by
$
\ev G: \lambda_1 \geq \lambda_2 \geq \cdots\geq \lambda_n.
$
The first known spectral bound for the independence number $\alpha$ of a graph is due to Cvetkovi\'c \cite{c71}.
\begin{theorem}[Cvetkovi\'c \cite{c71}]
\label{thm:cvetkovic}
Let $G$ be a graph with eigenvalues $\lambda_1\ge \cdots \ge \lambda_n$. Then,
$$
\alpha\le \min \{|\{i : \lambda_i\ge 0\}| , |\{i : \lambda_i\le 0\}|\}.
$$
\end{theorem}
Another well-known result is the following bound due to Hoffman (unpublished; see for instance Haemers \cite{h95}).
\begin{theorem}[Hoffman \cite{h95}]
\label{thm:hoffman}
If $G$ is a regular graph on $n$ vertices with eigenvalues $\lambda_1\ge \cdots \ge \lambda_n$, then
\[
\alpha \leq n\frac{-\lambda_n }{\lambda_1 - \lambda_n}.
\]
\end{theorem}
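For illustration (not part of the original text), both bounds can be evaluated on the Petersen graph, whose spectrum $\{3, 1^5, (-2)^4\}$ is classical; both bounds give the exact value $\alpha = 4$.

```python
# Known spectrum of the Petersen graph: eigenvalue -> multiplicity.
spectrum = {3: 1, 1: 5, -2: 4}
eigs = sorted((t for t, m in spectrum.items() for _ in range(m)), reverse=True)
n = len(eigs)  # 10 vertices

# Cvetkovic's inertia bound: alpha <= min(#{lambda_i >= 0}, #{lambda_i <= 0}).
cvetkovic = min(sum(l >= 0 for l in eigs), sum(l <= 0 for l in eigs))

# Hoffman's ratio bound for a regular graph:
# alpha <= n * (-lambda_n) / (lambda_1 - lambda_n).
hoffman = n * (-eigs[-1]) / (eigs[0] - eigs[-1])

print(cvetkovic, hoffman)  # 4 4.0 -- and indeed alpha(Petersen) = 4
```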
Regarding the $k$-independence number, the following three results are known. The first is due to Fiol \cite{Fiolkindep} and requires a preliminary definition. Let $G$ be a graph with distinct eigenvalues $\theta_0 > \cdots > \theta_d$. Let $P_k(x)$ be chosen among all polynomials $p(x) \in \Re_k[x]$, that is, polynomials with real coefficients and degree at most $k$, satisfying $|p(\theta_i)| \leq 1$ for all $i = 1,\ldots,d$, and such that $P_k(\theta_0)$ is maximized. The polynomial $P_k(x)$ defined above is called the {\em $k$-alternating polynomial} of $G$ and was shown to be unique in \cite{fgy96}, where it was used to study the relationship between the spectrum of a graph and its diameter.
\begin{theorem}[Fiol \cite{Fiolkindep}]
\label{thm:fiol}
Let $G$ be a $d$-regular graph on $n$ vertices, with distinct eigenvalues $\theta_0 >\cdots > \theta_d$ and let $P_k(x)$ be its $k$-alternating polynomial. Then,
\[
\alpha_k \leq \frac{2n}{P_k(\theta_0) + 1}.
\]
\end{theorem}
More recently, Cvetkovi\'c-like and Hoffman-like bounds were given by Abiad, Cioab\u{a}, and Tait in \cite{act16}.
\begin{theorem}[Abiad, Cioab\u{a}, Tait \cite{act16}]
\label{previous1act}
Let $G$ be a graph on $n$ vertices with adjacency matrix $\A$, with eigenvalues $\lambda_1 \geq \cdots \geq \lambda_n$. Let $w_k$ and $W_k$ be respectively the smallest and the largest diagonal entries of $\A^k$. Then,
\[
\alpha_k \leq \min\{|\{i : \lambda_i^k \geq w_k\}| , |\{i : \lambda_i^k \leq W_k\}|\}.
\]
\end{theorem}
\begin{theorem}[Abiad, Cioab\u{a}, Tait \cite{act16}]
\label{thm:abiad}
Let $G$ be a $\delta$-regular graph on $n$ vertices with adjacency matrix $\A$, whose distinct eigenvalues are $\theta_0(=\delta) > \cdots> \theta_d$. Let $\widetilde{W_k}$ be the largest diagonal entry of $\A+\A^2+\cdots+\A^k$. Let $\theta = \max\{|\theta_1| , |\theta_d|\}$. Then,
\[
\label{aida}
\alpha_k \leq n \frac{\widetilde{W_k}+ \sum_{j = 1}^k \theta^j}{\sum_{j = 1}^k \delta^j + \sum_{j = 1}^k\theta^j}.
\]
\end{theorem}
Finally, as a consequence of a generalization of the last two theorems,
Abiad, Coutinho, and the author \cite{acf19}, proved the following results.
\begin{theorem}[Abiad, Coutinho, Fiol \cite{acf19}]
\label{theo-gen-k}
Let $G$ be a $\delta$-regular graph with $n$ vertices and distinct eigenvalues $\theta_0(=\delta)>\theta_1> \cdots > \theta_d$. Let $W_k=\max_{u\in V}\{\sum_{j=1}^k(\A^j)_{uu}\}$.
Then, the $k$-independence number of $G$ satisfies the following:
\begin{itemize}
\item[$(i)$]
If $k=2$, then
\begin{equation*}
\alpha_2\le n\frac{\theta_0+\theta_i\theta_{i-1}}{(\theta_0-\theta_i)
(\theta_0-\theta_{i-1})},
\end{equation*}
where $\theta_i$ is the largest eigenvalue not greater than $-1$.
\item[$(ii)$]
If $k>2$ is odd, then
\begin{equation*}
\label{eq-k}
\alpha_k(G)\le n\frac{W_k-\sum_{j=0}^k \theta_d^j}{\sum_{j=0}^k \delta^j-\sum_{j=0}^k \theta_d^j}.
\end{equation*}
\item[$(iii)$]
If $k>2$ is even, then
\begin{equation*}
\alpha_k(G)\le n\frac{W_k+1/2}{\sum_{j=0}^k \delta^j+1/2}.
\end{equation*}
\item[$(iv)$]
If $G=(V,E)$ is a walk-regular graph, then
\begin{equation*}
\alpha_k(G)\le n\frac{1-\lambda(q_k)}{q_k(\delta)-\lambda(q_k)}
\label{coro-walk-reg}
\end{equation*}
for $k=0,\ldots,d-1$, where $q_k=p_0+\cdots+p_k$ with the $p_i$'s being the predistance polynomials of $G$ (see next section), and $\lambda(q_k)=\min_{i\in [2,d]}\{q_k(\theta_i)\}$.
\end{itemize}
\end{theorem}
\section{Some Background}
For basic notation and results see \cite{biggs,g93}. Let $G=(V,E)$ be a (simple) graph with $n=|V|$ vertices, $m=|E|$ edges, and adjacency matrix $\A$ with spectrum $\spec G=\{\theta_0,\theta_1^{m_1},\ldots,\theta_d^{m_d}\}$. When the eigenvalues are presented with possible repetitions, we shall indicate them by $\lambda_1 \geq \lambda_2 \geq \cdots\geq \lambda_n$.
Let us consider the scalar product in $\Re_d[x]$:
$$
\langle f,g\rangle_G=\frac{1}{n}\tr(f(\A)g(\A))=\frac{1}{n}\sum_{i=0}^{d} m_i f(\theta_i)g(\theta_i).
$$
The so-called {\em predistance polynomials} $p_0(=1),p_1,\ldots, p_d$ are a sequence of orthogonal polynomials with respect to the above product, with $\dgr p_i=i$, normalized in such a way that $\|p_i\|_G^2=p_i(\theta_0)$ (this makes sense since it is known that $p_i(\theta_0)>0$) for $i=0,\ldots,d$. Therefore they are uniquely determined, for instance via the Gram--Schmidt process. These polynomials were introduced by Fiol and Garriga in \cite{fg97} to prove the so-called `spectral excess theorem' for distance-regular graphs, in which case $p_0(=1),p_1,\ldots, p_d$ coincide with the distance polynomials.
See \cite{cffg09} for further details and applications.
A graph $G$ is called {\em $k$-partially walk-regular}, for some integer $k\ge 0$, if the number of closed walks of a given length $l\le k$, rooted at a vertex $v$, only depends on $l$. Thus, every (simple) graph is $k$-partially walk-regular for $k=0,1$, and every regular graph is $2$-partially walk-regular. Moreover, $G$ is $k$-partially walk-regular for any $k$ if and only if $G$ is walk-regular, a concept introduced by Godsil and McKay in \cite{gm80}. For example, it is well-known that every distance-regular graph is walk-regular (but the converse does not hold).
Eigenvalue interlacing is a powerful and old technique that has found countless applications in combinatorics and other fields. This technique will be used in several of our proofs. For more details, historical remarks, and other applications, see Fiol and Haemers \cite{f99,h95}.
Given square matrices $\A$ and $\B$ with respective eigenvalues $\lambda_1\geq \cdots \geq \lambda_n$ and $\mu_1 \geq \cdots \geq \mu_m$, with $m<n$, we say that the second sequence {\em interlaces} the first one if, for all $i = 1,\ldots,m$, it follows that
$\lambda_i \geq \mu_i \geq \lambda_{n-m+i}$.
\begin{theorem}[Interlacing \cite{f99,h95}]
\label{theo-interlacing}
Let $\S$ be a real $n \times m$ matrix such that $\S^T \S = \I$, and let $\A$ be an $n \times n$ matrix with eigenvalues $\lambda_1 \geq \cdots \geq \lambda_n$. Define $\B = \S^T \A \S$, and call its eigenvalues $\mu_1 \geq\cdots \geq \mu_m$. Then,
\begin{enumerate}[(i)]
\item
The eigenvalues of $\B$ interlace those of $\A$.
\item
If $\mu_i = \lambda_i$ or $\mu_i = \lambda_{n-m+i}$, then there is an eigenvector $\vecv$ of $\B$ for $\mu_i$ such that $\S \vecv$ is an eigenvector of $\A$ for $\mu_i$.
\item
If there is an integer $k \in \{0,\ldots,m\}$ such that $\lambda_i = \mu_i$ for $1 \leq i \leq k$, and $\mu_i = \lambda_{n-m+i}$ for $ k+1 \leq i \leq m$ $(${\em tight interlacing}$)$, then $\S \B = \A \S$.
\end{enumerate}
\end{theorem}
Two interesting particular cases where interlacing occurs (obtained by choosing the matrix $\S$ appropriately) are the following. Let $\A$ be the adjacency matrix of a graph $G=(V,E)$. First, if $\B$ is a principal submatrix of $\A$, then $\B$ corresponds to the adjacency matrix of an induced subgraph $G'$ of $G$. Second, for a given partition of the vertices of $G$, say $V=U_1\cup\cdots\cup U_m$, let $\B$ be the so-called {\em quotient matrix} of $\A$, with elements
$b_{ij}$, $i,j=1,\ldots,m$, being the average row sums of the corresponding blocks $\A_{ij}$ of $\A$. Actually, the quotient matrix $\B$ need not be
symmetric or equal to $\S^\top\A\S$, but in this case $\B$ is
similar to (and therefore has the same spectrum as) $\S^\top\A\S$.
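As a quick numerical illustration of the first case (the eigenvalues of an induced subgraph interlace those of the graph), the following sketch checks the interlacing inequalities with numpy, using the $5$-cycle and one of its induced paths; the helper names are, of course, only for this example.

```python
import numpy as np

def interlaces(lam, mu):
    """Check the interlacing inequalities lam_i >= mu_i >= lam_{n-m+i},
    with lam (length n) and mu (length m) sorted in non-increasing order."""
    n, m = len(lam), len(mu)
    tol = 1e-9
    return all(lam[i] + tol >= mu[i] >= lam[n - m + i] - tol
               for i in range(m))

# Adjacency matrix of the 5-cycle C_5; its leading 3x3 principal
# submatrix is the adjacency matrix of an induced path on 3 vertices.
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1
B = A[:3, :3]

lam = np.sort(np.linalg.eigvalsh(A))[::-1]
mu = np.sort(np.linalg.eigvalsh(B))[::-1]
print(interlaces(lam, mu))  # True
```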
\section{The minor polynomials}
In this section we introduce a new class of polynomials, obtained from the different eigenvalues of a graph, which are used later to derive our main results.
Let $G$ be a $k$-partially walk-regular graph with adjacency matrix $\A$ and spectrum $\spec G=\{\theta_0,\theta_1^{m_1},\ldots,\theta_d^{m_d}\}$.
Let $p$ be a polynomial of degree at most $k$, satisfying $p(\theta_0)=1$ and $p(\theta_i)\ge 0$ for $i=1,\ldots,d$. Then, in Section \ref{main-section} we prove that the $k$-independence number of $G$ satisfies the bound $\alpha_k\le \tr p(\A)=\sum_{i=0}^d m_i p(\theta_i)$.
So, the search for the best result motivates the following definition.
\begin{definition}
\label{def-minor-p}
Let $G=(V,E)$ be a graph with adjacency matrix $\A$ with spectrum $\spec G=\{\theta_0,\theta_1^{m_1},\ldots,\theta_d^{m_d}\}$.
For a given $k=0,1,\ldots,d$, let us consider the set of real polynomials ${\cal P}_k=\{p\in \R_k:p(\theta_0)=1, p(\theta_i)\ge 0, 1\le i\le d\}$, and the continuous function $\Psi: {\cal P}_k\rightarrow \R^+$ defined by $\Psi(p)=\tr p(\A)$. Then, the $k$-minor polynomial of $G$ is the point
$p_k$ where $\Psi$ attains its minimum:
$$
\tr p_k(\A)=\min\left\{\tr p(\A) : p\in {\cal P}_k \right\}.
$$
\end{definition}
An alternative approach to the $k$-minor polynomials is the following: Let $p_k$ be the polynomial defined by $p_k(\theta_0)=x_0=1$ and $p_k(\theta_i)=x_i$, for $i=1,\ldots,d$, where the vector $(x_1,x_2,\ldots,x_d)$ is a solution of the following linear programming problem:
\begin{center}
\frame{
$\begin{array}{rl}
& \\
{\tt minimize} & \sum_{i=0}^d m_ix_i\\
{\tt with\ constraints} & f[\theta_0,\ldots,\theta_m]=0,\ m=k+1,\ldots,d\\
& x_i\ge 0,\ i=1,\ldots,d,\\
&
\end{array}$}
\end{center}
where $f[\theta_0,\ldots,\theta_m]$ denotes the $m$-th divided difference of Newton interpolation, recursively defined by $f[\theta_i,\ldots,\theta_j]=\frac{f[\theta_{i+1},\ldots,\theta_j]-f[\theta_i,\ldots,\theta_{j-1}]}
{\theta_j-\theta_{i}}$, for $j>i$, starting with $f[\theta_i]=p(\theta_i)=x_i$, $0\le i\le d$.
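The recursion above is straightforward to implement; the following sketch (with exact rational arithmetic) computes the divided differences $f[\theta_0,\ldots,\theta_m]$, and checks that on the eigenvalue mesh of the Hamming graph $H(2,7)$ considered below, the values of the degree-one polynomial $p_1(x)=(x+7)/14$ have vanishing divided differences of every order $m\ge 2$, as the constraints of the linear program require.

```python
from fractions import Fraction

def divided_diffs(theta, x):
    """Top row of the Newton divided-difference table: returns
    f[theta_0,...,theta_m] for m = 0,...,d, computed by the recursion
    f[t_i,...,t_j] = (f[t_{i+1},...,t_j] - f[t_i,...,t_{j-1}])/(t_j - t_i),
    starting from f[t_i] = x_i."""
    row = list(map(Fraction, x))
    out = [row[0]]
    for m in range(1, len(theta)):
        row = [(row[i + 1] - row[i]) / (theta[i + m] - theta[i])
               for i in range(len(row) - 1)]
        out.append(row[0])
    return out

# Values of p_1(x) = (x + 7)/14 on the eigenvalue mesh of H(2,7):
theta = [7, 5, 3, 1, -1, -3, -5, -7]
x = [Fraction(t + 7, 14) for t in theta]
print([str(v) for v in divided_diffs(theta, x)])
# ['1', '1/14', '0', '0', '0', '0', '0', '0']
```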
Thus, we can easily compute the minor polynomial, for instance by using the simplex method. Moreover, as the problem is in the so-called standard form, with
$d$ variables, $x_1,\ldots,x_d$, and $d-(k+1)+1=d-k$
equations, the `basic vectors' have at least $d-(d-k)=k$ zeros. Note also that, from the conditions of the programming problem, the $k$-minor polynomial turns out to be of the form
$p_k(x)=f[\theta_0]+f[\theta_0,\theta_1](x-\theta_0)+\cdots +f[\theta_0,\ldots,\theta_k](x-\theta_0)\cdots (x-\theta_{k-1})$, with degree at most $k$. Consequently when we apply the simplex method, we obtain a $k$-minor polynomial $p_k$ with degree $k$ and exactly $k$ zeros at the mesh $\theta_1,\ldots,\theta_d$.
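The linear program above can be solved directly; a minimal sketch using scipy's `linprog` (with the divided differences expressed through their closed form $f[\theta_0,\ldots,\theta_m]=\sum_{i=0}^m x_i/\prod_{j\neq i}(\theta_i-\theta_j)$) reproduces, for the Hamming graph $H(2,7)$ treated below, the values $\tr p_1(\A)=64$ and $\tr p_2(\A)=16$.

```python
from math import prod
from scipy.optimize import linprog

def minor_poly_bound(theta, mult, k):
    """The LP from the text: minimize sum_i m_i x_i over x_0 = 1, x_i >= 0,
    with the divided differences of orders m = k+1,...,d equal to zero
    (so that the interpolating polynomial has degree at most k)."""
    d = len(theta) - 1
    A_eq, b_eq = [], []
    for m in range(k + 1, d + 1):
        # closed form of the m-th divided difference as a linear form in x
        coef = [1.0 / prod(theta[i] - theta[j]
                           for j in range(m + 1) if j != i)
                for i in range(m + 1)]
        A_eq.append(coef[1:] + [0.0] * (d - m))  # x_0 = 1 moved to the rhs
        b_eq.append(-coef[0])
    res = linprog(c=mult[1:], A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * d)
    return mult[0] + res.fun

theta = [7, 5, 3, 1, -1, -3, -5, -7]   # eigenvalues of H(2,7)
mult = [1, 7, 21, 35, 35, 21, 7, 1]    # multiplicities
print(round(minor_poly_bound(theta, mult, 1), 6))  # 64.0 (Hoffman bound)
print(round(minor_poly_bound(theta, mult, 2), 6))  # 16.0
```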
In fact, as shown in the following proposition, a $k$-minor polynomial has exactly $k$ zeros in the interval $[\theta_d,\theta_0)$.
\begin{proposition}
\label{theo:pols}
Let $G$ be a graph with spectrum $\spec G=\{\theta_0,\theta_1^{m_1},\ldots,\theta_d^{m_d}\}$. Then, for every $k=0,1,\ldots,d$, a $k$-minor polynomial $p_k$ has degree $k$ with its $k$ zeros in $[\theta_d,\theta_0)\subset \Re$.
\end{proposition}
\begin{proof}
We only need to deal with the case $k\ge 1$.
Assume that a $k$-minor polynomial $p_k$ has zeros $\xi_{r}\le \xi_{r-1}\le \cdots \le \xi_{1}$, with $r\le k$. Then, it can be written as $p_k(x)=\prod_{i=1}^r \frac{x-\xi_i}{\theta_0-\xi_i}$. Let us first show that $\xi_r\ge \theta_d$. By contradiction, assume that $\xi_r<\theta_d$, and let $\theta_i$ be the smallest eigenvalue that is not a zero of $p_k$ (the existence of such a $\theta_i$ is guaranteed by the condition $r\le k$). Then, the polynomial $q_r(x)=\frac{x-\theta_i}{\theta_0-\theta_i}\prod_{j=1}^{r-1}\frac{x-\xi_j}{\theta_0-\xi_j}$, with degree $r\le k$, satisfies the conditions $q_r(\theta_0)=1$ and $q_r(\theta_j)\ge 0$ for $j=1,\ldots,d$, and moreover $\Psi(q_r)<\Psi(p_k)$, since
$\frac{\theta_j-\theta_i}{\theta_0-\theta_i}<\frac{\theta_j-\xi_r}{\theta_0-\xi_r}$ for $j>i$, a contradiction with the fact that $\Psi(p_k)$ is minimum. Second, let us prove, again by contradiction, that $\xi_1<\theta_0$. Otherwise (note that $\xi_1=\theta_0$ is impossible, since $p_k(\theta_0)=1$), we could consider the polynomial $q_{r-1}$, with degree $r-1<k$, defined as $q_{r-1}(x)=\prod_{i=2}^r \frac{x-\xi_i}{\theta_0-\xi_i}$, satisfying again $q_{r-1}(\theta_0)=1$ and $q_{r-1}(\theta_i)\ge 0$ for $i=1,\ldots,d$, since $\frac{\theta_i-\xi_1}{\theta_0-\xi_1}>1$ for all $i=1,\ldots,d$. But, from the same inequalities, we also have $\Psi(q_{r-1})<\Psi(p_k)$, a contradiction.
Finally, assume that $r<k$. Since all the zeros $\xi_r\le \cdots \le \xi_1$ are in the interval $[\theta_d,\theta_0)$, we can consider again the smallest eigenvalue $\theta_i$ such that $p_k(\theta_i)>0$. Then, reasoning as before, the polynomial
$q_{r+1}(x)=\frac{x-\theta_i}{\theta_0-\theta_i}\prod_{j=1}^r\frac{x-\xi_j}{\theta_0-\xi_j}$, with degree $r+1\le k$, leads to the desired contradiction $\Psi(q_{r+1})<\Psi(p_k)$.
\end{proof}
The above results, together with $p_k(\theta_0)=1$ and $p_k(\theta_i)\ge 0$ for $i=1,\ldots,d$ drastically reduce the number of possible candidates for $p_k$. Let us consider some particular values of $k$:
\begin{itemize}
\item
The cases $k=0$ and $k=d$ are easy. Clearly, $p_0=1$, and $p_d$ has zeros at all the points $\theta_i$ for $i\neq 0$. In fact, $p_d=\frac{1}{n}H$, where $H$ is the Hoffman polynomial \cite{hof63}.
\item
For $k=1$, the only zero of $p_1$ must be at $\theta_d$. Hence,
\begin{equation}
\label{p1}
p_1(x)=\frac{x-\theta_d}{\theta_0-\theta_d}.
\end{equation}
Moreover, since $p_1(\theta_i)< 1$ for every $i=1,\ldots,d$, we have that
$$
(1=)\Psi(p_{d})<\Psi(p_{d-1})<\Psi(p_{d-2})<\cdots < \Psi(p_1)< \Psi(p_0)(=n),
$$
since, for $k=0,\ldots,d-1$, the polynomial $p_kp_1\in {\cal P}_{k+1}$ satisfies $(p_kp_1)(\theta_i)\le p_k(\theta_i)$ for every $i=1,\ldots,d$, with strict inequality whenever $p_k(\theta_i)>0$, whence $\Psi(p_{k+1})\le \Psi(p_kp_1)<\Psi(p_k)$.
\item
For $k=2$, the two zeros of $p_2$ must be at consecutive eigenvalues $\theta_{i}$ and $\theta_{i-1}$. More precisely, the same reasoning as used in \cite{acf19} shows that $\theta_i$ must be the largest eigenvalue not greater than $-1$. Then, with these values,
\begin{equation}
\label{p2}
p_2(x)=\frac{(x-\theta_i)(x-\theta_{i-1})}{(\theta_0-\theta_i)(\theta_0-\theta_{i-1})}.
\end{equation}
\item
When $k=3$, the only possible zeros of $p_3$ are $\theta_d$ and a consecutive pair $\theta_i$, $\theta_{i-1}$ for some $i\in [2,d-1]$. In this case, empirical results suggest that such a pair must be located around the `center' of the mesh (see the examples below).
\item
When $k=d-1$, the polynomial $p_{d-1}$ takes only one non-zero value at the mesh, say at $\theta$, which seems to be located at one of the `extremes' of the mesh. In fact, when $G$ is an $r$-antipodal distance-regular graph, we show in the last section that either $\theta=\theta_1$ or $\theta=\theta_d$ for odd $d$ yields the tight bound (that is, $r$) for $\alpha_{d-1}$, as does Theorem \ref{thm:fiol}.
Consequently, for such a graph with odd $d$, we have two different $(d-1)$-minor polynomials, say $p$ and $q$, and hence infinitely many $(d-1)$-minor polynomials of the form $r=\gamma p+(1-\gamma)q$ where $\gamma\in [0,1]$. (Notice that, if $\gamma\not\in\{0,1\}$, then $r$ must have some zero not belonging to the mesh $\{\theta_1,\ldots,\theta_d\}$.)
\end{itemize}
Now, let us give all the $k$-minor polynomials, with $k=1,\ldots,d$, for two particular distance-regular graphs. Namely, the Hamming graph $H(2,7)$ and the Johnson graph $J(14,7)$ (for more details about these graphs, see for instance \cite{bcn89}).
First, we recall that the Hamming graph $H(2,7)$ has spectrum
$$
\spec H(2,7)=\{7^1,5^7,3^{21},1^{35},-1^{35},-3^{21},-5^7,-7^1\}.
$$
Then, the different minor polynomials $p_0,\ldots,p_7$ are shown in Figure \ref{fig1}, and their values $x_i=p_k(\theta_i)$ at the different eigenvalues $\theta_0,\ldots,\theta_7$ are shown in Table \ref{table2}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=13cm]{polsHamming}
\vskip-7.5cm
\caption{The polynomials of the Hamming graph $H(2,7)$.}
\label{fig1}
\end{center}
\end{figure}
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|ccccccc|c|}
\hline
$k$ & $x_7$ & $x_6$ & $x_5$ & $x_4$ & $x_3$ & $x_2$ & $x_1$ & $x_0$\\
\hline
$1$ & 0 & 1/7 & 2/7 & 3/7 & 4/7 & 5/7 & 6/7 & 1 \\
\hline
$2$ & 1 & 1/2 & 1/6 & 0 & 0 & 1/6 & 1/2 & 1 \\
\hline
$3$ & 0 & 1/14 & 1/21 & 0 & 0 & 5/42 & 3/7 & 1 \\
\hline
$4$ & 2/9 & 0 & 0 & 1/45 & 0 & 0 & 2/9 & 1 \\
\hline
$5$ & 0 & 1/35 & 0 & 0 & 0 & 0 & 6/35 & 1 \\
\hline
$6$ & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\hline
$7$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\hline
\end{tabular}
\end{center}
\caption{Values $x_i=p_k(\theta_i)$ of the $k$-minor polynomials of the Hamming graph $H(2,7)$.}
\label{table2}
\end{table}
As another example, consider the Johnson graph $J(14,7)$ (see, for instance, \cite{bcn89,g93}). This is an antipodal (but not bipartite) distance-regular graph, with $n=3432$ vertices, diameter $D=7$, and spectrum
$$
\spec J(14,7)=\{49^1, 35^{13}, 23^{77}, 13^{273}, 5^{637}, -1^{1001}, -5^{1001}, -7^{429}\}.
$$
Then the solutions of the linear programming problem are given in Table \ref{table3}, which correspond to the minor polynomials shown in Figure \ref{fig2}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|ccccccc|c|}
\hline
$k$ & $x_7$ & $x_6$ & $x_5$ & $x_4$ & $x_3$ & $x_2$ & $x_1$ & $x_0$\\
\hline
$1$ & 0 & 1/28 & 3/28 & 3/14 & 5/14 & 15/28 & 3/4 & 1 \\
\hline
$2$ & 9/275 & 1/55 & 0 & 0 & 14/275 & 54/275 & 27/55 & 1 \\
\hline
$3$ & 0 & 5/1232 & 1/176 & 0 & 0 & 75/1232 & 5/16 & 1 \\
\hline
$4$ & 1/1485 & 0 & 0 & 0 & 0 & 14/495 & 2/9 & 1 \\
\hline
$5$ & 0 & 1/2860 & 0 & 0 & 0 & 0 & 27/260 & 1 \\
\hline
$6$ & 0 & 0 & 0 & 0 & 0 & 0 & 1/13 & 1 \\
\hline
$7$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\hline
\end{tabular}
\end{center}
\caption{Values $x_i=p_k(\theta_i)$ of the $k$-minor polynomials of the Johnson graph $J(14,7)$.}
\label{table3}
\end{table}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=13cm]{polsJohnson}
\vskip-7.6cm
\caption{The polynomials of the Johnson graph $J(14,7)$.}
\label{fig2}
\end{center}
\end{figure}
\section{A tight bound for the $k$-independence number}
\label{main-section}
Now we are ready to derive our main result about the $k$-independence number of a $k$-partially walk-regular graph. The proof is based on the interlacing technique.
\begin{theorem}
\label{new-theo}
Let $G$ be a $k$-partially walk-regular graph with $n$ vertices, adjacency matrix $\A$, and spectrum $\spec G=\{\theta_0^{m_0},\ldots,\theta_d^{m_d}\}$.
Let $p_k\in \Re_k[x]$ be a $k$-minor polynomial. Then, for every $k=0,\ldots,d-1$, the $k$-independence number $\alpha_k$ of $G$ satisfies
\begin{equation}
\label{eq:thm2}
\alpha_k\le \tr p_k(\A)=\sum_{i=0}^d m_i p_k(\theta_i).
\end{equation}
\end{theorem}
\begin{proof}
Let $U$ be a $k$-independent set of $G$ with $r=|U|=\alpha_k(G)$ vertices. Assume that the first columns (and rows) of $\A$ correspond to the vertices in $U$, and consider the partition of the columns according to $U$ and its complement. Let $\S$ be the normalized characteristic matrix of this partition. The quotient matrix of $p_k(\A)$ with respect to this partition is given by
\begin{align}
\label{B_k=2}
\S^T p(\A) \S = \B_k & =
\left(
\begin{array}{cc}
\frac{1}{r}\sum_{u\in U}(p_k(\A))_{uu} & p_k(\theta_0)-\frac{1}{r}\sum_{u\in U}(p_k(\A))_{uu}\\
\frac{r p_k(\theta_0)-\sum_{u\in U}(p_k(\A))_{uu}}{n-r} & p_k(\theta_0)-\frac{r p_k(\theta_0)-\sum_{u\in U}(p_k(\A))_{uu}}{n-r}
\end{array}
\right)\\
&=\left(
\begin{array}{cc}
\frac{1}{n}\sum_{i=0}^d m_i p_k(\theta_i) & 1-\frac{1}{n}\sum_{i=0}^d m_i p_k(\theta_i)\\
\frac{r-\frac{r}{n}\sum_{i=0}^d m_i p_k(\theta_i)}{n-r} & 1-\frac{r -\frac{r}{n}\sum_{i=0}^d m_i p_k(\theta_i)}{n-r}
\end{array}
\right),
\end{align}
with eigenvalues $\mu_1=p_k(\theta_0)=1$ and
$$
\mu_2=\tr \B_k-1=w(p_k)-\frac{r -rw(p_k)}{n-r},
$$
where $w(p_k)=\frac{1}{n}\sum_{i=0}^d m_i p_k(\theta_i)$.
Then, by interlacing, we have
\begin{equation}
\label{interlacing:theo2}
0\le \mu_2= w(p_k)-\frac{r-rw(p_k)}{n-r},
\end{equation}
whence, solving for $r$, we get $r\le nw(p_k)$ and the result follows.
\end{proof}
As mentioned in the previous section, notice that, in fact, the proof works for any polynomial $p$ satisfying $p(\theta_0)=1$ and $p(\theta_i)\ge 0$ for $i=1,\ldots,d$. By way of example, if $G$ is a distance-regular graph with distance polynomials $p_0,\ldots,p_d$, we could take $p(x)=\frac{q_k^2(x)}{q_k^2(\theta_0)}$, with degree $2k$, where the sum polynomial $q_k=p_0+\cdots +p_k$ satisfies
$\|q_k\|_G^2=q_k(\theta_0)$. Now, recall that $q_k(\theta_0)=n_{k}$ corresponds to the number of vertices at distance $\le k$ from any vertex of $G$ (see, for instance Biggs \cite{biggs}).
Thus, we obtain
$$
\alpha_{2k}\le \Psi(p)=\sum_{i=0}^d m_i\frac{q_k^2(\theta_i)}{q_k^2(\theta_0)}=\frac{n}{q_k^2(\theta_0)}\|q_k\|_G^2=\frac{n}{n_k},
$$
as expected.
Another possibility is to use the polynomial $p(x)=\frac{P_k(x)+1}{P_k(\theta_0)+1}$, where $P_k$ is the $k$-alternating polynomial.
In this case, when $G$ is an $r$-antipodal distance-regular graph and $k=d-1$, it turns out that the $d$-distance polynomial is
$p_d=H-\frac{r}{2}P_{d-1}+\frac{r}{2}-1$, where $H$ is the Hoffman polynomial (see \cite{Fiolkindep}). Then, we get
$\Psi(p)=\frac{2n}{P_{d-1}(\theta_0)+1}$, which coincides with the bound for $\alpha_{d-1}$ given in Theorem \ref{thm:fiol}.
Let us now consider some particular cases of Theorem \ref{new-theo} by using the minor polynomials.
\subsubsection*{The case $k=1$.}
As mentioned above, $\alpha_1$ coincides with the standard independence number $\alpha$. In this case the minor polynomial is
$p_1(x)=\frac{x-\theta_d}{\theta_0-\theta_d}$. Then, \eqref{eq:thm2} gives
\begin{equation}
\label{eq-k=1}
\alpha_1=\alpha\le \tr p_1(\A)=\frac{-n\theta_d}{\theta_0-\theta_d},
\end{equation}
which is Hoffman's bound in Theorem \ref{thm:hoffman}.
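As a numerical illustration (a sketch with numpy, using the standard Kneser-graph construction of the Petersen graph), the bound \eqref{eq-k=1} is tight for the Petersen graph, whose spectrum is $\{3,1^5,(-2)^4\}$ and whose independence number is $4$:

```python
import numpy as np
from itertools import combinations

# Petersen graph as the Kneser graph K(5,2): vertices are the 2-subsets
# of a 5-set, two of them adjacent when they are disjoint.
V = list(combinations(range(5), 2))
n = len(V)
A = np.array([[1.0 if not set(u) & set(v) else 0.0 for v in V] for u in V])

lam = np.linalg.eigvalsh(A)          # ascending order
theta0, thetad = lam[-1], lam[0]     # 3 and -2
hoffman = -n * thetad / (theta0 - thetad)
print(round(hoffman, 6))  # 4.0, and indeed alpha(Petersen) = 4
```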
\subsubsection*{The case $k=2$.}
We already stated that $p_2(x)=\frac{(x-\theta_i)(x-\theta_{i-1})}{(\theta_0-\theta_i)(\theta_0-\theta_{i-1})}$.
Then, \eqref{eq:thm2} yields
\begin{equation}
\label{eq-k=2}
\alpha_2\le \tr p_2(\A)=n\frac{\theta_0+\theta_i\theta_{i-1}}{(\theta_0-\theta_i)
(\theta_0-\theta_{i-1})},
\end{equation}
in agreement with the result of \cite{acf19} (here Theorem \ref{theo-gen-k}$(i)$). Moreover, in the same paper, two infinite families of (distance-regular) graphs where the bound \eqref{eq-k=2} is tight were provided.
\subsection*{Some examples}
To compare the above bounds with those obtained in \cite{Fiolkindep} and \cite{act16} (here in Theorems \ref{thm:fiol} and \ref{thm:abiad}, respectively), let us consider again the Hamming graph $H(2,7)$ and the Johnson graph $J(14,7)$. Thus, in Table \ref{table4} we show the bounds obtained for $\alpha_k(H(2,7))$, whereas those of $\alpha_k(J(14,7))$ are shown in Table \ref{table5}. (Recall that every distance-regular graph is also walk-regular.)
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|ccccccc| }
\hline\hline
$k$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ \\
\hline
Bound from Theorem \ref{thm:fiol} & 109 & 72 & 36 & 19 & 7 & 2 & -- \\
\hline
Bound from Theorem \ref{thm:abiad} ($k> 2$) & - & - & 65 & 67 & 64 & 65 & 64 \\
\hline
Bound from Theorem \ref{theo-gen-k}$(i)$-$(iii)$ & - & 21 & 56 & 6 & 55 & 3 & 55 \\
\hline
Bound from Theorem \ref{new-theo} &
\bf 64 & \bf 16 & \bf 8 & \bf 3 & \bf 2 & \bf 2 & \bf 1 \\
\hline\hline
\end{tabular}
\end{center}
\caption{Comparison of the bounds for $\alpha_k$ in the Hamming graph $H(2,7)$.}
\label{table4}
\end{table}
Note that, in general, the bounds obtained by Theorem \ref{new-theo} constitute a significant improvement with respect to those in \cite{Fiolkindep,act16}. In particular, the bounds for $k=6,7$ are equal to the correct values $\alpha_6=2$ (since both graphs are $2$-antipodal) and $\alpha_7=1$ (since their diameter is $D=7$). Besides, notice that, in the case of the Hamming graph, $\alpha_2=16$, since it contains the perfect Hamming code $H(7,4)$.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|ccccc| }
\hline\hline
$k$ & $3$ & $4$ & $5$ & $6$ & $7$ \\
\hline
Bound from Theorem \ref{thm:fiol} & 464 & 125 & 20 & 2 & -- \\
\hline
Bound from Theorem \ref{thm:abiad} & 935 & 721 & 546 & 408 & 302 \\
\hline
Bound from Theorem \ref{theo-gen-k}$(ii)$-$(iii)$ & 26 & 10 & 5 & 3 & 2 \\
\hline
Bound from Theorem \ref{theo-gen-k}$(iv)$ & 80 & 86 & 25 & 2 & 1 \\
\hline
Bound from Theorem \ref{new-theo} & \bf 19 & \bf 6 & \bf 2 & \bf 2 & \bf 1 \\
\hline\hline
\end{tabular}
\end{center}
\caption{Comparison of bounds for $\alpha_k$ in the Johnson graph $J(14,7)$.}
\label{table5}
\end{table}
\subsection{Antipodal distance-regular graphs}
\label{antipodal}
Finally, we consider an infinite family where our bound for $\alpha_{d-1}$ is tight.
With this aim, we assume that the minor polynomial takes a non-zero value only at $\theta_1$.
Thus, $p_{d-1}(x)=\frac{1}{\prod_{i=2}^d (\theta_0-\theta_i)}\prod_{i=2}^d (x-\theta_i)$.
Then, the bound \eqref{eq:thm2} of Theorem \ref{new-theo} is
$$
\sum_{i=0}^d m_i p_{d-1}(\theta_i)=m_0p_{d-1}(\theta_0)+m_{1}p_{d-1}(\theta_1)=1+m_1\frac{\prod_{i=2}^d (\theta_1-\theta_i)}{\prod_{i=2}^d (\theta_0-\theta_i)}=1+m_1\frac{\pi_1}{\pi_0},
$$
where, in general, $\pi_i=\prod_{j=0,j\neq i}^{d}|\theta_i-\theta_j|$ for $i\in [0,d]$.
Now suppose that $G$ is an $r$-antipodal distance-regular graph.
In \cite{Fiolkindep} it was shown that a distance-regular graph is $r$-antipodal
if and only if its eigenvalue multiplicities are
$m_i=\pi_0/\pi_i$ for $i$ even, and $m_i=(r-1)\pi_0/\pi_i$ for $i$ odd.
So, with $m_1=(r-1)\pi_0/\pi_1$, we get
$$
\alpha_{d-1}\le 1+m_1\frac{\pi_1}{\pi_0}=r,
$$
which is the correct value.
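A quick check of this computation with exact arithmetic, for the $2$-antipodal Hamming graph $H(2,7)$ (with the spectrum given earlier), confirms $1+m_1\pi_1/\pi_0=r=2$:

```python
from math import comb, prod
from fractions import Fraction

# H(2,7): theta_i = 7 - 2i with multiplicity binom(7, i); here r = 2.
theta = [7 - 2 * i for i in range(8)]
mult = [comb(7, i) for i in range(8)]

def pi(i):
    """pi_i = prod over j != i of |theta_i - theta_j|."""
    return prod(abs(theta[i] - theta[j]) for j in range(8) if j != i)

bound = 1 + mult[1] * Fraction(pi(1), pi(0))
print(bound)  # 2, the number of vertices in each antipodal class
```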
When $G$ is an $r$-antipodal distance-regular graph with odd $d$, we can also consider the minor polynomial $q_{d-1}$ which takes non-zero value only at $\theta_d$, that is $q_{d-1}(x)=\frac{1}{\prod_{i=1}^{d-1} (\theta_0-\theta_i)}\prod_{i=1}^{d-1} (x-\theta_i)$. Then, reasoning as above,
we get again the tight bound $\Psi(q_{d-1})=1+m_d\frac{\pi_d}{\pi_0}=r$.
\subsection{Odd graphs}
For every integer $\ell\ge 2$, the odd graphs $O_\ell$ constitute a well-known family of distance-regular graphs, with connections between graph theory and other areas of combinatorics, such as coding theory and design theory.
The vertices of $O_\ell$ correspond to the $(\ell-1)$-subsets of a $(2\ell-1)$-set, and adjacency is defined by empty intersection. In particular, $O_3$ is the Petersen graph. In general, the odd graph $O_\ell$ is an $\ell$-regular graph with order
$n={2\ell-1\choose \ell-1}=\frac{1}{2}{2\ell\choose \ell}$, diameter $\ell-1$, and its eigenvalues and multiplicities are $\theta_i=(-1)^i (\ell-i)$ and
$m(\theta_i)={2\ell-1\choose i}-{2\ell-1\choose i-1}$ for $i=0,1,\ldots,\ell-1$. For more details, see for instance, Biggs \cite{biggs} and Godsil \cite{g93}.
In Table \ref{table6} we show the bounds on the $k$-independence numbers of $O_{\ell}$, $\ell=4,5,6,7$, given by Theorem \ref{new-theo}. The numbers in boldface, $7$ and $66$, correspond to the known $1$-perfect codes in $O_4$ and $O_6$, respectively.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|cccc| }
\hline\hline
graph / $k$ & $2$ & $3$ & $4$ & $5$ \\
\hline
$O_4$ & \bf 7 & -- & -- & -- \\
\hline
$O_5$ & 13 & 9 & -- & -- \\
\hline
$O_6$ & \bf 66 & 21 & 11 & -- \\
\hline
$O_7$ & 158 & 90 & 17 & 12 \\
\hline\hline
\end{tabular}
\end{center}
\caption{Some bounds for $\alpha_k$ in odd graphs $O_{\ell}$ for $\ell=4,5,6,7$.}
\label{table6}
\end{table}
More generally, \eqref{eq-k=1} and \eqref{eq-k=2} allow us to compute the bounds for $\alpha_1$ and $\alpha_2$ of every odd graph $O_{\ell}$, which turn out to be
\begin{align}
\alpha_1 & \le \frac{{2\ell\choose \ell}(\ell-1)}{4\ell-2}\sim \frac{2^{2\ell-2}}{\ell^{\frac{1}{2}}\sqrt{\pi}}, \label{alpha1odd}\\
\alpha_2 & \le \frac{{2\ell\choose \ell}(\ell-2)}{2(\ell+(-1)^\ell)(\ell-2(-1)^\ell)}\sim \frac{2^{2\ell-1}}{\ell^{\frac{3}{2}}\sqrt{\pi}}, \label{alpha2odd}
\end{align}
where we have indicated their asymptotic behaviour, as $\ell\rightarrow \infty$, obtained by using Stirling's formula. As a consequence, we recover
the known result that, when $\ell$ is odd, the odd graph $O_{\ell}$ has no $1$-perfect code.
Indeed, the existence of a $1$-perfect code in $O_{\ell}$ requires that $\alpha_2=\frac{n}{\ell+1}=\frac{{2\ell\choose \ell}}{2(\ell+1)}$ (since all codewords must be mutually at distance $\ge 3$). However, when $\ell$ is odd, \eqref{alpha2odd} gives
$\alpha_2\le \frac{{2\ell\choose \ell}(\ell-2)}{2(\ell-1)(\ell+2)}<\frac{{2\ell\choose \ell}}{2(\ell+1)}$, a contradiction. (In fact, when $n$ is a power of two minus one, $\frac{{2\ell\choose \ell}}{2(\ell+1)}$ is not an integer, which also prevents the existence of a $1$-perfect code.)
Note that this result is in agreement with the fact that a necessary condition for a regular graph to have a $1$-perfect code is the existence of the eigenvalue $-1$, which is not present in $O_{\ell}$ when $\ell$ is odd (see Godsil \cite{g93}).
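The argument above can be verified with exact arithmetic; the following sketch evaluates the bound \eqref{eq-k=2} directly from the spectrum of $O_\ell$ stated above and compares it with the size $\frac{n}{\ell+1}$ required by a $1$-perfect code (for even $\ell$ the two coincide, while for odd $\ell$ the bound is strictly smaller):

```python
from math import comb
from fractions import Fraction

def alpha2_bound(l):
    """Bound (eq-k=2) for the odd graph O_l: eigenvalues (-1)^i (l - i),
    with theta_i the largest eigenvalue not greater than -1."""
    n = comb(2 * l - 1, l - 1)
    ev = sorted(((-1) ** i * (l - i) for i in range(l)), reverse=True)
    i = min(j for j, t in enumerate(ev) if t <= -1)
    ti, tim1 = ev[i], ev[i - 1]
    return Fraction(n * (l + ti * tim1), (l - ti) * (l - tim1))

for l in (4, 5, 6, 7):
    perfect = Fraction(comb(2 * l - 1, l - 1), l + 1)   # n/(l+1)
    b = alpha2_bound(l)
    print(l, b, b == perfect if l % 2 == 0 else b < perfect)
```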
Finally, by using the same polynomial as in Subsection \ref{antipodal}, we have that the $(d-1)$-independence number of $O_{\ell}$, where $d-1=\ell-2$, satisfies the bounds
$$
\alpha_{\ell-2}\le 1+m_1\frac{\pi_1}{\pi_0}=
\left\{
\begin{array}{ll}
2\ell-1, & \mbox{$\ell$ even},\\
2\ell-2, & \mbox{$\ell$ odd}.
\end{array}\right.
$$
For instance, for the Petersen graph $P=O_3$, this yields $\alpha_1\le 4$, as is well known.
\subsection*{Acknowledgments}
This research has been partially supported by AGAUR from the Catalan Government under project 2017SGR1087, and by MICINN from the Spanish Government under project PGC2018-095471-B-I00.
\section{Introduction}
\label{intro}
The issue of the equivalence of covariant and light-front Hamiltonian perturbation theory has attracted a lot of attention in recent years \cite{bmcy05}. The equivalence of LFQED \cite{must} and covariant QED at the level of Feynman diagrams was addressed, for the first time, by us in \cite{swat}, where we demonstrated how one can obtain all the propagating as well as instantaneous diagrams of one-loop LFQED by performing the $k^-$-integration over the covariant expressions carefully. It was shown that the equivalence cannot be established by performing the $k^-$-integration if one uses the commonly used two-term photon propagator in the light-cone gauge \cite{must}:
\begin{equation}
d_{\mu\nu}(k) = \frac{1}{k^2+i\epsilon}\bigg[-g_{\mu\nu} + \frac{\delta_{\mu+}
k_{\nu}+\delta_{\nu+}k_{\mu}}
{k^+}\bigg],
\label{eq:prop1}
\end{equation}
\noindent because the instantaneous contribution from the longitudinal polarization of the virtual photon cannot be removed. However, if one uses the three-term photon propagator \cite{brodsky01,suzuki03} given by
\begin{equation}
d_{\mu\nu}^\prime(k) = \frac{1}{k^2+i\epsilon}\bigg[-g_{\mu\nu} + \frac{\delta_{\mu+}k_
{\nu}+\delta_{\nu+}k_{\mu}}{k^+}-\frac{k^2\delta_{\mu+}\delta_{\nu+}}{(k^+)^2}
\bigg],
\label{eq:prop2}
\end{equation}
\noindent then one can show the exact cancellation between the instantaneous contribution from the longitudinal polarization of the virtual photon and the third term of this three-term photon propagator, contributed by the transverse polarization of the virtual photon. It has been shown explicitly in \cite{ji} that the sum of the contributions from the transverse and the longitudinal polarizations of the virtual photon is equivalent to the manifestly covariant photon propagator.
In this work, we first revisit the proof of equivalence for one loop fermion self-energy in Section 2 and clarify the issue raised in \cite{manto}. Then, in Section 3, we extend our proof of equivalence of vertex correction $\Lambda^+$ to a general component $\Lambda^\mu$.
\section{Fermion Self-Energy}
\label{selfenergy}
In covariant perturbation theory, the expression for fermion self-energy in the light-front gauge can be rewritten as a sum of three terms \cite{swat}
\begin{equation} \label{se main}
\Sigma(p) = \Sigma_{1}^{(a)}(p)+ \Sigma_{1}^{(b)}(p) + \Sigma_{2}(p)
\end{equation}
where
\begin{equation}
\Sigma_{1}^{(a)} (p) + \Sigma_{2}(p)= \frac{ie^2}{2m}\int \frac {d^{4}k}{(2\pi)^4}\frac{{\gamma^\mu}{({p\llap/-k\llap/}+m)}{\gamma^\nu}d_{\mu\nu}{(k)}}{[(p-k)^2-m^2+i\epsilon][k^2-{\mu}^2+i\epsilon]}
\end{equation}
on performing the $k^-$ integration, reproduces two of the four self-energy diagrams in LFQED (Figs.~1(a) and 1(b) of Ref.~\cite{must}), i.e. the standard diagram involving two three-point vertices and the diagram involving the four-point vertex corresponding to instantaneous fermion exchange. (Note that the time direction $(x^+)$ is taken to be upwards in all diagrams in this paper, as in \cite{must}.) This is achieved by rewriting the fermion momentum as a sum of an on-shell part and an off-shell part. The other two diagrams, involving the four-point instantaneous photon exchange vertex, are obtained by performing the $k^-$-integration in $\Sigma_{1}^{(b)}(p)$ given by
\begin{equation}
\Sigma_{1}^{(b)}(p) =-\frac{ie^2}{2m}\int\frac{d^{4}k}{(2\pi)^4}\frac{\gamma^\mu (k\llap/^{\prime}_{on}+m){\gamma^\nu}\delta_{\mu+}\delta_{\nu+}k^2}{[(p-k)^2-m^2+i\epsilon][k^2-{\mu}^2+i\epsilon](k^+)^2},
\end{equation}
\noindent which cancels the corresponding contribution from the longitudinal polarization of the virtual photon. As highlighted in Ref. \cite{swat}, this contribution comes from the third term in the photon propagator and hence the equivalence cannot be proved if we neglect the third term in Eq.\ref{eq:prop2}.
It was erroneously pointed out in Ref.\cite{manto} that our expression for $\Sigma_{1}^{(a)}(p)$ in Eq.(58) of Ref.\cite{swat} involves the off-shell momentum of the virtual photon and hence, on converting it to on-shell momentum, one can generate the instantaneous photon diagram from this term itself, so that there would be no need to add the third term in the photon propagator. However, the expression in Eq.(58) is actually a light-front expression in which the $k^-$ integration has already been performed using the residue theorem, and hence it is understood that the photon is on-shell, although this is not mentioned explicitly, as in the proof of equivalence for vacuum polarization. We shall elaborate on this issue in greater detail in a future publication \cite{am2018}.
\section{Vertex Correction}
The standard covariant expression for vertex correction in the light-front gauge is given by
\begin{equation}\label{vc main}
\Lambda^{\mu}(p,p^{\prime},q) = ie^3\int \frac {d^{4}k}{(2\pi)^4}\frac{\gamma^{\alpha}({p\llap/^{\prime}}-{k\llap/}+m)\gamma^{\mu}({p\llap/}-{k\llap/}+m)\gamma^{\beta}d^{\prime}_{\alpha\beta}(k)}{[(p-k)^2-m^2+i\epsilon][(p^{\prime}-k)^2-m^2+i\epsilon][k^2-{\mu}^2+i\epsilon]}
\end{equation}
\noindent To demonstrate that this expression leads to the light-front diagrams in Figs.~2 and 3, we split the fermion momenta into on-shell and off-shell parts \\
${(p\llap/-k\llap/)} = k\llap/^\prime_{on}+\frac{\gamma^{+}[(p-k)^2-m^2]}{2(p^{+}-k^{+})}$,
\\${(p\llap/^{\prime}-k\llap/)} = k\llap/^{\prime\prime}_{on}+\frac{\gamma^{+}[(p^{\prime}-k)^2-m^2]}{2(p^{\prime+}-k^{+})}$.
\\It can then be shown in a straightforward manner, by performing the $k^-$ integration, that the one-loop vertex correction can be written as
\begin{equation}
\Lambda^{\mu}(p,p^{\prime},q)=\Lambda^{\mu (a)}(p,p^{\prime},q)+\Lambda^{\mu (b)}(p,p^{\prime},q)+\Lambda^{\mu (c)}(p,p^{\prime},q)+\Lambda^{\mu (d)}(p,p^{\prime},q)+\Lambda^{\mu (e)}(p,p^{\prime},q)
\end{equation}
where
\begin{equation}
\Lambda^{\mu (a)}(p,p^{\prime},q)=e^3\int\frac{d^{2}k_{\perp}}{(4\pi)^3}\int_{0}^{p^{\prime+}}\frac{dk^{+}}{k^{+}k^{\prime+}k^{\prime\prime+}}\frac{\gamma^{\alpha}({k\llap/^{\prime\prime}_{on}+m)\gamma^{\mu}(k\llap/^{\prime}_{on}+m)\gamma^{\beta}d_{\alpha\beta}(k)}}{(p^{-}-k^{-}_{on}-k^{\prime-}_{on})(p^{-}-q^{-}-k^{-}_{on}-k^{\prime\prime-}_{on})}
\end{equation}
\begin{equation}
\Lambda^{\mu (b)}(p,p^{\prime},q)=-e^3\int\frac{d^{2}k_{\perp}}{(4\pi)^3}\int_{p^{\prime+}}^{p^+}\frac{dk^{+}}{k^{+}k^{\prime+}k^{\prime\prime+}}\frac{\gamma^{\alpha}({k\llap/^{\prime\prime}_{on}+m)\gamma^{\mu}(k\llap/^{\prime}_{on}+m)\gamma^{\beta}d_{\alpha\beta}(k)}}{(p^{-}-k^{-}_{on}-k^{\prime-}_{on})(p^{-}-p^{\prime-}-k^{\prime-}_{on}+k^{\prime\prime-}_{on})}
\end{equation}
\begin{equation}
\Lambda^{\mu (c)}(p,p^{\prime},q)=2e^3\int\frac{d^{2}k_{\perp}}{(4\pi)^3}\int_{p^{\prime+}}^{p^+}\frac{dk^{+}}{(k^{+})^{2}k^{\prime+}k^{\prime\prime+}}\frac{\gamma^{+}(k\llap/^{\prime\prime}_{on}+m)\gamma^{\mu}(k\llap/^{\prime}_{on}+m)\gamma^{+}}{(p^{-}-p^{\prime-}-k^{\prime-}_{on}+k^{\prime\prime-}_{on})}
\end{equation}
\begin{equation}
\Lambda^{\mu (d)}(p,p^{\prime},q)=e^3\int\frac{d^{2}k_{\perp}}{(4\pi)^3}\int_{0}^{p^{\prime+}}\frac{dk^{+}}{k^{+}k^{\prime+}k^{\prime\prime+}}\frac{\gamma^{\alpha}(k\llap/^{\prime\prime}_{on}+m)\gamma^{\mu}\gamma^{+}\gamma^{\beta}d_{\alpha\beta}(k)}{(p^{\prime-}-k^{-}_{on}-k^{\prime\prime-}_{on})}
\end{equation}
\begin{equation}
\Lambda^{\mu (e)}(p,p^{\prime},q)=e^3\int\frac{d^{2}k_{\perp}}{(4\pi)^3}\int_{0}^{p^{+}}\frac{dk^{+}}{k^{+}k^{\prime+}k^{\prime\prime+}}\frac{\gamma^{\alpha}\gamma^{+}\gamma^{\mu}(k\llap/^{\prime}_{on}+m)\gamma^{\beta}d_{\alpha\beta}(k)}{(p^{-}-k^{-}_{on}-k^{\prime-}_{on})}
\end{equation}
$\Lambda^{\mu (a)}(p,p^{\prime},q)$, $\Lambda^{\mu (b)}(p,p^{\prime},q)$ and $\Lambda^{\mu (c)}(p,p^{\prime},q)$ were obtained in Ref.~\cite{swat} for $\Lambda^+$, and the same proof holds for $\Lambda^\mu$ as well. $\Lambda^{\mu (d)}(p,p^{\prime},q)$ and $\Lambda^{\mu (e)}(p,p^{\prime},q)$, which were zero for the $+$ component, come from the off-shell parts of the two fermion propagators; we have verified using LFTOPT that they coincide with the corresponding light-front expressions. The details of the calculation will be given in a future work~\cite{am2018}. It should be pointed out that the sixth term, with contributions from $p\llap/-p\llap/_{on}$ for both fermion lines, vanishes due to the Dirac structure of the numerator. As in the case of the fermion self-energy, we again find that the diagram involving instantaneous photon exchange arises from the third term in the photon propagator.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.27]{Fig_1_a.eps}
\includegraphics[scale=0.27]{Fig_1_b.eps}
\includegraphics[scale=0.3]{Fig_1_c.eps}
\caption{Self-Energy diagrams}
\label{sediag}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.3]{Fig_2_a.eps}
\includegraphics[scale=0.27]{Fig_2_b.eps}
\includegraphics[scale=0.3]{Fig_2_c.eps}
\caption{``Regular'' and instantaneous photon diagrams}
\label{reg inst photon}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{Fig_3_a.eps}
\includegraphics[scale=0.375]{Fig_3_b.eps}
\caption{Instantaneous fermion diagrams}
\label{inst fermions}
\end{figure}
\vfill
\section{Summary}
We have established the equivalence of the fermion self-energy and vertex correction contributions by demonstrating that:
\begin{enumerate}
\item The on-shell part of the fermion propagator in the covariant theory, when combined with the two-term photon propagator, reproduces the ``regular'' diagrams in LFTOPT, whereas the instantaneous fermion diagrams arise from its off-shell part.
\item The third term in the gauge propagator leads to the diagrams containing instantaneous photons and hence must be included in order to establish the equivalence between the covariant theory and LFQED.
\end{enumerate}
\section{Introduction}\label{sec:introduction}
Multidimensional problems in a wide variety of applications have data or solutions that are often represented by tensors~\cite{kolda2009tensor}. A general tensor $\mathcal{X} \in \C^{n_1\times\cdots \times n_d}$ requires $\prod_{j=1}^d n_j$ degrees of freedom to store, which scales exponentially with the order $d$. Therefore, it is often essential to approximate or represent large tensors with data-sparse formats so that storing and computing with them is feasible. The tensor-train (TT) decomposition~\cite{oseledets2011tensor} is a tensor format with a storage cost that can scale linearly in $n_j$ and $d$. The TT format is used in molecular simulations~\cite{savostyanov2014exact}, high-order correlation functions~\cite{kressner2015low}, and partial differential equation (PDE) constrained optimization~\cite{dolgov2017low,benner2020low}. In practice, one tries to replace $\mathcal{X}$ by a tensor $\tilde{\mathcal{X}}$ with a data-sparse TT format such that
\begin{equation}
\| \mathcal{X} - \tilde{\mathcal{X}} \|_F \leq \epsilon \| \mathcal{X} \|_F, \qquad \|\mathcal{X}\|_F^2 = \sum_{i_1=1}^{n_1} \cdots \sum_{i_d = 1}^{n_d} |\mathcal{X}_{i_1,\ldots,i_d}|^2,
\label{eq:FrobeniusNorm}
\end{equation}
where $0\leq\epsilon<1$ is an accuracy tolerance~\cite{grasedyck2013literature,hackbusch2012tensor}. One major challenge that we address in this paper is how to compute $\tilde{\mathcal{X}}$ in a TT format from large $\mathcal{X}$ in parallel with a limited memory footprint. Once we obtain $\tilde{\mathcal{X}}$ in the TT format, tensor operations including addition, mode-$k$ products (see~\cref{eq:kfold}), contraction, and recompression can be executed in parallel as well.
Unlike in the matrix case where the truncated singular value decomposition (SVD) provides the best rank-$k$ approximation, tensors admit various low-rank formats with different desired properties. Other than the TT format, which represents each entry as the product of a sequence (``train") of matrices, there is the canonical polyadic (CP) format that expresses a tensor as the sum of vector outer products~\cite{kolda2009tensor}, and the orthogonal Tucker format that ensures factor matrices have orthonormal (ON) columns~\cite{de2000multilinear}. There are also multiple hierarchical formats such as tree-Tucker~\cite{oseledets2009breaking} and quantized TT (QTT)~\cite{dolgov2012fast} that can capture latent data structures. The TT format is popular because of its connection to linear matrix algebra, enabling rigorous analysis and numerically accurate and stable algorithms.
Researchers have designed parallel tensor algorithms to exploit modern computing architectures and handle larger tensors emerging in applications. There are parallel algorithms for computing CP~\cite{li2017model,smith2015splatt}, Tucker, and hierarchical Tucker decompositions~\cite{austin2016parallel,grasedyck2019parallel,ballard2020tuckermpi,kaya2016high}. Subsequent operations can also be done in parallel in various tensor formats, especially tensor contractions~\cite{solomonik2014massively}, and operations in TT format~\cite{daas2020parallel}. However, despite some recent work based on hierarchical tree structures~\cite{grigori2020parallel}, on the regularized least squares problems satisfied by each core~\cite{chen2017parallelized}, and on multiple SVDs of tensor slices~\cite{wang2020adtt}, parallel TT decomposition has received less attention, perhaps due to the sequential nature of TTSVD~\cite{oseledets2011tensor}.
In this paper, we show that the column spaces of tensor unfoldings (see~\cref{sec:notation}) are connected by the TT cores (see~\cref{sec:TT}). Using this property, we develop new parallel algorithms that are scalable, stable, and accurate to compute TT formats of tensors. In particular, we distribute tensor information across several processors and ask each of them to contribute to computing the TT cores. We design parallel algorithms for various tensor input types:
\begin{itemize}[leftmargin=*,noitemsep]
\item \textbf{Parallel-TTSVD}: Previous TT decomposition methods such as TTSVD~\cite{oseledets2011tensor} and TT-cross approximation~\cite{oseledets2010tt} are sequential algorithms that require the entire tensor as input. In each iteration, both algorithms find one TT core by decomposing a specific matrix and use this core to determine the matrix in the next iteration. Based on the fact that there is a connection between the column space of various reshapes of a tensor (see~\cref{sec:notation}), we design an algorithm to compute the TT cores simultaneously by computing an ON basis for the column space of each tensor unfolding via SVD.
\item \textbf{Parallel Streaming TT Sketching (PSTT)}: Since SVD in Parallel-TTSVD can be computationally expensive, we can use randomized linear algebra to find ON bases that approximate the column space of tensor unfoldings. This algorithm is inspired by matrix sketching~\cite{halko2011finding}, Tucker sketching~\cite{sun2020low}, randomized algorithms for CP and Tucker format~\cite{ma2021fast}, and TT sketching in a sequential manner~\cite{che2019randomized}. Sketching algorithms are ideal for streaming data, where it is infeasible to store the tensor in cache. We show a two-sided version, PSTT2, has a storage cost as low as $\mathcal{O}(n^{\floor{d/2}})$. Moreover, PSTT2-onepass, a one-pass variant of PSTT2, uses only a single evaluation of each tensor entry, and is the most efficient in numerical experiments.
\item \textbf{TT2Tucker} and \textbf{Tucker2TT}: An ON basis of the column space of the second unfolding of each TT core allows us to get a Tucker format of the given tensor fast~\cite{batselier2020meracle}. Conversely, given a tensor in Tucker format, we can obtain its TT cores through the Tucker factor matrices and the TT cores of its Tucker core.
\item \textbf{TT-fADI}: Tensors also arise as the solutions of Sylvester tensor equations, i.e.,
\begin{equation}
\mathcal{X} \times_1 A^{(1)} + \cdots + \mathcal{X} \times_d A^{(d)} = \mathcal{F}, \qquad A^{(k)} \in \mathbb{C}^{n_k\times n_k}, \quad \mathcal{F} \in\mathbb{C}^{n_1\times \cdots \times n_d},
\label{eq:TensorDisplacement}
\end{equation}
where `$\times_k$' denotes the $k$-mode matrix product of a tensor (see~\cref{eq:kfold})~\cite{shi2021compressibility}. If $\mathcal{F}$ is provided in its TT format, then we can find $\mathcal{X}$ in TT format via the factored alternating direction implicit (fADI) method that solves Sylvester matrix equations~\cite{benner2009adi}.
\item \textbf{Implementations in message passing interface (MPI)}: We implement our algorithms in a distributed memory framework using OpenMPI in C. Each process is responsible for streaming part of the tensor and storing part of the intermediate calculations. We use well-established linear algebra packages to optimize our codes, including matrix multiplications in BLAS3, and QR and SVD in LAPACKE.
\end{itemize}
\Cref{sec:background} reviews some necessary tensor notations, TT and orthogonal Tucker format, and Sylvester equations. In~\cref{sec:parallelTT}, we consider computing the TT decomposition of a given tensor in parallel, where we have access to any entry. Then, we provide scalability and complexity analysis of our algorithms and demonstrate their performance on synthetic datasets in~\cref{NumericalExamples}. Finally, in~\cref{sec:TTsylv}, we obtain the TT format of implicitly known tensors, given as solutions of Sylvester tensor equations.
\section{Tensor notations, tensor formats, and Sylvester equations} \label{sec:background}
In this section, we review some tensor notations, TT and orthogonal Tucker format for low rank tensor approximations, and Sylvester matrix equations. Throughout this paper, for a tensor $\mathcal{X}$, we look for an approximation $\tilde{\mathcal{X}}$ that has low tensor ranks and satisfies~\cref{eq:FrobeniusNorm} for some $0 < \epsilon < 1$.
\subsection{Tensor notation} \label{sec:notation}
We use uppercase letters to represent matrices and calligraphic capital letters to represent tensors. We commonly use MATLAB-style notation ``:'' for indices, i.e., $a\!:\!b$ represents the index set $\{a,a+1,\ldots,b\}$, and a single ``:'' stands for all the indices in that dimension. For example, $A(:,3\!:\!4)$ or $A_{:,3:4}$ represents the submatrix of $A$ that contains its third and fourth columns, and $\mathcal{Y}(:,j,:)$ represents the matrix slice of the tensor $\mathcal{Y}$ by fixing the second index to be $j$. We also use $\mathcal{Y}(:)$ to stack all the entries of $\mathcal{Y}$ into a single vector using column-major ordering. We use the MATLAB command ``reshape'' to reorganize elements of a tensor. If $\mathcal{Y} \in \C^{n_1 \times n_2 \times n_3}$, then ${\rm reshape}(\mathcal{Y},n_1n_2,n_3)$ returns a matrix of size $n_1n_2 \times n_3$ formed by stacking entries according to their multi-index. Therefore, $\mathcal{Y}(:)$ and ${\rm reshape}(\mathcal{Y},n_1n_2n_3,1)$ are equivalent. Similarly, if $Z \in \C^{n_1 n_2 \times n_3}$, then ${\rm reshape}(Z,n_1,n_2,n_3)$ returns a tensor of size $n_1 \times n_2 \times n_3$. Throughout, we use the notation for tensors found in~\cite{kolda2009tensor}, which we briefly review now for readers' convenience. Consider a tensor $\mathcal{X}\in\C^{n_1\times\cdots\times n_d}$; then, we have the following definitions.
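These column-major conventions map directly onto NumPy's \texttt{order='F'} reshapes. The following minimal sketch (illustrative only; the sizes are arbitrary) checks the stated equivalences.

```python
import numpy as np

# Column-major (MATLAB-style) reshapes, matching the conventions above.
n1, n2, n3 = 2, 3, 4
Y = np.arange(n1 * n2 * n3, dtype=float).reshape((n1, n2, n3), order='F')

vec = Y.reshape(-1, order='F')               # Y(:), column-major stacking
Z = Y.reshape((n1 * n2, n3), order='F')      # reshape(Y, n1*n2, n3)
back = Z.reshape((n1, n2, n3), order='F')    # reshape(Z, n1, n2, n3)

# Y(:) and reshape(Y, n1*n2*n3, 1) are equivalent; reshaping is invertible.
assert np.array_equal(vec, Y.reshape(n1 * n2 * n3, order='F'))
assert np.array_equal(back, Y)
# Fixing the second index gives the matrix slice Y(:, j, :).
assert Y[:, 1, :].shape == (n1, n3)
```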
\begin{description}[leftmargin=*,noitemsep]
\item[Flattening by reshaping.]
One can reorganize the entries of a tensor into a matrix without changing the column-major ordering, and this idea is fundamental to the TT decomposition. The $k$th unfolding of $\mathcal{X}$ is represented as
\[X_k={\rm reshape}\left(\mathcal{X},\prod_{s=1}^k n_s,\prod_{s=k+1}^d n_s\right). \]
\item[Flattening via matricization.]
Another way to flatten a tensor is to arrange the mode-$k$ fibers to be the columns of a matrix~\cite{kolda2006multilinear}, and this operation is central for the orthogonal Tucker decomposition. We denote the $k$th matricization of $\mathcal{X}$ by $X_{(k)} \in \C^{n_k \times \prod_{j\neq k} n_j}$. Since mode-1 fibers are the tensor columns, we have $X_{(1)}=X_1$.
\item[The $k$-mode product.] The $k$-mode product of $\mathcal{X}$ with a matrix $A\in\C^{m \times n_k}$ is denoted by $\mathcal{X} \times_k A$, and defined elementwise as
\begin{equation}
(\mathcal{X} \times_k A)_{i_1,\ldots,i_{k-1},j,i_{k+1},\ldots,i_d} = \sum_{i_k = 1}^{n_k} \mathcal{X}_{i_1,\ldots,i_d}A_{j,i_k}, \quad 1 \le j \le m.
\label{eq:kfold}
\end{equation}
This is equivalent to computing $AX_{(k)}$ and reorganizing back to a tensor.
\end{description}
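The $k$-mode product in~\cref{eq:kfold} is simply the matricization identity $AX_{(k)}$ reorganized back into a tensor. A short sketch (the helper name is ours, not part of the paper's implementation):

```python
import numpy as np

def mode_k_product(X, A, k):
    """X x_k A via the matricization identity A @ X_(k)."""
    # tensordot contracts A's columns with mode k of X and puts the new
    # mode first; move it back to position k.
    return np.moveaxis(np.tensordot(A, X, axes=(1, k)), 0, k)

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4, 5))
A = rng.standard_normal((6, 4))
Y = mode_k_product(X, A, 1)
assert Y.shape == (3, 6, 5)

# Elementwise check against the definition above.
i, j, l = 2, 5, 4
assert np.isclose(Y[i, j, l], sum(X[i, t, l] * A[j, t] for t in range(4)))
```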
\subsection{Tensor-train format} \label{sec:TT}
The TT format represents each tensor entry as the product of a sequence of matrices. A tensor $\mathcal{X}\in\C^{n_1\times \cdots \times n_d}$ has TT cores $\mathcal{G}_k \in \C^{s_{k-1} \times n_k \times s_k}$ for $1 \le k \le d$, if the cores satisfy
\[
\mathcal{X}_{i_1,\ldots,i_d} = \mathcal{G}_1(:,i_1,:)\mathcal{G}_2(:,i_2,:) \cdots \mathcal{G}_d(:,i_d,:), \qquad 1\leq i_k \leq n_k.
\]
Since the product of the matrices must be a scalar, we have $s_0 = s_d = 1$. We call $\pmb{s} = (s_0,\ldots,s_d)$ the size of the TT cores, and it is an entry-by-entry bound on the TT rank $\pmb{r}= (r_0,\ldots,r_d)$. In this way, a TT representation with TT core size $\pmb{s}$ requires $\sum_{k=1}^d s_{k-1}s_k n_k$ degrees of freedom for storage, which is linear in mode size $\pmb{n}=(n_1,\ldots,n_d)$ and order $d$. \Cref{fig:TT} illustrates a TT format with TT core size $\pmb{s}$.
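Concretely, a tensor entry is evaluated by multiplying out the ``train,'' as in the following sketch (random cores with hypothetical sizes):

```python
import numpy as np

def tt_entry(cores, idx):
    """X[i1,...,id] = G1(:,i1,:) @ G2(:,i2,:) @ ... @ Gd(:,id,:)."""
    v = np.eye(1)                      # 1 x 1 start, since s_0 = 1
    for G, i in zip(cores, idx):
        v = v @ G[:, i, :]
    return v[0, 0]                     # s_d = 1, so v is 1 x 1

rng = np.random.default_rng(0)
# Random TT cores of size s = (1, 2, 3, 1) for a 4 x 5 x 6 tensor.
cores = [rng.standard_normal((1, 4, 2)),
         rng.standard_normal((2, 5, 3)),
         rng.standard_normal((3, 6, 1))]
# Dense reference tensor obtained by contracting the whole train.
X = np.einsum('aib,bjc,ckd->ijk', *cores)
assert np.isclose(X[1, 2, 3], tt_entry(cores, (1, 2, 3)))
```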
\begin{figure}
\centering
\begin{tikzpicture}
\filldraw[black] (0,-0.5) node {$\mathcal{X}_{i_1,\ldots,i_d}$};
\filldraw[black] (1,-0.5) node {$=$};
\filldraw[color=black,fill=gray!20] (1.5,0) rectangle (4,-.5);
\filldraw[black] (2.8,-0.25) node {$\mathcal{G}_1(i_1,:)$};
\filldraw[black] (2.8,0.3) node {$1\! \times \! s_1$};
\filldraw[color=black,fill=gray!20] (4.2,0) rectangle (7.1,-2.5);
\filldraw[black] (5.7,-1.3) node {$\mathcal{G}_2(:,i_2,:)$};
\filldraw[black] (5.7,0.3) node {$s_1 \! \times \! s_2$};
\filldraw[black] (7.5,-1) node {$\cdots$};
\filldraw[color=black,fill=gray!20] (7.9,0) rectangle (10.3,-2.3);
\filldraw[black] (9.1,-1) node {$\mathcal{G}_{d-1}(:,i_{d-1},:)$};
\filldraw[black] (9.1,0.3) node {$s_{d-2} \! \times \! s_{d-1}$};
\filldraw[color=black,fill=gray!20] (10.5,0) rectangle (11,-2.4);
\filldraw[black] (10.75,-1) node {\rotatebox{270}{$\mathcal{G}_{d}(:,i_{d})$}};
\filldraw[black] (10.75,0.3) node {$s_{d-1} \! \times \! 1$};
\end{tikzpicture}
\caption{The TT format with TT core size $\pmb{s} = (s_0,\ldots,s_d)$. Each entry of a tensor is represented by the product of $d$ matrices, where the $k$th matrix in the ``train" is selected based on the value of $i_k$.}
\label{fig:TT}
\end{figure}
The TTSVD algorithm computes a TT format by sequentially constructing the TT cores~\cite{oseledets2011tensor}. In this way, we can use ranks of the tensor unfoldings to bound entries of the TT rank~\cite{oseledets2011tensor}. That is,
\begin{equation} \label{eq:TT_trivial}
r_k \le {\rm rank}(X_k), \quad 1 \le k \le d-1,
\end{equation}
where ${\rm rank}(X_k)$ is the rank of the unfolding matrix $X_k$. Therefore, if the ranks of all the matrices $X_k$ for $1\leq k\leq d-1$ are small, then the TT format of $\mathcal{X}$ is data-sparse. In particular, if SVDs in TTSVD are truncated to have an accuracy of $\epsilon/\sqrt{d-1}$ in the Frobenius norm, then we obtain an approximation $\tilde{\mathcal{X}}$ that satisfies~\cref{eq:FrobeniusNorm} in the TT format. However, TTSVD is sequential, so we wish to further exploit~\cref{eq:TT_trivial} to compute a TT format in parallel.
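For concreteness, here is a minimal serial TTSVD sketch (the truncation logic is ours; the parallel algorithms below avoid this sequential sweep):

```python
import numpy as np

def ttsvd(X, eps):
    """Sequential TTSVD: each unfolding SVD is truncated to accuracy
    eps/sqrt(d-1) * ||X||_F, giving overall error at most eps * ||X||_F."""
    dims, d = X.shape, X.ndim
    delta = eps / np.sqrt(d - 1) * np.linalg.norm(X)
    cores, r_prev = [], 1
    C = X
    for k in range(d - 1):
        C = C.reshape((r_prev * dims[k], -1), order='F')
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = len(s)                       # smallest rank meeting the tolerance
        while r > 1 and np.sqrt(np.sum(s[r - 1:] ** 2)) <= delta:
            r -= 1
        cores.append(U[:, :r].reshape((r_prev, dims[k], r), order='F'))
        C = s[:r, None] * Vt[:r]         # pass Sigma V^* to the next step
        r_prev = r
    cores.append(C.reshape((r_prev, dims[-1], 1), order='F'))
    return cores

def tt_full(cores):
    """Dense tensor from TT cores, by contracting along the bond indices."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([T.ndim - 1], [0]))
    return T.reshape(T.shape[1:-1])

rng = np.random.default_rng(1)
A = [rng.standard_normal((1, 4, 2)),
     rng.standard_normal((2, 5, 3)),
     rng.standard_normal((3, 6, 1))]
X = np.einsum('aib,bjc,ckd->ijk', *A)    # exact TT ranks (2, 3)
cores = ttsvd(X, 1e-10)
assert np.allclose(tt_full(cores), X)
```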
\subsection{Orthogonal Tucker format} \label{sec:OrthogonalTucker}
The orthogonal Tucker format represents a tensor $\mathcal{X} \in \C^{n_1\times\cdots \times n_d}$ with a core tensor $\mathcal{G} \in\C^{t_1 \times \cdots \times t_d}$ and a set of factor matrices $A_1,\ldots,A_d$ with orthonormal columns~\cite{kolda2009tensor,de2000multilinear}:
\begin{equation}
\mathcal{X} = \mathcal{G} \times_1 A_1 \cdots \times_d A_d, \qquad A_k \in \C^{n_k\times t_k}.
\label{eq:Tucker}
\end{equation}
In this case, we call $\pmb{t} = (t_1,\ldots,t_d)$ the size of the factor matrices of $\mathcal{X}$ and it provides an entry-by-entry bound on the multilinear rank $\pmb{\ell} = (\ell_1,\ldots,\ell_d)$. Such a decomposition contains $\sum_{k=1}^{d} n_k t_k + \prod_{k=1}^{d} t_k$ degrees of freedom, which is linear in size $\pmb{n}=(n_1,\ldots,n_d)$, and still exponential in dimension $d$ and thus can be infeasible for large $d$. Nevertheless, the Tucker format is very useful when each entry $t_j$ is significantly smaller than the corresponding mode size $n_j$.
Higher-order SVD (HOSVD)~\cite{de2000multilinear} can be used to compute the orthogonal Tucker format of a given tensor. The algorithm utilizes an ON basis of each tensor matricization as the corresponding factor matrix and computes the core tensor with these matrices. Therefore, finding the factor matrices in parallel is easy, as the matricizations are independent and can be handled simultaneously. In terms of accuracy, if the factor matrices are calculated via SVDs with $\epsilon/\sqrt{d}$ accuracy in the Frobenius norm and $0 < \epsilon < 1$, then we obtain an approximation $\tilde{\mathcal{X}}$ that satisfies~\cref{eq:FrobeniusNorm} in the orthogonal Tucker format.
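A minimal truncated-HOSVD sketch along these lines (illustrative sizes; the test tensor has exact multilinear rank, so the reconstruction is exact up to roundoff):

```python
import numpy as np

def hosvd(X, t):
    """Truncated HOSVD: ON factor A_k from the SVD of the k-th
    matricization; core G = X x_1 A_1^* ... x_d A_d^*."""
    factors = []
    for k in range(X.ndim):
        Xk = np.moveaxis(X, k, 0).reshape((X.shape[k], -1))  # mode-k fibers
        U = np.linalg.svd(Xk, full_matrices=False)[0]
        factors.append(U[:, :t[k]])
    G = X
    for k, A in enumerate(factors):
        G = np.moveaxis(np.tensordot(A.conj().T, G, axes=(1, k)), 0, k)
    return G, factors

def tucker_full(G, factors):
    """Expand a Tucker representation back to a dense tensor."""
    X = G
    for k, A in enumerate(factors):
        X = np.moveaxis(np.tensordot(A, X, axes=(1, k)), 0, k)
    return X

rng = np.random.default_rng(2)
# Exact multilinear rank (2, 2, 2): random core expanded by random factors.
C = rng.standard_normal((2, 2, 2))
Fs = [rng.standard_normal((n, 2)) for n in (5, 6, 7)]
X = tucker_full(C, Fs)
G, factors = hosvd(X, (2, 2, 2))
assert np.allclose(tucker_full(G, factors), X)
```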
\subsection{Sylvester matrix equations and fADI}
A Sylvester matrix equation for an unknown matrix $W$ has the form
\begin{equation} \label{2d:sylv}
A_1W-WA_2^T = F, \quad A_1 \in \C^{m \times m}, \ A_2 \in \C^{n \times n}, \ F \in \C^{m \times n}.
\end{equation}
For simplicity, we assume $A_1$ and $A_2$ are normal matrices; then,~\cref{2d:sylv} has a unique solution if the spectra of $A_1$ and $A_2$ are disjoint~\cite{simoncini2016computational}. When $F$ has a low-rank factorization $F = UV^*$ with $U \in \C^{m \times r}$ and $V \in \C^{n \times r}$, we can use the factored alternating direction implicit (fADI) method~\cite{benner2009adi} to obtain $W$ also in low-rank format by solving a sequence of shifted linear systems. The main takeaway from the fADI method is that it solves for the column space and row space of $W$ independently. The shifts used in the iterations are known in many situations~\cite{fortunato2020fast,townsend2018singular}. For example, one set of shift parameters $\pmb{p}$ and $\pmb{q}$ can be chosen as the zeros and poles of a rational function $r \in \mathcal{R}_{k,k}$ that achieves a quasi-optimal Zolotarev number~\cite{zolotarev1877application}
\[
Z_k(\Lambda(A_1),\Lambda(A_2)) := \inf_{r \in \mathcal{R}_{k,k}} \frac{\sup_{z \in \Lambda(A_1)} |r(z)|}{\inf_{z \in \Lambda(A_2)} |r(z)|},\qquad k\geq 0,
\]
where $\Lambda(A_1)$ and $\Lambda(A_2)$ are the spectra of $A_1$ and $A_2$, and $\mathcal{R}_{k,k}$ is the set of rational functions of the form $s(x)/t(x)$ with polynomials $s$ and $t$ of degree at most $k$. This choice is closely related to the fact that Zolotarev numbers can be used to bound approximations of $W$ that satisfies~\cref{2d:sylv}~\cite{shi2021compressibility}; namely,
\begin{equation} \label{zolo_fro}
\|W-W_k\|_F \le Z_k(\Lambda(A_1),\Lambda(A_2)) \|W\|_F,
\end{equation}
where $W_k$ is the best rank-$k$ approximation of $W$.
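As a point of reference (not the fADI method itself, which avoids dense solves), the following sketch solves a small instance of~\cref{2d:sylv} by vectorization and checks the rapid singular value decay predicted by~\cref{zolo_fro}; all sizes and spectra are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 5, 4
# Normal matrices with disjoint spectra (here: diagonal, well separated).
A1 = np.diag(rng.uniform(2.0, 3.0, m))
A2 = np.diag(rng.uniform(-1.0, 0.0, n))
F = np.outer(rng.standard_normal(m), rng.standard_normal(n))   # rank 1

# Dense solve of A1 W - W A2^T = F via vec(W); fADI instead builds
# low-rank factors of W from shifted solves with A1 and A2 alone.
K = np.kron(np.eye(n), A1) - np.kron(A2, np.eye(m))
W = np.linalg.solve(K, F.reshape(-1, order='F')).reshape((m, n), order='F')
assert np.allclose(A1 @ W - W @ A2.T, F)

# The Zolotarev bound predicts fast singular value decay of W, which is
# what makes a low-rank (fADI) representation effective.
s = np.linalg.svd(W, compute_uv=False)
assert s[2] < 0.2 * s[0]
```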
\section{Parallel TT approximations from other tensor formats} \label{sec:parallelTT}
In this section, we focus on describing parallel algorithms to compute a TT approximation of a tensor $\mathcal{X}$ when we have access to all its entries. We consider three scenarios: (1) we can afford to store the whole tensor in cache (see~\cref{sec:parallelTTSVD}), (2) we can only afford to store a proportion of its entries in cache (see~\cref{sec:TTsketching}), and (3) the Tucker format of $\mathcal{X}$ is known and can be stored in cache (see~\cref{sec:TTTucker}).
\subsection{Parallel TT decomposition with SVD} \label{sec:parallelTTSVD}
The derivation of the parallel TT decomposition starts with the analogy between TTSVD and HOSVD. Roughly speaking, HOSVD follows a ``compress-then-combine" approach, which compresses all matricizations first and then computes the core tensor. This makes HOSVD for computing the Tucker decomposition naturally parallelizable. Comparatively, for the TT format, TTSVD has a sequential nature that alternates between reshaping and compressing. Here, we design a ``compress-then-combine" algorithm for computing a TT approximation. We compress all the unfoldings first and then combine the resulting matrices to obtain the TT cores.
We first show that the ON bases of the column space of all tensor unfoldings are related for a $d$-dimensional tensor.
\begin{theorem} \label{thm:interlacing_d}
Let $\mathcal{X} \in \C^{n_1 \times \dots \times n_d}$, and $X_j \in \C^{\left(\prod_{i=1}^j n_i\right) \times \left(\prod_{i = j+1}^d n_i\right)}$ be its $j$th flattening for $1 \le j \le d-1$. If $X_j = U_jV_j^*$ with $U_j \in \C^{\left(\prod_{i=1}^j n_i\right) \times r_j}$, $V_j \in \C^{\left(\prod_{i = j+1}^d n_i \right) \times r_j}$, $r_j \le \min(\prod_{i=1}^j n_i,\prod_{i = j+1}^d n_i)$, and $U_j$ has ON columns, then for $1 \le k \le d-2$, there exist matrices $W_k \in \C^{r_k \times n_{k+1}r_{k+1}}$ such that
\[ {\rm reshape}\left(U_{k+1}, \prod_{i=1}^k n_i ,n_{k+1}r_{k+1}\right) = U_kW_k. \]
\end{theorem}
\begin{proof}
To proceed with the proof, we use the fact that for $1 \le i \le d-2$, each column of $X_{i+1}$ consists of $n_{i+1}$ consecutive columns of $X_i$. This is true for any $d \ge 3$ so it suffices to show the statement holds when $\mathcal{X} \in \C^{n_1 \times n_2 \times n_3}$.
For notational simplicity, we denote the frontal slices of $\mathcal{X}$ by $X_f^{(j)} = \mathcal{X}(:,:,j) \in \C^{n_1 \times n_2}$ for $1 \le j \le n_3$ and the lateral slices by $X_\ell^{(k)} = \mathcal{X}(:,k,:) \in \C^{n_1 \times n_3}$ for $1 \le k \le n_2$. Then, by construction, we have
\[ X_1 = \begin{bmatrix} X_f^{(1)} & \cdots & X_f^{(n_3)} \end{bmatrix}, \quad X_2 = \begin{bmatrix}X_\ell^{(1)} \\[5pt]\vdots \\[5pt] X_\ell^{(n_2)} \end{bmatrix}. \]
The columns of the frontal slices and those of the lateral slices are columns of the tensor $\mathcal{X}$, so we can find the same column in the frontal and lateral slices. That is,
\[ \left(X_\ell^{(k)}\right)_j = \left(X_f^{(j)}\right)_k = \mathcal{X}(:,k,j), \quad 1 \le j \le n_3, \quad 1 \le k \le n_2, \]
where $\left(X_\ell^{(k)}\right)_j$ denotes the $j$th column of $X_\ell^{(k)}$, and similarly for $\left(X_f^{(j)}\right)_k$.
Given $X_1 = U_1V_1^*$, we can write $X_2$ as
\[ X_2 = \begin{bmatrix} U_1\left(V_1^{(1)}\right)^* \\[5pt] \vdots \\[5pt] U_1\left(V_1^{(n_2)}\right)^* \end{bmatrix} = (I \otimes U_1)Z^*, \]
where $V_1^{(i)}$ is a submatrix that contains columns $i, i+n_2,\dots,i+(n_3-1)n_2$ of $V_1$ for $1 \le i \le n_2$, and `$\otimes$' is the Kronecker product of two matrices.
Since $U_1$ has ON columns, $I \otimes U_1$ also has ON columns, and we can factor $Z^* = ST^*$, where $S \in \C^{r_1n_2 \times r_2}$ has ON columns and $T \in \C^{n_3 \times r_2}$. Without loss of generality, we can assume that $U_2 = (I \otimes U_1)S$; otherwise, there is an orthogonal matrix $Y \in \C^{r_2 \times r_2}$ such that $U_2 = (I \otimes U_1)SY$, in which case we may replace $S$ by $SY$ and $T$ by $TY$. By reshaping $U_2 = (I \otimes U_1)S$, we find
\[ {\rm reshape}(U_2, n_1,n_2r_2) = U_1W, \quad W = {\rm reshape}(S,r_1,n_2r_2), \]
which proves the statement for $d = 3$.
\end{proof}
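The nesting property of~\cref{thm:interlacing_d} is easy to verify numerically. In the sketch below (hypothetical sizes; NumPy's \texttt{order='F'} matches our column-major reshapes), the reshaped $U_2$ lies in the column space of $U_1$ up to roundoff.

```python
import numpy as np

rng = np.random.default_rng(4)
n1, n2, n3, r1, r2 = 4, 5, 6, 2, 3
# A tensor with rank(X1) = 2 and rank(X2) = 3, built from random TT cores.
cores = [rng.standard_normal((1, n1, r1)),
         rng.standard_normal((r1, n2, r2)),
         rng.standard_normal((r2, n3, 1))]
X = np.einsum('aib,bjc,ckd->ijk', *cores)

X1 = X.reshape((n1, n2 * n3), order='F')
X2 = X.reshape((n1 * n2, n3), order='F')
U1 = np.linalg.svd(X1, full_matrices=False)[0][:, :r1]
U2 = np.linalg.svd(X2, full_matrices=False)[0][:, :r2]

# Nesting: reshape(U2, n1, n2*r2) = U1 W for some W, i.e. the reshaped
# U2 lies in the column space of U1.
M = U2.reshape((n1, n2 * r2), order='F')
residual = M - U1 @ (U1.T @ M)
assert np.linalg.norm(residual) < 1e-10
```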
One may notice that for $1 \le k \le d-2$, ${\rm reshape}(W_k, r_k, n_{k+1}, r_{k+1})$ has the size of the $(k+1)$st core in a TT format. In the next theorem, we show the accuracy of the approximation obtained when we construct a tensor whose TT cores are given by ${\rm reshape}(W_k, r_k, n_{k+1}, r_{k+1})$.
\begin{theorem} \label{thm:perf_alg1}
Let $\mathcal{X} \in \C^{n_1 \times \dots \times n_d}$, and $0 < \epsilon < 1$. Suppose further that for $1
\le j \le d-1$, each unfolding admits $\|X_j - U_jV_j^*\|_F \le \frac{\epsilon}{\sqrt{d-1}}\|\mathcal{X}\|_F$, where $U_j$ has ON columns. Then, the tensor $\tilde{\mathcal{X}}$ constructed by TT cores $\mathcal{G}_1 = U_1$, $\mathcal{G}_d = V_{d-1}^*$, and
\begin{equation}
\label{eq:core}
\mathcal{G}_{k+1} = {\rm reshape}\left(U_k^* \ {\rm reshape}\left(U_{k+1}, \prod_{i=1}^k n_i ,n_{k+1}r_{k+1}\right), r_k, n_{k+1}, r_{k+1}\right),
\end{equation}
for $1 \le k \le d-2$, satisfies $\|\mathcal{X}-\tilde{\mathcal{X}}\|_F \le \epsilon\|\mathcal{X}\|_F$.
\end{theorem}
\begin{proof}
Since the TT cores are computed with $U_1,\dots,U_{d-1}$, we can express each element of $\tilde{\mathcal{X}}$ with the same matrices:
\begin{align}
\tilde{\mathcal{X}}_{i_1,i_2\dots,i_d} &= \mathcal{G}_1(i_1,:) \mathcal{G}_2(:,i_2,:)\cdots \mathcal{G}_d(:,i_d) \nonumber \\
&= U_1(i_1,:)U_1^* U_2((i_2-1)n_1+1:i_2n_1,:) \cdots \nonumber \\
&\quad U_{d-2}^* U_{d-1}\left(\left((i_{d-1}-1)\prod_{k=1}^{d-2} n_k+1\right):\left(i_{d-1}\prod_{k=1}^{d-2}n_k\right),:\right)U_{d-1}^* X_{d-1}(:,i_d) \nonumber.
\end{align}
Now, let $Y_{d-1} = U_{d-1}U_{d-1}^*X_{d-1}$, $\tilde{Y}_{j+1} = {\rm reshape}\left(Y_{j+1}, \prod_{k = 1}^{j} n_k, \prod_{k=j+1}^d n_k\right)$, and $Y_j = U_jU_j^*\tilde{Y}_{j+1}$ for $1 \le j \le d-2$. Then,
\begin{align}
\|\mathcal{X}-\tilde{\mathcal{X}}\|^2_F &= \|Y_1-X_1\|^2_F \nonumber \\
&= \|U_1U_1^*\tilde{Y}_2-X_1\|^2_F \nonumber \\
&= \|U_1U_1^*(\tilde{Y}_2-X_1)+(U_1U_1^*-I)X_1\|^2_F \nonumber \\
&= \|U_1U_1^*(\tilde{Y}_2-X_1)\|^2_F+\|(U_1U_1^*-I)X_1\|^2_F \nonumber \\
&\le \|\tilde{Y}_2-X_1\|^2_F+\|(U_1U_1^*-I)X_1\|^2_F \nonumber \\
&= \|Y_2-X_2\|^2_F+\|(U_1U_1^*-I)X_1\|^2_F, \nonumber
\end{align}
where the fourth equality holds since $(U_1U_1^*(\tilde{Y}_2-X_1))^*((U_1U_1^*-I)X_1) = 0$, the inequality holds since $U_1U_1^*$ is an orthogonal projection, and the last equality holds as reshaping preserves Frobenius norm. Following this argument, by induction on $\|Y_j-X_j\|_F^2$ for $1 \le j \le d-1$, we have
\begin{align}
\|\mathcal{X}-\tilde{\mathcal{X}}\|^2_F &\le \sum_{k=1}^{d-1} \|(I-U_kU_k^*)X_k\|_F^2 \label{eq:proj_acc} \\
&\le \sum_{k=1}^{d-1} \|(I-U_kU_k^*)(X_k-U_kV_k^*)\|_F^2 \nonumber \\
&= \sum_{k=1}^{d-1} \|X_k-U_kV_k^*\|_F^2 \le \epsilon^2\|\mathcal{X}\|_F^2. \nonumber
\end{align}
\end{proof}
\cref{thm:perf_alg1} provides an algorithm to compute a tensor $\tilde{\mathcal{X}}$ in TT format that approximates $\mathcal{X}$ (see~\cref{alg:1}). It is also simple to observe that~\cref{alg:1} can be performed in parallel, since the unfoldings of a tensor are independent.
\begin{algorithm}
\caption{Parallel-TTSVD: Given a tensor, compute an approximant tensor in TT format using SVD in parallel. }
\begin{algorithmic}[1]
\label{alg:1}
\Require {A tensor $\mathcal{X} \in \C^{n_1 \times \dots \times n_d}$ and a desired accuracy $0<\epsilon<1$}
\Ensure {TT cores $\mathcal{G}_1, \dots, \mathcal{G}_d$ of an approximant $\tilde{\mathcal{X}}$}
\For {$1 \le j \le d-1$}
\State Compute a rank $r_j$ approximation of the $j$th flattening of $\mathcal{X}$ in truncated SVD form so that $\|X_j - U_j\Sigma_jV_j^*\|_F \le \epsilon\|\mathcal{X}\|_F/\sqrt{d-1}$.
\EndFor
\For {$1 \le k \le d-2$}
\State Calculate $W_{k+1} = U_k^* \ {\rm reshape}(U_{k+1}, \prod_{i=1}^k n_i ,n_{k+1}r_{k+1})$.
\State Set $\mathcal{G}_{k+1} = {\rm reshape}(W_{k+1}, r_k, n_{k+1}, r_{k+1})$.
\EndFor
\State Set $\mathcal{G}_1 = U_1$ and $\mathcal{G}_d = \Sigma_{d-1}V_{d-1}^*$.
\end{algorithmic}
\end{algorithm}
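A serial NumPy mock-up of~\cref{alg:1} (in the parallel version, each unfolding is an independent task; here we simply loop, and the truncation ranks are supplied rather than chosen from $\epsilon$):

```python
import numpy as np

def parallel_ttsvd(X, ranks):
    """Truncate the SVD of every unfolding independently, then combine
    the U factors into TT cores as in Theorem 3.2."""
    dims, d = X.shape, X.ndim
    U, G_last = [], None
    for j in range(d - 1):                      # independent tasks in parallel
        Xj = X.reshape((int(np.prod(dims[:j + 1])), -1), order='F')
        Uj, s, Vt = np.linalg.svd(Xj, full_matrices=False)
        U.append(Uj[:, :ranks[j]])
        if j == d - 2:
            G_last = s[:ranks[j], None] * Vt[:ranks[j]]   # Sigma_{d-1} V^*
    cores = [U[0][None, :, :]]                  # G_1, with s_0 = 1
    for k in range(d - 2):
        Uk1 = U[k + 1].reshape((U[k].shape[0], -1), order='F')
        W = U[k].conj().T @ Uk1
        cores.append(W.reshape((ranks[k], dims[k + 1], ranks[k + 1]),
                               order='F'))
    cores.append(G_last.reshape((ranks[-1], dims[-1], 1), order='F'))
    return cores

def tt_full(cores):
    """Dense tensor from TT cores, contracting along the bond indices."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([T.ndim - 1], [0]))
    return T.reshape(T.shape[1:-1])

rng = np.random.default_rng(6)
A = [rng.standard_normal((1, 4, 2)),
     rng.standard_normal((2, 5, 3)),
     rng.standard_normal((3, 6, 1))]
X = np.einsum('aib,bjc,ckd->ijk', *A)           # exact TT ranks (2, 3)
cores = parallel_ttsvd(X, (2, 3))
assert np.allclose(tt_full(cores), X)
```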
Since the unfoldings have different sizes, the amount of work assigned to each processor in~\cref{alg:1} varies. Roughly speaking, processors that deal with $X_j$ when $j$ is close to $\floor{d/2}$ have the most computationally expensive SVD. In practice, one can replace the SVD in~\cref{alg:1} with the randomized SVD~\cite{halko2011finding}, in which case the computational complexity on each processor is $\mathcal{O}(r_j\prod_{i=1}^d n_i)$ where $r_j$ is the rank of the $j$th unfolding. In this scenario,~\cref{alg:1} is ideal when all the $r_j$'s are equal so that the computation is evenly distributed across all the processors.
\subsection{Parallel TT Sketching} \label{sec:TTsketching}
When one implements~\cref{alg:1} in a distributed computing environment, such as on a multi-threaded computer, a copy of the entire tensor $\mathcal{X}$ needs to be made on each processor. However, storing the tensor might not be feasible. Under this setting, we cannot use SVD as it requires all the tensor entries to be in cache. Instead, we may only be able to read a small portion of the entries of $\mathcal{X}$ at a time before discarding.
A common idea in this scenario for large matrices and tensors is sketching, where information about the matrix or tensor is obtained via matrix-vector multiplications. This idea is used for computing low-rank approximations of matrices~\cite{halko2011finding}, Tucker decomposition on tensors~\cite{sun2019low,ma2021fast}, and TT decomposition~\cite{che2019randomized}. In particular, SVDs in TTSVD can be replaced by sketching and a randomized range finder~\cite{che2019randomized}. Here, we develop a parallel TT sketching algorithm based on~\cref{thm:interlacing_d} (see~\cref{alg:2}). Since we want a truncated QR decomposition to reveal the rank of a given matrix, we use the column pivoted QR (CPQR)~\cite{chan1987rank}. This is a so-called ``two-pass" algorithm since $\mathcal{X}$ is used twice: the first time to compute an ON basis of the column space of each unfolding, and the second time to compute the last TT core.
\begin{algorithm}
\caption{PSTT: Given a tensor, compute an approximant tensor in TT format using sketching.}
\begin{algorithmic}[1]
\label{alg:2}
\Require {A tensor $\mathcal{X} \in \C^{n_1 \times \dots \times n_d}$, TT core size $\pmb{r}$, and an oversampling parameter $p$}
\Ensure {TT cores $\mathcal{G}_1,\ldots,\mathcal{G}_d$ of an approximant $\tilde{\mathcal{X}}$}
\For {$1 \le j \le d-1$}
\State Generate $\Phi_j \in \R^{\left(\prod_{k=j+1}^d n_k\right) \times (r_j+p)}$ with i.i.d. standard Gaussian entries.
\State Calculate $S_j = X_j\Phi_j$, where $X_j$ is the $j$th flattening of $\mathcal{X}$
\State Compute a CPQR of $S_j$ to obtain $Q_j$ with ON columns and set $Q_j = Q_j(:,\!1\!:\!r_j)$.
\EndFor
\For {$1 \le k \le d-2$}
\State Calculate $W_{k+1} = Q_k^* \ {\rm reshape}(Q_{k+1}, \prod_{i=1}^k n_i ,n_{k+1}r_{k+1})$.
\State Set $\mathcal{G}_{k+1} = {\rm reshape}(W_{k+1}, r_k, n_{k+1}, r_{k+1})$.
\EndFor
\State Set $\mathcal{G}_1 = Q_1$, and $\mathcal{G}_d = Q_{d-1}^*X_{d-1}$.
\end{algorithmic}
\end{algorithm}
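For concreteness, the following is a minimal serial NumPy sketch of~\cref{alg:2} (our illustration, not the C implementation used in the experiments). It uses C-ordered reshapes for the unfoldings and an unpivoted QR in place of the CPQR, which generically suffices when the prescribed ranks are exact; the helper \texttt{tt\_to\_full} is included only for testing.

```python
import numpy as np

def pstt(X, ranks, p=5, seed=0):
    """Two-pass TT sketching, a serial sketch of Algorithm 2 (PSTT).

    X     : d-way NumPy array
    ranks : TT ranks (r_1, ..., r_{d-1})
    p     : oversampling parameter
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape, X.ndim
    Q = []
    for j in range(d - 1):                   # first pass: sketch each unfolding
        Xj = X.reshape(int(np.prod(n[:j + 1])), -1)
        Phi = rng.standard_normal((Xj.shape[1], ranks[j] + p))
        Qj, _ = np.linalg.qr(Xj @ Phi)       # unpivoted QR stands in for CPQR
        Q.append(Qj[:, :ranks[j]])
    cores = [Q[0].reshape(1, n[0], ranks[0])]
    for k in range(d - 2):                   # combine bases into interior cores
        Wk = Q[k].conj().T @ Q[k + 1].reshape(int(np.prod(n[:k + 1])), -1)
        cores.append(Wk.reshape(ranks[k], n[k + 1], ranks[k + 1]))
    Xlast = X.reshape(-1, n[-1])             # second pass: form the last core
    cores.append((Q[-1].conj().T @ Xlast).reshape(ranks[-1], n[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into a full tensor (for testing only)."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    return T[0, ..., 0]
```

When the input already has exact TT ranks, the reconstruction is exact up to rounding errors, mirroring the error bound below.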
Since $Q_j$ has ON columns for $1 \le j \le d-1$,~\cref{eq:proj_acc} provides an error bound for~\cref{alg:2}.
\begin{theorem} \label{thm:perf_sketch}
Let $\mathcal{X} \in \C^{n_1 \times \dots \times n_d}$, $\pmb{r}$ be the desired TT core size, and $p \ge 2$. The approximation $\tilde{\mathcal{X}}$ computed in~\cref{alg:2} satisfies
\begin{equation} \label{eq:sketch_bound}
\mathbb{E}\left[\|\mathcal{X}-\tilde{\mathcal{X}}\|_F^2\right] \le \sum_{j=1}^{d-1} \left(1+\frac{r_j}{p-1}\right) \left(\sum_{k = r_j+1}^{M_j} \sigma_k^2(X_j) \right),
\end{equation}
where $\sigma_k(X_j)$ is the $k$th singular value of $X_j$ and $M_j = \min(\prod_{i=1}^j n_i, \prod_{i = j+1}^d n_i)$. If, in addition, $p \ge 4$, then for all $u, t \ge 1$,
\begin{equation}
\|\mathcal{X}-\tilde{\mathcal{X}}\|_F^2 \le \sum_{j=1}^{d-1} \left( (1+t\sqrt{12r_j/p}) \left(\sum_{k = r_j+1}^{M_j} \sigma_k^2(X_j) \right)^{\tfrac{1}{2}} +ut\frac{e\sqrt{r_j+p}}{p+1}\sigma_{r_j+1}(X_j) \right)^2,
\label{eq:sketch_bound_prob}
\end{equation}
with failure probability at most $5t^{-p}+2e^{-u^2/2}$. \end{theorem}
\begin{proof}
The expectation bound in~\cref{eq:sketch_bound} follows from~\cref{eq:proj_acc} and~\cite[Thm. 10.5]{halko2011finding}. The probability bound in~\cref{eq:sketch_bound_prob} follows from~\cref{eq:proj_acc} and~\cite[Thm. 10.7]{halko2011finding}.
\end{proof}
If the accuracy in~\cref{thm:perf_alg1} is considered as a baseline, then~\cref{eq:sketch_bound} implies that the expected error is within a constant factor of the baseline for moderate $p$, and~\cref{eq:sketch_bound_prob} shows that the same holds with high probability. We can understand the probability bound as two parts: the sum of squares of the ``tail'' singular values corresponds to the expected approximation error in~\cref{eq:sketch_bound}, while the $(r_j+1)$st singular value of each unfolding in the second term controls the deviation above the mean.
A bottleneck of~\cref{alg:2} is the size of the dimension reduction maps (DRMs) $\Phi_j$, which grows exponentially with $d$. In practice, we can substitute them with Khatri-Rao products of smaller DRMs~\cite{sun2018tensor}. In other words, for $1 \le j \le d-1$, instead of using $\Phi_j \in \R^{\left(\prod_{k=j+1}^d n_k\right) \times (r_j+p)}$ with i.i.d. standard Gaussian entries, we use $\Psi_{d}^{(j)} \odot \cdots \odot \Psi_{j+1}^{(j)}$, where `$\odot$' denotes the Khatri-Rao product, and $\Psi_k^{(j)} \in \R^{n_k \times (r_j+p)}$ has i.i.d. standard Gaussian entries for $j+1 \le k \le d$. Then,
\begin{equation}
\label{eq:sketch_orig}
X_j\left( \Psi_{d}^{(j)} \odot \cdots \odot \Psi_{j+1}^{(j)} \right) = \sum_{\ell_d=1}^{n_d} \cdots \sum_{\ell_{j+2}=1}^{n_{j+2}}(X_j)_{I(\ell_{j+2},\dots,\ell_d)} \Psi_{j+1}^{(j)} D_{\ell_{d}}\cdots D_{\ell_{j+2}},
\end{equation}
where $I(\ell_{j+2},\dots,\ell_d) = 1 + (\ell_{j+2}-1) + \sum_{k=j+3}^d (\ell_k-1)\prod_{m=j+2}^{k-1} n_m$, $X_j$ is partitioned as $X_j = \begin{bmatrix} (X_j)_1 & \cdots & (X_j)_{\prod_{k=j+2}^d n_k} \end{bmatrix}$ with $(X_j)_i \in \C^{\prod_{k=1}^j n_k \times n_{j+1}}$ for $1 \le i \le \prod_{k=j+2}^d n_k$, and $D_{\ell_q}$ is a diagonal matrix whose diagonal elements are the elements of row $\ell_q$ of $\Psi_q^{(j)}$ for $j+2 \le q \le d$.~\Cref{alg:2} with the Khatri-Rao DRMs gives slightly less accurate approximations, but the storage cost of the DRMs is only linear in $\pmb{n}$ and $d$.
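The bookkeeping behind~\cref{eq:sketch_orig} can be checked numerically. The snippet below (an illustrative sketch with made-up sizes) applies a Khatri-Rao DRM to an unfolding one mode at a time via \texttt{einsum}, so only the small factors $\Psi_k^{(j)}$ are ever formed; note that NumPy's row-major reshapes reverse the factor order relative to the column-major convention used in the text.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product: column k is kron(A[:, k], B[:, k])."""
    m, r = A.shape
    n, _ = B.shape
    return np.einsum('ir,jr->ijr', A, B).reshape(m * n, r)

def kr_sketch(Y, Psis):
    """Sketch X_j @ (Khatri-Rao of Psis) without forming the big DRM.

    Y    : X_j reshaped to (rows of X_j, n_{j+1}, ..., n_d)
    Psis : small DRM factors, one per trailing mode, each of shape (n_k, r+p)
    """
    S = np.tensordot(Y, Psis[-1], axes=([-1], [0]))   # contract the last mode
    for Psi in reversed(Psis[:-1]):                   # tie remaining modes to
        S = np.einsum('...iq,iq->...q', S, Psi)       # the shared column index
    return S

rng = np.random.default_rng(0)
m, n2, n3, q = 12, 5, 6, 4            # hypothetical sizes, q = r_j + p
Y = rng.standard_normal((m, n2, n3))
Psis = [rng.standard_normal((n2, q)), rng.standard_normal((n3, q))]
Phi = khatri_rao(Psis[0], Psis[1])    # explicit DRM, for verification only
assert np.allclose(kr_sketch(Y, Psis), Y.reshape(m, -1) @ Phi)
```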
When the tensor $\mathcal{X}$ is too large to access its elements for a second time in line 8 of~\cref{alg:2}, we design a ``one-pass" (or ``single-pass") algorithm that computes $\mathcal{G}_d$ without using $X_{d-1}$ directly~\cite{sun2020low}. To be specific, we generate a new dimension reduction map $\Psi_{d-1} \in \R^{\left(\prod_{k=1}^{d-1} n_k\right) \times (r_{d-1}+p)}$ with i.i.d. standard Gaussian entries and compute $T_{d-1} = \Psi_{d-1}^*X_{d-1}$ in lines 2 and 3 when $j = d-1$. Then,
\begin{equation}
\label{eq:onepass_final_core}
T_{d-1} \approx \Psi_{d-1}^*Q_{d-1}Q_{d-1}^*X_{d-1} = (\Psi_{d-1}^*Q_{d-1})\mathcal{G}_d.
\end{equation}
In this way, we have $\mathcal{G}_d \approx (\Psi_{d-1}^*Q_{d-1})^{\dagger}T_{d-1}$, where the pseudo-inverse exists with probability $1$. We use PSTT-onepass to denote this single-pass version of~\cref{alg:2}. The expected error of PSTT-onepass can also be determined.
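The pseudo-inverse step~\cref{eq:onepass_final_core} can be illustrated in a few lines of NumPy (hypothetical sizes, exact rank); the last core is recovered from the sketch $T_{d-1} = \Psi_{d-1}^*X_{d-1}$ alone, without a second pass over $X_{d-1}$.

```python
import numpy as np

rng = np.random.default_rng(1)
m, nd, r, p = 60, 8, 4, 4                 # hypothetical sizes; X_{d-1} is m-by-nd
Xd1 = rng.standard_normal((m, r)) @ rng.standard_normal((r, nd))  # exact rank r

# Range finder (lines 2-4 of Algorithm 2 for j = d-1).
Q, _ = np.linalg.qr(Xd1 @ rng.standard_normal((nd, r + p)))
Q = Q[:, :r]                              # ON basis for the column space

# One-pass variant: sketch the rows at the same time as the columns ...
Psi = rng.standard_normal((m, r + p))
T = Psi.T @ Xd1                           # the only other access to X_{d-1}

# ... and recover G_d = Q^* X_{d-1} by a small least-squares solve.
Gd = np.linalg.pinv(Psi.T @ Q) @ T
assert np.allclose(Q @ Gd, Xd1)           # exact because the rank is exact
```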
\begin{theorem} \label{thm:perf_sketch_one}
Let $\mathcal{X} \in \C^{n_1 \times \dots \times n_d}$, $\pmb{r}$ be the desired TT core size, and $p \ge 2$. The approximation $\tilde{\tilde{\mathcal{X}}}$ computed by PSTT-onepass satisfies
\[
\begin{aligned}
\mathbb{E}\left[\|\mathcal{X}-\tilde{\tilde{\mathcal{X}}}\|_F^2\right] & \le \sum_{j=1}^{d-2} \left(\!1+\frac{r_j}{p-1}\right) \!\!\!\sum_{k = r_j+1}^{M_j} \sigma_k^2(X_j)
+\left(\!1+\frac{r_{d-1}}{p-1}\right)^2 \!\!\! \sum_{k = r_{d-1}+1}^{M_{d-1}} \sigma_k^2(X_{d-1}),
\end{aligned}
\]
where $\sigma_k(X_j)$ is the $k$th singular value of $X_j$ and $M_j = \min(\prod_{i=1}^j n_i, \prod_{i = j+1}^d n_i)$ for $1 \le j \le d-1$.
\end{theorem}
\begin{proof}
Let $Z_{d-1} = Q_{d-1}(\Psi_{d-1}^*Q_{d-1})^{\dagger}\Psi_{d-1}^*X_{d-1}$, and for each $1\leq j\leq d-2$, let $\tilde{Z}_{j+1} = {\rm reshape}\left(Z_{j+1}, \prod_{k = 1}^{j} n_k, \prod_{k=j+1}^d n_k\right)$ and $Z_j = Q_jQ_j^*\tilde{Z}_{j+1}$. Then, from~\cref{eq:proj_acc} we find that
\[
\|\mathcal{X}-\tilde{\tilde{\mathcal{X}}}\|_F^2 \le \sum_{j=1}^{d-2} \|(I-Q_jQ_j^*)X_j\|_F^2+\|Z_{d-1}-X_{d-1}\|_F^2.
\]
So, we only need to bound the second term on the right hand side. Let $E_{d-1} = (\Psi_{d-1}^*Q_{d-1})^{\dagger}\Psi_{d-1}^*X_{d-1}$, then
\begin{align*}
\|Z_{d-1}&-X_{d-1}\|_F^2 = \|Q_{d-1}E_{d-1}-X_{d-1}\|_F^2 \\
&= \|Q_{d-1}Q_{d-1}^*Q_{d-1}E_{d-1}-Q_{d-1}Q_{d-1}^*X_{d-1}+Q_{d-1}Q_{d-1}^*X_{d-1}-X_{d-1}\|_F^2 \\
&= \|Q_{d-1}E_{d-1}-Q_{d-1}Q_{d-1}^*X_{d-1}\|_F^2+\|Q_{d-1}Q_{d-1}^*X_{d-1}-X_{d-1}\|_F^2 \\
&= \|E_{d-1}-Q_{d-1}^*X_{d-1}\|_F^2+\|(I-Q_{d-1}Q_{d-1}^*)X_{d-1}\|_F^2,
\end{align*}
where the third equality holds since $Q_{d-1}Q_{d-1}^*$ and $I-Q_{d-1}Q_{d-1}^*$ are orthogonal projectors. We are left to bound $\|E_{d-1}-Q_{d-1}^*X_{d-1}\|_F^2$. Plugging in $E_{d-1}$, we have
\begin{align*}
\|E_{d-1}&-Q_{d-1}^*X_{d-1}\|_F^2 = \|(\Psi_{d-1}^*Q_{d-1})^{\dagger}\Psi_{d-1}^*X_{d-1}-Q_{d-1}^*X_{d-1}\|_F^2 \\
&= \|(\Psi_{d-1}^*Q_{d-1})^{\dagger}\Psi_{d-1}^*X_{d-1}-(\Psi_{d-1}^*Q_{d-1})^{\dagger}(\Psi_{d-1}^*Q_{d-1})Q_{d-1}^*X_{d-1}\|_F^2 \\
&= \|(\Psi_{d-1}^*Q_{d-1})^{\dagger}\Psi_{d-1}^*(I-Q_{d-1}Q_{d-1}^*)X_{d-1}\|_F^2.
\end{align*}
The error bound follows from~\cite[Lemma B.1]{sun2020low}.
\end{proof}
The CPQR in line 4 of~\cref{alg:2} can be expensive when $j$ is large, and this remains an issue even for the single-pass algorithm. It is straightforward to see that~\cref{thm:interlacing_d} also holds for the row spaces of the unfoldings of $\mathcal{X}$. Let $d_* = \ceil{\frac{d}{2}}$. In practice, we then compute ON bases $Q_j$ for the column spaces of $X_j$ when $j < d_*$, and ON bases $P_j$ for the row spaces of $X_j$ when $j > d_*$. In this way, the TT cores $\mathcal{G}_j$ for $j \neq d_*$ can be calculated similarly using lines 6 and 7 of~\cref{alg:2}, and the middle TT core $\mathcal{G}_{d_*}$ requires one extra step
\begin{equation}
\label{eq:PSTT2_final_core}
\mathcal{G}_{d_*} = {\rm reshape}\left(\mathcal{X},\prod_{j=1}^{d_*-1}n_j, n_{d_*}, \prod_{j=d_*+1}^d n_j \right) \times_1 Q_{d_*-1}^* \times_3 P_{d_*+1}^*.
\end{equation}
We call this variant PSTT2, to indicate that both column and row spaces of tensor unfoldings are utilized; the accuracy bounds in~\cref{thm:perf_sketch} continue to hold for PSTT2. Moreover, one can design PSTT2-onepass, a one-pass version of PSTT2, by carrying out an extra sketching step for the middle unfolding $X_{d_*}$. The sketching step can be performed on either the row or the column space of $X_{d_*}$, and is followed by a pseudo-inverse as in~\cref{eq:onepass_final_core} to obtain $\mathcal{G}_{d_*}$.
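The middle-core formula~\cref{eq:PSTT2_final_core} amounts to two mode products. The following NumPy sketch (our illustration, with hypothetical sizes) builds a tensor of exact TT rank, forms ON bases $Q$ and $P$ for the relevant column and row spaces by sketching, and verifies that contracting $Q$, $\mathcal{G}_{d_*}$, and $P$ reproduces the tensor.

```python
import numpy as np

def middle_core(X, Q, P, d_star):
    """Middle TT core of PSTT2: reshape(X) x_1 Q^* x_3 P^* (d_star is 1-based)."""
    n = X.shape
    Z = X.reshape(int(np.prod(n[:d_star - 1])), n[d_star - 1], -1)
    G = np.einsum('abc,aq->qbc', Z, Q.conj())      # mode-1 product with Q^*
    return np.einsum('qbc,cp->qbp', G, P.conj())   # mode-3 product with P^*

rng = np.random.default_rng(0)
# A 4-way tensor of exact TT rank (2, 3) built from three random factors.
U = rng.standard_normal((4, 2))
C = rng.standard_normal((2, 5, 3))
V = rng.standard_normal((3, 18))
X = np.einsum('ia,ajb,bk->ijk', U, C, V).reshape(4, 5, 3, 6)

# ON bases: Q for col(X_1) and P for row(X_2), both found by sketching.
Q = np.linalg.qr(X.reshape(4, -1) @ rng.standard_normal((90, 5)))[0][:, :2]
P = np.linalg.qr(X.reshape(20, -1).T @ rng.standard_normal((20, 6)))[0][:, :3]

G = middle_core(X, Q, P, d_star=2)                 # shape (2, 5, 3)
Xhat = np.einsum('qbp,aq,cp->abc', G, Q, P).reshape(4, 5, 3, 6)
assert np.allclose(Xhat, X)
```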
\subsection{Parallel TT and orthogonal Tucker conversion} \label{sec:TTTucker}
Some applications, including signal processing~\cite{de2004dimensionality}, computer vision~\cite{vasilescu2002multilinear}, and chemical analysis~\cite{henrion1994n}, construct and manipulate tensor data in the Tucker format. If one wants to explore latent structures in the TT format, a common approach is to form the tensor explicitly in its original, uncompressed format and then perform a TT decomposition. This method has a major drawback: constructing the full tensor ignores the low-rank structure one intends to exploit in both the TT and Tucker formats. Here, we develop a method to directly approximate a tensor $\mathcal{X}$ given in orthogonal Tucker format by another tensor $\tilde{\mathcal{X}}$ in TT format.
If $\mathcal{X}$ has an orthogonal Tucker format~\cref{eq:Tucker}, and $\mathcal{H}_1,\dots,\mathcal{H}_d$ are the TT cores of $\mathcal{G}$, then computing $\mathcal{G} \times_j A_j$ is equivalent to replacing $\mathcal{H}_j$ by $\mathcal{H}_j \times_2 A_j$ while keeping the other cores unchanged, for $1 \le j \le d$. In this way,~\cref{eq:Tucker} amounts to updating each TT core of $\mathcal{G}$ independently with the corresponding Tucker factor matrix. Therefore, we can use these updated cores as the TT cores of an approximation $\tilde{\mathcal{X}}$ (see~\cref{alg:3}).
\begin{algorithm}
\caption{Tucker2TT: Given the Tucker decomposition of a tensor, compute an approximant tensor in TT format.}
\begin{algorithmic}[1]
\label{alg:3}
\Require {The Tucker core $\mathcal{G}$ and factor matrices $A_1,\ldots,A_d$ of a tensor $\mathcal{X}$ (see~\cref{eq:Tucker})}
\Ensure {The TT cores $\mathcal{T}_1,\ldots,\mathcal{T}_d$ of an approximant tensor $\tilde{\mathcal{X}}$}
\State Perform a parallel TT decomposition on $\mathcal{G}$ and get TT cores $\mathcal{H}_1, \dots, \mathcal{H}_d$.
\For {$1 \le j \le d$}
\State Set $\mathcal{T}_j = \mathcal{H}_j \times_2 A_j$.
\EndFor
\end{algorithmic}
\end{algorithm}
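The core-update identity behind~\cref{alg:3} can be verified numerically. In the NumPy sketch below (sizes are made up, and line 1 is carried out by plain sequential SVDs rather than a parallel TT decomposition), updating each TT core of the small Tucker core $\mathcal{G}$ by $\mathcal{H}_j \times_2 A_j$ reproduces $\mathcal{X}$ exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical Tucker format: core G (3x4x2) and factors with ON columns.
G = rng.standard_normal((3, 4, 2))
A = [np.linalg.qr(rng.standard_normal((n, s)))[0]
     for n, s in [(7, 3), (8, 4), (6, 2)]]
X = np.einsum('abc,ia,jb,kc->ijk', G, *A)         # the tensor in Tucker format

# Line 1 of Algorithm 3: a TT decomposition of the small core G
# (here via sequential truncated SVDs, for illustration only).
H1, s, Vt = np.linalg.svd(G.reshape(3, -1), full_matrices=False)
r1 = int(np.sum(s > 1e-12 * s[0]))
H1 = H1[:, :r1]                                   # H_1 as a 3 x r1 matrix
M = (np.diag(s[:r1]) @ Vt[:r1]).reshape(r1 * 4, 2)
H2, s2, Vt2 = np.linalg.svd(M, full_matrices=False)
r2 = int(np.sum(s2 > 1e-12 * s2[0]))
H2 = H2[:, :r2].reshape(r1, 4, r2)
H3 = np.diag(s2[:r2]) @ Vt2[:r2]                  # H_3 as an r2 x 2 matrix

# Lines 2-4: update each TT core with its factor matrix, H_j x_2 A_j.
T1 = A[0] @ H1                                    # (1, n1, r1) up to reshapes
T2 = np.einsum('abc,jb->ajc', H2, A[1])
T3 = H3 @ A[2].T                                  # (r2, n3, 1) up to reshapes

Xhat = np.einsum('ia,ajb,bk->ijk', T1, T2, T3)
assert np.allclose(Xhat, X)
```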
Since the Tucker core $\mathcal{G}$ is much smaller than the original tensor $\mathcal{X}$, line~$1$ of~\cref{alg:3} is computationally cheaper than forming $\mathcal{X}$ explicitly and computing a TT decomposition of it. In addition, the parallel TT decomposition used in line~$1$ depends on whether a prescribed accuracy $0 < \epsilon < 1$ or a prescribed TT rank $\pmb{r}$ is given. Finally, we can guarantee the algorithm's performance by showing that ${\rm rank}(X_j) = {\rm rank}(G_j)$ for $1 \le j \le d-1$.
\begin{lemma} \label{lm:tucker_tt_ranks}
Suppose a tensor $\mathcal{X} \in \C^{n_1 \times \dots \times n_d}$ has a Tucker decomposition~\cref{eq:Tucker}, where $A_j \in \C^{n_j \times s_j}$ and has ON columns for $1 \le j \le d$. Then, for $1 \le j \le d-1$, we have ${\rm rank}(X_j) = {\rm rank}(G_j)$.
\end{lemma}
\begin{proof}
We get the following equation by reshaping~\cref{eq:Tucker}:
\[ X_j = (A_j \otimes \dots \otimes A_1)G_j(A_d \otimes \dots \otimes A_{j+1})^T. \]
Since all $A_j$'s have ON columns, the rank of $X_j$ equals the rank of $G_j$.
\end{proof}
If one is provided with a tensor $\mathcal{X}$ in TT format, then it is also possible to find an approximation $\tilde{\mathcal{X}}$ in orthogonal Tucker format. By explicit calculation, one finds that $X_j$, the $j$th unfolding of the tensor $\mathcal{X}$ for $1 \le j \le d$, can be computed with the unfoldings of the TT cores:
\begin{equation} \label{eq:flattening_core_j}
X_j = \prod_{i = 1}^{j-1} \left(I_{\prod_{k=i+1}^j n_k} \otimes (G_i)_2\right) (G_j)_2 (G_{j+1})_1 \prod_{i = j+2}^d \left( (G_i)_1 \otimes I_{\prod_{k = j+1}^{i-1} n_k}\right),
\end{equation}
where $I_n$ is the identity matrix of size $n \times n$, and $(G_i)_p$ is the $p$th unfolding of $\mathcal{G}_i$ for $1 \le p \le 2$. Then, rewriting~\cref{eq:flattening_core_j} gives $X_j = P_j (G_j)_2 Q_j$ and
\begin{align*}
P_j = I_{n_j} \otimes \left( \prod_{i = 1}^{j-1} \left(I_{\prod_{k=i+1}^{j-1} n_k} \otimes (G_i)_2\right) \right), \quad Q_j = \prod_{i = j+1}^d \left( (G_i)_1 \otimes I_{\prod_{k = j+1}^{i-1} n_k}\right).
\end{align*}
We find that
\begin{align}
{\rm reshape}(\mathcal{X}, \prod_{k = 1}^{j-1} n_k, n_j, \prod_{k = j+1}^d n_k) = \mathcal{G}_j &\times_1 \left( \prod_{i = 1}^{j-1} \left(I_{\prod_{k=i+1}^{j-1} n_k} \otimes (G_i)_2\right) \right) \nonumber \\
&\times_3 \left(\prod_{i = j+1}^d \left( (G_i)_1 \otimes I_{\prod_{k = j+1}^{i-1} n_k}\right)\right).\nonumber
\end{align}
We can therefore find relationships between matricizations of $\mathcal{X}$ and those of $\mathcal{G}_j$:
\[
X_{(j)} = (G_j)_{(2)} \left[\left(\prod_{i = j+1}^d (G_i)_1 \otimes I_{\prod_{k = j+1}^{i-1} n_k}\right) \otimes \left( \prod_{i = 1}^{j-1} I_{\prod_{k=i+1}^{j-1} n_k} \otimes (G_i)_2 \right)^T \right],
\]
where $X_{(j)}$ is the $j$th matricization of $\mathcal{X}$ and $(G_j)_{(2)}$ is the second matricization of $\mathcal{G}_j$. When $(G_1)_2, \dots, (G_{j-1})_2$ have linearly independent columns and $(G_{j+1})_1,\dots,(G_d)_1$ have linearly independent rows, the column spaces of $X_{(j)}$ and $(G_j)_{(2)}$ coincide. This criterion is satisfied when the TT rank of $\mathcal{X}$ is optimal. In this way, we can use any desired method, such as HOSVD or Tucker sketching, to find an orthogonal Tucker approximation of $\mathcal{X}$ simply by replacing $X_{(j)}$ with $(G_j)_{(2)}$. We summarize this procedure in~\cref{alg:4}, using a particular version of HOSVD (see~\cite[Alg. 1]{batselier2020meracle}).
\begin{algorithm}
\caption{TT2Tucker: Given a tensor in TT format, compute an approximant tensor in orthogonal Tucker format.}
\begin{algorithmic}[1]
\label{alg:4}
\Require {The TT cores $\mathcal{T}_1, \dots, \mathcal{T}_d$ of a tensor $\mathcal{X}$}
\Ensure {The TT cores $\mathcal{H}_1,\ldots,\mathcal{H}_d$ of the Tucker core $\mathcal{G}$, and factor matrices $A_1,\ldots,A_d$ of $\tilde{\mathcal{X}}$}
\For {$1 \le j \le d$}
\State Compute $A_j$ with ON columns that approximates the column space of $(T_j)_{(2)}$.
\State Calculate $\mathcal{H}_j = \mathcal{T}_j \times_2 A_j^*$.
\EndFor
\end{algorithmic}
\end{algorithm}
This algorithm is parallelizable since the TT cores of $\mathcal{X}$ are treated independently. Users need to prescribe either a desired accuracy or a multilinear rank to discover the orthonormal bases of the column spaces in line 2. The performance of the algorithm is guaranteed by the HOSVD error bound if we use the SVD in line 2, or by~\cite[Thm. 5.1]{sun2020low} if we use sketching. Compared to HOSVD or Tucker sketching applied to the full tensor,~\cref{alg:4} is faster because it works with the known TT cores. As a result, memory and computational costs can be significantly reduced.
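A compact NumPy illustration of~\cref{alg:4} follows (our sketch, with hypothetical TT cores and truncated SVDs standing in for line 2): each core is processed independently, and contracting the resulting Tucker core with the factor matrices reproduces $\mathcal{X}$.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical TT format with cores T_j of shape (r_{j-1}, n_j, r_j).
Ts = [rng.standard_normal((1, 5, 2)), rng.standard_normal((2, 6, 3)),
      rng.standard_normal((3, 4, 1))]
X = np.einsum('aib,bjc,ckd->ijk', *Ts)

# Lines 2-3 of Algorithm 4, applied to each TT core independently.
factors, Hs = [], []
for T in Ts:
    r0, n, r1 = T.shape
    M = np.transpose(T, (1, 0, 2)).reshape(n, r0 * r1)   # (T_j)_{(2)}
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    k = int(np.sum(s > 1e-12 * s[0]))                    # numerical rank
    A = U[:, :k]
    factors.append(A)
    Hs.append(np.einsum('aib,ij->ajb', T, A.conj()))     # H_j = T_j x_2 A_j^*

# The Tucker core stays in TT format as (H_1, ..., H_d); reconstruct to check.
Gcore = np.einsum('aib,bjc,ckd->ijk', *Hs)
Xhat = np.einsum('abc,ia,jb,kc->ijk', Gcore, *factors)
assert np.allclose(Xhat, X)
```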
\section{Complexity Analysis and Numerical Examples} \label{NumericalExamples}
In this section, we model the computational and spatial cost of the proposed methods for TT decomposition, and compare these models with numerical experiments\footnote{For codes, see \url{https://github.com/SidShi/Parallel\_TT\_sketching}}. For simplicity, we assume that the tensor $\mathcal{X}$ of dimension $d$ is ``square,'' meaning that $n_i = n$ for all $1 \leq i \leq d$ and $r_i = r$ for all $ 1 \leq i \leq d-1$.
We focus on the two-sided sketching methods PSTT2 and PSTT2-onepass defined at the end of~\cref{sec:TTsketching}, since both the SVD-based and the one-sided PSTT algorithms have poor spatial complexity. In addition, the SVD-based algorithm is much slower than its sketching counterparts. As a baseline, we compare the results to a modified version of Algorithm 5.1 in~\cite{che2019randomized}, which is referred to in this manuscript as Serial Streaming TT Sketching (SSTT).
In~\cref{sec:Computational_Details}, we discuss the computational environment and software used to implement our methods. Next, in~\cref{sec:Parallel_Tensor_Sketching}, we discuss how PSTT2, PSTT2-onepass, and SSTT are modified to run in a distributed-memory environment. In~\cref{sec:Memory_Complexity}, we discuss the improvements of PSTT2 and PSTT2-onepass over previous serial algorithms in terms of spatial complexity. Finally, in~\cref{sec:Time_Complexity}, we discuss the time complexity of these algorithms. To illustrate the practicality of our algorithms, we show numerical experiments in~\cref{sec:Memory_Complexity} and~\cref{sec:Time_Complexity}.
\subsection{Computational Details} \label{sec:Computational_Details}
All experiments are performed in C, using the OpenMPI implementation of MPI for parallelization. All subroutines take advantage of LAPACKE and BLAS to vectorize large linear algebra operations, such as matrix-matrix multiplication and QR factorization. Experiments are performed on a machine with eight 12-core compute nodes, each consisting of two Intel Xeon E5-2620 v3 processors and 32 GB of memory. For each trial, we measure the relative Frobenius error of the TT approximation, and achieve $\epsilon < 10^{-10}$ (see~\cref{eq:FrobeniusNorm}). For our experiments, we assume that the ranks are known a priori. Nevertheless, when they are not known, one can use an adaptive algorithm such as Algorithm 5.2 in~\cite{che2019randomized}.
To measure the memory improvements of the two-sided methods, we compare the total allocated memory using the gperftools implementation of TCMalloc. In particular, the memory is measured on each core and then averaged. Transient memory allocations by MPI are ignored in the overall memory measurement.
Throughout the following sections, we refer to some standard MPI functions to describe the parallel algorithm. These primarily include
\begin{itemize} [leftmargin=*,noitemsep]
\item \textit{Send:} the operation of sending some array of memory from one core to another.
\item \textit{Receive:} the operation of receiving the memory sent by \textit{send}.
\item \textit{Reduce:} the operation of summing matrices from a group of cores, used to summarize information gained by individual cores.
\end{itemize}
To avoid confusion in the following sections, we use `core' to refer to a single core of the CPU and `TT core' to refer to a ``train" in the TT format.
\subsection{Parallel Tensor Sketching} \label{sec:Parallel_Tensor_Sketching}
We first discuss the process of parallelizing SSTT, PSTT2, and PSTT2-onepass. We focus on the parallel sketching step, which is responsible for most of the time and storage. The predominance of the sketching step is emblematic of the ``compress-then-combine'' approach, as most of the computational work happens when compressing. The other steps needed for a fully parallel algorithm are discussed at the end of the section.
The parallel algorithms have a structure that is reminiscent of dense matrix-matrix multiplication algorithms such as Cannon's algorithm~\cite{cannon1969cellular} or SUMMA~\cite{vandegeijn1997summa}. These algorithms perform the operation $C = AB$ by dividing $A$, $B$, and $C$ into submatrices and distributing these submatrices across multiple cores. In this way, the overall spatial complexity is reduced.
For the parallel implementation of the TT sketching algorithms, a natural extension is to partition an unfolding $X_i$ into submatrices, and then proceed with standard matrix-matrix multiplications for the sketching step (see~\cref{alg:2})
\begin{equation}
\label{eq:easy_sketch}
S_i = X_i \Phi_i.
\end{equation}
However, a computational hurdle with this approach is that we need to sketch multiple unfoldings during the same computational step, since we want as few passes over the tensor as possible. As such, we aim for a guarantee that if $M$ is a submatrix of $X_i$, then some reshaping of $M$ is also a submatrix of $X_{i'}$ for some $i\neq i'$. In this way, parallel linear algebra algorithms work equally well for all unfoldings.
For this reason, we introduce the notion of a \textit{sub-tensor}, which is a multilinear generalization of the submatrix. In each dimension $i$, we partition the index range $1\leq \ell_i\leq n$ into $P_i$ equal ``chunks'' (or nearly equal chunks when $P_i$ does not divide $n$). In MATLAB notation, $\mathcal{Y}_j$ is a sub-tensor of $\mathcal{X}$ if
\[
\mathcal{Y}_j = \mathcal{X}\left(1 + \frac{n(j_1-1)}{P_1}:\frac{nj_1}{P_1}, 1+\frac{n(j_2-1)}{P_2}:\frac{nj_2}{P_2}, \dots, 1+\frac{n(j_d-1)}{P_d}:\frac{nj_d}{P_d} \right),
\]
where $\mathbf{j} = (j_1, \dots, j_d)$ is a multi-index specifying the target sub-tensor. Defining $P = \prod_{k = 1}^{d} P_k$, it is clear that the memory needed to store $\mathcal{Y}_j$ is a factor of $1/P$ smaller than the memory needed to store $\mathcal{X}$. We also notice that any unfolding of $\mathcal{Y}_j$ is a submatrix of an unfolding of $\mathcal{X}$, although those submatrices are not necessarily contiguous. For notational simplicity, we also use the vectorized index notation $j = 1 + \sum_{k = 1}^{d} (j_k-1)\prod_{\ell=1}^{k-1} P_\ell$ for $1 \leq j \leq P$.
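Sub-tensor extraction and the submatrix property can be demonstrated directly in NumPy-style indexing (hypothetical sizes and partition counts):

```python
import numpy as np

n, d = 6, 3
P = (2, 1, 3)                     # hypothetical partition counts per dimension
X = np.arange(n ** d, dtype=float).reshape(n, n, n)

def sub_tensor(X, j):
    """Extract the sub-tensor for the 1-based chunk multi-index j = (j_1, ..., j_d)."""
    slices = tuple(slice(n * (jk - 1) // Pk, n * jk // Pk)
                   for jk, Pk in zip(j, P))
    return X[slices]

Y = sub_tensor(X, (2, 1, 3))
assert Y.shape == (3, 6, 2)       # each chunk holds n/P_k indices per mode
# An unfolding of Y is a (non-contiguous) submatrix of the same unfolding of X:
rows = np.arange(3, 6)            # mode-1 indices kept in Y
cols = [j2 * n + j3 for j2 in range(6) for j3 in range(4, 6)]
assert np.array_equal(Y.reshape(3, -1), X.reshape(6, -1)[np.ix_(rows, cols)])
```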
The sub-tensor gives an efficient method to store one part of the multiplication in~\cref{eq:easy_sketch}. To be specific, the matrix $\Phi_i$ can be stored in $d$ or fewer matrices of size $\mathcal{O}(nr)$ via the Khatri-Rao DRMs in~\cref{eq:sketch_orig}. This is a small enough cost that each core can store every $\Phi_i$. As a result, the only thing we need to distribute is the ``sketch'' $S_i$. In fact, the matrices $S_i$ dominate the memory complexity of PSTT2 and PSTT2-onepass (see~\cref{sec:Memory_Complexity}), and are therefore important to distribute. We distribute them by splitting $S_i$ into \textit{sub-sketches} $S_{i,k}$, with the column sub-sketches defined by
\begin{equation*}
S_{i,k} =
{\rm reshape}(S_i, \underbrace{n, \dots, n}_{i}, r)
\left(1 + \frac{n(k_1 - 1)}{P_{1}}:\frac{nk_1}{P_{1}}, \dots, 1+\frac{n(k_{i}-1)}{P_{i}}:\frac{nk_i}{P_{i}}, : \right).
\end{equation*}
We note that sketches for finding row spaces are performed by simply using $X_j^T$ instead of $X_j$ in \cref{eq:easy_sketch}. Row sub-sketches and the other necessary steps can be immediately derived from the same reasoning.
Using the notions of sub-tensors and sub-sketches, we can expand \cref{eq:easy_sketch} into an explicit form with the Khatri-Rao DRMs $\Psi_\ell$ as $S_{i, k} = \sum_{j_{i+1}=1}^{P_{i+1}} \cdots \sum_{j_d=1}^{P_d} S_{i,k,j}$, where $\mathbf{j} = (k_1, \dots, k_i, j_{i+1}, \dots, j_d)$ and
\begin{align}
S_{i, k, j} =& \mathrm{reshape}\left(\mathcal{Y}_{j}, \frac{n}{P_1}, \dots, \frac{n}{P_i}, \prod_{\ell=i+1}^{d} \frac{n}{P_\ell} \right) \times_{i+1}\\\nonumber
&\left( \Psi_{d}^{(i)}\left(1+\frac{n(j_d-1)}{P_d}:\frac{nj_d}{P_d},:\right) \odot \cdots \odot \Psi_{i+1}^{(i)} \left(1 + \frac{n(j_{i+1}-1)}{P_{i+1}}:\frac{nj_{i+1}}{P_{i+1}},:\right)\right)^{T},\label{eq:KR_sub_product}
\end{align}
where $S_{i,k,j}$ is the contribution of the sub-tensor $\mathcal{Y}_j$ to the sub-sketch $S_{i,k}.$
Now, we write down the parallel sketching procedure, which is a subroutine (see~\cref{alg:parsketch}) for PSTT2, PSTT2-onepass, and SSTT. For generality, we assume that the input to each algorithm is some function $f$ that takes as input a tensor index $(\ell_1, \dots, \ell_d)$ and outputs the tensor entry $\mathcal{X}_{\ell_1, \dots, \ell_d}$. This function allows us to load parts of the tensor into memory without necessarily loading the full tensor. We also assume that the time $\tau_f$ it takes to load a single element does not depend on the element or on the number of elements loaded concurrently. In practice, $f$ can be a function that reads data from a file or evaluates a known function.
\Cref{alg:parsketch} outputs full sketches $S_i$ for multiple values of $i$, and each sketch is stored as sub-sketches distributed across all cores. If there are $C$ cores, then each core is responsible for streaming approximately $P/C$ sub-tensors and stores roughly a fraction $1/C$ of the sub-sketches $S_{i,k}$. By default, we assume that the sub-sketches are distributed in ``column-major'' order, i.e., each core stores sub-sketches $S_{i,k}$ that are consecutive in $k$. To contribute a computed term $S_{i,k,j}$ to the core holding $S_{i,k}$, an additional send/receive communication step is necessary. This algorithm is convenient for PSTT2 and PSTT2-onepass, as all sub-sketches can be computed with a single stream of a given sub-tensor.
\begin{algorithm}
\caption{Find multiple sketches of a tensor in parallel.}
\begin{algorithmic}[1]
\label{alg:parsketch}
\Require {Tensor oracle $f$, sketch dimensions $\mathbf{i}$, Khatri-Rao DRMs $\Psi^{(k)}_i$}
\Ensure {Sketches $S_i$ for all $i\in\mathbf{i}$}
\State Initialize $S_{i,k}$ to zero for each $i\in\mathbf{i}$, distributed among the available cores
\ParFor {$1 \leq j \leq P$}
\State Load $\mathcal{Y}_j$ into memory via $f$
\For {$i$ in $\mathbf{i}$}
\State Compute $S_{i,k,j}$ via~\cref{eq:KR_sub_product}
\State Send $S_{i,k,j}$ to owner of $S_{i,k}$
\For {each $S_{i, k, j'}$ received}
\State Add $S_{i, k, j'}$ to $S_{i,k}$
\EndFor
\EndFor
\EndParFor
\end{algorithmic}
\end{algorithm}
In all three algorithms, the step after~\cref{alg:parsketch} is to find an ON basis for the column space of each assembled sketch $S_i$, except for the middle sketch of PSTT2-onepass. For this purpose, we use the skinny QR algorithm in~\cite{benson2013direct}, which is fast and has a low memory overhead in comparison to the sketching step. The remaining steps that need to be parallelized are:
\begin{itemize}[leftmargin=*,noitemsep]
\item \textbf{SSTT}: The first step is to find an ON basis for the column space of $X_1$ using~\cref{alg:parsketch} with $\mathbf{i}=\{1\}$ and skinny QR, and use this basis as the first TT core $\mathcal{G}_1$. Because $\mathcal{G}_1$ has only $nr$ entries, it can be stored on each core to reduce the overall required communication. Then, we compute $Z = \mathcal{G}_1^T X_1$ and obtain sub-sketches of $Z$ with~\cref{alg:parsketch}. These sketching and multiplication steps are repeated until all TT cores are obtained, and we distribute all sub-sketches among the cores. Overall, the second pass of streaming takes a significant amount of time, and the largest storage contribution comes from storing the first calculated $Z$.
\item \textbf{PSTT2}: To obtain all but the middle TT core, it is necessary to multiply ON bases of column/row spaces against each other via~\cref{eq:core}, which can be easily rewritten in terms of sub-tensors. Since the ON bases are much smaller than the tensor itself, communicating sub-sketches avoids high communication costs. In the end, the middle TT core is obtained via another streaming loop to perform~\cref{eq:PSTT2_final_core}, which adds significant computational time to the overall algorithm.
\item \textbf{PSTT2-onepass}: All TT cores but the middle one are obtained as in PSTT2. Then, the middle TT core is computed with two matrix-matrix multiplications and a small least-squares problem. These are rewritten in terms of sub-sketches, and are cheap as they use already compressed data.
\end{itemize}
\subsection{Memory Complexity} \label{sec:Memory_Complexity}
In this section, we analyze the memory complexity of our algorithms. We give estimates of the memory costs of a sub-tensor, a TT representation, and the three algorithms SSTT, PSTT2, and PSTT2-onepass. The asymptotic costs are then compared to measured total memory allocation per core from numerical experiments, showing PSTT2 and PSTT2-onepass have lower overall memory requirements, especially for high-dimensional tensors.
Throughout the section, we focus on trials using the Hilbert tensor, defined as
\begin{equation*}
\mathcal{X}_{i_1,\dots,i_d} = \frac{1}{1-d+i_1+\dots+i_d}, \quad 1 \le i_j \le n_j, \quad 1 \le j \le d.
\end{equation*}
It is known that this tensor can be accurately approximated by a tensor of low numerical TT rank, and the TT ranks can be estimated a priori~\cite{shi2021compressibility}. Moreover, the total memory allocated does not depend on the actual values of the tensor, but only on the dimension $d$, sizes $\mathbf{n}$, and ranks $\mathbf{r}$. Therefore, even though the Hilbert tensor is an artificial example, the memory results generalize to real-world tensors of similar sizes and ranks. We report numerical experiments for $d=3$, $d=5$, and $d=9$ Hilbert tensors with sizes $\mathbf{n}$, ranks $\mathbf{r}$, and partitions $\mathbf{P}$ given in~\cref{tab:1}.
\begin{table}
\centering
\begin{tabular}{c|c|c|c}
$d$ & 3 & 5 & 9\\
\hline
$\mathbf{n}$ & $960,960,960$ & $96,96,96,96,96$ & $12,12,12,12,12,12,12,12,12$\\
$\mathbf{r}$ & $1,25,25,1$ & $1,17,18,18,17,1$ & $1,12,18,18,19,19,18,18,12,1$\\
$\mathbf{P}$ & $96,1,96$ & $96,1,1,1,96$ & $12,6,1,1,1,1,1,6,12$\\
tensor size & $6.59$ GB & $60.8$ GB & $38.4$ GB \\
TT size & $4.94$ MB & $0.710$ MB & $0.197$ MB\\
sub-tensor size & $0.769$ MB & $6.79$ MB & $7.60$ MB\\
\end{tabular}
\caption{\label{tab:1} Dimensions and memory sizes of the three tensors used for profiling. The `tensor size' row is calculated with the formula $8 n^d / 2^{30}$, and the `TT size' and `sub-tensor size' rows are measured via heap profiling. One can confirm that the sub-tensor sizes are nearly a factor of $1/P$ smaller than the tensor size, with discrepancies due to auxiliary information stored in the sub-tensor data structure.}
\end{table}
For each core, the number of stored entries of a sub-tensor and a TT format are
\[
M_{\rm sub-tensor} = n^d/P, \qquad M_{\rm TT} = (d-2) r^2 n + 2 r n,
\]
where we use capital `$M$'s to denote spatial costs. The actual memory required is obtained by multiplying these quantities by the size of a single entry. We note that neither of the above costs depends explicitly on the number of cores $C$, assuming $P>C$. The memory requirements for the three Hilbert examples are given in~\cref{tab:1}. We see that while the tensors have sizes in the gigabyte range, each core only stores on the order of megabytes for the given values of $\mathbf{P}$.
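As a sanity check on these cost models, the reported sizes in~\cref{tab:1} can be reproduced from the dimensions and ranks alone. The Python snippet below verifies the `tensor size' and `TT size' rows, assuming 8-byte double-precision entries and using the exact entry count $\sum_{j} r_{j-1}\, n\, r_j$ (with boundary ranks $1$) rather than the uniform-rank model $M_{\rm TT}$.

```python
# Reproduce the `tensor size' (GB) and `TT size' (MB) rows of Table 1.
cases = {  # d: (n, TT ranks including boundary 1s, reported GB, reported MB)
    3: (960, (1, 25, 25, 1), 6.59, 4.94),
    5: (96, (1, 17, 18, 18, 17, 1), 60.8, 0.710),
    9: (12, (1, 12, 18, 18, 19, 19, 18, 18, 12, 1), 38.4, 0.197),
}
for d, (n, r, gb, mb) in cases.items():
    assert abs(8 * n**d / 2**30 - gb) < 0.06                 # tensor size
    tt_entries = sum(r[j] * n * r[j + 1] for j in range(d))  # sum r_{j-1} n r_j
    assert abs(8 * tt_entries / 2**20 - mb) < 0.01           # TT size
```

The small discrepancies from the heap-profiled values come from rounding in the table and from auxiliary data structures.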
One can then express the memory costs of the individual algorithms as:
\begin{itemize}[leftmargin=*,noitemsep]
\item \textbf{SSTT}: As mentioned in~\cref{sec:Parallel_Tensor_Sketching}, the predominant memory cost comes from storing the first intermediate $Z = \mathcal{G}_1^T X_1$, which has $r n^{d-1}$ entries. We distribute these entries among all cores, so the asymptotic memory cost of SSTT is
\begin{equation}
\label{eq:M_SSTT}
M_{\rm SSTT} = \mathcal{O}(r n^{d-1} / C).
\end{equation}
\item \textbf{PSTT2}: The only major storage cost is to store the sketches. Each column sketch has a per-core memory cost of
\begin{equation*}
M_{S_i} = n^i r/C + (d-i) n r,
\end{equation*}
where the two terms are the memory needed for $S_i$ and for the random Khatri-Rao DRMs, respectively. Note that the above expression holds only when $\mathbf{P}$ is large enough in the appropriate dimensions so that $S_i$ can be split into $C$ different sub-sketches. In practice, we choose $\mathbf{P}$ so that the divisions are concentrated near $P_1$ and $P_d$, as in~\cref{tab:1}. This allows all column and row sketches to be distributed among the cores. We choose the middle index $d_* = \ceil{d/2}$ so that the storage is balanced between row and column sketches, and thus the overall storage of PSTT2 is
\begin{equation}
\label{eq:M_PSTT2}
M_{\rm PSTT2} = \mathcal{O}\left(r n^{\floor{d/2}}/C + d r n \right).
\end{equation}
When $d>4$ and $C \ll n$, the first term dominates the second, improving upon SSTT by a factor of $n^{\ceil{d/2}-1}$. When $d = 3$, the second term -- the storage cost of the DRMs -- exceeds that of the actual sketch matrices. Nevertheless, this complexity is better than that of SSTT, which depends on $n^2$.
\item \textbf{PSTT2-onepass}: The storage cost of PSTT2-onepass is different from that of PSTT2 only because of the final middle sketch. Hence, the complexity is
\begin{align}
\label{eq:M_PSTT2onepass}
M_{\rm PSTT2-onepass} = \mathcal{O}\left(r n^{\ceil{d/2}}/C\right).
\end{align}
When $d$ is even, this spatial complexity is asymptotically the same as~\cref{eq:M_PSTT2}. When $d$ is odd, however, the two complexities differ. This is most apparent for $d=3$, when~\cref{eq:M_PSTT2onepass} matches~\cref{eq:M_SSTT}, leading to neither algorithm being better in storage.
\end{itemize}
In practice, all our algorithms require some extra memory for the communicated quantities. However, this cost depends on $1/P$, which is by assumption less than $1/C$, and is thus absorbed in our asymptotic statements.
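These per-core memory formulas are simple enough to tabulate in code. The following is an illustrative sketch; the parameter values are hypothetical, not the Hilbert examples of~\cref{tab:1}:

```python
def mem_sub_tensor(n, d, P):
    """Entries of one sub-tensor when an n^d tensor is split into P blocks."""
    return n**d / P

def mem_tt(n, d, r):
    """Entries of a TT format with uniform rank r: (d-2) r^2 n + 2 r n."""
    return (d - 2) * r**2 * n + 2 * r * n

def mem_sstt(n, d, r, C):
    """Leading-order per-core cost of SSTT: r n^(d-1) / C."""
    return r * n**(d - 1) / C

def mem_pstt2(n, d, r, C):
    """Per-core cost of PSTT2: r n^floor(d/2) / C + d r n."""
    return r * n**(d // 2) / C + d * r * n

def mem_pstt2_onepass(n, d, r, C):
    """Per-core cost of PSTT2-onepass: r n^ceil(d/2) / C."""
    return r * n**((d + 1) // 2) / C

# Hypothetical example: n = 100, d = 5, r = 10, C = 10
print(mem_sstt(100, 5, 10, 10))           # 100000000.0 entries per core
print(mem_pstt2(100, 5, 10, 10))          # 15000.0 entries per core
print(mem_pstt2_onepass(100, 5, 10, 10))  # 1000000.0 entries per core
```

For these (made-up) values, PSTT2 improves on SSTT by roughly $n^{\ceil{d/2}-1} = 10^4$, matching the asymptotic claim above.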
\begin{figure}
\centering
\includegraphics[width=0.32\textwidth]{heap_tensorype_10.png}
\includegraphics[width=0.32\textwidth]{heap_tensorype_8.png}
\includegraphics[width=0.32\textwidth]{heap_tensorype_9.png}
\caption{Measured total memory allocated per-core for (left) $d=3$, (right) $d=5$, and (bottom) $d=9$. The bottom plot is missing a single data point for SSTT with $C=12$ because the $d=9$ tensor is too large to store in memory. See~\cref{tab:1} for additional details of each experiment.}
\label{fig:heap}
\end{figure}
In~\cref{fig:heap}, we see the total memory allocated per-core as a function of the total number of cores $C$ for the three different Hilbert tensor TT computations. For each algorithm and each Hilbert tensor, we use $C=12$, $C=24$, $C=48$, and $C=96$. We see that PSTT2 reliably has the smallest memory overhead, and the improvement over SSTT increases as the dimension increases. This agrees with the asymptotic expectations of~\cref{eq:M_PSTT2} and~\cref{eq:M_SSTT}. We see that PSTT2-onepass also improves upon SSTT for all but the $d=3$ trial, where the two asymptotic spatial complexities agree. Moreover, the $d=9,$ $C=12$ SSTT trial data is missing due to an out-of-memory error. This occurs because the first sketch has $r_1=n_1$, leading to no compression on the first step of SSTT. As a result, the first $Z$ calculated needs to store the full tensor of 38.4 GB in memory, which is over the 32 GB limit of a single node.
As a function of $C$, the PSTT2 memory profiles are relatively flat due to the importance of storing sketch matrices. Specifically, PSTT2 storage costs are more influenced by the cost of storing DRMs and other similarly sized allocations, especially for small $d$. These costs are constant per-core, whereas the sub-sketch storage costs are distributed evenly across the cores as much as possible.
\subsection{Time Complexity} \label{sec:Time_Complexity}
Finally, we move to the discussion of time complexity. The major cost of our algorithms is streaming via the function $f$, which we assume to be proportional to the size of the tensor. Because streaming is evenly split amongst the cores, the complexity to stream the tensor once is
\begin{equation}
\label{eq:Tstream}
T_{\rm stream} = \tau_f n^d/C,
\end{equation}
where capital `$T$'s are used to designate time complexities. From numerical experiments, we find that~\cref{eq:Tstream} takes a significant portion of the computation time in the $C=1$ setting with no communication. We also want to remark that PSTT2-onepass has half the cost of streaming since it only streams the tensor once.
Even when the function evaluation is cheap, the time to multiply the DRMs against the sub-tensors can still be expensive. There are $d-1$ sketches for PSTT2, each with a time cost of $\mathcal{O}(r n^d/C)$, leading to a total sketching time of $\mathcal{O}(d r n^d/C)$. PSTT2-onepass has one additional sketch to compute, but the complexity is the same as $d$ becomes large. Comparatively, SSTT has a leading order complexity of $\mathcal{O}(r n^d/C)$, as only the first sketch and the calculation of $Z$ involve multiplications with the full tensor.
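These leading-order time estimates can be compared in a short sketch (illustrative parameters; $\tau_f$ denotes the per-entry evaluation time):

```python
def t_stream(n, d, C, tau_f=1.0):
    """Time to stream all n^d entries once, split evenly over C cores."""
    return tau_f * n**d / C

def t_sketch_sstt(n, d, r, C):
    """SSTT: only the first sketch multiplies against the full tensor."""
    return r * n**d / C

def t_sketch_pstt2(n, d, r, C):
    """PSTT2: d-1 sketches, each O(r n^d / C)."""
    return (d - 1) * r * n**d / C

n, d, r, C = 50, 5, 8, 16
# PSTT2-onepass streams the tensor once instead of twice:
one_pass_stream = t_stream(n, d, C)
two_pass_stream = 2 * t_stream(n, d, C)
```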
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{t_tensorype_8.png}
\includegraphics[width=0.45\textwidth]{t_tensorype_9.png}
\caption{Strong scaling for the Hilbert tensor (left) $d=5$, and (right) $d=9$. On each plot, a black line with slope $-1$ is shown for comparison (see~\cref{tab:1} for details).}
\label{fig:time}
\end{figure}
\Cref{fig:time} shows the strong scaling timing results for the same $d=5$ and $d=9$ Hilbert tensors in the memory experiments (see~\cref{tab:1}). The slopes of the strong scaling times are near $-1$ for all algorithms and both values of $d$, i.e.~the time nearly scales as $1/C$. This agrees with both~\cref{eq:Tstream} and the scaling of multiplying against the tensor. We see that PSTT2 and SSTT take comparable amounts of time for $d=9$ and PSTT2 is moderately faster for $d=5$. However, the speedup is only by a constant factor, and is difficult to predict. In comparison, PSTT2-onepass is significantly faster than the other two algorithms, due to the single streaming loop.
The time complexity of communication is more complicated to model. From inspection of~\cref{alg:parsketch}, we see that the multi-sketching step can have $d P$ sends and receives at worst. However, because sometimes the core that calculates $S_{i,k,j}$ also stores $S_{i,k}$, not every communication is necessary. For example, in SSTT, the first sketch can be performed with zero communication when the cores stream columns of the first unfolding. For PSTT2 and PSTT2-onepass, this strategy of streaming columns of unfoldings can also eliminate the communication for column space sketches, but the cost of row sketches remains. In conclusion, as the worst-case communication depends on the partitions $\mathbf{P}$, we can think of $\mathbf{P}$ as an appropriate balance between good memory performance and timing performance.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{t_tensorype_10.png}
\includegraphics[width=0.45\textwidth]{t_tensorype_7.png}
\caption{Strong scaling for the $d = 3$ Hilbert tensor (left) and $d=3$ Gaussian bumps tensor (right). On each plot, a black line with slope $-1$ is shown for comparison (see~\cref{tab:1} for details).}
\label{fig:time2}
\end{figure}
\Cref{fig:time2} (left) shows that our parallel algorithms suffer from communication costs when the tensor size is small. For this $d=3$ Hilbert tensor, SSTT is significantly faster than PSTT2 and PSTT2-onepass. From~\cref{tab:1}, we see that the sub-tensors streamed are much smaller, leading to a smaller contribution of the total time streaming relative to communication. However, we emphasize that these communications are still relatively unimportant when the evaluation of a tensor element is expensive, as can be seen in an experiment with a Gaussian mixture model, with the function defined as
\begin{equation*}
\mathcal{X}_{i_1,i_2,i_3} = \sum_{j = 1}^N e^{-\gamma \left((x_1-\xi_j)^2 + (x_2-\eta_j)^2 + (x_3 - \zeta_j)^2
\right)}, \qquad x_j = \frac{2i_j}{n_j}-1,
\end{equation*}
where $\xi_j$, $\eta_j$, and $\zeta_j$ are the center coordinates of the $j$th Gaussian and $\gamma$ controls the width of the Gaussians.~\Cref{fig:time2} (right) shows the result when $N=100$, $\gamma=10$, and the centers are uniformly random numbers between $-1$ and $1$. Because each calculation of a tensor entry takes the evaluation of $N$ Gaussians, the evaluation is significantly longer than that of a Hilbert tensor. We see that although PSTT2 is still slower than SSTT, PSTT2-onepass becomes the fastest option since avoiding the second set of function evaluations gains more than the added communications.
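A serial version of this entry function is easy to write down; the sketch below is illustrative, using parameters matching the experiment ($N=100$, $\gamma=10$, uniform random centers):

```python
import numpy as np

def gaussian_bumps_entry(idx, n, centers, gamma):
    """X_{i1,i2,i3} = sum_j exp(-gamma * ||x - c_j||^2), with x_k = 2 i_k / n_k - 1."""
    x = 2.0 * np.asarray(idx, dtype=float) / np.asarray(n, dtype=float) - 1.0
    d2 = np.sum((centers - x) ** 2, axis=1)  # squared distances to the N centers
    return float(np.sum(np.exp(-gamma * d2)))

rng = np.random.default_rng(0)
centers = rng.uniform(-1.0, 1.0, size=(100, 3))  # N = 100 random centers
val = gaussian_bumps_entry((10, 20, 30), (100, 100, 100), centers, gamma=10.0)
```

Each entry costs $N$ exponential evaluations, which is why streaming this tensor is much slower than streaming a Hilbert tensor.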
\section{Solve Sylvester tensor equations in TT format} \label{sec:TTsylv}
Many tensors in practice are known implicitly as solutions of tensor equations. For example, the discretized solution of a multivariable PDE might satisfy a tensor equation.
In this section, we focus on computing an approximation $\tilde{\mathcal{X}}$ in TT format of a tensor $\mathcal{X} \in \C^{n_1 \times \dots \times n_d}$ that satisfies the Sylvester tensor equation in~\cref{eq:TensorDisplacement}. This type of algebraic relation appears when discretizing Poisson's equation of a tensor-product domain with either a finite difference scheme~\cite{leveque2007finite} or a spectral method~\cite{shi2021compressibility}. We describe an algorithm that solves the 3D Sylvester tensor equation of the form
\begin{align}
\mathcal{X} \times_1 A + \mathcal{X} \times_2 B + \mathcal{X} &\times_3 C = \mathcal{F}, \quad \mathcal{F} \in \C^{n_1 \times n_2 \times n_3}, \nonumber \\
A, B, C \ {\rm normal}, \quad A \in \C^{n_1 \times n_1}&, \quad B \in \C^{n_2 \times n_2}, \quad C \in \C^{n_3 \times n_3}. \label{3d:sylv}
\end{align}
To ensure a unique solution, we assume that $\lambda_i(A) + \lambda_j(B) + \lambda_k(C) \neq 0$, for $1 \le i \le n_1$, $1 \le j \le n_2$, and $1 \le k \le n_3$,
where $\lambda_i(A)$ is an eigenvalue of $A$~\cite{simoncini2016computational}. We further suppose that $\mathcal{F}$ is given in the TT format with TT rank $(1,r_1,r_2,1)$ and cores $\mathcal{G}_1, \mathcal{G}_2$, and $\mathcal{G}_3$. Our goal is to compute an approximate solution $\tilde{\mathcal{X}}$ to~\cref{3d:sylv} in the TT format.
Since TT cores of a tensor are analogues of factor matrices in a matrix decomposition, our key idea is to convert~\cref{3d:sylv} into several Sylvester matrix equations and use fADI. If we reshape $\mathcal{X}$ and $\mathcal{F}$ to their 1st unfolding, respectively, then~\cref{3d:sylv} becomes
\begin{equation} \label{3d:sylv_fla1}
AX_1 + X_1(I \otimes B + C \otimes I)^T = F_1 = G_1(G_2)_1(G_3 \otimes I),
\end{equation}
where $G_1$ is the 1st unfolding of $\mathcal{G}_1$, $G_3$ is the 2nd unfolding of $\mathcal{G}_3$, and $(G_2)_1$ is the 1st unfolding of $\mathcal{G}_2$. Similarly, we get another Sylvester matrix equation by reshaping $\mathcal{X}$ and $\mathcal{F}$ to their 2nd unfolding:
\begin{equation} \label{3d:sylv_fla2}
(I \otimes A + B \otimes I) X_2 + X_2C^T = F_2 = (I \otimes G_1)(G_2)_2G_3,
\end{equation}
where $(G_2)_2$ is the 2nd unfolding of $\mathcal{G}_2$.
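Both unfolding identities can be verified numerically. The following sketch assumes column-major (Fortran-order) unfoldings; under a different ordering convention the Kronecker factors are permuted:

```python
import numpy as np

n1, n2, n3, r1, r2 = 4, 5, 6, 2, 3
rng = np.random.default_rng(1)
G1 = rng.standard_normal((n1, r1))      # first TT core (already a matrix)
G2 = rng.standard_normal((r1, n2, r2))  # middle TT core
G3 = rng.standard_normal((r2, n3))      # 2nd unfolding of the last TT core

# Assemble F from its cores: F[i,j,k] = sum_{a,b} G1[i,a] G2[a,j,b] G3[b,k]
F = np.einsum('ia,ajb,bk->ijk', G1, G2, G3)

# First identity: F_1 = G_1 (G_2)_1 (G_3 kron I)
F1 = F.reshape(n1, n2 * n3, order='F')
G2_1 = G2.reshape(r1, n2 * r2, order='F')
rhs1 = G1 @ G2_1 @ np.kron(G3, np.eye(n2))

# Second identity: F_2 = (I kron G_1) (G_2)_2 G_3
F2 = F.reshape(n1 * n2, n3, order='F')
G2_2 = G2.reshape(r1 * n2, r2, order='F')
rhs2 = np.kron(np.eye(n2), G1) @ G2_2 @ G3
```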
Algorithm 4.1 in~\cite{shi2021compressibility} (ST21) describes a way to compute the solution of~\cref{3d:sylv} in TT format. It starts by computing the first TT core $U_1$ as an ON basis of the column space of $X_1$ from~\cref{3d:sylv_fla1}, and then finds the other two TT cores by solving:
\begin{equation} \label{3d:sylv_fla2v}
(I \otimes \tilde{A} + B \otimes I) Y + YC^T = (I \otimes \tilde{G}_1)(G_2)_2G_3,
\end{equation}
where $\tilde{A} = U_1^*AU_1$, and $\tilde{G}_1 = U_1^*G_1$.~\cref{3d:sylv_fla2v} is similar to~\cref{3d:sylv_fla2}, but~\cref{3d:sylv_fla2v} has two disadvantages: (1) parallelism in finding the TT cores is not exploited, as the two fADI loops are carried out sequentially; (2) $\tilde{A}$ is a dense matrix, so solving shifted linear systems with it may be more expensive than with $A$. Consequently, we seek an algorithm that, in certain scenarios, can solve~\cref{3d:sylv_fla1} and~\cref{3d:sylv_fla2} simultaneously and only solves shifted linear systems with $A, B$, and $C$.
With fADI, we can obtain $U_1$ and $U_2$, ON bases of the column space of $X_1$ and $X_2$, from~\cref{3d:sylv_fla1} and~\cref{3d:sylv_fla2} independently. In the meantime, the third TT core can be computed along with $U_2$. As a result,~\cref{thm:interlacing_d} allows us to compute all three TT cores with only one extra matrix-matrix multiplication.
In general, we need two sets of shift parameters to solve~\cref{3d:sylv_fla1} and~\cref{3d:sylv_fla2}. The Zolotarev number associated with~\cref{3d:sylv_fla1} is
\begin{equation} \label{zolo_fla1}
Z_k(\Lambda(A), \Lambda(-B)+\Lambda(-C)) := \inf_{r \in \mathcal{R}_{k,k}} \frac{\sup_{z \in \Lambda(A)} |r(z)|}{\inf_{z \in \Lambda(-B)+\Lambda(-C)} |r(z)|},\qquad k\geq 0,
\end{equation}
where `+' denotes the Minkowski sum of two sets. Similarly, the Zolotarev number corresponding to~\cref{3d:sylv_fla2} is
\begin{equation} \label{zolo_fla2}
Z_k(\Lambda(A)+\Lambda(B), \Lambda(-C)) := \inf_{r \in \mathcal{R}_{k,k}} \frac{\sup_{z \in \Lambda(A)+\Lambda(B)} |r(z)|}{\inf_{z \in \Lambda(-C)} |r(z)|},\qquad k\geq 0.
\end{equation}
We find that the number
\[
L_k(\Lambda(A), \Lambda(B), \Lambda(C)) := \inf_{r \in \mathcal{R}_{k,k}} \frac{\sup_{z \in \Lambda(A) \cup [\Lambda(A)+\Lambda(B)]} |r(z)|}{\inf_{z \in \Lambda(-C) \cup [\Lambda(-B)+\Lambda(-C)]} |r(z)|},\qquad k\geq 0,
\]
is an upper bound for both~\cref{zolo_fla1} and~\cref{zolo_fla2}, so that bounds in~\cref{zolo_fro} can be acquired for both $X_1$ and $X_2$:
\[
\|X_i-(X_i)_k\|_F \le L_k(\Lambda(A), \Lambda(B), \Lambda(C)) \|\mathcal{X}\|_F, \qquad i = 1,2.
\]
Therefore, we can choose the same shift parameters when we use fADI on~\cref{3d:sylv_fla1} and~\cref{3d:sylv_fla2}, which means $U_1$ and $U_2$ can be found in a single set of iterations.
Since we do not need $U_2$ in a low-rank format, we can recover $U_2$ column-by-column in a way introduced in ST21, by using the alternating direction implicit (ADI) method~\cite{benner2009adi} on reshapes of the columns. We summarize the 3D Sylvester equation solver in~\cref{alg:7}.
\begin{algorithm}
\caption{TT-fADI: Given a 3D Sylvester tensor equation~\cref{3d:sylv}, compute an approximate solution in TT format with three cores computed almost-simultaneously.}
\begin{algorithmic}[1]
\label{alg:7}
\Require {Matrices $A, B$, and $C$, TT cores $\mathcal{G}_1, \mathcal{G}_2$, and $\mathcal{G}_3$ and TT ranks $r_1$ and $r_2$ of $\mathcal{F}$, and desired accuracy $0 < \epsilon < 1$}
\Ensure {TT cores $\mathcal{H}_1, \mathcal{H}_2$, and $\mathcal{H}_3$ of an approximate solution $\tilde{\mathcal{X}}$}
\State Use the spectra of $A, B$, and $C$ to find shift parameter arrays $\pmb{p}$ and $\pmb{q}$ of length $\ell$.
\State Solve $(A-q_1I_{n_1})Z_1 = G_1$. Let $Z = Z_1$.
\State Solve $((I_{n_2} \otimes A + B \otimes I_{n_1})-q_1I_{n_1n_2})W_1 = (I_{n_2} \otimes G_1)(G_2)_2$. Let $W = W_1$.
\State Solve $(-C-\overline{p_1}I_{n_3})Y_1 = G_3^T$. Let $Y = Y_1$.
\State Let $D = (q_1-p_1)I_{r_2}$.
\For {$1 \le j \le \ell-1$}
\State Set $R_j = (q_{j+1}-p_j)Z_j$, $U_j = (q_{j+1}-p_j)W_j$, and $V_j = (\overline{p_{j+1}}-\overline{q_j})Y_j$.
\State Solve $(A-q_{j+1}I_{n_1})Z_{j+1} = R_j$. Set $Z_{j+1} = Z_{j+1}+Z_j$ and $Z = \begin{bmatrix} Z & Z_{j+1} \end{bmatrix}$.
\State Solve $((I_{n_2} \otimes A + B \otimes I_{n_1})-q_{j+1}I_{n_1n_2})W_{j+1} = U_j$. Set $W_{j+1} = W_{j+1}+W_j$ and $W = \begin{bmatrix} W & W_{j+1} \end{bmatrix}$.
\State Solve $(-C-\overline{p_{j+1}}I_{n_3})Y_{j+1} = V_j$. Set $Y_{j+1} = Y_{j+1}+Y_j$ and $Y = \begin{bmatrix} Y & Y_{j+1} \end{bmatrix}$.
\State Set $D = \begin{bmatrix} D & \\[5pt] & (q_{j+1}-p_{j+1})I_{r_2} \end{bmatrix}$.
\State Recompress $W, D$, and $Y$ to get $\|\tilde{W}\tilde{D}\tilde{Y}^*-WDY^*\| \le \epsilon \|WDY^*\|$.
\State Set $W = \tilde{W}, D = \tilde{D}$, and $Y = \tilde{Y}$, and $s_2$ to be the rank.
\EndFor
\State Compute a CPQR of $Z$ to obtain $U_1$ with ON cols and set $U_1 = U_1(:,1\!:\!s_1)$ if $U_1(s_1+1,s_1+1) \le \epsilon$.
\State Calculate $T = U_1^*{\rm reshape}(W,n_1,n_2s_2)$.
\State Set $\mathcal{H}_1 = U_1$, $\mathcal{H}_2 = {\rm reshape}(T, s_1, n_2, s_2)$, and $\mathcal{H}_3 = DY^*$.
\end{algorithmic}
\end{algorithm}
We demonstrate~\cref{alg:7} with a simple example. Consider the equation
\begin{equation} \label{sylv_sim_ex}
\mathcal{X} \times_1 A + \mathcal{X} \times_2 A + \mathcal{X} \times_3 A = \mathcal{F}, \quad A \in \R^{n \times n}, \ \mathcal{F} \in \R^{n \times n \times n},
\end{equation}
where $A$ is a diagonal matrix with diagonal elements $a_j \in \left[-1, -1/(30n) \right]$ for $1 \le j \le n$, and $\mathcal{F}$ has TT rank $(1,\floor{n/4},2,1)$ with all three TT cores consisting of i.i.d. uniform random numbers in $(0,1)$.~\Cref{fig:3dsylv_ex} shows the running time of three Sylvester equation solvers. The green line represents the algorithm ST21. The blue line represents a direct solver, which computes each element of $\mathcal{X}$ by $\mathcal{X}_{i,j,k} = \mathcal{F}_{i,j,k}/(a_i+a_j+a_k)$ for $1 \le i,j,k \le n$, and performs TT decomposition on $\mathcal{X}$. This algorithm has complexity $\mathcal{O}(n^3)$. The red line represents~\cref{alg:7}. We can see that when $n \ge 100$,~\cref{alg:7} is the fastest. With $n = 350$,~\cref{alg:7} is 4 times faster than ST21, and almost 6 times faster than the direct solver. The performance of ST21 is affected since $s_1$, the size of the first TT core of the solution $\mathcal{X}$, can be close to $n$ for these problem sizes, even though asymptotically $s_1=\mathcal{O}(\log n)$~\cite{shi2021compressibility}. Therefore, the cost of solving shifted linear systems with $\tilde{A}$ in~\cref{3d:sylv_fla2v} is significantly higher than with $A$.
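The direct solver used as the baseline above takes only a few lines for diagonal $A$; a dense, small-$n$ sketch (illustrative, not the timed implementation):

```python
import numpy as np

n = 20
rng = np.random.default_rng(2)
a = -np.linspace(1.0 / (30 * n), 1.0, n)   # diagonal of A, in [-1, -1/(30n)]
A = np.diag(a)
F = rng.standard_normal((n, n, n))          # dense right-hand side for simplicity

# Direct solve: X_{ijk} = F_{ijk} / (a_i + a_j + a_k)
denom = a[:, None, None] + a[None, :, None] + a[None, None, :]
X = F / denom

# Residual of  X x_1 A + X x_2 A + X x_3 A = F
R = (np.einsum('il,ljk->ijk', A, X)
     + np.einsum('jl,ilk->ijk', A, X)
     + np.einsum('kl,ijl->ijk', A, X))
```

Since all $a_j < 0$, every denominator $a_i + a_j + a_k$ is nonzero, which is the unique-solvability condition stated earlier.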
\begin{figure}
\centering
\begin{minipage}{0.49\textwidth}
\begin{overpic}[width=\textwidth]{fadi_timing2}
\put(45,-2) {Size, $n$}
\put(-1,28) {\rotatebox{90}{Time (sec)}}
\put(64,12) {\rotatebox{35}{TT-fADI}}
\put(60,40) {\rotatebox{63}{ST21}}
\put(48,42) {\rotatebox{70}{Direct}}
\end{overpic}
\end{minipage}
\caption{The execution time of direct solver (blue), ST21 (green), and~\cref{alg:7} (red) to solve~\cref{sylv_sim_ex} with size of the problem $10 \leq n\leq 500$.}
\label{fig:3dsylv_ex}
\end{figure}
\section*{Acknowledgements}
We thank David Bindel for both his advice and his class, where the first two authors learned much about parallel computing.
\section{Introduction and Results}
\par\hspace{5mm}
The Neumann--Poincar\'e (abbreviated by NP) operator is a boundary integral operator which appears naturally when solving classical boundary value problems using layer potentials. Its study (for the Laplace operator) goes back to C. Neumann \cite{Neumann-87} and H. Poincar\'e \cite{Poincare-AM-87} as the name of the operator suggests. If the boundary of the domain, on which the NP operator is defined, is $C^{1, \alpha}$ smooth, then the NP operator
is compact. Thus the Fredholm integral equation, which appears when solving Dirichlet or Neumann problems, can be solved using the Fredholm index theory \cite{Fredholm-03}. If the domain has corners, the NP operator is not any more a compact operator, but a singular integral operator. The solvability of the corresponding integral equation was established in \cite{Verch-JFA-84}.
Regarding spectral properties of the NP operator, it is proved in \cite{KPS} that the NP operator can be realized as a self-adjoint operator by introducing a new inner product on the $H^{-1/2}$-space (see also \cite{KKLSY-JLMS-16}), and so the NP spectrum consists of continuous spectrum and discrete spectrum (and possibly the limit points of discrete spectrum). If the domain has corners, the corresponding NP operator may exhibit a continuous spectrum (as well as eigenvalues). For recent development in this direction we refer to \cite{HKL-AIHP-17, KLY, PP-JAM-14, PP-arXiv}. If the domain has a smooth boundary, then the spectrum consists of eigenvalues converging to $0$. We refer to \cite{AKM2, Miyanishi:2015aa} for progress on the convergence rate of NP eigenvalues in two dimensions. However, satisfactory answers on the decay rates in three dimensions have been largely unknown, even in the smooth case,
since the two-dimensional arguments rely on the smoothness of the kernel of the NP operator \cite{Miyanishi:2015aa}.
With this in mind, the purpose of this paper is to prove the so-called ``Weyl law'' which is the asymptotic behavior of NP eigenvalues in three dimensions.
To state the result in a precise manner, let $\Omega$ be a $C^{1, \alpha}$ bounded region in $\Rbb^3$. The NP operator $\Kcal_{\p \GO} : L^2{(\p\GO)} \rightarrow L^2{(\p\GO)}$ is defined by
\beq\label{definition of NP operators}
\quad \Kcal_{\p \GO}[\psi](\Bx) := \frac{1}{4\pi} \int_{\partial \Omega} \frac{\langle \By-\Bx, \Bn(\By) \rangle}{|\Bx-\By|^3} \psi(\By)\; dS_{\By}
\eeq
where $dS_{\By}$ is the surface element and $\Bn(\By)$ is the outer normal vector on $\partial \Omega$.
As described above, we know that $\Kcal_{\p \GO}$ is a compact operator on $L^2(\partial \Omega)$ and its eigenvalues form an at most countable set, with $0$ the only possible limit point. It is also known that the eigenvalues of the NP operator lie in the interval $(-1/2, 1/2]$ and the eigenvalue $1/2$ corresponds to constant eigenfunctions. We denote the set of NP eigenvalues counting multiplicities by
\beq
\sigma(\Kcal_{\p \GO})
=\{\; \lambda_j(\Kcal_{\p \GO})\ \mid\ \frac{1}{2}=|\lambda_0(\Kcal_{\p \GO})| > |\lambda_1(\Kcal_{\p \GO})| \geq |\lambda_2(\Kcal_{\p \GO})| \geq\; \cdots \geq 0 \}.
\eeq
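For a sphere of radius $R$, the kernel in (\ref{definition of NP operators}) simplifies: when $|\Bx|=|\By|=R$ one has $\langle \By-\Bx, \Bn(\By)\rangle = |\Bx-\By|^2/(2R)$, so the kernel reduces to $1/(8\pi R|\Bx-\By|)$, the classical observation behind the explicit sphere computation. A quick numerical check of this identity:

```python
import numpy as np

def np_kernel(x, y):
    """NP kernel <y - x, n(y)> / (4 pi |x - y|^3), with n(y) = y/|y| on a sphere."""
    d = np.linalg.norm(x - y)
    return float(np.dot(y - x, y / np.linalg.norm(y)) / (4 * np.pi * d**3))

R = 2.0
rng = np.random.default_rng(3)
x = rng.standard_normal(3); x *= R / np.linalg.norm(x)   # random point on the sphere
y = rng.standard_normal(3); y *= R / np.linalg.norm(y)   # another random point
lhs = np_kernel(x, y)
rhs = 1.0 / (8 * np.pi * R * np.linalg.norm(x - y))
```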
Here our main purpose is to deduce the asymptotic behavior of NP eigenvalues by using the basic ingredients of surface geometry. To do this, we also define the Willmore energy $W(\p\GO)$ by
\beq\label{definition of Willmore energy}
W(\p\GO):=\int_{\p\GO} H^2(x)\; dS_{x}
\eeq
where $H(x)$ is the mean curvature of the surface. Then we have:
\begin{theorem}\label{main}
Let $\GO$ be a $C^{2, \alpha}$ bounded region with $\alpha>0$.
Then
\beq
|\lambda_j(\Kcal_{\p \GO})| \sim \Big\{\frac{3W(\p\GO) - 2\pi \chi(\p\GO)}{128 \pi} \Big\}^{1/2} j^{-1/2}\quad \text{as}\ j\rightarrow \infty.
\eeq
Here $W(\p\GO)$ and $\chi(\p\GO)$ denote, respectively, the Willmore energy and the Euler characteristic of the surface $\p\GO$.
\end{theorem}
Thus the NP operator has infinite rank \cite{KPS} and the decay rate of NP eigenvalues is $j^{-1/2}$ for $C^{2, \alpha}$ regions. Furthermore, the integral (\ref{definition of Willmore energy}) is especially interesting because it has the remarkable property of being invariant under M\"obius transformations of ${\Rbb}^3$ \cite{Bl, Wh}. Thus we find that the asymptotic behavior of NP eigenvalues is also M\"obius invariant, since the Euler characteristic is topologically invariant. We will present some further facts and applications later (see section \ref{sec: applications}).
To clarify the meaning of Theorem \ref{main}, let us consider the case $\p\GO=S^2$. It is proved by Poincar\'e \cite{Poincare-AM-87} that the NP eigenvalues on a two-dimensional sphere are $\frac{1}{2(2k+1)}$ for $k = 0, 1, 2\, \ldots$ and their multiplicities are $2k +1$ (see also \cite{AKMU}). So we may enumerate them as
$$
{\underbrace{\frac{1}{2}}_{1}, \underbrace{\frac{1}{6}, \frac{1}{6}, \frac{1}{6}}_{3}, \underbrace{\frac{1}{10}, \frac{1}{10}, \frac{1}{10}, \frac{1}{10}, \frac{1}{10}}_{5}, \cdots, \underbrace{\frac{1}{2(2k+1)}, \cdots, \frac{1}{2(2k+1)}}_{2k+1}}, \cdots.
$$
It easily follows that the $j=k^2$th eigenvalue satisfies
$$|\lambda_j(\Kcal_{S^2})|=\frac{1}{2(2k+1)} \sim \frac{1}{4} j^{-1/2}. $$
In contrast, one can verify these asymptotics from Theorem \ref{main}:
$$|\lambda_j(\Kcal_{S^2})| \sim \Big\{\frac{3W(S^2) - 2\pi \chi(S^2)}{128 \pi} \Big\}^{1/2} j^{-1/2}=\frac{1}{4} j^{-1/2}$$ since ${W(S^2) =4\pi}$ and $\chi(S^2)=2$. This calculation, of course, is consistent with the asymptotic of the explicit eigenvalues.
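This agreement can also be reproduced by brute-force enumeration of the sphere eigenvalues; an illustrative sketch:

```python
import numpy as np

# NP eigenvalues on S^2: 1/(2(2k+1)) with multiplicity 2k+1 (Poincare)
eigs = []
for k in range(200):
    eigs.extend([1.0 / (2 * (2 * k + 1))] * (2 * k + 1))
eigs = np.array(eigs)   # the block of degree k starts at index j = k^2

# Weyl-law constant {(3 W(S^2) - 2 pi chi(S^2)) / (128 pi)}^(1/2), W = 4 pi, chi = 2
const = np.sqrt((3 * 4 * np.pi - 2 * np.pi * 2) / (128 * np.pi))   # equals 1/4

k = 199
j = k * k
ratio = eigs[j] / (const * j ** -0.5)   # tends to 1 as k grows
```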
It is worth comparing the decay rates for three dimensional NP eigenvalues obtained here with those for two dimensional NP eigenvalues. For the two dimensional cases, it is well known that the eigenvalues of the integral operator $\Kcal_{\p \GO}$ are symmetric with respect to the origin \cite{BM, Sh}. The only exception is the eigenvalue $1/2$ corresponding to constant eigenfunctions. NP eigenvalues are invariant under M\"obius transformations \cite{MR0104934}.
One of the main distinguished features is that the decay rates deeply depend on the smoothness of the boundary. Indeed, we \cite{AKM2, Miyanishi:2015aa} proved a decay rate that depends on the smoothness of the $C^k$ boundary
$\partial \Omega$: for any $\tau >-k+3/2$,
$$
|\lambda^{\pm}_j(\Kcal_{\p \GO})| = o(j^{\tau}) \quad \text{as}\; j\rightarrow \infty,
$$
where $o$ means the small order. Moreover for the analytic boundary, we have the exponential decay rate:
$$
|\Gl^{\pm}_{j}(\Kcal_{\p \GO})| \le Ce^{-j\Ge}
$$
for some constant $C>0$ and all $j$. Here $\Ge$ is the modified Grauert radius of $\p\GO$ (see \cite{AKM2} for the precise statement).
We also remark that $\Kcal_{\p \GO}$ is not self-adjoint on $L^2(\p\GO)$.
This difficulty can be circumvented by passing from singular values to NP eigenvalues. To this end, the paper is organized as follows: In the next section we introduce
the notation of surface geometry and state the relationships between singular values and eigenvalues, using the Ky Fan theorem and the triangular representation of NP operators. In section \ref{sec: symbol}, we provide approximate pseudo-differential operators for NP operators. The relationships given in section \ref{sec: notations} then yield the Weyl law of NP eigenvalues in section \ref{sec: Weyl law}.
The applications are provided in section \ref{sec: applications}. This paper ends with some discussions
and a brief conclusion.
\date{\textbf{Acknowledgement: }I am grateful to Prof. H. Kang and Prof. K. Ando for useful discussions on the early stages of this work. I would also like to thank the members of Inha university for their hospitality during my visit, when the main results of this paper were obtained.}
\section{Preliminaries and Notations}\label{sec: notations}
As preliminaries, we shall mention the notations of surface geometry and some results of Schatten class used in this paper.
\subsection{Surface geometry}\label{sec: surface geometry}
Let $M$ be a two-dimensional surface without boundary and let $\Br = \Br(s, t)$ be
a regular parametrization of a surface in ${\Rbb}^3$, where $\Br$ is
a smooth (at least $C^{2}$) vector-valued function of two variables.
It is common to denote the partial derivatives of $\Br$ with respect to $s$ and $t$
by $\Br_s$ and $\Br_t$. The first fundamental form is the inner product
on the tangent space of a surface in three-dimensional Euclidean space
which is induced canonically from the dot product of ${\Rbb}^3$.
We denote the first fundamental form by the Roman numeral I:
\beq
\mathrm {I}=E ds^2 +2F ds\; dt +G dt^2.
\eeq
Let $\Br(s, t)$ be a parametric surface. Then the inner product of two tangent vectors is
$$
{\displaystyle {\begin{aligned}&{}\quad \mathrm {I} (a\Br_{s}+b\Br_{t},c\Br_{s}+d\Br_{t})\\
&=ac ( \Br_{s}\cdot \Br_{s}) +(ad+bc)(\Br_{s}\cdot \Br_{t}) +bd( \Br_{t}\cdot \Br_{t}) \\
&=Eac+F(ad+bc)+Gbd.\end{aligned}}}
$$
Thus
\begin{align}
E=\Br_{s}\cdot \Br_{s},\ F=\Br_{s}\cdot \Br_{t},\ G=\Br_{t}\cdot \Br_{t}
\end{align}
and hereafter we often write $E=g_{11},\ F=g_{12}=g_{21},\ G=g_{22}$.
The second fundamental form of a general parametric surface is defined as follows: Regularity of the parametrization means that $\Br_s$ and $\Br_t$ are linearly independent for any $(s, t)$ in the domain of $\Br$, and hence span the tangent plane to $M$ at each point. Equivalently, the cross product $\Br_s \times \Br_t$ is a nonzero vector normal to the surface. The parametrization thus defines a field of unit normal vectors $\Bn$:
$$
\Bn=\frac{\Br_s \times \Br_t}{|\Br_s \times \Br_t|}.
$$
Then the second fundamental form is usually written as
\begin{align}
{\displaystyle \mathrm {I\!I} =L\,ds^{2}+2M\,ds\,dt+N\,dt^{2}. }
\end{align}
Here the coefficients $L, M, N$ at a given point in the parametric $st$-plane are given by the projections of the second partial derivatives of $\Br$ at that point onto the normal line to $M$ and can be computed with the aid of the dot product as follows:
$$
{\displaystyle L=\mathbf {r} _{ss}\cdot \mathbf {n} ,\quad M=\mathbf {r} _{st}\cdot \mathbf {n} ,\quad N=\mathbf {r} _{tt}\cdot \mathbf {n} .}
$$
Under these notations, we can denote the Gaussian curvature $K$ and the mean curvature $H$ as
$$
K:=\det A=\frac{LN-M^2}{EG-F^2}, \quad H:=\frac{1}{2}{\rm tr} A
$$
where
$
A={\displaystyle {\begin{pmatrix}E&F\\F&G\end{pmatrix}}}^{-1} {\displaystyle {\begin{pmatrix}L&M\\M&N\end{pmatrix}}}
$ is the Weingarten matrix.
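These formulas are straightforward to evaluate numerically. The illustrative sketch below approximates the derivatives of a parametrization by central differences and recovers $K = 1/R^2$ and $|H| = 1/R$ for a sphere of radius $R$ (the sign of $H$ depends on the orientation of the normal $\Bn$):

```python
import numpy as np

def sphere(s, t, R=2.0):
    """Parametrization of a sphere of radius R (t is the latitude)."""
    return R * np.array([np.cos(s) * np.cos(t), np.sin(s) * np.cos(t), np.sin(t)])

def curvatures(r, s, t, h=1e-4):
    """Gaussian and mean curvature via the two fundamental forms."""
    rs = (r(s + h, t) - r(s - h, t)) / (2 * h)
    rt = (r(s, t + h) - r(s, t - h)) / (2 * h)
    rss = (r(s + h, t) - 2 * r(s, t) + r(s - h, t)) / h**2
    rtt = (r(s, t + h) - 2 * r(s, t) + r(s, t - h)) / h**2
    rst = (r(s + h, t + h) - r(s + h, t - h)
           - r(s - h, t + h) + r(s - h, t - h)) / (4 * h**2)
    E, F, G = rs @ rs, rs @ rt, rt @ rt
    n = np.cross(rs, rt)
    n /= np.linalg.norm(n)
    L, M, N = rss @ n, rst @ n, rtt @ n
    K = (L * N - M**2) / (E * G - F**2)                      # det of Weingarten matrix
    H = (E * N - 2 * F * M + G * L) / (2 * (E * G - F**2))   # half its trace
    return K, H

K, H = curvatures(sphere, 0.7, 0.3)
```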
We also recall the conformal parameters used in this paper:
The parameter $(s, t)$ is said to be isothermal or conformal if the first fundamental form
is written as
\begin{align}
\mathrm {I}=e^{2\sigma}(ds^2+dt^2)\quad \text{(i.e. $E=G=e^{2\sigma}$,\ $F=0$)}
\end{align}
where $\sigma:=\sigma(s, t)$ is a $C^2$ function in $(s, t)$. It is emphasized that one can always take
such coordinates without loss of regularity \cite{DK}. For the isothermal parameters, we find
$$
K=\frac{LN-M^2}{E^2}, \quad H=\frac{L+N}{2E}
$$
and the Gauss-Bonnet formula
$$
\int_{M} K\; dS=2\pi \chi(M)
$$
holds true even for $C^2$ oriented compact surfaces \cite{Res}.
\subsection{Schatten class}
Let $K$ be a compact operator on a separable Hilbert space $H$. We define the singular values $\{s_j(K)\}$ of $K$ as the family of eigenvalues of $(K^* K)^{1/2}$. Since the singular values are non-negative, we always arrange them in non-increasing order:
\beq
\sigma_{sing}(K)=\{\; s_j(K) \mid s_1(K) \geq s_2(K) \geq s_3(K) \geq\; \cdots\}.
\eeq
Then the Schatten $p$-norm of $K$ is defined by
$$\Vert K\Vert_{S^p} := {\rm{tr}}(K^*K)^{p/2}=\sum_{j=1}^{\infty} |s_j(K)|^{p}$$
and a $p$-th Schatten-class operator is a bounded linear operator on a Hilbert space
with finite Schatten $p$-norm. In particular, for $p=2$, a $2$nd Schatten-class operator
is the so-called Hilbert-Schmidt operator. Here we show the convergence rate
of singular values of $p$-th Schatten-class operators:
\begin{lemma}\label{weak Schatten lemma}
If $K$ is in $p$-th Schatten class, then ordered singular values satisfy
$$
s_j(K)=o(j^{-1/p}) .
$$
\end{lemma}
\begin{proof}
$$
\Vert K\Vert_{S^p}^p=\sum_{j=1}^{\infty} |s_j(K)|^{p}<\infty.
$$
Thus, for all $\epsilon>0$, there exists $N\in \Nbb$ such that
$$
(n-N)|s_n(K)|^{p} \leqq \sum_{j=N+1}^{n} |s_j(K)|^p<\epsilon,
$$
and hence,
$$
n|s_n(K)|^{p}< 2\epsilon \quad \mbox{for all}\; n>2N.
$$
Accordingly, $s_j(K)=o(j^{-1/p})$ as desired.
\end{proof}
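As an illustration of the lemma, consider the model sequence $s_j = j^{-1/p}(\log j)^{-2/p}$: its Schatten $p$-norm is finite, so the lemma applies, and indeed for $p=2$ one has $j^{1/2}s_j = 1/\log j \to 0$, though only logarithmically. A numerical check:

```python
import numpy as np

p = 2.0
j = np.arange(2, 200001, dtype=float)
s = j ** (-1.0 / p) * np.log(j) ** (-2.0 / p)   # model singular values

partial_norm = np.sum(s ** p)    # partial sum of sum_j 1/(j log^2 j), which converges
scaled = j ** (1.0 / p) * s      # = 1/log(j) for p = 2: tends to 0 slowly
```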
The class of operators $K$ satisfying $s_j(K)=o(j^{-1/p})$
is called the weak $p$-th Schatten class \cite{Si}. Thus, once the class of an operator is known,
upper bounds on the decay rates are obtained. We also have a precise relation
between singular values and eigenvalues, which is convenient for deriving NP eigenvalues
from singular values:
\begin{prop}\label{almost self-adjoint} Let $K$ be a compact operator. Assume the following {\rm(1)-(3):}
{\rm(1)}\ $K-K^*$ is in Hilbert-Schmidt class.
{\rm(2)}\ Eigenvalues of $K$ consist of real values.
{\rm(3)}\ $s_j(K) \sim C j^{-1/2}\quad\text{as}\ j\rightarrow \infty$.
Then $|\lambda_j(K)| \sim s_j(K) \sim Cj^{-1/2}\quad\text{as}\ j\rightarrow \infty$.
\end{prop}
We call the operator $K$, satisfying (1)-(3), ``{\it almost self-adjoint operator}''.
\begin{proof}
As it is well known \cite{GK}, for any compact operator $K$, there are a compact normal operator $D$
and a compact quasinilpotent operator $V$, such that
\beq K = D + V\quad \text{and}\quad \sigma(D) = \sigma(K).\eeq
Since the spectrum $\sigma(D)=\sigma(K)$ is real, $D$ is a compact self-adjoint operator and
$$
K^{*}=D+V^{*}.
$$
Thus $K-K^{*}=V-V^*$ is in Hilbert-Schmidt class. The compact quasinilpotent operator $V$ with Hilbert-Schmidt imaginary part $\Im (V)=\frac{V-V^*}{2i}$ is also in Hilbert-Schmidt class (see e.g. \cite[Lemma 6.5.1]{Gil1} and \cite{Gil2}).
From Ky-Fan theorem (see e.g. \cite{Dostanic} and references therein) and Lemma \ref{weak Schatten lemma}, the Hilbert-Schmidt operator $V$ is considered as a small perturbation of $K$. Thus
$$|\lambda_j(K)|=|\lambda_j(D)|=s_j(D) \sim s_j(K) \sim C j^{-1/2}\quad\text{as}\ j\rightarrow \infty. $$
\end{proof}
\section{NP operators as pseudo-differential operators}\label{sec: symbol}
We study the asymptotics of singular numbers of $\Psi$DO's which correspond to compact operators of the form (\ref{definition of NP operators}). To consider an integral operator as a $\Psi$DO is technically convenient also because the asymptotics can be expressed directly in terms of symbols. The starting point for us is to construct the approximate $\Psi$DO of the NP operator $\Kcal_{\p\Omega}$ modulo Hilbert-Schmidt operators. This is done by using local coordinates.
\subsection{NP operators and approximations}\label{sec: double symbol}
Let $\Bx_0 \in \p\GO$ and choose open neighborhoods $U_j$ ($j=1,2,3$) of $\Bx_0$ in $\p\GO$ so that $U_1\cup {U_2} \subset U_3$ and $U_3$ has a local parametrization as in section \ref{sec: surface geometry}. Let $\Gvf_j$ ($j=1,2$) be smooth functions such that $\mbox{supp}\ \Gvf_1 \subset U_1$ and $\mbox{supp}\ \Gvf_2 \subset U_2$. This setting allows us to take the local coordinates $(x(s, t), y(s, t), z(s, t))$, and the surface element is given by $d\Gs(s, t)=|(x_s, y_s, z_s)\times(x_t, y_t, z_t)|ds \wedge dt$.
Thus we obtain
\begin{align*}
\Gvf_2 \Kcal_{\p \GO} [\Gvf_1 f] (\Bx) := \Gvf_2(\Bx) \int_{\Rbb^2} \Big[ & \frac{(x(s_1, t_1)-x(s_2, t_2)) (y_s z_t-z_s y_t)(s_1, t_1)}{4\pi |\Bx-\By|^{3}} \\
+& \frac{(y(s_1,t_1)-y(s_2, t_2))(z_s x_t-x_s z_t)(s_1, t_1)}{4\pi |\Bx-\By|^{3}} \\
+& \frac{(z(s_1,t_1)-z(s_2, t_2))(x_s y_t-y_s x_t)(s_1, t_1)}{4\pi |\Bx-\By|^{3}} \Big]
\Gvf_1(\By) f(\By) ds_1 \wedge dt_1. \\
\end{align*}
Remarking that $dx=x_s ds+ x_t dt$, $dy=y_s ds+ y_t dt$ and $dz=z_s ds+ z_t dt$ on local charts,
the metric of the surface is written as
\begin{align*} dx^2+dy^2+dz^2
&=(x_s^2+y_s^2+z_s^2) ds^2 +2(x_s x_t+y_s y_t+z_s z_t )ds\ dt+(x_t^2+y_t^2+z_t^2) dt^2 \\
&=E ds^2 + 2F ds\ dt +G dt^2.
\end{align*}
Letting $y'=(s_1, t_1)$ and $x'=(s_2, t_2)$, we also find that
\begin{align*}
&|\Bx-\By|^2 \\
=&(x(s_1, t_1)-x(s_2, t_2))^2+(y(s_1, t_1)-y(s_2, t_2))^2+(z(s_1, t_1)-z(s_2, t_2))^2 \\
=&\{ E(x') (s_1-s_2)^2+2 F(x') (s_1-s_2)(t_1-t_2)+G(x') (t_1-t_2)^2 \}(1+O(|x'- y'|)),
\end{align*}
and so
\begin{align*}
&\frac{1}{|\Bx-\By|^{3}} \\
=& \frac{1}{[E(x') (s_1-s_2)^2+2 F(x') (s_1-s_2)(t_1-t_2)+G(x') (t_1-t_2)^2]^{3/2}}
(1+ O(|x'-y'|)) \\
=:& K(x', x'-y')+ E_1(x', y').
\end{align*}
Here
$$
K(x', x'-y')=\frac{1}{[E(x') (s_1-s_2)^2+2 F(x') (s_1-s_2)(t_1-t_2)+G(x') (t_1-t_2)^2]^{3/2}}
$$
and
$$
|E_1(x', y')| \lesssim |(s_1, t_1)-(s_2, t_2)|^{-2}.
$$
Similarly, the numerator of the integral kernel of $\Gvf_2 \Kcal_{\p \GO}[\Gvf_1 f] (\Bx)$
can be written as
\begin{align*}
&(x(s_1, t_1)-x(s_2, t_2)) (y_s z_t-z_s y_t)(s_1, t_1) +(y(s_1,t_1)-y(s_2, t_2))(z_s x_t-x_s z_t)(s_1, t_1)\\
&+(z(s_1,t_1)-z(s_2, t_2))(x_s y_t-y_s x_t)(s_1, t_1) \\
=&(x_s(s_1-s_2)+x_t(t_1-t_2)+\frac{1}{2}(s_1-s_2)^2 x_{ss}+x_{st}(s_1-s_2)(t_1-t_2)+\frac{1}{2}x_{tt}(t_1-t_2)^2)(y_s z_t-z_s y_t)(s_1, t_1) \\
&+(y_s(s_1-s_2)+y_t(t_1-t_2)+\frac{1}{2}(s_1-s_2)^2 y_{ss}+y_{st}(s_1-s_2)(t_1-t_2)+\frac{1}{2}y_{tt}(t_1-t_2)^2)(z_s x_t-x_s z_t)(s_1, t_1)\\
&+(z_s(s_1-s_2)+z_t(t_1-t_2)+\frac{1}{2}(s_1-s_2)^2 z_{ss}+z_{st}(s_1-s_2)(t_1-t_2)+\frac{1}{2}z_{tt}(t_1-t_2)^2)(x_s y_t-y_s x_t)(s_1, t_1) \\
&+O(|x'-y'|^{2+\alpha}) \\
=&\frac{1}{2}(s_1-s_2)^2 \{x_{ss}(y_s z_t-z_s y_t) +y_{ss}(z_s x_t-x_s z_t)+z_{ss}(x_s y_t-y_s x_t) \}(s_1, t_1) \\
&+ (s_1-s_2)(t_1-t_2) \{ x_{st}(y_s z_t-z_s y_t) +y_{st}(z_s x_t-x_s z_t)+z_{st}(x_s y_t-y_s x_t) \}(s_1, t_1) \\
&+\frac{1}{2}(t_1-t_2)^2 \{x_{tt}(y_s z_t-z_s y_t) +y_{tt}(z_s x_t-x_s z_t)+z_{tt}(x_s y_t-y_s x_t) \}(s_1, t_1)
\\
&+O(|x'-y'|^{2+\alpha}) \\
=&\frac{1}{2}\{ L(y') (s_1-s_2)^2 +2M(y') (s_1-s_2)(t_1-t_2)+N(y') (t_1-t_2)^2 \} |(x_s, y_s, z_s)\times(x_t, y_t, z_t)|
(s_1, t_1) \\
&+O(|x'-y'|^{2+\alpha}) \\
=&\frac{1}{2}\{ L(x') (s_1-s_2)^2 +2M(x') (s_1-s_2)(t_1-t_2)+N(x') (t_1-t_2)^2 \} |(x_s, y_s, z_s)\times(x_t, y_t, z_t)|
(s_1, t_1) \\
&+O(|x'-y'|^{2+\alpha}).
\end{align*}
Here we used the second fundamental form:
$$
{\displaystyle \mathrm {I\!I}}=L ds^2 +2 M ds\ dt +N dt^2
$$
and
$$L(x')-L(y')=O(|x'-y'|^{\alpha}),\ M(x')-M(y')=O(|x'-y'|^{\alpha}),\ N(x')-N(y')=O(|x'-y'|^{\alpha}).$$
Combining the denominator and the numerator of the kernel, we obtain
\begin{align*}
\Gvf_2 \Kcal_{\p \GO}[\Gvf_1 f] (\Bx)& \\
=\frac{1}{8\pi} \Gvf_2(\Bx)\int_{\Rbb^2}
\Big[&\{ L(s_1-s_2)^2 +2M(s_1-s_2)(t_1-t_2)+N(t_1-t_2)^2 \} \\
&|(x_s, y_s, z_s)\times(x_t, y_t, z_t)|
+O(|x'-y'|^{2+\alpha}) \Big] \{K(x', x'-y')(1 + O(|x'-y'|)) \}
\Gvf_1(\By) f(\By) ds_2 \wedge dt_2 \\
=\frac{1}{8\pi}\Gvf_2(\Bx) \int_{\Rbb^2} \Big[&\{ L(s_1-s_2)^2 +2M(s_1-s_2)(t_1-t_2)+N(t_1-t_2)^2 \}
\\
&K(x', x'-y') |(x_s, y_s, z_s)\times(x_t, y_t, z_t)| + E_2(x', y') \Big] \Gvf_1(\By) f(\By) ds_2 \wedge dt_2 \\
=\frac{1}{8\pi}\Gvf_2(\Bx) \int_{\Rbb^2} \Big[&\{ L(s_1-s_2)^2 +2M(s_1-s_2)(t_1-t_2)+N(t_1-t_2)^2 \} \\
&K(x', x'-y') \Big] \Gvf_1(\By) f(\By) d\sigma(s, t) +\BH[f](\Bx)
\end{align*}
where $\BH[f](\Bx)=\frac{1}{8\pi}\Gvf_2(\Bx) \int_{\Rbb^2}E_2(x', y')\Gvf_1(\By) f(\By) ds_2 \wedge dt_2$ and $E_2(x', y')=O(|x'-y'|^{\alpha-1})$.
It follows that $E_2(x', y')$ is in $L^{2}_{\text{loc}}(\Rbb^4)$ and $\BH$ is in Hilbert-Schmidt class by Mercer's theorem (See e.g. \cite{CH, Si}).
\subsection{Symbols of the approximate NP operators}
From the calculations in the previous subsection, the NP operator on local charts is written as
\begin{align*}
\Gvf_2 \Kcal_{\p \GO}[\Gvf_1 f] (\Bx) &\equiv \Gvf_2 \BP[\Gvf_1 f](\Bx) \\
&:=\frac{1}{8\pi}\Gvf_2(\Bx)\int_{\Rbb^2} \Big[\{ L(s_1-s_2)^2 \\
&\quad\quad+2M(s_1-s_2)(t_1-t_2)+N(t_1-t_2)^2 \}K(x', x'-y') \Big] \Gvf_1(\By) f(\By) d\sigma(s, t)
\end{align*}
where the notation $\equiv$ stands for modulo Hilbert-Schmidt operators.
Let us represent $\BP[f](\Bx)$ as a pseudo-differential operator (see e.g. \cite{Shubin, Ta} for the details). The classical $\Psi $DO is defined as
$$
Op(\sigma)[f](x) = \frac{1}{(2\pi)^2}\int_{\Rbb^2}\int_{\Rbb^2} \sigma(x, \xi) e^{i(x-y)\cdot \xi} f (y)\, dy\, d\xi
$$
where $\sigma(x, \xi)$ is called the symbol of $Op(\sigma)$. Write $(x, y)$ instead of $(s, t)$ as usual.
Our purpose is to obtain the $\Psi$DO representation of the operator $P^g_{jk}$:
\beq\label{partial pseudo-differential}
P_{jk}^g[f](x') :=\frac{1}{8\pi} \int_{\Rbb^2} {(x_j-y_j)(x_k-y_k)}K(x', x'-y') f(\By) d\sigma
\eeq
where
$$
K(x', x'-y') =
\frac{1}{[g_{11}(x')(x_1-y_1)^2+2 g_{12}(x')(x_1-y_1)(x_2-y_2)+g_{22}(x')(x_2-y_2)^2]^{3/2}}.
$$
We can calculate the principal symbol of $P_{jk}^g$ with the aid of the {\it surface Riesz transforms} $R^g_k$:
\beq\label{symbol of surface Riesz trans}
R_k^g[f](x') =\frac{1}{2\pi} \int_{\Rbb^2} {(x_k-y_k)}K(x', x'-y') f(\By) dy'.
\eeq
Here $R_k^g$ is, in fact, a homogeneous pseudo-differential operator \cite{AKM}. Let us recall the symbol of $R_k^g$ for the reader's convenience:
\begin{lemma}\label{symbolRG}
For $f\in C_0^{\infty}(\Rbb^2)$
\beq
R_k^g[f](x') =\frac{1}{(2\pi)^2} \int_{\Rbb^4}\frac{-i}{\sqrt{\det (g_{kl}(x'))}}\frac{\sum_{l} g^{kl}(x')\xi_l}{\sqrt{\sum_{k, l} g^{kl}(x')\xi_k \xi_l}} e^{i(x'-y')\cdot \xi} f(\By) dy' d\xi.
\eeq
\end{lemma}
\begin{proof}
The matrix (tensor) $\BG(x)=(g_{ij}(x))$ is symmetric and one can diagonalize via orthogonal matrices:
\beq
P^{-1}(x) \BG(x) P(x)=
\begin{pmatrix}
\alpha^2(x) & 0 \\
0 & \beta^2(x)
\end{pmatrix}
\eeq
and
\beq
\begin{pmatrix}
\alpha^{-1}(x) & 0 \\
0 & \beta^{-1}(x)
\end{pmatrix}
P^{-1}(x) \BG(x) P(x)
\begin{pmatrix}
\alpha^{-1}(x) & 0 \\
0 & \beta^{-1}(x)
\end{pmatrix}=
\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}.
\eeq
Since $\alpha(x)\beta(x)=\sqrt{\det (g_{kl}(x))}$,
putting $z_k=x_k-y_k$ $(k=1, 2)$ and
$$
z=
\begin{pmatrix}
z_1 \\
z_2
\end{pmatrix}
=
P(x)
\begin{pmatrix}
\alpha^{-1}(x) & 0 \\
0 & \beta^{-1}(x)
\end{pmatrix}
\begin{pmatrix}
\tilde{z_1} \\
\tilde{z_2}
\end{pmatrix}
=\widetilde{P(x)}\tilde{z},
$$
(\ref{symbol of surface Riesz trans}) becomes
\begin{align*}
R_k^g[f](x') &=\frac{1}{2\pi} \int_{\Rbb^2} \frac{[\widetilde{P(x')} \tilde{z}]_k}{|\tilde{z}|^3}\ f(\By) \frac{d\tilde{z}}{\alpha(x')\beta(x')} \\
&=\frac{1}{(2\pi)^2}\int_{\Rbb^2} \int_{\Rbb^2} \frac{-i}{\sqrt{\det (g_{kl}(x'))}} \frac{[\widetilde{P(x')} \widetilde{P(x')}^{T} \xi]_k}{|\widetilde{P(x')}^{T} \xi|} e^{i(x'-y')\cdot \xi} f(\By)\; d\xi dy' \\
&=\frac{1}{(2\pi)^2}\int_{\Rbb^2} \int_{\Rbb^2} \frac{-i}{\sqrt{\det (g_{kl}(x'))}}
\frac{\sum_{l} g^{kl}(x')\xi_l}{\sqrt{\sum_{k, l} g^{kl}(x')\xi_k \xi_l}}e^{i(x'-y')\cdot \xi} f(\By)\; d\xi dy',
\end{align*}
as desired.
\end{proof}
As a consequence of Lemma \ref{symbolRG}, we obtain the
symbol of $P_{jk}^g$ as a homogeneous pseudo-differential operator:
\begin{lemma}\label{symbolP}
For $f\in C_0^{\infty}(\Rbb^2)$,
\beq
P_{jk}^g[f](x') =\frac{1}{(2\pi)^2} \int_{\Rbb^4} \Big[\frac{(-1)^{j+k} \widehat{\xi_j} \widehat{\xi_k}}{4 \det (g_{ij}(x')) \{ \sqrt{\sum_{j, k} g^{jk}(x')\xi_j \xi_k}\}^3}\Big]e^{i(x'-y')\cdot \xi} f(\By) dy' d\xi.
\eeq
Here $\widehat{\xi_1}=\xi_2$ and $\widehat{\xi_2}=\xi_1$.
\end{lemma}
\begin{proof}
From Lemma \ref{symbolRG} and using integration by parts for oscillatory integrals \cite{Shubin}, we have for all $f \in C_0^{\infty}(\Rbb^2)$,
\begin{align*}
P_{jk}^g[f](x') =& \frac{1}{4} R_k^g [ \sqrt{\det (g(x))}(x_j-y_j)f](x') \\
=&\frac{1}{(2\pi)^2} \int_{\Rbb^4}{-i} \frac{\sum_{l} g^{kl}(x')\xi_l}{4 \sqrt{\sum_{k, l} g^{kl}(x')\xi_k \xi_l}}\frac{1}{i} \frac{\p}{\p\xi_j} e^{i(x'-y')\cdot \xi} f(\By) dy' d\xi \\
=&\frac{1}{(2\pi)^2} \int_{\Rbb^4} \frac{\p}{\p\xi_j} \Big[\frac{\sum_{l} g^{kl}(x')\xi_l}{4 \sqrt{\sum_{k, l} g^{kl}(x')\xi_k \xi_l}}\Big] e^{i(x'-y')\cdot \xi} f(\By) dy' d\xi \\
=&\frac{1}{(2\pi)^2} \int_{\Rbb^4} \Big[\Big\{\frac{g^{kj}(x')}{4 \sqrt{\sum_{k, l} g^{kl}(x')\xi_k \xi_l}}\Big\} \\
&\quad\quad\quad -\frac{1}{2}\Big\{\frac{\{\sum_{l} g^{kl}(x')\xi_l\}\{ 2\sum_{l} g^{jl}(x') \xi_l \}}{4 \{\sqrt{\sum_{k, l} g^{kl}(x')\xi_k \xi_l}\}^{3}}\Big\} \Big] e^{i(x'-y')\cdot \xi} f(\By) dy' d\xi \\
=&\frac{1}{(2\pi)^2} \int_{\Rbb^4} \Big[\frac{g^{kj}(x') (\sum_{k, l} g^{kl}(x')\xi_k \xi_l)
-\{\sum_{l} g^{kl}(x')\xi_l \}\{ \sum_{l} g^{jl}(x') \xi_l \} }{4 \{\sqrt{\sum_{k, l} g^{kl}(x')\xi_k \xi_l}\}^3}\Big] e^{i(x'-y')\cdot \xi} f(\By) dy' d\xi \\
=&\frac{1}{(2\pi)^2} \int_{\Rbb^4} \Big[\frac{ \sum_{l, m} (g^{kj}(x')g^{lm}(x') -g^{kl}(x')g^{jm}(x') ) \xi_l \xi_m}{4 \{\sqrt{\sum_{j, k} g^{jk}(x')\xi_j \xi_k}\}^3}\Big]e^{i(x'-y')\cdot \xi} f(\By) dy' d\xi \\
=&\frac{1}{(2\pi)^2} \int_{\Rbb^4} \Big[\frac{(-1)^{j+k}(g^{11}g^{22} -g^{12}g^{21})\widehat{\xi_j} \widehat{\xi_k}}{4 \{\sqrt{\sum_{j, k} g^{jk}(x')\xi_j \xi_k}\}^3}\Big]e^{i(x'-y')\cdot \xi} f(\By) dy' d\xi \\
=&\frac{1}{(2\pi)^2} \int_{\Rbb^4} \Big[\frac{(-1)^{j+k} \widehat{\xi_j} \widehat{\xi_k}}{4 \det (g_{ij}) \{ \sqrt{\sum_{j, k} g^{jk}(x')\xi_j \xi_k}\}^3}\Big]e^{i(x'-y')\cdot \xi} f(\By) dy' d\xi.
\end{align*}
as desired.
\end{proof}
Hence the principal symbol of $P_{jk}^g$ is a (strictly) homogeneous symbol of order $-1$ and the summation immediately yields the principal symbol of $\BP$:
\begin{lemma}\label{symbolT}
Let $\p\GO$ be a bounded $C^{2, \alpha}$ surface. Then
$$\BP \equiv Op\Big( \Big[
\frac{L(x') \xi_2^2 -2M(x')\xi_1 \xi_2 +N(x')\xi_1^2}{4 \det (g_{ij}) \{ \sqrt{\sum_{j, k} g^{jk}(x')\xi_j \xi_k}\}^3} \Big]
\Big) \quad\mbox{modulo}\ \text{Hilbert-Schmidt operators}.
$$
\end{lemma}
We remark that the above $\Psi$DO is well defined even for $f \in {L^2}(\p\GO)$, since its localizations in local coordinates coincide with sums of operators of the form (\ref{partial pseudo-differential}).
\section{Weyl's law of compact pseudo-differential operators and NP operators}\label{sec: Weyl law}
Let us recall Weyl's law for singular values of pseudo-differential operators whose symbols are merely $C^{\alpha}$ smooth in the $x$-variable. M. S. Birman and M. Z. Solomyak \cite{BS} proved the asymptotics under weak smoothness hypotheses in both the $x$- and $\xi$-variables (see \cite{Dostanic, Grubb, RT} for recent progress). In our situation, we employ the results in the form of asymptotics of singular values of $-1$-homogeneous $\Psi$DO's in two dimensions:
\begin{theorem}[\cite{Grubb} Theorem 2.1 and Theorem 2.5]\label{Grubb theorem}
On a closed manifold $M$ of dimension $2$, let $P$ be defined in local coordinates
from symbols $p(x, \xi)$ that are homogeneous in $\xi$ of degree $-1$. Assume that the symbols restricted to $\xi \in S^{m-1}=\{|\xi|=1\}$ are in $C(S^{m-1}, C^{\epsilon})$ for some $\epsilon$.
Then
$$
s_j(P) \sim C(\p\GO)^{1/2}j^{-1/2} \quad \text{as}\ j\rightarrow \infty.
$$
Here
\beq
C(\p\GO)=\frac{1}{8\pi^2} \int_{S^{*}M} |\sigma_0(x, \xi)|^{2} dx\ d\xi,
\eeq
$S^{*}M$ denotes the cosphere bundle and $\sigma_0$ is the principal symbol of $Op(\sigma(x, \xi))$.
\end{theorem}
From Theorem \ref{Grubb theorem} we immediately obtain Weyl's law for singular values:
\begin{lemma}\label{Weyl law singular values}
Let $\GO$ be a $C^{2, \alpha}$ bounded region. Then
\beq
s_j(\Kcal_{\p \GO})\sim \Big\{\frac{3W(\p\GO) - 2\pi \chi(\p\GO)}{128 \pi} \Big\}^{1/2} j^{-1/2}\quad \text{as}\ j\rightarrow \infty
\eeq
where $W(\p\GO)=\int_{\p\GO} H^2\; dS$ and $\chi(\p\GO)$ denote, respectively, the Willmore energy and the Euler characteristic of the surface $\p\GO$.
\end{lemma}
\begin{proof}
In the preceding section, we proved that $\Kcal_{\p \GO}$ is a $\Psi$DO modulo Hilbert-Schmidt class:
\beq
\Kcal_{\p \GO} \equiv Op\Big( \Big[
\frac{L(x) \xi_2^2 -2M(x)\xi_1 \xi_2 +N(x)\xi_1^2}{4 \det (g_{ij}) \{ \sqrt{\sum_{j, k} g^{jk}(x)\xi_j \xi_k}\}^3} \Big]
\Big) \quad \text{modulo the Hilbert-Schmidt operator}\ \BH.
\eeq
From the Ky Fan theorem \cite{Dostanic}, $\BH$ can be regarded as a small perturbation of the $\Psi$DO.
Thus $s_j(\Kcal_{\p \GO})$ also satisfies
$$s_j(\Kcal_{\p \GO}) \sim C(\p\GO)^{1/2} j^{-1/2} \quad \text{as}\ j\rightarrow \infty. $$
To calculate the positive constant $C(\p\GO)$ in Theorem \ref{Grubb theorem}, we take the isothermal charts introduced in section \ref{sec: surface geometry}. The surface element is given by $dS_{\Bx}=E(x) dx$ and
\begin{align*}
C(\p\GO)&=\frac{1}{8\pi^2}\int_{\p\GO} \int_{S^1} \Big[\frac{L(x) \xi_2^2 -2M(x)\xi_1 \xi_2 +N(x)\xi_1^2}{4 \det (g_{ij}) \{ \sqrt{\sum_{j, k} g^{jk}(x)\xi_j \xi_k}\}^3} \Big]^2\; d\xi dx \\
&=\frac{1}{8\pi^2} \int_{\p\GO} \int_{S^1}\Big[\frac{L(x) \cos^2 \theta -2M(x)\cos \theta \sin \theta + N(x) \sin^2 \theta}{4 E^2(x) E^{-3/2}(x)} \Big]^2\; d\xi dx \\
&=\frac{1}{128\pi^2} \int_{\p\GO} \int_{S^1} \frac{ (L(x) \cos^2 \theta -2M(x)\cos \theta \sin \theta + N(x) \sin^2 \theta)^2}{E(x)} \; d\xi dx \\
&=\frac{1}{128\pi^2} \int_{\p\GO} \frac{ (\frac{3\pi}{4}L^2(x)+\frac{3\pi}{4}N^2(x)+\pi M^2(x) + \frac{\pi}{2}L(x)N(x)) }{E(x)} \; dx \\
&=\frac{1}{128\pi^2} \int_{\p\GO} \frac{ (\frac{3\pi}{4}L^2(x)+\frac{3\pi}{4}N^2(x)+\pi (L(x)N(x)-E^2(x)K(x)) + \frac{\pi}{2}L(x)N(x)) }{E(x)} \; dx \\
&=\frac{3}{512 \pi} \int_{\p\GO} \Big[\left(\frac{L(x)+N(x)}{E(x)} \right)^2 -\frac{4}{3} K(x) \Big]{E(x)} \; dx \\
&=\frac{3}{512 \pi} \int_{\p\GO} 4H^2(x)\; dx - \frac{1}{64} \chi (\p\GO) \\
&=\frac{3W(\p\GO) - 2\pi \chi(\p\GO)}{128 \pi}.
\end{align*}
Thus we obtain a Weyl-type formula for singular values of NP operators.
\end{proof}
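As a consistency check (not needed for the proofs), the formula of Lemma \ref{Weyl law singular values} can be tested on the unit sphere, whose NP spectrum is classically known, going back to Poincar\'e, to consist of the eigenvalues $\frac{1}{2(2k+1)}$ with multiplicity $2k+1$. Since $H\equiv 1$ and $|S^2|=4\pi$, we have $W(S^2)=4\pi$ and $\chi(S^2)=2$, so
\begin{equation*}
\Big\{\frac{3W(S^2) - 2\pi \chi(S^2)}{128 \pi}\Big\}^{1/2}
=\Big\{\frac{12\pi-4\pi}{128\pi}\Big\}^{1/2}=\frac{1}{4},
\end{equation*}
while counting with multiplicities ($j\approx k^2$) indeed gives
\begin{equation*}
s_j(\Kcal_{S^2})=\frac{1}{2(2k+1)}\approx\frac{1}{4k}\approx \frac{1}{4}\, j^{-1/2}.
\end{equation*}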
\begin{proof}[Proof of Theorem \ref{main}]
From Proposition \ref{almost self-adjoint}, it only remains to prove the almost self-adjointness of $\Kcal_{\p\GO}$, that is, that
$$ \Kcal^*_{\p\GO}-\Kcal_{\p\GO} $$
is a Hilbert-Schmidt operator.
This follows from the fact that the approximate $\Psi$DO's of $\Kcal^*_{\p\GO}$ and $\Kcal_{\p\GO}$
have exactly the same symbol, by calculations similar to those in section \ref{sec: symbol}.
\end{proof}
\section{Applications}\label{sec: applications}
Theorem \ref{main} states that NP operators in three dimensions have infinite rank.
Regarding the rank of NP operators,
Khavinson-Putinar-Shapiro \cite{KPS} posed the following question:
``{\it The disk is the only planar domain for which the NP operator has finite rank.
It is not known whether there are such domains in higher dimensions.}''
Our answer to this question is summarized as follows:
\par
\begin{cor}[Finite-rank problem]
Let $\Omega$ be a bounded $C^{2, \alpha}$ {\rm($\alpha>0$)} region
in $\Rbb^n$ {\rm($n=2, 3$)}. If the NP operator has finite rank,
then
$$
n=2\ \mbox{and}\ \p\GO=S^1.
$$
Thus a finite-rank NP operator necessarily has rank one.
\end{cor}
Further results can be obtained from known facts about the Willmore energy.
The Willmore energy (\ref{definition of Willmore energy}) is best thought of as a measure of `roundness'; it is not hard to prove
$$
W(\p\GO) \geq 4\pi,
$$
with equality if and only if $\p\GO$ is a round sphere.
For surfaces of higher genus, F. C. Marques and A. Neves \cite{MN} proved
the celebrated ``Willmore conjecture''
$$
W(\p\GO) \geq 2\pi^2.
$$
The equality is achieved by the torus of revolution whose generating circle
has radius $1$ and center at distance $\sqrt{2}$ from the axis of revolution:
$$
T^2_{\text{Clifford}}:=\{ ((\sqrt{2}+\cos u)\cos v, (\sqrt{2}+\cos u)\sin v, \sin u) \in {\Rbb}^3\; |\; (u, v) \in {\Rbb}^2 \}.
$$
As a result, we obtain the following spectral-geometric properties of NP eigenvalues:
\begin{cor}\label{Isospectral property of sphere}
Let $\GO \subset \Rbb^3$ be a bounded region of class $C^{2, \alpha}$. Then
$$\hspace{-6mm}|\lambda_j(\Kcal_{\p\GO})| \succsim \frac{1}{4} j^{-1/2}. $$
The minimal asymptotic constant is attained if and only if $\partial \Omega=S^2$. In particular, if
$\sigma_p(\Kcal_{\p\GO})=\sigma_p(\Kcal_{S^2})$ then $\p\GO=S^2.$
\end{cor}
\begin{cor}\label{Isopectral property of Clifford Torus}
Let $\GO \subset \Rbb^3$ be a bounded region of class $C^{2, \alpha}$ with genus $g(\p\GO)\geq 1$.
Then $$|\lambda_j(\Kcal_{\p\GO})| \succsim \frac{\sqrt{3\pi}}{8} j^{-1/2}.$$
In particular, if $\sigma_p(\Kcal_{\p\GO})=\sigma_p(\Kcal_{T^2_{\text{Clifford}}})$ then $\p\GO\cong T^2_{\text{Clifford}}.$
Here $\cong$ means equality modulo M\"obius transforms.
\end{cor}
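The constant in Corollary \ref{Isopectral property of Clifford Torus} comes from inserting the extremal values into Lemma \ref{Weyl law singular values}: for genus $g(\p\GO)\geq 1$ one has $\chi(\p\GO)=2-2g\leq 0$ and $W(\p\GO)\geq 2\pi^2$, hence
\begin{equation*}
\Big\{\frac{3W(\p\GO) - 2\pi \chi(\p\GO)}{128 \pi}\Big\}^{1/2}
\geq \Big\{\frac{6\pi^2}{128\pi}\Big\}^{1/2}
=\Big\{\frac{3\pi}{64}\Big\}^{1/2}=\frac{\sqrt{3\pi}}{8},
\end{equation*}
with equality in the leading constant exactly when $W(\p\GO)=2\pi^2$ and $\chi(\p\GO)=0$, i.e. for the Clifford torus.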
Other properties of the Willmore energy can also be interpreted as asymptotics of NP eigenvalues.
For instance, Langevin and Rosenberg \cite{LR} showed that the Willmore energy of any knotted embedding of a torus in $\Rbb^3$
is bounded below by $8\pi$, namely,
$$
{W(\p\GO) \geq 8\pi.}
$$
In the terminology of NP eigenvalues, we have
\begin{cor}\label{Decay estimate for knotted tori}
Let $\GO \subset \Rbb^3$ be a bounded region of class $C^{2, \alpha}$ whose boundary $\p\GO$ is a knotted torus.
Then
$$\hspace{-6mm}|\lambda_j(\Kcal_{\p\GO})| \succsim \frac{\sqrt{3}}{4} j^{-1/2}.$$
\end{cor}
As the last application, let us consider plasmonic eigenvalues (see e.g. \cite{Grieser} and references therein). A real number $\Ge$ is called a {\it plasmonic eigenvalue} if the following problem admits
a nontrivial solution $u$ in the space $H^1(\Rbb^3)$:
\beq\label{plasmon}
\begin{cases}
\GD u =0 \quad\quad\quad\quad\, &\text{in}\ {\Rbb}^3\backslash \p\GO ,\\
u|_{-}=u|_{+} \quad\quad\quad\ &\text{on}\ \p\GO ,\\
\Ge\p_n u|_{-}= -\p_n u|_{+} \quad\ \, &\text{on}\ \p\GO.
\end{cases}
\eeq
where the subscripts $+$ and $-$ denote the limits on $\p\GO$ from the outside and the inside of $\GO$, respectively.
The well-known relation \cite{AKMU} between the plasmonic eigenvalues $\Ge_j$ and
the NP eigenvalues $\Gl_j$ gives
\beq\label{plasmonic eigenvalues}
|\epsilon_j - 1| =\Big|\frac{-2\lambda_j}{\lambda_j-1/2}\Big|\sim |4\lambda_j | \sim \Big\{\frac{3W(\p\GO) - 2\pi \chi(\p\GO)}{8 \pi} \Big\}^{1/2} j^{-1/2}.
\eeq
Hence the plasmonic eigenvalues form a sequence converging to $1$, and the right-hand side of \eqnref{plasmonic eigenvalues} gives the rate of convergence.
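As a quick consistency check on the unit sphere, solving \eqnref{plasmonic eigenvalues} for $\Ge$ gives $\Ge = \frac{\Gl+1/2}{1/2-\Gl}$, and substituting the classical NP eigenvalues $\Gl_k=\frac{1}{2(2k+1)}$ of $S^2$ yields
\begin{equation*}
\epsilon_k=\frac{(2k+1)+1}{(2k+1)-1}=\frac{k+1}{k},
\qquad
|\epsilon_k-1|=\frac{1}{k}\approx j^{-1/2}\quad (j\approx k^2),
\end{equation*}
in agreement with the right-hand side of \eqnref{plasmonic eigenvalues}, since
$\big\{\frac{3W(S^2)-2\pi\chi(S^2)}{8\pi}\big\}^{1/2}=\big\{\frac{12\pi-4\pi}{8\pi}\big\}^{1/2}=1$.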
\section{Conclusion and discussions}
We have discussed Weyl's law for NP eigenvalues; it depends on the Willmore energy and the Euler characteristic. However, Theorem \ref{main} holds only for $C^{2, \alpha}$ smooth surfaces, while $\Kcal_{\p\GO}$ is compact already for $C^{1, \alpha}$ surfaces. This suggests that Weyl's law on surfaces with only $C^{1, \alpha}$ smoothness may interpolate between the $C^{0, 1}$ and $C^{2, \alpha}$ behaviors.
Moreover, we do not know the asymptotics of the {\it signed} NP eigenvalues:
\beq
\lambda_1^{+}>\lambda_2^{+}>\cdots >0 >\cdots > \lambda_{2}^{-}> \lambda_{1}^{-}.
\eeq
When we define the {\it signed} Browder-G\r{a}rding densities \cite{Andreev} as
\beq
C_{\pm}(\p\GO)=\frac{1}{8\pi^2} \int_{S^{*}\p\GO}\Big[ \frac{L(x') \xi_2^2 -2M(x')\xi_1 \xi_2 +N(x')\xi_1^2}{4 \det (g_{ij}) \{ \sqrt{\sum_{j, k} g^{jk}(x')\xi_j \xi_k}\}^3} \Big]_{\pm}^2 dx\ d\xi
\eeq
where the subscript $\pm$ denotes the positive and negative part respectively, we believe that
\begin{align*}
\lambda_j^{\pm} \sim \pm C_{\pm}^{1/2}(\p\GO) j^{-1/2}\quad \text{as}\ j\rightarrow \infty.
\end{align*}
If so, then for ellipsoids $C_{-}(\p\GO)=0$ and the negative NP eigenvalues decay faster than $j^{-1/2}$ \cite{Ah2, AA, Ma, Ri2}.
For a torus, $C_{-}(\p\GO)>0$ and infinitely many negative NP eigenvalues exist.
We hope that the truth or falsehood of these conjectures will be established in the near future.
\chapter{A more detailed spectral analysis}\label{a:better}
\section{From Remark \ref{r:better2}(i) to Remark \ref{r:better}(c)}
Let us assume the validity of \ref{r:better2}(i) and prove Remark \ref{r:better}(c). Let $m_0\ge 2$ be the integer such that
\begin{align}\label{z1}
&{\rm spec}\, (L_{\text{st}}, U_{m_0}) \cap \{{\rm Re}\, z > 0\} \neq \emptyset \, ,
\\
&{\rm spec}\, (L_{\text{st}}, U_{m}) \cap \{{\rm Re}\, z > 0\} = \emptyset \,
\quad
\text{for any $m>m_0$} \, .
\end{align}
We show that Remark \ref{r:better}(c) holds with $m = m_0$.
For any $z\in {\rm spec}_{m_0}\, (L_{\text{st}}) \cap \{{\rm Re}\, z > 0\}$ we denote by $V_z:= P_z(L^2_{m_0})$ the image of the Riesz projector
\begin{equation*}
P_z = \frac{1}{2\pi i} \int_\gamma (w-L_{\text{st}})^{-1} dw \, ,
\end{equation*}
where $\gamma$ parameterizes the boundary of a ball containing $z$ and no other eigenvalues of $L_{\text{st}}$.
It is enough to show that $P_z(U_{km_0}) = \{0\}$ for any $k\in \mathbb{Z}\setminus\{-1, 1\}$, $z\in {\rm spec}_{m_0}\, (L_{\text{st}}) \cap \{{\rm Re}\, z > 0\}$ since it gives
$$V_z = P_z(U_{m_0}\cup U_{-m_0}) \subset U_{m_0} \cup U_{-m_0}\, ,$$
where the second inclusion follows from the fact that $U_m$ is always an invariant space of $L_{\text st}$.
If $k>1$, from \eqref{z1} we know that $z\notin {\rm spec}\, (L_{\text{st}}, U_{km_0})$, hence $P_z(U_{km_0})$ is trivial. If $k<-1$,
we reduce to the previous situation by observing that $P_z(U_{km_0}) = \overline{ P_{\bar z}(U_{-km_0})}$.
\section{Proof of Remark \ref{r:better2}(i)} In order to show this point, given Lemma \ref{l:almost-final-2}, we just need to prove the following statement.
\begin{lemma}\label{l:almost-final-1}
For every fixed $\Xi\in \mathscr{C}$ there is $M_0>0$ such that $\mathscr{U}_m$ is empty for every $m\geq M_0$.
\end{lemma}
Indeed, given the conclusion above we infer that $\mathscr{U}_m$ is empty for every $m\geq m_a$ and it thus suffices to select $m_0$ as the largest integer strictly smaller than $m_a$.
Before coming to the proof of Lemma \ref{l:almost-final-1} we state an auxiliary fact which will be used in the argument and which can be readily inferred from the computations in Step 1 of the proof of Lemma \ref{l:will-apply-Rouche}.
\begin{lemma}\label{l:operator-B_z}
For every $0 < \sigma < \tau < 1$ there is a constant $C$ (depending only upon $\sigma$ and $\tau$) such that $B_z := \mathcal{K}_{m_0} \circ \frac{1}{\Xi-z}$ is a bounded operator from $C^\sigma$ to $C^\tau$ for every $z$ with ${\rm Im}\, z>0$ and
\[
\|B_z\|_{\mathcal{L} (C^\sigma, C^\tau)} \leq C\, .
\]
\end{lemma}
\begin{proof}[Proof of Lemma \ref{l:almost-final-1}] The proof will be by contradiction and thus, assuming that the statement is false, we can select:
\begin{itemize}
\item[(i)] a sequence $\{m_j\}\subset [1, \infty[$ with $m_j\to \infty$;
\item[(ii)] a sequence $\{z_j\}\subset \mathbb C$ with ${\rm Im}\, z_j >0$;
\item[(iii)] and a sequence $\{\psi_j\}\subset L^2 (\mathbb R)$ solving the equation
\begin{equation}\label{e:eigenvalue-equation-20}
-\frac{d^2 \psi_j}{dt^2} + m_j^2 \psi_j + \frac{A}{\Xi -z_j} \psi_j = 0\, .
\end{equation}
\end{itemize}
\medskip
{\bf Step 1.} We first prove that $\{z_j\}$ is bounded and every cluster point must be an element of $[0, \Xi (-\infty)]$. Otherwise for a subsequence, not relabeled, we get the estimate
\[
\sup \left\|\frac{A}{\Xi-z_j}\right\|_{L^\infty} =: C_0 < \infty\, .
\]
By scalar multiplying \eqref{e:eigenvalue-equation-20} by $\psi_j$ and taking the real part of the resulting equation we then conclude
\[
\int (|\psi_j'|^2 + m_j^2 |\psi_j|^2) \leq C_0 \int |\psi_j|^2\, ,
\]
which is clearly not feasible because $C_0 < m_j^2$ for sufficiently large $j$ (and $\psi_j$ is nontrivial).
Up to subsequences we can thus assume that $z_j$ converges to some $z_0 \in [0, \Xi (-\infty)]$.
\medskip
{\bf Step 2.} We next analyze the cases $z_0 =0$ and $z_0 = \Xi (-\infty)$. The argument is similar to that used in Section \ref{s:3+4} in case (C). Let us argue first for $z_0=0$. We observe that $\Xi^{-1} |A|$ belongs to $L^1 (]-\infty, N])$ for any fixed $N$ and that, likewise, $|\Xi-z_j|^{-1} |A|$ have a uniform $L^1$ bound on any $]-\infty, N]$. We can then
use Lemma \ref{l:ODE2} to normalize $\psi_j$ so that it is asymptotic to $e^{m_j t}$ and also to write
\[
\psi_j (t) = e^{m_j t} (1+z_j (t))
\]
with
\[
|z_j (t)| \leq \exp \left(\frac{1}{m_j} \int_{-\infty}^N \frac{|A|}{|\Xi-z_j|}\right) -1 \qquad
\mbox{for all $t\leq N$.}
\]
In particular, we have $|z_j (t)|\leq \frac{C(N)}{m_j}$ on $]-\infty, N]$. We next scalar multiply \eqref{e:eigenvalue-equation-20} by $\psi_j$ and take the imaginary part to conclude
\[
- \left(\int_{-\infty}^a + \int_b^\infty\right) \frac{A}{|\Xi-z_j|^2} |\psi_j|^2\leq
\int_a^b \frac{A}{|\Xi-z_j|^2} |\psi_j|^2\, .
\]
In particular, since $\frac{A}{|\Xi-z_j|^2}$ is bounded from above by a constant $C$ independent of $j$ on $[a,b]$ and $-\frac{A}{|\Xi-z_j|^2}$ is bounded from below by a constant $c>0$ independent of $j$ on $[b+1, b+2]$, we conclude
\[
\int_{b+1}^{b+2} |\psi_j|^2 \leq \frac{C}{c} \int_a^b |\psi_j|^2\, .
\]
We next choose $N$ larger than $b+2$ and use the estimate $|z_j (t)|\leq \frac{C(N)}{m_j}$ to argue that, for $j$ large enough, we have $\frac{1}{2} e^{m_j t} \leq |\psi_j (t)| \leq 2 e^{m_j t}$ on $]-\infty, N]$. In particular, we infer
\[
\int_{b+1}^{b+2} e^{2m_j t} \leq C \int_a^b e^{2m_j t}
\]
provided the constant $C$ is chosen large enough (but independent of $j$) and $j$ is large enough. The latter inequality is certainly impossible for $m_j$ large enough, leading to a contradiction.
The argument to exclude $z_0 = \Xi (-\infty)$ is entirely analogous, this time normalizing for $t\to \infty$ and reaching an inequality of type
\[
\int_{a-2}^{a-1} e^{-2m_j t} \leq C \int_a^b e^{-2m_j t}
\]
for a constant $C$ independent of $j$ and any $j$ large enough.
\medskip
{\bf Step 3.} We next examine the last case, that is $z_0 = \Xi (c)$. This time we fix a $\sigma \in \, ]0,1[$ and normalize $\psi_j$ so that $\|\psi_j\|_{C^\sigma}=1$. We observe that
\[
\psi_j = - \mathcal{K}_{m_j} \left(\frac{A}{\Xi-z_j} \psi_j\right)\, ,
\]
and also recall that $\mathcal{K}_{m_j} (\varphi) = \frac{1}{2m_j} e^{-m_j |\cdot|} * \varphi$.
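Indeed, $\mathcal{K}_{m}$ inverts $-\frac{d^2}{dt^2}+m^2$, as a direct Fourier-side computation confirms:
\begin{equation*}
\int_{\mathbb R} \frac{1}{2m}\, e^{-m|t|}\, e^{-i t\xi}\, dt
=\frac{1}{2m}\cdot\frac{2m}{\xi^2+m^2}=\frac{1}{\xi^2+m^2},
\qquad\text{so that}\quad
\Big(-\frac{d^2}{dt^2}+m^2\Big)\circ\mathcal{K}_m=\mathrm{Id}.
\end{equation*}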
We set $m_0=m_a$ and write further
\[
\psi_j = - \mathcal{K}_{m_j} \circ \left(-\frac{d^2}{dt^2} +m_0^2\right) \left(\mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_j}\right) \psi_j\right)\, .
\]
Recalling Lemma \ref{l:operator-B_z}, we can fix a $\tau \in ]\sigma,1[$ to achieve
\[
\left\|\left(\mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_j}\right) \psi_j\right)\right\|_{C^\tau}\leq C
\]
for some constant $C$ independent of $j$.
We will show in the final step that
\begin{itemize}
\item[(Cl)] $\|\mathcal{K}_{m_j} \circ (-\frac{d^2}{dt^2} + m_0^2)\|_{\mathcal{L} (C^\tau, C^\tau)} \leq C$ for some constant $C$ independent of $j$.
\end{itemize}
In particular, we achieve
\begin{equation}\label{e:estimate-C-tau}
\|\psi_j\|_{C^\tau} \leq C\, .
\end{equation}
We now wish to show that indeed $\|\psi_j\|_{C^\sigma} \leq \frac{1}{2}$ for $j$ large enough, which obviously would be a contradiction. In order to achieve the latter estimate we use a Littlewood-Paley decomposition. We fix a cut-off function $\chi$ which is supported in $]\frac{1}{2}, 2[$, define $\chi_\ell (t) :=\chi (2^{-\ell} t)$ for $\ell\in \mathbb N$ and assume that $\chi$ has been chosen so that
\[
\sum_{\ell \in \mathbb N} \chi_\ell \equiv 1 \qquad \mbox{on $[1, \infty[$}.
\]
We then define
\[
\chi_{-1} := 1 - \sum_{\ell \in \mathbb N} \chi_\ell\,
\]
and introduce the Littlewood-Paley operator $\Delta_\ell$ as $\Delta_\ell (\varphi) = \mathscr{F}^{-1} (\chi_\ell \mathscr{F} (\varphi))$, where $\mathscr{F}$ is the Fourier transform.
We finally recall that (see \cite[Section 1.4.2]{Grafakos}), if we define
\[
\|\varphi\|_{X^\sigma} := \sum_{\ell \geq -1} 2^{\sigma \ell} \|\Delta_\ell \varphi\|_{L^\infty}\, ,
\]
then
\[
C (\sigma)^{-1} \|\varphi\|_{X^\sigma} \leq \|\varphi\|_{C^\sigma} \leq C (\sigma) \|\varphi\|_{X^\sigma}\, .
\]
We are now ready to perform our final estimate. We fix a large $N$, which will be chosen later, and for $\ell\geq N$ we write
\begin{align*}
\sum_{\ell \geq N} 2^{\sigma \ell} \|\Delta_\ell \psi_j\|_\infty
&\leq 2^{-N (\tau-\sigma)} \sum_{\ell\geq N} 2^{\tau \ell} \| \Delta_\ell \psi_j\|_\infty
\leq 2^{-N (\tau-\sigma)} C (\tau) \|\psi_j\|_{C^\tau} \leq C 2^{-N (\tau-\sigma)}\, ,
\end{align*}
where the constant $C$ is independent of both $N$ and $j$. Next, for any $\ell$ we observe that
\[
\Delta_\ell \psi_j = \mathcal{K}_{m_j} \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) \underbrace{\left(\Delta_\ell \left(\mathcal{K}_{m_0} \left(\frac{A}{\Xi-z} \psi_j \right)\right)\right)}_{=: \Gamma_{\ell,j}}\, .
\]
Now
\[
\|\Gamma_{\ell,j}\|_{L^\infty} \leq C 2^{-\ell\sigma} \left\|\mathcal{K}_{m_0} \left(\frac{A}{\Xi-z} \psi_j \right)\right\|_{C^\sigma} \leq C 2^{-\ell \sigma}\, .
\]
On the other hand, because of the frequency localization, we have
\[
\Delta_\ell \psi_j = \mathcal{K}_{m_j} \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) \circ (\Delta_{\ell-1}+\Delta_\ell+\Delta_{\ell+1}) (\Gamma_{\ell,j})
\]
and the estimate
\[
\left\|\mathcal{K}_{m_j} \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) \circ \Delta_\ell\right\|_{\mathcal{L} (L^\infty, L^\infty)} \leq \frac{C}{m_j^2} \left(2^{2\ell} + m_0^2\right)\, .
\]
We can therefore write the estimate
\begin{align*}
\|\psi_j\|_{C^\sigma} & \leq \frac{C}{m_j^2} \sum_{\ell=-1}^{N} (2^{(2+2\sigma) \ell} + m_0^2)
+ C 2^{-N (\tau-\sigma)} \\
&\leq \frac{CN}{m_j^2} \left(2^{(2+2\sigma) N} + m_0^2\right) + C 2^{-N (\tau-\sigma)}\, ,
\end{align*}
where the constants $C$ are independent of $N$ and $j$. In particular, we fix first $N$ large enough to get $C 2^{-N (\tau-\sigma)} \leq \frac{1}{4}$ and we then choose $m_j$ large enough so that
\[
\frac{CN}{m_j^2} \left(2^{(2+2\sigma) N} + m_0^2\right) \leq \frac{1}{4}\, .
\]
These two estimates imply $\|\psi_j\|_{C^\sigma} \leq \frac{1}{2}$, contradicting the normalization $\|\psi_j\|_{C^\sigma} = 1$.
\medskip
{\bf Step 4.} To complete the proof of the Lemma we need to show (Cl). We first write
\[
T_{m, \ell} := \Delta_\ell \circ \mathcal{K}_m \circ \left(-\frac{d^2}{dt^2} + m_0^2\right)\, .
\]
The operator $T_{m, \ell}$ is the convolution with a kernel $K_{m, \ell}$ whose Fourier symbol is given by
$\chi_\ell (\xi) \frac{|\xi|^2 + m_0^2}{|\xi|^2 +m^2}$. Hence, for $\ell \geq 0$ we have
\[
K_{m, \ell} (t) = \frac{1}{2\pi} \int \chi \left(\frac{\xi}{2^\ell}\right) \frac{|\xi|^2 + m_0^2}{|\xi|^2 + m^2} e^{i \xi t}\, d\xi
\]
and
\[
(-it)^k K_{m, \ell} (t) = \frac{1}{2\pi} \int \frac{d^k}{d\xi^k} \left( \chi \left(\frac{\xi}{2^\ell}\right) \frac{|\xi|^2 + m_0^2}{|\xi|^2 + m^2}\right) e^{it\xi}\, d\xi\, .
\]
In particular, we easily conclude
\[
\| |t|^k K_{m, \ell}\|_{L^\infty} \leq C (k) 2^{\ell (1-k)}\, ,
\]
for a constant $C (k)$ independent of both $m\geq 1$ and $\ell$, but which depends on $k$. From the latter we can estimate
\begin{align*}
\|K_{m, \ell}\|_{L^1} &\leq \int_{|t|\leq 2^{-\ell}} |K_{m, \ell} (s)|\, ds +
\int_{|t|\geq 2^{-\ell}} \frac{|s^2 K_{m, \ell} (s)|}{|s|^2}\, ds\\
&\leq C + C 2^{-\ell} \int_{2^{-\ell}}^\infty \frac{1}{s^2}\, ds \leq C\, .
\end{align*}
For $\ell = -1$ we likewise conclude
\[
\||t|^k K_{m, -1}\|_{L^\infty} \leq C (k)
\]
for a constant $C(k)$ independent of $m$, but depending on $k$. Once again using the cases $k=0$ and $k=2$ of the latter inequality we achieve
\[
\|K_{m, -1}\|_{L^1} \leq C \, .
\]
We have thus bounded all $\|K_{m, \ell}\|_{L^1}$ with a universal constant $C$ independent of both $m\geq 1$ and $\ell\in \mathbb N \cup \{-1\}$. In particular, since $\|T_{m, \ell}\|_{\mathcal{L} (L^\infty, L^\infty)} = \|K_{m, \ell}\|_{L^1}$ and
\[
\mathcal{K}_m \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) = \sum_{\ell \geq -1} T_{m, \ell}
= \sum_{\ell \ge -1} T_{m, \ell} \circ (\Delta_{\ell-1}+\Delta_\ell+\Delta_{\ell+1})\, ,
\]
we can estimate
\begin{align*}
\left\| \mathcal{K}_m \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) (\varphi)\right\|_{C^\sigma}
&\leq C (\sigma) \sum_{\ell \geq -1} 2^{\sigma \ell} \|T_{m, \ell} (\varphi)\|_{L^\infty}
= C (\sigma) \sum_{\ell \geq -1} 2^{\sigma \ell} \|T_{m, \ell} (\Delta_\ell \varphi)\|_{L^\infty}\\
&\leq C (\sigma) \sum_{\ell \geq -1} 2^{\sigma \ell} \|\Delta_\ell \varphi\|_{L^\infty}
\leq C (\sigma) \|\varphi\|_{C^\sigma}\, .
\end{align*}
This completes the proof of (Cl) and hence of the entire Lemma.
\end{proof}
{\color{red}
\section{Proof of Theorem \ref{thm:spectral-stronger-2}}
In \cite{Vishik2}, Vishik claims the following improved version of Theorem \ref{thm:spectral5}, which would immediately imply Theorem \ref{thm:spectral-stronger-2}.
\begin{theorem}\label{thm:Vishikversion}
There are a function $\Xi\in \mathscr{C}$ and an integer $m_0\geq 2$ such that $\mathscr{U}_{m} = \emptyset$ for any integer $m>m_0$ and $\mathscr{U}_{m_0}$ consists of a single element $z$. Moreover, the algebraic multiplicity of $m_0 z$ as an eigenvalue of $\mathcal{L}_{m_0}$ is $1$.
\end{theorem}
Vishik's suggested proof of Theorem \ref{thm:Vishikversion} builds upon Proposition \ref{p:3+4} and the following improved versions of Proposition \ref{p:5-7} and Proposition \ref{p:almost-final}.
\begin{proposition}\label{prop:Vishicimproved1}\label{PROP:VISHICIMPROVED1}
Assume $- \lambda_a < -1$ and let $m_a=\sqrt{\lambda_a}$. Then there
exist $\varepsilon >0$ and $\delta>0$ with the following property.
For every $h\in ]0, \delta[$, $\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a)) = \{z_{m_a-h}\}$, where $(m_a-h) z_{m_a-h}$ is an eigenvalue of $\mathcal{L}_{m_a-h}$ with algebraic multiplicity $1$.
\end{proposition}
In \cite{Vishik2} Vishik only gives the argument that $\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a))$ contains a single element $z_{m_a-h}$ and the corresponding eigenspace of $(m_a-h)^{-1} \mathcal{L}_{m_a-h}$ has dimension $1$ (i.e. its {\em geometric} multiplicity is $1$, cf. Remark \ref{r:b-also-2}). However, it is essential to have the {\em algebraic} multiplicity equal to $1$ in order to complete his suggested argument. After we pointed out to him the gap in his paper, he suggested in \cite{Vishik3} the proof of Proposition \ref{prop:Vishicimproved1} reported below. Before coming to it, we point out that
a spectral perturbation argument as in the proof of Lemma \ref{l:almost-final-2} (which we outline anyway below)
easily implies the following.
\begin{proposition}\label{prop:Vishicimproved2}
Assume $- \lambda_a<-1$ and let $m_a = \sqrt{\lambda_a}$ and $m_b:= \max \{1, \sqrt{\lambda_b}\}$. Then $\mathscr{U}_m$ consists of a single element $z_m$ for every $m\in ]m_b, m_a[$ and moreover the algebraic multiplicity of $z_m$ as an eigenvalue of $m^{-1} \mathcal{L}_m$ is $1$.
\end{proposition}
Taking the previous proposition for granted, we just need a choice of $\Xi$ for which $\lambda_a >1$ and $]m_b, m_a[$ contains an integer, which is guaranteed by Lemma \ref{L:BOTTOM}, and we conclude Theorem \ref{thm:Vishikversion}.
\medskip
We now explain how to prove Proposition \ref{prop:Vishicimproved2}.
From Proposition \ref{p:almost-final} and Lemma \ref{l:almost-final-1} we know that $\mathscr{U}_m \neq \emptyset$, for every $m\in ]m_b, m_a[$, and $\mathscr{U}_m = \emptyset$ for $m \ge m_a$. Moreover, Remark \ref{rmk:algebraic dim const} implies that the sum of the algebraic multiplicities of $z\in \mathscr{U}_m$, as eigenvalues of $m^{-1} \mathcal{L}_m$, is constant for $m\in ]m_b, m_a[$. Hence, to conclude we just need to prove that the latter is $1$ for some $m\in ]m_b, m_a[$.
To that aim we show that for any $\varepsilon>0$ there exists $\delta>0$ such that $\mathscr{U}_{m_a-h} = \mathscr{U}_{m_a-h}\cap B_{\varepsilon}(\Xi(a))$ for any $h\in ]0,\delta[$. This is enough for our purposes since, together with Proposition \ref{prop:Vishicimproved1}, it gives $\mathscr{U}_{m_a-h}= \mathscr{U}_{m_a-h}\cap B_{\varepsilon}(\Xi(a))=\{ z_{m_a-h}\}$ where $z_{m_a-h}$ is an eigenvalue of $(m_a-h)^{-1} \mathcal{L}_{m_a - h}$ with algebraic multiplicity $1$.
Assume for contradiction the existence of a sequence $(m_j)_{j\in\mathbb N}$ in $]m_b,m_a[$ converging to $m_a$ such that there are $z_j\in \mathscr{U}_{m_j}$ with $|z_j - \Xi(a)|>\varepsilon$ for some $\varepsilon>0$. Up to extracting a subsequence, we may assume $z_j \to z$ for some $z\in \mathbb C$ with $|z-\Xi(a)|\ge \varepsilon$. Proposition \ref{p:3+4} implies that the imaginary part of $z$ is positive. Arguing as in the first step of the proof of Proposition \ref{p:3+4} we can prove that $z\in \mathscr{U}_{m_a}$ and reach a contradiction.
\section{Proof of Proposition \ref{prop:Vishicimproved1}}
The proof of Proposition \ref{prop:Vishicimproved1} can be reduced to the following weaker version using Remark \ref{rmk:algebraic dim const} and the argument outlined in the previous paragraph.
\begin{proposition}\label{prop:Vishikimproved-weaker}
Assume $- \lambda_a < -1$ and let $m_a=\sqrt{\lambda_a}$. Let $h$ and $\varepsilon$ be sufficiently small so that Proposition \ref{p:3+4} and Remark \ref{r:b-also-2} apply, namely
$\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a)) = \{z_{m_a-h}\}$, where $z_{m_a-h}$ is an eigenvalue of $(m_a-h)^{-1} \mathcal{L}_{m_a-h}$ with {\em geometric} multiplicity $1$. Then, if $h$ is chosen possibly smaller, the algebraic multiplicity of $z_{m_a-h}$ is also $1$.
\end{proposition}
We now come to the proof of the latter, which is the heart of the matter. First of all, we introduce a suitable transformation of the space $\mathcal{H}$ (which we recall is the domain of the operator $\mathcal{L}_m$, defined in \eqref{e:def-H}). We introduce the Hilbert space
\[
\mathcal{H}^e :=\left\{ f: \mathbb R \to \mathbb C\, :\, \int |f (t)|^2 e^{-2t}\, dt < \infty\right\}
\]
and the isometry $T: \mathcal{H} \to \mathcal{H}^e$ given by
\[
\gamma (r) \mapsto e^{2t} \gamma (e^t)\, .
\]
Rather than considering the operator $\mathcal{L}_m$ on $\mathcal{H}$, it turns out to be more convenient to consider the operator $T \circ \mathcal{L}_m \circ T^{-1}$ on $\mathcal{H}^e$. Since the spectra of the two operators coincide, with a slight abuse of notation we will keep writing $\mathcal{L}_m$ in place of $T \circ \mathcal{L}_m \circ T^{-1}$, and we will keep $\mathscr{U}_m$ to denote the point spectrum of ${m^{-1}}T \circ \mathcal{L}_m \circ T^{-1}$ in the upper half plane.
Simple computations show that the operator $\mathcal{L}_m$ is given, on $\mathcal{H}^e$ by
\[
\mathcal{L}_m (\alpha) = m \Xi \alpha - m A \varphi
\]
where $\varphi$ is the unique $L^2$ solution of
\[
\varphi'' - m^2 \varphi = \alpha\,
\]
(note that we are claiming $\varphi\in L^2$ rather than $\varphi \in \mathcal{H}^e$, cf. Section \ref{s:eigenvalue-equation}).
We can now come to the main idea behind the simplicity of $z_{m_a-h}$, which is borrowed from \cite{Vishik3}. A prominent role is played by the adjoint of $\mathcal{L}_m$ (considered as a bounded linear operator from $\mathcal{H}^e$ into itself): for the latter we will use the notation $\mathcal{L}_m^\star$.
\begin{lemma}\label{l:aggiunto}
Assume that $h$ and $\varepsilon$ are small enough so that $\{z_{m_a-h}\} =\mathscr{U}_{m_a-h}\cap B_\varepsilon (\Xi (a))$ and $z_{m_a-h}$ has geometric multiplicity $1$ in ${\rm spec}\, ((m_a-h)^{-1} \mathcal{L}_{m_a-h}, \mathcal{H}^e)$. Let $\alpha_h \in \mathcal{H}^e\setminus \{0\}$ be such that $(m_a-h)^{-1}\mathcal{L}_{m_a-h}(\alpha_h) - z_{m_a - h}\alpha_h=0$. If $h$ is small enough, then there is $\beta_h\in \mathcal{H}^e$ such that $(m_a-h)^{-1}\mathcal{L}_{m_a-h}^\star(\beta_h) - \bar z_{m_a-h}\beta_h =0$ and
\begin{equation}\label{e:dual-pairing}
\langle \alpha_h, \beta_h \rangle_{\mathcal{H}^e} = \int \alpha_h (t) \bar \beta_h (t)\, e^{-2t}dt \neq 0\, .
\end{equation}
\end{lemma}
Let us show how the latter implies Proposition \ref{prop:Vishikimproved-weaker}. Assume $z_{m_a-h}$ were an element of ${\rm spec}\, ((m_a-h)^{-1}\mathcal{L}_{m_a-h}, \mathcal{H}^e)\cap B_\varepsilon (\Xi (a))$ with geometric multiplicity $1$ and algebraic multiplicity larger than $1$: our goal is to show that $h$ cannot be too small.
The properties just listed mean that the following bounded operator on $\mathcal{H}^e$,
\[
L_h := (m_a-h)^{-1}\mathcal{L}_{m_a-h} - z_{m_a-h} \, ,
\]
has a 1-dimensional kernel, $0$ is in its point spectrum, and $0$ has algebraic multiplicity strictly larger than $1$. These properties imply that any element $\alpha_h$ in the kernel of $L_h$ (i.e. any eigenfunction of $(m_a-h)^{-1}\mathcal{L}_{m_a-h}$ with eigenvalue $z_{m_a-h}$) is in the image of $L_h$. Fix one such element $\alpha_h$ and let $\eta_h$ be such that $L_h (\eta_h) = \alpha_h$. If $h$ is small enough, we can fix $\beta_h$ as in Lemma \ref{l:aggiunto}, and observe that it is in the kernel of the adjoint operator $L^\star_h$. We then must have
\[
0 \neq \int \alpha_h \bar\beta_h \, e^{-2t}dt= \int L_h (\eta_h) \bar\beta_h \, e^{-2t}dt
= \int \eta_h \overline{L_h^\star (\beta_h)} \, e^{-2t}dt = 0\, ,
\]
which is not possible.}
\begin{proof}[Proof of Lemma \ref{l:aggiunto}]
{\color{red}
We begin by proving the following claim:
\begin{itemize}
\item[(Cl)] For any $z\in \mathscr{U}_m$, with $m>1$, such that
$m^{-1}\mathcal{L}_m(\alpha_z) - z \alpha_z = 0$,
there exists $\beta_z\in \mathcal{H}^e$ such that
\begin{equation}
m^{-1}\mathcal{L}_m^\star(\beta_z) - \bar z \beta_z = 0 \, ,
\end{equation}
and
\begin{equation}\label{eq:keyfunction}
\left\langle \alpha_z, \beta_z \right\rangle_{\mathcal{H}^e}
=
\int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - z)^2}\varphi_z(t)^2\, d t\, ,
\end{equation}
where $\varphi_z$ is the unique solution in $L^2(\mathbb{R})$ of $\varphi_z'' - m^2\varphi_z = \alpha_z$.
\end{itemize}
To that aim we first observe that the adjoint of $\mathcal{L}_m$ in $\mathcal{H}^e$ is given by
\begin{equation}
\mathcal{L}_m^\star (\alpha) = m( \Xi \alpha - e^{2t} \mathcal{K}_m(A\alpha e^{-2t})) \, ,
\end{equation}
where $\mathcal{K}_m$ denotes the inverse of $\frac{d^2}{dt^2} - m^2$ as a closed unbounded self-adjoint operator in $L^2(\mathbb{R})$ (so that $\varphi_z = \mathcal{K}_m (\alpha_z)$).
Notice that $\mathcal{L}_m^\star$ is well defined because $e^{-t}\alpha \in L^2(\mathbb{R})$ and $A\sim e^{2t}$ as $t\to -\infty$.
We now observe that, if $z\in \mathscr{U}_m$, $m^{-1}\mathcal{L}_m (\alpha_z) = z \alpha_z$, and $\beta_z$ is defined by
\[
\beta_z:= e^{2t}\frac{\bar \varphi_z}{\Xi - \bar z}
= e^{2t}\frac{\mathcal{K}_m(\bar \alpha_z)}{\Xi - \bar z}\, ,
\]
then
\begin{equation}\label{eq:adj}
m^{-1}\mathcal{L}_m^\star (\beta_z) = \bar z \beta_z \, .
\end{equation}
Notice that $\beta_z\in W^{2,2}_{\rm loc}\cap \mathcal{H}^e$ decays exponentially fast at $\infty$ thanks to the bound $| \varphi_z(t)|\le C e^{-m|t|}$, for every $t\in \mathbb{R}$, proven in Lemma \ref{c:decay}.
Let us now verify \eqref{eq:adj}:
We first observe that
\begin{equation}
\alpha_z = \frac{A\varphi_z}{\Xi - z} \, ,
\end{equation}
hence
\begin{align*}
m^{-1}\mathcal{L}_m^\star (\beta_z) & = \Xi \beta_z - e^{2t}\mathcal{K}_m \left(\frac{A \bar \varphi_z}{\Xi - \bar z}\right) =
\Xi \beta_z - e^{2t}\mathcal{K}_m(\bar \alpha_z)
\\& = \Xi \beta_z -(\Xi - \bar z) \beta_z = \bar z \beta_z \, .
\end{align*}
It is now immediate to conclude \eqref{eq:keyfunction}.
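Indeed, since $\bar \beta_z = e^{2t} \varphi_z/(\Xi - z)$ and $\alpha_z = A \varphi_z/(\Xi - z)$, we have
\[
\left\langle \alpha_z, \beta_z \right\rangle_{\mathcal{H}^e}
= \int \alpha_z (t) \bar \beta_z (t)\, e^{-2t}\, dt
= \int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - z)^2}\, \varphi_z(t)^2\, dt\, .
\]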
}
\medskip
{\color{red}
In order to simplify our notation we use $\mathcal{L}_h$, $z_h$ and $m_h$ in place of $\mathcal{L}_{m_a-h}$, $z_{m_a-h}$ and $m_a-h$.
Given $\alpha_h\in \mathcal{H}^e$ as in the statement of the Lemma we denote by $\varphi_h$ the unique $L^2$ solution of $\varphi_h'' - m_h^2 \varphi_h = \alpha_h$. We can now apply (Cl) above to find $\beta_h\in \mathcal{H}^e$ which solves $m_h^{-1}\mathcal{L}_h^\star (\beta_h) = \bar z_h \beta_h$ and such that
\begin{equation}\label{eq:keyfunction1}
\left\langle \alpha_h, \beta_h \right\rangle_{\mathcal{H}^e}
=
\int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - z_h)^2}\varphi_h(t)^2\, d t\, .
\end{equation}
To conclude the proof it suffices to show that, after appropriately normalizing the functions $\alpha_h$ (i.e. after multiplying them by an appropriate constant factor, which might depend on $h$) we have
\begin{equation}\label{e:limit-nonzero}
\lim_{h\to 0} \left\langle \alpha_h, \beta_h \right\rangle_{\mathcal{H}^e}
= c \neq 0 \, .
\end{equation}
Note that for the latter conclusion, which we will prove in the next two steps, we will use the assumption that $\alpha_h\neq 0$.
\bigskip
{\bf Step 1:} We show that, up to multiplication of $\alpha_h$ by a suitable constant factor (which might vary with $h$), $\varphi_h \to \varphi$ in $W^{1,2}$ and in $C^{1,\alpha}$, as $h\to 0$, where $\varphi \in W^{2,\infty}$ is a nontrivial solution to
\begin{equation}
- \frac{d^2 \varphi}{dt^2} + m_a^2 \varphi + \frac{A}{\Xi - \Xi (a)} \varphi = 0\, .
\end{equation}
By Remark \ref{r:phi(a)-nonzero}, the nontriviality of $\varphi$ implies $\varphi (a) \neq 0$, and hence, up to multiplication by another constant factor, we will assume, without loss of generality, that $\varphi (a)=1$.
Recall that $\varphi_h$ solves the equation
\begin{equation}\label{e:ODE-again-100}
- \frac{d^2 \varphi_h}{dt^2} + m_h^2 \varphi_h + \frac{A}{\Xi - z_h} \varphi_h
= 0 \, .
\end{equation}
For the moment, let us normalize the functions so that
\begin{equation}\label{e:normalization-2}
\int (|\varphi_h'|^2 + m_h^2 |\varphi_h|^2) = 1\, ,
\end{equation}
as in \eqref{e:L2-normalization}. We then can argue as for the bounds \eqref{e:exp-bound-1} and \eqref{e:exp-bound-2} to derive the existence of constants $C$ and $\beta>2$ (independent of $h$) such that
\begin{equation}\label{e:exp-bound-3}
\left|\varphi_h (t)\right| \leq C e^{- \beta |t|} \quad \forall t\, .
\end{equation}
Recalling Section \ref{s:5-7-part-II}, we know that $z_h = \Xi (a) + c(a) h + o (h)$, where $c(a)$ is a complex number whose imaginary part is positive and will be denoted by $d(a)$. Using the monotonicity of $\Xi$ we can write $z_h = \Xi (t (h)) + i (d(a) h + o (h))$ for some $t(h)$ which satisfies the bound $|t (h) -a|\leq C h$ for some positive constant $C$. In particular, using the mean value theorem and the fact that the derivative of $\Xi$ does not vanish on $[a-1, a+1]$, we get
\[
|\Xi (t) - z_h|^2\geq C^{-1} (|t-t(h)|^2 + |h|^2)\qquad \forall t\in [a-1,a+1]\, ,
\]
where $C$ is some positive constant. Next, using that $|t(h)-a|\leq C h$, we conclude the estimate
\[
|\Xi (t) - z_h|\geq C^{-1} |t-a| \qquad \forall t\in [a-1, a+1]\, ,
\]
with a constant $C$ independent of $h$. Since $a$ is a zero of $A$, we finally conclude that the functions
\[
\frac{A(t)}{\Xi (t) - z_h}
\]
are in fact uniformly bounded, independently of $h$. Using the latter estimate and \eqref{e:exp-bound-3} we thus infer that
\begin{equation}\label{e:exp-bound-4}
|\varphi_h'' (t)|\leq C e^{-\beta |t|}\, .
\end{equation}
In particular, upon extraction of a subsequence, we can assume that the $\varphi_h$ converge to a function $\varphi$ strongly in $W^{1,2}$, weakly in $W^{2,\infty}$, and hence strongly in $C^{1, \alpha}$ for every $\alpha<1$. In particular, because of the normalization \eqref{e:normalization-2}, $\varphi$ is a nontrivial $W^{2, \infty}$ function, satisfying the same exponential decay as in \eqref{e:exp-bound-3} and \eqref{e:exp-bound-4}. Moreover, given the bound on the functions $\frac{A(t)}{\Xi (t) -z_h}$, $\varphi$ is in fact a solution of
\begin{equation}\label{e:ODE-again-101}
- \frac{d^2 \varphi}{dt^2} + m_a^2 \varphi + \frac{A}{\Xi - \Xi (a)} \varphi = 0\, .
\end{equation}
Recalling Remark \ref{r:phi(a)-nonzero}, $\varphi (a)\neq 0$ and $\varphi$ is unique up to a constant factor. In particular, we must have
\begin{equation}\label{e:comoda}
\liminf_{h\downarrow 0} |\varphi_h (a)|>0,
\end{equation}
otherwise for a suitable subsequence we would have convergence to a nontrivial solution $\varphi$ for which $\varphi (a)=0$. Because of \eqref{e:comoda} we can use the different normalization $\varphi_h (a) = 1$, which in turn implies that $\varphi_h$ converges (without extracting subsequences) to the unique $W^{2,2}$ solution $\varphi$ of \eqref{e:ODE-again-101} which satisfies $\varphi (a)=1$.
\medskip
{\bf Step 2:} We prove that
\begin{equation}
\lim_{h\to 0} \, {\rm Im} \int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - z_h)^2}\varphi_h(t)^2\, d t
=
\frac{2A'(a)\varphi(a)^2}{d(a)\Xi'(a)^2}\int_{\mathbb{R}} \frac{s^2}{(1+s^2)^2} ds \, .
\end{equation}
Recalling that $z_h = \Xi(t(h)) + i (d(a)h + o(h))$, we write
\begin{align*}
{\rm Im}\, \left[\frac{A}{(\Xi-z_h)^2} \varphi_h^2\right]
& = {\rm Im}\, \left[\frac{A((\Xi-\Xi(t(h)))+ i(d(a)h + o(h)))^2}{((\Xi-\Xi(t(h)))^2 + (d(a)h + o(h))^2)^2}({\rm Re}\, \varphi_h + i {\rm Im}\, \varphi_h)^2\right]
\\&
= \frac{2(d(a)h + o(h)) A(\Xi - \Xi(t(h)))}{((\Xi-\Xi(t(h)))^2 + (d(a)h + o(h))^2)^2}({\rm Re}\, \varphi_h^2 - {\rm Im}\, \varphi_h^2)
\\ & \qquad +
\frac{2A }{(\Xi-\Xi(t(h)))^2 + (d(a)h + o(h))^2}{\rm Re}\, \varphi_h {\rm Im}\, \varphi_h
\\ & \qquad -\frac{ 4(d(a)h + o(h))^2 A}{((\Xi-\Xi(t(h)))^2 + (d(a)h + o(h))^2)^2}{\rm Re}\, \varphi_h {\rm Im}\, \varphi_h
\\ & =: I_h + II_h + III_h \, .
\end{align*}
To ease notation we set
\begin{equation}
f_h := {\rm Re}\, \varphi_h^2 - {\rm Im}\, \varphi_h^2 \, ,
\quad
g_h := {\rm Re}\, \varphi_h\, {\rm Im}\varphi_h \, ,
\end{equation}
and observe that $f_h \to \varphi^2$, $g_h \to 0$ as $h\to 0$, where the convergence is, in both cases, in the strong topologies of $L^2$ and of $C^\alpha$, for every $\alpha <1$.
We will show below that:
\begin{align}
\lim_{h\to 0} \int I_h &= \frac{2A'(a)\varphi(a)^2}{d(a) \Xi'(a)^2}\int_{\mathbb{R}} \frac{s^2}{(1+s^2)^2} ds=: L (a)\label{e:limit-I}\, ,\\
\lim_{h\to 0} \int II_h &= 0\label{e:limit-II}\, ,\\
\lim_{h\to 0} \int III_h &=0\label{e:limit-III}\, .
\end{align}
Considering that none of the numbers $\Xi' (a)$, $A'(a)$, $\varphi (a)$, and $d(a)$ vanish, $L(a)\neq 0$. This implies \eqref{e:limit-nonzero} and concludes the proof. We next study separately the three limits above.
\medskip
{\bf Proof of \eqref{e:limit-I}.}
There exist $\delta>0$ and $r>0$ such that, for any sufficiently small $h$, one has $|\Xi(t) - \Xi(t(h))|>\delta$ for all $t\in \mathbb{R}\setminus (a-r/2,a+r/2)$. This implies that
\begin{equation}
\lim_{h \to 0} \int_{\mathbb{R}\setminus (t(h) - r, t(h) + r)} I_h = 0 \, ,
\end{equation}
hence, we are left with
\begin{equation}
\lim_{h \to 0} 2h d(a)\int_{t(h) - r}^{t(h) + r} \frac{A(t)(\Xi(t)- \Xi(t(h)))}{((\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2)^2} f_h(t)
\, d t \, .
\end{equation}
We change variables according to $t= t(h) + sh$:
\begin{align*}
2\int_{-\frac{r}{h}}^{\frac{r}{h}} s & \left(\frac{A(t(h) + sh)}{h} \frac{\Xi(t(h) + sh )- \Xi(t(h))}{sh}\right) \times
\\&
\left( s^2\left(\frac{\Xi(t(h) + sh)- \Xi(t(h))}{sh}\right)^2 + (d(a) + o(h)/h)^2 \right)^{-2}
f_h(t(h) + sh)\, d s
=: C(h) \, .
\end{align*}
Notice that, for any $s\in \mathbb{R}$, we have
\begin{equation}
\lim_{h \to 0}\frac{\Xi(t(h) + sh)- \Xi(t(h))}{sh} = \Xi'(a) \, .
\end{equation}
Moreover the monotonicity of $\Xi$ implies that
\begin{equation}
1/C \le \left| \frac{\Xi(t(h) + sh)- \Xi(t(h))}{sh} \right| \le C
\quad \text{for any $s\in (-r/h, r/h)$} \, .
\end{equation}
Notice that
\begin{equation}
\frac{A(t(h) + sh)}{h}
=
\frac{A(t(h) + sh)-A(t(h))}{h} + \frac{A(t(h)) - A(a)}{h} \, ,
\end{equation}
hence, up to extracting a subsequence $h_i \to 0$, we have
\begin{equation}
\frac{A(t(h_i) + sh_i)}{h_i} \to A'(a)s + x
\end{equation}
for some $x\in \mathbb{R}$ (recall that $|t(h)-a|\leq C h$ and note that $x$ might depend on the subsequence).
Collecting all the estimates above, and using the dominated convergence theorem we deduce that, along the subsequence $h_i$,
\begin{equation}
\lim_{i\to\infty} C(h_i)= 2\int_{\mathbb{R}} \frac{s(A'(a)s + x)\Xi'(a)}{(s^2 \Xi'(a)^2 + d(a)^2)^2} \varphi(a)^2\, ds
=
\frac{2A'(a)\varphi(a)^2}{d(a) \Xi'(a)^2}\int_{\mathbb{R}} \frac{s^2}{(1+s^2)^2} ds \, .
\end{equation}
Observe that the limit does not depend on $x$ and hence does not depend on the chosen subsequence.
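In the last equality we have used that the contribution of the term proportional to $x$ vanishes by oddness, together with the change of variables $s = d(a)\sigma/\Xi'(a)$ (for definiteness we assume $\Xi'(a)>0$; otherwise replace $\Xi'(a)$ with $|\Xi'(a)|$ in the change of variables), which gives
\[
2 A'(a) \varphi(a)^2\, \Xi'(a) \int_{\mathbb{R}} \frac{s^2}{(s^2 \Xi'(a)^2 + d(a)^2)^2}\, ds
= \frac{2A'(a)\varphi(a)^2}{d(a) \Xi'(a)^2}\int_{\mathbb{R}} \frac{\sigma^2}{(1+\sigma^2)^2}\, d\sigma\, .
\]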
\medskip
{\bf Proof of \eqref{e:limit-III}.} Arguing as we did for $I_h$ and using that $g_h\to 0$ in $C^{1/2}$, as $h\to 0$, we easily deduce that
\begin{equation}
\lim_{h\to 0} \int III_h = 0 \, .
\end{equation}
\medskip
{\bf Proof of \eqref{e:limit-II}.}
We need to show that
\begin{equation}
\lim_{h \to 0} \int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} g_h(t)\, dt = 0 \, .
\end{equation}
Observe that $G_h := g_h/|\varphi_h|^2 = |\varphi_h|^{-2} {\rm Re}\, \varphi_h {\rm Im}\, \varphi_h $, and in particular, $|G_h|\leq \frac{1}{2}$. Moreover there exists $r>0$ such that $G_h \to 0$ in $(a-r, a+r)$ in the $C^\alpha$ topology for every $\alpha<1$ (here, we are using that $|\varphi_h(a)|^2 \to |\varphi(a)|^2\neq 0$ as $h\to 0$).
We write
\begin{align}
\int_\mathbb{R} & \frac{A(t)|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} G_h(t)\, dt
\\& =
\int_\mathbb{R} \frac{A(t)|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} (G_h(t) - G_h(t(h)))\, dt \, ,
\end{align}
where we took advantage of the identity
\begin{equation}
\int_\mathbb{R} \frac{A(t)|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2}\, d t = 0 \, ,
\end{equation}
proven in \eqref{e:imaginary-trick}.
Arguing as we did for $I_h$, we can reduce the problem to showing that
\begin{align}
\lim_{h \to 0} & \int_{t(h) - r}^{t(h) + r} \frac{A(t)|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} (G_h(t) - G_h(t(h)))\, dt = 0\, .
\end{align}
We split the integral to the sum of
\begin{align*}
J_1 (h) &:= \int_{t(h)-r}^{t(h)+r} \frac{(A(t)- A(t (h)))|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} (G_h(t) - G_h(t(h)))\, dt\\
J_2 (h) &:= A(t (h)) \int_{t(h)-r}^{t(h)+r} \frac{|\varphi_h(t)|^2}{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} (G_h(t) - G_h(t(h)))\, dt\, .
\end{align*}
Next observe that, in the interval that interests us, the following inequalities hold provided $r$ and $h$ are sufficiently small:
\begin{align*}
|A (t) - A(t(h))|&\leq C |t- t(h)|\\
|A(t(h))| &= |A (t (h))-A(a)|\leq C|t(h)-a|\leq C h\\
|G_h (t) - G_h (t (h))|&\leq \|G_h\|_{C^{1/2} (a-r, a+r)} |t-t(h)|^{1/2}\\
|\Xi (t) - \Xi (t(h))| &\geq C^{-1} |t-t(h)|\\
(d(a)h + o (h))^2 &\geq C^{-1} h^2\, .
\end{align*}
Since $\|\varphi_h\|_{L^\infty} \leq C$, we can change variable in the integrals to $\sigma =t-t(h)$ and estimate them as follows:
\begin{align*}
|J_1 (h)|&\leq C \|G_h\|_{C^{1/2} (a-r,a+r)} \int_{-r/2}^{r/2} |\sigma|^{-1/2}\, d\sigma \leq C\|G_h\|_{C^{1/2}} r^{1/2}\, ,\\
|J_2 (h)| &\leq C h \int_{-\infty}^\infty \frac{|\sigma|^{1/2}}{\sigma^2 + C^{-1} h^2}\, d\sigma
= C h^{1/2} \int \frac{|\tau|^{1/2}}{\tau^2 + C^{-1}} d\tau \leq C h^{1/2}\, .
\end{align*}
Clearly $J_2 (h)\to 0$, while $J_1 (h) \to 0$ because $\|G_h\|_{C^{1/2} (a-r,a+r)} \to 0$.}
\end{proof}
\chapter{Proofs of technical statements}
\section{Proof of Remark \ref{r:bounded}}
More generally, we will show here that, for any $q_0\in[1,2[$ and $q_1\in]2,\infty]$, it is true that \begin{equation*}\lVert K_2*\omega\rVert_{L^\infty(\ensuremath{\mathbb R}^2;\ensuremath{\mathbb R}^2)}\le C(q_0, q_1) (\lVert \omega\rVert_{L^{q_0}(\ensuremath{\mathbb R}^2)}+\lVert \omega\rVert_{L^{q_1}(\ensuremath{\mathbb R}^2)})\end{equation*} for all $\omega\in L^{q_0}\cap L^{q_1}$.
Indeed, passing to polar coordinates, one sees that $K_2\vert_{B_1}\in L^{q_1^*}(B_1; \ensuremath{\mathbb R}^2)$ and $K_2\vert_{\ensuremath{\mathbb R}^2\setminus B_1}\in L^{q_0^*}(\ensuremath{\mathbb R}^2\setminus B_1;\ensuremath{\mathbb R}^2)$, where $q_0^*, q_1^*$ are given by $\frac1{q_i}+\frac1{q_i^*}=1$ for $i\in\{0,1\}$. Hölder's inequality implies that for any $x\in\ensuremath{\mathbb R}^2$,
\begin{equation*}
\begin{split}
\abs{(K_2*\omega)(x)} &= \abs{((K_2 \mathbf 1_{B_1})*\omega)(x)+((K_2 (1-\mathbf 1_{B_1}))*\omega)(x)} \\
&\le\norm{K_2}_{L^{q_1^*}(B_1)}\norm{\omega}_{L^{q_1}(\ensuremath{\mathbb R}^2)} + \norm{K_2}_{L^{q_0^*}(\ensuremath{\mathbb R}^2\setminus B_1)}\norm{\omega}_{L^{q_0}(\ensuremath{\mathbb R}^2)} \\
&\le C(q_0, q_1) (\lVert \omega\rVert_{L^{q_0}(\ensuremath{\mathbb R}^2)}+\lVert \omega\rVert_{L^{q_1}(\ensuremath{\mathbb R}^2)}).
\end{split}
\end{equation*}
Since $x$ is arbitrary, this proves the claim above.
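For the integrability of the kernel, recall that $|K_2 (x)| = (2\pi |x|)^{-1}$, so that, in polar coordinates,
\[
\int_{B_1} |K_2|^{q_1^*} = (2\pi)^{1-q_1^*} \int_0^1 r^{1-q_1^*}\, dr < \infty\, ,
\]
because $q_1>2$ implies $q_1^* < 2$; likewise $K_2\vert_{\ensuremath{\mathbb R}^2\setminus B_1} \in L^{q_0^*}$ because $q_0 < 2$ implies $q_0^* > 2$, so that $r^{1-q_0^*}$ is integrable on $[1, \infty[$.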
\section{Proof of Theorem \ref{thm:Yudo}}
\textbf{Existence.} The existence part follows from a classical density argument. Take any sequence $(\omega_0^{(n)})_{n\in\mathbb N}$ of functions in $L^1\cap C^\infty_c$ that converges strongly in $L^1$ to $\omega_0$. Analogously, pick a sequence of smooth functions $(f_n)_{n\in\mathbb N}$ in $C^\infty_c (\ensuremath{\mathbb R}^2\times[0, T])$ converging in $L^1(\ensuremath{\mathbb R}^2\times[0,T])$ to $f$ and satisfying the bound $\|f_n (\cdot, t)\|_{L^\infty} \leq \|f (\cdot, t)\|_{L^\infty}$ for a.e. $t$. Then, let $\omega^{(n)}$ denote the solution of the corresponding Cauchy problem of the Euler equations in vorticity form. The existence of such solutions is classical and well known, see for instance \cite[Theorem A]{McGrath}. Following Remark \ref{r:A-priori-estimates}, these solutions satisfy all the a priori estimates needed in Proposition \ref{p:convergence}. Therefore, following the proof of Proposition \ref{p:convergence}, one obtains, in the limit $n\to\infty$, a solution $\omega\in L^\infty([0,T]; L^1\cap L^\infty)$ of the given Cauchy problem. Furthermore, since the a priori estimates of Remark \ref{r:A-priori-estimates} are uniform in $n$, one gets $K_2*\omega\in L^\infty([0,T]; L^2)$.
\begin{remark}
The proof of Proposition \ref{p:convergence} is given for a fixed force $f$, but a straightforward adaptation of the arguments handles the case above, namely with a sequence of forces $(f_n)_{n\in\mathbb N}$ that converges in $L^1(\ensuremath{\mathbb R}^2\times[0,T])$ to a given $f$. More precisely, the only difference occurs in the term $I_4 (k)$ of \eqref{e:term-I4}, which anyway enjoys convergence to the same limit.
\end{remark}
\textbf{Uniqueness.} The uniqueness proof needs two important facts. The first is a well-known ODE inequality, whose short proof is given, for the reader's convenience, at the end of the section.
\begin{lemma}\label{l:ODE lemma}
Let $T>0$ and let $E:[0,T]\to[0,\infty[$ be a differentiable function satisfying
\begin{equation}\label{e:ODE inequality for E}
\dot E(t)\le p M E(t)^{1-1/p} \quad \text{ and }\quad E(0)=0
\end{equation}
for some fixed $M>0$. Then $E(t)\le (Mt)^p$ for all $t\in[0,T]$.
\end{lemma}
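Note that the conclusion of Lemma \ref{l:ODE lemma} is sharp: the function $E(t) = (Mt)^p$ satisfies \eqref{e:ODE inequality for E} with equality, since
\[
\frac{d}{dt} (Mt)^p = p M^p t^{p-1} = p M \left((Mt)^p\right)^{1-1/p}\, .
\]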
The second is the classical Calderón-Zygmund $L^p$ estimate, where we need the sharp $p$-dependence of the corresponding constant. This fact is also well known, cf. for instance \cite[Formula (8.45), page 322]{MajdaBertozzi}.
\begin{lemma}\label{l:Estimate on Lp norm of gradient of velocity}
For every $p_0>1$ there is a constant $c (p_0)$ with the following property.
If $v=K_2*\omega$ for some $\omega\in L^1 \cap L^p(\ensuremath{\mathbb R}^2)$ with $p\in [p_0, \infty[$, then $\norm{D v}_{L^p}\le p\, c (p_0) \norm{\omega}_{L^p}$.
\end{lemma}
Now, let $v_1=K_2*\omega_1, v_2=K_2*\omega_2$ be two solutions of \eqref{e:Euler} satisfying the assumptions of Theorem \ref{thm:Yudo} and note that $w:=v_1-v_2$ solves
\begin{equation}\label{e:Gleichung fuer die Differenz}
\partial_t w +(v_1\cdot\nabla)w +(w\cdot\nabla)v_2=-\nabla(p_1-p_2)
\end{equation}
(where $p_1, p_2$ are the pressures corresponding to $v_1$ and $v_2$). Clearly
\[
E(t):=\int_{\ensuremath{\mathbb R}^2} |w(x,t)|^2\,\mathrm dx \leq 2 \int_{\ensuremath{\mathbb R}^2} |v_1 (x,t)|^2\,\mathrm dx + 2 \int_{\ensuremath{\mathbb R}^2} |v_2 (x,t)|^2\,\mathrm dx <\infty
\]
is a bounded function on $[0,T]$.
We take the scalar product of \eqref{e:Gleichung fuer die Differenz} with $w$, integrate by parts, and use that $v_1$, $v_2$, and $w$ are divergence free to conclude
\begin{align*}
\dot E(t) = & - 2\int_{\ensuremath{\mathbb R}^2}((w\cdot\nabla)v_2)w\,\mathrm dx
\le 2\int_{\ensuremath{\mathbb R}^2}|w(x,t)|^2 \abs{D v_2(x,t)}\,\mathrm dx\\
\le & 2\norm{\nabla v_2(\cdot, t)}_{L^p}\norm{w(\cdot, t)}_{L^\infty}^{2/p}\norm{w(\cdot,t)}_{L^2}^{2-2/p}\,.
\end{align*}
Using Remark \ref{r:bounded}, we also have
\begin{equation*}
\begin{split}
\sup_{t\in[0,T]} \norm{w(\cdot, t)}_{L^\infty} &\le \sup_{t\in[0,T]}(\norm{v_1(\cdot, t)}_{L^\infty}+\norm{v_2(\cdot, t)}_{L^\infty}) \\
&\le C \sup_{t\in[0, T]}(\norm{\omega_1(\cdot, t)}_{L^1}+\norm{\omega_1(\cdot, t)}_{L^\infty}+\norm{\omega_2(\cdot, t)}_{L^1}+\norm{\omega_2(\cdot, t)}_{L^\infty}) <\infty\, .
\end{split}
\end{equation*}
Next fix any $p\geq 2$. From Lemma \ref{l:Estimate on Lp norm of gradient of velocity} and the classical $L^p$ interpolation we conclude
\begin{equation*}
\norm{D v_2(\cdot, t)}_{L^p}\le p c \norm{\omega_2 (\cdot, t)}_{L^p}\le p c \norm{\omega_2 }_{L^\infty([0,T]; L^1)}^{1/p}\norm{\omega_2}_{L^\infty([0,T]; L^\infty)}^{1-1/p}.
\end{equation*}
Therefore, $\dot E(t)\le p M_p E(t)^{1-1/p}$
with
\begin{align*}
M_p &= 2\norm{w}_{L^\infty([0,T];L^\infty)}^{2/p} c \norm{\omega_2}_{L^\infty([0,T]; L^1)}^{1/p}\norm{\omega_2 }_{L^\infty([0,T]; L^\infty)}^{1-1/p}\\
&\le 2c \left({\textstyle{\frac{1}{p}}}\norm{w}_{L^\infty([0,T];L^\infty)}^2\norm{\omega_2}_{L^\infty([0,T];L^1)}+\left(1-{\textstyle{\frac{1}{p}}}\right)\norm{\omega_2}_{L^\infty([0,T];L^\infty)}\right) \\
&\le 2c (\norm{w}_{L^\infty([0,T];L^\infty)}^2\norm{\omega_2}_{L^\infty([0,T];L^1)}+\norm{\omega_2}_{L^\infty([0,T];L^\infty)})=: M<\infty\, .
\end{align*}
We can thus apply
Lemma \ref{l:ODE lemma} to obtain that $E(t)\le (M_p t)^p \leq (Mt)^p$.
In particular, for any $t\leq \frac{1}{2M}$ we have $E(t)\le \frac 1{2^p}$ and we can let $p$ tend to $\infty$ to infer $E(t)=0$ for every $t\leq \frac{1}{2M}$. Since the same estimates apply to any translation $\tilde{E} (t):= E (t+t_0)$ of the function $E$, we immediately conclude that $E$ vanishes identically, namely that $v_1=v_2$ on $\mathbb R^2\times [0,T]$.
\begin{proof}[Proof of Lemma \ref{l:ODE lemma}] Fix an arbitrary $t_0\leq T$ and note that if $E(t_0)=0$ there is nothing to show. Hence assume $E(t_0)> 0$ and set $a:=\sup\{t:E(t)=0\text{ and }t\le t_0\}$ (note that the set is nonempty because $E (0)=0$). $E(a)=0$ by continuity of $E$ and clearly $E(t)>0$ for all $t\in]a,t_0]$. Therefore, we can divide \eqref{e:ODE inequality for E} by $E(t)^{1-1/p}$ to obtain that $\dot E(t) E^{1/p-1}(t)\le p M$
for all $t\in]a, t_0]$. Integrating both sides gives
\begin{equation}\label{e:Integral bound on E}
\int_{a}^{t_0} \dot E(t) E^{1/p-1}(t)\,\mathrm dt\le p M (t_0-a)\, .
\end{equation}
But the left hand side equals $p E^{1/p}(t_0)-pE^{1/p}(a)=p E^{1/p}(t_0)$, from which we infer
$E^{1/p}(t_0)\le M (t_0-a) \le M t_0$.
\end{proof}
\section{Proof of Proposition \ref{p:convergence}}
Recall first the following classical metrizability result of weak${}^*$ topologies of separable Banach spaces.
\begin{lemma}[Metrizability Lemma]\label{l:Metrizability}
Let $X$ be a separable Banach space and let $K\subset X^*$ be weakly${}^*$-compact. Then $K$ is metrizable in the weak${}^*$ topology inherited from $X^*$ and a metric that induces this topology is given by
\begin{equation}\label{e:Metrization-of-weak-star-topology}
d(l, \tilde l)=\sum_{n=1}^\infty 2^{-n}\min\{1, \vert l(x_n)-\tilde l(x_n)\vert\},
\end{equation}
where $(x_n)_{n\in \mathbb N}$ is any sequence in $X$ such that $\{x_n:n\in\mathbb N\}$ is dense in $X$.
\end{lemma}
Now on to the proof of Proposition \ref{p:convergence}. We will prove convergence of the $\omega_{\varepsilon, k}$ to a $\omega_\varepsilon$ for fixed $\varepsilon$ and $k\to\infty$ in the space $C([0, T]; K)$, where $K:=\{u\in L^q(\ensuremath{\mathbb R}^2):\lVert u\rVert_{L^q}\le R\}$ is equipped with the weak${}^*$ topology inherited from $L^q_{\text w}$. (We will talk about the choice of $q$ later.) Here, $R$ is the uniform bound obtained in \eqref{e:uniform_bound} of Corollary \ref{c:omega_k_epsilon}. Note that since every $L^q$ space is reflexive, one can work just as well with the weak topology on $K$. Let $(\phi_n)_{n\in\mathbb N}$ be a sequence of smooth functions such that $\{\phi_n:n\in\mathbb N\}$ is dense in every $L^q$. The metric given by \eqref{e:Metrization-of-weak-star-topology} now induces the topology of $K$, and it does not depend on $q$. Therefore, using the uniform bound \eqref{e:uniform_bound}, we conclude that \emph{the choice of $q$ does not matter}. It is sufficient to prove the statement of Proposition \ref{p:convergence} for one fixed $q\in]1, p]$ in order to prove it for all $q\in]1, p]$.
\begin{claim}
The $\omega_{\varepsilon, k}$, seen as functions from $[0, T]$ to $K$, are equicontinuous (for simplicity we define each $\omega_{\varepsilon, k} (\cdot, t)$ on the interval $[0,t_k]$ as constantly equal to $\omega_{\varepsilon, k} (\cdot, t_k)$).
\end{claim}
\begin{proof}
For $\tilde\omega, \hat\omega \in L^q(\ensuremath{\mathbb R}^2)$, let
\begin{equation*}
d_i(\tilde\omega, \hat \omega) \overset{\text{Def.}}=\left\lvert\int_{\ensuremath{\mathbb R}^2} (\tilde\omega-\hat\omega) \phi_i\right\rvert.
\end{equation*}
Since each $\omega_{\varepsilon, k}$ solves the Euler equations in vorticity form, we can estimate
\begin{equation}\label{e:bound-on-ith-distance}
\begin{split}
d_i(\omega_{\varepsilon, k}(t, \cdot), \omega_{\varepsilon, k}(s, \cdot)) &= \left\lvert\int_{\ensuremath{\mathbb R}^2}\int_s^t\partial_\tau\omega_{\varepsilon, k}(\sigma, x)\phi_i(x)\,\mathrm d\sigma\,\mathrm dx\right\rvert\\
&=\left\lvert\int_{\ensuremath{\mathbb R}^2}\int_s^t \left(-((K_2*\omega_{\varepsilon, k})\cdot\nabla)\omega_{\varepsilon, k}(x, \sigma) + f (x, \sigma)\right)\phi_i(x) \,\mathrm d\sigma\,\mathrm dx\right\rvert\\
&\le\lVert\nabla\phi_i\rVert_{L^\infty(\ensuremath{\mathbb R}^2)}\int_{\ensuremath{\mathbb R}^2}\int_s^t\lvert K_2*\omega_{\varepsilon, k}\rvert\lvert\omega_{\varepsilon, k}\rvert\,\mathrm d\sigma\,\mathrm dx\\
& \qquad + \lVert\phi_i\rVert_{L^\infty(\ensuremath{\mathbb R}^2)} \int_{\ensuremath{\mathbb R}^2}\int_s^t |f(x, \sigma)|\, dx\, d\sigma\\
&\le C(\lVert\nabla\phi_i\rVert_\infty + \|\phi_i\|_\infty) \lvert s-t\rvert
\end{split}
\end{equation}
whenever $t\geq s \geq t_k$.
Let $\tilde\varepsilon>0$. We can find $N\in\mathbb N$ (depending on $\tilde\varepsilon$) such that $\sum_{n=N+1}^\infty 2^{-n}\le\frac{\tilde\varepsilon}2$. If \begin{equation*}\lvert t-s\rvert\le \frac{\tilde\varepsilon}{2NC\max_{i\in\{1,\dots,N\}}(\lVert\nabla\phi_i\rVert_{\infty} + \|\phi_i\|_\infty)},\end{equation*} where $C$ is the constant from \eqref{e:bound-on-ith-distance}, then, by the bound in \eqref{e:bound-on-ith-distance}, we get
\begin{equation*}
d(\omega_{\varepsilon, k}(t, \cdot),\omega_{\varepsilon, k}(s, \cdot))\le\frac{\tilde\varepsilon}2+\sum_{i=1}^N \frac{\tilde\varepsilon}{2N}=\tilde\varepsilon.\qedhere
\end{equation*}
\end{proof}
By the Banach-Alaoglu theorem, norm-bounded subsets of a dual space are relatively compact in the weak${}^*$ topology. Therefore, using reflexivity, for every $t\in[0,T]$, the bounded set $\{\omega_{\varepsilon, k}(\cdot, t): k\in\mathbb N\}$ is relatively compact in $L^q_{\text w}$.
Therefore, using Arzelà-Ascoli, we can conclude that there exists a subsequence of $(\omega_{\varepsilon, k})_{k\in\mathbb N}$, not relabeled, that converges in $C([0, T]; L_{\text w}^q)$, for every $q$, to the same $\omega_\varepsilon\in C([0, T]; L_{\text w}^q)$.
\begin{claim}
The function $\omega_\varepsilon$ is a solution of the Euler equations in vorticity formulation.
\end{claim}
\begin{proof}
We have, for every $k\in\mathbb N$ and $\phi\in C_{\text c}^\infty(\ensuremath{\mathbb R}^2\times[0, T])$ with $\phi(\cdot, T)=0$, (cf. \eqref{e:distrib})
\begin{align}
&\underbrace{\int_{\ensuremath{\mathbb R}^2} \omega_{\varepsilon, k}(x, t_k)\phi(x, t_k)\,\mathrm dx}_{=:I_1 (k)} + \underbrace{\int_{t_k}^T \int_{\ensuremath{\mathbb R}^2}\ \omega_{\varepsilon, k}(x,t)\partial_t\phi(x,t)\,\mathrm dx\,\mathrm dt}_{=:I_2 (k)}\nonumber\\
+ &\underbrace{\int_{t_k}^T \int_{\ensuremath{\mathbb R}^2} \omega_{\varepsilon, k}(x,t)((K_2*_x\omega_{\varepsilon, k})(x,t)\cdot\nabla)\phi(x,t)\,\mathrm dx\,\mathrm dt}_{=:I_3 (k)} +\underbrace{\int_{t_k}^T \int_{\ensuremath{\mathbb R}^2} f(x, t)\phi(x, t)\,\mathrm dx\,\mathrm dt}_{=:I_4 (k)} = 0\label{e:term-I4}\, .
\end{align}
The term $I_4(k)$ converges to
\begin{equation*}
\int_{\ensuremath{\mathbb R}^2\times[0, T]} f(x, t)\phi(x,t)\,\mathrm dx\,\mathrm dt\, .
\end{equation*}
By the convergence of the $\omega_{\varepsilon, k}$, \begin{equation*}\lim_{k\to\infty} I_2(k)=\int_{\ensuremath{\mathbb R}^2\times[0, T]} \omega_\varepsilon(x, t)\partial_t\phi(x,t)\,\mathrm dx\,\mathrm dt.\end{equation*} By the definition of the initial condition of $\omega_{\varepsilon, k}$ (cf. \eqref{e:Euler-later-times}), $\omega_{\varepsilon, k}(\cdot, t_k)$ converges strongly in $L^1(\ensuremath{\mathbb R}^2)$ to $\tilde\omega(\cdot, 0)=\omega_0=\omega_\varepsilon(\cdot, 0)$. Therefore, \begin{equation*}\lim_{k\to\infty} I_1(k)=\int_{\ensuremath{\mathbb R}^2}\omega_\varepsilon(x, 0)\phi(x, 0)\,\mathrm dx.\end{equation*}
It therefore only remains to prove the convergence of $I_3$, for which we will require yet another claim.
\begin{claim}
For every $r\in[2, \infty[$ and every $t\in[0,T]$, the set $\{v_{\varepsilon, k}(\cdot, t): k\in\mathbb N\}$ is relatively compact in $L^r (B_R)$ for every $R>0$.
\end{claim}
\begin{proof}
From \eqref{e:uniform_bound}, we know that $\|v_{\varepsilon, k}(\cdot, t)\|_{L^2(\ensuremath{\mathbb R}^2)}\le C$ for some constant $C$ that is independent of $t$. Recall that $v_{\varepsilon, k}=\nabla^\bot\psi_{\varepsilon, k}$, where $\psi_{\varepsilon, k}$ solves $\Delta\psi_{\varepsilon, k} = \omega_{\varepsilon, k}$. Therefore, using the Calder\'{o}n-Zygmund inequality, one gets
\begin{equation*}
\norm{\nabla v_{\varepsilon, k}(\cdot, t)}_{L^2}\le C\norm{\omega_{\varepsilon, k}(\cdot, t)}_{L^2}.
\end{equation*}
Since the $L^2$ norms of the $\omega_{\varepsilon, k}(\cdot, t)$ are uniformly bounded, we can conclude that
\begin{equation*}
\sup_{k\in\mathbb N} \norm{v_{\varepsilon, k}(\cdot, t)}_{H^1(\ensuremath{\mathbb R}^2)}<\infty.
\end{equation*}
Hence we conclude the relative compactness in $L^r (B_R)$ from Rellich's theorem, since in two dimensions $H^1 (B_R)$ embeds compactly in $L^r (B_R)$ for every $r\in[2,\infty[$.
\end{proof}
Therefore, the $v_{\varepsilon, k}(\cdot, t)$ converge to $v_\varepsilon(\cdot, t)$ strongly in every $L^r (B_R)$ with $r\in[2,\infty[$. Moreover, thanks to \eqref{e:uniform_bound}, we can apply Lebesgue's dominated convergence theorem to conclude that $v_{\varepsilon, k}\to v_\varepsilon$ as $k\to\infty$ in the space $L^1([0, T]; L^r (B_R))$ for every $r\in[2,\infty[$.
By definition,
\begin{equation*}
\omega_{\varepsilon, k} (v_{\varepsilon, k}\cdot\nabla)\phi-\omega_{\varepsilon} (v_{\varepsilon}\cdot\nabla)\phi = \omega_{\varepsilon, k} (v_{\varepsilon, k}-v_{\varepsilon})\cdot \nabla \phi +
(\omega_{\varepsilon,k} - \omega_\varepsilon) v_\varepsilon \cdot \nabla \phi\, .
\end{equation*}
We thus rewrite
\begin{align}
I_3 (k) &= \int_0^T \int_{B_R} \omega_{\varepsilon, k} (v_{\varepsilon,k}-v_\varepsilon)\cdot \nabla \phi\, dx\, dt
+ \int_0^T \underbrace{\int_{B_R} (\omega_{\varepsilon,k} - \omega_\varepsilon) v_\varepsilon \cdot \nabla \phi\, dx}_{=:J_k(t)}\, dt\, .\label{e:I3-converges-to-0}
\end{align}
Observe first that, for each fixed $t$,
\[
\lim_{k\to\infty} J_k (t) = 0\, ,
\]
since $\omega_{\varepsilon, k} (\cdot, t) - \omega_\varepsilon (\cdot, t)$ converges weakly to $0$ in $L^2$, while $v_\varepsilon (\cdot, t)\cdot \nabla \phi (\cdot, t)$ is a fixed $L^2$ function. On the other hand
\[
|J_k (t)|\leq \|\nabla \phi (\cdot, t)\|_\infty (\|\omega_{\varepsilon, k} (\cdot, t)\|_{L^2} + \|\omega_\varepsilon (\cdot, t)\|_{L^2}) \|v_\varepsilon (\cdot, t)\|_{L^2}\, .
\]
Therefore the second integral in \eqref{e:I3-converges-to-0} converges to $0$. The first integral can be bounded by
\[
\|\nabla \phi\|_{L^\infty} \|v_{\varepsilon, k} - v_\varepsilon\|_{L^1 ([0,T], L^2 (B_R))} \|\omega_{\varepsilon, k}\|_{L^\infty ([0,T], L^2 (B_R))}
\]
and converges to $0$ as well.
\end{proof}
\section{Proof of Lemma \ref{l:extension}}
Consider $\vartheta \in L^2_m\cap \mathscr{S}$ for $m\geq 2$ and let $v:= K_2*\vartheta$. We first claim that
\begin{equation}\label{e:average}
\int_{B_R} v = 0 \qquad \qquad \mbox{for every $R>0$.}
\end{equation}
With \eqref{e:average} at our disposal, since $\|Dv\|_{L^2 (\mathbb R^2)} = \|\vartheta\|_{L^2 (\mathbb R^2)}$, we use the Poincar\'e inequality to conclude
\begin{equation}
R^{-1} \|v\|_{L^2 (B_R)} + \|Dv\|_{L^2 (B_R)} \leq C \|\vartheta\|_{L^2 (\mathbb R^2)}
\end{equation}
for a geometric constant $C$. This is then enough to infer the remaining conclusions of the lemma.
In order to achieve \eqref{e:average} observe first that $v = \nabla^\perp h$, where $h$ is the unique potential-theoretic solution of $\Delta h = \vartheta$, given by $h = K * \vartheta$ with $K (x) = \frac{1}{2\pi} \log |x|$. Since $K(R_\theta x) = K (x)$ and $\vartheta (x) = \vartheta (R_{2\pi/m} x)$, it follows that $h (R_{2\pi/m} x) = h (x)$, i.e. $h$ is $m$-fold symmetric. Therefore $R_{2\pi/m} \nabla h (R_{2\pi/m} x) = \nabla h (x)$. In particular, integrating in $x$ and using that the rotation is a measure-preserving transformation of the disk, we conclude
\[
\int_{B_R} \nabla h = R_{2\pi/m} \int_{B_R} \nabla h \, ,
\]
and thus,
\[
\int_{B_R} \nabla h = \frac{1}{m} \sum_{k=0}^{m-1} R_{2k\pi/m} \int_{B_R} \nabla h\, .
\]
However, since $m\ge 2$, $\sum_{k=0}^{m-1} R_{2k\pi/m} = 0$, showing that $\int_{B_R} \nabla h = 0$.
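For the reader's convenience we recall the elementary computation behind the identity $\sum_{k=0}^{m-1} R_{2k\pi/m} = 0$: identifying $\ensuremath{\mathbb R}^2$ with $\mathbb C$, the rotation $R_{2k\pi/m}$ acts as multiplication by $e^{2\pi i k/m}$, and since $e^{2\pi i/m}\neq 1$ for $m\geq 2$, the geometric series gives
\[
\sum_{k=0}^{m-1} e^{2\pi i k/m} = \frac{1-e^{2\pi i}}{1-e^{2\pi i/m}} = 0\, .
\]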
\begin{remark}\label{r:Camillo_dumb}
We next show that it is not possible to find a continuous extension of the operator $L^2\cap \mathscr{S} \ni \vartheta \mapsto K_2* \vartheta \in \mathscr{S}'$ to the whole $L^2$. First of all we observe that, if such an extension exists, it then needs to coincide with $K_2* \vartheta$ when $\vartheta \in L^1 \cap L^2$. We next exhibit a sequence of divergence free vector fields $\{v_k\}\subset W^{1,1}\cap W^{1,2}$ with the property that $\omega_k = \curl v_k$ converge to $0$ strongly in $L^2$ but $v_k$ converge locally to a constant vector field $v_0\neq 0$. In order to do this, we first define the following functions $\phi_k$ on the positive real axis:
\[
\phi_k (r) :=
\left\{
\begin{array}{ll}
1 + \frac{3}{4\ln k} \qquad &\mbox{for $r\leq \frac{k}{2}$}\\ \\
1 + \frac{3}{4\ln k} - \frac{1}{k^2 \ln k} \left(r-\frac{k}{2}\right)^2\qquad &\mbox{for $\frac{k}{2} \leq r \leq k$}\\ \\
1 + \frac{1}{2 \ln k}- \frac{1}{\ln k} \ln \frac{r}{k} \qquad & \mbox{for $k\leq r \leq k^2$}\\ \\
\frac{1}{2 k^4 \ln k} (r-2k^2)^2\qquad & \mbox{for $k^2 \leq r\leq 2k^2$}\\ \\
0 \qquad &\mbox{for $r\geq 2 k^2$}\, .
\end{array}
\right.
\]
Observe that $\phi_k$ is $C^1$ and its derivative is Lipschitz. Next we define the stream functions
\[
\psi_k (x) = - \phi_k (|x|) v_0^\perp \cdot x
\]
and the vector field $v_k (x) = \nabla^\perp \psi_k (x)$. By construction $v_k$ is divergence free, compactly supported, and Lipschitz. In particular, it belongs to $W^{1,p}$ for every $p$. Moreover, $v_k$ equals $(1+\frac{3}{4\ln k}) v_0$ on $B_{k/2}$ and it thus follows that, as $k\to \infty$, $v_k$ converges locally to the constant vector field $v_0$. It remains to check that $\curl v_k = \Delta \psi_k$ converges to $0$ strongly in $L^2$. We compute
\[
\Delta \psi_k = - \underbrace{v_0^\perp \cdot x\, \Delta (\phi_k (|x|))}_{=:f_k} - \underbrace{\nabla (\phi_k (|x|))\cdot v_0^\perp}_{=: g_k}\,
\]
and we seek to bound $f_k$ and $g_k$ pointwise. For what concerns $f_k$ observe that $\Delta\phi_k$ vanishes on $|x|\leq \frac{k}{2}$, $k\leq |x| \leq k^2$, and $2k^2 \leq |x|$. On the remaining regions, using the formula for the Laplacian in polar coordinates, we can estimate
\[
|f_k (x)|\leq |v_0| |x| (|\phi_k'' (|x|)| + |x|^{-1} |\phi_k' (|x|)|)\, .
\]
In particular, we conclude
\[
|f_k (x)| \leq \frac{C}{|x| \ln k}\, ,
\]
for a constant $C$ independent of $k$.
As for $g_k$, it vanishes for $|x|\leq \frac{k}{2}$ and $|x|\geq k^2$, and where it does not vanish we have the estimate
\[
|g_k (x)|\leq |v_0| |\phi_k' (|x|)| \leq \frac{C}{|x|\ln k}\, ,
\]
again for a constant $C$ independent of $k$. Passing to polar coordinates, we can thus estimate
\begin{align*}
\|\Delta \psi_k\|^2_{L^2 (\mathbb R^2)} \leq & \frac{C}{(\ln k)^2} \int_{k/2}^{2k^2} \frac{1}{r} \, dr =
\frac{C}{(\ln k)^2} \left(\ln (2k^2) - \ln {\textstyle{\frac{k}{2}}}\right)
= \frac{C \ln (4k)}{(\ln k)^2}\, .
\end{align*}
The right-hand side converges to $0$ as $k\to\infty$, and hence $\curl v_k = \Delta \psi_k$ converges to $0$ strongly in $L^2$, as claimed.
\end{remark}
\chapter{A more detailed spectral analysis}\label{a:better}
\section{From Remark \ref{r:better2}(i) to Remark \ref{r:better}(c)}
Let us assume the validity of Remark \ref{r:better2}(i) and prove Remark \ref{r:better}(c). Let $m_0\ge 2$ be the integer such that
\begin{align}\label{z1}
&{\rm spec}\, (L_{\text{st}}, U_{m_0}) \cap \{{\rm Re}\, z > 0\} \neq \emptyset \, ,
\\
&{\rm spec}\, (L_{\text{st}}, U_{m}) \cap \{{\rm Re}\, z > 0\} = \emptyset \,
\quad
\text{for any $m>m_0$} \, .
\end{align}
We show that Remark \ref{r:better}(c) holds with $m = m_0$.
For any $z\in {\rm spec}_{m_0}\, (L_{\text{st}}) \cap \{{\rm Re}\, z > 0\}$ we denote by $V_z:= P_z(L^2_{m_0})$ the image of the Riesz projector
\begin{equation*}
P_z = \frac{1}{2\pi i} \int_\gamma (w-L_{\text{st}})^{-1} dw \, ,
\end{equation*}
where $\gamma$ parameterizes the boundary of a ball containing $z$ and no other eigenvalues of $L_{\text{st}}$.
It is enough to show that $P_z(U_{km_0}) = \{0\}$ for any $k\in \mathbb{Z}\setminus\{-1, 1\}$, $z\in {\rm spec}_{m_0}\, (L_{\text{st}}) \cap \{{\rm Re}\, z > 0\}$ since it gives
$$V_z = P_z(U_{m_0}\oplus U_{-m_0}) \subset U_{m_0} \oplus U_{-m_0}\, ,$$
where the second inclusion follows from the fact that $U_m$ is always an invariant space of $L_{\text st}$.
If $k>1$, from \eqref{z1} we know that $z\notin {\rm spec}\, (L_{\text{st}}, U_{km_0})$, hence $P_z(U_{km_0})$ is trivial. If $k<-1$,
we reduce to the previous situation by observing that $P_z(U_{km_0}) = \overline{ P_{\bar z}(U_{-km_0})}$.
\section{Proof of Remark \ref{r:better2}(i)} In order to show this point, given Lemma \ref{l:almost-final-2}, we just need to prove the following statement.
\begin{lemma}\label{l:almost-final-1}
For every fixed $\Xi\in \mathscr{C}$ there is $M_0>0$ such that $\mathscr{U}_m$ is empty for every $m\geq M_0$.
\end{lemma}
Indeed, given the conclusion above we infer that $\mathscr{U}_m$ is empty for every $m\geq m_a$ and it thus suffices to select $m_0$ as the largest integer strictly smaller than $m_a$.
Before coming to the proof of Lemma \ref{l:almost-final-1} we state an auxiliary fact which will be used in the argument and which can be readily inferred from the computations in Step 1 of the proof of Lemma \ref{l:will-apply-Rouche}.
\begin{lemma}\label{l:operator-B_z}
For every $0 < \sigma < \tau < 1$ there is a constant $C$ (depending only upon $\sigma$ and $\tau$) such that $B_z := \mathcal{K}_{m_0} \circ \frac{1}{\Xi-z}$ is a bounded operator from $C^\sigma$ to $C^\tau$ for every $z$ with ${\rm Im}\, z>0$ and
\[
\|B_z\|_{\mathcal{L} (C^\sigma, C^\tau)} \leq C\, .
\]
\end{lemma}
\begin{proof}[Proof of Lemma \ref{l:almost-final-1}] The proof will be by contradiction and thus, assuming that the statement is false, we can select:
\begin{itemize}
\item[(i)] a sequence $\{m_j\}\subset [1, \infty[$ with $m_j\to \infty$;
\item[(ii)] a sequence $\{z_j\}\subset \mathbb C$ with ${\rm Im}\, z_j >0$;
\item[(iii)] and a sequence $\{\psi_j\}\subset L^2 (\mathbb R)$ solving the equation
\begin{equation}\label{e:eigenvalue-equation-20}
-\frac{d^2 \psi_j}{dt^2} + m_j^2 \psi_j + \frac{A}{\Xi -z_j} \psi_j = 0\, .
\end{equation}
\end{itemize}
\medskip
{\bf Step 1.} We first prove that $\{z_j\}$ is bounded and that every cluster point must be an element of $[0, \Xi (-\infty)]$. Otherwise, along a subsequence, not relabeled, we would have
\[
\sup_j \left\|\frac{A}{\Xi-z_j}\right\|_{L^\infty} =: C_0 < \infty\, .
\]
By scalar multiplying \eqref{e:eigenvalue-equation-20} by $\psi_j$ and taking the real part of the resulting equation we then conclude
\[
\int (|\psi_j'|^2 + m_j^2 |\psi_j|^2) \leq C_0 \int |\psi_j|^2\, ,
\]
which is clearly not feasible because $C_0 < m_j^2$ for sufficiently large $j$ (and $\psi_j$ is nontrivial).
Up to subsequences we can thus assume that $z_j$ converges to some $z_0 \in [0, \Xi (-\infty)]$.
\medskip
{\bf Step 2.} We next analyze the cases $z_0 =0$ and $z_0 = \Xi (-\infty)$. The argument is similar to that used in Section \ref{s:3+4} in case (C). Let us argue first for $z_0=0$. We observe that $\Xi^{-1} |A|$ belongs to $L^1 (]-\infty, N])$ for any fixed $N$ and that, likewise, $|\Xi-z_j|^{-1} |A|$ have a uniform $L^1$ bound on any $]-\infty, N]$. We can then
use Lemma \ref{l:ODE2} to normalize $\psi_j$ so that it is asymptotic to $e^{m_j t}$ and to write
\[
\psi_j (t) = e^{m_j t} (1+z_j (t))
\]
with
\[
|z_j (t)| \leq \exp \left(\frac{1}{m_j} \int_{-\infty}^N \frac{|A|}{|\Xi-z_j|}\right) -1 \qquad
\mbox{for all $t\leq N$.}
\]
In particular, we have $|z_j (t)|\leq \frac{C(N)}{m_j}$ on $]-\infty, N]$. We next scalar multiply \eqref{e:eigenvalue-equation-20} by $\psi_j$ and take the imaginary part to conclude
\[
- \left(\int_{-\infty}^a + \int_b^\infty\right) \frac{A}{|\Xi-z_j|^2} |\psi_j|^2\leq
\int_a^b \frac{A}{|\Xi-z_j|^2} |\psi_j|^2\, .
\]
In particular, since $\frac{A}{|\Xi-z_j|^2}$ is bounded from above by a constant $C$ independent of $j$ on $[a,b]$ and $-\frac{A}{|\Xi-z_j|^2}$ is bounded from below by a constant $c>0$ independent of $j$ on $[b+1, b+2]$, we conclude
\[
\int_{b+1}^{b+2} |\psi_j|^2 \leq \frac{C}{c} \int_a^b |\psi_j|^2\, .
\]
We next choose $N$ larger than $b+2$ and use the estimate $|z_j (t)|\leq \frac{C(N)}{m_j}$ to argue that, for $j$ large enough, we have $\frac{1}{2} e^{m_j t} \leq |\psi_j (t)| \leq 2 e^{m_j t}$ on $]-\infty, N]$. In particular, we infer
\[
\int_{b+1}^{b+2} e^{2m_j t} \leq C \int_a^b e^{2m_j t}
\]
provided the constant $C$ is chosen large enough (but independent of $j$) and $j$ is large enough. The latter inequality is certainly impossible for $m_j$ large enough, leading to a contradiction.
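The impossibility can be checked directly: since the integrand is increasing,
\[
e^{2m_j (b+1)} \leq \int_{b+1}^{b+2} e^{2m_j t}\, dt \qquad \mbox{while} \qquad \int_a^b e^{2m_j t}\, dt \leq (b-a)\, e^{2m_j b}\, ,
\]
so the inequality above would force $e^{2 m_j} \leq C (b-a)$, which fails for $m_j$ large enough.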
The argument to exclude $z_0 = \Xi (-\infty)$ is entirely analogous, this time normalizing for $t\to \infty$ and reaching an inequality of type
\[
\int_{a-2}^{a-1} e^{-2m_j t} \leq C \int_a^b e^{-2m_j t}
\]
for a constant $C$ independent of $j$ and any $j$ large enough.
\medskip
{\bf Step 3.} We next examine the last case, that is $z_0 = \Xi (c)$. This time we fix a $\sigma \in \, ]0,1[$ and normalize $\psi_j$ so that $\|\psi_j\|_{C^\sigma}=1$. We observe that
\[
\psi_j = - \mathcal{K}_{m_j} \left(\frac{A}{\Xi-z_j} \psi_j\right)\, ,
\]
and also recall that $\mathcal{K}_{m_j} (\varphi) = \frac{1}{2m_j} e^{-m_j |\cdot|} * \varphi$.
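For the reader's convenience we recall why $\frac{1}{2m} e^{-m|\cdot|}$ is the kernel of $\mathcal{K}_m$: the function $g_m (t) := \frac{1}{2m} e^{-m|t|}$ satisfies $g_m'' = m^2 g_m$ for $t\neq 0$, while its derivative $g_m' (t) = -\frac{1}{2}\, {\rm sgn}\, (t)\, e^{-m|t|}$ jumps by $-1$ across $t=0$, so that, in the sense of distributions,
\[
-g_m'' + m^2 g_m = \delta_0\, ,
\]
i.e. $g_m$ is the Green's function of $-\frac{d^2}{dt^2} + m^2$ on $\mathbb R$.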
We set $m_0=m_a$ and write further
\[
\psi_j = - \mathcal{K}_{m_j} \circ \left(-\frac{d^2}{dt^2} +m_0^2\right) \left(\mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_j}\psi_j\right)\right)\, .
\]
Recalling Lemma \ref{l:operator-B_z}, we can fix a $\tau \in ]\sigma,1[$ to achieve
\[
\left\|\mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_j}\psi_j\right)\right\|_{C^\tau}\leq C
\]
for some constant $C$ independent of $j$.
We will show in the final step that
\begin{itemize}
\item[(Cl)] $\|\mathcal{K}_{m_j} \circ (-\frac{d^2}{dt^2} + m_0^2)\|_{\mathcal{L} (C^\tau, C^\tau)} \leq C$ for some constant $C$ independent of $j$.
\end{itemize}
In particular, we achieve
\begin{equation}\label{e:estimate-C-tau}
\|\psi_j\|_{C^\tau} \leq C\, .
\end{equation}
We now wish to show that indeed $\|\psi_j\|_{C^\sigma} \leq \frac{1}{2}$ for $j$ large enough, which obviously would be a contradiction. In order to achieve the latter estimate we use a Littlewood-Paley decomposition. We fix a cut-off function $\chi$ which is supported in $]\frac{1}{2}, 2[$, define $\chi_\ell (t) :=\chi (2^{-\ell} t)$ for $\ell\in \mathbb N$ and assume that $\chi$ has been chosen so that
\[
\sum_{\ell \in \mathbb N} \chi_\ell \equiv 1 \qquad \mbox{on $[1, \infty[$}.
\]
We then define
\[
\chi_{-1} := 1 - \sum_{\ell \in \mathbb N} \chi_\ell\,
\]
and introduce the Littlewood-Paley operator $\Delta_\ell$ as $\Delta_\ell (\varphi) = \mathscr{F}^{-1} (\chi_\ell \mathscr{F} (\varphi))$, where $\mathscr{F}$ is the Fourier transform.
We finally recall that (see \cite[Section 1.4.2]{Grafakos}), if we define
\[
\|\varphi\|_{X^\sigma} := \sum_{\ell \geq -1} 2^{\sigma \ell} \|\Delta_\ell \varphi\|_{L^\infty}\, ,
\]
then
\[
C (\sigma)^{-1} \|\varphi\|_{X^\sigma} \leq \|\varphi\|_{C^\sigma} \leq C (\sigma) \|\varphi\|_{X^\sigma}\, .
\]
We are now ready to perform our final estimate. We fix a large $N$, which will be chosen later, and for $\ell\geq N$ we write
\begin{align*}
\sum_{\ell \geq N} 2^{\sigma \ell} \|\Delta_\ell \psi_j\|_\infty
&\leq 2^{-N (\tau-\sigma)} \sum_{\ell\geq N} 2^{\tau \ell} \| \Delta_\ell \psi_j\|_\infty
\leq 2^{-N (\tau-\sigma)} C (\tau) \|\psi_j\|_{C^\tau} \leq C 2^{-N (\tau-\sigma)}\, ,
\end{align*}
where the constant $C$ is independent of both $N$ and $j$. Next, for any $\ell$ we observe that
\[
\Delta_\ell \psi_j = \mathcal{K}_{m_j} \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) \underbrace{\left(\Delta_\ell \left(\mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_j} \psi_j \right)\right)\right)}_{=: \Gamma_{\ell,j}}\, .
\]
Now
\[
\|\Gamma_{\ell,j}\|_{L^\infty} \leq C 2^{-\ell\sigma} \left\|\mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_j} \psi_j \right)\right\|_{C^\sigma} \leq C 2^{-\ell \sigma}\, .
\]
On the other hand, because of the frequency localization, we have
\[
\Delta_\ell \psi_j = \mathcal{K}_{m_j} \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) \circ (\Delta_{\ell-1}+\Delta_\ell+\Delta_{\ell+1}) (\Gamma_{\ell,j})
\]
and the estimate
\[
\left\|\mathcal{K}_{m_j} \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) \circ \Delta_\ell\right\|_{\mathcal{L} (L^\infty, L^\infty)} \leq \frac{C}{m_j^2} \left(2^{2\ell} + m_0^2\right)\, .
\]
We can therefore write the estimate
\begin{align*}
\|\psi_j\|_{C^\sigma} & \leq \frac{C}{m_j^2} \sum_{\ell=-1}^{N} \left(2^{(2+2\sigma) \ell} + m_0^2\right)
+ C 2^{-N (\tau-\sigma)}\\
&\leq \frac{CN}{m_j^2} \left(2^{(2+2\sigma) N} + m_0^2\right) + C 2^{-N (\tau-\sigma)}\, ,
\end{align*}
where the constants $C$ are independent of $N$ and $j$. In particular, we fix first $N$ large enough to get $C 2^{-N (\tau-\sigma)} \leq \frac{1}{4}$ and we then choose $m_j$ large enough so that
\[
\frac{CN}{m_j^2} \left(2^{(2+2\sigma) N} + m_0^2\right) \leq \frac{1}{4}\, .
\]
These two estimates imply $\|\psi_j\|_{C^\sigma} \leq \frac{1}{2}$, contradicting the normalization $\|\psi_j\|_{C^\sigma} = 1$.
\medskip
{\bf Step 4.} To complete the proof of the Lemma we need to show (Cl). We first define
\[
T_{m, \ell} := \Delta_\ell \circ \mathcal{K}_m \circ \left(-\frac{d^2}{dt^2} + m_0^2\right)\, .
\]
The operator $T_{m, \ell}$ is the convolution with a kernel $K_{m, \ell}$ whose Fourier symbol is given by
$\chi_\ell (\xi) \frac{|\xi|^2 + m_0^2}{|\xi|^2 +m^2}$. Hence, for $\ell \geq 0$ we have
\[
K_{m, \ell} (t) = \frac{1}{2\pi} \int \chi \left(\frac{\xi}{2^\ell}\right) \frac{|\xi|^2 + m_0^2}{|\xi|^2 + m^2} e^{i \xi t}\, d\xi
\]
and
\[
(-it)^k K_{m, \ell} (t) = \frac{1}{2\pi} \int \frac{d^k}{d\xi^k} \left( \chi \left(\frac{\xi}{2^\ell}\right) \frac{|\xi|^2 + m_0^2}{|\xi|^2 + m^2}\right) e^{it\xi}\, d\xi\, .
\]
In particular, we easily conclude
\[
\| |t|^k K_{m, \ell}\|_{L^\infty} \leq C (k) 2^{\ell (1-k)}\, ,
\]
for a constant $C (k)$ independent of both $m\geq 1$ and $\ell$, but which depends on $k$. From the latter we can estimate
\begin{align*}
\|K_{m, \ell}\|_{L^1} &\leq \int_{|s|\leq 2^{-\ell}} |K_{m, \ell} (s)|\, ds +
\int_{|s|\geq 2^{-\ell}} \frac{|s^2 K_{m, \ell} (s)|}{|s|^2}\, ds\\
&\leq C + C 2^{-\ell} \int_{2^{-\ell}}^\infty \frac{1}{s^2}\, ds \leq C\, .
\end{align*}
For $\ell = -1$ we likewise conclude
\[
\||t|^k K_{m, -1}\|_{L^\infty} \leq C (k)
\]
for a constant $C(k)$ independent of $m$, but depending on $k$. Once again using the cases $k=0$ and $k=2$ of the latter inequality we achieve
\[
\|K_{m, -1}\|_{L^1} \leq C \, .
\]
We have thus bounded all $\|K_{m, \ell}\|_{L^1}$ with a universal constant $C$ independent of both $m\geq 1$ and $\ell\in \mathbb N \cup \{-1\}$. In particular, since $\|T_{m, \ell}\|_{\mathcal{L} (L^\infty, L^\infty)} = \|K_{m, \ell}\|_{L^1}$ and
\[
\mathcal{K}_m \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) = \sum_{\ell \geq -1} T_{m, \ell}
= \sum_{\ell \ge -1} T_{m, \ell} \circ (\Delta_{\ell-1}+\Delta_\ell+\Delta_{\ell+1})\, ,
\]
we can estimate
\begin{align*}
\left\| \mathcal{K}_m \circ \left(-\frac{d^2}{dt^2} + m_0^2\right) (\varphi)\right\|_{C^\sigma}
&\leq C (\sigma) \sum_{\ell \geq -1} 2^{\sigma \ell} \|T_{m, \ell} (\varphi)\|_{L^\infty}
= C (\sigma) \sum_{\ell \geq -1} 2^{\sigma \ell} \|T_{m, \ell} (\Delta_\ell \varphi)\|_{L^\infty}\\
&\leq C (\sigma) \sum_{\ell \geq -1} 2^{\sigma \ell} \|\Delta_\ell \varphi\|_{L^\infty}
\leq C (\sigma) \|\varphi\|_{C^\sigma}\, .
\end{align*}
This completes the proof of (Cl) and hence of the entire Lemma.
\end{proof}
{\color{red}
\section{Proof of Theorem \ref{thm:spectral-stronger-2}}
In \cite{Vishik2}, Vishik claims the following improved version of Theorem \ref{thm:spectral5}, which would immediately imply Theorem \ref{thm:spectral-stronger-2}.
\begin{theorem}\label{thm:Vishikversion}
There are a function $\Xi\in \mathscr{C}$ and an integer $m_0\geq 2$ such that $\mathscr{U}_{m} = \emptyset$ for any integer $m>m_0$ and $\mathscr{U}_{m_0}$ consists of a single element $z$. Moreover, the algebraic multiplicity of $m_0 z$ as an eigenvalue of $\mathcal{L}_{m_0}$ is $1$.
\end{theorem}
Vishik's suggested proof of Theorem \ref{thm:Vishikversion} builds upon Proposition \ref{p:3+4} and the following improved versions of Proposition \ref{p:5-7} and Proposition \ref{p:almost-final}.
\begin{proposition}\label{prop:Vishicimproved1}\label{PROP:VISHICIMPROVED1}
Assume $- \lambda_a < -1$ and let $m_a=\sqrt{\lambda_a}$. Then there
exist $\varepsilon >0$ and $\delta>0$ with the following property.
For every $h\in ]0, \delta[$, $\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a)) = \{z_{m_a-h}\}$, where $(m_a-h) z_{m_a-h}$ is an eigenvalue of $\mathcal{L}_{m_a-h}$ with algebraic multiplicity $1$.
\end{proposition}
In \cite{Vishik2} Vishik only gives the argument that $\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a))$ contains a single element $z_{m_a-h}$ and the corresponding eigenspace of $(m_a-h)^{-1} \mathcal{L}_{m_a-h}$ has dimension $1$ (i.e. its {\em geometric} multiplicity is $1$, cf. Remark \ref{r:b-also-2}). However it is essential to have the {\em algebraic} multiplicity equal to $1$ in order to complete his suggested argument. After we pointed out to him the gap in his paper, he suggested in \cite{Vishik3} the proof of Proposition \ref{prop:Vishicimproved1} reported below. Before coming to it, we point out that
a spectral perturbation argument as in the proof of Lemma \ref{l:almost-final-2} (which we outline anyway below)
easily implies the following.
\begin{proposition}\label{prop:Vishicimproved2}
Assume $- \lambda_a<-1$ and let $m_a = \sqrt{\lambda_a}$ and $m_b:= \max \{1, \sqrt{\lambda_b}\}$. Then $\mathscr{U}_m$ consists of a single element $z_m$ for every $m\in ]m_b, m_a[$ and moreover the algebraic multiplicity of $z_m$ as an eigenvalue of $m^{-1} \mathcal{L}_m$ is $1$.
\end{proposition}
Taking the previous proposition for granted, we just need a choice of $\Xi$ for which $\lambda_a >1$ and $]m_b, m_a[$ contains an integer, which is guaranteed by Lemma \ref{L:BOTTOM}, and we conclude Theorem \ref{thm:Vishikversion}.
\medskip
We now explain how to prove Proposition \ref{prop:Vishicimproved2}.
From Proposition \ref{p:almost-final} and Lemma \ref{l:almost-final-1} we know that $\mathscr{U}_m \neq \emptyset$, for every $m\in ]m_b, m_a[$, and $\mathscr{U}_m = \emptyset$ for $m \ge m_a$. Moreover, Remark \ref{rmk:algebraic dim const} implies that the sum of the algebraic multiplicities of $z\in \mathscr{U}_m$, as eigenvalues of $m^{-1} \mathcal{L}_m$, is constant for $m\in ]m_b, m_a[$. Hence, to conclude we just need to prove that the latter is $1$ for some $m\in ]m_b, m_a[$.
To that aim we show that for any $\varepsilon>0$ there exists $\delta>0$ such that $\mathscr{U}_{m_a-h} = \mathscr{U}_{m_a-h}\cap B_{\varepsilon}(\Xi(a))$ for any $h\in ]0,\delta[$. This is enough for our purposes since, together with Proposition \ref{prop:Vishicimproved1}, it gives $\mathscr{U}_{m_a-h}= \mathscr{U}_{m_a-h}\cap B_{\varepsilon}(\Xi(a))=\{ z_{m_a-h}\}$ where $z_{m_a-h}$ is an eigenvalue of $(m_a-h)^{-1} \mathcal{L}_{m_a - h}$ with algebraic multiplicity $1$.
Assume for contradiction the existence of a sequence $(m_j)_{j\in\mathbb N}$ in $]m_b,m_a[$ converging to $m_a$ such that there are $z_j\in \mathscr{U}_{m_j}$ with $|z_j - \Xi(a)|>\varepsilon$ for some $\varepsilon>0$. Up to extracting a subsequence, we may assume $z_j \to z$ for some $z\in \mathbb C$ with $|z-\Xi(a)|\ge \varepsilon$. Proposition \ref{p:3+4} implies that the imaginary part of $z$ is positive. Arguing as in the first step of proof of Proposition \ref{p:3+4} we can prove that $z\in \mathscr{U}_{m_a}$ and reach a contradiction.
\section{Proof of Proposition \ref{prop:Vishicimproved1}}
The proof of Proposition \ref{prop:Vishicimproved1} can be reduced to the following weaker version using Remark \ref{rmk:algebraic dim const} and the argument outlined in the previous paragraph.
\begin{proposition}\label{prop:Vishikimproved-weaker}
Assume $- \lambda_a < -1$ and let $m_a=\sqrt{\lambda_a}$. Let $h$ and $\varepsilon$ be sufficiently small so that Proposition \ref{p:3+4} and Remark \ref{r:b-also-2} apply, namely
$\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a)) = \{z_{m_a-h}\}$, where $z_{m_a-h}$ is an eigenvalue of $(m_a-h)^{-1} \mathcal{L}_{m_a-h}$ with {\em geometric} multiplicity $1$. Then, if $h$ is chosen possibly smaller, the algebraic multiplicity of $z_{m_a-h}$ is also $1$.
\end{proposition}
We now come to the proof of the latter, which is the heart of the matter. First of all, we introduce a suitable transformation of the space $\mathcal{H}$ (which we recall is the domain of the operator $\mathcal{L}_m$, defined in \eqref{e:def-H}). We introduce the Hilbert space
\[
\mathcal{H}^e :=\left\{ f: \mathbb R \to \mathbb C\, :\, \int |f (t)|^2 e^{-2t}\, dt < \infty\right\}
\]
and the isometry $T: \mathcal{H} \to \mathcal{H}^e$ given by
\[
\gamma (r) \mapsto e^{2t} \gamma (e^t)\, .
\]
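Indeed, with the change of variables $r = e^t$ (so that $r\, dr = e^{2t}\, dt$) we have
\[
\int_{\mathbb R} \left|e^{2t}\gamma (e^t)\right|^2 e^{-2t}\, dt = \int_{\mathbb R} |\gamma (e^t)|^2\, e^{2t}\, dt = \int_0^\infty |\gamma (r)|^2\, r\, dr\, ,
\]
which shows that the weight $e^{-2t}$ is chosen precisely so that $T$ is an isometry between the norm of $\mathcal{H}$ (cf. \eqref{e:def-H}) and that of $\mathcal{H}^e$.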
Rather than considering the operator $\mathcal{L}_m$ on $\mathcal{H}$, it turns out to be more convenient to consider the operator $T \circ \mathcal{L}_m \circ T^{-1}$ on $\mathcal{H}^e$. Since the spectra of the two operators coincide, with a slight abuse of notation we will keep writing $\mathcal{L}_m$ in place of $T \circ \mathcal{L}_m \circ T^{-1}$, and we will keep using $\mathscr{U}_m$ to denote the point spectrum of $m^{-1}\, T \circ \mathcal{L}_m \circ T^{-1}$ in the upper half plane.
Simple computations show that the operator $\mathcal{L}_m$ is given on $\mathcal{H}^e$ by
\[
\mathcal{L}_m (\alpha) = m \Xi \alpha - m A \varphi
\]
where $\varphi$ is the unique $L^2$ solution of
\[
\varphi'' - m^2 \varphi = \alpha\,
\]
(note that we are claiming $\varphi\in L^2$ rather than $\varphi \in \mathcal{H}^e$, cf. Section \ref{s:eigenvalue-equation}).
We can now come to the main idea behind the simplicity of $z_{m_a-h}$, which is borrowed from \cite{Vishik3}. A prominent role is played by the adjoint of $\mathcal{L}_m$ (considered as a bounded linear operator from $\mathcal{H}^e$ into itself): for the latter we will use the notation $\mathcal{L}_m^\star$.
\begin{lemma}\label{l:aggiunto}
Assume that $h$ and $\varepsilon$ are small enough so that $\{z_{m_a-h}\} =\mathscr{U}_{m_a-h}\cap B_\varepsilon (\Xi (a))$ and $z_{m_a-h}$ has geometric multiplicity $1$ in ${\rm spec}\, ((m_a-h)^{-1} \mathcal{L}_{m_a-h}, \mathcal{H}^e)$. Let $\alpha_h \in \mathcal{H}^e\setminus \{0\}$ be such that $(m_a-h)^{-1}\mathcal{L}_{m_a-h}(\alpha_h) - z_{m_a - h}\alpha_h=0$. If $h$ is small enough, then there is $\beta_h\in \mathcal{H}^e$ such that $(m_a-h)^{-1}\mathcal{L}_{m_a-h}^\star(\beta_h) - \bar z_{m_a-h}\beta_h =0$ and
\begin{equation}\label{e:dual-pairing}
\langle \alpha_h, \beta_h \rangle_{\mathcal{H}^e} = \int \alpha_h (t) \bar \beta_h (t)\, e^{-2t}dt \neq 0\, .
\end{equation}
\end{lemma}
Let us show how the latter implies Proposition \ref{prop:Vishikimproved-weaker}. Assume $z_{m_a-h}$ were an element of ${\rm spec}\, ((m_a-h)^{-1}\mathcal{L}_{m_a-h}, \mathcal{H}^e)\cap B_\varepsilon (\Xi (a))$ with geometric multiplicity $1$ and algebraic multiplicity larger than $1$: our goal is to show that $h$ cannot be too small.
The properties just listed mean that the following bounded operator on $\mathcal{H}^e$,
\[
L_h := (m_a-h)^{-1}\mathcal{L}_{m_a-h} - z_{m_a-h} \, ,
\]
has a 1-dimensional kernel, $0$ is in its point spectrum, and $0$ has algebraic multiplicity strictly larger than $1$. These properties imply that any element $\alpha_h$ in the kernel of $L_h$ (i.e. any eigenfunction of $(m_a-h)^{-1}\mathcal{L}_{m_a-h}$ with eigenvalue $z_{m_a-h}$) is in the image of $L_h$. Fix one such element $\alpha_h$ and let $\eta_h$ be such that $L_h (\eta_h) = \alpha_h$. If $h$ is small enough, we can fix $\beta_h$ as in Lemma \ref{l:aggiunto}, and observe that it is in the kernel of the adjoint operator $L^\star_h$. We then must have
\[
0 \neq \int \alpha_h \bar\beta_h \, e^{-2t}dt= \int L_h (\eta_h) \bar\beta_h \, e^{-2t}dt
= \int \eta_h \overline{L_h^\star (\beta_h)} \, e^{-2t}dt = 0\, ,
\]
which is not possible.}
\begin{proof}[Proof of Lemma \ref{l:aggiunto}]
{\color{red}
We begin by proving the following claim:
\begin{itemize}
\item[(Cl)] For any $z\in \mathscr{U}_m$, with $m>1$, such that
$m^{-1}\mathcal{L}_m(\alpha_z) - z \alpha_z = 0$,
there exists $\beta_z\in \mathcal{H}^e$ such that
\begin{equation}
m^{-1}\mathcal{L}_m^\star(\beta_z) - \bar z \beta_z = 0 \, ,
\end{equation}
and
\begin{equation}\label{eq:keyfunction}
\left\langle \alpha_z, \beta_z \right\rangle_{\mathcal{H}^e}
=
\int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - z)^2}\varphi_z(t)^2\, d t\, ,
\end{equation}
where $\varphi_z$ is the unique solution in $L^2(\mathbb{R})$ of $\varphi_z'' - m^2\varphi_z = \alpha_z$.
\end{itemize}
To that aim we first observe that the adjoint of $\mathcal{L}_m$ in $\mathcal{H}^e$ is given by
\begin{equation}
\mathcal{L}_m^\star (\alpha) = m( \Xi \alpha - e^{2t} \mathcal{K}_m(A\alpha e^{-2t})) \, ,
\end{equation}
where $\mathcal{K}_m$ is the inverse of $-\frac{d^2}{dt^2} + m^2$ as a closed unbounded self-adjoint operator in $L^2(\mathbb{R})$.
Notice that $\mathcal{L}_m^\star$ is well defined because $e^{-t}\alpha \in L^2(\mathbb{R})$ and $A\sim e^{2t}$ as $t\to -\infty$.
We now observe that, if $z\in \mathscr{U}_m$, $m^{-1}\mathcal{L}_m (\alpha_z) = z \alpha_z$, and $\beta_z$ is defined by
\[
\beta_z:= e^{2t}\frac{\bar \varphi_z}{\Xi - \bar z}
= e^{2t}\frac{\mathcal{K}_m(\bar \alpha_z)}{\Xi - \bar z}\, ,
\]
then
\begin{equation}\label{eq:adj}
m^{-1}\mathcal{L}_m^\star (\beta_z) = \bar z \beta_z \, .
\end{equation}
Notice that $\beta_z\in W^{2,2}_{\rm loc}\cap \mathcal{H}^e$ decays exponentially fast at $\infty$ thanks to the bound $| \varphi_z(t)|\le C e^{-m|t|}$, for every $t\in \mathbb{R}$, proven in Lemma \ref{c:decay}.
Let us now verify \eqref{eq:adj}:
We first observe that
\begin{equation}
\alpha_z = \frac{A\varphi_z}{\Xi - z} \, ,
\end{equation}
hence
\begin{align*}
m^{-1}\mathcal{L}_m^\star (\beta_z) & = \Xi \beta_z - e^{2t}\mathcal{K}_m \left(\frac{A \bar \varphi_z}{\Xi - \bar z}\right) =
\Xi \beta_z - e^{2t}\mathcal{K}_m(\bar \alpha_z)
\\& = \Xi \beta_z -(\Xi - \bar z) \beta_z = \bar z \beta_z \, .
\end{align*}
It is now immediate to conclude \eqref{eq:keyfunction}.
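For the reader's convenience, here is the short computation: using $\alpha_z = \frac{A\varphi_z}{\Xi - z}$ and $\bar\beta_z = e^{2t}\frac{\varphi_z}{\Xi - z}$ (recall that $\Xi$ and $A$ are real valued), we get
\[
\left\langle \alpha_z, \beta_z \right\rangle_{\mathcal{H}^e}
= \int_{\mathbb{R}} \alpha_z \bar \beta_z\, e^{-2t}\, dt
= \int_{\mathbb{R}} \frac{A \varphi_z}{\Xi - z}\cdot \frac{e^{2t}\varphi_z}{\Xi - z}\, e^{-2t}\, dt
= \int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - z)^2}\,\varphi_z(t)^2\, dt\, .
\]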
}
\medskip
{\color{red}
In order to simplify our notation we use $\mathcal{L}_h$, $z_h$ and $m_h$ in place of $\mathcal{L}_{m_a-h}$, $z_{m_a-h}$ and $m_a-h$.
Given $\alpha_h\in \mathcal{H}^e$ as in the statement of the Lemma we denote by $\varphi_h$ the unique $L^2$ solution of $\varphi_h'' - m_h^2 \varphi_h = \alpha_h$. We now can apply (Cl) above to find $\beta_h\in \mathcal{H}^e$ which solves $m_h^{-1}\mathcal{L}_h^\star (\beta_h) = \bar z_h \beta_h$ and such that
\begin{equation}\label{eq:keyfunction1}
\left\langle \alpha_h, \beta_h \right\rangle_{\mathcal{H}^e}
=
\int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - z_h)^2}\varphi_h(t)^2\, d t\, .
\end{equation}
To conclude the proof it suffices to show that, after appropriately normalizing the functions $\alpha_h$ (i.e. after multiplying them by an appropriate constant factor, which might depend on $h$) we have
\begin{equation}\label{e:limit-nonzero}
\lim_{h\to 0} \left\langle \alpha_h, \beta_h \right\rangle_{\mathcal{H}^e}
= c \neq 0 \, .
\end{equation}
Note that for the latter conclusion, which we will prove in the next two steps, we will use the assumption that $\alpha_h\neq 0$.
\bigskip
{\bf Step 1:} We show that, up to multiplication of $\alpha_h$ by a suitable constant factor (which might vary with $h$), $\varphi_h \to \varphi$ in $W^{1,2}$ and in $C^{1,\alpha}$, as $h\to 0$, where $\varphi \in W^{2,\infty}$ is a nontrivial solution to
\begin{equation}
- \frac{d^2 \varphi}{dt^2} + m_a^2 \varphi + \frac{A}{\Xi - \Xi (a)} \varphi = 0\, .
\end{equation}
By Remark \ref{r:phi(a)-nonzero}, the nontriviality of $\varphi$ implies $\varphi (a) \neq 0$, and hence, up to multiplication by another constant factor, we will assume, without loss of generality, that $\varphi (a)=1$.
Recall that $\varphi_h$ solves the equation
\begin{equation}\label{e:ODE-again-100}
- \frac{d^2 \varphi_h}{dt^2} + m_h^2 \varphi_h + \frac{A}{\Xi - z_h} \varphi_h
= 0 \, .
\end{equation}
For the moment, let us normalize the functions so that
\begin{equation}\label{e:normalization-2}
\int (|\varphi_h'|^2 + m_h^2 |\varphi_h|^2) = 1\, ,
\end{equation}
as in \eqref{e:L2-normalization}. We then can argue as for the bounds \eqref{e:exp-bound-1} and \eqref{e:exp-bound-2} to derive the existence of constants $C$ and $\beta>2$ (independent of $h$) such that
\begin{equation}\label{e:exp-bound-3}
\left|\varphi_h (t)\right| \leq C e^{- \beta |t|} \quad \forall t\, .
\end{equation}
Recalling Section \ref{s:5-7-part-II}, we know that $z_h = \Xi (a) + c(a) h + o (h)$, where $c(a)$ is a complex number with positive imaginary part, which we denote by $d(a)$. Using the monotonicity of $\Xi$ we can write $z_h = \Xi (t (h)) + i (d(a) h + o (h))$ for some $t(h)$ which satisfies the bound $|t (h) -a|\leq C h$ for some positive constant $C$. In particular, using the mean value theorem and the fact that the derivative of $\Xi$ does not vanish on $[a-1, a+1]$, we get
\[
|\Xi (t) - z_h|^2\geq C^{-1} (|t-t(h)|^2 + |h|^2)\qquad \forall t\in [a-1,a+1]\, ,
\]
where $C$ is some positive constant. Next, combining this with the triangle inequality $|t-a|\le |t-t(h)| + |t(h)-a|\le |t-t(h)| + Ch \le C' (|t-t(h)|^2+h^2)^{1/2}$, we conclude the estimate
\[
|\Xi (t) - z_h|\geq C^{-1} |t-a| \qquad \forall t\in [a-1, a+1]\, ,
\]
with a constant $C$ independent of $h$. Since $a$ is a zero of $A$, we finally conclude that the functions
\[
\frac{A(t)}{\Xi (t) - z_h}
\]
are in fact uniformly bounded, independently of $h$. Using the latter estimate and \eqref{e:exp-bound-3} we thus infer that
\begin{equation}\label{e:exp-bound-4}
|\varphi_h'' (t)|\leq C e^{-\beta |t|}\, .
\end{equation}
In particular, upon extraction of a subsequence, we can assume that the $\varphi_h$ converge to a function $\varphi$ strongly in $W^{1,2}$, weakly in $W^{2,\infty}$, and hence strongly in $C^{1, \alpha}$ for every $\alpha<1$. In particular, because of the normalization \eqref{e:normalization-2}, $\varphi$ is a nontrivial $W^{2, \infty}$ function, satisfying the same exponential decay as in \eqref{e:exp-bound-3} and \eqref{e:exp-bound-4}. Moreover, given the bound on the functions $\frac{A(t)}{\Xi (t) -z_h}$, $\varphi$ is in fact a solution of
\begin{equation}\label{e:ODE-again-101}
- \frac{d^2 \varphi}{dt^2} + m_a^2 \varphi + \frac{A}{\Xi - \Xi (a)} \varphi = 0\, .
\end{equation}
Recalling Remark \ref{r:phi(a)-nonzero}, $\varphi (a)\neq 0$ and $\varphi$ is unique up to a constant factor. In particular, we must have
\begin{equation}\label{e:comoda}
\liminf_{h\downarrow 0} |\varphi_h (a)|>0,
\end{equation}
otherwise for a suitable subsequence we would have convergence to a nontrivial solution $\varphi$ for which $\varphi (a)=0$. Because of \eqref{e:comoda} we can use the different normalization $\varphi_h (a) = 1$, which in turn implies that $\varphi_h$ converges (without extracting subsequences) to the unique $W^{2,2}$ solution $\varphi$ of \eqref{e:ODE-again-101} which satisfies $\varphi (a)=1$.
\medskip
{\bf Step 2:} We prove that
\begin{equation}
\lim_{h\to 0} \, {\rm Im} \int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - z_h)^2}\varphi_h(t)^2\, d t
=
\frac{2A'(a)\varphi(a)^2}{d(a)\Xi'(a)^2}\int_{\mathbb{R}} \frac{s^2}{(1+s^2)^2} ds \, .
\end{equation}
Recalling that $z_h = \Xi(t(h)) + i (d(a)h + o(h))$, we write
\begin{align*}
{\rm Im}\, \left[\frac{A}{(\Xi-z_h)^2} \varphi_h^2\right]
& = {\rm Im}\, \left[\frac{A((\Xi-\Xi(t(h)))+ i(d(a)h + o(h)))^2}{((\Xi-\Xi(t(h)))^2 + (d(a)h + o(h))^2)^2}({\rm Re}\, \varphi_h + i {\rm Im}\, \varphi_h)^2\right]
\\&
= \frac{2(d(a)h + o(h)) A(\Xi - \Xi(t(h)))}{((\Xi-\Xi(t(h)))^2 + (d(a)h + o(h))^2)^2}({\rm Re}\, \varphi_h^2 - {\rm Im}\, \varphi_h^2)
\\ & \qquad +
\frac{2A }{(\Xi-\Xi(t(h)))^2 + (d(a)h + o(h))^2}{\rm Re}\, \varphi_h {\rm Im}\, \varphi_h
\\ & \qquad -\frac{ 4(d(a)h + o(h))^2 A}{((\Xi-\Xi(t(h)))^2 + (d(a)h + o(h))^2)^2}{\rm Re}\, \varphi_h {\rm Im}\, \varphi_h
\\ & =: I_h + II_h + III_h \, .
\end{align*}
To ease notation we set
\begin{equation}
f_h := {\rm Re}\, \varphi_h^2 - {\rm Im}\, \varphi_h^2 \, ,
\quad
g_h := {\rm Re}\, \varphi_h\, {\rm Im}\varphi_h \, ,
\end{equation}
and observe that $f_h \to \varphi^2$, $g_h \to 0$ as $h\to 0$, where the convergence is, in both cases, in the strong topologies of $L^2$ and of $C^\alpha$, for every $\alpha <1$.
We will show below that:
\begin{align}
\lim_{h\to 0} \int I_h &= \frac{2A'(a)\varphi(a)^2}{d(a) \Xi'(a)^2}\int_{\mathbb{R}} \frac{s^2}{(1+s^2)^2} ds=: L (a)\label{e:limit-I}\, ,\\
\lim_{h\to 0} \int II_h &= 0\label{e:limit-II}\, ,\\
\lim_{h\to 0} \int III_h &=0\label{e:limit-III}\, .
\end{align}
Considering that none of the numbers $\Xi' (a)$, $A'(a)$, $\varphi (a)$, and $d(a)$ vanish, $L(a)\neq 0$. This implies \eqref{e:limit-nonzero} and concludes the proof. We next study separately the three limits above.
\medskip
{\bf Proof of \eqref{e:limit-I}.}
There exist $\delta>0$ and $r>0$ such that for any $h$ sufficiently small one has $|\Xi(t) - \Xi(t(h))|>\delta$ for all $t\in \mathbb{R}\setminus (a-r/2,a+r/2)$. This implies that
\begin{equation}
\lim_{h \to 0} \int_{\mathbb{R}\setminus (t(h) - r, t(h) + r)} I_h = 0 \, ,
\end{equation}
hence, we are left with
\begin{equation}
\lim_{h \to 0} 2h d(a)\int_{t(h) - r}^{t(h) + r} \frac{A(t)(\Xi(t)- \Xi(t(h)))}{((\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2)^2} f_h(t)
\, d t \, .
\end{equation}
We change variables according to $t= t(h) + sh$:
\begin{align*}
2\int_{-\frac{r}{h}}^{\frac{r}{h}} s & \left(\frac{A(t(h) + sh)}{h} \frac{\Xi(t(h) + sh )- \Xi(t(h))}{sh}\right) \times
\\&
\left( s^2\left(\frac{\Xi(t(h) + sh)- \Xi(t(h))}{sh}\right)^2 + (d(a) + o(h)/h)^2 \right)^{-2}
f_h(t(h) + sh)\, d s
=: C(h) \, .
\end{align*}
Notice that, for any $s\in \mathbb{R}$, we have
\begin{equation}
\lim_{h \to 0}\frac{\Xi(t(h) + sh)- \Xi(t(h))}{sh} = \Xi'(a) \, .
\end{equation}
Moreover the monotonicity of $\Xi$ implies that
\begin{equation}
1/C \le \left| \frac{\Xi(t(h) + sh)- \Xi(t(h))}{sh} \right| \le C
\quad \text{for any $s\in (-r/h, r/h)$} \, .
\end{equation}
Notice that
\begin{equation}
\frac{A(t(h) + sh)}{h}
=
\frac{A(t(h) + sh)-A(t(h))}{h} + \frac{A(t(h)) - A(a)}{h} \, ,
\end{equation}
hence, up to extracting a subsequence $h_i \to 0$, we have
\begin{equation}
\frac{A(t(h_i) + sh_i)}{h_i} \to A'(a)s + x
\end{equation}
for some $x\in \mathbb{R}$ (recall that $|t(h)-a|\leq C h$ and note that $x$ might depend on the subsequence).
Collecting all the estimates above, and using the dominated convergence theorem we deduce that, along the subsequence $h_i$,
\begin{equation}
\lim_{i\to\infty} C(h_i)= 2\int_{\mathbb{R}} \frac{s(A'(a)s + x)\Xi'(a)}{(s^2 \Xi'(a)^2 + d(a)^2)^2} \varphi(a)^2\, ds
=
\frac{2A'(a)\varphi(a)^2}{d(a) \Xi'(a)^2}\int_{\mathbb{R}} \frac{s^2}{(1+s^2)^2} ds \, .
\end{equation}
Observe that the limit does not depend on $x$ and hence does not depend on the chosen subsequence.
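For completeness, here is the elementary computation behind the last equality, written assuming for definiteness $\Xi'(a)>0$ (the term containing $x$ is odd in $s$ and integrates to zero):
\begin{align*}
2\int_{\mathbb{R}} \frac{s(A'(a)s + x)\Xi'(a)}{(s^2 \Xi'(a)^2 + d(a)^2)^2}\, \varphi(a)^2\, ds
&= 2A'(a)\Xi'(a)\varphi(a)^2 \int_{\mathbb{R}} \frac{s^2}{(s^2 \Xi'(a)^2 + d(a)^2)^2}\, ds\\
&= \frac{2A'(a)\varphi(a)^2}{d(a) \Xi'(a)^2}\int_{\mathbb{R}} \frac{u^2}{(1+u^2)^2}\, du\, ,
\end{align*}
where in the last step we changed variables according to $s = d(a) u/\Xi'(a)$.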
\medskip
{\bf Proof of \eqref{e:limit-III}.} Arguing as we did for $I_h$ and using that $g_h\to 0$ in $C^{1/2}$, as $h\to 0$, we easily deduce that
\begin{equation}
\lim_{h\to 0} \int III_h = 0 \, .
\end{equation}
\medskip
{\bf Proof of \eqref{e:limit-II}.}
We need to show that
\begin{equation}
\lim_{h \to 0} \int_{\mathbb{R}} \frac{A(t)}{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} g_h(t)\, dt = 0 \, .
\end{equation}
Observe that $G_h := g_h/|\varphi_h|^2 = |\varphi_h|^{-2} {\rm Re}\, \varphi_h {\rm Im}\, \varphi_h $, and in particular, $|G_h|\leq \frac{1}{2}$. Moreover there exists $r>0$ such that $G_h \to 0$ in $(a-r, a+r)$ in the $C^\alpha$ topology for every $\alpha<1$ (here, we are using that $|\varphi_h(a)|^2 \to |\varphi(a)|^2\neq 0$ as $h\to 0$).
We write
\begin{align}
\int_\mathbb{R} & \frac{A(t)|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} G_h(t)\, dt
\\& =
\int_\mathbb{R} \frac{A(t)|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} (G_h(t) - G_h(t(h))) dt \, ,
\end{align}
where we took advantage of the identity
\begin{equation}
\int_\mathbb{R} \frac{A(t)|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2}\, d t = 0 \, ,
\end{equation}
proven in \eqref{e:imaginary-trick}.
Arguing as we did for $I_h$, we can reduce the problem to showing
\begin{align}
\lim_{h \to 0} & \int_{t(h) - r}^{t(h) + r} \frac{A(t)|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} (G_h(t) - G_h(t(h)))\, dt = 0\, .
\end{align}
We split the integral to the sum of
\begin{align*}
J_1 (h) &:= \int_{t(h)-r}^{t(h)+r} \frac{(A(t)- A(t(h)))|\varphi_h(t)|^2 }{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} (G_h(t) - G_h(t(h)))\, dt\\
J_2 (h) &:= A(t (h)) \int_{t(h)-r}^{t(h)+r} \frac{|\varphi_h(t)|^2}{(\Xi(t) - \Xi(t(h)))^2 + (d(a)h + o(h))^2} (G_h(t) - G_h(t(h)))\, dt\, .
\end{align*}
Next observe that, in the interval that interests us, the following inequalities hold provided $r$ and $h$ are sufficiently small:
\begin{align*}
|A (t) - A(t(h))|&\leq C |t- t(h)|\\
|A(t(h))| &= |A (t (h))-A(a)|\leq C|t(h)-a|\leq C h\\
|G_h (t) - G_h (t (h))|&\leq \|G_h\|_{C^{1/2} (a-r, a+r)} |t-t(h)|^{1/2}\\
|\Xi (t) - \Xi (t(h))| &\geq C^{-1} |t-t(h)|\\
(d(a)h + o (h))^2 &\geq C^{-1} h^2\, .
\end{align*}
Since $\|\varphi_h\|_{L^\infty} \leq C$, we can change variable in the integrals to $\sigma =t-t(h)$ and estimate them as follows:
\begin{align*}
|J_1 (h)|&\leq C \|G_h\|_{C^{1/2} (a-r,a+r)} \int_{-r/2}^{r/2} |\sigma|^{-1/2}\, d\sigma \leq C\|G_h\|_{C^{1/2}} r^{1/2}\, ,\\
|J_2 (h)| &\leq C h \int_{-\infty}^\infty \frac{|\sigma|^{1/2}}{\sigma^2 + C^{-1} h^2}\, d\sigma
= C h^{1/2} \int \frac{|\tau|^{1/2}}{\tau^2 + C^{-1}} d\tau \leq C h^{1/2}\, .
\end{align*}
Clearly $J_2 (h)\to 0$, while $J_1 (h) \to 0$ because $\|G_h\|_{C^{1/2} (a-r,a+r)} \to 0$.}
\end{proof}
\chapter{Proofs of technical statements}
\section{Proof of Remark \ref{r:bounded}}
More generally, we will show here that, for any $q_0\in[1,2[$ and $q_1\in]2,\infty]$, it is true that \begin{equation*}\lVert K_2*\omega\rVert_{L^\infty(\ensuremath{\mathbb R}^2;\ensuremath{\mathbb R}^2)}\le C(q_0, q_1) (\lVert \omega\rVert_{L^{q_0}(\ensuremath{\mathbb R}^2)}+\lVert \omega\rVert_{L^{q_1}(\ensuremath{\mathbb R}^2)})\end{equation*} for all $\omega\in L^{q_0}\cap L^{q_1}$.
Indeed, passing to polar coordinates, one sees that $K_2\vert_{B_1}\in L^{q_1^*}(B_1; \ensuremath{\mathbb R}^2)$ and $K_2\vert_{\ensuremath{\mathbb R}^2\setminus B_1}\in L^{q_0^*}(\ensuremath{\mathbb R}^2\setminus B_1;\ensuremath{\mathbb R}^2)$, where $q_0^*, q_1^*$ are given by $\frac1{q_i}+\frac1{q_i^*}=1$ for $i\in\{0,1\}$. Hölder's inequality implies that for any $x\in\ensuremath{\mathbb R}^2$,
\begin{equation*}
\begin{split}
\abs{(K_2*\omega)(x)} &= \abs{((K_2 \mathbf 1_{B_1})*\omega)(x)+((K_2 (1-\mathbf 1_{B_1}))*\omega)(x)} \\
&\le\norm{K_2}_{L^{q_1^*}(B_1)}\norm{\omega}_{L^{q_1}(\ensuremath{\mathbb R}^2)} + \norm{K_2}_{L^{q_0^*}(\ensuremath{\mathbb R}^2\setminus B_1)}\norm{\omega}_{L^{q_0}(\ensuremath{\mathbb R}^2)} \\
&\le C(q_0, q_1) (\lVert \omega\rVert_{L^{q_0}(\ensuremath{\mathbb R}^2)}+\lVert \omega\rVert_{L^{q_1}(\ensuremath{\mathbb R}^2)}).
\end{split}
\end{equation*}
Since $x$ is arbitrary, this proves the claim above.
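For the reader's convenience, we spell out the polar-coordinate computation behind the two integrability claims. Recalling the explicit form of the Biot--Savart kernel $K_2(x) = \frac{x^\perp}{2\pi|x|^2}$, so that $|K_2(x)| = (2\pi |x|)^{-1}$, we have
\[
\int_{B_1} |K_2|^{q_1^*} = (2\pi)^{1-q_1^*}\int_0^1 r^{1-q_1^*}\, dr <\infty
\qquad\Longleftrightarrow\qquad q_1^*<2 \quad\Longleftrightarrow\quad q_1>2\, ,
\]
and analogously
\[
\int_{\ensuremath{\mathbb R}^2\setminus B_1} |K_2|^{q_0^*} = (2\pi)^{1-q_0^*}\int_1^\infty r^{1-q_0^*}\, dr <\infty
\qquad\Longleftrightarrow\qquad q_0^*>2 \quad\Longleftrightarrow\quad q_0<2
\]
(when $q_0=1$, i.e. $q_0^*=\infty$, the claim is immediate since $|K_2|\le (2\pi)^{-1}$ outside $B_1$).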
\section{Proof of Theorem \ref{thm:Yudo}}
\textbf{Existence.} The existence argument is a classical density argument. Take any sequence $(\omega_0^{(n)})_{n\in\mathbb N}$ of functions in $L^1\cap C^\infty_c$ that converges strongly in $L^1$ to $\omega_0$. Analogously, pick a sequence of smooth functions $(f_n)_{n\in\mathbb N}$ in $C^\infty_c (\ensuremath{\mathbb R}^2\times[0, T])$ converging in $L^1(\ensuremath{\mathbb R}^2\times[0,T])$ to $f$ and satisfying the bound $\|f_n (\cdot, t)\|_{L^\infty} \leq \|f (\cdot, t)\|_{L^\infty}$ for a.e. $t$. Then, let $\omega^{(n)}$ denote the solution of the corresponding Cauchy problem for the Euler equations in vorticity form. The existence of such solutions is a classical fact, see for instance \cite[Theorem A]{McGrath}. By Remark \ref{r:A-priori-estimates}, these solutions satisfy all the a priori estimates needed in Proposition \ref{p:convergence}. Therefore, following the proof of Proposition \ref{p:convergence}, one obtains, in the limit $n\to\infty$, a solution $\omega\in L^\infty([0,T]; L^1\cap L^\infty)$ of the given Cauchy problem. Furthermore, since the a priori estimates of Remark \ref{r:A-priori-estimates} are uniform in $n$, one gets $K_2*\omega\in L^\infty([0,T]; L^2)$.
\begin{remark}
The proof of Proposition \ref{p:convergence} is given for a fixed force $f$, but a straightforward adaptation of the arguments handles the case above, namely a sequence of forces $(f_n)_{n\in\mathbb N}$ that converges in $L^1(\ensuremath{\mathbb R}^2\times[0,T])$ to a given $f$. More precisely, the only difference occurs in the term $I_4 (k)$ of \eqref{e:term-I4}, which still converges to the same limit.
\end{remark}
\textbf{Uniqueness.} The uniqueness proof needs two important facts. The first is a well-known ODE inequality, whose short proof is given, for the reader's convenience, at the end of the section.
\begin{lemma}\label{l:ODE lemma}
Let $T>0$ and let $E:[0,T]\to[0,\infty[$ be a differentiable function satisfying
\begin{equation}\label{e:ODE inequality for E}
\dot E(t)\le p M E(t)^{1-1/p} \quad \text{ and }\quad E(0)=0
\end{equation}
for some fixed $M>0$. Then $E(t)\le (Mt)^p$ for all $t\in[0,T]$.
\end{lemma}
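Note that the bound in the lemma is sharp: the model case with equality, $E(t) = (Mt)^p$, satisfies
\[
\dot E(t) = pM^p t^{p-1} = pM \left((Mt)^p\right)^{1-1/p} = pM E(t)^{1-1/p}\, .
\]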
The second is the classical Calderón-Zygmund $L^p$ estimate, where we need the sharp $p$-dependence of the corresponding constant. This fact is also well known, cf. for instance \cite[Formula (8.45), page 322]{MajdaBertozzi}.
\begin{lemma}\label{l:Estimate on Lp norm of gradient of velocity}
For every $p_0>1$ there is a constant $c (p_0)$ with the following property.
If $v=K_2*\omega$ for some $\omega\in L^1 \cap L^p(\ensuremath{\mathbb R}^2)$ with $p\in [p_0, \infty[$, then $\norm{D v}_{L^p}\le p c \norm{\omega}_{L^p}$.
\end{lemma}
Now, let $v_1=K_2*\omega_1, v_2=K_2*\omega_2$ be two solutions of \eqref{e:Euler} satisfying the assumptions of Theorem \ref{thm:Yudo} and note that $w:=v_1-v_2$ solves
\begin{equation}\label{e:Gleichung fuer die Differenz}
\partial_t w +(v_1\cdot\nabla)w +(w\cdot\nabla)v_2=-\nabla(p_1-p_2)
\end{equation}
(where $p_1, p_2$ are the pressures corresponding to $v_1$ and $v_2$). Clearly
\[
E(t):=\int_{\ensuremath{\mathbb R}^2} |w(x,t)|^2\,\mathrm dx \leq 2 \int_{\ensuremath{\mathbb R}^2} |v_1 (x,t)|^2\,\mathrm dx + 2 \int_{\ensuremath{\mathbb R}^2} |v_2 (x,t)|^2\,\mathrm dx <\infty
\]
is a bounded function on $[0,T]$.
We multiply \eqref{e:Gleichung fuer die Differenz} scalarly by $w$, integrate by parts, and use that $v_1$, $v_2$, and $w$ are divergence-free to conclude
\begin{align*}
\dot E(t) = & - 2\int_{\ensuremath{\mathbb R}^2}((w\cdot\nabla)v_2)w\,\mathrm dx
\le 2\int_{\ensuremath{\mathbb R}^2}|w(x,t)|^2 \abs{D v_2(x,t)}\,\mathrm dx\\
\le & 2\norm{\nabla v_2(\cdot, t)}_{L^p}\norm{w(\cdot, t)}_{L^\infty}^{2/p}\norm{w(\cdot,t)}_{L^2}^{2-2/p}\,.
\end{align*}
Using Remark \ref{r:bounded}, we also have
\begin{equation*}
\begin{split}
\sup_{t\in[0,T]} \norm{w(\cdot, t)}_{L^\infty} &\le \sup_{t\in[0,T]}(\norm{v_1(\cdot, t)}_{L^\infty}+\norm{v_2(\cdot, t)}_{L^\infty}) \\
&\le C \sup_{t\in[0, T]}(\norm{\omega_1(\cdot, t)}_{L^1}+\norm{\omega_1(\cdot, t)}_{L^\infty}+\norm{\omega_2(\cdot, t)}_{L^1}+\norm{\omega_2(\cdot, t)}_{L^\infty}) <\infty\, .
\end{split}
\end{equation*}
Next fix any $p\geq 2$. From Lemma \ref{l:Estimate on Lp norm of gradient of velocity} and the classical $L^p$ interpolation we conclude
\begin{equation*}
\norm{D v_2(\cdot, t)}_{L^p}\le p c \norm{\omega_2 (\cdot, t)}_{L^p}\le p c \norm{\omega_2 }_{L^\infty([0,T]; L^1)}^{1/p}\norm{\omega_2}_{L^\infty([0,T]; L^\infty)}^{1-1/p}.
\end{equation*}
Therefore, $\dot E(t)\le p M_p E(t)^{1-1/p}$
with
\begin{align*}
M_p &= 2\norm{w}_{L^\infty([0,T];L^\infty)}^{2/p} c \norm{\omega_2}_{L^\infty([0,T]; L^1)}^{1/p}\norm{\omega_2 }_{L^\infty([0,T]; L^\infty)}^{1-1/p}\\
&\le 2c \left({\textstyle{\frac{1}{p}}}\norm{w}_{L^\infty([0,T];L^\infty)}^2\norm{\omega_2}_{L^\infty([0,T];L^1)}+\left(1-{\textstyle{\frac{1}{p}}}\right)\norm{\omega_2}_{L^\infty([0,T];L^\infty)}\right) \\
&\le 2c (\norm{w}_{L^\infty([0,T];L^\infty)}^2\norm{\omega_2}_{L^\infty([0,T];L^1)}+\norm{\omega_2}_{L^\infty([0,T];L^\infty)})=: M<\infty\, .
\end{align*}
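The second inequality in the chain above is Young's inequality $x^{1/p}y^{1-1/p}\le \frac{x}{p} + \left(1-\frac1p\right) y$, valid for $x,y\ge 0$, applied with
\[
x = \norm{w}_{L^\infty([0,T];L^\infty)}^2\norm{\omega_2}_{L^\infty([0,T];L^1)}
\qquad\text{and}\qquad
y = \norm{\omega_2}_{L^\infty([0,T];L^\infty)}\, .
\]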
We can thus apply
Lemma \ref{l:ODE lemma} to obtain that $E(t)\le (M_p t)^p \leq (Mt)^p$.
In particular, for any $t\leq \frac{1}{2M}$ we have $E(t)\le \frac 1{2^p}$ and we can let $p$ tend to $\infty$ to infer $E(t)=0$. Since the same estimates apply to any translation $\tilde{E} (t):= E (t+t_0)$ of the function $E$, we immediately conclude that $E$ vanishes identically, namely that $v_1=v_2$ on $\mathbb R^2\times [0,T]$.
\begin{proof}[Proof of Lemma \ref{l:ODE lemma}] Fix an arbitrary $t_0\leq T$ and note that if $E(t_0)=0$ there is nothing to show. Hence assume $E(t_0)> 0$ and set $a:=\sup\{t:E(t)=0\text{ and }t\le t_0\}$ (note that the set is nonempty because $E (0)=0$). $E(a)=0$ by continuity of $E$ and clearly $E(t)>0$ for all $t\in]a,t_0]$. Therefore, we can divide \eqref{e:ODE inequality for E} by $E(t)^{1-1/p}$ to obtain that $\dot E(t) E^{1/p-1}(t)\le p M$
for all $t\in]a, t_0]$. Integrating both sides gives
\begin{equation}\label{e:Integral bound on E}
\int_{a}^{t_0} \dot E(t) E^{1/p-1}(t)\le p M (t_0-a)\, .
\end{equation}
But the left hand side equals $p E^{1/p}(t_0)-pE^{1/p}(a)=p E^{1/p}(t_0)$, from which we infer
$E^{1/p}(t_0)\le M (t_0-a) \le M t_0$.
\end{proof}
\section{Proof of Proposition \ref{p:convergence}}
Recall first the following classical metrizability result of weak${}^*$ topologies of separable Banach spaces.
\begin{lemma}[Metrizability Lemma]\label{l:Metrizability}
Let $X$ be a separable Banach space and let $K\subset X^*$ be weakly${}^*$-compact. Then $K$ is metrizable in the weak${}^*$ topology inherited from $X^*$ and a metric that induces this topology is given by
\begin{equation}\label{e:Metrization-of-weak-star-topology}
d(l, \tilde l)=\sum_{n=1}^\infty 2^{-n}\min\{1, \vert l(x_n)-\tilde l(x_n)\vert\},
\end{equation}
where $(x_n)_{n\in \mathbb N}$ is any sequence in $X$ such that $\{x_n:n\in\mathbb N\}$ is dense in $X$.
\end{lemma}
We now come to the proof of Proposition \ref{p:convergence}. We will prove convergence of the $\omega_{\varepsilon, k}$ to a $\omega_\varepsilon$ for fixed $\varepsilon$ and $k\to\infty$ in the space $C([0, T]; K)$, where $K:=\{u\in L^q(\ensuremath{\mathbb R}^2):\lVert u\rVert_{L^q}\le R\}$ is equipped with the weak${}^*$ topology inherited from $L^q_{\text w}$ (the choice of $q$ is discussed below). Here, $R$ is the uniform bound obtained in \eqref{e:uniform_bound} of Corollary \ref{c:omega_k_epsilon}. Note that since every $L^q$ space is reflexive, one can work just as well with the weak topology on $K$. Let $(\phi_n)_{n\in\mathbb N}$ be a sequence of smooth functions such that $\{\phi_n:n\in\mathbb N\}$ is dense in every $L^q$. The metric given by \eqref{e:Metrization-of-weak-star-topology} now induces the topology of $K$, and it does not depend on $q$. Therefore, using the uniform bound \eqref{e:uniform_bound}, we conclude that \emph{the choice of $q$ does not matter}: it is sufficient to prove the statement of Proposition \ref{p:convergence} for one fixed $q\in]1, p]$ in order to obtain it for all $q\in]1, p]$.
\begin{claim}
The $\omega_{\varepsilon, k}$, seen as functions from $[0, T]$ to $K$, are equicontinuous (for simplicity we define each $\omega_{\varepsilon, k} (\cdot, t)$ on the interval $[0,t_k]$ as constantly equal to $\omega_{\varepsilon, k} (\cdot, t_k)$).
\end{claim}
\begin{proof}
For $\tilde\omega, \hat\omega \in L^q(\ensuremath{\mathbb R}^2)$, let
\begin{equation*}
d_i(\tilde\omega, \hat \omega) \overset{\text{Def.}}=\left\lvert\int_{\ensuremath{\mathbb R}^2} (\tilde\omega-\hat\omega) \phi_i\right\rvert.
\end{equation*}
Since each $\omega_{\varepsilon, k}$ solves the Euler equations in vorticity form, we can estimate
\begin{equation}\label{e:bound-on-ith-distance}
\begin{split}
d_i(\omega_{\varepsilon, k}(t, \cdot), \omega_{\varepsilon, k}(s, \cdot)) &= \left\lvert\int_{\ensuremath{\mathbb R}^2}\int_s^t\partial_\sigma\omega_{\varepsilon, k}(x, \sigma)\phi_i(x)\,\mathrm d\sigma\,\mathrm dx\right\rvert\\
&=\left\lvert\int_{\ensuremath{\mathbb R}^2}\int_s^t \left(f(x, \sigma) - ((K_2*\omega_{\varepsilon, k})\cdot\nabla)\omega_{\varepsilon, k}(x, \sigma)\right)\phi_i(x) \,\mathrm d\sigma\,\mathrm dx\right\rvert\\
&\le\lVert\nabla\phi_i\rVert_{L^\infty(\ensuremath{\mathbb R}^2)}\int_{\ensuremath{\mathbb R}^2}\int_s^t\lvert K_2*\omega_{\varepsilon, k}\rvert\lvert\omega_{\varepsilon, k}\rvert\,\mathrm d\sigma\,\mathrm dx\\
& \qquad + \lVert\phi_i\rVert_{L^\infty(\ensuremath{\mathbb R}^2)} \int_{\ensuremath{\mathbb R}^2}\int_s^t |f(x, \sigma)|\,\mathrm d\sigma\,\mathrm dx\\
&\le C(\lVert\nabla\phi_i\rVert_\infty + \|\phi_i\|_\infty) \lvert s-t\rvert
\end{split}
\end{equation}
whenever $t\geq s \geq t_k$.
Let $\tilde\varepsilon>0$. We can find a $N\in\mathbb N$ (depending on $\tilde\varepsilon$) such that $\sum_{n=N+1}^\infty 2^{-n}\le\frac{\tilde\varepsilon}2$. If \begin{equation*}\lvert t-s\rvert\le \frac{\tilde\varepsilon}{2NC\max_{i\in\{1,\dots,N\}}(\lVert\nabla\phi_i\rVert_{\infty} + \|\phi_i\|_\infty)},\end{equation*} where $C$ is the constant from \eqref{e:bound-on-ith-distance}, then, by the bound in \eqref{e:bound-on-ith-distance}, we get
\begin{equation*}
d(\omega_{\varepsilon, k}(t, \cdot),\omega_{\varepsilon, k}(s, \cdot))\le\frac{\tilde\varepsilon}2+\sum_{i=1}^N \frac{\tilde\varepsilon}{2N}=\tilde\varepsilon.\qedhere
\end{equation*}
\end{proof}
By the Banach-Alaoglu theorem, bounded subsets are relatively compact in the weak${}^*$ topology. Therefore, using reflexivity, for every $t\in[0,T]$, the bounded set $\{\omega_{\varepsilon, k}(\cdot, t): k\in\mathbb N\}$ is also relatively compact in $L^q_{\text w}$.
Therefore, using Arzelà-Ascoli, we can conclude that there exists a subsequence of $(\omega_{\varepsilon, k})_{k\in\mathbb N}$, not relabeled, that converges in $C([0, T]; L_{\text w}^q)$, for every $q$, to the same $\omega_\varepsilon\in C([0, T]; L_{\text w}^q)$.
\begin{claim}
The function $\omega_\varepsilon$ is a solution of the Euler equations in vorticity formulation.
\end{claim}
\begin{proof}
We have, for every $k\in\mathbb N$ and $\phi\in C_{\text c}^\infty(\ensuremath{\mathbb R}^2\times[0, T])$ with $\phi(\cdot, T)=0$ (cf. \eqref{e:distrib}),
\begin{align}
&\underbrace{\int_{\ensuremath{\mathbb R}^2} \omega_{\varepsilon, k}(x, t_k)\phi(x, t_k)\,\mathrm dx}_{=:I_1 (k)} + \underbrace{\int_{t_k}^T \int_{\ensuremath{\mathbb R}^2}\ \omega_{\varepsilon, k}(x,t)\partial_t\phi(x,t)\,\mathrm dx\,\mathrm dt}_{=:I_2 (k)}\nonumber\\
+ &\underbrace{\int_{t_k}^T \int_{\ensuremath{\mathbb R}^2} \omega_{\varepsilon, k}(x,t)((K_2*_x\omega_{\varepsilon, k})(x,t)\cdot\nabla)\phi(x,t)\,\mathrm dx\,\mathrm dt}_{=:I_3 (k)} +\underbrace{\int_{t_k}^T \int_{\ensuremath{\mathbb R}^2} f(x, t)\phi(x, t)\,\mathrm dx\,\mathrm dt}_{=:I_4 (k)} = 0\label{e:term-I4}\, .
\end{align}
The term $I_4(k)$ converges to
\begin{equation*}
\int_{\ensuremath{\mathbb R}^2\times[0, T]} f(x, t)\phi(x,t)\,\mathrm dx\,\mathrm dt\, .
\end{equation*}
By the convergence of the $\omega_{\varepsilon, k}$, \begin{equation*}\lim_{k\to\infty} I_2(k)=\int_{\ensuremath{\mathbb R}^2\times[0, T]} \omega_\varepsilon(x, t)\partial_t\phi(x,t)\,\mathrm dx\,\mathrm dt.\end{equation*} By the definition of the initial condition of $\omega_{\varepsilon, k}$ (cf. \eqref{e:Euler-later-times}), $\omega_{\varepsilon, k}(\cdot, t_k)$ converges strongly in $L^1(\ensuremath{\mathbb R}^2)$ to $\tilde\omega(\cdot, 0)=\omega_0=\omega_\varepsilon(\cdot, 0)$. Therefore, \begin{equation*}\lim_{k\to\infty} I_1(k)=\int_{\ensuremath{\mathbb R}^2}\omega_\varepsilon(x, 0)\phi(x, 0)\,\mathrm dx.\end{equation*}
It therefore only remains to prove the convergence of $I_3$, for which we will require yet another claim.
\begin{claim}
For every $r\in[2, \infty[$ and every $t\in[0,T]$, the set $\{v_{\varepsilon, k}(\cdot, t): k\in\mathbb N\}$ is compact in $L^r (B_R)$ for every $R>0$.
\end{claim}
\begin{proof}
From \eqref{e:uniform_bound}, we know that $\|v_{\varepsilon, k}(\cdot, t)\|_{L^2(\ensuremath{\mathbb R}^2)}\le C$ for some constant $C$ that is independent of $t$. Recall that $v_{\varepsilon, k}=\nabla^\bot\psi_{\varepsilon, k}$, where $\psi_{\varepsilon, k}$ solves $\Delta\psi_{\varepsilon, k} = \omega_{\varepsilon, k}$. Therefore, using the Calder\'{o}n-Zygmund inequality, one gets
\begin{equation*}
\norm{\nabla v_{\varepsilon, k}(\cdot, t)}_{L^2}\le C\norm{\omega_{\varepsilon, k}(\cdot, t)}_{L^2}.
\end{equation*}
Since the $L^2$ norms of the $\omega_{\varepsilon, k}(\cdot, t)$ are uniformly bounded, we can conclude that
\begin{equation*}
\sup_{k\in\mathbb N} \norm{v_{\varepsilon, k}(\cdot, t)}_{W^{1,2}}<\infty.
\end{equation*}
Hence we conclude the compactness in $L^r (B_R)$ from Rellich's Theorem.
\end{proof}
Therefore, the $v_{\varepsilon, k}(\cdot, t)$ converge to $v_\varepsilon(\cdot, t)$ strongly in every $L^r (B_R)$ with $r\in[2,\infty[$. Moreover, thanks to \eqref{e:uniform_bound}, we can apply Lebesgue's dominated convergence theorem to conclude that $v_{\varepsilon, k}\to v_\varepsilon$ as $k\to\infty$ in the space $L^1([0, T]; L^r (B_R))$ for every $r\in[2,\infty[$.
By definition,
\begin{equation*}
\omega_{\varepsilon, k} (v_{\varepsilon, k}\cdot\nabla)\phi-\omega_{\varepsilon} (v_{\varepsilon}\cdot\nabla)\phi = \omega_{\varepsilon, k} (v_{\varepsilon, k}-v_{\varepsilon})\cdot \nabla \phi +
(\omega_{\varepsilon,k} - \omega_\varepsilon) v_\varepsilon \cdot \nabla \phi\, .
\end{equation*}
We thus rewrite
\begin{align}
I_3 (k) &= \int_0^T \int_{B_R} \omega_{\varepsilon, k} (v_{\varepsilon,k}-v_\varepsilon)\cdot \nabla \phi\, dx\, dt
+ \int_0^T \underbrace{\int_{B_R} (\omega_{\varepsilon,k} - \omega_\varepsilon) v_\varepsilon \cdot \nabla \phi\, dx}_{=:J_k(t)}\, dt\, .\label{e:I3-converges-to-0}
\end{align}
Observe first that, for each fixed $t$,
\[
\lim_{k\to\infty} J_k (t) = 0\, ,
\]
since $\omega_{\varepsilon, k} (\cdot, t) - \omega_\varepsilon (\cdot, t)$ converges weakly to $0$ in $L^2$, while $v_\varepsilon (\cdot, t)\cdot \nabla \phi (\cdot, t)$ is a fixed $L^2$ function. On the other hand
\[
|J_k (t)|\leq \|\nabla \phi (\cdot, t)\|_\infty (\|\omega_{\varepsilon, k} (\cdot, t)\|_{L^2} + \|\omega_\varepsilon (\cdot, t)\|_{L^2}) \|v_\varepsilon (\cdot, t)\|_{L^2}\, .
\]
Therefore the second integral in \eqref{e:I3-converges-to-0} converges to $0$. The first integral can be bounded by
\[
\|\nabla \phi\|_{L^\infty} \|v_{\varepsilon, k} - v_\varepsilon\|_{L^1 ([0,T], L^2 (B_R))} \|\omega_{\varepsilon, k}\|_{L^\infty ([0,T], L^2 (B_R))}
\]
and converges to $0$ as well.
\end{proof}
\section{Proof of Lemma \ref{l:extension}}
Consider $\vartheta \in L^2_m\cap \mathscr{S}$ for $m\geq 2$ and let $v:= K_2*\vartheta$. We first claim that
\begin{equation}\label{e:average}
\int_{B_R} v = 0 \qquad \qquad \mbox{for every $R>0$.}
\end{equation}
With \eqref{e:average} at our disposal, since $\|Dv\|_{L^2 (\mathbb R^2)} = \|\vartheta\|_{L^2 (\mathbb R^2)}$, we use the Poincar\'e inequality to conclude
\begin{equation}
R^{-1} \|v\|_{L^2 (B_R)} + \|Dv\|_{L^2 (B_R)} \leq C \|\vartheta\|_{L^2 (\mathbb R^2)}
\end{equation}
for a geometric constant $C$. This is then enough to infer the remaining conclusions of the lemma.
In order to achieve \eqref{e:average} observe first that $v = \nabla^\perp h$, where $h$ is the unique potential-theoretic solution of $\Delta h = \vartheta$, given by $h = K * \vartheta$ with $K (x) = \frac{1}{2\pi} \log |x|$. Since $K(R_\theta x) = K (x)$ and $\vartheta (x) = \vartheta (R_{2\pi/m} x)$, it follows that $h (R_{2\pi/m} x) = h (x)$, i.e. $h$ is $m$-fold symmetric. Therefore $R_{2\pi/m} \nabla h (R_{2\pi/m} x) = \nabla h (x)$. In particular, integrating in $x$ and using that the rotation is a measure-preserving transformation of the disk, we conclude
\[
\int_{B_R} \nabla h = R_{2\pi/m} \int_{B_R} \nabla h \, ,
\]
and thus,
\[
\int_{B_R} \nabla h = \frac{1}{m} \sum_{k=0}^{m-1} R_{2k\pi/m} \int_{B_R} \nabla h
\]
However, since $m\ge 2$, $\sum_{k=0}^{m-1} R_{2k\pi/m} = 0$, showing that $\int_{B_R} \nabla h = 0$.
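For the reader's convenience, the vanishing of the sum of rotations can be checked by identifying $\mathbb R^2$ with $\mathbb C$, under which $R_{2k\pi/m}$ acts as multiplication by $e^{2k\pi i/m}$: for $m\geq 2$ the geometric sum gives
\[
\sum_{k=0}^{m-1} e^{2k\pi i/m} = \frac{1-e^{2\pi i}}{1-e^{2\pi i/m}} = 0\, .
\]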
\begin{remark}\label{r:Camillo_dumb}
We next show that it is not possible to find a continuous extension of the operator $L^2\cap \mathscr{S} \ni \vartheta \mapsto K_2* \vartheta \in \mathscr{S}'$ to the whole $L^2$. First of all we observe that, if such an extension exists, it needs to coincide with $K_2* \vartheta$ when $\vartheta \in L^1 \cap L^2$. We next exhibit a sequence of divergence-free vector fields $\{v_k\}\subset W^{1,1}\cap W^{1,2}$ with the property that the vorticities $\omega_k = \curl v_k$ converge to $0$ strongly in $L^2$, while the $v_k$ converge locally to a constant vector field $v_0\neq 0$. In order to do this, we first define the following functions $\phi_k$ on the positive real axis:
\[
\phi_k (r) :=
\left\{
\begin{array}{ll}
1 + \frac{3}{4\ln k} \qquad &\mbox{for $r\leq \frac{k}{2}$}\\ \\
1 + \frac{3}{4\ln k} - \frac{1}{k^2 \ln k} \left(r-\frac{k}{2}\right)^2\qquad &\mbox{for $\frac{k}{2} \leq r \leq k$}\\ \\
1 + \frac{1}{2 \ln k}- \frac{1}{\ln k} \ln \frac{r}{k} \qquad & \mbox{for $k\leq r \leq k^2$}\\ \\
\frac{1}{2 k^4 \ln k} (r-2k^2)^2\qquad & \mbox{for $k^2 \leq r\leq 2k^2$}\\ \\
0 \qquad &\mbox{for $r\geq 2 k^2$}\, .
\end{array}
\right.
\]
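At the transition radii the adjacent expressions match to first order: for instance, at $r=k$ and at $r=k^2$ both one-sided computations give
\[
\phi_k (k) = 1 + \frac{1}{2\ln k}\, , \qquad \phi_k' (k) = -\frac{1}{k\ln k}\, , \qquad
\phi_k (k^2) = \frac{1}{2\ln k}\, , \qquad \phi_k' (k^2) = -\frac{1}{k^2 \ln k}\, ,
\]
and likewise at $r = \frac{k}{2}$ and $r = 2k^2$.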
Observe that $\phi_k$ is $C^1$ and its derivative is Lipschitz. Next we define the stream functions
\[
\psi_k (x) = - \phi_k (|x|) v_0^\perp \cdot x
\]
and the vector field $v_k (x) = \nabla^\perp \psi_k (x)$. By construction $v_k$ is divergence free, compactly supported, and Lipschitz. In particular, it belongs to $W^{1,p}$ for every $p$. Moreover, $v_k$ equals $(1+\frac{3}{4\ln k}) v_0$ on $B_{k/2}$ and it thus follows that, as $k\to \infty$, $v_k$ converges locally to the constant vector field $v_0$. It remains to check that $\curl v_k = \Delta \psi_k$ converges to $0$ strongly in $L^2$. We compute
\[
\Delta \psi_k = - \underbrace{v_0^\perp \cdot x\, \Delta (\phi_k (|x|))}_{=:f_k} - \underbrace{\nabla (\phi_k (|x|))\cdot v_0^\perp}_{=: g_k}\,
\]
and we seek to bound $f_k$ and $g_k$ pointwise. For what concerns $f_k$ observe that $\Delta\phi_k$ vanishes on $|x|\leq \frac{k}{2}$, $k\leq |x| \leq k^2$, and $2k^2 \leq |x|$. On the remaining regions, using the formula for the Laplacian in polar coordinates, we can estimate
\[
|f_k (x)|\leq |v_0| |x| \left(|\phi_k'' (|x|)| + |x|^{-1} |\phi_k' (|x|)|\right)\, .
\]
In particular, we conclude
\[
|f_k (x)| \leq \frac{C}{|x| \ln k}\, ,
\]
for a constant $C$ independent of $k$.
As for $g_k$, it vanishes for $|x|\leq \frac{k}{2}$ and $|x|\geq k^2$, and where it does not vanish we have the estimate
\[
|g_k (x)|\leq |v_0| |\phi_k' (|x|)| \leq \frac{C}{|x|\ln k}\, ,
\]
again for a constant $C$ independent of $k$. Passing to polar coordinates, we can thus estimate
\begin{align*}
\|\Delta \psi_k\|^2_{L^2 (\mathbb R^2)} \leq & \frac{C}{(\ln k)^2} \int_{k/2}^{2k^2} \frac{1}{r} \, dr =
\frac{C}{(\ln k)^2} \left(\ln (2k^2) - \ln {\textstyle{\frac{k}{2}}}\right)
= \frac{C \ln (4k)}{(\ln k)^2} \longrightarrow 0 \qquad \mbox{as $k\to\infty$}\, .
\end{align*}
\end{remark}
\chapter{General strategy: background field and self-similar coordinates}
\label{chapter:general}
\section{The initial velocity and the force}
First of all, the initial velocity $v_0$ of Theorem \ref{thm:main} will have the following structure\index{aaga@$\alpha$}\index{aagb@$\beta$}
\begin{equation}\label{e:v_0}
v_0 (x) =
\begin{cases}
\beta (2-\alpha)^{-1} |x|^{-\alpha} \chi (\lvert x\rvert) x^\perp\;\; &\mbox{if }{\bar\alpha} =\alpha\\
0 &\mbox{if }{\bar\alpha}>\alpha
\end{cases}
\end{equation}
where $0<\alpha\leq {\bar\alpha}<1$, $\chi$ is a smooth cut-off function, compactly supported in $\mathbb R$ and identically $1$ on the interval $[-1,1]$, and $\beta$ is a sufficiently large constant (whose choice will depend on $\alpha$). For simplicity we will assume that $\chi$ takes values in $[0,1]$ and it is monotone non-increasing on $[0, \infty[$, even though none of these conditions play a significant role.
A direct computation gives $\div v_0 = 0$.
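Indeed, for any radial function $h$ one has
\[
\div \left( h(|x|)\, x^\perp \right) = h'(|x|)\, \frac{x}{|x|}\cdot x^\perp + h(|x|)\, \div x^\perp = 0\, ,
\]
since $x\cdot x^\perp = 0$ and $\div x^\perp = 0$, and \eqref{e:v_0} is of this form with $h(r) = \beta (2-\alpha)^{-1} r^{-\alpha}\chi (r)$.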
The corresponding $\omega_0$ is then given by\index{aagz_0@$\omega_0$}\index{aalv_0@$v_0$}\index{aagx@$\chi$}
\begin{equation}\label{e:omega_0}
\omega_0 (x) =
\curl v_0 (x) =
\begin{cases}
\beta \left[ |x|^{-\alpha} \chi (|x|) + (2-\alpha)^{-1} \chi' (|x|) |x|^{1-\alpha}\right] \;\;&\mbox{if }{\bar\alpha}=\alpha\\
0 &\mbox{if }{\bar\alpha}>\alpha
\end{cases}
\end{equation}
and the relation $v_0 = K_2*\omega_0$ comes from standard Calder{\'o}n-Zygmund theory (since ${\rm div}\, v_0 =0$, $\curl v_0=\omega_0$ and $v_0$ is compactly supported).
The exponent ${\bar\alpha}\in ]0,1[$ is chosen depending on the exponent $p$ in Theorem \ref{thm:main}, so that ${\bar\alpha} p < 2$: in the rest of the notes we assume that $p$, ${\bar\alpha}$, and $\alpha$ are fixed. In particular it follows from the definition that $\omega_0\in L^1\cap L^p$ and that $v_0 \in L^1 \cap L^\infty$.
Next, the function $|x|^{-{\bar\alpha}}$ will be appropriately smoothed to a (radial) function
\begin{equation}\label{e:def-bar-Omega}
\bar \Omega (x) = g (|x|)
\end{equation}
\index{aalg@$g$}\index{aagZbar@$\bar \Omega$}
such that:
\begin{align}
&g \in C^\infty ([0,R]) \qquad \qquad &\forall R>0\, ,\\
&g (r) = r^{-{\bar\alpha}} \qquad\qquad &\mbox{for $r\geq 2$,}\label{e:decay-at-infinity}\\
&g(r) = g(0) + \frac{g''(0)}{2} r^2\qquad \qquad &\mbox{for $r$ in a neighborhood of $0$.}\label{e:g-constant-around-0}
\end{align}
This smoothing will be carefully chosen so as to achieve some particular properties, whose proof will take a good portion of the notes (we remark however that, while a sufficient degree of smoothness and the decay \eqref{e:decay-at-infinity} play an important role, the condition \eqref{e:g-constant-around-0} is just technical and its role is to simplify some arguments). We next define the function $\bar V (x)$\index{aagf@$\zeta$} \index{aalVbar@$\bar V$} as
\begin{equation}\label{e:def-barV}
\bar V (x) = \zeta (|x|) x^\perp\, ,
\end{equation}
where $\zeta$ is
\begin{equation}\label{e:def-zeta}
\zeta(r) = \frac{1}{r^2}\int_0^r \rho g(\rho)\,\mathrm d\rho\, .
\end{equation}
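The definition of $\zeta$ is designed precisely so that $\curl \bar V = \bar \Omega$: differentiating the identity $r^2 \zeta (r) = \int_0^r \rho g(\rho)\,\mathrm d\rho$ gives $2 r \zeta (r) + r^2 \zeta' (r) = r g(r)$, and since $\curl (h(|x|) x^\perp) = 2 h(|x|) + |x| h' (|x|)$ for any radial $h$, we conclude
\[
\curl \bar V (x) = 2\zeta (|x|) + |x| \zeta' (|x|) = g (|x|) = \bar \Omega (x)\, .
\]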
\begin{remark}\label{r:well-defined-2} Observe that under our assumptions $\bar \Omega\in L^q(\ensuremath{\mathbb R}^2)$ for every $q>\frac{2}{{\bar\alpha}}$, but it does not belong to any $L^q(\ensuremath{\mathbb R}^2)$ with $q\leq \frac{2}{{\bar\alpha}}$. Since when $p\geq 2$ the condition ${\bar\alpha} p <2$ implies ${\bar\alpha} < 1$, we cannot appeal to Young's Theorem as in Remark \ref{r:well-defined} to define $K_2* \bar\Omega$.
Nonetheless, $\bar V$ can be understood as a natural definition of $K_2* \bar\Omega$ for radial distributions of vorticity which are in $L^1_{\rm loc}$. Indeed observe first that ${\rm div}\, \bar V=0$ and ${\rm curl}\, \bar V = \bar\Omega$, and notice also that $\bar V$ would decay at infinity like $|x|^{-1}$ if $\bar\Omega$ were compactly supported. This shows that $\bar V$ would indeed coincide with $K_2*\bar \Omega$ for compactly supported radial vorticities. Since we can approximate $\bar \Omega$ with $\bar\Omega_N := \bar\Omega \mathbf{1}_{B_N}$, passing to the limit in the corresponding formulas for $K_2* \bar\Omega_N$ we obtain $\bar V$.
Note also that in the remaining computations what really matters are the identities ${\rm div}\, \bar V = 0$ and ${\rm curl}\, \bar V = \bar \Omega$ and so regarding $\bar V$ as $K_2* \bar\Omega$ only simplifies our terminology and notation.
\end{remark}
The force $f$ will then be defined in such a way that $\tilde \omega$, the curl of the velocity \index{aalf@$f$}\index{aalvtilde@$\tilde{v}$}\index{aagztilde@$\tilde{\omega}$}
\begin{equation}\label{e:tilde-v}
\tilde v (x, t) = \beta t^{1/\alpha-1} \bar V \left(\frac{x}{t^{1/\alpha}}\right) \chi (|x|)\,
\end{equation}
is a solution of \eqref{e:Euler}. In particular, since $(\tilde v\cdot\nabla)\tilde \omega=0$, the force $f$ is given by the explicit formula
\begin{equation}\label{e:def-f}
f (x,t) = \partial_t \tilde{\omega} (x,t)\, .
\end{equation}
With this choice a simple computation, left to the reader, shows that $\tilde{\omega}$ solves \eqref{e:Euler} with initial data $\omega_0$. Note in passing that, although as pointed out in Remark \ref{r:well-defined-2} there is not enough summability to make sense of the identity $K_2* \bar \Omega = \bar V$ by using standard Lebesgue integration, the relation $K_2* \tilde\omega = \tilde{v}$ is made obvious by ${\rm div}\, \tilde{v} =0$, ${\rm curl}\, \tilde{v} = \tilde{\omega}$, and the boundedness of the supports of both $\tilde{\omega}$ and $\tilde{v}$.
The pair $(\tilde{\omega}, \tilde{v})$ is one of the solutions claimed to exist in Theorem \ref{thm:main}. The remaining ones will be described as a one-parameter family $(\omega_\varepsilon, v_\varepsilon)$ for a nonzero choice of the parameter $\varepsilon$, while $(\tilde{\omega}, \tilde{v})$ will correspond to the choice $\varepsilon =0$. We will however stick to the notation $(\tilde\omega, \tilde v)$ to avoid confusion with the initial data.
It remains to check that $f$ belongs to the functional spaces claimed in Theorem \ref{thm:main}.
\begin{lemma}\label{lem:Curl of tilde v}
$\tilde\omega$ is a smooth function on $\{t>0\}$ which satisfies, for all $t>0$ and $x\in\ensuremath{\mathbb R}^2$,
\begin{equation}\label{e:curl-tilde-v}
\tilde \omega (x, t) = \beta t^{-1} \bar \Omega \left(\frac{x}{t^{1/\alpha}}\right) \chi (|x|) + \beta t^{-1} \zeta \left(\frac{|x|}{t^{1/\alpha}}\right) |x|\chi' (|x|)\, ,
\end{equation}
while the external force $f$ and $\partial_t \tilde{v} = K_2*f$ belong, respectively, to the spaces $L^1([0,T]; L^1 \cap L^p)$ and $L^1 ([0,T], L^2)$ for every positive $T$. Likewise $\tilde\omega \in L^\infty ([0,T], L^1\cap L^p)$ and $\tilde{v} \in L^\infty ([0,T], L^2)$.
\end{lemma}
We end the section with a proof of the lemma, while we resume our explanation of the overall approach to Theorem \ref{thm:main} in the next section.
\begin{proof} The formula \eqref{e:curl-tilde-v} is a simple computation. From it we also conclude that $\tilde\omega = \curl\tilde v$ is a smooth function on $\{t>0\}$ and hence differentiable in all variables.
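For the reader's convenience we sketch the computation behind \eqref{e:curl-tilde-v}. Since $\bar V (\xi) = \zeta (|\xi|) \xi^\perp$, we can write $\tilde v (x,t) = \beta t^{-1} \zeta (|x| t^{-1/\alpha}) \chi (|x|) x^\perp$, and the identity $\curl (h(|x|) x^\perp) = 2h (|x|) + |x| h' (|x|)$, applied with $h (r) = \beta t^{-1} \zeta (r t^{-1/\alpha}) \chi (r)$, gives
\[
\tilde \omega (x,t) = \beta t^{-1} \left( 2 \zeta + \frac{|x|}{t^{1/\alpha}} \zeta' \right) \left(\frac{|x|}{t^{1/\alpha}}\right) \chi (|x|) + \beta t^{-1} \zeta \left(\frac{|x|}{t^{1/\alpha}}\right) |x| \chi' (|x|)\, ,
\]
which is \eqref{e:curl-tilde-v} because $2\zeta (r) + r \zeta' (r) = g (r)$ by the definition \eqref{e:def-zeta} of $\zeta$.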
Observe next that $|\bar V (x)|\leq C |x|^{1-{\bar\alpha}}$ and we can thus estimate $|\tilde{v} (x,t)|\leq C t^{\frac{{\bar\alpha}}{\alpha}-1}|x|^{1-{\bar\alpha}}$. Since its spatial support is contained in ${\rm spt}\, (\chi)$, we conclude that $\tilde v$ is bounded and belongs to $L^\infty ([0,T], L^2)$ for any $T>0$.
Using that $\bar \Omega (x) = |x|^{-{\bar\alpha}}=g(\abs x)$ for $|x|\geq 2$, we write
\begin{align*}
\tilde{\omega} (x,t) = & \beta t^{-1} g \left(\frac{|x|}{t^{1/\alpha}}\right) \chi (|x|) \mathbf{1}_{\{|x|\leq 2 t^{1/\alpha}\}}
+ \beta { t^{\frac{{\bar\alpha}}{\alpha}-1}}|x|^{-{\bar\alpha}} \chi (|x|) \mathbf{1}_{\{|x|> 2t^{1/\alpha}\}} \\
&+ \beta t^{-1}\zeta \left(\frac{|x|}{t^{1/\alpha}}\right) |x|\chi' (|x|)\, .
\end{align*}
In particular, recalling that $|\bar \Omega (x)|\leq C|x|^{-{\bar\alpha}}$ and $\zeta (|x|) |x| \leq C |x|^{1-{\bar\alpha}}$ we easily see that
\begin{align}
\|\tilde\omega (\cdot, t)\|_{L^1} &\leq C \int_{\{|x|\in {\rm spt}\, (\chi)\}} { t^{\frac{{\bar\alpha}}{\alpha}-1}}|x|^{-{\bar\alpha}} \, dx + C \int_{\{|x|\in {\rm spt}\, (\chi')\}} { t^{\frac{{\bar\alpha}}{\alpha}-1}}|x|^{1-{\bar\alpha}}\, dx\, ,\\
\|\tilde{\omega} (\cdot, t)\|_{L^p}^p &\leq C \int_{\{|x|\in {\rm spt}\, (\chi)\}} { t^{\left(\frac{{\bar\alpha}}{\alpha} -1\right)p}} |x|^{-p {\bar\alpha}} \, dx + C \int_{\{|x|\in {\rm spt}\, (\chi')\}} { t^{\left(\frac{{\bar\alpha}}{\alpha} -1\right)p}}|x|^{p-p{\bar\alpha}}\, dx\, .
\end{align}
This implies immediately that $\tilde\omega \in L^\infty ([0,T], L^1\cap L^p)$ for any $T>0$, given that ${\bar\alpha} p <2$ (and hence $|x|^{-{\bar\alpha} p}$ is locally integrable).
We now differentiate in time in the open regions $\{|x|< 2t^{1/\alpha}\}$ and $\{|x| > 2t^{1/\alpha}\}$ separately to achieve\footnote{Since we will only estimate integral norms of $f$, its values on $\{|x|= 2t^{1/\alpha}\}$ are of no importance. However, given that $f$ is in fact smooth over the whole domain $\{t>0\}$, we can infer the validity of the formula \eqref{e:f1-f2} for every point $x\in \{|x|= 2t^{1/\alpha}\}$ by approximating it with a sequence of points in $\{|x|< 2t^{1/\alpha}\}$ and passing to the limit in the corresponding expressions.}
\begin{align}
f (x,t) = & - \beta \left(t^{-2} g \left(\frac{|x|}{t^{1/\alpha}}\right)
+ \frac{1}{\alpha} t^{-2-1/\alpha} |x| g' \left(\frac{|x|}{t^{1/\alpha}}\right)\right) \chi (|x|) \mathbf{1}_{\{|x|\leq 2 t^{1/\alpha}\}}\nonumber\\
& + \beta \left(\frac{{\bar\alpha}}{\alpha}-1\right)t^{\frac{{\bar\alpha}}{\alpha}-2} |x|^{-{\bar\alpha}}\chi(|x|)\mathbf{1}_{\{|x|>2t^{1/\alpha}\}}\nonumber\\
& - \beta \left(t^{-2} \zeta \left(\frac{|x|}{t^{1/\alpha}}\right) +\frac{1}{\alpha} t^{-2-1/\alpha} \zeta' \left(\frac{|x|}{t^{1/\alpha}}\right) |x|\right) |x|\chi' (|x|)\nonumber\\
=:& f_1 (x,t) + f_2 (x,t) + f_3(x,t)\, .\label{e:f1-f2}
\end{align}
We wish to prove that $f\in L^1 ([0,T], L^1\cap L^p)$. On the other hand, since for any $T_0>0$ both $f_1+f_2$ and $f_3$ are smooth and have compact support on $\mathbb R^2\times [T_0, T]$, it suffices to show that $f\in L^1 ([0,T_0], L^1\cap L^p)$ for a sufficiently small $T_0$. Recalling that $|g (|x|)| + |g' (|x|)||x| \leq C |x|^{-{\bar\alpha}}$, we can then bound
\begin{equation}\label{e:bound-f1}
|f_1 (x,t)|\leq C {t^{-2+\frac{{\bar\alpha}}{\alpha}}} |x|^{-{\bar\alpha}} \mathbf{1}_{|x|\leq 2 t^{1/\alpha}} \qquad
\mbox{for all $0<t<T_0$ and all $x$}.
\end{equation}
Thus
\begin{align}
\|f_1\|_{L^1 (\mathbb R^2\times [0,T_0])} &\leq C \int_0^{T_0} t^{2/\alpha -2}\, dt < \infty\, ,\\
\|f_1\|_{L^1 ([0,T_0];L^p( \mathbb R^2))} &\leq C \int_0^{T_0} t^{2/(\alpha p) -2}\, dt < \infty\, ,
\end{align}
where the condition $2> {\bar\alpha} p$ entered precisely in the finiteness of the latter integral.
Coming to the second term, we observe that it vanishes when $\bar\alpha = \alpha$. When $\alpha < \bar\alpha$, since $\chi$ is compactly supported in $\mathbb R$, we get
\begin{align*}
\|f_2\|_{L^1(\mathbb R^2\times [0,T_0])} &\leq C \int_0^{T_0} t^{\frac{{\bar\alpha}}{\alpha}-2}(1+t^{\frac 1{\alpha}(2-{\bar\alpha})})\, dt<+\infty\\
\|f_2\|_{L^1([0,T_0];L^p(\mathbb R^2))}
&\leq \int_0^{T_0} t^{\frac{{\bar\alpha}}{\alpha}-2}(1+t^{\frac p{\alpha}(2-{\bar\alpha})})^{1/p}\, dt <+\infty
\, .
\end{align*}
In order to handle the last term, we first compute $\zeta$ explicitly:
\begin{align*}
\zeta (r) &= \frac{1}{r^2} \left(C + \int_2^r \rho^{1-{\bar\alpha}}\,\mathrm d\rho\right) = a r^{-2} + b r^{-{\bar\alpha}} & \text{for all } r \geq 2\, ,
\shortintertext{where $a$ and $b$ are two fixed constants. Likewise}
\zeta'(r)&= -2 a r^{-3} -{\bar\alpha} b r^{-{\bar\alpha}-1}\qquad &\text{for all } r \geq 2\, .
\end{align*}
Recall that $\chi' (|x|)=0$ for $|x|\leq 1$.
Therefore, for $t\leq T_0$ sufficiently small, the functions $\zeta$ and $\zeta'$ are computed at arguments $|x| t^{-1/\alpha} \geq 2$ in the formula for $f_3$ (cf. \eqref{e:f1-f2}). Thus,
\[
f_3 (x,t)
=-\beta t^{-2}\left(
\left(1-\frac 2{\alpha}\right)a t^{\frac 2{\alpha}}|x|^{-1} + b\left(1-\frac{{\bar\alpha}}{\alpha}\right)t^{\frac{{\bar\alpha}}{\alpha}} |x|^{1-{\bar\alpha}}
\right)\chi'(|x|)\, .
\]
In particular $f_3$ has compact support. Since $\alpha <1$ the function
\[
-\beta t^{-2}
\left(1-\frac 2{\alpha}\right)a t^{\frac 2{\alpha}}|x|^{-1} \chi'(|x|)
\]
is bounded, and thus belongs to
$L^1 ([0,T_0], L^1\cap L^p)$. As for the second summand, it vanishes if $\alpha = \bar \alpha$, while its $L^p$ norm at time $t$ can be bounded by $C t^{-2+\frac{\bar \alpha}{\alpha}}$ if $\bar\alpha > \alpha$. The latter function however belongs to $L^1 ([0,T_0])$.
Observe next that, since for every positive $t$ the function $f (\cdot, t)$ is smooth and compactly supported, $K_2* f (\cdot, t)$ is the unique divergence-free vector field which belongs to $L^1$ and such that its curl gives $f (\cdot, t)$. Hence, since $f (\cdot, t) = \curl \partial_t \tilde{v} (\cdot, t)$ and $\partial_t \tilde{v} (\cdot, t)$ is smooth and compactly supported, we necessarily have $K_2 * f (\cdot, t) = \partial_t \tilde{v} (\cdot, t)$. It remains to show that $\partial_t \tilde{v} \in L^1 ([0,T]; L^2)$ for every positive $T$. To that end we compute
\[
\tilde{v} (x,t) = \beta t^{1/\alpha-1} \bar{V} \left(\frac{x}{t^{1/\alpha}}\right) \chi (|x|)
= \beta t^{-1} \zeta \left(\frac{|x|}{t^{1/\alpha}}\right) x^\perp \chi (|x|)
\]
\[
\partial_t \tilde{v} (x,t ) = - \beta t^{-2} \chi (|x|) x^\perp \left(\zeta \left( \frac{|x|}{t^{1/\alpha}}\right) +\frac{1}{\alpha} \frac{|x|}{t^{1/\alpha}} \zeta' \left(\frac{|x|}{t^{1/\alpha}} \right)\right)\, .
\]
In order to compute the $L^2$ norm of $\partial_t \tilde{v} (\cdot, t)$ we break the space into two regions as in the computations above. In the region $\{|x|\leq 2 t^{1/\alpha}\}$ we use that $\zeta$ and $\zeta'$ are bounded to compute
\[
\int_{|x|\leq 2 t^{1/\alpha}} |\partial_t \tilde{v} (x,t)|^2\, dx \leq C t^{-4} \int_{|x|\leq 2 t^{1/\alpha}} |x|^2\,dx \leq C t^{4/\alpha -4}\, ,
\]
which is a bounded function on $[0,1]$. On $\{|x|\geq 2 t^{1/\alpha}\}$ we observe that the function can be explicitly computed as
\[
-\beta t^{-2} \chi (|x|) x^\perp \left(a\left(1-\frac{2}{\alpha}\right) t^{2/\alpha} |x|^{-2} + b \left(1-\frac{\bar \alpha}{\alpha}\right) t^{\frac{\bar\alpha}{\alpha}}|x|^{-\bar \alpha}\right)\, .
\]
If we let $\bar R>0$ be such that the support of $\chi$ is contained in $B_{\bar R}$, we use polar coordinates to estimate
\[
\int_{|x|\geq 2 t^{1/\alpha}} |\partial_t \tilde{v} (x,t)|^2\, dx \leq C t^{-4+4/\alpha} \int_{2t^{1/\alpha}}^{\bar R} \frac{d\rho}{\rho} + C |\alpha - \bar \alpha| t^{2\frac{\bar \alpha}{\alpha}-4}\, .
\]
We can therefore estimate the $L^2$ norm of $\partial_t \tilde{v}$ at time $t$ by
\[
\|\partial_t \tilde{v} (\cdot, t)\|_{L^2} \leq C + C |\alpha - \bar \alpha| t^{\frac{\bar \alpha}{\alpha} -2}\, .
\]
When $\alpha = \bar \alpha$ we conclude that the $L^2$ norm of $\partial_t \tilde{v}$ is bounded, while for $\bar\alpha > \alpha$ the function $t\mapsto t^{\frac{\bar \alpha}{\alpha} -2}$ belongs to $L^1 ([0,T])$.
\end{proof}
\section{The infinitely many solutions}
We next give a more precise statement leading to Theorem \ref{thm:main} as a corollary.
\begin{theorem}\label{thm:main2}
Let $p\in ]2, \infty[$ be given and let $\alpha$ and ${\bar\alpha}$ be positive numbers such that $\alpha \leq {\bar\alpha}$ and ${\bar\alpha} p <2$. For an appropriate choice of the smooth function $\bar \Omega$ and of a positive constant $\beta$ as in the previous section, we can find, additionally:
\begin{itemize}
\item[(a)] a suitable nonzero function $\eta\in (L^1\cap H^2) (\mathbb R^2; \mathbb C)$ with $K_2 * \eta\in L^2(\ensuremath{\mathbb R}^2; \mathbb C^2)$, \index{aagh@$\eta$}
\item[(b)] a real number $b_0$ and a positive number $a_0>0$, \index{aalazero@$a_0$}\index{aalbzero@$b_0$}
\end{itemize}
with the following property.
Consider $\omega_0$, $v_0$, $\tilde{v}$, $\tilde\omega = \curl \tilde{v}$, and $f$ as defined in \eqref{e:v_0}, \eqref{e:omega_0}, \eqref{e:tilde-v}, and \eqref{e:def-f}. Then for every $\varepsilon\in \ensuremath{\mathbb R}$ there is a solution $\omega_\varepsilon$ of \eqref{e:Euler} with initial data $\omega_0$ such that \index{aage@$\varepsilon$}\index{aagzepsilon@$\omega_\varepsilon$}
\begin{enumerate}[(i)]
\item\label{item:1-omega in L infinity L1 Lp} $\omega_\varepsilon \in L^\infty ([0,T], L^1\cap L^p )$ for every $T>0$;
\item\label{item:2-v in L infinity L2} $v_\varepsilon := K_2 * \omega_\varepsilon \in L^\infty ([0,T], L^2)$ for every $T>0$;
\item\label{item:3-eigenvalue bound} as $t\to0$,
\begin{equation}\label{e:asymptotic-in-t}
\|\omega_\varepsilon (\cdot, t) - \tilde\omega (\cdot, t) - \varepsilon t^{a_0-1} \operatorname{Re} (t^{i b_0} \eta (t^{-1/\alpha} \cdot))\|_{L^2(\ensuremath{\mathbb R}^2)} = o (t^{a_0 +1/\alpha -1})\, ;
\end{equation}
\item\label{e:Camillo-is-silly} if $b_0=0$, then $\eta$ is real-valued.
\end{enumerate}
\end{theorem}
Observe that, by a simple computation,
\begin{align*}
\| t^{a_0-1} \operatorname{Re} (t^{ib_0} \eta (t^{-1/\alpha} \cdot))\|_{L^2} = t^{a_0 +1/\alpha -1} \|\operatorname{Re} (t^{ib_0} \eta)\|_{L^2}\, ,
\end{align*}
and thus it follows from \eqref{e:asymptotic-in-t} that
\begin{equation}\label{eq:difference-of-the-omega}
\limsup_{t\downarrow 0} t^{1-1/\alpha - a_0} \|\omega_{\varepsilon} (\cdot, t) - \omega_{\bar\varepsilon} (\cdot, t)\|_{L^2} \geq |\varepsilon - \bar\varepsilon| \max_{\theta\in[0,2\pi]}\| \operatorname{Re} (e^{i\theta} \eta)\|_{L^2}\,
\end{equation}
(note that in the last conclusion we need \ref{e:Camillo-is-silly} if $b_0=0$).
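For completeness, \eqref{eq:difference-of-the-omega} can be derived as follows: by the triangle inequality and \eqref{e:asymptotic-in-t},
\[
t^{1-1/\alpha-a_0} \|\omega_{\varepsilon} (\cdot, t) - \omega_{\bar\varepsilon} (\cdot, t)\|_{L^2} \geq |\varepsilon - \bar\varepsilon|\, \|\operatorname{Re} (t^{ib_0} \eta)\|_{L^2} - o (1)\, ,
\]
and, writing $t^{ib_0} = e^{ib_0 \ln t}$, it suffices to choose a sequence $t_j \downarrow 0$ along which $b_0 \ln t_j$ converges modulo $2\pi$ to a maximizer $\theta^*$ of $\theta \mapsto \|\operatorname{Re} (e^{i\theta} \eta)\|_{L^2}$ (when $b_0 = 0$ the quantity $\|\operatorname{Re} (t^{ib_0}\eta)\|_{L^2} = \|\eta\|_{L^2}$ is independent of $t$ and, $\eta$ being real-valued, coincides with the maximum).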
Since $\|\eta\|_{L^2} >0$, we conclude that the solutions $\omega_\varepsilon$ described in Theorem \ref{thm:main2} must be all distinct.
For each fixed $\varepsilon$, the solution $\omega_\varepsilon$ will be achieved as a limit of a suitable sequence of approximations $\omega_{\varepsilon, k}$\index{aagzepsilonk@$\omega_{\varepsilon, k}$} in the following way. After fixing a sequence of positive times $t_k$\index{aaltk@$t_k$} converging to $0$, which for convenience are chosen to be $t_k := e^{-k}$, we solve the following Cauchy problem for the Euler equations in vorticity formulation
\begin{equation}\label{e:Euler-later-times}
\left\{
\begin{array}{ll}
& \partial_t \omega_{\varepsilon,k} + ((K_2* \omega_{\varepsilon,k})\cdot \nabla) \omega_{\varepsilon,k} = f \\ \\
& \omega_{\varepsilon, k} (\cdot, t_k) = \tilde\omega (\cdot, t_k) + \varepsilon t_k^{a_0-1} \operatorname{Re} (t_k^{ib_0} \eta (t_k^{-1/\alpha}\cdot))\, .
\end{array}\right.
\end{equation}
Observe that, since $t_k$ is positive, the initial data $\omega_{\varepsilon, k} (\cdot, t_k)$ belongs to $L^1\cap L^\infty$, while the corresponding initial velocity $v_{\varepsilon, k} (\cdot, t_k) := K_2 * \omega_{\varepsilon, k} (\cdot, t_k)$ belongs to $L^2$. Since $K_2 * f \in L^1 ([0,T], L^2)$ for every $T$, we can apply the classical theorem of Yudovich (namely, Theorem \ref{thm:Yudo} and Remark \ref{r:A-priori-estimates}) to conclude the following.
\begin{corollary}\label{c:omega_k_epsilon}
For every $k$ and $\varepsilon$ there exists a unique solution $\omega_{\varepsilon, k}$ of \eqref{e:Euler-later-times} with the property that $\omega_{\varepsilon , k} \in L^\infty ([t_k, T], L^1\cap L^\infty)$ and $v_{\varepsilon, k}\in L^\infty ([t_k, T], L^2)$ for every positive $T$. Moreover, we have the following bounds for every $t\geq t_k$:
\begin{align}
\|\omega_{\varepsilon, k} (\cdot, t)\|_{L^1} \leq &\|\omega_{\varepsilon, k} (\cdot, t_k)\|_{L^1} +
\int_{t_k}^t \|f (\cdot, s)\|_{L^1}\,\mathrm ds\\
\|\omega_{\varepsilon, k} (\cdot, t)\|_{L^p} \leq &\|\omega_{\varepsilon, k} (\cdot, t_k)\|_{L^p} +
\int_{t_k}^t \|f (\cdot, s)\|_{L^p}\,\mathrm ds \label{e:omega_Lp_estimate}\\
\|v_{\varepsilon, k} (\cdot, t)\|_{L^2}\leq &\|v_{\varepsilon, k} (\cdot, t_k)\|_{L^2} +
\int_{t_k}^t \|K_2* f (\cdot, s)\|_{L^2}\,\mathrm ds\, .
\end{align}
\end{corollary}
Next, since we can easily bound $\|\omega_{\varepsilon, k} (\cdot, t_k)\|_{L^1}$, $\|\omega_{\varepsilon, k} (\cdot, t_k)\|_{L^p}$, and $\|v_{\varepsilon, k} (\cdot, t_k)\|_{L^2}$ independently of $k$, for each fixed $\varepsilon$ we conclude
\begin{equation}\label{e:uniform_bound}
\sup_{k\in \mathbb N} \sup_{t\in [t_k, T]}
\left(\|\omega_{\varepsilon, k} (\cdot, t)\|_{L^1} + \|\omega_{\varepsilon, k} (\cdot, t)\|_{L^p} + \|v_{\varepsilon, k} (\cdot, t)\|_{L^2}
\right) < \infty\, .
\end{equation}
In turn we can use \eqref{e:uniform_bound} to conclude that, for each fixed $\varepsilon$, a subsequence of $\omega_{\varepsilon, k}$ converges to a solution $\omega_\varepsilon$ of \eqref{e:Euler} which satisfies the conclusions \ref{item:1-omega in L infinity L1 Lp} and \ref{item:2-v in L infinity L2} of Theorem \ref{thm:main2}.
\begin{proposition}\label{p:convergence}\label{P:CONVERGENCE}
Assume $p, \alpha, {\bar\alpha}, \omega_0, v_0, \tilde\omega, \tilde{v}, f, a_0, b_0$, and $\eta$ are as in Theorem \ref{thm:main2} and let $\omega_{\varepsilon, k}$ be as in Corollary \ref{c:omega_k_epsilon}. Then, for every fixed $\varepsilon$, there is a subsequence, not relabeled, with the property that $\omega_{\varepsilon, k}$ converges (uniformly in $C ([0,T], L^q_w)$ for every positive $T$ and every $1< q\leq p$, where $L^q_w$ denotes the space $L^q$ endowed with the weak topology) to a solution $\omega_\varepsilon$ of \eqref{e:Euler} on $[0, \infty[$ with initial data $\omega_0$ and satisfying the bounds \ref{item:1-omega in L infinity L1 Lp} and \ref{item:2-v in L infinity L2} of Theorem \ref{thm:main2}.
\end{proposition}
The proof uses classical convergence theorems and we give it in the appendix for the reader's convenience. The real difficulty in the proof of Theorem \ref{thm:main2} is to ensure that the bound \ref{item:3-eigenvalue bound} holds. This is reduced to the derivation of suitable estimates on $\omega_{\varepsilon, k}$, which we detail in the following statement.
\begin{theorem}\label{thm:main3}
Assume $p, \alpha, {\bar\alpha}$ are as in Theorem \ref{thm:main2} and fix $\varepsilon >0$. For an appropriate choice of $\bar \Omega$ and $\beta$ there is a triple $\eta$, $a_0$, and $b_0$ as in Theorem \ref{thm:main2} and three positive constants $T_0, \delta_0$, and $C$ with the property that
\begin{equation}\label{e:asymptotic-in-t-2}
\|\omega_{\varepsilon,k} (\cdot, t) - \tilde\omega (\cdot, t) - \varepsilon t^{a_0-1} {\rm Re}\, (t^{ib_0} \eta (t^{-1/\alpha} \cdot))\|_{L^2} \leq C t^{a_0+1/\alpha-1+\delta_0} \qquad \forall t\in [t_k, T_0]\, .
\end{equation}
\end{theorem}
It is then obvious that the final conclusion \ref{item:3-eigenvalue bound} of Theorem \ref{thm:main2} is a consequence of the more precise estimate \eqref{e:asymptotic-in-t-2} on the approximations $\omega_{\varepsilon,k}$. The rest of these lecture notes is thus devoted to the proof of Theorem \ref{thm:main3}, and we will start in the next section by breaking it into two main parts.
\section{Logarithmic time scale and main Ansatz}
\index{similarity-variables@Similarity variables}\index{aagZ@$\Omega$}\index{aagt@$\tau$}\index{aagx@$\xi$}
First of all, we will change variables and unknowns of the Euler equations (in vorticity formulation) in a way which will be convenient for many computations. Given a solution $\omega$ of \eqref{e:Euler} on $\ensuremath{\mathbb R}^2\times [T_0, T_1]$ with $0\leq T_0 \leq T_1$, we introduce a new function $\Omega$ on $\mathbb R^2 \times [\ln T_0, \ln T_1]$ with the following transformation. We set $\tau=\ln t$, $\xi=x t^{-1/\alpha}$ and
\begin{equation}\label{e:omega->Omega}
\Omega (\xi, \tau) := e^{\tau} \omega (e^{\tau/\alpha} \xi, e^\tau)\, ,
\end{equation}
which in turn results in
\begin{equation}\label{e:Omega->omega}
\omega (x, t) = t^{-1} \Omega (t^{-1/\alpha} x, \ln t)\, .
\end{equation}
Observe that, if $v (\cdot, t) = K_2 * \omega (\cdot, t)$ and $V( \cdot, \tau) = K_2 * \Omega (\cdot, \tau)$, we can derive similar transformation rules for the velocities as \index{aalV@$V$}
\begin{align}
V (\xi, \tau) &= e^{\tau (1-1/\alpha)} v(e^{\tau/\alpha} \xi, e^\tau)\label{e:v->V}\, ,\\
v (x,t) &= t^{-1+1/\alpha} V (t^{-1/\alpha} x, \ln t)\label{e:V-t>v}\, .
\end{align}
Likewise, we have an analogous transformation rule for the force $f$, which results in \index{aalF@$F$}
\begin{align}
F (\xi, \tau) &= e^{2\tau} f (e^{\tau/\alpha} \xi, e^\tau)\, ,\label{e:f->F}\\
f (x,t) &= t^{-2} F (t^{-1/\alpha} x, \ln t)\label{e:F->f}\, .
\end{align}
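These transformation rules are consistent with the Biot--Savart law thanks to the $(-1)$-homogeneity of the kernel, $K_2 (\lambda x) = \lambda^{-1} K_2 (x)$ for $\lambda >0$. Indeed, substituting $y = e^{\tau/\alpha} \xi'$,
\begin{align*}
(K_2 * \Omega (\cdot, \tau)) (\xi) &= e^\tau \int K_2 (\xi - \xi')\, \omega (e^{\tau/\alpha} \xi', e^\tau)\, d\xi'
= e^{\tau \left(1 + \frac{1}{\alpha} - \frac{2}{\alpha}\right)} \int K_2 (e^{\tau/\alpha} \xi - y)\, \omega (y, e^\tau)\, dy\\
&= e^{\tau (1-1/\alpha)}\, v (e^{\tau/\alpha} \xi, e^\tau) = V (\xi, \tau)\, ,
\end{align*}
in agreement with \eqref{e:v->V}.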
In order to improve the readability of our arguments, throughout the rest of the notes we will use the overall convention that, given some object related to the Euler equations in the ``original system of coordinates'', the corresponding object after applying the transformations above will be denoted with the same letter in capital case.
\begin{remark}
Note that the naming of $\bar V$ and $\bar\Omega$ is somewhat of an exception to this convention, since $(\bar\Omega, \bar V)$ is a solution of \eqref{e:Euler} in Eulerian variables. However, if one regards them as functions of the variable $\xi$, which is how they will be used in the nonlinear part, then they solve the Euler equations in self-similar variables with forcing (see \eqref{e:Euler-transformed}).
\end{remark}
Straightforward computations then allow us to pass from \eqref{e:Euler} to an equation for the new unknown $\Omega$ in the new coordinates. More precisely, we have the following
\begin{lemma}\label{l:coordinates-change}
Let $p>2$ and $\infty \geq T_1 > T_0\geq 0$. Then $\omega\in L^\infty_{\text{loc}} (]T_0, T_1[; L^1\cap L^p)$ and $v (\cdot, t) = K_2* \omega (\cdot, t)$ satisfy
\begin{equation}\label{e:Euler-again}
\partial_t \omega + (v \cdot \nabla) \omega = f\, ,
\end{equation}
if and only if $\Omega$ and $V (\cdot, t) = K_2 * \Omega (\cdot, t)$ satisfy
\begin{equation}\label{e:Euler-transformed}
\partial_\tau \Omega - \left(1 + \frac{\xi}{\alpha}\cdot \nabla\right) \Omega + (V\cdot \nabla) \Omega = F\, .
\end{equation}
\end{lemma}
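The computation behind Lemma \ref{l:coordinates-change} is the following. From \eqref{e:Omega->omega}, with $\xi = t^{-1/\alpha} x$ and $\tau = \ln t$, the chain rule gives
\begin{align*}
\partial_t \omega (x,t) &= t^{-2} \left( \partial_\tau \Omega - \Omega - \frac{\xi}{\alpha}\cdot \nabla \Omega \right) (\xi, \tau)\, ,\\
((v\cdot \nabla) \omega) (x,t) &= t^{-1+1/\alpha}\, t^{-1}\, t^{-1/\alpha} \left( (V\cdot \nabla) \Omega \right) (\xi, \tau) = t^{-2} \left( (V \cdot \nabla) \Omega \right) (\xi, \tau)\, .
\end{align*}
Summing the two identities, using \eqref{e:F->f}, and multiplying by $t^2$, we obtain \eqref{e:Euler-transformed}.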
We next observe that, due to the structural assumptions on $\tilde \omega$ and $\tilde v$, the corresponding fields $\tilde \Omega$ and $\tilde V$ can be expressed in the following way: \index{aalVtilde@$\tilde V$}\index{aagZtilde@$\tilde\Omega$}
\begin{align}
\tilde{V} (\xi, \tau) &= \beta \bar V (\xi) \chi (e^{\tau/\alpha} |\xi|)\, ,\label{e:tildeV}\\
\tilde{\Omega} (\xi, \tau) &= \beta \bar \Omega (\xi) \chi (e^{\tau/\alpha} |\xi|) + \beta \zeta (|\xi|)
\chi' (e^{\tau/\alpha} |\xi|) e^{\tau/\alpha} |\xi|\, \label{e:tildeOmega}.
\end{align}
Observe that, for every fixed compact set $K$ there is a sufficiently large $T (K)>0$ with the property that
\begin{itemize}
\item $\chi (e^{\tau/\alpha} |\cdot|)= 1$ and $\chi' (e^{\tau/\alpha} |\cdot|) = 0$ on $K$ whenever $\tau \leq - T (K)$.
\end{itemize}
Since in order to prove Theorem \ref{thm:main} we are in fact interested in very small times $t$, which in turn correspond to very negative $\tau$, it is natural to consider $\tilde\Omega$ and $\tilde{V}$ as perturbations of $\beta \bar \Omega$ and $\beta \bar V$. We will therefore introduce the notation
\begin{align}
\tilde \Omega &= \beta \bar \Omega + \Omega_r\, ,\\
\tilde V & = \beta \bar V + V_r := \beta \bar V + K_2* \Omega_r\, .
\end{align}
We are thus led to the following Ansatz for $\Omega_{\varepsilon,k} (\xi, \tau) = e^{\tau} \omega_{\varepsilon ,k} (e^{\tau/\alpha} \xi, e^\tau)$:
\begin{equation}\label{e:Ansatz-1}
\Omega_{\varepsilon, k} (\xi, \tau) = \beta \bar \Omega (\xi) + \Omega_r (\xi, \tau) + \varepsilon e^{\tau a_0} {\rm Re}\, (e^{i\tau b_0} \eta (\xi)) + \Omega_{\text{per}, k} (\xi, \tau)\, .
\end{equation}
The careful reader will notice that indeed the function $\Omega_{\text{per},k}$ depends upon the parameter $\varepsilon$ as well, but since such dependence will not really play a significant role in our discussion, in order to keep our notation simple, we will always omit it. \index{aagZr@$\Omega_r$}\index{aagZepsilonk@$\Omega_{\varepsilon, k}$}\index{aagZperk@$\Omega_{\text{per}, k}$}\index{aalVr@$V_r$}
We are next ready to complete our Ansatz by prescribing one fundamental property of the function $\eta$.
We first introduce the integro-differential operator \index{aalLss@$L_{\text{ss}}$}\index{Self-similar operator}
\begin{equation}\label{e:Lss}
L_{\text{ss}} (\Omega) := \left(1+\frac{\xi}{\alpha} \cdot \nabla\right) \Omega - \beta (\bar V \cdot \nabla) \Omega - \beta ((K_2* \Omega)\cdot \nabla) \bar \Omega\, .
\end{equation}
We will then prescribe that $\eta$ is an eigenfunction of $L_{\text{ss}}$ with eigenvalue $z_0 = a_0 + ib_0$, namely, \index{aalz0@$z_0$}
\begin{equation}\label{e:Ansatz-2}
L_{\text{ss}} (\eta) = z_0 \eta\, .
\end{equation}
Observe in particular that, since $L_{\text{ss}}$ is a real operator (i.e. $L_{\text{ss}} (\eta)$ is real-valued when $\eta$ is real-valued, cf. Section \ref{s:abstract-operators}), the complex conjugate $\bar \eta$ is an eigenfunction of $L_{\text{ss}}$ with eigenvalue $\bar z_0$, so that, in particular, the function
\begin{equation}\label{e:Omega_lin}
\Omega_{\text{lin}} (\xi, \tau) := \varepsilon e^{a_0 \tau} {\rm Re}\, (e^{i b_0 \tau} \eta (\xi))
= \frac{\varepsilon}{2} (e^{z_0 \tau} \eta (\xi) + e^{\bar z_0 \tau} \bar \eta (\xi))
\end{equation}
satisfies the linear evolution equation
\begin{equation}\label{e:evolution_of_Omega_lin}
\partial_\tau \Omega_{\text{lin}} - L_{\text{ss}} (\Omega_{\text{lin}})=0\, .
\end{equation}
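Indeed \eqref{e:evolution_of_Omega_lin} is an immediate consequence of \eqref{e:Ansatz-2} and of the linearity of $L_{\text{ss}}$:
\[
\partial_\tau \Omega_{\text{lin}} = \frac{\varepsilon}{2} \left( z_0 e^{z_0 \tau} \eta + \bar z_0 e^{\bar z_0 \tau} \bar \eta\right)
= \frac{\varepsilon}{2} \left( e^{z_0 \tau} L_{\text{ss}} (\eta) + e^{\bar z_0 \tau} L_{\text{ss}} (\bar \eta)\right)
= L_{\text{ss}} (\Omega_{\text{lin}})\, ,
\]
where we used that $\bar \eta$ is an eigenfunction of $L_{\text{ss}}$ with eigenvalue $\bar z_0$.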
The relevance of our choice will become clear from the discussion of Section \ref{s:nonlinear}. The point is that \eqref{e:evolution_of_Omega_lin} is close to the linearization of Euler (in the new system of coordinates) around $\tilde{\Omega}$. The ``true linearization'' would be given by \eqref{e:evolution_of_Omega_lin} if we were to substitute $\bar \Omega$ and $\bar V$ in \eqref{e:Lss} with $\tilde{\Omega}$ and $\tilde{V}$. Since however the pair $(\tilde \Omega, \tilde{V})$ is well approximated by $(\bar \Omega, \bar V)$ for very negative times, we will show that \eqref{e:evolution_of_Omega_lin} indeed drives the evolution of $\Omega_{\varepsilon,k}-\tilde{\Omega}$ up to an error term (i.e. $\Omega_{\text{per},k}$) which is smaller than $\Omega_{\text{lin}}$.
\section{Linear theory}
We will look for the eigenfunction $\eta$ in a particular subspace of $L^2$. More precisely, for every
$m\in \mathbb N\setminus \{0\}$ we denote by $L^2_m$ the set of those elements $\vartheta \in L^2 (\mathbb R^2, \mathbb C)$ which are $m$-fold symmetric, i.e., denoting by $R_\theta: \mathbb R^2\to \mathbb R^2$ the counterclockwise rotation of angle $\theta$ around the origin, \index{rotational-symmetry@Rotationally symmetric function space}\index{aalL2m@$L^2_m$}
they satisfy the condition
\begin{align*}
\vartheta &= \vartheta \circ R_{2\pi/m}\, .
\end{align*}
In particular, $L^2_m$ is a closed subspace of $L^2 (\mathbb R^2, \mathbb C)$. Note however that the term ``$m$-fold symmetric'' is somewhat misleading when $m=1$: in that case the transformation $R_{2\pi/m} = R_{2\pi}$ is the identity and in particular $L^2_1 = L^2 (\mathbb R^2, \mathbb C)$. Indeed we will look for $\eta$ in $L^2_m$ for a sufficiently large $m\geq 2$.
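To have a concrete model in mind: writing $x$ in polar coordinates $(r, \theta)$, any function of the form $\vartheta (x) = e (r) e^{ikm\theta}$ with $k \in \mathbb Z$ and $\int_0^\infty |e (r)|^2\, r\, \mathrm dr < \infty$ belongs to $L^2_m$, since
\[
\vartheta (R_{2\pi/m}\, x) = e (r)\, e^{ikm (\theta + 2\pi/m)} = e (r)\, e^{ikm\theta}\, e^{2\pi i k} = \vartheta (x)\, .
\]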
An important technical detail is that, while the operator $L^2 \cap \mathscr{S} \ni \omega \mapsto K_2* \omega \in \mathscr{S}'$ {\em cannot} be extended continuously to the whole $L^2$ (cf. Remark \ref{r:Camillo_dumb}), for $m\geq 2$ it {\em can} be extended to a continuous operator from $L^2_m$ into $\mathscr{S}'$: this is the content of the following lemma.
\begin{lemma}\label{l:extension}\label{L:EXTENSION}
For every $m\geq 2$ there is a unique continuous operator $T: L^2_m \to \mathscr{S}'$ with the following properties:
\begin{itemize}
\item[(a)] If $\vartheta\in \mathscr{S}$, then $T (\vartheta) = K_2*\vartheta$ (in the sense of distributions);
\item[(b)] There is $C>0$ such that for every $\vartheta \in L^2_m$, there is $v=v(\vartheta)\in W^{1,2}_{\text{loc}}$ with
\begin{itemize}
\item[(b1)] $R^{-1} \|v\|_{L^2 (B_R)} + \|Dv\|_{L^2 (B_R)} \leq C\|\vartheta\|_{L^2 (\mathbb R^2)}$ for all $R>0$;
\item[(b2)] ${\rm div}\, v =0$ and $\langle T(\vartheta), \varphi\rangle = \int v\cdot \varphi$ for every test function $\varphi \in \mathscr{S}$.
\end{itemize}
\end{itemize}
\end{lemma}
From now on the operator $T$ will still be denoted by $K_2*$ and the function $v$ will be denoted by $K_2*\omega$. Observe also that, if $\hat\Omega$ is an $L^2_{\text{loc}}$ function such that $\|\hat\Omega\|_{L^2 (B_R)}$ grows polynomially in $R$, then the integral of $v \hat\Omega$ against any Schwartz function is finite, i.e. the product $v \hat\Omega$ is a well-defined tempered distribution. In the rest of the notes, any time that we write a product $\hat\Omega K_2 * \vartheta $ for an element $\vartheta\in L^2_m$ and an $L^2_{\text{loc}}$ function $\hat\Omega$ we will always implicitly assume that
$\|\hat\Omega\|_{L^2 (B_R)}$ grows at most polynomially in $R$ and that the product is understood as a well-defined element of $\mathscr{S}'$.
The relevance of this discussion is that, for $m\geq 2$, we can now consider the operator $L_{\text{ss}}$ as a closed, densely defined unbounded operator on $L^2_m$. We let
\index{aalLss@$L_{\text{ss}}$}\index{Self-similar operator}
\begin{equation}\label{e:def-Lss-formal}
L_{\text{ss}} (\Omega) = \left(1- {\textstyle{\frac{2}{\alpha}}}\right) \Omega - {\rm div}\, \left(\left(-{\textstyle{\frac{\xi}{\alpha}}} + \beta \bar V\right) \Omega\right) - \beta ( K_2*\Omega \cdot \nabla) \bar\Omega\,
\end{equation}
and its domain is
\begin{equation}\label{e:D(Lss)-formal}
D_m (L_{\text{ss}}) =\{\Omega\in L^2_m : L_{\text{ss}} (\Omega)\in L^2_m\}\, .
\end{equation}
When $\Omega\in \mathscr{S}$ it can be readily checked that $L_{\text{ss}}$ as defined in \eqref{e:def-Lss-formal} coincides with \eqref{e:Lss}.
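Indeed, for $\Omega \in \mathscr{S}$, since ${\rm div}\, \bar V = 0$ (cf. Remark \ref{r:well-defined-2}) and ${\rm div}\, \frac{\xi}{\alpha} = \frac{2}{\alpha}$, we may expand
\[
- {\rm div}\, \left(\left(-{\textstyle{\frac{\xi}{\alpha}}} + \beta \bar V\right) \Omega\right)
= \frac{2}{\alpha}\, \Omega + \frac{\xi}{\alpha}\cdot \nabla \Omega - \beta (\bar V \cdot \nabla) \Omega\, ,
\]
and inserting this identity in \eqref{e:def-Lss-formal} we recover precisely \eqref{e:Lss}.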
This definition makes it obvious that $L_{\text{ss}}$ is a closed and densely defined unbounded operator over $L^2_m$. We will later show that $\Omega \mapsto (K_2*\Omega \cdot \nabla) \bar \Omega$ is in fact a compact operator from $L^2_m$ into $L^2_m$ and therefore we have
\begin{equation}\label{e:D(Lss)-formal-2}
D_m (L_{\text{ss}}) := \left\{\Omega\in L^2_m : {\rm div} \left(\beta \bar V\Omega- {\textstyle{\frac{\xi}{\alpha}}}\Omega\right)\, \in L^2_m\right\}\, .
\end{equation}
From now on, having fixed $m\geq 2$ and regarding $L_{\text{ss}}$ as an unbounded, closed, and densely defined operator in the sense given above, the spectrum ${\rm spec}_m\, (L_{\text{ss}})$ on $L^2_m$ is defined as the (closed) set which is the complement of the {\em resolvent} of $L_{\text{ss}}$, the latter being the (open) set of $z_0 \in \mathbb C$ such that $L_{\text{ss}}-z_0$ has a bounded inverse $(L_{\text{ss}}-z_0)^{-1} : L^2_m \to L^2_m$.\footnote{The textbook definition would require the inverse to take values in $D_m (L_{\text{ss}})$. Note however that this is a consequence of our very definition of $D_m (L_{\text{ss}})$.}
The choice of $\eta$ will then be defined by the following theorem which summarizes a quite delicate spectral analysis.
\begin{theorem}\label{thm:spectral}\label{THM:SPECTRAL}
For an appropriate choice of $\bar \Omega$ there is an integer $m\geq 2$ with the following property. For every positive $\bar a>0$, if $\beta$ is chosen appropriately large, then there is $\eta\in L^2_m\setminus \{0\}$ and $z_0=a_0+ib_0$ such that:
\begin{itemize}
\item[(i)] $a_0 \geq \bar a$ and $L_{\text{ss}} (\eta) = z_0 \eta$;
\item[(ii)] For any $z \in {\rm spec}_m\, (L_{\text{ss}})$ we have ${\rm Re}\, z\leq a_0$;
\item[(iii)] If $b_0=0$, then $\eta$ is real valued;
\item[(iv)] There is $k\geq 1$ integer and $e:\mathbb R^+\to \mathbb C$ such that $\eta (x) = e (r) e^{ikm \theta}$ if $b_0\neq 0$ and $\eta (x) = {\rm Re}\, (e(r) e^{ikm\theta})$ if $b_0= 0$.
\end{itemize}
\end{theorem}
In fact we will prove some more properties of $\eta$, namely, suitable regularity and decay at infinity, but these are effects of the eigenvalue equation and will be addressed later.
The proof of Theorem \ref{thm:spectral} will be split into two chapters. In the first one we regard $L_{\text{ss}}$ as a perturbation of a simpler operator $L_{\text{st}}$, which is obtained from $L_{\text{ss}}$ by ignoring the $(1+\frac{\xi}{\alpha}\cdot \nabla)$ part: the intuition behind neglecting this term is that the remaining part of the operator $L_{\text{ss}}$ is multiplied by the constant $\beta$, which will be chosen appropriately large. The second chapter will be dedicated to proving a theorem analogous to Theorem \ref{thm:spectral} for the operator $L_{\text{st}}$. The analysis will rely heavily on an appropriate splitting of $L^2_m$ as a direct sum of invariant subspaces of $L_{\text{st}}$. The latter are obtained by expanding in Fourier series the trace of any element of $L^2_m$ on the unit circle. In each of these invariant subspaces the spectrum of $L_{\text{st}}$ can be related to the spectrum of a suitable second order differential operator in a single real variable.
\section{Nonlinear theory}\label{s:nonlinear}
The linear theory will then be used to show Theorem \ref{thm:main3}. In fact, given the decomposition introduced in \eqref{e:Ansatz-1}, we can now formulate a yet more precise statement from which we conclude Theorem \ref{thm:main3} as a corollary.
\begin{theorem}\label{thm:main4}
Let $p$, $\alpha$, and ${\bar\alpha}$ be as in Theorem \ref{thm:main2} and assume $\bar a$ is sufficiently large. Let $\bar \Omega$, $\eta$, $a_0$, and $b_0$ be as in Theorem \ref{thm:spectral} and for every $\varepsilon \in \mathbb R$, $k\in \mathbb N$ consider the solutions $\omega_{\varepsilon,k}$ of \eqref{e:Euler-later-times} and $\Omega_{\varepsilon,k} (\xi, \tau) = e^\tau \omega_{\varepsilon,k} (e^{\tau/\alpha} \xi, e^\tau)$. If we define $\Omega_{\text{per},k}$ through \eqref{e:Ansatz-1}, then there are $\tau_0 = \tau_0 (\varepsilon)$ and $\delta_0>0$, independent of $k$, such that
\begin{equation}\label{e:H2-estimate}
\|\Omega_{\text{per}, k} (\cdot, \tau)\|_{L^2} \leq e^{\tau (a_0+\delta_0)} \qquad\qquad\qquad \forall \tau\leq \tau_0\, .
\end{equation}
\end{theorem}
Estimate \eqref{e:asymptotic-in-t-2} is a simple consequence of \eqref{e:H2-estimate} after translating it back to the original coordinates. In order to give a feeling for why \eqref{e:H2-estimate} holds we will detail the equation that $\Omega_{\text{per}, k}$ satisfies.
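To give a flavor of the translation, set $\omega_{\text{per},k} (x,t) := t^{-1}\, \Omega_{\text{per},k} (t^{-1/\alpha} x, \log t)$, i.e. the perturbation written in the original variables, consistently with the relation between $\omega_{\varepsilon,k}$ and $\Omega_{\varepsilon,k}$. A change of variables in the integral then gives, by \eqref{e:H2-estimate},
\[
\|\omega_{\text{per},k} (\cdot, t)\|_{L^2} = t^{1/\alpha - 1}\, \|\Omega_{\text{per},k} (\cdot, \log t)\|_{L^2} \leq t^{a_0 + \delta_0 + 1/\alpha - 1} = o \left(t^{a_0 + 1/\alpha -1}\right) \qquad \mbox{as } t\downarrow 0\, .
\]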
First of all subtracting the equation satisfied by $\tilde{\Omega}$ from the one satisfied by $\Omega_{\varepsilon, k}$ we achieve
\begin{align*}
&\partial_\tau \Omega_{\text{lin}} + \partial_\tau \Omega_{\text{per},k} - \left(1+{\textstyle{\frac{\xi}{\alpha}}}\cdot \nabla\right) \Omega_{\text{lin}}
-\left(1+{\textstyle{\frac{\xi}{\alpha}}}\cdot \nabla\right) \Omega_{\text{per}, k} \\
+ & (\tilde{V} \cdot \nabla) \Omega_{\text{lin}} + V_{\text{lin}}\cdot \nabla \tilde{\Omega} + (\tilde{V}\cdot \nabla ) \Omega_{\text{per},k} + (V_{\text{per},k}\cdot \nabla) \tilde{\Omega} + (V_{\text{lin}}\cdot \nabla) \Omega_{\text{per}, k}\\
+ & (V_{\text{per}, k} \cdot \nabla) \Omega_{\text{lin}}
+ (V_{\text{lin}}\cdot \nabla) \Omega_{\text{lin}} + (V_{\text{per},k}\cdot \nabla) \Omega_{\text{per}, k} = 0\, ,
\end{align*}
where we have used the convention $\tilde{V}=K_2*\tilde\Omega$, $V_{\text{per},k} = K_2* \Omega_{\text{per},k}$, and $V_{\text{lin}}= K_2* \Omega_{\text{lin}}$. Next recall that $\tilde{\Omega}=\beta\bar\Omega + \Omega_r$ and recall also the definition of $L_{\text{ss}}$ in \eqref{e:Lss} and the fact that $\partial_\tau \Omega_{\text{lin}} - L_{\text{ss}} (\Omega_{\text{lin}})= 0$. In particular formally we reach
\begin{align}
& (\partial_{\tau} - L_{\text{ss}}) \Omega_{\text{per}, k} + ((V_{\text{lin}}+V_r)\cdot \nabla) \Omega_{\text{per},k} + (V_{\text{per},k} \cdot \nabla) (\Omega_{\text{lin}} + \Omega_r) + (V_{\text{per},k}\cdot \nabla) \Omega_{\text{per},k}\nonumber\\
= & -(V_{\text{lin}}\cdot \nabla) \Omega_{\text{lin}} - (V_r\cdot \nabla) \Omega_{\text{lin}} - (V_{\text{lin}}\cdot \nabla) \Omega_r\, ,\label{e:master}
\end{align}
which must be supplemented with the initial condition
\[
\Omega_{\text{per},k} (\cdot, -k)= 0\, .
\]
In fact, in order to justify \eqref{e:master} we need to show that $\Omega_{\text{per},k} (\cdot, \tau)\in L^2_m$ for every $\tau$, which is the content of the following elementary lemma.
\begin{lemma}\label{l:evolution-in-L2m}
The function $\Omega_{\text{per},k} (\cdot, \tau)$ belongs to $L^2_m$ for every $\tau$.
\end{lemma}
\begin{proof}
It suffices to prove that $\omega_{\varepsilon, k} (\cdot, t)$ is $m$-fold symmetric, since the transformation rule then implies that $\Omega_{\varepsilon,k} (\cdot, \tau)$ is $m$-fold symmetric and $\Omega_{\text{per}, k} (\cdot, \tau)$ is obtained from the latter by subtracting $\varepsilon e^{a_0\tau} {\rm Re} (e^{ib_0\tau} \eta) + \tilde{\Omega} (\cdot, \tau)$, which is also $m$-fold symmetric. In order to show that $\omega_{\varepsilon, k}$ is $m$-fold symmetric just consider that $\omega_{\varepsilon, k} (R_{2\pi/m} (\cdot), t)$ solves \eqref{e:Euler-later-times} because both the forcing term and the initial data are invariant under a rotation of $\frac{2\pi}{m}$ (and the Euler equations are rotationally invariant). Then the uniqueness part of Yudovich's statement implies $\omega_{\varepsilon, k} (\cdot, t) = \omega_{\varepsilon, k} (R_{2\pi/m} (\cdot), t)$.
\end{proof}
We proceed with our discussion and observe that $V_{\text{lin}} + V_r$ and $\Omega_{\text{lin}}+\Omega_r$ are both ``small'' in an appropriate sense for sufficiently negative times, while, because the initial condition is $0$ at $\tau = -k$, for some time after $-k$ we expect that the quadratic nonlinearity $(V_{\text{per},k}\cdot \nabla) \Omega_{\text{per},k}$ will not contribute much to the growth of $\Omega_{\text{per}, k} (\cdot, \tau)$. Schematically, we can break \eqref{e:master} as
\begin{align}
& (\partial_{\tau} - L_{\text{ss}}) \Omega_{\text{per}, k} + \underbrace{((V_{\text{lin}}+V_r)\cdot \nabla) \Omega_{\text{per},k} + (V_{\text{per},k} \cdot \nabla) (\Omega_{\text{lin}} + \Omega_r)}_{\mbox{small linear terms}} + \underbrace{(V_{\text{per},k}\cdot \nabla) \Omega_{\text{per},k}}_{\mbox{quadratic term}}\nonumber\\
= & \underbrace{-(V_{\text{lin}}\cdot \nabla) \Omega_{\text{lin}} - (V_r\cdot \nabla) \Omega_{\text{lin}} - (V_{\text{lin}}\cdot \nabla) \Omega_r}_{\mbox{forcing term } \mathscr{F}}\, .\label{e:master-schematics}
\end{align}
In particular we can hope that the growth of $\Omega_{\text{per},k} (\cdot, \tau)$ is comparable to that of the solution of the following ``forced'' linear problem
\begin{equation}\label{e:master-linear}
(\partial_{\tau} - L_{\text{ss}}) \Omega = \mathscr{F}\, .
\end{equation}
Observe that we know that $\Omega_{\text{lin}} (\cdot, \tau)$ and $V_{\text{lin}} (\cdot, \tau)$ decay like $e^{a_0 \tau}$. We can then expect to gain a slightly faster exponential decay for $\mathscr{F} (\cdot, \tau)$ because of the smallness of $V_r$ and $\Omega_r$. On the other hand from Theorem \ref{thm:spectral} we expect that the semigroup generated by $L_{\text{ss}}$ enjoys growth estimates of type $e^{a_0\tau}$ on $L^2_m$ (this will be rigorously justified using classical results in the theory of strongly continuous semigroups). We then wish to show, using Duhamel's formula for the semigroup $e^{\tau L_{\text{ss}}}$, that the growth of $\Omega_{\text{per},k}$ is bounded by $e^{a_0\tau} (e^{\delta_0 \tau} - e^{-\delta_0 k})$ for some positive $\delta_0$ and for some time $\tau$ after the initial $-k$: the crucial point will be to show that the latter bound is valid for $\tau$ up until a ``universal'' time $\tau_0$, independent of $k$.
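To illustrate the mechanism behind this bound, suppose for a moment that we dispose of a semigroup estimate $\|e^{\sigma L_{\text{ss}}}\|_{L^2_m\to L^2_m} \leq C e^{a_0 \sigma}$ and of a forcing estimate $\|\mathscr{F} (\cdot, s)\|_{L^2} \leq C e^{(a_0+\delta_0) s}$ (both will have to be suitably justified, with slightly different exponents). Then Duhamel's formula for \eqref{e:master-linear} with zero initial condition at $\tau = -k$ would give
\begin{align*}
\|\Omega (\cdot, \tau)\|_{L^2} &\leq \int_{-k}^\tau \left\| e^{(\tau -s) L_{\text{ss}}} \mathscr{F} (\cdot, s)\right\|_{L^2}\, \mathrm ds
\leq C^2 \int_{-k}^\tau e^{a_0 (\tau - s)} e^{(a_0 + \delta_0) s}\, \mathrm ds\\
&= \frac{C^2}{\delta_0}\, e^{a_0 \tau} \left( e^{\delta_0 \tau} - e^{-\delta_0 k}\right)\, ,
\end{align*}
which is exactly the type of bound described above.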
Even though intuitively sound, this approach will require several delicate arguments, explained in the final chapter of the notes. In particular:
\begin{itemize}
\item we will need to show that the quadratic term $(V_{\text{per},k}\cdot \nabla) \Omega_{\text{per},k}$ is small up to some time $\tau_0$ independent of $k$, in spite of the fact that there is a ``loss of derivative'' in it (and thus we cannot directly close an implicit Gronwall argument using the Duhamel formula and the semigroup estimate for $L_{\text{ss}}$);
\item the terms $\Omega_r$ and $V_r$ are not really negligible in absolute terms; rather, for very negative times, they are supported in a region of space which escapes to spatial infinity.
\end{itemize}
The first issue will be solved by closing the estimates in a space of more regular functions, which is contained in $L^2$ and embeds in $L^\infty$ (in fact $L^2\cap W^{1,4}$): the bound on the growth of the $L^2$ norm will be achieved through the semigroup estimate for $L_{\text{ss}}$ via Duhamel, while the bound on the first derivative will be achieved through an energy estimate, which will profit from the $L^2$ one. The second issue will be handled by further restricting the functional space in which we close the estimates for $\Omega_{\text{per}, k}$: we will require an appropriate decay of the derivative of the solutions, more precisely that the latter belongs to $L^2 (|x|^2\,\mathrm dx)$. Of course in order to use this strategy we will need to show that the initial perturbation $\eta$ belongs to the correct space of functions.
\chapter{General strategy: background field and self-similar coordinates}
\label{chapter:general}
\section{The initial velocity and the force}
First of all, the initial velocity $v_0$ of Theorem \ref{thm:main} will have the following structure\index{aaga@$\alpha$}\index{aagb@$\beta$}
\begin{equation}\label{e:v_0}
v_0 (x) =
\begin{cases}
\beta (2-\alpha)^{-1} |x|^{-\alpha} \chi (\lvert x\rvert) x^\perp\;\; &\mbox{if }{\bar\alpha} =\alpha\\
0 &\mbox{if }{\bar\alpha}>\alpha
\end{cases}
\end{equation}
where $0<\alpha\leq {\bar\alpha}<1$, $\chi$ is a smooth cut-off function, compactly supported in $\mathbb R$ and identically $1$ on the interval $[-1,1]$, and $\beta$ is a sufficiently large constant (whose choice will depend on $\alpha$). For simplicity we will assume that $\chi$ takes values in $[0,1]$ and it is monotone non-increasing on $[0, \infty[$, even though none of these conditions play a significant role.
A direct computation gives $\div v_0 = 0$.
The corresponding $\omega_0$ is then given by\index{aagz_0@$\omega_0$}\index{aalv_0@$v_0$}\index{aagx@$\chi$}
\begin{equation}\label{e:omega_0}
\omega_0 (x) =
\curl v_0 (x) =
\begin{cases}
\beta \left[ |x|^{-\alpha} \chi (|x|) + (2-\alpha)^{-1} \chi' (|x|) |x|^{1-\alpha}\right] \;\;&\mbox{if }{\bar\alpha}=\alpha\\
0 &\mbox{if }{\bar\alpha}>\alpha
\end{cases}
\end{equation}
and the relation $v_0 = K_2*\omega_0$ comes from standard Calder{\'o}n-Zygmund theory (since ${\rm div}\, v_0 =0$, $\curl v_0=\omega_0$ and $v_0$ is compactly supported).
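For the reader's convenience we record the elementary identities behind these computations, valid for any field of the form $w (x) = h (|x|)\, x^\perp$ with $h$ smooth away from the origin:
\[
\div w = h' (|x|)\, \frac{x}{|x|} \cdot x^\perp = 0\, , \qquad\qquad \curl w = 2\, h (|x|) + |x|\, h' (|x|) \qquad \mbox{for } x\neq 0\, .
\]
Applying the second identity with $h (r) = \beta (2-\alpha)^{-1} r^{-\alpha} \chi (r)$ recovers precisely the first line of \eqref{e:omega_0}.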
The exponent ${\bar\alpha}\in ]0,1[$ is chosen depending on $p$ in Theorem \ref{thm:main}, so that ${\bar\alpha} p < 2$: in the rest of the notes we assume that $p$, ${\bar\alpha}$, and $\alpha$ are fixed. In particular it follows from the definition that $\omega_0\in L^1\cap L^p$ and that $v_0 \in L^1 \cap L^\infty$.
Next, the function $|x|^{-{\bar\alpha}}$ will be appropriately smoothed to a (radial) function
\begin{equation}\label{e:def-bar-Omega}
\bar \Omega (x) = g (|x|)
\end{equation}
\index{aalg@$g$}\index{aagZbar@$\bar \Omega$}
such that:
\begin{align}
&g \in C^\infty ([0,R]) \qquad \qquad &\forall R>0\, ,\\
&g (r) = r^{-{\bar\alpha}} \qquad\qquad &\mbox{for $r\geq 2$,}\label{e:decay-at-infinity}\\
&g(r) = g(0) + \frac{g''(0)}{2} r^2\qquad \qquad &\mbox{for $r$ in a neighborhood of $0$.}\label{e:g-constant-around-0}
\end{align}
This smoothing will be carefully chosen so as to achieve some particular properties, whose proofs will take a good portion of the notes (we remark however that, while a sufficient degree of smoothness and the decay \eqref{e:decay-at-infinity} play an important role, the condition \eqref{e:g-constant-around-0} is merely technical and its role is to simplify some arguments). We next define the function $\bar V (x)$\index{aagf@$\zeta$} \index{aalVbar@$\bar V$} as
\begin{equation}\label{e:def-barV}
\bar V (x) = \zeta (|x|) x^\perp\, ,
\end{equation}
where $\zeta$ is
\begin{equation}\label{e:def-zeta}
\zeta(r) = \frac{1}{r^2}\int_0^r \rho g(\rho)\,\mathrm d\rho\, .
\end{equation}
\begin{remark}\label{r:well-defined-2} Observe that under our assumptions $\bar \Omega\in L^q(\ensuremath{\mathbb R}^2)$ for every $q>\frac{2}{{\bar\alpha}}$, but it does not belong to any $L^q(\ensuremath{\mathbb R}^2)$ with $q\leq \frac{2}{{\bar\alpha}}$. Since when $p\geq 2$ the condition ${\bar\alpha} p <2$ implies ${\bar\alpha} < 1$, we cannot appeal to Young's Theorem as in Remark \ref{r:well-defined} to define $K_2* \bar\Omega$.
Nonetheless, $\bar V$ can be understood as a natural definition of $K_2* \bar\Omega$ for radial distributions of vorticity which are in $L^1_{\rm loc}$. Indeed observe first that ${\rm div}\, \bar V=0$ and ${\rm curl}\, \bar V = \bar\Omega$, and notice also that $\bar V$ would decay at infinity like $|x|^{-1}$ if $\bar\Omega$ were compactly supported. This shows that $\bar V$ would indeed coincide with $K_2*\bar \Omega$ for compactly supported radial vorticities. Since we can approximate $\bar \Omega$ with $\bar\Omega_N := \bar\Omega \mathbf{1}_{B_N}$, passing to the limit in the corresponding formulas for $K_2* \bar\Omega_N$ we would obtain $\bar V$.
Note also that in the remaining computations what really matters are the identities ${\rm div}\, \bar V = 0$ and ${\rm curl}\, \bar V = \bar \Omega$ and so regarding $\bar V$ as $K_2* \bar\Omega$ only simplifies our terminology and notation.
\end{remark}
The force $f$ will then be defined in such a way that $\tilde \omega$, the curl of the velocity \index{aalf@$f$}\index{aalvtilde@$\tilde{v}$}\index{aagztilde@$\tilde{\omega}$}
\begin{equation}\label{e:tilde-v}
\tilde v (x, t) = \beta t^{1/\alpha-1} \bar V \left(\frac{x}{t^{1/\alpha}}\right) \chi (|x|)\,
\end{equation}
is a solution of \eqref{e:Euler}. In particular, since $(\tilde v\cdot\nabla)\tilde \omega=0$, the force $f$ is given by the explicit formula
\begin{equation}\label{e:def-f}
f (x,t) = \partial_t \tilde{\omega} (x,t)\, .
\end{equation}
With this choice a simple computation, left to the reader, shows that $\tilde{\omega}$ solves \eqref{e:Euler} with initial data $\omega_0$. Note in passing that, although as pointed out in Remark \ref{r:well-defined-2} there is not enough summability to make sense of the identity $K_2* \bar \Omega = \bar V$ by using standard Lebesgue integration, the relation $K_2* \tilde\omega = \tilde{v}$ is made obvious by ${\rm div}\, \tilde{v} =0$, ${\rm curl}\, \tilde{v} = \tilde{\omega}$, and the boundedness of the supports of both $\tilde{\omega}$ and $\tilde{v}$.
The pair $(\tilde{\omega}, \tilde{v})$ is one of the solutions claimed to exist in Theorem \ref{thm:main}. The remaining ones will be described as a one-parameter family $(\omega_\varepsilon, v_\varepsilon)$ for a nonzero choice of the parameter $\varepsilon$, while $(\tilde{\omega}, \tilde{v})$ will correspond to the choice $\varepsilon =0$. We will however stick to the notation $(\tilde\omega, \tilde v)$ to avoid confusions with the initial data.
It remains to check that $f$ belongs to the functional spaces claimed in Theorem \ref{thm:main}.
\begin{lemma}\label{lem:Curl of tilde v}
$\tilde\omega$ is a smooth function on $\{t>0\}$ which satisfies, for all $t>0$ and $x\in\ensuremath{\mathbb R}^2$,
\begin{equation}\label{e:curl-tilde-v}
\tilde \omega (x, t) = \beta t^{-1} \bar \Omega \left(\frac{x}{t^{1/\alpha}}\right) \chi (|x|) + \beta t^{-1} \zeta \left(\frac{|x|}{t^{1/\alpha}}\right) |x|\chi' (|x|)\, ,
\end{equation}
while the external force $f$ and $\partial_t \tilde{v} = K_2*f$ belong, respectively, to the spaces $L^1([0,T]; L^1 \cap L^p)$ and $L^1 ([0,T], L^2)$ for every positive $T$. Likewise $\tilde\omega \in L^\infty ([0,T], L^1\cap L^p)$ and $\tilde{v} \in L^\infty ([0,T], L^2)$.
\end{lemma}
We end the section with a proof of the lemma, while we resume our explanation of the overall approach to Theorem \ref{thm:main} in the next section.
\begin{proof} The formula \eqref{e:curl-tilde-v} is a simple computation. From it we also conclude that $\tilde\omega = \curl\tilde v$ is a smooth function on $\{t>0\}$ and hence differentiable in all variables.
Observe next that $|\bar V (x)|\leq C |x|^{1-{\bar\alpha}}$ and we can thus estimate $|\tilde{v} (x,t)|\leq C t^{\frac{{\bar\alpha}}{\alpha}-1}|x|^{1-{\bar\alpha}}$. Since its spatial support is contained in ${\rm spt}\, (\chi)$, we conclude that $\tilde v$ is bounded and belongs to $L^\infty ([0,T], L^2)$ for any $T>0$.
Using that $\bar \Omega (x) = |x|^{-{\bar\alpha}}=g(\abs x)$ for $|x|\geq 2$, we write
\begin{align*}
\tilde{\omega} (x,t) = & \beta t^{-1} g \left(\frac{|x|}{t^{1/\alpha}}\right) \chi (|x|) \mathbf{1}_{\{|x|\leq 2 t^{1/\alpha}\}}
+ \beta { t^{\frac{{\bar\alpha}}{\alpha}-1}}|x|^{-{\bar\alpha}} \chi (|x|) \mathbf{1}_{\{|x|> 2t^{1/\alpha}\}} \\
&+ \beta t^{-1}\zeta \left(\frac{|x|}{t^{1/\alpha}}\right) |x|\chi' (|x|)\, .
\end{align*}
In particular, recalling that $|\bar \Omega (x)|\leq C|x|^{-{\bar\alpha}}$ and $\zeta (|x|) |x| \leq C |x|^{1-{\bar\alpha}}$ we easily see that
\begin{align}
\|\tilde\omega (\cdot, t)\|_{L^1} &\leq C \int_{\{|x|\in {\rm spt}\, (\chi)\}} { t^{\frac{{\bar\alpha}}{\alpha}-1}}|x|^{-{\bar\alpha}} \, dx + C \int_{\{|x|\in {\rm spt}\, (\chi')\}} { t^{\frac{{\bar\alpha}}{\alpha}-1}}|x|^{1-{\bar\alpha}}\, dx\, ,\\
\|\tilde{\omega} (\cdot, t)\|_{L^p}^p &\leq C \int_{\{|x|\in {\rm spt}\, (\chi)\}} { t^{\left(\frac{{\bar\alpha}}{\alpha} -1\right)p}} |x|^{-p {\bar\alpha}} \, dx + C \int_{\{|x|\in {\rm spt}\, (\chi')\}} { t^{\left(\frac{{\bar\alpha}}{\alpha} -1\right)p}}|x|^{p-p{\bar\alpha}}\, dx\, .
\end{align}
This implies immediately that $\tilde\omega \in L^\infty ([0,T], L^1\cap L^p)$ for any $T>0$, given that ${\bar\alpha} p <2$ (and hence $|x|^{-{\bar\alpha} p}$ is locally integrable).
We now differentiate in time in the open regions $\{|x|< 2t^{1/\alpha}\}$ and $\{|x| > 2t^{1/\alpha}\}$ separately to achieve\footnote{Since we will only estimate integral norms of $f$, its values on $\{|x|= 2t^{1/\alpha}\}$ are of no importance. However, given that $f$ is in fact smooth over the whole domain $\{t>0\}$, we can infer the validity of the formula \eqref{e:f1-f2} for every point $x\in \{|x|= 2t^{1/\alpha}\}$ by approximating it with a sequence of points in $\{|x|< 2t^{1/\alpha}\}$ and passing to the limit in the corresponding expressions.}
\begin{align}
f (x,t) = & - \beta \left(t^{-2} g \left(\frac{|x|}{t^{1/\alpha}}\right)
+ \frac{1}{\alpha}{t^{-2-1/\alpha}} |x| g' \left(\frac{|x|}{t^{1/\alpha}}\right)\right) \chi (|x|) \mathbf{1}_{\{|x|\leq 2 t^{1/\alpha}\}}\nonumber\\
& {+ \beta \left(\frac{{\bar\alpha}}{\alpha}-1\right)t^{\frac{{\bar\alpha}}{\alpha}-2} |x|^{-{\bar\alpha}}\chi(|x|)\mathbf{1}_{\{|x|>2t^{1/\alpha}\}}
}\nonumber\\
& - \beta \left(t^{-2} \zeta \left(\frac{|x|}{t^{1/\alpha}}\right) +\frac{1}{\alpha} t^{-2-1/\alpha} \zeta' \left(\frac{|x|}{t^{1/\alpha}}\right) |x|\right) |x|\chi' (|x|)\nonumber\\
=:& f_1 (x,t) + f_2 (x,t){ + f_3(x,t)}\, .\label{e:f1-f2}
\end{align}
We wish to prove that $f\in L^1 ([0,T], L^1\cap L^p)$. On the other hand, since for any $T_0>0$ both $f_1+f_2$ and $f_3$ are smooth and have compact support on $\mathbb R^2\times [T_0, T]$, it suffices to show that $f\in L^1 ([0,T_0], L^1\cap L^p)$ for a sufficiently small $T_0$. Recalling that $|g (|x|)| + |g' (|x|)||x| \leq C |x|^{-{\bar\alpha}}$, we can then bound
\begin{equation}\label{e:bound-f1}
|f_1 (x,t)|\leq C {t^{-2+\frac{{\bar\alpha}}{\alpha}}} |x|^{-{\bar\alpha}} \mathbf{1}_{|x|\leq 2 t^{1/\alpha}} \qquad
\mbox{for all $0<t<T_0$ and all $x$}.
\end{equation}
Thus
\begin{align}
\|f_1\|_{L^1 (\mathbb R^2\times [0,T_0])} &\leq C \int_0^{T_0} t^{2/\alpha -2}\, dt < \infty\, ,\\
\|f_1\|_{L^1 ([0,T_0];L^p( \mathbb R^2))} &\leq C \int_0^{T_0} t^{2/(\alpha p) -2}\, dt < \infty\, ,
\end{align}
where the condition $2> {\bar\alpha} p$ entered precisely in the finiteness of the latter integral.
Coming to the second term, we observe that it vanishes when $\bar\alpha = \alpha$. When $\alpha < \bar\alpha$, {since $\chi$ is compactly supported in $\mathbb R$, we get
\begin{align*}
\|f_2\|_{L^1(\mathbb R^2\times [0,T_0])} &\leq C \int_0^{T_0} t^{\frac{{\bar\alpha}}{\alpha}-2}(1+t^{\frac 1{\alpha}(2-{\bar\alpha})}) dt<+\infty\\
\|f_2\|_{L^1([0,T_0];L^p(\mathbb R^2))}
&\leq \int_0^{T_0} t^{\frac{{\bar\alpha}}{\alpha}-2}(1+t^{\frac p{\alpha}(2-{\bar\alpha})})^{1/p} dt <+\infty
\, .
\end{align*}}
For the last term we first compute $\zeta$ and $\zeta'$ explicitly:
\begin{align*}
\zeta (r) &= \frac{1}{r^2} \left(C + \int_2^r \rho^{1-{\bar\alpha}}\,\mathrm d\rho\right) = a r^{-2} + b r^{-{\bar\alpha}} & \text{for all } r \geq 2\, ,
\shortintertext{where $a$ and $b$ are two fixed constants. Likewise}
\zeta'(r)&= -2 a r^{-3} -{\bar\alpha} b r^{-{\bar\alpha}-1}\qquad &\text{for all } r \geq 2\, .
\end{align*}
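For the record, comparing with \eqref{e:def-zeta} the two constants can be written explicitly (their precise values play no role in the sequel):
\[
b = \frac{1}{2-{\bar\alpha}}\, , \qquad\qquad a = \int_0^2 \rho\, g (\rho)\, \mathrm d\rho - \frac{2^{2-{\bar\alpha}}}{2-{\bar\alpha}}\, .
\]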
Recall that $\chi' (|x|)=0$ for $|x|\leq 1$.
Therefore, for $t\leq T_0$ sufficiently small, the functions $\zeta$ and $\zeta'$ are computed on $|x| t^{-1/\alpha} \geq 2$ in the formula for $f_3$ (cf. \eqref{e:f1-f2}). Thus
\[
f_3 (x,t)
=-\beta t^{-2}\left(
\left(1-\frac 2{\alpha}\right)a t^{\frac 2{\alpha}}|x|^{-1} + b\left(1-\frac{{\bar\alpha}}{\alpha}\right)t^{\frac{{\bar\alpha}}{\alpha}} |x|^{1-{\bar\alpha}}
\right)\chi'(|x|)\, .
\]
{In particular $f_3$ has compact support. Since $\alpha <1$ the function
\[
-\beta t^{-2}
\left(1-\frac 2{\alpha}\right)a t^{\frac 2{\alpha}}|x|^{-1} \chi'(|x|)\, ,
\]
is bounded, and thus belongs to
$L^1 ([0,T_0], L^1\cap L^p)$. As for the second summand, it vanishes if $\alpha = \bar \alpha$, while its $L^p$ norm at time $t$ can be bounded by $C t^{-2+\frac{\bar \alpha}{\alpha}}$ if $\bar\alpha > \alpha$. The latter function however belongs to $L^1 ([0,T_0])$.}
Observe next that, since for every positive $t$ the function $f (\cdot, t)$ is smooth and compactly supported, $K_2* f (\cdot, t)$ is the unique divergence-free vector field which belongs to $L^1$ and such that its curl gives $f (\cdot, t)$. Hence, since $f (\cdot, t) = \curl \partial_t \tilde{v} (\cdot, t)$ and $\partial_t \tilde{v} (\cdot, t)$ is smooth and compactly supported, we necessarily have $K_2 * f (\cdot, t) = \partial_t \tilde{v} (\cdot, t)$. It remains to show that $\partial_t \tilde{v} \in L^1 ([0,T]; L^2)$ for every positive $T$. To that end we compute
{
\[
\tilde{v} (x,t) = \beta t^{1/\alpha-1} \bar{V} \left(\frac{x}{t^{1/\alpha}}\right) \chi (|x|)
= \beta t^{-1} \zeta \left(\frac{|x|}{t^{1/\alpha}}\right) x^\perp \chi (|x|)
\]
\[
\partial_t \tilde{v} (x,t ) = - \beta t^{-2} \chi (|x|) x^\perp \left(\zeta \left( \frac{|x|}{t^{1/\alpha}}\right) +\frac{1}{\alpha} \frac{|x|}{t^{1/\alpha}} \zeta' \left(\frac{|x|}{t^{1/\alpha}} \right)\right)\, .
\]
In order to compute the $L^2$ norm of $\partial_t \tilde{v} (\cdot, t)$ we break the space into two regions as in the computations above. In the region $\{|x|\leq 2 t^{1/\alpha}\}$ we use that $\zeta$, $g$, and $\zeta'$ are bounded to compute
\[
\int_{|x|\leq 2 t^{1/\alpha}} |\partial_t \tilde{v} (x,t)|^2\, dx \leq C t^{-4} \int_{|x|\leq 2 t^{1/\alpha}} |x|^2\,dx \leq C t^{4/\alpha -4}\, ,
\]
which is a bounded function on $[0,1]$. On $\{|x|\geq 2 t^{1/\alpha}\}$ we observe that the function can be explicitly computed as
\[
-\beta t^{-2} \chi (|x|) x^\perp \left(a\left(1-\frac{2}{\alpha}\right) t^{2/\alpha} |x|^{-2} + b \left(1-\frac{\bar \alpha}{\alpha}\right) t^{\frac{\bar\alpha}{\alpha}}|x|^{-\bar \alpha}\right)\, .
\]
If we let $\bar R>0$ be such that the support of $\chi$ is contained in $B_{\bar R}$, we use polar coordinates to estimate
\[
\int_{|x|\geq 2 t^{1/\alpha}} |\partial_t \tilde{v} (x,t)|^2\, dx \leq C t^{-4+4/\alpha} \int_{2t^{1/\alpha}}^{\bar R} \frac{d\rho}{\rho} + C |\alpha - \bar \alpha| t^{2\frac{\bar \alpha}{\alpha}-4}\, .
\]
We can therefore estimate the $L^2$ norm of $\partial_t \tilde{v}$ at time $t$ by
\[
\|\partial_t \tilde{v} (\cdot, t)\|_{L^2} \leq C + C |\alpha - \bar \alpha| t^{\frac{\bar \alpha}{\alpha} -2}\, .
\]
When $\alpha = \bar \alpha$ we conclude that the $L^2$ norm of $\partial_t \tilde{v}$ is bounded, while for $\bar\alpha > \alpha$ the function $t\mapsto t^{\frac{\bar \alpha}{\alpha} -2}$ belongs to $L^1 ([0,T])$.
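Indeed, $t\mapsto t^\gamma$ belongs to $L^1 ([0,T])$ precisely when $\gamma > -1$, and here
\[
\int_0^T t^{\frac{\bar\alpha}{\alpha} - 2}\, dt < \infty
\quad \Longleftrightarrow \quad \frac{\bar\alpha}{\alpha} - 2 > -1
\quad \Longleftrightarrow \quad \bar\alpha > \alpha\, .
\]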
}
\end{proof}
\section{The infinitely many solutions}
We next give a more precise statement leading to Theorem \ref{thm:main} as a corollary.
\begin{theorem}\label{thm:main2}
Let $p\in ]2, \infty[$ be given and let $\alpha$ and ${\bar\alpha}$ be any positive numbers such that $\alpha \leq {\bar\alpha}$ and ${\bar\alpha} p <2$. For an appropriate choice of the smooth function $\bar \Omega$ and of a positive constant $\beta$ as in the previous section, we can find, additionally:
\begin{itemize}
\item[(a)] a suitable nonzero function $\eta\in (L^1\cap H^2) (\mathbb R^2; \mathbb C)$ with $K_2 * \eta\in L^2(\ensuremath{\mathbb R}^2; \mathbb C^2)$, \index{aagh@$\eta$}
\item[(b)] a real number $b_0$ and a positive number $a_0>0$, \index{aalazero@$a_0$}\index{aalbzero@$b_0$}
\end{itemize}
with the following property.
Consider $\omega_0$, $v_0$, $\tilde{v}$, $\tilde\omega = \curl \tilde{v}$, and $f$ as defined in \eqref{e:v_0},\eqref{e:omega_0}, \eqref{e:tilde-v}, and \eqref{e:def-f}. Then for every $\varepsilon\in \ensuremath{\mathbb R}$ there is a solution $\omega_\varepsilon$ of \eqref{e:Euler} with initial data $\omega_0$ such that \index{aage@$\varepsilon$}\index{aagzepsilon@$\omega_\varepsilon$}
\begin{enumerate}[(i)]
\item\label{item:1-omega in L infinity L1 Lp} $\omega_\varepsilon \in L^\infty ([0,T], L^1\cap L^p )$ for every $T>0$;
\item\label{item:2-v in L infinity L2} $v_\varepsilon := K_2 * \omega_\varepsilon \in L^\infty ([0,T], L^2)$ for every $T>0$;
\item\label{item:3-eigenvalue bound} as $t\to0$,
\begin{equation}\label{e:asymptotic-in-t}
\|\omega_\varepsilon (\cdot, t) - \tilde\omega (\cdot, t) - \varepsilon t^{a_0-1} \operatorname{Re} (t^{i b_0} \eta (t^{-1/\alpha} \cdot))\|_{L^2(\ensuremath{\mathbb R}^2)} = o (t^{a_0 +1/\alpha -1})\, ;
\end{equation}
\item\label{e:Camillo-is-silly} if $b_0=0$, then $\eta$ is real-valued.
\end{enumerate}
\end{theorem}
Observe that, by a simple computation,
\begin{align*}
\| t^{a_0-1} \operatorname{Re} (t^{ib_0} \eta (t^{-1/\alpha} \cdot))\|_{L^2} = t^{a_0 +1/\alpha -1} \|\operatorname{Re} (t^{ib_0} \eta)\|_{L^2}\, ,
\end{align*}
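Indeed, substituting $y = t^{-1/\alpha} x$, so that $dx = t^{2/\alpha}\, dy$ in $\mathbb R^2$, we find
\[
\| t^{a_0-1} \operatorname{Re} (t^{ib_0} \eta (t^{-1/\alpha} \cdot))\|_{L^2}^2
= t^{2(a_0 -1)} \int_{\ensuremath{\mathbb R}^2} |\operatorname{Re} (t^{ib_0} \eta (y))|^2\, t^{2/\alpha}\, dy
= t^{2(a_0 + 1/\alpha -1)} \|\operatorname{Re} (t^{ib_0} \eta)\|_{L^2}^2\, .
\]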
and thus it follows from \eqref{e:asymptotic-in-t} that
\begin{equation}\label{eq:difference-of-the-omega}
\limsup_{t\downarrow 0} t^{1-1/\alpha - a_0} \|\omega_{\varepsilon} (\cdot, t) - \omega_{\bar\varepsilon} (\cdot, t)\|_{L^2} \geq |\varepsilon - \bar\varepsilon| \max_{\theta\in[0,2\pi]}\| \operatorname{Re} (e^{i\theta} \eta)\|_{L^2}\,
\end{equation}
(note that in the last conclusion we need (iv) if $b_0=0$).
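To derive \eqref{eq:difference-of-the-omega}, apply the triangle inequality and \eqref{e:asymptotic-in-t} to both $\omega_\varepsilon$ and $\omega_{\bar\varepsilon}$, obtaining
\[
t^{1 - 1/\alpha - a_0} \|\omega_\varepsilon (\cdot, t) - \omega_{\bar\varepsilon} (\cdot, t)\|_{L^2}
\geq |\varepsilon - \bar\varepsilon|\, \|\operatorname{Re} (t^{ib_0} \eta)\|_{L^2} - o (1)\, ,
\]
and then let $t\downarrow 0$ along a sequence on which the phase $b_0 \ln t$ converges, modulo $2\pi$, to a maximizer of $\theta \mapsto \|\operatorname{Re} (e^{i\theta} \eta)\|_{L^2}$ (this is possible for $b_0\neq 0$ because the phase sweeps the whole circle as $t\downarrow 0$); when $b_0 = 0$ the phase is constant and the right-hand side equals $|\varepsilon-\bar\varepsilon|\, \|\eta\|_{L^2} - o(1)$ because $\eta$ is real-valued.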
Since $\|\eta\|_{L^2} >0$, we conclude that the solutions $\omega_\varepsilon$ described in Theorem \ref{thm:main2} must all be distinct.
For each fixed $\varepsilon$, the solution $\omega_\varepsilon$ will be achieved as a limit of a suitable sequence of approximations $\omega_{\varepsilon, k}$\index{aagzepsilonk@$\omega_{\varepsilon, k}$} in the following way. After fixing a sequence of positive times $t_k$\index{aaltk@$t_k$} converging to $0$, which for convenience are chosen to be $t_k := e^{-k}$, we solve the following Cauchy problem for the Euler equations in vorticity formulation
\begin{equation}\label{e:Euler-later-times}
\left\{
\begin{array}{ll}
& \partial_t \omega_{\varepsilon,k} + ((K_2* \omega_{\varepsilon,k})\cdot \nabla) \omega_{\varepsilon,k} = f \\ \\
& \omega_{\varepsilon, k} (\cdot, t_k) = \tilde\omega (\cdot, t_k) + \varepsilon t_k^{a_0-1} \operatorname{Re} (t_k^{ib_0} \eta (t_k^{-1/\alpha}\cdot))\, .
\end{array}\right.
\end{equation}
Observe that, since $t_k$ is positive, the initial data $\omega_{\varepsilon, k} (\cdot, t_k)$ belongs to $L^1\cap L^\infty$, while the corresponding initial velocity $v_{\varepsilon, k} (\cdot, t_k) := K_2 * \omega_{\varepsilon, k} (\cdot, t_k)$ belongs to $L^2$. Since $K_2 * f \in L^1 ([0,T], L^2)$ for every $T$, we can apply the classical theorem of Yudovich (namely, Theorem \ref{thm:Yudo} and Remark \ref{r:A-priori-estimates}) to conclude that
\begin{corollary}\label{c:omega_k_epsilon}
For every $k$, $\varepsilon$, and every $T$ there exists a unique solution $\omega_{\varepsilon, k}$ of \eqref{e:Euler-later-times} with the property that $\omega_{\varepsilon , k} \in L^\infty ([t_k, T], L^1\cap L^\infty)$ and $v_{\varepsilon, k}\in L^\infty ([t_k, T], L^2)$ for every positive $T$. Moreover, we have the following bounds for every $t$
\begin{align}
\|\omega_{\varepsilon, k} (\cdot, t)\|_{L^1} \leq &\|\omega_{\varepsilon, k} (\cdot, t_k)\|_{L^1} +
\int_{t_k}^t \|f (\cdot, s)\|_{L^1}\,\mathrm ds\\
\|\omega_{\varepsilon, k} (\cdot, t)\|_{L^p} \leq &\|\omega_{\varepsilon, k} (\cdot, t_k)\|_{L^p} +
\int_{t_k}^t \|f (\cdot, s)\|_{L^p}\,\mathrm ds \label{e:omega_Lp_estimate}\\
\|v_{\varepsilon, k} (\cdot, t)\|_{L^2}\leq &\|v_{\varepsilon, k} (\cdot, t_k)\|_{L^2} +
\int_{t_k}^t \|K_2* f (\cdot, s)\|_{L^2}\,\mathrm ds\, .
\end{align}
\end{corollary}
Next, since we can easily bound $\|\omega_{\varepsilon, k} (\cdot, t_k)\|_{L^1}$, $\|\omega_{\varepsilon, k} (\cdot, t_k)\|_{L^p}$, and $\|v_{\varepsilon, k} (\cdot, t_k)\|_{L^2}$ independently of $k$, for each fixed $\varepsilon$ we conclude
\begin{equation}\label{e:uniform_bound}
\sup_{k\in \mathbb N} \sup_{t\in [t_k, T]}
\left(\|\omega_{\varepsilon, k} (\cdot, t)\|_{L^1} + \|\omega_{\varepsilon, k} (\cdot, t)\|_{L^p} + \|v_{\varepsilon, k} (\cdot, t)\|_{L^2}
\right) < \infty\, .
\end{equation}
In turn we can use \eqref{e:uniform_bound} to conclude that, for each fixed $\varepsilon$, a subsequence of $\omega_{\varepsilon, k}$ converges to a solution $\omega_\varepsilon$ of \eqref{e:Euler} which satisfies the conclusions \ref{item:1-omega in L infinity L1 Lp} and \ref{item:2-v in L infinity L2} of Theorem \ref{thm:main2}.
\begin{proposition}\label{p:convergence}\label{P:CONVERGENCE}
Assume $p, \alpha, {\bar\alpha}, \omega_0, v_0, \tilde\omega, \tilde{v}, f, a_0, b_0$, and $\eta$ are as in Theorem \ref{thm:main2} and let $\omega_{\varepsilon, k}$ be as in Corollary \ref{c:omega_k_epsilon}. Then, for every fixed $\varepsilon$, there is a subsequence, not relabeled, with the property that $\omega_{\varepsilon, k}$ converges (uniformly in $C ([0,T], L^q_w)$ for every positive $T$ and every $1< q\leq p$, where $L^q_w$ denotes the space $L^q$ endowed with the weak topology) to a solution $\omega_\varepsilon$ of \eqref{e:Euler} on $[0, \infty[$ with initial data $\omega_0$ and satisfying the bounds \ref{item:1-omega in L infinity L1 Lp} and \ref{item:2-v in L infinity L2} of Theorem \ref{thm:main2}.
\end{proposition}
The proof uses classical convergence theorems and we give it in the appendix for the reader's convenience. The real difficulty in the proof of Theorem \ref{thm:main2} is to ensure that the bound (iii) holds. This is reduced to the derivation of suitable estimates on $\omega_{\varepsilon, k}$, which we detail in the following statement.
\begin{theorem}\label{thm:main3}
Assume $p, \alpha, {\bar\alpha}$ are as in Theorem \ref{thm:main2} and fix $\varepsilon >0$. For an appropriate choice of $\bar \Omega$ and $\beta$ there is a triple $\eta$, $a_0$, and $b_0$ as in Theorem \ref{thm:main2} and three positive constants $T_0, \delta_0$, and $C$ with the property that
\begin{equation}\label{e:asymptotic-in-t-2}
\|\omega_{\varepsilon,k} (\cdot, t) - \tilde\omega (\cdot, t) - \varepsilon t^{a_0-1} {\rm Re}\, (t^{ib_0} \eta (t^{-1/\alpha} \cdot))\|_{L^2} \leq C t^{a_0+1/\alpha-1+\delta_0} \qquad \forall t\in [t_k, T_0]\, .
\end{equation}
\end{theorem}
It is then obvious that the final conclusion \ref{item:3-eigenvalue bound} of Theorem \ref{thm:main2} is a consequence of the more precise estimate \eqref{e:asymptotic-in-t-2} on the approximations $\omega_{\varepsilon,k}$. The rest of these lecture notes are thus devoted to the proof of Theorem \ref{thm:main3} and we will start in the next section by breaking it into two main parts.
\section{Logarithmic time scale and main Ansatz}
\index{similarity-variables@Similarity variables}\index{aagZ@$\Omega$}\index{aagt@$\tau$}\index{aagx@$\xi$}
First of all, we will change variables and unknowns of the Euler equations (in vorticity formulation) in a way which will be convenient for many computations. Given a solution $\omega$ of \eqref{e:Euler} on $\ensuremath{\mathbb R}^2\times [T_0, T_1]$ with $0\leq T_0 \leq T_1$, we introduce a new function $\Omega$ on $\mathbb R^2 \times [\ln T_0, \ln T_1]$ with the following transformation. We set $\tau=\ln t$, $\xi=x t^{-1/\alpha}$ and
\begin{equation}\label{e:omega->Omega}
\Omega (\xi, \tau) := e^{\tau} \omega (e^{\tau/\alpha} \xi, e^\tau)\, ,
\end{equation}
which in turn results in
\begin{equation}\label{e:Omega->omega}
\omega (x, t) = t^{-1} \Omega (t^{-1/\alpha} x, \ln t)\, .
\end{equation}
Observe that, if $v (\cdot, t) = K_2 * \omega (\cdot, t)$ and $V( \cdot, \tau) = K_2 * \Omega (\cdot, \tau)$, we can derive similar transformation rules for the velocities as \index{aalV@$V$}
\begin{align}
V (\xi, \tau) &= e^{\tau (1-1/\alpha)} v(e^{\tau/\alpha} \xi, e^\tau)\label{e:v->V}\, ,\\
v (x,t) &= t^{-1+1/\alpha} V (t^{-1/\alpha} x, \ln t)\label{e:V-t>v}\, .
\end{align}
Likewise, we have an analogous transformation rule for the force $f$, which results in \index{aalF@$F$}
\begin{align}
F (\xi, \tau) &= e^{2\tau} f (e^{\tau/\alpha} \xi, e^\tau)\, ,\label{e:f->F}\\
f (x,t) &= t^{-2} F (t^{-1/\alpha} x, \ln t)\label{e:F->f}\, .
\end{align}
In order to improve the readability of our arguments, throughout the rest of the notes we will use the overall convention that, given some object related to the Euler equations in the ``original system of coordinates'', the corresponding object after applying the transformations above will be denoted with the same letter in capital case.
\begin{remark}
Note that the naming of $\bar V$ and $\bar\Omega$ is somewhat of an exception to this convention, since $(\bar\Omega, \bar V)$ is a solution of \eqref{e:Euler} in Eulerian variables. However, if you ``force them to be functions of $\xi$,'' which is how they will be used in the non-linear part, then they solve the Euler equations in self-similar variables with forcing (see \eqref{e:Euler-transformed}).
\end{remark}
Straightforward computations then allow us to pass from \eqref{e:Euler} to an equation for the new unknown $\Omega$ in the new coordinates. More precisely, we have the following
\begin{lemma}\label{l:coordinates-change}
Let $p>2$ and $\infty \geq T_1 > T_0\geq 0$. Then $\omega\in L^\infty_{\text{loc}} (]T_0, T_1[; L^1\cap L^p)$ and $v (\cdot, t) = K_2* \omega (\cdot, t)$ satisfy
\begin{equation}\label{e:Euler-again}
\partial_t \omega + (v \cdot \nabla) \omega = f\, ,
\end{equation}
if and only if $\Omega$ and $V (\cdot, t) = K_2 * \Omega (\cdot, t)$ satisfy
\begin{equation}\label{e:Euler-transformed}
\partial_\tau \Omega - \left(1 + \frac{\xi}{\alpha}\cdot \nabla\right) \Omega + (V\cdot \nabla) \Omega = F\, .
\end{equation}
\end{lemma}
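To see where \eqref{e:Euler-transformed} comes from, we record the two chain-rule computations behind Lemma \ref{l:coordinates-change} (for smooth solutions; the distributional case is handled analogously): with $\xi = t^{-1/\alpha} x$ and $\tau = \ln t$, \eqref{e:Omega->omega} and \eqref{e:V-t>v} give
\begin{align*}
\partial_t \omega (x,t) &= t^{-2} \left(\partial_\tau \Omega - \Omega - \frac{\xi}{\alpha}\cdot \nabla \Omega\right) (t^{-1/\alpha} x, \ln t)\, ,\\
((v\cdot \nabla)\omega) (x,t) &= t^{-2} \, ((V\cdot \nabla) \Omega) (t^{-1/\alpha} x, \ln t)\, ,
\end{align*}
so that multiplying \eqref{e:Euler-again} by $t^2 = e^{2\tau}$ and recalling \eqref{e:F->f} yields \eqref{e:Euler-transformed}.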
We next observe that, due to the structural assumptions on $\tilde \omega$ and $\tilde v$, the corresponding fields $\tilde \Omega$ and $\tilde V$ can be expressed in the following way: \index{aalVtilde@$\tilde V$}\index{aagZtilde@$\tilde\Omega$}
\begin{align}
\tilde{V} (\xi, \tau) &= \beta \bar V (\xi) \chi (e^{\tau/\alpha} |\xi|)\, ,\label{e:tildeV}\\
\tilde{\Omega} (\xi, \tau) &= \beta \bar \Omega (\xi) \chi (e^{\tau/\alpha} |\xi|) + \beta \zeta (|\xi|)
\chi' (e^{\tau/\alpha} |\xi|) e^{\tau/\alpha} |\xi|\, \label{e:tildeOmega}.
\end{align}
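The expression \eqref{e:tildeOmega} can be checked directly: recalling that $\bar V (\xi) = \zeta (|\xi|) \xi^\perp$ (cf. the computations above) and that $\curl (g (|\xi|) \xi^\perp) = 2 g (|\xi|) + |\xi|\, g' (|\xi|)$ for any radial profile $g$, we apply the latter identity with $g (r) = \zeta (r) \chi (e^{\tau/\alpha} r)$ to obtain
\[
\curl \tilde{V} (\xi, \tau) = \beta \left(2 \zeta (|\xi|) + |\xi| \zeta' (|\xi|)\right) \chi (e^{\tau/\alpha} |\xi|) + \beta \zeta (|\xi|)\, \chi' (e^{\tau/\alpha} |\xi|)\, e^{\tau/\alpha} |\xi|\, ,
\]
and the first summand is precisely $\beta \bar\Omega (\xi) \chi (e^{\tau/\alpha} |\xi|)$, since $\bar \Omega = \curl \bar V = 2\zeta + |\xi| \zeta'$.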
Observe that, for every fixed compact set $K$ there is $T (K)>0$ sufficiently large with the property that
\begin{itemize}
\item $\chi (e^{\tau/\alpha} |\cdot|)= 1$ and $\chi' (e^{\tau/\alpha} |\cdot|) = 0$ on $K$ whenever $\tau \leq - T (K)$.
\end{itemize}
Since in order to prove Theorem \ref{thm:main} we are in fact interested in very small times $t$, which in turn correspond to very negative $\tau$, it is natural to consider $\tilde\Omega$ and $\tilde{V}$ as perturbations of $\beta \bar \Omega$ and $\beta \bar V$. We will therefore introduce the notation
\begin{align}
\tilde \Omega &= \beta \bar \Omega + \Omega_r\, ,\\
\tilde V & = \beta \bar V + V_r := \beta \bar V + K_2* \Omega_r\, .
\end{align}
We are thus led to the following Ansatz for $\Omega_{\varepsilon,k} (\xi, \tau) = e^{\tau} \omega_{\varepsilon ,k} (e^{\tau/\alpha} \xi, e^\tau)$:
\begin{equation}\label{e:Ansatz-1}
\Omega_{\varepsilon, k} (\xi, \tau) = \beta \bar \Omega (\xi) + \Omega_r (\xi, \tau) + \varepsilon e^{\tau a_0} {\rm Re}\, (e^{i\tau b_0} \eta (\xi)) + \Omega_{\text{per}, k} (\xi, \tau)\, .
\end{equation}
The careful reader will notice that indeed the function $\Omega_{\text{per},k}$ depends upon the parameter $\varepsilon$ as well, but since such dependence will not really play a significant role in our discussion, in order to keep our notation simple, we will always omit it. \index{aagZr@$\Omega_r$}\index{aagZepsilonk@$\Omega_{\varepsilon, k}$}\index{aagZperk@$\Omega_{\text{per}, k}$}\index{aalVr@$V_r$}
We are next ready to complete our Ansatz by prescribing one fundamental property of the function $\eta$.
We first introduce the integro-differential operator \index{aalLss@$L_{\text{ss}}$}\index{Self-similar operator}
\begin{equation}\label{e:Lss}
L_{\text{ss}} (\Omega) := \left(1+\frac{\xi}{\alpha} \cdot \nabla\right) \Omega - \beta (\bar V \cdot \nabla) \Omega - \beta ((K_2* \Omega)\cdot \nabla) \bar \Omega\, .
\end{equation}
We will then prescribe that $\eta$ is an eigenfunction of $L_{\text{ss}}$ with eigenvalue $z_0 = a_0 + ib_0$, namely, \index{aalz0@$z_0$}
\begin{equation}\label{e:Ansatz-2}
L_{\text{ss}} (\eta) = z_0 \eta\, .
\end{equation}
Observe in particular that, since $L_{\text{ss}}$ is a real operator (i.e. $L_{\text{ss}} (\eta)$ is real-valued when $\eta$ is real-valued, cf. Section \ref{s:abstract-operators}), the complex conjugate $\bar \eta$ is an eigenfunction of $L_{\text{ss}}$ with eigenvalue $\bar z_0$, so that, in particular, the function
\begin{equation}\label{e:Omega_lin}
\Omega_{\text{lin}} (\xi, \tau) := \varepsilon e^{a_0 \tau} {\rm Re}\, (e^{i b_0 \tau} \eta (\xi))
= \frac{\varepsilon}{2} (e^{z_0 \tau} \eta (\xi) + e^{\bar z_0 \tau} \bar \eta (\xi))
\end{equation}
satisfies the linear evolution equation
\begin{equation}\label{e:evolution_of_Omega_lin}
\partial_\tau \Omega_{\text{lin}} - L_{\text{ss}} (\Omega_{\text{lin}})=0\, .
\end{equation}
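Indeed, by \eqref{e:Ansatz-2}, the linearity of $L_{\text{ss}}$, and the relation $L_{\text{ss}} (\bar\eta) = \bar z_0 \bar\eta$, we can verify \eqref{e:evolution_of_Omega_lin} in one line:
\[
\partial_\tau \Omega_{\text{lin}} = \frac{\varepsilon}{2} \left(z_0 e^{z_0\tau} \eta + \bar z_0 e^{\bar z_0 \tau} \bar \eta\right)
= \frac{\varepsilon}{2} \left(e^{z_0\tau} L_{\text{ss}} (\eta) + e^{\bar z_0 \tau} L_{\text{ss}} (\bar \eta)\right)
= L_{\text{ss}} (\Omega_{\text{lin}})\, .
\]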
The relevance of our choice will become clear from the discussion of Section \ref{s:nonlinear}. The point is that \eqref{e:evolution_of_Omega_lin} is close to the linearization of Euler (in the new system of coordinates) around $\tilde{\Omega}$. The ``true linearization'' would be given by \eqref{e:evolution_of_Omega_lin} if we were to substitute $\bar \Omega$ and $\bar V$ in \eqref{e:Lss} with $\tilde{\Omega}$ and $\tilde{V}$. Since however the pair $(\tilde \Omega, \tilde{V})$ is well approximated by $(\bar \Omega, \bar V)$ for very negative times, we will show that \eqref{e:evolution_of_Omega_lin} indeed drives the evolution of $\Omega_{\varepsilon,k}-\tilde{\Omega}$ up to an error term (i.e. $\Omega_{\text{per},k}$) which is smaller than $\Omega_{\text{lin}}$.
\section{Linear theory}
We will look for the eigenfunction $\eta$ in a particular subspace of $L^2$. More precisely, for every
$m\in \mathbb N\setminus \{0\}$ we denote by $L^2_m$ the set of those elements $\vartheta \in L^2 (\mathbb R^2, \mathbb C)$ which are $m$-fold symmetric, i.e., denoting by $R_\theta: \mathbb R^2\to \mathbb R^2$ the counterclockwise rotation of angle $\theta$ around the origin, \index{rotational-symmetry@Rotationally symmetric function space}\index{aalL2m@$L^2_m$}
they satisfy the condition
\begin{align*}
\vartheta &= \vartheta \circ R_{2\pi/m}\, .
\end{align*}
In particular, $L^2_m$ is a closed subspace of $L^2 (\mathbb R^2, \mathbb C)$. Note however that the term ``$m$-fold symmetric'' is somewhat misleading when $m=1$: in that case the transformation $R_{2\pi/m} = R_{2\pi}$ is the identity and in particular $L^2_1 = L^2 (\mathbb R^2, \mathbb C)$. Indeed we will look for $\eta$ in $L^2_m$ for a sufficiently large $m\geq 2$.
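A model example of an element of $L^2_m$, in polar coordinates $(r, \theta)$ on $\mathbb R^2$, is
\[
\vartheta (\xi) = e (r)\, e^{ikm\theta}\, , \qquad k\in \mathbb Z\, , \quad \int_0^\infty |e (r)|^2\, r\, dr < \infty\, ,
\]
since the rotation $R_{2\pi/m}$ shifts the angle $\theta$ by $\frac{2\pi}{m}$ and hence multiplies $\vartheta$ by $e^{2\pi i k} = 1$; conclusion (iv) of Theorem \ref{thm:spectral} below asserts that the eigenfunction $\eta$ is exactly of this form.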
An important technical detail is that, while the operator $L^2 \cap \mathscr{S} \ni \omega \mapsto K_2* \omega \in \mathscr{S}'$ {\em cannot} be extended continuously to the whole $L^2$ (cf. Remark \ref{r:Camillo_dumb}), for $m\geq 2$ it {\em can} be extended to a continuous operator from $L^2_m$ into $\mathscr{S}'$: this is the content of the following lemma.
\begin{lemma}\label{l:extension}\label{L:EXTENSION}
For every $m\geq 2$ there is a unique continuous operator $T: L^2_m \to \mathscr{S}'$ with the following properties:
\begin{itemize}
\item[(a)] If $\vartheta\in \mathscr{S}$, then $T (\vartheta) = K_2*\vartheta$ (in the sense of distributions);
\item[(b)] There is $C>0$ such that for every $\vartheta \in L^2_m$, there is $v=v(\vartheta)\in W^{1,2}_{\text{loc}}$ with
\begin{itemize}
\item[(b1)] $R^{-1} \|v\|_{L^2 (B_R)} + \|Dv\|_{L^2 (B_R)} \leq C\|\vartheta\|_{L^2 (\mathbb R^2)}$ for all $R>0$;
\item[(b2)] ${\rm div}\, v =0$ and $\langle T(\vartheta), \varphi\rangle = \int v\cdot \varphi$ for every test function $\varphi \in \mathscr{S}$.
\end{itemize}
\end{itemize}
\end{lemma}
From now on the operator $T$ will still be denoted by $K_2*$ and the function $v$ will be denoted by $K_2*\omega$. Observe also that, if $\hat\Omega$ is an $L^2_{\text{loc}}$ function such that $\|\hat\Omega\|_{L^2 (B_R)}$ grows polynomially in $R$, then the product $v\hat\Omega$ integrated against any Schwartz function is well defined, i.e. $v\hat\Omega$ is a tempered distribution. In the rest of the notes, any time that we write a product $\hat\Omega K_2 * \vartheta $ for an element $\vartheta\in L^2_m$ and an $L^2_{\text{loc}}$ function $\hat\Omega$ we will always implicitly assume that
$\|\hat\Omega\|_{L^2 (B_R)}$ grows at most polynomially in $R$ and that the product is understood as a well-defined element of $\mathscr{S}'$.
The relevance of this discussion is that, for $m\geq 2$, we can now consider the operator $L_{\text{ss}}$ as a closed, densely defined unbounded operator on $L^2_m$. We let
\index{aalLss@$L_{\text{ss}}$}\index{Self-similar operator}
\begin{equation}\label{e:def-Lss-formal}
L_{\text{ss}} (\Omega) = \left(1- {\textstyle{\frac{2}{\alpha}}}\right) \Omega - {\rm div}\, \left(\left(-{\textstyle{\frac{\xi}{\alpha}}} + \beta \bar V\right) \Omega\right) - \beta ( K_2*\Omega \cdot \nabla) \bar\Omega\,
\end{equation}
and its domain is
\begin{equation}\label{e:D(Lss)-formal}
D_m (L_{\text{ss}}) =\{\Omega\in L^2_m : L_{\text{ss}} (\Omega)\in L^2_m\}\, .
\end{equation}
When $\Omega\in \mathscr{S}$ it can be readily checked that $L_{\text{ss}}$ as defined in \eqref{e:def-Lss-formal} coincides with \eqref{e:Lss}.
The definition makes it obvious that $L_{\text{ss}}$ is a closed and densely defined unbounded operator over $L^2_m$. We will later show that $\Omega \mapsto (K_2*\Omega \cdot \nabla) \bar \Omega$ is in fact a compact operator from $L^2_m$ into $L^2_m$ and therefore we have
\begin{equation}\label{e:D(Lss)-formal-2}
D_m (L_{\text{ss}}) = \left\{\Omega\in L^2_m : {\rm div} \left(\beta \bar V\Omega- {\textstyle{\frac{\xi}{\alpha}}}\Omega\right)\, \in L^2_m\right\}\, .
\end{equation}
From now on, having fixed $m\geq 2$ and regarding $L_{\text{ss}}$ as an unbounded, closed, and densely defined operator in the sense given above, the spectrum ${\rm spec}_m\, (L_{\text{ss}})$ on $L^2_m$ is defined as the (closed) set which is the complement of the {\em resolvent} of $L_{\text{ss}}$, the latter being the (open) set of $z_0 \in \mathbb C$ such that $L_{\text{ss}}-z_0$ has a bounded inverse $(L_{\text{ss}}-z_0)^{-1} : L^2_m \to L^2_m$.\footnote{The textbook definition would require the inverse to take values in $D_m (L_{\text{ss}})$. Note however that this is a consequence of our very definition of $D_m (L_{\text{ss}})$.}
The choice of $\eta$ will then be defined by the following theorem which summarizes a quite delicate spectral analysis.
\begin{theorem}\label{thm:spectral}\label{THM:SPECTRAL}
For an appropriate choice of $\bar \Omega$ there is an integer $m\geq 2$ with the following property. For every positive $\bar a>0$, if $\beta$ is chosen appropriately large, then there is $\eta\in L^2_m\setminus \{0\}$ and $z_0=a_0+ib_0$ such that:
\begin{itemize}
\item[(i)] $a_0 \geq \bar a$ and $L_{\text{ss}} (\eta) = z_0 \eta$;
\item[(ii)] For any $z \in {\rm spec}_m\, (L_{\text{ss}})$ we have ${\rm Re}\, z\leq a_0$;
\item[(iii)] If $b_0=0$, then $\eta$ is real valued;
\item[(iv)] There is an integer $k\geq 1$ and a function $e:\mathbb R^+\to \mathbb C$ such that $\eta (x) = e (r) e^{ikm \theta}$ if $b_0\neq 0$ and $\eta (x) = {\rm Re}\, (e(r) e^{ikm\theta})$ if $b_0= 0$.
\end{itemize}
\end{theorem}
In fact we will prove some more properties of $\eta$, namely, suitable regularity and decay at infinity, but these are effects of the eigenvalue equation and will be addressed later.
The proof of Theorem \ref{thm:spectral} will be split into two chapters. In the first one we regard $L_{\text{ss}}$ as a perturbation of a simpler operator $L_{\text{st}}$, which is obtained from $L_{\text{ss}}$ by ignoring the $(1+\frac{\xi}{\alpha}\cdot \nabla)$ part: the intuition behind neglecting this term is that the remaining part of the operator $L_{\text{ss}}$ is multiplied by the constant $\beta$, which will be chosen appropriately large. The second chapter will be dedicated to proving a theorem analogous to Theorem \ref{thm:spectral} for the operator $L_{\text{st}}$. The analysis will take heavy advantage of an appropriate splitting of $L^2_m$ as a direct sum of invariant subspaces of $L_{\text{st}}$. The latter are obtained by expanding in Fourier series the trace of any element of $L^2_m$ on the unit circle. In each of these invariant subspaces the spectrum of $L_{\text{st}}$ can be related to the spectrum of a suitable second order differential operator in a single real variable.
\section{Nonlinear theory}\label{s:nonlinear}
The linear theory will then be used to show Theorem \ref{thm:main3}. In fact, given the decomposition introduced in \eqref{e:Ansatz-1}, we can now formulate a yet more precise statement from which we conclude Theorem \ref{thm:main3} as a corollary.
\begin{theorem}\label{thm:main4}
Let $p$, $\alpha$, and ${\bar\alpha}$ be as in Theorem \ref{thm:main2} and assume $\bar a$ is sufficiently large. Let $\bar \Omega$, $\eta$, $a_0$, and $b_0$ be as in Theorem \ref{thm:spectral} and for every $\varepsilon \in \mathbb R$, $k\in \mathbb N$ consider the solutions $\omega_{\varepsilon,k}$ of \eqref{e:Euler-later-times} and $\Omega_{\varepsilon,k} (\xi, \tau) = e^\tau \omega_{\varepsilon,k} (e^{\tau/\alpha} \xi, e^\tau)$. If we define $\Omega_{\text{per},k}$ through \eqref{e:Ansatz-1}, then there are $\tau_0 = \tau_0 (\varepsilon)$ and $\delta_0>0$, independent of $k$, such that
\begin{equation}\label{e:H2-estimate}
\|\Omega_{\text{per}, k} (\cdot, \tau)\|_{L^2} \leq e^{\tau (a_0+\delta_0)} \qquad\qquad\qquad \forall \tau\leq \tau_0\, .
\end{equation}
\end{theorem}
Estimate \eqref{e:asymptotic-in-t-2} is a simple consequence of \eqref{e:H2-estimate} after translating the latter back to the original coordinates. In order to give a feeling for why \eqref{e:H2-estimate} holds we will detail the equation that $\Omega_{\text{per}, k}$ satisfies.
First of all, subtracting the equation satisfied by $\tilde{\Omega}$ from the one satisfied by $\Omega_{\varepsilon, k}$, we obtain
\begin{align*}
&\partial_\tau \Omega_{\text{lin}} + \partial_\tau \Omega_{\text{per},k} - \left(1+{\textstyle{\frac{\xi}{\alpha}}}\cdot \nabla\right) \Omega_{\text{lin}}
-\left(1+{\textstyle{\frac{\xi}{\alpha}}}\cdot \nabla\right) \Omega_{\text{per}, k} \\
+ & (\tilde{V} \cdot \nabla) \Omega_{\text{lin}} + V_{\text{lin}}\cdot \nabla \tilde{\Omega} + (\tilde{V}\cdot \nabla ) \Omega_{\text{per},k} + (V_{\text{per},k}\cdot \nabla) \tilde{\Omega} + (V_{\text{lin}}\cdot \nabla) \Omega_{\text{per}, k}\\
+ & (V_{\text{per}, k} \cdot \nabla) \Omega_{\text{lin}}
+ (V_{\text{lin}}\cdot \nabla) \Omega_{\text{lin}} + (V_{\text{per},k}\cdot \nabla) \Omega_{\text{per}, k} = 0\, ,
\end{align*}
where we have used the convention $\tilde{V}=K_2*\tilde\Omega$, $V_{\text{per},k} = K_2* \Omega_{\text{per},k}$, and $V_{\text{lin}}= K_2* \Omega_{\text{lin}}$. Next recall that $\tilde{\Omega}=\beta\bar\Omega + \Omega_r$, the definition of $L_{\text{ss}}$ in \eqref{e:Lss}, and the fact that $\partial_\tau \Omega_{\text{lin}} - L_{\text{ss}} (\Omega_{\text{lin}})= 0$. In particular, formally we obtain
\begin{align}
& (\partial_{\tau} - L_{\text{ss}}) \Omega_{\text{per}, k} + ((V_{\text{lin}}+V_r)\cdot \nabla) \Omega_{\text{per},k} + (V_{\text{per},k} \cdot \nabla) (\Omega_{\text{lin}} + \Omega_r) + (V_{\text{per},k}\cdot \nabla) \Omega_{\text{per},k}\nonumber\\
= & -(V_{\text{lin}}\cdot \nabla) \Omega_{\text{lin}} - (V_r\cdot \nabla) \Omega_{\text{lin}} - (V_{\text{lin}}\cdot \nabla) \Omega_r\, ,\label{e:master}
\end{align}
which must be supplemented with the initial condition
\[
\Omega_{\text{per},k} (\cdot, -k)= 0\, .
\]
In fact, in order to justify \eqref{e:master} we need to show that $\Omega_{\text{per},k} (\cdot, \tau)\in L^2_m$ for every $\tau$, which is the content of the following elementary lemma.
\begin{lemma}\label{l:evolution-in-L2m}
The function $\Omega_{\text{per},k} (\cdot, \tau)$ belongs to $L^2_m$ for every $\tau$.
\end{lemma}
\begin{proof}
It suffices to prove that $\omega_{\varepsilon, k} (\cdot, t)$ is $m$-fold symmetric, since the transformation rule then implies that $\Omega_{\varepsilon,k} (\cdot, \tau)$ is $m$-fold symmetric and $\Omega_{\text{per}, k} (\cdot, \tau)$ is obtained from the latter by subtracting $\varepsilon e^{a_0\tau} {\rm Re}\, (e^{ib_0\tau} \eta) + \tilde{\Omega} (\cdot, \tau)$, which is also $m$-fold symmetric. In order to show that $\omega_{\varepsilon, k}$ is $m$-fold symmetric just consider that $\omega_{\varepsilon, k} (R_{2\pi/m} (\cdot), t)$ solves \eqref{e:Euler-later-times} because both the forcing term and the initial data are invariant under a rotation of $\frac{2\pi}{m}$ (and the Euler equations are rotationally invariant). Then the uniqueness part of Yudovich's statement implies $\omega_{\varepsilon, k} (\cdot, t) = \omega_{\varepsilon, k} (R_{2\pi/m} (\cdot), t)$.
\end{proof}
We proceed with our discussion and observe that $V_{\text{lin}} + V_r$ and $\Omega_{\text{lin}}+\Omega_r$ are both ``small'' in appropriate sense for sufficiently negative times, while, because of the initial condition being $0$ at $-k$, for some time after $-k$ we expect that the quadratic nonlinearity $(V_{\text{per},k}\cdot \nabla) \Omega_{\text{per},k}$ will not contribute much to the growth of $\Omega_{\text{per}, k} (\cdot, \tau)$. Schematically, we can break \eqref{e:master} as
\begin{align}
& (\partial_{\tau} - L_{\text{ss}}) \Omega_{\text{per}, k} + \underbrace{((V_{\text{lin}}+V_r)\cdot \nabla) \Omega_{\text{per},k} + (V_{\text{per},k} \cdot \nabla) (\Omega_{\text{lin}} + \Omega_r)}_{\mbox{small linear terms}} + \underbrace{(V_{\text{per},k}\cdot \nabla) \Omega_{\text{per},k}}_{\mbox{quadratic term}}\nonumber\\
= & \underbrace{-(V_{\text{lin}}\cdot \nabla) \Omega_{\text{lin}} - (V_r\cdot \nabla) \Omega_{\text{lin}} - (V_{\text{lin}}\cdot \nabla) \Omega_r}_{\mbox{forcing term } \mathscr{F}}\, .\label{e:master-schematics}
\end{align}
In particular we can hope that the growth of $\Omega_{\text{per},k} (\cdot, \tau)$ is comparable to that of the solution of the following ``forced'' linear problem
\begin{equation}\label{e:master-linear}
(\partial_{\tau} - L_{\text{ss}}) \Omega = \mathscr{F}\, .
\end{equation}
Observe that we know that $\Omega_{\text{lin}} (\cdot, \tau)$ and $V_{\text{lin}} (\cdot, \tau)$ decay like $e^{a_0 \tau}$. We can then expect to gain a slightly faster exponential decay for $\mathscr{F} (\cdot, \tau)$ because of the smallness of $V_r$ and $\Omega_r$. On the other hand from Theorem \ref{thm:spectral} we expect that the semigroup generated by $L_{\text{ss}}$ enjoys growth estimates of type $e^{a_0\tau}$ on $L^2_m$ (this will be rigorously justified using classical results in the theory of strongly continuous semigroups). We then wish to show, using Duhamel's formula for the semigroup $e^{\tau L_{\text{ss}}}$, that the growth of $\Omega_{\text{per},k}$ is bounded by $e^{a_0\tau} (e^{\delta_0 \tau} - e^{-\delta_0 k})$ for some positive $\delta_0$ and for some time after the initial time $-k$: the crucial point will be to show that the latter bound is valid for $\tau$ up until a ``universal'' time $\tau_0$, independent of $k$.
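Schematically, dropping the small linear terms and the quadratic term, Duhamel's formula for \eqref{e:master-linear} with zero initial data at $\tau = -k$ reads
\[
\Omega (\cdot, \tau) = \int_{-k}^\tau e^{(\tau - s) L_{\text{ss}}}\, \mathscr{F} (\cdot, s)\, ds\, ,
\]
so that, assuming for illustration a semigroup bound $\|e^{\sigma L_{\text{ss}}}\|_{L^2_m\to L^2_m} \leq C e^{a_0 \sigma}$ and a forcing estimate of type $\|\mathscr{F} (\cdot, s)\|_{L^2} \leq C e^{(a_0 + 2\delta_0) s}$, we would get
\[
\|\Omega (\cdot, \tau)\|_{L^2} \leq C^2 \int_{-k}^\tau e^{a_0 (\tau - s)}\, e^{(a_0+2\delta_0) s}\, ds
= \frac{C^2}{2\delta_0}\, e^{a_0 \tau} \left(e^{2\delta_0 \tau} - e^{-2\delta_0 k}\right)\, ,
\]
which is a bound of the desired type (the constants and the precise exponents here are only indicative).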
Even though intuitively sound, this approach will require several delicate arguments, explained in the final chapter of the notes. In particular:
\begin{itemize}
\item we will need to show that the quadratic term $(V_{\text{per},k}\cdot \nabla) \Omega_{\text{per},k}$ is small up to some time $\tau_0$ independent of $k$, in spite of the fact that there is a ``loss of derivative'' in it (and thus we cannot directly close an implicit Gronwall argument using the Duhamel formula and the semigroup estimate for $L_{\text{ss}}$);
\item the terms $\Omega_r$ and $V_r$ are not really negligible in absolute terms; rather, for very negative times, they are supported in a region of space which escapes towards spatial infinity.
\end{itemize}
The first issue will be solved by closing the estimates in a space of more regular functions, which contains $L^2$ and embeds in $L^\infty$ (in fact $L^2\cap W^{1,4}$): the bound on the growth of the $L^2$ norm will be achieved through the semigroup estimate for $L_{\text{ss}}$ via Duhamel's formula, while the bound on the first derivative will be achieved through an energy estimate, which will profit from the $L^2$ one. The second issue will be addressed by further restricting the functional space in which we close the estimates for $\Omega_{\text{per}, k}$: we will require an appropriate decay of the derivative of the solutions, more precisely that the latter belong to $L^2 (|x|^2\,\mathrm dx)$. Of course in order to use this strategy we will need to show that the initial perturbation $\eta$ belongs to the correct space of functions.
\chapter{Introduction}
In these notes we will consider the Euler equations in the $2$-dimensional space in vorticity formulation\index{Euler equations@Euler equations}, which are given by
\begin{equation}\label{e:Euler}
\left\{
\begin{array}{ll}
&\partial_t \omega + (v\cdot \nabla) \omega = f\\ \\
& v (\cdot, t) = K_2 * \omega (\cdot, t)
\end{array}
\right.
\end{equation}
where $K_2$ is the usual $2$-dimensional Biot-Savart kernel and $f$ is a given external force. \index{Biot-Savart kernel@Biot-Savart kernel}\index{aalf@$f$}\index{aalK_2@$K_2$}\index{external force@external force}\index{force, external@force, external}
$v$ is the velocity field, and it is a function defined on a space-time domain of type $\mathbb R^2 \times [0, T]$. The vorticity is given by $\omega = \curl v = \partial_{x_1} v_2 - \partial_{x_2} v_1 = \nabla\times v$, a relation which the Biot-Savart law $v = K_2 * \omega$ inverts\index{vorticity@vorticity}\index{velocity@velocity}\index{aagz@$\omega$}\index{aalv@$v$}.
We will study the Cauchy problem for \eqref{e:Euler} with initial data
\begin{equation}\label{e:Cauchy}
\omega (\cdot, 0) = \omega_0\,
\end{equation}
on the domain $\mathbb R^2 \times [0,\infty[$
under the assumptions that
\begin{itemize}
\item[(i)] $\omega_0\in L^1\cap L^p$ for some $p>2$ and $v_0=K_2* \omega_0 \in L^2$;
\item[(ii)] $f\in L^1 ([0,T], L^1\cap L^p)$ and $K_2*f\in L^1 ([0,T], L^2)$ for every $T<\infty$.
\end{itemize}
In particular we understand solutions $\omega$ in the usual sense of distributions, namely,
\begin{equation}\label{e:distrib}
\int_0^T \int_{\ensuremath{\mathbb R}^2} [\omega (\partial_t \phi + K_2* \omega \cdot \nabla \phi) + f \phi]\, dx\, dt
= - \int_{\ensuremath{\mathbb R}^2} \phi (x,0)\, \omega_0 (x)\, dx
\end{equation}
for every smooth test function $\phi\in C^\infty_c (\mathbb R^2 \times [0,T[)$.\index{Solution, weak} In view of (i)-(ii) and standard energy estimates we will restrict our attention to weak solutions which satisfy the following bounds:
\begin{itemize}
\item[(a)] $\omega \in L^\infty ([0,T], L^1\cap L^p)$ and $v\in L^\infty ([0,T], L^2)$ for every $T<\infty$.
\end{itemize}
The purpose of these notes is to give a proof of the following:
\begin{theorem}\label{thm:main}
For every $p\in ]2, \infty[$ there is a triple $\omega_0, v_0$, and $f$ satisfying (i)-(ii)
with the property that there are uncountably many solutions $(\omega, v)$ of \eqref{e:Euler} and \eqref{e:Cauchy} on $\mathbb R^2\times [0, \infty [$ which satisfy the bound (a). Moreover, $\omega_0$ can be chosen to vanish identically.
\end{theorem}
In fact the $f$ given by the proof is smooth and compactly supported on any closed interval of time $[\varepsilon, T]\subset ]0, \infty[$. Moreover, a closer inspection of the argument reveals that each of the solutions $(\omega, v)$ enjoys bounds on the $W^{1,4}_{\rm loc}$ norm of $\omega (\cdot, t)$, and good decay properties at infinity, whenever $t$ is positive (and obviously such estimates degenerate as $t\downarrow 0$). In particular $v$ belongs to $C^1_{\rm loc} (\mathbb R^2\times ]0, \infty[)$. It is not difficult to modify the arguments detailed in these notes to produce examples which have even more regularity and better decay for positive times, but we do not pursue the issue here.
\begin{remark}\label{r:bounded}\label{R:BOUNDED}
Recall that
\begin{equation}\label{e:bound-on-Biot-Savart}
\|K_2* \omega (\cdot, t)\|_{L^\infty}\leq C (p) (\|\omega (\cdot, t)\|_{L^1} + \|\omega (\cdot, t)\|_{L^p})
\end{equation}
whenever $p>2$ (cf. the Appendix for the proof). Therefore we conclude that each solution $v$ in Theorem \ref{thm:main} is bounded on $\mathbb R^2\times [0,T]$ for every positive $T$.
\end{remark}
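Although the detailed proof of \eqref{e:bound-on-Biot-Savart} is deferred to the Appendix, the mechanism behind it is worth recording: splitting $K_2 = K_2 \mathbf{1}_{B_1} + K_2 \mathbf{1}_{B_1^c}$, where $B_1$ is the unit ball around the origin, H\"older's inequality gives
\begin{equation*}
\|K_2 * \omega (\cdot, t)\|_{L^\infty} \leq \|K_2 \mathbf{1}_{B_1}\|_{L^{p'}}\, \|\omega (\cdot, t)\|_{L^p} + \|K_2 \mathbf{1}_{B_1^c}\|_{L^\infty}\, \|\omega (\cdot, t)\|_{L^1}\, ,
\end{equation*}
and both kernel norms are finite precisely because $p>2$: indeed $|K_2 (x)| = \frac{1}{2\pi |x|}$, so that $K_2 \mathbf{1}_{B_1}\in L^{p'}$ for the dual exponent $p' = \frac{p}{p-1} < 2$, while $\|K_2 \mathbf{1}_{B_1^c}\|_{L^\infty} = \frac{1}{2\pi}$.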
The above groundbreaking result was proved by Vishik in the two papers \cite{Vishik1} and \cite{Vishik2} (upon which these notes are heavily based) and answers a long-standing open question in the PDE theory of the incompressible Euler equations, as it shows that it is impossible to extend to the $L^p$ scale the following classical uniqueness result of Yudovich.
\begin{theorem}\label{thm:Yudo}\label{THM:YUDO}
Consider a strictly positive $T$, an initial vorticity $\omega_0 \in L^1\cap L^\infty$ with $v_0=K_2*\omega_0 \in L^2$ and an external force $f\in L^1 ([0,T]; L^1\cap L^\infty)$ with $K_2*f\in L^1 ([0,T]; L^2)$. Then there is a unique solution $\omega$ of \eqref{e:Euler} and \eqref{e:Cauchy} on $\mathbb R^2\times [0, T]$ satisfying the estimates $\omega \in L^\infty ([0,T], L^1\cap L^\infty)$ and $v = K_2* \omega \in L^\infty ([0,T], L^2)$.
\end{theorem}
The above theorem in a bounded domain was originally proven by Yudovich in 1963 \cite{Yudovich1963}, who also proved a somewhat more technical statement on unbounded domains. We have not been able to find an exact reference for the statement above (cf. for instance \cite[Theorem 8.2]{MajdaBertozzi} and the paragraph right afterwards, where the authors point out the validity of the Theorem in the case of $f=0$). We therefore give a detailed proof in the appendix for the reader's convenience.
\begin{remark}\label{r:A-priori-estimates}
We recall that the solution of Theorem \ref{thm:Yudo} satisfies a set of important a priori estimates, which can be justified using the uniqueness part and a simple approximation procedure. Indeed if $(\omega, v)$ is a smooth solution of \eqref{e:Euler}, then the method of characteristics shows that, for every $t$, there exists a family of volume-preserving diffeomorphisms $T_s:\ensuremath{\mathbb R}^2\to\ensuremath{\mathbb R}^2, s\in[0, t]$, such that \begin{equation*}
\omega(x, t)=\omega_0(T_0 x) + \int_0^t f(T_s x, s)\,\mathrm ds.
\end{equation*}
Therefore, since volume-preserving diffeomorphisms preserve all $L^q$ norms, we get, for all $q\in[1,\infty]$,
\begin{equation*}
\norm{\omega(\cdot, t)}_{L^q}\le\norm{\omega_0}_{L^q}+\int_0^t \norm{f(\cdot, s)}_{L^q}\,\mathrm ds.
\end{equation*}
Furthermore, a usual integration by parts argument, as seen in \cite[Lemma 1.1]{Yudovich1963}, shows that $v$ satisfies the estimate
\begin{equation*}
\norm{v(\cdot, t)}_{L^2}\le\norm{v_0}_{L^2}+\int_0^t \norm{K_2* f(\cdot,s)}_{L^2}\,\mathrm ds.
\end{equation*}
\end{remark}
\begin{remark}\label{r:well-defined}
Recall that the Biot-Savart kernel is given by the formula
\begin{equation}\label{e:Biot-Savart}
K_2 (x_1, x_2) = \frac{x^\perp}{2\pi |x|^2} = \frac{1}{2\pi |x|^2} (-x_2, x_1)\, .
\end{equation}
In particular, while $K_2\not \in L^p$ for any $p$, it can be easily broken into
\begin{equation}\label{e:decomposition-Biot-Savart-Kernel}
K_2 = K_2 \mathbf{1}_{B_1} + K_2 \mathbf{1}_{B_1^c}\, ,
\end{equation}
where $B_1$ denotes the unit ball around $0$.
Observe that $K_2 \mathbf{1}_{B_1} \in L^q$ for every $q\in [1,2[$ and $K_2 \mathbf{1}_{B_1^c}\in L^r$ for every $r\in ]2, \infty]$. Under the assumption that $\omega \in L^{2-\delta}$ for some positive $\delta >0$, this decomposition allows us to define the convolution $K_2* \omega$ as $(K_2 \mathbf{1}_{B_1}) * \omega
+ (K_2 \mathbf{1}_{B_1^c}) * \omega$, where each separate summand makes sense as Lebesgue integrals thanks to Young's convolution inequality.\footnote{Young's convolution inequality states that, if $g_1\in L^{p_1}$ and $g_2\in L^{p_2}$ with $1\leq \frac{1}{p_1} + \frac{1}{p_2} \leq 2$, then $g_1 (y-\cdot) g_2 (\cdot)$ belongs to $L^1$ for a.e. $y$ and $g_1* g_2\in L^r$ for $\frac{1}{r}=\frac{1}{p_1} + \frac{1}{p_2} -1$.}
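As a concrete illustration of the exponents at play, take $\omega \in L^{3/2}$ (i.e. $\delta = \frac{1}{2}$): applying Young's inequality with $K_2 \mathbf{1}_{B_1}\in L^{3/2}$ and with $K_2 \mathbf{1}_{B_1^c}\in L^3$ we obtain
\begin{equation*}
(K_2 \mathbf{1}_{B_1}) * \omega \in L^3 \qquad \mbox{and} \qquad (K_2 \mathbf{1}_{B_1^c}) * \omega \in L^\infty\, ,
\end{equation*}
since $\frac{2}{3}+\frac{2}{3}-1 = \frac{1}{3}$ and $\frac{1}{3}+\frac{2}{3}-1 = 0$; in particular $K_2 * \omega \in L^3 + L^\infty$ for such $\omega$.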
On the other hand we caution the reader that, for general $\omega\in L^2$, $K_2*\omega$ may not be well-defined. More precisely, if we denote by $\mathscr{S}$\index{aalSscript@$\mathscr{S}$} the \index{Schwartz space@Schwartz space $\mathscr{S}$} Schwartz space of rapidly decaying smooth functions and by $\mathscr{S}'$\index{aalSscript'@$\mathscr{S}'$} the space of tempered distributions \index{tempered distribution@tempered distribution}\index{space of tempered distributions@space $\mathscr{S}'$ of tempered distributions} (endowed, respectively, with their classical Fr\'echet and weak topologies), it can be shown that there is no continuous extension of the operator $\mathscr{S} \ni \omega \mapsto K_2 *\omega\in \mathscr{S}'$ to a continuous operator from $L^2$ to $\mathscr{S}'$, cf. Remark \ref{r:Camillo_dumb}.
This fact creates some technical issues in many arguments where we will indeed need to consider a suitable continuous extension of the operator $\omega \mapsto K_2*\omega$ to {\em some} closed linear subspace of $L^2$, namely, $m$-fold rotationally symmetric functions in $L^2$ (for some integer $m\geq 2$). Such an extension will be shown to exist thanks to some special structural properties of the subspace.
\end{remark}
\section{Idea of the proof}
We now describe, briefly, the rough idea of and motivation for the proof. An extensive description of the proof with precise statements can be found in Chapter~\ref{chapter:general}, which breaks down the whole argument into three separate (and independent) parts. The subsequent three chapters are then dedicated to the detailed proofs.
First, we recall two essential features of the two-dimensional Euler equations:
\begin{enumerate}
\item \emph{Steady states}. The two-dimensional Euler equations possess a large class of explicit, radially symmetric steady states called \emph{vortices}:\footnote{They are sometimes also called rotating or circular flows.}
\begin{equation}
\label{eq:vorticesdef}
\bar{\omega}(x) = g(|x|), \quad \bar{v}(x) = \zeta(|x|) x^\perp.
\end{equation}
\item \emph{Scaling symmetry}. The Euler equations possess a two-parameter scaling symmetry: If $(\omega,v)$ is a solution of~\eqref{e:Euler} with vorticity forcing $f$, and $\lambda, \mu > 0$, then
\begin{equation}
\omega_{\lambda,\mu}(x,t) = \mu \omega(\lambda x, \mu t), \quad v_{\lambda,\mu}(x,t) = \frac{\mu}{\lambda} v(\lambda x, \mu t),
\end{equation}
define a solution with vorticity forcing
\begin{equation}
f_{\lambda,\mu}(x,t) = \mu^2 f(\lambda x, \mu t).
\end{equation}
The scaling symmetry corresponds to the physical dimensions
\begin{equation}
[x] = L,\quad [t] = T, \quad [v] = \frac{L}{T}, \quad [\omega] = \frac{1}{T}, \quad \text{ and } \quad [f] = \frac{1}{T^2}.
\end{equation}
\end{enumerate}
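Both identities in the scaling symmetry can be verified by direct substitution: the chain rule gives
\begin{equation*}
\partial_t \omega_{\lambda,\mu} + (v_{\lambda,\mu}\cdot \nabla)\, \omega_{\lambda,\mu}
= \mu^2 \left[\partial_t \omega + (v\cdot \nabla)\, \omega\right] (\lambda x, \mu t) = f_{\lambda,\mu} (x,t)\, ,
\end{equation*}
while a change of variables in the convolution, together with the $(-1)$-homogeneity of the Biot-Savart kernel \eqref{e:Biot-Savart}, shows that $K_2 * \omega_{\lambda,\mu} (\cdot, t) = v_{\lambda,\mu} (\cdot, t)$.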
We now elaborate on the above two features:
\smallskip
\emph{1. Unstable vortices}. The stability analysis of shear flows $u = (b(y),0)$ and vortices~\eqref{eq:vorticesdef} is classical, with seminal contributions due to Rayleigh~\cite{Rayleigh1879}, Kelvin~\cite{Thomson1880}, Orr~\cite{Orr}, and many others. The linearized Euler equations around the background vortex $\bar{\omega}$ are
\begin{equation}
\label{eq:linearizedeulerintro}
\partial_t \omega - L_{\rm st} \omega :=
\partial_t \omega + \zeta(r) \partial_\theta \omega + (v \cdot e_r) g'(r) = 0, \quad v = K_2 \ast \omega.
\end{equation}
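For the reader's convenience we recall how \eqref{eq:linearizedeulerintro} arises: inserting $\bar\omega + \omega$ and $\bar v + v$ into \eqref{e:Euler} and discarding the quadratic term $(v\cdot \nabla)\,\omega$, one is left with the two linear transport terms
\begin{equation*}
(\bar v \cdot \nabla)\, \omega = \zeta(r)\, \partial_\theta \omega \qquad \mbox{and} \qquad (v\cdot \nabla)\, \bar \omega = (v\cdot e_r)\, g'(r)\, ,
\end{equation*}
where the first identity uses that $\bar v = \zeta(|x|)\, x^\perp$ is purely tangential and the second that $\bar\omega$ is radial.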
Consider the eigenvalue problem associated to the linearized operator $L_{\rm st}$. It suffices to consider $\psi = e^{ik\theta} \psi_k(|x|)$, $k \geq 0$, the stream function associated to a vorticity perturbation $\omega$ (that is, $\Delta \psi = \omega$). It is convenient to pass to an exponential variable $s = \log r$ and define $\phi(s) = \psi_k(e^s)$; $A(s) = e^s g'(e^s)$ ($r \; \times$ the radial derivative of the background vorticity); and $\Xi(s) = \zeta(e^s)$ (the differential rotation). The eigenvalue problem for $L_{\rm st}$, with eigenvalue $\lambda = -ikz$, can be rewritten as
\begin{equation}
\label{eq:theeigenvaluequation}
\left( \Xi(s) - z \right) \left( \frac{d^2}{ds^2}- k^2 \right) \phi - A(s) \phi = 0.
\end{equation}
This is \emph{Rayleigh's stability equation}. The eigenvalue $\lambda$ is unstable when ${\rm Im}(z) > 0$, in which case we can divide by $\Xi - z$ and analyze a steady Schr{\"o}dinger equation. It is possible to understand~\eqref{eq:theeigenvaluequation} well enough to design vortices for which the corresponding linear operator has an unstable eigenfunction. For shear flows, this analysis goes back to Tollmien~\cite{Tollmien}. The problem was treated rigorously by Z.~Lin~\cite{LinSIMA2003} for bounded and unbounded shear flows and rotating flows in an annulus.\footnote{For those interested in hydrodynamic stability more generally, see the classic monograph~\cite{DrazinReid}. Chapter~4 therein concerns the stability of shear flows, including Rayleigh's criterion and a sketch of Tollmien's idea.}
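Let us also record the elementary computation behind the passage to the exponential variable: on the $k$-th Fourier mode the Laplacian acts as $\partial_r^2 + \frac{1}{r}\partial_r - \frac{k^2}{r^2}$, and with $s = \log r$ and $\phi (s) = \psi_k (e^s)$ one checks
\begin{equation*}
\left(\partial_r^2 + \frac{1}{r}\, \partial_r - \frac{k^2}{r^2}\right) \psi_k (r) = e^{-2s} \left(\frac{d^2}{ds^2} - k^2\right) \phi (s)\, ,
\end{equation*}
so that, after multiplication by $e^{2s}$, the eigenvalue equation takes the form of Rayleigh's stability equation.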
The case of unbounded vortices, which is the crucial one for the purposes of these notes, was treated by Vishik in~\cite{Vishik2}, see Chapter~\ref{chapter:linearpartii} below. In the cases relevant to these notes, $L_{\rm st}$ has at least one unstable eigenvalue $\lambda$. While the latter could well be real, for the sake of our argument let us assume that it is a complex number $\lambda= a_0 + b_0 i$ ($a_0, b_0 > 0$) and let $\bar\lambda = a_0- b_0 i$ be its complex conjugate. If we denote by $\eta$ and $\bar{\eta}$ two corresponding (nontrivial) eigenfunctions, it can be checked that they are {\em not} radially symmetric.
With the unstable modes in hand, one may seek a trajectory on the \emph{unstable manifold} associated to $\lambda$ and $\bar{\lambda}$. For example, one such trajectory may look like
\begin{equation}
\omega = \bar{\omega} + \omega^{\rm lin} + o(e^{a_0 t}),
\end{equation}
where $\omega^{\rm lin} = {\rm Re}(e^{\lambda t} \eta)$ is a solution of the linearized Euler equations~\eqref{eq:linearizedeulerintro}. These solutions converge to $\bar{\omega}$ exponentially in backward time.
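Indeed, writing $\lambda = a_0 + b_0 i$ we have
\begin{equation*}
\omega^{\rm lin} (\cdot, t) = {\rm Re} \big(e^{\lambda t}\, \eta\big) = e^{a_0 t}\, {\rm Re} \big(e^{i b_0 t}\, \eta\big)\, ,
\end{equation*}
so that $\omega^{\rm lin}$ decays at the exponential rate $e^{a_0 t}$ as $t\to -\infty$ while rotating at frequency $b_0$.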
Hence, we expect that certain unstable vortices exhibit a kind of \emph{non-uniqueness at time $t = -\infty$} and moreover break the radial symmetry. The existence of unstable manifolds associated to a general class of Euler flows in dimension $n \geq 2$ was demonstrated by Lin and Zeng~\cite{LinZengCPAM2013,LinZengCorrigendum2014}.\footnote{There is a substantial mathematical literature on the nonlinear instability of Euler flows, see~\cite{friedlanderstraussvishikearly,FriedlanderHoward,friedlanderstraussvishik,bardosguostrauss,friedlandervishik,linnonlinear}.}
\smallskip
\emph{2. Self-similar solutions}. It is natural to consider solutions invariant under the scaling symmetry and, in particular, it is natural to consider those self-similar solutions which live exactly at the desired integrability. If we fix a relationship $L^\alpha \sim T$ in the scaling symmetries, the similarity variables are\footnote{We may regard the logarithmic time as $\tau = \log (t/t_0)$, so that $t$ is non-dimensionalized according to a fixed reference time $t_0 = 1$.}
\begin{equation}\label{e:self-similar-scaling}
\xi = \frac{x}{t^{\frac{1}{\alpha}}}, \quad \tau = \log t
\end{equation}
\begin{equation}
v(x,t) = \frac{1}{t^{1-\frac{1}{\alpha}}} V(\xi, \tau), \quad \omega(x,t) = \frac{1}{t} \Omega(\xi, \tau).
\end{equation}
Notice that physical time $t=0$ corresponds to logarithmic time $\tau = -\infty$. The function $\Omega$ is known as the \emph{profile}.
The Euler equations, without force, in similarity variables are
\begin{equation}
\label{eq:similarityvareulereqns}
\left\{\begin{array}{ll}
&\partial_\tau \Omega - \left( 1 + \frac{\xi}{\alpha} \cdot \nabla_\xi \right) \Omega + V \cdot \nabla_\xi \Omega = 0\\ \\
& V= K_2* \Omega \, .
\end{array}\right.
\end{equation}
Profiles $\Omega$ satisfying $\| \Omega(\cdot,\tau) \|_{L^p} = O(1)$ as $\tau \to -\infty$ satisfy $\| \omega(\cdot,t) \|_{L^p} = O(t^{-1+\frac{2}{\alpha p}})$ as $t \to 0^+$, and similarly in the weak $L^p$ norms. Hence, the Lebesgue and weak Lebesgue norms with $p = 2/\alpha$ would be $O(1)$ in either set of variables. To show sharpness of the Yudovich class, we consider $0 < \alpha \ll 1$.
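The claimed rates follow from the exact scaling relation
\begin{equation*}
\|\omega (\cdot, t)\|_{L^p} = t^{-1+\frac{2}{\alpha p}}\, \|\Omega (\cdot, \tau)\|_{L^p}\, ,
\end{equation*}
which is a consequence of the change of variables $dx = t^{\frac{2}{\alpha}}\, d\xi$; the exponent vanishes precisely when $p = \frac{2}{\alpha}$.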
\bigskip
The route to non-uniqueness through unstable vortices and self-similar solutions is as follows: Suppose that $\bar{\Omega}$ is an unstable steady state of the similarity variable Euler equations~\eqref{eq:similarityvareulereqns} (in particular, $\bar{\omega}(x,t) = t^{-1} \bar{\Omega}(\xi)$ is a self-similar solution of the usual Euler equations). Find a trajectory $\Omega$ on the unstable manifold associated to $\bar{\Omega}$. In similarity variables, the steady state $\bar{\Omega}$ will be ``non-unique at minus infinity", which corresponds to non-uniqueness at time $t=0$ in the physical variables.
One natural class of background profiles $\bar{\Omega}$ consists of \emph{power-law vortices} $\bar{\omega} = \beta |x|^{-\alpha}$, $\beta \in \ensuremath{\mathbb R}$, which are simultaneously steady solutions and self-similar solutions without force. At present, we do not know whether the above strategy can be implemented with power-law vortices.
Instead, we choose a smooth vortex profile $g(|x|)$, with power-law decay as $|x| \to +\infty$, which is unstable for the Euler dynamics. Our background will be the self-similar solution with profile $\bar{\Omega} = g(|\xi|)$, which solves the Euler equations \emph{with a self-similar force}. This profile may be considered a well-designed smoothing of a power-law vortex. When the background is large, it is reasonable to expect that the additional term in the similarity variable Euler equations~\eqref{eq:similarityvareulereqns} can be treated perturbatively, so that $g(|\xi|)$ will also be unstable for the similarity variable Euler dynamics. This heuristic is justified in Chapter~\ref{chapter:linearparti}.
In order to ensure that the solutions have finite energy, we also truncate the background velocity at distance $O(1)$ in physical space. This generates a different force. The truncation's contribution to the force is smooth and heuristically does not destroy the non-uniqueness, which can be thought of as ``emerging" from the singularity at the space-time origin. Our precise Ansatz is~\eqref{e:Ansatz-1}, which is the heart of the nonlinear part of these notes.
\section{Differences with Vishik's work}
While we follow the strategy of Vishik in~\cite{Vishik1,Vishik2}, we deviate from his proof in some ways. We start by listing two changes which, although rather minor, affect the presentation substantially.
\begin{enumerate}
\item We decouple the parameter $\alpha$ in~\eqref{e:self-similar-scaling} governing the self-similar scaling from the decay rate $\bar{\alpha}$ of the smooth profile $g$ at infinity. In \cite{Vishik1} these two parameters are equal; however, it is rather obvious that the argument goes through as long as $\alpha \leq \bar \alpha$. If we then choose $\alpha < \bar \alpha$ the resulting solution has zero initial data. This is a very minor remark, but it showcases the primary role played by the forcing $f$ in the equation.
\item Strictly speaking Vishik's Ansatz for the ``background solution'' is in fact different from our Ansatz (even taking into account the truncation at infinity). The interested reader might compare \eqref{e:tilde-v} and \eqref{e:curl-tilde-v} with \cite[(6.3)]{Vishik1}. Note in particular that the coordinates used in \cite{Vishik1} are not really \eqref{e:self-similar-scaling} but rather a more complicated variant. Moreover, Vishik's Ansatz contains a parameter $\varepsilon$, whose precise role is perhaps not initially transparent, and which is ultimately scaled away in~\cite[Chapter 9]{Vishik1}. This obscures that the whole approach hinges on finding a solution $\Omega$ of a truncated version of \eqref{eq:similarityvareulereqns}
asymptotic to the unstable manifold of the steady state $\bar \Omega$ at $-\infty$.
In our case, $\Omega$ is constructed by solving appropriate initial value problems for the truncated version of \eqref{eq:similarityvareulereqns} at negative times $-k$ and then taking their limit; this plays the role of Vishik's parameter~$\varepsilon$.
\end{enumerate}
We next list two more ways in which our notes deviate from \cite{Vishik1,Vishik2}. These differences are much more substantial.
\begin{enumerate}
\item[(3)] The crucial nonlinear estimate in the proof of Theorem \ref{thm:main} (cf. \eqref{e:asymptotic-in-t} and the more refined version \eqref{e:asymptotic-in-t-2}), which shows that the solution $\Omega$ is asymptotic, at minus infinity, to an unstable solution of the linearized equation, is proved in a rather different way. In particular our argument is completely Eulerian and based on energy estimates, while a portion of Vishik's proof relies in a crucial way on the Lagrangian formulation of the equation. The approach introduced here will be exploited by the first and third author in their forthcoming work~\cite{AC} and we believe it might be useful in other contexts.
\item[(4)] Another technical, but crucial, difference concerns the simplicity of the unstable eigenvalue $\lambda$. While Vishik claims such simplicity in \cite{Vishik2}, the argument given in the latter reference is actually incomplete. After we pointed out the gap to him, he provided a clever way to fill it in \cite{Vishik3}. These notes point out that such simplicity is not really needed in the nonlinear part of the analysis: in fact a much weaker linear analysis than the complete one carried out in \cite{Vishik2} is already enough to close the argument for Theorem \ref{thm:main}. However, for completeness and for the interested readers, we include in Appendix~\ref{a:better} the necessary additional arguments needed to conclude the more precise description of \cite{Vishik2}.
\end{enumerate}
\section{Further remarks}\label{s:final-remarks}
Recently, Bressan, Murray, and Shen investigated in \cite{BressanAposteriori,BressanSelfSimilar} a different non-uniqueness scenario for~\eqref{e:Euler} which would demonstrate sharpness of the Yudovich class without a force. The scenario therein, partially inspired by the works of Elling~\cite{EllingAlgebraicSpiral,EllingSelfSimilar}, is also based on self-similarity and symmetry breaking but follows a different route.
Self-similarity and symmetry breaking moreover play a central role in the work of Jia, {\v S}ver{\'a}k, and Guillod~\cite{JiaSverakInventiones,JiaSverakIllposed,guillod2017numerical} on the conjectural non-uniqueness of weak Leray-Hopf solutions of the Navier-Stokes equations. One crucial difficulty in~\cite{JiaSverakIllposed}, compared to Vishik's approach, is that the self-similar solutions in~\cite{JiaSverakIllposed} are far from explicit. Therefore, the spectral condition therein seems difficult to verify analytically, although it has been checked with non-rigorous numerics in~\cite{guillod2017numerical}. The work~\cite{JiaSverakIllposed} already contains a version of the unstable manifold approach, see p. 3759--3760, and a truncation to finite energy.
At present, the above two programs, while very intriguing and highly suggestive, require a significant numerical component not present in Vishik's approach. On the other hand, at present, Vishik's approach includes a forcing term absent from the above two programs, whose primary role is showcased by the fact that the initial data can be taken to be zero.
\bigskip
Much of the recent progress on non-uniqueness of the Euler equations has been driven by Onsager's conjecture, which was solved in~\cite{IsettOnsager}. With Theorem \ref{thm:main} in hand, we can now summarize the situation for the Euler equations in dimension three as follows:
\begin{itemize}[leftmargin=*]
\item $\alpha \in ]1,2[$: (\emph{Local well-posedness and energy conservation}) For each divergence-free $u_0 \in C^{\alpha}(\mathbb{T}^3)$ and force $f \in L^1(]0,T[;C^\alpha(\mathbb{T}^3))$, there exists $T' \in ]0,T[$ and a unique local-in-time solution $u \in L^\infty(]0,T'[;C^\alpha(\mathbb{T}^3))$. The solution $u$ depends continuously\footnote{The continuous dependence is more subtle for quasilinear equations than semilinear equations, and uniform continuity is not guaranteed in the regularity class in which the solutions are found, see the discussion in~\cite{taonotes}. One can see this at the level of the equation for the difference of two solutions $u^{(1)}$ and $u^{(2)}$: One of the solutions becomes the ``background" and, hence, loses a derivative. One way to recover the continuous dependence stated above is to compare the above two solutions with initial data $u_0^{(1)}$, $u_0^{(2)}$ and forcing terms $f^{(1)}$, $f^{(2)}$ to approximate solutions $u^{(1),\varepsilon}$, $u^{(2),\varepsilon}$ with mollified initial data $u_0^{(1),\varepsilon}$, $u_0^{(2),\varepsilon}$ and mollified forcing terms $f^{(1),\varepsilon}$, $f^{(2),\varepsilon}$. One then estimates $\| u^{(1)} - u^{(2)} \| \leq \| u^{(1)} - u^{(1),\varepsilon} \| + \| u^{(1),\varepsilon} - u^{(2),\varepsilon} \| + \| u^{(2),\varepsilon} - u^{(2)} \|$. The approximate solutions, which are more regular, are allowed to lose derivatives in a controlled way.} in the above class on its initial data and forcing term. Moreover, the solution $u$ conserves energy.
\item $1/3 < \alpha < 1$: (\emph{Non-uniqueness and energy conservation}) There exist $T > 0$, a force $f \in L^1(]0,T[;L^2 \cap C^\alpha(\ensuremath{\mathbb R}^2 \times \mathbb{T}))$, and two distinct weak solutions $u_1,u_2 \in L^\infty(]0,T[;L^2 \cap C^\alpha(\ensuremath{\mathbb R}^2 \times \mathbb{T}))$ to the Euler equations with zero initial data and force $f$. For any $T>0$, weak solutions $u \in L^\infty(]0,T[;L^2 \cap C^\alpha(\ensuremath{\mathbb R}^2 \times \mathbb{T}))$ with forcing in the above class conserve energy~\cite{constantinetiti}.
\item $0 < \alpha < 1/3$: (\emph{Non-uniqueness and anomalous dissipation}) There exist $T>0$ and two distinct admissible weak solutions (see~\cite{OnsagerAdmissible}) $u_1,u_2 \in L^\infty(]0,T[;C^\alpha(\mathbb{T}^3))$ to the Euler equations with the same initial data and zero force and which moreover dissipate energy.
\end{itemize}
While we are not aware of the first two statements with force in the literature, the proofs are easy adaptations of those with zero force. In order to obtain the non-uniqueness statement in the region $1/3 < \alpha < 1$, one can extend the non-unique solutions on $\ensuremath{\mathbb R}^2$ to be constant in the $x_3$ direction.
The borderline cases may be sensitive to the function spaces in question. For example, the three-dimensional Euler equations are ill-posed in $C^k$, $k \geq 1$~\cite{bougainillposed}. Furthermore, of the above statements, only the negative direction of Onsager's conjecture is open in $n=2$.
We finally point out that an expanded version of these notes is contained in the master's thesis of the sixth author, cf. \cite{Maximilian-thesis}.
\end{equation}
\item \emph{Scaling symmetry}. The Euler equations possess a two-parameter scaling symmetry: If $(\omega,v)$ is a solution of~\eqref{e:Euler} with vorticity forcing $f$, and $\lambda, \mu > 0$, then
\begin{equation}
\omega_{\lambda,\mu}(x,t) = \mu \omega(\lambda x, \mu t), \quad v_{\lambda,\mu}(x,t) = \frac{\mu}{\lambda} v(\lambda x, \mu t),
\end{equation}
define a solution with vorticity forcing
\begin{equation}
f_{\lambda,\mu}(x,t) = \mu^2 f(\lambda x, \mu t).
\end{equation}
The scaling symmetry corresponds to the physical dimensions
\begin{equation}
[x] = L,\quad [t] = T, \quad [v] = \frac{L}{T}, \quad [\omega] = \frac{1}{T}, \quad \text{ and } \quad [f] = \frac{1}{T^2}.
\end{equation}
\end{enumerate}
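The scaling symmetry can be verified by direct substitution. Writing the vorticity formulation of \eqref{e:Euler} as $\partial_t \omega + v\cdot \nabla \omega = f$, each term scales with the factor $\mu^2$:
\begin{equation*}
\partial_t \omega_{\lambda, \mu} (x,t) = \mu^2 (\partial_t \omega) (\lambda x, \mu t)\, , \qquad
(v_{\lambda,\mu}\cdot \nabla \omega_{\lambda,\mu}) (x,t) = \frac{\mu}{\lambda}\, \mu \lambda\, (v\cdot \nabla \omega) (\lambda x, \mu t) = \mu^2 (v\cdot\nabla \omega) (\lambda x, \mu t)\, ,
\end{equation*}
while the compatibility $v_{\lambda, \mu} = K_2 * \omega_{\lambda, \mu}$ follows from the $(-1)$-homogeneity of the Biot-Savart kernel \eqref{e:Biot-Savart} after the change of variables $y\mapsto \lambda y$ in the convolution.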
We now elaborate on the above two features:
\smallskip
\emph{1. Unstable vortices}. The stability analysis of shear flows $u = (b(y),0)$ and vortices~\eqref{eq:vorticesdef} is classical, with seminal contributions due to Rayleigh~\cite{Rayleigh1879}, Kelvin~\cite{Thomson1880}, Orr~\cite{Orr}, and many others. The linearized Euler equations around the background vortex $\bar{\omega}$ are
\begin{equation}
\label{eq:linearizedeulerintro}
\partial_t \omega - L_{\rm st} \omega :=
\partial_t \omega + \zeta(r) \partial_\theta \omega + (v \cdot e_r) g'(r) = 0, \quad v = K_2 \ast \omega.
\end{equation}
Consider the eigenvalue problem associated to the linearized operator $L_{\rm st}$. It suffices to consider $\psi = e^{ik\theta} \psi_k(|x|)$, $k \geq 0$, the stream function associated to a vorticity perturbation $\omega$ (that is, $\Delta \psi = \omega$). It is convenient to pass to an exponential variable $s = \log r$ and define $\phi(s) = \psi_k(e^s)$; $A(s) = e^s g'(e^s)$ (that is, $r$ times the radial derivative of the background vorticity); and $\Xi(s) = \zeta(e^s)$ (the differential rotation). The eigenvalue problem for $L_{\rm st}$, with eigenvalue $\lambda = -ikz$, can be rewritten as
\begin{equation}
\label{eq:theeigenvaluequation}
\left( \Xi(s) - z \right) \left( \frac{d^2}{ds^2}- k^2 \right) \phi - A(s) \phi = 0.
\end{equation}
This is \emph{Rayleigh's stability equation}. The eigenvalue $\lambda$ is unstable when ${\rm Im}(z) > 0$, in which case we can divide by $\Xi - z$ and analyze a steady Schr{\"o}dinger equation. It is possible to understand~\eqref{eq:theeigenvaluequation} well enough to design vortices for which the corresponding linear operator has an unstable eigenfunction. For shear flows, this analysis goes back to Tollmien~\cite{Tollmien}. The problem was treated rigorously by Z.~Lin~\cite{LinSIMA2003} for bounded and unbounded shear flows and rotating flows in an annulus.\footnote{For those interested in hydrodynamic stability more generally, see the classic monograph~\cite{DrazinReid}. Chapter~4 therein concerns the stability of shear flows, including Rayleigh's criterion and a sketch of Tollmien's idea.}
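We sketch how \eqref{eq:theeigenvaluequation} arises (with the sign conventions $\Delta \psi = \omega$ and $v = \nabla^\perp \psi$, so that $v\cdot e_r = -\frac{1}{r}\partial_\theta \psi$). Inserting $\omega = e^{ik\theta}\omega_k(r)$ and $\psi = e^{ik\theta}\psi_k (r)$ into the eigenvalue problem $L_{\rm st}\, \omega = \lambda \omega$ with $\lambda = -ikz$ gives
\begin{equation*}
ik \left[ (\zeta(r) - z)\, \omega_k - \frac{g'(r)}{r}\, \psi_k\right] = 0\, , \qquad \omega_k = \psi_k'' + \frac{1}{r}\psi_k' - \frac{k^2}{r^2}\psi_k\, .
\end{equation*}
In the variable $s = \log r$ one has $\phi''(s) = r^2 \psi_k'' + r \psi_k'$, so that $\omega_k = r^{-2}(\phi'' - k^2\phi)$; multiplying the bracket by $r^2$ and recalling $A(s) = e^s g'(e^s)$ and $\Xi(s) = \zeta(e^s)$ yields precisely \eqref{eq:theeigenvaluequation}.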
The case of unbounded vortices, which is the crucial one for the purposes of these notes, was treated by Vishik in~\cite{Vishik2}, see Chapter~\ref{chapter:linearpartii} below. In the cases relevant to these notes, $L_{\rm st}$ has at least one unstable eigenvalue $\lambda$. While the latter could well be real, for the sake of our argument let us assume that it is a complex number $\lambda= a_0 + b_0 i$ ($a_0, b_0 > 0$) and let $\bar\lambda = a_0- b_0 i$ be its complex conjugate. If we denote by $\eta$ and $\bar{\eta}$ two corresponding (nontrivial) eigenfunctions, it can be checked that they are {\em not} radially symmetric.
With the unstable modes in hand, one may seek a trajectory on the \emph{unstable manifold} associated to $\lambda$ and $\bar{\lambda}$. For example, one such trajectory may look like
\begin{equation}
\omega = \bar{\omega} + \omega^{\rm lin} + o(e^{a_0 t}),
\end{equation}
where $\omega^{\rm lin} = {\rm Re}(e^{\lambda t} \eta)$ is a solution of the linearized Euler equations~\eqref{eq:linearizedeulerintro}. These solutions converge to $\bar{\omega}$ exponentially in backward time.
Hence, we expect that certain unstable vortices exhibit a kind of \emph{non-uniqueness at time $t = -\infty$} and moreover break the radial symmetry. The existence of unstable manifolds associated to a general class of Euler flows in dimension $n \geq 2$ was demonstrated by Lin and Zeng~\cite{LinZengCPAM2013,LinZengCorrigendum2014}.\footnote{There is a substantial mathematical literature on the nonlinear instability of Euler flows, see~\cite{friedlanderstraussvishikearly,FriedlanderHoward,friedlanderstraussvishik,bardosguostrauss,friedlandervishik,linnonlinear}.}
\smallskip
\emph{2. Self-similar solutions}. It is natural to consider solutions invariant under the scaling symmetry and, in particular, it is natural to consider those self-similar solutions which live exactly at the desired integrability. If we fix a relationship $L^\alpha \sim T$ in the scaling symmetries, the similarity variables are\footnote{We may regard the logarithmic time as $\tau = \log (t/t_0)$, so that $t$ is non-dimensionalized according to a fixed reference time $t_0 = 1$.}
\begin{equation}\label{e:self-similar-scaling}
\xi = \frac{x}{t^{\frac{1}{\alpha}}}, \quad \tau = \log t
\end{equation}
\begin{equation}
v(x,t) = \frac{1}{t^{1-\frac{1}{\alpha}}} V(\xi, \tau), \quad \omega(x,t) = \frac{1}{t} \Omega(\xi, \tau).
\end{equation}
Notice that physical time $t=0$ corresponds to logarithmic time $\tau = -\infty$. The function $\Omega$ is known as the \emph{profile}.
The Euler equations, without force, in similarity variables are
\begin{equation}
\label{eq:similarityvareulereqns}
\left\{\begin{array}{ll}
&\partial_\tau \Omega - \left( 1 + \frac{\xi}{\alpha} \cdot \nabla_\xi \right) \Omega + V \cdot \nabla_\xi \Omega = 0\\ \\
& V= K_2* \Omega \, .
\end{array}\right.
\end{equation}
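For the reader's convenience we sketch the derivation of the first equation in \eqref{eq:similarityvareulereqns}. With $\xi = x t^{-1/\alpha}$, $\tau = \log t$ and the Ansatz above, the chain rule gives
\begin{equation*}
\partial_t \omega = \frac{1}{t^2}\left( \partial_\tau \Omega - \Omega - \frac{\xi}{\alpha}\cdot \nabla_\xi \Omega\right)\, , \qquad
v\cdot \nabla_x \omega = \frac{1}{t^{1-\frac{1}{\alpha}}}\cdot \frac{1}{t}\cdot \frac{1}{t^{\frac{1}{\alpha}}}\; V\cdot \nabla_\xi \Omega = \frac{1}{t^2}\, V \cdot \nabla_\xi \Omega\, ,
\end{equation*}
so that multiplying the unforced vorticity equation $\partial_t \omega + v\cdot\nabla \omega = 0$ by $t^2$ yields the profile equation.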
Profiles $\Omega$ satisfying $\| \Omega(\cdot,\tau) \|_{L^p} = O(1)$ as $\tau \to -\infty$ satisfy $\| \omega(\cdot,t) \|_{L^p} = O(t^{-1+\frac{2}{\alpha p}})$ as $t \to 0^+$, and similarly in the weak $L^p$ norms. Hence, the Lebesgue and weak Lebesgue norms with $p = 2/\alpha$ are $O(1)$ in either set of variables. To show sharpness of the Yudovich class, we consider $0 < \alpha \ll 1$.
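The norm bookkeeping is the change of variables $x = t^{1/\alpha}\xi$, for which $\mathrm dx = t^{2/\alpha}\, \mathrm d\xi$ in $\ensuremath{\mathbb R}^2$:
\begin{equation*}
\|\omega (\cdot, t)\|_{L^p}^p = \frac{1}{t^p} \int_{\ensuremath{\mathbb R}^2} |\Omega (x\, t^{-\frac{1}{\alpha}}, \tau)|^p\, \mathrm dx = t^{\frac{2}{\alpha} - p}\, \|\Omega (\cdot, \tau)\|_{L^p}^p\, ,
\end{equation*}
i.e. $\|\omega(\cdot,t)\|_{L^p} = t^{-1+\frac{2}{\alpha p}} \|\Omega(\cdot,\tau)\|_{L^p}$, which is $O(1)$ as $t\to 0^+$ exactly when $p = \frac{2}{\alpha}$.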
\bigskip
The route to non-uniqueness through unstable vortices and self-similar solutions is as follows: Suppose that $\bar{\Omega}$ is an unstable steady state of the similarity variable Euler equations~\eqref{eq:similarityvareulereqns} (in particular, $\bar{\omega}(x,t) = t^{-1} \bar{\Omega}(\xi)$ is a self-similar solution of the usual Euler equations). Find a trajectory $\Omega$ on the unstable manifold associated to $\bar{\Omega}$. In similarity variables, the steady state $\bar{\Omega}$ will be ``non-unique at minus infinity", which corresponds to non-uniqueness at time $t=0$ in the physical variables.
One natural class of background profiles $\bar{\Omega}$ consists of \emph{power-law vortices} $\bar{\omega} = \beta |x|^{-\alpha}$, $\beta \in \ensuremath{\mathbb R}$, which are simultaneously steady solutions and self-similar solutions without force. At present, we do not know whether the above strategy can be implemented with power-law vortices.
Instead, we choose a smooth vortex profile $g(|x|)$, with power-law decay as $|x| \to +\infty$, which is unstable for the Euler dynamics. Our background will be the self-similar solution with profile $\bar{\Omega} = g(|\xi|)$, which solves the Euler equations \emph{with a self-similar force}. This profile may be considered a well-designed smoothing of a power-law vortex. When the background is large, it is reasonable to expect that the additional term in the similarity variable Euler equations~\eqref{eq:similarityvareulereqns} can be treated perturbatively, so that $g(|\xi|)$ will also be unstable for the similarity variable Euler dynamics. This heuristic is justified in Chapter~\ref{chapter:linearparti}.
In order to ensure that the solutions have finite energy, we also truncate the background velocity at distance $O(1)$ in physical space. This generates a different force. The truncation's contribution to the force is smooth and heuristically does not destroy the non-uniqueness, which can be thought of as ``emerging" from the singularity at the space-time origin. Our precise Ansatz is~\eqref{e:Ansatz-1}, which is the heart of the nonlinear part of these notes.
\section{Differences with Vishik's work}
While we follow the strategy of Vishik in~\cite{Vishik1,Vishik2}, we deviate from his proof in some ways. We start by listing two changes which, although rather minor, affect the presentation substantially.
\begin{enumerate}
\item We decouple the parameter $\alpha$ in~\eqref{e:self-similar-scaling} governing the self-similar scaling from the decay rate $\bar{\alpha}$ of the smooth profile $g$ at infinity. In \cite{Vishik1} these two parameters are equal; however, it is rather obvious that the argument goes through as long as $\alpha \leq \bar \alpha$. If we then choose $\alpha < \bar \alpha$ the resulting solution has zero initial data. This is a very minor remark, but it showcases the primary role played by the forcing $f$ in the equation.
\item Strictly speaking, Vishik's Ansatz for the ``background solution'' is in fact different from our Ansatz (even taking into account the truncation at infinity). The interested reader might compare \eqref{e:tilde-v} and \eqref{e:curl-tilde-v} with \cite[(6.3)]{Vishik1}. Note in particular that the coordinates used in \cite{Vishik1} are not really \eqref{e:self-similar-scaling} but rather a more complicated variant. Moreover, Vishik's Ansatz contains a parameter $\varepsilon$, whose precise role is perhaps not initially transparent, and which is ultimately scaled away in~\cite[Chapter 9]{Vishik1}. This obscures the fact that the whole approach hinges on finding a solution $\Omega$ of a truncated version of \eqref{eq:similarityvareulereqns}
asymptotic to the unstable manifold of the steady state $\bar \Omega$ at $-\infty$.
In our case, $\Omega$ is constructed by solving appropriate initial value problems for the truncated version of \eqref{eq:similarityvareulereqns} at negative times $-k$ and then taking their limit; this plays the role of Vishik's parameter~$\varepsilon$.
\end{enumerate}
We next list two more ways in which our notes deviate from \cite{Vishik1,Vishik2}. These differences are much more substantial.
\begin{enumerate}
\item[(3)] The crucial nonlinear estimate in the proof of Theorem \ref{thm:main} (cf. \eqref{e:asymptotic-in-t} and the more refined version \eqref{e:asymptotic-in-t-2}), which shows that the solution $\Omega$ is asymptotic, at minus infinity, to an unstable solution of the linearized equation, is proved in a rather different way. In particular our argument is completely Eulerian and based on energy estimates, while a portion of Vishik's proof relies in a crucial way on the Lagrangian formulation of the equation. The approach introduced here will be exploited by the first and third author in their forthcoming work~\cite{AC} and we believe it might be useful in other contexts.
\item[(4)] Another technical, but crucial, difference concerns the simplicity of the unstable eigenvalue $\lambda$. While Vishik claims such simplicity in \cite{Vishik2}, {the argument given in the latter reference is actually incomplete. After we pointed out the gap to him, he provided a clever way to fill it in \cite{Vishik3}}. These notes point out that such simplicity is not really needed in the nonlinear part of the analysis: in fact a much weaker linear analysis than the complete one carried out in \cite{Vishik2} {is already enough to close the argument for Theorem \ref{thm:main}. However, for completeness and for the interested readers, we include in Appendix~\ref{a:better} the necessary additional arguments needed to conclude the more precise description of \cite{Vishik2}.}
\end{enumerate}
\section{Further remarks}\label{s:final-remarks}
Recently, Bressan, Murray, and Shen investigated in \cite{BressanAposteriori,BressanSelfSimilar} a different non-uniqueness scenario for~\eqref{e:Euler} which would demonstrate sharpness of the Yudovich class without a force. The scenario therein, partially inspired by the works of Elling~\cite{EllingAlgebraicSpiral,EllingSelfSimilar}, is also based on self-similarity and symmetry breaking but follows a different route.
Self-similarity and symmetry breaking moreover play a central role in the work of Jia, {\v S}ver{\'a}k, and Guillod~\cite{JiaSverakInventiones,JiaSverakIllposed,guillod2017numerical} on the conjectural non-uniqueness of weak Leray-Hopf solutions of the Navier-Stokes equations. One crucial difficulty in~\cite{JiaSverakIllposed}, compared to Vishik's approach, is that the self-similar solutions in~\cite{JiaSverakIllposed} are far from explicit. Therefore, the spectral condition therein seems difficult to verify analytically, although it has been checked with non-rigorous numerics in~\cite{guillod2017numerical}. The work~\cite{JiaSverakIllposed} already contains a version of the unstable manifold approach, see pp.~3759--3760, and a truncation to finite energy.
At present, the above two programs, while very intriguing and highly suggestive, require a significant numerical component not present in Vishik's approach. On the other hand, at present, Vishik's approach includes a forcing term absent from the above two programs, whose primary role is showcased by the fact that the initial data can be taken to be zero.
\bigskip
Much of the recent progress on non-uniqueness of the Euler equations has been driven by Onsager's conjecture, which was solved in~\cite{IsettOnsager}. With Theorem \ref{thm:main} in hand, we can now summarize the situation for the Euler equations {in dimension three as follows:}
\begin{itemize}[leftmargin=*]
\item $\alpha \in (1,2)$: (\emph{Local well-posedness and energy conservation}) For each divergence-free $u_0 \in C^{\alpha}{(\mathbb{T}^3)}$ and force $f \in L^1(]0,T[;C^\alpha{(\mathbb{T}^3)})$, there exists $T' \in ]0,T[$ and a unique local-in-time solution $u \in L^\infty(]0,T'[;C^\alpha{(\mathbb{T}^3)})$. The solution $u$ depends continuously\footnote{The continuous dependence is more subtle for quasilinear equations than semilinear equations, and uniform continuity is not guaranteed in the regularity class in which the solutions are found, see the discussion in~\cite{taonotes}. One can see this at the level of the equation for the difference of two solutions $u^{(1)}$ and $u^{(2)}$: One of the solutions becomes the ``background" and, hence, loses a derivative. One way to recover the continuous dependence stated above is to compare the above two solutions with initial data $u_0^{(1)}$, $u_0^{(2)}$ and forcing terms $f^{(1)}$, $f^{(2)}$ to approximate solutions $u^{(1),\varepsilon}$, $u^{(2),\varepsilon}$ with mollified initial data $u_0^{(1),\varepsilon}$, $u_0^{(2),\varepsilon}$ and mollified forcing terms $f^{(1),\varepsilon}$, $f^{(2),\varepsilon}$. One then estimates $\| u^{(1)} - u^{(2)} \| \leq \| u^{(1)} - u^{(1),\varepsilon} \| + \| u^{(1),\varepsilon} - u^{(2),\varepsilon} \| + \| u^{(2),\varepsilon} - u^{(2)} \|$. The approximate solutions, which are more regular, are allowed to lose derivatives in a controlled way.} in the above class on its initial data and forcing term. Moreover, the solution $u$ conserves energy.
\item $1/3 < \alpha < 1$: (\emph{Non-uniqueness and energy conservation}) There exist $T > 0$, a force $f \in L^1(]0,T[;{L^2 \cap C^\alpha(\ensuremath{\mathbb R}^2 \times \mathbb{T})})$, and two distinct weak solutions {$u_1,u_2 \in L^\infty(]0,T[;L^2 \cap C^\alpha(\ensuremath{\mathbb R}^2 \times \mathbb{T}))$} to the Euler equations with zero initial data and force $f$. For any $T>0$, weak solutions $u \in L^\infty(]0,T[;{L^2 \cap C^\alpha(\ensuremath{\mathbb R}^2 \times \mathbb{T})})$ with forcing in the above class conserve energy~\cite{constantinetiti}.
\item $0 < \alpha < 1/3$: (\emph{Non-uniqueness and anomalous dissipation}) There exist $T>0$ and two distinct admissible weak solutions (see~\cite{OnsagerAdmissible}) $u_1,u_2 \in L^\infty(]0,T[;C^\alpha{(\mathbb{T}^3)})$ to the Euler equations with the same initial data and zero force and which moreover dissipate energy.
\end{itemize}
While we are not aware of the first two statements with force in the literature, the proofs are easy adaptations of those with zero force. In order to obtain the non-uniqueness statement in the region $1/3 < \alpha < 1$, one can {extend the non-unique solutions on $\ensuremath{\mathbb R}^2$ to be constant in the $x_3$ direction.}
The borderline cases may be sensitive to the function spaces in question. For example, the three-dimensional Euler equations are ill-posed in $C^k$, $k \geq 1$~\cite{bougainillposed}. Furthermore, of the above statements, only the negative direction of Onsager's conjecture is open in $n=2$.
We finally point out that an expanded version of these notes is contained in the master's thesis of the sixth author, cf. \cite{Maximilian-thesis}.
\chapter{Linear theory: Part I}
\label{chapter:linearparti}
In this chapter, we will reduce Theorem \ref{thm:spectral} to an analogous spectral result for another differential operator, and we will also show an important corollary of Theorem \ref{thm:spectral} concerning the semigroup that it generates. We start by giving the two relevant statements, but we will need first to introduce some notation and terminology.
First of all, in the rest of the chapter we will always assume that the positive integer $m$ is at least $2$. We then introduce a new (closed and densely defined) operator on $L^2_m$, which we will denote by $L_{\text{st}}$. The operator is defined by\index{aalLst@$L_{\text{st}}$}
\begin{equation}
L_{\text{st}} (\Omega) = - {\rm div}\, (\bar V \Omega) - (K_2*\Omega \cdot \nabla) \bar \Omega\,
\end{equation}
and (recalling that the operator $\Omega\mapsto (K_2* \Omega\cdot \nabla) \bar \Omega$ is bounded and compact, as will be shown below) its domain in $L^2_m$ is given by
\begin{equation}
D_m (L_{\text{st}}) = \{\Omega\in L^2_m : {\rm div}\, (\bar V \Omega)\in L^2\}\, .
\end{equation}
The key underlying idea behind the introduction of $L_{\text{st}}$ is that we can write $L_{\text{ss}}$ as
\[
L_{\text{ss}} = \left(1+{\textstyle{\frac{\xi}{\alpha}}}\cdot \nabla\right) + \beta L_{\text{st}} \,
\]
and since $\beta$ will be chosen very large, we will basically study the spectrum of $L_{\text{ss}}$ as a perturbation of the spectrum of $\beta L_{\text{st}}$. In particular Theorem \ref{thm:spectral} will be derived from a more precise spectral analysis of $L_{\text{st}}$. Before coming to it, we split the space $L^2_m$ into an appropriate infinite sum of closed orthogonal subspaces.
First of all, if we fix an element $\vartheta\in L^2 (\mathbb R^2)$ and we introduce the polar coordinates $(\theta, r)$ through $x= r (\cos \theta , \sin \theta)$, we can then use the Fourier expansion to write
\begin{equation}\label{e:Fourier}
\vartheta (x) =\sum_{k\in \mathbb Z} a_k (r) e^{ik\theta}\,
\end{equation}
where
\[
a_k (r) := \frac{1}{2\pi} \int_0^{2\pi} \vartheta(r \cos(\theta),r\sin(\theta)) e^{-ik\theta}\,\mathrm d\theta .
\]
By Plancherel's formula,
\[
\|\vartheta\|_{L^2 (\ensuremath{\mathbb R}^2)}^2 = 2\pi \sum_{k\in \mathbb Z} \|a_k\|^2_{L^2 (\ensuremath{\mathbb R}^+, r\,\mathrm dr)}\, .
\]
In particular it will be convenient to introduce the subspaces\index{aalUk@$U_k$}
\begin{equation}
U_k :=\{f(r) e^{ik\theta} : f \in L^2 (\mathbb R^+, r\,\mathrm dr)\}\, .
\end{equation}
Each $U_k$ is a closed subspace of $L^2$, distinct $U_k$'s are orthogonal to each other and moreover
\begin{equation}\label{e:Fourier-2}
L^2_m = \bigoplus_{k\in \mathbb Z} U_{km}\, .
\end{equation}
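Indeed, recalling that $L^2_m$ consists of the $L^2$ functions invariant under the rotation by the angle $\frac{2\pi}{m}$, a function $\vartheta$ as in \eqref{e:Fourier} belongs to $L^2_m$ if and only if each Fourier mode does: since
\begin{equation*}
\vartheta \big(r\cos (\theta + {\textstyle{\frac{2\pi}{m}}}), r \sin (\theta + {\textstyle{\frac{2\pi}{m}}})\big) = \sum_{k\in \mathbb Z} a_k (r)\, e^{2\pi i k/m}\, e^{ik\theta}\, ,
\end{equation*}
invariance forces $a_k = 0$ unless $e^{2\pi i k/m} = 1$, i.e. unless $m$ divides $k$, which is exactly \eqref{e:Fourier-2}.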
Each $U_{km}$ is an invariant space of $L_{\text{st}}$, and it can be easily checked that $U_{km}\subset D_m (L_{\text{st}})$ and that indeed the restriction of $L_{\text{st}}$ to $U_{km}$ is a bounded operator. Following the same convention as for $L_{\text{ss}}$ we will denote by ${\rm spec}_m\, (L_{\text{st}})$ the spectrum of $L_{\text{st}}$ on $L^2_m$.
\begin{theorem}\label{thm:spectral2}\label{THM:SPECTRAL2}
For every $m\geq 2$ and every $\bar\Omega$
we have
\begin{itemize}
\item[(a)] each $z_i\in {\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re} \,z\, \neq 0\}$ belongs to the discrete spectrum and if ${\rm Im}\, (z_i)=0$, then there is a nontrivial real eigenfunction relative to $z_i$.
\end{itemize}
Moreover, for an appropriate choice of $\bar\Omega$ there is an integer $m\geq 2$ such that:
\begin{itemize}
\item[(b)] ${\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re}\, z >0\}$ is nonempty.
\end{itemize}
\end{theorem}
\begin{remark}\label{r:better}\label{R:BETTER}
The theorem stated above contains the minimal amount of information that we need to complete the proof of Theorem \ref{thm:main2}. We can however infer some additional conclusions with more work; more precisely, we can show that
\begin{itemize}
\item[(c)] $m$ can be chosen so that, in addition to (b), ${\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re}\, z >0\}$ is finite and the image of the Riesz projector\footnote{Recall that in the case of an isolated point $z$ in the spectrum of a closed, densely defined operator $A$, the Riesz projector is defined as
\[
\frac{1}{2\pi i} \int_\gamma (w -A)^{-1}\, dw
\]
for any simple closed rectifiable contour $\gamma$ bounding a closed disk $D$ with $D \cap {\rm spec}\, (A) = \{z\}$. For an element of the discrete spectrum the Riesz projector has finite rank (the algebraic multiplicity of the eigenvalue $z$).}\index{Riesz projector}\index{aalPz@$P_z$}
$P_{z}$ of $L_{\text{st}}$ relative to each $z\in {\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re}\, z >0\}$ is contained in $U_m\cup U_{-m}$.
\end{itemize}
Since this property is not needed to prove Theorem~\ref{thm:main2} we defer its proof to Appendix~\ref{a:better}.
\end{remark}
{\color{red} In \cite{Vishik2} Vishik claims the following greatly improved statement.
\begin{theorem}\label{thm:spectral-stronger}\label{THM:SPECTRAL-STRONGER}
For a suitable $\bar \Omega$:
\begin{itemize}
\item[(c')] $m$ can be chosen so that, in addition to (b) and (c), ${\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re}\, z >0\}\cap U_m$ consists of a single element, with algebraic multiplicity $1$ in $U_m$.
\end{itemize}
\end{theorem}
Since the spectrum of $L_{\text{st}}$ is invariant under complex conjugation, (b), (c), and (c') imply that ${\rm spec}_m\, (L_{\text{st}})\cap \{{\rm Re}\, z>0\}$ consists either of a single real eigenvalue or of two complex conjugate eigenvalues. In the first case, the algebraic and geometric multiplicity of the eigenvalue is $2$ and the space of eigenfunctions has a basis consisting of an element of $U_m$ and its complex conjugate in $U_{-m}$. In the second case the two eigenvalues $z$ and $\bar z$ have algebraic multiplicity $1$ and their eigenspaces are generated, respectively, by an element of $U_m$ and its complex conjugate in $U_{-m}$.
The argument given in \cite{Vishik2} for (c') is however not complete. Vishik provided later (\cite{Vishik3}) a way to close the gap. In Appendix~\ref{a:better} we will give a proof of Theorem \ref{thm:spectral-stronger} along his lines.}
\medskip
In this chapter we also derive an important consequence of Theorem \ref{thm:spectral} for the semigroup generated by $L_{\text{ss}}$.\index{Semigroup}
\begin{theorem}\label{t:group}\label{T:GROUP}
For every $m\geq 2$, $L_{\text{ss}}$ is the generator of a strongly continuous semigroup on $L^2_m$ which will be denoted by $e^{\tau L_{\text{ss}}}$\index{aalEtauLss@$e^{\tau L_{\text{ss}}}$}, and the growth bound $\omega (L_{\text{ss}})$ of $e^{\tau L_{\text{ss}}}$ equals\index{Semigroup, growth bound}
\[
a_0 := \sup \{{\rm Re}\, z_0 : z_0\in {\rm spec}_m (L_{\text{ss}})\}\, <\infty\,
\]
if $a_0 \geq 1-\frac{1}{\alpha}$.
In other words, for every $\delta>0$, there is a constant $M (\delta)$ with the property that
\begin{equation}\label{e:growth-bound}
\left\|e^{\tau L_{\text{ss}}} \Omega\right\|_{L^2} \leq M (\delta) e^{(a_0 +\delta) \tau} \|\Omega\|_{L^2}
\qquad \qquad \forall \tau\geq 0,\, \forall \Omega\in L^2_m\, .
\end{equation}
\end{theorem}
\section{Preliminaries}\label{s:abstract-operators}
In this section we start with some preliminaries which will take advantage of several structural properties of the operators $L_{\text{ss}}$ and $L_{\text{st}}$. First of all we decompose $L_{\text{st}}$ as
\begin{equation}\label{e:decompo_L_st}
L_{\text{st}} = S_1 + \mathscr{K}\, ,
\end{equation}\index{aalK@$\mathscr{K}$}\index{aalS1@$S_1$}where
\begin{align}
S_1 (\Omega)&:= - {\rm div}\, (\bar V \Omega)\label{e:S1}\\
\mathscr{K} (\Omega) &:= - (K_2*\Omega \cdot \nabla) \bar \Omega \label{e:compatto}\, .
\end{align}
Hence we introduce the operator\index{aalS2@$S_2$}
\begin{equation}
S_2 (\Omega) := {\rm div} \left(\left(\frac{\xi}{\alpha} - \beta \bar V\right) \Omega\right) - \frac{\Omega}{\alpha}\, ,
\end{equation}
so that we can decompose $L_{\text{ss}}$ as
\begin{equation}
L_{\text{ss}} = \left(1-\frac{1}{\alpha}\right) + S_2 + \beta \mathscr{K}\, .
\end{equation}
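This is consistent with the decomposition of $L_{\text{ss}}$ given at the beginning of the chapter: using ${\rm div}\, (\xi\, \Omega) = 2\Omega + \xi\cdot \nabla \Omega$ in $\ensuremath{\mathbb R}^2$, a direct computation gives
\begin{equation*}
\left(1 - \frac{1}{\alpha}\right)\Omega + S_2 (\Omega) + \beta \mathscr{K} (\Omega)
= \Omega + \frac{\xi}{\alpha}\cdot \nabla \Omega + \beta \big( - {\rm div}\, (\bar V \Omega) - (K_2 * \Omega\cdot\nabla)\bar\Omega\big)
= \left( 1 + \frac{\xi}{\alpha}\cdot \nabla\right)\Omega + \beta L_{\text{st}} (\Omega)\, ,
\end{equation*}
since the zeroth-order coefficients combine as $1 - \frac{1}{\alpha} + \frac{2}{\alpha} - \frac{1}{\alpha} = 1$.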
The domains of the various operators $A$ involved are always understood as $D_m (A):= \{\Omega : A(\Omega)\in L^2\}$.
Finally, we introduce the real Hilbert spaces $L^2_m (\mathbb R)$ and $U_j (\mathbb R)$ by setting\index{aalL2mR@$L^2_m(\ensuremath{\mathbb R})$}\index{aalUkR@$U_k(\ensuremath{\mathbb R})$}
\begin{align}
L^2_m (\mathbb R) &:= \{{\rm Re}\, \Omega : \Omega \in L^2_m\}\,
\end{align}
and, for $j>0$ natural,
\begin{equation}
U_j (\mathbb R) :=\{{\rm Re}\, \Omega: \Omega \in U_j\}\, .
\end{equation}
Observe that while clearly $L^2_m (\mathbb R)$ is a real subspace of $L^2_m$, $U_j (\mathbb R)$ is a real subspace of $U_j \oplus U_{-j}$.
As is customary, $L^2_m (\mathbb R)$ and its real vector subspaces are endowed with the inner product\index{Rotationally symmetric function space, inner product}
\begin{equation}
\langle \Omega, \Xi\rangle_{\mathbb R} = \int \Omega\, \Xi\, ,
\end{equation}
while $L^2_m$ and its complex vector subspaces are endowed with the Hermitian product
\begin{equation}
\langle \Omega, \Xi\rangle_{\mathbb C} = \int ({\rm Re}\, \Omega\, {\rm Re}\, \Xi +
{\rm Im}\, \Omega\, {\rm Im}\, \Xi) + i \int ({\rm Im}\, \Omega\, {\rm Re}\, \Xi - {\rm Re} \, \Omega\, {\rm Im}\, \Xi)\, .
\end{equation}
We will omit the subscripts from $\langle \cdot, \cdot \rangle$ when the underlying field is clear from the context. The following proposition details the important structural properties of the various operators. A closed unbounded operator $A$ on $L^2_m$ will be called \emph{real} if its restriction $A_{\mathbb R}$ to $L^2_m (\mathbb R)$ is a closed, densely defined operator with domain $D_m (A) \cap L^2_m (\mathbb R)$ such that $A(\Omega)\in L^2_m(\mathbb R)$ for all $\Omega\in D_m(A)\cap L^2_m(\mathbb R)$.
\begin{proposition}\label{p:abstract}
\begin{itemize}
\item[(i)] The operators $\mathscr{K}$, $S_1$ and $S_2$ are all real operators.
\item[(ii)] $\mathscr{K}$ is bounded and compact. More precisely there is a sequence of finite dimensional vector spaces $V_n \subset C^\infty_c (\mathbb R^2,\mathbb C)\cap L^2_m$ with the property that, if $P_n$ denotes the orthogonal projection onto $V_n$, then
\begin{equation}\label{e:explicit-approx}
\lim_{n\to\infty} \|\mathscr{K} - P_n\circ \mathscr{K}\|_O = 0\, ,
\end{equation}
where $\|\cdot\|_O$ denotes the operator norm.
\item[(iii)] $S_1$ and $S_2$ are skew-adjoint.
\item[(iv)] $D_m (L_{\text{st}}) = D_m (S_1)$ and $D_m (L_{\text{ss}}) = D_m (S_2)$.
\item[(v)] $U_{km}$ is an invariant subspace of $S_1, S_2, \mathscr{K}, L_{\text{st}}, L_{\text{ss}}$.
\item[(vi)] The restrictions of $S_1$ and $L_{\text{st}}$ to each $U_{km}$ are bounded operators.
\end{itemize}
\end{proposition}
\begin{proof} The verifications of (i), (iii), (iv), (v), and (vi) are all simple and are therefore left to the reader. We thus come to (ii) and prove the compactness of the operator $\mathscr{K}$. Recalling Lemma \ref{l:extension}, for every $\Omega \in L^2_m$ we can write the tempered distribution $\mathscr{K} (\Omega)$ as
\begin{equation}
\mathscr{K} (\Omega) = \nabla \bar \Omega \cdot V
\end{equation}
where $V=V(\Omega)$ is a $W^{1,2}_{\text{loc}}$ function with the properties that
\begin{equation}\label{e:stima-W12}
R^{-1} \|V \|_{L^2 (B_R)} + \|DV\|_{L^2 (B_R)} \leq C \|\Omega\|_{L^2} \qquad \forall R > 0\, . \end{equation}
Since $|\nabla \bar \Omega (\xi)| \leq C |\xi|^{-{\bar\alpha} -1}$ for $|\xi|\geq 1$, whenever $R\geq 1$ we can estimate
\begin{align*}
\|\mathscr{K} (\Omega)\|^2_{L^2 (B_R^c)} &= \sum_{j=0}^\infty \|\mathscr{K} (\Omega)\|_{L^2 (B_{2^{j+1} R}\setminus B_{2^j R})}^2 \leq C R^{-2-2{\bar\alpha}} \sum_{j=0}^\infty 2^{-2 (1+{\bar\alpha}) j} \|V\|^2_{L^2 (B_{2^{j+1} R})}\\
&\leq C R^{-2-2{\bar\alpha}} \sum_{j=0}^\infty 2^{-2(1+{\bar\alpha}) j} 2^{2j+2} R^2 \|\Omega\|_{L^2}^2
\leq C R^{-2{\bar\alpha}} \|\Omega\|_{L^2}^2\, .
\end{align*}
This shows at the same time that
\begin{itemize}
\item $\mathscr{K}$ is a bounded operator;
\item If we introduce the operators
\begin{equation}
\Omega \mapsto \mathscr{K}_N (\Omega) := \mathscr{K} (\Omega) \mathbf{1}_{B_N}\, ,
\end{equation}
then $\|\mathscr{K}_N- \mathscr{K}\|_{O}\to 0$.
\end{itemize}
Since the uniform limit of compact operators is a compact operator, it suffices to show that each $\mathscr{K}_N$ is a compact operator. This is however an obvious consequence of \eqref{e:stima-W12} and the compact embedding of $W^{1,2} (B_N)$ into $L^2 (B_N)$.
As for the remainder of the statement (ii), by the classical characterization of compact operators on a Hilbert space, for every $\varepsilon > 0$ there is a finite-rank linear map $L_N$ such that $\|\mathscr{K} - L_N\|_O \leq \frac{\varepsilon}{4}$. If we denote by $W_N$ the image of $L_N$ and by $Q_N$ the orthogonal projection onto it, given that $Q_N \circ L_N = L_N$ we can estimate
\[
\|Q_N \circ \mathscr{K} - \mathscr{K}\|_O \leq \|Q_N \circ \mathscr{K} - Q_N \circ L_N\|_O
+ \|L_N - \mathscr{K}\|_O \leq 2 \|L_N - \mathscr{K}\|_O \leq \frac{\varepsilon}{2}\, .
\]
Fix next an orthonormal base $\{w_1, \ldots, w_N\}$ of $W_N$ and, using the density of $C^\infty_c (\mathbb R^2)$, approximate each element $w_i$ in the base with $v_i\in C^\infty_c (\mathbb R^2, \mathbb C)$. This can be done for instance convolving $w_i$ with a smooth radial kernel and multiplying by a suitable cut-off function. If the $v_i$'s are taken sufficiently close to $w_i$, the orthogonal projection $P_N$ onto $V_N = {\rm span}\, (v_1, \ldots , v_N)$ satisfies $\|Q_N-P_N\|_O \leq \frac{\varepsilon}{2\|\mathscr{K}\|_O}$ and thus
\[
\|\mathscr{K} - P_N \circ \mathscr{K}\|_O \leq \|\mathscr{K} - Q_N \circ \mathscr{K}\|_O + \|P_N-Q_N\|_O \|\mathscr{K}\|_O \leq \varepsilon\, .\qedhere
\]
\end{proof}
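The finite-rank approximation used in the proof of (ii) can be illustrated numerically. The following Python sketch (ours, purely illustrative and not part of the argument; all names are chosen for the example) takes a matrix with rapidly decaying singular values as a stand-in for the compact operator $\mathscr{K}$ and checks that truncated SVDs, which play the role of the finite-rank maps $P_N\circ \mathscr{K}$, converge in the operator norm, with error exactly the first discarded singular value.

```python
import numpy as np

# A matrix with rapidly decaying singular values stands in for a compact
# operator; truncated SVDs play the role of the finite-rank maps L_N.
rng = np.random.default_rng(3)
n = 200
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** (-np.arange(n, dtype=float))      # singular values sigma_j = 2^{-j}
Kmat = (U * s) @ V.T                         # K = U diag(s) V^T

errs = []
for r in (5, 10, 20):
    Kr = (U[:, :r] * s[:r]) @ V[:, :r].T     # best rank-r approximation
    # operator-norm error of the rank-r truncation is sigma_{r+1} = s[r]
    errs.append(np.linalg.norm(Kmat - Kr, 2))
```

The errors decay geometrically, mirroring the uniform (operator-norm) approximation of a compact operator by finite-rank ones.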
\section{Proof of Theorem \ref{t:group} and proof of Theorem \ref{thm:spectral2}(a)}\label{subsect:Proof-of-spectral2-and-group}
The above structural facts allow us to gain some important consequences as simple corollaries of classical results in spectral theory, which we gather in the next statement. Observe in particular that the statement (a) of Theorem \ref{thm:spectral2} follows from it.
In what follows we take the definition of essential spectrum of an operator as given in \cite{EngelNagel}. We caution the reader that other authors use different definitions; at any rate the main conclusion about the essential spectra of the operators $L_{\text{ss}}$ and $L_{\text{st}}$ in Corollary \ref{c:structural} below depends only upon the property that the essential and discrete spectra are disjoint (which is common to all the different definitions used in the literature).
\begin{corollary}\label{c:structural}
The essential spectrum of $L_{\text{st}}$ and the essential spectrum of $L_{\text{ss}} - \left(1-\frac{1}{\alpha}\right)$ are contained in the imaginary axis, while the remaining part of the spectrum is contained in the discrete spectrum. In particular, every $z\in {\rm spec}_m (L_{\text{st}})$ (resp. $z\in {\rm spec}_m (L_{\text{ss}})$) with nonzero real part (resp. real part different from $1-\frac{1}{\alpha}$) has the following properties.
\begin{itemize}
\item[(i)] $z$ is isolated in ${\rm spec}_m (L_{\text{st}})$ (resp. ${\rm spec}_m (L_{\text{ss}})$);
\item[(ii)] There is at least one nontrivial $\Omega$ such that $L_{\text{st}} (\Omega) = z \Omega$ (resp. $L_{\text{ss}} (\Omega) = z \Omega$) and if ${\rm Im}\, (z)=0$, then $\Omega$ can be chosen to be real-valued;
\item[(iii)] The Riesz projection $P_z$ has finite rank;
\item[(iv)] ${\rm Im}\, (P_z) = \bigoplus_{k\in \mathbb Z\setminus \{0\}} ({\rm Im}\, (P_z)\cap U_{km})$ and in particular the intersection ${\rm Im}\, (P_z) \cap U_{km}$ is trivial for all but a finite number of $k$'s and it is nontrivial for at least one $k$.
\end{itemize}
Moreover, Theorem \ref{t:group} holds.
\end{corollary}
\begin{proof} The points (i)-(iii) are consequences of classical theory, but we briefly present their proofs, referring to \cite{Kato}. Observe that the addition of a constant multiple $c$ of the identity only shifts the spectrum (and its properties) by the constant $c$. The statements for $L_{\text{ss}}$ are thus reduced to similar statements for $S_2+\beta \mathscr{K}$.
Next since the arguments for $L_{\text{st}} = S_1 + \mathscr{K}$ only use the skew-adjointness of $S_1$ and the compactness of $\mathscr{K}$, they apply verbatim to $S_2+\beta\mathscr{K}$. We thus only argue for $L_{\text{st}}$. First of all observe that, since $S_1$ is skew-adjoint, its spectrum is contained in the imaginary axis. In particular, for every $z$ with ${\rm Re}\, z \neq 0$ the operator $S_1-z$ is invertible and thus Fredholm with Fredholm index $0$. Hence by \cite[Theorem 5.26, Chapter IV]{Kato}, $L_{\text{st}}-z= S_1-z +\mathscr{K}$ is as well Fredholm and has index $0$. By \cite[Theorem 5.31, Chapter IV]{Kato} there is a discrete set $\Sigma \subset \{z: {\rm Re}\, z\neq 0\}$ with the property that the dimension of the kernel (which equals that of the cokernel) of $L_{\text{st}} -z$ is constant on the open sets $\{{\rm Re}\, z > 0\}\setminus \Sigma$ and $\{{\rm Re}\, z<0\}\setminus \Sigma$. Since, for every $z$ such that $|{\rm Re}\, z|> \|\mathscr{K}\|_O$, we know that $L_{\text{st}}-z$ has a bounded inverse from the Neumann series, the kernel (and cokernel) of $L_{\text{st}}-z$ equals $0$ on $\{{\rm Re}\, z \neq 0\}\setminus \Sigma$. From \cite[Theorem 5.28, Chapter IV]{Kato} it then follows that $\Sigma$ is a subset of the discrete spectrum of $L_{\text{st}}$. Obviously the essential spectrum must be contained in the imaginary axis.
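The localization just used (for $|{\rm Re}\, z| > \|\mathscr{K}\|_O$ the Neumann series inverts $L_{\text{st}}-z$, so the spectrum lies in a vertical strip of half-width $\|\mathscr{K}\|_O$) has an elementary finite-dimensional analogue, which the following numpy sketch (ours, not part of the proof) verifies: every eigenvalue of a skew-symmetric matrix plus a bounded perturbation has real part bounded by the operator norm of the perturbation.

```python
import numpy as np

# S skew-symmetric (spectrum on the imaginary axis), K an arbitrary bounded
# perturbation standing in for the compact operator: every eigenvalue z of
# S + K satisfies |Re z| <= ||K||_O.
rng = np.random.default_rng(0)
n = 60
A = rng.standard_normal((n, n))
S = A - A.T                             # skew-symmetric
K = 0.3 * rng.standard_normal((n, n))   # the "compact" perturbation
opK = np.linalg.norm(K, 2)              # operator norm ||K||_O

eigs = np.linalg.eigvals(S + K)
max_re = np.max(np.abs(eigs.real))
```

The bound follows from $ {\rm Re}\, z = {\rm Re}\,\langle v, Kv\rangle$ for a unit eigenvector $v$, since $\langle v, Sv\rangle$ is purely imaginary.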
In order to show (iv), denote by $P_k$ the orthogonal projection onto $U_{km}$ and observe that, since $L_{\text{st}} \circ P_k = P_k \circ L_{\text{st}}$,
\begin{equation}\label{e:commute}
P_z \circ P_k = \frac{1}{2\pi i} \int_\gamma \frac{1}{w-L_{\text{st}}}\circ P_k \, dw
= \frac{1}{2\pi i} \int_\gamma P_k \circ \left(\frac{1}{w-L_{\text{st}}}\right)\, dw = P_k \circ P_z\, .
\end{equation}
Writing
\begin{equation}\label{e:splitting}
P_z = \sum_k P_z\circ P_k
\end{equation}
and observing that the commutation \eqref{e:commute} gives the orthogonality of the images of the $P_z\circ P_k$, since ${\rm Im}\, (P_z)$ is finite dimensional, we conclude that the sum is finite, i.e. that $P_z\circ P_k =0$ for all but finitely many $k$'s. Moreover, since $P_z^2 = P_z$ and $P_z$ equals the identity on ${\rm Im}\, (P_z)$, we see immediately that $U_{km} \cap {\rm Im}\, (P_z) = {\rm Im}\, (P_z\circ P_k)$.
We now come to the proof of Theorem \ref{t:group}.
We have already shown that, if ${\rm Re}\, \lambda$ is large enough, then $\lambda$ belongs to the resolvent of $L_{\text{ss}}$, which shows that $a_0 < \infty$. Next, observe that $L_{\text{ss}}$ generates a strongly continuous group if and only if $S_2+\beta \mathscr{K}$ does. On the other hand, using the skew-adjointness of $S_2$, we conclude that, if ${\rm Re}\, z > \beta \|\mathscr{K}\|_O$, then $z$ is in the resolvent of $S_2+\beta \mathscr{K}$ and
\[
\|(S_2+\beta \mathscr{K} - z)^{-1}\|_O \leq \frac{1}{{\rm Re}\, z - \beta \|\mathscr{K}\|_O}\, .
\]
Therefore we can apply \cite[Corollary 3.6, Chapter II]{EngelNagel} to conclude that $S_2+\beta \mathscr{K}$ generates a strongly continuous semigroup. Since the same argument applies to $-S_2-\beta \mathscr{K}$, we actually conclude that indeed the operator generates a strongly continuous group.
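The resolvent estimate invoked above is exact already in finite dimensions, and the following numpy check (ours, illustrative only) confirms it: for $S$ skew-symmetric and ${\rm Re}\, z > \beta\|K\|_O$ one has $\|(S+\beta K - z)^{-1}\|\leq ({\rm Re}\, z - \beta \|K\|_O)^{-1}$.

```python
import numpy as np

# Verify ||(S + beta*K - z)^{-1}|| <= 1/(Re z - beta*||K||_O) for Re z large.
rng = np.random.default_rng(2)
n = 40
A = rng.standard_normal((n, n))
S = A - A.T                          # skew-symmetric
K = 0.1 * rng.standard_normal((n, n))
beta = 5.0
opK = np.linalg.norm(K, 2)

z = beta * opK + 0.8                 # real z with Re z > beta*||K||_O
resolvent = np.linalg.inv(z * np.eye(n) - (S + beta * K))
lhs = np.linalg.norm(resolvent, 2)
bound = 1.0 / (z - beta * opK)
```

The estimate comes from ${\rm Re}\,\langle v, (z - S - \beta K)v\rangle \geq ({\rm Re}\, z - \beta\|K\|_O)\|v\|^2$, which bounds the smallest singular value from below.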
Next we invoke \cite[Corollary 2.11, Chapter IV]{EngelNagel} that characterizes the growth bound $\omega_0 (L_{\text{ss}})$ of the semigroup $e^{tL_{\text{ss}}}$ as
\[
\omega_0 (L_{\text{ss}}) = \max \{\omega_{\text{ess}} (L_{\text{ss}}), a_0\}\, ,
\]
where $\omega_{\text{ess}}$ is the essential growth bound of \cite[Definition 2.9, Chapter IV]{EngelNagel}. By \cite[Proposition 2.12, Chapter IV]{EngelNagel}, $\omega_{\text{ess}} (L_{\text{ss}})$ equals $\omega_{\text{ess}} (1-\frac{1}{\alpha} + S_2)$ and, since $e^{\tau S_2}$ is a unitary operator, the growth bound of $e^{(1-1/\alpha +S_2)\tau}$ equals $1-\frac{1}{\alpha}$, from which we conclude that $\omega_{\text{ess}} (1-\frac{1}{\alpha} + S_2)\leq 1-\frac{1}{\alpha}$. In particular we infer that if $a_0\geq 1-\frac{1}{\alpha}$, then $\omega_0 (L_{\text{ss}})=a_0$.
\end{proof}
\section{Proof of Theorem \ref{thm:spectral}: preliminary lemmas}
In this and the next section we will derive Theorem \ref{thm:spectral} from Theorem \ref{thm:spectral2}. It is convenient to introduce the following operator:
\begin{equation}\label{e:Lbeta}
L_\beta : = \frac{1}{\beta} \left(L_{\text{ss}} - \left(1-{\textstyle{\frac{1}{\alpha}}}\right)\right)
= \frac{1}{\beta} S_2 + \mathscr{K} \, .
\end{equation}
In particular
\begin{equation}\label{e:Lbeta-2}
L_\beta (\Omega) = \frac{1}{\beta}\left[{\rm div}\, \left(\frac{\xi}{\alpha} \Omega\right) - \frac{\Omega}{\alpha}\right] + L_{\text{st}} (\Omega)\, .
\end{equation}
Clearly the spectrum of $L_{\text{ss}}$ can be easily computed from the spectrum of $L_\beta$. The upshot of this section and the next section is that, as $\beta \to \infty$, the spectrum of $L_\beta$ converges to that of $L_{\text{st}}$ in a rather strong sense.
In this section we state two preliminary lemmas. We will use extensively the notation $P_V$ for the orthogonal projection onto some closed subspace $V$ of $L^2_m$.
\begin{lemma}\label{l:two}
Let $H= L^2_m, U_{km}, U_{-km}$, or any closed invariant subspace common to $L_{\text{st}}$ and all the $L_\beta$.
For every compact set $K\subset \mathbb C \setminus (i \mathbb R \cup {\rm spec}_m (L_{\text{st}} \circ P_H))$, there is $\beta_0 (K)$ such that $K\subset \mathbb C \setminus (i \mathbb R \cup {\rm spec}_m (L_\beta \circ P_H))$ for every $\beta \geq \beta_0 (K)$. Moreover,
\begin{equation}\label{e:op_norm_est}
\sup_{\beta \geq \beta_0 (K)} \sup_{z\in K} \|(L_\beta \circ P_H - z)^{-1}\|_O < \infty\,
\end{equation}
and
$(L_\beta \circ P_H -z)^{-1}$ converges strongly to $(L_{\text{st}}\circ P_H -z)^{-1}$ for every $z\in K$, namely,
\begin{equation}\label{e:strong_convergence}
\lim_{\beta\to \infty} \|(L_\beta \circ P_H -z)^{-1} (w) - (L_{\text{st}}\circ P_H -z)^{-1} (w)\| = 0\, \qquad \forall w\in L^2_m\, .
\end{equation}
\end{lemma}
\begin{lemma}\label{l:three}
For every $\varepsilon >0$ there is a $R=R (\varepsilon)$ such that
\begin{equation}\label{e:exclude_large_eigenvalue}
{\rm spec}_m (L_\beta) \cap \{z : |z|\geq R, |{\rm Re}\, z|\geq \varepsilon\} = \emptyset \qquad
\forall \beta \geq 1\, .
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{l:two}] The proof is analogous for all $H$ and we will thus show it for $H=L^2_m$. Fix first $z$ such that ${\rm Re}\, z \neq 0$. Recalling that $z- \beta^{-1} S_2$ is invertible, we write
\begin{equation}\label{e:invert_L_beta-z}
z- L_\beta = (z- \beta^{-1} S_2) (1 - (z-\beta^{-1} S_2)^{-1} \circ \mathscr{K})\, .
\end{equation}
\medskip
{\bf Step 1} The operators $(\beta^{-1} S_2 -z)^{-1}$ enjoy the bound
\begin{equation}\label{e:uniform-bound-inverse-beta}
\|(z-\beta^{-1} S_2)^{-1}\|_O \leq |{\rm Re}\, z|^{-1}
\end{equation}
because the operators $\beta^{-1} S_2$ are skew-adjoint. We claim that $(z-\beta^{-1} S_2)^{-1}$ converges strongly to $(z-S_1)^{-1}$ as $\beta \to \infty$. For a family of operators with a uniform bound on the operator norm, it suffices to show the strong convergence of $(z-\beta^{-1} S_2)^{-1} w$ for $w$ in a (strongly) dense subset.
Without loss of generality we can assume ${\rm Re}\, z >0$. Recalling that $\beta^{-1} S_2$ generates a strongly continuous unitary semigroup, we can use the formula
\begin{equation}\label{e:exponential_formula}
(z-\beta^{-1} S_2)^{-1} (w) = \int_0^\infty e^{-(z-\beta^{-1} S_2)\tau} (w)\, d\tau\, .
\end{equation}
Next observe that $\|e^{\beta^{-1} S_2\tau}\|_O=1$. Moreover, if $w\in \mathscr{S}$, then $e^{\beta^{-1}S_2 \tau} w$ is the solution of a transport equation with a locally bounded and smooth coefficient and initial data $w$. We can thus pass to the limit as $\beta\to \infty$ and conclude that $e^{\beta^{-1} S_2 \tau} w$ converges strongly in $L^2$ to $e^{S_1\tau} w$. We can thus use the dominated convergence theorem in \eqref{e:exponential_formula} to conclude that $(z-\beta^{-1} S_2)^{-1} (w)$ converges to $(z-S_1)^{-1} (w)$ strongly in $L^2$. Since $\mathscr{S}$ is strongly dense, this concludes our proof.
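The exponential formula \eqref{e:exponential_formula} can be checked numerically in finite dimensions. The numpy sketch below (ours, a quadrature illustration under the stated finite-dimensional assumptions) integrates $\int_0^\infty e^{-(z-S)\tau} w\, d\tau$ for a skew-symmetric $S$ and compares the result with the resolvent $(z-S)^{-1}w$ computed by a direct solve.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))
S = A - A.T                       # skew-symmetric: e^{S tau} is orthogonal
w = rng.standard_normal(n)
z = 0.7 + 0.4j                    # Re z > 0 so the integral converges

# S is normal, so S = Q diag(mu) Q^{-1} with mu purely imaginary
mu, Q = np.linalg.eig(S)
c = np.linalg.solve(Q, w.astype(complex))

# trapezoidal quadrature of \int_0^T e^{-z tau} e^{S tau} w d tau
T, M = 80.0, 80000
taus = np.linspace(0.0, T, M + 1)
dt = T / M
curves = Q @ (np.exp(np.outer(mu, taus)) * c[:, None]) * np.exp(-z * taus)
integral = (curves[:, :-1] + curves[:, 1:]).sum(axis=1) * dt / 2

exact = np.linalg.solve(z * np.eye(n) - S, w.astype(complex))
err = np.linalg.norm(integral - exact)
```

The truncation error at $T$ is of order $e^{-T\,{\rm Re}\, z}$, exactly the mechanism exploited in the proof of Lemma \ref{l:three} below.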
\medskip
{\bf Step 2} We next show that $(z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}$ converges in the operator norm to
$(z-S_1)^{-1} \circ \mathscr{K}$. Indeed using Proposition \ref{p:abstract} we can find a sequence of finite rank projections $P_N$ such that $P_N \circ \mathscr{K}$ converges to $\mathscr{K}$ in operator norm. From Step 1 it suffices to show that $(z-\beta^{-1} S_2)^{-1} \circ P_N \circ \mathscr{K}$ converges to $(z-S_1)^{-1} \circ P_N \circ \mathscr{K}$ in operator norm for each $N$. But clearly $(z-\beta^{-1} S_2)^{-1} \circ P_N$ is a finite rank operator and for finite rank operators the norm convergence is equivalent to strong convergence. The latter has been proved in Step 1.
\medskip
{\bf Step 3} Fix $z$ with ${\rm Re}\, z \neq 0$ which is outside the spectrum of $L_{\text{st}}$. Because of Step 2 we conclude that
\[
(1- (z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}) \to (1- (z-S_1)^{-1}\circ \mathscr{K})
\]
in the operator norm. Observe that $1-(z-S_1)^{-1}\circ \mathscr{K}$ is a compact perturbation of the identity. As such it is a Fredholm operator with index $0$ and thus invertible if and only if its kernel is trivial. Its kernel is given by $w$ which satisfy
\[
z w - S_1 (w)- \mathscr{K} (w) = 0\, ,
\]
i.e. it is the kernel of $z-(S_1+\mathscr{K}) = z- L_{\text{st}}$, which is trivial since $z$ is not in the spectrum of $L_{\text{st}}$. Thus $(1-(z-S_1)^{-1} \circ \mathscr{K})$ is invertible and hence, because of the operator convergence, so is $(1-(z-\beta^{-1} S_2)^{-1}\circ \mathscr{K})$ for any sufficiently large $\beta$. Hence, by \eqref{e:invert_L_beta-z} so is $z-L_\beta$.
\medskip
{\bf Step 4} The inverse $(z- L_\beta)^{-1}$ is given explicitly by the formula
\begin{equation}\label{e:inversion_formula}
(z-L_\beta)^{-1} = (1-(z-\beta^{-1} S_2)^{-1} \circ \mathscr{K})^{-1} \circ (z-\beta^{-1} S_2)^{-1}\, .
\end{equation}
Since $1-(z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}$ converges to $1- (z-S_1)^{-1} \circ \mathscr{K}$ in the operator norm, their inverses converge as well in the operator norm. Since the composition of norm convergent operators with strongly convergent, uniformly bounded operators is strongly convergent, we conclude that $(z-L_\beta)^{-1}$ converges strongly to the operator
\[
(1- (z-S_1)^{-1} \circ \mathscr{K})^{-1} \circ (z-S_1)^{-1} = (z-L_{\text{st}})^{-1}\, .
\]
\medskip
{\bf Step 5} Choose now a compact set $K\subset \mathbb C \setminus (i \mathbb R \cup {\rm spec}_m (L_{\text{st}}))$.
Recall first that
\[
K \ni z \mapsto (z-S_1)^{-1}
\]
is continuous in the operator norm. {\color{red} We claim that $K\times [0,1]\ni (z, \delta) \mapsto (1-(z-\delta S_2)^{-1} \circ \mathscr{K})$ is also continuous in the operator norm; to show this we prove the uniform continuity in $z$ for each fixed $\delta$, with an estimate independent of $\delta$. We first write
\[
\|(1-(z - \delta S_2)^{-1}\circ \mathscr{K}) - (1- (z'- \delta S_2)^{-1} \circ \mathscr{K})\|_O
\leq \|(z - \delta S_2)^{-1} - (z'-\delta S_2)^{-1}\|_O \|\mathscr{K}\|_O \, .
\]
Hence we compute
\[
(z - \delta S_2)^{-1} - (z'-\delta S_2)^{-1} = (z - \delta S_2)^{-1} \circ ((z' - \delta S_2)-(z-\delta S_2)) \circ (z'-\delta S_2)^{-1}
\]
and use \eqref{e:uniform-bound-inverse-beta} to estimate
\[
\|(z - \delta S_2)^{-1} - (z'-\delta S_2)^{-1}\|_O \leq |z-z'| \|(z - \delta S_2)^{-1}\|_O \|(z'-\delta S_2)^{-1}\|_O
\leq \frac{|z-z'|}{| {\rm Re}\, z| |{\rm Re}\, z'|}\, .
\]
}
Since the space of invertible operators is open in the norm topology, this implies the existence of a $\delta_0>0$ such that $K\times [0,\delta_0] \ni (z, \delta) \mapsto (1-(z-\delta S_2)^{-1}\circ \mathscr{K})^{-1}$ is well defined and continuous. Thus, for $\beta \geq \beta_0= \delta_0^{-1}$ we conclude that $1-(z-\beta^{-1} S_2)^{-1}\circ \mathscr{K}$ is invertible and the norm of its inverse is bounded by a constant $C$ independent of $\beta$ and $z\in K$. By \eqref{e:inversion_formula} and \eqref{e:uniform-bound-inverse-beta}, we infer that in the same range of $z$ and $\beta$ the norm of the operators $(z-L_\beta)^{-1}$ enjoy a uniform bound.
\end{proof}
\begin{proof}[Proof of Lemma \ref{l:three}]
We show \eqref{e:exclude_large_eigenvalue} with ${\rm Re}\, z \geq \varepsilon$ in place of $|{\rm Re}\, z|\geq \varepsilon$, as the argument for the half-plane $\{{\rm Re}\, z \leq -\varepsilon\}$ is entirely analogous.
Using \eqref{e:invert_L_beta-z}, we wish to show that there is $R = R (\varepsilon)$ such that the operator
\[
1 - (z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}
\]
is invertible for all $\beta \geq 1$ and $z$ such that $|z|\geq R$ and ${\rm Re}\, z \geq \varepsilon$.
This will follow after showing that, for $\beta$ and $z$ in the same range
\begin{equation}\label{e:small}
\|(z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}\|_O \leq \frac{1}{2}\, .
\end{equation}
By \eqref{e:uniform-bound-inverse-beta}, we can use Proposition \ref{p:abstract} to reduce \eqref{e:small} to the claim
\begin{equation}\label{e:small-2}
\|(z-\beta^{-1} S_2)^{-1} \circ P_V \circ \mathscr{K}\|_O \leq \frac{1}{4}\, ,
\end{equation}
where $P_V$ is the projection onto an appropriately chosen finite-dimensional space $V\subset C^\infty_c$. If $N$ is the dimension of the space and $w_1, \ldots , w_N$ an orthonormal base, it suffices to show that
\begin{equation}\label{e:small-3}
\|(z-\beta^{-1} S_2)^{-1} (w_i)\|_{L^2} \leq \frac{1}{4N} \qquad \forall i\, .
\end{equation}
We argue for one of them and set $w=w_i$. The goal is thus to show
\eqref{e:small-3} provided $|z|\geq R$ for some large enough $R$. We use again \eqref{e:exponential_formula} and write
\[
(z-\beta^{-1} S_2)^{-1} (w) = \underbrace{\int_0^T e^{-(z-\beta^{-1} S_2) \tau} (w)\, d\tau}_{=:(A)} + \underbrace{\int_T^\infty e^{-(z-\beta^{-1} S_2) \tau} (w)\, d\tau}_{=:(B)}\, .
\]
We first observe that
\[
\|(B)\| \leq \int_T^\infty e^{-\varepsilon \tau}\, d\tau \leq \frac{e^{-\varepsilon T}}{\varepsilon}\, .
\]
Thus, choosing $T$ sufficiently large we achieve $\|(B)\| \leq \frac{1}{8N}$.
Having fixed $T$ we integrate by parts in the integral defining (A) to get
\begin{align*}
(A) & =\int_0^T e^{-z\tau} e^{\beta^{-1} S_2 \tau} (w)\, d\tau
= \underbrace{\frac{w - e^{- (z-\beta^{-1} S_2) T} (w)}{z}}_{=: (A1)} + \underbrace{\frac{1}{z} \int_0^T e^{-z\tau} \beta^{-1} S_2 \circ e^{\beta^{-1} S_2 \tau} (w)\, d\tau}_{=: (A2)}\, .
\end{align*}
First of all we can bound
\[
\|(A1)\| \leq \frac{1+ e^{-T\varepsilon}}{|z|}\leq \frac{2}{R}\, .
\]
As for the second term, observe that $[0,T]\ni \tau \mapsto e^{\beta^{-1} S_2 \tau} (w)$ is the solution of a transport equation with smooth coefficients and smooth and compactly supported initial data, considered over a finite interval of time. Hence the support of the solution is compact and the solution is smooth. Moreover, the operators $\beta^{-1} S_2$ are first-order differential operators with coefficients which are smooth and whose derivatives are all bounded. In particular
\[
\max_{\tau\in [0,T]} \|\beta^{-1} S_2 \circ e^{\beta^{-1} S_2 \tau} (w)\| \leq C
\]
for a constant $C$ depending on $w$ and $T$ but not on $\beta$. In particular we can estimate
\[
\|(A2)\|\leq \frac{C (T)}{R}\, .
\]
Since the choice of $T$ has already been given, we can now choose $R$ large enough to conclude $\|(A)\|\leq \frac{1}{8N}$ as desired.
\end{proof}
\section{Proof of Theorem \ref{thm:spectral}: conclusion}
First of all observe that $z\in {\rm spec}_m (L_\beta)$ if and only if $\beta z + 1-\frac{1}{\alpha}\in {\rm spec}_m (L_{\text{ss}})$. Thus, in order to prove Theorem \ref{thm:spectral} it suffices to find $\beta_0$ and $c_0$ positive such that:
\begin{itemize}
\item[(P)] If $\beta \geq \beta_0$, then ${\rm spec}_m (L_\beta)$ contains an element $z$ with ${\rm Re}\, z \geq c_0$ such that ${\rm Re}\, z = \max \{{\rm Re}\, w : w\in {\rm spec}_m\, (L_\beta)\}$.
\end{itemize}
Observe indeed that, since the $U_{km}$ are invariant subspaces of $L_{\text{ss}}$, the eigenvalue $\beta z + 1-\frac{1}{\alpha}$ has an eigenfunction $\vartheta$ which belongs to one of them, and we can assume that $k\geq 1$ by possibly passing to the complex conjugate $\bar z$. If $z$ is not real, we then set $\eta=\vartheta$ and the latter has the properties claimed in Theorem \ref{thm:spectral}. If $z$ is real it then follows that the real and imaginary parts of $\vartheta$ are both eigenfunctions and, upon multiplying by $i$, we can assume that the real part of $\vartheta$ is nonzero. We can thus set $\eta= {\rm Re}\, \vartheta$ as the eigenfunction of Theorem \ref{thm:spectral}.
We will split the proof in two parts, namely, we will show separately that
\begin{itemize}
\item[(P1)] There are $\beta_1, c_0 >0$ such that ${\rm spec}_m (L_\beta)\cap \{{\rm Re}\, z \geq c_0\}\neq \emptyset$ for all $\beta \geq \beta_1$.
\item[(P2)] If $\beta \geq \beta_0:= \max \{\beta_1, 1\}$, then $\sup \{{\rm Re}\, w : w\in {\rm spec}_m\, (L_\beta)\}$ is attained.
\end{itemize}
\medskip
{\bf Proof of (P1).} We fix $z\in {\rm spec}_m\, (L_{\text{st}})$ with positive real part and we set $2c_0:= {\rm Re}\, z$. We then fix a contour $\gamma \subset B_\varepsilon (z)$ which:
\begin{itemize}
\item is a simple smooth curve;
\item encloses $z$ and no other portion of the spectrum of $L_{\text{st}}$;
\item does not intersect the spectrum of $L_{\text{st}}$;
\item is contained in $\{w: {\rm Re}\, w \geq c_0\}$.
\end{itemize}
By the Riesz formula we know that
\[
P_z = \frac{1}{2\pi i} \int_\gamma (w-L_{\text{st}})^{-1}\, dw
\]
is a projection onto a subspace which contains all eigenfunctions of $L_{\text{st}}$ relative to the eigenvalue $z$. In particular this projection is nontrivial. By Lemma \ref{l:two}, for all sufficiently large $\beta$ the curve $\gamma$ does not intersect the spectrum of $L_\beta$ and we can thus define
\[
P_{z,\beta} = \frac{1}{2\pi i} \int_\gamma (w-L_\beta)^{-1}\, dw\, .
\]
If $\gamma$ does not enclose any element of the spectrum of $L_\beta$, then $P_{z, \beta} = 0$. On the other hand, by Lemma \ref{l:two} and the dominated convergence theorem,
\[
P_{z,\beta} (u) \to P_z (u)
\]
strongly for every $u$, i.e., the operators $P_{z,\beta}$ converge strongly to the operator $P_z$. If for some sequence $\beta_k\to \infty$ the operators $P_{z,\beta_k}$ were trivial, then $P_z$ would be trivial too. Since this is excluded, we conclude that the curve $\gamma$ encloses some elements of the spectrum of $L_\beta$ for all $\beta$ large enough. Each such element has real part not smaller than $c_0$.
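The mechanism of the proof of (P1), namely that the contour integral of the resolvent is nonzero exactly when $\gamma$ encloses spectrum, can be checked on a matrix. The following numpy sketch (ours, purely illustrative) discretizes the Riesz formula on a circle around an isolated eigenvalue and verifies that the result is an idempotent projector whose trace equals the algebraic multiplicity.

```python
import numpy as np

# A has an isolated eigenvalue at z = 1; the trapezoidal rule on a circle
# gamma around it recovers the Riesz projector with spectral accuracy.
A = np.diag([1.0 + 0j, -2.0, 0.5j, -0.5j])
z0, r, M = 1.0, 0.3, 400
P = np.zeros((4, 4), dtype=complex)
for k in range(M):
    w = z0 + r * np.exp(2j * np.pi * k / M)
    dw = 1j * (w - z0) * (2 * np.pi / M)      # gamma'(t) dt on the circle
    P += np.linalg.solve(w * np.eye(4) - A, np.eye(4)) * dw
P /= 2j * np.pi

idempotency = np.linalg.norm(P @ P - P)       # should vanish: P^2 = P
rank = np.trace(P).real                       # algebraic multiplicity enclosed
```

Shrinking $\gamma$ so that it encloses no eigenvalue makes the same quadrature return the zero matrix, which is precisely the dichotomy exploited above.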
\medskip
{\bf Proof of (P2).} Set $\varepsilon := c_0$ and apply Lemma \ref{l:three} to find $R>0$ such that
${\rm spec}_m (L_\beta)\setminus \overline{B}_R$ is contained in $\{w: {\rm Re}\, w < c_0\}$. In particular, if $\beta \geq \max\{\beta_1, 1\}$, then the eigenvalue $z$ found in the previous step belongs to $\overline{B}_R$ and thus
\[
\sup\, \{{\rm Re}\, w : w\in {\rm spec}_m\, (L_\beta)\} =
\sup\, \{{\rm Re}\, w : w\in {\rm spec}_m\, (L_\beta)\cap \{w: {\rm Re}\, w \geq c_0, |w|\leq R\}\}\, .
\]
However, since ${\rm spec}_m\, (L_\beta)\cap \{w: {\rm Re}\, w \neq 0\}$ belongs to the discrete spectrum, the set $\{w: {\rm Re}\, w \geq c_0, |w|\leq R\}\cap {\rm spec}_m\, (L_\beta)$ is finite, and hence the supremum is attained. This proves (P2).
\chapter{Linear theory: Part I}
\label{chapter:linearparti}
In this chapter, we will reduce Theorem \ref{thm:spectral} to an analogous spectral result for another differential operator, and we will also show an important corollary of Theorem \ref{thm:spectral} concerning the semigroup that it generates. We start by giving the two relevant statements, but we will need first to introduce some notation and terminology.
First of all, in the rest of the chapter we will always assume that the positive integer $m$ is at least $2$. We then introduce a new (closed and densely defined) operator on $L^2_m$, which we will denote by $L_{\text{st}}$. The operator is defined by\index{aalLst@$L_{\text{st}}$}
\begin{equation}
L_{\text{st}} (\Omega) = - {\rm div}\, (\bar V \Omega) - (K_2*\Omega \cdot \nabla) \bar \Omega\,
\end{equation}
and (recalling that the operator $\Omega\mapsto (K_2* \Omega\cdot \nabla) \bar \Omega$ is bounded and compact, as will be shown below) its domain in $L^2_m$ is given by
\begin{equation}
D_m (L_{\text{st}}) = \{\Omega\in L^2_m : {\rm div}\, (\bar V \Omega)\in L^2\}\, .
\end{equation}
The key underlying idea behind the introduction of $L_{\text{st}}$ is that we can write $L_{\text{ss}}$ as
\[
L_{\text{ss}} = \left(1+{\textstyle{\frac{\xi}{\alpha}}}\cdot \nabla\right) + \beta L_{\text{st}} \,
\]
and since $\beta$ will be chosen very large, we will basically study the spectrum of $L_{\text{ss}}$ as a perturbation of the spectrum of $\beta L_{\text{st}}$. In particular Theorem \ref{thm:spectral} will be derived from a more precise spectral analysis of $L_{\text{st}}$. Before coming to it, we split the space $L^2_m$ into an appropriate infinite sum of closed orthogonal subspaces.
First of all, if we fix an element $\vartheta\in L^2 (\mathbb R^2)$ and we introduce the polar coordinates $(\theta, r)$ through $x= r (\cos \theta , \sin \theta)$, we can then use the Fourier expansion to write
\begin{equation}\label{e:Fourier}
\vartheta (x) =\sum_{k\in \mathbb Z} a_k (r) e^{ik\theta}\,
\end{equation}
where
\[
a_k (r) := \frac{1}{2\pi} \int_0^{2\pi} \vartheta(r \cos(\theta),r\sin(\theta)) e^{-ik\theta}\,\mathrm d\theta .
\]
By Plancherel's formula,
\[
\|\vartheta\|_{L^2 (\ensuremath{\mathbb R}^2)}^2 = 2\pi \sum_{k\in \mathbb Z} \|a_k\|^2_{L^2 (\ensuremath{\mathbb R}^+, r\,\mathrm dr)}\, .
\]
In particular it will be convenient to introduce the subspaces\index{aalUk@$U_k$}
\begin{equation}
U_k :=\{f(r) e^{ik\theta} : f \in L^2 (\mathbb R^+, r\,\mathrm dr)\}\, .
\end{equation}
Each $U_k$ is a closed subspace of $L^2$, distinct $U_k$'s are orthogonal to each other and moreover
\begin{equation}\label{e:Fourier-2}
L^2_m = \bigoplus_{k\in \mathbb Z} U_{km}\, .
\end{equation}
Each $U_{km}$ is an invariant space of $L_{\text{st}}$, and it can be easily checked that $U_{km}\subset D_m (L_{\text{st}})$ and that indeed the restriction of $L_{\text{st}}$ to $U_{km}$ is a bounded operator. Following the same convention as for $L_{\text{ss}}$ we will denote by ${\rm spec}_m\, (L_{\text{st}})$ the spectrum of $L_{\text{st}}$ on $L^2_m$.
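The decomposition \eqref{e:Fourier-2} has an immediate discrete analogue which can be explored numerically. The numpy sketch below (ours, an illustration under a discretization of our choosing) samples a function on a polar grid, extracts the angular Fourier coefficients $a_k(r)$ with the FFT, and checks the discrete counterpart of the Plancherel identity above together with the mode content of the sample.

```python
import numpy as np

# For each fixed radius, the FFT in theta produces the coefficients a_k(r);
# the discrete Parseval identity mirrors the Plancherel formula above.
Ntheta, Nr = 64, 50
theta = 2 * np.pi * np.arange(Ntheta) / Ntheta
r = np.linspace(0.05, 3.0, Nr)
R, TH = np.meshgrid(r, theta, indexing='ij')
F = np.exp(-R**2) * np.cos(3 * TH) + R * np.exp(-R) * np.sin(TH)

a = np.fft.fft(F, axis=1) / Ntheta           # a[:, k] ~ a_k(r)
lhs = np.mean(np.abs(F)**2, axis=1)          # angular average of |F|^2
rhs = np.sum(np.abs(a)**2, axis=1)           # sum over the angular modes

# only the modes k = 1 and k = 3 (and their conjugates) are present
active = {k for k in range(Ntheta) if np.max(np.abs(a[:, k])) > 1e-12}
```

The set of active modes is $\{1,3\}$ together with the conjugate frequencies $\{N-1, N-3\}$, the discrete counterpart of the pairing of $U_k$ and $U_{-k}$ for real-valued functions.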
\begin{theorem}\label{thm:spectral2}\label{THM:SPECTRAL2}
For every $m\geq 2$ and every $\bar\Omega$
we have
\begin{itemize}
\item[(a)] each $z\in {\rm spec}_m\, (L_{\text{st}})\cap \{w: {\rm Re}\, w \neq 0\}$ belongs to the discrete spectrum and, if ${\rm Im}\, (z)=0$, then there is a nontrivial real eigenfunction relative to $z$.
\end{itemize}
Moreover, for an appropriate choice of $\bar\Omega$ there is an integer $m\geq 2$ such that:
\begin{itemize}
\item[(b)] ${\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re}\, z >0\}$ is nonempty.
\end{itemize}
\end{theorem}
\begin{remark}\label{r:better}\label{R:BETTER}
The theorem stated above contains the minimal amount of information that we need to complete the proof of Theorem \ref{thm:main2}. We can however infer some additional conclusions with more work, more precisely we can show that
\begin{itemize}
\item[(c)] $m$ can be chosen so that, in addition to (b), ${\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re}\, z >0\}$ is finite and the image of the Riesz projector\footnote{Recall that in the case of an isolated point $z$ in the spectrum of a closed, densely defined operator $A$, the Riesz projector is defined as
\[
\frac{1}{2\pi i} \int_\gamma (w -A)^{-1}\, dw
\]
for any simple closed rectifiable contour $\gamma$ bounding a closed disk $D$ with $D \cap {\rm spec}\, (A) = \{z\}$. For an element of the discrete spectrum the Riesz projector has finite rank (the algebraic multiplicity of the eigenvalue $z$).}\index{Riesz projector}\index{aalPz@$P_z$}
$P_{z}$ of $L_{\text{st}}$ relative to each $z\in {\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re}\, z >0\}$ is contained in $U_m\oplus U_{-m}$.
\end{itemize}
Since this property is not needed to prove Theorem~\ref{thm:main2} we defer its proof to Appendix~\ref{a:better}.
\end{remark}
{\color{red} In \cite{Vishik2} Vishik claims the following greatly improved statement.
\begin{theorem}\label{thm:spectral-stronger}\label{THM:SPECTRAL-STRONGER}
For a suitable $\bar \Omega$:
\begin{itemize}
\item[(c')] $m$ can be chosen so that, in addition to (b) and (c), ${\rm spec}_m\, (L_{\text{st}})\cap \{z: {\rm Re}\, z >0\}\cap U_m$ consists of a single element, with algebraic multiplicity $1$ in $U_m$.
\end{itemize}
\end{theorem}
Since the spectrum of $L_{\text{st}}$ is invariant under complex conjugation, (b), (c), and (c') imply that ${\rm spec}_m\, (L_{\text{st}})\cap \{{\rm Re}\, z>0\}$ consists either of a single real eigenvalue or of two complex conjugate eigenvalues. In the first case, the algebraic and geometric multiplicity of the eigenvalue is $2$ and the space of eigenfunctions has a basis consisting of an element of $U_m$ and its complex conjugate in $U_{-m}$. In the second case the two eigenvalues $z$ and $\bar z$ have algebraic multiplicity $1$ and their eigenspaces are generated, respectively, by an element of $U_m$ and its complex conjugate in $U_{-m}$.
The argument given in \cite{Vishik2} for (c') is however not complete. Vishik provided later (\cite{Vishik3}) a way to close the gap. In Appendix~\ref{a:better} we will give a proof of Theorem \ref{thm:spectral-stronger} along his lines.}
\medskip
In this chapter we also derive an important consequence of Theorem \ref{thm:spectral} for the semigroup generated by $L_{\text{ss}}$.\index{Semigroup}
\begin{theorem}\label{t:group}\label{T:GROUP}
For every $m\geq 2$, $L_{\text{ss}}$ is the generator of a strongly continuous semigroup on $L^2_m$ which will be denoted by $e^{\tau L_{\text{ss}}}$\index{aalEtauLss@$e^{\tau L_{\text{ss}}}$}, and the growth bound $\omega (L_{\text{ss}})$ of $e^{\tau L_{\text{ss}}}$ equals\index{Semigroup, growth bound}
\[
a_0 := \sup \{{\rm Re}\, z_0 : z_0\in {\rm spec}_m (L_{\text{ss}})\}\, <\infty\,
\]
if $a_0 \geq 1-\frac{1}{\alpha}$.
In other words, for every $\delta>0$, there is a constant $M (\delta)$ with the property that
\begin{equation}\label{e:growth-bound}
\left\|e^{\tau L_{\text{ss}}} \Omega\right\|_{L^2} \leq M (\delta) e^{(a_0 +\delta) \tau} \|\Omega\|_{L^2}
\qquad \qquad \forall \tau\geq 0,\, \forall \Omega\in L^2_m\, .
\end{equation}
\end{theorem}
\section{Preliminaries}\label{s:abstract-operators}
In this section we start with some preliminaries which will take advantage of several structural properties of the operators $L_{\text{ss}}$ and $L_{\text{st}}$. First of all we decompose $L_{\text{st}}$ as
\begin{equation}\label{e:decompo_L_st}
L_{\text{st}} = S_1 + \mathscr{K}\, ,
\end{equation}\index{aalK@$\mathscr{K}$}\index{aalS1@$S_1$}where
\begin{align}
S_1 (\Omega)&:= - {\rm div}\, (\bar V \Omega)\label{e:S1}\\
\mathscr{K} (\Omega) &:= - (K_2*\Omega \cdot \nabla) \bar \Omega \label{e:compatto}\, .
\end{align}
Hence we introduce the operator\index{aalS2@$S_2$}
\begin{equation}
S_2 (\Omega) := {\rm div} \left(\left(\frac{\xi}{\alpha} - \beta \bar V\right) \Omega\right) - \frac{\Omega}{\alpha}\, ,
\end{equation}
so that we can decompose $L_{\text{ss}}$ as
\begin{equation}
L_{\text{ss}} = \left(1-\frac{1}{\alpha}\right) + S_2 + \beta \mathscr{K}\, .
\end{equation}
The domains of the various operators $A$ involved are always understood as $D_m (A):= \{\Omega : A(\Omega)\in L^2\}$.
Finally, we introduce the real Hilbert spaces $L^2_m (\mathbb R)$ and $U_j (\mathbb R)$ by setting\index{aalL2mR@$L^2_m(\ensuremath{\mathbb R})$}\index{aalUkR@$U_k(\ensuremath{\mathbb R})$}
\begin{align}
L^2_m (\mathbb R) &:= \{{\rm Re}\, \Omega : \Omega \in L^2_m\}\,
\end{align}
and, for $j>0$ natural,
\begin{equation}
U_j (\mathbb R) :=\{{\rm Re}\, \Omega: \Omega \in U_j\}\, .
\end{equation}
Observe that while clearly $L^2_m (\mathbb R)$ is a real subspace of $L^2_m$, $U_j (\mathbb R)$ is a real subspace of $U_j \oplus U_{-j}$.
As it is customary, $L^2_m (\mathbb R)$ and its real vector subspaces are endowed with the inner product\index{Rotationally symmetric function space, inner product}
\begin{equation}
\langle \Omega, \Xi\rangle_{\mathbb R} = \int \Omega\, \Xi\, ,
\end{equation}
while $L^2_m$ and its complex vector subspaces are endowed with the Hermitian product
\begin{equation}
\langle \Omega, \Xi\rangle_{\mathbb C} = \int ({\rm Re}\, \Omega\, {\rm Re}\, \Xi +
{\rm Im}\, \Omega\, {\rm Im}\, \Xi) + i \int ({\rm Im}\, \Omega\, {\rm Re}\, \Xi - {\rm Re} \, \Omega\, {\rm Im}\, \Xi)\, .
\end{equation}
We will omit the subscripts from $\langle \cdot, \cdot \rangle$ when the underlying field is clear from the context. The following proposition details the important structural properties of the various operators. A closed unbounded operator $A$ on $L^2_m$ will be called \emph{real} if its restriction $A_{\mathbb R}$ to $L^2_m (\mathbb R)$ is a closed, densely defined operator with domain $D_m (A) \cap L^2_m (\mathbb R)$ such that $A(\Omega)\in L^2_m(\mathbb R)$ for all $\Omega\in D_m(A)\cap L^2_m(\mathbb R)$.
\begin{proposition}\label{p:abstract}
\begin{itemize}
\item[(i)] The operators $\mathscr{K}$, $S_1$ and $S_2$ are all real operators.
\item[(ii)] $\mathscr{K}$ is bounded and compact. More precisely there is a sequence of finite dimensional vector spaces $V_n \subset C^\infty_c (\mathbb R^2,\mathbb C)\cap L^2_m$ with the property that, if $P_n$ denotes the orthogonal projection onto $V_n$, then
\begin{equation}\label{e:explicit-approx}
\lim_{n\to\infty} \|\mathscr{K} - P_n\circ \mathscr{K}\|_O = 0\, ,
\end{equation}
where $\|\cdot\|_O$ denotes the operator norm.
\item[(iii)] $S_1$ and $S_2$ are skew-adjoint.
\item[(iv)] $D_m (L_{\text{st}}) = D_m (S_1)$ and $D_m (L_{\text{ss}}) = D_m (S_2)$.
\item[(v)] $U_{km}$ is an invariant subspace of $S_1, S_2, \mathscr{K}, L_{\text{st}}, L_{\text{ss}}$.
\item[(vi)] The restrictions of $S_1$ and $L_{\text{st}}$ to each $U_{km}$ are bounded operators.
\end{itemize}
\end{proposition}
\begin{proof} The verifications of (i), (iii), (iv), (v), and (vi) are all simple and are therefore left to the reader. We thus come to (ii) and prove the compactness of the operator $\mathscr{K}$. Recalling Lemma \ref{l:extension}, for every $\Omega \in L^2_m$ we can write the tempered distribution $\mathscr{K} (\Omega)$ as
\begin{equation}
\mathscr{K} (\Omega) = \nabla \bar \Omega \cdot V
\end{equation}
where $V=V(\Omega)$ is a $W^{1,2}_{\text{loc}}$ function with the properties that
\begin{equation}\label{e:stima-W12}
R^{-1} \|V \|_{L^2 (B_R)} + \|DV\|_{L^2 (B_R)} \leq C \|\Omega\|_{L^2} \qquad \forall R > 0\, . \end{equation}
Since $|\nabla \bar \Omega (\xi)| \leq C |\xi|^{-{\bar\alpha} -1}$ for $|\xi|\geq 1$, whenever $R\geq 1$ we can estimate
\begin{align*}
\|\mathscr{K} (\Omega)\|^2_{L^2 (B_R^c)} &= \sum_{j=0}^\infty \|\mathscr{K} (\Omega)\|_{L^2 (B_{2^{j+1} R}\setminus B_{2^j R})}^2 \leq C R^{-2-2{\bar\alpha}} \sum_{j=0}^\infty 2^{-2 (1+{\bar\alpha}) j} \|V\|^2_{L^2 (B_{2^{j+1} R})}\\
&\leq C R^{-2-2{\bar\alpha}} \sum_{j=0}^\infty 2^{-2(1+{\bar\alpha}) j} 2^{2j+2} R^2 \|\Omega\|_{L^2}^2
\leq C R^{-2{\bar\alpha}} \|\Omega\|_{L^2}^2\, .
\end{align*}
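For the reader's convenience we record the elementary summation used in the last line, which only requires ${\bar\alpha}>0$:
\[
\sum_{j=0}^\infty 2^{-2(1+{\bar\alpha}) j}\, 2^{2j+2} = 4 \sum_{j=0}^\infty 4^{-{\bar\alpha} j} = \frac{4}{1-4^{-{\bar\alpha}}}\, .
\]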
This shows at the same time that
\begin{itemize}
\item $\mathscr{K}$ is a bounded operator;
\item If we introduce the operators
\begin{equation}
\Omega \mapsto \mathscr{K}_N (\Omega) := \mathscr{K} (\Omega) \mathbf{1}_{B_N}\, ,
\end{equation}
then $\|\mathscr{K}_N- \mathscr{K}\|_{O}\to 0$.
\end{itemize}
Since the uniform limit of compact operators is a compact operator, it suffices to show that each $\mathscr{K}_N$ is a compact operator. This is however an obvious consequence of \eqref{e:stima-W12} and the compact embedding of $W^{1,2} (B_N)$ into $L^2 (B_N)$.
As for the remainder of the statement (ii), by the classical characterization of compact operators on a Hilbert space, for every $\varepsilon > 0$ there is a finite-rank linear map $L_N$ such that $\|\mathscr{K} - L_N\|_O \leq \frac{\varepsilon}{4}$. If we denote by $W_N$ the image of $L_N$ and by $Q_N$ the orthogonal projection onto it, given that $Q_N \circ L_N = L_N$ we can estimate
\[
\|Q_N \circ \mathscr{K} - \mathscr{K}\|_O \leq \|Q_N \circ \mathscr{K} - Q_N \circ L_N\|_O
+ \|L_N - \mathscr{K}\|_O \leq 2 \|L_N - \mathscr{K}\|_O \leq \frac{\varepsilon}{2}\, .
\]
Fix next an orthonormal base $\{w_1, \ldots, w_N\}$ of $W_N$ and, using the density of $C^\infty_c (\mathbb R^2)$, approximate each element $w_i$ of the base with $v_i\in C^\infty_c (\mathbb R^2, \mathbb C)$. This can be done for instance by convolving $w_i$ with a smooth radial kernel and multiplying by a suitable cut-off function. If the $v_i$'s are taken sufficiently close to the $w_i$'s, the orthogonal projection $P_N$ onto $V_N = {\rm span}\, (v_1, \ldots , v_N)$ satisfies $\|Q_N-P_N\|_O \leq \frac{\varepsilon}{2\|\mathscr{K}\|_O}$ and thus
\[
\|\mathscr{K} - P_N \circ \mathscr{K}\|_O \leq \|\mathscr{K} - Q_N \circ \mathscr{K}\|_O + \|P_N-Q_N\|_O \|\mathscr{K}\|_O \leq \varepsilon\, .\qedhere
\]
\end{proof}
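For the reader's convenience we also sketch the verification of the skew-adjointness of $S_2$ claimed in (iii); the argument for $S_1$ is analogous. Since $\bar V$ is divergence-free and ${\rm div}\, \xi = 2$, we can write
\[
S_2 (\Omega) = \left(\frac{\xi}{\alpha} - \beta \bar V\right)\cdot \nabla \Omega + \frac{\Omega}{\alpha}\, ,
\]
and hence, integrating by parts, for all $\Omega, \Xi \in C^\infty_c$ we have
\[
\langle S_2 (\Omega), \Xi\rangle = -\int \Omega \left(\frac{\xi}{\alpha} - \beta \bar V\right)\cdot \nabla \bar \Xi - \frac{2}{\alpha}\int \Omega\, \bar\Xi + \frac{1}{\alpha} \int \Omega\, \bar \Xi = - \langle \Omega, S_2 (\Xi)\rangle\, .
\]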
\section{Proof of Theorem \ref{t:group} and proof of Theorem \ref{thm:spectral2}(a)}\label{subsect:Proof-of-spectral2-and-group}
The above structural facts allow us to derive some important consequences as simple corollaries of classical results in spectral theory, which we gather in the next statement. Observe in particular that statement (a) of Theorem \ref{thm:spectral2} follows from it.
In what follows we take the definition of essential spectrum of an operator as given in \cite{EngelNagel}. We caution the reader that other authors use different definitions; at any rate the main conclusion about the essential spectra of the operators $L_{\text{ss}}$ and $L_{\text{st}}$ in Corollary \ref{c:structural} below depends only upon the property that the essential and discrete spectra are disjoint (which is common to all the different definitions used in the literature).
\begin{corollary}\label{c:structural}
The essential spectrum of $L_{\text{st}}$ and the essential spectrum of $L_{\text{ss}} - \left(1-\frac{1}{\alpha}\right)$ are contained in the imaginary axis, while the remaining part of the spectrum is contained in the discrete spectrum. In particular, every $z\in {\rm spec}_m (L_{\text{st}})$ (resp. $z\in {\rm spec}_m (L_{\text{ss}})$) with nonzero real part (resp. real part different from $1-\frac{1}{\alpha}$) has the following properties.
\begin{itemize}
\item[(i)] $z$ is isolated in ${\rm spec}_m (L_{\text{st}})$ (resp. ${\rm spec}_m (L_{\text{ss}})$);
\item[(ii)] There is at least one nontrivial $\Omega$ such that $L_{\text{st}} (\Omega) = z \Omega$ (resp. $L_{\text{ss}} (\Omega) = z \Omega$) and if ${\rm Im}\, (z)=0$, then $\Omega$ can be chosen to be real-valued;
\item[(iii)] The Riesz projection $P_z$ has finite rank;
\item[(iv)] ${\rm Im}\, (P_z) = \bigoplus_{k\in \mathbb Z\setminus \{0\}} ({\rm Im}\, (P_z)\cap U_{km})$ and in particular the intersection ${\rm Im}\, (P_z) \cap U_{km}$ is trivial for all but a finite number of $k$'s and it is nontrivial for at least one $k$.
\end{itemize}
Moreover, Theorem \ref{t:group} holds.
\end{corollary}
\begin{proof} The points (i)-(iii) are consequences of classical theory, but we briefly present their proofs, referring to \cite{Kato}. Observe that the addition of a constant multiple $c$ of the identity only shifts the spectrum (and its properties) by the constant $c$. The statements for $L_{\text{ss}}$ are thus reduced to similar statements for $S_2+\beta \mathscr{K}$.
Next, since the arguments for $L_{\text{st}} = S_1 + \mathscr{K}$ only use the skew-adjointness of $S_1$ and the compactness of $\mathscr{K}$, they apply verbatim to $S_2+\beta\mathscr{K}$. We thus only argue for $L_{\text{st}}$. First of all observe that, since $S_1$ is skew-adjoint, its spectrum is contained in the imaginary axis. In particular, for every $z$ with ${\rm Re}\, z \neq 0$ the operator $S_1-z$ is invertible and thus Fredholm with Fredholm index $0$. Hence by \cite[Theorem 5.26, Chapter IV]{Kato}, $L_{\text{st}}-z= S_1-z +\mathscr{K}$ is also Fredholm with index $0$. By \cite[Theorem 5.31, Chapter IV]{Kato} there is a discrete set $\Sigma \subset \{z: {\rm Re}\, z\neq 0\}$ with the property that the dimension of the kernel (which equals that of the cokernel) of $L_{\text{st}} -z$ is constant on the open sets $\{{\rm Re}\, z > 0\}\setminus \Sigma$ and $\{{\rm Re}\, z<0\}\setminus \Sigma$. Since, for every $z$ such that $|{\rm Re}\, z|> \|\mathscr{K}\|_O$, we know from the Neumann series that $L_{\text{st}}-z$ has a bounded inverse, the kernel (and cokernel) of $L_{\text{st}}-z$ equals $0$ on $\{{\rm Re}\, z \neq 0\}\setminus \Sigma$. From \cite[Theorem 5.28, Chapter IV]{Kato} it then follows that $\Sigma$ is a subset of the discrete spectrum of $L_{\text{st}}$. Obviously the essential spectrum must then be contained in the imaginary axis.
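For the reader's convenience we spell out the Neumann series argument used above: if $|{\rm Re}\, z| > \|\mathscr{K}\|_O$, then the bound $\|(S_1-z)^{-1}\|_O \leq |{\rm Re}\, z|^{-1}$ gives $\|(S_1-z)^{-1}\circ \mathscr{K}\|_O <1$, and from the factorization
\[
L_{\text{st}} - z = (S_1-z)\circ \left(1+ (S_1-z)^{-1}\circ \mathscr{K}\right)
\]
we obtain the bounded inverse
\[
(L_{\text{st}} - z)^{-1} = \left(\sum_{k=0}^\infty \left(- (S_1-z)^{-1}\circ \mathscr{K}\right)^k\right)\circ (S_1-z)^{-1}\, .
\]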
In order to show (iv), denote by $P_k$ the orthogonal projection onto $U_{km}$ and observe that, since $L_{\text{st}} \circ P_k = P_k \circ L_{\text{st}}$,
\begin{equation}\label{e:commute}
P_z \circ P_k = \frac{1}{2\pi i} \int_\gamma \frac{1}{w-L_{\text{st}}}\circ P_k \, dw
= \frac{1}{2\pi i} \int_\gamma P_k \circ \left(\frac{1}{w-L_{\text{st}}}\right)\, dw = P_k \circ P_z\, .
\end{equation}
Writing
\begin{equation}\label{e:splitting}
P_z = \sum_k P_z\circ P_k
\end{equation}
and observing that the commutation \eqref{e:commute} gives the orthogonality of the images of the $P_z\circ P_k$, since ${\rm Im}\, (P_z)$ is finite dimensional, we conclude that the sum is finite, i.e. that $P_z\circ P_k =0$ for all but finitely many $k$'s. Moreover, since $P_z^2 = P_z$ and $P_z$ equals the identity on ${\rm Im}\, (P_z)$, we see immediately that $U_{km} \cap {\rm Im}\, (P_z) = {\rm Im}\, (P_z\circ P_k)$.
We now come to the proof of Theorem \ref{t:group}.
We have already shown that, if ${\rm Re}\, \lambda$ is large enough, then $\lambda$ belongs to the resolvent of $L_{\text{ss}}$, which shows that $a_0 < \infty$. Next, observe that $L_{\text{ss}}$ generates a strongly continuous group if and only if $S_2+\beta \mathscr{K}$ does. On the other hand, using the skew-adjointness of $S_2$, we conclude that, if ${\rm Re}\, z > \beta \|\mathscr{K}\|_O$, then $z$ is in the resolvent of $S_2+\beta \mathscr{K}$ and
\[
\|(S_2+\beta \mathscr{K} - z)^{-1}\|_O \leq \frac{1}{{\rm Re}\, z - \beta \|\mathscr{K}\|_O}\, .
\]
Therefore we can apply \cite[Corollary 3.6, Chapter II]{EngelNagel} to conclude that $S_2+\beta \mathscr{K}$ generates a strongly continuous semigroup. Since the same argument applies to $-S_2-\beta \mathscr{K}$, we actually conclude that indeed the operator generates a strongly continuous group.
Next we invoke \cite[Corollary 2.11, Chapter IV]{EngelNagel} that characterizes the growth bound $\omega_0 (L_{\text{ss}})$ of the semigroup $e^{tL_{\text{ss}}}$ as
\[
\omega_0 (L_{\text{ss}}) = \max \{\omega_{\text{ess}} (L_{\text{ss}}), a_0\}\, ,
\]
where $\omega_{\text{ess}}$ is the essential growth bound of \cite[Definition 2.9, Chapter IV]{EngelNagel}. By \cite[Proposition 2.12, Chapter IV]{EngelNagel}, $\omega_{\text{ess}} (L_{\text{ss}})$ equals $\omega_{\text{ess}} (1-\frac{1}{\alpha} + S_2)$ and, since $e^{\tau S_2}$ is a unitary operator, the growth bound of $e^{(1-1/\alpha +S_2)\tau}$ equals $1-\frac{1}{\alpha}$, from which we conclude that $\omega_{\text{ess}} (1-\frac{1}{\alpha} + S_2)\leq 1-\frac{1}{\alpha}$. In particular we infer that if $a_0\geq 1-\frac{1}{\alpha}$, then $\omega_0 (L_{\text{ss}})=a_0$.
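For the reader's convenience we record the elementary computation behind the last assertion: since $S_2$ is skew-adjoint, $e^{\tau S_2}$ is unitary, and hence
\[
\|e^{(1-\frac{1}{\alpha} + S_2)\tau}\|_O = e^{(1-\frac{1}{\alpha})\tau} \|e^{S_2 \tau}\|_O = e^{(1-\frac{1}{\alpha})\tau}\, ,
\]
so that the growth bound of the semigroup $e^{(1-\frac{1}{\alpha}+S_2)\tau}$ is exactly $1-\frac{1}{\alpha}$.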
\end{proof}
\section{Proof of Theorem \ref{thm:spectral}: preliminary lemmas}
In this and the next section we will derive Theorem \ref{thm:spectral} from Theorem \ref{thm:spectral2}. It is convenient to introduce the following operator:
\begin{equation}\label{e:Lbeta}
L_\beta : = \frac{1}{\beta} \left(L_{\text{ss}} - \left(1-{\textstyle{\frac{1}{\alpha}}}\right)\right)
= \frac{1}{\beta} S_2 + \mathscr{K} \, .
\end{equation}
In particular
\begin{equation}\label{e:Lbeta-2}
L_\beta (\Omega) = \frac{1}{\beta}\left[{\rm div}\, \left(\frac{\xi}{\alpha} \Omega\right) - \frac{\Omega}{\alpha}\right] + L_{\text{st}} (\Omega)\, .
\end{equation}
Clearly the spectrum of $L_{\text{ss}}$ can easily be computed from the spectrum of $L_\beta$. The upshot of this and the next section is that, as $\beta \to \infty$, the spectrum of $L_\beta$ converges to that of $L_{\text{st}}$ in a rather strong sense.
In this section we state two preliminary lemmas. We will use extensively the notation $P_V$ for the orthogonal projection onto some closed subspace $V$ of $L^2_m$.
\begin{lemma}\label{l:two}
Let $H= L^2_m, U_{km}, U_{-km}$, or any closed invariant subspace common to $L_{\text{st}}$ and all the $L_\beta$.
For every compact set $K\subset \mathbb C \setminus (i \mathbb R \cup {\rm spec}_m (L_{\text{st}} \circ P_H))$, there is $\beta_0 (K)$ such that $K\subset \mathbb C \setminus (i \mathbb R \cup {\rm spec}_m (L_\beta \circ P_H))$ for every $\beta \geq \beta_0 (K)$. Moreover,
\begin{equation}\label{e:op_norm_est}
\sup_{\beta \geq \beta_0 (K)} \sup_{z\in K} \|(L_\beta \circ P_H - z)^{-1}\|_O < \infty\,
\end{equation}
and
$(L_\beta \circ P_H -z)^{-1}$ converges strongly to $(L_{\text{st}}\circ P_H -z)^{-1}$ for every $z\in K$, namely,
\begin{equation}\label{e:strong_convergence}
\lim_{\beta\to \infty} \|(L_\beta \circ P_H -z)^{-1} (w) - (L_{\text{st}}\circ P_H -z)^{-1} (w)\| = 0\, \qquad \forall w\in L^2_m\, .
\end{equation}
\end{lemma}
\begin{lemma}\label{l:three}
For every $\varepsilon >0$ there is a $R=R (\varepsilon)$ such that
\begin{equation}\label{e:exclude_large_eigenvalue}
{\rm spec}_m (L_\beta) \cap \{z : |z|\geq R, |{\rm Re}\, z|\geq \varepsilon\} = \emptyset \qquad
\forall \beta \geq 1\, .
\end{equation}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{l:two}] The proof is analogous for all choices of $H$ and we will thus give it for $H=L^2_m$. Fix first $z$ such that ${\rm Re}\, z \neq 0$ and, recalling that $z- \beta^{-1} S_2$ is invertible, write
\begin{equation}\label{e:invert_L_beta-z}
z- L_\beta = (z- \beta^{-1} S_2) (1 - (z-\beta^{-1} S_2)^{-1} \circ \mathscr{K})\, .
\end{equation}
\medskip
{\bf Step 1} The operators $(z-\beta^{-1} S_2)^{-1}$ enjoy the bound
\begin{equation}\label{e:uniform-bound-inverse-beta}
\|(z-\beta^{-1} S_2)^{-1}\|_O \leq |{\rm Re}\, z|^{-1}
\end{equation}
because the operators $\beta^{-1} S_2$ are skew-adjoint. We claim that $(z-\beta^{-1} S_2)^{-1}$ converges strongly to $(z-S_1)^{-1}$ as $\beta \to \infty$. For a family of operators with a uniform bound on the operator norm, it suffices to show the strong convergence of $(z-\beta^{-1} S_2)^{-1} (w)$ for $w$ in a (strongly) dense subset.
Without loss of generality we can assume ${\rm Re}\, z >0$. Recalling that $\beta^{-1} S_2$ generates a strongly continuous unitary semigroup, we can use the formula
\begin{equation}\label{e:exponential_formula}
(z-\beta^{-1} S_2)^{-1} (w) = \int_0^\infty e^{-(z-\beta^{-1} S_2)\tau} (w)\, d\tau\, .
\end{equation}
Next observe that $\|e^{\beta^{-1} S_2\tau}\|_O=1$. Moreover, if $w\in \mathscr{S}$, then $e^{\beta^{-1}S_2 \tau} w$ is the solution of a transport equation with a locally bounded and smooth coefficient and initial data $w$. We can thus pass to the limit as $\beta\to \infty$ and conclude that $e^{\beta^{-1} S_2 \tau} w$ converges strongly in $L^2$ to $e^{S_1\tau} w$. We can then use the dominated convergence theorem in \eqref{e:exponential_formula} to conclude that $(z-\beta^{-1} S_2)^{-1} (w)$ converges to $(z-S_1)^{-1} (w)$ strongly in $L^2$. Since $\mathscr{S}$ is strongly dense, this concludes the proof of the claim.
\medskip
{\bf Step 2} We next show that $(z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}$ converges in the operator norm to
$(z-S_1)^{-1} \circ \mathscr{K}$. Indeed using Proposition \ref{p:abstract} we can find a sequence of finite rank projections $P_N$ such that $P_N \circ \mathscr{K}$ converges to $\mathscr{K}$ in operator norm. From Step 1 it suffices to show that $(z-\beta^{-1} S_2)^{-1} \circ P_N \circ \mathscr{K}$ converges to $(z-S_1)^{-1} \circ P_N \circ \mathscr{K}$ in operator norm for each $N$. But clearly $(z-\beta^{-1} S_2)^{-1} \circ P_N$ is a finite rank operator and for finite rank operators the norm convergence is equivalent to strong convergence. The latter has been proved in Step 1.
\medskip
{\bf Step 3} Fix now $z$ outside the spectrum of $L_{\text{st}}$. Because of Step 2 we conclude that
\[
(1- (z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}) \to (1- (z-S_1)^{-1}\circ \mathscr{K})
\]
in the operator norm. Observe that $1-(z-S_1)^{-1}\circ \mathscr{K}$ is a compact perturbation of the identity. As such it is a Fredholm operator with index $0$ and thus invertible if and only if its kernel is trivial. Its kernel is given by $w$ which satisfy
\[
z w - S_1 (w)- \mathscr{K} (w) = 0\, ,
\]
i.e. it is the kernel of $z-(S_1+\mathscr{K}) = z- L_{\text{st}}$, which is trivial since $z$ is outside the spectrum of $L_{\text{st}}$. Thus $(1-(z-S_1)^{-1} \circ \mathscr{K})$ is invertible and hence, because of the operator convergence, so is $(1-(z-\beta^{-1} S_2)^{-1}\circ \mathscr{K})$ for any sufficiently large $\beta$. Hence, by \eqref{e:invert_L_beta-z}, so is $z-L_\beta$.
\medskip
{\bf Step 4} The inverse $(z- L_\beta)^{-1}$ is given explicitly by the formula
\begin{equation}\label{e:inversion_formula}
(z-L_\beta)^{-1} = (z-\beta^{-1} S_2)^{-1} (1-(z-\beta^{-1} S_2)^{-1} \circ \mathscr{K})^{-1}\, .
\end{equation}
Since $1-(z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}$ converges to $1- (z-S_1)^{-1} \circ \mathscr{K}$ in the operator norm, their inverses converge as well in the operator norm. Since the composition of strongly convergent operators with norm convergent operators is strongly convergent, we conclude that $(z-L_\beta)^{-1}$ converges strongly to the operator
\[
(z-S_1)^{-1} (1- (z-S_1)^{-1} \circ \mathscr{K})^{-1} = (z-L_{\text{st}})^{-1}\, .
\]
\medskip
{\bf Step 5} Choose now a compact set $K\subset \mathbb C \setminus (i \mathbb R \cup {\rm spec}_m (L_{\text{st}}))$.
Recall first that
\[
K \ni z \mapsto (z-S_1)^{-1}
\]
is continuous in the operator norm. Thus $K\ni z \mapsto (1- (z-S_1)^{-1} \circ \mathscr{K})$ is continuous in the operator norm. {\color{red} We claim that $K\times [0,1]\ni (z, \delta) \mapsto (1-(z-\delta S_2)^{-1} \circ \mathscr{K})$ is also continuous in the operator norm and in order to show this we will prove the uniform continuity in $z$ once we fix $\delta$, with an estimate which is independent of $\delta$. We first write
\[
\|(1-(z - \delta S_2)^{-1}\circ \mathscr{K}) - (1- (z'- \delta S_2)^{-1} \circ \mathscr{K})\|_O
\leq \|(z - \delta S_2)^{-1} - (z'-\delta S_2)^{-1}\|_O \|\mathscr{K}\|_O \, .
\]
\]
Hence we compute
\[
(z - \delta S_2)^{-1} - (z'-\delta S_2)^{-1} = (z - \delta S_2)^{-1} \circ ((z' - \delta S_2)-(z-\delta S_2)) \circ (z'-\delta S_2)^{-1}
\]
and use \eqref{e:uniform-bound-inverse-beta} to estimate
\[
\|(z - \delta S_2)^{-1} - (z'-\delta S_2)^{-1}\|_O \leq |z-z'| \|(z - \delta S_2)^{-1}\|_O \|(z'-\delta S_2)^{-1}\|_O
\leq \frac{|z-z'|}{| {\rm Re}\, z| |{\rm Re}\, z'|}\, .
\]
}
Since the space of invertible operators is open in the norm topology, this implies the existence of a $\delta_0>0$ such that $K\times [0,\delta_0] \ni (z, \delta) \mapsto (1-(z-\delta S_2)^{-1}\circ \mathscr{K})^{-1}$ is well defined and continuous. Thus, for $\beta \geq \beta_0= \delta_0^{-1}$ we conclude that $1-(z-\beta^{-1} S_2)^{-1}\circ \mathscr{K}$ is invertible and the norm of its inverse is bounded by a constant $C$ independent of $\beta$ and of $z\in K$. By \eqref{e:inversion_formula} and \eqref{e:uniform-bound-inverse-beta}, we infer that in the same range of $z$ and $\beta$ the norms of the operators $(z-L_\beta)^{-1}$ enjoy a uniform bound.
\end{proof}
\begin{proof}[Proof of Lemma \ref{l:three}]
We show \eqref{e:exclude_large_eigenvalue} with ${\rm Re}\, z \geq \varepsilon$ in place of $|{\rm Re}\, z|\geq \varepsilon$, as the argument for the half-plane $\{{\rm Re}\, z \leq -\varepsilon\}$ is entirely analogous.
Using \eqref{e:invert_L_beta-z}, we wish to show that there is $R = R (\varepsilon)$ such that the operator
\[
1 - (z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}
\]
is invertible for all $\beta \geq 1$ and $z$ such that $|z|\geq R$ and ${\rm Re}\, z \geq \varepsilon$.
This will follow after showing that, for $\beta$ and $z$ in the same range
\begin{equation}\label{e:small}
\|(z-\beta^{-1} S_2)^{-1} \circ \mathscr{K}\|_O \leq \frac{1}{2}\, .
\end{equation}
By \eqref{e:uniform-bound-inverse-beta}, we can use Proposition \ref{p:abstract} to reduce \eqref{e:small} to the claim
\begin{equation}\label{e:small-2}
\|(z-\beta^{-1} S_2)^{-1} \circ P_V \circ \mathscr{K}\|_O \leq \frac{1}{4}\, ,
\end{equation}
where $P_V$ is the projection onto an appropriately chosen finite-dimensional space $V\subset C^\infty_c$. If $N$ is the dimension of the space and $w_1, \ldots , w_N$ an orthonormal base, it suffices to show that
\begin{equation}\label{e:small-3}
\|(z-\beta^{-1} S_2)^{-1} (w_i)\|_{L^2} \leq \frac{1}{4N} \qquad \forall i\, .
\end{equation}
We argue for one of them and set $w=w_i$. The goal is thus to show
\eqref{e:small-3} provided $|z|\geq R$ for some large enough $R$. We use again \eqref{e:exponential_formula} and write
\[
(z-\beta^{-1} S_2)^{-1} (w) = \underbrace{\int_0^T e^{-(z-\beta^{-1} S_2) \tau} (w)\, d\tau}_{=:(A)} + \underbrace{\int_T^\infty e^{-(z-\beta^{-1} S_2) \tau} (w)\, d\tau}_{=:(B)}\, .
\]
We first observe that
\[
\|(B)\| \leq \int_T^\infty e^{-\varepsilon \tau}\, d\tau \leq \frac{e^{-\varepsilon T}}{\varepsilon}\, .
\]
Thus, choosing $T$ sufficiently large we achieve $\|(B)\| \leq \frac{1}{8N}$.
Having fixed $T$ we integrate by parts in the integral defining (A) to get
\begin{align*}
(A) & =\int_0^T e^{-z\tau} e^{\beta^{-1} S_2 \tau} (w)\, d\tau
= \underbrace{\frac{w - e^{- (z-\beta^{-1} S_2) T} (w)}{z}}_{=: (A1)} + \underbrace{\frac{1}{z} \int_0^T e^{-z\tau} \beta^{-1} S_2 \circ e^{\beta^{-1} S_2 \tau} (w)\, d\tau}_{=: (A2)}\, .
\end{align*}
First of all we can bound
\[
\|(A1)\| \leq \frac{1+ e^{-T\varepsilon}}{|z|}\leq \frac{2}{R}\, .
\]
As for the second term, observe that $[0,T]\ni \tau \mapsto e^{\beta^{-1} S_2 \tau} (w)$ is the solution of a transport equation with smooth coefficients and smooth and compactly supported initial data, considered over a finite interval of time. Hence the support of the solution is compact and the solution is smooth. Moreover, the operators $\beta^{-1} S_2$ are first-order differential operators with coefficients which are smooth and whose derivatives are all bounded. In particular
\[
\max_{\tau\in [0,T]} \|\beta^{-1} S_2 \circ e^{\beta^{-1} S_2 \tau} (w)\| \leq C
\]
for a constant $C$ depending on $w$ and $T$ but not on $\beta$. In particular we can estimate
\[
\|(A2)\|\leq \frac{C (T)}{R}\, .
\]
Since $T$ has already been fixed, we can now choose $R$ large enough to conclude $\|(A)\|\leq \frac{1}{8N}$, as desired.
\end{proof}
\section{Proof of Theorem \ref{thm:spectral}: conclusion}
First of all observe that $z\in {\rm spec}_m (L_\beta)$ if and only if $\beta z + 1-\frac{1}{\alpha}\in {\rm spec}_m (L_{\text{ss}})$. Thus, in order to prove Theorem \ref{thm:spectral} it suffices to find $\beta_0$ and $c_0$ positive such that:
\begin{itemize}
\item[(P)] If $\beta \geq \beta_0$, then ${\rm spec}_m (L_\beta)$ contains an element $z$ with ${\rm Re}\, z \geq c_0$ such that ${\rm Re}\, z = \max \{{\rm Re}\, w : w\in {\rm spec}_m\, (L_\beta)\}$.
\end{itemize}
Observe indeed that, using the fact that the $U_{km}$ are invariant subspaces of $L_{\text{ss}}$, the eigenvalue $\beta z + 1-\frac{1}{\alpha}$ has an eigenfunction $\vartheta$ which belongs to one of them, and we can assume that $k\geq 1$ by possibly passing to the complex conjugate $\bar z$. If $z$ is not real, we then set $\eta=\vartheta$ and the latter has the properties claimed in Theorem \ref{thm:spectral}. If $z$ is real, it then follows that the real and imaginary parts of $\vartheta$ are both eigenfunctions and, upon multiplying by $i$, we can assume that the real part of $\vartheta$ is nonzero. We can thus set $\eta= {\rm Re}\, \vartheta$ as the eigenfunction of Theorem \ref{thm:spectral}.
We will split the proof in two parts, namely, we will show separately that
\begin{itemize}
\item[(P1)] There are $\beta_1, c_0 >0$ such that ${\rm spec}_m (L_\beta)\cap \{{\rm Re}\, z \geq c_0\}\neq \emptyset$ for all $\beta \geq \beta_1$.
\item[(P2)] If $\beta \geq \beta_0:= \max \{\beta_1, 1\}$, then $\sup \{{\rm Re}\, w : w\in {\rm spec}\, (L_\beta)\}$ is attained.
\end{itemize}
\medskip
{\bf Proof of (P1).} We fix $z\in {\rm spec}\, (L_{\text{st}})$ with positive real part and we set $2c_0:= {\rm Re}\, z$. We then fix a contour $\gamma \subset B_\varepsilon (z)$ which:
\begin{itemize}
\item is a simple smooth curve;
\item encloses $z$ and no other point of the spectrum of $L_{\text{st}}$;
\item does not intersect the spectrum of $L_{\text{st}}$;
\item is contained in $\{w: {\rm Re}\, w \geq c_0\}$.
\end{itemize}
By the Riesz formula we know that
\[
P_z = \frac{1}{2\pi i} \int_\gamma (w-L_{\text{st}})^{-1}\, dw
\]
is a projection onto a subspace which contains all eigenfunctions of $L_{\text{st}}$ relative to the eigenvalue $z$. In particular this projection is nontrivial. By Lemma \ref{l:two}, for all sufficiently large $\beta$ the curve $\gamma$ does not intersect the spectrum of $L_\beta$ and we can thus define
\[
P_{z,\beta} = \frac{1}{2\pi i} \int_\gamma (w-L_\beta)^{-1}\, dw\, .
\]
If $\gamma$ does not enclose any element of the spectrum of $L_\beta$, then $P_{z, \beta} = 0$. On the other hand, by Lemma \ref{l:two} and the dominated convergence theorem,
\[
P_{z,\beta} (u) \to P_z (u)
\]
strongly for every $u$, i.e. the operators $P_{z,\beta}$ converge strongly to the operator $P_z$. If for some sequence $\beta_k\to \infty$ the operators $P_{z,\beta_k}$ were trivial, then $P_z$ would be trivial too. Since this is excluded, we conclude that the curve $\gamma$ encloses some elements of the spectrum of $L_\beta$ for all $\beta$ large enough. Each such element has real part not smaller than $c_0$.
\medskip
{\bf Proof of (P2).} Set $\varepsilon := c_0$ and apply Lemma \ref{l:three} to find $R>0$ such that
${\rm spec}_m (L_\beta)\setminus \overline{B}_R$ is contained in $\{w: {\rm Re}\, w < c_0\}$. In particular, if $\beta \geq \max\{\beta_1, 1\}$, then the eigenvalue $z$ found in the previous step belongs to $\overline{B}_R$ and thus
\[
\sup\, \{{\rm Re}\, w : w\in {\rm spec}\, (L_\beta)\} =
\sup\, \{{\rm Re}\, w : w\in {\rm spec}\, (L_\beta),\, {\rm Re}\, w \geq c_0,\, |w|\leq R\} \, .
\]
However, since ${\rm spec}\, (L_\beta)\cap \{w: {\rm Re}\, w \neq 0\}$ belongs to the discrete spectrum, the set $\{w: {\rm Re}\, w \geq c_0, |w|\leq R\}\cap {\rm spec}\, (L_\beta)$ is finite, and hence the supremum above is attained.
\chapter{Linear theory: Part II}
\label{chapter:linearpartii}\label{sect:Proof-spectral-2}
This chapter is devoted to proving Theorem \ref{thm:spectral2}. Because of the discussions in the previous chapter, considering the decomposition
\[
L^2_m = \bigoplus_{k\in \mathbb Z} U_{km}\, ,
\]
the statement of Theorem \ref{thm:spectral2} can be reduced to the study of the spectra of the restrictions $L_{\text{st}}|_{U_{km}}$ of the operator $L_{\text{st}}$ to the invariant subspaces $U_{km}$. For this reason we introduce the notation ${\rm spec}\, (L_{\text{st}}, U_j)$ for the spectrum of the operator $L_{\text{st}}|_{U_j}$, understood as an operator from $U_j$ to $U_j$. The following is a very simple observation.
\begin{lemma}
The restriction of the operator $L_{\text{st}}$ to the radial functions $U_0$ is identically $0$. Moreover, $z\in {\rm spec}\, (L_{\text{st}}, U_j)$ if and only if $\bar z \in {\rm spec}\, (L_{\text{st}}, U_{-j})$.
\end{lemma}
We will then focus on proving the following statement, which is slightly stronger than what we need to infer Theorem \ref{thm:spectral2}.
\begin{theorem}\label{thm:spectral3}
For a suitable choice of $\bar \Omega$, there is $m_0\geq 2$ such that
${\rm spec}\, (L_{\text{st}}, U_{m_0}) \cap \{{\rm Re}\, z > 0\}$ is nonempty and ${\rm spec}\, (L_{\text{st}}, U_{m_0})\cap \{{\rm Re}\, z \geq \bar a\}$ is finite for every positive $\bar a$.
\end{theorem}
\begin{remark}\label{r:better2}\label{R:BETTER2} As is the case for Theorem \ref{thm:spectral2}, we can deepen our analysis and prove the following stronger statement:
\begin{itemize}
\item[(i)] For a suitable choice of $m_0$, in addition to the conclusion of Theorem \ref{thm:spectral3} we have ${\rm spec}\, (L_{\text{st}}, U_m) \subset i \mathbb R$ for every $m> m_0$.
\end{itemize}
This will be done in Appendix \ref{a:better}, where we will also show how conclusion (c) of Remark \ref{r:better} follows from it.
\end{remark}
{\color{red} Note that in \cite{Vishik2} Vishik claims the following stronger statement.
\begin{theorem}\label{thm:spectral-stronger-2}\label{THM:SPECTRAL-STRONGER-2}
For a suitable choice of $m_0$, in addition to the conclusion of Theorem \ref{thm:spectral3} and to Remark \ref{r:better2}(i), we have also
\begin{itemize}
\item[(ii)] ${\rm spec}\, (L_{\text{st}}, U_{m_0}) \cap \{{\rm Re}\, z > 0\}$ consists of a single eigenvalue with algebraic multiplicity $1$.
\end{itemize}
\end{theorem}
In Appendix \ref{a:better} we will show how to prove the latter conclusion and how Theorem \ref{thm:spectral-stronger} follows from it.}
\section{Preliminaries}
If we write an arbitrary element $\Omega\in U_m$ as $\Omega (x) = e^{im\theta} \gamma (r)$ using polar coordinates, we find an isomorphism of the Hilbert space $U_m$ with the Hilbert space
\begin{equation}\label{e:def-H}
\mathcal{H}:= \left\{\gamma : \mathbb R^+ \to \mathbb C : \int_0^\infty |\gamma (r)|^2\, r\, dr < \infty\right\}
\end{equation}
and thus the operator $L_{\text{st}}: U_m \to U_m$ can be identified with an operator $\mathcal{L}_m : \mathcal{H}\to \mathcal{H}$. In fact, since $L_{\text{st}} = S_1+\mathscr{K}$, where $S_1$ is skew-adjoint and $\mathscr{K}$ compact, $\mathcal{L}_m$ is also a compact perturbation of a skew-adjoint operator. In order to simplify our notation and terminology, we will then turn our considerations to the operator $i\mathcal{L}_m$, which will be written as the sum of a self-adjoint operator, denoted by $\mathcal{S}_m$, and a compact operator, denoted by $\mathscr{K}_m$.
\begin{lemma}\label{l:S-in-polar}
After the latter identification, if $\bar\Omega (x) = g (|x|)$ and $\zeta$ is given through the formula \eqref{e:def-zeta}, then $\mathcal{S}_m: \mathcal{H}\to \mathcal{H}$ is the following bounded self-adjoint operator:
\begin{equation}\label{e:explicit}
\gamma \mapsto \mathcal{S}_m (\gamma) = m \zeta \gamma\, .
\end{equation}
\end{lemma}
\begin{proof}
The formula is easy to check. The self-adjointness of \eqref{e:explicit} is obvious. Concerning the boundedness we need to show that $\zeta$ is bounded. Since $g$ is smooth (and hence locally bounded), $\zeta$ is smooth and locally bounded by \eqref{e:def-zeta}. To show that it is globally bounded recall that $g (r) = r^{-{\bar\alpha}}$ for $r\geq 2$, so that
\[
\zeta (r) = \frac{\tilde c_0}{r^2} + \frac{1}{r^2} \int_2^r \rho^{1-{\bar\alpha}}\, d\rho = \frac{c_0}{r^2} + \frac{c_1}{r^{\bar\alpha}} \qquad \forall r\geq 2\, ,
\]
where $c_0$ and $c_1$ are two appropriate constants.
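In fact the constants can be computed explicitly: since ${\bar\alpha}\neq 2$, a direct integration gives
\[
\frac{1}{r^2}\int_2^r \rho^{1-{\bar\alpha}}\, d\rho = \frac{r^{-{\bar\alpha}}}{2-{\bar\alpha}} - \frac{2^{2-{\bar\alpha}}}{(2-{\bar\alpha})\, r^2}\, ,
\]
so that $c_1 = \frac{1}{2-{\bar\alpha}}$ and $c_0 = \tilde c_0 - \frac{2^{2-{\bar\alpha}}}{2-{\bar\alpha}}$.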
\end{proof}
A suitable, more complicated, representation formula can be shown for the operator $\mathscr{K}_m$.
\begin{lemma}\label{l:K-in-polar}
Under the assumptions of Lemma \ref{l:S-in-polar}, the compact operator $\mathscr{K}_m: \mathcal{H}\to \mathcal{H}$ is given by
\begin{equation}\label{e:explicit2}
\gamma \mapsto \mathscr{K}_m (\gamma)= - \frac{m}{r} \psi g'\,
\end{equation}
where
\begin{equation}\label{e:explicit3}
\psi (r) = - \frac{1}{2m} r^{m} \int_r^\infty \gamma (s) s^{1-m}\, ds - \frac{1}{2m} r^{-m} \int_0^r \gamma (s) s^{1+m}\, ds\, .
\end{equation}
\end{lemma}
\begin{remark}\label{r:potential-theory}
When $\gamma$ is compactly supported, $\phi (\theta,r):= \psi (r) e^{im\theta}$ with $\psi$ as in \eqref{e:explicit3} gives the unique potential-theoretic solution of $\Delta \phi = \gamma e^{im\theta}$, namely, $\phi$ obtained as the convolution of $\gamma e^{im\theta}$ with the Newtonian potential $\frac{1}{2\pi} \ln r$. For general $\gamma\in \mathcal{H}$ we do not have enough summability to define such a convolution using Lebesgue integration, but, as already done before, we keep calling $\phi$ the potential-theoretic solution of $\Delta \phi = \gamma e^{im\theta}$.
\end{remark}
\begin{proof}[Proof of Lemma \ref{l:K-in-polar}]
First of all we want to show that the formula is correct when $\Omega = \gamma (r) e^{im \theta} \in C^\infty_c \cap L^2_m$. We are interested in computing $-i (K_2*\Omega\cdot \nabla) \bar \Omega$. First of all we recall that $K_2* \Omega = \nabla^\perp \phi$, where $\phi$ is the potential-theoretic solution of $\Delta \phi = \Omega$. Recall that for $\phi$ we have the explicit formula
\[
\phi (x) = \frac{1}{2\pi} \int_{\ensuremath{\mathbb R}^2} \Omega (y) \ln |y-x|\,\mathrm dy\, .
\]
The function $\phi$ is clearly smooth and hence locally bounded. Observe that $\Omega$ averages to $0$ and thus
\[
\phi (x) = \frac{1}{2\pi} \int_{\ensuremath{\mathbb R}^2} \Omega (y) (\ln |y-x| - \ln |x|)\,\mathrm dy\, .
\]
Fix $R$ larger than $1$ so that ${\rm spt}\, (\Omega) \subset B_R$ and choose $|x|\geq 2R$. We then have the following elementary inequality for every $y\in {\rm spt}\, (\Omega)$:
\[
|\ln |x| - \ln |x-y||\leq \ln (|x-y| + |y|) - \ln (|x-y|)\leq \frac{|y|}{|y-x|} \leq \frac{2|y|}{|x|}\, ,
\]
from which we conclude that $|\phi (x)|\leq C (1+|x|)^{-1}$. Hence $\phi$ is the only solution to $\Delta \phi = \Omega$ with the property that it converges to $0$ at infinity. This allows us to show that $\phi$ satisfies the formula
\[
\phi (x) = \psi (r) e^{im\theta}
\]
where $\psi$ is given by formula \eqref{e:explicit3}. We indeed just need to check that the Laplacian of
$\psi (r) e^{im\theta}$ equals $\gamma (r) e^{im\theta}$ and that $\lim_{r\to \infty} \psi (r) = 0$.
Using the formula $\Delta = \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2} + \frac{1}{r} \frac{\partial}{\partial r} + \frac{\partial^2}{\partial r^2}$ the first claim is a direct verification. Next, since $\gamma (r) =0$ for $r\geq R$, we conclude $\psi (r) = C r^{-m}$ for all $r\geq R$, which shows the second claim.
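For the reader's convenience we spell out the direct verification. Setting $I_+ (r) := \int_r^\infty \gamma (s) s^{1-m}\, ds$ and $I_- (r) := \int_0^r \gamma (s) s^{1+m}\, ds$, so that $2m \psi = - r^m I_+ - r^{-m} I_-$, the boundary terms produced by differentiating the two integrals cancel and we find
\[
\psi' = - \frac{1}{2} \left( r^{m-1} I_+ - r^{-m-1} I_- \right)\, ,
\qquad
\psi'' = \gamma - \frac{1}{2} \left( (m-1) r^{m-2} I_+ + (m+1) r^{-m-2} I_- \right)\, .
\]
In the combination $\psi'' + \frac{1}{r} \psi' - \frac{m^2}{r^2} \psi$ the coefficient of $r^{m-2} I_+$ is $-\frac{m-1}{2} - \frac{1}{2} + \frac{m}{2} = 0$ and that of $r^{-m-2} I_-$ is $-\frac{m+1}{2} + \frac{1}{2} + \frac{m}{2} = 0$, so that the combination equals $\gamma$, as desired.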
Observe next that
\[
\nabla \phi = \frac{m i}{r^2} \psi (r) e^{im\theta} \frac{\partial}{\partial \theta} - \frac{\partial}{\partial r} \left(\psi (r) e^{im\theta}\right) \frac{\partial}{\partial r}\, ,
\]
which turns into
\[
\nabla \phi^\perp = - \frac{mi}{r} \psi (r) e^{im\theta} \frac{\partial}{\partial r} - \frac{1}{r} \frac{\partial}{\partial r} \left(\psi (r) e^{im\theta}\right) \frac{\partial}{\partial \theta}\, .
\]
Since $\bar\Omega (x) = g(r)$, we then conclude that
\[
- (K_2*\Omega\cdot \nabla) \bar\Omega = \frac{mi}{r} \psi (r) e^{im\theta} g' (r)\, .
\]
Upon multiplication by $i$ we obtain formula \eqref{e:explicit2}. Since we know from the previous chapter that $\mathscr{K}$ is a bounded and compact operator and $\mathscr{K}_m$ is just the restriction of $i\mathscr{K}$ to a closed invariant subspace, the boundedness and compactness of $\mathscr{K}_m$ are obvious.
\end{proof}
Notice next that, while in all the discussion so far we have always assumed that $m$ is an integer larger than $1$, the operator $\mathcal{S}_m$ can in fact be easily defined for every {\em real} $m>1$, while, using the formulae \eqref{e:explicit2} and \eqref{e:explicit3} we can also make sense of $\mathscr{K}_m$ for every real $m>1$. In particular we can define as well the operator $\mathcal{L}_m$ for every $m>1$. The possibility of varying $m$ as a real parameter will play a crucial role in the rest of the chapter, and we start by showing that, for $m$ in the above range, the boundedness of $\mathcal{L}_m$ and $\mathcal{S}_m$ and the compactness of $\mathscr{K}_m$ continue to hold.
\begin{proposition}\label{p:all-m}
The operators $\mathcal{L}_m$, $\mathcal{S}_m$, and $\mathscr{K}_m$ are bounded operators from $\mathcal{H}$ to $\mathcal{H}$ for every real $m>1$, with a uniform bound on their norms if $m$ ranges in a compact set. Moreover, under the same assumption $\mathscr{K}_m$ is compact. In particular:
\begin{itemize}
\item[(i)] ${\rm spec}\, (\mathcal{L}_m)$ is compact;
\item[(ii)] for every $z$ with ${\rm Im}\, z \neq 0$ the operator $\mathcal{L}_m-z$ is a bounded Fredholm operator with index $0$;
\item[(iii)] every $z\in {\rm spec}\, (\mathcal{L}_m)$ with ${\rm Im}\, z \neq 0$ belongs to the discrete spectrum.
\end{itemize}
\end{proposition}
\begin{proof} The boundedness of $\mathcal{S}_m$ is obvious. Granted the boundedness and compactness of $\mathscr{K}_m$, which we prove below, (i) follows immediately from the boundedness of $\mathcal{L}_m$, while (ii) follows immediately from the fact that $\mathcal{L}_m - z$ is a compact perturbation of the operator $\mathcal{S}_m -z$, which is invertible because $\mathcal{S}_m$ is self-adjoint, and (iii) is a standard consequence of (ii).
Let us then prove that $\mathscr{K}_m$ is bounded (a proof is necessary because the argument given previously only yields boundedness and compactness for {\em integer} values of $m$ larger than $1$). We first observe that $\| r^{-1}\psi\|_\infty \leq C \|\gamma\|_{\mathcal{H}}$, as follows from Cauchy-Schwarz:
\begin{align*}
r^{m-1} \int_r^\infty |\gamma (s)| s^{1-m}\, ds&\leq
r^{m-1} \left(\int_r^\infty |\gamma(s)|^2 s\, ds\right)^{\frac{1}{2}} \left(\int_r^\infty s^{1-2m}\, ds\right)^{\frac{1}{2}}
\leq \frac{1}{\sqrt{2m-2}} \|\gamma\|_{\mathcal{H}}\\
r^{-m-1} \int_0^r |\gamma (s)| s^{1+m}\, ds &\leq r^{-m-1} \left(\int_0^r |\gamma (s)|^2 s\, ds\right)^{\frac{1}{2}} \left(\int_0^r s^{1+2m}\, ds\right)^{\frac{1}{2}}
\leq \frac{1}{\sqrt{2m+2}} \|\gamma\|_{\mathcal{H}}\, .
\end{align*}
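In particular, recalling \eqref{e:explicit3}, the two estimates above combine into the pointwise bound
\[
\frac{|\psi (r)|}{r} \leq \frac{1}{2m} \left( \frac{1}{\sqrt{2m-2}} + \frac{1}{\sqrt{2m+2}} \right) \|\gamma\|_{\mathcal{H}} \leq C \|\gamma\|_{\mathcal{H}} \qquad \mbox{for every $r>0$,}
\]
with a constant $C$ which is uniform when $m$ ranges in a compact subset of $]1, \infty[$.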
Since $|g' (r)| \leq C (1+r)^{-1-{\bar\alpha}}$, it follows immediately that
\begin{equation}\label{e:K_m-pointwise-bound}
|(\mathscr{K}_m (\gamma)) (r)| \leq \frac{C\|\gamma\|_{\mathcal{H}}}{(1+r)^{1+{\bar\alpha}}}
\end{equation}
and in particular
\[
\|\mathscr{K}_m (\gamma)\|_{\mathcal{H}} \leq
C\|\gamma\|_{\mathcal{H}} \left(\int_0^\infty \frac{s}{(1+s)^{2+2{\bar\alpha}}}\, ds\right)^{\frac{1}{2}}
\leq C \|\gamma\|_{\mathcal{H}} \, .
\]
This completes the proof of boundedness of the operator. In order to show compactness consider now a bounded sequence $\{\gamma_k\}\subset \mathcal{H}$. Observe that for every fixed $N$, \eqref{e:explicit3} gives the following obvious bound
\begin{equation}
\|\mathscr{K}_m(\gamma_k)\|_{W^{1,2} [N^{-1}, N]} \leq C (N) \|\gamma_k\|_{\mathcal{H}}\, .
\end{equation}
In particular, through a standard diagonal procedure, we can extract a subsequence of $\{\mathscr{K}_m(\gamma_k)\}$ (not relabeled) which converges strongly in $L^2 ([N^{-1}, N], rdr)$ for every $N$. It is now easy to show that $\{\mathscr{K}_m (\gamma_k)\}_k$ is a Cauchy sequence in $\mathcal{H}$. Fix indeed $\varepsilon>0$. Using \eqref{e:K_m-pointwise-bound} it is easy to show that there is a sufficiently large $N$ with the property that
\begin{equation}\label{e:Cauchy-1}
\sup_k \|\mathscr{K}_m (\gamma_k) \mathbf{1}_{[0, N^{-1}]\cup [N, \infty[}\|_{\mathcal{H}} < \frac{\varepsilon}{3}\, .
\end{equation}
Hence, given such an $N$, we can choose $k_0$ big enough so that
\begin{equation}\label{e:Cauchy-2}
\|(\mathscr{K}_m (\gamma_k) - \mathscr{K}_m (\gamma_j)) \mathbf{1}_{[N^{-1}, N]}\|_{\mathcal{H}} \leq
\frac{\varepsilon}{3} \qquad \forall k,j \geq k_0\, .
\end{equation}
Combining \eqref{e:Cauchy-1} and \eqref{e:Cauchy-2} we immediately conclude
\[
\|\mathscr{K}_m (\gamma_k) - \mathscr{K}_m (\gamma_j)\|_{\mathcal{H}} < \varepsilon
\]
for every $j,k \geq k_0$. This completes the proof that $\{\mathscr{K}_m (\gamma_k)\}_k$ is a Cauchy sequence and hence that $\mathscr{K}_m$ is compact.
\end{proof}
\section{The eigenvalue equation and the class \texorpdfstring{$\mathscr{C}$}{scrC}}
\label{s:eigenvalue-equation}
Using the operators introduced in the previous section, we observe that Theorem \ref{thm:spectral3} is equivalent to showing that ${\rm spec}\, (\mathcal{L}_{m_0}) \cap \{{\rm Im}\, z>0\}$ is finite and nonempty.
We next notice that, thanks to Proposition \ref{p:all-m}, the latter is equivalent to showing that the equation\footnote{Recall that $\psi$ is defined through \eqref{e:explicit3}.}
\begin{equation}\label{e:eigenvalue-equation}
m \zeta \gamma - \frac{m}{r} g' \psi = z \gamma
\end{equation}
has a nontrivial solution $\gamma \in \mathcal{H}$ for some integer $m=m_0\geq 2$ and some complex number $z$ with positive imaginary part.
We thus turn \eqref{e:eigenvalue-equation} into an ODE problem by changing the unknown from $\gamma$ to the function $\psi$.
In particular, recall that the relation between the two is that $\Delta (\psi (r) e^{im\theta}) = \gamma (r) e^{im\theta}$, and $\psi e^{im\theta}$ is in fact the potential-theoretic solution. We infer that
\[
\psi'' + \frac{1}{r} \psi' - \frac{m^2}{r^2} \psi = \gamma\,
\]
and hence \eqref{e:eigenvalue-equation} becomes
\begin{equation}\label{e:eigenvalue-equation-2}
- \psi'' - \frac{1}{r}\psi' + \frac{m^2}{r^2} \psi + \frac{g'}{r (\zeta -m^{-1} z)} \psi = 0 \, .
\end{equation}
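Indeed, substituting $\gamma = \psi'' + \frac{1}{r} \psi' - \frac{m^2}{r^2} \psi$ in \eqref{e:eigenvalue-equation} and dividing by $m$ we obtain
\[
\left( \zeta - \frac{z}{m} \right) \left( \psi'' + \frac{1}{r} \psi' - \frac{m^2}{r^2} \psi \right) = \frac{g'}{r}\, \psi\, ,
\]
and a further division by $-(\zeta - \frac{z}{m})$, which never vanishes because $z$ has nonzero imaginary part while $\zeta$ is real, gives \eqref{e:eigenvalue-equation-2}.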
Notice that, by classical estimates for ODEs, $\psi \in W^{2,2}_{\text{loc}} (\mathbb R^+)$.
Observe, moreover, that if $\psi\in L^2 (\frac{dr}{r})\cap W^{2,2}_{\text{loc}}$ solves \eqref{e:eigenvalue-equation-2} and $z$ has nonzero imaginary part, it follows that
\[
\gamma = \frac{mg'}{r (m \zeta -z)} \psi
\]
belongs to $L^2 (r dr)$ and solves \eqref{e:eigenvalue-equation}, because the function $\frac{mg'}{m \zeta -z}$ is bounded. Vice versa, assume that $\gamma \in L^2 (r dr)$ solves \eqref{e:eigenvalue-equation}. Then $\psi$ solves \eqref{e:eigenvalue-equation-2} and we claim that $\psi\in L^2 (\frac{dr}{r})\cap W^{2,2}_{\text{loc}}$. First of all notice that, by classical Calder{\'o}n-Zygmund estimates, $\phi (x) := \psi (r) e^{im\theta}$ is a $W^{2,2}_{\text{loc}}$ function of $\mathbb R^2$. As such $\phi\in C^\omega (B_1)$ for every $\omega<1$ and therefore $\psi\in C^\omega ([0,1])$ and, by symmetry considerations, $\psi (0) =0$. Thus $|\psi (r)|\leq C r^\omega$ for every $r\in [0,1]$, which easily shows that $\psi\in L^2 ([0,1], \frac{dr}{r})$. It remains to show that
\begin{equation}\label{e:correzione-1}
\int_1^\infty \frac{|\psi (r)|^2}{r}\, dr < \infty\, .
\end{equation}
However recall that, for $r$ sufficiently large, $\zeta (r) = \frac{c_0}{r^2}+ \frac{c_1}{r^{\bar\alpha}}$ for some constants $c_0$ and $c_1$, while $g' (r) = -{\bar\alpha} r^{1+{\bar\alpha}}$. We thus infer
\[
|\psi (r)| = \left|\frac{r (\zeta (r)-\frac{z}{m})}{g' (r)}\right|\, |\gamma (r)| \leq \frac{C|\gamma (r)|}{r^{\bar\alpha}}\, ,
\]
which in turn easily implies \eqref{e:correzione-1} because $\int_1^\infty |\gamma (r)|^2 r\, dr < \infty$.
Hence our problem is equivalent to understanding for which $m$ and $z$ with positive imaginary part there is an $L^2 (\frac{dr}{r})\cap W^{2,2}_{\text{loc}}$ solution of \eqref{e:eigenvalue-equation-2}. The next step is to change variables to $t = \ln r$: we set $\varphi (t) = \psi (e^t)$, namely, $\psi (r) = \varphi (\ln r)$. The condition $\psi\in L^2 (\frac{dr}{r})$ then translates into $\varphi\in L^2 (\mathbb R)$, and $\psi\in W^{2,2}_{\text{loc}}$ into $\varphi\in W^{2,2}_{\text{loc}}$.
Moreover, if we substitute the complex number $z$ with $\frac{z}{m}$ we can rewrite
\begin{equation}\label{e:eigenvalue-equation-3}
- \varphi'' (t) + m^2 \varphi (t) + \frac{A(t)}{\Xi (t) - z} \varphi (t) = 0\, ,
\end{equation}
which is \emph{Rayleigh's stability equation},
where the functions $A$ and $\Xi$ are given by changing variables in the corresponding functions $g'$ and
$\zeta$:
\begin{align}
A (t) &= \frac{d}{dt} g(e^t)\\
\Xi (t) &= \int_{-\infty}^t e^{-2 (t-\tau)} g (e^\tau)\, d\tau\, .
\end{align}
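A quick way to check the change of variables: from $\psi (r) = \varphi (\ln r)$ we get $\psi' (r) = \frac{1}{r} \varphi' (\ln r)$ and $\psi'' (r) = \frac{1}{r^2} \left( \varphi'' (\ln r) - \varphi' (\ln r)\right)$, so that multiplying \eqref{e:eigenvalue-equation-2} by $r^2$ turns it into
\[
- \varphi'' (t) + m^2 \varphi (t) + \frac{e^t g' (e^t)}{\zeta (e^t) - \frac{z}{m}}\, \varphi (t) = 0\, .
\]
On the other hand $A (t) = \frac{d}{dt} g (e^t) = e^t g' (e^t)$, while the substitution $s = e^\tau$ shows that $\Xi (t) = e^{-2t} \int_0^{e^t} g (s)\, s\, ds = \zeta (e^t)$; after relabeling $\frac{z}{m}$ as $z$ we thus arrive at \eqref{e:eigenvalue-equation-3}.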
Note in particular that we can express $A$ and $\Xi$ through the relation
\begin{equation}\label{e:A-Xi}
A = \Xi'' + 2 \Xi'\, .
\end{equation}
The function $g$ (and so our radial function $\bar\Omega$) can be expressed in terms of $\Xi$ through the formula
\begin{equation}\label{e:formula-g}
g (e^t) = e^{-2t} \frac{d}{dt} (e^{2t} \Xi (t))\, .
\end{equation}
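Both \eqref{e:A-Xi} and \eqref{e:formula-g} follow by differentiating the definition of $\Xi$: indeed $\Xi' (t) = g (e^t) - 2 \Xi (t)$, which is a rewriting of \eqref{e:formula-g}, and differentiating once more we get $\Xi'' = \frac{d}{dt} g (e^t) - 2 \Xi' = A - 2 \Xi'$, which is \eqref{e:A-Xi}.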
Rather than looking for $g$ we will then look for $\Xi$ in an appropriate class $\mathscr{C}$ which we next detail:
\begin{definition}\label{d:class-C}
The class $\mathscr{C}$ consists of those functions $\Xi: \mathbb R \to ]0, \infty[$ such that
\begin{itemize}
\item[(i)] $\Xi (-\infty) := \lim_{t\to - \infty} \Xi (t)$ is finite and there are constants $c_0>0$ and $M_0$ such that $\Xi (t) = \Xi (-\infty) - c_0 e^{2t}$ for all $t\leq M_0$;
\item[(ii)] there is a constant $c_1$ such that $\Xi (t) = c_1 e^{-2t} + \frac{1}{2-{\bar\alpha}} e^{-{\bar\alpha} t}$ for $t\geq \ln 2$;
\item[(iii)] $A$ has exactly two zeros, denoted by $a<b$, and $A' (a)>0$ and $A' (b)<0$ (in particular $A<0$ on $]-\infty,a[ \cup ]b, \infty[$ and $A>0$ on $]a,b[$);
\item[(iv)] $\Xi ' (t) <0$ for every $t$.
\end{itemize}
\end{definition}
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{Figures/fig1.pdf}
\caption{A sketch of the function in the class $\mathscr{C}$ which will be finally chosen in Section \ref{s:choice-of-A} to prove Theorem \ref{thm:spectral3}, in the variable $t= \log r$. The graph of $A(t)$ is the solid curve, $G(t):=\Xi'(t)+2\Xi(t)$ the dashed one, and $\Xi'(t)$ the dotted one. Even though $A$ is smooth, its derivative undergoes a very sharp change around the point $t = \frac{1}{2}$ and the point $t= -\frac{1}{\sqrt{B}}$, where $B$ is an appropriately large constant, cf. Section \ref{s:choice-of-A}.}
\label{fig:fig1}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.75\textwidth]{Figures/fig2.pdf}
\caption{The profile of the background vorticity $\bar \Omega(x) = g(r)$ in the original coordinates (the solid curve). Compare with the exact singular profile $r^{-{\bar\alpha}}$ (the dashed curve).}
\label{fig:fig2}
\end{figure}
Fix $\Xi\in \mathscr{C}$. By \eqref{e:formula-g}, $g$ is then smooth, it equals $2\Xi (-\infty)- 4 c_0 r^2$ in a neighborhood of $0$, and it is equal to $r^{-{\bar\alpha}}$ for $r\geq 2$, thanks to the conditions (i)-(ii). In particular the corresponding function $\bar \Omega (x) = g (|x|)$ satisfies the requirements of Theorem \ref{thm:spectral3}. We are now ready to turn Theorem \ref{thm:spectral3} into a (in fact stronger) statement for the eigenvalue equation \eqref{e:eigenvalue-equation-3}. In order to simplify its formulation and several other ones in the rest of these notes, we introduce the following sets
\begin{definition}\label{d:sets-U-m}
Having fixed $\Xi\in \mathscr{C}$ and a real number $m>1$, we denote by $\mathscr{U}_m$ the set of those complex $z$ with positive imaginary part with the property that there are nontrivial solutions $\varphi\in L^2\cap W^{2,2}_{\text{loc}} (\mathbb R, \mathbb C)$ of \eqref{e:eigenvalue-equation-3}.
\end{definition}
\begin{remark}\label{r:m-factor-eigenvalue}
Observe that $z$ belongs to $\mathscr{U}_m$ if and only if it has positive imaginary part and $m z$ is an eigenvalue of $\mathcal{L}_m$.
\end{remark}
\begin{theorem}\label{thm:spectral5}\label{THM:SPECTRAL5}
There is a function $\Xi\in \mathscr{C}$ and an integer $m_0\geq 2$ such that $\mathscr{U}_{m_0}$ is finite and nonempty.
\end{theorem}
\section{Overview of the proof of Theorem \ref{thm:spectral5}}\label{sec:overviewspectralthm}
The rest of the chapter is devoted to proving Theorem \ref{thm:spectral5}. The proof will be achieved through a careful study of Rayleigh's stability equation~\eqref{e:eigenvalue-equation-3} and, in particular, the set $\mathscr{P}$ of pairs $(m,z)$ such that $z\in \mathscr{U}_m$ and $m>1$, i.e.,
\begin{equation}\label{e:def-pairs}
\mathscr{P}:= \left\{(m,z)\in \mathbb R \times \mathbb C: z\in \mathscr{U}_m, m>1\right\}\, .
\end{equation}
Given that $\Xi$ is strictly decreasing, we have
\[
\lim_{t\to -\infty} \Xi (t) > \Xi (a) > \Xi (b) > \lim_{t\to \infty} \Xi (t) = 0
\]
and in order to simplify our notation we will use $\Xi (-\infty)$ for $\lim_{t\to -\infty} \Xi (t)$ and, occasionally, $\Xi (\infty)$ for $0$.
The first step in the proof of Theorem \ref{thm:spectral5} is understanding which pairs $(m,z)$ belong to the closure of $\mathscr{P}$ and have ${\rm Im}\, z =0$. Solutions $(m,z,\varphi)$ to~\eqref{e:eigenvalue-equation-3} with $(m,z) \in \overline{\mathscr{P}}$ are sometimes called \emph{neutral limiting modes}~\cite{LinSIMA2003}.\footnote{The interested reader may compare with the strategy for bounded shear flows in~\cite{LinSIMA2003}.} To that end, it is convenient to introduce the following two self-adjoint operators:
\begin{align}
L_a &:= -\frac{d^2}{dt^2} + \frac{A(t)}{\Xi (t) - \Xi (a)}\label{e:def-L_a}\\
L_b &:= -\frac{d^2}{dt^2} + \frac{A(t)}{\Xi (t) - \Xi (b)}\label{e:def-L_b}\, .
\end{align}
Thanks to the definition of the class $\mathscr{C}$, it is easy to see that both functions $\frac{A(t)}{\Xi (t) - \Xi (a)}$ and $\frac{A(t)}{\Xi (t) - \Xi (b)}$ are bounded and that $\frac{A(t)}{\Xi (t) - \Xi (a)} < \frac{A(t)}{\Xi (t) - \Xi (b)}$. Moreover, the first is negative on $]-\infty, b[$ and positive on $]b, \infty[$, while the second is negative on $]-\infty, a[$ and positive on $]a, \infty[$. Recall that the spectra of these operators are necessarily real and denote by $-\lambda_a$ and $-\lambda_b$ the smallest elements of their respective spectra: observe that, by the Rayleigh quotient characterization, $-\lambda_a < -\lambda_b$.
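Here we are using the variational characterization of the bottom of the spectrum, namely
\[
-\lambda_a = \inf \left\{ \int_{\mathbb R} \left( |\varphi'|^2 + \frac{A}{\Xi - \Xi (a)} |\varphi|^2 \right) dt \,:\, \varphi \in W^{1,2} (\mathbb R),\ \|\varphi\|_{L^2} = 1 \right\}
\]
and the analogous formula for $-\lambda_b$: since $\frac{A}{\Xi - \Xi (a)} < \frac{A}{\Xi - \Xi (b)}$ pointwise, every competitor has strictly smaller energy for $L_a$ than for $L_b$; this yields $-\lambda_a \leq -\lambda_b$ in general, with strict inequality as soon as the bottom of the spectrum of $L_b$ is attained by an eigenfunction.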
The following proposition characterizes the possible neutral limiting modes:
\begin{proposition}\label{p:3+4}\label{P:3+4}
If $(m_0,z)\in \overline{\mathscr{P}}$ and ${\rm Im}\, z =0$, then either $z= \Xi (a)$ or $z= \Xi (b)$. Moreover, in either case, if $m_0>1$ then necessarily $m_0 = \sqrt{\lambda_a}$ or $m_0 = \sqrt{\lambda_b}$. Assume in addition that $- \lambda_a < -1$. Then, for $z = \Xi (a)$, the unique $m\geq 1$ such that \eqref{e:eigenvalue-equation-3} has a nontrivial solution $\psi_a\in L^2$ is $m_a = \sqrt{\lambda_a}$. Moreover, any nontrivial solution has the property that $\psi_a (a) \neq 0$.
\end{proposition}
\begin{remark}\label{r:b-also}
We remark that the exact same argument applies with $b$ in place of $a$ when $\lambda_b >1$, even though this fact does not play any role in the rest of the notes.
\end{remark}
Observe that this does not yet show that $(m_a, \Xi (a))\in \overline{\mathscr{P}}$ corresponds to a neutral limiting mode. The latter property will be achieved in a second step, in which we seek a curve of unstable modes emanating from $(m_a, \Xi(a))$:
\begin{proposition}\label{p:5-7}\label{P:5-7}
Assume $- \lambda_a<-1$ and let $m_a=\sqrt{\lambda_a}$.
There are positive constants $\varepsilon >0$ and $\delta>0$ with the following property:
For every $h\in ]0, \delta[$, $\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a)) \neq \emptyset$.
\end{proposition}
\begin{remark}\label{r:b-also-2} In fact, the argument given for the proposition proves the stronger conclusion that $\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a))$ consists of a single point $z$, with the property that $mz$ is an eigenvalue of $\mathcal{L}_m$ with geometric multiplicity $1$.
Moreover, the very same argument applies with $b$ in place of $a$ and $h \in ]-\delta,0[$ if $\lambda_b >1$.
\end{remark}
Combined with some further analysis, in which the curve of unstable modes is continued, the latter proposition will allow us to conclude the following:
\begin{proposition}\label{p:almost-final}\label{P:ALMOST-FINAL}
Assume $- \lambda_a<-1$, let $m_a = \sqrt{\lambda_a}$ and set $m_b:= \sqrt{\max \{1, \lambda_b\}}$: then
$\mathscr{U}_m\neq \emptyset$ for every $m\in ]m_b, m_a[$. \end{proposition}
Thus far, we have not selected our function $\Xi$: the above properties are valid for any element in the class $\mathscr{C}$. The choice of $\Xi$ comes in the very last step.
\begin{proposition}\label{p:final}
There is a choice of $\Xi\in \mathscr{C}$ with the property that $]m_b,m_a[$ contains an integer larger than $1$.
\end{proposition}
Clearly, the combination of Proposition \ref{p:almost-final} and Proposition \ref{p:final} gives Theorem \ref{thm:spectral5}: we first choose $\Xi$ as in Proposition \ref{p:final} and hence we select $m_0$ as the largest natural number which belongs to the interval $]m_b,m_a[$; the properties claimed in Theorem \ref{thm:spectral5} follow then from Proposition \ref{p:almost-final}. The proof of Proposition \ref{p:final} is in fact a rather straightforward application of the following.
\begin{lemma}\label{l:bottom}\label{L:BOTTOM}
Let $m_0$ be any integer. Then there exists $\Xi\in \mathscr{C}$ with $a=0$ and $b=\frac{1}{2}$ such that the smallest eigenvalue of the operator $L_a$ is smaller than $-m_0^2$.
\end{lemma}
\begin{remark}\label{rmk:veryunstablemodes}
A consequence of Lemma~\ref{l:bottom} is that the most unstable wavenumber $m_0$ can be made arbitrarily large. Only $m_0 \geq 2$ is necessary to prove non-uniqueness.
\end{remark}
The rest of the chapter will be devoted to proving Propositions \ref{p:3+4} and \ref{p:5-7} and Lemma \ref{l:bottom}. We finish this section by giving the simple proof of Proposition \ref{p:final}.
\begin{proof} For simplicity we fix $a=0$ and $b=\frac{1}{2}$ and we look at the set of functions $\Xi$ with this particular choice of zeros for $A$. We then denote by $L_{\Xi, a}$ the operator in \eqref{e:def-L_a}. We fix an arbitrary $\Xi_0\in \mathscr{C}$ and let $-\lambda (0)$ be the smallest eigenvalue of $L_{\Xi_0,a}$. We then consider the smallest integer $m_0\geq 3$ such that $m_0^2 > \lambda (0)$. By Lemma \ref{l:bottom} there is an element $\Xi_1\in \mathscr{C}$ with the property that $a=0$, $b=\frac{1}{2}$ and, if $-\lambda (1)$ is the smallest element of the spectrum of $L_{\Xi_1, a}$, then $-\lambda (1) < -m_0^2$. For $\sigma\in [0,1]$ consider $L_{\Xi_\sigma,a}$ where
\[
\Xi_\sigma = (1-\sigma) \Xi_0 + \sigma \Xi_1\,
\]
and observe that $\Xi_\sigma \in \mathscr{C}$ for every $\sigma\in [0,1]$.
Since $\sigma \mapsto \Xi_\sigma$ is continuous with respect to uniform convergence, by the Rayleigh quotient characterization the smallest element $-\lambda (\sigma)$ of the spectrum of $L_{\Xi_\sigma,a}$ is a continuous function of $\sigma$. There is thus some $\sigma\in [0,1[$ with $\lambda (\sigma)= m_0^2$. Let $\sigma_0$ be the largest $\sigma$ with $\lambda (\sigma)= m_0^2$. Observe now that, if we let $- \mu (\sigma_0)$ be the smallest eigenvalue of $L_{\Xi_{\sigma_0}, b}$, then $\mu (\sigma_0) < m_0^2$. In addition, $\sigma\mapsto \mu (\sigma)$ is also continuous, and thus there is $h>0$ such that $\mu (\sigma) < m_0^2$ for all $\sigma\in [\sigma_0-h, \sigma_0+h]$. On the other hand $\lambda (\sigma_0+h)> m_0^2$. This shows that $m_b < m_0 < m_a$ if we choose $\Xi= \Xi_{\sigma_0+h}$, completing the proof of our claim.
\end{proof}
\section{ODE Lemmas}
An essential tool in the proofs of the Propositions \ref{p:3+4} and \ref{p:5-7} are the following two ODE lemmas.
\begin{lemma}\label{l:ODE1}
Let $m>0$. For every $f\in L^2 (\mathbb R)$ there is a unique $\psi\in L^2(\ensuremath{\mathbb R}) \cap W^{2,2}_{\text{loc}}$ such that
\begin{equation}\label{e:Laplacian-1d}
-\frac{d^2\psi}{dt^2} + m^2\psi = f
\end{equation}
and it is given by
\begin{equation}\label{e:potential-1d}
\psi (t) = \frac{1}{2m} \int_{\ensuremath{\mathbb R}} e^{-m|t-\tau|} f (\tau)\, d\tau\, .
\end{equation}
\end{lemma}
\begin{proof} The lemma is a classical fact. At any rate, the verification that
$\psi$ as in \eqref{e:potential-1d} solves \eqref{e:Laplacian-1d} is an elementary computation while, since obviously $e^{-m|t|}\in L^1$, Young's inequality for convolutions gives $\psi\in L^2$ whenever $f\in L^2$. Moreover, any other solution $\hat\psi$ of \eqref{e:Laplacian-1d} must satisfy $\hat\psi (t) = \psi (t) + C_+ e^{mt} + C_- e^{-mt}$ for some constants $C_\pm$, and the requirement $\hat\psi\in L^2$ immediately implies $C_+=C_-=0$.
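Explicitly, writing $2m \psi (t) = e^{-mt} \int_{-\infty}^t e^{m\tau} f (\tau)\, d\tau + e^{mt} \int_t^\infty e^{-m\tau} f (\tau)\, d\tau$, the boundary terms cancel in the first differentiation and
\[
\psi' (t) = \frac{1}{2} \left( e^{mt} \int_t^\infty e^{-m\tau} f (\tau)\, d\tau - e^{-mt} \int_{-\infty}^t e^{m\tau} f (\tau)\, d\tau \right)\, ,
\]
while a second differentiation produces $\psi'' = m^2 \psi - f$, i.e. \eqref{e:Laplacian-1d}.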
\end{proof}
The second ODE Lemma is the following:
\begin{lemma}\label{l:ODE2}
Let $v\in L^1 (\mathbb R, \mathbb C)$. Then for every constant $c_-$ there is a unique solution $y \in W^{2,1}_{\text{loc}} (\mathbb R, \mathbb C)$ of
\begin{equation}\label{e:ODE2}
- \frac{d^2y}{dt^2} + (m^2 + v) y =0
\end{equation}
with the property that
\begin{equation}\label{e:y=e^mt}
\lim_{t\to - \infty} e^{-mt} y (t) =c_-\, .
\end{equation}
Moreover we have $y(t) = e^{mt} (c_-+z(t))$ for a function $z(t)$ which satisfies the bounds
\begin{align}
|z(t)| &\leq |c_-|\left[\exp \left(\frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds\right) -1\right]\label{e:est-z}\\
|z'(t)| &\leq 2m |c_-|\left[\exp \left(\frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds\right) -1\right]\label{e:est-z'}
\end{align}
A symmetric statement, left to the reader, holds for solutions such that
\begin{equation}\label{e:y=e^mt-plus}
\lim_{t\to \infty} e^{mt} y (t) =c_+\, .
\end{equation}
\end{lemma}
Important consequences of the above Lemmas are the following:
\begin{corollary}\label{c:decay}
If $(m,z)\in \mathscr{P}$, then the space of solutions $\varphi\in L^2\cap W^{2,2}_{\text{loc}}$ of \eqref{e:eigenvalue-equation-3} is $1$-dimensional. Moreover for any such $\varphi$ there is a constant $C$ with the property that
\begin{align}
|\varphi (t)| &\leq C e^{-m|t|}\,
\end{align}
and there are two constants $C_+$ and $C_-$ such that
\begin{align}
\lim_{t\to\infty} e^{mt} \varphi (t) &= C_+\\
\lim_{t\to -\infty} e^{-mt} \varphi (t) &= C_-\, .
\end{align}
The constants are either both nonzero or both zero, in which case $\varphi$ vanishes identically.
The same conclusions apply if $m>1$, $z\in \{\Xi (a), \Xi (b)\}$ and $\varphi$ solves \eqref{e:eigenvalue-equation-3}.
\end{corollary}
\begin{proof}
Observe that $|\Xi (t)-z|\geq |{\rm Im}\, z|$, while $A (t) = -8 c_0 e^{2t}$ for $-t$ sufficiently large and $|A(t)|\leq 2 e^{-2{\bar\alpha} t}$ for $t$ sufficiently large. In particular
\begin{equation}\label{e:estimate-A-over-Xi}
\frac{|A(t)|}{|\Xi (t)-z|} \leq C e^{-2{\bar\alpha} |t|}\, .
\end{equation}
First of all notice that, if $\varphi\in L^2\cap W^{2,2}_{\text{loc}}$ solves \eqref{e:eigenvalue-equation-3}, by Lemma \ref{l:ODE1} (applied with $f= -\frac{A\varphi}{\Xi-z}$) we have
\begin{equation}\label{e:integral-equation}
|\varphi (t)| \leq \frac{C}{2m} \int e^{-m|t-\tau|} e^{-2{\bar\alpha} |\tau|} |\varphi (\tau)|\, d\tau\, .
\end{equation}
Using Cauchy-Schwarz and the fact that $\varphi\in L^2$ we immediately obtain that $\varphi\in L^\infty$, namely, that there is a constant $C$ such that $|\varphi|\leq C$. We now prove inductively that $|\varphi (t)|\leq C_k e^{-k{\bar\alpha} |t|}$ as long as $k{\bar\alpha} \leq m$. The case $k=0$ has already been shown. Assume thus that the inequality holds for $k-1$ and that $k{\bar\alpha} \leq m$. We then observe that
\begin{align*}
e^{-m|t-\tau|} e^{-2{\bar\alpha} |\tau|} |\varphi (\tau)| &\leq C_{k-1} e^{- m|t-\tau| - k{\bar\alpha} |\tau|} e^{-{\bar\alpha} |\tau|}
\leq C_{k-1} e^{-k{\bar\alpha} (|t-\tau| + |\tau|)} e^{-{\bar\alpha} |\tau|}\\
&\leq C_{k-1} e^{-k{\bar\alpha} |t|} e^{-{\bar\alpha} |\tau|}\, .
\end{align*}
Inserting in \eqref{e:integral-equation} and using that $e^{-{\bar\alpha} |\tau|}\in L^1$ we then obtain $|\varphi (t)|\leq C_k e^{-k{\bar\alpha} |t|}$. Assuming now $k{\bar\alpha} \leq m < (k+1) {\bar\alpha}$ we can, likewise, bound
\[
e^{-m|t-\tau|} e^{-2{\bar\alpha} |\tau|} |\varphi (\tau)| \leq C_k e^{- m|t-\tau| - (k+1){\bar\alpha} |\tau|} e^{-{\bar\alpha} |\tau|} \leq C_k e^{-m|t|} e^{-{\bar\alpha} |\tau|}
\]
and plugging into \eqref{e:integral-equation} one last time we conclude $|\varphi (t)|\leq C e^{-m|t|}$.
In order to show that $\varphi$ is unique up to a multiplicative constant, it suffices to show that $\lim_{t\to -\infty} e^{-mt} \varphi (t)$ exists and is finite. Indeed, Lemma \ref{l:ODE2} then implies that the solution is uniquely determined by $C_-$, and that the latter must be nonzero, since otherwise $\varphi$ would vanish identically.
In order to show existence and finiteness of $C_-$ rewrite
\[
\varphi (t) = -\frac{e^{mt}}{2m} \int_t^\infty e^{-ms} \frac{A(s)}{\Xi (s) -z} \varphi (s)\, ds
- \frac{e^{-mt}}{2m} \int_{-\infty} ^t e^{m s} \frac{A(s)}{\Xi (s) -z} \varphi (s)\, ds\, .
\]
Since by our estimates both $e^{-ms} \frac{A(s)}{\Xi (s) -z} \varphi (s)$ and $e^{m s} \frac{A(s)}{\Xi (s) -z} \varphi (s)$ are integrable, we conclude that $C_{\pm}$ exist and equal
\begin{align*}
C_\pm = -\frac{1}{2m}\int_{-\infty}^\infty e^{\pm ms} \frac{A(s)}{\Xi (s) -z} \varphi (s)\, ds\, .
\end{align*}
\medskip
As for the last sentence of the statement of the corollary, the same arguments can be used in the case $z\in \{\Xi (a), \Xi (b)\}$, since the crucial point is that, thanks to the assumption that $A (a) = A(b)=0$ and $\Xi' (a) \neq 0 \neq \Xi' (b)$, the estimate \eqref{e:estimate-A-over-Xi} remains valid.
\end{proof}
\begin{proof}[Proof of Lemma \ref{l:ODE2}] We distinguish between the cases $c_-\neq 0$ and $c_-=0$. In the case $c_- \neq 0$ we can divide by $c_-$ and reduce the statement to $c_-=1$. For the existence it suffices to look for a solution of \eqref{e:ODE2} which satisfies \eqref{e:y=e^mt} on a half-line of type $]-\infty, T]$ for some $T$. Such a solution then has a $W^{2,1}_{\text{loc}}$ continuation on $[T, \infty[$ by standard ODE theory. Likewise, uniqueness is settled once we show that it holds on $]-\infty, T]$. Observe next that, if the solution exists, we clearly conclude that $\frac{d^2 y}{dt^2}\in L^1 (]-\infty, T])$, hence implying that
\[
\lim_{t\to-\infty} y' (t)
\]
exists and is finite. On the other hand \eqref{e:y=e^mt} implies that such limit must be $0$.
Let $\tilde{y} (t) = e^{-mt} y (t)$ and observe that we are looking for a solution of
\[
(e^{2mt} \tilde{y}')' = e^{2mt} v \tilde{y}\, .
\]
Integrating the latter identity between $-N$ and $t$ and then letting $N\to \infty$ we conclude
\begin{equation}\label{e:tildey'}
e^{2mt} \tilde{y}' (t) = \int_{-\infty}^t e^{2ms} v (s)\tilde{y} (s)\, ds\, .
\end{equation}
Divide by $e^{2mt}$ and integrate once more to reach
\begin{align*}
\tilde{y} (t) -1 = \int_{-\infty}^t \int_{-\infty}^r e^{2m (s-r)} v(s)\tilde{y} (s)\, ds\, dr
= \frac{1}{2m} \int_{-\infty}^t \big(1-e^{-2m (t-s)}\big) v(s) \tilde{y} (s)\, ds\, ,
\end{align*}
where the second identity follows from Fubini's theorem and $\int_s^t e^{2m (s-r)}\, dr = \frac{1}{2m} \big(1- e^{-2m (t-s)}\big)$.
We then define the transformation
\begin{equation}\label{e:fixed-point}
\mathscr{F} (\tilde{y}) (t) = \frac{1}{2m} \int_{-\infty}^t \big(1-e^{-2m (t-s)}\big) v(s) \tilde{y} (s)\, ds + 1\,
\end{equation}
which we consider as a map from $L^\infty (]-\infty, T])$ into itself.
From our discussion we conclude that $y$ solves \eqref{e:ODE2} and obeys \eqref{e:y=e^mt} if and only if $\tilde{y}$ is a fixed point of $\mathscr{F}$. Choosing $T$ sufficiently small (i.e. close to $-\infty$) so that $\|v\|_{L^1 (]-\infty, T])}\leq m$, we see immediately that $\mathscr{F}$ is a contraction on $L^\infty (]-\infty, T])$ and it thus has a unique fixed point. We have thus shown existence and uniqueness of the solution in question.
Observe now that $z(t) = \tilde{y} (t) -1$ and set
\[
Z(t) := \exp \left(\frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds\right) -1\, .
\]
$Z$ solves the ODE $Z' = \frac{|v|}{2m} Z + \frac{|v|}{2m}$ and, since $\lim_{t\to-\infty} Z(t) =0$, the integral equation
\[
Z (t) = \frac{1}{2m} \int_{-\infty}^t |v(s)| Z(s)\, ds + \frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds\, .
\]
We first want to show that $|z(t)|\leq Z(t)$ on $]-\infty, T]$. We set $\tilde{y}_0 := Z+1$ and define inductively $\tilde{y}_{i+1} = \mathscr{F} (\tilde{y}_i)$. From the above discussion we know that $\tilde{y}_i$ converges uniformly to $\tilde{y}$ and it suffices thus to show that $|\tilde{y}_i -1| \leq Z$ for all $i$.
By definition we have $|\tilde{y}_0-1| = Z$ and thus we need to show the inductive step. We estimate
\begin{align*}
|\tilde{y}_{i+1} (t) -1| &\leq \frac{1}{2m} \int_{-\infty}^t |v(s)| |\tilde{y}_i (s)|\, ds\\
&\leq \frac{1}{2m} \int_{-\infty}^t |v(s)| Z(s)\, ds + \frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds = Z(t)\, .
\end{align*}
We have shown \eqref{e:est-z} on $]-\infty, T]$. In order to extend the inequality to the whole real axis observe first that we can assume, without loss of generality, that $\|v\|_{L^1 (\mathbb R)}>0$, otherwise we trivially have $|\tilde{y} (t)-1| = Z(t) =0$ for all $t$. In particular we can select $T$ so that all of the above holds and at the same time $\|v\|_{L^1 (]-\infty, T])}>0$. This implies $Z(T)>0$. Moreover, by \eqref{e:fixed-point} and $\mathscr{F} (\tilde{y})= \tilde{y}$, either
\begin{align*}
|\tilde{y} (T) -1| &< \frac{1}{2m} \int_{-\infty}^{T} |v(s)| |\tilde{y} (s)|\, ds
\end{align*}
or $|v||\tilde{y}|$ vanishes identically on $]-\infty, T]$. In both cases we conclude $|\tilde{y} (T)-1|< Z(T)$. Consider now $\sup \{t\geq T: |\tilde{y} (t)-1|< Z (t)\}$. This supremum cannot be a finite number $T_0$, because in that case we would have $|\tilde{y} (T_0)-1| = Z(T_0)$, while the same argument leading to the strict inequality $|\tilde{y} (T)-1|< Z(T)$ implies $|\tilde{y} (T_0)-1|< Z (T_0)$.
Having shown \eqref{e:est-z} we now come to \eqref{e:est-z'}. Recalling \eqref{e:tildey'} we have
\begin{align*}
|z'(t)| &= \left| \int_{-\infty}^t e^{-2m (t-s)} v (s) (z(s)+1)\, ds \right|\\
&\leq \int_{-\infty}^t e^{-2m (t-s)} |v (s)| Z(s)\, ds + \int_{-\infty}^t e^{-2m (t-s)} |v (s)|\, ds
\leq 2m Z(t)\, .
\end{align*}
We now come to the case $c_- =0$. In that case we need to show that the unique solution is identically~$0$. Arguing as for the case $c_- =1$ we conclude that $\varphi$ is a fixed point of the transformation
\[
\mathscr{F} (\varphi) (t) = \frac{1}{2m} \int_{-\infty}^t \big(1-e^{-2m (t-s)}\big) v(s) \varphi (s)\, ds\, .
\]
Again, for a sufficiently small $T$, $\mathscr{F}$ is a contraction on $L^\infty (]-\infty, T])$ and hence it has a unique fixed point. Since however $0$ is, trivially, a fixed point, we conclude that $\varphi\equiv 0$ on $]-\infty, T]$. Standard ODE theory implies then that $\varphi$ vanishes identically on the whole $\mathbb R$.
\end{proof}
\section{Proof of Proposition \ref{p:3+4}}\label{s:3+4}
We start by showing the last statement of the proposition, namely:
\begin{itemize}
\item[(A)] For $z = \Xi (a)$ and under the assumption that $\lambda_a>1$, the unique $m$ such that \eqref{e:eigenvalue-equation-3} has a nontrivial solution $\psi_a\in L^2$ is $m_a = \sqrt{\lambda_a}$.
\end{itemize}
{\color{red} Before coming to its proof we also observe that the same argument applies with $b$ in place of $a$.}
First of all observe that, for $z=\Xi (a)$, the equation \eqref{e:eigenvalue-equation-3}, which becomes
\begin{equation}\label{e:eigenvalue-equation-again}
-\frac{d^2\varphi}{dt^2} + m^2 \varphi + \frac{A}{\Xi-\Xi (a)} \varphi = 0,
\end{equation}
has nontrivial solutions $\varphi\in W^{2,2}_{\text{loc}}\cap L^2 (\mathbb R; \mathbb C)$ if and only if it has nontrivial solution $\varphi \in W^{2,2}_{\text{loc}} \cap L^2 (\mathbb R;\mathbb R)$. That the equation has a nontrivial solution when $m=\sqrt{\lambda_a}$ follows from the classical theory of self-adjoint operators. We therefore only need to show that the existence of a nontrivial solution is only possible for a single $m\geq 1$. Arguing by contradiction assume there are two, $1\leq m_1< m_2$, and denote by $\psi_1$ and $\psi_2$ the respective solutions. Then there is a nontrivial linear combination
\[
\psi = C_1 \psi_1 + C_2 \psi_2
\]
which vanishes at $a$. Observe that $\psi_1$ and $\psi_2$ can be interpreted as eigenfunctions of the self-adjoint operator $-\frac{d^2}{dt^2} + \frac{A(t)}{\Xi (t)-\Xi (a)} $ relative to distinct eigenvalues and they are, therefore, $L^2$ orthogonal. Summing the equations, multiplying by $\psi$ and integrating by parts we achieve
\begin{equation}\label{e:tested}
\underbrace{\int \left((\psi')^2 +\frac{A}{\Xi-\Xi (a)} \psi^2\right)}_{=:I} = - C_1^2 m_1^2 \int \psi_1^2 - C_2^2 m_2^2 \int \psi_2^2\, .
\end{equation}
Recalling that $A = \Xi'' + 2\Xi' = (\Xi' + 2\Xi)'$, we wish to integrate by parts the second integrand in the left-hand side. Observe that, because $\psi$ vanishes on $a$ and $\Xi' (a) \neq 0$, the function $\frac{\psi^2}{\Xi - \Xi (a)}$ is in fact continuously differentiable. In particular we can write
\[
\int\frac{A}{\Xi-\Xi (a)} \psi^2 = \int \left(\frac{\Xi'+2\Xi}{(\Xi-\Xi(a))^2} \Xi' \psi^2 - 2\frac{\Xi'+2\Xi}{\Xi-\Xi (a)}\psi\psi'\right)\, .
\]
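For the reader's convenience we sketch the computation: since $A = (\Xi'+2\Xi)'$ and $\frac{\psi^2}{\Xi-\Xi (a)}$ is continuously differentiable and decays exponentially at $\pm\infty$ (cf. Corollary \ref{c:decay}), an integration by parts with vanishing boundary terms gives
\[
\int \frac{A}{\Xi-\Xi (a)} \psi^2 = - \int (\Xi'+2\Xi)\, \frac{d}{dt}\left(\frac{\psi^2}{\Xi-\Xi (a)}\right) = - \int (\Xi'+2\Xi) \left(\frac{2\psi\psi'}{\Xi-\Xi (a)} - \frac{\Xi'\psi^2}{(\Xi-\Xi (a))^2}\right)\, ,
\]
which is precisely the identity above.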
Substituting this into $I$, we achieve
\begin{align*}
I &= \int \left(\psi' - \frac{\Xi'}{\Xi-\Xi (a)}\psi\right)^2 + \int \left(\frac{2\Xi\Xi'\psi^2}{(\Xi-\Xi (a))^2} - \frac{4\Xi\psi\psi'}{\Xi-\Xi (a)} \right)\\
&= \int \left(\psi' - \frac{\Xi'}{\Xi-\Xi (a)}\psi\right)^2 + 2 \int \frac{\Xi'}{\Xi-\Xi (a)} \psi^2\, ,
\end{align*}
where to reach the second line we have written the first term in the second integral as
\[
- 2 \Xi \frac{d}{dt} \left(\frac{1}{\Xi-\Xi (a)}\right) \psi^2
\]
and integrated it by parts. Again thanks to the fact that $\psi$ vanishes at $a$ we can write it as $\psi= (\Xi-\Xi (a)) \eta$ and hence conclude
\begin{align*}
I &= \int ((\Xi - \Xi (a))\eta')^2 + \int 2 (\Xi-\Xi (a))\Xi' \eta^2 = \int ((\Xi - \Xi (a))\eta')^2 - 2 \int (\Xi-\Xi (a))^2 \eta\eta'\\
&= \int (\Xi-\Xi(a))^2 (\eta'-\eta)^2 - \int (\Xi-\Xi(a))^2 \eta^2\\
&= \int (\Xi-\Xi(a))^2 (\eta'-\eta)^2 - \int (C_1^2\psi_1^2 + C_2^2\psi_2^2)\, .
\end{align*}
where in the last step we used $(\Xi-\Xi (a))^2\eta^2 = \psi^2$ and the $L^2$ orthogonality of $\psi_1$ and $\psi_2$. Inserting the latter in \eqref{e:tested} we conclude
\[
\int (\Xi-\Xi(a))^2 (\eta'-\eta)^2 = - C_1^2 (m_1^2-1) \int \psi_1^2 - C_2^2 (m_2^2-1) \int \psi_2^2\, .
\]
Observe that, since $m_2>1$ and $\psi_2$ is nontrivial, we conclude that $C_2=0$. This would then imply that $\psi = C_1 \psi_1$ and we can thus assume $C_1=1$ in all our computations. In particular $\eta'=\eta$, which implies $\eta (t) = C e^t$. We can now write $\psi_1 (t) = (\Xi (t)-\Xi(a)) \eta (t)$ and given the properties of $\Xi (t)$ we easily see that this would violate the decay at $+\infty$ that we know for $\psi_1$ from Corollary \ref{c:decay}.
{\color{red}
\begin{remark}\label{r:phi(a)-nonzero}
We record here a consequence of the above argument: a nontrivial solution $\varphi$ of \eqref{e:eigenvalue-equation-again} necessarily satisfies $\varphi (a) \neq 0$ (and thus it must be unique up to constant factors).
\end{remark}
}
\medskip
We next show that
\begin{itemize}
\item[(B)] If $(m_0,z)\in \overline{\mathscr{P}}$, $m_0\geq 1$ and $z\in \mathbb R$, then $z$ is in the closure of the range of $\Xi$.
\end{itemize}
We again argue by contradiction and assume the existence of
\begin{itemize}
\item[(i)] A sequence $\{m_j\}\subset ]1, \infty[$ converging to $m_0\in [1, \infty[$;
\item[(ii)] A sequence $\{z_j\}\subset \mathbb C$ with ${\rm Im}\, z_j >0$ converging to $z\in \mathbb R\setminus \overline{\Xi (\mathbb R)}$;
\item[(iii)] A sequence $\psi_j$ of nontrivial solutions of
\begin{equation}\label{e:eigenvalue-5}
-\frac{d^2\psi_j}{dt^2} + m_j^2 \psi_j + \frac{A}{\Xi-z_j} \psi_j = 0\, .
\end{equation}
\end{itemize}
By Corollary \ref{c:decay} we can normalize our functions $\psi_j$ so that $\psi_j (t) e^{-m_jt} \to 1$ as $t\to-\infty$ and $\psi_j (t) e^{m_jt} \to C_j\neq 0$ as $t\to\infty$. Observe also that there is a positive constant $c_0$ such that $|\Xi-z_j|\geq c_0$ for all $j$ sufficiently large, thanks to (ii). In particular, the functions $\frac{A}{\Xi-z_j}$ are uniformly bounded in $L^1$. By Lemma \ref{l:ODE2} there is a positive $T_0\geq b+1$, independent of $j$ such that
\begin{equation}\label{e:uniform-exp}
\left|\psi_j (t) - C_j e^{-m_j t}\right| \leq \frac{|C_j|}{2} e^{- m_j t} \qquad \forall t \geq T_0\, ,
\end{equation}
and there is a constant $C$, independent of $j$ such that
\begin{equation}\label{e:uniform-inside}
\|\psi_j\|_{L^\infty ([a,b])} \leq C\, .
\end{equation}
Next multiply \eqref{e:eigenvalue-5} by $\bar \psi_j$, integrate in $t$ and take the imaginary part of the resulting equality to conclude
\begin{equation}\label{e:imaginary-trick}
\int \frac{A}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2 = 0\, .
\end{equation}
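For the reader's convenience we detail this step: multiplying \eqref{e:eigenvalue-5} by $\bar \psi_j$ and integrating by parts (the boundary terms vanish by the exponential decay of $\psi_j$) we get
\[
\int \left(|\psi_j'|^2 + m_j^2 |\psi_j|^2\right) + \int \frac{A}{\Xi-z_j} |\psi_j|^2 = 0\, .
\]
The first integral is real, while ${\rm Im}\, \frac{1}{\Xi-z_j} = \frac{{\rm Im}\, z_j}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2}$; taking the imaginary part of the identity and dividing by ${\rm Im}\, z_j >0$ we reach \eqref{e:imaginary-trick}.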
We break the integral into three integrals over the regions $]-\infty, a[$, $]a,b[$, and $]b, \infty[$, where the function $A$ is, respectively, negative, positive, and negative. This gives
\[
-\int_{T_0}^{2T_0} \frac{A}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2 \leq
\int_a^b \frac{A}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2\, .
\]
Now, the right-hand side of the inequality can be bounded uniformly, independently of $j$, by \eqref{e:uniform-inside} and (ii). On the other hand the function $\frac{-A}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2}$ is larger than a positive constant $c$ independent of $j$ on $[T_0, 2 T_0]$. Using \eqref{e:uniform-exp} we can achieve a uniform bound $|C_j|\leq C$ for the constants $C_j$. The latter bound, combined with the estimates of Lemma \ref{l:ODE2} and the uniform bound on $\|\frac{A}{\Xi-z_j}\|_{L^1}$, easily implies that $\psi_j$ is precompact in $L^2$. We can thus extract a subsequence, not relabeled, converging to a nontrivial $L^2$ solution $\psi$ of
\begin{equation}\label{e:eigenvalue-equation-6}
-\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-z} \psi = 0\, .
\end{equation}
Without loss of generality we assume that $\psi$ is real valued, since $z$ is real. We can thus multiply \eqref{e:eigenvalue-equation-6} by $\psi$ and integrate to achieve
\[
\int ((\psi')^2 + m_0^2 \psi^2) + \int \frac{\Xi''+2\Xi'}{\Xi- z} \psi^2 = 0\, .
\]
Integrating by parts $\int \frac{\Xi''}{\Xi-z} \psi^2$ we find
\[
\int ((\psi')^2 + m_0^2 \psi^2) + \int \left(\frac{(\Xi')^2}{(\Xi-z)^2} \psi^2 - 2 \frac{\Xi'}{\Xi-z} \psi' \psi\right) +
\int \frac{2\Xi'}{\Xi-z} \psi^2 = 0 \, ,
\]
which we can rewrite as
\begin{equation}\label{e:energy-trick}
\int \left(\left(\psi' - \frac{\Xi'}{\Xi-z} \psi\right)^2 + m_0^2 \psi^2\right) + 2 \int \frac{\Xi'}{\Xi-z} \psi^2 = 0\, .
\end{equation}
As already done in the previous paragraphs we set $\eta = \frac{\psi}{\Xi-z}$ and write the identity as
\[
\int \left((\Xi-z)^2 (\eta')^2 + m_0^2 (\Xi-z)^2 \eta^2 + 2 \Xi' (\Xi-z) \eta^2\right) = 0
\]
Integrating by parts the last term we find
\[
\int (\Xi-z)^2 (\eta'-\eta)^2 + \int (m_0^2-1) (\Xi-z)^2 \eta^2 = 0\, .
\]
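For the reader's convenience we detail the integration by parts: since $2\Xi' (\Xi-z)\eta^2 = ((\Xi-z)^2)'\eta^2$ and the boundary term $(\Xi-z)^2\eta^2 = \psi^2$ vanishes at $\pm\infty$ by Corollary \ref{c:decay}, we have $\int 2\Xi' (\Xi-z)\eta^2 = -2\int (\Xi-z)^2\eta\eta'$, so that
\[
\int (\Xi-z)^2 \left((\eta')^2 - 2\eta\eta' + m_0^2\eta^2\right) = \int (\Xi-z)^2 \left((\eta'-\eta)^2 + (m_0^2-1)\eta^2\right)\, .
\]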
We thus conclude that $m_0=1$ and $\eta'=\eta$, i.e. $\eta (t) = C e^t$, but again we see that this would violate $\psi\in L^2$.
\medskip
We next employ a suitable variation of the latter argument to show that
\begin{itemize}
\item[(C)] $(m_0, 0)$ and $(m_0, \Xi (-\infty))$ do not belong to $\overline{\mathscr{P}}$ if $m_0\geq 1$.
\end{itemize}
We again argue by contradiction and assume the existence of
\begin{itemize}
\item[(i)] A sequence $\{m_j\}\subset ]1, \infty[$ converging to $m_0\in [1, \infty[$;
\item[(ii)] A sequence $\{z_j\}\subset \mathbb C$ with ${\rm Im}\, z_j >0$ converging to $0$ or to $\Xi (- \infty)$;
\item[(iii)] A sequence $\psi_j$ of nontrivial solutions of
\begin{equation}\label{e:eigenvalue-equation-7}
-\frac{d^2\psi_j}{dt^2} + m_j^2 \psi_j + \frac{A}{\Xi-z_j} \psi_j = 0\, .
\end{equation}
\end{itemize}
We first focus on the case $z_j\to 0$. Normalize again the solutions so that $\psi_j (t)$ is asymptotic to $e^{m_j t}$ for $t$ negative, and to $C_j e^{-m_j t}$ for $t$ positive.
Observe that in this case we have $\frac{A}{\Xi}\in L^1 (]-\infty, N])$ for every $N$, while $\frac{A}{\Xi-z_j}$ enjoys a uniform $L^1$ bound on any $]-\infty, N]$. We can thus apply Lemma \ref{l:ODE2} and conclude that the $\psi_j$ can be assumed to converge uniformly to a function $\psi$ on $]-\infty, N]$ for every $N$ and that likewise $\psi (t)$ is asymptotic to $e^{m_0 t}$ for $t$ negative.
As done previously we multiply the equation \eqref{e:eigenvalue-equation-7} by $\bar\psi_j$, integrate, and take the imaginary part. In particular we gain the inequality
\[
\int_b^\infty \frac{A}{(\Xi- {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2 \leq - \int_a^b \frac{A}{(\Xi- {\rm Re}\, z_j)^2+ ({\rm Im}\, z_j)^2} |\psi_j|^2\, .
\]
Since $z_j\to 0$ and the range of $\Xi$ on $[a,b]$ is bounded away from $0$, we conclude that the right-hand side is uniformly bounded. In particular, passing to the limit we conclude that
\begin{equation}\label{e:info-L^1}
\Xi^{-2} A |\psi|^2 \in L^1 ([b, \infty[)\, .
\end{equation}
Observe however that
\[
\lim_{t\to\infty} \frac{A(t)}{\Xi (t)} = \lim_{t\to \infty} \frac{-{\bar\alpha} e^{-{\bar\alpha} t}}{c_1 e^{-2t}+\frac{1}{2-{\bar\alpha}} e^{-{\bar\alpha} t}} = - {\bar\alpha} (2-{\bar\alpha})\, .
\]
In particular we conclude that $\psi\in L^2$. Moreover, we can write
\[
\frac{A}{\Xi} = -{\bar\alpha} (2-{\bar\alpha}) + B
\]
for a function $B$ which belongs to $L^1 ([T, \infty[)$ for every $T$. We thus have that
\[
-\frac{d^2\psi}{dt^2} + (m_0^2 - {\bar\alpha} (2-{\bar\alpha})) \psi + B \psi = 0\, .
\]
Recalling that $0<{\bar\alpha} <1$ and $m_0\geq 1$, we have $m_0^2 - {\bar\alpha} (2-{\bar\alpha})>0$ and we can therefore apply Lemma \ref{l:ODE2} to conclude that, for $\bar m := \sqrt{m_0^2 - {\bar\alpha} (2-{\bar\alpha})}$
\[
\lim_{t\to \infty} e^{\bar m t} \psi (t)
\]
exists and is finite and nonzero. Observe however that \eqref{e:info-L^1} forces $e^{{\bar\alpha} t} |\psi|^2\in L^1$, which in particular implies that $\bar m > \frac{{\bar\alpha}}{2}$.
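For the reader's convenience we justify the last claim: recalling the explicit expressions in the limit computed above, for $t$ large $A (t)$ is asymptotic to $-{\bar\alpha} e^{-{\bar\alpha} t}$ and $\Xi (t)$ to $\frac{1}{2-{\bar\alpha}} e^{-{\bar\alpha} t}$, so that
\[
\frac{A (t)}{\Xi (t)^2} = \frac{A (t)}{\Xi (t)}\cdot \frac{1}{\Xi (t)} \sim - {\bar\alpha} (2-{\bar\alpha})^2\, e^{{\bar\alpha} t} \qquad \mbox{as } t\to\infty\, ,
\]
and hence \eqref{e:info-L^1} amounts to $e^{{\bar\alpha} t} |\psi|^2 \in L^1 ([b, \infty[)$. Since $e^{\bar m t}\psi (t)$ has a finite nonzero limit, $e^{{\bar\alpha} t}|\psi (t)|^2$ is asymptotic to a nonzero multiple of $e^{({\bar\alpha} - 2\bar m) t}$, whose integrability at $+\infty$ requires ${\bar\alpha} - 2\bar m <0$.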
We next argue as in the derivation of \eqref{e:energy-trick} to get
\[
\int \left( \left(\psi' - \frac{\Xi'}{\Xi} \psi\right)^2 + m_0^2 \psi^2\right) + 2 \int \frac{\Xi'}{\Xi} \psi^2 = 0\, .
\]
We again set $\psi= \Xi \eta$ and observe that, by our considerations, $\eta$ decays exponentially at $-\infty$, while it is asymptotic to $e^{({\bar\alpha} - \bar m) t}$ at $+\infty$. We rewrite the latter identity as
\[
\int (\Xi^2 (\eta')^2 + m_0^2 \Xi^2 \eta^2 + 2 \Xi\Xi' \eta^2) = 0\, .
\]
We wish to integrate by parts the latter term to find
\begin{equation}\label{e:da-giustificare}
\int (\Xi^2 (\eta'-\eta)^2 + (m_0^2-1) \Xi^2 \eta^2)=0\, .
\end{equation}
Since we have exponential decay of $\eta$ at $-\infty$, while at $+\infty$ $\eta$ might grow, the latter integration by parts needs some careful justification. First of all we notice that $\Xi \Xi' \eta^2$ decays exponentially at $+\infty$ and thus, since the other two integrands are positive, we can write
\[
\int (\Xi^2 (\eta')^2 + m_0^2 \Xi^2 \eta^2 + 2 \Xi\Xi' \eta^2) =
\lim_{N\to\infty} \int_{-\infty}^N (\Xi^2 (\eta')^2 + m_0^2 \Xi^2 \eta^2 + 2 \Xi\Xi' \eta^2) \, .
\]
Next, we can integrate by parts the last integrand (before passing to the limit) to write
\[
\int_{-\infty}^N (\Xi^2 (\eta')^2 + m_0^2 \Xi^2 \eta^2 + 2 \Xi\Xi' \eta^2) =
\int_{-\infty}^N (\Xi^2 (\eta'-\eta)^2 + (m_0^2-1) \Xi^2 \eta^2) + \Xi^2 (N) \eta^2 (N)\, .
\]
Since $\Xi (N) \eta (N)$ converges to $0$ exponentially, passing to the limit we conclude \eqref{e:da-giustificare}.
As before this would imply $m_0=1$ and $\eta (t) = C e^t$. If $C=0$ then $\psi = \Xi \eta$ vanishes identically, against its asymptotic behavior at $-\infty$; if $C\neq 0$ this contradicts the fact that $\eta$ is asymptotic to $e^{({\bar\alpha} - \bar m) t}$ at $+\infty$, since ${\bar\alpha} - \bar m < \frac{{\bar\alpha}}{2} < 1$.
We next tackle the case $z_j \to \Xi (-\infty)$. This time we observe that $\frac{A}{\Xi-z_j}$ enjoys a uniform $L^1$ bound on $[T, \infty[$ for every $T$ and we thus normalize the functions $\psi_j$ so that $\psi_j (t)$ is asymptotic to $e^{-m_j t}$ for $t\to \infty$. Arguing as above, we assume that $\psi_j$ converges uniformly on all $[T, \infty[$ to a $\psi$ which is asymptotic to $e^{-m_0 t}$ and solves
\begin{equation}\label{e:eigenvalue-equation-9}
-\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-\Xi (-\infty)} \psi=0\, .
\end{equation}
As above we can assume that $\psi$ is real valued. Moreover, this time we infer (with the same method used to prove \eqref{e:info-L^1})
\begin{equation}\label{e:info-L1-2}
(\Xi-\Xi (-\infty))^{-2} A \psi^2 \in L^1 (\mathbb R)\, .
\end{equation}
This time observe that, for $t$ sufficiently negative, $\frac{A(t)}{\Xi (t)- \Xi (-\infty)} = 8$. In particular we can explicitly solve the equation as
\[
\psi (t) = C_1 e^{-t\sqrt{m_0^2+8}} + C_2 e^{t\sqrt{m_0^2+8}}
\]
when $t$ is sufficiently negative.
However, if $C_1$ were different from $0$, \eqref{e:info-L1-2} would not hold. In particular we infer exponential decay at $-\infty$. We can now argue as for the case $z_j\to 0$: we multiply \eqref{e:eigenvalue-equation-9} by $\psi$, integrate in time and perform an integration by parts to infer
\[
\int \left( \left(\psi' - \frac{\Xi'}{\Xi - \Xi (-\infty)} \psi\right)^2 + m_0^2 \psi^2\right) + 2 \int \frac{\Xi'}{\Xi-\Xi (-\infty)} \psi^2 = 0\, .
\]
We then introduce $\eta$ so that $\psi = (\Xi-\Xi (-\infty)) \eta$. This time we infer exponential decay for $\eta$ at both $\infty$ and $-\infty$. Arguing as above we rewrite the last identity as
\[
\int ((\Xi- \Xi (-\infty))^2 (\eta'-\eta)^2 + (m_0^2-1) (\Xi- \Xi (-\infty))^2 \eta^2)=0\, ,
\]
reaching again a contradiction.
\medskip
In order to complete the proof of the proposition we need to show
\begin{itemize}
\item[(D)] If $(m_0, \Xi (c)) \in \overline{\mathscr{P}}$ {\color{red} and $m_0> 1$}, then either $c=a$ or $c=b$ {\color{red} and moreover we have, respectively, $m_0 = \sqrt{\lambda_a}$ or $m_0 = \sqrt{\lambda_b}$}.
\end{itemize}
As before we argue by contradiction and assume the existence of
\begin{itemize}
\item[(i)] A sequence $\{m_j\}\subset ]1, \infty[$ converging to $m_0\in ]1, \infty[$;
\item[(ii)] A sequence $\{z_j\}\subset \mathbb C$ with ${\rm Im}\, z_j >0$ converging to $\Xi (c)$ for some $c\not\in \{a,b\}$;
\item[(iii)] A sequence $\psi_j$ of nontrivial solutions of
\begin{equation}\label{e:eigenvalue-equation-10}
-\frac{d^2\psi_j}{dt^2} + m_j^2 \psi_j + \frac{A}{\Xi-z_j} \psi_j = 0\, .
\end{equation}
\end{itemize}
This time we normalize the $\psi_j$'s so that
\begin{equation}\label{e:L2-normalization}
\int (|\psi_j'|^2 + m_j^2 |\psi_j|^2) =1\, .
\end{equation}
By Lemma \ref{l:ODE2} we know that $\psi_j (t)$ is asymptotic to $\rho_j^\pm e^{\mp m_j t}$ for $t\to \pm \infty$, where $\rho_j^\pm \in \mathbb C \setminus \{0\}$. Since $\Xi (c)$ has a positive distance from both $0$ and $\Xi (-\infty)$, we can apply Lemma \ref{l:ODE2} to achieve uniform times $T_\pm$ with the properties that
\begin{align}
\left|\psi_j (t) - \rho_j^+ e^{-m_j t}\right|& \leq \frac{|\rho_j^+|}{2} e^{-m_j t} \qquad\qquad \forall t\geq T_+\, ,\label{e:exp-bound-1}\\
\left|\psi_j (t) - \rho_j^- e^{m_j t}\right| &\leq \frac{|\rho_j^-|}{2} e^{m_j t} \qquad\qquad \forall t\leq T_-\, .\label{e:exp-bound-2}
\end{align}
Combining the latter inequalities with \eqref{e:L2-normalization} we conclude that $\sup_j |\rho_j^\pm| < \infty$, and in particular $\{\psi_j\}_j$ is tight in $L^2$, i.e. for every $\varepsilon >0$ there is $N = N (\varepsilon)$ such that
\[
\sup_j \int_{|t|\geq N} |\psi_j|^2 < \varepsilon\, .
\]
The latter bound combined with \eqref{e:L2-normalization} implies, up to extraction of a subsequence which we do not relabel, the strong $L^2$ convergence of $\psi_j$ to a function $\psi$. Thanks to Sobolev embedding, the convergence is uniform on any compact set and, moreover, $\psi\in C^{1/2}$.
Arguing as for \eqref{e:imaginary-trick} we infer
\begin{equation}\label{e:imaginary-trick-2}
\int \frac{A}{(\Xi-{\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2 =0\, .
\end{equation}
The latter bound implies $\psi (c)=0$. In fact first we
observe that $\frac{A}{|\Xi-z_j|^2} |\psi_j|^2$ converges in $L^1$ on $\mathbb R \setminus ]c-\delta, c+\delta[$ for every $\delta$. Choosing $\delta>0$ so that $|A (t) - A(c)| \leq \frac{|A(c)|}{2}$ for $t\in [c-\delta, c+\delta]$ and recalling that $|A(c)|>0$, we easily infer that
\[
\sup_j \int_{c-h}^{c+h} \frac{|\psi_j|^2}{(\Xi-{\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} < \infty \qquad \forall h < \delta\, .
\]
If $\psi (c)$ were different from $0$, we could select a positive $h< \delta$ and a positive constant $c_0$ with the property that $|\psi (t)|^2 \geq 2c_0$ for all $t\in [c-h, c+h]$. In particular, for $j$ large enough we would infer $|\psi_j (t)|^2 \geq c_0$ for all $t\in [c-h, c+h]$. But then we would conclude
\[
\sup_j \int_{c-h}^{c+h} \frac{1}{(\Xi-{\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} < \infty\, .
\]
Since the denominator converges to $(\Xi - \Xi (c))^2$, this is clearly not possible.
We now wish to pass in the limit in \eqref{e:eigenvalue-equation-10} to derive that
\begin{equation}\label{e:eigenvalue-equation-11}
- \psi'' + m_0^2 \psi + \frac{A}{\Xi-\Xi (c)} \psi =0\, ,
\end{equation}
where we notice that, thanks to $\psi (c)=0$ and the H\"older regularity of $\psi$, the function $\frac{A}{\Xi-\Xi (c)} \psi$ is indeed in $L^p$ for every $p<2$. We thus understand the equation distributionally.
The equation clearly passes to the limit outside the singularity $c$ of the denominator and thus we just need to pass it to the limit distributionally in some interval $]c-h,c+h[$. We write the third term as
\begin{align*}
\frac{A}{\Xi-z_j} \psi_j &= \left(\frac{d}{dt} \ln (\Xi-z_j) \right)\frac{A}{\Xi'} \psi_j\\
&= \frac{d}{dt} \left(\ln (\Xi-z_j) \frac{A}{\Xi'} \psi_j\right) - \ln (\Xi-z_j)\frac{A}{\Xi'} \psi'_j - \ln (\Xi-z_j) \frac{d}{dt} \left(\frac{A}{\Xi'}\right) \psi_j\, .
\end{align*}
Observe that we can define the logarithm unequivocally because $\Xi$ is real valued and ${\rm Im}\, z_j >0$.
Next, we remark that:
\begin{itemize}
\item[(i)] $\frac{A}{\Xi'}$ is smooth in $]c-h, c+h[$;
\item[(ii)] $\ln (\Xi-z_j)$ converges strongly \footnote{Since $\ln (\Xi-z_j)$ converges uniformly to $\ln (\Xi-\Xi(c))$ on any compact set which does not contain $c$, in order to reach the conclusion it suffices to prove a uniform $L^q$ bound on the functions, for every $q<\infty$. This can be easily concluded as follows. Choose an interval $[c-h, c+h]$ and recall that $\Xi$ does not change sign on it. For each $j$ large enough we then find a unique $c_j \in [c-h, c+h]$ such that $\Xi (c_j) = {\rm Re}\, z_j$. Using the mean value theorem we easily conclude that $|\Xi (t)-z_j|\geq |\Xi (t) - \Xi (c_j)|
\geq C^{-1} |t-c_j|$ for every $t\in [c-h, c+h]$, where $C^{-1} = \min \{|\Xi'(t)|: c-h\leq t \leq c+h\}$.} to $\ln (\Xi-\Xi(c))$ in $L^q (]c-h, c+h[)$ for every $q<\infty$;
\item[(iii)] $\psi_j' \to \psi'$ weakly in $L^2$, while $\psi_j\to \psi$ uniformly.
\end{itemize}
We thus conclude that $\frac{A}{\Xi-z_j} \psi_j$ converges distributionally to
\[
\frac{d}{dt} \left(\ln (\Xi-\Xi (c)) \frac{A}{\Xi'} \psi\right) - \ln (\Xi-\Xi(c)) \frac{A}{\Xi'} \psi' - \ln (\Xi-\Xi(c)) \frac{d}{dt} \left(\frac{A}{\Xi'}\right) \psi\, .
\]
Using now that $\psi\in W^{1,2}$ and $\psi (c)=0$ we can rewrite the latter distribution as
\[
\frac{A}{\Xi-\Xi (c)} \psi
\]
and hence conclude the validity of \eqref{e:eigenvalue-equation-11}.
Observe next that from \eqref{e:eigenvalue-equation-11} we infer $\psi''\in L^p$ for every $p< 2$, which in turn implies that $\psi$ is indeed $C^{1,\kappa}_{\text{loc}}$ for every $\kappa < \frac{1}{2}$. In turn this implies that $\frac{A}{\Xi-\Xi (c)} \psi$ is continuous at $c$, so that in particular $\psi$ is twice differentiable. We thus can argue as for the derivation of \eqref{e:energy-trick} and get
\begin{equation}\label{e:energy-trick-4}
\int \left(\left(\psi' - \frac{\Xi'}{\Xi-\Xi (c)} \psi\right)^2 + m_0^2 \psi^2\right) + 2 \int \frac{\Xi'}{\Xi-\Xi (c)} \psi^2 = 0\, .
\end{equation}
Once again we can set $\psi = (\Xi-\Xi (c)) \eta$ and observe that $\eta\in W^{1,2}$, to rewrite the latter identity as
\[
\int ((\Xi- \Xi (c))^2 (\eta'-\eta)^2 + (m_0^2-1) (\Xi- \Xi (c))^2 \eta^2)=0\, ,
\]
inferring that $\eta=0$.
We thus have concluded that $\psi$ vanishes identically, but this is not yet a contradiction since the normalization \eqref{e:L2-normalization} and the strong $L^2$ convergence do not ensure that $\psi$ is nontrivial. In order to complete our argument, note first that, by the monotonicity of $\Xi$, for each $j$ large enough there is a unique $c_j$ such that $\Xi (c_j) = {\rm Re}\, z_j$. We then multiply the equation \eqref{e:eigenvalue-equation-10} by $\bar \psi_j - \overline{\psi_j (c_j)}$ to obtain
\[
\int \left(|\psi_j'|^2 + m_j^2 \psi_j (\bar\psi_j - \overline{\psi_j (c_j)}) + \frac{A}{\Xi-z_j} \psi_j (\bar\psi_j - \overline{\psi_j (c_j)})\right) = 0\, .
\]
Note that $c_j$ must converge to $c$ and that the integrals
\[
\int \psi_j (\bar\psi_j - \overline{\psi_j (c_j)})
\]
converge to $0$ because $\psi_j - \psi_j (c_j)$ converges to $0$ uniformly and, thanks to the uniform exponential decay, the functions $\psi_j$ are uniformly bounded in $L^1$. For the same reason the first integral in the sum
\begin{equation}
\label{e:up}\int_{|t-c|\geq h} \frac{A}{\Xi-z_j} \psi_j (\bar\psi_j - \overline{\psi_j (c_j)}) +\int_{|t-c|\leq h} \frac{A}{\Xi-z_j} \psi_j (\bar\psi_j - \overline{\psi_j (c_j)})
\end{equation}
converges to $0$ for every fixed $h$. On the other hand, $\left|\frac{A (t)}{\Xi (t)-z_j}\right| |\psi_j (t) - \psi_j (c_j)|\leq C |t-c_j|^{-1/2}$, and thus the second integral in \eqref{e:up} is bounded by $C h^{1/2}$, uniformly in $j$. Letting first $j\to\infty$ and then $h\downarrow 0$, we conclude that the $L^2$ norm of $\psi'_j$ converges to $0$. This however contradicts the normalization \eqref{e:L2-normalization}.
\section{Proof of Proposition \ref{p:5-7}: Part I}
We set $m_0=m_a$, $z_0 = \Xi (a)$, and
we fix a solution $\psi_0$ of
\[
-\frac{d^2\psi_0}{dt^2} + m_0^2 \psi_0 + \frac{A}{\Xi-z_0} \psi_0 = 0
\]
with $L^2$ norm equal to $1$. Since the operator is self-adjoint we may and do assume that $\psi_0$ is real valued. We then define the projector $P_0: L^2 (\mathbb R; \mathbb C) \to \{\kappa \psi_0:\kappa \in \mathbb C\}$ as
\[
P_0 (\psi) = \langle \psi, \psi_0\rangle \psi_0\, .
\]
Observe that $P_0$ is self-adjoint.
Next,
in a neighborhood of $(m_0, z_0)$ we will look for solutions of \eqref{e:eigenvalue-equation-3} by solving
\begin{equation}\label{e:Lagrange}
\left\{
\begin{array}{l}
-\psi'' + m^2 \psi + \frac{A}{\Xi-z} \psi + P_0 (\psi) = \psi_0\\ \\
\langle \psi, \psi_0\rangle =1
\end{array}
\right.
\end{equation}
which we can rewrite as
\begin{equation}\label{e:Lagrange-2}
\left\{
\begin{array}{l}
-\psi'' + m_0^2 \psi + \frac{A}{\Xi-z_0} \psi + P_0 (\psi) = A \left(((\Xi-z_0)^{-1} - (\Xi-z)^{-1}) \psi\right) + (m_0^2-m^2)\psi + \psi_0\\ \\
\langle \psi, \psi_0\rangle =1
\end{array}
\right.
\end{equation}
Next we observe that the operator $-\frac{d^2}{dt^2} + m_0^2$, considered as a closed unbounded self-adjoint operator in $L^2$ (with domain $W^{2,2}$) has an inverse $\mathcal{K}_{m_0}:L^2 \to L^2$ which is a bounded operator. We thus rewrite \eqref{e:Lagrange-2} as
\begin{equation}\label{e:Lagrange-3}
\left\{
\begin{array}{ll}
\underbrace{\psi + \mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_0} \psi + P_0 (\psi)\right)}_{=: T (\psi)}\\
\qquad\qquad \qquad= \underbrace{\mathcal{K}_{m_0}
\left(\left(A \left((\Xi-z_0)^{-1} - (\Xi-z)^{-1}\right) + (m_0^2 -m^2)\right) \psi\right)}_{=:- \mathcal{R}_{m,z} (\psi)} +
\mathcal{K}_{m_0} (\psi_0)\\ \\
\langle \psi, \psi_0 \rangle =1\, .
\end{array}
\right.
\end{equation}
The proof of Proposition \ref{p:5-7} will then be broken into two pieces. In this section we will show the first part, which we can summarize in the following
\begin{lemma}\label{l:solve-for-psi}
For every $\mu>0$, if $(m,z)$ is sufficiently close to $(m_0, z_0)$ and ${\rm Im}\, z\geq \mu |{\rm Re}\, (z-z_0)|$, then there is a unique $\psi= \psi (m,z) \in L^2 (\mathbb R)$ solving
\begin{equation}\label{e:solve-for-psi}
T (\psi) + \mathcal{R}_{m,z} (\psi) = \mathcal{K}_{m_0} (\psi_0)\, .
\end{equation}
\end{lemma}
Before coming to its proof we single out two important ingredients.
\begin{lemma}\label{l:invert-T}
$T$ is a bounded operator with bounded inverse on the spaces $L^2$ and $C^\sigma$, for any $\sigma \in ]0,1[$.
\end{lemma}
\begin{proof} Recall that the operator $\mathcal{K}_m$ is given by the convolution with $\frac 1{2m} e^{-m|\cdot|}$. We prove that $T$ is a bounded operator with bounded inverse in the spaces $L^2 (\mathbb R)$ and $C^\sigma (\mathbb R)$.\footnote{Observe that $\mathcal{K}_m$ is well-defined on $C^\sigma$, and so are the multiplication by $\frac{A}{\Xi-z_0}$ (the latter is a smooth function with bounded derivatives) and the operator $P_0 (\psi) = \langle \psi, \psi_0\rangle \psi_0$: for the last one we just need to check that $\psi\overline{\psi_0}$ is integrable, which follows from the exponential decay of $\psi_0$, cf. Corollary \ref{c:decay}.}
Recall that $\frac{A}{\Xi-z_0}= \frac{A}{\Xi-\Xi (a)}$ is indeed a bounded smooth function (thanks to the structural assumptions on $\Xi$: in particular recall that $\Xi' (a)\neq 0$ and $A(a) =0$, which implies that $\frac{A}{\Xi-\Xi(a)}$ is in fact smooth at $a$). Moreover the function and its derivatives decay exponentially at $\pm \infty$. It follows therefore that $\psi \mapsto \mathcal{K}_{m_0} (\frac{A}{\Xi-z_0} \psi + P_0 (\psi))$ is a compact operator, both on $L^2$ and on $C^\sigma$. Thus $T$ is a Fredholm operator with index $0$. We thus just need to check that the kernel is $0$ in order to conclude that it is invertible with bounded inverse. In both cases we need to show that the equation
\begin{equation}\label{e:kernel-T}
-\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-\Xi (a)} \psi + P_0 (\psi) = 0
\end{equation}
has only the trivial solution. Observe that the kernel $V$ of the operator $\psi \mapsto -\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-\Xi (a)} \psi$ is $1$-dimensional by Lemma \ref{l:ODE2} and Corollary \ref{c:decay}. In particular $V$ is generated by $\psi_0$. Since the operator $P_0$ is the orthogonal projection onto $V$ and $-\frac{d^2}{dt^2} + m_0^2 + \frac{A}{\Xi-\Xi (a)}$ is self-adjoint, the kernel of $-\frac{d^2}{dt^2} + m_0^2 + \frac{A}{\Xi-\Xi (a)} + P_0$ in $L^2$ must be trivial.
In order to argue that the kernel is $0$ on $C^\sigma$ we apply a variation of the same idea: first we observe that if $\psi$ is a $C^\sigma$ solution of \eqref{e:kernel-T}, then $\frac{A}{\Xi-\Xi (a)} \psi + P_0 (\psi)$ is also in $C^\sigma$ and hence $\psi''\in C^\sigma$. Observe also that the operator is self-adjoint and thus we can assume that $\psi$ is real-valued. We then multiply both sides of \eqref{e:kernel-T} by $\bar \psi_0$, integrate by parts and use the fact that $\psi_0$ is in the kernel of the self-adjoint operator $-\frac{d^2}{dt^2} + m_0^2 + \frac{A}{\Xi-\Xi (a)}$ to conclude that $\langle \psi, \psi_0\rangle =0$. But then
$\psi$ is a bounded solution of $-\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-\Xi (a)}\psi =0$. Given that $\frac{A}{\Xi-\Xi (a)} \psi$ is the product of an exponentially decaying function and a bounded function, we conclude that $-\frac{d^2\psi}{dt^2} + m_0^2 \psi$ is an exponentially decaying function $f$. We thus have $\psi = \mathcal{K}_{m_0} (f) + C_1 e^{-m_0t} + C_2 e^{m_0t}$ for two constants $C_1$ and $C_2$. However $\mathcal{K}_{m_0} (f)$ decays exponentially at both $\pm \infty$ and thus, given that $\psi$ is bounded, we must have $C_1=C_2=0$. In particular $\psi$ decays exponentially at both $\pm \infty$ and so it is an $L^2$ function. But we already saw that every $L^2$ solution is trivial.
\end{proof}
\begin{lemma}\label{l:Rmz-small}
For every constant $\mu>0$ we define the cone $C_\mu := \{z: {\rm Im} z \geq \mu |{\rm Re}\, (z-z_0)|\}$. Then
\begin{equation}
\lim_{z\in C_\mu, (m,z)\to (m_0, z_0)} \|\mathcal{R}_{m,z}\|_O = 0\, ,
\end{equation}
where $\|L\|_O$ is the operator norm of $L$
when considered as a bounded operator from $L^2$ to $L^2$.
\end{lemma}
\begin{proof} Clearly, it suffices to show that
\begin{equation}
\lim_{z\in C_\mu, z\to z_0} \|\mathcal{K}_{m_0} \circ (A/(\Xi-z) - A/(\Xi-z_0))\|_O = 0\, .
\end{equation}
We can rewrite the operator as
\[
\psi \mapsto \mathcal{K}_{m_0} \left(\frac{A (z-z_0)}{(\Xi-z) (\Xi-z_0)} \psi\right) \, .
\]
First of all observe that the operators
\[
\psi \mapsto L_z (\psi) = \frac{A (z-z_0)}{(\Xi-z) (\Xi-z_0)} \psi
\]
are bounded in the operator norm, uniformly in $z\in C_\mu$, by a constant $M$. Moreover, the adjoint operator, given by $L_z^* (\psi)= \frac{A (\bar z-z_0)}{(\Xi-\bar z) (\Xi-z_0)} \psi$, converges strongly in $L^2$ to $0$ as $z\to z_0$: indeed the functions $\frac{A (\bar z-z_0)}{(\Xi-\bar z) (\Xi-z_0)}$ are uniformly bounded and converge to $0$ pointwise on $\mathbb R \setminus \{a\}$. We now use an argument entirely similar to that used in the proof of Lemma \ref{l:three}: given any $\varepsilon >0$ we fix an orthogonal projection $P_N$ onto a finite-dimensional subspace of $L^2$ with the property that $\|\mathcal{K}_{m_0}\circ P_N - \mathcal{K}_{m_0}\|_O$ is smaller than $\frac{\varepsilon}{2M}$. We then argue that for $|z-z_0|$ sufficiently small $P_N \circ L_z$ has operator norm smaller than $\frac{\varepsilon}{2}$. Having chosen an orthonormal basis $\psi_1, \ldots, \psi_N$ of the subspace onto which $P_N$ projects, we recall that
\[
P_N (\psi)= \sum_i \langle \psi_i, \psi\rangle \psi_i\, .
\]
Therefore our claim amounts to show that
\[
|\langle \psi_i, L_z (\psi)\rangle|\leq \frac{\varepsilon}{2N}
\]
for $z$ sufficiently close to $z_0$ and every $\psi$ with $\|\psi\|_{L^2}\leq 1$. For the latter we use
\[
|\langle \psi_i, L_z (\psi)\rangle| = |\langle L_z^* (\psi_i), \psi \rangle|\leq \|L_z^* (\psi_i)\|_{L^2}\, .
\]
Since $L_z^* (\psi_i)$ converges to $0$ in $L^2$ as $z\to z_0$, the right-hand side is indeed smaller than $\frac{\varepsilon}{2N}$ for $z\in C_\mu$ sufficiently close to $z_0$, which completes the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{l:solve-for-psi}]
We rewrite the equation that we want to solve as
\[
\psi + T^{-1} \circ \mathcal{R}_{m,z} (\psi) = T^{-1} \circ \mathcal{K}_{m_0} (\psi_0)\, .
\]
Note that $P_0(\psi_0)=\psi_0$. Furthermore, since $\mathcal K_{m_0}$ is, by definition, the inverse operator of $-\frac{\mathrm d^2}{\mathrm dt^2}+m_0^2\operatorname{Id}$,
\begin{equation*}
\mathcal K_{m_0}^{-1}\left(\psi_0+\mathcal K_{m_0}\left(\frac{A}{\Xi-z_0}\psi_0\right)\right) = -\psi_0''+m_0^2\psi_0+\frac{A}{\Xi-z_0}\psi_0 = 0.
\end{equation*}
Therefore,
\begin{equation*}
\psi_0+\mathcal K_{m_0}\left(\frac{A}{\Xi-z_0}\psi_0\right) = 0.
\end{equation*}
In combination with the definition of $T$ in \eqref{e:Lagrange-3}, we get
\begin{equation*}
T(\psi_0) = \psi_0+\mathcal K_{m_0}\left(\frac{A}{\Xi-z_0}\psi_0+\psi_0\right)=\mathcal K_{m_0}(\psi_0),
\end{equation*}
in other words,
\begin{equation}\label{e:T-1K}
T^{-1} \circ \mathcal{K}_{m_0} (\psi_0) = \psi_0\, .
\end{equation}
Therefore, \eqref{e:solve-for-psi} becomes
\begin{equation}\label{e:to-Neumann}
(\operatorname{Id} + T^{-1} \circ \mathcal{R}_{m,z}) (\psi) = \psi_0\, ,
\end{equation}
so the existence of a unique solution is guaranteed as soon as $\|T^{-1} \circ \mathcal{R}_{m,z}\|_{O} < 1$.
\end{proof}
\begin{remark}\label{r:Neumann-series}
In the remaining part of the proof of Proposition \ref{p:5-7} we will take advantage of the representation of $\psi$ as a function of $\psi_0$ through the Neumann series coming from \eqref{e:to-Neumann}. More precisely, our proof of Lemma \ref{l:solve-for-psi} leads to the following representation:
\begin{equation}\label{e:Neumann-series}
\psi = \psi_0 - (T^{-1} \circ \mathcal{R}_{m,z}) (\psi_0) + \sum_{k=2}^\infty (-1)^k (T^{-1}\circ \mathcal{R}_{m,z})^k (\psi_0)\, .
\end{equation}
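In particular, setting $q := \|T^{-1}\circ \mathcal{R}_{m,z}\|_O < 1$ and recalling that $\psi_0$ is normalized so that $\|\psi_0\|_{L^2} = 1$, the geometric series gives the quantitative bound
\begin{equation*}
\|\psi - \psi_0\|_{L^2} \leq \sum_{k=1}^\infty q^k = \frac{q}{1-q}\, ,
\end{equation*}
so that $\psi$ is close to $\psi_0$ when the operator norm of $T^{-1}\circ \mathcal{R}_{m,z}$ is small.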
\end{remark}
\section{Proof of Proposition \ref{p:5-7}: Part II}\label{s:5-7-part-II}
We now complete the proof of Proposition \ref{p:5-7}. The positive parameter $\mu>0$ in Lemma \ref{l:solve-for-psi} will have to be chosen sufficiently small: its choice will be specified in a few paragraphs, while for the moment we assume it to be fixed. We set $m_0 = m_a$ and $z_0 = \Xi (a)$. Thus, for each $(m_0+h ,z)$ in a set
\[
U_{\delta, \mu} := \{|h|< \delta, |z-z_0|< \delta, {\rm Im} z > \mu |{\rm Re}\, (z-z_0)|\}
\]
we know that there is a solution $\psi = \psi (m_0+h,z)$ of \eqref{e:solve-for-psi} which moreover satisfies the expansion \eqref{e:Neumann-series}.
We then define the function
\begin{equation}
H (h,z) := \langle \psi (m_0+h,z), \psi_0\rangle\, ,
\end{equation}
and obviously we are looking for those $z$ which solve
\begin{equation}\label{e:what-we-want-to-do}
H (h,z) = 1\, .
\end{equation}
The main point of our analysis is the following
\begin{lemma}\label{l:will-apply-Rouche}
The function $H$ is holomorphic in $z$ and moreover
\begin{equation}\label{e:expansion}
H (h,z) = 1 - 2m_a h + c (a) (z-z_0) + o (|z-z_0| + |h|)
\end{equation}
where $c(a)$ is a complex number with ${\rm Im}\, c(a) > 0 $.
\end{lemma}
Given Lemma \ref{l:will-apply-Rouche}, consider now $\xi (h)$ which we obtain by solving $c(a) (\xi-z_0)= 2m_a h$, namely,
\[
\xi (h) = \frac{2m_a h}{c(a)} +z_0 = \frac{2m_a h}{|c(a)|^2} \overline{c(a)} +z_0\, .
\]
The idea behind the latter definition is that, if the term $o (|z-z_0| + |h|)$ vanished identically, $z = \xi (h)$ would be the solution of $H (h,z)=1$. Even though $o (|z-z_0| + |h|)$ does not vanish, we nonetheless expect that the solution $z$ of $H (h,z)=1$ is relatively close to $\xi (h)$.
Since ${\rm Im}\, c(a)>0$ and $z_0 = \Xi (a)$ is real, $\xi (h)$ has positive imaginary part if $h<0$. More precisely,
\[
{\rm Im}\, \xi (h) = \frac{2m_a h}{|c(a)|^2}\, {\rm Im}\, \overline{c(a)} = - \frac{2m_a\, {\rm Im}\, c(a)}{|c(a)|^2}\, h \geq \gamma |h| \qquad \forall h < 0\, ,
\]
where $\gamma := \frac{2m_a\, {\rm Im}\, c(a)}{|c(a)|^2}$ is a positive constant. We then rewrite
\[
H (h, z) = 1 + c (a) (z-\xi (h)) + \underbrace{o (|\xi (h)-z_0| + |h|)}_{=: r(h)} + o (|z-\xi (h)|)\, .
\]
Consider the disk $D_h := \{|z-\xi (h)| \leq 2 \beta |h|\}$, for a suitably chosen constant $\beta>0$. We will show below that, adjusting the constants $\mu$ and $\beta$ suitably, the disk will be in the domain of the holomorphic function $H (h, \cdot)$. Leaving this aside for the moment, by Rouch\'e's Theorem, if we choose $h$ sufficiently small the set $H(h, D_h)$ contains a disk of radius $|c(a)|\beta |h|$ centered at $1+ r (h)$. But then for $h$ sufficiently small we also have $|r(h)| \leq \frac{|c(a)|\beta |h|}{2}$ and so we conclude that $1\in H (h, D_h)$, namely that there is a point $z (h)$ in the disk $D_h$ which is mapped to $1$ by $H (h, \cdot)$. This would then complete the proof of Proposition \ref{p:5-7} if we were able to prove that ${\rm Im}\, z (h) >0$. We therefore need to show that $D_h$ is in the domain of $H (h, \cdot)$, namely,
\[
{\rm Im}\, z \geq \mu |{\rm Re}\, (z-z_0)|\, \qquad \forall z\in D_h\, .
\]
We first estimate
\[
{\rm Im}\, z \geq {\rm Im}\, \xi (h) - 2 \beta |h| \geq (\gamma- 2 \beta) |h|\, .
\]
Then, since $|\xi (h)-z_0| = \frac{2m_a |h|}{|c(a)|}$,
\begin{equation}\label{e:inequality-101}
|{\rm Re}\, (z-z_0)| \leq |\xi (h)-z_0| + |z-\xi (h)| \leq \left(\frac{2m_a}{|c(a)|} + 2 \beta\right) |h| \qquad \forall z \in D_h\, .
\end{equation}
We thus conclude that
\begin{equation}\label{e:inequality-102}
{\rm Im}\, z \geq \frac{\gamma-2\beta}{2m_a/|c(a)|+2\beta}\, |{\rm Re}\, (z-z_0)| \qquad \forall z\in D_h\, .
\end{equation}
Thus it suffices to choose $\beta = \frac{\gamma}{3}$ and $\mu = \frac{\gamma |c(a)|}{6 m_a + 2\gamma |c(a)|}$. This guarantees at the same time the existence of a solution and the fact that $z (h)$ has positive imaginary part when $h<0$, since the estimate ${\rm Im}\, z \geq (\gamma - 2\beta)|h| = \frac{\gamma}{3} |h| > 0$ holds in particular at $z = z(h)$.
In order to complete the proof of Proposition \ref{p:5-7} we therefore just need to show Lemma \ref{l:will-apply-Rouche}.
\begin{proof}[Proof of Lemma \ref{l:will-apply-Rouche}]
In order to show holomorphicity we just need to show that, for each fixed $z$,
\[
z\mapsto \sum_{k=0}^\infty (- T^{-1} \circ \mathcal{R}_{m,z})^k
\]
is holomorphic. Since the series converges in the operator norm, it suffices to show that each map $z\mapsto (-T^{-1} \circ \mathcal{R}_{m,z})^k$ is holomorphic for every $k$, for which indeed it suffices to show that $z\mapsto \mathcal{R}_{m,z}$ is holomorphic. This is however obvious from the explicit formula. We therefore now come to the Taylor expansion \eqref{e:expansion}.
\medskip
{\bf Step 1.} We will show here that
\begin{equation}\label{e:small-in-Csigma}
\|\mathcal{R}_{m_0+h,z}\|_{\mathcal{L} (C^\sigma)} \leq C (\sigma) (|h| + |z-z_0|)\,
\end{equation}
for every $\sigma\in ]0,1[$, where $\|L\|_{\mathcal{L} (C^\sigma)}$ is the operator norm of a bounded linear operator $L$ on $C^{\sigma}$.
The estimate will have the following consequence. First of all using \eqref{e:Neumann-series} and $\|\psi_0\|_{L^2}^2 =1$ we expand
\begin{equation}\label{e:Taylor-2}
H (h,z) = 1 - \langle T^{-1} \circ \mathcal{R}_{m_0+h,z} (\psi_0), \psi_0\rangle
+ \underbrace{\sum_{k=2}^\infty \langle (-T^{-1} \circ \mathcal{R}_{m_0+h,z})^k (\psi_0), \psi_0\rangle}_{=: R_1 (z,h)}\, .
\end{equation}
Hence using \eqref{e:small-in-Csigma} we estimate
\begin{align}
|R_1 (z,h)| & \leq \sum_{k=2}^\infty \|(-T^{-1} \circ \mathcal{R}_{m_0+h,z})^k (\psi_0)\|_\infty \|\psi_0\|_{L^1}\nonumber\\
&\leq C \sum_{k=2}^\infty (\|T^{-1}\|_{\mathcal{L} (C^\sigma)} \|\mathcal{R}_{m_0+h,z}\|_{\mathcal{L} (C^\sigma)})^k \|\psi_0\|_{C^\sigma}\|\psi_0\|_{L^1} = o (|h|+|z-z_0|)\, ,\label{e:resto-1}
\end{align}
for some fixed $\sigma$.
In order to show \eqref{e:small-in-Csigma} we write
\[
\mathcal{R}_{m_0+h,z} (\psi) = (z-z_0) \mathcal{K}_{m_0} \left(\frac{1}{\Xi-z} \left(\frac{A}{\Xi-z_0} \psi\right)\right) + (2m_0 h +h^2) \mathcal{K}_{m_0} (\psi)\, .
\]
Since $\frac{A}{\Xi-z_0}$ is smooth, it suffices to show that the operators $B_z:= \mathcal{K}_{m_0} \circ \frac{1}{\Xi-z}$ are uniformly bounded in $\mathcal{L} (C^\sigma)$. We first fix a smooth cut-off function $\varphi \in C^\infty_c (]a-2, a+2[)$ which equals $1$ on $[a-1,a+1]$ and write
\[
B_z= B_z^1 + B_z^2
:= \mathcal{K}_{m_0} \circ \left(\frac{1-\varphi}{\Xi-z}\right)+\mathcal{K}_{m_0} \circ \left(\frac{\varphi}{\Xi-z} \right)\, .
\]
But since $(1-\varphi)/(\Xi-z)$ enjoys a uniform bound in $C^k$, it is easy to conclude that $\|B^1_z\|_{\mathcal{L} (C^\sigma)}$ is bounded uniformly in $z$. We thus need to bound
\begin{align*}
B^2_z (\psi) (t) &= \frac 1{2m_0}\int e^{-m_0 |t-s|} \frac{\varphi (s)}{\Xi (s) -z} \psi (s)\, ds\, .
\end{align*}
We first bound $\|B^2_z (\psi)\|_{L^\infty}$. We write $z= x+iy$ and, since $x$ is close to $\Xi (a)$, we select the unique $a'$ such that $\Xi (a')=x$ and write
\begin{align*}
B^2_z (\psi) (t) &= \frac 1{2m_0}\underbrace{\int e^{-m_0 |t-s|} \frac{\varphi (s) (\psi (s)-\psi (a'))}{(\Xi (s) -\Xi (a')) - iy} \, ds}_{=: I_1 (t)}
+ \frac {\psi(a')}{2m_0}\underbrace{\int e^{-m_0 |t-s|} \frac{\varphi (s)}{\Xi (s) -z}\, ds}_{=: I_2 (t)}
\end{align*}
Writing $\frac{1}{\Xi -z} = \frac{1}{\Xi'}\frac{d}{ds} \ln (\Xi -z)$ we can integrate by parts to get
\begin{align*}
I_2 (t) &= - \underbrace{\int m_0 \frac{t-s}{|t-s|} e^{-m_0 |t-s|} (\Xi' (s))^{-1} \ln (\Xi (s)-z) \varphi (s)\, ds}_{=:I_{2,1} (t)}\\
&\qquad -
\underbrace{\int e^{-m_0 |t-s|} \ln (\Xi (s) -z) \frac{d}{ds} ((\Xi')^{-1} \varphi) (s)\, ds}_{=: I_{2,2} (t)}
\end{align*}
and use the uniform bound for $\ln (\Xi (s)-z)$ in $L^1 ([a-2,a+2])$ to conclude that $|I_{2,1}|$ and $|I_{2,2}|$ are both bounded uniformly. As for $I_1$, note that, on any compact interval $K$ around $a'$, we have, since $\Xi'$ is continuous and $\Xi'<0$,
\begin{equation*}
C(K):=\inf_{x\in K} |\Xi'(x)| = -\max_{x\in K} \Xi'(x) >0.
\end{equation*}
Therefore, by the mean value theorem, for all $s\in K$ there exists $\iota = \iota(s)\in K$ such that
\begin{equation*}
\abs{\Xi(s)-\Xi(a')-iy}\geq \abs{\Xi(s)-\Xi(a')}= \abs{s-a'}\abs{ \Xi'(\iota)}\ge \abs{s-a'} C(K).
\end{equation*}
By the definition of the Hölder semi-norm, we thus have, for all $s\in K$,
\[
\left|\frac{\psi (s)- \psi (a')}{\Xi (s) - \Xi (a') - iy}\right| \leq \frac{\|\psi\|_{C^\sigma}}{C(K)|s-a'|^{1-\sigma}},
\]
which is integrable. Furthermore, outside of $K$ the integrand of $I_1$ is bounded and decays exponentially, therefore one can uniformly bound $I_1$.
We next wish to bound the seminorm
\[
[B^2_z (\psi)]_\sigma:= \sup_{t\neq t'} \frac{|B^2_z (\psi) (t) - B^2_z (\psi) (t')|}{|t-t'|^\sigma}\, .
\]
We write
\[
B^2_z (\psi) (t) - B^2_z (\psi) (t') = \frac{1}{2m_0} (I_1 (t) - I_1 (t')) + \frac{\psi (a')}{2m_0} (I_2 (t) - I_2 (t'))\, .
\]
Since $|\psi (a')|\leq \|\psi\|_{C^\sigma}$, it suffices to estimate the increments of $I_1$ and $I_2$.
Using that $|e^{-m_0 |t-s|} - e^{-m_0 |t'-s|}|\leq C |t-t'|$ we can bound
\[
|I_1 (t) - I_1 (t')| \leq C |t-t'| \int |\varphi (s)| \frac{|\psi (s)-\psi (a')|}{|\Xi (s) - \Xi (a')|}\, ds
\leq C \|\psi\|_{C^\sigma} |t-t'|\, .
\]
Similarly we can write
\[
|I_{2,2} (t) - I_{2,2} (t')| \leq C |t-t'| \int \left|\ln (\Xi (s) -z) \frac{d}{ds} ((\Xi')^{-1} \varphi) (s)\right|\, ds \leq C |t-t'|\, .
\]
Next denoting the function $(\Xi' (s))^{-1} \varphi (s) \ln (\Xi (s) -z)$ by $B (s)$ we assume $t> t'$ and write further
\begin{align*}
I_{2,1} (t) - I_{2,1} (t')&= m_0 \Bigg(\underbrace{\int_t^\infty e^{-m_0 (s-t)} B(s)\, ds - \int_{t'}^\infty e^{-m_0(s-t')} B(s)\, ds}_{=: J_+(t,t')}\Bigg)\\
&\qquad - m_0 \Bigg(\underbrace{\int_{-\infty}^t e^{-m_0 (t-s)} B(s)\, ds - \int_{-\infty}^{t'} e^{-m_0 (t'-s)} B(s)\, ds}_{=: J_- (t,t')}\Bigg)\, .
\end{align*}
Then we choose $p=\frac{1}{\sigma}$, let $p'$ be the dual exponent and estimate
\begin{align*}
|J_+ (t,t')| &\leq C |t-t'| \int_t^\infty |B(s)|\, ds + \int_{t'}^t |B (s)|\, ds\\
&\leq C |t-t'| \|B\|_{L^1} + |t-t'|^\sigma \|B\|_{L^{p'}}\, .
\end{align*}
A similar estimate for $J_- (t,t')$ finally shows the existence of a constant $C$ such that
\[
|B^2_z (\psi) (t) - B^2_z (\psi) (t')|\leq C \|\psi\|_{C^\sigma} \left(|t-t'|+|t-t'|^\sigma\right)\, .
\]
Clearly this implies
\[
|B^2_z (\psi) (t) - B^2_z (\psi) (t')|\leq C \|\psi\|_{C^\sigma} |t-t'|^\sigma \qquad \mbox{if $|t-t'|\leq 1$.}
\]
On the other hand we can trivially bound
\[
|B^2_z (\psi) (t) - B^2_z (\psi) (t')| \leq 2 \|B^2_z (\psi)\|_\infty \leq C \|\psi\|_{C^\sigma} |t-t'|^\sigma
\quad\mbox{if $|t-t'|\geq 1$.}
\]
Together with the uniform bound on $\|B^2_z (\psi)\|_\infty$, this shows that the operators $B^2_z$, and hence the operators $B_z$, are bounded in $\mathcal{L} (C^\sigma)$ uniformly in $z$, which completes the proof of \eqref{e:small-in-Csigma}.
\medskip
{\bf Step 2.} In this second step we compute
\begin{align*}
\langle T^{-1} \mathcal{R}_{m,z} (\psi_0), \psi_0\rangle &=
\langle T^{-1} \circ \mathcal{K}_{m_0} \left(A ((\Xi-z)^{-1} - (\Xi-z_0)^{-1})\psi_0\right), \psi_0\rangle\\
&\qquad
+ (2m_0 h + h^2) \langle T^{-1} \circ \mathcal{K}_{m_0} (\psi_0), \psi_0\rangle\, .
\end{align*}
Recalling \eqref{e:T-1K} (and using that both $T^{-1}$ and $\mathcal{K}_{m_0}$ are self-adjoint) we rewrite the expression as
\begin{align}
\langle T^{-1} \mathcal{R}_{m,z} (\psi_0), \psi_0\rangle &=
(z-z_0) \langle T^{-1}\circ \mathcal{K}_{m_0} \big( A (\Xi-z)^{-1} (\Xi-z_0)^{-1} \psi_0\big), \psi_0\rangle + 2m_a h + h^2\nonumber\\
&= (z-z_0) \langle A (\Xi-z)^{-1} (\Xi-z_0)^{-1} \psi_0, T^{-1} \circ \mathcal{K}_{m_0} (\psi_0)\rangle + 2m_a h + h^2\nonumber\\
&= (z-z_0) \underbrace{\langle A (\Xi-z)^{-1} (\Xi -z_0)^{-1} \psi_0, \psi_0 \rangle}_{=: G (z)} + 2m_a h + h^2\label{e:Taylor-3}\, .
\end{align}
We thus want to show that the following limit exists and to compute its imaginary part:
\[
- c (a) := \lim_{{\rm Im}\, z >0, z\to \Xi (a)} G (z) =
\lim_{{\rm Im}\, z >0, z\to \Xi (a)} \int \frac{1}{\Xi (s)-z} |\psi_0 (s)|^2 \frac{A(s)}{\Xi (s) - \Xi (a)}\, ds\, .
\]
Observe indeed that inserting $G(z) = - c(a) + o (1)$ in \eqref{e:Taylor-3} and taking into account \eqref{e:Taylor-2} and \eqref{e:resto-1} we conclude that \eqref{e:expansion} holds.
In order to compute $c(a)$ we observe first that the function $\phi (s) := |\psi_0 (s)|^2 \frac{A(s)}{\Xi (s) - \Xi (a)}$ is smooth and decays exponentially. We thus rewrite
\[
G (z) = \int \frac{1}{\Xi (s)-z} \phi (s)\, ds\, .
\]
Next we decompose $z$ into its real and imaginary part as $z = x + iy$ and observe that
\begin{align*}
\lim_{{\rm Im}\, z >0, z\to \Xi (a)} {\rm Re}\, G (z) &= \lim_{x\to \Xi(a), y \downarrow 0} \int \frac{\Xi (s)-x}{(\Xi (s)-x)^2 + y^2} \phi (s)\, ds\, .
\end{align*}
Here we are only interested in showing that the limit exists and we thus fix a cut-off function $\varphi\in C^\infty_c (]a-2, a+2[)$, identically $1$ on $[a-1, a+1]$ and split the integral into
\[
{\rm Re}\, G (z) = \int \frac{\Xi (s)-x}{(\Xi (s)-x)^2 + y^2} \phi (s) \varphi (s)\, ds +
\int \frac{\Xi (s)-x}{(\Xi (s)-x)^2 + y^2} \phi (s) (1-\varphi (s))\, ds\, .
\]
The second integral has a limit, while in order to show that the first has a limit we write
\[
\frac{\Xi (s)-x}{(\Xi (s)-x)^2 + y^2} = \frac{1}{2\Xi' (s)} \frac{d}{ds} \ln ((\Xi(s)-x)^2 + y^2)\, .
\]
We then integrate by parts and use the fact that $\ln ((\Xi (s)-x)^2 +y^2)$ converges to $2 \ln |\Xi(s)-\Xi (a)|$ strongly in $L^q ([a-2,a+2])$ for every $q$ to infer the existence of the limit of the first integral.
As for the imaginary part we write instead
\begin{align}\label{e:arctan-integral}
\lim_{{\rm Im}\, z >0, z\to \Xi (a)} {\rm Im}\, G (z) &= \lim_{x\to \Xi(a), y \downarrow 0} \int \frac{y}{(\Xi (s)-x)^2 + y^2} \phi (s)\, ds\, .
\end{align}
We wish to show that the latter integral converges to
\begin{equation}\label{e:arctan-integral-2}
I = \phi (a) \int \frac{ds}{(\Xi' (a))^2 s^2 +1} = \frac{\pi \phi (a)}{|\Xi' (a)|}\, .
\end{equation}
On the other hand, since $A (a) = 0$, we have $\phi (a) = |\psi_0 (a)|^2 \lim_{s\to a}\frac{A (s)}{\Xi (s) - \Xi (a)} = |\psi_0 (a)|^2 A' (a) (\Xi' (a))^{-1}$.
Since $A' (a) > 0$ and $\Xi' (a)<0$, we conclude that $ c (a)$ exists and it is a complex number with positive imaginary part, which completes the proof of the lemma.
It remains to show the convergence of \eqref{e:arctan-integral} to \eqref{e:arctan-integral-2}. First observe that for each $x$ sufficiently close to $\Xi (a)$ there is a unique $a' = \Xi^{-1} (x)$ such that $\Xi (a')=x$. Changing variables ($s$ becomes $a'+s$), the integral in \eqref{e:arctan-integral} becomes
\begin{equation}
\int \frac{y}{(\Xi (a'+s)-x)^2 + y^2} \phi (a'+s)\, ds\,
\end{equation}
and we wish to show that its limit is $I$ as $(a',y)\to (a,0)$.
Next, fix any $\delta>0$ and observe that
\[
\lim_{y\to 0} \int_{|s|\geq \delta} \frac{y}{(\Xi (a'+s)-x)^2 + y^2} \phi (a'+s)\, ds=0
\]
uniformly in $a' \in [a-1, a+1]$. We therefore define
\[
I (\delta, a', y) := \int_{-\delta}^\delta \frac{y}{(\Xi (a'+s)-x)^2 + y^2} \phi (a'+s)\, ds
\]
and we wish to show that, for every $\varepsilon >0$ there is a $\delta>0$ such that
\begin{equation}\label{e:arctan-integral-3}
\limsup_{(a',y) \to (a,0),\, y>0} \left| I (\delta, a', y) - I\right| \leq C \varepsilon\, ,
\end{equation}
where $C$ is a geometric constant.
We rewrite
\[
I (\delta, a', y) = \int_{-\delta y^{-1}}^{\delta y^{-1}} \frac{\phi (a'+ys)}{y^{-2} (\Xi (a' + ys) - \Xi (a'))^2 +1}\, ds\, .
\]
Fix now $\varepsilon$ and observe that, since $\Xi'$ and $\phi$ are continuous, if $\delta$ is chosen sufficiently small, then
\begin{align}
&((\Xi' (a))^2 - \varepsilon^2) s^2 \leq y^{-2} (\Xi (a' + ys) - \Xi (a'))^2 \leq ((\Xi' (a))^2 + \varepsilon^2) s^2\, ,\\
& |\phi (a' + ys) - \phi (a)| \leq \varepsilon\, ,
\end{align}
for all $|a'-a|<\delta$ and $y |s| \leq \delta$. Choosing $\varepsilon>0$ so that $\varepsilon \leq \frac{|\Xi' (a)|}{2}$ we easily see that, when $|a'-a| < \delta$, we have
\[
\left|I (\delta, a', y) - \phi (a) \int_{-\delta y^{-1}}^{\delta y^{-1}} \frac{ds}{(\Xi' (a))^2 s^2 +1}\right| \leq C \varepsilon\, .
\]
In particular, as $y\downarrow 0$, we conclude \eqref{e:arctan-integral-3}.
\end{proof}
\section{Proof of Proposition \ref{p:almost-final}}
We reduce the proof of Proposition \ref{p:almost-final} to the following lemma.
\begin{lemma}\label{l:almost-final-2}
Consider $G:= \{m\in\, ]1, \infty[\, \setminus \{m_a, m_b\} : \mathscr{U}_m \neq \emptyset\}$. Then $G$ is relatively open and relatively closed in $]1, \infty[\setminus \{m_a, m_b\}$.
\end{lemma}
Proposition \ref{p:almost-final} is an obvious consequence of the latter lemma and of Proposition \ref{p:5-7}: Lemma \ref{l:almost-final-2} implies that $G$ is the union of connected components of $]1, \infty[\setminus \{m_a, m_b\}$. On the other hand the connected component $]m_b, m_a[$ intersects $G$ because of Proposition \ref{p:5-7} and thus it is contained in $G$.
We thus complete the proof of Proposition \ref{p:almost-final} by showing Lemma \ref{l:almost-final-2}.
\begin{proof}[Proof of Lemma \ref{l:almost-final-2}] We start with some preliminary considerations. Fix an interval $[c,d]\subset ]1, \infty[\setminus \{m_a, m_b\}$.
Recalling Proposition \ref{p:all-m} we know that, since the operator norm of $\mathcal{L}_m$ is bounded uniformly in $m\in [c,d]$,
\begin{itemize}
\item[(a)] There is $R>0$ such that $\mathscr{U}_m\subset B_R (0)$ for all $m\in [c,d]$.
\end{itemize}
Moreover, it also follows from Proposition \ref{p:3+4} that
\begin{itemize}
\item[(b)] There is a $\delta >0$ such that $\mathscr{U}_m\subset \{{\rm Im}\, z > \delta\}$ for all $m\in [c,d]$.
\end{itemize}
\medskip
{\bf Step 1.} We first prove that $G$ is relatively closed. To that end we fix a sequence $m_j \to m\in ]1, \infty[\setminus \{m_a, m_b\}$ such that $m_j$ belongs to $G$. Without loss of generality we can assume $\{m_j\}\subset [c,d]\subset ]1, \infty[\setminus \{m_a, m_b\}$. For each $m_j$ we can then consider $z_j\in \mathscr{U}_{m_j}$, which, by (a) and (b) and up to extraction of a subsequence, we can assume to converge to some $z\in \mathbb C$ with positive imaginary part. We then let $\psi_j$ be a sequence of nontrivial elements of $L^2$ such that
\begin{equation}\label{e:eigenvalue-equation-21}
-\psi_j'' + m_j^2 \psi_j + \frac{A}{\Xi -z_j} \psi_j = 0\, ,
\end{equation}
and normalize them so that $\|\psi_j\|_{L^2}=1$.
Since ${\rm Im}\, z_j \geq \delta >0$, the sequence of functions $\frac{A}{\Xi-z_j}$ enjoys uniform bounds in the spaces $L^1$ and $C^k$. We can then argue as in Section \ref{s:3+4} to find that
\begin{itemize}
\item[(i)] $\|\psi_j'\|_{L^2}$ enjoy a uniform bound;
\item[(ii)] There are uniformly bounded nonzero constants $\{C^\pm_j\}$ with the property that $\psi_j$ is asymptotic to $C^\pm_j e^{\mp m_j t}$ at $\pm \infty$;
\item[(iii)] There is a $T_0>0$ independent of $j$ with the property that
\[
|\psi_j (t) - C^\pm_j e^{\mp m_j t}| \leq \frac{|C^\pm_j|}{2} e^{\mp m_j t} \qquad \forall \pm t > T_0\, .
\]
\end{itemize}
These three properties together imply that a subsequence, not relabeled, converges strongly in $L^2$ to some $\psi$ with $\|\psi\|_{L^2}=1$. Passing to the limit in \eqref{e:eigenvalue-equation-21} we conclude that
\[
-\psi'' + m^2 \psi + \frac{A}{\Xi-z} \psi = 0\, .
\]
This shows that $z\in \mathscr{U}_m$, i.e. that $m \in G$.
\medskip
{\bf Step 2.} Here we show that $G$ is relatively open. To that end we consider some sequence $m_j \to m \in ]1, \infty[\setminus \{m_a, m_b\}$ with the property that $m_j \not\in G$ and we show that $m\not \in G$. By (a) and (b) above, it suffices to show that the domain
\[
\Delta := \{|z|< R : {\rm Im}\, z > \delta\}
\]
does not contain any element of ${\rm spec}\, m^{-1} \mathcal{L}_m$. Observe first that, since we know that ${\rm spec}\, m^{-1} \mathcal{L}_m$ does not intersect $\gamma = \partial \Delta$, the distance between $\gamma$ and any element of ${\rm spec}\, m^{-1} \mathcal{L}_m$ is larger than a positive constant $\varepsilon$. Recalling that the spectrum in the upper half complex plane is discrete, we have that
\[
P_m := -\frac{1}{2\pi i}\int_\gamma (m^{-1} \mathcal{L}_m -z)^{-1}\, dz
\]
is a projection onto a finite-dimensional space which contains all eigenspaces of the elements $z\in {\rm spec}\, m^{-1} \mathcal{L}_m\cap \Delta = \mathscr{U}_m$. And since all such elements belong to the discrete spectrum, $\mathscr{U}_m = \emptyset$ if and only if $P_m = 0$. On the other hand
\[
P_{m_j} := -\frac{1}{2\pi i}\int_\gamma (m_j^{-1} \mathcal{L}_{m_j} -z)^{-1}\, dz
\]
equals $0$ precisely because $m_j \not \in G$. We thus just need to show that $P_{m_j}$ converges to $P_m$ to infer that $P_m = 0$, and hence that $m\not\in G$. The latter follows from the following observations:
\begin{itemize}
\item[(i)] Since $\gamma$ is a compact set and does not intersect the spectrum of $m^{-1} \mathcal{L}_m$, there is a constant $M$ such that $\|(m^{-1} \mathcal{L}_m -z)^{-1}\|_O \leq M$ for all $z\in \gamma$;
\item[(ii)] $m_j^{-1}\mathcal{L}_{m_j}$ converges to $m^{-1}\mathcal{L}_m$ in the operator norm;
\item[(iii)] Writing
\[
(m_j^{-1} \mathcal{L}_{m_j} - z)^{-1} = ({\rm Id} + (m^{-1} \mathcal{L}_m -z)^{-1} (m_j^{-1} \mathcal{L}_{m_j} - m^{-1} \mathcal{L}_m))^{-1}(m^{-1} \mathcal{L}_m - z)^{-1}\, ,
\]
when $\|m_j^{-1} \mathcal{L}_{m_j} - m^{-1} \mathcal{L}_m\|_{O} \leq \frac{1}{2M}$ we can use the Neumann series for the inverse to infer
\[
\sup_{z\in \gamma} \|(m_j^{-1} \mathcal{L}_{m_j} -z)^{-1} - (m^{-1} \mathcal{L}_m -z)^{-1}\|_O \leq C \|m^{-1} \mathcal{L}_m - m_j^{-1} \mathcal{L}_{m_j}\|_O\, ,
\]
for some constant $C$ independent of $j$.
\end{itemize}
We then conclude that $P_{m_j}$ converges to $P_m$ in the operator norm.
\end{proof}
\begin{remark}\label{rmk:algebraic dim const}
An immediate outcome of the argument above is that the sum of the algebraic multiplicities of $z\in \mathscr{U}_m$, as eigenvalues of $m^{-1} \mathcal{L}_m$, is constant on any connected component of $]1, \infty[\setminus \{m_a, m_b\}$. Indeed, it coincides with the rank of the operator $P_m$ defined in Step 2.
\end{remark}
\section{Proof of Lemma \ref{l:bottom}}\label{s:choice-of-A}
Rather than looking for a suitable $\Xi$ we will write $G := \Xi' + 2\Xi$ and look for the latter function after expressing
\[
\Xi (t) := \int_{-\infty}^t e^{-2(t-\tau)} G (\tau)\, d\tau\, .
\]
To check that the above formula recovers $\Xi$ under our assumptions, set
\[
X (t) := \int_{-\infty}^t e^{-2(t-\tau)} G (\tau)\, d\tau
\]
and observe that, by the classical solution formula for first order ODEs with constant coefficients, $X$ solves $X' + 2X = G = \Xi' + 2\Xi$. Hence $X$ and $\Xi$ can differ at most by a multiple of $e^{-2t}$, and it suffices to show that they coincide in a neighborhood of $-\infty$. To that end recall that $\Xi (t) = \Xi (-\infty) - c_0 e^{2t}$ for any sufficiently negative $t$ and thus
\[
G (t) = 2\Xi (-\infty) - 4c_0 e^{2t}\, ,
\]
so that, for any such $t$,
\[
\Xi (t) = e^{-2t} \int_{-\infty}^t (2\Xi (-\infty) e^{2\tau} - 4c_0 e^{4\tau})\, d\tau =
\Xi (-\infty) - c_0 e^{2t}\, .
\]
We next read the conditions $\Xi\in \mathscr{C}$ in terms of $G$ to find that they are
\begin{itemize}
\item[(i)] $G (t) = 2 \Xi (-\infty) - 4 c_0 e^{2t}$ for all $t$ sufficiently negative;
\item[(ii)] $G (t) = e^{-{\bar\alpha} t}$ for all $t\geq \ln 2$;
\item[(iii)] There are exactly two zeros $a<b$ of $G'$ and $G'' (a)>0$, $G'' (b)<0$;
\item[(iv)] $\int_{-\infty}^t e^{-2(t-\tau)} G' (\tau) d\tau < 0$ for every $t$.
\end{itemize}
The conditions (i), (ii), and (iii) are obviously equivalent to the corresponding ones in Definition \ref{d:class-C}. As for (iv), we just need to check the formula
\[
\Xi' (t) = \int_{-\infty}^t e^{-2(t-\tau)} G' (\tau)\, d\tau\, .
\]
Arguing as above, the solution formula for first order ODEs with constant coefficients shows that the two sides of the above identity can differ at most by a multiple of $e^{-2t}$, while a direct verification using (i) shows that the two sides coincide for sufficiently negative $t$'s.
We can next read all the above conditions in terms of $A$; more precisely, it suffices to impose
\begin{itemize}
\item[(i')] $A (t) = - 8 c_0 e^{2t}$ for all $t$ sufficiently negative;
\item[(ii')] $A(t) = -{\bar\alpha} e^{-{\bar\alpha} t}$ for all $t\geq \ln 2$;
\item[(iii')] There are exactly two zeros $a<b$ of $A$ and $A' (a) >0$, $A' (b)<0$;
\item[(iv')] $\int_{-\infty}^t e^{-2(t-\tau)} A(\tau) d\tau <0$ for every $t$.
\end{itemize}
In fact, assuming the four conditions above we easily recover $G$ by setting
\[
G (t) := - \int_t^\infty A(\tau)\, d\tau\, .
\]
Note in passing that since (i'), (ii'), (iii'), and (iv') imply (i), (ii), (iii), and (iv), which in turn imply the corresponding conditions in Definition \ref{d:class-C}, we derive
\[
\Xi (-\infty) = \frac{1}{2} G (-\infty) = - \frac{1}{2} \int_{-\infty}^\infty A(\tau)\, d\tau\, .
\]
In turn, since $\Xi (\infty) = 0$ and $\Xi'<0$, the latter equality implies
\[
\int_{-\infty}^\infty A (\tau)\, d\tau < 0\, .
\]
We next fix $a=0$ and $b=\frac{1}{2}$ and rather than imposing (iv') we impose the two conditions
\begin{itemize}
\item[(v')] $\int_{-\infty}^0 e^{2\tau} A (\tau) d\tau = -1$;
\item[(vi')] $\max A \leq \frac{1}{e}$.
\end{itemize}
Observe, indeed, that (iv') is equivalent to
\[
\int_{-\infty}^t e^{2\tau} A (\tau) \, d\tau < 0
\]
and that, since $A$ is negative on $]-\infty, 0[$ and $]\frac{1}{2}, \infty[$, the integral on the left-hand side is maximal for $t=\frac{1}{2}$. We then can use (v') and (vi') to estimate
\[
\int_{-\infty}^{\frac{1}{2}} e^{2\tau} A(\tau)\,d \tau \leq -1 + \frac{e}{2} \max A \leq -\frac{1}{2}\, .
\]
We next recall that, by the Rayleigh criterion,
\[
- \lambda_a = \min_{\|\psi\|_{L^2} = 1} \langle \psi, L_a \psi\rangle = \min_{\|\psi\|_{L^2} = 1} \int \left(|\psi'|^2 + \frac{A}{\Xi-\Xi (0)} |\psi|^2 \right)\, .
\]
We test the right-hand side with
\[
\psi (t) :=
\left\{
\begin{array}{ll}
0 \qquad & \mbox{for $|t|\geq \frac{1}{2}$}\\ \\
\sqrt{2} \cos (\pi t) \qquad &\mbox{for $|t|\leq \frac{1}{2}$.}
\end{array}\right.
\]
We therefore get
\begin{equation}\label{e:bottom-est-1}
- \lambda_a \leq 2 \pi^2 + 2 \int_{-1/2}^{1/2} \frac{A (t)}{\Xi (t) - \Xi (0)} \cos^2 \pi t\, dt\, .
\end{equation}
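For the reader's convenience we record where the first term comes from: with this test function
\[
\int |\psi'|^2 = 2\pi^2 \int_{-1/2}^{1/2} \sin^2 (\pi t)\, dt = \pi^2 \leq 2\pi^2\, ,
\]
so that \eqref{e:bottom-est-1} holds (indeed with room to spare in the first term).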
Next, for any fixed positive constant $B>0$, we impose that $A (t) = B t$ on the interval $]- \sqrt{B}^{-1}, 0]$ and we then continue it smoothly on $[0, \infty[$ so as to satisfy (ii') and (iii') (the verification that this is possible is rather simple). We can also continue it smoothly on $]-\infty, - \sqrt{B}^{-1}]$ so as to ensure (i'). In order to ensure (v') as well we just need to show that
\[
\int_{-\sqrt{B}^{-1}}^0 e^{2\tau} A(\tau)\, d\tau \geq -\frac{1}{2}\, .
\]
The latter is certainly ensured by
\[
\int_{- \sqrt{B}^{-1}}^0 e^{2\tau} A(\tau)\, d\tau \geq \int_{-\sqrt{B}^{-1}}^0 A(\tau)\, d\tau = -\frac{1}{2}\, .
\]
Now, observe that $\Xi' (0) = \int_{-\infty}^0 e^{2\tau} A (\tau) \, d\tau = -1$. For $t\in \, ]-\sqrt{B}^{-1}, 0[$ we wish to estimate $\Xi' (t)$ and to do it we compute
\begin{align*}
|\Xi' (t)- \Xi' (0)| & = \left|e^{-2t}\int_{-\infty}^t e^{2\tau} A (\tau)\, d\tau - \int_{-\infty}^0 e^{2\tau} A(\tau)\, d\tau\right|\\
&\leq e^{-2t} \left|\int_t^0 e^{2\tau} A(\tau)\, d\tau\right| + (e^{-2t}-1) \left|\int_{-\infty}^0 e^{2\tau} A(\tau)\, d\tau\right|\\
&\leq \frac{e^{2\sqrt{B}^{-1}}}{2} + (e^{2\sqrt{B}^{-1}}-1) \leq \frac{3}{4}\, ,
\end{align*}
which can be ensured by taking $B$ sufficiently large. In particular $-\frac{1}{4} \geq \Xi' (t) \geq -2 $ for $t\in \, ]-\sqrt{B}^{-1}, 0[$. We thus conclude that
\[
-\frac{t}{4} \leq \Xi (t) - \Xi (0) \leq -2t \qquad \forall t\in \, ]-\sqrt{B}^{-1}, 0[\, .
\]
In turn the latter can be used to show
\[
\frac{A (t)}{\Xi (t) - \Xi(0)} \leq - \frac{B}{2} \qquad \forall t\in \, ]-\sqrt{B}^{-1}, 0[\, .
\]
Since $\frac{A}{\Xi - \Xi (0)}$ is otherwise negative on $]-\frac{1}{2}, \frac{1}{2}[$, we conclude
\begin{equation}\label{e:bottom-est-2}
- \lambda_a \leq 2\pi^2 - \int_{-\sqrt{B}^{-1}}^0 B \cos^2 \pi t\, dt\, .
\end{equation}
By taking $B$ large enough we can ensure that $\cos^2 \pi t\geq \frac{1}{2}$ on the interval $]-\sqrt{B}^{-1}, 0[$. In particular we achieve
\[
- \lambda_a \leq 2\pi^2 - \frac{\sqrt{B}}{2}\, .
\]
Since we can choose $\sqrt{B}$ as large as we wish, the latter inequality completes the proof of the lemma.
\chapter{Linear theory: Part II}
\label{chapter:linearpartii}\label{sect:Proof-spectral-2}
This chapter is devoted to proving Theorem \ref{thm:spectral2}. Because of the discussions in the previous chapter, considering the decomposition
\[
L^2_m = \bigoplus_{k\in \mathbb Z} U_{km}\, ,
\]
the statement of Theorem \ref{thm:spectral2} can be reduced to the study of the spectra of the restrictions $L_{\text{st}}|_{U_{km}}$ of the operator $L_{\text{st}}$ to the invariant subspaces $U_{km}$. For this reason we introduce the notation ${\rm spec}\, (L_{\text{st}}, U_j)$ for the spectrum of the operator $L_{\text{st}}|_{U_j}$, understood as an operator from $U_j$ to $U_j$. The following is a very simple observation.
\begin{lemma}
The restriction of the operator $L_{\text{st}}$ to the radial functions $U_0$ is identically $0$. Moreover, $z\in {\rm spec}\, (L_{\text{st}}, U_j)$ if and only if $\bar z \in {\rm spec}\, (L_{\text{st}}, U_{-j})$.
\end{lemma}
We will then focus on proving the following statement, which is slightly stronger than what we need to infer Theorem \ref{thm:spectral2}.
\begin{theorem}\label{thm:spectral3}
For a suitable choice of $\bar \Omega$, there is $m_0\geq 2$ such that
${\rm spec}\, (L_{\text{st}}, U_{m_0}) \cap \{{\rm Re}\, z > 0\}$ is nonempty and ${\rm spec}\, (L_{\text{st}}, U_{m_0})\cap \{{\rm Re}\, z \geq \bar a\}$ is finite for every positive $\bar a$.
\end{theorem}
\begin{remark}\label{r:better2}\label{R:BETTER2} As is the case for Theorem \ref{thm:spectral2}, we can deepen our analysis and prove the following stronger statement:
\begin{itemize}
\item[(i)] For a suitable choice of $m_0$, in addition to the conclusion of Theorem \ref{thm:spectral3} we have ${\rm spec}\, (L_{\text{st}}, U_m) \subset i \mathbb R$ for every $m> m_0$.
\end{itemize}
This will be done in Appendix \ref{a:better}, where we will also show how conclusion (c) of Remark \ref{r:better} follows from it.
\end{remark}
{\color{red} Note that in \cite{Vishik2} Vishik claims the following stronger statement.
\begin{theorem}\label{thm:spectral-stronger-2}\label{THM:SPECTRAL-STRONGER-2}
For a suitable choice of $m_0$, in addition to the conclusion of Theorem \ref{thm:spectral3} and to Remark \ref{r:better2}(i), we have also
\begin{itemize}
\item[(ii)] ${\rm spec}\, (L_{\text{st}}, U_{m_0}) \cap \{{\rm Re}\, z > 0\}$ consists of a single eigenvalue with algebraic multiplicity $1$.
\end{itemize}
\end{theorem}
In Appendix \ref{a:better} we will show how to prove the latter conclusion and how Theorem \ref{thm:spectral-stronger} follows from it.}
\section{Preliminaries}
If we write an arbitrary element $\Omega\in U_m$ as $\Omega (x) = e^{im\theta} \gamma (r)$ using polar coordinates, we find an isomorphism of the Hilbert space $U_m$ with the Hilbert space
\begin{equation}\label{e:def-H}
\mathcal{H}:= \left\{\gamma : \mathbb R^+ \to \mathbb C : \int_0^\infty |\gamma (r)|^2\, r\, dr < \infty\right\}
\end{equation}
and thus the operator $L_{\text{st}}: U_m \to U_m$ can be identified with an operator $\mathcal{L}_m : \mathcal{H}\to \mathcal{H}$. In fact, since $L_{\text{st}} = S_1+\mathscr{K}$, where $S_1$ is skew-adjoint and $\mathscr{K}$ compact, $\mathcal{L}_m$ is also a compact perturbation of a skew-adjoint operator. In order to simplify our notation and terminology, we will then turn our attention to the operator $i\mathcal{L}_m$, which will be written as the sum of a self-adjoint operator, denoted by $\mathcal{S}_m$, and a compact operator, denoted by $\mathscr{K}_m$.
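Note that, up to a constant factor, the identification is an isometry: if $\Omega (x) = e^{im\theta}\gamma (r)$, then
\[
\|\Omega\|_{L^2 (\mathbb R^2)}^2 = \int_0^{2\pi} \int_0^\infty |\gamma (r)|^2\, r\, dr\, d\theta = 2\pi \int_0^\infty |\gamma (r)|^2\, r\, dr\, .
\]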
\begin{lemma}\label{l:S-in-polar}
After the latter identification, if $\bar\Omega (x) = g (|x|)$ and $\zeta$ is given through the formula \eqref{e:def-zeta}, then $\mathcal{S}_m: \mathcal{H}\to \mathcal{H}$ is the following bounded self-adjoint operator:
\begin{equation}\label{e:explicit}
\gamma \mapsto \mathcal{S}_m (\gamma) = m \zeta \gamma\, .
\end{equation}
\end{lemma}
\begin{proof}
The formula is easy to check. The self-adjointness of \eqref{e:explicit} is obvious. Concerning the boundedness we need to show that $\zeta$ is bounded. Since $g$ is smooth (and hence locally bounded), $\zeta$ is smooth and locally bounded by \eqref{e:def-zeta}. To show that it is globally bounded recall that $g (r) = r^{-{\bar\alpha}}$ for $r\geq 2$, so that
\[
\zeta (r) = \frac{\tilde c_0}{r^2} + \frac{1}{r^2} \int_2^r \rho^{1-{\bar\alpha}}\, d\rho = \frac{c_0}{r^2} + \frac{c_1}{r^{\bar\alpha}} \qquad \forall r\geq 2\, ,
\]
where $c_0$ and $c_1$ are two appropriate constants.
\end{proof}
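The closed form of $\zeta$ just derived can be double-checked numerically. The following Python snippet (a sanity check, not part of the notes; the value $\bar\alpha = 0.3$ is an arbitrary sample choice) compares $\frac{1}{r^2}\int_2^r \rho^{1-\bar\alpha}\, d\rho$ with $\frac{c}{r^2} + \frac{1}{(2-\bar\alpha)\, r^{\bar\alpha}}$, where $c = -\frac{2^{2-\bar\alpha}}{2-\bar\alpha}$:

```python
import math

# Numerical sanity check (not from the notes): for the profile g(r) = r^(-alpha),
# verify (1/r^2) * int_2^r rho^(1-alpha) drho = c/r^2 + r^(-alpha)/(2-alpha),
# with c = -2^(2-alpha)/(2-alpha).  Here alpha plays the role of bar-alpha.
def simpson(f, a, b, n=4000):
    # composite Simpson rule (n even)
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

alpha = 0.3  # sample value in ]0,1[
c = -2 ** (2 - alpha) / (2 - alpha)
max_err = max(
    abs(simpson(lambda s: s ** (1 - alpha), 2.0, r) / r ** 2
        - (c / r ** 2 + r ** (-alpha) / (2 - alpha)))
    for r in (3.0, 10.0, 50.0)
)
assert max_err < 1e-8
```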
A suitable, more complicated, representation formula can be shown for the operator $\mathscr{K}_m$.
\begin{lemma}\label{l:K-in-polar}
Under the assumptions of Lemma \ref{l:S-in-polar}, the compact operator $\mathscr{K}_m: \mathcal{H}\to \mathcal{H}$ is given by
\begin{equation}\label{e:explicit2}
\gamma \mapsto \mathscr{K}_m (\gamma)= - \frac{m}{r} \psi g'\,
\end{equation}
where
\begin{equation}\label{e:explicit3}
\psi (r) = - \frac{1}{2m} r^{m} \int_r^\infty \gamma (s) s^{1-m}\, ds - \frac{1}{2m} r^{-m} \int_0^r \gamma (s) s^{1+m}\, ds\, .
\end{equation}
\end{lemma}
\begin{remark}\label{r:potential-theory}
When $\gamma$ is compactly supported, $\phi (\theta,r):= \psi (r) e^{im\theta}$ with $\psi$ as in \eqref{e:explicit3} gives the unique potential-theoretic solution of $\Delta \phi = \gamma e^{im\theta}$, namely, $\phi$ obtained as the convolution of $\gamma e^{im\theta}$ with the Newtonian potential $\frac{1}{2\pi} \ln r$. For general $\gamma\in \mathcal{H}$ we do not have enough summability to define such a convolution using Lebesgue integration, but, as already done before, we keep calling $\phi$ the potential-theoretic solution of $\Delta \phi = \gamma e^{im\theta}$.
\end{remark}
\begin{proof}[Proof of Lemma \ref{l:K-in-polar}]
First of all we want to show that the formula is correct when $\Omega = \gamma (r) e^{im \theta} \in C^\infty_c \cap L^2_m$. We are interested in computing $-i (K_2*\Omega\cdot \nabla) \bar \Omega$. First of all we recall that $K_2* \Omega = \nabla^\perp \phi$, where $\phi$ is the potential-theoretic solution of $\Delta \phi = \Omega$. Recall that for $\phi$ we have the explicit formula
\[
\phi (x) = \frac{1}{2\pi} \int_{\ensuremath{\mathbb R}^2} \Omega (y) \ln |y-x|\,\mathrm dy\, .
\]
$\phi$ is clearly smooth and hence locally bounded. Observe that $\Omega$ averages to $0$ and thus
\[
\phi (x) = \frac{1}{2\pi} \int_{\ensuremath{\mathbb R}^2} \Omega (y) (\ln |y-x| - \ln |x|)\,\mathrm dy\, .
\]
Fix $R$ larger than $1$ so that ${\rm spt}\, (\Omega) \subset B_R$ and choose $|x|\geq 2R$. We then have the following elementary inequality for every $y\in {\rm spt}\, (\Omega)$:
\[
|\ln |x| - \ln |x-y||\leq \ln (|x-y| + |y|) - \ln (|x-y|)\leq \frac{|y|}{|y-x|} \leq \frac{2|y|}{|x|}\, ,
\]
from which we conclude that $|\phi (x)|\leq C (1+|x|)^{-1}$. Hence $\phi$ is the only solution to $\Delta \phi = \Omega$ with the property that it converges to $0$ at infinity. This allows us to show that $\phi$ satisfies the formula
\[
\phi (x) = \psi (r) e^{im\theta}
\]
where $\psi$ is given by formula \eqref{e:explicit3}. We indeed just need to check that the Laplacian of
$\psi (r) e^{im\theta}$ equals $\gamma (r) e^{im\theta}$ and that $\lim_{r\to \infty} \psi (r) = 0$.
Using the formula $\Delta = \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2} + \frac{1}{r} \frac{\partial}{\partial r} + \frac{\partial^2}{\partial r^2}$ the first claim is a direct verification. Next, since $\gamma (r) =0$ for $r\geq R$, we conclude $\psi (r) = C r^{-m}$ for all $r\geq R$, which shows the second claim.
Observe next that
\[
\nabla \phi = \frac{m i}{r^2} \psi (r) e^{im\theta} \frac{\partial}{\partial \theta} + \frac{\partial}{\partial r} \left(\psi (r) e^{im\theta}\right) \frac{\partial}{\partial r}\, ,
\]
which turns into
\[
\nabla^\perp \phi = - \frac{mi}{r} \psi (r) e^{im\theta} \frac{\partial}{\partial r} + \frac{1}{r} \frac{\partial}{\partial r} \left(\psi (r) e^{im\theta}\right) \frac{\partial}{\partial \theta}\, .
\]
Since $\bar\Omega (x) = g(r)$, we then conclude that
\[
- (K_2*\Omega\cdot \nabla) \bar\Omega = \frac{mi}{r} \psi (r) e^{im\theta} g' (r)\, .
\]
Upon multiplication by $i$ we obtain formula \eqref{e:explicit2}. Since we know from the previous chapter that $\mathscr{K}$ is a bounded and compact operator and $\mathscr{K}_m$ is just the restriction of $i\mathscr{K}$ to a closed invariant subspace of it, the boundedness and compactness of $\mathscr{K}_m$ is obvious.
\end{proof}
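The representation formula \eqref{e:explicit3} can also be verified numerically. In the following Python snippet (a sanity check, not part of the notes) we pick the sample values $m=2$ and $\psi (r) = r^2 e^{-r^2}$, for which a direct computation gives $\gamma = \psi'' + \frac{\psi'}{r} - \frac{m^2}{r^2}\psi = 4r^2 (r^2-3) e^{-r^2}$, and we check that \eqref{e:explicit3} reproduces $\psi$ from $\gamma$:

```python
import math

# Check (numerically, for the sample m = 2) that the formula
#   psi(r) = -(1/2m) [ r^m int_r^inf gamma(s) s^(1-m) ds
#                     + r^-m int_0^r gamma(s) s^(1+m) ds ]
# reproduces psi(r) = r^2 exp(-r^2) from gamma = psi'' + psi'/r - m^2 psi/r^2.
def simpson(f, a, b, n=4000):
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

m = 2
gamma = lambda s: 4 * s * s * (s * s - 3) * math.exp(-s * s)

def psi_formula(r):
    outer = simpson(lambda s: gamma(s) * s ** (1 - m), r, 8.0)  # tail ~ exp(-64)
    inner = simpson(lambda s: gamma(s) * s ** (1 + m), 0.0, r)
    return -(r ** m * outer + r ** (-m) * inner) / (2 * m)

err = max(abs(psi_formula(r) - r * r * math.exp(-r * r)) for r in (0.7, 1.5, 2.5))
assert err < 1e-8
```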
Notice next that, while in all the discussion so far we have always assumed that $m$ is an integer larger than $1$, the operator $\mathcal{S}_m$ can in fact be easily defined for every {\em real} $m>1$, while, using the formulae \eqref{e:explicit2} and \eqref{e:explicit3} we can also make sense of $\mathscr{K}_m$ for every real $m>1$. In particular we can define as well the operator $\mathcal{L}_m$ for every $m>1$. The possibility of varying $m$ as a real parameter will play a crucial role in the rest of the chapter, and we start by showing that, for $m$ in the above range, the boundedness of $\mathcal{L}_m$ and $\mathcal{S}_m$ and the compactness of $\mathscr{K}_m$ continue to hold.
\begin{proposition}\label{p:all-m}
The operators $\mathcal{L}_m$, $\mathcal{S}_m$, and $\mathscr{K}_m$ are bounded operators from $\mathcal{H}$ to $\mathcal{H}$ for every real $m>1$, with a uniform bound on their norms if $m$ ranges in a compact set. Moreover, under the same assumption $\mathscr{K}_m$ is compact. In particular:
\begin{itemize}
\item[(i)] ${\rm spec}\, (\mathcal{L}_m)$ is compact;
\item[(ii)] for every $z$ with ${\rm Im}\, z \neq 0$ the operator $\mathcal{L}_m-z$ is a bounded Fredholm operator with index $0$;
\item[(iii)] every $z\in {\rm spec}\, (\mathcal{L}_m)$ with ${\rm Im}\, z \neq 0$ belongs to the discrete spectrum.
\end{itemize}
\end{proposition}
\begin{proof} The boundedness of $\mathcal{S}_m$ is obvious. Once the boundedness and compactness of $\mathscr{K}_m$ are shown, (i) follows immediately from the boundedness of $\mathcal{L}_m$, while (ii) follows immediately from the fact that $\mathcal{L}_m - z$ is a compact perturbation of the operator $\mathcal{S}_m -z$, which is invertible because $\mathcal{S}_m$ is self-adjoint, and (iii) is a standard consequence of (ii).
We thus prove that $\mathscr{K}_m$ is bounded (the proof is necessary because the argument given previously yields the boundedness and compactness of the operator only for {\em integer values} of $m$ larger than $1$). We observe first that $\| r^{-1}\psi\|_\infty \leq C\|\gamma\|_{\mathcal{H}}$, as it follows from Cauchy-Schwarz that
\begin{align*}
r^{m-1} \int_r^\infty |\gamma (s)| s^{1-m}\, ds&\leq
r^{m-1} \left(\int_r^\infty |\gamma(s)|^2 s\, ds\right)^{\frac{1}{2}} \left(\int_r^\infty s^{1-2m}\, ds\right)^{\frac{1}{2}}
\leq \frac{1}{\sqrt{2m-2}} \|\gamma\|_{\mathcal{H}}\\
r^{-m-1} \int_0^r |\gamma (s)| s^{1+m}\, ds &\leq r^{-m-1} \left(\int_0^r |\gamma (s)|^2 s\, ds\right)^{\frac{1}{2}} \left(\int_0^r s^{1+2m}\, ds\right)^{\frac{1}{2}}
\leq \frac{1}{\sqrt{2m+2}} \|\gamma\|_{\mathcal{H}}\, .
\end{align*}
Since $|g' (r)| \leq C (1+r)^{-1-{\bar\alpha}}$, it follows immediately that
\begin{equation}\label{e:K_m-pointwise-bound}
|(\mathscr{K}_m (\gamma)) (r)| \leq \frac{C\|\gamma\|_{\mathcal{H}}}{(1+r)^{1+{\bar\alpha}}}
\end{equation}
and in particular
\[
\|\mathscr{K}_m (\gamma)\|_{\mathcal{H}} \leq
C\|\gamma\|_{\mathcal{H}} \left(\int_0^\infty \frac{s}{(1+s)^{2+2{\bar\alpha}}}\, ds\right)^{\frac{1}{2}}
\leq C \|\gamma\|_{\mathcal{H}} \, .
\]
This completes the proof of boundedness of the operator. In order to show compactness consider now a bounded sequence $\{\gamma_k\}\subset \mathcal{H}$. Observe that for every fixed $N$, \eqref{e:explicit3} gives the following obvious bound
\begin{equation}
\|\mathscr{K}_m(\gamma_k)\|_{W^{1,2} [N^{-1}, N]} \leq C (N) \|\gamma_k\|_{\mathcal{H}}\, .
\end{equation}
In particular, through a standard diagonal procedure, we can extract a subsequence of $\{\mathscr{K}_m(\gamma_k)\}$ (not relabeled) which converges strongly in $L^2 ([N^{-1}, N], rdr)$ for every $N$. It is now easy to show that $\{\mathscr{K}_m (\gamma_k)\}_k$ is a Cauchy sequence in $\mathcal{H}$. Fix indeed $\varepsilon>0$. Using \eqref{e:K_m-pointwise-bound} it is easy to show that there is a sufficiently large $N$ with the property that
\begin{equation}\label{e:Cauchy-1}
\sup_k \|\mathscr{K}_m (\gamma_k) \mathbf{1}_{[0, N^{-1}]\cup [N, \infty[}\|_{\mathcal{H}} < \frac{\varepsilon}{3}\, .
\end{equation}
Hence, given such an $N$, we can choose $k_0$ big enough so that
\begin{equation}\label{e:Cauchy-2}
\|(\mathscr{K}_m (\gamma_k) - \mathscr{K}_m (\gamma_j)) \mathbf{1}_{[N^{-1}, N]}\|_{\mathcal{H}} \leq
\frac{\varepsilon}{3} \qquad \forall k,j \geq k_0\, .
\end{equation}
Combining \eqref{e:Cauchy-1} and \eqref{e:Cauchy-2} we immediately conclude
\[
\|\mathscr{K}_m (\gamma_k) - \mathscr{K}_m (\gamma_j)\|_{\mathcal{H}} < \varepsilon
\]
for every $j,k \geq k_0$. This completes the proof that $\{\mathscr{K}_m (\gamma_j)\}$ is a Cauchy sequence and hence the proof that $\mathscr{K}_m$ is compact.
\end{proof}
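The two Cauchy-Schwarz bounds displayed in the proof above can be illustrated numerically. The following Python snippet (a sanity check, not part of the notes; $m=2$ and $\gamma (s) = e^{-s}$ are arbitrary sample choices) evaluates both left-hand sides at a few radii and compares them with $\|\gamma\|_{\mathcal{H}}/\sqrt{2m-2}$ and $\|\gamma\|_{\mathcal{H}}/\sqrt{2m+2}$:

```python
import math

# Check the bounds
#   r^(m-1)  int_r^inf |gamma| s^(1-m) ds <= ||gamma||_H / sqrt(2m-2)
#   r^(-m-1) int_0^r   |gamma| s^(1+m) ds <= ||gamma||_H / sqrt(2m+2)
# with ||gamma||_H^2 = int_0^inf |gamma|^2 s ds, for sample m and gamma.
def simpson(f, a, b, n=4000):
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

m = 2
gamma = lambda s: math.exp(-s)
norm = math.sqrt(simpson(lambda s: gamma(s) ** 2 * s, 0.0, 40.0))  # equals 1/2
ok = True
for r in (0.3, 1.0, 4.0):
    lhs1 = r ** (m - 1) * simpson(lambda s: gamma(s) * s ** (1 - m), r, 40.0)
    lhs2 = r ** (-m - 1) * simpson(lambda s: gamma(s) * s ** (1 + m), 0.0, r)
    ok = ok and lhs1 <= norm / math.sqrt(2 * m - 2)
    ok = ok and lhs2 <= norm / math.sqrt(2 * m + 2)
assert ok
```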
\section{The eigenvalue equation and the class \texorpdfstring{$\mathscr{C}$}{scrC}}
\label{s:eigenvalue-equation}
Using the operators introduced in the previous setting, we observe that Theorem \ref{thm:spectral3} is equivalent to showing that ${\rm spec}\, (\mathcal{L}_{m_0}) \cap \{{\rm Im}\, z>0\}$ is finite and nonempty.
We next notice that, thanks to Proposition \ref{p:all-m}, the latter is equivalent to showing that the equation\footnote{Recall that $\psi$ is defined through \eqref{e:explicit3}.}
\begin{equation}\label{e:eigenvalue-equation}
m \zeta \gamma - \frac{m}{r} g' \psi = z \gamma
\end{equation}
has a nontrivial solution $\gamma \in \mathcal{H}$ for some integer $m=m_0\geq 2$ and some complex number $z$ with positive imaginary part.
We thus turn \eqref{e:eigenvalue-equation} into an ODE problem by changing the unknown from $\gamma$ to the function $\psi$.
In particular, recall that the relation between the two is that $\Delta (\psi (r) e^{im\theta}) = \gamma (r) e^{im\theta}$, and $\psi e^{im\theta}$ is in fact the potential-theoretic solution. We infer that
\[
\psi'' + \frac{1}{r} \psi' - \frac{m^2}{r^2} \psi = \gamma\,
\]
and hence \eqref{e:eigenvalue-equation} becomes
\begin{equation}\label{e:eigenvalue-equation-2}
- \psi'' - \frac{1}{r}\psi' + \frac{m^2}{r^2} \psi + \frac{g'}{r (\zeta -m^{-1} z)} \psi = 0 \, .
\end{equation}
Notice that, by classical estimates for ODEs, $\psi \in W^{2,2}_{\text{loc}} (\mathbb R^+)$.
Observe, moreover, that if $\psi\in L^2 (\frac{dr}{r})\cap W^{2,2}_{\text{loc}}$ solves \eqref{e:eigenvalue-equation-2} and $z$ has nonzero imaginary part, it follows that
\[
\gamma = \frac{mg'}{r (m \zeta -z)} \psi
\]
belongs to $L^2 (r dr)$ and solves \eqref{e:eigenvalue-equation}, because the function $\frac{mg'}{m \zeta -z}$ is bounded. Vice versa, assume that $\gamma \in L^2 (r dr)$ solves \eqref{e:eigenvalue-equation}. Then $\psi$ solves \eqref{e:eigenvalue-equation-2} and we claim that $\psi\in L^2 (\frac{dr}{r})\cap W^{2,2}_{\text{loc}}$. First of all notice that, by classical Calder{\'o}n-Zygmund estimates, $\phi (x) := \psi (r) e^{im\theta}$ is a $W^{2,2}_{\text{loc}}$ function of $\mathbb R^2$. As such $\phi\in C^\omega (B_1)$ for every $\omega<1$ and therefore $\psi\in C^\omega ([0,1])$ and, by symmetry considerations, $\psi (0) =0$. Thus $|\psi (r)|\leq C r^\omega$ for every $r\in [0,1]$, which easily shows that $\psi\in L^2 ([0,1], \frac{dr}{r})$. It remains to show that
\begin{equation}\label{e:correzione-1}
\int_1^\infty \frac{|\psi (r)|^2}{r}\, dr < \infty\, .
\end{equation}
However recall that, for $r$ sufficiently large, $\zeta (r) = \frac{c_0}{r^2}+ \frac{c_1}{r^{\bar\alpha}}$ for some constants $c_0$ and $c_1$, while $g' (r) = -{\bar\alpha} r^{-1-{\bar\alpha}}$. Since $|m\zeta (r) - z|\geq |{\rm Im}\, z|>0$, we thus infer
\[
|\gamma (r)| = \left|\frac{m g' (r)}{r (m \zeta (r)- z)}\right| |\psi (r)| \leq \frac{C|\psi (r)|}{r^{2+{\bar\alpha}}}\, .
\]
On the other hand, the Cauchy-Schwarz estimates in the proof of Proposition \ref{p:all-m} give $|\psi (r)|\leq C r \|\gamma\|_{\mathcal{H}}$, so that $|\gamma (r)|\leq C r^{-1-{\bar\alpha}}$ for $r$ large. Inserting the latter bound in \eqref{e:explicit3} improves the estimate on $\psi$ to $|\psi (r)|\leq C (r^{-m} + r^{1-{\bar\alpha}})$ and, iterating the two bounds finitely many times, we reach $|\psi (r)|\leq C r^{-\delta}$ for some $\delta>0$, which in turn implies \eqref{e:correzione-1}.
Hence our problem is equivalent to understanding for which $m$ and $z$ with positive imaginary part there is an $L^2 (\frac{dr}{r})\cap W^{2,2}_{\text{loc}}$ solution of \eqref{e:eigenvalue-equation-2}. The next step is to change variables to $t = \ln r$ and we thus set $\varphi (t) = \psi (e^t)$, namely, $\psi (r) = \varphi (\ln r)$. The condition that $\psi\in L^2 (\frac{dr}{r})$ translates then into $\varphi\in L^2 (\mathbb R)$ and $\psi\in W^{2,2}_{\text{loc}}$ translates into $\varphi\in W^{2,2}_{\text{loc}}$.
Moreover, if we substitute the complex number $z$ with $\frac{z}{m}$ we can rewrite
\begin{equation}\label{e:eigenvalue-equation-3}
- \varphi'' (t) + m^2 \varphi (t) + \frac{A(t)}{\Xi (t) - z} \varphi (t) = 0\, ,
\end{equation}
which is \emph{Rayleigh's stability equation},
where the functions $A$ and $\Xi$ are given by changing variables in the corresponding functions $g'$ and
$\zeta$:
\begin{align}
A (t) &= \frac{d}{dt} g(e^t)\\
\Xi (t) &= \int_{-\infty}^t e^{-2 (t-\tau)} g (e^\tau)\, d\tau\, .
\end{align}
Note in particular that we can express $A$ and $\Xi$ through the relation
\begin{equation}\label{e:A-Xi}
A = \Xi'' + 2 \Xi'\, .
\end{equation}
The function $g$ (and so our radial function $\bar\Omega$) can be expressed in terms of $\Xi$ through the formula
\begin{equation}\label{e:formula-g}
g (e^t) = e^{-2t} \frac{d}{dt} (e^{2t} \Xi (t))\, .
\end{equation}
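The relations \eqref{e:A-Xi} and \eqref{e:formula-g} (in the equivalent form $g (e^t) = \Xi' (t) + 2\Xi (t)$) can be verified numerically. The snippet below (a sanity check, not part of the notes; the profile $g (r) = (1+r^2)^{-1}$ is an arbitrary sample choice, not an element of the class constructed later) computes $\Xi$ by quadrature and the derivatives by finite differences:

```python
import math

# Check A = Xi'' + 2 Xi' (with A(t) = d/dt g(e^t)) and g(e^t) = Xi' + 2 Xi
# for the sample profile g(r) = 1/(1+r^2).
def simpson(f, a, b, n=4000):
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

g = lambda r: 1.0 / (1.0 + r * r)

def Xi(t):
    # Xi(t) = int_{-inf}^t exp(-2(t-tau)) g(e^tau) dtau, truncated at t - 20
    return simpson(lambda tau: math.exp(-2 * (t - tau)) * g(math.exp(tau)), t - 20.0, t)

h = 1e-2
errs = []
for t in (-1.0, 0.0, 0.8):
    Xm, X0, Xp = Xi(t - h), Xi(t), Xi(t + h)
    dXi = (Xp - Xm) / (2 * h)              # Xi'
    ddXi = (Xp - 2 * X0 + Xm) / h ** 2     # Xi''
    A_exact = -2 * math.exp(2 * t) / (1 + math.exp(2 * t)) ** 2  # d/dt g(e^t)
    errs.append(abs(ddXi + 2 * dXi - A_exact))
    errs.append(abs(dXi + 2 * X0 - g(math.exp(t))))
assert max(errs) < 1e-3
```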
Rather than looking for $g$ we will then look for $\Xi$ in an appropriate class $\mathscr{C}$ which we next detail:
\begin{definition}\label{d:class-C}
The class $\mathscr{C}$ consists of those functions $\Xi: \mathbb R \to ]0, \infty[$ such that
\begin{itemize}
\item[(i)] $\Xi (-\infty) := \lim_{t\to - \infty} \Xi (t)$ is finite and there are constants $c_0>0$ and $M_0$ such that $\Xi (t) = \Xi (-\infty) - c_0 e^{2t}$ for all $t\leq M_0$;
\item[(ii)] there is a constant $c_1$ such that $\Xi (t) = c_1 e^{-2t} + \frac{1}{2-{\bar\alpha}} e^{-{\bar\alpha} t}$ for $t\geq \ln 2$;
\item[(iii)] $A$ has exactly two zeros, denoted by $a<b$, and $A' (a)>0$ and $A' (b)<0$ (in particular $A<0$ on $]-\infty,a[ \cup ]b, \infty[$ and $A>0$ on $]a,b[$);
\item[(iv)] $\Xi ' (t) <0$ for every $t$.
\end{itemize}
\end{definition}
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{Figures/fig1.pdf}
\caption{A sketch of the function in the class $\mathscr{C}$ which will be finally chosen in Section \ref{s:choice-of-A} to prove Theorem \ref{thm:spectral3}, in the $t= \log r$ axis. The graph of $A(t)$ is the solid curve, $G(t):=\Xi'(t)+2\Xi(t)$ the dashed one, and $\Xi'(t)$ the dotted one. Even though $A$ is smooth, its derivative undergoes a very sharp change around the point $t = \frac{1}{2}$ and the point $t= -\frac{1}{\sqrt{B}}$, where $B$ is an appropriately large constant, cf. Section \ref{s:choice-of-A}.}
\label{fig:fig1}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.75\textwidth]{Figures/fig2.pdf}
\caption{The profile of the background vorticity $\bar \Omega(x) = g(r)$ in the original coordinates (the solid curve). Compare with the exact singular profile $r^{-{\bar\alpha}}$ (the dashed curve).}
\label{fig:fig2}
\end{figure}
Fix $\Xi\in \mathscr{C}$. By \eqref{e:formula-g}, $g$ is then smooth, it equals $2\Xi (-\infty)- 4 c_0 r^2$ in a neighborhood of $0$, and it is equal to $r^{-{\bar\alpha}}$ for $r\geq 2$, thanks to the conditions (i)-(ii). In particular the corresponding function $\bar \Omega (x) = g (|x|)$ satisfies the requirements of Theorem \ref{thm:spectral3}. We are now ready to turn Theorem \ref{thm:spectral3} into a (in fact stronger) statement for the eigenvalue equation \eqref{e:eigenvalue-equation-3}. In order to simplify its formulation and several other ones in the rest of these notes, we introduce the following sets
\begin{definition}\label{d:sets-U-m}
Having fixed $\Xi\in \mathscr{C}$ and a real number $m>1$, we denote by $\mathscr{U}_m$ the set of those complex $z$ with positive imaginary part with the property that there are nontrivial solutions $\varphi\in L^2\cap W^{2,2}_{\text{loc}} (\mathbb R, \mathbb C)$ of \eqref{e:eigenvalue-equation-3}.
\end{definition}
{\begin{remark}\label{r:m-factor-eigenvalue}
Observe that $z$ belongs to $\mathscr{U}_m$ if and only if it has positive imaginary part and $m z$ is an eigenvalue of $\mathcal{L}_m$.
\end{remark}}
\begin{theorem}\label{thm:spectral5}\label{THM:SPECTRAL5}
There is a function $\Xi\in \mathscr{C}$ and an integer $m_0\geq 2$ such that $\mathscr{U}_{m_0}$ is finite and nonempty.
\end{theorem}
\section{Overview of the proof of Theorem \ref{thm:spectral5}}\label{sec:overviewspectralthm}
The rest of the chapter is devoted to proving Theorem \ref{thm:spectral5}. The proof will be achieved through a careful study of Rayleigh's stability equation~\eqref{e:eigenvalue-equation-3} and, in particular, the set $\mathscr{P}$ of pairs $(m,z)$ such that $z\in \mathscr{U}_m$ and $m>1$, i.e.,
\begin{equation}\label{e:def-pairs}
\mathscr{P}:= \left\{(m,z)\in \mathbb R \times \mathbb C: z\in \mathscr{U}_m, m>1\right\}\, .
\end{equation}
Given that $\Xi$ is strictly decreasing, we have
\[
\lim_{t\to -\infty} \Xi (t) > \Xi (a) > \Xi (b) > \lim_{t\to \infty} \Xi (t) = 0
\]
and in order to simplify our notation we will use $\Xi (-\infty)$ for $\lim_{t\to -\infty} \Xi (t)$ and, occasionally, $\Xi (\infty)$ for $0$.
The first step in the proof of Theorem \ref{thm:spectral5} is understanding which pairs $(m,z)$ belong to the closure of $\mathscr{P}$ and have ${\rm Im}\, z =0$. Solutions $(m,z,\varphi)$ to~\eqref{e:eigenvalue-equation-3} with $(m,z) \in \overline{\mathscr{P}}$ are sometimes called \emph{neutral limiting modes}~\cite{LinSIMA2003}.\footnote{The interested reader may compare with the strategy for bounded shear flows in~\cite{LinSIMA2003}.} To that end, it is convenient to introduce the following two self-adjoint operators:
\begin{align}
L_a &:= -\frac{d^2}{dt^2} + \frac{A(t)}{\Xi (t) - \Xi (a)}\label{e:def-L_a}\\
L_b &:= -\frac{d^2}{dt^2} + \frac{A(t)}{\Xi (t) - \Xi (b)}\label{e:def-L_b}\, .
\end{align}
Thanks to the definition of the class $\mathscr{C}$, it is easy to see that both functions $\frac{A(t)}{\Xi (t) - \Xi (a)}$ and $\frac{A(t)}{\Xi (t) - \Xi (b)}$ are bounded and that $\frac{A(t)}{\Xi (t) - \Xi (a)} < \frac{A(t)}{\Xi (t) - \Xi (b)}$. Moreover, the first is negative on $]-\infty, b[$ and positive on $]b, \infty[$, while the second is negative on $]-\infty, a[$ and positive on $]a, \infty[$. Recall that the spectra of these operators are necessarily real and denote by $-\lambda_a$ and $-\lambda_b$ the smallest element in the respective ones: observe that, by the Rayleigh quotient characterization, $-\lambda_a < -\lambda_b$.
The following proposition characterizes the possible neutral limiting modes:
\begin{proposition}\label{p:3+4}\label{P:3+4}
If $(m_0,z)\in \overline{\mathscr{P}}$ and ${\rm Im}\, z =0$, then either $z= \Xi (a)$ or $z= \Xi (b)$. {\color{red} Moreover, in either case, if $m_0>1$ then necessarily $m_0 = \sqrt{\lambda_a}$ or $m_0 = \sqrt{\lambda_b}$.} Assume in addition that $- \lambda_a < -1$. Then, for $z = \Xi (a)$, the unique $m\geq 1$ such that \eqref{e:eigenvalue-equation-3} has a nontrivial solution $\psi_a\in L^2$ is $m_a = \sqrt{\lambda_a}$. Moreover, any nontrivial solution has the property that $\psi_a (a) \neq 0$.
\end{proposition}
{\color{red}
\begin{remark}\label{r:b-also}
We remark that the exact same argument applies with $b$ in place of $a$ when $\lambda_b >1$, even though this fact does not play any role in the rest of the notes.
\end{remark}
}
Observe that this does not yet show that $(m_a, \Xi (a))\in \overline{\mathscr{P}}$ corresponds to a neutral limiting mode. The latter property will be achieved in a second step, in which we seek a curve of unstable modes emanating from $(m_a, \Xi(a))$:
\begin{proposition}\label{p:5-7}\label{P:5-7}
Assume $- \lambda_a<-1$ and let $m_a=\sqrt{\lambda_a}$.
There are positive constants $\varepsilon >0$ and $\delta>0$ with the following property:
For every $h\in ]0, \delta[$, $\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a)) \neq \emptyset$.
\end{proposition}
{\color{red}
\begin{remark}\label{r:b-also-2} In fact, the argument given for the proposition proves the stronger conclusion that $\mathscr{U}_{m_a-h} \cap B_\varepsilon (\Xi (a))$ consists of a single point $z$, with the property that $mz$ is an eigenvalue of $\mathcal{L}_m$ with geometric multiplicity $1$.
Moreover, the very same argument applies to $b$ in place of $a$ and $h \in ]-\delta,0[$ if $\lambda_b >1$.
\end{remark}
}
Combined with some further analysis, in which the curve of unstable modes is continued, the latter proposition will allow us to conclude the following:
\begin{proposition}\label{p:almost-final}\label{P:ALMOST-FINAL}
Assume $- \lambda_a<-1$, let $m_a = \sqrt{\lambda_a}$ and set $m_b:= \sqrt{\max \{1, \lambda_b\}}$: then
$\mathscr{U}_m\neq \emptyset$ for every $m\in ]m_b, m_a[$. \end{proposition}
Thus far, we have not selected our function $\Xi$: the above properties are valid for any element in the class $\mathscr{C}$. The choice of $\Xi$ comes in the very last step.
\begin{proposition}\label{p:final}
There is a choice of $\Xi\in \mathscr{C}$ with the property that $]m_b,m_a[$ contains an integer larger than $1$.
\end{proposition}
Clearly, the combination of Proposition \ref{p:almost-final} and Proposition \ref{p:final} gives Theorem \ref{thm:spectral5}: we first choose $\Xi$ as in Proposition \ref{p:final} and hence we select $m_0$ as the largest natural number which belongs to the interval $]m_b,m_a[$; the properties claimed in Theorem \ref{thm:spectral5} follow then from Proposition \ref{p:almost-final}. The proof of Proposition \ref{p:final} is in fact a rather straightforward application of the following.
\begin{lemma}\label{l:bottom}\label{L:BOTTOM}
Let $m_0$ be any integer. Then there exists $\Xi\in \mathscr{C}$ with $a=0$ and $b=\frac{1}{2}$ such that the smallest eigenvalue of the operator $L_a$ is smaller than $-m_0^2$.
\end{lemma}
\begin{remark}\label{rmk:veryunstablemodes}
A consequence of Lemma~\ref{l:bottom} is that the most unstable wavenumber $m_0$ can be made arbitrarily large. Only $m_0 \geq 2$ is necessary to prove non-uniqueness.
\end{remark}
The rest of the chapter will be devoted to proving Propositions \ref{p:3+4} and \ref{p:5-7} and Lemma \ref{l:bottom}. We finish this section by giving the simple proof of Proposition \ref{p:final}.
\begin{proof} For simplicity we fix $a=0$ and $b=\frac{1}{2}$ and we look at the set of functions $\Xi$ with this particular choice of zeros for $A$. We then denote by $L_{\Xi, a}$ the operator in \eqref{e:def-L_a}. We fix an arbitrary $\Xi_0\in \mathscr{C}$ and let $-\lambda (0)$ be the smallest eigenvalue of $L_{\Xi_0,a}$. We then consider the smallest integer $m_0\geq 3$ such that $m_0^2 > \lambda (0)$. By Lemma \ref{l:bottom} there is an element $\Xi_1\in \mathscr{C}$ with the property that $a=0$, $b=\frac{1}{2}$ and, if $-\lambda (1)$ is the smallest element of the spectrum of $L_{\Xi_1, a}$, then $-\lambda (1) < -m_0^2$. For $\sigma\in [0,1]$ consider $L_{\Xi_\sigma,a}$ where
\[
\Xi_\sigma = (1-\sigma) \Xi_0 + \sigma \Xi_1\,
\]
and observe that $\Xi_\sigma \in \mathscr{C}$ for every $\sigma\in [0,1]$.
Since $\sigma \mapsto \Xi_\sigma$ is continuous with respect to uniform convergence, by the Rayleigh quotient characterization we see that the smallest element $-\lambda (\sigma)$ of the spectrum of $L_{\Xi_\sigma,a}$ is a continuous function of $\sigma$. There is thus one $\sigma\in [0,1[$ with $\lambda (\sigma)= m_0^2$. Let $\sigma_0$ be the largest $\sigma$ with $\lambda (\sigma)= m_0^2$. Observe now that, if we let $- \mu (\sigma_0)$ be the smallest eigenvalue of $L_{\Xi_{\sigma_0}, b}$, then $\mu (\sigma_0) < m_0^2$. In addition, $\sigma\mapsto \mu (\sigma)$ is also continuous and thus there is $h>0$ such that $\mu (\sigma) < m_0^2$ for all $\sigma\in [\sigma_0-h, \sigma_0+h]$. On the other hand $\lambda (\sigma_0+h)> m_0^2$. This shows that $m_b < m_0 < m_a$ if we choose $\Xi= \Xi_{\sigma_0+h}$, completing the proof of our claim.
\end{proof}
\section{ODE Lemmas}
An essential tool in the proofs of the Propositions \ref{p:3+4} and \ref{p:5-7} are the following two ODE lemmas.
\begin{lemma}\label{l:ODE1}
Let $m>0$. For every $f\in L^2 (\mathbb R)$ there is a unique $\psi\in L^2(\ensuremath{\mathbb R}) \cap W^{2,2}_{\text{loc}}$ such that
\begin{equation}\label{e:Laplacian-1d}
-\frac{d^2\psi}{dt^2} + m^2\psi = f
\end{equation}
and it is given by
\begin{equation}\label{e:potential-1d}
\psi (t) = \frac{1}{2m} \int_{\ensuremath{\mathbb R}} e^{-m|t-\tau|} f (\tau)\, d\tau\, .
\end{equation}
\end{lemma}
\begin{proof} The lemma is a classical and well-known fact. At any rate, the verification that
$\psi$ as in \eqref{e:potential-1d} solves \eqref{e:Laplacian-1d} is an elementary computation while, since obviously $e^{-m|t|}\in L^1$, $\psi\in L^2$ if $f\in L^2$. Moreover, any other solution $\hat\psi$ of \eqref{e:Laplacian-1d} must satisfy $\hat\psi (t) = \psi (t) + C_+ e^{mt} + C_- e^{-mt}$ for some constants $C_\pm$ and the requirement $\hat\psi\in L^2$ immediately implies $C_+=C_-=0$.
\end{proof}
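The lemma lends itself to a quick numerical verification (not part of the notes): with the sample choices $m = 1.5$ and $\psi (t) = e^{-t^2}$ one has $f = -\psi'' + m^2 \psi = (m^2 + 2 - 4t^2) e^{-t^2}$, and \eqref{e:potential-1d} must reproduce $\psi$. In the snippet the integral is split at $\tau = t$, where the kernel $e^{-m|t-\tau|}$ has a corner:

```python
import math

# Check that the convolution with exp(-m|t|)/(2m) reproduces psi(t) = exp(-t^2)
# from f = -psi'' + m^2 psi = (m^2 + 2 - 4 t^2) exp(-t^2), for the sample m = 1.5.
def simpson(f, a, b, n=4000):
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

m = 1.5
f = lambda tau: (m * m + 2 - 4 * tau * tau) * math.exp(-tau * tau)

def psi_formula(t):
    # split at tau = t to keep each integrand smooth; |f| is negligible beyond |tau| = 8
    left = simpson(lambda tau: math.exp(-m * (t - tau)) * f(tau), -8.0, t)
    right = simpson(lambda tau: math.exp(-m * (tau - t)) * f(tau), t, 8.0)
    return (left + right) / (2 * m)

err = max(abs(psi_formula(t) - math.exp(-t * t)) for t in (-1.2, 0.0, 0.5, 2.0))
assert err < 1e-8
```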
The second ODE Lemma is the following:
\begin{lemma}\label{l:ODE2}
Let $v\in L^1 (\mathbb R, \mathbb C)$. Then for every constant $c_-$ there is a unique solution $y \in W^{2,1}_{\text{loc}} (\mathbb R, \mathbb C)$ of
\begin{equation}\label{e:ODE2}
- \frac{d^2y}{dt^2} + (m^2 + v) y =0
\end{equation}
with the property that
\begin{equation}\label{e:y=e^mt}
\lim_{t\to - \infty} e^{-mt} y (t) =c_-\, .
\end{equation}
Moreover we have $y(t) = e^{mt} (c_-+z(t))$ for a function $z(t)$ which satisfies the bounds
\begin{align}
|z(t)| &\leq |c_-|\left[\exp \left(\frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds\right) -1\right]\label{e:est-z}\\
|z'(t)| &\leq 2m |c_-|\left[\exp \left(\frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds\right) -1\right]\label{e:est-z'}
\end{align}
A symmetric statement, left to the reader, holds for solutions such that
\begin{equation}\label{e:y=e^mt-plus}
\lim_{t\to \infty} e^{mt} y (t) =c_+\, .
\end{equation}
\end{lemma}
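The fixed-point scheme used in the proof below (cf. \eqref{e:fixed-point}) can also be run numerically. The following Python snippet (an illustration, not part of the notes) truncates the half-line to $[-20, 2]$ and picks the sample values $m = 1.5$ and $v (t) = e^{-t^2}$, so that $\|v\|_{L^1}/(2m) < 1$ and the map $\mathscr{F}$ is a contraction on the whole truncated interval; it iterates $\mathscr{F}$ starting from $\tilde y \equiv 1$ and checks the bound $|\tilde y (t) - 1|\leq Z (t)$ of \eqref{e:est-z}:

```python
import math

# Discretized version (illustration only) of the contraction
#   F(y)(t) = 1 + (1/2m) int_{T0}^t (1 - exp(-2m(t-s))) v(s) y(s) ds
# on a truncated interval, for the sample v(t) = exp(-t^2) and m = 1.5.
m = 1.5
T0, T1, n = -20.0, 2.0, 2200
dt = (T1 - T0) / n
ts = [T0 + i * dt for i in range(n + 1)]
v = [math.exp(-t * t) for t in ts]

def cumtrap(vals):
    # cumulative trapezoid rule on the fixed grid
    out = [0.0]
    for i in range(1, len(vals)):
        out.append(out[-1] + 0.5 * (vals[i - 1] + vals[i]) * dt)
    return out

y = [1.0] * (n + 1)
for _ in range(60):  # contraction factor ||v||_1/(2m) ~ 0.59, so 60 steps suffice
    a = cumtrap([v[i] * y[i] for i in range(n + 1)])
    b = cumtrap([math.exp(2 * m * ts[i]) * v[i] * y[i] for i in range(n + 1)])
    y = [1.0 + (a[i] - math.exp(-2 * m * ts[i]) * b[i]) / (2 * m) for i in range(n + 1)]

Z = [math.exp(V / (2 * m)) - 1.0 for V in cumtrap(v)]
max_excess = max(abs(y[i] - 1.0) - Z[i] for i in range(n + 1))
assert max_excess <= 1e-6  # the bound |z(t)| <= Z(t) of the lemma
```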
Important consequences of the above Lemmas are the following:
\begin{corollary}\label{c:decay}
If $(m,z)\in \mathscr{P}$, then the space of solutions $\varphi\in L^2\cap W^{2,2}_{\text{loc}}$ of \eqref{e:eigenvalue-equation-3} is $1$-dimensional. Moreover for any such $\varphi$ there is a constant $C$ with the property that
\begin{align}
|\varphi (t)| &\leq C e^{-m|t|}\,
\end{align}
and there are two constants $C_+$ and $C_-$ such that
\begin{align}
\lim_{t\to\infty} e^{mt} \varphi (t) &= C_+\\
\lim_{t\to -\infty} e^{-mt} \varphi (t) &= C_-\, .
\end{align}
The constants are either both nonzero or both zero, in which case $\varphi$ vanishes identically.
The same conclusions apply if $m>1$, $z\in \{\Xi (a), \Xi (b)\}$ and $\varphi$ solves \eqref{e:eigenvalue-equation-3}.
\end{corollary}
\begin{proof}
Observe that $|\Xi (t)-z|\geq |{\rm Im}\, z|$, while $A (t) = -8 c_0 e^{2t}$ for $-t$ sufficiently large and $A (t) = -{\bar\alpha} e^{-{\bar\alpha} t}$ for $t\geq \ln 2$. In particular
\begin{equation}\label{e:estimate-A-over-Xi}
\frac{|A(t)|}{|\Xi (t)-z|} \leq C e^{-{\bar\alpha} |t|}\, .
\end{equation}
First of all notice that, if $\varphi\in L^2\cap W^{2,2}_{\text{loc}}$ solves \eqref{e:eigenvalue-equation-3}, by Lemma \ref{l:ODE1} (applied with $f= -\frac{A\varphi}{\Xi-z}$) we have
\begin{equation}\label{e:integral-equation}
|\varphi (t)| \leq \frac{C}{2m} \int e^{-m|t-\tau|} e^{-{\bar\alpha} |\tau|} |\varphi (\tau)|\, d\tau\, .
\end{equation}
Using Cauchy-Schwarz and the fact that $\varphi\in L^2$ we immediately obtain that $\varphi\in L^\infty$, namely, that there is a constant $C$ such that $|\varphi|\leq C$. We now prove inductively that $|\varphi (t)|\leq C_k e^{-k{\bar\alpha} |t|}$ as long as $k{\bar\alpha} < m$. The case $k=0$ has already been shown. Assume thus that the inequality holds for $k-1$ and that $k{\bar\alpha} < m$. We then observe that
\begin{align*}
e^{-m|t-\tau|} e^{-{\bar\alpha} |\tau|} |\varphi (\tau)| &\leq C_{k-1} e^{- m|t-\tau| - k{\bar\alpha} |\tau|}
= C_{k-1} e^{-k{\bar\alpha} (|t-\tau| + |\tau|)} e^{-(m-k{\bar\alpha}) |t-\tau|}\\
&\leq C_{k-1} e^{-k{\bar\alpha} |t|} e^{-(m-k{\bar\alpha}) |t-\tau|}\, .
\end{align*}
Inserting in \eqref{e:integral-equation} and using that $e^{-(m-k{\bar\alpha})|t-\tau|}\in L^1 (d\tau)$ we then obtain $|\varphi (t)|\leq C_k e^{-k{\bar\alpha} |t|}$. Let now $k$ be the largest integer with $k{\bar\alpha} < m$, so that $(k+1){\bar\alpha}\geq m$. We can then bound
\[
e^{-m|t-\tau|} e^{-{\bar\alpha} |\tau|} |\varphi (\tau)| \leq C_k e^{- m|t-\tau| - (k+1){\bar\alpha} |\tau|} \leq C_k e^{-m (|t-\tau| + |\tau|)}
\]
and, since $\int e^{-m(|t-\tau|+|\tau|)}\, d\tau \leq C (1+|t|) e^{-m|t|}$, plugging into \eqref{e:integral-equation} gives $|\varphi (t)|\leq C (1+|t|) e^{-m|t|}$. Inserting this last bound into \eqref{e:integral-equation} one final time, and using that $(1+|\tau|) e^{-{\bar\alpha} |\tau|}$ is integrable, we conclude $|\varphi (t)|\leq C e^{-m|t|}$.
In order to show that $\varphi$ is unique up to a multiplicative constant, it suffices to show that $\lim_{t\to -\infty} e^{-mt} \varphi (t)$ exists and is finite. Lemma \ref{l:ODE2} then implies that the solution is uniquely determined by $C_-$, and that the latter must be nonzero, since otherwise $\varphi\equiv 0$.
In order to show existence and finiteness of $C_-$ rewrite
\[
\varphi (t) = -\frac{e^{mt}}{2m} \int_t^\infty e^{-ms} \frac{A(s)}{\Xi (s) -z} \varphi (s)\, ds
- \frac{e^{-mt}}{2m} \int_{-\infty} ^t e^{m s} \frac{A(s)}{\Xi (s) -z} \varphi (s)\, ds\, .
\]
Since by our estimates both $e^{-ms} \frac{A(s)}{\Xi (s) -z} \varphi (s)$ and $e^{m s} \frac{A(s)}{\Xi (s) -z} \varphi (s)$ are integrable, we conclude that $C_{\pm}$ exist and equal
\begin{align*}
C_\pm = -\frac{1}{2m}\int_{-\infty}^\infty e^{\pm ms} \frac{A(s)}{\Xi (s) -z} \varphi (s)\, ds\, .
\end{align*}
\medskip
As for the last sentence of the statement of the lemma, the same arguments can be used in the case $z\in \{\Xi (a), \Xi (b)\}$, since the crucial point is that, thanks to the assumption that $A (a) = A(b)=0$ and $\Xi' (a) \neq 0 \neq \Xi' (b)$, the estimate \eqref{e:estimate-A-over-Xi} remains valid.
\end{proof}
\begin{proof}[Proof of Lemma \ref{l:ODE2}] We distinguish between the cases $c_-\neq 0$ and $c_-=0$. In the case $c_- \neq 0$ we can divide by $c_-$ and reduce the statement to $c_-=1$. For the existence it suffices to look for a solution of \eqref{e:ODE2} which satisfies \eqref{e:y=e^mt} on a half-line of type $]-\infty, T]$ for some $T$. Such a solution then has a $W^{2,1}_{\text{loc}}$ continuation on $[T, \infty[$ by standard ODE theory. Likewise, uniqueness is settled once we show that it holds on $]-\infty, T]$. Observe next that, if the solution exists, then clearly $\frac{d^2 y}{dt^2}\in L^1 (]-\infty, T])$, hence implying that
\[
\lim_{t\to-\infty} y' (t)
\]
exists and is finite. On the other hand \eqref{e:y=e^mt} implies that such limit must be $0$.
Let $\tilde{y} (t) = e^{-mt} y (t)$ and observe that we are looking for a solution of
\[
(e^{2mt} \tilde{y}')' = e^{2mt} v \tilde{y}\, .
\]
Integrating the latter identity between $-N$ and $t$ and letting $N\to \infty$ we conclude
\begin{equation}\label{e:tildey'}
e^{2mt} \tilde{y}' (t) = \int_{-\infty}^t e^{2ms} v (s)\tilde{y} (s)\, ds\, .
\end{equation}
Divide by $e^{2mt}$ and integrate once more to reach
\begin{align*}
\tilde{y} (t) -1 = \int_{-\infty}^t \int_{-\infty}^r e^{2m (s-r)} v(s)\tilde{y} (s)\, ds\, dr
= \frac{1}{2m} \int_{-\infty}^t \big(1-e^{-2m (t-s)}\big) v(s) \tilde{y} (s)\, ds\, .
\end{align*}
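For the reader's convenience we record the step behind the last equality: it is obtained exchanging the order of integration, which is justified by Fubini's theorem since the integrand is absolutely integrable on the region $\{(s,r): s\leq r\leq t\}$:
\[
\int_{-\infty}^t \int_{-\infty}^r e^{2m (s-r)} v(s)\tilde{y} (s)\, ds\, dr
= \int_{-\infty}^t v(s)\tilde{y} (s) \left(\int_s^t e^{2m (s-r)}\, dr\right) ds
= \frac{1}{2m} \int_{-\infty}^t \big(1-e^{-2m (t-s)}\big) v(s) \tilde{y} (s)\, ds\, .
\]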
We then define the transformation
\begin{equation}\label{e:fixed-point}
\mathscr{F} (\tilde{y}) (t) = \frac{1}{2m} \int_{-\infty}^t \big(1-e^{-2m (t-s)}\big) v(s) \tilde{y} (s)\, ds + 1\,
\end{equation}
which we consider as a map from $L^\infty (]-\infty, T])$ into itself.
From our discussion we conclude that $y$ solves \eqref{e:ODE2} and obeys \eqref{e:y=e^mt} if and only if $\tilde{y}$ is a fixed point of $\mathscr{F}$. Choosing $T$ sufficiently negative, so that $\|v\|_{L^1 (]-\infty, T])}\leq m$, we see immediately that $\mathscr{F}$ is a contraction on $L^\infty (]-\infty, T])$ and it thus has a unique fixed point. We have thus shown existence and uniqueness of the solution in question.
Observe now that $z(t) = \tilde{y} (t) -1$ and set
\[
Z(t) := \exp \left(\frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds\right) -1\, .
\]
$Z$ solves the ODE $Z' = \frac{|v|}{2m} Z + \frac{|v|}{2m}$ and, since $\lim_{t\to-\infty} Z(t) =0$, the integral equation
\[
Z (t) = \frac{1}{2m} \int_{-\infty}^t |v(s)| Z(s)\, ds + \frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds\, .
\]
We first want to show that $|z(t)|\leq Z(t)$ on $]-\infty, T]$. We set $\tilde{y}_0 := Z+1$ and define inductively $\tilde{y}_{i+1} = \mathscr{F} (\tilde{y}_i)$. From the above discussion we know that $\tilde{y}_i$ converges uniformly to $\tilde{y}$ and it suffices thus to show that $|\tilde{y}_i -1| \leq Z$ for all $i$.
By definition we have $|\tilde{y}_0-1| = Z$ and thus we need to show the inductive step. We estimate
\begin{align*}
|\tilde{y}_{i+1} (t) -1| &\leq \frac{1}{2m} \int_{-\infty}^t |v(s)| |\tilde{y}_i (s)|\, ds\\
&\leq \frac{1}{2m} \int_{-\infty}^t |v(s)| Z(s)\, ds + \frac{1}{2m} \int_{-\infty}^t |v(s)|\, ds = Z(t)\, .
\end{align*}
We have shown \eqref{e:est-z} on $]-\infty, T]$. In order to extend the inequality to the whole real axis observe first that we can assume, without loss of generality, that $\|v\|_{L^1 (\mathbb R)}>0$, otherwise we trivially have $|\tilde{y} (t)-1| = Z(t) =0$ for all $t$. In particular we can select $T$ so that all of the above holds and at the same time $\|v\|_{L^1 (]-\infty, T])}>0$. This implies $Z(T)>0$. Moreover, by \eqref{e:fixed-point} and $\mathscr{F} (\tilde{y})= \tilde{y}$, either
\begin{align*}
|\tilde{y} (T) -1| &< \frac{1}{2m} \int_{-\infty}^{T} |v(s)| |\tilde{y} (s)|\, ds
\end{align*}
or $|v||\tilde{y}|$ vanishes identically on $]-\infty, T]$. In both cases we conclude $|\tilde{y} (T)-1|< Z(T)$. Consider now $\sup \{t\geq T: |\tilde{y} (t)-1|< Z (t)\}$. Such a supremum cannot be a finite number $T_0$ because in that case we would have $|\tilde{y} (T_0)-1| = Z(T_0)$, while the same argument leading to the strict inequality $|\tilde{y} (T)-1|< Z(T)$ implies $|\tilde{y} (T_0)-1|< Z (T_0)$.
Having shown \eqref{e:est-z} we now come to \eqref{e:est-z'}. Recalling \eqref{e:tildey'} we have
\begin{align*}
|z'(t)| &= \left|\int_{-\infty}^t e^{-2m (t-s)} v (s) (z(s)+1)\, ds\right|\\
&\leq \int_{-\infty}^t |v (s)| Z(s)\, ds + \int_{-\infty}^t |v (s)|\, ds = 2m Z(t)\, ,
\end{align*}
where we have used $|z(s)|\leq Z(s)$, the bound $e^{-2m(t-s)}\leq 1$ for $s\leq t$, and, in the last equality, the integral equation satisfied by $Z$.
We now come to the case $c_- =0$. In that case we need to show that the unique solution is identically~$0$. Arguing as for the case $c_- =1$ we conclude that $\varphi$ is a fixed point of the transformation
\[
\mathscr{F} (\varphi) (t) = \frac{1}{2m} \int_{-\infty}^t \big(1-e^{-2m (t-s)}\big) v(s) \varphi (s)\, ds\, .
\]
Again, for $T$ sufficiently negative, $\mathscr{F}$ is a contraction on $L^\infty (]-\infty, T])$ and hence it has a unique fixed point. Since $0$ is, trivially, a fixed point, we conclude that $\varphi\equiv 0$ on $]-\infty, T]$. Standard ODE theory then implies that $\varphi$ vanishes identically on the whole $\mathbb R$.
\end{proof}
\section{Proof of Proposition \ref{p:3+4}}\label{s:3+4}
We start by showing the last statement of the proposition, namely:
\begin{itemize}
\item[(A)] For $z = \Xi (a)$ and under the assumption that $\lambda_a>1$, the unique $m$ such that \eqref{e:eigenvalue-equation-3} has a nontrivial solution $\psi_a\in L^2$ is $m_a = \sqrt{\lambda_a}$.
\end{itemize}
Before coming to its proof we also observe that the same argument applies with $b$ in place of $a$.
First of all observe that, for $z=\Xi (a)$, the equation \eqref{e:eigenvalue-equation-3}, which becomes
\begin{equation}\label{e:eigenvalue-equation-again}
-\frac{d^2\varphi}{dt^2} + m^2 \varphi + \frac{A}{\Xi-\Xi (a)} \varphi = 0,
\end{equation}
has a nontrivial solution $\varphi\in W^{2,2}_{\text{loc}}\cap L^2 (\mathbb R; \mathbb C)$ if and only if it has a nontrivial solution $\varphi \in W^{2,2}_{\text{loc}} \cap L^2 (\mathbb R;\mathbb R)$. That the equation has a nontrivial solution when $m=\sqrt{\lambda_a}$ follows from the classical theory of self-adjoint operators. We therefore only need to show that a nontrivial solution can exist for at most one $m\geq 1$. Arguing by contradiction, assume there are two, $1\leq m_1< m_2$, and denote by $\psi_1$ and $\psi_2$ the respective solutions. Then there is a nontrivial linear combination
\[
\psi = C_1 \psi_1 + C_2 \psi_2
\]
which vanishes at $a$. Observe that $\psi_1$ and $\psi_2$ can be interpreted as eigenfunctions of the self-adjoint operator $-\frac{d^2}{dt^2} + \frac{A(t)}{\Xi (t)-\Xi (a)}$ relative to the distinct eigenvalues $-m_1^2$ and $-m_2^2$; they are, therefore, $L^2$-orthogonal. Summing the two equations, multiplying by $\psi$ and integrating by parts we achieve
\begin{equation}\label{e:tested}
\underbrace{\int \left((\psi')^2 +\frac{A}{\Xi-\Xi (a)} \psi^2\right)}_{=:I} = - C_1^2 m_1^2 \int \psi_1^2 - C_2^2 m_2^2 \int \psi_2^2\, .
\end{equation}
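Here the right-hand side is obtained from the equations $-\psi_i'' + m_i^2 \psi_i + \frac{A}{\Xi-\Xi (a)}\psi_i = 0$ together with the $L^2$ orthogonality of $\psi_1$ and $\psi_2$:
\[
-\int (C_1 m_1^2 \psi_1 + C_2 m_2^2 \psi_2)\, (C_1\psi_1 + C_2\psi_2) = - C_1^2 m_1^2 \int \psi_1^2 - C_2^2 m_2^2 \int \psi_2^2\, .
\]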
Recalling that $A = \Xi'' + 2\Xi' = (\Xi' + 2\Xi)'$, we wish to integrate by parts the second integrand in the left-hand side. Observe that, because $\psi$ vanishes on $a$ and $\Xi' (a) \neq 0$, the function $\frac{\psi^2}{\Xi - \Xi (a)}$ is in fact continuously differentiable. In particular we can write
\[
\int\frac{A}{\Xi-\Xi (a)} \psi^2 = \int \left(\frac{\Xi'+2\Xi}{(\Xi-\Xi(a))^2} \Xi' \psi^2 - 2\frac{\Xi'+2\Xi}{\Xi-\Xi (a)}\psi\psi'\right)\, .
\]
Substituting it into I, we achieve
\begin{align*}
I &= \int \left(\psi' - \frac{\Xi'}{\Xi-\Xi (a)}\psi\right)^2 + \int \left(\frac{2\Xi\Xi'\psi^2}{(\Xi-\Xi (a))^2} - \frac{4\Xi\psi\psi'}{\Xi-\Xi (a)} \right)\\
&= \int \left(\psi' - \frac{\Xi'}{\Xi-\Xi (a)}\psi\right)^2 + 2 \int \frac{\Xi'}{\Xi-\Xi (a)} \psi^2\, ,
\end{align*}
where to reach the second line we have written the first term in the second integral as
\[
- 2 \Xi \frac{d}{dt} \left(\frac{1}{\Xi-\Xi (a)}\right) \psi^2
\]
and integrated it by parts. Again thanks to the fact that $\psi$ vanishes at $a$ we can write it as $\psi= (\Xi-\Xi (a)) \eta$ and hence conclude
\begin{align*}
I &= \int ((\Xi - \Xi (a))\eta')^2 + \int 2 (\Xi-\Xi (a))\Xi' \eta^2 = \int ((\Xi - \Xi (a))\eta')^2 - 2 \int (\Xi-\Xi (a))^2 \eta\eta'\\
&= \int (\Xi-\Xi(a))^2 (\eta'-\eta)^2 - \int (\Xi-\Xi(a))^2 \eta^2\\
&= \int (\Xi-\Xi(a))^2 (\eta'-\eta)^2 - \int (C_1^2\psi_1^2 + C_2^2\psi_2^2)\, .
\end{align*}
Inserting the latter in \eqref{e:tested} we conclude
\[
\int (\Xi-\Xi(a))^2 (\eta'-\eta)^2 = - C_1^2 (m_1^2-1) \int \psi_1^2 - C_2^2 (m_2^2-1) \int \psi_2^2\, .
\]
Observe that, since $m_2>1$ and $\psi_2$ is nontrivial, we conclude that $C_2=0$. This would then imply that $\psi = C_1 \psi_1$ and we can thus assume $C_1=1$ in all our computations. In particular $\eta'=\eta$, which implies $\eta (t) = C e^t$. We can now write $\psi_1 (t) = (\Xi (t)-\Xi(a)) \eta (t)$ and given the properties of $\Xi (t)$ we easily see that this would violate the decay at $+\infty$ that we know for $\psi_1$ from Corollary \ref{c:decay}.
\begin{remark}\label{r:phi(a)-nonzero}
We record here a consequence of the above argument: a nontrivial solution $\varphi$ of \eqref{e:eigenvalue-equation-again} necessarily satisfies $\varphi (a) \neq 0$ (and thus it must be unique up to constant factors).
\end{remark}
\medskip
We next show that
\begin{itemize}
\item[(B)] If $(m_0,z)\in \overline{\mathscr{P}}$, $m_0\geq 1$ and $z\in \mathbb R$, then $z$ is in the closure of the range of $\Xi$.
\end{itemize}
We again argue by contradiction and assume the existence of
\begin{itemize}
\item[(i)] A sequence $\{m_j\}\subset ]1, \infty[$ converging to $m_0\in [1, \infty[$;
\item[(ii)] A sequence $\{z_j\}\subset \mathbb C$ with ${\rm Im}\, z_j >0$ converging to $z\in \mathbb R\setminus \overline{\Xi (\mathbb R)}$;
\item[(iii)] A sequence $\psi_j$ of nontrivial solutions of
\begin{equation}\label{e:eigenvalue-5}
-\frac{d^2\psi_j}{dt^2} + m_j^2 \psi_j + \frac{A}{\Xi-z_j} \psi_j = 0\, .
\end{equation}
\end{itemize}
By Corollary \ref{c:decay} we can normalize our functions $\psi_j$ so that $\psi_j (t) e^{-m_jt} \to 1$ as $t\to-\infty$ and $\psi_j (t) e^{m_jt} \to C_j\neq 0$ as $t\to\infty$. Observe also that there is a positive constant $c_0$ such that $|\Xi-z_j|\geq c_0$ for all $j$ sufficiently large, thanks to (ii). In particular, the functions $\frac{A}{\Xi-z_j}$ are uniformly bounded in $L^1$. By Lemma \ref{l:ODE2} there is a positive $T_0\geq b+1$, independent of $j$, such that
\begin{equation}\label{e:uniform-exp}
\left|\psi_j (t) - C_j e^{-m_j t}\right| \leq \frac{|C_j|}{2} e^{- m_j t} \qquad \forall t \geq T_0\, ,
\end{equation}
and there is a constant $C$, independent of $j$ such that
\begin{equation}\label{e:uniform-inside}
\|\psi_j\|_{L^\infty ([a,b])} \leq C\, .
\end{equation}
Next multiply \eqref{e:eigenvalue-5} by $\bar \psi_j$, integrate in $t$ and take the imaginary part of the resulting equality to conclude
\begin{equation}\label{e:imaginary-trick}
\int \frac{A}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2 = 0\, .
\end{equation}
We break the integral into three integrals over the regions $]-\infty, a[$, $]a,b[$, and $]b, \infty[$, where the function $A$ is, respectively, negative, positive, and negative. Since $[T_0, 2T_0]\subset\, ]b, \infty[$, this gives
\[
-\int_{T_0}^{2T_0} \frac{A}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2 \leq
\int_a^b \frac{A}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2\, .
\]
Now, the right-hand side of the inequality can be bounded uniformly, independently of $j$, by \eqref{e:uniform-inside} and (ii). On the other hand the function $\frac{-A}{(\Xi - {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2}$ is larger than a positive constant $c$ independent of $j$ on $[T_0, 2 T_0]$. Using \eqref{e:uniform-exp} we can achieve a uniform bound $|C_j|\leq C$ for the constants $C_j$. The latter bound, combined with the estimates of Lemma \ref{l:ODE2} and the uniform bound on $\|\frac{A}{\Xi-z_j}\|_{L^1}$, easily implies that $\psi_j$ is precompact in $L^2$. We can thus extract a subsequence, not relabeled, converging to a nontrivial $L^2$ solution $\psi$ of
\begin{equation}\label{e:eigenvalue-equation-6}
-\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-z} \psi = 0\, .
\end{equation}
Without loss of generality we assume that $\psi$ is real valued, since $z$ is real. We can thus multiply \eqref{e:eigenvalue-equation-6} by $\psi$ and integrate to achieve
\[
\int ((\psi')^2 + m_0^2 \psi^2) + \int \frac{\Xi''+2\Xi'}{\Xi- z} \psi^2 = 0\, .
\]
Integrating by parts $\int \frac{\Xi''}{\Xi-z} \psi^2$ we find
\[
\int ((\psi')^2 + m_0^2 \psi^2) + \int \left(\frac{(\Xi')^2}{(\Xi-z)^2} \psi^2 - 2 \frac{\Xi'}{\Xi-z} \psi' \psi\right) +
\int \frac{2\Xi'}{\Xi-z} \psi^2 = 0 \, ,
\]
which we can rewrite as
\begin{equation}\label{e:energy-trick}
\int \left(\left(\psi' - \frac{\Xi'}{\Xi-z} \psi\right)^2 + m_0^2 \psi^2\right) + 2 \int \frac{\Xi'}{\Xi-z} \psi^2 = 0\, .
\end{equation}
As already done in the previous paragraphs we set $\eta = \frac{\psi}{\Xi-z}$ and write the identity as
\[
\int \left((\Xi-z)^2 (\eta')^2 + m_0^2 (\Xi-z)^2 \eta^2 + 2 \Xi' (\Xi-z) \eta^2\right) = 0
\]
Integrating by parts the last term we find
\[
\int (\Xi-z)^2 (\eta'-\eta)^2 + \int (m_0^2-1) (\Xi-z)^2 \eta^2 = 0\, .
\]
We thus conclude that $m_0=1$ and $\eta'=\eta$, i.e. $\eta (t) = C e^t$, but again we see that this would violate $\psi\in L^2$.
\medskip
We next employ a suitable variation of the latter argument to show that
\begin{itemize}
\item[(C)] $(m_0, 0)$ and $(m_0, \Xi (-\infty))$ do not belong to $\overline{\mathscr{P}}$ if $m_0\geq 1$.
\end{itemize}
We again argue by contradiction and assume the existence of
\begin{itemize}
\item[(i)] A sequence $\{m_j\}\subset ]1, \infty[$ converging to $m_0\in [1, \infty[$;
\item[(ii)] A sequence $\{z_j\}\subset \mathbb C$ with ${\rm Im}\, z_j >0$ converging to $0$ or to $\Xi (- \infty)$;
\item[(iii)] A sequence $\psi_j$ of nontrivial solutions of
\begin{equation}\label{e:eigenvalue-equation-7}
-\frac{d^2\psi_j}{dt^2} + m_j^2 \psi_j + \frac{A}{\Xi-z_j} \psi_j = 0\, .
\end{equation}
\end{itemize}
We first focus on the case $z_j\to 0$. Normalize again the solutions so that $\psi_j (t)$ is asymptotic to $e^{m_j t}$ for $t$ negative, and to $C_j e^{-m_j t}$ for $t$ positive.
Observe that in this case we have $\frac{A}{\Xi}\in L^1 (]-\infty, N])$ for every $N$, while $\frac{A}{\Xi-z_j}$ enjoys a uniform $L^1$ bound on any $]-\infty, N]$. We can thus apply Lemma \ref{l:ODE2} and conclude that the $\psi_j$ can be assumed to converge uniformly to a function $\psi$ on $]-\infty, N]$ for every $N$ and that likewise $\psi (t)$ is asymptotic to $e^{m_0 t}$ for $t$ negative.
As done previously we multiply the equation \eqref{e:eigenvalue-equation-7} by $\bar\psi_j$, integrate, and take the imaginary part. In particular we gain the inequality
\[
-\int_b^\infty \frac{A}{(\Xi- {\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2 \leq \int_a^b \frac{A}{(\Xi- {\rm Re}\, z_j)^2+ ({\rm Im}\, z_j)^2} |\psi_j|^2\, .
\]
Since $z_j\to 0$ and the range of $\Xi$ on $[a,b]$ is bounded away from $0$, we conclude that the right-hand side is uniformly bounded. In particular, passing to the limit we conclude that
\begin{equation}\label{e:info-L^1}
\Xi^{-2} A |\psi|^2 \in L^1 ([b, \infty[)\, .
\end{equation}
Observe however that
\[
\lim_{t\to\infty} \frac{A(t)}{\Xi (t)} = \lim_{t\to \infty} \frac{-{\bar\alpha} e^{-{\bar\alpha} t}}{c_1 e^{-2t}+\frac{1}{2-{\bar\alpha}} e^{-{\bar\alpha} t}} = - {\bar\alpha} (2-{\bar\alpha})\, .
\]
In particular we conclude that $\psi\in L^2$. Moreover, we can write
\[
\frac{A}{\Xi} = -{\bar\alpha} (2-{\bar\alpha}) + B
\]
for a function $B$ which belongs to $L^1 ([T, \infty[)$ for every $T$. We thus have that
\[
\frac{d^2\psi}{dt^2} + (m_0^2 - {\bar\alpha} (2-{\bar\alpha})) \psi + B \psi = 0\, .
\]
Recalling that $0<{\bar\alpha} <1$ and $m_0\geq 1$, we have $m_0^2 - {\bar\alpha} (2-{\bar\alpha})>0$ and we can therefore apply Lemma \ref{l:ODE2} to conclude that, for $\bar m := \sqrt{m_0^2 - {\bar\alpha} (2-{\bar\alpha})}$
\[
\lim_{t\to \infty} e^{\bar m t} \psi (t)
\]
exists and is finite and nonzero. Observe however that \eqref{e:info-L^1} forces $e^{{\bar\alpha} t} |\psi|^2\in L^1 ([b,\infty[)$, since $\Xi^{-2} A$ behaves like a constant multiple of $e^{{\bar\alpha} t}$ at $+\infty$; this in particular implies that $\bar m > \frac{{\bar\alpha}}{2}$.
We next argue as in the derivation of \eqref{e:energy-trick} to get
\[
\int \left( \left(\psi' - \frac{\Xi'}{\Xi} \psi\right)^2 + m_0^2 \psi^2\right) + 2 \int \frac{\Xi'}{\Xi} \psi^2 = 0\, .
\]
We again set $\psi= \Xi \eta$ and observe that, by our considerations, $\eta$ decays exponentially at $-\infty$, while it is asymptotic to $e^{({\bar\alpha} - \bar m) t}$ at $+\infty$. We rewrite the latter identity as
\[
\int (\Xi^2 (\eta')^2 + m_0^2 \Xi^2 \eta^2 + 2 \Xi\Xi' \eta^2) = 0\, .
\]
We wish to integrate by parts the latter term to find
\begin{equation}\label{e:da-giustificare}
\int (\Xi^2 (\eta'-\eta)^2 + (m_0^2-1) \Xi^2 \eta^2)=0\, .
\end{equation}
Since we have exponential decay of $\eta$ at $-\infty$, while at $+\infty$ $\eta$ might grow, the latter integration by parts needs some careful justification. First of all we notice that $\Xi \Xi' \eta^2$ decays exponentially at $+\infty$ and thus, since the other two integrands are positive, we can write
\[
\int (\Xi^2 (\eta')^2 + m_0^2 \Xi^2 \eta^2 + 2 \Xi\Xi' \eta^2) =
\lim_{N\to\infty} \int_{-\infty}^N (\Xi^2 (\eta')^2 + m_0^2 \Xi^2 \eta^2 + 2 \Xi\Xi' \eta^2) \, .
\]
Next, we can integrate by parts the second integrand (before passing to the limit) to write
\[
\int_{-\infty}^N (\Xi^2 (\eta')^2 + m_0^2 \Xi^2 \eta^2 + 2 \Xi\Xi' \eta^2) =
\int_{-\infty}^N (\Xi^2 (\eta'-\eta)^2 + (m_0^2-1) \Xi^2 \eta^2) + \Xi^2 (N) \eta^2 (N)\, .
\]
Since $\Xi (N) \eta (N)$ converges to $0$ exponentially, passing to the limit we conclude \eqref{e:da-giustificare}.
As before this would imply $m_0=1$ and $\eta (t) = C e^t$ with $C\neq 0$, while we have already argued that $\eta$ is asymptotic to $e^{({\bar\alpha} - \bar m) t}$ at $+\infty$, which is incompatible with the growth $e^t$ since ${\bar\alpha} - \bar m < 1$.
We next tackle the case $z_j \to \Xi (-\infty)$. This time we observe that $\frac{A}{\Xi-z_j}$ enjoys a uniform $L^1$ bound on $[T, \infty[$ for every $T$ and we thus normalize the functions $\psi_j$ so that $\psi_j (t)$ is asymptotic to $e^{-m_j t}$ for $t\to \infty$. Arguing as above, we assume that $\psi_j$ converges uniformly on all $[T, \infty[$ to a $\psi$ which is asymptotic to $e^{-m_0 t}$ and solves
\begin{equation}\label{e:eigenvalue-equation-9}
-\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-\Xi (-\infty)} \psi=0\, .
\end{equation}
As above we can assume that $\psi$ is real valued. Moreover, this time we infer (with the same method used to prove \eqref{e:info-L^1})
\begin{equation}\label{e:info-L1-2}
(\Xi-\Xi (-\infty))^{-2} A \psi^2 \in L^1 (\mathbb R)
\end{equation}
This time observe that, for $t$ sufficiently negative, $\frac{A(t)}{\Xi (t)- \Xi (-\infty)} = 8$. In particular we can explicitly solve the equation as
\[
\psi (t) = C_1 e^{-t\sqrt{m_0^2+8}} + C_2 e^{t\sqrt{m_0^2+8}}
\]
when $t$ is sufficiently negative.
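We record the computation behind the next assertion. For $t$ sufficiently negative, $f := \Xi-\Xi (-\infty)$ solves $f''+2f' = 8f$ (combine $\frac{A}{\Xi-\Xi (-\infty)}=8$ with $A = \Xi''+2\Xi'$), and since $f (t)\to 0$ as $t\to-\infty$ the mode $e^{-4t}$ is excluded, so that $f (t) = c\, e^{2t}$ for some constant $c\neq 0$. Hence, for $t$ sufficiently negative,
\[
(\Xi-\Xi (-\infty))^{-2} A\, \psi^2 = \frac{8\,\psi^2}{\Xi-\Xi (-\infty)} = \frac{8}{c}\, e^{-2t} \psi^2 (t)\, ,
\]
so that the integrability in \eqref{e:info-L1-2} requires $e^{-2t}\psi^2\in L^1 (]-\infty, T])$.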
However, if $C_1$ were nonzero, \eqref{e:info-L1-2} would not hold. In particular we infer exponential decay at $-\infty$. We can now argue as for the case $z_j\to 0$: we multiply \eqref{e:eigenvalue-equation-9} by $\psi$, integrate in time and perform an integration by parts to infer
\[
\int \left( \left(\psi' - \frac{\Xi'}{\Xi - \Xi (-\infty)} \psi\right)^2 + m_0^2 \psi^2\right) + 2 \int \frac{\Xi'}{\Xi-\Xi (-\infty)} \psi^2 = 0\, .
\]
We then introduce $\eta$ so that $\psi = (\Xi-\Xi (-\infty)) \eta$. This time we infer exponential decay for $\eta$ at both $\infty$ and $-\infty$. Arguing as above we rewrite the last identity as
\[
\int ((\Xi- \Xi (-\infty))^2 (\eta'-\eta)^2 + (m_0^2-1) (\Xi- \Xi (-\infty))^2 \eta^2)=0\, ,
\]
reaching again a contradiction.
\medskip
In order to complete the proof of the proposition we need to show
\begin{itemize}
\item[(D)] If $(m_0, \Xi (c)) \in \overline{\mathscr{P}}$ and $m_0> 1$, then either $c=a$ or $c=b$, and moreover we have, respectively, $m_0 = \sqrt{\lambda_a}$ or $m_0 = \sqrt{\lambda_b}$.
\end{itemize}
As before we argue by contradiction and assume the existence of
\begin{itemize}
\item[(i)] A sequence $\{m_j\}\subset ]1, \infty[$ converging to $m_0\in ]1, \infty[$;
\item[(ii)] A sequence $\{z_j\}\subset \mathbb C$ with ${\rm Im}\, z_j >0$ converging to $\Xi (c)$ for some $c\not\in \{a,b\}$;
\item[(iii)] A sequence $\psi_j$ of nontrivial solutions of
\begin{equation}\label{e:eigenvalue-equation-10}
-\frac{d^2\psi_j}{dt^2} + m_j^2 \psi_j + \frac{A}{\Xi-z_j} \psi_j = 0\, .
\end{equation}
\end{itemize}
This time we normalize the $\psi_j$'s so that
\begin{equation}\label{e:L2-normalization}
\int (|\psi_j'|^2 + m_j^2 |\psi_j|^2) =1\, .
\end{equation}
By Lemma \ref{l:ODE2} we know that $\psi_j (t)$ is asymptotic to $\rho_j^\pm e^{\mp m_j t}$ for $t\to \pm \infty$, where $\rho_j^\pm \in \mathbb C \setminus \{0\}$. Since $\Xi (c)$ has a positive distance from both $0$ and $\Xi (-\infty)$, we can apply Lemma \ref{l:ODE2} to achieve uniform times $T_\pm$ with the properties that
\begin{align}
\left|\psi_j (t) - \rho_j^+ e^{-m_j t}\right|& \leq \frac{|\rho_j^+|}{2} e^{-m_j t} \qquad\qquad \forall t\geq T_+\, ,\label{e:exp-bound-1}\\
\left|\psi_j (t) - \rho_j^- e^{m_j t}\right| &\leq \frac{|\rho_j^-|}{2} e^{m_j t} \qquad\qquad \forall t\leq T_-\, .\label{e:exp-bound-2}
\end{align}
Combining the latter inequalities with \eqref{e:L2-normalization} we conclude that $\sup_j |\rho_j^\pm| < \infty$, and in particular $\{\psi_j\}_j$ is tight in $L^2$, i.e. for every $\varepsilon >0$ there is $N = N (\varepsilon)$ such that
\[
\sup_j \int_{|t|\geq N} |\psi_j|^2 < \varepsilon\, .
\]
The latter bound combined with \eqref{e:L2-normalization} implies, up to extraction of a subsequence which we do not relabel, the strong $L^2$ convergence of $\psi_j$ to a function $\psi$. Thanks to Sobolev embedding, the convergence is uniform on any compact set and, moreover, $\psi\in C^{1/2}$.
Arguing as for \eqref{e:imaginary-trick} we infer
\begin{equation}\label{e:imaginary-trick-2}
\int \frac{A}{(\Xi-{\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} |\psi_j|^2 =0\, .
\end{equation}
The latter bound implies $\psi (c)=0$. Indeed, first observe that $\frac{A}{|\Xi-z_j|^2} |\psi_j|^2$ converges in $L^1$ on $\mathbb R \setminus ]c-\delta, c+\delta[$ for every $\delta>0$. Choosing $\delta>0$ so that $|A (t) - A(c)| \leq \frac{|A(c)|}{2}$ for $t\in [c-\delta, c+\delta]$ and recalling that $|A(c)|>0$, we easily infer that
\[
\sup_j \int_{c-h}^{c+h} \frac{|\psi_j|^2}{(\Xi-{\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} < \infty \qquad \forall h < \delta\, .
\]
If $\psi (c)$ were different from $0$, we could select a positive $h< \delta$ and a positive $c_0$ with the property that $|\psi (t)|^2 \geq 2c_0$ for all $t\in [c-h, c+h]$. In particular, for $j$ large enough we would infer $|\psi_j (t)|^2 \geq c_0$ for all $t\in [c-h, c+h]$. But then we would conclude
\[
\sup_j \int_{c-h}^{c+h} \frac{1}{(\Xi-{\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} < \infty\, .
\]
Since the denominator converges to $(\Xi - \Xi (c))^2$, this is clearly not possible.
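For completeness, the divergence can be quantified: by the mean value theorem $|\Xi (t)-\Xi (c)|\leq C|t-c|$ on $[c-h, c+h]$, so that Fatou's lemma yields
\[
\liminf_{j\to\infty} \int_{c-h}^{c+h} \frac{dt}{(\Xi-{\rm Re}\, z_j)^2 + ({\rm Im}\, z_j)^2} \geq \int_{c-h}^{c+h} \frac{dt}{(\Xi (t)-\Xi (c))^2} \geq \frac{1}{C^2}\int_{c-h}^{c+h} \frac{dt}{(t-c)^2} = \infty\, .
\]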
We now wish to pass in the limit in \eqref{e:eigenvalue-equation-10} to derive that
\begin{equation}\label{e:eigenvalue-equation-11}
- \psi'' + m_0^2 \psi + \frac{A}{\Xi-\Xi (c)} \psi =0\, ,
\end{equation}
where we notice that, thanks to $\psi (c)=0$ and the H\"older regularity of $\psi$, the function $\frac{A}{\Xi-\Xi (c)} \psi$ is indeed in $L^p$ for every $p<2$. We thus understand the equation distributionally.
The equation clearly passes to the limit outside the singularity $c$ of the denominator and thus we just need to pass it to the limit distributionally in some interval $]c-h,c+h[$. We write the third term as
\begin{align*}
\frac{A}{\Xi-z_j} \psi_j &= \left(\frac{d}{dt} \ln (\Xi-z_j) \right)\frac{A}{\Xi'} \psi_j\\
&= \frac{d}{dt} \left(\ln (\Xi-z_j) \frac{A}{\Xi'} \psi_j\right) - \ln (\Xi-z_j)\frac{A}{\Xi'} \psi'_j - \ln (\Xi-z_j) \frac{d}{dt} \left(\frac{A}{\Xi'}\right) \psi_j\, .
\end{align*}
Observe that we can define the logarithm unequivocally because $\Xi$ is real valued and ${\rm Im}\, z_j >0$.
Next, we remark that:
\begin{itemize}
\item[(i)] $\frac{A}{\Xi'}$ is smooth in $]c-h, c+h[$;
\item[(ii)] $\ln (\Xi-z_j)$ converges strongly\footnote{Since $\ln (\Xi-z_j)$ converges uniformly to $\ln (\Xi-\Xi(c))$ on any compact set which does not contain $c$, in order to reach the conclusion it suffices to prove a uniform $L^q$ bound on the functions, for every $q<\infty$. This can be easily concluded as follows. Choose an interval $[c-h, c+h]$ and recall that $\Xi'$ does not change sign on it. For each $j$ large enough we then find a unique $c_j \in [c-h, c+h]$ such that $\Xi (c_j) = {\rm Re}\, z_j$. Using the mean value theorem we easily conclude that $|\Xi (t)-z_j|\geq |\Xi (t) - \Xi (c_j)|
\geq C^{-1} |t-c_j|$ for every $t\in [c-h, c+h]$, where $C^{-1} = \min \{|\Xi'(t)|: c-h\leq t \leq c+h\}$.} to $\ln (\Xi-\Xi(c))$ in $L^q (]c-h, c+h[)$ for every $q<\infty$;
\item[(iii)] $\psi_j' \to \psi'$ weakly in $L^2$, while $\psi_j\to \psi$ uniformly.
\end{itemize}
We thus conclude that $\frac{A}{\Xi-z_j} \psi_j$ converges distributionally to
\[
\frac{d}{dt} \left(\ln (\Xi-\Xi (c)) \frac{A}{\Xi'} \psi\right) - \ln (\Xi-\Xi(c)) \frac{A}{\Xi'} \psi' - \ln (\Xi-\Xi(c)) \frac{d}{dt} \left(\frac{A}{\Xi'}\right) \psi\, .
\]
Using now that $\psi\in W^{1,2}$ and $\psi (c)=0$ we can rewrite the latter distribution as
\[
\frac{A}{\Xi-\Xi (c)} \psi
\]
and hence conclude the validity of \eqref{e:eigenvalue-equation-11}.
Observe next that from \eqref{e:eigenvalue-equation-11} we infer $\psi''\in L^p$ for every $p< 2$, which in turn implies that $\psi$ is indeed $C^{1,\kappa}_{\text{loc}}$ for every $\kappa < \frac{1}{2}$. In turn this implies that $\frac{A}{\Xi-\Xi (c)} \psi$ is continuous at $c$, so that in particular $\psi$ is twice differentiable. We thus can argue as for the derivation of \eqref{e:energy-trick} and get
\begin{equation}\label{e:energy-trick-4}
\int \left(\left(\psi' - \frac{\Xi'}{\Xi-\Xi (c)} \psi\right)^2 + m_0^2 \psi^2\right) + 2 \int \frac{\Xi'}{\Xi-\Xi (c)} \psi^2 = 0\, .
\end{equation}
Once again we can set $\psi = (\Xi-\Xi (c)) \eta$ and observe that $\eta\in W^{1,2}$, to rewrite the latter identity as
\[
\int ((\Xi- \Xi (c))^2 (\eta'-\eta)^2 + (m_0^2-1) (\Xi- \Xi (c))^2 \eta^2)=0\, ,
\]
inferring that $\eta=0$.
We have thus concluded that $\psi$ vanishes identically, but this is not yet a contradiction, since the normalization \eqref{e:L2-normalization} and the strong $L^2$ convergence do not ensure that $\psi$ is nontrivial. In order to complete our argument, note first that, by the monotonicity of $\Xi$, for each $j$ large enough there is a unique $c_j$ such that $\Xi (c_j) = {\rm Re}\, z_j$. We then multiply the equation \eqref{e:eigenvalue-equation-10} by $\bar \psi_j - \overline{\psi_j (c_j)}$ to obtain
\[
\int \left(|\psi_j'|^2 + m_j^2 \psi_j (\bar\psi_j - \overline{\psi_j (c_j)}) + \frac{A}{\Xi-z_j} \psi_j (\bar\psi_j - \overline{\psi_j (c_j)})\right) = 0\, .
\]
Note that $c_j$ must converge to $c$ and that the integrals
\[
\int \psi_j (\bar\psi_j - \overline{\psi_j (c_j)})
\]
converge to $0$ because $\psi_j - \psi_j (c_j)$ converges to $0$ uniformly and, thanks to the uniform exponential decay of the $\psi_j$, the latter are uniformly bounded in $L^1$. For the same reason the first integral in the sum
\begin{equation}
\label{e:up}\int_{|t-c|\geq h} \frac{A}{\Xi-z_j} \psi_j (\bar\psi_j - \overline{\psi_j (c_j)}) +\int_{|t-c|\leq h} \frac{A}{\Xi-z_j} \psi_j (\bar\psi_j - \overline{\psi_j (c_j)})
\end{equation}
converges to $0$ for every fixed $h$. On the other hand, $|\frac{A (t)}{\Xi (t)-z_j}| |\psi_j (t) - \psi_j (c_j)|\leq C |t-c_j|^{-1/2}$, and thus the second integral in \eqref{e:up} is bounded by $C' h^{1/2}$, uniformly in $j$. Letting first $j\to\infty$ and then $h\downarrow 0$, we conclude that the $L^2$ norm of $\psi'_j$ converges to $0$. This however contradicts the normalization \eqref{e:L2-normalization}.
\section{Proof of Proposition \ref{p:5-7}: Part I}
We set $m_0=m_a$, $z_0 = \Xi (a)$, and
we fix a $\psi_0$ solution of
\[
-\frac{d^2\psi_0}{dt^2} + m_0^2 \psi_0 + \frac{A}{\Xi-z_0} \psi_0 = 0
\]
with $L^2$ norm equal to $1$. Since the operator is self-adjoint we will indeed assume that $\psi_0$ is real. We then define the projector $P_0: L^2 (\mathbb R; \mathbb C) \to \{\kappa \psi_0:\kappa \in \mathbb C\}$ as
\[
P_0 (\psi) = \langle \psi, \psi_0\rangle \psi_0\, .
\]
Observe that $P_0$ is self-adjoint.
Next,
in a neighborhood of $(m_0, z_0)$ we will look for solutions of \eqref{e:eigenvalue-equation-3} by solving
\begin{equation}\label{e:Lagrange}
\left\{
\begin{array}{l}
-\psi'' + m^2 \psi + \frac{A}{\Xi-z} \psi + P_0 (\psi) = \psi_0\\ \\
\langle \psi, \psi_0\rangle =1
\end{array}
\right.
\end{equation}
which we can rewrite as
\begin{equation}\label{e:Lagrange-2}
\left\{
\begin{array}{l}
-\psi'' + m_0^2 \psi + \frac{A}{\Xi-z_0} \psi + P_0 (\psi) = A \left(((\Xi-z_0)^{-1} - (\Xi-z)^{-1}) \psi\right) + (m_0^2-m^2)\psi + \psi_0\\ \\
\langle \psi, \psi_0\rangle =1
\end{array}
\right.
\end{equation}
Next we observe that the operator $-\frac{d^2}{dt^2} + m_0^2$, considered as a closed unbounded self-adjoint operator in $L^2$ (with domain $W^{2,2}$) has an inverse $\mathcal{K}_{m_0}:L^2 \to L^2$ which is a bounded operator. We thus rewrite \eqref{e:Lagrange-2} as
\begin{equation}\label{e:Lagrange-3}
\left\{
\begin{array}{ll}
\underbrace{\psi + \mathcal{K}_{m_0} \left(\frac{A}{\Xi-z_0} \psi + P_0 (\psi)\right)}_{=: T (\psi)}\\
\qquad\qquad \qquad= \underbrace{\mathcal{K}_{m_0}
\left(\left(A \left((\Xi-z_0)^{-1} - (\Xi-z)^{-1}\right) + (m_0^2 -m^2)\right) \psi\right)}_{=:- \mathcal{R}_{m,z} (\psi)} +
\mathcal{K}_{m_0} (\psi_0)\\ \\
\langle \psi, \psi_0 \rangle =1\, .
\end{array}
\right.
\end{equation}
The proof of Proposition \ref{p:5-7} will then be broken into two pieces. In this section we will show the first part, which we can summarize in the following
\begin{lemma}\label{l:solve-for-psi}
For every $\mu>0$, if $(m,z)$ is sufficiently close to $(m_0, z_0)$ and ${\rm Im}\, z\geq \mu |{\rm Re}\, (z-z_0)|$, then there is a unique $\psi= \psi (m,z) \in L^2 (\mathbb R)$ solving
\begin{equation}\label{e:solve-for-psi}
T (\psi) + \mathcal{R}_{m,z} (\psi) = \mathcal{K}_{m_0} (\psi_0)\, .
\end{equation}
\end{lemma}
Before coming to its proof we single out two important ingredients.
\begin{lemma}\label{l:invert-T}
$T$ is a bounded operator with bounded inverse on the spaces $L^2$ and $C^\sigma$, for any $\sigma \in ]0,1[$.
\end{lemma}
\begin{proof} Recall that the operator $\mathcal{K}_m$ is given by the convolution with $\frac 1{2m} e^{-m|\cdot|}$. In this first step we prove that $T$ is a bounded operator with bounded inverse in the spaces $L^2 (\mathbb R)$ and $C^\sigma (\mathbb R)$.\footnote{Observe that $\mathcal{K}_m$ is well-defined on $C^\sigma$ and so is the multiplication by $\frac{A}{\Xi-z_0}$, since the latter is a smooth function with bounded derivatives, as is the operator $P_0 (\psi) = \langle \psi, \psi_0\rangle \psi_0$: for the latter we just need to check that $\psi\overline{\psi_0}$ is integrable, which follows from the exponential decay of $\psi_0$, cf. Corollary \ref{c:decay}.}
Recall that $\frac{A}{\Xi-z_0}= \frac{A}{\Xi-\Xi (a)}$ is indeed a bounded smooth function (thanks to the structural assumptions on $\Xi$: in particular recall that $\Xi' (a)\neq 0$ and $A(a) =0$, which implies that $\frac{A}{\Xi-\Xi(a)}$ is in fact smooth at $a$). Moreover the function and its derivatives decay exponentially at $\pm \infty$. It follows therefore that $\psi \mapsto \mathcal{K}_{m_0} (\frac{A}{\Xi-z_0} \psi + P_0 (\psi))$ is a compact operator, both on $L^2$ and on $C^\sigma$. Thus $T$ is a Fredholm operator with index $0$. We thus just need to check that the kernel is $0$ in order to conclude that it is invertible with bounded inverse. In both cases we need to show that the equation
\begin{equation}\label{e:kernel-T}
-\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-\Xi (a)} \psi + P_0 (\psi) = 0
\end{equation}
has only the trivial solution. Observe that the kernel $V$ of the operator $\psi \mapsto -\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-\Xi (a)} \psi$ is $1$-dimensional by Lemma \ref{l:ODE2} and Corollary \ref{c:decay}. In particular $V$ is generated by $\psi_0$. Since the operator $P_0$ is the orthogonal projection onto $V$ and $-\frac{d^2}{dt^2} + m_0^2 + \frac{A}{\Xi-\Xi (a)}$ is self-adjoint, the kernel of $-\frac{d^2}{dt^2} + m_0^2 + \frac{A}{\Xi-\Xi (a)} + P_0$ in $L^2$ must be trivial.
In order to argue that the kernel is $0$ on $C^\sigma$ we apply a variation of the same idea: first we observe that if $\psi$ is a $C^\sigma$ solution of \eqref{e:kernel-T}, then $\frac{A}{\Xi-\Xi (a)} \psi + P_0 (\psi)$ is also in $C^\sigma$ and hence $\psi''\in C^\sigma$. Observe also that the operator is self-adjoint and thus we can assume that $\psi$ is real-valued. We then multiply both sides of \eqref{e:kernel-T} by $\psi_0$, integrate by parts and use the fact that $\psi_0$ is in the kernel of the self-adjoint operator $-\frac{d^2}{dt^2} + m_0^2 + \frac{A}{\Xi-\Xi (a)}$ to conclude that $\langle \psi, \psi_0\rangle =0$. But then
$\psi$ is a bounded solution of $-\frac{d^2\psi}{dt^2} + m_0^2 \psi + \frac{A}{\Xi-\Xi (a)}\psi =0$. Given that $\frac{A}{\Xi-\Xi (a)} \psi$ is the product of an exponentially decaying function and a bounded function, we conclude that $-\frac{d^2\psi}{dt^2} + m_0^2 \psi$ is an exponentially decaying function $f$. We thus have $\psi = \mathcal{K}_{m_0} (f) + C_1 e^{-m_0t} + C_2 e^{m_0t}$ for two constants $C_1$ and $C_2$. However $\mathcal{K}_{m_0} (f)$ decays exponentially at both $\pm \infty$ and thus, given that $\psi$ is bounded, we must have $C_1=C_2=0$. In particular $\psi$ decays exponentially at both $\pm \infty$ and so it is an $L^2$ function. But we already saw that every $L^2$ solution is trivial.
\end{proof}
\begin{lemma}\label{l:Rmz-small}
For every constant $\mu>0$ we define the cone $C_\mu := \{z: {\rm Im} z \geq \mu |{\rm Re}\, (z-z_0)|\}$. Then
\begin{equation}
\lim_{z\in C_\mu, (m,z)\to (m_0, z_0)} \|\mathcal{R}_{m,z}\|_O = 0\, ,
\end{equation}
where $\|L\|_O$ is the operator norm of $L$
when considered as a bounded operator from $L^2$ to $L^2$.
\end{lemma}
\begin{proof} Clearly, it suffices to show that
\begin{equation}
\lim_{z\in C_\mu, z\to z_0} \|\mathcal{K}_{m_0} \circ (A/(\Xi-z) - A/(\Xi-z_0))\|_O = 0\, .
\end{equation}
We can rewrite the operator as
\[
\psi \mapsto \mathcal{K}_{m_0} \left(\frac{A (z-z_0)}{(\Xi-z) (\Xi-z_0)} \psi\right) \, .
\]
First of all observe that the operators
\[
\psi \mapsto L_z (\psi) = \frac{A (z-z_0)}{(\Xi-z) (\Xi-z_0)} \psi
\]
are bounded in the operator norm uniformly in $z\in C_\mu$ by a constant $M$. Moreover, the adjoint operator, which is given by $L_z^* (\psi)= \frac{A (\bar z-z_0)}{(\Xi-\bar z) (\Xi-z_0)} \psi$, converges strongly to $0$ in $L^2$: indeed the functions $\frac{A (\bar z-z_0)}{(\Xi-\bar z) (\Xi-z_0)}$ are uniformly bounded and they converge to $0$ pointwise on $\mathbb R \setminus \{a\}$. We now use an argument entirely similar to that used in the proof of Lemma \ref{l:three}: given any $\varepsilon >0$ we fix the orthogonal projection $P_N$ onto a finite-dimensional subspace of $L^2$ with the property that $\|\mathcal{K}_{m_0}\circ P_N - \mathcal{K}_{m_0}\|_O$ is smaller than $\frac{\varepsilon}{2M}$. We then argue that for $|z-z_0|$ sufficiently small $P_N \circ L_z$ has operator norm smaller than $\frac{\varepsilon}{2}$. Having chosen an orthonormal basis $\psi_1, \ldots, \psi_N$ of the range of $P_N$, we recall that
\[
P_N (\psi)= \sum_i \langle \psi_i, \psi\rangle \psi_i\, .
\]
Therefore our claim amounts to showing that
\[
|\langle \psi_i, L_z (\psi)\rangle|\leq \frac{\varepsilon}{2N}
\]
for $z$ sufficiently close to $z_0$ and every $\psi$ with $\|\psi\|_{L^2}\leq 1$. For the latter we use
\[
|\langle \psi_i, L_z (\psi)\rangle| = |\langle L_z^* (\psi_i), \psi \rangle|\leq \|L_z^* (\psi_i)\|_{L^2}\, .
\]
The right-hand side converges to $0$ as $z\to z_0$ by the strong convergence of $L_z^*$ to $0$, which proves the claim and hence the lemma.
\end{proof}
\begin{proof}[Proof of Lemma \ref{l:solve-for-psi}]
We rewrite the equation that we want to solve as
\[
\psi + T^{-1} \circ \mathcal{R}_{m,z} (\psi) = T^{-1} \circ \mathcal{K}_{m_0} (\psi_0)\, .
\]
Note that $P_0(\psi_0)=\psi_0$. Furthermore, since $\mathcal K_{m_0}$ is, by definition, the inverse operator of $-\frac{\mathrm d^2}{\mathrm dt^2}+m_0^2\operatorname{Id}$,
\begin{equation*}
\mathcal K_{m_0}^{-1}\left(\psi_0+\mathcal K_{m_0}\left(\frac{A}{\Xi-z_0}\psi_0\right)\right) = -\psi_0''+m_0^2\psi_0+\frac{A}{\Xi-z_0}\psi_0 = 0.
\end{equation*}
Therefore,
\begin{equation*}
\psi_0+\mathcal K_{m_0}\left(\frac{A}{\Xi-z_0}\psi_0\right) = 0.
\end{equation*}
In combination with the definition of $T$ in \eqref{e:Lagrange-3}, we get
\begin{equation*}
T(\psi_0) = \psi_0+\mathcal K_{m_0}\left(\frac{A}{\Xi-z_0}\psi_0+\psi_0\right)=\mathcal K_{m_0}(\psi_0),
\end{equation*}
in other words,
\begin{equation}\label{e:T-1K}
T^{-1} \circ \mathcal{K}_{m_0} (\psi_0) = \psi_0\, .
\end{equation}
Therefore, \eqref{e:solve-for-psi} becomes
\begin{equation}\label{e:to-Neumann}
(\operatorname{Id} + T^{-1} \circ \mathcal{R}_{m,z}) (\psi) = \psi_0\, ,
\end{equation}
so the existence of a unique solution is guaranteed as soon as $\|T^{-1} \circ \mathcal{R}_{m,z}\|_{O} < 1$, which, by Lemma \ref{l:Rmz-small} and the boundedness of $T^{-1}$, holds when $z\in C_\mu$ and $(m,z)$ is sufficiently close to $(m_0, z_0)$.
\end{proof}
\begin{remark}\label{r:Neumann-series}
In the remaining part of the proof of Proposition \ref{p:5-7} we will take advantage of the representation of $\psi$ as a function of $\psi_0$ through the Neumann series coming from \eqref{e:to-Neumann}. More precisely, our proof of Lemma \ref{l:solve-for-psi} leads to the following representation:
\begin{equation}\label{e:Neumann-series}
\psi = \psi_0 - (T^{-1} \circ \mathcal{R}_{m,z}) (\psi_0) + \sum_{k=2}^\infty (-1)^k (T^{-1}\circ \mathcal{R}_{m,z})^k (\psi_0)\, .
\end{equation}
\end{remark}
\section{Proof of Proposition \ref{p:5-7}: Part II}\label{s:5-7-part-II}
We now complete the proof of Proposition \ref{p:5-7}. The positive parameter $\mu>0$ in Lemma \ref{l:solve-for-psi} will have to be chosen sufficiently small: its choice will be specified in a few paragraphs, while for the moment we assume it to be fixed. We set $m_0 = m_a$ and $z_0 = \Xi (a)$. Thus, for each $(m_0+h ,z)$ in a set
\[
U_{\delta, \mu} := \{|h|< \delta, |z-z_0|< \delta, {\rm Im} z > \mu |{\rm Re}\, (z-z_0)|\}
\]
we know that there is a solution $\psi = \psi (m_0+h,z)$ of \eqref{e:solve-for-psi} which moreover satisfies the expansion \eqref{e:Neumann-series}.
We then define the function
\begin{equation}
H (h,z) := \langle \psi (m_0+h,z), \psi_0\rangle\, ,
\end{equation}
and obviously we are looking for those $z$ which solve
\begin{equation}\label{e:what-we-want-to-do}
H (h,z) = 1\, .
\end{equation}
The main point of our analysis is the following
\begin{lemma}\label{l:will-apply-Rouche}
The function $H$ is holomorphic in $z$ and moreover
\begin{equation}\label{e:expansion}
H (h,z) = 1 - 2m_a h + c (a) (z-z_0) + o (|z-z_0| + |h|)
\end{equation}
where $c(a)$ is a complex number with ${\rm Im}\, c(a) > 0 $.
\end{lemma}
Given Lemma \ref{l:will-apply-Rouche}, consider now $\xi (h)$ which we obtain by solving $c(a) (\xi-z_0)= 2m_a h$, namely,
\[
\xi (h) = \frac{2m_a h}{c(a)} +z_0 = \frac{2m_a h}{|c(a)|^2} \overline{c(a)} +z_0\, .
\]
The idea behind the latter definition is that, if the term $o (|z-z_0| + |h|)$ vanished identically, $z = \xi (h)$ would be the solution of $H (h,z)=1$. Even though $o (|z-z_0| + |h|)$ does not vanish, we nonetheless expect that the solution $z$ of $H (h,z)=1$ is relatively close to $\xi (h)$.
Since ${\rm Im}\, c(a)>0$, $\xi (h)$ has positive imaginary part if $h<0$. In particular we have
\[
{\rm Im}\, \xi (h) \geq \gamma |h| \qquad \forall h < 0\, .
\]
where $\gamma$ is a positive constant. We then rewrite
\[
H (h, z) = 1 + c (a) (z-\xi (h)) + \underbrace{o (|\xi (h)-z_0| + |h|)}_{=: r(h)} + o (|z-\xi (h)|)\, .
\]
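For the reader's convenience we note that one admissible choice of $\gamma$ follows from the explicit formula for $\xi (h)$: indeed, for $h<0$,
\[
{\rm Im}\, \xi (h) = \frac{2m_a h}{|c(a)|^2}\, {\rm Im}\, \overline{c(a)} = \frac{2m_a |h|}{|c(a)|^2}\, {\rm Im}\, c(a)\, ,
\]
so that we may take $\gamma = \frac{2m_a\, {\rm Im}\, c(a)}{|c(a)|^2}$.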
Consider the disk $D_h := \{|z-\xi (h)| \leq 2 \beta |h|\}$, for a suitably chosen constant $\beta>0$. We will show below that, adjusting the constants $\mu$ and $\beta$ suitably, the disk is contained in the domain of the holomorphic function $H (h, \cdot)$. Leaving this aside for the moment, by Rouch\'e's theorem, if we choose $h$ sufficiently small the set $H(h, D_h)$ contains a disk of radius $|c(a)|\beta |h|$ centered at $1+ r (h)$. But then for $h$ sufficiently small we also have $|r(h)| \leq \frac{|c(a)|\beta |h|}{2}$ and so we conclude that $1\in H (h, D_h)$, namely that there is a point $z (h)$ in the disk $D_h$ which is mapped to $1$ by $H (h, \cdot)$. This would then complete the proof of Proposition \ref{p:5-7} if we were able to prove that ${\rm Im}\, z (h) >0$. We therefore need to show that $D_h$ is in the domain of $H (h, \cdot)$, namely,
\[
{\rm Im}\, z \geq \mu |{\rm Re}\, (z-z_0)|\, \qquad \forall z\in D_h\, .
\]
We first estimate
\[
{\rm Im}\, z \geq {\rm Im}\, \xi (h) - 2 \beta |h| \geq (\gamma- 2 \beta) |h|\, .
\]
Then
\begin{equation}\label{e:inequality-101}
|{\rm Re}\, (z-z_0)| \leq |\xi (h)-z_0| + |z-\xi (h)| \leq (|c(a)| + 2 \beta) |h|\, .
\end{equation}
We thus conclude that
\begin{equation}\label{e:inequality-102}
{\rm Im}\, z(h) \geq \frac{\gamma-2\beta}{|c(a)|+2\beta} |{\rm Re}\, (z (h)-z_0)|\, .
\end{equation}
Thus it suffices to choose $\beta = \frac{\gamma}{3}$ and $\mu = \frac{\gamma}{3 |c(a)|+2\gamma}$. This guarantees at the same time the existence of a solution and the fact that $z (h)$ has positive imaginary part when $h<0$ (which results from combining \eqref{e:inequality-101} and \eqref{e:inequality-102}).
In order to complete the proof of Proposition \ref{p:5-7} we therefore just need to show Lemma \ref{l:will-apply-Rouche}.
\begin{proof}[Proof of Lemma \ref{l:will-apply-Rouche}]
In order to show holomorphicity we just need to show that, for each fixed $z$,
\[
z\mapsto \sum_{k=0}^\infty (- T^{-1} \circ \mathcal{R}_{m,z})^k
\]
is holomorphic. Since the series converges in the operator norm, it suffices to show that each map $z\mapsto (-T^{-1} \circ \mathcal{R}_{m,z})^k$ is holomorphic for every $k$, for which indeed it suffices to show that $z\mapsto \mathcal{R}_{m,z}$ is holomorphic. This is however obvious from the explicit formula. We therefore now come to the Taylor expansion \eqref{e:expansion}.
\medskip
{\bf Step 1.} We will show here that
\begin{equation}\label{e:small-in-Csigma}
\|\mathcal{R}_{m_0+h,z}\|_{\mathcal{L} (C^\sigma)} \leq C (\sigma) (|h| + |z-z_0|)\,
\end{equation}
for every $\sigma\in ]0,1[$, where $\|L\|_{\mathcal{L} (C^\sigma)}$ is the operator norm of a bounded linear operator $L$ on $C^{\sigma}$.
The estimate will have the following consequence. First of all using \eqref{e:Neumann-series} and $\|\psi_0\|_{L^2}^2 =1$ we expand
\begin{equation}\label{e:Taylor-2}
H (h,z) = 1 - \langle T^{-1} \circ \mathcal{R}_{m_0+h,z} (\psi_0), \psi_0\rangle
+ \underbrace{\sum_{k=2}^\infty \langle (-T^{-1} \circ \mathcal{R}_{m_0+h,z})^k (\psi_0), \psi_0\rangle}_{=: R_1 (z,h)}\, .
\end{equation}
Hence using \eqref{e:small-in-Csigma} we estimate
\begin{align}
|R_1 (z,h)| & \leq \sum_{k=2}^\infty \|(-T^{-1} \circ \mathcal{R}_{m_0+h,z})^k (\psi_0)\|_\infty \|\psi_0\|_{L^1}\nonumber\\
&\leq C \sum_{k=2}^\infty (\|T^{-1}\|_{\mathcal{L} (C^\sigma)} \|\mathcal{R}_{m_0+h,z}\|_{\mathcal{L} (C^\sigma)})^k \|\psi_0\|_{C^\sigma}\|\psi_0\|_{L^1} = o (|h|+|z-z_0|)\, ,\label{e:resto-1}
\end{align}
for some fixed $\sigma$.
In order to show \eqref{e:small-in-Csigma} we write
\[
\mathcal{R}_{m_0+h,z} (\psi) = (z-z_0) \mathcal{K}_{m_0} \left(\frac{1}{\Xi-z} \left(\frac{A}{\Xi-z_0} \psi\right)\right) + (2m_0 h +h^2) \mathcal{K}_{m_0} (\psi)\, .
\]
Since $\frac{A}{\Xi-z_0}$ is smooth, it suffices to show that the operators $B_z:= \mathcal{K}_{m_0} \circ \frac{1}{\Xi-z}$ are uniformly bounded in $\mathcal{L} (C^\sigma)$. We first fix a smooth cut-off function $\varphi \in C^\infty_c (]a-2, a+2[)$ which equals $1$ on $[a-1,a+1]$ and write
\[
B_z= B_z^1 + B_z^2
:= \mathcal{K}_{m_0} \circ \left(\frac{1-\varphi}{\Xi-z}\right)+\mathcal{K}_{m_0} \circ \left(\frac{\varphi}{\Xi-z} \right)\, .
\]
But since $(1-\varphi)/(\Xi-z)$ enjoys a uniform bound in $C^k$, it is easy to conclude that $\|B^1_z\|_{\mathcal{L} (C^\sigma)}$ is bounded uniformly in $z$. We thus need to bound
\begin{align*}
B^2_z (\psi) (t) &= \frac 1{2m_0}\int e^{-m_0 |t-s|} \frac{\varphi (s)}{\Xi (s) -z} \psi (s)\, ds\, .
\end{align*}
We first bound $\|B^2_z (\psi)\|_{L^\infty}$. We write $z= x+iy$ and, since $x$ is close to $\Xi (a)$, we select the only $a'$ such that $\Xi (a')=x$ and write
\begin{align*}
B^2_z (\psi) (t) &= \frac 1{2m_0}\underbrace{\int e^{-m_0 |t-s|} \frac{\varphi (s) (\psi (s)-\psi (a'))}{(\Xi (s) -\Xi (a')) - iy} \, ds}_{=: I_1 (t)}
+ \frac {\psi(a')}{2m_0}\underbrace{\int e^{-m_0 |t-s|} \frac{\varphi (s)}{\Xi (s) -z}\, ds}_{=: I_2 (t)}
\end{align*}
Writing $\frac{1}{\Xi -z} = \frac{1}{\Xi'}\frac{d}{dt} \ln (\Xi -z)$ we can integrate by parts to get
\begin{align*}
I_2 (t) &= - \underbrace{\int m_0 \frac{t-s}{|t-s|} e^{-m_0 |t-s|} (\Xi' (s))^{-1} \ln (\Xi (s)-z) \varphi (s)\, ds}_{=:I_{2,1} (t)}\\
&\qquad -
\underbrace{\int e^{-m_0 |t-s|} \ln (\Xi (s) -z) \frac{d}{ds} ((\Xi')^{-1} \varphi) (s)\, ds}_{=: I_{2,2} (t)}
\end{align*}
and use the uniform bound for $\ln (\Xi (s)-z)$ in $L^1 ([a-2,a+2])$ to conclude that $|I_{2,1}|$ and $|I_{2,2}|$ are both bounded uniformly. As for $I_1$, note that, on any compact interval $K$ around $a'$, we have, since $\Xi'$ is continuous and $\Xi'<0$,
\begin{equation*}
C(K):=\inf_{x\in K} |\Xi'(x)| = -\max_{x\in K} \Xi'(x) >0.
\end{equation*}
Therefore by the mean value theorem, for all $s\in K$, there exists a $\iota = \iota(s)\in K$ such that
\begin{equation*}
\abs{\Xi(s)-\Xi(a')-iy}\geq \abs{\Xi(s)-\Xi(a')}= \abs{s-a'}\abs{ \Xi'(\iota)}\ge \abs{s-a'} C(K).
\end{equation*}
By the definition of the Hölder semi-norm, we thus have, for all $s\in K$,
\[
\left|\frac{\psi (s)- \psi (a')}{\Xi (s) - \Xi (a') - iy}\right| \leq \frac{\|\psi\|_{C^\sigma}}{C(K)|s-a'|^{1-\sigma}},
\]
which is integrable. Furthermore, outside of $K$ the integrand of $I_1$ is bounded and decays exponentially, therefore one can uniformly bound $I_1$.
We next wish to bound the seminorm
\[
[B^2_z (\psi)]_\sigma:= \sup_{t\neq t'} \frac{|B^2_z (\psi) (t) - B^2_z (\psi) (t')|}{|t-t'|^\sigma}\, .
\]
We write
\[
B^2_z (\psi) (t) - B^2_z (\psi) (t') = \frac{1}{2m_0}\left(I_1 (t) - I_1 (t')\right) + \frac{\psi (a')}{2m_0} \left(I_2 (t) - I_2 (t')\right)\, .
\]
Using that $|e^{-m_0 |t-s|} - e^{-m_0 |t'-s|}|\leq C |t-t'|$ we can bound
\[
|I_1 (t) - I_1 (t')| \leq C |t-t'| \int |\varphi (s)| \frac{|\psi (s)-\psi (a')|}{|\Xi (s) - \Xi (a')|}\, ds
\leq C \|\psi\|_{C^\sigma} |t-t'|\, .
\]
Similarly we can write
\[
|I_{2,2} (t) - I_{2,2} (t')| \leq C |t-t'| \int \left|\ln (\Xi (s) -z) \frac{d}{ds} ((\Xi')^{-1} \varphi) (s)\right|\, ds \leq C |t-t'|\, .
\]
Next denoting the function $(\Xi' (s))^{-1} \varphi (s) \ln (\Xi (s) -z)$ by $B (s)$ we assume $t> t'$ and write further
\begin{align*}
I_{2,1} (t) - I_{2,1} (t')&= m_0 \Bigg(\underbrace{\int_t^\infty e^{-m_0 (s-t)} B(s)\, ds - \int_{t'}^\infty e^{-m_0(s-t')} B(s)\, ds}_{=: J_+(t,t')}\Bigg)\\
&\qquad - m_0 \Bigg(\underbrace{\int_{-\infty}^t e^{-m_0 (t-s)} B(s)\, ds - \int_{-\infty}^{t'} e^{-m_0 (t'-s)} B(s)\, ds}_{=: J_- (t,t')}\Bigg)\, .
\end{align*}
Then we choose $p=\frac{1}{\sigma}$, let $p'$ be the dual exponent and estimate
\begin{align*}
|J_+ (t,t')| &\leq C |t-t'| \int_t^\infty |B(s)|\, ds + \int_{t'}^t |B (s)|\, ds\\
&\leq C |t-t'| \|B\|_{L^1} + |t-t'|^\sigma \|B\|_{L^{p'}}\, .
\end{align*}
A similar estimate for $J_- (t,t')$ finally shows the existence of a constant $C$ such that
\[
|B^2_z (\psi) (t) - B^2_z (\psi) (t')|\leq C \|\psi\|_{C^\sigma} \left(|t-t'|+|t-t'|^\sigma\right)\, .
\]
Clearly this implies
\[
|B^2_z (\psi) (t) - B^2_z (\psi) (t')|\leq C \|\psi\|_{C^\sigma} |t-t'|^\sigma \qquad \mbox{if $|t-t'|\leq 1$.}
\]
On the other hand we can trivially bound
\[
|B^2_z (\psi) (t) - B^2_z (\psi) (t')| \leq 2 \|B^2_z (\psi)\|_\infty \leq C \|\psi\|_{C^\sigma} |t-t'|^\sigma
\quad\mbox{if $|t-t'|\geq 1$.}
\]
\medskip
{\bf Step 2.} In this second step we compute
\begin{align*}
\langle T^{-1} \mathcal{R}_{m,z} (\psi_0), \psi_0\rangle &=
\langle T^{-1} \circ \mathcal{K}_{m_0} \left(A ((\Xi-z)^{-1} - (\Xi-z_0)^{-1})\psi_0\right), \psi_0\rangle\\
&\qquad
+ (2m_0 h + h^2) \langle T^{-1} \circ \mathcal{K}_{m_0} (\psi_0), \psi_0\rangle\, .
\end{align*}
Recalling \eqref{e:T-1K} (and using that both $T^{-1}$ and $\mathcal{K}_{m_0}$ are self-adjoint) we rewrite the expression as
\begin{align}
\langle T^{-1} \mathcal{R}_{m,z} (\psi_0), \psi_0\rangle &=
(z-z_0) \langle T^{-1}\circ \mathcal{K}_{m_0} \big( A (\Xi-z)^{-1} (\Xi-z_0)^{-1} \psi_0\big), \psi_0\rangle + 2m_a h + h^2\nonumber\\
&= (z-z_0) \langle A (\Xi-z)^{-1} (\Xi-z_0)^{-1} \psi_0, T^{-1} \circ \mathcal{K}_{m_0} (\psi_0)\rangle + 2m_a h + h^2\nonumber\\
&= (z-z_0) \underbrace{\langle A (\Xi-z)^{-1} (\Xi -z_0)^{-1} \psi_0, \psi_0 \rangle}_{=: G (z)} + 2m_a h + h^2\label{e:Taylor-3}\, .
\end{align}
We thus want to show that the following limit exists and to compute its imaginary part:
\[
- c (a) := \lim_{{\rm Im}\, z >0, z\to \Xi (a)} G (z) =
\lim_{{\rm Im}\, z >0, z\to \Xi (a)} \int \frac{1}{\Xi (s)-z} |\psi_0 (s)|^2 \frac{A(s)}{\Xi (s) - \Xi (a)}\, ds\, .
\]
Observe indeed that inserting $G(z) = - c(a) + o (1)$ in \eqref{e:Taylor-3} and taking into account \eqref{e:Taylor-2} and \eqref{e:resto-1} we conclude that \eqref{e:expansion} holds.
In order to compute $c(a)$ we observe first that the function $\phi (s) := |\psi_0 (s)|^2 \frac{A(s)}{\Xi (s) - \Xi (a)}$ is smooth and decays exponentially. We thus rewrite
\[
G (z) = \int \frac{1}{\Xi (s)-z} \phi (s)\, ds\, .
\]
Next we decompose $z$ into its real and imaginary part as $z = x + iy$ and observe that
\begin{align*}
\lim_{{\rm Im}\, z >0, z\to \Xi (a)} {\rm Re}\, G (z) &= \lim_{x\to \Xi(a), y \downarrow 0} \int \frac{\Xi (s)-x}{(\Xi (s)-x)^2 + y^2} \phi (s)\, ds
\end{align*}
Here we are only interested in showing that the limit exists and we thus fix a cut-off function $\varphi\in C^\infty_c (]a-2, a+2[)$, identically $1$ on $[a-1, a+1]$ and split the integral into
\[
{\rm Re}\, G (z) = \int \frac{\Xi (s)-x}{(\Xi (s)-x)^2 + y^2} \phi (s) \varphi (s)\, ds +
\int \frac{\Xi (s)-x}{(\Xi (s)-x)^2 + y^2} \phi (s) (1-\varphi (s))\, ds\, .
\]
The second integral has a limit, while in order to show that the first has a limit we write
\[
\frac{\Xi (s)-x}{(\Xi (s)-x)^2 + y^2} = \frac{1}{2\Xi' (s)} \frac{d}{ds} \ln ((\Xi(s)-x)^2 + y^2)\, .
\]
We then integrate by parts and use the fact that $\ln ((\Xi (s)-x)^2 +y^2)$ converges to $2 \ln |\Xi(s)-\Xi (a)|$ strongly in $L^q ([a-2,a+2])$ for every $q$ to infer the existence of the limit of the first integral.
As for the imaginary part we write instead
\begin{align}\label{e:arctan-integral}
\lim_{{\rm Im}\, z >0, z\to \Xi (a)} {\rm Im}\, G (z) &= \lim_{x\to \Xi(a), y \downarrow 0} \int \frac{y}{(\Xi (s)-x)^2 + y^2} \phi (s)\, ds\, .
\end{align}
We wish to show that the latter integral converges to
\begin{equation}\label{e:arctan-integral-2}
I = \phi (a) \int \frac{ds}{(\Xi' (a))^2 s^2 +1} = \frac{\pi \phi (a)}{|\Xi' (a)|}\, .
\end{equation}
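For the reader's convenience we recall the elementary computation behind the second equality in \eqref{e:arctan-integral-2}: after the change of variables $u = |\Xi' (a)| s$,
\[
\int_{-\infty}^\infty \frac{ds}{(\Xi' (a))^2 s^2 +1} = \frac{1}{|\Xi' (a)|} \int_{-\infty}^\infty \frac{du}{u^2+1} = \frac{\pi}{|\Xi' (a)|}\, .
\]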
On the other hand $\phi (a) = |\psi_0 (a)|^2 A' (a) (\Xi' (a))^{-1}$.
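The latter identity can be justified as follows: since $A (a) = 0$ while $\Xi' (a) \neq 0$,
\[
\phi (a) = \lim_{s\to a} |\psi_0 (s)|^2\, \frac{A (s) - A (a)}{\Xi (s) - \Xi (a)} = |\psi_0 (a)|^2\, \frac{A' (a)}{\Xi' (a)}\, .
\]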
Since $A' (a) > 0$ and $\Xi' (a)<0$, we conclude that $ c (a)$ exists and it is a complex number with positive imaginary part, which completes the proof of the lemma.
It remains to show the convergence of \eqref{e:arctan-integral} to \eqref{e:arctan-integral-2}. First observe that for each $x$ sufficiently close to $\Xi (a)$ there is a unique $a' = \Xi^{-1} (x)$ such that $\Xi (a')=x$. Changing variables ($s$ becomes $a'+s$), the integral in \eqref{e:arctan-integral} becomes
\begin{equation}
\int \frac{y}{(\Xi (a'+s)-x)^2 + y^2} \phi (a'+s)\, ds\,
\end{equation}
and we wish to show that its limit is $I$ as $(a',y)\to (a,0)$.
Next, fix any $\delta>0$ and observe that
\[
\lim_{y\to 0} \int_{|s|\geq \delta} \frac{y}{(\Xi (a'+s)-x)^2 + y^2} \phi (a'+s)\, ds=0
\]
uniformly in $a' \in [a-1, a+1]$. We therefore define
\[
I (\delta, a', y) := \int_{-\delta}^\delta \frac{y}{(\Xi (a'+s)-x)^2 + y^2} \phi (a'+s)\, ds
\]
and we wish to show that, for every $\varepsilon >0$ there is a $\delta>0$ such that
\begin{equation}\label{e:arctan-integral-3}
\limsup_{(a',y) \to (a,0),\, y>0} \left| I (\delta, a', y) - I\right| \leq C \varepsilon\, ,
\end{equation}
where $C$ is a geometric constant.
We rewrite
\[
I (\delta, a', y) = \int_{-\delta y^{-1}}^{\delta y^{-1}} \frac{\phi (a'+ys)}{y^{-2} (\Xi (a' + ys) - \Xi (a'))^2 +1}\, ds\, .
\]
Fix now $\varepsilon$ and observe that, since $\Xi'$ and $\phi$ are continuous, if $\delta$ is chosen sufficiently small, then
\begin{align}
&((\Xi' (a))^2 - \varepsilon^2) s^2 \leq y^{-2} (\Xi (a' + ys) - \Xi (a'))^2 \leq ((\Xi' (a))^2 + \varepsilon^2) s^2\\
& |\phi (a' + ys) - \phi (a)| \leq \varepsilon\, .
\end{align}
for all $|a'-a|<\delta$ and $y |s| \leq \delta$. Choosing $\varepsilon>0$ so that $\varepsilon \leq \frac{|\Xi' (a)|}{2}$ we easily see that, when $|a'-a| < \delta$, we have
\[
\left|I (\delta, a', y) - \phi (a) \int_{-\delta y^{-1}}^{\delta y^{-1}} \frac{ds}{(\Xi' (a))^2 s^2 +1}\right| \leq C \varepsilon\, .
\]
In particular, as $y\downarrow 0$, we conclude \eqref{e:arctan-integral-3}.
\end{proof}
\section{Proof of Proposition \ref{p:almost-final}}
We reduce the proof of Proposition \ref{p:almost-final} to the following lemma.
\begin{lemma}\label{l:almost-final-2}
Consider $G:= \{m \in \, ]1, \infty[ \, \setminus \{m_a, m_b\} : \mathscr{U}_m \neq \emptyset\}$. Then $G$ is relatively open and relatively closed in $]1, \infty[\setminus \{m_a, m_b\}$.
\end{lemma}
Proposition \ref{p:almost-final} is an obvious consequence of the latter lemma and of Proposition \ref{p:5-7}: Lemma \ref{l:almost-final-2} implies that $G$ is the union of connected components of $]1, \infty[\setminus \{m_a, m_b\}$. On the other hand the connected component $]m_b, m_a[$ intersects $G$ because of Proposition \ref{p:5-7} and thus it is contained in $G$.
We thus complete the proof of Proposition \ref{p:almost-final} by showing Lemma \ref{l:almost-final-2}.
\begin{proof}[Proof of Lemma \ref{l:almost-final-2}] We start with some preliminary considerations. Fix an interval $[c,d]\subset ]1, \infty[\setminus \{m_a, m_b\}$.
Recalling Proposition \ref{p:all-m} we know that, since the operator norm of $\mathcal{L}_m$ is bounded uniformly in $m\in [c,d]$,
\begin{itemize}
\item[(a)] There is $R>0$ such that $\mathscr{U}_m\subset B_R (0)$ for all $m\in [c,d]$.
\end{itemize}
However it also follows from Proposition \ref{p:3+4} that
\begin{itemize}
\item[(b)] There is a $\delta >0$ such that $\mathscr{U}_m\subset \{{\rm Im}\, z > \delta\}$ for all $m\in [c,d]$.
\end{itemize}
\medskip
{\bf Step 1.} We first prove that $G$ is relatively closed. To that end we fix a sequence $m_j \to m\in ]1, \infty[\setminus \{m_a, m_b\}$ such that $m_j$ belongs to $G$. Without loss of generality we can assume $\{m_j\}\subset [c,d]\subset ]1, \infty[\setminus \{m_a, m_b\}$. For each $m_j$ we can then consider $z_j\in \mathscr{U}_{m_j}$, which by (a) and (b) we can assume to converge to some $z\in \mathbb C$ with positive imaginary part. We then let $\psi_j$ be a sequence of nontrivial elements in $L^2$ such that
\begin{equation}\label{e:eigenvalue-equation-21}
-\psi_j'' + m_j^2 \psi_j + \frac{A}{\Xi -z_j} \psi_j = 0\, ,
\end{equation}
and normalize them so that $\|\psi_j\|_{L^2}=1$.
Since ${\rm Im}\, z_j \geq \delta >0$, the sequence of functions $\frac{A}{\Xi-z_j}$ enjoys uniform bounds in the spaces $L^1$ and $C^k$. We can then argue as in Section \ref{s:3+4} to find that
\begin{itemize}
\item[(i)] $\|\psi_j'\|_{L^2}$ enjoy a uniform bound;
\item[(ii)] There are uniformly bounded nonzero constants $\{C^\pm_j\}$ with the property that $\psi_j$ is asymptotic to $C^\pm_j e^{\mp m_j t}$ at $\pm \infty$;
\item[(iii)] There is a $T_0>0$ independent of $j$ with the property that
\[
|\psi_j (t) - C^\pm_j e^{\mp m_j t}| \leq \frac{|C^\pm_j|}{2} e^{\mp m_j t} \qquad \forall \pm t > T_0\, .
\]
\end{itemize}
These three properties together imply that a subsequence, not relabeled, converges strongly in $L^2$ to some $\psi$. Passing to the limit in \eqref{e:eigenvalue-equation-21} we conclude that
\[
-\psi'' + m^2 \psi + \frac{A}{\Xi-z} \psi = 0\, .
\]
This shows that $z\in \mathscr{U}_m$, i.e. that $m \in G$.
\medskip
{\bf Step 2.} Here we show that $G$ is relatively open. To that end we consider some sequence $m_j \to m \in ]1, \infty[\setminus \{m_a, m_b\}$ with the property that $m_j \not\in G$ and we show that $m\not \in G$. By (a) and (b) above, it suffices to show that the domain
\[
\Delta := \{|z|< R : {\rm Im}\, z > \delta\}
\]
does not contain any element of ${\rm spec}\, m^{-1} \mathcal{L}_m$. Observe first that, since the spectrum does not intersect $\gamma = \partial \Delta$, the distance between $\gamma$ and any element of ${\rm spec}\, m^{-1} \mathcal{L}_m$ is larger than a positive constant $\varepsilon$. Recalling that the part of the spectrum in the upper half plane is discrete, we have that
\[
P_m := -\frac{1}{2\pi i}\int_\gamma (m^{-1} \mathcal{L}_m -z)^{-1}\, dz
\]
is a projection on a finite-dimensional space which contains all eigenspaces of the elements $z\in {\rm spec}\, m^{-1} \mathcal{L}_m\cap \Delta = \mathscr{U}_m$. And since all such elements belong to the discrete spectrum, $\mathscr{U}_m = \emptyset$ if and only if $P_m = 0$. On the other hand
\[
P_{m_j} := -\frac{1}{2\pi i}\int_\gamma (m_j^{-1} \mathcal{L}_{m_j} -z)^{-1}\, dz
\]
equals $0$ precisely because $m_j \not \in G$. We thus just need to show that $P_{m_j}$ converges to $P_m$: this implies $P_m = 0$ and hence that $m\not\in G$. The latter convergence follows from the following observations:
\begin{itemize}
\item[(i)] Since $\gamma$ is a compact set and does not intersect the spectrum of $m^{-1} \mathcal{L}_m$, there is a constant $M$ such that $\|(m^{-1} \mathcal{L}_m -z)^{-1}\|_O \leq M$ for all $z\in \gamma$;
\item[(ii)] $m_j^{-1} \mathcal{L}_{m_j}$ converges to $m^{-1} \mathcal{L}_m$ in the operator norm;
\item[(iii)] Writing
\[
(m_j^{-1} \mathcal{L}_{m_j} - z)^{-1} = ({\rm Id} + (m^{-1} \mathcal{L}_m -z)^{-1} (m_j^{-1} \mathcal{L}_{m_j} - m^{-1} \mathcal{L}_m))^{-1}(m^{-1} \mathcal{L}_m - z)^{-1}\, ,
\]
when $\|m_j^{-1} \mathcal{L}_{m_j} - m^{-1} \mathcal{L}_m\|_{O} \leq \frac{1}{2M}$ we can use the Neumann series for the inverse to infer
\[
\sup_{z\in \gamma} \|(m_j^{-1} \mathcal{L}_{m_j} -z)^{-1} - (m^{-1} \mathcal{L}_m -z)^{-1}\|_O \leq C \|m^{-1} \mathcal{L}_m - m_j^{-1} \mathcal{L}_{m_j}\|_O\, ,
\]
for some constant $C$ independent of $j$.
\end{itemize}
We then conclude that $P_{m_j}$ converges to $P_m$ in the operator norm.
\end{proof}
\begin{remark}\label{rmk:algebraic dim const}
An immediate outcome of the argument above is that the sum of the algebraic multiplicities of $z\in \mathscr{U}_m$, as eigenvalues of $m^{-1} \mathcal{L}_m$, is constant on any connected component of $]1, \infty[\setminus \{m_a, m_b\}$. Indeed, it coincides with the rank of the operator $P_m$ defined in Step 2.
\end{remark}
\section{Proof of Lemma \ref{l:bottom}}\label{s:choice-of-A}
Rather than looking for a suitable $\Xi$ we will write $G := \Xi' + 2\Xi$ and look for the latter function after expressing
\[
\Xi (t) := \int_{-\infty}^t e^{-2(t-\tau)} G (\tau)\, d\tau\, .
\]
To check that the above formula recovers $\Xi$ under our assumptions, observe first that the integral on the right-hand side solves the ODE
\[
u' + 2 u = G
\]
by the classical solution formula for first order ODEs with constant coefficients, and so does $\Xi$ by the very definition of $G$. It thus suffices to show that the integral and $\Xi$ coincide in a neighborhood of $-\infty$. To that end observe that $\Xi (t) = \Xi (-\infty) - c_0 e^{2t}$ for any sufficiently negative $t$ and thus
\[
G (t) = 2\Xi (-\infty) - 4c_0 e^{2t}\, ,
\]
so that, for any such $t$,
\[
\Xi (t) = e^{-2t} \int_{-\infty}^t (2\Xi (-\infty) e^{2\tau} - 4c_0 e^{4\tau})\, d\tau =
\Xi (-\infty) - c_0 e^{2t}\, .
\]
We next read the conditions $\Xi\in \mathscr{C}$ in terms of $G$ to find that they are
\begin{itemize}
\item[(i)] $G (t) = 2 \Xi (-\infty) - 4 c_0 e^{2t}$ for all $t$ sufficiently negative;
\item[(ii)] $G (t) = e^{-{\bar\alpha} t}$ for all $t\geq \ln 2$;
\item[(iii)] There are exactly two zeros $a<b$ of $G'$ and $G'' (a)>0$, $G'' (b)<0$;
\item[(iv)] $\int_{-\infty}^t e^{-2(t-\tau)} G' (\tau) d\tau < 0$ for every $t$.
\end{itemize}
The conditions (i), (ii), and (iii) are obviously equivalent to the corresponding ones in Definition \ref{d:class-C}. As for (iv), we just need to check the formula
\[
\Xi' (t) = \int_{-\infty}^t e^{-2(t-\tau)} G' (\tau)\, d\tau\, .
\]
Arguing as above, the solution formula for first order ODEs with constant coefficients shows that the two sides of the above identity can differ at most by a multiple of $e^{-2t}$, while a direct verification using (i) shows that the two sides coincide for sufficiently negative $t$'s.
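For the sake of completeness we carry out the verification for sufficiently negative $t$: by (i) we have $G' (\tau) = - 8 c_0 e^{2\tau}$ there, so that
\[
\int_{-\infty}^t e^{-2(t-\tau)} G' (\tau)\, d\tau = - 8 c_0 e^{-2t} \int_{-\infty}^t e^{4\tau}\, d\tau = - 2 c_0 e^{2t} = \Xi' (t)\, ,
\]
where the last equality follows from $\Xi (t) = \Xi (-\infty) - c_0 e^{2t}$.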
We next can read all the above conditions in terms of $A$, more precisely it suffices to impose
\begin{itemize}
\item[(i')] $A (t) = - 8 c_0 e^{2t}$ for all $t$ sufficiently negative;
\item[(ii')] $A(t) = -{\bar\alpha} e^{-{\bar\alpha} t}$ for all $t\geq \ln 2$;
\item[(iii')] There are exactly two zeros $a<b$ of $A$ and $A' (a) >0$, $A' (b)<0$;
\item[(iv')] $\int_{-\infty}^t e^{-2(t-\tau)} A(\tau) d\tau <0$ for every $t$.
\end{itemize}
In fact, assuming the four conditions above we easily recover $G$ by setting
\[
G (t) := - \int_t^\infty A(\tau)\, d\tau\, .
\]
Note in passing that since (i'), (ii'), (iii'), and (iv') imply (i), (ii), (iii), and (iv), which in turn imply the corresponding conditions in Definition \ref{d:class-C}, we derive
\[
\Xi (-\infty) = \frac{1}{2} G (-\infty) = - \frac{1}{2} \int_{-\infty}^\infty A(\tau)\, d\tau\, .
\]
In turn, since $\Xi (\infty) = 0$ and $\Xi'<0$, the latter equality implies
\[
\int_{-\infty}^\infty A (\tau)\, d\tau < 0\, .
\]
We next fix $a=0$ and $b=\frac{1}{2}$ and rather than imposing (iv') we impose the two conditions
\begin{itemize}
\item[(v')] $\int_{-\infty}^0 e^{2\tau} A (\tau) d\tau = -1$;
\item[(vi')] $\max A \leq \frac{1}{e}$.
\end{itemize}
Observe, indeed, that (iv') is equivalent to
\[
\int_{-\infty}^t e^{2\tau} A (\tau) \, d\tau < 0
\]
and that, since $A$ is negative on $]-\infty, 0[$ and $]\frac{1}{2}, \infty[$, the integral on the left-hand side is maximal for $t=\frac{1}{2}$. We then can use (v') and (vi') to estimate
\[
\int_{-\infty}^{\frac{1}{2}} e^{2\tau} A(\tau)\,d \tau \leq -1 + \frac{e}{2} \max A \leq -\frac{1}{2}\, .
\]
We next recall that, by the Rayleigh criterion,
\[
- \lambda_a = \min_{\|\psi\|_{L^2} = 1} \langle \psi, L_a \psi\rangle = \min_{\|\psi\|_{L^2} = 1} \int \left(|\psi'|^2 + \frac{A}{\Xi-\Xi (0)} |\psi|^2 \right)\, .
\]
We test the right-hand side with
\[
\psi (t) :=
\left\{
\begin{array}{ll}
0 \qquad & \mbox{for $|t|\geq \frac{1}{2}$}\\ \\
\sqrt{2} \cos (\pi t) \qquad &\mbox{for $|t|\leq \frac{1}{2}$.}
\end{array}\right.
\]
We therefore get
\begin{equation}\label{e:bottom-est-1}
- \lambda_a \leq 2 \pi^2 + 2 \int_{-1/2}^{1/2} \frac{A (t)}{\Xi (t) - \Xi (0)} \cos^2 \pi t\, dt\, .
\end{equation}
Next, for any fixed constant $B>0$, we impose that $A (t) = B t$ on the interval $]- \sqrt{B}^{-1}, 0]$ and we then continue it smoothly on $[0, \infty[$ so as to satisfy (ii'), (iii'), and (vi') on $[0, \infty[$ (the verification that this is possible is rather simple). We also can continue it smoothly on $]-\infty, - \sqrt{B}^{-1}]$ so as to ensure (i'). In order to ensure (v') as well we just need to show that
\[
\int_{-\sqrt{B}^{-1}}^0 e^{2\tau} A(\tau)\, d\tau \geq -\frac{1}{2}\, .
\]
The latter is certainly ensured by
\[
\int_{- \sqrt{B}^{-1}}^0 e^{2\tau} A(\tau)\, d\tau \geq \int_{-\sqrt{B}^{-1}}^0 A(\tau)\, d\tau = -\frac{1}{2}\, .
\]
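Indeed, the last equality is the elementary computation
\[
\int_{-\sqrt{B}^{-1}}^0 A (\tau)\, d\tau = \int_{-\sqrt{B}^{-1}}^0 B \tau\, d\tau = \frac{B}{2} \left(0 - \frac{1}{B}\right) = - \frac{1}{2}\, ,
\]
while the first inequality uses $e^{2\tau} \leq 1$ and $A \leq 0$ on the interval of integration.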
Now, observe that $\Xi' (0) = \int_{-\infty}^0 e^{2\tau} A (\tau) \, d\tau = -1$. For $t\in \, ]-\sqrt{B}^{-1}, 0[$ we wish to estimate $\Xi' (t)$ and to do it we compute
\begin{align*}
|\Xi' (t)- \Xi' (0)| & = \left|e^{-2t}\int_{-\infty}^t e^{2\tau} A (\tau)\, d\tau - \int_{-\infty}^0 e^{2\tau} A(\tau)\, d\tau\right|\\
&\leq e^{-2t} \left|\int_t^0 e^{2\tau} A(\tau)\, d\tau\right| + (e^{-2t}-1) \left|\int_{-\infty}^0 e^{2\tau} A(\tau)\, d\tau\right|\\
&\leq \frac{e^{2\sqrt{B}^{-1}}}{2} + (e^{2\sqrt{B}^{-1}}-1) \leq \frac{3}{4}\, ,
\end{align*}
which can be ensured by taking $B$ sufficiently large. In particular $-\frac{1}{4} \geq \Xi' (t) \geq -2 $ for $t\in \, ]-\sqrt{B}^{-1}, 0[$. We thus conclude that
\[
-\frac{t}{4} \leq \Xi (t) - \Xi (0) \leq -2t \qquad \forall t\in \, ]-\sqrt{B}^{-1}, 0[\, .
\]
In turn the latter can be used to show
\[
\frac{A (t)}{\Xi (t) - \Xi(0)} \leq - \frac{B}{2} \qquad \forall t\in \, ]-\sqrt{B}^{-1}, 0[\, .
\]
Since $\frac{A}{\Xi - \Xi (0)}$ is otherwise negative on $]-\frac{1}{2}, \frac{1}{2}[$, we conclude
\begin{equation}\label{e:bottom-est-2}
- \lambda_a \leq 2\pi^2 - 2 \int_{-\sqrt{B}^{-1}}^0 B \cos^2 \pi t\, dt\, .
\end{equation}
By taking $B$ large enough we can ensure that $\cos^2 \pi t\geq \frac{1}{2}$ on the interval $]-\sqrt{B}^{-1}, 0[$. In particular we achieve
\[
- \lambda_a \leq 2\pi^2 - \sqrt{B}\, .
\]
Since we can choose $\sqrt{B}$ as large as we wish, the latter inequality completes the proof of the lemma.
\chapter{Nonlinear theory}\label{sect:Proof-main4}
This final chapter will prove Theorem~\ref{thm:main4} and hence complete the argument leading to Theorem~\ref{thm:main}. To that end we fix a choice of $\bar \Omega$, $\bar V$, $m$ and $\eta$ as given by Theorem~\ref{thm:spectral}, where $\bar a>0$ is a large parameter whose choice will be specified only later. We introduce a particular space and we will indeed prove an estimate corresponding to \eqref{e:H2-estimate} in this smaller space.
\begin{definition}\label{d:X}
We denote by $X$ the subspace of elements $\Omega\in L^2_m$ for which the following norm is finite:
\begin{equation}\label{e:X-norm}
\|\Omega\|_X:= \|\Omega\|_{L^2} + \||x| \nabla \Omega\|_{L^2} + \|\nabla \Omega\|_{L^4}\, .
\end{equation}
\end{definition}
The above norm has two features which will play a crucial role in our estimates. The first, which is obvious, is that it ensures an appropriate decay of the $L^2$ norm of $D\Omega$ on the complements of large disks $\mathbb R^2\setminus B_R$. The second is that it allows us to bound the $L^\infty$ norm of $\Omega$ and of $\nabla (K_2 *\Omega)$, and to control the growth of $K_2*\Omega$ at infinity. More precisely, we have the following:
\begin{proposition}\label{p:X-bounds}\label{P:X-BOUNDS}
For all $\kappa \in ]0,1[$, there is a constant $C(\kappa) >0$ such that the following estimates hold for every $m$-fold symmetric $\Omega\in X$:
\begin{align}
|\nabla (K_2*\Omega) (x)|+|\Omega (x)| &\leq \frac{C(\kappa)}{1+|x|^{1-\kappa}}\|\Omega\|_X\qquad\forall x\in\ensuremath{\mathbb R}^2 \label{e:decay-Omega}\\
|K_2* \Omega (x)| &\leq C \|\Omega\|_X \min\{ |x| , 1\} \qquad \forall x\in \mathbb R^2\, \label{e:Hoelder}.
\end{align}
\end{proposition}
The aim of this chapter is therefore to give the bound
\begin{equation}\label{e:final-bound}
\|\Omega_{\text{per}, k} (\cdot, \tau)\|_{X} \leq e^{\tau (a_0+\delta_0)} \qquad\qquad\qquad \forall \tau\leq \tau_0\,
\end{equation}
for some appropriately chosen constants $\delta_0>0$ and $\tau_0<0$, independent of $k$. Of course the main difficulty will be to establish the explicit estimate \eqref{e:final-bound}. A preliminary point, however, is to show that the norm is indeed finite for every $\tau\geq -k$. This will be a consequence of the following:
\begin{lemma}\label{l:initial-bound}\label{L:INITIAL-BOUND}
Provided $a_0$ is large enough, the eigenfunction $\eta$ of Theorem \ref{thm:spectral} belongs to $C^2 (\mathbb R^2\setminus \{0\})$ and satisfies the pointwise estimates
\begin{equation}\label{e:Hk}
|D^\ell \eta| (x) \leq C (1+|x|)^{-\ell-\varrho} \qquad \forall \ell\in \{0,1,2\}, \forall \varrho\in [0, 2[\,
\end{equation}
(in particular $\eta\in W^{2,\infty}$).
Moreover $\Omega_{\text{per}, k} \in C ([-k, T]; X)$ for every $T<\infty$.
\end{lemma}
In fact we could prove even sharper estimates if $m$ were larger than $2$. At any rate, one relevant outcome of Lemma
\ref{l:initial-bound} is that the bound \eqref{e:final-bound} holds at least for $\tau$ sufficiently close to $-k$, given that $\Omega_{\text{per}, k} (\cdot, -k)\equiv 0$. The main point of \eqref{e:final-bound} is then that we will be able to deduce the following estimates.
\begin{lemma}\label{l:final-estimates}
Under the assumptions of Theorem \ref{thm:main4} there is a constant $C_0$ (independent of $k$) such that the following holds. Assume that $\bar\tau \leq 0$ is such that for all $\tau\in [-k, \bar\tau]$ we have the estimate
\begin{equation}\label{e:a-priori}
\|\Omega_{\text{per}, k} (\cdot, \tau)\|_X \leq e^{(a_0+\delta_0) \tau}.
\end{equation}
Then
\begin{align}
\|\Omega_{\text{per},k} (\cdot, \bar \tau)\|_{L^2} &\leq C_0 e^{(a_0+\delta_0+1/2)\bar\tau}\, ,
\label{e:stima-L2}\\
\||x| D\Omega_{\text{per}, k} (\cdot, \bar\tau)\|_{L^2} &\leq C_0 e^{(a_0+2\delta_0)\bar\tau}\, , \label{e:stima-H1-pesata}\\
\|D \Omega_{\text{per},k} (\cdot, \bar \tau)\|_{L^4} &\leq C_0 e^{(a_0+2\delta_0)\bar\tau}\label{e:stima-L4}\, .
\end{align}
\end{lemma}
With the above lemma we easily conclude \eqref{e:final-bound} (and hence Theorem \ref{thm:main4}). Indeed denote by $\tau_k$ the largest non-positive time such that \begin{equation}\label{e:assumed-for-the-moment}
\|\Omega_{\text{per}, k} (\cdot, \tau)\|_X \leq e^{(a_0+\delta_0) \tau} \qquad \forall \tau\in [-k, \tau_k]\, .
\end{equation}
Then we must have
\begin{equation}\label{e:forza}
\|\Omega_{\text{per}, k} (\cdot, \tau_k)\|_X = e^{(a_0+\delta_0) \tau_k}\, .
\end{equation}
On the other hand, summing the three estimates \eqref{e:stima-L2}, \eqref{e:stima-H1-pesata}, and \eqref{e:stima-L4} we conclude
\begin{equation}\label{e:forza2}
\|\Omega_{\text{per}, k} (\cdot, \tau_k)\|_X \leq \bar C e^{(a_0+2 \delta_0) \tau_k}
\end{equation}
for some constant $\bar C$ independent of $k$. However \eqref{e:forza} and \eqref{e:forza2} give
$e^{\delta_0 \tau_k}\geq \bar C^{-1}$, i.e. $\tau_k \geq - \frac{1}{\delta_0} \ln \bar C$, implying that \eqref{e:final-bound} holds with $\tau_0:= - \frac{1}{\delta_0} \ln \bar C$.
After proving Proposition \ref{p:X-bounds} and Lemma \ref{l:initial-bound}, we will dedicate two separate sections to the three estimates \eqref{e:stima-L2}, \eqref{e:stima-H1-pesata}, and \eqref{e:stima-L4}. The first estimate, which we will call {\em baseline estimate}, differs substantially from the other two, and to it we will dedicate its own section. In order to accomplish the gain in the exponent in \eqref{e:stima-L2} we will crucially use the information on the semigroup which comes from Theorem \ref{thm:spectral} and Theorem \ref{t:group}, namely, that the growth bound $\omega (L_{ss})$ is precisely $a_0$ (i.e., the growth achieved by $\Omega_{\text{lin}}$). Since, however, the terms in Duhamel's formula depend on derivatives, we need to invoke an a priori control of them, which is present in the norm $\|\cdot\|_X$. Indeed, one such term, which experiences the derivative loss and arises from the nonlinearity, is the following:
\begin{equation}
\int_{-k}^\tau e^{(\tau-s) L_{\rm ss}} [(K_2 * \Omega_{\text{per}, k}) \cdot \nabla \Omega_{\text{per}, k}](\cdot,s) \, ds \, .
\end{equation}
Note that $\|\cdot\|_X$ also includes the weighted $L^2$ norm $\||x| D\Omega\|_{L^2}$ because we encounter a term where $D\Omega_{\text{per}, k}$ is multiplied by the function $V^r$, which grows at $\infty$ like $|x|^{1-{\bar\alpha}}$ when ${\bar\alpha} \in ]0,2[$. In order to close the argument we then need to control the $L^4$ norm and the weighted $L^2$ norm of $D\Omega_{\text{per}, k}$. The latter estimates will not be accomplished through a growth bound on the semigroup $e^{\tau L_{ss}}$ (which would invoke controls on yet higher derivatives), but rather through some careful energy estimates. The structure of the problem will then enter crucially, since the term $(K_2 * \Omega_{\text{per}, k}) \cdot \nabla \bar{\Omega}$ which we need to bound in the energy estimates will take advantage of the improved baseline estimate on the $L^2$ norm. The above term,
which is responsible for the creation of unstable eigenvalues, actually \emph{gains a derivative}. Finally, there is one remaining difficulty when estimating $D \Omega_{\text{per}, k}$ due to the transport term. Namely, differentiating the equation in cartesian coordinates contributes a term $(\partial_i \bar{V}) \cdot \nabla \Omega_{\text{per} ,k}$, which could destabilize the estimates. We exploit the structure of the problem again in a crucial way by estimating angular and radial derivatives, rather than derivatives in cartesian coordinates, and by estimating the angular derivatives {\em first} and the radial derivatives {\em after}.
\section{Proof of Proposition \ref{p:X-bounds}}
We start by bounding $|\Omega (x)|$. Since $W^{1,4} (B_2)$ embeds in $C^{1/2}$, the bound $|\Omega (x)|\leq C \|\Omega\|_X$ is true for every $x\in B_2$. Consider further $R:= \frac{|x|}{2} \geq 1$ and define $u_R (y) := \Omega (Ry)$. In particular let $B:= B_1 (\frac{x}{R})$ and notice that
\begin{align}
\|u_R\|_{L^2 (B)} &= R^{-1} \|\Omega\|_{L^2 (B_R (x))} \leq R^{-1}\|\Omega\|_X\, ,\\
\|Du_R\|_{L^2 (B)} &= \|D\Omega\|_{L^2 (B_R (x))} = R^{-1} \||\cdot |D\Omega\|_{L^2 (B_R (x))} \leq R^{-1} \|\Omega\|_X\, ,\\
\|D u_R\|_{L^4 (B)} &= R^{1/2} \|D\Omega\|_{L^4 (B_R (x))} \leq R^{1/2} \|\Omega\|_X\, .
\end{align}
By interpolation, for $\frac{1}{p} = \frac{\lambda}{2} + \frac{1-\lambda}{4}$ we have
\[
\|Du_R\|_{L^p (B)} \leq C \|\Omega\|_X R^{-\lambda + (1-\lambda)/2}\, .
\]
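For the reader's convenience we record the elementary exponent arithmetic behind the next choice: setting $\lambda = 1-\frac{2\kappa}{3}$ one gets
\[
-\lambda + \frac{1-\lambda}{2} = -1+\kappa \qquad \text{and} \qquad \frac{1}{p} = \frac{\lambda}{2} + \frac{1-\lambda}{4} = \frac{1}{2} - \frac{\kappa}{6}\, ,
\]
i.e. $p = \frac{2}{1-\kappa/3} > 2$.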
Choosing $p$ very close to $2$, but larger, we achieve $\|Du_R\|_{L^p (B)} \leq C R^{-1+\kappa} \|\Omega\|_X$. Since the average of $u_R$ over $B$ is smaller than $\|u_R\|_{L^2 (B)} \leq C R^{-1} \|\Omega\|_X$, from Poincar\'e we conclude $\|u_R\|_{W^{1,p} (B)} \leq C R^{-1+\kappa} \|\Omega\|_X$. In particular using the embedding of $W^{1,p}$ in $L^\infty$ we conclude
\begin{equation}\label{e:Morrey}
\|u_R\|_{L^\infty (B)} \leq C (p) R^{-1+\kappa} \|\Omega\|_X\, .
\end{equation}
Since however $|\Omega (x)|\leq \|u_R\|_{L^\infty (B)}$, we reach the estimate
\begin{equation}\label{e:decay-Omega-2}
|\Omega (x)|\leq \frac{C}{|x|^{1-\kappa}} \|\Omega\|_X\, .
\end{equation}
Note that the constant $C$ depends on $\kappa$: $\kappa$ is positive, but a very small choice of it forces us to choose a $p$ very close to $2$, which in turn gives a dependence of the constant $C(p)$ in \eqref{e:Morrey} on $\kappa$.
We next come to the estimates for $\nabla K_2 * \Omega$. First of all, observe that
\begin{align*}
\|\nabla K_2 * \Omega\|_{L^2} \leq & C \|\Omega\|_{L^2} \leq C \|\Omega\|_X\\
\|D^2 K_2*\Omega\|_{L^2} + \|D^2 K_2* \Omega\|_{L^4} \leq & C \|D\Omega\|_{L^2} + C \|D\Omega\|_{L^4}
\leq C \|\Omega\|_X
\end{align*}
and thus $|\nabla K_2*\Omega (x)|\leq C \|\Omega\|_X$ follows for every $x\in B_4$. Consider now $|x|\geq 4$, set $R:= \frac{|x|}{4}$ and let $\varphi\in C^\infty_c (B_{2R} (x))$ be a cut-off function identically equal to $1$ on $B_R (x)$ and $\psi\in C^\infty_c (B_{2R})$ equal to $1$ on $B_R$. We choose them so that $\|D^k \psi\|_{C^0} + \|D^k \varphi\|_{C^0} \leq C (k) R^{-k}$.
We split
\[
\nabla K_2 * \Omega = \nabla K_2 * (\varphi \Omega) + \nabla K_2 * (\psi \Omega) + \nabla K_2 * ((1-\varphi -\psi) \Omega)\, =: F_1 + F_2 + F_3\, .
\]
We have
\begin{align*}
\|F_1\|_{L^2} &\leq C\|\varphi \Omega\|_{L^2} \leq C \|\Omega\|_X\\
\|DF_1\|_{L^2} &\leq C\|D (\varphi \Omega)\|_{L^2} \leq C R^{-1} \|\Omega\|_{L^2} + C \|\varphi D\Omega\|_{L^2}
\leq C R^{-1} \|\Omega\|_X\\
\|D F_1\|_{L^4} & \leq C \|\Omega\|_X\, .
\end{align*}
The argument used above then implies $|F_1 (x)| \leq C (\kappa) |x|^{\kappa-1} \|\Omega\|_X$. As for estimating $F_2$ we observe that $F_2$ is harmonic outside $B_{2R}$. On the other hand $\|F_2\|_{L^2} \leq C \|\Omega\|_X$. Using the mean-value inequality for harmonic functions we then get
\[
|F_2 (x)| \leq \frac{1}{\pi (2R)^2} \int_{B_{2R} (x)} |F_2| \leq \frac{C}{R} \|\Omega\|_X\, .
\]
As for $F_3$ we write, using the bound on $|\Omega|$ and $|\nabla K (x-y)|\leq C |x-y|^{-2}$,
\begin{align*}
|F_3 (x)| &\leq \int_{\mathbb R^2\setminus (B_{2R} (x) \cup B_{2R})} \frac{C(\kappa) \|\Omega\|_X}{|x-y|^2 |y|^{1-\kappa}}\\
&\leq \int_{(\mathbb R^2\setminus B_{2R})\cap \{|x-y|\geq |y|\}} \frac{C(\kappa) \|\Omega\|_X}{|y|^{3-\kappa}}
+ \int_{(\mathbb R^2\setminus B_{2R} (x)) \cap \{|y|\geq |x-y|\}} \frac{C(\kappa) \|\Omega\|_X }{|x-y|^{3-\kappa}}
\leq \frac{C(\kappa) \|\Omega\|_X}{R^{1-\kappa}}\, .
\end{align*}
Recalling that $K_2*\Omega (0)=0$, integrating \eqref{e:decay-Omega} on the segment with endpoints $0$ and $x$ we conclude \eqref{e:Hoelder} for $|x|\leq 2$. In order to prove the bound when $|x|\geq 1$, fix a point $y$ with $3 R:= |y|\geq 1$. Let $\varphi$ be a radial cut-off function which is identically equal to $1$ on $B_{R} (0)$, is supported in $B_{2R} (0)$ and whose gradient is bounded by $C R^{-1}$. We then write
\begin{equation}\label{e:decompose}
K_2 * \Omega = K_2 * (\varphi \Omega) + K_2 * ((1-\varphi) \Omega)\, .
\end{equation}
Since the distance between $y$ and the support of $\varphi \Omega$ is larger than $R$, we can estimate
\begin{equation}\label{e:cut-inside}
|K_2*(\varphi \Omega) (y)|\leq \frac{C}{R} \int |\varphi \Omega|
\leq C\|\Omega\|_{L^2}\leq C\|\Omega\|_X\, .
\end{equation}
Next observe that, by Calderon-Zygmund,
\[
\|D^2 K_2 * (\Omega (1-\varphi))\|_{L^2} = \|D ((1-\varphi) \Omega)\|_{L^2} \leq \frac{C}{R} \|\Omega\|_{L^2} + C \|D\Omega\|_{L^2 (\mathbb R^2\setminus B_R)}\leq \frac{C}{R} \|\Omega\|_X\, .
\]
Since $(1-\varphi) \Omega$ belongs to $L^2_m$, the average over $B$ of $K_2* ((1-\varphi) \Omega)$ equals $0$. Hence we conclude from Poincar\'e inequality and Calderon-Zygmund that
$$
\|K_2 * ((1-\varphi) \Omega) \|_{L^{2}(B)} \leq CR\|DK_2 * ((1-\varphi) \Omega) \|_{L^{2}(B)} \leq CR\|((1-\varphi) \Omega) \|_{L^{2}(B)} \leq CR \|\Omega\|_X\, .
$$
From the Gagliardo-Nirenberg interpolation inequality applied on $B$ we have
\begin{align*}
\|K_2* ((1-\varphi) \Omega)\|_{L^\infty(B)}&\leq C\|D^2 K_2 * (\Omega (1-\varphi))\|_{L^2}^{1/2}\|K_2* ((1-\varphi) \Omega)\|^{1/2}_{L^{2}(B)}
\\
&\qquad + \frac C R \|K_2 * ((1-\varphi) \Omega) \|_{L^{2}(B)}
\leq C \|\Omega\|_X.
\end{align*}
\section{Proof of Lemma \ref{l:initial-bound}}
We will in fact prove the following more precise version of the estimates for $\eta$.
\begin{lemma}\label{l:pointwise}
Under the assumptions of Lemma \ref{l:initial-bound}, $\eta\in C^2 (\mathbb R^2\setminus \{0\})$ and its derivatives up to the second order satisfy the estimate
\begin{equation}\label{e:eta-pointwise-decay}
|D^j \eta (x)|\leq C (1+ |x|)^{-m-2-j-{\bar\alpha}} \qquad \forall x, \forall j\in \{0,1,2\}\, .
\end{equation}
In particular $\eta\in C^1 (\mathbb R^2)$ and its first derivatives are Lipschitz (namely, $\eta\in W^{2,\infty} (\mathbb R^2)$).
\end{lemma}
As for the second conclusion of the Lemma, observe that, by going back to the solutions $\omega_{\varepsilon, k}$, it follows from the regularity and decay of the initial data in \eqref{e:Euler-later-times} (just proved in the above lemma) and the regularity and decay of the forcing term $f$ for positive times, that $\omega_{\varepsilon, k}\in C ([t_k, T], X)$ for every $T> t_k$. Given the explicit transformation from $\omega_{\varepsilon, k}$ to $\Omega_{\varepsilon, k}$ we conclude that $\Omega_{\varepsilon, k} \in C ([-k, T], X)$ for every $T> -k$. Since the same regularity is enjoyed by $\tilde{\Omega}$ on $[-k, T]$ (the latter is in fact smooth and compactly supported on $\mathbb R^2\times [-k,T]$) and by $\Omega_{\text{lin}}$, we infer that $\Omega_{\text{per}, k} = \Omega_{\varepsilon, k} - \tilde{\Omega}-\Omega_{\text{lin}}$ belongs to $C ([-k, T], X)$.
\begin{proof}[Proof of Lemma \ref{l:pointwise}]
Consider $\eta\in L^2_m$, for $m\geq 2$ as in Theorem \ref{thm:spectral} and write it as $\eta (\theta, r) = \vartheta (r) e^{ik m\theta}$ when $b_0\neq 0$ or $\eta (\theta, r) = \vartheta (r) e^{ik m \theta} +\bar\vartheta (r) e^{-ikm\theta}$ if $b_0=0$ (where $\bar\vartheta$ denotes the complex conjugate of $\vartheta$).
In both cases $\vartheta (r) e^{ikm\theta}$ is an eigenfunction and through this property we will show that it satisfies the estimates of the Lemma. We can therefore assume without loss of generality that $\eta (x) = \vartheta (r) e^{ikm\theta}$. We will also see that the argument leads in fact to estimates \eqref{e:eta-pointwise-decay} with $km$ replacing $m$ and hence without loss of generality we assume
$k=1$. Furthermore an outcome of the argument below is that $\eta$ is smooth except possibly at the origin.
Note moreover that after having shown the pointwise bounds \eqref{e:eta-pointwise-decay} for $\eta$ outside of the origin, the $W^{2,\infty}$ regularity of $\eta$ follows immediately: indeed $\eta$ and $D\eta$ can be continuously extended to the origin by setting them equal to $0$, hence showing that $\eta\in C^1 (B_1)$, while the uniform bound of $|D^2 \eta|$ in $B_1\setminus \{0\}$ easily implies that $D\eta$ is Lipschitz.
\medskip
{\bf Step 1. Exponential coordinates.} We recall that the distribution $K_2* \eta$ is well defined, according to Lemma \ref{l:extension}, and its action on a test function in $\mathscr{S}$ is given by integrating the scalar product of the test function with a function $v\in W^{1,2}_{\text{loc}} (\mathbb R^2, \mathbb C^2)$, cf. the proof of Lemma \ref{l:extension} in the appendix. It follows from the argument given there that $R_{2\pi/m} v (R_{2\pi/m} x) = v (x)$. Given that ${\rm div}\, v =0$, $-v^\perp$ is the gradient of a continuous function, which is determined up to a constant; we denote by $\psi$ the choice vanishing at the origin. $\psi$ inherits the symmetry and can thus be written as $\psi (\theta, r) = f (r) e^{im \theta}$. We thus conclude that
\begin{align}
\vartheta =&f'' +\frac{1}{r} f' -\frac{m^2}{r^2} f\label{e:Poisson}\\
v = &\frac{f'}{r} \frac{\partial}{\partial \theta} - \frac{im f}{r} \frac{\partial}{\partial r}\, .
\end{align}
Observe moreover that $v\in W^{1,2}_{\text{loc}}$ implies $f'', \frac{f'}{r}$, $\frac{f}{r^2}\in L^2_{\text{loc}}$. Therefore $f$ is determined by \eqref{e:Poisson} and the boundary conditions
\begin{itemize}
\item[(a)] $f (0) =0$;
\item[(b)] $\int_0^1 \frac{|f'(r)|^2}{r} \, dr < \infty$.
\end{itemize}
(Observe that condition (b) implies $f'(0)=0$ when $f\in C^1$; we can therefore interpret it as a surrogate boundary condition.) We recall also that we have the estimate $\|D^2\psi\|_{L^2} \leq C \|\vartheta\|_{L^2} < \infty$ and owing to $\int_{B_R} D\psi = 0$, by Poincar\'e we achieve
\[
\|D\psi\|_{L^p (B_R)} \leq C (p) R^{2/p}\, .
\]
Using Morrey's embedding and the fact that $\psi (0)=0$ we conclude, in turn
\[
|\psi (x)|\leq C \|D\psi\|_{L^p (B_{|x|})} |x|^{1-2/p} \leq C |x|\, .
\]
In particular we conclude
\begin{equation}\label{e:linear-bound-f}
|f(r)|\leq C r\, .
\end{equation}
The equation satisfied by $\eta$ can thus be written in terms of the function $f$ as
\begin{equation}\label{e:third-order}
\left(1+\frac{r}{\alpha}\frac{d}{dr} - z_0 - im\beta \zeta\right) \left(f''+\frac{f'}{r} - \frac{m^2}{r^2} f\right) - \frac{imf \beta g'}{r} = 0\, ,
\end{equation}
where $g$ is the smooth function such that $\bar\Omega (x) = g (|x|)$ (in particular $g$ is constant in a neighborhood of the origin, and equals $r^{-{\bar\alpha}}$ for $r\geq 2$) and $\zeta$ is given by the formula \eqref{e:def-zeta}. We next set $s = \ln r$ and in particular
\begin{align}
\tilde{g} (s) &= g (e^s)\\
h (s) & = f (e^s)\\
\tilde\zeta (s) &= \zeta (e^s)\, .
\end{align}
Note that
\begin{equation}\label{e:Gamma}
\vartheta (e^s) = e^{-2s} (h'' (s) - m^2 h (s)) =: e^{-2s} \Gamma (s)\, .
\end{equation}
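The identity \eqref{e:Gamma} follows from a direct computation: since $f (r) = h (\ln r)$,
\[
f' (r) = \frac{h' (\ln r)}{r}\, , \qquad f'' (r) = \frac{h'' (\ln r) - h' (\ln r)}{r^2}\, , \qquad f'' + \frac{f'}{r} - \frac{m^2}{r^2} f = \frac{h'' (\ln r) - m^2 h (\ln r)}{r^2}\, .
\]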
In these new coordinates we observe that the claim of the lemma corresponds then to showing that $\vartheta\in C^2_{\text{loc}} (\mathbb R)$ and
\begin{align}
|\Gamma (s)| + |\Gamma' (s)| + |\Gamma'' (s)|&\leq C e^{4s} \qquad &\forall s\leq 0\label{e:decad-negativo}\, ,\\
|\Gamma (s)| + |\Gamma' (s)| + |\Gamma'' (s)|&\leq C e^{- (m+{\bar\alpha})s} \qquad &\forall s\geq 0\label{e:decad-positivo}\, .
\end{align}
In order to achieve the latter estimates we will need the following bounds on $\tilde{g}'$, $\tilde{g}''$, $\tilde\zeta$, and $\tilde{\zeta}'$, which we record here and can be easily checked:
\begin{align}
|\tilde{g} (s)| + |\tilde{g}' (s)| + |\tilde\zeta (s)| + |\tilde\zeta' (s)| &\leq C e^{-{\bar\alpha} s}\qquad &\forall s \geq 0\, ,\label{e:stima-tilde-g-positivo}\\
|\tilde{g} (s) - g (0)| + |\tilde{g}' (s)| + |\tilde\zeta (s) - \zeta (0)| +
|\tilde\zeta' (s)| &\leq C e^{2s}\qquad &\forall s\leq 0\, .\label{e:stima-tilde-g-negativo}
\end{align}
We observe next that, by \eqref{e:linear-bound-f}
\begin{equation}\label{e:the-very-first-crappy-bound}
|h(s)|\leq C e^s\, .
\end{equation}
\medskip
{\bf Step 2. The equation for $h$.} In these new coordinates the equation \eqref{e:third-order} becomes
\begin{equation}
\left(1+\frac{1}{\alpha} \frac{d}{ds} -z_0 - im \beta \tilde{\zeta}\right) \left(e^{-2s} (h''-m^2 h)\right) - im h e^{-2s} \tilde{g}' = 0\, ,
\end{equation}
which we simplify as
\begin{equation}
\left[ \frac{d}{ds} - \alpha \left(im\beta \tilde{\zeta} + z_0 - 1 +\frac{2}{\alpha}\right)\right] (h''-m^2 h) - i\alpha mh \tilde{g}' = 0\, .
\end{equation}
We then define the integrating factor
\[
I (s) = \exp \left[- \alpha \int_0^s \left(im \beta \tilde{\zeta} + z_0 -1 +\frac{2}{\alpha}\right)\, d\sigma\right]
\]
We can thus write
\[
\frac{d}{ds} \left[ I (h'' -m^2 h)\right] = i \alpha m I h \tilde{g}'\, .
\]
Given that $z_0= a_0 + i b_0$,
\begin{equation}\label{e:exact-identity-I}
|I (s)|\le C e^{-(2+\alpha (a_0-1)) s}
\end{equation}
and in particular, by \eqref{e:stima-tilde-g-positivo}
\begin{equation}\label{e:first-crappy-bound}
|I h \tilde{g}'| (s) \leq C e^{-(1+\alpha a_0 + {({\bar\alpha}-\alpha)}) s}\, .
\end{equation}
This implies that the latter is an integrable function on every halfline $[s, \infty[$ so that we can write
\begin{equation}\label{e:ODE-again}
\Gamma (s) = h'' (s) - m^2 h (s) = - \alpha I(s)^{-1} \int_s^\infty i m I h \tilde{g}' \, .
\end{equation}
Since $\Gamma (s) = e^{2s} \vartheta (e^s)$ and $\vartheta \in L^2 (rdr)$, $e^{-s} \Gamma \in L^2 (ds)$. We claim in particular that $e^{-m|s|} \Gamma (s)$ is integrable. Indeed:
\[
\int_{\mathbb R}|\Gamma (s)| e^{-m|s|}\, ds \leq \|e^{-s} \Gamma\|_{L^2 (\mathbb R)} \left(\int_{\mathbb R} e^{-2 (m|s|-s)}\, ds\right)^{1/2}< \infty\, .
\]
We claim then that for the function $h$ we have the formula
\begin{equation}\label{e:formulozza}
h (s) = -\frac{1}{2m} e^{-ms} \int_{-\infty}^s e^{ms'} \Gamma (s') ds' - \frac{1}{2m} e^{ms} \int_s^\infty e^{-ms'} \Gamma (s')\, ds'\, .
\end{equation}
In order to prove the identity denote the right hand side by $H$ and observe that it is a solution of the
same ODE satisfied by $h$, namely, $H''-m^2 H = \Gamma$. Hence $(\frac{d^2}{ds^2} -m^2) (H-h) =0$, which implies that $H (s)-h(s) = C_1 e^{ms} + C_2 e^{-ms}$. On the other hand, using the information that $e^{-s}\Gamma\in L^2$, it can be readily checked that $H (s) = o (e^{m|s|})$ at both $\pm \infty$. Since this property is shared by $h$, thanks to the bound \eqref{e:the-very-first-crappy-bound}, we conclude that $C_1 e^{ms} + C_2 e^{-ms}$ must be $o (e^{m|s|})$ at $\pm \infty$, implying $C_1=C_2=0$.
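For instance, at $+\infty$ the second term of $H$ can be bounded via the Cauchy--Schwarz inequality, using only $e^{-s}\Gamma\in L^2$:
\[
e^{ms} \left|\int_s^\infty e^{-ms'} \Gamma (s')\, ds'\right| \leq e^{ms} \|e^{-s'}\Gamma\|_{L^2} \left(\int_s^\infty e^{-2(m-1)s'}\, ds'\right)^{1/2} \leq C e^{ms} e^{-(m-1)s} = C e^{s}\, ,
\]
which is $o (e^{ms})$ since $m\geq 2$; the remaining terms are handled analogously.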
\medskip
{\bf Step 3. Estimates at $+\infty$.} In this step we give bounds for the asymptotic behavior of $h$ at $+\infty$.
We recall \eqref{e:stima-tilde-g-positivo} and hence observe that
\begin{equation}\label{e:stima-Gamma}
|\Gamma (s)| \leq C e^{(2 + \alpha a_0 - \alpha) s} \int_s^\infty |h(\sigma)| e^{-(2+\alpha a_0+{({\bar\alpha}-\alpha)}) \sigma}\, d\sigma\, ,
\end{equation}
for $s$ positive.
On the other hand, for $s\geq 0$ we can also write from \eqref{e:formulozza}
\begin{equation}\label{e:stima-h-positiva}
|h(s)| \leq C e^{-ms} + C e^{-ms} \int_0^s e^{m\sigma} |\Gamma (\sigma)|\, d\sigma + C e^{ms} \int_s^\infty e^{-m\sigma} |\Gamma (\sigma)|\, d\sigma\, .
\end{equation}
Starting with the information $|h(s)|\leq C e^s$ for $s>0$, we then infer from \eqref{e:stima-Gamma} that $|\Gamma (s)|\leq C e^{(1-{\bar\alpha}) s}$ for $s>0$. In turn plugging the latter into \eqref{e:stima-h-positiva} we infer $|h (s)|\leq C e^{(1-{\bar\alpha}) s}$ for $s>0$. The latter, plugged into \eqref{e:stima-Gamma} turns into $|\Gamma (s)|\leq C e^{(1-2{\bar\alpha}) s}$ for $s>0$. We then can keep iterating this procedure. The bootstrap argument can be repeated until we reach the largest integer $k$ such that $(1-k{\bar\alpha}) > -m$: one last iteration of the argument gives then
\begin{equation}\label{e:bound-finale-h-positivo}
|h(s)|\leq C e^{-ms}
\end{equation}
and hence, inserting one last time in \eqref{e:stima-Gamma}
\begin{equation}\label{e:decad-positivo-1}
|\Gamma (s)|\leq C e^{-(m+{\bar\alpha}) s}\, .
\end{equation}
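For clarity we record the generic step of the bootstrap just performed: if $|h(\sigma)|\leq C e^{(1-j{\bar\alpha})\sigma}$ for $\sigma>0$, then (provided $a_0$ is large enough that the integral converges) \eqref{e:stima-Gamma} yields
\[
|\Gamma (s)| \leq C e^{(2 + \alpha a_0 - \alpha) s} \int_s^\infty e^{(1-j{\bar\alpha})\sigma}\, e^{-(2+\alpha a_0 + {\bar\alpha} - \alpha) \sigma}\, d\sigma \leq C' e^{(1-(j+1){\bar\alpha}) s}\, ,
\]
and \eqref{e:stima-h-positiva} then upgrades the bound on $h$ accordingly.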
In order to estimate the first and second derivatives of $\Gamma$ we observe that
\[
\frac{I'}{I} = -\alpha \left(im \beta \tilde{\zeta} + z_0-1+\frac{2}{\alpha}\right)
\]
and we compute explicitly
\begin{align}
\Gamma' &= \alpha \left(im \beta \tilde\zeta + z_0-1+\frac{2}{\alpha}\right) \Gamma + i \alpha m h \tilde{g}'\label{e:Gamma'}\\
\Gamma'' &=\alpha im \beta \tilde\zeta' \Gamma + \alpha \left(im \beta \tilde\zeta + z_0-1+\frac{2}{\alpha}\right) \Gamma'
+ i \alpha m h \tilde{g}'' + i \alpha m h' \tilde{g}'\, .\label{e:Gamma''}
\end{align}
From \eqref{e:Gamma'} and the bounds \eqref{e:decad-positivo-1},\eqref{e:bound-finale-h-positivo}, and \eqref{e:stima-tilde-g-positivo}, we immediately conclude
\begin{equation}\label{e:decad-positivo-2}
|\Gamma' (s)|\leq C e^{-(m+{\bar\alpha})s}\, .
\end{equation}
As for the second derivative, using \eqref{e:decad-positivo-2}, \eqref{e:decad-positivo-1},\eqref{e:bound-finale-h-positivo}, and \eqref{e:stima-tilde-g-positivo}, we conclude
\[
\left|\alpha im \beta \tilde\zeta' \Gamma + \alpha \left(im \beta \tilde\zeta + z_0-1+\frac{2}{\alpha}\right) \Gamma'
+ i \alpha m h \tilde{g}''\right|\leq C e^{-(m+{\bar\alpha})s}\, .
\]
In order to estimate the remaining term $i \alpha m h' \tilde{g}'$ we differentiate \eqref{e:formulozza} to infer
\begin{equation}\label{e:h'}
h' (s) = \frac{1}{2} e^{-ms} \int_{-\infty}^s e^{ms'} \Gamma (s') ds' - \frac{1}{2} e^{ms} \int_s^\infty e^{-ms'} \Gamma (s')\, ds'\,
\end{equation}
and thus derive the bound $|h'(s)|\leq C e^{-ms}$ using the same argument for bounding $h$. In turn, combined again with \eqref{e:stima-tilde-g-positivo} we conclude $|i \alpha m h' \tilde{g}' (s)|\leq C e^{-(m+{\bar\alpha})s}$, hence completing the proof of \eqref{e:decad-positivo}.
\medskip
{\bf Step 4. Estimates at $-\infty$.}
For the bound at $-\infty$ we use instead \eqref{e:stima-tilde-g-negativo} (which we observe holds for positive $s$ as well). This leads to the inequality
\begin{equation}\label{e:bootstrap-negative-1}
|\Gamma (s)|\leq C e^{(2+ \alpha (a_0-1))s} \int_s^{\infty} e^{-\alpha (a_0-1) \sigma} |h(\sigma)|\, d\sigma\, .
\end{equation}
In this argument we assume that $a_0$ is selected very large, depending on $m$.
In turn we estimate $h$ for negative $s$ by
\begin{equation}\label{e:bootstrap-negative-2}
|h (s)|\leq C e^{ms} + C e^{ms} \int_s^0 e^{-m\sigma} |\Gamma (\sigma)|\, d\sigma
+ C e^{-ms} \int_{-\infty}^s e^{m\sigma} |\Gamma (\sigma)|\, d\sigma\, .
\end{equation}
Observe now that we have $|h(s)|\leq C e^s$ for every $s$. Inserting this bound in \eqref{e:bootstrap-negative-1} and assuming that $a_0$ is large enough we conclude $|\Gamma (s)|\leq C e^{3s}$. In turn we can insert the latter bound in \eqref{e:bootstrap-negative-2} to conclude
$|h (s)|\leq C (e^{ms} + e^{3s})$. Since $m\geq 2$ we can then conclude $|h(s)|\leq C e^{2s}$ and inserting it in \eqref{e:bootstrap-negative-1} we conclude $|\Gamma (s)|\leq C e^{4s}$.
For the first and second derivatives we use the formulae \eqref{e:Gamma'}, \eqref{e:Gamma''}, and \eqref{e:h'} and argue as above to conclude $|\Gamma' (s)| + |\Gamma'' (s)|\leq C e^{4s}$.
\end{proof}
\section{Proof of the baseline \texorpdfstring{$L^2$}{L2} estimate}
In this section we prove \eqref{e:stima-L2}. In order to simplify the notation, from now on we will use $\Omega$ in place of $\Omega_{\text{per}, k}$. We next recall equation \eqref{e:master}:
\begin{align}
(\partial_{\tau} - L_{\text{ss}}) \Omega
= &- \underbrace{(V_{\text{lin}}\cdot \nabla) \Omega}_{=:\mathscr{F}_1} - \underbrace{(V_r\cdot \nabla) \Omega}_{=:\mathscr{F}_2}- \underbrace{(V \cdot \nabla) \Omega_{\text{lin}}}_{=:\mathscr{F}_3} + \underbrace{(V\cdot \nabla) \Omega_r}_{=:\mathscr{F}_4} + \underbrace{(V \cdot \nabla) \Omega}_{=:\mathscr{F}_5}\nonumber\\
&-\underbrace{(V_{\text{lin}}\cdot \nabla) \Omega_{\text{lin}}}_{=:\mathscr{F}_6} - \underbrace{(V_r\cdot \nabla) \Omega_{\text{lin}}}_{=:\mathscr{F}_7} - \underbrace{(V_{\text{lin}}\cdot \nabla) \Omega_r}_{=:\mathscr{F}_8}\, .\label{e:master-2}
\end{align}
We then define $\mathscr{F} := - \sum_{i=1}^8 \mathscr{F}_i$.
Recalling Theorem \ref{t:group} and the fact that $\Omega (\cdot, -k) =0$, we estimate via Duhamel's formula
\begin{equation}\label{e:Duhamel}
\|\Omega (\cdot, \bar\tau)\|_{L^2} \leq C (\varepsilon) \int_{-k}^{\bar\tau} e^{(a_0+\varepsilon) (\bar\tau - s)} \|\mathscr{F} (\cdot, s)\|_{L^2}\, ds\, .
\end{equation}
We next estimate the $L^2$ norms of the various $\mathscr{F}_i$. In order to keep our notation simpler we use $\|\cdot\|_2$ for $\|\cdot\|_{L^2}$ and $\|\cdot\|_\infty$ for $\|\cdot\|_{L^\infty}$. $\mathscr{F}_1$ is simple:
\begin{equation}\label{e:F-1}
\|\mathscr{F}_1 (\cdot, s)\|_2 \leq \|V_{\text{lin}} (\cdot, s)\|_\infty \|D\Omega (\cdot, s)\|_2 \leq C e^{a_0 s} e^{(a_0+\delta_0) s} \leq C e^{(2a_0+\delta_0) s}\, .
\end{equation}
As for $\mathscr{F}_2$ we use the fact that
\begin{align*}
\int |\mathscr{F}_2 (\xi, \tau)|^2\, d\xi & \leq C \int_{|\xi|\geq e^{-\tau/\alpha}} |\xi|^{2-{2{\bar\alpha}}} |D \Omega (\xi, \tau)|^2\, d\xi\\
& \leq C e^{{\frac{2{\bar\alpha}}{\alpha}}\tau} \int |\xi|^2 |D \Omega (\xi, \tau)|^2\, d\xi \leq C e^{{\frac{2{\bar\alpha}}{\alpha}}\tau} \|\Omega (\cdot, \tau)\|^2_X\, .
\end{align*}
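In the second inequality above we used the elementary pointwise bound
\[
|\xi|^{2-2{\bar\alpha}} = |\xi|^{-2{\bar\alpha}}\, |\xi|^2 \leq e^{\frac{2{\bar\alpha}}{\alpha}\tau}\, |\xi|^2 \qquad \text{for } |\xi|\geq e^{-\tau/\alpha}\, .
\]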
We hence conclude
\begin{equation}\label{e:F2}
\|\mathscr{F}_2 (\cdot, \tau)\|_{L^2}
\leq C e^{(a_0+{\frac{{\bar\alpha}}{\alpha}}+\delta_0)\tau}
\leq C e^{(a_0+1+\delta_0)\tau}\, .
\end{equation}
As for $\mathscr{F}_3$, for every fixed $\tau$ we can use Proposition \ref{p:X-bounds} with $\kappa=\frac{1}{2}$ to conclude
\begin{align}\label{e:F3}
\|\mathscr{F}_3 (\cdot, \tau)\|_{L^2} &\leq \|V (\cdot, \tau)\|_{L^\infty} \|\nabla \Omega_{\text{lin}} (\cdot, \tau)\|_{L^2} \leq C \|\Omega (\cdot, \tau)\|_X \|\nabla \Omega_{\text{lin}} (\cdot, \tau)\|_{L^2}\leq C e^{(2a_0 +{\delta_0}) \tau}\, .
\end{align}
To estimate $\mathscr{F}_4$ we recall that
\begin{equation}
\Omega_r (\xi, \tau) = \beta (1-\chi (e^{\tau/\alpha} \xi)) \bar\Omega (\xi) + e^{\tau/\alpha} (\beta \zeta (|\xi|) |\xi|) \chi' (e^{\tau/\alpha} \xi)\, .
\end{equation}
Differentiating the latter identity we get:
\begin{align}
|\nabla \Omega_r (\xi, \tau)| \leq & C \mathbf{1}_{|\xi|\geq e^{-\tau/\alpha}} |D \bar \Omega| (\xi)
+ C e^{\tau/\alpha} \left(|\bar\Omega| (\xi) + | D (\zeta (|\xi|) |\xi|)|\right) \mathbf{1}_{e^{-\tau/\alpha} R\geq |\xi|\geq e^{-\tau/\alpha}}\nonumber\\
& + C e^{2\tau/\alpha} |\zeta (|\xi|)|\, |\xi|\, \mathbf{1}_{e^{-\tau/\alpha} R \geq |\xi|\geq e^{-\tau/\alpha}}\nonumber\\
\leq & C \mathbf{1}_{|\xi|\geq e^{-\tau/\alpha}} |\xi|^{-1-{\bar\alpha}} + C (e^{\tau/\alpha} |\xi|^{-{\bar\alpha}} + e^{2\tau/\alpha} |\xi|^{1-{\bar\alpha}}) \mathbf{1}_{e^{-\tau/\alpha} R \geq |\xi|\geq e^{-\tau/\alpha}}\label{e:sfava}
\end{align}
where we are assuming that ${\rm spt}\, (\chi) \subset B_R$. We next use Proposition \ref{p:X-bounds} with $\kappa = \alpha/2$ to get $\|V (\cdot, \tau)\|_{L^\infty} \leq C \|\Omega (\cdot, \tau)\|_X\leq C e^{(a_0+\delta_0)\tau}$.
In particular we can estimate
\begin{align*}
\int |\mathscr{F}_4 (\xi, \tau)|^2 d\xi \leq & C e^{2(a_0+\delta_0) \tau} \int_{e^{-\tau/\alpha}}^\infty r^{-1-{ 2{\bar\alpha}}} d r+ C e^{2(a_0+\delta_0 + 1/\alpha)\tau} \int_{e^{-\tau/\alpha}}^{e^{-\tau/\alpha} R} r^{-{ 2{\bar\alpha}} +1}\, dr\\
& + C e^{2(a_0+\delta_0
+ 2/\alpha)\tau} \int_{e^{-\tau/\alpha}}^{e^{-\tau/\alpha} R} r^{3 -{ 2{\bar\alpha}}}\, dr \leq C e^{(2 a_0+ 2\delta_0 + {2})\tau}\, .
\end{align*}
We thus conclude
\begin{equation}\label{e:F-4}
\|\mathscr{F}_4 (\cdot, \tau)\|_2 \leq C e^{(a_0+\delta_0+1) \tau}\, .
\end{equation}
For $\mathscr{F}_5$ we use again $\|V (\cdot, \tau)\|_{L^\infty} \leq C e^{(a_0+\delta_0) \tau}$ to get
\begin{equation}\label{e:F-5}
\|\mathscr{F}_5 (\cdot, \tau)\|_2 \leq C e^{(a_0+\delta_0)\tau} \| D\Omega (\cdot, \tau)\|_2 \leq C e^{2(a_0+\delta_0)\tau}\, .
\end{equation}
$\mathscr{F}_6$ is easily estimated using Lemma \ref{l:pointwise} and Lemma \ref{l:initial-bound}:
\begin{equation}\label{e:F-6}
\|\mathscr{F}_6 (\cdot, \tau)\|_2 \leq \|V_{\text{lin}} (\cdot, \tau)\|_\infty \|\nabla \Omega_{\text{lin}} (\cdot, \tau)\|_2
\leq C e^{2 a_0 \tau}\, .
\end{equation}
$\mathscr{F}_7$ and $\mathscr{F}_8$ can be easily estimated using the explicit formula for $\Omega_r$ and the decay estimates given by Lemma \ref{l:pointwise} for $\Omega_{\text{lin}}$; in particular they enjoy the estimate
\begin{equation}\label{e:F-7+8}
\|\mathscr{F}_7 (\cdot, \tau)\|_2 + \|\mathscr{F}_8 (\cdot, \tau)\|_2 \leq C e^{(a_0+\delta_0+1/2) \tau}\, .
\end{equation}
Assuming that $a_0$ is sufficiently large we then achieve the estimate
\begin{equation}\label{e:F-all-together}
\|\mathscr{F} (\cdot, \tau)\|_2 \leq C e^{(a_0+\delta_0+1/2) \tau}\, .
\end{equation}
Inserting this into \eqref{e:Duhamel} and choosing $\varepsilon < \delta_0 + 1/2$, we achieve
\begin{equation}\label{e:baseline-bar-tau}
\|\Omega (\cdot, \bar\tau)\|_2 \leq C e^{(a_0+\varepsilon) \bar \tau} \int_{-k}^{\bar \tau} e^{(\delta_0+1/2 -\varepsilon) s}\, ds
\leq C e^{(a_0+\delta_0 + 1/2)\bar \tau}\, .
\end{equation}
In fact, observe that the argument just given implies the stronger conclusion
\begin{equation}\label{e:stronger-baseline}
\|\Omega (\cdot, \tau)\|_2 \leq C e^{(a_0+\delta_0 + 1/2) \tau}\, , \qquad \forall \tau \in [-k, \bar \tau]\, .
\end{equation}
\section{Estimates on the first derivative}
In this section we prove \eqref{e:stima-H1-pesata} and \eqref{e:stima-L4}.
The proof will be achieved via $L^2$ and $L^4$ energy estimates, where we will differentiate \eqref{e:master} first with respect to the angular variable and then with respect to the radial variable. We start by rewriting \eqref{e:master} as
\begin{align}
& \partial_\tau \Omega - \Omega + \left(\left(-\frac{\xi}{\alpha} + \beta \bar V + V_r + V_{\text{lin}} + V \right)\cdot \nabla\right) \Omega\nonumber\\
= & - \beta (V \cdot \nabla) \bar \Omega - (V\cdot \nabla)\Omega_{\text{lin}} - (V\cdot \nabla) \Omega_r - (V_{\text{lin}}\cdot \nabla) \Omega_{\text{lin}} - (V_r \cdot \nabla) \Omega_{\text{lin}}\nonumber\\
& - (V_{\text{lin}}\cdot \nabla) \Omega_r = : \mathscr{G}\, .\label{e:master-10}
\end{align}
We next differentiate in $\theta$. In order to simplify our notation we will write $\theta$ in the subscript (or $,\theta$ if there is already another subscript). We also recall that $\Omega_r$ and $\bar\Omega$ are radial functions, while $(V_r \cdot \nabla)$ and $(\bar V \cdot \nabla)$ are angular derivatives times radial functions, and $\xi\cdot \nabla$ is a radial derivative times a radial function. So we can write
\begin{align}
& \partial_\tau \Omega_\theta - \Omega_\theta + \left(\left(-\frac{\xi}{\alpha} + \beta \bar V + V_r + V_{\text{lin}} + V \right)\cdot \nabla\right) \Omega_\theta\nonumber\\
= &\mathscr{G}_\theta - (V_{\text{lin}, \theta}\cdot \nabla) \Omega - (V_\theta \cdot \nabla) \Omega =: \mathscr{H}_1\, \label{e:master-angular-1}\\
& \partial_\tau \frac{\Omega_\theta}{r} + \left(\frac{1}{\alpha}-1\right) \frac{\Omega_\theta}{r} + \left(\left(-\frac{\xi}{\alpha} + \beta \bar V + V_r + V_{\text{lin}} + V \right)\cdot \nabla\right) \frac{\Omega_\theta}{r}\nonumber\\
= &\frac{1}{r} \mathscr{G}_\theta - \frac{1}{r} (V_{\text{lin}, \theta}\cdot \nabla) \Omega - \frac{1}{r} (V_\theta \cdot \nabla) \Omega
+ \Omega_\theta ((V_{\text{lin}} + V) \cdot \nabla)\frac{1}{r}
= :\mathscr{H}_2\, . \label{e:master-angular-2}
\end{align}
We then multiply the first equation by $\Omega_\theta$ and integrate by parts the terms on the left-hand side to conclude
\begin{align}
\frac{d}{d\tau} \frac{1}{2} \|\Omega_\theta (\cdot, \tau)\|_2^2 &= \left(1 -\frac{1}{\alpha}\right) \|\Omega_\theta (\cdot, \tau)\|_2^2
+ \int \mathscr{H}_1 (\xi, \tau) \Omega_\theta (\xi, \tau)\nonumber\\
& \leq \|\mathscr{H}_1 (\cdot, \tau)\|_2 \|\Omega_\theta (\cdot, \tau)\|_2\, .\label{e:first-energy-est}
\end{align}
Likewise we multiply the second identity by $(\frac{1}{r} \Omega_\theta)^3$ and integrate by parts to achieve
\begin{align}
\frac{d}{d\tau} \frac{1}{4} \|r^{-1} \Omega_\theta (\cdot, \tau)\|_4^4 &= \left(1 -\frac{1}{\alpha}\right) \|r^{-1} \Omega_\theta (\cdot, \tau)\|_4^4
+ \int \mathscr{H}_2 (\xi, \tau) (r^{-1} \Omega_\theta (\xi, \tau))^3\nonumber\\
&\leq \|\mathscr{H}_2 (\cdot, \tau)\|_4 \|r^{-1} \Omega_\theta (\cdot, \tau)\|_4^3\, .\label{e:second-energy-est}
\end{align}
We next wish to estimate the two integrals on the right-hand sides of both equations.
We summarize the relevant estimates in Lemma \ref{l:ugly-lemma} below. Note that they imply
\begin{align}
\frac{d}{d\tau} \|\Omega_\theta (\cdot, \tau)\|_2 &\leq C e^{(a_0+2\delta_0)\tau}\, ,\\
\frac{d}{d\tau} \|r^{-1} \Omega_\theta (\cdot, \tau)\|_4 &\leq C e^{(a_0+2\delta_0)\tau}\, .
\end{align}
Integrating these estimates between $-k$ and $\bar\tau$ we conclude
\begin{align}
\|\Omega_\theta (\cdot, \bar \tau)\|_2 &\leq C e^{(a_0+2 \delta_0)\bar \tau}\, ,\\
\|r^{-1} \Omega_\theta (\cdot, \bar \tau)\|_4 &\leq C e^{(a_0+2 \delta_0)\bar \tau}\, .
\end{align}
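Indeed, assuming (consistently with the Duhamel formula \eqref{e:Duhamel}, where no initial datum appears) that the perturbation vanishes at the initial time $\tau=-k$, the integration gives, for instance,
\begin{equation*}
\|\Omega_\theta (\cdot, \bar \tau)\|_2 \leq \|\Omega_\theta (\cdot, -k)\|_2 + C \int_{-k}^{\bar\tau} e^{(a_0+2\delta_0) s}\, ds \leq \frac{C}{a_0+2\delta_0}\, e^{(a_0+2\delta_0)\bar\tau}\, ,
\end{equation*}
and the same computation applies to the weighted $L^4$ quantity.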
But in fact the very same argument gives the stronger conclusions:
\begin{align}
\|\Omega_\theta (\cdot, \hat\tau)\|_2 &\leq C e^{(a_0+2 \delta_0)\hat \tau}\, \qquad &\forall \hat \tau\in [-k, \bar \tau ]\, ,\\
\|r^{-1} \Omega_\theta (\cdot, \hat \tau)\|_4 &\leq C e^{(a_0+2 \delta_0)\hat \tau}\qquad &\forall \hat \tau\in [-k, \bar\tau]\, .
\end{align}
\begin{lemma}\label{l:ugly-lemma}
Under the assumptions of Lemma \ref{l:final-estimates} we have
\begin{align}
\|D \mathscr{G} (\cdot, \tau)\|_{4} &\leq C e^{(a_0+2\delta_0) \tau}\label{e:DG}\\
\|r D\mathscr{G} (\cdot, \tau)\|_{2} &\leq C e^{(a_0+2\delta_0) \tau}\label{e:rDG}\\
\||D V_{\text{lin}}| |\nabla \Omega| (\cdot, \tau)\|_4 + \||D V| |\nabla \Omega| (\cdot, \tau) \|_4 &\leq C e^{(a_0+2 \delta_0) \tau}\label{e:DVDOmega}\\
\|r |D V_{\text{lin}}| |\nabla \Omega| (\cdot, \tau)\|_2 + \|r |D V| |\nabla \Omega| (\cdot, \tau) \|_2 &\leq C e^{(a_0+2 \delta_0) \tau}\label{e:rDVDOmega}\\
\|r^{-1}V_{\text{lin}} D \Omega (\cdot, \tau)\|_4 + \|r^{-1} V D\Omega (\cdot, \tau)\|_4 &\leq C e^{(a_0+2 \delta_0)\tau}\, .\label{e:comm-term}
\end{align}
\end{lemma}
\begin{proof}
{\bf Proof of \eqref{e:DG} and of \eqref{e:rDG}.} We break the terms as
\begin{align}
\|D\mathscr{G}\|_4 \leq & C \|DV D\bar\Omega\|_4 + C \|V D^2 \bar \Omega\|_4 + \|DVD\Omega_{\text{lin}}\|_4 + \|VD^2\Omega_{\text{lin}}\|_4\nonumber\\
&+ C \|DVD\Omega_r\|_4 + C \|VD^2\Omega_r\|_4 + C \|DV_{\text{lin}}D\Omega_{\text{lin}}\|_4 + C \|V_{\text{lin}}D^2 \Omega_{\text{lin}}\|_4\nonumber\\
& + \|D V_r D\Omega_{\text{lin}}\|_4 + \|V_r D^2 \Omega_{\text{lin}}\|_4 + \|DV_{\text{lin}}D\Omega_r\|_4 + \|V_{\text{lin}} D^2 \Omega_r\|_4\,
\end{align}
and
\begin{align}
\|rD\mathscr{G}\|_2 \leq & C \|rDV D\bar\Omega\|_2 + C \|rV D^2 \bar \Omega\|_2 + \|rDVD\Omega_{\text{lin}}\|_2 + \|rVD^2\Omega_{\text{lin}}\|_2\nonumber\\
&+ C \|rDVD\Omega_r\|_2 + C \|VrD^2\Omega_r\|_2 + C \|rDV_{\text{lin}}D\Omega_{\text{lin}}\|_2 + C \|rV_{\text{lin}}D^2 \Omega_{\text{lin}}\|_2\nonumber\\
& + \|rD V_r D\Omega_{\text{lin}}\|_2 + \|rV_r D^2 \Omega_{\text{lin}}\|_2 + \|rDV_{\text{lin}}D\Omega_r\|_2 + \|rV_{\text{lin}} D^2 \Omega_r\|_2\, .
\end{align}
The terms involving $\Omega$ and $\bar\Omega$ are where we use the baseline $L^2$ estimate. Observe that
\begin{equation}\label{e:interpolating-baseline}
\|\Omega (\cdot, \tau)\|_{4} \leq \|\Omega (\cdot, \tau)\|_{\infty}^{1/2} \|\Omega (\cdot, \tau)\|^{1/2}_{2}
\leq C e^{(a_0 + \delta_0+1/4)\tau}\,
\end{equation}
and, by Calder{\'o}n-Zygmund,
\begin{equation}\label{e:CZ-baseline}
\|D K_2* \Omega (\cdot, \tau)\|_{4} \leq C \|\Omega (\cdot, \tau)\|_{4} \leq C e^{(a_0+\delta_0+1/4) \tau}\, .
\end{equation}
Next we estimate
\begin{align}
\|DV D\bar\Omega (\cdot, \tau)\|_4 \leq \|D \bar \Omega (\cdot, \tau)\|_\infty \|DV (\cdot, \tau)\|_4 \leq C \|\Omega (\cdot, \tau)\|_4
\leq C e^{(a_0+\delta_0+1/4) \tau}\, \label{e:per-bar-1}\\
\|r DV D\bar\Omega (\cdot, \tau)\|_2 \leq \|r D \bar \Omega (\cdot, \tau)\|_\infty \|DV (\cdot, \tau)\|_2 \leq C \|\Omega (\cdot, \tau)\|_2
\leq C e^{(a_0+\delta_0+1/2) \tau}\, .\label{e:per-bar-1-weight}
\end{align}
Next, recalling Lemma \ref{l:extension} we get
\begin{equation}
\|V (\cdot , \tau)\|_{L^2 (B_R)} \leq C R \|\Omega (\cdot, \tau)\|_2 \leq C R e^{(a_0+\delta_0+1/2) \tau}\, .
\end{equation}
However, using $\int_{B_R} V (\cdot, \tau) =0$, we can in fact estimate also
\begin{equation}
\|V (\cdot, \tau)\|_{L^4 (B_R)} \leq C R^{1/2} \|\Omega (\cdot, \tau)\|_2 \leq C R^{1/2} e^{(a_0+\delta_0+1/2)\tau}\, .
\end{equation}
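A possible justification of the last bound: by the scale-invariant Gagliardo--Nirenberg (Ladyzhenskaya) inequality---whose constant is independent of $R$ for fields with vanishing average on $B_R$---combined with the Calder{\'o}n--Zygmund estimate $\|DV (\cdot, \tau)\|_2 \leq C \|\Omega (\cdot, \tau)\|_2$, we have
\begin{equation*}
\|V (\cdot, \tau)\|_{L^4 (B_R)} \leq C \|V (\cdot, \tau)\|_{L^2 (B_R)}^{1/2} \|DV (\cdot, \tau)\|_2^{1/2} \leq C \left(R \|\Omega (\cdot, \tau)\|_2\right)^{1/2} \|\Omega (\cdot, \tau)\|_2^{1/2} = C R^{1/2} \|\Omega (\cdot, \tau)\|_2\, .
\end{equation*}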
In particular we can infer
\begin{equation}
\| (1+|\xi|)^{-(1+\varepsilon)} V (\cdot, \tau)\|_2 \leq C(\varepsilon) e^{(a_0+\delta_0+1/2) \tau}
\end{equation}
and
\begin{equation}
\| (1+|\xi|)^{-(1/2+\varepsilon)} V (\cdot, \tau)\|_4 \leq C (\varepsilon) e^{(a_0+\delta_0+1/2) \tau}
\end{equation}
for every positive $\varepsilon$. On the other hand, given that $|D^2 \bar \Omega (\xi)|\leq C (1+|\xi|)^{-2-{\bar\alpha}}$, we easily infer
\begin{align}
\|V D^2 \bar \Omega (\cdot, \tau)\|_4 &\leq C \|(1+|\xi|)^{-{1}} V (\cdot, \tau)\|_4 \leq C e^{(a_0+\delta_0+1/2) \tau}\label{e:per-bar-2}\\
\|r V D^2 \bar \Omega (\cdot, \tau)\|_2 &\leq C \|(1+|\xi|)^{-1-{\bar\alpha}} V (\cdot, \tau)\|_2 \leq C e^{(a_0+\delta_0+1/2) \tau}\, .\label{e:per-bar-2-r}
\end{align}
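For completeness, we indicate how the weighted bounds on $V$ used above follow from the estimates on balls via a dyadic decomposition. For the $L^2$ bound, using $(1+|\xi|)^{-(1+\varepsilon)} \leq 2^{-j(1+\varepsilon)}$ on the annulus $B_{2^{j+1}}\setminus B_{2^j}$, we get
\begin{align*}
\| (1+|\xi|)^{-(1+\varepsilon)} V (\cdot, \tau)\|_2^2 &\leq \|V (\cdot, \tau)\|_{L^2 (B_1)}^2 + \sum_{j\geq 0} 2^{-2j(1+\varepsilon)} \|V (\cdot, \tau)\|_{L^2 (B_{2^{j+1}})}^2\\
&\leq C \sum_{j\geq 0} 2^{-2j \varepsilon} \|\Omega (\cdot, \tau)\|_2^2 \leq C(\varepsilon)\, e^{2(a_0+\delta_0+1/2)\tau}\, ,
\end{align*}
and the $L^4$ bound is entirely analogous.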
From now on we will not handle the terms with the weight $r$, as the proof is entirely analogous: we will just focus on the $L^4$ estimates and leave the computations with the weight to the reader.
For the two quadratic terms in $V_{\text{lin}}$ we can use Lemma \ref{l:pointwise} to achieve
\begin{equation}\label{e:lin-lin}
\|DV_{\text{lin}}D\Omega_{\text{lin}} (\cdot, \tau)\|_4 + \|V_{\text{lin}}D^2 \Omega_{\text{lin}} (\cdot, \tau)\|_4 \leq C e^{2a_0 \tau}\, .
\end{equation}
Likewise we can estimate
\begin{equation}\label{e:lin-per}
\|DV D \Omega_{\text{lin}} (\cdot, \tau)\|_4 + \|V D^2\Omega_{\text{lin}}\|_4 \leq C e^{a_0\tau}\|\Omega (\cdot, \tau)\|_X \leq C e^{(2a_0+\delta_0) \tau}\, ,
\end{equation}
(where for the second term we use the decay at infinity of $D^2 \Omega_{\text{lin}}$ to compensate for the moderate growth of $V$; the argument is the same as for \eqref{e:per-bar-2} and we do not repeat it here).
Observe next that, by \eqref{e:sfava}, $\|D \Omega_r (\cdot, \tau)\|_\infty \leq C$ for $\tau\leq 0$. Hence the term $DV D\Omega_r$ can be estimated as in \eqref{e:per-bar-1}:
\begin{equation}\label{e:per-err-1}
\|DV D\Omega_r (\cdot, \tau)\|_4 \leq C \|DV (\cdot, \tau)\|_4 \leq C e^{(a_0+\delta_0+1/4) \tau}\, .
\end{equation}
As for the other term, differentiating once more and arguing as for \eqref{e:sfava} we get:
\begin{align}
& |D^2 \Omega_r (\xi, \tau)| \nonumber\\
\leq & C |\xi|^{-2-{\bar\alpha}} \mathbf{1}_{|\xi|\geq e^{-\tau/\alpha}} + (e^{\tau/\alpha} |\xi|^{-1-{\bar\alpha}} + e^{2\tau/\alpha} |\xi|^{-{\bar\alpha}} + e^{3\tau/\alpha} |\xi|^{1-{\bar\alpha}}) \mathbf{1}_{e^{-\tau/\alpha} R \geq |\xi|\geq e^{-\tau/\alpha}}\nonumber\\
\leq & C |\xi|^{-2-{\bar\alpha}} \mathbf{1}_{|\xi|\geq e^{-\tau/\alpha}} + C e^{3\tau/\alpha} |\xi|^{1-{\bar\alpha}} \mathbf{1}_{e^{-\tau/\alpha} R \geq |\xi|\geq e^{-\tau/\alpha}}\, . \label{e:sfava-2}
\end{align}
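The absorption of the middle terms in the last inequality is justified because on the annulus we have $|\xi|\geq e^{-\tau/\alpha}$, hence
\begin{equation*}
e^{\tau/\alpha} |\xi|^{-1-{\bar\alpha}} = e^{\tau/\alpha} |\xi|^{-2}\, |\xi|^{1-{\bar\alpha}} \leq e^{3\tau/\alpha} |\xi|^{1-{\bar\alpha}}\, ,
\qquad
e^{2\tau/\alpha} |\xi|^{-{\bar\alpha}} \leq e^{3\tau/\alpha} |\xi|^{1-{\bar\alpha}}\, .
\end{equation*}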
We can thus argue similarly as for \eqref{e:per-bar-2} to conclude
\begin{align}\label{e:per-err-2}
\|V D^2 \Omega_r (\cdot, \tau)\|_4 &\leq C \|(1+|\xi|)^{-3/2} V (\cdot, \tau)\|_4 + C e^{3\tau/\alpha} \|V\|_{L^4 (B_{R e^{-\tau/\alpha}})}\nonumber\\ &\leq C e^{(a_0+\delta_0+1/4) \tau}\, .
\end{align}
In order to handle the remaining four terms, we recall that, by Lemma \ref{l:pointwise},
\begin{align}
\|V_{\text{lin}} (\cdot, \tau) \|_\infty &\leq C e^{a_0 \tau}\\
|DV_{\text{lin}} (\xi, \tau)| &\leq C e^{a_0\tau} |\xi|^{-2-{\bar\alpha}}\\
|D^k \Omega_{\text{lin}} (\xi, \tau)| &\leq C e^{a_0\tau} |\xi|^{-2-k-{\bar\alpha}}\, .
\end{align}
On the other hand, owing to the computations in this and the previous section we can also write
\begin{align*}
|V_r (\xi, \tau)| &\leq C |\xi|^{1- {\bar\alpha}} \mathbf{1}_{|\xi|\geq e^{-\tau/\alpha}}\\
|DV_r (\xi, \tau)| + |\Omega_r (\xi, \tau)| &\leq C |\xi|^{-{\bar\alpha}}\mathbf{1}_{|\xi|\geq e^{-\tau/\alpha}} + C e^{\tau/\alpha} |\xi|^{1-{\bar\alpha}}
\mathbf{1}_{e^{-\tau/\alpha} R \geq |\xi|\geq e^{-\tau/\alpha}}\\
|D^k \Omega_r (\xi, \tau)| &\leq C |\xi|^{-k-{\bar\alpha}}\mathbf{1}_{|\xi|\geq e^{-\tau/\alpha}} + C e^{k \tau/\alpha} |\xi|^{1-{\bar\alpha}}
\mathbf{1}_{e^{-\tau/\alpha} R \geq |\xi|\geq e^{-\tau/\alpha}}\, .
\end{align*}
Integrating the estimates over the respective domains we easily get
\begin{equation}\label{e:lin-err}
\|D V_r D\Omega_{\text{lin}}\|_4 + \|V_r D^2 \Omega_{\text{lin}}\|_4 + \|DV_{\text{lin}}D\Omega_r\|_4 + \|V_{\text{lin}} D^2 \Omega_r\|_4\leq C e^{(a_0+1)\tau}\, .
\end{equation}
\medskip
{\bf Remaining estimates.} The two terms \eqref{e:DVDOmega} and \eqref{e:rDVDOmega} have already been covered in the argument above. It remains to handle \eqref{e:comm-term}. Notice that, by Lemma \ref{l:pointwise} and Proposition \ref{p:X-bounds} we have
\begin{align}
\|r^{-1} V (\cdot, \tau)\|_\infty &\leq C \|\Omega (\cdot, \tau)\|_X \leq C e^{(a_0+\delta_0) \tau}\\
\|r^{-1} V_{\text{lin}} (\cdot, \tau)\|_\infty & \leq C e^{a_0\tau}\, .
\end{align}
We thus conclude easily
\begin{align}
\|r^{-1} V D\Omega (\cdot, \tau)\|_4 & \leq C e^{(a_0+\delta_0) \tau}\|D\Omega (\cdot, \tau)\|_4 \leq C e^{2(a_0+\delta_0) \tau}\, \\
\|r^{-1} V_{\text{lin}} D\Omega(\cdot, \tau)\|_4 &\leq C e^{a_0\tau}\|D\Omega (\cdot, \tau)\|_4 \leq C e^{(2a_0+\delta_0) \tau}\, .
\end{align}
\end{proof}
We next differentiate \eqref{e:master-10} in $r$ in order to achieve identities similar to \eqref{e:master-angular-1} and \eqref{e:master-angular-2}. This time, given the ambiguity with $V_r$ and $\Omega_r$, we write $,r$ in the subscript to denote the radial derivative of {\em any} function.
\begin{align}
& \partial_\tau \Omega_{,r} + \left(1-\frac{1}{\alpha}\right) \Omega_{,r} + \left(\left(-\frac{\xi}{\alpha} + \beta \bar V + V_r + V_{\text{lin}} + V \right)\cdot \nabla\right) \Omega_{,r}\nonumber\\
= &\mathscr{G}_{,r} - (V_{\text{lin}, r}\cdot \nabla) \Omega - (V_{,r} \cdot \nabla) \Omega - \beta (\bar V_{,r} \cdot \nabla) \Omega - (V_{r,r}\cdot \nabla ) \Omega \label{e:master-radial-1}\\
& \partial_\tau r\Omega_{,r} - r \Omega_{,r} + \left(\left(-\frac{\xi}{\alpha} + \beta \bar V + V_r + V_{\text{lin}} + V \right)\cdot \nabla\right) (r \Omega_{,r})\nonumber\\
= & r \mathscr{G}_{,r} - r (V_{\text{lin}, r}\cdot \nabla) \Omega - r (V_{,r} \cdot \nabla) \Omega - r (\bar V_{,r} \cdot \nabla) \Omega - r (V_{r,r}\cdot \nabla ) \Omega\nonumber\\
& + \Omega_{,r} (V_{\text{lin}} + V)\cdot \nabla r \, . \label{e:master-radial-2}
\end{align}
Multiplying by $(\Omega_{,r})^3$ and $r \Omega_{,r}$ respectively, and using the estimates \eqref{e:DG} and \eqref{e:rDG}, we achieve, in the respective cases:
\begin{align}
\frac{d}{d\tau} \|\Omega_{,r} (\cdot, \tau)\|_4 &\leq C e^{(a_0+2 \delta_0)\tau}
+ C \|D V_{\text{lin}} D\Omega (\cdot, \tau)\|_4 + C\|D V D \Omega (\cdot, \tau)\|_4\nonumber\\
& \qquad\qquad + C \|\bar{V}_{,r}\cdot \nabla \Omega (\cdot, \tau)\|_4 + \|DV_r D \Omega (\cdot, \tau)\|_4\label{e:master-radial}\\
\frac{d}{d\tau} \|r\Omega_{,r} (\cdot, \tau)\|_2 &\leq C e^{(a_0+2 \delta_0)\tau} + C \|r V_{\text{lin}, r} D\Omega (\cdot, \tau)\|_2 + C\|r (V_{,r}\cdot \nabla) \Omega (\cdot, \tau)\|_2\nonumber\\
&+ C \|r \bar{V}_{,r}\cdot \nabla \Omega (\cdot, \tau)\|_2 + \|r (V_{r,r} \cdot\nabla) \Omega (\cdot, \tau)\|_2
+ \|V_{\text{lin}} D\Omega \|_2 + \|V D\Omega\|_2\label{e:master-radial-r}\, .
\end{align}
Note next that
\begin{align}
\|D V_{\text{lin}} D\Omega (\cdot, \tau)\|_4 + \|D V D \Omega (\cdot, \tau)\|_4 &\leq (\|DV_{\text{lin}} (\cdot, \tau)\|_\infty +
\|DV (\cdot, \tau)\|_\infty) \|D\Omega (\cdot, \tau)\|_4\nonumber\\
& \leq C e^{(2a_0+\delta_0)\tau}\, ,
\end{align}
and likewise
\begin{align}
\|r D V_{\text{lin}} D\Omega (\cdot, \tau)\|_2 + \|r D V D \Omega (\cdot, \tau)\|_2 &\leq (\|DV_{\text{lin}} (\cdot, \tau)\|_\infty +
\|DV (\cdot, \tau)\|_\infty) \|r D\Omega (\cdot, \tau)\|_2\nonumber\\
& \leq C e^{(2a_0+\delta_0)\tau}\, .
\end{align}
The terms $(\bar V_{, r} \cdot \nabla) \Omega$ and $r(\bar V_{,r} \cdot \nabla)\Omega$ can be bounded observing that they involve only the angular derivative and that $\bar V_{,r}$ is bounded. Since the angular derivative has already been estimated (this is the reason for estimating it {\em before} estimating the radial derivative), we get
\begin{align}
\|\bar{V}_{,r}\cdot \nabla \Omega (\cdot, \tau)\|_4 &\leq C \|r^{-1} \Omega_\theta (\cdot, \tau)\|_4 \leq C e^{(a_0+2\delta_0)\tau}\, .\\
\|r \bar{V}_{,r}\cdot \nabla \Omega (\cdot, \tau)\|_2 &\leq C \|\Omega_\theta (\cdot, \tau)\|_2 \leq C e^{(a_0+2 \delta_0)\tau}\, .
\end{align}
As for $DV_r D\Omega$ and $r DV_r D\Omega$, we observe that $\|DV_r\|_\infty \leq C e^\tau$ and thus we easily get
\begin{align}
\|DV_r D\Omega (\cdot, \tau)\|_4 &\leq Ce^{\tau} \| D\Omega (\cdot, \tau)\|_4 \leq Ce^{(a_0+\delta_0+1)\tau}\\
\|r DV_r D\Omega (\cdot, \tau)\|_2 &\leq Ce^{\tau} \|r D\Omega (\cdot, \tau)\|_2 \leq Ce^{(a_0+\delta_0+1)\tau}\, .
\end{align}
We finally need to estimate $\|V_{\text{lin}} D\Omega\|_2$ and $\|V D\Omega\|_2$, but we observe that this has already been done in the previous section, since they correspond to the terms $\mathscr{F}_1$ and $\mathscr{F}_4$ in \eqref{e:master-2}, cf. \eqref{e:F-1} and \eqref{e:F-4}.
Summarizing, we conclude
\begin{align}
\frac{d}{d\tau} \|\Omega_{,r} (\cdot, \tau)\|_4 &\leq C e^{(a_0+\delta_0+1/2)\tau}\\
\frac{d}{d\tau} \|r\Omega_{,r} (\cdot, \tau)\|_2 &\leq C e^{(a_0+\delta_0+1/2)\tau}\, ,
\end{align}
which we then integrate between $-k$ and $\bar\tau$ to achieve the desired conclusion.
\section{Introduction}
We study the classical statistical problem of regression in general spaces. Given an instance metric space $(\mathcal{X},\rho)$ and a value space $\mathcal{Y}$ with a loss $\ell$, one observes instances in $\mathcal{X}$ and aims to predict the corresponding values in $\mathcal{Y}$. The learning procedure follows an iterative process where, successively, the learner is given an instance $X_t$ and predicts a value $\hat Y_t$ based on the historical samples and the new instance. The learner's goal is to minimize the loss of its predictions $\hat Y_t$ compared to the true value $Y_t$. In particular, $\mathcal{Y}=\{0,1\}$ (resp. $\mathcal{Y}=\{0,\ldots,k\}$) with 0-1 loss corresponds to binary (resp. multiclass) classification while $\mathcal{Y}=\mathbb{R}$ corresponds to the classical regression setting. These basic regression settings have then been extended to non-Euclidean value spaces, needed to analyze new types of data arising in numerous data analysis applications. Such examples include directional data on spherical and circular spaces \cite{chang1989spherical,mardia2000directional}, data lying on manifolds \cite{shi2009intrinsic,davis2010population,thomas2013geodesic}, Banach spaces \cite{ferraty2011kernel}, Hilbert spaces \cite{biess2019regression} or Hadamard spaces \cite{lin2021total}. This paper studies \emph{metric-valued regression} where both instances and values lie in general metric spaces. This general setting, adopted in the recent literature on universal learning \cite{hanneke2021open,tsir2022metric,blanchard2022universal}, includes and extends the specific classification and regression settings mentioned above. In this context, we are interested in obtaining predictions with low average loss. It is well known, however, that obtaining vanishing average loss is impossible if the values are noisy.
As a result, in the Bayesian version of this problem, where the samples $(\mathbb{X},\mathbb{Y}):=(X_t,Y_t)_{t\geq 1}$ are drawn independently and identically distributed (i.i.d.) from an unknown distribution over $\mathcal{X}\times\mathcal{Y}$, a learning procedure's long-term average loss is compared to the minimal loss (the term \emph{risk} is more commonly used in the Bayesian literature) of a fixed predictor, where the minimum is taken over \emph{all} predictor functions $f:\mathcal{X}\to\mathcal{Y}$. In the case of squared loss, and value space $\mathbb{R}$, the Bayes minimal risk is precisely $Var[Y|X]$. In this work, we study the considerably more general case of non-i.i.d. processes. Similarly to the Bayesian case, one aims to minimize the average \emph{excess} loss of the predictions compared to some fixed measurable predictor function $f:\mathcal{X}\to\mathcal{Y}$, i.e., $\frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t,Y_t)-\ell(f(X_t),Y_t)$. We are then interested in \emph{consistent} learning rules which have vanishing long-run average excess loss almost surely.
There is a rich literature analyzing the general regression problem under i.i.d. sequences. In this setting, a classical result is that for the Euclidean space, $k-$nearest neighbor (kNN) with $k/\ln T\to\infty$ and $k/T\to 0$ is consistent under mild assumptions on the distribution of $(X_1,Y_1)$ \cite{stone1977consistent,devroye1994strong,devroye2013probabilistic}. Other variants of kNN algorithms and simple learning rules have been proposed to obtain almost-sure consistency for larger classes of i.i.d. processes and spaces. For instance, in the case of binary classification and bounded regression, under mild conditions on the instance space $\mathcal{X}$, one can achieve consistency for any i.i.d. process, which we refer to as \emph{universal Bayes consistency} \cite{devroye2013probabilistic,gyorfi:02}. More recently, this result was extended to any essentially separable metric space $\mathcal{X}$ when the value space $\mathcal{Y}$ is finite or countable and for 0-1 loss \cite{hanneke2021bayes,gyorfi2021universal}, then generalized to any separable metric space $(\mathcal{Y},\ell)$ \cite{tsir2022metric} under the constraint that $(X,Y)$ has a finite first order moment. A natural question then becomes whether such results can be obtained in the non-i.i.d. setting. Various assumptions on the sequence $(\mathbb{X},\mathbb{Y})$, which are natural relaxations of the i.i.d. condition, have been proposed. In particular, the universality result for binary classification was extended to \emph{stationary ergodic} processes \cite{morvai1996nonparametric,gyorfi1999simple,gyorfi:02} or processes satisfying the law of large numbers \cite{morvai1999regression,gray2009probability,steinwart2009learning}.
In this work, we are interested in the fundamental question of \emph{learnability}. Namely, we aim to understand which are the minimal assumptions on the problem sequences for which consistency is still achievable. We follow the general \emph{optimistic decision theory} introduced by \cite{hanneke2021learning} which formalizes the general paradigm of ``learning whenever learning is possible". Precisely, the \emph{provably} minimal assumption for a given objective is that this task is achievable, or in other words that learning is possible. The goal then becomes to 1. characterize for which settings this objective is achievable---this is the minimal assumption for any learner---and 2. if possible, provide learning rules that achieve this objective whenever it is achievable. These algorithms are called \emph{optimistically universal} learning rules and enjoy the convenient property that if they failed the objective, any other algorithm would fail as well.
This paradigm was recently used to study minimal assumptions for the noiseless (realizable) case where there exists an unknown underlying function $f^*:\mathcal{X}\to\mathcal{Y}$ such that $Y_t=f^*(X_t)$ \cite{hanneke2021learning}. In this setting, the above universal consistency objective is equivalent to having vanishing long-run average error of the predictions for any measurable target function $f^*$. Indeed, in this specific case, the best fixed function to which we can compare the predictions of a learner is $f^*$, which always achieves zero loss. The two main questions of interest described above were very recently settled for noiseless responses, unveiling an important dichotomy between bounded and unbounded losses. For bounded losses, \cite{blanchard2021universal,blanchard2022universal} gave a characterization of the set $\text{SOUL}$ (Strong Online Universal Learning) of stochastic processes $\mathbb{X}$ for which universal learning is possible. The latter work also provides a learning rule 2C1NN, which is a simple variant of the 1-Nearest-Neighbor algorithm (1NN) and is optimistically universal for this noiseless setting with bounded losses. In particular, the set of learnable processes $\text{SOUL}$ for noiseless online learning and bounded loss, is significantly larger than stationary processes or related assumptions. On the other hand, the case of unbounded losses is considerably more restrictive. Indeed, learnable processes necessarily visit a \emph{finite} number of distinct instance points almost surely (Condition $\text{FS}$ below) and simple memorization is optimistically universal \cite{blanchard2022optimistic}. On the other hand, the more general non-realizable setting was not yet characterized. 
In this framework and for bounded losses, the very recent preprint \cite{hanneke2022bayes} proposes an algorithm for metric losses which achieves consistency for \emph{arbitrary} responses $\mathbb{Y}$ for a large family of instance processes $\mathbb{X}$ (condition $\text{CS}$ below, which intuitively asks that the sub-measure induced by empirical visits of the input sequence be continuous). Importantly, the response sequence $\mathbb{Y}$ is completely unrestricted and can be arbitrarily correlated with the instance sequence $\mathbb{X}$. There is however a significant gap between the proposed $\text{CS}$ condition and the learnable processes $\text{SOUL}$ in the bounded noiseless setting. \cite{hanneke2022bayes} then left as an open problem the question of identifying the precise provably-minimal conditions to achieve consistency, and whether there exists an optimistically universal learning rule.
In this paper, we solve this question and extend it to \emph{adversarial} responses, which slightly generalize arbitrary responses. We will show that we can obtain the same results for adversarial responses as we would obtain if considering arbitrary responses, without any generalizability cost. Intuitively, adversarial responses can not only arbitrarily depend on the instance sequence $\mathbb{X}$, but may also depend on past predictions and (private) randomness used by the learner. Although adversarial responses coincide with arbitrary responses if the learner is \emph{deterministic}, this is a non-trivial generalization for randomized algorithms---note that randomization is necessary to obtain guarantees for general online learning frameworks \cite{bubeck2012regret,slivkins2019introduction}. We now make precise the distinction between arbitrary and adversarial responses. In the context of online learning, arbitrary responses correspond to an \emph{oblivious opponent}. It is known that for some large classes of algorithms, having guarantees against any oblivious opponent yields the same guarantees against any adversarial opponent as well \cite{cesa2006prediction}. This statement holds for learners such that the immediate expected loss depends only on the past responses. The learning rule proposed in \cite{hanneke2022bayes} falls into this category and, as a result, enjoys the same consistency guarantees for adversarial responses as shown for arbitrary responses in the original manuscript. However, for learning rules which may \emph{explicitly} depend on the past predictions---which will be the case for some of our proposed optimistically universal learning rules---such a result does not hold in general. Hence, at the level of generality considered in this work, adversarial responses seem to be a non-trivial generalization of arbitrary responses, for which we can obtain the same guarantees without any generalizability cost.
We note that in this paper, we consider excess loss---or regret---compared to any fixed prediction function: in the case of adversarial responses, our regret guarantees hold against the losses obtained by any fixed prediction function along the \emph{observed} response trajectory. As such, we do not analyze \emph{counterfactual} excess loss---or regret---for which it is impossible to have sublinear rates against general unrestricted adversaries \cite{slivkins2019introduction}.
There is a rich theory for arbitrary or adversarial responses $\mathbb{Y}$ when the reference functions $f^*:\mathcal{X}\to\mathcal{Y}$ are restricted to specific function classes $\mathcal{F}$. As a classical example, for the noiseless binary classification setting, there exist learning rules which guarantee a finite number of mistakes for arbitrary sequences $\mathbb{X}$, if and only if the class $\mathcal{F}$ has finite Littlestone dimension \cite{littlestone1988learning}. Other restrictions on the function class have been considered \cite{cesa2006prediction,ben2009agnostic,rakhlin2015online}. Universal learning diverges from this line of work by imposing no restriction on the function class---allowing \emph{all} measurable functions---and instead restricting the instance processes $\mathbb{X}$ to the optimistic set where consistency is achievable. Nevertheless, the algorithms we introduce for adversarial responses use as a subroutine the traditional exponentially weighted forecaster for learning with expert advice from the online learning literature, also known as the Hedge algorithm \cite{littlestone1994weighted,cesa1997use,freund1997decision}.
\paragraph{Contributions.}We first provide a complete characterization of the provably minimal assumptions for regression with adversarial responses, i.e., of the set of learnable processes $\text{SOLAR}$ (Strong universal Online Learning with Adversarial Responses) in the \emph{bounded loss} setting, which is the main interest of universal learning as shown in \cite{blanchard2022optimistic}. An interesting discovery is that learnability for the general regression problem is fundamentally dependent on the value space $(\mathcal{Y},\ell)$. We show that the minimal condition for learnability $\text{SOLAR}$ is either the $\text{CS}$ condition which was considered in \cite{hanneke2022bayes}, or the significantly larger set of processes learnable in the noiseless setting $\text{SOUL}$ for which a characterization is known (condition ${\text{SMV}}$ below which intuitively asks that the process visit a sublinear number of sets from any measurable partition) \cite{blanchard2022universal}. We further precisely identify this alternative by providing a property on the value space $(\mathcal{Y},\ell)$ (Property $\text{F-TiME}$ below) such that if satisfied, $\text{SOLAR}={\text{SMV}}$ and otherwise $\text{SOLAR}=\text{CS}$. This property intuitively asks that the mean-estimation problem on $(\mathcal{Y},\ell)$ be achievable with a fixed error rate within a random time with fixed horizon. We show that this property is satisfied for all ``reasonable'' value spaces, e.g., totally-bounded spaces or countably-many-classes classification $(\mathbb{N},\ell_{01})$. On the other hand, there exist ``pathological'' value spaces for which adversarial regression is inherently harder than noiseless regression.
For both cases, we provide optimistically universal learning rules. It is worth noting that the learning rules designed for each alternative are crucially different in their techniques and in nature. In the alternative when $\text{F-TiME}$ is satisfied, the rule is \emph{implicit} in general: it uses the existing algorithm for finite-time-mean-estimation as a subroutine. However, given such an algorithm, the rule is \emph{explicit}. We show that this is the case for totally bounded value spaces and countable classification by providing explicit algorithms for finite-time-mean-estimation. On the other hand, the learning rule tailored for learning the more restrictive $\text{CS}$ processes is always explicit and relies on specific properties of $\text{CS}$ processes which are not satisfied by any other processes. This learning rule uses techniques similar to those introduced by \cite{hanneke2022bayes} but generalizes this result to non-metric losses satisfying specific relaxed triangle-inequality properties. This allows encompassing any power of a metric $|\cdot|^\alpha$ for $\alpha\geq 1$ and in particular the popular squared loss regression. In both alternatives, an implication of these results is that for ``reasonable'' losses, learning in the regression framework can be achieved even in the face of adversarially-chosen responses, and for a significantly larger family---$\text{CS}$ or ${\text{SMV}}$---of instance processes than the \emph{stationary} and \emph{non-stationary} processes previously considered in the traditional statistical learning literature (e.g., \cite{ryabko2006pattern}) outside of the optimistically universal learning theory.\\
For unbounded losses, we present a general result for mean estimation when the loss is a metric, which holds for adversarial responses. Precisely, this is the problem of predicting values in $\mathcal{Y}$ to minimize the loss on a sequence of samples $\mathbb{Y}$, which corresponds to regression without instances, $\mathcal{X}=\{0\}$. For example, in the case of i.i.d. sequences $\mathbb{Y}$, this is equivalent to the Fr\'echet mean estimation problem, for which different notions of consistency and generalizations have recently been examined \cite{evans2020strong,schotz2022strong,jaffe2022strong}. Note that for $\mathcal{Y}=\mathbb{R}$ with the Euclidean metric and i.i.d. sequences $\mathbb{Y}$ with finite first moment, this is exactly the problem of estimating the median of $Y$. We show, however, that mean estimation might not be achievable for adversarial responses, even in very simple cases. For instance, we show that real-valued regression on $\mathcal{Y}=\mathbb{R}$ with the Euclidean metric, loss $|\cdot|^p$, and adversarial responses is impossible, for any $p>1$. As a simple consequence, this translates into an alternative for adversarial regression. Indeed, for unbounded losses, we show that either $\text{SOLAR}=\text{FS} \ (=\text{SOUL})$ when mean estimation is achievable, or $\text{SOLAR}=\emptyset$ otherwise. In particular, metric losses fall in the first alternative. Further, there always exists an optimistically universal learning rule, which is inspired by weighted-average forecasters from prediction with expert advice \cite{cesa2006prediction}. The main difficulty is that the experts---fixed values in $\mathcal{Y}$---lie in a general metric space that may be infinite.\\
Last, we address the tremendous gap between learning with bounded losses and with unbounded losses---where learnable processes necessarily visit a finite number of instance values almost surely. This issue was raised as an open problem in the noiseless setting in \cite{blanchard2022optimistic}. Precisely, for unbounded losses, learning arbitrary measurable functions even in this realizable case is only possible under very restrictive instance processes. Hence, a natural question is whether imposing additional conditions on the responses would allow recovering the results from the bounded case. We propose a novel constraint asking that the sequence $\mathbb{Y}$ be \emph{empirically integrable}. Intuitively, this property asks that we can bound the tails of the empirical first moment of $\mathbb{Y}$. Under this additional constraint, we prove that learning under $\text{CS}$ or ${\text{SMV}}$ processes---depending on whether the value space satisfies $\text{F-TiME}$---can be achieved with optimistically universal learning rules, even for unbounded losses. The empirical integrability property is essentially necessary in order to obtain such results. Indeed, we show that in the i.i.d. setting, it exactly amounts to asking that the distribution have a finite first moment. As a result, our work significantly generalizes the main result of \cite{tsir2022metric}---which considers i.i.d. processes $(\mathbb{X},\mathbb{Y})$---to adversarial responses, $\text{CS}$ or ${\text{SMV}}$ instance processes, and a larger class of losses which in particular encompasses powers of a metric. Further, we also show that with only a finite first-order moment condition, even in the noiseless case, learning is not achievable at this level of generality on the instance processes $\mathbb{X}$.\\
Tables \ref{table:summary_of_results} and \ref{table:summary_of_learning_rules} summarize the known results from the literature and our contributions on learnability characterizations and proposed learning rules in universal learning. For clarity, we recall here the inclusions $\text{FS}\subset\text{CS}\subset{\text{SMV}}$, which are strict whenever $\mathcal{X}$ is infinite \cite{hanneke2021learning}. Further, $\text{CS}$ contains in particular all i.i.d., stationary ergodic, and stationary processes.
\begin{table}[h!]
\begin{center}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c |c |c |c|}
\hline
$\begin{array}{c}\text{Learning}\\
\text{setting}
\end{array}$& Bounded loss & Unbounded loss & $\begin{array}{c}
\text{Unbounded loss with} \\
\text{empirically integrable}\\
\text{responses}
\end{array}$\\ [0.5ex]
\hline\hline
$\begin{array}{c}\text{Noiseless}\\
\text{responses}
\end{array}$ & $
\text{SOUL}={\text{SMV}} $ \cite{blanchard2022universal} & $\text{SOUL}=\text{FS}$ \cite{blanchard2022optimistic} & $\begin{array}{c}
\text{Idem as} \\
\text{bounded loss}
\end{array} \mb{[\text{This paper}]} $\\
\hline
$\begin{array}{c}
\text{Adversarial}\\
\text{(or arbitrary)}\\
\text{responses}
\end{array}$ & $\begin{array}{c}
\text{SOLAR}\supset\text{CS} \text{ (metric loss) \cite{hanneke2022bayes}} \\[0.5ex]
\hline
\text{Does }(\mathcal{Y},\ell) \text{ satisfy }\text{F-TiME} ?\\
\begin{cases}
\text{Yes} & \text{SOLAR} ={\text{SMV}}\\
\text{No} & \text{SOLAR} =\text{CS}
\end{cases}\mb{[\text{This paper}]}
\end{array}$ & $\begin{array}{c}
\text{Is ME achievable?} \\
\begin{cases}
\text{Yes} & \text{SOLAR}=\text{FS} \\
\text{No} & \text{SOLAR} =\emptyset
\end{cases} \mb{[\text{This paper}]}
\end{array}$ & $\begin{array}{c}
\text{Idem as} \\
\text{bounded loss}
\end{array} \mb{[\text{This paper}]} $\\
\hline
\end{tabular}
}
\caption{Characterization of the sets of learnable instance processes $\text{SOUL}$ and $\text{SOLAR}$ in universal consistency (ME = Mean-Estimation).}
\label{table:summary_of_results}
\end{center}
\end{table}
\begin{table}[h!]
\begin{center}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c |l |l |c|c|c|}
\hline
$\begin{array}{c}\text{Learning}\\
\text{setting}
\end{array}$& Loss (and response/setting constraints) & Learning rule & $\begin{array}{c}
\text{Guarantees} \\
\text{for which}\\
\text{processes }\mathbb{X}?
\end{array}$ &$\begin{array}{c}
\text{Optimist.}\\
\text{universal?}
\end{array}$ & Reference\\ [0.5ex]
\hline\hline
Bayesian & Finite or countable class., 01-loss & OptiNet & i.i.d. & No & \cite{hanneke2021bayes} \\
(i.i.d. & Real-valued regression + integrable & Proto-NN & i.i.d. & No & \cite{gyorfi2021universal}\\
responses) & Metric loss + integrable & MedNet & i.i.d. & No &\cite{tsir2022metric}\\
\hline
Noiseless & Bounded loss & 2C1NN &${\text{SMV}}$ & Yes &\cite{blanchard2022universal}\\
responses & Unbounded loss & Memorization &$\text{FS}$ & Yes &\cite{blanchard2022optimistic}\\
(realizable) & Unbounded + EI & 2C1NN &${\text{SMV}}$ & Yes & [This paper]\\
\hline
& Bounded loss + metric loss & Hedge-variant &$\text{CS}$ &Not always &\cite{hanneke2022bayes}\\
& Bounded loss + $\text{F-TiME}$ & $(1+\delta)$C1NN-hedged & ${\text{SMV}}$ & Yes & [This paper]\\
Adversarial & Bounded loss + not $\text{F-TiME}$ & Hedge-variant 2 & $\text{CS}$ & Yes & [This paper]\\
(or arbitrary) & Unbounded loss + ME & ME-variant & $\text{FS}$ & Yes &[This paper]\\
responses & Unbounded loss + not ME & N/A & $\emptyset $ & N/A &[This paper]\\
& Unbounded + EI + local $\text{F-TiME}$& $(1+\delta)$C1NN-hedged 2 &${\text{SMV}}$ & Yes & [This paper]\\
& Unbounded + EI + not local $\text{F-TiME}$ & Hedge-variant 3 & $\text{CS}$ & Yes & [This paper]\\
\hline
\end{tabular}
}
\caption{Proposed learning rules for universal consistency (ME = Mean-Estimation and EI = Empirical Integrability). Note: we refer to optimistic universality with respect to the set of processes $\mathbb{X}$ for which each learning rule has guarantees. In this context, an algorithm is optimistically universal if it is universally consistent for all processes under which universal learning is possible in the considered setting. \cite{hanneke2021bayes} showed that OptiNet, Proto-NN and MedNet are optimistically universal in another sense: their guarantees hold for all \emph{essentially separable} metric instance spaces $(\mathcal{X},\rho)$, which is exactly the optimistic set of metric spaces for which Bayesian universal learning is achievable. Our proposed learning rules also enjoy this property.}
\label{table:summary_of_learning_rules}
\end{center}
\end{table}
\paragraph{Paper outline.}After presenting the learning framework and definitions in Section \ref{sec:formal_setup}, we describe our main results in Section \ref{sec:main_results}, where we give a complete characterization of learnable processes and provide optimistically universal learning rules for all instance and value spaces. We first turn to the case of bounded losses, which is less restrictive, and explicitly construct an optimistically universal learning rule for totally-bounded value spaces in Section \ref{sec:totally_bounded_value_spaces}. The alternative for general, possibly non-totally-bounded, value spaces is characterized in Section \ref{sec:alternative}, where we provide optimistically universal learning rules for each alternative. We then turn to unbounded losses: in Section \ref{sec:mean_estimation} we show that for metric losses the learnable processes are identical to those of noiseless regression, by proving a universality result for mean estimation in general spaces, and we then give an alternative for adversarial regression in the general case. However, for unbounded losses these learnable processes are always very restrictive. Hence, in Section \ref{sec:unbounded_loss_moment_constraint} we propose a mild moment constraint on the responses under which all known results for bounded losses can be recovered even in the unbounded loss case. Finally, we discuss open research directions in Section \ref{sec:conclusion}.
\section{Formal setup and Preliminaries}
\label{sec:formal_setup}
\paragraph{Instance and value spaces.}We recall that a metric space is \emph{separable} if it contains a dense countable set. In the general \emph{metric-valued} regression problem, we observe inputs from an \emph{instance} separable metric space $(\mathcal{X},\rho)$ equipped with its Borel $\sigma-$algebra $\mathcal{B}$, and predict values from a \emph{value} separable metric space $(\mathcal{Y},|\cdot|)$ equipped with a loss $\ell$. Unless mentioned otherwise, we suppose that the loss is a power of the metric, i.e., there exists $\alpha\geq 1$ such that $\ell=|\cdot|^\alpha$. All of the results in this work can be generalized to \emph{essentially separable} metric instance spaces $(\mathcal{X},\rho)$, but for the sake of exposition, we will consider separable metric spaces $(\mathcal{X},\rho)$ in the rest of this paper. The notion of essentially separable metric space was introduced by \cite{hanneke2021bayes} and asks that for every probability measure $\mu$ on the $\sigma$-algebra $\mathcal{B}$, the metric probability space $(\mathcal{X},\rho,\mu)$ be separable, i.e., there exists $\mathcal{X}'\in\mathcal{B}$ with $\mu(\mathcal{X}')=1$ such that $(\mathcal{X}',\rho)$ is separable. \cite{hanneke2021bayes} showed that this is the largest class of instance metric spaces for which learning is possible, even in the Bayesian i.i.d. setting. In Sections \ref{sec:totally_bounded_value_spaces} and \ref{sec:alternative} of this work, we suppose that the loss $\ell$ is \emph{bounded}, i.e., $\sup_{y_1,y_2\in\mathcal{Y}}\ell(y_1,y_2)<\infty$, and in the rest of this paper we will use the notation $\bar\ell := \sup_{y_1,y_2\in\mathcal{Y}}\ell(y_1,y_2)$. The case of \emph{unbounded} losses is addressed in Sections \ref{sec:mean_estimation} and \ref{sec:unbounded_loss_moment_constraint}.
As an example, binary classification corresponds to $\mathcal{Y}=\{0,1\}$ together with the indicator loss $\ell_{01}(i,j) = \mathbbm{1}_{i\neq j}$. Similarly, this setup covers multiclass classification with finitely many classes $\mathcal{Y}=\{0,1,\ldots,k\}$ or with a countable number of classes $\mathcal{Y}=\mathbb{N}$, as well as classical regression with $\mathcal{Y}=\mathbb{R}$ and any $L^\alpha$ loss $\ell(x,y)=|x-y|^\alpha$ for $\alpha\geq 1$. We also introduce the notion of near-metrics, for which we will provide some results. We say that $\ell$ is a near-metric on $\mathcal{Y}$ if it is symmetric, satisfies $\ell(y,y)=0$ for all $y\in\mathcal{Y}$ and $\ell(y,y')>0$ for all $y'\neq y\in\mathcal{Y}$, and satisfies the relaxed triangle inequality $\ell(y_1,y_2)\leq c_\ell( \ell(y_1,y_3) + \ell(y_2,y_3))$ for a finite constant $c_\ell$.
\paragraph{Online learning.}We consider the \emph{online learning} framework where at step $t\geq 1$, one observes a new instance $X_t\in\mathcal{X}$ and predicts a value $\hat Y_t\in\mathcal{Y}$ based on the past history $(X_u,Y_u)_{u\leq t-1}$ and the new instance $X_t$ only. The learning rule may be randomized, where the (private) randomness used at each iteration $t$ is independent from the data generation process used to generate $Y_t$. Formally, an online learning rule $f_\cdot:=\{f_t\}_{t\geq 1}$ is defined as a sequence of measurable functions $f_t: \mathcal{R}_t \times \mathcal{X}^{t-1}\times \mathcal{Y}^{t-1} \times \mathcal{X} \to\mathcal{Y}$ together with a distribution $R_t$ on $\mathcal{R}_t$, where $\mathcal{R}_t$ denotes the space on which the randomness used by $f_t$ lies. The prediction at time $t$ is
\begin{equation*}
f_t(r_t; (X_u)_{\leq t-1},(Y_u)_{\leq t-1},X_t),
\end{equation*}
where $r_t\sim R_t$ is sampled from the distribution $R_t$, independently of the past history and of the current pair $(X_u,Y_u)_{\leq t}$. For simplicity, we may omit the randomness $r_t$ when possible and write directly $f_t:\mathcal{X}^{t-1}\times \mathcal{Y}^{t-1}\times \mathcal{X}\to\mathcal{Y}$.
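As an illustration of this formal interface, the following Python sketch implements a deterministic learning rule of the above type, namely a plain 1-nearest-neighbor predictor on $\mathcal{X}=\mathbb{R}$, matching the signature $f_t:\mathcal{X}^{t-1}\times\mathcal{Y}^{t-1}\times\mathcal{X}\to\mathcal{Y}$. This is only a toy example for concreteness (the default value and the noiseless target are arbitrary choices), not one of the learning rules analyzed in this paper.

```python
def nn_rule(history_x, history_y, x, default=0.0):
    """A deterministic online learning rule f_t: given the past
    (X_u, Y_u)_{u <= t-1} and the new instance X_t, output a value.
    Here: 1-nearest-neighbor on X = R with the usual metric."""
    if not history_x:
        return default  # arbitrary prediction before any data is seen
    # replay the response of the closest past instance
    i = min(range(len(history_x)), key=lambda u: abs(history_x[u] - x))
    return history_y[i]

# run the rule online on a short noiseless sequence Y_t = f*(X_t)
f_star = lambda x: 1.0 if x >= 0.5 else 0.0  # hypothetical target function
xs = [0.1, 0.9, 0.2, 0.8, 0.15]
preds, hx, hy = [], [], []
for x in xs:
    preds.append(nn_rule(hx, hy, x))
    hx.append(x)
    hy.append(f_star(x))
```

Note that after the first few observations the rule starts replaying correct labels for instances close to previously seen ones, which is the intuition behind the nearest-neighbor-based rules discussed later.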
\paragraph{Adversarial responses.}We are interested in general data-generating processes. To this end, a very general choice of instances and values is a general stochastic process $(\mathbb{X},\mathbb{Y}):=\{(X_t,Y_t)\}_{t\geq 1}$ on the product space $\mathcal{X}\times \mathcal{Y}$. This corresponds to the setting of arbitrarily dependent responses under instance processes $\mathbb{X}$, introduced in \cite{hanneke2022bayes}. In this work, we consider \emph{adversarial responses} $\mathbb{Y}$, which generalize arbitrarily dependent responses. Specifically, the difference is that the value $Y_t$ is also allowed to depend on the past private randomness $(r_u)_{u\leq t-1}$ used by the learning rule $f_\cdot$. Formally, the data generation is given by a stochastic process $\{(X_t,\mb Y_t)\}_{t\geq 1}$ where $\mb Y_t=\mb Y_t(\cdot\mid \cdot)$ is generated from a Markov kernel from $\mathcal{R}_1\times\cdots\times \mathcal{R}_{t-1}$ to $\mathcal{Y}$, using the realizations $r_1,\ldots,r_{t-1}$ of the sampled randomness of the learning rule. This data-generation process for the values can be viewed as a randomized measurable function $\mb Y_t:\mathcal{R}_1\times \cdots \times \mathcal{R}_{t-1}\to \mathcal{Y}$ correlated with the instance process $\mathbb{X}$. Having observed the sampled randomness $r_1\in \mathcal{R}_1,\ldots, r_{t-1}\in\mathcal{R}_{t-1}$ used by the learning rule $f_\cdot$, the target value at time $t$ is given by $Y_t:=\mb Y_t(r_1,\ldots ,r_{t-1}).$ For simplicity, we will refer to this adversarial response process as $\mathbb{Y}$, which allows us to view the data-generating process as a usual stochastic process on $\mathcal{X}\times \mathcal{Y}$ where the responses can depend on the randomness of the learning rule. If the learning rule is \emph{deterministic}, adversarial responses are equivalent to the arbitrarily dependent responses considered in \cite{hanneke2022bayes}, but this is not the case for general \emph{randomized} algorithms.
Note that only the responses can be adapted to the randomness used by the learning rule. The instances $\mathbb{X}$, however, are independent of this randomness.
\paragraph{Universal consistency.}In this general setting, we are interested in online learning rules which achieve low long-run average loss compared to any fixed prediction function. Precisely, given a learning rule $f_\cdot$ and an adversarial process $(\mathbb{X},\mathbb{Y})$, for any measurable function $f^*:\mathcal{X}\to\mathcal{Y}$, we define the long-run average excess loss as
\begin{equation*}
\mathcal{L}_{(\mathbb{X},\mathbb{Y})}(f_\cdot, f^*):= \limsup_{T\to\infty} \frac{1}{T} \sum_{t=1}^T \left(\ell(f_t(\mathbb{X}_{\leq t-1},\mathbb{Y}_{\leq t-1},X_t),Y_t) - \ell(f^*(X_t),Y_t) \right).
\end{equation*}
In particular, we say that the learning rule $f_\cdot$ is strongly consistent under $(\mathbb{X},\mathbb{Y})$ if for any measurable function $f^*:\mathcal{X}\to\mathcal{Y}$ we have
\begin{equation*}
\mathcal{L}_{(\mathbb{X},\mathbb{Y})}(f_\cdot, f^*) \leq 0,\quad (a.s.).
\end{equation*}
In this paper, we will only study \emph{strong} consistency, which we will then simply refer to as consistency. For example, if $(\mathbb{X},\mathbb{Y})$ is an i.i.d. process on $\mathcal{X}\times\mathcal{Y}$ following a distribution $\mu$ where $\mu$ has a finite first-order moment, achieving consistency is equivalent to reaching Bayes-optimal risk, which is defined as
\begin{equation*}
R^*:=\inf_f R(f) = \inf_f \mathbb{E}_{(X,Y)\sim \mu} \left[\ell(f(X),Y)\right],
\end{equation*}
where the infimum is taken over all measurable functions $f:\mathcal{X}\to\mathcal{Y}$. As introduced in \cite{hanneke2021learning,hanneke2022bayes}, consistency against all measurable functions is the natural extension of Bayes consistency to non-i.i.d. settings. The goal of universal learning under adversarial processes is to design learning rules which are consistent for any adversarial process $\mathbb{Y}$. Precisely, for any stochastic process $\mathbb{X}$ on $\mathcal{X}$, we say that $f_\cdot$ is \emph{universally consistent} for adversarial responses under $\mathbb{X}$ if it is consistent for any adversarial process $(\tilde \mathbb{X},\mathbb{Y})$ where $\tilde \mathbb{X}\sim\mathbb{X}$, i.e., $\tilde\mathbb{X}$ follows the same distribution as $\mathbb{X}$. In the case of unbounded losses, we may need to impose additional moment restrictions on $\mathbb{Y}$; we refer to Sections \ref{sec:main_results} and \ref{sec:unbounded_loss_moment_constraint} for details on the corresponding restrictions.
\paragraph{Optimistically universal learning.}Given this regression setup, we define $\text{SOLAR}$ (Strong universal Online Learning with Adversarial Responses) as the set of processes $\mathbb{X}$ for which universal consistency with adversarial responses is \emph{achievable} by some learning rule. Note that this learning rule is allowed to depend on the process $\mathbb{X}$. We are then interested in learning rules that would achieve universal consistency with adversarial responses under all processes $\mathbb{X}\in\text{SOLAR}$, i.e., that would achieve the regression objective whenever it is achievable. We refer to these algorithms as \emph{optimistically universal} learning rules for adversarial regression. In particular, if such a learning rule fails to reach universal consistency, then any other algorithm would fail as well.
In this general setting under minimal assumptions, the main interests of optimistic learning are (1) identifying the set $\text{SOLAR}$ of processes learnable with adversarial responses, (2) determining whether there exists an optimistically universal learning rule, and (3) constructing one if it exists.
\subsection{Preliminaries}
We recall the following known identities, which we will use to analyze the loss $\ell=|\cdot|^\alpha$.
\begin{lemma}
\label{lemma:loss_identity}
Let $\alpha\geq 1$. Then, $(a+b)^\alpha \leq 2^{\alpha-1}(a^\alpha+b^\alpha)$ for all $a,b\geq 0$. Let $0< \epsilon\leq 1$ and $\alpha\geq 1$. There exists some constant $c_\epsilon^\alpha>0$ such that $(a+b)^\alpha \leq (1+\epsilon)a^\alpha + c_\epsilon^\alpha b^\alpha$ for all $a,b\geq 0$, and $c_\epsilon^\alpha\leq \left(\frac{4\alpha}{\epsilon}\right)^\alpha$.
\end{lemma}
\begin{proof}
The first identity is classical. A proof of the second one can be found for example in \cite{evans2020strong} (Lemma 2.3) where they obtain $
c_\epsilon^\alpha = \left(1+\frac{1}{(1+\epsilon)^{1/\alpha}-1}\right)^\alpha \leq \left( \frac{2}{\epsilon\cdot \frac{1}{\alpha}2^{1/\alpha-1}} \right)^\alpha \leq \left(\frac{4\alpha}{\epsilon}\right)^\alpha.$
\end{proof}
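The constants in Lemma \ref{lemma:loss_identity} can be checked numerically. The following Python sketch evaluates $c_\epsilon^\alpha$ from the closed form given in the proof and verifies both inequalities on a grid of values of $a$, $b$, $\alpha$, and $\epsilon$; this is a sanity check for the reader, not part of the proof.

```python
import itertools

def c_eps(alpha, eps):
    # closed form from the proof: c = (1 + 1/((1+eps)^{1/alpha} - 1))^alpha
    return (1.0 + 1.0 / ((1.0 + eps) ** (1.0 / alpha) - 1.0)) ** alpha

violations = 0
for alpha in (1.0, 1.5, 2.0, 3.0):
    for eps in (0.1, 0.5, 1.0):
        c = c_eps(alpha, eps)
        # bound c_eps^alpha <= (4 alpha / eps)^alpha from the proof
        assert c <= (4.0 * alpha / eps) ** alpha + 1e-9
        for a, b in itertools.product((0.0, 0.3, 1.0, 2.5, 10.0), repeat=2):
            # first identity: (a+b)^alpha <= 2^{alpha-1} (a^alpha + b^alpha)
            if (a + b) ** alpha > 2 ** (alpha - 1) * (a ** alpha + b ** alpha) + 1e-9:
                violations += 1
            # second identity: (a+b)^alpha <= (1+eps) a^alpha + c_eps^alpha b^alpha
            if (a + b) ** alpha > (1 + eps) * a ** alpha + c * b ** alpha + 1e-9:
                violations += 1
```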
In particular, we will use this identity to write for any $y_1,y_2,y_3\in\mathcal{Y}$,
\begin{equation*}
\ell(y_1,y_2) \leq 2^{\alpha-1}\ell(y_1,y_3) + 2^{\alpha-1} \ell(y_2,y_3) \quad \text{ and } \quad
\ell(y_1,y_2) \leq (1+\epsilon)\ell(y_1,y_3) + c_\epsilon^\alpha \ell(y_2,y_3).
\end{equation*}
These will be the only identities used on the loss $\ell$. Hence, except for Section \ref{subsec:metric_mean_estimation}, in which we assume that the loss is a metric ($\alpha=1$), our results can be generalized to any symmetric loss $\ell$ with $\ell(y,y')=0$ if and only if $y=y'$, satisfying the following property: for any $0<\epsilon\leq 1$, there exists a constant $c_\epsilon$ such that for all $y_1,y_2,y_3\in\mathcal{Y}$,
\begin{equation*}
\ell(y_1,y_2) \leq (1+\epsilon)\ell(y_1,y_3) + c_\epsilon \ell(y_2,y_3).
\end{equation*}
Without loss of generality, we can further assume that $c_\epsilon$ is non-increasing in $\epsilon$. Note that this is a stronger assumption than having a near-metric $\ell$, for which we also give some results in Section \ref{sec:totally_bounded_value_spaces} and \ref{sec:unbounded_loss_moment_constraint}.
\section{Main results}
\label{sec:main_results}
We introduce a first condition on stochastic processes on $\mathcal{X}$. For any process $\mathbb{X}$ on $\mathcal{X}$, given any measurable set $A\in\mathcal{B}$ of $\mathcal{X}$, we define $\hat\mu_\mathbb{X}(A)=\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T \mathbbm{1}_A(X_t)$. We consider the condition $\text{CS}$ (Continuous Sub-measure) defined as follows.
\paragraph{Condition $\text{CS}$.}
\textit{For every decreasing sequence $\{A_k\}_{k=1}^\infty$ of measurable sets in $\mathcal{X}$ with $A_k\downarrow \emptyset$,}
\begin{equation*}
\lim_{k\to\infty} \mathbb{E}[\hat \mu_{\mathbb{X} }(A_k)] = 0.
\end{equation*}
It is known that this condition is equivalent to $\mathbb{E}[\hat\mu_\mathbb{X}(\cdot)]$ being a continuous sub-measure \cite{hanneke2021learning}, hence the adopted name $\text{CS}$. We now introduce a second condition ${\text{SMV}}$ (Sublinear Measurable Visits) which asks that for any partition, the process $\mathbb{X}$ visits a sublinear number of sets of the partition. Formally, the condition is defined as follows.
\paragraph{Condition ${\text{SMV}}$.} \textit{For every disjoint sequence $\{A_k\}_{k=1}^\infty$ of measurable sets of $\mathcal{X}$ with $\bigcup_{k=1}^\infty A_k=\mathcal{X}$, (every countable measurable partition),}
\begin{equation*}
|\{k\geq 1: A_k\cap\mathbb{X}_{\leq T}\neq\emptyset \}| =o(T),\quad (a.s.).
\end{equation*}
This condition is significantly weaker and allows us to consider a larger family of processes: $\text{CS}\subset{\text{SMV}}$, with $\text{CS}\subsetneq{\text{SMV}}$ whenever $\mathcal{X}$ is infinite \cite{hanneke2021learning}. Note that these sets depend on the instance space $(\mathcal{X},\rho)$; this dependence is omitted for the sake of simplicity.
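For intuition, the ${\text{SMV}}$ condition can be illustrated empirically. The Python sketch below simulates an i.i.d. geometric process on $\mathcal{X}=\mathbb{N}$ and counts the number of cells of the singleton partition $\{\{k\}\}_{k\geq 0}$ visited up to time $T$; this count grows logarithmically, hence sublinearly, in $T$. This is a toy illustration only (i.i.d. processes in fact satisfy the stronger $\text{CS}$ condition), not part of the formal development.

```python
import random

random.seed(0)
T = 20000
# i.i.d. Geometric(1/2) process on X = N; the countable measurable
# partition of X is taken to be the singletons {k}, k = 0, 1, 2, ...
visited = set()
for _ in range(T):
    k = 0  # number of "tails" before the first "heads"
    while random.random() < 0.5:
        k += 1
    visited.add(k)

visited_cells = len(visited)   # grows like log_2(T)
fraction = visited_cells / T   # far below 1, consistent with o(T)
```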
We first consider bounded losses. In the \emph{noiseless} case, where there exists some unknown measurable function $f^*:\mathcal{X}\to\mathcal{Y}$ such that the stochastic process $\mathbb{Y}$ is given by $Y_t=f^*(X_t)$ for all $t\geq 1$, \cite{blanchard2022universal} showed that ${\text{SMV}}$ processes are exactly those for which universal consistency is achievable. Precisely, defining $\text{SOUL}$ (Strong Online Universal Learning) as the optimistic set of processes $\mathbb{X}$ for which universal consistency in the noiseless setting is achievable, we have ${\text{SMV}}=\text{SOUL}$ whenever the value space is bounded. \cite{blanchard2022universal} also introduced a learning rule, 2-Capped-1-Nearest-Neighbor (2C1NN), a variant of the classical 1NN algorithm, which is \emph{optimistically universal} in the noiseless case. Indeed, for any process $\mathbb{X}\in{\text{SMV}}$ and any target function $f^*:\mathcal{X}\to\mathcal{Y}$, we have
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T} \sum_{t=1}^T \ell(2C1NN_t(\mathbb{X}_{\leq t-1},f^*(\mathbb{X}_{\leq t-1}),X_t),f^*(X_t)) = 0,\quad (a.s.),
\end{equation*}
or equivalently $\mathcal{L}_{(\mathbb{X},f^*(\mathbb{X}))}(2C1NN,f^*)=0\quad (a.s.)$. In other words, 2C1NN is universally consistent under all noiseless processes with $\mathbb{X}\in{\text{SMV}}$. Note that the above equation coincides with the notion of universal consistency introduced in Section \ref{sec:formal_setup}: using $f^*$ as comparator for the learning rule 2C1NN is an optimal choice because the loss incurred by this fixed measurable function is null. Further, because $\text{SOUL}={\text{SMV}}$, if 2C1NN fails to achieve universal consistency in the noiseless setting, then any other learning rule fails as well. Because we consider more general (noisy) responses, this shows in particular that universal consistency with adversarial responses cannot be achieved outside of $\text{SOUL}$, i.e., $\text{SOLAR}\subset \text{SOUL}$ in general. In particular, for bounded value spaces we obtain $\text{SOLAR}\subset{\text{SMV}}$. It was posed as an open problem whether one could recover the complete set ${\text{SMV}}$ for learning under adversarial---or arbitrary---responses \cite{hanneke2022bayes}.
\paragraph{Open problem \cite{hanneke2022bayes}.}For bounded losses, does there exist an online learning rule that is universally consistent for arbitrary responses under all processes $\mathbb{X}\in\text{SOUL}$?\\
We answer this question with an alternative. We show that depending on the bounded value space $(\mathcal{Y},\ell)$, we have either $\text{SOLAR}=\text{SOUL}$ or $\text{SOLAR}=\text{CS}$, but that in both cases there exists an optimistically universal learning rule. We now introduce the property $\text{F-TiME}$ (Finite-Time Mean Estimation) on the value space $(\mathcal{Y},\ell)$ which characterizes this alternative.
\paragraph{Property $\text{F-TiME}$:}\textit{For any $\eta>0$, there exist a horizon time $T_\eta \geq 1$, an online learning rule $g_{\leq T_\eta}$, and a random time $\tau$ with $1\leq \tau\leq T_\eta$ such that for any sequence $\mb y:=(y_t)_{t=1}^{T_\eta}$ of values in $\mathcal{Y}$ and any value $y\in\mathcal{Y}$, we have
\begin{equation*}
\mathbb{E}\left[\sum_{t=1}^{\tau} \left(\ell(g_t({\mb y}_{\leq t-1}), y_t) - \ell(y,y_t)\right) -\eta \tau\right] \leq 0.
\end{equation*}
}
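To make Property $\text{F-TiME}$ concrete, consider a finite value space with the 0-1 loss. One can then take $g$ to be an exponential-weights (Hedge) forecaster over the candidate values and $\tau=T_\eta$ deterministic: with learning rate $\sqrt{8\ln N/T}$ and losses in $[0,1]$, the expected regret is at most $\sqrt{(T/2)\ln N}$, so $T_\eta=\lceil \ln N/(2\eta^2)\rceil$ suffices. The Python sketch below checks this on a small example; it illustrates the finite case only and is not the construction used for general value spaces.

```python
import math

def hedge_mean_estimation(values, loss, y_seq, T):
    """Exponential weights over a finite set of candidate values.
    Returns the forecaster's expected cumulative loss and the
    cumulative loss of the best fixed value in hindsight."""
    N = len(values)
    rate = math.sqrt(8.0 * math.log(N) / T)  # standard Hedge rate
    L = [0.0] * N  # cumulative loss of each candidate value
    forecaster_loss = 0.0
    for t in range(T):
        w = [math.exp(-rate * Li) for Li in L]
        Z = sum(w)
        y_t = y_seq[t % len(y_seq)]
        # expected loss of the randomized prediction drawn from w/Z
        forecaster_loss += sum(wi / Z * loss(v, y_t) for wi, v in zip(w, values))
        L = [Li + loss(v, y_t) for Li, v in zip(L, values)]
    return forecaster_loss, min(L)

# binary values with the 0-1 loss, against a fixed periodic sequence
values = [0, 1]
loss01 = lambda a, b: 0.0 if a == b else 1.0
eta_target = 0.1
T_eta = math.ceil(math.log(len(values)) / (2 * eta_target ** 2))
fl, best = hedge_mean_estimation(values, loss01, [0, 0, 1], T_eta)
avg_regret = (fl - best) / T_eta  # guaranteed at most eta_target
```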
We are now ready to state our main results for bounded value spaces. The first result shows that if the value space satisfies the above property, we can universally learn all the processes in $\text{SOUL}$ even under adversarial responses. We also construct an optimistically universal learning rule for this case.
\begin{restatable}{theorem}{ThmGoodValueSpaces}
\label{thm:good_value_spaces}
Suppose that $\ell$ is bounded and $(\mathcal{Y},\ell)$ satisfies $\text{F-TiME}$. Then, $\text{SOLAR} = {\text{SMV}} (=\text{SOUL})$ and there exists an optimistically universal learning rule for adversarial regression, i.e., which achieves universal consistency with adversarial responses under any process $\mathbb{X}\in{\text{SMV}}$.
\end{restatable}
In other words, value spaces satisfying $\text{F-TiME}$ allow recovering the same learnable processes for adversarial responses as for noiseless responses. We show that this includes a very large class of metric spaces. Specifically, we prove that any totally-bounded metric space satisfies $\text{F-TiME}$. We also show that $(\mathcal{Y},\ell)=(\mathbb{N},\ell_{01})$, which is not totally bounded, still satisfies $\text{F-TiME}$; hence, for countable classification we can universally learn all $\text{SOUL}$ processes with adversarial responses. However, this property defines a non-trivial alternative, and we explicitly construct a value space that does not satisfy $\text{F-TiME}$. We now turn to this second case.
\begin{restatable}{theorem}{ThmBadValueSpaces}
\label{thm:bad_value_spaces}
Suppose that $\ell$ is bounded and $(\mathcal{Y},\ell)$ does not satisfy $\text{F-TiME}$. Then, $\text{SOLAR} = \text{CS}$ and there exists an optimistically universal learning rule for adversarial regression, i.e., which achieves universal consistency with adversarial responses under any process $\mathbb{X}\in\text{CS}$.
\end{restatable}
In the case of metric losses ($\alpha=1$), it is already known \cite{hanneke2022bayes} that universal learning with adversarial responses under all processes in $\text{CS}$ is achievable by some learning rule. Hence, the above result shows that this learning rule is automatically optimistically universal for adversarial regression on all metric value spaces which do not satisfy $\text{F-TiME}$. The main result of \cite{hanneke2022bayes} considered regression under arbitrary responses, but the proof can easily be adapted to adversarial responses. We will give a stronger result that holds for any $\alpha\geq 1$ and unbounded value spaces, and hence implies this statement. This completely closes the open problem of \cite{hanneke2022bayes}: the answer is positive if and only if the value space satisfies $\text{F-TiME}$.\\
We then turn to the case of unbounded losses. Unfortunately, even in the noiseless setting, universal learning is extremely restrictive in that case. Specifically, the set of universally learnable processes $\text{SOUL}$ for noiseless responses is reduced to the set of processes $\mathbb{X}$ which visit a finite number of different points of $\mathcal{X}$ almost surely \cite{blanchard2022optimistic}. This condition is referred to as the FS (Finite Support) condition.
\paragraph{Condition $\text{FS}$.}The process $\mathbb{X}$ satisfies $|\{x\in \mathcal{X}: \{x\}\cap \mathbb{X} \neq \emptyset\}|<
\infty\quad (a.s.)$.\\
Hence, because $\text{SOUL}=\text{FS}$ for unbounded losses, even the simple memorization learning rule is optimistically universal in the noiseless setting. We show that in the adversarial setting we still have $\text{SOLAR} = \text{SOUL}=\text{FS}$ when $\ell$ is a metric. To do so, we prove that we can solve the fundamental problem of mean estimation, where one sequentially predicts a sequence $\mathbb{Y}$ of values in $(\mathcal{Y},\ell)$ and aims for a long-run average loss no worse than that of any fixed value. In the case of i.i.d. processes, this is precisely the Fr\'echet mean estimation problem. We now state our main result on mean estimation, which is of independent interest and holds for general separable metric value spaces and adversarial processes.
\begin{restatable}{theorem}{ThmMeanEstimation}
\label{thm:mean_estimation}
Let $(\mathcal{Y},\ell)$ be a separable metric space. There exists an online learning rule $f_\cdot$ that is universally consistent for adversarial mean estimation, i.e., for any adversarial process $\mathbb{Y}$ on $\mathcal{Y}$, almost surely, for all $y\in \mathcal{Y}$,
\begin{equation*}
\limsup_{T\to\infty}\frac{1}{T} \sum_{t=1}^T \left(\ell( f_t(\mathbb{Y}_{\leq t-1}),Y_t) - \ell(y,Y_t)\right)\leq 0.
\end{equation*}
\end{restatable}
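The learning rule behind Theorem \ref{thm:mean_estimation} treats fixed values $y\in\mathcal{Y}$ as experts in a weighted-average forecaster; the difficulty is that $\mathcal{Y}$ may be infinite. The Python sketch below conveys the idea in the simple bounded case $\mathcal{Y}=[0,1]$ with the metric loss $|\cdot|$: running Hedge over a finite $\epsilon$-net of $\mathcal{Y}$ guarantees average regret at most the Hedge regret over the net plus the discretization error $\epsilon$. This simplification is for illustration only; the actual rule must handle unbounded, merely separable spaces.

```python
import math

def net_forecaster(y_seq, T, step=0.05):
    """Hedge over a finite grid (an epsilon-net) of Y = [0,1] with the
    absolute-value loss; the experts are the fixed grid values."""
    grid = [i * step for i in range(int(round(1 / step)) + 1)]
    N = len(grid)
    rate = math.sqrt(8.0 * math.log(N) / T)
    L = [0.0] * N  # cumulative loss of each expert (grid value)
    forecaster_loss = 0.0
    for t in range(T):
        w = [math.exp(-rate * Li) for Li in L]
        Z = sum(w)
        y_t = y_seq[t % len(y_seq)]
        # expected loss of the randomized prediction over the net
        forecaster_loss += sum(wi / Z * abs(g - y_t) for wi, g in zip(w, grid))
        L = [Li + abs(g - y_t) for Li, g in zip(L, grid)]
    return forecaster_loss, min(L), N

T = 2000
fl, best_on_net, N = net_forecaster([0.0, 0.7, 0.7], T)
# average excess over the best net value is at most the Hedge bound;
# against an arbitrary fixed y in [0,1], add the net width (0.05 here)
avg_excess = (fl - best_on_net) / T
hedge_avg_bound = math.sqrt(math.log(N) / (2 * T))
```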
Further, we show that for powers of a metric we may have $\text{SOLAR}=\emptyset$. Specifically, for real-valued regression with the Euclidean metric and loss $|\cdot|^\alpha$, $\alpha>1$, neither adversarial regression nor mean estimation is achievable. We then show that we have an alternative: either mean estimation with adversarial responses is achievable, in which case $\text{SOLAR}=\text{FS}$ and we have an optimistically universal learning rule, or mean estimation is not achievable and we obtain $\text{SOLAR}=\emptyset$.
Even in the best-case scenario for unbounded losses, we obtain $\text{SOLAR}=\text{SOUL}=\text{FS}$, which is already extremely restrictive. Thus, \cite{blanchard2022optimistic} asked whether imposing moment conditions on the responses---such as empirically bounded losses in the long-run average---would allow recovering the large set ${\text{SMV}}$ of learnable processes instead of the restricted set $\text{SOUL}=\text{FS}$. Specifically, they formulated the following open problem.
\paragraph{Open Problem \cite{blanchard2022optimistic}:} For unbounded losses $\ell$,
does there exist an online learning rule $f_\cdot$ which is consistent under every $\mathbb{X} \in {\text{SMV}}$, for every measurable function $f^*:\mathcal{X}\to\mathcal{Y}$ such that there exists $y_0\in\mathcal{Y}$ with $\limsup_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \ell(y_0, f^*(X_t)) < \infty~~\text{(a.s.)}$, i.e., such that we have $\mathcal{L}_{\mathbb{X}}(f_{\cdot},f^*) = 0~~\text{(a.s.)}$?\\
We answer this question negatively. Under this first moment condition, universal learning under all ${\text{SMV}}$ processes is not achievable even in this noiseless case. We show the stronger statement that noiseless universal learning under all processes having pointwise convergent relative frequencies---a class included in $\text{CS}$---is not achievable. We therefore introduce a novel condition on the responses, namely \emph{empirical integrability}, under which we can recover all positive results from the bounded-loss case. Precisely, we ask that there exist $y_0\in\mathcal{Y}$ such that for any $\epsilon>0$, almost surely there exists $M\geq 0$ for which
\begin{equation*}
\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T\ell(y_0,Y_t)\mathbbm{1}_{\ell(y_0,Y_t)\geq M}\leq \epsilon.
\end{equation*}
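To illustrate the definition on two hypothetical loss sequences (a finite-horizon numerical illustration only): a bounded sequence is empirically integrable, since any $M$ above the bound makes the truncated average vanish, whereas losses growing like $\sqrt t$ have truncated averages that keep growing with the horizon for every fixed $M$:

```python
import math

def truncated_average(losses, M):
    """(1/T) * sum of the losses, keeping only terms that are >= M."""
    return sum(x for x in losses if x >= M) / len(losses)

M = 10.0
bounded = [abs(math.sin(t)) for t in range(1, 5001)]  # losses in [0, 1]
growing = [math.sqrt(t) for t in range(1, 5001)]      # loss sqrt(t) at time t

avg_bounded = truncated_average(bounded, M)           # 0: every term is below M
avg_growing_short = truncated_average(growing[:1000], M)
avg_growing_long = truncated_average(growing, M)      # grows with the horizon
```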
As before with the definition of $\text{SOLAR}$, our objective is now to study the processes $\mathbb{X}$ for which there exists an online learning rule that is consistent for all adversarial processes $(\tilde\mathbb{X},\mathbb{Y})$ with $\tilde \mathbb{X}\sim\mathbb{X}$ and $\mathbb{Y}$ satisfying the above condition. We refer to this objective as universal consistency for adversarial responses with bounded moments. We then show that all results from the bounded-loss case can be recovered under this additional bounded-moment constraint. Interestingly, in the noiseless case, the same 2C1NN learning rule as introduced in \cite{blanchard2022universal} for the bounded-loss case achieves this objective.
\begin{restatable}{theorem}{ThmNoiselessUnbounded}
\label{thm:2C1NN_unbounded}
Let $(\mathcal{Y},\ell)$ be a separable near-metric space. Then, 2C1NN is optimistically universal in the noiseless setting with empirically integrable responses, i.e., for all processes $\mathbb{X}\in{\text{SMV}}$ and for all measurable target functions $f^*:\mathcal{X}\to\mathcal{Y}$ such that there exists $y_0\in \mathcal{Y}$ for which for all $\epsilon>0$, there exists $M\geq 0$ with $ \limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T\ell(y_0,f^*(X_t))\mathbbm{1}_{\ell(y_0,f^*(X_t))\geq M} \leq \epsilon \quad (a.s.)$, we have $\mathcal{L}_\mathbb{X}(2C1NN,f^*)=0\quad (a.s.)$.
\end{restatable}
For adversarial responses in unbounded loss spaces, we can also recover similar results to the bounded loss case, but the learning rules have to be adapted. We obtain the following result for adversarial universal learning under $\text{CS}$ processes.
\begin{restatable}{theorem}{ThmCSRegressionUnbounded}
\label{thm:CS_regression_unbounded}
There exists an online learning rule $f_\cdot$ that is universally consistent for adversarial empirically integrable responses under all processes in $\text{CS}$, i.e., such that for any stochastic process $(\mathbb{X},\mathbb{Y})$ on $(\mathcal{X},\mathcal{Y})$ with $\mathbb{X}\in\text{CS}$ and $\mathbb{Y}$ empirically integrable, for any measurable function $f^*:\mathcal{X}\to\mathcal{Y}$ we have $\mathcal{L}_{(\mathbb{X},\mathbb{Y})}(f_\cdot,f^*)\leq 0\quad (a.s.)$.
\end{restatable}
As a result, this provides an optimistically universal learning rule for adversarial responses under moment condition, for all value spaces $(\mathcal{Y},\ell)$ such that there exists a ball $B_\ell(y,r)$ with $r>0$ which does not satisfy $\text{F-TiME}$. Otherwise, under the same moment condition, adversarial universal learning is achievable under all processes in ${\text{SMV}}(=\text{SOUL})$.
\begin{restatable}{theorem}{ThmSOULRegressionUnbounded}
\label{thm:SOUL_regression_unbounded}
Suppose that every ball $B_\ell(y,r)$ of $(\mathcal{Y},\ell)$ satisfies $\text{F-TiME}$. Then, there exists an optimistically universal online learning rule $f_\cdot$ for adversarial empirically integrable responses, i.e., such that for any stochastic process $(\mathbb{X},\mathbb{Y})$ on $\mathcal{X}\times\mathcal{Y}$ with $\mathbb{X}\in{\text{SMV}}$ and $\mathbb{Y}$ empirically integrable, for any measurable function $f^*:\mathcal{X}\to\mathcal{Y}$ we have $\mathcal{L}_{(\mathbb{X},\mathbb{Y})}(f_\cdot,f^*)\leq 0\quad (a.s.)$.
\end{restatable}
\section{An optimistically universal learning rule for totally-bounded value spaces}
\label{sec:totally_bounded_value_spaces}
We start our analysis of universal learning under adversarial responses with \emph{totally-bounded} value spaces. Hence, we suppose in this section that the value space $(\mathcal{Y},\ell)$ is totally-bounded, i.e., for any $\epsilon>0$ there exists a finite $\epsilon-$net $\mathcal{Y}_\epsilon$ of $\mathcal{Y}$ such that for any $y\in\mathcal{Y}$, there exists $y'\in\mathcal{Y}_\epsilon$ with $\ell(y,y')<\epsilon$. Note in particular that a totally-bounded space is necessarily bounded and separable. The goal of this section is to show that for such value spaces, adversarial universal regression is achievable for all processes in $\text{SOUL}={\text{SMV}}$ which correspond to learnable processes in the noiseless setting. Further, we explicitly construct an optimistically universal learning rule for adversarial responses. The main result of this section is stated below.
\begin{theorem}
\label{thm:optimistic_regression_totally_bounded}
Suppose that $(\mathcal{Y},\ell)$ is totally-bounded. Then, there exists an online learning rule $f_\cdot$ which is universally consistent for adversarial responses under any process $\mathbb{X}\in{\text{SMV}}(=\text{SOUL})$, i.e., such that for any process $(\mathbb{X},\mathbb{Y})$ on $(\mathcal{X},\mathcal{Y})$ with adversarial responses and $\mathbb{X}\in{\text{SMV}}$, for any measurable function $f:\mathcal{X}\to\mathcal{Y}$, we have $\mathcal{L}_{(\mathbb{X},\mathbb{Y})}(f_\cdot,f)\leq 0\quad (a.s.)$.
\end{theorem}
We recall that in the noiseless setting, the 2C1NN learning rule achieves universal consistency for all ${\text{SMV}}$ processes \cite{blanchard2022universal}. Precisely, at each iteration $t$, the 2C1NN learning rule performs the nearest neighbor rule over an updated dataset instead of the complete history $\mathbb{X}_{\leq t-1}$. The dataset is updated by keeping track of the number of times each point $X_u$ was used as representative. This number is then capped at $2$ by deleting from the current dataset any point which has been used twice as representative. Unfortunately, this learning rule is not optimistically universal for adversarial responses. More generally, \cite{tsir2022metric} noted that any learning rule which only outputs observed historical values cannot be consistent, even in the simplest case of $\mathcal{X}=\{0\}$ and i.i.d. responses $\mathbb{Y}$. For instance, take $\mathcal{Y}=\bar B(0,1)$ the closed ball of radius $1$ in the plane $\mathbb{R}^2$ with the Euclidean loss, consider the points $A,B,C\in\mathcal{Y}$ forming the equilateral triangle $e^{2ik\pi/3}$ for $k=0,1,2$, and let $\mathbb{Y}$ be an i.i.d. process which visits $A$, $B$ or $C$, each with probability $\frac{1}{3}$. Predictions within observed values, i.e., $A$, $B$ or $C$, incur an expected loss of $\frac{2}{3}\sqrt 3 >1$, where $1$ is the loss obtained with the fixed value $(0,0)$.
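The computation in this counterexample can be checked directly. The sketch below evaluates, under the Euclidean loss, the expected loss of predicting a fixed observed vertex against that of the unobserved center $(0,0)$:

```python
import math

# Equilateral triangle e^{2ik*pi/3}, k = 0, 1, 2, on the unit circle of R^2.
pts = [(math.cos(2 * k * math.pi / 3), math.sin(2 * k * math.pi / 3)) for k in range(3)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Y is uniform on {A, B, C}.  A fixed vertex is wrong with probability 2/3,
# each miss costing the side length sqrt(3); by symmetry the vertex A suffices.
loss_vertex = sum(dist(pts[0], q) for q in pts) / 3      # = 2*sqrt(3)/3 > 1
loss_center = sum(dist((0.0, 0.0), q) for q in pts) / 3  # = 1
```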
To construct an optimistically universal learning rule for adversarial responses, we first need to generalize a result from \cite{blanchard2022universal}. Instead of the 2C1NN learning rule, we will use $(1+\delta)$C1NN rules for $\delta>0$ arbitrarily small. In the 2C1NN learning rule, each new input point $X_t$ is assigned a representative $\phi(t)$ which is used for the prediction $\hat Y_t =Y_{\phi(t)}$. This rule was designed so that each point can be used at most twice as representative for future times. In the $(1+\delta)$C1NN rule, each point will be used as representative at most twice with probability $\delta$ and at most once with probability $1-\delta$. In order to have this behaviour irrespective of the process $\mathbb{X}$, which can be thought of as being chosen by a (limited) adversary within the $\text{SOUL}$ processes, the information of whether a point can allow for 1 or 2 children is only revealed when necessary. Specifically, at any step $t\geq 1$, the algorithm initiates a search for a representative $\phi(t)$. It successively tries to use the nearest neighbor of $X_t$ within the current dataset, as performed by 2C1NN, and uses it as representative if allowed by the maximum number of children that this nearest neighbor can have. However, the information of whether a potential representative $u$ can have at most 1 or 2 children is revealed only when $u$ already has one child.
\begin{itemize}
\item If $u$ allows for 2 children, it will be used as final representative $\phi(t)$.
\item Otherwise, $u$ is deleted from the dataset and the search for a representative continues.
\end{itemize}
The rule is formally described in Algorithm \ref{alg:1+deltaC1NN}, where $\bar y\in\mathcal{Y}$ is an arbitrary value, and the maximum number of children that a point $X_t$ can have is represented by $1+U_t$. In this formulation, all Bernoulli $\mathcal{B}(\delta)$ samples are drawn independently of the past history. Note that if $\delta=1$, the $(1+\delta)$C1NN learning rule coincides with the 2C1NN rule of \cite{blanchard2022universal} up to minor memorization improvements.
\begin{algorithm}[ht]
\caption{The $(1+\delta)$C1NN learning rule}\label{alg:1+deltaC1NN}
\hrule height\algoheightrule\kern3pt\relax
\KwIn{Historical samples $(X_t,Y_t)_{t<T}$ and new input point $X_T$}
\KwOut{Predictions $\hat Y_t = (1+\delta)C1NN_t({\mb X}_{<t},{\mb Y}_{<t},X_t)$ for $t\leq T$}
$\hat Y_1:= \bar y$ \tcp*[f]{Arbitrary prediction at $t=1$}\\
$\mathcal{D}_2\gets \{1\}$;
$n_1 \gets 0$;
$t\gets 2$; \tcp*[f]{Initialisation}\\
\While{$t\leq T$}{
$continue\gets True$ \tcp*[f]{Begin search for available representative $\phi(t)$}\\
\While{continue}{
$\phi(t)\gets \min \left\{l\in \arg\min_{u\in \mathcal{D}_t} \rho(X_t,X_u) \right\}$\\
\uIf(\tcp*[f]{Candidate representative has no children}){$n_{\phi(t)}=0$}{
$\mathcal{D}_{t+1}\gets \mathcal{D}_t\cup\{t\}$\\
$continue\gets False$
}
\uElse(\tcp*[f]{Candidate representative has one child}){
$U_{\phi(t)}\sim \mathcal{B}(\delta)$\\
\uIf{$U_{\phi(t)}=0$}{
$\mathcal{D}_t \gets \mathcal{D}_t\setminus\{\phi(t)\}$
}
\uElse{
$\mathcal{D}_{t+1}\gets (\mathcal{D}_t\setminus\{\phi(t)\})\cup\{t\}$\\
$continue\gets False$
}
}
}
$\hat Y_t:=Y_{\phi(t)}$\\
$n_{\phi(t)}\gets n_{\phi(t)}+1$\\
$n_t\gets 0$\\
$t\gets t+1$
}
\hrule height\algoheightrule\kern3pt\relax
\end{algorithm}
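Algorithm \ref{alg:1+deltaC1NN} can be transcribed as the following minimal Python sketch; the real line with $\rho(x,x')=|x-x'|$ and the tie-breaking by smallest index are illustrative assumptions, not part of the general rule:

```python
import random

def one_plus_delta_c1nn(xs, ys, delta, seed=0, y_bar=0.0):
    """Predictions of the (1+delta)C1NN rule on the real line, rho(x, x') = |x - x'|.

    A retained point may serve as representative for a second child only with
    probability delta; this budget U_u ~ B(delta) is revealed lazily, the
    first time the candidate already has one child."""
    rng = random.Random(seed)
    preds = [y_bar]          # arbitrary prediction at t = 1
    dataset = {0}            # indices of currently retained points
    n_children = {0: 0}
    for t in range(1, len(xs)):
        while True:
            # nearest neighbour in the dataset, ties broken by smallest index
            phi = min(dataset, key=lambda u: (abs(xs[t] - xs[u]), u))
            if n_children[phi] == 0:       # no child yet: use it and keep it
                dataset.add(t)
                break
            if rng.random() < delta:       # U_phi = 1: second child allowed,
                dataset.discard(phi)       # then phi is retired from the dataset
                dataset.add(t)
                break
            dataset.discard(phi)           # U_phi = 0: retire and keep searching
        preds.append(ys[phi])
        n_children[phi] += 1
        n_children[t] = 0
    return preds
```

The search always terminates before the dataset empties: the most recently added point still has no child, and candidates with no child end the search.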
\begin{theorem}
\label{thm:1+deltaC1NN_optimistic}
Fix $\delta>0$. For any separable Borel space $(\mathcal{X},\mathcal{B})$ and any separable near-metric output setting $(\mathcal{Y},\ell)$ with bounded loss, in the noiseless setting, $(1+\delta)$C1NN is optimistically universal.
\end{theorem}
The proof of this theorem is given in Section \ref{subsec:1+deltaC1NN} below. We now construct our algorithm. This learning rule uses a collection of algorithms $f^\epsilon_\cdot$, each of which yields an asymptotic error within a constant factor of $\epsilon^{\frac{1}{\alpha+1}}$. Now fix $\epsilon>0$ and let $\mathcal{Y}_\epsilon$ be an $\epsilon-$net of $\mathcal{Y}$ for $\ell$. Importantly, we can take $\mathcal{Y}_\epsilon$ finite because $\mathcal{Y}$ is totally-bounded. Recall that we denote by $\bar\ell$ the supremum loss, i.e., $\bar\ell:=\sup_{y_1,y_2\in\mathcal{Y}}\ell(y_1,y_2)$. We pose
\begin{equation*}
T_\epsilon := \left\lceil \frac{\bar \ell^2 \ln |\mathcal{Y}_\epsilon|}{2\epsilon^2}\right\rceil \quad \text{and} \quad \delta_\epsilon:= \frac{\epsilon}{2\bar\ell(2^{T_\epsilon}+T_\epsilon)}.
\end{equation*}
The quantity $T_\epsilon$ will be the horizon window used by our learning rule to make its prediction using the $(1+\delta_\epsilon)$C1NN learning rule. Precisely, let $\phi$ be the representative function from the $(1+\delta_\epsilon)$C1NN learning rule. Further, we denote by $d(t)$ the depth of time $t$ within the graph constructed by $(1+\delta_\epsilon)$C1NN. At time $t$, we define the horizon $L_t=d(t)\mod T_\epsilon$. The learning rule performs its prediction based on the values $Y_{\phi^l(t)}$ for $l=1,\ldots,L_t$. We pose $\eta_\epsilon:=\sqrt{\frac{8\ln |\mathcal{Y}_\epsilon|}{\bar\ell^2 T_\epsilon}}$ and define the losses $L_y^t=\sum_{l=1}^{L_t} \ell(Y_{\phi^l(t)},y)$. The learning rule $f^\epsilon_t(\mathbb{X}_{\leq t-1},\mathbb{Y}_{\leq t-1},X_t)$ outputs a random value in $\mathcal{Y}_\epsilon$ independently from the past history with
\begin{equation*}
\mathbb{P}(\hat Y_t(\epsilon)=y) = \frac{e^{-\eta_\epsilon L_y^t}}{\displaystyle\sum_{z\in \mathcal{Y}_\epsilon} e^{-\eta_\epsilon L_z^t}},\quad y\in \mathcal{Y}_\epsilon,
\end{equation*}
where, for simplicity, we denoted by $\hat Y_t(\epsilon)$ the prediction given by the learning rule $f^\epsilon_\cdot$ at time $t$. This ends the construction of the learning rules $f^\epsilon_\cdot$. We are now ready to define our final learning rule $f_\cdot$. Let $\epsilon_i=2^{-i}$ for all $i\geq 0$. We define $I_t:=\{i\geq 0: i\leq \ln t\}$ for any $t\geq 1$. We also denote $t_i:=\lceil e^i\rceil$ and pose $\eta_t=\sqrt{\frac{\ln t}{t}}$. For any $i\in I_t$ we define $L_{t-1,i}:=\sum_{s=t_i}^{t-1}\ell(\hat Y_s(\epsilon_i),Y_s)$ and construct weights $w_{t-1,i}$ for $t\geq 1$ and $i\in I_t$ recursively in the following way. Note that $I_1=\{0\}$; we therefore pose $w_{0,0}=1$. Now let $t\geq 2$ and suppose that the weights $w_{s-1,i}$ have been constructed for all $i\in I_s$ and $1\leq s\leq t-1$. We define
\begin{equation*}
\hat\ell_s:= \frac{\sum_{i\in I_s} w_{s-1,i}\ell(\hat Y_s(\epsilon_i),Y_s)}{\sum_{i\in I_s} w_{s-1,i}}.
\end{equation*}
Now for any $i\in I_t$ we note $\hat L_{t-1,i}:=\sum_{s=t_i}^{t-1}\hat\ell_s$. In particular, if $t_i=t$ we have $\hat L_{t-1,i}=L_{t-1,i}=0$. The weights at time $t$ are constructed as $w_{t-1,i}=e^{\eta_t(\hat L_{t-1,i}-L_{t-1,i})}$. We now define a random index $\hat i_t$, independent from the past history such that
\begin{equation*}
\mathbb{P}(\hat i_t = i) = \frac{w_{t-1,i}}{\sum_{j\in I_t}w_{t-1,j}}.
\end{equation*}
The output of our learning rule is $\hat Y_t:= \hat Y_t(\epsilon_{\hat i_t})$. Before analyzing this algorithm, we introduce the following helper lemma, which can be found in \cite{cesa2006prediction}.
\begin{lemma}[\cite{cesa2006prediction}]
\label{lemma:cesa_bianchi_lugosi}
For all $N\geq 2$, for all $\beta\geq\alpha\geq 0$ and for all $d_1,\ldots,d_N\geq 0$ such that $\sum_{i=1}^N e^{-\alpha d_i}\geq 1$,
\begin{equation*}
\ln \frac{\sum_{i=1}^N e^{-\alpha d_i}}{\sum_{i=1}^N e^{-\beta d_i}} \leq \frac{\beta-\alpha}{\alpha} \ln N.
\end{equation*}
\end{lemma}
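A quick numerical sanity check of the inequality in Lemma \ref{lemma:cesa_bianchi_lugosi}, on arbitrary (hypothetical) nonnegative values satisfying its hypothesis:

```python
import math
import random

def log_ratio(alpha, beta, ds):
    """ln of (sum_i e^{-alpha d_i}) / (sum_i e^{-beta d_i})."""
    num = sum(math.exp(-alpha * d) for d in ds)
    den = sum(math.exp(-beta * d) for d in ds)
    return math.log(num / den)

rng = random.Random(0)
ds = [rng.uniform(0.0, 5.0) for _ in range(10)]  # arbitrary nonnegative values
alpha, beta = 0.2, 0.7
hypothesis_ok = sum(math.exp(-alpha * d) for d in ds) >= 1  # lemma's condition
bound = (beta - alpha) / alpha * math.log(len(ds))          # (beta-alpha)/alpha * ln N
```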
We now compare the predictions of this learning rule $f_\cdot$ to the predictions of the rules $f^\epsilon_\cdot$.
\begin{lemma}\label{lemma:concatenation_predictors}
Almost surely, there exists $\hat t\geq 0$ such that
\begin{equation*}
\forall t\geq \hat t,\forall i\in I_t,\quad \sum_{s=t_i}^t\ell(\hat Y_s,Y_s) \leq \sum_{s=t_i}^t \ell(\hat Y_s(\epsilon_i),Y_s) + (2+\bar\ell+\bar\ell^2)\sqrt {t\ln t}.
\end{equation*}
\end{lemma}
\begin{proof}
For any $t\geq 0$, we define the instantaneous regret $r_{t,i} = \hat\ell_t - \ell(\hat Y_t(\epsilon_i),Y_t)$. We first note that $|r_{t,i}|\leq \bar \ell$. We now define $w'_{t-1,i}:=e^{\eta_{t-1}(\hat L_{t-1,i}-L_{t-1,i})}$. We also introduce $W_{t-1} = \sum_{i\in I_t}w_{t-1,i}$ and $W'_{t-1} = \sum_{i\in I_{t-1}} w'_{t-1,i}$. We denote by $k_t\in I_t$ the index such that $\hat L_{t,k_t}- L_{t,k_t} = \max_{i\in I_t} \hat L_{t,i} - L_{t,i}$. Then we write
\begin{equation*}
\frac{1}{\eta_t}\ln \frac{w_{t-1,k_{t-1}}}{W_{t-1}}- \frac{1}{\eta_{t+1}}\ln \frac{w_{t,k_t}}{W_t}=
\left(\frac{1}{\eta_{t+1}}-\frac{1}{\eta_t}\right)\ln\frac{W_t}{w_{t,k_t}} + \frac{1}{\eta_t} \ln \frac{W_t/w_{t,k_t}}{W'_t/w'_{t,k_t}} +\frac{1}{\eta_t}\ln \frac{w_{t-1,k_{t-1}}}{w'_{t,k_t}} + \frac{1}{\eta_t}\ln \frac{W'_t}{W_{t-1}}.
\end{equation*}
By construction, we have $\ln\frac{W_t}{w_{t,k_t}}\leq \ln |I_t| \leq \ln(1+\ln t)$. Further, we have that
\begin{align*}
\frac{1}{\eta_t} \ln \frac{W_t/w_{t,k_t}}{W'_t/w'_{t,k_t}} &=\frac{1}{\eta_t}\ln \frac{\sum_{i\in I_{t+1}} e^{\eta_{t+1}(\hat L_{t,i}-L_{t,i}-\hat L_{t,k_t}+L_{t,k_t})}}{\sum_{i\in I_t} e^{\eta_t(\hat L_{t,i}-L_{t,i}-\hat L_{t,k_t}+L_{t,k_t})}}\\
&=\frac{1}{\eta_t}\ln \frac{\sum_{i\in I_{t+1}} w_{t,i}}{\sum_{i\in I_t} w_{t,i}} + \frac{1}{\eta_t} \ln \frac{\sum_{i\in I_{t+1}} e^{\eta_{t+1}(\hat L_{t,i}-L_{t,i}-\hat L_{t,k_t}+L_{t,k_t})}}{\sum_{i\in I_{t+1}} e^{\eta_t(\hat L_{t,i}-L_{t,i}-\hat L_{t,k_t}+L_{t,k_t})}}\\
&\leq \frac{1}{\eta_t}\ln \frac{\sum_{i\in I_{t+1}} w_{t,i}}{\sum_{i\in I_t} w_{t,i}} + \frac{1}{\eta_t}\left(\frac{\eta_t-\eta_{t+1}}{\eta_{t+1}}\right) \ln |I_{t+1}|\\
&\leq \frac{|I_{t+1}|-|I_t|}{\eta_t \sum_{i\in I_t} w_{t,i}} + \left(\frac{1}{\eta_{t+1}}-\frac{1}{\eta_t}\right) \ln (1+\ln (t+1)),
\end{align*}
where in the first inequality we applied Lemma \ref{lemma:cesa_bianchi_lugosi}. We also have
\begin{equation*}
\frac{1}{\eta_t}\ln \frac{w_{t-1,k_{t-1}}}{w'_{t,k_t}} = (\hat L_{t-1,k_{t-1}}- L_{t-1,k_{t-1}}) - (\hat L_{t,k_t}- L_{t,k_t}).
\end{equation*}
Last, because $|r_{t,i}|\leq \bar \ell$ for all $i\in I_t$, we can use Hoeffding's lemma to obtain
\begin{equation*}
\frac{1}{\eta_t}\ln \frac{W'_t}{W_{t-1}} = \frac{1}{\eta_t} \ln \sum_{i\in I_t} \frac{w_{t-1,i}}{W_{t-1}}e^{\eta_t r_{t,i}} \leq \frac{1}{\eta_t}\left( \eta_t\sum_{i\in I_t} r_{t,i} \frac{w_{t-1,i}}{W_{t-1}} + \frac{\eta_t^2 (2\bar\ell)^2}{8}\right) = \frac{1}{2}\eta_t \bar\ell^2.
\end{equation*}
Putting everything together gives
\begin{multline}
\label{eq:to_sum_combine_estimators}
\frac{1}{\eta_t}\ln \frac{w_{t-1,k_{t-1}}}{W_{t-1}}- \frac{1}{\eta_{t+1}}\ln \frac{w_{t,k_t}}{W_t}
\leq 2\left(\frac{1}{\eta_{t+1}}-\frac{1}{\eta_t}\right) \ln (1+\ln (t+1)) + \frac{|I_{t+1}|-|I_t|}{\eta_t \sum_{i\in I_t} w_{t,i}} \\
+ (\hat L_{t-1,k_{t-1}}- L_{t-1,k_{t-1}}) - (\hat L_{t,k_t}- L_{t,k_t}) + \frac{1}{2}\eta_t \bar\ell^2.
\end{multline}
First suppose that we have $\sum_{i\in I_t}w_{t,i}\leq 1$. Then either $k_t\in I_{t+1}\setminus I_t$ in which case $\hat L_{t,k_t}-L_{t,k_t}=0$, or we have directly
\begin{equation*}
\hat L_{t,k_t}-L_{t,k_t} \leq \frac{1}{\eta_{t+1}}\ln\left[\sum_{i\in I_t}w_{t,i}\right] \leq 0.
\end{equation*}
Otherwise, let $t'=\min \{1\leq s\leq t:\forall s\leq s'\leq t,\sum_{i\in I_{s'}} w_{s',i}\geq 1\}$. We sum equation~\eqref{eq:to_sum_combine_estimators} for $s=t',\ldots, t$ which gives
\begin{equation*}
\frac{1}{\eta_1}\ln \frac{w_{t'-1,k_{t'-1}}}{W_{t'-1}}- \frac{1}{\eta_{t+1}}\ln \frac{w_{t,k_t}}{W_t} \leq \frac{2}{\eta_{t+1}} \ln (1+\ln (t+1))+ \frac{|I_{t+1}|}{\eta_t} + (\hat L_{t'-1,k_{t'-1}}- L_{t'-1,k_{t'-1}}) - (\hat L_{t,k_t}- L_{t,k_t}) + \frac{\bar\ell^2}{2}\sum_{s=t'}^t\eta_s.
\end{equation*}
Note that we have $\frac{w_{t,k_t}}{W_t}\leq 1$ and $\frac{w_{t'-1,k_{t'-1}}}{W_{t'-1}}\geq \frac{1}{|I_{t'-1}|}\geq \frac{1}{1+\ln t}$. Also, assuming $t'\geq 2$, since $\sum_{i\in I_{t'-1}} w_{t'-1,i}< 1$, we have for any $i\in I_{t'-1}$ that $\hat L_{t'-1,i}-L_{t'-1,i}\leq 0$, hence $\hat L_{t'-1,k_{t'-1}}- L_{t'-1,k_{t'-1}}\leq 0$. If $t'=1$ we have directly $\hat L_{0,k_0}-L_{0,k_0}=0$. Finally, using the fact that $\sum_{s=1}^t \frac{1}{\sqrt s}\leq 2\sqrt t$, we obtain
\begin{equation*}
\hat L_{t,k_t}- L_{t,k_t} \leq \ln(1+\ln (t+1))\left(1+2\sqrt{\frac{t+1}{\ln(t+1)}}\right) +(1+\ln (t+1))\sqrt {\frac{t}{\ln t}} + \bar\ell^2\sqrt {t\ln t} \leq (3/2+\bar \ell^2) \sqrt {t \ln t},
\end{equation*}
for all $t\geq t_0$ where $t_0$ is a fixed constant. This in turn implies that for all $t\geq t_0$ and $i\in I_t$, we have $\hat L_{t,i} -L_{t,i} \leq (3/2+\bar \ell^2) \sqrt {t \ln t}.$ Now note that $|\ell(\hat Y_t,Y_t)-\hat \ell_t|\leq \bar \ell$. Hence, we can use Hoeffding-Azuma inequality for the variables $\ell(\hat Y_t,Y_t)-\hat \ell_t$ that form a sequence of martingale differences to obtain $ \mathbb{P}\left[\sum_{s=t_i}^t \ell(\hat Y_s,Y_s)>\hat L_{t,i} + u\right] \leq e^{ -\frac{2u^2}{t\bar\ell^2}}.$ Hence, for $t\geq t_0$ and $i\in I_t$, with probability $1-\delta$, we have
\begin{equation*}
\sum_{s=t_i}^t \ell(\hat Y_s,Y_s)\leq \hat L_{t,i} + \bar\ell \sqrt{\frac{t}{2}\ln\frac{1}{\delta}} \leq L_{t,i} +(3/2+\bar \ell^2) \sqrt {t \ln t} + \bar\ell \sqrt{\frac{t}{2}\ln\frac{1}{\delta}}.
\end{equation*}
Therefore, since $|I_t|\leq 1+\ln t$, by union bound with probability $1-\frac{1}{t^2}$ we obtain that for all $i\in I_t$,
\begin{equation*}
\sum_{s=t_i}^t \ell(\hat Y_s,Y_s) \leq L_{t,i} + (3/2+\bar \ell^2) \sqrt {t \ln t} + \bar\ell\sqrt{\frac{t}{2}\ln (1+\ln t)}+ \bar\ell \sqrt{t\ln t}\leq L_{t,i} + (2+\bar\ell+\bar\ell^2)\sqrt {t\ln t},
\end{equation*}
for all $t\geq t_1$ where $t_1\geq t_0$ is a fixed constant. The Borel-Cantelli lemma implies that almost surely, there exists $\hat t\geq 0$ such that
\begin{equation*}
\forall t\geq \hat t, \forall i\in I_t,\quad \sum_{s=t_i}^t \ell(\hat Y_s,Y_s) \leq L_{t,i} +(2+\bar\ell+\bar\ell^2)\sqrt {t\ln t}.
\end{equation*}
This ends the proof of the lemma.
\end{proof}
We are now ready to prove the main result: for totally-bounded value spaces, $f_\cdot$ is universally consistent for adversarial regression under all processes $\mathbb{X}\in{\text{SMV}}$.
\begin{proof}[Proof of Theorem \ref{thm:optimistic_regression_totally_bounded}]
Let $0<\epsilon\leq 1$. We first analyze the predictions of the learning rule $f^\epsilon_\cdot$. The learning rule was constructed so that it performs exactly the classical exponentially weighted average forecaster along the sequence of times $\phi^{L_t}(t),\phi^{L_t-1}(t),\ldots, \phi (t),t$. As a result, denoting by $\bar\ell(\hat Y_t(\epsilon),Y_t):=\sum_{y\in\mathcal{Y}_\epsilon} \mathbb{P}(\hat Y_t(\epsilon)=y) \ell(y,Y_t)$ the expected instantaneous loss, the standard regret bound for this forecaster gives, for any $t\geq 1$,
\begin{equation*}
\frac{1}{\bar\ell} \sum_{u=0}^{L_t} \bar\ell(\hat Y_{\phi^u(t)}(\epsilon),Y_{\phi^u(t)}) \leq \frac{1}{\bar\ell} \min_{y\in\mathcal{Y}_\epsilon} \sum_{u=0}^{L_t} \ell(y,Y_{\phi^u(t)}) + \sqrt{\frac{L_t \ln|\mathcal{Y}_\epsilon|}{2}}.
\end{equation*}
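This is the standard regret bound of the exponentially weighted average forecaster over a finite set. As a self-contained numerical sanity check, detached from the chain structure $\phi$ (hypothetical net and response sequence, losses normalized to $[0,1]$):

```python
import math

def ewa_expected_losses(net, loss, ys, eta):
    """Expected per-round losses of the forecaster that draws y in `net`
    with probability proportional to exp(-eta * cumulative loss of y)."""
    cum = {y: 0.0 for y in net}
    out = []
    for y_t in ys:
        weights = {y: math.exp(-eta * cum[y]) for y in net}
        total = sum(weights.values())
        out.append(sum(w * loss(y, y_t) for y, w in weights.items()) / total)
        for y in net:
            cum[y] += loss(y, y_t)
    return out

net = [k / 4 for k in range(5)]                # hypothetical finite net of [0, 1]
seq = [((3 * t) % 7) / 6 for t in range(400)]  # arbitrary [0, 1]-valued sequence
eta = math.sqrt(8 * math.log(len(net)) / len(seq))
losses = ewa_expected_losses(net, lambda a, b: abs(a - b), seq, eta)
best_fixed = min(sum(abs(c - y) for y in seq) for c in net)
```

With $\eta=\sqrt{8\ln N/T}$ and losses in $[0,1]$, the cumulative expected loss exceeds the best fixed value by at most $\sqrt{(T\ln N)/2}$, matching the bound above after normalization by $\bar\ell$.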
Now consider a horizon $T\geq 1$ and denote by $\mathcal{A}_i:=\{t\leq T:|\{u\leq T:\phi(u)=t\}|=i\}$ the set of times which have exactly $i$ children within horizon $T$, for $i=0,1,2$. Note that no time can have more than 2 children. Define $\mathcal{B}_T = \{t\leq T: L_t=T_\epsilon-1\text{ and }\forall u=1,\ldots,T_\epsilon-1,\phi^u(t)\in \mathcal{A}_1\}$, i.e., times whose depth is congruent to $T_\epsilon-1$ modulo $T_\epsilon$ and whose ancestors up to the $(T_\epsilon-1)-$th generation have exactly one child. Note that by construction, for all $t\in\mathcal{B}_T$, the sets $\{\phi^u(t),u=0,\ldots,T_\epsilon-1\}$ are disjoint, which yields $|\mathcal{B}_T|T_\epsilon\leq T$. Last, we denote $\mathcal{E}=\bigcup_{t\in\mathcal{B}_T} \{\phi^u(t),u=0,\ldots,T_\epsilon-1\}$. This set contains all times $t\leq T$ except for times close to leaves $\mathcal{A}_0$ or to times in $\mathcal{A}_2$ which had two children. Specifically, for a time $t\in\mathcal{A}_2$, its descendants until generation $T_\epsilon-1$ may not belong to $\mathcal{E}$. Therefore, by summing the above equation over all times in $\mathcal{B}_T$, we obtain
\begin{align*}
\sum_{t=1}^{T} \bar\ell(\hat Y_t(\epsilon),Y_t) &\leq \sum_{t\in\mathcal{B}_T} \min_{y\in\mathcal{Y}_\epsilon} \sum_{u=0}^{T_\epsilon-1} \ell(y,Y_{\phi^u(t)}) + |\mathcal{B}_T|\bar\ell\sqrt{\frac{T_\epsilon \ln|\mathcal{Y}_\epsilon|}{2}} + (T-|\mathcal{E}|)\bar\ell \\
&\leq \sum_{t\in\mathcal{B}_T} \min_{y\in\mathcal{Y}_\epsilon} \sum_{u=0}^{T_\epsilon-1} \ell(y,Y_{\phi^u(t)}) + T\bar\ell\sqrt{\frac{ \ln|\mathcal{Y}_\epsilon|}{2T_\epsilon}} + (|\mathcal{A}_2|2^{T_\epsilon}+|\mathcal{A}_0|T_\epsilon)\bar\ell\\
&\leq \sum_{t\in\mathcal{B}_T} \min_{y\in\mathcal{Y}_\epsilon} \sum_{u=0}^{T_\epsilon-1} \ell(y,Y_{\phi^u(t)}) + \epsilon T+ (|\mathcal{A}_2|2^{T_\epsilon}+|\mathcal{A}_0|T_\epsilon)\bar\ell.
\end{align*}
Now note that by counting the number of edges of the tree structure we obtain $\frac{1}{2}(3|\mathcal{A}_2| + 2|\mathcal{A}_1|+|\mathcal{A}_0|-1) = T-1 = |\mathcal{A}_0|+|\mathcal{A}_1|+|\mathcal{A}_2|-1$, where the $-1$ on the left-hand side accounts for the root of this tree which does not have a parent. Hence we obtain $|\mathcal{A}_0|= |\mathcal{A}_2|+1$. Further, $|\mathcal{A}_2|\leq |\{t\leq T:U_t=1\}|$ which follows a binomial distribution $\mathcal{B}(T,\delta_\epsilon)$. Therefore, using the Chernoff bound, with probability $1-e^{-T\delta_\epsilon/3}$ we have
\begin{align*}
\sum_{t=1}^{T} \bar\ell(\hat Y_t(\epsilon),Y_t) &\leq \sum_{t\in\mathcal{B}_T} \min_{y\in\mathcal{Y}_\epsilon} \sum_{u=0}^{T_\epsilon-1} \ell(y,Y_{\phi^u(t)}) + \epsilon T+ (T_\epsilon+2T\delta_\epsilon(2^{T_\epsilon}+T_\epsilon))\bar\ell\\
&\leq \sum_{t\in\mathcal{B}_T} \min_{y\in\mathcal{Y}_\epsilon} \sum_{u=0}^{T_\epsilon-1} \ell(y,Y_{\phi^u(t)}) + T_\epsilon \bar\ell+2\epsilon T.
\end{align*}
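The counting identity $|\mathcal{A}_0|=|\mathcal{A}_2|+1$ used above holds for any tree in which each node has at most two children; the following sketch (with a hypothetical random attachment rule, for illustration only) checks it numerically:

```python
import random

def leaf_counts(T, seed):
    """Grow a tree on nodes 0..T-1 in which each new node attaches to a
    uniformly chosen earlier node that still has fewer than two children."""
    rng = random.Random(seed)
    children = [0] * T
    for t in range(1, T):
        parent = rng.choice([u for u in range(t) if children[u] < 2])
        children[parent] += 1
    a0 = sum(1 for c in children if c == 0)  # leaves, |A_0|
    a2 = sum(1 for c in children if c == 2)  # two-child nodes, |A_2|
    return a0, a2
```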
We now observe that the sequence $\{\ell(\hat Y_t(\epsilon),Y_t)-\bar\ell(\hat Y_t(\epsilon),Y_t)\}_{t\geq 1}$ is a sequence of martingale differences bounded by $\bar\ell$ in absolute value. Hence, the Hoeffding-Azuma inequality yields that for any $T\geq 1$, with probability $1-\frac{1}{T^2}-e^{-T\delta_\epsilon/3}$,
\begin{equation*}
\sum_{t=1}^{T} \ell(\hat Y_t(\epsilon),Y_t)\leq \sum_{t\in\mathcal{B}_T} \min_{y\in\mathcal{Y}_\epsilon} \sum_{u=0}^{T_\epsilon-1} \ell(y,Y_{\phi^u(t)}) + T_\epsilon \bar\ell+2\epsilon T + 2\bar\ell \sqrt{T\ln T}.
\end{equation*}
Because $\sum_{T\geq 1}\frac{1}{T^2}+e^{-T\delta_\epsilon/3}<\infty$ the Borel-Cantelli lemma implies that with probability one, there exists a time $\hat T$ such that
\begin{equation*}
\forall T\geq \hat T,\quad \sum_{t=1}^{T} \ell(\hat Y_t(\epsilon),Y_t)\leq \sum_{t\in\mathcal{B}_T} \min_{y\in\mathcal{Y}_\epsilon} \sum_{u=0}^{T_\epsilon-1} \ell(y,Y_{\phi^u(t)}) + T_\epsilon\bar\ell+ 2\bar\ell \sqrt{T\ln T} +2\epsilon T.
\end{equation*}
We denote by $\mathcal{E}_\epsilon$ this event. We are now ready to analyze the risk of the learning rule $f^\epsilon_\cdot$. Let $f:\mathcal{X}\to\mathcal{Y}$ be a measurable function to which we compare the predictions of $f^\epsilon_\cdot$. By Theorem \ref{thm:1+deltaC1NN_optimistic}, the rule $(1+\delta_\epsilon)$C1NN is optimistically universal in the noiseless setting. Therefore, because $\mathbb{X}\in\text{SOUL}$, we have in particular
\begin{equation*}
\frac{1}{T}\sum_{t=1}^T \ell((1+\delta_\epsilon)C1NN_t(\mathbb{X}_{\leq t-1},f(\mathbb{X}_{\leq t-1}),X_t),f(X_t))\to 0\quad (a.s.),
\end{equation*}
i.e., almost surely, $\frac{1}{T}\sum_{t=1}^T \ell(f(X_{\phi(t)}),f(X_t)) \to 0$. We denote by $\mathcal{F}_\epsilon$ this event of probability one. Using Lemma \ref{lemma:loss_identity}, we write for any $u=1,\ldots,T_\epsilon-1$,
\begin{align*}
\sum_{t=1}^T \ell(f(X_{\phi^u(t)}),f(X_t)) &\leq 2^{\alpha-1}\sum_{t=1}^T \ell(f(X_{\phi^{u-1}(t)}),f(X_t)) + 2^{\alpha-1} \sum_{t=1}^T\ell(f(X_{\phi^u(t)}),f(X_{\phi^{u-1}(t)}))\\
&\leq 2^{\alpha-1}\sum_{t=1}^T \ell(f(X_{\phi^{u-1}(t)}),f(X_t)) + 2^{\alpha-1}\sum_{t=1}^T \ell(f(X_{\phi(t)}),f(X_t)) \cdot |\{l\leq T:\phi^{u-1}(l)=t\}|\\
&\leq 2^{\alpha-1}\sum_{t=1}^T \ell(f(X_{\phi^{u-1}(t)}),f(X_t)) + 2^{\alpha+u-2}\sum_{t=1}^T \ell(f(X_{\phi(t)}),f(X_t))
\end{align*}
where we used the fact that times have at most $2$ children. Therefore, iterating the above equations, we obtain that on $\mathcal{F}_\epsilon$, for any $u=1,\ldots,T_\epsilon-1$
\begin{align*}
\frac{1}{T}\sum_{t=1}^T \ell(f(X_{\phi^u(t)}),f(X_t)) &\leq \left(\sum_{k=1}^u 2^{\alpha+k-2 + (\alpha-1)(u-k)}\right)\frac{1}{T}\sum_{t=1}^T \ell(f(X_{\phi(t)}),f(X_t))\\
&\leq \frac{2^{u\alpha}}{T}\sum_{t=1}^T \ell(f(X_{\phi(t)}),f(X_t)) \to 0.
\end{align*}
In the rest of the proof, for any $y\in\mathcal{Y}$, we will denote by $y^\epsilon$ a value in the $\epsilon-$net $\mathcal{Y}_\epsilon$ such that $\ell(y,y^\epsilon)\leq \epsilon$. We now pose $\mu_\epsilon=\min\{0<\mu\leq 1:c_\mu^\alpha \leq \frac{1}{\sqrt \epsilon} \}$ if the corresponding set is non-empty and $\mu_\epsilon=1$ otherwise. Note that because $c_\mu^\alpha$ is non-increasing in $\mu$, we have $\mu_\epsilon\longrightarrow_{\epsilon\to 0} 0$. Putting everything together, on the event $\mathcal{E}_\epsilon\cap\mathcal{F}_\epsilon$, for any $T\geq \hat T$, we have
\begin{align*}
\sum_{t=1}^T \ell(\hat Y_t(\epsilon),Y_t)&\leq \sum_{t\in\mathcal{B}_T} \min_{y\in\mathcal{Y}_\epsilon} \sum_{u=0}^{T_\epsilon-1} \ell(y,Y_{\phi^u(t)})+ T_\epsilon \bar\ell + 2\bar\ell \sqrt{T\ln T} +2\epsilon T\\
&\leq \sum_{t\in\mathcal{B}_T} \sum_{u=0}^{T_\epsilon-1} \ell(f(X_t)^\epsilon,Y_{\phi^u(t)})+ T_\epsilon \bar\ell + 2\bar\ell \sqrt{T\ln T} +2\epsilon T\\
&\leq \sum_{t\in\mathcal{B}_T} \sum_{u=0}^{T_\epsilon-1} \left[c_{\mu_\epsilon}^\alpha \ell(f(X_t)^\epsilon,f(X_t)) +(c_{\mu_\epsilon}^\alpha)^2\ell(f(X_t),f(X_{\phi^u(t)})) + (1+{\mu_\epsilon})^2 \ell(f(X_{\phi^u(t)}),Y_{\phi^u(t)}) \right]\\
&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad + T_\epsilon \bar\ell + 2\bar\ell \sqrt{T\ln T} +2\epsilon T\\
&\leq (1+{\mu_\epsilon})^2 \sum_{t=1}^T \ell(f(X_t),Y_t) + (c_{\mu_\epsilon}^\alpha)^2\sum_{u=1}^{T_\epsilon-1}\sum_{t=1}^T \ell(f(X_t),f(X_{\phi^u(t)})) + T_\epsilon \bar\ell + 2\bar\ell \sqrt{T\ln T} +(2 + c_{\mu_\epsilon}^\alpha)\epsilon T\\
&\leq \sum_{t=1}^T \ell(f(X_t),Y_t) + (c_{\mu_\epsilon}^\alpha)^2\sum_{u=1}^{T_\epsilon-1}\sum_{t=1}^T \ell(f(X_t),f(X_{\phi^u(t)})) + T_\epsilon \bar\ell + 2\bar\ell \sqrt{T\ln T} +(2\epsilon + \epsilon c_{\mu_\epsilon}^\alpha + 3{\mu_\epsilon}) T,
\end{align*}
where in the third inequality we used Lemma \ref{lemma:loss_identity} twice. Hence, for any $\epsilon<(c_1^\alpha)^{-2}$, on the event $\mathcal{E}_\epsilon\cap\mathcal{F}_\epsilon$, we obtain
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T} \sum_{t=1}^T \ell(\hat Y_t(\epsilon),Y_t) - \ell(f(X_t),Y_t) \leq 2\epsilon + \epsilon c_{\mu_\epsilon}^\alpha + 3{\mu_\epsilon} \leq 2\epsilon + \sqrt \epsilon + 3{\mu_\epsilon},
\end{equation*}
where $\mu_\epsilon\longrightarrow_{\epsilon\to 0} 0$. We now denote $\delta_\epsilon:= 2\epsilon + \sqrt \epsilon + 3{\mu_\epsilon}$ and $i_0=\lceil \frac{2\ln c_1^\alpha}{\ln 2} \rceil$. We now turn to the final learning rule and show that by using the predictions of the rules $f^{\epsilon_i}_\cdot$ for $i\geq 0$, it achieves zero risk. First, by the union bound, on the event $\bigcap_{i\geq 0} \mathcal{E}_{\epsilon_i}\cap\mathcal{F}_{\epsilon_i}$ of probability one,
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T} \sum_{t=1}^T \ell(\hat Y_t(\epsilon_i),Y_t) - \ell(f(X_t),Y_t) \leq \delta_{\epsilon_i},\quad \forall i\geq i_0.
\end{equation*}
Now define $\mathcal{H}$ the event of probability one given by Lemma \ref{lemma:concatenation_predictors}, on which there exists $\hat t$ such that
\begin{equation*}
\forall t\geq \hat t,\forall i\in I_t,\quad \sum_{s=t_i}^t\ell(\hat Y_s,Y_s) \leq \sum_{s=t_i}^t \ell(\hat Y_s(\epsilon_i),Y_s) +
(2+\bar\ell+\bar\ell^2)\sqrt{t\ln t}.
\end{equation*}
In the rest of the proof we will suppose that the event $\mathcal{H}\cap \bigcap_{i\geq 0} \mathcal{E}_{\epsilon_i}\cap\mathcal{F}_{\epsilon_i}$ is met. Let $i\geq i_0$. For any $T\geq \max(\hat t,t_i)$, we have
\begin{align*}
\frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t,Y_t)- \ell(f(X_t),Y_t) &\leq \frac{t_i}{T}\bar \ell + \frac{1}{T}\sum_{t=t_i}^T \ell(\hat Y_t,Y_t)- \ell(f(X_t),Y_t)\\
&\leq \frac{t_i}{T}\bar \ell + \frac{1}{T}\sum_{t=t_i}^T \ell(\hat Y_t(\epsilon_i),Y_t)- \ell(f(X_t),Y_t) + (2+\bar\ell+\bar\ell^2)\sqrt{\frac{\ln T}{T}}\\
&\leq \frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t(\epsilon_i),Y_t)- \ell(f(X_t),Y_t) + \frac{2t_i}{T}\bar\ell + (2+\bar\ell+\bar\ell^2)\sqrt{\frac{\ln T}{T}}.
\end{align*}
Therefore we obtain $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t,Y_t)- \ell(f(X_t),Y_t) \leq \delta_{\epsilon_i}$. Because this holds for any $i\geq i_0$ on the event $\mathcal{H}\cap \bigcap_{i\geq 0} \mathcal{E}_{\epsilon_i}\cap\mathcal{F}_{\epsilon_i}$ of probability one, and $\delta_{\epsilon_i}\to 0$ for $i\to\infty$, we have
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t,Y_t)- \ell(f(X_t),Y_t) \leq 0.
\end{equation*}
This ends the proof of the theorem.
\end{proof}
As a result, we obtain in particular ${\text{SMV}}\subset\text{SOLAR}$ for totally-bounded value spaces. Recalling that for bounded values ${\text{SMV}}=\text{SOUL}$ \cite{blanchard2022universal}, i.e., that processes $\mathbb{X}\notin{\text{SMV}}$ are not universally learnable even in the noiseless setting, we also have $\text{SOLAR}\subset{\text{SMV}}$. Thus we obtain a complete characterization of the processes which admit universal learning with adversarial responses: $\text{SOLAR}={\text{SMV}}$. Further, we obtain as a corollary that the proposed learning rule from Theorem \ref{thm:optimistic_regression_totally_bounded} is optimistically universal for adversarial regression. These results are summarized in the following corollary.
\begin{corollary}
\label{cor:optimistic_totally_bounded}
Suppose that $(\mathcal{Y},\ell)$ is totally-bounded. Then, $\text{SOLAR} = {\text{SMV}}(=\text{SOUL})$ and there exists an optimistically universal learning rule for adversarial regression, i.e., which achieves universal consistency with adversarial responses under any process $\mathbb{X}\in\text{SOUL}$.
\end{corollary}
This is a first step towards the more general Theorem \ref{thm:good_value_spaces}. Indeed, one can note that $\text{F-TiME}$ is satisfied by any totally-bounded value space: given a fixed error tolerance $\eta>0$, consider a finite $\frac{\eta}{2}-$net $\mathcal{Y}_{\eta/2}$ of $\mathcal{Y}$. Intuitively, because this is a finite set, we can perform classical online learning---for instance with the exponentially weighted average forecaster \cite{cesa2006prediction}---to obtain $\Theta(\sqrt {T\ln |\mathcal{Y}_{\eta/2}|})$ regret compared to the best fixed value of $\mathcal{Y}_{\eta/2}$. For example, if $\alpha=1$, setting $T_\eta=\Theta(\frac{4}{\eta^2}\ln |\mathcal{Y}_{\eta/2}|)$ ensures a regret of at most $\frac{\eta}{2} T_\eta$ compared to any fixed value of $\mathcal{Y}_{\eta/2}$, hence regret at most $\eta T_\eta$ compared to any value of $\mathcal{Y}$. This achieves $\text{F-TiME}$, with a deterministic time $\tau_\eta:=T_\eta$.
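To make the preceding argument concrete, below is a minimal sketch (in Python, with names of our own choosing, not from the paper) of the exponentially weighted average forecaster over a finite candidate set, here predicting the weighted average of real-valued candidates. For a convex loss with values in $[0,1]$ this variant guarantees, deterministically, regret at most $\ln N/\eta + \eta T/8$ against the best fixed candidate, which is $O(\sqrt{T\ln N})$ for $\eta=\sqrt{8\ln N/T}$; in a general metric value space one would instead sample a candidate from the exponential weights.

```python
import math

def ewa_forecaster(candidates, loss, outcomes, eta):
    """Exponentially weighted average forecaster over a finite candidate
    set (e.g. an eta/2-net of a totally-bounded value space).  Each round
    it predicts the average of the candidates weighted by
    exp(-eta * cumulative loss), then observes the outcome.  Returns the
    forecaster's total loss and the total loss of the best fixed candidate."""
    cum = [0.0] * len(candidates)   # cumulative loss of each candidate
    total = 0.0                     # cumulative loss of the forecaster
    for y in outcomes:
        w = [math.exp(-eta * c) for c in cum]
        s = sum(w)
        pred = sum(wi * ci for wi, ci in zip(w, candidates)) / s
        total += loss(pred, y)
        cum = [c + loss(ci, y) for ci, c in zip(candidates, cum)]
    return total, min(cum)
```

With the absolute loss on $[0,1]$ and the optimized $\eta$, the empirical regret indeed stays below $\ln N/\eta + \eta T/8$, matching the $\Theta(\sqrt{T\ln|\mathcal{Y}_{\eta/2}|})$ regret invoked above.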
\subsection{Proof of Theorem \ref{thm:1+deltaC1NN_optimistic}}
\label{subsec:1+deltaC1NN}
In this section, we prove that for any $\delta>0$, the $(1+\delta)$C1NN learning rule is optimistically universal for the noiseless setting. The proof follows the same structure as the proof of the main result in \cite{blanchard2022universal} which shows that 2C1NN is optimistically universal. We first focus on the binary classification setting and show that the learning rule $(1+\delta)$C1NN is consistent on functions representing open balls.
\begin{proposition}
\label{prop:consistent_ball_borel}
Fix $0<\delta\leq 1$. Let $(\mathcal{X},\mathcal{B})$ be a separable Borel space constructed from the metric $\rho$. We consider the binary classification setting $\mathcal{Y} =\{0,1\}$ and the $\ell_{01}$ binary loss. For any input process $\mathbb{X}\in {\text{SMV}}$, any $x\in \mathcal{X}$, and any $r>0$, the learning rule $(1+\delta)$C1NN is consistent for the target function $f^*= \mathbbm{1}_{B_\rho(x,r)}$.
\end{proposition}
\begin{proof}
We fix $\bar x\in \mathcal{X}$, $r>0$ and $f^* = \mathbbm{1}_{ B(\bar x,r)}$. We argue by contradiction and suppose that $(1+\delta)$C1NN is not consistent on $f^*$. Then, $\eta:=\mathbb P(\mathcal{L}_\mathbb{X} ((1+\delta)C1NN,f^*)>0)>0$. Therefore, there exists $0<\epsilon \leq 1$ such that $\mathbb P(\mathcal{L}_\mathbb{X} ((1+\delta)C1NN,f^*)> \epsilon)>\frac{\eta}{2}$.
Denote by $\mathcal{A}:=\{\mathcal{L}_\mathbb{X} ((1+\delta)C1NN,f^*)>\epsilon\}$ this event of probability at least $\frac{\eta}{2}$. Because $\mathcal{X}$ is separable, let $(x^i)_{i\geq 1}$ be a dense sequence of $\mathcal{X}$. We consider the same partition $(P_i)_{i\geq 1}$ of $B(\bar x,r)$ and the partition $(A_i)_{i\geq 0}$ of $\mathcal{X}$ as in the original proof of \cite{blanchard2022universal}, but with the constant $c_\epsilon:=\frac{1}{2\cdot 2^{2^8/(\epsilon\delta)}}$ and changing the construction of the sequence $(n_l)_{l\geq 1}$ so that for all $l\geq 1$
\begin{equation*}
\mathbb{P}\left[\forall n\geq n_l,\;|\{i,\; P_i(\tau_l)\cap \mathbb{X}_{< n}\neq \emptyset \} | \leq \frac{\epsilon\delta}{2^{10}} n\right]\geq 1- \frac{\delta}{2\cdot 2^{l+2}}\quad \text{ and } \quad n_{l+1} \geq \frac{2^9}{\epsilon\delta}n_l.
\end{equation*}
Last, consider the product partition of $(P_i)_{i\geq 1}$ and $(A_i)_{i\geq 0}$ which we denote $\mathcal{Q}$. Similarly, we define the same events $\mathcal{E}_l,\mathcal{F}_l$ for $l\geq 1$. We aim to show that with nonzero probability, $\mathbb{X}$ does not visit a sublinear number of sets of $\mathcal{Q}$.
We now denote by $(t_k)_{k\geq 1}$ the increasing sequence of all (random) times when $(1+\delta)$C1NN makes an error in the prediction of $f^*(X_t)$. Because the event $\mathcal{A}$ is satisfied, i.e., $\mathcal{L}_{\mathbb{X}} ((1+\delta)C1NN,f^*)>\epsilon$, we can construct an increasing sequence of indices $(k_l)_{l\geq 1}$ such that $t_{k_l}<\frac{2k_l}{\epsilon}$. For any $t\geq 2$, we will denote by $\phi(t)$ the (random) index of the representative chosen by the $(1+\delta)$C1NN learning rule. Now let $l\geq 1$. Consider the tree $\mathcal{G}$ whose nodes are the times $\mathcal{T}:=\{t\leq t_{k_l}\}$ within horizon $t_{k_l}$, and whose parent relations are given by $(t,\phi(t))$ for $t\in \mathcal{T}\setminus\{1\}$. In other words, we construct the tree in which the parent of each new input is its representative. Note that by construction of the $(1+\delta)$C1NN learning rule, each node has at most $2$ children.
\paragraph{Step 1.}In this step, we consider the case when the majority of input points on which $(1+\delta)$C1NN made a mistake belong to $B(\bar x,r)$, i.e., $|\{k\leq k_l,\; X_{t_k}\in B(\bar x,r)\}|\geq \frac{k_l}{2}$. We denote by $\mathcal{H}_1$ this event. Let us now consider the subgraph $\tilde \mathcal{G}$ given by restricting $\mathcal{G}$ to nodes in the ball $B(\bar x,r)$---which are mapped to the true value $1$---i.e., to times $\mathcal{T}:=\{t\leq t_{k_l},\; X_t\in B(\bar x,r)\}$. In this subgraph, the only times with no parent are times $t_k$ with $k\leq k_l$ and $X_{t_k}\in B(\bar x,r)$, and possibly time $t=1$. Therefore, $\tilde \mathcal{G}$ is a collection of disjoint trees with roots $\{t_k, \; k\leq k_l, \; X_{t_k}\in B(\bar x,r)\}$, and possibly $t=1$ if $X_1\in B(\bar x,r)$. For a given time $t_k$ with $k\leq k_l$ and $X_{t_k}\in B(\bar x,r)$, we denote by $\mathcal{T}_k$ the corresponding tree in $\tilde \mathcal{G}$ with root $t_k$. We now introduce the notion of \emph{good} trees. We say that $\mathcal{T}_k$ is a good tree if $\mathcal{T}_k\cap \mathcal{D}_{t_{k_l}+1}\neq \emptyset$, i.e., the tree survived until the last dataset. Conversely, a tree is \emph{bad} if all its nodes were deleted before time $t_{k_l}+1$. We denote the sets of good and bad trees by $G=\{k:\mathcal{T}_k\text{ good}\}$ and $B=\{k:\mathcal{T}_k\text{ bad}\}$. In particular, we have $|G|+|B| = |\{k\leq k_l,X_{t_k}\in B(\bar x,r)\}|\geq k_l/2$. We aim to upper bound the number of bad trees. We now focus on trees $\mathcal{T}_k$ which induced a future first mistake, i.e., such that $\{l\in\mathcal{T}_k\mid\exists u\leq t_{k_l}:\phi(u)=l,\rho(X_l,\bar x)\geq r \text{ and } \forall v<u,\phi(v)\neq l \}\neq\emptyset$. We denote the corresponding minimum time $l_k=\min \{l\in\mathcal{T}_k\mid \exists u\leq t_{k_l}:\phi(u)=l,\rho(X_l,\bar x)\geq r,\forall v<u,\phi(v)\neq l \}$.
The terminology ``first mistake'' refers to the fact that the first time which used $l$ as representative corresponded to a mistake, as opposed to $l$ already having a child $X_u\in B(\bar x,r)$ which continues the line of descent of $l$ within the tree $\mathcal{T}_k$. Note that bad trees necessarily induce a future first mistake---otherwise, the tree would survive. For each of these times $l_k$, two scenarios are possible.
\begin{enumerate}
\item The value $U_{l_k}$ was never revealed within horizon $t_{k_l}$: as a result $l_k\in\mathcal{D}_{t_{k_l}+1}$.
\item The value $U_{l_k}$ was revealed within horizon $t_{k_l}$. Then, $U_{l_k}$ was revealed using a time $t$ for which $l_k$ was a potential representative. This scenario has two cases:
\begin{enumerate}
\item $\rho(X_t,\bar x)< r$. If used as representative $\phi(t)=l_k$, then $l_k$ would not have induced a mistake in the prediction of $Y_t$.
\item $\rho(X_t,\bar x)\geq r$. If used as representative $\phi(t)=l_k$, then $l_k$ would have induced a mistake in the prediction of $Y_t$.
\end{enumerate}
\end{enumerate}
In the case 2.a), if the point is used as representative $\phi(t)=l_k$ and if the corresponding tree $\mathcal{T}_k$ was bad, at least another future mistake is induced by $\mathcal{T}_k$---otherwise this tree would survive. We consider times $l_k$ for which the value was revealed, which corresponds to the only possible scenario for bad trees. We denote the corresponding set $K:=\{k:U_{l_k}\text{ revealed within horizon }t_{k_l}\}$. We now consider the sequence $k^a_1,\ldots k^a_\alpha$ containing all indices of $K$ for which scenario 2.a) was followed, ordered by chronological order for the reveal of $U_{l_{k^a_i}}$, i.e., $U_{l_{k^a_1}}$ was the first item of scenario 2.a) to be revealed, then $U_{l_{k^a_2}}$ etc. until $U_{l_{k^a_\alpha}}$. Similarly, we construct the sequence $k^b_1,\ldots k^b_\beta$ of indices in $K$ corresponding to scenario 2.b), ordered by order for the reveal of $U_{l_{k^b_i}}$. We now consider the events
\begin{equation*}
\mathcal{B}:=\left\{\alpha + \beta\leq \frac{k_l}{2}-\frac{k_l\delta}{32}\right\},\quad
\mathcal{C}:=\left\{\sum_{i=1}^{\min(\alpha,\lceil k_l/8\rceil)} U_{l_{k^a_i}}\geq \frac{k_l\delta}{16}\right\}\quad
\text{and} \quad \mathcal{D}:=\left\{\sum_{i=1}^{\min(\beta,\lceil k_l/8 \rceil)} U_{l_{k^b_i}}\geq \frac{k_l\delta}{16} \right\}.
\end{equation*}
We now show that for $l>16$, under the event
\begin{equation*}
\mathcal{M}_{k_l}:=\mathcal{H}_1\cap \left[\mathcal{B}\cup
(\{\alpha\geq \lceil k_l/8\rceil\}\cap \mathcal{C}) \cup
(\{\alpha< \lceil k_l/8\rceil\}\cap \mathcal{D})\right],
\end{equation*}
we have that $|G|\geq \frac{k_l\delta}{32}$. Suppose that $\mathcal{M}_{k_l}$ is met. First note that because a bad tree can only fall into scenarios 2.a) or 2.b) we have $|B|\leq \alpha+\beta$. Hence $|G|\geq \frac{k_l}{2}-\alpha-\beta$ because of $\mathcal{H}_1$. Thus, the result holds directly if $\mathcal{B}$ is satisfied. We can now suppose that $\mathcal{B}^c$ is satisfied, i.e., $\alpha+\beta > \frac{k_l}{2}-\frac{k_l\delta}{32}$. Now suppose that $\alpha\geq \lceil k_l/8\rceil$ and $\mathcal{C}$ are also satisfied. For all indices such that $U_{l_{k^a_i}}=1$, i.e., we fall in case 2.a) and $l_{k_i^a}$ is used as representative, the corresponding tree $\mathcal{T}_{k^a_i}$ would need to induce at least an additional mistake to be bad. Recall that in total at most $k_l/2$ mistakes are induced by points of $\mathcal{T}$. Also, by definition of the set $K$, $\alpha+\beta$ mistakes are already induced by the times $t_k$ for $k\in K$. These corresponded to the future first mistakes for all times $\{l_k:k\in K\}$. Hence, we obtain
\begin{equation*}
|G| \geq \sum_{i=1}^\alpha U_{l_{k^a_i}} - \left(\frac{k_l}{2} - \alpha-\beta\right) \geq \frac{k_l \delta}{16} - \frac{k_l\delta}{32} = \frac{k_l \delta}{32}.
\end{equation*}
Now consider the case where $\mathcal{H}_1$, $\mathcal{B}^c$, $\alpha < \lceil k_l/8\rceil $ and $\mathcal{D}$ are met. In particular, because $l>16$ we have $k_l>16$ hence $\frac{k_l}{2}-\frac{k_l\delta}{32}\geq 2 \lceil k_l/8\rceil$. Thus, because of $\mathcal{B}^c$ we have $\beta> \frac{k_l}{2}-\frac{k_l\delta}{32}-\alpha\geq \lceil k_l/8\rceil$. Now observe that for all indices such that $U_{l_{k^b_i}}=1$, the time $l_k$ induced two mistakes. Therefore, counting the total number of mistakes we obtain
\begin{equation*}
\frac{k_l}{2}\geq \alpha + \beta + \sum_{i=1}^{\beta} U_{l_{k^b_i}} \geq \frac{k_l}{2} - \frac{k_l\delta}{32} + \frac{k_l\delta}{16}
\end{equation*}
which is impossible. This ends the proof that under $\mathcal{M}_{k_l}$ we have $|G|\geq \frac{k_l\delta}{32}$.
We now aim to lower bound the probability of this event. To do so, we first upper bound the probability of the event $\{\alpha\geq \lceil k_l/8\rceil\}\cap \mathcal{C}^c$. We introduce a process $(Z_i)_{i=1}^{\lceil k_l/8 \rceil}$ such that $Z_i=U_{l_{k^a_i}}-\delta$ for all $i\leq \min(\alpha,\lceil k_l/8\rceil)$, and $Z_i=0$ for $\alpha<i\leq \lceil k_l/8 \rceil$. Because of the specific ordering chosen for $k_1^a,\ldots,k_\alpha^a$, this process is a sequence of martingale differences, with values bounded by $1$ in absolute value. Therefore, for $l>16$ the Azuma--Hoeffding inequality yields
\begin{equation*}
\mathbb{P}\left[\sum_{i=1}^{\lceil k_l/8\rceil} Z_i\leq -\frac{k_l\delta}{16}\right] \leq e^{-\frac{k_l^2\delta^2}{2\cdot 16^2(k_l/8+1)}} \leq e^{-\frac{k_l\delta^2}{2^7}}.
\end{equation*}
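For reference, the form of the Azuma--Hoeffding inequality used here is the following: if $(Z_i)_{i\leq n}$ is a sequence of martingale differences with $|Z_i|\leq c$ almost surely, then for any $t>0$,
\begin{equation*}
\mathbb{P}\left[\sum_{i=1}^n Z_i \leq -t\right] \leq e^{-\frac{t^2}{2nc^2}}.
\end{equation*}
The bound above corresponds to $n=\lceil k_l/8\rceil\leq k_l/8+1$, $c=1$ and $t=\frac{k_l\delta}{16}$.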
But on the event $\{\alpha\geq \lceil k_l/8\rceil\}\cap \mathcal{C}^c$ we have precisely
\begin{equation*}
\sum_{i=1}^{\lceil k_l/8\rceil}Z_i = \sum_{i=1}^{\min(\alpha,\lceil k_l/8\rceil)} U_{l_{k_i^a}}-\lceil k_l/8 \rceil\delta \leq \frac{k_l\delta}{16} -\lceil k_l/8 \rceil\delta \leq -\frac{k_l\delta}{16}.
\end{equation*}
Therefore $\mathbb{P}[ \mathcal{C}^c\cap \{\alpha\geq \lceil k_l/8\rceil\}]\leq \mathbb{P}\left[\sum_{i=1}^{\lceil k_l/8\rceil} Z_i\leq -\frac{k_l\delta}{16}\right] \leq e^{-k_l\delta^2/2^7}.$ Similarly, we obtain $\mathbb{P}[ \mathcal{D}^c\cap \{\beta\geq \lceil k_l/8 \rceil\}] \leq e^{-k_l\delta^2/2^7}.$ Finally, we write for any $l>16$,
\begin{align*}
\mathbb{P}[\mathcal{H}_1\setminus\mathcal{M}_{k_l}]&= \mathbb{P}[\mathcal{H}_1\cap\mathcal{B}^c \cap(\{\alpha< \lceil k_l/8 \rceil\}\cup \mathcal{C}^c) \cap(\{\alpha\geq \lceil k_l/8 \rceil\}\cup \mathcal{D}^c)]\\
&= \mathbb{P}[\mathcal{H}_1\cap\mathcal{B}^c \cap[(\{\alpha< \lceil k_l/8 \rceil\}\cap \mathcal{D}^c)\cup (\{\alpha \geq \lceil k_l/8 \rceil\}\cap \mathcal{C}^c)]]\\
&\leq \mathbb{P}[\mathcal{C}^c\cap\{\alpha\geq \lceil k_l/8\rceil\}] + \mathbb{P}[\mathcal{D}^c\cap\{\alpha< \lceil k_l/8\rceil\}\cap\mathcal{B}^c]\\
&\leq \mathbb{P}[\mathcal{C}^c\cap\{\alpha\geq \lceil k_l/8\rceil\}] + \mathbb{P}[\mathcal{D}^c\cap\{\beta\geq \lceil k_l/8\rceil\}]\\
&\leq 2e^{-\frac{k_l\delta^2}{2^7}}.
\end{align*}
In particular, we obtain
\begin{equation*}
\mathbb{P}\left[\left\{|G|\geq \frac{k_l\delta}{32}\right\}\cap\mathcal{H}_1\right]\geq \mathbb{P}[\mathcal{M}_{k_l}] \geq \mathbb{P}[\mathcal{H}_1]-2e^{-\frac{k_l\delta^2}{2^7}}.
\end{equation*}
\paragraph{Step 2.} We now consider the opposite case, when a majority of mistakes are made outside $B(\bar x,r)$, i.e., $|\{k\leq k_l,\; X_{t_k}\in B(\bar x,r)\}|< \frac{k_l}{2}$, which corresponds to the event $\mathcal{H}_1^c$. Similarly, we consider the subgraph $\tilde \mathcal{G}$ given by restricting $\mathcal{G}$ to nodes outside the ball $B(\bar x,r)$, i.e., to times $\mathcal{T}:=\{t\leq t_{k_l},\; \rho(X_t,\bar x)\geq r\}$. Again, $\tilde \mathcal{G}$ is a collection of disjoint trees with roots $\{t_k, \; k\leq k_l, \; \rho(X_{t_k},\bar x)\geq r\}$---and possibly $t=1$. For a given time $t_k$ with $k\leq k_l$ and $\rho(X_{t_k},\bar x)\geq r$, we denote by $\mathcal{T}_k$ the corresponding tree in $\tilde \mathcal{G}$ with root $t_k$. Similarly to the previous case, $\mathcal{T}_k$ is a \emph{good} tree if $\mathcal{T}_k\cap \mathcal{D}_{t_{k_l}+1}\neq \emptyset$ and \emph{bad} otherwise. We denote the set of good trees by $G=\{k:\mathcal{T}_k\text{ good}\}$. We can again focus on trees $\mathcal{T}_k$ which induced a future first mistake, i.e., such that $\{l\in\mathcal{T}_k\mid\exists u\leq t_{k_l}:\phi(u)=l,\rho(X_l,\bar x)< r \text{ and } \forall v<u,\phi(v)\neq l \}\neq\emptyset$, and more specifically their minimum time $l_k=\min \{l\in\mathcal{T}_k\mid \exists u\leq t_{k_l}:\phi(u)=l,\rho(X_l,\bar x)< r,\forall v<u,\phi(v)\neq l \}$. The same analysis as above shows that
\begin{equation*}
\mathbb{P}\left[\left\{|G|\geq \frac{k_l\delta}{32}\right\}\cap\mathcal{H}_1^c\right] \geq \mathbb{P}[\mathcal{H}_1^c] -2e^{-\frac{k_l\delta^2}{2^7}}.
\end{equation*}
Therefore, if $G$ denotes more generally the set of good trees (where we follow the corresponding case 1 or 2) we finally obtain that for any $l>16$,
\begin{equation*}
\mathbb{P}\left[|G|\geq \frac{k_l\delta}{32}\right]\geq 1-4e^{-\frac{k_l\delta^2}{2^7}}.
\end{equation*}
We denote by $\tilde \mathcal{M}_{k_l}$ this event. By the Borel--Cantelli lemma, almost surely there exists $\hat l$ such that for any $l\geq \hat l$, the event $\tilde \mathcal{M}_{k_l}$ is satisfied.
We denote by $\mathcal{M}:=\bigcup_{l\geq 1} \bigcap_{l'\geq l}\tilde \mathcal{M}_{k_{l'}}$ this event of probability one. The aim is to show that on the event $\mathcal{A}\cap\mathcal{M}\cap\bigcap_{l\geq 1}(\mathcal{E}_l\cap\mathcal{F}_l)$, which has probability at least $\frac{\eta}{4}$, $\mathbb{X}$ disproves the ${\text{SMV}}_{(\mathcal{X},\rho)}$ condition. In the following, we consider a specific realization $\mb x$ of the process $\mathbb{X}$ falling in the event $\mathcal{A}\cap\mathcal{M}\cap\bigcap_{l\geq 1}(\mathcal{E}_l\cap\mathcal{F}_l)$---$\mb x$ is not random anymore. Let $\hat l$ be the index given by the event $\mathcal{M}$ such that for any $l\geq \hat l$, $\tilde \mathcal{M}_{k_l}$ holds. We consider $l\geq \hat l$ and successively consider the different cases in which the realization $\mb x$ may fall.
\paragraph{Case 1.} In this case, we suppose that a majority of mistakes were made in $B(\bar x,r)$, i.e., that we fell into event $\mathcal{H}_1$ similarly to Step 1. Because the event $\tilde \mathcal{M}_{k_l}$ is satisfied we have $|G|\geq \frac{k_l\delta}{2^5}$. Now note that trees are disjoint, therefore, $\sum_{k\in G} |\mathcal{T}_k|\leq t_{k_l}<\frac{2k_l}{\epsilon}.$
Therefore,
\begin{equation*}
\sum_{k\in G}\mathbbm{1}_{|\mathcal{T}_k|\leq \frac{2^7}{\epsilon\delta}} = |G| - \sum_{k\in G}\mathbbm{1}_{|\mathcal{T}_k|> \frac{2^7}{\epsilon\delta}}> |G|-\frac{\epsilon\delta}{2^7} \sum_{k\in G}|\mathcal{T}_k|\geq \frac{k_l\delta}{2^5}- \frac{k_l\delta}{2^6} = \frac{k_l\delta}{2^6}.
\end{equation*}
We will say that a tree $\mathcal{T}_k$ is \emph{sparse} if it is good and has at most $\frac{2^7}{\epsilon\delta}$ nodes. Writing $S := \{k\in G,\;|\mathcal{T}_k|\leq \frac{2^7}{\epsilon\delta} \}$ for the set of sparse trees, the above equation yields $|S|\geq \frac{k_l\delta}{2^6}$. The same arguments as in \cite{blanchard2022universal} give
\begin{equation*}
|\{i,\; A_i\cap \mb{x}_{\leq t_{k_l}}\neq \emptyset \}|\geq |S|\geq \frac{k_l\delta}{2^6} \geq \frac{\epsilon\delta}{2^7}t_{k_l}.
\end{equation*}
The only difference is that we chose $c_\epsilon$ so that $2^{2\cdot \frac{2^7}{\epsilon\delta} -1} \leq \frac{1}{4c_\epsilon}$ as needed in the original proof.
\paragraph{Case 2.}We now turn to the case when the majority of input points on which $(1+\delta)$C1NN made a mistake are not in the ball $B(\bar x,r)$, similarly to Step 2. Using the same notion of sparse tree $S := \{k\in G,\;|\mathcal{T}_k|\leq \frac{2^7}{\epsilon\delta} \}$, we have again $|S|\geq \frac{k_l\delta}{2^6}$. We use the same arguments as in the original proof. Suppose $|\{k\in S,\; \rho(x_{p^k_{d(k)}},\bar x)>r\}|\geq \frac{|S|}{2}$, then we have
\begin{equation*}
|\{i,\; A_i\cap \mb{x}_{\leq t_{k_l}}\neq \emptyset \}|\geq |\{k\in S,\; \rho(x_{p^k_{d(k)}},\bar x)>r\}| \geq \frac{|S|}{2} \geq \frac{k_l\delta}{2^7}\geq \frac{\epsilon\delta}{2^8}t_{k_l}.
\end{equation*}
\paragraph{Case 3.}
In this last case, we suppose again that the majority of input points on which $(1+\delta)$C1NN made a mistake are not in the ball $B(\bar x,r)$, but that $|\{k\in S,\; \rho(x_{p^k_{d(k)}},\bar x)>r\}|< \frac{|S|}{2}$. Therefore, we obtain
\begin{equation*}
|\{k\in S,\; \rho(x_{p^k_{d(k)}},\bar x)=r\}| = |S|-|\{k\in S,\; \rho(x_{p^k_{d(k)}},\bar x)>r\}| \geq \frac{|S|}{2} \geq \frac{k_l\delta}{2^7}\geq \frac{\epsilon\delta}{2^8}t_{k_l}.
\end{equation*}
We will now make use of the partition $(P_i)_{i\geq 1}$. Because $(n_u)_{u\geq 1}$ is an increasing sequence, let $u\geq 1$ be such that $n_{ u+1}\leq t_{k_l}\leq n_{ u+2}$ (we can suppose without loss of generality that $t_{k_1}>n_2$). Note that we have $n_u\leq \frac{\epsilon\delta}{2^9}n_{u+1}\leq \frac{\epsilon\delta}{2^9}t_{k_l}$. Let us now analyze the process between times $n_u$ and $t_{k_l}$. In particular, we are interested in the indices $T=\{k\in S,\; \rho(x_{p^k_{d(k)}},\bar x)=r\}$ and the times $\mathcal{U}_u = \{p^k_{d(k)}:\; n_u< p^k_{d(k)}\leq t_{k_l},\; k\in T\}$. Then we have
\begin{equation*}
|\mathcal{U}_u| \geq |\{k\in S,\; \rho(x_{p^k_{d(k)}},\bar x)=r\}| - n_u \geq \frac{\epsilon\delta}{2^8}t_{k_l} -\frac{\epsilon\delta}{2^9}t_{k_l} = \frac{\epsilon\delta}{2^9}t_{k_l}.
\end{equation*}
Defining $T' := \{k\in T,\; r-\frac{r}{2^{u+3}}\leq \rho(x_{\phi(t_k)},\bar x)<r\}$, the same arguments as in the original proof yield
\begin{equation*}
|\{i,\; P_i\cap \mb x_{\leq t_{k_l}} \neq \emptyset\}| \geq |T'| \geq |\mathcal{U}_u|-|\{i,\; P_i(\tau_u)\cap \mb x_{\mathcal{U}_u} \neq \emptyset\}|\geq \frac{\epsilon\delta}{2^9}t_{k_l}-\frac{\epsilon\delta}{2^{10}}t_{k_l}=\frac{\epsilon\delta}{2^{10}}t_{k_l}.
\end{equation*}
\paragraph{Conclusion.}
In all cases, we obtain
\begin{equation*}
|\{Q\in \mathcal{Q},\; Q\cap \mb x_{\leq t_{k_l}} \neq \emptyset\}| \geq \max(|\{i,\; A_i\cap \mb{x}_{\leq t_{k_l}}\neq \emptyset \}|,|\{i,\; P_i\cap \mb x_{\leq t_{k_l}} \neq \emptyset\}|) \geq \frac{\epsilon\delta}{2^{10}}t_{k_l}.
\end{equation*}
Because this is true for all $l\geq \hat l$ and $t_{k_l}$ is an increasing sequence, we conclude that $\mb x$ disproves the ${\text{SMV}}_{(\mathcal{X},\rho)}$ condition for $\mathcal{Q}$. Recall that this holds whenever the event $\mathcal{A}\cap\mathcal{M}\cap\bigcap_{l\geq 1}(\mathcal{E}_l\cap \mathcal{F}_l)$ is met. Thus,
\begin{equation*}
\mathbb{P}[|\{Q\in \mathcal{Q},\; Q\cap \mathbb{X}_{<T}\neq\emptyset\}|=o(T)]\leq 1-\mathbb{P}\left[\mathcal{A}\cap\mathcal{M}\cap\bigcap_{l\geq 1}(\mathcal{E}_l\cap \mathcal{F}_l)\right] \leq 1-\frac{\eta}{4}<1.
\end{equation*}
This shows that $\mathbb{X}\notin {\text{SMV}}_{(\mathcal{X},\rho)}$ which is absurd. Therefore $(1+\delta)$C1NN is consistent on $f^*$. This ends the proof of the proposition.
\end{proof}
Using the fact that in the $(1+\delta)$C1NN learning rule no time $t$ can have more than $2$ children, just as in the 2C1NN rule, we obtain with the same proof as in \cite{blanchard2022universal} the following proposition.
\begin{proposition}
\label{prop:opt1+delta_bin}
Fix $0<\delta\leq 1$. Let $(\mathcal{X},\mathcal{B})$ be a separable Borel space. For the binary classification setting, the learning rule $(1+\delta)$C1NN is universally consistent for all processes $\mathbb{X}\in {\text{SMV}}$.
\end{proposition}
Finally, we use a result from \cite{blanchard2021universal} which gives a reduction from any near-metric bounded value space to binary classification.
\begin{theorem}[\cite{blanchard2021universal}]
\label{thm:invariance}
If $(1+\delta)$C1NN is universally consistent under a process $\mathbb{X}$ for binary classification, it is also universally consistent under $\mathbb{X}$ for any separable near-metric setting $(\mathcal{Y},\ell)$ with bounded loss.
\end{theorem}
Together with Proposition \ref{prop:opt1+delta_bin}, Theorem \ref{thm:invariance} ends the proof of Theorem \ref{thm:1+deltaC1NN_optimistic}.
\section{Characterization of learnable processes for bounded losses}
\label{sec:alternative}
While Section \ref{sec:totally_bounded_value_spaces} focused on totally-bounded value spaces, the goal of this section is to give a full characterization of the set $\text{SOLAR}$ of processes for which adversarial regression is achievable for any \emph{bounded} value space. We also provide optimistically universal algorithms for any bounded setting.
\subsection{Negative result for non-totally-bounded spaces}
\label{subsec:bad_non_totally_bounded_ex}
In Corollary \ref{cor:optimistic_totally_bounded}, we showed that for totally-bounded value spaces, $\text{SOLAR}={\text{SMV}}$, i.e., adversarial regression is achievable for all processes $\mathbb{X}\in{\text{SMV}}$. Unfortunately, although for all bounded value spaces $(\mathcal{Y},\ell)$ noiseless universal learning is achievable on all ${\text{SMV}}(=\text{SOUL})$ processes, this is not the case for adversarial regression in non-totally-bounded spaces. We show in this section that extending Corollary \ref{cor:optimistic_totally_bounded} to arbitrary bounded value spaces is impossible. Precisely, there exist value spaces for which the set of learnable processes for adversarial regression is reduced to $\text{CS}$, instead of ${\text{SMV}}$.
\begin{theorem}
\label{thm:negative_optimistic}
Let $(\mathcal{X},\mathcal{B})$ be a separable Borel metrizable space. There exists a separable metric value space $(\mathcal{Y},\ell)$ with bounded loss such that the following holds: for any process $\mathbb{X}\notin\text{CS}$, universal learning under $\mathbb{X}$ for arbitrary responses is not achievable. Precisely, for any learning rule $f_\cdot$, there exists a process $\mathbb{Y}$ on $\mathcal{Y}$, a measurable function $f^*:\mathcal{X}\to\mathcal{Y}$ and $\epsilon>0$ such that with non-zero probability $ \mathcal{L}_{(\mathbb{X},\mathbb{Y})}(f_\cdot,f^*) \geq \epsilon.$
\end{theorem}
\begin{proof}
We start by constructing the space $(\mathcal{Y},\ell)$ which we will use for the negative result. Specifically, we choose $\mathcal{Y}=\mathbb{N}:=\{i\geq 0\}$ and construct the corresponding loss $\ell$. For any $k\geq 1$, we set $n_k:=2k(k-1)+2^k-1$ and define the sets
\begin{equation*}
I_k:=\{n_k,n_k+1,\ldots,n_k+4k-1\} \quad \text{and} \quad J_k:=\{n_k+4k,n_k+4k+1,\ldots,n_k+(4k-1)+ 2^k = n_{k+1}-1\}.
\end{equation*}
These sets are constructed so that $|I_k|=4k$, $|J_k|=2^k$ for all $k\geq 1$, and together with $\{0\}$, they form a partition of $\mathbb{N}$. We now construct the loss $\ell$. We set $\ell(0,i)=\mathbbm{1}_{i\neq 0}$ for all $i\in \mathbb{N}$. For any $1\leq k<l$, and any $i\in I_k\cup J_k, j\in I_l\cup J_l$, we define $\ell(i,j)=1$. Further, for any $k\geq 1$ and all $i,j\in I_k$ we set $\ell(i,j)=\mathbbm{1}_{i\neq j}$. Similarly, for all $i, j\in J_k$ we set $\ell(i,j)=\mathbbm{1}_{i\neq j}$. It now remains to define the loss $\ell(i,j)$ for all $i\in I_k$ and $j\in J_k$. Note that for any $j\in J_k$, we have $j-n_k-4k\in\{0,\ldots,2^k-1\}$. Hence we will use its binary representation, which we write as $j-n_k-4k = \{b_j^{k-1}\ldots b_j^1 b_j^0\}_2 = \sum_{u=0}^{k-1} b_j^u 2^u$, where $b_j^0,b_j^1,\ldots,b_j^{k-1}\in\{0,1\}$ are binary digits. Finally, we set
\begin{equation*}
\ell(n_k + 4u,j)= \ell(n_k + 4u+1,j) = \frac{1+b^u_j}{2} \quad \text{and} \quad \ell(n_k + 4u+2,j)= \ell(n_k + 4u+3,j) = \frac{2-b^u_j}{2},
\end{equation*}
for all $u\in\{0,1,\ldots,k-1\}$ and $j\in J_k.$ This ends the definition of $\ell$. We first check that this indeed defines a metric space $(\mathbb{N},\ell)$. We only have to check that the triangle inequality is satisfied, the other properties of a metric being directly satisfied. By construction, the loss takes values in $\{0,\frac{1}{2},1\}$. Now let $i,j,k\in \mathbb{N}$. The triangle inequality $\ell(i,j)\leq \ell(i,k)+\ell(k,j)$ is directly satisfied if two of these indices are equal. Therefore, we can suppose that they are all distinct, so that $\ell(i,j),\ell(i,k),\ell(k,j) \in\{\frac{1}{2},1\}.$ Therefore
\begin{equation*}
\ell(i,j)\leq 1 \leq \ell(i,k)+\ell(k,j),
\end{equation*}
which ends the proof that $\ell$ is a metric.
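As a quick sanity check on this construction (not part of the original proof), the loss $\ell$ can be implemented directly and the metric axioms verified numerically on an initial segment of $\mathbb{N}$; all function and variable names below are our own.

```python
def n_k(k):
    """First index of block I_k: n_k = 2k(k-1) + 2^k - 1."""
    return 2 * k * (k - 1) + 2 ** k - 1

def block(i):
    """Locate i in the partition {0}, I_1, J_1, I_2, J_2, ...
    Returns (k, kind, offset) with kind in {'0', 'I', 'J'}."""
    if i == 0:
        return 0, '0', 0
    k = 1
    while n_k(k + 1) <= i:
        k += 1
    off = i - n_k(k)
    return (k, 'I', off) if off < 4 * k else (k, 'J', off - 4 * k)

def ell(i, j):
    """The loss constructed in the proof, with values in {0, 1/2, 1}."""
    if i == j:
        return 0.0
    ki, kind_i, oi = block(i)
    kj, kind_j, oj = block(j)
    if i == 0 or j == 0 or ki != kj or kind_i == kind_j:
        return 1.0
    # One index lies in I_k, the other in J_k: use the binary digits
    # of the offset of the J-index, as in the definition of ell.
    if kind_i == 'J':          # swap so that oi is the I_k offset
        oi, oj = oj, oi
    u, r = divmod(oi, 4)       # oi = 4u + r with r in {0, 1, 2, 3}
    b = (oj >> u) & 1          # digit b_j^u of the J-index
    return (1 + b) / 2 if r < 2 else (2 - b) / 2
```

Checking all triples $i,j,k<40$ (which covers the blocks up to $k=3$) confirms symmetry, that $\ell(i,j)=0$ iff $i=j$, and the triangle inequality.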
Now let $(\mathcal{X},\mathcal{B})$ be a separable metrizable Borel space and let $\mathbb{X}\notin\text{CS}$. We aim to show that universal online learning under adversarial responses is not achievable under $\mathbb{X}$ for the value space $(\mathcal{Y},\ell)$. Because $\mathbb{X}\notin\text{CS}$, there exists a sequence of decreasing measurable sets $\{A_i\}_{i\geq 1}$ with $A_i \downarrow \emptyset$ such that $\mathbb{E}[\hat\mu_\mathbb{X}(A_i)]$ does not converge to $0$ as $i\to\infty$. In particular, there exist $\epsilon>0$ and an increasing subsequence $(i_l)_{l\geq 1}$ such that $\mathbb{E}[\hat \mu_\mathbb{X}(A_{i_l})]\geq \epsilon$ for all $l\geq 1$. We now denote $B_l:=A_{i_l}\setminus A_{i_{l+1}}$ for any $l\geq 1$. Then $\{B_l\}_{l\geq 1}$ forms a sequence of disjoint measurable sets such that
\begin{equation*}
\mathbb{E}\left[\hat \mu_\mathbb{X} \left(\bigcup_{l'\geq l} B_{l'}\right)\right] \geq \epsilon,\quad l\geq 1.
\end{equation*}
Therefore, for any $l\geq 1$ because $\mathbb{E}\left[\hat \mu_\mathbb{X} \left(\bigcup_{l'\geq l} B_{l'}\right)\right] \leq \mathbb{P}\left[\hat \mu_\mathbb{X} \left(\bigcup_{l'\geq l} B_{l'}\right)\geq \frac{\epsilon}{2}\right] + \frac{\epsilon}{2}$ we obtain
\begin{equation*}
\mathbb{P}\left[\hat \mu_\mathbb{X} \left(\bigcup_{l'\geq l} B_{l'}\right)\geq \frac{\epsilon}{2}\right] \geq \frac{\epsilon}{2}.
\end{equation*}
Now because $\hat\mu$ is increasing we obtain
\begin{equation*}
\mathbb{P}\left[\hat \mu_\mathbb{X} \left(\bigcup_{l'\geq l} B_{l'}\right)\geq \frac{\epsilon}{2},\forall l\geq 1 \right] = \lim_{L\to\infty} \mathbb{P}\left[\hat \mu_\mathbb{X} \left(\bigcup_{l'\geq l} B_{l'}\right)\geq \frac{\epsilon}{2},1\leq l\leq L \right] = \lim_{L\to\infty}\mathbb{P}\left[\hat \mu_\mathbb{X} \left(\bigcup_{l'\geq L} B_{l'}\right)\geq \frac{\epsilon}{2}\right]\geq \frac{\epsilon}{2}.
\end{equation*}
We will denote by $\mathcal{A}$ this event, on which for all $l\geq 1$ we have $\hat \mu_\mathbb{X} \left(\bigcup_{l'\geq l} B_{l'}\right)\geq \frac{\epsilon}{2}$. Under the event $\mathcal{A}$, for any $l,t^0\geq 1$, there always exists $t^1>t^0$ such that $\frac{1}{t^1}\sum_{t=1}^{t^1}\mathbbm{1}_{\bigcup_{l'\geq l}B_{l'}}(X_t) \geq \frac{3\epsilon}{8}.$ We construct a sequence of times $(t_p)_{p\geq 1}$ and indices $(l_p)_{p\geq 1}$, $(u_p)_{p\geq 1}$ by induction as follows. We first set $u_0=t_0=0$. Now assume that for $p\geq 1$, the time $t_{p-1}$ and index $u_{p-1}$ are defined. We first construct an index $l_p>u_{p-1}$ such that
\begin{equation*}
\mathbb{P}\left[\mathbb{X}_{\leq t_{p-1}}\cap \left(\bigcup_{l\geq l_p} B_l\right)\neq \emptyset\right]\leq \frac{\epsilon}{2^{p+3}}.
\end{equation*}
We will denote by $\mathcal{E}_p$ the complement of this event. Note that finding such an index $l_p$ is possible because the considered events $\{\mathbb{X}_{\leq t_{p-1}}\cap \left(\bigcup_{l'\geq l} B_{l'}\right)\neq \emptyset\}$ are decreasing as $l>u_{p-1}$ increases and we have $\bigcap_{l>u_{p-1}}\left\{ \mathbb{X}_{\leq t_{p-1}}\cap \left(\bigcup_{l'\geq l} B_{l'} \right)\neq\emptyset \right\} = \left\{ \mathbb{X}_{\leq t_{p-1}} \cap \left(\bigcap_{l>u_{p-1}}\bigcup_{l'\geq l}B_{l'}\right)\neq\emptyset \right\}=\emptyset.$ We then construct $t_p>t_{p-1}$ such that
\begin{equation*}
\mathbb{P}\left[\mathcal{A}^c \cup \bigcup_{t_{p-1}<t\leq t_p}\left\{\frac{1}{t}\sum_{u=1}^t \mathbbm{1}_{\bigcup_{l\geq l_p}B_l}(X_u) \geq \frac{3\epsilon}{8}\right\}\right]\geq 1-\frac{\epsilon}{2^{p+4}}.
\end{equation*}
This is also possible because $\mathcal{A}\subset \bigcup_{t>\frac{8}{\epsilon}t_{p-1}}\left\{\frac{1}{t}\sum_{u=1}^t \mathbbm{1}_{\bigcup_{l\geq l_p}B_l}(X_u) \geq \frac{3\epsilon}{8}\right\}$. Lastly, we can construct $u_p\geq l_p$ such that
\begin{equation*}
\mathbb{P}\left[\mathcal{A}^c \cup \bigcup_{t_{p-1}<t\leq t_p}\left\{\frac{1}{t}\sum_{u=1}^t \mathbbm{1}_{\bigcup_{l_p\leq l\leq u_p}B_l}(X_u) \geq \frac{\epsilon}{4}\right\}\right]\geq 1-\frac{\epsilon}{2^{p+3}},
\end{equation*}
which is possible using similar arguments as above. We denote by $\mathcal{F}_p$ this event. This ends the recursive construction of the times $t_p$ and indices $l_p$ for all $p\geq 1$. Note that by construction, $\mathbb{P}[\mathcal{E}_p^c],\mathbb{P}[\mathcal{F}_p^c]\leq \frac{\epsilon}{2^{p+3}}$. Hence, by a union bound, the event $\mathcal{A}\cap\bigcap_{p\geq 1}(\mathcal{E}_p\cap \mathcal{F}_p)$ has probability $\mathbb{P}[\mathcal{A}\cap\bigcap_{p\geq 1}(\mathcal{E}_p \cap \mathcal{F}_p)]\geq \mathbb{P}[\mathcal{A}]-\frac{\epsilon}{4}\geq \frac{\epsilon}{4}$. To simplify the rest of the proof, we denote $\tilde B_p = \bigcup_{l_p\leq l\leq u_p} B_l$ for any $p\geq 1$. Also, for any $t_1\leq t_2$, we denote by
\begin{equation*}
N_p(t_1,t_2) = \sum_{t=t_1}^{t_2}\mathbbm{1}_{\tilde B_p}(X_t)
\end{equation*}
the number of times that set $\tilde B_p$ has been visited between times $t_1$ and $t_2$.
We now fix a learning rule $f_\cdot$ and construct a process $\mathbb{Y}$ for which consistency will not be achieved on the event $\mathcal{A}\cap\bigcap_{p\geq 1}(\mathcal{E}_p\cap\mathcal{F}_p)$. Precisely, we first construct a family of processes $\mathbb{Y}^b$ indexed by a sequence of binary digits $ b=(b_i)_{i\geq 1}$. The process $\mathbb{Y}^b$ is defined such that for any $p\geq 1$,
\begin{equation*}
Y_t^b := \begin{cases}
n_{t_p} + 4u_p(t) +2b_{i(p,u_p(t))}+b_{i(p,u_p(t))+1} &\text{if } X_t\in \tilde B_p,\\
n_{t_{p'}}+4t_{p'} +\{b_{i(p',t_{p'}-1)}\ldots b_{i(p',1)} b_{i(p',0)} \}_2 &\text{if } X_t\in \tilde B_{p'}, p'<p, \\
0 &\text{otherwise}.
\end{cases} \quad\quad \forall t_{p-1}<t\leq t_p,
\end{equation*}
where we denoted $u_p(t)=N_p(t_{p-1}+1,t-1)$ and posed for any $p\geq 1$ and $u\geq 0$:
\begin{equation*}
i(p,u) = 2\sum_{p'<p} t_{p'} + 2u.
\end{equation*}
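As a side note, the indexing $i(p,u)$ allocates to each pair $(p,u)$ with $0\leq u< t_p$ a disjoint pair of bit positions $(i(p,u),i(p,u)+1)$, so each prediction step consumes fresh independent bits. A quick numerical check of this disjointness (the helper name and the sample values of $t_p$ below are ours, chosen arbitrarily):

```python
def bit_pair(p, u, t):
    """Pair of bit positions (i(p,u), i(p,u)+1) with
    i(p,u) = 2*sum_{p'<p} t_{p'} + 2u; here t[k] = t_{k+1}."""
    i = 2 * sum(t[:p - 1]) + 2 * u
    return (i, i + 1)

# arbitrary sample block lengths t_1, ..., t_4
t = [3, 5, 2, 7]
pairs = [bit_pair(p, u, t) for p in range(1, 5) for u in range(t[p - 1])]
assert len(pairs) == len(set(pairs))  # all bit pairs are pairwise distinct
```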
Note in particular that conditionally on $\mathbb{X}$, $\mathbb{Y}^b$ is deterministic: it does not depend on the random predictions of the learning rule. Because we always have $N_p(t_{p-1}+1,t-1)\leq t_p$ for any $t\leq t_p$, the process is designed so that we have $Y^b_t\in I_{t_p}$ if $X_t\in \tilde B_p$ and $t_{p-1}<t\leq t_p$. Further, for $t_{p-1}<t\leq t_p$, if $X_t\in \bigcup_{p'<p}\tilde B_{p'}$ then $Y^b_t\in J_{t_{p'}}$.
We now consider an i.i.d. Bernoulli $\mathcal{B}(\frac{1}{2})$ sequence of random bits $\mb b$ independent from the process $\mathbb{X}$---and any learning rule predictions. We analyze the predictions of the learning rule on the responses $\mathbb{Y}^{\mb b}$. We first fix a realization $\mb x$ of the process $\mathbb{X}$, which falls in the event $\mathcal{A}\cap\bigcap_{p\geq 1}(\mathcal{E}_p\cap\mathcal{F}_p)$. For any $p\geq 1$ we define $\mathcal{T}_p:=\{t_{p-1}<t\leq t_p:\;x_t\in \tilde B_p\}$. For simplicity of notation, for any $t\in \mathcal{T}_p$ we denote $i(t)=i(p,u_p(t))$. We will also denote $\hat Y_t:=f_t({\mb x}_{<t},\mathbb{Y}^{\mb b}_{<t},x_t)$. Last, denote by $r_t$ the possible randomness used by the learning rule $f_t$ at time $t$. For any $t\in \mathcal{T}_p$, we have
\begin{align*}
\mathbb{E}_{\mb b,\mb r}\ell(\hat Y_t,Y^{\mb b}_t)
&= \mathbb{E}_{\{b_{i(p',u')},b_{i(p',u')+1},\; p'\leq p,u'\leq t_{p'}\}\cup\{r_{t'},t'\leq t\}} \ell(\hat Y_t,Y^{\mb b}_t)\\
&= \mathbb{E} \left[ \mathbb{E}_{b_{i(t)},b_{i(t)+1}}\left. \ell(\hat Y_t,Y^{\mb b}_t) \right| b_{i(t')},b_{i(t')+1},t'< t,t'\in\mathcal{T}_{p};\;b_i, i< i(p,0);\;r_{t'},t'\leq t\right]\\
&= \mathbb{E} \left[ \mathbb{E}_{b_{i(t)},b_{i(t)+1}}\left. \ell(\hat Y_t,Y^{\mb b}_t) \right|\hat Y_t\right]\\
&= \mathbb{E}_{\hat Y_t}\left[ \frac{1}{4}\sum_{m=0}^3\ell(\hat Y_t,n_{t_p}+4u_p(t)+m)\right]\\
&= \mathbb{E}_{\hat Y_t}\left[ \mathbbm{1}_{\hat Y_t\notin \{n_{t_p}+4u_p(t)+m,0\leq m\leq 3\}\cup J_{t_p}} +\frac{3}{4}\mathbbm{1}_{\hat Y_t\in \{n_{t_p}+4u_p(t)+m,0\leq m\leq 3\}}+ \frac{3}{4}\mathbbm{1}_{\hat Y_t\in J_{t_p}}\right]\\
&\geq \frac{3}{4},
\end{align*}
where in the last equality, we used the fact that if $j\in J_{t_p}$ then by construction $\ell(j,n_{t_p}+4u_p(t))=\ell(j,n_{t_p}+4u_p(t)+1)$, $\ell(j,n_{t_p}+4u_p(t)+2)=\ell(j,n_{t_p}+4u_p(t)+3)$, and $\{\ell(j,n_{t_p}+4u_p(t)),\ell(j,n_{t_p}+4u_p(t)+2)\}=\{\frac{1}{2},1\}$. Summing these inequalities over all times $t\leq T$, we obtain for any $t_{p-1}<T\leq t_p$,
\begin{equation*}
\mathbb{E}_{\mb b,\mb r}\left[ \sum_{t=1}^{T}\ell(f_t({\mb x}_{<t},\mathbb{Y}^{\mb b}_{<t},x_t),Y^{\mb b}_t) \right] \geq \frac{3}{4}\sum_{p'<p}|\mathcal{T}_{p'}| + \frac{3}{4}|\mathcal{T}_p\cap\{t\leq T\}|.
\end{equation*}
This holds for all $p\geq 1$. Let us now compare this loss to the best prediction of a fixed measurable function. Specifically, for any binary sequence $b$, we consider the following function $f^b:\mathcal{X}\to\mathbb{N}$:
\begin{equation*}
f^b(x) = \begin{cases}
n_{t_p}+4t_p +\{b_{i(p,t_p-1)}\ldots b_{i(p,1)} b_{i(p,0)} \}_2 &\text{if }x\in \tilde B_p\\
0 &\text{if }x\notin\bigcup_{p\geq 1} \tilde B_p.
\end{cases}
\end{equation*}
Now let $t_{p-1}<t\leq t_p$ and $p\geq 1$. If $x_t\in \bigcup_{p'<p}\tilde B_{p'}$ we have $f^{\mb b}(x_t)=Y_t^{\mb b}$, hence $\ell(f^{\mb b}(x_t),Y_t^{\mb b})=0$. Now if $x_t\in\tilde B_p$, by construction we have $\ell(f^{\mb b}(x_t),Y_t^{\mb b})=\frac{1}{2}$. Finally, observe that because the event $\mathcal{E}_{p+1}$ is satisfied by $\mb x$, there does not exist $t_{p-1}<t\leq t_p$ such that $x_t\in \bigcup_{p'>p}\tilde B_{p'}\subset \bigcup_{l\geq l_{p+1}}B_l$. As a result, we have $\ell(f^{\mb b}(x_t),Y_t^{\mb b})=\frac{1}{2}\mathbbm{1}_{t\in\mathcal{T}_p}$ for any $t_{p-1}<t\leq t_p$. Thus, we obtain for any $t_{p-1}<T\leq t_p$,
\begin{equation*}
\mathbb{E}_{\mb b,\mb r}\left[ \sum_{t=1}^{T}\ell(\hat Y_t,Y^{\mb b}_t) - \ell(f^{\mb b}(X_t),Y^{\mb b}_t)\right] \geq \frac{1}{4}\sum_{p'<p}|\mathcal{T}_{p'}| + \frac{1}{4}|\mathcal{T}_p\cap\{t\leq T\}|\geq \frac{1}{4}|\mathcal{T}_p\cap\{t\leq T\}|.
\end{equation*}
Recall that the event $\mathcal{F}_p$ is satisfied by $\mb x$ for any $p\geq 1$. Therefore, there exists a time $t_{p-1}<T_p\leq t_p$ such that $\sum_{t=1}^{T_p} \mathbbm{1}_{\tilde B_p}(x_t)\geq \frac{\epsilon T_p}{4}.$ Then, note that because the event $\mathcal{E}_p$ is satisfied, we have $\sum_{t=1}^{t_{p-1}} \mathbbm{1}_{\tilde B_p}(x_t)=0$. Therefore, we obtain $|\mathcal{T}_p\cap\{t\leq T_p\}|\geq \frac{\epsilon T_p}{4}$, and as a result,
\begin{equation*}
\mathbb{E}_{\mb b,\mb r}\left[\frac{1}{T_p}\sum_{t=1}^{T_p}\ell(\hat Y_t,Y^{\mb b}_t) - \ell(f^{\mb b}(X_t),Y^{\mb b}_t)\right] \geq \frac{\epsilon}{16}.
\end{equation*}
Because this holds for any $p\geq 1$ and $T_p\to\infty$ as $p\to\infty$, we can now apply Fatou's lemma, which yields
\begin{equation*}
\mathbb{E}_{\mb b,\mb r}\left[\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T}\ell(\hat Y_t,Y^{\mb b}_t) - \ell(f^{\mb b}(X_t),Y^{\mb b}_t)\right]\geq \frac{\epsilon}{16}.
\end{equation*}
This holds for any realization in $\mathcal{A}\cap\bigcap_{p\geq 1}(\mathcal{E}_p\cap\mathcal{F}_p)$ which we recall has probability at least $\frac{\epsilon}{4}$. Therefore we finally obtain
\begin{equation*}
\mathbb{E}_{\mb b,\mb r,\mathbb{X}} \left[\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T}\ell(\hat Y_t,Y^{\mb b}_t) - \ell(f^{\mb b}(X_t),Y^{\mb b}_t)\right]\geq \frac{\epsilon^2}{2^6}.
\end{equation*}
As a result, there exists a specific realization of $\mb b$ which we denote $b$ such that
\begin{equation*}
\mathbb{E}_{\mb r,\mathbb{X}} \left[\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T}\ell(\hat Y_t,Y^b_t) - \ell(f^{b}(X_t),Y^b_t)\right]\geq \frac{\epsilon^2}{2^6},
\end{equation*}
which shows that we do not have almost surely $ \limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T}\ell(\hat Y_t,Y^b_t) - \ell(f^b(X_t),Y^b_t)\leq 0.$ This ends the proof of the theorem. As a remark, one can note that the construction of our bad example $\mathbb{Y}^b$ is a deterministic function of $\mathbb{X}$: it is independent of the realizations of the randomness used by the learning rule.
\end{proof}
For the constructed value space, although we have $\text{SOLAR}=\text{CS} \subsetneq\text{SOUL}$, there still exists an optimistically universal learning rule for adversarial responses. Indeed, the main result of \cite{hanneke2022bayes} shows that for any bounded value space, there exists a learning rule which is consistent under all $\text{CS}$ processes for arbitrary responses (when $\ell$ is a metric, i.e., $\alpha=1$).
\begin{theorem}[\cite{hanneke2022bayes}]
\label{thm:hanneke_2022}
Suppose that $(\mathcal{Y},\ell)$ is metric and $\ell$ is bounded. Then, there exists an online learning rule $f_\cdot$ which is universally consistent for arbitrary responses under any process $\mathbb{X}\in\text{CS}$, i.e., such that for any stochastic process $(\mathbb{X},\mathbb{Y})$ on $(\mathcal{X},\mathcal{Y})$ with $\mathbb{X}\in\text{CS}$, for any measurable function $f:\mathcal{X}\to\mathcal{Y}$, we have $\mathcal{L}_{(\mathbb{X},\mathbb{Y})}(f_\cdot,f)\leq 0,\quad (a.s.)$.
\end{theorem}
The proof of this theorem given in \cite{hanneke2022bayes} extends to adversarial responses. However, we do not directly prove that this extension holds, because we will later prove a stronger result which also holds for any $\alpha\geq 1$---even if $\ell$ is a power of a metric---and for unbounded losses in Section \ref{sec:unbounded_loss_moment_constraint}. This shows that for any separable metric space $(\mathcal{X},\rho)$, there exists a metric value space for which the learning rule proposed in \cite{hanneke2022bayes} is already optimistically universal.
\subsection{Adversarial regression for classification with countable number of classes}
\label{subsec:countable_classification}
Although we showed in the last section that adversarial regression under all ${\text{SMV}}$ processes is not achievable for some non-totally-bounded spaces, we will show that there exist non-totally-bounded value spaces for which we can recover $\text{SOLAR}={\text{SMV}}(=\text{SOUL})$. Precisely, we consider the case of classification with a countable number of classes $(\mathbb{N},\ell_{01})$, with $0$-$1$ loss $\ell_{01}(i,j)=\mathbbm{1}_{i\neq j}$. The goal of this section is to prove that in this case, we can learn arbitrary responses under any $\text{SOUL}$ process. The main difficulty with non-totally-bounded classification is that we cannot apply traditional online learning tools because $\epsilon$-nets may be infinite. Hence, we first show a result which allows us to perform online learning with an infinite number of experts in the context of countable classification.
\begin{lemma}
\label{lemma:bandits_Nbb}
Let $T\geq 1$. There exists an online learning rule $f_\cdot$ such that for any sequence $\mb y:=(y_t)_{t=1}^T$ of values in $\mathbb{N}$, we have that
\begin{equation*}
\sum_{t=1}^T \mathbb{E}[\ell_{01}(f_t({\mb y}_{\leq t-1}), y_t)] \leq \min_{y\in \mathbb{N}} \sum_{t=1}^T \ell_{01}(y, y_t) + 1+ \ln 2\sqrt{\frac{T}{2\ln T}} + \sqrt{2T\ln T},
\end{equation*}
or equivalently,
\begin{equation*}
\sum_{t=1}^T \mathbb{E}[\mathbbm{1}_{f_t({\mb y}_{\leq t-1})= y_t}] \geq \max_{y\in \mathbb{N}} \sum_{t=1}^T \mathbbm{1}_{y= y_t} - 1- \ln 2\sqrt{\frac{T}{2\ln T}} - \sqrt{2T\ln T},
\end{equation*}
and with probability $1-\delta$,
\begin{equation*}
\sum_{t=1}^T \mathbbm{1}_{f_t({\mb y}_{\leq t-1})= y_t} \geq \max_{y\in \mathbb{N}} \sum_{t=1}^T \mathbbm{1}_{y= y_t} - 1- \ln 2\sqrt{\frac{T}{2\ln T}} - \sqrt{2T\ln T} - \sqrt{2T\ln \frac{1}{\delta}}.
\end{equation*}
\end{lemma}
\begin{proof}
We first construct our online learning algorithm, which is a simple variant of the classical exponential forecaster. We first define a step size $\eta:=\sqrt{2\ln T/ T}$. At time $t=1$ we always predict $0$. For time step $t\geq 2$, we define $S_{t-1}:=\{y\in\mathbb{N}: \sum_{u=1}^{t-1} \mathbbm{1}_{y= y_u}>0\}$, the set of values which have been visited so far. Then, we construct weights for all $y\in\mathbb{N}$ such that
\begin{equation*}
w_{y,t-1} = \begin{cases}
e^{\eta\sum_{u=1}^{t-1} \mathbbm{1}_{y= y_u}}, &y\in S_{t-1}\\
0 &\text{otherwise},
\end{cases}
\end{equation*}
and output a randomized prediction independent of the past history such that
\begin{equation*}
\mathbb{P}(\hat y_t=y) = \frac{w_{y,t-1}}{\sum_{y'\in \mathbb{N}}w_{y',t-1}}.
\end{equation*}
This defines a proper online learning rule. Note that the denominator is well defined since $w_{y,t-1}$ is non-zero only for values in $S_{t-1}$, which contains at most $t-1$ elements. We now define the expected success at time $1\leq t\leq T$ as $\hat s_t:= \frac{w_{y_t,t-1}}{\sum_{y\in\mathbb{N}} w_{y,t-1}}\mathbbm{1}_{y_t\in S_{t-1}}.$ Note that $\hat s_t=\mathbb{E}[\mathbbm{1}_{f_t({\mb y}_{\leq t-1})=y_t}]$. We first show that we have
\begin{equation*}
\sum_{t=1}^T \hat s_t\geq \max_{y\in \mathbb{N}} \sum_{t=1}^T \mathbbm{1}_{y= y_t} - 1- \ln 2\sqrt{\frac{T}{2\ln T}} - \sqrt{2T\ln T}.
\end{equation*}
To do so, we analyze the quantity $W_t:=\frac{1}{\eta}\ln\left(\sum_{y\in S_t} e^{\eta\sum_{u=1}^t (\mathbbm{1}_{y=y_u}-\hat s_u)}\right)$. Let $2\leq t\leq T$. Supposing that $y_t\in S_{t-1}$, i.e., $S_t=S_{t-1}$, we define the operator $\Phi:\mb x\in \mathbb{R}^{|S_{t-1}|}\mapsto \frac{1}{\eta}\ln\left(\sum_{y\in S_{t-1}} e^{\eta x_y}\right)$ and use the Taylor expansion of $\Phi$ to obtain
\begin{align*}
W_t &= \frac{1}{\eta}\ln\left(\sum_{y\in S_{t-1}} e^{\eta\sum_{u=1}^{t-1} (\mathbbm{1}_{y=y_u}-\hat s_u) + \eta(\mathbbm{1}_{y=y_t}-\hat s_t)}\right)\\
&= W_{t-1} + \sum_{y\in S_{t-1}} (\mathbbm{1}_{y=y_t}-\hat s_t)\frac{e^{\eta\sum_{u=1}^{t-1} \mathbbm{1}_{y=y_u}}}{\sum_{y'\in S_{t-1}}e^{\eta\sum_{u=1}^{t-1} \mathbbm{1}_{y'=y_u} }} + \frac{1}{2} \sum_{y_1,y_2\in S_{t-1}} \left.\frac{\partial^2 \Phi}{\partial x_{y_1}\partial x_{y_2}}\right|_{\xi} (\mathbbm{1}_{y_1=y_t}-\hat s_t)(\mathbbm{1}_{y_2=y_t}-\hat s_t)\\
&= W_{t-1} + \frac{1}{2} \sum_{y_1,y_2\in S_{t-1}} \left.\frac{\partial^2 \Phi}{\partial x_{y_1}\partial x_{y_2}}\right|_{\xi} (\mathbbm{1}_{y_1=y_t}-\hat s_t)(\mathbbm{1}_{y_2=y_t}-\hat s_t)\\
&\leq W_{t-1} + \frac{1}{2} \sum_{y\in S_{t-1}} \frac{\eta e^{\eta \xi_y}}{\sum_{y'\in S_{t-1}}e^{\eta \xi_{y'}}} (\mathbbm{1}_{y=y_t}-\hat s_t)^2\\
&\leq W_{t-1} + \frac{\eta}{2},
\end{align*}
for some vector $\xi\in\mathbb{R}^{|S_{t-1}|}$, where in the second equality we used that the first-order term vanishes, since the weighted average of $\mathbbm{1}_{y=y_t}$ under the weights $w_{y,t-1}$ is exactly $\hat s_t$, and in the last inequality we used the fact that $|\mathbbm{1}_{y=y_t}-\hat s_t|\leq 1$. We now suppose that $y_t\notin S_{t-1}$ and $W_{t-1}\geq 1+\frac{\ln 2+2\ln \frac{1}{\eta}}{\eta}$. In that case, $e^{\eta W_t}=e^{\eta W_{t-1}} + e^{\eta(1-\sum_{u=1}^{t-1}\hat s_u)}.$ Hence, we obtain
\begin{equation*}
W_t= W_{t-1} + \frac{\ln\left(1+e^{\eta(1-W_{t-1}-\sum_{u=1}^{t-1}\hat s_u)}\right)}{\eta} \leq W_{t-1} + \frac{e^{\eta(1-W_{t-1})}}{\eta}\leq W_{t-1} + \frac{\eta}{2}.
\end{equation*}
Now let $l=\max\left(\{1\}\cup\left\{1\leq t\leq T:W_t < 1+\frac{\ln 2+2\ln \frac{1}{\eta}}{\eta}\right\}\right)$. Note that for any $l< t\leq T$ the above arguments yield $W_t \leq W_{t-1} + \frac{\eta}{2}$. As a result, noting that $W_1\leq 1$, we finally obtain
\begin{equation*}
W_T \leq W_l + \eta\frac{T-l}{2} \leq 1+\frac{\ln 2+2\ln \frac{1}{\eta}}{\eta} + \eta\frac{T}{2} \leq 1+ \ln 2\sqrt{\frac{T}{2\ln T}} + \sqrt{2T\ln T}.
\end{equation*}
Therefore, for any $y\in S_T$, we have
\begin{equation*}
\sum_{t=1}^T (\mathbbm{1}_{y=y_t}-\hat s_t) \leq W_T\leq 1+ \ln 2\sqrt{\frac{T}{2\ln T}}+ \sqrt{2T\ln T}.
\end{equation*}
In particular, this shows that
\begin{equation*}
\sum_{t=1}^T \hat s_t \geq \max_{y\in S_T}\sum_{t=1}^T\mathbbm{1}_{y=y_t} - 1- \ln 2\sqrt{\frac{T}{2\ln T}} - \sqrt{2T\ln T}.
\end{equation*}
Now note that if $y\notin S_T$, then $ \sum_{t=1}^T \mathbbm{1}_{y=y_t}=0$, which yields $\max_{y\in S_T}\sum_{t=1}^T\mathbbm{1}_{y=y_t} = \max_{y\in \mathbb{N}}\sum_{t=1}^T\mathbbm{1}_{y=y_t}$. Recall that $\hat y_t$ denotes the prediction of the online learning rule at time $t$. We observe that the variables $\mathbbm{1}_{\hat y_t=y_t}-\hat s_t$ for $1\leq t\leq T$ form a sequence of martingale differences. Further, $|\mathbbm{1}_{\hat y_t=y_t}-\hat s_t|\leq 1$. Therefore, the Hoeffding-Azuma inequality shows that with probability $1-\delta$,
\begin{equation*}
\sum_{t=1}^T (\mathbbm{1}_{\hat y_t=y_t}-\hat s_t) \geq -\sqrt{2T\ln \frac{1}{\delta}}.
\end{equation*}
Putting everything together yields that with probability $1-\delta$,
\begin{equation*}
\sum_{t=1}^T \mathbbm{1}_{\hat y_t=y_t} \geq \sum_{t=1}^T \hat s_t -\sqrt{2T\ln \frac{1}{\delta}} \geq \max_{y\in \mathbb{N}} \sum_{t=1}^T \mathbbm{1}_{y=y_t} - 1- \ln 2\sqrt{\frac{T}{2\ln T}} - \sqrt{2T\ln T} -\sqrt{2T\ln \frac{1}{\delta}}.
\end{equation*}
This ends the proof of the lemma.
\end{proof}
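To make the rule concrete, the following is a minimal Python sketch (our own naming; not part of the paper) of this variant exponential forecaster, computing the expected successes $\hat s_t$ in closed form rather than sampling the randomized predictions, and checking the regret bound of the lemma numerically on a sample sequence:

```python
import math

def expected_successes(ys):
    """Expected per-step success probabilities \\hat s_t of the
    exponential forecaster restricted to previously visited values."""
    T = len(ys)
    eta = math.sqrt(2.0 * math.log(T) / T) if T > 1 else 1.0
    counts = {}   # occurrence counts of visited values, i.e. S_{t-1} with multiplicities
    s_hat = []
    for y in ys:
        if y in counts:  # weight w_{y,t-1} = exp(eta * count); unseen values get weight 0
            z = sum(math.exp(eta * c) for c in counts.values())
            s_hat.append(math.exp(eta * counts[y]) / z)
        else:
            s_hat.append(0.0)  # includes t = 1, where S_0 is empty
        counts[y] = counts.get(y, 0) + 1
    return s_hat

# numerical check of the regret bound of the lemma on one sequence
ys = [t % 5 for t in range(200)]
T = len(ys)
bound = 1 + math.log(2) * math.sqrt(T / (2 * math.log(T))) + math.sqrt(2 * T * math.log(T))
assert sum(expected_successes(ys)) >= max(ys.count(v) for v in set(ys)) - bound
```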
We are now ready to prove our main result for countable classification.
\begin{theorem}
\label{thm:countable_classification}
Let $(\mathcal{X},\mathcal{B})$ be a separable Borel metrizable space. There exists an online learning rule $f_\cdot$ which is universally consistent for adversarial responses under any process $\mathbb{X}\in{\text{SMV}}$ for countable classification, i.e., such that for any adversarial process $(\mathbb{X},\mathbb{Y})$ on $(\mathcal{X},\mathbb{N})$ with $\mathbb{X}\in{\text{SMV}}$, for any measurable function $f^*:\mathcal{X}\to\mathbb{N}$, we have that $\mathcal{L}_{(\mathbb{X},\mathbb{Y})}(f_\cdot,f^*)\leq 0,\quad (a.s.).$
\end{theorem}
\begin{proof}
We use a similar learning rule to the one constructed in Section \ref{sec:totally_bounded_value_spaces} for totally-bounded spaces. We only make a slight modification of the learning rules $f^\epsilon_\cdot$. Precisely, we pose for $0<\epsilon\leq 1$,
\begin{equation*}
T_\epsilon := \left\lceil \frac{2^4\cdot 3^2 (1+\ln \frac{1}{\epsilon})}{\epsilon^2}\right\rceil \quad \text{and} \quad \delta_\epsilon:= \frac{\epsilon}{2(2^{T_\epsilon}+T_\epsilon)}.
\end{equation*}
Then, let $\phi$ be the representative function from the $(1+\delta_\epsilon)$C1NN learning rule. Further, we denote by $d(t)$ the depth of time $t$ within the graph constructed by $(1+\delta_\epsilon)$C1NN. At time $t$, we define the horizon $L_t=d(t)\mod T_\epsilon$. The learning rule performs its prediction based on the values $Y_{\phi^l(t)}$ for $l=1,\ldots,L_t$. We pose $\eta_\epsilon:=\sqrt{2\ln T_\epsilon/ T_\epsilon}$ and define the weights $w_{y,t}=e^{\eta_\epsilon\sum_{l=1}^{L_t} \mathbbm{1}(Y_{\phi^l(t)}=y)}$ for all $y\in \tilde S:=\{y'\in\mathbb{N}:\sum_{l=1}^{L_t} \mathbbm{1}(Y_{\phi^l(t)}=y')>0\}$ and $w_{y,t}=0$ otherwise. The learning rule $f^\epsilon_t(\mathbb{X}_{\leq t-1},\mathbb{Y}_{\leq t-1},X_t)$ outputs a random value in $\mathbb{N}$ independent of the past history such that
\begin{equation*}
\mathbb{P}(\hat Y_t=y) = \frac{w_{y,t}}{\sum_{y'\in \mathbb{N}} w_{y',t} },\quad y\in \mathbb{N}.
\end{equation*}
The final learning rule $f_\cdot$ is then defined similarly as before from the learning rules $f^\epsilon_\cdot$ for $\epsilon>0$. Therefore, Lemma \ref{lemma:concatenation_predictors} still holds. Also, using the same notations as in the proof of Theorem \ref{thm:optimistic_regression_totally_bounded}, Lemma \ref{lemma:bandits_Nbb} implies that for any $t\geq 1$, we can write
\begin{equation*}
\sum_{u=0}^{L_t} \bar\ell_{01}(\hat Y_{\phi^u(t)}(\epsilon),Y_{\phi^u(t)}) \leq \min_{y\in\mathbb{N}} \sum_{u=0}^{L_t} \ell_{01}(y,Y_{\phi^u(t)}) + 1+\ln 2\sqrt{\frac{L_t+1}{2\ln (L_t+1)}} + \sqrt{2(L_t+1)\ln (L_t+1)}.
\end{equation*}
Therefore, when summing these equations for all times in $\mathcal{B}_T$, the term corresponding to the margin regret of these predictions $\hat Y_t(\epsilon)$ becomes
\begin{align*}
|\mathcal{B}_T| \left( 1+\ln 2\sqrt{\frac{T_\epsilon}{2\ln T_\epsilon}} + \sqrt{2T_\epsilon\ln T_\epsilon}\right) &\leq T \left( \frac{1}{T_\epsilon}+\frac{\ln 2}{\sqrt{2T_\epsilon\ln T_\epsilon}} + \sqrt{\frac{2\ln T_\epsilon}{T_\epsilon}}\right)\\
&\leq T\left(\frac{\epsilon}{3} + \frac{\epsilon}{3} + \frac{\epsilon}{3}\right)\\
&=\epsilon T.
\end{align*}
Thus the same estimates still hold starting from the inequality
\begin{equation*}
\sum_{t=1}^{T} \bar\ell_{01}(\hat Y_t(\epsilon),Y_t) \leq \sum_{t\in\mathcal{B}_T} \min_{y\in\mathbb{N}} \sum_{u=0}^{T_\epsilon-1} \ell_{01}(y,Y_{\phi^u(t)}) + \epsilon T+ (|\mathcal{A}_2|2^{T_\epsilon-1}+|\mathcal{A}_0|T_\epsilon),
\end{equation*}
by replacing all $\epsilon-$nets $\mathcal{Y}_\epsilon$ directly by $\mathbb{N}$. The martingale argument still holds since the learning rule used is indeed online. As a result, the rest of the proof of Theorem \ref{thm:optimistic_regression_totally_bounded} holds, which ends the proof of this theorem.
\end{proof}
\subsection{A complete characterization of universal regression on bounded spaces}
The last two Sections \ref{subsec:bad_non_totally_bounded_ex} and \ref{subsec:countable_classification} gave examples of non-totally-bounded value spaces for which we obtain respectively $\text{SOLAR}=\text{CS}$ or $\text{SOLAR}={\text{SMV}}$. In this section, we prove that there is an underlying alternative, defined by $\text{F-TiME}$, which makes it possible to precisely characterize the set $\text{SOLAR}$ of learnable processes for adversarial regression. For the sake of exposition, we recall the property $\text{F-TiME}$ on separable value spaces $(\mathcal{Y},\ell)$.
\paragraph{Property $\text{F-TiME}$:}\textit{For any $\eta>0$, there exists a horizon time $T_\eta \geq 1$, an online learning rule $g_{\leq T_\eta}$ and a random time $\tau$ with $1\leq \tau\leq T_\eta$ such that for any sequence $\mb y:=(y_t)_{t=1}^{T_\eta}$ of values in $\mathcal{Y}$ and any value $y\in\mathcal{Y}$, we have
\begin{equation*}
\mathbb{E}\left[\sum_{t=1}^{\tau} \left(\ell(g_t({\mb y}_{\leq t-1}), y_t) - \ell(y,y_t)\right) -\eta \tau\right] \leq 0.
\end{equation*}
}
As an important note, the random time $\tau$ may depend on the possible randomness of the learning rule $g_\cdot$. However, $\tau$ is not online: it does not depend on any of the values $y_1,y_2,\ldots$ on which the learning rule $g_\cdot$ may be tested. Intuitively, the learning rule uses some randomness which is first privately sampled. The random stopping time $\tau$ depends on this private randomness. This randomness is then never revealed to the adversary choosing the values $\mb y$, except through the realizations of the predictions. We now show that if $\text{F-TiME}$ is satisfied by the value space, then similarly to the case of countable classification, we recover $\text{SOLAR}={\text{SMV}}(=\text{SOUL})$ and there exists an optimistically universal learning rule for adversarial regression.
\ThmGoodValueSpaces*
\begin{proof}
Let us construct a learning rule which we will then prove is universally consistent for adversarial responses under all processes $\mathbb{X}\in{\text{SMV}}$. We first define the learning rules $f_\cdot^\epsilon$, which will be combined in a second step. For $\epsilon>0$, we take the horizon time $T_\epsilon$ and the learning rule $g^\epsilon_{\leq \tau_\epsilon}$ satisfying the condition imposed by the assumption on $(\mathcal{Y},\ell)$. Similarly as before, we then pose $\delta_\epsilon:=\frac{\epsilon}{2\bar\ell (2^{T_\epsilon}+2T_\epsilon)}$ and let $\phi$ be the representative function from the $(1+\delta_\epsilon)$C1NN learning rule. We define a sequence of i.i.d. copies $g^{\epsilon,t}_\cdot$ of the learning rule $g^\epsilon_\cdot$ for all $t\geq 1$. Precisely, this means that the randomness used within these learning rules is i.i.d, and the copy $g^{\epsilon,t}_\cdot$ should be sampled only at time $t$, independently of the past history. For any $t\geq 1$, we then construct an integer $0\leq L_t< T_\epsilon$ which will correspond to the window span used for the prediction $\hat Y_t$. Precisely, we will make this prediction based on the values $Y_{\phi^l(t)}$ for $l=1,\ldots,L_t$. This window span is constructed recursively. Specifically, at time $t$, we draw a uniform $U_t\sim \mathcal{U}([0,1])$ independent from the past, and pose
\begin{equation*}
L_t = \begin{cases}
L_{\phi(t)}+1 &\text{if } U_t< \frac{\mathbb{P}[\tau_\epsilon \geq L_{\phi(t)}+2]}{\mathbb{P}[\tau_\epsilon \geq L_{\phi(t)}+1]},\\
0 &\text{otherwise.}
\end{cases}
\end{equation*}
with the convention $\frac{0}{0}=0$. In practice, this convention is not necessary, since if $\mathbb{P}[\tau_\epsilon\geq u]=0$, the learning rule never creates indices $L_t\geq u-1$ for any time $t\geq 1$. In particular, we always have $L_t\leq T_\epsilon-1$. We now define the learning rule $f^\epsilon_\cdot$ such that for any sequences $\mb x$, $\mb y$ we have
\begin{equation*}
f_t^\epsilon(\mb x_{\leq t-1},\mb y_{\leq t-1}, x_t) := g^{\epsilon,\phi^{L_t}(t)}_{L_t+1}\left(\{y_{\phi^{L_t+1-u}(t)}\}_{u=1}^{L_t}\right).
\end{equation*}
For simplicity, we refer to this prediction as $\hat Y_t(\epsilon)$. The final learning rule $f_\cdot$ is defined similarly as before from the learning rules $f^\epsilon_\cdot$. This ends the construction of our learning rule.\\
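The recursive definition of $L_t$ is designed so that, along a chain of single children, the induced block lengths are distributed exactly as $\tau_\epsilon$: the conditional continuation probabilities $\mathbb{P}[\tau_\epsilon\geq L+2]/\mathbb{P}[\tau_\epsilon\geq L+1]$ telescope. A small numerical check of this telescoping (the function name and the sample distribution for $\tau$ below are ours):

```python
def block_length_pmf(tau_pmf):
    """Distribution of block lengths induced by the recursive rule:
    increment L with probability P[tau >= L+2] / P[tau >= L+1], else reset.
    Here tau_pmf[u] = P[tau = u + 1]; returns the induced block-length pmf."""
    n = len(tau_pmf)
    surv = [sum(tau_pmf[u:]) for u in range(n)]  # surv[u] = P[tau >= u + 1]
    pmf, reach = [], 1.0  # reach = P[block reaches length L + 1]
    for L in range(n):
        cont = surv[L + 1] / surv[L] if L + 1 < n and surv[L] > 0 else 0.0
        pmf.append(reach * (1.0 - cont))  # block stops exactly at length L + 1
        reach *= cont
    return pmf

# the telescoping product reproduces the law of tau
tau = [0.2, 0.5, 0.3]
assert all(abs(a - b) < 1e-12 for a, b in zip(block_length_pmf(tau), tau))
```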
We now show that it is optimistically universal for adversarial responses. By construction of the learning rule $f_\cdot$, Lemma \ref{lemma:concatenation_predictors} still holds. Therefore, we only have to focus on the learning rules $f^\epsilon_\cdot$ and prove that we obtain similar results as before. Let $T\geq 1$ and denote by $\mathcal{A}_i:=\{t\leq T:|\{u\leq T:\phi(u)=t\}|=i\}$ the set of times which have exactly $i$ children within horizon $T$ for $i=0,1,2$. Then, we define
\begin{equation*}
\mathcal{B}_T = \{t\leq T: L_t=0 \text{ and } \nexists t'\in \mathcal{A}_0\cup\mathcal{A}_2, 0\leq u\leq T_\epsilon-1, \phi^u(t')=t\},
\end{equation*}
i.e., times that start a new block and such that their descendants until generation $T_\epsilon-1$ are neither leaves nor nodes with 2 children. In particular, any $t\in\mathcal{B}_T$ has a line of single-child descendants until generation $T_\epsilon-1$. To simplify notations, for any $t\in\mathcal{B}_T$, we denote by $t^u$ its descendant at generation $u-1$ for $1\leq u\leq T_\epsilon$, i.e., we have $t^u = \phi^{T_\epsilon-u}(t^{T_\epsilon})$ for all $1\leq u\leq T_\epsilon$, and $t=t^1$. By construction, because blocks cannot be longer than $T_\epsilon$, there exists $1\leq u\leq T_\epsilon$ such that $t^u$ ends the block started in $t^1$. We have in particular $L_{t^u}=u-1$ and the only child $t'$ of $t^u$ has $L_{t'}=0$. To simplify notations, we denote by $s(t) = u$ the size of the block started in $t$. By construction of the indices $L_t$, as well as the property that for any $t\in \mathcal{B}_T$, its descendants until generation $T_\epsilon-1$ have exactly one child, the blocks $\{t^u,1\leq u\leq s(t)\}$, for $t\in \mathcal{B}_T$, are all disjoint. This implies in particular $\sum_{t\in\mathcal{B}_T} s(t) \leq T$. We first analyze the predictions along these blocks and for any $t\in\mathcal{B}_T$ and $y\in\mathcal{Y}$, we pose $\delta_{t}(y):=\frac{1}{s(t)}\sum_{u=1}^{s(t)} \left(\ell(\hat Y_{t^u},Y_{t^u}) - \ell(y,Y_{t^u}) -\epsilon\right)$. We will now need the following lemma.
\begin{lemma}
\label{lemma:from_deterministic_to_process}
For any sequence $(y^t)_{t\geq 1}$ of values in $\mathcal{Y}$, with probability $1-\delta$ we have
\begin{equation*}
\sum_{t\in\mathcal{B}_T}s(t) \delta_{t}(y^t) \leq \left[\bar\ell(T_\epsilon + 1)+ T_\epsilon\right]\sqrt{2T\ln \frac{2}{\delta}}.
\end{equation*}
\end{lemma}
We also denote $\mathcal{T} = \bigcup_{t\in\mathcal{B}_T} \{t^u,1\leq u\leq s(t)\}$ the union of all blocks within horizon $T$. This set contains all times $t\leq T$ except times close to leaves $\mathcal{A}_0$ or to times in $\mathcal{A}_2$, within distance $T_\epsilon-1-L_t\leq T_\epsilon-1$. Therefore, using the same arguments as in the proof of Theorem \ref{thm:optimistic_regression_totally_bounded}, by the Chernoff bound, with probability at least $1-e^{-T\delta_\epsilon/3}$ we have
\begin{equation*}
T-|\mathcal{T}| \leq |\mathcal{A}_2|2^{T_\epsilon}+(|\mathcal{A}_2|+|\mathcal{A}_0|)T_\epsilon = T_\epsilon + (2^{T_\epsilon}+2T_\epsilon)|\mathcal{A}_2| \leq T_\epsilon + (2^{T_\epsilon}+2T_\epsilon)(2T\delta_\epsilon) =T_\epsilon + \frac{\epsilon}{\bar\ell} T.
\end{equation*}
By the Borel-Cantelli lemma, because $\sum_{T\geq 1} e^{-T\delta_\epsilon/3}<\infty$, almost surely there exists a time $\hat T$ such that for $T\geq \hat T$ we have $T-|\mathcal{T}| \leq T_\epsilon + \frac{\epsilon}{\bar\ell} T$. We denote by $\mathcal{E}_\epsilon$ this event. Then, on the event $\mathcal{E}_\epsilon$, for any $T\geq \hat T$ and for any sequence of values $(y^t)_{t\geq 1}$ we have
\begin{align*}
\sum_{t=1}^{T} \ell(\hat Y_t(\epsilon),Y_t) &\leq \sum_{t\in\mathcal{B}_T} \sum_{u=1}^{s(t)} \ell(\hat Y_{t^u},Y_{t^u}) + ( T-|\mathcal{T}| )\bar\ell\\
&\leq \sum_{t\in\mathcal{B}_T} \sum_{u=1}^{s(t)} \ell(y^t,Y_{t^u}) + \sum_{t\in\mathcal{B}_T} s(t) \delta_{t}(y^t)+ \epsilon\sum_{t\in\mathcal{B}_T} s(t)+ T_\epsilon\bar\ell + \epsilon T\\
&\leq \sum_{t\in\mathcal{B}_T} \sum_{u=1}^{s(t)} \ell(y^t,Y_{t^u}) + \sum_{t\in\mathcal{B}_T} s(t) \delta_t(y^t)+ T_\epsilon \bar\ell + 2\epsilon T.
\end{align*}
Now let $f:\mathcal{X}\to\mathcal{Y}$ be a measurable function to which we compare $f^\epsilon_\cdot$. By Theorem \ref{thm:1+deltaC1NN_optimistic}, because $(1+\delta_\epsilon)$C1NN is optimistically universal without noise and $\mathbb{X}\in\text{SOUL}$, almost surely $ \frac{1}{T}\sum_{t=1}^T \ell(f(X_{\phi(t)}),f(X_t)) \to 0$. We denote by $\mathcal{F}_\epsilon$ this event of probability one. The proof of Theorem \ref{thm:optimistic_regression_totally_bounded} shows that on $\mathcal{F}_\epsilon$, for any $0\leq u\leq T_\epsilon-1$ we have
\begin{equation*}
\frac{1}{T}\sum_{t=1}^T \ell(f(X_{\phi^u(t)}),f(X_t)) \to 0.
\end{equation*}
We let $y^t=f(X_t)$ for all $t\geq 1$. Then, recalling that for any $t\in\mathcal{B}_T$, we have $t = \phi^{u-1}(t^u)$, on the event $\mathcal{E}_\epsilon$, for any $T\geq \hat T$ we have
\begin{align*}
\sum_{t=1}^{T} \ell(\hat Y_t(\epsilon),Y_t)
&\leq \sum_{t\in\mathcal{B}_T} \sum_{u=1}^{s(t)} \left((1+\epsilon)\ell(f(X_{t^u}),Y_{t^u}) + c_\epsilon^\alpha \ell(f(X_t),f(X_{t^u}))\right) + \sum_{t\in\mathcal{B}_T} s(t) \delta_t(y^t)+ T_\epsilon \bar\ell + 2\epsilon T\\
&\leq \sum_{t=1}^T \ell(f(X_t),Y_t) +c_\epsilon^\alpha \sum_{u=0}^{T_\epsilon-1} \sum_{t=1}^T \ell(f(X_{\phi^u(t)}), f(X_t)) + \sum_{t\in\mathcal{B}_T} s(t) \delta_{t}(y^t)+ T_\epsilon\bar\ell + 3\epsilon T,
\end{align*}
where in the first inequality we used Lemma \ref{lemma:loss_identity}. By Lemma \ref{lemma:from_deterministic_to_process}, with probability $1-\frac{1}{T^2}$, we have
\begin{equation*}
\sum_{t\in\mathcal{B}_T} s(t) \delta_t(y^t)\leq 2\left[\bar\ell(T_\epsilon + 1)+ T_\epsilon\right]\sqrt{T(\ln T + \ln 2/2)}.
\end{equation*}
Because $\sum_{T\geq 1}\frac{1}{T^2}<\infty$, the Borel-Cantelli lemma implies that on an event $\mathcal{G}_\epsilon$ of probability one, there exists $\hat T_2$ such that for all $T\geq \hat T_2$ the above inequality holds. As a result, on the event $\mathcal{E}_\epsilon\cap\mathcal{F}_\epsilon\cap\mathcal{G}_\epsilon$ we obtain for any $T\geq \max(\hat T,\hat T_2)$ that
\begin{multline*}
\sum_{t=1}^{T} \ell(\hat Y_t(\epsilon),Y_t) \leq \sum_{t=1}^T \ell(f(X_t),Y_t) + c_\epsilon^\alpha \sum_{u=0}^{T_\epsilon-1} \sum_{t=1}^T \ell(f(X_{\phi^u(t)}), f(X_t)) + 2\left[\bar\ell(T_\epsilon + 1)+ T_\epsilon\right]\sqrt{T(\ln T + \ln 2/2)}\\
+ T_\epsilon \bar\ell + 3\epsilon T.
\end{multline*}
Here, $\frac{1}{T}\sum_{u=0}^{T_\epsilon-1} \sum_{t=1}^T \ell(f(X_{\phi^u(t)}), f(X_t)) \to 0$ because the event $\mathcal{F}_\epsilon$ is met. Therefore, we obtain that on the event $\mathcal{E}_\epsilon\cap\mathcal{F}_\epsilon\cap\mathcal{G}_\epsilon$ of probability one,
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^{T} \left[\ell(\hat Y_t(\epsilon),Y_t) - \ell(f(X_t),Y_t)\right] \leq 3\epsilon,
\end{equation*}
i.e., almost surely, the learning rule $f^\epsilon_\cdot$ achieves excess risk at most $3\epsilon$ compared to the fixed function $f$. By the union bound, on the event $\bigcap_{i\geq 0}(\mathcal{E}_{\epsilon_i}\cap\mathcal{F}_{\epsilon_i}\cap\mathcal{G}_{\epsilon_i})$ of probability one we have that
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^{T} \left[\ell(\hat Y_t(\epsilon_i),Y_t) - \ell(f(X_t),Y_t)\right] \leq 3\epsilon_i,\quad \forall i\geq 0.
\end{equation*}
The rest of the proof uses similar arguments as in the proof of Theorem \ref{thm:optimistic_regression_totally_bounded}. Precisely, let $\mathcal{H}$ be the almost sure event of Lemma \ref{lemma:concatenation_predictors} such that there exists $\hat t$ for which
\begin{equation*}
\forall t\geq \hat t,\forall i\in I_t,\quad \sum_{s=t_i}^t\ell(\hat Y_s,Y_s) \leq \sum_{s=t_i}^t \ell(\hat Y_s(\epsilon_i),Y_s) + (2+\bar\ell+\bar\ell^2)\sqrt {t\ln t}.
\end{equation*}
In the rest of the proof we will suppose that the event $\mathcal{H}\cap \bigcap_{i\geq 0}( \mathcal{E}_{\epsilon_i}\cap\mathcal{F}_{\epsilon_i}\cap\mathcal{G}_{\epsilon_i})$ of probability one is met. Let $i\geq 0$. For all $T\geq \max(\hat t,t_i)$ we have
\begin{align*}
\frac{1}{T}\sum_{t=1}^T \left[\ell(\hat Y_t,Y_t)- \ell(f(X_t),Y_t)\right] &\leq \frac{t_i}{T}\bar \ell + \frac{1}{T}\sum_{t=t_i}^T \left[\ell(\hat Y_t,Y_t)- \ell(f(X_t),Y_t)\right]\\
&\leq \frac{t_i}{T}\bar \ell + \frac{1}{T}\sum_{t=t_i}^T \left[\ell(\hat Y_t(\epsilon_i),Y_t)- \ell(f(X_t),Y_t)\right] + (2+\bar\ell +\bar\ell^2)
\sqrt{\frac{\ln T}{ T}}\\
&\leq \frac{1}{T}\sum_{t=1}^T \left[\ell(\hat Y_t(\epsilon_i),Y_t)- \ell(f(X_t),Y_t)\right] + \frac{2t_i}{T}\bar\ell + (2+\bar\ell +\bar\ell^2)
\sqrt{\frac{\ln T}{ T}}.
\end{align*}
Therefore we obtain $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \left[\ell(\hat Y_t,Y_t)- \ell(f(X_t),Y_t)\right] \leq 3\epsilon_i$. Because this holds for any $i\geq 0$ we finally obtain
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \left[\ell(\hat Y_t,Y_t)- \ell(f(X_t),Y_t)\right] \leq 0.
\end{equation*}
As a result, $f_\cdot$ is universally consistent for adversarial responses under all $\text{SOUL}$ processes. Hence, $\text{SOLAR}=\text{SOUL}$ and $f_\cdot$ is in fact optimistically universal. This ends the proof of the theorem.
\end{proof}
We now prove Lemma \ref{lemma:from_deterministic_to_process} which requires careful analysis.
\begin{proof}[Proof of Lemma \ref{lemma:from_deterministic_to_process}]
For any $t\in\mathcal{B}_T$ and $y\in \mathcal{Y}$, by construction of the learning rule $f^\epsilon_\cdot$, we have
\begin{equation*}
s(t) \delta_t(y) =
\sum_{u=1}^{s(t)} \left(\ell(g^{\epsilon,t}_u(\{Y_{t^l}\}_{l=1}^{u-1}), Y_{t^u}) - \ell(y,Y_{t^u})\right) - \epsilon s(t).
\end{equation*}
Also, observe that the quantities $L_t$ were constructed precisely so that for any $t\in\mathcal{B}_T$, $s(t)$ has the same distribution as $\tau_\epsilon$. The randomness in the construction of $L_t$ for $t\geq 1$ will be used to view the above equation as a realization of the same sum where $s(t)$ is a random stopping time, which allows us to use the hypothesis on $g^\epsilon_{\leq \tau_\epsilon}$. We will then use concentration inequalities on the sequence formed by $s(t)\delta_t(y^t)$ for all $t\in \mathcal{B}_T$. Unfortunately, neither of these steps can be performed directly, because the values $Y_{t^u}$ for $t\in \mathcal{B}_T$ and $1\leq u\leq s(t)$ are dependent on each other and do not form martingales.
We first fix the sequence of values $(y^t)_{t\geq 1}$. Also, for any $t\leq T_\epsilon$, any sequence $\mb y_{\leq t-1}$, and any value $y\in \mathcal{Y}$, we define
\begin{equation*}
\bar \ell(g_t(\mb y_{\leq t-1}), y) := \mathbb{E} \left[\ell(g_t(\mb y_{\leq t-1}), y)\right],
\end{equation*}
where the expectation is taken over the randomness of the learning rule. Now consider the following sequence $(\ell(\hat Y_{t^u},Y_{t^u}) -\bar \ell(\hat Y_{t^u},Y_{t^u}))_{t\in\mathcal{B}_T,1\leq u\leq s(t)}$. Because the learning rule uses i.i.d. copies of the learning rule $g^\epsilon_\cdot$, if we order the former sequence by increasing order of $\phi^{L_t+1-u}(t)$, we obtain a sequence of martingale differences. We can extend this sequence with zeros to ensure that it has length exactly $T$. As a result we obtain a sequence of $T$ martingale differences, which are all bounded by $\bar\ell$ in absolute value. Now, the Azuma-Hoeffding inequality implies that with probability $1-\delta/2$, we have
\begin{equation*}
\sum_{t\in\mathcal{B}_T} \sum_{u=1}^{s(t)}\ell(\hat Y_{t^u},Y_{t^u}) \leq \sum_{t\in\mathcal{B}_T} \sum_{u=1}^{s(t)}\bar\ell(\hat Y_{t^u},Y_{t^u}) + \bar\ell\sqrt{2T\ln \frac{2}{\delta}}.
\end{equation*}
We denote by $\mathcal{E}$ the event where the above inequality is met. We will now reason only on these averaged losses. We introduce for any $t\in\mathcal{B}_T$ the variable
\begin{equation*}
\bar\delta_t(y):=\frac{1}{s(t)}\sum_{u=1}^{s(t)}\left(\bar\ell(
\hat Y_{t^u},Y_{t^u})-\ell(y,Y_{t^u}) -\epsilon\right).
\end{equation*}
The hypothesis on the learning rule $g^\epsilon_{\leq \tau_\epsilon}$ now becomes: for any $y\in\mathcal{Y}$ and any sequence of values $\mb y$,
\begin{equation}
\label{eq:hypothesis_good_predictor}
\sum_{t=1}^{T_\epsilon} \mathbb{P}[\tau_\epsilon=t] \left[\sum_{u=1}^t \left(\bar\ell(g_u(\mb y_{\leq u-1}),y_u)-\ell(y,y_u)\right) - \epsilon t\right] \leq 0.
\end{equation}
We now reason on the process $\mathbb{Y}$. This process is arbitrary and allowed to depend on the randomness of the learning rule when revealed. In particular, $\mathbb{Y}$ may depend on the realizations of $U_t$ for $t\geq 1$, which define the stopping times $s(t)$ for all $t\in \mathcal{B}_T$. We will only focus on the times in $\mathcal{T}=\bigcup_{t\in\mathcal{B}_T}\{t^u,1\leq u\leq s(t)\}$. We reason conditionally on a realization of the tree formed by $\phi$. Formally, we consider a specific realization of the process $\mathbb{X}$ and the corresponding rule $(1+\delta_\epsilon)$C1NN which is used in our learning rule to construct the tree with parent relations given by $\phi$. Hence, we can now suppose that $\mathbb{X}$ and $\phi$ are fixed. Now note that because times $t\in\mathcal{B}_T$ were chosen so that the starting point of their corresponding block had no descendants in $\mathcal{A}_0\cup\mathcal{A}_2$ within distance $T_\epsilon-1$, irrespective of the horizon value $s(t)$, which is bounded by $T_\epsilon$, the block started at $t\in\mathcal{B}_T$ always has time to end within time horizon $T$. Formally, on the realization of $\mathbb{X}$, $\mathbb{Y}$ is an online process which depends on the past values of $U_{\leq t-1}$. As a result, we can view the ``adversarial'' process $\mathbb{Y}_{\cdot\in\mathcal{T}}$ as a decision process which takes into account the values given by the realizations $U_t$ for $t\geq 1$ in an online fashion. Consider a realization $\mb Y$ of this random decision process, and enumerate the elements of $\mathcal{B}_T:=\{t_1\leq \ldots\leq t_{|\mathcal{B}_T|}\}$. Recall that we have $|\mathcal{B}_T|\leq T$. Now, $\mb Y$ can be encoded as a binary decision tree $\mathcal{G}$ as follows.
There are two types of nodes, splitting nodes $S$ and leaves $L$, so that $V=S\cup L$. Each splitting node $v\in S$ contains a value $z_v\in \mathcal{Y}$ and an index $i_v\in \{1,\ldots,T\}$ and intuitively corresponds to the process trying to add value $z_v$ to the block of index $i_v$, which started at $t_{i_v}$. Splitting nodes always have a right child and possibly a left child, while leaves have no children. This tree allows us to construct the process $\mathbb{Y}$ taking into account the realizations of $U_t$ for $t\geq 1$. For instance, the root $r$---which is always a splitting node---is such that historically, the first started block has index $i_r$, i.e., $i_r=\arg\min\{t_i , 1\leq i\leq |\mathcal{B}_T|\}=1$. The process tries to add the value $z_r$ to the current block $i_r$. Precisely, we have $Y_{t_{i_r}^1}=z_r$ if the corresponding sample $U_{t_{i_r}^1}$ allows adding a value to block $i_r$---here, this is almost surely the case because $\mathbb{P}[\tau_\epsilon\geq 1]=1$ and the block $i_r$ is still empty at this point. Depending on the result of this sample, the decision process can follow two scenarios. Either the value was successfully added to the block $i_r$, in which case we follow the left child of the root; or the value was rejected, in which case we follow the right child of the root. More generally, arrived at a given splitting node $v\in S$, we try adding the value $z_v$ to the current block $i_v$ started at time $t_{i_v}$. If the block already contained $u-1$ values, we draw $U_{t^u_{i_v}}$.
\begin{itemize}
\item If $U_{t^u_{i_v}}<\frac{\mathbb{P}[\tau_\epsilon\geq u]}{\mathbb{P}[\tau_\epsilon\geq u-1]}$, the value is successfully added: $Y_{t^u_{i_v}}=z_v$. Provided that $s(t_{i_v})\geq u-1$, this case corresponds precisely to $s(t_{i_v})\geq u$. We then move to the left child of $v$ if it exists. If it does not exist, the process ends.
\item Otherwise, the value is rejected. This corresponds to the case where $u-1=s(t_{i_v})$, i.e., the block $i_v$ was already complete. We then move to the right child of $v$.
\end{itemize}
In practice, the process never ends at a splitting node: a splitting node may lack a left child only if the value is always rejected; otherwise this would not form a complete decision rule for the process $\mb Y$. Thus, the process ends when it arrives at a leaf node. The binary decision tree is constructed so that from a node $v\in S$, we move to the right child when the block $i_v$ has been completed, or in other words, when we have already added exactly $s(t_{i_v})$ values to this block.
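The acceptance mechanism described above can be sanity-checked numerically: accepting the $u$-th value of a block while $U<\mathbb{P}[\tau_\epsilon\geq u]/\mathbb{P}[\tau_\epsilon\geq u-1]$ produces a block length with the law of $\tau_\epsilon$. The sketch below is a standalone illustration with a hypothetical three-point law for $\tau_\epsilon$; it is not part of the paper's construction.

```python
import random
from collections import Counter

# Hypothetical law of tau_epsilon on {1, 2, 3}, for illustration only.
law = {1: 0.5, 2: 0.3, 3: 0.2}
T_eps = max(law)

def tail(u):
    # P[tau_epsilon >= u], with tail(0) = 1 by convention.
    return sum(p for t, p in law.items() if t >= u)

def sample_block_length(rng):
    # Sequentially accept values while U < P[tau >= u] / P[tau >= u-1];
    # the number of accepted values has the same law as tau_epsilon.
    s = 0
    while s < T_eps and rng.random() < tail(s + 1) / tail(s):
        s += 1
    return s

rng = random.Random(0)
n = 20000
counts = Counter(sample_block_length(rng) for _ in range(n))
for t, p in law.items():
    assert abs(counts[t] / n - p) < 0.02  # empirical frequencies match the law
```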
As a remark, one can note that since $\mathbb{X}$ and $\phi$ are already sampled, the process $\mb Y$ has no choice of the indices on which to split at each node of the binary decision tree. Indeed, these are defined by the historical ordering of the variables $t_i^u$ for all $i\leq |\mathcal{B}_T|$ and $1\leq u\leq s(t_i)$ through the tree defined by $\phi$---recall that by construction of $\mathcal{B}_T$, independently of the values of $U_t$ for $t\geq 1$, started blocks from $\mathcal{B}_T$ will be completed. The random process $\mathbb{Y}|\mathbb{X},\phi$ can only induce randomness over the values $z_v$ at each splitting node $v\in S$. Hence a realization $\mb Y$ corresponds to a specific realization of the values $z_v$ for $v\in S$. However, this additional constraint on $\mb Y$ is not needed to derive our estimates. In other words, even if the process $\mathbb{Y}|\mathbb{X},\phi$ were allowed to choose which variable to split upon at each node, the property that once a block is started it has enough time to be completed would be sufficient for the following analysis to hold.
The corresponding decision tree has the property that from any node $v\in S$, the right sub-tree rooted at the right child of $v$ (if it exists) does not have splitting nodes with the same index $i_v$ as $v$. Indeed, we moved to this sub-tree if the block $i_v$ was already complete. Hence, the future decision process never tries to add an additional value to block $i_v$. Further, because of the definition of $\mathcal{B}_T$, which ensures that any block can end within time horizon $T$ irrespective of the stopping times $s(t)\leq T_\epsilon$, for any leaf $l$ of the decision tree and any node $v$ in the path from the leaf to the root $r$, the block $i_v$ was completed, i.e., we moved right at a node which had the same index $i_v$. For a given node $v$, we will denote by $N_v(j)$ the number of values which have been previously added to block $j$ in the path from the root to $v$. In particular, $N_v(i_v)$ counts the number of times a splitting on index $i_v$ was performed, the vertex $v$ not included. Last, because $\tau_\epsilon\leq T_\epsilon$, if a node $v$ satisfies $N_v(i_v)=T_\epsilon$, then the value $z_v$ will always be rejected and we can prune the left subtree rooted at the left child of $v$ if it exists. Hence we can prune the decision tree and suppose without loss of generality that for any splitting node $v\in S$, we have $N_v(i_v)\leq T_\epsilon$. As a result of these remarks, one can note that leaves can only be right children.
Recall that by design of the learning rule $f_\cdot^\epsilon$, at any time $t\leq T$, the value $Y_t$ is independent from the sampling $U_t$. As a result, the precise realization of $\mathbb{Y}$ given the realizations of $U_t$ for $t\geq 1$ corresponds to a random walk on the decision tree. Precisely, arrived at a splitting node $v\in S$, the process follows its left child with probability $\frac{\mathbb{P}[\tau_\epsilon\geq N_v(i_v)+1]}{\mathbb{P}[\tau_\epsilon\geq N_v(i_v)]}$---which is zero if $N_v(i_v)=T_\epsilon$ and more precisely, the process \emph{never} follows its left child in this case---and its right child with probability $1-\frac{\mathbb{P}[\tau_\epsilon\geq N_v(i_v)+1]}{\mathbb{P}[\tau_\epsilon\geq N_v(i_v)]}$. Thanks to this decision tree, we will upper bound for this specific realization $\mb Y$ and any $\lambda>0$ the quantity
\begin{equation*}
\Gamma_\lambda:= \mathbb{E}\left[e^{\lambda \sum_{t\in\mathcal{B}_T}s(t)\bar \delta_t(y^t)}\right],
\end{equation*}
where the expectation is taken over $U_t$ for $t\in\mathcal{T}$. At any node $v\in V$, we define the quantity
\begin{equation*}
\Gamma_\lambda(v):= e^{-\lambda \sum_{j=1}^T \sum_{u=1}^{N_v(j)} (\bar\ell(\hat Y_{t^u_j},z_{v,u}^j)-\ell(y^{t_j},z_{v,u}^j)-\epsilon)} \mathbb{E}\left[\left.e^{\lambda \sum_{t\in\mathcal{B}_T}s(t)\bar \delta_{t}(y^t)}\right| v\right]
\end{equation*}
where $z_{v,u}^j$ denotes the value $z_{v'}$ of the $u$-th splitting node on index $j$ along the path from the root $r$ to $v$, and the conditioning on $v$ denotes the event that the decision process has passed through node $v$. In particular, on this event, for any $1\leq j\leq T$ and $1\leq u\leq N_v(j)$ we have $Y_{t^u_j}=z_{v,u}^j$, by construction of the decision tree process. The other values of $Y$ are not yet revealed when the process arrives at node $v$. Hence, the quantity $\Gamma_\lambda(v)$ corresponds to the expected future contribution to the sum $\lambda \sum_{t\in\mathcal{B}_T}s(t)\bar\delta_t(y^t)$ of the still-unknown part at this step of the process. The first factor in $\Gamma_\lambda(v)$ precisely removes the contribution of the already-decided values of $Y$. An important remark is that $\Gamma_\lambda(r) = \Gamma_\lambda$.
We now define for all $0\leq u\leq T_\epsilon$ and $y\in \mathcal{Y}$ and any sequence $\mb z_{\leq u}$,
\begin{equation*}
\gamma_\lambda(y,\mb z_{\leq u}) = \sup_{z_{u+1},\ldots,z_{T_\epsilon}\in \mathcal{Y}} \frac{1}{\mathbb{P}[\tau_\epsilon\geq u]} \sum_{k=u}^{T_\epsilon} \mathbb{P}[\tau_\epsilon= k] e^{\lambda \sum_{l=u+1}^k\left(\bar\ell(g_l(\mb z_{\leq u},\mb z_{u+1\leq \cdot \leq l-1}),z_l) -\ell(y,z_l)-\epsilon\right)},
\end{equation*}
with the convention $\frac{0}{0}=1$. In other words, $\gamma_\lambda(y,\mb z_{\leq u})$ corresponds to the worst possible contribution of future values for the rule $g_\cdot$ given the first values $\mb z_{\leq u}$. Note that if $u=T_\epsilon$, all values have been decided, so the future worst-case contribution is always null. For any $v\in V$, we denote by $A_v$ the set of indices $1\leq i\leq T$ such that the block $i$ was started in the past but potentially not completed, i.e., there exists a node $v'\neq v$ in the path from the root $r$ to $v$ with $i_{v'}=i$, but there does not exist a node $v'$ from the root to $v$ such that $i_{v'}=i$ and after $v'$ we moved to its right child. We also introduce $B_v$, the set of indices $1\leq i\leq T$ which do not appear in a split from any node $v'\neq v$ in the path from $r$ to $v$. This corresponds to blocks that have not been started but potentially could be in the future. We now prove by induction that for any node $v\in V$,
\begin{equation}
\label{eq:induction_nodes}
\Gamma_\lambda(v) \leq \prod_{j \in A_v} \gamma_\lambda\left(y^{t_j},{\mb z}^{\mb j}_{\mb v,\leq N_v(j)} \right) \prod_{j\in B_v} \max(1,\gamma_\lambda(y^{t_j})).
\end{equation}
We start the induction at the leaves. For any leaf $v\in L$, because there are no future values to reveal, we have $\Gamma_\lambda(v)=1$. Also, because a leaf can only end the decision process once all started blocks are complete, we have $A_v=\emptyset$. Hence Eq~\eqref{eq:induction_nodes} holds. We now show the induction. Let $v\in S$ be such that all its children satisfy Eq~\eqref{eq:induction_nodes}. We denote by $v_l$ (resp. $v_r$) the left (resp. right) child of $v$, if it exists. Then, recalling that from $v$ we move to $v_l$ with probability $\frac{\mathbb{P}[\tau_\epsilon\geq N_v(i_v)+1]}{\mathbb{P}[\tau_\epsilon \geq N_v(i_v)]} $ and that this probability is null if $v$ does not have a left child, we have
\begin{align*}
\Gamma_\lambda(v)&= \frac{\mathbb{P}[\tau_\epsilon\geq N_v(i_v)+1]}{\mathbb{P}[\tau_\epsilon \geq N_v(i_v)]}
e^{-\lambda \sum_{j=1}^T \sum_{u=1}^{N_v(j)} (\bar\ell(\hat Y_{t^u_j},z_{v,u}^j)-\ell(y^{t_j},z_{v,u}^j)-\epsilon)} \mathbb{E}\left[\left.e^{\lambda \sum_{t\in\mathcal{B}_T}s(t)\bar\delta_t(y^t)}\right| v_l\right]\\
&+ \frac{\mathbb{P}[\tau_\epsilon = N_v(i_v)]}{\mathbb{P}[\tau_\epsilon \geq N_v(i_v)]}
e^{-\lambda \sum_{j=1}^T \sum_{u=1}^{N_v(j)} (\bar\ell(\hat Y_{t^u_j},z_{v,u}^j)-\ell(y^{t_j},z_{v,u}^j)-\epsilon)} \mathbb{E}\left[\left.e^{\lambda \sum_{t\in\mathcal{B}_T}s(t)\bar\delta_t(y^t)}\right| v_r\right]\\
&=\frac{\mathbb{P}[\tau_\epsilon\geq N_v(i_v)+1]}{\mathbb{P}[\tau_\epsilon \geq N_v(i_v)]}
\exp\left[\lambda \left(\bar\ell\left(\hat Y_{t^{N_{v_l}(i_v)}_{i_v}},z_v\right)-\ell\left(y^{t_{i_v}},z_v\right)-\epsilon\right)\right] \Gamma_\lambda(v_l) + \frac{\mathbb{P}[\tau_\epsilon = N_v(i_v)]}{\mathbb{P}[\tau_\epsilon \geq N_v(i_v)]}
\Gamma_\lambda(v_r).
\end{align*}
Now note that if $v_l$ exists, then we necessarily have $i_v\in A_{v_l}$. Further, we always have $i_v\notin A_{v_r}$ because if we moved right after node $v$, then the block $i_v$ was completed. As a result, we have $A_{v_r}=A_{v_l}\setminus \{i_v\} = A_v\setminus\{i_v\}$ and also $B_{v_l}=B_{v_r} = B_v\setminus\{i_v\}$. Finally, for any $j\neq i_v$, we have $N_v(j)=N_{v_r}(j)=N_{v_l}(j)$. Therefore, we obtain
\begin{align*}
\Gamma_\lambda(v)
&\leq \frac{\mathbb{P}[\tau_\epsilon\geq N_v(i_v)+1]}{\mathbb{P}[\tau_\epsilon \geq N_v(i_v)]}
\exp\left[\lambda \left(\bar\ell\left(\hat Y_{t^{N_{v_l}(i_v)}_{i_v}},z_v\right)-\ell\left(y^{t_{i_v}},z_v\right)-\epsilon\right)\right] \gamma_\lambda\left(y^{t_{i_v}},{\mb z} ^{\mb{i_v}}_{\mb v,\leq N_v(i_v)} ,z_v\right)\\
&\quad\quad \quad \cdot \prod_{j\in A_{v_l}\setminus \{i_v\}}\gamma_\lambda\left(y^{t_j},{\mb z}^{\mb j}_{\mb v,\leq N_v(j)}\right) \prod_{j\in B_{v_l}}\max(1,\gamma_\lambda(y^{t_j}))\\
&\quad\quad\quad + \frac{\mathbb{P}[\tau_\epsilon = N_v(i_v)]}{\mathbb{P}[\tau_\epsilon \geq N_v(i_v)]}
\prod_{j\in A_{v_r}}\gamma_\lambda\left(y^{t_j},{\mb z}^{\mb j}_{\mb v,\leq N_v(j)}\right) \prod_{j\in B_{v_r}}\max(1,\gamma_\lambda(y^{t_j}))\\
&\leq \left[\frac{\mathbb{P}[\tau_\epsilon = N_v(i_v)]}{\mathbb{P}[\tau_\epsilon \geq N_v(i_v)]}
+
\frac{\mathbb{P}[\tau_\epsilon\geq N_v(i_v)+1]}{\mathbb{P}[\tau_\epsilon \geq N_v(i_v)]}
\exp\left[\lambda \left(\bar\ell\left(\hat Y_{t^{N_{v_l}(i_v)}_{i_v}},z_v\right)-\ell\left(y^{t_{i_v}},z_v\right)-\epsilon\right)\right] \gamma_\lambda\left(y^{t_{i_v}},{\mb z}^{\mb{i_v}}_{\mb v,\leq N_v(i_v)},z_v\right)\right]\\
&\quad\quad \quad \cdot \prod_{j\in A_v\setminus \{i_v\}}\gamma_\lambda\left(y^{t_j},{\mb z}^{\mb j}_{\mb v,\leq N_v(j)} \right) \prod_{j\in B_v\setminus\{i_v\}}\max(1,\gamma_\lambda (y^{t_j}))
\end{align*}
Now recall that $\gamma_\lambda$ was constructed as the worst possible future contribution. Essentially, $z_v$ could be optimized to yield a worse contribution, which gives the following inequality.
\begin{align*}
\Gamma_\lambda(v)
&\leq \gamma_\lambda\left(y^{t_{i_v}},{\mb z}^{\mb{i_v}}_{\mb v,\leq N_v(i_v)}\right) \prod_{j\in A_v\setminus \{i_v\}}\gamma_\lambda\left(y^{t_j},{\mb z}^{\mb j}_{\mb v,\leq N_v(j)} \right) \prod_{j\in B_v\setminus\{i_v\}}\max(1,\gamma_\lambda (y^{t_j}))\\
&= \prod_{j\in A_v}\gamma_\lambda\left(y^{t_j},{\mb z}^{\mb j}_{\mb v,\leq N_v(j)} \right) \prod_{j\in B_v\setminus\{i_v\}}\max(1,\gamma_\lambda (y^{t_j})) \cdot \left[\mathbbm{1}(i_v\notin B_v) + \mathbbm{1}(i_v\in B_v)\gamma_\lambda(y^{t_{i_v}} ) \right]\\
&\leq \prod_{j\in A_v}\gamma_\lambda\left(y^{t_j},{\mb z}^{\mb j}_{\mb v,\leq N_v(j)} \right) \prod_{j\in B_v}\max(1,\gamma_\lambda (y^{t_j})).
\end{align*}
This ends the recursion and shows that Eq~\eqref{eq:induction_nodes} holds for all $v\in V$, in particular for the root $r$. At the root, because no values have yet been revealed, we have $A_r=\emptyset$ and $B_r =\{1\leq i\leq T\}$. As a result we obtain the desired result
\begin{equation*}
\Gamma_\lambda = \Gamma_\lambda(r) \leq \prod_{1\leq j\leq T}\max(1,\gamma_\lambda(y^{t_j}))\leq \max(1,\gamma_\lambda)^T,
\end{equation*}
where $\gamma_\lambda:=\sup_{y\in\mathcal{Y}}\gamma_\lambda(y)$. We now give an upper bound on $\gamma_\lambda$. To do so, we note that for any $y,z_1,\ldots, z_{T_\epsilon}\in\mathcal{Y}$, we have $-T_\epsilon(\bar\ell+\epsilon) \leq \sum_{u=1}^t\left(\bar\ell(g_u(\mb z_{\leq u-1}),z_u) -\ell(y,z_u)-\epsilon \right)\leq T_\epsilon\bar\ell$ for any $1\leq t\leq T_\epsilon$, so we can apply Hoeffding's lemma.
\begin{align*}
\gamma_\lambda &=\sup_{y,y_1,\ldots,y_{T_\epsilon}} \mathbb{E}_{\tau_\epsilon}\left[e^{\lambda\sum_{u=1}^{\tau_\epsilon} \left(\bar\ell(g_u(\mb y_{\leq u-1}),y_u) -\ell(y,y_u)-\epsilon\right)} \right]\\
&\leq \sup_{y,y_1,\ldots,y_{T_\epsilon}} \exp\left(\lambda \mathbb{E}_{\tau_\epsilon}\left[\sum_{u=1}^{\tau_\epsilon} \left(\bar\ell(g_u(\mb y_{\leq u-1}),y_u) -\ell(y,y_u)-\epsilon\right) \right] + \lambda^2\frac{T_\epsilon^2 (2\bar\ell+\epsilon)^2}{8} \right)\\
&\leq e^{\lambda^2 T_\epsilon^2(\bar\ell+1)^2/2 },
\end{align*}
where in the second inequality we used the hypothesis Eq~\eqref{eq:hypothesis_good_predictor} on the learning rule $g_{\leq \tau_\epsilon}$. As a result, we obtain $\Gamma_\lambda \leq e^{\lambda^2 T_\epsilon^2(\bar\ell+1)^2 T/2 }.$ We can now apply the standard Chernoff bounding method. Let $\alpha> 0$. Then,
\begin{equation*}
\mathbb{P}\left[\sum_{t\in\mathcal{B}_T}s(t)\bar\delta_t(y^t)\geq \alpha\right]\leq \min_{\lambda>0}\exp\left(-\lambda \alpha + \lambda^2 \frac{T_\epsilon^2(\bar\ell+1)^2 T}{2} \right) = \exp\left(-\frac{\alpha^2}{2 T_\epsilon^2(\bar\ell+1)^2 T} \right),
\end{equation*}
and as a result, with probability at least $1-\delta/2$ we have
\begin{equation*}
\sum_{t\in\mathcal{B}_T}s(t)\bar \delta_t(y^t) < T_\epsilon (\bar\ell+1) \sqrt{2T\ln \frac{2}{\delta}}.
\end{equation*}
Because this holds for any realization of $\mathbb{X}$ and $\mb Y$, we finally obtain that with probability $1-\delta/2$ the same inequality holds, where the probability is now taken over $\mathbb{X}$, $\mathbb{Y}$ and the learning rule together. We denote by $\mathcal{F}$ the event where the above inequality holds. Then, on $\mathcal{E}\cap\mathcal{F}$, which has probability at least $1-\delta$ by the union bound, we have
\begin{equation*}
\sum_{t\in\mathcal{B}_T}s(t)\delta_t(y^t) \leq \sum_{t\in\mathcal{B}_T}s(t)\bar \delta_t(y^t) + \bar\ell \sqrt{2T\ln\frac{2}{\delta}} \leq \left[\bar\ell(T_\epsilon + 1)+ T_\epsilon\right] \sqrt{2T\ln \frac{2}{\delta}}.
\end{equation*}
This ends the proof of the lemma.
\end{proof}
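As an aside, the concentration step in the proof above (Azuma-Hoeffding followed by a Chernoff bound) can be checked empirically. The sketch below is a standalone illustration with hypothetical $\pm 1$ martingale differences (so $\bar\ell=1$); it verifies that the deviation level $\bar\ell\sqrt{2T\ln(2/\delta)}$ is exceeded with frequency at most $\delta/2$.

```python
import math
import random

def azuma_exceed_frequency(T=200, delta=0.1, trials=4000, seed=1):
    # Martingale differences bounded by 1 in absolute value (fair +/-1 flips).
    # Azuma-Hoeffding: P[S_T >= sqrt(2 T ln(2/delta))] <= delta / 2.
    rng = random.Random(seed)
    threshold = math.sqrt(2 * T * math.log(2 / delta))
    exceed = sum(
        sum(1 if rng.random() < 0.5 else -1 for _ in range(T)) >= threshold
        for _ in range(trials)
    )
    return exceed / trials

freq = azuma_exceed_frequency()
assert freq <= 0.1 / 2  # empirical tail is below the guaranteed level delta/2
```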
We are now interested in value spaces $(\mathcal{Y},\ell)$ which do not satisfy $\text{F-TiME}$. We will show that in this case, $\text{SOLAR}$ reduces to the set of processes $\text{CS}$. We first introduce a second property on value spaces as follows.
\paragraph{Property 2.}\textit{For any $\eta>0$, there exists a horizon time $T_\eta \geq 1$ and an online learning rule $g_{\leq \tau}$, where $\tau$ is a random time with $1\leq \tau\leq T_\eta$, such that for any sequence $\mb y:=(y_t)_{t=1}^{T_\eta}$ of values in $\mathcal{Y}$ and any value $y\in\mathcal{Y}$, we have
\begin{equation*}
\mathbb{E}\left[\frac{1}{\tau}\sum_{t=1}^{\tau} \left(\ell(g_t({\mb y}_{\leq t-1}), y_t) - \ell(y,y_t)\right)\right] \leq \eta.
\end{equation*}
}
This equation can be conveniently rewritten as
\begin{equation*}
\mathbb{E}\left[\frac{\sum_{t=1}^{\tau} \left(\ell(g_t({\mb y}_{\leq t-1}), y_t) - \ell(y,y_t)\right)-\eta\tau}{\tau}\right] \leq 0,
\end{equation*}
which hints at the similarity with $\text{F-TiME}$. We prove their equivalence in the next lemma.
\begin{lemma}
\label{lemma:equivalent_conditions}
Property $\text{F-TiME}$ is equivalent to Property 2.
\end{lemma}
\begin{proof}
We first show that $\text{F-TiME}$ implies Property 2. Let $(\mathcal{Y},\ell)$ be a value space satisfying $\text{F-TiME}$. We now fix $\eta>0$ and let $T$ and $g_{\leq \tau}$ be such that for any sequence $\mb y:=(y_t)_{t=1}^{T}$ of values in $\mathcal{Y}$ and any value $y\in\mathcal{Y}$, we have
\begin{equation*}
\mathbb{E}\left[\sum_{t=1}^{\tau} \left(\ell(g_t({\mb y}_{\leq t-1}), y_t) - \ell(y,y_t)\right) -\eta \tau\right] \leq 0.
\end{equation*}
We now construct a random time $1\leq \tilde \tau \leq T$ such that $\mathbb{P}[\tilde \tau=t] = \frac{t\mathbb{P}[\tau=t]}{\mathbb{E} [\tau]}$ for all $1\leq t\leq T$. This indeed defines a proper random variable because $\sum_{t=1}^T \frac{t\mathbb{P}[\tau=t]}{\mathbb{E} [\tau]}= 1.$ Let $Supp(\tau):=\{1\leq t\leq T:\mathbb{P}[\tau=t]>0\}$ be the support of $\tau$. For any $t\in Supp(\tau)$, we denote by $g^t_{\leq t}$ the learning rule obtained by conditioning $g_{\leq \tau}$ on the event $\{\tau=t\}$, i.e., $g^t_{\leq t}=g_{\leq \tau}|\tau=t$. More precisely, recall that $\tau$ only uses the randomness of $g_\cdot$; it is not an online random time. Hence, a practical way to simulate $g^t_{\leq t}$ for all $t\in Supp(\tau)$ is to first draw an i.i.d. sequence of learning rules $(g_{i,\leq \tau_i})_{i\geq 1}$ and then, for each $t\in Supp(\tau)$, select the first copy whose randomness satisfies $\tau_i=t$. Specifically, we define the time $i_t = \min\{i: \tau_i=t\}$ for all $t\in Supp(\tau)$. With probability one, these times are finite for all $t\in Supp(\tau)$. We denote this event by $\mathcal{E}$. Then, letting $\bar y\in\mathcal{Y}$ be an arbitrary fixed value, for all $1\leq t\leq T$ we set
\begin{equation*}
g^t_{\leq t} = \begin{cases}
g_{i_t,\leq t} &\text{if }\mathcal{E} \text{ is met},\\
{\bar y}_{\leq t} &\text{otherwise},
\end{cases}
\quad t\in Supp(\tau)\quad \text{ and }\quad g^t_{\leq t} = {\bar y}_{\leq t},\quad t\notin Supp(\tau).
\end{equation*}
Here, ${\bar y}_{\leq t}$ denotes the learning rule which always outputs the value $\bar y$ at all steps $u\leq t$. Intuitively, $g^t_{\leq t}$ has the same distribution as $g_{\leq \tau}$ conditioned on the event $\{\tau=t\}$. We are now ready to define a new learning rule $\tilde g_{\leq \tilde \tau}$, by $\tilde g_{\leq \tilde \tau} := g^{\tilde \tau}_{\leq \tilde \tau}.$ Noting that for any $t\notin Supp(\tau)$ we have $\mathbb{P}[\tilde\tau=t]=0$, we can write
\begin{align*}
\mathbb{E}&\left[\frac{\sum_{t=1}^{\tilde\tau} \left(\ell(\tilde g_t({\mb y}_{\leq t-1}), y_t) - \ell(y,y_t)\right) -\eta \tilde\tau}{\tilde\tau}\right]\\
&= \sum_{t=1}^T \mathbb{P}[\tilde\tau=t] \frac{\mathbb{E}\left[\left.\sum_{u=1}^{t} \left(\ell(\tilde g_u({\mb y}_{\leq u-1}), y_u) - \ell(y,y_u)\right) - \eta t\right| \tilde \tau = t\right] }{t}\\
&= \sum_{t\in Supp(\tau)} \mathbb{P}[\tilde\tau=t] \frac{\mathbb{E}\left[\left.\sum_{u=1}^{t} \left(\ell(\tilde g_u({\mb y}_{\leq u-1}), y_u) - \ell(y,y_u)\right) - \eta t\right| \tilde \tau = t,\mathcal{E}\right] }{t}\\
&=\frac{1}{\mathbb{E}[\tau]}\sum_{t\in Supp(\tau)} \mathbb{P}[\tau=t] \mathbb{E}\left[\left.\sum_{u=1}^{t} \left(\ell(g_{i_t,u}({\mb y}_{\leq u-1}), y_u) - \ell(y,y_u) \right) -\eta t\right|\mathcal{E}\right]\\
&=\frac{1}{\mathbb{E}[\tau]}\sum_{t\in Supp(\tau)} \mathbb{P}[\tau=t] \mathbb{E}\left[\sum_{u=1}^{t} \left(\ell(g_{i_t,u}({\mb y}_{\leq u-1}), y_u) - \ell(y,y_u) \right) -\eta t\right]\\
&=\frac{1}{\mathbb{E}[\tau]}\sum_{t\in Supp(\tau)} \mathbb{P}[\tau=t] \mathbb{E}\left[\left.\sum_{u=1}^{t} \left(\ell(g_u({\mb y}_{\leq u-1}), y_u) - \ell(y,y_u) \right) -\eta t\right| \tau = t\right]\\
&=\frac{1}{\mathbb{E}[\tau]} \mathbb{E}\left[\sum_{t=1}^{\tau} \left(\ell(g_t({\mb y}_{\leq t-1}), y_t) - \ell(y,y_t)\right) -\eta \tau\right]\leq 0.
\end{align*}
Here we used the fact that $\mathbb{P}[\mathcal{E}]=1$ to introduce and remove the conditioning on $\mathcal{E}$.
This ends the proof that $\text{F-TiME}$ implies Property 2.\\
The other implication can be proved using the same technique. Suppose that $(\mathcal{Y},\ell)$ satisfies Property 2. Let $\eta>0$, and let $T\geq 1$ and $g_{\leq \tau}$, where $1\leq \tau\leq T$, be such that
\begin{equation*}
\mathbb{E}\left[\frac{\sum_{t=1}^{\tau} \left(\ell(g_t({\mb y}_{\leq t-1}), y_t) - \ell(y,y_t)\right) -\eta \tau}{\tau}\right] \leq 0.
\end{equation*}
We first construct a random time $1\leq \tilde\tau\leq T$ such that $\mathbb{P}[\tilde\tau=t] = \frac{\mathbb{P}[\tau=t]}{t\mathbb{E}\left[\frac{1}{\tau}\right]}$ for all $1\leq t\leq T$. We then construct a learning rule $\tilde g_{\leq \tilde \tau}$ similarly as before. Using the same arguments, we obtain
\begin{align*}
\mathbb{E}&\left[\sum_{t=1}^{\tilde\tau} \left(\ell(\tilde g_t({\mb y}_{\leq t-1}), y_t) - \ell(y,y_t)\right) -\eta \tilde\tau\right]\\
&= \sum_{t\in Supp(\tau)} \frac{\mathbb{P}[\tau=t]}{t\mathbb{E}\left[\frac{1}{\tau}\right]} \mathbb{E}\left[\left.\sum_{u=1}^{t} \left(\ell(\tilde g_u({\mb y}_{\leq u-1}), y_u) - \ell(y,y_u)\right) - \eta t\right| \tilde \tau = t\right]\\
&=\frac{1}{\mathbb{E}\left[\frac{1}{\tau}\right]} \sum_{t\in Supp(\tau)} \mathbb{P}[\tau=t] \frac{\mathbb{E}\left[\left.\sum_{u=1}^{t} \left(\ell(g_u({\mb y}_{\leq u-1}), y_u) - \ell(y,y_u)\right)-\eta t\right|\tau = t\right]}{t}\\
&=\frac{1}{\mathbb{E}\left[\frac{1}{\tau}\right]} \mathbb{E}\left[\frac{\sum_{t=1}^{\tau} \left(\ell(g_t({\mb y}_{\leq t-1}), y_t) - \ell(y,y_t)\right) -\eta \tau}{\tau}\right]\leq 0.
\end{align*}
This ends the proof of the lemma.
\end{proof}
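The change of random time in both directions of the proof above is a size-biasing argument: under $\mathbb{P}[\tilde\tau=t]=t\,\mathbb{P}[\tau=t]/\mathbb{E}[\tau]$, one has $\mathbb{E}[h(\tilde\tau)/\tilde\tau]=\mathbb{E}[h(\tau)]/\mathbb{E}[\tau]$ for any function $h$. The sketch below checks this identity exactly on a hypothetical three-point law for $\tau$, using exact rational arithmetic.

```python
from fractions import Fraction

# Hypothetical law of tau on {1, 2, 3}, for illustration only.
p = {1: Fraction(1, 2), 2: Fraction(1, 3), 3: Fraction(1, 6)}
mean_tau = sum(t * pt for t, pt in p.items())

# Size-biased random time: P[tilde_tau = t] = t * P[tau = t] / E[tau].
p_tilde = {t: t * pt / mean_tau for t, pt in p.items()}
assert sum(p_tilde.values()) == 1  # tilde_tau is a proper random time

def h(t):
    # Arbitrary test function standing in for the bracketed sum in the proof.
    return t * t + 1

# E[h(tilde_tau) / tilde_tau] coincides exactly with E[h(tau)] / E[tau].
lhs = sum(p_tilde[t] * Fraction(h(t), t) for t in p)
rhs = sum(pt * h(t) for t, pt in p.items()) / mean_tau
assert lhs == rhs
```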
We are now ready to prove our main result for the second alternative, when $\text{F-TiME}$ is not satisfied. Specifically, we show that in this case, universal learning with adversarial responses is not achievable for processes outside $\text{CS}$.
\ThmBadValueSpaces*
\begin{proof}
We first prove that adversarial regression for processes outside of $\text{CS}$ is not achievable. Precisely, we show that for any $\mathbb{X}\notin\text{CS}$, for any online learning rule $f_\cdot$, there exists a process $\mathbb{Y}$ on $\mathcal{Y}$, a measurable function $f^*:\mathcal{X}\to\mathcal{Y}$ and $\delta>0$ such that with non-zero probability $\mathcal{L}_{(\mathbb{X},\mathbb{Y})}(f_\cdot,f^*)>\delta$.
Because $\text{F-TiME}$ is not satisfied by $(\mathcal{Y},\ell)$, by Lemma \ref{lemma:equivalent_conditions}, Property 2 is not satisfied either. Hence, we can fix $\eta>0$ such that for any horizon $T\geq 1$ and any online learning rule $g_{\leq \tau}$ with $1\leq \tau\leq T$, there exist a sequence $\mb y:=(y_t)_{t=1}^T$ of values in $\mathcal{Y}$ and a value $y$ such that
\begin{equation*}
\mathbb{E}\left[\frac{1}{\tau}\sum_{t=1}^{\tau} \left(\ell(g_t({\mb y}_{\leq t-1}), y_t) - \ell(y,y_t)\right)\right] >\eta.
\end{equation*}
Now let $\mathbb{X}\notin\text{CS}$. The proof of Theorem \ref{thm:negative_optimistic} shows that there exist $0<\epsilon<1$, a sequence of disjoint measurable sets $\{B_p\}_{p\geq 1}$ and a sequence of times $(t_p)_{p\geq 0}$ with $t_0=0$ such that, with $\mu:=\max(1,\frac{8\bar\ell}{\epsilon\eta})$, for any $p\geq 1$, $t_p>\mu t_{p-1}$, and defining the events
\begin{align*}
\mathcal{E}_p = \left\{\mathbb{X}_{\leq t_{p-1}}\cap\left(\bigcup_{p'\geq p}B_{p'}\right)=\emptyset \right\}
\text{ and } \mathcal{F}_p:= \bigcup_{\mu t_{p-1}<t\leq t_p} \left\{\frac{1}{t}\sum_{u=1}^t \mathbbm{1}_{B_p}(X_u)\geq \frac{\epsilon}{4}\right\},
\end{align*}
we have $\mathbb{P}[\bigcap_{p\geq 1}(\mathcal{E}_p\cap\mathcal{F}_p)]\geq \frac{\epsilon}{4}$. We now fix a learning rule $f_\cdot$ and construct a ``bad'' process $\mathbb{Y}$ recursively. Fix an arbitrary value $\bar y\in\mathcal{Y}$. We start by defining the random variables $N_p(t) = \sum_{u=t_{p-1}+1}^{t}\mathbbm{1}_{B_p}(X_u)$ for any $p\geq 1$. We now construct (deterministic) values $y_p$ and sequences $(y_p^u)_{u=1}^{t_p}$ of values in $\mathcal{Y}$, for all $p\geq 1$. Suppose we have already constructed the values $y_q$ as well as the sequences $(y_q^u)_{u=1}^{t_q}$ for all $q<p$; we now construct $y_p$ and $(y_p^u)_{u=1}^{t_p}$. Assuming that the event $\mathcal{E}_p\cap \mathcal{F}_p$ is met, there exists $\mu t_{p-1}<t\leq t_p$ such that
\begin{equation*}
N_p(t) = \sum_{u=t_{p-1}+1}^t\mathbbm{1}_{B_p}(X_u) = \sum_{u=1}^t\mathbbm{1}_{B_p}(X_u) \geq \frac{\epsilon}{4}t,
\end{equation*}
where in the first equality we used the fact that on $\mathcal{E}_p$, the process $\mathbb{X}_{\leq t_{p-1}}$ does not visit $B_p$. In the rest of the construction, we will denote
\begin{equation*}
T_p = \begin{cases}
\min \{\mu t_{p-1}<t\leq t_p:N_p(t)\geq \frac{\epsilon}{4} t\} &\text{if } \mathcal{E}_p\cap\mathcal{F}_p \text{ is met}.\\
t_p &\text{otherwise}.
\end{cases}
\end{equation*}
Now consider the process $\mathbb{Y}_{t\leq t_{p-1}}(\mathbb{X})$ defined as follows. For any $1\leq q<p$ we set
\begin{equation*}
Y_t(\mathbb{X}) = \begin{cases}
y_q^{N_q(t)} &\text{if } t\leq T_q \text{ and } X_t\in B_q, \\
y_q &\text{if } t> T_q \text{ and } X_t\in B_q, \\
y_{q'} &\text{if }X_t\in B_{q'},\;q'<q,\\
\bar y &\text{otherwise},
\end{cases}\quad \quad t_{q-1}<t\leq t_q.
\end{equation*}
Similarly, for $M\geq 1$ and given any sequence $\{\tilde y_i\}_{i = 1}^{M}$, we define the process $\mathbb{Y}_{t_{p-1}<u\leq t_p}\left(\mathbb{X},\{\tilde y_i\}_{i=1}^{M}\right)$ by
\begin{equation*}
Y_u\left(\mathbb{X},\{\tilde y_i\}_{i=1}^{M}\right) = \begin{cases}
\tilde y_{\min(N_p(u),M)} &\text{if }X_u\in B_p, \\
y_q &\text{if } X_u\in B_{q},\;q<p,\\
\bar y &\text{otherwise}.
\end{cases}
\end{equation*}
We now construct a learning rule $g^p_\cdot$. First, we define the event $\mathcal{B}:=\bigcap_{p\geq 1}(\mathcal{E}_p\cap\mathcal{F}_p)$. We will denote by $\tilde \mathbb{X}=\mathbb{X}|\mathcal{B}$ a sampling of the process $\mathbb{X}$ on the event $\mathcal{B}$, which has probability at least $\frac{\epsilon}{4}$: for instance, we can draw i.i.d. copies of $\mathbb{X}$ and select the first one which falls into $\mathcal{B}$. We are now ready to define a learning rule $(g^p_u)_{u\leq \tau}$, where $\tau$ is a random time. To do so, we first draw a sample $\tilde \mathbb{X}$, which is then fixed for the learning rule $g^p_\cdot$. We define the stopping time as $\tau=N_p(T_p)$, where $N_p$ and $T_p$ are computed on $\tilde\mathbb{X}$. Finally, for all $1\leq u\leq \tau$ and any sequence of values $\mb{\tilde y}_{\leq u-1}$, we set
\begin{equation*}
g^p_u(\mb {\tilde y}_{\leq u-1}) =
f_{T_p(u)}\left(\tilde \mathbb{X}_{\leq T_p(u)-1},\left\{\mathbb{Y}_{\leq t_{p-1}}( \tilde \mathbb{X}),\mathbb{Y}_{t_{p-1}<t\leq T_p(u)-1} \left(\tilde \mathbb{X},\{\tilde y_i \}_{i=1}^{u-1}\right) \right\} , \tilde X_{T_p(u)}\right),
\end{equation*}
where we used the notation $T_p(u):=\min\{t_{p-1}<t'\leq t_p:N_p(t')=u\}$ for the time of the $u$-th visit of $B_p$, which exists because $u\leq \tau=N_p(T_p)\leq N_p(t_p)$ since the event $\mathcal{B}$ is satisfied by $\tilde\mathbb{X}$. Note that the prediction of the rule $g_\cdot^p$ is random because of the dependence on $\tilde \mathbb{X}$. Also, observe that the random time $\tau$ is bounded by $1\leq \tau\leq T_p\leq t_p$. Therefore, by hypothesis on the value space $(\mathcal{Y},\ell)$, there exists a sequence $\{y^u_p\}_{u=1}^{t_p}$ and a value $y_p\in\mathcal{Y}$ such that
\begin{equation*}
\mathbb{E}\left[\frac{1}{\tau}\sum_{u=1}^{\tau} \left(\ell(g_u^p({\mb y_p}^{\leq u-1}), y_p^u) - \ell(y_p,y_p^u)\right)\right] \geq \eta.
\end{equation*}
This ends the recursive construction of the values $y_p$ and the sequences $(y^u_p)_{u=1}^{t_p}$ for all $p\geq 1$. We are now ready to define the process $\mathbb{Y}(\mathbb{X})$, using a similar construction as before. For any $p\geq 1$ we define
\begin{equation*}
Y_t(\mathbb{X}) = \begin{cases}
y_p^{N_p(t)} &\text{if } t\leq T_p \text{ and } X_t\in B_p, \\
y_p &\text{if } t> T_p \text{ and } X_t\in B_p, \\
y_q &\text{if } X_t\in B_q,\;q<p,\\
\bar y &\text{otherwise},
\end{cases}\quad \quad t_{p-1}<t\leq t_p.
\end{equation*}
We also define a function $f^*:\mathcal{X}\to\mathcal{Y}$ by
\begin{equation*}
f^*(x)=\begin{cases}
y_p &\text{if }x\in B_p,\\
\bar y &\text{otherwise}.
\end{cases}
\end{equation*}
This function is simple, hence measurable. From now on, we will suppose that the event $\mathcal{B}$ is met. For simplicity, we will denote by $\hat Y_t:=f_t(\mathbb{X}_{\leq t-1},\mathbb{Y}_{\leq t-1},X_t)$ the prediction of the learning rule at time $t$. For any $p\geq 1$, because $\mathcal{E}_p\cap\mathcal{F}_p$ is met, for all $1\leq u\leq N_p(T_p)$, we have $t_{p-1}<T_p(u)\leq T_p$, and $X_{T_p(u)}\in B_p$. Hence, by construction, we have $Y_{T_p(u)}=y^u_p$ and we can write
\begin{equation*}
\sum_{t=1}^{T_p} \ell(\hat Y_t,Y_t) \geq \sum_{t=t_{p-1}+1}^{T_p} \ell(\hat Y_t,Y_t) \geq \sum_{u=1}^{N_p(T_p)} \ell(\hat Y_{T_p(u)},Y_{T_p(u)}) = \sum_{u=1}^{\tau} \ell( f_{T_p(u)}\left(\mathbb{X}_{\leq T_p(u)-1},\mathbb{Y}_{\leq T_p(u)-1}, X_{T_p(u)}\right),y_p^u).
\end{equation*}
Now note that because the construction was similar to the construction of $g_\cdot^p$, we have $\mathbb{Y}_{\leq T_p(u)-1} = \left\{ \mathbb{Y}_{\leq t_{p-1}}(\mathbb{X}), \mathbb{Y}_{t_{p-1}<t\leq T_p(u)-1}\left(\mathbb{X},\{y^i_p\}_{i=1}^{u-1} \right) \right\}$, i.e., $\hat Y_{T_p(u)}$ coincides with the prediction $g_u^p(\{y^i_p\}_{i=1}^{u-1})$ provided that $g_u^p$ precisely used the realization $\mathbb{X}$. Hence, conditioned on $\mathcal{B}$, for all $u\leq \tau$, $\hat Y_{T_p(u)}$ has the same distribution as $g_u^p(\mb{y_p}^{\leq u-1})$. Therefore we obtain
\begin{align*}
\mathbb{E}\left[\left. \frac{1}{\tau}\sum_{t=1}^{T_p} \ell(\hat Y_t,Y_t) - \frac{1}{\tau}\sum_{u=1}^{\tau}\ell(y_p,y_p^u) \right| \mathcal{B}\right]
&\geq \mathbb{E}\left[\left. \frac{1}{\tau}\sum_{u=1}^{\tau} \left(\ell(\hat Y_{T_p(u)},y_p^u) -\ell(y_p,y_p^u) \right) \right| \mathcal{B}\right]\\
&= \mathbb{E}\left[ \frac{1}{\tau}\sum_{u=1}^{\tau} \left(\ell(g_u^p(\mb{y_p}^{\leq u-1}),y_p^u) -\ell(y_p,y_p^u) \right) \right]\\
&\geq \eta.
\end{align*}
We now turn to the loss obtained by the simple function $f^*$. By construction, assuming that the event $\mathcal{B}$ is met, we have
\begin{equation*}
\sum_{t=1}^{T_p} \ell(f^*(X_t),Y_t) \leq \bar\ell t_{p-1} + \sum_{u=1}^{N_p(T_p)}\ell(f^*(X_{T_p(u)}),y_p^u) =\bar\ell t_{p-1} +\sum_{u=1}^{\tau}\ell(y_p,y_p^u).
\end{equation*}
Recalling that $T_p>\mu t_{p-1}\geq \frac{8\bar\ell}{\epsilon\eta}t_{p-1}$ and noting that $\tau=N_p(T_p)\geq \frac{\epsilon}{4}T_p$, we obtain
\begin{align*}
\mathbb{E}\left[\left. \sup_{t_{p-1}<T\leq t_p}\frac{1}{T}\sum_{t=1}^T (\ell(\hat Y_t,Y_t) - \ell(f^*(X_t),Y_t)) \right|\mathcal{B}\right] &\geq \mathbb{E}\left[\left. \frac{\tau}{T_p}\frac{1}{\tau}\left(\sum_{t=1}^{T_p} \ell(\hat Y_t,Y_t) -\sum_{u=1}^{\tau}\ell(y_p,y_p^u) \right)- \bar\ell \frac{t_{p-1}}{T_p} \right|\mathcal{B}\right]\\
&\geq \frac{\epsilon}{4} \mathbb{E}\left[\left. \frac{1}{\tau}\sum_{t=1}^{T_p} \ell(\hat Y_t,Y_t) - \frac{1}{\tau}\sum_{u=1}^{\tau}\ell(y_p,y_p^u) \right| \mathcal{B}\right] - \frac{\epsilon\eta}{8}\\
&\geq \frac{\epsilon\eta}{8}.
\end{align*}
Because this holds for any $p\geq 1$, Fatou's lemma yields
\begin{equation*}
\mathbb{E}\left[\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T (\ell(\hat Y_t,Y_t) - \ell(f^*(X_t),Y_t))\right] \geq \mathbb{E}\left[\left. \limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T (\ell(\hat Y_t,Y_t) - \ell(f^*(X_t),Y_t)) \right|\mathcal{B}\right] \mathbb{P}[\mathcal{B}]\geq \frac{\epsilon^2\eta}{32}.
\end{equation*}
Hence, we do not have almost surely $\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T (\ell(\hat Y_t,Y_t) - \ell(f^*(X_t),Y_t))\leq 0$. This shows that $\mathbb{X}\notin\text{SOLAR}$, which in turn implies $\text{SOLAR}\subset\text{CS}$. The proof that $\text{CS}\subset\text{SOLAR}$ and the construction of an optimistically universal learning rule for adversarial regression are deferred to Section \ref{sec:unbounded_loss_moment_constraint}, where we give a stronger result which also holds for unbounded losses. Note that generalizing Theorem \ref{thm:hanneke_2022} to adversarial responses already shows that $\text{CS}\subset\text{SOLAR}$ and provides an optimistically universal learning rule when the loss $\ell$ is a metric ($\alpha=1$).
\end{proof}
This closes our study of universal learning with adversarial responses for bounded value spaces. Interestingly, we showed that there always exists an optimistically universal learning rule for adversarial regression; however, this rule depends heavily on the value space. Namely, if $(\mathcal{Y},\ell)$ satisfies $\text{F-TiME}$, we can learn all ${\text{SMV}}=\text{SOUL}$ processes. The learning rule proposed in Theorem \ref{thm:good_value_spaces} is \emph{implicit} in general. Indeed, to construct it one first needs to find an online learning rule for mean estimation with finite horizon, as specified by property $\text{F-TiME}$, which is then used as a subroutine in the optimistically universal learning rule for adversarial regression. We showed, however, that for totally-bounded value spaces this learning rule can be made \emph{explicit} using $\epsilon$-nets for decreasing values of $\epsilon>0$.
If the value space does not satisfy $\text{F-TiME}$, we showed that we can only learn $\text{CS}\subsetneq\text{SOUL}$ processes, and we propose a learning rule in Section \ref{sec:unbounded_loss_moment_constraint} which is optimistically universal---see Theorem \ref{thm:CS_regression_unbounded}. This learning rule is inspired by the algorithm proposed in \cite{hanneke2022bayes}, which is optimistically universal for metric losses ($\alpha=1$). It is worth noting that this learning rule uses very different techniques from our first proposed algorithm for value spaces satisfying $\text{F-TiME}$. Specifically, under processes $\mathbb{X}\in\text{CS}$, \cite{hanneke2021learning} showed that there exists a countable set $\mathcal{F}$ of measurable functions $f:\mathcal{X}\to\mathcal{Y}$ which is ``dense'' within the space of all measurable functions along the realizations $f(X_t)$. We refer to Section \ref{sec:unbounded_loss_moment_constraint} for a precise description of this density notion. Intuitively, it asks that for any measurable function $f^*:\mathcal{X}\to\mathcal{Y}$, the long-run empirical average losses of $f^*$ compared to some function $f\in\mathcal{F}$ along the instances $X_1,X_2,\ldots$ be arbitrarily small. Hence, under the process $\mathbb{X}$, we can approximate $f^*$ by functions in $\mathcal{F}$ with arbitrary long-run average precision. Such a property is impossible to obtain for any process $\mathbb{X}\in{\text{SMV}}\setminus \text{CS}$. Indeed, \cite{hanneke2021learning} showed that having such a ``dense'' countable set of measurable functions under a given process $\mathbb{X}$ is a sufficient condition for universal inductive or self-adaptive learning in the noiseless setting. However, the same work shows that the set of processes learnable for inductive or self-adaptive noiseless learning is precisely $\text{CS}$. As a result, no process $\mathbb{X}\notin \text{CS}$ admits a ``dense'' countable set of measurable functions.
This implies that in order to learn all processes in ${\text{SMV}}$ for value spaces satisfying $\text{F-TiME}$, a \emph{fundamentally} different learning rule from those proposed by \cite{hanneke2021learning} and \cite{hanneke2022bayes} was needed. Further, the alternative $\text{SOLAR}=\text{CS}$ or $\text{SOLAR}={\text{SMV}}(=\text{SOUL})$ shows that there is an inherent gap between noiseless and noisy regression for some non-totally-bounded value spaces.
\section{Adversarial universal learning for unbounded losses}
\label{sec:mean_estimation}
We now turn to the case of unbounded losses. We say that the value space $(\mathcal{Y},\ell)$ is unbounded when $\sup_{y_1,y_2\in\mathcal{Y}}\ell(y_1,y_2)=\infty$. In this case, and for more general near-metrics, \cite{blanchard2022optimistic} showed that $\text{SOUL}=\text{FS}$. In other words, for unbounded losses, the processes learnable in the noiseless setting necessarily visit a finite number of distinct instance points of $\mathcal{X}$ almost surely. This shows that universal learning on unbounded value spaces is very restrictive, and in particular $\text{SOLAR}\subset\text{FS}$. The main question is whether we can recover the complete set of processes $\text{FS}$ for adversarial regression. We will show that there is again an alternative: either $\text{SOLAR}=\text{FS}$ or $\text{SOLAR}=\emptyset$.
\subsection{Adversarial regression for metric losses}
\label{subsec:metric_mean_estimation}
In this section, we focus on metric losses $\ell$, i.e., $\alpha=1$. We will show that in this case we have in fact the equality $\text{SOLAR}=\text{FS}$, and that we can provide an optimistically universal learning rule. To do so, we first prove a general result on mean estimation. By \emph{mean estimation} we refer to the fundamental estimation problem in which one observes values $\mathbb{Y}$ from a general separable metric value space and aims to sequentially predict a value $\hat Y_t$ so as to minimize the long-run average loss. This is equivalent to regression where the input space is $\mathcal{X}=\{0\}$. Note that in the specific case of i.i.d. processes $\mathbb{Y}$, mean estimation is exactly the problem of Fr\'echet mean estimation for distributions on $\mathcal{Y}$. We show that even for adversarial processes $\mathbb{Y}$, we can achieve sublinear regret compared to the best single value prediction, even for unbounded value spaces $(\mathcal{Y},\ell)$.
\ThmMeanEstimation*
\begin{proof}
Consider the following algorithm. Fix any sequence $(y^i)_{i\geq 0}$ of values dense in $\mathcal{Y}$. Define $I_t=\{i:\; i\leq \ln t,\; \ell(y^0,y^i)\leq \ln t\}$, for any $i\geq 0$ denote $t_i=\lceil\max(e^i,e^{\ell(y^0,y^i)})\rceil$, and set $\eta_t=\frac{1}{4\sqrt t}$. For any $i\in I_t$ we set $L_{t-1,i}:= \sum_{s=t_i}^{t-1}\ell(y^i,Y_s)$ and construct weights $w_{t,i}$ for $t\geq 1$ and $i\in I_t$ recursively in the following way.
Note that $I_1=\{0\}$; accordingly, we set $w_{0,0}=1$. Now let $t\geq 2$ and suppose that the weights $w_{s-1,i}$ have been constructed for all $1\leq s\leq t-1$. We define $\hat \ell_s:=\frac{\sum_{j\in I_s} w_{s-1,j}\ell(y^j,Y_s)}{\sum_{j\in I_s} w_{s-1,j}}$ and for any $i\in I_t$ we write $\hat L_{t-1,i} := \sum_{s=t_i}^{t-1} \hat \ell_s$. In particular, if $t_i=t$ we have $\hat L_{t-1,i}=L_{t-1,i}=0$. The weights at time $t$ are constructed as $w_{t-1,i}:= e^{\eta_t(\hat L_{t-1,i}-L_{t-1,i})}$. Finally, writing $I_t=\{i_1,\ldots,i_{|I_t|}\}$, we construct a randomized prediction $\hat Y_t$, using randomness independent of the past history, such that for any $i\in I_t$, we have
\begin{equation*}
\mathbb{P}(\hat Y_t = y^i) = \frac{w_{t-1,i}}{\sum_{j\in I_t}w_{t-1,j}}.
\end{equation*}
Note that the random prediction $\hat Y_t$ only uses the values $Y_1,\ldots,Y_{t-1}$ hence defines a proper online learning rule. We will now show that there exists $t_1\geq 1$ such that for any $t\geq t_1$, with high probability, for all $i\in I_t$,
\begin{equation*}
\sum_{s=t_i}^t\ell(\hat Y_s,Y_s)\leq L_{t,i} +3\ln^2 t\sqrt t.
\end{equation*}
For any $t\geq 0$, note that we have $\hat \ell_t=\mathbb{E}[\ell(\hat Y_t,Y_t)\mid \mathbb{Y}_{\leq t}]$. We define the instantaneous regret $r_{t,i} = \hat\ell_t - \ell(y^i,Y_t)$. We now define $w'_{t-1,i}:=e^{\eta_{t-1}(\hat L_{t-1,i}-L_{t-1,i})}$ and set $W_{t-1} = \sum_{i\in I_t}w_{t-1,i}$ and $W'_{t-1} = \sum_{i\in I_{t-1}} w'_{t-1,i}$. We also denote by $k_t\in I_t$ the index such that $\hat L_{t,k_t}- L_{t,k_t} = \max_{i\in I_t} (\hat L_{t,i} - L_{t,i})$, i.e., the index inducing the most regret. We first note that for any $i,j\in I_t$, we have $\ell(y^i,Y_t)-\ell(y^j,Y_t)\leq \ell(y^i,y^0)+\ell(y^0,y^j)\leq 2\ln t$. Therefore, we also have $|r_{t,i}|\leq 2\ln t$. Hence, we can apply Hoeffding's lemma to obtain
\begin{equation*}
\frac{1}{\eta_t}\ln \frac{W'_t}{W_{t-1}} = \frac{1}{\eta_t} \ln \sum_{i\in I_t} \frac{w_{t-1,i}}{W_{t-1}}e^{\eta_t r_{t,i}} \leq \frac{1}{\eta_t}\left( \eta_t\sum_{i\in I_t} r_{t,i} \frac{w_{t-1,i}}{W_{t-1}} + \frac{\eta_t^2 (4\ln t)^2}{8}\right) = 2\eta_t \ln^2 t.
\end{equation*}
The same computations as in the proof of Lemma \ref{lemma:concatenation_predictors} then show that
\begin{multline}
\label{eq:to_sum_mean_estimation}
\frac{1}{\eta_t}\ln \frac{w_{t-1,k_{t-1}}}{W_{t-1}}- \frac{1}{\eta_{t+1}}\ln \frac{w_{t,k_t}}{W_t} \leq 2\left(\frac{1}{\eta_{t+1}}-\frac{1}{\eta_t}\right) \ln (1+\ln (t+1)) + \frac{|I_{t+1}|-|I_t|}{\eta_t \sum_{i\in I_t} w_{t,i}} \\
+ (\hat L_{t-1,k_{t-1}}- L_{t-1,k_{t-1}}) - (\hat L_{t,k_t}- L_{t,k_t}) + 2\eta_t \ln^2 t.
\end{multline}
First suppose that we have $\sum_{i\in I_t}w_{t,i}\leq 1$. Similarly to Lemma \ref{lemma:concatenation_predictors}, we obtain $\hat L_{t,k_t}-L_{t,k_t} \leq 0$. Otherwise, let $t'=\min \{1\leq s\leq t:\forall s\leq s'\leq t,\sum_{i\in I_{s'}} w_{s',i}\geq 1\}$. We sum equation~\eqref{eq:to_sum_mean_estimation} for $s=t',\ldots, t$ which gives
\begin{equation*}
\frac{1}{\eta_1}\ln \frac{w_{t'-1,k_{t'-1}}}{W_{t'-1}}- \frac{1}{\eta_{t+1}}\ln \frac{w_{t,k_t}}{W_t} \leq \frac{2}{\eta_{t+1}} \ln (1+\ln (t+1))+ \frac{|I_{t+1}|}{\eta_t} + (\hat L_{t'-1,k_{t'-1}}- L_{t'-1,k_{t'-1}}) - (\hat L_{t,k_t}- L_{t,k_t}) + 2\sum_{s=t'}^t\eta_s \ln^2 s.
\end{equation*}
Similarly as in Lemma \ref{lemma:concatenation_predictors}, we have $\frac{w_{t,k_t}}{W_t}\leq 1$, $\frac{w_{t'-1,k_{t'-1}}}{W_{t'-1}}\geq \frac{1}{1+\ln t}$ and $\hat L_{t'-1,k_{t'-1}}- L_{t'-1,k_{t'-1}}\leq 0$. Finally, using the fact that $\sum_{s=1}^t \frac{1}{\sqrt s}\leq 2\sqrt t$, we obtain
\begin{equation*}
\hat L_{t,k_t}- L_{t,k_t} \leq \ln(1+\ln (t+1))(4+8\sqrt{t+1}) +4(1+\ln (t+1))\sqrt t + \ln^2 t\sqrt t \leq 2 \ln^2 t \sqrt t,
\end{equation*}
for all $t\geq t_0$ where $t_0$ is a fixed constant, and as a result, for all $t\geq t_0$ and $i\in I_t$, we have $\hat L_{t,i} -L_{t,i} \leq 2\ln^2 t \sqrt t.$
Now note that $|\ell(\hat Y_t,Y_t)-\mathbb{E}[\ell(\hat Y_t,Y_t)\mid\mathbb{Y}_{\leq t}]|\leq 2\ln t$ because for all $i\in I_t$, we have $\ell(y^i,y^0)\leq \ln t$. Hence, we can apply the Hoeffding-Azuma inequality to the variables $\ell(\hat Y_t,Y_t)-\hat \ell_t$, which form a sequence of martingale differences; this yields
\begin{equation*}
\mathbb{P}\left[\sum_{s=t_i}^t \ell(\hat Y_s,Y_s)>\hat L_{t,i} + u\right] \leq e^{ -\frac{u^2}{8t\ln^2 t}}.
\end{equation*}
Hence, for $t\geq t_0$ and $i\in I_t$, with probability $1-\delta$, we have
\begin{equation*}
\sum_{s=t_i}^t \ell(\hat Y_s,Y_s)\leq \hat L_{t,i} +\ln t \sqrt{2t\ln\frac{1}{\delta}} \leq L_{t,i} + 2\ln^2 t\sqrt t + \ln t \sqrt{2t\ln\frac{1}{\delta}}.
\end{equation*}
Therefore, since $|I_t|\leq 1+\ln t$, by a union bound, with probability $1-\frac{1}{t^2}$ we obtain that for all $i\in I_t$,
\begin{equation*}
\sum_{s=t_i}^t \ell(\hat Y_s,Y_s) \leq L_{t,i} + 2\ln^2 t\sqrt t + \ln t \sqrt{2t\ln(1+\ln t)}+ \ln t\sqrt{4t\ln t}\leq 3\ln^2 t \sqrt t
\end{equation*}
for all $t\geq t_1$ where $t_1\geq t_0$ is a fixed constant. Now because $\sum_{t\geq 1}\frac{1}{t^2}<\infty$, the Borel-Cantelli lemma implies that almost surely, there exists $\hat t\geq 0$ such that
\begin{equation*}
\forall t\geq \hat t, \forall i\in I_t,\quad \sum_{s=t_i}^t \ell(\hat Y_s,Y_s) \leq L_{t,i} +3\ln^2 t\sqrt t.
\end{equation*}
We denote by $\mathcal{A}$ this event. Now let $y\in\mathcal{Y}$, $\epsilon>0$ and consider $i\geq 0$ such that $\ell(y^i,y)<\epsilon$. On the event $\mathcal{A}$, we have for all $t\geq \max(\hat t,t_i)$,
\begin{equation*}
\sum_{s=t_i}^t \ell(\hat Y_s,Y_s) \leq \sum_{s=t_i}^t \ell(y^i,Y_s) + 3\ln^2 t\sqrt t \leq \sum_{s=t_i}^t \ell(y,Y_s) + \epsilon t + 3\ln^2 t\sqrt t.
\end{equation*}
Therefore, $\limsup_{t\to\infty} \frac{1}{t}\sum_{s=1}^t \left( \ell(\hat Y_s,Y_s)-\ell(y,Y_s) \right) \leq \epsilon$ on $\mathcal{A}$. Because this holds for any $\epsilon>0$ we finally obtain $ \limsup_{t\to\infty} \frac{1}{t}\sum_{s=1}^t \left( \ell(\hat Y_s,Y_s)-\ell(y,Y_s) \right) \leq 0$ on the event $\mathcal{A}$ of probability one, which holds for all $y\in \mathcal{Y}$ simultaneously. This ends the proof of the theorem.
\end{proof}
Note that, unlike all the results we showed in the previous sections for universal regression, on the same event of probability one, the proposed learning rule achieves sublinear regret compared to any fixed value prediction. This was not the case for universal regression where, instead, for every fixed measurable function $f:\mathcal{X}\to\mathcal{Y}$, with probability one our learning rules achieve sublinear regret. This stems essentially from the fact that there exists a dense countable set of values in $\mathcal{Y}$, while in general there does not exist a countable set of measurable functions which is dense within all measurable functions in infinity norm. As a consequence of Theorem \ref{thm:mean_estimation}, we obtain the following result for universal regression with adversarial responses.
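To make the weighted-majority construction in the proof of Theorem \ref{thm:mean_estimation} concrete, here is a minimal Python sketch for real values with the metric loss $\ell=|\cdot|$. The dense sequence, the activation rule defining $I_t$, and the learning rates $\eta_t=1/(4\sqrt t)$ follow the proof; the sampling implementation details are our own and purely illustrative.

```python
import math
import random

def mean_estimation_rule(dense_values, ys, rng=random.Random(0)):
    """Sequential mean estimation by exponential weights over a growing
    expert set: expert i is the constant prediction y^i, activated once
    i <= ln t and loss(y^0, y^i) <= ln t, with rate eta_t = 1/(4 sqrt t)."""
    loss = lambda a, b: abs(a - b)  # metric loss on the reals (alpha = 1)
    preds = []
    L = {}     # L[i]: cumulative true loss of expert i since its activation
    Lhat = {}  # Lhat[i]: cumulative mixture loss hat-ell_s since activation
    active = []
    for t in range(1, len(ys) + 1):
        # Activate newly admissible experts (their counters start at 0).
        for i, yi in enumerate(dense_values):
            if i not in L and i <= math.log(t) and loss(dense_values[0], yi) <= math.log(t):
                L[i], Lhat[i] = 0.0, 0.0
                active.append(i)
        eta = 1.0 / (4.0 * math.sqrt(t))
        w = {i: math.exp(eta * (Lhat[i] - L[i])) for i in active}
        W = sum(w.values())
        # Randomized prediction: P(hat Y_t = y^i) = w_i / W.
        r, acc, choice = rng.random() * W, 0.0, active[-1]
        for i in active:
            acc += w[i]
            if r <= acc:
                choice = i
                break
        preds.append(dense_values[choice])
        # Observe Y_t and update both loss counters.
        y = ys[t - 1]
        ell_hat = sum(w[i] * loss(dense_values[i], y) for i in active) / W
        for i in active:
            L[i] += loss(dense_values[i], y)
            Lhat[i] += ell_hat
    return preds
```

On a constant sequence $Y_t=1$, the weights concentrate on the expert predicting $1$ once it is activated, so the average loss falls far below the loss of the initial expert $y^0=0$.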
\begin{corollary}
\label{cor:universal_regression_unbounded}
Suppose that $(\mathcal{Y},\ell)$ is an unbounded metric space. Then, $\text{SOLAR} = \text{FS}(=\text{SOUL})$ and there exists an optimistically universal learning rule for adversarial regression, i.e., which achieves universal consistency with adversarial responses under any process $\mathbb{X}\in\text{FS}$.
\end{corollary}
\begin{proof}
We denote by $g_\cdot$ the learning rule on values $\mathcal{Y}$ for mean estimation described in Theorem \ref{thm:mean_estimation}. Because processes $\mathbb{X}\in\text{FS}$ visit only a finite number of distinct instance points in $\mathcal{X}$ almost surely, we can simply perform the learning rule $g_\cdot$ on each sub-process $\mathbb{Y}_{\{t:X_t=x\}}$ separately for any $x\in\mathcal{X}$. Note that the learning rule $g_\cdot$ does not explicitly re-use past randomness for its prediction. Hence, we will not need to specify that the randomness used by the different copies of the learning rule---one for each $x$ visited by $\mathbb{X}$---is independent. Let us formally describe our learning rule. Consider a sequence $\mb x_{\leq t-1}$ of instances in $\mathcal{X}$ and $\mb y_{\leq t-1}$ of values in $\mathcal{Y}$. We denote by $S_{t-1}=\{x:\mb x_{\leq t-1}\cap\{x\}\neq\emptyset\}$ the support of $\mb x_{\leq t-1}$. Further, for any $x\in S_{t-1}$, we denote by $N_{t-1}(x)=\sum_{u\leq t-1}\mathbbm{1}_{x_u=x}$ the number of times that the instance $x$ was visited by the sequence $\mb x_{\leq t-1}$. Last, for any $x\in S_{t-1}$, we denote by $\mb y^x_{\leq N_{t-1}(x)}$ the values $\mb y_{\{u\leq t-1: x_u=x\}}$ obtained when the instance was precisely $x$, ordered by increasing time $u$. We are now ready to define our learning rule $f_t$ at time $t$. Given a new instance point $x_t$, we set
\begin{equation*}
f_t(\mb x_{\leq t-1},\mb y_{\leq t-1},x_t) = \begin{cases}
g_{N_{t-1}(x_t)+1}(\mb y^{x_t}_{\leq N_{t-1}(x_t)}) &\text{if }x_t\in S_{t-1},\\
g_1(\emptyset) &\text{otherwise}.
\end{cases}
\end{equation*}
Recall that for any $u\geq 1$, $g_u$ uses some randomness. The only subtlety is that at each iteration $t\geq 1$ of the learning rule $f_\cdot$, the randomness used by the subroutine call to $g_\cdot$ should be independent from the past history. We now show that $f_\cdot$ is universally consistent for adversarial regression under all processes $\mathbb{X}\in \text{FS}$.
Let $\mathbb{X}\in \text{FS}$. For simplicity, we will denote by $\hat Y_t$ the prediction of the learning rule $f_\cdot$ at time $t$. We denote $S=\{x:\{x\}\cap\mathbb{X}\neq\emptyset\}$ the random support of $\mathbb{X}$. By hypothesis, we have $|S|<\infty$ with probability one. Denote by $\mathcal{E}$ this event. We now consider a specific realization $\mb x$ of $\mathbb{X}$ falling in the event $\mathcal{E}$. Then, $S$ is a fixed set. We also denote $\tilde S:=\{x\in S:\lim_{t\to\infty}N_t(x) = \infty\}$ the instances which are visited an infinite number of times by the sequence $\mb x$. Now, we can write for any function $f :\mathcal{X}\to\mathcal{Y}$,
\begin{align*}
\sum_{t=1}^T \left(\ell(\hat Y_t,Y_t) -\ell(f(x_t),Y_t)\right)&= \sum_{x\in S} \sum_{u=1}^{N_T(x)} \left(\ell(g_u(\mathbb{Y}^x_{\leq u-1}),Y_u^x) - \ell(f(x),Y_u^x)\right)\\
&\leq \sum_{x\in S\setminus \tilde S} \sum_{u=1}^{N_T(x)} \ell(g_u(\mathbb{Y}^x_{\leq u-1}),Y_u^x) + \sum_{x\in \tilde S} \sum_{u=1}^{N_T(x)}\left(\ell(g_u(\mathbb{Y}^x_{\leq u-1}),Y_u^x) - \ell(f(x),Y_u^x)\right).
\end{align*}
Now, because each $x\in S\setminus\tilde S$ is visited only finitely often, the first sum above contains finitely many terms, hence vanishes when divided by $T$. Moreover, because the randomness in $g_\cdot$ was taken independently from the past at each iteration, we can apply Theorem \ref{thm:mean_estimation} directly: for all $x\in\tilde S$, with probability one, for all $y^x\in \mathcal{Y}$, we have
\begin{equation*}
\limsup_{t'\to\infty}\frac{1}{t'}\sum_{u=1}^{t'}\left(\ell(g_u(\mathbb{Y}^x_{\leq u-1}),Y_u^x) - \ell(y^x,Y_u)\right) \leq 0.
\end{equation*}
We denote by $\mathcal{E}_x$ this event. Then, on the event $\bigcap_{x\in\tilde S}\mathcal{E}_x$ of probability one, we have for any measurable function $f:\mathcal{X}\to\mathcal{Y}$,
\begin{align*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T\left(\ell(\hat Y_t,Y_t) -\ell(f(x_t),Y_t)\right) &\leq \sum_{x\in\tilde S} \limsup_{T\to\infty} \frac{1}{T} \sum_{u=1}^{N_T(x)}\left(\ell(g_u(\mathbb{Y}^x_{\leq u-1}),Y_u^x) - \ell(f(x),Y_u^x)\right)\\
&\leq \sum_{x\in\tilde S} \limsup_{T\to\infty} \frac{1}{N_T(x)} \sum_{u=1}^{N_T(x)}\left(\ell(g_u(\mathbb{Y}^x_{\leq u-1}),Y_u^x) - \ell(f(x),Y_u^x)\right)\\
&\leq 0.
\end{align*}
As a result, averaging over realizations of $\mathbb{X}$, we obtain that with probability one, $\mathcal{L}_{(\mathbb{X},\mathbb{Y})}(f_\cdot,f)\leq 0$ for all measurable functions $f:\mathcal{X}\to\mathcal{Y}$. Note that this is stronger than the notion of universal consistency defined in Section \ref{sec:formal_setup}, where we ask that for every measurable function $f:\mathcal{X}\to\mathcal{Y}$, almost surely $\mathcal{L}_{(\mathbb{X},\mathbb{Y})}(f_\cdot,f)\leq 0$. In particular, this shows that $\text{FS}\subset \text{SOLAR}$. As a result, $\text{SOLAR}=\text{FS}$ and $f_\cdot$ is optimistically universal. This ends the proof of the result.
\end{proof}
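The reduction in the proof above, routing each new instance to a separate copy of the mean-estimation rule, can be sketched as follows. The subroutine \texttt{g} is passed as a parameter and stands in for the rule $g_\cdot$ of Theorem \ref{thm:mean_estimation}; the last-value rule used in the usage example is only an illustrative stand-in, not the rule from the theorem.

```python
def make_instance_router(g):
    """Reduce regression under an FS process to per-instance mean estimation:
    each distinct instance x gets its own copy of the subroutine g, fed only
    the values observed at the times when the instance was exactly x."""
    histories = {}  # x -> list of past values Y^x, in order of arrival

    def predict(x):
        # An unseen x corresponds to g_1(emptyset) in the displayed rule.
        return g(histories.get(x, []))

    def update(x, y):
        histories.setdefault(x, []).append(y)

    return predict, update
```

For instance, with the stand-in subroutine `lambda h: h[-1] if h else 0.0`, interleaved streams on two instances are handled by two independent histories, exactly as in the displayed definition of $f_t$.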
\subsection{Negative result for real-valued adversarial regression with loss $\ell=|\cdot|^\alpha$ with $\alpha>1$}
In the previous section, we observed that for general metric value spaces we recover $\text{SOLAR}=\text{FS}$ in adversarial regression. Unfortunately, even though $\text{FS}$ is already an extremely restrictive set of processes, we now show that in the classical setting of real-valued regression $\mathcal{Y}=\mathbb{R}$ with the Euclidean metric, adversarial regression with any loss $\ell=|\cdot|^\alpha$ for $\alpha>1$ is not achievable. Specifically, we show that in this case $\text{SOLAR}=\emptyset$. As a consequence, adversarial regression, and even mean estimation, in the classical setting of real-valued responses with squared loss is never achievable.
\begin{theorem}
\label{thm:empty_solar}
Let $\alpha>1$. For the Euclidean value space $(\mathbb{R},|\cdot|)$ and loss $\ell=|\cdot|^\alpha$ we obtain $\text{SOLAR}=\emptyset$. In particular, there does not exist a consistent learning rule for mean-estimation on $\mathbb{R}$ with squared loss for adversarial responses.
\end{theorem}
\begin{proof}
We first show that mean-estimation is not achievable. To do so, let $f_\cdot$ be a learning rule. For simplicity, we will denote by $\hat Y_t$ its prediction at step $t$. We aim to construct a process $\mathbb{Y}$ on $\mathbb{R}$ and a value $y^*\in\mathbb{R}$ such that with non-zero probability we have
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(f_t(\mathbb{Y}_{\leq t-1}),Y_t) - \ell(y^*,Y_t) > 0.
\end{equation*}
We now set $\beta:=\frac{2\alpha}{\alpha-1}>2$. For any sequence $\mb b:=(b_t)_{t\geq 1}$ in $\{-1,1\}$, we consider the process $\mathbb{Y}^{\mb b}$ such that for any $t\geq 1$ we have $Y_t^{\mb b} = 2^{\beta^t} b_t.$ Let $\mb B:= (B_t)_{t\geq 1}$ be an i.i.d. sequence of Rademacher random variables, i.e., such that $B_t=1$ (resp. $B_t=-1$) with probability $\frac{1}{2}$. We consider the random variables $e_t:= \mathbbm{1}_{\hat Y_t\cdot Y_t \leq 0}$, which intuitively correspond to flags for large mistakes of the learning rule $f_\cdot$ at time $t$. Because $f_\cdot$ is an online learning rule, we have
\begin{equation*}
\mathbb{E}[e_t\mid \mathbb{Y}_{\leq t-1}] = \mathbb{E}_{\hat Y_t}\left[\mathbb{E}_{B_t}[\mathbbm{1}_{\hat Y_t\cdot Y_t\leq 0} \mid \hat Y_t]\right] = \mathbb{E}_{\hat Y_t}\left[\mathbbm{1}_{\hat Y_t=0} + \frac{1}{2}\mathbbm{1}_{\hat Y_t\neq 0}\right] \geq \frac{1}{2},
\end{equation*}
where the expectation $\mathbb{E}_{\hat Y_t}$ refers to the expectation over the randomness of the rule $f_t$. As a result, the random variables $e_t-\frac{1}{2}$ form a sequence of differences of a sub-martingale, bounded by $\frac{1}{2}$ in absolute value. By the Azuma-Hoeffding inequality, we obtain $\mathbb{P}\left[\sum_{t=1}^T e_t \leq \frac{T}{4}\right] \leq e^{-T/8}.$ Because $\sum_{t\geq 1} e^{-t/8}<\infty$, the Borel-Cantelli lemma implies that on an event $\mathcal{E}$ of probability one, we have $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T e_t \geq \frac{1}{4}$. As a result, there exists a specific realization $\mb b$ of $\mb B$ such that on an event $\tilde{\mathcal{E}}$ of probability one, we have $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T e_t \geq \frac{1}{4}$. Note that the sequence $\mathbb{Y}^{\mb b}$ is now deterministic. Then, writing $e_t=e_t\mathbbm{1}_{Y_t>0} + e_t\mathbbm{1}_{Y_t<0}$, we obtain
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T e_t \mathbbm{1}_{Y_t>0}+ \limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T e_t \mathbbm{1}_{Y_t<0} \geq \frac{1}{4}.
\end{equation*}
Without loss of generality, we can suppose that $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \mathbbm{1}_{\hat Y_t\cdot Y_t\leq 0} \mathbbm{1}_{Y_t>0} \geq \frac{1}{8}$, and we then set $y^*=1$; in the other case, we set $y^*=-1$. We now compute, for any $T\geq 1$ such that $\hat Y_T\cdot Y_T\leq 0$ and $Y_T>0$,
\begin{align*}
\frac{1}{T}\sum_{t=1}^T \left(\ell(f_t(\mathbb{Y}_{\leq t-1}),Y_t) - \ell(y^*,Y_t)\right) &\geq \frac{ \ell(0,2^{\beta^T}) - \ell(1,2^{\beta^T})}{T} - \frac{1}{T}\sum_{t=1}^{T-1} \ell(1,-2^{\beta^t}).\\
&\geq \frac{\alpha}{T} 2^{(\alpha-1)\beta^T} + O\left(\frac{1}{T} 2^{(\alpha-2)\beta^T}\right) - 2^{\alpha(1+\beta^{T-1})}\\
&= \frac{\alpha}{T} 2^{2\alpha\beta^{T-1}}(1+o(1)).
\end{align*}
Because, by construction, $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \mathbbm{1}_{\hat Y_t\cdot Y_t\leq 0} \mathbbm{1}_{Y_t>0} \geq \frac{1}{8}$, we obtain
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \left(\ell(f_t(\mathbb{Y}_{\leq t-1}),Y_t) - \ell(y^*,Y_t)\right) = \infty,
\end{equation*}
on the event $\tilde{\mathcal{E}}$ of probability one. This ends the proof that mean-estimation is not achievable. Because mean-estimation is the easiest regression setting, this directly implies $\text{SOLAR}=\emptyset$. Formally, let $\mathbb{X}$ be a process on $\mathcal{X}$ and let $f_\cdot$ be a learning rule for regression. We consider the same processes $\mathbb{Y}^{\mb B}$ where $\mb B$ is i.i.d. Rademacher and independent from $\mathbb{X}$. The same proof shows that there exists a realization $\mb b$ for which, almost surely, $\mathcal{L}_{(\mathbb{X},\mathbb{Y})}(f_\cdot,f^*) = \infty$, where $f^*$ denotes the constant function equal to the value $y^*\in\mathbb{R}$ constructed above. Hence, $\mathbb{X}\notin\text{SOLAR}$, and as a result, $\text{SOLAR}=\emptyset.$
\end{proof}
The above proof also shows that the same negative result holds more generally for unbounded metric value spaces that have some ``symmetry''. The main ingredient for this negative result is the existence of a point from which there are arbitrarily far values in two symmetric directions. In particular, this holds for the discretized value space $(\mathbb{N},|\cdot|)$ with the Euclidean metric, and for any Euclidean space $\mathbb{R}^d$ with $d\geq 1$.
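As an illustration, the lower bound on the mistake frequency can be checked numerically; the predictor below is an arbitrary stand-in, since the argument applies to any learning rule:

```python
import random

# Numerical check of the argument above: against i.i.d. Rademacher signs,
# any predictor (here, an arbitrary randomized stand-in) incurs a "large
# mistake" e_t = 1{Yhat_t * Y_t <= 0} with conditional probability >= 1/2;
# the magnitude 2**(beta**t) of Y_t plays no role in e_t, only its sign.
random.seed(0)
T = 10_000
mistakes = 0
for _ in range(T):
    b = random.choice([-1, 1])       # sign B_t of Y_t
    y_hat = random.choice([-1, 1])   # stand-in predictor
    if y_hat * b <= 0:
        mistakes += 1
freq = mistakes / T
# Azuma-Hoeffding: P(freq <= 1/4) <= exp(-T/8); empirically freq is near 1/2.
assert freq > 0.25
print(round(freq, 2))
```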
\subsection{An alternative for adversarial regression with unbounded losses}
The previous two sections show that there exist losses for which we obtain $\text{SOLAR}=\emptyset$ or $\text{SOLAR}=\text{FS}$. In this section, we show the simple result that this is the only alternative, and that having non-empty $\text{SOLAR}=\text{FS}$ is equivalent to achieving consistency for mean-estimation with adversarial responses.
\begin{proposition}
\label{prop:alternative_unbounded}
Suppose that there exists an online learning rule $g_\cdot$ which is consistent for mean-estimation with adversarial responses, i.e., for any adversarial process $\mathbb{Y}$ on $(\mathcal{Y},\ell)$, we have for any $y\in\mathcal{Y}$,
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \left(\ell(g_t(\mathbb{Y}_{\leq t-1}),Y_t) - \ell(y,Y_t)\right) \leq 0,\quad (a.s.),
\end{equation*}
then $\text{SOLAR}=\text{FS}$ and there exists an optimistically universal learning rule for adversarial regression. On the other hand, if mean-estimation is not achievable for adversarial responses, $\text{SOLAR}=\emptyset$.
\end{proposition}
\begin{proof}
Suppose that there exists an online learning rule $g_\cdot$ for mean-estimation. In the proof of Corollary \ref{cor:universal_regression_unbounded}, instead of using the learning rule for mean-estimation for metric losses introduced in Theorem \ref{thm:mean_estimation}, we can use the learning rule $g_\cdot$ to construct a learning rule $f_\cdot$ for adversarial regression on $\text{FS}$ instance processes, which simply runs $g_\cdot$ separately on each subprocess $\mathbb{Y}_{t:X_t=x}$ with the same instance $x\in\mathcal{X}$, for all visited $x\in\mathcal{X}$ in the process $\mathbb{X}$. The same proof shows that because almost surely $\mathbb{X}$ visits a finite number of different instances, $f_\cdot$ is universally consistent under any process $\mathbb{X}\in\text{FS}$. Hence, $\text{FS}\subset\text{SOLAR}$. Because $\text{SOLAR}\subset\text{SOUL}=\text{FS}$, we obtain directly $\text{SOLAR}=\text{FS}$ and $f_\cdot$ is optimistically universal.
On the other hand, if mean-estimation with adversarial responses is not achievable, we can use arguments similar to those in the proof of Theorem \ref{thm:empty_solar}. Let $\mathbb{X}$ be a process on $\mathcal{X}$ and let $f_\cdot$ be a learning rule for regression; consider the following learning rule $g_\cdot$ for mean-estimation. We first draw a process $\tilde \mathbb{X}$ with the same distribution as $\mathbb{X}$. Then, we set
\begin{equation*}
g_t(\mb y_{\leq t-1}):=f_t(\tilde \mathbb{X}_{\leq t-1},\mb y_{\leq t-1},\tilde X_t).
\end{equation*}
Then, because mean-estimation is not achievable, there exists an adversarial process $\mathbb{Y}$ on $(\mathcal{Y},\ell)$ and a value $y^*\in\mathcal{Y}$ such that with non-zero probability,
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \left(\ell(g_t(\mathbb{Y}_{\leq t-1}),Y_t) - \ell(y^*,Y_t)\right) > 0.
\end{equation*}
Then, we obtain that with non-zero probability, $\mathcal{L}_{(\tilde \mathbb{X},\mathbb{Y})}(f_\cdot,f^*:=y^*)>0$, where $f^*$ is the constant function equal to $y^*$. Hence, $f_\cdot$ is not universally consistent. Note that the ``bad'' process $\mathbb{Y}$ is not correlated with $\tilde \mathbb{X}$ in this construction.
\end{proof}
As a note, although in the Euclidean real-valued case we obtained the simple alternative $\text{SOLAR}=\text{FS}$ if $\alpha=1$ and $\text{SOLAR}=\emptyset$ for $\alpha>1$, one cannot hope to simplify the characterization of Proposition \ref{prop:alternative_unbounded} so that, in general, for any power of a metric with $\alpha>1$ we would obtain $\text{SOLAR}=\emptyset$. Indeed, consider the case $\mathcal{Y}=\mathbb{R}$ equipped with the metric $\sqrt{|\cdot|_2}$, the square root of the classical Euclidean metric. One can check that this does define a proper metric because it satisfies the triangle inequality. In that case, we obtain $\text{SOLAR}=\text{FS}$ if $\alpha\leq 2$, and $\text{SOLAR}=\emptyset$ if $\alpha>2$.
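A quick numerical sanity check (not a substitute for the concavity argument) that the square root of the Euclidean metric satisfies the triangle inequality on $\mathbb{R}$:

```python
import math
import random

# d(a, b) = sqrt(|a - b|) satisfies the triangle inequality on R because
# sqrt is concave with sqrt(0) = 0, hence subadditive:
# sqrt(u + v) <= sqrt(u) + sqrt(v).
random.seed(1)

def d(a, b):
    return math.sqrt(abs(a - b))

for _ in range(100_000):
    x, y, z = (random.uniform(-1e6, 1e6) for _ in range(3))
    # small tolerance only for floating-point rounding
    assert d(x, z) <= d(x, y) + d(y, z) + 1e-9
print("triangle inequality holds on all sampled triples")
```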
\section{Adversarial universal learning with moment constraint}
\label{sec:unbounded_loss_moment_constraint}
In the previous section, we showed that the only learnable processes for adversarial regression are the processes in $\text{FS}$, i.e., those that visit a finite number of instance points. This shows that universal learning---without restrictions on the adversarial responses $\mathbb{Y}$---is extremely restrictive. For instance, it does not account for i.i.d. processes. A natural question is whether adding mild constraints on the process $\mathbb{Y}$ would allow us to recover the same results for unbounded losses as for bounded losses from Sections \ref{sec:totally_bounded_value_spaces} and \ref{sec:alternative}. In fact, this question already arises in noiseless regression, since the set of learnable processes is reduced from $\text{SOUL}={\text{SMV}}$ for bounded losses to $\text{SOUL}=\text{FS}$ for unbounded losses. Hence, \cite{blanchard2022optimistic} posed as an open problem whether having finite long-run empirical first-order moments would be sufficient to recover learnability in ${\text{SMV}}$. Precisely, they introduced the following constraint on noiseless processes $\mathbb{Y}=f^*(\mathbb{X})$: there exists $y_0\in\mathcal{Y}$ with
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T} \sum_{t=1}^T \ell(y_0,f^*(X_t)) <\infty\quad (a.s.).
\end{equation*}
The open question now becomes whether there exists an online learning rule which would be consistent under all processes $\mathbb{X}\in{\text{SMV}}$ for any noiseless responses $\mathbb{Y}=f^*(\mathbb{X})$ with $f^*$ satisfying the above first-moment condition. We show that such an objective is not achievable whenever $\mathcal{X}$ is infinite---if $\mathcal{X}$ is finite, any process $\mathbb{X}$ on $\mathcal{X}$ is automatically $\text{FS}$ and hence learnable in the noiseless or adversarial setting. In fact, under this first-order moment condition, we show the stronger statement that learning under all processes $\mathbb{X}$ which admit pointwise convergent relative frequencies ($\text{CRF}$) is impossible even in this noiseless setting. Formally, the set $\text{CRF}$ is defined as follows.
\paragraph{Condition $\text{CRF}$.}For any measurable set $A\in\mathcal{B}$, $ \lim_{T\to\infty} \frac{1}{T}\sum_{t=1}^T\mathbbm{1}_A(X_t)$ exists almost surely.\\
\cite{hanneke2021learning} showed that we have $\text{CRF}\subset\text{CS}$. In particular, we have $\text{CRF}\subset {\text{SMV}}$. We show the following negative result on learning under $\text{CRF}$ processes for noiseless regression under first-order moment constraint, which holds for unbounded near-metric spaces $(\mathcal{Y},\ell)$.
\begin{theorem}
\label{thm:negative_first_order_moment}
Suppose that $\mathcal{X}$ is infinite, and that $(\mathcal{Y},\ell)$ is an unbounded separable near-metric space. There does not exist an online learning rule which would be consistent under all processes $\mathbb{X}\in\text{CRF}(\subset{\text{SMV}})$ for all measurable target functions $f^*:\mathcal{X}\to\mathcal{Y}$ such that there exists $y_0\in \mathcal{Y}$ with
\begin{equation*}
\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T\ell(y_0,f^*(X_t))<\infty \quad (a.s.).
\end{equation*}
\end{theorem}
\begin{proof}
Let $(x^k)_{k\geq 0}$ be a sequence of distinct points of $\mathcal{X}$. Now fix a value $y_0\in\mathcal{Y}$ and construct a sequence of values $y^1_k,y^2_k$ for $k\geq 1$ such that $\ell(y^1_k,y^2_k)\geq c_\ell 2^{k+1}$. Because $\ell(y^1_k,y^2_k)\leq c_\ell \ell(y_0,y^1_k)+c_\ell \ell(y_0,y^2_k)$, there exists $i_k\in\{1,2\}$ such that $\ell(y_0,y^{i_k}_k)\geq 2^k$. For simplicity, we will now write $y_k:=y^{i_k}_k$ for all $k\geq 1$. We define
\begin{equation*}
t_k = \left\lfloor \sum_{l=1}^k \ell(y_0,y_l)\right\rfloor.
\end{equation*}
This forms a strictly increasing sequence of times because $t_{k+1}-t_k\geq \ell(y_0,y_{k+1})-1\geq 2^{k+1}-1\geq 1$. Consider the deterministic process $\mathbb{X}$ that visits $x^k$ at time $t_k$ and $x^0$ otherwise, i.e., such that
\begin{equation*}
X_t=\begin{cases}
x^k &\text{if }t=t_k,\\
x^0 &\text{otherwise}.
\end{cases}
\end{equation*}
The process $\mathbb{X}$ visits $\mathcal{X}\setminus\{x^0\}$ a sublinear number of times. Hence we have for any measurable set $A$:
\begin{equation*}
\lim_{T\to\infty}\frac{1}{T}\sum_{t=1}^T\mathbbm{1}_A(X_t) = \begin{cases}
1 &\text{if }x^0\in A\\
0 &\text{otherwise}.
\end{cases}
\end{equation*}
As a result, $\mathbb{X}\in\text{CRF}$. We will now show that universal learning under $\mathbb{X}$ with the first moment condition on the responses is not achievable. For any sequence $b:=(b_k)_{k\geq 1}$ of binary variables $b_k\in\{0,1\}$, we define the function $f^*_b:\mathcal{X}\to\mathcal{Y}$ such that
\begin{equation*}
f^*_b(x^k)=\begin{cases}
y_0 &\text{if }b_k=0,\\
y_k &\text{otherwise},
\end{cases}\quad k\geq 1\quad \text{and }\quad f^*_b(x)=y_0 \text{ if }x\notin\{x^k,\; k\geq 1\}.
\end{equation*}
These functions take countably many values on measurable sets, hence are measurable. We will first show that for any binary sequence $b$, the function $f^*_b$ satisfies the moment condition on the target functions. Indeed, we note that for any $T\geq t_1$, with $k:=\max\{l\geq 1:t_l\leq T\}$, we have
\begin{equation*}
\frac{1}{T}\sum_{t=1}^T\ell(y_0,f^*_b(X_t))\leq \frac{1}{T}\sum_{l=1}^k \ell(y_0,y_l)\leq \frac{t_k+1}{T}\leq \frac{T+1}{T}.
\end{equation*}
Therefore, $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T\ell(y_0,f^*_b(X_t))\leq 1.$
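As a sanity check of this construction, the times $t_k$ and the first-moment bound can be computed explicitly for the hypothetical choice $\ell(y_0,y_k)=2^k$ (which satisfies $\ell(y_0,y_k)\geq 2^k$) and the worst-case responses $b_k=1$:

```python
# Numerical illustration of the construction above, with the hypothetical
# choice ell(y0, y_k) = 2**k for k = 1..K and worst-case b_k = 1 for all k.
K = 20
ell = [2 ** k for k in range(1, K + 1)]           # ell(y0, y_k), k = 1..K
t = []                                            # times t_k (cumulative sums)
s = 0
for v in ell:
    s += v
    t.append(s)                                   # floor not needed: integers
assert all(t[i + 1] > t[i] for i in range(K - 1))  # strictly increasing times

T = t[-1]
visit = {tk: j for j, tk in enumerate(t)}          # time t_k -> index into ell
# f*(X_u) = y_k at u = t_k and y0 otherwise, so ell(y0, f*(X_u)) is:
total = sum(ell[visit[u]] if u in visit else 0 for u in range(1, T + 1))
# First-moment bound from the proof: (1/T) * total <= (t_K + 1)/T <= (T+1)/T.
assert total <= T + 1
# Non-x^0 instances are visited only O(log T) times (sublinear frequency):
assert len(t) <= T.bit_length() + 1
print(T, total / T)  # prints: 2097150 1.0
```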
We now consider any online learning rule $f_\cdot$. Let $B=(B_k)_{k\geq 1}$ be an i.i.d. sequence of Bernoulli variables with parameter $\frac{1}{2}$, independent from the learning rule randomness. For any $k\geq 1$, denoting $\hat Y_{t_k}:=f_{t_k}(\mathbb{X}_{\leq t_k-1},f^*_B(\mathbb{X}_{\leq t_k-1}),X_{t_k})$, we have
\begin{equation*}
\mathbb{E}_{B_k} \ell(\hat Y_{t_k},f^*_B(X_{t_k})) = \frac{\ell(\hat Y_{t_k},y_0) + \ell(\hat Y_{t_k},y_k)}{2} \geq \frac{1}{2c_\ell} \ell(y_0,y_k).
\end{equation*}
In particular, taking the expectation over both $B$ and the learning rule, we obtain
\begin{equation*}
\mathbb{E}\left[ \frac{1}{t_k}\sum_{t=1}^{t_k} \ell(f_t(\mathbb{X}_{\leq t-1},f^*_B(\mathbb{X}_{\leq t-1}),X_t),f^*_B(X_t)) \right]\geq \frac{1}{2c_\ell t_k} \sum_{l=1}^k\ell(y_0,y_l)\geq \frac{1}{2c_\ell}.
\end{equation*}
As a result, using Fatou's lemma we obtain
\begin{align*}
\mathbb{E}\left[ \limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(f_t(\mathbb{X}_{\leq t-1},f^*_B(\mathbb{X}_{\leq t-1}),X_t),f^*_B(X_t))\right] &\geq \limsup_{T\to\infty}\mathbb{E} \left[ \frac{1}{T}\sum_{t=1}^T \ell(f_t(\mathbb{X}_{\leq t-1},f^*_B(\mathbb{X}_{\leq t-1}),X_t),f^*_B(X_t))\right] \\
&\geq \frac{1}{2c_\ell}.
\end{align*}
Therefore, the learning rule $f_\cdot$ is not consistent under $\mathbb{X}$ for all target functions of the form $f^*_b$ with $b$ a sequence of binary variables. Indeed, otherwise, for every binary sequence $b=(b_k)_{k\geq 1}$ we would have $ \mathbb{E}_\mathbb{X} \left[ \limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(f_t(\mathbb{X}_{\leq t-1},f^*_b(\mathbb{X}_{\leq t-1}),X_t),f^*_b(X_t))\right] =0$, and as a result
\begin{equation*}
\mathbb{E}_B\mathbb{E}_\mathbb{X} \left[ \limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(f_t(\mathbb{X}_{\leq t-1},f^*_B(\mathbb{X}_{\leq t-1}),X_t),f^*_B(X_t)) \right] =0.
\end{equation*}
This contradicts the lower bound $\frac{1}{2c_\ell}$ obtained above and ends the proof of the theorem.
\end{proof}
Theorem \ref{thm:negative_first_order_moment} answers (negatively) the open problem posed in \cite{blanchard2022optimistic}. A natural question is whether another meaningful constraint on responses can be applied to obtain positive results under large classes of processes on $\mathcal{X}$. To this end, we introduce a novel constraint, similar to that introduced by \cite{blanchard2022optimistic} but slightly stronger, which we will refer to as the \emph{empirical integrability} constraint. An (adversarial) process $\mathbb{Y}$ is \emph{empirically integrable} if there exists $y_0\in\mathcal{Y}$ such that for any $\epsilon>0$, almost surely there exists $M\geq 0$ with
\begin{equation*}
\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T\ell(y_0,Y_t)\mathbbm{1}_{\ell(y_0,Y_t)\geq M}\leq \epsilon.
\end{equation*}
Note that the threshold $M$ may be \emph{dependent} on the adversarial process $\mathbb{Y}$. This is essentially the mildest condition on the sequence $\mathbb{Y}$ for which we can still obtain results. For example, if the loss is bounded, this constraint is automatically satisfied using $M>\bar\ell$. Hence, for bounded value spaces, the moment constraint is not restrictive. More importantly, note that any process $\mathbb{Y}$ which has a bounded higher-than-first moment, i.e., such that there exist $p>1$ and $y_0\in\mathcal{Y}$ with
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell^p(y_0,Y_t) <\infty\quad (a.s.),
\end{equation*}
is empirically integrable. Note that for i.i.d. processes $\mathbb{Y}$, having a bounded first moment $\mathbb{E}[\ell(y_0,Y_1)]<\infty$ coincides exactly with being empirically integrable. Indeed, by the strong law of large numbers, in this case we obtain almost surely $\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T\ell(y_0,Y_t)\mathbbm{1}_{\ell(y_0,Y_t)\geq M} = \mathbb{E}[\ell(y_0,Y_1)\mathbbm{1}_{\ell(y_0,Y_1)\geq M}]$. Then, the dominated convergence theorem allows us to find a suitable $M$ for any fixed tolerance $\epsilon>0$. We formally prove this in the next lemma.
\begin{lemma}
\label{lemma:link_to_first_moment}
Let $\mathbb{Y}$ be an i.i.d. process on $\mathcal{Y}$ which has a bounded first moment, i.e., there exists $y_0\in\mathcal{Y}$ such that $\mathbb{E}[\ell(y_0,Y_1)]<\infty$. Then, $\mathbb{Y}$ is empirically integrable.
\end{lemma}
\begin{proof}
Let $\mathbb{Y}$ be an i.i.d. process and $y_0\in\mathcal{Y}$ with $\mathbb{E}[\ell(y_0,Y_1)]<\infty$. Then, by the dominated convergence theorem, we have $\mathbb{E}[\ell(y_0,Y_1)\mathbbm{1}_{\ell(y_0,Y_1)\geq M}]\to 0$ as $M\to\infty$. Hence, for $\epsilon>0$, there exists $M_\epsilon$ such that $\mathbb{E}[\ell(y_0,Y_1)\mathbbm{1}_{\ell(y_0,Y_1)\geq M_\epsilon}]\leq \epsilon$. Now, by the law of large numbers, almost surely we have
\begin{equation*}
\lim_{T\to\infty}\frac{1}{T}\sum_{t=1}^T \ell(y_0,Y_t)\mathbbm{1}_{\ell(y_0,Y_t)\geq M_\epsilon} = \mathbb{E}[\ell(y_0,Y_1)\mathbbm{1}_{\ell(y_0,Y_1)\geq M_\epsilon}] \leq \epsilon.
\end{equation*}
This ends the proof that $\mathbb{Y}$ is empirically integrable.
\end{proof}
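As an illustration of this lemma, the truncated tail average can be estimated for a heavy-tailed i.i.d. process with finite mean; the Pareto-type choice below is hypothetical:

```python
import random

# An i.i.d. process with a finite first moment is empirically integrable.
# Hypothetical choice: ell(y0, Y_t) is Pareto-distributed with tail index 3
# (density 3 x**-4 on [1, inf), so finite mean 3/2), and we check that the
# truncated tail average beyond a large threshold M is small.
random.seed(2)
T = 200_000
M = 100.0
tail_sum = 0.0
for _ in range(T):
    dist = random.random() ** (-1.0 / 3.0)  # P(dist > x) = x**-3 for x >= 1
    if dist >= M:
        tail_sum += dist
tail_avg = tail_sum / T
# E[dist * 1{dist >= M}] = 1.5 / M**2 = 1.5e-4, so tail_avg is tiny.
assert tail_avg < 0.01
print(round(tail_avg, 5))
```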
The goal of this section is to show that under this moment constraint, we can recover all results from \cite{blanchard2022universal}, \cite{hanneke2022bayes} and this work in Sections \ref{sec:totally_bounded_value_spaces} and \ref{sec:alternative}, even for unbounded value spaces. We first prove a simple equivalent formulation for empirical integrability.
\begin{lemma}
\label{lemma:empirically_integrable}
A process $\mathbb{Y}$ is empirically integrable if and only if there exists $y_0\in\mathcal{Y}$ such that almost surely, for any $\epsilon>0$ there exists $M>0$ with
\begin{equation*}
\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T\ell(y_0,Y_t)\mathbbm{1}_{\ell(y_0,Y_t)\geq M}\leq \epsilon.
\end{equation*}
\end{lemma}
\begin{proof}
It suffices to prove that empirical integrability implies the latter property. We set $\epsilon_i=2^{-i}$ for any $i\geq 0$. By definition, there exists an event $\mathcal{E}_i$ of probability one such that on $\mathcal{E}_i$ we have
\begin{equation*}
\exists M_i\geq 0,\quad \limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T\ell(y_0,Y_t)\mathbbm{1}_{\ell(y_0,Y_t)\geq M_i}\leq \epsilon_i.
\end{equation*}
As a result, on $\bigcap_{i\geq 0}\mathcal{E}_i$ of probability one, we obtain
\begin{equation*}
\forall \epsilon>0, \exists M:=M_{\lceil \log_2 \frac{1}{\epsilon}\rceil}\geq 0,\quad \limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T\ell(y_0,Y_t)\mathbbm{1}_{\ell(y_0,Y_t)\geq M}\leq \epsilon.
\end{equation*}
This ends the proof of the lemma.
\end{proof}
\subsection{Noiseless universal learning with moment condition}
The main result from \cite{blanchard2022universal} showed that for bounded value spaces, the 2C1NN learning rule is optimistically universal, i.e., achieves universal consistency on all ${\text{SMV}}$ processes. We now show that the same learning rule is consistent under all ${\text{SMV}}$ processes for noiseless responses $\mathbb{Y}=f^*(\mathbb{X})$ which are empirically integrable, even for unbounded value spaces.
\ThmNoiselessUnbounded*
\begin{proof}
Let $\mathbb{X}\in\text{SOUL}$ and let $f^*:\mathcal{X}\to\mathcal{Y}$ be such that $f^*(\mathbb{X})$ is empirically integrable. By Lemma \ref{lemma:empirically_integrable}, there exists some value $y_0\in\mathcal{Y}$ such that on an event $\mathcal{A}$ of probability one, for all $\epsilon>0$ there exists $M_\epsilon\geq 0$ such that $\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T\ell(y_0,f^*(X_t))\mathbbm{1}_{\ell(y_0,f^*(X_t))\geq M_\epsilon}\leq \epsilon.$ For any $M\geq 1$ we define the function $f^*_M$ by
\begin{equation*}
f^*_M(x)=\begin{cases}
f^*(x) &\text{if } \ell(y_0,f^*(x))\leq M,\\
y_0 &\text{otherwise}.
\end{cases}
\end{equation*}
We know that 2C1NN is optimistically universal in the noiseless setting for bounded losses. Therefore, restricting the study to the output space $(B_\ell(y_0,M),\ell)$ we obtain that 2C1NN is consistent for $f^*_M$ under $\mathbb{X}$, i.e.
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(2C1NN_t(\mathbb{X}_{\leq t-1},f^*_M(\mathbb{X}_{\leq t-1}),X_t),f^*_M(X_t)) = 0\quad (a.s.).
\end{equation*}
For any $t\geq 1$, we denote by $\phi(t)$ the representative used by the 2C1NN learning rule at time $t$. We denote by $\mathcal{E}_M$ the corresponding almost-sure event on which $\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T \ell(f^*_M(X_{\phi(t)}),f^*_M(X_t))=0$. We now write for any $T\geq 1$ and $M\geq 1$,
\begin{equation*}
\frac{1}{T}\sum_{t=1}^T \ell(f^*(X_{\phi(t)}),f^*(X_t))\leq \frac{c_\ell^2}{T}\sum_{t=1}^T \ell(f^*_M(X_{\phi(t)}),f^*_M(X_t)) +
\frac{c_\ell^2}{T}\sum_{t=1}^T \ell(f^*(X_t),f^*_M(X_t)) +\frac{c_\ell}{T}\sum_{t=1}^T \ell(f^*(X_{\phi(t)}),f^*_M(X_{\phi(t)})).
\end{equation*}
We now note that by construction of the 2C1NN learning rule,
\begin{equation*}
\frac{1}{T}\sum_{t=1}^T \ell(f^*(X_{\phi(t)}),f^*_M(X_{\phi(t)})) = \frac{1}{T}\sum_{u=1}^T \ell(f^*(X_u),f^*_M(X_u)) |\{u<t\leq T:\phi(t)=u\}|\leq \frac{2}{T}\sum_{t=1}^T \ell(f^*(X_t),f^*_M(X_t)).
\end{equation*}
Hence, we obtain
\begin{equation*}
\frac{1}{T}\sum_{t=1}^T \ell(f^*(X_{\phi(t)}),f^*(X_t))\leq \frac{c_\ell^2}{T}\sum_{t=1}^T \ell(f^*_M(X_{\phi(t)}),f^*_M(X_t)) +
\frac{c_\ell(2+c_\ell)}{T}\sum_{t=1}^T \ell(y_0,f^*(X_t))\mathbbm{1}_{\ell(y_0,f^*(X_t))>M}.
\end{equation*}
As a result, on the event $\mathcal{A}\cap\bigcap_{M\geq 1}\mathcal{E}_M$ of probability one, for any $M\geq 1$, we obtain
\begin{equation*}
\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T\ell(f^*(X_{\phi(t)}),f^*(X_t)) \leq c_\ell(2+c_\ell) \limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T \ell(y_0,f^*(X_t))\mathbbm{1}_{\ell(y_0,f^*(X_t))\geq M}.
\end{equation*}
In particular, if $\epsilon>0$ we can apply this result to $M:=\lceil M_\epsilon\rceil$, which yields $\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T\ell(f^*(X_{\phi(t)}),f^*(X_t)) \leq c_\ell(2+c_\ell)\epsilon$. Because this holds for any $\epsilon>0$ we finally obtain that on the event $\mathcal{A}\cap\bigcap_{M\geq 1}\mathcal{E}_M$ we have
\begin{equation*}
\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T\ell(f^*(X_{\phi(t)}),f^*(X_t)) =0.
\end{equation*}
This ends the proof of the theorem.
\end{proof}
\subsection{Adversarial regression with moment condition under $\text{CS}$ processes}
We now turn to adversarial regression under $\text{CS}$ processes. \cite{hanneke2022bayes} showed that regression for arbitrary responses under all $\text{CS}$ processes is achievable in bounded value spaces. We generalize this result to unbounded losses and to adversarial responses with a similar online learning rule. In particular, our proposed learning rule will also be optimistically universal for adversarial regression for all bounded value spaces which do not satisfy $\text{F-TiME}$.
\ThmCSRegressionUnbounded*
\begin{proof}
Using Lemma 23 of \cite{hanneke2021learning}, let $\mathcal{T}\subset\mathcal{B}$ be a countable set such that for all $\mathbb{X}\in\mathcal{C}_1$ and $A\in\mathcal{B}$ we have
\begin{equation*}
\inf_{G\in\mathcal{T}} \mathbb{E}[\hat \mu_\mathbb{X} (G \bigtriangleup A)]=0.
\end{equation*}
Now let $(y^i)_{i\geq 0}$ be a dense sequence in $\mathcal{Y}$. For any $k\geq 0$, any indices $l_1,\ldots, l_k\in \mathbb{N}$ and any sets $A_1,\ldots,A_k \in \mathcal{T}$, we define the function $f_{\{l_1,\ldots,l_k\},\{A_1,\ldots,A_k\}}:\mathcal{X}\to\mathcal{Y}$ as \begin{equation*}
f_{\{l_1,\ldots,l_k\},\{A_1,\ldots,A_k\}}(x) = y^{l_{\max\{0\leq j\leq k:\; x\in A_j\}}}
\end{equation*}
where $A_0=\mathcal{X}$ and $l_0:=0$. These functions are simple, hence measurable. Because the set of such functions is countable, we enumerate them as $f^0,f^1,\ldots$ Without loss of generality, we suppose that $f^0=y^0$. For any $i\geq 0$, we denote by $k^i\geq 0$, $\{l_1^i,\ldots,l_{k^i}^i\}$ and $\{A_1^i,\ldots,A_{k^i}^i\}$ the parameters such that $f^i=f_{\{l_1^i,\ldots,l_{k^i}^i\},\{A_1^i,\ldots,A_{k^i}^i\}}$. We now define a sequence of sets $(I_t)_{t\geq 1}$ of indices and a sequence of sets $(\mathcal{F}_t)_{t\geq 1}$ of measurable functions by
\begin{equation*}
I_t := \{i\leq \ln t: \ell(y^{l_p^i},y^0)\leq 2^{-\alpha+1}\ln t,\; \forall 1\leq p\leq k^i\}\quad \text{and}\quad \mathcal{F}_t:=\{f^i:i\in I_t\}.
\end{equation*}
Then, clearly $I_t$ is finite and $\bigcup_{t\geq 1}I_t = \mathbb{N}$. For any $i\geq 0$, we define $t_i = \min\{t:i\in I_t\}$. We are now ready to construct our learning rule. Let $\eta_t=\frac{1}{\ln t \sqrt t}$. Fix any sequences $(x_t)_{t\geq 1}$ in $\mathcal{X}$ and $(y_t)_{t\geq 1}$ in $\mathcal{Y}$. At step $t\geq 1$, after observing the values $x_i$ for $1\leq i\leq t$ and $y_i$ for $1\leq i\leq t-1$, we define for any $i\in I_t$ the loss $L_{t-1,i}:= \sum_{s=t_i}^{t-1}\ell(f^i(x_s),\phi_{2^{-\alpha+1}\ln s}(y_s))$, where $\phi_M$ is the truncation map defined next.
For any $M\geq 1$ we define the function $\phi_M:\mathcal{Y}\to\mathcal{Y}$ such that
\begin{equation*}
\phi_M(y) = \begin{cases}
y &\text{if } \ell(y,y^0)< M,\\
y^0 &\text{otherwise}.
\end{cases}
\end{equation*}
We now construct weights $w_{t,i}$ for $t\geq 1$ and $i\in I_t$ recursively in the following way. Note that $I_1=\{0\}$; therefore, we set $w_{0,0}=1$. Now let $t\geq 2$ and suppose that the weights $w_{s-1,i}$ have been constructed for all $1\leq s\leq t-1$. We define
\begin{equation*}
\hat \ell_s:=\frac{\sum_{j\in I_s} w_{s-1,j}\ell(f^j(x_s),\phi_{2^{-\alpha+1}\ln s}(y_s))}{\sum_{j\in I_s} w_{s-1,j}}
\end{equation*}
and for any $i\in I_t$ we write $\hat L_{t-1,i} := \sum_{s=t_i}^{t-1} \hat \ell_s$. In particular, if $t_i=t$ we have $\hat L_{t-1,i}=L_{t-1,i}=0$. The weights at time $t$ are constructed as $w_{t-1,i}:= e^{\eta_t(\hat L_{t-1,i}-L_{t-1,i})}$ for any $i\in I_t$. Last, let $\{\hat i_t\}_{t\geq 1}$ be a sequence of independent $\mathbb{N}$-valued random variables such that
\begin{equation*}
\mathbb{P}(\hat i_t = i) = \frac{w_{t-1,i}}{\sum_{j\in I_{t}}w_{t-1,j}},\quad i\in I_t.
\end{equation*}
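In more familiar terms, the rule above is an exponentially weighted forecaster over a growing pool of simple-function experts, sampled according to the weights $w_{t-1,i}$. The following Python sketch is a simplified illustration with a fixed expert pool; the experts, loss, and data stream are hypothetical stand-ins, and the paper's rule additionally truncates responses at level $2^{-\alpha+1}\ln t$ and lets the pool $I_t$ grow over time:

```python
import math
import random

# Simplified exponentially weighted forecaster with stand-in experts f^i,
# stand-in squared loss, and a synthetic data stream.
random.seed(3)
experts = [lambda x: 0.0, lambda x: 1.0, lambda x: x]  # stand-ins for f^i

def loss(y_pred, y):                                   # stand-in for ell
    return (y_pred - y) ** 2

n = len(experts)
L = [0.0] * n        # cumulative expert losses L_{t,i}
L_hat = [0.0] * n    # cumulative mixture losses \hat L_{t,i}
realized = 0.0       # cumulative loss of the sampled predictions
T = 1000
for t in range(1, T + 1):
    x_t = random.random()
    y_t = x_t + 0.1 * random.gauss(0.0, 1.0)
    eta = 1.0 / (math.log(t + 1) * math.sqrt(t + 1))   # eta_t = 1/(ln t sqrt t)
    # weights w_{t-1,i} = exp(eta_t * (\hat L_{t-1,i} - L_{t-1,i}))
    w = [math.exp(eta * (L_hat[i] - L[i])) for i in range(n)]
    total_w = sum(w)
    # sample \hat i_t with P(\hat i_t = i) proportional to w_i, predict f^{\hat i_t}(x_t)
    i_hat = random.choices(range(n), weights=w)[0]
    realized += loss(experts[i_hat](x_t), y_t)
    # mixture loss \hat ell_t, then update both cumulative losses
    ell_mix = sum(w[i] * loss(experts[i](x_t), y_t) for i in range(n)) / total_w
    for i in range(n):
        L[i] += loss(experts[i](x_t), y_t)
        L_hat[i] += ell_mix
# Regret sanity check mirroring the O(ln^2 T sqrt T) guarantee in the proof:
print(round(L_hat[0] - min(L), 1), round(realized - min(L), 1))
```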
Finally, the prediction is defined as $\hat y_t:=f^{\hat i_t}(x_t)$. Note that the random prediction $\hat y_t$ only uses the values $x_1,\ldots, x_{t-1},y_1,\ldots,y_{t-1},x_t$, hence this defines an online learning rule, whose predictions we denote for simplicity $(\hat Y_t)_{t\geq 1}$. Now consider a process $(\mathbb{X},\mathbb{Y})$ with $\mathbb{X}\in \mathcal{C}_1$ and such that $\mathbb{Y}$ is empirically integrable. By Lemma \ref{lemma:empirically_integrable}, there exists $y_0\in\mathcal{Y}$ such that on an event $\mathcal{A}$ of probability one, for any $\epsilon>0$, there exists $M_\epsilon \geq 0$ with $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(y_0,Y_t)\mathbbm{1}_{\ell(y_0,Y_t)\geq M_\epsilon} \leq \epsilon$. We will now denote by $\tilde \mathbb{Y}$ the process defined by $\tilde Y_t = \phi_{2^{-\alpha+1}\ln t}(Y_t)$ for all $t\geq 1$. Then, for any $i\in I_t$, note that using Lemma \ref{lemma:loss_identity} we have
\begin{equation*}
0\leq \ell(f^i(x_t),\tilde Y_t) \leq 2^{\alpha-1}\left(\ell(f^i(x_t), y^0)+ \ell(y^0,\tilde Y_t)\right)\leq 2\ln t,
\end{equation*}
by construction of the set $I_t$. As a result, for any $i,j\in I_t$, we obtain $|\ell(f^i(x_t),\tilde Y_t)-\ell(f^j(x_t),\tilde Y_t)| \leq 2\ln t$. Hence, we can use the same proof as for Theorem \ref{thm:mean_estimation} and show that almost surely, there exists $\hat t\geq 1$ such that
\begin{equation*}
\forall t\geq \hat t,\forall i\in I_t, \quad \sum_{s=t_i}^t \ell(\hat Y_s,\tilde Y_s) \leq L_{t,i} + 3 \ln^2 t\sqrt t .
\end{equation*}
We denote by $\mathcal{B}$ this event. Now let $f:\mathcal{X}\to\mathcal{Y}$ be a measurable function to which we compare the predictions of our learning rule. For any $M\geq 1$, the function $\phi_M\circ f$ is measurable and has values in the ball $B_\ell(y^0,M)$, where the loss is bounded by $2^\alpha M$. Hence, by Lemma 24 from \cite{hanneke2021learning}, because $\mathbb{X}\in\mathcal{C}_1$ we have
\begin{equation*}
\inf_{i\geq 0} \mathbb{E}\left[\hat\mu_\mathbb{X} (\ell(\phi_M\circ f(\cdot),f^i(\cdot)))\right] = 0.
\end{equation*}
Now for any $k\geq 0$, let $i_k\geq 0$ be such that $\mathbb{E}\left[\hat\mu_\mathbb{X} (\ell(\phi_M\circ f(\cdot),f^{i_k}(\cdot)))\right] < 2^{-2k}$. By Markov's inequality, we have
\begin{equation*}
\mathbb{P}\left[ \hat\mu_\mathbb{X} (\ell(\phi_M\circ f(\cdot),f^{i_k}(\cdot))) < 2^{-k}\right] \geq 1-2^{-k}.
\end{equation*}
Because $\sum_k 2^{-k}<\infty$, the Borel-Cantelli lemma implies that almost surely there exists $\hat k$ such that for any $k\geq \hat k$, the inequality $\hat\mu_\mathbb{X} (\ell(\phi_M\circ f(\cdot),f^{i_k}(\cdot))) < 2^{-k}$ holds. We denote by $\mathcal{E}_M$ this event. On the event $\mathcal{B}\cap \mathcal{E}_M$ of probability one, for $k\geq \hat k$ and any $T\geq \max(t_{i_k},\hat t)$, we have for any $\epsilon>0$,
\begin{align*}
\frac{1}{T}\sum_{t=1}^T &\left(\ell(\hat Y_t,\tilde Y_t)-\ell( \phi_M\circ f(X_t),\tilde Y_t)\right) = \frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t,\tilde Y_t)-\ell( f^{i_k}(X_t),\tilde Y_t) + \frac{1}{T}\sum_{t=1}^T \ell(f^{i_k}(X_t),\tilde Y_t)-\ell( \phi_M\circ f(X_t),\tilde Y_t)\\
&\leq \frac{1}{T}\sum_{t=1}^{t_{i_k}-1}\ell(\hat Y_t,\tilde Y_t) + \frac{1}{T}\left(\sum_{t=t_{i_k}}^T \ell(\hat Y_t,\tilde Y_t) - L_{T,i_k} \right) + \frac{\epsilon}{T}\sum_{t=1}^T \ell( \phi_M\circ f(X_t),\tilde Y_t)+ \frac{c_\epsilon^\alpha}{T}\sum_{t=1}^T \ell(f^{i_k}(X_t), \phi_M\circ f(X_t))\\
&\leq \frac{2\ln t_{i_k}}{T}+ \frac{3\ln^2 T}{\sqrt T} + \epsilon 2^{\alpha-1}M + \epsilon 2^{\alpha-1}\frac{1}{T}\sum_{t=1}^T \ell(y^0,\tilde Y_t) + \frac{c_\epsilon^\alpha}{T}\sum_{t=1}^T \ell(f^{i_k}(X_t), \phi_M\circ f(X_t))\\
&\leq \frac{2\ln t_{i_k}}{T}+ \frac{3\ln^2 T}{\sqrt T} + \epsilon 2^{\alpha-1}M + \epsilon 2^{\alpha-1}\frac{1}{T}\sum_{t=1}^T \ell(y^0,Y_t) + \frac{c_\epsilon^\alpha}{T}\sum_{t=1}^T \ell(f^{i_k}(X_t), \phi_M\circ f(X_t)),
\end{align*}
where in the last inequality we used $\ell(y^0,\tilde Y_t)\leq \ell(y^0,Y_t)$ by construction of $\tilde Y_t = \phi_{2^{-\alpha+1}\ln t}(Y_t)$. Now on the event $\mathcal{A}$, we have
\begin{align*}
Z_1:=\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T \ell(y^0,Y_t) &\leq 2^{\alpha-1}\ell(y_0,y^0)+ 2^{\alpha-1}\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(y_0,Y_t)\\
&\leq 2^{\alpha-1}\ell(y_0,y^0)+ 2^{\alpha-1}\left(M_1+\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(y_0,Y_t)\mathbbm{1}_{\ell(y_0,Y_t)\geq M_1}\right)\\
&\leq 2^{\alpha-1}\ell(y_0,y^0)+ 2^{\alpha-1}(M_1+1) < \infty.
\end{align*}
Thus, on the event $\mathcal{A}\cap\mathcal{B}\cap \mathcal{E}_M$, for any $k\geq \hat k$ we have for any $\epsilon>0$,
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T\left(\ell(\hat Y_t,\tilde Y_t)-\ell( \phi_M\circ f(X_t),\tilde Y_t)\right) \leq \epsilon 2^{\alpha-1}M + \epsilon 2^{\alpha-1}Z_1 + \frac{c_\epsilon^\alpha}{2^k}.
\end{equation*}
Let $\delta>0$. Taking $\epsilon = \frac{\delta}{2^{\alpha-1}(M+Z_1)}$, we obtain that on the event $\mathcal{A}\cap\mathcal{B}\cap \mathcal{E}_M$, for any $k\geq \hat k$, we have $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T\left(\ell(\hat Y_t,\tilde Y_t)-\ell( \phi_M\circ f(X_t),\tilde Y_t)\right) \leq \delta + \frac{c_\epsilon^\alpha}{2^k}.$ Letting $k\to\infty$ yields $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \left(\ell(\hat Y_t,\tilde Y_t)-\ell( \phi_M\circ f(X_t),\tilde Y_t)\right) \leq \delta.$ Because this holds for any $\delta>0$, we obtain $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \left(\ell(\hat Y_t,\tilde Y_t)-\ell( \phi_M\circ f(X_t),\tilde Y_t)\right) \leq 0.$ Finally, on the event $\mathcal{A}\cap\mathcal{B}\cap \bigcap_{M= 1}^{\infty}\mathcal{E}_M$ of probability one, we have
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \left(\ell(\hat Y_t,\tilde Y_t)-\ell( \phi_M\circ f(X_t),\tilde Y_t)\right) \leq 0,\quad \forall M\geq 1,
\end{equation*}
where $M$ is an integer. We now observe that on the event $\mathcal{A}$, the same guarantee holding for $y_0$ also holds for $y^0$. Indeed, let $\epsilon>0$. For $\tilde M_\epsilon:=2^{\alpha-1}(M_{2^{-\alpha}\epsilon} +\ell(y^0,y_0)) + \ell(y_0,y^0)$, we have
\begin{align*}
\frac{1}{T}\sum_{t=1}^T \ell(y^0,Y_t) \mathbbm{1}_{\ell(y^0,Y_t)\geq \tilde M_\epsilon} &\leq 2^{\alpha-1} \ell(y^0,y_0)\frac{1}{T} \sum_{t=1}^T \mathbbm{1}_{\ell(y^0,Y_t)\geq \tilde M_\epsilon} + 2^{\alpha-1} \frac{1}{T}\sum_{t=1}^T \ell(y_0,Y_t) \mathbbm{1}_{\ell(y^0,Y_t)\geq \tilde M_\epsilon}\\
&\leq 2^{\alpha-1} \ell(y^0,y_0)\frac{1}{T} \sum_{t=1}^T \mathbbm{1}_{\ell(y_0,Y_t)\geq 2^{-\alpha+1}\tilde M_\epsilon - \ell(y_0,y^0)} + 2^{\alpha-1} \frac{1}{T}\sum_{t=1}^T \ell(y_0,Y_t) \mathbbm{1}_{\ell(y_0,Y_t)\geq 2^{-\alpha+1}\tilde M_\epsilon - \ell(y_0,y^0)}\\
&\leq 2^{\alpha} \frac{1}{T}\sum_{t=1}^T \ell(y_0,Y_t)\mathbbm{1}_{\ell(y_0,Y_t)\geq M_{2^{-\alpha}\epsilon}}.
\end{align*}
Hence, we obtain $\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T \ell(y^0,Y_t) \mathbbm{1}_{\ell(y^0,Y_t)\geq \tilde M_\epsilon} \leq 2^{\alpha}\cdot 2^{-\alpha}\epsilon = \epsilon$. We now write
\begin{align*}
\frac{1}{T}\sum_{t=1}^T \ell( \phi_M\circ f(X_t),\tilde Y_t)-\ell( f(X_t),Y_t)
&\leq \frac{1}{T}\sum_{t=1}^T \left(\ell(y^0, Y_t)-\ell( f(X_t),Y_t)\right)\mathbbm{1}_{\ell(f(X_t),y^0)\geq M}\mathbbm{1}_{ \ell(Y_t,y^0)\leq 2^{-\alpha+1}\ln t} \\
&\quad\quad\quad+ \frac{1}{T}\sum_{t=1}^T \left(\ell(f(X_t), y^0)-\ell( f(X_t),Y_t)\right)\mathbbm{1}_{\ell(f(X_t),y^0)\leq M}\mathbbm{1}_{\ell(Y_t,y^0)\geq 2^{-\alpha+1} \ln t}\\
&\leq \frac{1}{T}\sum_{t=1}^T \left(2\ell(y^0, Y_t)-2^{-\alpha+1}\ell( f(X_t),y^0)\right)\mathbbm{1}_{\ell(f(X_t),y^0)\geq M}\\
&\quad\quad\quad+
\frac{1}{T}\sum_{t=1}^T \left(2\ell(f(X_t), y^0)-2^{-\alpha+1}\ell( y^0,Y_t)\right)\mathbbm{1}_{\ell(f(X_t),y^0)\leq M}\mathbbm{1}_{\ell(Y_t,y^0)\geq 2^{-\alpha+1}\ln t}\\
&\leq \frac{2}{T}\sum_{t=1}^T \ell(y^0, Y_t)\mathbbm{1}_{\ell(Y_t,y^0)\geq 2^{-\alpha}M} +
\frac{2M e^{2^{2\alpha-1} M}}{T}.
\end{align*}
As a result, on the event $\mathcal{A}\cap\mathcal{B}\cap \bigcap_{M= 1}^{\infty}\mathcal{E}_M$, for any $M\geq 1$,
\begin{equation*}
\limsup_{T\to\infty}\frac{1}{T} \sum_{t=1}^T \left(\ell( \phi_M\circ f(X_t),\tilde Y_t)-\ell( f(X_t),Y_t)\right) \leq 2 \limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(y^0, Y_t)\mathbbm{1}_{\ell(Y_t,y^0)\geq 2^{-\alpha}M} .
\end{equation*}
Last, we compute
\begin{align*}
\frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t,Y_t)-\ell(\hat Y_t,\tilde Y_t) &= \frac{1}{T}\sum_{t=1}^T \left(\ell(\hat Y_t,Y_t)-\ell(\hat Y_t,y^0)\right)\mathbbm{1}_{\ell(Y_t,y^0)\geq 2^{-\alpha+1} \ln t}\\
&\leq \frac{1}{T}\sum_{t=1}^T \left(2^{\alpha-1}\ell(\hat Y_t,y^0) + 2^{\alpha-1}\ell(Y_t,y^0)\right)\mathbbm{1}_{\ell(Y_t,y^0)\geq 2^{-\alpha+1}\ln t}\\
&\leq \frac{1}{T}\sum_{t=1}^T \left(\ln t + 2^{\alpha-1}\ell(Y_t,y^0)\right)\mathbbm{1}_{\ell(Y_t,y^0)\geq 2^{-\alpha+1}\ln t}\\
&\leq \frac{2^\alpha}{T}\sum_{t=1}^T \ell(Y_t,y^0)\mathbbm{1}_{\ell(Y_t,y^0)\geq 2^{-\alpha+1}\ln t}.
\end{align*}
Note that on the event $\mathcal{A}$ we have, for any $M\geq 1$,
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(Y_t,y^0)\mathbbm{1}_{\ell(Y_t,y^0)\geq 2^{-\alpha+1}\ln t} \leq \limsup_{T\to\infty} \frac{1}{T}\sum_{t\geq e^{2^{\alpha-1}M}}^T \ell(Y_t,y^0)\mathbbm{1}_{\ell(Y_t,y^0)\geq M} = \limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(Y_t,y^0)\mathbbm{1}_{\ell(Y_t,y^0)\geq M}.
\end{equation*}
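Indeed, the threshold comparison behind the first inequality is elementary: for every $t \geq e^{2^{\alpha-1}M}$,
\begin{equation*}
2^{-\alpha+1}\ln t \;\geq\; 2^{-\alpha+1}\cdot 2^{\alpha-1} M \;=\; M,
\end{equation*}
so $\{\ell(Y_t,y^0)\geq 2^{-\alpha+1}\ln t\}\subset \{\ell(Y_t,y^0)\geq M\}$ for all such $t$, while the finitely many terms with $t< e^{2^{\alpha-1}M}$ do not affect the Ces\`aro limit superior.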
Hence, because this holds for any $M\geq 1$, for any $\epsilon>0$ we can apply it to the integer $M:=\lceil \tilde M_\epsilon\rceil$, which yields $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(Y_t,y^0)\mathbbm{1}_{\ell(Y_t,y^0)\geq 2^{-\alpha+1}\ln t}\leq \epsilon$. Since this holds for any $\epsilon>0$, we obtain on the event $\mathcal{A}$ that $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(Y_t,y^0)\mathbbm{1}_{\ell(Y_t,y^0)\geq 2^{-\alpha+1}\ln t} \leq 0$, which implies that $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t,Y_t)-\ell(\hat Y_t,\tilde Y_t)\leq 0$. Putting everything together, we obtain on $\mathcal{A}\cap\mathcal{B}\cap\bigcap_{M= 1}^{\infty}\mathcal{E}_M$ that for any $M\geq 1$,
\begin{align*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t,Y_t)-\ell(f(X_t), Y_t) \leq{}& \limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t,Y_t)-\ell(\hat Y_t,\tilde Y_t)\\
&+\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t,\tilde Y_t)-\ell( \phi_M\circ f(X_t),\tilde Y_t)\\
&+ \limsup_{T\to\infty}\frac{1}{T} \sum_{t=1}^T \ell( \phi_M\circ f(X_t),\tilde Y_t)-\ell( f(X_t),Y_t)\\
\leq{}& 2 \limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(y^0, Y_t)\mathbbm{1}_{\ell(Y_t,y^0)\geq 2^{-\alpha}M}.
\end{align*}
Because this holds for all $M\geq 1$, we can again apply this result to $M:=\lceil 2^{\alpha}\tilde M_{\epsilon/2} \rceil$, which yields $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t,Y_t)-\ell(f(X_t), Y_t)\leq \epsilon$. Because this holds for any $\epsilon>0$, we finally obtain on the event $\mathcal{A}\cap\mathcal{B}\cap\bigcap_{M= 1}^{\infty}\mathcal{E}_M$ of probability one that $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t,Y_t)-\ell(f(X_t), Y_t) \leq 0.$ This ends the proof of the theorem.
\end{proof}
This generalizes the main results from \cite{hanneke2022bayes} to unbounded non-metric losses, and from \cite{tsir2022metric} to non-metric losses, arbitrary responses, and $\text{CS}$ instance processes $\mathbb{X}$. Indeed, those works impose bounded first moment conditions on i.i.d. responses, and such responses are empirically integrable by Lemma \ref{lemma:link_to_first_moment}.
\subsection{Adversarial regression with moment condition under ${\text{SMV}}$ processes}
Lastly, we generalize Theorem \ref{thm:good_value_spaces}, which concerns value spaces satisfying $\text{F-TiME}$, to unbounded value spaces, under the same moment condition on responses. In order to apply Theorem \ref{thm:good_value_spaces} to bounded balls of the value space, we now ask that all balls $B_\ell(y,r)$ in the value space $(\mathcal{Y},\ell)$ satisfy $\text{F-TiME}$. For such value spaces, we will be able to recover learning for adversarial responses under all ${\text{SMV}}$ processes. Note that if this property is not satisfied, then the set of learnable processes for adversarial responses under this moment condition is automatically reduced to $\text{CS}$. Indeed, given a ball $B_\ell(y,r)$ which does not satisfy $\text{F-TiME}$, one can focus on responses taking values in this ball and directly apply the negative result from Theorem \ref{thm:bad_value_spaces} to show that adversarial regression under processes $\mathbb{X}\notin\text{CS}$ is not achievable.
\ThmSOULRegressionUnbounded*
\begin{proof}
Fix $(\mathcal{X},\rho)$ and a value space $(\mathcal{Y},\ell)$ such that any ball satisfies Condition 1. We now construct our learning rule. Let $\bar y\in\mathcal{Y}$ be an arbitrary value. For any $M\geq 1$, because $B_\ell(\bar y,M)$ is bounded and satisfies Condition 1, there exists a Bayes optimistically universal learning rule $f_\cdot^M$ for the value space $(B_\ell(\bar y,M),\ell)$. For any $M\geq 1$, we define the map $\phi_M:\mathcal{Y}\to\mathcal{Y}$ that restricts the space to the ball $B_\ell(\bar y,M)$ as follows
\begin{equation*}
\phi_M(y):=\begin{cases}
y &\text{if }\ell(y,\bar y)< M\\
\bar y &\text{otherwise}.
\end{cases}
\end{equation*}
For simplicity, we will denote by $\hat Y_t^M:=f^M_t(\mathbb{X}_{\leq t-1},\phi_M(\mathbb{Y})_{\leq t-1},X_t)$ the prediction of $f_\cdot^M$ at time $t$ for the responses restricted to the ball $B_\ell(\bar y,M)$. We now combine these predictors using online learning into a final learning rule $f_\cdot$. Specifically, we define the set of integers $I_t:=\{0\leq M\leq 2^{-\alpha+1}\ln t\}$ for all $t\geq 1$. We also denote $t_M=\lceil e^{2^{\alpha-1}M}\rceil$ for $M\geq 0$ and set $\eta_t=\frac{1}{4\sqrt t}$. For any $M\in I_t$, we define
\begin{equation*}
L_{t-1,M}:=\sum_{s=t_M}^{t-1}\ell(\hat Y_s^M,\phi_{2^{-\alpha+1}\ln s}(Y_s)).
\end{equation*}
For simplicity, we will denote by $\tilde\mathbb{Y}$ the process defined by $\tilde Y_t = \phi_{2^{-\alpha+1}\ln t}(Y_t)$ for all $t\geq 1$. We now construct recursive weights, starting from $w_{0,0}=1$; for $t\geq 2$ we set, for all $1\leq s\leq t-1$,
\begin{equation*}
\hat \ell_s:= \frac{\sum_{M\in I_s}w_{s-1,M}\ell(\hat Y_s^M,\tilde Y_s)}{\sum_{M\in I_s} w_{s-1,M}}.
\end{equation*}
Now for any $M\in I_t$ we write $\hat L_{t-1,M}:=\sum_{s=t_M}^{t-1} \hat \ell_s$, and set $w_{t-1,M}:=e^{\eta_t(\hat L_{t-1,M}-L_{t-1,M})}$. We then choose a random index $\hat M_t$, independent of the past history, such that
\begin{equation*}
\mathbb{P}(\hat M_t = M):= \frac{w_{t-1,M}}{\sum_{M'\in I_t}w_{t-1,M'}},\quad M\in I_t.
\end{equation*}
The output of the learning rule is $f_t(\mathbb{X}_{\leq t-1},\mathbb{Y}_{\leq t-1},X_t):= \hat Y_t^{\hat M_t}$. For simplicity, we will denote by $\hat Y_t:=f_t(\mathbb{X}_{\leq t-1},\mathbb{Y}_{\leq t-1},X_t)$ the prediction of $f_\cdot$ at time $t$. This ends the construction of our learning rule.
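The selection of $\hat M_t$ above is a standard exponentially weighted forecaster over the active truncation levels. As an illustration only (not part of the proof), here is a toy discrete sketch in Python; the function name \texttt{exp\_weights\_select}, its arguments, and the example inputs are ours and do not appear in the paper.

```python
import math
import random

def exp_weights_select(losses, t, alpha=2.0, rng=None):
    """Toy version of the exponentially weighted choice of a truncation level.

    `losses[M] = (hat_L, L)` holds the cumulative mixture loss hat L_{t-1,M}
    and the cumulative loss L_{t-1,M} of predictor f^M, accumulated since
    time t_M.  Assumes t is large enough that at least level M=0 is active.
    Returns (sampled level hat M_t, selection probabilities).
    """
    rng = rng or random.Random(0)
    eta = 1.0 / (4.0 * math.sqrt(t))                  # eta_t = 1 / (4 sqrt t)
    # active levels I_t = {0 <= M <= 2^{-alpha+1} ln t}
    active = [M for M in losses if M <= 2 ** (-alpha + 1) * math.log(t)]
    # w_{t-1,M} = exp(eta_t (hat L_{t-1,M} - L_{t-1,M}))
    w = {M: math.exp(eta * (losses[M][0] - losses[M][1])) for M in active}
    total = sum(w.values())
    probs = {M: wi / total for M, wi in w.items()}
    u, acc = rng.random(), 0.0
    for M, p in probs.items():                        # sample hat M_t
        acc += p
        if u <= acc:
            return M, probs
    return M, probs                                   # floating-point fallback
```

Levels whose own cumulative loss is small relative to the mixture loss receive exponentially larger weight, mirroring $w_{t-1,M}=e^{\eta_t(\hat L_{t-1,M}-L_{t-1,M})}$.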
Now let $(\mathbb{X},\mathbb{Y})$ be such that $\mathbb{X}\in\text{SOUL}$ and $\mathbb{Y}$ is empirically integrable. By Lemma \ref{lemma:empirically_integrable}, there exists some value $y_0\in\mathcal{Y}$ such that on an event $\mathcal{A}$ of probability one, we have for any $\epsilon>0$ a threshold $M_\epsilon\geq 0$ with $\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T\ell(y_0,Y_t)\mathbbm{1}_{\ell(y_0,Y_t)\geq M_\epsilon} \leq \epsilon.$ We fix a measurable function $f:\mathcal{X}\to\mathcal{Y}$. Also, for any $t\geq 1$ and $M\in I_t$ we have $0\leq \ell(\hat Y^M_t,\tilde Y_t) \leq 2^{\alpha-1}\ell(\hat Y^M_t,\bar y) + 2^{\alpha-1}\ell(\tilde Y_t,\bar y) \leq 2\ln t$. As a result, for any $M, M'\in I_t$ we have $|\ell( \hat Y_t^M,\tilde Y_t)-\ell(\hat Y_t^{M'},\tilde Y_t)|\leq 2\ln t$. Because $|I_t|\leq 1+\ln t$ for all $t\geq 1$, the same proof as in Theorem \ref{thm:mean_estimation} shows that on an event $\mathcal{B}$ of probability one, there exists $\hat t\geq 0$ such that
\begin{equation*}
\forall t\geq \hat t,\forall M\in I_t,\quad \sum_{s=t_M}^t \ell(\hat Y_s,\tilde Y_s)\leq \sum_{s=t_M}^t \ell(\hat Y_s^M,\tilde Y_s) + 3\ln^2 t\sqrt t.
\end{equation*}
Further, we know that $f_\cdot^M$ is Bayes optimistically universal for value space $(B_\ell(\bar y,M),\ell)$. In particular, because $\mathbb{X}\in\text{SOUL}$ and $\phi_M\circ f:\mathcal{X}\to B_\ell(\bar y,M)$, we have
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(\hat Y^M_t,\phi_M(Y_t)) - \ell(\phi_M\circ f(X_t),\phi_M(Y_t)) \leq 0\quad (a.s.).
\end{equation*}
For simplicity, we introduce the quantity $\delta_T^M:= \frac{1}{T}\sum_{t=1}^T \ell(\hat Y^M_t,\phi_M(Y_t)) - \ell(\phi_M\circ f(X_t),\phi_M(Y_t))$ and define $\mathcal{E}_M$ as the event of probability one where the above inequality is satisfied, i.e., $\limsup_{T\to\infty} \delta_T^M\leq 0$. Because we always have
$\ell(\hat Y_t,\bar y)\leq 2^{-\alpha+1} \ln t$, we can write
\begin{align*}
\frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t,Y_t) -\ell(\hat Y_t,\tilde Y_t)&= \frac{1}{T}\sum_{t=1}^T \left( \ell(\hat Y_t,Y_t) -\ell(\hat Y_t,\bar y) \right)\mathbbm{1}_{\ell(Y_t,\bar y)\geq 2^{-\alpha+1}\ln t}\\
&\leq \frac{1}{T}\sum_{t=1}^T \left( 2^{\alpha-1}\ell(\hat Y_t,\bar y) +2^{\alpha-1}\ell(Y_t,\bar y) \right)\mathbbm{1}_{\ell(Y_t,\bar y)\geq 2^{-\alpha+1}\ln t}\\
&\leq \frac{2^\alpha}{T}\sum_{t=1}^T \ell(Y_t,\bar y) \mathbbm{1}_{\ell(Y_t,\bar y)\geq 2^{-\alpha+1}\ln t}.
\end{align*}
The proof of Theorem \ref{thm:CS_regression_unbounded} shows that on the event $\mathcal{A}$ we have $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(Y_t,\bar y) \mathbbm{1}_{\ell(Y_t,\bar y)\geq 2^{-\alpha+1}\ln t}\leq 0$, which implies $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t,Y_t) -\ell(\hat Y_t,\tilde Y_t) \leq 0$. Now let $M\geq 1$. We write
\begin{align*}
\frac{1}{T}\sum_{t=1}^T \ell(\hat Y^M_t,\tilde Y_t) - \ell(\hat Y^M_t,\phi_M(Y_t)) &\leq \frac{1}{T}\sum_{t=1}^{t_M-1} \ell(\hat Y^M_t,\tilde Y_t) + \frac{1}{T}\sum_{t=t_M}^T \left(\ell(\hat Y^M_t,Y_t) - \ell(\hat Y^M_t,\bar y) \right)\mathbbm{1}_{M\leq \ell(Y_t,\bar y)<2^{-\alpha+1}\ln t}\\
&\leq \frac{e^{2^{\alpha-1}M}2^{\alpha}M}{T} + \frac{1}{T}\sum_{t=1}^T \left(2^{\alpha-1}\ell(\hat Y_t^M,\bar y) + 2^{\alpha-1}\ell(Y_t,\bar y)\right)\mathbbm{1}_{ \ell(Y_t,\bar y)\geq M}\\
&\leq \frac{e^{2^{\alpha-1}M}2^{\alpha}M}{T} + \frac{2^{\alpha}}{T}\sum_{t=1}^T \ell(Y_t,\bar y)\mathbbm{1}_{ \ell(Y_t,\bar y)\geq M}.
\end{align*}
Hence, on the event $\mathcal{A}$, we obtain \begin{equation*}
\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T \ell(\hat Y^M_t,\tilde Y_t) - \ell(\hat Y^M_t,\phi_M(Y_t)) \leq 2^{\alpha} \limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(Y_t,\bar y)\mathbbm{1}_{ \ell(Y_t,\bar y)\geq M}.
\end{equation*}
Finally, we compute
\begin{align*}
\frac{1}{T}\sum_{t=1}^T & \ell(\phi_M\circ f(X_t),\phi_M(Y_t)) - \ell(f(X_t),Y_t)\\
&\leq \frac{1}{T}\sum_{t=1}^T \left(\ell(\bar y,Y_t) - \ell(f(X_t),Y_t) \right) \mathbbm{1}_{\ell(f(X_t),\bar y)\geq M}\mathbbm{1}_{\ell(Y_t,\bar y)\leq M}\\
&\quad\quad\quad+ \frac{1}{T}\sum_{t=1}^T \left(\ell( f(X_t),\bar y) - \ell(f(X_t),Y_t) \right) \mathbbm{1}_{\ell(f(X_t),\bar y)\leq M}\mathbbm{1}_{\ell(Y_t,\bar y)\geq M}\\
&\leq \frac{1}{T}\sum_{t=1}^T \ell(\bar y,Y_t) \mathbbm{1}_{\ell(Y_t,\bar y)\geq 2^{-\alpha}M} + \frac{1}{T}\sum_{t=1}^T \left(\ell(\bar y,Y_t) - \ell(f(X_t),Y_t) \right) \mathbbm{1}_{\ell(f(X_t),\bar y)\geq M}\mathbbm{1}_{\ell(Y_t,\bar y)\leq 2^{-\alpha}M} + \frac{M}{T}\sum_{t=1}^T \mathbbm{1}_{\ell(Y_t,\bar y)\geq M}\\
&\leq \frac{1}{T}\sum_{t=1}^T \ell(\bar y,Y_t) \mathbbm{1}_{\ell(Y_t,\bar y)\geq 2^{-\alpha}M} + \frac{1}{T}\sum_{t=1}^T \left(2\ell(\bar y,Y_t) - 2^{-\alpha+1}\ell(f(X_t),\bar y) \right) \mathbbm{1}_{\ell(f(X_t),\bar y)\geq M}\mathbbm{1}_{\ell(Y_t,\bar y)\leq 2^{-\alpha}M} \\
&\quad\quad\quad+ \frac{1}{T}\sum_{t=1}^T \ell(Y_t,\bar y)\mathbbm{1}_{\ell(Y_t,\bar y)\geq M} \\
&\leq \frac{1}{T}\sum_{t=1}^T \ell(\bar y,Y_t) \mathbbm{1}_{\ell(Y_t,\bar y)\geq 2^{-\alpha}M} + \frac{1}{T}\sum_{t=1}^T \ell(Y_t,\bar y)\mathbbm{1}_{\ell(Y_t,\bar y)\geq M} .
\end{align*}
We now put all these estimates together. On the event $\mathcal{A}\cap\mathcal{B}\cap\bigcap_{M=1}^\infty \mathcal{E}_M$, for any $M\geq 1$ and $T\geq \max(\hat t,t_M)$ we can write
\begin{align*}
\frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t,Y_t) - \ell( f(X_t),Y_t) \leq{}& \frac{1}{T}\sum_{t=1}^T \left(\ell(\hat Y_t,Y_t) - \ell(\hat Y_t,\tilde Y_t)\right) + \frac{1}{T}\sum_{t=1}^T \left(\ell(\hat Y_t,\tilde Y_t) - \ell(\hat Y^M_t,\tilde Y_t)\right)\\
&+ \frac{1}{T}\sum_{t=1}^T \left(\ell(\hat Y^M_t,\tilde Y_t) - \ell(\hat Y^M_t,\phi_M(Y_t))\right) + \delta_T^M + \frac{1}{T}\sum_{t=1}^T \left(\ell(\phi_M\circ f(X_t),\phi_M(Y_t)) - \ell(f(X_t),Y_t)\right)\\
\leq{}& \frac{1}{T}\sum_{t=1}^T \left(\ell(\hat Y_t,Y_t) - \ell(\hat Y_t,\tilde Y_t)\right) +\frac{3\ln^2 T}{\sqrt T} + \frac{1}{T}\sum_{t=1}^T \left(\ell(\hat Y^M_t,\tilde Y_t) - \ell(\hat Y^M_t,\phi_M(Y_t))\right)\\
&+ \delta_T^M + \frac{1}{T}\sum_{t=1}^T \left(\ell(\phi_M\circ f(X_t),\phi_M(Y_t)) - \ell(f(X_t),Y_t)\right).
\end{align*}
Thus, we obtain on the event $\mathcal{A}\cap\mathcal{B}\cap\bigcap_{M=1}^\infty \mathcal{E}_M$, for any $M\geq 1$,
\begin{multline*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t,Y_t) - \ell( f(X_t),Y_t) \leq \limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T \ell(\bar y,Y_t) \mathbbm{1}_{\ell(Y_t,\bar y)\geq 2^{-\alpha}M}\\
+(1+2^{\alpha})\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(Y_t,\bar y)\mathbbm{1}_{\ell(Y_t,\bar y)\geq M}.
\end{multline*}
On the event $\mathcal{A}$, the same arguments as in the proof of Theorem \ref{thm:CS_regression_unbounded} show that the guarantees for $y_0$ transfer to $\bar y$, i.e., for any $\epsilon>0$, there exists $\tilde M_\epsilon$ such that $\limsup_{T\to\infty}\frac{1}{T}\sum_{t=1}^T \ell(Y_t,\bar y)\mathbbm{1}_{\ell(Y_t,\bar y)\geq \tilde M_\epsilon} \leq \epsilon$. Therefore, for any $\epsilon>0$, we can apply the above equation to $M:=\lceil 2^\alpha \tilde M_\epsilon + \tilde M_{2^{-\alpha-1}\epsilon}\rceil$ to obtain
\begin{equation*}
\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \ell(\hat Y_t,Y_t) - \ell( f(X_t),Y_t) \leq \epsilon + (1+2^{\alpha})\frac{\epsilon}{2^{\alpha+1}} \leq 2\epsilon.
\end{equation*}
Because this holds for all $\epsilon>0$, we finally get $\limsup_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \left(\ell(\hat Y_t,Y_t) - \ell(f(X_t),Y_t)\right) \leq 0$ on the event $\mathcal{A}\cap\mathcal{B}\cap\bigcap_{M\geq 1}\mathcal{E}_M$ of probability one. This ends the proof of the theorem.
\end{proof}
Theorems \ref{thm:CS_regression_unbounded} and \ref{thm:SOUL_regression_unbounded} completely characterize learnability for adversarial regression with a moment condition. Namely, if the value space $(\mathcal{Y},\ell)$ is such that every bounded ball satisfies $\text{F-TiME}$, then Theorem \ref{thm:SOUL_regression_unbounded} provides an optimistically universal learning rule which achieves consistency under all processes in ${\text{SMV}}$. On the other hand, if there exists a ball $B_\ell(y,r)$ which does not satisfy $\text{F-TiME}$, then Theorem \ref{thm:CS_regression_unbounded} provides an optimistically universal learning rule which achieves consistency under all processes in $\text{CS}$. This ends our analysis of adversarial regression for unbounded value spaces.
\section{Open research directions}
\label{sec:conclusion}
In this work we provided a characterization of learnability for universal learning in the regression setting, for a class of losses satisfying specific relaxed triangle inequalities, which contains powers of metrics $\ell=|\cdot|^\alpha$ for $\alpha\geq 1$. A natural question is whether one can generalize these results to larger classes of losses, for instance non-symmetric losses, which may appear in classical machine learning problems.
The present work could also have implications for adversarial contextual bandits. Specifically, one may consider the case of a learner which receives partial information on the rewards/losses, as opposed to the traditional regression setting where the response is completely revealed at each iteration. In the latter case, the learner can for instance compute the loss of \emph{all} values with respect to the response realization. On the other hand, in the contextual bandits framework, the reward/loss is revealed \emph{only} for the pulled arm---or equivalently, the prediction of the learner. In these partial information settings, exploration becomes necessary. The authors are investigating whether the results presented in this work have consequences in these related domains.
\paragraph{Acknowledgements.}The authors are grateful to Prof. Steve Hanneke for enlightening discussions. This work was partly funded by ONR grant N00014-18-1-2122.
\printbibliography
\end{document}
\section{Introduction}
We study a model for the spread of an infection in a collection of moving interacting particles, based on the well-known epidemiological \textbf{SIR} (Susceptible-Infected-Removed) model. The process is initiated from a Poisson process of susceptible particles on $\Z^d$ with density $\mu > 0$. All particles perform independent continuous-time simple random walks on $\Z^d$. At time $0$ a single infected particle is added at the origin $0$. When an infected particle and a susceptible particle meet at a vertex, the susceptible particle becomes infected. Infected particles recover at rate $\nu > 0$, after which point they are removed and no longer affect the process.
Our goal is to understand the evolution of this process and how the infection spreads over time.
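To make the dynamics concrete, the following is a minimal, heavily simplified simulation sketch (our illustration, not from the paper): discrete time instead of continuous time, a finite box of $\Z^2$ instead of all of $\Z^d$, and simultaneous unit steps for all particles. The function name and its parameters are ours.

```python
import math
import random

def simulate_sir(mu=1.0, nu=0.5, side=10, steps=100, seed=0):
    """Crude discrete-time caricature of the lattice SIR particle model:
    Poisson(mu) susceptibles per site of a finite box, one infected
    particle at the origin, synchronous nearest-neighbour moves,
    infection on co-location, recovery w.p. 1 - exp(-nu) per step.
    Returns final counts (susceptible, infected, removed)."""
    rng = random.Random(seed)

    def poisson(lam):                     # Knuth's method, fine for small lam
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    susceptible = []
    for x in range(-side, side + 1):      # Poisson(mu) particles per site
        for y in range(-side, side + 1):
            susceptible += [(x, y)] * poisson(mu)
    infected, removed = [(0, 0)], 0       # single infected particle at 0
    p_rec = 1.0 - math.exp(-nu)           # per-step recovery probability
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    for _ in range(steps):
        if not infected:
            break
        def jump(v):
            dx, dy = rng.choice(moves)    # independent move for each particle
            return (v[0] + dx, v[1] + dy)
        susceptible = [jump(v) for v in susceptible]
        infected = [jump(v) for v in infected]
        hot = set(infected)               # sites holding an infected particle
        susceptible, newly = ([v for v in susceptible if v not in hot],
                              [v for v in susceptible if v in hot])
        infected += newly                 # meetings transmit the infection
        survivors = [v for v in infected if rng.random() > p_rec]
        removed += len(infected) - len(survivors)
        infected = survivors
    return len(susceptible), len(infected), removed
```

Even this caricature exhibits the two regimes discussed below: for large $\nu$ the infection typically dies immediately, while for small $\nu$ it can burn through a macroscopic part of the box before extinction.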
This model fits within a broader class of dynamical interacting particle systems including Susceptible-Infected (SI also called A/B or X/Y) and Susceptible-Infected-Susceptible (SIS) models of infection spread, multi-particle Diffusion Limited Aggregation (mDLA), frog models, and activated random walks. See Section \ref{SS:related} for discussion of related work on these models. Each of these models can be viewed as an infection spreading through a population. Typically, we are interested in understanding the rate of spread and properties of the infected region.
In contrast to the other models, the challenge here is that particles remain infected for only $O(1)$ time before being removed, and so particle density decreases as the infection spreads. Therefore, even if the infection survives forever we do not expect to see a growing ball of infected particles. Rather, we expect a growing sphere of particles of $O(1)$ width dividing the plane between an outer supercritical region with a high density of susceptible particles and a low density interior in a subcritical \emph{herd immunity} regime with too few susceptible particles to sustain an epidemic.
The SIR model originates in epidemiology and is generally studied in the mean field setting where any particle can infect any other~\cite{kermack1927contribution,ross1917application}. In large populations its evolution can be studied by deterministic differential equations (see \cite{anderson1991discussion} for a survey of the plethora of variants and applications). It has also been studied in the case where particles occupy fixed vertices in a graph and can infect neighbouring vertices. In the lattice setting, this model was first studied in detail by Kuulasmaa~\cite{kuulasmaa1982spatial}, building on work in simpler models by Mollison~\cite{mollison1977spatial}.
In the case of random graphs with given degree distributions, Newman~\cite{newman2002spread} characterized the critical threshold for infection spread.
The case of moving particles is more mathematically challenging as the dynamic environment of particles has a complicated dependence on the infection process. Moreover, an additional challenge is that, unlike in various SI or SIS models, it lacks any obvious monotonicity in the density and recovery rate parameters $\mu$ and $\nu$. For example, if we couple two SIR models at different densities, then while the higher density model initially has more infections, this may cause particles to recover earlier, decreasing the density and breaking the monotone coupling between the models.
Our first main theorem shows that the SIR model displays qualitatively different behaviour at high and low recovery rates.
We let ${\bf S_t}$ and ${\bf I_t}$ denote the sets of susceptible and infected particles at time $t$, respectively. For this next theorem, we let $\P(d, \mu, \nu)$ denote the \textbf{survival probability} for the infection process in dimension $d$, density $\mu$ and recovery rate $\nu$. That is, $\P(d, \mu, \nu)$ is the probability that ${\bf I_t} \ne \emptyset$ for all $t > 0$.
\begin{theorem}
\label{T:main-1}
Fix any density $\mu > 0$ and any dimension $d \ge 2$. Then there exist $\nu^- = \nu^-(\mu, d), \nu^+ = \nu^+(\mu, d) \in (0, \infty)$ such that if $\nu > \nu^+$, then $\P(d, \mu, \nu) = 0$ and if $\nu < \nu^-$ then $\P(d, \mu, \nu) > 0$. Moreover, $\P(d, \mu, \nu) \to 1$ as $\nu \to 0$.
\end{theorem}
We expect that $\P(d, \mu, \nu)$ is a monotone function of $\nu$ and therefore there exists a critical recovery rate $\nu_c \in (0,\infty)$. However, proving this monotonicity appears to be a difficult problem for the reasons discussed above. Indeed, closely related processes are not monotone on certain graphs, see~\cite{deijfen2006nonmonotonic, candellero2021first}.
The fact that $\P(d, \mu, \nu) = 0$ for all large enough $\nu$ was previously established by Kesten and Sidoravicius in \cite{kesten2006phase} for the SIS model where particles become susceptible again after recovering from the infection. Their result immediately implies ours since the infection process for the SIS model stochastically dominates the infection process for the SIR model and so this part of Theorem~\ref{T:main-1} is not new. However, their proof is quite involved as it has to deal with the possibility of particles becoming reinfected. In the SIR setting, a much more straightforward supermartingale argument works, see Section \ref{S:recovery}. Recently, Grimmett and Li~\cite{grimmett2020brownian} have shown that $\P(d, \mu, \nu) = 0$ for large enough $\nu$ in a Brownian version of our SIR model. Their broad proof strategy is similar to ours but differs in the precise setup (and naturally, there are also different technicalities that come from working in discrete vs. continuous space). The proof that $\P(d, \mu, \nu) \to 1$ as $\nu \to 0$ is much more difficult and is given in Section \ref{S:linear}.
A version of Theorem \ref{T:main-1} also holds if we first fix $\nu$ and allow the density $\mu$ to vary instead. Indeed, the results of Section \ref{S:recovery} show that for fixed $\nu$, if $\mu$ is low enough then $\P(d, \mu, \nu) = 0$. A variant of the arguments in Section \ref{S:linear} would show that $\P(d, \mu, \nu) \to 1$ as $\mu \to \infty$, but the details are sufficiently different that we do not include the proof here.
\begin{remark}[Dimension $1$]
In dimension $1$, it is not difficult to check that the SIR process dies out exponentially quickly at every recovery rate $\nu$ and every density $\mu$. Indeed, by Theorem~1 in~\cite{kesten2005spread}, there exists a constant $C > 0$ such that the set of infected particles ${\bf I_t}$ is contained in the interval $[-Ct, Ct]$ with exponentially high probability. Since particles move diffusively and each particle is only infected for an $O(1)$ amount of time, this implies that there is a constant $C' \in \N$ such that for most times $t$, at most $C'$ total particles are infected. At any such time $t$, there is a positive probability, depending on $C'$ but not on $t$, that all of these particles recover before encountering any susceptible particles. Hence the infection must die out exponentially quickly.
\end{remark}
Theorem \ref{T:main-1} establishes that the SIR process can exhibit both recovery regimes and survival regimes. While the recovery regime is not particularly interesting, there are many possibilities for how the process might behave in the survival regime. The next theorem describes the particular behaviour of the process in the survival regime.
\begin{theorem}
\label{T:main-2}
Fix any density $\mu > 0$, any dimension $d \ge 2$, and $\nu < \nu^-(\mu, d)$. Then there is a positive probability event $\cG_\nu$ with $\P[\cG_\nu] \to 1$ as $\nu \to 0$, such that on $\cG_\nu$ the following events hold. Here $c, C > 0$ are constants that depend on $\nu, \mu, d$.
\begin{enumerate}
\item (Survival) ${\bf I_t} \ne \emptyset$ for all $t \ge 0$.
\item (Linear growth) For all large enough $t$, we have
$$
{\bf I_t} \sset B(0, C t) \smin B(0, c t) .
$$
\item (Infection Duration) For every site $x \in \Z^d$, let $D_x$ be the length of time between the first appearance of an infected particle at site $x$ and the last appearance. If an infected particle never reaches site $x$, set $D_x = 0$. Then for all $x \in \Z^d$ and $m \ge 3$ we have
\begin{equation}
\label{E:Dxm}
\P(D_x > m \;|\; \cG_\nu) \le C\exp(-m^{c/\log \log m}).
\end{equation}
In particular, on $\cG_\nu$ there exists a random constant $D > 0$ such that for all $x \in \Z^d$ we have
\begin{equation}
\label{E:DxDbound}
D_x \le D + [\log (\|x\|_2 + 3)]^{C \log \log \log (\|x\|_2 + 3)}.
\end{equation}
\item (Herd Immunity in the centre) We have
$$
\liminf_{t \to \infty} \frac{|{\bf S}_t \cap B(0, c t) |}{t^d} > 0.
$$
\end{enumerate}
\end{theorem}
We strongly believe that $\cG_\nu$ can be taken to be the survival event
$$
\{\textbf{I}_t \ne \emptyset \text{ for all } t\}.
$$
However, our methods do not suffice to show this.
\subsection{Proof Sketch}
In~\cite{dauvergne2021spread} we studied the evolution of infections in the SI model where particles do not recover. Previous work of Kesten and Sidoravicius~\cite{kesten2005spread,kesten2008shape} had analyzed this model in the setting where susceptible and infected particles move with the same speed. Under this assumption, unlabeled particle trajectories are independent random walks and are not affected by the infection process. This work left open the problem of models where the particle environment was dependent on the infection process.
Our work in~\cite{dauvergne2021spread} developed a technical framework to establish linear growth in general SI models which is flexible enough to study many other models.
In the present work we use it to understand the different phenomena that occur in SIR models, particularly the way in which the infection process divides the plane into subcritical and supercritical regimes divided by an infected region of $O(1)$ width.
We divide the lattice into blocks of fixed side length $L$ chosen based on the recovery rate $\nu$ (see equation~\eqref{E:Lnu-relationship}). We define a colouring process on the set of blocks where a block is coloured the first time an infected particle enters it or if it has been next to a coloured block for a sufficient period of time (see Section~\ref{S:SI-colouring} for the precise definition). In order to establish sufficient spatial independence, we only observe particles that have previously visited the coloured region. Outside of the coloured region, the conditional distribution of the particles is Poisson with a random intensity that can be bounded from below. This description allows us to define the SIR process in a spatially independent way according to the colouring process.
To each block we associate an independent Poisson process and use this to generate the particles inside that block at the time it is coloured, as well as their future trajectories and healing times.
The lower bound on the intensity implies a high-probability lower bound on the number of particles present in a block $B$ at the random time $\tau_B$ when it is first coloured. This implies that for most blocks, if they are infected at time $\tau_B$ then the infection will quickly be propagated to the neighbouring blocks irrespective of where the infection first enters. Those that do not have this property are labeled as \emph{blue seeds}. Our block construction of the SIR process using independent Poisson processes guarantees that the blue seed process is stochastically dominated by a highly subcritical percolation in the low recovery rate regime. To establish that the infection survives and grows linearly we couple the colouring process to a competitive growth model called Sidoravicius--Stauffer percolation (SSP) first used to study multi-particle DLA~\cite{sidoravicius2019multi}.
Our analysis here goes beyond the questions addressed in~\cite{dauvergne2021spread}, and a particular focus is the study of the herd immunity regime left behind after the infection passes through a region. Our aim is to show that in this herd immunity regime, there are no infected particles remaining while a small density of susceptible particles remains uninfected. To show that the infection dies out locally after the wave of infections passes through, we first use random walk estimates to show that the density of unremoved particles is small in regions that were coloured long ago. We then combine this with a more delicate version of the supermartingale argument used to analyze the high recovery rate regime to prove that the infection dies out completely with high probability.
To show that there is a small density of susceptible particles that never become infected we show that locally each block has a small probability of having a particle that survives for a long period of time after $\tau_B$. After the wave of infections has passed through, it is unlikely that there are ever infected particles in a neighbourhood of this susceptible particle. In order to show that many such particles survive we show that this event can be bounded below by one that is essentially local.
A particular challenge in establishing the features of the herd immunity regime is to show that the events in question can essentially be defined locally using the independent Poisson blocks, i.e. that they do not depend on the relative colouring times $\tau_B$ or on the behaviour of the coupled Sidoravicius--Stauffer percolation, both of which in principle can have long range dependencies.
\subsection{Related work}
\label{SS:related}
The type of SIR model we study in this paper fits within a broader class of dynamical interacting particle systems
built from random walks on the lattice. If we take our SIR model and make all susceptible particles stationary, then we recover the frog model on $\Z^d$ (with recovery). Alves, Machado, and Popov~\cite{alves2002phase} showed that
this model exhibits a phase transition, obtained asymptotics for the critical recovery rate, and found a shape theorem in the SI setting where recovery is not permitted~\cite{alves2002shape}. The frog model has also been well studied in the SIS (Susceptible-Infected-Susceptible) setting, where frogs recover from the infection and return to a stationary susceptible state; this variant is known as `activated random walks', e.g. see~\cite{dickman2010activated, rolla2020activated, stauffer2018critical} for review articles and recent progress.
The case when both susceptible and infected particles are allowed to move is more mathematically challenging as the dynamic environment of particles has a much more complicated dependence on the infection process. In the SI and SIS settings, as discussed above this model was seriously attacked in a series of papers of Kesten and Sidoravicius~\cite{kesten2005spread, kesten2008shape, kesten2006phase}, culminating in a shape theorem for the infected region in the SI model. See~\cite{kesten2012asymptotic} for a survey of problems about these types of models and~\cite{baldasso2020local, baldasso2021local} for more recent work.
A version of the SI model on $\Z^d$ where susceptible particles can move but infected particles cannot move and instead form a stationary growing aggregate is known as multi-particle diffusion limited aggregation (mDLA). While the character of mDLA is somewhat different (for example, the aggregate has a fractal-like structure at low densities), the techniques used to study mDLA are useful here. Indeed, this line of work introduced SSP, known there as first passage percolation in a hostile environment (FPPHE), as a tool in the study of mDLA in~\cite{sidoravicius2019multi}. See~\cite{finn2020non, candellero2021coexistence, candellero2021first} for further developments on FPPHE. A simpler version of the block Poisson description of the SIR process was also first used to study mDLA in one dimension in~\cite{sly2021one}.
\subsection{Outline of the paper}
For simplicity, throughout the paper we assume that the dimension $d=2$. The proofs in the case of general $d \ge 2$ go through essentially verbatim.
In Section \ref{S:recovery}, we give a martingale argument proving that $\P(d, \mu, \nu) = 0$ for all large enough $\nu$. This section also includes a technical extension of this result that we will use later on to establish local infection recovery in the survival regime. In Section \ref{S:SSP} we introduce SSP and in Section \ref{S:linear} we couple an SSP with a block description of the SIR process to prove that $\P(d, \mu, \nu) \to 1$ as $\nu \to 0$ and that the infection spreads linearly. The remaining sections prove parts $2, 3,$ and $4$ of Theorem \ref{T:main-2}. Section \ref{S:global} defines events that will allow us to separate out local and global effects on the SIR process. Section \ref{S:upper-bd-density} gives a local construction that shows that once the infection passes through a region, the density of unremoved particles in that region quickly decreases with high probability, leaving that region in a subcritical regime. Section \ref{S:survival} provides a converse to the results of Section \ref{S:upper-bd-density}, by defining a low probability local event where a particle survives long after the infection has passed through a region. Section \ref{S:herd} delicately puts together the constructions from all the previous sections to prove Theorem \ref{T:main-2}.
\subsection{Notational conventions}
Rather unfortunately, the paper is loaded with notation. Moving forward, we use the following conventions to help the reader orient themselves:
\begin{itemize}[nosep]
\item Events are typically denoted with calligraphic symbols $\cA, \cB, \cC, \dots$
\item Large constants are denoted by $C, C', C_1, C_2, \dots$ and small constants are denoted by $c, c', c_1, c_2, \dots$. In Sections \ref{S:recovery} and \ref{S:SSP}, all constants are absolute.
From Section \ref{S:linear} onward, we will fix a density $\mu$ and all constants will depend on $\mu$ but no other parameters. We always allow the meaning of constants to change from line to line.
\item We always use fraktur notation $\fA, \fB, \dots$ to describe objects arising from the study of Sidoravicius--Stauffer percolation in Section \ref{S:SSP}.
\item We do not include ceilings and floors unless they materially affect arguments.
\item Many numbers, e.g. $4000, 5000$, are arbitrary. They are chosen so that various scales we use in the paper will nest together in the right ways. The reader should use the explicit constants only as a way to help orient themselves when we are considering multiple scales.
\item For $x, y \in \Z^2$, we write $d(x, y) = |x-y|$ for the $L^1$-distance (or graph distance) from $x$ to $y$. More generally, for sets $A, B$ we write $d(x, A) = \inf\{|x - y| : y \in A\}$ and $d(A, B) = \inf\{|x - y| : x \in A, y \in B\}$.
\item For a set $A \sset \Z^2$, we let $\del A = \{x \in A : d(x, A^c) = 1\}$ be the interior boundary of $A$.
\end{itemize}
\subsection*{Acknowledgements}
The authors would like to thank Alexander Stauffer for useful discussions. AS was supported by NSF grants DMS-1855527 and DMS-1749103, a Simons Investigator grant and a MacArthur Fellowship. DD was supported by an NSERC Discovery Grant.
\section{SIR recovery}
\label{S:recovery}
In this section, we show that in the high recovery rate regime, the infection process dies out exponentially quickly. A few modifications to the argument will also allow us to give a criterion for the SIR infection recovering in a local window where particles have an initial low density. This will be used later to establish that in the low recovery rate regime, the infection dies out quickly in a region after the infection front has passed through.
\begin{prop}
\label{P:high-recovery-rate}
Consider an SIR process with recovery rate $\nu$, started from a random initial configuration of susceptible particles that is stochastically dominated by a Poisson process of intensity $\nu/8$ with an initial infected particle added at the origin $0 := (0,0)$. Then for all $t > 0$, we have
$$
\P(\mathbf{I}_t \ne \emptyset) \le \E |\mathbf{I}_t| \le (1 + \nu/8)e^{-\nu t/2}.
$$
\end{prop}
To set up the proof of Proposition \ref{P:high-recovery-rate} we introduce a few conventions that we use throughout the paper. First, for a particle $a$, we let $\bar a:\R \to \Z^2$ denote its cadlag trajectory. We also associate to every particle an independent intensity $\nu$ Poisson process $\bar a^h$ on $\R$ which is the \textbf{healing process} for $a$. Each particle $a$ is healed at the first time in its healing process after it becomes infected. Standard results about Poisson processes imply that this gives the same dynamics as associating to each particle a rate-$\nu$ exponential recovery clock.
We will analyze the number of infected particles at a particular time by counting certain sequences of potentially infected particles known as active chains.
An \textbf{active chain $Z = (\Pi, Q)$ on an interval $[s, t]$} is a partition $\Pi = \{\pi_0 = s < \pi_1 < \dots < \pi_{k(Z)} = t\}$ and a sequence of particles $Q = \{a_i : i \in \{1, \dots, k(Z)\}\}$ such that
\begin{itemize}
\item $\oa_i(\pi_i) = \oa_{i+1}(\pi_i)$ and $\oa_i(\pi_i^-) \ne \oa_{i+1}(\pi_i^-)$ for all $i \in \{1, \dots, k(Z)-1\}$. Here $\oa(t^-) := \lim_{s \to t^-} \oa (s)$.
\item For all $i$, we have $\oa_i^h \cap [\pi_{i-1}, \pi_i] = \emptyset$.
\item $a_i \ne a_j$ for $i \ne j$.
\item For all $i \ge 2$, the particle $a_i$ is susceptible at time $0$.
\end{itemize}
For $r \in [\pi_{i-1}, \pi_i]$, we call $Z(r) := \oa_i (r)$ the \textbf{location} of $Z$ at time $r$ and $a_i$ the \textbf{label} of $Z$ at time $r$. We say that the active chain $Z$ starts at $(s, Z(s))$ and ends at $(t, Z(t))$. We allow the possibility that $s = t$ in the definition of active chains. In this case, an active chain always consists of a single particle. For a potential starting location $u = (s, z) \in [0, \infty) \times \Z^2$, and $t \ge s$, let $\sC_u(t)$ be the set of active chains in $X$ on the interval $[s, t]$ starting at $(s, z)$. We have the inequality
\begin{equation}
|{\bf I}_t| \le |\sC_{0, 0}(t)|,
\end{equation}
so to prove Proposition \ref{P:high-recovery-rate} it suffices to analyze $\sC_{0, 0}$.
The basic premise behind the proof of Proposition \ref{P:high-recovery-rate} is that $|\sC_{0, 0}|$ behaves like a supermartingale when the particle density is low or the recovery rate is high. Since we would also like a version of Proposition \ref{P:high-recovery-rate} that proves \emph{local} infection recovery even when we are in the global survival regime, we will analyze $|\sC_{0, 0}|$ in conjunction with more general supermartingale-like processes.
For $u = (s, z) \in [0, \infty) \times \Z^2$ and $A \sset \Z^2$, define the quantity
$$
I_{u, A}(t) = \sum_{Z \in \sC_u(t)} \exp (-\nu d(Z(t), A)/8).
$$
Note that $I_{0, \Z^2} = |\sC_{0, 0}|$.
To analyze $I_{u, A}$, we start with two preliminary lemmas. The first establishes that $\E I_{u, A}$ exists and is continuous if we start from a finite initial particle configuration.
\begin{lemma}
\label{L:IuA-exist-cts}
Consider an SIR process started from a possibly random initial configuration with at most $n$ particles, which may be either infected or susceptible.
For all $u = (s, z)$ and $A \sset \Z^2$, the quantity $\E I_{u, A}(r)$ is finite and continuous for $r \in [s, \infty)$.
\end{lemma}
\begin{proof} Fix $t > 0$ and let $r \in [s, t]$. Letting $T(t)$ be the set of times in $[s, t]$ when a particle jumps, any active chain can be encoded as a map from $\{s\} \cup T(t) \to \{1, \dots, n\}$ recording the label of $Z$ at each time in $\{s\} \cup T(t)$. Therefore
\begin{equation}
\label{E:IuA}
I_{u, A}(r) \le n^{|T(t)| + 1}.
\end{equation}
The quantity $|T(t)|$ is a Poisson random variable, so $\E I_{u, A}(r) < \infty$. To establish continuity of $\E I_{u, A}$ at $r \in [s, t)$, observe that the probability that any particle either recovers or jumps in the interval $(r-h, r + h)$ tends to $0$ as $h \to 0$, so almost surely $I_{u, A}(r + h) \to I_{u, A}(r)$ as $h \to 0$. Therefore by the dominated convergence theorem, which we can apply by \eqref{E:IuA}, $\E I_{u, A}$ is continuous at $r$.
\end{proof}
We will also need the following version of Gr\"onwall's inequality.
\begin{lemma}
\label{L:gronwall}
Let $f:[0, t] \to \R$ be a continuous function with $f(0) > 0$. Suppose that for all $s \in (0, t)$, the upper right Dini derivative satisfies
$$
D^+f(s) := \limsup_{h \to 0^+} \frac{f(s + h) - f(s)}{h} \le \al f(s)
$$
for some $\al \in \R$. Then
$
f(s) \le e^{\al s} f(0)
$
for all $s \in [0, t]$.
\end{lemma}
\begin{proof}
We just need two facts about Dini derivatives of continuous functions $f, g$: a product rule and a mean value theorem.
$$
D^+(fg) \le D^+(f)g + f D^+(g), \qquad \sup_{s \in (0, t)} D^+ f \ge \frac{f(t) - f(0)}t.
$$
These facts are easy to check and we leave their proofs to the reader. Now, define $g(s) = e^{-\al s} f(s)$ for $s \in [0, t]$. By the product rule and the assumption of the lemma we have
\begin{equation*}
\label{E:D^+gg}
D^+ g(s) \le e^{-\al s} D^+ f(s) - \al f(s) e^{-\al s}\le 0
\end{equation*}
for $s \in (0, t)$. Therefore by the mean value theorem, $g(s) \le g(0)$ on $[0, t]$ and so $f(s) \le e^{\al s} f(0)$.
\end{proof}
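For completeness, we indicate how the two Dini-derivative facts used above can be checked. For the product rule (with $f, g \ge 0$, as in our application), one can split the difference quotient as
$$
\frac{f(s+h)g(s+h) - f(s)g(s)}{h} = f(s+h)\,\frac{g(s+h)-g(s)}{h} + g(s)\,\frac{f(s+h)-f(s)}{h}
$$
and take $\limsup_{h \to 0^+}$, using the continuity of $f$ and the subadditivity of $\limsup$. For the mean value fact, note that $\psi(s) := f(s) - s(f(t)-f(0))/t$ satisfies $\psi(0) = \psi(t)$ and $D^+\psi = D^+ f - (f(t)-f(0))/t$; a continuous function whose upper right Dini derivative is strictly negative throughout $(0, t)$ would be strictly decreasing there, which is incompatible with $\psi(0) = \psi(t)$.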
We can now establish supermartingale-like behaviour for $I_{u, A}$.
\begin{lemma}
\label{L:Poisson-model-1} Consider an SIR process with recovery rate $\nu \le 1$, whose initial configuration of susceptible particles is stochastically dominated by a Poisson process of intensity $\nu/8$ and whose initial configuration of infected particles is at most countable. Then for any $u = (0, z)$ with $z \in \Z^2$ and any $A \sset \Z^2$, we have
$$
\E I_{u, A}(s) \le \E I_{u, A}(0) \exp(-\nu s/2).
$$
If $A = \Z^2$, we can drop the condition that $\nu \le 1$.
\end{lemma}
\begin{proof}
We write $I=I_{u, A}$ throughout the proof to simplify notation, let $P$ be the full set of initially susceptible particles, and let $P'$ be the full set of initially infected particles. We first consider the case when $|P| + |P'| \le n$ for some $n \in \N$. For $r < s \in [0, \infty)$ and a sequence of labels $Q$, let $\sC_u^{r, Q}(s)$ be the set of active chains on $[0, s]$ starting at $u$ whose label sequence up to time $r$ is given by $Q$.
Define $I^{r,Q}(s)$ in the same way as $I(s)$ but with the set $\sC_u^{r, Q}(s)$ in place of $\sC_u(s)$. We first show that for any $r \in [0, \infty)$, almost surely
\begin{equation}
\label{E:hEt}
\limsup_{h \to 0^+} \frac{1}{h} \E\lf(I^{r, Q}(r + h) - I^{r, Q}(r) \mid I^{r, Q}(r) \rg) \le -\nu I^{r, Q}(r)/2.
\end{equation}
First, this bound is trivially true if $\sC_u^{r, Q}(r) = \emptyset$. Now, when $\sC_u^{r, Q}(r)$ is nonempty, we consider how $\sC_u^{r, Q}(r)$ can change by time $r + h$ for small $h$. We enumerate the possibilities. In this enumeration, let $q$ be the final label in $Q$.
\begin{enumerate}[label=(\roman*)]
\item $\oq^h \cap (r, r + h] = \emptyset$, the particle $q$ does not move in the interval $(r, r+ h]$, and no particles in $P$ jump onto the square $\oq(r)$ in the interval $(r, r+ h]$. In this case, every active chain in $\sC_u^{r, Q}(r)$ extends uniquely to an active chain in $\sC_u^{r, Q}(r+h)$ and $I^{r, Q}(r + h) = I^{r, Q}(r)$.
\item $\oq^h \cap (r, r + h] = \emptyset$, the particle $q$ does not move in the interval $(r, r+ h]$, and exactly one particle $p \in P \smin Q$ jumps onto the square $\oq(r)$ in the interval $(r, r+h]$, does not heal in that interval, and does not jump again in that interval. In this case, $I^{r, Q}(r + h) = 2I^{r, Q}(r)$.
\item $\oq^h \cap (r, r + h] = \emptyset$ and the particle $q$ jumps exactly once in the interval $(r, r+h]$ from the location $y = \oq(r)$ to a neighbouring location $y'$ at which exactly $N$ particles of $P \smin Q$ are located at time $r$. No other particles in $P$ jump onto or off of $y$ or $y'$ in the interval $(r, r+h]$ and no particles heal in the interval $(r, r + h]$. In this case, $I^{r, Q}(r + h)$ equals
\begin{equation}
\label{E:IrQ}
I^{r, Q}(r)(N+1)\exp\lf(\frac{\nu(d(y, A) - d(y', A))}{8}\rg).
\end{equation}
Note that in the $A = \Z^2$ setting, this is simply $I^{r, Q}(r) (N+1)$. In the $A \ne \Z^2$ setting, using that $\nu \le 1$, this is bounded above by $I^{r, Q}(r) (N+1)(\nu/7 + 1)$.
\item The particle $q$ does not jump in the interval $(r, r+h]$, no particles in $P \smin Q$ jump from another square onto $\oq(r)$ in that interval, and $q$ heals once in that interval. In this case $I^{r, Q}(r + h) = 0$.
\item Other possibilities occur. In all these possibilities, if $E$ denotes the collection of all jump times and healing times of all particles in $P \cup P'$, then $|E \cap (r, r + h]| \ge 2$.
\end{enumerate}
To help bound the probability of (i)-(v) conditioned on $I^{r,Q}(r)$, observe that conditioning on $I^{r, Q}(r)$ gives no information about particles $p \in P \smin Q$. Moreover, the location of the particles in $P \smin Q$ at time $r$ is stochastically dominated by a Poisson process of intensity $\nu/8$, since this holds at time $0$. With this, we can easily see that conditionally on $I^{r, Q}(r)$,
\begin{equation}
\label{E:prob-bds}
\begin{split}
&\P (i) = 1 - O(h), \qquad \P (ii) \le \nu h/8 + O(h^2), \qquad \P (iii) = h + O(h^2), \\
&\P(iv) = \nu h + O(h^2), \qquad \P (v) = O(h^2).
\end{split}
\end{equation}
To get the estimate \eqref{E:hEt}, we also need to understand the conditional expectation of \eqref{E:IrQ} given $I^{r, Q}(r)$ and that the event in (iii) holds. Up to an $O(h)$ term, conditioning on (iii) is the same as conditioning on the particle $q$ jumping in $(r, r + h]$. Under this conditioning, $N$ is stochastically dominated by a Poisson random variable of mean $\nu/8$. Therefore
$$
\E\lf(I^{r, Q}(r + h) - I^{r, Q}(r) \mid I^{r, Q}(r), (iii) \rg) \le 3\nu I^{r, Q}(r) /8 + O(h).
$$
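Explicitly, weighting the change to $I^{r, Q}$ in each of the cases (i), (ii), (iii), (iv) by the probabilities in \eqref{E:prob-bds} (the contribution of (v) is $O(h^2)$, as discussed below) gives
$$
\E\lf(I^{r, Q}(r + h) - I^{r, Q}(r) \mid I^{r, Q}(r)\rg) \le \lf(\frac{\nu h}{8} + \frac{3\nu h}{8} - \nu h\rg) I^{r, Q}(r) + O(h^2)\, I^{r, Q}(r) = -\frac{\nu h}{2}\, I^{r, Q}(r) + O(h^2)\, I^{r, Q}(r),
$$
and dividing by $h$ and letting $h \to 0^+$ is the computation behind \eqref{E:hEt}.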
Also, conditionally on (v) occurring and $I^{r, Q}(r)$, since $|P| \le n$ it is easy to check using ideas similar to the proof of Lemma \ref{L:IuA-exist-cts} that $\E(I^{r, Q}(r + h) - I^{r, Q}(r) \mid I^{r, Q}(r), (v) ) \le c I^{r, Q}(r)$ for some constant $c>0$. Putting this all together with the bounds in \eqref{E:prob-bds}, and the explicit changes to $I^{r, Q}$ on events (i), (ii), and (iv) gives \eqref{E:hEt}. Next, again using that $P$ is finite it is straightforward to check using the decomposition (i)-(v) above that the random variables
$$
\frac{1}{h} \E\lf(I^{r, Q}(r + h) - I^{r, Q}(r) \;|\; I^{r, Q}(r) \rg)
$$
are uniformly integrable as we vary $h$ over $(0, 1)$. Therefore the bound \eqref{E:hEt} holds with the relevant random variables replaced with their expectations. Summing over the finitely many sequences $Q$ with labels in $P \cup P'$, we then get that for all $s > 0$, we have the right-hand derivative bound
\begin{equation}
\label{E:derivative}
\limsup_{h \to 0^+} \frac{\E I(s+h) - \E I(s)}h \le -\frac{\nu\E I(s)}2.
\end{equation}
The first inequality in the lemma then follows from Lemma \ref{L:gronwall} applied to the function $\E I$. The continuity of $\E I$ required for the lemma holds by Lemma \ref{L:IuA-exist-cts}.
Now we extend this to the case when $P, P'$ are potentially infinite. Note that $P$ is always countable by the stochastic domination of the initial configuration by a Poisson process. Let $\{p_i : i \in \N\}, \{p_i' : i \in \N\}$ be enumerations of $P, P'$, and let $I^n$ be defined in the same way as $I$ but with the set $\sC_u(s)$ replaced by the set of active chains on $[0, s]$ started at $u$ that only use labels from the set $\{p_1, \dots, p_n, p_1', \dots, p_n'\}$. Then for all $s$, the sequence $I^n(s)$ is monotone increasing, and $I^n(s) \nearrow I(s)$ almost surely, so by the monotone convergence theorem, $\E I^n(s) \to \E I(s)$, yielding the bound in the lemma.
\end{proof}
\begin{proof}[Proof of Proposition \ref{P:high-recovery-rate}]
By Markov's inequality and Lemma \ref{L:Poisson-model-1} we have
$$
\P(\mathbf{I}_t \ne \emptyset) \le \E |\mathbf{I}_t| \le \E|\sC_{0,0}(t)| = \E I_{0, \Z^2}(t) \le e^{-\nu t/2}\,\E I_{0, \Z^2}(0).
$$
Now, $\E I_{0, \Z^2}(0)$ is simply the expected number of particles that are initially at $0$, which is at most $1 + \nu/8$.
\end{proof}
\subsection{Local recovery in the global survival regime}
\label{S:recovery-local}
In the remainder of this section, we give a more technical version of Proposition \ref{P:high-recovery-rate} that will
allow us to later establish local recovery in the global survival regime. While we have included this section here since it builds on Proposition \ref{P:high-recovery-rate}, the reader may wish to leave it for now and return to it later when it is applied in Section \ref{S:herd}.
We fix a large spatial parameter $M \in \N$, a set of particles $P$ performing independent continuous-time random walks from an initial configuration $P(0)$, and a collection of particles $Q$ with initial configuration $Q(0)$, where each particle $q \in Q$ follows a (potentially random) cadlag path $\oq : [0, \infty) \to \Z^2$. We start with an arbitrary set of initially infected particles $S \sset P \cup Q$, and consider the SIR model on the particles $P, Q$ with recovery rate $\nu \in (0, \infty)$.
We write $P(t), Q(t)$ for the point processes $\{\op(t) : p \in P\},\{\oq(t) : q \in Q\}$. For the remainder of this section, we make the following assumption on the initial distribution $P(0)$:
\begin{itemize}
\item There exists $\de \in (0, 1)$ such that for all $j \in \N$, we have
\begin{equation}
\label{E:IC-assumption}
|P(0) \cap [-j M, j M]^2| \le \de j^2 M^2.
\end{equation}
\end{itemize}
Now, let $\sF_{P, t}$ be the $\sig$-algebra generated by the trajectories of $P$ up to time $t$ and define the event
\begin{equation}
\label{E:AY-event}
\mathcal D_Q = \{ \text{For all } t \in [0, 4M^2], \text{ we have } Q(t) \cap [-2M, 2M]^2 = \emptyset \}.
\end{equation}
In the remainder of this section, we prove the following proposition.
\begin{proposition}
\label{P:death-with-specifics}
There exist absolute constants $c, C > 0$ such that for $C \de \le \nu \le 1$, there exists an $\sF_{P, 4M^2}$-measurable event $\mathcal B_P$ such that on $\mathcal D_Q \cap \mathcal B_P$,
$$
\text{there are no infected particles in } [-M, M]^2 \text{ at any } t \in [M^2/4, 4M^2].
$$
Moreover, $\P [\cB_P] \ge 1 - C e^{-c \nu M}$.
\end{proposition}
Moving forward, we will apply Proposition \ref{P:death-with-specifics} in the following way. In our main SIR model, after the colouring process has passed through a region, that region will typically have a low density of particles that have not been removed. Any remaining infections will quickly die out because of Proposition \ref{P:death-with-specifics}, applied at various scales $M$. The two sets of particles $P$ and $Q$ in Proposition \ref{P:death-with-specifics} are distinguished in order to separate local and global effects.
In applications, the set of trajectories $P$ will be measurable given some local $\sig$-algebra, allowing us to establish independence of spatially separated infection recovery events. The contributions from trajectories in $Q$ will be handled with a global union bound.
We prove Proposition \ref{P:death-with-specifics} by appealing to Lemma \ref{L:Poisson-model-1}. The first step for doing this is to show that in a certain time window, the configuration of $P$-particles near $0$ is stochastically dominated by a Poisson process.
\begin{lemma}
\label{L:poisson-domination}
Under assumption \eqref{E:IC-assumption}, for $M$ sufficiently large and $t \in [M^2/8, 4M^2]$, the configuration $P(t) \cap [-3M, 3M]^2$ is stochastically dominated by a Poisson process $\Pi$ of intensity $C\de$.
\end{lemma}
To prove Lemma \ref{L:poisson-domination}, we use the following lemma for establishing stochastic domination.
\begin{lemma}
\label{L:russo-lemma}
Let $P$ be a point process on a finite set $F$. Suppose that for any $x \in F$ and any point configuration $f$ on $F \smin \{x\}$ for which the event $P|_{F \smin \{x\}} = f$ has nonzero probability, the conditional distribution
\begin{equation}
\P(|P \cap \{x\}| \in \cdot \; | \; P|_{F \smin \{x\}} = f)
\end{equation}
is stochastically dominated by a Poisson random variable of mean $\ga$. Then $P$ is stochastically dominated by a Poisson process with intensity $\ga$.
\end{lemma}
Results of this form are well-known. For example, Lemma \ref{L:russo-lemma} was shown in \cite{russo1982approximate}, Lemma 1 for Bernoulli random variables and Bernoulli processes in place of Poisson random variables and Poisson processes. Russo's proof works verbatim in the context above, and so we omit it.
We will also use a standard random walk estimate. This estimate will be used throughout the paper so we record it here as a lemma. We leave the proof as an exercise for the reader using Azuma's inequality and basic large deviations.
\begin{lemma}
\label{L:rw-estimate}
Let $X:[0, \infty) \to \Z^2$ be a continuous time random walk. Then for all $u \in \Z^2$ and $m > 0$, we have
$$
\P(X(t) = u) \le \frac{C}{t} \exp\lf(- \frac{c|u|^2}{|u| + t}\rg), \;\; \P(\max_{0 \le r \le t} |X(r)| \ge m) \le C\exp\lf(- \frac{c m^2}{m + t}\rg).
$$
\end{lemma}
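As a hint toward the exercise (a sketch, under the convention that the walk jumps at rate $1$ to a uniformly chosen neighbour): each coordinate $X_1$ of $X$ takes $\pm 1$ steps at rate $1/4$ each, so for every $\la \in \R$,
$$
M_s = \exp\lf(\la X_1(s) - \frac{s}{2}(\cosh \la - 1)\rg)
$$
is a martingale. By Doob's maximal inequality, $\P(\max_{0 \le r \le t} X_1(r) \ge m) \le \exp\lf(-\la m + \frac{t}{2}(\cosh \la - 1)\rg)$, and optimizing over $\la$ (taking $\la$ of order $m/t$ when $m \le t$, and of order $\log(2m/t)$ otherwise) yields the second bound after a union bound over coordinates and signs. The first bound combines the same large deviation estimate with a local central limit theorem.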
\begin{proof}[Proof of Lemma \ref{L:poisson-domination}] By Lemma \ref{L:russo-lemma}, it is enough to show that for any $t \in [M^2/8, 4M^2], z \in [-3M, 3M]^2$ and any finite point configuration $f$ on ${[-3M, 3M]^2 \smin \{z\}}$, the conditional distribution
\begin{equation}
\label{E:Xtt}
\P(|P(t) \cap \{z\}| \in \cdot \mid P(t)|_{[-3M, 3M]^2 \smin \{z\}} = f)
\end{equation}
is stochastically dominated by a mean $C\de$ Poisson distribution.
The required domination of \eqref{E:Xtt} follows from the stronger claim that for any set $P' \sset P$ and any function $f:P' \to [-3M, 3M]^2 \smin \{z\}$, the conditional distribution
\begin{equation*}
\label{E:Xttt}
\P\lf(|P(t) \cap \{z\}| \in \cdot \; \Big| \; \op(t) = f(p) \;\forall p \in P', \op(t) \notin [-3M, 3M]^2 \smin \{z\} \;\forall p \in P \smin P' \rg)
\end{equation*}
is stochastically dominated by a mean $C\de$ Poisson distribution. With the above conditioning, we have
\begin{equation}
\label{E:Xtcapz}
|P(t) \cap \{z\}| = \sum_{p \in P \smin P'} \mathbf{1}(\op(t) = z).
\end{equation}
This is a sum of independent Bernoulli random variables of mean
$$
\mu_p := \P(\op(t) = z \mid \op(t) \notin [-3M, 3M]^2 \smin \{z\}).
$$
Now, for all $p \in P, t \ge M^2/8$, we have
$
\P(\op(t) \notin [-3M, 3M]^2) \ge c.
$
Combining this with the bound in Lemma \ref{L:rw-estimate} gives that
$$
\mu_p \le CM^{-2} e^{-c|\op(0)|/M}.
$$
Also, a Bernoulli random variable of mean $\mu$ is stochastically dominated by a Poisson random variable of mean $-\log(1 - \mu)$. Therefore since the sum of independent Poisson random variables is Poisson, \eqref{E:Xtcapz} is stochastically dominated by a Poisson random variable of mean
$$
\sum_{p \in P \smin P'} - \log\lf(1 - CM^{-2} e^{-c|\op(0)|/M}\rg).
$$
For $M$ sufficiently large, every term in the above sum is bounded above by $2CM^{-2} e^{-c|\op(0)|/M}$, and so by \eqref{E:IC-assumption}, after increasing $C$ the whole sum is bounded above by $C \de$.
\end{proof}
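The Bernoulli--Poisson domination used in the proof can be checked directly: if $N$ is a Poisson random variable of mean $\la = -\log(1-\mu)$, then
$$
\P(N = 0) = e^{-\la} = 1 - \mu,
$$
so $B = \mathbf{1}(N \ge 1)$ is a Bernoulli random variable of mean $\mu$ coupled so that $B \le N$. Since sums of independent Poisson random variables are Poisson, the domination of the sum \eqref{E:Xtcapz} follows.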
We also record a bound that deals with particles moving an unusually large amount in a small time interval. For this next lemma and in the remainder of the section, we define the sets
\begin{align*}
P' &= \{p \in P : |\op(0)| \le M^2\}, \\
P'' &= \{p \in P' : \max_{r \in [0, 4 M^2], s \in [0, 2M]} |\op(s + r) - \op(r)| \le M \},\\
P''' &= \{p \in P : \min_{0 \le r \le 4 M^2} |\op(r)| \le 4 M \}.
\end{align*}
\begin{lemma}
\label{L:side-bry}
Let $\cH$ be the event $\{P' \ne P''\} \cup \{P''' \not \sset P'\}$.
Then \[\P[ \cH] \le C e^{- c M}.\]
\end{lemma}
\begin{proof}
Let $P_j = \{p \in P : |\op(0)| \in [jM, (j+1)M]\}$. We then have the union bounds
\begin{align*}
\P(P''' \not\subset P') &\le \sum_{j=M}^\infty \sum_{p \in P_j} \P(\max_{0 \le r \le 4M^2} |\op(0) - \op(r)| \ge jM/2), \\
\P( P' \ne P'') &\le \sum_{p \in P'} \sum_{r \in [0, 4M^2] \cap M \Z} \P(\max_{s \in [0, 4M]} |\op(s + r) - \op(r)| \ge M/2).
\end{align*}
The right-hand sides above can be bounded by $C e^{- c M}$ using Lemma \ref{L:rw-estimate} and \eqref{E:IC-assumption}.
\end{proof}
We will want to take a union bound over starting and ending times of active chains, so it will be useful to discretize time on a fine mesh of size $e^{-M\nu/100}$. Let $A$ be the set of times at which a particle in $P'$ takes a step, and let
\[
\cJ=\big\{|A\cap [je^{-M\nu/100},(j+1)e^{-M\nu/100}]| \leq 1 \text{ for all } 0\leq j\leq 5M^2 e^{M\nu/100}\big\},
\]
the event that each time interval of length $e^{-M\nu/100}$ up to time $5M^2$ contains at most one step by a particle in $P'$.
\begin{lemma}
\label{l:time.discr}
We have
\[
\P[\cJ] \geq 1- C e^{- c M\nu}.
\]
\end{lemma}
\begin{proof}
By~\eqref{E:IC-assumption} there are at most $\de M^4 \le M^4$ particles in $P'$ and so $A$ is stochastically dominated by a rate $M^4$ Poisson process. Then
\[
\P[|A\cap [je^{-M\nu/100},(j+1)e^{-M\nu/100}]| > 1] \leq C (M^4 e^{-M\nu/100})^2
\]
and the result follows by a union bound over $j$.
\end{proof}
The next lemma will help relate the event in Proposition \ref{P:death-with-specifics} to active chains.
\begin{lemma}
\label{L:reduce-to-singles}
Let $T \le M, t \in [M^2/8, 4M^2 - T]$ and let $ \cE_{t, T}$ be the event where
$$
\text{there exists an infected particle in } [-M, M]^2 \text{ at some time in } [t + T/2, t+ T].
$$
Then $\cE_{t, T} \sset \cI_{t, T} \cup\cH \cup \cD_Q^c$, where $\cH$ is as in Lemma \ref{L:side-bry} and $\cI_{t, T}$ is the event where either
\begin{enumerate}[label=(\roman*)]
\item there is an active chain $Z$ on $[t, s]$ for some $s \in [t + T/2, t+T]$ with $Z(r) \in [-2M, 2M]^2$ for all $r \in [t, s]$ and final location $Z(s) \in [-M, M]^2$, which only uses particles with labels in $P''$, or
\item there is an active chain on $[s', s]$ for some $s' \in [t, t + T], s \in [t + T/2, t + T]$ with $Z(r) \in [-2M, 2M]^2$ for all $r \in [t, s]$, starting location $z \in \del [-2M, 2M]^2$, and final location in $[-M, M]^2$ which only uses particles with labels in $P''$.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose that there is an infected particle $p$ with $\op(s) \in [-M, M]^2$ for some $s \in [t+T/2, t+T]$, and that we are working on the event $\cH^c \cap \cD_Q$. On this event, the only particles in $P \cup Q$ that enter $[-2M, 2M]^2$ in the interval $[t, t + T]$ are particles with labels in $P'$, which equals $P''$, since we are working on $\cH^c$.
We can trace the particle $p$ backwards in time until the moment $s_0$ given by the maximum of
\begin{itemize}[nosep]
\item $t$,
\item the time when $p$ most recently entered the set $[-2M, 2M]^2$,
\item the time when $p$ became infected by a particle $q$.
\end{itemize}
In the first two cases, the particle $p$ and its trajectory form an active chain on $[s_0, s]$ either with starting location in $\del [-2M, 2M]^2$, implying (ii) above, or with starting location in $[-2M, 2M]^2$ and starting time $t$, implying (i). In the third case, the infection location was contained in $[-2M, 2M]^2$, so $q \in P'$ as well. Moreover
$$
\op(s_0) = \oq(s_0), \qquad \op(s_0^-) \ne \oq(s_0^-).
$$
We can then continue tracing the particle $q$ back until the time and location when it either became infected or left the box $[-2M, 2M]^2$, and proceed in this way until we reach time $t$, or a location outside of $[-2M, 2M]^2$. This process terminates a.s. since $P'$ is a.s. finite by Lemma \ref{L:poisson-domination}. Either way, we are left with an active chain that implies the event $\cI_{t, T}$.
\end{proof}
\begin{lemma}
\label{L:F-bound}
There exist constants $c, C > 0$ such that as long as $C \de < \nu/8 < 1$, for any $T \le M, t \in [M^2/8, 4M^2 - T]$ we have
$$
\P [\cI_{t, T} \cap \cJ] \le C e^{- c\nu T},
$$
where $\cI_{t, T}$ is as in Lemma \ref{L:reduce-to-singles}.
\end{lemma}
\begin{proof}
First, we may assume that $M$ is large enough so that Lemma \ref{L:poisson-domination} holds, since otherwise we can guarantee the bound by taking $C$ large enough and $c$ small enough. On the event $\cJ$ the active chain given in the event $\cI_{t, T}$ takes at most one step in an interval of length $e^{-M\nu/100}$, and so by rounding the starting and ending points we have the union bound
\begin{equation}
\label{E:big-union}
\P [\cI_{t, T}] \le \sum_{\substack{s' \in e^{-M\nu/100}\Z \cap [t, t + T+1], \\s \in e^{-M\nu/100}\Z \cap [t + T/2, t + T + 1], \\
z: d(z, \del [-2M, 2M]^2)\leq 1}} \P [\cG_{z, s', s}] + \sum_{\substack{s \in e^{-M\nu/100}\Z \cap [t + T/2, t+ T + 1], \\
z \in [-2M, 2M]^2}} \P [\cG_{z, t, s}],
\end{equation}
where $\cG_{z, r, s}$ is the event where there is an active chain using only particles in $P'$ starting at $(z, r)$ and ending at a time $s$ at a location in $[-(M+1), M+1]^2$. Note that there are $O(M^4 e^{M\nu/50})$ terms in \eqref{E:big-union} so to complete the proof we bound $\P [\cG_{z, r, s}]$.
In fact, we will show that whenever $z \in [-2M, 2M]^2$ and $s - r \le \frac32 M$ we have
\begin{equation}
\label{E:cFzrs}
\P [\cG_{z, r, s}] \le C \exp\lf( - \nu\lf(\frac{d(z, [-(M+1), M+1]^2)}{8} + \frac{s-r}{2}\rg)\rg).
\end{equation}
To prove \eqref{E:cFzrs}, first observe that if $|r - s| \le \frac32 M$, then any active chain $Z$ on $[r, s]$ with $Z(r') \in [-2M, 2M]^2$ for all $r'$ that uses only particles in $P''$ must in fact use only particles in the set
$$
P^* = \{p \in P' : \op(r) \in [-3M, 3M]^2\}.
$$
Therefore $\cG_{z, r, s} \sset \cG^*_{z, r, s}$, where $\cG^*_{z, r, s}$ is the event where there exists an active chain $Z$ from $(z, r)$ to $(y', s)$ for some $y' \in [-(M+1), (M+1)]^2$ with labels contained in $P^*$.
Now, by Lemma \ref{L:poisson-domination}, the process $P^*$ is stochastically dominated by a Poisson process of intensity $C \de$ and the trajectories from $P^*$-particles on the interval $[r, \infty)$ are independent random walks.
After a time shift by $r$, we are then in the setting of Lemma \ref{L:Poisson-model-1}.
By Markov's inequality and Lemma \ref{L:Poisson-model-1}, we have
\begin{align}
\nonumber
\P [\cG_{z, r, s}] \le \P [\cG^*_{z, r, s}] &\le \E I_{(z, r), [-(M+1), M+1]^2}(s) \le \E I_{(z, r), [-(M+1), M+1]^2}(r) e^{-\nu (s-r)/2}.
\end{align}
The inequality \eqref{E:cFzrs} then follows since
$$
I_{(z, r), [-(M+1), M+1]^2}(r)=
N\exp\lf(-\nu\, d(z, [-(M+1), M+1]^2)/8\rg),
$$
where $N = |\{p \in P' : \op(r) = z\}|$ is stochastically dominated by a Poisson random variable of mean $C \de$ and the result follows by the union bound in~\eqref{E:big-union}.
\end{proof}
Proposition \ref{P:death-with-specifics} now follows by a quick union bound.
\begin{proof}[Proof of Proposition \ref{P:death-with-specifics}]
We let
$$
\cB_X^c = \cH \cup \bigcup_{t \in \Z \cap [M^2/8, 4M^2]} \cI_{t, M/4}.
$$
On the event $\cD_Q \cap \cB_X$, Lemma \ref{L:reduce-to-singles} implies that there are no infected particles in the time-space box $[M^2/4, 4M^2] \times [-M, M]^2$. The estimate on $\P[\cB_X^c]$ then follows from a union bound, Lemma \ref{L:side-bry}, and Lemma \ref{L:F-bound}.
\end{proof}
\section{Sidoravicius--Stauffer percolation}
\label{S:SSP}
Proving the existence of a survival regime for the SIR process is much more involved than showing the existence of a recovery regime. To do this, we will couple the SIR model to a competing growth process that we call Sidoravicius--Stauffer percolation (SSP). The original variant of SSP was first studied in~\cite{sidoravicius2019multi}. We adapted the framework from~\cite{sidoravicius2019multi} in~\cite{dauvergne2021spread} in order to understand infection spread in random walks. Many of the results of~\cite{dauvergne2021spread} will be required here, though we do not need the same level of generality used in that paper. In this section, we introduce SSP and gather the necessary results.
Informally, in SSP on $\Z^2$, a set of red vertices grows outwards from the origin according to a set of edge weights, similarly to first passage percolation. Within $\Z^2$, there is a collection of blue seeds $\mathfrak{B}_*$ which cannot be invaded by the red growth process. Whenever the red process attempts to invade a blue seed, a competing blue growth process is activated. Once activated, the blue process will grow outward from a blue seed at a slow, constant speed. The red and blue processes can only invade squares that are not already part of one of the two coloured processes.
Within this framework, we also allow for squares adjacent to the blue process to become activated at arbitrary times. When these squares are activated, they will turn red and join the red process unless they belong to the set of blue seeds, in which case they will turn blue.
\subsection{Constructing the process}
We will define a pair of coupled growth processes $\fR, \fB:[0, \infty) \to \{S: S \sset \Z^2\}$. The growth of these processes will be governed according to weights on the set of directed edges
$$
E := \{(u, v) \in \Z^2 \times \Z^2 : |u - v| = 1 \}.
$$
We will need the following data to define $\fR, \fB$.
\begin{itemize}
\item A constant $\ka > 0$, referred to as the \textbf{parameter} of the process. In \cite{dauvergne2021spread}, we let the parameter $\ka$ be a function from $E \to (0, \infty)$. In the current paper, we are only concerned with the case where $\ka$ is constant.
\item A collection of blue seeds $\fB_* \sset \Z^2$.
\item Edge weight functions $X_\fR:E \to [0, 1]$ and $X_\fB:E \to [0, \ka]$. We will think of $X_\fR, X_\fB$ as clocks governing the growth of the red and blue processes. Edge weights $X_\fB(e)= \ka$ will encourage the spread of the blue process, whereas other edge weights will encourage the spread of the red process.
\item We have allowed for the caveat that clocks may have a value of $0$. This corresponds to instantaneous invasion. To ensure that the process is still well-defined even when instantaneous invasion is allowed, we require that there are no directed cycles $(u_0, u_1, \dots u_n = u_0)$ with
$$
X_\fR(u_{i-1}, u_{i}) \wedge X_\fB(u_{i-1}, u_{i}) = 0
$$
for all $i = 1, \dots, n$.
\end{itemize}
We define $\fR$ and $\fB$ by recording a time $T(u)$ when each vertex $u$ gets added to the process, and a colour $C(u)$ for that vertex. We initialize the process by setting $T(0) = 0$, and setting $C(0) = \fR$ if $0 \notin \fB_*$ and $C(0) = \fB$ if $0 \in \fB_*$. The rules for assigning the other times and colours are as follows.
For every edge $(u, v) \in E$, at time $T(u) + X_{C(u)}(u, v)$, the edge $(u, v)$ will ring, and the process will update accordingly. If $T(v) < T(u) + X_{C(u)}(u, v)$, then nothing happens. This corresponds to the case when $v$ has already been added to one of the processes by this time. Otherwise, we set $T(v) = T(u) + X_{C(u)}(u, v)$, and colour $v$ according to the following four rules.
\begin{enumerate}
\item If $v \in \fB_*$, set $C(v) = \fB$.
\item If $C(u) = \fR$ and $v \notin \fB_*$, then set $C(v) = \fR$.
\item If $C(u) = \fB, v \notin \fB_*$, and $X_{\fB}(u, v) < \ka$, then set $C(v) = \fR$.
\item If $C(u) = \fB, v \notin \fB_*$ and $X_{\fB}(u, v) = \ka$, then set $C(v) = \fB$.
\end{enumerate}
We must specify what happens if $T(u) + X_{C(u)}(u, v) = T(u') + X_{C(u')}(u', v)$ for two different vertices $u, u'$. In this case, if the colours assigned to $v$ via the two edges $(u, v)$ and $(u', v)$ are different, we set $C(v) = \fB$. For $t \in [0, \infty)$, we set
\begin{equation}
\label{E:BR-def}
\begin{split}
\fR(t) &= \{u \in \Z^2 : T(u) \le t, C(u) = \fR\} \quad \text{and} \\
\quad \fB(t) &= \{u \in \Z^2 : T(u) \le t, C(u) = \fB\}.
\end{split}
\end{equation}
When working with SSP, we will always assume that for any $t$, only finitely many squares $u$ satisfy $T(u) < t$.
Under this \textbf{finite speed} assumption, the colouring process is well-defined. From now on, all processes we consider have finite speed.
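To make the growth rules concrete, the following is a minimal Dijkstra-style simulation sketch (our illustration only; the edge-weight laws and seed density below are arbitrary assumptions within the stated ranges $[0,1]$ and $[0,\ka]$, and ties are resolved by heap order rather than by the blue-wins rule above):

```python
import heapq
import random

def simulate_ssp(n=15, kappa=4001.0, seed_prob=0.05, rng=None):
    """Dijkstra-style simulation of SSP on the finite box [-n, n]^2.

    Illustrative choices (not from the paper): X_R(e) is uniform on [0, 1];
    X_B(e) equals kappa with probability 1/2, else uniform on [0, kappa].
    The origin is excluded from the seed set, so C(0) = 'R'.
    Returns (T, C, seeds): activation times, colours 'R'/'B', blue seeds."""
    rng = rng or random.Random(0)
    seeds = {(x, y) for x in range(-n, n + 1) for y in range(-n, n + 1)
             if (x, y) != (0, 0) and rng.random() < seed_prob}
    T, C, heap = {(0, 0): 0.0}, {(0, 0): 'R'}, []

    def ring_neighbours(u):
        # When u joins the process, schedule the ring time T(u) + X_{C(u)}(u, v)
        # of each outgoing edge (u, v).
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (u[0] + dx, u[1] + dy)
            if max(abs(v[0]), abs(v[1])) > n:
                continue
            if C[u] == 'R':
                w, blue_edge = rng.random(), False        # X_R in [0, 1]
            else:
                blue_edge = rng.random() < 0.5            # X_B = kappa?
                w = kappa if blue_edge else rng.random() * kappa
            heapq.heappush(heap, (T[u] + w, u, v, blue_edge))

    ring_neighbours((0, 0))
    while heap:
        t, u, v, blue_edge = heapq.heappop(heap)
        if v in T:                    # v already invaded: nothing happens
            continue
        T[v] = t
        if v in seeds:                              # rule 1: seeds turn blue
            C[v] = 'B'
        elif C[u] == 'R' or not blue_edge:          # rules 2 and 3
            C[v] = 'R'
        else:                                       # rule 4: X_B(u, v) = kappa
            C[v] = 'B'
        ring_neighbours(v)
    return T, C, seeds
```

With these weight laws, blue seeds can never be invaded by red, matching rule 1, and the lazy-deletion priority queue implements "nothing happens" when $v$ was already added.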
In~\cite{dauvergne2021spread}, we showed that as long as $\ka$ is sufficiently large, we can find a well-controlled set $\mathfrak{C}$ depending only on the initial configuration of blue seeds $\fB_*$, such that under certain conditions on $\fB_*$, we have $\fB(\infty) \sset \mathfrak{C}$.
The set $\mathfrak{C}$ is built via a multi-scale construction.
Let $r_0 = 1 \le r_1 < r_2 < \dots$ be a sequence of integer scales satisfying
\begin{equation}
\label{E:gammabound}
\ga := \prod_{i=0}^\infty \left(1 + \frac{10^{12} r_i}{r_{i+1}} \right) < 2,
\end{equation}
and such that $r_{i+1} = (i+1)^2 r_i$ for all large enough $i$. For closed disks and annuli, we write
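Such sequences exist; here is one construction (our illustration only; the particular constants $i_0$ and $A$ are assumptions, not choices made in~\cite{dauvergne2021spread}):

```latex
% Since \sum_{i \ge 1} i^{-2} < \infty, fix i_0 large enough that
\prod_{i \ge i_0} \lf(1 + \frac{10^{12}}{(i+1)^2}\rg) \le \sqrt{2},
% and set r_{i+1} = (i+1)^2 r_i for i \ge i_0, so these indices
% contribute at most \sqrt{2} to \ga. For i < i_0, set r_{i+1} = A r_i
% with A \ge 10^{13} i_0 an integer; then, using 1 + x \le e^x,
\prod_{i < i_0} \lf(1 + \frac{10^{12} r_i}{r_{i+1}}\rg)
\le \lf(1 + \frac{1}{10 i_0}\rg)^{i_0} \le e^{1/10} < \sqrt{2},
% so \ga < 2.
```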
$$
D(x, r) = \{y \in \Z^2: |y - x| \le r \}, \qquad A(x, r, R) = \{y \in \Z^2 : r \le |y - x| \le R\}.
$$
Here and throughout the paper, we use the union-of-disks notation
$$
D(A, r) = \{y \in \Z^2: d(y, A) \le r \}= \bigcup_{x \in A} D(x, r).
$$
Now, for a set of blue seeds $\fB_* \sset \Z^2$, define
$$
A_1(\fB_*) := \{x \in \fB_*: \fB_* \cap D(x, r_1/3) = \{x\} \}.
$$
We then recursively define $\fB_{*, k} = \fB_* \smin \cup_{i=1}^{k-1} A_i(\fB_*)$, and let
$$
A_k(\fB_*) := \{x \in \fB_{*, k}: \fB_{*, k} \cap A(x, r_{k-1}, r_k/3) = \emptyset \}.
$$
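As a toy illustration of this recursion (our example, not taken from the source), suppose $\fB_* = \{x, y, z\}$ with $|x - y| \le r_1/3$ and $|z - x|, |z - y| > r_2/3$. Then:

```latex
% z is the only seed isolated at scale r_1/3, so
A_1(\fB_*) = \{z\}, \qquad \fB_{*, 2} = \fB_* \smin A_1(\fB_*) = \{x, y\}.
% x and y lie within distance r_1 of each other, hence outside each
% other's annulus A(\cdot, r_1, r_2/3), and z lies beyond radius r_2/3, so
A_2(\fB_*) = \{x, y\}, \qquad \fB_* = A_1(\fB_*) \cup A_2(\fB_*).
```

In particular, this configuration satisfies assumption (1) of Theorem \ref{T:BssetI} below.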
Finally, for a set $A \sset \Z^2$, we let $[A]$ be the union of $A$ and all bounded components of $A^c$.
\begin{theorem}
\label{T:BssetI}
Consider an SSP $(\fR, \fB)$ on $\Z^2$ initiated from a set of blue seeds $\fB_*$ with parameter $\ka > 4000$. Suppose that
\begin{enumerate}
\item $\fB_* = \bigcup_{k=1}^\infty A_k(\fB_*)$, and
\item $0 \notin [\fD]$, where $\fD := \bigcup_{k=1}^\infty D(A_k(\fB_*), r_k/100)$.
\end{enumerate}
Then $\fB(\infty) \sset [\fC]$, where $\fC :=\bigcup_{k=1}^\infty D(A_k(\fB_*), 100 r_{k-1})$.
\end{theorem}
Theorem \ref{T:BssetI} is a special case of Theorem 2.1 from~\cite{dauvergne2021spread} when $V = \Z^2$ (see Remark 2.2 from that paper).
In our applications of Theorem \ref{T:BssetI}, the blue seed set $\fB_*$ will always be stochastically dominated by a low-density Bernoulli process. The next series of lemmas from~\cite{dauvergne2021spread} bound the behaviour of $\fC, \fD$ in this setting. We start with a simple monotonicity lemma.
\begin{lemma}[Lemma 2.11, \cite{dauvergne2021spread}, special case when $V = \Z^2$]
\label{L:dom-seeds}
Let $\fB_*' \sset \fB_* \sset \Z^2$ be two collections of blue seeds. Then for every $k \ge 1$, we have $\fB_{*, k}' \sset \fB_{*, k}$. Moreover, if the set $\fB_*$ satisfies the assumptions
of Theorem \ref{T:BssetI}, then $\fB_*'$ also satisfies these assumptions, and with $\fC, \fD, \fC', \fD'$ as in that theorem grown from the sets $\fB_*, \fB_*'$, we have
\begin{equation}
\label{E:IbIB}
\fC' \sset \fC \qquad \text{ and } \qquad \fD' \sset \fD.
\end{equation}
\end{lemma}
Note that the monotonicity in Lemma \ref{L:dom-seeds} does not necessarily hold at the level of the blue processes $\fB, \fB'$ generated from $\fB_*, \fB'_*$.
\begin{lemma} [Lemma 2.12, \cite{dauvergne2021spread}]
\label{L:Ak-estimate}
Consider any collection of blue seeds $\fB_{*}$.
Then for any set $X \sset \Z^2$, the set $X \cap \fB_{*, k}$ is a function of $\fB_* \cap D(X, r_{k-1}/2)$.
Moreover, suppose that $\fB_{*}$ is stochastically dominated by an i.i.d.\ Bernoulli process with mean $p> 0$. Then for any $x \in \Z^2$,
$$
\mathbb P(x \in \fB_{*, k}) \le (C p)^{2^{k-1}}.
$$
\end{lemma}
Lemma \ref{L:Ak-estimate} naturally yields bounds on the engulfing sets $\fC, \fD$, given by the next lemma. To state this lemma, for $k \in \N$ we also define
\begin{equation*}
\fD_k := \bigcup_{i=1}^k D(A_i(\fB_*), r_i/100) \qquad \text{ and } \qquad \fC_k := \bigcup_{i=1}^k D(A_i(\fB_*), 100 r_{i-1}).
\end{equation*}
\begin{lemma}[Lemma 2.13, \cite{dauvergne2021spread}]
\label{L:largest-scale} Assume that $\fB_{*}$ is stochastically dominated by an i.i.d.\ Bernoulli process with parameter $p> 0$.
Fix a ball $D(x, r)$ for some $r \in \N$, and let $M(x, r)$ be the diameter of the largest component of $[\fD]$ that intersects $D(x, r)$. Then
\begin{equation}
\label{E:Mrrk}
\P\lf( M(x, r) > r_k \rg) \le r^2 (Cp)^{2^{k-1}} \qquad \text{ and } \qquad \P\lf( M(x, r) > 0 \rg) \le C r^2 p.
\end{equation}
In particular, for every $n_0 \in \N$ with $n_0 \ge 3$ we have
\begin{equation}
\label{E:Mn-BC}
\P\lf( M(x, n) \ge n/2 \text{ for some } n \ge n_0\rg) \le \exp\lf(\log(Cp) \exp (\log n/(C\log \log n))\rg).
\end{equation}
Moreover, for all $k \ge 1$,
\begin{equation}
\label{E:fDk-estimate}
\P([\fD] \cap D(x, r) \ne [\fD_k] \cap D(x, r)) \le r^2 (Cp)^{2^{k-1}},
\end{equation}
and for all sufficiently small $p$, we have
\begin{equation}
\label{E:PDxr}
\P\lf(|[\fD] \cap D(x, r)| \ge \frac{1}{3}|D(x, r)| \rg) \le \exp\lf(\log(Cp) \exp (\log r/(C\log \log r))\rg).
\end{equation}
The same bounds hold with $\fC, \fC_k$ in place of $\fD, \fD_k$.
\end{lemma}
The next theorem is a version of Theorem \ref{T:BssetI} for random SSPs.
\begin{theorem}[Theorem 2.14, \cite{dauvergne2021spread}]
\label{T:random-red-blue}
Consider a random SSP $(\fR, \fB)$ on $\Z^2$ driven by potentially random clocks $X_\fR, X_\fB$, a collection of blue seeds $\fB_*$ and parameter $\ka > 4000$.
Suppose additionally that $\fB_*$ is stochastically dominated by an i.i.d.\ Bernoulli process with parameter $p > 0$ sufficiently small.
Let $\fC$ be defined from $\fB_*$ as in Theorem \ref{T:BssetI}. Then the event where
$$
\fB_* = \bigcup_{i=1}^\infty A_i(\fB_*) \quad \text{ and } \quad [\fC] \text{ has only bounded components}
$$
is almost sure. Moreover, the probability of the event $\cE$ given by
\begin{equation}
\label{E:CD}
\begin{split}
\Big\{\fB_* = \bigcup_{i=1}^\infty A_i(\fB_*), 0 \in [\fD]^c, [\fC]^c &\sset \fR(\infty), [\fC] \text{ has only bounded components}\Big\},
\end{split}
\end{equation}
is at least $1 - Cp$. Finally, on the event $\cE$, we have $D(0, t/(2\ka)) \sset [\fR(t)]$ for all large enough $t$. More precisely, for any $t_0 > 0$ we have
\begin{equation}
\label{E:D0kk}
\begin{split}
\P(\{D(0, t/(2\ka)) &\sset [\fR(t)] \text{ for all } t \ge t_0\} \mid \cE) \\
&\ge 1 - \exp\lf(\log(Cp) \exp (\log (t_0/\ka)/(C\log \log (t_0 /\ka))\rg).
\end{split}
\end{equation}
\end{theorem}
\section{SIR survival}
In this section we give a coupling of the SIR process $X_t$ with initial Poisson density $\mu$
with an SSP satisfying the conditions of Theorem \ref{T:random-red-blue}. In this coupling the red sites in the SSP will correspond to spatial blocks in $\Z^2$ where the SIR infection spreads efficiently. The initial configuration of blue seeds will be dominated by a Bernoulli process with parameter $p = p(\nu)$ where $p \to 0$ as $\nu \to 0$, allowing us to conclude Theorem \ref{T:main-1} by applying Theorem \ref{T:random-red-blue}. Throughout this section and in the remainder of the paper the density $\mu$ is fixed and all constants $c, C, \dots$ are allowed to depend on $\mu$.
\label{S:linear}
\subsection{A block construction for the SIR process}
\label{S:SI-colouring}
Fix a side length $L = 2^k$ for some large $k \in \N$. We choose $L$ so that
\begin{equation}
\label{E:Lnu-relationship}
\nu^{-1/6} \leq L \leq 2 \nu^{-1/6}
\end{equation}
and subdivide $\Z^2$ into a collection of dyadic blocks
\begin{equation}
\label{E:BL-construct}
\sB=\sB_L:=\Big\{zL +\big\{-L/2, \ldots ,L/2-1\big\}^2:z\in \Z^2\Big\}.
\end{equation}
This induces a natural map $f = f_L:\Z^2 \to \sB_L$. For $B, B' \in \sB$, we also define the \emph{block distance}
$$
d_\sB(B, B') = |f^{-1}(B) - f^{-1}(B')|
$$
and for $B \ne B'$ note the inequalities
\begin{equation}
\label{E:block-distance}
d(B, B') < L d_\sB(B, B'), \qquad d_\sB(B, B') < 4 \ceil{d(B, B')/L}.
\end{equation}
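As a quick numerical sanity check of the map $f_L$ and the inequalities in \eqref{E:block-distance} (our illustration, not part of the paper's argument), one can compute block indices and block distances directly:

```python
import itertools
import math

def block_of(u, L):
    """Index z of the dyadic block of side L containing the vertex u (the map f_L)."""
    return tuple((c + L // 2) // L for c in u)

def block_pts(z, L):
    """Vertices of the block zL + {-L/2, ..., L/2 - 1}^2."""
    return itertools.product(*(range(c * L - L // 2, c * L + L // 2) for c in z))

def d_blocks(z, zp, L):
    """Minimum Euclidean distance d(B, B') between the blocks indexed by z, zp."""
    return min(math.dist(u, v)
               for u in block_pts(z, L) for v in block_pts(zp, L))
```

For small side lengths this brute-force computation confirms $d(B, B') < L\, d_\sB(B, B')$ and $d_\sB(B, B') < 4 \lceil d(B, B')/L \rceil$ for $B \ne B'$.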
Both $d$ and $d_\sB$ will be useful notions of distance throughout the paper.
For the arguments in the remainder of the paper, $L$ is the more natural parameter than $\nu$ and so we work exclusively with a sufficiently large $L$ rather than sufficiently small $\nu$, invoking the relationship \eqref{E:Lnu-relationship} when necessary.
Given a side length $L$, for some small $\alpha\in(0,1)$, let $\xi = \xi_L = \alpha L^2$. This will be the minimum speed at which the colouring of blocks spreads. We will now define a \textbf{colouring process} for blocks $B \in \sB_L$ from the SIR process.
\begin{definition}
\label{D:SI-colouring}
We will let $\tau_B$ denote the (stopping) time when a block $B$ is first coloured. For a vertex $u\in B$ we will use $\tau_u$ as shorthand for $\tau_B$. A block $B$ becomes coloured at time $t$ if one of the following events happens:
\begin{enumerate}[label=(\alph*)]
\item A neighbouring block $B'$ (i.e. $d_\sB(B, B') = 1$) was coloured at time $\tau_{B'} = t - \xi$.
\item An infected particle enters $B$ for the first time at time $t$.
\item A block $B'$ becomes coloured according to rule (b) at time~$t$ by an infected particle entering at location $x \in B'$, and $\inf \{\|x - u \|_\infty : u \in B \} \le \al L$.
\end{enumerate}
\end{definition}
In case (c), which we call a multi-colouring, up to three blocks may be coloured simultaneously.
In case (b) we call the infected particle that entered $B$ the \textbf{ignition particle} of $B$. In case (c) the ignition particle is the ignition particle for block $B'$. In cases (b) and (c) we say that the block $B$ is \textbf{ignited} at the location $x$ where an infected particle first entered $B$ (for case (b)) or first entered the relevant neighbour of $B$ (for case (c)). All ignition locations are within $L^\infty$-distance $\al L$ of $B$.
We say that a particle $a$ is \textbf{coloured} at the first time it enters a coloured block, write $B(a)$ for the block in which $a$ was coloured, and write $T(a)$ for the time when $a$ was coloured. This can happen either because the block became coloured or because the particle entered a coloured block. We let
$X_t^*$ denote the process, up to time $t$, of all particles $a$ that satisfy $T(a) \le t$, and let $\sF_t^*$ denote the $\sigma$-algebra generated by $X_t^*$. We let $\iota_a$ be the stopping time when particle $a$ becomes infected.
A key element of our analysis is to regard the collection of random walks as given by a Poisson process. We let $\fW$ be the space of cadlag sample paths $w(t):\R\to\Z^2$ such that
\begin{equation}\label{eq:sublinearGrowth}
\frac1{t}|w(t)|\to 0
\end{equation}
as $|t| \to \infty$. Elements of $\fW$ will represent the trajectories of particles. To represent healing times of particles, we let $\fQ$ be the set of simple point measures on $\R$ and let $\fW^+=\fW \times \fQ$. An element of $\fW^+$ will represent the trajectory of a particle together with the set of times at which it receives a healing event.
Let $\sW^+_u$ denote the measure on $\fW^+$ given by the pair of
\begin{enumerate}[nosep, label=(\alph*)]
\item a continuous time random walk on $\Z^2$ over all time $t\in \R$ that is at $u\in \Z^2$ at time $0$ and
\item an independent rate $\nu$ Poisson process on $\R$.
\end{enumerate}
Note that a simple random walk satisfies~\eqref{eq:sublinearGrowth} almost surely. Furthermore, let $\sW^+ = \sum_{u\in \Z^2} \sW^+_u$. We let $\sP$ denote a Poisson process on $\fW^+$ with intensity measure $\mu\sW^+$. If we remove the initial infected particle from the origin, we can interpret all remaining particles and their trajectories as being given by a sample from $\sP$.
In order to obtain spatial independence of various events, we will give an alternative construction of the SIR process with the same law, based on a collection of Poisson processes $\sP_B, B \in \sB$ which are IID and equal in distribution to $\sP$. We call a particle $a$ in $\sP_B$ \textbf{simple}, let $a(t)$ denote its trajectory, and let $a^h \subset \R$ denote its set of healing times.
Let $\sM_B$ denote the $\sig$-algebra generated by
\begin{enumerate}[label=(\roman*)]
\item the independent Poisson process $\sP_B$,
\item a pair $(W_{B,\operatorname{ig}}, W_{B,\operatorname{ig}}^h)$ sampled independently from $\sW^+_0$, which will encode the trajectory and healing process of an ignition particle.
\end{enumerate}
Note that the $(\sP_B,W_{B,\operatorname{ig}}, W_{B,\operatorname{ig}}^h), B \in \sB$ are IID.
We will build the SIR process $X_t$ according to the $\sM_B$ in such a way that particles coloured in a block $B$ correspond to particles in $\sP_B$ but with a time shift of length $\tau_B$. We will abuse notation somewhat by conflating a particle $a$ in some $\sP_B$ with a particle in $X_t$ matched according to our construction. Define
\begin{equation}
\label{E:atti}
\oa(t):= a(t-\tau_B), \qquad \oa^h = a^h + \tau_B.
\end{equation}
These will be, respectively, the trajectory and healing process of $a$ in $X_t$. Particle locations in $\sP_B$ at time $0$ tell us the particle locations in $X_t$ at time $\tau_B$.
It will be enough to construct the process of previously coloured particles $X_t^*$ at every time $t$, since every particle is eventually coloured. This holds since the colouring process spreads at least linearly by Definition \ref{D:SI-colouring}(a), whereas particles spread diffusively.
Let $B_0$ be the block containing the origin so that $\tau_{B_0}=0$. At time $0$ we add all simple $\sP_{B_0}$-particles that are in $B_0$ at time $0$ to $X_t^*$ plus an infected ignition particle at the origin. The simple particles evolve according to their paths $a(t)$ while the initially infected particle evolves according to $W_{B_0, \operatorname{ig}}(t)$. Particles become infected if they enter the same vertex as another infected particle. New particles can enter $X_t^*$ in two ways, either when a new block becomes coloured or when an uncoloured particle enters a coloured block for the first time.
{\bf Case 1: A newly coloured block:} When a block $B$ is coloured for the first time according to Definition \ref{D:SI-colouring} we add particles to $X_t^*$ as follows. If a particle $a$ from $\sP_B$ is in $B$ at time $0$, then we add it to $X_t^*$ at time $t=\tau_B$ if for all $0\leq s < \tau_B$ we have that $\tau_{a(s-\tau_B)} > s$. This condition is equivalent to saying that a particle with trajectory $a(s-\tau_B)$ first hits a coloured block at time $\tau_B$. The trajectory and healing process of this particle are given by $\oa(t), \oa^h$.
If $B$ is ignited at a vertex $x$, the ignition particle follows special rules. Instead of continuing to follow the trajectory given by the block it was initially coloured by, when it becomes the ignition particle of $B$ at time $\tau_B$ we alter its future trajectory to $x+ W_{B,\operatorname{ig}}(t-\tau_B)$ and its future healing process to $\tau_B + W^h_{B, \operatorname{ig}}$. Note that some particles may ignite multiple boxes at distinct times, in which case their future trajectories and healing processes will change more than once.
{\bf Case 2: Particle first entering a coloured block:} For times $t\in(\tau_B,\tau_B+\xi]$, new particles are revealed in $B$ in the process $X_t^*$ according to the following rules. If $a$ is a particle in $\sP_B$ that enters $B$ at time $t-\tau_B$, then we add it to $X_t^*$ at time $t$ if for all $0\leq s < t$ we have that $\tau_{a(s-\tau_B)} > s$. This condition is equivalent to saying that a particle with trajectory $a(s-\tau_B)$ first hits a coloured block at time $t$. Again, the trajectory and healing process of this particle are given by $\oa(s), \oa^h$. Particles can only join $X^*_t$ in this way during the time interval $(\tau_B,\tau_B+\xi]$ since at time $\tau_B+\xi$ the neighbouring blocks of $B$ are already coloured by construction.
With this construction, a particle $a$ from $\sP_B$ is added with trajectory $\oa$ if and only if $B(a) = B$. It follows from independence properties of Poisson processes that the construction above indeed describes an SIR process. This is shown in Section 5 of \cite{dauvergne2021spread} in the setting when particles cannot heal. The proof goes through verbatim with the minor addition of healing processes.
We would like to partition the set of all particles in the SIR process $X_t$ into sets $H_B, B \in \sB$ where particle trajectories in each $H_B$ are entirely determined by $\sM_B$. However, this is not quite possible since some particles that are coloured in $B$ may become ignition particles for other blocks and hence their trajectories will not be entirely $\sM_B$-measurable. To deal with this, it will be convenient to think of a particle $a \in H_B$ as healing when it becomes the ignition particle for a new block $B'$, and a new infected particle $W_{B', \operatorname{ig}} \in H_{B'}$ appearing instantaneously in its place.
Let $H_B$ be the set of all particles that are first coloured in $B$. Definition~\ref{D:SI-colouring}(a) guarantees that our colouring process is moving at a linear speed depending only on $L$, which helps to ensure that there are typically many particles in $H_B$. With this in mind, define $H^-_B$ to be all particles $a\in \sP_B$ that satisfy the following additional constraint:
\begin{equation}
\label{eq:principal}
\sup_{t\leq 0} d(a(t),B) - L \lfloor |t\xi^{-1}|/4 \rfloor = 0.
\end{equation}
We call the particles in $H_B^-$ \textbf{principal particles} and claim that $H_B^- \sset H_B$. Indeed, let $a \in H_B^-$. Equation \eqref{eq:principal} implies that $a(0) \in B$. Now suppose $B'$ gets coloured before $B$. To show that $H_B^- \sset H_B$, we must check that $a(t) \notin B'$ for all $\tau_{B'} - \tau_{B} \le t \le 0$.
By Definition \ref{D:SI-colouring}(a) and the bound \eqref{E:block-distance}, we have the bound
\begin{equation}
\label{E:first-lipschitz}
|\tau_B-\tau_{B'}| \leq d_\sB(B, B')\xi < 4\ceil{d(B, B')/L} \xi.
\end{equation}
Condition~\eqref{eq:principal} then implies that $d(a(t), B) < d(B, B')$ for all $\tau_{B'} - \tau_{B} \le t \le 0$, so $a(t) \notin B'$.
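Spelling this step out (our elaboration of the argument above): for $\tau_{B'} - \tau_B \le t \le 0$, combining \eqref{eq:principal} with \eqref{E:first-lipschitz} gives

```latex
d(a(t), B)
\le L \lf\lfloor \frac{|t|}{4\xi} \rg\rfloor
\le L \lf\lfloor \frac{|\tau_B - \tau_{B'}|}{4\xi} \rg\rfloor
\le L \lf(\lf\lceil \frac{d(B, B')}{L} \rg\rceil - 1\rg)
< d(B, B'),
```

where the third inequality holds because $|\tau_B - \tau_{B'}|/(4\xi)$ is strictly less than the integer $\lceil d(B, B')/L \rceil$, and the last because $\lceil s \rceil - 1 < s$.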
Also let $H_B^+$ be the set of all particles $a \in \sP_B$ such that $a(t) \in B$ for some $t \in [0, \xi]$. We then have $H_B \sset H_B^{++} := H_B^+ \cup \{W_{B, \operatorname{ig}}\}$. The $\sM_B$-measurable set of trajectories in $H_B^{++}$ describes all randomness contributed to the SIR process by $\sM_B$.
\subsection{Blue Seeds}
\label{S:blue-seeds}
In this section we give a set of conditions on $\sM_B$ which will ensure the efficient spread of the infection to the neighbouring blocks if the block $B$ is ignited, i.e. coloured according to rule (b) or (c). The complement of this event will correspond to marking the block as a blue seed in our SSP coupling. For the remainder of Section \ref{S:linear}, we fix an arbitrary $\ka > 4000$.
Setting some notation, let
$$
U_x = \{B \in \sB_L : \inf_{y \in B} \|x - y\|_\infty \le \al L \}.
$$
When the meaning is clear from context, we will also let $U_x$ denote the set of vertices contained in the union of the blocks in $U_x$. By construction, $U_x$ is a rectangle in the plane made of either 1, 2, or 4 blocks. It includes the block containing $x$.
Furthermore, if $x$ is the ignition site for the block $B$, all blocks in $U_x$ are coloured at or before time $\tau_B$. Set
\[
\partial B^\#= \bigcup_{B'\in \sB_L}\{x\in \partial B': B\in U_x\},
\]
which is the set of possible ignition locations for the ignition particle of $B$ (the shape of the set $\partial B^\#$ resembles a hashtag $\#$).
The ignition particle for $B$ is infected, starts at $x\in\partial B^\#$ and follows the path $x+ W_{B_x,\operatorname{ig}}(t-\tau_B)$ for $t \ge \tau_B$, where $B_x$ is the block containing $x$. Define the event
\[
\cA^{(1)}_{B}=\bigcap_{B':d(B,B')\leq 2}\Big\{\sup_{0\leq s\leq \xi/ \log \log L} |W_{B',\operatorname{ig}}(s)| \leq \frac12 \alpha L\Big\}.
\]
Since $d(B_x, B) \le 2$ for all $x \in \del B^\#$, this event ensures that the ignition particle for $B$ remains in $U_x$ until time $\tau_B + \xi/ \log \log L$, and hence cannot become the ignition particle of another block prior to this time.
For $x \in \del B^\#$ and each block $B' \not\in U_x$ with $d(B, B') = 1$, define the event
\begin{align*}
\cA^{(2)}_{B,B',x}=\bigg\{\exists a \in H_B^-, \; &\exists \; 0\leq s \leq s' < \tfrac{\xi}{\log\log L} \text{ s.t.} \\ a(s) = x+ W_{B_x,\operatorname{ig}}(s);\quad
&a(s')\in B'; \quad \forall\; 0\leq s''<s', a(s'')\in U_x\bigg\}.
\end{align*}
This event asks for at least one principal particle in $B$ to have a trajectory that intersects the ignition particle's trajectory at some time $s$, and to stay within $U_x$ until some time $s'$, at which point it enters $B'$.
Next, let $\cA^{(3)}_{B}$ be the event that the ignition particles of all blocks $B'$ with $d(B, B') \le 2$ have no healing events in the interval $[0, L^3]$, and let $\cA^{(4)}_{B}$ be the event that no particle in $H_B^-$ has a healing event in the interval $[0, L^3]$. Finally, we want many principal particles that do not become ignition particles to become infected. Thus we define
\begin{align*}
Q_{B,x}=\Big\{a\in H_B^- &: d(a(0),B^c) \geq \tfrac1{10} L, \exists s\in[0,\tfrac{\xi}{\log\log L}], a(s) = x+ W_{B_x,\operatorname{ig}}(s) \Big\},
\end{align*}
and
\begin{align*}
\cA^{(5)}_{B,x}=\Big\{|Q_{B,x}| \geq L^{2-\log^{-1/2} L} \Big\}.
\end{align*}
We combine these events and define
\begin{equation}
\label{E:cA123}
\cA_B=\cA_B^{(1)}\cap\cA^{(3)}_{B}\cap \cA^{(4)}_{B} \cap \bigcap_{x\in\partial B^\#} \bigg(\cA^{(5)}_{B,x} \cap \bigcap_{\substack{B'\in \sB \smin U_x \\
d(B,B')\le 1}}\cA^{(2)}_{B,B',x} \bigg).
\end{equation}
We call the block $B$ a \textbf{blue seed} if $(\cA_B)^c$ holds. It follows from the definition that $\cA_B$ is measurable with respect to the $\sigma$-algebra generated by $\{\sM_{B'} : d(B',B) \leq 2\}$.
The event $\cA_B$ asks for many things that together will ensure the efficient spread of the infection. The simplest consequence of the definition is that if $\cA_B$ holds and $B$ is ignited, then all its neighbouring blocks in $\sB_L$ will be coloured by time $\tau_B + \xi/\kappa$.
\begin{lemma}
\label{L:lower-scale-ignition}
Suppose that a block $B \in \sB_L$ is ignited at time $\tau_B$ at location $x$ and that $\cA_B$ holds. Then any neighbouring block $B'$ is coloured by time $\tau_B + \xi/\kappa$.
\end{lemma}
\begin{proof}
First, we may assume that $B' \notin U_x$, since else $B'$ is coloured by time $\tau_B$.
Select some $a\in H_B^-$ and $s,s'$ that make the event $\cA^{(2)}_{B,B',x}$ hold. The particle $a$ must be susceptible at time $\tau_B$ and at time $\tau_B + s$ will meet the ignition particle, which by $\cA^{(3)}_{B}$ is still infected at that time. Therefore $a$ will become infected at or prior to time $\tau_B + s$. Moreover, it will reach $B'$ at time $\tau_B+s' \leq \tau_B + \xi/\kappa$ and by $\cA^{(4)}_{B}$ it will not heal in the interval $[\tau_B, \tau_B + \xi/\kappa]$. Therefore this particle will ignite $B'$ at time $\tau_B + s'$ if $B'$ has not already been previously coloured.
\end{proof}
With these colouring rules, we can associate an SSP to the given SIR process and then use the machinery of Section \ref{S:SSP}. To do this, it is more natural to use the time scaling of the current section, which will allow the red and blue clocks to take values in $[0, \xi/\ka]$ and $[0, \xi]$, rather than in $[0, 1]$ and $[0, \ka]$, respectively. We will also define the SSP directly on the block set $\sB$, rather than on $\Z^2$. Clearly this will map to an SSP on $\Z^2$ via the correspondence from \eqref{E:BL-construct}.
We let $B<_{\operatorname{SIR}} B'$ for two blocks $B, B' \in \sB$ if they are both ignited by the same ignition particle at location $x$, and $d(x, B) < d(x, B')$. Note that our ignition rules guarantee that either $d(x, B) < d(x, B')$ or $d(x, B') < d(x, B)$.
The random directed graph on $\sB$ with edges given by pairs $(B, B') \in \sB^2$ with $B <_{\operatorname{SIR}} B'$ is acyclic.
For each directed edge $(B, B')$ between adjacent vertices in $\sB$, define
\begin{equation*}
\begin{split}
X_\fR(B, B') = \begin{cases}
\lf(\tau_{B'} - \tau_B \rg) \wedge \frac{\xi}{\ka}, \qquad &\tau_{B'} - \tau_B > 0,
\\
0, \qquad &\tau_{B'} = \tau_B \text{ and } B <_{\operatorname{SIR}} {B'}, \\
\frac{\xi}{\ka}, \qquad &\text{ else.}
\end{cases}
\end{split}
\end{equation*}
Similarly define the blue clock by
\begin{equation*}
\begin{split}
X_\fB(B, B') = \begin{cases}
\tau_{B'} - \tau_B, \qquad &\tau_{B'} - \tau_B > 0,
\\
0, \qquad &\tau_{B} = \tau_{{B'}} \text{ and } B <_{\operatorname{SIR}} B', \\
\xi, \qquad &\text{ else.}
\end{cases}
\end{split}
\end{equation*}
\begin{prop}
\label{P:SI-to-BR}
The clocks above along with the set of blue seeds $\fB_* = \{B \in \sB: \cA_B^c \text{ holds}\}$ a.s.\ define a finite speed SSP on $\sB$. Moreover, a.s.\ for every $B \in \sB$, the colouring time $T(B)$ equals $\tau_B$. That is,
\begin{equation}
\label{E:BR-tau-eqn}
\mathfrak{B}(t) \cup \mathfrak{R} (t) = \{B \in \sB : \tau_B \le t \}.
\end{equation}
Finally, let $\operatorname{IGN} : =\{B \in \sB: B \text{ is ignited or } B = B_0\}$. Then a.s.\
$
\fR(\infty) \sset \operatorname{IGN} \smin \fB_*.$
\end{prop}
A version of Proposition \ref{P:SI-to-BR} for the SI model was shown in \cite{dauvergne2021spread}, Proposition 3.9. As the proof goes through verbatim for the SIR model, we omit it.
Next, to apply the framework of Section \ref{S:SSP} to understand the SIR process, we need to show that the blue seed probability is small.
\begin{proposition}\label{p:blueSeed}
There exists $\al > 0$ such that for any $L$ satisfying $L^5 \le \nu^{-1} \le L^6$, we have
\[
\P[(\cA_B)^c] \leq \eps_\nu,
\]
where $\eps_\nu \to 0$ with $\nu$.
\end{proposition}
When analyzing rare events in the final section of the paper, we will also require the following conditional estimate which will fall out immediately from the proof of Proposition \ref{p:blueSeed}.
\begin{lemma}\label{L:blueSeedconditional}
Let $\al, L$ be as in Proposition \ref{p:blueSeed}. Fix a block $B$, a subset $\sC \sset \{B' : d(B, B') \le 2\}$, and let $\cW$ be the event
$$
\bigcap_{B' \in \sC} \{ W_{B', \operatorname{ig}}(t) = W_{B', \operatorname{ig}}(0) \; \forall t \in [0, L^4], \; W^h_{B', \operatorname{ig}} \cap [0, L^3] = \emptyset, \;W^h_{B', \operatorname{ig}} \cap [L^3, L^4] \ne \emptyset\}.
$$
Then
$$
\P(\cA_B^c \;|\; \cW) \le \eps_\nu.
$$
\end{lemma}
Before proving Proposition \ref{p:blueSeed} and Lemma \ref{L:blueSeedconditional}, we will see how they imply Theorem \ref{T:main-1}. First, we have the following corollary of Proposition \ref{p:blueSeed}.
\begin{corollary}
\label{C:blue-domination}
The process $\fB$ of blue seeds on $\sB_L$ is stochastically dominated by a Bernoulli process of mean $\de_\nu$, where $\de_\nu \to 0$ with $\nu$.
\end{corollary}
\begin{proof}
We appeal to Theorem 0.0(i)\footnote{It really is Theorem 0.0, this is not a typo} in \cite{liggett1997domination}, which states the following. Let $d \ge 1$, and suppose that $X:\Z^d \to \{0, 1\}$ is a random process such that for any vertex $v \in \Z^d$, the conditional probability that $X(v) = 1$ given all the values of $X$ on vertices at $\ell^\infty$-distance at least $k$ away from $v$ is at most $\ep$. Then $X$ is stochastically dominated by an i.i.d.\ Bernoulli process $Y$ on $\Z^d$ such that $\E Y(0) = f(\ep)$, where $f(\ep) \to 0$ with $\ep$. Here the function $f$ depends on $k$ and $d$.
In our setting, we let $X = \{ B \in \sB_L : \cA_B^c \text{ holds}\}$, and use that the event $\cA_B^c$ is measurable given the $\sig$-algebras $\sM_{B'}, d(B, B') \le 2$ and hence is independent of the collection of events $\cA_{B'}^c, d(B, B') \ge 3$. Proposition \ref{p:blueSeed} then implies the result.
\end{proof}
\begin{proof}[Proof of Theorem \ref{T:main-1}]
Fix $\mu > 0$. By Proposition \ref{P:high-recovery-rate}, we can take $\nu^+ = 8 \mu$. Now, the SIR process survives forever if there exists $L$ for which the coupled SSP $(\fR, \fB)$ in Proposition \ref{P:SI-to-BR} satisfies $|\fR(\infty)| = \infty$. Choosing $L$ as in \eqref{E:Lnu-relationship}, by Corollary \ref{C:blue-domination} and Theorem \ref{T:random-red-blue} we have
$$
\P(d, \mu, \nu) \ge \P(|\fR(\infty)| = \infty) \ge 1 - C \de_\nu,
$$
which tends to $1$ as $\nu \to 0$.
\end{proof}
\subsection{Tools for analyzing random walks}
In this part we gather a couple of preliminary estimates used in the proof of Proposition \ref{p:blueSeed}. With $W_t$ a continuous-time random walk, for $x\in B$ define
\[
p_x = \P\Big[\forall t\geq 0: d(x+W_t,B^c) \leq L \lfloor t\xi^{-1}/4 \rfloor \Big].
\]
At time $\tau_B$ the principal particles $H_B^-$ form a Poisson process on $B$ with intensity $p_x \mu$ at $x \in B$.
For any $x$ such that $d(x,B^c) \geq \frac1{10} L$ we have that
\begin{align}\label{eq:principalLB}
p_x &\geq 1 - \P[\sup_{0\leq t \leq 3\alpha L^2} |W_t| > \frac1{10} L^2 ] - \sum_{j\geq 1} \P[\sup_{0\leq t \leq 3(j+1)\alpha L^2} |W_t| > Lj ] \geq \frac34,
\end{align}
for all large enough $L$ and $\alpha \leq c$ by Lemma \ref{L:rw-estimate}. The next lemma then follows from a standard concentration estimate on Poisson random variables.
\begin{lemma}\label{l:HBsize}
Let $L$ be sufficiently large. The quantities $|H_B^+|$, $|Q_B|$ and $|H_B^{-}|$ are all Poisson distributed with the following means:
\begin{align*}
\E |H_B^{-}| &= \sum_{x\in B} p_x \mu \geq \frac13 \mu L^2\\
\E |Q_B| &= \sum_{x\in B,\, d(x,B^c)\geq L/10} p_x \mu \geq \frac1{3} \mu L^2\\
\E |H_B^+| &= \sum_{x\in \Z^2} \mu \P[\exists t\in[0,\xi] : W_t+x \in B ] \leq \frac32 \mu L^2.
\end{align*}
In particular, for some $c>0$ we have that
\begin{equation}\label{eq:particleSizeBounds}
\P[|H_B^+|\leq 2\mu L^2, |H_B^{-}|\geq |Q_B|\geq \frac14 \mu L^2] \geq 1 - \exp(-c \mu L^2).
\end{equation}
\end{lemma}
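For the reader's convenience, we indicate the standard Poisson concentration estimate used here. If $X$ is Poisson with mean $\lambda$, then Markov's inequality applied to $e^{-\theta X}$ and $e^{\theta X}$ with the choice $\theta = \log 2$ gives
\begin{align*}
\P[X \leq \tfrac{\lambda}{2}] &\leq \exp\big(\lambda(e^{-\theta}-1) + \tfrac{\theta\lambda}{2}\big) = \exp\big(-\tfrac{1-\log 2}{2}\lambda\big), \\
\P[X \geq 2\lambda] &\leq \exp\big(\lambda(e^{\theta}-1) - 2\theta\lambda\big) = \exp\big(-(2\log 2 - 1)\lambda\big).
\end{align*}
A minor variant of this computation, optimizing $\theta$ for the thresholds appearing above, combined with the means in the lemma, gives \eqref{eq:particleSizeBounds}.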
Next, let $H$ be a set of particles and let $x:[s,s+T]\to\Z^2$ be an arbitrary c\`adl\`ag path.
The following lemma will be useful for counting how many particles in $H$ hit $x$. With this in mind, let
\[
N(H,s,T,x):= \sum_{a\in H} \mathbf{1}(\{t\in [s,s+T]: a(t) = x(t)\}\neq \emptyset)
\]
be the number of $H$-particles that intersect the path $x$ and let
\[
R(H,s,T,x):= \sum_{a\in H} \int_s^{s+T} \mathbf{1}(a(t) = x(t))dt
\]
be the aggregate intersection time of the $H$-particles with $x(t)$.
\begin{lemma}\label{l:hittingNumberSimple}
There exists a constant $C>0$ such that for any set of particles $H$ performing continuous-time simple random walks and any path $x:[s, s + T] \to \Z^2$, for $T \ge 2$ we have
\[
\E[N(H,s,T,x) \mid \sF_s] \geq \frac{1}{C\log T} \E[R(H,s,T,x) \mid \sF_s].
\]
Here $\sF_s$ is the $\sig$-algebra generated by the walks in $H$ up to time $s$.
The same result holds if the walks in $H$ instead perform a different Markov chain with transition matrix $Q_{x,y}^{t,t'}$ satisfying the following bound for all $t_1 > 0$, $T\geq 2,$ and $x \in \Z^2$:
\begin{equation}\label{eq:transitionProbReturnCondition}
\int_{t_1}^{t_1 +T} \sup_{y\in \Z^2} Q_{x,y}^{t_1,t} dt \leq C\log T.
\end{equation}
\end{lemma}
\begin{proof}
First note that the transition probability for a simple random walk satisfies \eqref{eq:transitionProbReturnCondition}.
Now, for each particle $a\in H$ let
\begin{align*}
n_a &= \P[\{t\in [s,s+T]: a(t) = x(t)\}\neq \emptyset\mid \sF_s],\\
r_a &= \E\Big[\int_s^{s+T} \mathbf{1}(a(t) = x(t)) dt \Bigm| \sF_s\Big].
\end{align*}
Setting $\varsigma_a = \inf\{t\geq s: a(t) = x(t)\}$ we have
\begin{align*}
r_a &= \E\Big[\mathbf{1}(\varsigma_a \leq s+T) \int_{\varsigma_a}^{s+T} Q^{\varsigma_a,t}_{x(\varsigma_a),x(t)} dt \Bigm| \sF_s\Big]\\
&\leq C\log (T) \E[\mathbf{1}(\varsigma_a \leq s+T) \mid \sF_s] \leq C\log (T)n_a.
\end{align*}
Summing over $a \in H$ yields the result.
\end{proof}
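We record why the first step of the proof holds: for a continuous-time simple random walk in two dimensions, the standard on-diagonal heat kernel bound gives $\sup_{y \in \Z^2} Q^{t_1,t}_{x,y} \le 1 \wedge \tfrac{C}{t - t_1}$, so
\begin{equation*}
\int_{t_1}^{t_1+T} \sup_{y\in \Z^2} Q_{x,y}^{t_1,t}\, dt \leq \int_0^{T} \Big(1 \wedge \frac{C}{u}\Big)\, du \leq C + C\log T \leq C'\log T
\end{equation*}
for all $T \geq 2$, which is \eqref{eq:transitionProbReturnCondition}.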
We will also need a slight variant of Lemma~\ref{l:hittingNumberSimple}. Define
\begin{align*}
N'(H,s,T,x)&:= \sum_{a\in H} \mathbf{1}(\{t\in [s,s+T]: a(t') = x(t) \text{ for all } t'\in[t,t+1]\} \neq \emptyset), \\
R'(H,s,T,x)&:= \sum_{a\in H} \int_s^{s+T} \mathbf{1}(a(t') = x(t) \text{ for all } t'\in[t,t+1])dt.
\end{align*}
\begin{lemma}\label{l:hittingNumber}
There exists a constant $C>0$ such that for any set of particles $H$ performing continuous-time simple random walks and any path $x:[s, s + T] \to \Z^2$, for $T \ge 2$ we have
\[
\E[N'(H,s,T,x) \mid \sF_s] \geq \frac{1}{C\log T} \E[R'(H,s,T,x) \mid \sF_s].
\]
\end{lemma}
\begin{proof}
For each particle $a\in H$ let
\begin{align*}
n_a &= \P[\{t\in [s,s+T]: a(t') = x(t) \text{ for all } t'\in[t,t+1]\} \neq \emptyset \mid \sF_s],\\
r_a &= \E\Big[\int_s^{s+T} \mathbf{1}(a(t') = x(t) \text{ for all } t'\in[t,t+1])dt \Bigm| \sF_s \Big].
\end{align*}
Setting $\varsigma_a = \inf\{t\geq s: a(t) = x(t)\}$ we have
\begin{align*}
r_a &= e^{-1}\E\Big[\mathbf{1}(\varsigma_a \leq s+T) \int_{\varsigma_a}^{s+T} Q^{\varsigma_a,t}_{x(\varsigma_a),x(t)} dt \Bigm| \sF_s\Big]\\
&\leq Ce^{-1}\log (T) \E[\mathbf{1}(\varsigma_a \leq s+T) \mid \sF_s] \leq C\log (T)n_a,
\end{align*}
which yields the result after summing over $a$.
\end{proof}
\begin{remark}\label{rem:hittingNumberConcentration}
Conditionally on $\sF_s$, the counting function $N = N(H,s,T,x)$ is a sum of independent indicator random variables, so by Bernstein's inequality,
\begin{align}
\nonumber
\P\Big[N \leq \frac{1}{2} \E[N \mid \sF_s] \;\Big|\; \sF_s\Big] \leq \exp(- c \E[N \mid \sF_s]).
\end{align}
The same bound holds with $N' = N'(H,s,T,x)$ by identical reasoning.
\end{remark}
\subsection{Proof of Proposition~\ref{p:blueSeed} and Lemma \ref{L:blueSeedconditional}}
The proofs of these two statements are essentially the same. We focus on proving Proposition~\ref{p:blueSeed} and will indicate when a minor difference is required for the proof of Lemma \ref{L:blueSeedconditional}.
By standard random walk estimates (Lemma \ref{L:rw-estimate}),
\begin{equation}\label{eq:a1Bound}
\P[\cA^{(1)}_{B}] \to 1
\end{equation}
as $L\to \infty$. Note that in the context of Lemma \ref{L:blueSeedconditional}, $\P[\cA^{(1)}_{B} \mid \cW] \ge \P[\cA^{(1)}_{B}]$ since the events for $\cA_B^{(1)}$ involving particles that are affected by $\cW$ are necessarily satisfied on the event $\cW$.
Moreover, the relationship \eqref{E:Lnu-relationship} implies that any particle has a healing event in the interval $[0, L^3]$ with probability at most $64 L^{-3}$. By this bound, Lemma~\ref{l:HBsize}, and a union bound we have that
\begin{equation}\label{eq:a34Bound}
\P[\cA^{(3)}_{B} \cap \cA^{(4)}_{B}] \geq 1 - CL^{-1}.
\end{equation}
Again, we have $\P[\cA^{(3)}_{B} \cap \cA^{(4)}_{B} \mid \cW] \ge \P[\cA^{(3)}_{B} \cap \cA^{(4)}_{B}]$ since the event $\cW$ guarantees that some ignition particles receive no healing events in the interval $[0, L^3]$.
It remains to bound the probabilities of $\cA^{(2)}_{B,B',x}$ and $\cA^{(5)}_{B,x}$. We define the following sets:
\begin{align*}
U_x^- &= \{y\in B:d(y,U_x^c)\geq \frac14 \alpha L\},\quad U_x^* = \{y\in B:d(y,U_x^c)\geq \frac12 \alpha L\},\\
B^- &= \{y\in B: d(y, B^c) \geq \tfrac1{10} L\},
\end{align*}
and
\begin{align*}
Q''_{B,x}=\Big\{a\in H_B^- &: a(0)\in B^-, \forall 0 \le s'\leq \tfrac{\xi}{2\log\log L} \quad a(s')\in U_x^- \Big\}.
\end{align*}
At time $0$, $Q''_{B,x}$ is a Poisson process of particles on the set $B^-$ with density $\mu p_y q_{x,y}$ where $p_y$ was defined in \eqref{eq:principalLB} and
\[
q_{x,y}=\P[W_s \in U_x^- \text{ for all } s \in [0, \tfrac{\xi}{2\log\log L}] ],
\]
where $W_s$ is a simple random walk started at $y$. For $x \in \del B^\#$ and $y \in B^-$, by Lemma \ref{L:rw-estimate} we have that $q_{x,y} \ge 1 - o(1)$, where the $o(1)$ term tends to $0$ as $L \to \infty$ uniformly over $x \in \del B^\#, y \in B^-$.
By equation~\eqref{eq:principalLB}, $p_y\geq \frac34$ and so for sufficiently large $L$,
\[
\E[|Q''_{B,x}|]\geq \frac12 \mu |B^-| \geq \frac14\mu L^2
\]
for all $x \in \del B^\#$.
The future trajectories of the particles in $Q''_{B,x}$ are random walks conditioned to stay inside $U_x^-$ until time $\tfrac{\xi}{2\log\log L}$. These conditioned walks are instances of a Markov chain. Moreover, the set $U_x^-$ is of the form $[a, b] \times [c, d]$ where $b - a \ge L/2, d-c \ge L/2$. Therefore conditioning a random walk to stay inside $U_x^-$ in the interval $[0, \tfrac{\al L^2}{2\log\log L}]$ will not greatly increase the transition probabilities between points, and the transition matrix of this Markov chain will satisfy
\[
P^{t,t'}_{y,y'} \leq 1\wedge \frac{C}{t'-t}
\]
for all $0\leq t'-t$. In particular, this Markov chain satisfies equation~\eqref{eq:transitionProbReturnCondition}. Furthermore, for all $y\in B^-$, all $y'\in U_x^*$ and all $s\in[\tfrac{\xi}{4\log\log L},\tfrac{\xi}{2\log\log L}]$ we have that
\[
P^{0,s}_{y,y'} \geq \frac1{L^2} \exp(-C(\log \log L)),
\]
since $y,y'$ are both distance $\al L/4$ away from the boundary of $U_x^-$ and are distance at most $4L$ from each other.
If we let $x(t)$ be a path in $U_x^*$ and let
\[
R(x)=\sum_{a\in Q''_{B,x}}\int_{\tfrac{\xi}{4\log\log L}}^{\tfrac{\xi}{2\log\log L}} \mathbf{1}(a(t)=x(t)) dt
\]
then
\begin{align}
\E R(x)&=\sum_{a\in Q''_{B,x}}\int_{\tfrac{\xi}{4\log\log L}}^{\tfrac{\xi}{2\log\log L}} P^{0,t}_{a(0),x(t)} dt\nonumber\\
&\geq \E[|Q''_{B,x}|] \inf_{y \in B^-} \int_{\tfrac{\xi}{4\log\log L}}^{\tfrac{\xi}{2\log\log L}} P^{0,t}_{y,x(t)} dt\nonumber\\
& \geq \tfrac{cL^2}{4\log\log L} \exp(-C_1(\log \log L))\nonumber\\
&\geq L^2 \exp(-C_2(\log \log L)).
\end{align}
Now let
\begin{align*}
Q'_{B,x}=\Big\{a\in Q^{''}_{B, x} &: \exists s\in[0,\tfrac{\xi}{2\log\log L}] \text{ such that } a(s) = x+ W_{B_x,\operatorname{ig}}(s)\Big\}.
\end{align*}
Since $x+ W_{B_x,\operatorname{ig}}(s) \in U_x^*$ for $s\in[0,\tfrac{\xi}{2\log\log L}]$ on the event $\cA^{(1)}_B$, by Lemma~\ref{l:hittingNumberSimple} we have that
\begin{equation}
\E[|Q'_{B,x}|\mid \cA^{(1)}_B] \geq \frac{1}{C\log L} \E[R(x+ W_{B_x,\operatorname{ig}})\mid \cA_B^{(1)}] \geq L^2 \exp(-C_3(\log \log L)).
\end{equation}
Now, conditional on $\cA^{(1)}_B$ and all particle trajectories from $\sP_B$ up to time $\xi/(2 \log \log L)$, each particle in $Q'_{B,x}$ performs a simple random walk after time $\xi/(2 \log \log L)$. Therefore under this conditioning, each particle in $Q'_{B,x}$ has probability at least $\exp(-C_4 (\log\log L))$ of exiting $U_x$ through $B'$ within a further time $\tfrac{\xi}{2\log\log L}$, for any neighbouring block $B' \not\subset U_x$. Calling the set of particles that does this $Q_{B, B', x}'$, we therefore have
\begin{align*}
\E[|Q'_{B, B', x}| \mid \cA^{(1)}_B] &\geq \exp(-C_4 (\log\log L))\E[|Q'_{B,x}| \mid \cA^{(1)}_B] \\
&\geq L^2 \exp(-(C_3 + C_4)(\log \log L)).
\end{align*}
Conditional on the trajectory of $W_{B_x,\operatorname{ig}}$, we also have that $|Q'_{B, B', x}|$ is a Poisson random variable and so
\begin{align*}
\P\Big[|Q'_{B, B', x}| \geq L^{2-\log^{-1/2} L} \Bigm| \cA^{(1)}_B\Big] &\geq \P\Big[\hbox{Poisson}\big(L^2 \exp(-(C_3 + C_4)(\log \log L))\big) \geq L^{2-\log^{-1/2} L}\Big]\nonumber\\
&\geq 1-L^{-100}
\end{align*}
for $L$ sufficiently large, since the threshold $L^{2-\log^{-1/2} L} = L^2 e^{-\log^{1/2} L}$ is much smaller than the mean $L^2 e^{-(C_3 + C_4)\log\log L}$. Now, $Q'_{B, B', x} \sset Q_{B, x}$ for any $B'$, so
\begin{align}
\label{eq:a5Bound}
\P\Big[\cA^{(5)}_{B,x} \Bigm| \cA^{(1)}_B\Big]\geq 1-L^{-100}.
\end{align}
Also, for each $B' \notin U_x$ with $d(B, B') = 1$, the event $\{Q'_{B, B', x} \ne \emptyset\}$ is contained in $\cA^{(2)}_{B,B', x}$, so similarly
\begin{align}
\label{eq:a2Bound}
\P\Big[\cA^{(2)}_{B,B', x} \Bigm| \cA^{(1)}_B\Big]\geq 1-4L^{-100}.
\end{align}
Note that the argument for \eqref{eq:a5Bound} and \eqref{eq:a2Bound} goes through verbatim if we condition on $\cA^{(1)}_B \cap \cW$ rather than just on $\cA^{(1)}_B$, since this stricter conditioning just determines some of the ignition trajectories.
Combining~\eqref{eq:a1Bound}, \eqref{eq:a34Bound}, \eqref{eq:a5Bound} and~\eqref{eq:a2Bound} together with a union bound we have that
\[
\P[(\cA_B)^c]\to 0
\]
as $\nu\to 0$ (or equivalently as $L \to \infty$), completing the proof of Proposition~\ref{p:blueSeed}. Similarly, $\P[(\cA_B)^c \mid \cW]\to 0$, yielding Lemma \ref{L:blueSeedconditional}.
\section{Local and global events}
\label{S:global}
The remainder of the paper is devoted to the proof of the more refined Theorem~\ref{T:main-2}. One of the difficulties in proving this theorem is to separate local and global effects on the process. In this section we define events which will help us to deal with this. Here and throughout the remainder of the paper, for a set of blocks $\sD \sset \sB$, we write $\sM_{\sD}$ for the $\sig$-algebra generated by the $\sM_B, B \in \sD$.
Suppose that we wish to understand the behaviour of the SIR process $X_t$ in a spatial window near a block $B$ and in a window of time centered at the colouring time $\tau_B$. In this window, we would like to say that $X_t$ is essentially governed only by the $\sig$-algebras $\sM_{B'}$ for $B'$ close to $B$. However, there are three global effects that can disrupt this:
\begin{enumerate}
\item The behaviour of the coupled SSP from Proposition \ref{P:SI-to-BR} in blocks near $B$.
\item The presence of particles coloured in far away blocks coming close to $B$.
\item The differences $\tau_{B'} - \tau_B$, for all blocks $B'$ close to $B$.
\end{enumerate}
We handle the first of these effects by looking not at the exact behaviour of the SSP, but rather only the encapsulating sets $[\fC]$ and $[\fD]$. First, throughout the proof we assume that $L$ is sufficiently large so that Theorem \ref{T:random-red-blue} holds in the coupled SSP. Next, we have no control over the SSP or the SIR process unless we are working on the event
\begin{equation}
\label{E-event}
\cC = \Big\{\fB_* = \bigcup_{i=1}^\infty A_i(\fB_*), 0 \in [\fD]^c, [\fD]^c \sset \fR(\infty), [\fD] \text{ has only bounded components}\Big\}.
\end{equation}
This is essentially the event $\cE$ in \eqref{E:CD}, with $[\fC]$ replaced by $[\fD]$ in two places.
It satisfies the same probability bound as $\cE$ since $\fC \sset \fD$ and $[\fD]$ only has bounded components almost surely for $p$ small enough by Lemma \ref{L:largest-scale}. We choose to only work with $[\fD]$ to simplify things moving forward.
On the event $\cC$, the only information we need about the SSP will be contained in the set $[\fD]$. We can use inequality \eqref{E:fDk-estimate} in Lemma \ref{L:largest-scale} to estimate the probability that $[\fD]$ is not determined locally. For a block $B$ and $r \in (0, \infty]$, let
$$
D_\sB(B, r) = \{B' \in \sB: d_\sB(B, B') \le r\}.
$$
Define the encapsulating set $[\fD(B, r)]$ to be the version of $[\fD]$ generated using only blue seeds from blocks in $D_\sB(B, r)$. Note that these sets are nondecreasing in $r$ by Lemma \ref{L:dom-seeds}. The set $[\fD(B, r)]$ is a local approximation of $[\fD]$. The two sets are typically equal away from the boundary of $D_\sB(B, r)$; in particular they typically coincide on the block $B$. Indeed, we have the following estimate.
\begin{lemma}
\label{L:global-local-ests}
For $r < r' \in (0, \infty]$, define the event
\begin{equation}
\cD_B(r, r') = \{ \{B\} \cap [\fD(B, r)] = \{B\} \cap [\fD(B, r')]\}.
\end{equation}
Then as long as $\nu$ is sufficiently small given $\mu$, we have $\P [\cD_B(r, r')] \ge 1 - \exp(-c r^{c/\log \log r})$ for some $c>0$.
\end{lemma}
Moving forward, we will write $\cD_B(r) = \cD_B(r, \infty)$. It will also be useful to note the transitivity relationship
\begin{equation}
\label{E:transitivity}
\cD_B(r, r') \cap \cD_B(r', r'') = \cD_B(r, r'')
\end{equation}
for $r < r' < r''$, which follows from the monotonicity of the sets $[\fD(B, r)]$ in $r$. We will typically use this with $r'' = \infty$. In this case, \eqref{E:transitivity} splits the global event $\cD_B(r)$ into a local part $\cD_B(r, r')$ which is measurable given the $\sig$-algebra $\sM_{D_\sB(B, r' + 2)}$, and a global part $\cD_B(r')$ whose probability decays quickly with $r'$.
\begin{proof}
If $\{B\} \cap [\fD(B, r')] \ne \{B\} \cap [\fD(B, r)]$, then by monotonicity $B \notin [\fD(B, r)]$ but $B \in [\fD(B, r')]$.
We claim that then $B$ is in a component of $[\fD(B, r')]$ of radius at least $r/2$, and hence is also in a component of $[\fD]$ of radius at least $r/2$. This will imply the lemma via \eqref{E:Mn-BC} in Lemma \ref{L:largest-scale}.
Setting some notation, let
$$
\fB_*^1 = \fB_* \cap D_\sB(B, r), \qquad \fB_*^2 = \fB_* \cap D_\sB(B, r'),
$$
and for each blue seed $B'$ and $i = 1, 2$, let $k_i(B')\in \N$ be the unique level with $B' \in A_{k_i(B')}(\fB_*^i)$. Suppose that the component of $B$ in $[\fD(B, r')]$ has radius less than $r/2$. Then this component only contains blue seeds in the smaller disk $D_\sB(B, r/2)$. One of these blue seeds $B'$ must satisfy $k_1(B') < k_2(B')$ and so $B' \in \fB^2_{*, k_1(B') + 1} \smin \fB^1_{*, k_1(B') + 1}$. Therefore by applying the first part of Lemma \ref{L:Ak-estimate}, with $X = D_\sB(B, r/2)$, we must have that
$$
D_\sB(D_\sB(B, r/2) , r_{k_1(B')}/2) \not \sset D_\sB(B, r),
$$
and hence $r_{k_1(B')} > r$. Therefore $r_{k_2(B')} > 10^{12} r_{k_1(B')}> 100r$ by \eqref{E:gammabound} and so by the definition of $\fD$ the component of $B'$ in $[\fD(B, r')]$ has radius at least $r$. This is a contradiction.
\end{proof}
We can handle the second effect with straightforward bounds on the deviations of random walks.
For this next lemma, recall that $H_B^{++}$ consists of all trajectories that may be contributed to the SIR process from $\sP_B$ plus the ignition trajectory for $B$.
\begin{lemma}
\label{L:far-particles}
For $t \in \R, s, m > 0$, and a block $B$, define the event
\begin{equation}
\label{E:deviation}
\cE_B(t, s, m) = \bigcap_{a \in H_B^{++}} \{\sup_{r \in [-s, s]} |a(t) - a(t + r)| \le m\}.
\end{equation}
Then $\P [\cE_B(t, s, m)] \ge 1 - C L^2 e^{- \frac{cm^2}{m+s}}$, where $C, c > 0$ are constants depending on $\mu$.
\end{lemma}
\begin{proof}
This bound is immediate from Lemma \ref{L:rw-estimate}, Lemma \ref{l:HBsize}, and a union bound.
\end{proof}
The third effect is more difficult to understand, as we have not established any control over the correlations between different $\tau_B$'s. We will therefore deal with the issue of the random differences $\tau_B - \tau_{B'}$ with a mixture of different methods. Our main tool will be the bound from \eqref{E:first-lipschitz} which crudely guarantees the Lipschitz estimate
\begin{equation}
\label{E:Lipschitz}
|\tau_B - \tau_{B'}| \le L^2 d(B, B').
\end{equation}
As an immediate application of \eqref{E:Lipschitz} we use Lemma \ref{L:far-particles} to control particles that come close to $B$ from far away.
\begin{corollary}
\label{C:no-far-particles}
For blocks $B, B'$ and $s > 0$, define the event
\begin{equation*}
\cL_B(B', s) := \cE_{B'}(0, s + L^2 d(B, B'), d(B, B')/4).
\end{equation*}
Then on $\cL_B(B', s)$, we have that
\begin{equation}
\label{E:LB-event}
\inf \{ d(\bar a (t), B) : t \in [\tau_B - s, \tau_B+ s], a \in H_{B'}^{++} \} \ge d(B, B')/2.
\end{equation}
Moreover,
\begin{align*}
\P [\cL_B(B', s)] &\ge 1 - C L^2 \exp \lf( -\frac{c d(B, B')^2}{s + L^2 d(B, B')}\rg).
\end{align*}
\end{corollary}
\begin{proof}
By \eqref{E:Lipschitz}, we have
$$
[\tau_B - s, \tau_B + s] \sset [\tau_{B'} - s - L^2 d(B, B'), \tau_{B'} + s + L^2 d(B, B')].
$$
Any particle $a \in H_{B'}^{++}$ is contained in the block $B'$ at some point in the interval $[\tau_{B'}, \tau_{B'} + \xi]$.
Therefore for \eqref{E:LB-event} to fail, there must be some particle $a \in H_{B'}^{++}$ whose path in the time interval $[-s - L^2 d(B, B'), s + L^2 d(B, B')]$ has diameter at least $d(B, B')/2$. This particle would have to travel distance $d(B, B')/4$ from its location at time $0$ in the time interval $[-s - L^2 d(B, B'), s + L^2 d(B, B')]$, implying $\cL_B(B', s)^c$.
Lemma \ref{L:far-particles} gives the quantitative bound.
\end{proof}
While Lemma \ref{L:global-local-ests} and Corollary \ref{C:no-far-particles} do not allow us to rule out all global effects on the local behaviour of the SIR process $X_t$, they do allow us to simplify our study to only consider blocks within a $(d_\sB(B, 0)^{1/6} + m)$-radius of $B$ for some constant $m$ (here $0$ is the block containing $(0,0)$). The following corollary is immediate from these results, and sets up the global event $\cG_\nu$ in Theorem \ref{T:main-2}.
\begin{corollary}\label{L:global-is-rare}
For any density $\mu > 0$ and $\nu < \nu^-$, as long as $m =m_\nu \in \N$ is large enough the following event has positive probability.
$$
\cG_\nu := \cC \cap \bigcap_{B \in \sB} \lf(\cD_B(d_\sB(0, B)^{1/6} + m) \cap \bigcap_{B' : d_\sB(B, B') \ge m + d_\sB(0, B)^{1/6}}\cL_B(B', [d(B, B')]^{3/2}) \rg).
$$
Moreover, by letting $m_\nu \to \infty$ as $\nu \to 0$ we can guarantee that $\P[ \cG_\nu] \to 1$ as $\nu \to 0$.
\end{corollary}
Moving forward, we set $m_B = m + d_\sB(0, B)^{1/6}$ to be the locality radius for $B$ and assume $\nu$ is small enough so that $\P[\cG_\nu] \ge 1/2$.
\section{Upper bound on the density of remaining susceptible particles}
\label{S:upper-bd-density}
In this section, we show that the density of infected and susceptible particles in a region decreases dramatically shortly after the colouring process passes through. When combined with the estimate in Proposition \ref{P:death-with-specifics}, this will later allow us to conclude that the infection dies out locally after the colouring process passes through (see Section \ref{S:herd}).
\subsection{Local infection spread}
Fix any block $B$.
We first establish that the following holds for some small fixed $\de > 0$: on the event $\cG_\nu$,
\begin{equation}
\label{E:goal-delta}
\text{all particles in $H_B$ are infected by time $L^{2+\delta} + \tau_B$ and heal by time $L^7$}
\end{equation}
with superpolynomially high probability in $L$.
We will construct a collection of events whose intersection with $\cG_\nu$ implies \eqref{E:goal-delta}. The first of these events, $\cJ_B$, will be a long-range event controlling the coupled SSP, and the remaining events will have short range, only depending on the $\sig$-algebras $\sM_{B'}$ for $B'$ in a small radius of $B$. Throughout this section, all bounds hold for sufficiently large $L$ given a fixed $\de > 0$.
We let the block $B$ be our frame of reference. Setting some notation for the section, let $\sC = \sC_B := D_\sB(B, 4 L^\de), \sC^- = \sC^-_B := D_\sB(B, 2 L^\de)$.
Let $\cJ_B$ be the event
\[
\cJ_B = \Big\{\forall B'\in \sC^-: \{B'\} \cap [\fD(B', L^{\delta/20})] = \{B'\} \cap [\fD(B', m_{B'})]\Big\}.
\]
Using the notation of Lemma \ref{L:global-local-ests}, we have $\cJ_B = \bigcap_{B'\in \sC^-} \cD_{B'}(L^{\delta/20}, m_{B'})$. By equation~\eqref{E:transitivity}, on the event $\cG_\nu \cap \cJ_B$ the components of $\sC^- \cap [\fD]$ agree with a local approximation of radius $L^{\delta/20}$.
By Lemma~\ref{L:global-local-ests} we have that
\begin{equation}
\label{E:cJB-bound}
\P[\cJ_B] \geq 1 - \exp(-L^{c\delta/\log\log L}).
\end{equation}
We move on to defining the short-range events.
Our next event implies that every local neighbourhood of radius $L^{\delta/15}$ (in blocks) has density at least $\frac23$ of red blocks. We set
\[
\cL_B = \Big\{\forall B'\in \sC^-: |\{B''\in D_\sB(B', L^{\delta/15}) : B''\in [\fD(B'', L^{\delta/20})]\}| \leq \frac13 L^{2\delta/15}\Big\}
\]
and have the following strong estimate.
\begin{lemma}
\label{E:cLB-bound}
We have $
\P[\cL_B] \geq 1 - \exp(-cL^{\delta/30})$.
\end{lemma}
\begin{proof}
For each block $B''$ we have that $\P(B''\in [\fD(B'', L^{\delta/20})]) \leq \frac14$ for large enough $L$ by Corollary \ref{C:blue-domination} and \eqref{E:Mrrk} in Lemma \ref{L:largest-scale}. Moreover, any collection of events
$$
\{B_i \in [\fD(B_i, L^{\delta/20})]\}, \quad i = 1, \dots, k
$$
are independent if $d_\sB(B_i, B_j) > 2L^{\delta/20}$ for all $i \ne j$.
Applying a standard concentration estimate, see Lemma~\ref{l:dependentPerc} immediately below, yields the bound.
\end{proof}
\begin{lemma}\label{l:dependentPerc}
Let $p > 0$, and let $\{Y_x\}_{x\in \Z^2}$ be a field of Bernoulli$(p_x)$ random variables where $p_x \le p$ for all $x$. Suppose that $\{Y_x\}_{x\in A}$ are independent for any finite set $A\subset \Z^2$ whose pairwise distances are at least $R$. Then for any set $\Lambda$ and any $y>2p|\Lambda|$,
\[
\P[\sum_{x\in \Lambda} Y_x \geq y] \leq R^2 \exp(-c y^2/(R^2|\Lambda|)).
\]
Similarly, if $p_x \ge p$ for all $x$ then for any set $\Lambda$ and any $y< p|\Lambda|/2$ we have
\[
\P[\sum_{x\in \Lambda} Y_x \leq y] \leq R^2 \exp(-c y^2/(R^2|\Lambda|)).
\]
\end{lemma}
\begin{proof}
We only prove the first inequality as the second is similar. For $(j_1,j_2)\in \{0,\ldots,R-1\}^2$ let
\[
\Lambda_{j_1,j_2} = \{(x_1,x_2)\in\Lambda: x_1 \equiv j_1 \ \hbox{mod} \ R, \; x_2 \equiv j_2 \ \hbox{mod} \ R\}.
\]
If $\sum_{x\in \Lambda} Y_x \geq y$ then there must exist some $(j_1,j_2)$ such that
\[
\sum_{x\in \Lambda_{j_1,j_2}} Y_x \geq y\lf(\frac{1}{4R^2} + \frac{3|\Lambda_{j_1,j_2}|}{4 |\Lambda|}\rg).
\]
By Hoeffding's inequality and using that $\E \sum_{x\in \Lambda_{j_1,j_2}} Y_x \le |\Lambda_{j_1,j_2}| p \le \tfrac{|\Lambda_{j_1,j_2}|y}{2 |\Lambda|} $ we have
\begin{align*}
\P[\sum_{x\in \Lambda_{j_1,j_2}} Y_x &\geq |\Lambda_{j_1,j_2}|y\lf(\frac{1}{4R^2 |\Lambda_{j_1,j_2}|} + \frac{3}{4 |\Lambda|} \rg)] \\
& \le \exp\lf(-2 |\Lambda_{j_1,j_2}| y^2\lf(\frac{1}{4R^2 |\Lambda_{j_1,j_2}|} + \frac{1}{4 |\Lambda|} \rg)^2 \rg) \le \exp\lf(- c \frac{y^2}{R^2 |\Lambda|}\rg).
\end{align*}
The final inequality follows by minimizing the exponent in $|\Lambda_{j_1,j_2}|$. The lemma follows by a union bound.
\end{proof}
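To spell out the minimization in the last step of the proof above: writing $n = |\Lambda_{j_1,j_2}|$ and using $a + b \geq 2\sqrt{ab}$ on the first and third terms,
\begin{equation*}
n\Big(\frac{1}{4R^2 n} + \frac{1}{4|\Lambda|}\Big)^2 = \frac{1}{16 R^4 n} + \frac{1}{8R^2 |\Lambda|} + \frac{n}{16 |\Lambda|^2} \geq \frac{1}{8R^2|\Lambda|} + \frac{1}{8 R^2 |\Lambda|} = \frac{1}{4R^2|\Lambda|},
\end{equation*}
uniformly in $n$, so the exponent in the Hoeffding bound is at least $\tfrac{y^2}{2R^2|\Lambda|}$ and the stated bound holds with $c = \tfrac12$.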
On the event $\cG_\nu \cap\cJ_B\cap\cL_B$, for every $B' \in \sC^-$, each ball $D_\sB(B', L^{\de/15})$ is coloured at most one-third blue. All these balls are completely contained in the larger set $\sC$. Our next event controls the movement of particles from this larger set.
Define
\[
\cN_B:= \bigcap_{ B'\in \sC} \bigcap_{ a\in H_{B'}^{++}} \Big\{ |a(t)-a(0)| \leq L^{1 +\frac{\delta}{10}}\lceil t/L^2 \rceil^{\frac12+ \frac{\delta}{5}} \quad \forall t \in [0, L^{3}]\Big\}.
\]
With notation as in Lemma \ref{L:far-particles}, we have
\begin{equation*}
\cN_B \supset \bigcap_{ B'\in \sC} \bigcap_{i=1}^L \cE_{B'}(0,i L^2, (iL^2)^{1/2 + \de/10}),
\end{equation*}
so by that lemma and a union bound we have
\begin{equation}
\label{E:NB-bound}
\P[\cN_B] \geq 1 - \frac12 \exp(-cL^{\delta/3})
\end{equation}
for large enough $L$.
If $B' \in \sC$ and $B'' \notin D_\sB(B', L^{3\de/5})$ then $d(B', B'') \ge L^{1 + 3\delta/5}/5$ by \eqref{E:block-distance}. The Lipschitz bound \eqref{E:Lipschitz} then implies that on the event $\cN_B$ a particle in $H_{B'}^{++}$ will not reach $B''$ before time $\tau_{B''} \wedge (\tau_{B'} + L^3)$ and thus cannot become the ignition particle for $B''$ prior to time $\tau_{B'} + L^3$. It follows that at most $L^{3\de}$ particles in $\bigcup \{H_{B'}^{++} : B' \in \sC\}$ become ignition particles prior to time $\inf_{B' \in \sC}\tau_{B'} + L^3$.
A significant difficulty in checking when particles near $B$ become infected given only the information in $\sM_{\sC}$ is that this does not tell us the relative times of the $\tau_{B'}$. To accommodate this we will discretize the relative times $\tau_B-\tau_{B'}$ and take a union bound over this discretization. Similarly, we do not know exactly which blocks are blue, so we will take a union bound over the possibilities.
For a block $B$ let $\sU=\sU_B=\{0,1\}^{\sC}\times \Z^{\sC}\times (\Z^2)^{\sC}$.
Let
\begin{align*}
\sU^* = \sU^*_B =\bigg\{(r,s,x)\in \sU&: \max_{B'\in \sC} |s(B')| \leq 9L^{\delta}\xi, \qquad x(B') \in (B')^{\#} \; \forall B' \in \sC, \\
&\sum_{B'' \in D_\sB(B', L^{\delta/15})} r(B'') \geq \frac{1}{3} L^{2\delta/15} \qquad \forall B'\in \sC^-
\bigg\}.
\end{align*}
We will be interested in the random variable $U_B\in \sU$ defined by $U_B=(r_B,s_B,x_B)$, where $r_B(B')$ is the indicator that block $B'$ is red, $x_B(B')$ is the ignition location for $B'$, and $s_B(B') = \lfloor \tau_{B} - \tau_{B'} \rfloor$.
Since the colouring process spreads at least at rate $\xi$ (see \eqref{E:first-lipschitz}) we have that $\max_{B'\in \sC} |s_{B}(B')| \leq 9L^{\delta}\xi$ and $x_B(B') \in (B')^\#$ for all $B'$ by construction. Moreover, the condition on $r$ required for membership in $\sU^*$ holds for $r_B$ on the event $\cG_\nu \cap \cJ_B\cap\cL_B$. Therefore on this event, $U_B\in \sU^*$. Note that
\begin{equation}
\label{E:cLde}
|\sU^*| \leq L^{c L^{\delta}}.
\end{equation}
For $u=(r,s,x)\in \sU_B^*$ we say a particle $a\in H_{B}^+$ is \textbf{covered} by a particle $a'\in Q_{B',x(B')}$ with respect to $u$ if $r(B')=1$ and there exists $t\in[\frac12 L^{2+\delta}, L^{2+\delta}-1]$ and $y\in\Z^2$ such that $a(t)=y$ and $a'(t' + s(B'))=y$ for all $t'\in [t,t+1]$. By construction, if $(r,s,x)=(r_B,s_B,x_B)$ and $a$ is covered by $a'$ for such $t$ and $y$, then $\oa(t+\tau_B)=\oa'(t+\tau_B)=y$ since $a'(t')=y$ for all
\[
t'\in \Big[t + \lfloor \tau_{B} - \tau_{B'} \rfloor + \tau_{B'},t + \lfloor \tau_{B} - \tau_{B'} \rfloor + \tau_{B'} +1\Big],
\]
which includes $t+\tau_B$. We will write $\cW_{B,u,a}$ for the event that $a\in H_B^+$ is covered by at least $L$ particles in $\bigcup_{B' \in \sC} Q_{B', x(B')}$ with respect to $u\in \sU_B^*$. We ask that $L$ particles cover $a$ rather than just one since some of the particles may become ignition particles and then not follow their original path. However, recall that on the event $\cN_B$ at most $L^{3 \delta}$ particles will become ignition particles prior to time
$$
\inf_{B' \in \sC} \tau_{B'} + L^3 \ge \tau_B + L^3/2.
$$
Therefore some of the $L$ particles that cover $a$ will not change their original path prior to time $\tau_B + L^3/2$.
\begin{lemma}
\label{L:WBua-bd}
Defining
$$
\cW_B = \bigcap_{u=(r,s,x)\in \sU^*} \Big[\Big(\bigcap_{a\in H_B^+} \cW_{B,u,a} \Big) \cup \cT_{B, u}^c\Big],
$$
where $\cT_{B, u} := \{\forall B'\in \sC : r(B') \leq \mathbf{1}(\cA_{B'})\}$,
we have
$
\P[\cW_B^c, \cN_B] \leq \exp(-c L^2)$.
\end{lemma}
\begin{proof}
We will establish the result by a union bound over $u$ and $a$. Let $N$ be the number of particles $a' \in \bigcup_{B' \in \sC} Q_{B', x(B')}$ which cover a fixed $a \in H_B^+$ and define
\[
R= \sum_{\substack{B'\in \sC:\\r(B')=1}} \sum_{a'\in Q_{B',x(B')}} \int_{\frac12 L^{2+\delta}}^{L^{2+\delta}-1} \mathbf{1}\Big(a'(t' + s(B'))=a(t) \;\; \forall t'\in[t,t+1] \Big) dt.
\]
Let $\sI$ denote the $\sigma$-algebra generated by the events $\{\cA_{B'}\}_{B'\in \sC^-}$, all particles in $\cup_{B'\in \sC^-} H_{B'}^+ \cup W_{B, \operatorname{ig}}$ up to time $L^2$ and the particle $a(t)$ up to time $L^{2+\delta}$:
\[
\sI=\sigma\Big\{\{\cA_{B'}\}_{B'\in \sC^-}, \{a(t)\}_{t\in [0,L^{2+\delta}]}, \{a'(t)\}_{a'
\in \cup_{B'\in \sC^-} H_{B'}^+ \cup W_{B, \operatorname{ig}}, t\in [0,L^{2}]} \Big\}.
\]
\]
The future trajectories of the $a'(t)$ after time $L^2$ are simple random walks independent of $\sI$, so conditional on $\sI$, we are in the setting of Lemma~\ref{l:hittingNumber} and Remark \ref{rem:hittingNumberConcentration}. This gives that
\begin{equation}
\label{E:RcI}
\P[N \leq \frac{c}{\log L} \E[R \mid \sI]\mid \sI] \leq \exp(-\frac{c'}{\log L} \E[R \mid \sI])
\end{equation}
for constants $c, c' > 0$. Next, we estimate $\E[R \mid \sI]$.
For this, let
\[
Z_{B,u,a} = \mathbf{1}(\cT_{B, u} \cap \cN_B^*),
\]
where $\cN_B^* := \{\P[\cN_B \mid \sI] > 0\}$ is the part of the probability space where the information from $\sI$ does not preclude the event $\cN_B$. The indicator $Z_{B,u,a}$ is $\sI$-measurable.
Now fix $t\in [\frac12 L^{2+\delta}, L^{2+\delta}-1]$.
Let $B_t$ denote the $\sI$-measurable block containing the location $a(t)$. On the event $\cN_B^*$, the movement of $a$ is restricted and so we necessarily have $B_t \in \sC^-$.
Therefore on $\cN_B^*$, we have that $D_\sB(B_t, L^{\de/15}) \sset \sC$, and so for any $B' \in D_\sB(B_t, L^{\de/15})$ and any $a' \in Q_{B',x(B')}$, on $\cN_B^*$ the value of $a'(L^2)$ must be within distance $2L^{1 + \de/5}$ of $B_t$. Hence
\[
d(a'(L^2), a(t)) \le 3L^{1 + \de/5}.
\]
Therefore for any such $a'$ we have
\begin{align*}
\P[a'(t' + s(B'))=a(t) \quad \forall t'\in[t,t+1] \mid \sI] &= e^{-1} \P[a'(t + s(B'))=a(t) \mid \sI] \\
&\ge c L^{-2-\delta}Z_{B,u,a}.
\end{align*}
Since $u \in \sU^*$ and $B_t \in \sC^-$, at least one-third of the blocks $B' \in D_\sB(B_t, L^{\de/15})$ have $r(B')=1$. If we also have that $\cT_{B, u}$ holds then $\mathbf{1}(\cA_{B'})=1$ for all these blocks.
Since $|Q_{B',x(B')}|\geq L^{2-(\log L)^{-1/2}}$ for such blocks, we have
\begin{align*}
\E\Big[\sum_{\substack{B'\in \sC:\\r(B')=1}} \sum_{a'\in Q_{B',x(B')}} &\mathbf{1}(a'(t' + s(B'))=a(t) \quad \forall t'\in[t,t+1] ) \mid \sI\Big] \\
&\geq c' L^{-\de + 2\de/15 -(\log L)^{-1/2}} Z_{B,u, a}
\end{align*}
and so upon integrating we get that
\[
\E[R\mid \sI] \geq c L^{2+2\delta/15 -(\log L)^{-1/2} }Z_{B,u, a} \geq L^{2} \log(L) \ Z_{B,u, a}.
\]
Therefore by \eqref{E:RcI}, we have
\[
\P[\cW_{B,u,a}^c,Z_{B,u, a}=1 \mid \sI]\leq \exp(-c L^2)
\]
and so unconditionally
\[
\P[\cW_{B,u,a}^c \cap \cT_{B, u} \cap \cN_B^*]\leq \exp(-c L^2).
\]
Taking a union bound over $u, a$ and using that $\cN_B \sset \cN_B^*$ we have that
$
\P[\cW_B^c, \cN_B] \leq \exp(-c L^2)
$
which completes the proof.
\end{proof}
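The exponent bookkeeping in the proof of Lemma \ref{L:WBua-bd} (number of red blocks, times particles per red block, times collision probability, integrated over the time window) can be checked mechanically. The following supplementary sketch tracks only powers of $L$ with exact rational arithmetic, keeping the $(\log L)^{-1/2}$ correction as a separate symbolic term; the value $\delta = 1/10$ is a hypothetical placeholder:

```python
from fractions import Fraction as F

delta = F(1, 10)  # hypothetical placeholder; any fixed small delta > 0 works the same way

# Exponents of L in the lower bound on E[R | I], per unit of time t:
red_blocks   = 2 * delta / 15   # at least (1/3) L^{2 delta/15} red blocks in the ball
particles    = F(2)             # >= L^{2 - (log L)^{-1/2}} particles per red block
collision_pr = -2 - delta       # collision probability >= c L^{-2 - delta}

per_time = red_blocks + particles + collision_pr
# Integrating over the window [L^{2+delta}/2, L^{2+delta} - 1] adds 2 + delta:
total = per_time + 2 + delta

print(per_time == -delta + 2 * delta / 15)  # matches the display before integration
print(total == 2 + 2 * delta / 15)          # matches E[R | I] >= c L^{2 + 2 delta/15 - ...}
```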
On the event $\cW_B$ we must have $\cW_{B, U_B, a}$ for all ${a\in H_B^+}$ and hence all such $a$ are covered.
Next, we write $\cK_{B}$ for the event that every particle in $H_B^+ \cup W_{B, \operatorname{ig}}$ gets a healing event in the interval $[\frac12 L^7, L^7]$. This is the final event we need to ensure all particles in $H_B$ are removed. The following lemma then summarizes the results of this section.
\begin{lemma}
\label{L:final-event}
On the event $\cG_\nu \cap\cJ_B\cap\cL_B\cap\cN_B \cap \cW_B \cap \cK_B$ every particle in $H_B \cup W_{B, \operatorname{ig}}$ is removed by time $\tau_B+L^7$. Moreover, we have the following probability bounds, in addition to the bound \eqref{E:cJB-bound} on $\P [\cJ_B]$.
\begin{align}
\label{eq:cLNWbound}
\P[\cL_B\cap\cN_B \cap \cW_B] &\geq 1 - \exp(-cL^{\delta/30}) \\
\label{eq:cKbound}
\P[\cK_B] &\geq 1-\exp(-\frac13 L).
\end{align}
\end{lemma}
\begin{proof}
On the event $\cG_\nu\cap \cJ_B\cap\cL_B\cap\cN_B \cap \cW_B$, every particle in $H_B^+$ is infected by time $\tau_B + L^{2+\delta}$: by the discussion prior to Lemma \ref{L:WBua-bd}, each particle in $H_B^+$ will collide with a particle in some $Q_{B',x_B(B')}$ from a red block $B'$ while the latter is still infected. If we also intersect with $\cK_B$, then every particle in $H_B \cup W_{B, \operatorname{ig}}$ will be removed by time $\tau_B+L^7$. The bound \eqref{eq:cLNWbound} follows from \eqref{E:cLB-bound}, \eqref{E:NB-bound}, and Lemma \ref{L:WBua-bd}. Finally, $|H_B^+|$ is stochastically dominated by a Poisson random variable with mean $\frac32 \mu L^2$ by Lemma~\ref{l:HBsize}, and each particle receives a healing event in $[\frac12 L^7, L^7]$ with probability at least $1-\exp(-\frac12 L)$ by the relationship \eqref{E:Lnu-relationship}. This yields \eqref{eq:cKbound}.
\end{proof}
\subsection{Events to bound the number of susceptible particles}
\label{S:upper-bound-s}
Having established that particles in most blocks $B$ become infected soon after $\tau_B$ we next want to establish that the density of susceptible or infected particles in a given region near $B$ remains low. Concretely, we will prove the following estimate.
\begin{prop}
\label{P:big-estimate}
Fix a block $B$ and let $s \ge L^{10}$. Let $z_B$ be the square closest to the center of $B$ and for $z \in \Z^2$ define $\Lambda_z := D(z + z_B, s^{49/100})$. Then there is an event $\cX = \cX_{B, s}$ such that:
\begin{itemize}
\item $\P[\cX] \ge 1 - \exp(-s^{c/\log \log s})$ and $\cX$ is $\sM_{D_{\sB}(B, m_B + 3s^{3/4})}$-measurable.
\item On the event $\cX \cap \cG_\nu$, we have
\begin{equation}
\label{E:bara-sum}
\frac{1}{|\Lambda_z|} \sum_{B' \in D_\sB(B, s^{3/4})} \sum_{a \in H_{B'}^{++}} \mathbf{1}(\bar a(\tau_B + s') \in \Lambda_z, \bar a \notin \mathbf{R}_{\tau_B + s'}) \le L^{-20}
\end{equation}
for all $s' \in [s-1, s]$ and $z \in \Z^2$.
\end{itemize}
\end{prop}
To shorten notation, we write $\sD = D_\sB(B, s^{3/4})$ and define
\[
\sD_1=\{B'\in \sD: \cJ_{B'}^c\}
\]
and
\[
\sD_2 = \{B'\in \sD: (\cL_{B'}\cap\cN_{B'} \cap \cW_{B'}\cap \cK_{B'})^c\}.
\]
Lemma \ref{L:final-event} shows that for $B'\in \sD \setminus(\sD_1 \cup \sD_2)$ all the particles in $H_{B'} \cup W_{B', \operatorname{ig}}$ are removed by time $\tau_{B'} + L^{7} \leq \tau_B + s$. Thus in the sum in \eqref{E:bara-sum} we need only consider particles from blocks in $\sD_1 \cup \sD_2$. For small $s$, this observation already allows us to prove Proposition \ref{P:big-estimate}.
\begin{proof}[Proof of Proposition \ref{P:big-estimate} for $L^{10} \le s \le L^{5000}$]
In this case, we let $\cX = \{\sD_1 \cup \sD_2 = \emptyset\}$. On the event $\cX \cap \cG_\nu$, the left hand side of \eqref{E:bara-sum} is $0$ for all $z$. For $B' \in D_\sB(B, s^{3/4})$ the events $\{B' \in \sD_1\}, \{B' \in \sD_2\}$ are measurable given $$
\sM_{D_\sB(B', m_{B'} + 10 L^\de)} \sset \sM_{D_\sB(B, m_{B'} + 2s^{3/4})} \sset \sM_{D_\sB(B, m_{B} + 3s^{3/4})}
$$
by the definitions of $\cJ_B, \cL_B, \cN_B, \cW_B,$ and $\cK_B$. It remains to check the probability bound on $\cX$. By \eqref{E:cJB-bound}, Lemma \ref{L:final-event}, and a union bound, we have
\begin{equation*}
\P[|\sD_1 \cup \sD_2| \geq 1] \leq L^{10000}\exp(-L^{c'\delta/\log \log L}) \leq \exp(-s^{c/\log \log s})
\end{equation*}
as long as $c$ is sufficiently small given $c'$, where in the final inequality we have used that $s \le L^{5000}$.
\end{proof}
For the remainder of this section we assume $s \ge L^{5000}$. In this case, Proposition \ref{P:big-estimate} will be proven by using the estimates in \eqref{E:cJB-bound} and Lemma \ref{L:final-event} applied to different blocks $B'$ in a moderate radius around $B$.
To have a concentration bound that improves with $s$,
we will use the fact that all the events in Lemma~\ref{L:final-event} (aside from $\cG_\nu$ itself) are defined locally, along with the concentration estimate in Lemma \ref{l:dependentPerc}.
First, with notation as in Lemma \ref{L:far-particles}, define
\begin{equation}
\label{E:cX'}
\cX' = \bigcap_{B' \in D_\sB(B, s^{3/4})} \cE_{B'}(0, 2s, s/4).
\end{equation}
By Lemma \ref{L:far-particles} and a union bound we have
\begin{equation}
\label{E:X'-bound}
\P[\cX'] \ge 1- Cs^{3/2} L^2 \exp (- cs) \ge 1 - \exp(-s^{c/\log \log s})
\end{equation}
and by construction and the Lipschitz bound \eqref{E:first-lipschitz}, on $\cX'$ we have
$$
\sum_{B' \in D_\sB(B, s^{3/4})} \sum_{a \in H_{B'}^{++}} \mathbf{1}(d(\bar a(\tau_B + s'), B) > s) = 0
$$
for all $s' \in [s-1, s]$.
In particular, on $\cX'$, the inequality \eqref{E:bara-sum} holds for all $|z| \ge 2s$. Therefore by a union bound, to complete the proof it suffices to show that for every $z$ with $|z| \le 2s$, we can define an $\sM_{D_{\sB}(B, m_B + 3s^{3/4})}$-measurable event $\cQ_z$ satisfying $
\P[\cQ_z] \ge 1 - \exp(-s^{c/\log \log s})
$ and such that \eqref{E:bara-sum} holds on $\cQ_z \cap \cG_\nu$ for all $s' \in [s-1, s]$ for that particular $z$.
To count how many particles that originated in blocks contained in $\sD_1 \cup \sD_2$ are close to $\Lambda_z$ at time $\tau_B + s$, we will obtain concentration estimates on the density of $\sD_1 \cup \sD_2$ on different geometric scales. Define $j_{{\max}} = \frac32 \log_2 (sL^{-1})+2$ and for $i\in\{1,2\}$ and $1\leq j \leq j_{{\max}}$ define
\[
D_{i,j}= \{B' \in \sD_i: B' \in D_\sB(B_z , 2^j s^{1/2} L^{-1}) \},
\]
where $B_z$ is the block containing the reference vertex $z + z_B$.
We will let
\[
\cR_{i,j} = \{|D_{i,j}| \leq L^{-1000} 4^j s\}, \qquad \cR =\bigcap_{j=1}^{j_{{\max}}} (\cR_{1,j}\cap \cR_{2,j})
\]
and will establish the following bound.
\begin{lemma}
\label{eq:sumCSbound}
For any $s \ge L^{5000}$ and any $z \in \Z^2$ with $|z| \le 2s$ we have
$$
\P[\cR] \geq 1 - \exp(-s^{c'/\log \log s}).
$$
\end{lemma}
\begin{proof}
The events $\{B_i'\in \sD_2\}$, $i = 1, \dots, k$, are independent as long as $d_\sB(B_i', B_j') \ge 10 L^\de$ whenever $i \ne j$. Hence by Lemma~\ref{l:dependentPerc} we have
\begin{align}\label{eq:sumCS2bound}
\P[\cR_{2,j}^c] &\leq 100 L^{2 \de} \exp(-c L^{-2000} 4^j s L^{-2\delta}).
\end{align}
To estimate $\P[\cR_{1,j}^c]$ we can first use \eqref{E:transitivity} and the definition of $\cJ_B, \sD_1$ to write
\begin{align}
\nonumber
&\{B'\in \sD_1\} = \bigcup_{B'' \in \sC_{B'}^-} [\cD_{B''}(L^{\de/20}, m_{B''})]^c = \cK^1_{B'} \cup \cK^2_{B'} \text{ where }\\
\nonumber
\qquad &\cK_{B'}^1 = \bigcup_{B'' \in \sC_{B'}^-} [\cD_{B''}(s^{1/10}, m_{B''})]^c, \quad \cK_{B'}^2 = \bigcup_{B'' \in \sC_{B'}^-} [\cD_{B''}(L^{\de/20}, s^{1/10})]^c.
\end{align}
From here we can use Lemma \ref{L:global-local-ests} and the fact that the events $\cK_{B'}^2$ are independent for blocks that are distance at least $3 s^{1/10}$ apart to bound $\P[\cR^c_{1, j}]$. Indeed, by combining Lemma \ref{L:global-local-ests}, Lemma~\ref{l:dependentPerc}, and a union bound we have
\begin{align*}
\P[\cR^c_{1, j}] &\le \P\Big(\sum_{B' \in D_\sB(B_z , 2^j s^{1/2} L^{-1})} \mathbf{1}\lf(\cK_{B'}^1\rg) > 0 \Big) + \P\Big(\sum_{B' \in D_\sB(B_z , 2^j s^{1/2} L^{-1})} \mathbf{1}\lf(\cK_{B'}^2\rg) > L^{-2000} 4^j s \Big) \\
&\le 4^j s L^{-2} \exp(- c s^{c/\log \log s}) + 9s^{1/5} \exp(-c L^{-2000}4^j s/s^{1/5}).
\end{align*}
Combining this bound with \eqref{eq:sumCS2bound}, summing over $j$ and using that $s \ge L^{5000}$ yields the desired bound.
\end{proof}
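In the final step of the proof above, the bound at each scale $j$ decays like $\exp(-c\,4^j x)$ for an appropriate $x$, so the sum over $j$ is dominated by the smallest scale. A supplementary numerical illustration of this geometric domination, with a hypothetical constant $c = 1$:

```python
from math import exp

def tail_sum(c, x, jmax=60):
    """Sum of exp(-c * 4^j * x) over scales j = 1, ..., jmax."""
    return sum(exp(-c * 4**j * x) for j in range(1, jmax + 1))

# Once c*x >= 1, the whole sum over scales is within a factor 2 of its j = 1 term.
ok = all(tail_sum(1.0, x) <= 2 * exp(-4.0 * x) for x in [1.0, 2.0, 5.0, 10.0])
print(ok)  # True
```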
\begin{corollary}
\label{C:cY}
Fix $s \ge L^{5000}$ and $z \in \Z^2$ with $|z| \le 2s$ and define
\begin{equation}
\label{E:Ydef}
\cY_{j} = \{\sum_{B'\in D_{1,j}\cup D_{2,j}} |H_{B'}^{++}| \leq L^{-800} 4^j s\}, \qquad \cY =\bigcap_{j=1}^{j_{{\max}}} \cY_{j}.
\end{equation}
Then
\begin{equation*}
\P[\cY^c] \leq \exp(-s^{c/\log \log s}).
\end{equation*}
\end{corollary}
\begin{proof}
We will estimate the probability of $\cY_{j}^c$ on the event $\cR_{1,j}\cap \cR_{2,j}$.
On $\cR_{1,j}\cap \cR_{2,j}$ there are at most $L^{-1000} 4^j s$ blocks $B'$ in $D_{1,j}\cup D_{2,j}$. There are at most ${4^j s \choose L^{-1000} 4^j s}$ choices of these blocks from the blocks in $D_\sB(B_z , 2^j s^{1/2} L^{-1})$. Given a deterministic selection of $L^{-1000} 4^j s$ blocks $B'$, the total number of particles in the corresponding sets $H_{B'}^{++}$ is stochastically dominated by a Poisson random variable with mean $C L^{2-1000} 4^j s$ by Lemma \ref{l:HBsize} and so
\begin{align*}
\P[\cY_{j}^c,\cR_{1,j}\cap \cR_{2,j}] &\leq {4^j s \choose L^{-1000} 4^j s} \P[\hbox{Pois}(C L^{2-1000} 4^j s) > L^{-800} 4^j s]\\
&\leq \exp(- L^{-800} 4^j s).
\end{align*}
Hence by a union bound,
\begin{equation}\label{eq:sumCYbound}
\P[\cY^c,\cR] \leq \exp(-s^{c/\log \log s}).
\end{equation}
The result then follows from \eqref{eq:sumCYbound} and Lemma \ref{eq:sumCSbound}.
\end{proof}
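The binomial coefficient in the display above is presumably absorbed into the Poisson tail via the standard bound ${n \choose k} \le (en/k)^k$: here this gives an exponent of order $L^{-1000} 4^j s \log(e L^{1000})$, which is of lower order than the $L^{-800} 4^j s$ in the tail bound. A quick supplementary numerical check of the standard bound on small values:

```python
from math import comb, e

# Verify binom(n, k) <= (e*n/k)^k over a range of small n and k.
ok = all(comb(n, k) <= (e * n / k) ** k
         for n in range(1, 60) for k in range(1, n + 1))
print(ok)  # True
```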
We will use Corollary \ref{C:cY} to control the event
\[
\cQ'_z =\bigg\{ \sup_{s' \in [s-1, s]} \sum_{B'\in \sD_1 \cup \sD_2}\sum_{a\in H_{B'}^{++}} \mathbf{1}(\oa(\tau_B+s') \in \Lambda_z) \leq L^{-20} |\Lambda_z| \bigg \}.
\]
Here we again face the issue that the functions $\oa$ depend implicitly on the non-local differences $\tau_B - \tau_{B'}$. By \eqref{E:first-lipschitz} we know that $|\tau_B - \tau_{B'}|\leq \xi s^{3/4}$, so for a particle $a \in H_{B'}^{++}$ we let $G_{a}$ be the event that $a(t')\in \Lambda_z$ for some time $t'$ with $|t'-s|\leq \xi s^{3/4} + 1$. The event $G_{a}$ must hold if the particle is in $\Lambda_z$ at time $\tau_B+s'$ for some $s' \in [s-1, s]$. Hence we define
\begin{equation}
\label{E:Qdef}
\cQ_z =\bigg\{\sum_{B'\in \sD_1 \cup \sD_2} \sum_{a\in H_{B'}^{++}} \mathbf{1}(G_{a}) \leq L^{-20} |\Lambda_z| \bigg \}.
\end{equation}
With this definition, $\cQ_z \subset \cQ'_z$, and we have the following estimate.
\begin{lemma}
\label{L:Q'-lemma}
For $s \ge L^{5000}$ we have $\P[\cQ_z] \ge 1 - \exp(-s^{c/\log \log s}).$
\end{lemma}
\begin{proof}
First, let $\cO$ be the event that no particle from $\sD$ moves more than distance $s^{5/12}$ by time $s^{4/5}$:
\[
\cO:=\bigcap_{B'\in \sD} \cE_{B'}(0, s^{4/5}, s^{5/12})
\]
which by Lemma \ref{L:far-particles} and a union bound holds with probability
\begin{align}\label{eq:CObound}
\P[\cO] &\geq 1 - C s^{3/2} L^2 \exp(-c s^{1/30}).
\end{align}
By \eqref{eq:CObound} and Corollary \ref{C:cY}, it suffices to show that
\begin{equation}\label{eq:cQ}
\P[\cQ_z^c,\cY,\cO] \leq \exp(-s^{c/\log \log s}).
\end{equation}
Now let $a \in H_{B'}^{++}$ for some $B' \in \sD$. It follows from our construction that $a(s^{4/5} + t)-a(s^{4/5}), t \ge 0$, the trajectory of $a$ after time $s^{4/5}$, is independent of $\sD_1, \sD_2, \cY$ and $\cO$. In particular, if we define the annulus
$$
V_j= D_\sB(B_z , 2^j s^{1/2} L^{-1}) \smin D_\sB(B_z , 2^{j-1} s^{1/2} L^{-1})
$$
then for $a\in H_{B'}^{++}$ with $B' \subset V_j$, by a random walk estimate similar to Lemma \ref{L:rw-estimate} we have
\[
\P[G_{a}\mid \sD_1, \sD_2, \cY,\cO] \leq \exp(-c 2^{j})|\Lambda_z|s^{-1}
\]
since conditional on $\cO$ the path of $a$ travels at most distance $s^{5/12}$ by time $s^{4/5}$ and after that it is a random walk that needs to travel distance $c2^j s^{1/2}$ in time $s-o(s)$ to reach $\Lambda_z$. Note that the window of time $\xi s^{3/4}+1$ in the definition of $G_{a}$ is
lower order compared to the square of the side-length of $\Lambda_z$ so this window of time will only affect the random walk estimate by a constant factor.
On the event $\cY$ there are at most $L^{-800} 4^j s$ particles in these blocks so
\begin{align*}
\P&[\sum_{B' \subset V_j }\mathbf{1}(B'\in \sD_1 \cup \sD_2) \sum_{a\in H_{B'}^{++}} \mathbf{1}(G_{a}) \geq 2^{-j} L^{-20} |\Lambda_z|,\cY,\cO]\\
&\leq \P[\hbox{Bin}(L^{-800} 4^j s,\exp(-c 2^{j})|\Lambda_z|s^{-1}) \geq 2^{-j} L^{-20} |\Lambda_z|]\\
&\leq \exp(-s^{c/\log\log s}),
\end{align*}
which yields \eqref{eq:cQ} after a union bound over $j$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{P:big-estimate} when $s \ge L^{5000}$]
We set
$$
\cX = \cX' \cap \bigcap_{z \in \Z^2, |z| \le 2s} \cQ_z.
$$
This satisfies the bound in the first bullet of Proposition \ref{P:big-estimate} by \eqref{E:X'-bound}, Lemma \ref{L:Q'-lemma}, and a union bound over the $O(s^2)$ values of $z$, and satisfies \eqref{E:bara-sum} for $s' \in [s-1, s]$ and $|z| \ge 2s$ by construction. The $\sM_{D_\sB(B, m_B + 3s^{3/4})}$-measurability of $\cX$ follows by tracing back the various definitions:
\begin{itemize}[nosep]
\item Each event $\cE_{B'}, B' \in D_\sB(B, s^{3/4})$ used in the definition of $\cX'$ only depends on $\sM_{B'}$.
\item Given $\sD_1, \sD_2$, the event $\cQ_z$ only depends on $\sM_{B'}, B' \in D_\sB(B, s^{3/4})$.
\item For $B' \in \sD$ the events $\{B' \in \sD_1\}, \{B' \in \sD_2\}$ are measurable given $\sM_{D_\sB(B, m_{B'} + 3s^{3/4})}$ as in the $s \le L^{5000}$ case.
\end{itemize}
Finally, on the event $\cG_\nu$, for $|z| \le 2s$ we have
\begin{align*}
\sum_{B' \in \sD} \sum_{a \in H_{B'}^{++}} \mathbf{1}(\bar a(\tau_B + s') \in \Lambda_z, \bar a \notin \mathbf{R}_{\tau_B + s'}) \le &\sum_{B'\in \sD_1 \cup \sD_2}\sum_{a\in H_{B'}^{++}} \mathbf{1}(\oa(\tau_B+s') \in \Lambda_z),
\end{align*}
which on $\cQ_z \sset \cQ'_z$ is at most $L^{-20}|\Lambda_z|$ for all $s' \in [s-1, s]$. This yields \eqref{E:bara-sum} for $|z| \le 2s$.
\end{proof}
\section{Surviving particles}
\label{S:survival}
In this section we define a \emph{survival event} which will guarantee that a certain particle in a block $B$ survives long past when its block gets coloured. This is the first step in showing that with positive probability, the infection survives forever but a positive density of particles never become infected.
For this section and the next, in addition to examining measurability of events with respect to the block $\sig$-algebras $\sM_B$, we will also examine measurability with respect to the $\sig$-algebra $\sF_t$ generated by the trajectories $\bar a(s), s \in (-\infty, t]$, for all particles $a \in H_B^{++}$ with $B \in \sB$ such that $\tau_B \le t$. In other words, $\sF_t$ is the natural filtration for the process of coloured particles.
The most important component of our survival event is an extremely rare event $\cU_B$ which is $\sM_{D_\sB(B, 2)}$-measurable. We have three main goals when defining this event:
\begin{itemize}[nosep]
\item To specify the movements of a particle $a^* \in H_B$ that will have a good chance of surviving forever.
\item To use the other particles in $H_B$ to build a protective wall of infected particles around $a^*$ that will ensure it is difficult for particles from other blocks to reach $a^*$ before they have recovered.
\item To ensure that when doing this, $B$ does not become a blue seed.
\end{itemize}
Naturally, the precise definition of $\cU_B$ must be somewhat technical. For the definition, we need to distinguish a square in $B$. Rather arbitrarily, we let $v_B$ be the square closest to the center of $B$, where ties are broken using the lexicographic order.
\begin{definition}
\label{D:survivor}
For a block $B$ let $\cU_{B}$ be the event where the following things occur:
\begin{enumerate}
\item \emph{Particle count:} $H^+_B = H^-_B$.
\item \emph{Healing:} For every particle $a \in H^+_B$, we have $a^h \cap [0, L^{20}-2] = \emptyset$ and $a^h \cap [L^{20}-2, L^{20}-1] \ne \emptyset$.
\item \emph{Ignition particles:} For every block $B'$ with $d(B', B) \le 2$, the ignition trajectory $W_{B', \operatorname{ig}}$ satisfies $W_{B', \operatorname{ig}}(s) = W_{B', \operatorname{ig}}(0)$ for all $0 \le s \le L^4$ and the ignition healing clock satisfies $W^h_{B', \operatorname{ig}} \cap [0, L^3] = \emptyset$ and $W^h_{B', \operatorname{ig}} \cap [L^3, L^4] \ne \emptyset$.
\item \emph{The survivor:}
There is some particle $a^* \in H^+_B$ with $a^*(t) = v_B$ for all $0 \le t \le L^{20}$.
\item \emph{Other trajectories:} We can partition $H_B^+ \smin \{a^*\}$ into sets
$$
\{H_{B, B', x}^+ : x \in \del B^\#, d(B, B') = 1, B' \not\in U_x \}
$$
such that:
\begin{enumerate}
\item $|H_{B, B', x}^+| = L^{90}$ for all $x, B'$.
\item Every particle $a \in H_{B, B', x}^+$ satisfies:
\begin{align}
\label{E:66}
d(a(0), B^c) &\ge L/10 \\
\label{E:67}
\{a(t) : 0 \le t \le \xi/(2\log \log L) \} &= U_x \smin \{v_B\}, \\
\label{E:68}
\{a(t) : t \in [0, L^{20}]\} &\sset D(v_B, L^{20}) \smin \{v_B\}.
\end{align}
\item The particle $a$ first exits $U_x$ in the interval $[\xi/(2\log \log L), \xi/\log \log L)$. When it does so, it enters $B'$.
\end{enumerate}
\item \emph{Protective wall:} For every $b \in D(v_B, L^{20}) \smin \{v_B\}$,
there are $L^{45}$ particles in each $H_{B, B', x}^+$ that satisfy
$$
a(t) = b, \qquad \xi \le t \le L^{20}.
$$
\end{enumerate}
\end{definition}
The protective wall in the final part of Definition \ref{D:survivor} will guarantee that it is hard for a particle starting in a block far away from $B$ to infect $a^*$. However, the protective wall is not effective at preventing infection from nearby blocks. Because of this, we need to define a different specific event to handle blocks $B'$ close to $B$. To consolidate later proofs, we will build this event in a similar way to $\cU_B$; however, the exact structure here is not nearly as important.
\begin{definition}
\label{D:survivor-2}
For blocks $B' \ne B$, let $\cV_{B', B}$ be the event where the following things occur:
\begin{enumerate}
\item \emph{Particle count and ignition trajectories:} Events $1$ and $3$ in Definition \ref{D:survivor} hold for the block $B'$.
\item \emph{Healing:} For every particle $a \in H^+_{B'}$, we have $a^h \cap [0, L^{15}] = \emptyset$ and $a^h \cap [L^{15}, L^{15} + 1] \ne \emptyset$.
\item \emph{Trajectories:} We can partition $H_{B'}^+$ into sets
$$
\{H_{B', B'', x}^+ : x \in \del B^{\prime \#}, d(B'', B') = 1, B'' \not\in U_x \}
$$
such that:
\begin{enumerate}
\item $|H_{B', B'', x}^+| = L^{90}$ for all $x, B''$.
\item Every particle $a \in H_{B', B'', x}^+$ satisfies:
\begin{align}
d(a(0), B^{\prime c}) &\ge L/10 \\
\{a(t) : 0 \le t \le \xi/(2\log \log L) \} &= U_x \smin \{v_B\}, \\
\label{E:71}
\{a(t) : t \in [0, 2L^{20}]\} &\sset D(v_B, L^{20}) \smin \{v_B\}.
\end{align}
\item The particle $a$ first exits $U_x$ in the interval $[\xi/(2\log \log L), \xi/\log \log L)$. When it does so, it enters $B''$.
\end{enumerate}
Note that this is essentially the same as Definition \ref{D:survivor}.5, except we have no survivor particle $a^*$, and the square $v_B$ is not located in $B'$.
\end{enumerate}
\end{definition}
We now define
$$
\cY_{B} = \cU_B \cap \bigcap_{B' \in D_\sB(B, L^{10})\smin \{B\}} \cV_{B', B}.
$$
We have the following easy deterministic consequences of the construction of $\cY_B$.
\begin{lemma}
\label{L:red-boxes}
On the event $\cY_B$, the following events hold.
\begin{enumerate}[label=(\roman*)]
\item No block $B' \in D_\sB(B, L^{10})$ is a blue seed.
\item Suppose that the block $B$ is ignited. For all times $t \in [\tau_B + \xi, \tau_B + L^{20} - 2]$ and all locations $b \in D(v_B, L^{20}) \smin \{v_B\}$, there is at least one infected particle satisfying $\bar a(t) = b$.
\item Suppose that the block $B$ is ignited. Then by the time $\tau_B + L^{20}-1$, all particles in the set $S_B = \bigcup \{H_{B'} : B' \in D_\sB(B, L^{10}) \}$ have been removed, other than the survivor particle $a^*$ for the block $B$. (For this point, recall that the ignition particle for $B'$ belongs to $H_{B'}$ and we think of the event where a particle becomes an ignition particle for another block as removal).
\item Suppose that the block $B$ is ignited. Then the survivor particle $a^*$ does not come into contact with any infected particles in the set $S_B$.
\end{enumerate}
Moreover, the event $\cY_B$ is measurable with respect to either the $\sig$-algebra $\sM_{D_\sB(B, L^{10} + 2)}$ or the $\sig$-algebra $\sF_{\tau_B + 3L^{20}/2}$.
\end{lemma}
\begin{proof}
Throughout the proof, we work on the event $\cY_B$.
We start with (i). We refer the reader back to \eqref{E:cA123} and surrounding discussion for the detailed definition of a blue seed.
Let $B' \in D_\sB(B, L^{10})$. There are two cases, depending on whether or not $B' = B$; their proofs are identical so we only treat the case when $B' = B$. Definition \ref{D:survivor}.3 guarantees the events $\cA_B^{(1)}$ and $\cA_B^{(3)}$. Definition \ref{D:survivor}.2 guarantees the event $\cA_B^{(4)}$. Definition \ref{D:survivor}.5 and the fact that none of the ignition trajectories in blocks $B'$ with $d(B', B) \le 2$ move prior to time $L^3$ (Definition \ref{D:survivor}.3) guarantee that each of the events $\cA^{(2)}_{B, B', x}$ and $\cA^{(5)}_{B, x}$ hold for $x \in \del B^\#, B' \notin U_x$. Therefore $\cA_{B}$ holds, and so $B$ is not a blue seed.
We now prove (ii). If $B$ is ignited at a location $x \in \del B^\#$, then by Definition \ref{D:survivor}.$3$ that square contains an infected particle for all times in $[\tau_B, \tau_B + L^3]$. Choose $B'' \notin U_x$ with $d(B, B'') = 1$ arbitrarily. Equation \eqref{E:67} in Definition \ref{D:survivor}.$5$ guarantees that all particles in $H^+_{B, B'', x}$ visit the site $x$ prior to time $\tau_B + \xi/(2 \log \log L)$, and equation \eqref{E:68} in Definition \ref{D:survivor}.$5$ guarantees that no more than $5 L^{40}$ of these particles become ignition particles prior to time $\tau_B + L^{20}$.
Definition \ref{D:survivor}.$2$ and Definition \ref{D:survivor}.$6$ then guarantee that at least $L^{45} - 5 L^{40}$ infected particles are at every location $b \in D(v_B, L^{20}) \smin \{v_B\}$ at all times $t \in [\tau_B + \xi, \tau_B + L^{20} - 2]$.
For part (iii), we first deal with particles from $H_B$. The ignition particle recovers by time $\tau_B + L^4$ by Definition \ref{D:survivor}.3. The remaining particles recover by time $\tau_B + L^{20} - 1$ if they are infected prior to that time by Definition \ref{D:survivor}.2. The fact that they are infected prior to that time follows from \eqref{E:68} and part (ii). Now let $B' \in D_\sB(B, L^{10})$. The ignition particle for $H_{B'}$ recovers prior to time $\tau_{B'} + L^4$, which is bounded above by $\tau_B+ L^{12}$ by \eqref{E:first-lipschitz}. For the remaining particles, observe that by \eqref{E:first-lipschitz}, we have
$$
[\tau_{B'}, \tau_{B'} + L^{12}] \cap [\tau_B + \xi, \tau_B + L^{20}] \ne \emptyset,
$$
and so by (ii) and \eqref{E:71}, all particles in $H_{B'}$ are infected by time $\tau_{B'} + L^{12}$. Hence by Definition \ref{D:survivor-2}.2 all particles have recovered by time $\tau_{B'} + L^{15} + 1 \le \tau_B+L^{20} - 1$.
For part (iv), by part (iii), we just need to check that $a^*$ does not contact any infected particles in $S_B$ in the interval $[\tau_B, \tau_B + L^{20}]$. For particles in $H_B \smin \{a^*\}$, this is guaranteed by \eqref{E:68} and Definition \ref{D:survivor}.3 for the ignition particle. For particles in $S_B \smin H_B$ this is guaranteed by \eqref{E:71}, Definition \ref{D:survivor-2}.1 for the ignition particle, and the Lipschitz bound \eqref{E:Lipschitz}.
The $\sM_{D_\sB(B, L^{10} + 2)}$-measurability claim is immediate from construction and the measurability given $\sF_{\tau_B + 3 L^{20}/2}$ follows by the Lipschitz bound \eqref{E:Lipschitz}.
\end{proof}
We also need to define events that control the possibility that the particle $a^*$ comes into contact with an infected particle outside of the set $S_B$, and to control the SSP near $B$. Here we use some notation from previous sections.
First, using the notation of Lemma \ref{L:far-particles}, define
$$
\cR_{B, B'} := \cE_{B'}(0, [d(B, B')]^{3/2}, d(B, B')/4).
$$
Also let $\cT_{B'}$ be the event where for every $b\in H_{B'}^{++}$ and every $m \in [0, L^{17}]$, the healing process $b^h$ satisfies
$$
b^h \cap (m, m + L^{10}/2] \ne \emptyset
$$
and the set
$$
\{b(t) : t \in [m, m + L^{10}/2]\}
$$
has diameter at most $L^{10}/5$.
Note that $\cR_{B, B'}, \cT_{B'}$ are $\sM_{B'}$-measurable events.
\begin{lemma}
\label{L:other-particles}
Let $b \in H_{B'}^{++}$ for some $B'$ with $L^{10} < d(B, B')$.
\begin{enumerate}
\item If additionally $L^{15} < d_\sB(B, B')$, then on the event $\cR_{B, B'}$ the particle $b$ never comes within distance $d(B, B')/2$ of the location $v_B$ before time $\tau_B + L^{20}$.
\item If additionally $L^{10} \le d_\sB(B, B') \le L^{15}$, then on the event $\cT_{B'} \cap \cR_{B, B'} \cap \cY_B$, if the block $B$ was ignited, then the particle $b$ never comes within distance $L^{10}/4$ of the location $v_B$ before time $\tau_B + L^{10}$ and will be removed by that time.
\end{enumerate}
\end{lemma}
\begin{proof}
Part 1 of the lemma follows from Corollary \ref{C:no-far-particles}, noting that when $L^{15} < d_\sB(B,B')$, we have
$$
L^{20} + L^2 d(B, B') \le [d(B, B')]^{3/2}.
$$
For part 2 of the lemma, by the Lipschitz bound \eqref{E:first-lipschitz} we have
\begin{equation}
\label{E:tauB-spread}
|\tau_B + \xi - \tau_{B'}| \le \xi (d_\sB(B, B') + 1) \le [d(B,B')]^{3/2}.
\end{equation}
Since we are working on the event $\cR_{B, B'}$, it follows that $d(B, B')/2 \le d(\bar b(t), B) \le 2d(B, B')$ for all $t \in [\tau_{B'}, \tau_B + \xi]$.
Lemma \ref{L:red-boxes}(ii) then guarantees that $b$ has been infected prior to time $\tau_B + \xi$. Hence, again using \eqref{E:tauB-spread} and the definition of $\cT_{B'}$, the particle $\bar b$ will be removed prior to time $\tau_B + \xi + L^{10}/2 < \tau_B + L^{10}$ and will never come within distance $L^{10}/4$ of $B$ in that time.
\end{proof}
The events $\cT_{B'}$ and $\cR_{B, B'}$ have high probability.
\begin{lemma}
\label{L:AC-estimates}
For any block $B'$ we have
\begin{align*}
\P [\cT_{B'}] \ge 1 - \exp (-c L^{4}),
\end{align*}
and for any $B'$ with $d(B, B') \ge L^{10}$, we have
$$
\P [\cR_{B, B'}] \ge 1 - \exp (-c d(B, B')^{1/2}).
$$
Moreover, both of these bounds hold conditional on $\cY_B$ whenever $B' \notin D_\sB(B, L^{10})$.
\end{lemma}
\begin{proof}
The estimate on $\cR_{B, B'}$ follows from Lemma \ref{L:far-particles}. For the estimate on $\cT_{B'}$, let $\cZ_{m}$ be the event where
\begin{equation}
\label{E:Xib}
b^h \cap (m, m + L^{10}/4] \ne \emptyset
\end{equation}
for all $b \in H_{B'}^{++}$.
Then with notation as in Lemma \ref{L:far-particles} we have
$$
\cT_{B'} \supset \bigcap_{m \in \N \cap [0, L^{18}]} \cZ_m \cap \cE_{B'}(m, 2L^{10}, L^{10}/4).
$$
The probability of \eqref{E:Xib} failing for a fixed $b$ is simply the probability that a Poisson random variable of mean $\nu L^{10}/4$ equals $0$, which is $e^{-\nu L^{10}/4}$. Therefore by a union bound and Lemma \ref{l:HBsize},
$$
\P [\cZ_{m}^c] \le \E (|H_{B'}^+| + 1) e^{-\nu L^{10}/4} \le 2 \mu L^2 e^{-\nu L^{10}/4}.
$$
We can bound $\P [\cE_{B'}(m, 2L^{10}, L^{10}/4)]$ with Lemma \ref{L:far-particles}. Combining these estimates with \eqref{E:Lnu-relationship}, applying a union bound over $m$, and simplifying yields the bound on $\cT_{B'}$.
For the conditional claims, note that both $\cT_{B'}, \cR_{B, B'}$ are $\sM_{B'}$-measurable and so they are independent of $\cY_{B}$ unless $d(B', B'') \le 2$ for some $B'' \in D_\sB(B, L^{10})$.
In the adjacent case, conditioning on $\cY_B$ changes the behaviour of the ignition particle in $B'$ by forcing it to stay still until time $L^3$, and affecting $W^h_{B', \operatorname{ig}}$ in the interval $[0, L^4]$. It is easy to repeat the computations above for $\P[ \cT_{B'}], \P [\cR_{B, B'}]$ under this conditioning to get the same bounds.
\end{proof}
Next, we analyze the SSP close to $B$, conditional on the event $\cY_B$.
\begin{lemma}
\label{L:cYB-conditional}
Conditional on the event $\cY_B$, the blue seed process $\fB$ is still $1$-dependent and is stochastically dominated by a Bernoulli process of mean $\de_\nu$ as in the unconditional case (Corollary \ref{C:blue-domination}).
\end{lemma}
\begin{proof}
First, by Lemma \ref{L:red-boxes}(i), on $\cY_B$ every block $B' \in D_\sB(B, L^{10})$ is not a blue seed. Moreover, the remaining process of blue seeds still forms a $1$-dependent process under this conditioning, so as in the proof of Corollary \ref{C:blue-domination} it is enough to show that for every $B' \notin D_\sB(B, L^{10})$, we have
\begin{equation}
\label{E:PcA}
\P(\cA_{B'}^c \; | \; \cY_B) \le \ep_\nu.
\end{equation}
For $B' \in D_\sB(B, L^{10} + 2)$ we can write $\cY_B = \cW \cap \cX$, where $\cX$ is independent of $\cA_{B'}$ and $\cW$ is of the form of the event in Lemma \ref{L:blueSeedconditional}. That lemma implies the bound \eqref{E:PcA}. For $B' \notin D_\sB(B, L^{10} + 2)$, since $\cY_B$ is $\sM_{D_\sB(B, L^{10}+2)}$-measurable by Lemma \ref{L:red-boxes}, the events $\cY_B$ and $\cA_{B'}$ are independent, so Proposition \ref{p:blueSeed} yields \eqref{E:PcA}.
\end{proof}
Given Lemma \ref{L:AC-estimates}, we have the following.
\begin{corollary}
\label{C:blue-prob}
Recall the definition of the events $\cD_B(r, r')$ in Lemma \ref{L:global-local-ests}. We have the following:
\begin{enumerate}
\item $\fD(B, L^{10}) = \emptyset$ on the event $\cY_B$. In particular, $B \notin [\fD]$ on the event $\cY_B \cap \cD_B(L^{10})$.
\item As in Lemma \ref{L:global-local-ests}, for all $0 < r < r'$ and $B'$ we have $\P[\cD_{B'}(r, r') \; | \; \cY_B] \ge 1 - \exp(-c r^{c/\log \log r})$.
\end{enumerate}
\end{corollary}
We can now put everything together to construct the survival event we will need moving forward.
\begin{lemma}
\label{L:putting-things-together}
For a block $B$, let
$$
\cS_B = \cY_B \cap \cD_B(L^{10}, m_B) \cap \bigcap_{B' : L^{10} < d_\sB(B, B') \le L^{15}} \cT_{B'} \cap \bigcap_{B' : L^{10} < d_\sB(B, B') \le m_B} \cR_{B, B'}.
$$
If $m_B \le L^{10}$, we omit the $\cD_B(L^{10}, m_B)$ and $\cR_{B, B'}$ events above.
Then recalling the global event $\cG_\nu$, we have the following:
\begin{enumerate}[label=(\roman*)]
\item On $\cS_B \cap \cG_\nu$, at all times in $[\tau_B + L^{20} - 1, \tau_B + L^{20}]$, there is a susceptible particle $a^*$ at location $v_B$ and no unremoved particles within distance $L^{15}$ of $v_B$. All particles other than $a^*$ that were coloured in a block in $D_\sB(B, L^{15})$ are removed before time $\tau_B + L^{20} - 1$.
\item There exists $\de=\de(L) > 0$ such that $\P [\cY_B] = \de$ for all $B \in \sB$. Moreover, $\P(\cS_B \; | \; \cY_B) \ge 1 - \exp(- L^{c /\log \log L})$.
\item The event $\cS_B$ is $\sM_{D_\sB(B, m_B + L^{10} + 2)}$-measurable.
\end{enumerate}
\end{lemma}
\begin{proof}
For part (i), we first check that the block $B$ is ignited on $\cG_\nu \cap \cS_B$. First, on $\cG_\nu$, any block not in $[\fD]$ gets coloured red and hence is ignited by Proposition \ref{P:SI-to-BR}.
Next, the event $\cG_\nu$ implies the event $\cD_B(m_B)$. Since the event $\cD_B(L^{10}, m_B)$ occurs on $\cS_B$, by \eqref{E:transitivity} this implies the event $\cD_B(L^{10})$, so therefore $B$ gets coloured red as long as $B \notin [\fD(B, L^{10})]$. Corollary \ref{C:blue-prob} implies that $[\fD(B, L^{10})] = \emptyset$ on $\cY_B$, which contains $\cG_\nu \cap \cS_B$.
Next, $\cL_B(B', [d(B, B')]^{3/2}) \sset \cR_{B, B'}$ for any $B, B'$, so the event $\cG_\nu \cap \cS_B$ implies $\cR_{B, B'}$ for all $B' \notin D_\sB(B, L^{10})$. Therefore by Lemma \ref{L:other-particles}.1, in the time interval $[\tau_B, \tau_B + L^{20}]$, no particles that originated in some $B'$ with $d_\sB(B, B') > L^{15}$ come within distance $d(B, B')/2 > L^{15}$ of $v_B$. Moreover, by Lemma \ref{L:other-particles}.2 and Lemma \ref{L:red-boxes}(iii, iv), all particles other than $a^*$ that originated in some $B'$ with $d_\sB(B, B') \le L^{15}$ are recovered by time $\tau_B + L^{20} - 1$ and none of these come into contact with $a^*$.
Now we turn to (ii). First, $\P[\cY_B]$ does not depend on $B$ by construction. Moreover, we claim that $\cY_B$ is an intersection of finitely many independent positive-probability events and hence $\P \cY_B > 0$. Indeed, let $\tilde \cV_{B, B'}$ and $\tilde \cU_B$ be versions of $\cV_{B', B}$ and $\cU_B$ where we omit any restrictions on ignition particles. The events $\tilde \cV_{B, B'}$ and $\tilde \cU_B$ are, respectively, $\sM_{B'}$ and $\sM_B$-measurable. Then $\cY_B$ is the intersection of all the $\tilde \cV_{B, B'}$ and $\tilde \cU_B$ with a collection of events restricting the behaviour of each ignition particle in $D_\sB(B, L^{10} + 2)$, which are independent of each other, all the $\tilde \cV_{B, B'}$, and $\tilde \cU_B$.
The bound on $\P(\cS_B \; | \; \cY_B)$ follows from the conditional bounds in Lemma \ref{L:AC-estimates} and Corollary \ref{C:blue-prob} along with a union bound.
Finally, the measurability follows from the `Moreover' in Lemma \ref{L:red-boxes} and the $\sM_{B'}$-measurability of the events $\cT_{B'}, \cR_{B, B'}$.
\end{proof}
Moving forward, we will also need more localized versions of $\cS_B$. This is the content of the following lemma.
\begin{lemma}
\label{L:79}
For each block $B$ and $r > L^{15}$ define
$$
\cS_B^r := \cY_B \cap \cD_B(L^{10}, r \wedge m_B) \cap \bigcap_{B' : L^{10} < d_\sB(B, B') \le L^{15}} \cT_{B'} \cap \bigcap_{B' : L^{10} < d_\sB(B, B') \le m_B \wedge r} \cR_{B, B'}.
$$
Again, if $m_B \le L^{10}$, we omit the $\cD_B(L^{10}, r \wedge m_B)$ and $\cR_{B, B'}$ events.
Then $\cS_B^r$ is measurable given either the $\sig$-algebra $\sF_{\tau_B + 2 r^{3/2}}$ or the $\sig$-algebra $\sM_{D_\sB(B, m_B + L^{10} + 2)}$ and we have the bound
$\P[\cS_B \mid \cS_B^r] \ge 1 - \exp(- r^{c / \log \log r})$.
\end{lemma}
\begin{proof}
The $\sM_{D_\sB(B, m_B + L^{10} + 2)}$-measurability follows as in Lemma \ref{L:putting-things-together}. The $\sF_{\tau_B + 2 r^{3/2}}$-measurability follows from the Lipschitz bound \eqref{E:first-lipschitz} since all of the events in the definition of $\cS_B^r$ that come from any fixed $B'$ concern times in the window $(-\infty, \tau_{B'} + r^{3/2}]$. The conditional probability bound follows in the exact same way as in Lemma \ref{L:putting-things-together}, using the transitivity $\cD_B(L^k, r) = \cD_B(L^k, r \wedge m_B) \cap \cD_B(r \wedge m_B, r)$ (Equation \eqref{E:transitivity}).
\end{proof}
\subsection{The relationship with Section \ref{S:upper-bd-density}}
To move from the rare local survival event $\cY_B$ to a global survival event, we need to check that a version of the arguments from Section \ref{S:upper-bd-density} still goes through even after we condition on $\cY_B$. Our goal will be to show the following lemma, which is the conditional analogue of Proposition \ref{P:big-estimate}. Here all notation is as in Section \ref{S:upper-bd-density}.
\begin{lemma}
\label{L:upper-bound-conditional}
Let $B \in \sB$, $s \ge L^{20}$. Let $z_B$ be the square closest to the center of $B$ and for $z \in \Z^2$ define $\Lambda_z := D(z + z_B, s^{49/100})$. Then there is an event $\tilde \cX = \tilde \cX_{B, s}$ such that:
\begin{itemize}
\item $\P[\tilde \cX \mid \cY_B] \ge 1 - \exp(-s^{c/\log \log s})$ and $\tilde \cX$ is $\sM_{D_\sB(B, m_B + 3s^{3/4})}$-measurable.
\item On the event $\tilde \cX \cap \cS_B \cap \cG_\nu$, we have
\begin{equation}
\label{E:bara-sum*}
\frac{1}{|\Lambda_z|} \sum_{B' \in D_\sB(B, s^{3/4})} \sum_{a \in H_{B'}^{++}} \mathbf{1}(\bar a(\tau_B + s') \in \Lambda_z, \bar a \notin \mathbf{R}_{\tau_B + s'}) \le 2L^{-20}
\end{equation}
for all $s' \in [s-1, s]$ and $z \in \Z^2$.
\end{itemize}
\end{lemma}
The event $\tilde \cX$ will simply be equal to the original event $\cX$ from Proposition \ref{P:big-estimate} but with some conditions in the local radius around $B$ removed. In particular, we have $\cX \sset \tilde \cX$.
\begin{proof}
First, on $\cS_B \cap \cG_\nu$, all particles that originated in $D_\sB(B, L^{15})$ not equal to $a^*$ are recovered by time $\tau_B + s-1$ by Lemma \ref{L:putting-things-together}.1. Therefore the left-hand side of \eqref{E:bara-sum*} is less than or equal to
\begin{equation}
\label{E:sumsumsum}
\frac{1}{|\Lambda_z|}\Big(1 + \sum_{B' : L^{15} < d_\sB(B, B') \le s^{3/4}} \sum_{a \in H_{B'}^{++}} \mathbf{1}(\bar a(\tau_B + s') \in \Lambda_z, \bar a \notin \mathbf{R}_{\tau_B + s'}) \Big).
\end{equation}
Here the $+ 1$ is for the one susceptible particle guaranteed by Lemma \ref{L:putting-things-together}. Therefore it suffices to find an event $\tilde \cX$ satisfying the first condition of the lemma, and such that \eqref{E:sumsumsum} $\le 2L^{-20}$ on $\tilde \cX \cap \cY_B \cap \cG_\nu$.
Let $\tilde \sD = \{B' : L^{15} < d_\sB(B, B') \le s^{3/4}\}$, and define $\tilde \sD_1, \tilde \sD_2$ exactly as in the proof of Proposition \ref{P:big-estimate} but with $\tilde \sD$ in place of $\sD$. For $L^{20} \le s \le L^{5000}$, define
$$
\tilde \cX = \{\tilde \sD_1 \cup \tilde \sD_2 = \emptyset\}
$$
and for $s > L^{5000}$, define $\tilde \cX = \tilde \cX' \cap \bigcap_{z \in \Z^2, |z| \le 2s} \tilde \cQ_z$, where
\begin{align*}
\tilde \cX' &= \bigcap_{B' : L^{15} < d_\sB(B, B') \le s^{3/4}} \cE_{B'}(0, 2s, s/4),\\
\tilde \cQ_z &= \bigg\{\sum_{B'\in \tilde \sD_1 \cup \tilde \sD_2} \sum_{a\in H_{B'}^{++}} \mathbf{1}(G_{a}) \leq L^{-20} |\Lambda_z| \bigg \},
\end{align*}
where $G_a$ is as in the proof of Proposition \ref{P:big-estimate}. Exactly as in that proposition, the event $\tilde \cX$ implies that the double sum in \eqref{E:sumsumsum} is at most $L^{-20} |\Lambda_z|$, so the whole expression \eqref{E:sumsumsum} is at most $2L^{-20}$. The measurability claim for $\tilde \cX$ also follows as in the proof of Proposition \ref{P:big-estimate}.
Now, conditional on $\cY_B$, the process of blue seeds still forms a $1$-dependent process that is stochastically dominated by a Bernoulli process of mean $\de_\nu$ by Lemma \ref{L:cYB-conditional}, so all estimates involving $\sD_1$ for the proof of Proposition \ref{P:big-estimate} go through verbatim for $\tilde \sD_1$. Moreover, the set $\tilde \sD_2$ is independent of $\cY_B$. Indeed, the event $\{B' \in \tilde \sD_2\}$ is $\sM_{D_\sB(B', 10 L^\de)}$-measurable by construction and hence $\tilde \sD_2$ is $\sM_{D_\sB(B, L^{14})^c}$-measurable, whereas $\cY_B$ is $\sM_{D_\sB(B, L^{10} + 2)}$-measurable by Lemma \ref{L:red-boxes}. Therefore all estimates involving $\sD_2$ for the proof of Proposition \ref{P:big-estimate} go through verbatim for $\tilde \sD_2$. Finally, given $\tilde \sD_1, \tilde \sD_2$, all events in the definition of $\tilde \cX$ depend only on $\sM_{B'}$ for $L^{15} < d_\sB(B, B') \le s^{3/4}$ and hence are independent of $\cY_B$. Therefore all remaining estimates in the proof of Proposition \ref{P:big-estimate} go through verbatim here, yielding the result.
\end{proof}
\section{The proof of Theorem \ref{T:main-2}}
\label{S:herd}
The results of Section \ref{S:upper-bd-density} guarantee that most particles in a region recover in an $O(1)$ amount of time after the infection front initially visits that region. The results of Section \ref{S:recovery-local} then guarantee that the infection must die out in this region soon afterwards, and so the infection will typically pass through a region in an $O(1)$ amount of time. The results of Section \ref{S:survival} ensure that a particle has some probability of surviving for an arbitrarily long time after the infection enters its block, which together with the results of Sections \ref{S:upper-bd-density} and \ref{S:recovery-local} indicates that some particles will survive forever. In summary, together the results of these three sections suggest the main content of Theorem \ref{T:main-2}. However, putting them together is delicate. This is the goal of the present section.
\subsection{Events and scales for the proof}
For $n \in \N$, define time scales $s_n = L^{20} 2^{n-1}$. We will analyze the SIR process near a block $B$ in the increasing geometric time intervals $[\tau_B + s_n, \tau_B + s_{n+1}]$. We start by defining two events to guarantee herd immunity.
\begin{itemize}
\item $\cN_{n, B}$: For all $t \in [\tau_B + s_n, \tau_B + s_{n+1}]$ we have
\begin{equation*}
\bigcup_{B' \in D_\sB(B, m_B + s_n^{3/4})} \{a \in H_{B'} : a \in {\bf I}_t, d(\bar a(t), B) \le s_n^{3/5} \} = \emptyset.
\end{equation*}
\item $\cM_{n, B}$: \quad We only care about this event when working on $\cY_B$. On $\cY_B$, there is a distinguished particle $a^* \in H_B$, see Definition \ref{D:survivor}. Let $\cM_{n, B}$ be the event where
$$
|a^*(t) - a^*(0)| \le s_n^{5/9}
$$
for all $0 \le t \le s_{n+1}$.
\end{itemize}
\begin{lemma}
\label{L:herd-and-survive}
Fix any block $B$.
\begin{enumerate}[label=\arabic*.]
\item On the event $\cG_\nu$, no infected particle $a \in H_{B'}$ with $d_\sB(B, B') > m_B + s_n^{3/4}$ satisfies $d(\bar a (t), B) \le s_n^{3/5}$ for some $t \in [\tau_B + s_n,\tau_B + s_{n+1}]$.
\item On the event $\cG_\nu \cap \cN_{n, B}$, there are no infected particles $a$ with $d(\bar a (t), B) \le s_n^{3/5}$ for some $t \in [\tau_B + s_n,\tau_B + s_{n+1}]$.
In particular, on the event
$$
\operatorname{Herd}_{B, n} := \cG_\nu \cap \bigcap_{m \ge n} \cN_{m, B},
$$
there are no infected particles in the block $B$ after time $\tau_B + s_n$.
\item On the event
$$
\operatorname{Survive}_{B} := \cG_\nu \cap \cS_B \cap \bigcap_{n \ge 1} (\cN_{n, B} \cap \cM_{n, B}),
$$
there is a particle $a^* \in H_B$ such that $a^* \in \mathbf{S}_t$ for all $t > 0$.
\end{enumerate}
\end{lemma}
\begin{proof}
For part $1$, for $B'$ with $d_\sB(B, B') > m_B + s_n^{3/4}$, the event $\cG_\nu$ is contained in the event $\cL_B(B', [d_\sB(B, B')]^{3/2}) \sset \cL_B(B', s_{n+1})$. Moreover, $d(B, B')/2 > s_n^{3/5}$, so by \eqref{E:LB-event} in Corollary \ref{C:no-far-particles}, no particles from $H_{B'}$ come within distance $s_n^{3/5}$ of $B$ in the interval $[\tau_B + s_n, \tau_B + s_{n+1}]$.
Part $2$ follows from part $1$ and the definition of $\cN_{n, B}$.
For part $3$, the event $\cG_\nu \cap \cS_B$ guarantees that there is a susceptible particle $a^* \in H_B$ contained in the block $B$ at time $\tau_B + s_1 = \tau_B + L^{20}$. This uses Lemma \ref{L:putting-things-together}.1. On the event $\bigcap_{n \ge 1} \cM_{n, B}$, we have that $d(\bar a^*(t), B) \le 4 s_n^{5/9}$ for all $t \in [s_n, s_{n+1}]$. Since $4s_n^{5/9} < s_n^{3/5}$, part $2$ implies that $a^*$ will never encounter an infected particle after time $\tau_B + s_1$.
\end{proof}
The majority of this section is devoted to proving the following two propositions which give bounds on the behaviour of the events $\operatorname{Herd}_{B, n}$ and $\operatorname{Survive}_{B}$ on the event $\cG_\nu$.
\begin{prop}
\label{P:herd-immunity}
For all $B\in \sB$ and $n \ge 2$ we have
$$
\P( \operatorname{Herd}_{B, n} \;|\; \cG_\nu) \ge 1 - \exp(-{(2^n L)}^{c/ \log (n + \log L)}).
$$
\end{prop}
\begin{prop}
\label{P:exists-delta}
There exists a constant $\alpha = \alpha(\nu) > 0$ such that almost surely,
$$
\liminf_{m \to \infty} \frac{1}{m^2} \sum_{B : d_\sB(B, 0) \le m} \mathbf{1}(\operatorname{Survive}_B) \ge \al \mathbf{1}(\cG_\nu).
$$
More precisely, for all large enough $m$ we have
$$
\P \Big(\frac{1}{m^2} \sum_{B : d_\sB(B, 0) \le m} \mathbf{1}(\operatorname{Survive}_B) \le \al \; \Big| \; \cG_\nu \Big) \le C_\nu \exp(- m^{c/ \log \log m}).
$$
\end{prop}
\subsection{Probability bounds and the proof of Proposition \ref{P:herd-immunity}}
\label{SS:prob-bounds}
The key to bounding the probabilities in Proposition \ref{P:herd-immunity} and Proposition \ref{P:exists-delta} is understanding the events $\cN_{n, B}$, both conditionally on $\cG_\nu$ and conditionally on the rarer event $\cG_\nu \cap \cS_B$. We will also establish a degree of spatial independence for these events in order to prove Proposition \ref{P:exists-delta}. The first step is to find a way to apply the local recovery result in Proposition \ref{P:death-with-specifics}. This proposition requires us to start with a process that has a low density of particles locally. To ensure this, we require two more events:
\begin{itemize}
\item $\cO_{n, B}, n \in \N$: For every $B'$ satisfying $m_B + s_n^{3/4} \ge d_\sB(B, B') > s_n^{3/4}$ and every $a \in H_{B'}^{++}$ we have
$$
d(a(t), B) \ge 2 s_n^{4/7} \quad \text{ for all } |t| \le s_{n+1} + L^2 d_\sB(B, B').
$$
Note that by the Lipschitz bound \eqref{E:first-lipschitz}, the event $\cO_{n, B}$ implies that
$$
d(\bar a(t), B) \ge 2 s_n^{4/7} \quad \text{ for all } t \in [\floor{\tau_B}, \tau_B + s_{n+1}].
$$
\item $\cP_{n, B}:$ We define this for all $n \ge 2$. Let $z_B$ be the square closest to the center of $B$ and for $z \in \Z^2$ define $\Lambda_{z, n} := D(z + z_B, s_n^{49/100})$.
For every $z \in \Z^2$, there are at most
$$
L^{-20}|\Lambda_{z, n}|
$$
unrecovered particles in the ball $\Lambda_{z, n}$ that belong to $H_{B'}$ for some $B'$ with $d_\sB(B, B') \le s_n^{3/4}$ at time $\floor{\tau_B} + s_{n-1}$. The discretization of $\tau_B$ here is in preparation for a discrete martingale argument in Section \ref{S:proof83}.
\end{itemize}
We would like to say that the events $\cO_{n, B}, \cP_{n, B}$ are local, high-probability events. In the case of $\cO_{n, B}$, this is straightforward.
For this lemma recall that $\sF_t, t \ge 0$ is the filtration generated by the process of coloured particles up to time $t$, see Section~\ref{S:SI-colouring}.
\begin{lemma}
\label{L:cO}
Let $A_\sB(B, r_1, r_2) = \{B' \in \sB: r_1 \le d_\sB(B, B') \le r_2\}$ and $n \ge 1$.
The event $\cO_{n, B}$ is both $\sM_{A_\sB(B, s_n^{3/4}, m_B + s_n^{3/4})}$-measurable and $\sF_{\tau_B + s_{n+2} + 2\xi m_B}$-measurable, is independent of $\cY_{B}$, and satisfies
\[
\P[\cO_{n, B}] \ge 1 - \exp(- c s_n^{1/2}) = 1 - \exp(-c' L^{10} 2^{n/2}).
\]
\end{lemma}
\begin{proof}
Using the notation of Lemma \ref{L:far-particles}, we can write
$$
\cO_{n, B} \supset \bigcap_{B': m_B + s_n^{3/4} \ge d_\sB(B, B') > s_n^{3/4}} \cE_{B'}(0, s_{n+1} + L^2 d_\sB(B, B'), d(B, B')/8).
$$
The $\sM_{A_\sB(B, s_n^{3/4}, m_B + s_n^{3/4})}$-measurability is immediate from the $\sM_{B'}$-measurability of each $\cE_{B'}$, and the independence from $\cY_{B}$ follows since $s_n^{3/4} \ge L^{15}$ and $\cY_B$ is $\sM_{D_\sB(B, L^{10} + 2)}$-measurable (Lemma \ref{L:red-boxes}). The $\sF_{\tau_B + s_{n+2} + 2 \xi m_B}$-measurability follows from the Lipschitz bound \eqref{E:first-lipschitz}. The bound on $\P[\cO_{n, B}]$ follows from Lemma \ref{L:far-particles}.
\end{proof}
While the event $\cP_{n, B}$ does not have a local definition, it contains a high-probability local version both conditionally on $\cG_\nu$ and conditionally on $\cG_\nu \cap \cS_B$. This next lemma follows immediately from Proposition \ref{P:big-estimate} and Lemma \ref{L:upper-bound-conditional}.
\begin{lemma}
\label{L:local-Pbrief}
Fix a scale $n$ and a block $B$. Then with notation as in Proposition \ref{P:big-estimate} and Lemma \ref{L:upper-bound-conditional} we have
$$
\cG_\nu \cap \cX_{B, s_{n-1}} \sset \cP_{n, B} \qquad \text{ and } \qquad \cG_\nu \cap \cS_B \cap \tilde \cX_{B, s_{n-1}} \sset \cP_{n, B}.
$$
\end{lemma}
We will use the events $\cO_{n, B}, \cP_{n, B}$ to help understand our main event of interest, $\cN_{n, B}$.
\begin{lemma}
\label{L:NnB-conditional}
For $n \ge 2$, define the $\sig$-algebras
\begin{align*}
\sE_{n, B} &:= \sig(\sF_{\floor{\tau_B} + s_{n-1}}, \sM_{D_\sB(B, s_n^{3/4})}\cap \sF_{\tau_B + s_{n+1}}), \\
\sI_{n, B} &:= \sig(\sF_{\floor{\tau_B} + s_{n-1}}, \sM_{D_\sB(B, s_n^{3/4})^c}).
\end{align*}
For $n \ge 2$ and $B \in \sB$ there are events $\tilde \cN_{n, B}$ which are measurable given $\sE_{n, B}$ such that
$$
\tilde \cN_{n, B} \cap \cO_{n, B} \cap \cG_\nu \sset \cN_{n, B}.
$$
Moreover, we have the following bound:
\begin{align}
\label{E:sInB}
\P(\tilde \cN_{n, B} \mid \sI_{n, B}) &\ge (1 - \exp(- c L^4 2^{n/2}))\mathbf{1}(\cP_{n, B}).
\end{align}
\end{lemma}
\begin{proof}
The idea is to appeal to Proposition \ref{P:death-with-specifics}. The details are as follows.
Condition on $\sF_{\floor{\tau_B} + s_{n-1}}$ and consider the configuration of unrecovered particles at time $\floor{\tau_B} + s_{n-1}$, which we split into two groups:
\begin{itemize}[nosep]
\item $P$, consisting of all unrecovered particles in $\bigcup \{H_{B'} : d_\sB(B, B') \le s_n^{3/4} \}$.
\item $Q$, consisting of all other unrecovered particles.
\end{itemize}
Given $\sF_{\floor{\tau_B}+ s_{n-1}}$, the future trajectories of all particles in $P$ are independent continuous time random walks.
Now, if we are on the $\sF_{\floor{\tau_B} + s_{n-1}}$-measurable event $\cP_{n, B}$ then the $P$-particles at time $\floor{\tau_B} + s_{n-1}$ satisfy
\begin{equation}
\label{E:P-bound}
|P(\floor{\tau_B} + s_{n-1}) \cap B^{j \sqrt{s_{n-1}}, z}| \le 2L^{-20} |B^{j \sqrt{s_{n-1}}, z}|
\end{equation}
for every translate $z \in \Z^2, j \in \N$. The factor of $2$ enters here since we may not be able to exactly tile $B^{j \sqrt{s_{n-1}}, z}$ with sets of the form $B^{s_n^{49/100}, y}$.
In particular, if we shift time back by $\floor{\tau_B} + s_{n-1}$ and recenter at $z$, then the corresponding process of $P$-particles satisfies assumption \eqref{E:IC-assumption} with $\delta = 2 L^{-20}$ and $M = \sqrt{s_{n-1}}$. Therefore by the relationship between $L$ and $\nu$, \eqref{E:Lnu-relationship}, we are in the setting of Proposition \ref{P:death-with-specifics}, and so for every $z \in \Z^2$ we can define an event $\cB_{P, z}$ such that we have:
\begin{itemize}[nosep]
\item $\P[\cB_{P, z} \mid \sF_{\floor{\tau_B}+ s_{n-1}}] \ge (1 - e^{-c L^{-6} \sqrt{s_{n-1}}}) \mathbf{1}(\cP_{n, B}) \ge (1 - e^{-c L^4 2^{n/2}})\mathbf{1}(\cP_{n, B}).$
\item $\cB_{P,z}$ is measurable given only
$\sF_{\floor{\tau_B} + s_{n-1}}$ and
$\sM_{D_\sB(B, s_n^{3/4})} \cap \sF_{\tau_B + s_{n+1}}$ (i.e. the trajectories of $P$-particles up to time $\tau_B + s_{n+1}$).
\item On the event $\cD_{Q, z} \cap \cB_{P, z}$, where
$$
\mathcal D_{Q, z} = \{ \text{For all } t \in [\floor{\tau_B} + s_{n-1}, \tau_B + s_{n+1}], \text{ we have } Q(t) \cap B^{2\sqrt{s_{n-1}}, z} = \emptyset \}
$$
there are no infected particles in $B^{\sqrt{s_{n-1}}, z}$ at any $t \in [\tau_B + s_n, \tau_B + s_{n+1}]$.
\end{itemize}
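The two forms of the exponent in the first bullet point agree up to the constant: since $s_n = L^{20} 2^{n-1}$,
$$
L^{-6} \sqrt{s_{n-1}} = L^{-6} \sqrt{L^{20} 2^{n-2}} = L^{4} 2^{(n-2)/2} = \tfrac{1}{2}\, L^4 2^{n/2},
$$
so $\exp(-c L^{-6}\sqrt{s_{n-1}}) \le \exp(-c' L^4 2^{n/2})$ for a suitable constant $c' > 0$.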
Now, we set
$$
\tilde \cN_{n, B} = \bigcap_{z : d(z, B) \le s_n^{3/5}} \cB_{P, z}.
$$
By the first and second bullet points above, the event $\tilde \cN_{n, B}$ satisfies the measurability claims in the lemma. It also satisfies \eqref{E:sInB} with $\sF_{\floor{\tau_B} + s_{n-1}}$ in place of $\sI_{n, B}$, after a union bound over the polynomially many choices of $z$ with $d(z, B) \le s_n^{3/5}$, which is absorbed into the constant $c$. Since conditioning on the finer $\sig$-algebra $\sI_{n, B}$ only gives us additional information about the future trajectories of $Q$-particles, and these are independent of the future trajectories of the $P$-particles, $\tilde \cN_{n, B}$ satisfies \eqref{E:sInB}.
To get the containment claim, it is enough to observe that
\[
\cO_{n, B} \cap \cG_\nu \sset \bigcap_{z : d(z, B) \le s_n^{3/5}} \cD_{Q, z}
\]
which follows from the definition of $\cO_{n, B}$ and Lemma \ref{L:herd-and-survive}.1.
\end{proof}
\begin{corollary}
\label{C:avgNnB}
For every $n \ge 2$ we have
$$
\P[\tilde \cN_{n, B} \mid \cG_\nu] \ge 1 - \exp(-{(2^n L)}^{c/ \log (n + \log L)}).
$$
\end{corollary}
\begin{proof}
We have
\begin{align*}
\P[\tilde \cN_{n, B} \mid \cG_\nu] &\ge \P[\tilde \cN_{n, B} \cap \cP_{n, B}\mid \cG_\nu]
\ge 2\P[\tilde \cN_{n, B} \cap \cP_{n, B}] - 1
\end{align*}
where in the second inequality we have used the inequality $\P(A \mid B) \ge 1 - \P(A^c)/\P(B) = 1 + (\P(A)-1)/\P(B)$ along with the fact that $\P \cG_\nu \ge 1/2$. The bound then follows from averaging the bound in Lemma \ref{L:NnB-conditional}, applying the first containment in Lemma \ref{L:local-Pbrief}, and using the bound in Proposition \ref{P:big-estimate}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{P:herd-immunity}]
This follows from the fact that $\tilde \cN_{n, B} \cap \cO_{n, B} \cap \cG_\nu \sset \cN_{n, B}$ (Lemma \ref{L:NnB-conditional}), Lemma \ref{L:cO}, Corollary \ref{C:avgNnB}, and a union bound.
\end{proof}
We finish this subsection with two more event estimates that will be needed for the proof of the more difficult Proposition \ref{P:exists-delta}. The first lemma addresses the event $\cN_{n, 1}$ which was not addressed in Lemma \ref{L:NnB-conditional}.
\begin{lemma}
\label{L:NnB-conditional-small}
We have
$$
\cG_\nu \cap \cS_B \cap \cM_{1, B} \cap \cO_{1, B} \sset \cN_{1, B}.
$$
\end{lemma}
\begin{proof}
On the event $\cG_\nu \cap \cS_B $ there are no unrecovered particles from $H_{B'}, B' \in D_\sB(B, L^{15})$ after time $\floor{\tau_B} + s_1$ except for the special particle $a^*$ located at $v_B$ by Lemma \ref{L:putting-things-together}(i).
Also, on the event $\cO_{1, B}$, no particles from a block $B'$ with $s_1^{3/4} < d_\sB(B, B') \le s_1^{3/4} + m_B$ come within distance $2 s_1^{4/7}$ of $B$
in the interval $[\floor{\tau_B}, \tau_B + s_2]$. The same holds for particles from $B'$ with $m_B + s_1^{3/4} < d_\sB(B, B')$, since we work on $\cG_\nu$, by the reasoning in Lemma \ref{L:herd-and-survive}.
Therefore no infected particles $a$ satisfy $d(\bar a(t), B) \le s_1^{3/5}$ in the interval $[\tau_B + s_1, \tau_B + s_2]$ unless $a^*$ becomes infected during this time. However, this cannot happen since on $\cM_{1, B}$, we have $d(\bar a^*(t), B) < 2 s_1^{4/7}$ in this time window.
\end{proof}
\begin{lemma}
\label{L:srw-M}
For $n \ge 1$ we have $\P[\cM_{n, B} \mid \cY_B] \ge 1 - \exp(-c L^{20/9} 2^{n/9}).$
\end{lemma}
\begin{proof}
On $\cY_B$ the particle $a^*$ performs a random walk conditioned not to move in the time interval $[0, L^{20}]$. Therefore conditional on $\cY_B$, for any $t$ the random variable $\max_{s \in [0, t]} |a^*(s) - a^*(0)|$ is stochastically dominated by the version of the random variable for an unconditioned random walk, and so the bound follows from Lemma \ref{L:rw-estimate}.
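Here the form of the exponent matches the scales involved: a deviation of size $s_n^{5/9}$ over a time interval of length $s_{n+1} = 2 s_n$ corresponds, via a Gaussian-type tail bound, to an exponent of order
$$
\frac{(s_n^{5/9})^2}{s_{n+1}} = \tfrac12\, s_n^{1/9} = \tfrac12 \big(L^{20} 2^{n-1}\big)^{1/9} \ge c L^{20/9} 2^{n/9},
$$
which is the stated bound.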
\end{proof}
\subsection{The proof of Proposition \ref{P:exists-delta}}
\label{S:proof83}
The proof of Proposition \ref{P:exists-delta} given the bounds in Section \ref{SS:prob-bounds} is more involved than the proof of Proposition \ref{P:herd-immunity}, as we will need to establish that far-away events $\operatorname{Survive}_B$ behave independently. For this we set up a martingale argument.
Fix a large $r \in \N$, and let $n \in \N, n \ge 2$ be such that $s_n \le r$. For each $i \in \{0, \dots, 5s_{n-1} - 1\}$ and each $p \in \{0, 1, \dots, r-1\}^2$ we will define a martingale $J_{(\ell, x)} = J_{(\ell, x)}^{r, n, i, p}$. In the definition and various notation that follows, we will typically suppress the dependence on $r, n, i, p$ when these values do not play an important role.
To specify the index set of $(\ell, x)$, we first need a definition. For $x \in \Z^2$, define the block
$$
B^x = L (r x + p) + \{-L/2, \dots, L/2 - 1\}^2 \in \sB.
$$
Now let $K = \{x \in \Z^2 : B^x \in D_\sB(0, r^5)\}$, where here $0$ is the block containing the origin. The index set for $J$ is all $(\ell, x) \in I = I_{r, p}$, where
$$
I := \{\emptyset\} \cup I', \quad \text{and} \quad I' = I'_{r, p} = \{0, 1, \dots, r^5\} \times K.
$$
We totally order $I$ so that $\emptyset$ is the least element, and $I'$ has the lexicographic order. We note for later use that $|I| \le C r^{13}$.
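To verify this count: if $B^x \in D_\sB(0, r^5)$, then the block index $rx + p$ satisfies $|rx + p| \le r^5 + 1$, so $|x| \le r^4 + 1$ and hence $|K| \le C r^8$. Since $|I'| = (r^5 + 1)|K|$, this gives
$$
|I| \le 1 + (r^5 + 1)\, C r^{8} \le C' r^{13}.
$$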
For $(k, y) \in I'$, define
$$
\cT_{(k, y)} = \{\floor{\tau_{B^{y}}} + s_{n-1} = 5s_{n-1} k + i\} \cap \cS^{s_{n-1}^{2/3}/2}_{B^y}
$$
where $\cS^r_B$ is defined as in Lemma \ref{L:79}.
Finally, we define the martingale:
\begin{align*}
J_{(\ell, x)} &= \sum_{(k, y) \le (\ell, x)} \lf[\mathbf{1}(\cT_{(k, y)} \cap \tilde \cN_{n, B^y}) - \P (\cT_{(k, y)} \cap \tilde \cN_{n, B^y} \mid \sH_{(k, y)}) \rg]
\end{align*}
and let $J_\emptyset = 0$.
The sequence $J$ is a martingale with respect to $\sH_{(k, y)}$ provided that $\sH_{(k, y)}, (k, y) \in I$ is a filtration for which $J_{(\ell, x)}$ is $\sH_{(k, y)}$-measurable whenever $(\ell, x) < (k, y)$; that is,
$$
\E[J_{(\ell, x)} \mid \sH_{(\ell, x)}] = J_{(\ell, x)^-},
$$
where $(\ell, x)^-$ is the predecessor of $(\ell, x)$ in the total order on $I$. (Note that our indexing of the filtration here is not the standard one which would have $\sH_{(\ell, x)}$ labeled as $\sH_{(\ell, x)^-}$; we have done this to ease notation later on).
We will work with the following filtration satisfying this. Let $\sH_\emptyset$ be the trivial $\sig$-algebra, and for $(k, y) \in I'$ define $\sH_{(k, y)}$ to be the $\sig$-algebra generated by:
\begin{itemize}
\item For $(\ell, x) < (k, y)$, the events
$
\cT_{(\ell, x)} \cap \tilde \cN_{n, B^x}.
$
\item The $\sig$-algebra $\sF_{5 s_{n-1} k + i}$.
\end{itemize}
By construction, $J$ is a martingale with increments bounded by $1$. As a result, we can record the following concentration bound.
For this next corollary, we write $\bar J^{r, n, i, p}$ for the final state of the martingale.
\begin{corollary}
\label{C:mart-conc}
For any $r, n, i, p$ and any $\la > 0$ we have:
$$
\P \lf(|\bar J^{r, n, i, p}| \ge \la \rg) \le 2\exp (-c \la^2 r^{-13}).
$$
In particular, letting $J^{r, n} := \sum_{i \in [0, 5s_{n-1}-1], p \in [0, r-1]^2} \bar J^{r, n, i, p}$, we have
$$
\P \lf(|J^{r, n}| \ge \la \rg) \le 5 s_n r^2 \exp \lf( \frac{- c\la^2 r^{-13} }{ s_n^2 r^{4}}\rg) \le 5 r^3 \exp ( - c'\la^2 r^{-19} ).
$$
\end{corollary}
\begin{proof}
The first inequality is simply Azuma's inequality and the second inequality follows by a union bound.
\end{proof}
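For concreteness, the union-bound step can be spelled out as follows; numerical factors are then absorbed into $c'$, using $s_n \le r$:

```latex
\P \lf( |J^{r, n}| \ge \la \rg)
\le \sum_{i, p} \P \lf( |\bar J^{r, n, i, p}| \ge \frac{\la}{5 s_n r^2} \rg)
\le 5 s_n r^2 \cdot 2 \exp \lf( - \frac{c \la^2 r^{-13}}{25 s_n^2 r^{4}} \rg).
```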
The usefulness of this martingale comes from the following observation.
\begin{lemma}
\label{L:Elx-cond}
For any $(\ell, x) \in I'$ we have the following bound:
\begin{equation}
\label{E:TnBy}
\begin{split}
\P (\cT_{(\ell, x)}& \cap \tilde \cN_{n, B^x} \mid \sH_{(\ell, x)}) \ge (1 - \exp(- c L^{4} 2^{n/2}))\mathbf{1}(\cT_{(\ell, x)} \cap \cP_{n, B^{x}}).
\end{split}
\end{equation}
\end{lemma}
To prove the lemma we use the following simple fact.
\begin{lemma}
\label{L:cond-exp-fact}
Suppose that $B$ is an event, and $\sF, \sG$ are $\sig$-algebras such that $B \in \sF$ and
$$
A \in \sF \quad \implies \quad A \cap B \in \sG.
$$
Then for any event $C$ we have $\P(\P(B \cap C \mid \sG) \mid \sF) = \P(B \cap C \mid \sF)$.
\end{lemma}
\begin{proof}
First observe that by letting $A$ be the whole space in the assumption of the lemma, we get that $B \in \sG$. Now let $A \in \sF$ be arbitrary. Then
\begin{align*}
\E \P(\P(B \cap C \mid \sG) \mid \sF) \mathbf{1}(A) &= \E \P(B \cap C \mid \sG) \mathbf{1}(A) \\
&= \E \P(C \mid \sG) \mathbf{1}(A \cap B) = \P(A \cap B \cap C)
\end{align*}
where the second equality uses that $B \in \sG$ and the third equality uses that $A \cap B \in \sG$. On the other hand,
$$
\E \P(B \cap C \mid \sF) \mathbf{1}(A) = \P(A \cap B \cap C)
$$
by definition, completing the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{L:Elx-cond}] The basic idea of the proof is to apply Lemma \ref{L:NnB-conditional}, conditioning on the $\sig$-algebra $\sI_{n, B^x}$. To pass from conditioning on $\sI_{n, B^x}$ to $\sH_{(\ell, x)}$ we will use Lemma \ref{L:cond-exp-fact} with $B = \cT_{(\ell, x)}$.
First we claim that $\cT_{(\ell, x)}$ is $\sF_{5 s_{n-1} \ell + i}$-measurable and hence is also $\sH_{(\ell, x)}$-measurable. Indeed, by Lemma \ref{L:79} the event $\cS^{s_{n-1}^{2/3}/2}_{B^x}$ is $\sF_{\tau_{B^x} + s_{n-1}/\sqrt{2}}$-measurable, so the whole event $\cT_{(\ell, x)}$ is $\sF_{\floor{\tau_{B^x}} + s_{n-1}}$-measurable and hence is also
$\sF_{5 s_{n-1} \ell + i}$-measurable since
\begin{equation}
\label{E:equality}
\floor{\tau_{B^x}} + s_{n-1} = 5 s_{n-1} \ell + i
\end{equation}
on the event $\cT_{(\ell, x)}$. Next, we claim that
\begin{equation}
\label{E:implication}
A \in \sH_{(\ell, x)} \quad \implies \quad A \cap \cT_{(\ell, x)} \in \sI_{n, {B^x}}.
\end{equation}
It is enough to check this for a collection of events $A$ that generate the $\sig$-algebra $\sH_{(\ell, x)}$. First, since \eqref{E:equality} holds on $\cT_{(\ell, x)}$ and $\sF_{\floor{\tau_{B^x}} + s_{n-1}} \sset \sI_{n, B^x}$, the implication \eqref{E:implication} holds whenever $A \in \sF_{5 s_{n-1} \ell + i}$.
Now suppose
$
A = \cT_{(k, y)} \cap \tilde \cN_{n, B^y}
$
for some $(k, y) < (\ell, x)$. If $k < \ell$ then on $\cT_{(k, y)} \cap \cT_{(\ell, x)}$ we have
$$
\tau_{B^y} + s_{n+1} < \floor{\tau_{B^x}} + s_{n-1},
$$
and so
$$
\cT_{(\ell, x)} \cap \cT_{(k, y)} \cap \tilde \cN_{n, B^y} \in \sF_{\floor{\tau_{B^x}} + s_{n-1}} \sset \sI_{n, B^x}
$$
by the $\sE_{n, B^y}$-measurability of $\tilde \cN_{n, B^y}$ in Lemma \ref{L:NnB-conditional}. Alternately, suppose $k = \ell$ but $y \ne x$. In this case $\cT_{(\ell, x)} \cap \cT_{(k, y)}$ implies that
$$
\floor{\tau_{B^y}} + s_{n-1} = \floor{\tau_{B^x}} + s_{n-1}
$$
and since the points $r x + p, r y + p$ are distance more than $r \ge 2s_n^{3/4}$ apart we have that
$$
\sM_{D_\sB(B^y, s_n^{3/4})} \sset \sM_{D_\sB(B^x, s_n^{3/4})^c} \sset \sI_{n, B^x}.
$$
Therefore again using the measurability claim in Lemma \ref{L:NnB-conditional}, we have $\cT_{(\ell, x)} \cap \cT_{(k, y)} \cap \tilde \cN_{n, B^y} \in \sI_{n, B^x}$, completing the proof of \eqref{E:implication}.
Now, we can compute that
\begin{equation}
\label{E:conditioning-pf}
\begin{split}
\P (\cT_{(\ell, x)} &\cap \tilde \cN_{n, B^x} \mid \sI_{n, B^x}) \\
&= \mathbf{1}(\cT_{(\ell, x)}) \P (\tilde \cN_{n, B^x} \mid \sI_{n, B^x}) \\
&\ge (1 - \exp(- c L^{4} 2^{n/2}))\mathbf{1}(\cT_{(\ell, x)} \cap \cP_{n, B^x}).
\end{split}
\end{equation}
The equality here uses the $\sI_{n, B^x}$-measurability of $\cT_{(\ell, x)}$, which follows from \eqref{E:implication} when $A$ is the whole space. The inequality then follows from Lemma \ref{L:NnB-conditional}. Now take conditional expectations of both sides of \eqref{E:conditioning-pf} with respect to $\sH_{(\ell, x)}$. The right side is unchanged: it is $\sF_{\floor{\tau_{B^x}} + s_{n-1}}$-measurable by \eqref{E:equality} and the definition of $\sI_{n, B^x}$ and hence is also
$\sH_{(\ell, x)}$-measurable.
The left-hand side becomes $\P (\cT_{(\ell, x)} \cap \tilde \cN_{n, B^x} \mid \sH_{(\ell, x)})$ by Lemma \ref{L:cond-exp-fact}.
\end{proof}
For this next corollary and for the remainder of this section, to simplify notation we write $\cS_{n, B} = \cS^{s_{n-1}^{2/3}/2}_{B}$ and $\tilde \cX_{n, B} = \tilde \cX_{B, s_{n-1}}$.
\begin{corollary}
\label{C:mart-summation}
On the event $\cG_\nu$ for any $r \in \N$ and $n \ge 2$ with $s_n \le r$ we have
\begin{equation}
\label{E:Cor-eqn}
\begin{split}
&\sum_{B \in D_\sB(0, r^5)} \mathbf{1}(\tilde \cN_{n, B}^c \cap \cS_{B}) \le - J^{r, n} + \\
&\sum_{B \in D_\sB(0, r^5)}\mathbf{1}(\cS_{n, B} \smin \cS_B) + \exp(- c L^{4} 2^{n/2}) \mathbf{1}(\cS_B) + \mathbf{1}(\cS_B \smin \tilde \cX_{n, B}).
\end{split}
\end{equation}
\end{corollary}
\begin{proof}
Fix $r, n$. To simplify notation in the proof we write $\ep = \exp(- c L^{4} 2^{n/2})$. For any $r, n, i, p$ using Lemma \ref{L:Elx-cond} we have that $J^{r, n, i, p}$ is bounded above by
\begin{align*}
\sum_{(k, y) \in I_{r, p}'} \mathbf{1}(\cT_{(k, y)} \cap \tilde \cN_{n, B^y}) - (1 - \ep)\mathbf{1}(\cT_{(k, y)} \cap \cP_{n, B^{y}}),
\end{align*}
and so summing over $i, p$ we get that
\begin{align}
\label{E:triplesum}
J^{r, n} \le \sum_{B \in D_\sB(0, r^5)} \sum_{i=0}^{5 s_{n-1} - 1} \sum_{k = 0}^{r^5} \mathbf{1}(\cT_{B, k, i} \cap \tilde \cN_{n, B}) - (1 - \ep)\mathbf{1}(\cT_{B, k, i} \cap \cP_{n, B}),
\end{align}
where
$$
\cT_{B, k, i} = \{\floor{\tau_B} + s_{n-1} = 5s_{n-1} k + i\} \cap \cS_{n, B}.
$$
Now, by \eqref{E:first-lipschitz} we necessarily have $\tau_B \le L^2 d_\sB(B, 0) \le L^2 r^5$ and so $\floor{\tau_B} + s_{n-1} \le 5 s_{n-1} r^5$. Therefore $\floor{\tau_B} + s_{n-1}$ must equal $5s_{n-1} k + i$ for some $i \in \{0, \dots, 5 s_{n-1} - 1\}, k \in \{0, \dots,r^5\}$, and so the triple sum in \eqref{E:triplesum} equals
\begin{align*}
\sum_{B \in D_\sB(0, r^5)} \mathbf{1}(\cS_{n, B} \cap \tilde \cN_{n, B}) - (1 - \ep)\mathbf{1}(\cS_{n, B} \cap \cP_{n, B}).
\end{align*}
Next, $\cG_\nu \cap \tilde \cX_{n, B} \cap \cS_{B} \sset \cP_{n, B}$ by Lemma \ref{L:local-Pbrief} and $\cS_B \sset \cS_{n, B}$ by construction, so on $\cG_\nu$ we have
\begin{align}
\label{E:mart-bd}
J^{r, n}
&\le \sum_{B \in D_\sB(0, r^5)} \mathbf{1}(\cS_{n, B} \cap \tilde \cN_{n, B}) - (1 - \ep)\mathbf{1}(\tilde \cX_{n, B} \cap \cS_B).
\end{align}
We now convert this into the desired bound on
$$
\sum_{B \in D_\sB(0, r^5)} \mathbf{1}(\tilde \cN_{n, B}^c \cap \cS_{B}).
$$
Indeed, again using that $\cS_{B} \sset \cS_{n, B}$ we can write
$$
\mathbf{1}(\tilde \cN_{n, B}^c \cap \cS_{B}) \le \mathbf{1}(\tilde \cN_{n, B}^c \cap \cS_{n, B}) = \mathbf{1}(\cS_{n, B} \smin \cS_B) + \mathbf{1}(\cS_B) - \mathbf{1}(\cS_{n, B} \cap \tilde \cN_{n, B}),
$$
which after plugging in the bound in \eqref{E:mart-bd} gives that the left-hand side of \eqref{E:Cor-eqn} is bounded above by
\begin{align*}
- J^{r, n} + \sum_{B \in D_\sB(0, r^5)}\mathbf{1}(\cS_{n, B} \smin \cS_B) + \mathbf{1}(\cS_B) - (1-\ep)\mathbf{1}(\tilde \cX_{n, B} \cap \cS_B).
\end{align*}
Using the estimate $\mathbf{1}(\cS_B) - (1-\ep)\mathbf{1}(\tilde \cX_{n, B} \cap \cS_B) \le \ep \mathbf{1}(\cS_B) + \mathbf{1}(\cS_B \smin \tilde \cX_{n, B})$ then completes the proof.
\end{proof}
For this next lemma and the remainder of the section, we let $\de = \de_L > 0$ be as in Lemma \ref{L:putting-things-together} so that for all $B \in \sB$ we have
$$
\de = \P[\cY_B] \ge \P[\cS_B] \ge 2 \de/3.
$$
\begin{lemma}
\label{L:conc-P-S}
Fix $n, r \in \N$ with $r$ large and with $s_n \le r$ and let $\la > 8 \de L^{-1} \exp(- c 2^{cn /\log n})$. For $n \ge 2$ let
$$
W_{n, B} = \mathbf{1}(\cS_{n, B} \smin \cS_B) + \mathbf{1}(\cS_B \smin \tilde \cX_{n, B}) + \mathbf{1} (\cS_B \smin (\cM_{n, B} \cap \cO_{n, B})).
$$
Also define $W_{1, B} = \mathbf{1} (\cS_B \smin (\cM_{1, B} \cap \cO_{1, B}))$. Then
$$
\P \Big( \sum_{B \in D_\sB(0, r^5)}
W_{n, B} \ge \la r^{10} \Big) \le 4 r^2 \exp(- c \la^2 r^8).
$$
Similarly, for any $\la < \de/4$ we have
$$
\P \Big( \sum_{B \in D_\sB(0, r^5)} \mathbf{1} (\cS_B) \le \la r^{10} \Big) \le 4 r^2 \exp(-c \la^2 r^8).
$$
\end{lemma}
\begin{proof}
First let $n \ge 2$. For any $B \in D_\sB(0, r^5)$, since $2 r > m_B + s_n$ for large enough $r$, the events
$$
\tilde \cX_{n, B}, \quad \cM_{n, B}, \quad \cO_{n, B}, \quad \cS_B, \quad \cS_{n, B},
$$
are all measurable given $\sM_{D_\sB(B, 2 r)}$. This uses Lemma \ref{L:putting-things-together} for $\cS_B$, Lemma \ref{L:79} for $\cS_{n, B} = \cS^{s_{n-1}^{2/3}/2}_{B}$, Lemma \ref{L:upper-bound-conditional} for $\tilde \cX_{n, B} = \tilde \cX_{B, s_{n-1}}$, Lemma \ref{L:cO} for $\cO_{n, B}$, and is immediate from the definition for $\cM_{n, B}$.
Moreover
$$
\P(W_{n, B} \ge 1) \le C \de L^{-1} \exp(- 2^{cn /\log n})
$$
for $L$ large enough.
This uses the fact that all events in the definition of $W_{n, B}$ are contained in $\cY_B$ along with the fact that $\P[\cY_B] = \de$ (Lemma \ref{L:putting-things-together}), the bound in Lemma \ref{L:79} on $\P[\cS_B \mid \cS_{n, B}]$, the bound on $\P[\tilde \cX_{n, B} \mid \cY_B]$ from Lemma \ref{L:upper-bound-conditional}, the bound on $\P[\cO_{n, B}]$ from Lemma \ref{L:cO} along with the independence of $\cO_{n, B}$ and $\cY_B$, and finally the bound on $\P[\cM_{n, B} \mid \cY_B]$ from Lemma \ref{L:srw-M}. The first bound in the lemma for $n \ge 2$ then follows from Lemma \ref{l:dependentPerc}. The $n=1$ case is identical except we do not need to worry about the first two $W_{n, B}$ terms.
The second bound also follows from Lemma \ref{l:dependentPerc},
this time using the lower bound of $\P[\cS_B] \ge \de/2$. \end{proof}
\begin{proof}[Proof of Proposition \ref{P:exists-delta}]
We will just prove the conditional probability bound, as the liminf claim then follows from the Borel-Cantelli lemma. It is also enough to prove the conditional probability bound when $m=r^5$ for sufficiently large $r \in \N$.
We work on the event $\cG_\nu$ throughout the proof. First, by the containments in Lemma \ref{L:NnB-conditional-small} and Lemma \ref{L:NnB-conditional} we can write
\begin{align}
\nonumber
\sum_{B \in D_\sB(0, r^5)} &\mathbf{1}(\operatorname{Survive}_B) \\
\label{E:Bd0B}
&\ge \sum_{B \in D_\sB(0, r^5)} \mathbf{1}\Big(\cS_B \cap \bigcap_{n\ge 1} (\cM_{n, B} \cap \cO_{n, B}) \cap \bigcap_{n\ge 2} \tilde \cN_{n, B}\Big).
\end{align}
Now let $n(r) \in \N$ be the largest value of $n$ such that $s_n \le r$ so that $n(r) \sim \log_2 r$ as $r \to \infty$.
By Lemmas \ref{L:cO} and \ref{L:srw-M} and Corollary \ref{C:avgNnB}, the probability that $\tilde \cN_{n, B} \cap \cO_{n, B} \cap \cM_{n, B}$ fails for some $B$ with $d(B, 0) \le r^5$ and $n \ge n(r)$ is at most
$
\exp(-{(r L)}^{c/ \log \log (r L)})
$
for large enough $r$.
Therefore with probability at least $1- \exp(-{(r L)}^{c/ \log \log (r L)})$, the right-hand side of \eqref{E:Bd0B} is equal to
\begin{align*}
&\sum_{B \in D_\sB(0, r^5)} \mathbf{1}\Big(\cS_B \cap \bigcap_{n=1}^{n(r)} (\cM_{n, B} \cap \cO_{n, B}) \cap \bigcap_{n= 2}^{n(r)} \tilde \cN_{n, B}\Big) \\
&\ge \sum_{B \in D_\sB(0, r^5)} \Big( \mathbf{1}(\cS_B) -\sum_{n=2}^{n(r)} \mathbf{1}(\cS_B \cap \tilde \cN_{n, B}^c) -\sum_{n=1}^{n(r)} \mathbf{1} (\cS_B \smin (\cM_{n, B} \cap \cO_{n, B}) \Big).
\end{align*}
Now by Corollary \ref{C:mart-summation}, using the $W_{n, B}$ notation from Lemma \ref{L:conc-P-S}, this is bounded below by
\begin{align*}
& \sum_{B : d(0, B)\le r^5}\frac{3}{4}\mathbf{1}(\cS_B)
- \sum_{n=2}^{n(r)} |J^{r, n}| - \sum_{n=1}^{n(r)} \sum_{B : d(0, B)\le r^5} W_{n, B}
\end{align*}
for large enough $L$. Then as long as $L$ is sufficiently large and $r$ is sufficiently large given $L$ we have the following bounds. The first of the three terms above is bounded below by $\de r^{10}/20$ with probability at least $1 - \exp(-c r)$ by Lemma \ref{L:conc-P-S}. Each of the summands in the second term is bounded above by $\de r^{10}/\log^2(r)$ with probability at least $1-\exp(-r^{1/2})$ by Corollary \ref{C:mart-conc}. Each of the summands in the final term is bounded above by $\de r^{10}/\log^2(r)$ with probability at least $1 - \exp(-c r)$ by Lemma \ref{L:conc-P-S}. Putting everything together implies that the above expression is bounded below by $\de r^{10}/40$ with probability at least $1-\exp(-r^{1/2})$ for large enough $r$, yielding the desired probability bound in Proposition \ref{P:exists-delta} with $\al = \de/40$.
\end{proof}
\subsection{Proof of Theorem \ref{T:main-2}}
At this point, we just need to gather together everything we have done in the last few sections to prove Theorem \ref{T:main-2}.
First, $\P\cG_\nu \to 1$ as $\nu \to 0$ by Corollary \ref{L:global-is-rare}. Next, $\cG_\nu$ is contained in the event $\cC$ in \eqref{E-event}, which by Theorem \ref{T:random-red-blue} and Proposition \ref{P:SI-to-BR} implies that on $\cG_\nu$ infinitely many blocks become ignited, and so $\mathbf{I}_t \ne \emptyset$ for all $t \ge 0$, giving point $1$.
We next prove point $3$. For a site $x \in \Z^2$, let $B(x)$ denote the block containing $x$, let $\sig_{B(x)}$ denote the final time an infected particle is in the block $B(x)$ and let $N(x)$ denote the smallest value of $n$ such that $\operatorname{Herd}_{B(x), n}$ succeeds. Then by Lemma \ref{L:herd-and-survive}.2, we have
$$
D_x \le \sig_{B(x)} - \tau_{B(x)} \le s_{N(x)} = L^{20} 2^{N(x)-1}
$$
and by Proposition \ref{P:herd-immunity} we have
$$
\P(\sig_{B(x)} - \tau_{B(x)} > L^{20} 2^{n-1} \mid \cG_\nu) \le 1 - \P( \operatorname{Herd}_{B(x), n} \;|\; \cG_\nu) \le \exp(-{(2^n L)}^{c/ \log (n + \log L)}),
$$
which yields \eqref{E:Dxm} after simplification. By the Borel-Cantelli lemma there exists a random $D > 0$ such that for all $x \in \Z^2$, on $\cG_\nu$ we have
\begin{equation}
\label{E:sigBx}
\sig_{B(x)} - \tau_{B(x)} \le D + [\log (\|x\|_1 + 3)]^{C \log \log \log (\|x\|_1 + 3)},
\end{equation}
which yields \eqref{E:DxDbound}, completing the proof of point $3$.
Next, the Lipschitz bound \eqref{E:Lipschitz} and \eqref{E:sigBx} implies that $\sig_{B(x)} \le 10 L \|x\|_1$ for all large enough $x$, and so $\mathbf{I}_t \cap B(0, ct) = \emptyset$ for all large enough $t$ for some $\nu$-dependent $c > 0$. Also, there exists $C > 0$ such that almost surely, $\mathbf{I}_t \sset B(0, Ct)$ for all large enough $t$ by the corresponding result for the SI model without recovery, \cite[Theorem 1]{kesten2005spread}. Together these results give point $2$.
Finally, let $\mathbf{S}_{t, r}$ denote the set of susceptible particles at time $t$ in some $H_B$ with $d(B, 0) \le r$. Standard random walk estimates (Lemma \ref{L:rw-estimate}) imply that for any $\ep, r > 0$, for all large enough $t$ we have
$$
\mathbf{S}_{t, t(r- \ep)} \sset \mathbf{S}_t \cap D(0, rt) \sset \mathbf{S}_{t, t(r + \ep)}.
$$
Combining this with Proposition \ref{P:exists-delta} and Lemma \ref{L:herd-and-survive}.3 yields point $4$.
\bibliographystyle{alpha}
\section{Introduction}
Many physical, chemical, economic, social, and ecological problems lead to the investigation of competition processes between the interacting components of the system.
Competition processes play an important role in the course of evolution of nonlinear open spatially extended systems. For example, the spatial and spatio-temporal pattern formation can be considered as a process of interaction and competition between unstable modes of the system, as a result of which one or a small number of such modes subjugate all others \cite{Haken1}.
Noise has a no less important and quite nontrivial influence on the evolution of nonlinear open spatially extended systems. In addition to the effects investigated in \cite{LiuJin2,GosMar3,RiazDut4,Lindn5,KawSail6,BucIba7,Zhou8, SanzZhab9, SegShap10, ZimTor11, SanSan12, IbaGar13, WanJun14, NeiPei15, ZaikSchim16, GenSan17, GamHan18, SantCol19, ElsSel20, BroPar21, ParBro22, GarSan23, MarGam24, BroPar25, GarHer26,Deis27,Kur28,Kur29}, it can lead to the appearance of new noise-induced statistically steady states \cite{Mikh30}.
The purpose of the present paper is to study numerically the influence of external noise on the competition process in a nonlinear open spatially extended system. We do not consider the general case here, but confine ourselves to a simple, yet biologically important, model. The outline of the rest of the paper is as follows. The model under study is presented in Sec. II, and the results obtained in Ref.~\cite{Mikh30} are briefly discussed. The numerical method used for the simulation is presented in Sec. III, where the possibility of its application to the problem under consideration is established. The results of the simulation of competition processes for different values of the problem parameters are presented in Sec. IV. Three types of solutions are described; one of them is new and corresponds to the situation when an initially "strong" species loses the competition and disappears. Finally, some conclusions are reported in Sec. V.
\section{The model}
In the paper \cite{Mikh30} a model of the Volterra type describing the interaction of two biological species relying on the same resource was introduced. It is assumed that individuals of one species are able to move in space, which is modeled by the diffusion term in the appropriate equation and the rate of the resource density growth changes randomly in space and in time. The model equations are as follows:
\[
{\frac{\partial s}{\partial t}} = (B r-A)s,
\]
\begin{equation}
\label{eq1}
{\frac{\partial w}{\partial t}} = (b r-a)w + D \nabla ^2 w,
\end{equation}
\[
{\frac{\partial r}{\partial t}} = Q + f(\textbf{r},t) - Gr - Cs - cw,
\]
where $s, w$ are the population densities of the "strong" and "weak" species, respectively, and $r$ is the resource density; $A, a$ and $B, b$ are the coefficients of the natural change of the populations; $Q$ is the rate of resource growth; $C, c$ are the coefficients of its consumption; $G$ is the coefficient of the natural decline of the resource. The term $D{\nabla}^2w$ takes into account the mobility of individuals of the "weak" species. The random field $f(\mathbf{r},t)$ with zero mean describes spatial and temporal fluctuations of the resource density growth rate. All the coefficients in Eq.~(\ref{eq1}) are positive. It is additionally assumed that the conditions $A/B < a/b$ and $Q > GA/B$ are fulfilled.
\par As noted in \cite{Horst31}, fluctuations in the environment represent the summarized effect of many weakly coupled factors. Therefore, according to the central limit theorem fluctuations of the external source have a Gaussian distribution. The ergodic Markovian and Gaussian properties of the fluctuating environment limit the choice of random fields for modeling the fluctuations of the environment by a stationary homogeneous isotropic Gaussian field with the exponential time- and space-correlation function. Therefore, in this paper as in \cite{Kur28}
\begin{equation}
\label{eq2}
\langle f(\textbf{r},t)f(\textbf{r}',t') \rangle = 2G \theta \exp(-k_{f}|\textbf{r}-\textbf{r}'|)\exp(-k_{t}|t-t'|)
\end{equation}
Here $r_f=k_f^{-1}$ and $r_t=k_t^{-1}$ determine the characteristic spatial and temporal scales of the fluctuations, respectively, and $\theta$ is their intensity. The correlation time is significantly shorter than all characteristic times of the problem. In the paper \cite{Mikh30} the field $f(\mathbf{r},t)$ is $\delta$-correlated in time.
\par The only stable solution of the local deterministic system (\ref{eq1}), defined as
\[
w_s=0, r_s=A/B, s_s=(Q-Gr_s)/C
\]
corresponds to Gause competitive exclusion principle \cite{Gause32}.
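Indeed, substituting $w_s = 0$, $r_s = A/B$, $s_s = (Q - G r_s)/C$ into the local system (with $D = 0$ and $f \equiv 0$) annihilates all three right-hand sides of (\ref{eq1}):

```latex
(B r_s - A)\,s_s = 0, \qquad (b r_s - a)\,w_s = 0, \qquad
Q - G r_s - C s_s - c\,w_s = Q - G\frac{A}{B} - \left(Q - G\frac{A}{B}\right) = 0.
```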
It was shown in \cite{Mikh30} that the situation differs from the classical one if the resource density growth rate fluctuates in space and in time. Beginning with some critical noise intensity, stationary statistical coexistence of the two competing species becomes possible. The authors of \cite{Mikh30} named this phenomenon "medium populating". The volume-averaged, asymptotic-in-time population density of the "weak" species becomes equal to:
\[
{{\langle w \rangle}_{V}}_{s}=
{\left\{
{\begin{array}{*{20}l}
{0,\theta < \theta _{c} ;} \hfill \\
{ (b{p}_{1}/R)(1/{\theta}_{c} - 1/\theta),\theta > \theta _{c} } \hfill \\
\end{array}}
\right.}
\]
Here $R=3\sqrt{2}\,b^{3}c/[4G {\omega}^2_{0}{(D{k}^2_{f})}^{3/2}]$, $\omega_{0}=\sqrt{B(Q-GA/B)}$, $\theta_{c}=p_{1}D{k}^2_{f}/b$, and $p_{1}=a/b - A/B$. The latter value determines the resource deficit for the reproduction of "weak" species individuals in the steady state. It is also shown in \cite{Mikh30} that without diffusion, fluctuations of the resource growth rate do not prevent the asymptotic extinction of the "weak" species; this kinetic transition is therefore fundamentally associated with the presence of diffusion. Thus, in conditions of a fluctuating environment, mobility is the factor ensuring the survival of the species.
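As a quick numerical illustration of these formulas, the threshold $\theta_c$ and the predicted steady density can be evaluated directly. The sketch below (Python) uses the illustrative parameter values of Fig.~\ref{fig2}; the function name is ours, not from \cite{Mikh30}:

```python
import math

def weak_steady_density(A, B, a, b, c, Q, G, D, k_f, theta):
    """Evaluate the analytical prediction for the volume-averaged
    steady density <w>_Vs of the "weak" species."""
    p1 = a / b - A / B                        # resource deficit p_1
    theta_c = p1 * D * k_f**2 / b             # critical noise intensity
    omega0_sq = B * (Q - G * A / B)           # omega_0^2
    R = 3 * math.sqrt(2) * b**3 * c / (4 * G * omega0_sq * (D * k_f**2) ** 1.5)
    if theta <= theta_c:
        return 0.0, theta_c                   # below threshold: extinction
    return (b * p1 / R) * (1 / theta_c - 1 / theta), theta_c

# illustrative parameter values (those of Fig. 2)
w_steady, theta_c = weak_steady_density(A=1, B=1, a=4.755, b=4.752, c=1,
                                        Q=9.25, G=3.68, D=0.01, k_f=5.4,
                                        theta=0.7)
```

For these values $\theta = 0.7$ lies far above $\theta_c$, so a nonzero steady density is predicted.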
In their analytical treatment of the system (\ref{eq1}), the authors of \cite{Mikh30} imposed a number of restrictions that significantly narrow the range of applicability of the results: a complex hierarchy of microscopic scales with the dimension of inverse time, restrictions on the noise intensity, smallness of the deviations of the concentrations $w$ and $s$ from their stationary values, proximity to the transition point to the regime with nonzero volume-averaged "weak" species population density ${{\langle w \rangle}_{V}}\ne0$, etc. It therefore appears interesting to analyze numerically the evolution of the system~(\ref{eq1}) in the absence of the limitations above.\\
\section{The numerical method}
Let us describe in more detail the numerical method used to simulate the evolution of the system (\ref{eq1}) and justify its applicability.
We assume that species interaction occurs in a large, but finite area of space. Let us consider a one-dimensional problem. Then the system of equations (\ref{eq1}) with account for normalization can be rewritten as follows:
\[
{\frac{\partial \widetilde{s}}{\partial \tau}} = (\widetilde{r}-1)\widetilde{s},
\]
\begin{equation}
\label{eq3}
{\frac{\partial \widetilde{w}}{\partial \tau}} = \left( \frac{b}{B} \widetilde{r}-\frac{a}{A} \right) \widetilde{w} + \frac{D}{A} \frac{\partial^2 \widetilde{w}}{\partial x^2},
\end{equation}
\[
{\frac{\partial \widetilde{r}}{\partial \tau}} = \frac{B}{A^2} \left[ Q + f(x,\tau) - \frac{AG}{B}\widetilde{r}
- \left(Q-G\frac{A}{B}\right)(\widetilde{s} + \frac{c}{C}\widetilde{w}) \right],
\]
where $\tau=At; \widetilde{s}=s/s_s; \widetilde{r}=r/r_s; \widetilde{w}=w/s_s$. The fixed boundaries are assumed to be impermeable:
\[
{\left.{\frac{\partial \widetilde{s}}{\partial x} } \right|}_{x=0;L}=0,
{\left.{\frac{\partial \widetilde{w}}{\partial x} }\right|}_{x=0;L}=0,
{\left.{\frac{\partial \widetilde{r}}{\partial x} }\right|}_{x=0;L}=0,
\]
where $L$ is the characteristic size significantly exceeding all characteristic spatial scales of the problem.
The scheme of the numerical method used to integrate (\ref{eq3}) is constructed on the basis of the following considerations. Let us formally write the solution of the system (\ref{eq3}) in its equivalent integral form:
\[
\widetilde{s}= {\widetilde{s}}_{0} + \int \limits_{t_{0}} ^{t} (\widetilde{r}-1)\widetilde{s} d \tau,
\]
\begin{equation}
\label{eq4}
\widetilde{w}= {\widetilde{w}}_{0} + \int \limits_{t_{0}} ^{t}{ \left[ \left(\frac{b}{B} \widetilde{r}-\frac{a}{A}\right)\widetilde{w} + \frac{D}{A} \frac{\partial^2 \widetilde{w}}{\partial x^2} \right]} d \tau,
\end{equation}
\[
\widetilde{r}= {\widetilde{r}}_{0} + \frac{B}{A^2} \int \limits_{t_{0}} ^{t}\left[ Q - \frac{AG}{B}\widetilde{r}- \left(Q-G\frac{A}{B}\right)(\widetilde{s} + \frac{c}{C}\widetilde{w}) \right] d \tau
\]
\[
\qquad + \frac{B}{A^2} \int \limits_{t_{0}} ^{t} f(x,\tau)d \tau
\]
Real noise, as opposed to white noise, has realizations that are continuous at almost all points; therefore, owing to the smoothing effect of integration, the solution (\ref{eq4}) of the system of stochastic differential equations (SDE) (\ref{eq3}) corresponds to a process whose realizations are differentiable at almost all points. That is why (\ref{eq3}) can be interpreted as a system of ordinary differential equations for the realizations, and $\int\limits_{t_0}^t f(x,\tau) d\tau$ in the third equation of~(\ref{eq4}) can be understood in the sense of the Riemann integral.
All the aforesaid makes it possible to use a conventional two-layer finite-difference scheme for the SDE system (\ref{eq3}), in which $f(x,\tau)$ is interpreted as part of the nonlinear function on the right-hand side. Then on a rectangular uniform grid $[0\le x\le L]\times[0\le \tau\le T]$ with time step $\Delta\tau$ and space step $h$ we obtain:
\[
\widetilde{s}^{j}_{i}= \Delta \tau (\widetilde{r}^{j-1}_{i} -1)\widetilde{s}^{j-1}_{i} +\widetilde{s}^{j-1}_{i}
\]
\begin{equation}
\label{eq5}
{\begin{array}{l}
\widetilde{w}^{j}_{i-1}- \left( 2+\frac{Ah^2}{D \Delta \tau \sigma} \right)\widetilde{w}^{j}_{i} + \widetilde{w}^{j}_{i+1}= \\
\qquad -\frac{1-\sigma}{\sigma}\left(\widetilde{w}^{j-1}_{i-1}-2\widetilde{w}^{j-1}_{i} +\widetilde{w}^{j-1}_{i+1} \right)-\frac{Ah^2}{D \Delta \tau \sigma}\widetilde{w}^{j-1}_{i}\\
\qquad -\frac{Ah^2}{D \sigma}\left(\frac{b}{B}\widetilde{r}^{j-1}_{i} -\frac{a}{A} \right)\widetilde{w}^{j-1}_{i}\\
\end{array}}
\end{equation}
\[
\begin{array}{l}
\widetilde{r}^{j}_{i} = \Delta \tau \frac{B}{A^2} \left[ Q -
\frac{AG}{B}\widetilde{r}^{j-1}_{i} - \left( Q-G\frac{A}{B} \right) \left(\widetilde{s}^{j-1}_{i}+
\frac{c}{C}\widetilde{w}^{j-1}_{i}\right) \right]\\
\qquad + \Delta \tau \frac{B}{A^2}{f}^{j-1}_{i} + \widetilde{r}^{j-1}_{i}
\end{array}
\]
Here $f_i^j$ are realizations of a random Gaussian field with the appropriate correlation function, $\sigma=1/2$ is the weighting factor of the scheme at the spatial derivative from the upper layer.
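One step of the second (Crank--Nicolson-type) equation can be sketched as follows. This is a minimal illustration in Python: the default parameter values are illustrative, the reaction term $(b\widetilde{r}/B - a/A)\widetilde{w}$ is frozen on the previous layer with weight $\Delta\tau$ per step, and the mirror conditions $\widetilde{w}^j_{-1} = \widetilde{w}^j_{1}$, $\widetilde{w}^j_{\mathrm{max}\,i+1} = \widetilde{w}^j_{\mathrm{max}\,i-1}$ are built into the matrix:

```python
import numpy as np

def cn_step(w, r, dtau, h, D=0.5, A=1.0, a=4.755, b=4.752, B=1.0, sigma=0.5):
    """One time step for w~: diffusion weighted between layers by sigma,
    reaction term (b/B * r~ - a/A) w~ taken from the old layer."""
    n = w.size
    mu = A * h**2 / (D * dtau * sigma)          # A h^2 / (D dtau sigma)
    f = (b / B) * r - a / A                     # local net growth rate
    # second difference on the old layer, mirror points w[-1]=w[1], w[n]=w[n-2]
    wm = np.concatenate(([w[1]], w, [w[-2]]))
    d2 = wm[:-2] - 2.0 * wm[1:-1] + wm[2:]
    rhs = -((1.0 - sigma) / sigma) * d2 - mu * w - mu * dtau * f * w
    # tridiagonal system:  w[i-1] - (2 + mu) w[i] + w[i+1] = rhs[i]
    M = (np.diag(np.full(n, -(2.0 + mu)))
         + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
    M[0, 1] += 1.0     # mirror condition at the left boundary
    M[-1, -2] += 1.0   # mirror condition at the right boundary
    return np.linalg.solve(M, rhs)
```

A convenient check of the boundary handling: with the growth rate frozen at zero ($\widetilde{r} = aB/(bA)$) a constant profile is preserved exactly.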
The realizations of the field were obtained from the following considerations. Suppose that the time dependence of the field can be considered practically the same at all points in space. Then we can write $f(x,\tau)=u(x)v(\tau)$. For such a field the correlation function takes the form $B(x,\tau)=B_u(x)B_v(\tau)$, which corresponds to the form (\ref{eq2}). The processes $u_i^j$ and $v^j$ are then generated according to the scheme:
\[
\begin{array}{l}
u^{j}_{i}= [\theta_{1}(1-\exp(-2k_{f}|x_{i}-x_{i-1}|))]^{1/2}e_{i} \\
\qquad + u^{j}_{i-1}\exp(-k_{f}|x_{i}-x_{i-1}| ),
\end{array}
\]
\[
\begin{array}{l}
v^{j}= [\theta_{2}(1-\exp(-2k_{t}|t^{j}-t^{j-1}|))]^{1/2}e^{j} \\
\qquad +v^{j-1}\exp(-k_{t}|t^{j}-t^{j-1}| )
\end{array}
\]
Here $\theta_1\theta_2=2G\theta$; $u_0^j$ is a random Gaussian number with zero mean and variance $\theta_1$; $v^0$ is a random Gaussian number with zero mean and variance $\theta_2$; $e_i$ and $e^j$ are random Gaussian numbers with zero means and unit variances. Then $f_i^j=u_i^jv^j$.
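The recursion above is the standard exact update for an exponentially correlated Gaussian (Ornstein--Uhlenbeck-type) sequence, with stationary variance equal to the prescribed intensity. A Python sketch (the function name and parameter values are illustrative):

```python
import numpy as np

def ou_sequence(npts, step, k, theta, rng):
    """x[m] = sqrt(theta*(1 - q^2)) * e[m] + q * x[m-1],  q = exp(-k*step).
    Stationary law: Var x[m] = theta, Corr(x[m], x[m+n]) = q**n."""
    q = np.exp(-k * step)
    e = rng.normal(size=npts)
    x = np.empty(npts)
    x[0] = rng.normal(0.0, np.sqrt(theta))   # start in the stationary law
    for m in range(1, npts):
        x[m] = np.sqrt(theta * (1.0 - q * q)) * e[m] + q * x[m - 1]
    return x

# the field realization is then f[i]^j = u[i]^j * v^j: a spatial sequence
# per time layer multiplied by a single temporal sequence
```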
Difference boundary conditions are determined by the expressions
\[ \widetilde{w}^{j}_{-1}= \widetilde{w}^{j}_{1} , \widetilde{w}^{j}_{max i-1}= \widetilde{w}^{j}_{maxi+1} \]
\begin{equation}
\label{eq6}
\widetilde{s}^{j}_{-1}= \widetilde{s}^{j}_{1} , \widetilde{s}^{j}_{max i-1}= \widetilde{s}^{j}_{maxi+1}
\end{equation}
\[ \widetilde{r}^{j}_{-1}= \widetilde{r}^{j}_{1} , \widetilde{r}^{j}_{max i-1}= \widetilde{r}^{j}_{maxi+1} \]
The system~(\ref{eq5}) with boundary conditions~(\ref{eq6}) is solved by the tridiagonal matrix algorithm or any other method of solving systems of linear algebraic equations.
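The tridiagonal matrix algorithm (Thomas algorithm) referred to here admits a compact sketch (forward elimination followed by back substitution; `sub[0]` and `sup[-1]` are unused placeholders):

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal linear system with subdiagonal `sub`,
    main diagonal `diag`, superdiagonal `sup`, right-hand side `rhs`."""
    n = len(rhs)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):                      # forward elimination
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```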
The second equation of the scheme~(\ref{eq5}) represents a six-point difference scheme of the Crank--Nicolson type. The first and third equations are implementations of the simplest Euler scheme.
In simulating the dynamics of system (\ref{eq1}), steady statistical characteristics are determined for parameter values close to the bifurcation. As the system approaches the bifurcation point, the phenomenon of critical slowing-down is observed, due to which the achievement of a statistically steady state requires long intervals of model time. Therefore, in simulating the evolution of (\ref{eq1}) it is necessary to satisfy the asymptotic stability condition for the second equation of scheme (\ref{eq5}).
\section{The results of simulation}
By using the scheme~(\ref{eq5}) we studied the change of the system~(\ref{eq3}) dynamics depending on the values of parameters $D, p_1,$ and $\theta$. Volume averaged species population densities
\[{{\langle\widetilde {w}(t) \rangle}_{V}}=\frac{1}{L}\int\limits_0^L \widetilde {w}(x,t) dx,
{{\langle\widetilde {s}(t) \rangle}_{V}}=\frac{1}{L}\int\limits_0^L \widetilde {s}(x,t) dx\]
and their statistically steady values
\[{{\langle\widetilde {w} \rangle}_{Vs}}=\lim\limits_{t\to\infty}{\langle\widetilde {w}(t) \rangle}_{V},
{{ \langle\widetilde {s}\rangle}_{Vs}}=\lim\limits_{t\to\infty}{\langle\widetilde {s}(t) \rangle}_{V},\]
were chosen as the quantities characterizing this change. In Fig.~\ref{fig1} the parametric diagrams on the planes of parameters $D - p_1$ and $D - \theta$ are presented, with the regions corresponding to the different outcomes of the competitive fighting distinguished. The figure shows that there are three different outcomes for system~(\ref{eq3}) -- three regimes of behavior -- with the U-shaped region on both planes corresponding to one of them.
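On the grid, the volume averages introduced above are computed by the composite trapezoidal rule; a minimal sketch (uniform grid assumed):

```python
import numpy as np

def volume_average(w, L):
    """<w>_V = (1/L) * integral_0^L w(x) dx, composite trapezoidal rule
    on a uniform grid of w.size points covering [0, L]."""
    h = L / (w.size - 1)
    return (0.5 * (w[0] + w[-1]) + w[1:-1].sum()) * h / L
```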
\begin{figure}[h!]
\includegraphics[width=2.345in,height=2.73in]{fig1a-eps-converted-to.pdf}\\
(a)\\
\includegraphics[width=2.438in,height=2.797in]{fig1b-eps-converted-to.pdf}\\
(b)\\
\caption{\label{fig1}Parametric diagrams of the system~(\ref{eq3}) (a) on plane $D - p_1$; (b) on plane $D - \theta$. I is the region of classical solution (Gause principle); II is the region of the "medium populating" regime; III is the region of the "inversion" regime.}
\end{figure}
For the parameters belonging to region I in Figs.~\ref{fig1}(a,b), the outcome determined by the Gause competitive exclusion principle is realized: the volume-averaged, asymptotic-in-time population density of the "weak" species ${{\langle\widetilde {w}\rangle}_{Vs}}$ tends to zero, i.e., the "weak" species disappears.
"Medium populating" by individuals of the "weak" species, predicted in \cite{Mikh30}, is observed in the parameter region corresponding to region II in Figs.~\ref{fig1}(a,b). A statistically steady state is established wherein the volume-averaged, asymptotic-in-time population densities of the species ${{\langle\widetilde {w} \rangle}_{Vs}}$, ${{\langle\widetilde {s} \rangle}_{Vs}}$ differ from zero. Typical time dependences of the volume-averaged densities ${{\langle\widetilde {w}(t) \rangle}_{V}}$ and ${{\langle\widetilde {s}(t) \rangle}_{V}}$ in region II are presented in Fig.~\ref{fig2}(a).
\begin{figure}[h!]
\includegraphics[width=3.11in,height=1.773in]{fig2a-eps-converted-to.pdf}\\
(a)\\
\includegraphics[width=3.105in,height=1.77in]{fig2b-eps-converted-to.pdf}\\
(b)\\
\caption{\label{fig2}Typical time dependences of volume averaged population densities of "weak" species ${{\langle\widetilde {w}(t) \rangle}_{V}}$ and "strong" species ${{\langle\widetilde {s}(t) \rangle}_{V}}$ .(a) The "medium populating" regime $D=0.01$. (b) The "inversion" regime $D=0.5$. The other parameters of the model are $A=B=1; a=4.755; b=4.752; C=c=1; Q=9.25; G=3.68; \theta=0.7; k_f=5.4$.}
\end{figure}
Our studies have shown that the model~(\ref{eq3}) has yet another, previously unknown regime of behavior wherein ${{\langle\widetilde {w} \rangle}_{Vs}}\ne0$ and ${{\langle\widetilde {s}\rangle}_{Vs}}\to0$. That is, the "strong" species on average asymptotically disappears, conceding to the "weak" species in the competition process (region III in Figs.~\ref{fig1}a,b). In this parameter region an "inversion" of the properties of the species occurs: the "weak" species becomes the "strong" one. Thus, under conditions of a fluctuating environment, mobility is a factor that not only ensures the survival of the species but also provides the margin of victory in the competition. Typical dependencies ${{\langle\widetilde {w}(t) \rangle}_{V}}$ and ${{\langle\widetilde {s}(t) \rangle}_{V}}$ in region III are presented in Fig.~\ref{fig2}b.
The dependence of the system behavior on the parameters $D, p_1$ is quite understandable. With an increasing resource deficit available to the "weak" species the Gause principle is realized: the motionless "weak" species ($D\to0$) disappears even with an arbitrarily small resource deficit ($p_1\to0$). Decreasing the resource deficit available to the "weak" species smooths out the differences in the dynamics of the "weak" and "strong" species, while mobility provides additional competitive advantages for the "weak" species. As a result, at first the possibility of coexistence of the species emerges, and then the "inversion" regime sets in as $p_1$ decreases. The plots reflecting the outcome of the competitive process depending on the parameter $p_1$ are presented in Fig.~\ref{fig3}a.
\begin{figure}[h!]
\includegraphics[width=2.74in,height=1.955in]{fig3a-eps-converted-to.pdf}\\
(a)\\
\includegraphics[width=2.793in,height=1.845in]{fig3b-eps-converted-to.pdf}\\
(b)\\
\includegraphics[width=2.747in,height=1.943in]{fig3c-eps-converted-to.pdf}\\
(c)\\
\caption{\label{fig3}Dependencies of population densities of "weak" species ${{\langle\widetilde {w} \rangle}_{Vs}}$ and "strong" species ${{\langle\widetilde {s} \rangle}_{Vs}}$ asymptotic over time and average with respect to volume (a) on resource deficit $p_1$; (b) on the diffusion coefficient $D$; (c) on noise intensity $\theta$. Other model parameters are the same as in Fig.~\ref{fig2}.}
\end{figure}
Let us remark here that the "inversion" regime has a threshold in the parameter $D$ (see Fig.~\ref{fig3}b): as $D\to D_c(p_1,\theta)$ the additional competitive advantages disappear. An increase of the mobility coefficient $D$ toward large values makes the last term in the second equation of system~(\ref{eq3}) prevail. In the case of high mobility, individuals of the "weak" species pass through the regions with resource surplus too fast and cannot use them effectively. In the asymptotics $D\to\infty$ this equation admits damped solutions of the diffusion type regardless of the value of the resource deficit. This, in particular, explains the U-shaped form of region II in Fig.~\ref{fig1}.
The separation of regimes on the plane $D-\theta$ is also easily explained. The smaller the intensity of the fluctuations, the smaller the resource available exclusively to the "weak" species, randomly arising in randomly distributed areas of space. The possibility of coexistence of the species arises when the fluctuation intensity $\theta$ exceeds the first critical value $\theta_{c1}$. The "weak" species gains an advantage when the fluctuation intensity $\theta$ exceeds the second critical value $\theta_{c2}$, since this type of resource is directly accessible only to it, and the possibility of "inversion" arises. The variation of the population densities of the "weak" species ${{\langle\widetilde {w} \rangle}_{Vs}}$ and "strong" species ${{\langle\widetilde {s}\rangle}_{Vs}}$, asymptotic over time and average with respect to volume, with the value of $\theta$ is presented in Fig.~\ref{fig3}c.\\
\section{Conclusion}
In our paper we have studied numerically the influence of external real noise on the competition processes in a nonlinear spatially extended system. Our studies have shown that model~(\ref{eq1}) admits three different outcomes of the competition: the classical disappearance of the "weak" species; the noise-induced "medium populating" by individuals of the "weak" species predicted in \cite{Mikh30}; as well as a nontrivial outcome -- the noise-induced extinction of the initially "strong" species. The parameter regions of the problem corresponding to the above competition outcomes were determined; they are reflected in the parametric diagrams in Fig.~\ref{fig1}.
The studies presented are interesting in that they explain at least some of the reasons why a nontrivial outcome of competition in a fluctuating environment is possible. They can also help determine the strategy and tactics of behavior of any competing communities aiming to win the competition, including communities of a nonbiological origin.
\section*{Acknowledgment}
The study has been supported by the Ministry of Education and Science of the Russian Federation, the Competitiveness Enhancement Program of SSAU for 2013--2020 and Project No.~102 of the State assignment to educational and research institutions, and by Grant No.~13-01-970050 r\_povolzhie\_a of the Russian Foundation for Basic Research.
\section{Introduction}\label{s:Introduction}
Quasi-periodic pulsations (QPPs) are frequently seen in the time profiles of solar and stellar flare emission
\citep[see reviews][]{2009SSRv..149..119N, 2016SSRv..200...75N}. Among all possible explanations of QPPs, two mechanisms are discussed most actively: magnetohydrodynamic (MHD) oscillations in plasma waveguides and oscillations in current sheets.
MHD oscillations in plasma waveguides observed in the microwave and X-ray bands cover a wide range of time scales, from a few seconds \citep{1969ApJ...155L.117P, 1983ApJ...271..376K, 2001ApJ...562L.103A, 2003ApJ...588.1163G, 2005A&A...439..727M, 2008ApJ...684.1433F, 2009A&A...493..259I, 2013SoPh..284..559K}
up to tens of minutes \citep{2006A&A...460..865M,2014SoPh..289.1239N, 2014ARep...58..573K}. Microwave and X-ray emissions are
usually related to energetic electrons accelerated during the flare. Gyrating around the magnetic field lines, the relativistic electrons may emit via the gyrosynchrotron mechanism. At the same time, electrons precipitate into the loop footpoints where they produce hard X-ray (HXR) emission via bremsstrahlung \citep{1982SvAL....8..132Z}.
MHD wave processes in coronal magnetic structures can have an effect on the process of electron acceleration, or on the emission of already accelerated electrons, leading to quasi-periodicities in, \textit{e.g.}, microwave and in X-ray emissions.
Oscillations in current sheets are usually connected with series of reconnection \citep[see][ for review]{2009SSRv..149..119N}.
When we are speaking about the reconnections and type~III bursts associated with them, we mean the characteristic times of a few seconds \citep{1994ApJ...431..432A, 2005A&A...437..691N, 2013A&A...555A..40A}. \citet{2009A&A...494..329M}
found oscillations with periods from one up to a few minutes resulting from reconnection driven by magnetic flux emerging into a coronal hole in their simulations.
\citet{2012ApJ...749...30M}
studied the analogous model and found that the physical mechanism of oscillatory reconnection could generate quasi-periodic vertical outflows. Periods of 1.75--3.5~minutes could be obtained by varying the magnetic strength of the buoyant flux tube.
\citet{2015ApJ...804....4K} found observational evidence of quasi-periodic reconnection with the period near 3.3~minutes.
Repetitive reconnection was found to occur near footpoints of a loop arcade where the magnetic field had a fan configuration with a null-point \citep[][]{1998ApJ...502L.181A}.
Consecutive reconnections
could cause the periodic acceleration of non-thermal electrons which propagated to the opposite footpoint along the arcade, appearing there as repetitive increases in the brightness of X-ray and ultraviolet emission.
Type~III radio bursts are created by accelerated electron beams resonantly exciting Langmuir waves \citep[see \textit{e.g.}][]{1973SoPh...31..207J, 2014RAA....14..773R}. The radio waves of type~III bursts arise from a non-linear transformation of the electrostatic Langmuir waves into electromagnetic waves. The scattering of plasma waves on the ions of the background plasma leads to electromagnetic emission in the vicinity of the plasma frequency. The non-linear interaction of two plasma waves produces emission around the second harmonic of the plasma frequency. \citet{2015ApJ...807...72L} detected a similar periodicity in X-rays, in extreme ultraviolet (EUV) line emission, and in radio (several MHz) emission. They found that the peaks of the type~III burst corresponded to peaks of the hard X-ray and extreme ultraviolet emissions.
The mechanism for these correlations is still not clear. However, close agreement between properties of the QPPs and type III bursts could be an indicator that QPPs are caused by current sheet oscillations.
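The density--frequency relation underlying plasma emission can be illustrated with a short numerical sketch. It uses the standard approximation $f_p \approx 8980\sqrt{n_e}$~Hz with $n_e$ in cm$^{-3}$; the function names and the example frequency of 32~MHz (assumed to be fundamental emission) are ours, chosen only for illustration:

```python
import math

PLASMA_CONST_HZ = 8980.0  # f_p [Hz] ~ 8980 * sqrt(n_e [cm^-3])

def plasma_frequency_hz(n_e_cm3):
    """Electron plasma frequency for a given electron density."""
    return PLASMA_CONST_HZ * math.sqrt(n_e_cm3)

def density_from_frequency(f_hz, harmonic=1):
    """Electron density implied by plasma emission observed at f_hz,
    assuming emission at the given harmonic of the plasma frequency."""
    return (f_hz / (harmonic * PLASMA_CONST_HZ)) ** 2

# Density implied by fundamental plasma emission at 32 MHz
print(f"{density_from_frequency(32e6):.2e}")  # ~1.3e+07 (cm^-3)
```

Assuming emission at the second harmonic instead simply halves the implied plasma frequency and so lowers the implied density by a factor of four.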
\citet[][]{2016AdSpR..57.1456K} (hereafter Paper~1) revealed high-amplitude quasi-periodic pulsations during the impulsive phase of a solar flare on May 6, 2005. The QPPs were co-phased in microwave and HXR emission. However, an unambiguous conclusion about the origins of the observed QPPs was not reached. The study was carried out based on the
hard X-ray and microwave data
and did not consider type III radio bursts. The aim of this study is to reveal the most probable scenario of the observed QPPs based on analysis of the flare impulsive phase using additional data at microwave, X-ray, and radio wavelengths. As the spatial structure of the sources of microwaves and X-rays was analyzed in detail in Paper~1, we consider here the time behavior of the emissions and the derived plasma parameters.
The paper is organized as follows. Section~\ref{s:Observations} contains an overview of the flare observations used in the current study. Analyses of the time profiles are summarized in Section~\ref{s:Timeprofs}. The periodic properties of the time series are analyzed with the use of the wavelet transform and the results are summarized in Section~\ref{s:QPP}. Comparison and discussion of arguments \textit{pro} and \textit{contra} the different interpretations are presented in Section~\ref{s:Discussion}, and our conclusion and final remarks are in Section~\ref{s:Conclusion}.
\section{Observations}\label{s:Observations}
A C9.3 GOES class solar flare occurred on May 6, 2005 near the western limb with heliographic coordinates S05W67. In this study, we consider the impulsive phase that occurred between 03:07:00~UT and 03:11:00~UT.
\subsection{X-ray data}\label{s:Xrays}
\begin{figure}
\centerline{
\includegraphics[width=0.5\textwidth,clip=]{Song_RHESSI_timeprofs1.eps}
\includegraphics[width=0.5\textwidth,clip=]{Song_RHESSI_timeprofs2.eps}
}
\caption{
Time profiles of hard X-ray emission during the impulsive phase of the flare on May 6, 2005, normalized by their maxima.
(a) \textit{RHESSI} fluxes at 50--100~keV (black line) and \textit{SONG} fluxes at 43--82~keV (blue);
(b) \textit{RHESSI} fluxes at 100--300~keV (red) and \textit{SONG} fluxes at 82--230~keV (green). }
\label{f:SONG_RHESSI}
\end{figure}
We used simultaneous hard X-ray observations of the flare by the
{\it Reuven Ramaty High Energy Solar Spectroscopic Imager} (RHESSI)
\citep[][]{2002SoPh..210....3L} and the {\it Solar Neutron and Gamma rays} (SONG) experiment onboard the three-axis stabilized {\it Complex Orbital Observations in the Near-Earth space of the Activity of the Sun} (CORONAS-F) solar space observatory \citep[][]{podorolsky2004first1466063}.
These instruments together provide 4-s time resolution measurements
of X-rays and gamma rays in the range 3~keV to 200~MeV. We use the 1D time profiles obtained by the two different instruments to check for the possible presence of artifacts in the QPP observations \citep{2011A&A...530A..47I}.
The time profiles of \textit{RHESSI} fluxes at 50--100~keV and 100--300~keV, and the SONG fluxes at 43--82~keV and 82--230~keV are shown in Figure~\ref{f:SONG_RHESSI}. It is clear that the time profiles correspond to each other for the related energy bands and hence any artificial effects in the HXR time profiles could be excluded.
\subsection{Microwaves and radio waves}\label{s:MWRadio}
\begin{figure}
\centerline{\includegraphics[width=0.75\textwidth,clip=]{RSTN_NoRH_RHESSI.eps}
}
\caption{
Time profiles (in UT) of NoRH, RHESSI, and RSTN Learmonth fluxes normalized by their maxima. The RHESSI time profile is shifted upward by 0.1 (arbitrary units) in order to distinguish the time profiles of the X-ray emission and the NoRH flux.}
\label{f:5}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[width=0.48\textwidth,clip=]{RSTN_spectrum.eps}
\includegraphics[width=0.48\textwidth,clip=]{RSTN_sp_lcurves.eps}
}
\caption{
Left: dynamic spectrum for type~III bursts obtained with {\it the Learmonth Solar Radio Spectrograph}.
The horizontal axis is the time (in seconds) starting from 03:07:02~UT,
the vertical axis is frequency.
Five selected frequency bands are enclosed between the pairs of similarly colored dashed horizontal lines.
Right: the time profiles of the signals in these five spectral bands.
The color of a curve corresponds to the color of the boundaries of the frequency bands shown in the left panel.
}
\label{f:RSTN}
\end{figure}
We used the data obtained by the {\it Nobeyama Radioheliograph} (NoRH) and by the {\it Nobeyama Radio Polarimeters} (NoRP), Japan \citep{1985PASJ...37..163N, 1994IEEEP..82..705N}, for the analysis of the microwave emission. Both instruments have a high time resolution of $\Delta t = 1$~s. The NoRH integrated time profiles at the frequencies 17~GHz and 34~GHz are calculated by integration of the values over the map of the intensity distribution at each time. The maps of the intensity distribution at 17~GHz and 34~GHz are synthesized using the Fujiki and Hanaoka routines, respectively
(\texttt{http://solar.nro.nao.ac.jp/norh/doc/man\_v33e.pdf}).
The spectral properties and time evolution of the flaring radio emission from 25~MHz to 180~MHz are studied using data from the ground-based {\it Learmonth Solar Radio Spectrograph}, Australia, included in {\it The Radio Solar Telescope Network} (RSTN). The cadence of the data is $\Delta t = 3$~s. We have checked the agreement between the timelines of Nobeyama Observatory data and data from the {\it Learmonth Solar Radio Spectrograph}. For this purpose, we used microwave data within 1--17~GHz range obtained by both observatories.
The comparison of the time profiles obtained by both instruments at similar frequencies revealed that the RSTN data are advanced by 5~s relative to the NoRP data. We adopted the Nobeyama observatory time scale as the standard and therefore added a time shift of 5~s to the RSTN data.
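Such a constant instrumental shift can be estimated by locating the maximum of the cross-correlation of the two light curves. The sketch below uses synthetic Gaussian bursts rather than the actual NoRP and RSTN data, and the function and variable names are ours:

```python
import numpy as np

def best_lag(a, b, dt, max_lag_s=20.0):
    """Lag (s) by which series b must be shifted forward in time
    to best match series a, from the peak of the cross-correlation."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = int(max_lag_s / dt)
    best_k, best_cc = 0, -np.inf
    for k in range(-n, n + 1):
        # Correlate a[i + k] with b[i] over the overlapping samples
        if k >= 0:
            cc = np.mean(a[k:] * b[:len(b) - k])
        else:
            cc = np.mean(a[:len(a) + k] * b[-k:])
        if cc > best_cc:
            best_k, best_cc = k, cc
    return best_k * dt

# Synthetic demo: identical bursts sampled at 1 s,
# the second one advanced by 5 s (peaking earlier)
t = np.arange(0.0, 200.0, 1.0)
norp_like = np.exp(-((t - 100.0) ** 2) / (2 * 8.0 ** 2))
rstn_like = np.exp(-((t - 95.0) ** 2) / (2 * 8.0 ** 2))
print(best_lag(norp_like, rstn_like, dt=1.0))  # -> 5.0
```

A positive result means the second series must be delayed by that amount to align with the first, which is exactly the correction applied to the RSTN data.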
\section{Analysis of time profiles}\label{s:Timeprofs}
\begin{figure}
\centerline{
\includegraphics[width=0.48\textwidth,clip=]{Te_06May05.eps}
\includegraphics[width=0.52\textwidth,clip=]{SourceSize_timeprof.eps}
}
\caption{Left: time profiles of plasma temperature (red curve) and electron spectral index $\delta$ (blue curve) derived from RHESSI data with a cadence of $\Delta t = 4$~s. The error bars are the $1\sigma$ errors returned by the fitting routine for each fit parameter. The black curve corresponds to the NoRH 17~GHz flux.
Right: normalized time profiles of the source size measured within 30\% level (blue curve), 50\% level (green curve), 70\% level (orange curve), and 90\% level (red curve) of the maximal RHESSI 3--10~keV flux.}
\label{f:T_delta}
\end{figure}
The microwave and HXR emission of the flare demonstrate a good correlation between the data obtained by different instruments and within different energy bands (see Figure~\ref{f:5}, and also Figure~2 of Paper~1).
Moreover, the analysis of the spatial structure of the microwave sources reveals a good correlation of the fluxes in different areas of the flaring region (see Figure~9 and Figure~12 of
Paper~1).
The cross-correlation coefficients are $R \approx 0.8$--0.95 for zero time delay.
The similarity of the time profiles means the observed fluxes have a solar origin, excluding any instrumental effects and any impact of the Earth's atmosphere.
The type III bursts seen at frequencies between 25 and 180 MHz were observed during the flare simultaneously with the microwaves and X-rays (Figure~\ref{f:RSTN}). From \textit{Wind}/WAVES data we see that each type III burst continued to lower frequencies, down to 200 kHz.
To study the relationship between the radio bursts and the microwaves/X-rays we analyse five frequency bands in the 25--180~MHz range with mean frequencies 32~MHz, 39~MHz, 53~MHz, 81~MHz, and 152~MHz. The bandwidths $\Delta f = $~3~MHz are selected for frequencies $f < 75$~MHz, and $\Delta f = $~5~MHz for frequencies $f > 75$~MHz.
The difference in the bandwidths is caused by the difference in the frequency resolution for these two frequency bands. The bounds of each slice are marked by colored dashed lines in the left panel of Figure~\ref{f:RSTN}.
We selected the above slices to have the minimal noise level.
For each slice, the 1D time series is calculated as a total over the bandwidth and normalized by the bandwidth. Each time profile is overplotted in the right panel of Figure~\ref{f:RSTN} after subtraction of the minimal flux.
We identify four distinct type~III bursts in the time range 03:07:00 to 03:10:30 UT (Figure~\ref{f:5}).
The first burst with the maximum at 03:07:40~UT does not appear to be related to a peak in the microwave or HXR emission. It occurs just before the flare impulsive phase. However, sources of the flaring emission appear in the microwave and X-ray maps starting from 03:07:15--03:07:30~UT (Paper~1).
The second, third, and fourth radio bursts correspond much better to peaks in the microwaves.
The second peak, with the maximum at 03:08:20~UT, corresponds well to that at 34~GHz. However, it is well pronounced only at the
lowest frequencies, below 35~MHz. At higher frequencies (35--75~MHz) only a sharp onset appears at 03:08:00--03:08:05~UT, and so the burst could correspond to the smaller peak at 34~GHz 15~s earlier.
The third and fourth radio bursts at the lowest frequencies reach their maxima at 03:08:50~UT and at 03:09:55~UT respectively.
These peaks occur 10~s and 25~s after the peaks in the GHz range, respectively. The fourth peak is also quite broad in time compared to the other type~III bursts.
As these peaks have a long rise time, an alternative method to estimate the delays between the radio and microwave/HXR bursts is to use the instants of time when the emission rises. We estimated the delay between the rises of the bursts in microwaves and the radio bands to be
$\approx 6\pm1.5$~s for the third radio peak and $\approx 11\pm1.5$~s for the fourth radio peak, with the uncertainty from the temporal resolution of the radio data.
The temporal profiles of the plasma parameters were obtained by fitting the HXR average spectra observed by RHESSI with a model consisting of a thermal component and a thick-target (power-law) component. We used the RHESSI software and the SPectrum EXecutive (SPEX) software \citep{2002SoPh..210..165S} for spectrum processing. The flare occurred on the limb of the Sun, so we did not need to correct for albedo.
We analyzed the time profiles of the plasma temperature and electron spectral index $\delta$ within the time interval 03:07:00--03:10:00~UT with the cadence time $\Delta t =$~4~s (Figure~\ref {f:T_delta}).
The temperature is determined with large errors until 03:08:20~UT. The errors are caused by the small soft X-ray (thermal) flux at the beginning of the flare. Starting from 03:07:20~UT the temperature decreases quasi-periodically from 22--23~MK to approximately 19~MK. The quasi-periodicities have the shape of damped oscillations with the average amplitude $\Delta T / T \approx 10$\%.
The hardest electron spectral index ($\delta \approx 3$) occurs at about 03:08:40~UT and coincides with the intensity maximum. The time profile of the electron spectral index does not show the quasi-periodic pulsations as clearly as the HXR flux, but does show the typical soft-hard-soft behavior observed in the majority of flares.
Three peaks are pronounced near 03:08:20, 03:08:40, and 03:09:20~UT, and coincide in time with the maxima of the microwave and HXR emission (Figure~\ref{f:5}), as well as with the maxima of the temperature (Figure~\ref {f:T_delta}, left panel). The temperature time profile also has a maximum at 03:09:45~UT.
\section{Quasi-periodic variations}\label{s:QPP}
\begin{figure}
\centerline{
\includegraphics[width=0.50\textwidth,clip=]{wavelet_RHESSI_Source_size.eps}
\includegraphics[width=0.50\textwidth,clip=]{wavelet_RHESSI_003_010.eps}
}
\centerline{
\includegraphics[width=0.50\textwidth,clip=]{wavelet_RHESSI_050_100.eps}
\includegraphics[width=0.50\textwidth,clip=]{wavelet_RHESSI_100_300.eps}
}
\centerline{
\includegraphics[width=0.50\textwidth,clip=]{wavelet_NoRH_17GHz.eps}
\includegraphics[width=0.50\textwidth,clip=]{wavelet_NoRH_34GHz.eps}
}
\centerline{
\includegraphics[width=0.50\textwidth,clip=]{wavelet_RSTN_32MHz.eps}
}
\caption{
Wavelet power spectra (colored plots) of the high-frequency component of different data. The width of the smoothing interval $\tau = 60$~s is used. In each panel, the normalized time profile is overplotted in the wavelet power spectrum. The green contour indicates the 99\% significance level. The plot to the right of the colored one is the global wavelet spectrum obtained by integration of the wavelet power spectrum over time. The dashed line here shows the 99\% significance level. The panels correspond to the following time profiles:
(a) the thermal X-ray source size calculated within 50\% level of the maximal flux at 3--10~keV,
(b) the RHESSI flux at 3--10~keV,
(c) the RHESSI flux at 50--100~keV,
(d) the RHESSI flux at 100--300~keV,
(e) the NoRH flux at 17~GHz,
(f) the NoRH flux at 34~GHz,
(g) the RSTN Learmonth flux at 32~MHz.
The time axis corresponds to the interval 03:07:00--03:10:30~UT for panels from (a) to (f)
and to 03:07:00--03:12:00~UT for panel (g).
}
\label{f:Wavelets}
\end{figure}
We study the periodic properties of the flare emission using both standard wavelet routines with the Morlet mother wavelet and Fourier periodogram analysis. The wavelet procedure can be applied to centred time series only.
To subtract a low-frequency background from the original time series, we apply smoothing with a running average over time intervals $\tau$ from 30~s to 200~s \citep[see][for details]{2010SoPh..267..329K}.
The residual is the high-frequency component of the signal, which constitutes a centred time series.
We note that the total amplitudes of the high-frequency component are considered without Fourier filtration \citep{2009A&A...493..259I}. Both techniques give similar results, so in this section we show only the wavelet results.
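The running-average detrending described above can be sketched in a few lines. This is a minimal illustration only: the synthetic signal and the simple edge handling are ours and cruder than the actual processing of the observed light curves:

```python
import numpy as np

def high_freq_component(signal, dt, tau):
    """Centred high-frequency residual: the signal minus its
    running average over a window of width tau (seconds)."""
    w = max(1, int(round(tau / dt)))
    trend = np.convolve(signal, np.ones(w) / w, mode="same")
    return signal - trend

# Synthetic demo: a 50 s oscillation on a slow linear trend,
# sampled at 1 s and smoothed with tau = 60 s
t = np.arange(0.0, 300.0, 1.0)
x = 0.01 * t + 0.3 * np.sin(2 * np.pi * t / 50.0)
resid = high_freq_component(x, dt=1.0, tau=60.0)
# Away from the edges the residual retains the 50 s oscillation
# while the slow trend is removed
```

In practice the first and last half-window of the residual is unreliable (the `mode="same"` convolution pads with zeros there), which is one reason to check that detected periods persist for all choices of $\tau$.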
Fitting the RHESSI energy spectrum allows us to distinguish between thermal ($\leq$~10 keV) and non-thermal ($\geq$~25--50~keV) components of the X-ray emission
(Paper~1).
We estimate the source size at different levels of the maximal 3--10~keV flux value at each time from 03:07:00 to 03:11:50~UT with $\Delta t = 4$~s. The time behavior of the source size within the 30\%, 50\%, 70\%, and 90\% levels of the maximum is shown by different colored lines
in the right panel of Figure~\ref{f:T_delta}.
All the time profiles look similar and so we choose the 50\% level time profile as the example of the source area variation.
It is clear from the wavelet spectrum (Figure~\ref{f:Wavelets}, panel a) that there are no periodic changes in the size of the thermal source.
The wavelet spectrum for the thermal X-ray flux reveals a very weak spot at the period of 40~s (Figure~\ref{f:Wavelets} panel b). If we look at the 99\% significance level, it is obvious that the spot covers only one or one and a half periods, so this periodicity cannot be considered as QPPs.
The non-thermal emission reveals high-amplitude quasi-periodic changes with the dominant time scales of $P_1 \approx $~50~s and $P_2 \approx $~30~s. Both periods are well pronounced in the microwaves at 17~GHz and at 34~GHz (Figure~\ref{f:Wavelets} panels e and f) and HXRs at 50--100~keV and 100--300~keV bands (Figure~\ref{f:Wavelets} panels c and d), as well as at 25--50~keV (Paper~1). A similar periodicity, $P_1 \approx $~40--50~s, is also found in the flux at 32~MHz (Figure~\ref{f:Wavelets} panel g).
This confirms that the $P_1$ and $P_2$ periodicities are not simply instrumental. Besides,
three or four cycles of the oscillation are well pronounced both in the original signals and in the high-frequency component for all smoothing intervals $\tau$. This allows us to exclude (or at least substantially reduce) the artificial impact of the mathematical method on the results of the spectral analysis.
The average relative amplitude of the variations in microwaves and X-rays is very high, $\Delta I / I \approx 30$\%, and it reaches 80\% after the flare maximum.
The longer-period variations, $P_3 \approx $~80~s, are present in the time profile at 32~MHz (Figure~\ref{f:Wavelets} panel g). They are attributed to two smooth peaks visible from 03:09:10~UT to 03:12:00~UT. Periods $P_4 \leq$~100~s, seen in the global wavelet spectra at 17~GHz and 34~GHz, are attributed to the global trend of the flare.
All the wavelet spectra are shown in this paper for a smoothing interval of $\tau = 60$~s.
However, the QPPs keep their periods for all $\tau$ from 30~s to 200~s.
Therefore the QPPs belong to the signal
and they are not artifacts of the smoothing.
The results of the wavelet analysis are confirmed by the results of Fourier analysis.
Due to the discreteness of the frequency grid in Fourier and wavelet transforms
the precisions of the periods determined are $\Delta P_1 = \pm 4$~s
for the period $P_1 \approx 50$~s and $\Delta P_2 = \pm 1$~s for period $P_2 \approx 30$~s.
These peaks in the Fourier periodogram are rather wide with the full width at half power
$FWHP_1 \approx 16$~s and $FWHP_2 \approx 7$~s respectively.
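The scaling of these precisions with period can be understood from a generic estimate (our sketch, assuming a uniform Fourier frequency grid with step $\Delta f = 1/T$ for a record of length $T$): since $P = 1/f$,
$$\Delta P \approx \left|\frac{dP}{df}\right|\Delta f = P^{2}\,\Delta f = \frac{P^{2}}{T},$$
so the absolute period uncertainty grows quadratically with the period, consistent with $\Delta P_1 > \Delta P_2$ for the longer period $P_1$.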
\section{Discussion}\label{s:Discussion}
Analyzing the time behavior of the flare on May 6, 2005 we found common periodic variations (40--50~s) of the microwave and hard X-ray emission and of the type III radio bursts. The average relative amplitude of the variations is high, 30\% above the background flux level, reaching 80\% during the flare maximum. However, we did not find this periodicity either in the thermal X-ray flux component or in the source size dynamics.
We consider two probable explanations of QPPs in flare emission: MHD oscillations in plasma waveguides and oscillatory reconnection. In both cases we would observe pulsations in HXRs and microwaves caused by accelerated electrons. The MHD oscillation model explains the high amplitude of oscillations by modulation of the emission produced by accelerated electrons \citep[see][as pioneering studies]{1969ApJ...155L.117P, 1983Natur.305..292N, 1983ApJ...271..376K}, whilst the oscillatory reconnection modulates the process of electron acceleration \citep{2009SSRv..149..119N}.
The wavelet analysis found a similar periodicity in the time profiles with period $P \approx 40$--$50$~s
for all the studied spectral regions. This confirms that the observed QPPs are of solar origin and not artifacts
due to data handling or due to instrumental effects \citep{2011A&A...530A..47I}.
\subsection{MHD oscillations}\label{s:MHD}
Microwave emission from solar flares is often attributed to gyrosynchrotron emission from accelerated electrons. The simplified equation for the gyrosynchrotron emissivity \citep{1982ApJ...259..350D} gives the relationship between the intensity $I$ and the magnetic field $B$
$$ I \propto N B^{0.9\delta - 0.22}.$$
Here, $N$ is the number of accelerated electrons with energies above 10~keV.
MHD waves do not significantly affect the number of accelerated electrons.
Thus we may assume that $N$ is nearly constant.
The spectral index $\delta$ of accelerated electrons derived from the HXR observations varied from $\delta \approx 3.5$ at the beginning of the impulsive phase to $\delta \approx 5.5$ at the end of the impulsive phase (Figure~\ref{f:T_delta}). If we assume the angle between the magnetic field and the line of sight $\theta = const$, then the dependence of the intensity on the magnetic field is roughly estimated as $I \propto B^3$ for the harder spectrum with $\delta \approx 3.5$. Taking the logarithmic derivative of this relationship we calculate $\Delta B/B = \Delta I/3I$. The average amplitude of the QPPs found in Section~\ref{s:QPP} is $\Delta I / I \approx 30$\%, so we obtain $\Delta B/B \approx 10$\%.
In a similar way we estimate the variations of the magnetic field for the softer spectrum at the end of the impulsive phase. The spectral index $\delta \approx 5.5$ leads to $\Delta B/B = \Delta I/5I \approx 6$\%.
The amplitude of the QPPs reaches $\Delta I / I \approx 80$\% at the flare maximum where $\delta \approx 4.5$. In this case the variations of the magnetic field should be $\Delta B/B \approx 20$\% in order to provide such a high modulation depth. It is hard to explain this property in terms of the modulation of the emission by MHD oscillations of the emitting volume.
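The three estimates above amount to dividing the relative intensity amplitude by the gyrosynchrotron exponent $0.9\delta - 0.22$; a minimal numerical check (the function name is ours):

```python
def delta_b_over_b(delta_i_over_i, delta):
    """Relative magnetic field variation implied by a relative
    intensity variation, for I ~ B**(0.9*delta - 0.22) with the
    number of emitting electrons and viewing angle held fixed."""
    return delta_i_over_i / (0.9 * delta - 0.22)

# Cases discussed in the text: (relative amplitude dI/I, spectral index)
for amp, d in [(0.30, 3.5), (0.30, 5.5), (0.80, 4.5)]:
    print(f"delta={d}: dB/B ~ {delta_b_over_b(amp, d):.2f}")
```

This reproduces the quoted $\approx 10$\%, $6$\%, and $\approx 20$\% magnetic field variations.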
Another argument against the production of the detected QPP by direct modulation of the emitting plasma or magnetic field by an MHD wave is the absence of pronounced oscillations in the thermal X-ray fluxes or the thermal X-ray source size dynamic (Figure~\ref{f:Wavelets} panels a and b).
We found the time profiles of the microwaves, HXRs, and type III radio bursts varying quasi-periodically with a common period of 40--50~s. Such a stable periodicity over so wide a height range is difficult to explain by MHD oscillations. Moreover, it is not obvious how an MHD oscillation would affect the amplitude of type III emission to cause the observed periodicity. Type III bursts are believed to be caused by discrete electron beams traveling at around $10^{10}$~cm~s$^{-1}$, much faster than typical speeds of MHD waves.
\subsection{Quasi-periodic reconnections}\label{s:Reconnections}
The good correlation between the time profiles in microwaves and HXRs (Figure~\ref{f:5}) suggests that both emissions are generated by the same, time-varying population of accelerated electrons. This fact allows us to consider periodic reconnection as a possible scenario for the observed
variations of the flare emission intensity.
\citet{2015ApJ...807...72L} found QPPs with $P \approx $~4~min in HXRs, in the chromospheric and coronal line emissions, and in the radio emission. The authors showed that each radio peak corresponds to a type III burst. The results of that study indicate that the QPPs observed in this flare were generated by the non-thermal electrons accelerated by a set of quasi-periodic magnetic reconnections.
As was shown in Section~\ref{s:Timeprofs}, the type III bursts in the MHz radio band appear to be related to the microwaves and the HXRs (Figure~\ref{f:5}). The relative phase relationships between these emissions could indicate that electrons are accelerated in the corona, giving rise both to the development of the microwave and X-ray sources in the lower layers and to the type~III bursts in the higher layers of the solar corona. Moreover, we found the $P_1 \approx $~40--50~s periodicity, which is similar to the periodicity detected in the HXR and microwave emission (Figure~\ref{f:Wavelets} panels c to g).
The time evolution of the electron spectral index $\delta$ calculated using RHESSI data is characterized by the presence of three peaks (Figure~\ref{f:T_delta}) in the spectral hardness, near the times 03:08:20, 03:08:40, and 03:09:20~UT. These peaks coincide in time with the peaks of microwave emission at 17~GHz (Figure~\ref{f:5}). When the microwave flux increases, the spectral index decreases and the spectrum becomes harder, implying progressive acceleration of electrons. Similarly, the decrease in the microwave flux and simultaneous increase in $\delta$ reveal the relaxation of the acceleration process.
The second, third and fourth type III bursts appear to be related to the bursts observed in microwaves and HXRs. There is an expected time delay between a peak in microwaves and a corresponding peak at 32 MHz, related to the travel time of the electrons from the acceleration region to the upper corona. Assuming a density model \citep{1958ApJ...128..677P} of the quiet Sun, we attribute 32~MHz to around $5.4 \times 10^{10}$~cm. A modest type~III speed of 0.2~$c$ (where $c$ is the light speed) gives just under 10~s travel time that is similar to the delay found between the rise in microwaves and the rise in type III emission (Section \ref{s:Timeprofs}). This provides further evidence for a common acceleration region responsible for the energetic electrons that created the type III emission, microwaves and HXRs.
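The travel-time estimate above is simple kinematics; the following minimal sketch (illustrative only, using the $5.4\times10^{10}$~cm height and the $0.2\,c$ beam speed quoted above) reproduces the quoted delay of just under 10~s.

```python
# Sanity check of the electron travel-time estimate quoted in the text.
c_cm_s = 2.998e10           # speed of light [cm/s]
height_cm = 5.4e10          # height attributed to the 32 MHz plasma level [cm]
beam_speed = 0.2 * c_cm_s   # modest type III exciter speed assumed in the text

travel_time_s = height_cm / beam_speed
print(travel_time_s)        # ~9 s, i.e. "just under 10 s"
```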
\section{Conclusion and final remarks}\label{s:Conclusion}
Analyzing the microwaves, hard X-rays and type III bursts emitted during a solar flare we found the following results: a stable 40--50~s periodicity in the microwaves, hard X-rays and type III radio bursts, a similar periodic behavior of the electron spectral index, and an absence of any periodicity in the thermal X-ray lightcurves or the thermal X-ray source size. We conclude that the observed QPPs were most probably caused by quasi-periodic acceleration of electrons driven by oscillations in the current sheet rather than by high-amplitude MHD pulsations.
This work adds to the other confirmations from observations \citep{2015ApJ...804....4K} and from simulations \citep{2009A&A...494..329M, 2012ApJ...749...30M} that magnetic reconnection can be modulated by MHD waves with minute periods.
The oscillations in the current sheet could be caused by slow magneto-acoustic waves coming from the lower layers of the solar atmosphere. The indirect evidence in favor of MHD driven reconnections could be the slow magneto-acoustic waves found during the decay phase of this flare with an approximately similar period (Paper~1).
This suggestion is in agreement with the results of simulations made by \citet{2012A&A...548A..98M}
who considered non-linear fast magnetoacoustic waves that deform a magnetic X-point as a driver for oscillatory reconnection with periods from 56.3 s to 78.9 s.
However, diagnostics of the wave type is impeded because of the absence of EUV data with high temporal resolution.
\begin{acks}
This work was (partly) carried out on the Solar Data Analysis System
operated by the Astronomy Data Center in cooperation with
the International Consortium for the Continued Operation of Nobeyama Radioheliograph (ICCON).
This research was partly supported by the grants of the Russian Foundation
for Basic Research Nos. 14-02-00924, 15-02-08028, 15-02-03717,
by the Program of Russian Academy of Sciences No.22,
by the RAS Presidium program No. 0344-2015-0017,
by the Marie Curie FP7-PEOPLE-2011-IRSES-295272,
and by STFC consolidated grant ST/L000741/1 (HASR).
EK is a beneficiary of a mobility grant from the Belgian Federal Science Policy Office.
The authors thank an anonymous referee for helpful comments on the manuscript.
\end{acks}
\bibliographystyle{spr-mp-sola}
\section{Introduction}
In a recent paper \cite{Singh:2018} we have proposed that space-time arises as a consequence of localisation of the wave function of macroscopic objects due to the dynamical mechanism of spontaneous localisation. In the present paper we present the same result, along with new physical insights and an experimental prediction, from a different perspective. We start by noting that there is a `problem of time in quantum theory'. One possible resolution of this problem is to invoke an operator space-time in which time is no longer a classical parameter. Classical space-time, along with classical matter, is recovered from operator space-time by invoking a relativistic generalisation of spontaneous collapse of macroscopic objects. In so doing, we predict the new phenomena of spontaneous localisation in time, and quantum interference in time, which should be looked for in laboratory experiments. We explain how the standard quantum theory on a classical curved space-time background is recovered from an underlying quantum theory on an operator space-time, by suppressing the operator nature of time. The originally proposed resolution of the quantum measurement problem via spontaneous collapse \cite{Ghirardi:86} is seen as an inevitable
by-product of the relativistic spontaneous localisation that we propose in the present work to recover classical space-time from operator space-time.
\section{The need for a formulation of quantum theory without classical space-time}
Dynamics as we know it can be very roughly divided into two classes: classical dynamics on a classical space-time, and quantum dynamics on a classical space-time. This is depicted in the cartoon in Fig. 1 below.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\linewidth]{Fig1twolevels}
\caption{A rough classification of dynamics. Level III. is Classical Mechanics (CM) on a Classical Space-Time (CST). Level II. is Quantum Theory (QT) on the same classical space-time CST.}
\end{figure}
Level III. in this figure symbolically depicts/includes Newtonian mechanics and Galilean relativity, special relativity, and also general relativity. The curved-space metric is suppressed for simplicity, the key emphasis being that classical objects and fields produce and co-exist with classical space-time.
Level II. in this figure symbolically depicts quantum theory on classical space-time, and includes non-relativistic and relativistic quantum mechanics, quantum field theory, and quantum field theory on a curved space-time. The key emphasis here is the assumption that quantum systems can co-exist with a classical space-time. At a fundamental level, this assumption is problematic, as is depicted in Fig. 2 below \cite{Singh:2012}.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\linewidth]{Fig2pbmtimeqt}
\caption{The problem of time in quantum theory.}
\end{figure}
The time parameter which keeps track of evolution in quantum theory, is part of a classical space-time manifold, whose overlying geometry is produced by classical bodies. But these classical bodies are in turn a limiting case of quantum theory, thus making quantum theory depend on its own classical limit. It is a consequence of the application of the Einstein hole argument that if only quantum systems are present, one will have quantum fluctuations in the metric, and as a result one cannot give physical meaning to the point structure of the underlying space-time manifold \cite{Carlip2001}. Thus level II. in Fig. 1 is only an approximate/effective description of the dynamics and it requires the dominant pre-existence of classical matter fields in the universe. At a fundamental level, where there are no classical systems, there ought to exist a formulation of quantum theory which does not refer to classical space-time. We call this Level I. It follows that Level II. should be arrived at from Level. I, in a suitable approximation.
\section{A possible formulation of quantum theory without classical space-time}
We would like to make a minimal departure from classical space-time, in order to arrive at Level I. Ignoring gravity for the present, we assume that there is a Minkowski space-time metric on Level III. We then propose that physical laws are invariant under inertial coordinate transformations of {\it non-commuting} coordinates, which now acquire the status of operators (equivalently matrices), (${\hat t}, {\hat{\bf{x}}}$). The transition from Level II. to Level I. is made by bringing in non-commutativity of the coordinates, with the coordinates obeying arbitrary commutation relations. There is thus an operator space-time, and from the operator line-element a scalar Trace time $s$ is defined as follows:
\begin{equation}
ds^2 = {\rm Tr}\; d\hat{s}^2 \equiv {\rm Tr} [c^2\; d\hat{t}^2 - d {\bf \hat {x}^2 } ]
\label{ost}
\end{equation}
In analogy with special relativity, one can construct a Poincare-invariant dynamics for the operator matter degrees of freedom which live on this space-time. We call this a non-commutative special relativity - it is a classical matrix dynamics on an operator space-time. Given a Lagrangian for the system, one can write down the equations of motion, where time evolution is now recorded by the Trace time. And one can write down Hamilton's equations of motion for the canonical position operators and their conjugate momenta operators, like in conventional classical mechanics \cite{Lochan-Singh:2011}.
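The Trace time definition (\ref{ost}) can be made concrete in a toy case: if, purely for illustration, the operator increments are taken to be commuting diagonal matrices, the Trace line element reduces to the sum of ordinary special-relativistic line elements over the eigenvalue branches. The sketch below assumes nothing beyond that simplification.

```python
import numpy as np

c = 1.0
# Toy operator increments: commuting (diagonal) 2x2 matrices, chosen only for
# illustration; the general non-commutative case is not captured here.
dt_hat = np.diag([2.0, 3.0])
dx_hat = np.diag([1.0, 2.0])

# ds^2 = Tr[c^2 dt_hat^2 - dx_hat^2]
ds2 = np.trace(c**2 * dt_hat @ dt_hat - dx_hat @ dx_hat)

# Same as summing the classical line elements of the two eigenvalue branches:
eigen_sum = (c**2 * 2.0**2 - 1.0**2) + (c**2 * 3.0**2 - 2.0**2)
print(ds2, eigen_sum)  # 8.0 8.0
```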
One then constructs a statistical thermodynamics for these matrix degrees of freedom, following the theory of Trace dynamics developed by Adler and collaborators \cite{Adler:04}. Remarkably, it is shown that at thermodynamic equilibrium, the thermal averages of the fundamental degrees of freedom obey the rules of relativistic quantum theory. But this is now on the operator space-time metric (\ref{ost}), with the operator coordinates now commuting with each other and with the matter degrees of freedom. Evolution is still recorded by the trace time. Following the techniques of Trace dynamics, one can develop a relativistic quantum field theory on this operator space-time. However, for our present considerations, we will restrict ourselves to a many particle relativistic system. Given a system with $n$ matrix degrees of freedom labelled $q_i^\mu$, it obeys a Lorentz invariant Schr\"{o}dinger equation for the wave-vector $\ket\psi$, which evolves with Trace time, and the index $\mu$ signifies that $q^{\mu}_{i}$ is the `position' four-operator in operator space-time, for the $i$th particle. This quantum dynamics on the operator space-time is our sought after formulation of quantum theory without classical space-time \cite{Lochan:2012}. We could have written this down straight away, but starting from non-commutative special relativity elegantly shows the underlying symmetry, and the minimal departure from classical space-time that is introduced by non-commutativity of coordinates. This is the desired Level I, and it is depicted in Fig. 3 below.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\linewidth]{Fig3levelone}
\caption{Introducing Level I. Quantum theory without classical space-time, and the extended Hilbert space. Here, classical space-time is replaced by the Operator Space-Time (OST), which transforms the Hilbert space of quantum theory to the Extended Hilbert Space.}
\end{figure}
Level I. has a very significant feature. It is that the Hilbert space, endowed with the operator metric, is now the entire physical universe. There is no more any classical physical space or classical space-time, outside this `Extended Hilbert Space'. Thus there is no longer the uneasy tension between the conventional quantum Hilbert space on the one hand - where the wave-function resides - and the particles on the other hand, which this wave-function is supposed to describe, but which live in physical 3-space. In the standard picture, the Hilbert space and the physical 3-space have no apparent physical connection. By endowing the Hilbert space with an operator metric, we overcome that discord \cite{Singh:2018}.
We must now understand how to descend from Level I. to Levels II. and III. First, we propose that a transition has to be made from Level I. to Level III. (see Fig. 4 below). This is done by invoking a relativistic generalisation of the spontaneous collapse mechanism of the Ghirardi-Rimini-Weber (GRW) theory.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\linewidth]{Fig4levelonethree}
\caption{Recovering Level III. from Level I. by invoking relativistic spontaneous localisation.}
\end{figure}
\section{Space-time from collapse of the wave-function}
We define the self-adjoint space-time operator $\hat{x}^\mu$ as $\hat{x}^\mu = (\hat t, {\bf \hat x})$, where all four operators commute with each other and with the ${\bf \hat q_n}$s. In the `position' representation, the state of the system is labelled by eigenvalues of $\hat{x}^{\mu}$, and is hence written as
$\psi(x^\mu_1, x^\mu_2, ..., x^\mu_N)$. Evolution is governed by the trace time $s$ defined above.
The dynamics is then given by the following relativistic generalisation of the two GRW postulates \cite{Singh:2018}.
1. Given the wave function $\psi(x^\mu_1, x^\mu_2, ..., x^\mu_N)$ of an $N$ particle quantum system in extended Hilbert space, the $n$-th particle undergoes a `spontaneous collapse' to a random eigenvalue $x^{\mu}$ of ${ \hat x}^\mu$, as defined by the following jump operation:
\begin{eqnarray}
{\psi_{s}(x^\mu_1, x^\mu_2, ..., x^\mu_N)\quad
\longrightarrow \quad}
\frac{L_{n}({x^\mu}) \psi_{s}(x^\mu_1, x^\mu_2, ..., x^\mu_N)}{\|L_{n}({x^\mu}) \psi_{s}(x^\mu_1, x^\mu_2, ..., x^\mu_N)\|}
\end{eqnarray}
The jump operator $L_{n}({x^\mu})$ is a Lorentz invariant linear operator defined to be the normalised Gaussian:
\begin{equation}
L_{n}(x^\mu) =
\frac{1}{(\pi c t_C)^{2}} e^{- ({
{ \hat q}^\mu_n} - {x^\mu})^2/2 c^2 t _C^2}
\end{equation}
$\hat{q}^\mu_{n}$ is the position operator for the $n$-th particle of the system and the random variable ${x^\mu}$ is the eigenvalue of ${\hat x}^\mu$ to which the jump occurs. $t_C$ is a new constant of nature.
The probability density for the $n$-th particle to jump to the eigenvalue ${x^\mu}$ of ${ \hat x}^\mu$ is assumed to be given by:
\begin{equation}
p_{n}({ x^\mu}) \quad \equiv \quad \|L_{n}({x^\mu}) \psi_{s}(x^\mu_1, x^\mu_2, ..., x^\mu_N)\|^2
\end{equation}
Also, it is assumed that the jumps are distributed in trace time $s$ as
a Poissonian process with frequency $\eta_{GRW}$, which is the second new constant of the model.
2. Between two consecutive jumps, the state vector evolves according to the following generalised Schr\"odinger equation
\begin{equation}
i\hbar \frac{\partial\psi}{\partial s} = H \psi (s)
\label{ose}
\end{equation}
The particles described by ${q^\mu_n}$ `live' in the ${ \hat x}^\mu$ operator space-time, and the aforesaid ${x^\mu}$ values are actually eigenvalues of ${\hat{x}^\mu}$. Rapid collapse localises a macroscopic object to one of the eigenvalues of ${ \hat{x}^\mu}$. Using these eigenvalues as reference points, one interprets the collection of eigenvalues as the four dimensional classical space-time we are familiar with. Space-time could be said to be that which is between GRW jumps in the operator space-time. A quantum mechanical particle which has not undergone collapse still `lives' in the space-time operator space ${ \hat{x}^\mu}$.
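The claim that collapse is rapid for macroscopic objects rests on the usual GRW amplification argument: a jump of any one of the $N$ constituents localises the whole object, so the effective rate scales as $N\eta_{GRW}$. A rough sketch, assuming for illustration the conventional GRW value $\eta_{GRW}\approx 10^{-16}\,\mathrm{s}^{-1}$ (the text leaves this constant free):

```python
# GRW amplification: any one of the N constituents jumping localises the
# centre of mass, so the effective localisation rate is N * eta_GRW.
eta_grw = 1e-16        # assumed jump frequency per particle [1/s] (illustrative)
n_macro = 1e23         # number of particles in a macroscopic object
n_micro = 1.0          # a single quantum particle

rate_macro = n_macro * eta_grw   # ~1e7 jumps/s: localisation is effectively instantaneous
rate_micro = n_micro * eta_grw   # ~one jump per 1e16 s: superpositions survive
print(rate_macro, rate_micro)
```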
Classical space and time are thus approximations to the operator space and time described by $({\bf \hat{x}}, \hat{t})$, the approximation being caused by GRW quantum jumps. One can consider the classical line-element $(c^2 dt^2 - d{\bf x}^2)$
to be one of the eigenvalues of the operator line element $( c^2\; d\hat{t}^2 - d {\bf \hat {x}^2 } )$ and the Lorentz invariance of the latter ensures the Lorentz invariance of the former. The proper time of special relativity can be said to be the classical correspondence of Trace time. The transition from Level I. to Level III. is depicted in Fig. 5 below. In the process, Level II. is bypassed - we return to Level II. in the next section. It is evident from this figure that we actually live in the Extended Hilbert Space.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\linewidth]{Fig5levelonethreecoll}
\caption{Recovering classical space-time of Level III. from Level I. by invoking relativistic spontaneous localisation of macroscopic objects.}
\end{figure}
Just as a macroscopic object spontaneously collapses to a specific position in space and repeated collapses keep it there, spontaneous collapses in time keep it frozen at a specific value of classical time. How then does it evolve in time? This is a serious difficulty with the model as it stands. One possible solution is to propose that spontaneous collapse takes place not onto space-time points, but to space-time paths. Paths are more fundamental than points. Instead of constructing paths from points, we should construct points from paths. Evolution in time is then a perception - the entire space-time path is in fact pre-given, in the spirit of the principle of least action, which determines the entire path in one go. The mathematical formulation of this proposal is presently being attempted.
It is also interesting to note that starting from non-commutative special relativity on level I. one could consider recovering the usual special relativity at Level III., perhaps by a mechanism analogous to spontaneous localisation. This process entirely bypasses quantum theory, and might be worthy of further investigation.
\section{Recovering quantum theory on classical space-time, and the significance of quantum interference in time}
The way Level II. is usually constructed, is shown in Fig. 6 below.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\linewidth]{Fig6levelonetotwo}
\caption{Recovering Level II from Level I.}
\end{figure}
That is, we take quantum theory from Level I. (without the postulate of spontaneous localisation) and we take classical space-time from Level III. and we make a hybrid dynamics at Level II. In the light of our discussion in the first section, and in the light of the spontaneous localisation postulate of Level I, we now know that this hybrid dynamics of Level II. cannot be the full story. In fact quantum dynamics can be correctly described only at level I, by using the operator space-time metric. If spontaneous localisation is ignorable (microscopic systems) we get linear quantum theory on an operator space-time, which as we shall soon see, differs from quantum theory on CST by way of predicting interference in time. If we insist on using a classical space-time, as in Level II., then the minimum we must do is have the GRW theory, expressed by the following standard postulates (non-relativistic theory) \cite{Ghirardi:86,Bassi:03}.
1. Given the wave function $\psi ({\bf x_1}, {\bf x_2}, ..., {\bf x_N})$ of an $N$ particle quantum system in Hilbert space, the $n$-th particle undergoes a `spontaneous collapse' to a random spatial position ${\bf x}$ as defined by the following jump operator:
\begin{eqnarray}
{\psi_{t}({\bf x}_{1}, {\bf x}_{2}, \ldots {\bf x}_{N}) \quad
\longrightarrow \quad}
\frac{L_{n}({\bf x}) \psi_{t}({\bf x}_{1},
{\bf x}_{2}, \ldots {\bf x}_{N})}{\|L_{n}({\bf x}) \psi_{t}({\bf
x}_{1}, {\bf x}_{2}, \ldots {\bf x}_{N})\|}
\end{eqnarray}
The jump operator $L_{n}({\bf x})$ is a linear operator defined to be the normalised Gaussian:
\begin{equation}
L_{n}({\bf x}) =
\frac{1}{(\pi r_C^2)^{3/4}} e^{- ({\bf
\hat q}_{n} - {\bf x})^2/2r_C^2}
\end{equation}
${\bf \hat q}_{n}$ is the position operator for the $n$-th particle of the system and the random variable ${\bf x}$ is the spatial position to which the jump occurs. $r_C$, the width of the Gaussian, is a new constant of nature.
The probability density for the $n$-th particle to jump to the position
${\bf x}$ is assumed to be given by:
\begin{equation}
p_{n}({\bf x}) \quad \equiv \quad \|L_{n}({\bf x}) \psi_{t}({\bf
x}_{1}, {\bf x}_{2}, \ldots {\bf x}_{N})\|^2
\end{equation}
Also, it is assumed in the GRW theory that the jumps are distributed in time as
a Poissonian process with frequency $\lambda_{\text{\tiny GRW}}$. This is the second
new parameter of the model.
2. Between
two consecutive jumps, the state vector evolves according to the
standard Schr\"odinger equation.
It is not difficult to see that the GRW theory above can equivalently be expressed by assuming spatial position to be an operator:
We define a set of three new self-adjoint `space operators' ${\bf \hat x}$ which commute with each other and with the ${\bf \hat q}_n$s. The state of the system is described by the wave function $\psi ({\bf x_1}, {\bf x_2}, ..., {\bf x_N})$, where ${\bf x_n}$ is a set of three degrees of freedom associated with the $n$-th particle, these being real eigenvalues of the newly introduced `space operator' ${\bf \hat x}$ which belongs to the Hilbert space. The state evolves with time according to the following two postulates, which are essentially the same as the GRW postulates, except that one gets rid of classical physical space.
1. Given the wave function $\psi ({\bf x_1}, {\bf x_2}, ..., {\bf x_N})$ of an $N$ particle quantum system in Hilbert space, the $n$-th particle undergoes a `spontaneous collapse' to a random eigenvalue ${\bf x}$ of ${\bf \hat x}$, as defined by the following jump operator:
\begin{eqnarray}
{\psi_{t}({\bf x}_{1}, {\bf x}_{2}, \ldots {\bf x}_{N}) \quad
\longrightarrow \quad}
\frac{L_{n}({\bf x}) \psi_{t}({\bf x}_{1},
{\bf x}_{2}, \ldots {\bf x}_{N})}{\|L_{n}({\bf x}) \psi_{t}({\bf
x}_{1}, {\bf x}_{2}, \ldots {\bf x}_{N})\|}
\end{eqnarray}
The jump operator $L_{n}({\bf x})$ is a linear operator defined to be the normalised Gaussian:
\begin{equation}
L_{n}({\bf x}) =
\frac{1}{(\pi r_C^2)^{3/4}} e^{- ({\bf
\hat q}_{n} - {\bf x})^2/2r_C^2}
\end{equation}
${\bf \hat q}_{n}$ is the position operator for the $n$-th particle of the system and the random variable ${\bf x}$ is the eigenvalue of ${\bf \hat x}$ to which the jump occurs. $r_C$, the width of the Gaussian, is a new constant of nature.
The probability density for the $n$-th particle to jump to the eigenvalue ${\bf x}$ of ${\bf \hat x}$ is assumed to be given by:
\begin{equation}
p_{n}({\bf x}) \quad \equiv \quad \|L_{n}({\bf x}) \psi_{t}({\bf
x}_{1}, {\bf x}_{2}, \ldots {\bf x}_{N})\|^2
\end{equation}
Also, it is assumed that the jumps are distributed in time as
a Poissonian process with frequency $\lambda_{\text{\tiny GRW}}$. This is the second
new parameter of the model.
From the structure of these postulates, and from their comparison with the relativistic postulates of Sections II. and III. the following facts are evident: (i) if the operator nature of time is suppressed, and spontaneous localisation is ignored, then relativistic quantum field theory on level I. coincides with relativistic quantum field theory on Level II. (ii) if the operator nature of time is suppressed, and spontaneous localisation is invoked, then one arrives from the relativistic collapse model of Section III. to the non-relativistic GRW theory at Level II. (iii) In order to make a relativistic version of the GRW theory, we must invoke an operator nature for time.
Thus quantum theory at Level I. differs from quantum theory at Level II, in that at level I. time is an operator, while at level II. it is not. This is the feature that is lost in the hybrid dynamics at level II. What is the evidence for the operator nature of time, and why do we not see it easily? If time is an operator, we should see quantum interference in time. We believe we have a convincing explanation as to why quantum interference in time is so much harder to see than the usual spatial quantum interference. From the relativistic collapse postulates of Section III, and from the GRW postulates, it is plausible to make the assumption that $\eta_{\rm GRW} = \lambda_{\rm GRW}$, and that $ct_C = r_C$. If we assume for $r_C$ the GRW value of $10^{-5}$ cm, then we get that
$t_C=r_C /c\sim 10^{-16}$ s. If we were to make `time slits' with a separation significantly larger than $10^{-16}$ s, then even for microscopic systems, spontaneous collapse in time will destroy quantum interference in time. On the other hand, if the time slits have a separation of the order $10^{-16}$ s or smaller, interference in time will be observed. Remarkably enough, attosecond scale interference in time may have already been observed in the laboratory several years ago \cite{L2005}, and we could possibly consider this to be evidence for the operator nature of time and for the ideas presented in this work. We predict that for time slit separations significantly larger than $10^{-16}$ s, interference in time will not be observed. Confirmation of this prediction will constitute experimental evidence for relativistic spontaneous localisation in operator space-time, and for collapse of the wave-function as the mechanism for emergence of space-time.
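The critical scale itself is elementary arithmetic; a minimal sketch with the GRW value $r_C=10^{-5}$~cm used above:

```python
# Critical time scale below which interference in time should survive.
c_cm_s = 2.998e10      # speed of light [cm/s]
r_c_cm = 1e-5          # GRW localisation length [cm]

t_c = r_c_cm / c_cm_s  # critical time-slit separation
print(t_c)             # ~3.3e-16 s, i.e. of order 1e-16 s
```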
Outstanding open challenges in this program are the generalisation to quantum field theory, and the inclusion of gravity. This is currently being attempted. There is perhaps a direct connection of this program with non-commutative differential geometry.
I would like to thank Angelo Bassi, Kinjalk Lochan, Hendrik Ulbricht, Bhavya Bhatt, Shounak De, Sandro Donadi, Priyanka Giri, Anirudh Gundhi, Navya Gupta, Manish, Ruchira Mishra, Shlok Nahar, Branislav Nikolic, Raj Patil and Anjali Ramesh for helpful discussions.
\section{Introduction}
\indent
The Bose gas is one of the simplest models in quantum statistical mechanics, and yet it has a rich and complex phenomenology.
As such, it has garnered much attention from the mathematical physics community for over half a century.
It consists in infinitely many identical Bosons and is used to model a wide range of physical systems, from photons in black body radiation to gasses of helium atoms.
Whereas photons do not directly interact with each other, helium atoms do, and such an interaction makes studying such systems very challenging.
To account for interactions between Bosons, Bogolyubov\-~\cite{Bo47} introduced a widely used approximation scheme that accurately predicts many observables\-~\cite{LHY57} {\it in the low density} regime.
Even though Bogolyubov theory is not mathematically rigorous, it has allowed mathematical physicists to develop the necessary intuition to prove a wide variety of results about the Bose gas, such as the low density expansion of the ground state energy of the Bose gas in the thermodynamic limit\-~\cite{Dy57,LY98,YY09,FS20,BCS21,FS22}, as well as many other results in scaling limits other than the thermodynamic limit (see\-~\cite{Sc22} for a review, as well as, among many others, \cite{LSY00,LS02,NRS16,BBe18,BBe19,DSY19,BBe20,DS20,NT21,BSS22,BSS22b,HST22,NNe22}).
In this note, we will focus on the ground state in the thermodynamic limit.
\bigskip
\indent
In 1963, E.H.\-~Lieb\-~\cite{Li63,LS64,LL64} introduced a new approximation scheme to compute properties of the ground state of Bose gasses, called the {\it Simplified approach}, which has recently been found to yield surprisingly accurate results\-~\cite{CJL20,CJL21,CHe21,Ja22}.
Indeed, while Bogolyubov theory is accurate at low densities, the Simplified approach has been shown to yield asymptotically accurate results at both {\it low and high} densities\-~\cite{CJL20,CJL21} for interaction potentials that are of positive type, as well as reproduce the qualitative behavior of the Bose gas at intermediate densities\-~\cite{CHe21}.
In addition to providing a promising tool to study the Bose gas, the derivation of the Simplified approach is different enough from Bogolyubov theory that it may give novel insights into longstanding open problems about the Bose gas.
\bigskip
\indent
The original derivation of the Simplified approach\-~\cite{Li63} is quite general, and applies to any translation invariant system (it even works for Coulomb\-~\cite{LS64} and hard-core\-~\cite{CHe21} interactions).
In the present paper, we extend this derivation to systems that break translation invariance.
This allows us to formulate the Simplified approach for systems with external potentials, and with a large class of boundary conditions.
In addition, it allows us to compute observables in systems with translation invariance, but whose computation requires breaking the translation invariance.
We will discuss an example of such an observable: the momentum distribution.
\bigskip
\indent
The momentum distribution $\mathcal M(k)$ is the probability of finding a particle in the state $e^{ikx}$.
Bose gasses are widely expected to form a Bose-Einstein condensate, although this has still not been proven (at least for continuum interacting gasses in the thermodynamic limit).
From a mathematical point of view, Bose-Einstein condensation is defined as follows: if the Bose gas consists of $N$ particles, the average number of particles in the constant state (corresponding to $k=0$ in $e^{ikx}$) is of order $N$.
The {\it condensate fraction} is defined as the proportion of particles in the constant state.
The momentum distribution is an extension of the condensate fraction to a more general family of states.
In particular, computing $\mathcal M(k)$ for $k\neq 0$ amounts to counting particles that are {\it not} in the condensate.
This quantity has been used in the recent proof\-~\cite{FS20,FS22} of the energy asymptotics of the Bose gas at low density.
\bigskip
\indent
The main results in this paper fall into two categories.
First, we will derive the Simplified approach without assuming translation invariance, see Theorem\-~\ref{theo:simple}.
To do so, we will make the so-called ``factorization assumption'' on the marginals of the ground state wavefunction, see Assumption\-~\ref{assum:factorization}.
This allows us to derive a Simplified approach for a wide variety of situations in which translation symmetry breaking is violated, such as in the presence of external potentials.
Second, we compute a prediction for the momentum distribution using the Simplified approach.
The Simplified approach does not allow us to compute the ground state wavefunction directly, so to compute observables, such as the momentum distribution, we use the Hellmann-Feynman technique and add an operator to the Hamiltonian.
In the case of the momentum distribution, this extra operator is a projector onto $e^{ikx}$, which breaks the translation invariance of the system.
In Theorem\-~\ref{theo:Nk}, we show how to compute the momentum distribution in the Simplified approach using the general result of Theorem\-~\ref{theo:simple}.
In addition, we check that the prediction is credible, by comparing it to the prediction of Bogolyubov theory, and find that both approaches agree at low densities and small $k$, see Theorem\-~\ref{theo:Nk_bog}.
\bigskip
\indent
The rest of the paper is structured as follows.
In Section\-~\ref{sec:model}, we specify the model and state the main results precisely.
We then prove Theorem\-~\ref{theo:simple} in Section\-~\ref{sec:simple}, Theorem\-~\ref{theo:Nk} in Section\-~\ref{sec:Nk_proof}, and Theorem\-~\ref{theo:Nk_bog} in Section\-~\ref{sec:Nk_bog}.
The proofs are largely independent and can be read in any order.
\bigskip
\section{The model and main results}\label{sec:model}
\indent
Consider $N$ Bosons in a box of volume $V$ denoted by $\Omega_V:=[-V^{\frac13}/2,V^{\frac13}/2]^3$, interacting with each other via a pair potential $v\in L_{1}(\Omega_V^2)$ that is symmetric under exchanges of particles: $v(x,y)\equiv v(y,x)$.
The Hamiltonian acts on $L_{2,\mathrm{sym}}(\Omega_V^N)$ as
\begin{equation}
\mathcal H:=
-\frac12\sum_{i=1}^N\Delta_i
+
\sum_{1\leqslant i<j\leqslant N}v(x_i,x_j)
+
\sum_{i=1}^N P_i
\label{ham}
\end{equation}
where $\Delta_i\equiv\partial_{x_i}^2$ is the Laplacian with respect to the position of the $i$-th particle and $P_i$ is an extra single-particle term of the following form: given a self-adjoint operator $\varpi$ on $L_2(\Omega_V)$,
\begin{equation}
P_i:=\mathds 1^{\otimes i-1}\otimes \varpi\otimes\mathds 1^{\otimes N-i}
.
\label{Ppi}
\end{equation}
For instance, if we take $\varpi$ to be a multiplication operator by a function $v_0$, then $\sum_i P_i$ is the contribution of the external potential $v_0$.
Or $\varpi$ could be a projector onto $e^{ikx}$, which is what we will do below to compute the momentum distribution.
Because $P_i$ acts on a single particle, it breaks translational symmetry as soon as it is not constant.
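The tensor-product structure (\ref{Ppi}) can be made concrete on a toy finite-dimensional one-particle space; the sketch below (the dimensions and the diagonal $\varpi$ are arbitrary choices for illustration) checks that $P_2=\mathds 1\otimes\varpi\otimes\mathds 1$ acts only on the second factor.

```python
import numpy as np

# Toy single-particle operator (e.g. a discretised external potential) on a
# 2-dimensional one-particle space, embedded at slot 2 of an N=3 tensor product.
varpi = np.diag([0.0, 1.0])
I = np.eye(2)

P_2 = np.kron(np.kron(I, varpi), I)      # acts only on particle 2

# On a product state u (x) w (x) u it reproduces u (x) (varpi w) (x) u:
u = np.array([1.0, 0.0])
w = np.array([0.6, 0.8])
lhs = P_2 @ np.kron(np.kron(u, w), u)
rhs = np.kron(np.kron(u, varpi @ w), u)
print(np.allclose(lhs, rhs))  # True
```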
\bigskip
\indent
We may impose any boundary condition on the box, as long as the Laplacian is self-adjoint.
We will consider the thermodynamic limit, in which $N,V\to\infty$, such that
\begin{equation}
\frac NV=\rho
\end{equation}
is fixed.
We consider the ground state $\psi_0$, which is the eigenfunction of $\mathcal H$ with the lowest eigenvalue $E_0$:
\begin{equation}
\mathcal H\psi_0=E_0\psi_0
.
\label{eigval}
\end{equation}
(It is a standard argument to prove that $\psi_0$ exists, and is both real and non-negative.)
\bigskip
\indent
In order to take the thermodynamic limit, we will assume that $v$ is uniformly integrable in $V$:
\begin{equation}
|v(x,y)|\leqslant \bar v(x,y)
,\quad
\int_{\mathbb R^3} dy\ \bar v(x,y)\leqslant c
\label{intv}
\end{equation}
where $\bar v$ and $c$ are independent of $V$.
In addition, we assume that, for any $f$ that is uniformly integrable in $V$,
\begin{equation}
\int dx\ \varpi f(x)\leqslant c
.
\label{bound_varpi}
\end{equation}
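\bigskip
\indent
As a quick check, both examples of $\varpi$ given above satisfy\-~(\ref{bound_varpi}).
If $\varpi$ is multiplication by a bounded external potential $v_0$, then
\begin{equation}
\int dx\ \varpi f(x)=\int dx\ v_0(x)f(x)\leqslant\|v_0\|_{L_\infty}\int dx\ |f(x)|
\end{equation}
which is bounded uniformly in $V$ by the uniform integrability of $f$.
If, instead, $\varpi$ is the projector onto $e^{ikx}$ with $k\neq0$ in the dual lattice of the box with periodic boundary conditions, then $\int dx\ e^{ikx}=0$, so the left side of\-~(\ref{bound_varpi}) vanishes.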
\bigskip
\subsection{The Simplified approach without translation invariance}\label{sec:general}
\indent
The crucial idea of Lieb's construction\-~\cite{Li63} is to consider the wave function $\psi$ as a probability distribution, instead of the usual $|\psi|^2$.
Since $\psi\geqslant 0$, this can be done by normalizing $\psi$ by its $L_1$ norm.
We then define the $i$-th marginal of $\psi$ as
\begin{equation}
g_i(x_1,\cdots,x_i)
:=
\frac{\int\frac{dx_{i+1}}V\cdots\frac{dx_N}V\ \psi(x_1,\cdots,x_N)}{\int\frac{dy_{1}}V\cdots\frac{dy_N}V\ \psi(y_1,\cdots,y_N)}
\equiv
V^i\frac{\int dx_{i+1}\cdots dx_N\ \psi(x_1,\cdots,x_N)}{\int dy_{1}\cdots dy_N\ \psi(y_1,\cdots,y_N)}
.
\label{gdef}
\end{equation}
In particular, for $i\in\{2,\cdots,N\}$,
\begin{equation}
\int\frac{dx_i}V\ g_i(x_1,\cdots,x_i)=g_{i-1}(x_1,\cdots,x_{i-1})
,\quad
\int\frac{dx}V\ g_1(x)=1
.
\label{grec}
\end{equation}
Because of the symmetry of $\psi$ under exchanges of particles, $g_i$ is symmetric under $x_i\leftrightarrow x_j$.
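\bigskip
\indent
The recursion\-~(\ref{grec}) follows directly from\-~(\ref{gdef}): for $i\in\{2,\cdots,N\}$,
\begin{equation}
\int\frac{dx_i}V\ g_i(x_1,\cdots,x_i)
=
V^{i-1}\frac{\int dx_{i}\cdots dx_N\ \psi(x_1,\cdots,x_N)}{\int dy_{1}\cdots dy_N\ \psi(y_1,\cdots,y_N)}
=
g_{i-1}(x_1,\cdots,x_{i-1})
\end{equation}
and the normalization of $g_1$ follows by integrating once more.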
\bigskip
\indent
Inspired by\-~\cite{Li63}, we will make the following approximation.
\bigskip
\theoname{Assumption}{Factorization}\label{assum:factorization}
For $i=2,3,4$,
\begin{equation}
g_i(x_1,\cdots,x_i)
=
\prod_{1\leqslant j<l\leqslant i}
W_i(x_j,x_l)
\label{g_factorized}
\end{equation}
with
\begin{equation}
W_i(x,y)=f_i(x)f_i(y)(1-u_i(x,y))
\label{W_fact}
\end{equation}
in which $f_i$ and $u_i$ are bounded independently of $V$ and $u_i$ is uniformly integrable in $V$:
\begin{equation}
|u_i(x,y)|\leqslant\bar u_i(x,y)
,\quad
\int dy\ \bar u_i(x,y)\leqslant c_i
\label{assum_bound}
\end{equation}
with $c_i$ independent of $V$.
We further assume that, for $i=1,2,3$,
\begin{equation}
\lim_{V\to\infty}\int dx_i\ \Delta_{x_i} g_i(x_1,\cdots,x_i)=0
\end{equation}
in other words, these boundary terms vanish in the thermodynamic limit.
\endtheo
\bigskip
The assumption thus states that $g_i$ factorizes exactly as a product of pair terms $W_i$.
The $f_i$ in $W_i$ allow for $W_i$ to be modulated by a slowly varying density, which is the main novelty of this paper compared to\-~\cite{Li63}.
The inequality\-~(\ref{assum_bound}) ensures that $u_i$ decays sufficiently fast on the microscopic scale.
Note that, by the symmetry under exchanges of particles, $u_i(x,y)\equiv u_i(y,x)$.
\bigskip
\indent
Here, we use the term ``assumption'' because it leads to the Simplified approach.
However, it is really an {\it approximation} rather than an assumption: this factorization will certainly not hold true exactly.
At best, one might expect that the assumption holds approximately in the limit of small and large $\rho$, and for distant points, as numerical evidence suggests in the translation invariant case.
In the present paper, we will not attempt a proof that this approximation is accurate, and instead explore its consequences.
Suffice it to say that this approximation is one of {\it statistical independence} that is reminiscent of phenomena arising in statistical mechanics when the density is low, that is, when the interparticle distances are large.
In the current state of the art, we do not have much in the way of an explanation for why this statistical independence should hold; instead, we have extensive evidence, both numerical\-~\cite{CHe21} and analytical\-~\cite{CJL20,CJL21}, that this approximation leads to very accurate predictions.
\bigskip
\indent
The equations of the Simplified approach are derived from Assumption\-~\ref{assum:factorization}, using the eigenvalue equation\-~(\ref{eigval}) along with
\begin{equation}
\int\frac{dx}V\ g_1(x)=1
\label{g11}
\end{equation}
\begin{equation}
\int\frac{dy}V\ g_2(x,y)=g_1(x)
\label{g2g1}
\end{equation}
\begin{equation}
\int\frac{dz}V\ g_3(x,y,z)=g_2(x,y)
\label{g3g2}
\end{equation}
\begin{equation}
\int\frac{dz}V\frac{dt}V\ g_4(x,y,z,t)=g_2(x,y)
\label{g4g2}
\end{equation}
(all of which follow from\-~(\ref{grec})) to compute $u_i$ and $f_i$.
\bigskip
\indent
In the translation invariant case, the factorization assumption leads to an equation for $g_2$ alone, as $g_1$ is constant.
When translation invariance is broken, $g_1$ is no longer constant, and the Simplified approach consists in two coupled equations for $g_1$ and $g_2$.
We formulate these in terms of $g_1$ and $u_2$, with
\begin{equation}
g_2(x,y)=:g_1(x)g_1(y)(1-u_2(x,y))
.
\end{equation}
\bigskip
\theo{Theorem}\label{theo:simple}
If $g_i$ satisfies Assumption\-~\ref{assum:factorization}, the eigenvalue equation\-~(\ref{eigval}) and\-~(\ref{g11})-(\ref{g4g2}), then $g_1$ and $u_2$ satisfy the two coupled equations
\begin{equation}
\left(
-\frac\Delta 2
+\left(\varpi-\left<\varpi\right>\right)
+2\left(\mathcal E(x)-\left<\mathcal E(y)\right>\right)
+\frac12\left(\bar A(x)-\left<\bar A\right>-\bar C(x)\right)
\right)g_1(x)
+\Sigma_1(x)
=0
\label{compleq_g1}
\end{equation}
and
\begin{equation}
\begin{largearray}
\left(-\frac12(\Delta_x+\Delta_y)+v(x,y)-2\rho \bar K(x,y)+\rho^2\bar L(x,y)+\bar R_2(x,y)\right)
g_1(x)g_1(y)(1-u_2(x,y))
+\\\hfill+
\Sigma_2(x,y)=0
\label{compleq_g2}
\end{largearray}
\end{equation}
where
\begin{equation}
\left<f\right>:=\int\frac{dy}V\ g_1(y)f(y)
,\quad
\left<\varpi\right>\equiv \int\frac{dy}V\ \varpi g_1(y)
\label{avgdef}
\end{equation}
\begin{equation}
\bar S(x,y):=v(x,y)(1-u_2(x,y))
,\quad
f_1\bar\ast f_2(x,y):=\int dz\ g_1(z)f_1(x,z)f_2(z,y)
\end{equation}
\begin{equation}
\mathcal E(x):=
\frac\rho2\int dy\ g_1(y)\bar S(x,y)
,\quad
\bar A(x):=
\rho^2\bar S\bar\ast u_2\bar\ast u_2(x,x)
\label{EA}
\end{equation}
\begin{equation}
\bar C(x):=
2\rho^2\int dz\ g_1(z)u_2\bar\ast\bar S(x,z)
+2\rho\int dy\ \varpi_y(g_1(y)u_2(x,y))
.
\label{C}
\end{equation}
\begin{equation}
\bar K(x,y)
:=
\bar S\bar\ast u_2(x,y)
\end{equation}
\begin{equation}
\begin{largearray}
\bar L(x,y)
:=
\bar S\bar\ast u_2\bar\ast u_2(x,y)
-2u_2\bar\ast(u_2(u_2\bar\ast\bar S))(x,y)
+\\\hfill+
\frac12\int dzdt\ g_1(z)g_1(t)\bar S(z,t) u_2(x,z)u_2(x,t)u_2(y,z)u_2(y,t)
\end{largearray}
\end{equation}
\begin{equation}
\begin{array}{r@{\ }>\displaystyle l}
\bar R_2(x,y)
=&
2\left(\mathcal E(x)+\mathcal E(y)-2\left<\mathcal E\right>\right)
+\left(\varpi_x+\varpi_y-2\left<\varpi\right>\right)
+\\[0.3cm]&+
\frac12\left(\bar A(x)+\bar A(y)-2\left<\bar A\right>-\bar C(x)-\bar C(y)\right)
+2\rho u_2\bar\ast\left(u_2(\mathcal E-\left<\mathcal E\right>)\right)
+\\[0.3cm]&+
\rho\int dz\ \varpi_z(g_1(z)u_2(x,z)u_2(y,z))
-
\rho u_2\bar\ast u_2\left<\varpi\right>
\label{R}
\end{array}
\end{equation}
in which $\varpi_x$ is the action of $\varpi$ on the $x$-variable, and similarly for $\varpi_y$
and
\begin{equation}
\Sigma_i\mathop{\longrightarrow}_{V\to\infty}0
\end{equation}
pointwise.
Furthermore, the prediction for the energy per particle is
\begin{equation}
e:=\left<\mathcal E\right>+\left<\varpi\right>+\Sigma_0
\label{simplen}
\end{equation}
where $\Sigma_0\to0$ as $V\to\infty$.
\endtheo
\bigskip
This theorem is proved in Section\-~\ref{sec:simple}.
\bigskip
\indent
Let us compare this to the equation for $u$ in the Simplified approach in the translation invariant case\-~\cite[(5)]{CHe21}, \cite[(3.15)]{Ja22}:
\begin{equation}
-\Delta u(x)
=
(1-u(x))\left(v(x)-2\rho K(x)+\rho^2 L(x)\right)
\label{compleq}
\end{equation}
\begin{equation}
K:=
u\ast S
,\quad
S(y):=(1-u(y))v(y)
\label{K}
\end{equation}
\nopagebreakaftereq
\begin{equation}
L:=
u\ast u\ast S
-2u\ast(u(u\ast S))
+\frac12
\int dydz\ u(y)u(z-x)u(z)u(y-x)S(z-y)
.
\label{L}
\end{equation}
We will prove that these follow from Theorem\-~\ref{theo:simple}:
\bigskip
\theoname{Corollary}{Translation invariant case}\label{cor:check}
In the translation invariant case $v(x,y)\equiv v(x-y)$ and $\varpi=0$ with periodic boundary conditions, if the system\-~(\ref{compleq_g1})-(\ref{compleq_g2}) has a unique translation invariant solution, then (\ref{compleq_g2}) reduces to\-~(\ref{compleq}) in the thermodynamic limit.
\endtheo
\bigskip
\indent
The idea of the proof is quite straightforward.
Equation\-~(\ref{compleq_g2}) is very similar to\-~(\ref{compleq}), but for the addition of the extra term $\bar R_2$.
An inspection of\-~(\ref{R}) shows that the terms in $\bar R_2$ are mostly of the form $f-\left<f\right>$, which vanish in the translation invariant case, and terms involving $\varpi$, which is set to 0 in the translation invariant case.
The only remaining extra term is $\bar C(x)+\bar C(y)$, which we will show vanishes in the translation invariant case due to the identity\-~(\ref{g2g1}).
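\bigskip
\indent
In a nutshell: in the translation invariant case, $g_1\equiv1$ by\-~(\ref{g11}), so\-~(\ref{g2g1}) implies $\int dy\ u_2(x,y)=0$, and, since $\varpi=0$,
\begin{equation}
\bar C(x)
=2\rho^2\int dzdt\ u_2(x,t)\bar S(t,z)
=2\rho^2\left(\int dt\ u_2(x,t)\right)\left(\int dz\ \bar S(z)\right)
=0
\end{equation}
where we used the translation invariance of $\bar S$ to factorize the double integral.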
\bigskip
\indent
Theorem\-~\ref{theo:simple} is quite general, and can be used to study a trapped Bose gas, in which there is an external potential $v_0$.
In this case, $\varpi$ is a multiplication operator by $v_0$.
A natural approach is to scale $v_0$ with the volume: $v_0(x)=\bar v_0(V^{-1/3}x)$ in such a way that the size of the trap grows as $V\to\infty$, thus ensuring a finite local density in the thermodynamic limit.
Following the ideas of Gross and Pitaevskii\-~\cite{Gr61,Pi61}, we would then expect to find that\-~(\ref{compleq_g1}) and\-~(\ref{compleq_g2}) decouple, and that (\ref{compleq_g2}) reduces to the translation invariant equation\-~(\ref{compleq}), with a density that is modulated over the trap.
However, the presence of $\bar R_2$ in\-~(\ref{compleq_g2}) and $\bar C$ in\-~(\ref{compleq_g1}) breaks this picture.
Further investigation of this question is warranted.
\subsection{The momentum distribution}\label{sec:Nk}
\indent
The momentum distribution for the Bose gas is defined as
\begin{equation}
\mathcal M^{(\mathrm{Exact})}(k):=\frac1N\sum_{i=1}^N\left<\psi_0\right|P_i\left|\psi_0\right>
\label{Mdef}
\end{equation}
where
\begin{equation}
\varpi f:=\epsilon| e^{ikx}\big>\big< e^{ikx}|f
\equiv
\epsilon e^{ikx}\int dy\ e^{-iky}f(y)
\label{varpiNk}
\end{equation}
and $P_i$ is defined as in\-~(\ref{Ppi}):
\begin{equation}
P_i\psi(x_1,\cdots,x_N)= \epsilon e^{ikx_i}\int dy_i\ e^{-iky_i}\psi(x_1,\cdots,x_{i-1},y_i,x_{i+1},\cdots,x_N)
.
\end{equation}
Equivalently,
\begin{equation}
\mathcal M^{(\mathrm{Exact})}(k)=\frac\partial{\partial\epsilon}\left. \frac{E_0}N\right|_{\epsilon=0}
\end{equation}
where $E_0$ is the energy in\-~(\ref{eigval}) for the Hamiltonian\-~(\ref{ham}).
Using the Simplified approach, we do not have access to the ground state wavefunction, so we cannot compute $\mathcal M$ using\-~(\ref{Mdef}).
Instead, we use the Hellmann-Feynman theorem, which consists in adding $\sum_iP_i$ to the Hamiltonian.
However, doing so breaks the translational symmetry.
This is why Theorem\-~\ref{theo:simple} is needed to compute the momentum distribution.
(A similar computation was done in\-~\cite{CHe21}, but, there, the derivation of the momentum distribution for the Simplified approach was taken for granted.)
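\bigskip
\indent
Let us spell this out: by the Hellmann-Feynman theorem, if $\psi_0$ is the normalized ground state of $\mathcal H$ with energy $E_0$, then
\begin{equation}
\frac\partial{\partial\epsilon}\frac{E_0}N
=
\frac1N\sum_{i=1}^N\left<\psi_0\right|\frac{\partial P_i}{\partial\epsilon}\left|\psi_0\right>
=
\frac1{N\epsilon}\sum_{i=1}^N\left<\psi_0\right|P_i\left|\psi_0\right>
\end{equation}
since each $P_i$ is linear in $\epsilon$, see\-~(\ref{varpiNk}), which relates the $\epsilon$-derivative of the energy to the expectation in\-~(\ref{Mdef}).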
\bigskip
\indent
By Theorem\-~\ref{theo:simple}, and, in particular, (\ref{simplen}), we obtain a natural definition of the prediction of the Simplified approach for the momentum distribution:
\begin{equation}
\mathcal M(k):=\frac{\partial}{\partial\epsilon}\left.\left(\left<\mathcal E\right>+\left<\varpi\right>\right)\right|_{\epsilon=0}
.
\end{equation}
\bigskip
\theoname{Theorem}{Momentum distribution}\label{theo:Nk}
Under the assumptions of Theorem\-~\ref{theo:simple}, using periodic boundary conditions, if $v$ is translation invariant ($v(x,y)\equiv v(x-y)$) and $\varpi$ is given by\-~(\ref{varpiNk}), then, if $k\neq 0$, in the thermodynamic limit,
\begin{equation}
\mathcal M(k)=\frac{\partial}{\partial\epsilon}\left.\frac\rho2\int dx\ (1-u(x))v(x)\right|_{\epsilon=0}
\end{equation}
where
\begin{equation}
-\Delta u(x)=(1-u(x))\left(v(x)-2\rho K(x)+\rho^2L(x)\right)+\epsilon F(x)
\end{equation}
in which $K$ and $L$ are those of the translation invariant Simplified approach\-~(\ref{K})-(\ref{L}) and
\begin{equation}
F(x):=-2\hat u(-k)\cos(kx)
.
\label{F}
\end{equation}
\endtheo
\bigskip
\indent
Theorem\-~\ref{theo:Nk} thus provides an explicit means to compute the momentum distribution.
To check that our prediction is plausible, we compare it to the Bogolyubov prediction, which can easily be derived from\-~\cite[Appendix\-~A]{LSe05}:
\begin{equation}
\mathcal M^{(\mathrm{Bogolyubov})}(k)=-\frac1{2\rho}\left(1-\frac{k^2+2\rho\hat v(k)}{\sqrt{k^4+4k^2\rho\hat v(k)}}\right)
\end{equation}
(this can be obtained by differentiating\-~\cite[(A.26)]{LSe05} with respect to $\epsilon(k)$, which returns the number of particles in the state $e^{ikx}$, which we divide by $\rho$ to obtain the momentum distribution).
Actually, following the ideas of\-~\cite{LHY57}, we replace $\hat v$ by a so-called ``pseudopotential'', which consists in replacing $v$ by a Dirac delta function, while preserving the scattering length:
\begin{equation}
\hat v(k)=4\pi a
\end{equation}
where the scattering length $a$ is defined in\-~\cite[Appendix\-~C]{LSe05}.
Thus,
\begin{equation}
\mathcal M^{(\mathrm{Bogolyubov})}(k)=-\frac1{2\rho}\left(1-\frac{k^2+8\pi\rho a}{\sqrt{k^4+16\pi k^2\rho a}}\right)
.
\label{Mbog}
\end{equation}
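\bigskip
\indent
As a quick sanity check of\-~(\ref{Mbog}), note that, for $|k|\gg\sqrt{\rho a}$, setting $t:=8\pi\rho a/k^2$,
\begin{equation}
\frac{k^2+8\pi\rho a}{\sqrt{k^4+16\pi k^2\rho a}}
=\frac{1+t}{\sqrt{1+2t}}
=1+\frac{t^2}2+O(t^3)
\end{equation}
so
\begin{equation}
\mathcal M^{(\mathrm{Bogolyubov})}(k)
=\frac{16\pi^2\rho a^2}{k^4}(1+O(t))
\end{equation}
which reproduces the well-known $|k|^{-4}$ decay of the momentum distribution at large $|k|$.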
\bigskip
\indent
We prove that, for the simple equation, as $\rho\to0$, the prediction for the momentum distribution coincides with Bogolyubov's, for $|k|\lesssim\sqrt{\rho a}$.
The length scale $1/\sqrt{\rho a}$ is called the {\it healing length}, and is the distance at which pairs of particles correlate\-~\cite{FS20}.
It is reasonable to expect the Bogolyubov approximation to break down beyond this length scale.
\bigskip
\indent
The momentum distribution for the simple equation, following the prescription detailed in\-~\cite{CJL20,CJL21,CHe21,Ja22}, is defined as
\begin{equation}
\mathcal M^{(\mathrm{simpleq})}(k)=\frac{\partial}{\partial\epsilon}\left.\frac\rho2\int dx\ (1-u(x))v(x)\right|_{\epsilon=0}
\label{M_simpleq}
\end{equation}
where\-~\cite[(1.1)-(1.2)]{CJL20}
\begin{equation}
-\Delta u(x)=(1-u(x))v(x)-4eu+2\rho e u\ast u+\epsilon F(x)
,\quad
e:=\frac\rho2\int dx\ (1-u(x))v(x)
\label{simpleq}
\end{equation}
where $F$ was defined in\-~(\ref{F}).
\bigskip
\theo{Theorem}\label{theo:Nk_bog}
Assume that $v$ is translation and rotation invariant ($v(x,y)\equiv v(|x-y|)$), and consider periodic boundary conditions.
We rescale $k$:
\begin{equation}
\kappa:=\frac{k}{2\sqrt e}
\end{equation}
Then, for all $\kappa\in\mathbb R^3$,
\begin{equation}
\lim_{e\to0}\rho\mathcal M^{(\mathrm{simpleq})}(2\sqrt e\kappa)
=\lim_{e\to0}\rho\mathcal M^{(\mathrm{Bogolyubov})}(2\sqrt e\kappa)
=-\frac12\left(1-\frac{\kappa^2+1}{\sqrt{(\kappa^2+1)^2-1}}\right)
.
\label{Msimpleqbog}
\end{equation}
\endtheo
\bigskip
\indent
The rotation invariance of $v$ is presumably not necessary.
However, the proof of this theorem is based on\-~\cite{CJL21}, where rotational symmetry was assumed for convenience.
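\bigskip
\indent
To see how the right side of\-~(\ref{Msimpleqbog}) arises on the Bogolyubov side, recall that, in the units used here, $e=2\pi\rho a(1+o(1))$ as $\rho\to0$, so, with $k=2\sqrt e\kappa$,
\begin{equation}
k^2+8\pi\rho a=4e\left(\kappa^2+1\right)(1+o(1))
,\quad
k^4+16\pi k^2\rho a=16e^2\left((\kappa^2+1)^2-1\right)(1+o(1))
\end{equation}
and inserting these into\-~(\ref{Mbog}) yields the limit in\-~(\ref{Msimpleqbog}).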
\section{The Simplified approach without translation invariance, proof of Theorem \expandonce{\ref{theo:simple}}}\label{sec:simple}
\subsection{Factorization}
\indent
We will first compute $f_i$ and $u_i$ in Assumption\-~\ref{assum:factorization}.
\bigskip
\subsubsection{Factorization of $g_2$}
\indent
We start by considering $g_2$.
\bigskip
\theo{Lemma}\label{lemma:g2}
Assumption\-~\ref{assum:factorization} with $i=2$ and\-~(\ref{g11})-(\ref{g2g1}) imply that
\begin{equation}
g_2(x,y)=g_1(x)g_1(y)(1-u_2(x,y))(1+O(V^{-2}))
.
\end{equation}
\endtheo
\indent\underline{Proof}:
Assumption\-~\ref{assum:factorization} implies
\begin{equation}
g_2(x,y)=f_2(x)f_2(y)(1-u_2(x,y))
\end{equation}
and by\-~(\ref{g2g1}),
\begin{equation}
g_1(x)=f_2(x)\int \frac{dy}V f_2(y)(1-u_2(x,y))
.
\label{g1_fact}
\end{equation}
\bigskip
\point
Let us first take an expansion to order $V^{-1}$.
By~\-(\ref{assum_bound})
\begin{equation}
\int\frac{dy}V\ f_2(y)u_2(x,y)=O(V^{-1})
\end{equation}
and so
\begin{equation}
g_1(x)=f_2(x)\left(\int\frac{dy}V f_2(y)+O(V^{-1})\right)
.
\label{g1f2}
\end{equation}
Applying $\int\frac{dx}V\cdot$ to both sides of\-~(\ref{g1f2}), we find that
\begin{equation}
\int\frac{dy}Vf_2(y)=1+O(V^{-1})
\end{equation}
so\-~(\ref{g1f2}) yields
\begin{equation}
f_2(x)=g_1(x)(1+O(V^{-1}))
.
\label{f1V1}
\end{equation}
\bigskip
\point
We now push the expansion to order $V^{-2}$.
Inserting\-~(\ref{f1V1}) into\-~(\ref{g1_fact}),
\begin{equation}
g_1(x)=f_2(x)\int\frac{dy}V\ f_2(y)-g_1(x)\left(\int\frac{dy}V\ g_1(y)u_2(x,y)+O(V^{-2})\right)
.
\end{equation}
However, by\-~(\ref{g2g1}),
\begin{equation}
g_1(x)\int\frac{dy}V\ g_1(y)(1-u_2(x,y))=g_1(x)
\end{equation}
so, by\-~(\ref{g11}),
\begin{equation}
\int dy\ g_1(y)u_2(x,y)=0
\label{intu0}
\end{equation}
and
\begin{equation}
g_1(x)(1+O(V^{-2}))=f_2(x)\int\frac{dy}V\ f_2(y)
.
\end{equation}
Taking $\int\frac{dx}V\cdot$ on both sides, we find that
\begin{equation}
f_2(x)=g_1(x)(1+O(V^{-2}))
.
\end{equation}
\qed
\bigskip
{\bf Remark}:
Note that this proof can easily be generalized to show that $f_2=g_1(1+O(V^{-n}))$ for any $n$.
\subsubsection{Factorization of $g_3$}
\indent
We now turn to $g_3$.
\bigskip
\theo{Lemma}\label{lemma:g3}
Assumption\-~\ref{assum:factorization} with $i=2,3$ and\-~(\ref{g11})-(\ref{g3g2}) imply that
\begin{equation}
g_3(x,y,z)=g_1(x)g_1(y)g_1(z)(1-u_3(x,y))(1-u_3(x,z))(1-u_3(y,z))(1+O(V^{-2}))
\end{equation}
with
\begin{equation}
u_3(x,y):=u_2(x,y)+\frac{w_3(x,y)}V
\label{u3}
\end{equation}
\begin{equation}
w_3(x,y):=(1-u_2(x,y))\int dz\ g_1(z)u_2(x,z)u_2(y,z)
.
\label{w3}
\end{equation}
\endtheo
\bigskip
\indent\underline{Proof}:
Using\-~(\ref{g3g2}) in\-~(\ref{g_factorized}),
\begin{equation}
g_2(x_1,x_2)=W_3(x_1,x_2)
\int \frac{dx_3}V\ W_3(x_1,x_3)W_3(x_2,x_3)
.
\label{g2_factor_inproof}
\end{equation}
\bigskip
\point
We first expand to order $V^{-1}$.
By\-~(\ref{assum_bound}),
\begin{equation}
\int\frac{dz}Vf_3^2(z)u_3(x,z)=O(V^{-1})
\label{f3V1}
\end{equation}
so, by\-~(\ref{W_fact}),
\begin{equation}
g_2(x,y)=f_3^2(x)f_3^2(y)(1-u_3(x,y))
\left(\int \frac{dz}V\ f_3^2(z)
+O(V^{-1})\right)
.
\end{equation}
By Lemma~\-\ref{lemma:g2},
\begin{equation}
g_1(x)g_1(y)(1-u_2(x,y))=f_3^2(x)f_3^2(y)(1-u_3(x,y))\left(\int\frac{dz}V\ f_3^2(z)+O(V^{-1})\right)
.
\end{equation}
We take $\int\frac{dy}V\cdot$ on both sides of this equation.
By\-~(\ref{intu0}) and\-~(\ref{f3V1}),
\begin{equation}
g_1(x)=f_3^2(x)\left(\left(\int\frac{dy}Vf_3^2(y)\right)^2+O(V^{-1})\right)
\end{equation}
and integrating once more implies that $\int\frac{dy}V\ f_3^2(y)=1+O(V^{-1})$.
Therefore,
\begin{equation}
f_3^2(x)=g_1(x)(1+O(V^{-1}))
\label{3fV}
\end{equation}
and
\begin{equation}
u_3(x,y)=u_2(x,y)(1+O(V^{-1}))
.
\label{3V}
\end{equation}
\bigskip
\point
We push the expansion to order $V^{-2}$: (\ref{g2_factor_inproof}) is
\begin{equation}
g_2(x,y)=f_3^2(x)f_3^2(y)(1-u_3(x,y))\int\frac{dz}{V}f_3^2(z)
\left(
1
-u_3(x,z)-u_3(y,z)
+u_3(x,z)u_3(y,z)
\right)
.
\end{equation}
By\-~(\ref{3fV})-(\ref{3V}) and Lemma\-~\ref{lemma:g2},
\begin{equation}
\begin{largearray}
f_3^2(x)f_3^2(y)(1-u_3(x,y))\int\frac{dz}{V}f_3^2(z)
=g_1(x)g_1(y)(1-u_2(x,y))
\cdot\\[0.3cm]\hfill\cdot
\left(1+\int\frac{dz}{V}\ (g_1(z)(u_2(x,z)+u_2(y,z)-u_2(x,z)u_2(y,z)))+O(V^{-2})\right)
.
\end{largearray}
\end{equation}
Therefore, by\-~(\ref{intu0}),
\begin{equation}
\begin{largearray}
f_3^2(x)f_3^2(y)(1-u_3(x,y))\int\frac{dz}{V}f_3^2(z)=g_1(x)g_1(y)(1-u_2(x,y))
\cdot\\\hfill\cdot
\left(1-\int\frac{dz}{V}g_1(z)u_2(x,z)u_2(y,z)+O(V^{-2})\right)
.
\end{largearray}
\end{equation}
Now, let us apply $\int\frac{dy}V\cdot$ to both sides of the equation.
Note that, by\-~(\ref{assum_bound}),
\begin{equation}
\int\frac{dy}V\ g_1(y)u_2(x,y)\int\frac{dz}Vg_1(z)u_2(x,z)u_2(y,z)=O(V^{-2})
.
\label{tech1}
\end{equation}
Furthermore, by\-~(\ref{intu0}),
\begin{equation}
\int \frac{dy}V\ g_1(y)u_2(x,y)=0
,\quad
\int\frac{dy}V\ g_1(y)\int\frac{dz}V\ g_1(z)u_2(x,z)u_2(y,z)=0
\end{equation}
and by\-~(\ref{3fV}) and\-~(\ref{3V}),
\begin{equation}
\int\frac{dy}V\ f_3^2(y)u_3(x,y)=\int\frac{dy}V\ g_1(y)u_2(x,y)+O(V^{-2})=O(V^{-2})
.
\label{tech2}
\end{equation}
We are thus left with
\begin{equation}
f_3^2(x)\left(\int\frac{dy}V\ f_3^2(y)\right)^2
=
g_1(x)(1+O(V^{-2}))
.
\end{equation}
Taking $\int\frac{dx}V\cdot$, we thus find that
\begin{equation}
\left(\int\frac{dx}V f_3^2(x)\right)^3=1+O(V^{-2})
\end{equation}
and
\begin{equation}
f_3^2(x)=g_1(x)(1+O(V^{-2}))
.
\end{equation}
Therefore,
\begin{equation}
1-u_3(x,y)=(1-u_2(x,y))\left(1-\frac1V\int dz\ g_1(z)u_2(x,z)u_2(y,z)+O(V^{-2})\right)
.
\end{equation}
\qed
\subsubsection{Factorization of $g_4$}
\theo{Lemma}\label{lemma:g4}
Assumption\-~\ref{assum:factorization} and\-~(\ref{g11})-(\ref{g4g2}) imply that
\begin{equation}
g_4(x_1,x_2,x_3,x_4)=
g_1(x_1)g_1(x_2)g_1(x_3)g_1(x_4)
\left(\prod_{i<j}(1-u_4(x_i,x_j))\right)
(1+O(V^{-2}))
\end{equation}
with
\begin{equation}
u_4(x,y):=u_2(x,y)+\frac{2w_3(x,y)}V
\label{u4}
\end{equation}
where $w_3$ is the same as in Lemma\-~\ref{lemma:g3}.
\endtheo
\bigskip
\indent\underline{Proof}:
Using\-~(\ref{g4g2}) in\-~(\ref{g_factorized}),
\begin{equation}
g_2(x_1,x_2)=W_4(x_1,x_2)\int\frac{dx_3dx_4}{V^2}\
W_4(x_1,x_3)
W_4(x_1,x_4)
W_4(x_2,x_3)
W_4(x_2,x_4)
W_4(x_3,x_4)
.
\end{equation}
\bigskip
\point
We expand to order $V^{-1}$.
By\-~(\ref{assum_bound}),
\begin{equation}
\int\frac{dz}Vf_4^3(z)u_4(x,z)=O(V^{-1})
\label{f4V1}
\end{equation}
so by\-~(\ref{W_fact}),
\begin{equation}
g_2(x,y)=f_4^3(x)f_4^3(y)(1-u_4(x,y))\left(\int\frac{dzdt}{V^2}f_4^3(z)f_4^3(t)+O(V^{-1})\right)
.
\end{equation}
By Lemma\-~\ref{lemma:g2},
\begin{equation}
g_1(x)g_1(y)(1-u_2(x,y))=
f_4^3(x)f_4^3(y)(1-u_4(x,y))\left(\left(\int\frac{dz}{V}f_4^3(z)\right)^2+O(V^{-1})\right)
.
\end{equation}
Applying $\int\frac{dy}V\cdot$ to both sides of the equation, using\-~(\ref{intu0}) and\-~(\ref{f4V1}),
\begin{equation}
g_1(x)=f_4^3(x)\left(\left(\int\frac{dy}V\ f_4^3(y)\right)^3+O(V^{-1})\right)
.
\end{equation}
Integrating once more, we have $\int\frac{dy}V\ f_4^3(y)=1+O(V^{-1})$ and
\begin{equation}
f_4^3(x)=g_1(x)(1+O(V^{-1}))
.
\label{4fV}
\end{equation}
Therefore,
\begin{equation}
u_4(x,y)=u_2(x,y)(1+O(V^{-1}))
.
\label{4V}
\end{equation}
\bigskip
\point
We push the expansion to order $V^{-2}$:
by\-~(\ref{assum_bound}),
\begin{equation}
\int \frac{dzdt}{V^2}u_4(x,z)u_4(y,t)=O(V^{-2})
,\quad
\int \frac{dzdt}{V^2}u_4(x,z)u_4(z,t)=O(V^{-2})
\end{equation}
\begin{equation}
\int \frac{dzdt}{V^2}u_4(x,z)u_4(x,t)=O(V^{-2})
\end{equation}
so
\begin{equation}
\begin{largearray}
g_2(x,y)=f_4^3(x)f_4^3(y)(1-u_4(x,y))
\left(\int\frac{dzdt}{V^2}
f_4^3(z)f_4^3(t)
+\right.\\[0.5cm]\hfill\left.+
\int\frac{dzdt}{V^2}
g_1(z)g_1(t)(-2u_2(x,z)-2u_2(y,z)-u_2(z,t)+2u_2(x,z)u_2(y,z))
+O(V^{-2})
\right)
.
\end{largearray}
\end{equation}
By\-~(\ref{4fV}), (\ref{4V}), and Lemma\-~\ref{lemma:g2},
\begin{equation}
\begin{largearray}
f_4^3(x)f_4^3(y)(1-u_4(x,y))\left(\int\frac{dz}V\ f_4^3(z)\right)^2
=
g_1(x)g_1(y)(1-u_2(x,y))
\cdot\\[0.5cm]\hfill\cdot
\left(1+
\int\frac{dzdt}{V^2}\ g_1(z)g_1(t)(2u_2(x,z)+2u_2(y,z)+u_2(z,t)-2u_2(x,z)u_2(y,z))
+O(V^{-2})
\right)
.
\end{largearray}
\end{equation}
By~\-(\ref{intu0}),
\begin{equation}
\begin{largearray}
f_4^3(x)f_4^3(y)(1-u_4(x,y))\left(\int\frac{dz}V\ f_4^3(z)\right)^2
=\\[0.3cm]\hfill=
g_1(x)g_1(y)(1-u_2(x,y))\left(1-2\int\frac{dz}{V}g_1(z)u_2(x,z)u_2(y,z)+O(V^{-2})\right)
.
\end{largearray}
\end{equation}
We apply $\int\frac{dy}V\cdot$ to both sides of the equation.
By\-~(\ref{tech1})-(\ref{tech2}), we find
\begin{equation}
f_4^3(x)\left(\int\frac{dy}V\ f_4^3(y)\right)^3=g_1(x)(1+O(V^{-2}))
.
\end{equation}
Taking $\int\frac{dx}V\cdot$, we find that
\begin{equation}
\int\frac{dx}V\ f_4^3(x)=1+O(V^{-2})
\end{equation}
and
\begin{equation}
f_4^3(x)=g_1(x)(1+O(V^{-2}))
.
\end{equation}
Therefore,
\begin{equation}
1-u_4(x,y)=(1-u_2(x,y))\left(1-\frac2V\int dz\ g_1(z)u_2(x,z)u_2(y,z)+O(V^{-2})\right)
.
\end{equation}
\qed
\subsection{Consequences of the factorization}
\point
We first rewrite\-~(\ref{eigval}) as a family of equations for $g_i$.
\bigskip
\subpoint
Integrating~\-(\ref{eigval}) with respect to $x_1,\cdots,x_N$, we find that
\begin{equation}
E_0=
G^{(2)}_0
+F^{(1)}_0
+B_0
\label{E0}
\end{equation}
with
\begin{equation}
G^{(2)}_0:=
\frac{N(N-1)}{2V^2}\int dxdy\ v(x,y)g_2(x,y)
\end{equation}
\begin{equation}
F^{(1)}_0:=
\frac NV\int dx\ \varpi g_1(x)
\end{equation}
and $B_0$ is a boundary term:
\begin{equation}
B_0=-\frac{N}{2V}\int dx\ \Delta g_1(x)
.
\end{equation}
\bigskip
\subpoint
If, now, we integrate~\-(\ref{eigval}) with respect to $x_2,\cdots,x_N$, we find
\begin{equation}
-\frac\Delta 2g_1(x)
+\varpi g_1(x)
+G^{(2)}_1(x)
+G^{(3)}_1(x)
+F^{(2)}_1(x)
+B_1(x)
=E_0g_1(x)
\label{g1}
\end{equation}
with
\begin{equation}
G^{(2)}_1(x):=\frac{N-1}V\int dy\ v(x,y)g_2(x,y)
\end{equation}
\begin{equation}
G^{(3)}_1(x):=
\frac{(N-1)(N-2)}{2V^2}\int dydz\ v(y,z)g_3(x,y,z)
\end{equation}
\begin{equation}
F^{(2)}_1(x):=\frac{N-1}V\int dy\ \varpi_y g_2(x,y)
\end{equation}
in which we use the notation $\varpi_y$ to indicate that $\varpi$ applies to $y\mapsto g_2(x,y)$,
and $B_1$ is a boundary term
\begin{equation}
B_1(x):=-\frac{N-1}{2V}\int dy\ \Delta_y g_2(x,y)
.
\end{equation}
\bigskip
\subpoint
If we integrate with respect to $x_3,\cdots,x_N$, we find
\begin{equation}
\begin{largearray}
-\frac12(\Delta_x+\Delta_y)g_2(x,y)
+v(x,y)g_2(x,y)
+(\varpi_y+\varpi_x)g_2(x,y)
+\\\hfill+
G^{(3)}_2(x,y)
+G^{(4)}_2(x,y)
+F^{(3)}_2(x,y)
+B_2(x,y)
=E_0g_2(x,y)
\label{g2}
\end{largearray}
\end{equation}
where, here again, $\varpi_y$ indicates that $\varpi$ applies to the $y$-degree of freedom, whereas $\varpi_x$ applies to $x$,
with
\begin{equation}
G^{(3)}_2(x,y):=
\frac{N-2}V\int dz\ (v(x,z)+v(y,z))g_3(x,y,z)
\end{equation}
\begin{equation}
G^{(4)}_2(x,y):=
\frac{(N-2)(N-3)}{2V^2}\int dzdt\ v(z,t)g_4(x,y,z,t)
\end{equation}
\begin{equation}
F^{(3)}_2(x,y):=
\frac{N-2}V\int dz\ \varpi_z g_3(x,y,z)
\end{equation}
and $B_2$ is a boundary term
\begin{equation}
B_2(x,y):=-\frac{N-2}{2V}\int dz\ \Delta_z g_3(x,y,z)
.
\end{equation}
\point
We rewrite\-~(\ref{E0}), (\ref{g1}) and~\-(\ref{g2}) using Lemmas\-~\ref{lemma:g2}, \ref{lemma:g3} and\-~\ref{lemma:g4}.
\bigskip
\subpoint We start with\-~(\ref{E0}): by\-~(\ref{intv}) and Lemma\-~\ref{lemma:g2},
\begin{equation}
G_0^{(2)}=
\frac{N(N-1)}{2V^2}\int dxdy\ v(x,y)g_1(x)g_1(y)(1-u_2(x,y))+O(V^{-1})
\end{equation}
so
\begin{equation}
E_0=
\frac{N(N-1)}{2V^2}\int dxdy\ v(x,y)g_1(x)g_1(y)(1-u_2(x,y))
+
\frac NV\int dx\ \varpi g_1(x)
+B_0
+O(V^{-1})
.
\end{equation}
\bigskip
\subpoint We now turn to\-~(\ref{g1}): by\-~(\ref{intv}) and Lemma\-~\ref{lemma:g2},
\begin{equation}
G_1^{(2)}(x)=\frac{N}Vg_1(x)\left(\int dy\ v(x,y)g_1(y)(1-u_2(x,y))+O(V^{-2})\right)
\end{equation}
and by Lemma\-~\ref{lemma:g3},
\begin{equation}
\begin{largearray}
G_1^{(3)}(x)=
g_1(x)\left(\frac{N^2}{2V^2}\int dydz\ v(y,z)g_1(y)g_1(z)(1-u_2(x,y))(1-u_2(x,z))(1-u_3(y,z))
-\right.\\\hfill\left.-
\frac{3N}{2V^2}\int dydz\ v(y,z)g_1(y)g_1(z)(1-u_2(y,z))
+O(V^{-1})\right)
\end{largearray}
\end{equation}
(we used\-~(\ref{u3}) to write $u_3=u_2+O(V^{-1})$; this works fine for $u_3(x,y)$ and $u_3(x,z)$ because the integrals over $y$ and $z$ are controlled by $v(y,z)w_3(x,y)$ and $v(y,z)w_3(x,z)$ using\-~(\ref{intv}) and\-~(\ref{assum_bound}); in the first term, it does not work for $u_3(y,z)$, as $v(y,z)w_3(y,z)$ can only control one of the integrals, and not both; the second term has an extra $V^{-1}$ that lets us replace $u_3$ by $u_2$)
and by\-~(\ref{assum_bound}) and\-~(\ref{bound_varpi}),
\begin{equation}
F_1^{(2)}(x)=
g_1(x)\left(\frac NV\int dy\ \varpi_y(g_1(y)(1-u_2(x,y)))
-\frac 1V\int dy\ \varpi g_1(y)
+O(V^{-1})
\right)
.
\end{equation}
The first term in $G_1^{(3)}$ is of order $V$:
\begin{equation}
\begin{largearray}
\frac{N^2}{2V^2}\int dydz\ v(y,z)g_1(y)g_1(z)(1-u_2(x,y))(1-u_2(x,z))(1-u_3(y,z))
=\\[0.3cm]\indent=
\frac{N^2}{2V^2}\int dydz\ v(y,z)g_1(y)g_1(z)(1-u_2(y,z))
-
\frac{N^2}{2V^3}\int dydz\ v(y,z)g_1(y)g_1(z)w_3(y,z)
+\\[0.5cm]\hfill+
\frac{N^2}{2V^2}\int dydz\ v(y,z)g_1(y)g_1(z)(1-u_2(y,z))(-u_2(x,y)-u_2(x,z)+u_2(x,y)u_2(x,z))
+O(V^{-1})
\end{largearray}
\end{equation}
in which the only term of order $V$ is the first one, and is equal to the first term of order $V$ in $E_0$, and thus cancels out.
There is a similar cancellation between the second term of order $V$ in $F_1^{(2)}$ and $E_0$. All in all,
\begin{equation}
\left(
-\frac\Delta 2
+\varpi
+\bar G^{(2)}_1(x)
+\bar G^{(3)}_1(x)
+\bar F^{(2)}_1(x)
+\bar E_0
-B_0
\right)
g_1(x)
+B_1(x)
=g_1(x)O(V^{-1})
\label{g1bar}
\end{equation}
with, recalling $\rho:=N/V$,
\begin{equation}
\bar G_1^{(2)}(x):=\rho\int dy\ v(x,y)g_1(y)(1-u_2(x,y))
\end{equation}
and using\-~(\ref{w3}),
\begin{equation}
\begin{largearray}
\bar G_1^{(3)}(x):=
-\frac\rho2\int \frac{dydz}V\ v(y,z)g_1(y)g_1(z)(1-u_2(y,z))\left(3+\rho \int dt\ g_1(t)u_2(y,t)u_2(z,t)\right)
+\\[0.5cm]\hfill+
\frac{\rho^2}2\int dydz\ v(y,z)g_1(y)g_1(z)(1-u_2(y,z))(-u_2(x,y)-u_2(x,z)+u_2(x,y)u_2(x,z))
\end{largearray}
\end{equation}
\begin{equation}
\bar F_1^{(2)}(x):=
-\rho\int dy\ \varpi_y(g_1(y)u_2(x,y))
-\int \frac{dy}V\ \varpi g_1(y)
\end{equation}
\begin{equation}
\bar E_0:=
\frac\rho2\int \frac{dxdy}V\ v(x,y)g_1(x)g_1(y)(1-u_2(x,y))
.
\end{equation}
Rewriting this using\-~(\ref{avgdef})-(\ref{C}), we find\-~(\ref{compleq_g1}) with
\begin{equation}
\Sigma_1(x):=B_1(x)-B_0g_1(x)+O(V^{-1})
.
\end{equation}
\bigskip
\subpoint Finally, we rewrite (\ref{g2}): by\-~(\ref{intv}) and Lemma\-~\ref{lemma:g3},
\begin{equation}
\begin{largearray}
G_2^{(3)}(x,y)=
\frac NVg_1(x)g_1(y)(1-u_2(x,y))
\cdot\\\hfill\cdot
\left(\int dz\ (v(x,z)+v(y,z))g_1(z)(1-u_2(x,z))(1-u_2(y,z))+O(V^{-1})\right)
\end{largearray}
\end{equation}
and by Lemma\-~\ref{lemma:g4},
\begin{equation}
\begin{largearray}
G^{(4)}_2(x,y)=
g_1(x)g_1(y)\left(\frac{N^2}{2V^2}(1-u_4(x,y))\int dzdt\ v(z,t)g_1(z)g_1(t)(1-u_4(z,t))\Pi(x,y,z,t)
-\right.\\\hfill\left.-
\frac{5N}{2V^2}(1-u_2(x,y))\int dzdt\ v(z,t)g_1(z)g_1(t)(1-u_2(z,t))
+O(V^{-1})
\right)
\end{largearray}
\end{equation}
\begin{equation}
\Pi(x,y,z,t):=
(1-u_2(x,z))(1-u_2(x,t))(1-u_2(y,z))(1-u_2(y,t))
\label{Pi}
\end{equation}
and by\-~(\ref{assum_bound}) and\-~(\ref{bound_varpi}),
\begin{equation}
\begin{largearray}
F^{(3)}_2(x,y)=
g_1(x)g_1(y)\left(
\frac NV(1-u_3(x,y))\int dz\ \varpi_z(g_1(z)(1-u_2(x,z))(1-u_2(y,z)))
-\right.\\\hfill\left.-
\frac2V(1-u_2(x,y))\int dz\ \varpi g_1(z)
+O(V^{-1})
\right)
.
\end{largearray}
\end{equation}
\vfill
\eject
The first term in $G_2^{(4)}$ is of order $V$: by\-~(\ref{u4}),
\begin{equation}
\begin{largearray}
\frac{N^2}{2V^2}(1-u_4(x,y))\int dzdt\ v(z,t)g_1(z)g_1(t)(1-u_4(z,t))
\Pi(x,y,z,t)
=\\[0.5cm]\indent=
\frac{N^2}{2V^2}(1-u_2(x,y))\int dzdt\ v(z,t)g_1(z)g_1(t)(1-u_2(z,t))
-\\[0.5cm]\indent-
\frac{N^2}{V^3}w_3(x,y)\int dzdt\ v(z,t)g_1(z)g_1(t)(1-u_2(z,t))
-\\[0.5cm]\indent-
\frac{N^2}{V^3}(1-u_2(x,y))\int dzdt\ v(z,t)g_1(z)g_1(t)w_3(z,t)
+\\[0.5cm]\indent+
\frac{N^2}{2V^2}(1-u_2(x,y))\int dzdt\ v(z,t)g_1(z)g_1(t)(1-u_2(z,t))
\left(\Pi(x,y,z,t)-1\right)
+O(V^{-1})
\end{largearray}
\end{equation}
in which the only term of order $V$ is the first one, and is equal to the term of order $V$ in $E_0$, and thus cancels out.
There is a similar cancellation between the term of order $V$ in $F_2^{(3)}$ and $E_0$.
All in all,
\begin{equation}
\begin{largearray}
\left(
-\frac12(\Delta_x+\Delta_y)
+v(x,y)
+\varpi_x+\varpi_y
+\bar G^{(3)}_2(x,y)
+\bar G^{(4)}_2(x,y)
+\bar F^{(3)}_2(x,y)
+\bar E_0
-B_0
\right)
\cdot\\\hfill\cdot
g_1(x)g_1(y)(1-u_2(x,y))
+B_2(x,y)
=
g_1(x)g_1(y)O(V^{-1})
\end{largearray}
\label{g2bar}
\end{equation}
with
\begin{equation}
\bar G_2^{(3)}(x,y):=
\rho\int dz\ (v(x,z)+v(y,z))g_1(z)(1-u_2(x,z))(1-u_2(y,z))
\end{equation}
and by\-~(\ref{w3}),
\begin{equation}
\begin{array}{r@{\ }>\displaystyle l}
\bar G_2^{(4)}(x,y)
:=&
-\frac\rho2\left(5+2\rho \int dr\ g_1(r)u_2(x,r)u_2(y,r)\right)\int \frac{dzdt}V\ v(z,t)g_1(z)g_1(t)(1-u_2(z,t))
-\\[0.3cm]&-
\rho^2\int \frac{dzdt}V\ v(z,t)g_1(z)g_1(t)(1-u_2(z,t))\int dr\ g_1(r)u_2(z,r)u_2(t,r)
+\\&+
\frac{\rho^2}2\int dzdt\ v(z,t)g_1(z)g_1(t)(1-u_2(z,t))
\left(\Pi(x,y,z,t)-1\right)
\end{array}
\end{equation}
\begin{equation}
\begin{largearray}
\bar F^{(3)}_2(x,y):=
\rho\int dz\ \varpi_z(g_1(z)(-u_2(x,z)-u_2(y,z)+u_2(x,z)u_2(y,z)))
-\\\hfill-
\left(2+\rho\int dr\ g_1(r)u_2(x,r)u_2(y,r)\right)\int \frac{dz}V\ \varpi g_1(z)
\end{largearray}
\end{equation}
\begin{equation}
\bar E_0=
\frac\rho2\int \frac{dxdy}V\ v(x,y)g_1(x)g_1(y)(1-u_2(x,y))
.
\end{equation}
\bigskip
\subpoint
Expanding out $\Pi$, see\-~(\ref{Pi}), we find\-~(\ref{compleq_g2}) with
\begin{equation}
\begin{array}{r@{\ }>\displaystyle l}
\bar R_2(x,y)
:=&
\rho\int dz\ g_1(z)\left(
\bar S(x,z)+\bar S(y,z)
-2\int \frac{dt}V\ g_1(t)\bar S(t,z)
\right)
+\\[0.3cm]&+
\frac{\rho^2}2\left(
\bar S\bar\ast u_2\bar\ast u_2(x,x)+\bar S\bar\ast u_2\bar\ast u_2(y,y)
-2\int \frac{dt}V\ g_1(t)\bar S\bar\ast u_2\bar\ast u_2(t,t)
\right)
+\\[0.3cm]&+
\rho^2\int dzdt\ g_1(z)g_1(t)u_2(x,z)u_2(y,z)\left(
\bar S(z,t)
-\int\frac{dr}V\ g_1(r)\bar S(z,r)
\right)
-\\[0.3cm]&-
\rho^2\int dt\ g_1(t)(\bar S\bar\ast u_2(x,t)+\bar S\bar\ast u_2(y,t))
+\bar F_2^{(3)}(x,y)
+\varpi_x+\varpi_y
\end{array}
\label{R1}
\end{equation}
and
\begin{equation}
\Sigma_2(x,y):=B_2(x,y)-B_0g_1(x)g_1(y)(1-u_2(x,y))+O(V^{-1})
.
\end{equation}
Using\-~(\ref{EA}) and\-~(\ref{C}), (\ref{R1}) becomes\-~(\ref{R}).
\bigskip
\point
Finally, (\ref{simplen}) follows from\-~(\ref{E0}) with
\begin{equation}
\Sigma_0:=B_0+O(V^{-1})
.
\end{equation}
\qed
\subsection{Sanity check, proof of Corollary \expandonce{\ref{cor:check}}}\label{sec:trsl_inv}
\indent
Assuming the translation invariance of the solution, $g_1(x)$ is constant.
By\-~(\ref{g11}),
\begin{equation}
g_1(x)=1
.
\label{g1const}
\end{equation}
Furthermore, $\varpi\equiv 0$.
We then have
\begin{equation}
\bar S(x,y)=S(x-y)
,\quad
\bar K(x,y)=K(x-y)
,\quad
\bar L(x,y)=L(x-y)
\end{equation}
(see\-~(\ref{K})-(\ref{L})).
Furthermore,
\begin{equation}
\mathcal E(x)\equiv \mathcal E(y)\equiv\left<\mathcal E\right>=\frac\rho2\int dy\ S(y)
\end{equation}
\begin{equation}
\bar A(x)\equiv\bar A(y)\equiv\left<\bar A\right>=\rho^2 S\ast u\ast u(0)
\end{equation}
\begin{equation}
\bar C(x)\equiv \bar C(y)
=2\rho^2\int dz\ u(z)\int dt\ S(t)
\end{equation}
which vanishes by\-~(\ref{g2g1}).
Thus,
\begin{equation}
\bar R_2(x,y)\equiv0
.
\end{equation}
We conclude by taking the thermodynamic limit.
\qed
\section{The momentum distribution}
\subsection{Computation of the momentum distribution, proof of Theorem \expandonce{\ref{theo:Nk}}}\label{sec:Nk_proof}
\indent
We use Theorem\-~\ref{theo:simple} with $\varpi$ as in\-~(\ref{varpiNk}).
Note that, by\-~(\ref{varpiNk}),
\begin{equation}
\int dx\ \varpi f(x)=0
\end{equation}
which trivially satisfies\-~(\ref{bound_varpi}).
\bigskip
\point
We change variables in\-~(\ref{compleq_g2}) to
\begin{equation}
\xi=\frac{x+y}2
,\quad
\zeta=x-y
\end{equation}
and find
\begin{equation}
\begin{largearray}
\left(
-\frac14\Delta_\xi-\Delta_\zeta+v(\zeta)
-2\rho\bar K(\xi+{\textstyle\frac\zeta 2},\xi-{\textstyle\frac\zeta 2})
+\rho^2\bar L(\xi+{\textstyle\frac\zeta 2},\xi-{\textstyle\frac\zeta 2})
+\bar R_2(\xi+{\textstyle\frac\zeta 2},\xi-{\textstyle\frac\zeta 2})
\right)
\cdot\\\hfill\cdot
g_1(\xi+{\textstyle\frac\zeta 2})g_1(\xi-{\textstyle\frac\zeta 2})
(1-u_2(\xi+{\textstyle\frac\zeta 2},\xi-{\textstyle\frac\zeta 2}))
=-\Sigma_2
.
\label{g2_xi}
\end{largearray}
\end{equation}
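As a quick consistency check (one-dimensional and not part of the original argument), the operator identity $-\frac12(\Delta_x+\Delta_y)=-\frac14\Delta_\xi-\Delta_\zeta$ under $\xi=\frac{x+y}2$, $\zeta=x-y$ can be verified symbolically on exponential test functions, for which $\Delta_\xi$ and $\Delta_\zeta$ act as multiplication by $a^2$ and $b^2$:

```python
import sympy as sp

# Symbolic check (one-dimensional) that
#   -(1/2)(Delta_x + Delta_y) = -(1/4) Delta_xi - Delta_zeta
# under xi = (x+y)/2, zeta = x - y, on exponential test functions.
x, y, a, b = sp.symbols('x y a b')

xi = (x + y) / 2
zeta = x - y
f = sp.exp(a * xi + b * zeta)   # generic exponential: Delta_xi f = a^2 f, etc.

# Left-hand side: -(1/2)(d^2/dx^2 + d^2/dy^2) acting on f
lhs = -sp.Rational(1, 2) * (sp.diff(f, x, 2) + sp.diff(f, y, 2))

# Right-hand side: (-(1/4) a^2 - b^2) f, i.e. (-(1/4) Delta_xi - Delta_zeta) f
rhs = (-sp.Rational(1, 4) * a**2 - b**2) * f

assert sp.simplify(lhs - rhs) == 0
print("change of variables verified")
```

The same chain-rule computation holds componentwise in three dimensions.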
In addition, by\-~(\ref{simplen}),
\begin{equation}
e=\frac\rho2\int \frac{d\xi d\zeta}V\ g_1(\xi+{\textstyle\frac\zeta 2})g_1(\xi-{\textstyle\frac\zeta 2})v(\zeta)(1-u_2(\xi+{\textstyle\frac\zeta 2},\xi-{\textstyle\frac\zeta 2}))
+\int\frac{dx}V\ \varpi g_1(x)
+\Sigma_1
.
\label{Nken}
\end{equation}
We expand in powers of $\epsilon$:
\begin{equation}
g_1(x)=1+\epsilon g_1^{(1)}(x)+O(\epsilon^2)
,\quad
u_2(\xi+{\textstyle\frac\zeta2},\xi-{\textstyle\frac\zeta 2})=u_2^{(0)}(\zeta)+\epsilon u_2^{(1)}(\xi+{\textstyle\frac\zeta2},\xi-{\textstyle\frac\zeta 2})+O(\epsilon^2)
\end{equation}
in which we used the fact that $g_1(x)|_{\epsilon=0}=1$, see\-~(\ref{g1const}).
In particular, the terms of order $0$ in $\epsilon$ are independent of $\xi$.
Note, in addition, that, by\-~(\ref{g11}),
\begin{equation}
\int\frac{dx}V\ g_1^{(1)}(x)=0
.
\label{intg11}
\end{equation}
\bigskip
\point
The trick of this proof is to take the average with respect to $\xi$ on both sides of\-~(\ref{g2_xi}).
Since we take periodic boundary conditions, the $\Delta_\xi$ term drops out.
We will only focus on the first order contribution in $\epsilon$, and, as was mentioned above, terms of order $0$ are independent of $\xi$.
Thus, the average over $\xi$ will always apply to a single term, either $g_1^{(1)}$ or $u_2^{(1)}$.
By\-~(\ref{g11}), the terms involving $g_1^{(1)}$ have zero average.
We can therefore replace $g_1$ by 1.
(The previous argument does not apply to the terms in which $\Delta_\zeta$ acts on $g_1$, but these terms have a vanishing average as well because of the periodic boundary conditions.)
In particular, by\-~(\ref{g2g1}) and Lemma\-~\ref{lemma:g2},
\begin{equation}
\int\frac{d\xi}V\ (1-u_2^{(1)}(\xi+{\textstyle\frac\zeta2},\xi-{\textstyle\frac\zeta 2}))
=1
\end{equation}
so
\begin{equation}
\int\frac{d\xi}V\ u_2^{(1)}(\xi+{\textstyle\frac\zeta2},\xi-{\textstyle\frac\zeta 2})
=0
\end{equation}
and thus, we can replace $u_2$ with $u_2^{(0)}$.
Thus, using the translation invariant computation detailed in Section\-~\ref{sec:trsl_inv}, we find that the average of\-~(\ref{g2_xi}) is
\begin{equation}
(-\Delta+v(\zeta)-2\rho K(\zeta)+\rho^2 L(\zeta))(1-u_2^{(0)}(\zeta))+\epsilon F(\zeta)+O(\epsilon^2)+\Sigma_2=0
\label{eqNk_inproof}
\end{equation}
where $K$ and $L$ are defined in\-~(\ref{K}) and\-~(\ref{L}) and $F$ comes from the contribution to $\bar R_2$ of $\varpi$, see\-~(\ref{R}):
\begin{equation}
\begin{largearray}
F(\zeta):=\epsilon^{-1}\int \frac{d\xi}V\
\left(
\varpi_x+\varpi_y-2\left<\varpi\right>
+\rho\int dz\ \varpi_z(u_2^{(0)}(\xi+{\textstyle\frac\zeta2}-z)u_2^{(0)}(\xi-{\textstyle\frac\zeta 2}-z))
-\right.\\\hfill\left.-
\rho\int dz\ \varpi_zu_2^{(0)}(\xi+{\textstyle\frac\zeta2}-z)
-\rho\int dz\ \varpi_zu_2^{(0)}(\xi-{\textstyle\frac\zeta 2}-z)
\right)(1-u_2^{(0)}(\zeta))
.
\end{largearray}
\end{equation}
Similarly, (\ref{Nken}) is
\begin{equation}
e=\frac\rho2\int d\zeta\ v(\zeta)(1-u_2^{(0)}(\zeta))
+\int\frac{dx}V\ \varpi g_1(x)
+\Sigma_1
+O(\epsilon^2)
.
\end{equation}
\bigskip
\point
Furthermore, by\-~(\ref{varpiNk}),
\begin{equation}
\int dz\ \varpi_z f(z)=0
\end{equation}
for any integrable $f$, so
\begin{equation}
F(\zeta)=\epsilon^{-1}\int \frac{d\xi}V\
\left(\varpi_x+\varpi_y\right)(1-u_2^{(0)}(\zeta))
\end{equation}
and
\begin{equation}
e=\frac\rho2\int d\zeta\ v(\zeta)(1-u_2^{(0)}(\zeta))
+\Sigma_1
+O(\epsilon^2)
.
\label{Nken_inproof}
\end{equation}
Now,
\begin{equation}
\varpi_x f(x-y)
=
e^{ikx}
\int dz\
e^{-ikz}f(z-y)
\end{equation}
so
\begin{equation}
\varpi_x f(\zeta)
=
\epsilon e^{ik(\xi+{\textstyle\frac\zeta2})}
\int dz\
e^{-ik(z+(\xi-{\textstyle\frac\zeta 2}))}f(z)
=
\epsilon e^{ik\zeta}
\int dz\
e^{-ikz}f(z)
=\epsilon e^{ik\zeta}\hat f(-k)
.
\end{equation}
Similarly,
\begin{equation}
\varpi_y f(\zeta)
=\epsilon e^{-ik\zeta}\hat f(-k)
.
\end{equation}
Thus
\begin{equation}
F(\zeta)=2\cos(k\zeta)(\delta(k)-\hat u_2^{(0)}(-k))
.
\label{F_inproof}
\end{equation}
Since $k\neq 0$, the $\delta$ function drops out.
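The Fourier step used above, $\varpi_x f(x-y)=\epsilon e^{ik(x-y)}\hat f(-k)$, can be illustrated numerically on a Gaussian test function (the overall factor $\epsilon$ is dropped, and $\hat f(q)=\int dz\ e^{iqz}f(z)$ as in the computation above; the values of $k$, $x$, $y$ are arbitrary):

```python
import numpy as np

# Numerical illustration (Gaussian test function, overall factor epsilon
# dropped) of the identity
#   e^{ikx} int dz e^{-ikz} f(z-y) = e^{ik(x-y)} fhat(-k),
# with fhat(q) = int dz e^{iqz} f(z).
k, x, y = 0.7, 1.3, -0.4
f = lambda z: np.exp(-z**2 / 2)                           # test function
fhat = lambda q: np.sqrt(2 * np.pi) * np.exp(-q**2 / 2)   # its Fourier transform

z = np.linspace(-40.0, 40.0, 400001)
dz = z[1] - z[0]
lhs = np.exp(1j * k * x) * np.sum(np.exp(-1j * k * z) * f(z - y)) * dz
rhs = np.exp(1j * k * (x - y)) * fhat(-k)

assert abs(lhs - rhs) < 1e-6
print("Fourier identity verified")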
We conclude the proof by combining\-~(\ref{eqNk_inproof}), (\ref{Nken_inproof}) and\-~(\ref{F_inproof}) and taking the thermodynamic limit.
\qed
\subsection{The simple equation and Bogolyubov theory, proof of Theorem \expandonce{\ref{theo:Nk_bog}}}\label{sec:Nk_bog}
\point
We differentiate\-~(\ref{simpleq}) with respect to $\epsilon$ and take $\epsilon=0$:
\begin{equation}
(-\Delta+v+4e+4e\rho u\ast)\partial_\epsilon u=-4\partial_\epsilon eu+2\partial_\epsilon e\rho u\ast u+F
.
\end{equation}
Let
\begin{equation}
\mathfrak K_e:=(-\Delta+v+4e(1-\rho u\ast))^{-1}
\end{equation}
(this operator was introduced and studied in detail in\-~\cite{CJL21}).
We apply $\mathfrak K_e$ to both sides and take a scalar product with $-\rho v/2$ and find
\begin{equation}
\partial_\epsilon e=\rho\partial_\epsilon e\int dx\ v(x)\mathfrak K_e(2u(x)-\rho u\ast u(x))-\frac\rho2\int dx\ v(x)\mathfrak K_eF(x)
\end{equation}
and so, using\-~(\ref{M_simpleq}),
\begin{equation}
\mathcal M^{(\mathrm{simpleq})}(k)=\partial_\epsilon e
=-\frac{\frac\rho2\int dx\ v(x)\mathfrak K_eF(x)}{1-\rho\int dx\ v(x)\mathfrak K_e(2u(x)-\rho u\ast u(x))}
\end{equation}
and, by\-~(\ref{F}),
\begin{equation}
\mathcal M^{(\mathrm{simpleq})}(k)
=\rho\frac{\hat u(k)\int dx\ v(x)\mathfrak K_e\cos(kx)}{1-\rho\int dx\ v(x)\mathfrak K_e(2u(x)-\rho u\ast u(x))}
.
\end{equation}
Note that
\begin{equation}
\int\frac{dk}{(2\pi)^3}\mathcal M^{(\mathrm{simpleq})}(k)
=
\frac{\rho\int dx\ v(x)\mathfrak K_e u(x)}{1-\rho\int dx\ v(x)\mathfrak K_e(2u(x)-\rho u\ast u(x))}
\end{equation}
which is the expression for the uncondensed fraction for the simple equation\-~\cite[(38)]{CHe21}.
\bigskip
\point
By\-~\cite[(5.8),(5.27)]{CJL21},
\begin{equation}
\mathcal M^{(\mathrm{simpleq})}(k)=\rho
\left(\hat u(k)\int dx\ v(x)\mathfrak K_e\cos(kx)\right)
(1+O(\rho e^{-\frac12}))
.
\end{equation}
Furthermore, by the resolvent identity,
\begin{equation}
\mathfrak K_e\cos(kx)
=
\xi-\mathfrak K_e(v\xi)
,\quad
\xi:=\mathfrak Y_e(\cos(kx))
:=(-\Delta+4e(1-\rho u\ast))^{-1}\cos(kx)
\end{equation}
in terms of which, using the self-adjointness of $\mathfrak K_e$,
\begin{equation}
\mathcal M^{(\mathrm{simpleq})}(k)=\rho\hat u(k)\left(
\int dx\ v(x)\xi(x)
-
\int dx\ \mathfrak K_ev(x)(v(x)\xi(x))
\right)
.
\label{pde}
\end{equation}
\bigskip
\point
Now, taking the Fourier transform,
\begin{equation}
\hat\xi(q)\equiv\int dx\ e^{iqx}\xi(x)=\frac{(2\pi)^3}2\frac{\delta(k-q)+\delta(k+q)}{q^2+4e(1-\rho\hat u(q))}
\end{equation}
and so
\begin{equation}
\int dx\ v(x)\xi(x)
=
\int\frac{dq}{(2\pi)^3}\hat v(q)\hat\xi(q)
=
\frac{\hat v(k)}{k^2+4e(1-\rho\hat u(k))}
\end{equation}
and thus
\begin{equation}
\rho\hat u(k)\int dx\ v(x)\xi(x)
=
\rho\hat v(k)\frac{\hat u(k)}{k^2+4e(1-\rho\hat u(k))}
.
\end{equation}
We recall\-~\cite[(4.25)]{CJL20}:
\begin{equation}
\rho\hat u(k)=\frac{k^2}{4e}+1-\sqrt{\left(\frac{k^2}{4e}+1\right)^2-\hat S(k)}
\label{rhou}
\end{equation}
and, by\-~\cite[(4.24)]{CJL20},
\begin{equation}
\hat S(0)=1
.
\label{S1}
\end{equation}
Therefore, if we rescale
\begin{equation}
k=2\sqrt{e}\kappa
\end{equation}
we find
\begin{equation}
\rho\hat u(k)\int dx\ v(x)\xi(x)
=
\frac{\hat v(0)}{4e}\frac{\kappa^2+1-\sqrt{(\kappa^2+1)^2-1}}{\sqrt{(\kappa^2+1)^2-1}}
+o(e^{-1})
.
\label{pde1}
\end{equation}
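The rescaling above is elementary algebra once $\hat S(k)$ is replaced by $\hat S(0)=1$; a numerical spot check (with arbitrary illustrative values of $e$ and $\kappa$) confirms that the kernel reduces exactly to the stated closed form:

```python
import numpy as np

# Spot check of the rescaling k = 2 sqrt(e) kappa: with
# rho*u_hat(k) = k^2/(4e) + 1 - sqrt((k^2/(4e)+1)^2 - S_hat(k)) and
# S_hat set to S_hat(0) = 1, the kernel
#   rho*u_hat(k) / (k^2 + 4e(1 - rho*u_hat(k)))
# equals (1/(4e)) (kappa^2+1 - sqrt((kappa^2+1)^2-1)) / sqrt((kappa^2+1)^2-1).
e = 0.37
for kappa in [0.1, 0.5, 1.0, 2.5, 7.0]:
    k = 2 * np.sqrt(e) * kappa
    x = k**2 / (4 * e) + 1                # = kappa^2 + 1
    rho_u = x - np.sqrt(x**2 - 1)         # S_hat set to 1
    lhs = rho_u / (k**2 + 4 * e * (1 - rho_u))
    rhs = (x - np.sqrt(x**2 - 1)) / (4 * e * np.sqrt(x**2 - 1))
    assert abs(lhs - rhs) < 1e-12 * abs(rhs)
print("rescaled kernel identity verified")
```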
\bigskip
\point
Now,
\begin{equation}
\int dx\ e^{iqx}v(x)\xi(x)
=
\frac12\frac1{k^2+4e(1-\rho\hat u(k))}
\int dp\ \hat v(q-p)(\delta(k-p)+\delta(k+p))
\end{equation}
so
\begin{equation}
\int dx\ e^{iqx}v(x)\xi(x)
=
\frac12\frac{\hat v(q-k)+\hat v(q+k)}{k^2+4e(1-\rho\hat u(k))}
.
\end{equation}
Therefore,
\begin{equation}
\int dx\ \mathfrak K_ev(x)(v\xi)
=
\frac12\frac1{k^2+4e(1-\rho\hat u(k))}
\int\frac{dq}{(2\pi)^3}\
\widehat{\mathfrak K_e v}(q)
(\hat v(k-q)+\hat v(k+q))
\end{equation}
which, using the $q\mapsto-q$ symmetry, is
\begin{equation}
\int dx\ \mathfrak K_ev(x)(v\xi)
=
\frac1{k^2+4e(1-\rho\hat u(k))}
\int\frac{dq}{(2\pi)^3}\
\widehat{\mathfrak K_e v}(q)
\hat v(k+q)
\end{equation}
that is,
\begin{equation}
\rho\hat u(k)\int dx\ \mathfrak K_ev(x)(v\xi)
=
\frac{\rho\hat u(k)}{k^2+4e(1-\rho\hat u(k))}
\int dx\
e^{-ikx}
\mathfrak K_e v(x)
v(x)
\end{equation}
in which we rescale
\begin{equation}
k=2\sqrt e\kappa
\end{equation}
so, by\-~(\ref{rhou})-(\ref{S1}),
\begin{equation}
\rho\hat u(k)\int dx\ \mathfrak K_ev(x)(v\xi)
=
\frac{\kappa^2+1-\sqrt{(\kappa^2+1)^2-1}}{4e\sqrt{(\kappa^2+1)^2-1}}
(1+o(1))\int dx\
e^{-i2\sqrt e\kappa x}
v(x)\mathfrak K_e v(x)
.
\end{equation}
Therefore, by dominated convergence (using the argument above\-~\cite[(5.23)]{CJL21} and the fact that $\mathfrak K_e$ is positivity preserving), and by\-~\cite[(5.23)-(5.24)]{CJL21},
\begin{equation}
\rho\hat u(k)\int dx\ \mathfrak K_ev(x)(v\xi)
=
\frac{\kappa^2+1-\sqrt{(\kappa^2+1)^2-1}}{4e\sqrt{(\kappa^2+1)^2-1}}
(-4\pi a+\hat v(0))+o(e^{-1})
.
\label{pde2}
\end{equation}
\bigskip
\point
Inserting\-~(\ref{pde1}) and\-~(\ref{pde2}) into\-~(\ref{pde}), we find
\begin{equation}
\mathcal M^{(\mathrm{simpleq})}(k)
=
\frac{\pi a}{e}\frac{\kappa^2+1-\sqrt{(\kappa^2+1)^2-1}}{\sqrt{(\kappa^2+1)^2-1}}
+o(e^{-1})
.
\end{equation}
Finally, we recall\-~\cite[(1.23)]{CJL20}:
\begin{equation}
e=2\pi\rho a(1+O(\sqrt\rho))
\label{erho}
\end{equation}
so
\begin{equation}
\mathcal M^{(\mathrm{simpleq})}(k)
=
\frac{1}{2}\frac{\kappa^2+1-\sqrt{(\kappa^2+1)^2-1}}{\sqrt{(\kappa^2+1)^2-1}}
+o(e^{-1})
.
\label{final1}
\end{equation}
\bigskip
\point
Finally, by\-~(\ref{Mbog})
\begin{equation}
\mathcal M^{(\mathrm{Bogolyubov})}(2\sqrt e\kappa)=-\frac1{2\rho}\left(1-\frac{\frac{4e}{8\pi\rho a}\kappa^2+1}{\sqrt{\frac{e^2}{4\pi^2\rho^2a^2}\kappa^4+\frac{e}{\pi\rho a} \kappa^2}}\right)
\end{equation}
so by\-~(\ref{erho}),
\begin{equation}
\mathcal M^{(\mathrm{Bogolyubov})}(2\sqrt e\kappa)=-\frac1{2\rho}\left(1-\frac{\kappa^2+1}{\sqrt{\kappa^4+2\kappa^2}}\right)
.
\end{equation}
This, together with\-~(\ref{final1}), implies\-~(\ref{Msimpleqbog}).
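The reduction of the Bogolyubov expression can be spot-checked numerically; here $e$ is set exactly to $2\pi\rho a$, i.e. the $O(\sqrt\rho)$ correction in the asymptotic relation is dropped, and the values of $\rho$ and $a$ are arbitrary illustrative choices:

```python
import numpy as np

# Spot check of the reduction at k = 2 sqrt(e) kappa when e = 2*pi*rho*a:
#   4e/(8 pi rho a) -> 1,  e^2/(4 pi^2 rho^2 a^2) -> 1,  e/(pi rho a) -> 2.
rho, a = 0.01, 1.3
e = 2 * np.pi * rho * a
for kappa in [0.2, 1.0, 3.0]:
    num = 4 * e / (8 * np.pi * rho * a) * kappa**2 + 1
    den = np.sqrt(e**2 / (4 * np.pi**2 * rho**2 * a**2) * kappa**4
                  + e / (np.pi * rho * a) * kappa**2)
    full = -1 / (2 * rho) * (1 - num / den)
    reduced = -1 / (2 * rho) * (1 - (kappa**2 + 1) / np.sqrt(kappa**4 + 2 * kappa**2))
    assert abs(full - reduced) < 1e-9 * abs(reduced)
    # note (kappa^2+1)^2 - 1 = kappa^4 + 2 kappa^2, the square root appearing
    # in the rescaled simple-equation expression
    assert abs((kappa**2 + 1)**2 - 1 - (kappa**4 + 2 * kappa**2)) < 1e-12
print("Bogolyubov reduction verified")
```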
\qed
\vfill
\eject
\section{Introduction}
Quantum key distribution (QKD) \cite{c1,c2,c3} is a branch of quantum cryptography whose goal is to provide an elegant way for two distant legitimate partners, Alice and Bob, to share a random secure key over insecure quantum and classical channels. Its security is guaranteed by the laws of quantum physics \cite{c4,c5}. QKD has attracted considerable interest over the last three decades, giving birth to two main approaches, i.e., discrete-variable (DV) QKD \cite{c6,c7,c8} and continuous-variable (CV) QKD \cite{c9,c10,c11,c12,c13,c14}. In the first approach, the key bits are usually encoded in the polarization states of single photons. In CVQKD, by contrast, the sender Alice usually encodes the key bits in the quadratures ($\hat{x}$ and $\hat{p}$) of the optical field with Gaussian modulation \cite{c15}, while the receiver Bob recovers the secret key bits through homodyne or heterodyne detection \cite{c16,c17}.
In the traditional CVQKD protocol, both the amplitude and phase quadratures are usually used for symmetrical modulation.
In contrast, the unidimensional CVQKD (UCVQKD) protocol \cite{c18,c19} modulates information on only one quadrature (using, e.g., a single amplitude or phase modulator); it was suggested to reduce the complexity and the cost of the apparatus, facilitating the commercialization of practical CVQKD. Moreover, by adopting a simple single-quadrature modulation, the UCVQKD protocol avoids creating a \emph{hole} in the center of the Gaussian probability distribution \cite{c18} and allows an implementation with more standard and cheaper devices. However, the degree of performance degradation and the influence of the finite-size effect on the UCVQKD remained open questions, owing to the ambiguous relationship of the parameters related to the unmodulated quadrature.
The security of CVQKD protocols can usually be analyzed in the asymptotic case, in the finite-size regime, and in the composable security framework. In the asymptotic case, the secret key rate can be computed from the covariance matrix of the whole quantum system. However, the asymptotic secret key rate is a theoretical value that ignores the finite size of the raw keys, and its upper bound cannot be achieved in practice. To solve this problem, a security analysis that takes finite-size effects into account was proposed \cite{c20}. The resulting secret key rates are more pessimistic than those obtained in the asymptotic case, but they are closer to practice. After that, the composable security of symmetrically modulated Gaussian coherent-state protocols \cite{c21,c22,c23} was established, providing several refined security proofs and improved bounds on the secret key rates. Leverrier \cite{c24} gave a composable security proof for CVQKD with coherent states against collective attacks and confirmed that Gaussian attacks are asymptotically optimal within the composable security framework. Composable security sharpens the analysis by accounting for the uncertainty due to the finite-size effect \cite{c25}; thus, one can achieve the best security, namely the tightest bound, by subtly dividing the failure probabilities in the CVQKD system.
In this paper, we give an overall security analysis of the UCVQKD protocol, which is based on the Gaussian modulation of a single quadrature of the coherent state of light, in both the asymptotic case and the finite-size regime. We derive the relationship of the parameters related to the unmodulated quadrature in the suitable secure regions with two extreme scenarios, which shows the theoretical performance of the asymptotic secret key rate of the UCVQKD protocol. To render the performance close to reality, we analyze the composable security of the UCVQKD protocol against collective attacks and obtain the tightest bound of the UCVQKD protocol.
This paper is structured as follows. In Sec. \uppercase\expandafter{\romannumeral2}, we demonstrate the structure of the UCVQKD protocol. In Sec. \uppercase\expandafter{\romannumeral3}, we establish the relationship of the parameters related to the unmodulated quadrature, and derive the symptomatic secret key of the UCVQKD protocol. In Sec. \uppercase\expandafter{\romannumeral4}, we illustrate the composable security of the UCVQKD protocol. Finally, conclusions are drawn in Sec. \uppercase\expandafter{\romannumeral5}.
\section{Scheme design of the UCVQKD}
\begin{figure}
\begin{centering}
\includegraphics[width=4.5in,height=2.7in]{UCVQKD}
\caption{Scheme of the UCVQKD protocol. (Top) Prepare-and-measure model. Alice prepares a coherent state using a laser source and then displaces the state along the modulated quadrature by using modulator M, $V_{M}$ is the modulation variance. The states are subsequently sent to Bob through a general phase-sensitive channel with transmittance $\eta_{x,p}$ and excess noise $\varepsilon_{x,p}$. (Bottom) Equivalent entanglement-based model using two-mode squeezed vacuum state (EPR state), Alice measures mode $A$ using homodyne detection which projects the other half of EPR state onto a squeezed state $B$, then a squeezing operation is applied to transform mode $B$ to a coherent state $B_1$. Subsequently mode $B_1$ is sent to Bob through the generally phase-sensitive channel controlled by Eve.}
\label{fig:UCVQKD}
\end{centering}
\end{figure}
The data processing of the UCVQKD uses only one quadrature of coherent states to modulate information. This is in stark contrast to previous protocols where two quadratures are modulated simultaneously. The top panel of FIG. \ref{fig:UCVQKD} illustrates the prepare-and-measure UCVQKD protocol. The trusted sender, Alice, prepares coherent states with a laser source, where one of the quadratures $\hat{x}$ (amplitude quadrature) or $\hat{p}$ (phase quadrature) is modulated using the modulator $M$. As a result, each coherent state is displaced with displacement variance $V_{M}$ according to a random number drawn from a one-dimensional Gaussian alphabet. Without loss of generality, we assume $\hat{x}$ is modulated to simplify the derivation. The prepared states are subsequently sent to the remote trusted party Bob through a generally phase-sensitive channel with transmittance $\eta_{x}$, $\eta_{p}$ and excess noise $\varepsilon_{x}$, $\varepsilon_{p}$ in the $\hat{x}$ and $\hat{p}$ quadratures, respectively \cite{c18}. Bob applies either a heterodyne or a homodyne detector to perform coherent detection of the quadratures. Note that although the $\hat{p}$ quadrature is not modulated, Bob still needs to measure it (measuring the $\hat{x}$ quadrature most of the time and the $\hat{p}$ quadrature occasionally) to gather statistics on the properties of the channel in the $\hat{p}$ quadrature \cite{c19}. The data acquired by Bob while measuring the amplitude quadrature $\hat{x}$ is correlated with Alice's modulated data. After several runs, this correlation can be used to extract a secret key by post-processing. The main advantage of the UCVQKD protocol is that it simplifies the implementation with more standard and cheaper devices, and hence reduces the complexity of the CVQKD system.
\section{Asymptotic security of the UCVQKD}
To simplify the security analysis, we switch to the equivalent entanglement-based (EB) scheme, which allows an explicit description of the trusted modes and correlations, as shown in the bottom panel of FIG. \ref{fig:UCVQKD}, where the quantum channel is replaced by the eavesdropper Eve and the so-called \emph{entangling cloner} \cite{c15,c26,c27} can be used to launch the provably optimal collective Gaussian attack. Eve can replace the quantum channel, with transmittance $\eta_{x,p}$ and excess noise referred to the input $\chi_{x,p}$, by preparing an ancilla $|E\rangle$ of variance $W_{x,p}$ and a beam splitter of transmittance $\eta_{x,p}$. The value $W_{x,p}$ can be tuned to match the noise of the real channel, $\chi_{x,p}=(1-\eta_{x,p})/\eta_{x,p}+\varepsilon_{x,p}$. To simplify the description, we focus only on the UCVQKD with reverse reconciliation (RR); the direct reconciliation (DR) version can be derived by interchanging the roles of Alice and Bob.
According to the extremity of Gaussian quantum states \cite{c15,c28,c29}, the lower bound of the asymptotic secret key rate of the UCVQKD protocol under collective attack strategy can be given by
\begin{equation}
\begin{aligned}
K=\beta I(A:B_{2})-\chi_{E},
\end{aligned}
\end{equation}
where $\beta$ is the reconciliation efficiency, $I(A:B_{2})$ is the Shannon mutual information on the quadrature $\hat{x}$ available to the trusted parties Alice and Bob, and Eve's information $\chi_{E}=S(E)-S(E|x_{B})$ is the Holevo bound \cite{c30}, i.e., the upper bound on the information Eve can extract from Bob's data in RR. Since Eve purifies the whole system, after Bob's homodyne measurement the information between Eve and Bob's measurement can be expressed as
\begin{equation}
\begin{aligned}
\chi_{E}&=S(E)-S(E|x_{B})\\
&=S(AB_{2})-S(A|x_{B}).
\end{aligned}
\end{equation}
Therefore, the asymptotic secret key rate of the UCVQKD protocol for RR is derived as
\begin{equation}
\begin{aligned}
K_{RR}=\beta I(A:B_{2})-(S(AB_{2})-S(A|x_{B}))
\end{aligned}.
\end{equation}
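For concreteness, the von Neumann entropies $S(\cdot)$ entering $\chi_{E}$ are in practice computed from the symplectic eigenvalues $\nu_i$ of the relevant covariance matrices via $S=\sum_i g(\nu_i)$ with $g(\nu)=\frac{\nu+1}{2}\log_2\frac{\nu+1}{2}-\frac{\nu-1}{2}\log_2\frac{\nu-1}{2}$. The following is a minimal sketch of this standard Gaussian-state machinery (textbook material, not this paper's specific appendix):

```python
import numpy as np

def symplectic_eigenvalues(gamma):
    """Moduli of the eigenvalues of i*Omega*gamma (each appears twice)."""
    n = gamma.shape[0] // 2
    omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    ev = np.abs(np.linalg.eigvals(1j * omega @ gamma))
    return np.sort(ev)[::2]      # keep one of each doubly degenerate pair

def g(nu):
    """Entropy (in bits) of a thermal mode with symplectic eigenvalue nu."""
    if nu <= 1 + 1e-9:
        return 0.0
    return ((nu + 1) / 2 * np.log2((nu + 1) / 2)
            - (nu - 1) / 2 * np.log2((nu - 1) / 2))

def von_neumann_entropy(gamma):
    return sum(g(nu) for nu in symplectic_eigenvalues(gamma))

# Example: a two-mode squeezed vacuum state of variance V is pure, so both
# symplectic eigenvalues equal 1 and its entropy vanishes, while a thermal
# mode of variance 3 has entropy g(3) = 2 bits.
V = 5.0
c = np.sqrt(V**2 - 1)
sz = np.diag([1.0, -1.0])
tmsv = np.block([[V * np.eye(2), c * sz], [c * sz, V * np.eye(2)]])
print(von_neumann_entropy(tmsv))                  # ~ 0
print(von_neumann_entropy(np.diag([3.0, 3.0])))   # 2.0
```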
As mentioned above, the UCVQKD protocol uses only one quadrature (say $\hat{x}$) to modulate information, so its covariance matrix is no longer symmetric in both quadratures, unlike its counterpart, the symmetrically modulated Gaussian coherent-state QKD protocol (i.e., the GG02 protocol \cite{c17}). In the EB UCVQKD scheme, Alice prepares two-mode squeezed vacuum (TMSV) states $|\Psi\rangle$ of variance $V$; each TMSV state involves two modes A and B and can be expressed as
\begin{equation}
|\Psi\rangle=\sqrt{1-z^{2}}\sum_{i=0}^{\infty}z^{i}|i_{A}\rangle\otimes|i_{B}\rangle,
\end{equation}
where $z\in[0,1)$ and $\{|i\rangle\}_{i\in\mathbb{N}}$ denotes the Fock basis. Alice keeps mode A of the TMSV state and sends mode B to Bob. For the UCVQKD protocol, such a scheme can be realized by applying a local squeezing operation S with squeezing parameter $-\log\sqrt{V}$ to mode B before it is sent into the quantum channel, which results in the following covariance matrix:
\begin{equation}
\begin{aligned}
\Gamma_{AB_{1}}=
\begin{pmatrix}
\begin{smallmatrix}
V & 0 & \sqrt{V(V^{2}-1)} & 0 \\
0 & V & 0 & -\sqrt{\frac{V^{2}-1}{V}} \\
\sqrt{V(V^{2}-1)} & 0 & V^{2} & 0 \\
0 & -\sqrt{\frac{V^{2}-1}{V}} & 0 & 1 \\
\end{smallmatrix}
\end{pmatrix}.
\end{aligned}
\end{equation}
Thus, the EB scheme is equivalent to the Gaussian displacement of coherent states along the $\hat{x}$ quadrature with variance $V_{M}=V^{2}-1$. After the states travel through the quantum channel with transmittance $\eta_{x,p}$ and excess noise $\varepsilon_{x,p}$, the transformed covariance matrix reads
\begin{equation}
\begin{aligned}
& \Gamma_{AB_{2}}=
\left(
\begin{array}{cc}
\gamma_{A} & \sigma_{AB_{2}} \\
\sigma_{AB_{2}} & \gamma_{B_{2}} \\
\end{array}
\right),
\end{aligned}
\end{equation}
where $\gamma_{A}=\sqrt{V_{M}+1}\,\mathbb{I}$, $\mathbb{I}$ represents $\mathrm{diag}(1,1)$, and
\begin{equation}
\begin{aligned}
& \gamma_{B_{2}}=
\left(
\begin{array}{cc}
1+\eta_{x}(V_{M}+\varepsilon_{x}) & 0 \\
0 & 1+\eta_{p}\varepsilon_{p} \\
\end{array}
\right),
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}
& \sigma_{AB_{2}}=
\left(
\begin{array}{cc}
(\eta_{x}V_{M}\sqrt{V_{M}+1})^{\frac{1}{2}} & 0 \\
0 & (\frac{\eta_{p}V_{M}}{\sqrt{V_{M}+1}})^{\frac{1}{2}} \\
\end{array}
\right)\sigma_{z},
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
& \sigma_{z}=
\left(
\begin{array}{cc}
1 & 0 \\
0 & -1 \\
\end{array}
\right).
\end{aligned}
\end{equation}
It is worth noting that, since there is no modulation in the $\hat{p}$ quadrature at Alice's side, the channel transmittance and excess noise in the $\hat{p}$ quadrature cannot be estimated during parameter estimation. Therefore, Bob cannot obtain estimates of $\eta_{p}$ and $\varepsilon_{p}$ individually. In fact, since Bob measures the $\hat{p}$ quadrature, he can acquire the variance of the channel output in that quadrature rather than the parameters $\eta_{p}$ and $\varepsilon_{p}$ themselves. That is to say, the entry denoted $1+\eta_{p}\varepsilon_{p}$ in the matrix $\gamma_{B_{2}}$ is known. However, Bob cannot acquire the correlation between the two trusted modes in the $\hat{p}$ quadrature because $\eta_{p}$ and $\varepsilon_{p}$ are individually unknown. In order to estimate the asymptotic key rate of the UCVQKD protocol, we have to derive the relationship between the two unknown parameters $\eta_{p}$ and $\varepsilon_{p}$.
Theoretically, the two parameters can be set to any values within their domains. However, according to the Heisenberg uncertainty principle \cite{c31}, the unknown parameters must be bounded by the requirement of physicality, which imposes the following constraint
\begin{equation}
\begin{aligned}
\Gamma_{AB_{2}}+i\Omega\geqslant0,
\end{aligned}
\end{equation}
where $\Omega$ is the symplectic form with
\begin{equation}
\begin{aligned}
\Omega=\bigoplus^{n}_{i=1}\omega,\quad \omega=
\left(
\begin{array}{cc}
0 & 1 \\
-1 & 0 \\
\end{array}
\right).
\end{aligned}
\end{equation}
For the UCVQKD protocol, the two unknown parameters satisfy the physical constraint
\begin{equation}
\begin{aligned}
(\kappa\sqrt{\eta_{x}}-\sqrt{\eta_{p}})^{2}
\leqslant
(1-\kappa\eta_{x})(1+\eta_{p}\varepsilon_{p}-\kappa),
\end{aligned}
\end{equation}
where $\kappa=\frac{1}{1+\eta_{x}\varepsilon_{x}}$.
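As a quick illustration (not from the paper's text), the constraint above can be evaluated directly to classify a candidate pair $(\eta_{p},\varepsilon_{p})$ as physical or unphysical; the default parameters below match the illustrative values $\eta_{x}=0.4$, $\varepsilon_{x}=0.05$ used in Fig.~\ref{fig:physicalitySecurity}:

```python
import numpy as np

# Sketch of the physicality test given by the constraint
#   (kappa*sqrt(eta_x) - sqrt(eta_p))^2 <= (1 - kappa*eta_x)(1 + eta_p*eps_p - kappa)
# with kappa = 1/(1 + eta_x*eps_x); parameter values are illustrative.
def is_physical(eta_p, eps_p, eta_x=0.4, eps_x=0.05):
    kappa = 1.0 / (1.0 + eta_x * eps_x)
    lhs = (kappa * np.sqrt(eta_x) - np.sqrt(eta_p))**2
    rhs = (1.0 - kappa * eta_x) * (1.0 + eta_p * eps_p - kappa)
    return bool(lhs <= rhs)

# A symmetric channel (eta_p = eta_x, eps_p = eps_x) is physical, while a
# lossless, noiseless p quadrature paired with a lossy, noisy x quadrature
# violates the bound.
print(is_physical(0.4, 0.05))   # True
print(is_physical(1.0, 0.0))    # False
```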
\begin{figure}
\begin{centering}
\includegraphics[width=3.2in,height=2.6in]{PaS04005}
\caption{Regions bounded by physicality and the positive secret key rate with the varied parameters $\eta_{p}$ and $\varepsilon_{p}$. Colored bar at the right side represents the positive secret key rate. Modulation variance is $V_{M}=100$, channel transmittance is $\eta_{x}=0.4$, and excess noise in quadrature $\hat{x}$ is $\varepsilon_{x}=0.05$ respectively.}
\label{fig:physicalitySecurity}
\end{centering}
\end{figure}
In Fig. \ref{fig:physicalitySecurity}, we illustrate the regions bounded by physicality and by the positive secret key rate for the given parameters $\eta_{x}=0.4$ and $\varepsilon_{x}=0.05$ (these values represent a typical case). The whole plane is divided into three regions, i.e., the unphysical region, the unsecure region, and the secure region. The unphysical region is the restricted zone in which the values of $\eta_{p}$ and $\varepsilon_{p}$ cannot be set simultaneously without violating the Heisenberg uncertainty principle. That is to say, even though the maximum secret key rate (point (1)) lies in this area, such a secret key rate cannot be achieved in reality. The unsecure region (the blank area, excluding the unphysical region) is where the UCVQKD protocol cannot generate a positive secret key rate. In the secure region, the UCVQKD protocol with suitable parameters $\eta_{p}$ and $\varepsilon_{p}$ can generate a positive secret key rate under the optimal collective attack. Therefore, the accessible maximum asymptotic secret key rate is achieved at point (2) instead of the unrealistic point (1). Moreover, in order to ensure security, one should further take the more pessimistic case into account. The pessimistic case and the optimal case are derived as the two extreme scenarios in Appendix B.
In what follows, we show the asymptotic performance of the UCVQKD protocol. In a traditional communication system, one expects the channel loss and excess noise in both quadratures to be symmetric, namely $\eta_{p}=\eta_{x}=\eta$ and $\varepsilon_{p}=\varepsilon_{x}=\varepsilon$. Therefore, the previous covariance matrix $\Gamma_{AB_{2}}$ becomes:
\begin{equation}
\begin{aligned}
& \Gamma_{AB_{2}}^{sym}=
\left(
\begin{array}{cc}
\gamma_{A} & \sigma_{AB_{2}^{sym}} \\
\sigma_{AB_{2}^{sym}} & \gamma_{B_{2}^{sym}} \\
\end{array}
\right),
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
& \gamma_{B_{2}}^{sym}=
\left(
\begin{array}{cc}
1+\eta(V_{M}+\varepsilon) & 0 \\
0 & 1+\eta\varepsilon \\
\end{array}
\right),
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}
& \sigma_{AB_{2}}^{sym}=
\left(
\begin{array}{cc}
(\eta V_{M}\sqrt{V_{M}+1})^{\frac{1}{2}} & 0 \\
0 & -(\frac{\eta V_{M}}{\sqrt{V_{M}+1}})^{\frac{1}{2}} \\
\end{array}
\right).
\end{aligned}
\end{equation}
\begin{figure}
\begin{centering}
\includegraphics[width=3.2in,height=2.6in]{UCS}
\caption{Comparison of the UCVQKD protocol with symmetrically modulated Gaussian coherent-state protocol. The blue line denotes the performance of corresponding symmetrically modulated Gaussian coherent-state protocol, while the red line represents the UCVQKD protocol. The inset shows the maximum tolerable excess noise at each channel loss. Modulation variance is $V_{M}=20$, reconciliation efficiency is $\beta=95\%$, and excess noise is $\varepsilon=0.01$. }
\label{fig:keyratecompare}
\end{centering}
\end{figure}
Taking a loss rate of 0.2 dB/km and the modulation variance $V_{M}=20$, we compare the performance of the UCVQKD protocol and the symmetrical Gaussian modulation coherent-state protocol \cite{c15,c17,c28}, as shown in Fig. \ref{fig:keyratecompare} (see Appendix C for the calculation of the asymptotic secret key rate). We find that the UCVQKD protocol is clearly outperformed by the symmetrical Gaussian modulation coherent-state protocol. Actually, this result is what we expect: the unidimensional modulation scheme uses only one quadrature to carry the useful information, while its counterpart uses both quadratures $\hat{x}$ and $\hat{p}$ to carry information, which naturally results in a higher secret key rate. As a result, one may have to make a tradeoff between the secret key rate and the implementation for the given modulation variance.
Fortunately, a better performance of the UCVQKD protocol can be achieved by dynamically choosing an optimal modulation variance $V_{M}$. As shown in Fig. \ref{fig:optimizedVm}, we plot the asymptotic key rate of the UCVQKD protocol and the symmetrical coherent-state protocol with the optimized modulation variance $V_{M}$ at each channel loss. The performance of the UCVQKD protocol is dramatically improved by choosing a suitable modulation variance $V_{M}$, whereas the symmetrical coherent-state protocol has already reached its best performance. Moreover, the performance gap between the UCVQKD protocol and the symmetrical coherent-state protocol is narrowed for the optimized $V_{M}$. Therefore, by choosing the optimal modulation variance $V_{M}$, we can achieve a relatively high performance approaching that of the corresponding symmetrical coherent-state protocol, while paying only a small price.
\begin{figure}
\begin{centering}
\includegraphics[width=3.2in,height=2.6in]{optimizedVm}
\caption{The asymptotic security key rate of the UCVQKD protocol (red line) and the corresponding symmetrically modulated Gaussian coherent-state protocol (blue line) as a function of channel loss for every optimal modulation variance $V_{M}$. The inset shows the optimal $V_{M}$ for the current maximal secret key rate. Reconciliation efficiency is $\beta=95\%$, and excess noise is $\varepsilon=0.01$. }
\label{fig:optimizedVm}
\end{centering}
\end{figure}
\section{Composable security analysis of the UCVQKD protocol}
In the composable security analysis, we consider the detailed data processing in the UCVQKD system so that one can obtain the tightest security bound of the protocol. In this section, we give the first composable security analysis of the UCVQKD protocol against collective attacks. The definitions of composable security can be found in \cite{c24,c32,c33}. In what follows, we elaborate the composable security analysis of the UCVQKD protocol under collective attacks.
\begin{table}
\begin{centering}
\caption{The parameters of the UCVQKD protocol in the composable security analysis}
\label{tab:1}
\begin{tabular}{ll}
\hline\noalign{\smallskip}
{\it parameter} & {\it definition} \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\hline
$2n$ & number of light pulses (with single-quadrature modulation) \\
& exchanged in the UCVQKD. \\
\hline
$\lambda$ & fraction of the modulated quadratures detected correctly by Bob.\\
\hline
$l$ & size of the final key when the protocol did not abort. \\
\hline
$d$ & number of bits on which each measurement result is encoded. \\
\hline
$leak_{EC}$ & size of Bob's communication to Alice for error correction.\\
\hline
$\epsilon_{PE}$ & maximum failure probability of parameter estimation. \\
\hline
$\epsilon_{cor}$ & maximum probability that the keys of Alice and Bob \\
 & are not identical although the protocol did not abort.\\
\hline
$n_{PE}$ & number of bits that Bob sends to Alice in parameter estimation.\\
\hline
$\Omega_{a}^{max}$,$\Omega_{b}^{max}$, & bounds on covariance matrix elements, which must be apt in \\
$\Omega_{c}^{min}$ & the realization of the protocol.\\
\noalign{\smallskip}\hline
\end{tabular}
\end{centering}
\end{table}
In this section, we focus on the EB UCVQKD protocol with RR, which can be characterized by the parameters listed in Tab. \ref{tab:1}. First of all, the two trusted parties, Alice and Bob, each obtain $2n$ modes, forming the global state denoted by $\rho_{AB_{2}}^{2n}$. Then, homodyne detections are applied by Alice and Bob to measure their respective modes. It is known that homodyne detection on mode A of a two-mode squeezed vacuum state projects mode B onto a squeezed state, which is subsequently transformed into a coherent state after passing through a squeezing operation. Note that it is not necessary to measure mode B$_{2}$ using heterodyne detection, since only one quadrature has been modulated. Bob applies homodyne measurements on mode B$_{2}$, choosing the modulated quadrature with probability $\lambda$, obliging Alice to discard the measurement results of the unmodulated quadrature. After that, Alice and Bob obtain two continuous variables $X,Y\in \mathbb{R}^{2\lambda n}$. Bob discretizes his $2\lambda n$-vector $Y$ to obtain the $m$-bit string $U$, where $m=2\lambda dn$, i.e., each correct measurement result is encoded with $d$ bits. During error correction, Bob sends Alice the syndrome of $U$ with respect to a code agreed on in advance, and lets Alice compute a guess $U_{A}$ of Bob's string. Bob computes a hash of $U$ of length $\lceil \log(1/\epsilon_{cor})\rceil$ and sends it to Alice, who compares it with her own hash. The protocol resumes if both hashes coincide. The value $leak_{EC}$ corresponds to the total number of bits sent by Bob during this step. Subsequently, in parameter estimation, Bob sends $n_{PE}$ bits of $U$ to Alice, which are required for the calculation of $\omega_{a}$, $\omega_{b}$ and $\omega_{c}$ in Eqs. (17), (18) and (19). The protocol continues if $\omega_{a}\leq\Omega_{a}^{max}$, $\omega_{b}\leq\Omega_{b}^{max}$ and $\omega_{c}\geq\Omega_{c}^{min}$. 
Finally, Alice and Bob apply a random universal$_{2}$ hash function to their respective strings, resulting in the two final strings $S_{A}$ and $S_{B}$ of size $l$.
In the following, we elaborate the detailed composable security analysis of the UCVQKD protocol.
\subsection{State preparation}
Alice prepares $2n$ TMSV states $|\Psi\rangle^{\otimes2n}$ of variance $V$, each involving two modes A and B, as expressed in Eq. (4). Without loss of generality, we assume quadrature $\hat{x}$ is the modulated quadrature. The transformed covariance matrix is derived in Eq. (6) after the state is transmitted through the quantum channel with transmittance $\eta_{x,p}$ and excess noise $\varepsilon_{x,p}$. Once Alice and Bob have collected $2n$ pulses, the protocol proceeds to the next step.
\subsection{Measurement}
In the measurement step, both Alice and Bob have access to $2n$ modes, which Alice measures with a heterodyne detector and Bob with a homodyne detector, respectively. Because only quadrature $\hat{x}$ is modulated, Alice discards the measurement results derived from quadrature $\hat{p}$, while Bob measures quadrature $\hat{x}$ with probability $\lambda$. The reason is that Eve may know the protocol that Alice and Bob obey, and hence the two trusted parties must measure the correct quadrature with a certain probability $\lambda$. We assume that the two trusted parties are perfect, so that Eve cannot learn their measurement choices. After that, Alice and Bob form two vectors of length $2\lambda n$, denoted by $X=(X_{1},...,X_{2\lambda n})$ for Alice and $Y=(Y_{1},...,Y_{2\lambda n})$ for Bob. Notice that $X$ and $Y$ are continuous variables, which must be discretized to make the data suitable for processing. First, the real axis is divided into $2^{d}$ intervals, and this partition should be chosen to maximize the secret key rate when the quantum channel acts as a Gaussian channel with fixed transmittance and excess noise. The average variance of Bob's measurements can be calculated as $\frac{1}{2\lambda n}\left|\left| Y\right|\right|^{2}$. Thus, each interval (following the normal distribution $\mathcal{N}(0,\frac{1}{2\lambda n}\left|\left| Y\right|\right|^{2})$) can be assigned a distinct value by applying the discretization map $\mathcal{D}:Y\mapsto U$. Detailed discretization schemes can be found in the literature \cite{c34}. Finally, Alice obtains $X\in\mathbb{R}^{2\lambda n}$, whereas Bob obtains $Y\in\mathbb{R}^{2\lambda n}$ and $U\in{\{1,...,2^{d}\}}^{2\lambda n}$.
\subsection{Error correction}
In reverse reconciliation, Alice aims to recover Bob's string $U$. More specifically, Bob sends the value of $\left|\left|Y\right|\right|^{2}$, which is used in the discretization function $\mathcal{D}$, to Alice. An effective technique for error correction is to employ a low-density parity-check (LDPC) code \cite{c35,c36}, which can be defined by a sparse parity-check matrix $H$ of size $(2\lambda dn)\times(2\lambda dn-K)$, where $K$ is the length of the check bits. Bob computes the syndrome $HU$ of his (discretized) vector and sends it to Alice. This syndrome accounts for most of the leakage in error correction. Thus, a parameter $\beta$ called \emph{reconciliation efficiency} can be used to assess the quality of error correction
\begin{equation}
\beta=\frac{2\lambda dn-leak_{EC}}{2\lambda n\log_{2}(1+SNR)},
\end{equation}
where $SNR=\eta_{x,p}V_{M}/(2+\eta_{x,p}\varepsilon_{x,p})$ stands for the signal-to-noise ratio of the expected Gaussian channel mapping $X$ to $Y$. The reconciliation efficiency $\beta$ quantifies the performance of the error-correction procedure, with $\beta=1$ denoting perfect reconciliation. In practice, a value of about $0.95$ can be achieved for the Gaussian channel \cite{c36}.
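As a numerical sketch of these relations (illustrative only; the 15 km channel length is an assumption, while the remaining values are the fiducial ones quoted in the text), the per-symbol leakage implied by a given $\beta$ follows by solving the definition above for $leak_{EC}$:

```python
import math

def snr(eta, V_M, eps):
    # signal-to-noise ratio of the Gaussian channel mapping X to Y
    # (shot-noise units): SNR = eta*V_M / (2 + eta*eps)
    return eta * V_M / (2.0 + eta * eps)

def leak_per_symbol(beta, d, snr_value):
    # solving the definition of beta for leak_EC gives
    # leak_EC/(2*lambda*n) = d - beta*log2(1 + SNR) bits per symbol
    return d - beta * math.log2(1.0 + snr_value)

eta = 10 ** (-0.2 * 15 / 10)        # 15 km of fibre at 0.2 dB/km (assumed)
s = snr(eta, 20.0, 0.01)            # fiducial V_M = 20, epsilon = 0.01
leak = leak_per_symbol(0.95, 5, s)  # beta = 95%, d = 5
```

At these values the protocol discloses about 2.54 of the $d=5$ encoded bits per symbol to error correction.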
After receiving the side information from Bob, Alice can recover an estimate $\hat{U}$ of $U$ by decoding within the coset corresponding to the syndrome $HU$. For the composable security analysis, it is necessary to know whether error correction has succeeded, i.e. whether $\hat{U}=U$ or not. As mentioned above, Bob chooses a hash function mapping $2\lambda dn$-bit strings to strings of length $\lceil \log(1/\epsilon_{cor})\rceil$, and sends the hash of $U$ to Alice, who compares it with her own hash. If both hashes are identical, the protocol is $\epsilon_{cor}$-correct \cite{c32}.
\subsection{Parameter estimation}
The goal of parameter estimation is to infer the transmitted quantum state when one has access to only a small number of measurement outcomes. It can be deemed a rough version of quantum tomography \cite{c37}, which allows us to estimate the covariance matrix of the whole UCVQKD system.
As the protocol proceeds, Bob sends $n_{PE}$ bits of $U$ to Alice, allowing her to calculate the estimated values $\omega_{a}$, $\omega_{b}$ and $\omega_{c}$, where
\begin{equation}
\omega_{a}=\frac{\left|\left| X\right|\right|^{2}}{2\lambda n}\left[1+2\sqrt{\frac{\log(36/\epsilon_{PE})}{n}}\right]-1,
\end{equation}
\begin{equation}
\omega_{b}=\frac{\left|\left| Y\right|\right|^{2}}{2\lambda n}\left[1+2\sqrt{\frac{\log(36/\epsilon_{PE})}{n}}\right]-1,
\end{equation}
\begin{equation}
\omega_{c}=\frac{\langle X,Y\rangle}{2\lambda n}-5(\left|\left| X\right|\right|^{2}+\left|\left| Y\right|\right|^{2})\sqrt{\frac{\log(8/\epsilon_{PE})}{n^{3}}}.
\end{equation}
Since parameter estimation is performed after error correction, Alice already knows the values of $\left|\left| X\right|\right|^{2}$, $\left|\left| Y\right|\right|^{2}$ and $\langle X,Y\rangle$ at the end of error correction. Subsequently, she applies a PE test \cite{c24} to obtain a confidence region for the covariance matrix of the states $|TMSV\rangle^{\otimes2n}$. If the estimated values all obey the constraints $\omega_{a}\leq\Omega_{a}^{max}$, $\omega_{b}\leq\Omega_{b}^{max}$ and $\omega_{c}\geq\Omega_{c}^{min}$, the protocol continues; otherwise it aborts. The failure probability of parameter estimation is denoted by $\epsilon_{PE}$.
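The estimator computation can be sketched in a few lines (variable names mirror the symbols in the three equations above; this is an illustration, not the optimized implementation used for the plots):

```python
import math

def pe_estimators(X, Y, n, eps_PE):
    # Worst-case estimators omega_a, omega_b, omega_c of the PE test;
    # X and Y hold the 2*lambda*n correlated quadrature samples.
    m = len(X)
    sx = sum(x * x for x in X)                # ||X||^2
    sy = sum(y * y for y in Y)                # ||Y||^2
    sxy = sum(x * y for x, y in zip(X, Y))    # <X, Y>
    t = 1.0 + 2.0 * math.sqrt(math.log(36.0 / eps_PE) / n)
    w_a = sx / m * t - 1.0
    w_b = sy / m * t - 1.0
    w_c = sxy / m - 5.0 * (sx + sy) * math.sqrt(math.log(8.0 / eps_PE) / n ** 3)
    return w_a, w_b, w_c
```

The correction terms enlarge $\omega_{a}$, $\omega_{b}$ and shrink $\omega_{c}$ relative to the empirical moments, so that the true covariances are covered except with probability $\epsilon_{PE}$.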
However, for the UCVQKD protocol with $\hat{x}$-quadrature modulation, the corresponding coefficients of the covariance matrices of quadratures $\hat{x}$ and $\hat{p}$ are not identical, and only quadrature $\hat{x}$ carries the useful information. Thus, in order to obtain rigorous constraints for the composable security analysis of the UCVQKD protocol, one should choose the three bounds as
\begin{equation}
\Omega_{a}^{\max}=\sqrt{V_{M}+1}+\delta_{a},
\end{equation}
\begin{equation}
\Omega_{b}^{\max}=1+\eta_{x}(V_{M}+\varepsilon_{x})+\delta_{b},
\end{equation}
\begin{equation}
\Omega_{c}^{\min}=(\eta_{x}V_{M}\sqrt{V_{M}+1})^{\frac{1}{2}}-\delta_{c},
\end{equation}
where $\delta_{a}$, $\delta_{b}$ and $\delta_{c}$ are small positive constants which are optimized to ensure maximum secret key rate.
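For concreteness, a minimal sketch of the three acceptance bounds follows (the offsets $\delta_{a}$, $\delta_{b}$, $\delta_{c}$ default to zero here, whereas in practice they are small positive constants optimized for maximum key rate):

```python
import math

def pe_acceptance_bounds(V_M, eta_x, eps_x, d_a=0.0, d_b=0.0, d_c=0.0):
    # Omega_a^max, Omega_b^max, Omega_c^min for x-quadrature modulation,
    # transcribing the three displayed equations above.
    O_a = math.sqrt(V_M + 1.0) + d_a
    O_b = 1.0 + eta_x * (V_M + eps_x) + d_b
    O_c = math.sqrt(eta_x * V_M * math.sqrt(V_M + 1.0)) - d_c
    return O_a, O_b, O_c
```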
\subsection{Privacy amplification}
According to the above-mentioned data processing, Alice and Bob now hold the two strings $\hat{U}$ and $U$, respectively. In order to discard the information known by Eve, Alice chooses a universal$_{2}$ hash function \cite{c38,c39} and extracts $l$ bits of secret key $S_{A}$ from $\hat{U}$. Subsequently, Alice informs Bob which function she has chosen, and Bob uses it to compute $S_{B}$ \cite{c32}.
In this step, the string $U$ can be utilized for generating a key of size $l$ which is $\epsilon_{sec}$-secret provided that \cite{c40}
\begin{equation}
\epsilon_{sec}=\mathop{\min}\limits_{\epsilon'}\frac{1}{2}\sqrt{2^{\,l-H_{\min}^{\epsilon'}(U|E')}}+2\epsilon',
\end{equation}
where $E'$ represents all the information that Eve learns from the UCVQKD.
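Assuming the leftover-hash form of this bound, $\epsilon_{sec}=\frac{1}{2}\sqrt{2^{\,l-H_{\min}^{\epsilon'}(U|E')}}+2\epsilon'$ at fixed $\epsilon'$, it can be inverted for the largest key length $l$ compatible with a target secrecy level (an illustrative sketch, not the full finite-size optimization over $\epsilon'$):

```python
import math

def max_key_length(H_min, eps_sec, eps_prime):
    # Solve eps_sec = 0.5*sqrt(2**(l - H_min)) + 2*eps_prime for l:
    # l = H_min + 2*log2(2*(eps_sec - 2*eps_prime)).
    assert eps_sec > 2.0 * eps_prime   # otherwise no key can be extracted
    return H_min + 2.0 * math.log2(2.0 * (eps_sec - 2.0 * eps_prime))
```

For example, a smooth min-entropy of 1000 bits and a target $\epsilon_{sec}=10^{-9}$ (with $\epsilon'=10^{-10}$) cost roughly 58 bits of key length.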
\begin{figure}
\begin{centering}
\includegraphics[width=3.2in,height=2.6in]{UCC}
\caption{Secret key rate of the UCVQKD protocol against collective attacks in the frame of composable security, as a function of the number of exchanged signals $2n$. The black dashed line denotes the $15$ km performance of the symmetrically modulated Gaussian GG02 protocol, while the other lines, from top to bottom, represent the $10$ km (blue line), $12$ km (red line), $13$ km (green line) and $15$ km (brown line) performances of the UCVQKD protocol, respectively. The modulation variance is optimized, reconciliation efficiency is $\beta=95\%$, the excess noise is $\varepsilon_{x}=0.01$, and the discretization parameter is $d=5$.}
\label{fig:UCC}
\end{centering}
\end{figure}
Subsequently, we can calculate the secret key rate of the UCVQKD protocol (see Appendix D for the derivation of the secret key rate in the composable security framework). In Fig. \ref{fig:UCC}, we show the secret key rate of the UCVQKD protocol against collective attacks in the frame of composable security. The brown line shows the maximum transmission distance (about $15$ km) of the UCVQKD protocol, corresponding to a block size of about $10^{12}$ signals. As a comparison, we also plot the performance of the symmetrically modulated Gaussian GG02 protocol \cite{c17}, with the quantum-channel transmittance corresponding to a distance of $15$ km at a loss of $0.2$ dB/km. The simulation result, which shows that the UCVQKD protocol is outperformed by the GG02 protocol, meets our expectation and follows the trend of the previous asymptotic analysis. Although the performance of both protocols in the frame of composable security is worse than in the asymptotic case, it is closer to the practical implementation. By applying the composable security analysis to the UCVQKD protocol, we obtain the tightest bound on its secret key rate.
\section{Conclusion}
We have investigated the security of the UCVQKD protocol in both the asymptotic and composable finite-size regimes. We illustrate the relationship among the parameters related to the unmodulated quadrature of the UCVQKD system and estimate the precise secure region using two extreme scenarios. We establish composable security against collective attacks, and thereby achieve the tightest bound for the UCVQKD protocol.
Numerical simulations show the balanced secret key rate of the UCVQKD protocol. Although the key rate of the UCVQKD protocol is somewhat lower in the composable security analysis, the protocol can be simply implemented in practice at low cost.
\begin{acknowledgements}
We would like to thank V. C. Usenko for the helpful discussions. This work is supported by the National Natural Science Foundation of China (Grant No. 61379153, No. 61572529), and the Fundamental Research Funds for the Central Universities of Central South University (Grant No. 2017zzts147), and partly by China Postdoctoral Science Foundation (Grant No. 2013M542119, No. 2014T70772), Science and Technology Planning Project of Hunan Province, China (Grant No. 2015RS4032).
\end{acknowledgements}
\section{Introduction}
Gamma-ray bursts (GRB) have been detected in the past year at X-ray, as well
as optical and radio frequencies (e.g. Costa, et al. 1997, Sahu, et al.
1997, Frail et al. 1997; recent results are summarized in Meegan, Preece
\& Koshut, 1997). A cosmological origin is indicated by the measurements of
redshifts in at least two objects (Metzger, et al. 1997, Kulkarni, et al.
1998). The radiation is generally interpreted in terms of nonthermal
continuum emission from shocks in a relativistic fireball outflow, both
in the early high energy emission (Rees \& M\'esz\'aros~, 1992; M\'esz\'aros~ \& Rees,
1993; Piran, Shemi \& Narayan, 1993; Katz, 1994; Rees \& M\'esz\'aros~, 1994; Sari
\& Piran, 1995; Papathanassiou \& M\'esz\'aros~, 1996; Panaitescu, Wen, Laguna \&
M\'esz\'aros~, 1997) and in the subsequent afterglows at longer wavelengths (M\'esz\'aros~
\& Rees, 1997a; Vietri, 1997; Waxman, 1997; Wijers, Rees \& M\'esz\'aros~, 1997).
While the outflow is typically
assumed to be chemically homogeneous and smooth on average (except for
instabilities and shocks), it could have a substantial component of blobs
of denser material (e.g. from the small mass fraction near the surface of a
disrupted neutron star torus) which are entrained by the average outflow,
and coexist with it in pressure equilibrium. This denser material would be
richer in heavy elements, and could have significant spectral effects
caused by absorption edges from metals such as Fe, with consequences for
the early $\gamma$-ray and X-ray emission from GRB (the related effects
in GRB afterglows will be discussed elsewhere). In what follows we
investigate the physical conditions in such blobs, and calculate the effects
they have on the observed spectrum associated with internal shocks in GRB.
\section{Baryonic Outflow and Dense Blob Entrainment }
In a fireball outflow arising from the disruption of a compact binary
or the collapse of a fast rotating stellar core, internal shocks and
nonthermal radiation leading to $\gamma$-ray emission arise at radii
$r_{sh} = c t_v \eta^2 = 3\times 10^{14} t_v \eta_2^2$ cm, where $t_v \mathrel{\mathpalette\simov >}
10^{-3}$ s is the variability timescale and $\eta=10^2 \eta_2$ is the terminal
coasting bulk Lorentz factor, determined by the baryonic loading of the
outflow. We do not know to what extent the outflow is
beamed, but for the present discussion we suppose it is confined inside
channels of solid angle $\theta^2$. For a total luminosity $L=10^{51} L_{51}$ and
mass outflow rate $\dot M = L /( c^2 \eta )$ lasting for a time $t_w \mathrel{\mathpalette\simov >} t_v$,
the mean comoving density of nuclei in the smooth outflow is
\begin{equation}
n_o = (L/4\pi \theta^2 r^2 \eta^2 A m_p c^3)= 3\times 10^{12}L_{51}\theta^{-2}
r_{13}^{-2} A_o^{-1} \eta_2^{-2} ~\hbox{cm}^{-3}~,
\label{eq:no}
\end{equation}
where $r = 10^{13} r_{13}$ and $A_o$ is the mean particle atomic weight.
The total baryonic mass per unit logarithmic radius is $ M_o = 4\pi \theta^2
r^3 \eta^{-1} n_o A_o m_p = 4\times 10^{26} L_{51} \eta_2^{-3} r_{13}$ g (note
that $A_o$ cancels through $n_o \propto A_o^{-1}$, and that $M_o$ scales
linearly with $r$), and the corresponding smoothed-out column density of
nuclei is
\begin{equation}
\Sigma_o = 3\times 10^{23} L_{51} r_{13}^{-1}\theta^{-2} A_o^{-1} \eta_2^{-3}~~
\hbox{cm}^{-2}~.
\label{eq:Sigmao}
\end{equation}
The outflow can also carry magnetic fields whose comoving energy
density in the frame moving with $\eta$, expressed as a fraction $\xi_B$
of the total energy density, gives $B= 3 \times 10^5 L_{51}^{1/2} \xi_B^{1/2}
\theta^{-1} r_{13}^{-1} \eta_{2}^{-1}$. If the outflow is magnetically-driven
from the central object, then $\xi_B$ would be not much less than unity. An
important consequence of such strong fields is that the gyroradii are small. This
means that the flow can be treated as fluid-like. Moreover, conductivity and
diffusion are severely inhibited, at least across the field, so that blobs or
filaments of cooler and denser material could exist, in pressure balance with
their surroundings. (This possibility has been discussed in other contexts by
Celotti et al., 1998).
In addition to a smooth distribution of baryons, dense blobs of (possibly
Fe-enriched) matter may be able to survive and be entrained in the flow.
A small blob moving with bulk Lorentz factor $\Gamma_b$ (possibly less than
$\eta$) whose gas temperature was of order of the comoving photon temperature
$T \sim 10^7 T_7 \Gamma_{b2}^{-1}$ K (or $\sim$ 100 keV in the observer frame)
could have a particle density (measured in its own comoving frame) of up to
\begin{equation}
n_b \simeq 2\times 10^{18} L_{51}
\theta^{-2} r_{13}^{-2} T_7^{-1} \Gamma_{b2}^{-2} ~\hbox{cm}^{-3}~,
\label{eq:nb}
\end{equation}
This maximum density would be reached if its internal pressure balanced the
total external (magnetic and particle) pressure. If the blobs were composed of
iron-rich material from neutron-star debris, the density of nuclei would be
lower than $n_b$ by a factor $1/Z_b$, the average charge of the ions.
Such blobs are much denser than the corresponding ``background" baryon
density given in equation (\ref{eq:no}). We return in \S 4 to discuss the
internal thermal balance, and to show that they could indeed remain with $T_7
\mathrel{\mathpalette\simov <} 1$. However, we first consider the geometry and dynamics of such blobs.
Suppose the blobs have a volume filling factor $f_v = {\bar n_b}/n_b$.
This is of course likely to be a very small number. However, if the blobs are
individually very small, the surface covering factor $f_s$ can nonetheless be
substantial. If the blobs were spheres of characteristic radius $r_b$ then
$r_b n_b f_s$ would equal the smoothed-out column density over one comoving
length scale $c t_{exp}=r/\Gamma_b$ in the frame of the blobs, ${\bar \Sigma_b} =
{\bar n_b} (r/\Gamma_b)$. We
obtain $r_b=(r/\Gamma_b)( {\bar n_b} /n_b)f_s^{-1}=(r/\Gamma_b)f_v f_s^{-1}$. If one
sets the smoothed-out density of blobs moving at $\Gamma_b$, as seen in the
flow frame $\eta$, equal to a fraction $\alpha$ of the average flow comoving
particle density, $ {\bar n_b} =\alpha n_o \eta \Gamma_b^{-1}$, the volume filling
factor is $f_v=1.5\times 10^{-6} \alpha \eta_2^{-1}\Gamma_{b2} A_o^{-1}T_7$. The blob size
$r_b={\bar \Sigma_b}/(n_b f_s)$ is given by
\begin{equation}
r_b= {\bar \Sigma_b} /(n_b f_s) = 1.5\times 10^5\,\alpha\, r_{13} \eta_2^{-1}
A_o^{-1} T_7 f_s^{-1}~\hbox{cm}~,
\label{eq:rb}
\end{equation}
while the column density through a single blob is just $\Sigma_b=r_b n_b=
{\bar \Sigma_b} f_s^{-1}$, and the average smoothed out column density from blobs is
${\bar \Sigma_b}= \alpha \Sigma_o (\eta/\Gamma_b)^2$ in the $\Gamma_b$ frame. In order
to have a surface coverage factor $f_s >1$, there is an upper limit $r_b <
{\bar \Sigma_b}/n_b$ on the blob sizes. Realistically, the blobs are likely to be
streaks or filaments elongated along the magnetic field direction, the field
itself being predominantly perpendicular to the radial direction.
The above formulae carry over provided we identify $r_b$ with the smallest
dimension: this is likely to be the dimension transverse to the field, and can
readily be small enough to permit a large covering factor, while nonetheless
being large enough (compared to the gyroradius) to ensure a fluid-like
behavior.
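The filling factor and blob size can be verified numerically (a sketch of the scalings above; note that $\Gamma_b$ cancels in $r_b$, as in equation (\ref{eq:rb})):

```python
def blob_filling_factor(alpha, eta2, gamma_b2, A_o, T7):
    # f_v = n_bar_b / n_b, combining Eqs. (1) and (3) with
    # n_bar_b = alpha * n_o * eta / Gamma_b
    return 1.5e-6 * alpha * gamma_b2 * T7 / (eta2 * A_o)

def blob_size(alpha, r13, eta2, gamma_b2, A_o, T7, f_s):
    # r_b = (r / Gamma_b) * f_v / f_s, Eq. (4); result in cm
    comoving_scale = 1e13 * r13 / (100.0 * gamma_b2)
    return comoving_scale * blob_filling_factor(
        alpha, eta2, gamma_b2, A_o, T7) / f_s
```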
\section{Blob Velocities}
The blobs, even if consisting of gas entrained from a slower moving
environment, will tend to be accelerated by the mean MHD jet outflow.
This flow starts at some lower radius $r_l =10^6 r_{l,6}$ cm, and reaches
its saturation bulk Lorentz factor $\eta = L/({\dot M} c^2)= 10^2 \eta_2$ well
before internal shocks reconvert a significant fraction of the bulk kinetic
energy into radiation at radii $r_{sh} \sim 10^{13}r_{13}$ cm; still further out,
there may be a deceleration shock where the ejecta encounter the external
medium. Blobs entrained into the flow near $r_l$, or from the boundary of the
channel at larger radii, are accelerated by the flow; at or above the shock
radius Compton scattering of the intense photon flux is of comparable
importance for the dynamics.
The comoving radiation energy density in the flow is $u_\gamma =L/(4\pi
\theta^2 r^2 c \eta^2) = 3\times 10^9 L_{51} r_{13}^{-2} \theta^{-2} \eta_2^{-2} ~
\hbox{erg ~cm}^{-3}$. The radiation pressure would accelerate any
optically-thin blob into a frame in which the net Compton drag were zero,
on a timescale $t_{dr}= A m_p c^2/(\sigma_T c u_\gamma )= 4\times 10^1
L_{51}^{-1} r_{13}^2 \theta^2 \eta_2^2 A ~\hbox{s}~$.
This timescale (calculated taking account of the inertia of the ions, which
are, on the macroscopic level, constrained to move with the leptons)
is shorter than the comoving expansion (dynamic) time of the flow
$t_{ex}=r(c\eta)^{-1}=3~ r_{13} \eta_2^{-1} ~\hbox{s}$ for radii $r_{13} \mathrel{\mathpalette\simov <}
0.75\times 10^{-1} L_{51} \eta_2^{-3}\theta^{-2} A_o^{-1}$.
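The timescale comparison can be sketched as follows, transcribing the fiducial scalings just quoted:

```python
def compton_drag_time(L51, r13, theta, eta2, A):
    # t_dr = A m_p c^2 / (sigma_T c u_gamma), fiducial scaling from the text
    return 40.0 * r13**2 * theta**2 * eta2**2 * A / L51   # seconds

def expansion_time(r13, eta2):
    # comoving expansion (dynamic) time t_ex = r / (c eta)
    return 3.0 * r13 / eta2                               # seconds

def drag_dominates(L51, r13, theta, eta2, A):
    # drag wins (blob velocity slaved to the radiation field) for
    # r13 < 0.075 * L51 / (eta2**3 * theta**2 * A)
    return compton_drag_time(L51, r13, theta, eta2, A) < \
        expansion_time(r13, eta2)
```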
For an optically thin blob released at some radius $r_o$ the terminal Lorentz
factor achievable is (Phinney 1987) $\Gamma_{b,max}= (L / L_{Ed})^{1/3} \sim
2\times 10^4 L_{51}^{1/3}$. For an optically thick blob, the effective
acceleration is lowered by a factor ${{\cal N}_o}^{-1} = (\Sigma_{bo}/
1.5\times 10^{24} \hbox{cm}^{-2})^{-1}$, so
$\Gamma_{b,max} = (L/L_{Ed})^{1/3} {{\cal N}_o}^{-1/3} = 2\times 10^4
L_{51}^{1/3} {{\cal N}_o}^{-1/3}$.
Although the above expressions refer to radiation-pressure acceleration,
similar considerations apply to acceleration by the ram pressure and Poynting
flux of the smooth relativistic outflow. If $L$ is defined as the total
energy flux, the results are identical provided that ${\cal N}_o$ exceeds
$1.5\times 10^{24}$.
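A numerical sketch of the terminal Lorentz factor follows; here the Eddington luminosity is taken for a one-solar-mass object ($L_{Ed}\simeq 1.26\times 10^{38}$ erg s$^{-1}$), which is our assumption since the text leaves the central mass implicit:

```python
def gamma_b_max(L51, N_o=None):
    # Terminal Lorentz factor (L/L_Ed)^(1/3) for an optically thin blob
    # (Phinney 1987); optically thick blobs pick up a factor N_o^(-1/3),
    # with N_o = Sigma_bo / 1.5e24 cm^-2.
    L_Ed = 1.26e38   # Eddington luminosity of a 1 M_sun object (assumed)
    g = (L51 * 1e51 / L_Ed) ** (1.0 / 3.0)
    if N_o is not None:
        g *= N_o ** (-1.0 / 3.0)
    return g
```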
A blob immersed in a hydromagnetic flow carrying a flux $L$
behaves in a similar way. If its column density is sufficiently
low, its motion adjusts to the same Lorentz factor as the
surrounding flow. The condition for this to happen is that
\begin{equation}
{{\cal N}_o} < (L/L_{Ed}) (r/r_o)^{-1} \Gamma_b^{-3}~.
\label{eq:calN}
\end{equation}
Note that the dependence on $\Gamma_b$ arises because, if the blob
moved with a slightly different speed from the mean flow, the
drag force on it (in the comoving frame) scales as $r^{-2}\Gamma_b^{-2}$
and the time available, at a given $r$, scales as $r \Gamma_b^{-1}$.
A blob for which ${\cal N}_o$ is low enough to satisfy the condition
(\ref{eq:calN}) at the radius where the velocity of the mean outflow saturates
will coast stably outwards in pressure balance with its surroundings.
Blobs with higher ${\cal N}_o$, for which (\ref{eq:calN}) is not satisfied,
would be accelerated by the ram pressure associated with
the energy flux $L$, but would not attain the same Lorentz factor
as their surroundings. The thickness of such blobs would adjust
to be equal to the scale height corresponding to the acceleration,
which would be proportional to the blob temperature $T$, and also
proportional to ${\cal N}_o$ times $\Gamma_b^2$.
Thus we expect that the flow, out at radii $\sim 10^{13}$ cm, would
contain small blobs with Lorentz factor of order $\eta$, and
also larger blobs with lower Lorentz factors. As we discuss
later, this slower-moving material could have an important
effect on the time-evolution of the spectra of gamma ray bursts.
The proportions of slow-moving and fast-moving blobs would
depend on the uncertain details of how the initial entrainment
occurs, and also on the effects of instabilities during the
outflow. Blobs small enough to satisfy the condition (\ref{eq:calN}), which
in effect constitute a ``mist" of clouds or filaments embedded in the
flow (preserved by strong magnetic fields against diffusion
effects), are not subject to any obvious dynamical instability.
Larger blobs, on the other hand, would seem in principle vulnerable
to both Rayleigh-Taylor and Kelvin-Helmholtz instabilities.
However, in a magnetically-dominated outflow, acceleration
of blobs could plausibly occur without triggering Rayleigh-
Taylor instability. The situation could be analogous to, for
instance, solar prominences, where magnetic stresses support
cool gas against gravity (c.f. the classic work of Kippenhahn
\& Schl\"uter, 1957, and many later variants). Kelvin-Helmholtz
instabilities are more problematic: even though these tend to be
suppressed by magnetic fields with a component along the flow direction
(e.g. Hardee, et al., 1992) or by fields in the blobs themselves, it is
unclear to what degree they are, and it is unlikely that they can be
eliminated completely. What we are envisaging is a more
extreme version of what we know is going on in SS433 (where
a combination of mass flux and emissivity constraints forces
one to a model involving cool blobs with small volume-filling
factor accelerated to 10,000 times their internal sound speed).
The range of blob sizes (and blob Lorentz factors) at $10^{13}$ cm
will therefore depend on (a) the nature of the entrainment process;
(b) the extent to which slower (heavier) blobs are shredded by
Kelvin-Helmholtz instabilities; and (c) the possible countervailing
effect of coalescence, which can be important when the covering
factor is of order unity and a range of velocities is present.
We regard this as an open question, and turn now to consider the
thermal equilibrium within blobs, which depends primarily on
the radiation field and the pressure.
\section{Temperature and Ionization State}
The ionization rate is expected to be extremely high in a GRB outflow,
but the blobs are so dense that the recombination rate is exceptionally
high as well. This has two important consequences. First, the
'ionization parameter', which depends on the ratio of ionization and
recombination rates, and determines the equilibrium state of ionization in the
blobs, is not vastly different from what is familiar in some X-ray sources.
Second, because the recombination timescale is so short, each electron can
recombine (and be reionized) during the outflow timescale, so the blobs can
reprocess most of the photon flux from a burst, even though their total mass
is low.
The ionization parameter $\Xi =L/ n_b r^2$ (e.g. Kallman \& McCray, 1982),
evaluated in the comoving frame, is $\Xi =L/ n_b r^2 \Gamma_b^2 = 5\times 10^2
\theta^2 T_7 $. For $\Xi \mathrel{\mathpalette\simov >} 10^3$, most Fe would be present as FeXXVI (i.e.
H-like) or fully stripped; this would still be true if the material were so
enriched in Fe that this is the dominant species.
Self-shielding would be inevitable if the total number of recombinations
became comparable with the number of ionizing photons available. The total
number of recombinations per second per unit logarithmic radius for a plasma
with mean ionic charge $Z_b$ is ${\cal R}_r \sim \alpha n_i n_e V f_v$, where
$V= 4\pi\theta^2 r^3\eta^{-1}$, $n_e \simeq n_b$ is electron density,
$n_i= n_b /Z_b$ is ion density, and $\alpha \sim 2\times 10^{-11} Z^2 T^{-1/2}$
is the recombination coefficient for hydrogenic ions.
The total number of ionizations per second in the same volume will be
approximately equal to the number of ionizing photons injected per second
above the shock region, ${\cal R}_i \sim L /(\Gamma_b^2 h \nu_i)$, where, for
10 KeV photons in the comoving frame, $h\nu_i \sim 10^{-8}\varepsilon_{10}$ erg. Thus
\begin{eqnarray}
{\cal R}_r & \simeq & 4\times 10^{54} \alpha L_{51}^2 \xi_Br_{13}^{-1}\theta^{-2}
\eta_2^{-2}\Gamma_{b2}^{-3} A_o^{-1} T_7^{-3/2} Z_b ~~\hbox{s}^{-1}~; \\
{\cal R}_i & \simeq & 10^{55}\Gamma_{b2}^{-2} \varepsilon_{10}^{-1}~~\hbox{s}^{-1}~.
\label{eq:ioniz}
\end{eqnarray}
If the blob parameters were such that ${\cal R}_r \mathrel{\mathpalette\simov >} {\cal R}_i$, the
optically thin assumption would not be self-consistent, and self-shielding
could be important. Bound-free and bound-bound line cooling could then
have an additional effect in determining the blob temperature. However, the
blobs cannot cool below the black-body temperature of the comoving radiation
field $u_\gamma =3\times 10^9 L_{51} r_{13}^{-2} \theta^{-2} \eta_2^{-2}~
\hbox{erg ~cm}^{-3}$, which is $T_{bb} \sim 10^6 L_{51}^{1/4}(r_{13}\theta
\eta_2)^{-1/2}$ K; this suggests that H will always be almost completely
ionized by collisions. Note also that absorption by ions in the diffuse flow
is negligible, because for a given total mass the recombination rate in blobs
is larger by the same ratio as the densities.
We have already shown that small blobs could contribute a covering factor
of order unity. In conjunction with the above inference that the recombination
rate can be comparable with the total photon production rate, this tells us that
the blobs could 'reprocess' much of the radiation. The optically thin estimate
(\ref{eq:ioniz}) and the above temperature estimates indicate that,
independently of any self-shielding, substantial recombinations of highly
ionized heavy elements such as Fe would be expected. They can thereby create
absorption features, the absorbed energy being re-emitted as (very broadened)
lines.
\section{Optical Depth and Spectral Widths}
Absorption edges are expected to form at energies corresponding to the
K-$\alpha$ absorption of hydrogenic ions.
The hydrogenic photoionization cross section is $\sigma_{th} \simeq 8 \times
10^{-18} Z^{-2} \hbox{cm}^2$ at the threshold $h\nu_{th} \simeq 13.6 Z^2$
eV, decreasing above that as $(\nu/\nu_{th})^{-3}$. E.g., for FeXXVI the
threshold in the blob frame is at 9.28 KeV, and the cross section is
$\sigma_{th} \sim 1.2\times 10^{-20}$ cm$^2$. Multiplying by the mean ion
column density from blobs ${\bar \Sigma_b} / Z_b =\alpha\Sigma_o Z_b^{-1}
(\eta_2/\Gamma_{b2})^2$ (equation [\ref{eq:Sigmao}]), for hydrogenic ions the mean
optical depth and the observer frame threshold energy are
\begin{eqnarray}
\tau_{th} & \simeq & 1.4\times 10^{2}\alpha L_{51}r_{13}^{-1} \theta^{-2} A_o^{-1}
\eta_2^{-1} \Gamma_{b2}^{-2} x_i (Z_b/26)^{-3}~~; \\
h\nu_{th} & \simeq & 0.928 ~(Z_b/26)^2 \Gamma_{b2}~~\hbox{MeV}~,
\label{eq:nuedge}
\end{eqnarray}
where we normalized to Fe XXVI blobs, $x_i$ being the ionic abundance fraction
by number. For Fe XXV the optical depth would be similar, modulo the
ionization fraction, and the threshold is at $0.883\Gamma_{b2}$ MeV, while for HeII
the optical depth could be larger, if $\Xi \mathrel{\mathpalette\simov <} 10^2$, and the edge would be
at $0.544\Gamma_{b2}$ KeV. (An HI edge at $0.136\Gamma_{b2}$ KeV might just be possible
if $\Xi \mathrel{\mathpalette\simov <} 50$ for cooler blobs at larger radii).
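As a numeric illustration of these scalings (unity fiducials and an assumed ionic fraction $x_i = 0.1$, chosen only for the example):

```python
# Mean K-edge optical depth and observer-frame threshold energy for
# hydrogenic ions, following the scalings above (Fe XXVI: Z_b = 26).
def tau_threshold(alpha=1.0, L51=1.0, r13=1.0, theta=1.0, A_o=1.0,
                  eta2=1.0, Gamma_b2=1.0, x_i=0.1, Z_b=26.0):
    return (1.4e2 * alpha * L51 / r13 / theta**2 / A_o / eta2
            / Gamma_b2**2 * x_i * (Z_b / 26.0)**-3)

def edge_energy_MeV(Z_b=26.0, Gamma_b2=1.0):
    return 0.928 * (Z_b / 26.0)**2 * Gamma_b2

# Fe XXVI with x_i = 0.1: a deep edge (tau ~ 14) near 0.93 MeV.
print(tau_threshold(), edge_energy_MeV())
```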
Bluewards of the absorption edges one would expect the flux to be blanketed
up to a comoving photon energy $\nu_{max}$ such that $\sigma_{th}
(\nu_{max}/\nu_{th})^{-3} ({\bar \Sigma_b} /Z_b) =1$, where it gradually rejoins
the continuum level.
In addition to edges, K-$\alpha$ resonant features are also expected at energies
redwards of the edges, e.g. at comoving energies of 6.9 KeV for FeXXVI, or
$0.69\Gamma_2$ MeV in the observer frame. The expected equivalent width in the
damping wing dominated regime is $(W_\nu / \nu ) \simeq 0.15~(\alpha L_{51}
r_{13}^{-1}\theta^{-2} A^{-1} Z_b^{-1} \eta_2^{-1}\Gamma_{b2}^{-2} x_{i,-1} )^{1/2}$,
if we normalize to abundances $x_i\sim 10^{-1} x_{i,-1}$; there would be similar
resonant lines for other ion species, since hydrogenic ions have similar $f$
and $A_{ul}\lambda_{lu}^2$ values. While such widths would be significant, bulk
velocity broadening (see below) would smear out any line features even more.
Moreover, absorption lines would be partially compensated by emission
from the blobs.
It would be tempting to speculate that such features could be associated with
the lines reported by Ginga (e.g. Murakami et al., 1988, Fenimore, et al.,
1988). However, this would require special circumstances leading to a fairly
narrow range of blob velocities, which might only be present in a small fraction
of all cases. In general, any spectral features will be spread out due to the
range of bulk Lorentz factors $\Gamma$ sampled by the line of sight. Emission
line features associated with recombination will be further broadened because,
even for a given $\Gamma$, there would be contributions with different Doppler
blue-shifts from material with velocity making different angles with our line
of sight. Even for a single value of $\Gamma_b$ this would introduce a
broadening by $(\Delta \nu /\nu)_{ang}\sim 0.3-0.5$. The effect of this is to
smear by this amount the red wing of any of the above spectral features.
This smearing, however, would not be as important for the deep edges discussed
above (equation [8]), which would be expected to survive.
The maximum blob Lorentz factor is $\eta$, but there would be a
spread below this maximum, given by values of $\Gamma_b$ for which ${\cal N}$
exceeds the value (\ref{eq:calN}).
Slower blobs moving towards the observer take longer ($\propto \Gamma_b^{-2}$
in observer time) to reach a given radius. Therefore, early in the burst only
high- $\Gamma$ blobs will have reached the radius ($\sim c t_v \eta^2$) where
internal shocks occur. However, when the burst has been active for times $\gg
t_v$, slower blobs whose Lorentz factor is of order $\eta (t_{ob}/t_v)^{-1/2}$
will have had time to reach the location of the emission, where $t_{ob}$ is the
observer frame time measured from the start of the burst. This leads to an
increasing spread of absorbing blob Lorentz factors
\begin{equation}
(\Delta \Gamma / \Gamma)_b \simeq
(\Gamma_{b,f}-\Gamma_b(t_{ob}))/\Gamma_{b,f} =
1 - (t_{ob,o} / t_{ob} )^{1/2}~,
\label{eq:delgammab}
\end{equation}
where $t_{ob,o} \simeq 10^{-1}r_{13}\Gamma_{b2}^{-2}$ s is the observer frame blob
dynamic time at $r_{13} \sim 1$ (which in the wind regime used here is unrelated
to the burst duration). All lines, edges and maximum blanketing energies will
therefore have an increasing spread
$\Delta\nu/\nu \sim (\Delta\Gamma /\Gamma)_b$ with the time dependence of
equation (10),
extending from an upper value
corresponding to $\Gamma_{b,f}$ down to a lower limit which moves to softer
energies in time. The FeXXVI bound-free absorption will therefore move from
blanketing the range $0.9\Gamma_2 - 2.2\Gamma_2$ MeV down to blanketing the range
$0.09\Gamma_2 - 2.2 \Gamma_2$ MeV in a time $\sim 10 r_{13}\Gamma_{b2}^{-2}$ s after the
burst starts.
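The time dependence of equation (10) is straightforward to evaluate numerically; the following sketch only restates the scalings given above:

```python
import math

# Spread of absorbing-blob Lorentz factors, equation (10), with the
# observer-frame blob dynamic time t_obo ~ 0.1 * r13 / Gamma_b2^2 s.
def lorentz_spread(t_ob, r13=1.0, Gamma_b2=1.0):
    t_obo = 0.1 * r13 / Gamma_b2**2
    return 1.0 - math.sqrt(t_obo / t_ob)

# The Fe XXVI edge (0.928 Gamma_b2 MeV for the fastest blobs) migrates
# downward as Gamma_b(t_ob) = Gamma_b,f * sqrt(t_obo / t_ob).
def lower_edge_MeV(t_ob, r13=1.0, Gamma_b2=1.0):
    t_obo = 0.1 * r13 / Gamma_b2**2
    return 0.928 * math.sqrt(t_obo / t_ob)

# After ~10 s the spread reaches 0.9 and the lower edge has moved
# from ~0.93 MeV down to ~0.09 MeV, as stated in the text.
print(lorentz_spread(10.0), lower_edge_MeV(10.0))
```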
\section {Conclusions}
Even though the emission from gamma-ray bursts is primarily non-thermal, we
have shown that the observed spectrum may be substantially modified by the
presence of highly ionized thermal plasma, with blueshifts of 10-100.
The rate of absorption and re-emission by a thermal plasma, per unit mass,
scales with density; the high ambient and ram pressure of the relativistic
outflow can confine plasma to such high densities that only a very small
total mass can have conspicuous effects. The material would be in a 'mist'
of blobs or filaments filling a small fraction of the volume, but which are
individually so small that they provide a significant covering factor. Even
though small, these blobs can be envisaged as fluid-like because the gyroradius
in megagauss magnetic fields is much smaller still. They can be accelerated
to relativistic speed, without necessarily being disrupted, by the momentum
of the jet-like outflow or by radiation pressure. This material may be debris
from a disrupted neutron star, e.g. M\'esz\'aros~ \& Rees, 1997b (in which case it could
be highly enriched in heavy elements), or entrained from the boundaries of
the jet in a 'hypernova' (e.g. Paczy\'nski, 1998) model.
We obviously cannot predict how much material would be expelled in this form,
nor how the conditions near the central engine may evolve over the duration of
long bursts. Nor do we know how the blobs would be distributed across the jet,
though entrained material would tend to be more prominent near the boundaries
(i.e. angles of order $\theta$ from the axis) rather than on the axis. However,
some general trends seem generic to this picture.
The most prominent feature would be absorption above the photoionization edge
of FeXXVI, leading to a feature at this energy (i.e. 9.3 Kev multiplied by the
appropriate Doppler shift). In prolonged and complex bursts, it is likely that
the primary emission comes from a series of internal shocks, at a distance
$10^{13} - 10^{14}$ cm from the compact object. We would expect the feature to
shift towards lower energies, because later in the burst there would be time
for lower-$\Gamma$ material to have reached the location of the reverse shock.
(If different sub-bursts occur in shocks at different radii, then the absorption
effects should be more conspicuous in those close in, and this may introduce
a scatter about the general tendency for the cut-off to soften towards the
end of long bursts.) Spectra as observed by BATSE (most photons measured being
in the range 50 - 500 KeV) would tend therefore to indicate, for objects with
Fe-rich blobs, a spectral softening in time. Initially the burst would be
classified as an HE (having a high energy component in the fourth LAD channel
above 350 KeV), later to become an NHE (without significant emission above 350
KeV), with departures due to the previous scatter, e.g. as reported by
Pendleton, et al. 1998. Also, when an average temporal evolution of many bursts
is considered, it has been shown by Fenimore, 1998 that there is a clear
trend towards softening as the burst progresses.
While there are alternative
explanations for this softening, such as slowing down and cooling of the
emitting material, we suggest that absorption of the kind discussed in this
paper (characterized by the time dependence of equation [10]) may be relevant
to such correlations.
\acknowledgements{This research has been supported by NASA NAG5-2857 and
the Royal Society.}
\section{Introduction}
The accelerating pace of digitisation is driving digital interaction into all areas of our daily life, and from the resulting mass of data, a substantial portion can be quantified into human behavioural signals.
Learning to recognise emotional cues in interactions, e.\,g.,\, taking place via video, is the purpose of the growing field of Emotion AI. In this process, various modalities, such as body language, voice, text, and facial expression, are examined for patterns that help map the cues to specific emotions. The reference data necessary to learn the mapping is annotated by humans, often as category labels (e.\,g.,\,{} happy, sad, surprised, etc.) or as continuous annotations. For the continuous case, behavioural and cognitive scientists assume that the human brain is not divided into hard-wired regions, and that emotions are better represented by dominant primitives (dimensions) whose complex interaction results in a specific emotion (e.\,g.,\, the dimensional axes of arousal and valence) \cite{russell1980_Circumplex}.
The growing demand for emotion technology in various domains has led to an increased interest in the annotation of such data. However, the annotation process itself is not trivial to execute, and obtaining meaningful reference data to develop models for automatic pattern recognition is a challenge. One such challenge is the dependency on human raters. When rating the perceived data (e.\,g.,\, videos), time-delays in the reaction \cite{nicolaou2014dynamic}, as well as systematic disagreement due to personal bias and other task-related reasons
are well known~\cite{booth2018novel, atcheson2018demonstrating}.
To counteract this, it is common practice to involve multiple humans in the annotation of the same source and to fuse these perceptions. Since emotions are inherently subjective, these fused signals are coined the gold-standard.
To date, none of the proposed fusion methods has become a de-facto standard. One reason for this may be that a convenient comparison of the fusion outcomes is hardly feasible. The implementation of the methods is often distributed over many different source bases, coded in different programming languages and frameworks, or is not publicly available at all. An issue is also the dependency on outdated software (package) dependencies.
Another unresolved problem is the transformation of continuous emotion signals into more general class labels that are easier for humans to interpret. In an empirical approach, Hoffmann et al.~\cite{Hoffmann_2012} mapped discrete emotions into the dimensional emotion space \cite{russell1980_Circumplex}. Similarly, Laurier et al.~\cite{Laurier_2009} aimed to cluster emotion tags to find clusters corresponding to the four quadrants of the arousal-valence dimensions.
A tool that supports this transformation process by the automatic creation of meaningful classes has not yet been presented in the literature.
With this contribution, we want to tackle both of these issues by proposing an easy-to-use, well-documented toolbox. The input data can be any continuous annotation recorded with annotation software (e.\,g.,\,{} a human-controlled joystick or mouse) or directly from a (physiological) device (e.\,g.,\,{} smartwatches). Additionally, the annotations can be easily standardised, smoothed and fused by the most common gold-standard creation techniques, such as \emph{Estimator Weighted Evaluator\,} (EWE), \emph{DTW Barycenter Averaging\,} (DBA), and \emph{Generic-Canonical Time Warping\,} (GCTW). This makes a direct comparison of the available fusion methods possible, leaving broad flexibility to database creators while allowing reproducibility and exchange of the set of parameters used. In addition, we add a novel gold-standard method, \emph{Rater Aligned Annotation Weighting\,} (\textsc{RAAW\,}), to the set of fusion tools; it is derived from the methods introduced here and inspired by the results we obtained during the work on the toolbox, as well as by the limitations of the existing fusion methods. Furthermore, we propose a simple way to extract time-series features from these signals, which may aid the creation of emotional classes from emotion dimensions. The toolbox can be started directly from a Docker container without installing dependencies, and an open-source GitHub repository is available to the community for further development.
Note, the core focus of this work is emotional annotations. However, all kinds of time-series data are omnipresent in our daily life. Changes in stocks, energy consumption, or weather are all recorded over time and, thus, have natural time-series properties. Predicting these values in time is often challenging, and a simplification by fusing them (e.\,g.,\, the energy consumption of several households) or by transforming sequences into summary classes via clustering (e.\,g.,\, days of a week) may be beneficial for such applications as well.
\vspace{-0.2cm}
\section{Methodology and System Overview}
In the following section, we first describe the methods that underpin the functionality of our toolbox and conclude by placing them in the context of the functionalities in \Cref{sec:box_overall}.
\vspace{-0.2cm}
\subsection{Smoothing of Annotations\label{sec:smoothnorm}}
\begin{figure}[t!]
\centering
\includegraphics[width=.45\textwidth]{figures/smoothing.pdf}
\caption{An example of valence annotation signals of three annotators. Figure (a) depicts the raw signals, while the other two figures show the filtered signals of a moving average filter (b), and a cubic Savitzky-Golay filter (c), respectively, with a filter frame size of 17 values (4.25\,s). Evidently, the moving average indicates a visibly stronger smoothing effect, when compared to the Savitzky-Golay filter, which preserves signal features more closely.}
\label{fig:annow_raw}
\vspace{-0.1cm}
\end{figure}
As for all fine-grained time-series, short-term errors and distortions can occur in the annotation process. Smoothing digital filters are useful to mitigate these negative noise effects~\cite{thammasan2016investigation, wang2018towards}. One common signal processing approach for this is the \emph{Savitzky-Golay filter} (SavGol), which increases the precision of the data points using a low-degree polynomial over a moving filter~\cite{savitzky1964smoothing}. In our context, this method has the advantage that it still preserves high-frequency characteristics~\cite{thammasan2016investigation}. Also widely applied is the \emph{Moving Average Filter} (MAF), which employs a moving average of a given window to smooth the signal gently. The MAF applied with a $4.25$\,s filter frame (or 17 time steps) is illustrated in \Cref{fig:annow_raw}, alongside a SavGol example and the raw annotations.
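Both filters can be sketched in a few lines; the synthetic trace below is an illustrative assumption, while the 17-sample cubic configuration matches \Cref{fig:annow_raw}:

```python
import numpy as np
from scipy.signal import savgol_filter

# A noisy synthetic valence trace (purely illustrative data).
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.3 * rng.standard_normal(200)

# Cubic Savitzky-Golay filter with a 17-sample frame (4.25 s at 4 Hz).
sg = savgol_filter(x, window_length=17, polyorder=3)

# Moving average with the same frame; mode='same' keeps the length.
ma = np.convolve(x, np.ones(17) / 17, mode='same')

# Both outputs preserve the signal length; the MAF smooths harder.
print(sg.shape, ma.shape)
```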
\vspace{-0.1cm}
\subsection{Gold Standard Fusion Methods}
\begin{figure}[t!]
\centering
\includegraphics[ width=0.49\columnwidth]{figures/100_A.pdf}
\includegraphics[width=0.49\columnwidth]{figures/100_A_warping_and_fused.pdf}
\caption{The left side shows all fusion methods in \textsc{MuSe-Toolbox}\,{} on a sample annotation (\textsc{MuSe-CaR}\,{} database, video id: 100, arousal). The right side is a detailed illustration of the \emph{Rater Aligned Annotation Weighting\,}{} (\textsc{RAAW\,}) alignment, including the warping paths.}
\label{fig:methods}
\end{figure}
A gold-standard method tries to establish a consensus from a group of individual ratings. Some methods are specifically developed for emotion annotations, i.\,e.,\, \textsc{EWE\,}, and \textsc{RAAW\,}, while others are derived from more generic principles of time-series aggregation, i.\,e.,\, \textsc{DBA\,}, \textsc{GCTW\,}. A comparison of all methods is visualised in \Cref{fig:methods}.
\subsubsection{Estimator Weighted Evaluator (EWE)\label{sec:ewe}}
The \emph{Estimator Weighted Evaluator\,} (\textsc{EWE\,}) is based on the reliability evaluation of the raters~\cite{schuller2013intelligent}. It is essentially a weighted mean of all rater-dependent annotations, sometimes interpreted as a mean weighted by the raters' similarity~\cite{grimm2005evaluation, hantke2016introducing}. To compute the weights, the cross-correlation of each annotation with the mean of all other annotations is calculated. It can be formally expressed by
\begin{equation}
\hat{x}_{n}^{E W E}=\frac{1}{\sum_{k=1}^{K} r_{k}} \sum_{k=1}^{K} r_{k} \hat{x}_{n, k},
\end{equation}
where $r_k$ is the similarity of the $k$-th annotator to the other time-series. Typical methods for calculating the similarity between time-series are the Euclidean metric or the Pearson correlation coefficient. However, since neither takes sequence order, phase shift, or scaling variance into account, we use the concordance correlation coefficient (CCC) for the similarity calculation:
\begin{equation}
CCC(\hat{\theta}, \theta) = \frac{2 \times COV(\hat{\theta}, \theta)}{\sigma^2_{\hat{\theta}} + \sigma^2_\theta + (\mu_{\hat{\theta}} - \mu_\theta)^2} = \frac{2E[(\hat{\theta}-\mu_{\hat{\theta}})(\theta-\mu_{\theta})]}{\sigma^2_{\hat{\theta}} + \sigma^2_\theta + (\mu_{\hat{\theta}} - \mu_\theta)^2}.
\end{equation}
Here, $x$ is the time-series data, $\theta$ is a series of $n$ annotations, and $\hat{\theta}$ the reference annotation. This method is broadly applied across different tasks in affective computing~\cite{ringeval2013introducing, ringeval2017avec,kossaifi2019sewa,stappen2020muse}.
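A minimal numpy sketch of this CCC-weighted fusion (an illustrative re-implementation, not the toolbox code) could look as follows:

```python
import numpy as np

def ccc(a, b):
    """Concordance correlation coefficient between two 1-D series."""
    ma, mb = a.mean(), b.mean()
    cov = ((a - ma) * (b - mb)).mean()
    return 2 * cov / (a.var() + b.var() + (ma - mb) ** 2)

def ewe(annotations):
    """CCC-weighted mean over raters (rows), in the spirit of EWE:
    each rater is weighted by the CCC between their trace and the
    mean of all other raters' traces."""
    annotations = np.asarray(annotations, dtype=float)
    weights = np.array([
        ccc(annotations[i], np.delete(annotations, i, axis=0).mean(axis=0))
        for i in range(annotations.shape[0])
    ])
    return weights @ annotations / weights.sum()

# Three toy raters observing the same trend with offsets and noise.
raters = np.array([[0.1, 0.3, 0.5, 0.7],
                   [0.2, 0.4, 0.6, 0.8],
                   [0.0, 0.2, 0.5, 0.6]])
gold = ewe(raters)
print(gold.shape)  # (4,)
```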
\subsubsection{DTW Barycentre Averaging (DBA)\label{sec:dba}}
Averaging in Dynamic Time Warping (DTW) spaces is widely adopted for similarity-based temporal alignment in the field of machine learning. Similar to the Euclidean metric and CCC, DTW implements a distance metric, adding elastic properties that compute the best global alignment based on a one-to-many mapping of points in two time-series.
The \emph{DTW Barycenter Averaging\,} (\textsc{DBA\,}) method available in our framework is based on an algorithm originally developed for general time-series barycentre computation, i.\,e.,\, for computing the optimal average sequence. A barycentre is a time-series $b$ that represents the central tendencies of multiple time-series $x$. In this particular version, these tendencies are determined by a sub-gradient majorize-minimize algorithm~\cite{schultz2018nonsmooth}, which has the advantage of fusing time-series of varying length. DTW can be expressed as:
\begin{equation}
DTW = \min \sum_{i} d\left(b, x_{i}\right)^{2}.
\end{equation}
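To make the elastic alignment concrete, a plain dynamic-programming DTW distance can be sketched as follows (a didactic sketch only; \textsc{DBA\,} adds an iterative barycentre update on top of such alignments):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) dynamic-programming DTW between 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A time-shifted copy is a perfect match under the elastic DTW
# alignment, while the rigid point-wise comparison is penalised.
x = np.array([0., 0., 1., 2., 1., 0.])
y = np.array([0., 1., 2., 1., 0., 0.])
print(dtw_distance(x, y), np.sum((x - y) ** 2))
```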
\subsubsection{Generic-Canonical Time Warping (GCTW)\label{sec:gctw}}
Another extension of DTW is Canonical Time Warping (CTW)~\cite{zhou2009canonical}, which in addition to DTW integrates Canonical Correlation Analysis~\cite{anderson1958introduction}, a method for extracting shared features from two multi-variate data points. CTW was originally developed with the goal of aligning human motion and multimodal time series more precisely in time~\cite{zhou2009canonical}. The combination of these two approaches allows a more flexible way of time-warping by adding monotonic functions that can better handle local spatial deformations of the time series. The same authors~\cite{zhou2015generalized} further extended this approach to \emph{Generic-Canonical Time Warping\,} (\textsc{GCTW\,}), which enables a computationally efficient fusion of multiple sequences by reducing the quadratic to linear complexity. Furthermore, the identified features with high correlation are emphasised by weighting.
\subsubsection{\emph{Rater Aligned Annotation Weighting\,} (\textsc{RAAW\,})\label{sec:awe}}
In the context of emotions, we propose a novel method, \emph{Rater Aligned Annotation Weighting\,} (\textsc{RAAW\,}), for the fusion of dimensional annotations for gold-standard creation. \textsc{RAAW\,} capitalises on the merits of the underlying alignment technique DTW and the inherent nature of the EWE method. More specifically, DTW is used to align the varying and changing response times of individual annotators over time ({cf.\,} \Cref{fig:methods}).
Previously, this alignment to the fused signal was made by brute force, shifting the global or individual annotations by a few seconds and measuring the resulting performance.
The optimal number of emotion annotators is estimated to be at least three, depending on their experience and the difficulty of the task~\cite{honig2010many}. To perform the alignment in a resource-efficient manner --- even for many annotations --- we utilise the DTW variant GCTW~\cite{zhou2015generalized}. Subsequently, the similarity of the individual aligned signals is calculated using the CCC to accommodate the inter-rater agreement (subjectivity). The signals are then weighted accordingly (negatively correlated ones can be disregarded entirely) before they are finally merged using EWE~\cite{grimm2005evaluation}.
\vspace{-0.1cm}
\subsection{Emotional Signal Features\label{sec:signalfeatures}}
Emotion annotations can be seen as a quasi-continuous signal with a high sampling rate \cite{kossaifi2019sewa, stappen2020summary, stappen2020muse}. Extracting features from audio-visual and psychological signals is fairly common in intelligent computational analysis~\cite{schuller2013intelligent, schuller2020interspeech}. In the context of this work, we extract (time-series) features from an emotional signal segment to summarise the time period in a meaningful way. The resulting representation summarising the segment over time is a vector of the size of the selected features. Starting with the most interpretable features, common statistical measures are extracted~\cite{Sagha17-PTP}, such as the standard deviation ($std$), mean, median and a range of quantiles ($q_x$). However, these features do not reflect the characteristics of changes over time.
For this reason, the toolkit further offers to extract more complex time-series features namely: relative energy (\textit{relEnergy})~\cite{christ2018time}, mean absolute change (\textit{MACh}), mean change (\textit{MCh}), mean central approximation of the second derivatives (\textit{MSDC}), relative crossings of a point $m$ (\textit{CrM})~\cite{christ2018time}, relative number of peaks (\textit{relPeaks})~\cite{christ2018time,palshikar2009simple}, skewness~\cite{doane2011measuring,ekman1992argument}, kurtosis~\cite{westfall2014kurtosis}, relative longest strike above the mean (\textit{relLSAMe}), relative longest strike below the mean (\textit{relLSBMe}), relative count below mean (\textit{relCBMe}), relative sum of changes (\textit{relSOC}), first and last location of the minimum and maximum (\textit{FLMi}, \textit{LLMi}, \textit{FLMa}, \textit{LLMa}), and percentage of reoccurring data points ($PreDa$).
Note that features labelled as ``relative'' are normalised by the length of a segment, in order to limit the influence of varying segment lengths on the unsupervised clustering.
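A few of these features can be sketched in plain numpy (names follow the text; the exact toolbox definitions may differ in detail):

```python
import numpy as np

def segment_features(x):
    """Illustrative summary features for one annotation segment."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    diffs = np.diff(x)
    # interior local maxima, normalised by segment length (relPeaks)
    peaks = np.sum((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))
    return {
        'mean': x.mean(),
        'std': x.std(),
        'median': np.median(x),
        'q25': np.quantile(x, 0.25),
        'MACh': np.abs(diffs).mean(),         # mean absolute change
        'MCh': diffs.mean(),                  # mean change
        'relPeaks': peaks / n,                # relative number of peaks
        'relCBMe': np.sum(x < x.mean()) / n,  # relative count below mean
    }

feats = segment_features([0.1, 0.5, 0.3, 0.7, 0.2, 0.6])
print(sorted(feats))
```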
\vspace{-0.1cm}
\subsection{Dimension Reduction\label{sec:dimreduc}}
Large dimensional feature sets often lead to unintended side effects, such as the curse of dimensionality~\cite{trunk1979problem}. However, by reducing or selecting certain dimensions of the available features, these effects can be counteracted.
Principal component analysis (PCA) is a well-known dimension reduction method that transforms features into principal components~\cite{Zaki:31:PCA}. These components are generated by projecting the original features into a new orthogonal coordinate system. This enables the reduction of the dimensions while preserving most of the data variation.
Another method for dimensionality reduction is the Self-organising Map (SOM), a type of unsupervised, shallow neural network that transforms a high-dimensional input space into a low-dimensional output space~\cite{Kohonen:1990}. Each output neuron competes with the other neurons to represent a particular input pattern, which makes it possible to obtain a condensed representation of the most important relationships in the dataset. SOMs can also be used as a clustering or visualisation tool, as they are considered to have low susceptibility to outliers and noise~\cite{Vesanto:7}.
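An illustrative PCA reduction of a toy segment-feature matrix with scikit-learn (the 20$\times$5 toy data and the component count are assumptions made for the example):

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy feature matrix: 20 segments, 5 features that are linear
# combinations of 2 latent factors (so 2 components suffice).
rng = np.random.default_rng(1)
base = rng.standard_normal((20, 2))
features = np.hstack([base, base @ rng.standard_normal((2, 3)) * 0.1])

pca = PCA(n_components=2).fit(features)
reduced = pca.transform(features)
print(reduced.shape, float(pca.explained_variance_ratio_.sum()))
```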
\vspace{-0.1cm}
\subsection{Clustering\label{sec:cluster}}
\subsubsection{K-means and fuzzy c-means clustering\label{sec:kmeans}}
A common way to differentiate k-means\, from fuzzy c-means\, algorithms is how a datapoint belongs to the resulting outcome, which can either be an assignment to exactly one cluster (crisp), or to multiple ones with a certain probability (fuzzy). The most popular fuzzy clustering method is the fuzzy c-means\, algorithm~\cite{Bezdek:1984}, based on the k-means\, algorithm~\cite{Hartigan:1979}. To this end, a fixed number of clusters is defined. The cluster centres are initially set randomly, and the Euclidean distances from them to the data points are calculated. These are assigned to the clusters so that there is a minimal variance increase. By step-wise optimisation (similar to an expectation maximisation (EM) algorithm) of the centres and assignments, the algorithm converges after a few iterations. For the fuzzy version, the degree of overlap between clusters can be specified using the fuzzifier $m$ parameter.
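A minimal scikit-learn example of crisp k-means on toy 2-D segment features (the toy data are assumed for illustration; fuzzy c-means is not part of scikit-learn and is omitted here):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated blobs of toy feature vectors.
rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(0.0, 0.1, (30, 2)),
                 rng.normal(1.0, 0.1, (30, 2))])

# k must be fixed in advance; each point gets exactly one label.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pts)
labels = km.labels_
print(len(set(labels[:30])), len(set(labels[30:])))  # 1 1
```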
\vspace{-0.1cm}
\subsubsection{Gaussian mixture model \label{sec:gmm}}
Similar to c-means, a Gaussian Mixture Model\, (GMM) introduces fuzziness into the clustering process and allows the weak assignment of a single datapoint to several clusters simultaneously. For this purpose, a probabilistic model is generated that attempts to describe all data by Gaussian distributions with different parameters. The optimisation process to find a suitable covariance structure of the data as well as the centres of the latent Gaussian distributions uses the EM algorithm as in k-means\,.
\vspace{-0.1cm}
\subsubsection{Agglomerative clustering\label{sec:agllo}}
Besides the k-means\,, two other types of crisp clustering are common: agglomerative~\cite{Kaushik:26} and density clustering. Agglomerative is a hierarchical clustering technique in which each datapoint starts as its own cluster and is successively merged with the closest datapoint (i.\,e.,\, cluster) into higher-level clusters. As soon as the distance between two clusters is maximised or the minimum number of clusters is reached, the clustering process is terminated.
\vspace{-0.1cm}
\subsubsection{Density-Based Spatial Clustering of Applications with Noise (DBSCAN)\label{sec:dense}}
Density-clustering algorithms such as Density-Based Spatial Clustering of Applications with Noise (DBSCAN)
have become more popular in recent years~\cite{Campello:2013}. The main difference to other methods is that DBSCAN also uses the local density of points instead of relying only on distance measures~\cite{Ester:1996}. DBSCAN provides an answer to two common problems in clustering: a) the number of clusters does not have to be specified in advance, and b) it automatically detects outliers, which are then excluded from the clustering~\cite{Sharma:25,Kaushik:26}. With other methods, these outliers have to be identified and removed manually, otherwise there is a risk that the clusters get distorted. The reason for this is that each core point must have at least a minimum number of points (the min\_samples parameter) within its $\epsilon$-neighborhood. However, this simultaneously causes a firm reliance on the defined parameters.
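The behaviour described above --- no preset cluster count, outliers labelled $-1$, and a firm reliance on eps and min\_samples --- can be illustrated with scikit-learn (toy data and parameter values are assumptions):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense blobs plus one lone outlier.
rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(0.0, 0.05, (30, 2)),
                 rng.normal(1.0, 0.05, (30, 2)),
                 [[5.0, 5.0]]])

# eps is the neighbourhood radius; min_samples the density threshold.
labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(pts)
print(sorted(int(l) for l in set(labels)))  # [-1, 0, 1]
```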
\vspace{-0.1cm}
\subsubsection{ Measures\label{sec:measures}}
Clusters are usually evaluated using internal metrics and external assessment. The internal metrics focus on how similar the data points of a cluster are (compactness), and how far the clusters differ from each other (separation) \cite{Liu:6}.
The Calinski-Harabasz Index (CHI) calculates the weighted average of the sums of squares within and between clusters.
Also distance-based is the Silhouette Coefficient (SiC), but it is bounded within an interval of -1 to 1 (1 corresponds to an optimal cluster), allowing for easier comparability between runs and procedures~\cite{Zaki:31:S}. The Davies-Bouldin Index (DBI) is based on similarity measures and decreases with increasing cluster separability~\cite{Zaki:31:DB}. Specifically for fuzzy c-means\,, the Fuzzy Partition Coefficient (FPC) can be employed, and measures the separability of fuzzy c-means\, using Dunn's partition coefficients \cite{Dunn}. Finally, we use the S\_Dbw-Index, which is based on intra-cluster variance to measure compactness, where the average density in the area between clusters and the density of clusters is calculated (smaller is better).
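Three of these internal measures are directly available in scikit-learn; the toy clustering below is an illustrative assumption (FPC and the S\_Dbw-Index are not part of scikit-learn):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                             davies_bouldin_score)

# Two compact, well-separated toy clusters.
rng = np.random.default_rng(4)
pts = np.vstack([rng.normal(0.0, 0.1, (40, 2)),
                 rng.normal(2.0, 0.1, (40, 2))])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pts)

sic = silhouette_score(pts, labels)          # in [-1, 1], higher is better
chi = calinski_harabasz_score(pts, labels)   # higher is better
dbi = davies_bouldin_score(pts, labels)      # lower is better
print(sic > 0.8, dbi < 0.5)
```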
\vspace{-0.1cm}
\subsection{MuSeFuseBox System Overview\label{sec:box_overall}}
\vspace{-0.1cm}
\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth]{figures/AnnotationToolbox.pdf}
\caption{System overview of \textsc{MuSe-Toolbox}\,}
\label{fig:overview}
\end{figure}
The introduced methodology is integrated into the \textsc{MuSe-Toolbox}\, as depicted in \Cref{fig:overview}. The upper part shows the annotation fusion process. Given the input of multiple annotations, these can first be smoothed and/or normalised ({cf.\,} \Cref{sec:smoothnorm}), which has shown benefits in previous works~\cite{martinez2020msp, ringeval2017avec}. The normalisation is either applied on video- or annotator-level. Next, the pre-processed annotations are fused using either \textsc{DBA\,}, \textsc{EWE\,}, \textsc{GCTW\,}, or \textsc{RAAW\,} ({cf.\,} \Cref{sec:smoothnorm}). The lower part represents the creation process of discrete classes from a given signal. All, or a selection of the introduced features from \Cref{sec:signalfeatures} are extracted from segments of the fused annotation signal. These summary features are either clustered directly by one of the methods described in \Cref{sec:cluster} or first reduced in dimensionality ({cf.\,} \Cref{sec:dimreduc}) and subsequently clustered. There is an option to either cluster on all data or on the training partition only. For the latter option, the classes of the development and test partitions are predicted based on the resulting clusters from the training set. For internal evaluation, the measures described in~\Cref{sec:measures} are calculated. Since the generated clusters are intended to be used as classification targets, an exclusion of clustering proposals based on a rule-of-thumb can be activated to avoid strong class imbalances. This excludes cluster proposals where one or more clusters are smaller than a factor of the by chance level. For example, the prediction of four classes has a by chance level of 25\,\%. If the factor is set to 0.5, then, the smallest proposed cluster has to cover at least 12.5\,\% of the data. Finally, the profiling provides all the information necessary to enable an additional external evaluation by a human. 
For profiling, we provide a) standard features (mean, standard deviation, etc.), b) visualisations, such as radar charts of the top distinctive features and scatter plots, and c) correlation of the features within a cluster. Based on these, the resulting clusters can be interpreted, and a name can be given.
\vspace{-0.1cm}
\subsection{Implementation Details}
The \textsc{MuSe-Toolbox}\, is implemented in Python and relies on several packages, most notably numpy, pandas, scikit-learn, oct2py, and scipy. It can be used as a command line tool (over 50 different settings and configurations are available) or from the Python API. The implementation of DBA is adapted from \cite{Petitjean2011-DBA,Petitjean2014-ICDM-2,Forestier2017-ICDM}\footnote{\url{https://github.com/fpetitjean/DBA},
GNU General Public License} and DTW components are adapted from the Matlab implementation\footnote{\url{https://github.com/zhfe99}, free for research use (no licence)} of \cite{zhou2015generalized}, which we ported to the open-source Octave language and environment and access via oct2py for our calculations. The code is publicly available on GitHub under the GNU General Public License\footnote{\url{https://github.com/lstappen/MuSe-Toolbox}}.
\section{Experiments}
To demonstrate the capabilities of the \textsc{MuSe-Toolbox}\,, we run experiments on the produced gold standards, using them to train models for dimensional affect recognition. To this end, we utilise the \textsc{MuSe-CaR}\, database \cite{stappen2021multimodal}, used in the 2020 and 2021 Multimodal Sentiment Analysis real-life media Emotion Challenges (MuSe) \cite{stappen2020muse,stappen2021muse}, and several other works~\cite{stappen2021sentiment,stappen2021estimation,stappen2021unsupervised,sun2020multi,fu2020aaec,li2020multi}.
\vspace{-0.1cm}
\subsection{Continuous Emotion Fusion}
In this section, we present the results of several experiments based on outputs from our toolkit to demonstrate its functionality. As explained in the previous sections, gold-standard methods lead to qualitatively different results, meaning that the quantitative results alone are only of limited value.
For our experiments, we build on MuSe \cite{stappen2020muse,stappen2021muse}, a challenge series co-located with the ACM Multimedia Conference, which aims to set benchmarks for the prediction of emotions and sentiment with deep learning methods in-the-wild. Since the experimental conditions are predefined and publicly available, this is an ideal test ground. The database utilised for the challenge is \textsc{MuSe-CaR}\,, which provides 40 hours of YouTube review videos of human-emotion interactions. Every 250\,ms of the video dataset is labelled by at least five annotators, whose annotations are used for the following experiments. For more information, we refer the interested reader to the challenge \cite{stappen2021muse} and database paper \cite{stappen2021multimodal}.
We use two of the provided feature sets, \textsc{VGGish\,} and \textsc{BERT\,}, from the challenge \cite{stappen2021muse} to predict arousal and valence. \textsc{VGGish\,}~\cite{hershey2017cnn} is a 128-dimensional audio feature set pre-trained with deep learning methods on an audio dataset including YouTube snippets (AudioSet), in which the audio samples are differentiated into more than 600 different classes. \textsc{BERT\,}~\cite{devlin2019bert} embeds words in vectors by using transformer networks. Its deep learning architecture is pre-trained on several datasets and training tasks. The embeddings used here are the sum of the last four output layers, which consists of a total of 768 dimensions. Both embeddings were extracted at the same sample rate as the labels. Furthermore, the LSTM-RNN baseline model made available by the organisers is utilised and re-trained for 100 epochs with batch size 1024 and learning rate $lr=0.005$ on the new targets. Further, we run a parameter optimisation for the hidden state dimensionality $h = \{32, 64\}$ for arousal and $h = \{64, 128\}$ to predict valence, as this selection has previously worked well for the \textsc{MuSe-CaR}\, data, as shown in the 2021 MuSe Challenge baseline publication \cite{stappen2021muse}. As the challenges use the CCC as the competition measure, we use the CCC for evaluation as well as for the loss function.
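For reference, the CCC can be computed from its standard definition as follows (a minimal sketch; this is not necessarily the challenge's exact implementation):

```python
import numpy as np

def ccc(x, y):
    """Concordance Correlation Coefficient between two sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                      # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

gold = np.array([0.1, 0.4, 0.2, 0.8, 0.5])
pred_perfect = gold.copy()
pred_offset = gold + 0.5                           # constant bias lowers the CCC
```

Unlike the Pearson correlation, the CCC penalises both a shift in mean and a mismatch in scale, which is why a merely offset prediction scores below 1.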
\subsubsection{Smoothing}
\begin{table}[t!]
\caption{Results comparing with and without pre-smoothing using a savgol filter with a size of 5 on all annotation fusion techniques.}
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}lrr|rr||rr|rr@{}}
\toprule
& \multicolumn{4}{c||}{\textbf{Arousal}} & \multicolumn{4}{c}{\textbf{Valence}}\\
\midrule
& \multicolumn{2}{c|}{--} & \multicolumn{2}{c}{smooth} & \multicolumn{2}{c}{--} & \multicolumn{2}{c}{smooth} \\
& \multicolumn{1}{l}{Devel.} & \multicolumn{1}{l|}{Test} & \multicolumn{1}{l}{Devel.} & \multicolumn{1}{l||}{Test} & \multicolumn{1}{l}{Devel.} & \multicolumn{1}{l|}{Test} & \multicolumn{1}{l}{Devel.} & \multicolumn{1}{l}{Test} \\
\midrule
\textbf{DBA} & .2634 & .2615 & .2368 & .2480 & .3580 & .4209 & .2583 & .3638 \\
\textbf{GCTW} & .4809 & .3481 & .4840 & .3502 & .4394 & .5594 & .4503 & .5848 \\
\textbf{EWE} & .4410 & .2513 & .4386 & .3210 & .4476 & .5614 & .4454 & .5703 \\
\textbf{RAAW} & .4266 & .2778 & .4225 & .3514 & .4589 & .5493 & .4482 & .5698 \\
\hline
$\varnothing$ & .4030 & .2847 & .3955 & .3177 & .4260 & .5228 & .4006 & .5222 \\
\bottomrule
\end{tabular}
}
\label{tab:Smoothing}
\end{table}
The effect of smoothing can be seen in~\Cref{fig:annow_raw}\,c), which compares the raw annotations to the signals filtered with the moving average and Savitzky-Golay filters. It is apparent that the moving average filter smooths the signal much more than the Savitzky-Golay filter, even to a point at which information from the signal is lost. Hence, we adjust the filter frame-size of the moving average filter to a smaller value compared to the Savitzky-Golay filter.
Following the pre-processing and fusion, the fused signal can further be smoothed using convolutional smoothing. A kernel size of 15 has proven to yield high-quality gold-standard annotations whilst reducing signal noise.
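A minimal sketch of this two-stage smoothing, using scipy's Savitzky-Golay filter for pre-smoothing and a moving-average kernel for the convolutional smoothing (parameter values follow the text; the signal and variable names are ours):

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
# Synthetic noisy annotation signal standing in for a fused gold standard
signal = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.3 * rng.standard_normal(200)

# Pre-smoothing with a Savitzky-Golay filter (frame size 5)
pre = savgol_filter(signal, window_length=5, polyorder=2)

# Convolutional smoothing of the fused signal (moving-average kernel, size 15)
kernel = np.ones(15) / 15
fused_smooth = np.convolve(pre, kernel, mode="same")
```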
We further compare the performance of all fusion methods when applying the Savitzky-Golay filter for pre-smoothing in \Cref{tab:Smoothing}. In general, it is noticeable that the DBA results are considerably below the level of the other three methods. When predicting arousal, the models tend to overfit, while underfitting can be observed for the prediction of valence. This was also found in \cite{stappen2020muse, li2020multi, sun2020multi} and is possibly due to the chosen data split, which is speaker-independent, hence leading to imbalances in the label distribution \cite{stappen2021multimodal}.
For arousal, the results without pre-smoothing are slightly stronger on the development set. On the test set, the overfitting gap for \textsc{EWE\,} and \textsc{RAAW\,} decreases by at least .07 CCC with the application of the pre-smoothing filter. For valence, the results without the pre-smoothing filter are also slightly better on the development set, with the exception of \textsc{GCTW\,}. Pre-smoothing, however, produces atypically low results for DBA, which may indicate the sensitivity of this fusion method. With the other methods, the test results improved moderately.
\subsubsection{Normalisation}
\begin{table}[t!]
\caption{Results comparing different standardisation techniques (no pre-smoothing) on all annotation fusion techniques.}
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}lrr|rr|rr||rr|rr|rr@{}}
\toprule
& \multicolumn{6}{c||}{\textbf{Arousal}} & \multicolumn{6}{c}{\textbf{Valence}}\\
\midrule
& \multicolumn{2}{c|}{--} & \multicolumn{2}{c|}{per video} & \multicolumn{2}{c||}{per annotator} & \multicolumn{2}{c|}{--} & \multicolumn{2}{c|}{per video} & \multicolumn{2}{c}{per annotator} \\
& \multicolumn{1}{l}{Devel.} & \multicolumn{1}{l|}{Test} & \multicolumn{1}{l}{Devel.} & \multicolumn{1}{l|}{Test} & \multicolumn{1}{l}{Devel.} & \multicolumn{1}{l||}{Test} & \multicolumn{1}{l}{Devel.} & \multicolumn{1}{l|}{Test} & \multicolumn{1}{l}{Devel.} & \multicolumn{1}{l|}{Test} & \multicolumn{1}{l}{Devel.} & \multicolumn{1}{l}{Test} \\
\midrule
\textbf{DBA} & .2811 & .1993 & .3616 & .2685 & .2634 & .2615 & .3072 & .2868 & .3580 & .4209 & .2800 & .3991 \\
\textbf{GCTW} & .4969 & .3558 & .5175 & .3207 & .4809 & .3481 & .4353 & .5345 & .4256 & .5170 & .4394 & .5594 \\
\textbf{EWE} & .4750 & .3563 & .4923 & .2746 & .4410 & .2513 & .4452 & .5551 & .4479 & .5193 & .4476 & .5614 \\
\textbf{RAAW} & .4546 & .2814 & .4898 & .3817 & .4266 & .2778 & .4411 & .5326 & .4430 & .5568 & .4589 & .5493 \\
\hline
$\varnothing$ & .4269 & .2982 & .4653 & .3114 & .4030 & .2847 & .4072 & .4773 & .4186 & .5035 & .4065 & .5173 \\
\bottomrule
\end{tabular}
}
\label{tab:Normalisation}
\end{table}
Across all methods, the maximum deviation of the average results on the test set is low at .02 CCC for arousal and .04 CCC for valence ({cf.\,} \Cref{tab:Normalisation}). On an individual level, there are stronger differences, e.\,g.,\, the results for the fusion of arousal with \textsc{RAAW\,} differ by more than .05 CCC on the development set and .1 CCC on the test set, with clear advantages for standardisation at the video level. This is the case for most gold-standard procedures when predicting arousal (development set). The results for the prediction of valence are predominantly highest when standardised on the annotator level.
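The two standardisation variants differ only in the grouping variable used for the z-score, which can be sketched as follows (illustrative code, not the toolbox implementation; per-video passes video ids as groups, per-annotator passes annotator ids):

```python
import numpy as np

def standardise(values, group_ids):
    """z-standardise annotation values within each group
    (group_ids = video ids for per-video, annotator ids for per-annotator)."""
    values = np.asarray(values, float)
    group_ids = np.asarray(group_ids)
    out = np.empty_like(values)
    for g in np.unique(group_ids):
        mask = group_ids == g
        vals = values[mask]
        out[mask] = (vals - vals.mean()) / (vals.std() + 1e-12)
    return out

values = np.array([1.0, 2.0, 3.0, 10.0, 20.0, 30.0])
videos = np.array([0, 0, 0, 1, 1, 1])
z = standardise(values, videos)   # each video now has mean 0 and std 1
```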
\begin{figure*}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[trim={3.25cm 0 0 0},clip,width=\textwidth]{figures/correlation_labels_features_abs.pdf}
\caption[Network2]%
{{\small Absolute correlation between a label and all other features.}}
\label{fig:sub_corr}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[trim={0 0 -1.2cm 0},clip,width=\textwidth]{figures/clustered_data_visualisation_c_0_c_1.pdf}
\caption[]%
{{\small Cluster classes visualisation\\ after post-PCA.}}
\label{fig:sub_cloud}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[trim={0 0 0 0},clip,width=\textwidth]{figures/distinctive_features_all_clusters_standardised.pdf}
\caption[]%
{{\small Distinctive features across all classes.}}
\label{fig:sub_radar}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.23\textwidth}
\centering
\includegraphics[trim={0 0 0 0},clip,width=\textwidth]{figures/clustered_data_visualisation_labels_percentile_90_valence.pdf}
\caption[]%
{{\small Example correlation between a label and the 90th percentile.}}
\label{fig:sub_perentile}
\end{subfigure}
\caption[ The average and standard deviation of critical parameters ]
{\small Exemplary visualisation capabilities of \textsc{MuSe-Toolbox}\, for the class extraction process.}
\label{fig:visual}
\end{figure*}
\vspace{-0.1cm}
\subsection{Emotional Class Extraction}
Clustering is by nature an unsupervised machine learning process, and so human monitoring of the resulting class clusters is needed to ensure they are based on meaningful patterns. The \textsc{MuSe-Toolbox}\, provides a number of tools for this purpose. After each clustering outcome, detailed profiling is carried out, which contains statistics, e.\,g.,\, mean and standard deviation, as well as visualisations of the obtained clusters. \Cref{fig:visual} summarises these: a) shows the correlation between each feature and the cluster classes, which aids the identification of influential features. b) offers a visual interpretation of the clustered features through dimension reduction. c) provides an overview of the degrees of influence of individual features in the entire cluster class, ordered by the overall importance (distance from the average value across all classes), while d) shows the statistical (normalised) distribution of a single feature per class.
The outcome of clustering is highly dependent on the dataset, and specifically the distribution of underlying emotional annotations. For this reason, it is difficult to generalise the current findings. In the following, we summarise a few general tendencies that we observe from current experiments.
For this, we run experiments applying k-means\,, fuzzy c-means\,, GMM, and agglomerative clustering\,
on \textsc{MuSe-CaR}\,. For the input features, we select four different feature sets:
distribution-based features $set_{basic}$\footnote{mean, median, std., $q_{\{5, 10, 25, 33, 66, 75, 90, 95\}}$},
time-series features as in
$set_{change}$\footnote{std., rel.\ energy, rel.\ sum of changes, rel.\ number peaks, rel.\ long strike below mean, rel.\ long strike above mean, rel.\ count below mean},
$set_{ext.}$\footnote{$set_{basic} \cup set_{change}$ + rel.\ number crossing 0, percentage of reoccurring data points to all data points},
and a very large feature set $set_{large}$\footnote{$set_{ext.}$ + skewness, kurtosis, mean abs.\ change, mean change, mean second derivative central, and the first and last location of the minimum and maximum, respectively}. We further explore reducing the dimensionality before clustering by setting the PCA parameter to \{None, 2, 5\}, and we specify the number of clusters as \{3, 5\}.
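A sketch of how such summary features can be extracted from a segment of the fused signal (feature names follow the footnotes above; the exact toolbox definitions may differ):

```python
import numpy as np

def basic_features(seg):
    """set_basic: distribution features of a gold-standard segment."""
    ps = [5, 10, 25, 33, 66, 75, 90, 95]
    qs = np.percentile(seg, ps)
    feats = {"mean": seg.mean(), "median": np.median(seg), "std": seg.std()}
    feats.update({f"q{p}": v for p, v in zip(ps, qs)})
    return feats

def change_features(seg):
    """A few set_change-style time-series features, relative to segment length."""
    n = len(seg)
    return {
        "rel_sum_of_changes": np.abs(np.diff(seg)).sum() / n,
        "rel_count_below_mean": (seg < seg.mean()).sum() / n,
        "rel_energy": (seg ** 2).sum() / n,
    }

seg = np.sin(np.linspace(0, 2 * np.pi, 100))   # toy segment
feats = {**basic_features(seg), **change_features(seg)}
```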
We define one criterion of a fruitful outcome, i.\,e.,\, whether the cluster measures achieve optimal results ({cf.\,} \Cref{sec:measures} for difficulties).
Furthermore, the identification of distinct cluster characteristics and a similar size of the classes may indicate optimal clusters. The experiments show that the composition of the features has a major influence on achieving the desired results. The features describing the distribution ($set_{basic}$) achieve slightly better results in terms of clustering measures than the feature set describing changes over time ($set_{change}$). However, the latter seems to capture specific clusters very well, which is expressed by a small set of features ({cf.\,} \Cref{fig:sub_radar}) that stands out strongly from the average characteristics across all clusters. Mixing these two feature sets into $set_{ext.}$ leads to the most evenly distributed class sizes. We recommend experimenting with the two general feature types and compiling one's own set of reliable features for a given dataset, depending on the criteria and results obtained.
Regarding the class distribution, in 9 out of the 96 setups created, at least one cluster does not cover a sufficient percentage of the total data points to fulfil our class-size-by-chance threshold of 25\,\%. With an increasing number of clusters (above five), all algorithms tend to split up existing smaller class clusters into even smaller ones, making it more likely to violate the class size rule. This behaviour occurs regardless of the feature set used.
In our feature reduction experiments, brute force was used to determine the best number of components. It showed that almost all clustering metrics except S\_dbw perform better when a two-component PCA is used before clustering. However, in terms of the ability to predict the generated class clusters, in our case, five components are the better choice (by-chance level vs maximum result). Another decisive aspect in this process is the size (and types) of the feature sets used for dimension reduction.
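The reduce-then-cluster pipeline with training-only fitting can be sketched with scikit-learn (synthetic data; the parameter values follow the text, but this is not the toolbox's exact code path):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
X = rng.standard_normal((300, 12))            # 300 segments x 12 summary features

# Reduce to a small number of components before clustering
Xr = PCA(n_components=5).fit_transform(X)

# Fit on the training partition only, then predict dev/test classes
train, rest = Xr[:200], Xr[200:]
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(train)
rest_classes = km.predict(rest)               # cluster ids for held-out segments
```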
Prediction results obtained by using this process can be found in \cite{stappen2021muse}.
Finally, we find two other high impact aspects noteworthy: the segment length and the data basis for clustering.
Regarding the segment length, the time series features (e.\,g.,\, long strike below mean) are sensitive to the length of the segment compared to the features that only describe the distribution (e.\,g.,\, quantile). If segments of varying length are given, it is recommended to adjust the length of the segments if possible and to convert the features by length from an absolute to a relative value corresponding to the length of the segment, avoiding the creation of meaningless classes.
For the affected features implemented in this toolkit, the normalisation by length is already performed by default.
Depending on the partitioning of the dataset, i.\,e.,\, the homogeneity between training, development, and test partitions, a clustering algorithm can generate completely different cluster classes.
If the tool is used in the sense of an end-to-end process, where first the continuous signals are predicted and then a transformation into classes is automatically performed by a pre-trained clustering model, the exclusive use of the training dataset is advisable to test the method under real conditions. If it is a one-off process where suitable discrete classes are to be found for a given continuous annotation, the extraction can also be carried out on all data.
Of further note, we have found that using DBSCAN\footnote{DBSCAN parameters: $\epsilon=\{0.01, 0.05, 0.1, 0.25\}$; min\_samples$=\{3, 5\}$; PCA=\{None, 2, 5\}} for this task is less suitable. First, the class size threshold must be disabled because at least one resulting class does not meet the minimum size (e.\,g.,\, the noise cluster). Second, the algorithm tends to produce either a very low (1--2) or a very high (up to 300) number of classes.
\vspace{-0.1cm}
\section{Conclusions}
In this paper, we introduced the \textsc{MuSe-Toolbox}\, -- a novel annotation toolkit for creating continuous and discrete gold standards. It provides capabilities to compare different strategies for fusing continuous annotations into a gold standard, as well as to simplify this gold standard into classes by extracting and clustering temporal and local signal characteristics. Hence, we provided a unified way to create regression and classification targets for emotion recognition. Furthermore, we introduced \textsc{RAAW\,}, combining annotation alignment at every time step with intelligent weighting of the individual annotations. Finally, important configuration parameters were highlighted in our series of experiments, which illustrated the toolkit's capabilities on the \textsc{MuSe-CaR}\, dataset. In the future, we plan to add further functionality, such as extending the dimension reduction to T-SNE and LDA.
\footnotesize
\bibliographystyle{ACM-Reference-Format}
The Rabi model \cite{Rabi1937}, which describes a two-level atom interacting with a single-mode classical field, plays an important role in understanding atom-field interaction. A solvable, fully quantum-mechanical model, the Jaynes-Cummings (JC) model \cite{Jaynes1963}, gives the general and basic physics of the quantum Rabi model in the rotating-wave approximation (RWA) \cite{Gerry2005}. The development of experiments in circuit quantum electrodynamics \cite{Niemczyk2010,Crespi2012,Gely2017,Mezzacapo2014,Yoshihara2016}, 2D electron gases \cite{Hagenmuller2011,Smolka2014}, and trapped ions \cite{Pedernales2015,ChengXH2018,LvD2018,CaiML2021}, etc., has already driven the light-matter interaction into the strong coupling regime, where the RWA is not applicable and the counter-rotating wave terms (CRTs) cannot be neglected. More and more generalized quantum Rabi models were proposed to study different kinds of interaction beyond the RWA, e.g. the Rabi-Stark model \cite{Eckle2017,XieYF2019a,XieYF2019b,ChenXY2020}, the Dicke model \cite{Dicke1954,Garraway2011,Kirton2019}, the Buck-Sukumar model \cite{Cordeiro2007,Rodriguez-Lara2014}, the anisotropic Rabi model \cite{ZhangG2015,XieQT2014,XieQ2017,WangGC2019,Skogvoll2021}, and the asymmetric Rabi model \cite{XieQ2017,Ashhab2020,ChenQH2012,LiZM2021,Reyes-Bustos2021,Braak2011,Yoshihara2017}, etc. It is thus necessary to find a proper treatment of the CRTs to make the extended quantum Rabi models solvable. Numerical and analytical approaches were presented to obtain the energy spectra of these models, such as the $G$-function \cite{Braak2011}, Bogoliubov operators \cite{ChenQH2012}, and the generalized rotating-wave approximation \cite{Irish2007,ZhangYY2016}, etc. Very recently, a quantum Rabi triangle system was proposed as an elementary building block to explore the nature of emerging quantum many-body phases \cite{ZhangYY2021}.
An adiabatic scheme for the fast and deterministic generation of a two-qubit Bell state and arbitrary single-photon multimode W states was proposed based on one-photon solutions to the multiqubit multimode quantum Rabi model \cite{PengJ2021}. The quantum phase transition has been observed in experiments with a single $^{171}{\rm Yb}{^+}$ ion confined in a linear Paul trap \cite{LvD2018,CaiML2021}. An experimental scheme with a transmon qubit capacitively coupled to an LC resonator has been proposed to implement the anisotropic quantum Rabi model in a circuit quantum electrodynamics system via periodic frequency modulation \cite{WangGC2019}. A magnon-spin-qubit ensemble, in which a spin qubit is exchange-coupled to an anisotropic ferromagnet, has been suggested to physically realize the quantum Rabi model from the isotropic to the Jaynes-Cummings limit with coupling strengths that can reach the deep-strong regime \cite{Skogvoll2021}. By inductively coupling a flux qubit and an LC oscillator via Josephson junctions, superconducting qubit-oscillator circuits in the deep-strong coupling regime have been realized with a flux bias \cite{Yoshihara2017}.
The Floquet theory is generally applied in the periodic driven quantum system to gain the nontrivial physical properties \cite{Rahav2003,Goldman2014,Eckardt2015,Bukov2015,Bukov2014,Kohler2017,Oka2019,Rechtsman2013,Else2016,Lindner2011,Shirley1965,Casas2001}. According to the theory, the time-evolution operator from initial time $t_i$ to the final time $t_f$ can be written as
\begin{align}
\hat{U}(t_f,t_i) = {\rm e}^{-{\rm i}\hat{K}(t_f)}{\rm e}^{-\frac{\rm i}{\hbar}\hat{H}_{F}\cdot (t_f-t_i)}{\rm e}^{{\rm i}\hat{K}(t_i)},
\end{align}
where $\hat{H}_{F}$ is the time-independent Floquet Hamiltonian describing the long-time evolution and $\hat{K}(t)$ is the kick operator describing the short-time behavior. There are two choices of description, the stroboscopic and the non-stroboscopic dynamics. In the former, both the stroboscopic Floquet Hamiltonian $\hat{H}_{F}[t_0]$ and the stroboscopic kick operator $\hat{K}_F[t_0](t)$ depend on the Floquet gauge $t_0$, which is defined as the time where the first period begins. A very efficient tool to compute the stroboscopic Floquet Hamiltonian in the high-frequency limit is the Magnus expansion, a perturbative scheme in the inverse driving frequency $1/\Omega$ to compute $\hat{H}_{F}[t_0]$ \cite{Bukov2015,Blanes2009}. The stroboscopic description is particularly suitable in the simplest case, when the initial and final times are fixed at $t_0$ and $t_0+nT$ with the driving period $T=2\pi /\Omega$, as the stroboscopic kick operator then reduces to zero. This method is widely applied in Floquet engineering of many-body localization \cite{Abanin2016}, counterdiabatic protocols \cite{Kuwahara2016}, and generic transient dynamics \cite{Claeys2019} in quantum many-body systems. The other choice is the non-stroboscopic dynamics, which is described by the $t_0$-independent effective Floquet Hamiltonian $\hat{H}_{\rm eff}$ and the non-stroboscopic kick operator $\hat{K}_{\rm eff}(t)$. This approach offers the advantage that the dependence on the Floquet gauge does not enter the inverse-frequency expansion of $\hat{H}_{\rm eff}$ \cite{Goldman2014,Bukov2015}. If one is interested in Floquet non-stroboscopic dynamics, in current and linear response, or in the spectral properties of the Floquet Hamiltonian, then the effective description offers an advantage, since it gives a Hamiltonian which does not contain terms that depend on the phase of the drive.
Non-stroboscopic dynamics is capable of capturing the evolution governed by the Floquet Hamiltonian of any observable associated with the effective high-frequency model \cite{Bukov2014,Kohler2017,Oka2019}.
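As a minimal numerical illustration of the zeroth-order high-frequency picture, consider a toy periodically driven two-level system (not the Rabi model itself; all parameter values are chosen for illustration, with $\hbar=1$). Over one full period of a cosine drive, the exact propagator is well approximated by evolution under the time-averaged Hamiltonian, since the first-order kick operator vanishes at $t=0$ and $t=T$:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Omega, A, B = 50.0, 1.0, 0.3          # drive frequency >> all other energy scales
T = 2 * np.pi / Omega

def H(t):
    """Toy driven two-level Hamiltonian H(t) = B*sz + A*cos(Omega t)*sx."""
    return B * sz + A * np.cos(Omega * t) * sx

# "Exact" one-period propagator via fine time slicing (midpoint rule)
steps = 2000
U = np.eye(2, dtype=complex)
for k in range(steps):
    t = (k + 0.5) * T / steps
    U = expm(-1j * H(t) * T / steps) @ U

# Zeroth-order Floquet Hamiltonian = time average of H(t); the cosine drive averages out
U0 = expm(-1j * B * sz * T)
err = np.abs(U - U0).max()            # deviation is suppressed by powers of 1/Omega
```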
In recent years, some Rabi-type models have been studied by applying Floquet theory \cite{LeeTE2015,XieQ2018,Dasgupta2015,Bastidas2012,WangYF2021,DuanLW2020}, including engineering non-Hermitian Hamiltonians with semiclassical Rabi models and driving fully quantum Rabi-type models with time-periodic parameters. Light in its nature is temporally periodic, and the Rabi model is a periodically driven system in the first place. But the direct application of Floquet theory to the quantum Rabi models is rare, as the quantum-optics description utilizing field quantization maps the atom-field interaction into a time-independent Hamiltonian - the field creation and annihilation operators do not depend on time in the Schr\"odinger picture. The analysis of the quasienergy and the dynamic evolution of the Rabi model is expected to lead to a different view of understanding various kinds of atom-light interaction. Although the Floquet stroboscopic dynamics is sufficient for the time evolution, the non-stroboscopic effective model in many cases provides the analytical form of the quasi-energy spectrum and Floquet modes, which is applicable in the evaluation of the dynamics of physical observables.
In this paper, we consider two extended Rabi models, i.e. the anisotropic and asymmetric quantum Rabi models, and study their non-stroboscopic dynamics in the framework of Floquet theory. We deliberately break the symmetry of the standard Rabi model: in one case, the $U(1)$ symmetry of the atom-field coupling - the rotating and counter-rotating interactions are governed by two different coupling constants; in the other case, the $Z_2$ parity symmetry of the total excitation number - a bias field is applied in the transverse direction. In the rotating frame, the Hamiltonian can be regarded as a periodic driving in the interaction picture. In Section II, we apply the Floquet theory and the high-frequency expansion to the anisotropic Rabi model. The quasi-energy spectrum and population dynamics derived from the time-independent effective model are investigated and compared with the numerical results in the extended Floquet Hilbert space. In Section III, we carry out a similar procedure for the asymmetric Rabi model, and the driven dynamics of some physical observables are analyzed. By comparing the numerical results and the effective model, we aim to find the parameter regime for the application of the high-frequency expansion. We conclude our results in Section IV, and the details of the expansion of the effective Hamiltonian and the numerical scheme for the quasi-energy spectrum are presented in the appendix.
\section{Anisotropic Rabi model}
\subsection{Formalism and high-frequency expansion}
Our first model is the anisotropic Rabi model (AiRM) \cite{XieQT2014,XieQ2017}, which is described by the Hamiltonian in the lab frame
\begin{eqnarray}
\hat{H}_{\rm{AiRM}} &=& \frac{1}{2}\hbar\omega_{0}\hat{\sigma}_{z}+\hbar\omega\hat{a}^{\dag}\hat{a}+g(\hat{a}^{\dag}\hat{\sigma}_{-}+\hat{a}\hat{\sigma}_{+}) \nonumber\\
&&+ g^{\prime}(\hat{a}^{\dag}\hat{\sigma}_{+}+\hat{a}\hat{\sigma}_{-}).
\label{oH1}
\end{eqnarray}
Here, $\hat{a}^{\dag}$ and $\hat{a}$ are the creation and annihilation operators for photons of the single-mode frequency $\omega$, $\hat{\sigma}_+ =(\hat{\sigma}_x+{\rm i}\hat{\sigma}_y)/2$ and $\hat{\sigma}_-= (\hat{\sigma}_x-{\rm i}\hat{\sigma}_y)/2$ are the atomic transition operators, and $\hat{\sigma}_i (i=x,y,z)$ are the Pauli matrices of the atom with the level difference characterized by the frequency $\omega_0$. $g$ is the coupling strength between the atom and the field for the rotating-wave terms (RWTs), while $g^{\prime}$ is the coupling strength of the CRTs. Clearly, when $g^\prime = 0$, the AiRM reduces to the JC model under the rotating-wave approximation, while for $g^{\prime} = g$, it becomes the standard quantum Rabi model. It is therefore an appropriate candidate to study the effects caused by the CRTs.
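The Hamiltonian \eqref{oH1} can be checked numerically in a truncated Fock space; e.g. for $g^{\prime}=0$ it reduces to the JC model, which conserves the total excitation number $\hat{N} = \hat{a}^{\dag}\hat{a}+(\hat{\sigma}_z+1)/2$. A sketch with $\hbar=1$ (the operator construction and truncation size are ours):

```python
import numpy as np

def airm_hamiltonian(n_max, w0, w, g, gp):
    """H_AiRM in a truncated Fock space (hbar = 1), basis |n> x {|+>, |->}."""
    a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)      # annihilation operator
    ad = a.T                                            # creation operator
    I_f, I_a = np.eye(n_max), np.eye(2)
    sz = np.diag([1.0, -1.0])
    sp = np.array([[0.0, 1.0], [0.0, 0.0]])             # sigma_+
    sm = sp.T                                           # sigma_-
    return (0.5 * w0 * np.kron(I_f, sz) + w * np.kron(ad @ a, I_a)
            + g * (np.kron(ad, sm) + np.kron(a, sp))    # rotating-wave terms
            + gp * (np.kron(ad, sp) + np.kron(a, sm)))  # counter-rotating terms

n_max = 10
H_jc = airm_hamiltonian(n_max, w0=1.0, w=0.9, g=0.1, gp=0.0)

# N = a^dag a x I + I x (sigma_z + 1)/2 commutes with H for g' = 0
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)
N = np.kron(a.T @ a, np.eye(2)) + np.kron(np.eye(n_max), np.diag([1.0, 0.0]))
commutator = H_jc @ N - N @ H_jc
```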
To apply the Floquet theory, one needs the Hamiltonian to be time-dependent and temporally periodic. A convenient way to achieve this is to transform the Hamiltonian into the rotating frame. After the time-dependent gauge transformation by the unitary operator $\hat{V}(t) = \exp[-{\rm i}\omega(\hat{a}^{\dag}\hat{a}+\hat{\sigma}_z/2)t]$, the Hamiltonian in the rotating frame has the form
\begin{align}
\hat{H}_{\rm{AiRM}}^{\rm rot} (t)&= \hat{V}(t)^{\dag}\hat{H}_{\rm{AiRM}}\hat{V}(t)+{\rm i}\hbar{\frac{\partial\hat{V}^{\dag}(t)}{\partial t}}\hat{V}(t)\notag\\
&=\frac{1}{2}\hbar\Delta\hat{\sigma}_{z}+g(\hat{a}^{\dag}\hat{\sigma}_{-}+\hat{a}\hat{\sigma}_{+})\notag\\
&+g^{\prime}({\rm e}^{{\rm i}2\omega t}\hat{a}^{\dag}\hat{\sigma}_{+}+{\rm e}^{-{\rm i}2\omega t}\hat{a}\hat{\sigma}_{-}).\label{Hrot}
\end{align}
Comparing it with the Hamiltonian (\ref{oH1}) in the original gauge, we find that the free-field term has been eliminated and the field information is entangled with the atom. The RWTs remain time-independent in the new gauge, while the CRTs acquire a time dependence with the frequency $\Omega = 2\omega$, which can be regarded as a periodic driving. The time-independent part constitutes the JC model without the free-field Hamiltonian, and the atomic level difference is characterized by the detuning between the photon and atom, $\Delta = \omega_0 - \omega$. For a high-frequency driven system, one needs the photon frequency $\omega$ to be large enough such that the atom is in near resonance with the optical mode.
Floquet theory provides guidance for solving such periodically driven cases in the high-frequency limit, i.e. the effective Hamiltonian can be expanded in orders of $\Omega^{-n}$ as $\hat{H}_{\rm eff} = \sum_{n=0}^{\infty}\hat{H}_{\rm eff}^{(n)}$. The corresponding kick operator can also be expanded as $\hat{K}_{\rm eff}(t) = \sum_{n=1}^{\infty}\hat{K}_{\rm eff}^{(n)}(t)$. The first few terms of the high-frequency expansion are calculated as (see Appendix A for details)
\begin{align}
\hat{H}^{(0)}_{\rm eff} &= \frac{1}{2}\hbar\Delta\hat{\sigma}_{z}+g(\hat{a}^{\dag}\hat{\sigma}_{-}+\hat{a}\hat{\sigma}_{+}),\label{H0}\\
\hat{H}^{(1)}_{\rm eff} &= \frac{g^{\prime 2}}{\hbar\Omega}(\hat{a}^{\dag}\hat{a}\hat{\sigma}_z-\hat{\sigma}_-\hat{\sigma}_+),\label{H1}\\
\hat{H}^{(2)}_{\rm eff} &= -\frac{g^{\prime 2}}{(\hbar\Omega)^2}[\hbar\Delta (\hat{a}^{\dag}\hat{a}\hat{\sigma}_z-\hat{\sigma}_-\hat{\sigma}_+)+g(\hat{a}^{\dag}\hat{a}\hat{a}^{\dag}\hat{\sigma}_-+h.c.)].\label{H2}
\end{align}
The zeroth-order term of the expansion, \eqref{H0}, which is exactly the time-independent part of the Hamiltonian in the rotating frame \eqref{Hrot}, plays the major role in the effective Hamiltonian, as expected. The CRTs enter in the first-order correction \eqref{H1}, showing the dependence of the effective Hamiltonian on the coupling strength $g'$. The second-order correction \eqref{H2} consists of two parts: apart from a term similar to the first-order correction (dressed by the detuning $\Delta$), there appears a two-photon interaction process which depends on the coupling strengths of both the RWTs and the CRTs. With increasing correction order, more multi-photon processes are brought into the effective Hamiltonian. On the other hand, the first few terms of the corresponding effective kick operator can be evaluated as
\begin{align}
\hat{K}_{\rm eff}^{(1)}(t) &= \frac{g^{\prime}}{{\rm i}\hbar\Omega}(\hat{a}^{\dag}\hat{\sigma}_{+}{\rm e}^{{\rm i}\Omega t}-h.c.),\\
\hat{K}_{\rm eff}^{(2)}(t) &= \frac{g^{\prime}}{{\rm i}(\hbar\Omega)^2} [(g\hat{a}^{\dag 2}\hat{\sigma}_z-\hbar\Delta\hat{a}^\dag\hat{\sigma}_+){\rm e}^{{\rm i}\Omega t}-h.c.].
\end{align}
We see that the zeroth-order term in the Floquet Hamiltonian \eqref{H0} is simply the time-averaged Hamiltonian, whereas the zeroth-order non-stroboscopic kick operator is identically zero.
\subsection{Quasi-energy and Floquet modes}
Notice that the effective Hamiltonian conserves the total excitation number $\hat{N} = \hat{a}^{\dag}\hat{a}+(\hat{\sigma}_z+1)/2$, which enlarges the $Z_2$ symmetry of the original Hamiltonian \eqref{oH1} to a $U(1)$ symmetry. Therefore, it only couples pairs of states such as $|1\rangle = |n,+\rangle$ and $|2\rangle = |n+1,-\rangle$, where $n$ is the photon number and $|\pm\rangle$ denotes the excited and ground states of the atom. The matrix of the effective Hamiltonian in this basis is given by
\begin{widetext}
\begin{equation}
\hat{H}_{\rm eff} = \begin{bmatrix}
\frac{\hbar\Delta}{2}+\frac{g^{\prime 2}}{\hbar\Omega}n-\frac{\hbar\Delta g^{\prime 2}}{(\hbar\Omega)^2}n&g\sqrt{n+1}-\frac{gg^{\prime 2}}{(\hbar\Omega)^2}(n+1)\sqrt{n+1}\\
g\sqrt{n+1}-\frac{gg^{\prime 2}}{(\hbar\Omega)^2}(n+1)\sqrt{n+1}&-\frac{\hbar\Delta}{2}-\frac{g^{\prime 2}}{\hbar\Omega}(n+2)+\frac{\hbar\Delta g^{\prime 2}}{(\hbar\Omega)^2}(n+2)
\end{bmatrix}.\label{matrix}
\end{equation}
\end{widetext}
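The $2\times2$ block above lends itself to a direct numerical check. The following is a minimal sketch (not part of the original analysis; $\hbar=1$ and all parameter values are illustrative) that builds the block of Eq. \eqref{matrix} for a given $n$ and diagonalizes it:

```python
import numpy as np

def heff_block(n, g, gp, Delta, Omega, hbar=1.0):
    """2x2 block of the effective Hamiltonian, Eq. (matrix),
    in the basis {|n,+>, |n+1,->}."""
    hO, hD = hbar * Omega, hbar * Delta
    d1 = hD/2 + gp**2/hO * n - hD * gp**2/hO**2 * n
    d2 = -hD/2 - gp**2/hO * (n + 2) + hD * gp**2/hO**2 * (n + 2)
    off = g*np.sqrt(n + 1) - g * gp**2/hO**2 * (n + 1)**1.5
    return np.array([[d1, off], [off, d2]])

# for g' = 0 the block reduces to the JC form with eigenvalues +-Omega_R/2
E = np.linalg.eigvalsh(heff_block(n=0, g=0.1, gp=0.0, Delta=0.1, Omega=1.0))
```

For $g'=0$ the eigenvalues are $\pm\Omega_R/2$ with $\Omega_R=\sqrt{(\hbar\Delta)^2+4g^2(n+1)}$, which serves as a quick sanity check of the matrix entries.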
We first consider the special case where the total excitation number is zero. There is only one such state, which forms a one-dimensional subspace decoupled from the others, with quasi-energy and eigenvector
\begin{align}
E_0 &= -\frac{\hbar\Delta}{2}-\frac{g^{\prime2}}{\hbar\Omega}+\frac{\hbar\Delta g^{\prime 2}}{(\hbar\Omega)^2},\\
|\Phi_0\rangle &= |0,-\rangle. \label{phi0}
\end{align}
It is straightforward to obtain the quasi-energy spectrum by solving the eigenvalue problem of the matrix \eqref{matrix} for non-zero total excitation number. The expansion of $E_{n\pm}$ in powers of $(\hbar\Omega)^{-1}$ up to second order reads
\begin{align}
E_{n\pm}^{(0)} &= \pm\frac{\Omega_{R}}{2},\\
E_{n\pm}^{(1)} &= \frac{g^{\prime 2}}{\hbar\Omega}\left[-1\pm\frac{\hbar\Delta(n+1)}{\Omega_{R}}\right],\\
E_{n\pm}^{(2)} &=\frac{g^{\prime 2}}{(\hbar\Omega)^2}\left\{\hbar\Delta\mp\frac{[\hbar\Delta g^{\prime}(n+1)]^2}{\Omega_R^{ 3}}\right.\notag\\
&\left.\pm\frac{(g^{\prime 2}-2g^2)(n+1)^2-(\hbar\Delta)^2(n+1)}{\Omega_{R}}\right\},
\end{align}
where $\Omega_R = \sqrt{(\hbar\Delta)^2+4g^2(n+1)}$ is the so-called Rabi frequency of the quantum Rabi model. We can also obtain the normalized eigenvectors of the effective Hamiltonian as combinations of the basis states chosen above,
\begin{align}
|\Phi_{n\pm}\rangle =\frac{1}{\sqrt{1+ C^2_{n\pm}}} (C_{n\pm}|n,+\rangle+|n+1,-\rangle),\label{phinpm}
\end{align}
where the coefficients up to second order read
\begin{align}
C_{n\pm}^{(0)} &= \frac{\hbar\Delta\pm\Omega_R}{2g\sqrt{n+1}},\\
C_{n\pm}^{(1)} &= \frac{1}{\hbar\Omega}\frac{g^{\prime 2}\sqrt{n+1}}{g}\left(1\pm\frac{\hbar\Delta}{\Omega_R}\right),\\
C_{n\pm}^{(2)} &= \frac{1}{(\hbar\Omega)^2}\frac{g^{\prime 2}\sqrt{n+1}}{2g}\left\{-\hbar\Delta \mp\frac{(\hbar\Delta)^2}{\Omega_R}\right.\notag\\
&\left.\pm\frac{8g^{\prime 2}g^2(n+1)^2}{\Omega_R^{3}}\right\},
\end{align}
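As a consistency check, the perturbative series can be compared with the exact eigenvalues of the $2\times2$ block \eqref{matrix}. The sketch below (our own check, with $\hbar=1$ and illustrative parameter values) verifies that including $E^{(1)}_{n\pm}$ reduces the error relative to the zeroth order alone:

```python
import numpy as np

hO, hD, g, gp, n = 50.0, 0.1, 0.1, 0.1, 1   # hbar*Omega, hbar*Delta, g, g', n (illustrative)
OmR = np.sqrt(hD**2 + 4*g**2*(n + 1))       # Rabi frequency (energy units)
E0 = np.array([OmR/2, -OmR/2])              # zeroth order, upper/lower branch
E1 = gp**2/hO * np.array([-1 + hD*(n + 1)/OmR, -1 - hD*(n + 1)/OmR])
# exact eigenvalues of the 2x2 block, Eq. (matrix)
d1 = hD/2 + gp**2/hO*n - hD*gp**2/hO**2*n
d2 = -hD/2 - gp**2/hO*(n + 2) + hD*gp**2/hO**2*(n + 2)
off = g*np.sqrt(n + 1) - g*gp**2/hO**2*(n + 1)**1.5
Eex = np.sort(np.linalg.eigvalsh(np.array([[d1, off], [off, d2]])))[::-1]
```

With the chosen parameters the residual after adding the first-order term is of order $(\hbar\Omega)^{-2}$, as expected from the expansion.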
and the Floquet modes in the rotating frame, which form an orthonormal basis at each time $t$, can be written as
\begin{align}
|\phi_0\rangle &= {\rm e}^{-{\rm i}\hat{K}_{\rm eff}(t)}|\Phi_0\rangle,\\
|\phi_{n\pm}\rangle &= {\rm e}^{-{\rm i}\hat{K}_{\rm eff}(t)}|\Phi_{n\pm}\rangle.
\end{align}
We can easily check that in the case $g^{\prime} = 0$ the effective model reduces to the JC model. The quasi-energy retains only the zeroth-order term, and the system is characterized by the energy scale $\Omega_R$ and the driving frequency $\Omega$. This gives exactly the JC energy eigenvalues when rotating back to the lab frame. The kick operator $\hat{K}_{\rm eff}(t)$ vanishes, and the Floquet modes become time-independent, reducing to the eigenstates of the JC Hamiltonian.
\begin{figure}[tbp]
\includegraphics[width=0.5\textwidth]{fig1.eps}
\caption{(Color online) The quasi-energy spectrum of the AiRM from the numerical method in the extended Floquet Hilbert space (red solid) and from the effective Hamiltonian with odd parity (blue dashed) and even parity (green dotted) in the first Brillouin zone, up to the photon number cutoff $n_{\rm cutoff} =3$, as a function of $g/\hbar \omega$. The parameters are $\Delta = 0.1\omega$ and $g^{\prime} = 0.1 \hbar \omega$.}
\label{Fig1}
\end{figure}
When the CRT coupling strength $g^\prime$ is non-zero, we compare the quasi-energy spectrum obtained from the effective Hamiltonian with that numerically calculated in the extended Floquet Hilbert space (see Appendix B for the detailed numerical scheme). Floquet theory \cite{Eckardt2015} dictates that the solution of the Schr\"odinger equation with a time-periodic Hamiltonian reads:
\begin{align}
\Psi_n (t) = {\rm e}^{-{\rm i}E_n t/\hbar} \phi_n (t),
\end{align}
where $E_n$ is the quasi-energy and $\phi_n (t)$ is the Floquet mode. Multiplying the Floquet mode by a phase factor $\exp({\rm i} m\Omega t)$ yields the identical Floquet state but with the shifted quasi-energy $E_n +m\hbar\Omega$, with $m$ an integer. Hence the quasi-energy can be mapped into the first Brillouin zone $[-\hbar\Omega/2,\hbar\Omega/2]$, in analogy with the first Brillouin zone of a spatial lattice model, since the spectrum is invariant under translation by an integer multiple of $\hbar\Omega$. The results are shown in Fig. \ref{Fig1}, which illustrates the first few quasi-energy levels $E_{n\pm}$ with even and odd parities of the total excitation number, respectively. We fix the CRT coupling strength $g^{\prime} = 0.1 \hbar \omega$, and Fig. \ref{Fig1} shows that the effective model fits the numerical result quite well even when the rotating-wave coupling is in the deep-strong coupling regime $g\simeq\hbar\omega$. The coupling strength tends to separate the eigen-energies of the system into upper and lower branches; level crossings occur between different parities, while avoided level crossings may occur for quasi-energies with the same parity, as shown below. The high-frequency expansion, on the other hand, fails to predict this avoided crossing. This can be understood as follows. The conservation of the total excitation number in the JC model leads to level crossings of states with the same parity; the inclusion of the counter-rotating terms in the Rabi model, either anisotropic or isotropic, explicitly breaks this conservation, so that level crossings within the same parity subspace cannot occur. The effective Hamiltonian employed in the Floquet calculation conserves the total excitation number $\hat{N}$, and the non-conserving terms are carried by the kick operator $\hat{K}$. 
Consequently, within a given parity subspace level crossings occur in the spectrum derived from the analytical results, whereas they are absent in the numerical results due to the counter-rotating terms. Explicitly, the avoided crossing occurs at point $A$, where the positive branch for total excitation number $N=2$ meets the negative branch for $N=4$, both of even parity. A similar phenomenon can be observed at point $B$ for odd parity. If we increase the photon number cutoff, more and more avoided level crossings emerge where the positive-branch spectral lines for $N$ cross the negative-branch lines for $N + 2$ with the same parity. One special case is shown at point $C$, where the level for zero excitation number $E_0$ meets the even-parity level for $N=2$. On the other hand, if we fix the RWT coupling strength $g=0.1\hbar\omega$ and vary the CRT coupling $g^\prime$, the effective model coincides with the numerical results only below the regime $g^\prime\simeq 0.3\hbar\omega$ and quickly loses its accuracy as $g^\prime$ increases further. In addition, we see that the detuning opens a gap $\delta E=\hbar\Delta$, defined as the difference between the upper- and lower-branch levels as $g$ approaches zero, i.e. $\delta E=\lim_{g\rightarrow0}(E_{n+}-E_{n-})$, which would be filled for a large enough photon number or very strong coupling.
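The mapping of quasi-energies into the first Brillouin zone used above is a one-line operation; a minimal sketch (our own notation, with $\hbar=1$ and an illustrative $\Omega$):

```python
import numpy as np

hbar, Omega = 1.0, 1.0   # illustrative units

def fold(E):
    """Map quasi-energies into the first Brillouin zone [-hbar*Omega/2, hbar*Omega/2)."""
    return (np.asarray(E) + hbar*Omega/2) % (hbar*Omega) - hbar*Omega/2
```

The folded spectrum is invariant under shifts by integer multiples of $\hbar\Omega$, which is exactly the redundancy of the quasi-energy discussed above.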
\subsection{Population Dynamics}
We now turn to the study of the population dynamics of the system. In particular, we examine the dynamics with the atom initially prepared in the excited state and the field in a coherent state, which can be expressed as
\begin{align}
|\Psi_i\rangle = |\alpha\rangle \otimes |+\rangle = {\rm e}^{- \frac{|\alpha|^2}{2}}\sum_{n=0}^{\infty} \frac{\alpha^n}{\sqrt{n!}} |n\rangle\otimes|+\rangle. \label{initial}
\end{align}
where $\alpha$ is a complex parameter related to the amplitude of the coherent state. According to Floquet theory, the time evolution of this initial state in the lab frame is governed by the time evolution operator
\begin{align}
\hat{U}(t,0) = \hat{V}(t){\rm e}^{-{\rm i}\hat{K}_{\rm eff}(t)}{\rm e}^{-\frac{\rm i}{\hbar}\hat{H}_{\rm eff} t }{\rm e}^{{\rm i}\hat{K}_{\rm eff}(0)}\hat{V}^{\dag}(0).
\end{align}
An important physical quantity is the atomic inversion, which describes the population difference between the atomic states $|+\rangle$ and $|-\rangle$,
\begin{align}
W(t) = \langle\Psi(t)| \hat{\sigma}_z | \Psi(t) \rangle.
\end{align}
Both the numerical and analytical results for the time evolution of the atomic inversion are shown in Fig. \ref{Fig2}. For clarity we plot the population dynamics for a fixed ratio of the RWT and CRT couplings, $g^\prime/g=0.5$, and a detuning $\Delta = 0.1\omega$. Since the mean photon number of the coherent state is $|\alpha|^2$, for a state with $\alpha=3$ we increase the cutoff photon number to 20 to ensure the accuracy of the calculation. The dynamics from the effective model agrees closely with the numerical result even when the CRTs cannot be neglected. The oscillation of the atomic inversion exhibits a rapid collapse followed by a revival at long times. We see that with increasing coupling strength $g$, and hence $g^\prime$, the mean atomic inversion, about which the revival signal oscillates, gradually decreases from a value in the weak-coupling case determined by the detuning $\Delta$ to zero, due to the involvement of the RWTs and CRTs in the population probability. The collapse-revival period becomes shorter and shorter with increasing $g$, and the high-frequency expansion describes the dynamics precisely even when the coupling strength grows above $0.2\hbar\omega$.
\begin{figure}[tbp]
\includegraphics[width=0.5\textwidth]{fig2.eps}
\caption{(Color online) The time evolution of the atomic inversion in the AiRM by the analytical method (blue) and the numerical method (red) with detuning $\Delta = 0.1\omega$, coherent-state amplitude $\alpha=3$, fixed ratio between RWTs and CRTs $g^\prime/g=0.5$, and cutoff photon number $n_{\rm cutoff} = 20$. The time scale is $T = 2\pi/\Omega$. (a) $g=0.05\hbar\omega$, (b) $g=0.1\hbar\omega$, (c) $g=0.15\hbar\omega$, (d) $g=0.2\hbar\omega$, (e) $g=0.25\hbar\omega$.}
\label{Fig2}
\end{figure}
\section{Asymmetric Rabi model}
\subsection{Formalism and high-frequency expansion}
Another model we choose to illustrate the power of high-frequency expansion is the asymmetric quantum Rabi model (AsRM) \cite{Larson2013,LiuMX2017} with Hamiltonian written as
\begin{align}
\hat{H}_{\rm{AsRM}} = \frac{1}{2}\hbar\omega_{0}\hat{\sigma}_{z}+\varepsilon\hat{\sigma}_x+\hbar\omega\hat{a}^{\dag}\hat{a}+g(\hat{a}^{\dag}+\hat{a})(\hat{\sigma}_++\hat{\sigma}_-),
\end{align}
where a bias external field $\varepsilon$ is applied along the $x$-axis, which is sometimes regarded as the intrinsic transition strength of the atom \cite{Wakayama2017}, whereas the other parameters are kept the same as in the anisotropic Rabi model of the previous section. Here the atom-field coupling is chosen to be of the Rabi type, with equal coupling strengths for the RWTs and CRTs. The $\varepsilon$ term also breaks the $Z_2$ symmetry of the quantum Rabi model, but provides a more realistic description of circuit QED experiments employing flux qubits than the Rabi model itself \cite{Niemczyk2010}. Following the same steps as before, the Hamiltonian in the rotating frame reads
\begin{align}
\hat{H}_{\rm{AsRM}}^{\rm rot}(t)&=\frac{\hbar\Delta}{2}\hat{\sigma}_z +g(\hat{a}^{\dag}\hat{\sigma}_{-}+ \hat{a}\hat{\sigma}_{+})\notag\\
&+\varepsilon({\rm e}^{{\rm i}\omega t}\hat{\sigma}_{+}+{\rm e}^{-{\rm i}\omega t}\hat{\sigma}_{-})\notag\\
&+g({\rm e}^{{\rm i}2\omega
t}\hat{a}^{\dag}\hat{\sigma}_{+}+{\rm e}^{-{\rm i}2\omega t}\hat{a}\hat{\sigma}_{-}).
\label{HAsRM}
\end{align}
Clearly there exist two driving terms with frequencies $\omega$ and $2\omega$ and we choose the driving frequency as $\Omega = \omega$. The first few terms of the effective Hamiltonian $\hat{H}_{\rm eff}$ are calculated as
\begin{align}
H_{\rm eff}^{(0)} &= \frac{1}{2}\hbar\Delta\hat{\sigma}_z+g(\hat{a}^{\dag}\hat{\sigma}_{-}+\hat{a}\hat{\sigma}_{+}),\label{Heff0}\\
H_{\rm eff}^{(1)} &= \frac{1}{\hbar\Omega}[\varepsilon^{2}\hat{\sigma}_{z}+\frac{g^2}{2}(\hat{a}^{\dag}\hat{a}\hat{\sigma}_z-\hat{\sigma}_-\hat{\sigma}_+)],\label{Heff1}\\
H_{\rm eff}^{(2)} &=-\frac{1}{(\hbar\Omega)^{2}}\{\hbar\Delta\varepsilon^2\hat{\sigma}_z+2g\varepsilon^2(\hat{a}^{\dag}\hat{\sigma}_-+h.c.)\notag\\
&+\frac{1}{4}[\hbar\Delta g^2(\hat{a}^{\dag}\hat{a}\hat{\sigma}_z-\hat{\sigma}_-\hat{\sigma}_+)+g^3(\hat{a}^{\dag }\hat{a}\hat{a}^{\dag }\hat{\sigma}_-+h.c.)]\} \label{Heff2}
\end{align}
with the corresponding kick operator $\hat{K}_{\rm eff}(t)$
\begin{align}
K_{\rm eff}^{(1)}(t) &= \frac{1}{2{\rm i}\hbar \Omega}\left(2\varepsilon\hat{\sigma}_+{\rm e}^{{\rm i}\Omega t}+g\hat{a}^{\dag}\hat{\sigma}_+{\rm e}^{{\rm i}2\Omega t}-h.c.\right),\\
K_{\rm eff}^{(2)}(t) &=\frac{1}{4{\rm i}(\hbar\Omega)^{2}}[\varepsilon(7 g\hat{a}^{\dag}\hat{\sigma}_z-4\hbar\Delta{\sigma}_+){\rm e}^{{\rm i}\Omega t}\notag\\
&+g(ga^{\dag2}{\sigma}_z-\hbar\Delta \hat{a}^{\dag}{\sigma}_+){\rm e}^{{\rm i}2\Omega t}-h.c.].
\end{align}
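The first-order term \eqref{Heff1} follows from the commutators of the Fourier components $H_{\pm1}$ and $H_{\pm2}$ listed in Appendix A, and this can be verified numerically with truncated operators. A minimal sketch (our own check, $\hbar=1$, illustrative parameters; the comparison is restricted to a sub-block unaffected by the Fock-space truncation):

```python
import numpy as np

N = 12                                    # Fock cutoff (illustrative)
a = np.diag(np.sqrt(np.arange(1, N + 1)), 1); ad = a.T
IN = np.eye(N + 1)
sz = np.diag([1.0, -1.0]); sp = np.array([[0, 1], [0, 0.0]]); sm = sp.T
kron = np.kron
comm = lambda A, B: A @ B - B @ A
eps, g, hO = 0.2, 0.1, 1.0                # epsilon, g, hbar*Omega (illustrative)
# Fourier components of the rotating-frame Hamiltonian, Eq. (HAsRM)
H1, Hm1 = eps*kron(IN, sp), eps*kron(IN, sm)
H2, Hm2 = g*kron(ad, sp), g*kron(a, sm)
# first-order effective Hamiltonian from the general formula of Appendix A
Heff1 = (comm(H1, Hm1)/1 + comm(H2, Hm2)/2) / hO
# closed form, Eq. (Heff1)
closed = (eps**2*kron(IN, sz)
          + 0.5*g**2*(kron(ad @ a, sz) - kron(IN, sm @ sp))) / hO
m = 2*N                                   # sub-block with photon number n < N
ok = np.allclose(Heff1[:m, :m], closed[:m, :m])
```

The restriction to `n < N` discards the rows and columns where the truncated $\hat a\hat a^\dag$ differs from its exact value.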
As we can see, the zeroth-order effective Hamiltonian (\ref{Heff0}) takes the same JC form as in the anisotropic Rabi model and again plays the major role in the effective Hamiltonian. The bias $\varepsilon$ has a direct bearing on the effective Hamiltonian through the quasi-level splitting, as it induces an additional $\sigma_z$ term, and together with the coupling parameter $g$ the driving terms contribute to the atom-field interaction up to an $(l+1)$-photon process at the $l$-th order correction. The kick operators responsible for the dynamics split into two parts corresponding to a two-frequency driving with frequencies $\Omega$ and $2\Omega$, controlled jointly by the bias $\varepsilon$ and the coupling parameter $g$.
\subsection{Quasi-energy and eigenstates}
The diagonalization of the effective Hamiltonian in the basis $|1\rangle = |n,+\rangle$ and $|2\rangle = |n+1,-\rangle$ gives the quasi-energy spectrum. The sole state with zero excitation number, $| \Phi_0 \rangle=|0, - \rangle$, is the same as in the AiRM, with quasi-energy
\begin{align}
E_0 & = -\frac{\hbar\Delta}{2}-\frac{\varepsilon^2+g^2/2}{\hbar\Omega} + \frac{\hbar\Delta(\varepsilon^2+g^2/4)}{(\hbar\Omega)^2},
\end{align}
while the expansion of the quasi-energies for non-zero total excitation number reads
\begin{align}
E_{n\pm}^{(0)} &= \pm\frac{\Omega_{R}}{2},\\
E_{n\pm}^{(1)} &=\frac{1}{2\hbar\Omega}\left\{-g^2\pm\frac{\hbar\Delta \Omega_\varepsilon^2}{\Omega_{R}}\right\},
\end{align}
and
\begin{widetext}
\begin{align}
E_{n\pm}^{(2)} &= \frac{1}{(2\hbar\Omega)^2}\left\{\hbar\Delta g^{2}\pm\frac{\Omega_\varepsilon^4-8\Omega_\varepsilon^2g^2(n+1)+6g^4(n+1)^2-(\hbar\Delta)^2 [2\Omega_\varepsilon^2+\Omega_\varepsilon^2/\Omega_R^{ 2}- g^2(n+1)]}{\Omega_{R}}\right\},
\end{align}
\end{widetext}
where we have defined a bias-related frequency $\Omega_\varepsilon = \sqrt{2\varepsilon^2+g^2(n+1)}$ in analogy with the Rabi frequency $\Omega_R$. The corresponding eigenvectors take the same form as in the anisotropic Rabi model, Eq. (\ref{phinpm}), with the coefficients given by
\begin{align}
C_{n\pm}^{(0)} &= \frac{\hbar\Delta\pm\Omega_R}{2g\sqrt{n+1}},\\
C_{n\pm}^{(1)} &=\frac{1}{\hbar\Omega}\frac{1}{g\sqrt{n+1}}\left(\varepsilon^2+\frac{g^2(n+1)}{2}\pm\frac{\hbar\Delta\Omega_\varepsilon^2}{2\Omega_R}\right),\\
C_{n\pm}^{(2)} &= \frac{g\sqrt{n+1}}{(\hbar\Omega)^2}\left\{-\frac{\hbar\Delta}{8}\mp\frac{(\hbar\Delta)^2}{8\Omega_R} \pm\frac{\Omega_\varepsilon^4}{\Omega_R^{3}}\right\}.
\end{align}
\begin{figure}[tbp]
\includegraphics[width=0.5\textwidth]{fig3.eps}
\caption{(Color online) The quasi-energy spectrum for $n=0,1,2,3$ of the AsRM from the numerical method in the extended Floquet Hilbert space (red solid) and from the effective Hamiltonian with odd parity (blue dashed) and even parity (green dotted) in the first Brillouin zone, calculated up to the photon number cutoff $n_{\rm cutoff} =10$, as a function of $\varepsilon/\hbar \omega$. The parameters are $\Delta = 0.1\omega$ and coupling strength $g = 0.1\hbar \omega$. Inset: The numerical (red solid) and analytical (black dashed) results for the dependence of the gap $\delta E$ on the bias field $\varepsilon$ in the limit $g \rightarrow 0$.}
\label{Fig3}
\end{figure}
The numerical result for the quasi-energy spectrum and that from the effective model are shown in Fig. \ref{Fig3}. Here the photon number cutoff is taken as $n_{\rm cutoff}=10$ to ensure the accuracy of the levels for $n=0,1,2,3$ in the comparison between the matrix diagonalization in the extended Floquet Hilbert space and the high-frequency expansion for even and odd parities, respectively. We again present the spectrum in the first Brillouin zone, $[-\hbar\Omega/2,\hbar\Omega/2]$. To see more clearly the role played by the bias field $\varepsilon$, both the detuning and the coupling strength are fixed to moderate values, $\Delta=0.1\omega$ and $g=0.1\hbar\omega$; as in the case of the AiRM, the effective model provides an efficient treatment of the quasi-energy spectrum over a rather wide parameter regime of the bias field, up to $0.3\hbar \omega$. We see that the bias field tends to cluster the upper and lower branches $E_{n\pm}$ into two bundles, although the concentration point given by the effective model occurs earlier than in the numerical results. Avoided level crossings never happen here due to the asymmetric structure of the AsRM, as the bias breaks the parity symmetry of the standard Rabi model.
As in the AiRM, the detuning opens a gap $\delta E$ in the quasi-energy spectrum. The difference here is that in the AsRM this gap is bias dependent. For concreteness, in the inset of Fig. \ref{Fig3} we plot this gap as a function of the bias field $\varepsilon$ in the limit $g \rightarrow 0$. Evidently the effective model already captures the main feature of this gap. From the first few terms of the effective Hamiltonian, Eqs. (\ref{Heff0}) to (\ref{Heff2}), one can easily see that the diagonal terms of the form $\sigma_z$ determine the gap dependence on the bias as
\begin{align}
\delta E=\hbar\Delta+2(\hbar\Omega-\hbar\Delta)\left(\frac{\varepsilon}{\hbar\Omega}\right)^2.
\end{align}
This quadratic dependence fits the numerical result for a bias field $\varepsilon$ up to $0.4 \hbar \omega$. Note that here the high-frequency expansion requires a stricter condition on the atom-field coupling, $g \simeq 0.1\hbar\omega$, to make sure that the driving frequency $\Omega$ dominates the energy scale, because the driving frequency here is half of that in the AiRM. For low-energy states, however, such as those with zero photon number, the effective model fits well even in the deep-strong coupling regime, as mentioned before.
\begin{figure}[tbp]
\includegraphics[width=0.5\textwidth]{fig4.eps}
\caption{(Color online) The long-time evolution of several physical observables by the analytical method (blue) and the numerical method (red) in the same panel, with coupling strength $g=0.1\hbar\omega$, detuning $\Delta=0.1\omega$, and bias strength $\varepsilon=0.1\hbar\omega$. Shown in the panels are the expectation values of (a) the atomic inversion, (b) the transverse magnetization, and (c) the atom-field correlation. }
\label{Fig4}
\end{figure}
\subsection{Driving Dynamics and Fourier Spectrum}
The physical observables, such as the atomic inversion $W(t)$ introduced in the last section, evolve with time, and we are interested in the driving dynamics and the steady oscillation properties after a long enough driving time. For the AsRM, it is of interest to consider the magnetization $M(t)$ induced by the transverse bias field $\varepsilon$, and the atom-field correlation $G(t)$ mediated by the coupling parameter $g$. The initial state (\ref{initial}) is chosen the same as in the previous model, and the latter two observables are defined by
\begin{align}
M(t) &= \langle\Psi(t)| \hat{\sigma}_x | \Psi(t) \rangle,\\
G(t) &= \langle\Psi(t)| (\hat{a}^\dagger+\hat{a})\hat{\sigma}_x | \Psi(t) \rangle.
\end{align}
As we can see in Fig. \ref{Fig4}, at short times the evolution of all these observables exhibits a collapse-and-revival phenomenon in the form of wave packets. The amplitude of the wave packets decreases and the oscillation stabilizes as time goes by, so that the wave packets can no longer be observed. With a suitable choice of the system parameters, the analytical method is sufficiently accurate to describe the system evolution, as confirmed by comparison with the numerical results. To further understand the nature of these oscillations, we apply Fourier spectrum analysis to extract the frequencies in the oscillation of the three observables. The Fourier transform decomposes the original data into its frequency components, often referred to as the frequency spectrum, and is given by
\begin{align}
\bar{F}(\nu)= \int_{0}^{+\infty} {\rm d}t F(t) {\rm e}^{-{\rm i}2\pi\nu t},
\end{align}
where $\bar{F}(\nu)$ is the output spectrum as a function of frequency $\nu$ and $F(t)$ is the input data as a function of time $t$. The driving dynamics of the three observables are similar, and we take the atomic inversion $W(t)$ as an example. The frequency spectrum of the atomic inversion shows the features of two-frequency driving in Fig. \ref{Fig5}, i.e. both the analytical and numerical results indicate that the fundamental frequency is located at $\Omega$ and the second harmonic at $2\Omega$, as expected. A relatively large external bias field $\varepsilon=0.3\hbar \omega$ is chosen to enhance the peak at the fundamental frequency, as the bias dominates the oscillation ${\rm e}^{\pm {\rm i} \omega t}$ in the rotating-frame Hamiltonian (\ref{HAsRM}). The involvement of many-photon Fock states in the coherent state leads to the broadening of the spectral functions at both the fundamental frequency and the second harmonic, as well as the complicated oscillation around the inevitable frequency mixing at $0.5$, $1.5$ and $2.5 \Omega$. Double and triple revival sequences for two- and three-qubit systems have been found in the probability of finding all qubits in the initial $|+\rangle$ state, as a consequence of having two or three Rabi frequencies \cite{Agarwal2012,Mao2016}. However, the second harmonic here originates from a rather different mechanism, as the effective Hamiltonian of the AsRM is basically a two-frequency driven system.
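In practice the spectrum is obtained from a discrete Fourier transform of the sampled time series. A toy sketch (our own, with a synthetic two-frequency signal standing in for $W(t)$, and illustrative units) recovers peaks at $\Omega$ and $2\Omega$:

```python
import numpy as np

Omega = 1.0                          # driving frequency (illustrative units)
T = 2*np.pi/Omega
t = np.linspace(0, 200*T, 4096, endpoint=False)   # integer number of periods
F = 0.5*np.cos(Omega*t) + 0.2*np.cos(2*Omega*t)   # toy signal mimicking W(t)
spec = np.abs(np.fft.rfft(F))
nu = np.fft.rfftfreq(t.size, d=t[1] - t[0])       # ordinary frequency
# two dominant peaks, converted to angular frequency
peaks = np.sort(2*np.pi*nu[np.argsort(spec)[-2:]])
```

Sampling an integer number of driving periods places the two frequencies exactly on FFT bins, so the peaks appear without spectral leakage.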
\begin{figure}[tbp]
\includegraphics[width=0.5\textwidth]{fig5.eps}
\caption{(Color online) The Fourier frequency spectrum analysis of the time evolution of the atomic inversion, where we choose the parameters $g=0.1\hbar\omega$, $\varepsilon = 0.3\hbar\omega$, $\Delta=0.1\omega$.}
\label{Fig5}
\end{figure}
The other observation is that after a long enough time the system is driven into a steady state. It is of interest to know how the bias field controls the system and what time-averaged values of the observables are reached. We define the time-averaged values of the atomic inversion $W_0$, the transverse magnetization $M_0$, and the atom-field correlation $G_0$ as averages over 150 driving periods, and show their dependence on $\varepsilon$. Fig. \ref{Fig6} shows the numerical and high-frequency expansion results for a small coupling parameter $g$, to focus on the controllability by the bias field. The time-averaged atomic inversion $W_0$ experiences a competition between the detuning $\Delta$ and the coupling strength $g$. The effect of the detuning is to hinder the atomic inversion, while the effect of $g$ is to induce transitions between the upper and lower energy levels. When $g$ and $\varepsilon$ are both small, they act together against the detuning, making the population begin to reverse. When $\varepsilon$ increases to a certain extent, it starts to compete with $g$; thus we see a regime where $W_0$ is nonetheless slightly increased. The atomic population reverses again once the effect of $g$ is completely eliminated and $\varepsilon$ takes the dominant role. On the other hand, the time-averaged transverse magnetization $M_0$ increases linearly with the applied bias field, as expected, whereas the bias field serves to destroy the correlation between atom and field. In other parameter regimes the competition between the three parameters $g$, $\varepsilon$ and $\Delta$ remains, even leading to a negative atom-field correlation, which is not shown in Fig. \ref{Fig6}. The effective model proves to be very accurate up to $\varepsilon \sim 0.2 \hbar \omega$ and thus provides a powerful tool for estimating the dynamics and the time-averaged values of these observables.
\begin{figure}[tbp]
\includegraphics[width=0.5\textwidth]{fig6.eps}
\caption{(Color online) The time-averaged value of atomic inversion $W_0$, transverse magnetization $M_0$, and atom-field correlation $G_0$ over the 150 driving periods as a function of bias field $\varepsilon$ for $g=0.01\hbar\omega$ and $\Delta=0.1\omega$.}
\label{Fig6}
\end{figure}
Finally, we discuss the regime of validity of the high-frequency expansion scheme used in this paper. First of all, the high-frequency expansion requires the driving frequency $\Omega$ to be large for both the AiRM and the AsRM. Secondly, for a fixed RWT coupling $g=0.1\hbar\omega$ the effective AiRM model works reasonably well in a range of detuning $-1 < \Delta/\omega < 2$, i.e. for either blue ($\Delta<0$) or red detuning ($\Delta>0$), provided that the CRT coupling or the bias is below $0.3\hbar\omega$. For a fixed CRT coupling strength $g^\prime=0.1\hbar\omega$, the effective AiRM model fits the numerical result surprisingly well even for an RWT coupling $g\sim1.5\hbar\omega$ over the whole range of detuning. This is due to the fact that the $g^\prime$ term is a driving term while the $g$ term is not: a relatively large $g$ only boosts the zeroth-order effective Hamiltonian in (\ref{H0}). The valid regime for $g^\prime=0.1\hbar\omega$ is thus a rectangle in the $\Delta$-$g$ plane, i.e. $-1<\Delta/\omega <2$, $0<g<1.5\hbar\omega$, as shown in Fig. 8 of Ref. \cite{Hausinger2010}. If we increase the driving term $g^\prime$, the upper left corner first becomes invalid, and for a strong enough driving term $g^\prime=0.25\hbar\omega$ the method is also inaccurate in a small area near resonance even for small $g$. A further increase of $g^\prime$ totally invalidates the high-frequency expansion. From the viewpoint of the dynamics, it is clear that close to resonance the analytical results match the numerics for the fixed ratio $g^\prime/g=0.5$ provided that $g^\prime$ is below $0.125\hbar\omega$. For fixed $g=\hbar\omega$, we also find a good match for $g^\prime$ up to $0.125\hbar\omega$. For the AsRM, we fix the coupling in the ultra-strong regime, i.e. $g=0.1\hbar\omega$, and a small detuning $\Delta=0.1\omega$, and find that the high-frequency expansion is valid for a relatively large bias $\varepsilon =0.3\hbar\omega$.
It is also instructive to compare our results with other analytical methods presented in the literature. The generalized rotating wave approximation (GRWA) \cite{Irish2007} and the Van Vleck perturbation (VVP) theory \cite{Hausinger2008,Hausinger2010} are both valid in the case of large blue detuning, $-1<\Delta/\omega<-0.4$, from weak to deep-strong coupling, $0<g<1.5\hbar\omega$. The GRWA is preferable to VVP at weak coupling, in particular close to resonance and at red detuning; on the contrary, VVP works better at strong coupling. The high-frequency expansion fills the blank left by these two methods in the red-detuning regime, for an ultra-strong CRT coupling parameter up to $g^\prime=0.25\hbar\omega$.
\section{Conclusion}
In conclusion, we transformed two extended Rabi models, the AiRM and the AsRM, into the rotating frame and regarded them as periodically driven models. By applying Floquet theory and the high-frequency expansion, we obtained the effective Hamiltonian and the quasi-energy spectrum both analytically and numerically. For the AiRM, the effective model agrees quite well with the numerical diagonalization in the extended Floquet Hilbert space for $g$ up to $2 \hbar\omega$ for a CRT coupling in the ultrastrong coupling regime, $g^\prime=0.1 \hbar \omega$. The effective model fails to predict the avoided level crossings occurring within the same parity subspace, due to its conservation of the total excitation number. The population dynamics governed by the effective model is, however, accurate enough for a fixed ratio of the RWT and CRT coupling strengths, $g^\prime/g=0.5$. For the AsRM, the quasi-energy spectrum is found to be clustered into two bundles by the bias field, which breaks the parity symmetry of the Rabi model. In both cases, the detuning opens a gap in the quasi-energy spectrum illustrated in the first temporal Brillouin zone, which is exactly the detuning energy in the AiRM and depends quadratically on the bias field in the AsRM. The driving dynamics of several observables were studied by means of Fourier analysis, and the two-frequency driving nature is manifested in the frequency spectrum. The time-averaged values of these oscillations may be controlled by the bias field, while the competition with the detuning and the atom-field coupling is expected to provide more versatile means of manipulating the driving dynamics. The Floquet method for the extended Rabi models provides an alternative tool in the study of the interaction between atoms and light and is readily applicable to more sophisticated models involving more qubits, more cavity modes, or many-body interactions.
\addcontentsline{toc}{chapter}{Appendix A: Derivation of the effective Hamiltonian}
\section*{Appendix A: Derivation of the effective Hamiltonian}
The formula for high-frequency expansion of the effective Hamiltonian up to the second order can be written as \cite{Bukov2015, Goldman2014}
\begin{align}
\hat{H}_{\rm eff}^{(0)} &= H_{0},\\
\hat{H}_{\rm eff}^{(1)} &=\frac{1}{\hbar\Omega}\sum_{l=1}^{\infty}\frac{[H_{l},H_{-l}]}{l},\\
\hat{H}_{\rm eff}^{(2)} &= \frac{1}{(\hbar\Omega)^{2}}\sum_{l\neq0}\left(\frac{[[H_{l},H_{0}],H_{-l}]}{2l^{2}}\right.\notag\\
&\left.+\sum_{l^{\prime}\neq0,-l}\frac{[[H_{l},H_{l^{\prime}}],H_{-(l+l^{\prime})}]}{3l(l+l^{\prime})}\right).
\end{align}
For $\hat{K}_{\rm eff}^{(n)}(t)$ up to order $(\hbar\Omega)^{-2}$, we have
\begin{align}
\hat{K}_{\rm eff}^{(1)}(t) &= \frac{1}{{\rm i}\hbar\Omega}\sum_{l\neq0}\frac{H_{l}{\rm e}^{{\rm i}l\Omega t}}{l},\\
\hat{K}_{\rm eff}^{(2)}(t) &=\frac{1}{{\rm i}(\hbar\Omega)^{2}}\sum_{l\neq 0}\frac{[H_{l},H_{0}]{\rm e}^{{\rm i}l\Omega t}}{l^{2}}\notag\\
&+\frac{1}{2{\rm i}(\hbar\Omega)^{2}}\sum_{l\neq0}\sum_{l^{\prime}\neq0,-l}\frac{[H_{l},H_{l^{\prime}}]{\rm e}^{{\rm i}(l+l^{\prime})\Omega t}}{l(l+l^{\prime})},
\end{align}
where $H_l$ is the $l$-th Fourier coefficient of the Hamiltonian,
\begin{align}
H_l = \frac{1}{T}\int_{0}^{T}\hat{H}(t){\rm e}^{-{\rm i}l\Omega t} {\rm d}t.
\end{align}
For the anisotropic Rabi model \eqref{Hrot}, we have
\begin{align}
H_0 &= \frac{1}{2}\hbar\Delta\hat{\sigma}_{z}+g(\hat{a}^{\dag}\hat{\sigma}_{-}+\hat{a}\hat{\sigma}_{+}),\\
H_{-1} &= g^{\prime}\hat{a}\hat{\sigma}_{-},\\
H_{1} &= g^{\prime}\hat{a}^{\dag}\hat{\sigma}_{+},
\end{align}
while for the asymmetric Rabi model \eqref{HAsRM}, we can get
\begin{align}
H_0 &= \frac{1}{2}\hbar\Delta\hat{\sigma}_{z}+g(\hat{a}^{\dag}\hat{\sigma}_{-}+\hat{a}\hat{\sigma}_{+}),\\
H_{-1} &= \varepsilon\hat{\sigma}_{-},\qquad H_{1} = \varepsilon\hat{\sigma}_{+},\\
H_{-2} &= g\hat{a}\hat{\sigma}_{-},\qquad H_{2} = g\hat{a}^{\dag}\hat{\sigma}_{+}.
\end{align}
By plugging them into the formula above, we can get the effective Hamiltonian and effective kick operators for each model.
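These formulas are straightforward to evaluate numerically with truncated operators. The sketch below (our own check, $\hbar=1$, illustrative parameters) reproduces the AiRM corrections \eqref{H1} and \eqref{H2} from the Fourier components, comparing only a sub-block that is unaffected by the Fock-space truncation:

```python
import numpy as np

N = 14                                    # Fock cutoff (illustrative)
a = np.diag(np.sqrt(np.arange(1, N + 1)), 1); ad = a.T
IN = np.eye(N + 1)
sz = np.diag([1.0, -1.0]); sp = np.array([[0, 1], [0, 0.0]]); sm = sp.T
kron = np.kron
comm = lambda A, B: A @ B - B @ A
hD, g, gp, hO = 0.1, 0.1, 0.1, 1.0        # hbar*Delta, g, g', hbar*Omega (illustrative)
# Fourier components of the AiRM rotating-frame Hamiltonian
H0 = 0.5*hD*kron(IN, sz) + g*(kron(ad, sm) + kron(a, sp))
H1, Hm1 = gp*kron(ad, sp), gp*kron(a, sm)
# general high-frequency-expansion formulas (only l = +-1 contribute here)
Heff1 = comm(H1, Hm1) / hO
Heff2 = (comm(comm(H1, H0), Hm1) + comm(comm(Hm1, H0), H1)) / (2*hO**2)
# closed forms, Eqs. (H1) and (H2)
nsz = kron(ad @ a, sz) - kron(IN, sm @ sp)
closed1 = gp**2/hO * nsz
closed2 = -gp**2/hO**2 * (hD*nsz + g*(kron(ad @ a @ ad, sm) + kron(a @ ad @ a, sp)))
```

Comparing only matrix elements with photon number well below the cutoff avoids the spurious boundary terms introduced by the truncated ladder operators.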
\addcontentsline{toc}{chapter}{Appendix B: Numerical method for quasi-energy spectrum}
\section*{Appendix B: Numerical method for quasi-energy spectrum}
The numerical procedure is based on the block diagonalization of the quasi-energy operator in the extended Floquet Hilbert space by means of degenerate perturbation theory, following Ref. \cite{Eckardt2015}. Consider the extended Floquet Hilbert space $\mathscr{F}$, given by the direct product of the state space $\mathscr{H}$ and the space $\mathcal{L}_T$ of square-integrable $T$-periodic time-dependent functions. A complete set of orthonormal basis states $|\alpha m(t)\rangle\rangle$ of $\mathscr{F}$ can then be constructed by combining a complete set of orthonormal basis states of $\mathscr{H}$, $| \alpha \rangle = | n, \pm \rangle$, with the complete set of time-periodic functions ${\rm e}^{{\rm i}m\Omega t}$ labeled by the integer $m$. Explicitly,
\begin{align}
|\alpha m(t)\rangle\rangle =| n, \pm \rangle {\rm e}^{{\rm i}m\Omega t}.
\end{align}
Using the definition of the scalar product in the extended Floquet Hilbert space, we can get the matrix elements $\mathcal{F}_{m^\prime m}^{\alpha^\prime\alpha}=\langle\langle\alpha^{\prime} m^{\prime}|\hat{\mathcal{F}}|\alpha m\rangle\rangle$ of the Floquet operator $\hat{\mathcal{F}}=\hat{H}^{\rm rot}(t)-{\rm i}\hbar \frac{\partial }{\partial t}$ with respect to the basis $|\alpha m\rangle\rangle$,
\begin{align}
\mathcal{F}_{m^\prime m}^{\alpha^\prime\alpha} &= \frac{1}{T}\int_0^{T}{\rm d}t\ {\rm e}^{-{\rm i}m^\prime\Omega t}\langle\alpha^{\prime}|\hat{H}^{\rm rot}(t) -{\rm i}\hbar \frac{\partial}{\partial t}|\alpha\rangle{\rm e}^{{\rm i}m\Omega t}\notag\\
&=\langle\alpha^\prime|H_{m^\prime-m}|\alpha\rangle +\delta_{m^\prime m}\delta_{\alpha^\prime\alpha}m\hbar\Omega,
\end{align}
where
\begin{align}
H_{m^\prime-m} = \frac{1}{T}\int_0^{T}{\rm d}t\ {\rm e}^{-{\rm i}(m^\prime-m)\Omega t}\hat{H}^{\rm rot}(t).
\end{align}
The quasi-energies are the eigenvalues of this infinite matrix.
\subsection*{B1. Anisotropic Rabi model}
For the anisotropic Rabi model, the Floquet operator matrix can be expressed as
\begin{align}
\begin{pmatrix}
\ddots&\vdots&\vdots&\vdots&\vdots&\ddots\\
\cdots&H_0-\hbar\Omega&H_{-1}&0&0&\cdots\\
\cdots&H_1&H_0&H_{-1}&0&\cdots\\
\cdots&0&H_1&H_{0}+\hbar\Omega&H_{-1}&\cdots\\
\cdots&0&0&H_{1}&H_{0}+2\hbar\Omega&\cdots\\
\ddots&\vdots&\vdots&\vdots&\vdots&\ddots
\end{pmatrix},
\end{align}
where
\begin{align}
H_{m^\prime-m} = \begin{cases}
g^{\prime}\hat{a}\hat{\sigma}_{-},&m^\prime-m=-1\\
\frac{1}{2}\hbar\Delta\hat{\sigma}_{z}+g(\hat{a}^{\dag}\hat{\sigma}_{-}+\hat{a}\hat{\sigma}_{+}),&m^\prime-m=0\\
g^{\prime}\hat{a}^{\dag}\hat{\sigma}_{+},&m^\prime-m=1.
\end{cases}
\end{align}
This infinite matrix cannot be diagonalized analytically, so we truncate both the index $m$ and the photon number $n$ and diagonalize the resulting finite matrix numerically. The cutoffs we choose are $m_{\rm max}=10$ and $n_{\rm max} = 4$.
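The truncation just described can be sketched numerically as follows. The code below assembles the block-tridiagonal Floquet matrix from the blocks $H_{0}+m\hbar\Omega$ and $H_{\pm1}$ and diagonalizes it (an illustrative sketch with $\hbar=1$; only the cutoffs $n_{\rm max}=4$ and $m_{\rm max}=10$ are taken from the text, while the values of \texttt{Delta}, \texttt{g}, \texttt{gp}, \texttt{Omega} are assumptions).

```python
import numpy as np

# Quasi-energies of the anisotropic Rabi model from the truncated Floquet
# matrix F^{a'a}_{m'm} = <a'|H_{m'-m}|a> + delta_{m'm} delta_{a'a} m*Omega.
n_max, m_max = 4, 10                              # cutoffs quoted in the text
Delta, g, gp, Omega = 0.5, 0.1, 0.05, 10.0        # illustrative parameters

dim_f = n_max + 1                                 # photon states |0>, ..., |n_max>
a = np.diag(np.sqrt(np.arange(1, dim_f)), 1)      # truncated annihilation op.
sm = np.array([[0, 0], [1, 0]], dtype=complex)
sp, sz = sm.conj().T, np.diag([1.0, -1.0]).astype(complex)

H0 = 0.5 * Delta * np.kron(np.eye(dim_f), sz) \
    + g * (np.kron(a.conj().T, sm) + np.kron(a, sp))
Hp1 = gp * np.kron(a.conj().T, sp)                # block H_{m'-m}, m'-m = +1
Hm1 = Hp1.conj().T                                # block for m'-m = -1

d = H0.shape[0]
n_blocks = 2 * m_max + 1
F = np.zeros((n_blocks * d, n_blocks * d), dtype=complex)
for i, m in enumerate(range(-m_max, m_max + 1)):
    F[i*d:(i+1)*d, i*d:(i+1)*d] = H0 + m * Omega * np.eye(d)
    if i + 1 < n_blocks:
        F[(i+1)*d:(i+2)*d, i*d:(i+1)*d] = Hp1     # lower off-diagonal blocks
        F[i*d:(i+1)*d, (i+1)*d:(i+2)*d] = Hm1     # upper off-diagonal blocks

quasi_energies = np.linalg.eigvalsh(F)
# Quasi-energies are defined modulo Omega; fold into the first Floquet zone.
first_zone = np.sort(((quasi_energies + Omega / 2) % Omega) - Omega / 2)
```

Convergence with respect to both cutoffs should be checked by increasing $m_{\rm max}$ and $n_{\rm max}$.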
\subsection*{B2. Asymmetric Rabi model}
For the asymmetric Rabi model, the Floquet operator matrix can be expressed as
\begin{align}
\begin{pmatrix}
\ddots&\vdots&\vdots&\vdots&\vdots&\ddots\\
\cdots&H_0-\hbar\Omega&H_{-1}&H_{-2}&0&\cdots\\
\cdots&H_1&H_0&H_{-1}&H_{-2}&\cdots\\
\cdots&H_{2}&H_1&H_{0}+\hbar\Omega&H_{-1}&\cdots\\
\cdots&0&H_{2}&H_{1}&H_{0}+2\hbar\Omega&\cdots\\
\ddots&\vdots&\vdots&\vdots&\vdots&\ddots
\end{pmatrix},
\end{align}
where
\begin{align}
H_{m^\prime-m} = \begin{cases}
g\hat{a}\hat{\sigma}_{-},&m^\prime-m=-2\\
\varepsilon\hat{\sigma}_-,&m^\prime-m=-1\\
\frac{1}{2}\hbar\Delta\hat{\sigma}_{z}+g(\hat{a}^{\dag}\hat{\sigma}_{-}+\hat{a}\hat{\sigma}_{+}),&m^\prime-m=0\\
\varepsilon\hat{\sigma}_+,&m^\prime-m=1\\
g\hat{a}^{\dag}\hat{\sigma}_{+},&m^\prime-m=2.
\end{cases}
\end{align}
The cut-off condition we choose is the same as in the anisotropic Rabi model.
\addcontentsline{toc}{chapter}{Acknowledgment}
\section*{Acknowledgment}
The authors are grateful to Dr. C.-M. Dai for illuminating discussions on the Magnus expansion. This work is supported by the National Natural Science Foundation of China (Grant No. 12074340) and the Science Foundation of Zhejiang Sci-Tech University (ZSTU) under Grant No. 20062098-Y.
\section{Introduction}
\label{sec:Introduction}
In a recent article~\cite{EmelyanovKlinkhamer2018}, we discussed
Coulomb scattering of two electrically charged
elementary particles, with one of these particles
inside the Schwarzschild black-hole horizon and the other outside.
We, then, proposed a \textit{Gedankenexperiment}
which uses this quantum scattering process to
transfer information from inside the black-hole horizon to outside.
Now, the question arises whether it is, in principle, possible to extract
electric charge from a static nonrotating black hole by the exchange
of a charged vector boson.
In the present note, we will give an affirmative answer to this question,
starting from an electron-positron pair
inside the Schwarzschild black-hole horizon.
This answer will be obtained by using the results from Ref.~\cite{EmelyanovKlinkhamer2018}
and those of an earlier preprint version~\cite{EmelyanovKlinkhamer2017v6}.
\section{Scattering set-up}
\label{sec:Scattering-set-up}
Consider the following elastic scattering process
from the standard model of elementary particles in Minkowski spacetime:
\begin{equation}
\label{eq:e-nue-scattering}
e^{-} + \nu_e \rightarrow e^{-} + \nu_e\,,
\end{equation}
as discussed in, e.g., Sec.~8.5 of Ref.~\cite{Taylor1976}
and Sec.~12.6.2 of Ref.~\cite{ItzyksonZuber1980}.
The relevant position-space Feynman diagrams at tree level
are given in Fig.~\ref{fig:1}.
The corresponding process in the black-hole context
is given by the set-up of Fig.~\ref{fig:2},
which needs to be compared to Figs.~2 and 3 of Ref.~\cite{EmelyanovKlinkhamer2018}.
For the set-up of Fig.~\ref{fig:2} in this note,
we also assume that the initial inside-horizon electron ($e^{-}$)
was produced by pair creation and
that the corresponding initial inside-horizon positron ($e^{+}$)
does not participate in the scattering with the
initial outside-horizon neutrino ($\nu_e$).
The total electric charge of this
initial inside-horizon positron-electron pair is zero.
\vspace*{0mm}
\begin{figure}[t]
\includegraphics[scale=0.5]{charge-extraction-fig1-v2.eps}
\vspace*{-2mm}
\caption{Position-space Feynman diagrams for $e^{-}\,\nu_e$ scattering
at tree level in Minkowski spacetime.
The arrows in the diagrams show the flow of lepton number.
The double lines indicate that electric charge is transported.
}
\label{fig:1}
\vspace*{10mm}
\includegraphics[scale=0.5]{charge-extraction-fig2-v2.eps}
\vspace*{-2mm}
\caption{Across-horizon $e^{-}\,\nu_e$ scattering
allows for electric-charge extraction from a static nonrotating black hole.
The position-space Feynman diagrams from Fig.~\ref{fig:1}
now hold in a local inertial coordinate system
near the Schwarz\-schild black-hole horizon.
Specifically, the electric-charge-extraction process follows from
the charged-current position-space Feynman diagram on the left.
The projected black-hole horizon is indicated
by the dashed line with the black-hole center to its left.
The positron inside the black-hole horizon does not participate
in the scattering.}
\label{fig:2}
\end{figure}
\section{Charge extraction}
\label{sec:Charge-extraction}
The position-space Feynman diagram on the left of
Fig.~\ref{fig:2} allows for electric charge extraction, as long as
the initial electron is inside the event horizon and
the final electron outside the event horizon
with an appropriate outgoing momentum
(similar to the recoil electron in elastic $e^{-}\,\mu^{-}$ scattering
as discussed in Ref.~\cite{EmelyanovKlinkhamer2018}).
Observe that the electric-charge-extraction process from Fig.~\ref{fig:2}
does not change the total lepton number inside the black-hole horizon.
The Einstein Equivalence Principle allows us to analytically describe this
process by employing the Minkowski-spacetime approximation for the
computation of the $S$-matrix.
An essential part of this approximation is
that we can neglect spacetime curvature effects contributing to
the diagrams of Fig.~\ref{fig:2} by considering initial particles with a
sufficiently large center-of-mass energy~\cite{Endnote1}.
In other words, the characteristic
length scale describing the scattering reaction has to be much smaller
than the curvature length scale near the event horizon for the
approximation to be reliable.
This condition can always be fulfilled
by making the black-hole mass in the
\textit{Gedankenexperiment} arbitrarily large~\cite{Endnote2}.
Further discussion
of the near-horizon region of a Schwarzschild black hole
and the local Minkowski coordinates $(T,\, X,\, Y,\, Z)$
can be found in Sec.~2 of Ref.~\cite{EmelyanovKlinkhamer2018}.
From now on, we will use the particle-physics conventions
of Sec.~20.2 in Ref.~\cite{PeskinSchroeder1995}.
The tree-level \mbox{$W$-exchange} diagram of Fig.~\ref{fig:2}
corresponds to the following momentum-space probability amplitude:
\begin{eqnarray}
\label{eq:mom-space-amplitude}
\mathcal{M}_{W}(e^{-}\,\nu_{e} \rightarrow e^{-}\,\nu_{e}) &=&
- \frac{g^2}{2}\,\frac{1}{q^2 -m_{W}^2 + i\varepsilon}\,
\left[\bar{u}^{s'}(k')\gamma_\mu u^r(p)\right]
\;\left[\bar{u}^{r'}(p')\gamma^\mu u^s(k)\right]\,,
\end{eqnarray}
with the $SU(2)$ gauge coupling constant $g$
and the gauge-boson 4-momentum
$q \equiv k-p' = k'-p$ for 4-momenta
$p/p'$ and $k/k'$ of the initial/final neutrino and electron,
respectively. On the left-hand side of \eqref{eq:mom-space-amplitude},
we have suppressed the spin indices
$r,\,r'$ of the neutrino and $s,\,s'$ of the electron.
Next, specialize to spin-up fermions, zero neutrino mass, and
the following 4-momenta:%
\begin{subequations}
\begin{eqnarray}
\label{eq:choice-momenta}
k^{\,\mu} &=& \left(\sqrt{k^2 + m_{e}^2},\,k,\,0,\,0 \right)\,,
\\[2mm]
p^{\,\mu} &=& \left(p,\,-\sqrt{1/3}\;p,\,+\sqrt{2/3}\;p,\,0 \right)\,,
\\[2mm]
k'^{\,\mu} &=& k^{\,\mu} \,,
\\[2mm]
p'^{\,\mu} &=& p^{\,\mu} \,,
\end{eqnarray}
\end{subequations}
where both $k$ and $p$ are taken positive. Then, we have
\begin{eqnarray}\hspace{-10mm}
\label{eq:mom-space-amplitude-result}
\mathcal{M}_{W}(e^{-}\,\nu_{e} \rightarrow e^{-}\,\nu_{e}) &=&
\frac{2\,g^2}{\sqrt{3}}\;
\frac{p\,k}{m_{e}^2 - \sqrt{4/3}\;p\,k - 2\,p\,\sqrt{k^2 + m_{e}^2} -m_{W}^2
+ i\varepsilon}
\nonumber\\[1mm]
&\sim&
\frac{g^2}{2}\,\left(1-\sqrt{3}\right) \;\neq\; 0\,,
\end{eqnarray}
in the ultrarelativistic limit of $k$ and $p$
[specifically, $\min(k,\,p) \gg m_{W} \gg m_{e}$].
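This limit can be verified numerically. The sketch below (with illustrative inputs $g=0.65$, $m_{W}=80.4\,$GeV, $m_{e}=0.511\,$MeV; these numbers are assumptions for the check, not part of the derivation) evaluates the exact expression of Eq.~\eqref{eq:mom-space-amplitude-result} at large $k=p$ and compares it with $\tfrac{g^{2}}{2}\,(1-\sqrt{3})$.

```python
import numpy as np

# Check that the exact amplitude tends to (g^2/2)(1 - sqrt(3)) when
# min(k, p) >> m_W >> m_e.  All masses and momenta in GeV.
def amplitude(k, p, g, mW=80.4, me=0.511e-3):
    num = 2.0 * g**2 / np.sqrt(3.0) * p * k
    den = me**2 - np.sqrt(4.0 / 3.0) * p * k \
        - 2.0 * p * np.sqrt(k**2 + me**2) - mW**2
    return num / den

g = 0.65                                          # illustrative coupling value
M_uv = amplitude(k=1.0e5, p=1.0e5, g=g)           # deep ultrarelativistic point
M_lim = 0.5 * g**2 * (1.0 - np.sqrt(3.0))         # quoted limiting value
```

At $k=p=10^{5}\,$GeV the two values agree to better than one part in $10^{3}$, and the limit is manifestly nonzero.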
For the calculation of the position-space
amplitude shown on the left of Fig.~\ref{fig:2}, we must
fold the above momentum-space amplitude with appropriate
wave packets, as discussed in Appendix C of an earlier
version of our article~\cite{EmelyanovKlinkhamer2017v6}.
The nonvanishing $e^{-}\,\nu_e$ scattering amplitude
\eqref{eq:mom-space-amplitude-result} implies, according to
the argument of Secs.~2 and 3 in Ref.~\cite{EmelyanovKlinkhamer2018}
and App. C in Ref.~\cite{EmelyanovKlinkhamer2017v6},
that we can extract electric charge from a static nonrotating
black hole by repeating the scattering process of Fig.~\ref{fig:2}
a large number of times ($N \gg 1$). Only a few events ($n \ll N$)
extract electric charge, as both Feynman diagrams of Fig.~\ref{fig:2}
contribute to the process and
even the charged-current diagram may not give an
appropriate outgoing electron
(cf. App.~B in Ref.~\cite{EmelyanovKlinkhamer2018}).
\section{Discussion}
\label{sec:Discussion}
It is, of course, possible to change the electric charge
of a static nonrotating black hole by processes other than the
one considered so far.
Distinct from the electric-charge-extraction process
(left diagram of Fig.~\ref{fig:2}) is,
for example, the electric-charge-reduction process illustrated in Fig.~\ref{fig:3}.
Observe that the electric-charge-reduction process from Fig.~\ref{fig:3}
changes the total lepton number inside the black-hole horizon.
Note also that, for both processes,
the backreaction on the metric has been neglected
in Figs.~\ref{fig:2} and \ref{fig:3}.
\begin{figure}[t]
\includegraphics[scale=0.55]{charge-extraction-fig3-v2.eps}
\vspace*{-2mm}
\caption{Position-space Feynman diagram
(in a near-horizon local inertial coordinate system)
for electric-charge reduction of a static nonrotating charged black hole.
Shown is a black hole with an initial negative electric charge $Q=-e$,
which is changed to $Q=0$ by an infalling positron
(electric charge $e>0$ and lepton number $-1$),
while the corresponding electron
(electric charge $-e<0$ and lepton number $1$) escapes to spatial infinity.
The double line denotes an electron and the single line a quark
(up or down quark with electric charge $2e/3$ or $-e/3$, respectively,
each quark having baryon number $1/3$),
where the arrows indicate the flow of negative electric charge.
The exchange particle in the sub-diagram on the right can be
either a photon or a $Z^{0}$ boson.
The projected black-hole horizon is shown
by the dashed line with the black-hole center to its left.}
\label{fig:3}
\end{figure}
Returning to our electric-charge-extraction process (Fig.~\ref{fig:2}),
several remarks are in order.
First, the very process of charge extraction considered
in Sec.~\ref{sec:Charge-extraction}
does not imply that causality is violated, as there is no real
(on-shell) particle that moves superluminally.
Second, we have thus seen that a quantum scattering process
not only allows for the extraction of information from a
static nonrotating black hole
but also for the extraction of electric charge.
In this respect, we note that
electric charge is not just a measure of how strongly
two electrons scatter with each other, but, rather,
a measure of how the corresponding photon propagates
(cf. the discussion on electric charge in Sec.~III.7,
pp. 204--205 of Ref.~\cite{Zee2010}).
This becomes clear if we examine the structure
of radiative corrections contributing to charge
renormalization. It turns out that charge
renormalization depends only on the photon
propagator (including vacuum-polarization effects).
Thus, we can say that the electric charge belongs
to the gauge field
(determining how the photon propagates),
rather than to the matter field.
Third, our result for electric-charge extraction
from a static nonrotating black hole can be extended
to other types of charge involving a gauge boson
and to nonstatic rotating black holes.
\vspace*{-0mm}
\section*{\hspace*{-5mm}Acknowledgments}
\vspace*{-0mm}\noindent
We thank P. Soler for asking,
after an invited talk by V.A.E. at the May 2018 Heidelberg meeting of
TRR33 ``The Dark Universe,''
the question formulated in Sec.~\ref{sec:Introduction} of the present note.
\newpage
\section{Introduction}
Because photons couple in point-like fashion to quarks, observation,
among the final-state particles in a high energy collision, of photons carrying
large values of transverse momentum provides an incisive probe of the short
distance hadron dynamics of the collision. This fact explains the substantial
theoretical and experimental interest shown in studies of the cross section
for production of photons at large angles in hadron-hadron and
lepton-hadron scattering and in electron-positron annihilation processes. At
stake are precise tests of the theory of perturbative quantum chromodynamics
(QCD) and use of data to determine properties of the relativistic proton such
as the momentum distribution of its constituent gluons and quarks. Discovery
of the charm quark and, later, of the bottom quark stimulated interest in the
dynamics of their relatively copious production in high energy interactions of
hadrons. Recent experimental advances now offer the possibility of studies of
the associated production of a photon $(\gamma)$ carrying large transverse
momentum along with a heavy quark $(Q)$ whose transverse momentum balances a
substantial portion of that of the photon.\cite{cdf} In this paper, we
report a fully analytic next-to-leading order QCD calculation of the
two-particle inclusive distribution for prompt photon plus associated heavy
flavor production at large values of transverse momentum, with specification
of the momentum variables of both the final prompt photon and the final heavy
quark. These results should facilitate further experimental tests of
correlations inherent in the QCD matrix elements and provide a means for
measuring the charm quark density in the nucleon.
Although a qualitative description may be obtained from lowest-order
perturbation theory, more precise predictions of the momentum distribution
for the inclusive production a heavy quark (or antiquark) require perturbative
calculations that extend to higher order.\cite{qnlo} Likewise, perturbative
QCD calculations of inclusive and isolated prompt single photon production
are available.\cite{gamnlo1,gorvogel,berqiu} At the level of two-particle
inclusive final states, next-to-leading order QCD calculations have been done
for ${\gamma\gamma}$ production\cite{aur2,boboo}, for $\gamma$-hadron
production \cite{aur3} and for $\bar{Q}Q$ correlations.\cite{correl} The
cross section for the production of two hadronic jets has been studied at
$O(\alpha ^3_s)$ by several authors.\cite{sdellis}
Constraints on the charm and strange quark densities from data on intermediate
vector-boson production are discussed in Ref.~\cite{ELBW}.
For values of transverse momentum $p^Q_{T}$ of the heavy quark
significantly larger than the mass $m_Q$ of the heavy quark, the cross section
for the two-particle inclusive reaction $p +\bar{p}\rightarrow \gamma + Q + X$
may be calculated from the leading order QCD subprocess, the quark-gluon
Compton process, $g + Q \rightarrow \gamma + Q$. This subprocess is of first
order in the strong coupling strength $\alpha_s$. The cross section is
obtained as a convolution of the hard-scattering QCD matrix with probability
distributions that specify the initial gluon and heavy quark constituent
momentum densities in the incident hadrons, $p$ and $\bar{p}$. At
next-to-leading order in QCD, several subprocesses contribute to the $\gamma +
Q$ final state:
\begin{mathletters}\label{eq:1}
\begin{eqnarray}
g &+ Q \rightarrow g + Q + \gamma\label{eq:11}\\
g &+ g \rightarrow Q +\bar{Q} + \gamma\label{eq:12}\\
q &+ \bar{q} \rightarrow Q +\bar{Q} + \gamma\label{eq:13}\\
q &+ Q \rightarrow q + Q + \gamma\label{eq:14}\\
\bar{q} &+ Q \rightarrow \bar{q} + Q + \gamma\label{eq:15}\\
Q &+ \bar{Q} \rightarrow Q + \bar{Q} + \gamma\label{eq:16}\\
Q &+ Q \rightarrow Q + Q + \gamma\label{eq:17}
\end{eqnarray}
\end{mathletters}
For computation of the cross section for $\bar{Q}$ production, the set
of next-to-leading order subprocesses is obtained from those of
Eq.~(\ref{eq:1}) after
replacement of the initial $Q$'s by $\bar{Q}$'s
in Eqs.~(\ref{eq:11}), (\ref{eq:14}), (\ref{eq:15}), (\ref{eq:17}).
We note that for values of $p^Q_{T}$ that are comparable to or
less than $m_Q$ there would be no $O(\alpha_s)$ subprocess, and the proper hard
scattering expansion would entail only the subprocesses of Eqs.~(\ref{eq:12})
and (\ref{eq:13}). For the remainder of this paper, we limit
ourselves to charm production, and we work with the massless Q approximation,
$m_c = 0$.
We are interested ultimately in the fully differential two-particle
inclusive cross section, $E_\gamma E_Qd\sigma/d^3p_\gamma d^3p_Q$, where
$(E,p)$ represents the four-vector momentum of the $\gamma$ or $Q$. For each
subprocess listed in Eq.~(\ref{eq:1}), this calculation requires integration
over the
momentum of the unobserved final parton ($g$, $\bar{Q}$, $q$, or $\bar{q}$) and
over the initial parton momentum densities. Collinear singularities are handled
analytically by dimensional regularization and absorbed into initial-state
parton momentum densities or final-state fragmentation functions. To make
the analytic calculation tractable, we
chose to work in terms of the transverse momentum of the final $\gamma$,
$p^{\gamma}_{T}$, and the ratio of the heavy quark and photon transverse
momenta:
\begin{equation}
z = - {{p^Q_{T}\cdot p^{\gamma}_{T}}\over {(p^\gamma_{T})^2}}. \label{eq:zdef}
\end{equation}
To warrant use of perturbation theory (and the massless $Q$ approximation), we
limit our considerations to $z > 0$ and $p^\gamma_{T} > 10$ GeV. The results
should be applicable quantitatively for $p^c_{T} \gg m_c$. The
distribution in $z$ from the leading order subprocess
$g + Q \rightarrow \gamma + Q$ is peaked sharply at $z = 1$
(a $\delta(1-z)$ function in the naive collinear initial parton
approximation). The next-to-leading order processes alter the
size of this sharp peak and produce a broad distribution above and below
$z = 1$.
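For concreteness, Eq.~(\ref{eq:zdef}) can be implemented for two-dimensional transverse-momentum vectors; exactly balancing momenta give $z=1$, the configuration selected by the leading-order $\delta(1-z)$. The sketch below uses purely illustrative momenta (in GeV).

```python
import numpy as np

# z = -(pT_Q . pT_gamma) / |pT_gamma|^2 for 2D transverse-momentum vectors.
def z_ratio(pT_Q, pT_gamma):
    pT_Q = np.asarray(pT_Q, dtype=float)
    pT_gamma = np.asarray(pT_gamma, dtype=float)
    return -np.dot(pT_Q, pT_gamma) / np.dot(pT_gamma, pT_gamma)

z_back_to_back = z_ratio([-30.0, 0.0], [30.0, 0.0])  # LO kinematics: z = 1
z_half = z_ratio([-15.0, 0.0], [30.0, 0.0])          # charm carries half the pT
```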
Contributions to hard photon production from long-distance quark to photon
and gluon to photon fragmentation processes have been
emphasized theoretically,\cite{field} parametrized phenomenologically in
leading order,\cite{frag1} and evolved in next-to-leading
order.\cite{frag2,frag3} These terms
may account for more than half of the
calculated inclusive single photon cross section at modest values of transverse
momentum at the Fermilab Tevatron collider. Because of our kinematic
restriction $z > 0$, there will be no contributions to the final cross
section from $Q \rightarrow \gamma$ fragmentation, where $Q$ is the observed
quark/anti-quark, from among the subprocesses
in Eq.~(\ref{eq:1}). On the other hand, fragmentation of the unobserved final
parton into a photon in subprocesses (1.a-g) will contribute to the cross
section and produce photons that carry $p_T$ less than that of $p^Q_{T}$,
mostly populating the region $z > 1$. Photons originating through
fragmentation are likely to emerge in the neighborhood of associated hadrons.
An experimental
isolation restriction is needed before a clean identification can be made of
the photon and a measurement made of its momentum. Isolation reduces the size
of the observed fragmentation contribution. To represent the effects of
isolation, we should use fragmentation functions defined with a cone size.
Photon
isolation complicates the theoretical interpretation of results, however, since
it threatens to upset the cancellation of infra-red divergences in perturbation
theory.\cite{berqiu} In this paper, we calculate the contributions from photon
fragmentation at leading order only, and, except for one illustrative figure,
we neglect the isolation requirements.
After integration over the longitudinal momentum of the heavy quark, we
present our results in terms of the cross section
$d\sigma/dp^\gamma_{T} dy^\gamma dz$. Here, $y^\gamma$ represents the
rapidity of the $\gamma$. Our
desire to perform a fully analytic calculation restricts our ability to provide
a more differential cross section in this paper (i.e., a cross section also
differential in $y^Q$). In a later more detailed paper, we will present such
results obtained from a versatile combination of analytic and Monte Carlo
techniques.\cite{moncarlo} In that method, selections may be made on several
variables and photon isolation restrictions are easier to impose. An earlier
theoretical paper addresses prompt photon plus associated charm
production at large values of transverse momentum, as we do here, but
our analysis differs from that of Ref.~\cite{strvog}.
The calculation of the photon-plus-charm cross section in
Ref.~\cite{strvog} is done in lowest order while ours is done at
next-to-leading order. In lowest order, the
subprocesses $gg\rightarrow \gamma c \bar{c}$ and $q\bar{q}\rightarrow
\gamma c \bar{c}$ contribute in the massive case, whereas
$c g \rightarrow \gamma c$ plus fragmentation processes contribute in the
massless case. In a forthcoming paper, we
intend to examine the massive case in detail and to discuss comparisons with
the massless case in the regions of phase space of their respective
applicability. As remarked above, our massless approach should be appropriate
and applicable in the domain in which there is effectively only one large
scale,
$p^c_{T} \gg m_c$.
For the interval in $p^\gamma_{T}$ of current experimental interest,
10 GeV $< p^\gamma_{T} < 50$ GeV, the $g c$ and $g g$ subprocesses of
Eqs.~(\ref{eq:11}) and (\ref{eq:12}) are the most important quantitatively
at Fermilab Tevatron
energies, owing to the strength of the gluon density. For $p^\gamma_{T} > 70$
GeV, calculations of the inclusive yield of single photons indicate that the
$q \bar{q}$ subprocess begins to dominate, but the cross section is
small in this region. Dominance of the perturbative
subprocess initiated by $g c$ scattering is preserved after the next-to-leading
terms are included, justifying use of data from
$p +\bar{p}\rightarrow \gamma + c + X$ in attempts to measure the charm
quark momentum density in the nucleon. However, we show that other
subprocesses account for about
$50\%$ of the cross section at currently accessible values of
$p^\gamma_{T}$. The ``background" associated with these
subprocesses must be taken into account in analyses done to extract
the charm density.
Our results are provided in terms of the
momentum of the charm quark. In a typical experiment,\cite{cdf} the momentum
of the
quark may be inferred from the momentum of prompt lepton decay products or
the momentum of charm mesons, such as $D^*$'s. Alternatively, our
distributions in $z$ or $p^c_{T}$ may be convoluted with charm quark
fragmentation functions, deduced from, e.g., $e^+e^-$ annihilation
data, to provide distributions for the prompt leptons or $D^*$'s.
In Sec. II, we present our analysis of the leading and next-to-leading order
contributions to the partonic hard-scattering cross sections. Numerical results
are described in Sec. III, and a summary of our conclusions is provided in
Sec. IV.
An Appendix is included in which we present our method for performing the
required three-particle final-state integrals in n-dimensions to extract the
singularities of the two-particle inclusive hard cross section.
\section{Analytical Calculation}
We consider the two particle inclusive reaction
$A +B \rightarrow \gamma + c + X$ where $A$ and $B$ denote incident hadrons;
$p^\gamma$ and $p^c$ denote the four-vector momenta of the photon and charm
quark. The usual Mandelstam invariants are defined in terms of the momenta of
the two incoming hadrons $P_A$ and $P_B$, and the momentum fractions of
the initial partons, $x_1$ and $x_2$, via
\begin{eqnarray}
\hat{s}&=&(x_1 P_A+x_2 P_B)^2=x_1 x_2 s \nonumber \\
\hat{t}&=&(x_1 P_A - p^{\gamma})^2 \nonumber \\
\hat{u}&=&(x_2 P_B - p^{\gamma})^2. \label{eq:maldel}
\end{eqnarray}
Here $\sqrt{s}$ is the center-of-mass energy in the hadronic system.
We define
\begin{eqnarray}
v&=&1+\frac{\hat{t}}{\hat{s}} \nonumber \\
w&=&\frac{-\hat{u}}{\hat{s}+\hat{t}}. \label{eq:vw}
\end{eqnarray}
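As a quick consistency check on these variables (an illustrative sketch; the invariants below are made up, not taken from data), note that massless $2\rightarrow2$ kinematics, where $\hat{s}+\hat{t}+\hat{u}=0$, forces $w=1$.

```python
# v = 1 + t/s and w = -u/(s + t); for massless 2 -> 2 kinematics
# s + t + u = 0 implies w = 1, the elastic (Born) configuration.
def v_w(s_hat, t_hat, u_hat):
    v = 1.0 + t_hat / s_hat
    w = -u_hat / (s_hat + t_hat)
    return v, w

v2, w2 = v_w(s_hat=100.0, t_hat=-30.0, u_hat=-70.0)  # s + t + u = 0
```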
\subsection{Leading Order Contributions}
In leading order in perturbative QCD, only one direct subprocess
contributes to the hard-scattering cross section, the QCD Compton process
$c g\rightarrow \gamma c$,
unlike the case for single inclusive prompt photon production, where the
annihilation process $q\bar{q}\rightarrow \gamma g$ also contributes.
Since the leading order direct partonic subprocess has a two-body final state,
the photon and $c$-quark are produced with balancing transverse
momenta, and the variable $z$, defined in Eq.~(\ref{eq:zdef}), is always unity.
The leading order direct partonic cross section is
\begin{equation}
\frac{d\hat{\sigma}}{dvdzdw}=\frac{d\hat{\sigma}}{dv}\delta(1-z)\delta(1-w),
\label{eq:borna}
\end{equation}
where $d\hat{\sigma}/dv$ is the partonic Born cross section:
\begin{equation}
\frac{d\hat{\sigma}}{dv}(c g\rightarrow \gamma c)=\frac{1}{N_C}\frac{\pi
\alpha_{em}\alpha_s e_q^2}{\hat{s}}\frac{1+(1-v)^2}{1-v}.\label{bornb}
\end{equation}
Here $\alpha_{em}$ and $\alpha_s$ are the electromagnetic and strong
coupling constants, respectively, $N_C=3$ is the number of colors, and
$e_q$ denotes the quark charge.
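A direct numerical evaluation of the Born cross section can be sketched as follows (natural units, result in GeV$^{-2}$; the kinematic inputs and the value of $\alpha_s$ are illustrative assumptions, not numbers from the text).

```python
import numpy as np

# Partonic Born cross section dsigma/dv for c g -> gamma c.
def dsigma_dv_born(v, s_hat, alpha_s, alpha_em=1.0 / 137.036,
                   e_q=2.0 / 3.0, N_C=3):
    return (1.0 / N_C) * np.pi * alpha_em * alpha_s * e_q**2 / s_hat \
        * (1.0 + (1.0 - v)**2) / (1.0 - v)

val_mid = dsigma_dv_born(v=0.5, s_hat=400.0, alpha_s=0.118)
val_edge = dsigma_dv_born(v=0.9, s_hat=400.0, alpha_s=0.118)  # grows as v -> 1
```

Note the characteristic growth of the partonic cross section as $v\rightarrow1$.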
The full expression for the physical cross section in leading order is
\begin{equation}
\frac{d\sigma}{dp_T^{\gamma}dy^{\gamma}dz}=2\pi p_T^{\gamma}\frac{1}{\pi
s}\int^1_{V W} \frac{dv}{1-v}
f_g^A(x_1,M^2)f_c^B(x_2,M^2)\frac{d\hat{\sigma}}{dv}\delta(1-z)\delta(1-w)
+ (c \leftrightarrow g). \label{eq:losigma}
\end{equation}
Quantities $V$ and $W$ are defined similarly to $v$ and $w$,
Eq.~(\ref{eq:vw}), but
in the hadronic system; $f^A(x_1,M^2)$ denotes the parton density in hadron $A$
as a function of the momentum fraction $x_1$ and factorization scale $M$.
In addition to the lowest order direct subprocess just discussed,
$c g\rightarrow \gamma c$, there are fragmentation contributions that are also
effectively of leading order in $\alpha_s$. In these contributions the photon
is produced through fragmentation of a final-state parton from any of the
$O(\alpha_s^2)$ subprocesses listed below. The fragmentation functions are
essentially of order $O(\alpha_{em}/\alpha_s)$
\begin{eqnarray}
c+g&\rightarrow& g+c \nonumber \\
g+g&\rightarrow& c+\bar{c} \nonumber \\
c+q&\rightarrow& c+q \nonumber \\
c+\bar{q}&\rightarrow&c+\bar{q} \nonumber \\
c+c&\rightarrow& c+c \nonumber \\
c+\bar{c}&\rightarrow & c+\bar{c} \nonumber \\
q+\bar{q}&\rightarrow & c+\bar{c}. \label{eq:fragproc}
\end{eqnarray}
We are interested in configurations in which the photon and charm quark
have relatively large and to-some-extent balancing values of transverse
momentum. Therefore, in the cases of the first, third, and fourth of the
subprocesses listed above, the photon is produced from fragmentation of the
gluon $g$ or of the non-charm (anti)quark $q$, respectively. In the
other cases it is produced in the fragmentation of one of the
(anti)charm quarks. The expression we use to evaluate the fragmentation
contributions is
\begin{eqnarray}
\frac{d\sigma}{dp_T^{\gamma}dy^{\gamma}dz}&=&2\pi p_T^{\gamma}\frac{1}{\pi
s}\int^1_{1-V+V W}\frac{dz'}{z'^2}\int^1_{V W} \frac{dv}{1-v}
f_a^A(x_1,M^2)f_b^B(x_2,M^2)\frac{d\hat{\sigma}^{ab\rightarrow i X}}{dv}
\nonumber \\
& &\times D_{\gamma/i}(z',Q^2)\delta(\frac{1}{z}-z'). \label{eq:fragsigma}
\end{eqnarray}
In a fully consistent next-to-leading calculation, one should calculate
the subprocesses in Eq.~(\ref{eq:fragproc}) to $O(\alpha_s^3)$, since the
photon
fragmentation functions that are convoluted with the hard subprocess
cross sections are of $O(\alpha_{em}/\alpha_s)$. For simplicity, we
include them in $O(\alpha_s^2)$ only. In fact, next-to-leading order
fragmentation contributions to single prompt photon production have been
included only once before\cite{gorvogel}. We expect the next-to-leading order
corrections to the fragmentation contributions to be insignificant numerically
especially after isolation cuts are imposed.
\subsection{Next-to-leading order Contributions}
There are two classes of contributions in next-to-leading order. First there
are the
virtual gluon exchange corrections to the lowest order process. Examples are
shown in Fig.~1b. These amplitudes interfere with the Born amplitudes and
contribute at $O(\alpha_{em}\alpha_s^2)$. They have been calculated
twice before.\cite{gamnlo1,gorvogel} We use the results of
Ref.~\cite{gorvogel}. The virtual contributions are proportional to
$\delta(1-z)$ and $\delta(1-w)$. At next-to-leading order there are also
three-body final-state contributions, listed in Eq.~(\ref{eq:1}). The
matrix elements for these are also taken
from Ref.~\cite{gorvogel}, where they are calculated for
single inclusive prompt photon production.
The main task of our calculation is to integrate the three-body matrix elements
over the phase space of
the unobserved particle in the final state. The situation here is
different from the standard case of single inclusive particle
production, first developed in Ref.~\cite{rke}, since we wish to
retain as much control as possible over the kinematic variables of a
second particle in the final state, while at the same time integrating
over enough of the phase space to ensure cancellation of all infrared
and collinear divergences, inherent when massless
particles are assumed. Because our goal is to provide a
fully analytic calculation, we find it necessary to integrate over the
full range of rapidity of one of the observed final-state particles. We
choose to integrate over that of the charm quark, since the photon is
usually considered the trigger particle in the experiments.
The situation here is similar to that met by
Aurenche {\it et al.}\cite{aur2,aur3}, and
we use a similar technique to perform the phase space integrals. We
give a fairly detailed outline of the method since it is necessary to
adapt it to our situation and also because it has not been widely used.
We believe our presentation clarifies certain details which are not
stressed in the above references.
The three-body phase space integration is
done in the rest frame of the observed $c$ (or $\bar{c}$) quark and the third
unobserved parton. Denoting the momenta of the process by
$p_1+p_2\rightarrow k_1+k_2+k_3$, we work in the rest frame of $k_2$ and
$k_3$, where $k_1$ is the momentum of the trigger photon. The final form
of the three-particle phase space integral (see the Appendix) is
\begin{eqnarray}
PS^{(3)}&=&\frac{\pi\hat{s}}{8(2\pi)^5}\left(
\frac{4\pi}{\hat{s}}\right)^{\epsilon}
\frac{v}{\Gamma(1-2\epsilon)}\left(\frac{4\pi}{\hat{s}wv(1-v)}\right)^{\epsilon}
v^{-\epsilon}(1-w)^{-\epsilon}2\sqrt{\frac{w(1-v)}{(1-v w)}}\nonumber \\
& &\times\left[ \frac{1-w+4w(1-v)z(1-z)}{1-v w}\right]^{-\epsilon}\int^\pi_0
d\theta_2 \sin^{-2\epsilon}(\theta_2) . \label{eq:phasespace}
\end{eqnarray}
We are left to perform the final integration of the squared matrix
elements over $\theta_2$.
As in the case of single inclusive cross section calculations,
documented extensively elsewhere, one can use relations
among the Mandelstam variables to reduce complex combinations of them
to simple products and ratios. The phase space integral over $\theta_2$ is
performed in $4-2\epsilon$ dimensions, thereby exposing collinear
and soft singularities as poles in $\epsilon$.
After the three-particle phase space integrals are performed, we obtain
a three-body final state
hard-scattering cross section that we represent by the expression
\begin{displaymath}
\frac{d\sigma^R_{ij}}{dv dw dz}\left( \hat{s},v,w,z,\frac{1}{\epsilon^2},
\frac{1}{\epsilon}\right) .
\end{displaymath}
Superscript $R$ indicates that this is the subprocess cross section for a
real three-body final-state contribution, as distinct from the contribution
from the virtual gluon exchange contributions that we denote $\sigma^V_{ij}$.
The subscripts ${ij}$ designate one of the processes in Eq.~(\ref{eq:1}). In
general,
$\sigma^R_{ij}$ has single and double poles in $\epsilon$. In accord with
the factorization theorem of perturbative QCD, the double and some of the
single poles cancel between the real and virtual contributions. The remaining
single poles in $\epsilon$ represent collinear divergences that are
subtracted into parton densities and fragmentation functions.
In order to illustrate how the collinear singularities are handled we discuss
a few representative examples.
(a) $c + g\rightarrow\gamma + c + X$
This is the QCD Compton process plus higher order corrections. We label
the momenta by
\begin{equation}
c(p_1) + g(p_2)\rightarrow \gamma(k_1) + c(k_2) + g(k_3) .\label{eq:c}
\end{equation}
In performing the phase space integration, we expect to encounter
singularities where the gluon $k_3$ becomes soft and/or parallel to
$p_1,p_2$ or $k_2$. Since we require that the observed charm quark and
$\gamma$ be in opposite hemispheres, we will not encounter any
singularity where $k_1$ and $k_2$ are collinear (see the Appendix). In the
cases where the gluon is either soft and/or parallel to $p_1$ or $p_2$,
then $z=1$. We expose the $z\rightarrow 1$ singularities by using the
expansion
\begin{equation}
\frac{1}{|1-z|^{1+2\epsilon}}=\frac{1}{-2\epsilon}\delta(1-z)+\frac{\theta(1-z)
}{(1-z)_+}+\frac{\theta(z-1)}{(z-1)_+}-2\epsilon
\left(\frac{\ln(1-z)}{1-z}\right)_+\theta(1-z)+O(\epsilon^2).\label{eq:zexp}
\end{equation}
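The distributional identity in Eq.~(\ref{eq:zexp}) can be checked numerically for the region $z<1$. The sketch below is our own illustration (not part of the original calculation): it takes an arbitrary smooth test function $f(z)=z^2$ and a small {\it negative} $\epsilon$, so that the left-hand side is an ordinary convergent integral, and compares it with the first terms of the expansion; the mismatch should be of order $\epsilon^2$.

```python
# Numerical check of the expansion of 1/(1-z)^(1+2*eps) in plus-distributions,
# integrated against a smooth test function f on [0,1] (region z < 1 only).
# Illustrative sketch; f(z) = z**2 and eps = -0.01 are arbitrary choices.
import math
from scipy.integrate import quad

eps = -0.01          # small NEGATIVE eps makes the z-integral convergent
f = lambda z: z**2

# Left-hand side: integral of f(z) * (1-z)**(-1-2*eps).  The f(1) piece is
# done analytically to avoid the endpoint singularity:
#   int_0^1 (1-z)**(-1-2*eps) dz = -1/(2*eps)   (for eps < 0)
lhs = -f(1.0) / (2.0 * eps) + quad(
    lambda z: (f(z) - f(1.0)) * (1.0 - z) ** (-1.0 - 2.0 * eps), 0.0, 1.0
)[0]

# Right-hand side: delta term, plus-distribution term, and the
# -2*eps*(ln(1-z)/(1-z))_+ term of the expansion.
delta_term = -f(1.0) / (2.0 * eps)
plus_term = quad(lambda z: (f(z) - f(1.0)) / (1.0 - z), 0.0, 1.0)[0]
log_term = -2.0 * eps * quad(
    lambda z: (f(z) - f(1.0)) * math.log(1.0 - z) / (1.0 - z), 0.0, 1.0
)[0]
rhs = delta_term + plus_term + log_term

print(lhs, rhs)      # agree up to O(eps^2)
```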
There are plus-distributions in the variable $z$, as well as the usual
ones in $w$ that arise in the single particle inclusive case
and correspond to the gluon becoming either soft or collinear to $k_2$.
Plus-distributions in $z$ and $w$ can be encountered simultaneously and
must be treated carefully in the numerical evaluation of the cross
section.
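The standard numerical treatment of such overlapping plus-distributions is a double subtraction of the integrand at $z=1$ and $w=1$. A minimal sketch (our own illustration; the test function $f(w,z)=wz$ is arbitrary and chosen so the answer is known exactly):

```python
# Evaluating int dw dz f(w,z) / (1-w)_+ / (1-z)_+ over [0,1]x[0,1] by the
# double-subtraction prescription: subtract f at w=1 and at z=1, and add
# back f(1,1), so the integrand is finite everywhere.  Sketch only.
from scipy.integrate import dblquad

f = lambda w, z: w * z   # arbitrary smooth test function

def subtracted(w, z):
    return (f(w, z) - f(1.0, z) - f(w, 1.0) + f(1.0, 1.0)) / (
        (1.0 - w) * (1.0 - z)
    )

# For f(w,z) = w*z the numerator equals (1-w)(1-z), so the exact answer is 1.
val, err = dblquad(lambda z, w: subtracted(w, z), 0.0, 1.0, 0.0, 1.0)
print(val)   # -> 1.0 (up to quadrature error)
```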
Once the phase space integrals are performed and the soft and
collinear poles are exposed, we can add the real three-body
contributions to the virtual gluon exchange terms, after which all the double
poles cancel along with some single poles. The remaining collinear poles must
be factored into the parton distribution and fragmentation
functions. We perform these subtractions in the universal or
$\overline{MS}$ scheme, described in detail in many places.
To account for all collinear configurations
allowed in the subprocess, the counter cross section or factorization
formula that must be added to our results in order to cancel the
collinear poles is
\begin{eqnarray}
\frac{1}{\hat{s}v}\frac{d\sigma^F}{dv dw dz}&=&
-\frac{\alpha_s}{2 \pi}\left[\frac{1}{\hat{s}v}H_{cc}(w,M^2)
\frac{d\sigma^{cg\rightarrow\gamma
c}}{dv}(w\hat{s},v,\epsilon)\delta(1-z) \right. \nonumber \\
&+&\left. \frac{1}{\hat{s}(1-v w)}H_{gg}\left( \frac{1-v}{1-v w},M^2\right)
\frac{d\sigma^{cg\rightarrow\gamma c}}{dv}(w\hat{s},v
w,\epsilon)\delta(1-z)\right. \nonumber \\
&+&
\left.
\frac{1}{\hat{s}v}\tilde{H}_{cc}(z,M''^2)\frac{d\sigma^{cg\rightarrow\gamma c}}
{dv}(\hat{s},v,\epsilon)\theta(1-z)\delta(1-w) \right] .\label{eq:faccg}
\end{eqnarray}
\begin{equation}
H_{ij}(z,Q^2)=-\frac{1}{\hat{\epsilon}}P_{ij}(z)
\left[\frac{\mu^2}{Q^2}\right]^\epsilon+f_{ij}(z) ,\label{eq:hfunc}
\end{equation}
and
\begin{equation}
\tilde{H}_{ij}(z,Q^2)=-\frac{1}{\hat{\epsilon}}P_{ij}(z)
\left[\frac{\mu^2}{Q^2}\right]^\epsilon+d_{ij}(z) .\label{eq:hfunc2}
\end{equation}
Functions $P_{ij}(z)$ are the one-loop splitting
functions [18], $f_{ij}(z)=0$ and $d_{ij}(z)=0$ in the
$\overline{MS}$ factorization
scheme, and $\mu$ is the renormalization scale.
In the $\overline{MS}$ scheme,
$1/\hat{\epsilon}\equiv1/\epsilon-\gamma_E+\ln4\pi$.
In Eq.~(\ref{eq:faccg}), we distinguish
the factorization scale $M$ and the quark to quark plus gluon fragmentation
scale $M''$. The last term indicates that we factor the collinear
singularity that arises when the observed charm quark $k_2$ becomes parallel
to the gluon, $k_3$, into a fragmentation function at scale $M''^2$, for the
production of a charm quark. Note that this singularity occurs in the
region $z\leq 1$, since the photon must balance the momentum of the
charm-gluon system.
We are free to convolute our cross section with a fragmentation function
that describes the formation of specific charm decay
products (e.g., D or D$^*$ mesons), but we choose not to do so in this
paper.
(b) $g + g\rightarrow \gamma + c + \bar{c}$
In the gluon-gluon fusion process, $g g\rightarrow \gamma c \bar{c}$,
the photon may become collinear to the unobserved final-state quark,
a situation not encountered in the $g c$ process discussed above. This
singularity occurs at $z=z_1$ where $z_1=1/(1-v+v w)$, and, as discussed
in the Appendix, we use an expansion similar to that in
Eq.~(\ref{eq:zexp}) to expose
the singularity. Note that this singularity occurs in the region $z\geq 1$,
and that $z$ is exactly the reciprocal of the usual fragmentation
variable for a parton to fragment into a particle with a fraction
of its momentum, $1/z$. The factorization formula for this process is
\begin{eqnarray}
\frac{1}{\hat{s}v}\frac{d\sigma^F}{dv dw dz}&=&
-\frac{\alpha_s}{2 \pi}\left[\frac{1}{\hat{s}v}H_{cg}(w,M^2)
\frac{d\sigma^{cg\rightarrow\gamma
c}}{dv}(w\hat{s},v,\epsilon)\delta(1-z)\right. \nonumber \\
&+&\left. \frac{1}{\hat{s}(1-v w)}H_{cg}\left( \frac{1-v}{1-v w},M^2\right)
\frac{d\sigma^{gc\rightarrow\gamma c}}{dv}(w\hat{s},v
w,\epsilon)\delta(1-z) \right. \nonumber \\
&+&\left. \frac{1}{\hat{s}(1-v+v w)}\tilde{H}_{\gamma
\bar{c}}(1-v+v w,M'^2)\frac{d\sigma^
{g g\rightarrow c \bar{c}}}{dv}(\hat{s},\frac{v w}{1-v+v
w},\epsilon)\delta(z_1-z) \right].\label{eq:facgg}
\end{eqnarray}
In this equation, we distinguish
the factorization scale $M$ and the quark to photon fragmentation
scale $M'$.
(c) $q + \bar{q}\rightarrow \gamma + c + \bar{c}$
The process $q \bar{q}\rightarrow \gamma c \bar{c}$, as well as that
of Eq.~(\ref{eq:16}),
has a final-state collinear singularity when a gluon splits into a
collinear $c\bar{c}$ pair, and, in addition, a singularity when the photon is
produced from fragmentation of a final-state quark.
The factorization formula for this case is
\begin{eqnarray}
\frac{1}{\hat{s}v}\frac{d\sigma^F}{dv dw dz}&=&
-\frac{\alpha_s}{2
\pi}\left[\frac{1}{\hat{s}v}\tilde{H}_{cg}(z,M''^2)\frac{d\sigma^
{q \bar{q}\rightarrow \gamma
g}}{dv}(\hat{s},v,\epsilon)\theta(1-z)\right. \nonumber \\
&+&\left. \frac{1}{\hat{s}(1-v+v w)}\tilde{H}_{\gamma \bar{c}}(1-v+v
w,M'^2)\frac{d\sigma^
{q \bar{q}\rightarrow c \bar{c}}}{dv}(\hat{s},\frac{v w}{1-v+v
w},\epsilon)\right. \nonumber \\
& &\times\left. \delta(z_1-z)\right]. \label{eq:facqqb}
\end{eqnarray}
\subsection{Physical cross section}
Once all singularities are dealt with, we calculate the physical
cross section by convoluting the hard partonic cross section with parton
distribution functions. In terms of the variables we are using, the
cross section at next-to-leading order is
\begin{eqnarray}
\frac{d\sigma}{dp_T^{\gamma}dy^{\gamma}dz}&=&2\pi p_T^{\gamma}\frac{1}{\pi s}
\sum_{i,j}\int^1_{VW}\frac{dv}{1-v}\int^1_{V
W/v}\frac{dw}{w}f_i^A(x_1,M^2)f_j^B(x_2,M^2) \nonumber \\
& &\left[
\frac{1}{v}\frac{d\hat{\sigma}^{ij}}{dv}\delta(1-z)\delta(1-w)+
\frac{\alpha_s(\mu^2)}{2\pi}K_{ij}(\hat{s},v,w,z,\mu^2,M^2,M'^2,M''^2)\right].
\label{eq:nlosigma}
\end{eqnarray}
The first term within the square brackets is the leading order part,
and
\begin{displaymath}
K_{ij}(\hat{s},v,w,z,\mu^2,M^2,M'^2,M''^2)
\end{displaymath}
is the next-to-leading order
correction term; $K_{ij}$ may include virtual gluon exchange contributions.
Taking the $c g$ subprocess as an example, we outline
how we obtain the function $K_{ij}(\hat{s},v,w,z,\mu^2,M^2,M'^2,M''^2)$.
The virtual gluon exchange contributions are represented by
\begin{displaymath}
\frac{d\sigma^V_{cg}}{dvdwdz}\left( \hat{s},v,\mu^2,
\frac{1}{\epsilon^2},\frac{1}{\epsilon} \right).
\end{displaymath}
They are proportional to $\delta(1-w)$ and $\delta(1-z)$. The real three-body
contributions are denoted
\begin{displaymath}
\frac{d\sigma^R_{cg}}{dv dw dz}\left( \hat{s},v,w,z,\frac{1}{\epsilon^2},
\frac{1}{\epsilon}\right) .
\end{displaymath}
Combining the three-body final-state contribution and the virtual gluon
exchange
contribution and adding to these the subtraction term in
Eq.~(\ref{eq:faccg}), we derive
a finite subprocess cross section:
\begin{eqnarray}
K_{cg}(\hat{s},v,w,z,\mu^2,M^2,M''^2)&=&\frac{d\sigma^V_{cg}}{dvdwdz}
\left( \hat{s},v,\mu^2,\frac{1}{\epsilon^2},\frac{1}{\epsilon}\right)+
\nonumber \\
& &\frac{d\sigma^R_{cg}}{dv dw dz}\left( \hat{s},v,w,z,\frac{1}{\epsilon^2},
\frac{1}{\epsilon}\right)+ \nonumber \\
& &\frac{d\sigma^F_{cg}}{dvdwdz}\left( \hat{s},v,w,z,
\frac{1}{\epsilon},M^2,M''^2\right).\label{eq:f}
\end{eqnarray}
At this stage all single and double poles cancel, and we are left with a
finite cross section dependent on the factorization scale $M$ and
fragmentation scale $M''$. Because of the additional variable $z$, the
function $K_{cg}$ is quite lengthy when compared
to that for inclusive single photon production.\cite{gorvogel}
In schematic notation, where only the
$z$-distributions are made explicit, we can write the hard-scattering cross
section as
\begin{eqnarray}
K_{cg}(\hat{s},v,w,z,\mu^2,M^2,M''^2)&=&c_1(v,w)\delta(1-z)+c_2(v,w)
\frac{\theta(1-z)}{(1-z)_+}\nonumber \\
&+&c_3(v,w)\frac{\theta(z-1)}{(z-1)_+}+c_4(v,w)
\left(\frac{\ln(1-z)}{1-z}\right)_+\nonumber \\
&+&c_5(v,w,z).\label{eq:g}
\end{eqnarray}
The functions $c_i(v,w)$ contain, in general, distributions in
$(1-w)$, and they
can be expressed by
\begin{eqnarray}
c_i(v,w)&=&c_i^1(v)\delta(1-w)+c_i^2(v)\frac{1}{(1-w)_+}+c_i^3(v)\left(
\frac{\ln(1-w)}{1-w}\right)_+\nonumber \\
&+&c_i^4(v,w) .\label{h}
\end{eqnarray}
Similar expressions can be written for the other subprocesses.
These will generally involve the fragmentation scale on the photon leg,
$M'$, and additional distributions in $(z_1-z)$ and $(z-z_1)$. These are
defined as normal plus-distributions, but on the intervals $[0,z_1]$ and
$[z_1,z_{max}]$, respectively. In practice, we must integrate the
distributions between limits other than these. For example, if the
limits in the first case are $[z_a,z_1]$, we must make the
replacement
\begin{equation}
\frac{1}{(z_1-z)_+}=\frac{1}{(z_1-z)_{z_a}}+\delta(z_1-z)\ln(z_1-z_a) ,
\label{i}
\end{equation}
where the new distribution is defined by
\begin{equation}
\int^{z_1}_{z_a}dz\frac{f(z)}{(z_1-z)_{z_a}}=
\int^{z_1}_{z_a}dz\frac{f(z)-f(z_1)}{z_1-z}.\label{j}
\end{equation}
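The subtraction in Eq.~(\ref{j}) renders the integrand regular at $z=z_1$, so the distribution can be evaluated by ordinary quadrature. A brief sketch (our own illustration; $f(z)=z^2$, $z_a=0.2$, $z_1=1$ are arbitrary choices for which the subtracted integral can also be done by hand):

```python
# Action of the distribution 1/(z1 - z)_{z_a} of Eq. (j) on a test function:
#   int_{z_a}^{z1} (f(z) - f(z1))/(z1 - z) dz,
# finite because the numerator vanishes linearly at z = z1.  Sketch only.
from scipy.integrate import quad

z_a, z1 = 0.2, 1.0
f = lambda z: z**2

val, err = quad(lambda z: (f(z) - f(z1)) / (z1 - z), z_a, z1)

# For f(z) = z**2 the integrand reduces to -(1 + z), so analytically:
exact = -((z1 + z1**2 / 2.0) - (z_a + z_a**2 / 2.0))
print(val, exact)   # both -1.28 (up to quadrature error)
```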
By expanding our integrated
matrix elements as plus-distributions in $z$, we are able to expose the
singularities that occur at $z=1$ and $z=z_1$. This procedure ensures that
these integrable singularities can be treated numerically. However, it also
means that our analytic distributions in $z$ are singular at $z=1$
and $z=z_1$. For comparison with experiment, we provide predictions for the
$z$ dependence in the form of histograms with finite bin-widths $\Delta z$,
reminiscent of experimental resolution. As in Ref.~\cite{aur2}, we define
\begin{equation}
\frac{d\sigma}{dp_T^{\gamma}dy^{\gamma}dz}=\frac{1}{\Delta z}
\int^{z+\frac{\Delta
z}{2}}_{z-\frac{\Delta z}{2}}
\frac{d\sigma}{dp_T^{\gamma}dy^{\gamma}dz'}dz' .\label{k}
\end{equation}
For distributions in $p_T^{\gamma}$, we integrate over a specified range
of $z$,
\begin{equation}
\frac{d\sigma}{dp_T^{\gamma}dy^{\gamma}}=
\int^{z_b}_{z_a}
\frac{d\sigma}{dp_T^{\gamma}dy^{\gamma}dz}dz .\label{l}
\end{equation}
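Equations (\ref{k}) and (\ref{l}) amount to replacing the singular analytic $z$-distribution by finite-width bin averages. The toy sketch below is entirely our own construction: it bin-averages a distribution consisting of a $\delta(1-z)$ piece plus a smooth piece; the $\delta$-function simply deposits its coefficient, divided by $\Delta z$, into the bin containing $z=1$.

```python
# Toy bin-averaging in the spirit of Eq. (k): a z-distribution made of
# A*delta(1-z) plus a smooth part, turned into a histogram of width-dz bins.
# All numbers are arbitrary illustrations, not results of the paper.
import numpy as np
from scipy.integrate import quad

A = 2.0                         # coefficient of the delta(1-z) term
smooth = lambda z: 3.0 * z**2   # arbitrary smooth part

edges = np.linspace(0.1, 1.9, 10)   # 9 bins of width dz = 0.2 on [0.1, 1.9]
dz = np.diff(edges)

# bin averages of the smooth part
hist = np.array(
    [quad(smooth, lo, hi)[0] for lo, hi in zip(edges[:-1], edges[1:])]
) / dz
i = np.searchsorted(edges, 1.0) - 1   # index of the bin containing z = 1
hist[i] += A / dz[i]                  # delta-function contribution

# Consistency: sum(hist * dz) reproduces the total integral over [0.1, 1.9]
total = hist @ dz
print(total)
```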
This completes our discussion of the calculation. Further details can be
found in the Appendix.
\section{Numerical Results and Discussion}
In this section we present and discuss explicit
evaluations of the correlated production cross section
of charm plus a prompt photon.
We provide results at $\bar p p$ center-of-mass energy
$\sqrt{s} = $ 1.8 TeV appropriate for the CDF and D0
experimental investigations underway at Fermilab.
The cross sections we evaluate are those derived in the text:
Eqs.~(\ref{eq:losigma}), (\ref{eq:fragsigma}) and (\ref{eq:nlosigma}). For
the electromagnetic coupling strength we use $\alpha_{em} = 1/137$,
and we employ a two-loop expression for
$\alpha_s (\mu^2)$ with quark threshold effects taken into account.
We choose identical values for the renormalization, factorization, and
fragmentation scales, $\mu=M=M'=M''$. In the results presented below,
we vary $\mu$ to examine the sensitivity of the cross section
to its choice.
We choose $\Lambda^{(4)}_{QCD}$ according to the parton distribution set
we use; $\Lambda^{(4)}_{QCD}=0.200$ for the GRV parton
distributions.\cite{parden1} The sums run over 4 flavors of quarks
$(u, d, c, s)$, all assumed massless. We do not
include a $b$ quark contribution in our calculation.
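For reference, a standard two-loop running coupling with fixed $n_f=4$ and $\Lambda^{(4)}_{QCD}=0.2$~GeV can be sketched as follows. This is the common approximate closed form; the calculation in the text additionally handles quark thresholds, which this sketch omits.

```python
# Two-loop running coupling alpha_s(mu^2) for fixed nf = 4 and
# Lambda_QCD^(4) = 0.2 GeV, in the standard approximate closed form
#   alpha_s = (4 pi)/(b0 L) * [1 - (b1/b0^2) * ln(L)/L],  L = ln(mu^2/Lambda^2).
# Sketch only: no quark-threshold matching is performed here.
import math

LAMBDA4 = 0.2   # GeV

def alpha_s(mu, nf=4, lam=LAMBDA4):
    b0 = 11.0 - 2.0 * nf / 3.0
    b1 = 102.0 - 38.0 * nf / 3.0
    L = math.log(mu**2 / lam**2)
    return (4.0 * math.pi) / (b0 * L) * (1.0 - (b1 / b0**2) * math.log(L) / L)

print(alpha_s(15.0), alpha_s(60.0))   # decreases with the scale mu
```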
Most of the calculations reported here are done with the
GRV parton densities\cite{parden1}. We observe some differences
when we use instead the CTEQ3M densities\cite{parden2}. The magnitude
and Bjorken $x$ dependence of the charm quark density in these two sets
are similar, as shown in Fig.~2, but differ at large $x$,
leading to a $30\%$ difference in the cross section at
$p_T^{\gamma}=60$~GeV. In these densities, the charm
quark probability is generated through perturbative evolution, and there
is no non-perturbative intrinsic charm\cite{STANB} component. Neither
density may be correct since there
is little direct experimental information to constrain this
density\cite{ELBW}.
A goal of our analysis is to ascertain the extent to which the $gc$ initial
state is expected to dominate the cross section for
$p +\bar{p}\rightarrow \gamma + c + X$, and, thus, the extent to which data
from this reaction may serve to measure the charm quark density.
The quark-to-photon fragmentation function is expressed as
\begin{eqnarray}
z\, D_{q \rightarrow \gamma} (z,\mu^2) &=&
\frac{\alpha _{em}}{2\pi} \left[
e_{q}^{2}\
\frac{2.21-1.28z+1.29z^{2}}{1-1.63\,\ln\left(1-z\right)}\,
z^{0.049} +0.002 \left( 1-z \right) ^{2} z^{-1.54}
\right] \nonumber \\
&&\times \ln \left( \mu^{2} / \mu^{2}_0 \right).
\label{eq:fragfunc}
\end{eqnarray}
The gluon-to-photon fragmentation function is
\begin{equation}
z\, D_{g \rightarrow \gamma} (z,\mu^2)
= \frac{\alpha _{em}}{2\pi}\,
0.0243 \left( 1-z \right)
z^{-0.97}\, \ln \left( \mu^{2} / \mu^{2}_0 \right).
\label{vvv}
\end{equation}
These expressions for $D_{q\rightarrow \gamma}$ and
$D_{g\rightarrow \gamma}$, taken from Ref.~\cite{frag1}, are used as
a guideline for our estimates. The physical significance of scale
$\mu_0$ is that the fragmentation function vanishes for energies less
than $\mu_0$. For the $u, d, s$, and $c$ quarks, we set
$\mu_0 = \Lambda^{(4)}_{QCD}$, as in Ref.~\cite{frag1}.
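The parametrizations in Eqs.~(\ref{eq:fragfunc}) and (\ref{vvv}) are simple enough to transcribe directly. A sketch (our own transcription; the functions return $z\,D(z,\mu^2)$, with the squared quark charge $e_q^2$ and the scale $\mu$ as inputs):

```python
# Leading-order photon fragmentation functions of Eqs. (fragfunc) and (vvv),
# transcribed directly; they return z*D(z, mu^2).  mu0 is the scale below
# which the fragmentation function vanishes (mu0 = Lambda_QCD^(4) here).
import math

ALPHA_EM = 1.0 / 137.0
MU0 = 0.2   # GeV, = Lambda_QCD^(4) for u, d, s, c quarks

def zD_q_to_gamma(z, mu, eq2):
    """z * D_{q->gamma}(z, mu^2) for a quark of squared charge eq2."""
    pref = ALPHA_EM / (2.0 * math.pi)
    body = (eq2 * (2.21 - 1.28 * z + 1.29 * z**2)
            / (1.0 - 1.63 * math.log(1.0 - z)) * z**0.049
            + 0.002 * (1.0 - z) ** 2 * z ** (-1.54))
    return pref * body * math.log(mu**2 / MU0**2)

def zD_g_to_gamma(z, mu):
    """z * D_{g->gamma}(z, mu^2)."""
    pref = ALPHA_EM / (2.0 * math.pi)
    return pref * 0.0243 * (1.0 - z) * z ** (-0.97) * math.log(mu**2 / MU0**2)

print(zD_q_to_gamma(0.5, 15.0, 4.0 / 9.0), zD_g_to_gamma(0.5, 15.0))
```

Note that both functions vanish at $\mu=\mu_0$ through the overall logarithm, reflecting the physical significance of $\mu_0$ described above.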
We remark that we use simple leading order fragmentation
functions in our calculation, in contrast to the fact that we have done
a next-to-leading order $\overline{\mbox{MS}}$ calculation. It would
be more consistent and, therefore, preferable to use
$\overline{\mbox{MS}}$ fragmentation functions evolved in
next-to-leading order. Our choice of leading-order fragmentation
functions is motivated by our desire to work with
analytic expressions. In published analyses of next-to-leading order
fragmentation functions,\cite{frag2,frag3} the general formalism is
presented but the fragmentation functions themselves must be obtained
through numerical evolution codes. Our
primary purpose in this paper is to provide a theoretical framework
for the analysis of the correlated production of charm and prompt photon,
not necessarily to present the most up-to-date
numerical predictions. Thus, we believe our leading-order fragmentation
functions are adequate.
In several figures to follow, we show the predicted behavior of the
photon yield as a function of $p^\gamma_{T}$ and $z$,
as well as the breakdown of the total yield into contributions
from the leading order and the various next-to-leading order pieces.
The ratio $z$ is defined in Eq.~(\ref{eq:zdef}).
We choose to display cross sections as a function of
the ratio $z$, for fixed values of $p^\gamma_{T}$, or as a function
of $p^\gamma_{T}$. We choose the
renormalization/fragmentation scale
$\mu = p^\gamma_{T}$. Since both the photon and final charm particle
carry large transverse momentum, we could perhaps equally well choose
$\mu = p^c_{T}$ or some combination of the two. In selecting $p^\gamma_{T}$,
we focus upon the photon as the ``trigger" particle whose transverse momentum
is well determined. We display the $\mu$ dependence of our results below.
Throughout this paper, for clarity and simplicity of the discussion,
we refer consistently to charm production, e.g.,
$p +\bar{p}\rightarrow \gamma + c + X$. However, the numerical values of
the cross sections shown in the figures are those for the sum of
charm and anticharm production in $p \bar{p}$ scattering. In Fig. 3, we
present the photon yield
as a function of the ratio z for two choices of $p^\gamma_{T}$.
The same results are displayed in Fig. 4 as a function of
$p^\gamma_{T}$ for $z$ integrated over the interval 0.2 to 2.0.
We restrict $z > 0.2$ as otherwise the transverse momentum of the
charm quark could become unacceptably small.
In Fig. 3(a), the net lowest order contribution is shown at
$p^\gamma_{T}$ = 15 GeV. The lowest
order contribution is made up of the lowest order direct term, $c g \rightarrow
\gamma c$, and the fragmentation terms discussed in Sec. II.A. The direct
term provides a $\delta$-function at $z=1$ since the photon and charm quark
carry equal but opposite transverse momenta at this order. The parton to
photon fragmentation contributions populate the region $z > 1$. In the
collinear fragmentation, the photon's transverse momentum is opposite
to that of the charm quark but its magnitude is less. One of the
striking features of Fig.~3(a) is that the net fragmentation contribution
to the cross section is quite small compared to the case of inclusive
photon production. At Tevatron energies, fragmentation accounts for about
$50\%$ of the inclusive yield at this value of $p_T$ \cite{field,gorvogel}.
(Note that we have not yet imposed any isolation
cuts on the cross section.) One reason for the small fragmentation contribution
is that fragmentation from the $cg$ initiated process is strongly
suppressed due to our restriction that the charm quark and photon be in
opposite hemispheres ($z \geq 0$). Thus only fragmentation from the gluon
leg is included, and the $g\rightarrow \gamma$ fragmentation function is in
general smaller than that for $q\rightarrow \gamma$.
In Figs.~3(b) and 3(c), we show the $z$ distribution after the
next-to-leading order contributions are included. The solid lines show the full result
in which both the lowest order and all next-to-leading order terms are
incorporated. Comparing the solid curve in Fig. 3(b) with that in Fig. 3(a),
we note that the $z$ distribution is substantially altered once the
next-to-leading order terms are included. In particular, the peak at $z$ = 1
is reduced in magnitude by about a factor of 2, and the $z$ distribution gains
significant breadth below and above $z$ = 1. The reduction in the magnitude of
the peak at $z = 1$ is attributed to the effect of the $O(\alpha ^2_s)$
collinear contributions on the initial parton legs. These collinear terms
provide the same event structure as the lowest order direct subprocess, viz.,
a final-state photon and charm quark with equal but opposite transverse
momenta, but their contribution is negative due to $\ln(1-z)$ terms
from the phase space and large logarithms of
$(1-z_{min})$ and $(z_{max}-1)$ from the $1/(1-z)_+$ and
$(z-1)_+$ distributions; $z_{min}$ and $z_{max}$ are the lower and
upper edges of the bins around $z=1$. On the other hand, away from collinear
configurations, the $O(\alpha_{em}\alpha ^2_s)$
subprocesses, listed in Eq.~(\ref{eq:1}), generate three body final states in
which three final partons share the transverse momentum balance. The
non-collinear contributions therefore populate a broad interval
in $z$.
In addition to the complete result through next-to-leading order, the solid
line in Figs. 3(b) and 3(c), we display also contributions from three of the
$O(\alpha^2_s)$ terms. The sum of the contributions from the other four
$O(\alpha^2_s)$ terms is negligible by comparison at $p^\gamma_{T}$ = 15 GeV.
The individual contributions show the important role that
the $O(\alpha^2_s)$ terms play at values of $z$ below and above 1.
Contrasting Figs. 3(b) and 3(c), we see that the peak near $z$ = 1 is predicted
to sharpen as $p^\gamma_{T}$ is increased, reflecting a diminishing importance
of
the $O(\alpha^2_s)$ terms at larger transverse momentum.
In Fig. 4, we show the cross section as a function of the transverse momentum
of the photon, $p^\gamma_{T}$. To obtain these results, we integrate over
the interval 0.2 $< z <$ 2.0. These results show that the $cg$ initial state
dominates the cross section until $p^\gamma_{T}$ approaches 100 GeV. It
accounts for $60\%,\ 55\%$, and $50\%$ of the total at $p^\gamma_{T}$ = 15, 45, and
60 GeV, respectively. The $gg$
contribution is important at small values of $p^\gamma_{T}$, but it falls
off more steeply with $p^\gamma_{T}$ than the $cg$ contribution. The
contribution from the valence subprocess,
$q \bar{q} \rightarrow c \bar{c} \gamma$, is negligible at small
$p^\gamma_{T}$,
but it overtakes the contribution of the $cg$ subprocess at sufficiently large
$p^\gamma_{T}$. Owing to the fact that the valence quarks carry significantly
harder fractional momenta than the gluons and charm quarks, a major role for the
valence subprocess is expected at large enough $p^\gamma_{T}$. However, the
numerical results indicate that the hard-scattering matrix element
overcomes this effect at modest values of $p^\gamma_{T}$, resulting in
dominance of the $cg$ initial state. Comparison of Fig.~4 with Figs.~3(b)
and 3(c) shows significant $z$ variation in the fraction of the total cross
section accounted for by various subprocesses.
Dependence on the renormalization/factorization scale $\mu$ is displayed in
Figs. 5 and 6. As $\mu$ is increased, $\alpha_s$ decreases, resulting in a
reduction of the hard-scattering cross sections. The parton densities also
steepen as $\mu$ is increased. Both effects contribute to the typical
decrease of the cross section at fixed large $p^\gamma_{T}$ as $\mu$ is
increased, as shown in Fig. 5. The $\mu$ dependence of the $z$
distribution presented in Fig.~6 is considerably more significant. The
distribution becomes more sharply peaked at $z$ = 1 as $\mu$ is increased.
As shown in Fig. 3(a), the leading order direct contribution produces
a sharp peak at $z$ = 1, whereas the next-to-leading order contributions
broaden the distribution, as shown in Figs. 3(b) and 3(c). The decrease
of $\alpha_s$ as $\mu$ increases diminishes the relative importance of the
next-to-leading order contributions.
The functional form of $D_{q\rightarrow \gamma} \left( z, \mu^2\right)$,
Eq.~(\ref{eq:fragfunc}), shows that the fragmentation contribution increases
logarithmically as $\mu$ is increased. If the fragmentation contributions
played a major role in the final answer, one would expect different $\mu$
dependence from that shown in Fig.~6.
In Fig. 7, we present the ``$K$-factor" as a function of $p^\gamma_{T}$.
Here $K$ is defined as the ratio of the complete answer through next-to-leading
order to the full leading order answer (including the leading order
fragmentation terms). Our results show that for $z >$ 0.2, the inclusive
$K$-factor is about 2 for $p^\gamma_{T} >$ 15 GeV. In the inclusive case,
no isolation requirement is imposed on the photon. To make contact with
experiment, an isolation restriction is necessary. Because fragmentation
contributions do not play a significant role in the associated production
of photon plus charm for $z >$ 0.2, we do not expect a great change of the
$K$-factor after isolation is imposed. To estimate the impact of isolation, we
use a combination of analytic and Monte Carlo methods.\cite{moncarlo} We
choose an isolation cone size $R =$ 0.7 and energy resolution parameter
$\epsilon=2$~GeV$/p_T^{\gamma}$, as is done in the CDF experiment
\cite{cdf}. We find that the $K$-factor is reduced to about 1.5, in
respectable agreement with experimental indications.\cite{cdf}
\section{Conclusions}
In summary, we have computed the contributions through $O(\alpha^2_s)$ in
perturbative QCD for inclusive associated production
of a prompt photon and a charm quark at large values of transverse momentum in
high energy hadron-hadron collisions. The next-to-leading order terms alter
the expected distribution in the ratio of the magnitude of the transverse
momenta of the charm quark and prompt photon in an interesting and measurable
fashion. The overall cross section increases by about a factor of two after
the next-to-leading terms are included. Dominance of the perturbative
subprocess initiated by $gc$ scattering is preserved after the next-to-leading
terms are included, justifying use of data from
$p +\bar{p}\rightarrow \gamma + c + X$ in attempts to measure the charm
quark momentum density in the nucleon. However, other subprocesses are shown
to account for about
$50\%$ of the cross section at currently accessible values of
$p^\gamma_{T}$, and the ``background" associated with some of these
subprocesses, which are not initiated by charm quark scattering, such as
in Eqs.~(\ref{eq:12}) and (\ref{eq:13}), must be taken into account in
analyses done to extract the charm density.
\section{Acknowledgment}
We thank Dr. Bob Bailey and Dr. Stephen Mrenna for valuable discussions.
This work was supported
by the U.S. Department of Energy, Division of High Energy Physics,
Contract W-31-109-ENG-38.
\pagebreak
\section{Introduction}
Active galactic nuclei (AGN) consist of an accreting supermassive
black hole (SMBH) in the center of a galaxy and sometimes present
powerful radio emitting jets (Begelman et al. 1984). Radio-loud AGNs
have continuum emission along the whole electromagnetic spectrum, from
radio to gamma rays (e.g. Boettcher 2007). This radiation basically
comes from the accretion disc and bipolar relativistic jets originated
close to the central SMBH. Radiation of accretion origin can be
produced by the thermal plasma of either an optically-thick
geometrically-thin disc under efficient cooling (Shakura \& Sunyaev
1973), or an optically-thin geometrically-thick corona (e.g. Liang \&
Thompson 1979). The emission from the jets is non-thermal and
generated by a population of relativistic particles likely accelerated
in strong shocks, although other mechanisms are also possible (Rieger
et al. 2007). This non-thermal emission is thought to be produced
through synchrotron and inverse Compton (IC) processes
(e.g. Ghisellini et al. 1985), although hadronic models have been also
considered to explain gamma-ray detections (e.g. Mannheim 1993,
M\"ucke \& Protheroe 2001, Aharonian 2002).
In addition to continuum radiation, AGNs also present optical and
ultra-violet lines. Some of these lines are broad, emitted by gas
moving with velocities $v_{\rm g}>1000$~km~s$^{-1}$ and located in a
small region close to the SMBH, the so-called broad line
region (BLR). The structure of this region is not well known but some
models assume that the material in the BLR could be formed by dense
clouds confined by a hot ($T \sim 10^8$~K) external medium (Krolik et
al. 1981) or by magnetic fields (Rees 1987). These clouds would be
ionized by photons from the accretion disc producing the observed
emission lines, which are broad because of the cloud motion within the
SMBH potential well. An alternative model proposes that the broad
lines are produced in the chromosphere of evolved stars (Penston 1988)
present in the nuclear region of AGNs.
The presence of material surrounding the base of the jets in AGNs
makes jet-medium interactions likely. For instance, the interaction of
BLR clouds with a jet in AGNs was already suggested by Blandford \&
K\"onigl (1979) as a mechanism for knot formation in the radio galaxy
M87. Also, the gamma-ray production due to the interaction of a cloud
from the BLR with a proton beam, or of a massive star
with a jet, was studied in the context of AGNs by Dar \& Laor (1997)
and Bednarek \& Protheroe (1997), respectively.
In this work, we study the interaction of BLR clouds with the innermost
jet in an AGN and its observable consequences at high
energies. The approach adopted is similar to that followed in Araudo
et al. (2009) for high-mass microquasars (for a general comparison
between these sources and AGNs see Bosch-Ramon 2008),
where the interaction of stellar wind clumps of the companion star
with the microquasar jet was studied. Under magnetic fields below
equipartition with the jet kinetic energy (i.e. the jet should be
matter dominated), cloud penetration will lead to the formation of a
relativistic bow shock in the jet and a slow shock inside the
cloud. Electrons and protons can be efficiently accelerated in the bow
shock and produce non-thermal emission, in situ via synchrotron and
synchrotron self-Compton (SSC) mechanism, and in the cloud through
proton-proton ($pp$)
collisions. For magnetic fields well below equipartition, the SSC
component becomes the dominant electron cooling channel, which leads
to significant gamma-ray production. Since the bow shock downstream is
almost at rest in the laboratory reference frame (RF), this emission
will not be significantly boosted. The resulting spectrum and the
achieved luminosities in one jet-cloud interaction depend strongly on
the magnetic field, the location of the interaction region,
the cloud size, and the
jet luminosity. However, many clouds could be inside the jet
simultaneously, and then the BLR global properties, like size and
total number of clouds, would also be relevant. Depending on whether
one cloud or many of them penetrate into the jet, the lightcurve will
be flare-like or rather steady, respectively.
In order to explore the radiative outcomes of jet-cloud interactions
in AGNs, we apply our model to both Fanaroff-Riley galaxies I (FR~I)
and II (FR~II). In particular, we consider Centaurus A (Cen A) and
3C~273, the nearest FR~I
and a close and very bright flat spectrum radio quasar (with
FR~II as parent population), as illustrative cases. Although in FR~I
the BLR is not well-detected, clouds with similar characteristics to
those found in FR~II galaxies may surround the SMBH (Wang et
al. 1986, Risaliti 2009). Cen~A has been detected at high- (HE)
(Hartman et al. 1999; Abdo et al. 2010) and very high-energy (VHE) gamma
rays (Aharonian et al. 2009), whereas 3C~273 has been
detected so far only at HE gamma rays (Hartman et al. 1999;
Abdo et al. 2010). We have computed the contribution of jet-cloud
interactions to the gamma-ray emission in these sources, and estimated
the gamma-ray luminosity in a wide range of cases. We find that
gamma rays from jet-cloud interactions could be detectable by
present and future instrumentation in nearby low-luminous AGNs at HE
and VHE, and for powerful and nearby quasars only at HE, since the VHE
radiation is absorbed by the dense nuclear photon fields. In the case
of sources showing boosted gamma rays (blazars), the isotropic
radiation from jet-cloud interactions will be masked by the jet beamed
emission, which will not be the case in non-blazar sources.
The paper is organized as follows: in Sect.~\ref{jet-cloud}, the
dynamics of jet-cloud interactions is described;
in Sect.~\ref{acc}, a model for particle acceleration and emission is
presented for one interaction, whereas
in Sect.~\ref{Many-clouds} the case of many clouds interacting with the
jet is considered;
in Sects.~\ref{FRI} and \ref{FRII}, the
model is applied to FR~I and FR~II galaxies, focusing on the
sources Cen A and 3C~273; finally, in Sect.~\ref{disc}, the
results of this work are summarized and discussed. We adopt cgs units
throughout the paper.
\section{The jet-cloud interaction}\label{jet-cloud}
Under certain combinations of the jet ram pressure, and the cloud
size and density, cloud-jet penetration is expected to occur. The
details of the penetration process itself are complex; here we do
not treat them explicitly, but simply assume that penetration occurs if
certain conditions are fulfilled. For low magnetic fields,
a cloud inside the jet may represent a
hydrodynamic situation in which a supersonic flow interacts
with a body of approximately
spherical shape at rest. The cloud, as long as it has not been accelerated by
the jet ram pressure up to the jet speed ($v_{\rm j}$), produces a
perturbation in the jet medium in the form of a steady bow shock
roughly at rest in the laboratory RF, with a velocity with respect to
the jet RF approximately equal to $v_{\rm j}$. Since the cloud is not
rigid, a wave propagates also through it. Since the cloud temperature
is much lower, and the density much higher than in the jet, this wave
will be still supersonic but much slower than the bow shock. The jet
pressure exerts a force on the cloud leading to cloud
acceleration along the axis, hydrodynamical instabilities and,
eventually, cloud fragmentation. In the following, the jet-cloud
interaction is described. Further discussion, and a proper account of
the literature, can be found in Araudo et al. (2009). Sketches of the
jet-cloud interaction and the jet-BLR scenario are shown in
Fig.~\ref{blr}.
\begin{figure}
\begin{center}
\includegraphics[angle=0, width=0.35\textwidth]{fig_1.eps}
\caption{Sketch, not to scale, of an AGN at the spatial
scales of the BLR region. In the top part of the figure,
the interaction between a cloud and the jet is also shown.}
\label{blr}
\end{center}
\end{figure}
\begin{table}[]
\begin{center}
\caption{Values assumed in this work for BLR clouds and jets.}
\label{const}
\begin{tabular}{ll}
\hline Description & Value \\
\hline Cloud size & $R_{\rm c} = 10^{13}$~cm \\
Cloud density & $n_{\rm c} = 10^{10}$~cm$^{-3}$ \\
Cloud velocity & $v_{\rm c} = 10^{9}$~cm~s$^{-1}$ \\
Cloud temperature & $T_{\rm c} = 2\times10^{4}$~K \\
Jet Lorentz factor & $\Gamma = 10$ \\
Jet half-opening angle & $\phi\approx 6^{\circ}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
We adopt here clouds with typical density $n_{\rm
c}=10^{10}$~cm$^{-3}$ and size $R_{\rm c}=10^{13}$~cm (Risaliti 2009).
The velocity of the cloud is taken
$v_{\rm c}=10^9$~cm~s$^{-1}$ (Peterson 2006). The jet Lorentz
factor is fixed to $\Gamma=10$, implying $v_{\rm j}\approx c$, with a
half-opening angle $\phi\approx 6^{\circ}$, i.e. the jet radius/height
relation fixed to $R_{\rm j}=\tan(\phi)\,z=0.1\,z$. All these
parameters are summarized in Table~\ref{const} and
will not change along the paper;
from them, the jet density $n_{\rm j}$ in the
laboratory RF can be estimated:
\begin{eqnarray}
n_{\rm j} & = & \frac{L_{\rm j}}{(\Gamma-1)\,m_{\rm p}\,c^3 \sigma_{\rm j}}
\approx 8\times 10^{4}\left(\frac{L_{\rm j}}{10^{44}\,\rm{erg\, s^{-1}}}\right)
\nonumber\\
{} & \times &
\left(\frac{\Gamma-1}{9}\right)^{-1}
\left(\frac{z}{10^{16}\,{\rm cm}}\right)^{-2} \,\rm{cm^{-3}},
\end{eqnarray}
where $\sigma_{\rm j} = \pi R_{\rm j}^2$ and
$L_{\rm j}$ is the kinetic power of the matter dominated jet.
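As a quick numerical cross-check, the fiducial value of $n_{\rm j}$ can be reproduced with a short Python sketch (illustrative only; standard cgs constants assumed):

```python
import math

# Fiducial parameters (Table 1, cgs units)
L_j   = 1e44          # jet kinetic power [erg/s]
Gamma = 10            # jet Lorentz factor
z     = 1e16          # jet height [cm]
m_p   = 1.6726e-24    # proton mass [g]
c     = 2.998e10      # speed of light [cm/s]

R_j     = 0.1 * z              # jet radius for the adopted half-opening angle
sigma_j = math.pi * R_j**2     # jet cross section [cm^2]

# Jet density in the laboratory frame
n_j = L_j / ((Gamma - 1) * m_p * c**3 * sigma_j)
print(f"n_j ~ {n_j:.1e} cm^-3")   # ~8e4 cm^-3
```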
The jet ram pressure should not destroy the cloud before it
has fully entered into the jet. This means that
the time required by the cloud to penetrate into the jet,
\begin{equation}
t_{\rm c}\sim \frac{2 R_{\rm c}}{v_{\rm c}} =
2\times 10^4\left(\frac{R_{\rm c}}{10^{13}\,\rm{cm}}\right)
\left(\frac{v_{\rm c}}{10^9\,\rm{cm\, s^{-1}}}\right)^{-1}\,{\rm s}\,,
\end{equation}
should be shorter than the cloud lifetime inside the jet. To estimate
this cloud lifetime, let us
first compute the time required by the shock in the cloud to cross it
($t_{\rm cs}$). The
velocity of this shock, $v_{\rm cs}$, can be derived by equating the
jet and cloud-shock ram pressures:
$(\Gamma-1)\,n_{\rm j}\,m_{\rm p}\,c^2=n_{\rm c}\,m_{\rm p}\,v_{\rm cs}^2$,
valid as long as $v_{\rm cs}\ll c\,$. Then:
\begin{eqnarray}
v_{\rm cs} & \sim & \chi^{-1/2}\, c\sim
3\times10^8\left(\frac{n_{\rm c}}{10^{10}\,\rm{cm^{-3}}}\right)^{-1/2}
\nonumber\\
{} & \times & \left(\frac{z}{10^{16}\,{\rm
cm}}\right)^{-1}\left(\frac{L_{\rm j}}{10^{44}\,\rm{erg\,
s^{-1}}}\right)^{1/2} \, \rm{cm\,s^{-1}},
\end{eqnarray}
where $\chi = n_{\rm c}/[(\Gamma-1)\,n_{\rm j}]$ is the cloud-to-jet
density contrast. This yields a cloud shocking time:
\begin{eqnarray}
t_{\rm cs} &\sim& \frac{2R_{\rm c}}{v_{\rm cs}}\simeq
7\times10^4 \left(\frac{R_{\rm c}}{10^{13}\, \rm{cm}} \right)\,
\left(\frac{n_{\rm c}}{10^{10}\,\rm{cm^{-3}}}\right)^{1/2} \nonumber\\
{}& \times {}& \left(\frac{z}{10^{16}\,{\rm
cm}}\right)\left(\frac{L_{\rm j}}{10^{44}\,\rm{erg\,
s^{-1}}}\right)^{-1/2}\,{\rm s}\,.
\end{eqnarray}
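The quoted values of $v_{\rm cs}$ and $t_{\rm cs}$ follow from the fiducial parameters; a minimal Python sketch (cgs constants assumed) making the same estimate:

```python
import math

L_j, Gamma, z = 1e44, 10, 1e16             # erg/s, -, cm
n_c, R_c      = 1e10, 1e13                 # cloud density [cm^-3], size [cm]
m_p, c        = 1.6726e-24, 2.998e10       # g, cm/s

# Jet density at height z, then the cloud-to-jet density contrast chi
n_j = L_j / ((Gamma - 1) * m_p * c**3 * math.pi * (0.1 * z)**2)
chi = n_c / ((Gamma - 1) * n_j)

v_cs = c / math.sqrt(chi)      # cloud-shock speed, ~3e8 cm/s
t_cs = 2 * R_c / v_cs          # cloud shocking time, ~7e4 s
```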
Therefore, for a penetration time ($t_{\rm c}$) at least as short
as $\sim t_{\rm cs}$, the
cloud will remain an effective obstacle for the jet
flow. Setting $t_{\rm c}\sim t_{\rm cs}$, we obtain a minimum value
for $\chi$ and hence for $z$.
Hydrodynamical instabilities produced by the interaction with the jet
material will affect the cloud. First of all, the jet exerts a
force on the cloud through the contact discontinuity. The acceleration
applied to the cloud can be estimated from the jet ram pressure
$P_{\rm j}$, and the cloud section $\sigma_{\rm c}\sim \pi\,R_{\rm c}^2$
and mass $M_{\rm c}\sim (4/3)\,\pi R_{\rm c}^3\,n_{\rm c}\,m_{\rm p}$:
\begin{equation}
g=\frac{P_{\rm j}\,\sigma_{\rm c}}{M_{\rm c}}\sim
\frac{3}{4}\frac{c^2}{\chi\,R_{\rm c}}=\frac{3}{2}\,
\frac{v_{\rm cs}}{t_{\rm cs}}\,.
\end{equation}
Given the acceleration exerted by the jet in the cloud,
Rayleigh-Taylor (RT) instabilities will develop in the cloud, at the
jet contact discontinuity, with timescale:
\begin{equation}
t_{\rm RT} \sim \sqrt{\frac{l}{g}}=
\sqrt{\frac{4\,\chi\,l\,R_{\rm c}}{3\,c^2}}\,,
\end{equation}
where the instability length $l$ is the spatial scale of the
perturbation. For perturbations of the size of the cloud, $l\sim
R_{\rm c}$, which are those associated with significant cloud
disruption, one gets $t_{\rm RT}\sim t_{\rm cs}$.
In addition to RT instabilities, Kelvin-Helmholtz (KH) instabilities
also grow in the cloud walls in contact with shocked jet material
that surrounds the cloud. Accounting for the high relative
velocity, $v_{\rm rel}\la v_{\rm j}$, one obtains:
\begin{equation}
t_{\rm KH}\ga \sqrt{\frac{l}{g_{\rm rel}}}=\frac{\chi\,l}{c}\,,
\end{equation}
where $g_{\rm rel}\sim c^2/\chi\,l$. For $l\sim R_{\rm c}$, we obtain
$t_{\rm KH}\ga t_{\rm cs}$. In the previous estimates of $t_{\rm
RT}$ and $t_{\rm KH}$ we have not taken into account the effect of the
magnetic field (e.g. Blake 1972), since we assume that it is
dynamically negligible. We note that, given $g$, the time to
accelerate the cloud up to the shock velocity $v_{\rm cs}$ is $\sim
t_{\rm cs}$. However, the timescale to accelerate the cloud up to
$v_{\rm j}$ is $\gg t_{\rm cs}$ provided that $v_{\rm j}\gg v_{\rm
cs}$, and before, the cloud will likely fragment.
Finally, there are two additional timescales also relevant for our
study, the bow-shock formation time, $t_{\rm bs}$, and the time
required by the cloud to cross the jet, $t_{\rm j}$. The timescale
$t_{\rm bs}$ can be roughly estimated assuming that the shock
downstream has a cylindrical shape with one of the bases being the bow
shock, relativistic shock jump conditions, equal particle injection
and escape rates, and an escape velocity similar to the sound speed
$\sim c/\sqrt{3}$ (for a relativistic plasma). This yields a shock-cloud
separation distance of $Z \sim 0.3\,R_{\rm c}$, which implies:
\begin{equation}
t_{\rm bs} \sim \frac{Z}{c}=10^2
\left(\frac{R_{\rm c}}{10^{13}\, \rm{cm}} \right)\,{\rm s}\,.
\end{equation}
Since in general $t_{\rm bs}\ll t_{\rm cs}$, we can assume that the bow shock
is in the steady regime. The jet crossing time
$t_{\rm j}$ can be characterized by:
\begin{equation} t_{\rm j}\sim \frac{2 R_{\rm j}}{v_{\rm c}} =
2\times10^6 \left(\frac{z}{10^{16}\,{\rm cm}}\right)
\left(\frac{v_{\rm c}}{10^9\,\rm{cm\, s^{-1}}}\right)^{-1} \,{\rm s}\,.
\end{equation}
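The hierarchy of the remaining timescales at the fiducial height $z=10^{16}$~cm can be checked directly (an illustrative Python sketch, cgs units):

```python
R_c, v_c, z = 1e13, 1e9, 1e16    # cloud size [cm], cloud speed [cm/s], height [cm]
c = 2.998e10                     # speed of light [cm/s]

t_c  = 2 * R_c / v_c             # cloud penetration time,   ~2e4 s
t_bs = 0.3 * R_c / c             # bow-shock formation time, ~1e2 s
t_j  = 2 * (0.1 * z) / v_c       # jet crossing time,        ~2e6 s

# The bow shock forms almost immediately compared with penetration,
# and both are much shorter than the jet crossing time (cf. Fig. 2).
print(t_bs, t_c, t_j)
```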
Note that if the cloud lifetime is $<t_{\rm j}$, the number of
clouds inside the jet will be smaller
than expected just from the BLR properties.
In order to summarize the discussion of the dynamics of the jet-clump
interaction, we plot in Fig.~\ref{timescales} the $t_{\rm cs}$ (for
different $L_{\rm j}$), $t_{\rm j}$, $t_{\rm c}$, and $t_{\rm bs}$ as a
function of $z$. As shown in the figure, for some values of $z$ and
$L_j$ the cloud could be destroyed by the jet before full penetration,
i.e. $t_{\rm cs}<t_{\rm c}$. This is a constraint to determine the
height $z$ of the jet at which the cloud can penetrate into it. Note
also that, in general, $t_{\rm bs}$ is much shorter than any other
timescale.
\begin{figure}
\includegraphics[angle=270, width=0.5\textwidth]{fig_2.ps}
\caption{The
jet crossing (blue dotted line), cloud penetration (red dot-dashed line),
bow-shock formation (violet dashed line) and cloud shocking
(green solid lines) times are plotted; all of them have been calculated using
the values given in Table~\ref{const}. The time $t_{\rm cs}$ is plotted for
$L_{\rm j}=10^{44}$, $10^{46}$ and $10^{48}$~erg~s$^{-1}$.}
\label{timescales}
\end{figure}
\subsection{The interaction height}
\label{InteractionHeight}
The cloud can fully penetrate into the jet if the cloud lifetime after jet
impact is longer than the penetration time (the weaker condition that
jet lateral pressure is $<n_{\rm c}\,m_{\rm p}\,v_{\rm c}^2$ is then
automatically satisfied). This determines the minimum interaction
height, $z_{\rm int}$, to avoid cloud disruption before full
penetration. Also, this interaction cannot occur below
the jet formation region,
$z_{\rm 0}\sim 100\,R_{\rm g}\approx 1.5\times 10^{15}\,(M_{\rm
bh}/10^8\,M_{\odot})$~cm (Junor et al. 1999). For BLR-jet interaction
and cloud penetration to occur, the size of the BLR should exceed
both $z_0$ and $z_{\rm int}$.
The lifetime of the cloud depends on the fragmentation time, which is
strongly linked to, but longer than, $t_{\rm cs}$.
The value of $z_{\rm int}$ can be
estimated then setting $t_{\rm c}\la t_{\rm cs}$, since the cloud
should enter the jet before being significantly distorted by the
impact of the latter. Once shocked, the cloud can suffer lateral
expansion and conduction heating, which can speed up fragmentation due
to instabilities. In this work, we choose
$z_{\rm int}$ fixing $t_{\rm cs} = 2\,t_{\rm c}$:
\begin{eqnarray}
z_{\rm int} & \approx & 5\times10^{15}
\left(\frac{v_{\rm c}}{10^9\,\rm{cm\,s^{-1}}}\right)^{-1}
\left(\frac{n_{\rm c}}{10^{10}\,\rm{cm^{-3}}}\right)^{-1/2} \nonumber\\
{} & {\times} &
\left(\frac{L_{\rm j}}{10^{44}\,\rm{erg\, s^{-1}}}\right)^{1/2} \,\rm{cm}.
\end{eqnarray}
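Since $t_{\rm cs}\propto z$ at fixed $L_{\rm j}$, the condition $t_{\rm cs}=2\,t_{\rm c}$ can be inverted directly; the following Python sketch (cgs constants assumed) recovers the quoted $z_{\rm int}$:

```python
import math

L_j, Gamma    = 1e44, 10
n_c, R_c, v_c = 1e10, 1e13, 1e9
m_p, c        = 1.6726e-24, 2.998e10

def t_cs(z):
    """Cloud shocking time at jet height z [s]."""
    n_j = L_j / ((Gamma - 1) * m_p * c**3 * math.pi * (0.1 * z)**2)
    chi = n_c / ((Gamma - 1) * n_j)
    return 2 * R_c / (c / math.sqrt(chi))

t_c = 2 * R_c / v_c            # penetration time, ~2e4 s

# t_cs grows linearly with z, so t_cs(z_int) = 2 t_c gives z_int directly
z_ref = 1e16
z_int = z_ref * 2 * t_c / t_cs(z_ref)
print(f"z_int ~ {z_int:.1e} cm")   # ~5e15 cm
```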
We note that the available power in the bow shock is
$L_{\rm bs}\sim (\sigma_{\rm c}/\sigma_{\rm j})\,L_{\rm j}\propto z^{-2}$.
Therefore, the most luminous individual jet-cloud
interaction will take place at $z\sim z_{\rm int}$.
The BLR size can be estimated through
an empirical relation obtained from sources with a well-established BLR,
i.e. FR~II radio galaxies.
This relation is in general of the type
$R_{\rm blr} \propto L_{\rm blr}^{\alpha}$, where $L_{\rm blr}$ is the
luminosity of the BLR and $\alpha \sim 0.5 - 0.7$ (e.g. Kaspi et al. 2005,
2007; Peterson et al. 2005; Bentz et al. 2006).
In this paper we use the following relations:
\begin{equation}
\label{Rblr}
R_{\rm blr}\sim 6\times 10^{16}
\left(\frac{L_{\rm blr}}{10^{44}\,\rm{erg\,s^{-1}}}\right)^{0.7}\,\rm{cm},
\end{equation}
and
\begin{equation}
\label{Rblr_07}
R_{\rm blr}\sim 2.5\times 10^{16}
\left(\frac{L_{\rm blr}}{10^{44}\,\rm{erg\,s^{-1}}}\right)^{0.55}\,\rm{cm},
\end{equation}
from Kaspi et al. (2005, 2007).
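For later use, the two empirical relations give the following BLR sizes when $L_{\rm blr}=0.1\,L_{\rm j}$ and $L_{\rm j}=10^{44}$~erg~s$^{-1}$ (a small illustrative Python sketch):

```python
def R_blr_kaspi07(L_blr):
    """BLR size [cm] from the Kaspi et al. (2007)-type relation."""
    return 6e16 * (L_blr / 1e44) ** 0.7

def R_blr_kaspi05(L_blr):
    """BLR size [cm] from the Kaspi et al. (2005)-type relation."""
    return 2.5e16 * (L_blr / 1e44) ** 0.55

# L_blr = 0.1 L_j with L_j = 1e44 erg/s:
print(R_blr_kaspi07(1e43))   # ~1.2e16 cm
print(R_blr_kaspi05(1e43))   # ~7e15 cm
```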
In Fig.~\ref{zint-rblr} we show the relation of $z_{\rm int}$ and
$R_{\rm blr}$ with $L_{\rm j}$, assuming that $L_{\rm blr}$ is a
10\% of the disc luminosity, which is taken here equal to $L_{\rm j}$. As
seen in the figure, for reasonable parameters, the condition $z_{\rm
int}<R_{\rm blr}$ is fulfilled for a wide range of $L_{\rm j}$.
Figure~\ref{zint-rblr} also shows the relation between $z_0$ and
$M_{\rm bh}$, indicating that for $M_{\rm bh}\ga 10^9$~$M_{\odot}$
the jet may not even be (fully) formed at the BLR scales for the
lowest $L_{\rm j}$-values.
\begin{figure}
\includegraphics[angle=270, width=0.5\textwidth]{fig_3.ps}
\caption{The interaction height, $z_{\rm int}$ (red solid line), and the size
of the BLR, $R_{\rm blr}$ (green dashed line), for different values
of $L_{\rm j}$ (bottom horizontal axis). We have derived
$R_{\rm blr}(L_{\rm j})$ fixing $L_{\rm blr} = 0.1\,L_{\rm j}$ and
plotted $R_{\rm blr}$ using Eqs.~(\ref{Rblr}) and (\ref{Rblr_07}).
In the same plot, the height of the jet base, $z_0$ (blue dotted line),
is plotted as a function of $M_{\rm BH}$ (top horizontal axis).}
\label{zint-rblr}
\end{figure}
\section{Non-thermal particles and their emission}\label{acc}
In the bow and cloud shocks, particles can be accelerated through
diffusive shock acceleration (Bell 1978). However, the bow shock
should be more efficient at accelerating particles
than the shock in the cloud because
$v_{\rm bs}\gg v_{\rm cs}$. In addition, the cloud shock luminosity is smaller
than the bow-shock luminosity by $\sim 1/(2\chi^{1/2})$. For these
reasons, we focus here on the particle acceleration in the bow shock.
In this section, we briefly
describe the injection and evolution of particles, and their emission,
remarking those aspects that are specific to AGNs. The details of the
emitting processes considered here (synchrotron, IC and $pp$
interactions) can be found in Araudo et al. (2009) and references
therein.
First, one can estimate the non-thermal luminosity, $L_{\rm nt}$,
injected at $z_{\rm int}$ in the bow shock in the form of relativistic
electrons or protons:
\begin{eqnarray}
L_{\rm nt}& = &\eta_{\rm nt} L_{\rm bs} \approx 4\times10^{39}
\left(\frac{\eta_{\rm nt}}{0.1}\right)
\left(\frac{R_{\rm c}}{10^{13}\rm{cm}}\right)^2\\\nonumber
{}&\times&
\left(\frac{L_{\rm j}}{10^{44}\,\rm{erg\,s^{-1}}}\right)\,\rm{erg\,s^{-1}}.
\end{eqnarray}
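Evaluating $L_{\rm nt}$ at $z_{\rm int}\approx 5\times10^{15}$~cm for the fiducial $L_{\rm j}=10^{44}$~erg~s$^{-1}$ (an illustrative Python sketch):

```python
import math

L_j, eta_nt = 1e44, 0.1       # jet power [erg/s], non-thermal fraction
R_c         = 1e13            # cloud size [cm]
z_int       = 5e15            # interaction height from Sect. 2.1 [cm]

sigma_c = math.pi * R_c**2            # cloud section [cm^2]
sigma_j = math.pi * (0.1 * z_int)**2  # jet section at z_int [cm^2]

L_bs = (sigma_c / sigma_j) * L_j      # jet power intercepted by the bow shock
L_nt = eta_nt * L_bs                  # non-thermal luminosity, ~4e39 erg/s
```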
Then, the accelerator/emitter magnetic field in the bow-shock RF ($B$)
can be determined by setting
$U_{\rm B} = \eta_{\rm B} U_{\rm nt}$,
where $U_{\rm B}=B^2/8\pi$ and $U_{\rm nt}=L_{\rm nt}/(\sigma_{\rm c}c)$
are the magnetic and the non-thermal energy densities,
respectively. For leptonic emission and to avoid suppression of the IC channel,
high gamma-ray outputs require $\eta_{\rm B}$ well below 1. In this
context, $B$ can be parametrized as follows:
\begin{equation}
B\approx 10 \left(\frac{\eta_{\rm B}}{0.01}\right)
\left(\frac{v_{\rm c}}{10^{9}\, \rm{cm\,s^{-1}}}\right)^2
\left(\frac{n_{\rm c}}{10^{10}\, \rm{cm^{-3}}}\right)\,{\rm G}\,.
\end{equation}
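The parametrized field strength follows from the energy densities above; a minimal Python sketch (cgs units, fiducial parameters assumed) reproduces $B\sim 10$~G:

```python
import math

eta_B, eta_nt = 0.01, 0.1
L_j, R_c      = 1e44, 1e13
z_int, c      = 5e15, 2.998e10

sigma_c = math.pi * R_c**2
L_nt    = eta_nt * (sigma_c / (math.pi * (0.1 * z_int)**2)) * L_j

U_nt = L_nt / (sigma_c * c)           # non-thermal energy density [erg/cm^3]
U_B  = eta_B * U_nt                   # magnetic energy density [erg/cm^3]
B    = math.sqrt(8 * math.pi * U_B)   # ~10 G
```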
Regarding the acceleration mechanism, since the bow shock is
relativistic and the treatment of such shocks is complex
(see Achterberg et al. 2001), we adopt the following prescription
for the acceleration rate:
\begin{equation}
\dot{E}_{\rm acc}=0.1\,q\,B\,c\,,
\end{equation}
similar to that in the relativistic
termination shock of the Crab pulsar wind (de Jager et al. 1996).
Particles suffer different losses that balance the energy gain from
acceleration. The electron loss mechanisms are escape downstream,
relativistic Bremsstrahlung, synchrotron emission, and external
Compton (EC) and SSC. We note that $B$, $L_{\rm nt}$ and the
accelerator/emitter size at $z_{\rm int}$ are constant for different
$L_{\rm j}$ and fixed $\eta_{\rm B}$ and $\eta_{\rm nt}$, and only
$L_{\rm blr}$ and $L_{\rm d}$ are expected to change with $L_{\rm j}$.
Therefore, as
long as the external photon fields are negligible, the maximum
electron energy at $z_{\rm int}$ does not change for different jet
powers.
In Fig.~\ref{losses}, the leptonic cooling timescales are plotted together
with the escape time and the acceleration time for a bow
shock located at $z_{\rm int}$. A value for $\eta_{\rm B}$ equal to
0.01 has been adopted. The SSC cooling timescale is plotted
for the steady state. The escape time downstream of the
relativistic bow shock is taken as:
\begin{equation}
t_{\rm esc}\sim \frac{3\,R_{\rm c}}{c} = 10^3\,
\left(\frac{R_{\rm c}}{10^{13}\, \rm{cm}}\right)\,{\rm s}\,.
\end{equation}
Synchrotron and EC/SSC are the dominant processes in the high-energy
part of the electron population, relativistic bremsstrahlung is
negligible for any energy, and electron escape is relevant in the
low-energy part. This yields a break in the electron energy
distribution at the energy at which the synchrotron/IC time and the
escape time are equal. The Thomson to KN transition is clearly seen in the
EC cooling curves, but is much smoother in the SSC case. The
maximum electron energies are around several TeV. Given the
similar cooling timescale for electrons via relativistic
bremsstrahlung and protons through $pp$ collisions ($t_{\rm
brems/pp}\sim 10^{15}/n$~s, where $n$ is the target density), protons
will not cool efficiently in the bow shock. Photomeson production can
also be discarded as a relevant proton cooling mechanism in the bow
shock due to the relatively low achievable proton energies and photon
densities.
The maximum proton energy is constrained by equating the acceleration
time and the
time needed to diffuse out of the bow shock. Assuming Bohm diffusion,
$t_{\rm diff}=3\,Z^2/(2\,r_{\rm g}\,c)$ (where $r_{\rm g}$ is the
particle gyroradius), the maximum proton energy is:
\begin{equation}
E_{\rm p}^{\rm max} \sim 0.1\,q\,B\,R_{\rm c}=
5\times 10^3\,\left(\frac{B}{10\,{\rm G}}\right)\,\left(\frac{R_{\rm
c}}{10^{13}\,{\rm cm}}\right)\,{\rm TeV}\,.
\end{equation}
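As an order-of-magnitude check (the exact prefactor depends on rounding), $E_{\rm p}^{\rm max}=0.1\,q\,B\,R_{\rm c}$ can be evaluated in Gaussian units, where $q\,B\,R$ carries dimensions of energy:

```python
q_esu = 4.803e-10            # elementary charge [esu]
B, R_c = 10.0, 1e13          # magnetic field [G], cloud size [cm]
erg_per_TeV = 1.602          # 1 TeV expressed in erg

E_p_max = 0.1 * q_esu * B * R_c        # maximum proton energy [erg]
print(E_p_max / erg_per_TeV)           # a few times 1e3 TeV
```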
Electrons are injected in the bow shock region following a power-law
in energy of index 2.2 (Achterberg et al. 2001) with an exponential
cutoff at the maximum electron energy. The injection luminosity is
$L_{\rm nt}$. To first order, the electron evolution can be
computed assuming homogeneous conditions, following therefore a
one-zone approximation with all the mentioned cooling and escape
processes. The formulae for all the relevant radiative mechanisms, as well as
the solved electron evolution differential equation, can be found in
Araudo et al. (2009). In some cases, SSC is the dominant cooling
channel at high energies. In that case, the calculations have to be
done numerically
splitting the evolution of the electron population in different time
steps. In each step, the radiation field is updated accounting for the
synchrotron emission produced in the previous step until the steady
state is reached. The duration of each step should be shorter for the
earlier phases of the evolutionary state to account properly for the
rise of the synchrotron energy density in the emitter.
\begin{figure}
\includegraphics[angle=270, width=0.5\textwidth]{fig_4.ps}
\caption{Acceleration gain (orange dot-dashed line), escape (blue dashed line)
and cooling lepton timescales are plotted.
SSC (green two-dot-dashed line) is plotted when the steady state is reached
and EC for the both BLR (turquoise square line) and disc (maroon dotted line)
photon fields are shown for the
conditions of faint (BLR: $10^{44}$; disc: $10^{45}$~erg~s$^{-1}$) and
bright sources (BLR: $10^{46}$; disc: $10^{47}$~erg~s$^{-1}$).
Synchrotron (red solid line) and relativistic Bremsstrahlung (violet
dot-two-dashed line) are also plotted. We have fixed here
$\eta_{\rm B} = 0.01$. For protons, the
$pp$ cooling timescale is not shown for clarity, although it would
be similar to that of relativistic bremsstrahlung.}
\label{losses}
\end{figure}
Once the steady-state electron distribution in the bow shock is
computed, the spectral energy distribution (SED) of the non-thermal
radiation can be calculated. The synchrotron self-absorption effect
has to be taken into account, which will affect the low energy band of
the synchrotron emission. At gamma-ray energies, photon-photon
absorption due to the disc and the BLR radiation are to be considered,
the internal absorption due to synchrotron radiation being negligible.
Given the typical
BLR and disc photon energies, $\sim 10$~eV and $\sim 1$~keV,
respectively, gamma rays beyond $\sim 100$~GeV (BLR) and
$\sim 1$~GeV (disc) can be strongly
affected by photon-photon absorption. On the other hand, in most
cases photons with energies between 100~MeV
and 1~GeV will escape the dense disc photon field.
Although proton cooling is negligible in the bow-shock region, it may
be significant in the cloud. Protons can penetrate into the cloud if
$t_{\rm esc}>t_{\rm diff}$, which yields a minimum energy to reach the cloud of
$E_p\sim 0.4\,E_p^{\rm max}$. These protons will radiate only a fraction
$\sim 3\times 10^{-4}\,(R_{\rm
c}/10^{13}\,{\rm cm})(t_{\rm pp}/10^5\,{\rm s})^{-1}$ of their
energy in the form of gamma rays, which makes the process rather
inefficient. The reason is that
these protons cannot be efficiently confined and cross the cloud
at a velocity $\sim c$. For further details of the proton energy distribution
in the cloud, see Araudo et al. (2009).
\section{Many clouds inside the jet}\label{Many-clouds}
Clouds fill the BLR, and many of them can simultaneously
be inside the jet at different $z$, each of them producing non-thermal
radiation. Therefore, the total luminosity can be much larger than the
one produced by just one interaction, which is $\sim L_{\rm nt}$. The
number of clouds within the jets, at $z\le R_{\rm blr}$, can be
computed from the jet ($V_{\rm j}$) and
cloud ($V_{\rm c}$) volumes, resulting in:
\begin{equation}
\label{N_clouds}
N_{\rm c}^{\rm j} = 2\, f\, \frac{V_{\rm j}}{V_{\rm c}}\sim
9\left(\frac{L_{\rm j}}{10^{44}\,\rm{erg s^{-1}}}\right)^{2}
\left(\frac{R_{\rm c}}{10^{13}\,\rm{cm}}\right)^{-3},
\end{equation}
where the factor 2 accounts for the two jets and
$f \sim 10^{-6}$ is the filling factor of clouds in the whole
BLR (Dietrich et al. 1999). Actually, $N_{\rm c}^{\rm j}$
is only correct if one neglects
that the cloud disrupts, fragments and eventually dilutes inside
the jet. For instance, Klein et al. (1994) estimated a shocked cloud
lifetime of several $t_{\rm cs}$, and Shin et al. (2008) found that
even a weak magnetic field in the cloud can significantly increase its
lifetime. Finally, even under cloud fragmentation, strong bow shocks
can form around the cloud fragments before these have accelerated
close to $v_{\rm j}$. All this makes the real number of interacting
clouds inside the jet hard to estimate, but it should be between
$(t_{\rm cs}/t_{\rm j})\,N_{\rm c}^{\rm j}$ and $N_{\rm c}^{\rm j}$.
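The normalization of Eq.~(\ref{N_clouds}) can be recovered from the jet and cloud volumes; an illustrative Python sketch (conical jet up to $R_{\rm blr}$, Kaspi et al. 2007-type BLR size, $L_{\rm blr}=0.1\,L_{\rm j}$):

```python
import math

f, L_j = 1e-6, 1e44           # filling factor, jet power [erg/s]
R_c    = 1e13                 # cloud size [cm]
L_blr  = 0.1 * L_j

R_blr = 6e16 * (L_blr / 1e44) ** 0.7                 # BLR size [cm]
V_j   = (math.pi / 3) * (0.1 * R_blr)**2 * R_blr     # conical jet volume
V_c   = (4.0 / 3) * math.pi * R_c**3                 # cloud volume

N_c_j = 2 * f * V_j / V_c     # ~9 clouds inside the two jets
```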
The presence of many clouds inside the jet, not only at $z_{\rm int}$
but also at higher $z$, implies that the total non-thermal luminosity
available in the BLR-jet intersection region is:
\begin{eqnarray}
\label{Lrad_tot}
L_{\rm nt}^{\rm tot} &\sim &2\, \int^{R_{\rm blr}}
\frac{{\rm d}N_{\rm c}^{\rm j}}{{\rm d}z} L_{\rm nt}(z)\, {\rm d}z\nonumber \\
{}&\sim & 2\times10^{40} \left(\frac{\eta_{\rm nt}}{0.1}\right)
\left(\frac{R_{\rm c}}{10^{13}\,\rm{cm}}\right)^{-1}
\left(\frac{L_{\rm j}}{10^{44}\,\rm{erg\,s^{-1}}}\right)^{1.7}\,\rm{erg\,s^{-1}},
\end{eqnarray}
where ${\rm d}N_{\rm c}^{\rm j}/{\rm d}z$ is the number of clouds per
unit length, contained in a jet volume element
${\rm d}V_{\rm j}=\pi\,(0.1z)^2\,{\rm
d}z$. In both Eqs.~(\ref{N_clouds}) and (\ref{Lrad_tot}),
$L_{\rm blr}$
has been fixed to $0.1\,L_{\rm j}$, approximately as in FR~II
galaxies, and $R_{\rm blr}$ has been derived using Eq.~(\ref{Rblr}).
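Because the $(0.1z)^2$ factors in ${\rm d}N_{\rm c}^{\rm j}/{\rm d}z$ and $L_{\rm nt}(z)$ cancel, the integrand is $z$-independent and the integral reduces to a constant times $R_{\rm blr}$; a Python sketch of this estimate (fiducial parameters assumed):

```python
f, eta_nt = 1e-6, 0.1          # filling factor, non-thermal fraction
L_j, R_c  = 1e44, 1e13         # jet power [erg/s], cloud size [cm]
L_blr     = 0.1 * L_j
R_blr     = 6e16 * (L_blr / 1e44) ** 0.7     # BLR size [cm]

# dN/dz * L_nt(z) is constant in z; the prefactor is (3/2) f eta_nt L_j / R_c
integrand = 1.5 * f * eta_nt * L_j / R_c     # [erg s^-1 cm^-1]
L_nt_tot  = integrand * R_blr                # ~2e40 erg/s
```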
In Fig.~\ref{L_tot}, we show estimates for the gamma-ray luminosity
when many clouds interact simultaneously with the jet. For this, we
have followed a simple approach assuming that most of the non-thermal
luminosity goes to gamma rays. This will be the case as long as the
escape and synchrotron cooling time are longer than the IC cooling
time (EC+SSC) for the highest electron energies. Given the little
information for the BLR in the case of FR~I sources, we do not
specifically consider these sources here.
\begin{figure}
\includegraphics[angle=270, width=0.5\textwidth]{fig_5.ps}
\caption{Upper limits on the gamma-ray luminosity produced by
$N_{\rm c}^{\rm j}$ clouds inside the jet as a function of
$L_{\rm j}$ in FR~II sources. Two cases are plotted, one assuming that
clouds cross the jet without disruption (green solid lines), and one
in which the clouds are destroyed in a time as short as $t_{\rm cs}$
(green dashed lines).
The thick (solid and dashed) and thin (solid and dashed) lines
correspond to
$R_{\rm blr} \propto L_{\rm blr}^{0.7}$ (Kaspi et al. 2007) and
$R_{\rm blr} \propto L_{\rm blr}^{0.55}$ (Kaspi et al. 2005), respectively.
In addition, the sensitivity levels of {\it Fermi} in
the range 0.1--1~GeV (maroon dotted lines) are plotted for three
different distances $d=10$, 100 and 1000~Mpc.}
\label{L_tot}
\end{figure}
In the two next sections, we present more detailed calculations
applying the model presented in Sect.~\ref{acc} to two
characteristic sources, Cen~A (FR~I, one interaction) and 3C~273
(FR~II, many interactions).
\section{Application to FR~I galaxies: Cen~A} \label{FRI}
Cen~A is the closest AGN, with a distance $d\approx 3.7$~Mpc (Israel 1998).
It has been classified as an FR~I radio galaxy and as a Seyfert~2 optical
object. The mass of the black hole is $\approx 6\times
10^7$~$M_{\odot}$ (Marconi et al. 2000).
The angle between the jets and the line of sight is
large, $>50^\circ$ (Tingay et al. 1998),
thus the jet radiation towards the observer should
not suffer strong beaming. The jets of Cen~A are disrupted at kpc
scales, forming two giant radio lobes that extend $\sim 10^{\circ}$ in
the southern sky. At optical
wavelengths, the nuclear region of Cen~A is obscured by a dense region
of gas and dust, probably as a consequence of a recent merger (Thomson
1992, Mirabel et al. 1999). At higher energies, {\it Chandra} and
{\it XMM-Newton} detected continuum X-ray emission coming from the
nuclear region with a luminosity $\sim 5\times10^{41}$~erg~s$^{-1}$
between 2--7~keV (Evans et al. 2004). These X-rays could be produced by the
accretion flow and the inner jet, although their origin is still
unclear. In the GeV range, Cen~A was detected above $200$~MeV by
\emph{Fermi}, with a bolometric luminosity of $\approx
4\times10^{40}$~erg~s$^{-1}$ (Abdo et al. 2009), and above $\sim
200$~GeV by HESS, with a bolometric luminosity of $\approx
3\times10^{39}$~erg~s$^{-1}$ (Aharonian et al. 2009). In both cases,
this HE emission is associated with the nuclear region.
Cen~A has been proposed to be a source of ultra HE cosmic rays (Romero
et al. 1996).
A BLR has not been detected so far in Cen A (Alexander et al. 1999),
although this could be a consequence of the optical obscuration
produced by the dust lane. One can still assume that clouds surround
the SMBH in the nuclear region (Wang et al. 1986, Risaliti et
al. 2002) but, as a consequence of the low luminosities of the
accretion disc, it is not expected that the photoionization of these
clouds will be efficient enough to produce lines. Since no emission
from these clouds is assumed, we only consider the EC scattering with
photons from the accretion flow.
We adopt here a jet power for Cen~A of $L_{\rm j}=10^{44}$~erg~s$^{-1}$.
From this value, and the values given
in Sect.~\ref{jet-cloud} for the remaining parameters of the jet and
the cloud, $z_{\rm int}$ results in $\approx 5\times10^{15}$~cm. At
this jet height, the emission produced by the interaction between one
cloud and the jet is calculated assuming $\eta_{\rm B}=0.01$, and
the corresponding SED is presented in Fig.~\ref{CenA}. As mentioned in
Sect.~\ref{acc}, the low-energy band of the synchrotron
spectrum is self-absorbed at energies below $\sim 10^{-4}$~eV. At
gamma-ray energies, photon-photon absorption is negligible due to the
weak ambient photon fields (e.g. Rieger \& Aharonian 2009, Araudo et
al. 2009, 2010a,b). At high energies, SSC dominates the radiative output, with
the computed luminosity above 100~MeV being $\sim
2\times10^{39}$~erg~s$^{-1}$, and above 100~GeV about 10 times
less. These luminosities are below the sensitivity of \emph{Fermi} and
HESS and one order of magnitude smaller than the observed ones. Note
however that $L_{\rm nt}\propto R_{\rm c}^2$, and for slightly bigger
clouds, $L_{\rm nt}$ may grow to detectable levels. The penetration
of a big clump in the base of the jet of Cen~A would lead to a flare
with a duration of about one day.
\begin{figure}
\includegraphics[angle=270, width=0.5\textwidth]{fig_6.ps}
\caption{Computed SED for one jet-cloud interaction at $z_{\rm int}$ in
Cen~A. We show also the SEDs of the detected emission by \emph{Fermi} and HESS,
as well as the sensitivity curves of these instruments.}
\label{CenA}
\end{figure}
\section{Application to FR II galaxies: 3C~273 (off-axis)}\label{FRII}
\begin{table}[]
\begin{center}
\caption{Adopted parameters for Cen A and 3C~273.}
\label{applications}
\begin{tabular}{lcc}
\hline
{} & Cen A & 3C 273 \\
\hline
Distance [Mpc] & 3.7 & $6.7\times10^2$\\
Black hole mass [$M_{\odot}$] & $6\times10^7$& $7\times10^9$ \\
Inclination angle [$^\circ$] & $> 50$ & $\sim 15$ \\
Jet luminosity [erg~s$^{-1}$] & $10^{44}$ & $4\times10^{47}$\\
Disc luminosity [erg~s$^{-1}$]&$5\times10^{41}$& $2\times10^{46}$ \\
Disc photon energy$^{\star}$ [eV] & $\sim 5\times10^3$ & 54 \\
BLR luminosity $L_{\rm blr}$ [erg~s$^{-1}$]& - & $4\times10^{45}$ \\
\hline
$^{\star}$ of the thermal component.
\end{tabular}
\end{center}
\end{table}
3C~273 is a powerful radio-loud AGN at a distance of $d=6.7\times
10^2$~Mpc (Courvoisier 1998) with a SMBH mass
$M_{\rm BH} \sim 7\times10^9 M_{\odot}$
(Paltani \& T\"urler 2005). The angle of the jet with
the line of sight is
small, $\approx 6^\circ$, which implies the blazar nature of 3C~273
(Jolley et al. 2009). The whole spectrum of this source shows
variability (e.g. Pian et al. 1999) from years (radio) to few hours
(gamma rays).
At high energies, 3C~273
was the first blazar AGN detected in the MeV band by the COS-B
satellite and, later on, by EGRET (Hartman et
al. 1999). Recently, this source was also detected at GeV energies by
\emph{Fermi} and \emph{AGILE}, but it has not been detected yet in the
TeV range. Given the jet luminosity of 3C~273, $L_{\rm j}\approx
4\times10^{47}$~erg~s$^{-1}$ (Kataoka et al. 2002), $z_{\rm int}$
results in $\approx 3\times10^{17}$~cm.
The BLR luminosity of this source is $\approx
4\times10^{45}$~erg~s$^{-1}$ (Cao \& Jiang 1999), and its size
$7\times10^{17}$~cm (Ghisellini et al. 2010), which implies that
jet-cloud interactions can take place. The disc luminosity is high,
$\approx 2\times 10^{46}$~erg~s$^{-1}$, with typical photon energies
$\approx 54$~eV (Grandi \& Palumbo 2004).
The non-thermal SED of the radiation generated by jet-cloud
interactions in 3C~273 is shown in Fig.~\ref{3C273}. At $z_{\rm int}$,
the most important radiative processes are synchrotron and SSC. The
bolometric luminosities by these processes in one interaction at
$z_{\rm int}$ are $6\times10^{38}$~erg~s$^{-1}$ and
$2\times10^{39}$~erg~s$^{-1}$, respectively. Given the presence of the
strong radiation fields from the disc and the BLR, the emission above
$\sim 10$~GeV is absorbed through photon-photon absorption,
and the maximum of the emission is around 0.1--1~GeV. Given
the estimated number of clouds in the BLR of 3C~273, $\sim 10^8$
(Dietrich et al. 1999), and the BLR size, $R_{\rm blr}\approx
7\times10^{17}$~cm, the filling factor results in $f\sim 3\times
10^{-7}$. With this value of $f$, the number of clouds inside the two jets
is $\sim 2\times 10^3$ and $\sim 5\times 10^5$ for the minimum
and the maximum values, respectively (see Sect.~\ref{Many-clouds}). Considering the
most optimistic case, the SSC luminosity would reach
$2\times10^{44}$~erg~s$^{-1}$. This value is well below the observed
luminosity by \emph{Fermi} in the GeV range, $\sim
3\times10^{46}$~erg~s$^{-1}$ in the steady state and $\sim
1.7\times10^{47}$~erg~s$^{-1}$ in the flaring state (Soldi et
al. 2009). However, the detected emission is very likely of beamed
origin and should mask any unbeamed radiation. In contrast, powerful
non-blazar AGNs (FR~II galaxies) do not present this beamed component,
which makes the detection of GeV emission from jet-cloud
interactions possible in these sources. In this case, given that many BLR clouds
can interact with the jet simultaneously, the radiation should be steady.
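The filling factor quoted above can be checked with a few lines of arithmetic, using $f \sim N_{\rm c}\,(R_{\rm c}/R_{\rm blr})^3$. The cloud radius $R_{\rm c}\sim 10^{13}$~cm adopted below is a typical BLR cloud size and an assumption of this sketch, as it is not stated in this section.

```python
# Order-of-magnitude check of the BLR filling factor quoted for 3C 273:
# f ~ N_c * (R_c / R_blr)^3 for N_c spherical clouds of radius R_c inside
# a region of size R_blr. R_c ~ 1e13 cm is an assumed typical cloud radius.

N_c = 1e8      # estimated number of BLR clouds (Dietrich et al. 1999)
R_c = 1e13     # assumed cloud radius [cm]
R_blr = 7e17   # BLR size [cm] (Ghisellini et al. 2010)

f = N_c * (R_c / R_blr) ** 3
print(f"filling factor f ~ {f:.1e}")  # ~3e-7, consistent with the text
```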
\begin{figure}
\includegraphics[angle=270, width=0.5\textwidth]{fig_7.ps}
\caption{Computed SED for one jet-cloud interaction at $z_{\rm int}$ in 3C~273.
The emission in the 0.1--1~GeV range from many clouds inside the jet
is also shown, together with the sensitivity level of \emph{Fermi} and
the observed SED above 200~MeV.}
\label{3C273}
\end{figure}
\section{Summary and discussion}\label{disc}
In this work, the interaction of clouds with the base of jets in AGNs
is studied. Considering reasonable cloud and jet parameters, we
estimate the relevant dynamical timescales of these interactions,
concluding that clouds can enter into the jet only above a certain
height, $\sim z_{\rm int}$. Below $z_{\rm int}$, the jet is too
compact and its ram/magnetic pressure will destroy the cloud before
fully penetrating into the jet. Once the cloud significantly interacts
with the jet, strong shocks are generated and gamma rays can be
produced with an efficiency that depends strongly on the bow-shock
magnetic field.
Bow shock $B$-values well below equipartition with non-thermal
particles allow significant gamma-ray emission. For very high
$B$-values (Poynting-flux dominated jets), the treatment performed
here does not apply. In that case, $z_{\rm int}$ could still be
defined by adopting the jet magnetic pressure instead of the ram
pressure. If a cloud entered such a jet, particle acceleration in
the bow shock could still occur due to, for instance, magnetic
reconnection. The study of such a case would require a completely
different approach than the one presented here. In general, for
bow-shock magnetic fields above equipartition with
non-thermal particles, the IC channel and gamma ray production will be
supressed in favor of synchrotron emission. Unless magnetic
dissipation reduced the magnetic field enough for IC to be
dominant. Bow shock $B$-values well below equipartition with
non-thermal particles allow significant gamma-ray emission. We note
that modeling of gamma rays from AGN jets uses to require relatively
low magnetic fields (e.g. Ghisellini et al. 2010). Therefore, it could
be that, even if the jet magnetic field were high at $z_{\rm int}$, it
could become small enough farther up due to bulk acceleration
(e.g. Komissarov et al. 2007) or some other form of magnetic
dissipation.
For very nearby sources, like Cen~A, the interaction of big clouds
with jets may be detectable as a flaring event, although the number of
these big clouds and thereby the duty cycle of the flares are difficult
to estimate. Given the weak external photon fields in these sources,
VHE photons can escape without suffering significant
absorption. Therefore, jet-cloud interactions in nearby FR~I may be
detectable in both the HE and the VHE range as flares with timescales
of about one day. Studying such radiation would provide information
on the environmental conditions and the base of the jet in these
sources.
In FR~II sources, many BLR clouds could interact simultaneously with
the jet. The number of clouds depends strongly on the cloud lifetime
inside the jet, which could be of the order of several $t_{\rm cs}$.
Nevertheless, it is worth noting that after cloud fragmentation
many bow shocks can still form and efficiently accelerate particles if
these fragments are slower than the jet. Since FR~II sources are
expected to present high accretion rates, radiation above 1~GeV
produced in the jet base can be strongly attenuated due to the dense
disc and the BLR photon fields, although gamma rays below 1~GeV should
not be significantly affected. Since jet-cloud emission should be
rather isotropic, it would be masked by jet beamed emission in blazar
sources, although since powerful/nearby FR~II jets do not present
significant beaming, these objects could indeed show gamma rays from
jet-cloud interactions. In the context of AGN unification (Urry \&
Padovani 1995), the number
of non-blazar (radio-loud) AGNs should be much larger than that of
blazars with the same $L_{\rm j}$. As shown in Fig.~\ref{L_tot}, close
and powerful sources could be detectable by deep enough observations
of {\it Fermi}. After a few years of exposure, a significant signal from these
objects could arise, their detection providing strong evidence that
jets are already strongly matter dominated at the bow shock regions, as well
as physical information on the BLR and jet base region.
\begin{acknowledgements}
We thank the referee Elena Pian for constructive comments and suggestions.
A.T.A. and V.B-R. thank the Max-Planck-Institut f\"ur Kernphysik for kind
hospitality and support. A.T.A. and G.E.R. are supported by CONICET
and the Argentine agency ANPCyT (grant BID 1728/OC-AR PICT 2007-00848).
V.B-R. and G.E.R. acknowledge support by the Ministerio de
Educaci\'on y Ciencia (Spain) under grant AYA 2007-68034-C03-01, FEDER funds.
\end{acknowledgements}
\section{Introduction}
\label{sec:intro}
GPS trajectories are the essential foundation of
many applications such as travel time estimation~\cite{zhang2018deeptravel, li2018multi}, traffic prediction~\cite{li2021traffic, zheng2020gman, jin2022gridtuner}, and trajectory similarity measurement~\cite{li2018deep,yao2019computing,yang2021t3s, zhang2020trajectory, han2021graph}.
In order to achieve good performance, most of these applications require a large number of high-sample-rate trajectories~\cite{ren2021mtrajrec}, as trajectories with a low sample rate lose detailed driving information and increase uncertainty.
However, as pointed out in previous works~\cite{ren2021mtrajrec,zhao2019deepmm}, a large number of trajectories generated in real life have a low sample rate, e.g., taxis usually report their GPS locations every $2 \sim 6$ minutes to reduce energy consumption~\cite{yuan2010interactive}. Consequently, it is hard for most existing models developed for the applications mentioned above to utilize these trajectories effectively. In addition, GPS trajectories have to be first mapped to the road network via map matching before being used by many applications. Most existing map matching algorithms are based on the Hidden Markov Model (HMM)~\cite{newson2009hidden} and its variants, and they can achieve a high accuracy only when the trajectories are sampled at a relatively high rate~\cite{lou2009map}. Although some existing works aim to increase the accuracy of map matching, including HMM-based methods~\cite{jagadeesh2017online, song2012quick} and learning-based methods~\cite{lou2009map, zhao2019deepmm}, we have not yet found a good solution to address the issues caused by low-sample trajectories.
Trajectory recovery, which aims to increase the sample rate by recovering the missing points of a given trajectory, enriches low-sample trajectories from a different perspective. A simple approach is to assume that vehicles move with uniform speeds and to insert new points (generated by linear interpolation) between every two consecutive GPS points in the input trajectory~\cite{hoteit2014estimating}. Though easy to implement, it suffers from poor accuracy. Recently, two learning-based methods have been developed for trajectory recovery, including DHTR~\cite{wang2019deep} and MTrajRec~\cite{ren2021mtrajrec}.
Both methods follow the sequence-to-sequence~\cite{sutskever2014sequence} paradigm, with an encoder model to generate the representation of a given trajectory and a decoder model to recover the trajectory point by point.
Nevertheless, all existing works still suffer from the following two major limitations.
\begin{itemize}
\item Most of these existing works ignore the road network structure, making the prediction lack spatial consistency to a certain degree.
\item Most of these existing works use a simple encoder model to represent trajectories and hence are unable to fully utilize the rich contextual information of GPS trajectories. For example, MTrajRec only employs a simple gated recurrent unit (GRU)~\cite{cho2014properties} for trajectory representation.
\end{itemize}
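The uniform-speed interpolation baseline mentioned above can be sketched in a few lines. The point format (lat, lng, t) and the numbers used here are illustrative choices, not the implementation of any cited system.

```python
# Sketch of the uniform-speed (linear interpolation) baseline: between each
# pair of consecutive GPS points, insert points so that the output trajectory
# has a fixed sample interval eps. The (lat, lng, t) point format and the
# numbers below are illustrative only.

def linear_interpolate(traj, eps):
    """traj: list of (lat, lng, t) with increasing t; returns a denser list."""
    out = [traj[0]]
    for (la1, lo1, t1), (la2, lo2, t2) in zip(traj, traj[1:]):
        n = max(1, int(round((t2 - t1) / eps)))  # sub-intervals in this gap
        for k in range(1, n):
            a = k / n                            # fraction of the gap covered
            out.append((la1 + a * (la2 - la1),
                        lo1 + a * (lo2 - lo1),
                        t1 + a * (t2 - t1)))
        out.append((la2, lo2, t2))
    return out

# A 2-point, 120-second gap densified to a 30-second interval: 5 points.
dense = linear_interpolate([(31.10, 121.30, 0), (31.14, 121.34, 120)], eps=30)
print(len(dense))
```

The poor accuracy of this baseline comes precisely from the straight-line, uniform-speed assumption: real vehicles follow the road network and vary their speed.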
\begin{figure}[htbp]
\centering
\includegraphics[width=7.4cm]{fig/Fig-Introduction.pdf}
\vspace{-0.08in}
\caption{An example of trajectory recovery task and detailed illustration to demonstrate the significance of road network structure.}
\label{fig1}
\vspace{-0.25in}
\end{figure}
To facilitate the understanding of the above two limitations, we plot an example in Fig.~\ref{fig1}(a). The three orange-colored points labelled as $p_i$ ($i\in [1,3]$) form an input trajectory of low sample rate. As observed, the distance between each two consecutive points is relatively long (as there are many missing GPS points between them). The task of trajectory recovery is to recover the missing points and map all the GPS points (including the input points and recovered points) to the road network to generate a high-sample trajectory, e.g., the 13 blue-colored points labelled as $q_j$ ($j\in [1,13]$) together with the underlying road segments (represented by black color lines) shown in Fig.~\ref{fig1}(a) form an output trajectory.
This task is not trivial.
Both the road network structure and the contextual information of the input trajectory are essential to guarantee the high accuracy of the recovered trajectories. Their significance lies in four aspects.
i) When two newly generated consecutive points are located at two different road segments, we have to rely on the road network to recover the trajectory. As illustrated in Fig.~\ref{fig1}(b), new points $q_4$ and $q_5$ are near a major intersection $I_1$ that has a complex topology. It is impossible to recover the underlying road segments if the road network structure is not considered. Note that in Fig.~\ref{fig1}(b) and (c), circles represent intersections and lines with arrows stand for directional road segments from one intersection to another.
ii) Raw GPS points by nature have errors (e.g., most GPS-enabled smartphones are accurate to within a 4.9~m radius under open sky). As shown in Fig.~\ref{fig1}(a), raw GPS points $p_1$, $p_2$ and $p_3$ are not located on any road segment. Consequently, the underlying road network provides an effective means to correct the errors, as vehicles must move along the road network.
iii) Raw points in the input trajectory might be far away from each other, and the underlying road network reveals how vehicles can move from one point to another, e.g., we cannot simply use a straight line to connect $p_1$ to $p_2$ in Fig.~\ref{fig1}(a).
iv) Two consecutive GPS points in the input trajectory may be located at two different road segments while there are multiple candidate routes available to connect them. For example, we plot three candidate routes from $p_1$ to $p_2$ in Fig.~\ref{fig1}(c), where triangles of the same color form one candidate route. The contextual information of the trajectory provides useful clues when filtering out impossible candidate route(s). For example, the position of $p_3$
suggests that the candidate route $T_3$ is unlikely to be the one, as it requires a detour to travel from $p_2$ to $p_3$. Those familiar with map matching might notice that some aspects discussed above are applicable to map matching too. Map matching could be considered as a sub-step of trajectory recovery, while trajectory recovery is more challenging as it not only maps GPS points to road segments but also recovers missing GPS points that truly capture a vehicle's movement.
To address the limitations of existing trajectory recovery methods and meanwhile take advantage of an end-to-end framework,
we propose a novel transformer-based model,
namely RNTrajRec, a \textbf{R}oad \textbf{N}etwork enhanced \textbf{Traj}ectory \textbf{Rec}overy framework with spatial-temporal transformer. To capture the road network structure, RNTrajRec first develops a grid-partitioned road network representation module, namely GridGNN, to learn the hidden-state embedding for each road segment. To capture both the spatial-temporal features and the contextual information of trajectory, RNTrajRec then develops a novel transformer-based model, namely GPSFormer, which first represents each GPS point in a trajectory as a sub-graph road network that surrounds the GPS point through a Sub-Graph Generation module, and then introduces a novel spatial-temporal transformer model to learn rich spatial and temporal patterns of a GPS trajectory. It finally adopts a well-designed decoder proposed in \cite{ren2021mtrajrec} on top of the encoder model to recover the missing GPS points in the trajectory. Overall, our major contributions are summarized below.
\vspace{-0.02in}
\begin{itemize}
\item We propose a novel framework, namely RNTrajRec.
To the best of our knowledge, RNTrajRec is the first attempt to combine road network representation with GPS trajectory representation for the task of trajectory recovery.
\item To consider the road network structure around each GPS point in a given trajectory, we propose a novel spatial-temporal transformer network, namely GPSFormer, which consists of a transformer encoder layer for temporal modeling and a graph transformer model, namely graph refinement layer, for spatial modeling. Meanwhile, a Sub-Graph Generation module is developed to capture the spatial features for each GPS point.
\item We propose a novel model for road network representation, namely GridGNN, which seamlessly combines grid-level representation with road network representation.
\item We conduct extensive experiments on three real-life datasets to compare RNTrajRec with existing methods\footnote{Codes are available at \url{https://github.com/chenyuqi990215/RNTrajRec}.}. Experimental results demonstrate that RNTrajRec significantly outperforms state-of-the-art solutions.
\end{itemize}
\section{Related Work}
\noindent
\textbf{Spatial-Temporal Transformer.}
Transformers have achieved great success in many artificial intelligence fields, such as natural language processing~\cite{vaswani2017attention, devlin2018bert, liu2021gpt} and computer vision~\cite{carion2020end, liu2021swin, fang2021you}. Therefore, they have naturally attracted a lot of interest from academia and industry, with many variants~\cite{beltagy2020longformer, zhou2021informer, kitaev2020reformer} being proposed. As a result, there is a growing interest in applying the transformer architecture to graph representation~\cite{yun2019graph, cai2020graph, dwivedi2020generalization}.
However, most of these works address graph modeling on a fixed graph structure, which is not suitable for modeling dynamic graphs with temporal dependencies.
Recently, several works have been proposed for dynamic graph modeling with the transformer framework to solve pedestrian trajectory prediction\cite{yu2020spatio}, dynamic scene graph generation\cite{cong2021spatial}, 3D human pose estimation\cite{zheng20213d}, activity recognition\cite{li2021groupformer, zhang2021stst}, etc. Specifically,
Yu et al.~\cite{yu2020spatio} propose a novel spatio-temporal transformer framework with intra-graph crowd interaction for trajectory prediction. However, the number of pedestrians (i.e., the number of nodes at each timestamp) is quite small and fixed. Cong et al.~\cite{cong2021spatial} use the same transformer structure for both spatial and temporal modeling, with a masked multi-head self-attention layer for temporal modeling. However, the time cost for each layer is $O\left(l^2 \cdot v^2 \right)$, with $l$ the length of the sequence and $v$ the average number of nodes in each graph structure.
Zheng et al.~\cite{zheng20213d} use a patch embedding for spatial position embedding and propose a spatial attention network. However, the input and the output have different structures and hence the model is not stackable. Li et al.~\cite{li2021groupformer} propose a spatial-temporal transformer framework tailored for group activity recognition. Zhang et al.~\cite{zhang2021stst} propose a spatial-temporal transformer with a directional temporal transformer block to capture the movement patterns of human posture.
In this work, we regard a GPS point as a weighted sub-graph of the road network surrounding the GPS point. Therefore, the graph structures of points along a trajectory
could be different from each other. We propose a spatial-temporal transformer network that has the following three advantages. i) Our model is more flexible as it has zero restriction on the number of nodes or edges for each graph. ii) The time cost for each layer is $O\left(l^2+l \cdot v\right)$, which is more efficient and scalable. Note that due to the sparseness of road network, a sub-graph of a road network with $v$ road segments only has $O\left(v\right)$ edges connecting these road segments. iii) The input and the output share the same structure, therefore, our model is stackable.
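To make the per-layer cost comparison concrete, the toy numbers below contrast the two complexity terms; the values of $l$ and $v$ are illustrative magnitudes, not measured quantities.

```python
# Contrast the per-layer attention costs discussed above:
# joint space-time attention O(l^2 * v^2) vs. the factorized design
# O(l^2 + l * v). The values of l and v are illustrative only.

l = 64  # trajectory length (number of GPS points)
v = 20  # average number of road segments per sub-graph

joint = l ** 2 * v ** 2       # attend over all (point, node) pairs jointly
factorized = l ** 2 + l * v   # temporal attention + sparse graph attention

print(joint // factorized)    # roughly a 300x reduction for this setting
```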
\noindent
\textbf{Road Network Representation Learning.}
Most of the existing works regard the road network as a directed graph. Node2vec~\cite{grover2016node2vec} and DeepWalk~\cite{perozzi2014deepwalk} are two widely used models that represent road segments as shallow embeddings. With the rapid development of graph neural networks (GNNs), many graph convolutional networks (e.g., GCN~\cite{kipf2016semi}, GraphSage~\cite{hamilton2017inductive}, GAT~\cite{velivckovic2017graph} and GIN~\cite{xu2018powerful}) are suitable for road network representation. Recently, several works have been proposed specifically for road network representation learning~\cite{jin2021spatio, wu2020learning, chen2021robust}. Among them, STDGNN~\cite{jin2021spatio} regards the road network as a dual graph structure with a node-wise GCN modeling the features of intersections and an edge-wise GCN modeling the features of road segments, HRNR~\cite{wu2020learning} constructs a three-level neural architecture to learn rich hierarchical features of the road network, and Toast~\cite{chen2021robust} proposes a traffic context aware skip-gram module for road network representation and a trajectory-enhanced transformer module for route representation.
In this work, we regard each road segment as a sequence of grids that the road segment passes through, and use a recurrent neural network to model the grid sequence dependency and a graph neural network to model the graph structure. With this design, our road network representation can capture both nearby neighborhood features and graph topology features.
\noindent
\textbf{GPS Trajectory Representation Learning.}
Learning-based methods for representing GPS trajectories have been widely studied in recent years. Among them, T2vec~\cite{li2018deep}, NeuTraj~\cite{yao2019computing}, T3S~\cite{yang2021t3s}, and Traj2SimVec~\cite{zhang2020trajectory} are representative models. T2vec proposes the first deep learning model for trajectory similarity learning, with a BiLSTM~\cite{hochreiter1997long} modeling temporal dependency. NeuTraj integrates a spatial-memory network for trajectory encoding and uses a distance-weighted rank loss for accurate and effective trajectory representation learning. T3S uses a self-attention based network for structural information representation and an LSTM~\cite{hochreiter1997long} module for spatial information representation. Traj2SimVec proposes a novel sub-trajectory distance loss and a trajectory point matching loss for robust trajectory similarity computation. However, these works only consider GPS trajectories in Euclidean space but ignore the important road network topology. Recently, Han et al.~\cite{han2021graph} propose a graph-based method along with a novel spatial network based metric similarity computation. However, it mainly focuses on representing points-of-interest (POIs) in road networks. Since POIs are discrete GPS points on the road network while GPS points in a trajectory are continuous, it cannot be directly used for GPS trajectory representation learning. To the best of our knowledge, our work is the first attempt to solve GPS trajectory representation learning in spatial networks.
\noindent
\textbf{Trajectory Recovery.}
Recovering low-sample trajectories is important for reducing uncertainty~\cite{wang2019deep, xia2021attnmove, xi2019modelling, ren2021mtrajrec}. Specifically, DHTR~\cite{wang2019deep} proposes a two-stage solution that first recovers a high-sample trajectory and then applies a map matching algorithm (i.e., HMM~\cite{newson2009hidden}) to recover the real GPS locations. AttnMove~\cite{xia2021attnmove} designs multiple intra- and inter-trajectory attention mechanisms to capture user-specific long-term and short-term patterns. Bi-STDDP~\cite{xi2019modelling} integrates bi-directional spatio-temporal dependence and users'
dynamic preferences to capture complex user-specific patterns. However, both AttnMove and Bi-STDDP use user-specific historical trajectories and are designed for recovering missing POI check-in data, which is very different from our setting.
MTrajRec~\cite{ren2021mtrajrec} proposes an end-to-end solution with a map-constraint decoder model, which significantly outperforms two-stage methods. However, it ignores the important road network structure, leaving room for further improving the accuracy.
In this paper, we propose a novel GPSFormer module to learn rich spatial and temporal features of trajectories from the road network.
\section{Preliminary}
\begin{table*}[htbp]
\renewcommand\tabcolsep{3.3pt}
\renewcommand{\arraystretch}{1.2}
\caption{Summary of notations}
\centering
\resizebox{0.96\textwidth}{!}{%
\begin{tabular}{|c|l|}
\hline
\textbf{Notation} & \textbf{Definition} \\
\hline
$G$ & The topology structure of the road network. \\
\hline
$\hat{G}_{\tau}$ & The sub-graph structures for the given trajectory $\tau$. \\
\hline
$\hat{G}_{\tau, i}$ & The sub-graph structure of the $i^{th}$ sample point in the given trajectory $\tau$. \\
\hline
$\Sigma^{grid}$ & The grid embedding table. \\
\hline
$\Sigma^{road}$ & The road segment embedding table. \\
\hline
$e_i, \bar{e}, \tilde{e}, \hat{e}$ & A certain road segment in the road network. \\
\hline
$X^{road}$ & The representation of the road network. \\
\hline
$\vec{Z}_{\tau}^{(l)}$ & The representation for the subgraphs of the given trajectory $\tau$ after $l$ layers of GPSFormer. \\
\hline
$\vec{Z}_{\tau, i}^{(l)}$ & The representation for the subgraph of the $i^{th}$ sample in the given trajectory $\tau$ after $l$ layers of GPSFormer. \\
\hline
$\vec{H}_{\tau}^{(l)}$ & The input features for the given trajectory $\tau$ to the transformer encoder layer at the $l^{th}$ layer of GPSFormer. \\
\hline
$H_{\tau}^{traj}$ & The final representation for every sample point in the given trajectory $\tau$. \\
\hline
$h_{\tau}^{traj}$ & The final trajectory-level representation for the given trajectory $\tau$. \\
\hline
$c$ & The constraint mask vector defined in the decoder model. \\
\hline
$\mathcal{L}_{id},\mathcal{L}_{rate},\mathcal{L}_{enc}$ & The loss functions employed in RNTrajRec. \\
\hline
\end{tabular}}
\label{tab5}
\end{table*}
In this section, we formally introduce the key concepts related to this work and define the task of trajectory recovery. The notations used in this paper are briefly summarized in Table~\ref{tab5} and fully explained throughout the paper.
\textit{Definition 1:} (\textbf{Road Network}.) A \textit{Road Network} is modeled as a directed graph $G=\left(V,E\right)$, where $V$ represents the set of road segments and $E \subseteq V \times V$ captures the connectivity of these road segments, i.e., an edge $\langle e_i, e_j \rangle \in E$ if and only if there exists a direct connection from road segment $e_i$ to road segment $e_j$.
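Definition~1 maps directly onto a standard adjacency structure, as the sketch below shows; the three-segment toy network is hypothetical.

```python
# A road network per Definition 1: nodes are road segments, and a directed
# edge (e_i, e_j) means a vehicle can move directly from e_i to e_j.
# The three-segment toy network below is hypothetical.

V = ["e1", "e2", "e3"]
E = {("e1", "e2"), ("e1", "e3"), ("e2", "e3")}

# Successor lists: segments directly reachable from each segment.
succ = {e: sorted(ej for (ei, ej) in E if ei == e) for e in V}
print(succ)  # e1 -> [e2, e3], e2 -> [e3], e3 -> []
```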
\textit{Definition 2:} (\textbf{Trajectory}.) A \textit{Trajectory} $\tau/\rho$ is defined as a sequence of $l$ tuples, i.e. $\tau / \rho = \langle \left(p_1, t_1\right)$, $\left(p_2, t_2\right)$, $\cdots$, $\left(p_{l}, t_{l}\right) \rangle$, where $l$ refers to the length of the trajectory, $p_i$ refers to the $i^{th}$ sample point in $\tau$/$\rho$, and $t_i$ is the sample time of $p_i$ in $\tau$/$\rho$.
Note that the time interval between two adjacent sample points in a trajectory (i.e., $t_{i+1}-t_i$) defines the sample interval $\epsilon$ of this trajectory.
GPS devices have measurement errors. Throughout the paper, we use the terms \textbf{Raw GPS Trajectory} and \textbf{Map-matched GPS Trajectory} to refer to a GPS trajectory (denoted as $\tau$) directly obtained from a certain GPS device and one (denoted as $\rho$) obtained after running a map matching algorithm (e.g., HMM~\cite{newson2009hidden}), respectively.
Note that $p_i$ in a raw GPS trajectory $\tau$ records its exact location using latitude and longitude, while $p_j$ in a map-matched GPS trajectory $\rho$ captures its location based on the road segment $e_j$ that $p_j$ is located at and the moving ratio $r_j\in [0,1)$ that captures the moving distance of $p_j$ over the total length of $e_j$ (e.g., if $r_j=0.5$, the point $p_j$ is located at the middle point of road segment $e_j$). In addition, a raw GPS trajectory $\tau$ typically does not have a fixed sample interval, and we use the average time interval $\epsilon_\tau$ instead. A low-sample trajectory has a long sample interval.
\textit{Definition 3:} (\textbf{Map-matched $\epsilon_\rho$-Sample Interval Trajectory}.) A \textit{Map-matched $\epsilon_\rho$-Sample Interval Trajectory} $\rho$ is a \textit{Map-matched GPS Trajectory} with a fixed sample interval $\epsilon_\rho$, i.e., $\rho=\langle \left(q_1, t_1\right), \left(q_2, t_1+\epsilon_\rho\right), \cdots, \left(q_{l_{\rho}}, t_1+(l_{\rho}-1)\epsilon_\rho\right) \rangle$.
\textit{Definition 4:} (\textbf{Trajectory Recovery}.) Given a low-sample \textit{Raw GPS Trajectory} $\tau$ with measurement errors (e.g., orange GPS points in Fig.~\ref{fig1}(a)), the task of \textit{Trajectory Recovery} aims to recover the real \textit{Map-matched $\epsilon_\rho$-Sample Interval Trajectory} $\rho$ (e.g., blue GPS points in Fig.~\ref{fig1}(a)). Specifically, for each low-sample trajectory, it infers the missing GPS points and maps each GPS point (including the GPS points in the input trajectory $\tau$) onto the road network to obtain the real GPS locations of the moving trajectory. Note that the sample interval $\epsilon_\rho$ of the recovered trajectory $\rho$ must be much smaller than
that of the given \textit{Raw GPS Trajectory} $\epsilon_\tau$.
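As a minimal illustration of the (road segment, moving ratio) representation in Definition~2, the sketch below recovers coordinates from a pair $(e_j, r_j)$, treating the segment as a straight line between its endpoints; real road segments are polylines, and the coordinates used here are made up.

```python
# Recover the location of a map-matched point from its road segment e_j and
# moving ratio r_j (Definition 2). The segment is simplified to a straight
# line between two endpoints; real segments are polylines. The coordinates
# below are hypothetical.

segments = {"e1": ((31.100, 121.300), (31.104, 121.308))}  # id -> (start, end)

def locate(seg_id, r):
    """r in [0, 1): fraction of the segment's length already travelled."""
    (la1, lo1), (la2, lo2) = segments[seg_id]
    return (la1 + r * (la2 - la1), lo1 + r * (lo2 - lo1))

print(locate("e1", 0.5))  # midpoint of e1
```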
\section{Methodology}
\begin{figure*}[htbp]
\centering
\includegraphics[width=18cm]{fig/Fig2-v7.pdf}
\vspace{-0.1in}
\caption{The framework of RNTrajRec.
(a) The architecture of GridGNN. For each road segment, a grid GRU cell is used to aggregate the grid sequence, followed by a stack of $M$ GAT modules to capture the spatial information. (b) The architecture of GPSFormer. Given a GPS trajectory, GPSFormer first extracts the road network features around each GPS point through the Sub-Graph Generation module. The features along with the sub-graphs are passed through several spatial-temporal transformer layers to learn the spatial and temporal features. (c) The architecture of the decoder model. Given the output from the encoder model, an attention module is adopted to compute the similarity between the hidden-state vector of the GRU cell and the outputs of the encoder model, and to generate the input hidden-state vectors $a^{(j)}$ for timestamp $j$. A multi-task learning module is proposed that first predicts the target road segment $e_j$, followed by a regression task for predicting the moving ratio on the predicted road segment, i.e., $r_j$. The entire model is trained end-to-end with the Adam optimizer.}
\label{fig2}
\vspace{-0.15in}
\end{figure*}
This section introduces our proposed RNTrajRec,
with the overall framework presented in Fig.~\ref{fig2}.
\subsection{Model Overview}
The first component of RNTrajRec is GridGNN, a grid-partitioned road network representation module that learns spatial features for each road segment, as shown in Fig.~\ref{fig2}(a).
Given a road network $G=\left(V,E\right)$,
GridGNN learns rich road network features $X^{road} \in \mathbb{R}^{|V| \times d}$, where $d$ is the hidden size of the model.
The second component of RNTrajRec is GPSFormer, a spatial-temporal transformer based GPS trajectory encoder that encodes raw GPS points $\langle p_1, p_2, \cdots, p_
{l_\tau}\rangle$ in a trajectory $\tau$ into hidden vectors, as shown in Fig.~\ref{fig2}(b).
To obtain the input of GPSFormer, we first extract the road network features around each GPS point $p_i$ through the Sub-Graph Generation module. After the generation process, each GPS point $p_i\in \tau$ is represented as a weighted directed sub-graph $\hat{G}_{\tau, i}=\left(V_{\tau, i}, E_{\tau, i}, W_{\tau, i}\right)$, where $V_{\tau, i}$ captures the road segments selected by the module that surround the GPS point $p_i\in \tau$, $E_{\tau, i}$ is the set of edges in the selected sub-graph of the road network, and $W_{\tau, i}$ refers to the set of weights between $p_i$ and each selected road segment in the sub-graph. Note that we use $\hat{G}$ instead of $G$ to represent a sub-graph. Each generated sub-graph gathers road network features from $X^{road}$ to form its initial representation, i.e. $\vec{Z}_{\tau, i}^{(0)} \in \mathbb{R}^{|V_{\tau, i}| \times d}$. We further perform weighted mean pooling on graph to get the input representation of the trajectory $\tau$, i.e. $\hat{H}_{\tau}^{(0)} \in \mathbb{R}^{l_\tau \times d}$.
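The weighted mean pooling step above can be sketched as follows; this is a plain-Python stand-in for the tensor implementation, and the node features and weights are made up.

```python
# Weighted mean pooling over one GPS point's sub-graph: the initial feature
# of point p_i is the average of the hidden vectors of the road segments in
# its sub-graph, weighted by W_{tau,i}. Plain-Python stand-in for the tensor
# version; the numbers are made up.

def weighted_mean_pool(Z, w):
    """Z: list of node feature vectors; w: one weight per node."""
    s = sum(w)
    d = len(Z[0])
    return [sum(wi * zi[k] for wi, zi in zip(w, Z)) / s for k in range(d)]

Z = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # 3 road segments, hidden size d = 2
w = [0.5, 0.3, 0.2]                       # weights of each segment w.r.t. p_i
print(weighted_mean_pool(Z, w))           # [0.7, 0.5]
```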
We then forward a mini-batch of $b$ trajectory features
along with the sub-graph structure
into $N$ stacked GPSFormerBlock layers, which is a combination of transformer encoder layer for temporal modeling and graph refinement layer, namely GRL, for spatial modeling.
The last component of RNTrajRec is a decoder model proposed in \cite{ren2021mtrajrec}, specifically designed for the trajectory recovery task.
Given the outputs of a mini-batch of $b$ trajectories from the encoder model,
i.e. $\vec{H}^{(N)} \in \mathbb{R}^{b \times l_\tau \times d}$, the decoder model
first uses an attention module to calculate the similarity between the hidden-state vectors of the Gated Recurrent Unit (GRU)~\cite{cho2014properties} cell (i.e., the query vectors) and the outputs of the encoder model (i.e., the key vectors) to obtain the input hidden vector $a^{(j)}$ at the $j^{th}$ timestamp. Furthermore, a multi-task learning module is proposed specifically for the trajectory recovery task that first predicts the target road segment $e_j$ and then predicts the corresponding moving ratio $r_j$ via a regression task, which will be detailed in Section~\ref{GCL}.
\subsection{Road Network Representation: GridGNN}
As stated in Section~\ref{sec:intro}, the road network structure is essential for the task of trajectory recovery. GridGNN is proposed to capture the spatial features of the road network. It partitions the road network into $m \times n$ equal-sized grid cells. Accordingly, each road segment can be represented as a sequence of grids that the road segment passes through.
Formally speaking, we build a grid embedding table $\Sigma^{grid} \in \mathbb{R}^{m \times n \times d}$ for each grid cell. Similarly, given the road network $G=\left(V,E\right)$, we create a road segment embedding table $\Sigma^{road} \in \mathbb{R}^{|V| \times d}$, where $|V|$ is the total number of road segments in the road network.
For each road segment $e_i\in V$, let $S_i=\langle \tilde{g}_{i}^{1}, \tilde{g}_{i}^{2},...,\tilde{g}_{i}^{\phi_i} \rangle$ be a sequence of $\phi_i$ grids passed through by $e_i$.
Since the grid sequence of each road segment has sequential dependencies, we use GRU cell to model the grid-level representation. That is, for each road segment $e_i$ and its corresponding grid sequence $S_i$, the grid-level hidden-state vector is given by:
\begin{equation}
\begin{array}{rl}
g_{i}^{(j)} = & \operatorname{lookup}(\tilde{g}_{i}^{j}.x, \tilde{g}_{i}^{j}.y) \\
z_{i}^{(j)} = & \sigma\left(\mathbf{W}_z \cdot \left[ s_{i}^{(j-1)}, g_{i}^{(j)} \right] + \mathbf{b}_z \right) \\
r_{i}^{(j)} = & \sigma\left(\mathbf{W}_r \cdot \left[ s_{i}^{(j-1)}, g_{i}^{(j)} \right] + \mathbf{b}_r \right) \\
c_{i}^{(j)} = & \tanh \left(\mathbf{W}_c \cdot \left[ r_{i}^{(j)} * s_{i}^{(j-1)}, g_{i}^{(j)} \right] + \mathbf{b}_c \right) \\
s_i^{(j)} = & \left(1 - z_{i}^{(j)} \right) * s_{i}^{(j-1)} + z_{i}^{(j)} * c_{i}^{(j)} \\
\end{array}
\label{eq2}
\end{equation}
Here, $j \in \{1, 2,\cdots, \phi_{i}\}$, $\operatorname{lookup}\left(i,j\right)$ retrieves the grid embedding at position $\left(i,j\right)$ from the grid embedding table $\Sigma^{grid}$, $\mathbf{W}_x$ and $\mathbf{b}_x$ denote the weight and bias of gate $x$, and $\sigma\left(\cdot\right)$ is the gating function, implemented as the sigmoid function. The initial embedding $r_i^{(0)}$ for each road segment is:
\begin{equation}
r_i^{(0)}=\operatorname{ReLU}(s_i^{(\phi_i)} + \sigma_i^{road})
\label{eq3}
\end{equation}
Here, $\sigma_i^{road} \in \mathbb{R}^{d}$ is the road segment embedding of the $i^{th}$ road segment from $\Sigma^{road}$.
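The recurrence in Eq.~\eqref{eq2} and the fusion in Eq.~\eqref{eq3} can be sketched in plain Python as follows. This is a minimal illustration only: the tiny hidden size, the random weights, and the helper names (\texttt{rand\_mat}, \texttt{gru\_step}) are stand-ins, not part of the actual implementation.

```python
import math
import random

random.seed(0)
d = 4  # toy hidden size, for illustration only

def rand_mat(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def sigmoid(v):
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

# GRU weights act on the concatenation [s_{j-1}, g_j] of size 2d (Eq. 2)
Wz, Wr, Wc = rand_mat(d, 2 * d), rand_mat(d, 2 * d), rand_mat(d, 2 * d)

def gru_step(s, g):
    cat = s + g                                    # [s_{j-1}, g_j]
    z = sigmoid(matvec(Wz, cat))                   # update gate z_j
    r = sigmoid(matvec(Wr, cat))                   # reset gate r_j
    gated = [ri * si for ri, si in zip(r, s)] + g  # [r_j * s_{j-1}, g_j]
    c = [math.tanh(x) for x in matvec(Wc, gated)]  # candidate state c_j
    return [(1 - zi) * si + zi * ci for zi, si, ci in zip(z, s, c)]

def road_init_embedding(grid_seq, road_emb):
    """Run the GRU over the grid sequence, then fuse with the road
    segment embedding as in Eq. (3): ReLU(s^(phi) + sigma_road)."""
    s = [0.0] * d
    for g in grid_seq:
        s = gru_step(s, g)
    return [max(0.0, si + ei) for si, ei in zip(s, road_emb)]

grid_seq = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(3)]
road_emb = [random.uniform(-1, 1) for _ in range(d)]
r0 = road_init_embedding(grid_seq, road_emb)
```

The ReLU in the last step guarantees the resulting initial road segment embedding is non-negative.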
The obtained hidden-state vector $r_i^{(0)}$ considers each road segment independently and hence does not capture the topology of the road network. To enhance the representation of the road network, we integrate a GNN. Specifically, we stack $M$ layers of the Graph Attention Network (GAT)
module, which uses multi-head attention to efficiently learn the complex graph structure, to obtain the final hidden-state vectors $\hat{X}^{road} \in \mathbb{R}^{|V| \times d}$:
\begin{eqnarray}
a_{ij, k}^{(l)}=&\frac{\exp \left(\operatorname{LeakyReLU}\left(\overrightarrow{\mathrm{a^k}}\left[\mathbf{\widehat{W}}^k r_{i}^{(l-1)} \| \mathbf{\widehat{W}}^k r_j^{(l-1)} \right]\right)\right)}{\sum_{n \in \mathcal{N}_{i}} \exp \left(\operatorname{LeakyReLU}\left(\overrightarrow{\mathrm{a^k}}\left[\mathbf{\widehat{W}}^k r_{i}^{(l-1)} \| \mathbf{\widehat{W}}^k r_n^{(l-1)}\right]\right)\right)} \label{eq5}\\
r_{i}^{(l)}=&\Vert_{k=1}^{h} \operatorname{LeakyReLU}\left(\sum\nolimits_{j \in \mathcal{N}_{i}} a_{ij,k}^{(l)} \mathbf{W}^{k} r_{j}^{(l-1)}\right) \quad \quad \
\label{eq6}
\end{eqnarray}
Here, $l \in \{1, 2, \cdots, M\}$. For the $k^{th}$ attention head, $a_{ij, k}^{(l)}$ represents the attention score between road segments $e_i$ and $e_j$ at the $l^{th}$ layer, $\overrightarrow{\mathrm{a}^k}$ are learnable weights to obtain attention scores, and $\mathbf{W}^k$ and $\mathbf{\widehat{W}}^k$ are learnable weights for feature transformation. $\left[\cdot\|\cdot \right]$ represents the concatenation operation, and $\mathcal{N}_i$ represents the neighborhood of road segment $e_i$ in the road network.
The final road network representation is given by the concatenation of $\hat{X}^{road}=\{ r_i^{(M)} \}$ for each $e_i \in V$ and the static features $f_s^{road} \in \mathbb{R}^{|V| \times f_r}$ (i.e. length of the road segment, number of in/out-going edges, etc), followed by linear transformation to obtain $d$ dimension vectors $X^{road}$.
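A single-head version of the GAT layer in Eq.~\eqref{eq5}--\eqref{eq6} can be sketched as below. This is a pure-Python illustration: the random weights, the three-node toy graph, and the helper names are assumptions for the sketch, not the actual implementation.

```python
import math
import random

random.seed(1)
d = 4  # toy feature size

def rand_mat(r, c):
    return [[random.uniform(-0.5, 0.5) for _ in range(c)] for _ in range(r)]

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def leaky_relu(x, slope=0.2):
    return x if x > 0 else slope * x

W_hat = rand_mat(d, d)  # feature transform inside the attention score (Eq. 5)
W = rand_mat(d, d)      # feature transform for aggregation (Eq. 6)
a = [random.uniform(-0.5, 0.5) for _ in range(2 * d)]  # attention vector a^k

def gat_layer(features, neighbors):
    """One single-head GAT layer: softmax attention over each neighborhood."""
    out = []
    for i, nbrs in enumerate(neighbors):
        hi = matvec(W_hat, features[i])
        # un-normalized scores: LeakyReLU(a^T [W_hat h_i || W_hat h_j])
        scores = []
        for j in nbrs:
            hj = matvec(W_hat, features[j])
            scores.append(leaky_relu(sum(ak * xk for ak, xk in zip(a, hi + hj))))
        m = max(scores)
        exp_s = [math.exp(s - m) for s in scores]
        alphas = [e / sum(exp_s) for e in exp_s]  # softmax over the neighborhood
        # aggregate: LeakyReLU(sum_j alpha_ij * W h_j)
        agg = [0.0] * d
        for alpha, j in zip(alphas, nbrs):
            hj = matvec(W, features[j])
            agg = [s + alpha * x for s, x in zip(agg, hj)]
        out.append([leaky_relu(x) for x in agg])
    return out

feats = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(3)]
nbrs = [[0, 1], [0, 1, 2], [1, 2]]  # each node attends over its neighborhood
new_feats = gat_layer(feats, nbrs)
```

Multi-head attention simply runs $h$ independent copies of this layer and concatenates their outputs.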
\subsection{Sub-Graph Generation} \label{SG}
Given the representation $X^{road} \in \mathbb{R}^{|V| \times d}$ of the road network, a straightforward way to obtain the representation of a GPS point is to average the embeddings of the road segments around the GPS point. However, this approach suffers from two problems. a) The influence of the nearby road segments on the given GPS point varies significantly. b) For each GPS point in a trajectory, its surrounding sub-graph structure is important for understanding the movement of the trajectory. Therefore, it is necessary to take the graph structure into consideration.
For a given GPS point $p$, we first
locate the road segments at most $\delta$ meters away from $p$, via an R-tree~\cite{guttman1984r} or any other spatial index. Note that $\delta$ is a hyper-parameter that controls the receptive field of the GPS point.
Assume in total $\omega_p$ road segments are returned, denoted as $\{e_{p}^1, e_{p}^2, \cdots, e_p^{\omega_p}\}$.
We further follow the road network $G=(V,E)$ to connect these $\omega_p$ road segments into a sub-graph $\tilde{G}^p=\left(V^p, E^p\right)$, where $V^p=\{e_{p}^1, e_{p}^2, \cdots, e_p^{\omega_p}\}$ and $E^p = (V^p \times V^p) \cap E$.
Following \cite{ren2021mtrajrec}, we use the exponential function to model the influence of road segment $e$ on the given GPS point $p$, as defined in Eq.~\eqref{eq7}.
Here, $\operatorname{dist}(e, p)$ represents the distance between the GPS point $p$ and the road segment $e$, i.e. spherical distance between the GPS point $p$ and its projection to the road segment $e$, and $\gamma$ is a hyper-parameter with respect to the road network. To this end, we obtain the weighted sub-graph of a given GPS point, i.e. $\tilde{G}^p=\left(V^p, E^p, W^p\right)$, where $W^p$ can be derived by Eq.~\eqref{eq7}.
\begin{eqnarray}
\omega(e, p) &=& \exp \left( {-\operatorname{dist}^2(e, p)}/{\gamma^2} \right)
\label{eq7}\\
g^p &=& {\sum\nolimits_{\hat{e} \in V^p} W_{\hat{e}}^p * x_{\hat{e}}^{road}}/{\sum\nolimits_{\hat{e} \in V^p} W_{\hat{e}}^p}
\label{eq8}
\end{eqnarray}
We use mean pooling on graph to get the representation of a given GPS point $p$, as defined in Eq.~\eqref{eq8}.
Here, $x_{\hat{e}}^{road}$ represents the road segment representation of road segment $\hat{e}$ obtained from $X^{road}$ and $W_{\hat{e}}^p$ represents the influence of road segment $\hat{e}$ on the GPS point $p$, i.e. $W_{\hat{e}}^p=\omega(\hat{e}, p)$.
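Eq.~\eqref{eq7} and Eq.~\eqref{eq8} amount to a distance-weighted soft assignment of a GPS point to its nearby road segments. A minimal sketch follows; the distances, the value $\gamma=30$, and the one-hot segment embeddings are made-up values for illustration:

```python
import math

def influence(dist, gamma):
    """Eq. (7): exponentially decaying influence of a road segment
    at distance `dist` (meters) on a GPS point."""
    return math.exp(-(dist ** 2) / (gamma ** 2))

def gps_point_embedding(seg_embs, dists, gamma=30.0):
    """Eq. (8): weighted mean pooling over the sub-graph around one GPS
    point; seg_embs[i] is the representation of the i-th candidate segment."""
    weights = [influence(dv, gamma) for dv in dists]
    total = sum(weights)
    d = len(seg_embs[0])
    return [sum(w * emb[k] for w, emb in zip(weights, seg_embs)) / total
            for k in range(d)]

# Two candidate road segments at 10m and 50m from the GPS point:
# the closer one dominates the pooled representation.
segs = [[1.0, 0.0], [0.0, 1.0]]
g = gps_point_embedding(segs, [10.0, 50.0])
```

Because the weights are normalized, the result is a convex combination of the segment embeddings, so a nearby segment always outweighs a distant one.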
Given a trajectory $\tau=\langle \left(p_1, t_1\right), \left(p_2, t_2\right), \cdots, \left(p_{l_\tau}, t_{l_\tau}\right) \rangle$,
its hidden-state vector $H^{traj}_{\tau} \in \mathbb{R}^{{l_\tau} \times \left(d+3\right)}$ concatenates the road network empowered GPS representation $g_{\tau} =\langle g^{p_1}$,$g^{p_2}$, $\cdots$, $g^{p_{l_\tau}} \rangle$ $\in \mathbb{R}^{l_\tau \times d}$, timestamp sequence $\hat{t}_{\tau}=$ $\langle t_1$, $t_2$, $\cdots$, $t_{l_\tau} \rangle$ $\in \mathbb{R}^{l_\tau \times 1}$ and grid index $\hat{g}_{\tau}=$ $\langle (x_1, y_1)$, $(x_2, y_2)$, $\cdots$, $(x_{l_\tau}, y_{l_\tau}) \rangle \in \mathbb{R}^{l_\tau \times 2}$, where $(x_i, y_i)$ represents the index of the grid that the GPS point $p_i$ falls inside. Finally, we map $H^{traj}_{\tau}$ into $d$ dimension vectors $\hat{H}_{\tau}^{(0)}$ through linear transformation.
For the sub-graph input of the given trajectory $\tau$, $\hat{G}_{\tau}=$ $\langle \hat{G}_{\tau, 1}$, $\hat{G}_{\tau, 2}$, $\cdots, \hat{G}_{\tau, l_{\tau}} \rangle$, where $\hat{G}_{\tau, i}=\tilde{G}^{p_i}$ represents the generated weighted sub-graph for GPS point $p_i$. The initial graph-level sequence $\vec{Z}_{\tau}^{(0)} = \langle \vec{Z}_{\tau, 1}^{(0)}, \vec{Z}_{\tau, 2}^{(0)}, \cdots, \vec{Z}_{\tau, l_{\tau}}^{(0)} \rangle$,
where $\vec{Z}_{\tau, i}^{(0)}=\{ x_{\hat{e}}^{road} \}$ for every road segment $\hat{e} \in V_{\tau, i}$.
\subsection{Graph Refinement Layer (GRL)} \label{GR}
As mentioned in Section~\ref{SG}, the graph structure is significant for representing GPS points in a spatial network. To this end, we propose a graph refinement layer (GRL) that is capable of capturing rich spatial features around each GPS point.
\begin{figure}[htbp]
\centerline{\includegraphics[width=8cm]{fig/Fig3-v4.pdf}}
\vspace{-0.1in}
\caption{The framework of the Graph Refinement Layer (GRL) module.}
\label{fig3}
\vspace{-0.05in}
\end{figure}
Fig.~\ref{fig3} shows the architecture of the proposed graph refinement layer. Though it is inspired by the encoder of the transformer model, there are five major differences between the transformer encoder and the newly proposed GRL. a) The transformer encoder captures the temporal features across the sequence, while GRL captures the spatial features and uses the output of the transformer encoder to update the local features. b) The transformer encoder only takes hidden vectors as input, while GRL takes both the hidden vectors and graph structures as input. c) Motivated by \cite{dwivedi2020generalization}, which demonstrates that batch normalization\cite{ioffe2015batch} is more suitable for graph transformers, GRL replaces layer normalization\cite{vaswani2017attention} with a newly proposed graph normalization, which can be viewed as an extension of batch normalization for graph representations with temporal dependency, to normalize the hidden vectors.
d) GRL replaces multi-head attention in transformer encoder with gated fusion\cite{zheng2020gman} to adaptively fuse the input hidden vectors and the node features in the graph structure. e) GRL further replaces the feed forward in transformer encoder with graph forward to capture the rich spatial features around GPS points.
Assume the output hidden vectors at layer $l$ are $\vec{H}_{\tau}^{(l)} \in \mathbb{R}^{l_\tau \times d}$, and the corresponding output graph hidden vectors are $\vec{Z}_{\tau}^{(l)}$, where each sub-graph features $\vec{Z}_{\tau, i}^{(l)} \in \mathbb{R}^{|V_{\tau, i}| \times d}$. Following \cite{vaswani2017attention}, we employ a residual connection for both gated fusion sub-layer and graph forward sub-layer,
followed by graph normalization. That is, the output of each sub-layer is given by $\operatorname{GraphNorm}\left(x + \operatorname{SubLayer}\left(x \right)\right)$, where $\operatorname{SubLayer}$ represents a function that can be either $\operatorname{GatedFusion}$ or $\operatorname{GraphForward}$. $\operatorname{GraphForward}$ is implemented as a stack of $P$ standard GAT modules defined in Eq.~\eqref{eq5} and Eq.~\eqref{eq6}, while $\operatorname{GatedFusion}$ and $\operatorname{GraphNorm}$ are detailed below.
\subsubsection{Gated Fusion} \label{GF}
The hidden-state vectors from the output of transformer encoder layer, i.e. $\vec{Tr}_{\tau}^{(l)}$, capture rich temporal features of the given trajectory, while the graph structure, i.e. $\hat{G}_{\tau}$, captures rich spatial information of each GPS point in the trajectory. Therefore, it is necessary to design a fusion mechanism to adaptively fuse the spatial and temporal features. Inspired by \cite{zheng2020gman}, we propose to use gated fusion to combine the hidden vectors at the $i^{th}$ timestamp, i.e. $tr_{\tau, i}^{(l)} \in \mathbb{R}^{d}$ with the corresponding graph features $\vec{Z}_{\tau, i}^{(l-1)} \in \mathbb{R}^{|V_{\tau, i}| \times d}$:
\begin{eqnarray}
\tilde{Z}_{\tau, i}^{(l)} &=& z_{\tau, i}^{(l)} \odot \hat{tr}_{\tau, i}^{(l)} + \left(1 - z_{\tau, i}^{(l)}\right) \odot \vec{Z}_{\tau, i}^{(l-1)}\nonumber\\
z_{\tau, i}^{(l)} &=& \sigma \left( \hat{tr}_{\tau, i}^{(l)} \mathbf{W}_{z, 1} + \vec{Z}_{\tau, i}^{(l-1)} \mathbf{W}_{z, 2} + \mathbf{b}_z \right)
\label{eq11}
\end{eqnarray}
Here, $\hat{tr}_{\tau, i}^{(l)} \in \mathbb{R}^{|V_{\tau, i}| \times d}$ repeats $tr_{\tau, i}^{(l)}$ for $|V_{\tau, i}|$ times to ensure that $\hat{tr}_{\tau, i}^{(l)}$ and $\vec{Z}_{\tau, i}^{(l-1)}$ share the same size, $\mathbf{W}_{z, 1}, \mathbf{W}_{z, 2}$ and $\mathbf{b}_z$ represent the learnable weights in the module and $\sigma\left(\cdot \right)$ is the gated activation function, which is implemented as a sigmoid activation function.
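The gated fusion of Eq.~\eqref{eq11} is an element-wise convex combination controlled by a learned gate. A minimal sketch with random stand-in weights (the tiny dimension and the toy inputs are illustrative assumptions):

```python
import math
import random

random.seed(2)
d = 3  # toy hidden size

def rand_mat(r, c):
    return [[random.uniform(-0.5, 0.5) for _ in range(c)] for _ in range(r)]

def vecmat(x, W):  # row vector times matrix
    return [sum(xi * W[i][j] for i, xi in enumerate(x)) for j in range(len(W[0]))]

Wz1, Wz2 = rand_mat(d, d), rand_mat(d, d)
bz = [0.0] * d

def gated_fusion(tr, Z):
    """Eq. (11): fuse the temporal vector tr (broadcast over the |V| nodes
    of the sub-graph) with each node feature in Z via a sigmoid gate."""
    out = []
    for z_node in Z:
        pre = [a + b + c for a, b, c in
               zip(vecmat(tr, Wz1), vecmat(z_node, Wz2), bz)]
        gate = [1.0 / (1.0 + math.exp(-p)) for p in pre]
        out.append([g * t + (1 - g) * zn for g, t, zn in zip(gate, tr, z_node)])
    return out

tr = [0.5, -0.2, 0.1]                   # transformer output at one timestamp
Z = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]  # two nodes in the sub-graph
fused = gated_fusion(tr, Z)
```

Since the gate lies in $(0,1)$, every fused coordinate falls between the corresponding temporal and spatial inputs.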
\subsubsection{Graph Norm} \label{GN}
Inspired by \cite{dwivedi2020generalization}, we replace layer normalization with graph normalization to normalize graph features within a mini-batch. A mini-batch of input graph structures can be represented as $\{ \hat{G}_{\tau_1}, \hat{G}_{\tau_2}, \cdots, \hat{G}_{\tau_b} \}$ with hidden-state graph features $\{ \tilde{Z}_{\tau_1}^{(l)}, \tilde{Z}_{\tau_2}^{(l)}, \cdots, \tilde{Z}_{\tau_b}^{(l)}\}$, which can be the output of either $\operatorname{GatedFusion}$ sub-layer or $\operatorname{GraphForward}$ sub-layer, where $\tilde{Z}_{\tau}^{(l)}=\{ \tilde{Z}_{\tau, i}^{(l)} \}$ for each $i \in \{1,2,\cdots,l_\tau\}$ and $b$ represents the batch size.
We first perform mean pooling to obtain graph feature for each sub-graph via Eq.~\eqref{eq12}.
Here, $M^{(l)}=\{ m_{\tau_i, j}^{(l)} \}$ for each $i \in \{1, 2,\cdots,b\}$ and $j \in \{ 1, 2, \cdots, l_\tau\}$. Note that we assume trajectories $\tau$ in a mini-batch share the same length $l_\tau$.
\begin{equation}
m_{\tau_i, j}^{(l)}=\frac{1}{|V_{\tau_i, j}|} \sum\nolimits_{k=1}^{|V_{\tau_i, j}|} \tilde{Z}_{\tau_i, j, k}^{(l)}
\label{eq12}
\end{equation}
We then perform batch normalization on $M^{(l)} \in \mathbb{R}^{b \times l_\tau \times d}$ with the features $\{ \tilde{Z}_{\tau_1}^{(l)}, \tilde{Z}_{\tau_2}^{(l)}, \cdots, \tilde{Z}_{\tau_b}^{(l)}\}$ to get the output $\vec{Z}_{\tau}^{(l)}$:
\begin{equation}
\begin{array}{rl}
\mu_{\mathcal{B}} = & \frac{1}{b \times l_{\tau}} \sum_{i=1}^{b} \sum_{j=1}^{l_{\tau}} m_{\tau_i, j}^{(l)} \vspace{2ex} \\
\sigma_{\mathcal{B}} = & \frac{1}{\sum_{i=1}^{b}\sum_{j=1}^{l_{\tau}} |V_{\tau_i, j}|} \sum_{i=1}^{b}\sum_{j=1}^{l_{\tau}}\sum_{k=1}^{|V_{\tau_i, j}|} \left( \tilde{Z}_{\tau_i, j, k}^{(l)} - \mu_{\mathcal{B}} \right)^2 \vspace{2ex} \\
\widetilde{Z}_{\tau}^{(l)} = & \frac{\tilde{Z}_{\tau}^{(l)} - \mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}} + \epsilon}} \vspace{2ex} \\
\vec{Z}_{\tau}^{(l)} = & \gamma_{\mathcal{B}} \widetilde{Z}_{\tau}^{(l)} + \beta_{\mathcal{B}}
\end{array}
\label{eq13}
\end{equation}
Here, $\gamma_{\mathcal{B}}$ and $\beta_{\mathcal{B}}$ are learnable parameters for scalar transformation and shift transformation in batch normalization respectively, and $\vec{Z}_{\tau}^{(l)}$ is the output of graph normalization.
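A scalar-feature ($d=1$) sketch of graph normalization (Eq.~\eqref{eq12}--\eqref{eq13}): the mean is taken over the pooled sub-graph features, while the variance is taken over all node features. The tiny batch below is an illustrative assumption:

```python
# graphs[i][j][k] is the k-th node feature (a scalar here, i.e. d = 1)
# of the j-th sub-graph of the i-th trajectory in the mini-batch
def graph_norm(graphs, gamma=1.0, beta=0.0, eps=1e-5):
    # Eq. (12): mean-pool each sub-graph, then average the pooled values
    pooled = [sum(g) / len(g) for traj in graphs for g in traj]
    mu = sum(pooled) / len(pooled)
    # Eq. (13): variance over *all* node features, then normalize each node
    nodes = [x for traj in graphs for g in traj for x in g]
    var = sum((x - mu) ** 2 for x in nodes) / len(nodes)
    scale = (var + eps) ** 0.5
    return [[[gamma * (x - mu) / scale + beta for x in g] for g in traj]
            for traj in graphs]

batch = [[[1.0, 3.0], [2.0]], [[4.0, 2.0], [0.0]]]  # 2 trajectories x 2 sub-graphs
out = graph_norm(batch)
```

Nodes above the graph-level mean map to positive values and nodes below it to negative values, which is what distinguishes this from per-node batch normalization.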
\subsection{Transformer Encoder Layer} \label{Trans}
Transformer encoder layer contains a multi-head attention sub-layer and a feed forward sub-layer with a residual connection for each of the two sub-layers, followed by layer normalization. That is, the output of each sub-layer is given by $\operatorname{LayerNorm}\left(x + \operatorname{SubLayer}\left(x \right)\right)$, where $\operatorname{SubLayer}$ represents a function that can be either $\operatorname{MultiHead}$ or $\operatorname{FFN}$.
\subsubsection{Multi-Head Attention}
Given a sequence input $X \in \mathbb{R}^{L \times d}$ with $L$ the length of the sequence, multi-head attention is given by:
\begin{eqnarray}
\operatorname{MultiHead}\left(Q, K, V\right) &=& \operatorname{Concat}\left(head_1, \cdots, head_h\right)W^O \nonumber \\
head_i &=& \operatorname{Attention}\left(QW_i^{Q}, KW_i^{K}, VW_{i}^{V}\right) \nonumber \\
\operatorname{Attention}\left(Q, K, V\right) &=& \operatorname{softmax} \left(\frac{QK^T}{\sqrt{d}}\right)V
\label{eq16}
\end{eqnarray}
Here, $Q$, $K$ and $V$ refer to the query, the keys and the values for the input $X$ respectively, $W_{i}^{Q}$, $W_{i}^{K}$ and $W_{i}^{V}$ are the learnable parameters of the $i^{th}$ attention head for the query, the keys and the values respectively, $W^O$ is a learnable parameter for the output, and $h$ captures the number of attention heads.
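A single-head, pure-Python sketch of the scaled dot-product attention in Eq.~\eqref{eq16}; the toy $Q$, $K$, $V$ matrices are illustrative:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Eq. (16): scaled dot-product attention, single head.
    Q, K, V are lists of d-dimensional row vectors."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        alphas = softmax(scores)
        out.append([sum(a * v[j] for a, v in zip(alphas, V))
                    for j in range(len(V[0]))])
    return out

# A query identical to the first key attends almost entirely to the first value.
Q = [[10.0, 0.0]]
K = [[10.0, 0.0], [0.0, 10.0]]
V = [[1.0, 0.0], [0.0, 1.0]]
out = attention(Q, K, V)
```

The multi-head variant runs $h$ such attentions with different projections and concatenates the results before the output projection $W^O$.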
\subsubsection{Feed Forward}
The feed-forward network is a fully connected network with the ReLU activation function:
\begin{equation}
\operatorname{FFN}\left(x\right) =\operatorname{ReLU}\left(xW_1 + b_1\right)W_2 + b_2
\label{eq17}
\end{equation}
Here, $W_1$ and $W_2$ are two learnable weights, and $b_1$ and $b_2$ are the bias for the layer.
\subsection{GPSFormer}
GPSFormer first extracts spatial features for each GPS point in the trajectory through the Sub-Graph Generation module introduced in Section~\ref{SG}, and then applies $N$ layers of GPSFormerBlock to capture rich trajectory patterns.
Given a mini-batch of $b$ trajectories $\{ \tau_1, \tau_2, \cdots, \tau_b\}$, we first obtain the initial hidden-state vectors $\widehat{H}_{\tau}^{(0)}=$ $\{\hat{H}_{\tau_1}^{(0)}$, $\hat{H}_{\tau_2}^{(0)}$, $\cdots$, $\hat{H}_{\tau_b}^{(0)}\}$ $\in \mathbb{R}^{b \times l_\tau \times d}$, accompanied by the initial graph structure $\widehat{G}_{\tau} =$ $\{ \hat{G}_{\tau_1}$, $\hat{G}_{\tau_2}$, $\cdots,\hat{G}_{\tau_b} \}$ and initial graph features $\vec{Z}_{\tau}^{(0)}=$ $\{ \vec{Z}_{\tau_1}^{(0)}$, $\vec{Z}_{\tau_2}^{(0)}$, $\cdots, \vec{Z}_{\tau_b}^{(0)} \}$, through the Sub-Graph Generation module.
After that, we add the position embedding\cite{vaswani2017attention} to $\widehat{H}_{\tau}^{(0)}$ to obtain the input to the transformer encoder:
\begin{equation}
\vec{H}_{\tau}^{(0)} = \widehat{H}_{\tau}^{(0)} + PE\left(\widehat{H}_{\tau}^{(0)}\right)
\label{eq18}
\end{equation}
Next, we input the hidden-state vectors and graph structures to a stack of $N$ GPSFormerBlock layers:
\begin{equation}
\begin{array}{rl}
\vec{Tr}_{\tau}^{(l)} = & \operatorname{TransformerEncoder}\left( \vec{H}_{\tau}^{(l-1)} \right) \\
\vec{Z}_{\tau}^{(l)} = & \operatorname{GraphRefinement} \left( \vec{Tr}_{\tau}^{(l)}, \vec{Z}_{\tau}^{(l-1)}, \widehat{G}_{\tau} \right) \\
\vec{H}_{\tau}^{(l)} = & \operatorname{GraphReadout} \left(\vec{Z}_{\tau}^{(l)}, \widehat{G}_{\tau} \right)
\end{array}
\label{eq19}
\end{equation}
Here, $\operatorname{TransformerEncoder}$ and $\operatorname{GraphRefinement}$ stand for the transformer encoder layer and the graph refinement layer introduced in Section~\ref{Trans} and Section~\ref{GR} respectively, and $\operatorname{GraphReadout}$ represents graph mean pooling operation similar to Eq.~\eqref{eq12}. The final representation for a mini-batch trajectories is $H_{\tau}^{traj} = \vec{H}_{\tau}^{(N)} \in \mathbb{R}^{b \times l_\tau \times d}$. Besides, we concatenate the mean pooling of $H_{\tau}^{traj}$ with the environmental contexts $f^e_{\tau} \in \mathbb{R}^{b \times f_t}$ (e.g. hour of the day, holiday or not, etc), followed by linear transformation to obtain trajectory-level hidden-state vector $\hat{h}_{\tau}^{traj} \in \mathbb{R}^{b \times d}$.
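The control flow of Eq.~\eqref{eq19} can be sketched as below. The stub layers are placeholders that only exercise the encode-refine-readout loop, not the real transformer and GRL layers; the scalar features and the toy inputs are illustrative assumptions:

```python
def gpsformer(h0, z0, graphs, blocks):
    """Control flow of Eq. (19): alternate a temporal encoder, a graph
    refinement step, and a graph readout over N stacked blocks."""
    h, z = h0, z0
    for encoder, refine, readout in blocks:
        tr = encoder(h)            # TransformerEncoder(H^(l-1))
        z = refine(tr, z, graphs)  # GraphRefinement(Tr^(l), Z^(l-1), G)
        h = readout(z, graphs)     # GraphReadout(Z^(l), G)
    return h

# identity stubs stand in for the real layers, just to exercise the loop
stub_block = (
    lambda h: h,                                            # encoder stub
    lambda tr, z, g: z,                                     # refinement stub
    lambda z, g: [sum(nodes) / len(nodes) for nodes in z],  # mean-pool readout
)

h0 = [0.0, 0.0]                # two timestamps, scalar features for brevity
z0 = [[1.0, 3.0], [2.0, 4.0]]  # node features of each timestamp's sub-graph
out = gpsformer(h0, z0, None, [stub_block] * 2)
```

The readout stub mirrors the graph mean pooling of Eq.~\eqref{eq12}, so the per-timestamp outputs are the means of the corresponding sub-graph node features.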
\subsection{Decoder Model}
Let $H_{\tau}^{traj}$ be the outputs of the encoder model. For simplicity, we denote the representation of a single trajectory as $h^{traj}=\{h_1, h_2, \cdots, h_{l_\tau} \}$. The decoder model uses a GRU model to predict the road segment and moving ratio for each timestamp in the target trajectory. Assume the hidden-state vector in the GRU model at timestamp $t$ is given by $h_{gru}^{(t)}$ with $h_{gru}^{(0)}=\hat{h}_{\tau}^{traj}$. An attention mechanism is adopted to obtain the input of GRU cell, i.e. $a^{(t)}$:
\begin{equation}
a^{(t)} = \sum\nolimits_{i=1}^{l_\tau} \alpha_{i}^{(t)} h_i \nonumber
\label{eq20}
\end{equation}
\begin{equation}
\begin{array}{rl}
\alpha_{i}^{(t)} & = \exp \left( \mu_{i}^{(t)} \right) / \sum_{k=1}^{l_\tau} \exp \left( \mu_{k}^{(t)} \right) \vspace{2ex} \\
\mu_{i}^{(t)} & = \mathbf{v}^{T} \cdot \operatorname{tanh} \left( \mathbf{W}_g h_{gru}^{(t-1)} + \mathbf{W}_h h_i \right)
\end{array}
\label{eq21}
\end{equation}
Here, $\mathbf{v} \in \mathbb{R}^{d \times 1}$ represents the transformation weight, and $\mathbf{W}_g$ and $\mathbf{W}_h$ are learnable weights in the attention module.
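The additive attention of Eq.~\eqref{eq20}--\eqref{eq21} in a minimal form (random stand-in weights, scalar lists instead of tensors; all concrete values are illustrative):

```python
import math
import random

random.seed(3)
d = 3  # toy hidden size

def rand_mat(r, c):
    return [[random.uniform(-0.5, 0.5) for _ in range(c)] for _ in range(r)]

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

Wg, Wh = rand_mat(d, d), rand_mat(d, d)
v = [random.uniform(-0.5, 0.5) for _ in range(d)]

def decoder_attention(h_gru, enc_outputs):
    """Eq. (20)-(21): additive attention of the decoder state over the
    encoder outputs; returns the context vector a^(t) and the weights."""
    mus = []
    for h in enc_outputs:
        # mu_i = v^T tanh(Wg h_gru + Wh h_i)
        pre = [math.tanh(g + e) for g, e in zip(matvec(Wg, h_gru), matvec(Wh, h))]
        mus.append(sum(vi * p for vi, p in zip(v, pre)))
    m = max(mus)
    exps = [math.exp(x - m) for x in mus]
    alphas = [e / sum(exps) for e in exps]  # softmax over encoder timestamps
    ctx = [sum(a * h[j] for a, h in zip(alphas, enc_outputs)) for j in range(d)]
    return ctx, alphas

enc = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(4)]
ctx, alphas = decoder_attention([0.1, -0.3, 0.2], enc)
```

The context vector is then concatenated with the previous road segment embedding and moving ratio to form the GRU input, as in Eq.~\eqref{eq22}.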
To this end, the hidden-state vectors of the GRU cell are updated by:
\begin{equation}
h_{gru}^{(j)} = \operatorname{GRU}\left( \left[ x^{(j-1)} \| r^{(j-1)} \| a^{(j)} \right] \right)
\label{eq22}
\end{equation}
Here, $j \in \{1, 2,\cdots,l_\rho \}$, $x^{(j-1)}$ and $r^{(j-1)}$ represent the road segment embedding of the predicted road segment and its corresponding moving ratio at the $({j-1})^{th}$ timestamp,
and $\operatorname{GRU}$ represents the GRU cell defined in Eq.~\eqref{eq2}. Finally, $h_{gru}^{(j)}$ is forwarded to a multi-task block to recover the missing trajectory.
\section{Multi-Task Learning for Trajectory Recovery} \label{GCL}
Given a mini-batch of $b$ trajectories, we first use the GPSFormer to obtain the trajectory representation for each sample, i.e., $H_{\tau}^{traj} \in \mathbb{R}^{b \times l_\tau \times d}$, and then forward the hidden-state vectors to the decoder model to obtain the hidden-state vectors of the GRU cell, i.e., $h_{gru}^{(j)}$. For the task of predicting road segment ID, following \cite{ren2021mtrajrec}, we adopt the cross entropy as the loss function:
\begin{equation}
\mathcal{L}_{id} = -\sum\nolimits_{i=1}^{b} \frac{1}{{l_\rho}} \sum\nolimits_{j=1}^{{l_\rho}} \log \left(P_{\theta_{enc}, \theta_{dec}}\left( \tilde{e}_{i}^{(j)} | h_{gru}^{(j)}, c_{j} \right)\right) \nonumber
\label{eq23}
\end{equation}
\begin{equation}
P_{\theta_{enc}, \theta_{dec}}\left( \tilde{e}_{i}^{(j)} | h_{gru}^{(j)}, c_{j} \right) = \frac{\exp \left( h_{gru}^{(j)} \cdot {\mathbf{w}_{i}^{id}} \right) \odot c_{j,\tilde{e}_{i}^{(j)}} }{ \sum_{v \in V} \exp \left( h_{gru}^{(j)} \cdot {\mathbf{w}_{v}^{id}} \right) \odot c_{j,v} }
\label{eq24}
\end{equation}
Here, $\mathbf{w}^{id} \in \mathbb{R}^{d \times |V|}$ represents a learnable weight and $P_{\theta_{enc}, \theta_{dec}}\left( \tilde{e}_{i}^{(j)} | h_{gru}^{(j)}, c_{j} \right)$ represents the probability of predicting road segment $\tilde{e}_{i}^{(j)}$ at the $j^{th}$ timestamp for the $i^{th}$ trajectory in the mini-batch, given the hidden-state vectors $h_{gru}^{(j)}$ and the constraint mask $c_j$ as defined in the paragraph below, $\tilde{e}_{i}^{(j)}$ represents the ground truth road segment ID at the $j^{th}$ timestamp for the $i^{th}$ trajectory in the mini-batch, and $\theta_{enc}$ and $\theta_{dec}$ are the learnable parameters in the encoder and decoder model respectively.
\paragraph{Constraint Mask Layer} \label{CML}
The goal of the constraint mask layer is to accelerate the convergence of the decoder model and to tackle the difficulties of fine-grained trajectory recovery.
Given a raw GPS trajectory $\tau=\langle \left(p_1, t_1\right), \left(p_2, t_2\right), \cdots, \left(p_{l_\tau}, t_{l_\tau}\right) \rangle$ and the target map-matched $\epsilon_{\rho}$-sample interval trajectory $\rho=\langle \left(q_1, \hat{t}_1\right)$, $\left(q_2, \hat{t}_2\right)$, $\cdots, \left(q_{l_\rho}, \hat{t}_{l_\rho}\right) \rangle$, the constraint mask $c_j \in \mathbb{R}^{l_\rho \times |V|}$ is calculated for each timestamp $\hat{t}_{j}$ in the target trajectory.
For $\hat{t}_j \in \{ t_1, t_2, \cdots, t_{l_\tau} \}$, the GPS point at timestamp $\hat{t}_j$ is given in the input trajectory, i.e. $t_{k}=\hat{t}_j$ and $q_j=p_k$. Accordingly, we set $c_{j, i} = \omega\left(e_i, p_k\right)$ for each road segment $e_i$ having its distance to $p_k$ within the maximum error of the GPS device (e.g. $100$ meters) and set $c_{j,i}=0$ for other road segments. The function $\omega$ is defined in Eq.~\eqref{eq7}, except that we use another hyper-parameter $\beta$ to replace $\gamma$. For timestamp $\hat{t}_j$ that does not appear in the input trajectory, we set $c_{j,i}=1$ for all road segments $e_i \in V$.
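The constraint mask construction can be sketched as follows. The segment IDs, the candidate distances, and the $100$-meter error bound below are illustrative values, not taken from the datasets:

```python
import math

def constraint_mask(target_ts, observed, candidates, beta=15.0, max_err=100.0):
    """Sketch of the constraint mask layer: observed timestamps restrict
    the decoder to road segments near the observed GPS point; unobserved
    timestamps are unconstrained. `observed` maps a timestamp to the index
    of its GPS point, and `candidates[k]` maps a segment id to its distance
    (meters) from GPS point k. Names here are illustrative."""
    mask = []
    for t in target_ts:
        if t in observed:
            dists = candidates[observed[t]]
            # c_{j,i} = omega(e_i, p_k) within the GPS error bound, else 0
            row = {e: (math.exp(-(d ** 2) / (beta ** 2)) if d <= max_err else 0.0)
                   for e, d in dists.items()}
        else:
            # unobserved timestamp: all segments allowed
            row = {e: 1.0 for cand in candidates.values() for e in cand}
        mask.append(row)
    return mask

# timestamps 0 and 30; only t=0 is observed (GPS point index 0)
cands = {0: {"e1": 5.0, "e2": 120.0}}
mask = constraint_mask([0, 30], {0: 0}, cands)
```

Multiplying the decoder's softmax numerators by this mask, as in Eq.~\eqref{eq24}, zeroes out segments that contradict the observed GPS points.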
Similarly, we adopt the mean square loss for moving ratio prediction task:
\begin{equation}
\mathcal{L}_{rate} = \sum\nolimits_{i=1}^{b} \frac{1}{{l_\rho}} \sum\nolimits_{j=1}^{{l_\rho}} \left( r_{i}^{(j)} - f \left( \left[x_{i}^{(j)} \| h_{gru}^{(j)} \right], \mathbf{w}_{rate} \right) \right)^2
\label{eq25}
\end{equation}
Here, function $f\left( x, y \right) = \sigma\left( x^T \cdot y \right)$, $\mathbf{w}_{rate} \in \mathbb{R}^{2d \times 1}$ represents a learnable weight, $x_{i}^{(j)}$ represents the road segment embedding of the predicted road segment at the $j^{th}$ timestamp of the $i^{th}$ trajectory in the mini-batch, $r_{i}^{(j)}$ represents the ground truth moving ratio for the $i^{th}$ trajectory in the mini-batch at the $j^{th}$ timestamp, and $\sigma$ refers to the sigmoid function.
To further improve the accuracy of RNTrajRec, we propose a graph classification loss with constraint masks. Given the output graph structure from the last graph refinement layer, i.e. $\vec{Z}_{\tau}^{(N)}$, we calculate the graph classification loss as:
\begin{equation}
\mathcal{L}_{enc} = -\sum\nolimits_{i=1}^{b} \frac{1}{l_{\tau}} \sum\nolimits_{j=1}^{l_{\tau}} \log \left(P_{\theta_{enc}}\left( \tilde{e}_{i}^{(j)} | G_{\tau_i, j}, \vec{Z}_{\tau_i, j}^{(N)}\right)\right) \nonumber
\label{eq26}
\end{equation}
\begin{equation}
P_{\theta_{enc}}\left( \bar{e} | G_{\tau_i, j}, \vec{Z}_{\tau_i, j}^{(N)}\right) = \frac{\exp\left(\vec{Z}_{\tau_i, j, \bar{e}}^{(N)^{T}} \cdot \mathbf{w} \right) \odot W_{\tau_i, j, \bar{e}} }
{\sum_{\bar{v} \in V_{\tau_i, j}} \exp\left(\vec{Z}_{\tau_i, j, \bar{v}}^{(N)^{T}} \cdot \mathbf{w} \right) \odot W_{\tau_i, j, \bar{v}}}
\label{eq27}
\end{equation}
Here, $\mathbf{w} \in \mathbb{R}^{d \times 1}$ is a learnable weight, $\vec{Z}_{\tau_i, j, \bar{e}}^{(N)}$ ($W_{\tau_i, j, \bar{e}}$) represents the hidden-state vector (constraint weight) for road segment $\bar{e}$ in the sub-graph $\vec{Z}_{\tau_i, j}^{(N)}$ ($\hat{G}_{\tau_i, j}$), and $\tilde{e}_{i}^{(j)}$ represents the ground truth road segment ID at the $j^{th}$ timestamp for the $i^{th}$ input raw GPS trajectory in the mini-batch.
Eq.~(\ref{eq28}) defines the total training loss, where $\lambda_1$ and $\lambda_2$ are hyper-parameters to linearly balance the three loss functions.
\begin{equation}
\mathcal{L}_{total} = \mathcal{L}_{id} + \lambda_1 \mathcal{L}_{rate} + \lambda_2 \mathcal{L}_{enc}
\label{eq28}
\end{equation}
\section{Experiment}
\subsection{Experiment Setting}
\subsubsection{Datasets}
Our experiments are based on three real-life trajectory datasets collected from three different cities, namely Shanghai, Chengdu and Porto. The three datasets consist of different numbers of trajectories collected over various time periods, as listed in Table~\ref{tab:datasets}.
Since trajectory pattern analysis in urban areas is typically more important, we keep the central urban areas in Chengdu and Porto as the training data, with the size of the selected urban area and the number of road segments covered listed in Table~\ref{tab:datasets}. To demonstrate the scalability of our model, we select a region in Shanghai that includes most suburban areas in addition to the central urban area, and hence is much larger than the central urban area; we name this the Shanghai-L dataset. Its size and the number of road segments covered are also listed in Table~\ref{tab:datasets}. Accordingly, we only consider trajectories passing through the selected areas. We want to highlight that considering a central urban area is a common approach used by many existing works~\cite{ren2021mtrajrec,fang2022spatio}. The selected areas, though much smaller than the entire road network, cover most of the heavy traffic.
We use around $150,000$ trajectories in each dataset for training, and split each dataset into training set, validation set and testing set with a splitting ratio of $7:2:1$.
\begin{comment}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{fig/data.png}
\caption{Occurrence statistics of each road segment in the Porto dataset with 999,082 trajectories. In the figure, each road segment is color coded based on the total number of trajectories in the training set (in total 999,082 trajectories) that pass the segment, and the blue-colored rectangle bounds the training area selected. Clearly, the training area covers the major traffic (e.g., purple color coded segments refer to the cluster of road segments having the most heavy traffic and most, if not all, purple color coded segments are inside the blue-colored rectangle), which is consistent with the Pareto principle (i.e., 20 percent of the roads are used for 80 percent of the traffic).}
\label{fig:data}
\vspace{-0.2in}
\end{figure}
\end{comment}
To obtain high-sample map-matched trajectories, we apply the HMM~\cite{newson2009hidden} algorithm to the original raw GPS trajectories, followed by linear interpolation~\cite{hoteit2014estimating}, to obtain map-matched $\epsilon_\rho$-sample interval trajectories, and use the high-sample trajectories with sample intervals in the range of $10 \sim 15$ seconds as the ground truth. To obtain low-sample trajectories, we randomly sample trajectory points from the high-sample trajectories with sample intervals in the range of $80 \sim 192$ seconds. Specifically, for each dataset, we design a trajectory recovery task which uses only $12.5\%$ or $6.25\%$ of the points of the given trajectory to recover the remaining $87.5\%$ or $93.75\%$ missing points. Therefore, the average sample interval $\epsilon_\tau$ of the low-sample input trajectories is $8$ or $16$ times that of the original high-sample trajectories, i.e., $\epsilon_\tau=\epsilon_\rho\times $($8$ or $16$).
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.1}
\caption{Statistics of Datasets}
\vspace{-0.05in}
\label{tab:datasets}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{c|c|c|c}
\hline
Dataset & Shanghai-L & Chengdu & Porto \\
\hline
\# Trajectories & 2,694,958 & 8,302,421 & 999,082 \\
\# Road segments in training area & 34,986 & 8,781 & 12,613 \\
Size of training area ($\operatorname{km}^2$) & $23.0 \times 30.8$ & $8.3 \times 8.3$ & $6.8 \times 7.2$ \\
Average travel time per trajectory (s) & 699.57 & 868.86 & 783.14 \\
Trajectory collected time & Apr 2015 & Nov 2016 & Jul 2013-Mar 2014 \\
Trajectory raw sample interval (s) & 9.39 & 3.19 & 15.01 \\
Sample interval $\epsilon_\rho$ after processing (s) & 10 & 12 & 15 \\
\hline
\end{tabular}
}
\vspace{-0.2in}
\end{table}
\subsubsection{Evaluation Metrics}
The task of trajectory recovery is to recover high-sample $\epsilon_\rho$-interval trajectories from low-sample raw trajectories. Following \cite{ren2021mtrajrec}, we adopt both the accuracy of the road segments recovered and the distance error of location inference to evaluate the performances of different models.
\noindent
\textbf{MAE \& RMSE.}
Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) are common performance metrics typically used for regression tasks. Following \cite{ren2021mtrajrec}, we adopt road network distance to calculate the distance error between two GPS points. That is, given a predicted map-matched trajectory $\hat{\rho}=\langle \left(\hat{e}_1, \hat{r}_1, t_1 \right), \left(\hat{e}_2, \hat{r}_2, t_2 \right),...,\left(\hat{e}_{l_{\rho}}, \hat{r}_{l_{\rho}}, t_{l_{\rho}} \right) \rangle$, we derive the location of the GPS point $\hat{p}_i$ based on $\hat{e}_i$ and $\hat{r}_i$; similarly, we can find the ground truth GPS point $p_i$ for trajectory $\rho$.
Accordingly, $MAE\left(\rho, \hat{\rho}\right) = \frac{1}{l_{\rho}} \sum_{i=1}^{l_{\rho}} \operatorname{dist} \left( p_i, \hat{p}_i \right)$,
and $RMSE\left(\rho, \hat{\rho}\right) = \sqrt{\frac{1}{l_{\rho}} \sum_{i=1}^{l_{\rho}} \left(\operatorname{dist} \left( p_i, \hat{p}_i \right)\right)^2}$.
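In code, once the per-point road-network distances $\operatorname{dist}(p_i, \hat{p}_i)$ are available, the two metrics reduce to the following minimal sketch:

```python
import math

def mae_rmse(dists):
    """MAE and RMSE from the per-point road-network distance errors
    dist(p_i, p_hat_i) of one recovered trajectory."""
    n = len(dists)
    mae = sum(dists) / n
    rmse = math.sqrt(sum(d * d for d in dists) / n)
    return mae, rmse

# e.g., errors of 0, 30 and 60 meters over three recovered points
mae, rmse = mae_rmse([0.0, 30.0, 60.0])  # MAE = 30.0 m
```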
\noindent
\textbf{Recall \& Precision \& F1 Score.}
Given a predicted travel path $E_{\hat{\rho}}= \{\hat{e}_1, \hat{e}_2,...,\hat{e}_{l_{\rho}}\}$ extracted from a predicted trajectory $\hat{\rho}$ and the ground truth travel path $E_{\rho}= \{e_1, e_2,...,e_{l_{\rho}}\}$ extracted from the ground truth trajectory $\rho$,
we follow previous work\cite{cui2018personalized, kurashima2010travel, ren2021mtrajrec} and define $Recall\left(\rho, \hat{\rho}\right)=\frac{|E_{\rho} \cap E_{\hat{\rho}}|}{|E_{\rho}|}$,
and $Precision\left(\rho, \hat{\rho}\right)=\frac{|E_{\rho} \cap E_{\hat{\rho}}|}{|E_{\hat{\rho}}|}$.
We also adopt the F1 score to further evaluate the models, i.e., $F1\left(\rho, \hat{\rho}\right)=\frac{2 \times Recall\left(\rho, \hat{\rho}\right) \times Precision\left(\rho, \hat{\rho}\right)}{Recall\left(\rho, \hat{\rho}\right) + Precision\left(\rho, \hat{\rho}\right)}$.
\noindent
\textbf{Accuracy.}
The accuracy between predicted trajectory $\hat{\rho}$ and ground truth trajectory $\rho$ is calculated by $Accuracy\left(\rho, \hat{\rho}\right)=\frac{1}{{l_{\rho}}} \sum_{i=1}^{{l_{\rho}}} \mathbf{1}\{ e_i=\hat{e}_i \}$, which evaluates the model's ability to match the recovered GPS trajectory to the corresponding road segments.
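A minimal sketch of these segment-level metrics, taking equal-length lists of ground truth and predicted road-segment ids:

```python
def segment_metrics(truth, pred):
    """Recall/precision/F1 over road-segment sets, plus position-wise
    accuracy, for equal-length lists of segment ids."""
    t_set, p_set = set(truth), set(pred)
    inter = len(t_set & p_set)
    recall = inter / len(t_set)
    precision = inter / len(p_set)
    f1 = 2 * recall * precision / (recall + precision) if inter else 0.0
    accuracy = sum(a == b for a, b in zip(truth, pred)) / len(truth)
    return recall, precision, f1, accuracy

# Toy example: 2 of 4 true segments retrieved, 2 of 4 positions correct.
r, p, f1, acc = segment_metrics([1, 2, 3, 4], [1, 2, 2, 5])
```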
\noindent
\textbf{SR\%k.}
In order to further evaluate the robustness of the models, we create another task, namely elevated road recovery, to evaluate models' ability to accurately recover trajectories on elevated roads and nearby trunk roads. As mentioned in \cite{iland2018rethinking}, GPS locations can be inaccurate in densely populated and highly built-up urban areas, making the task of trajectory recovery more challenging and significant.
As we foresee that the recovered trajectories in those areas contain more errors, we propose to use $SR\%k$ instead of the average F1 score, since the latter is less sensitive to poor cases. Specifically, given a predicted travel path $E_{\hat{\rho}}$ and a ground truth travel path $E_{\rho}$, we choose a sub-trajectory $\widehat{E}_{\rho}$ of length $\hat{l}_{\rho}$ that drives on or near an elevated road, along with the corresponding predicted sub-trajectory $\widehat{E}_{\hat{\rho}}$. Then, $SR\%k$ calculates the proportion of trajectories whose $F1(\widehat{E}_{\rho}, \widehat{E}_{\hat{\rho}})$ value exceeds $k$, which evaluates the models' ability to discriminate complex roads using contextual trajectory information.
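Given the per-trajectory F1 scores on the elevated sub-trajectories, $SR\%k$ is simply a proportion; a sketch:

```python
def sr_at_k(f1_scores, k):
    """Fraction of trajectories whose elevated-road F1 exceeds k."""
    return sum(f > k for f in f1_scores) / len(f1_scores)

sr90 = sr_at_k([0.95, 0.85, 0.60, 0.92], 0.90)  # 2 of 4 exceed 0.90
```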
\subsubsection{Parameter Settings}
In our experiments, we implement RNTrajRec and all the baseline models in Python with the PyTorch\cite{paszke2019pytorch} framework. We set the size of hidden-state vectors $d$ to $512$ for the Chengdu and Porto datasets. Due to memory limitations, we set $d$ to $256$ for the Shanghai-L dataset. We use the same size of hidden-state vectors for all the models across datasets. Also, we set the size of each grid cell to $50m \times 50m$. For our model, we set both the number of GAT modules in road network representation $M$ and the number of GPSFormer layers $N$ to $2$, and set the number of GAT modules in the graph refinement layer $P$ to $1$. Also, we set hyper-parameters $\delta$ and $\gamma$ to $400$ and $30$ meters respectively, set the number of attention heads $h$ in both Eq. \eqref{eq6} and Eq. \eqref{eq16} to $8$, and set $\lambda_1, \lambda_2$ in Eq. \eqref{eq28} to $10$ and $0.1$ respectively. The size of $f_r$ is set to $11$, i.e., $8$ for the level of the road segment, $1$ for the length of the road segment, $1$ for the number of in-edges, and $1$ for the number of out-edges; the size of $f_l$ is set to $25$, i.e., $24$ for the one-hot vector of the hour of the day and $1$ for holiday or not. Following \cite{ren2021mtrajrec}, we set hyper-parameter $\beta$ (used to set the constraint mask) to $15$ meters. All the models are trained with the Adam optimizer\cite{kingma2014adam} for $30$ epochs with batch size $64$ and learning rate $10^{-3}$. All the experiments are conducted on a machine with an AMD Ryzen 9 5950X 16-core CPU and a 24GB NVIDIA GeForce RTX 3090 GPU.
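For reference, the hyper-parameters above can be collected into a single configuration dictionary; the key names are our own and purely illustrative:

```python
config = {
    "hidden_size": 512,         # d; 256 for Shanghai-L due to memory limits
    "grid_cell_m": 50,          # grid cell size (meters)
    "num_gat_road": 2,          # M: GAT modules in road network representation
    "num_gpsformer_layers": 2,  # N
    "num_gat_refine": 1,        # P: GAT modules in graph refinement layer
    "delta_m": 400,             # delta (meters)
    "gamma_m": 30,              # gamma (meters)
    "num_heads": 8,             # h
    "lambda1": 10,
    "lambda2": 0.1,
    "beta_m": 15,               # constraint-mask radius (meters)
    "epochs": 30,
    "batch_size": 64,
    "lr": 1e-3,
}
```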
\subsubsection{Compared models}
\begin{table*}[htbp]
\renewcommand\tabcolsep{3.3pt}
\renewcommand{\arraystretch}{1.2}
\caption{Performance evaluation for different methods on trajectory recovery task.}
\vspace{-0.15in}
\begin{center}
\begin{tabular}{|c|c|cccccc|cccccc|}
\hline
\multicolumn{2}{|c|}{\multirow{2}*{Method}}&\multicolumn{6}{c|}{Chengdu ($\epsilon_\tau=\epsilon_\rho*8$)}&\multicolumn{6}{c|}{Chengdu ($\epsilon_\tau=\epsilon_\rho*16$)} \\
\multicolumn{2}{|c|}{~} & Recall & Precision & F1 Score & Accuracy & MAE & RMSE & Recall & Precision & F1 Score & Accuracy & MAE & RMSE \\
\hline
\multicolumn{2}{|c|}{Linear + HMM} & 0.6597 & 0.6166 & 0.6351 & 0.4916 & 358.24 & 594.32 & 0.4821 & 0.4379 & 0.4564 & 0.2858 & 525.96 & 760.47 \\
\multicolumn{2}{|c|}{DHTR + HMM} & 0.6385 & 0.7149 & 0.6714 & 0.5501 & 252.31 & 435.17 & 0.5080 & 0.6930 & 0.5821 & 0.4130 & 325.14 & 511.62 \\
\hline
\multirow{7}{*}{\hspace{0.05cm}\rotatebox{90}{End-to-End Methods}\hspace{0.05cm}} & t2vec + Decoder & 0.7123 & 0.7870 & 0.7441 & 0.5601 & 194.29 & 307.22 & 0.6490 & 0.7725 & 0.7013 & 0.4627 & 254.29 & 375.16 \\
& Transformer + Decoder & 0.7365 & 0.8229 & 0.7742 & 0.5902 & 177.13 & 287.33 & 0.6091 & 0.7187 & 0.6537 & 0.4258 & 294.73 & 420.91 \\
& MTrajRec & 0.7565 & 0.8410 & 0.7938 & 0.6081 & 160.29 & 261.11 & 0.6643 & 0.7957 & 0.7202 & 0.4918 & 231.84 & 348.55 \\
& T3S + Decoder & 0.7535 & 0.8394 & 0.7913 & 0.6092 & 163.58 & 263.42 & 0.6634 & 0.7838 & 0.7144 & 0.4897 & 234.00 & 352.35 \\
& GTS + Decoder & 0.7514 & \uline{0.8428} & 0.7917 & 0.6105 & 157.83 & \uline{254.51} & 0.6569 & 0.7900 & 0.7131 & 0.4825 & 231.78 & 344.21 \\
& NeuTraj + Decoder & \uline{0.7608} & 0.8405 & \uline{0.7961} & \uline{0.6152} & \uline{156.25} & 254.70 & \uline{0.6644} & \uline{0.7979} & \uline{0.7213} & \uline{0.4942} & \uline{227.19} & \uline{341.03} \\
& RNTrajRec (Ours) & \textbf{0.7831} & \textbf{0.8812} & \textbf{0.8272} & \textbf{0.6609} & \textbf{132.69} & \textbf{219.20} & \textbf{0.6926} & \textbf{0.8573} & \textbf{0.7632} & \textbf{0.5413} & \textbf{195.91} & \textbf{304.52} \\
\hline
\end{tabular}
\vspace{0.5em}
\hspace{0em}
\begin{tabular}{|c|c|cccccc|cccccc|}
\hline
\multicolumn{2}{|c|}{\multirow{2}*{Method}}&\multicolumn{6}{c|}{Porto ($\epsilon_\tau=\epsilon_\rho*8$)}&\multicolumn{6}{c|}{Shanghai-L ($\epsilon_\tau=\epsilon_\rho*16$)} \\
\multicolumn{2}{|c|}{~} & Recall & Precision & F1 Score & Accuracy & MAE & RMSE & Recall & Precision & F1 Score & Accuracy & MAE & RMSE \\
\hline
\multicolumn{2}{|c|}{Linear + HMM} & 0.5837 & 0.5473 & 0.5629 & 0.3624 & 175.00 & 284.16 & 0.6055 & 0.5633 & 0.5801 & 0.3825 & 383.25 & 555.68 \\
\multicolumn{2}{|c|}{DHTR + HMM} & 0.5578 & 0.6837 & 0.6118 & 0.4250 & \uline{104.41} & \uline{168.83} & 0.5144 & 0.6533 & 0.5696 & 0.3974 & 308.72 & 454.87 \\
\hline
\multirow{7}{*}{\hspace{0.05cm}\rotatebox{90}{End-to-End Methods}\hspace{0.05cm}} & t2vec + Decoder & 0.6543 & 0.7546 & 0.6977 & 0.4738 & 124.77 & 184.57 & 0.6397 & 0.7487 & 0.6831 & 0.4544 & 298.35 & 420.22 \\
& Transformer + Decoder & 0.6343 & 0.7449 & 0.6816 & 0.4590 & 132.70 & 195.10 & 0.5850 & 0.7039 & 0.6306 & 0.4160 & 357.46 & 496.25 \\
& MTrajRec & 0.6449 & 0.7504 & 0.6905 & 0.4656 & 125.81 & 184.94 & 0.6106 & 0.7372 & 0.6603 & 0.4328 & 327.32 & 456.40 \\
& T3S + Decoder & 0.6392 & 0.7377 & 0.6816 & 0.4551 & 131.12 & 191.99 & 0.6282 & 0.7408 & 0.6721 & 0.4510 & 303.55 & 428.35 \\
& GTS + Decoder & 0.6474 & \uline{0.7612} & 0.6967 & 0.4761 & 118.07 & 173.77 & \uline{0.6441} & \uline{0.7809} & \uline{0.6987} & \uline{0.4714} & \uline{276.23} & \uline{391.74} \\
& NeuTraj + Decoder & \uline{0.6544} & 0.7558 & \uline{0.6984} & \uline{0.4808} & 119.45 & 176.27 & 0.6337 & 0.7472 & 0.6787 & 0.4542 & 293.65 & 414.59 \\
& RNTrajRec (Ours) & \textbf{0.6778} & \textbf{0.7950} & \textbf{0.7293} & \textbf{0.5230} & \textbf{97.66} & \textbf{145.87} & \textbf{0.6663} & \textbf{0.8294} & \textbf{0.7332} & \textbf{0.5145} & \textbf{229.74} & \textbf{335.19} \\
\hline
\end{tabular}
\label{tab1}
\vspace{-0.7cm}
\end{center}
\end{table*}
To evaluate the effectiveness of RNTrajRec,
we implement eight baselines in total.
i) \textbf{Linear} \cite{hoteit2014estimating}\textbf{+HMM}
uses linear interpolation to obtain a high-sample trajectory, and then the HMM algorithm to obtain a map-matched $\epsilon_\rho$-sample interval trajectory.
ii) \textbf{DHTR+HMM}
replaces the linear interpolation with a hybrid Seq2Seq model equipped with a Kalman filter\cite{kalman1960new}.
iii) \textbf{t2vec}
proposes a deep learning network for trajectory similarity learning with a BiLSTM\cite{hochreiter1997long} model to capture the temporal dependency of the given trajectory.
iv) \textbf{Transformer}\cite{vaswani2017attention} learns the representation with temporal dependency.
Following \cite{ren2021mtrajrec}, we use the grid cell index and time index for each sample point in the trajectory.
v) \textbf{NeuTraj}
adds a spatial attention memory model to LSTM
to capture rich nearby spatial features for trajectory representation.
vi) \textbf{T3S}
combines a self-attention network with a spatial LSTM model to capture spatial and structural features of trajectories.
vii) \textbf{GTS}\cite{han2021graph} is the state-of-the-art method for learning trajectory similarity in spatial network which uses POIs as input.
To adapt GTS to our problem setting, we regard each intersection in the road network as a POI,
cut every long road segment into $100$-meter pieces, and treat each cut point as a POI. Also, we use the embedding vector of the nearest POI to obtain the representation of each GPS point in the trajectory.
viii) \textbf{MTrajRec}
is the state-of-the-art method for trajectory recovery task.
\textit{Remark 1}: Traj2SimVec is a novel model for trajectory similarity learning with auxiliary supervision for sub-trajectory. However, its encoder model
is simply an RNN-based model, similar to t2vec.
Since we mainly compare the performance of different models on GPS trajectory representation learning, we do not include this method for comparison.
\textit{Remark 2}: We refer to \textbf{A+Decoder} as using the encoder model proposed in \textbf{A} and the decoder model proposed in \cite{ren2021mtrajrec} for trajectory recovery task.
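The $100$-meter segment-cutting rule used to adapt GTS (item vii above) can be sketched as:

```python
def cut_points(length_m, step=100):
    """Distances (in meters from the segment start) at which a long
    road segment is cut into pseudo-POIs, one every `step` meters."""
    return list(range(step, int(length_m), step))

cut_points(350)  # a 350 m segment yields cut points at 100, 200, 300 m
```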
\subsection{Performance Evaluation}
We report the experimental results in Table \ref{tab1}. Numbers in bold font indicate the best performers, and numbers underlined represent the best performers among existing baselines without considering our model.
As shown, RNTrajRec outperforms all baseline models on all the datasets.
In particular, Linear+HMM performs the worst on all the datasets, and its performance drops significantly as the sample interval becomes longer, e.g., the accuracy drops from $0.4916$ to $0.2858$ when the sample interval increases from $96$ seconds to $192$ seconds on the Chengdu dataset. We also observe that DHTR+HMM outperforms Linear+HMM, which confirms that linear interpolation is not suitable for recovering low-sample GPS trajectories.
Another observation is that NeuTraj and GTS, two models proposed for GPS trajectory learning, outperform MTrajRec when they are paired with the decoder model proposed in \cite{ren2021mtrajrec} on top of their encoder models. This demonstrates that these two models are able to capture spatio-temporal information in low-sample trajectories.
It is observed that end-to-end methods perform better than two-stage methods.
Among existing end-to-end models, we observe that NeuTraj achieves the highest accuracy on the Chengdu and Porto datasets, while GTS significantly outperforms NeuTraj on the Shanghai-L dataset. We believe that the complex road network structure in Shanghai-L leads to the high performance of the graph-based method. In addition, our method consistently outperforms all these models by a large margin.
To be more specific, as compared with the best baseline, RNTrajRec improves F1 score and accuracy by an average of $4.85\%$ and $8.48\%$ respectively, and reduces MAE and RMSE by an average of $27.42$ and $35.91$ meters respectively for Chengdu dataset; it improves F1 score and accuracy by $4.94\%$ and $9.14\%$ respectively and reduces MAE and RMSE by $46.49$ and $56.55$ meters respectively for Shanghai-L dataset; it improves F1 score and accuracy of the best baseline by $4.42\%$ and $8.78\%$ respectively and reduces MAE and RMSE by $6.75$ and $22.96$ meters respectively on Porto dataset.
This clearly proves the effectiveness of RNTrajRec for trajectory recovery, which is mainly contributed by following two reasons.
Firstly, RNTrajRec pays attention to the important road network information and road network structure around each GPS point in the trajectory. Secondly, several novel modules are proposed in this paper, such as GridGNN and GPSFormer, that enable the model to learn rich spatial and temporal features of the given trajectory.
\subsection{Additional Experiment Results}
We conduct additional experiments on two new datasets, namely Shanghai and Chengdu-Few, to evaluate the performance of RNTrajRec under various data distributions.
For the Shanghai dataset, we select another area in Shanghai with a total of $9,298$ road segments, covering an area of $6.4 \times 14.4 \operatorname{km}^2$, while keeping the other settings (e.g., trajectory sample interval) of the original Shanghai-L dataset. The experimental results shown in Table \ref{tab:extend} verify that RNTrajRec still performs the best among all the baseline models.
Because of its design, RNTrajRec might benefit more from a training dataset that contains a large number of trajectories, as compared to other baselines. In order to demonstrate that our model remains competitive even when the training set contains far fewer trajectories, we construct a new dataset, namely Chengdu-Few, which randomly samples only 20,000 trajectories (roughly 20\% of the original Chengdu dataset) from the original training dataset in Chengdu, while keeping the other settings (e.g., \# of road segments and size of the training area) of the original Chengdu dataset. In such a setting, our model needs to learn the representation of each road segment, along with the representation of each subgraph, from limited trajectories.
As reported in Table~\ref{tab:extend}, our method outperforms all the baselines on the Chengdu-Few dataset, and we believe the reasons lie in the following two aspects. Firstly, when the training set has a limited number of trajectories, many road segments appear only a few times in the training dataset. However, when the graph neural network updates a certain road segment, the features of the surrounding road segments are also updated, which offers opportunities for each road segment to learn a good feature representation from a limited number of trajectories. Secondly, the input features of each subgraph are obtained directly from the representation of the road network. Therefore, as long as each road segment in the road network has a well-trained feature representation, each subgraph can obtain a good feature representation.
Last but not least, we find that although our model still achieves the best results on the new Chengdu-Few dataset, the improvement is slightly less significant than that achieved on the original Chengdu dataset, e.g., on the Chengdu dataset, our model outperforms the best baseline by 7.42\% in accuracy and 5.80\% in F1 score; on the Chengdu-Few dataset, these two numbers drop to 6.57\% in accuracy and 2.75\% in F1 score. We think the main reason is that a transformer model typically requires a large amount of training data. Consequently, when the amount of training data is insufficient, the quality of the final representation is affected to a certain degree.
\begin{table*}[htbp]
\renewcommand\tabcolsep{3.3pt}
\renewcommand{\arraystretch}{1.2}
\vspace{0.05in}
\caption{Performance evaluation on additional Shanghai and Chengdu-Few datasets.}
\begin{center}
\begin{tabular}{|c|c|cccccc|cccccc|}
\hline
\multicolumn{2}{|c|}{\multirow{2}*{Method}}&\multicolumn{6}{c|}{Shanghai ($\epsilon_\tau=\epsilon_\rho*8$)}&\multicolumn{6}{c|}{Chengdu-Few ($\epsilon_\tau=\epsilon_\rho*8$)} \\
\multicolumn{2}{|c|}{~} & Recall & Precision & F1 Score & Accuracy & MAE & RMSE & Recall & Precision & F1 Score & Accuracy & MAE & RMSE \\
\hline
\multicolumn{2}{|c|}{Linear + HMM} & 0.7563 & 0.7158 & 0.7329 & 0.5730 & 205.82 & 331.43 & 0.6597 & 0.6166 & 0.6351 & 0.4916 & 358.24 & 594.32 \\
\multicolumn{2}{|c|}{DHTR + HMM} & 0.6682 & 0.7728 & 0.7123 & 0.5876 & 160.31 & 261.17 & 0.5729 & 0.6938 & 0.6243 & 0.4940 & 282.52 & 468.91 \\
\hline
\multirow{7}{*}{\hspace{0.05cm}\rotatebox{90}{End-to-End Methods}\hspace{0.05cm}} & t2vec + Decoder & 0.6914 & 0.7155 & 0.6965 & 0.5295 & 184.42 & 280.36 & 0.6591 & 0.7690 & 0.7055 & 0.5069 & 237.70 & 364.77 \\
& Transformer + Decoder & 0.7249 & 0.7702 & 0.7404 & 0.5786 & 161.03 & 253.22 & 0.6542 & 0.7569 & 0.6977 & 0.5051 & 237.07 & 364.29 \\
& MTrajRec & 0.7417 & 0.7874 & 0.7581 & 0.5924 & 148.26 & 232.41 & \uline{0.7026} & \uline{0.8083} & \uline{0.7483} & \uline{0.5418} & \uline{206.08} & \uline{320.35} \\
& T3S + Decoder & 0.7581 & 0.7923 & 0.7695 & 0.6009 & 145.58 & 231.79 & 0.6984 & 0.7964 & 0.7405 & 0.5374 & 207.97 & 322.31 \\
& GTS + Decoder & 0.7525 & \uline{0.8144} & \uline{0.7766} & \uline{0.6172} & \uline{134.58} & \uline{212.78} & 0.6888 & 0.8074 & 0.7396 & 0.5312 & 207.48 & 321.80 \\
& NeuTraj + Decoder & \uline{0.7588} & 0.7976 & 0.7726 & 0.6058 & 141.63 & 223.26 & 0.6973 & 0.7912 & 0.7378 & 0.5403 & 211.64 & 330.74 \\
& RNTrajRec (Ours) & \textbf{0.7824} & \textbf{0.8735} & \textbf{0.8218} & \textbf{0.6674} & \textbf{112.19} & \textbf{180.96} & \textbf{0.7205} & \textbf{0.8309} & \textbf{0.7689} & \textbf{0.5774} & \textbf{179.74} & \textbf{287.84} \\
\hline
\end{tabular}
\label{tab:extend}
\end{center}
\end{table*}
\subsection{Robustness Study}
\begin{comment}
\begin{table}[htbp]
\vspace{-0.6cm}
\renewcommand\tabcolsep{3.3pt}
\renewcommand{\arraystretch}{1.2}
\caption{Performance evaluation for different methods on elevated road trajectory task.}
\vspace{-0.07in}
\centering
\begin{tabular}{|c|ccccc|}
\hline
\multirow{2}{*}{Method}&\multicolumn{5}{|c|}{Chengdu ($\epsilon_\tau=\epsilon_\rho*8$)} \\
& SR\%90 & SR\%80 & SR\%70 & SR\%60 & SR\%50 \\
\hline
Linear + HMM & 0.0673 & 0.1694 & 0.2520 & 0.3170 & 0.3946 \\
DHTR + HMM & 0.0942 & 0.2873 & 0.4878 & 0.6132 & 0.6863 \\
\hline
t2vec + Decoder & 0.1232 & 0.4051 & 0.6791 & 0.8133 & 0.8922 \\
Transformer + Decoder & 0.1492 & 0.4536 & 0.7141 & 0.8465 & 0.9061 \\
MTrajRec & 0.1554 & 0.4495 & 0.7018 & 0.8453 & 0.9196 \\
T3S + Decoder & 0.1691 & 0.4995 & 0.7491 & 0.8650 & 0.9234 \\
GTS + Decoder & 0.1748 & 0.4883 & 0.7445 & 0.8675 & 0.9249 \\
NeuTraj + Decoder & \uline{0.1882} & \uline{0.5140} & \uline{0.7619} & \uline{0.8783} & \uline{0.9307} \\
\hline
RNTrajRec w/o GRL & 0.2066 & 0.5432 & 0.7873 & 0.8916 & 0.9421 \\
RNTrajRec w/o GCL & 0.2033 & 0.5489 & 0.7890 & 0.8974 & 0.9425 \\
RNTrajRec & \textbf{0.2534} & \textbf{0.5880} & \textbf{0.8097} & \textbf{0.9094} & \textbf{0.9505} \\
\hline
\end{tabular}
\vspace{-0.3cm}
\label{tab2}
\end{table}
\end{comment}
\begin{figure}
\centering
\vspace{-0.2cm}
\includegraphics[width=0.48\textwidth]{fig/robust2.png}
\vspace{-0.3cm}
\caption{Performance evaluation for different methods on elevated road trajectory task (Chengdu: $\epsilon_\tau=\epsilon_\rho*8$).}
\label{fig:robust}
\vspace{-0.5cm}
\end{figure}
Fig.~\ref{fig:robust} reports the experimental result for elevated road trajectory task on Chengdu dataset. We design this task to serve two main purposes.
First, elevated roads and nearby trunk roads normally experience high traffic volume and their traffic conditions are typically more complicated (e.g., elevated roads are more likely to experience traffic congestion during peak hours).
Therefore, evaluating the models' ability to discover these sophisticated spatio-temporal patterns is significant. Second, the road network structure around the elevated road is more complex than other urban roads. For example, there are usually two-way trunk road segments under the elevated road segments, which brings greater challenges to the recovery of the road segments on these elevated roads.
As shown in Fig.~\ref{fig:robust}, RNTrajRec significantly outperforms all the baseline models. Interestingly, all the learning-based models significantly outperform the HMM-based methods (i.e., Linear/DHTR+HMM), which shows the inability of HMM to discover these complex patterns. In addition, our model achieves a high F1 score ($>0.8$) for $58.8\%$ of the trajectories, outperforming the best baseline by $14.4\%$.
\begin{figure*}
\centering
\vspace{-0.15in}
\includegraphics[width=0.95\textwidth
{fig/Fig-Casev3.png}
\caption{A case study for trajectory recovery. Purple circles in (a) represent the sample points in an input low sample trajectory. The red/black lines in dash-line rectangles of (a) represent the partial elevated/main road segments. The figures in the lower right corner of (b), (c), and (d) plot the detailed recovery results corresponding to the areas inside the red/black rectangles and the
marker shapes indicate the sample timestamps (e.g., a purple star and an orange star in the dash-line rectangle labelled \textcircled{2} in (b) represent a point recovered by MTrajRec and the corresponding ground truth point of the same timestamp.)
}
\label{fig:case}
\vspace{-0.1in}
\end{figure*}
\subsection{Efficiency Study}
\begin{comment}
\begin{table}[htbp]
\vspace{-0.6cm}
\renewcommand\tabcolsep{3.7pt}
\renewcommand{\arraystretch}{1.2}
\caption{Efficiency Study for different models.}
\vspace{-0.06in}
\centering
\begin{tabular}{|c|cccc|}
\hline
\multirow{2}{*}{Method}&\multicolumn{4}{|c|}{Chengdu ($\epsilon_\tau=\epsilon_\rho*8$)} \\
& Training & Inference & Memory & \#Para \\
\hline
Linear + HMM & - & 0.031s & - & - \\
DHTR + HMM & 2558s & 0.091s & 7.36GB & 35.66M \\
\hline
t2vec + Decoder & 1093s & 0.042s & 4.14GB & 13.08M \\
Transformer + Decoder & 1103s & 0.043s & 4.23GB & 15.84M \\
MTrajRec & 1082s & 0.042s & 4.14GB & 13.48M \\
T3S + Decoder & 1098s & 0.043s & 4.63GB & 27.62M \\
GTS + Decoder & 1108s & 0.048s & 4.18GB & 21.03M \\
NeuTraj + Decoder & 1131s & 0.052s & 4.71GB & 28.94M \\
\hline
RNTrajRec w/o GRL & 1822s & 0.081s & 10.63GB & 31.27M \\
RNTrajRec & 1928s & 0.083s & 17.99GB & 32.86M \\
\hline
\end{tabular}
\label{tab3}
\vspace{-0.2cm}
\end{table}
In addition to the effectiveness of different methods, we also evaluate their efficiency by considering four different aspects, including i) training time per epoch, ii) inference time, iii) memory usage, and iv) the number of parameters. Specifically,
the training time refers to the time spent by a learning-based method in training for each epoch; inference time refers to the time required to recover a trajectory during the inference phase; memory usage refers to the maximum GPU memory usage while training. We list the efficiency study result corresponding to Chengdu dataset having in total $105,879$ trajectories in its training dataset in Table \ref{tab3}.
DHTR+HMM is observed to spend the most time for both training and inference, probably due to the inefficiency of the adopted kalman filter module. Besides, all learning-based baseline models, except DHTR, share common performance bottleneck, i.e., calculating the constraint mask and predicting the GPS points one by one along the trajectory.
As these models share the same decoder model, they spend almost the same amount of time in training. What's more, NeuTraj+Decoder has the most parameters among end-to-end baseline models, mainly due to the grid memory cell in NeuTraj. Overall, these end-to-end baseline models spend $\sim 1100$ seconds for training an epoch and use $\sim 4.5$GB GPU memory. Due to the additional time cost in sampling sub-graphs for each GPS point and memory cost in storing these sub-graphs, RNTrajRec w/o GRL requires $1.65 \times$ training time and $2.36 \times$ GPU memory compared to baseline models. In addition, our RNTrajRec model requires $1.75 \times$ training time and $4 \times$ GPU memory compared to baseline models in training. However, during inference time, RNTrajRec requires $0.051$ seconds to recover a low-sample trajectory, which is efficient and practical in practice.
\end{comment}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.48\textwidth]{fig/Fig-Effiencicy.pdf}
\vspace{-0.35cm}
\caption{Efficiency study
on Chengdu dataset ($\epsilon_\tau=\epsilon_\rho*8$).}
\vspace{-0.7cm}
\label{fig:effi}
\end{figure}
In addition to the effectiveness of different methods, we also evaluate their efficiency by considering two aspects: the inference time, i.e., the time required to recover a trajectory during the inference phase, and the number of parameters.
As shown in Fig.~\ref{fig:effi}, RNTrajRec requires less time during inference than NeuTraj+Decoder, and a comparable time cost to GTS+Decoder when using only one layer of GPSFormer. However, our model significantly outperforms both NeuTraj+Decoder and GTS+Decoder in terms of accuracy even if we set $N=1$. Besides, DHTR+HMM is observed to spend the most time during inference, probably due to the inefficiency of the adopted Kalman filter module. Another observation is that Linear+HMM requires the least time during inference; however, it suffers from low accuracy for trajectories with a low sample rate. In short, RNTrajRec requires $50.2$ microseconds to recover a low-sample trajectory, which is efficient in practice.
\textit{Remark:} The time spent to obtain the road network representation via GridGNN is excluded from the inference time reported above. This is because the road network representation can be learned in advance as it is independent of the input trajectory used for the inference task.
\subsection{Case Study}
Fig.~\ref{fig:case} visualizes trajectories recovered by different models,
where the input trajectory is a low-sample elevated road
trajectory.
We plot part of the underlying road network structure (e.g., elevated road segments represented by red lines and main road segments represented by black lines) in the two dash-line rectangles of Fig.~\ref{fig:case}(a). As we can observe from the visualization, the road network near the elevated roads is complicated.
Purple lines in the figures represent the ground truth trajectory, and the orange, green, and blue trajectories represent the trajectories recovered by MTrajRec, GTS+Decoder, and RNTrajRec respectively.
We can observe that the trajectory recovered by our model matches the ground truth much better (e.g., the trajectories in the green circles provide one example), as RNTrajRec is able to capture the spatial and temporal patterns of trajectories in a more accurate manner.
We further plot snapshots of two sections of restored trajectories (bounded by rectangles labelled \textcircled{1} and \textcircled{2}) by different models in the elevated road using the small pictures in the lower right corner in Fig.~\ref{fig:case} (b)-(d), together with three recovered points from each section and their corresponding ground truth as examples. It can be observed that
both road sections restored by MTrajRec and those restored by GTS+Decoder deviate from the ground truth,
while the road sections restored by our model match the ground truth trajectory perfectly.
In addition, we want to highlight that points recovered by the two baseline models lack spatial consistency due to their insufficient use of the road network. For example, the orange/green star in Fig.~\ref{fig:case} (b/c)-\textcircled{2} is a point on the main road, while the next recovered point (i.e., the orange/green circle) is located on the elevated road. Although the two points seem to be located on the same road in our visualization,
the shortest path distance between those two points is larger than $2000$ metres, implying that the recovered path between these two points is very different from the ground truth.
In other words, the ability of our model to recover more accurate trajectories on elevated roads with complex network topology is significant.
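The spatial-consistency check described above can be made concrete with a small shortest-path routine; the graph format, node names, and the $2000$-meter threshold (taken from the example above) are illustrative assumptions:

```python
import heapq

def network_dist(graph, src, dst):
    """Shortest-path distance in meters via Dijkstra;
    `graph` maps node -> list of (neighbor, edge_length_m)."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

def inconsistent(graph, path_nodes, limit_m=2000):
    """Indices i where the hop path_nodes[i] -> path_nodes[i+1]
    exceeds limit_m on the road network, flagging spatially
    inconsistent consecutive recovered points."""
    return [i for i in range(len(path_nodes) - 1)
            if network_dist(graph, path_nodes[i], path_nodes[i + 1]) > limit_m]

# Toy graph: main road and elevated road look close on a map but are
# 2500 m apart on the network, so the direct hop is flagged.
g = {"main": [("ramp", 1800)], "ramp": [("elevated", 700)]}
flags = inconsistent(g, ["main", "elevated"])  # -> [0]
```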
\begin{table*}[htbp]
\renewcommand\tabcolsep{3.3pt}
\renewcommand{\arraystretch}{1.2}
\caption{Ablation studies on Chengdu and Porto datasets.}
\vspace{-0.1in}
\centering
\begin{tabular}{|c|cccccc|cccccc|}
\hline
\multirow{2}{*}{Variants}&\multicolumn{6}{c|}{Chengdu ($\epsilon_\tau=\epsilon_\rho*8$)} & \multicolumn{6}{c|}{Porto ($\epsilon_\tau=\epsilon_\rho*8$)} \\
& Recall & Precision & F1 Score & Accuracy & MAE & RMSE & Recall & Precision & F1 Score & Accuracy & MAE & RMSE \\
\hline
w/o GRL & 0.7696 & 0.8773 & 0.8177 & 0.6459 & 144.61 & 240.22 & 0.6671 & 0.7946 & 0.7227 & 0.5145 & 101.51 & 150.17 \\
w/o GF & 0.7725 & 0.8765 & 0.8191 & 0.6439 & 141.31 & 234.28 & 0.6697 & 0.7926 & 0.7234 & 0.5133 & 102.07 & 151.04 \\
w/o GAT & 0.7821 & 0.8729 & 0.8229 & 0.6292 & 144.70 & 237.82 & 0.6747 & \textbf{0.7962} & 0.7279 & 0.5195 & 98.65 & 147.98 \\
w/o GN & 0.7827 & 0.8672 & 0.8200 & 0.6306 & 146.56 & 241.25 & 0.6729 & 0.7951 & 0.7264 & 0.5171 & 99.84 & 148.05 \\
w/o GCL & 0.7773 & 0.8744 & 0.8209 & 0.6472 & 140.59 & 236.49 & 0.6683 & 0.7928 & 0.7227 & 0.5119 & 102.13 & 152.10 \\
\hline
RNTrajRec & \textbf{0.7831} & \textbf{0.8812} & \textbf{0.8272} & \textbf{0.6609} & \textbf{132.69} & \textbf{219.20} & \textbf{0.6778} & 0.7950 & \textbf{0.7293} & \textbf{0.5230} & \textbf{97.66} & \textbf{145.87} \\
\hline
\end{tabular}
\vspace{-0.15in}
\label{tab:ablation}
\end{table*}
\subsection{Ablation Study}
To further prove the effectiveness of the modules
proposed in the paper, we create five variants of RNTrajRec. \textbf{w/o GRL} replaces the graph refinement layer (GRL) in GPSFormer with a standard transformer layer and ignores the graph structure input, i.e., $\widehat{G}_{\tau}$ and $\vec{Z}_{\tau}^{(0)}$; \textbf{w/o GF} replaces the gated fusion discussed in Section~\ref{GF} with a concatenation operation and a feed forward network; \textbf{w/o GN} replaces the graph normalization discussed in Section~\ref{GN} with the standard layer normalization in the transformer encoder; \textbf{w/o GAT} employs a feed forward network instead of the graph attention network in the graph refinement layer; \textbf{w/o GCL} removes the graph classification (GCL) loss defined in Eq.~\eqref{eq26}. Experimental results are listed in Table \ref{tab:ablation}. We observe that RNTrajRec consistently outperforms all its variants, which proves the significance of these modules.
As mentioned in Section~\ref{SG}, the local surrounding graph structure of GPS points is important for understanding the movement of a trajectory. Therefore, we observe a significant drop in overall performance after removing GRL,
especially for Recall and F1 Score. Besides, we observe that RNTrajRec w/o GRL significantly outperforms Transformer+Decoder, with the input to the transformer layer being their only difference. Therefore, we conclude that a well-designed input embedding is significant for complex encoding models like transformer.
GRL is considered the most important component in RNTrajRec; it replaces the layer normalization, multi-head attention, and feed forward network of the original transformer with graph normalization, gated fusion, and a graph attention network, respectively. In order to better justify the design of these individual modules, we conduct ablation studies to answer the following three questions.
$\bullet$ Q1: Why do we use graph normalization instead of layer normalization in the graph refinement layer?
$\bullet$ Q2: Why do we use a graph attention network instead of a feed forward network in the graph refinement layer?
$\bullet$ Q3: Why do we use gated fusion instead of a simpler method (e.g., concatenation or an MLP) in the graph refinement layer?
\textbf{Q1: Why Graph Normalization?} The design of graph normalization is inspired by \cite{dwivedi2020generalization}, which demonstrates experimentally that batch normalization is more suitable than layer normalization for graph transformers. We therefore design a similar normalization strategy for our dynamic graph transformer (or spatial-temporal transformer).
In addition, layer normalization ignores the other nodes in the same sub-graph, which may leave the normalized features of different nodes in the same sub-graph lacking discrimination.
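To make the distinction concrete, the following minimal sketch (our illustration, not the paper's implementation; shapes and values are hypothetical) contrasts per-node layer normalization with a sub-graph-level normalization in the spirit described here:

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Standard layer normalization: each node (row) is normalized
    # independently over its feature dimension.
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def graph_norm(x, graph_ids, eps=1e-6):
    # Sub-graph-level normalization: statistics are shared by all nodes
    # belonging to the same sub-graph, so relative differences between
    # nodes of one sub-graph survive normalization.
    out = np.empty_like(x, dtype=float)
    for g in np.unique(graph_ids):
        mask = graph_ids == g
        mu = x[mask].mean()
        sigma = x[mask].std()
        out[mask] = (x[mask] - mu) / (sigma + eps)
    return out

# Two sub-graphs with 2 nodes each; each node of sub-graph 0 has constant
# features, so layer norm maps both nodes to the same (zero) vector.
x = np.array([[1.0, 1.0], [3.0, 3.0], [0.0, 2.0], [2.0, 0.0]])
ids = np.array([0, 0, 1, 1])
ln = layer_norm(x)       # rows 0 and 1 become identical
gn = graph_norm(x, ids)  # rows 0 and 1 keep distinct values
```

The example shows why per-node statistics can erase the discrimination between nodes of one sub-graph that the text mentions.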
\textbf{Q2: Why Graph Attention Network?} There are two main reasons behind the additional graph attention network in the graph refinement layer. First, as mentioned in Section IV-C, each subgraph $\vec{Z}_{\tau, i}^{(0)}$ directly uses the features of the road segments as input but entirely ignores the relationships inside each subgraph.
Second, the transformer encoder discussed in Section IV-E can effectively aggregate features from the entire trajectory; therefore, the relationships within each subgraph can be refined with the output of the transformer encoder.
\textbf{Q3: Why Gated Fusion?} As mentioned in Section IV-D, we adopt gated fusion to adaptively fuse the input hidden-state
vectors with the node features of the graph structure.
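As a rough sketch of the idea (not the paper's actual implementation; the gate parameterization below is a common formulation we assume for illustration), gated fusion computes a per-dimension sigmoid gate from both inputs and interpolates between them:

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_fusion(h, z, W, b):
    # h: hidden-state vector from the transformer branch
    # z: node-feature vector from the graph branch
    # A learned gate decides, per dimension, how much of each source to keep.
    g = 1.0 / (1.0 + np.exp(-(np.concatenate([h, z]) @ W + b)))  # sigmoid gate
    return g * h + (1.0 - g) * z

d = 4
h = rng.standard_normal(d)
z = rng.standard_normal(d)
W = rng.standard_normal((2 * d, d)) * 0.1   # hypothetical learned parameters
b = np.zeros(d)
fused = gated_fusion(h, z, W, b)
```

Because the gate lies in $(0,1)$, each fused dimension is a convex combination of the two sources, unlike plain concatenation, which leaves the trade-off entirely to the following layer.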
Extensive results reported in Table~\ref{tab:ablation} verify that the performance of RNTrajRec degrades if any of the three parts is replaced or removed.
As for GCL, its main purpose is to guide the process of trajectory encoding so as to generate a more accurate trajectory representation. As shown in Table~\ref{tab:ablation}, we observe that GCL indeed improves the accuracy of the recovered trajectory.
In addition, we study the effectiveness of different road network representation methods. As shown in Fig.~\ref{fig4}(a), we compare GridGNN with three representative graph neural networks (GNNs), namely GCN, GIN and GAT,
for graph representation. All these models are implemented using the standard DGL~\cite{wang2019deep2} library. We observe that GridGNN consistently performs the best, which shows the effectiveness of integrating grid information into road network representation. In addition,
GAT outperforms GIN and GCN for graph representation, which shows the significance of the self-attention mechanism.
\subsection{Parameter Analysis}
\subsubsection{The impact of the number of GPSFormerBlock $N$}
The number of GPSFormerBlock $N$ directly influences the complexity of RNTrajRec. To examine its impact, we vary $N$ in RNTrajRec from $1$ to $5$ and report the results in Fig.~\ref{fig4}(b).
Note that when $N$ is too large, the model becomes more prone to overfitting, which degrades its performance. We observe from the results that RNTrajRec achieves the highest accuracy when $N=3$. However, considering the efficiency of the model and the usage of GPU memory, we set $N$ to $2$.
\subsubsection{The influence of the receptive field of GPS points $\delta$}
The receptive field $\delta$ of a GPS point determines the size of the surrounding sub-graph of each GPS point in the trajectory.
A larger $\delta$ allows the sub-graph to capture more information about the surrounding area at a higher memory cost. To study the impact of $\delta$ on RNTrajRec's performance, we vary $\delta$ from $100$ meters to $800$ meters.
As shown in Fig.~\ref{fig4}(c), RNTrajRec achieves the highest accuracy when $\delta$ is set to $600$ meters.
To balance between the effectiveness and efficiency of the model, we set $\delta$ to $400$ meters.
\subsubsection{The influence of hyper-parameter $\gamma$ in Eq. \eqref{eq7}}
We vary the hyper-parameter $\gamma$ from $10$ meters to $50$ meters. As $\gamma$ increases, the initial hidden-state vectors $H_{\tau}^{(0)}$ pay more attention to the road segments that are closer to the GPS point and less attention to those far away. Because of the uncertainty of GPS errors, the effect of increasing $\gamma$ on the model is also uncertain.
As shown in Fig.~\ref{fig4}(d), we observe that the performance of the model does not vary much as $\gamma$ changes. We believe the main reason is that GPSFormer can dynamically adjust the weight of each node in the sub-graph for the current GPS point, making the model insensitive to changes in the hyper-parameter $\gamma$.
\begin{figure}[t]
\vspace{-0.1cm}
\centerline{\includegraphics[width=9cm]{fig/Fig7-v1.pdf}}
\vspace{-0.15in}
\caption{The performance of RNTrajRec on Chengdu dataset ($\epsilon_\tau=\epsilon_\rho*8$) under different hyper-parameters.}
\label{fig4}
\vspace{-0.6cm}
\end{figure}
\subsection{Discussion}
We have explored many different techniques during the design of RNTrajRec. Though most of our attempts were effective, one technique we tried did not work.
When designing the graph refinement layer, ideally the graph transformer network should refine not only the graph embedding of each subgraph but also the weight of each node in every subgraph. Specifically, we tried to use the refined graph embedding at layer $l$, i.e., $\vec{Z}_{\tau}^{(l)}$, to obtain a new weight for each node by first linearly transforming $\vec{Z}_{\tau}^{(l)}$ and then applying either a sigmoid or a softmax function. However, we find that both choices perform worse than the simple approach that does not refine the weights within each subgraph, i.e., directly using mean pooling as discussed in Section~\ref{GR}. We believe the main reason behind this failed attempt is that the linear transformation is too simple
to learn valuable weights without proper supervision.
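The contrast between the mean-pooling strategy that was kept and the attempted softmax-based re-weighting can be sketched as follows (our own simplified illustration; `w` stands in for the hypothetical learned transform mentioned above):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mean_pool(Z):
    # The simple strategy the paper keeps: every node of the sub-graph
    # contributes equally to the pooled embedding.
    return Z.mean(axis=0)

def learned_weight_pool(Z, w):
    # The attempted refinement: a linear transform of the refined node
    # embeddings, squashed by softmax, re-weights the nodes.
    alpha = softmax(Z @ w)  # one scalar weight per node
    return alpha @ Z        # weighted sum of node embeddings

rng = np.random.default_rng(1)
Z = rng.standard_normal((5, 8))  # 5 nodes, 8-dim refined embeddings
w = rng.standard_normal(8)       # hypothetical learned weight vector
p_mean = mean_pool(Z)
p_learned = learned_weight_pool(Z, w)
```

Note that with `w = 0` the learned variant degenerates exactly to mean pooling, which is consistent with the observation that, without proper supervision, the linear transform has no clear signal to move away from uniform weights.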
\section{Conclusion}
Trajectory recovery is a significant task for utilizing low-sample trajectories effectively. In this paper, we propose a novel spatial-temporal transformer-based model, namely RNTrajRec, to capture rich spatial and temporal information of the given low-sample trajectory.
Specifically, we propose a
road network representation module, namely GridGNN, and a novel spatial-temporal transformer module, namely GPSFormer, for encoding a GPS trajectory.
We then forward the hidden-state vectors to a multi-task decoder model
to recover the missing GPS points.
Also, we propose a graph classification loss with constraint mask to guide the process of trajectory encoding.
Extensive experiments on three real-life datasets show the effectiveness and efficiency of the proposed method.
\section{Acknowledgements}
This research is supported in part by the National Natural Science Foundation of China under grant 62172107.
\normalem
\bibliographystyle{IEEEtran}
\section{Introduction}
The question of what causes the prominent east-west (zonal) jet
streams and banded cloud patterns on Jupiter, Saturn, Uranus, and
Neptune remains a major unsolved problem in planetary science.
On Jupiter and Saturn, there exist $\sim20$--30 jets, including
a broad, fast superrotating (eastward) equatorial jet. In contrast,
Uranus and Neptune exhibit only $\sim3$ jets each, with high-latitude
eastward jets and a subrotating (westward) equatorial jet.
All four planets also exhibit a variety of compact vortices, waves,
turbulent filamentary regions, short-lived convective events,
and other local features.
Recent observational studies demonstrate that small eddies pump momentum
up-gradient into the zonal jets on Jupiter and Saturn
\citep{2006Icar..185..430S, Del_Genio2007}, which strongly suggests
that cloud-layer processes are important in jet formation, although
this does not exclude a possible role for the deep interior too.
Scenarios to explain these diverse observations range from the
``shallow-forcing'' scenario, in which jet generation occurs via
injection of turbulence, absorption of solar
radiation and latent heat release in the cloud layer, to the
``deep-forcing'' scenario, in which the jet formation results
from convection occurring throughout the molecular envelope
\citep[for reviews see][]{ingersoll-etal-2004, 2005RPPh...68.1935V}.
Over the past several decades, a variety of idealized models
have been developed that successfully produce banded zonal flows
reminiscent of those on the giant planets
\citep{1978JAtS...35.1399W, cho-polvani-1996a, 1998JAtS...55..611H,
Heimpel2007, lian-showman-2008}.
Despite these successes, the equatorial jet direction and magnitude
have proved to be formidable puzzles that are difficult to explain.
Many published models predict the same equatorial jet direction for
all four giant planets and thereby fail to provide a coherent
explanation that encompasses both the gas giants (Jupiter/Saturn)
and the ice giants (Uranus/Neptune). Under relevant conditions,
one-layer shallow-water-type
models generally produce westward equatorial flow for all four
planets \citep{1996PhFl....8.1531C,1999PhFl...11.1272I, showman-2007,
scott-polvani-2007}, consistent with Uranus and Neptune but not
Jupiter and Saturn. In the parameter regime of giant planets,
some shallow-water models have produced equatorial superrotation
\citep{scott-polvani-2008}, but as yet these models make no predictions
for why superrotation should occur on Jupiter and Saturn but not Uranus
and Neptune. Some
recent three-dimensional (3D) shallow-atmosphere models can also produce
equatorial superrotation under specific conditions
\citep{Williams_2006,Williams_2002, Williams_2003_3, Williams_2003_2,
Williams_2003_1, 2005P&SS...53..508Y, lian-showman-2008}, but
this has generally required the addition of {\it ad hoc} forcing,
and even if such forcing were plausible, it is unclear why it
would occur on Jupiter/Saturn but not Uranus/Neptune.
In contrast, the deep convection models
produce equatorial superrotation in most cases \citep{2001GeoRL..28.2557A,
2001GeoRL..28.2553C, 2005Natur.438..193H,Heimpel2007, glatzmaier-etal-2008},
consistent with Jupiter and Saturn but inconsistent with Uranus and Neptune.
In the context of deep convection models,
\citet{aurnou-etal-2007} proposed that
Uranus and Neptune are in a regime where geostrophy breaks down
in the interior, leading to turbulent mixing of angular momentum
and a westward equatorial jet. However, this mechanism
occurs only at heat fluxes greatly exceeding those observed on
Uranus and Neptune \citep{aurnou-etal-2007}. Moreover,
given that the heat fluxes on Jupiter and Saturn exceed those on Uranus
and Neptune, one might expect the mechanism to apply more readily to
the former pair than the latter pair; if so, one should see equatorial
subrotation on Jupiter/Saturn yet superrotation on Uranus/Neptune,
backward from the observed equatorial jet directions.
\citet{schneider-liu-2009} developed a 3D numerical model that produced banded
zonal jets and equatorial superrotation on Jupiter. This is the first
model that combines deep convection (via a simple convective adjustment
scheme) with absorption of solar radiation in a
shallow atmosphere. However, their model extends to only 3 bars pressure
and thus neglects the effects of latent heating, which may be crucial
in generating horizontal temperature contrasts in the cloud layer.
Furthermore, it is unclear whether their model can produce equatorial
subrotation on Uranus and Neptune by the same mechanism. It is fair to say
that we presently lack a coherent explanation for the equatorial jets
that encompasses both the gas giants (Jupiter/Saturn) and the ice
giants (Uranus/Neptune).
Here, we test the hypothesis that large-scale latent heating associated with
condensation of water vapor can pump the zonal jets on the four giant planets.
This hypothesis has been repeatedly suggested over the past 40 years
\citep{barcilon-gierasch-1970, 1976Icar...29..445G, 2000Natur.403..630I,
2000Natur.403..628G}, but this idea has not yet been adequately
tested in numerical models. While observations cannot yet constrain the
existence of large-scale latent heating, abundant evidence nevertheless
exists for moist convection on the giant planets.
Lightning has been identified in nightside
images of Jupiter from Voyager, Galileo, Cassini, and New Horizons;
these flashes typically occur within localized, opaque clouds that grow
to diameters up to $\sim3000\,$km over a few days, indicating
convective activity. The lightning illuminates finite regions
on the cloud deck, indicating that the flashes occur at depths
of 5--10 bars, in the expected water condensation region
\citep{borucki-williams-1986, dyudina-etal-2002}.
Near such storms, clouds are
sometimes observed whose tops are deeper than 4 bars, where the
only condensate is water \citep{1998Icar..135..230B, 2000Natur.403..628G}.
On Saturn, electrostatic discharges
presumably caused by lightning have been identified, as have explosive
convective clouds that probably cause them \citep{Porco2005,
dyudina-etal-2007}.
Whistlers and electrostatic discharges indicating the presence of
lightning have also been detected on Uranus and Neptune
\citep{zarka-pedersen-1986, gurnett-etal-1990,
kaiser-etal-1991}, suggesting that moist convection occurs on
these planets too. For plausible water abundances (a few
times solar or greater), latent heating can cause local temperature
increases great enough to have important meteorological effects.
To date, numerical models of jet formation have generally
not included moisture and its latent heat release.
Most two-dimensional (2D) and shallow-water models
adopt forcing that injects turbulence everywhere simultaneously
and is confined to a small range of wavenumbers \citep[e.g.][]
{1998JAtS...55..611H, scott-polvani-2007}, which does not
capture the sporadic and localized nature of moist-convective
events. Likewise, existing 3D models of Jovian jet formation
have been dry (no water vapor) and force the flow by imposing latitudinal
temperature differences rather than including moist convection
\citep[e.g.][]{Williams_2006,Williams_2002, Williams_2003_3, Williams_2003_2,
Williams_2003_1, 2005P&SS...53..508Y, lian-showman-2008}.
Notable efforts in the right direction are the one-layer studies by
\citet{Li-2006} and \citet{showman-2007}, which adopted a forcing
explicitly intended to represent the effects of moist convection.
\citet{Li-2006} adopted a quasigeostrophic model and
introduced isolated vorticity patches to represent moist-convective storms;
\citet{showman-2007} adopted the shallow-water
equations and introduced isolated mass pulses to represent the
moist convection. These studies show that, under planetary rotation,
the small-scale turbulent flow can undergo an inverse cascade to form large-scale
structures: zonal jets dominate at low latitudes and vortices dominate
at high latitudes. Nevertheless, these models do not explicitly
include water vapor, and the moist convection events are, rather than
occurring naturally, injected by hand with prescribed sizes, lifetimes
and amplitudes. Studies have also been carried out that
investigate the effects of sophisticated cloud microphysics schemes
on the vertical structure in 1D column models \citep{Del_Genio1990} and 2D
height/latitude models \citep{nakajima-etal-2000, 2008Icar..194..303P}, but these studies
do not address whether moist convection can pump the jets.
Our previous 3D studies with imposed latitudinal temperature variation
can produce baroclinic eddies that drive the zonal jets through an
inverse cascade of turbulence \citep{lian-showman-2008}. Those simulations
successfully reproduced some major dynamic features on Jupiter such as
banded zonal winds and equatorial superrotation; they also predicted that the
jets on Jupiter could extend significantly deeper than the eddy accelerations
that pump them. However, the nature of the imposed forcing
schemes was only a crude parameterization of the processes that produce latitudinal
temperature contrasts.
Here we present three-dimensional (3D) global numerical simulations
using the MITgcm to investigate whether large-scale latent heating can drive the
zonal jets on Jupiter, Saturn, Uranus, and Neptune. Specifically, we
investigate whether we can explain (i) the approximate number
and speed of jets, and (ii) the direction of the equatorial jet
on all four planets in the context of a single mechanism. We explicitly
include water vapor as a tracer in our numerical model.
Section 2 describes the numerical model,
section 3 presents the simulation results, and section 4 concludes.
\section{Models}
\subsection{Model setup}
We use a global circulation model, the MITgcm, to solve the 3D
hydrostatic primitive equations in pressure coordinates on a sphere.
Previous studies of jet formation on the giant planets have adopted
dry models \citep{lian-showman-2008, Williams_2003_1, showman-2007,
scott-polvani-2007, Li-2006}, but here
we explicitly treat the transport and condensation of water vapor.
Condensation occurs whenever the relative humidity exceeds 100\%,
and the resultant latent heating is explicitly added
to the energy equation.
The system is governed by the horizontal momentum, hydrostatic
equilibrium, mass continuity, energy, and water-vapor equations as follows:
\begin{equation}{{d {\bf v}}\over{dt}}+f\hat{k}\times{\bf v}+\nabla_p\Phi=0
\label{mom}
\end{equation}
\begin{equation}{\partial{{\Phi}}\over{\partial{p}}}=-{1\over\rho}
\label{geopotential}
\end{equation}
\begin{equation}\nabla_p\cdot{\bf v}+{\partial{\omega}\over{\partial{p}}}=0
\label{continuity}
\end{equation}
\begin{equation}
{d\theta \over {dt}}=Q_\theta+{L\over c_p}{\theta\over T}(\delta{ {q-q_s}\over{\tau_s}})
\label{LatentHeat}
\end{equation}
\begin{equation}
{dq\over{dt}}=-{{q-q_s}\over\tau_s}\delta + Q_{\rm deep}
\label{Condensation}
\end{equation}
where ${\bf v}$ is the horizontal wind vector (comprised of zonal
wind $u$ and meridional wind $v$), $\omega=dp/dt$ is vertical wind
in pressure coordinates, $f=2\Omega \sin\phi$ is the Coriolis
parameter (where $\phi$ is latitude and $\Omega$ is the rotation rate
of the planet), $\Phi$ is geopotential, $\hat{k}$ is the
unit vector in the vertical direction (positive upward), $\rho$
is density, $\nabla_p$ is the horizontal gradient operator at a
given pressure level, $d/dt$ is the total derivative operator given by
$d/dt=\partial/\partial t + {\bf v}\cdot\nabla_p + \omega\partial/\partial
p$, $q$ is the water-vapor mixing ratio (defined as kilograms
of water vapor per kilogram of dry H$_2$ air),
and $\theta=T(p_0/p)^{\kappa}$ is potential temperature. Here $T$
is temperature and $\kappa\equiv R/c_p$, which is a specified constant,
is the ratio of the gas constant to the specific heat at
constant pressure. In the equations above, density $\rho$ is calculated
at a given temperature and pressure from the ideal gas law, $\rho = \frac{pm}{R_u T}$,
where $R_u$ is the universal gas constant and $m$ is the mean molecular mass of
the moist air, given by
$m = \frac{(1+q)\,m_{H_2}}{1 + q/\epsilon}$, where
$\epsilon = \frac{m_{H_2O}}{m_{H_2}}$ is the ratio of the mass of a water molecule
to that of a dry air molecule. Note that we neglect the density perturbation associated
with condensate mass loading. Given the density field, geopotential $\Phi$ is then
calculated by integrating the hydrostatic equation vertically via Eq. (2).
The reference pressure $p_0$ is taken as $1\,$bar
(note, however, that the dynamics are independent of the choice of
$p_0$). Curvature terms are included in ${\bf v}\cdot\nabla{\bf v}$.
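The moist density calculation above can be sketched in a few lines (a sketch under our assumptions: the molecular-mass values below are illustrative numbers we supply, not values taken from the paper, whose effective dry-air mass includes helium):

```python
import numpy as np

R_U   = 8.314        # universal gas constant, J mol^-1 K^-1
M_H2  = 2.02e-3      # kg mol^-1 (illustrative value for the dry H2 air)
M_H2O = 18.02e-3     # kg mol^-1 (illustrative value for water)
EPS   = M_H2O / M_H2 # epsilon in the text: ratio of molecular masses

def mean_molecular_mass(q):
    # m = (1 + q) m_H2 / (1 + q/eps): total mass over total moles for
    # 1 kg of dry air carrying q kg of water vapor.
    return M_H2 * (1.0 + q) / (1.0 + q / EPS)

def density(p, T, q):
    # Ideal-gas law with the moist mean molecular mass, rho = p m / (R_u T).
    return p * mean_molecular_mass(q) / (R_U * T)

rho_dry   = density(1.0e5, 300.0, 0.0)
rho_moist = density(1.0e5, 300.0, 0.03)  # 3% water by mass
```

Since water molecules are heavier than H$_2$, adding vapor raises the mean molecular mass, so the moist parcel is denser than a dry one at the same $p$ and $T$.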
In all governing equations Eq.~\ref{mom} -- Eq.~\ref{Condensation},
the dependent variables ${\bf v}$, $\omega$, $\Phi$,
$\rho$, $\theta$, and $q$ are functions of longitude $\lambda$, latitude
$\phi$, pressure $p$, and time $t$.
The water-vapor equation (Eq.~\ref{Condensation}) governs the time
evolution of the water-vapor mixing ratio $q$. There are two source/sink
terms. The first, $-(q-q_s)\delta/\tau_s$, represents loss through
condensation.
We apply an ``on-off switch'' $\delta$ to make the condensation events
occur only when the environment is supersaturated:
when $q > q_s$ then $\delta=1$ and water vapor condenses;
when $q \le q_s$ then $\delta=0$ and water vapor does not condense.
Here $q_s$ is the saturated water-vapor mixing ratio, given by the
approximate expression
\begin{equation}
q_s={m_{H_2O}\over{m_{H_2}}}{{e_s}\over p},
\label{SpecificHumidity}
\end{equation}
\begin{equation}
e_s={e_0} \exp{\left[-{L\over{R_v}} \left({1\over T} -{1\over {T_0}} \right) \right]},
\label{VaporPressure}
\end{equation}
where $e_s$ is the saturation vapor pressure.
Other constants in Eqs.~\ref{SpecificHumidity}--\ref{VaporPressure}
are given as follows:
$e_0=609.14 \,\,{\rm Pa}$ is a reference saturation
water-vapor pressure at temperature $T_0=273\,{\rm K}$,
$R_v=461.0\ {\rm J\,K^{-1}\,kg^{-1}}$ is the specific
gas constant of water vapor, $m_{H_2O}$ is the molecular mass of water and
$m_{H_2}$ is the molecular mass of hydrogen gas. The quantity
$\tau_s$ is the condensation timescale, generally taken as $10^4\,$sec
(almost 3 hours), representative of a typical convective time.
The second term, $Q_{\rm deep}$, represents a source of water
vapor applied near the bottom of the model (see below).
When condensation occurs, we apply the appropriate
latent heating to the energy equation (second term on right side of
Eq.~\ref{LatentHeat}). The specific latent heat of condensation is given by
$L=2.5\times 10^6 \rm \, J \, kg^{-1}$.
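A minimal sketch of the condensation scheme above as one explicit time step (our simplification: it heats temperature directly rather than potential temperature, and the molecular-mass ratio is an illustrative value we supply; the other constants follow the text):

```python
import numpy as np

E0, T0 = 609.14, 273.0  # Pa, K: reference saturation point
R_V    = 461.0          # J K^-1 kg^-1, gas constant of water vapor
LHEAT  = 2.5e6          # J kg^-1, latent heat of condensation
C_P    = 13000.0        # J K^-1 kg^-1 (Jupiter value from Table 1)
EPS    = 18.02 / 2.02   # m_H2O / m_H2 (illustrative molecular masses)
TAU_S  = 1.0e4          # s, condensation timescale

def q_sat(T, p):
    # Clausius-Clapeyron saturation vapor pressure and the
    # corresponding saturation mixing ratio, q_s = eps * e_s / p.
    e_s = E0 * np.exp(-(LHEAT / R_V) * (1.0 / T - 1.0 / T0))
    return EPS * e_s / p

def condense_step(q, T, p, dt):
    # One explicit step of the condensation source terms: the on-off
    # switch delta activates only when q exceeds saturation, removing
    # vapor and releasing latent heat.
    qs = q_sat(T, p)
    delta = 1.0 if q > qs else 0.0
    dq = -delta * (q - qs) / TAU_S * dt                  # vapor lost
    dT = (LHEAT / C_P) * delta * (q - qs) / TAU_S * dt   # latent heating
    return q + dq, T + dT
```

A supersaturated parcel loses vapor and warms over the step, while a subsaturated parcel is left untouched, mirroring the role of $\delta$ in the governing equations.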
\citet{2008Icar..194..303P} point out, using simplified one- and
two-dimensional test cases, that cloud microphysics can interact
with large-scale dynamics on giant planets.
While recognizing that inclusion of microphysics in 3D is an
important goal for future work, we here make
the simplifying assumption that all of the condensate instantaneously
rains out of the bottom of the model. This allows us to neglect cloud
microphysics and thereby sidestep the
numerous complications associated with cloud-particle growth and
settling, the evaporation of falling precipitation, and other microphysical
processes that remain poorly understood --- and whose effects must be
parameterized in large-scale models. Depending on the complexity of
the adopted schemes, including such processes can introduce potentially
dozens of new free parameters into the model. In Earth climate models,
these parameters are generally tuned using a combination of laboratory
and field data, but it is unclear to what extent such parameter values
(or even the schemes themselves) translate into the giant planet
context \citep[for discussion see][]{2008Icar..194..303P}. Given
these difficulties, there is strong merit in exploring the dynamics
in the limiting case without microphysics, as presented here.
True moist convection, which involves the formation of cumulus
clouds and thunderstorms, occurs on length scales much smaller than the
horizontal grid resolutions achievable in most global-scale models (including
ours). Significant work has gone into developing sub-grid-scale cumulus
parameterization schemes to represent the effects of this cumulus convection
on the large-scale flow resolved by global models \citep[for reviews see, e.g.,][]
{emanuel-raymond-1993, arakawa-2004}. Incorporating such a scheme
into a Jovian model is a worthy goal, but such schemes are often complex,
and it is first useful to ascertain the effects of {\it large-scale}
latent heating, that is, latent heating associated with the
hydrostatically balanced circulation explicitly resolved by the model.
This is the approach we pursue here.
In general, we expect that the latent heating and decrease in molecular
mass accompanying condensation/rainout will lead to a vertical structure
where potential temperature increases with height and molecular mass decreases
with height. If so, this would imply that condensation would stabilize the
environment against convection, leading to a virtual potential temperature
that increases with height. Given this expectation, we do not include any
dry convective adjustment scheme in the current simulations.
To provide a crude representation of evaporating precipitation
and water vapor mixed upward from the deeper atmosphere (below
the bottom of our domain), we apply a source term of water vapor,
$Q_{\rm deep}$, to the bottom of the model. This term takes the
form $Q_{\rm deep} = (q_{\rm deep} - q)/\tau_{\rm replenish}$ and is applied
only at pressures exceeding a critical pressure $p_c$, which
is chosen to be deeper than the deepest possible condensation pressure
for the water-vapor abundance and thermal structure expected
in a given simulation. Here, $q_{\rm deep}$ is a specified planetary
water vapor abundance (e.g., 1, 3, 10, or 30 times solar) and
$\tau_{\rm replenish}$ is a relaxation time. Our goal is to force
the deep water abundance (at $p>p_c$) to be very close to $q_{\rm deep}$.
The relaxation time, $\tau_{\rm replenish}$, is thus not a free
parameter and is chosen to be very short (typically 5 hours). This
source term allows
the model to reach a statistical steady state in which the mean
total water vapor content of the atmosphere is nearly constant
over time --- despite the loss of water via condensation.
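The deep source term can be sketched as a masked relaxation (illustrative column values; only levels at pressures greater than the critical pressure $p_c$ are nudged toward the prescribed abundance):

```python
import numpy as np

def deep_water_source(q, p, q_deep, p_c, tau, dt):
    # Q_deep: relax q toward the prescribed deep abundance q_deep,
    # but only at pressures exceeding the critical pressure p_c.
    mask = p > p_c
    dq = np.where(mask, (q_deep - q) / tau * dt, 0.0)
    return q + dq

# Hypothetical three-level column at 1, 5, and 20 bars:
p = np.array([1.0e5, 5.0e5, 2.0e6])   # Pa
q = np.array([0.001, 0.02, 0.01])
q_new = deep_water_source(q, p, q_deep=0.03, p_c=7.0e5, tau=1.8e4, dt=1.8e3)
```

With the short relaxation time ($\sim$5 hours), the deep levels are pinned close to $q_{\rm deep}$ while shallower levels are left to the dynamics and condensation.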
In the thermodynamic equation (Eq.~\ref{LatentHeat}),
$Q_\theta$ is the rate of heating
(expressed in ${\rm K}\,{\rm sec}^{-1}$) due to radiation. We adopt
a simple Newtonian relaxation scheme:
\begin{equation}
Q_\theta = -{{ \theta-\theta_{\rm ref}} \over {\tau_{\rm rad}}}
\label{newtonian}
\end{equation}
The equilibrium $\theta_{\rm ref}$ profiles, shown in Fig.~\ref{TP},
are based on pressure-temperature profiles following the radio-occultation and
Galileo-probe results \citep{1981JGR....86.8721L, lindal-etal-1985,
lindal-etal-1987, lindal-etal-1992, 1998JGR...10322857S}.
Each contains a deep neutrally stable troposphere,
an isothermal stratosphere and a smooth transition layer
between the two regions (Fig.~\ref{TP}).
The adopted relaxation timescales are 400 Earth days for Jupiter-type
simulations and 200 Earth days for Neptune-type simulations. These
timescales are shorter than expected radiative timescales in the deep
tropospheres of giant planets but allow us to perform simulations
in reasonable time while preserving the quasi-isentropic behavior of
the atmospheric motions over typical dynamical timescales of 1--10 days.
Importantly, we chose to make $\theta_{\rm ref}$ independent of
latitude for this study. This contrasts with previous studies
\citep[e.g.,][]{lian-showman-2008, Williams_2002,
Williams_2003_2}, where the equilibrium temperature $\theta_{\rm ref}$
is a function of latitude. Our choice here is motivated by
the fact that when $\theta_{\rm ref}$ depends
on latitude, the forcing imposes a zonally banded structure
on the flow, and it is thus unclear to what extent any zonal jet formation
results from such banded forcing rather than from the $\beta$ effect (where
$\beta$ is the gradient of Coriolis parameter with northward distance).
By making $\theta_{\rm ref}$ independent of latitude, we can ensure
that any banded flow structures result from $\beta$, not from
anisotropic forcing. Moreover, when $\theta_{\rm ref}$ depends
on latitude, then not only the latent heating but also the radiation
causes latitudinal temperature differences and thus injects
available potential energy (APE) into the flow. The energy
source driving the flow would thus be ambiguous. Here, we specifically
aim to test whether large-scale latent heating can drive Jovian-type
jets, and by maintaining $\theta_{\rm ref}$ constant with latitude,
we ensure that the only mechanism for generating lateral temperature
contrasts (hence APE) is large-scale latent heating.
The upper boundary in our simulations is zero pressure and impermeable.
The lower boundary corresponds to an impermeable wall at a constant height;
because the pressure can vary along this surface, it is implemented in pressure
coordinates as a free surface through which no mass flow can occur
(see \citet{campin2004} for details). Both boundaries are free-slip in horizontal velocity.
The mean bottom pressure of simulated domain is a free parameter
which varies from 100 to 500 bars depending on the simulation.
We adopt the ideal gas equation of state. The simulations include
no explicit viscosity, but a fourth-order Shapiro filter \citep{Shapiro_1970}
(analogous to eighth-order hyperviscosity) is added to maintain
numerical stability. The time step is 100 sec.
Initially there are no winds in our simulations. The abundance of
water vapor is set to be subsaturated ($\rm 95\%$ of saturation) in
the region where $p<p_c$ and $q_{\rm deep}$ at $p>p_c$.
In the initial condition, we introduce 5--9 random temperature
perturbations at pressures less than $p_c$ to break the horizontal
symmetry and initiate motions. Each of the initial perturbations, which are
positioned randomly within the simulated domain,
has a warm center and affects the temperature radially within $\rm 5^\circ$.
The initial perturbations for our Jupiter, Saturn, and Uranus/Neptune
cases adopt $\Delta{\theta}=5$, 5, and 10 K, respectively, and
are confined to pressures less than 7, 10, and 10 bars, respectively.
We performed tests that varied the initial location and number of these perturbations,
which show that the qualitative final dynamical state, including the equatorial jet
direction, is not sensitive to the number of perturbations or their locations.
The main purpose of the perturbations
is to induce sufficient motion to generate supersaturation
in localized regions only at the very beginning of the simulations;
once this occurs, the circulation becomes
self-generating.
Although the water abundances on Jupiter, Saturn, Uranus, and Neptune
are unknown,
Galileo probe data indicate that Jupiter's C, N, S, Ar, Kr, and Xe
abundances are all between 2--4 times solar.
Spectroscopic information suggests that methane
is 7 times solar on Saturn \citep{flasar-etal-2005} and
30--40 times solar on Uranus and Neptune \citep{fegley-etal-1991,
baines-etal-1995}. These values suggest that the water
abundance is modest at Jupiter, intermediate
at Saturn, and large at Uranus and Neptune. Predicted
condensation pressures are $\sim8\,$bars for Jupiter,
$\sim20\,$bars for Saturn, and 200--300 bars on Uranus
and Neptune, depending on the water abundance
\citep{flasar-etal-2005,fegley-etal-1991,baines-etal-1995}.
We explore a range of deep water-vapor abundances from 1--20 times
solar on Jupiter and Saturn and from 1--30 times solar on Uranus and Neptune.
Combined with the prescribed temperature structure
(see Fig.~\ref{TP}), these abundances determine the range of pressures
over which condensation will occur in any given simulation. We then set $p_c$,
the pressure at the top of our deep water vapor source $Q_{\rm deep}$,
to be deeper than the base of the condensation region.
We use $p_c=7 \,\rm bars$ with 3 times solar water abundance
and $p_c=10 \,\rm bars$ with 10 times solar water abundance for Jupiter
simulations. For Saturn, we use $p_c=17.3\,$bars for 5 times solar
and $19.2\,$bars for 10 times solar water abundance.
On Neptune, we use $p_c=120 \,\rm bars$ with 1 times solar water
abundance, $p_c=220 \,\rm bars$ with 10 times solar water abundance and
$p_c=330 \,\rm bars$ with 30 times solar water abundance.
Here, 1 times solar water abundance corresponds to 0.01 kilograms of water vapor per kilogram
of dry air. Among all these simulations, the Jupiter-type simulation with
3 times solar water abundance and the Neptune-type simulation with 30 times
solar water abundance are the nominal cases.
We run simulations using the cube-sphere grid. Planetary parameters
are chosen based on Jupiter,
Saturn, and Neptune, the latter of which represents the Uranus/Neptune
pair. The parameters we implement for the nominal simulations are listed in
Table~\ref{parameter}, where
C128 denotes $\rm 128\times 128$ grid points on each cubed-sphere face
(equivalent to $\rm 512\times 256$ on a longitude-latitude grid),
C64 denotes $\rm 64\times 64$ grid points on each cubed-sphere face
(equivalent to $\rm 256\times 128$ on a longitude-latitude grid), and
$N_L$ is the number of layers in the vertical direction.
\begin{table}
\begin{scriptsize}
\begin{tabular} {|l|l|l|l|l|l|l|l|l|l|}
\hline
\hline
Planet & $a \rm{(km)}$ & $\Omega \rm{(s^{-1})}$ & $c_p \rm{(JK^{-1}kg^{-1})}$ & ${\kappa}$ & $g \rm{(ms^{-2})}$ & ${R_q}$ & $Res$ & ${N_L}$ & $p_b \rm{(bars)}$\\
\hline
Jupiter & 71492 & $\rm {1.7585\times 10^{-4}}$ & 13000 & 0.29 & 22.88 & -0.8778 & C128 &35 &100\\
Saturn & 60268 & $\rm {1.6570\times 10^{-4}}$ & 13000 & 0.29 & 8.96 & -0.8778 & C128 & 35 & 100\\
Neptune & 24746 & $\rm {1.0389\times 10^{-4}}$ & 13000 & 0.305 &11.7 & -0.8778 & C64 &38 &500\\
\hline
\end{tabular}
\caption[Parameter table]
{\label{parameter}
Note: $a$ is the planetary radius, $\Omega$ is the rotation rate,
$c_p$ is the specific heat capacity, $\kappa={R \over {c_p}}$, $g$ is the
gravitational acceleration,
$R_q={{1-\epsilon}\over \epsilon}$, where $\epsilon$ is the ratio of
molecular mass between H$_2$O and H$_2$, $Res$ is the horizontal
resolution, $N_L$ is the number of layers, and
$p_b$ is the mean bottom pressure of the simulated domain. }
\end{scriptsize}
\end{table}
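The grid-naming convention above can be summarized in a short helper (an illustrative sketch only; the function name is ours, not part of the model code):

```python
# A CN cubed-sphere grid has N x N cells on each of its 6 faces and is
# roughly equivalent to a 4N x 2N longitude-latitude grid.
def cubed_sphere_equivalent(n):
    """Return (total cells over 6 faces, equivalent lon-lat shape)."""
    total_cells = 6 * n * n
    lonlat_shape = (4 * n, 2 * n)
    return total_cells, lonlat_shape

print(cubed_sphere_equivalent(128))  # (98304, (512, 256)), i.e. C128
print(cubed_sphere_equivalent(64))   # (24576, (256, 128)), i.e. C64
```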
\subsection{Diagnostics}
Before presenting our results, we describe the formalism we
use to diagnose our simulations following \citet{karoly-etal-1998}.
For any quantity $A$, we can
define $A=[A]+A^*$ where $[A]$ denotes the zonal mean and $A^*$
denotes the deviation from the zonal mean. Likewise, we can define
$A=\overline{A}+A'$, where $\overline {A}$ denotes the time average
and $A'$ denotes the deviation from the time average. Inserting these
definitions into the zonal momentum equation (Eq.~\ref{mom}) and
averaging in longitude and time, we obtain
\begin{eqnarray}
{\partial[\overline{u}]\over\partial t}=-{\partial\over\partial y}
([\overline{u'v'}] + [\overline{u}^*\overline{v}^*]) -
{\partial\over\partial p}([\overline{u'\omega'}] + [\overline{u}^*\overline{\omega}^*]) \nonumber \\
- [\overline{v}]{\partial [\overline{u}]\over\partial y}
- [\overline \omega]{\partial [\overline{u}]\over\partial p} + f[\overline{v}]
\label{eulerian-mean-mom}
\end{eqnarray}
In Eq.~\ref{eulerian-mean-mom}, $[\overline{u'v'}]$ and $[\overline{u'\omega'}]$,
are the
latitudinal and vertical fluxes of eastward momentum, respectively,
associated with traveling eddies;
$[\overline{u}^*\overline{v}^*]$ and $[\overline{u}^*\overline{\omega}^*]$,
are the latitudinal and vertical fluxes of eastward momentum,
respectively, associated with stationary eddies.
This equation states that latitudinal convergence
of horizontal eddy momentum flux, vertical convergence of vertical
eddy momentum flux, horizontal and vertical advection and Coriolis
acceleration drive the zonal winds.
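The decomposition behind this diagnostic can be sketched in a few lines of NumPy (illustrative only, not the GCM diagnostics code; the array layout and function names are our own):

```python
import numpy as np

# For a field A(time, lat, lon): [A] is the zonal mean, A* the deviation
# from it; overline{A} is the time mean, A' the deviation from it.
def zonal_mean(a):
    return a.mean(axis=-1, keepdims=True)

def time_mean(a):
    return a.mean(axis=0, keepdims=True)

def transient_eddy_flux(u, v):
    """[overline{u'v'}]: momentum flux by traveling eddies."""
    up, vp = u - time_mean(u), v - time_mean(v)
    return zonal_mean(time_mean(up * vp)).squeeze()

def stationary_eddy_flux(u, v):
    """[overline{u}* overline{v}*]: momentum flux by stationary eddies."""
    ubar, vbar = time_mean(u), time_mean(v)
    return zonal_mean((ubar - zonal_mean(ubar))
                      * (vbar - zonal_mean(vbar))).squeeze()

# Toy check: with v = 0 everywhere, both eddy fluxes vanish identically.
rng = np.random.default_rng(0)
u = rng.standard_normal((4, 8, 16))   # (time, lat, lon)
v = np.zeros_like(u)
print(transient_eddy_flux(u, v).max(), stationary_eddy_flux(u, v).max())
```

The same pattern extends directly to the vertical fluxes by substituting $\omega$ for $v$.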
\section{Results}
\subsection{Basic flow regime}
Our Jupiter simulation with 3 times the solar water abundance
and Uranus/Neptune simulation with 30 times the solar water abundance
produce zonal winds similar to those observed on Jupiter/Saturn and
Neptune/Uranus.
The similarities, in general, are multiple banded zonal jets with
equatorial superrotation on Jupiter/Saturn and high-latitude eastward
jets with broad equatorial subrotation on Neptune/Uranus. These are shown in
Fig.~\ref{jupiter_neptune_u_3d}.
The initial perturbations generate motion, which triggers condensation.
Once the circulation is initiated, it becomes
self-sustaining: horizontal temperature gradients induced by
large-scale latent heating drive a circulation that continues to dredge up
water vapor to the condensation region, allowing latent heating
and maintaining the temperature differences. The eddies produced
in this way interact with planetary rotation to generate large-scale zonal
flows. The resulting zonal flow at
the 1-bar level contains about 20 zonal jets for Jupiter/Saturn-type
simulations and $\sim3$ zonal jets for Uranus/Neptune-type
simulations (Fig.~\ref{jupiter_neptune_u_3d}).
First we examine the Jupiter simulations (Fig.~\ref{jupiter_3s_evol}).
By 55 Earth days, the winds show significant zonality, and after $\sim1100\,$days
the jet pattern becomes relatively stable
with $\sim20$ jets. The equatorial jet builds up rapidly,
reaching zonal-mean zonal wind speeds of $80\,{\rm m}\,{\rm sec}^{-1}$
and local speeds exceeding $100\,{\rm m}\,{\rm sec}^{-1}$.
Initially, the jet spans only a range of longitudes (e.g.,
Fig.~\ref{jupiter_3s_evol}, second panel). Low-latitude eddies
triggered by localized latent heating continuously
pump energy into the equatorial flow, which eventually makes the
equatorial superrotation encircle the whole globe by 116 Earth days.
The longitudinal variation of this equatorial superrotation becomes
small after $\sim1000$ Earth days. The jet
spans latitudes $\rm 10^\circ$ south to $\rm 10^\circ$ north with an average
wind speed of $\rm 80\,m\,s^{-1}$.
In our Jupiter simulations, numerous alternating east-west jets
also develop at higher latitudes with speeds of
$\sim5$--$10\,{\rm m}\,{\rm sec}^{-1}$.
These high-latitude zonal jets extend almost to the pole
(Fig.~\ref{jupiter_neptune_u_3d}).
Interestingly, however, the high-latitude jets are
not purely zonal but develop meanders with latitudinal positions that vary
in longitude, as can be seen in Figs.~\ref{jupiter_neptune_u_3d} and
\ref{jupiter_uv_vector}.
This meandering presumably occurs because the $\beta$ effect (which
is necessary for jet formation) weakens
at high latitudes. As a result of these meanders, a zonal average
smoothes through these jets, so the zonal-mean
zonal wind profile shows minimal structure poleward of $\sim30^{\circ}$
latitude (Fig.~\ref{jupiter_3s_evol}, rightmost panels); nevertheless,
the high-latitude jet structure remains evident in profiles without
zonal averaging (Figs.~\ref{jupiter_neptune_u_3d} and \ref{jupiter_3s_evol}).
Interestingly, in the Saturn case, an eastward jet at $\sim 70^{\circ}$ latitude
develops meanders crudely resembling a polygon when viewed from over the pole
(Fig.~\ref{jupiter_neptune_u_3d}). This structure may be relevant to explaining
Saturn's polar hexagon \citep{godfrey-1988, baines-etal-2009}.
Our Uranus/Neptune simulation with 30 times the solar water abundance,
however, behaves quite differently than our Jupiter and Saturn cases
(Fig.~\ref{neptune_30s_evol}).
By $\sim1000$ days, the profile stabilizes with three jets:
a broad westward equatorial flow extending from latitudes $40^\circ$
south to $40^\circ$ north and reaching speeds of almost $-100
\,{\rm m}\,{\rm sec}^{-1}$, and two high-latitude eastward jets
reaching peak speeds of almost $250\,{\rm m}\,{\rm sec}^{-1}$
at latitudes of 70--$80^{\circ}$ north and south.
Eddy activity, though still vigorous, is hardly visible in comparison
with the zonal flow after several hundred Earth days.
As a control experiment, we also performed a Neptune simulation where latent heating
and condensation were turned off (i.e., $\delta=0$ in Eqs.~\ref{LatentHeat}--\ref{Condensation}
regardless of the relative humidity). Because $\theta_{\rm ref}$ is independent
of latitude, radiation {\it removes} rather than adds available potential
energy, and thus the only source of available potential energy in this
simulation is provided by the thermal perturbations in the initial condition. Consistent
with this expectation, this simulation develops peak winds of only $\sim20\rm\,m\,sec^{-1}$,
an order of magnitude weaker than those in our nominal simulation. This comparison demonstrates
the crucial role that latent heating plays in generating jets in our nominal simulations.
Figure~\ref{JN_KE} shows the time evolution of kinetic energy in
our Jupiter and Uranus/Neptune simulations. The kinetic energy is integrated
vertically and horizontally over the region from 1 bar upward. Both the
Jupiter and Uranus/Neptune simulations show that the kinetic energy spikes
in the first 100 Earth days and gradually declines afterwards. After 2500
Earth days, the variation of kinetic energy becomes small, indicating that
the simulations are close to a steady state from the top down to 1 bar.
This evolution of kinetic energy is very similar to that of
\citet{lian-showman-2008}. Nevertheless, the barotropic winds continue to
spin up at deep levels near the bottom of the model.
Our simulations provide a possible explanation for the
equatorial superrotation on Jupiter/Saturn yet the equatorial
subrotation on Uranus/Neptune as well as the approximate number of jets
observed on all four planets.
We emphasize that our simulated jet
profiles --- including the equatorial jet direction ---
are fully self-generating and emerge spontaneously, without the
application of {\it ad hoc} forcing schemes. The only physical
differences between the two simulations in Fig.~\ref{jupiter_neptune_u_3d}
are the values of the planetary parameters (radius, rotation
rate, gravity) and the deep water abundance $q_{\rm deep}$;
the forcing schemes are otherwise identical for the two cases.
In contrast, previous shallow-atmosphere studies either
produced superrotation only with artificially imposed forcing near the equator
\citep{Williams_2003_1, 2005P&SS...53..508Y, lian-showman-2008} or
produced superrotation more naturally but made no prediction for
Jupiter/Saturn versus Uranus/Neptune \citep{scott-polvani-2008, schneider-liu-2009}.
Ours is the first study to naturally produce superrotation in a Jupiter regime
yet subrotation in a Uranus/Neptune regime within the context
of a single model.
We emphasize that the jet widths that emerge in our simulations
are self-selecting; neither the scales of zonal jets nor the
scales of baroclinic eddies are controlled by the initial perturbations.
Our Jupiter-type simulation with 3 times solar water abundance has
jet widths ranging from several thousand kilometers at mid-to-high
latitudes to about $\rm 15,000$ kilometers
at the equator.
Our Uranus/Neptune simulation with 30 times solar water abundance
has jet widths of around 25,000 km. These
jet widths are similar (within a factor of $\sim 2$)
to the Rhines scale $\pi (2U/\beta)^{1/2}$,
where $U$ is the characteristic jet speed.
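As a rough consistency check, the Rhines scale can be evaluated with the planetary parameters of Table~\ref{parameter}; the characteristic speeds $U$ below are our own ballpark choices, not exact simulation values:

```python
import numpy as np

# Rhines scale L = pi * (2U/beta)^(1/2), with beta = 2*Omega*cos(lat)/a
# evaluated at a representative midlatitude of 45 degrees.
def rhines_scale(U, Omega, a, lat_deg=45.0):
    beta = 2.0 * Omega * np.cos(np.radians(lat_deg)) / a
    return np.pi * np.sqrt(2.0 * U / beta)

# Jupiter midlatitude jets (~10 m/s) and Neptune jets (~100 m/s).
L_jup = rhines_scale(U=10.0, Omega=1.7585e-4, a=71492e3)
L_nep = rhines_scale(U=100.0, Omega=1.0389e-4, a=24746e3)
print(f"Jupiter midlatitude Rhines scale ~ {L_jup / 1e3:.0f} km")
print(f"Neptune Rhines scale ~ {L_nep / 1e3:.0f} km")
```

The resulting scales (several thousand km for Jupiter's midlatitudes, a few $\times10^4$ km for Neptune) are indeed within a factor of $\sim2$ of the simulated jet widths quoted above.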
\begin{table}
\begin{scriptsize}
\begin{tabular} {|l|l|l|l|l|l|l|}
\hline
\hline
Planet & $\frac{a}{a^\circ}$ & $\frac{\Omega}{\Omega^\circ}$ & $\frac{q_{deep}}{q_{solar}}$ & $Res$ & ${N_L}$ &$p_b \rm{(bars)}$ \\
\hline
Saturn & 0.5 & 0.5, 1, 2 & 5 & C128 &35 &100\\
Saturn & 1 & 0.5, 1, 2 & 5 & C128 &35 &100\\
Saturn & 2 & 0.5, 1, 2 & 5 & C128 &35 &100\\
Saturn & 0.5 & 0.5, 1, 2 & 20 & C128 &35 &100\\
Saturn & 1 & 0.5, 1, 2 & 20 & C128 &35 &100\\
Saturn & 1 & 1 & 1 & C128 &35 &100\\
Saturn & 1 & 1 & 3 & C128 &35 &100\\
Saturn & 1 & 1 & 10 & C128 &35 &100\\
Saturn & 1 & 1 & 20 & C128 &35 &100\\
Neptune & 1 & 1 & 1 & C64 &38 &500\\
Neptune & 1 & 1 & 3 & C64 &38 &500\\
Neptune & 1 & 1 & 10 & C64 &38 &500\\
\hline
\end{tabular}
\caption[Parameter table for selected test cases]
{\label{parameter_variation}
Note: $\frac{a}{a^\circ}$ and $\frac{\Omega}{\Omega^\circ}$ are the ratios of the planetary radius
and rotation rate in the test cases to those in the nominal cases listed in Table~\ref{parameter}, respectively.
$\frac{q_{deep}}{q_{solar}}$ is the ratio of the deep water abundance in the test cases to the solar water abundance.
We keep $c_p$, $\kappa$, $g$, and $R_q$ the same as in the nominal cases.
}
\end{scriptsize}
\end{table}
To investigate the influence of water abundance on the circulation,
and to shed light on what causes the differences between our
Jupiter/Saturn and Uranus/Neptune cases (Fig.~\ref{jupiter_neptune_u_3d}),
we ran a series of simulations exploring a range of deep water-vapor
abundances (Table~\ref{parameter_variation}). This is carried out simply by varying the value of
the deep water abundance, $q_{\rm deep}$, adopted in our water-vapor
source term $Q_{\rm deep}$. Figure~\ref{jupiter_3s_10s_u}
shows the results for Jupiter cases with 3 and 10 times solar water
while Fig.~\ref{Neptune_U_cs} shows the results for Uranus/Neptune cases
with 1, 3, 10, and 30 times solar water. Interestingly, we find
in both cases that equatorial superrotation
preferably forms at low water-vapor abundance while subrotation forms
at high water-vapor abundance. For Jupiter, 3-times-solar water
yields superrotation while 10-times-solar water produces subrotation.
For Uranus/Neptune, solar water (panel {\it a}) produces a narrow
superrotating jet centered at pressures of $\sim100$--400 mbar;
at 3-times-solar water (panel {\it b}), a local maximum in zonal
wind still exists at that location, but its peak speeds are slightly
subrotating. The ten-times-solar case (panel {\it c}) bucks the
trend, developing superrotation in the lower stratosphere
(pressures $<100\,$mbar). By 30 times solar, however, the structure
becomes more barotropic and the equatorial jet direction is subrotating
throughout. Nevertheless, the superrotation in our low-water-abundance
Uranus/Neptune cases is weaker than that in our Jupiter/Saturn cases,
which suggests that water abundance is not the only factor that controls
the existence of superrotation in our simulations. We return to this issue
in Section \ref{parameter-variation}.
We also performed Saturn simulations with not only 5-times-solar water
abundance (as seen in Fig.~\ref{jupiter_neptune_u_3d}) but 10 and 20 times
solar as well. All these cases developed multiple banded zonal jets at
both low and high latitudes. The 5-times-solar
Saturn case developed equatorial superrotation, with zonal-mean speeds reaching
$\sim150\rm\,m\,s^{-1}$ eastward, while the 10-times-solar and 20-times-solar
cases developed equatorial subrotation with speeds reaching
$-100\rm\,m\,s^{-1}$ or more westward near 1 bar. While our ability to produce
superrotation in a Saturn case with 5-times-solar water is encouraging, Saturn's
water abundance is unknown and could easily be as high as 10 times solar
\citep[e.g.,][]{mousis-etal-2009}; moreover, even our superrotating case
produced a superrotating jet much weaker and narrower than
the observed jet (which extends from $30^{\circ}$N to
$30^{\circ}$S latitude and reaches peak speeds of $\sim400\rm\,m\,s^{-1}$).
This disagreement could mean that Saturn's
equatorial jet does not fit into the framework discussed here, and
that the observed jet results instead from a different mechanism.
On the other hand, processes excluded here, including moist convection,
evaporating precipitation, and realistic radiative transfer, will all
affect the tropospheric static stability and thus could influence the
eddy/mean-flow interactions that pump the equatorial jet.
Definitively assessing whether Saturn's equatorial
jet can result from cloud-layer processes will thus require next-generation
models that include these improvements.
Another trend that occurs in our simulations is that the
mean jet speeds and jet widths increase as the deep water-vapor
abundance is increased. As a result, simulations with less
water tend to have more jets than simulations with greater water.
This trend is most evident in our Uranus/Neptune cases in
Fig.~\ref{Neptune_U_cs}: the peak jet speed increases from
$\sim50\,{\rm m}\,{\rm sec}^{-1}$ to $250\,{\rm m}\,{\rm sec}^{-1}$
as water is increased from solar to 30 times solar, while
the number of jets drops from seven to three over this same sequence.
However, the case with 10-times-solar-water
does not fit the trend well. It has 7 zonal jets, the same as
our solar case, and its wind speeds are similar to that of our
3-times-solar case.
Figure~\ref{jupiter_T} shows the temperature structure in our
nominal Jupiter simulation at pressures of 0.1, 0.2, 0.9, and 5 bars
from top to bottom, respectively. The top two panels are in the
lower stratosphere and upper troposphere; the third panel is near
the top of the region with strong eddy accelerations, and the bottom
panel is just above the water condensation level. Interestingly,
in the deep regions where latent heating occurs (bottom two panels),
the latitudinal temperature contrasts primarily occur within
$\sim20^{\circ}$ of the equator. In the simulations of
\citet{Williams_2003_1} and \citet{lian-showman-2008},
equatorial superrotation developed only in the presence of
large latitudinal temperature contrasts near the equator; this led to a
barotropic instability that pumped energy and momentum into the
superrotating jet. In their simulations, these near-equatorial
temperature gradients resulted from an {\it ad hoc}
Newtonian heating profile. Here, however, these sharp near-equatorial
temperature gradients develop naturally from the interaction of the
moist convection with the large-scale flow. Despite this difference,
the similarity of the resulting near-equatorial temperature profiles
suggests that the superrotating jet-pumping mechanism identified by
\citet{Williams_2003_1} and \citet{lian-showman-2008} could be
relevant here.
In the upper troposphere and lower stratosphere, our Jupiter simulation
develops a banded temperature pattern, with latitudinal temperature
differences of $\sim10\,$K, that bears some similarities to that
observed on Jupiter and Saturn. Interestingly, the simulated equatorial
zone is cold in the upper troposphere, consistent with observations
of Jupiter and Saturn; this temperature structure is associated with
the decay of the equatorial jet with altitude (see
Fig.~\ref{jupiter_3s_10s_u}) via thermal-wind balance.
Several regions of localized latent heating and eddy generation
are visible in both hemispheres; Section~\ref{storms}
presents a detailed discussion of these features.
Figure~\ref{jupiter_S} depicts the distribution of water vapor at
the 5-bar level for our nominal Jupiter simulation at the same time
as the temperature plots shown in the previous figure. A zonally banded structure
is evident,
with low water vapor near the equator and intermediate values at mid-to-high
latitudes. Localized regions of latent heating manifest as
regions of high water vapor abundance near the equator and at $\sim20^{\circ}$
latitude in both hemispheres (orange/red regions in the figure). While the connection
between temperature and water vapor is visually evident (compare Figs.~\ref{jupiter_S}
and \ref{jupiter_T}, bottom panel), scatter plots of temperature versus water-vapor
mixing ratio at a given isobar show a great deal of scatter, indicating that no
simple relationship connects the two quantities. Diagnostics of the mechanisms that
determine the water and temperature distributions will be presented in a future paper.
\subsection{What controls the equatorial flow}
\label{parameter-variation}
Among all our simulation results, the most interesting feature is that
the direction and strength of equatorial flow vary with
deep water abundance; eastward equatorial flow forms at low deep water
abundance and westward equatorial flow forms at high deep water abundance.
What causes this trend? Moreover, the planets in our simulations have
different radii and rotation rates. Can these affect the formation of
equatorial flow? Here we address these questions.
We performed additional test cases by varying the deep-water-vapor abundance,
planetary radius, and planetary rotation rate (Table~\ref{parameter_variation}).
By comparing the equatorial zonal wind, Brunt Vaisala frequency, and $\beta$,
we seek to reveal the major factor that affects the trend. In the following
discussion, the zonal wind is vertically averaged from the bottom to top of
the simulated domain, while the Brunt-Vaisala frequency is vertically averaged
from the bottom to 1 bar to exclude the large static stability in the stratosphere
and thus better demonstrate the effects of latent heating in the troposphere.
First, we vary the deep water abundance for Saturn and Neptune
simulations using the nominal planetary radii and rotation rates; the
results are shown in the top two panels of Fig.~\ref{parameter_sweep}.
Figure~\ref{parameter_sweep}(a) clearly shows that when the deep
water abundance increases, the tropospheric Brunt-Vaisala frequency increases.
Figure~\ref{parameter_sweep}(b) confirms that larger deep water abundance
(hence tropospheric static stability) makes the equatorial flow more
westward. This correlation applies to both the Saturn and Neptune simulations
(solid and dashed lines in Fig.~\ref{parameter_sweep}, respectively),
except for the Saturn case with 1-times-solar water abundance, in which the equatorial
flow is very weak due to the low deep water abundance.
Although a clear correlation exists between greater deep water abundance
and faster westward equatorial flow (Fig.~\ref{parameter_sweep}b), the Neptune
simulations exhibit a different dependence than the Saturn simulations.
This suggests that other factors, such as planetary radius or rotation rate,
play a role in controlling the equatorial jet speed. In an attempt to untangle these effects,
we ran Saturn sensitivity studies that varied the planetary radius or rotation
rate from nominal Saturn values but kept all other parameters fixed. Cases with
deep water abundances of 5 and 20 times solar were explored. Figure~\ref{parameter_sweep}(c)
depicts the equatorial wind speed versus the equatorial value of $\beta$, the gradient of
the Coriolis parameter (just $2\Omega/a$ at the equator).
Solid lines denote the 5-times-solar cases while the dashed lines show the 20-times-solar
cases. Different symbols denote different planetary radii and/or deep-water abundances;
lines connect sequences of simulations performed at a given planetary radius and
deep-water abundance but with differing rotation rate.
At 5-times-solar water abundance, increasing the rotation rate (at constant planetary radius)
makes the equatorial flow more eastward. However the situation reverses at 20-times-solar,
where increasing the rotation rate either has minimal effect on the equatorial jet
(for Saturn's radius) or makes the jet more westward (for cases with half Saturn's radius).
At constant $\beta$ and water abundance, increasing the planetary radius makes the
flow more {\it eastward} for our 5-times-solar-water cases but either has minimal
effect or makes the equatorial flow more {\it westward} for our cases with
20-times-solar water.
In some cases, these dependences on radius and rotation rate can make the
difference between whether the equatorial flow is eastward or westward. For
example, the Saturn cases with 5-times-solar water all transition from westward
to eastward equatorial flow as the rotation rate increases. Likewise, cases with
5-times-solar water and $\beta$ of 5--$10\times10^{-12}\rm\,m^{-1}\,sec^{-1}$ exhibit
equatorial superrotation when performed at Saturn's nominal radius but equatorial
subrotation when performed at half Saturn's radius.
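For reference, the equatorial $\beta=2\Omega/a$ spanned by the Saturn sensitivity cases of Table~\ref{parameter_variation} can be tabulated directly (a sketch using the nominal Saturn values from Table~\ref{parameter}):

```python
# Equatorial beta = 2*Omega/a for the Saturn sensitivity cases, with
# radius and rotation rate scaled by 0.5x, 1x, 2x of the nominal values.
Omega0, a0 = 1.6570e-4, 60268e3  # nominal Saturn rotation rate, radius

for fa in (0.5, 1.0, 2.0):
    for fO in (0.5, 1.0, 2.0):
        beta = 2.0 * fO * Omega0 / (fa * a0)
        print(f"a = {fa} a0, Omega = {fO} Omega0: "
              f"beta = {beta:.2e} m^-1 s^-1")
```

The nominal case gives $\beta\approx5.5\times10^{-12}\rm\,m^{-1}\,s^{-1}$, consistent with the 5--$10\times10^{-12}\rm\,m^{-1}\,sec^{-1}$ range discussed above.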
To summarize our simulations, equatorial superrotation occurs only at
intermediate water abundances; large deep water abundance instead promotes
the development of equatorial
subrotation for all cases explored. Everything else being equal,
greater rotation rate and planetary radius promote equatorial superrotation
when the water abundance is intermediate ($\sim5$-times solar). The trends are
less clear at high water abundance ($\sim20$ times solar), but regardless
of the details our high-water-abundance cases all subrotate. Taken together, the
sensitivity studies described here show that the equatorial superrotation in our
successful Jupiter and Saturn simulations (e.g., as shown in
Fig.~\ref{jupiter_neptune_u_3d}) is enabled {\it not only} by the low-to-intermediate
water abundance adopted in those cases but {\it also} by the large planetary
radius and faster rotation rates of Jupiter and Saturn relative to Uranus
and Neptune. Loss of any one of those factors would promote weaker
superrotation or even a transition to equatorial subrotation. This is
consistent with the fact that the Uranus/Neptune cases shown in
Fig.~\ref{Neptune_U_cs} exhibit only rather weak superrotation (relative
to the Jupiter/Saturn cases) even when the water abundance is 1, 3, or 10
times solar.
\subsection{What drives the jets in the weather layer}
Our Jupiter, Saturn, and Uranus/Neptune simulations show that the zonal jets
are predominantly driven by eddy and Coriolis acceleration.
We use the total acceleration of the terms
on the right side of Eq.~\ref{eulerian-mean-mom}
to characterize the main driving forces on the zonal flow, focusing here
on the mid- and low-latitude regions.
Figure~\ref{winds_acc} shows the zonal-mean zonal wind (top row),
horizontal eddy-momentum flux
$[ {\overline{u^\prime v^\prime}} ] + [{\overline{u}}^* {\overline{v}}^* ]$
(second row), the vertical eddy-momentum flux $[ {\overline{u^\prime
\omega^\prime}} ] + [ {\overline{u}}^* {\overline{\omega}}^* ]$
(third row), and Coriolis acceleration $f [\overline{v}]$ (bottom row).
Three cases are shown --- our 3-times-solar-water Jupiter case (left column),
5-times-solar Saturn case (middle column), and 30-times-solar Uranus/Neptune
case (right column). The Jupiter and Saturn cases exhibit equatorial
superrotation while the Uranus/Neptune case exhibits equatorial subrotation.
Figure~\ref{winds_acc} shows that the horizontal eddy terms, vertical eddy terms, and Coriolis
accelerations all play important roles in the maintenance of the zonal winds.
In particular, for our Jupiter and Saturn cases, there are strong equatorward
fluxes of eastward momentum at pressures of $\sim0.2$--$1\,$bar and latitudes
of $\sim10^{\circ}$S to $10^{\circ}$N. These imply an eastward acceleration
that helps to maintain the superrotating equatorial jet. They additionally cause a
{\it westward} acceleration at latitudes of $\sim10^{\circ}$N and S, where a divergence
in horizontal flux occurs. In contrast, our Uranus/Neptune
case exhibits strong poleward fluxes of eastward momentum at pressures
$<1\,$bar and latitudes equatorward of $\sim40^{\circ}$. These fluxes induce
westward equatorial acceleration, which maintains the strong westward equatorial flows
at low pressure. A weaker version of the same phenomenon occurs in the Jupiter
and Saturn cases, which helps explain the westward equatorial stratospheric flow near the
top of the model (pressures
$<0.1\,$bar) in those cases. Nevertheless, the Uranus/Neptune
case also shows a localized region (from $1$--$2\,$bars and latitudes $\sim10^{\circ}$S
to $10^{\circ}$N) where eastward momentum is fluxed (albeit weakly) toward the equator, leading
to an eastward equatorial acceleration. This relates to the fact that the westward jet,
once formed, weakens slightly between $\sim100$ and $1000\,$days (compare
second and third rows of Fig.~\ref{neptune_30s_evol}). Interestingly, all three cases
also show an overall downward eddy flux of eastward momentum in the equatorial
regions underlying the region of horizontal eddy fluxes. In the Jupiter/Saturn
cases, this term causes an acceleration counteracting that associated with
horizontal eddy-flux convergence and helps to explain why the eastward jets
penetrate to pressures $>10\,$bars despite the fact that the horizontal
eddy fluxes are confined primarily to pressures $<1\,$bar. The Coriolis
accelerations (bottom row) show localized regions of eastward acceleration
centered just off the equator in all three cases (at pressures 0.2--$1\,$bar
for Jupiter/Saturn and $\sim2$--$8\,$bars for Uranus/Neptune) resulting
from the effects of a meridional circulation cell near the equator. In
all three cases, this acts to counteract a {\it westward} acceleration associated
with horizontal eddy flux convergence at the same location.
An estimate of magnitudes shows that all these acceleration terms are important.
For example, focusing on Jupiter and Saturn,
the Coriolis acceleration reaches peak values up to a few $\times10^{-5}\rm\,m\,s^{-2}$
(Fig.~\ref{winds_acc}, bottom row).
The acceleration caused by horizontal eddy flux convergence is minus the gradient of
$[ {\overline{u^\prime v^\prime}} ] + [{\overline{u}}^* {\overline{v}}^* ]$,
which is approximately the difference in this quantity over a relevant length scale.
Figure~\ref{winds_acc}, second row, shows that the peak difference
in $[ {\overline{u^\prime v^\prime}} ] + [{\overline{u}}^* {\overline{v}}^* ]$ is
$\sim200\rm\,m^2\,s^{-2}$ and occurs over a latitudinal length scale of
$\sim8000\,$km, implying an acceleration of $\sim3\times10^{-5}\rm\,m\,s^{-2}$.
Likewise from Figure~\ref{winds_acc}, third row, the peak difference in $[ {\overline{u^\prime
\omega^\prime}} ] + [ {\overline{u}}^* {\overline{\omega}}^* ]$ is
$\sim1\rm\,Pa\,m\,s^{-2}$, which occurs over a pressure interval of $\sim10^5\,$Pa,
implying an acceleration of $\sim10^{-5}\rm\,m\,s^{-2}$.
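These order-of-magnitude estimates amount to simple divided differences; a sketch using the numbers quoted above:

```python
# Eddy acceleration ~ (peak flux difference) / (length or pressure scale),
# using the values read off Fig. winds_acc as quoted in the text.
du_flux = 200.0   # m^2 s^-2, peak difference in horizontal eddy flux
L_y = 8.0e6       # m, latitudinal length scale (~8000 km)
acc_horiz = du_flux / L_y

duw_flux = 1.0    # Pa m s^-2, peak difference in vertical eddy flux
dp = 1.0e5        # Pa, pressure interval over which it occurs
acc_vert = duw_flux / dp

print(f"horizontal eddy acceleration ~ {acc_horiz:.1e} m/s^2")
print(f"vertical eddy acceleration   ~ {acc_vert:.1e} m/s^2")
```

Both estimates fall in the $10^{-5}\rm\,m\,s^{-2}$ range, comparable to the peak Coriolis acceleration, confirming that no single term dominates.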
For all these cases, there are partial cancellations between the
individual terms that generally lead to a net acceleration smaller
than the magnitude of the dominant individual terms. More than $\sim5$--$10^{\circ}$
away from the equator, there is an anticorrelation between the
Coriolis acceleration and acceleration due to horizontal eddy convergence,
leading to a significant cancellation between these terms.
Accelerations due to vertical eddy convergence play only a small
role in these regions because the ratio of accelerations due to
vertical and horizontal eddy convergences tends to scale as
the Rossby number, which is small away from the equator. However,
near the equator the Coriolis acceleration is weak, and
within $\sim3$--$5^{\circ}$ latitude of the equator, the dominant cancellation is between
the horizontal and vertical eddy terms for Jupiter, Saturn and
Uranus/Neptune. Because of these partial cancellations, the net
acceleration is relatively small once the jets have spun up,
leading to only gradual change in the zonal-jet speeds over time.
\subsection{Comparison between simulations and observations}
Now we compare our simulated jet profiles to the observed jet profiles
and their stability. Jupiter and Saturn's cloud-level winds violate
the barotropic stability criterion
\citep{ingersoll-etal-1981}
\begin{equation}
\frac{\partial^2 [u]}{\partial{y^2}} < \beta.
\end{equation}
The observed winds also violate the Charney-Stern criterion which states that
jets are stable if their potential vorticity profile is monotonic in latitude
\citep{dowling-1995}.
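To illustrate how the barotropic criterion is evaluated in practice, a finite-difference sketch follows; the Gaussian jet profiles here are idealized, not simulation output:

```python
import numpy as np

# Check d^2[u]/dy^2 < beta for a zonal-wind profile u(lat), with
# y = a * lat (radians) and beta = 2*Omega*cos(lat)/a.
def violates_barotropic(u, lat_deg, Omega, a):
    y = a * np.radians(lat_deg)
    beta = 2.0 * Omega * np.cos(np.radians(lat_deg)) / a
    d2u = np.gradient(np.gradient(u, y), y)
    return bool(np.any(d2u[1:-1] > beta[1:-1]))  # skip edge points

lat = np.linspace(20.0, 40.0, 201)
Omega, a = 1.7585e-4, 71492e3  # Jupiter values from Table 1

weak = 10.0 * np.exp(-((lat - 30.0) / 5.0) ** 2)    # broad, slow jet
sharp = 100.0 * np.exp(-((lat - 30.0) / 1.0) ** 2)  # narrow, fast jet
print(violates_barotropic(weak, lat, Omega, a))   # False: stable
print(violates_barotropic(sharp, lat, Omega, a))  # True: violates
```

As expected, only the narrow, fast jet develops flank curvature exceeding $\beta$.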
Figure~\ref{Jupiter_sim_observ} shows the comparison between observed
and simulated jet profiles for Jupiter.
The top row shows observations. The middle row
shows simulated winds at $\rm 163^\circ$ longitude. The bottom row
shows zonal-mean simulated zonal winds. The wind profile shown is at
$\rm \sim 0.6\,bars$ and potential vorticity shown is at $\rm \sim 0.12\,bars$.
We calculate quasi-geostrophic potential vorticity $q_G$ following
\citet{Read2006a}:
\begin{equation}
q_G=f+\zeta -f{\partial\over\partial p} \left[{p\Delta T(\lambda,\phi,p)
\over s(p)T_a(p)}\right],
\label{qgpv}
\end{equation}
where $\zeta$ is relative vorticity calculated on isobars,
$T_a(p)=\langle{T(\lambda,\phi,p)}\rangle$ is the horizontal mean
temperature calculated on isobars, $\Delta{T(\lambda,\phi,p)}=
T(\lambda,\phi,p)-T_a(p)$ is deviation of
temperature from its horizontal mean, and $s(p)$ is a stability factor
defined as
\begin{equation}
s(p)=-{\partial{\langle{\ln(\theta)}\rangle}\over{\partial{\ln p}}}.
\label{s-fac}
\end{equation}
where $\langle\theta\rangle$ is the horizontal mean potential temperature
calculated on isobars.
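The stability factor of Eq.~\ref{s-fac} is straightforward to compute from a mean profile; a sketch follows (the full $q_G$ additionally requires relative vorticity on isobars, which we omit here). For an isothermal layer, $\theta\propto p^{-\kappa}$ and the factor reduces to $\kappa$ exactly:

```python
import numpy as np

# s(p) = -d<ln theta>/d ln p, evaluated by finite differences in ln p.
def stability_factor(theta, p):
    return -np.gradient(np.log(theta), np.log(p))

kappa = 0.29
p = np.logspace(5, 7, 50)                   # 1 to 100 bar, in Pa
theta = 300.0 * (p / 1.0e5) ** (-kappa)     # isothermal layer
s = stability_factor(theta, p)
print(round(float(s[0]), 6))  # 0.29, recovering kappa as expected
```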
Our Jupiter simulation with 3 times solar water
abundance shows that the zonal winds are slightly weaker and
the equatorial superrotation is narrower than observed. The zonal winds
in some longitudinal cross sections violate the barotropic
and Charney-Stern stability criteria. Figure~\ref{Jupiter_sim_observ}(e)
shows that the wind curvature
$\partial^2 [u] /{\partial{y^2}}$ in a longitudinal slice
(here arbitrarily chosen as $\rm 163^\circ$ east) exceeds $3\beta$ at some latitudes.
At the same time,
the latitudinal gradient of quasigeostrophic potential vorticity changes sign at
many places, as shown in Fig.~\ref{Jupiter_sim_observ}(f). The zonal-mean
zonal wind, however, has a much weaker violation of the barotropic
stability criterion. It only exceeds $\beta$ at $\rm \pm 20^\circ$ in
latitude.
Figure~\ref{Saturn_sim_observ} compares observations and simulations
for Saturn. Zonal-mean zonal wind profiles are shown on the left,
with Voyager observations (solid), our Saturn 5-times-solar-water
simulation (dashed), and our Saturn 10-times-solar-water simulation
(dotted). The middle and right panels show $\partial^2[u]/\partial y^2$
for the observations and our simulations, respectively. As can
be seen, our simulations produce winds with speeds smaller than
observed, especially in the equatorial jet. Interestingly, our
Saturn simulation with 5-times-solar water, which produces
equatorial superrotation of $150\rm\,m\,s^{-1}$, violates
the barotropic stability criterion at some latitudes. In
contrast, our Saturn simulation with 10-times-solar water, which
produces equatorial subrotation, satisfies the barotropic stability
criterion at all latitudes.
We also compare jet profiles in our Uranus/Neptune simulation with 30 times
solar water abundance to observations. Figure~\ref{Neptune_sim_observ} shows
the observed winds on Uranus and Neptune and simulated winds at $\rm \sim 1\,$ bar.
Our simulated zonal winds share similarities with the observed winds, especially
the three-jet feature with a subrotating equator and high-latitude eastward jets.
Compared to the zonal winds on Neptune, the equatorial subrotation
in our simulation reaches a speed of
$\rm 100\,m\,s^{-1}$, which is weaker than the observed $\rm 400\,m\,s^{-1}$;
the polar eastward winds, though appearing at too high a latitude,
have strength similar to that observed. On the other hand, the simulated jets have strength similar
to both the westward and eastward jets on Uranus. However, the westward jet on Uranus
is much narrower than that in our simulation and on Neptune. Furthermore, Uranus
has eastward jets peaking in the mid-latitudes rather than near the poles
as in our simulation.
Interestingly, zonal winds in our simulation violate the barotropic
stability criterion while those in observations do not.
Even though the zonal jets in our simulations violate the barotropic
stability criterion, the time-averaged jets are still stable, which suggests a stable
configuration between the eddies and zonal jets. This stable configuration also
exists in our previous simulations \citep{lian-showman-2008}.
\subsection{Morphology of eddies generated by large-scale latent heating}
\label{storms}
In our Jupiter and Saturn simulations, large-scale latent
heating and rising motion often become concentrated into
localized regions, leading to the development of small
($\sim$3000--10,000-km diameter) warm-core eddies that play a key role in
driving the flow. These events share similarities with
storm clouds observed on Jupiter and Saturn, so here we describe them in detail.
For lack of a better word, we call these events ``storms'' but remind the
reader that our model lacks non-hydrostatic motions and sub-grid-scale
parameterizations of true moist convection.
Figures~\ref{jupiter_S_slides}--\ref{jupiter_T_slides_1bar} depict
time sequences that zoom into the region around one such event;
the sequences start at the upper left and move first downward and then
right in 2.8-Earth-day intervals.
Figure~\ref{jupiter_S_slides}, which depicts
the water-vapor mixing ratio, shows the active storms as bright
red regions with much greater water abundance than the surroundings.
Figure~\ref{jupiter_T_slides} demonstrates that these regions of
high humidity are warmer than the surroundings, which is the direct
result of latent heating in the rising air. Plots of vertical velocity
(not shown) show that these warm, moist regions are ascending.
Figure~\ref{jupiter_vort_r_slides} depicts the relative vorticity;
a careful comparison with Figs.~\ref{jupiter_S_slides}--\ref{jupiter_T_slides} shows that, at the base of
the storms near 5 bars, the hot, moist ascending regions generally
exhibit cyclonic relative vorticity, which results from the
Coriolis acceleration on the air converging horizontally into the
base of the storm at that pressure. Although the storms are
extremely dynamic, they are self-generating and can last for tens
of days.
Interestingly, the cyclonic
regions at the base of the updrafts
(blue in Fig.~\ref{jupiter_vort_r_slides})
typically co-exist with one or more localized anticyclonic
regions (red in Fig.~\ref{jupiter_vort_r_slides}), which are
locations where air descends, horizontally diverges, and thus
spins up anticyclonically. The existence of descending regions
in proximity to the ascending storm centers results from mass
continuity; as air rises in an active storm center, continuity
demands descent in the surrounding environment. A similar phenomenon
occurs around storms on Earth; theoretical
solutions show that such descent is typically confined to
regions within a deformation radius of the ascending storm center
\citep[][pp. 329-333]{Emanuel_1994}. This result can explain the
close proximity of the ascending and descending regions in our simulations,
as well as the fact that Jovian thunderstorms
are often observed next to localized regions that are clear down
to the 4-bar level or deeper \citep{1998Icar..135..230B,
2000Natur.403..628G}.
The behavior near the tops of the simulated storms differs
significantly from that near their bases. This is illustrated in
Fig.~\ref{jupiter_T_slides_1bar}, which shows the temperature
at 0.9 bars for the same storm event depicted in
Figs.~\ref{jupiter_S_slides}--\ref{jupiter_vort_r_slides}. Hot
regions in Fig.~\ref{jupiter_T_slides} generally correlate with
hot regions in Fig.~\ref{jupiter_T_slides_1bar}, which results from
the fact that the hot, moist air at 5 bars generally continues rising
until reaching altitudes at and above the 1-bar level. However,
the localized hot regions at 0.9 bars are much larger than at 5 bars.
This presumably results from the horizontal divergence that occurs near
the storm top, which spreads the hot regions out into ``anvils'' whose
horizontal extent significantly exceeds that at the base of the
storm. Conversely, we speculate that the horizontal convergence at the
storm base helps to keep the hot regions horizontally confined at that
pressure. The horizontal divergence near the storm top also implies that
the storms generate regions of anticyclonic vorticity as seen
at 1 bar (not shown). This is consistent with observations of
Jovian thunderstorms, which generally also develop anticyclonic
vorticity at the ammonia cloud level \citep{2000Natur.403..628G}.
The simulated storms exhibit a complex evolution. The storms
often appear in clusters with several simultaneously active
centers (Figs.~\ref{jupiter_S_slides}--\ref{jupiter_T_slides_1bar});
this could occur because the large-scale environment
in that region is primed for storm generation, but it also
suggests that the storms may self-interact in a way that can trigger
new storms. As shown in Figure~\ref{jupiter_T_slides_1bar},
active storm centers sometimes shed warm-core vortices that
have lifetimes up to tens of
days and propagate downstream away from the active storm region.
Fig.~\ref{jupiter_T_slides} exhibits several
examples where features appear to ``die out'' at 5 bars, but a comparison with
Fig.~\ref{jupiter_T_slides_1bar} shows that, in several cases,
the features have not decayed but instead have simply ascended
to the $\sim1$-bar level. Indeed, several of the warm-core
vortices visible in Fig.~\ref{jupiter_T_slides_1bar} have
essentially no signature at 5 bars, indicating that these
vortices are no longer fed by upward motion
from near the water-condensation level.
Importantly, the properties of our simulated storms --- including
their size, amplitude, lifetime, and morphology --- are
self-consistently generated by the
dynamics and are therefore predictions of our model.
This contrasts with previous studies attempting to model the effect of
moist convection on the large-scale flow \citep{Li-2006, showman-2007},
which introduced mass pulses by hand to represent storms, and which
imposed the size, shape, amplitude, and lifetime of the storms
as free parameters.
Despite the lack of small-scale convective dynamics in our model,
our simulated storms show an encouraging resemblance to the evolution
of cloud morphology in real
moist-convection events on Jupiter and Saturn. On Jupiter,
individual storm clouds often last for up to several days and expand to
diameters up to $\sim1000$--$3000\,$km \citep{2000Natur.403..628G,
Porco2003, sanchez-lavega-etal-2008}, although rare events sometimes
reach sizes of $10,000\,$km or more \citep{hueso-etal-2002} and lifetimes
up to 10 days \citep{Porco2003}. On Saturn, storms lasting
tens of days have been observed by Voyager and Cassini
\citep{Sromovsky1983, Porco2005, dyudina-etal-2007}; individual
active storm centers reach $\sim2000\,$km diameters within
an active storm complex up to $\sim6000\,$km across. The observed
clouds are probably large-scale anvils fed by numerous individual
convective storms that become spatially organized, analogous to ``mesoscale
convective complexes'' on Earth \citep{2000Natur.403..628G}.
As already described, the rapid expansion and
generation of anticyclonic vorticity near the tops of our simulated
storms and the close proximity of ascending active storm centers to subsiding
regions are consistent with observations of jovian storms, although
our storm sizes and lifetimes are somewhat too large.
The creation of (generally short-lived) vortices from latent
heating, as occurs in our simulations, has
not been clearly observed on Jupiter, but storm-generated vortices
have tentatively been
captured in Cassini images on Saturn \citep{Porco2005, dyudina-etal-2007}.
A similar phenomenon was obtained in the shallow-water
simulations of \citet{showman-2007}.
It is worth re-iterating that our simulations adopt local hydrostatic
balance and thus cannot capture the strong vertical accelerations
associated with convectively unstable motions and lightning generation.
At present, such phenomena can only be resolved by regional-scale
non-hydrostatic models \citep[e.g.][]{yair-etal-1995, hueso-etal-2001},
although the effects of such convective motions
on the large-scale flow could potentially be represented in large-scale
models using a cumulus parameterization.
The large horizontal dimensions of observed jovian/saturnian
storm anvil clouds, and the qualitative similarity of our results to the
observed storms, support the possibility that
such efforts could prove fruitful in attempting to explain
observations of storm-cloud evolution on Jupiter and Saturn.
\section{Conclusion}
We presented global, three-dimensional numerical models to simulate the
formation of zonal jets by large-scale latent heating on the giant planets.
These models explicitly include water vapor and its
condensation and the resulting latent heating. We find that latent heating
can naturally produce banded zonal jets similar to those observed on
the giant planets. Our Jupiter and Saturn
simulations develop $\rm \sim 20$ zonal jets and Uranus/Neptune simulations
develop $\rm 3 - 7$ zonal jets depending on the water abundance. The jet spacing is consistent
with the Rhines scale $\pi (2U/\beta)^{1/2}$. The zonal winds in our
simulations produce
modest violations of the barotropic and Charney-Stern stability criteria at some latitudes.
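As a rough consistency check on the jet count, the Rhines spacing can be evaluated with illustrative Jupiter-like numbers (order-of-magnitude inputs, not values taken from our simulations):

```python
import numpy as np

# Illustrative Jupiter-like parameters (order-of-magnitude inputs only).
omega = 1.76e-4                  # rotation rate [1/s]
a = 7.14e7                       # planetary radius [m]
U = 30.0                         # characteristic jet speed [m/s]
lat = np.deg2rad(30.0)           # representative mid-latitude

beta = 2.0 * omega * np.cos(lat) / a
L_rhines = np.pi * np.sqrt(2.0 * U / beta)    # jet spacing, pi*(2U/beta)^(1/2)
n_jets = np.pi * a / L_rhines                 # pole-to-pole arc / spacing
# -> roughly 20 jets
```

With these inputs the pole-to-pole arc divided by the Rhines spacing gives on the order of 20 jets, consistent with the simulated jet count.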
In our simulations,
condensation of water vapor releases latent heat and produces baroclinic eddies. These
eddies interact with the large-scale flow and the $\beta$ effect to pump momentum
up-gradient into the zonal jets. At the same time, a meridional circulation develops
whose Coriolis acceleration counteracts the eddy accelerations in the weather layer.
This near-cancellation of eddy and Coriolis accelerations
leads to slow evolution of the zonal jets and maintains the steadiness
of zonal jets in the presence of continual forcing.
Such a process was suggested by \citet{2006Icar..182..513S} and \citet{Del_Genio2007}
and occurred also in the simulations of \citet{showman-2007} and \citet{lian-showman-2008}.
Our simulations also produce equatorial superrotation for Jupiter and Saturn and subrotation for Uranus
and Neptune. Although a number of previous attempts have been made to produce superrotation
on Jupiter/Saturn and subrotation on Uranus/Neptune, previous models have generally lacked
an ability to produce {\it both} superrotation on Jupiter/Saturn and subrotation on Uranus/Neptune
without introducing {\it ad hoc} forcing or tuning of model parameters.
Ours is the first study to naturally produce such dual behavior, without tuning,
within the context of a single model.
Although the speeds of the superrotation and subrotation are weaker than observed,
our simulations provide a possible mechanism to explain the dichotomy in equatorial-jet
direction between the gas giants (Jupiter/Saturn) and ice giants (Uranus/Neptune) within
the context of a single model. In our simulations, the strength,
scale, and direction of the equatorial flow are strongly affected by the abundance of water vapor
as well as by the planetary radius and rotation rate.
Equatorial superrotation preferably forms in simulations with modest water vapor abundance
and is further promoted by large planetary radii and fast rotation rates, as occur
at Jupiter and Saturn. In contrast, high water abundance leads to equatorial subrotation
regardless of the planetary radius and rotation rate explored here. In this
picture, the dichotomy in the equatorial jet direction between the gas and ice giants
would result from a combination of the faster rotation rates, larger radii, and
probable lower water abundances on Jupiter/Saturn relative to those on Uranus/Neptune.
Despite this encouraging result, Saturn poses a possible difficulty with this
picture. Our Saturn simulations generated equatorial superrotation when the
water abundance is five times solar but equatorial subrotation when it is ten
times solar. Saturn's actual water abundance is unknown but could easily lie
anywhere within this range. Moreover, even our five-times-solar Saturn case produced
a superrotation that is much weaker and narrower than observed --- about
$120\rm \,m\,s^{-1}$ in the simulation versus $\sim400\rm \,m\,s^{-1}$ on the real
planet. (The equatorial jet in our Jupiter simulations is also too narrow,
although the discrepancy is less severe.) However, a
variety of processes excluded here (e.g., cloud microphysics, realistic radiative
transfer, and moist convection) could significantly affect the jet profile. Future
investigations that include these effects are needed before a definite assessment can be
made.
Consistent with our earlier work, we again find that our simulations generate deep barotropic
jets despite the localization of the eddy accelerations to the weather layer (pressures less
than $\sim7\,$bars in our Jupiter simulations, for example). In some simulations,
the deep jets have strength similar to the winds in the weather layer. The mechanism that forms
the deep jets is similar to the mechanism we previously identified
\citep{lian-showman-2008,2006Icar..182..513S}. However,
these deep jets are affected by the redistribution of water vapor, which makes the diagnostics
very complicated in the present case.
Our simulations successfully produce the large-scale dynamic features
on Jupiter and Uranus/Neptune under the effect of large-scale latent heating.
However, our simulations lack long-lived vortices such as the Great Red Spot
on Jupiter and Great Dark Spot on Neptune. We also ignore the precipitation
and re-evaporation of condensates and use an idealized radiative cooling
scheme. The grid resolution in our simulations is relatively low for
resolving the mesoscale moist convection events which have
typical horizontal scales of $1000$ kilometers \citep{Little1999, Porco2003}.
Future models can include cloud physics, a sub-grid-scale
parameterization of cumulus convection,
a more realistic radiative transfer scheme, and can explore the coupling
between the deep interior and the weather-layer processes identified here.
\section{Acknowledgement}
This research was supported by NASA Planetary Atmospheres grant NNG06GF28G to APS.
\label{lastpage}
\section{Parameter Estimation}
We summarize the parameter estimation procedure in Algorithm \ref{alg:Estimation}.
\begin{algorithm
\DontPrintSemicolon
\KwIn{Observations $\mathcal D = \{x_i, c_{ki}\}$, hyperparameters $\Phi=\{\mathbf\Omega_0, \delta, \nu\}$, learning rate $\epsilon $.}
\KwOut{Estimated $\hat{\Theta} = \left\{\{{\bf W}_z, \mathbf{\Sigma}_z\}_{z=1}^Z, \mathbf{\Omega}, {\bf U}, \tau\right\}$}
\Begin{
Initialize $\Theta$ \;
\Repeat{convergence or reach max\_iter}{
\tcp{Update ${\bf W}$ and ${\bf U}$ using SGD}
\For{each $k \in \{1,\cdots, K\}$ }{
Given an instance $\langle x_i, c_{ki}\rangle$ \;
\For{each $z \in \{1,\cdots, Z\}$ }{
$ {\bf w}_k^{\mathcal G_z} \leftarrow {\bf w}_k^{\mathcal G_z} - \epsilon \frac{\partial \mathcal O }{\partial {\bf w}_k^{\mathcal G_z}} $ (Equation (\ref{update_Wz}))\;
}
$ {\bf u}_k \leftarrow {\bf u}_k - \epsilon \frac{\partial \mathcal O }{\partial {\bf u}_k} $ (Equation (\ref{update_u}))\;
}
Update $\tau$ by $ \tau \leftarrow \tau - \epsilon \frac{\partial \mathcal O }{\partial \tau} $ (Equation (\ref{update_lambda}))\;
Update $\mathbf\Omega$ according to Equation (\ref{update_Omega}) \;
\For{each $z \in \{1,\cdots, Z\}$ }{
Update $\mathbf{\Sigma}_z$ according to Equation (\ref{update_Sigma}) \;
}
}
\Return $\hat{\Theta}$\;
}
\caption{Parameter Estimation}
\label{alg:Estimation}
\end{algorithm}
\section{Conclusion}
In this paper, we provided a systematic study on risk profiling by simultaneously modeling multiple complications in chronic disease care using T2DM as a case study.
We proposed a novel multi-task learning model, \emph {TREFLES}, that jointly captures relationships between risks, risk factors, and risk factor selection learned from the data with the ability to incorporate domain knowledge as priors.
TREFLES is favorable for healthcare applications because, in addition to improved prediction performance, it also yields clinically meaningful insights about the relationships among different complications and risk factors.
Extensive experiments on a T2DM patient dataset extracted from a large electronic medical claims database validated the improved prediction performance of TREFLES over current state of the art methods.
Moreover, the risk associations learned and the risk factors identified by TREFLES lead to meaningful insights consistent with clinical findings.
There are a number of interesting future research directions.
First, different complications could correspond to different severities of diabetes and we can use this knowledge to impose additional constraints on the risk correlations to potentially improve performance.
Second, the coefficient shrinkage strategy can be extended to incorporate domain knowledge about the risk factors to potentially improve interpretability.
Finally, we are also interested in applying our model to other chronic disease conditions with multiple complications or comorbidities, which might benefit from the modeling innovations proposed here.
\section{Parameter Estimation}
Let $\Theta = \left\{\{{\bf W}_z, \mathbf{\Sigma}_z\}_{z=1}^Z, \mathbf{\Omega}, {\bf U}, \tau\right\}$ denote all parameters to be estimated, and $\Phi=\{\mathbf\Omega_0, \delta, \nu\}$ denote all hyperparameters. For each task $k$ we observe a set of complication events $\mathcal D_k=\{\langle x_i, c_{ki}\rangle \}_{i\in \mathcal N_k}$, where $\mathcal N_k$ represents the patients observed for complication $k$. The observed complication events are denoted as $\mathcal D= \{\mathcal D_k\}_{k=1}^K$. Given $\{\mathcal D, \Phi\}$, the posterior distribution is
\begin{equation*}
\begin{split}
& {\mathrm{Pr}}(\Theta| \mathcal D, \Phi)\\
& \propto {\mathrm{Pr}}(\tau) {\mathrm{Pr}}(\mathbf{\Omega}) \prod_{k=1}^K\prod_{i=1}^{\mathcal N_k}{\mathrm{Pr}}(c_{ki}|\boldsymbol{\beta}_k, {\bf x}_i)
\prod_{z=1}^Z {\mathrm{Pr}}({\bf W}_z) \prod_{j=1}^M {\mathrm{Pr}}({\bf u}_j) \\
& \propto
\frac{2}{\pi (1+\tau^2)} |\mathbf\Omega|^{-\frac{\nu + K + 1}{2}} \exp\left(-\frac{\delta}{2} \operatorname{tr} ( \mathbf\Omega_0 \mathbf\Omega^{-1}) \right) \prod_{k=1}^K\prod_{i=1}^{\mathcal N_k}{\mathrm{Pr}}(c_{ki}|\boldsymbol{\beta}_k, {\bf x}_i) \\
& \times \prod_{z=1}^Z \frac{\exp\left( -\frac{1}{2} \mathrm{tr}\left[ \mathbf{\Omega}^{-1} {\bf W}_z^\top \mathbf{\Sigma}_z^{-1} {\bf W}_z \right] \right)}{(2\pi )^{KG_z/2} |\mathbf{\Sigma}_z|^{K/2} |\mathbf{\Omega}|^{G_z/2} }
\prod_{j=1}^M \frac {\exp \left(-\frac{1}{2} {\bf u}^j \mathbf{\Omega}^{-1} ({\bf u}^j)^\top \right)}{ (2\pi )^{K/2}|\mathbf{\Omega}|^{1/2}}
.
\end{split}
\end{equation*}
We estimate the parameters via maximizing the log posterior $\ell(\Theta )= \log {\mathrm{Pr}}(\Theta| \mathcal D, \Phi)$.
\iffalse
\begin{equation}
\begin{split}
\ell(\Theta ) & = \sum_{k=1}^K\sum_{i=1}^{\mathcal N_k}\bigg\{ c_{ki} \log \sigma(\boldsymbol{\beta}_k^\top {\bf x}_i) + (1-c_{ki}) \log(1- \sigma(\boldsymbol{\beta}_k^\top {\bf x}_i)) \bigg\} \\
& +\sum_{z=1}^Z \Bigg\{ -\frac{1}{2}\mathrm{tr}\left[ \mathbf{\Omega}^{-1} {\bf W}_z^\top \mathbf{\Sigma}_z^{-1} {\bf W}_z \right] - \frac{K}{2}\log |\mathbf{\Sigma}_z| - \frac{G_z}{2}\log |\mathbf{\Omega}| \Bigg\} \\
& + \sum_{j=1}^M \Bigg\{ -\frac{1}{2} {\bf u}_j^\top \mathbf{\Omega}^{-1} {\bf u}^j - \frac{1}{2}\log |\mathbf{\Omega}| \Bigg\} \\
& - 2\log(1+\tau^2) -\frac{\nu + K + 1}{2} \log |\mathbf\Omega| -\frac{1}{2} \operatorname{tr} (\delta \mathbf\Omega_0 \mathbf\Omega^{-1})
\end{split}
\end{equation}
\fi
\paragraph{Objective Function.}
Taking the negative log-posterior $-\ell(\Theta)$ and dropping constant terms, we obtain the following objective function $\mathcal O (\Theta)$ to minimize:
\begin{equation*}\label{Equ:obj_fun}
\begin{split}
\mathcal O & =
\sum_{k=1}^K\sum_{i=1}^{\mathcal N_k} \bigg\{ -c_{ki} \log \sigma(\boldsymbol{\beta}_k^\top {\bf x}_i) - (1-c_{ki}) \log(1- \sigma(\boldsymbol{\beta}_k^\top {\bf x}_i)) \bigg\}\\
& + \frac{1}{2}\sum_{z=1}^Z \bigg\{ \mathrm{tr}\left[ \mathbf{\Omega}^{-1} {\bf W}_z^\top \mathbf{\Sigma}_z^{-1} {\bf W}_z \right] \bigg\}
+ \frac{2M + K + \nu + 1}{2} \log |\mathbf\Omega| \\
& + \frac{\delta}{2} \operatorname{tr} ( \mathbf\Omega_0 \mathbf\Omega^{-1})
+ \frac{K}{2} \sum_{z=1}^Z \log |\mathbf{\Sigma}_z|
+ \frac{1}{2}\sum_{j=1}^M {\bf u}^j \mathbf{\Omega}^{-1} ({\bf u}^j)^\top + 2\log(1+\tau^2) \\
\text{s.t.} & ~~\mathbf\Omega \succeq 0, \mathbf{\Sigma}_z \succeq 0.
\end{split}
\end{equation*}
where $\mathbf X \succeq 0$ means that the matrix $\mathbf X$ is positive semidefinite.
Solving the above optimization problem is non-trivial. The optimization problem is not convex since $\log |\mathbf\Omega|$ and $\log |\mathbf{\Sigma}_z|$ are concave functions. Therefore we adopt an iterative algorithm to solve the problem. Within each iteration, the blocks ${\bf W}_z$, $\mathbf{\Sigma}_z$, $\mathbf\Omega$, ${\bf U}$, and $\tau$ are updated alternatively.
\vspace{3pt}
{\noindent \bf Update ${\bf W}_z$ given others}:
With other parameters fixed, the objective function w.r.t ${\bf W}_z$ becomes
\begin{equation*}\label{Equ:obj_fun_W}
\begin{split}
\underset{\{{\bf W}_z\}_{z=1}^Z}{\operatorname{arg\,min}} ~~& \sum_{k=1}^K\sum_{i=1}^{\mathcal N_k} \bigg\{ -c_{ki} \log \sigma(\boldsymbol{\beta}_k^\top {\bf x}_i) - (1-c_{ki}) \log(1- \sigma(\boldsymbol{\beta}_k^\top {\bf x}_i)) \bigg\} \\
& + \sum_{z=1}^Z \Bigg\{ \frac{1}{2}\mathrm{tr}\left[ \mathbf{\Omega}^{-1} {\bf W}_z^\top \mathbf{\Sigma}_z^{-1} {\bf W}_z \right] \Bigg\}.
\end{split}
\end{equation*}
This is a convex optimization problem with respect to ${\bf W}_z$. We use stochastic gradient descent method to update the $\{{\bf W}_z\}_{z=1}^Z$.
The main process involves randomly scanning training instances and iteratively updating parameters. In each iteration, we randomly sample an instance $\langle x_i, c_{ki}\rangle$, and we minimize $\mathcal O (\Theta)$ using the update rule for
$ \Theta = \Theta - \epsilon \cdot \frac{\partial \mathcal O (\Theta)}{\partial \Theta}$, where $\epsilon $ is a learning rate. Note that ${\bf w}_k = [{\bf w}_{\mathcal G_1}, {\bf w}_{\mathcal G_2}, \dots, {\bf w}_{\mathcal G_Z}]^\top$ and ${\bf W}_z= \{{\bf w}^j\}_{j\in \mathcal G_z}$. Let ${\bf w}_k^{\mathcal G_z} = [w_{jk}]_{j \in \mathcal G_z}^\top$ be the $k$-th column of ${\bf W}_z$; then ${\bf w}_k^{\mathcal G_z}$ collects the coefficients of the features in group ${\mathcal G_z}$ for task $k$. Given an instance $\langle x_i, c_{ki}\rangle$, the gradient with respect to ${\bf w}_k^{\mathcal G_z}$ is
\begin{equation}\label{update_Wz}
\begin{split}
\frac{\partial \mathcal O }{\partial {\bf w}_k^{\mathcal G_z}}
= & -\Big(c_{ki} - \sigma(\boldsymbol{\beta}_k^\top {\bf x}_i) \Big) \frac{ \partial \boldsymbol{\beta}_k^\top {\bf x}_i}{\partial {\bf w}_k^{\mathcal G_z}}
+ \left[ \mathbf{\Sigma}_z^{-1} {\bf W}_z \mathbf{\Omega}^{-1} \right]_{k}^{\mathcal G_z}
\end{split}
\end{equation}
where ${\bf x}_i^{\mathcal G_z}$ denotes the features of instance $i$ in group $z$, and $\left[ {\bf X} \right]_{k}$ denotes the $k$-th column of matrix ${\bf X}$.
Since $\beta_{jk} = \tau \lambda_{jk} w_{jk}$, we have $\frac{ \partial \boldsymbol{\beta}_k^\top {\bf x}_i}{\partial {\bf w}_k^{\mathcal G_z}} = \tau \boldsymbol{\lambda}_k^{\mathcal G_z} \circ {\bf x}_i^{\mathcal G_z}$.
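For concreteness, the per-instance gradient in Equation (\ref{update_Wz}) can be sketched in a few lines. The array layout below (each ${\bf W}_z$ as a $G_z \times K$ array with tasks in columns) and the precomputed inner product $\boldsymbol\beta_k^\top {\bf x}_i$ are our own illustrative choices, not a reference implementation:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def grad_w_group(c_ki, beta_dot_x, tau, lam_kz, x_iz, W_z, Sigma_z, Omega, k):
    """Per-instance gradient of the objective w.r.t. w_k^{G_z} (Eq. update_Wz).

    W_z:    (G_z, K) coefficients of group z, tasks in columns.
    lam_kz: (G_z,)   local shrinkage factors lambda for task k, group z.
    x_iz:   (G_z,)   features of instance i restricted to group z.
    """
    # Data term: -(c_ki - sigma(beta_k^T x_i)) * tau * lambda o x
    data_term = -(c_ki - sigmoid(beta_dot_x)) * tau * lam_kz * x_iz
    # Prior term: k-th column of Sigma_z^{-1} W_z Omega^{-1}
    prior_term = (np.linalg.solve(Sigma_z, W_z) @ np.linalg.inv(Omega))[:, k]
    return data_term + prior_term
```

A finite-difference check against the corresponding single-group objective confirms the chain rule used above.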
\vspace{2pt}
{\noindent \bf Update ${\bf U}$ given others}:
With other parameters fixed, the objective function w.r.t ${\bf U}$ becomes
\begin{equation*}\label{Equ:obj_fun_U}
\begin{split}
\underset{{\bf U}}{\operatorname{arg\,min}} & \sum_{k=1}^K\sum_{i=1}^{\mathcal N_k} \bigg\{ -c_{ki} \log \sigma(\boldsymbol{\beta}_k^\top {\bf x}_i) - (1-c_{ki}) \log(1- \sigma(\boldsymbol{\beta}_k^\top {\bf x}_i)) \bigg\}\\
& + \frac{1}{2}\sum_{j=1}^M {\bf u}^j \mathbf{\Omega}^{-1} ({\bf u}^j)^\top
\end{split}
\end{equation*}
To apply SGD, we optimize columns ${\bf u}_k$ instead of rows ${\bf u}^j$. Note that $\sum_{j=1}^M {\bf u}^j \mathbf{\Omega}^{-1} ({\bf u}^j)^\top = \operatorname{tr}({\bf U}\mathbf{\Omega}^{-1} {\bf U}^\top)$.
Given an instance $\langle x_i, c_{ki}\rangle$, the gradient with respect to ${\bf u}_k$ is
\begin{equation}\label{update_u}
\begin{split}
\frac{\partial \mathcal O }{\partial {\bf u}_k}
= & -\Big(c_{ki} - \sigma(\boldsymbol{\beta}_k^\top {\bf x}_i) \Big) \frac{ \partial \boldsymbol{\beta}_k^\top {\bf x}_i}{\partial {\bf u}_k}
+ \left[{\bf U} \mathbf{\Omega}^{-1}\right]_k
\end{split}
\end{equation}
Note that $\beta_{jk}= \tau \lambda_{jk} w_{jk}$ with $\lambda_{jk} = \mathrm{tan}\left(\frac{\pi \Phi(u_{jk})}{2}\right), \Phi(u_{jk}) = \frac {1}{2}\left[1+\operatorname {erf} \left({\frac {u_{jk}}{ \sqrt {2\Omega_{kk} }}}\right)\right]$. Then $\frac{\partial \boldsymbol{\beta}_k^\top {\bf x}_i}{\partial {\bf u}_k} = \tau \frac{\partial f({\bf u}_k)}{\partial {\bf u}_k} \circ {\bf w}_k \circ {\bf x}_i$, where $\left.\frac{\partial f({\bf u}_k)}{\partial {\bf u}_k}\right\vert_{jk} = \frac{\pi}{2}\mathrm{sec}^2\left(\frac{\pi \Phi(u_{jk})}{2}\right) \frac{1}{\sqrt {2\pi \Omega_{kk}}}\exp\left(-\frac{u_{jk}^2}{2\Omega_{kk}}\right)$.
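Since $\Phi(u_{jk})$ is uniform on $(0,1)$ when $u_{jk}$ is Gaussian, the link $\lambda_{jk} = \tan(\pi\Phi(u_{jk})/2)$ yields a standard half-Cauchy shrinkage scale. A minimal scalar sketch of this link, using only the standard library:

```python
import math

def norm_cdf(u, var):
    """CDF of a zero-mean Gaussian with variance var."""
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0 * var)))

def lam(u, var=1.0):
    """Half-Cauchy shrinkage scale: lambda = tan(pi * Phi(u) / 2)."""
    return math.tan(0.5 * math.pi * norm_cdf(u, var))
```

Here $\lambda = 1$ (the half-Cauchy median) when $u = 0$, and large negative $u$ drives $\lambda$ toward zero, effectively switching the corresponding feature off.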
\vspace{2pt}
{\noindent \bf Update $\tau$ given others}:
With other parameters fixed, the objective function w.r.t $\tau$ becomes
\begin{equation*}\label{Equ:obj_fun_tau}
\begin{split}
\underset{\tau}{\operatorname{arg\,min}} & \sum_{k=1}^K\sum_{i=1}^{\mathcal N_k} \bigg\{ -c_{ki} \log \sigma(\boldsymbol{\beta}_k^\top {\bf x}_i) - (1-c_{ki}) \log(1- \sigma(\boldsymbol{\beta}_k^\top {\bf x}_i)) \bigg\}\\
& + 2\log(1+\tau^2)
\end{split}
\end{equation*}
The gradients with respect to $\tau$ are given by
\begin{equation}\label{update_lambda}
\begin{split}
\frac{\partial \mathcal O }{\partial \tau}
= -\sum_{k=1}^K\sum_{i=1}^{\mathcal N_k} \Big(c_{ki} - \sigma(\boldsymbol{\beta}_k^\top {\bf x}_i) \Big) \boldsymbol{\lambda}_k^\top {\bf x}_i
+ \frac{4\tau}{1+\tau^2}
\end{split}
\end{equation}
where $\boldsymbol{\lambda}_k$ is the $k$-th column of $\mathbf{\Lambda}$.
This allows $\tau$ to be updated using gradient descent.
\vspace{2pt}
{\noindent \bf Update $\mathbf\Omega$ given others}:
With other parameters fixed, the objective function w.r.t $\mathbf\Omega$ becomes
\begin{equation*}\label{Equ:obj_fun_Omega1}
\begin{split}
\underset{\mathbf\Omega}{\operatorname{arg\,min}} & \sum_{z=1}^Z \Bigg\{ \frac{1}{2}\mathrm{tr}\left[ \mathbf{\Omega}^{-1} {\bf W}_z^\top \mathbf{\Sigma}_z^{-1} {\bf W}_z \right] \Bigg\}
+ \frac{\delta}{2} \operatorname{tr} ( \mathbf\Omega_0 \mathbf\Omega^{-1}) + \frac{\xi}{2} \log |\mathbf\Omega|,
\end{split}
\end{equation*}
where $\xi = 2M + K + \nu + 1$.
The last term $\log |\mathbf{\Omega}|$ can be seen as a penalty on the complexity of $\mathbf{\Omega}$, and can be replaced with the constraint $\mathrm{tr}(\mathbf{\Omega}) = 1$~\cite{Zhang:MLT:UAI2010}.
Then the above problem (Equation (\ref{Equ:obj_fun_Omega1})) can be reformulated as:
\begin{equation}\label{Equ:obj_fun_Omega_relax}
\begin{split}
\underset{\mathbf\Omega}{\operatorname{arg\,min}}~~ & \sum_{z=1}^Z \Bigg\{ \frac{1}{2}\mathrm{tr}\left[ \mathbf{\Omega}^{-1} {\bf W}_z^\top \mathbf{\Sigma}_z^{-1} {\bf W}_z \right] \Bigg\}
+ \frac{\delta}{2} \operatorname{tr} ( \mathbf\Omega_0 \mathbf\Omega^{-1}) \\
\mathrm{s.t.}~~ & \mathbf{\Omega} \succeq 0, ~ \mathrm{tr}(\mathbf{\Omega}) = 1
\end{split}
\end{equation}
Equation (\ref{Equ:obj_fun_Omega_relax}) has an analytical solution:
\begin{equation}\label{update_Omega}
\begin{split}
\mathbf{\Omega}
= \frac{ \bigg( \frac{1}{2} \sum_{z=1}^Z {\bf W}_z^\top \mathbf{\Sigma}_z^{-1} {\bf W}_z + \frac{\delta}{2} \mathbf\Omega_0 \bigg)^{\frac{1}{2}} }
{\mathrm{tr} \Bigg[ \bigg( \frac{1}{2} \sum_{z=1}^Z {\bf W}_z^\top \mathbf{\Sigma}_z^{-1} {\bf W}_z + \frac{\delta}{2} \mathbf\Omega_0 \bigg)^{\frac{1}{2}} \Bigg] }.
\end{split}
\end{equation}
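Equation (\ref{update_Omega}) amounts to a positive-semidefinite matrix square root followed by trace normalization, which is cheap to compute with one eigendecomposition. A sketch in our notation (hypothetical array layout: each ${\bf W}_z$ stored as a $G_z \times K$ array with tasks in columns):

```python
import numpy as np

def psd_sqrt(A):
    """Matrix square root of a symmetric positive semidefinite matrix."""
    vals, vecs = np.linalg.eigh(A)
    vals = np.clip(vals, 0.0, None)   # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def update_omega(W_list, Sigma_list, Omega0, delta):
    """Trace-normalized analytical update for the task covariance Omega
    (Eq. update_Omega): Omega = M^{1/2} / tr(M^{1/2}) with
    M = (1/2) sum_z W_z^T Sigma_z^{-1} W_z + (delta/2) Omega0."""
    M = 0.5 * delta * Omega0
    for W_z, Sigma_z in zip(W_list, Sigma_list):
        M = M + 0.5 * W_z.T @ np.linalg.solve(Sigma_z, W_z)
    root = psd_sqrt(M)
    return root / np.trace(root)
```

By construction the returned matrix is symmetric positive definite with unit trace, and it minimizes $\mathrm{tr}(\mathbf\Omega^{-1} M)$ over all feasible $\mathbf\Omega$.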
\vspace{5pt}
{\noindent \bf Update $\mathbf{\Sigma}_z$ given others}:
With other parameters fixed, the objective function w.r.t $\mathbf\Sigma_z$ becomes
\begin{equation*}\label{Equ:obj_fun_Sigma1}
\begin{split}
\underset{\mathbf{\Sigma}_z}{\operatorname{arg\,min}}~ & \frac{1}{2}\mathrm{tr}\left[ \mathbf{\Omega}^{-1} {\bf W}_z^\top \mathbf{\Sigma}_z^{-1} {\bf W}_z \right] + \frac{K}{2} \log |\mathbf{\Sigma}_z|.
\end{split}
\end{equation*}
Similar to the case of updating $\mathbf\Omega$, the last term $\log |\mathbf{\Sigma}_z|$ in Equation (\ref{Equ:obj_fun_Sigma1}) can be seen as a penalty on the complexity of $\mathbf{\Sigma}_z$, and can be replaced with a constraint $\mathrm{tr}(\mathbf{\Sigma}_z) = 1$. Then the above problem can be reformulated as:
\begin{equation}\label{Equ:obj_fun_Sigma_relax}
\begin{split}
\underset{\mathbf{\Sigma}_z}{\operatorname{arg\,min}} & ~~\mathrm{tr}\left[ \mathbf{\Omega}^{-1} {\bf W}_z^\top \mathbf{\Sigma}_z^{-1} {\bf W}_z \right] ~~
\mathrm{s.t.} ~~ \mathbf{\Sigma}_z \succeq 0, ~ \mathrm{tr}(\mathbf{\Sigma}_z) = 1.
\end{split}
\end{equation}
Equation (\ref{Equ:obj_fun_Sigma_relax}) has an analytical solution:
\begin{equation}\label{update_Sigma}
\begin{split}
\mathbf{\Sigma}_z
= \frac{ \Big( {\bf W}_z \mathbf{\Omega}^{-1} {\bf W}_z^\top \Big)^{\frac{1}{2}} }
{\mathrm{tr} \bigg[ \Big( {\bf W}_z \mathbf{\Omega}^{-1} {\bf W}_z^\top \Big)^{\frac{1}{2}} \bigg] }.
\end{split}
\end{equation}
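Both closed-form updates share the same structure: a matrix square root normalized by its trace. The following NumPy sketch mirrors Equations (\ref{update_Omega}) and (\ref{update_Sigma}); the helper names and the use of \texttt{scipy.linalg.sqrtm} are our own implementation choices, not prescribed by the model.

```python
import numpy as np
from scipy.linalg import sqrtm

def trace_normalized_sqrt(A):
    """Return A^{1/2} / tr(A^{1/2}) for a symmetric PSD matrix A."""
    S = np.real(sqrtm(A))
    return S / np.trace(S)

def update_Omega(W_blocks, Sigma_blocks, Omega0, delta):
    """Closed-form update of the K x K task covariance Omega.

    Implements Omega = (0.5 * sum_z W_z^T Sigma_z^{-1} W_z + 0.5 * delta * Omega0)^{1/2},
    normalized by its trace.
    """
    A = 0.5 * sum(Wz.T @ np.linalg.inv(Sz) @ Wz
                  for Wz, Sz in zip(W_blocks, Sigma_blocks))
    return trace_normalized_sqrt(A + 0.5 * delta * Omega0)

def update_Sigma_z(Wz, Omega):
    """Closed-form update of the G_z x G_z feature-group covariance Sigma_z:
    (W_z Omega^{-1} W_z^T)^{1/2}, normalized by its trace."""
    return trace_normalized_sqrt(Wz @ np.linalg.inv(Omega) @ Wz.T)
```

By construction, both updates return symmetric positive semidefinite matrices satisfying the unit-trace constraints of the relaxed problems.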
\iffalse
\begin{algorithm}[t]
\DontPrintSemicolon
\KwIn{Observations $\mathcal D = \{x_i, c_{ki}\}$, hyperparameters $\Phi=\{\mathbf\Omega_0, \delta, \nu\}$, learning rate $\epsilon $.}
\KwOut{Estimated $\hat{\Theta} = \left\{\{{\bf W}_z, \mathbf{\Sigma}_z\}_{z=1}^Z, \mathbf{\Omega}, {\bf U}, \tau\right\}$}
\Begin{
Initialize $\Theta$ \;
\Repeat{convergence or reach max\_iter}{
\tcp{Update ${\bf W}$ and ${\bf U}$ using SGD}
\For{each $k \in \{1,\cdots, K\}$ }{
Given an instance $\langle x_i, c_{ki}\rangle$ \;
\For{each $z \in \{1,\cdots, Z\}$ }{
$ {\bf w}_k^{\mathcal G_z} \leftarrow {\bf w}_k^{\mathcal G_z} - \epsilon \frac{\partial \mathcal O }{\partial {\bf w}_k^{\mathcal G_z}} $ (Equation (\ref{update_Wz}))\;
}
$ {\bf u}_k \leftarrow {\bf u}_k - \epsilon \frac{\partial \mathcal O }{\partial {\bf u}_k} $ (Equation (\ref{update_u}))\;
}
Update $\tau$ by $ \tau \leftarrow \tau - \epsilon \frac{\partial \mathcal O }{\partial \tau} $ (Equation (\ref{update_lambda}))\;
Update $\mathbf\Omega$ according to Equation (\ref{update_Omega}) \;
\For{each $z \in \{1,\cdots, Z\}$ }{
Update $\mathbf{\Sigma}_z$ according to Equation (\ref{update_Sigma}) \;
}
}
\Return $\hat{\Theta}$\;
}
\caption{Parameter Estimation}
\label{alg:Estimation}
\end{algorithm}
We summarize the parameter estimation procedure in Algorithm \ref{alg:Estimation}.
\fi
\section{Experiments}\label{sec:exp}
In this section we present empirical evaluations to carefully vet our model on patient-level data extracted from a large real-world electronic medical claims database.
\subsection{Experimental Setup and Data}
We conduct a retrospective cohort study using the MarketScan Commercial Claims and Encounter (CCAE) database from Truven Health\footnote{https://truvenhealth.com/}.
The data on the patients are contributed by a selection of large private employers' health plans, as well as government and public organizations.
We use a dataset of de-identified patients between the years 2011 and 2014.
The patient cohort used in the study consisted of T2DM patients selected based on the following criteria:
\begin{enumerate}
\setlength\itemsep{0.1cm}
\item[I.] The frequency ratio between Type 2 diabetes visits to Type 1 diabetes visits is larger than $0.5$; AND
\item[II-a.] The patient has two (2) or more Type 2 diabetes labeled events on different days; OR
\item[II-b.] The patient received insulin and/or antidiabetic medication.
\end{enumerate}
We focus on the risk of developing complications in the two-year time window immediately following the initial T2DM diagnosis.
Guided by clinical experts and following the report from American Diabetes Association~\cite{american2003report}, we identified 12 common complications of T2DM.
We selected patients with at least two years of observations before the initial T2DM diagnosis.
Further, patients who were under 19 years of age or over 64 years of age at the initial T2DM diagnosis are removed.
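The inclusion criteria above can be sketched as a simple filter over per-patient event lists. The event encoding below is our own illustration, not the CCAE schema:

```python
from collections import defaultdict

def select_t2dm_cohort(events):
    """Apply inclusion criteria I, II-a, and II-b to claims events.

    `events` is an iterable of (patient_id, event_type, date) tuples, where
    event_type is one of 't2dm', 't1dm', or 'insulin_or_antidiabetic'.
    This encoding is illustrative; the real CCAE extraction is more involved.
    """
    by_patient = defaultdict(list)
    for pid, etype, date in events:
        by_patient[pid].append((etype, date))
    cohort = set()
    for pid, evs in by_patient.items():
        n_t2 = sum(1 for e, _ in evs if e == "t2dm")
        n_t1 = sum(1 for e, _ in evs if e == "t1dm")
        t2_days = {d for e, d in evs if e == "t2dm"}
        on_meds = any(e == "insulin_or_antidiabetic" for e, _ in evs)
        # I. T2DM-to-T1DM visit ratio larger than 0.5
        #    (holds trivially when there are T2DM visits but no T1DM visits)
        if n_t2 <= 0.5 * n_t1:
            continue
        # II-a. two or more T2DM events on different days; OR
        # II-b. the patient received insulin and/or antidiabetic medication
        if len(t2_days) >= 2 or on_meds:
            cohort.add(pid)
    return cohort
```

The age and observation-window filters would be applied on top of this selection.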
Table \ref{label:T2DMcomp} shows the complications selected in this study and the corresponding number of patients.

We use the following prediction variables:
\begin{packeditemize}
\setlength\itemsep{0.1cm}
\item {\bf Patient demographics:} age and gender.
\item {\bf Diagnoses:} historical medical conditions encoded as International Classification of Disease (ICD) codes. ICD codes are grouped according to their first three digits, and ICD codes appearing in fewer than $200$ patients are filtered out. This results in $296$ unique ICD features. Patients with fewer than $10$ occurrences of ICD codes are removed.
\item {\bf Medications:} medications that were received before the initial T2DM diagnosis date. A total of $19$ therapeutic classes related to glucose control, cardiac related drugs, and antibiotics were selected.
\end{packeditemize}
This results in a total of $317$ features.
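The diagnosis-feature construction (three-digit grouping plus the two frequency filters) can be sketched as follows; the dict-of-lists data layout and the function name are our own assumptions:

```python
from collections import Counter

def build_icd_features(patient_codes, min_patients=200, min_occurrences=10):
    """Group raw ICD-9 codes by their first three digits and apply the
    two frequency filters described in the text.

    `patient_codes` maps patient_id -> list of raw ICD-9 code strings.
    """
    # group codes by their 3-digit parent in the ICD-9 hierarchy
    grouped = {pid: [c[:3] for c in codes] for pid, codes in patient_codes.items()}
    # keep grouped codes that appear in at least `min_patients` patients
    support = Counter()
    for codes in grouped.values():
        support.update(set(codes))
    vocab = {c for c, n in support.items() if n >= min_patients}
    # drop patients with fewer than `min_occurrences` remaining occurrences
    kept = {pid: [c for c in codes if c in vocab] for pid, codes in grouped.items()}
    kept = {pid: codes for pid, codes in kept.items() if len(codes) >= min_occurrences}
    return vocab, kept
```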
\begin{table*}[!t]
\addtolength{\tabcolsep}{0pt}
\caption{Performance comparisons between the proposed TREFLES model and the baseline approaches for the 12 complications. The AUC average and standard deviation (in parenthesis) over the 5-fold cross validation trials are reported.}\label{tabel:AUC}
\vspace{-0.4cm}
\begin{center}
\normalsize
\begin{tabular}{ c c c c c c c c c c c c c}
\toprule
Method & RET & NEU & NEP & VAS & CEL & PYE & OST & REN & HHS & KET & SEP & SHK \\ \midrule
\multirow{2}{*}{STL} & 0.5397 & 0.5889 & 0.5905 & 0.6581 & 0.5983 & 0.6222 & 0.7574 & 0.7351 & 0.6186 & 0.6558 & 0.7611 & 0.7794\\
& (0.0108) & (0.0092) & (0.0096) & (0.0096) & (0.0049) & (0.0263) & (0.0468) & (0.0110) & (0.0323) & (0.0240) & (0.0152) & (0.0410)\\ \midrule
\multirow{2}{*}{MTFL} & 0.5487 & 0.6034 & 0.6340 & 0.7059 & 0.6047 & 0.5604 & 0.8094 & 0.7801 & 0.6794 & 0.7011 & 0.7962 & 0.8316 \\
& (0.0073) & (0.0134) & (0.0086) & (0.0085) & (0.0077) & (0.0687) & (0.0565) & (0.0078) & (0.0311) & (0.0335) & (0.0099) & (0.0292)\\ \midrule
\multirow{2}{*}{MTRL} & 0.5643 & 0.6100 & 0.6456 & 0.7069 & 0.6283 & 0.6909 & 0.8480 & 0.7933 & 0.6990 & 0.7347 & 0.8182 & 0.8679\\
& (0.0087) & (0.0103) & (0.0099) & (0.0105) & (0.0046) & (0.0633) & (0.0534) & (0.0073) & (0.0187) & (0.0348) & (0.0178) & (0.0209)\\ \midrule
\multirow{2}{*}{FETR} & 0.5815 & 0.6488 & 0.6336 & 0.7290 & 0.6589 & 0.6913 & 0.8610 & 0.8087 & 0.6878 & 0.7320 & 0.8298 & 0.8709\\
& (0.0178) & (0.0063) & (0.0126) & (0.0137) & (0.0067) & (0.0474) & (0.0506) & (0.0163) & (0.0304) & (0.0416) & (0.0140) & (0.0262)\\ \midrule
\multirow{2}{*}{\bf TREFLES} & {\bf 0.5985} & {\bf 0.6697} & {\bf 0.6655} & {\bf 0.7478} & {\bf 0.6793} & {\bf 0.7194} & {\bf 0.8828} & {\bf 0.8316} & {\bf 0.7229} & {\bf 0.7626} & {\bf 0.8425} & {\bf 0.8784} \\
& (0.0150) & (0.0075) & (0.0130) & (0.0091) & (0.0074) & (0.0422) & (0.0571) & (0.0130) & (0.0242) & (0.0341) & (0.0165) & (0.0247)\\
\bottomrule
\end{tabular}
\end{center}
\vspace{-0.3cm}
\end{table*}
\subsection{Evaluation Protocol}
{\noindent \bf Baselines.} We compare the new {\bf TREFLES} method with the following set of strong baselines:
\begin{packeditemize}
\item Single task learning ({\bf STL}): For each task, we use a logistic regression to model the risk of each complication independently.
\item Multi-task feature learning ({\bf MTFL})~\cite{Argyriou:MLFT:nips2007,Argyriou:MLFT:MLJ2008}: MTFL assumes that task association is captured through a subset of features shared among tasks. It learns a few features common across the tasks using group sparsity, \emph{i.e.}, the $\ell_1/\ell_2$-norm regularization on ${\bf W}$, which both couples the tasks and enforces sparsity.
\item Multi-task relationship learning ({\bf MTRL})~\cite{Zhang:MLT:UAI2010}: MTRL assumes that the task association is revealed in the structure of the coefficient matrix ${\bf W}$, but it only considers the task correlations in ${\bf W}$ neglecting the correlations between features.
\item Feature and task relationship learning ({\bf FETR})~\cite{zhao2017FETR}: FETR learns the relationships both between tasks and between features simultaneously. It can be seen as a special case of our model without feature grouping and correlated shrinkage.
\end{packeditemize}
{\noindent \bf Evaluation metrics.}
We evaluate the models using AUC (area under the receiver operating characteristic curve), which is a standard metric in predictive analytics.
\vspace{5pt}
{\noindent \bf Training and testing.}
We used 5-fold cross validation to report results for each model. All the models are implemented with gradient descent optimization and we apply the Adam~\cite{adam} method to automatically adapt the step size during parameter estimation.
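A minimal sketch of the evaluation protocol (rank-based AUC plus $k$-fold cross validation) is given below; it is model-agnostic and the function names are our own, not tied to any implementation in the paper:

```python
import numpy as np
from scipy.stats import rankdata

def auc(y_true, y_score):
    """AUC via the rank-sum (Mann-Whitney U) formulation; handles ties."""
    y = np.asarray(y_true)
    r = rankdata(y_score)
    n_pos = y.sum()
    n_neg = len(y) - n_pos
    return (r[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def cross_val_auc(fit, predict, X, y, k=5, seed=0):
    """Mean/std of AUC over k random folds; `fit`/`predict` wrap any model."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        scores.append(auc(y[test], predict(model, X[test])))
    return float(np.mean(scores)), float(np.std(scores))
```

Reporting the fold mean and standard deviation reproduces the format of Table \ref{tabel:AUC}.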
\subsection{Incorporating Domain Knowledge}
{\noindent \bf Grouping of features.}
We group ICD features according to the domain knowledge encoded in the ICD ontologies.
Specifically, we group ICD-9 codes together when they have a same parent node (3 digits) in the ICD-9 hierarchy.
\vspace{3pt}
{\noindent \bf Prior risk association $\mathbf{\Omega}_0$.}
Note that our model can incorporate prior knowledge on complication associations through $\mathbf{\Omega}_0$. We construct prior associations using the human disease network~\cite{HumanDiseaseNetwork:PANS2007}, which provides the Phi-correlations between pairs of diseases. We aggregate the Phi-correlations between pairs of ICD codes under pairs of T2DM complications. This results in an $\mathbf{\Omega}_0$ that represents our prior knowledge about the correlations between the T2DM complications in our study.
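A hedged sketch of this aggregation is given below; averaging over code pairs (and a unit diagonal) is our own choice, since the paper states only that pairwise Phi-correlations are aggregated:

```python
import numpy as np

def prior_omega(complication_codes, phi):
    """Build the K x K prior association matrix Omega_0.

    `complication_codes` is a list of sets of ICD codes, one per complication;
    `phi` maps frozenset({code_a, code_b}) to the Phi-correlation of that pair.
    """
    K = len(complication_codes)
    O = np.eye(K)
    for a in range(K):
        for b in range(a + 1, K):
            # average the Phi-correlations over all available code pairs
            vals = [phi[frozenset((i, j))]
                    for i in complication_codes[a]
                    for j in complication_codes[b]
                    if frozenset((i, j)) in phi]
            O[a, b] = O[b, a] = float(np.mean(vals)) if vals else 0.0
    return O
```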
\subsection{Results Comparison}
Table \ref{tabel:AUC} shows the AUCs between the proposed TREFLES model and the baseline approaches on all 12 complication risk prediction tasks. The average and standard deviation (in parenthesis) over the 5-fold cross validation trials are reported.
Our approach consistently outperforms the baseline methods on all the 12 tasks.
Figure \ref{fig:means} plots the average AUCs and standard deviations across the 12 tasks for the different methods.
\vspace{3pt}
{\noindent \bf MTL versus STL:} We observe that all multi-task learning models (MTFL, MTRL, FETR and TREFLES) consistently and significantly outperform the single task learning method. In particular, our TREFLES model outperforms the single task learning method by $9.1\%$ in AUC on average. This confirms our assumption that directly modeling complications as independent of one another can lead to suboptimal models. Note that the different complications are manifestations of a common underlying condition--hyperglycemia, so their risks should be related. By simultaneously modeling multiple complications, MTL can capture and leverage the associations between complications using a shared representation. As a result, MTL models can significantly outperform STL models in risk prediction of T2DM complications.
\vspace{3pt}
{\noindent \bf TREFLES model versus baseline MTL models:} As shown in Figure \ref{fig:means}, our TREFLES model outperforms all baseline MTL models. TREFLES (AUC $0.7501 \pm 0.0091$) is better than the best baseline model FETR (AUC $0.7278 \pm 0.0094$) by $2.2\%$ in AUC. We also observe that the task relationship learning based method MTRL (AUC $0.7173 \pm 0.0072$) is more favorable than the feature relationship learning based method MTFL (AUC $0.6879 \pm 0.0128$). FETR outperforms MTRL because it simultaneously learns the relationships both between tasks and between features. TREFLES not only captures the relationships between tasks and between features, it also identifies the common contributing risk factors through the correlated coefficient shrinkage mechanism and incorporates domain knowledge through carefully constructed priors. As a result, TREFLES can significantly improve upon FETR.
\begin{figure}[t]
\centering
\centering\includegraphics[width=0.4\textwidth]{figs/Fig_means.pdf}
\vspace{-0.4cm}
\caption{Performance comparisons between the proposed TREFLES model and the baseline approaches in terms of AUC (averaged over all 12 tasks).}\label{fig:means}
\vspace{-0.5cm}
\end{figure}
\begin{figure}
\centering
\centering\includegraphics[width=0.5\textwidth]{figs/cor_mat.pdf}
\vspace{-0.7cm}
\caption{Heatmap and dendrogram of the hierarchical clustering of the correlation matrix learned by TREFLES.}\label{fig:cor_heatmap}
\vspace{-0.6cm}
\end{figure}
\subsection{Learned Risk Associations}
In this section we discuss the estimated risk association matrix $\hat{\mathbf{\Omega}}$ from our TREFLES model. The matrix $\hat{\mathbf{\Omega}}$ represents the relatedness between complications learned from the data. We first convert the covariance matrix $\hat{\mathbf{\Omega}}$ into its correlation matrix $\hat{{\bf R}}$, whose elements range from $-1$ to $1$. We observe that all the elements of the correlation matrix $\hat{{\bf R}}$ learned by TREFLES are positive. This is because all the complications are manifestations of a common underlying condition--hyperglycemia, and they are therefore positively correlated.
Then we perform a hierarchical clustering on $\hat{\mathbf{{\bf R}}}$. Figure \ref{fig:cor_heatmap} shows the heatmap and the dendrogram of the hierarchical clustering results. Darker colors indicate higher correlation. We can observe clusters between the risk associations of the 12 complications. In particular, CEL, NEU, VAS, OST, NEP and RET form one cluster while the remaining complications of KET, HHS, PYE, SEP, REN and SHK form a second cluster.
The clusters are clinically meaningful.
The first cluster of CEL, NEU, VAS, OST, NEP and RET represents the local complications caused by long standing or mismanaged diabetes, and the second cluster of KET, HHS, PYE, SEP, REN and SHK represents complications involving multiple sites or systemic complications.
Cluster 2 indicates more severe pathophysiologic manifestations of the disease than cluster 1.
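The covariance-to-correlation conversion and the hierarchical clustering can be reproduced with SciPy as follows. This is a sketch: the linkage method and the $1-\text{correlation}$ distance are our own choices, which the paper does not specify.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cov_to_corr(Omega):
    """Convert a covariance matrix into its correlation matrix."""
    d = np.sqrt(np.diag(Omega))
    return Omega / np.outer(d, d)

def cluster_tasks(Omega, n_clusters=2):
    """Hierarchically cluster tasks, using 1 - correlation as distance."""
    R = cov_to_corr(Omega)
    iu = np.triu_indices_from(R, k=1)      # condensed upper-triangle distances
    Z = linkage(1.0 - R[iu], method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```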
\subsection{Identified Risk Factors}
Table \ref{label:RiskFactors} shows the top-5 risk factors/predictors (according to their coefficients) for each diabetic complication identified by our model. Most of the risk factors identified by our model are known to be clinically associated with the corresponding diabetic complications (indicated by *). For example, the medical condition of ``Disorders of fluid, electrolyte, and acid-base balance'', which consistently appears in the top listing for all the diabetic complications, is indicative of many acid-based and electrolyte disorders that may be due to complications of T2DM and the medications diabetic patients receive.
Age is another major known risk factor for retinopathy, neuropathy, nephropathy and vascular disease including cardiovascular disease and the proposed method correctly identifies these associations. The underlying mechanism of age as a risk factor could be due to the fact that older adults tend to have long-standing diabetes, and consequently have associated microvascular and macrovascular complications.
Insulin treatment is identified as a risk factor for retinopathy, nephropathy, and cellulitis but not for the other complications.
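Extracting such a listing from a fitted coefficient matrix is straightforward; a sketch, assuming (as in Table \ref{label:RiskFactors}) that larger coefficients indicate higher risk:

```python
import numpy as np

def top_risk_factors(W, feature_names, k=5):
    """Return the top-k (feature, coefficient) pairs per task.

    W is the M x K coefficient matrix; each column is one complication.
    """
    out = []
    for t in range(W.shape[1]):
        idx = np.argsort(W[:, t])[::-1][:k]   # indices of the k largest coefficients
        out.append([(feature_names[j], float(W[j, t])) for j in idx])
    return out
```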
\begin{table*}[!t]
\addtolength{\tabcolsep}{3pt}
\caption{Top-5 risk factors (with the highest coefficients) for each complication as identified by the TREFLES model.}\label{label:RiskFactors}
\vspace{-0.3cm}
\begin{center}
\rowcolors{2}{gray!25}{white}
\footnotesize
\begin{tabular}{ |L{5.3cm} | L{5.4cm} | L{5.3cm} |}
\rowcolor{gray!50}
\hline
\multicolumn{1}{|c|}{Retinopathy (RET)} & \multicolumn{1}{c|}{Neuropathy (NEU)} & \multicolumn{1}{c|}{Nephropathy (NEP)} \\ \hline
1.79 Antidiabetic Agents, Insulin* & 4.07 {\footnotesize Hereditary and idiopathic peripheral neuropathy}
& 1.98 {\scriptsize Disorders of fluid, electrolyte, and acid-base balance}*\\
1.42 {\scriptsize Disorders of fluid, electrolyte, and acid-base balance}* & 4.03 Inflammatory and toxic neuropathy
& 1.29 Heart failure\\
1.17 Other retinal disorders* & 2.42 Chronic ulcer of skin & 1.26 Antidiabetic Agents, Insulin\\
1.12 Age* & 2.07 {\scriptsize Disorders of fluid, electrolyte, and acid-base balance}* & 1.26 {\footnotesize Nonspecific findings on examination of urine}\\
0.89 {\footnotesize Nonspecific findings on examination of urine} & 1.70 Age* & 0.94 Age*\\ \hline
\multicolumn{1}{|c|}{Vascular Disease (VAS)} & \multicolumn{1}{c|}{Cellulitis (CEL)} & \multicolumn{1}{c|}{Pyelonephritis (PYE)} \\ \hline
8.32 Chronic ulcer of skin & 3.89 Chronic ulcer of skin* & 1.65 {\scriptsize Disorders of fluid, electrolyte, and acid-base balance}\\
3.10 {\scriptsize Disorders of fluid, electrolyte, and acid-base balance}* & 2.78 {\scriptsize Disorders of fluid, electrolyte, and acid-base balance} & 1.51 {\footnotesize Other disorders of urethra and urinary tract}* \\
2.18 Hereditary and idiopathic peripheral neuropathy & 2.51 Bacterial infection in conditions classified elsewhere and of unspecified site* & 1.22 Bacterial infection in conditions classified elsewhere and of unspecified site* \\
2.16 Age* & 2.20 Antidiabetic Agents, Insulin & 1.11 Calculus of kidney and ureter* \\
1.88 Atherosclerosis* & 2.17 {\footnotesize Hereditary and idiopathic peripheral neuropathy}* & 0.91 Congenital anomalies of urinary system \\ \hline
\multicolumn{1}{|c|}{Osteomyelitis (OST)} & \multicolumn{1}{c|}{Renal (REN) } & \multicolumn{1}{c|}{Hyperosmolar state (HHS)} \\ \hline
3.73 Chronic ulcer of skin* & 8.23 {\scriptsize Disorders of fluid, electrolyte, and acid-base balance}* & 4.40 {\scriptsize Disorders of fluid, electrolyte, and acid-base balance} \\
1.84 Bacterial infection in conditions classified elsewhere and of unspecified site* & 3.04 Heart failure & 1.52 Heart failure*\\
1.56 Open wound of foot except toes alone* & 2.71 Hypertensive chronic kidney disease* & 1.34 Disorders of mineral metabolism \\
1.44 {\scriptsize Disorders of fluid, electrolyte, and acid-base balance} & 2.55 Chronic ulcer of skin & 1.25 Nondependent abuse of drugs*\\
1.37 {\scriptsize Other and unspecified protein-calorie malnutrition} & 2.25 Other diseases of lung & 1.19 Hypertensive chronic kidney disease \\
\hline
\multicolumn{1}{|c|}{Ketoacidosis (KET) } & \multicolumn{1}{c|}{Sepsis (SEP) } & \multicolumn{1}{c|}{Shock (SHK) } \\ \hline
5.68 {\scriptsize Disorders of fluid, electrolyte, and acid-base balance}* & 6.10 {\scriptsize Disorders of fluid, electrolyte, and acid-base balance}* & 6.10 {\scriptsize Disorders of fluid, electrolyte, and acid-base balance}* \\
1.23 Disorders of mineral metabolism & 3.46 Bacterial infection in conditions classified elsewhere and of unspecified site*
& 2.42 Other diseases of lung\\
1.22 {\footnotesize Nonspecific findings on examination of urine}* & 3.39 Chronic ulcer of skin* & 2.06 Heart failure* \\
1.10 Diseases of pancreas & 2.96 Other diseases of lung & 1.65 Pneumonia, organism unspecified \\
1.03 Nondependent abuse of drugs* & 2.43 Chronic kidney disease (CKD)& 1.55 {\footnotesize Certain adverse effects not elsewhere classified}*\\
\hline
\multicolumn{3}{l}{ * indicates that the medical conditions have been mentioned in the clinical literature as the risk factors for the corresponding complications.}\\
\end{tabular}
\end{center}
\vspace{-0.4cm}
\end{table*}
\section{Introduction}
Type 2 diabetes mellitus (T2DM) is a chronic disease that affects nearly half a billion people around the globe~\cite{world2016global}.
T2DM is characterized by hyperglycemia---abnormally elevated blood glucose (blood sugar) levels---and is almost always associated with a number of complications~\cite{forbes2013mechanisms}. Over time, the chronic elevation of blood glucose levels caused by T2DM leads to blood vessel damage, which in turn leads to associated complications, including kidney failure, blindness, stroke, heart attack, and in severe cases even death.
Meanwhile, the cost of diabetes care has been increasing over the past decades and the annual cost is staggering~\cite{CDC:diabetes:2017,ncd2016worldwide}.
T2DM management requires continuous medical care with multifactorial risk-reduction strategies beyond glycemic control~\cite{american2013standards}. Risk profiling of T2DM complications is critical for healthcare professionals to appropriately adapt personalized treatment plans for patients in diabetes care, improving care quality and reducing cost.
The recent abundance of the electronic health records (EHRs) and electronic medical claims data has provided an unprecedented opportunity to apply predictive analytics to improve T2DM management. In this paper, we study the risk profiling of T2DM complications from longitudinal patient records: {\it what is the probability that a patient will develop complications within a time window after the initial T2DM diagnosis?} In the literature, EHRs and claims data have been leveraged for a wide range of healthcare applications~\cite{Kenney2016early,razavian2015population,himes2009prediction,cheng2016risk,choi2017using,wang:progression:kdd2014,wang2015towards,chen2016patient,he2014mining,tabak2013using,Nori:KDD2015}. However, there are unique difficulties that arise when performing data-driven risk prediction and profiling of T2DM complications from patient medical records:
\begin{packeditemize}
\vspace{4pt}
\item The first challenge stems from the need to effectively capture correlations between multiple T2DM complications. Considering that the different complications are manifestations of a common underlying condition--hyperglycemia, modeling complications as independent of one another leads to suboptimal models.
\item Patient medical record data contain rich information about relationships among medical concepts and risk factors, pertinent to T2DM. However, developing statistical methods that can effectively exploit this information is challenging.
\item Further, when using patient medical record data for risk prediction and profiling, each patient is typically represented by very high-dimensional data while only a small subset of the predictors are actually relevant. It is essential to be able to identify the subset of predictors that are useful for predictive analysis to facilitate model transparency and interpretability.
\item Finally, it is desirable for the model to have the ability to leverage T2DM domain knowledge. Such clinical domain knowledge is often available or partially available, and incorporating it into the analysis can lead to more accurate inferences.
\vspace{4pt}
\end{packeditemize}
In this paper, we address these challenges by developing methods for simultaneously modeling multiple complications for risk profiling in diabetes care. We begin by formulating T2DM complication risk prediction as a Multi-Task learning (MTL)~\cite{Caruana:MLT1997} problem with each complication corresponding to one task. MTL jointly learns multiple tasks using a shared representation so that knowledge obtained from one task can help the other tasks.
We then develop extensions that in addition to capturing task relationships driven by the underlying disease also model dependencies between information-rich features (risk factors). Further, assuming that similar T2DM complications have similar contributing risk factors, we endow our models with the ability to perform correlated shrinkage through a novel correlated Horseshoe distribution. This allows us to identify subsets of risk factors for different complications while accounting for associations between complications.
We call the proposed method {\bf T}ask {\bf RE}lationship and {\bf F}eature relationship {\bf Le}arning with correlated {\bf S}hrinkage (TREFLES).
We formulate TREFLES in a hierarchical Bayesian framework, allowing us to easily capture domain knowledge through carefully chosen priors.
Finally, we assess our proposed innovations through extensive experiments on patient level data extracted from a large electronic medical claims database. The results show that the proposed approach consistently outperforms previous models by a significant margin and demonstrate the effectiveness of the simultaneous modeling framework over modeling each complication independently.
Furthermore, we show that the risk associations learned and the risk factors identified lead to meaningful clinical insights.
In summary, our key contributions are as follows:
\begin{packeditemize}
\item We provide a systematic study on risk profiling by simultaneously modeling of multiple complications in chronic disease care using T2DM as a case study.
\item We design a novel model, \emph {TREFLES}, that jointly captures relationships between risks, risk factors, and risk factor selection learned from the data with the ability to incorporate domain knowledge as priors.
\item We demonstrate the effectiveness of TREFLES in both predictive capability and clinical interpretability via a comprehensive study of T2DM complications using a large electronic medical claims database.
\end{packeditemize}
The proposed method is readily applicable to healthcare applications beyond diabetes care. It provides a powerful tool not only for improving predictive performance, but also for recovering clinically meaningful insights about relationships among different risks and risk factors.
\section{Simultaneous Modeling of Multiple Complications for Risk Profiling}
In this section, we first formulate the problem of diabetes complications risk profiling, and then introduce the proposed approach to simultaneously model multiple complications, addressing the aforementioned challenges.
\subsection{Diabetes Complications Risk Profiling}
The goal is to build an effective approach to predict the risk of a patient developing complication(s) within a follow-up window $\Delta t$ after the initial T2DM diagnosis. Specifically, as shown in Figure \ref{fig:prediction_framework}, for each patient $i\in\{1, \dots, N\}$ we observe a set of $M$ features (risk factors), denoted as ${\bf x}_i=[x_{i1}, x_{i2}, \dots, x_{iM}]^\top$, for an observation window up until the patient was initially diagnosed with T2DM. Let there be $K$ complications in consideration indexed by $k\in\{1, \dots, K\}$. We use $c_{ki}\in \{0,1\}$ to represent the onset event of patient $i$ developing complication $k$ in the follow-up window $\Delta t$ and use $y_{ki}$ to represent the event probability (risk). For each task $k$ we observe a set of labeled examples $\mathcal D_k=\{{\bf x}_i, c_{ki}\}_{i\in \mathcal N_k}$, where $\mathcal N_k$ is the set of patients observed for complication $k$. The set of all observed complication events is denoted as $\mathcal D= \{\mathcal D_k\}_{k=1}^K$. Given $\mathcal D$, we aim to build a predictive model $y_{ki}={\mathrm{Pr}}(c_{ki}|\Theta, {\bf x}_i)$, where $\Theta$ are the model parameters, to predict the risk that patient $i$ will develop complication $k$ during follow-up. Table \ref{table:math_notation} summarizes useful notation used in the remainder of the paper.
\begin{table}[t]
\caption{Mathematical Notations}\label{table:math_notation}
\vspace{-0.5cm}
\addtolength{\tabcolsep}{-4pt}
\begin{center}
\rowcolors{2}{gray!25}{white}
\begin{tabular}{ L{1.8cm} L{6.4cm}}
\rowcolor{gray!50}
\specialrule{.1em}{.1em}{.1em}\hline
Symbol & Description \\ \hline
$N, M, K$ & number of subjects, features, and complications\\
$i, j, k$ & index of subjects, features, and complications\\
$c_{ki}\in \{0,1\}$ & event of complication $k$ for patient $i$ where $1$ indicates an observed event and $0$ otherwise \\
$y_{ki}$ & probability (risk) of patient $i$ for complication $k$ \\
${\bf x}_i \in \mathbb{R}^M$ & vector of features for patient $i$ \\
${\bf w}_k \in \mathbb{R}^M$ & vector of coefficients for complication $k$\\
${\bf W} \in \mathbb{R}^{M\times K}$ & ${\bf W} = [{\bf w}_1, \cdots, {\bf w}_K]$ is the matrix of coefficients \\
${\bf w}^j \in \mathbb{R}^K$ & ${\bf w}^j$ is the $j^{\mathrm {th}}$ row of the coefficient matrix ${\bf W}$\\
$\mathbf{\Omega} \in \mathbb{R}^{K\times K}$ & matrix of relatedness between complications\\
$\mathbf{\Omega}_0 \in \mathbb{R}^{K\times K}$ & matrix of prior knowledge about risk association\\
$z, \mathcal{G}_z$ & index and the $z^{\mathrm {th}}$ group of features\\
$\mathbf{\Sigma}_z \in \mathbb{R}^{G_z\times G_z}$ & correlation matrix between features in group $G_z$\\
\hline\specialrule{.1em}{.1em}{.1em}
\end{tabular}
\end{center}
\vspace{-0.5cm}
\end{table}
\begin{figure*}
\centering
\begin{subfigure}[t]{0.67\textwidth}
\centering\includegraphics[width=1.05\textwidth]{figs/prediction_framework.pdf}
\caption{Multi-task learning formulation.}
\label{fig:MTL_formulation}
\end{subfigure}%
\hfill
\begin{subfigure}[t]{0.33\textwidth}
\centering\includegraphics[width=0.8\textwidth]{figs/framework3.pdf}
\caption{Coefficient matrix.}
\label{fig:coef_matrix}
\end{subfigure}%
\vspace{-0.3cm}
\caption{Proposed framework for simultaneous modeling of multiple T2DM complications. (a) Multi-task learning formulation: the predictions of multiple complications in consideration (\emph{e.g.}, retinopathy, neuropathy and vascular disease) are grouped into different tasks where each task models only one complication. Multi-task learning (MTL) is applied to capture the association between the different complications. Features are derived from patients' medical records up to the time of the initial T2DM diagnosis. Outcome is evaluated in the follow-up window. To simplify the illustration, only positive cases are shown. (b) The correlations between complication risks are revealed in the structure of the coefficient matrix ${\bf W}$, which captures both the relationships between T2DM complication risk profiling tasks and the correlation between the features.}\label{fig:prediction_framework}
\vspace{-0.3cm}
\end{figure*}
\vspace{-5pt}
\subsection{Learning Associations between Multiple Complications}
Given the features (risk factors) ${\bf x}_i=[x_{i1}, x_{i2}, \dots, x_{iM}]^\top$ observed up until the initial T2DM diagnosis for patient $i$, we model the risk of patient $i$ developing complication $k$ in the follow-up window $\Delta t$ as:
\begin{equation}\label{stl_model}
y_{ki} = {\mathrm{Pr}}(c_{ki}|\Theta, {\bf x}_i) = \sigma({\bf w}_k^\top {\bf x}_i)
\end{equation}
where ${\bf w}_k$ is the coefficient vector for complication $k$, and $\sigma (t)$ is a logistic function $\sigma (t)={\frac {1}{1+e^{-t}}}$. Then the event onset can be modeled as a draw from a Bernoulli distribution $c_{ki} \sim \mathrm{Bernoulli}(\sigma({\bf w}_k^\top {\bf x}_i))$.
To capture and leverage the association between the risks of the different T2DM complications, we formulate the complication risk prediction problem as a multi-task learning problem. As shown in Figure~\ref{fig:prediction_framework}, we group the predictions of multiple complications in consideration (e.g., retinopathy, neuropathy and vascular disease) into different learning tasks. Each task models only one complication risk via Equation (\ref{stl_model}).
Next, we apply multi-task learning to capture the association between different complications.
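Applied jointly to all $K$ tasks, Equation (\ref{stl_model}) amounts to a single vectorized computation; a sketch with our own function names:

```python
import numpy as np

def sigmoid(t):
    """Logistic function sigma(t) = 1 / (1 + exp(-t))."""
    return 1.0 / (1.0 + np.exp(-t))

def complication_risks(W, x):
    """Risks y_ki = sigma(w_k^T x_i) for all K complications of one patient.

    W is the M x K coefficient matrix; x is an M-dimensional feature vector.
    """
    return sigmoid(W.T @ x)
```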
\subsection{Learning Multi-task and Feature Relationships with Correlated Shrinkage}
We aim to capture three types of dependencies in our framework. First, the complication tasks are related since they all stem from a common underlying condition--hyperglycemia. Second, there are associations between the features since they are derived from and represent the health status of the same set of real patients. Third, similar T2DM complications have similar contributing risk factors that lead to the development of those complications.
\subsubsection{Modeling Task and Feature Associations}
~\\Let ${\bf W} = [{\bf w}_1, {\bf w}_2, \dots, {\bf w}_K] \in \mathbb{R}^{M\times K}$ denote the matrix of coefficients of all $K$ complications in consideration. To explore the latent association between the risks of T2DM complications, we impose explicit structure on the coefficient matrix ${\bf W}$.
Specifically, we assume that the coefficient matrix ${\bf W}$ follows a Matrix Variate Normal (MVN) distribution:
\begin{equation}\label{Equ:MVN}
\boldsymbol{{\bf W}} \sim \mathcal{MVN} (\mathbf{0}, \mathbf{\Sigma}, \mathbf{\Omega}).
\end{equation}
The first term $\boldsymbol{0}$ is an $M \times K$ matrix of zeros representing the mean of ${\bf W}$. The second term $\mathbf{\Sigma}$ is an $M \times M$ symmetric positive definite matrix representing the row-wise covariances of ${\bf W}$, {\it i.e.}, the correlations between the features.
The third term $\mathbf{\Omega}$ is a $K \times K$ symmetric positive definite matrix representing the column-wise covariances of ${\bf W}$, {\it i.e.}, the correlations between the tasks.
Equation (\ref{Equ:MVN}) captures both the relationships between tasks through $\mathbf{\Omega}$ and correlations among features through $\mathbf{\Sigma}$. As a result, this formulation is a generalization~\cite{zhao2017FETR} of the two most widely used MTL strategies: the task relation learning approaches~\cite{Zhang:MLT:UAI2010,Zhang:MTL:TKDD2014} and the feature relationship learning approaches~\cite{Argyriou:MLFT:nips2007,Argyriou:MLFT:MLJ2008}.
When $\mathbf{\Sigma}$ is diagonal, we recover task relationship learning, and
by setting $\mathbf{\Omega}$ to a diagonal matrix, we recover feature relationship learning.
In healthcare, features can be very fine-grained and domain knowledge is often available to group similar features into higher level representations.
In this paper, we leverage this knowledge and group the diagnosis features in the patient medical records according to the ontologies of the International Classification of Diseases (ICD)~\cite{world1978:ICD9}.
As a result, we group the features $\{x_j\}_{j=1}^M$ into $Z$ groups $\{\mathcal G_z\}_{z=1}^Z$, where $\mathcal G_z$ has $G_z$ features with $\sum_z G_z = M$.
Let ${\bf w}^j = [w_{j1}, w_{j2}, \dots, w_{jK}] \in \mathbb{R}^K$ be the $j^{\mathrm {th}}$ row of the complication coefficient matrix ${\bf W}$, i.e., the vector of $j^{\mathrm {th}}$ coefficients across the $K$ tasks. As shown in Figure \ref{fig:coef_matrix}, we partition the coefficient matrix ${\bf W}$ into $Z$ blocks, where each block ${\bf W}_z \in \mathbb{R}^{G_z \times K}$ collects the rows whose features belong to group $\mathcal G_z$, namely, ${\bf W}_z= \{{\bf w}^j\}_{j\in \mathcal G_z}$. We assume ${\bf W}_z$ follows a Matrix Variate Normal (MVN) distribution:
\begin{equation}\label{Equ:structure}
\begin{split}
{\bf W}_z & \sim \mathcal{MVN} (\mathbf 0, \mathbf{\Sigma}_z, \mathbf{\Omega})\\
\end{split}
\end{equation}
where ${\mathbf 0}$ is the mean, $\mathbf{\Sigma}_z$ captures the correlations between features, and $\mathbf{\Omega}$ captures the correlations between tasks. The zero mean indicates that \emph{a priori} the features are assumed to have no effect. As a result, Equation (\ref{Equ:structure}) captures both the relationships between T2DM complications and the relationships between features.
Then we have,
\begin{equation}
{\mathrm{Pr}}({\bf W}_z | \mathbf{0}, \mathbf{\Sigma}_z, \mathbf{\Omega}) = \frac{\exp\left( -\frac{1}{2} \mathrm{tr}\left[ \mathbf{\Omega}^{-1} {\bf W}_z^\top \mathbf{\Sigma_z}^{-1} {\bf W}_z \right] \right)}{(2\pi)^{KG_z/2} |\mathbf{\Sigma_z}|^{K/2} |\mathbf{\Omega}|^{G_z/2} }.
\end{equation}
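A draw from the matrix variate normal above can be generated from i.i.d. standard normals using matrix square roots of the two covariances. The sketch below uses small illustrative $\mathbf{\Sigma}_z$ and $\mathbf{\Omega}$ (not estimated from any data in the paper):

```python
import numpy as np

def sample_mvn_matrix(Sigma_z, Omega, rng):
    """Draw W_z ~ MVN(0, Sigma_z, Omega).

    If Z has i.i.d. N(0,1) entries, then A Z B^T with
    A A^T = Sigma_z (row covariance) and B B^T = Omega
    (column covariance) has the desired distribution.
    """
    A = np.linalg.cholesky(Sigma_z)
    B = np.linalg.cholesky(Omega)
    G, K = Sigma_z.shape[0], Omega.shape[0]
    Z = rng.standard_normal((G, K))
    return A @ Z @ B.T

rng = np.random.default_rng(1)
Sigma_z = np.array([[1.0, 0.3], [0.3, 1.0]])   # feature correlations (illustrative)
Omega = np.array([[1.0, 0.8], [0.8, 1.0]])     # task correlations (illustrative)
W_z = sample_mvn_matrix(Sigma_z, Omega, rng)   # shape (G_z, K)
```

The defining property $\mathrm{cov}(\mathrm{vec}({\bf W}_z)) = \mathbf{\Omega} \otimes \mathbf{\Sigma}_z$ can be checked empirically by averaging over many draws.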
\vspace{-5pt}
\subsubsection{Correlated Shrinkage}
EHR data is usually high-dimensional with a large number of potentially relevant features.
We are interested in identifying an informative subset of coefficients, which reflect the contributing risk factors responsible for the development of a specific complication, by shrinking irrelevant coefficients towards zero.
Sparsity-promoting priors are widely used in this context. Perhaps the most popular example is the Laplacian prior, which gives rise to the Lasso~\cite{Tibshirani:Lasso} $\ell_1$ regularizer. However, such a prior provides uniform shrinkage --- it shrinks values close to and far from zero alike. The Horseshoe prior \cite{carvalho2009horseshoe,carvalho2010horseshoe} provides an attractive alternative. It maintains an infinitely tall spike at zero, while exhibiting Cauchy-like heavy tails. As a consequence, it shrinks small values to zero more strongly than the Laplace prior, while its heavy tails allow some coefficients to escape completely un-shrunk. This property makes the Horseshoe prior more robust to large signals while strongly shrinking noise towards zero.
We can place a Horseshoe prior on $w_{jk}$ to promote sparsity on the $j^{\mathrm {th}}$ coefficient of task $k$ by setting,
\begin{equation}\label{Equ:HS}
\begin{split}
w_{jk}|\lambda_{jk}, \tau \sim \mathcal{N} (0, \lambda_{jk}^2 \tau^2),
\lambda_{jk} \sim \mathrm C^+ (0, 1), \tau \sim \mathrm C^+ (0, 1)
\end{split}
\end{equation}
where $\mathrm C^+ (0, 1)$ is a half-Cauchy distribution, $\lambda_{jk}$ is called the local shrinkage parameter, and $\tau$ is the global shrinkage parameter.
However, the vanilla Horseshoe prior fails to capture correlations among tasks.
Recall that in our MTL setting, we assume that similar T2DM complications (tasks) should have similar contributing features.
Note that ${\bf w}^j = [w_{j1}, w_{j2}, \dots, w_{jk} \dots, w_{jK}] \in \mathbb{R}^K$ is the $j^{\mathrm {th}}$ coefficient across the $K$ tasks.
Ideally, pairs of $w_{jk}, k\in\{1, \dots, K\}$ would have more similar shrinkage if their tasks ($k$) are positively correlated.
To do so, we introduce a novel \emph {correlated Horseshoe} prior. We construct the correlated Horseshoe prior by employing a Gaussian copula~\cite{Song:copulas:2009} to couple the local shrinkage parameters $\lambda_{jk}$ together via the task correlations reflected in $\mathbf{\Omega}$, while forcing the marginals of $\lambda_{jk}$ to retain their half-Cauchy distributions.
Let ${\bf u}^j=[u_{j1}, u_{j2}, \dots, u_{jK}] \in \mathbb{R}^K$ be a $K$-dimensional vector that follows a multivariate normal distribution
\begin{equation}
[u_{j1}, u_{j2}, \dots, u_{jk} \dots, u_{jK}] \sim \mathcal{MN}(\mathbf 0, \mathbf{\Omega}).
\end{equation}
Observe that $ {\bf u}^j$ preserves the correlations between tasks through $\mathbf{\Omega}$ and $u_{jk} \sim \mathcal{N}(0, \Omega_{kk})$.
Next, we need to ensure that $\lambda_{jk}$ follows the half-Cauchy distribution. We use inverse transform sampling~\cite{devroye1986sample} to guarantee half-Cauchy marginals. Inverse transform sampling is based on the result that given a uniform random variable $a \sim U(0,1)$, we can generate another random variable $b$ with a cumulative distribution function (cdf) $\mathrm F$, by setting $b = \mathrm F^{-1}(a)$, as long as $\mathrm F$ is invertible. Now, if $b\sim \mathrm C^+(0, 1)$, then $\mathrm F(b) = \frac{2}{\pi}\mathrm{tan}^{-1}(b)$ and since, $\Phi(u_{jk}) \sim U(0, 1)$, where $\Phi(u_{jk})$ is the cdf of $u_{jk}$, $\mathrm F^{-1}(\Phi(u_{jk}))$ follows a half-Cauchy distribution.
The correlated Horseshoe is thus completely specified as,
\begin{equation}\label{Equ:correlated_HS}
\begin{split}
&{\bf u}^j \sim \mathcal{MN}(\mathbf 0, \mathbf{\Omega}), \quad \Phi(u_{jk}) = \frac {1}{2}\left[1+\operatorname {erf} \left({\frac {u_{jk}}{ \sqrt {2\Omega_{kk} }}}\right)\right], \\
&\lambda_{jk} = \mathrm F^{-1}(\Phi(u_{jk})) = \mathrm{tan}\left(\frac{\pi \Phi(u_{jk})}{2}\right) \quad \forall k\in\{1, \dots, K\},
\\
&w_{jk}|\lambda_{jk}, \tau \sim \mathcal{N} (0, \lambda_{jk}^2 \tau^2), \quad
\tau \sim \mathrm C^+ (0, 1).
\end{split}
\end{equation}
We emphasize that $\lambda_{jk}$s are correlated via the latent variables ${\bf u}^j$, allowing us to preserve task correlations. At the same time their marginal half-Cauchy behavior retains the desirable properties of the Horseshoe distribution.
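The construction in Equation (\ref{Equ:correlated_HS}) can be sketched numerically: correlated Gaussians are pushed through their own cdf and then through the inverse half-Cauchy cdf, so the $\lambda_{jk}$ are dependent yet marginally half-Cauchy. The $\mathbf{\Omega}$ below is illustrative:

```python
import numpy as np
from math import erf, sqrt, tan, pi

def correlated_half_cauchy(Omega, rng):
    """Draw (lambda_j1, ..., lambda_jK) with half-Cauchy marginals
    coupled by a Gaussian copula with covariance Omega."""
    K = Omega.shape[0]
    u = rng.multivariate_normal(np.zeros(K), Omega)
    lam = np.empty(K)
    for k in range(K):
        # Phi: cdf of N(0, Omega_kk); F^{-1}(p) = tan(pi * p / 2) for C+(0,1)
        Phi = 0.5 * (1.0 + erf(u[k] / sqrt(2.0 * Omega[k, k])))
        lam[k] = tan(pi * Phi / 2.0)
    return lam

rng = np.random.default_rng(2)
Omega = np.array([[1.0, 0.9], [0.9, 1.0]])  # strongly correlated tasks (illustrative)
lam = correlated_half_cauchy(Omega, rng)     # positive, dependent local scales
```

One way to check the marginal behaviour: the median of a $\mathrm C^+(0,1)$ variable is $\tan(\pi/4)=1$, so the empirical medians of each coordinate should be close to one.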
\subsubsection{Capturing Domain Knowledge}
In order to utilize available domain knowledge, we impose an Inverse-Wishart prior distribution
on $\mathbf{\Omega}$
\begin{equation}
\mathbf{\Omega} \sim \mathcal{IW}( \delta \mathbf{\Omega}_0,\nu).
\end{equation}
The Inverse-Wishart distribution is a conjugate prior for the multivariate Gaussian distribution.
$\mathbf{\Omega}_0$ is a known symmetric positive definite matrix that contains all prior knowledge about the risk associations.
$\delta$ and $\nu$ are two tuning parameters. When domain knowledge on risk associations is available, the prior distribution can leverage this information and help improve the estimation of $\mathbf{\Omega}$. When domain knowledge about risk associations is not available, we can set $\mathbf{\Omega}_0$ to be the identity matrix $\mathbf{I}$.
\subsection{Prediction}
Note that in Equation (\ref{Equ:correlated_HS}), we have $w_{jk}|\lambda_{jk}, \tau \sim \mathcal{N} (0, \lambda_{jk}^2 \tau^2)$ and $ \lambda_{jk}$ is a function of $u_{jk}$, which is sampled from $\mathcal{MN}(\mathbf 0, \mathbf{\Omega})$. An equivalent non-centered re-parameterization is given by $\tau \lambda_{jk}\cdot w_{jk}$, where $w_{jk} \sim \mathcal{N} (0, 1)$. Here, we use this equivalent parameterization for computational convenience. Let $\mathbf{\Lambda} \in \mathbb{R}^{M\times K}$ be a matrix with element $\lambda_{jk}$, then we can reparameterize the matrix of coefficients as
$
\boldsymbol{\beta} = \tau \mathbf{\Lambda} \circ {\bf W},
$
where $\circ$ represents a pointwise (Hadamard) product between $\mathbf{\Lambda}$ and ${\bf W}$. Finally, we model the risk of complication $k$ for patient $i$ as $\mathrm{Pr}(y_{ki}=1 \mid \boldsymbol{\beta}_k, {\bf x}_i) = \sigma(\boldsymbol{\beta}_k^\top {\bf x}_i)$.
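The non-centered parameterization and the final risk computation can be sketched as follows (the dimensions, scales, and inputs are illustrative):

```python
import numpy as np

def risk(tau, Lambda, W, x):
    """Risks for all K complications under beta = tau * (Lambda o W).

    Lambda, W: (M, K) matrices; x: length-M feature vector.
    Returns sigma(beta_k^T x) for k = 1..K.
    """
    beta = tau * Lambda * W                  # Hadamard product, shape (M, K)
    return 1.0 / (1.0 + np.exp(-(beta.T @ x)))

M, K = 4, 3
rng = np.random.default_rng(3)
tau = 0.5                                    # global shrinkage (illustrative)
Lambda = np.abs(rng.standard_cauchy((M, K))) # local scales (illustrative)
W = rng.standard_normal((M, K))              # w_jk ~ N(0, 1), non-centered
x = rng.standard_normal(M)                   # illustrative patient features
p = risk(tau, Lambda, W, x)                  # K risks, each in (0, 1)
```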
\section{Related Work}
From an applications perspective, our work falls into the category of studies that apply predictive analytics and use longitudinal patient records to improve the practice of healthcare management. Building predictive models from EHRs and electronic medical claims data have attracted significant attention from both academia and industry, and have been applied to disease onset prediction~\cite{Kenney2016early,razavian2015population,himes2009prediction,cheng2016risk,choi2017using}, disease progression~\cite{wang:progression:kdd2014}, patient stratification~\cite{wang2015towards,chen2016patient}, hospital readmission prediction~\cite{he2014mining,bardhan2014predictive}, and mortality prediction~\cite{tabak2013using,Nori:KDD2015}.
More recently, there has been some work focusing on diabetes. Razavian \emph{et al.}~\cite{razavian2015population} show that claims data can be leveraged to predict T2DM onset. Oh \emph{et al.}~\cite{oh2016type} applied EHRs to capture the trajectories of T2DM patients and found that different trajectories can lead to different risk patterns. Liu \emph{et al.}~\cite{Liu:T2DM:aaai18} applied survival analysis to predict the onset of T2DM complications.
Yadav \emph{et al.}~\cite{yadav2015mining} present a comprehensive survey on EHR data mining.
Different from previous studies, this paper presents a comprehensive study to investigate the risk prediction and profiling of T2DM complications from patient medical records for diabetes care through a novel multi-task learning model.
Our work is also related to multi-task learning (MTL)~\cite{Caruana:MLT1997}, which aims to jointly learn multiple tasks using a shared representation so that knowledge obtained from one task can help other tasks. Recently, MTL models have been widely used in the healthcare domain~\cite{Zhou:KDD2011,Nori:KDD2015,Sun:KDD2015,wiens2016patient,Liu:T2DM:aaai18}. Feature relationship learning based approaches (known as MTFL)~\cite{Argyriou:MLFT:nips2007,Argyriou:MLFT:MLJ2008} and task relationship learning based approaches (known as MTRL)~\cite{Zhang:MLT:UAI2010} are the two most widely used MTL strategies~\cite{zhang:mtl_survey:2007}. MTFL assumes that the task association is revealed through a subset of features shared among tasks. MTRL assumes that the task association is revealed in the structure of the coefficient matrix. Most similar to our approach is the feature and task relationship learning (FETR) method recently proposed by Zhao \emph{et al.}~\cite{zhao2017FETR}. Similar to FETR, our proposed TREFLES model is a generalization of both MTRL and MTFL, and simultaneously learns the relationships both between tasks and between features. In healthcare analytics, associations between features are usually not ignorable. Different from FETR, TREFLES captures more fine-grained feature relationships by grouping features into groups according to domain knowledge. Furthermore, TREFLES is able to capture the correlated coefficient shrinkage among tasks through a novel correlated Horseshoe prior.
As shown in our study, TREFLES is favorable for healthcare applications where we not only obtain better prediction performances, but also derive clinically meaningful insights about the relationships among the different complications and among the different risk factors.
\section{Introduction}
\vskip -2mm
A log-log plot of normalized stock market capitalizations ranked in descending order is called the {\em capital distribution curve}.
For example, figures below display distribution of capital on the NASDAQ
market on three dates in 2014 (data source is {\tt http://www.google.com/finance\#stockscreener}).
Ranked market weights experienced relatively small fluctuations, despite significant changes in overall capitalization of the NASDAQ market during that period of time.
\vskip 0.1cm
\begin{figure}[H]
\centering
\includegraphics[width=0.65\linewidth, height=5.5cm,
trim=2.1cm 10.3cm 2cm 11.75cm,clip]{nall.pdf}
\end{figure}
\vskip -0.15cm
\begin{figure}[H]
\centering
\includegraphics[width=0.65\linewidth, height=5.5cm,
trim=2.1cm 10.25cm 2cm 11.75cm,clip]{top100.pdf}
\caption{\small NASDAQ capital distribution curves, all stocks (above) and top 100 stocks (below)}\label{fig3}
\end{figure}
One of the aims of this paper is to provide an example of a possible mechanism explaining
{\bf\em temporal stability} and
{\bf\em statistical equilibrium}
of
normalized stock capitalizations
by means of the Polya-Dirichlet Markov chain,
analogous to the Wright-Fisher model
of neutral theory of evolution.
\vsk
\prg{Classic and neutral evolution theory.}
Classic form of Darwinian theory suggests that forces of natural selection play central role in evolution of species. Theory of neutral evolution, proposed by Kimura, complements the classic theory by adding genetic dimension.
Kimura observed that discrepancies in traits,
such as
small variations in colouring of beaks or feathers in a population of birds,
occur at molecular-genetic level due to random effects in reproduction and majority of these variations are neutral with respect to fitness.
According to the neutral theory, the force of natural selection is still important, since it purges deleterious mutations. However, the majority of surviving mutations are neutral, and possibly only a few are advantageous.
Mutations and random combinations of genes in new generations
lead to fluctuations of allelic frequencies or {\em genetic drift}.
The Wright-Fisher and Moran models describe stochastic evolution of genetic frequencies
as statistical equilibrium fluctuations, modelled by
diffusion process with stationary Dirichlet distribution.
\vsk
\prg{Evolution theory and finance.} Application of evolutionary ideas in finance has a long history dating back to Malthus, Marshall and many others.
Recently Evstigneev, Hens, and Schenk-Hopp{\'e} \cite{evstigneev2008evolutionary} developed descriptive model of Evolutionary Finance, which employs principle of natural selection for modeling dynamics of asset prices and
analysis of investment strategies.
Kirman
\cite{kirman1993ants}
considered
version of the Wright-Fisher model with mutation in a context of economic interpretation of behavior of ants searching for a food source. He
observed that the proportion of ants choosing one of the possible food channels is better described by the stationary distribution of a Markov chain, rather than by a single point of equilibrium. He proposed that
the 'herding' behaviour on financial markets
is likewise better described by means of stochastic equilibrium, rather than by a single or multiple equilibria.
\vsk
\prg{Formation of market limit shape.} Standard and non-linear versions of the Polya process have been used by Arthur et al. \cite{arthur1994increasing} for illustration of
appearance of market structure.
Polya scheme has the following interesting property: proportions of balls converge to some limiting values, but these limits are random and described by the Dirichlet distribution.
\vsk
\prg{Markov lattice and reversibility.}
Polya-Dirichlet Markov chain with state space defined on
lattice of ordered integer partitions provides a framework for analysis and modeling of
stochastic equilibrium of market weights.
Transitions on the lattice of partitions are described in terms of
random \up- and \dn- operators proposed by Kerov \cite{kerov2003asymptotic}, Fulman \cite{fulman2005stein}, Borodin and Olshanski \cite{borodin2009infinite} and Petrov \cite{petrov2007two}.
Historically, Markov chains with \dn/\up-transitions in a context of Polya model were first studied by Costantini, Garibaldi, et al. in \cite{costantini2000purely}, \cite{garibaldi2004finitary}.
\vsk
\prg{Exchangeability and random fluctuations.}
Infinite \exty implies existence of \up- and \dn- random transitions, connecting adjacent levels of integer compositions. It is shown in Section \ref{stocheq} that probabilities of these transitions satisfy reversibility conditions and therefore induce a lattice of Markov chains.
Random transitions on this lattice correspond to statistical equilibrium behaviour of market weights or allelic frequencies not only for fixed, but also for varying market or population sizes.
\vsk
\prg{Neutral theory and financial markets.}
The Polya-Dirichlet Markov lattice corresponds to the discrete version of the Wright-Fisher process with mutations and provides a toy model of equilibrium markets behaviour.
\begin{itemize}
\item After initial phase of rapid expansion,
in a same way as proportions of balls converge to random limits in Polya scheme,
market weights settle down and form capital distribution curve.
\item Up- and down- changes in overall market capitalization lead to random drift of market weights fluctuating in stochastic equilibrium around limiting values, given by the capital distribution curve. The stationary distribution of market weights can be modeled by means of the \up- and \dn- Markov chain.
\item In general, increase of market capitalization enforces market structure and decrease of capitalization leads to weakening of the structure and higher volatility, which creates an opportunity for market reshaping. This mechanism is analogous to the so-called {\em nearly neutral theory of evolution} proposed by Ohta \cite{ohta1992nearly}, in which smaller populations have faster molecular-genetic evolution and adaptation rate.
\item {\clddb This theory provides an interpretation of market crises as market self-adaptation to changing economic conditions, where capitalization decrease leads to market reshaping and faster adjustment to a new econo-financial landscape.
\item Arbitrage opportunities can be considered as corresponding to deleterious mutations, eliminated by forces of natural selection.}
\end{itemize}
\iffalse
\prg{Equilibrium and arbitrage.}
Two components of neutral theory: random drift and natural selection provide the following analogies for financial modeling. Normalized stock market capitalizations - market weights correspond to allele frequencies. Equilibrium distribution of random genetic drift is analogous to stochastic equilibrium of market weights comprising capital distribution curve. Natural selection force purging deleterious mutations can be paralleled to immediate exploitation of arbitrage opportunities. Hence, evolution of market weights ... fluctuating in stochastic equilibrium.
Genetic drift is central to the theory of neutral evolution - provides following analogy:
market weights fluctuate in stochastic equilibrium
The central idea of {\em genetic drift} is that population size may increase or decrease, but {\em proportions} of genes, due to randomness in reproduction, will fluctuate around some expected values.
In theory of neutral evolution population consists of individuals and each individual has number of genes, located at different parts of chromosomes. Each gene can take values, called alleles. Allelic frequency is defined as number of specific allelic types
\begin{itemize}
\item population experiences mutations
\item genes frequencies drift under some diffusion process with stationary distribution
\item only neutral or advantageous mutations remain, since natural selection eliminates deleterious mutations
\end{itemize}
\fi
\iffalse
\prg{Neutral theory.}
Modern Since main object of study of population biology is DNA the analysis is performed on a molecular level, where
strong form of evolution theory essentially divides all genetic changes into two categories: beneficial and deleterious. It is only generations with beneficial changes which survive.
In 19XY M. Kimura noticed that pure selection implies too strong conditions on abundance of genetic varieties. He observed that in many cases there are small changes in species which are neutral with respect to the survival, for instance there can be small variations in coloring of beaks or feathers in the same population of birds.
{\em Neutral theory of evolution}, proposed by Kimura, extends classic theory and suggests that
majority of genetic changes in surviving populations are neutral and only few are beneficial. In other words, there are three types of genetic changes: beneficial, neutral and deleterious and natural selection force purges only deleterious mutations.
Fisher-Wright model has a central role in the development of neutral theory.
Under this model, if population size remains relatively large, {\em proportions of genes in the generations experience random drift}. That is in each new generation proportions of genes is random, but it is remain approximately the same as in previous generations.
There are two mathematical aspects of this model. At first, marginal/stationary distribution of allele frequencies can be modeled by Polya or Dirichlet distribution. Second, associated stochastic process is represented by reversible and therefore stationary Markov chain.
\fi
\newpage
\prg{Mechanics, economics and reversible equilibrium.}
As pointed out by Garibaldi and Scalas \cite{garibaldiscalas2010}, equilibrium modeling in economics and finance was developed under strong influence of ideas of static or classical mechanics.
Alternative approach is provided by framework of stochastic equilibrium and reversibility conditions, which have roots in Boltzmann's work on statistical mechanics. Exhaustive treatment of econophysics from the point of view of \exty is contained in the book of Garibaldi and Scalas \cite{garibaldi2010finitary}.
Excellent explanation of the framework of reversible equilibrium
is contained in the classic book of Kelly \cite{kelly2011reversibility}.
\section{Polya process with down/up transitions}
In a classic form of Polya process colored balls are placed into a box with probabilities proportional to weights of balls of existing colors. The process provides a discrete counterpart of the Dirichlet distribution, since if vector $(\al_1,...,\al_m)$ represents initial/prior weights of balls of each color, limiting values of proportions of weights in Polya scheme have Dirichlet distribution with the same vector of parameters $\Dir_m (\al_1,...,\al_m)$.
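This convergence to random limits is easy to reproduce numerically. The sketch below runs the symmetric Polya urn with $m$ colours and equal prior weights $\al$ and returns the final proportions (the parameter values are illustrative):

```python
import numpy as np

def polya_urn(m, alpha, n_steps, rng):
    """Run a symmetric Polya urn: at stage n a unit-weight ball of
    colour i is added with probability (alpha + n_i) / (m * alpha + n)."""
    counts = np.zeros(m)
    for n in range(n_steps):
        probs = (alpha + counts) / (m * alpha + n)
        i = rng.choice(m, p=probs)
        counts[i] += 1
    return counts / n_steps  # proportions; the random limit is Dirichlet

rng = np.random.default_rng(4)
weights = polya_urn(m=5, alpha=1.0, n_steps=2000, rng=rng)
```

Running the simulation repeatedly gives different limiting proportions each time, consistent with the Dirichlet limit described above.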
Modified Polya process, in which balls can also be removed illustrates important ideas of
\begin{itemize}
\item appearance and temporal stability of ranked proportions, and
\item stochastic equilibrium of these weights.
\end{itemize}
Let us consider an artificial stock market with a finite number of stocks represented by $m$ different colours. Initially the box contains $m$ 'prior' balls, one of each colour, with the same weight $\al$, such that the total weight of all balls is $\te=m\al$. In other words, all stocks start with the same initial conditions and colours (or tickers) are used only to distinguish the stocks.
Vector $\vcc n m$ represents {\em stock capitalizations} equal to number of {\em placed} balls of each color at stage $n=n_1+...+n_m$, so at initial stage this vector is $\vc n=(0,...,0)$.
All possible market configurations with overall capitalization $n$ are represented by compositions (ordered partitions) in the integer-valued simplex
\[
\CC_n=\big\{\vc n=(n_1,...,n_m) \;\big|\; n_i \in\NN_0,\;
\textstyle\sum n_i=n \big\}
\]
%
At the first step one of the prior balls is drawn with probability $\al/\te=1/m$.
The ball is placed back into the box together with a ball of the same color and unit weight. At stage $n$ probability to add a ball of color $i$
is
\[
p=\frac{\al+n_i}{\te+n},
\]
where $n_i$ denotes number of balls of color $i$ in the box.
\iffalse
\prg{Blackwell-Hoppe-Pitman scheme.}
There is another way of modeling the same process, developed by Blackwell and MacQueen[], Hoppe[] and generalized to the two-parameter case by Pitman[]. Suppose that in initial configuration there is only one {\em clay ball} of weight $\te$ and of neutral color. Sometime this ball is called {\em mutator}, since it is responsible for introduction of colors. Each time this ball is drawn a ball of new color and unit weight is placed into the box. ..............
\fi
For instance, with $m=3$ colors, say red, green and blue, probability of drawing 3 red balls, 2 green ones and 1 blue ball in this particular sequence is
\[
p('rrrggb')=\frac{\al(\al+1)(\al+2)}{\te(\te+1)(\te+2)}
\cdot \frac{\al(\al+1)}{(\te+3)(\te+4)}
\cdot \frac{\al}{\te+5}
=\frac{\al^{[3]}\al^{[2]}\al^{[1]}}{\te^{[6]}},
\]
with raising or ascending factorial power defined as\vskip -0.2cm
\[\rfl \al k=\al(\al+1)\cdots(\al+k-1)=\frac{\GG(\al+k)}{\GG(\al)}\]
By combinatorial argument probability of configuration $\vc n=(n_1,...,n_m)$ at stage $n$ is
\eqn\label{Ply}
p(\vc n)=\binom n{n_1,...,n_m} \frac{\prod_{i=1}^m \rfl \al{n_i}}{\rfl \te n}
=\frac{n!}{\rfl \te n} \frac{\al^{[n_1]}}{n_1!}\cdots\frac{\al^{[n_m]}}{n_m!}
\nqe
For each level $n$ this formula establishes probability distribution in the simplex $\CC_n$, moreover this distribution is {\em exchangeable} or symmetric, in a sense that probability of any sequence does not depend on the order of balls drawn and depends only on the number of balls of each color.
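Formula \eqref{Ply} can be checked numerically: summing $p(\vc n)$ over the whole simplex $\CC_n$ gives one, and exchangeability makes the probability invariant under permuting the counts. A small sketch with illustrative $m$, $\al$, $n$:

```python
from math import gamma, factorial
from itertools import product

def rising(a, k):
    # ascending factorial a^[k] = a (a+1) ... (a+k-1) = Gamma(a+k)/Gamma(a)
    return gamma(a + k) / gamma(a)

def polya_prob(ns, alpha):
    # probability of composition (n_1, ..., n_m) under the symmetric
    # Polya urn with prior weight alpha per colour
    n, m = sum(ns), len(ns)
    theta = m * alpha
    p = factorial(n) / rising(theta, n)
    for ni in ns:
        p *= rising(alpha, ni) / factorial(ni)
    return p

m, alpha, n = 3, 1.5, 6
total = sum(polya_prob(ns, alpha)
            for ns in product(range(n + 1), repeat=m) if sum(ns) == n)
# total equals 1 up to floating-point error
```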
Using asymptotic $\fm \al n \asymp \frac {n^{\al-1}}{\GG(\al)}$
\[
p(\vc n)\asymp \frac{\GG(\te)}{n^{\te-1}}
\frac {n_1^{\al-1}}{\GG(\al)} \dots \frac {n_m^{\al-1}}{\GG(\al)}
=\frac{\GG(\te)}{(\GG(\al))^m} \prod_{i=1}^m \Big(\frac{n_i}n\Big)^{\al-1}
\cdot \Big(\frac1n\Big)^{m-1}
\]
which corresponds to the density of the symmetric Dirichlet distribution
\[
f_{\vc \al}(\vx)\dif \vx=\frac{\GG(\te)}{(\GG(\al))^m}\; x_1^{\al-1} \cdots x_m^{\al-1} \dif x_1 \cdots \dif x_{m-1}
\]
\iffalse
\\
Secondly, distribution \eqref{Ply} appears as
integral over de Finetti's measure (or as a
moment of the Dirichlet distribution)
\[
p(\vc n)=\binom n{n_1,...,n_m} \int_{\Delta_m}
x_1^{n_1}\dots x_m^{n_m}
\cdot f_{\vc \al}(\vx)
\dif \vx,
\]
\fi
\iffalse
which can be considered as umbral linear functional acting on space of polynomials as
\eqn\label{q-map}
\mathcal{Q}:\quad x_1^{n_1}\dots x_m^{n_m} \longmapsto
\frac{n!}{\rfl \te n} \frac{\al^{[n_1]}}{n_1!}\cdots\frac{\al^{[n_m]}}{n_m!}
\nqe
\fi
\newpage
\prg{\up- and \dn- transitions.}
In a standard Polya model the number of balls increases
at each stage, such that in configuration $\vcc nm \in \CC_n$ component $i$ increases by one with conditional probability
\eqn\label{pr-up}
{\mathsf u}_{i,n}=\frac{\al+n_i}{\te+n}
\nqe
This can be considered as random {\up}-transitions of configuration from simplex $\CC_n$ to $\CC_{n+1}$. In financial terms, {\up}-moves correspond to investment into particular stock and increase of capitalization. Clearly, stochastic dynamics of these transitions is of preferential attachment type, since conditional probability \eqref{pr-up} of stock growth is proportional to its capitalization $n_i$. As shown below \up-moves preserve probability distributions \eqref{Ply} on simplexes $\CC_n$.
It turns out that \up-moves also implicitly define \dn-transitions, which randomly move a configuration backwards from simplex $\CC_n$ to $\CC_{n-1}$. These \dn-transitions also preserve the probability structure on simplexes. In terms of Polya's model a \dn-move corresponds to
removing a ball of some color at random and financially it has interpretation of decrease of capitalization of one of the stocks by one unit.
The structure of the exchangeable probability distribution plays an important role in connecting the \up- and \dn- transitions, which are dual to each other. For the sake of illustration let us consider the case of two stocks, labelled by two colours. The structure of connections of probabilities between simplexes $\CC_0,\CC_1,\CC_2,...$ with $m=2$ is shown below, where $p_{k,n-k}$ denotes the probability of the configuration with $k$ balls of the first color and $n-k$ balls of the second color.
\[\xymatrixcolsep{3pc}\xymatrixrowsep{0.01pc}
\xymatrix{
& & p_{2,0} \\
& p_{1,0}\ar@{-}[ur]\ar@{-}[dr] & \\
p_{0,0}\ar@{-}[ur]\ar@{-}[dr]& & p_{1,1} \\
& p_{0,1}\ar@{-}[ur]\ar@{-}[dr] & \\
& & p_{0,2}
}
\]
Let us consider probability flows between levels $\CC_{n-1}$ and $\CC_n$ for $n_1+n_2=n$
\vskip -0.2cm
\[\xymatrixcolsep{4pc} \xymatrixrowsep{0.45pc}
\xymatrix{
& {p_{n_1+1,n_2-1}} \\
p_{n_1,n_2-1}\ar[ur]\ar@[blue][dr] |{\clb \frac{\al+n_2-1}{\te+n-1}}&\\
& {\clb p_{n_1,n_2}} \\
p_{n_1-1,n_2}\ar[dr]\ar@[blue][ur]|{\clb \frac{\al+n_1-1}{\te+n-1}} &\\
& p_{n_1-1,n_2+1}
}
\]
For instance, configuration $(n_1,n_2-1)$
can migrate to state $(n_1+1,n_2-1)$ with probability $\frac{\al+n_1}{\te+n-1}$
or go to state $(n_1,n_2)$ with probability $\frac{\al+n_2-1}{\te+n-1}$, thus contribution or {\em forward probability flow} to configuration $(n_1,n_2)$ is
\[
\mathsf{pf}_{(n_1,n_2-1) \to (n_1,n_2)}=p_{n_1,n_2-1}\frac{\al+n_2-1}{\te+n-1}
\]
Similarly it can be shown that probability flow from state $(n_1-1,n_2)$ to state $(n_1,n_2)$ is
\[
\mathsf{pf}_{(n_1-1,n_2) \to (n_1,n_2)}=p_{n_1-1,n_2}\frac{\al+n_1-1}{\te+n-1}
\]
It is easy to see that \up-moves preserve probability measure \eqref{Ply} on simplexes. For instance, total flow of probabilities into state $(n_1,n_2)$ is
\[\textstyle
\binom{n-1}{n_1}\frac{\al^{[n_1]}\rfl \al{n_2-1}}{\rfl \te {n-1}}
\cdot \frac{\al+n_2-1}{\te+n-1}+
\binom{n-1}{n_1-1}\frac{\al^{[n_1-1]}\rfl \al{n_2}}{\rfl \te {n-1}}
\cdot \frac{\al+n_1-1}{\te+n-1}
=\big(\binom{n-1}{n_1}+\binom{n-1}{n_1-1}\big)
\frac{\rfl \al {n_1} \rfl \al{n_2}}{\rfl \te {n}}
=\binom n{n_1,n_2} \frac{\rfl \al {n_1} \rfl \al{n_2}}{\rfl \te {n}}
\]
In other words, fraction
$
\binom{n-1}{n_1,n_2-1}\Big/\binom{n}{n_1,n_2}=\frac {n_2}n
$
of probability $p_{n_1,n_2}$ comes from configuration $(n_1,n_2-1)$ and fraction $\binom{n-1}{n_1-1,n_2}/\binom{n}{n_1,n_2}=\frac {n_1}n$ comes from configuration $(n_1-1,n_2)$. This means that given probability of state $p_{n_1,n_2}$ {\em backward probability flow} can be interpreted as random removal of one of the balls with $p=n_i/n$:
\vskip -0.2cm
\[\xymatrixcolsep{4pc} \xymatrixrowsep{0.45pc}
\xymatrix{
{}_{(n_1,n_2-1)}&\\
& {\clb {}_{(n_1,n_2)}}\ar@{.>}[ul]|{\frac{n_2}n}\ar@{.>}[dl]|{\frac{n_1}n} \\
{}_{(n_1-1,n_2)}\\
&
}
\]
In general, random {\sc down}-transition moves configuration $\vc n=(n_1,...,n_m)$ from simplex $\CC_n$ to $\CC_{n-1}$, and in terms of Polya model it removes a ball of color $i$ with probability
\eqn\label{pr-dn}
\mathsf d_{i,n}=\frac{n_i}n
\nqe
It is straightforward to show that \dn-moves also preserve probability measure \eqref{Ply}.
In terms of artificial market model \up- and \dn- transitions correspond to increase and decrease of capitalization (buying/selling) of a stock by one unit of money.
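The preservation of measure \eqref{Ply} under both transitions can be verified directly for small cases: applying one \dn-step followed by one \up-step to the exchangeable distribution on $\CC_n$ returns the same distribution. The sketch below checks this for illustrative $m$, $\al$, $n$ (re-defining the Polya probability so the snippet is self-contained):

```python
from math import gamma, factorial
from itertools import product

def rising(a, k):
    # ascending factorial a^[k] = Gamma(a+k)/Gamma(a)
    return gamma(a + k) / gamma(a)

def polya_prob(ns, alpha):
    # exchangeable Polya probability of composition (n_1, ..., n_m)
    n, m = sum(ns), len(ns)
    theta = m * alpha
    p = factorial(n) / rising(theta, n)
    for ni in ns:
        p *= rising(alpha, ni) / factorial(ni)
    return p

def down_up_step(dist, alpha):
    """One down-transition (remove a ball of colour i, prob n_i/n)
    followed by one up-transition (add a ball of colour j,
    prob (alpha + n_j) / (theta + n - 1))."""
    new = {}
    for ns, p in dist.items():
        n, m = sum(ns), len(ns)
        theta = m * alpha
        for i in range(m):            # remove from colour i
            if ns[i] == 0:
                continue
            mid = list(ns); mid[i] -= 1
            p_dn = p * ns[i] / n
            for j in range(m):        # add to colour j
                up = list(mid); up[j] += 1
                p_up = p_dn * (alpha + mid[j]) / (theta + n - 1)
                new[tuple(up)] = new.get(tuple(up), 0.0) + p_up
    return new

m, alpha, n = 3, 2.0, 5
dist = {ns: polya_prob(ns, alpha)
        for ns in product(range(n + 1), repeat=m) if sum(ns) == n}
after = down_up_step(dist, alpha)
# `after` coincides with `dist`: the Polya measure is stationary
```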
\section{Market shape formation}
Classic Polya scheme (with \up-transitions) provides a model for simulation of market growth. It turns out that in this model, even if initial conditions are the same for all stocks, over a certain period of time, or after reaching a certain market capitalization, a power-law shaped market structure begins to appear.
Figures below illustrate shape formation of the capital distribution curve on the artificial market. Left side of figures contains realization of market weights, modeled by \up-transitions and right side displays log-log plot of ranked weights at the terminal stage.
All stocks have the same initial conditions modeled by equal prior $\al$.
After rapid period of 'Big Bang'-like chaotic expansion,
market weights
begin to settle down and after step 1500 do not change significantly.
\iftrue
\begin{figure}[H]\label{f-3}
\centerline{
\includegraphics[width=17.cm, height=6.0cm,
trim=0.6cm 8.5cm 0.6cm 9cm,clip]
{p520.pdf}}
\caption{\small Dynamics of market weights and capital distribution, 20 stocks $(\al=5,\te=100)$}
\end{figure}
The next figure illustrates that for smaller values of the parameter $\al$ there is greater variation of market weights. The particular choice $\al=1$ corresponds to the uniform stick-breaking niche model.
\begin{figure}[H]\label{f-4}
\centerline{
\includegraphics[width=17.cm, height=6.0cm,
trim=0.6cm 8.5cm 0.6cm 9cm,clip]
{p120.pdf}}
\caption{\small Dynamics of market weights and capital distribution, 20 stocks $(\al=1,\te=20)$}
\end{figure}
\fi
\newpage
\section{Statistical equilibrium and Polya-Dirichlet Markov process}\label{stocheq}
One of the central ideas of neutral evolution theory is that proportions of genes in a population (allelic frequencies) experience {\em random drift}; in other words, they fluctuate around some values. It is important that the population size may increase or decrease (in reasonable amounts), while allelic frequencies remain approximately the same.
The same phenomenon is observed on financial markets: ranked equity capitalizations, comprising the capital distribution curve, display remarkable temporal stability, despite significant fluctuations of the overall capitalization.
The Wright-Fisher model (WF) and its generalizations provide a framework for modeling the evolution of proportions of genes fluctuating in stochastic equilibrium.
The finite form of the $m$-allele WF-model is based on a Markov chain whose state space consists of ordered integer partitions (compositions) with $m$ elements.
Since the stationary distribution of proportions in the limiting case of the WF-model with mutation is given by the Dirichlet distribution, for consistency
it is assumed that the finite version of the WF-model is approximated by the Polya distribution \eqref{Ply}.
In the Polya model with \dn/\up-transitions an ordered partition
$\vcc nm$
may represent:
\begin{itemize}
\item vector of stock capitalizations with overall market capitalization $n$,
\item number of genes of each specific type in a population of size $n$.
\end{itemize}
As above, it is assumed that the vector of priors (or, correspondingly, mutation rates) is a symmetric vector with all components equal to $\al$, and $\te=m\al$ denotes the sum of all parameters.
A stationary distribution with transitions in the integer simplex $\CC_n$ can be constructed by combining \dn- and \up-transitions, as proposed in \cite{borodin2009infinite}, \cite{fulman2005stein}, \cite{petrov2007two} and \cite{garibaldi2004finitary}.
For instance, a transition where one item moves from category $i$ to category $j$ in the \dn/\up-scheme is modeled in two steps:
\[
(..,n_i,..,n_j,..)
\longmapsto (..,n_i-1,..,n_j,..)
\longmapsto (..,n_i-1,..,n_j+1,..)
\]
with probability $q_{i \mapsto j}=\frac{n_i}{n}\frac{n_j+\al}{n+\te-1}$ for $i\neq j$, and with $q_{i \mapsto i}=\frac{n_i}{n}\frac{n_i+\al-1}{n+\te-1}$ as the probability
of return.
\vsk
\prg{Reversibility and stochastic equilibrium.}
If $q(\vc a \mapsto \vc b)$ denotes the conditional probability of migration from state $\vc a$ to state $\vc b$, then
the reversibility (detailed balance) conditions require, for all states
in the state space $\mathcal S$,
\eqn\label{d-balance}
p(\vc a)q(\vc a \mapsto \vc b)=p(\vc b)q(\vc b \mapsto \vc a),
\qquad \forall \vc a, \vc b \in \mathcal S
\nqe
Besides time-reversibility, there are some other interesting interpretations of detailed balance conditions.
\begin{itemize}
\item In mechanical systems, if each process is matched by its reverse process, then the system is in equilibrium: equilibrium takes place on the micro-level, while on the macro-level the system may stand still.
\item In terms of probability flows, essentially \eqref{d-balance} states that unconditional probability flow from $\vc a$ to $\vc b$ must be equal to probability flow from state $\vc b$ to state $\vc a$.
\end{itemize}
It is easy to show that both the \dn/\up- and \up/\dn-schemes satisfy the detailed balance conditions and therefore induce a reversible, and hence stationary, Markov chain on the state space of ordered partitions $\CC_n$ for each $n$.
More detailed analysis reveals that the reversibility conditions connect the probability distributions on all adjacent simplexes $\CC_{n-1}$ and $\CC_n$:
\vskip -0.4cm
\[\xymatrixcolsep{4pc} \xymatrixrowsep{0.35pc}
\xymatrix{
& {(n_1+1,n_2-1)}\ar@{.>}[dl]|{} \\
(n_1,n_2-1)\ar@<0.75ex>[ur]|{} \ar@<0.75ex>[dr]|{} &\\
& {\;\;(\;n_1\;,\;n_2\;)}\ar@{.>}[dl]|{}\ar@{.>}[ul]|{} \\
(n_1-1,n_2)\ar@<0.75ex>[dr]|{}\ar@<0.75ex>[ur]|{} &\\
& (n_1-1,n_2+1)\ar@{.>}[ul]|{}
}
\]
For instance, probability flows between compositions $(n_1-1, n_2)$ and $(n_1, n_2)$ satisfy
\[
\frac{(n-1)!}{\rfl \te {n-1}} \frac{\rfl \al{n_1-1}\rfl \al{n_2}}{(n_1-1)!n_2!}
\cdot {\clb \frac{n_1-1+\al}{n+\te-1}}=
\frac{n!}{\rfl \te n} \frac{\rfl \al{n_1}\rfl \al{n_2}}{n_1!n_2!}
\cdot {\clb \frac{n_1}{n}},
\]
which also clarifies the role of the \up- and \dn-operators.
In other words, the probability distributions \eqref{Ply} on the simplexes are pairwise connected. Since for sufficiently large $n$ the Polya distribution approximates the Dirichlet distribution, sequences of \up- and \dn-operators between simplexes approximate an equilibrium process with stationary Dirichlet distribution even with changing values of $n$.
This provides a framework for modeling the evolution of market weights for fixed and variable values of $n$.
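These identities can be checked numerically. The sketch below (added for illustration; helper names are ours) computes the gap between the two sides of the detailed balance condition \eqref{d-balance} for the \dn/\up-chain under the Polya measure \eqref{Ply}, and of the cross-simplex flow identity above:

```python
from math import factorial

def rising(x, k):
    """Rising factorial x^[k] = x (x+1) ... (x+k-1)."""
    r = 1.0
    for t in range(k):
        r *= x + t
    return r

def polya(counts, alpha):
    """Polya probability (eq. Ply) of an ordered partition (n_1, ..., n_m)."""
    m, n = len(counts), sum(counts)
    p = factorial(n) / rising(m * alpha, n)
    for c in counts:
        p *= rising(alpha, c) / factorial(c)
    return p

def q_move(counts, i, j, alpha):
    """Down/up transition: remove a ball of color i, then add one of color j."""
    m, n = len(counts), sum(counts)
    d = 1 if i == j else 0
    return (counts[i] / n) * (counts[j] - d + alpha) / (n + m * alpha - 1)

def balance_gap(counts, i, j, alpha):
    """p(a) q(a -> b) - p(b) q(b -> a) for b = a - e_i + e_j;
    vanishes under detailed balance (eq. d-balance)."""
    b = list(counts)
    b[i] -= 1
    b[j] += 1
    return polya(counts, alpha) * q_move(counts, i, j, alpha) \
        - polya(b, alpha) * q_move(b, j, i, alpha)

def flow_gap(n1, n2, alpha):
    """Cross-simplex flow between (n1 - 1, n2) and (n1, n2): Polya mass on
    C_{n-1} times the up-probability minus Polya mass on C_n times n1 / n."""
    up = polya([n1 - 1, n2], alpha) * (n1 - 1 + alpha) / (n1 + n2 + 2 * alpha - 1)
    return up - polya([n1, n2], alpha) * n1 / (n1 + n2)
```

For any admissible state, e.g. `balance_gap([3, 5, 2], 0, 1, 1.5)` and `flow_gap(4, 6, 1.5)`, both gaps vanish up to floating-point error, and the transition probabilities `q_move` out of a state sum to one.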
It turns out that the behavior of the equilibrium process depends on the population size.
When the overall market capitalization is sufficiently large,
the market structure becomes reinforced.
In contrast, a decrease of market capitalization leads to higher variation of market weights.
For instance, the figures below illustrate two phases of the evolution of market weights. During the first phase, for $t\le500$ (Figure 5) or $t\le1500$ (Figure 6), the market experiences a period of growth and formation of limiting weights, representing the capital distribution curve, displayed on the right subplots with blue circles. Once the market value reaches the threshold value of $n=500$ or $n=1500$, market capitalization begins fluctuating in \dn/\up-transitions, which generates stochastic evolution of market weights. The corresponding capital distribution curve at the terminal period is shown with red circles. As can be seen from the figures, stocks with larger capitalizations have higher trading activity.
\begin{figure}[H]
\centerline{
\includegraphics[width=17.25cm, height=6.cm]{c500.png}}
\caption{\small Fluctuations at level $n=500$ begin at $t=500$, $(\al=1,\te=20)$, 20 stocks}
\end{figure}
\begin{figure}[H]
\centerline{
\includegraphics[width=17.25cm, height=6.cm]{c1500.png}}
\caption{\small Fluctuations at level $n=1500$ begin at $t=1500$, $(\al=1, \te=20)$, 20 stocks
}
\end{figure}
Such behavior of weights, dependent on the level $n$, is consistent with the so-called 'nearly neutral theory' developed by Ohta \cite{ohta1992nearly}, which proposes that smaller populations experience a faster rate of genetic evolution.
In financial terms, it suggests that market crises can be 'explained' as the market's self-adaptation to changing economic conditions, where reduced market capitalization allows faster reshaping and adjustment of the capitalization structure to a new economic and financial landscape.
\iffalse
\newpage
Now it is possible to define reversible and therefore stationary Markov chain in simplexes $\CC_n$ with stationary distributions given by \eqref{Plya}.
If $q(\vc a \mapsto \vc b)$ denotes (conditional) probability of migration from configuration $\vc a$ to configuration $\vc b$, then reversibility (detailed balance) conditions are
\[
p(\vc a)q(\vc a \mapsto \vc b)=p(\vc b)q(\vc b \mapsto \vc a), \qquad \forall \vc a, \vc b \in \CC_n
\]
The most basic type of transitions is when distance between states $d(\vc a,\vc b)=\summ i1m |a_i-b_i|\le1$, that is only one item changes its place or configuration returns to the original state. For other transitions with $d(\vc a,\vc b)\ge2$ conditional probability of transition can be defined as $q(\vc a \mapsto \vc b)=0$.
change of color or withdraw and then place another
Let $q_{[i\to j]}$ denote probability that one item migrates from category $i$ to some other category $j\neq i$ in configuration $\vcc nm$.
In other words $q_{i\to j}$ is probability that one of the balls of color $i$ changes ...
\[
(..,n_i,..n_j,..) \mapsto (..,n_i-1,..n_j+1,..)
\]
We consider case when configuration returns (?) separately.
Then reversibility condition simplifies to
\[
\frac{\al^{[n_i]}}{n_i!}\frac{\al^{[n_j]}}{n_j!} q_{i\to j}
=
\frac{\al^{[n_i-1]}}{(n_i-1)!}\frac{\al^{[n_j+1]}}{(n_j+1)!} q_{j\to i}
\]
or
\eqn\label{reversb-ratio}
\frac{q_{ij}}{q_{ji}}=\frac{n_i (\al+n_j)}{(n_j+1)(\al+n_i-1)}
\nqe
This suggests that transition probabilities can be expressed as
\eqn\label{ratio-form}
q_{ij}=\frac{n_i (\al+n_j)}{Q_n},
\nqe
where $Q_n$ is some function which does not depend on composition structure $n_i$ but possibly depends on $n$, in which case \eqref{reversb-ratio} holds.
It is easy to see that {\sc down/up} transitions have required form, indeed
\[
q_{ij}=\frac{n_i }{n}\frac{(\al+n_j)}{\te+n-1}, \quad \text{and} \quad
q_{ii}=\summ j1m \frac{n_j }{n}\frac{(\al+n_j-1)}{\te+n-1}
\]
Thus {\sc down/up} Markov chain is reversible which stationary measure coincides with Polya's distribution \eqref{Plya}.
In terms of the simple market model this means that if market capitalization remains at the same level $n$ then selling and buying stocks according to probabilities of {\sc down} and {\sc up} transitions will generate stochastic equilibrium distribution in the simplex of configurations $\CC_n$. In fact this can extended to more general setting of transitions between simplexes, since for sufficiently large $n$ Polya distributions of {\em proportions} on these simplexes are close to each other and can be approximated by the Dirichlet distribution. In other words, since there is a limit of distributions of {\em proportions} $(n_1/n,...,n_m/n)$ for $n\to\infty$, as suggested by Borodin ana Olshanski[], the corresponding equilibrium distribution can be approximated on simplex $\CC_n$ for sufficiently large $n$.
\fi
\iffalse
In fact, there is a class of possible transition probabilities which have form \eqref{ratio-form}.
For each configuration/state $\vcc nm$ sum of probabilities
must satisfy
\[
\summ i1m \sum_{j=1, j\neq i}^m \frac{n_i (\al+n_j)}{Q_n}+\frac{r(\vc n)}{Q_n}=1,
\]
where $r(\vc n)/Q_n$ denotes probability that system returns to state $\vc n$. After some manipulation, using $\te=m\al$
\[
\summ i1m n_i (\te-\al+n-n_i)+r(\vc n)
=n^2+n(\te-\al)-\summ i1m n_i^2+r(\vc n)
=Q_n
\]
Hence any choice of
\[
r(\vc n)=\summ i1m n_i^2+r_2 n^2+r_1 n+r_0
\]
is satisfactory, as long as $0\le r(\vc n)/Q_n \le 1$, with
\[
Q_n=n^2(1+r_2)+n(\te-\al+r_1)+r_0
\]
In particular choice of $r_2=0, r_1=\al-1,r_0=0$ corresponds to the {\sc down/up} chain.
\fi
\iffalse
\newpage
\prg{Up and Down moves.} Sequential placement of balls can be considered as random {\sc up}-walk on lattice of compositions. For the sake of illustration let us assume that there only $m=2$ stocks/colors. If at stage $n$ there are $k$ balls of the first color and $n-k$ balls of the second color, then partition $(k,n-k)$ can migrate to the following states (partitions of $n+1$) with corresponding probabilities:
\[\xymatrixcolsep{5pc}
\xymatrix{
& (k+1,n-k) \\
(k,n-k)\ar[ur]|{\clb \frac{\al+k}{\te+n}}\ar[r]|{\clb \frac{\al+n-k}{\te+n}}&
p(k,n-k+1)=
}\]
\iftrue
It is easy to see that these {\sc up} $(n\to n+1)$ transitions preserve probability measure, for instance
\[
p(k+1,n-k)=p(k,n-k) \frac{\al+k}{\te+n}+p(k+1,n-k-1)\frac{\al+n-k-1}{\te+n}
=\frac{n!}{\rfl \te {n+1}} \frac{\al^{[k+1]}}{k!}\frac{\al^{[n-k]}}{(n-k-1)!}
\bigg(\frac 1{n-k}+\frac 1{k+1}\bigg)
\]
\fi
In financial terms, {\sc up} transitions correspond to investment in shares and increase of capitalizations. Clearly, this probabilistic dynamics is of preferential attachment type, which in general is not unrealistic.
In order to model stock selling {\sc down}-transition should be defined. At first it might seem possible to endow the same probabilities as in {\sc up}-step, however it would break probability structure on compositions. The proper way of assignment of {\sc down}-probabilities is given by consistency conditions. That is, probability of state $(k,n-k)$ is induced by random removal of one of the balls of the first or the second color:
\[
\xymatrix{
& (k+1,n-k) \ar[dl]|{\clb \frac{k+1}{n+1}} \\
(k,n-k)&\\
& (k,n-k+1) \ar[ul]|{\clb \frac{n-k+1}{n+1}}
}\]
Indeed
\[
{\cldb \frac{k+1}{n+1}} \frac{(n+1)!}{(k+1)!(n-k)!}
\frac{\al^{[k+1]} \al^{[n-k]}}{\rfl \te {n+1}}
+{\cldb \frac{n-k+1}{n+1}} \frac{(n+1)!}{k!(n-k+1)!}
\frac{\al^{[k]}\al^{[n-k+1]}}{\rfl \te {n+1}}
=\frac{n!}{k!(n-k)!} \frac{\al^{[k]}\al^{[n-k]}}{\rfl \te n}
\]
In general, given state $\vc n=(n_1,...,n_m)$ probability to remove a ball of color $i$ is \[p_{i,n}^{\mathsf D}=\frac{n_i}n\]
\prg{Two phases.} In general, market capitalizations experience growth over time, which means that on average there are more buy than sell transactions.
\prg{connexions.}
Fisher-Wright with mutation
There are two problems with application of this approach. First of all, number of stocks in this framework is fixed in advance and in order to add or remove a stock from the listing the model parameters must be reconsidered.
Second, although numerical sampling from the Dirichlet distribution produce shapes of ranked weights which are similar to those on financial markets, the fit is only approximate ?....
It turns out, that the framework of the \tpPD addresses both of these issues.
\fi
\newpage
\bibliographystyle{abbrv}
\section{Introduction}
In the Deep Learning community, finding the best Neural Network architecture for a given task is a key problem that is mainly addressed \textit{by hand} or using validation techniques. For instance, in computer vision, this has led to particularly well-known models like GoogleNet \cite{DBLP:journals/corr/SzegedyLJSRAEVR14} or ResNet \cite{DBLP:journals/corr/HeZRS15}. More recently, there has been a surge of interest in developing techniques able to automatically discover efficient neural network architectures. Different algorithms have been proposed, including evolutionary methods \cite{DBLP:conf/gecco/StanleyM02a, DBLP:journals/corr/MiikkulainenLMR17, DBLP:journals/corr/RealMSSSLK17} or reinforcement learning-based approaches \cite{DBLP:journals/corr/ZophL16}. But in all cases, this selection is usually based solely on the final predictive performance of the model, such as accuracy.
When facing real-world problems, this predictive performance is not the only measure that matters. Indeed, learning a very good predictive model with the help of a cluster of GPUs might lead to a neural network that is incompatible with low-resource mobile devices. Another example concerns distributed models in which one part of the computation is made \textit{in the cloud} and the other part is made \textit{on the device}. In such situations, an efficient architecture would have to predict accurately while minimizing the number of messages exchanged between the cloud and the device. One important research direction is thus to propose models that can learn to take into account the inference cost in addition to the quality of the prediction.
We formulate this issue as a problem of automatically learning a neural network architecture under budget constraints. To tackle this problem, we propose a \textit{budgeted learning} approach that integrates a maximum cost directly in the learning objective function. The main originality of our approach with respect to the state-of-the-art is the fact that it can be used with any type of cost, existing methods being usually specific to particular constraints like inference speed or memory consumption -- see Section \ref{sec_related_work} for a review of the state-of-the-art. In our case, we investigate the ability of our method to deal with three different costs: (i) the \textit{computation cost}, reflecting the inference speed of the resulting model, (ii) the \textit{memory consumption cost}, which measures the final size of the model, and (iii) the \textit{distributed computation cost}, which measures the inference speed when computations are distributed over multiple machines or processors.
\begin{figure*}[ht]
\begin{center}
\begin{subfigure}[t]{.49\textwidth}
\centering
\includegraphics[width=0.6\linewidth]{ResNets_Fabric.png}
\caption{\textbf{ResNet Fabric}: The ResNet Fabric is a super network that includes the ResNet model as a particular sub-graph. Each row corresponds to a particular size and number of feature maps. Each edge represents a simple \textit{building block} (as described in \cite{DBLP:journals/corr/HeZRS15}), i.e. two stacked convolution layers + a shortcut connection. We use projection shortcuts (with 1x1 convolutions) for all connections going across different feature map sizes (green edges). Note that here, the subgraph corresponding to the bold edges is a ResNet-20. By increasing the width of the ResNet Fabric, we can include different variants of ResNets (from ResNet-20 with width 3 up to ResNet-110 with a width of 18). }
\label{fig1a}
\end{subfigure}
\hfill
\begin{subfigure}[t]{.49\textwidth}
\centering
\includegraphics[width=0.85\linewidth]{CNF_8x6.png}
\caption{\textbf{Convolutional Neural Fabrics} \cite{DBLP:journals/corr/SaxenaV16}: Each row corresponds to a particular resolution of feature maps. The number of features map is constant across the whole network. Each edge represents a convolution layer. The color of an edge represents the difference between input and output maps resolutions. Blue edges keep the same resolution, green edges decrease the resolution (stride $>$ 1) and red edges increase the resolution (upsampling). Feature maps are aggregated (by addition) at each node before being sent to the next layers. }
\label{fig1b}
\end{subfigure}
\end{center}
\caption{This figure illustrates the two Super Networks on top of which cost-constrained architectures will be discovered. The ResNet Fabric is a generalization of ResNets\cite{DBLP:journals/corr/HeZRS15}, while CNF has been proposed in \cite{DBLP:journals/corr/SaxenaV16}. In both cases, our objective is to discover architectures that are efficient in both prediction quality and cost, by sampling edges over these S-networks.}
\label{fig:SuperNetworks}
\label{SuperNetworks_imgs}
\end{figure*}
Our model called \textit{Budgeted Super Network} (BSN) is based on the following principles: (i) the user provides a (big) \textit{Super Network} (see Section \ref{secsn}) defining a large set of possible final network architectures as well as a maximum authorized cost. (ii) Since finding the best architecture that satisfies the cost constraint is an intractable combinatorial problem (Section \ref{s3}), we relax this optimization problem and propose a stochastic model (called \textit{Stochastic Super Networks} -- Section \ref{s4}) that can be optimized using policy gradient-inspired methods (Section \ref{s5}). We show that the optimal solution of this stochastic problem corresponds to the optimal constrained network architecture (Proposition \ref{prop1}) validating our approach. At last, we evaluate this model on various computer vision tasks. We particularly show that, by taking inspiration from the \textit{Residual Networks} (ResNet) \cite{DBLP:journals/corr/HeZRS15} and \textit{Convolutional Neural Fabrics} (CNF) \cite{DBLP:journals/corr/SaxenaV16}, our model is able to discover new neural network architectures that outperform these baselines at a lower computation/memory/distributed cost (Section \ref{section_experiments}) on CIFAR-10 and CIFAR-100. The related work is presented in Section \ref{sec_related_work}.
\section{Super Networks}
\label{secsn}
We consider the classical supervised learning problem defined by an input space $\mathcal{X}$ and an output space $\mathcal{Y}$. In the following, input and output spaces correspond to multi-dimensional real-valued spaces. The training set is denoted as $\mathcal{D}=\{ (x^1,y^1), ..., (x^\ell,y^\ell) \}$ where $x^i \in \mathcal{X}$, $y^i \in \mathcal{Y}$ and $\ell$ is the number of supervised examples. At last, we consider a model $f: \mathcal{X} \rightarrow \mathcal{Y}$ that predicts an output given a particular input.
We first describe a family of models called \textit{Super Networks} (S-networks)\footnote{The name \textit{Super Network} comes from \cite{DBLP:journals/corr/FernandoBBZHRPW17} which presents an architecture close to ours for a completely different purpose. } since our contribution presented in Section \ref{section_super_net} will be a stochastic extension of this model. Note that the principle of Super Networks is not new and similar ideas have been already proposed in the literature under different names, e.g Deep Sequential Neural Networks \cite{DBLP:journals/corr/DenoyerG14}, Neural Fabrics \cite{DBLP:journals/corr/SaxenaV16}, or even PathNet \cite{DBLP:journals/corr/FernandoBBZHRPW17}.
A Super Network is composed of a set of layers connected together in a directed acyclic graph (DAG) structure. Each edge is a (small) neural network; the S-network corresponds to a particular combination of these neural networks and defines a computation graph. Examples of S-networks are given in Figure \ref{SuperNetworks_imgs}. More formally, let us denote $l_1,....,l_N$ a set of layers, $N$ being the number of layers, such that each layer $l_i$ is associated with a particular representation space $\mathcal{X}_i$ which is a multi-dimensional real-valued space. $l_1$ will be the \textit{input layer} while $l_N$ will be the \textit{output layer}. We also consider a set of (differentiable) functions $f_{i,j}$ associated to each possible pair of layers such that $f_{i,j}: \mathcal{X}_i\rightarrow \mathcal{X}_j$. Each function $f_{i,j}$ will be referred to as a \textit{module} in the following: it takes data from $\mathcal{X}_i$ as input and transforms these data to $\mathcal{X}_j$. Note that each $f_{i,j}$ performs disk/memory/network operations that have consequences on the inference speed of the S-network. Each module $f_{i,j}$ is associated with parameters in $\theta$, $\theta$ being implicit in the notation for the sake of clarity.
On top of this structure, a particular architecture $E=\{e_{i,j}\}_{(i,j) \in [1;N]^2}$ is a binary adjacency matrix over the $N$ layers such that $E$ defines a DAG with a single source node $l_1$ and a single sink node $l_N$. Different matrices $E$ will thus correspond to different super network architectures. A S-network will be denoted $(E,\theta)$ in the following, $\theta$ being the parameters of the different \textit{modules}, and $E$ being the architecture of the super network.
\paragraph{Predicting with S-networks:} The computation of the output $f(x,E,\theta)$ given an input $x$ and a S-network $(E,\theta)$ is made through a classical forward algorithm, the main idea being that the outputs of modules $f_{i,j}$ and $f_{k,j}$ leading to the same layer $l_j$ are added in order to compute the value of $l_j$. Let us denote $l_i(x,E,\theta)$ the value of layer $l_i$ for input $x$; the computation is recursively defined as:
\begin{equation}
\begin{aligned}
\text{Input:} l_1(x,E,\theta) &= x \\
\text{Layer Computation: } l_i(x,E,\theta) &= \sum\limits_k e_{k,i} f_{k,i}(l_k(x,E,\theta))
\end{aligned}
\end{equation}
In this configuration, learning of $\theta$ can be made using classical back-propagation and gradient-descent techniques.
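This recursion can be sketched in a few lines of code (an illustrative implementation added for clarity; in practice the modules are trainable neural blocks, here they are arbitrary callables):

```python
import numpy as np

def forward(x, E, modules):
    """Forward pass of an S-network: l_1 = x, l_i = sum_k e_{k,i} f_{k,i}(l_k).

    E       : (N, N) binary adjacency matrix of a DAG, layers indexed in
              topological order (layer 0 is the input, layer N-1 the output).
    modules : dict mapping an edge (k, i) to a callable f_{k,i}.
    """
    N = E.shape[0]
    layers = [None] * N
    layers[0] = x
    for i in range(1, N):
        acc = None
        for k in range(i):
            if E[k, i]:
                out = modules[(k, i)](layers[k])
                acc = out if acc is None else acc + out
        layers[i] = acc            # sum of incoming module outputs
    return layers[N - 1]
```

For instance, with three layers, edges $(l_1,l_2)$, $(l_2,l_3)$ and $(l_1,l_3)$, and toy modules, the value of the output layer is $f_{2,3}(f_{1,2}(x))+f_{1,3}(x)$.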
\section{Learning Cost-constrained architectures}
\label{section_super_net}
Our main idea is the following: we now consider that the structure $E$ of the S-network $(E,\theta)$ describes not a single neural network architecture but a set of possible architectures. Indeed, each sub-graph of $E$ (subset of edges) corresponds itself to a S-network and will be denoted $H \odot E$, where $H$ corresponds to a binary matrix used as a mask to select the edges in $E$ and $\odot$ is the Hadamard product. Our objective will thus be to identify the best matrix $H$ such that the corresponding S-network $(H \odot E,\theta)$ will be a network efficient in terms of both predictive quality and computation/memory/... cost.
The next sections are organized as follows: first, we formalize this problem as a combinatorial problem where one wants to discover the best matrix $H$ in the set of all possible binary matrices of size $N \times N$. Since this optimization problem is intractable, we propose a new family of models called \textit{Stochastic Super Networks} where $E$ is sampled following a parametrized distribution $\Gamma$ before each prediction. We then show that the resulting budgeted learning problem is continuous and that its solution corresponds to the optimal solution of the initial budgeted problem (Proposition \ref{prop1}). We then propose a practical learning algorithm to learn $\Gamma$ and $\theta$ simultaneously using gradient descent techniques.
\subsection{Budgeted Architectures Learning}
\label{s3}
Let us consider $H$ a binary matrix of size $N \times N$. Let us denote $C(H \odot E) \in \mathbb{R}^+$ the cost\footnote{Note that we consider that the cost only depends on the network architecture. The model could easily be extended to costs that depend on the input $x$ to process, or to stochastic costs -- see appendix} associated to the computation of the S-Network $(H \odot E,\theta)$. Let us also define $\mathbf{C}$ the maximum cost the user would allow. For instance, when solving the problem of \textit{learning a model with a computation time lower than 200 ms} then $\mathbf{C}$ is equal to $200 ms$. We aim at solving the following soft constrained budgeted learning problem:
\begin{multline} \label{objective}
H^*,\theta^* = \arg \min\limits_{H,\theta} \frac{1}{\ell} \sum\limits_i \Delta(f(x^i,H \odot E, \theta),y^i)
\\ + \, \lambda \max(0,C(H \odot E)-\mathbf{C})
\end{multline}
where $\lambda$ corresponds to the importance of the cost penalty. Note that the evaluated cost is specific to the particular infrastructure on which the model is ran. For instance, if $\mathbf{C}$ is the cost in milliseconds, the value of $C(H \odot E)$ will not be the same depending on the device on which the model is used. Note that the only required property of $C(H \odot E)$ is that this cost can be measured during training.
Finding a solution to this learning problem is not trivial since it involves the computation of all possible architectures which is prohibitive ($\mathcal{O}(2^N)$ in the worst case). We explain in the next section how this problem can be solved using \textit{Stochastic Super Networks}.
\begin{figure}[t]
\centering
\includegraphics[width=1.1\linewidth]{Cifar10_ResNetFab_flops_light}
\caption{Accuracy/Time trade-off using B-ResNet on CIFAR-10.}
\label{plot_cif10_flop_rnf}
\end{figure}
\subsection{Stochastic Super Networks}
\label{secion_ssn}
\label{s4}
Now, given a particular architecture $E$, we consider the following stochastic model -- called \textbf{Stochastic Super Network} (SS-network) -- that computes a prediction in two steps:
\begin{enumerate}
\item A binary matrix $H$ is sampled based on a distribution with parameters $\Gamma$. This operation is denoted $H \sim \Gamma$
\item The final prediction is made using the associated sub-graph i.e. by computing $f(x,H \odot E,\theta)$.
\end{enumerate}
A SS-network is thus defined by a triplet $(E,\Gamma,\theta)$, where both $\Gamma$ and $\theta$ are learnable parameters.
We can rewrite the budgeted learning objective of Equation \ref{objective} as:
\begin{multline} \label{stochobjective}
\Gamma^* , \theta^* = \arg \min\limits_{\Gamma,\theta} \frac{1}{\ell} \sum\limits_i \mathbb{E}_{H \sim \Gamma}\left[ \Delta(f(x^i,H \odot E, \theta),y^i) \right. \\+ \left. \lambda \max (0,C(H \odot E)-\mathbf{C}) \vphantom{\Delta}\right]
\end{multline}
\begin{mytheo}
\label{prop1} (proof in Appendix)
When the solution of Equation \ref{stochobjective} is reached, then the models sampled following $(\Gamma^*)$ and using parameters $\theta^*$ are optimal solutions of the problem of Equation \ref{objective}.
\end{mytheo}
Said otherwise, solving the stochastic problem will provide a model that has a good predictive performance under the given cost constraint.
\paragraph{Edge Sampling: } In order to avoid inconsistent architectures where the input and the output layers are not connected, we sample $H$ using the following procedure: For each layer $l_i$ visited in the topological order of $E$ (from the first layer to the last one) and for all $k<i$: If $l_k$ is connected to the input layer $l_1$ based on the previously sampled edges, then $h_{k,i}$ is sampled following a Bernoulli distribution with probability\footnote{Note that $\gamma_{k,i}$ is obtained by applying a \textit{logistic} function over a continuous parameter value, but this is made implicit in our notations.} $\gamma_{k,i}$. In the other cases, $h_{k,i}=0$.
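This sampling procedure can be sketched as follows (an illustrative snippet added for clarity; function names are ours):

```python
import random

def sample_mask(E, Gamma, seed=None):
    """Sample H so that an edge (k, i) can only be kept if layer k is already
    connected to the input layer through previously sampled edges.

    E, Gamma : (N, N) adjacency and edge-probability matrices (lists of
               lists); Gamma[k][i] is the Bernoulli parameter gamma_{k,i}.
    Layers are assumed to be indexed in the topological order of E.
    """
    rng = random.Random(seed)
    N = len(E)
    H = [[0] * N for _ in range(N)]
    connected = [False] * N
    connected[0] = True                      # the input layer
    for i in range(1, N):
        for k in range(i):
            if E[k][i] and connected[k] and rng.random() < Gamma[k][i]:
                H[k][i] = 1
                connected[i] = True
    return H
```

With all $\gamma_{k,i}=1$ the procedure returns $H=E$, and with all $\gamma_{k,i}=0$ it returns the empty mask.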
\subsection{Learning Algorithm}
\label{s5}
We consider the generic situation where the cost-function $C(.)$ is unknown and can be observed at the end of the computation of the model over an input $x$. Note that this case also includes stochastic costs where $C$ is a random variable, caused by some network latency during distributed computation for example. We now describe the case where $C$ is deterministic, see appendix for its stochastic extension.
Let us denote $\mathcal{D}(x,y,\theta,E,H)$ the quality of the S-Network $(H \odot E,\theta)$ on a given training pair $(x,y)$:
\begin{multline} \label{D_def}
\mathcal{D}(x,y,\theta,E,H)=\Delta(f(x,H \odot E,\theta),y) \\+ \lambda \max(0, C(H \odot E) - \mathbf{C})
\end{multline}
We propose to use a policy gradient inspired algorithm as in \cite{DBLP:journals/corr/DenoyerG14,DBLP:journals/corr/BengioBPP15} to learn $\theta$ and $\Gamma$. Let us denote $\mathcal{L}(x,y,E,\Gamma,\theta)$ the expectation of $\mathcal{D}$ over the possible sampled matrices $H$:
\begin{equation}
\mathcal{L}(x,y,E,\Gamma,\theta) = \mathbb{E}_{H \sim \Gamma} \mathcal{D}(x,y,\theta,E,H)
\end{equation}
The gradient of $\mathcal{L}$ can be written as\footnote{details provided in appendix}:
\begin{multline}
\nabla_{\theta,\Gamma} \mathcal{L}(x,y,E,\Gamma,\theta) \\ = \sum\limits_H P(H|\Gamma) \left[(\nabla_{\theta,\Gamma} \log P(H|\Gamma)) \mathcal{D}(x,y,\theta,E,H) \right] \\+ \sum\limits_H P(H|\Gamma) \left[ \nabla_{\theta,\Gamma} \Delta(f(x,H \odot E,\theta),y) \right]
\end{multline}
The first term corresponds to the gradient over the log-probability of the sampled structure $H$ while the second term is the gradient of the prediction loss given the sampled structure $H \odot E$.
Learning can be performed using back-propagation and stochastic gradient descent algorithms, as is done in deep reinforcement learning models. Note that in practice, in order to reduce the variance of the estimator, the update is made following:
\begin{multline}
\nabla_{\theta,\Gamma} \mathcal{L}(x,y,E,\Gamma,\theta) \\ \approx (\nabla_{\theta,\Gamma} \log P(H|\Gamma)) (\mathcal{D}(x,y,\theta,E,H)-\tilde{\mathcal{D}} ) \\+ \nabla_{\theta,\Gamma} \Delta(f(x,H \odot E,\theta),y)
\end{multline}
where $H$ is sampled following $\Gamma$, and where $\tilde{\mathcal{D}}$ is the average value of $\mathcal{D}(x,y,\theta,E,H)$ computed on a batch of learning examples.
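The $\Gamma$-part of this update is a plain score-function (REINFORCE) step. A minimal sketch (added for illustration; names are ours), using the logistic parametrization $\gamma_{k,i}=\sigma(s_{k,i})$ mentioned above, for which $\partial \log P(h_{k,i}|\gamma_{k,i})/\partial s_{k,i} = h_{k,i}-\gamma_{k,i}$:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def reinforce_step(S, H, D, D_bar, lr=0.1):
    """One score-function update of the structure logits.

    S : dict mapping an edge (k, i) to its logit s_{k,i}, gamma = sigmoid(s).
    H : dict mapping each edge whose Bernoulli was actually drawn to the
        sampled value h in {0, 1} (edges forced to 0 by the connectivity
        rule contribute no gradient).
    D, D_bar : cost-augmented loss of the sampled network and its batch
        average (the variance-reducing baseline).
    Uses d log P(h) / d s = h - sigmoid(s) and plain gradient descent.
    """
    advantage = D - D_bar
    for edge, h in H.items():
        S[edge] = S[edge] - lr * advantage * (h - sigmoid(S[edge]))
    return S
```

If the sampled network is worse than the batch average ($\mathcal{D} > \tilde{\mathcal{D}}$), the logits of the edges that were on decrease and those of the edges that were off increase, and conversely.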
\section{Experiments}\label{section_experiments}
\subsection{Implementation}\label{implem}
We study two particular S-Network architectures:
\textbf{ResNet Fabric} (Figure \ref{fig1a}) which is used for \textbf{image classification}. This S-Network is inspired by the ResNet \cite{DBLP:journals/corr/HeZRS15} architecture, on which extra modules (i.e. edges) have been added. The underlying idea is that a particular sub-graph of the ResNet Fabric corresponds exactly to a ResNet model. We thus aim at testing the ability of our approach to discover ResNet-inspired efficient architectures, or at least to converge to a ResNet model that is known to be efficient.
\textbf{Convolutional Neural Fabrics} (CNF), which was proposed in \cite{DBLP:journals/corr/SaxenaV16} (Figure \ref{fig1b}). It is a generic architecture that can be used for both \textbf{image classification} and \textbf{image segmentation}. The layers of the CNF super network are organized in a $W \times H$ matrix. We always use $W=8$ when running our budgeted algorithm; different values of $W$ (as in \cite{DBLP:journals/corr/SaxenaV16}) are used as baselines.
Image classification has been tested on CIFAR-10 and CIFAR-100 \cite{Krizhevsky09learningmultiple}, while image segmentation has been performed on the Part Label dataset \cite{GLOC_CVPR13}.
For these two architectures, denoted \textit{B-ResNet} and \textit{B-CNF}, we consider three different cost functions: (i) the \textit{computation cost}, computed as the number of operations\footnote{The number of Mult-Add operations required to fully evaluate a network.} made by the S-Network, as used in \cite{DBLP:journals/corr/DongHYY17} or \cite{DBLP:journals/corr/HuangCLWMW17}. Note that this cost is highly correlated with the execution time\footnote{Expressing the constraint directly in \textit{milliseconds} has also been investigated, with results similar to the ones obtained using the computation cost; they are not presented here.}. (ii) The \textit{memory consumption cost}, measured as the number of parameters of the resulting models. (iii) The \textit{distributed computation cost}, which is detailed in Section \ref{parallel_section} and corresponds to the ability of a particular model to be efficiently computed over a distributed environment.
\subsection{Experimental Protocol and Baselines}
Each model is trained with various values for the objective cost $\mathbf{C}$. For the image classification problem, since we directly compare to ResNet, we select values of $\mathbf{C}$ that correspond to the costs of the ResNet-20/32/44/56/110 architectures. This allows us to compare the performance of our method at the same cost level as the ResNet variants. When dealing with the B-CNF model, we select $\mathbf{C}$ to be the cost of different versions of the CNF model with different widths $W$, the height $H$ being fixed by the resolution of the input image.
For each experiment, multiple versions of the different models are evaluated over the validation set during learning. Since our evaluation now involves both a cost and an accuracy, we select the best models using the Pareto front of the cost/accuracy curve on the validation set. The reported performances are then obtained by evaluating these selected models over the test set. The detailed hyper-parameter and model selection procedure is given in the appendix, and the source code of our implementation is open source\footnote{\url{https://github.com/TomVeniat/bsn}}. Learning is done using a classical stochastic gradient descent algorithm for all parameters, with learning rate decay, a momentum of 0.9 and a weight decay of $10^{-4}$ for $\theta$.
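The Pareto-front selection described above can be sketched as follows; this is a minimal illustration where a model is kept if no cheaper model reaches a strictly higher validation accuracy (the selection code in the released implementation may differ):

```python
def pareto_front(models):
    """Select models on the cost/accuracy Pareto front.

    models: list of (cost, accuracy) pairs measured on the validation set.
    Scanning by ascending cost, a model is kept only if its accuracy
    strictly exceeds that of every cheaper model kept so far, i.e. it is
    not strictly dominated.
    """
    front = []
    for cost, acc in sorted(models):          # ascending cost
        if not front or acc > front[-1][1]:   # strictly better accuracy
            front.append((cost, acc))
    return front
```

The kept models are then re-evaluated on the test set, which avoids selecting on test performance.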
\input{results/cifar10_allmodels_flops.tex}
For each experiment, we give the performance of both reference models (ResNet \cite{DBLP:journals/corr/HeZRS15} and CNF \cite{DBLP:journals/corr/SaxenaV16}), and of related existing models
i.e., the Low Cost Collaborative Layer (LCCL) \cite{DBLP:journals/corr/DongHYY17} and MSDNet \cite{DBLP:journals/corr/HuangCLWMW17} (under the anytime classification setting). Note that the baseline methods have been designed to reduce exclusively the \textit{computation cost}, while our technique is able to deal with any type of cost. We provide the performance of our budgeted version of ResNet (B-ResNet) and of our budgeted version of CNF (B-CNF). Note that, for a fair comparison, we present the published results of ResNet and CNF, but also the ones that we have obtained by training these models ourselves, \textbf{our results being of better quality} than the previously published performance.
\subsubsection{Experimental results}\label{section_classif}
\begin{figure*}[ht]
\begin{center}
\begin{subfigure}[t]{.3\textwidth}
\centering
\includegraphics[width=\linewidth]{Arch_ResNet_fabric}
\caption{B-ResNet}
\label{arch_resnet}
\end{subfigure}
\begin{subfigure}[t]{.3\textwidth}
\centering
\includegraphics[width=\linewidth]{Arch_cnf_flops_w9_allnodes}
\caption{B-CNF \& computation cost}
\label{arch_cnf_flop}
\end{subfigure}
\begin{subfigure}[t]{.3\textwidth}
\centering
\includegraphics[width=\linewidth]{Arch_cnf_params_w9_allnodes}
\caption{B-CNF \& memory consumption cost}
\label{arch_cnf_param}
\end{subfigure}
\end{center}
\caption{Discovered architectures: \textbf{(Left)} is a low \textit{computation cost} B-ResNet where dashed edges correspond to connections in which the two convolution layers have been removed (only shortcut or projection connections are kept). \textbf{(Center)} is a low \textit{computation cost} B-CNF where high-resolution operations have been removed. \textbf{(Right)} is a low \textit{memory consumption cost} B-CNF: the algorithm has mostly kept the high-resolution convolutions since they allow fine-grained feature maps and have the same number of parameters as lower-resolution convolutions. It is interesting to note that our algorithm, constrained with two different costs, automatically learned two different efficient architectures.}
\label{fig:DiscoveredNetworks}
\end{figure*}
\input{results/cifar100_resnetfab_flops.tex}
\paragraph{Reducing the computation cost:} Figure \ref{plot_cif10_flop_rnf} and Table \ref{cif10_allmodels_flop} show the performance of different models over CIFAR-10. Each point corresponds to a model evaluated both in terms of accuracy and \textit{computation cost}. When considering the B-ResNet model, and by fixing the value of $\mathbf{C}$ to the computation cost of the different ResNet architectures, we obtain budgeted models that have approximately the same costs as the ResNets, but with a higher accuracy. For example, ResNet-20 obtains an accuracy of 92.19\% at a cost of $40.9 \times 10^6$ flop, while B-ResNet is able to discover an architecture with 92.39\% accuracy at a slightly lower cost ($39.25 \times 10^6$ flop). Moreover, the B-ResNet model also outperforms existing approaches like MSDNet or LCCL, particularly when the \textit{computation cost} is low, i.e., for architectures that can be computed at high speed. When comparing CNF to B-CNF, one can see that our approach is able to considerably reduce the computation cost while keeping a high accuracy. For example, one of our learned models obtained an accuracy of 93.14\% with a cost of $103 \times 10^6$ flop, while CNF has an accuracy of 92.54\% for a cost of $406 \times 10^6$ flop. Note that the same observations can be drawn for CIFAR-100 (Table \ref{cif100_resnetfab_flop}).
Figures \ref{arch_resnet} and \ref{arch_cnf_flop} illustrate two architectures discovered by B-ResNet and B-CNF with a low \textit{computation cost}. One can see that B-ResNet has converged to an architecture slightly different from the standard ResNet architecture, explaining why its accuracy is better. On the CNF side, our technique is able to extract a model that has a minimum of high-resolution convolution operations, resulting in a high speedup.
\input{results/cifar10_allmodels_params.tex}
\vspace{-0.3cm}
\paragraph{Reducing the memory consumption:} Similar experiments have been made considering the \textit{memory consumption cost}, which measures the number of parameters of the learned architectures. We want to demonstrate here the ability of our technique to be used with a large variety of costs, and not only to reduce the computation time. Table \ref{cif10_allmodels_params} illustrates the results obtained on CIFAR-10. As with the \textit{computation cost}, one can see that our approach is able to discover architectures that, given a particular memory cost, obtain a better accuracy. For example, for a model whose size is $\approx 0.47$ million parameters, ResNet-32 has a classification error of 7.81\% while B-ResNet only makes 6.58\% error with $\approx 0.48$ million parameters.
\paragraph{Image Segmentation: }
We also perform experiments on the image segmentation task using the Part Label dataset with CNF and B-CNF (Table \ref{tab:partlabels}). In this task, the model computes a map of pixel probabilities, the output layer being now located at the top-right position of the CNF matrix. It is thus more difficult to reduce the overall computation cost. On the Part Label dataset, we are able to learn a BSN model with a computation gain of $40\%$. Forcing the model to further reduce the computation cost by decreasing the value of $\mathbf{C}$ results in inconsistent models. At a computation gain of $40\%$, BSN obtains an error rate of $4.57\%$, which can be compared with the error of $4.94\%$ for the full model. The best architecture learned by B-CNF is given in the appendix.
\vspace{-0.2cm}
\paragraph{Learning Dynamics: }
Figure \ref{fig:dynamics} illustrates the learning dynamics of B-CNF and CNF. First, one can see (entropy curve) that the model becomes deterministic at the end of the learning procedure, and thus converges to a unique architecture. Moreover, the training speeds of B-CNF and CNF are comparable, showing that our method does not result in a slower training procedure. Note that the figure illustrates the fact that during a burn-in period, we do not update the edge probabilities, which allows us to obtain a faster convergence speed (see appendix).
\input{results/part_cnf_flops.tex}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.8\linewidth]{error_entropy}
\end{center}
\caption{Evolution of the loss function and the entropy of $\Gamma$ during training. The period between epoch 0 and 50 is the burn-in phase. The learning rate is divided by 10 after epoch 150 to increase the convergence speed.}
\label{fig:dynamics}
\end{figure}
\subsection{Learning Distributed Architectures}\label{parallel_section}
\begin{figure*}[h]
\centering
\begin{subfigure}[t]{0.65\textwidth}
\includegraphics[width=1.0\linewidth]{Cifar10_ResNetFab_para_all}
\caption{Accuracy/number of operation for different number of cores on CIFAR-10 using B-ResNet.}
\end{subfigure}
\hspace{.7em}
\begin{subfigure}[t]{0.30\textwidth}
\vspace{-15em}
\includegraphics[width=1.0\linewidth]{Arch_cnf_p1p4_w9}
\caption{Architectures discovered with B-CNF for different number of cores: $n=1$ (top) and $n=4$ (bottom)}
\label{fig:para_arch}
\end{subfigure}
\caption{Architectures discovered on CIFAR-10 for different number of distributed cores $n$.}
\label{fig:para_res_arch}
\end{figure*}
At last, we perform a third set of experiments focused on distributed computing, where different edges can be computed simultaneously on different computers/processors of a distributed platform. We thus evaluate the quality of an architecture by its ability to be efficiently parallelized. The \textit{distributed computation cost} corresponds to the number of steps needed to compute the output of the network; e.g., on an architecture with $n=2$ computers, depending on the structure of the network, two edges can be computed simultaneously. If the architecture is a sequence of layers, then this parallelization becomes impossible. These experiments allow us to measure the ability of BSN to handle complex costs that cannot be decomposed as a sum of individual module costs, as is usually done in related work.
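The distributed computation cost described above can be made concrete with a small scheduling sketch. This assumes unit cost per edge and a greedy list scheduler, both of which are our own simplifying assumptions for illustration:

```python
def distributed_cost(edges, n_workers):
    """Number of parallel steps to evaluate all edges of a DAG.

    edges: list of (src, dst) node pairs. An edge is ready once its source
    node is done, and a node is done once all its incoming edges have been
    computed. Up to n_workers ready edges run per step (unit cost each).
    """
    nodes = {s for s, _ in edges} | {d for _, d in edges}
    incoming = {v: [i for i, (_, d) in enumerate(edges) if d == v]
                for v in nodes}
    done_edges = set()
    done_nodes = {v for v in nodes if not incoming[v]}   # input nodes
    steps = 0
    while len(done_edges) < len(edges):
        ready = [i for i, (s, _) in enumerate(edges)
                 if i not in done_edges and s in done_nodes]
        done_edges.update(ready[:n_workers])
        done_nodes = {v for v in nodes
                      if all(i in done_edges for i in incoming[v])}
        steps += 1
    return steps
```

A pure chain of layers costs one step per edge regardless of the number of workers, while a wide DAG lets the cost drop toward the critical-path length, which is exactly why this cost cannot be written as a sum of per-module costs.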
Results and corresponding architectures are illustrated in Figure \ref{fig:para_res_arch} for the CIFAR-10 dataset and for both the B-ResNet and the B-CNF architectures. Note that ResNet is typically an architecture that cannot be efficiently distributed since it is a sequence of modules. One can see that our approach is able to find efficient architectures for $n=2$ and $n=4$. Surprisingly, when $n=4$ the discovered architectures are less efficient, which is mainly due to overfitting of the training set: the cost constraint becomes too loose and stops acting as a regularizer on the network architecture. In Figure \ref{fig:para_arch}, one can see two examples of architectures discovered when $n=1$ and $n=4$. The shape of the architecture when $n=4$ clearly confirms that BSN is able to discover parallelized architectures, and to `understand' the structure of this complex cost.
\section{Related Work}\label{sec_related_work}
\textbf{Learning cost-efficient models: } One of the first approaches to learn efficient models is to \textit{a posteriori} compress the learned network, typically by pruning some connections. The oldest work is certainly the Optimal Brain Surgeon \cite{Hassibi} which removes weights in a classical neural network. The problem of network compression can also be seen as a way to speed up a particular architecture, for example by using quantization of the weights of the network \cite{Vanhoucke11}, or by combining pruning and quantization \cite{han}. Other algorithms include the use of hardware efficient operations that allow a high speedup \cite{hard}.
\textbf{Efficient architectures: } Architecture improvements have been widely used in CNNs to improve the cost efficiency of network components; some examples are the bottleneck units in the ResNet model \cite{DBLP:journals/corr/HeZRS15}, the use of depthwise separable convolutions in Xception \cite{DBLP:journals/corr/Chollet16a} and the lightweight MobileNets \cite{DBLP:journals/corr/HowardZCKWWAA17}, or the combination of pointwise group convolution and channel shuffle in ShuffleNet \cite{DBLP:journals/corr/ZhangZLS17}.
\textbf{End-to-end approaches: } A first example of end-to-end approaches is the use of quantization at training time: different authors trained models using binary weight quantization coupled with full-precision arithmetic operations \cite{DBLP:journals/corr/CourbariauxBD15,DBLP:journals/corr/Lu17c}. Recently, \cite{DBLP:journals/corr/abs-1710-03740} proposed a method using half-precision floating-point numbers during training. Another technique, proposed by \cite{DBLP:journals/corr/HintonVD15,DBLP:journals/corr/RomeroBKCGB14} and used in \cite{2017arXiv170510924Z,2017arXiv170510194N}, is the distillation of knowledge, which consists of training a smaller network to imitate the outputs of a larger network.
Other approaches are dynamic networks, which conditionally select the modules to execute in order to respect a budget objective \cite{DBLP:journals/corr/BolukbasiWDS17, DBLP:journals/corr/OdenaLO17,DBLP:journals/corr/HuangCLWMW17,DBLP:journals/corr/BengioBPP15,DBLP:journals/corr/McGillP17}.
\textbf{Architecture Search: } Different authors have proposed to provide networks with the ability to learn to select the computations that will be applied, i.e., to choose the right architecture for a particular task. This is the case, for example, for classification in \cite{DBLP:journals/corr/DenoyerG14,DBLP:journals/corr/ZophL16} based on reinforcement learning techniques, in \cite{DBLP:journals/corr/SrivastavaGS15} based on gating mechanisms, in \cite{DBLP:journals/corr/RealMSSSLK17} based on evolutionary algorithms, or in \cite{DBLP:journals/corr/FernandoBBZHRPW17} based on both RL and evolutionary techniques.
The main difference with respect to existing methods is that we make no assumption about the nature of the cost. Our model is thus more generic than existing techniques and can handle a large variety of problems.
\section{Conclusion and Perspectives}
We proposed a new model called \textit{Budgeted Super Network}, able to automatically discover cost-constrained neural network architectures given a maximum authorized cost. The experiments in the computer vision domain show the effectiveness of our approach. Its main advantage is that BSN can be used with any cost (computation cost, memory cost, etc.) without any assumption on the shape of this cost. A promising research direction is now to study whether this model could be adapted in order to reduce the training time (instead of the test computation time). This could for example be done using meta-learning approaches.
\section*{Acknowledgments}
This work has been funded in part by grant ANR-16-CE23-0016 ``PAMELA'' and grant ANR-16-CE23-0006 ``Deep in France''.
{\small
\bibliographystyle{ieee}
\section{Introduction}
\label{chap1}
The majority of the digital information produced globally is present in the form of web pages, text documents, news articles, emails, and presentations expressed in natural language text. Collectively, such data is termed \emph{unstructured} as opposed to \emph{structured} data that is normalised and stored in a database. The domain of information extraction (IE) is concerned with identifying information in unstructured documents and using it to populate fields and records in a database~\cite{mccallum2005ie}. In most cases, this activity concerns processing human language texts by means of natural language processing (NLP)~\cite{spyns1996natural}.
Among various IE tasks, Cross-Document Coreference Resolution (CDCR)~\cite{MayfieldADEE09,Bagga1998} involves identifying equivalence classes for identifiable data elements, called entities, across multiple documents. In particular, CDCR is of critical importance for data quality and is fundamental for high-level information extraction and data integration, including semantic search, question answering, and knowledge base construction.
Traditional approaches to CDCR~\cite{MayfieldADEE09,wellner2004integrated} derive features from the context surrounding the appearance of an entity in a document (also called a ``mention'') and then apply clustering algorithms that can group similar or related entities across all documents. As we will soon discuss, these approaches aim to be exhaustive, and their running time grows exponentially with the number of documents.
Recently, a new stream of CDCR research~\cite{elsayed2008pairwise,pantel2009web,sarmento2009approach,singh2011large,kolb2012dedoop} has focused on meeting the challenges of scaling CDCR techniques to deal with document collections sized of the order of terabytes and above. Popularly, dealing with such large-scale datasets has been termed the ``big data'' problem. In this context, CDCR tasks may face various drawbacks, including difficulties in clustering and grouping large numbers of entities and mentions across large datasets. Therefore, CDCR techniques need to be overhauled to meet such challenges.
To address these challenges, researchers have studied methods to scale CDCR subtasks such as computing similarity between pairs of entity mentions~\cite{ng2010supervised,wick2009entity}, or even to replace pairwise approaches with more expressive and scalable alternatives~\cite{fastCoreference1,wellner2004integrated}. Recent publications~\cite{elsayed2008pairwise,pantel2009web,sarmento2009approach,singh2011large,kolb2012dedoop} have reported on the usage of parallel and distributed architectures such as Apache Hadoop~\cite{Hadoop,MapReduce} for supporting data-intensive applications which can be used to build scalable algorithms for pattern analysis and data mining.
Although these techniques represent the first steps to meeting big data challenges, CDCR tasks face various drawbacks in achieving a high quality coreference result (effectiveness) and performing the coreference resolution as fast as possible (efficiency) on large datasets. The aim of this paper is to provide readers with an understanding of the central concepts, subtasks, and the current state-of-the-art in CDCR process. We assess existing tools/techniques for CDCR subtasks and highlight big data challenges in each of them to help readers identify important and outstanding issues for further investigation. Finally, we provide concluding remarks and discuss possible directions for future work.
The remainder of this document is organized as follows. In Section~\ref{sec2}, we introduce the CDCR process and its sub-tasks in detail. Sections~\ref{chap3} and~\ref{chap4} discuss the state-of-the-art in entity identification and entity classification. Section~\ref{chap5} discusses the challenges brought about by big data in CDCR. Section~\ref{sec6} presents the state-of-the-art tools and techniques for CDCR. Section~\ref{chap6} presents our conclusions and a roadmap for the future. Finally, in the Appendix we discuss our experience in implementing a MapReduce-based CDCR software prototype to address the challenges discussed in the paper.
\section{CDCR Process and Evaluation Framework}
\label{sec2}
\subsection{Background and Preliminaries}
\label{chap1}
CDCR approaches provide techniques for the identification of entity mentions in different documents that refer to the same underlying entity. In this context, an \emph{entity} is a real-world person, place, organization, or object, such as the person who serves as the 44th president of the United States, and an \emph{entity mention} is a string which refers to such an entity, such as ``Barack Hussein Obama'', ``Senator Obama'' or ``President Obama''. Figure~\ref{fig:CDCExample} illustrates an example of person name mentions from different documents and their coreference resolutions. Given a collection of mentions of entities extracted from millions of documents, CDCR involves various subtasks, from extracting entities and mentions to clustering the mentions. The overall objective is to cluster mentions such that mentions referring to the same entity are in the same cluster and no other entities are included~\cite{singh2011large}. Mentions referring to the same entity are termed ``co-referent''.
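The clustering objective above (grouping co-referent mentions into equivalence classes while enforcing transitivity) can be sketched with a union-find pass over classifier-approved pairs; the function name and inputs are illustrative:

```python
def cluster_mentions(mentions, coreferent):
    """Group mention ids into equivalence classes (entities) with union-find.

    mentions: list of mention ids; coreferent: list of (i, j) pairs that a
    pairwise classifier judged to refer to the same entity. Transitivity is
    enforced by the union operation.
    """
    parent = {m: m for m in mentions}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for i, j in coreferent:
        parent[find(i)] = find(j)

    clusters = {}
    for m in mentions:
        clusters.setdefault(find(m), []).append(m)
    return sorted(sorted(c) for c in clusters.values())
```

The expensive part of CDCR is producing the `coreferent` pairs, not this final grouping; the number of candidate pairs is what grows quadratically (or worse) with the corpus size.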
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{./fig/CoreferenceExample.pdf}
\caption{An \emph{entity} (i.e. the person who serves as the 44th president of the United States) and its \emph{entity mentions}, i.e. its true coreference resolutions. }
\label{fig:CDCExample}
\end{figure}
The current approach to cross-document (named) entity coreference resolution consists of two primary tasks~\cite{MayfieldADEE09,fastCoreference1,finin2009using,rao2010streaming,dozier2004cross}: entity identification and entity classification. Entity identification is the process of finding mentions of the entities of interest in documents and tying together those that are coreferent, while the entity classification task involves deriving a classification and/or clustering technique that will separate data into categories, or classes, characterized by a distinct set of features. We discuss each in depth in the following subsections.
\subsubsection{Entity Identification}
Named-entity recognition~\cite{NER.evaluation.2009,NERComparison} (NER), also known as entity identification~\cite{nadeau2007survey} and entity extraction~\cite{chen1998named,ah2009clique}, refers to techniques that are used to locate and classify atomic elements in text into predefined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc. There are numerous approaches and systems available for performing NER. For example, for traditional named entity recognition (NER), the most popular publicly available systems are: OpenNLP NameFinder\footnote{http://opennlp.apache.org/}, Illinois NER system\footnote{http://cogcomp.cs.illinois.edu/demo/ner/?id=8}, Stanford NER system\footnote{http://www-nlp.stanford.edu/software/CRF-NER.shtml}, and Lingpipe NER system\footnote{http://alias-i.com/lingpipe/demos/tutorial/ne/read-me.html}.
Various steps are considered in this task including: (i)~Format Analysis, in which document formats are analysed for formatting information in addition to textual content; (ii)~Tokeniser, where text is segmented into tokens, e.g., words, numbers, and punctuation; (iii)~Gazetteer, where the type and scope of the information is categorized; and (iv)~Grammar, where linguistic grammar-based techniques as well as statistical models are used to extract more entities. The output of entity identification task will be a set of named entities, extracted from a set of documents. Numerous approaches, techniques, and tools to extracting entities from individual documents have been described in the literature and will be discussed in depth in the next section.
\subsubsection{Entity Classification}
The entity classification task involves deriving a classification and/or clustering technique that will separate data into categories, or classes, characterized by a distinct set of features. To achieve this, extracted entities and mentions (from the entity identification task) are assigned a metric based on the likeness of their meaning or semantic content.
Various machine learning techniques have modeled the problem of entity coreference as a collection of decisions between mention pairs~\cite{fastCoreference1}. Prior to entity pairing, various \emph{features} may be extracted to annotate entities and their mentions. Figure~\ref{featurization} illustrates a simple example for calculating various featurization classes for the pair of mentions \{`Barack Obama' , `Barack Hussein Obama'\}. As illustrated in the figure, these classes can be defined for entities, words around the entities (document level), and meta-data about the documents such as their type. Then, the similarity scores for a pair of entities are computed using appropriate similarity functions for each type of feature (e.g., character-, document-, or metadata-level features).
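The per-pair similarity computation can be sketched as follows; the feature names and the particular similarity functions (character-level ratio, token Jaccard, containment) are illustrative choices rather than a fixed standard, and document- or metadata-level features would be added the same way:

```python
import difflib

def mention_features(m1, m2):
    """Similarity scores for one pair of entity mentions.

    Returns a feature dictionary mixing character-level and token-level
    similarities; each value plays the role of one featurization class.
    """
    t1, t2 = set(m1.lower().split()), set(m2.lower().split())
    return {
        "exact_match": float(m1 == m2),
        "char_ratio": difflib.SequenceMatcher(
            None, m1.lower(), m2.lower()).ratio(),
        "token_jaccard": len(t1 & t2) / len(t1 | t2),
        "token_containment": float(t1 <= t2 or t2 <= t1),
    }
```

A downstream classifier then consumes such feature vectors to decide whether the pair is co-referent.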
The next step in the entity classification task is to determine whether pairs of entities are co-referent or not. For example, in the sentence ``Mary said she would help me'', \emph{she} and \emph{Mary} most likely refer to the same person or group, in which case they are co-referent. Several filtering steps can be applied to entity pairs to eliminate those pairs that have little chance of being deemed co-referent. Various supervised and/or unsupervised classification/clustering techniques (over a set of training examples) can be used to classify related entities. For example, generative classifiers (e.g., Hidden Markov models), discriminative classifiers (e.g., Support Vector Machines (SVM) or maximum entropy models), or decision tree techniques can be used to separate a set of featurized paired entities into two possible classes - coreferent or not-coreferent.
\begin{figure}
\centering
\includegraphics[scale=0.61]{./fig/featurization.pdf}\\
\caption{A simple example for calculating various featurization classes for the pair of entities (`Barack Obama' , `Barack Hussein Obama').}\label{featurization}
\end{figure}
\section{Entity Identification: State-of-the-Art}
\label{chap3}
Named Entity Recognition (NER), also known as Entity Extraction (EE), techniques can be used to locate and classify atomic elements in text into predefined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, and percentages. NER is a key part of an information extraction system that supports robust handling of proper names essential for many applications, enables pre-processing for different classification levels, and facilitates information filtering and linking. However, performing coreference (or entity linking), as well as creating templates, is not part of the NER task.
A basic entity identification task can be defined as follows: \\
\noindent \emph{Let \{$t_1$, $t_2$, $t_3$, ..., $t_n$\} be a sequence of entity types denoted by $T$ and let \{$w_1$, $w_2$, $w_3$, ..., $w_n$\} be a sequence of words denoted by $W$, then the identification task can be defined as `given some $W$, find the best $T$'.}\\
In particular, entity identification consists of three subtasks: entity names, temporal expressions, and number expressions, where the expressions to be annotated are `unique identifiers' of entities (organizations, persons, locations), times (dates, times), and quantities (monetary values, percentages).
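One standard way to instantiate ``given some $W$, find the best $T$'' is Viterbi decoding under a first-order hidden Markov model; the sketch below uses hand-set toy probability tables passed in by the caller rather than parameters learned from data:

```python
import math

def viterbi(words, tags, start_p, trans_p, emit_p):
    """Decode the most likely tag sequence T for a word sequence W
    under a first-order HMM, using Viterbi dynamic programming.
    Unseen words get a small emission floor instead of zero probability.
    """
    V = [{t: math.log(start_p[t]) + math.log(emit_p[t].get(words[0], 1e-6))
          for t in tags}]
    back = []
    for w in words[1:]:
        scores, ptr = {}, {}
        for t in tags:
            prev, s = max(((p, V[-1][p] + math.log(trans_p[p][t]))
                           for p in tags), key=lambda pair: pair[1])
            scores[t] = s + math.log(emit_p[t].get(w, 1e-6))
            ptr[t] = prev
        V.append(scores)
        back.append(ptr)
    best = max(tags, key=lambda t: V[-1][t])
    path = [best]
    for ptr in reversed(back):      # follow back-pointers to recover T
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

Statistical NER systems estimate the probability tables from annotated corpora; richer models (MEMMs, CRFs) replace the HMM factorization but keep the same decoding idea.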
Most research on entity extraction systems has been structured as taking an unannotated block of text (e.g., ``Obama was born on August 4, 1961, at Gynecological Hospital in Honolulu'') and producing an annotated block of text, such as the following\footnote{In this example, the annotations have been done using so-called ENAMEX (a user defined element in the XML schema) tags that were developed for the Message Understanding Conference in the 1990s.}:
\begin{verbatim}
<ENAMEX TYPE="PERSON">Obama</ENAMEX> was born on
<TIMEX TYPE="DATE">August 4, 1961,</TIMEX> at
<ENAMEX TYPE="ORGANIZATION">Gynecological Hospital</ENAMEX> in
<ENAMEX TYPE="CITY">Honolulu</ENAMEX>.
\end{verbatim}
where, entity types such as person, organization, and city are recognized.
However, NER is not just matching text strings with pre-defined lists of names. It should recognize entities not only in contexts where category definitions are intuitively quite clear, but also in contexts where there are many grey areas caused by metonymy. Metonymy is a figure of speech used in rhetoric in which a thing or concept is not called by its own name, but by the name of something intimately associated with that thing or concept. Metonyms can be either real or fictional concepts representing other concepts real or fictional, but they must serve as an effective and widely understood second name for what they represent. For example, (i)~\emph{Person vs. Artefact}: ``The Ham Sandwich (a person) wants his bill.'' vs. ``Bring me a ham sandwich.''; (ii)~\emph{Organization vs. Location}: ``England won the World Cup'' vs. ``The World Cup took place in England''; (iii)~\emph{Company vs. Artefact}: ``shares in MTV'' vs. ``watching MTV''; and (iv)~\emph{Location vs. Organization}: ``she met him at Heathrow'' vs. ``the Heathrow authorities''.
To address these challenges, the Message Understanding Conferences (MUC) were initiated and financed by DARPA (Defense Advanced Research Projects Agency) to encourage the development of new and better methods of information extraction. The tasks grew from producing a database of events found in newswire articles from one source to the production of multiple databases of increasingly complex information extracted from multiple sources of news in multiple languages. The databases now include named entities, multilingual named entities, attributes of those entities, facts about relationships between entities, and events in which the entities participated. MUC essentially adopted the simplistic approach of disregarding metonymous uses of words, e.g., `England' was always identified as a location. However, this is not always useful for practical applications of NER, such as in the domain of sports.
MUC defined the basic problems in NER as follows: (i)~Variation of named entities: for example John Smith, Mr Smith, and John may refer to the same entity; (ii)~Ambiguity of named entity types: for example John Smith (company vs. person), May (person vs. month), Washington (person vs. location), and 1945 (date vs. time); (iii)~Ambiguity with common words: for example `may'; and (iv)~Issues of style, structure, domain, genre, etc. as well as punctuation, spelling, spacing, and formatting. To address these challenges, the state-of-the-art approaches to entity extraction proposed four primary steps~\cite{chen1998named,NER.evaluation.2009,nadeau2007survey,benjelloun2009swoosh}: Format Analysis, Tokeniser, Gazetteer, and Grammar. Figure~\ref{NE_Process} illustrates a simplified process for the NER task. The following is a brief description of these steps:
\begin{figure} [t]
\centering
\includegraphics[scale=1.1]{./fig/NE_Process.pdf}\\
\caption{A simplified process for NER tasks.}\label{NE_Process}
\end{figure}
\paragraph{Format Analysis.} Many document formats contain formatting information in addition to textual content. For example, HTML documents contain HTML tags specifying formatting information such as new line starts, bold emphasis, and font size or style. The first step, format analysis, is the identification and handling of the formatting content embedded within documents that controls the way the document is rendered on a computer screen or interpreted by a software program. Format analysis is also referred to as structure analysis, format parsing, tag stripping, format stripping, text normalization, text cleaning, and text preparation.
\paragraph{Tokeniser.} Tokenization is the process of breaking a stream of text up into words, phrases, symbols, or other meaningful elements called tokens. This module is responsible for segmenting text into tokens, e.g., words, numbers, and punctuation. The list of tokens becomes the input for further processing such as parsing or text mining.
\paragraph{Gazetteer.} This module is responsible for categorizing the type and scope of the information presented. In particular, a gazetteer is a geographical dictionary or directory, an important reference for information about places and place names. It typically contains information concerning the geographical makeup of a country, region, or continent as well as the social statistics and physical features, such as mountains, waterways, or roads. As an output, this module will generate a set of named entities (e.g., towns, names, and countries) and key words (e.g., company designators and titles).
\paragraph{Grammar.} This module is responsible for hand-coded rules for named entity recognition. NER systems are able to use linguistic grammar-based techniques as well as statistical models. Hand-crafted grammar-based systems typically obtain better precision, but at the cost of lower recall and months of work by experienced computational linguists. Statistical NER systems typically require a large amount of manually annotated training~data.
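As an illustration of such hand-coded grammar rules, the sketch below encodes two hypothetical patterns as regular expressions. The title list and company-designator list are invented for the example and are not taken from any cited system:

```python
import re

# Two hypothetical hand-coded rules: a capitalized name following a title is
# a person; a capitalized sequence ending in a company designator is an org.
RULES = [
    ("PERSON", re.compile(r"\b(?:Mr|Mrs|Dr|Prof)\.?\s+([A-Z][a-z]+(?:\s+[A-Z][a-z]+)*)")),
    ("ORG", re.compile(r"\b([A-Z][A-Za-z]*(?:\s+[A-Z][A-Za-z]*)*\s+(?:Inc|Ltd|Corp)\.?)")),
]

def extract_entities(text):
    """Return (type, surface form) pairs matched by the rules, in rule order."""
    return [(etype, m.group(1))
            for etype, pattern in RULES
            for m in pattern.finditer(text)]
```

Such rules obtain high precision on the constructions they cover but miss everything else, which is exactly the precision/recall trade-off noted above.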
\section{Entity Classification: State-of-the-Art}
\label{chap4}
The classification step is responsible for determining whether pairs of entities are co-referent or not.
To achieve this, the named entities extracted in the entity identification phase should be compared by applying various features to each pair of entities.
Such features can be divided into various classes such as string match~\cite{lee2011stanford,singh2011large,chen2012combining,wick2009entity}, lexical~\cite{chen1998named,bengtson2008understanding}, syntactic~\cite{skut1998chunk,tsuruoka2005developing}, pattern-based~\cite{daume2005large}, count-based~\cite{daume2005large,marquez2012coreference,potau2010coreference}, semantic~\cite{kambhatla2004combining,daume2005large}, knowledge-based~\cite{daume2005large,bryl2010using,nastase2010wikinet}, class-based~\cite{ravichandran2005randomized,pantel2009web,fastCoreference1,elsayed2008pairwise,sarmento2009approach}, list-/inference-/history-based~\cite{daume2005large}, and relationship-based~\cite{giles2008large,kambhatla2004combining} features. Table~\ref{tblFeature} illustrates the various classes of features, their description, and the state-of-the-art approaches.
\begin{table}
\begin{adjustwidth}{-0.9cm}{}
\caption{Various classes of features.}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.9]{./fig/tblFeature.pdf}\\
\end{tabular}
\label{tblFeature}
\end{adjustwidth}
\end{table}
Recently, linked data~\cite{bizer2009linked} has become a prominent source of information about entities. Linked data describes a method of publishing structured data so that it can be interlinked and become more useful, and provides a publishing paradigm in which not only documents, but also data, can be a first-class citizen of the Web. Projects such as DBpedia~\cite{DBpedia}, Freebase~\cite{Freebase}, WikiTaxonomy~\cite{WikiTaxonomy}, and YAGO~\cite{Yago} have constructed huge knowledge bases (KBs) of entities, their semantic classes, and relationships among entities~\cite{BigDataMethods}. These systems can be used to enrich the entities with additional features and consequently to improve the effectiveness of the results. As an example, YAGO contains information harvested from Wikipedia and linked to the WordNet thesaurus~\cite{WordNet} as a semantic backbone, and holds more than two million entities (e.g., people, organizations, and cities) and 20 million facts about these entities.
\subsection{Similarity Functions and Their Characteristics}
Approximate data matching usually relies on the use of a similarity function, where a similarity function $f(v_1, v_2) \mapsto s$ can be used to assign a score $s$ to a pair of data values $v_1$ and $v_2$. These values are considered to represent the same real-world object if $s$ is greater than a given threshold $t$.
In the classification step, similarity functions play a critical role in dealing with data differences caused by various reasons, such as misspellings, typographical errors, incomplete information, and lack of standard formats. For example, personal name mentions may refer to the same person, but can have multiple conventions (e.g., \emph{Barack Obama} versus \emph{B. Obama}).
In the last four decades, a large number of similarity functions have been proposed in different research communities, such as statistics, artificial intelligence, databases, and information retrieval. They have been developed for specific data types (e.g., string, numeric, or image) or usage purposes (e.g., typographical error checking or phonetic similarity detection). For example, they are used for comparing strings (e.g., edit distance and its variations, Jaccard similarity, and tf/idf based cosine functions), for numeric values (e.g., Hamming distance and relative distance), for phonetic encoding (e.g., Soundex and NYSIIS), for images (e.g., Earth Mover Distance), and so on. The functions can be categorized as follows.
\subsubsection{Similarity Functions for String Data}
\label{sec:stringTypeFunction}
For \textit{string} data types, in addition to exact string comparison, approximate string comparison functions~\cite{Hall1980} can be used for computing the similarity between two strings. They can be roughly categorized into three groups: \textit{character-based}, \textit{token-based} and \textit{phonetic} functions.
\paragraph{\bf Character-based Functions.}
These functions (e.g., edit distance, Jaro, or Jaro-Winkler) consider characters and their positions within strings to estimate the similarity~\cite{WangLF11}. In the following, we describe a set of character-based functions.
\begin{description}
\item \emph{Edit distance}: The edit distance between two strings is measured, based on the smallest number of edit operations (insertions, deletions, and substitutions) required to transform one string to the other. Each of the edit operations has a cost value (e.g., 1). For example, the edit distance between ``window'' and ``widow'' is 1 since deleting the letter ``n'' in ``window'' will convert the first string into the second. The edit distance function~\cite{Needleman} is expensive or less accurate for measuring the similarity between long strings (e.g., document or message contents). It is likely to be suitable for comparing short strings (e.g., document titles) capturing typographical errors or abbreviations~\cite{ElmagarmidIV07}.\\[-10pt]
\item \emph{Jaro or Jaro-Winkler}: The Jaro function computes the string similarity by considering the number of common characters and transposed characters. Common characters are ones that emerge in both strings within half the length of the shorter string~\cite{tailor}. Transposed characters are ones that are non-matching when comparing common characters of the same position in both strings. The Jaro-Winkler function improves the Jaro function by using the length of the longest common prefix between two strings. These functions are likely to work well for comparing short strings (e.g., personal names).\\[-10pt]
\item \emph{Q-grams}: Given a string, q-grams are substrings in which each consists of q characters of the string~\cite{Kukich1992}. For example, the q-grams (q = 2) of ``susan'' are: `su', `us', `sa', and `an'. The similarity function computes the number of common q-grams in two strings and divides the number by either the minimum, average, or maximum number of q-grams of the strings~\cite{Christen06}. If strings consist of multiple words and their tokens are ordered differently in the other string, this similarity function is likely to be more effective than other character-based functions, such as the edit distance or Jaro function~\cite{Christen06}.\\[-10pt]
\end{description}
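The character-based functions above can be sketched as follows; the q-gram variant uses sets of grams rather than counts, a simplification of the formulation in the text:

```python
def edit_distance(s, t):
    """Levenshtein distance with unit cost for insert/delete/substitute."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            cost = 0 if cs == ct else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def qgram_similarity(s, t, q=2):
    """Number of common q-grams divided by the maximum number of q-grams."""
    grams = lambda x: {x[i:i + q] for i in range(len(x) - q + 1)}
    gs, gt = grams(s), grams(t)
    return len(gs & gt) / max(len(gs), len(gt))
```

For instance, `edit_distance("window", "widow")` returns 1, matching the deletion example above.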
\paragraph{\bf Token-based Functions.}
These functions might be appropriate in situations where the string mismatches come from rearrangement of tokens (e.g., ``James Smith'' versus ``Smith James'') or the length of strings is long, such as the content of a document or a message~\cite{KopckeTR10}. The following are some token-based functions:
\begin{description}
\item \emph{Jaccard}: The Jaccard function tokenizes two strings \texttt{s} and \texttt{t} into token sets \texttt{S} and \texttt{T}, and quantifies the similarity based on the fraction of common tokens in the sets: $\frac{(S \cap T)}{(S \cup T)}$. For example, the Jaccard similarity between ``school of computer science'' and ``computer science school'' is $\frac{3}{4}$. This function works well for cases where the word order of strings is unimportant.
\item \emph{TF/IDF}: This function computes the closeness by converting two strings into unit vectors and measuring the angle between the vectors. In some situations, word frequency is important, as in information retrieval applications that give more weight to rare words than to frequent words. In such cases, this function is likely to work better than functions (e.g., Jaccard similarity) that are insensitive to word frequency.
\end{description}
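A minimal sketch of both token-based functions, assuming whitespace tokenization and a small in-memory corpus for the idf statistics (both simplifying assumptions):

```python
import math
from collections import Counter

def jaccard(s, t):
    """Fraction of common tokens: |S & T| / |S | T|."""
    S, T = set(s.split()), set(t.split())
    return len(S & T) / len(S | T)

def tfidf_cosine(doc1, doc2, corpus):
    """Cosine similarity of tf-idf vectors built over a toy corpus."""
    N = len(corpus)
    df = Counter(w for d in corpus for w in set(d.split()))
    def vec(d):
        tf = Counter(d.split())
        return {w: tf[w] * math.log(N / df[w]) for w in tf}
    u, v = vec(doc1), vec(doc2)
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    norm = lambda x: math.sqrt(sum(val * val for val in x.values()))
    return dot / (norm(u) * norm(v)) if norm(u) and norm(v) else 0.0
```

With the example from the text, `jaccard("school of computer science", "computer science school")` evaluates to 0.75.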
\paragraph{\bf Phonetic Similarity Functions.}
These functions describe how two strings are phonetically similar to each other in order to compute the string similarity. Some examples are as follows:
\begin{description}
\item \emph{Soundex}~\cite{HolmesM02}, one of the best known phonetic functions, converts a string into a code according to an encoding table. A Soundex code consists of one character and three numbers. The Soundex code is generated as follows: (i)~Keep the first letter of a string and ignore all other occurrences of vowels (a, e, i, o, u) and h, w, y; (ii)~Replace consonants with numbers according to Table~\ref{tab:encodingTable}; (iii)~Code two consecutive letters as a single number; and (iv)~Pad with 0 if there are fewer than three numbers.
For example, using the Soundex encoding table, both ``daniel'' and ``damiel'' return the same Soundex code ``d540''.
\item \emph{Phonex/Phonix}~\cite{Randell93} is an alternative function to Soundex, which was designed to improve the encoding quality by preprocessing names based on their pronunciations. Phonix~\cite{Gadd1990}, an extension of Phonex, uses more than one hundred rules on groups of characters~\cite{Christen06}. The rules are applied to only some parts of names, e.g., the beginning, middle, or end of names.
\item \emph{Double Metaphone}~\cite{Philips} performs better for string matching in non-English languages, such as European and Asian languages, than the Soundex function, which is geared towards English. To this end, it uses more complex rules that consider letter positions as well as the previous and following letters in a string.
\end{description}
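Steps (i)-(iv) can be sketched as below. This is a simplified variant: it treats h, w, and y exactly like vowels, whereas some formulations give h and w special treatment.

```python
# Consonant-to-digit mapping from the Soundex encoding table.
CODES = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
         **dict.fromkeys("dt", "3"), "l": "4",
         **dict.fromkeys("mn", "5"), "r": "6"}

def soundex(name):
    """First letter plus up to three digits, padded with zeros."""
    name = name.lower()
    code, prev = name[0], CODES.get(name[0], "")
    for ch in name[1:]:
        digit = CODES.get(ch, "")
        if digit and digit != prev:  # drop vowels and repeated codes
            code += digit
        prev = digit
    return (code + "000")[:4]
```

Both `soundex("daniel")` and `soundex("damiel")` yield ``d540'', reproducing the example above.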
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|} \hline
\texttt{Consonants}& \texttt{Number} & \texttt{Consonants} & \texttt{Number} \\ \hline
b, f, p, v& 1 & l & 4 \\ \hline
c, g, j, k, q, s, x, z& 2 & m, n & 5 \\ \hline
d, t& 3 & r & 6 \\ \hline
\end{tabular}
\vspace{2mm}
\caption{Soundex encoding table.}
\label{tab:encodingTable}
\end{table}
\subsubsection{Similarity Functions for Numeric Data}
For \textit{numeric} attributes, one can treat numbers as strings and then compare them using the similarity functions for string data described above, or choose different functions for comparing numeric values as follows.
\begin{description}
\item \emph{Relative Distance}: The relative distance is used for comparing numeric attributes $x$ and $y$ (e.g., price, weight, size): $R(x,y)= \frac{|x - y|}{max\{x,y\}}$.
\item \emph{Hamming Distance}: The Hamming distance is the number of substitutions required to transform one number to the other. Unlike other functions (e.g., relative distance), it can be used only for comparing two numbers of equal length. For example, the Hamming distance between ``2121'' and ``2021'' is 1, as there is one substitution ($1\rightarrow 0$). The Hamming distance is used mainly for numerical fixed-length values, such as postcodes and SSNs~\cite{tailor}.
\end{description}
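Both numeric functions translate directly into code:

```python
def relative_distance(x, y):
    """R(x, y) = |x - y| / max(x, y) for positive numeric attributes."""
    return abs(x - y) / max(x, y)

def hamming_distance(a, b):
    """Number of differing positions between two equal-length values,
    compared here as strings of digits."""
    if len(a) != len(b):
        raise ValueError("Hamming distance requires equal-length values")
    return sum(c1 != c2 for c1, c2 in zip(a, b))
```

`hamming_distance("2121", "2021")` returns 1, as in the example above.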
\subsubsection{Similarity Functions for Date or Time Data}
Date and time values must be converted to a common format in order to be compared with each other.
For example, possible formats for date type (considering day as `dd', month as `mm', and year as `yyyy'/`yy') include: `ddmmyyyy', `mmddyyyy', `ddmmyy', `mmddyy', and so on. For time type, times could be given as strings of the form `hhmm' or `mmhh' in 24 hours format. In the process during which date or time values are converted to a uniform representation, separator characters like `-', `/', `:' are removed from the values. To determine the similarity between these converted values, we could use numeric similarity functions (e.g., absolute difference) by considering them as numeric values or character-based similarity functions (e.g., edit distance) by considering them as string values.
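The conversion can be sketched as follows; the list of accepted formats is illustrative, and ambiguous inputs (e.g., `01/02/1996') are resolved simply by trying the formats in order:

```python
import re
from datetime import datetime

# Illustrative set of accepted input formats, tried in this order.
FORMATS = ["%d%m%Y", "%m%d%Y", "%d%m%y"]

def normalize_date(value):
    """Strip separator characters and re-emit the date as `ddmmyyyy'."""
    digits = re.sub(r"[-/.:\s]", "", value)
    for fmt in FORMATS:
        try:
            return datetime.strptime(digits, fmt).strftime("%d%m%Y")
        except ValueError:
            continue
    return None
```

After normalization, the values can be compared with the numeric or string similarity functions discussed above.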
\subsubsection{Similarity Functions for Categorical Data}
For \emph{categorical} features (whose values come from a finite domain), the similarity can be computed in a similar way to binary data types. For example, the score `1' is assigned for a match and the score `0' for a non-match. Alternatively, in~\cite{Anderberg1973}, the authors presented an approach that measures the similarity between two categorical values, based on user inputs. For example, Table~\ref{tab:categoricalData} shows the user-defined similarity scores between any two insurance types. This method can give more detailed scores between categorical data, rather than giving only two scores `0' or `1', although some effort is needed to provide user-defined similarity scores in advance.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|} \hline
Insurance Type& Car & Motorbike & Home & Travel \\ \hline\hline
Car & 1 & & & \\ \hline
Motorbike & 0.7 & 1 & & \\ \hline
Home & 0 & 0 & 1 & \\ \hline
Travel & 0 & 0 & 0.3 & 1 \\ \hline
\end{tabular}
\flushleft
\caption{Similarity scores between two insurance types.}
\label{tab:categoricalData}
\end{table}
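The user-defined scores of Table~\ref{tab:categoricalData} can be held in a symmetric lookup, with exact matches scoring 1 and unlisted pairs scoring 0:

```python
# Off-diagonal entries of the insurance-type table; the matrix is
# symmetric, so only one triangle is stored.
SCORES = {("car", "motorbike"): 0.7, ("home", "travel"): 0.3}

def categorical_similarity(a, b):
    """Score 1 for a match, a user-defined score for listed pairs, else 0."""
    if a == b:
        return 1.0
    return SCORES.get((a, b), SCORES.get((b, a), 0.0))
```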
Even though there is no similarity between any two feature values, further comparisons can be made because of \textit{semantic} relationships between them. For example, consider two feature strings ``Richard'' and ``Dick'' of person entities. Although normal string comparison functions may fail to see the similarity, the strings still can be considered as similar to each other, if we keep the information that the latter is an alias for the former.
\subsection{Clustering}
Clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those outside the cluster. In information extraction, identifying the equivalence classes of entity mentions is the main focus: it is important that an entity and all its mentions are placed in the same equivalence class. In this context, the goal of coreference resolution is to identify and connect all textual entity mentions that refer to the same entity.
To achieve this goal, it is important to identify all references within the same document (i.e., within-document coreference resolution). An intra-document coreference system can be used to identify each reference and to decide if these references refer to a single individual or multiple entities. Some techniques (e.g.,~\cite{rao2010streaming,green2012entity}) create a coreference chain for each unique entity within a document and then group related coreference chains in similar clusters. Then, they use a streaming clustering approach with common coreference similarity computations to achieve high performance on large datasets. The proposed method is designed to support both entity disambiguation and name variation operating in a streaming setting, in which documents are processed one at a time and only once.
The state-of-the-art in clustering has been discussed in previous publications~\cite{GooiA04,sekine2009named,nadeau2007survey}. Many of these approaches rely on mention (string) matching, syntactic features, and linguistic resources like English WordNet~\cite{stark1998wordnet}. The classic works on clustering~\cite{bagga1998algorithms,GooiA04} adapted the Vector Space Model (VSM\footnote{Vector space model or term vector model is an algebraic model for representing text documents (and any objects, in general) as vectors of identifiers}) or deployed different information retrieval techniques for entity disambiguation and clustering. Such works showed that clustering documents by their domain specific attributes such as domain genre will affect the effectiveness of cross-document coreferencing.
Some extensions to VSM for cross-document coreference clustering have been proposed in~\cite{mann2003unsupervised,chen2007towards}. Furthermore, supervised approaches~\cite{black1998facile}, semi-supervised approaches~\cite{ando2005high}, and unsupervised approaches~\cite{elsner2009structured} have used clustering to group together different nominals referring to the same entity.
In particular, approaches to cross document coreference resolution have first constructed a vector space representation derived from local (or global) contexts of entity mentions in documents and then performed some form of clustering on these vectors. Most of these approaches focused on disambiguating personal names.
Another line of related work, e.g.,~\cite{fleischman2004multi,MayfieldADEE09}, added a discriminative pairwise mention classifier to a VSM-like model. For example, Mayfield et al.~\cite{MayfieldADEE09} clustered the resulting entity pairs by eliminating any pair with an SVM output weight of less than 0.95, and then treated each of the connected components in the resulting graph as a single entity. Ah-Pine et al.~\cite{ah2009clique} proposed a clique-based clustering method based upon a distributional approach, which allows one to extract, analyze, and discover highly relevant information for corpus-specific NE annotation. Another line of related work~\cite{green2011entity,aktolga2008cross} proposed techniques for clustering text mentions across documents and languages simultaneously. Such techniques may produce cross-lingual entity clusters. Some later work~\cite{ni2010enhancing,attardi2010tanl} relies on the use of extremely large corpora which allow very precise, but sparse features. For example, Ni et al.~\cite{ni2010enhancing} enhanced the open-domain classification and clustering of named entities using linked data approaches.
The dynamic clustering approach [9] observes a set of points from a potentially infinite set X, one at a time, in order to maintain a fixed number of clusters while minimizing the maximum cluster radius (i.e., the radius of the smallest ball containing all points of the cluster). This approach consists of two stages: update and merge. Update adds points to existing clusters or creates new clusters, while merge combines clusters to prevent their number from exceeding a fixed limit. Compared to the agglomerative clustering approach (which has quadratic cost), streaming clustering provides potentially linear performance in the number of observations, since each document need only be examined a single time.
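The update/merge scheme can be sketched for one-dimensional points as follows; the distance threshold for opening a new cluster is an illustrative parameter, not part of the cited formulation:

```python
def stream_cluster(points, k, threshold):
    """One-pass sketch of update/merge streaming clustering (1-D points).
    Update: add the point to the nearest cluster if it lies within the
    threshold of that cluster's mean, else open a new cluster.
    Merge: while more than k clusters exist, merge the two closest ones."""
    clusters = []  # each cluster is a list of member points
    for p in points:
        # --- update stage ---
        best = min(clusters, key=lambda c: abs(p - sum(c) / len(c)),
                   default=None)
        if best is not None and abs(p - sum(best) / len(best)) <= threshold:
            best.append(p)
        else:
            clusters.append([p])
        # --- merge stage ---
        while len(clusters) > k:
            pairs = [(abs(sum(a) / len(a) - sum(b) / len(b)), i, j)
                     for i, a in enumerate(clusters)
                     for j, b in enumerate(clusters) if i < j]
            _, i, j = min(pairs)
            clusters[i].extend(clusters.pop(j))
    return clusters
```

Each point is examined once in the update stage, which is what gives the streaming approach its near-linear behavior in the number of observations.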
\section{CDCR and Big Data}
\label{chap5}
The ability to harness the ever increasing amounts of data will enable us to understand what is happening in the world. In this context, big data brings two main characteristics together: (i)~massive amounts of detailed information; and (ii)~advanced analytics, including artificial intelligence, natural language processing, data mining, statistics, and so on. Generating huge amounts of metadata to imbue the data with additional semantics will form part of the big data challenges in CDCR. For example, `Barack Obama' can be a student at Harvard Law School in one period of time and the president of the United States in another. More specifically, entities and their mentions may carry massive amounts of detailed information which need to be analyzed over time.
Big data has raised various challenges in different tasks of information extraction, including CDCR. In the entity identification phase, entity extraction subtasks such as format analysis, tokenisation, gazetteer lookup, and grammar rules would have to be applied to a huge number of documents. This is challenging as, in terms of scalability, entity extraction outputs more data than it takes in. For example, as illustrated in Table~\ref{fig:dataset}, only 6600 documents provide more than two million entities. In contrast, the English Gigaword dataset contains more than nine million documents and will produce orders of magnitude more information.
Currently, the dominant methods for coreference measure compatibility between pairs of mentions. These suffer from a number of drawbacks, including difficulties scaling to large numbers of mentions and limited representational power~\cite{fastCoreference1}. For example, as illustrated in Table~\ref{fig:dataset}, for more than 30,000 extracted named entities, around 900 million pairs can be generated. In particular, in terms of scalability, the cost of pairwise entity comparison grows quadratically with the number of mentions across documents.
Recent research~\cite{fastCoreference1,wellner2004integrated,ng2010supervised,wick2009entity} has studied methods that measure compatibility between mention pairs (i.e., the dominant approach to coreference) and showed that these approaches suffer from a number of drawbacks, including difficulties scaling to large numbers of mentions and limited representational power. For example, Wick et al.~\cite{fastCoreference1} proposed to replace the pairwise approaches with more expressive and highly scalable alternatives, e.g., discriminative hierarchical models that recursively partition entities into trees of latent sub-entities. Wellner et al.~\cite{wellner2004integrated} proposed an approach to integrated inference for entity extraction and coreference based on conditionally-trained undirected graphical models.
Luo et al.~\cite{luo2004mention} proposed an approach for coreference resolution which uses the Bell tree to represent the search space and casts the coreference resolution problem as finding the best path from the root of the Bell tree to the leaf nodes.
Wick et al.~\cite{wick2009entity} proposed a discriminatively-trained model that jointly performs coreference resolution and canonicalization, enabling features over hypothesized entities.
Finally, in the classification step, various similarity metrics should be calculated for all generated entity pairs, and then the huge number of coreferent entities should be clustered and placed in the same equivalence class. To address these challenges, and to effectively classify and cluster this gigantic number of entities and pairs, parallel and distributed architectures have become popular. MapReduce~\cite{MapReduce} is a distributed computing framework introduced by Google with the goal of simplifying the process of distributed data analysis.
The MapReduce programming model consists of two functions called Map and Reduce. Data are distributed as key-value pairs on which the Map function computes a different set of intermediate key-value pairs. An intermediate Shuffle phase groups the values around common intermediate keys. The Reduce function then performs computation on the lists of values with the same key. The resulting set of key-value pairs from the reducers is the final output. Apache Hadoop~\cite{Hadoop} is the most popular open-source implementation of MapReduce; it provides a distributed file system (i.e., HDFS\footnote{http://hadoop.apache.org/}) as well as a high-level language for data analysis, i.e., Pig\footnote{http://pig.apache.org/}. Hadoop~\cite{Hadoop} can be used to build scalable algorithms for pattern analysis and data mining. This has been demonstrated by recent research~\cite{elsayed2008pairwise,pantel2009web,sarmento2009approach,singh2011large,kolb2012dedoop} that has used MapReduce~\cite{MapReduce} for processing huge amounts of documents in a massively parallel way.
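The map-shuffle-reduce pipeline can be simulated in-process in a few lines; word count is shown as the canonical example:

```python
from collections import defaultdict

def run_mapreduce(records, mapper, reducer):
    """Minimal in-process simulation of the MapReduce model:
    map -> shuffle (group values by key) -> reduce."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):  # Map phase
            groups[key].append(value)      # Shuffle: group by key
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count, the canonical MapReduce example.
mapper = lambda doc: [(word, 1) for word in doc.split()]
reducer = lambda word, counts: sum(counts)
```

In Hadoop, the same mapper and reducer would run on many machines, with the framework performing the shuffle and handling fault tolerance.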
Elsayed et al.~\cite{elsayed2008pairwise} proposed a MapReduce algorithm for computing pairwise document similarity in large document collections. The authors focused on a large class of document similarity metrics that can be expressed as an inner product of term weights. They proposed a two-step solution to the pairwise document similarity problem: (i)~Indexing, where a standard inverted index algorithm~\cite{frakes1992information} associates each term with a list of document identifiers for documents that contain it, along with the associated term weights; and (ii)~Pairwise Similarity, where the MapReduce mapper generates key tuples corresponding to pairs of document IDs in the postings, and each key tuple is associated with the product of the corresponding term weights.
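A sequential sketch of the two steps, with raw term frequency standing in for the term weight (a simplification of the weighting used in the cited work):

```python
from collections import defaultdict
from itertools import combinations

def build_index(docs):
    """Step 1 (indexing): term -> list of (doc_id, term_weight).
    Raw term frequency stands in for the term weight here."""
    index = defaultdict(list)
    for doc_id, text in docs.items():
        counts = defaultdict(int)
        for term in text.split():
            counts[term] += 1
        for term, tf in counts.items():
            index[term].append((doc_id, tf))
    return index

def pairwise_similarity(index):
    """Step 2: for every posting pair under the same term, accumulate the
    product of weights -- the inner-product similarity of the two docs."""
    sims = defaultdict(float)
    for postings in index.values():
        for (d1, w1), (d2, w2) in combinations(postings, 2):
            sims[tuple(sorted((d1, d2)))] += w1 * w2
    return dict(sims)
```

In the MapReduce version, each term's postings list is handled by a mapper that emits the document-pair keys, and reducers sum the partial products.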
A scalable MapReduce-based implementation based on distributional similarity has been proposed in~\cite{pantel2009web}, where the approach follows a generalized sparse-matrix multiplication algorithm~\cite{sarawagi2004efficient}. The MapReduce plan uses the Map step to start $M*N$ Map tasks in parallel, each caching $1/M_{th}$ part of $A$ as an inverted index and streaming $1/N_{th}$ part of $B$ through it. In this approach, each part of $A$ is processed $N$ times, and each part of $B$ is processed $M$ times.
A multi-pass graph-based clustering approach to large-scale named-entity disambiguation has been proposed in~\cite{sarmento2009approach}. The proposed MapReduce-based algorithm is capable of dealing with an arbitrarily high number of entity types and is able to handle unbalanced data distributions while producing correct clusters for both dominant and non-dominant entities. Algorithms are applied to the constructed clusters for assigning small clusters to big clusters, merging small clusters, and merging big and medium clusters. According to these related works, MapReduce algorithm design can lead to data skew and the curse of the last reducer, and consequently careful investigation is needed when mapping an algorithm into a MapReduce plan.
A distributed inference approach that uses parallelism to enable large-scale processing has been proposed in~\cite{singh2011large}. The approach uses a hierarchical model of coreference that represents uncertainty over multiple granularities of entities, and facilitates more effective approximate inference for large collections of documents. The authors divided the mentions and entities among multiple machines and proposed moves of mentions between entities assigned to the same machine, which ensures that all mentions of an entity are assigned to the same machine.
Kolb et al.~\cite{kolb2012dedoop} proposed a tool called Dedoop (Deduplication with Hadoop) for MapReduce-based entity resolution of large datasets. Dedoop automatically transforms the entity resolution workflow definition into an executable MapReduce workflow. Moreover, it provides several load balancing strategies in combination with its blocking techniques to achieve balanced workloads across all employed nodes of the cluster.
\section{CDCR Tools and Techniques Evaluation}
\label{sec6}
\subsection{Evaluation Dimensions}
\label{chapEval}
Cross-Document Coreference Resolution (CDCR) is the task of identifying entity mentions (e.g., persons, organizations or locations) across multiple documents that refer to the same underlying entity. An important problem in this task is how to evaluate a system's performance.
There are two requirements that should lie at the heart of the CDCR task: (i)~effectiveness, which concerns achieving a high-quality coreference result. For the evaluation of accuracy, well-known measures such as \emph{precision} (the fraction of retrieved instances that are relevant) and \emph{recall} (the fraction of relevant instances that are retrieved)~\cite{salton1986introduction} can be used; and (ii)~efficiency, which concerns performing the coreference resolution as fast as possible for large datasets.
In this context, a good performance metric should have the following two properties~\cite{luo2005coreference}:
\begin{description}
\item \emph{discriminativity}: which is the ability to differentiate a good system from a bad one. For example, precision and recall have been proposed to measure the effectiveness of information retrieval and extraction tasks, where high recall means that an algorithm returned most of the relevant results and high precision means that an algorithm returned substantially more relevant results than irrelevant ones;
\item \emph{interpretability}: which emphasizes that a good metric should be easy to interpret. In particular, there should be an intuitive sense of how good a system is when a metric suggests that a certain percentage of coreference results are correct. For example, when a metric reports 95\% or above correct for a system, we would expect that the vast majority of mentions are in the right entities or coreference chains;
\end{description}
For the evaluation of accuracy, well-known measures such as precision (the fraction of retrieved instances that are relevant) and recall (the fraction of relevant instances that are retrieved)~\cite{salton1986introduction} can be used. As the complementary to precision/recall, some approaches such as link-based F-measure~\cite{vilain1995model}, count the number of common links between the truth (or reference) and the response. In these approaches, the link precision is the number of common links divided by the number of links in the system output, and the link recall is the number of common links divided by the number of links in the reference. The main shortcoming of these approaches is that they fail to distinguish system outputs with different qualities: they may result in higher F-measures for worse systems.
Some other value-based metrics, such as ACE-value~\cite{nist2003ace}, count the number of false alarms, the number of misses, and the number of mistaken entities. In this context, they associate each error with a cost factor that depends on things such as entity type (e.g., location and person) as well as mention level (e.g., name, nominal, and pronoun).
The main shortcoming of these approaches is that they are hard to interpret. For example a system with 90\% ACE-value does not mean that 90\% of system entities or mentions are correct: the cost of the system, relative to the one outputting zero entities is 10\%. To address this shortcoming, approaches such as Constrained Entity-Aligned F-Measure (CEAF)~\cite{luo2005coreference} have been proposed to measure the quality of a coreference system where an intuitively better system would get a higher score than a worse system, and is easy to interpret.
B-cubed metric~\cite{bagga1998algorithms}, a widely used approach, proposed to address the aforementioned shortcomings by first computing a precision and recall for each individual mention and then taking the weighted sum of these individual precisions and recalls as the final metric. The key contributions of this approach include: promotion of a set-theoretic evaluation measure, B-CUBED, and the use of TF/IDF\footnote{tf/idf, term frequency/inverse document frequency, is a numerical statistic which reflects how important a word is to a document in a collection or corpus.} weighted vectors and `cosine similarity'\footnote{Cosine similarity is a measure of similarity which can be used to compare entities/documents in text mining. In addition, it is used to measure cohesion within clusters in the field of data mining.} in single-link greedy agglomerative clustering. In particular, B-Cubed looks at the presence/absence of entities relative to each of the other entities in the equivalence classes produced: the algorithm computes the precision and recall numbers for each entity in the document, which are then combined to produce final precision and recall numbers for the entire output.
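A sketch of the B-Cubed computation with equal mention weights, where each partition is given as a list of sets of mention identifiers:

```python
def b_cubed(truth, response):
    """B-Cubed precision/recall: per-mention overlap between the mention's
    truth cluster and response cluster, averaged over all mentions."""
    def cluster_of(partition):
        return {m: frozenset(c) for c in partition for m in c}
    t, r = cluster_of(truth), cluster_of(response)
    mentions = list(t)
    precision = sum(len(t[m] & r[m]) / len(r[m]) for m in mentions) / len(mentions)
    recall = sum(len(t[m] & r[m]) / len(t[m]) for m in mentions) / len(mentions)
    return precision, recall
```

Merging two truth entities into one response cluster lowers precision while leaving recall at 1, matching the intuition that B-Cubed penalizes over-merging on the precision side.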
\subsection{Datasets}
\label{sec:datasets}
Measuring the effectiveness of the CDCR task on large corpora is challenging and requires large datasets providing a sufficient level of ambiguity (the ability to express more than one interpretation) and sound ground truth (the accuracy of the training set's classification for supervised learning techniques). Several manually/automatically labeled datasets have been constructed for training and evaluating coreference resolution methods; however,
supervising CDCR is challenging as it has an exponential hypothesis space in the number of mentions. Consequently, manual annotation is time-consuming and expensive, and results in a small number of ground-truth labels.
A few publications~\cite{bentivogli2008creating,day2008corpus,Bagga1998} introduced manually labeled, small datasets containing high ambiguity, which makes it very hard to evaluate the effectiveness of CDCR techniques. Several automatic methods for creating CDCR datasets have been proposed to address this shortcoming. For example, Google recently released the Wikilinks Corpus\footnote{http://googleresearch.blogspot.com.au/2013/03/learning-from-big-data-40-million.html}~\cite{singh12:wiki-links}, which includes more than 40 million disambiguated mentions over 3 million entities within around 10 million documents. Other examples of automatically labeled large datasets include~\cite{niu2004weakly,GooiA04,Sameerabs2010,spitkovsky2012cross}. In the following, we provide more details about the TAC-KBP, John Smith, ACE, reACE, English Gigaword, and Google Wikilinks datasets.
\begin{description}
\item[{\bf John Smith corpus~\cite{Bagga1998}.}] This dataset is one of the first efforts to create corpora for training and evaluating cross-document coreference resolution algorithms. The corpus is a highly ambiguous dataset consisting of 197 articles from the 1996 and 1997 editions of the
New York Times. The relatively common name `John Smith' was used to find documents that were about different individuals in the news.
\item [{\bf ACE (2008) corpus~\cite{extraction2008evaluation}.}] The most recent Automatic Content Extraction (ACE) evaluation took place in 2008, where the dataset includes approximately 10,000 documents from several genres (predominantly newswire). As a result of the ACE participation (participants were expected to cluster person and organization entities across the entire collection), a selected set of about 400 documents was annotated and used to evaluate system performance.
\item [{\bf reACE~\cite{hachey2012datasets}.}] The dataset was developed at the University of Edinburgh which consists of English broadcast news and newswire data originally annotated for the ACE (Automatic Content Extraction) program to which the Edinburgh Regularized ACE (reACE) mark-up has been applied. In order to provide a sufficient level of ambiguity and reasonable ground-truth, the dataset includes annotation for: (1)~a refactored version of the original data to a common XML document type; (2)~linguistic information from LT-TTT (a system for tokenizing text and adding markup) and MINIPAR (an English parser); and (3)~a normalized version of the original RE markup that complies with a shared notion of what constitutes a relation across domains. Similar to ACE and John Smith corpus, this dataset contains few annotated documents and cannot be used to evaluate the efficiency of big-data approaches in CDCR.
\item [{\bf English Gigaword}] is a comprehensive archive of newswire text data that has been acquired over several years by the LDC at the University of Pennsylvania. The fifth edition of this dataset includes seven distinct international sources of English newswire and contains more than 9 million documents. This large dataset is not annotated but can be used to assess the efficiency of the CDCR approaches.
\item [{\bf Google's Wikilinks Corpus~\cite{singh12:wiki-links}.}] This dataset comprises 40 million mentions over 3 million entities gathered using an automatic method based on finding hyperlinks to Wikipedia from a web crawl and using anchor text as mentions~\cite{singh12:wiki-links}. The Google search index has been used to discover the mentions that belong to the English language. The dataset provides the URLs of all the pages that contain labeled mentions, the actual mentions (i.e., the anchor text), the target Wikipedia link (entity label), and the byte offsets of the links on the page. Similar to Wikilinks, the \emph{TAC-KBP corpus}~\cite{mcnamee2009overview} links entity mentions to corresponding Wikipedia-derived knowledge base nodes, focusing on ambiguous person, organization, and geo-political entities mentioned in newswire, and required systems to cope with name variation and name disambiguation. The dataset contains over 1.2 million documents, primarily newswire.
\end{description}
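The Wikilinks construction method described above (finding hyperlinks to Wikipedia and using the anchor text as mentions) can be sketched in a few lines of Python using the standard library's HTML parser; the HTML snippet fed to it is purely illustrative:

```python
from html.parser import HTMLParser

class WikiMentionParser(HTMLParser):
    """Collect (anchor_text, target) pairs for links pointing to Wikipedia,
    mimicking how the Wikilinks corpus labels mentions."""
    def __init__(self):
        super().__init__()
        self._href = None   # Wikipedia target of the link currently open
        self._text = []     # anchor text fragments of that link
        self.mentions = []  # collected (mention, entity_label) pairs
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if "wikipedia.org/wiki/" in href:
                self._href = href
                self._text = []
    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)
    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.mentions.append(("".join(self._text).strip(), self._href))
            self._href = None

p = WikiMentionParser()
p.feed('<p><a href="https://en.wikipedia.org/wiki/Barack_Obama">Obama</a> spoke.</p>')
```

After the call, \texttt{p.mentions} holds the mention ``Obama'' labeled with its Wikipedia entity URL; a full pipeline would also record the page URL and byte offsets, as the released corpus does.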
\subsection{Tools for Entity Identification and their evaluation}
In this section, we assess a set of named entity extraction systems including OpenNLP, Stanford-NLP, LingPipe, Supersense Tagger, AFNER, and AlchemyAPI.
Table~\ref{tools} illustrates a set of information extraction tools and their applications.
Table~\ref{tools_step} depicts the CDCR tasks and the tools that can be leveraged in each phase.
The assessment only considers the names of persons, locations, and organizations. The motivation behind this assessment is to provide a complementary vision of the results of domain-independent systems that permit the processing of texts in a common language; English is the language selected for this assessment. A brief description of the selected tools follows.
\paragraph{\bf Stanford-NLP}\footnote{http://nlp.stanford.edu/},
is an integrated suite of natural language processing tools for English in Java, including tokenization, part-of-speech tagging, named entity recognition, parsing, and coreference. Stanford NER provides a general implementation of linear chain Conditional Random Field (CRF) sequence models, coupled with well-engineered feature extractors for Named Entity Recognition. Each model is dependent on the language and entity type it was trained for, and a number of pre-trained models trained on various freely available corpora are offered.
\paragraph{\bf OpenNLP}\footnote{http://opennlp.apache.org/},
is a machine learning based toolkit for the processing of natural language text. It supports the most common NLP tasks, such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing, and coreference resolution. The OpenNLP Name Finder can detect named entities and numbers in text. The Name Finder needs a model to be able to detect entities; the model is dependent on the language and entity type it was trained for. The OpenNLP project offers a number of pre-trained name finder models that are trained on various freely available corpora. The OpenNLP engine reads the text content and leverages the sentence detector and name finder tools bundled with statistical models trained to detect occurrences of named entities.
The OpenNLP tools are statistical NLP tools including a sentence boundary detector, a tokenizer, a POS tagger, a phrase chunker, a sentence parser, a name finder and a coreference resolver. The tools are based on maximum entropy models. The OpenNLP tools can be used as standalone (in which the output will be a single text format) or as plugins with other Java frameworks including UIMA (in which the output will be in XML metadata Interchange (XMI) format). It is possible to pipe output from one OpenNLP tool into the next, e.g., from the sentence detector into the tokenizer.
\begin{table}
\begin{adjustwidth}{-2cm}{}
\caption{List of existing Information Extraction tools and their applications.}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.9]{./fig/Tools.pdf}\\
\end{tabular}
\label{tools}
\end{adjustwidth}
\end{table}
\begin{landscape}
\begin{table}
\begin{adjustwidth}{-0.5cm}{}
\caption{CDCR tasks and the tools which can be leveraged in each phase.}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.9]{./fig/Tools_step.pdf}\\
\end{tabular}
\label{tools_step}
\end{adjustwidth}
\end{table}
\end{landscape}
The OpenNLP sentence detector is based on the approach proposed in~\cite{OpenNLPSentence}. One obvious drawback of the classification approach is that it cannot identify sentence boundaries where there is no marker. The next step is statistical tagging. The statistical approach to tagging treats it as a multi-way classification problem. The OpenNLP POS (part-of-speech) tagger is based on the approach proposed in~\cite{OpenNLP_POS}. After the OpenNLP tagger was developed, Toutanova and Manning~\cite{StanfordNLP_POS} proposed approaches for improving the accuracy of maximum entropy taggers; the Stanford-NLP POS tagger is based on this latter~work.
Chunking (also known as partial parsing) creates very shallow trees representing simple, flat phrase structure (mainly noun phrases). The basic approach in chunking is to exploit the work already done by the POS tagger in order to identify simple phrases by recognizing sequences of POS tags.
The OpenNLP tools include a chunker, which uses a maximum entropy model to recognize patterns in the POS tags produced by the OpenNLP tagger; Stanford-NLP does not provide a chunker.
The Stanford parser is actually a set of alternative parsing algorithms and statistical models. It was developed in order to compare and evaluate different techniques.
\paragraph{\bf LingPipe}\footnote{http://alias-i.com/lingpipe/},
is a toolkit for processing text using computational linguistics. LingPipe is used to detect named entities in news, classify Twitter search results into categories, and suggest correct spellings of queries.
It includes multi-lingual, multi-domain, and multi-genre models as well as training with new data for new tasks. Moreover, it includes online training (learn-a-little, tag-a-little) and character encoding-sensitive I/O.
It offers a user interface and various demos through which it is possible to test texts. We used the latest release of LingPipe, LingPipe~4.1.0, in the assessment.
\paragraph{\bf Supersense Tagger}\footnote{https://sites.google.com/site/massiciara/},
is designed for the semantic tagging of nouns and verbs based on WordNet categories, which include a set of named entities such as persons, organizations, locations, temporal expressions, and quantities. It is based on automatic learning, offering three different models for application: CONLL, WSJ, and WNSS. The Supersense-CONLL model has been used in our evaluation.
\paragraph{\bf AFNER}\footnote{http://afner.sourceforge.net/afner.html},
is a package for named entity recognition. AFNER uses regular expressions to find simple-case named entities such as simple dates, times, speeds, etc. Moreover, it supports finding the parts of text matching listed named entities. The regular expression and list matches are then used in a `\emph{maximum entropy}'\footnote{Maximum entropy is a probability distribution estimation technique widely used for a variety of natural language tasks, such as language modeling, part-of-speech tagging, and text segmentation. Maximum entropy can be used for text classification by estimating the conditional distribution of the class variable given the document.} based classifier. Features relating to individual tokens (including list and regular expression matches) as well as contextual features are used. It also allows the addition of lists and regular expressions, as well as the training of new models. By default it is
capable of recognizing persons' names, organizations, locations, miscellanies, monetary quantities, and dates in English texts.
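A minimal Python sketch of such a regular-expression layer for ``simple case'' entities is shown below; the patterns are illustrative stand-ins, not AFNER's actual rules:

```python
import re

# Illustrative surface patterns for entities that can be found without
# a statistical model, in the spirit of AFNER's regular-expression layer.
PATTERNS = {
    "TIME": re.compile(r"\b\d{1,2}:\d{2}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "MONEY": re.compile(r"\$\d+(?:\.\d{2})?"),
}

def find_simple_entities(text):
    # Return (surface form, label, offset) triples, sorted by position.
    hits = []
    for label, pat in PATTERNS.items():
        for m in pat.finditer(text):
            hits.append((m.group(), label, m.start()))
    return sorted(hits, key=lambda h: h[2])
```

In a full system these matches would become features for the maximum entropy classifier rather than final decisions, since surface patterns alone cannot disambiguate harder entity types.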
\paragraph{\bf AlchemyAPI}\footnote{http://www.alchemyapi.com/},
utilizes natural language processing technology and machine learning algorithms to analyze content, extracting semantic meta-data: information about people, places, companies, topics, facts and relationships, authors, languages, and more. API endpoints are provided for performing content analysis on Internet-accessible web pages, posted HTML or text content. It supports multiple languages and offers comprehensive disambiguation capabilities solutions. Moreover, it can be used to identify positive, negative and neutral sentiment within HTML pages and text documents/contents as well as for extracting document-level sentiment, user-targeted sentiment, entity-level sentiment, and keyword-level sentiment.
\subsubsection{Analysis and Methodology}
\begin{table}
\caption{Main characteristics of the datasets.}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.62]{./fig/datasets.pdf}\\
\end{tabular}
\label{datasets}
\end{table}
In this assessment, we use the reACE dataset (to evaluate the effectiveness of the results) and the English Gigaword dataset (to evaluate the efficiency of the results). These datasets have been discussed in Section~\ref{sec:datasets}. Table~\ref{datasets} illustrates the main characteristics of these datasets.
The data analysis focused on comparing the results obtained by the tools for entities found in the test corpus: a set of documents from the English Gigaword corpus has been used in order to evaluate the behavior of the tools. In particular, we used part of the Agence France-Presse, English Service (\texttt{afp\_eng}) portion of the English Gigaword corpus, with a total of 492 words distributed over 5 documents and 21 paragraphs, in which more than 60 occurrences of various types of entities appear. The assessment only considers the names of persons, locations, and organizations. These entity types were distributed in various phrases in the corpus with different typography, where the entities found by one tool could coincide neither totally in number nor in semantics with their equivalent entities in other tools. Consequently, we adapted the corpus for every tool.
For the evaluation of accuracy, we use the well-known measures of precision and recall~\cite{salton1986introduction}. As discussed earlier, precision is the fraction of retrieved instances that are relevant, while recall is the fraction of relevant instances that are retrieved. In particular, precision measures the quality of the matching results, and is defined by the ratio of the correct entities to the total number of entities found:
\begin{center}
Precision $= \frac{\text{number of correct entities found}}{\text{total number of entities extracted}}$
\end{center}
Recall measures coverage of the matching results, and is defined by the ratio of the correct entities matched to the total number of all correct entities that should be found.
\begin{center}
Recall $= \frac{\text{number of correct entities found}}{\text{total number of correct entities that should be found}}$
\end{center}
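In code, these two ratios reduce to a short sketch, assuming the three counts come from comparing a tool's output against the gold annotations:

```python
def precision_recall(correct_found, total_extracted, total_correct):
    # precision: fraction of extracted entities that are correct
    # recall: fraction of all gold entities that were found
    precision = correct_found / total_extracted
    recall = correct_found / total_correct
    return precision, recall
```

For example, if a tool extracts 50 entities of which 40 are correct, and the gold standard contains 60 entities, precision is 0.8 while recall is only about 0.67.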
\begin{figure} [t]
\centering
\includegraphics[width=0.75\textwidth]{fig/PR-identification.pdf}
\caption{Precision-Recall in entity identification.}
\label{fig:PR-identification}
\end{figure}
For an approach to be effective, it should achieve both high precision and high recall. However, in reality these two metrics tend to be inversely related~\cite{salton1986introduction}. The evaluation has been realized through distinct measures of precision and recall based on: (i)~identification of the entities and false positives\footnote{In statistics, a false positive is the incorrect rejection of a true null hypothesis, which may lead one to conclude that a thing exists when really it does not; for example, that a named entity is of a specific type, e.g., Person, when the entity is not of that~type.} in the identification; (ii)~classification of entities; and (iii)~classification by the NE types that each tool recognizes. Figure~\ref{fig:PR-identification} illustrates the precision-recall results for entity identification.
\begin{figure} [t]
\centering
\includegraphics[width=0.75\textwidth]{fig/PR-classification.pdf}
\caption{Precision-Recall in entity classification.}
\label{fig:PR-classification}
\end{figure}
Figure~\ref{fig:PR-classification} illustrates the precision-recall for entity classification. Notice that, entity classification is the process of classifying entities based on their type (i.e., recognized by the tools) and is different from coreference classification (see Section~4).
Given that classification is a process that depends on the identification of entities, the f-measure in identification is always superior to that of the classification. In particular, F-measure is the harmonic mean of precision and recall:
\begin{center}
F-measure $= 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}$
\end{center}
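As a small sketch, the harmonic mean can be computed as follows, guarding against the degenerate case where both precision and recall are zero:

```python
def f_measure(precision, recall):
    # Harmonic mean of precision and recall; defined as 0 when both are 0.
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

The harmonic mean penalizes imbalance: a system with precision 1.0 but recall 0.5 scores only about 0.67, well below the arithmetic mean of 0.75.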
Figure~\ref{fig:PR-fmeasure} illustrates the F-measure in entity identification and classification.
Consistent with the precision-recall results, the F-measure values for entity identification and classification are generally observed to be similar.
\begin{figure} [t]
\centering
\includegraphics[width=0.75\textwidth]{fig/PR-fmeasure.pdf}
\caption{F-measure in entity identification and classification.}
\label{fig:PR-fmeasure}
\end{figure}
\begin{table}
\caption{Results by entity type.}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.5]{fig/PR-entityType}\\
\end{tabular}
\label{PRentityType}
\end{table}
We performed a further analysis of the false-positive errors, i.e., the elements erroneously identified as entities, as these could be more damaging in a project than partial identification or erroneous classification. To achieve this, in Table~\ref{PRentityType} we report the number of persons, locations, and organizations and the F-measure for each. In this experiment, we did not analyze the number of categories that each tool can recognize, as the utility and difficulty of recognizing some types versus others differ and demonstrate the need for a study based on entity types.
In this context, the study was carried out for the person, location, and organization types that the tools were able to recognize in the corpus. The analysis illustrated in Table~\ref{PRentityType} allows us to observe some differences from the global analysis. For example, it is remarkable that OpenNLP has an F-measure of 0.78 on the entity type Person, whilst AFNER achieves 0.65. As another example, Stanford-NLP has an F-measure of 0.89 on the entity type Location, whilst LingPipe achieves~0.41.
\subsection{Tools for Entity Classification and their evaluation}
The classification step compares pairs of entities, in which each entity is augmented with several features extracted from documents in the featurization step, and then determines whether these pairs of entities are coreferent or not. This step consists of two consecutive tasks (in Figure~\ref{fig:classificationStep}): \textit{similarity computation} and \textit{coreference decision}. The similarity computation task takes as input a pair of entities and computes the similarity scores between their features (e.g., character-, document-, or metadata-level features) using different appropriate similarity functions for the features. The coreference decision task classifies entity pairs as either ``coreferent'' or ``not coreferent'' based on the computed similarity scores between their features.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\textwidth]{fig/overallProcess2-1.pdf}
\caption{Coreference classification process.}
\label{fig:classificationStep}
\end{figure}
There are two alternative methods for the final coreference decision: (i)~Threshold-based classification: the feature similarity scores of an entity pair might be combined by taking a weighted sum or a weighted average of the scores. The entity pairs whose combined score is above a given threshold are considered as ``coreferent''; and (ii)~Machine learning-based classification: a classifier is trained with one of the machine learning techniques (e.g., SVM or decision tree) using training data, and entity pairs are classified based on the trained classifier. The similarity scores between entity pairs are used as features for classification. For the similarity computation and the threshold-based coreference decision we use the following open-source packages:
\begin{itemize}
\item \textbf{SecondString and SimMetrics: } \textit{SecondString}\footnote{http://secondstring.sourceforge.net} and \textit{SimMetrics}\footnote{http://sourceforge.net/projects/simmetrics/} are open-source packages that provide a variety of similarity functions used for comparing two feature attribute values. They provide different sets of similarity functions, e.g., \texttt{SecondString} does not provide \texttt{cosine} function supported by \texttt{SimMetrics}. Thus, we use both of the packages as we want to test different similarity functions for different cases.
\item \textbf{Weka: } \textit{Weka}~\cite{Witten2005} is a free software package under the GNU public license, which is a collection of various machine learning algorithms developed in Java. It also provides functionalities for supporting some standard data mining tasks, such as data preprocessing, classification, clustering, regression and feature selection. The package can be applied in this project if a sufficient, suitable and balanced training data is available.
\end{itemize}
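A minimal sketch of the threshold-based decision described above, assuming the per-feature similarity scores have already been computed (e.g., with SecondString or SimMetrics), and with illustrative weights and threshold:

```python
def coreference_decision(scores, weights, threshold=0.5):
    """Threshold-based decision: combine per-feature similarity scores
    by a weighted average; the pair is 'coreferent' if the combined
    score reaches the threshold."""
    total_w = sum(weights)
    combined = sum(s * w for s, w in zip(scores, weights)) / total_w
    return combined >= threshold
```

For instance, with scores \texttt{[0.9, 0.7]} and weights \texttt{[2, 1]} the combined score is about 0.83, so the pair is classified as coreferent at the default threshold.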
\subsubsection{Analysis and Methodology}
In this assessment, we use the reACE dataset (to evaluate the effectiveness of the results) and the English Gigaword dataset (to evaluate the efficiency of the results). These datasets have been discussed in Section~\ref{sec:datasets}. Figure~\ref{fig:dataset} shows the characteristics of these two datasets, indicating for each dataset the types of extracted entities, the number of involved entities, the number of available feature attributes, the number of entity pairs, and so on. Figure~\ref{fig:exampleEntities} shows some example person entities (including metadata such as document identifier, type, and title) from the two datasets.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\textwidth]{fig/datasetCharacteristics-1.pdf}
\caption{Characteristics of datasets. The entity type and the feature attribute, which are considered in the evaluation, are underlined.}
\label{fig:dataset}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{fig/datasetExample-1.pdf}
\caption{Example person entities from two datasets.}
\label{fig:exampleEntities}
\end{figure}
We measured the overall performance in terms of both \textit{efficiency} and \textit{effectiveness}. First, efficiency is commonly determined in terms of the execution time taken to compare feature attributes using similarity functions and then make coreference decisions based on the computed similarity scores. Second, effectiveness is determined with the standard measures precision, recall, and F-measure with respect to ``perfect coreference results'', which are manually determined. Let us assume that TP is the number of true positives, FP the number of false positives (wrong results), TN the number of true negatives, and FN the number of false negatives (missing results).
\begin{itemize}
\item Precision= $\frac{TP}{TP+FP}$;
\item Recall= $\frac{TP}{TP+FN}$;
\item F-measure= $\frac{2*Precision*Recall}{Precision+Recall}$;
\end{itemize}
\begin{table}
\centering
\begin{tabular}{cc}
\includegraphics[width=0.65\textwidth]{fig/executionTimes-1}\\
\end{tabular}
\caption{Execution times (in seconds) for the two datasets. The smallest and largest values are underlined.}
\label{tab:executionTime}
\end{table}
For the initial evaluations we focus on making coreference decisions for entity pairs of \texttt{Person} entity type. It should be noted that the same techniques described below would be applied to the other entity types, such as \texttt{organization}, \texttt{location}, and \texttt{date/time}. We compared feature attributes using various similarity functions. Figure~\ref{fig:dataset} indicates that several feature attributes could be used for the coreference decision (e.g., entity mention, docType, docDate, docHeadline, and docBody for the ``Gigaword'' dataset). In addition, we used the following four string similarity functions: \texttt{edit distance}, \texttt{Q-grams}, \texttt{jaccard}, and \texttt{cosine} functions. Here, edit distance and Q-grams are character-based functions while jaccard and cosine functions are token-based functions. For the \textit{Gigaword} dataset we only measured the \textit{execution time} as the perfect coreference results are not available. We applied the four similarity functions on one feature attribute (i.e., entity mention feature). For the \textit{reACE} dataset we measured the \textit{execution time} as well as the \textit{accuracy}. As in the ``Gigaword'' dataset, we used the four similarity functions in comparing the entity mention feature. \\[-15pt]
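The token-based and q-gram measures used here can be sketched compactly in Python; these are generic textbook implementations, not the exact SecondString/SimMetrics code:

```python
import math
from collections import Counter

def qgrams(s, q=2):
    # Character q-grams (bigrams by default) of a string.
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def jaccard(a_tokens, b_tokens):
    # Token-based Jaccard: overlap of the two token sets.
    a, b = set(a_tokens), set(b_tokens)
    return len(a & b) / len(a | b) if a | b else 1.0

def qgram_similarity(a, b, q=2):
    # Jaccard over character q-grams; largely insensitive to token order.
    return jaccard(qgrams(a, q), qgrams(b, q))

def cosine(a_tokens, b_tokens):
    # Token-frequency cosine similarity.
    va, vb = Counter(a_tokens), Counter(b_tokens)
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Note how the token-based measures treat ``barack obama'' and ``obama barack'' as identical, whereas a character-based measure such as edit distance would heavily penalize the reordering.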
\begin{figure}
\centering
\includegraphics[width=0.59\textwidth]{fig/fig67.png}
\vspace{-3mm}
\caption{Evaluation results with the four different similarity functions (threshold= 0.5).}
\label{fig:precisionRecall}
\end{figure}
Table~\ref{tab:executionTime} lists the execution times taken for making coreference decisions by comparing \texttt{person} entities of the two datasets. The table shows significant differences between the applied similarity functions. The token-based functions (\texttt{jaccard} and \texttt{cosine}) achieved fast execution times compared to the character-based functions (\texttt{edit distance} and \texttt{Q-grams}). This may be influenced by the algorithms of those functions, e.g., the character-based functions consider characters and their positions within strings to estimate the similarity, rather than considering tokens within strings as the token-based functions do. For both datasets, among all the functions, the \texttt{Q-grams} function is the slowest while the \texttt{cosine} function is the fastest.
When comparing 20,308 entities (412 million entity pairs) from the ``Gigaword'' dataset, an execution time of 12,364 seconds is needed with the \texttt{Q-grams} function, while only 813 seconds are needed with the \texttt{cosine} function.
Figure~\ref{fig:precisionRecall} shows the coreference quality (precision, recall, and F-measure) results for the ``reACE'' dataset with different similarity functions.
The top half shows the results obtained by applying the character-based functions on just one single feature attribute of the ``reACE'' dataset, namely the \texttt{person name} entity mention. Among the character-based functions, the \texttt{Q-grams} function (average precision: 0.87) worked better than the \texttt{edit distance} function (average precision: 0.80). The bottom half shows the results achieved by applying the token-based functions on the same feature attribute. Among the token-based functions, the \texttt{cosine} function (average precision: 0.87) achieved slightly better results compared with the \texttt{jaccard} function (average precision: 0.84). Among all the functions, the coreference decision using the \texttt{Q-grams} function performed best while the one using \texttt{edit distance} performed worst. The reason is that, if person names have multiple tokens and the tokens are ordered differently across names, the \texttt{Q-grams} function can be more effective than other character-based functions such as \texttt{edit distance}. All the functions performed reasonably well in terms of precision, but they all suffered from very low recall, which means they missed many true coreferent entity pairs that should have been contained in the returned results.
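For reference, the character-based edit distance discussed above is the classic dynamic program with $O(|a| \cdot |b|)$ cost per pair, which helps explain why character-based functions are slower than token-overlap measures; a minimal sketch:

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance, O(|a|*|b|) --
    # one reason character-based measures are slower than token overlap.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            # deletion, insertion, or substitution (free if chars match)
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]
```

Run over 412 million pairs, even this small quadratic cost per comparison adds up, consistent with the execution-time gap reported in Table~\ref{tab:executionTime}.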
\section{Conclusions and Future Work}
\label{chap6}
In this paper we discussed the central concepts, subtasks, and the current state-of-the-art in Cross-Document Coreference Resolution (CDCR) process.
We provided an assessment of existing tools/techniques for CDCR subtasks and highlighted big-data challenges in each of them to help readers identify important and outstanding issues for further investigation. Finally, we provided concluding remarks and discussed possible directions for future work.
We believe that this is an important research area, which will attract a lot of attention in the research community.
In the following, we summarize significant research directions in this area.
\\\\
\textbf{Entity Extraction and Big Data.} The entity extraction task outputs more data than it takes in. Millions of documents can be used as input to this task, and billions of entities can be extracted. In this context, both the performance of entity extraction and the accuracy of the extracted named entities should be optimized.
For example, as depicted in Figure~\ref{fig:precisionRecall} in Section~\ref{chap4}, the evaluation results on effectiveness show that the recall can be very poor compared with the precision. There is a strong need for improving the recall by exploiting more useful features and applying appropriate similarity functions to those features.
In this context, various load balancing techniques can be used to optimize the performance of MapReduce in extracting entities from a huge number of documents. Moreover, various dictionaries and knowledge bases such as YAGO, Freebase, DBpedia, and reACE can be used for training, which may help to optimize the accuracy of extracted entities.
\\\\
\textbf{Entity Pairs Filtering and Featurization of Billions of Extracted Entities.} For a huge number of extracted entities, it is generally not feasible to exhaustively evaluate the Cartesian product of all input entities and generate all possible entity pairs. To address this challenge, various blocking techniques (e.g., blocking strategies for all non-learning and learning-based match approaches) can be used to reduce the search space to the most likely matching entity pairs. Moreover, featurization of the corpus as well as the extracted entities will facilitate the filtering step and quickly eliminate those pairs that have little chance of being deemed coreferent. Similar to the entity extraction phase, generating a knowledge base from existing Linked Data systems may facilitate the featurization~step.
\\\\
\textbf{Classification of Billions of Entity Pairs.} Various machine learning techniques trained over a set of training examples can be used to classify the pairs as either coreferent or not coreferent. Different approaches have different similarity thresholds: entity pairs with a similarity above the upper classification threshold are classified as matches, pairs with a combined value below the lower threshold are classified as non-matches, and entity pairs with a matching weight between the two classification thresholds are classified as possible matches. This task is challenging as we need to investigate how different configurations impact the effectiveness and efficiency of coreference classification. Three characteristics can be considered for this configuration: (i)~which feature attributes to use for classification; (ii)~which similarity functions to use for the chosen feature attributes; and (iii)~which threshold is suitable for the classification decision.
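The two-threshold scheme just described can be sketched as follows; the threshold values are illustrative:

```python
def classify_pair(score, lower=0.4, upper=0.7):
    # Two-threshold decision: match / possible match / non-match.
    # Pairs in the middle band would be routed to manual review or
    # a more expensive classifier.
    if score >= upper:
        return "match"
    if score <= lower:
        return "non-match"
    return "possible match"
```

At scale, the middle band is the costly one, since each ``possible match'' needs further processing; tuning the band width trades off accuracy against that cost.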
\\\\
\textbf{Clustering of Billions of (Cross Document) Co-referent Entities.}
Once the individual pairs are classified, they must be clustered to ensure that all mentions of the same entity are placed in the same equivalence class.
Standard entity clustering systems commonly rely on mention (string) matching, syntactic features, and linguistic resources like the English WordNet. Challenges here include: (i)~assigning each cluster to a global entity. For example, the cluster including ``Obama, B. Obama, B.H. Obama, Barack Obama, Barack H. Obama, etc.'' should be considered as mentions of the global entity `President of the United States'. To achieve this, Linked Data systems can be used to help identify the entities; and (ii)~when coreferent text mentions appear in different languages, standard entity clustering techniques cannot be easily applied.
\newpage
\bibliographystyle{plain}
\section*{Introduction}
In his derivation of the limiting mass of a white dwarf star using hydrostatic equilibrium, Chandrasekhar\cite{ch1}-\cite{ch2} assumed that the electrons inside the star can be modelled as {\it free} quantum mechanical particles inside a box of length equal to the radius of the star. The electrons are of course assumed to be trapped in an infinite well, since, to escape outside from the boundary, the electrons would need to overcome a huge Coulombic potential. Such a potential is clearly absent inside, as the star as a whole is electrically neutral. But inside the box, unlike the electrostatic interaction, gravity does {\it not} cancel out. The electrons inside the well are not free but are actually residing in a {\it background gravitational potential}. Since this background potential is admittedly weak, the electrons can be more realistically modelled approximately as free particles inside a box, but {\it with an additional perturbation} by this background gravitational potential. The aim of this article is to explore the effects of such a background potential on the mass limit. We argue that a correction to this limiting mass, albeit a small one, does indeed emerge within our proposed modification of electron dynamics inside the star.
We begin with a brief overview of white dwarfs and the Chandrasekhar mass limit, presenting a somewhat simpler but consistent derivation of the mass limit. Next, we introduce the proposed modification, due to the self-gravity of the star, to the free electron Hamiltonian, within a sort of mean field picture. This modification is seen to be a weak effective gravitational potential. Resorting to first order time-independent perturbation theory, the effect of this effective potential on the limiting mass of the star is then estimated, following the steps of Chandrasekhar's original derivation.
\section{Brief Overview of White Dwarfs and the Chandrasekhar Limit}
White Dwarfs have been a phenomenon of great importance to both theoretical physicists and astrophysicists since their discovery in 1910. A White Dwarf is a stellar remnant, the fate of certain stars after they have exhausted all of their nuclear fuel. They are extremely dense, with masses comparable to that of the Sun but a volume comparable to that of the Earth. Sir Arthur Eddington, a leading astrophysicist of his time, formulated the first theoretical challenge that came from the existence and stability of White Dwarfs, commonly known as ``Eddington's paradox'', which can be stated in the simple and clever words: ``A star needs energy to cool''.
The meaning of this statement is as follows. When a star runs out of nuclear fuel, it starts to collapse under its own gravity. Since there is no source of energy left, the radiating star loses energy and its temperature falls. As the temperature falls, we expect the ionized stellar material to recombine and form normal atoms. But in order to form such atoms the star must expand to the density of normal atoms at that temperature, working against the gravitational potential energy. In the words of Eddington, ``When the star cools down and regains the normal density ordinarily associated with solids, it must expand and do work against gravity. {\it The star will need energy to cool.}''
But a white dwarf's stellar material would have radiated so much energy that it cannot expand to normal densities that are associated with atoms at such low temperatures. Such a star cannot be stable since it has nothing to sustain it any longer. Thus, according to such a scenario, the star will continue to collapse unabatedly, and one is led to conclude that stable white dwarfs ought not to exist. Yet, the existence of stable white dwarfs is in no doubt since this has been confirmed observationally for decades. Herein lies the paradox.
This paradox was resolved by R.H. Fowler in his 1926 landmark paper entitled ``Dense matter''. Fowler emphasized that as the temperature of the star falls, the electrons inside will become degenerate and, by virtue of the Pauli exclusion principle, will have a zero point energy and a corresponding degeneracy pressure which would prevent further collapse.
In the following section, a somewhat simpler derivation of Fowler's results is worked out. The conclusions were known since 1926 and are given here just for the sake of completeness.
\subsection{Fowler's Results}
In a completely degenerate electron gas, the density of states is calculated in the following way: Since the electrons cannot escape the volume of the star (due to Coulomb interaction), they are assumed to be trapped in an infinite square well inside which they are free (since the star as a whole is electrically neutral,
the average electrostatic interaction cancels out inside the star). Under these circumstances, electrons in an infinite square well potential have the following energy
\begin{equation}
E(n)=\frac{n^{2}\pi^{2}\hbar^{2}}{2ma^{2}} ~. \label{ener}
\end{equation}
where,
\begin{equation}
n^{2}=n_{x}^{2}+n_{y}^{2}+n_{z}^{2} \nonumber
\end{equation}
and $n$ characterizes the energy level. To calculate the density of states, we imagine a sphere in the space of the integers $n_{x}, n_{y}, n_{z}$; since these are positive, each allowed state occupies a unit cube in the first octant of the sphere in $n$-space. The number of states between energy level $n$ and $n+dn$ is:
\begin{equation}
g(n)dn= \frac{\pi n^{2} }{2}dn ~\label{denst}
\end{equation}
The occupation number of electrons in a state with energy $E$ is given by the Fermi-Dirac function, which becomes a step function as the
temperature tends to 0. The number of electrons occupying the energy levels between $n$ and $n+dn$ thus becomes
\[
g(n)=\begin{cases}
\frac{\pi n^{2} }{2} & \text{for } n\leq n_{f} \\
0 & \text{for } n>n_{f}
\end{cases}
\]
where $n_{f}$ is the Fermi occupation number, with the Fermi energy $E_{f}$ being defined from $E_f = E(n_f)$, using (\ref{ener}). Since each state can contain only two electrons (opposite spin) by virtue of the Pauli exclusion principle, the total number of electrons is given by:
\begin{eqnarray}
N =\int_{0}^{n_{f}} 2g(n)dn = \frac{\pi}{3} n_{f}^3 ~\Rightarrow~ n_{f}=\left( \frac{3N}{\pi} \right)^{1/3} \label{occ}
\end{eqnarray}
Now we can compute the total kinetic energy of the electrons
\begin{eqnarray}
E_{kin} = \int_{0}^{n_{f}} 2g(n)E(n)dn =\frac{\pi ^{3} \hbar^{2}}{10ma^{2}} \left( \frac{3N}{\pi } \right)^{5/3} ~\label{kine}
\end{eqnarray}
But the total number of electrons, $N$, is equal to the total number of nuclei, which in turn is equal to the mass of the star divided by the mass of one nucleus:
\begin{equation}
E_{kin}=\frac{\pi^{3}\hbar^{2}}{10ma^2} \left( \frac{3M}{\pi A_m } \right)^{5/3}
\end{equation}
where $M$ is the mass of the star, $a$ is the radius of the white dwarf and $A_{m}$ is the average mass of one nucleus of constituent particle of the star.
Thus,
\begin{equation}
E_{kin}=C\frac{M^{\frac{5}{3}}}{a^{2}} ~{\rm where} ~ C=\frac{\pi^{3} \hbar^{2}}{10m} \left( \frac{3}{\pi A_m } \right)^{5/3} ~. \label{kinec}
\end{equation}
The total energy $E_{tot}$ of the star is the sum of the kinetic and the self gravitational potential energy of the star,
\begin{eqnarray}
E_{Self-pot}&=& - \frac{3GM^{2}}{5a} \nonumber \\
E_{tot} &=& C\frac{M^{5/3}}{a^{2}}-\frac{3GM^{2}}{5a}
\end{eqnarray}
Minimizing the total energy gives us the mass radius equilibrium relation for White Dwarfs :
\begin{equation}
\frac{dE_{tot}}{da}=0~\Rightarrow M = \left( \frac{10C}{3Ga} \right)^3
\end{equation}
where $C$ is given by equation (\ref{kinec}). The conclusion drawn from Fowler's work resolves Eddington's paradox. Of course a star needs energy to cool. At the low temperatures available to the star after it has radiated away a substantial portion of its thermal energy arising from thermonuclear fusion, the electrons become degenerate. By virtue of the Pauli exclusion principle this degenerate electron gas must have a `zero-point' energy, since all electrons cannot have vanishing energy in the ground state of the system. This energy leads to an equilibrium condition for White Dwarfs of all masses. The radius of course varies inversely with the cube-root of the mass.
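The mass-radius relation above is easy to sanity-check numerically. The sketch below (Python, SI units) takes $A_m = 2m_p$, i.e. roughly two nucleons per electron, which is an illustrative assumption; the uniform-density treatment fixes only orders of magnitude:

```python
import math

# SI constants (CODATA values, rounded)
hbar = 1.055e-34    # J s
G    = 6.674e-11    # m^3 kg^-1 s^-2
m_e  = 9.109e-31    # kg
m_p  = 1.673e-27    # kg
M_sun = 1.989e30    # kg

A_m = 2.0 * m_p     # illustrative: ~2 nucleons per electron

# C = pi^3 hbar^2 / (10 m) * (3 / (pi A_m))^(5/3), as in Eq. (7)
C = (math.pi**3 * hbar**2 / (10.0 * m_e)) * (3.0 / (math.pi * A_m)) ** (5.0 / 3.0)

def radius(M):
    """Equilibrium radius from M = (10C/(3Ga))^3, i.e. a = 10C / (3 G M^{1/3})."""
    return 10.0 * C / (3.0 * G * M ** (1.0 / 3.0))

a_sun = radius(M_sun)  # radius of a one-solar-mass white dwarf (order of magnitude)
```

With these inputs the radius comes out at the $10^4$-km scale, i.e. Earth-like, and shrinks as the mass grows, exactly as the $a \propto M^{-1/3}$ relation dictates.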
\section{Chandrasekhar Limit}
But as the mass increases, so does the density of the star, as its radius falls inversely with the cube-root of its mass. At very high densities, the electrons at the threshold energies will have a velocity close to the speed of light and hence become relativistic, invalidating equation (\ref{ener}). This scenario was worked out by Subrahmanyan Chandrasekhar, and a brief overview of it is now given. From Fowler's result it is clear that the heavier the star is, the smaller and hence denser it is. The Fermi momentum - the threshold momentum of the electrons - is an increasing function of density:
\begin{eqnarray}
p_{f}=\frac{ \pi \hbar n_{f}}{a} = \left( \frac{ 4\pi^3 \hbar^3 \rho}{A_{m}} \right)^{1/3} ~, \label{fermom}
\end{eqnarray}
For the average White Dwarf, $ \rho \simeq 4.19 \times 10^{6}~gm/cc$. This corresponds to momentum of the order of 1.3mc. At such high momenta the electrons have to be treated relativistically. Chandrasekhar showed that in the relativistic case, the White Dwarfs have a limiting mass. For any White Dwarf heavier than that mass, the electron degeneracy will not be able to prevent the collapse. We present below a somewhat simpler order of magnitude derivation of the limiting mass than the original version.
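The claim that the electrons turn relativistic can be checked directly from the expression for $p_f$ above. In the sketch below the choice $A_m = 2m_p$ is an illustrative assumption, and the exact prefactor depends on the geometric counting, so only the order of magnitude, $p_f \sim m_e c$, is meaningful:

```python
import math

# SI constants (CODATA values, rounded)
hbar = 1.055e-34   # J s
m_e  = 9.109e-31   # kg
c    = 2.998e8     # m/s
m_p  = 1.673e-27   # kg

rho = 4.19e6 * 1e3   # 4.19e6 g/cm^3 converted to kg/m^3
A_m = 2.0 * m_p      # illustrative: ~2 nucleons per electron

# p_f = (4 pi^3 hbar^3 rho / A_m)^{1/3}, from Eq. (\ref{fermom})
p_f = (4.0 * math.pi**3 * hbar**3 * rho / A_m) ** (1.0 / 3.0)
x = p_f / (m_e * c)  # dimensionless: how relativistic the electrons are
```

The ratio $x$ comes out of order unity to a few, confirming that a nonrelativistic treatment fails at white-dwarf densities.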
Starting with the well-known formula for the relativistic energy-momentum relation for a free particle of mass $m$,
\begin{eqnarray}
E=\sqrt{ p^{2}c^{2}+m^{2}c^{4}} ~, \label{rele}
\end{eqnarray}
Expanding this in powers of $mc/p$ and keeping the leading term in the ultrarelativistic limit, we get:
\begin{eqnarray}
E \approx pc\left[1+O\left((mc/p)^{2}\right)\right] \simeq \frac{n \pi \hbar c}{a}
\end{eqnarray}
Using eqn. (\ref{denst}) for the density of states, we obtain
\begin{equation}
E_{kin}=\int^{n_{f}}_{0} 2E_{n}g(n)dn = \frac{C_{1}\hbar c}{a} \left( \frac{M}{A_m} \right)^{4/3}~{\rm with}~ C_1=\frac{\pi^{2}}{4}(\frac{3}{\pi} )^{4/3} ~. \label{ekin}
\end{equation}
Thus the total energy of the star is :
\begin{equation}
E_{tot}=\frac{C_{1}\hbar c}{a} \left( \frac{M}{ A_{m}} \right)^{4/3} - \frac{3GM^{2}}{5a}
\end{equation}
Minimizing this total energy, i.e., setting $(dE_{tot}/da)=0 \quad {\rm and} \quad d^{2}E_{tot}/{da^2} > 0$, we see that for {\it stable} equilibrium to exist, the radial dependence cancels out and a mass inequality emerges :
\begin{equation}
M < M_{limit}= \left( \frac{5 C_{1}}{3} \right)^{3/2} \frac{( \frac{ \hbar c}{G} )^{3/2}}{A_{m}^{2}}
\end{equation}
This limiting mass, known as the Chandrasekhar Mass Limit, sets an upper bound for the mass of a white dwarf star. Any such star heavier than this mass cannot exist in nature as it will collapse without resistance and form much more compact objects like neutron stars or black holes.
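As a dimensional cross-check, the scale $(\hbar c/G)^{3/2}/A_m^2$ appearing in the limit can be evaluated numerically; the derivation above multiplies it by a prefactor of order unity built from $C_1$. The sketch (Python, SI units; $A_m = 2m_p$ is an illustrative assumption) confirms the scale is of order a solar mass:

```python
# Dimensional Chandrasekhar scale (hbar c / G)^{3/2} / A_m^2.
hbar = 1.055e-34   # J s
c    = 2.998e8     # m/s
G    = 6.674e-11   # m^3 kg^-1 s^-2
m_p  = 1.673e-27   # kg
M_sun = 1.989e30   # kg

A_m = 2.0 * m_p    # illustrative: ~2 nucleons per electron

M_scale = (hbar * c / G) ** 1.5 / A_m**2  # kg
ratio = M_scale / M_sun
```

That a combination of $\hbar$, $c$, $G$ and a nuclear mass lands within an order of magnitude of $M_\odot$ is the essential content of the limit; the order-unity prefactor only refines it.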
In the calculation of his mass limit, Chandrasekhar, following Fowler, assumed that the electrons inside the star are free, except that they cannot escape the boundary of the star. In the next section, we argue against the practicality of this assumption and propose a correction to the limiting mass in light of a more realistic model for the electrons.
\section{Proposed Correction}
While calculating the electrons' average kinetic energy, Chandrasekhar assumed that the electrons are trapped in an infinite square well inside which they are free. But in reality, they are not. The average Coulombic interaction may cancel out due to the fact that the star as a whole is electrically neutral. But an effective gravitational potential exists inside the square well which, despite being weak, can alter the quantum mechanical properties of the electrons and hence their zero point energy. We show that the order of magnitude correction to the limiting mass is in fact computable.
An electron inside a star at a distance $r$ from the centre experiences a gravitational field ($F$) due only to the matter contained in a Gaussian sphere of radius $r$. Assuming uniform density for simplicity, the gravitational flux across this sphere is proportional to the enclosed mass $M(r) = (4/3) \pi r^3 \rho$,
\begin{equation}
\oint_{S} \vec{F} \cdot {\hat n} dS =-4\pi G~ \frac43 \pi r^3 ~\rho ~\label{gau}
\end{equation}
so that, the gravitational force at every point inside the star can be written as
\begin{eqnarray}
\vec{F}(\vec{r})=- \frac{4 \pi G \rho \vec{r}}{3}\ for \ r\leq a ~, \label{fin}
\end{eqnarray}
while, for locations outside the star,
\begin{eqnarray}
\vec{F}(\vec{r})=- \frac{4 \pi G \rho a^{3}}{3r^{2}}\hat{r}\ for \ r> a~. \label{fout}
\end{eqnarray}
This leads to an effective self-gravitational potential affecting the electron gas,
\begin{eqnarray}
V(\vec{r})=- \int_{\infty}^{r} {F}(\vec{r}) \cdot d\vec{r}
\end{eqnarray}
The potential energy $U(\vec {r} )$ is just $V(\vec{r})$ times the mass $m$ of the electron: $U(r)=(2 \pi m G \rho r^2/3) - 2 \pi G \rho m a^{2}$. Thus, in this scenario, the electrons are trapped in an infinite square well, with a weak harmonic potential inside the star. One can treat this weak internal self-gravity potential as a perturbation on the unperturbed free-electron dynamics inside the star.
The first order perturbation correction to the energy spectrum can be easily computed:
\begin{eqnarray}
E^{1}_{n} &=& \int d^3r \Psi_n^* U(r) \Psi_{n} ~\nonumber \\
&=& \frac{4 \pi G \rho}{3}(-\frac{4}{3} m a^{2}-\frac{m a^{2}}{4 n^{2} \pi^{2}}) ~. \label{en1}
\end{eqnarray}
where $ \Psi _{n}(r)=\sqrt{\frac{2}{a}}\sin \frac{n \pi r}{a} $ are the nth level eigenfunctions of the infinite square well.
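The matrix element above is easy to verify numerically. The sketch below evaluates $\int_0^a |\Psi_n|^2\, U(r)\, dr$ by a midpoint rule, treating the well one-dimensionally as in the text, with $G=\rho=m=a=1$:

```python
import math

def first_order_shift(n, G=1.0, rho=1.0, m=1.0, a=1.0, steps=100000):
    """Midpoint-rule evaluation of <psi_n| U |psi_n> for the infinite well,
    with U(r) = (2 pi G rho m / 3) r^2 - 2 pi G rho m a^2."""
    total = 0.0
    dr = a / steps
    for i in range(steps):
        r = (i + 0.5) * dr
        psi2 = (2.0 / a) * math.sin(n * math.pi * r / a) ** 2
        U = (2.0 * math.pi * G * rho * m / 3.0) * r**2 \
            - 2.0 * math.pi * G * rho * m * a**2
        total += psi2 * U * dr
    return total

def analytic_shift(n, G=1.0, rho=1.0, m=1.0, a=1.0):
    # E^1_n = (4 pi G rho / 3) * ( -(4/3) m a^2 - m a^2 / (4 n^2 pi^2) )
    return (4.0 * math.pi * G * rho / 3.0) * (
        -(4.0 / 3.0) * m * a**2 - m * a**2 / (4.0 * n**2 * math.pi**2)
    )

num = first_order_shift(1)
ana = analytic_shift(1)
```

The numerical integral reproduces the closed form for each $n$, confirming the $\langle r^2 \rangle = a^2(1/3 - 1/2n^2\pi^2)$ average behind it.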
But $\rho=\frac{3M}{4 \pi a^{3}}$; hence
\begin{eqnarray}
E_{n}^{1}=\frac{GM m}{a}\left(-\frac{4}{3}-\frac{1}{4 \pi^{2} n^{2}}\right)=\frac{\omega}{a}\left(-\frac{4}{3}-\frac{1}{4 \pi^{2} n^{2}}\right)
\end{eqnarray}
where $\omega \equiv GmM$.
Thus the actual energy spectrum of the electrons is not simply $n \pi \hbar c/a$ but rather:
\begin{equation}
E(n)=\frac{n \pi \hbar c}{a} +E_{n}^{1}=\frac{n \pi \hbar c}{a} +\frac{\omega}{a}(-\frac{4}{3}-\frac{1}{4 \pi^{2} n^{2}}) \equiv An -B -\frac{D}{n^{2}} ~\label{cor}
\end{equation}
\noindent where
\begin{eqnarray}
A \equiv \frac{\pi \hbar c}{a}~,~ B \equiv \frac{4 \omega}{3a}~,~D \equiv \frac{\omega}{4 a \pi^{2}}~. \label{abd}
\end{eqnarray}
The numerical order of magnitude values of $A, B$ and $D$ have been calculated from the observed density and radius of typical white dwarf stars, $\rho \approx 10^{6} ~gm/cm^3~,~ a \approx 7000~km$, yielding $A \approx 10^{-32}~,~B \approx 10^{-23}~,~D \approx 10^{-25} $. Here, the relatively large values of the constants $B$ and $D$, compared to that of the unperturbed constant $A$, raise a question about the validity of the perturbative result. One expects that the perturbative correction should not dominate the zeroth order unperturbed result corresponding to the original scenario of the free electron gas; thus
\begin{eqnarray}
E(n) > 0 \Rightarrow An > B + \frac{D}{n^{2}} ~. \label{pert}
\end{eqnarray}
Since the correction terms decrease with increasing $n$, there must be a minimum $n=n_0$ such that $E(n_0)=0$; any value of $n>n_0$ is acceptable for the perturbative correction.
To find $n_0$, one has to solve a cubic equation; this is done by employing Cardano's method (see Appendix), yielding $n_0 \simeq B/A \approx 10^{9}$, i.e. $An_0 \simeq B$. This implies that our perturbative correction is valid only for electron states lying within the domain $n_0 < n < n_f$, and this is consistent with our analysis in the ultrarelativistic limit. The density of states is, of course, still given by (\ref{denst}).
Referring back to (\ref{ekin}), the total kinetic energy of the electrons can be computed as before, with only the $n=0$ lower limit of the integration being replaced by $n_0$; this gives us the result
\begin{eqnarray}
E_{kin} &=& \int_{n_{0}}^{n_{f}} \left(An -B-\frac{D}{n^{2}} \right) \pi n^{2} dn \nonumber \\
&=& \pi A\frac{n_{f}^{4}}{4}\left[1-\left( \frac{n_{0}}{n_{f}} \right)^{4} \right]-\pi B\frac{n_{f}^{3}}{3}\left[1-\left( \frac{n_{0}}{n_{f}} \right)^3 \right]
-\pi Dn_{f}(1-\frac{n_{0}}{n_{f}}) ~. \label{ekin2}
\end{eqnarray}
Now, $n_{f} \approx N^{\frac{1}{3}} \approx 10^{20}$, since $N \approx 10^{60}$; it follows that $n_{0}/n_{f} \ll 1$, so that all powers of $n_{0}/n_{f}$ can be safely ignored.
With these approximations, and substituting the expressions for the constants $A~,B,~,D$, the total kinetic energy can be written as
\begin{eqnarray}
E_{kin}=C_{1}\frac{\hbar c}{aA_{m}^{\frac{4}{3}}}M^{\frac{4}{3}} -C_{2} Gm_{e}\frac{M^{\frac{4}{3}}}{aA_{m}^{\frac{1}{3}}}-C_{3}Gm_{e}\frac{M^{2}}{aA_{m}}
~. \label{ekinf}
\end{eqnarray}
where $C_{1}$ is as defined in (\ref{ekin}), $C_{2}=(\frac{3}{\pi})^{\frac{1}{3}} \frac{1}{4\pi }$ and $C_{3} = \frac{4}{3}$.
The total energy with the perturbative correction can now be written as
\begin{eqnarray}
E_{tot} &=& E_{kin} - 3G\frac{M^{2}}{5a} \nonumber \\
&=& \frac{M^{\frac{4}{3}}}{a} \left( C_{1} \frac{\hbar c}{ A_{m}^{\frac{4}{3}}} -C_{2}\frac{Gm_{e}}{A_{m}^{\frac{1}{3}}}\right) -\frac{M^{2}}{a}\left( \frac{C_{3}Gm_{e}}{A_{m}}+\frac{3G}{5} \right) ~\label{etotf}
\end{eqnarray}
Minimizing this new expression for total energy, i.e., setting $dE_{tot}/{da} =0~$ and $d^{2}E_{tot}/{da^2} > 0 $, the dependence on the stellar radius $a$ cancels out as before, leaving a limiting mass:
\begin{eqnarray}
M<M_{limit} \approx \left( \frac{5 C_{1}}{3} \right)^{3/2} \frac{( \frac{ \hbar c}{G} )^{3/2}}{A_{m}^{2}} \left( 1-\frac{10}{3} \frac{m}{A_{m}}\right) ~. \label{maslim}
\end{eqnarray}
The first two factors, together with the first term in the parenthesis, constitute the original limiting mass \`a la Chandrasekhar, while the second term is our leading correction arising from the self-gravity of the star, which produces an effective gravitational potential inside the star. Clearly, this dimensionless correction term is of the order of the ratio of the mass of the electron to that of the proton, i.e., of the order of $10^{-4}$, and hence substantially smaller than the original contribution. This is as may have been expected, and in a sense justifies the neglect of the physical effect discussed here in the original analysis. However, the effect is not so small as to be completely ignorable, especially if future observational studies require more precise results than those available from the original analysis.
\section{Conclusion And Pending Issues}
From our calculations, we conclude that
\begin{itemize}
\item The effect of a background gravitational potential on the electrons inside a White Dwarf is physical and produces a change in the mass-limit, the change being of the order of $10^{-4}$.
\item This change will also affect the absolute luminosity of Type Ia supernovae as calculated from the mass limit. Since Type Ia supernovae act as standard candles, our correction might have a significant effect on the measured values of the cosmological parameters. In light of this, considering Type Ia supernovae to be thermonuclear explosions of super-Chandrasekhar-mass white dwarfs, the total energy released in such an explosion, and hence the luminosity, can be taken to be approximately proportional to the mass of the progenitor times the speed of light squared, i.e., the luminosity $L= \alpha M_{limit}c^{2}$. When
our correction term is incorporated, this becomes $L^{*} \approx L(1-0.0001)$. This will change the measured value of the luminosity distance by:
\begin{eqnarray}
d_{L}^{*}=\sqrt{\frac{L^{*}}{4 \pi Flux}} \approx d_{L} (1-0.0001)^{\frac{1}{2}} \approx d_{L} (1-0.00005)
\end{eqnarray}
The resultant change is indeed small but the corresponding change in cosmological parameters might be significant enough, given the ever-increasing precision currently being achieved in measurement of these parameters. From this standpoint, there seems to be scope for further research in the area.
\item We have restricted ourselves to the simplest possible corrections to the celebrated result, based mainly on Chandrasekhar's Nobel lecture; one might wonder: are there others - based on more complicated models - which might produce corrections of similar magnitudes? This is a very pertinent point which has not been addressed here. We plan to consider such refinements in future.
\item Similarly, for denser white dwarfs, one might wonder whether general relativity ought to be used, with the Dirac equation in a spherically symmetric background (where it is separable) providing a more precise estimate of the correction. This is certainly a very important topic to be taken up in the near future.
\item As far as observations directly related to the mass limit are concerned, our knowledge is scanty, except for one reported observation which concludes that the data reveal a white dwarf of about twice the limiting mass \cite{HOW}. It is likely that rotation and magnetic fields will produce a heavier white dwarf. In any event, these aspects have not been studied in this paper.
\item Another related question is : can similar corrections arise in hydrostatic equilibrium applied to more compact astrophysical objects like neutron stars ? We hope to report on these issues in the near future.
\end{itemize}
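The luminosity-distance shift quoted in the list above reduces to a two-line computation; the sketch below checks the first-order expansion $\sqrt{1-\delta} \approx 1 - \delta/2$ with $\delta = 10^{-4}$, the order-of-magnitude correction derived in the text:

```python
import math

delta = 1.0e-4                    # fractional shift in the limiting mass
L_factor = 1.0 - delta            # L* = L (1 - delta)
dL_factor = math.sqrt(L_factor)   # d_L* / d_L = sqrt(L* / L)
approx = 1.0 - 0.5 * delta        # first-order expansion quoted in the text
```

The exact square root and the linearized factor $1 - 0.00005$ agree to better than one part in $10^8$, so the quoted $d_L^* \approx d_L(1-0.00005)$ is safe.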
\section{Acknowledgements}
We acknowledge interesting and useful discussions with Muktish Acharya, Amitava Banerjee, Ritaban Chatterjee and Suchetana Chatterjee. One of us (PM) acknowledges very interesting correspondence with Amitabha Sen.
\section{Appendix}
\subsection{Solution of the cubic equation for $n_{0}$ using \textbf{Cardano's Method}}
$$An^{3} -Bn^{2}-D=0$$
Let $n=m+\frac{B}{3A}$. Substituting, we get:
$$m^{3}-\frac{B^{2}}{3A^{2}} m +[-\frac{2B^{3}}{27A^{3}} -\frac{D}{A}]=0$$
Now, using $\frac{2B^{3}}{27A^{3}} \approx 10^{26}$ and $\frac{D}{A} \approx 10^{7}$, we may drop the $D/A$ term and get:
$$m^{3}-\frac{B^{2}}{3A^{2}}m-\frac{2B^{3}}{27A^{3}}=0$$
Thus
$$m^{3} +pm + q =0$$
where, $p=-\frac{B^{2}}{3A^{2}}$ and $q=-\frac{2B^{3}}{27A^{3}}$
Now, following \textbf{Cardano's method}, let $m=u+v$
then,
$$u^{3} +v^{3} +3uv(u+v) +p(u+v)+q=u^{3} +v^{3} +(3uv+p)(u+v)+q=0$$
Now, since arbitrarily many pairs $(u,v)$ satisfy $u+v=m$, we can, without loss of generality, impose another condition on $u$ and $v$, namely $uv=-\frac{p}{3}$.
Thus, the u and v that satisfy both $u+v=m$ and $uv=-\frac{p}{3}$ are unique and they can be evaluated as follows.
$$u^{3}+v^{3} +q=0$$
Or,
$$ u^{3}-\frac{p^{3}}{27u^{3}}+q=0$$
$$ \implies (u^{3})^{2}+qu^{3}-\frac{p^{3}}{27}=0$$
which is a quadratic in $u^{3}$ and has the solution
$$u^{3}=\frac{-q\quad \pm \quad \sqrt{q^{2}+\frac{4p^{3}}{27}}}{2}$$
Since the number of states cannot be negative, we choose the root of $u^{3}$ which yields a positive $n$:
$$u^{3}=\frac{B^{3}}{27A^{3}}+\frac{B^{3}}{A^{3}} \sqrt{
\frac{1}{27^{2}}-\frac{1}{27^{2}}} = \frac{B^{3}}{27 A^{3}}$$
thus
$$u = \frac{B}{3A}$$
and, since $p=-\frac{B^{2}}{3A^{2}}$,
$$v=-\frac{p}{3u} = +\frac{B}{3A}$$
Thus $m=u+v=\frac{2B}{3A}$ and, with $n=m+\frac{B}{3A}$,
$$n=n_{0}= \frac{B}{A}$$
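As a quick numerical check of this root: for illustrative values with $D \ll B^3/A^2$ (not the physical ones), bisection on $An^3 - Bn^2 - D = 0$ recovers $n_0 \approx B/A$:

```python
def cubic_root(A, B, D, lo=0.0, hi=None, iters=200):
    """Bisection for the positive root of f(n) = A n^3 - B n^2 - D.
    f(0) = -D < 0 and f is positive beyond the root, so the bracket holds."""
    f = lambda n: A * n**3 - B * n**2 - D
    if hi is None:
        hi = 2.0 * B / A + 1.0  # f(2B/A) = 4B^3/A^2 - D > 0 for small D
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Illustrative values with D << B^3 / A^2, as in the text:
A, B, D = 1.0, 10.0, 1e-6
n0 = cubic_root(A, B, D)
```

With $D$ negligible the cubic degenerates to $n^2(An - B) = 0$, whose positive root is $B/A$, consistent with the Cardano result above.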
\section{Introduction}
The Transformers have been shown to outperform vanilla recurrent neural networks in many natural language processing tasks \cite{Vaswani2017, Peters2018, Liu2019, Devlin2019, Fedus2021}. One of the key factors behind this improvement is the self-attention mechanism, which allows contexts to be processed based on relations among tokens \cite{Bahdanau2015}. Specifically, each token is attended to all other tokens in the context, thus offering an alternative to the recurrence mechanism for capturing distant information. Also, as recurrent neural networks have been shown to focus on only the nearest 50 tokens \cite{Khandelwal2018}, the self-attention mechanism provides a better solution for exploiting longer contexts.
Applying the Transformers in a causal manner is crucial for language modeling, i.e. predicting the next word given the preceding context \cite{Peters2018, Irie2019, Al-Rfou2019, Irie2020}. Typically, a window is slid through a long sequence of tokens, within which the self-attention representations are computed and used to estimate probabilities. However, as self-attention itself has no regard for how tokens are arranged, the sequential information is usually compensated for by encoding the embedding vectors based on their positions \cite{Vaswani2017}.
In this paper, we investigate whether such sequential information can be captured in a more explicit manner. That is, we attempt to preserve the arrangement of the self-attention representations and use this information to improve the language models. The implementation involves cascading a standard recurrent neural network (RNN) to the Transformer layers, hence we refer to this architecture as TransfoRNN. Through a series of evaluations, we show empirically that the TransfoRNN models outperform the vanilla Transformer models in terms of model perplexity and speech recognition accuracy. Furthermore, we discovered that, with the inclusion of the sequential information into the model, a TransfoRNN model with shallow Transformer layers, e.g. two layers, suffices to give performance comparable, if not better, to a deeper Transformer network. Also, we highlight that the TransfoRNN models possess faster inference time.
The next section provides some background on language modeling, the Transformers, and related work in this field. In Section \ref{sec:problem}, we highlight the shortcoming of using the Transformers for language modeling. Next, in Section \ref{sec:transfornn}, we depict our proposed TransfoRNN model. Section \ref{sec:experiments} presents the experimental setup and results, particularly the comparison of the TransfoRNN model with the standard Transformers. Finally, Section \ref{sec:conclusion} concludes this paper and suggests future work.
\section{Background}
\label{sec:background}
The function of a language model (LM) is to estimate the probability distribution of next word given the history context, i.e. $P(w_i|w_1^{i-1})$. Traditionally, LMs are estimated from the smoothed $n$-gram counts \cite{Jelinek1985, Katz1987, Witten1991, Kneser1995, Chen1996}. In the past two decades, neural networks have been extensively used for language modeling due to their better smoothing capability, particularly when using longer history context, i.e. $|w_1^{i-1}| \gg 3,4$. Some commonly used architectures are the feedforward \cite{Bengio2001,Schwenk2007} and recurrent neural networks \cite{Mikolov2010,Sundermeyer2012,Arisoy2015}, and recently, the Transformers \cite{Al-Rfou2019,Irie2019,Irie2020,Beck2020}. The idea of these approaches is to project discrete words into low dimensional space such that latent information about the context can be better captured.
In general, neural network approach to language modeling can be depicted as follows.
\begin{equation}
\label{eq:nnlm}
P(w_i|w_1^{i-1}) \approx P_{\text{NN}} \big(w_i|f(w_1^{i-1})\big)
\end{equation}
where $f(\cdot)$ refers to projection applied on the history context.
LMs are devised to capture as much knowledge of the language as possible, especially the syntax, which governs how tokens, e.g. words, are arranged in sequence. In order to capture the sequential information, a feed-forward neural network (FNN) concatenates the embedding vectors of the tokens in the order in which they appear in the history context.
\begin{equation}
\label{eq:fnnlm}
P_{\text{FNN}} \big(w_i|<e_1, e_2, ..., e_{i-1}>\big)
\end{equation} where $e_1, e_2, ..., e_{i-1}$ denote the embedding vectors of tokens $w_1, w_2, ..., w_{i-1}$.
A recurrent neural network (RNN), on the other hand, digests the history context recursively, i.e. token by token, to update the context vector in the network.
\begin{equation}
\label{eq:rnnlm}
P_{\text{RNN}} \big(w_i|c_1^{i-2},e_{i-1}\big)
\end{equation} where $c_1^{i-2}$ and $e_{i-1}$ refer to the embedding vectors of context and token, respectively.
For the Transformers, the next token is predicted based on the self-attention representation of the immediately preceding token, which is computed from its embedding vector, enriched by attending it to the other tokens in the history context through multiple Transformer layers \cite{Al-Rfou2019,Irie2019,Irie2020,Beck2020}.
\begin{equation}
\label{eq:translm}
P_{\text{Transformer}} \big(w_i|z_{i-1}^{(N)}\big)
\end{equation} where $z_{i-1}^{(N)}$ denotes the self-attention representation of token $w_{i-1}$ computed by a stack of $N$ layers of Transformer.
\section{Problem Statement}
\label{sec:problem}
The self-attention mechanism serves as the core module of the Transformers. It allows distant information to be captured and incorporated into the local representation by attending each token to the other tokens in the entire context \cite{Vaswani2017}. The computation begins by first mapping the embedding vector of a given token to query, key and value vectors.
\begin{equation*}
q_{t}, k_{t}, v_{t} = Qe_{t}, Ke_{t}, Ve_{t}
\end{equation*} where $e_{t}$ denotes the embedding vector corresponding to token $w_t$, and $Q$, $K$ and $V$ are the functions projecting $e_t$ to its respective query, key and value vectors. The self-attention representation of $w_t$ can be computed as the weighted sum of the value vectors, each weighted by the similarity of the query $q_t$ to the corresponding key. The similarity scores are normalized using a softmax function.
\begin{equation}
\label{eq:self-attention}
h_t = \sum_{i<t} \frac{\text{exp}(q_t \cdot k_i)}{\sum_{j<t} \text{exp}(q_t \cdot k_j)} v_i
\end{equation} Note that token $w_t$ attends only to the preceding tokens, i.e. $w_1, w_2, ..., w_{t-1}$, reflecting the causality property of language. In the actual realization, multiple sets of $Q$, $K$ and $V$ are applied and the resulting representations are concatenated; this is referred to as the multi-head attention mechanism \cite{Vaswani2017}.
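The computation in Eq.~(\ref{eq:self-attention}) can be sketched in a few lines of plain Python. This is a single-head toy implementation for illustration only; unlike the strictly causal sum above, position $t$ here also attends to itself so that the softmax is defined at $t=0$:

```python
import math

def causal_self_attention(E, Q, K, V):
    """Single-head causal self-attention over a list of embedding vectors E.
    Q, K, V are projection matrices (lists of rows); token t attends only
    to positions i <= t."""
    matvec = lambda W, x: [sum(w * xj for w, xj in zip(row, x)) for row in W]
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))

    qs = [matvec(Q, e) for e in E]
    ks = [matvec(K, e) for e in E]
    vs = [matvec(V, e) for e in E]

    outputs = []
    for t in range(len(E)):
        scores = [dot(qs[t], ks[i]) for i in range(t + 1)]  # causal mask
        mx = max(scores)
        exps = [math.exp(s - mx) for s in scores]           # stable softmax
        z = sum(exps)
        weights = [e / z for e in exps]
        h_t = [sum(w * v[d] for w, v in zip(weights, vs[:t + 1]))
               for d in range(len(vs[0]))]
        outputs.append(h_t)
    return outputs

# Toy usage: three 2-d embeddings, identity projections
E = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
I2 = [[1.0, 0.0], [0.0, 1.0]]
outputs = causal_self_attention(E, I2, I2, I2)
```

Real implementations additionally scale the scores by $1/\sqrt{d_k}$ and run several heads in parallel, but the causal masking shown here is the part that matters for language modeling.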
Since the attention mechanism does not take any positional information into account (see Eq.~\ref{eq:self-attention}), the embedding vectors are usually pre-encoded based on their positions in order to compensate for this deficiency.
\begin{equation}
\label{eq:pe}
e_t=e_t^{\text{RAW}} + e_t^{\text{POS}}
\end{equation} where $e_t^{\text{RAW}}$ is the raw embedding vector obtained from linear transformation, while $e_t^{\text{POS}}$ is the positional encoder typically implemented as sinusoidal functions with various frequencies.
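The positional encoding of Eq.~(\ref{eq:pe}) can be sketched as follows; the sinusoidal form follows Vaswani et al. (2017), and the interleaved sine/cosine layout is one common convention rather than the only one:

```python
import math

def positional_encoding(position, d_model):
    """Sinusoidal positional encoder: even dimensions use sin, odd use cos,
    with wavelengths forming a geometric progression up to 10000 * 2*pi."""
    pe = []
    for i in range(d_model):
        angle = position / (10000 ** ((i // 2 * 2) / d_model))
        pe.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return pe

def encode(raw_embedding, position):
    """e_t = e_t^RAW + e_t^POS, as in Eq. (\\ref{eq:pe})."""
    pos = positional_encoding(position, len(raw_embedding))
    return [r + p for r, p in zip(raw_embedding, pos)]
```

The key property is visible immediately: the same raw embedding at two different positions yields two different encoded vectors.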
The positional encoder ensures that identical tokens located at different positions are represented as different vectors. However, how such an encoding scheme reflects the sequential information, i.e. the arrangement of the tokens, is unclear. As pointed out in other works \cite{Tang2018, Raganato2019}, the Transformers may not capture syntactic information as well as the RNNs do. For language modeling in particular, farther tokens generally have weaker predictive power towards the next word \cite{Chong2013}, a phenomenon which the positional encoding does not reflect.
\section{TransfoRNN}
\label{sec:transfornn}
As discussed previously, the FNN and RNN models attempt to capture the sequential information either by concatenating the embedding vectors or by presenting the vectors sequentially to the network. Although the Transformers do modify the raw embedding vectors based on positions (see Eq.~\ref{eq:pe}), how well the sequential information is captured remains doubtful.
\subsection{Architecture}
To capture the sequential information in a more explicit manner, we propose to cascade RNNs to the Transformers, such that the arrangement of the token sequence can be modeled at the self-attention level. Besides retaining the strength of the self-attention mechanism, this design preserves the sequential information. We refer to this proposed model as TransfoRNN. The architecture is shown in Figure \ref{fig:transfornn}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{transrnn.png}
\caption{The architecture of the TransfoRNN model. The structure consists of $N$ layers of Transformers, cascaded by $M$ layers of RNNs.}
\label{fig:transfornn}
\end{figure}
As compared to the Transformers, which condition only on a single representation (see Eq.\ref{eq:translm}), the TransfoRNN models make use of the entire sequence of representations in the context. Moreover, using the RNNs to process the representations of a long context allows the memory to be feasibly maintained during computation \cite{Irie2020}.
\subsection{Implementation}
The Transformer layers in the TransfoRNN models are realized following the original implementation \cite{Vaswani2017}. In the $l$-th layer, the self-attention component (see Eq.\ref{eq:self-attention}) is surrounded by a residual connection followed by layer normalization \cite{Ba2016}.
\begin{equation}
x_t^{(l)} = \text{LayerNorm}(W_0h_t^{(l)} + z_t^{(l-1)})
\end{equation} where $W_0$ denotes the linear function of the residual connection. Note that, at the first layer, $z_t^{(0)} = e_t$. Next, a feed-forward component is applied and surrounded by another residual connection and layer normalization.
\begin{equation}
\label{eq:trans_ff}
y_t^{(l)} = x_t^{(l)} + \text{Activation}(W_{1}x_t^{(l)})
\end{equation}
\begin{equation}
z_t^{(l)} = \text{LayerNorm}(y_t^{(l)})
\end{equation} where $W_1$ refers to the linear function in the feed-forward connection.
At the $N$-th layer of the Transformers, a stack of $M$ layers of RNNs takes over the representations, i.e. $z_1^{(N)}, z_2^{(N)}, ..., z_{i-1}^{(N)}$, to produce the outputs $o_1^{(M)}, o_2^{(M)}, ..., o_{i-1}^{(M)}$. Probabilities are obtained by passing the outputs through a softmax layer.
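The forward pass described above can be sketched in a few lines of NumPy. This is a minimal single-head illustration with random weights and toy dimensions; it follows the simplified per-layer equations above (a single linear map $W_1$ in the feed-forward step), and a plain tanh RNN stands in for the LSTM layers used in the actual experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, N, M = 16, 8, 2, 2    # toy embedding dim, context length, layer counts

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def causal_self_attention(z, Wq, Wk, Wv):
    # each position attends only to itself and the preceding positions
    q, k, v = z @ Wq, z @ Wk, z @ Wv
    scores = q @ k.T
    scores[np.triu(np.ones((len(z), len(z)), dtype=bool), k=1)] = -np.inf
    w = np.exp(scores - scores.max(-1, keepdims=True))
    return (w / w.sum(-1, keepdims=True)) @ v

def transformer_layer(z, p):
    h = causal_self_attention(z, p["Wq"], p["Wk"], p["Wv"])
    x = layer_norm(h @ p["W0"] + z)          # residual + LayerNorm
    y = x + np.maximum(0.0, x @ p["W1"])     # y_t = x_t + Activation(W1 x_t)
    return layer_norm(y)                     # z_t = LayerNorm(y_t)

def rnn_layer(z, Wx, Wh):
    o, outs = np.zeros(z.shape[1]), []
    for z_t in z:                            # token-by-token recurrence
        o = np.tanh(z_t @ Wx + o @ Wh)
        outs.append(o)
    return np.stack(outs)

trf = [{k: rng.standard_normal((d, d)) * 0.1
        for k in ("Wq", "Wk", "Wv", "W0", "W1")} for _ in range(N)]
rnn = [(rng.standard_normal((d, d)) * 0.1,
        rng.standard_normal((d, d)) * 0.1) for _ in range(M)]

def transfornn_forward(e):
    z = e
    for p in trf:                            # N Transformer layers
        z = transformer_layer(z, p)
    for Wx, Wh in rnn:                       # M cascaded RNN layers
        z = rnn_layer(z, Wx, Wh)
    return z                                 # o_1..o_T, fed to a softmax layer

o = transfornn_forward(rng.standard_normal((T, d)))
```

Because both the masked attention and the recurrence are causal, changing the final input token leaves all earlier outputs unchanged.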
\section{Experiments}
\label{sec:experiments}
The performance of the proposed TransfoRNN model was evaluated in terms of perplexity (PPL) and speech recognition word error rate (WER). The TransfoRNN model was compared with the vanilla Transformer model to assess the performance gain contributed by the captured sequential information.
For the TransfoRNN models, the recurrent layers were implemented as long short-term memory (LSTM) neural networks \cite{Sundermeyer2012}. The Transformer models were trained following the approach in \cite{Al-Rfou2019}: during training, all outputs from the final layer, i.e. $z_1^{(N)}$, $z_2^{(N)}$, ..., $z_{i-1}^{(N)}$, were used to compute the loss, but only the last output, i.e. $z_{i-1}^{(N)}$, was used for inference. As will be shown later, this configuration gives lower model PPLs than using the entire outputs for prediction. In all models considered in the experiments, the input embedding was tied to the output embedding \cite{Press2017, Inan2017}. The models were built using the PyTorch toolkit \cite{Paszke2019}.
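Tying the input and output embeddings means the softmax projection reuses the (transposed) embedding matrix instead of learning a separate one, which removes a $V \times d$ block of parameters. A minimal sketch of the idea (toy sizes, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 100, 16                      # toy vocabulary size and embedding dim
E = rng.standard_normal((V, d))     # the single, shared embedding matrix

tokens = np.array([3, 7, 42])
x = E[tokens]                       # input side: look up rows of E
logits = x @ E.T                    # output side: project with E^T (tied weights)
probs = np.exp(logits - logits.max(-1, keepdims=True))
probs /= probs.sum(-1, keepdims=True)
```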
\subsection{Perplexity Analysis}
The model PPLs were evaluated on the Penn Treebank (PTB) and WikiText-2 \cite{Merity2017} corpora, whose training sets consist of 0.9M and 2.0M words, respectively. The vocabularies comprise 10K and 33K words, respectively.
For both TransfoRNN and Transformer models, the number of heads in the Transformer layers was 8 and the dimension of the feed-forward component (see Eq.\ref{eq:trans_ff}) was 1024. The models were trained using the stochastic gradient descent (SGD) method and new-bob learning rate adjustment. The initial learning rate was commonly fixed as 0.1 and the batch size was 16.
\subsubsection{TransfoRNN vs. Transformer}
First of all, the TransfoRNN models were compared with the Transformer models. The number of LSTM layers in the TransfoRNN models was fixed at two, while the number of Transformer layers was increased from two to eight. For the Transformer models, the depths varied from two to sixteen. In this evaluation, the number of heads in the Transformer layers of both types of models was eight and the dimension of the feed-forward component was 1024 (see Eq.\ref{eq:trans_ff}). Both models were compared at embedding dimensions of 512 and 1024. The PPLs are shown in Table \ref{tab:ppl_ptb_wiki2}.
\begin{table}[]
\caption{PPLs of the TransfoRNN models as compared to the Transformer models. On both corpora, TransfoRNN models with only two Transformer layers showed lowest PPLs. (\textit{d}: embedding dimension, \textit{N}: number of Transformer layers)}
\centering
\begin{tabular}{|l|l|c|c|c|c|c|}
\hline
\multicolumn{2}{|l|}{\multirow{2}{*}{}} & \multicolumn{1}{l|}{\multirow{2}{*}{\textit{N}}} & \multicolumn{2}{c|}{PTB} & \multicolumn{2}{c|}{WikiText-2} \\ \cline{4-7}
\multicolumn{2}{|l|}{} & \multicolumn{1}{l|}{} & PPL & \#param & PPL & \#param \\ \hline
& KN5 & - & 147.9 & - & 231.0 & - \\ \hline \hline
\multirow{8}{*}{\rotatebox{90}{\textit{d}=512}} & \multirow{4}{*}{Transformer} & 2 & 103.0 & 9.3M & 127.1 & 21.3M \\
& & 4 & 93.8 & 13.5M & 108.5 & 25.5M \\
& & 8 & 88.5 & 22.0M & 100.1 & 33.9M \\
& & 16 & 101.5 & 38.8M & 100.0 & 50.7M \\ \cline{2-7}
& \multirow{3}{*}{TransfoRNN} & 2 & \textbf{83.6} & 18.7M & \textbf{98.5} & 42.5M \\
& & 4 & 86.1 & 22.9M & 102.0 & 46.7M \\
& & 8 & 90.2 & 31.3M & 109.1 & 55.1M \\ \cline{2-7}
& LSTM & - & 99.0 & 9.3M & 100.7 & 21.3M \\ \hline \hline
\multirow{8}{*}{\rotatebox{90}{\textit{d}=1024}} & \multirow{4}{*}{Transformer} & 2 & 111.3 & 22.9M & 130.9 & 46.7M \\
& & 4 & 98.2 & 35.5M & 109.7 & 59.3M \\
& & 8 & 94.7 & 60.7M & 98.6 & 84.5M \\
& & 16 & 100.3 & 111.1M & 96.9 & 134.9M \\ \cline{2-7}
& \multirow{3}{*}{TransfoRNN} & 2 & \textbf{83.2} & 49.9M & \textbf{94.8} & 97.6M \\
& & 4 & 83.2 & 62.5M & 98.4 & 110.2M \\
& & 8 & 89.9 & 87.7M & 104.3 & 135.4M \\ \cline{2-7}
& LSTM & - & 105.6 & 27.0M & 114.4 & 50.9M \\ \hline
\end{tabular}
\label{tab:ppl_ptb_wiki2}
\end{table}
As shown in Table \ref{tab:ppl_ptb_wiki2}, the TransfoRNN models outperformed the Transformer models in most of the settings. In particular, the TransfoRNN models comprising only two Transformer layers, i.e. $N=2$, outperformed much deeper Transformer models. On the WikiText-2 corpus, for example, although deepening a Transformer model consistently reduced the model PPL (from 130.9 to 96.9 as $N$ was increased from 2 to 16 in the setting of $d=1024$), a TransfoRNN model with merely two Transformer layers showed a lower PPL, i.e. 94.8. Compared to the best setting of the Transformer models, the TransfoRNN models reduced the PPLs by up to 5.5\% on the PTB corpus (from 88.5 to 83.6) and 2.2\% on the WikiText-2 corpus (from 96.9 to 94.8).
More importantly, the TransfoRNN models demand fewer parameters to achieve such results. On the PTB corpus, the model size was reduced by 15.0\% (from 22.0M to 18.7M), while on the WikiText-2 corpus, the model size was reduced by 27.7\% (from 134.9M to 97.6M). Both results were measured under the best settings of the models.
We noticed that for the PTB corpus, the PPL of the Transformer model increased when the model went deeper than eight layers. This can be explained by the fact that the PTB corpus is a comparatively small dataset, which makes the model prone to overfitting.
The LSTM models were also evaluated in order to validate the performance of the TransfoRNN models; such results indicate the performance of the TransfoRNN models when the Transformer layers are unplugged. As expected, the TransfoRNN models outperformed both the LSTM and Transformer models.
\subsubsection{Number of RNN layers}
Next, we assessed the optimal number of LSTM layers, i.e. $M$, in the TransfoRNN models. By varying the number of LSTM layers, two LSTM layers were found to be optimal for the TransfoRNN models. The results are shown in Table \ref{tab:ppl_rnn_layer}.
\begin{table}[H]
\caption{PPLs of the TransfoRNN models with different numbers of RNN layers. }
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\multirow{2}{*}{\textit{d}} & \multirow{2}{*}{\textit{M}} & \multicolumn{2}{c|}{PTB} & \multicolumn{2}{c|}{WikiText-2} \\ \cline{3-6}
& & PPL & \#param & PPL & \#param \\ \hline
512 & 1 & 85.6 & 16.6M & 100.3 & 40.4M \\
& 2 & \textbf{83.5} & 18.7M & \textbf{98.5} & 42.5M \\
& 3 & 89.0 & 20.8M & 106.5 & 44.6M \\ \hline
1024 & 1 & 89.6 & 41.5M & 97.8 & 89.2M \\
& 2 & \textbf{83.2} & 49.9M & \textbf{94.8} & 97.6M \\
& 3 & 88.9 & 58.3M & 98.3 & 106.0M \\ \hline
\end{tabular}
\label{tab:ppl_rnn_layer}
\end{table}
We noticed that the model PPLs were sensitive to deeper LSTM stacks, particularly in the lower embedding dimension setting.
\subsubsection{With \& without positional encoding}
As the sequential information has been captured in the LSTM layers, we evaluated whether the positional encoding (see Eq.\ref{eq:pe}) is still required in this scenario. We evaluated the TransfoRNN models with and without positional encoding; when the positional encoder was removed, the input to the model was simply the raw embedding vectors. We compared models of different embedding dimensions: 256, 512, 1024 and 2048. The results are shown in Table \ref{tab:ppl_position}.
\begin{table}[H]
\caption{PPLs of the TransfoRNN models with and without positional encoding under different dimensions of embedding.}
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
\multirow{2}{*}{} & \multicolumn{2}{c|}{PTB} & \multicolumn{2}{c|}{WikiText-2} \\ \cline{2-5}
& with pos. & w/o pos. & with pos. & w/o pos. \\ \hline
256 & 93.1 & 94.0 & 114.9 & 107.8 \\ \hline
512 & 83.6 & 83.8 & 98.5 & 97.0 \\ \hline
1024 & 83.2 & \textbf{82.1} & 94.8 & \textbf{92.2} \\ \hline
2048 & 86.7 & 90.0 & 98.4 & 121.8 \\ \hline
\end{tabular}
\label{tab:ppl_position}
\end{table}
On both corpora, under the best settings, the TransfoRNN models without positional encoding showed lower PPLs (bolded in Table \ref{tab:ppl_position}), indicating that the positional encoding is redundant for the TransfoRNN models. Notably, other works on Transformer LMs have also suggested that the positional encoding is not required for deep Transformer architectures \cite{Irie2019, Irie2020}.
\subsubsection{Inference in the Transformer models}
The Transformer models in this work were evaluated using the approach discussed in \cite{Al-Rfou2019}, in which only the representation at the final position, i.e. $z_{i-1}^{(N)}$, is used for inference, instead of the entire set of representations. Although a certain gain in terms of PPL might be obtained, the inference time increases drastically, in proportion to the length of the considered context.
In order to validate our results, we compared both settings for the Transformer models. Furthermore, as the approach was originally experimented with at the character level \cite{Al-Rfou2019}, it is important to assess it also at the word level, as our models are configured here. The results are shown in Table \ref{tab:ppl_transfp}.
\begin{table}[H]
\caption{PPLs of the Transformer models evaluated by using all or only the representation in the final position. }
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\multirow{2}{*}{\textit{d}} & \multirow{2}{*}{\textit{N}} & \multicolumn{2}{c|}{PTB} & \multicolumn{2}{c|}{WikiText-2} \\ \cline{3-6}
& & all & final & all & final \\ \hline
512 & 2 & 107.2 & 103 & 133.4 & 124.3 \\
& 4 & 96.4 & 94.0 & 116.9 & 108.5 \\
& 8 & 94.3 & 88.5 & 106.3 & 98.6 \\
& 16 & 102.9 & 94.4 & 105.9 & 96.9 \\ \hline
1024 & 2 & 113.5 & 111.3 & 137 & 130.9 \\
& 4 & 104.2 & 99.3 & 121.3 & 109.7 \\
& 8 & 102.4 & 94.7 & 110.1 & 101.7 \\
& 16 & 107.0 & 100.3 & 111.3 & 103.3 \\ \hline
\end{tabular}
\label{tab:ppl_transfp}
\end{table}
As shown in Table \ref{tab:ppl_transfp}, evaluating the Transformer models based only on the representation at the final position consistently gave lower PPLs under all settings. Hence, we are assured that the proposed TransfoRNN models were compared against a more competitive Transformer baseline.
The TransfoRNN models discussed in this paper, however, consistently used the entire set of representations for prediction, which we found to give lower model PPLs.
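The final-position inference scheme can be sketched as follows: for every next-token prediction, the context window is re-encoded and only the log-probability distribution at its last position is used. The function names and the toy uniform model below are illustrative, not from the paper:

```python
import numpy as np

def sliding_window_ppl(token_ids, context_len, logprob_fn):
    """PPL where each next token is predicted from the log-probability
    distribution at the final position of a sliding context window."""
    nll = 0.0
    for i in range(1, len(token_ids)):
        ctx = token_ids[max(0, i - context_len):i]   # re-encode the window
        log_probs = logprob_fn(ctx)                  # (vocab,) at final position
        nll -= log_probs[token_ids[i]]
    return float(np.exp(nll / (len(token_ids) - 1)))

# Sanity check with a toy model: a uniform LM over V words must give PPL = V.
V = 50
uniform_lm = lambda ctx: np.full(V, -np.log(V))
tokens = np.random.default_rng(1).integers(0, V, size=200)
```

Because the window is re-encoded at every step, the cost grows with the context length, which is the drawback of this scheme noted above.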
\subsection{Speech recognition}
We also evaluated the performance of the TransfoRNN model on a speech recognition \textit{N}-best re-ranking task by using the LibriSpeech corpus \cite{Panayotov2015}. The system was built following the recipe in the Kaldi toolkit \cite{Povey2011}. The acoustic model comprises a 17-layer TDNN taking 40-dimension MFCCs and a 100-dimension $i$-vector as input. The LM used in the decoder was the official trigram model.
Both the TransfoRNN and Transformer models were trained on the transcripts of the 960-hour training set, consisting of 9.4M words. The vocabulary contains 200K words. The TransfoRNN models consist of two Transformer layers followed by two LSTM layers, while the counterpart Transformer models consist of two to eight layers. Other settings in both models are the same: eight attention heads, 1024-dimension embedding and 2048-dimension hidden layer.
The PPLs and WERs of the TransfoRNN and Transformer models were compared on the four datasets in the corpus: dev\_clean, test\_clean, dev\_other and test\_other. The results are shown in Table \ref{tab:libri_ppl} and \ref{tab:wer}.
\begin{table}[H]
\caption{PPLs of the TransfoRNN and Transformer models. The TransfoRNN model which comprises two Transformer layers slightly outperforms the Transformer model with eight layers.}
\centering
\begin{tabular}{l|c|cc|cc}
\hline
\multirow{2}{*}{} & \multirow{2}{*}{\textit{N}} & \multicolumn{2}{c}{clean} & \multicolumn{2}{c}{other} \\
& & dev & test & dev & test \\ \hline
Fourgram & - & 283.7 & 289.5 & 250.8 & 259.2 \\ \hline
Transformer & 2 & 177.7 & 182.5 & 162.7 & 166.9 \\
& 4 & 147.9 & 152.2 & 134.9 & 138.9 \\
& 8 & 131.2 & 135.2 & 119.7 & 122.6 \\ \hline
TransfoRNN & 2 & \textbf{128.2} & \textbf{133.3} & \textbf{117.8} & \textbf{119.8} \\ \hline
\end{tabular}
\label{tab:libri_ppl}
\end{table}
\begin{table}[H]
\caption{WERs of the TransfoRNN and Transformer models. The results of both models are comparable.}
\centering
\begin{tabular}{l|c|cc|cc}
\hline
\multirow{2}{*}{} & \multirow{2}{*}{\textit{N}} & \multicolumn{2}{c|}{clean} & \multicolumn{2}{c}{other} \\
& & dev & test & dev & test \\ \hline
Trigram & - & 3.41 & 3.85 & 9.32 & 9.35 \\ \hline
Transformer & 2 & 3.33 & 3.82 & 9.12 & 9.30 \\
& 4 & 3.21 & 3.73 & 8.92 & 9.20 \\
& 8 & \textbf{3.13} & 3.72 & \textbf{8.72} & 9.09 \\ \hline
TransfoRNN & 2 & 3.19 & \textbf{3.66} & 8.80 & \textbf{9.02} \\ \hline
\end{tabular}
\label{tab:wer}
\end{table}
For the PPLs, the TransfoRNN models, which consist of only two Transformer layers, consistently gave lower PPLs; these results are consistent with those presented earlier. For the WERs, although both models reduced the baseline WERs output by the decoder, the TransfoRNN gave only comparable results in this evaluation.
\section{Conclusions}
\label{sec:conclusion}
This paper has discussed using recurrent layers to improve the Transformer LMs by encapsulating the sequential information into the models. The proposed model, referred to as the TransfoRNN, has shown lower model PPLs with fewer parameters as compared to the vanilla Transformers. Moreover, we have shown empirically that, with the assistance of the recurrent layers, fewer Transformer layers are required. Specifically, a TransfoRNN model with only two layers of LSTM and two layers of Transformer outperformed vanilla Transformer models with deeper networks, e.g. 8 or 16 layers. In the speech recognition evaluation on the LibriSpeech corpus, the TransfoRNN model showed results comparable with the Transformers.
For future work, the TransfoRNN shall be evaluated under BPE- or character-based settings with larger datasets, which is more applicable to state-of-the-art end-to-end ASR systems.
\bibliographystyle{IEEEtran}
\section{Introduction}
The Transformers have been shown to outperform the vanilla recurrent neural networks in many natural language processing tasks \cite{Vaswani2017, Peters2018, Liu2019, Devlin2019, Fedus2021}. One of the key factors of this improvement is the self-attention mechanism, which allows contexts to be processed based on the relations among tokens \cite{Bahdanau2015}. Specifically, each token is attended to all other tokens in the context, thus offering an alternative to the recurrence mechanism for capturing distant information. Also, as the recurrent neural networks have been shown to focus on only the nearest 50 tokens \cite{Khandelwal2018}, the self-attention mechanism provides a better solution for exploiting longer contexts.
Applying the Transformers in a causal manner is crucial for language modeling, i.e. predicting the next word given the preceding context \cite{Peters2018, Irie2019, Al-Rfou2019, Irie2020}. Typically, a window is slid through a long sequence of tokens, within which the self-attention representations are computed and used to estimate probabilities. However, as self-attention itself has no regard for how tokens are arranged, the sequential information is usually compensated for by encoding the embedding vectors based on positions \cite{Vaswani2017}.
In this paper, we investigate whether such sequential information can be captured in a more explicit manner. That is, we attempt to preserve the arrangement of the self-attention representations and use this information to improve the language models. The implementation cascades a standard recurrent neural network (RNN) to the Transformer layers; hence we refer to this architecture as the TransfoRNN. Through a series of evaluations, we show empirically that the TransfoRNN models outperform the vanilla Transformer models in terms of model perplexity and speech recognition accuracy. Furthermore, we found that with the inclusion of the sequential information, a TransfoRNN model with shallow Transformer layers, e.g. two layers, suffices to give comparable, if not better, performance than a deeper Transformer network. We also highlight that the TransfoRNN models possess faster inference time.
Section \ref{sec:background} provides some background on language modeling, particularly the Transformers and related works. In Section \ref{sec:problem}, we highlight the shortcoming of using the Transformers for language modeling. Next, in Section \ref{sec:transfornn}, we describe our proposed TransfoRNN model. Section \ref{sec:experiments} presents the experimental setup and results, particularly the comparison of the TransfoRNN model with the standard Transformers. Finally, Section \ref{sec:conclusion} concludes this paper and suggests future work.
\section{Background}
\label{sec:background}
The function of a language model (LM) is to estimate the probability distribution of next word given the history context, i.e. $P(w_i|w_1^{i-1})$. Traditionally, LMs are estimated from the smoothed $n$-gram counts \cite{Jelinek1985, Katz1987, Witten1991, Kneser1995, Chen1996}. In the past two decades, neural networks have been extensively used for language modeling due to their better smoothing capability, particularly when using longer history context, i.e. $|w_1^{i-1}| \gg 3,4$. Some commonly used architectures are the feedforward \cite{Bengio2001,Schwenk2007} and recurrent neural networks \cite{Mikolov2010,Sundermeyer2012,Arisoy2015}, and recently, the Transformers \cite{Al-Rfou2019,Irie2019,Irie2020,Beck2020}. The idea of these approaches is to project discrete words into low dimensional space such that latent information about the context can be better captured.
In general, the neural network approach to language modeling can be depicted as follows.
\begin{equation}
\label{eq:nnlm}
P(w_i|w_1^{i-1}) \approx P_{\text{NN}} \big(w_i|f(w_1^{i-1})\big)
\end{equation}
where $f(\cdot)$ refers to the projection applied to the history context.
The LMs are devised to capture as much knowledge of the language as possible, especially the syntax, which governs how tokens, e.g. words, are arranged in a sequence. To capture the sequential information, a feed-forward neural network (FNN) concatenates the embedding vectors of the tokens in the order in which they are arranged in the history context.
\begin{equation}
\label{eq:fnnlm}
P_{\text{FNN}} \big(w_i|\langle e_1, e_2, ..., e_{i-1}\rangle\big)
\end{equation} where $e_1, e_2, ..., e_{i-1}$ denote the embedding vectors of tokens $w_1, w_2, ..., w_{i-1}$.
A recurrent neural network (RNN), on the other hand, digests the history context recursively, i.e. token by token, to update the context vector in the network.
\begin{equation}
\label{eq:rnnlm}
P_{\text{RNN}} \big(w_i|c_1^{i-2},e_{i-1}\big)
\end{equation} where $c_1^{i-2}$ denotes the context vector summarizing $w_1^{i-2}$ and $e_{i-1}$ is the embedding vector of token $w_{i-1}$.
For the Transformers, the next token is predicted based on the self-attention representation of the immediately preceding token, which is computed from its embedding vector, embellished by attending it to the other tokens in the history context through multiple Transformer layers \cite{Al-Rfou2019,Irie2019,Irie2020,Beck2020}.
\begin{equation}
\label{eq:translm}
P_{\text{Transformer}} \big(w_i|z_{i-1}^{(N)}\big)
\end{equation} where $z_{i-1}^{(N)}$ denotes the self-attention representation of token $w_{i-1}$ computed by a stack of $N$ layers of Transformer.
\section{Problem Statement}
\label{sec:problem}
The self-attention mechanism serves as the core module of the Transformers. It allows distant information to be captured and incorporated into the local representations by attending each token to the other tokens in the entire context \cite{Vaswani2017}. The computation begins by first mapping the embedding vector of a given token to the query, key and value vectors.
\begin{equation*}
q_{t}, k_{t}, v_{t} = Qe_{t}, Ke_{t}, Ve_{t}
\end{equation*} where $e_{t}$ denotes the embedding vector corresponding to token $w_t$, and $Q$, $K$ and $V$ are the functions projecting $e_t$ to its respective query, key and value vectors. The self-attention representation of $w_t$ is then computed as the weighted sum of the value vectors, each weighted by the similarity between the query $q_t$ and its corresponding key. The similarity scores are normalized using a softmax function.
\begin{equation}
\label{eq:self-attention}
h_t = \sum_{i<t} \frac{\text{exp}(q_t \cdot k_i)}{\sum_{j<t} \text{exp}(q_t \cdot k_j)} v_i
\end{equation} Note that token $w_t$ attends only to the preceding tokens, i.e. $w_1, w_2, ..., w_{t-1}$, which reflects the causality property of the language. In the actual realization, multiple sets of $Q$, $K$ and $V$ are applied and the resulting representations are concatenated; this is referred to as the multi-head attention mechanism \cite{Vaswani2017}.
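This computation can be sketched in NumPy as follows. The sketch is single-head with toy sizes; note that, as is common in implementations, each position here also attends to itself, whereas the equation above sums over strictly preceding tokens:

```python
import numpy as np

def causal_self_attention(E, Q, K, V):
    """Single-head causal self-attention over embeddings E of shape (T, d).
    Each position attends only to itself and the preceding positions."""
    q, k, v = E @ Q, E @ K, E @ V
    scores = q @ k.T                                   # (T, T): q_t . k_i
    T = E.shape[0]
    scores[np.triu(np.ones((T, T), dtype=bool), k=1)] = -np.inf  # mask future
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)                      # softmax-normalized weights
    return w @ v, w                                    # h_t = sum_i w_ti v_i

rng = np.random.default_rng(0)
T, d = 6, 8
E = rng.standard_normal((T, d))
Q, K, V = (rng.standard_normal((d, d)) for _ in range(3))
h, w = causal_self_attention(E, Q, K, V)
```

The zeroed upper triangle of the weight matrix `w` is precisely the causal mask discussed above.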
Since the attention mechanism does not take any positional information into account (see Eq.\ref{eq:self-attention}), in order to compensate such deficiency, the embedding vectors are usually pre-encoded based on their positions.
\begin{equation}
\label{eq:pe}
e_t=e_t^{\text{RAW}} + e_t^{\text{POS}}
\end{equation} where $e_t^{\text{RAW}}$ is the raw embedding vector obtained from linear transformation, while $e_t^{\text{POS}}$ is the positional encoder typically implemented as sinusoidal functions with various frequencies.
The positional encoder ensures that identical tokens located in different positions will be represented as different vectors. However, how such encoding scheme would reflect the sequential information, i.e. the arrangement of the tokens, is unclear. As pointed out also in other works \cite{Tang2018, Raganato2019}, the Transformers may not capture the syntatic information as well as the RNNs. For language modeling, particularly, as farther tokens would generally have weaker predictive power towards next word \cite{Chong2013}, the positional encoding does not reflect this phenomenon.
\section{TransfoRNN}
\label{sec:transfornn}
As discussed previously, the FNN and RNN models attempt to capture the sequential information either by concatenating the embedding vectors or presenting the vectors sequentially to the network. Although the Transformers does modify the raw embedding vectors based on positions (see Eq.\ref{eq:pe}), how sequential information can be captured is doubtful.
\subsection{Architecture}
To capture the sequential information in a more explicit manner, we propose to cascade the RNNs to the Transformers, such that the the arrangement in the token sequence can be modeled, but at the self-attention level. The benefits, besides keeping the strength of the self-attention mechanism in the models, the sequential information can be preserved. We refer this proposed model to as TransfoRNN. The architecture is shown in Figure \ref{fig:transfornn}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{transrnn.png}
\caption{The architecture of the TransfoRNN model. The structure consists of $N$ layers of Transformers, cascaded by $M$ layers of RNNs.}
\label{fig:transfornn}
\end{figure}
As compared to the Transformers which condition only on single representation (see Eq.\ref{eq:translm}), the TransfoRNN models make use of the entire representations in the context. Moreover, using the RNNs to process the representations of a long context allows the memory to be feasibly maintained during computation \cite{Irie2020}.
\subsection{Implementation}
The Transformer layers in the TransfoRNN models are realized followed the original implementation \cite{Vaswani2017}. In the $l$-th layer, the self-attention component (see Eq.\ref{eq:self-attention}) is surrounded by a residual connection followed by layer normalization \cite{Ba2016}.
\begin{equation}
x_t^{(l)} = \text{LayerNorm}(W_0h_t^{(l)} + z_t^{(l-1)})
\end{equation} where $W_0$ denotes the linear function of the residual connection. Note that, at the first layer, $z_t^{0} = e_t$. Next, a feed-forward component is applied and surrounded by another residual connection and layer normalization.
\begin{equation}
\label{eq:trans_ff}
y_t^{(l)} = x_t^{(l)} + \text{Activation}(W_{1}x_t^{(l)})
\end{equation}
\begin{equation}
z_t^{(l)} = \text{LayerNorm}(y_t^{(l)})
\end{equation} where $W_1$ refers to the linear function in the feed-forward connection.
At the $N$-th layer of the Transformers, a stack of $M$ layers of RNNs takes over the representation, i.e. $z_1^{(N)}, z_2^{(N)}, ..., z_{i-1}^{(N)}$, to produce outputs, $o_1^{(M)}, o_2^{(M)}, ..., o_{i-1}^{(M)}$. Probabilities can be obtained by sending the outputs through a softmax layer.
\section{Experiments}
\label{sec:experiments}
The performance of the proposed TransfoRNN model was evaluated in terms of perplexity (PPL) and speech recognition word error rate (WER). The TransfoRNN model was compared with the vanilla Transformer model to assess the performance gain contributed by the captured sequential information.
For the TransfoRNN models, the recurrent layers were implemented as the long short term memory (LSTM) neural networks \cite{Sundermeyer2012}. The Transformer models were trained followed the approach in \cite{Al-Rfou2019}, where during training, all outputs from the final layer, i.e. $z_1^{(N)}$, $z_2^{(N)}$, ..., $z_{i-1}^{(N)}$, were used to compute the loss, but only the last output, i.e. $z_{i-1}^{(N)}$, was used for inference. As will be shown later, this configuration would give lower model PPL as compared to using the entire outputs for prediction. In all models considered in the experiments, the input embedding was tied to the output embedding \cite{Press2017, Inan2017}. The models were built by using the PyTorch toolkit \cite{Paszke2019}.
\subsection{Perplexity Analysis}
The model PPLs were evaluated based on the Penn Treebank (PTB) and WikiText-2 \cite{Merity2017} corpora, each consists of 0.9M and 2.0M words in the training set. The vocabularies comprise 10K and 33K words, respectively.
For both TransfoRNN and Transformer models, the number of heads in the Transformer layers was 8 and the dimension of the feed-forward component (see Eq.\ref{eq:trans_ff}) was 1024. The models were trained using the stochastic gradient descent (SGD) method and new-bob learning rate adjustment. The initial learning rate was commonly fixed as 0.1 and the batch size was 16.
\subsubsection{TransfoRNN vs. Transformer}
First of all, the TransfoRNN models were compared with the Transformer models. The number of LSTM layers in the TransfoRNN models was fixed at two, while the depth of the Transformer layers was increased from two to eight. For the Transformer models, the depths varied from two to sixteen. In this evaluation, the number of heads of the Transformer layers, in both types of models, was eight and the dimension of the feed-forward component is 1024 (see Eq.\ref{eq:trans_ff}). Both models were compared in the embedding dimensions of 512 and 1024. The PPLs are shown in Table \ref{tab:ppl_ptb_wiki2}.
\begin{table}[]
\caption{PPLs of the TransfoRNN models as compared to the Transformer models. On both corpora, TransfoRNN models with only two Transformer layers showed lowest PPLs. (\textit{d}: embedding dimension, \textit{N}: number of Transformer layers)}
\centering
\begin{tabular}{|l|l|c|c|c|c|c|}
\hline
\multicolumn{2}{|l|}{\multirow{2}{*}{}} & \multicolumn{1}{l|}{\multirow{2}{*}{\textit{N}}} & \multicolumn{2}{c|}{PTB} & \multicolumn{2}{c|}{WikiText-2} \\ \cline{4-7}
\multicolumn{2}{|l|}{} & \multicolumn{1}{l|}{} & PPL & \#param & PPL & \#param \\ \hline
& KN5 & - & 147.9 & - & 231.0 & - \\ \hline \hline
\multirow{8}{*}{\rotatebox{90}{\textit{d}=512}} & \multirow{4}{*}{Transformer} & 2 & 103.0 & 9.3M & 127.1 & 21.3M \\
& & 4 & 93.8 & 13.5M & 108.5 & 25.5M \\
& & 8 & 88.5 & 22.0M & 100.1 & 33.9M \\
& & 16 & 101.5 & 38.8M & 100.0 & 50.7M \\ \cline{2-7}
& \multirow{3}{*}{TransfoRNN} & 2 & \textbf{83.6} & 18.7M & \textbf{98.5} & 42.5M \\
& & 4 & 86.1 & 22.9M & 102.0 & 46.7M \\
& & 8 & 90.2 & 31.3M & 109.1 & 55.1M \\ \cline{2-7}
& LSTM & - & 99.0 & 9.3M & 100.7 & 21.3M \\ \hline \hline
\multirow{8}{*}{\rotatebox{90}{\textit{d}=1024}} & \multirow{4}{*}{Transformer} & 2 & 111.3 & 22.9M & 130.9 & 46.7M \\
& & 4 & 98.2 & 35.5M & 109.7 & 59.3M \\
& & 8 & 94.7 & 60.7M & 98.6 & 84.5M \\
& & 16 & 100.3 & 111.1M & 96.9 & 134.9M \\ \cline{2-7}
& \multirow{3}{*}{TransfoRNN} & 2 & \textbf{83.2} & 49.9M & \textbf{94.8} & 97.6M \\
& & 4 & 83.2 & 62.5M & 98.4 & 110.2M \\
& & 8 & 89.9 & 87.7M & 104.3 & 135.4M \\ \cline{2-7}
& LSTM & - & 105.6 & 27.0M & 114.4 & 50.9M \\ \hline
\end{tabular}
\label{tab:ppl_ptb_wiki2}
\end{table}
As shown by the results in Table \ref{tab:ppl_ptb_wiki2}, the TransfoRNN models outperformed the Transformer models in most of the settings. Particularly, the TransfoRNN models which comprise only two Transformer layers, i.e. $N=2$, outperformed a deeper Transformer models On the WikiText-2 corpus, for example, although deepening a Transformer model consistently reduced the model PPLs (from 130.9 to 96.9 as $N$ was increased from 2 to 8 in the setting of $d=1024$), a TransfoRNN model with merely 2 Transformer layers have shown a lower PPL, i.e. 94.8. Under the best setting of the Transformer models, the TransfoRNN models reduced the PPLs up to 5.5\% on the PTB corpus (from 88.5 to 83.6) and 2.2\% on the WikiText-2 corpus (from 96.9 to 94.8).
More importantly, the TransfoRNN models demand fewer number of parameters in the models to achieve such comparable results. Particularly, on the PTB corpus, the model size was reduced by 10.5\% (from 22.0M to 18.7M), while on the WikiText-2 corpus, the model size was reduced by 27.7\% (from 134.9M to 97.6M). Both results were measured under the best settings of the models.
We notices that for the PTB corpus, the PPL of the Transformer model increased when the model went deeper than 8 layers. This can be explained by the PTB corpus is a comparatively small dataset which would easily cause the model overfits.
The LSTM models were also evaluated in order to validate the performance of the TransfoRNN models. Such results indicate the performance of the TransfoRNN models when the Transformer layers were unplugged. As expected, the TransfoRNN models outperform the LSTM and Transformer models.
\subsubsection{Number of RNN layers}
Next, we assessed the optimal number of LSTM layers, i.e. $M$, in the TransfoRNN models. By varying the number of LSTM layers, 2 layers of LSTM are found to be optimum to the TransfoRNN models. The results are shown in Table \ref{tab:ppl_rnn_layer}.
\begin{table}[H]
\caption{PPLs of the TranfoRNN models with different numbers of RNN layers. }
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\multirow{2}{*}{\textit{d}} & \multirow{2}{*}{\textit{M}} & \multicolumn{2}{c|}{PTB} & \multicolumn{2}{c|}{WikiText-2} \\ \cline{3-6}
& & PPL & \#param & PPL & \#param \\ \hline
512 & 1 & 85.6 & 16.6M & 100.3 & 40.4M \\
& 2 & \textbf{83.5} & 18.7M & \textbf{98.5} & 42.5M \\
& 3 & 89.0 & 20.8M & 106.5 & 44.6M \\ \hline
1024 & 1 & 89.6 & 41.5M & 97.8 & 89.2M \\
& 2 & \textbf{83.2} & 49.9M & \textbf{94.8} & 97.6M \\
& 3 & 88.9 & 58.3M & 98.3 & 106.0M \\ \hline
\end{tabular}
\label{tab:ppl_rnn_layer}
\end{table}
We noticed that the model PPLs were sensitive to deeper LSTM stacks, particularly in the lower embedding dimension setting.
\subsubsection{With \& without positional encoding}
As the sequential information has already been captured in the LSTM layers, we evaluated whether the positional encoding (see Eq.~\ref{eq:pe}) is still required in this scenario. We evaluated the TransfoRNN models with and without positional encoding; when the positional encoding was removed, the input to the model was simply the raw embedding vectors. We compared models of different embedding dimensions: 256, 512, 1024 and 2048. The results are shown in Table \ref{tab:ppl_position}.
\begin{table}[H]
\caption{PPLs of the TransfoRNN models with and without positional encoding under different dimensions of embedding.}
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
\multirow{2}{*}{} & \multicolumn{2}{c|}{PTB} & \multicolumn{2}{c|}{WikiText-2} \\ \cline{2-5}
& with pos. & w/o pos. & with pos. & w/o pos. \\ \hline
256 & 93.1 & 94.0 & 114.9 & 107.8 \\ \hline
512 & 83.6 & 83.8 & 98.5 & 97.0 \\ \hline
1024 & 83.2 & \textbf{82.1} & 94.8 & \textbf{92.2} \\ \hline
2048 & 86.7 & 90.0 & 98.4 & 121.8 \\ \hline
\end{tabular}
\label{tab:ppl_position}
\end{table}
On both corpora, under the best settings, the TransfoRNN models without positional encoding showed lower PPLs, indicating that the positional encoding is redundant for the TransfoRNN models (bolded in Table \ref{tab:ppl_position}). This is in line with prior work on deep Transformer LMs suggesting that positional encoding is not required \cite{Irie2019, Irie2020}.
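For reference, the positional encoding that the ``w/o'' setting removes can be sketched numerically; we assume here the standard sinusoidal formulation of Vaswani et al., which Eq.~\ref{eq:pe} presumably follows.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Standard sinusoidal positional encoding (d_model must be even)."""
    pos = np.arange(seq_len)[:, None]          # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]       # (1, d_model/2)
    angle = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angle)                # even dims: sine
    pe[:, 1::2] = np.cos(angle)                # odd dims: cosine
    return pe

pe = positional_encoding(seq_len=10, d_model=8)
```

In the ``with pos.'' setting, this matrix is added to the word embeddings before the first Transformer layer; in the ``w/o pos.'' setting it is simply omitted, since the LSTM stack already models order.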
\subsubsection{Inference in the Transformer models}
The Transformer models in this work were evaluated by using the approach discussed in \cite{Al-Rfou2019}, in which the Transformer uses only the representation at the final position, i.e. $z_{i-1}^{(N)}$, for inference, instead of the entire set of representations. Although a certain gain in terms of PPL can be obtained, the inference time increases drastically, in proportion to the length of the considered context.
In order to validate our results, we compared both settings in the Transformer models. Furthermore, as the approach was originally experimented with at the character level \cite{Al-Rfou2019}, it is important to also assess it at the word level, which is how our models are configured here. The results are shown in Table \ref{tab:ppl_transfp}.
\begin{table}[H]
\caption{PPLs of the Transformer models evaluated by using all or only the representation in the final position. }
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\multirow{2}{*}{\textit{d}} & \multirow{2}{*}{\textit{N}} & \multicolumn{2}{c|}{PTB} & \multicolumn{2}{c|}{WikiText-2} \\ \cline{3-6}
& & all & final & all & final \\ \hline
512 & 2 & 107.2 & 103 & 133.4 & 124.3 \\
& 4 & 96.4 & 94.0 & 116.9 & 108.5 \\
& 8 & 94.3 & 88.5 & 106.3 & 98.6 \\
& 16 & 102.9 & 94.4 & 105.9 & 96.9 \\ \hline
1024 & 2 & 113.5 & 111.3 & 137 & 130.9 \\
& 4 & 104.2 & 99.3 & 121.3 & 109.7 \\
& 8 & 102.4 & 94.7 & 110.1 & 101.7 \\
& 16 & 107.0 & 100.3 & 111.3 & 103.3 \\ \hline
\end{tabular}
\label{tab:ppl_transfp}
\end{table}
As shown in Table \ref{tab:ppl_transfp}, evaluating the Transformer models based only on the representation in the final position consistently shows lower PPLs, under all settings. Hence, we are assured that the proposed TransfoRNN models were compared with a more competitive Transformer model.
However, the TransfoRNN models discussed in this paper consistently used the entire set of representations for prediction, which we found to give lower model PPLs.
\subsection{Speech recognition}
We also evaluated the performance of the TransfoRNN model on a speech recognition \textit{N}-best re-ranking task by using the LibriSpeech corpus \cite{Panayotov2015}. The system was built following the recipe in the Kaldi toolkit \cite{Povey2011}. The acoustic model comprises a 17-layer TDNN taking 40-dimension MFCC and 100-dimension $i$-vector features as input. The LM used in the decoder was the original official trigram model.
Both the TransfoRNN and Transformer models were trained on the transcripts of the 960-hour training set, consisting of 9.4M words; the vocabulary contains 200K words. The TransfoRNN models consist of two Transformer layers followed by two LSTM layers, while the counterpart Transformer models consist of two to eight layers. Other settings in both models are the same: eight attention heads, 1024-dimension embedding and 2048-dimension hidden layer.
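The \textit{N}-best re-ranking step itself can be sketched as follows; the interpolation weight and score names are illustrative assumptions, not the exact rescoring recipe used in the Kaldi pipeline.

```python
def rerank_nbest(hypotheses, lm_score, lam=0.5):
    """Re-rank N-best hypotheses by interpolating decoder and LM scores.

    hypotheses: list of (text, decoder_log_score) pairs from the decoder.
    lm_score:   function mapping text -> log-probability under the neural LM.
    lam:        interpolation weight (illustrative choice).
    """
    rescored = [(text, (1 - lam) * s + lam * lm_score(text))
                for text, s in hypotheses]
    return max(rescored, key=lambda pair: pair[1])[0]

# Toy example with a fake LM that prefers shorter hypotheses.
nbest = [("the cat sat", -10.0), ("the cat sat down", -9.5)]
best = rerank_nbest(nbest, lm_score=lambda t: -len(t.split()), lam=0.5)
```

Each hypothesis keeps its first-pass decoder score; the neural LM (TransfoRNN or Transformer) only re-scores the candidate list, so the WER gain depends on whether the oracle hypothesis is present in the N-best list at all.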
The PPLs and WERs of the TransfoRNN and Transformer models were compared on the four datasets in the corpus: dev\_clean, test\_clean, dev\_other and test\_other. The results are shown in Table \ref{tab:libri_ppl} and \ref{tab:wer}.
\begin{table}[H]
\caption{PPLs of the TransfoRNN and Transformer models. The TransfoRNN model which comprises two Transformer layers slightly outperforms the Transformer model with eight layers.}
\centering
\begin{tabular}{l|c|cc|cc}
\hline
\multirow{2}{*}{} & \multirow{2}{*}{\textit{N}} & \multicolumn{2}{c}{clean} & \multicolumn{2}{c}{other} \\
& & dev & test & dev & test \\ \hline
Fourgram & - & 283.7 & 289.5 & 250.8 & 259.2 \\ \hline
Transformer & 2 & 177.7 & 182.5 & 162.7 & 166.9 \\
& 4 & 147.9 & 152.2 & 134.9 & 138.9 \\
& 8 & 131.2 & 135.2 & 119.7 & 122.6 \\ \hline
TransfoRNN & 2 & \textbf{128.2} & \textbf{133.3} & \textbf{117.8} & \textbf{119.8} \\ \hline
\end{tabular}
\label{tab:libri_ppl}
\end{table}
\begin{table}[H]
\caption{WERs of the TransfoRNN and Transformer models. The results of both models are comparable.}
\centering
\begin{tabular}{l|c|cc|cc}
\hline
\multirow{2}{*}{} & \multirow{2}{*}{\textit{N}} & \multicolumn{2}{c|}{clean} & \multicolumn{2}{c}{other} \\
& & dev & test & dev & test \\ \hline
Trigram & - & 3.41 & 3.85 & 9.32 & 9.35 \\ \hline
Transformer & 2 & 3.33 & 3.82 & 9.12 & 9.30 \\
& 4 & 3.21 & 3.73 & 8.92 & 9.20 \\
& 8 & \textbf{3.13} & 3.72 & \textbf{8.72} & 9.09 \\ \hline
TransfoRNN & 2 & 3.19 & \textbf{3.66} & 8.80 & \textbf{9.02} \\ \hline
\end{tabular}
\label{tab:wer}
\end{table}
For the PPLs, the TransfoRNN models, which consist of only two Transformer layers, consistently gave lower PPLs; these results are consistent with those presented earlier. For the WERs, although both models reduced the baseline WERs output by the decoder, the TransfoRNN gave only comparable results in this evaluation.
\section{Conclusions}
\label{sec:conclusion}
This paper has discussed the use of recurrent layers to improve Transformer LMs by encapsulating sequential information into the models. The proposed model, referred to as TransfoRNN, has shown lower PPLs with fewer parameters as compared to the vanilla Transformer. Moreover, we have shown empirically that, with the assistance of the recurrent layers, fewer Transformer layers are required: a TransfoRNN model with only two layers of LSTM and two layers of Transformer outperformed vanilla Transformer models with deeper networks, e.g. 8 or 16 layers. Through speech recognition evaluation on the LibriSpeech corpus, the TransfoRNN model showed results comparable to the Transformers.
For future work, the TransfoRNN shall be evaluated under BPE- or character-based settings with larger datasets, which is more applicable to state-of-the-art end-to-end ASR systems.
\bibliographystyle{IEEEtran}
\section{\label{sec:Indroduction}Introduction}
Quantum computing (QC) has been demonstrated theoretically to provide significant speedup over classical computers in several computational tasks \cite{harrow2017quantum, nielsen2002quantum}. Notable examples include the factoring of large numbers \cite{shor1994algorithms} and searching an unstructured database \cite{grover1996fast}. Recent advances in quantum hardware by companies such as IBM \cite{cross2018ibm}, Google \cite{arute2019quantum} and IonQ \cite{debnath2016demonstration} enable the implementation of quantum algorithms on real devices.
At the same time, the development of various machine learning (ML) techniques has accelerated progress in fields such as natural language processing \cite{cho2014learning,sutskever2014sequence}, automatic speech recognition \cite{graves2013speech,graves2014towards,sak2015fast,sak2014long}, computer vision \cite{krizhevsky2012imagenet,szegedy2015going,simonyan2014very,lecun1998gradient,he2016deep}, complex sequential decision making \cite{silver2017mastering,silver2016mastering,Mnih2015Human-levelLearning,schrittwieser2019mastering,badia2020agent57} and many more.
Considering the ever-increasing volume and complexity of accessible data, it is reasonable to examine whether we can build more powerful ML methods with the help of a novel computing paradigm. QC is a leading candidate, and the attempt to address this question led to the development of quantum machine learning (QML) \cite{dunjko2018machine,biamonte2017quantum}.
Sequential modeling is a common ML task and has been studied extensively in the classical setting.
For example, the recurrent neural network (RNN) \cite{dupond2019thorough,abiodun2018state,tealab2018time} and its variants--such as gated recurrent units (GRU) \cite{cho2014properties} and long short-term memory (LSTM) \cite{hochreiter1997long}--have a long history of being applied in machine translation \cite{cho2014learning,sutskever2014sequence}, speech recognition \cite{graves2013speech,graves2014towards,sak2015fast,sak2014long} and time-series analysis \cite{connor1994recurrent,hua2019deep}, to name just a few.
Indeed, sequential modeling has also been studied in the QML field via the use of quantum recurrent networks (QRNN) \cite{bausch2020recurrent,takaki2020learning} and its variants such as quantum long short-term memory (QLSTM) \cite{chen2020quantum}.
However, existing methods using QRNNs and their variants for sequential modeling suffer from a major drawback: long training time. QML methods for sequential modeling such as QRNN and QLSTM largely depend on the iterative optimization of quantum circuit parameters. Notable examples are variational quantum algorithms (VQA) \cite{cerezo2021variational} and quantum circuit learning (QCL) \cite{mitarai2018quantum}; both require a significant number of circuit evaluations to calculate the gradients and update the circuit parameters \cite{schuld2019evaluating}. For example, the commonly used \emph{parameter-shift} quantum gradient calculation method requires two circuit evaluations for each parameter \cite{mitarai2018quantum,schuld2019evaluating}.
Intuitively, one can ask the following question: can we only train part of the model instead of all of the parameters and achieve comparable results? The answer is yes when classical RNNs are randomly initialized to process the sequence and only the final linear layer is trained. Such architecture is called \emph{reservoir computing} (RC) \cite{jaeger2004harnessing,jaeger2001echo,tanaka2019recent}.
While RC based on classical RNNs has demonstrated significant success, as described in \cite{jaeger2004harnessing,jaeger2001echo}, it is not yet clear whether its quantum counterparts (e.g. quantum RNNs and variants) can achieve comparable or superior results. In this paper, we propose a reservoir computing (RC) method based on randomly initialized quantum circuits. Specifically, we investigate the quantum version of RNN-based RC. We consider the following quantum RNNs: the quantum recurrent neural network (QRNN), the quantum long short-term memory (QLSTM) and the quantum gated recurrent unit (QGRU). We apply the untrained QRNN, QGRU and QLSTM as reservoirs and train only the final classical linear layer, which processes the output from the respective quantum reservoirs.
The numerical simulations show that the QRNN-RC can reach results comparable to fully trained QRNN models in several function approximation and time-series prediction tasks. Since the QRNNs in the proposed model do not need to be trained, the overall process is much faster than for the fully trained ones. We also compare with classical RNN-based RC and show that in most cases the quantum version learns faster or requires fewer training epochs.
The paper is organized as follows: In \sectionautorefname{\ref{sec:ReservoirComputing}} the basic notion of reservoir computing is described. In \sectionautorefname{\ref{sec:VQC}} we introduce the VQC which is the building block of QML models. We describe various kinds of QRNNs in the \sectionautorefname{\ref{sec:QRNN}}. The experimental settings are described in \sectionautorefname{\ref{sec:Exp}} and the results are shown in \sectionautorefname{\ref{sec:Results}}. Finally, we discuss the results in \sectionautorefname{\ref{sec:Discussion}} and provide concluding remarks in \sectionautorefname{\ref{sec:Conclusion}}.
\section{\label{sec:ReservoirComputing}Reservoir Computing}
A fundamental task in machine learning is to model temporal or sequential data. Examples of this include ML models trained to process audio or text data to perform natural language processing \cite{cho2014learning,sutskever2014sequence,graves2013speech,graves2014towards,sak2015fast,sak2014long}, or to analyze financial data to provide better decision making \cite{krollner2010financial,dingli2017financial}. Various recurrent neural networks (RNNs) are often used to achieve these tasks. However, there are challenges when training RNNs, such as vanishing or exploding gradients \cite{hochreiter1998vanishing,pascanu2013difficulty}, and training RNNs is usually computationally expensive.
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[x=2.2cm,y=1.4cm]
\draw[color=black, fill=gray!5, thin](1, 2) circle (1.55 and 2.5);
\foreach \x in {1, 2, 3}
\draw [black, thick] (-1.5, \x) circle [radius=7pt];
\draw[black, thick] (1, 0) circle [radius=9pt];
\draw[black, thick] (1.5, 1) circle [radius=9pt];
\draw[black, thick] (0.5, 2) circle [radius=9pt];
\draw[black, thick] (1, 3) circle [radius=9pt];
\draw[black, thick] (1.25, 4) circle [radius=9pt];
\draw [black, thick] (3.9, 2.2) circle [radius=9pt];
\foreach \x in {1, 3}
\draw[-stealth, black, thin] (-1.35, \x) -- (1 - 0.2, 0 -0.2 + 0.1 * \x);
\foreach \x in {2}
\draw[-stealth, black, thin] (-1.35, \x) -- (1.5 - 0.2, 0.8 + 0.1 * \x);
\foreach \x in {2, 3}
\draw[-stealth, black, thin] (-1.35, \x) -- (1.25 - 0.2, 3.8 + 0.1 * \x);
\draw[-stealth, black, thin] (-1.35, 1) -- (0.5 - 0.2, 1.8 + 0.1);
\draw[-stealth, black!60!green, thick] (1.5, 1 + 0.25) to [bend right] (1.25, 4 - 0.25);
\draw[-stealth, black!60!green, thick] (1.5 - 0.1, 1 + 0.25) to [bend left] (0.5 + 0.2, 2 - 0.25);
\draw[-stealth, black!60!green, thick] (1, 0.3) to [bend right=10] (1, 2.7);
\draw[-stealth, black!60!green, thick] (0.6, 2.2) to [bend left] (0.9, 2.8);
\draw[-stealth, black!60!green, thick] (1.65, 0.8) to [bend right=-40, in=35, out=120, looseness=4] (1.35, 0.8);
\draw[-stealth, black!60!green, thick] (0.85, 3.2) to [bend left=-40, in=35, out=100, looseness=4] (1.1, 3.2);
\draw[-stealth, blue, dashed, thick] (1 + 0.15, 0) to [bend right=10] (3.9 - 0.2, 2 + 0.1 * 0);
\draw[-stealth, blue, dashed, thick] (1 + 0.15, 3) -- (3.9 - 0.2, 2 + 0.1 * 2);
\draw[-stealth, blue, dashed, thick] (1.25 + 0.15, 4) to [bend right=-10] (3.9 - 0.2, 2 + 0.1 * 4);
\draw(-.9, 3.4) node{${W^{in}}$};
\draw(2.9, 3.5) node{${W^{out}}$};
\draw(1.8, 1.95) node{${W}$};
\draw(-1.9, 2) node{${s_k}$};
\draw(1, -0.75) node{${\textbf{x}_k}$};
\draw(4.3, 2.2) node{${y_k}$};
\end{tikzpicture}
\end{center}
\caption{{\bfseries Reservoir computing (RC).}}
\label{Fig:classical_reservoir_computing}
\end{figure}
Reservoir computing (RC) is defined in \cite{Miikkulainen2017} as an approach to processing sequential data, where a large, nonlinear, randomly connected, and fixed recurrent network (the \emph{reservoir}) is separated from a linear output layer with trainable parameters. It is assumed that the complexity of the recurrent network allows one to learn the desired output by using only a linear combination of its activations \cite{jaeger2004harnessing}. The linear output layer is fast to train, so it helps to mitigate the issues with RNN training discussed above. RC based on RNNs, as depicted in \figureautorefname{\ref{Fig:classical_reservoir_computing}}, is sometimes referred to as the echo state network \cite{jaeger2001echo}. It can be summarized mathematically as follows:
\begin{align}
\mathbf{x}_k &= \mathbf{f}(W^{in} s_k + W \mathbf{x}_{k-1})\nonumber\\
y_k &= W^{out} \mathbf{x}_k,
\end{align}
where $s_k$ and $\mathbf{x}_k$ correspond to the input signal and the state of the reservoir, respectively, at step $k$. Here, $W$, $W^{in}$, and $W^{out}$ correspond to the internal weights of the reservoir, the weights connecting the input nodes to the nodes in the reservoir, and the weights connecting the reservoir nodes to the output nodes, respectively. Only $W^{out}$ needs to be trained; the other weights are randomly initialized and fixed.
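As a concrete illustration of the update rule above, a minimal classical echo state network with a ridge-regression readout can be written as follows; the reservoir size, spectral scaling and regularization constants are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 50, 1

# Fixed random weights: only W_out will be trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state)

def run_reservoir(inputs):
    """Collect reservoir states x_k = tanh(W_in s_k + W x_{k-1})."""
    x = np.zeros(n_res)
    states = []
    for s in inputs:
        x = np.tanh(W_in @ np.atleast_1d(s) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Train the readout to predict the next value of a sine wave.
t = np.linspace(0, 8 * np.pi, 400)
u, target = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ target)
mse = float(np.mean((X @ W_out - target) ** 2))
```

Only the linear solve for `W_out` constitutes training; the recurrent weights never change, which is precisely what makes RC cheap compared to backpropagation through time.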
With the success of classical RNN-based reservoirs, it is natural to consider a similar idea in the quantum regime. Specifically, we consider the quantum version of common RNN architectures such as quantum RNN, quantum long short-term memory (QLSTM) and quantum gated recurrent unit (QGRU).
Following the idea of classical RNN-based RC, we replace the classical neural networks inside these RNN architectures with variational quantum circuits (VQCs), which have been shown to have certain advantages over classical neural networks \cite{caro2022generalization,du2018expressive,abbas2021power}.
In the next section, we will describe the building blocks of these quantum RNNs.
\section{\label{sec:VQC}Variational Quantum Circuits}
A variational quantum circuit (VQC), also known as a parameterized quantum circuit (PQC), is a quantum circuit which depends on tunable parameters. The parameters can be tuned via gradient-based \cite{schuld2019evaluating,pellow2021comparison} or gradient-free algorithms \cite{franken2020gradient,pellow2021comparison}. \figureautorefname{\ref{Fig:GeneralVQC}} illustrates a generic VQC, which consists of three parts: state preparation, the parameterized circuit, and measurement. In the figure, $U(\mathbf{x})$ represents the state preparation circuit which encodes classical data $\mathbf{x}$ into a quantum state. $V(\boldsymbol{\theta})$ represents the variational or parameterized circuit with \emph{learnable} or adjustable parameters $\boldsymbol{\theta}$, which, in the context of this paper, are optimized using gradient descent. The output is obtained as a classical bit string through measurement of a subset (or all) of the qubits.
\begin{figure}[hbtp]
\begin{center}
\scalebox{1.4}{
\begin{minipage}{10cm}
\Qcircuit @C=1em @R=1em {
\lstick{\ket{0}} & \multigate{3}{U(\mathbf{x})} & \qw & \multigate{3}{V(\boldsymbol{\theta})} & \qw & \meter \qw \\
\lstick{\ket{0}} & \ghost{U(\mathbf{x})} & \qw & \ghost{V(\boldsymbol{\theta})} & \qw & \meter \qw \\
\lstick{\ket{0}} & \ghost{U(\mathbf{x})} & \qw & \ghost{V(\boldsymbol{\theta})} & \qw & \meter \qw \\
\lstick{\ket{0}} & \ghost{U(\mathbf{x})} & \qw & \ghost{V(\boldsymbol{\theta})} & \qw & \meter \qw \\
}
\end{minipage}
}
\end{center}
\caption{{\bfseries Generic architecture for variational quantum circuits (VQC).}
$U(\mathbf{x})$ is a quantum circuit for encoding the classical input data $\mathbf{x}$ into a quantum state and $V(\boldsymbol{\theta})$ is the variational circuit with tunable or learnable parameters $\boldsymbol{\theta}$ which is optimized via gradient-based or gradient-free methods. This circuit is followed by measurement of some or all of the qubits.
}
\label{Fig:GeneralVQC}
\end{figure}
Noteworthy advantages of VQCs include resilience to quantum noise \cite{kandala2017hardware,farhi2014quantum,mcclean2016theory}, which makes them favorable for NISQ-era quantum devices, and the ability to be trained with smaller datasets \cite{caro2022generalization}.
Quantum machine learning methods using VQCs have demonstrated varying degrees of success. Notable examples of VQC applications include function approximation \cite{chen2020quantum, mitarai2018quantum}, classification \cite{mitarai2018quantum,schuld2018circuit,havlivcek2019supervised,Farhi2018ClassificationProcessors,benedetti2019parameterized,mari2019transfer, abohashima2020classification, easom2020towards, sarma2019quantum, stein2020hybrid,chen2020hybrid,chen2020qcnn,wu2020application,stein2021quclassi,chen2021hybrid,jaderberg2021quantum,mattern2021variational,qi2021qtn,kyriienko2022unsupervised,li2022quantum,wu2022scalable,nguyen2022bayesian}, generative modeling \cite{dallaire2018quantum,stein2020qugan, zoufal2019quantum, situ2018quantum,nakaji2020quantum}, deep reinforcement learning \cite{chen19, chen2022variational, lockwood2020reinforcement,jerbi2019quantum,Chih-ChiehCHEN2020,wu2020quantum,skolik2021quantum,jerbi2021variational,hsiao2022unentangled,yun2022quantum,sequeira2022variational,heimann2022quantum,schenk2022hybrid,chen2022quantum}, sequence modeling \cite{chen2020quantum,bausch2020recurrent, takaki2020learning}, speech recognition \cite{yang2020decentralizing,qi2022classical}, natural language processing \cite{yang2022bert,di2022dawn}, metric and embedding learning \cite{lloyd2020quantum, nghiem2020unified}, transfer learning \cite{mari2019transfer} and federated learning \cite{chen2021federated,yang2020decentralizing,chehimi2021quantum}.
Additionally, it has been shown that VQCs may have more expressive power than classical neural networks \cite{sim2019expressibility,lanting2014entanglement,du2018expressive,abbas2021power}. The \emph{expressive power} is defined as the ability to represent certain functions or distributions given a limited number of parameters or a specified model size.
Indeed, artificial neural networks (ANNs) are known as \emph{universal approximators} \cite{hornik1989multilayer}, i.e. a neural network with even one single hidden layer can, in principle, approximate any computable function. However, as the complexity of the function grows, the number of neurons required in the hidden layer(s) may become extremely large, increasing the demand for computational resources. Thus, it is worthwhile to examine whether VQCs can perform better than their classical counterparts with an equally limited number of parameters.
In the optimization procedure, we employ the \emph{parameter-shift} method to derive the analytical gradient of the quantum circuits, as described in \cite{schuld2019evaluating,bergholm2018pennylane}.
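As a toy illustration of the parameter-shift rule: for an expectation value generated by a single Pauli rotation, two shifted circuit evaluations yield the exact gradient. Here, instead of an actual circuit, we use the closed-form expectation $\langle Z\rangle(\theta)=\cos\theta$ obtained after applying $R_y(\theta)$ to $\ket{0}$.

```python
import math

def parameter_shift_grad(f, theta, shift=math.pi / 2):
    # Two evaluations give the exact gradient for Pauli-rotation gates.
    return (f(theta + shift) - f(theta - shift)) / 2

expval = lambda theta: math.cos(theta)  # <Z> after R_y(theta) on |0>
grad = parameter_shift_grad(expval, 0.3)
# Analytically, d<Z>/dtheta = -sin(theta).
```

This is why a full gradient step costs two circuit evaluations per parameter: with $P$ parameters, $2P$ expectation-value estimates are needed per update, which is exactly the training cost that the reservoir approach in this paper avoids.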
In this paper, VQCs are operated in the following ways: (i) in the reservoir computing cases, the VQCs are randomly initialized and their parameters are then fixed; no quantum gradients are needed in this case. (ii) In the full optimization cases, the VQCs are optimized through gradient-based methods.
In the next section, we describe the quantum version of RNNs used in this work.
\section{\label{sec:QRNN}Quantum Recurrent Neural Network}
RNNs are a special kind of ML model designed to handle sequential modeling via memory capabilities that keep track of previous information. What makes RNNs and their variants special is that the output from the RNN is fed back into the model to retain previous information; the value fed back is called the \emph{hidden state}. This is the major difference between an RNN and a fully-connected neural network.
RNNs can be used to learn and output a whole sequence or predict a single value. In the first case, at each time step $t$, given the hidden state from the previous time $h_{t-1}$ and the input $x_{t}$, the RNN will output the prediction $y_{t}$ and the hidden state $h_{t}$. In the other case, if we choose to use the RNN to predict a single value, then given an input sequence $\{x_0, x_1, \cdots, x_n\}$, only the final $y_{n}$ will be retained.
The generic form of a RNN suffers from several challenges such as vanishing gradients \cite{hochreiter1998vanishing,pascanu2013difficulty} and failing to learn long-range temporal dependencies \cite{hochreiter1998vanishing,pascanu2013difficulty}. Various modified forms of RNNs have been proposed to fix these issues such as long short-term memory (LSTM) \cite{hochreiter1997long} and gated recurrent units (GRU) \cite{cho2014properties}, which have demonstrated superior performance over the generic RNN in a wide range of applications \cite{salehinejad2017recent}.
RNN and its variants such as LSTM and GRU can be used to serve as a high-dimensional dynamical system or as a \emph{reservoir}. In this case, the RNN is not trained, meaning that its parameters are fixed after the random initialization \cite{lukovsevivcius2009reservoir}. The only trainable part is the final linear layer which will process the output from the RNN.
\subsection{Quantum Recurrent Neural Network}
\begin{figure}[htbp]
\includegraphics[width=0.6\linewidth]{diagrams/QRNN.pdf}
\caption{{\bfseries The quantum recurrent neural networks (QRNN) architecture.} }
\label{fig:QRNN}
\end{figure}
The quantum recurrent neural network (QRNN) is the quantum version of the conventional RNN. The major distinction is that the classical neural network is replaced by a VQC, as shown in \figureautorefname{\ref{fig:QRNN}}. The formulation of a QRNN cell is given by
\begin{subequations}
\allowdisplaybreaks
\begin{align}
h_{t} &= \tanh(VQC(v_{t}))
\label{eqn:qrnn-h}\\
y_{t} &= NN(h_{t}) \label{eqn:qrnn-yt}
\end{align}
\label{eqn:qrnn}
\end{subequations}
where the input is the concatenation $v_t$ of the hidden state $h_{t-1}$ from the previous time step and the current input vector $x_t$. The VQC is detailed in the \sectionautorefname{\ref{sec:VQCcomponentsForQRNN}}. In this work, $x_t$ is set to be one-dimensional and the hidden unit $h_{t}$ is set to be three-dimensional.
Since the model is built to generate the prediction of a scalar value, the output from the QRNN, $h_{t}$, at the last time step (in the context of this paper the last step is $t = 4$) will be processed by a classical neural network layer $NN$ (as in \equationautorefname{\ref{eqn:qrnn-yt}}).
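The QRNN recurrence above can be sketched as follows; the `vqc` here is a stand-in stub (a fixed random linear map) used only to make the control flow concrete, not an actual quantum circuit.

```python
import numpy as np

rng = np.random.default_rng(42)
hidden_dim, input_dim = 3, 1

# Stand-in for the VQC: a fixed random linear map from the concatenated
# [h_{t-1}; x_t] vector to hidden_dim measurement expectations.
W_vqc = rng.normal(size=(hidden_dim, hidden_dim + input_dim))
vqc = lambda v: W_vqc @ v

w_out = rng.normal(size=hidden_dim)  # final classical linear layer NN

def qrnn_predict(sequence):
    h = np.zeros(hidden_dim)
    for x_t in sequence:
        v_t = np.concatenate([h, [x_t]])  # concatenate h_{t-1} and x_t
        h = np.tanh(vqc(v_t))             # hidden-state update
    return float(w_out @ h)               # scalar prediction, last step only

y = qrnn_predict([0.1, 0.2, 0.3, 0.4])
```

In the reservoir-computing setting, `W_vqc` stays at its random initialization and only `w_out` is fit, which is exactly the split between fixed reservoir and trainable readout described earlier.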
\subsection{Quantum Long Short-term Memory}
\begin{figure}[htbp]
\includegraphics[width=0.6\linewidth]{diagrams/QLSTM.pdf}
\caption{{\bfseries The quantum long short-term memory (QLSTM) architecture.} }
\label{fig:QLSTM}
\end{figure}
The quantum long short-term memory (QLSTM) \cite{chen2020quantum} is an improved version of QRNN. There are two memory components in a QLSTM, namely the hidden state $h_t$ and the cell or internal state $c_t$.
A formal mathematical formulation of a QLSTM cell is given by
\begin{subequations}
\allowdisplaybreaks
\begin{align}
f_{t} &= \sigma\left(VQC_{1}(v_t)\right) \label{eqn:qlstm-f}\\
i_{t} &= \sigma\left(VQC_{2}(v_t)\right) \label{eqn:qlstm-i}\\
\tilde{C}_{t} &= \tanh \left(VQC_{3}(v_t)\right) \label{eqn:qlstm-bigC}\\
c_{t} &= f_{t} * c_{t-1} + i_{t} * \tilde{C}_{t} \label{eqn:qlstm-c}\\
o_{t} &= \sigma\left(VQC_{4}(v_t)\right) \label{eqn:qlstm-o}\\
h_{t} &= VQC_{5}(o_{t} * \tanh \left(c_{t}\right)) \label{eqn:qlstm-h}\\
\tilde{y_{t}} &= VQC_{6}(o_{t} * \tanh \left(c_{t}\right)), \label{eqn:qlstm-y}\\
y_{t} &= NN(\tilde{y_{t}}) \label{eqn:qlstm-final-y}
\end{align}
\label{eqn:qlstm}
\end{subequations}
where the input is the concatenation $v_t$ of the hidden state $h_{t-1}$ from the previous time step and the current input vector $x_t$. The VQC is detailed in \sectionautorefname{\ref{sec:VQCcomponentsForQRNN}}. In this work, $x_t$ is set to be one-dimensional and the hidden unit $h_{t}$ is set to be three-dimensional. The cell state or internal state $c_{t}$ is set to be four-dimensional.
Since the model is built to generate the prediction of a scalar value, the output from the QLSTM $\tilde{y_{t}}$ at the last time step (in the context of this paper the last step is $t = 4$) will be processed by a classical neural network layer $NN$ to get $y_{t}$.
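The QLSTM cell equations can likewise be sketched with stand-in stubs for the six VQCs; the random linear maps below are placeholders for the actual circuits and serve only to verify the gating logic and dimension bookkeeping (one-dimensional input, three-dimensional hidden state, four-dimensional cell state).

```python
import numpy as np

rng = np.random.default_rng(7)
in_dim, hid, cell = 1, 3, 4

def make_vqc(out_dim, in_size):
    # Placeholder for a VQC: a fixed random linear map (illustrative only).
    M = rng.normal(size=(out_dim, in_size))
    return lambda v, M=M: M @ v

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
v_size = hid + in_dim
vqc1, vqc2, vqc3, vqc4 = [make_vqc(cell, v_size) for _ in range(4)]
vqc5, vqc6 = make_vqc(hid, cell), make_vqc(hid, cell)
w_nn = rng.normal(size=hid)           # final classical linear layer NN

def qlstm_step(x_t, h, c):
    v = np.concatenate([h, [x_t]])    # v_t = [h_{t-1}; x_t]
    f = sigmoid(vqc1(v))              # forget gate
    i = sigmoid(vqc2(v))              # input gate
    C = np.tanh(vqc3(v))              # candidate cell state
    c_new = f * c + i * C             # cell-state update
    o = sigmoid(vqc4(v))              # output gate
    h_new = vqc5(o * np.tanh(c_new))  # new hidden state
    y = float(w_nn @ vqc6(o * np.tanh(c_new)))
    return h_new, c_new, y

h, c = np.zeros(hid), np.zeros(cell)
for x in [0.1, 0.2, 0.3, 0.4]:
    h, c, y = qlstm_step(x, h, c)
```

As with the QRNN sketch, replacing each `make_vqc` stub with a real parameterized circuit recovers the hybrid model, and freezing those circuits while training only `w_nn` yields the reservoir variant studied in this paper.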
\subsection{Quantum Gated Recurrent Unit}
\begin{figure}[htbp]
\includegraphics[width=0.6\linewidth]{diagrams/QGRU.pdf}
\caption{{\bfseries The quantum gated recurrent units (QGRU) architecture.} }
\label{fig:QGRU}
\end{figure}
The quantum gated recurrent unit (QGRU) is another quantum RNN with gating mechanisms similar to the QLSTM, but with fewer parameters and a simpler architecture.
A formal mathematical formulation of a QGRU cell is given by
\begin{subequations}
\allowdisplaybreaks
\begin{align}
r_{t} &= \sigma\left(VQC_{1}(v_t)\right) \label{eqn:qgru-rt}\\
z_{t} &= \sigma\left(VQC_{2}(v_t)\right) \label{eqn:qgru-zt}\\
o_{t} &= cat(x_{t}, r_{t} * H_{t-1}) \label{eqn:qgru-ot}\\
\tilde{H}_{t} &= \tanh \left( VQC_{3}(o_{t}) \right) \label{eqn:qgru-tilda-Ht}\\
H_{t} &= z_{t} * H_{t-1} + (1 - z_{t}) * \tilde{H}_{t} \label{eqn:qgru-Ht}\\
y_{t} &= NN(H_{t}) \label{eqn:qgru-yt}
\end{align}
\label{eqn:qgru}
\end{subequations}
where the input is the concatenation $v_t$ of the hidden state $H_{t-1}$ from the previous time step and the current input vector $x_t$. The VQC is detailed in the \sectionautorefname{\ref{sec:VQCcomponentsForQRNN}}. In this work, the $x_t$ is set to be one-dimensional and the hidden unit $H_{t}$ is set to be three-dimensional.
Since the model is built to generate the prediction of a scalar value, the output from the QGRU $H_{t}$ at the last time step (in the context of this paper the last step is $t = 4$) will be processed by a classical neural network layer $NN$ to get $y_{t}$.
\subsection{VQC Components}
\label{sec:VQCcomponentsForQRNN}
The specific VQC components used in this paper are represented in \figureautorefname{\ref{Fig:Basic_VQC_Hadamard_MoreEntangle}}. As previously mentioned, a VQC includes the following three parts: an \emph{encoding circuit}, a \emph{variational circuit} and \emph{quantum measurement}.
\subsubsection{Encoding Circuit}
A quantum state with $N$-qubits can be defined as
\begin{equation}
\label{eqn:quantum_state_vec}
\ket{\psi} = \sum_{(q_1,q_2,\cdots,q_N) \in \{ 0,1\}} c_{q_1, q_2, \cdots, q_N}\ket{q_1} \otimes \ket{q_2} \otimes \cdots \otimes \ket{q_N},
\end{equation}
where $ c_{q_1, \cdots, q_N} \in \mathbb{C}$ is the complex \emph{amplitude} for each basis state and $q_i \in \{0,1\}$.
The square of the amplitude $c_{q_1, \cdots, q_N}$ is the measurement \emph{probability} for the corresponding value in $\ket{q_1} \otimes \ket{q_2} \otimes \cdots \otimes \ket{q_N}$, such that the total probability is $1$:
\begin{equation}
\label{eqn:quantum_state_vec_normalization_condition}
\sum_{(q_1, \cdots, q_N) \in \{0, 1\}} ||c_{q_1, \cdots, q_N}||^2 = 1.
\end{equation}
The encoding circuit maps classical data values to quantum amplitudes. In this paper, we use the encoding procedure described in \cite{chen2020quantum}. The circuit is initialized in the ground state and Hadamard gates are then applied to create an unbiased initial state. We use a two-angle encoding, similar to dense angle encoding \cite{larose2020robust}, in which each data value is encoded into a qubit with a sequence of two rotation gates, $R_{y}$ and $R_{z}$. The rotation angles are given by $f(x_i) = \arctan(x_i)$ and $g(x_i) = \arctan(x_{i}^{2})$, respectively, where $x_i$ is a component of the data vector $\bold{x}$. The quantum state of the encoded data takes the form
\begin{equation}
\label{eqn:dense_angle_encoding}
\ket{\bold{x}} = \bigotimes_{i=1}^{N}\left[\cos\left(f(x_{i})+\frac{\pi}{4}\right)\ket{0} + \exp{(ig(x_{i}))} \sin\left(f( x_{i})+\frac{\pi}{4}\right)\ket{1}\right],
\end{equation}
where $N$ is the dimensionality of $\bold{x}$ and the $\pi/4$ angle offset accounts for the initial Hadamard rotations.
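To make the encoding concrete, the following minimal NumPy sketch (an illustration, not the actual implementation) builds the encoded product state qubit by qubit and checks that setting all $x_i = 0$ recovers the unbiased Hadamard superposition:

```python
import numpy as np

def encode(x):
    """Two-angle encoding: each data value x_i becomes the qubit state
    cos(f + pi/4)|0> + exp(i g) sin(f + pi/4)|1>, with f = arctan(x_i)
    and g = arctan(x_i**2); the full state is the tensor product."""
    state = np.array([1.0 + 0j])
    for xi in x:
        f, g = np.arctan(xi), np.arctan(xi**2)
        qubit = np.array([np.cos(f + np.pi / 4),
                          np.exp(1j * g) * np.sin(f + np.pi / 4)])
        state = np.kron(state, qubit)
    return state

psi = encode([0.1, -0.4, 0.7, 0.2])
assert np.isclose(np.linalg.norm(psi), 1.0)   # valid quantum state
# x_i = 0 reproduces the unbiased Hadamard superposition on every qubit.
assert np.allclose(encode([0.0] * 4), np.full(16, 0.25))
```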
\subsubsection{Variational Circuit}
The trainable (or learnable) part of the VQC is the \emph{variational} circuit. This is a parameterized circuit whose parameters are subject to iterative optimization, such as gradient descent. In this paper, the variational part includes several \emph{blocks}, represented as dashed boxes in \figureautorefname{\ref{Fig:Basic_VQC_Hadamard_MoreEntangle}}. Each block consists of multiple CNOT gates to entangle qubits, followed by unitary rotation gates controlled by the learnable parameters $\alpha$, $\beta$ and $\gamma$. The blocks can be repeated several times to increase the number of parameters.
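Such blocks remain valid circuit elements for any parameter values. As an illustration (a sketch assuming the common $R_z$--$R_y$--$R_z$ decomposition of the general rotation, as in PennyLane's \texttt{qml.Rot}; not the paper's implementation), the following NumPy code builds a minimal two-qubit block and checks that it is unitary:

```python
import numpy as np

def rot(a, b, c):
    """General single-qubit rotation, here assumed to be the common
    Rz(c) @ Ry(b) @ Rz(a) decomposition (as in PennyLane's qml.Rot)."""
    rz = lambda t: np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
    ry = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)],
                             [np.sin(t / 2),  np.cos(t / 2)]])
    return rz(c) @ ry(b) @ rz(a)

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

# A minimal 2-qubit "block": entangle with CNOT, then apply rotations
# controlled by the learnable angles (alpha, beta, gamma) per qubit.
params = np.random.default_rng(1).uniform(0, 2 * np.pi, size=(2, 3))
block = np.kron(rot(*params[0]), rot(*params[1])) @ CNOT

# The block is unitary for any choice of the trainable angles.
assert np.allclose(block.conj().T @ block, np.eye(4))
```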
\subsubsection{Quantum Measurement}
Our hybrid quantum-classical architecture relies on the ability to move data between quantum and classical systems. To extract information from the quantum circuit, we perform quantum measurements. Consider the circuit shown in \figureautorefname{\ref{Fig:Basic_VQC_Hadamard_MoreEntangle}} as an example: if we run the circuit once, we get a bit string such as 0011, since we measure all four qubits. Due to the probabilistic nature of quantum systems, we obtain different bit strings at each circuit repetition and measurement; in the next run it may be, for example, 0110. If we run the circuit many times (the number of \emph{shots}), we obtain a distribution of the measurement results, from which the \emph{expectation values} of the observables are estimated. The expectation values can be calculated analytically when using noise-free quantum simulation software, or estimated by repeated sampling when a device noise model is specified.
Given an operator $\hat{O}$, the expected value for a state $|\psi\rangle$ is given by
\begin{equation}
\mathbb{E}[\hat{O}] = \langle\psi|\hat{O}|\psi\rangle.
\end{equation}
In our case, $\ket{\psi}$ corresponds to the state $U\ket{\bold{x}}$ in which $\ket{\bold{x}}$ is the encoded data vector as defined in Equation~\ref{eqn:dense_angle_encoding}, and $U$ is the variational circuit.
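A minimal NumPy sketch of this expectation value (with a Pauli-$Z$ observable on the first qubit and a stand-in state instead of $U\ket{\bold{x}}$; an illustration only) is:

```python
import numpy as np

# Pauli-Z measurement on the first of two qubits: O = Z (x) I.
Z = np.diag([1.0, -1.0])
O = np.kron(Z, np.eye(2))

# |psi> stands in for U|x>; here a normalized 2-qubit equal superposition.
psi = np.array([0.5, 0.5, 0.5, 0.5], dtype=complex)

expectation = np.real(psi.conj() @ O @ psi)   # <psi|O|psi>
assert np.isclose(expectation, 0.0)           # equal superposition: <Z> = 0
```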
\begin{figure}[htbp]
\begin{center}
\begin{minipage}{10cm}
\Qcircuit @C=1em @R=1em {
\lstick{\ket{0}} & \gate{H} & \gate{R_y(\arctan(x_1))} & \gate{R_z(\arctan(x_1^2))} & \ctrl{1} & \qw & \qw & \targ & \ctrl{2} & \qw & \targ & \qw & \gate{R(\alpha_1, \beta_1, \gamma_1)} & \meter \qw \\
\lstick{\ket{0}} & \gate{H} & \gate{R_y(\arctan(x_2))} & \gate{R_z(\arctan(x_2^2))} & \targ & \ctrl{1} & \qw & \qw & \qw & \ctrl{2} & \qw & \targ & \gate{R(\alpha_2, \beta_2, \gamma_2)} & \meter \qw \\
\lstick{\ket{0}} & \gate{H} & \gate{R_y(\arctan(x_3))} & \gate{R_z(\arctan(x_3^2))} & \qw & \targ & \ctrl{1} & \qw & \targ & \qw & \ctrl{-2}& \qw & \gate{R(\alpha_3, \beta_3, \gamma_3)} & \meter \qw \\
\lstick{\ket{0}} & \gate{H} & \gate{R_y(\arctan(x_4))} & \gate{R_z(\arctan(x_4^2))} & \qw & \qw & \targ & \ctrl{-3}& \qw & \targ & \qw & \ctrl{-2}& \gate{R(\alpha_4, \beta_4, \gamma_4)} & \meter \gategroup{1}{5}{4}{13}{.7em}{--}\qw
}
\end{minipage}
\end{center}
\caption{{\bfseries Generic VQC architecture for QRNN, QLSTM and QGRU.}
The VQC we use for QRNN, QLSTM and QGRU includes the following three parts: the data encoding circuit (with $H$, $R_y$, and $R_z$ gates), the variational or parameterized circuit (shown within the dashed outline), and the measurement. Note that the number of qubits and the number of measurements can be adjusted to fit the problem of interest (various input and output dimensions), and the variational layer can contain several iterations to increase the model size or the number of parameters, depending on the capacity and capability of the quantum machine or quantum simulation software used for the (actual or numerical) experiments. In the context of this paper, the number of qubits used is $4$.}
\label{Fig:Basic_VQC_Hadamard_MoreEntangle}
\end{figure}
\section{\label{sec:Exp}Numerical Experiments}
We compare the performance of full optimization and reservoir computing as well as the effect of quantum device noise. In the optimization procedure of quantum circuits, we employ the \emph{parameter-shift} method to derive the analytical gradients of the quantum parameters. The method is described in \cite{schuld2019evaluating,bergholm2018pennylane}.
Additionally, we compare our quantum models to classical models with a similar number of parameters. We present the learning performance of these models at different numbers of training epochs.
For a better comparison with previous works, the experimental setting follows that in \cite{chen2020quantum}. We reproduce similar results for the fully trained QLSTM and apply the same procedure to the QRNN and QGRU.
We use PyTorch \cite{paszke2019pytorch} for the overall ML workflow, PennyLane \cite{bergholm2018pennylane} for building the quantum circuits and Qiskit \cite{cross2018ibm} for noisy quantum simulation.
The training and testing scheme follows that in \cite{chen2020quantum}. Concisely, the model is expected to predict the ($N+1$)-th value given the first $N$ values in the sequence. For function approximation tasks (described in \sectionautorefname{\ref{sec:function_approximation_damped_SHM}} and \sectionautorefname{\ref{sec:function_approximation_bessel}}), at step $t$, if the input is $[x_{t-4}, x_{t-3}, x_{t-2}, x_{t-1}]$ (i.e., $N = 4$), then the model is expected to generate the output $y_t$, which should be close to the ground truth $x_{t}$.
For time series prediction tasks (described in \sectionautorefname{\ref{sec:time_series_prediction}}), at step $t$, if the input is $[u_{t-4}, u_{t-3}, u_{t-2}, u_{t-1}]$ (i.e., $N = 4$) from the \emph{input sequence}, then the model is expected to generate the output $y_t$, which should be close to the ground truth $v_{t}$ in the \emph{target sequence}. We set $N=4$ for all experiments in this paper.
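The sliding-window construction described above can be sketched as follows (illustrative code with hypothetical helper names, not the paper's data pipeline):

```python
import numpy as np

def make_windows(seq, n=4):
    """Split a sequence into (input, target) pairs: the model sees the
    previous n values and is expected to predict the next one."""
    inputs = np.array([seq[t - n:t] for t in range(n, len(seq))])
    targets = np.array(seq[n:])
    return inputs, targets

seq = np.arange(10.0)            # toy sequence 0..9
X, y = make_windows(seq, n=4)
assert X.shape == (6, 4) and y.shape == (6,)
assert np.allclose(X[0], [0, 1, 2, 3]) and y[0] == 4.0
```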
\subsection{\label{subsec:full_opt_methods}Full Optimization}
In this part, we present the full optimization (i.e., training all the quantum parameters) of the QRNN as the baseline. We consider the QRNN, QGRU and QLSTM models with the following configurations. For function approximation tasks such as the damped SHM and the Bessel function, the QRNN has $1 \times 2 \times 4 \times 3 = 24$ trainable quantum parameters and $3 \times 1 + 1 = 4$ trainable classical parameters; the QGRU has $3 \times 2 \times 4 \times 3 = 72$ trainable quantum parameters and $3 \times 1 + 1 = 4$ trainable classical parameters; the QLSTM has $6 \times 2 \times 4 \times 3 = 144$ trainable quantum parameters and $4 \times 1 + 1 = 5$ trainable classical parameters. For time-series prediction tasks such as NARMA5 and NARMA10, the numbers of trainable quantum parameters are $48$, $144$ and $288$ for the QRNN, QGRU and QLSTM, respectively.
The optimizer used for this experiment is RMSprop \cite{Tieleman2012}, a variant of gradient descent methods with an adaptive learning rate. The optimizer is configured with the following hyperparameters: learning rate $\eta =0.01$, smoothing constant $\alpha = 0.99$, and $\epsilon = 10^{-8}$.
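For reference, a single RMSprop update with these hyperparameters can be sketched in NumPy (this mirrors the standard update rule, not the PyTorch internals):

```python
import numpy as np

def rmsprop_step(theta, grad, s, lr=0.01, alpha=0.99, eps=1e-8):
    """One RMSprop update with the hyperparameters used above: s is an
    exponential moving average of squared gradients, which adapts the
    effective learning rate per parameter."""
    s = alpha * s + (1 - alpha) * grad**2
    theta = theta - lr * grad / (np.sqrt(s) + eps)
    return theta, s

theta, s = np.array([1.0]), np.zeros(1)
theta, s = rmsprop_step(theta, np.array([2.0]), s)
# First step size is lr * g / (sqrt((1 - alpha) * g**2) + eps).
assert np.isclose(theta[0], 1.0 - 0.01 * 2.0 / (np.sqrt(0.04) + 1e-8))
```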
\subsection{Reservoir Computing}
The RC experiments are configured with the same hyperparameters as the full optimization cases in \sectionautorefname{\ref{subsec:full_opt_methods}}. The only difference is that all the quantum parameters are frozen after the random initialization. Therefore, only classical parameters are trainable.
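The RC training idea, i.e., freezing the recurrent part and fitting only a linear readout, can be illustrated with a purely classical toy reservoir (a sketch under simplifying assumptions; the reservoir here is a random $\tanh$ network rather than a quantum circuit, and the readout is fitted in one shot by least squares instead of gradient descent):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random "reservoir": a recurrent map whose weights are never trained.
W_in = rng.normal(scale=0.5, size=(8, 1))
W_res = rng.normal(scale=0.3, size=(8, 8))

def reservoir_states(u):
    h, states = np.zeros(8), []
    for u_t in u:
        h = np.tanh(W_res @ h + W_in @ np.array([u_t]))
        states.append(h)
    return np.array(states)

u = np.sin(np.linspace(0, 6 * np.pi, 200))
target = np.roll(u, -1)                      # predict the next value

# Only the linear readout is fitted, as in the RC setting.
H = reservoir_states(u)[:-1]
w, *_ = np.linalg.lstsq(H, target[:-1], rcond=None)
pred = H @ w
assert np.mean((pred - target[:-1]) ** 2) < 0.1
```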
\subsection{Noisy Simulation}
\label{sec:noise_simulation_definition}
We use a noise model consisting of serial thermal relaxation and depolarization noise channels, an approach supported by \cite{georgopoulos2021modeling,dahlhauser2021modeling}. We use high-performance noise model parameters largely based on the upper-limit performance of the IBM Peekskill superconducting quantum device, currently in exploratory mode. For the thermal relaxation noise channel, the $T_1$ and $T_2$ coherence times were sampled per qubit from $\mathcal{N}(500 \mu s, 50 \mu s)$ and $\mathcal{N}(400 \mu s, 40 \mu s)$, respectively, where $\mathcal{N}(\mu,\sigma)$ denotes a normal distribution. Quantum gate and instruction times were fixed at 0 ns for the virtual $R_Z$ gate, 20 ns for the X90 gate, 300 ns for the CNOT gate, 700 ns for measurement and 800 ns for the reset instruction. The depolarization channel parameters, namely the single-qubit and CNOT gate error rates, were sampled from
$\mathcal{N}(\num{1e-4},\num{1e-5})$ and $\mathcal{N}(\num{1e-3},\num{1e-4})$, respectively.
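Sampling the noise-model parameters described above can be sketched as follows (illustrative NumPy code; the seed and qubit count are arbitrary choices, and the actual noise channels are built in Qiskit from these numbers):

```python
import numpy as np

rng = np.random.default_rng(42)
n_qubits = 4

# Per-qubit coherence times and gate error rates, drawn from the normal
# distributions quoted above (times in microseconds).
t1 = rng.normal(500.0, 50.0, size=n_qubits)    # thermal relaxation T1
t2 = rng.normal(400.0, 40.0, size=n_qubits)    # dephasing time T2
p_1q = rng.normal(1e-4, 1e-5, size=n_qubits)   # single-qubit depolarization
p_cx = rng.normal(1e-3, 1e-4)                  # CNOT depolarization

# Physically, T2 can be at most 2*T1; these high-performance values
# respect that comfortably, and error rates are positive.
assert np.all(t2 <= 2 * t1)
assert np.all(p_1q > 0) and p_cx > 0
```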
\subsection{Classical RNN Baseline}
We use classical RNN, GRU and LSTM models with the following sizes as baselines in this study. The model sizes (numbers of parameters) are set to be similar to those of their quantum counterparts in order to investigate the learning capabilities of these models. For the experiments considered in this paper, the RNN has $40$ parameters in the recurrent layer and $6$ parameters in the final linear layer; the GRU has $120$ parameters in the recurrent layer and $6$ parameters in the final linear layer; the LSTM has $160$ parameters in the recurrent layer and $6$ parameters in the final linear layer. As in the quantum models, RC training means that the recurrent parameters are frozen after random initialization and only the final linear layers are trained.
\subsection{Tasks}
\subsubsection{\label{sec:function_approximation_damped_SHM}Function Approximation-Damped SHM}
Damped harmonic oscillators can be used to describe or approximate a wide range of systems, including the mass on a spring and acoustic systems. Damped harmonic oscillation is described by the equation:
\begin{equation}
\frac{\mathrm{d}^{2} x}{\mathrm{d} t^{2}}+2 \zeta \omega_{0} \frac{\mathrm{d} x}{\mathrm{d} t}+\omega_{0}^{2} x=0,
\end{equation}
where $\omega_{0}=\sqrt{\frac{k}{m}}$ is the (undamped) system's characteristic frequency and $\zeta=\frac{c}{2 \sqrt{m k}}$ is the damping ratio. In this paper, we consider the specific example of a damped simple pendulum with the following formulation:
\begin{equation}
\frac{d^{2} \theta}{d t^{2}}+\frac{b}{m} \frac{d \theta}{d t}+\frac{g}{L} \sin \theta=0,
\end{equation}
in which the gravitational constant $g = 9.81$, the damping factor $b = 0.15$, the pendulum length $L = 1$ and the mass $m = 1$. The initial condition at $t = 0$ is angular displacement $\theta = 0$ and angular velocity $\dot{\theta} = 3$ rad/sec.
We present the quantum learning result of the angular velocity $\dot{\theta}$.
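To generate such a sequence, the pendulum equation can be integrated numerically. The following sketch uses a simple RK4 integrator with the parameters quoted above (the step size and trajectory length are arbitrary choices, not taken from the paper):

```python
import numpy as np

# Damped pendulum: theta'' + (b/m) theta' + (g/L) sin(theta) = 0.
g, b, m, L = 9.81, 0.15, 1.0, 1.0

def deriv(state):
    theta, omega_ = state
    return np.array([omega_, -(b / m) * omega_ - (g / L) * np.sin(theta)])

def rk4(state, dt):
    k1 = deriv(state)
    k2 = deriv(state + dt / 2 * k1)
    k3 = deriv(state + dt / 2 * k2)
    k4 = deriv(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

state, dt = np.array([0.0, 3.0]), 0.01        # theta(0)=0, omega(0)=3 rad/s
traj = [state]
for _ in range(1000):
    traj.append(rk4(traj[-1], dt))
omega = np.array(traj)[:, 1]                   # angular velocity sequence

# Damping: the oscillation envelope must shrink over time.
assert np.max(np.abs(omega[500:])) < np.max(np.abs(omega[:500]))
```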
\subsubsection{\label{sec:function_approximation_bessel}Function Approximation-Bessel Function}
Bessel functions are also commonly encountered in physics and engineering problems, such as electromagnetic fields or heat conduction in a cylindrical geometry.
Bessel functions of the first kind, $J_\alpha(x)$, are solutions to the Bessel differential equation
\begin{equation}
x^{2} \frac{d^{2} y}{d x^{2}}+x \frac{d y}{d x}+\left(x^{2}-\alpha^{2}\right) y=0,
\end{equation}
and can be defined as
\begin{equation}
J_{\alpha}(x)=\sum_{m=0}^{\infty} \frac{(-1)^{m}}{m! \Gamma(m+\alpha+1)}\left(\frac{x}{2}\right)^{2 m+\alpha},
\end{equation}
where $\Gamma(x)$ is the Gamma function.
In this paper, we choose $J_2$ as the function used for training.
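The series definition above can be evaluated directly by truncating the sum; a minimal sketch (the truncation length is an arbitrary choice, and for small $x$ the leading term gives $J_2(x) \approx x^2/8$):

```python
import math

def bessel_j(alpha, x, terms=40):
    """Bessel function of the first kind, evaluated from the truncated
    power series given above; 'terms' controls the truncation."""
    return sum((-1)**m / (math.factorial(m) * math.gamma(m + alpha + 1))
               * (x / 2)**(2 * m + alpha) for m in range(terms))

# J_2(0) = 0, and for small x, J_2(x) ~ (x/2)^2 / Gamma(3) = x^2 / 8.
assert bessel_j(2, 0.0) == 0.0
assert abs(bessel_j(2, 0.01) - 0.01**2 / 8) < 1e-9
```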
\subsubsection{Time Series Prediction (NARMA Benchmark)}
\label{sec:time_series_prediction}
We use NARMA (Non-linear Auto-Regressive Moving Average) time series datasets \cite{NarmaAtiyaA} for this task.
The NARMA series that we use in this work can be defined by \cite{NarmaGoudarzi}:
\begin{equation}
y_{t+1}=\alpha y_{t}+\beta y_{t}\left(\sum_{j=0}^{n_{0}-1} y_{t-j}\right)+\gamma u_{t-n_{0}+1} u_{t}+\delta,
\end{equation}
where $(\alpha, \beta, \gamma, \delta)=(0.3,0.05,1.5,0.1)$ and $n_{0}$ is used to determine the nonlinearity.
The input $\left\{u_{t}\right\}_{t=1}^{M}$ for the NARMA tasks is:
\begin{equation}
u_{t}=0.1\left(\sin \left(\frac{2 \pi \bar{\alpha} t}{T}\right) \sin \left(\frac{2 \pi \bar{\beta} t}{T}\right) \sin \left(\frac{2 \pi \bar{\gamma} t}{T}\right)+1\right)
\end{equation}
where $(\bar{\alpha}, \bar{\beta}, \bar{\gamma}, T)=(2.11,3.73,4.11,100)$, as used in \cite{suzuki2022natural}. We set the length of the input and output sequences to $M = 300$. In this paper, we consider $n_{0} = 5$ and $n_{0} = 10$, denoted NARMA5 and NARMA10, respectively.
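Putting the input and the recurrence together, a NARMA sequence can be generated as follows (a sketch that assumes zero initial values for $y$, which the text does not specify):

```python
import numpy as np

def narma(n0=5, M=300, a=0.3, b=0.05, c=1.5, d=0.1):
    """Generate the NARMA-n0 target sequence from the product-of-sines
    input defined above, with the parameter values quoted in the text."""
    t = np.arange(1, M + 1)
    u = 0.1 * (np.sin(2 * np.pi * 2.11 * t / 100)
               * np.sin(2 * np.pi * 3.73 * t / 100)
               * np.sin(2 * np.pi * 4.11 * t / 100) + 1)
    y = np.zeros(M)                    # assumed zero initial values
    for k in range(n0 - 1, M - 1):
        y[k + 1] = (a * y[k] + b * y[k] * np.sum(y[k - n0 + 1:k + 1])
                    + c * u[k - n0 + 1] * u[k] + d)
    return u, y

u, y = narma(n0=5)
assert u.shape == (300,) and y.shape == (300,)
assert np.all(u >= 0) and np.all(u <= 0.2)   # input bounded by construction
assert np.all(np.isfinite(y))
```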
\section{\label{sec:Results}Results}
In the results presented here, the orange dashed line represents the ground truth, while the blue solid line is the output of the models. The vertical red dashed line separates the \emph{training} set (left) from the \emph{testing} set (right). For all datasets considered in this paper, $67\%$ of the data are used for training and the remaining $33\%$ for testing.
\subsection{Function Approximation}
\subsubsection{QRNN}
For the QRNN, we observe similar results in both the damped SHM (\figureautorefname{\ref{fig:rnn_damped_SHM}}) and Bessel function (\figureautorefname{\ref{fig:rnn_bessel}}) cases. Both the QRNN-RC and the QRNN learn the important features after a single training epoch. However, the fully trained QRNN captures more amplitude information in the first epoch. This is not surprising, since the fully trained model spends more resources tuning all the quantum parameters, while the RC version does not. We observe that the QRNN-RC achieves performance comparable to the fully trained QRNN after 15 epochs of training, except in some of the large-amplitude regions. After training, the losses of both the RC and the fully trained models converge to low values.
In the damped SHM case, comparing the QRNN-RC to the classical RNN-RC and RNN, we observe that the QRNN-RC beats the RNN-RC even after 100 epochs of training and reaches performance comparable to the fully trained RNN. The results are similar for the Bessel function, where the QRNN-RC beats the RNN-RC from epoch 1 to epoch 100.
If we further add quantum device noise to the simulation (defined in \sectionautorefname{\ref{sec:noise_simulation_definition}}), we observe that both the fully trained and the RC QRNN reach good performance after 100 epochs of training (shown in \figureautorefname{\ref{fig:noisy_rnn}}). In particular, in both the damped SHM and Bessel function cases, the QRNN-RC provides smoother outputs than the fully trained QRNN. We summarize the loss values of the noise-free and noisy simulations in \tableautorefname{\ref{tab:noise_free_qrnn}} and \tableautorefname{\ref{tableNoisyMSE}}, respectively.
\begin{table}[htbp]
\resizebox{1\textwidth}{!}{
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
Dataset & Model & Reservoir & Epoch 1 & Epoch 15 & Epoch 30 & Epoch 100 \\ \hline
Damped SHM & QRNN & True & $1.72 \times 10^{-1}$/$2.21 \times 10^{-2}$ & $1.81 \times 10^{-2}$/$5.17 \times 10^{-3}$ & $1.81 \times 10^{-2}$/$4.96 \times 10^{-3}$ & $1.18 \times 10^{-2}$/$4.91 \times 10^{-3}$ \\ \hline
Damped SHM & QRNN & False & $1.01 \times 10^{-1}$/$5.04 \times 10^{-3}$ & $7.19 \times 10^{-3}$/$9.06 \times 10^{-4}$ & $1.87 \times 10^{-3}$/$2.16 \times 10^{-5}$ & $6.8 \times 10^{-4}$/$4.89 \times 10^{-5}$ \\ \hline
Damped SHM & RNN & True & $2.02 \times 10^{-1}$/$6.46 \times 10^{-2}$ & $1.29 \times 10^{-1}$/$2.68 \times 10^{-2}$ & $9.49 \times 10^{-2}$/$1.97 \times 10^{-2}$ & $2.87 \times 10^{-2}$/$5.87 \times 10^{-3}$ \\ \hline
Damped SHM & RNN & False & $7.22 \times 10^{-1}$/$6.78 \times 10^{-2}$ & $1.66 \times 10^{-2}$/$4.04 \times 10^{-3}$ & $2.82 \times 10^{-3}$/$9.40 \times 10^{-4}$ & $1.69 \times 10^{-3}$/$3.60 \times 10^{-4}$ \\ \hline
Bessel & QRNN & True & $1.77 \times 10^{-1}$/$2.91 \times 10^{-2}$ & $1.65 \times 10^{-2}$/$3.60 \times 10^{-3}$ & $1.53 \times 10^{-2}$/$3.61 \times 10^{-3}$ & $1.52 \times 10^{-2}$/$3.57 \times 10^{-3}$ \\ \hline
Bessel & QRNN & False & $4.16 \times 10^{-2}$/$5.54 \times 10^{-3}$ & $5.10 \times 10^{-3}$/$6.08 \times 10^{-4}$ & $1.40 \times 10^{-3}$/$3.19 \times 10^{-5}$ & $6.45 \times 10^{-4}$/$2.62 \times 10^{-5}$ \\ \hline
Bessel & RNN & True & $5.22 \times 10^{-1}$/$1.65 \times 10^{-1}$ & $7.82 \times 10^{-2}$/$1.93 \times 10^{-2}$ & $7.11 \times 10^{-2}$/$1.76 \times 10^{-2}$ & $4.37 \times 10^{-2}$/$1.11 \times 10^{-2}$ \\ \hline
Bessel & RNN & False & $1.79 \times 10^{-1}$/$2.93 \times 10^{-2}$ & $3.80 \times 10^{-3}$/$2.83 \times 10^{-4}$ & $4.49 \times 10^{-3}$/$3.25 \times 10^{-3}$ & $3.05 \times 10^{-4}$/$1.79 \times 10^{-5}$ \\ \hline
NARMA5 & QRNN & True & $2.72 \times 10^{-3}$/$2.93 \times 10^{-4}$ & $1.03 \times 10^{-4}$/$7.28 \times 10^{-5}$ & $1.26 \times 10^{-4}$/$3.44 \times 10^{-5}$ & $1.40 \times 10^{-4}$/$4.13 \times 10^{-5}$ \\ \hline
NARMA5 & QRNN & False & $3.19 \times 10^{-2}$/$1.82 \times 10^{-4}$ & $3.26 \times 10^{-4}$/$9.52 \times 10^{-5}$ & $1.57 \times 10^{-4}$/$4.64 \times 10^{-5}$ & $1.84 \times 10^{-4}$/$4.63 \times 10^{-5}$ \\ \hline
NARMA5 & RNN & True & $2.73 \times 10^{-2}$/$7.02 \times 10^{-3}$ & $1.78 \times 10^{-4}$/$7.28 \times 10^{-5}$ & $1.73 \times 10^{-4}$/$7.07 \times 10^{-5}$ & $1.45 \times 10^{-4}$/$6.01 \times 10^{-5}$ \\ \hline
NARMA5 & RNN & False & $1.03 \times 10^{-1}$/$2.94 \times 10^{-2}$ & $3.47 \times 10^{-4}$/$1.37 \times 10^{-4}$ & $3.27 \times 10^{-4}$/$1.29 \times 10^{-4}$ & $2.44 \times 10^{-4}$/$9.76 \times 10^{-5}$ \\ \hline
NARMA10 & QRNN & True & $1.00 \times 10^{-1}$/$8.73 \times 10^{-3}$ & $2.22 \times 10^{-4}$/$7.99 \times 10^{-5}$ & $2.63 \times 10^{-4}$/$9.42 \times 10^{-5}$ & $3.03 \times 10^{-4}$/$1.24 \times 10^{-4}$ \\ \hline
NARMA10 & QRNN & False & $3.54 \times 10^{-2}$/$1.03 \times 10^{-4}$ & $3.27 \times 10^{-4}$/$1.96 \times 10^{-4}$ & $3.65 \times 10^{-4}$/$1.55 \times 10^{-4}$ & $3.99 \times 10^{-4}$/$1.56 \times 10^{-4}$ \\ \hline
NARMA10 & RNN & True & $6.87 \times 10^{-2}$/$6.73 \times 10^{-4}$ & $5.82 \times 10^{-4}$/$1.75 \times 10^{-4}$ & $5.80 \times 10^{-4}$/$1.74 \times 10^{-4}$ & $5.70 \times 10^{-4}$/$1.71 \times 10^{-4}$ \\ \hline
NARMA10 & RNN & False & $2.70 \times 10^{-1}$/$5.75 \times 10^{-2}$ & $5.25 \times 10^{-4}$/$1.57 \times 10^{-4}$ & $5.07 \times 10^{-4}$/$1.51 \times 10^{-4}$ & $4.23 \times 10^{-4}$/$1.27 \times 10^{-4}$ \\ \hline
\end{tabular}}
\caption{Loss values for the QRNN and classical RNN models (noise-free simulation) at training epochs 1, 15, 30 and 100.}
\label{tab:noise_free_qrnn}
\end{table}
\begin{figure}[hbtp]
\includegraphics[width=1.\linewidth]{results/RNN/RNN_Damped_SHM_PERFORMANCE_COMPARISON_GRID_full.pdf}
\caption{{\bfseries Learning the damped SHM with QRNN-RC.}
}
\label{fig:rnn_damped_SHM}
\end{figure}
\begin{figure}[hbtp]
\includegraphics[width=1.\linewidth]{results/RNN/RNN_BESSEL_PERFORMANCE_COMPARISON_GRID_full.pdf}
\caption{{\bfseries Learning the Bessel function with QRNN-RC.}
}
\label{fig:rnn_bessel}
\end{figure}
\begin{figure}[hbtp]
\includegraphics[width=1.\linewidth]{results/NoisySimulation/NOISY_RNN_PERFORMANCE_COMPARISON_GRID_full.pdf}
\caption{{\bfseries Noisy simulation of QRNN.}
}
\label{fig:noisy_rnn}
\end{figure}
\subsubsection{QGRU}
For the QGRU, we observe similar results for both the damped SHM (\figureautorefname{\ref{fig:gru_damped_SHM}}) and Bessel function (\figureautorefname{\ref{fig:gru_bessel}}) cases.
After the first epoch of training, we observe that the fully trained QGRU learns more amplitude information than the QGRU-RC, in which only the final linear layer is trained. In the damped SHM case, the QGRU-RC reaches performance comparable to the fully trained QGRU after 15 epochs of training. Comparing the QGRU-RC to the classical GRU-RC and GRU, we observe that the QGRU-RC beats the GRU-RC over the first 30 epochs of training and reaches performance similar to the GRU-RC and the fully trained GRU after 100 epochs. In the Bessel function case, the QGRU-RC saturates after 15 epochs of training and captures most of the data, except some of the large-amplitude regions. We also observe that the QGRU-RC performs similarly to the classical GRU-RC after 15 epochs of training.
If we add quantum device noise to the simulation (defined in \sectionautorefname{\ref{sec:noise_simulation_definition}}), we observe that both the fully optimized and the RC-trained QGRU can still reach reasonable performance under the effect of simulated quantum noise for both the damped SHM and the Bessel function (shown in \figureautorefname{\ref{fig:noisy_gru}}).
Most importantly, we observe that in both the damped SHM and Bessel function cases, the QGRU-RC generates smoother outputs than the fully optimized QGRU. We summarize the loss values of the noise-free and noisy simulations in \tableautorefname{\ref{tab:noise_free_qgru}} and \tableautorefname{\ref{tableNoisyMSE}}, respectively.
\begin{table}[htbp]
\resizebox{1\linewidth}{!}{
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
Dataset & Model & Reservoir & Epoch 1 & Epoch 15 & Epoch 30 & Epoch 100 \\ \hline
Damped SHM & QGRU & True & $2.26 \times 10^{-1}$/$2.78 \times 10^{-2}$ & $4.54 \times 10^{-2}$/$1.31 \times 10^{-2}$ & $4.55 \times 10^{-2}$/$1.30 \times 10^{-2}$ & $4.55 \times 10^{-2}$/$1.29 \times 10^{-2}$ \\ \hline
Damped SHM & QGRU & False & $1.97 \times 10^{-1}$/$1.51 \times 10^{-2}$ & $2.01 \times 10^{-2}$/$3.64 \times 10^{-3}$ & $1.04 \times 10^{-2}$/$1.30 \times 10^{-3}$ & $1.39 \times 10^{-3}$/$1.19 \times 10^{-4}$ \\ \hline
Damped SHM & GRU & True & $4.62 \times 10^{-1}$/$1.18 \times 10^{-1}$ & $1.13 \times 10^{-1}$/$2.26 \times 10^{-2}$ & $7.45 \times 10^{-2}$/$1.50 \times 10^{-2}$ & $4.61 \times 10^{-2}$/$9.92 \times 10^{-3}$ \\ \hline
Damped SHM & GRU & False & $2.12 \times 10^{-1}$/$8.54 \times 10^{-2}$ & $2.22 \times 10^{-2}$/$3.90 \times 10^{-3}$ & $2.94 \times 10^{-3}$/$1.80 \times 10^{-4}$ & $4.51 \times 10^{-4}$/$7.75 \times 10^{-5}$ \\ \hline
Bessel & QGRU & True & $1.54 \times 10^{-1}$/$2.58 \times 10^{-2}$ & $3.90 \times 10^{-2}$/$9.92 \times 10^{-3}$ & $3.82 \times 10^{-2}$/$9.90 \times 10^{-3}$ & $3.82 \times 10^{-2}$/$9.89 \times 10^{-3}$ \\ \hline
Bessel & QGRU & False & $5.53 \times 10^{-2}$/$9.39 \times 10^{-3}$ & $1.10 \times 10^{-2}$/$2.05 \times 10^{-3}$ & $2.94 \times 10^{-3}$/$9.69 \times 10^{-5}$ & $1.31 \times 10^{-3}$/$1.37 \times 10^{-5}$ \\ \hline
Bessel & GRU & True & $1.71 \times 10^{-1}$/$3.33 \times 10^{-2}$ & $4.16 \times 10^{-2}$/$1.13 \times 10^{-2}$ & $3.78 \times 10^{-2}$/$1.05 \times 10^{-2}$ & $3.05 \times 10^{-2}$/$8.51 \times 10^{-3}$ \\ \hline
Bessel & GRU & False & $1.03 \times 10^{-1}$/$9.98 \times 10^{-2}$ & $1.98 \times 10^{-2}$/$4.77 \times 10^{-3}$ & $4.62 \times 10^{-3}$/$1.72 \times 10^{-3}$ & $4.68 \times 10^{-4}$/$9.09 \times 10^{-6}$ \\ \hline
NARMA5 & QGRU & True & $9.48 \times 10^{-2}$/$6.59 \times 10^{-3}$ & $6.46 \times 10^{-5}$/$3.21 \times 10^{-5}$ & $8.62 \times 10^{-5}$/$2.36 \times 10^{-5}$ & $1.10 \times 10^{-4}$/$3.50 \times 10^{-5}$ \\ \hline
NARMA5 & QGRU & False & $4.12 \times 10^{-3}$/$5.58 \times 10^{-5}$ & $1.53 \times 10^{-4}$/$3.22 \times 10^{-5}$ & $1.36 \times 10^{-4}$/$2.41 \times 10^{-5}$ & $1.22 \times 10^{-4}$/$3.55 \times 10^{-5}$ \\ \hline
NARMA5 & GRU & True & $1.90 \times 10^{-3}$/$2.23 \times 10^{-2}$ & $3.62 \times 10^{-4}$/$1.45 \times 10^{-4}$ & $3.46 \times 10^{-4}$/$1.39 \times 10^{-4}$ & $2.69 \times 10^{-4}$/$1.09 \times 10^{-4}$ \\ \hline
NARMA5 & GRU & False & $9.00 \times 10^{-2}$/$6.43 \times 10^{-4}$ & $2.63 \times 10^{-4}$/$1.04 \times 10^{-4}$ & $2.36 \times 10^{-4}$/$9.39 \times 10^{-5}$ & $1.22 \times 10^{-4}$/$4.99 \times 10^{-5}$ \\ \hline
NARMA10 & QGRU & True & $1.54 \times 10^{-3}$/$2.91 \times 10^{-4}$ & $2.20 \times 10^{-4}$/$7.22 \times 10^{-5}$ & $2.50 \times 10^{-4}$/$9.23 \times 10^{-5}$ & $2.74 \times 10^{-4}$/$1.21 \times 10^{-4}$ \\ \hline
NARMA10 & QGRU & False & $6.30 \times 10^{-3}$/$1.28 \times 10^{-4}$ & $3.63 \times 10^{-4}$/$1.14 \times 10^{-4}$ & $4.04 \times 10^{-4}$/$1.20 \times 10^{-4}$ & $2.97 \times 10^{-4}$/$1.25 \times 10^{-4}$ \\ \hline
NARMA10 & GRU & True & $2.08 \times 10^{-1}$/$8.04 \times 10^{-2}$ & $2.25 \times 10^{-4}$/$8.21 \times 10^{-5}$ & $2.18 \times 10^{-4}$/$7.57 \times 10^{-5}$ & $2.14 \times 10^{-4}$/$7.50 \times 10^{-5}$ \\ \hline
NARMA10 & GRU & False & $5.39 \times 10^{-1}$/$6.93 \times 10^{-2}$ & $2.56 \times 10^{-4}$/$8.03 \times 10^{-5}$ & $2.52 \times 10^{-4}$/$7.94 \times 10^{-5}$ & $2.32 \times 10^{-4}$/$7.47 \times 10^{-5}$ \\ \hline
\end{tabular}
}
\caption{Loss values for the QGRU and classical GRU models (noise-free simulation) at training epochs 1, 15, 30 and 100.}
\label{tab:noise_free_qgru}
\end{table}
\begin{figure}[hbtp]
\includegraphics[width=1.\linewidth]{results/GRU/GRU_Damped_SHM_PERFORMANCE_COMPARISON_GRID_full.pdf}
\caption{{\bfseries Learning the damped SHM with QGRU-RC.}
}
\label{fig:gru_damped_SHM}
\end{figure}
\begin{figure}[hbtp]
\includegraphics[width=1.\linewidth]{results/GRU/GRU_BESSEL_PERFORMANCE_COMPARISON_GRID_full.pdf}
\caption{{\bfseries Learning the Bessel function with QGRU-RC.}
}
\label{fig:gru_bessel}
\end{figure}
\begin{figure}[hbtp]
\includegraphics[width=1.\linewidth]{results/NoisySimulation/NOISY_GRU_PERFORMANCE_COMPARISON_GRID_full.pdf}
\caption{{\bfseries Noisy simulation of QGRU.}
}
\label{fig:noisy_gru}
\end{figure}
\subsubsection{QLSTM}
For the QLSTM, we observe similar results in both the damped SHM (\figureautorefname{\ref{fig:lstm_damped_SHM}}) and Bessel function (\figureautorefname{\ref{fig:lstm_bessel}}) cases. In the damped SHM case, we observe that after the first epoch of training, the fully trained QLSTM learns more amplitude information than the QLSTM-RC, in which only the final linear layer is trained. In the Bessel function case, by contrast, the QLSTM-RC and QLSTM provide similar learning outcomes in the first training epoch. We observe that both models reach similar results after 100 epochs of training. However, the loss values of the QLSTM are much lower after training. This is not surprising, since all the model parameters are trained in the QLSTM, while in the QLSTM-RC only the final linear layer is trained.
If we compare QLSTM-RC to LSTM-RC, we can observe that the quantum version captures more features after the same number of training epochs in both the damped SHM and Bessel function cases.
If we add quantum device noise to the simulation (defined in \sectionautorefname{\ref{sec:noise_simulation_definition}}), we observe that both the fully optimized and the RC-trained QLSTM can still reach reasonable performance under the effect of simulated quantum noise for both the damped SHM and the Bessel function (shown in \figureautorefname{\ref{fig:noisy_lstm}}).
We observe that in both the damped SHM and Bessel function cases, the QLSTM-RC generates smoother outputs than the fully optimized QLSTM. The results are consistent with those of the QRNN and QGRU. We summarize the loss values of the noise-free and noisy simulations in \tableautorefname{\ref{tab:noise_free_qlstm}} and \tableautorefname{\ref{tableNoisyMSE}}, respectively.
\begin{table}[htbp]
\resizebox{1\linewidth}{!}{
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
Dataset & Model & Reservoir & Epoch 1 & Epoch 15 & Epoch 30 & Epoch 100 \\ \hline
Damped SHM & QLSTM & True & $3.19 \times 10^{-1}$/$5.86 \times 10^{-2}$ & $6.42 \times 10^{-2}$/$1.08 \times 10^{-2}$ & $5.55 \times 10^{-2}$/$1.38 \times 10^{-2}$ & $5.55 \times 10^{-2}$/$1.41 \times 10^{-2}$ \\ \hline
Damped SHM & QLSTM & False & $1.66 \times 10^{-1}$/$1.35 \times 10^{-2}$ & $2.89 \times 10^{-2}$/$5.53 \times 10^{-3}$ & $9.06 \times 10^{-3}$/$3.41 \times 10^{-4}$ & $2.86 \times 10^{-3}$/$1.94 \times 10^{-4}$ \\ \hline
Damped SHM & LSTM & True & $3.45 \times 10^{-1}$/$7.49 \times 10^{-2}$ & $1.89 \times 10^{-1}$/$3.98 \times 10^{-2}$ & $1.66 \times 10^{-1}$/$3.51 \times 10^{-2}$ & $1.10 \times 10^{-1}$/$2.32 \times 10^{-2}$ \\ \hline
Damped SHM & LSTM & False & $3.32 \times 10^{-1}$/$3.29 \times 10^{-2}$ & $3.65 \times 10^{-2}$/$7.38 \times 10^{-3}$ & $6.74 \times 10^{-3}$/$7.27 \times 10^{-4}$ & $2.32 \times 10^{-3}$/$1.68 \times 10^{-3}$ \\ \hline
Bessel & QLSTM & True & $7.53 \times 10^{-2}$/$1.36 \times 10^{-2}$ & $3.94 \times 10^{-2}$/$9.67 \times 10^{-3}$ & $3.90 \times 10^{-2}$/$1.01 \times 10^{-2}$ & $3.90 \times 10^{-2}$/$1.02 \times 10^{-2}$ \\ \hline
Bessel & QLSTM & False & $1.04 \times 10^{-1}$/$1.66 \times 10^{-2}$ & $2.30 \times 10^{-2}$/$5.35 \times 10^{-3}$ & $1.27 \times 10^{-2}$/$2.42 \times 10^{-3}$ & $6.97 \times 10^{-4}$/$1.21 \times 10^{-5}$ \\ \hline
Bessel & LSTM & True & $1.21 \times 10^{-1}$/$2.46 \times 10^{-2}$ & $6.58 \times 10^{-2}$/$1.65 \times 10^{-2}$ & $5.43 \times 10^{-2}$/$1.39 \times 10^{-2}$ & $3.76 \times 10^{-2}$/$1.02 \times 10^{-2}$ \\ \hline
Bessel & LSTM & False & $3.03 \times 10^{-1}$/$4.55 \times 10^{-2}$ & $3.48 \times 10^{-2}$/$8.71 \times 10^{-3}$ & $6.97 \times 10^{-3}$/$1.41 \times 10^{-3}$ & $1.31 \times 10^{-3}$/$3.53 \times 10^{-4}$ \\ \hline
NARMA5 & QLSTM & True & $8.54 \times 10^{-4}$/$5.40 \times 10^{-4}$ & $1.32 \times 10^{-4}$/$1.10 \times 10^{-4}$ & $9.06 \times 10^{-5}$/$2.96 \times 10^{-5}$ & $1.13 \times 10^{-4}$/$2.58 \times 10^{-5}$ \\ \hline
NARMA5 & QLSTM & False & $3.99 \times 10^{-3}$/$4.07 \times 10^{-4}$ & $3.30 \times 10^{-4}$/$4.23 \times 10^{-4}$ & $1.86 \times 10^{-4}$/$2.06 \times 10^{-4}$ & $9.85 \times 10^{-5}$/$2.52 \times 10^{-5}$ \\ \hline
NARMA5 & LSTM & True & $4.15 \times 10^{-2}$/$2.10 \times 10^{-4}$ & $3.73 \times 10^{-4}$/$1.48 \times 10^{-4}$ & $3.72 \times 10^{-4}$/$1.48 \times 10^{-4}$ & $3.65 \times 10^{-4}$/$1.45 \times 10^{-4}$ \\ \hline
NARMA5 & LSTM & False & $1.19 \times 10^{-1}$/$7.97 \times 10^{-4}$ & $3.34 \times 10^{-4}$/$1.38 \times 10^{-4}$ & $2.93 \times 10^{-4}$/$1.15 \times 10^{-4}$ & $1.91 \times 10^{-4}$/$8.78 \times 10^{-5}$ \\ \hline
NARMA10 & QLSTM & True & $1.97 \times 10^{-3}$/$2.78 \times 10^{-4}$ & $3.01 \times 10^{-4}$/$1.39 \times 10^{-4}$ & $2.36 \times 10^{-4}$/$8.78 \times 10^{-5}$ & $2.59 \times 10^{-4}$/$9.64 \times 10^{-5}$ \\ \hline
NARMA10 & QLSTM & False & $4.19 \times 10^{-3}$/$4.71 \times 10^{-4}$ & $3.35 \times 10^{-4}$/$4.73 \times 10^{-4}$ & $3.20 \times 10^{-4}$/$3.74 \times 10^{-4}$ & $2.59 \times 10^{-4}$/$9.50 \times 10^{-5}$ \\ \hline
NARMA10 & LSTM & True & $1.16 \times 10^{-2}$/$4.50 \times 10^{-3}$ & $4.26 \times 10^{-4}$/$1.27 \times 10^{-4}$ & $4.17 \times 10^{-4}$/$1.25 \times 10^{-4}$ & $3.74 \times 10^{-4}$/$1.12 \times 10^{-4}$ \\ \hline
NARMA10 & LSTM & False & $1.70 \times 10^{-1}$/$4.21 \times 10^{-4}$ & $2.94 \times 10^{-4}$/$8.68 \times 10^{-5}$ & $2.76 \times 10^{-4}$/$8.53 \times 10^{-5}$ & $2.31 \times 10^{-4}$/$8.12 \times 10^{-5}$ \\ \hline
\end{tabular}}
\caption{Loss values for the QLSTM and classical LSTM models (noise-free simulation) at training epochs 1, 15, 30 and 100.}
\label{tab:noise_free_qlstm}
\end{table}
\begin{figure}[hbtp]
\includegraphics[width=1.\linewidth]{results/LSTM/LSTM_Damped_SHM_PERFORMANCE_COMPARISON_GRID_full.pdf}
\caption{{\bfseries Learning the damped SHM with QLSTM-RC.}
}
\label{fig:lstm_damped_SHM}
\end{figure}
\begin{figure}[hbtp]
\includegraphics[width=1.\linewidth]{results/LSTM/LSTM_BESSEL_PERFORMANCE_COMPARISON_GRID_full.pdf}
\caption{{\bfseries Learning the Bessel function with QLSTM-RC.}
}
\label{fig:lstm_bessel}
\end{figure}
\begin{figure}[hbtp]
\includegraphics[width=1.\linewidth]{results/NoisySimulation/NOISY_LSTM_PERFORMANCE_COMPARISON_GRID_full.pdf}
\caption{{\bfseries Noisy simulation of QLSTM.}
}
\label{fig:noisy_lstm}
\end{figure}
\begin{table}[htbp]
\resizebox{1\linewidth}{!}{
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
Data & Model & Reservoir & Epoch 1 & Epoch 15 & Epoch 30 & Epoch 100 \\
\hline
Bessel & GRU & False & $3.88 \times 10^{-2}$/$9.71 \times 10^{-3}$ & $1.33 \times 10^{-2}$/$4.7 \times 10^{-3}$ & $9.65 \times 10^{-3}$/$5.32 \times 10^{-3}$ & $5.43 \times 10^{-3}$/$4.47 \times 10^{-3}$ \\
\cline{1-7}
Bessel & GRU & True & $5.18 \times 10^{-2}$/$1.3 \times 10^{-2}$ & $3.72 \times 10^{-2}$/$1.05 \times 10^{-2}$ & $3.74 \times 10^{-2}$/$1.05 \times 10^{-2}$ & $3.67 \times 10^{-2}$/$1.0 \times 10^{-2}$ \\
\cline{1-7}
Bessel & LSTM & False & $7.77 \times 10^{-2}$/$1.96 \times 10^{-2}$ & $2.74 \times 10^{-2}$/$9.08 \times 10^{-3}$ & $2.5 \times 10^{-2}$/$1.21 \times 10^{-2}$ & $1.49 \times 10^{-2}$/$7.69 \times 10^{-3}$ \\
\cline{1-7}
Bessel & LSTM & True & $6.16 \times 10^{-2}$/$1.4 \times 10^{-2}$ & $4.02 \times 10^{-2}$/$1.55 \times 10^{-2}$ & $3.89 \times 10^{-2}$/$1.58 \times 10^{-2}$ & $4.18 \times 10^{-2}$/$1.28 \times 10^{-2}$ \\
\cline{1-7}
Bessel & RNN & False & $4.27 \times 10^{-2}$/$1.0 \times 10^{-2}$ & $1.24 \times 10^{-2}$/$4.43 \times 10^{-3}$ & $7.49 \times 10^{-3}$/$5.23 \times 10^{-3}$ & $6.49 \times 10^{-3}$/$4.95 \times 10^{-3}$ \\
\cline{1-7}
Bessel & RNN & True & $3.77 \times 10^{-2}$/$8.71 \times 10^{-3}$ & $1.69 \times 10^{-2}$/$4.15 \times 10^{-3}$ & $1.61 \times 10^{-2}$/$4.32 \times 10^{-3}$ & $1.62 \times 10^{-2}$/$4.95 \times 10^{-3}$ \\
\cline{1-7}
Damped SHM & GRU & False & $1.27 \times 10^{-1}$/$2.47 \times 10^{-2}$ & $1.88 \times 10^{-2}$/$4.18 \times 10^{-3}$ & $7.89 \times 10^{-3}$/$3.32 \times 10^{-3}$ & $7.17 \times 10^{-3}$/$6.49 \times 10^{-3}$ \\
\cline{1-7}
Damped SHM & GRU & True & $8.34 \times 10^{-2}$/$1.29 \times 10^{-2}$ & $4.52 \times 10^{-2}$/$1.51 \times 10^{-2}$ & $4.42 \times 10^{-2}$/$1.45 \times 10^{-2}$ & $4.51 \times 10^{-2}$/$1.2 \times 10^{-2}$ \\
\cline{1-7}
Damped SHM & LSTM & False & $1.21 \times 10^{-1}$/$2.19 \times 10^{-2}$ & $2.96 \times 10^{-2}$/$9.09 \times 10^{-3}$ & $2.28 \times 10^{-2}$/$8.77 \times 10^{-3}$ & $1.86 \times 10^{-2}$/$1.41 \times 10^{-2}$ \\
\cline{1-7}
Damped SHM & LSTM & True & $1.18 \times 10^{-1}$/$2.37 \times 10^{-2}$ & $5.92 \times 10^{-2}$/$1.9 \times 10^{-2}$ & $5.96 \times 10^{-2}$/$1.72 \times 10^{-2}$ & $6.01 \times 10^{-2}$/$1.75 \times 10^{-2}$ \\
\cline{1-7}
Damped SHM & RNN & False & $2.45 \times 10^{-2}$/$5.18 \times 10^{-3}$ & $1.38 \times 10^{-2}$/$5.48 \times 10^{-3}$ & $1.17 \times 10^{-2}$/$6.43 \times 10^{-3}$ & $8.56 \times 10^{-3}$/$4.95 \times 10^{-3}$ \\
\cline{1-7}
Damped SHM & RNN & True & $7.24 \times 10^{-2}$/$1.72 \times 10^{-2}$ & $1.91 \times 10^{-2}$/$6.21 \times 10^{-3}$ & $1.76 \times 10^{-2}$/$6.8 \times 10^{-3}$ & $1.94 \times 10^{-2}$/$5.47 \times 10^{-3}$ \\
\hline
\end{tabular}}
\caption{{\bfseries Summary of simulation results with the quantum noise model.}}
\label{tableNoisyMSE}
\end{table}
\subsection{Time-Series Prediction-NARMA benchmark}
We further investigate the time-series prediction task with NARMA benchmarks (described in \sectionautorefname{\ref{sec:time_series_prediction}}).
\subsubsection{QRNN}
For the QRNN, we observe in both the NARMA5 and NARMA10 cases (shown in \figureautorefname{\ref{fig:rnn_NARMA5}} and \figureautorefname{\ref{fig:rnn_NARMA10}}) that the fully trained QRNN learns more of the structure of the data in the first training epoch. However, the QRNN-RC catches up quickly: after 15 epochs of training, its results are very close to those of the QRNN.
Comparing the QRNN-RC with the classical RNN-RC and RNN, we see that the QRNN-RC provides results superior to the classical models with a similar number of parameters.
\begin{figure}[hbtp]
\includegraphics[width=1.\linewidth]{results/RNN/RNN_NARMA5_PERFORMANCE_COMPARISON_GRID_full.pdf}
\caption{{\bfseries Learning the NARMA5 with QRNN-RC.}
}
\label{fig:rnn_NARMA5}
\end{figure}
\begin{figure}[hbtp]
\includegraphics[width=1.\linewidth]{results/RNN/RNN_NARMA10_PERFORMANCE_COMPARISON_GRID_full.pdf}
\caption{{\bfseries Learning the NARMA10 with QRNN-RC.}
}
\label{fig:rnn_NARMA10}
\end{figure}
\subsubsection{QGRU}
For the QGRU, we observe in both the NARMA5 and NARMA10 cases (shown in \figureautorefname{\ref{fig:gru_NARMA5}} and \figureautorefname{\ref{fig:gru_NARMA10}}) that the fully trained QGRU learns more of the structure of the data in the first training epoch. However, the QGRU-RC catches up quickly: after 15 epochs of training, its results are very similar to those of the QGRU, and after 100 epochs the two are indistinguishable.
In addition, the simulations show that the performance of the QGRU-RC is superior to that of the classical GRU-RC and GRU with a similar number of parameters.
\begin{figure}[hbtp]
\includegraphics[width=1.\linewidth]{results/GRU/GRU_NARMA5_PERFORMANCE_COMPARISON_GRID_full.pdf}
\caption{{\bfseries Learning the NARMA5 with QGRU-RC.}
}
\label{fig:gru_NARMA5}
\end{figure}
\begin{figure}[hbtp]
\includegraphics[width=1.\linewidth]{results/GRU/GRU_NARMA10_PERFORMANCE_COMPARISON_GRID_full.pdf}
\caption{{\bfseries Learning the NARMA10 with QGRU-RC.}
}
\label{fig:gru_NARMA10}
\end{figure}
\subsubsection{QLSTM}
For the QLSTM, we observe in both the NARMA5 and NARMA10 cases (shown in \figureautorefname{\ref{fig:lstm_NARMA5}} and \figureautorefname{\ref{fig:lstm_NARMA10}}) that both the RC and the fully optimized QLSTM reach good performance after 100 epochs of training. Surprisingly, the QLSTM-RC trains better than the fully optimized model: it predicts the sequence more accurately than the QLSTM after 30 epochs of training.
In addition, we observe that the quantum LSTM, whether RC or fully optimized, performs better than its classical counterparts.
\begin{figure}[hbtp]
\includegraphics[width=1.\linewidth]{results/LSTM/LSTM_NARMA5_PERFORMANCE_COMPARISON_GRID_full.pdf}
\caption{{\bfseries Learning the NARMA5 with QLSTM-RC.}
}
\label{fig:lstm_NARMA5}
\end{figure}
\begin{figure}[hbtp]
\includegraphics[width=1.\linewidth]{results/LSTM/LSTM_NARMA10_PERFORMANCE_COMPARISON_GRID_full.pdf}
\caption{{\bfseries Learning the NARMA10 with QLSTM-RC.}
}
\label{fig:lstm_NARMA10}
\end{figure}
\section{\label{sec:Discussion}Discussion}
\subsection{Quantum Hardware Efficiency}
Quantum hardware efficiency is an algorithm design consideration that minimizes the demands placed on quantum computing resources. This is particularly important in the current noisy intermediate-scale quantum (NISQ) era of quantum computing \cite{preskill2018quantum}. In this paper, we take hardware efficiency to mean running fewer quantum circuits.
The RC framework demonstrated in this work is well suited to NISQ computers because its hardware efficiency improves significantly over the three original QRNNs. The reason is that training is confined to the final layer: a quantum computer is used only to generate the outputs fed into the classical linear layer, and the quantum parameters themselves are never trained.
As the RC approach is hardware efficient, it reduces the negative effects of noise on the quantum computation and can therefore improve time-series prediction performance. In our work, the noisy simulation results in Figures~\ref{fig:noisy_rnn}, \ref{fig:noisy_gru}, and~\ref{fig:noisy_lstm} show that, compared with the original QRNN algorithm, the RC approach produces smoother prediction curves that are less corrupted by simulation noise. This is highly desirable given that the target function is smooth. In addition, there is evidence that the MSE loss curves, particularly for the QRNN-RC in \figureautorefname{\ref{fig:noisy_rnn}}, contain less noise and stabilize to a loss minimum in fewer epochs.
\subsection{Potential Applications}
To take maximal advantage of a quantum approach to machine learning, the method proposed in this paper can be used to reduce the time and complexity required by existing methods for certain applications. In this paper, we analyzed examples of function approximation and time-series prediction tasks. The method can further be applied to nuanced tasks involving sequential or temporal data, such as acoustic models for time-series classification as implemented in \cite{yang2021voice2series}, facial recognition systems \cite{easom2020towards}, and natural language processing \cite{di2022dawn}. Additionally, there are numerous financial applications \cite{egger2020financial}, including time-series prediction \cite{krollner2010financial, dingli2017financial} for stock prices and market behavior, and classification problems for risk and fraud detection.
\section{\label{sec:Conclusion}Conclusion}
In this paper, we introduce a function approximation and time-series prediction framework in which the quantum RNN and its variants, the quantum GRU and quantum LSTM, are used as the reservoir. We show via numerical simulations that the QRNN-RC can reach results comparable to fully trained QRNN models on several function approximation and time-series prediction tasks. Since the QRNNs in the proposed model do not need to be trained, the overall process is much faster than for the fully trained ones. We also compare with classical RNN-based RC and show that the quantum solutions require fewer training epochs in most cases. Our results demonstrate a new possibility for utilizing quantum neural networks for sequential modeling with very modest resource requirements.
\clearpage
\begin{acknowledgments}
The authors would like to thank Constantin Gonciulea and Vanio Markov for constructive and helpful discussions during the development of this paper.
The views expressed in this article are those of the authors and do not represent the views of Wells Fargo. This
article is for informational purposes only. Nothing contained in this article should be construed as investment advice.
Wells Fargo makes no express or implied warranties and expressly disclaims all legal, tax, and accounting implications
related to this article.\\
\end{acknowledgments}
\section{Introduction}
\subsection{Interpretation of observables}
\subfile{01a_intro_physics.tex}
\subsection{Software}
\subfile{01b_intro_software.tex}
\section{GNA}
\subsection{Design principles}
\subfile{02a_gna_design.tex}
\subsection{GNA core}
\subfile{02b_gna_types.tex}
\subsection{GNA usage}
\subfile{02c_control_flow.tex}
\subsection{Transformations, bundles and expressions}
\subfile{02d_bundles.tex}
\section{Prospects}
\subfile{03_intro.tex}
\subsection{Multithreading}
\subfile{03a_multithreeding.tex}
\subsection{GPU}
\subfile{03b_gpu.tex}
\section{Conclusion}
\subfile{04_conclusion.tex}
\section*{Acknowledgements}
\subfile{05_acknowledgements.tex}
\end{document}
\section{Introduction}
Graphs, as a ubiquitous data structure, are employed extensively in a wide range of applications, such as bioinformatics~\cite{Graph-classification-using-structural-attention}, citation networks~\cite{Cora} and social networks~\cite{Supervised-random-walks-predicting-and-recommending-links-in-social-networks}. All of these domains and many more can be readily modeled as graphs, which capture information about the connections between individual units. For instance, a citation graph describes interactions among research papers, represented as nodes labeled by category, with citations between papers mapped to edges. Information from a single node or a locally dense group of nodes propagates along edges, which makes graphs useful structured knowledge repositories for machine learning tasks such as link prediction and node classification.
Graph Convolutional Networks (GCNs)~\cite{GCN,Mixhop,GAT,Fastgcn,Adaptive-graph-convolutional-neural-networks} rely on convolution operations on graphs that aggregate neighbor information from low- to high-order hierarchical structures to obtain a central node's representation. Over time, GCNs and their variants have emerged as powerful approaches for a variety of tasks, including semi-supervised node classification~\cite{GCN,An-end-to-end-deep-learning-architecture-for-graph-classification}, which is the main focus of this paper.
To give GCNs more expressive power over wider neighborhoods, one may stack more layers. Unfortunately, deeper models fail to meet this expectation, partly because of over-smoothing~\cite{over-smoothing}, an issue inherent to the graph convolution mechanism. It has been proven that the graph convolution operation is a type of Laplacian smoothing, so the representations of nodes in the same region converge to similar values and become indistinguishable across classes in the embedding space as the model deepens~\cite{DeeperInsight-of-GCN}. An easy but effective way to tackle over-smoothing is to generate perturbed graph data for training. DropNode~\cite{Fastgcn}, DropEdge~\cite{Dropedge}, Dropout~\cite{Dropout, Dropout2} and GRAND~\cite{GRAND} are four typical tricks with different focuses: DropNode and DropEdge are topology-based perturbation approaches, while Dropout and GRAND perturb the graph feature matrix. These four methods are illustrated briefly in Figure \ref{Figure-1}.
\textbf{DropNode} is a node-sampling-based method. In particular, it samples subgraphs for mini-batch training by randomly removing a portion of the nodes together with the edges incident to them. As a consequence, this method constructs a subgraph $SG$ of the original graph $G$, satisfying
\begin{equation}\label{1}
V(SG)\subseteq V(G), E(SG)\subseteq E(G).
\end{equation}
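Assuming a dense adjacency-matrix representation, DropNode amounts to sampling a node subset and slicing both $\mathbf{A}$ and $\mathbf{X}$ accordingly; a sketch with illustrative names:

```python
import numpy as np

def drop_node(A, X, p, rng):
    """Randomly remove a fraction p of nodes, together with incident edges.

    Returns the adjacency and feature matrices of the induced subgraph,
    plus the indices of the kept nodes."""
    n = A.shape[0]
    keep = np.flatnonzero(rng.random(n) >= p)   # surviving node indices
    return A[np.ix_(keep, keep)], X[keep], keep

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.5).astype(int)
A = np.triu(A, 1); A = A + A.T                  # symmetric, no self-loops
X = rng.standard_normal((6, 4))
A_sub, X_sub, keep = drop_node(A, X, p=0.3, rng=rng)
```

Since entire rows and columns of $\mathbf{A}$ are removed, every edge touching a dropped node disappears along with it.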
\textbf{DropEdge} acts as a data augmenter by randomly dropping a certain fraction of edges from the input graph. Formally, it randomly sets $|E_{p}|$ non-zero elements of the adjacency matrix $\mathbf{A}$ to zero, where $E_{p}$ is the set of dropped edges under drop rate $p$. Denoting the resulting adjacency matrix by $\mathbf{A}_{\rm{DropEdge}}$, its relation to $\mathbf{A}$ is
\begin{equation}\label{1}
\mathbf{A}_{\rm{DropEdge}} = \mathbf{A} - \mathbf{A}',
\end{equation}
where $\mathbf{A}'$ is the sparse matrix spanned by a random subset $E_{p}$ of the original edge set $E$. Like DropNode, DropEdge constructs a subgraph of the original graph $G$, but it deletes only edges rather than nodes, whereas deleting nodes inevitably also removes the edges incident to them. Since DropEdge is edge-oriented instead of node-oriented, it preserves all node features for training and thus exhibits more flexibility.
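DropEdge keeps every node and zeroes a random subset of edges. Assuming an undirected graph stored as a dense symmetric adjacency matrix, a minimal sketch (names illustrative):

```python
import numpy as np

def drop_edge(A, p, rng):
    """Zero out a random fraction p of undirected edges, keeping all nodes."""
    iu, ju = np.triu_indices_from(A, k=1)
    edge_mask = A[iu, ju] > 0                    # existing upper-triangle edges
    drop = edge_mask & (rng.random(len(iu)) < p)
    A_new = A.copy()
    A_new[iu[drop], ju[drop]] = 0                # remove edge in both directions
    A_new[ju[drop], iu[drop]] = 0
    return A_new

rng = np.random.default_rng(0)
A = np.zeros((5, 5), dtype=int)
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1
A_new = drop_edge(A, p=0.4, rng=rng)
```

Only the upper triangle is sampled so that each undirected edge is dropped (or kept) as a unit, preserving symmetry.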
\textbf{Dropout} has been widely used for regularizing neural networks. It perturbs the feature matrix by randomly setting some elements of $\mathbf{X}$ to zero, i.e.,
\begin{equation}\label{1}
\tilde{\mathbf{X}}_{ij} = \frac{\epsilon_{ij}}{1-\delta}\mathbf{X}_{ij},
\end{equation}
where $\mathbf{X}_{ij}$ is the $j$-th element of the $i$-th row vector $\mathbf{X}_{i}$ of the feature matrix $\mathbf{X}$, and $\epsilon_{ij}$ is drawn from the Bernoulli distribution $Bernoulli(1-\delta)$. Dropout makes the input feature matrix $\mathbf{X}$ noisy by randomly dropping elements without affecting the graph structure, since it operates only on the components of node feature vectors.
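Element-wise dropout with the usual $1/(1-\delta)$ rescaling, so that the perturbed matrix equals $\mathbf{X}$ in expectation, can be sketched as:

```python
import numpy as np

def feature_dropout(X, delta, rng):
    """Zero each entry of X independently with probability delta,
    rescaling survivors by 1/(1 - delta) so that E[X_tilde] = X."""
    mask = rng.random(X.shape) >= delta          # epsilon_ij ~ Bernoulli(1-delta)
    return mask * X / (1.0 - delta)

rng = np.random.default_rng(0)
X = np.ones((1000, 50))
X_tilde = feature_dropout(X, delta=0.3, rng=rng)
```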
\textbf{GRAND} is another perturbation method based on the graph feature matrix. Unlike Dropout, GRAND randomly sets entire node feature vectors to zero, i.e.,
\begin{equation}\label{1}
\tilde{\mathbf{X}}_{i} = \frac{\epsilon_{i}}{1-\delta}\mathbf{X}_{i},
\end{equation}
where $\tilde{\mathbf{X}}_{i}$ denotes the perturbed $i$-th row vector of the feature matrix $\mathbf{X}$ and $\epsilon_{i}$ is drawn from the Bernoulli distribution $Bernoulli(1-\delta)$.
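GRAND's perturbation differs from Dropout only in that one Bernoulli mask is shared across a node's entire feature vector; a sketch:

```python
import numpy as np

def grand_drop(X, delta, rng):
    """Zero whole node feature vectors (rows of X) with probability delta,
    rescaling the kept rows by 1/(1 - delta)."""
    eps = (rng.random(X.shape[0]) >= delta).astype(float)  # one mask per node
    return (eps / (1.0 - delta))[:, None] * X

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 3))
X_tilde = grand_drop(X, delta=0.5, rng=rng)
```

Each row of the result is either a zero vector or the original row scaled by $1/(1-\delta)$.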
\begin{figure*}[htbp]
\centering
\includegraphics[height=7cm,width=15cm]{Figure-1.png}
\caption{
Illustration of the four graph data augmentation approaches that help alleviate over-smoothing. DropNode randomly drops nodes (shown in grey) and generates new graphs for training. DropEdge instead generates new graphs by randomly deleting edges (dotted lines). Dropout perturbs the feature matrix by stochastically setting some of its elements to zero. GRAND perturbs the feature matrix by randomly setting some of its rows to zero vectors; the corresponding nodes are marked in green.
}
\label{Figure-1}
\end{figure*}
These four techniques use different perturbation strategies to generate graph data for training, but all damage the graph, in terms of either its topology or its feature information. DropNode and DropEdge perturb the graph by stochastically breaking the topological structure, and note that the efficiency of information propagation from one node to another relies on the integrity of that topology. Dropout and GRAND operate on the feature matrix but leave the topological structure unchanged. Motifs, as higher-order connectivity patterns, are crucial building blocks of graph topology and control the behavior of a network~\cite{motifs-2}. Within a motif, the features of locally dense nodes are gathered into a whole, a motif-based information structure, to express information, but this special aggregated structure is inevitably destroyed by the various perturbation operations.
As a fundamental concept of statistical physics and information theory, graph entropy serves as a quantitative measure of graph dynamics~\cite{EntropyDynamic} and can describe changes in graph topology as well as graph features. The greater the entropy, the more information the system reflects, owing to higher uncertainty and more randomness. The original graph clearly exhibits the highest entropy, since neither its topological structure nor its feature information has been damaged. However, over-smoothing caused by information coupling weakens the feature extraction ability of the learning model, so the information contained in the original graph cannot be fully learned even though its entropy is highest. Graph augmentation methods tackle over-smoothing by deleting elements of the graph topology or features, which perturbs the feature distribution. Graph entropy is usually defined on a topological invariant (e.g., vertex degrees, distances, etc.), but the features attached to nodes in local areas also deserve attention.
In this paper, we propose a new graph entropy as an index of the smoothness of graph feature information diffusion and show that the key to controlling this smoothness lies in motif-based information structures. We then propose a novel graph data perturbation strategy for the over-smoothing problem in GCNs that preserves as much graph entropy as possible. The main steps of this augmentation strategy are as follows. First, we keep the original adjacency matrix $\mathbf{A}$ unchanged, dropping no nodes or edges from the input graph in any training epoch. Then, nodes belonging to motifs of a specific shape, together with nodes outside motifs selected with a certain probability, are set to activated status. Only the features of activated nodes appear in the feature matrix; the rest are represented by zero vectors. In this study, we focus on triangle motifs because of their ubiquity in the analysis of social networks~\cite{motif-important-social-network}. Extensive experiments on several real-world datasets demonstrate the effectiveness of the proposed method in reducing over-smoothing and increasing robustness throughout training. Our results significantly improve semi-supervised node classification performance compared to state-of-the-art methods.
We summarize the main contributions as follows:
\begin{itemize}
\item[(1)] We provide a new graph entropy design to measure the smoothness of the graph feature distribution and conclude that motif-based information structures determine this graph entropy.
\item[(2)] We propose a novel graph data augmentation strategy that protects the integrity not only of the topological structure but also of motif-based information structures. Our strategy maintains more graph entropy than other methods.
\item[(3)] Extensive experiments are conducted on several real-world datasets to show the effectiveness of our proposed method.
\item[(4)] Our approach significantly enhances the robustness and generalization ability for GCNs.
\end{itemize}
\section{BACKGROUND AND PROBLEM STATEMENT}
Let $G = (V, E)$ denote a graph with node set $V$ and edge set $E \subseteq V\times V$. $G$ has a feature matrix $\mathbf{X} \in \mathbb{R}^{|V| \times \eta}$ whose $i$-th row $\mathbf{X}_{i}$ is the feature vector of node $v_{i}$, of length $\eta$, and the training labels of all nodes are collected in $\mathbf{Y} \in\{0,1\}^{|V| \times c}$, where $c$ is the number of classes. The adjacency matrix $\mathbf{A} \in \mathbb{R}^{|V| \times |V|}$ encodes the node-wise connections of the network: $\mathbf{A}_{ij} = 1$ if there exists an edge between nodes $v_i$ and $v_j$, and $\mathbf{A}_{ij} = 0$ otherwise. We denote by $\mathbf{D} = diag(d_1, d_2, ..., d_{|V|})$ the degree matrix, with $d_j=\sum\nolimits_{i=1}^{|V|}\mathbf{A}_{ij}$.
\subsection{Semi-Supervised Node Classification}
Semi-supervised node classification has common applications, including classifying documents, videos, or proteins, and is one of the most basic tasks for verifying the effectiveness of GCNs~\cite{GCN}. In this task, labels are available for only a small proportion of the nodes, and the goal is to label the full graph from this small initial set. Formally, $m$ nodes ($0 < m \ll |V|$) have observed labels $\mathbf{Y}^{L}$, while the labels $\mathbf{Y}^{U}$ of the remaining $|V|-m$ nodes are missing. The objective is to learn a function $g : (G, \mathbf{X}, \mathbf{Y}^{L}) \rightarrow \mathbf{Y}^{U}$ that predicts the missing labels $\mathbf{Y}^{U}$ of the unlabeled nodes.
\subsection{Graph Convolutional Networks}
Graph Convolutional Networks (GCNs) generalize neural network techniques to graph-structured data. The core operation in GCNs is graph propagation, in which information spreads from each node to its neighbors under deterministic propagation rules. The feed-forward pass of a GCN consists of $k$ graph convolution layers, each similar to a perceptron but with an additional neighborhood aggregation step motivated by spectral convolution. This framework is conducted recursively as
\begin{equation}\label{1}
\quad \mathbf{H}^{(l)}=\left\{
\begin{aligned}
\sigma(\hat{\mathbf{A}}\mathbf{H}^{(l-1)}\mathbf{W}^{(l-1)}) &,\ if\ l\in[1,...,k] \\
\mathbf{X}&,\ if\ l=0
\end{aligned}
\right.,
\end{equation}
where $\hat{\mathbf{A}}=\hat{\mathbf{D}}^{-1/2}(\mathbf{A}+\mathbf{I})\hat{\mathbf{D}}^{-1/2}$ is the symmetrically normalized adjacency matrix with self-connections, $\hat{\mathbf{D}}$ is the degree matrix of $\mathbf{A}+\mathbf{I}$, and $\mathbf{H}^{(l+1)} = \{h_1^{(l+1)},...,h_{|V|}^{(l+1)}\}$ are the hidden vectors of the $(l+1)$-th layer, with $h^{(l+1)}_{i}$ the hidden features of node $i$ computed from the input activations $\mathbf{H}^{(l)}$, where $\mathbf{H}^{(0)} = \mathbf{X}$; $\sigma(\cdot)$ is a nonlinear activation function, and $\mathbf{W}^{(l)}$ is the weight matrix of layer $l$.
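As a toy illustration of this propagation rule, the following dense NumPy forward pass applies $\sigma(\hat{\mathbf{A}}\mathbf{H}\mathbf{W})$ with ReLU as $\sigma$ and random weights (a sketch, not the implementation used in our experiments):

```python
import numpy as np

def gcn_forward(A, X, weights):
    """Run k layers of H <- ReLU(A_hat @ H @ W), where A_hat is the
    symmetrically normalized adjacency matrix with self-loops."""
    A_tilde = A + np.eye(A.shape[0])             # add self-connections
    d = A_tilde.sum(axis=1)
    A_hat = A_tilde / np.sqrt(np.outer(d, d))    # D^{-1/2} (A+I) D^{-1/2}
    H = X
    for W in weights:
        H = np.maximum(A_hat @ H @ W, 0.0)       # ReLU nonlinearity
    return H

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path
X = rng.standard_normal((3, 4))
weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 2))]
H = gcn_forward(A, X, weights)   # final node embeddings, shape (3, 2)
```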
\section{Graph Entropy}
The concept of entropy is fundamental to statistical physics, and the second law of thermodynamics states that the entropy of a macroscopic system cannot decrease. Shannon introduced entropy into information theory as a characteristic measure of the information related to a system. As representations of complex systems, real networks are usually very large, and one can characterize graph information quantitatively in terms of macroscopic parameters using entropy-like methods. Graph entropy is thus widely used to describe and understand graph dynamics quantitatively in terms of general topology or features. It was first introduced by Rashevsky~\cite{GraphEntropyBeginging}; Mowshowitz then investigated graph entropy as a measure of the structural information content of graphs~\cite{entropyAndComplexity, entropyAndComplexity-2}, and Körner applied a different definition of graph entropy in coding theory~\cite{korner1973coding}.
Most graph entropies derive from Shannon's basic definition: for a discrete system $X$, $I(x_{i})=-\log p(x_{i})$ denotes the self-information of $x_{i}\in X$ with occurrence probability $p(x_{i})$. The entropy of the system $X$, denoted $H(X)$, is defined as
\begin{equation}\label{Shannon_Entropy}
H(X)= - \sum\limits_{i=1}^{|X|} p(x_{i})\log p(x_{i}).
\end{equation}
Usually, information-theoretic measures for graphs are based on a graph invariant and a partitioning derived from it~\cite{entropyAndComplexity-2}. Instead of determining partitions of elements from a given invariant, Dehmer et al. developed an approach based on a so-called information functional $f$, which maps sets of vertices to the positive reals~\cite{entropyAndComplexity-3}, via
\begin{equation}\label{information_functional}
p_{i} = \frac{f_{i}}{\sum\limits_{j=1}^{|X|}f_{j}}.
\end{equation}
The graph entropy measure is then obtained by combining Eqs. (\ref{Shannon_Entropy}) and (\ref{information_functional}).
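For any positive information functional $f$, the resulting entropy can be computed generically. In the sketch below, the vertex degree is used purely as an example functional, and the natural logarithm is used (another base only rescales the values):

```python
import math

def graph_entropy(f_values):
    """Shannon entropy of the distribution p_i = f_i / sum_j f_j induced by
    a positive information functional evaluated on each vertex."""
    total = sum(f_values)
    probs = [f / total for f in f_values]
    return -sum(p * math.log(p) for p in probs if p > 0)

# Example: vertex degrees of a 4-cycle as the information functional.
# All degrees are equal, so the induced distribution is uniform over 4
# vertices and the entropy equals ln(4).
degrees = [2, 2, 2, 2]
H = graph_entropy(degrees)
```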
Graph entropy measures randomness or uncertainty from a statistical perspective. A maximum entropy description retains all of the uncertainty not removed from the original data and has been interpreted as maximally noncommittal with respect to missing information~\cite{MaxEntropy-1}. Below we present a novel graph entropy design, in terms of node features and neighborhood relations, to evaluate feature information diffusion.
\subsection{Smoothness Index}
Here we provide a new graph entropy design as an index of the smoothness of the global information distribution. The idea comes from an application of entropy in image segmentation, in which each pixel of a digital image maps to a vertex and pixels are divided into communities based on image contrast. Entropy plays a significant role in quantifying the smoothness of texture across regions in image analysis: high entropy indicates a smoother texture and fewer abrupt graphic blocks. Consequently, the target image contains more information, since it exhibits a more uniform distribution~\cite{MaxImageEntropy-1,MaxImageEntropy-2}.
In our new graph entropy design, the feature vector of each node is regarded as an individual, and together they constitute the feature vector space. Following the earlier notation, $\mathbf{X} \in \mathbb{R}^{|V| \times \eta}$ denotes the feature matrix of graph $G=(V, E)$, whose $i$-th row is the feature vector $\mathbf{X}_{i}$ of length $\eta$ for node $v_{i}$, $i=1,\ldots, |V|$. We assign a probability to each node of the graph as
\begin{equation}\label{1}
p(v_{i}) = \frac{f(v_{i})}{\sum\limits_{j=1}^{|V|}f(v_{j})},
\end{equation}
where $f(v_{i})$ equals the average inner product between feature vector $\mathbf{X}_{i}$ and its first order neighbors' features, i.e.,
\begin{equation}\label{1}
f(v_{i}) = \frac{1}{|N_{v_{i}}|}\sum\limits_{v_{k}\in N_{v_{i}}}\langle\mathbf{X}_{i}, \mathbf{X}_{{k}}\rangle,
\end{equation}
where $N_{v_{i}}$ denotes the set of first-order neighbors of the target node $v_{i}$. We use the average inner product between a node's features and those of its neighbors as a similarity measure to express the local feature distribution. Neighboring nodes with a larger inner product are more similar in feature space and exhibit higher smoothness.
Relying on the definition of $p(v_{i})$ for each node, we define the smoothness index of feature information diffusion on a graph as
\begin{equation}\label{my_graph_entropy}
I(G) = -\sum\limits_{i = 1}^{|V|} p(v_{i}) \log p(v_{i}).
\end{equation}
It quantifies the randomness of the feature distribution via the ensemble average of $- \log p(v_{i})$ over the nodes $v_{i}$, where $p(v_{i})$ represents the contribution of local features to the global scope in the form of a probability. Features tend to scatter evenly over a graph $G$ when $I(G)$ reaches a relatively high value.
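The index $I(G)$ can be computed directly from $\mathbf{A}$ and $\mathbf{X}$. The dense-matrix sketch below assumes nonnegative features, so that every $f(v_{i})$, and hence every $p(v_{i})$, is positive; names are illustrative:

```python
import numpy as np

def smoothness_index(A, X):
    """Graph entropy I(G) built from f(v_i): the mean inner product between
    node i's features and those of its first-order neighbors."""
    S = X @ X.T                                   # all pairwise inner products
    deg = A.sum(axis=1)
    f = (A * S).sum(axis=1) / deg                 # neighborhood-averaged similarity
    p = f / f.sum()                               # induced probability per node
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # a triangle
X = np.abs(rng.standard_normal((3, 5)))           # nonnegative features, so f > 0
I_G = smoothness_index(A, X)
```

For a graph with $|V|$ nodes the index is bounded above by $\log |V|$, attained when the induced distribution is uniform.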
\begin{figure*}[htbp]
\centering
\includegraphics[height=4.0cm,width=15cm]{Figure-2.png}
\caption{The graph entropy curves are plotted against drop rates from $0\%$ to $90\%$ on the Cora, Citeseer, and Pubmed datasets. Each dot on a curve is an average over ten calculations.
}
\label{Figure-ShanoonEntropy}
\end{figure*}
In Figure \ref{Figure-ShanoonEntropy}, we take the Cora, Citeseer, and Pubmed datasets as examples of how the graph entropy varies as feature information structures are damaged. All curves start from the highest graph entropy, equal to 7.6357, 7.9247, and 9.6724 on the three datasets, respectively. The curves then respond differently to the drop rate: GRAND causes the most severe decay in graph entropy, while DropEdge causes the slightest loss as the drop rate increases. All curves decrease quickly once the drop rate passes $50\%$ and reach their lowest values at a $90\%$ drop rate. Clearly, all four methods are strongly sensitive to the drop rate, which reflects the extent of damage to the graph's features, though for different reasons: DropNode deletes some features by dropping nodes, DropEdge cuts the connections between features attached to nodes, while Dropout and GRAND directly remove elements or rows of the feature matrix.
\subsection{Motif-Based Information Structure}
Motif-based approaches are widely used in graph learning tasks~\cite{motifs-Triangle-community-detection, motifs-higher-order-clustering, motifs-EdMOT}. Formally, a motif with $s$ nodes and $t$ edges can be denoted as
\begin{equation}\label{1}
M_{s}^{t} = \{ V_{M},E_{M} \},
\end{equation}
where $V_{M}\subseteq V$ is the set of $s$ nodes and $E_{M}\subseteq E$ is the set of $t$ edges. In particular, the triangle motifs ($M_{3}^{3}$) on which we focus in this paper are significant in social graph learning~\cite{motif-important-social-network}.
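For an undirected graph, the nodes lying on at least one triangle motif $M_{3}^{3}$ can be read off the diagonal of $\mathbf{A}^{3}$, whose $i$-th entry counts the closed walks of length three through node $v_i$ (twice the number of triangles containing it). A sketch:

```python
import numpy as np

def triangle_nodes(A):
    """Return indices of nodes lying on at least one undirected triangle.

    diag(A^3)[i] counts closed length-3 walks from node i, i.e. twice the
    number of triangles through it, so any positive entry marks a member."""
    closed_walks = np.diagonal(np.linalg.matrix_power(A, 3))
    return np.flatnonzero(closed_walks > 0)

A = np.zeros((5, 5), dtype=int)
for i, j in [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4)]:  # triangle 0-1-2 plus a tail
    A[i, j] = A[j, i] = 1
nodes = triangle_nodes(A)   # only nodes 0, 1, 2 lie on a triangle
```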
\begin{figure}[htbp]
\centering
\includegraphics[height=2cm,width=7cm]{Figure-3.png}
\caption{
Examples of four triangle motifs: an undirected triangle (a), a triangle in any direction (b), a cycle (c), and a feed-forward loop (d).
}
\label{Figure-motif}
\end{figure}
As higher-order connectivity structures, motifs are the building blocks of graph topology and are crucial to the organization of a graph~\cite{motifs-1,motifs-2, motifs-Graphlet}. DropNode and DropEdge break the integrity of motifs, so the features on motifs are deleted along with them. Dropout and GRAND break the integrity of the features on motifs without perturbing the topology. Notably, because motifs exhibit higher connectivity, the features attached to their nodes aggregate into a whole that expresses locally dense characteristics within the graph. We define this special information structure as the motif-based information structure; it plays a significant role in graph entropy~\cite{GraphEntropyBook}, and keeping it intact can be regarded as a criterion for designing a new, entropy-preserving augmentation strategy.
\section{METHODOLOGY}
Building on the above, in this section we describe a new stochastic data augmentation strategy for semi-supervised learning on graphs, as illustrated in Figures \ref{Figure-3} and \ref{Figure-4}.
\begin{figure*}[htbp]
\centering
\includegraphics[height=6.5cm,width=15cm]{Figure-4.png}
\caption{
We use triangle motifs $M_{3}^{3}$ to illustrate the proposed strategy. The original graph with feature matrix $\mathbf{X}$ serves as input, and its perturbed data are generated in two steps. In step 1, nodes are activated if they lie on a triangle motif $M_{3}^{3}$; activated nodes are marked in red and the others in green. In step 2, each remaining node is activated with a preset probability, turning from green to red. Only the features of activated nodes are revealed in $\mathbf{X}$; the rest are set to zero vectors.
}
\label{Figure-3}
\end{figure*}
\begin{figure*}[htbp]
\centering
\includegraphics[height=6.5cm,width=15cm]{Figure-5.png}
\caption{
Illustration of the proposed method on the Cora dataset, which contains 2708 nodes and 5429 edges in total. A node turns from green to red when activated, and only features from activated nodes are retained for training. At the beginning, all nodes are in the dormant state (green), waiting to be activated; next, the nodes on triangle motifs are activated, as shown in the second subfigure; finally, the remaining nodes are activated randomly with probability $p=0.5$.
}
\label{Figure-4}
\end{figure*}
\subsection{Generate Graph Data Augmentations Using Entropy Preserving Strategy}
For a graph $G = (V, E)$ with adjacency matrix $\mathbf{A}$ and feature matrix $\mathbf{X}$, our method keeps the topological structure of $G$ unchanged and takes two steps to generate multiple graph data augmentations: (1) activating the nodes on motifs, and (2) activating the remaining nodes with a certain probability.
In the first step of each training epoch, we set the nodes on triangle motifs $M_{s}^{t}$ to the activated status while the remaining nodes stay dormant. This yields a feature matrix $\mathbf{X}_{M_{s}^{t}}$ in which only features on activated nodes are retained, while the others are set to zero vectors. In the second step, for each node $v_{i}$ in the remaining part we sample a binary mask $\alpha_{i}$ from $Bernoulli(1-\delta)$ to determine whether $v_{i}$ is activated.
To guarantee that the perturbed feature vector equals the original vector in expectation, we multiply by the coefficient $\frac{1}{1-\delta}$ and obtain the regularized perturbed feature vector
\begin{equation}\label{eq:rescaled}
\mathbf{\tilde{X}}_{i} = \frac{\alpha_{i}}{1-\delta}\mathbf{X}_{i}.
\end{equation}
In summary, our proposed method generates perturbed feature matrix $ \mathbf{\tilde{X}}$ such that
\begin{equation}\label{eq:perturbed-feature}
\mathbf{\tilde{X}}_{i}=\left\{
\begin{aligned}
\mathbf{X}_{i} &, \ \text{if} \ v_{i} \in V_{M_{s}^{t}} \\
\frac{\alpha_{i}}{1-\delta}\mathbf{X}_{i} &, \ \text{otherwise}
\end{aligned},
\right.
\end{equation}
where $\mathbf{X}_{i}$ is the $i$-th row vector of the original feature matrix $\mathbf{X}$, and the binary mask $\alpha_{i}$ is drawn from $Bernoulli(1-\delta)$. The overall computational complexity is $\mathcal{O}(n^3)$, and pseudocode is shown in Algorithm \ref{Algorithm-Strategy}.
\begin{algorithm}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\caption{Graph Data Augmentation Strategy with Entropy Preserving}
\label{Algorithm-Strategy}
\begin{algorithmic}[1]
\REQUIRE graph $G$ with its node set $V$ and feature matrix $\mathbf{X}$, target motif $M_{s}^{t}$, Bernoulli distribution $Bernoulli(1-\delta)$.
\ENSURE augmentations of graph feature matrix: $\mathbf{\tilde{X}}$.
\FOR {$i=1$; $i\leq|V|$; $i++$}
\IF {node $v_{i}$ in $M_{s}^{t}$}
\STATE $\mathbf{\tilde{X}}_{i} = \mathbf{X}_{i}$
\ELSE
\STATE $\alpha_{i} \sim Bernoulli(1-\delta)$
\STATE $\mathbf{\tilde{X}}_{i} = \frac{\alpha_{i}}{1-\delta}\mathbf{X}_{i}$
\ENDIF
\ENDFOR
\STATE \textbf{return} $\mathbf{\tilde{X}}$
\end{algorithmic}
\end{algorithm}
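A minimal NumPy sketch of Algorithm \ref{Algorithm-Strategy} follows (illustrative only, not the authors' released code; the set \texttt{motif\_nodes} is assumed to be precomputed from the triangle motifs of the graph):

```python
import numpy as np

def augment_features(X, motif_nodes, delta, rng):
    """One pass of the entropy preserving augmentation (Algorithm 1 sketch).

    Nodes on triangle motifs keep their features unchanged; every other
    node is activated with probability 1 - delta, and activated features
    are rescaled by 1 / (1 - delta) so that E[X_tilde] = X.
    """
    X_tilde = np.zeros_like(X, dtype=float)
    for i in range(X.shape[0]):
        if i in motif_nodes:
            X_tilde[i] = X[i]                    # step 1: motif nodes stay active
        elif rng.random() < 1.0 - delta:         # step 2: alpha_i ~ Bernoulli(1-delta)
            X_tilde[i] = X[i] / (1.0 - delta)
    return X_tilde

rng = np.random.default_rng(0)
X = np.ones((6, 4))
motif_nodes = {0, 1, 2}                          # e.g., one triangle motif
X_tilde = augment_features(X, motif_nodes, delta=0.5, rng=rng)
```

Calling the function $K$ times per epoch yields the $K$ augmentations used for training.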
One benefit of this strategy is that neither nodes nor edges are deleted from the original graph at any point, so the graph topology remains unchanged.
A further advantage is that this strategy maintains the graph entropy as much as possible, since motif-based information structures are protected. To understand how our strategy preserves graph entropy, we calculate the graph entropy of the citation graphs in $7$ contrasting scenarios, as shown in Table \ref{table-3}: the original graph without any operation on the feature matrix, graphs perturbed by DropNode, DropEdge, Dropout and GRAND, the graph with only motif-based information structures preserved, and the graph perturbed by our entropy preserving strategy. The original graph exhibits the highest entropy, reaching 7.6358, 7.9247 and 9.6724 on the three datasets respectively, since it contains all the features. DropNode and GRAND obtain the lowest entropy. Note in particular that the sixth scenario, the graph with only motif-based information structures preserved, corresponds to the first step of our strategy. It maintains almost all of the entropy of the original graph, achieving 7.6028, 7.6996 and 9.6384, which is only slightly lower than the original graph and higher than DropNode, DropEdge, Dropout and GRAND in all but one case. Our entropy preserving strategy achieves the second-highest graph entropy, with values closest to the original graph: only 0.0129, 0.1466 and 0.0151 lower on the respective datasets.
We also report the number of triangle motifs and the number of nodes on triangle motifs in Table \ref{table-4}. For Cora, 1470 nodes lie on motifs, accounting for more than $50\%$ of the total number of nodes, whereas for Citeseer and Pubmed the numbers are 1183 and 4835, about $35.6\%$ and $24.5\%$ respectively. These observations confirm that motif-based information structures determine most of the graph entropy even though they do not contain a dominant share of the nodes, and that the random activation step contributes more to generating diverse perturbations.
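The motif statistics in Table \ref{table-4} can be reproduced with a direct triangle enumeration. The pure-Python sketch below (illustrative only) marks every node that lies on at least one triangle motif:

```python
from itertools import combinations

def nodes_on_triangles(edges):
    """Return the set of nodes that lie on at least one triangle motif."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    on_motif = set()
    for w in adj:
        for u, v in combinations(adj[w], 2):
            if v in adj.get(u, set()):       # edge u-v closes triangle u-v-w
                on_motif.update((u, v, w))
    return on_motif

# Toy graph: one triangle (0, 1, 2) plus a pendant edge 2-3.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
motif_nodes = nodes_on_triangles(edges)
```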
\begin{table*}
\centering
\caption{
Graph entropy calculation results for $7$ scenarios, shown in the columns from left to right: original graph, graph perturbed by DropNode ($50\%$ drop rate), graph perturbed by DropEdge ($50\%$ drop rate), graph perturbed by Dropout ($50\%$ drop rate), graph perturbed by GRAND ($50\%$ drop rate), graph with only triangle motif-based information structures preserved, and graph perturbed by our entropy preserving strategy (activation probability drawn from the $Bernoulli(0.5)$ distribution).
}
\begin{tabular}{llllllll}
\toprule
Datasets & Original & DropNode & DropEdge & Dropout & GRAND & Only Motif-based & Our Strategy\\
 & & & & & & Information Structures & \\
 & & & & & & Preserved & \\
\midrule
Cora & 7.6358 & 6.7793 & 7.3648 & 7.3063 &6.6292 & 7.6028 & 7.6229 \\
Citeseer & 7.9247 & 6.8784 & 7.5570 & 7.7154 &6.7884 & 7.6996 & 7.7781 \\
Pubmed & 9.6724 & 8.6353 & 9.3230 & 9.3119 &8.5597 & 9.6384 & 9.6573 \\
\bottomrule
\end{tabular}
\label{table-3}
\end{table*}
\begin{table}
\centering
\caption{
Number of nodes from original graph, number of triangle motifs and number of nodes on triangle motifs.
}
\begin{tabular}{llll}
\toprule
Datasets & Nodes & Motifs & Nodes on Motifs \\
\midrule
Cora & 2708 & 4890 & 1470 \\
Citeseer & 3327 & 4137 & 1183 \\
Pubmed & 19717 & 37649 & 4835 \\
\bottomrule
\end{tabular}
\label{table-4}
\end{table}
\subsection{Aggregate Mixed Order Information}
Since the way in which information from multi-order neighborhoods affects a central node differs across datasets, we adopt a linear combination of different adjacency matrix powers with weights adapted to the target dataset, i.e., $\bar{\mathbf{X}}=\bar{\mathbf{A}}\tilde{\mathbf{X}}$, where
\begin{equation}\label{eq:mixed-order}
\bar{\mathbf{A}}=\sum\limits_{i=0}^{d}g_{i}(\theta_{0},\theta_{1},...,\theta_{d})\hat{\mathbf{A}}^{i}
\end{equation}
is the weighted average of the powers of the symmetrically normalized adjacency matrix $\hat{\mathbf{A}}$ from order $0$ to $d$. The weight $g_{i}(\theta_{0},\theta_{1},...,\theta_{d})$ is defined by the softmax function as
\begin{equation}\label{eq:softmax-weight}
g_{i}(\theta_{0},\theta_{1},...,\theta_{d})=\frac{\exp(\theta_{i})}
{\sum\limits_{j=0}^{d}\exp(\theta_{j})}.
\end{equation}
Note that the parameters $\theta_{0},\theta_{1},...,\theta_{d}$ are updated iteratively during training and converge to suitable values for the target dataset.
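A numerical sketch of this mixed-order aggregation is given below (illustrative; we assume the common self-loop renormalization for $\hat{\mathbf{A}}$, and the $\theta_{i}$ values here are placeholders rather than learned parameters):

```python
import numpy as np

def mixed_order_propagation(A, theta):
    """Compute A_bar = sum_i g_i(theta) * A_hat^i for i = 0..d (a sketch).

    A_hat is the symmetrically normalized adjacency with self-loops,
    and g is the softmax of the weights theta (length d + 1).
    """
    n = A.shape[0]
    A_loop = A + np.eye(n)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_loop.sum(axis=1)))
    A_hat = d_inv_sqrt @ A_loop @ d_inv_sqrt
    g = np.exp(theta - theta.max())
    g = g / g.sum()                      # softmax weights g_0 .. g_d
    A_bar = np.zeros_like(A_hat)
    power = np.eye(n)                    # A_hat^0
    for gi in g:
        A_bar += gi * power
        power = power @ A_hat
    return A_bar

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
theta = np.zeros(3)                      # d = 2, uniform weights
A_bar = mixed_order_propagation(A, theta)
```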
\subsection{Make Prediction}
Suppose that we generate $K$ augmentations per training epoch, yielding the set of perturbed feature matrices $\{ \tilde{\mathbf{X}}^{(k)}\}_{k=1}^{K}$. Each of them is propagated and fed into the network to obtain prediction probabilities in the form of a matrix $\bar{\mathbf{Z}}^{(k)}\in [0, 1]^{n \times c}$:
\begin{equation}\label{eq:prediction}
\bar{\mathbf{Z}}^{(k)}= \varphi(\bar{\mathbf{X}}^{(k)}, \Omega),
\end{equation}
where $\bar{\mathbf{X}}^{(k)} = \bar{\mathbf{A}}\tilde{\mathbf{X}}^{(k)}$ and $\Omega$ denotes the model parameters.
\subsection{Loss}
Our work follows \cite{deeplearning-loss} and \cite{GRAND} in designing the loss function as a combination of the supervised loss and the graph regularization loss, i.e.,
\begin{equation}\label{eq:total-loss}
\mathcal{L}= \mathcal{L}_{s} + \lambda \mathcal{L}_{r}.
\end{equation}
The first term $\mathcal{L}_{s}$ on the right-hand side is the supervised classification loss on the labeled node set $S_{L} = \{(v_{i}, \mathbf{Y}_{i}^{L})\}_{i=1}^{m}$, where $v_{i}$ is the target node and $\mathbf{Y}_{i}^{L}$ its ground-truth label. The model outputs $\bar{\mathbf{Z}}^{(k)}_{i}$ as the prediction for each node $v_{i}$ in $S_{L}$, and $\mathcal{L}_{s}$ is the average cross-entropy loss over the $K$ data augmentations:
\begin{equation}\label{eq:sup-loss}
\mathcal{L}_{s} = -\frac{1}{K}\sum_{k=1}^{K}\sum\limits_{i=1}^{m}(\mathbf{Y}_{i}^{L})^\mathsf{T}\log \bar{\mathbf{Z}}_{i}^{(k)}.
\end{equation}
The second term is the graph regularization loss $\mathcal{L}_{r}$, which guides the prediction of each unlabeled node $v_{i}$ toward its expected label over the $K$ augmentations by minimizing the distance between $\bar{\mathbf{Z}}^{(k)}_{i}$ and $\bar{\mathbf{Z}}^{'}_{i}$:
\begin{equation}\label{equation-15}
\mathcal{L}_{r}= \frac{1}{K}\sum_{k=1}^{K}\sum\limits_{i=1}^{|V|}\|\bar{\mathbf{Z}}^{(k)}_{i} - \bar{\mathbf{Z}}^{'}_{i}\|.
\end{equation}
In equation (\ref{equation-15}), $\bar{\mathbf{Z}}^{'}_{i}$ represents the sharpened distribution derived from the expected label of node $v_{i}$, which is obtained from the average prediction
\begin{equation}\label{eq:avg-pred}
\bar{\mathbf{Z}}_{i} = \frac{1}{K}\sum_{k=1}^{K}\bar{\mathbf{Z}}_{i}^{(k)}.
\end{equation}
The $j$-th element of $\bar{\mathbf{Z}}^{'}_{i}$ gives the probability that node $v_{i}$ belongs to the $j$-th class, and is defined as
\begin{equation}\label{eq:sharpen}
\bar{\mathbf{Z}}_{ij}^{'} = \frac{\bar{\mathbf{Z}}_{ij}^{\frac{1}{\kappa}}}{\sum_{t=1}^{c}\bar{\mathbf{Z}}_{it}^{\frac{1}{\kappa}}}, 1 \leq j \leq c,
\end{equation}
where the categorical distribution is controlled by the hyperparameter $\kappa \in [0,1]$; $\bar{\mathbf{Z}}^{'}_{i}$ converges to a one-hot distribution as $\kappa$ approaches 0~\cite{GRAND}.
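The averaging, sharpening and regularization-distance steps can be sketched as follows (our illustrative reading of the equations above, with the norm taken row-wise; the prediction matrices are placeholders):

```python
import numpy as np

def sharpen(Z_bar, kappa):
    """Row-wise sharpening of the averaged predictions; kappa in (0, 1]."""
    powered = Z_bar ** (1.0 / kappa)
    return powered / powered.sum(axis=1, keepdims=True)

def consistency_loss(Z_list, kappa):
    """Distance between each augmentation's prediction and the sharpened mean."""
    Z_bar = sum(Z_list) / len(Z_list)        # average over the K augmentations
    Z_sharp = sharpen(Z_bar, kappa)
    return sum(np.linalg.norm(Z - Z_sharp, axis=1).sum()
               for Z in Z_list) / len(Z_list)

Z1 = np.array([[0.7, 0.3], [0.4, 0.6]])      # predictions for augmentation 1
Z2 = np.array([[0.9, 0.1], [0.2, 0.8]])      # predictions for augmentation 2
loss = consistency_loss([Z1, Z2], kappa=0.5)
```

Smaller $\kappa$ pushes the sharpened rows toward one-hot vectors, which strengthens the consistency signal on confident nodes.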
\section{Experiments}
With the model proposed above, in this section we follow the experimental setup of \cite{GCN} to evaluate its effectiveness on the semi-supervised node classification task.
\subsection{Datasets}
We evaluate our model on the real-world citation datasets Cora, Citeseer and Pubmed~\cite{Cora-Citeseer-Pubmed}; the data partition~\cite{Revisiting-Graph-Ebd} details are given in Table \ref{table-1}. Each citation network represents documents as nodes and citation links between two documents as edges, and nodes are labeled according to their categories. In the semi-supervised node classification task, a preset fraction of labeled nodes serves as the training set for learning to predict the labels of the nodes in the validation and test sets.
\begin{table*}
\centering
\caption{Statistics of the benchmark graph datasets. The columns are: name of dataset, number of classes, number of nodes, number of edges, number of features, number of training nodes, number of validation nodes and number of test nodes.}
\begin{tabular}{llllllll}
\toprule
Datasets & Classes & Nodes & Edges & Features & Training Nodes & Validation Nodes & Test Nodes \\
\midrule
Cora & 7 & 2708 & 5429 & 1433 & 1208 & 500 & 1000 \\
Citeseer & 6 & 3327 & 4732 & 3703 & 1827 & 500 & 1000 \\
Pubmed & 3 & 19717 & 44338 & 500 & 18217 & 500 & 1000 \\
\bottomrule
\end{tabular}
\label{table-1}
\end{table*}
\subsection{Baselines}
To validate the performance of our approach, we compare it with graph embedding methods, graph neural networks and graph augmentation methods on the datasets in Table \ref{table-1}. The details of the compared methods are as follows:
\begin{itemize}
\item \textbf{Graph Embedding Methods:} SemiEbd~\cite{SemiEbd} provides a semi-supervised embedding algorithm which can easily be applied to deep multi-layer learning models. DeepWalk~\cite{deepwalk}, a representative factorization-based approach, uses random walks to obtain embeddings from the graph.
\item \textbf{Graph Convolution Methods:} GCN~\cite{GCN} proposes convolutional architecture via a local first-order approximation containing both local graph structure and features of nodes. GAT~\cite{GAT} leverages self-attention layers to specify different weights to different nodes in the neighbors. MixHop~\cite{Mixhop} learns mixing feature representation of neighbors at different orders. SGC~\cite{SGC} improves GCN by reducing excess complexity via removing nonlinearities and collapsing weight matrices between consecutive layers. Graph Markov Neural Network (GMNN)~\cite{GMNN} models the joint distribution of labels with a conditional random field, and uses graph neural networks for classification learning. GraphSAGE~\cite{GraphSAGE} proposes a general inductive framework to embed the target node by sampling and aggregating features from its local neighborhood. FastGCN~\cite{Fastgcn} interprets graph convolutions as integral transforms of embedding function under probability measures, which are evaluated through Monte Carlo approximation.
\item \textbf{Graph Augmentation Methods:} DropNode~\cite{AS-GCN} randomly removes a certain number of nodes from the input graph at each training epoch, while DropEdge~\cite{Dropedge} randomly removes edges instead. Dropout~\cite{Dropout, Dropout2} perturbs the feature matrix by stochastically setting elements to be zeros. GRAND~\cite{GRAND} utilizes a random propagation strategy to generate new graph feature matrix for training by randomly setting a rate of nodes' feature vectors as zero vectors.
\end{itemize}
\begin{table}
\centering
\caption{Semi-supervised node classification accuracy (standard deviation) (\%) of our method and baselines on benchmark datasets Cora, Citeseer and Pubmed. Bold font marks the best performance in a column.
}
\setlength{\tabcolsep}{4mm}{
\begin{tabular}{llll}
\toprule
\textbf{Algorithm} & \textbf{Cora} & \textbf{Citeseer} & \textbf{Pubmed} \\
\midrule
SemiEbd & 68.0 & 45.3 & 63.0 \\
DeepWalk & 67.2 & 43.2 & 65.3 \\
\midrule
GCN & 81.5 & 70.3 & 79.0 \\
GAT & 83.0 (0.7) & 72.5 (0.7) & 79.0 (0.3) \\
MixHop & 81.9 (0.4) & 71.4 (0.8) & 80.8 (0.6) \\
SGC & 81.0 (0.0) & 71.9 (0.1) & 78.9 (0.0) \\
GMNN & 83.7 & 72.9 & 81.8 \\
GraphSAGE & 78.9 (0.8) & 67.4 (0.7) & 77.8 (0.6) \\
FastGCN & 81.4 (0.5) & 68.8 (0.9) & 77.6 (0.5) \\
\midrule
DropNode & 80.7 & 70.8 & 77.5 \\
DropEdge & 82.8 & 72.3 & 79.6 \\
Dropout & 81.9 & 70.0 & 78.9 \\
GRAND & 85.4 (0.4) & 75.4 (0.4) & 82.7 (0.6) \\
\midrule
\textbf{Our}& $\mathbf{85.6}$ (0.4) & $\mathbf{77.5}$ (0.3) & $\mathbf{86.6}$ (0.3) \\
\bottomrule
\end{tabular}}
\label{table-2}
\end{table}
\subsection{Parameter Settings}
We follow exactly the same experimental procedure (features and data splits) as the standard GCN settings for semi-supervised graph learning~\cite{GCN,robust-GCN,SGC}. We randomly activate nodes outside triangle motifs according to the Bernoulli distribution $Bernoulli(1-\delta)$ with $\delta = 0.5$. The mixture order of the aggregated adjacency matrix $\bar{\mathbf{A}}$ is set to $d=8$. In each training epoch, 4 perturbed feature matrices are generated as input, and the model is trained with stochastic gradient descent for 1000 epochs. The regularization hyperparameter $\lambda$ equals 1 and the hyperparameter $\kappa$ is set to 0.5.
\subsection{Comparison Results and Discussion}
Comparison results for the semi-supervised node classification task on Cora, Citeseer and Pubmed are reported in Table \ref{table-2}, where the scores of our method are averaged over 10 runs and the standard deviations are also reported. Figure \ref{Figure-5} shows how the loss curves on the training and validation sets vary with the training epoch, compared with DropNode, DropEdge, Dropout and GRAND.
As can be seen in Table \ref{table-2}, our proposed model outperforms all baselines on the three datasets. Compared with graph embedding methods, our method improves the classification accuracy upon SemiEbd by $17.6\%$, $32.2\%$, and $23.6\%$ on Cora, Citeseer and Pubmed, respectively, and upon DeepWalk by $18.4\%$, $34.3\%$ and $21.3\%$. Relative to graph convolution methods, our method gains at least $1.9\%$, $4.6\%$ and $4.8\%$ on Cora, Citeseer and Pubmed. As a new data augmentation approach, our method also performs best in this category, reaching $0.2\%$, $2.1\%$ and $3.9\%$ higher accuracy on Cora, Citeseer and Pubmed. This strongly demonstrates the effectiveness of the proposed model on the semi-supervised learning task.
Figure \ref{Figure-5} uses the training and validation loss curves to illustrate the training behavior of our model compared with DropNode, DropEdge, Dropout and GRAND on Cora, Citeseer and Pubmed. Both the training and validation curves of our model decrease smoothly and then level off at low values on all three datasets, whereas the other methods fluctuate at different levels. This superiority suggests that our model is more stable and robust during training. Another notable finding is that, on all three datasets, the validation loss curves of our model lie below the training curves, which suggests that our model achieves higher generalization performance.
\begin{figure*}[htbp]
\centering
\includegraphics[height=7.5cm,width=15cm]{Figure-6.png}
\caption{
The decreasing training and validation losses of our proposed model on Cora, Citeseer and Pubmed compared with DropNode, DropEdge, Dropout and GRAND. The red curves correspond to the training losses, while the green curves are the validation losses.
}
\label{Figure-5}
\end{figure*}
\section{Conclusion}
In summary, we first introduced a new graph entropy definition which acts as an index of the smoothness of graph feature information diffusion. Keeping motif-based information structures intact is the key to preserving this graph entropy. Adopting entropy preservation as a criterion, we proposed a novel graph data augmentation strategy to alleviate the over-smoothing problem in GCNs. Compared with other graph data augmentation methods, our strategy maintains randomness with only a small amount of graph entropy loss and without breaking the graph topological structure. Experimental results show that our model exhibits higher generalization ability and achieves good accuracy on semi-supervised node classification tasks compared with a range of baselines. Another advantage is that it is more stable during the whole training process, which enhances robustness.
There are many interesting directions for future work. Graph entropy defined by different pairwise distances warrants further study. In addition, using entropy tools to investigate control problems in graph dynamics (e.g., the pinning control problem~\cite{PinningControl}) is another topic we aim to focus on in the future.
\subsection*{Acknowledgements}
This work is supported by the Research and Development Program of China (No.2018AAA0101100), the Fundamental Research Funds for the Central Universities, the Beijing Natural Science Foundation (1192012, Z180005) and National Natural Science Foundation of China (No.62050132).
\bibliographystyle{unsrt}
\section{Introduction \label{sec:Introduction}}
In numerous signal processing applications such as equalization and
interference cancellation, long FIR filters have to be implemented
at high sampling rates. This results in high complexity, which grows
in proportion to the square of the number of nonzero taps. One approach
to reduce this complexity is to implement only the most significant
FIR filter taps, i.e., sparse filters. However, reliably determining
the locations of these dominant taps is often very challenging.
Several design approaches have been investigated in the literature
to reduce the complexity of long FIR filters. In \cite{strongestTap},
the number of nonzero coefficients is reduced by selecting only the
significant taps of the equalizer. Nonetheless, knowledge of the whole
equalizer tap vector is required which increases the computational
complexity. In \cite{linearProgAlnOppen10}, an $\ell_{1}$-norm minimization
problem is formulated to design a sparse filter. However, since the
resulting filter taps are not exactly sparse, a strict thresholding
step is required to force some of the nonzero taps to $0$. An algorithm,
called sparse chip equalizer, for finding the locations of sparse
equalizer taps is given in \cite{sparseChipEqu} but this approach
assumes that the channel itself is sparse. In \cite{sparseFilterDesign13},
a general optimization problem for designing a sparse filter is formulated
that involves a quadratic constraint on filter performance. Nonetheless,
the number of iterations of the proposed backward selection algorithm
becomes large as the desired sparsity of the filter increases. In
addition, the approach in \cite{sparseFilterDesign13} also involves
inversion of a large matrix in the case of long Channel Impulse Responses
(CIRs). In \cite{newDFW}, a framework for designing sparse FIR equalizers
is proposed. Using greedy algorithms, the proposed framework achieved
better performance than just choosing the largest taps of the MMSE
equalizer, as in \cite{strongestTap}. However, this approach involves
Cholesky factorization, whose computational cost could be large in
the case of channels with large delay spreads. In addition, no theoretical
guarantees are provided.
In this paper, we develop a general framework for the design of sparse
FIR equalizers that transforms the original problem into one of sparse
approximation of a vector using different dictionaries. The developed
framework can then be used to find the sparsifying dictionary that
leads to the sparsest FIR filter subject to an approximation constraint.
We also investigate the coherence of the sparsifying dictionaries
that we propose as part of our analysis and identify one that has
the smallest coherence. Then, we use simulations to validate that
the dictionary with the smallest coherence gives the sparsest FIR
linear equalizer. Moreover, the numerical results demonstrate the
significance of our approach compared to conventional sparse FIR equalizers
(e.g., \cite{strongestTap}) in terms of both performance and computational
complexity.
\textbf{\textit{\small{}Notations}}: We use the following standard
notation in this paper: $\mbox{\ensuremath{\boldsymbol{I}}}_{N}$
denotes the identity matrix of size $N$. Upper and lower case bold
letters denote matrices and vectors, respectively. The notations $(.)^{-1},\,(.)^{*}\mbox{ and }\,(.)^{H}$
denote the matrix inverse, the matrix (or element) complex conjugate
and the complex-conjugate transpose operations, respectively. $\mbox{E\ensuremath{\left[.\right]}}$
denotes the expected value operator. The components of a vector starting
from $k_{1}$ and ending at $k_{2}$ are given as subscripts to the
vector separated by a colon, i.e., $\boldsymbol{x}_{k_{1}:k_{2}}.$
\vspace{-1.0em}
\section{System Model\label{sub:Signal-Model}}
A linear, time invariant, dispersive and noisy communication channel
is considered. The standard complex-valued equivalent baseband signal
model is assumed. At time $k$, the received sample $y_{k}$ can be
expressed as
\vspace{-1.0em}
\begin{equation}
y_{k}=\sum_{l=0}^{v}h_{l}\,x_{k-l}\,+n_{k},\label{eq:y_k}
\end{equation}
where $h_{l}$ is the CIR whose memory is $v$, $n_{k}$ is the additive
noise symbol and $x_{k-l}$ is the transmitted symbol at time ($k-l$).
At any time $k$, an FIR filter of length $N_{f}$ is applied to the
received samples in order to recover the transmitted symbols with
some possible time delay. For simplicity, we assume a symbol-spaced
equalizer but our proposed design framework can be easily extended
to the general fractionally-spaced case. For these $N_{f}$-long received
samples of interest, the input-output relation in (\ref{eq:y_k})
can be written compactly as
\vspace{-1.2em}
\begin{equation}
\boldsymbol{y}_{k:k-N_{f}+1}=\boldsymbol{H}\,\boldsymbol{x}_{k:k-N_{f}-v+1}+\boldsymbol{n}_{k:k-N_{f}+1}\,,\label{eq:y_Hx_n}
\end{equation}
where $\boldsymbol{y}_{k:k-N_{f}+1},\,\boldsymbol{x}_{k:k-N_{f}-v+1}$
and $\boldsymbol{n}_{k:k-N_{f}+1}$ are column vectors grouping the
received, transmitted and noise samples. Additionally, $\boldsymbol{H}$
is an $N_{f}\times(N_{f}+v)$ Toeplitz matrix whose first row
is formed by $\{h_{l}\}_{l=0}^{l=v}$ followed by zero entries. It
is useful, as will be shown in the sequel, to define the output auto-correlation
and the input-output cross-correlation matrices based on the block
of length $N_{f}$. Using (\ref{eq:y_Hx_n}), the input correlation
and the noise correlation matrices are, respectively, defined by {\small{}$\boldsymbol{R}_{xx}\triangleq E\left[\boldsymbol{x}_{k:k-N_{f}-v+1}\boldsymbol{x}_{k:k-N_{f}-v+1}^{H}\right]\mbox{ and }\boldsymbol{R}_{nn}\triangleq E\left[\boldsymbol{n}_{k:k-N_{f}+1}\boldsymbol{n}_{k:k-N_{f}+1}^{H}\right]$}.
Both the input and noise processes are assumed to be white; hence,
their auto-correlation matrices are assumed to be (multiples of) the
identity matrix, i.e., $\boldsymbol{R}_{xx}=\boldsymbol{I}_{N_{f}+v}$
and $\boldsymbol{R}_{nn}=\frac{1}{SNR}\boldsymbol{I}_{N_{f}}$. Moreover,
the output-input cross-correlation and the output auto-correlation
matrices are, respectively, defined as
\vspace{-1.0em}
{\small{}
\begin{eqnarray}
\boldsymbol{R}_{yx} & \!\!\triangleq & \!\!E\left[\boldsymbol{y}_{k:k-N_{f}+1}\boldsymbol{x}_{k:k-N_{f}-v+1}^{H}\right]=\boldsymbol{H}\boldsymbol{R}_{xx}\,,\,\mbox{and}\\
\boldsymbol{R}_{yy} & \!\!\triangleq & \!\!E\left[\boldsymbol{y}_{k:k-N_{f}+1}\boldsymbol{y}_{k:k-N_{f}+1}^{H}\right]\!\!=\boldsymbol{H}\boldsymbol{R}_{xx}\boldsymbol{H}^{H}+\boldsymbol{R}_{nn}.\label{eq:R_yy_def}
\end{eqnarray}
}\vspace{-2em}
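The quantities above can be assembled numerically as in the following sketch (illustrative, with an arbitrary example CIR and unit-power white input):

```python
import numpy as np

def correlation_matrices(h, Nf, snr):
    """Build the Toeplitz channel matrix H, R_yx and R_yy for white
    unit-power input and white noise of variance 1/SNR (a sketch)."""
    v = len(h) - 1
    H = np.zeros((Nf, Nf + v), dtype=complex)
    for row in range(Nf):
        H[row, row:row + v + 1] = h          # CIR shifted along each row
    R_xx = np.eye(Nf + v)
    R_nn = np.eye(Nf) / snr
    R_yx = H @ R_xx
    R_yy = H @ R_xx @ H.conj().T + R_nn
    return H, R_yx, R_yy

h = np.array([1.0, 0.5, 0.2])                # example CIR with memory v = 2
H, R_yx, R_yy = correlation_matrices(h, Nf=5, snr=10.0)
```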
\section{Sparse FIR Linear Equalizers Design\label{sec:Sparse-FIR-Equalization}}
\subsection{Initial formulation}
The received samples are passed through an FIR filter with length
$N_{f}$. Hence, the error symbol at time $k$ is given by
\vspace{-1.2em}
\begin{equation}
e_{k}=x_{k-\Delta}-\hat{x}_{k}=x_{k-\Delta}-\boldsymbol{w}^{H}y_{k:k-N_{f}+1}\,,
\end{equation}
where $\Delta$ is the decision delay, typically {\small{}$0\leq\Delta\leq N_{f}+v-1$},
and $\boldsymbol{w}$ denotes the equalizer taps vector whose dimension
is $N_{f}\times1$. Using the orthogonality principle of linear least-squares
estimation, the MSE, denoted as $\xi\left(\boldsymbol{w}\right)$,
equals \cite{newDFW}
\vspace{-1em}
\begin{eqnarray*}
\xi\left(\boldsymbol{w}\right) & \ensuremath{\triangleq} & E\left[\left|e_{k}\right|^{2}\right]=\varepsilon_{x}-\boldsymbol{w}^{H}\boldsymbol{R}_{yx}-\boldsymbol{R}_{yx}^{H}\boldsymbol{w}+\boldsymbol{w}^{H}\boldsymbol{R}_{yy}\boldsymbol{w}\,,
\end{eqnarray*}
where $\varepsilon_{x}\triangleq E\left[\left|x_{k-\Delta}\right|^{2}\right]$.
By writing $x_{k-\Delta}=\boldsymbol{1}_{\Delta}^{H}\boldsymbol{x}_{k:k-N_{f}-v+1}$
and $\boldsymbol{r}_{\Delta}=\boldsymbol{R}_{yx}\boldsymbol{1}_{\Delta}$, where
$\boldsymbol{1}_{\Delta}$ denotes the $\left(N_{f}+v\right)$-dimensional
vector that is zero everywhere except in the $(\Delta+1)$-th element,
where it is one, it follows that
\vspace{-1.2em}
{\small{}
\begin{eqnarray}
\!\!\!\!\!\!\!\!\xi\left(\boldsymbol{w}\right)\!\!\!\!\!\! & = & \!\!\!\!\!\!\underbrace{\varepsilon_{x}-\boldsymbol{r}_{\Delta}^{H}\boldsymbol{R}_{yy}^{-1}\boldsymbol{r}_{\Delta}}_{\xi_{m}}+\underbrace{(\boldsymbol{w}-\boldsymbol{R}_{yy}^{-1}\boldsymbol{r}_{\Delta})^{H}\boldsymbol{R}_{yy}(\boldsymbol{w}-\boldsymbol{R}_{yy}^{-1}\boldsymbol{r}_{\Delta})}_{\xi_{e}(\boldsymbol{w})}.\label{eq:MSE}
\end{eqnarray}
}Since $\xi_{m}$ does not depend on $\boldsymbol{w}$, the MSE $\xi\left(\boldsymbol{w}\right)$
is minimized by minimizing the term $\xi_{e}(\boldsymbol{w})$. Hence,
the optimum selection for $\boldsymbol{w}$, in the MMSE sense, is
the well-known Wiener solution $\boldsymbol{w}_{opt}=\boldsymbol{R}_{yy}^{-1}\boldsymbol{r}_{\Delta}$.
However, in general, this optimum choice is undesirable since $\boldsymbol{w}_{opt}$
is not sparse and its implementation complexity increases in proportion
to $N_{f}^{2}$, which can be computationally expensive \cite{DCProakis}.
On the other hand, any choice for $\boldsymbol{w}$ other than $\boldsymbol{w}_{opt}$
increases $\xi_{e}(\boldsymbol{w})$, which leads to performance loss.
This suggests that we can use the excess error $\xi_{e}(\boldsymbol{w})$
as a design constraint to achieve a desirable performance-complexity
tradeoff. Specifically, we formulate the following problem for the
design of sparse FIR equalizers:
\vspace{-1.8em}
\begin{eqnarray}
\widehat{\boldsymbol{w}}_{s} & \triangleq & \underset{\boldsymbol{w}\in\mathbb{C}^{N_{f}}}{\mbox{arg}\mbox{min}}\,\,\left\Vert \boldsymbol{w}\right\Vert _{0}\,\,\,\,\mbox{subject to}\,\,\,\,\,\xi_{e}(\boldsymbol{w})\leq\delta_{eq}\,,\label{eq:opt_prob1}
\end{eqnarray}
where $\left\Vert \boldsymbol{w}\right\Vert _{0}$ is the number of
nonzero elements in its argument, $\left\Vert .\right\Vert _{2}$
denotes the $\ell_{2}$-norm and $\delta_{eq}$ can be chosen as a
function of the noise variance. While one can attempt to use convex-optimization-based
approaches (after replacing $\left\Vert .\right\Vert _{0}$ with its
convex approximation $\left\Vert .\right\Vert _{1}$ in (\ref{eq:opt_prob1})
to reduce the search space and to make it more tractable \cite{justRelax06})
in order to estimate the sparse approximation vector $\widehat{\boldsymbol{w}}_{s}$,
there exists a number of greedy algorithms with low complexity that
can be used in an efficient manner. Starting with this initial formulation,
we now discuss a general framework for sparse FIR LEs design such
that the performance loss does not exceed a predefined limit.
\vspace{-1.2em}
\subsection{Proposed sparse approximation framework}
Unlike earlier works, including the one by one of the co-authors \cite{newDFW},
we provide a general framework for designing sparse FIR linear equalizers
that can be considered as the problem of sparse approximation using
different dictionaries. Mathematically, this framework poses the problem
of sparse FIR equalizers design as follows:
\vspace{-1.2em}
\begin{equation}
\!\widehat{\boldsymbol{w}}_{s}\triangleq\underset{\boldsymbol{w}\in\mathbb{C}^{N_{f}}}{\mbox{\mbox{arg}\mbox{min}}}\,\left\Vert \boldsymbol{w}\right\Vert _{0}\,\,\,\mbox{subject to}\,\,\,\left\Vert \boldsymbol{A}\left(\boldsymbol{\varPhi}\boldsymbol{w}-\boldsymbol{b}\right)\right\Vert _{2}^{2}\leq\delta_{eq}\,,\label{eq:propFW}
\end{equation}
where $\boldsymbol{\varPhi}$ is the dictionary that will be used
to sparsely approximate $\boldsymbol{b}$, while $\boldsymbol{A}$
is a known matrix and $\boldsymbol{b}$ is a known data vector, both
of which change depending upon the sparsifying dictionary $\boldsymbol{\varPhi}$.
Note that by completing the square in (\ref{eq:opt_prob1}), the problem
reduces to the one shown in (\ref{eq:propFW}). Hence, one can use
any decomposition of $\boldsymbol{R}_{yy}$ to arrive at a sparse
approximation problem. By writing the Cholesky or eigenvalue decompositions
of $\boldsymbol{R}_{yy}$, we obtain different choices for $\boldsymbol{A}$,
$\boldsymbol{\varPhi}$ and $\boldsymbol{b}$. Some of these possible
choices are shown in Table \ref{tab:Examples-of-different}.
\begin{table}
\vspace{-2em}
{\footnotesize{}\protect\caption{{\footnotesize{}Examples of different sparsifying dictionaries.}\label{tab:Examples-of-different}}
}{\footnotesize \par}
\vspace{-1.0em}
{\footnotesize{}
\begin{tabular}[b]{|l|l|l|l|l|l|}
\hline
\multicolumn{3}{|c|}{{\scriptsize{}Cholesky Factorization }} & \multicolumn{3}{c|}{{\scriptsize{}Eigen Decomposition}}\tabularnewline
\hline
\multicolumn{3}{|c|}{{\scriptsize{}$\boldsymbol{R}_{yy}=\boldsymbol{L}\boldsymbol{L}^{H}$
or $\boldsymbol{R}_{yy}=\boldsymbol{P}\boldsymbol{\Lambda}\boldsymbol{P}^{H}$}} & \multicolumn{3}{c|}{{\scriptsize{}$\boldsymbol{R}_{yy}=\boldsymbol{U}\boldsymbol{D}\boldsymbol{U}^{H}$}}\tabularnewline
\hline
{\scriptsize{}$\boldsymbol{A}$} & {\scriptsize{}$\boldsymbol{\varPhi}$} & {\scriptsize{}$\boldsymbol{b}$} & {\scriptsize{}$\boldsymbol{A}$} & {\scriptsize{}$\boldsymbol{\varPhi}$} & {\scriptsize{}$\boldsymbol{b}$}\tabularnewline
\hline
\hline
{\scriptsize{}$\boldsymbol{I}$} & {\scriptsize{}$\boldsymbol{L}^{H}$} & {\scriptsize{}$\boldsymbol{L}^{-1}\boldsymbol{r}_{\Delta}$} & {\scriptsize{}$\boldsymbol{I}$} & {\scriptsize{}$\boldsymbol{D}^{\frac{1}{2}}\boldsymbol{U}^{H}$} & {\scriptsize{}$\boldsymbol{D}^{-\frac{1}{2}}\boldsymbol{U}^{H}\boldsymbol{r}_{\Delta}$}\tabularnewline
\hline
{\scriptsize{}$\boldsymbol{L}^{-1}$} & {\scriptsize{}$\boldsymbol{R}_{yy}$} & {\scriptsize{}$\boldsymbol{r}_{\Delta}$} & {\scriptsize{}$\boldsymbol{D}^{-\frac{1}{2}}\boldsymbol{U}^{H}$} & {\scriptsize{}$\boldsymbol{R}_{yy}$} & {\scriptsize{}$\boldsymbol{r}_{\Delta}$}\tabularnewline
\hline
{\scriptsize{}$\boldsymbol{I}$} & {\scriptsize{}$\boldsymbol{\Lambda}^{\frac{1}{2}}\boldsymbol{\boldsymbol{P}}^{H}$} & {\scriptsize{}$\boldsymbol{\Lambda}^{-\frac{1}{2}}\boldsymbol{P}^{-1}\boldsymbol{r}_{\Delta}$} & {\scriptsize{}$\boldsymbol{D}^{\frac{1}{2}}$} & {\scriptsize{}$\boldsymbol{U}^{H}$} & {\scriptsize{}$\boldsymbol{D}^{-1}\boldsymbol{U}^{H}\boldsymbol{r}_{\Delta}$}\tabularnewline
\hline
\end{tabular}{\footnotesize \par}
\vspace{-2.5em}
\end{table}
Note that the framework parameters (i.e., $\boldsymbol{A}$, $\boldsymbol{\varPhi}$
and $\boldsymbol{b}$) in the left columns of Table \ref{tab:Examples-of-different}
result by defining the Cholesky factorization \cite{matAnalysis}
either in the form $\boldsymbol{R}_{yy}\triangleq\boldsymbol{L}\boldsymbol{L}^{H}$
or $\boldsymbol{R}_{yy}\triangleq\boldsymbol{P}\boldsymbol{\Lambda}\boldsymbol{P}^{H}$
(where $\boldsymbol{L}$ is a lower-triangular matrix, $\boldsymbol{P}$
is a lower-unit-triangular (unitriangular) matrix and $\boldsymbol{\Lambda}$
is a diagonal matrix). On the other hand, the columns on the right
result by letting $\boldsymbol{R}_{yy}\triangleq\boldsymbol{U}\boldsymbol{D}\boldsymbol{U}^{H}$,
where $\boldsymbol{U}$ is a unitary matrix whose columns are the
eigenvectors of the matrix $\boldsymbol{R}_{yy}$ and $\boldsymbol{D}$
is a diagonal matrix with the corresponding eigenvalues on the diagonal.
For instance, by assuming $\boldsymbol{L}^{H}$, $\boldsymbol{D}^{\frac{1}{2}}\boldsymbol{U}^{H}$
and $\boldsymbol{R}_{yy}$ as sparsifying dictionaries, the problem
in (\ref{eq:propFW}) can, respectively, take one of the forms shown
below
\vspace{-1.5em}
{\small{}
\begin{eqnarray}
& \!\!\!\!\underset{\boldsymbol{w}\in\mathbb{C}^{N_{f}}}{\mbox{min}}\left\Vert \boldsymbol{w}\right\Vert _{0}\mbox{\,\,\,\,\mbox{s.t }\,\,\,\,\ensuremath{\left\Vert \left(\boldsymbol{L}^{H}\boldsymbol{w}-\boldsymbol{L}^{-1}\boldsymbol{r}_{\Delta}\right)\right\Vert _{2}^{2}\leq\delta_{eq}\,},}\\
& \!\!\!\!\!\!\!\!\!\!\!\,\underset{\boldsymbol{w}\in\mathbb{C}^{N_{f}}}{\mbox{min}}\left\Vert \boldsymbol{w}\right\Vert _{0}\mbox{\,\,\mbox{s.t }\,\,\ensuremath{\left\Vert \left(\boldsymbol{D}^{\frac{1}{2}}\boldsymbol{U}^{H}\boldsymbol{w}-\boldsymbol{D}^{-\frac{1}{2}}\boldsymbol{U}^{H}\boldsymbol{r}_{\Delta}\right)\right\Vert _{2}^{2}\leq\delta_{eq},\,\mbox{and}}}\\
& \!\!\!\!\!\!\underset{\boldsymbol{w}\in\mathbb{C}^{N_{f}}}{\mbox{min}}\left\Vert \boldsymbol{w}\right\Vert _{0}\mbox{\,\,\,\,\mbox{s.t }\,\,\,\,\ensuremath{\left\Vert \boldsymbol{L}^{-1}\left(\boldsymbol{R}_{yy}\boldsymbol{w}-\boldsymbol{r}_{\Delta}\right)\right\Vert _{2}^{2}}\ensuremath{\leq\delta_{eq}}\,}.
\end{eqnarray}
}{\small \par}
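To make the role of the different decompositions concrete, the snippet below (an illustrative sketch, not part of the paper; all variable names are ours) numerically verifies that the Cholesky-based, eigen-based, and $\boldsymbol{A}=\boldsymbol{L}^{-1}$ formulations above yield the same constraint value for any $\boldsymbol{w}$:

```python
import numpy as np

rng = np.random.default_rng(0)
Nf = 8
# Synthetic Hermitian positive-definite R_yy and cross-correlation vector r_Delta
M = rng.standard_normal((Nf, Nf)) + 1j * rng.standard_normal((Nf, Nf))
R_yy = M @ M.conj().T + Nf * np.eye(Nf)
r_Delta = rng.standard_normal(Nf) + 1j * rng.standard_normal(Nf)
w = rng.standard_normal(Nf) + 1j * rng.standard_normal(Nf)

# Cholesky: R_yy = L L^H  ->  A = I, Phi = L^H, b = L^{-1} r_Delta
L = np.linalg.cholesky(R_yy)
res_chol = np.linalg.norm(L.conj().T @ w - np.linalg.solve(L, r_Delta)) ** 2

# Eigen: R_yy = U D U^H  ->  A = I, Phi = D^{1/2} U^H, b = D^{-1/2} U^H r_Delta
d, U = np.linalg.eigh(R_yy)
res_eig = np.linalg.norm(np.sqrt(d) * (U.conj().T @ w)
                         - (U.conj().T @ r_Delta) / np.sqrt(d)) ** 2

# Third row of the table: A = L^{-1}, Phi = R_yy, b = r_Delta
res_third = np.linalg.norm(np.linalg.solve(L, R_yy @ w - r_Delta)) ** 2

print(np.allclose(res_chol, res_eig), np.allclose(res_chol, res_third))  # -> True True
```

All three expressions expand to $\boldsymbol{w}^{H}\boldsymbol{R}_{yy}\boldsymbol{w}-2\,\mbox{Re}(\boldsymbol{w}^{H}\boldsymbol{r}_{\Delta})+\boldsymbol{r}_{\Delta}^{H}\boldsymbol{R}_{yy}^{-1}\boldsymbol{r}_{\Delta}$, which is why any of the dictionaries can be used in (\ref{eq:propFW}).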
Note that we can reduce the decomposition complexity by approximating,
for reasonably large $N_{f}$, the Toeplitz matrix $\boldsymbol{R}_{yy}$
by a circulant matrix whose eigenvectors are the Discrete Fourier
Transform (DFT) vectors and whose eigenvalues are given by the DFT
of its first column \cite{toep2circApp2003}. For a Toeplitz matrix,
the most efficient algorithms for Cholesky factorization are the Levinson
and Schur algorithms \cite{statDSP}, which involve $\mathcal{O}(N_{f}^{2})$
computations. In contrast, the eigen-decomposition of a circulant
matrix can be done efficiently using the fast Fourier transform (FFT)
and its inverse with only $\mathcal{O}\left(N_{f}\log N_{f}\right)$
operations.
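The circulant shortcut can be sketched as follows (our illustration, with an arbitrary symmetric first column): the eigenvalue spectrum of a circulant matrix is simply the FFT of its first column.

```python
import numpy as np

n = 6
c = np.array([4.0, 1.0, 0.5, 0.0, 0.5, 1.0])   # symmetric first column -> real spectrum
# Circulant matrix: column k is the first column cyclically shifted down by k
C = np.column_stack([np.roll(c, k) for k in range(n)])

# Eigenvalues of a circulant matrix are the DFT of its first column,
# computable in O(n log n) with the FFT (eigenvectors are the DFT vectors).
eig_fft = np.fft.fft(c).real
eig_dense = np.linalg.eigvalsh(C)              # ascending order

print(np.allclose(np.sort(eig_fft), eig_dense))  # -> True
```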
The preceding discussion shows that the problem of designing sparse
FIR equalizers can be cast into one of sparse approximation of a vector
by a fixed dictionary. The general form of this problem is given by
(\ref{eq:propFW}). To solve this problem, we use the well-known Orthogonal
Matching Pursuit (OMP) greedy algorithm \cite{omp07} that estimates
$\widehat{\boldsymbol{w}}_{s}$ by iteratively selecting a set $S$
of the sparsifying dictionary columns (i.e., atoms $\boldsymbol{\phi}_{i}'s$)
of $\boldsymbol{\varPhi}$ that are most correlated with the data
vector $\boldsymbol{b}$ and then solving a restricted least-squares
problem using the selected atoms. The OMP stopping criterion ($\rho$)
is changed here from an upper-bound on the residual error to an upper-bound
on the Projected Residual Error (PRE), i.e., ``$\boldsymbol{A}\times\mbox{Residual Error}$''.
The computations involved in the OMP algorithm are well documented
in the sparse approximation literature (e.g., \cite{omp07}) and are
omitted here due to page limitations.
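For reference, a minimal sketch of OMP is given below (illustrative real-valued Python, using a simple residual-norm stopping rule rather than the exact PRE criterion described above):

```python
import numpy as np

def omp(Phi, b, tol=1e-8, max_atoms=None):
    """Orthogonal Matching Pursuit (real-valued sketch): greedily pick the
    atom most correlated with the residual, then re-fit a restricted
    least-squares problem on the selected support."""
    n = Phi.shape[1]
    max_atoms = n if max_atoms is None else max_atoms
    support, coef = [], np.zeros(0)
    residual = b.astype(float).copy()
    while len(support) < max_atoms and np.linalg.norm(residual) > tol:
        corr = np.abs(Phi.T @ residual) / np.linalg.norm(Phi, axis=0)
        corr[support] = 0.0                       # never reselect an atom
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(Phi[:, support], b, rcond=None)
        residual = b - Phi[:, support] @ coef
    w = np.zeros(n)
    w[support] = coef
    return w

# Quick check: recover a 2-sparse vector from noiseless measurements.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((20, 12))
w_true = np.zeros(12)
w_true[[3, 7]] = [1.5, -2.0]
w_hat = omp(Phi, Phi @ w_true)
print(np.flatnonzero(np.abs(w_hat) > 1e-6))   # support found (typically [3 7])
```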
Unlike conventional compressive sensing \cite{CS}, where the measurement
matrix is a fat matrix, the sparsifying dictionary in our framework
is a square one with full rank. However, OMP and similar methods can
still be used if $\boldsymbol{R}_{yy}$ can be decomposed into $\boldsymbol{Q}\boldsymbol{Q}^{H}$
and the data vector $\boldsymbol{b}$ is compressible \cite{sparsefeng2012,sparseFilterDesign13}.
Among the proposed dictionaries shown in Table \ref{tab:Examples-of-different},
only $\boldsymbol{U}^{H}$ is not a valid choice of $\boldsymbol{\varPhi}$
since the data vector $\boldsymbol{b}$ associated with it cannot
be compressed into a lower-dimensional space without significant information
loss and, in addition, its PRE is large. Notice that it is better
to keep the PRE as small as possible to limit the amount of noise
in the data.
Our next challenge is to determine the best sparsifying dictionary
for use in our framework. We know from the sparse approximation literature
that the sparsity of the OMP solution tends to be inversely proportional
to the worst-case coherence $\mu\left(\boldsymbol{\varPhi}\right)$,
{\small{}$\mu\left(\boldsymbol{\varPhi}\right)\triangleq\underset{i\neq j}{\mbox{max}}\frac{\left|\left\langle \phi_{i},\,\phi_{j}\right\rangle \right|\,}{\left\Vert \phi_{i}\right\Vert _{2}\left\Vert \phi_{j}\right\Vert _{2}}$}
\cite{finiteSparseFilter013,greedIsGood03}. Notice that $\mu\left(\boldsymbol{\varPhi}\right)\in\left[0,1\right]$.
Next, we investigate the coherence of the dictionaries proposed in
Table \ref{tab:Examples-of-different}.
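The worst-case coherence defined above can be computed directly from the Gram matrix of the normalized atoms; the helper below is an illustrative sketch (not from the paper):

```python
import numpy as np

def worst_case_coherence(Phi):
    """mu(Phi) = max_{i != j} |<phi_i, phi_j>| / (||phi_i|| ||phi_j||)."""
    G = np.abs(Phi.conj().T @ Phi)        # pairwise inner products
    norms = np.linalg.norm(Phi, axis=0)
    G = G / np.outer(norms, norms)        # normalize the atoms
    np.fill_diagonal(G, 0.0)              # exclude the i == j terms
    return G.max()

assert worst_case_coherence(np.eye(4)) == 0.0                  # orthonormal atoms: mu = 0
assert np.isclose(worst_case_coherence(np.ones((4, 2))), 1.0)  # identical atoms: mu = 1
```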
\vspace{-1.5em}
\subsection{Worst-Case Coherence Analysis\label{sub:Preliminary-Analysis}}
We carry out a coherence metric analysis to gain some insight into
the performance of the proposed sparsifying dictionaries and the behavior
of the resulting sparse equalizers. First and foremost, we are concerned
with analyzing $\mu\left(\boldsymbol{\varPhi}\right)$ to ensure that
it does not approach $1$ for the proposed sparsifying dictionaries.
In addition, we are interested in identifying which $\boldsymbol{\varPhi}$
has the smallest coherence and, hence, gives the sparsest FIR equalizer.
We proceed as follows. We estimate an upper bound on the worst-case
coherence of $\boldsymbol{R}_{yy}$ and evaluate its closeness to
$1$. Then, through simulation we show that the coherence of other
dictionaries, which can be considered as the square roots of $\boldsymbol{R}_{yy}$
in the spectral-norm sense, i.e., $\left\Vert \boldsymbol{R}_{yy}\right\Vert _{2}=\left\Vert \boldsymbol{L}\boldsymbol{L}^{H}\right\Vert _{2}\leq\left\Vert \boldsymbol{\boldsymbol{L}\vphantom{\boldsymbol{L}^{H}}}\right\Vert _{2}^{2}$,
$\!\left\Vert \boldsymbol{R}_{yy}\right\Vert _{2}\!\leq\!\left\Vert \boldsymbol{\Lambda}^{\frac{1}{2}}\boldsymbol{\boldsymbol{P}}^{H}\right\Vert _{2}^{2}$
and $\left\Vert \boldsymbol{R}_{yy}\right\Vert _{2}\leq\left\Vert \boldsymbol{\boldsymbol{D}^{\frac{1}{2}}\boldsymbol{U}^{H}}\right\Vert _{2}^{2}$,
will be less than that of $\mu(\boldsymbol{R}_{yy})$. Interestingly,
$\boldsymbol{R}_{yy}$ has a well-structured (Hermitian Toeplitz)
closed form in terms of the CIR coefficients, filter time span $N_{f}$
and SNR, i.e., $\boldsymbol{R}_{yy}=\boldsymbol{H}\boldsymbol{H}^{H}+\mbox{\ensuremath{\frac{\mbox{1}}{SNR}}}\boldsymbol{I}$.
It can be expressed in a matrix form as
\vspace{-2.5em}
{\small{}
\begin{equation}
\boldsymbol{R}_{yy}=\mbox{Toeplitz}\overbrace{\left(\left[\begin{array}{ccccccc}
r_{0} & r_{1} & \ldots & r_{v} & 0 & \ldots & 0\end{array}\right]\right)}^{\boldsymbol{\phi}_{1}^{H}}\,,\label{eq:R_yy_matrix_from}
\end{equation}
}where {\small{}$r_{0}={\displaystyle \sum_{i=0}^{v}\left|h_{i}\right|^{2}+\left(\mbox{SNR}\right)^{-1}}$,
$r_{j}=\sum_{i=j}^{v}h_{i}h_{i-j}^{*},\,\forall j\neq0$}. Assuming
high SNR, we can compute $\mu(\boldsymbol{R}_{yy})$ in terms of the
channel taps only. By noting that the columns of $\boldsymbol{R}_{yy}$
are fully defined by the first column, we can get the maximum possible
absolute inner product $\mu(\boldsymbol{R}_{yy})$ by simultaneously
maximizing the entries of $\boldsymbol{\phi}_{1},$ which results
in maximizing all column entries accordingly. While we can pose the
problem of computing $\mu(\boldsymbol{R}_{yy})$ in terms of maximizing
the sum of the inner product $\left\langle \boldsymbol{\phi}_{i},\,\boldsymbol{\phi}_{j}\right\rangle ,\,\forall i\neq j$,
it turns out that it is equivalent to maximizing $r_{1}$ due to the
special structure of $\boldsymbol{R}_{yy}$. Hence, an upper bound
on $\mu(\boldsymbol{R}_{yy})$ in the high SNR setting can be derived
by solving the following optimization problem
\vspace{-2em}
\begin{equation}
\mbox{max}\,\,\,\sum_{i=1}^{v}\left|h_{i}h_{i-1}^{*}\right|\,\,\,\,\,\,\,\mbox{s.t}.\,\,\,\,\,\,\,\sum_{i=0}^{v}\left|h_{i}\right|^{2}=1.\label{eq:optWorstTaps}
\end{equation}
The solution of (\ref{eq:optWorstTaps}) gives the worst CIR vector
$\boldsymbol{h}$ which is then used to estimate an upper-bound on
$\mu(\boldsymbol{R}_{yy})$ for any given channel length $v$. This
solution has a symmetric structure and can be obtained by solving
a simpler equivalent problem, formulated as
\vspace{-1.0em}
\begin{equation}
\mbox{max}\,\,\,\left|\boldsymbol{h}^{H}\boldsymbol{R}\boldsymbol{h}\right|\,\,\,\,\,\,\,\mbox{s.t}.\,\,\,\,\,\,\,\boldsymbol{h}^{H}\boldsymbol{h}=1\,,\label{eq:quadEqProb}
\end{equation}
where $\boldsymbol{h}=\left[\begin{array}{cccc}
h_{0} & h_{1} & \ldots & h_{v}\end{array}\right]^{H}$ is the length-$(v+1)$ CIR vector and $\boldsymbol{R}$ is a matrix
that has ones along the super and sub-diagonals. The solution of (\ref{eq:quadEqProb})
is the eigenvector corresponding to the maximum (or minimum, since
$\mu(\boldsymbol{R}_{yy})$ is defined in terms of absolute value)
eigenvalue of $\boldsymbol{R}$. Interestingly, the eigenvalues $\lambda_{s}$
and eigenvectors $h_{j}^{(s)}$ of the matrix $\boldsymbol{R}$ have
the following simple closed forms \cite{eigValueVector_R}
\vspace{-1.5em}{\small{}
\begin{eqnarray}
\lambda_{s} & = & 2\,\mbox{cos}(\frac{\pi s}{v+2})\,\,\,\,,\,\,\,\,h_{j}^{(s)}=\sqrt{\frac{2}{v+2}}\mbox{sin\ensuremath{(\frac{j\pi s}{v+2})\,,\,}}\label{eq:worst-taps}
\end{eqnarray}
}where $s,j=1,\ldots,v+1.$ Finally, by numerically evaluating $h_{j}^{(s)}$
for the maximum $\lambda_{s}$, we find that the worst-case coherence
of $\boldsymbol{R}_{yy}$ (for any $v$) is sufficiently less than
1, which points to the likely success of OMP in providing the sparsest
solution $\widehat{\boldsymbol{w}}_{s}$, corresponding to
the dictionary that has the smallest $\mu(\boldsymbol{R}_{yy})$.
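As a sanity check (our illustration, not part of the paper), the closed-form eigenvalues in (\ref{eq:worst-taps}) can be compared against a dense eigen-decomposition of $\boldsymbol{R}$:

```python
import numpy as np

v = 5                                    # channel memory; R is (v+1) x (v+1)
n = v + 1
R = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # ones on super/sub diagonals

# Closed form: lambda_s = 2 cos(pi s / (v + 2)), s = 1, ..., v + 1
s = np.arange(1, n + 1)
lam_closed = np.sort(2 * np.cos(np.pi * s / (v + 2)))

lam_dense = np.linalg.eigvalsh(R)        # ascending order
print(np.allclose(lam_closed, lam_dense))  # -> True
```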
Next, we will report the results of our numerical experiments to evaluate
the performance of our proposed framework under different sparsifying
dictionaries.
\vspace{-1.0em}
\section{Simulation Results \label{sec:Simulation-Results}}
Throughout the simulations, the CIRs used are unit-energy symbol-spaced
FIR filters with $v$ nonzero taps, generated as zero-mean uncorrelated
complex Gaussian random variables. We assume $v=5$ and $N_{f}=35$
\cite{jcioffi}. To quantify the performance of the sparsifying dictionaries
involved in our analysis in terms of coherence, we plot the worst-case
coherence versus the input SNR in Figure \ref{fig:mu_versus_snr}.
Note that a smaller value of $\mu\left(\boldsymbol{\varPhi}\right)$
indicates that a reliable sparse approximation is more likely. Clearly,
$\boldsymbol{R}_{yy}$ has higher $\mu\left(\boldsymbol{\varPhi}\right)$
which reflects higher similarities between its columns compared to
$\boldsymbol{D}^{\frac{1}{2}}\boldsymbol{U}^{H}$ and $\boldsymbol{L}^{H}$
(both have the same $\mu\left(\boldsymbol{\varPhi}\right)$). The
coherence increases with SNR up to a certain limit and then saturates.
This can be explained by the fact that, at high SNR, the noise effects
are negligible and, therefore, the sparsifying dictionaries (e.g.,
{\small{}$\boldsymbol{R}_{yy}\approx\boldsymbol{H}\boldsymbol{H}^{H}$})
do not depend on the SNR. Hence, the coherence converges to a constant.
In contrast, at low SNR, the noise effects dominate the channel effects.
Hence, the channel can be approximated as a memoryless (i.e., 1 tap)
channel. Then, the dictionaries (e.g., {\small{}$\boldsymbol{R}_{yy}\approx\frac{1}{SNR}\boldsymbol{I}$})
can be approximated as a multiple of the identity matrix, i.e., $\mu\left(\boldsymbol{\varPhi}\right)\rightarrow0$.
Figure \ref{fig:upperBound} shows theoretical bounds, estimated through
(\ref{eq:worst-taps}), and empirical upper bounds on the worst-case
coherence $\mu\left(\boldsymbol{R}_{yy}\right)$. This figure shows
that the maximum coherence is sufficiently less than 1 and the mismatch
between the theoretical and simulation results is negligible (only
0.67\%).
\begin{figure}
\vspace{-2em}
\includegraphics[scale=0.28]{globalSIP015_figures/coh_versus_SNR2}
\vspace{-1em}\protect\caption{{\footnotesize{}Worst-case coherence for the sparsifying dictionaries
versus input SNR. Each point represents the mean of 5000 channel realizations.}\label{fig:mu_versus_snr}}
\vspace{-1em}
\end{figure}
\begin{figure}
\vspace{-0.45em}
\includegraphics[scale=0.28]{globalSIP015_figures/upperBoundworstCaseCoherence}
\vspace{-1em}
\protect\caption{{\footnotesize{}Upper-bounds on $\boldsymbol{R}_{yy}$ worst-case
coherence versus channel length under unit-energy channel constraint.
}\label{fig:upperBound}}
\vspace{-2em}
\end{figure}
We further compare the sparse FIR equalizer designs based on the dictionaries
$\boldsymbol{D}^{\frac{1}{2}}\boldsymbol{U}^{H}$, $\boldsymbol{L}^{H}$
and $\boldsymbol{R}_{yy}$, denoted as $\boldsymbol{w}_{s}(\boldsymbol{D}^{\frac{1}{2}}\boldsymbol{U}^{H})$,
$\boldsymbol{w}_{s}(\boldsymbol{L}^{H})$ and $\boldsymbol{w}_{s}(\boldsymbol{R}_{yy})$,
respectively, to study the effect of $\mu$ on their performance.
The case of $\boldsymbol{\varPhi}=\boldsymbol{\Lambda}^{\frac{1}{2}}\boldsymbol{\boldsymbol{P}}^{H}$
is not presented here since its performance is almost equivalent to
{\small{}$\boldsymbol{L}^{H}$}. The OMP algorithm is used to compute
the sparse approximations. The OMP stopping criterion is set to be
a function of the PRE such that: {\small{}Performance Loss ($\eta$)$=10\,\mbox{Log}_{10}\left(\frac{SNR(\boldsymbol{w}_{s})}{SNR(\boldsymbol{w}_{opt})}\right)\leq10\,\mbox{Log}_{10}\left(1+\frac{\delta_{eq}}{\xi_{m}}\right)\triangleq\eta_{max}$}.
Here, $\delta_{eq}$ is computed based on an acceptable $\eta_{max}$
and, then, the coefficients of $\widehat{\boldsymbol{w}}_{s}$ are
computed through (\ref{eq:propFW}). The percentage of the active
taps is calculated as the ratio between the number of nonzero taps
to the total number of filter taps, i.e., $N_{f}$. For the MMSE equalizer,
where none of the coefficients is zero, the number of active filter
taps is equal to the filter span. The decision delay is set to be
{\small{}$\Delta\approx\frac{N_{f}+v}{2}$}\cite{jcioffi}.
\begin{figure}[t]
\vspace{-2em}
\includegraphics[scale=0.29]{globalSIP015_figures/activeTaps_versus_SNR_loss_snr_10_30_5000}
\vspace{-1em}
\protect\caption{{\footnotesize{}Percentage of active taps versus the performance loss
($\eta_{max}$) for the sparse LEs (5000 channel realizations).}\label{fig:activeTaps_versus_eta_max}}
\vspace{-1.1em}
\end{figure}
\begin{figure}
\includegraphics[scale=0.29]{globalSIP015_figures/BER_versus_SNR_16QAM}
\vspace{-1.2em}\protect\caption{{\footnotesize{}SER comparison between the MMSE non-sparse LE, the
proposed sparse LEs $\boldsymbol{w}_{s}(\boldsymbol{D}^{\frac{1}{2}}\boldsymbol{U}^{H})$,
$\boldsymbol{w}_{s}(\boldsymbol{L}^{H})$, $\boldsymbol{w}_{s}(\boldsymbol{R}_{yy})$
and the ``significant-taps'' based LE with sparsity level = 0.25
and 16-QAM modulation.}\label{fig:BER_versus_SNR}}
\vspace{-1.5em}
\end{figure}
\vspace{-1em}
Figure \ref{fig:activeTaps_versus_eta_max} plots the percentage of
the active taps versus the performance loss $\eta_{max}$ for the
proposed sparse FIR-LEs. We observe that a lower percentage of active taps
is obtained when the coherence of the sparsifying dictionary is small.
For instance, allowing for a $0.25$ dB SNR loss results in a significant
reduction in the number of active LE taps: approximately two-thirds
(two-fifths) of the taps are eliminated when using $\boldsymbol{w}_{s}(\boldsymbol{D}^{\frac{1}{2}}\boldsymbol{U}^{H})$
and $\boldsymbol{w}_{s}(\boldsymbol{L}^{H})$ at an SNR of 10$\,$(30).
The sparse LE $\boldsymbol{w}_{s}(\boldsymbol{R}_{yy})$ needs more
active taps to maintain the same SNR loss as that of the other sparse
LEs due to its higher coherence. This suggests that the smaller the worst-case
coherence, the sparser the equalizer. Moreover, a lower sparsity
level (active taps percentage) is achieved at higher SNR levels which
is consistent with the previous findings (e.g., in \cite{tapPositions07}).
Furthermore, reducing the number of active taps decreases the filter
equalization design complexity and, consequently, the power consumption
since a smaller number of complex multiply-and-add operations are
required.
In Figure \ref{fig:BER_versus_SNR}, we compare the symbol error rate
(SER) performance of our proposed sparse LEs with the approach proposed
in \cite{strongestTap}, which we refer to as the ``significant-taps''
approach. In that approach, all of the MMSE LE taps are computed and
only the $K$ significant ones are retained. Assuming a $25\%$ sparsity
level, both the $\boldsymbol{w}_{s}(\boldsymbol{D}^{\frac{1}{2}}\boldsymbol{U}^{H})$
and $\boldsymbol{w}_{s}(\boldsymbol{L}^{H})$ sparse LEs achieve the
lowest SER, followed by $\boldsymbol{w}_{s}(\boldsymbol{R}_{yy})$,
while the ``significant-taps'' LE performs the worst. In addition to
this performance gain, the complexity of the proposed sparse LEs is
less than that of the ``significant-taps'' LE since only an inversion
of an $N_{s}\times N_{s}$ matrix is required (not $N_{f}\times N_{f}$
as in the ``significant-taps'' approach) where $N_{s}$ is the number
of nonzero taps. Although the $\boldsymbol{w}_{s}(\boldsymbol{D}^{\frac{1}{2}}\boldsymbol{U}^{H})$
and $\boldsymbol{w}_{s}(\boldsymbol{L}^{H})$ LEs achieve almost the
same SER, the former has a lower decomposition complexity since its
computation can be done efficiently with only the FFT and its inverse.
\vspace{-1.5em}
\section{Conclusions\label{sec:Conclusion-and-Future}}
\vspace{-0.5em}
In this paper, we proposed a general framework for sparse FIR equalizer
design based on a sparse approximation formulation using different
dictionaries. In addition, we investigated the coherence of the proposed
dictionaries and showed that the dictionary with the smallest coherence
gives the sparsest equalizer design. The significance of our approach
was shown analytically and quantified through simulations.
\balance
\vspace{-1.0em}
\bibliographystyle{IEEEtran}
\section{Our project}
Galactic open clusters (OCs) are considered very good tracers of
the disk's properties: they are seen over the
whole disk, cover the entire age interval of the disk, and trace its chemical
abundances both at the present time and in the past \citep[e.g.,][]{friel95}.
One of the subjects where there are several advantages in using OCs instead of isolated field stars is the study of
the disk metallicity distribution and
its possible evolution with time. As a matter of fact,
the distances and ages of OCs can be measured with higher precision up to large distances,
and their ages span a much larger interval than e.g., B stars, Cepheids or Planetary Nebulae,
other widely used tracers.
OCs are also useful tests for stellar models and are complementary to older,
metal-poorer globular clusters.
If we want to study the history of the disk, we have to obtain
information on a large and significant number of OCs; of course, old OCs must
be conspicuously present in this sample.
With our program BOCCE, which stands for Bologna Open Cluster Chemical
Evolution project, we are building and homogeneously analyzing such a sample; for
a detailed description of its goals and a summary of results
on the first part of the photometric work, see \citet{bt06}.
Very briefly, we employ
i) deep, precise photometry to derive ages, distances and reddenings (and a first indication of the metallicity)
using the comparison of observed and synthetic colour-magnitude diagrams (CMDs, e.g.
\citealt{bt06});
ii) medium resolution spectra to derive radial velocities and crucial information on
membership \citep[e.g.,][]{vale};
iii) and high resolution spectra to derive the metallicity and the detailed abundances
\citep[e.g.,][]{cbgt04,cbgt05}.
We try to cover all disk positions both in distance and direction, as shown in Fig.~\ref{fig1}, where we plot all the OCs for which photometry has been acquired. We concentrated on old
clusters and have already published results for 16 OCs older than about 1 Gyr, while a few more
are expected soon. This number represents a fair fraction of the total number of similar, known
clusters: in the most recent catalogue by \cite{dias} there are about 120 OCs older than 1
Gyr, out of the more than 1700 objects present.
We have already obtained large amounts of data, but the analysis has been
completed only for part of them.
Furthermore, we also plan to increase our sample including interesting
clusters from the archives or from collaborations, and
homogenizing their analysis to our system. An example of the latter will be
the distant OCs observed with FLAMES/UVES (PI S. Randich) in the
anticenter direction (see e.g., \citealt{paola} and Sestito et al. this conference).
\begin{figure}[t!]
\resizebox{\hsize}{!}{\includegraphics[clip=true]{bragagliafig1.ps}}
\caption{\footnotesize
Positions of the clusters in the BOCCE sample; filled symbols indicate objects for which
the photometric analysis is completed, stars indicate work in progress, and empty squares
clusters acquired but not studied yet.}
\label{fig1}
\end{figure}
\subsection{Photometric data}
We have already published results for 20 clusters \citep{bt06,n3960,be17,tbc07}, which
represent about one half of our
sample and that cover the age range from about 0.1 to 9 Gyr. We are presently adding other ones, like Be~20 and Be~66 \citep{gloria}, or NGC~6791.
The homogeneous determination of ages, distances, reddening and a first indication of metal abundance is our
main result. They are derived using the photometric data and synthetic CMDs based
on stellar evolutionary tracks and taking into account photometric errors and completeness, and
the presence of a fraction of binaries.
We always use the same three sets of tracks: the FRANEC, without overshooting \citep{franec},
the old Padova ones, with classical overshooting \citep[e.g.,][]{pd}, and the FST ones, which use the Full Spectrum Turbulence approach
described by \cite{fst}.
Once we have completed our homogeneous analysis for a large enough number of targets,
we can compare the clusters' properties on a
common scale, and so derive information on the Galactic disk; and
we may compare the influence of different assumptions on the
measured parameters, and test stellar models.
Since we derive all parameters with the three
sets, we have a good estimate of the systematics involved.
\begin{figure*}[]
\resizebox{\hsize}{!}{\includegraphics[clip=true]{bragagliafig2.ps}}
\caption{\footnotesize
Upper panels: histograms of the ages and distances fron the Galactic center for BOCCE
clusters. Lower panel: the radial abundance distribution. Different symbols
indicate the degree of integration in the BOCCE sample: large filled circles:
all parameters obtained in the BOCCE system; smaller filled circles: [Fe/H]
obtained in BOCCE, R$_{GC}$ from literature; filled squares [Fe/H] from
literature, R$_{GC}$ in BOCCE; squares: R$_{GC}$ in BOCCE, but [Fe/H]
from literature (filled) or from the BOCCE photometry,
i.e., from the Z of the evolutionary tracks (open), respectively.
}
\label{fig2}
\end{figure*}
We have a few very interesting objects in our sample.
In particular, we have observed Berkeley 29 \citep{be29}, the farthest known cluster, very
important to define any metallicity gradient, and Berkeley 17 \citep{be17}, which is perhaps the oldest known open cluster, with an age similar to the one of
the youngest globular clusters.
We are presently working on the photometry of NGC~6791, a rather peculiar
cluster, which could be the oldest (but see Be~17) and most metal-rich (but see NGC~6253) in our Galaxy.
We use the deep and precise data obtained with the CFHT \citep{jason07}, taking also into account
information
on membership from radial velocities, very useful to better define its red giant branch. We have just started the simulations of its CMDs, but its very high metallicity is an obstacle, since of the models
that we homogeneously use, only the Padova tracks reach the required Z. We need
new, more metal-rich extensions of the other two sets (in preparation).
\subsection{Spectroscopic data}
We have already analyzed the high-resolution spectra of about 10
clusters \citep{n6819, cbgt04,cbgt05,gbct06,cbg07}, which span the metallicity from [Fe/H] $\simeq-0.5$
to [Fe/H] $\simeq+0.5$ dex.
We already have data on a few more objects,
and have recently obtained observing time to complete the spectroscopic part of other OCs for which the photometry has been presented.
Further data will be added, from the archives and
from a companion
program, for other interesting clusters.
Also in this case, our goal is to reach the highest possible precision and homogeneity.
To ensure this, we always use the same model grids to derive abundances, the same
line lists, $gf$'s, solar reference abundances, and the same method of
measurement of EWs or synthetic spectra.
All our spectra have been obtained up to now with SARG@TNG, [email protected] ESO, and
UVES@VLT; they have a resolution $R\sim30000-50000$.
Our strategy is to obtain spectra of a few stars (3-5) in each
cluster, chosen among confirmed members by previous radial velocity or proper
motion studies. We usually concentrate on red clump giants, since they are the best
compromise between the bright luminosity necessary to reach very good S/N even at
high resolution and temperatures not too cold to be a problem for the analysis
of line-crowded spectra.
One of the motivations of our project is to determine the metallicity distribution in the Galactic disk.
We have not reached our goal yet, but first results can be seen in Fig.~\ref{fig2}.
The upper panels show the distributions in age and distance from the Galactic center
of all cluster already photometrically analyzed. The lower panel is a representation of
the radial metallicity distribution. However,
this plot is not entirely based on BOCCE, because the analysis has been done by our group
on a common scale only for part of the clusters. At the moment we cannot yet
derive a self-consistent picture of the radial
metallicity distribution or of its possible evolution with time: we need to reach full
homogeneity.
One may wonder whether
OCs really are good tracers of
the Galactic disk. When we compare their abundances (or better the run
of elemental ratios with [Fe/H]) with those of field stars, the answer
seems to be positive. In general the elemental ratios follow the same
pattern for OCs and field stars. There are a few exceptions, like Na; this
could mean, for instance, that we are not properly taking into
account the non-LTE effects for Na, or maybe that there are actual differences
between giants (usually used to study clusters) and dwarfs (usually selected
in field samples). This is a complicated subject and needs dedicated studies.
An example of ``good'' behavior comes from the $\alpha$-elements: they share
the same run with metallicity for field and cluster stars, in all the metallicity range
covered by OCs. Furthermore, the [$\alpha$/Fe] ratios do not show any dependence
on the cluster age, in our sample or in literature ones.
There is an indication of a slight trend for [$\alpha$/Fe] values to increase with
Galactocentric distance; however, this is based on literature abundances
and does not seem to be confirmed by other studies (see Sestito et al., this conference).
Further discussion is postponed until we have homogeneously analyzed all the BOCCE
sample.
\section{Summary}
To briefly summarize: \begin{itemize}
\item
we are studying (old) open clusters as tracers of the
disk properties;
\item
to this end we are deriving ages, distances, reddenings, metallicities and detailed
abundances for a large sample of old OCs (about 20 already analyzed photometrically,
and about 10 spectroscopically, up to now);
\item
in doing so, we try to maintain the maximum homogeneity of methodology, to reach really sound and significant results;
\item
if we compare them to field stars, the open clusters do indeed appear to be good tracers of the general disk abundances, confirming that we may use them to trace the disk properties;
\item
abundances of open and globular clusters and field stars studied
by our group will be on a common scale, ensuring meaningful comparisons between
different stellar populations;
\item
finally, our sample can be used to test stellar evolutionary models of different ages
and metallicities, being complementary to the older, metal-poorer globular clusters.
\end{itemize}
\begin{acknowledgements}
People currently collaborating in this project are: M. Tosi, E. Carretta, M. Cignoni
(INAF-Oss. Astr. Bologna), R.G. Gratton, E.V. Held (INAF-Oss. Astr. Padova),
G. Andreuzzi, L. Di Fabrizio (INAF-Fundaci\'on Galilei), J. Kalirai (UCO-Lick),
and G. Marconi (ESO). This study has been made possible by generous allocation
of time at Italian telescopes (Loiano and TNG), at the CFHT, and at ESO telescopes,
both in La Silla and Paranal. This work has been greatly helped by the WEBDA database (located at
http://www.univie.ac.at/webda/).
\end{acknowledgements}
\bibliographystyle{aa}
\section{Methods}
\begin{figure*}[h!]
\centerline{\includegraphics[width=15cm]{main}}
\caption{Pipeline of our proposed nuclear instance segmentation and classification algorithm.
}
\label{main}
\end{figure*}
We propose a unified model to achieve simultaneous nuclear instance segmentation and classification on histopathological images, as illustrated in Fig.~\ref{main}. Benefiting from the design of HoVer-Net, we employ a three-branch structure for simultaneous nuclear segmentation, HoVer map production, and pixel-level nuclear classification. The segmentation branch aims to depict the boundary of each nucleus. The HoVer branch generates horizontal and vertical maps by calculating the horizontal and vertical distances of nuclear pixels to their centres of mass, which are used in post-processing to achieve precise nuclear segmentation. The nuclear classification branch classifies each nucleus into one of six categories: epithelial, lymphocyte, plasma, eosinophil, neutrophil, or connective tissue.
The three branches share an encoder and decoder but have different heads. Our network follows the general encoder-decoder architecture of the popular U-Net model \cite{ronneberger2015u}, but we replace the convolution blocks in the encoder part of the original U-Net with the more elaborate SE-Res module originally proposed in \cite{hu2018squeeze}. The SE-Res module employs a gating mechanism with the sigmoid activation to capture channel-wise feature dependencies, which helps to enhance more informative features and suppress less useful ones. In the decoder part of the network, we embed the CA module \cite{hou2021coordinate} at each resolution level in order to capture cross-channel, direction-aware, and position-sensitive information. The skip connections of the original U-Net are kept between the encoder and decoder blocks, which helps the decoder aggregate inter-channel relationships and precise positional information to obtain more accurate segmentation results.
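To illustrate the channel-gating mechanism described above, the following sketch implements squeeze-and-excitation reweighting in plain NumPy. The weight shapes and reduction setup are illustrative assumptions, not the exact configuration of our network:

```python
import numpy as np

def se_gate(x, w1, w2):
    """Squeeze-and-Excitation channel gating (schematic).
    x:  feature map of shape (C, H, W)
    w1: squeeze FC weights, shape (C_r, C), with C_r = C // reduction
    w2: excitation FC weights, shape (C, C_r)"""
    s = x.mean(axis=(1, 2))              # squeeze: global average pool -> (C,)
    h = np.maximum(w1 @ s, 0.0)          # bottleneck FC + ReLU
    g = 1.0 / (1.0 + np.exp(-(w2 @ h)))  # FC + sigmoid: per-channel gates in (0, 1)
    return x * g[:, None, None]          # reweight the channels of the input
```

With zero weights the gate is $\sigma(0)=0.5$ for every channel, so the map is uniformly attenuated; learned weights instead emphasize informative channels and suppress less useful ones.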
The prediction heads in the three branches have similar structures but different weights, each composed of convolution, batch normalization (BN), and ReLU layers. The loss functions adopt Dice + cross-entropy (CE) in the segmentation branch and mean squared error (MSE) in the HoVer branch, respectively. Considering the class imbalance among different types of nuclei, the loss function of the classification branch employs Dice and weighted CE (WCE). These loss functions are defined as follows.
\begin{equation}
L_{\text {seg}}=L_{CE}+L_{\text {Dice}}
\end{equation}
\begin{equation}
L_{\text {HoVer}}=L_{MSE}(y, \hat{y})=\|\hat{y}-y\|_{2}^{2}
\end{equation}
\begin{equation}
L_{\text {cls}}=L_{WCE}+L_{\text {Dice}}
\end{equation}
\begin{equation}
L_{\text {WCE}}(y, \hat{y})=-\frac{1}{N} \sum_{i=1}^{N} \sum_{k=1}^{C} w_{k} y_{i, k} \log \hat{y}_{i, k}
\end{equation}
\begin{equation}
L_{\text {Dice}}=-\frac{2}{|C|} \sum_{k \in C} \frac{\sum_{i} y_{i, k} \hat{y}_{i, k}+\varepsilon}{\sum_{i} y_{i, k}+\sum_{i} \hat{y}_{i, k}+\varepsilon}
\end{equation}
\noindent where $y$ and $\hat{y}$ represent the ground truth label and the predicted label, respectively, and $w_k$ denotes the class weights, which are set empirically to 2, 2, 3, 4, 4, 2, and 1 for the epithelial, lymphocyte, plasma, eosinophil, neutrophil, connective tissue, and background classes, respectively. When $w_k$ equals 1 for all classes, $L_{\text {WCE}}$ reduces to the traditional $L_{\text {CE}}$.
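For concreteness, the weighted cross-entropy of Eq.~(4) and the Dice term of Eq.~(5) can be sketched in NumPy as follows. The array shapes and the $\varepsilon$ defaults are illustrative; in practice one would use the deep-learning framework's own loss implementations:

```python
import numpy as np

def weighted_ce(y, y_hat, w, eps=1e-7):
    """Eq. (4): y, y_hat are (N, C) one-hot labels / predicted
    class probabilities; w is the (C,) vector of class weights."""
    return -np.mean(np.sum(w * y * np.log(y_hat + eps), axis=1))

def dice_loss(y, y_hat, eps=1.0):
    """Eq. (5): class-averaged soft Dice, summing over pixels (axis 0)."""
    num = np.sum(y * y_hat, axis=0) + eps
    den = np.sum(y, axis=0) + np.sum(y_hat, axis=0) + eps
    return -2.0 * np.mean(num / den)
```

A perfect prediction drives the Dice term toward $-1$ (its minimum as $\varepsilon \to 0$) and the weighted CE toward $0$, matching the signs in the definitions above.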
After network training, our post-processing follows the method used by HoVer-Net \cite{graham2019hover}, which helps produce more accurate classification and segmentation results.
\section{Results}
\subsection{Experimental setups}
Standard real-time data augmentation methods such as horizontal flipping, vertical flipping, random rescaling, random cropping, and random rotation are performed to make the model invariant to geometric perturbations. Moreover, RandomHSV is also adopted to randomly change the hue, saturation, and value of images in the hue-saturation-value (HSV) color space, making the model robust to color perturbations. The Adam optimizer \cite{kingma2014adam} is used as the optimization method for model training. The initial learning rate is set to 0.0003 and reduced by a factor of 10 at the 25th and the 35th epochs, with a total of 50 training epochs. The mini-batch size is set to 32. All models are implemented using the PyTorch framework, and all experiments are performed on a workstation equipped with an Intel(R) Xeon(R) E5-2680 v4 2.40GHz CPU and four NVIDIA Tesla V100 GPUs with 32 GB of memory each.
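The step schedule described above (initial rate $3\times 10^{-4}$, reduced by a factor of 10 at epochs 25 and 35) can be expressed as a simple function; this sketch mirrors, but does not reproduce, the framework scheduler we actually use:

```python
def lr_at_epoch(epoch, base_lr=3e-4, milestones=(25, 35), gamma=0.1):
    """Piecewise-constant learning-rate schedule: multiply the base
    rate by `gamma` once for every milestone the epoch has passed."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```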
\subsection{Experimental results}
Two evaluation metrics are used for algorithm validation: multi-class panoptic quality ($mPQ^+$) and the multi-class coefficient of determination ($R^2$), consistent with those used by the challenge organizers \cite{graham2021conic}. Table~\ref{tab} compares our results with other methods.
\begin{table}[htbp]
\centering
\caption{Comparison between our results and other methods}
\begin{tabular}{lcc}
\toprule
Methods & $mPQ^+$ & $R^2$ \\
\midrule
HoVer-Net (baseline) & 0.29558 & -0.42802 \\
MaskRCNN & 0.35460 & -0.19821 \\
Ours & 0.43419 & 0.56484 \\
\bottomrule
\end{tabular}%
\label{tab}%
\end{table}%
As shown in Table~\ref{tab}, our method outperforms the previous HoVer-Net and MaskRCNN by about 0.14 and 0.08, respectively, in terms of the $mPQ^+$ metric. The results demonstrate the effectiveness of our proposed nuclear classification and segmentation algorithm.
\bibliographystyle{ieeetran}
\section{Appendix}\label{sec:App}
\subsection{Convex Envelopes of quadratic and bilinear terms}
The nonlinearities appearing in power flow are of the form $x^2$ or $xy$ for some variables $x,y$. We use the following convex envelopes as relaxations of these nonlinear terms:
\begin{subequations}
\begin{align}
\mathrm{SqRel}\br{y,[\lb{y},\ub{y}]}=\left\{x: \begin{array}{ll}
x &\geq y^2\\
x &\leq \br{\lb{y}+\ub{y}}y-\lb{y}\ub{y}
\end{array} \right\} \\
\mathrm{McCormick}\br{y,z,[\lb{y},\ub{y}],[\lb{z},\ub{z}]}=\left\{x: \begin{array}{ll}
x &\geq \lb{y}z + \lb{z}y - \lb{y}\lb{z}\\
x &\geq \ub{y}z + \ub{z}y - \ub{y}\ub{z}\\
x &\leq \ub{y}z + \lb{z}y - \ub{y}\lb{z}\\
x &\leq \lb{y}z + \ub{z}y - \lb{y}\ub{z}
\end{array} \right\}
\end{align}
\end{subequations}
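As a numeric sanity check, the envelopes above can be evaluated pointwise. The following sketch returns the lower/upper envelope values of $y^2$ and $yz$ at a given point, with variable names following the definitions above:

```python
def sq_rel(y, yl, yu):
    """Convex envelope of x = y^2 on [yl, yu]: returns (lower, upper).
    Lower bound is the quadratic itself; upper bound is the secant."""
    return y * y, (yl + yu) * y - yl * yu

def mccormick(y, z, yl, yu, zl, zu):
    """McCormick envelope of x = y*z on [yl, yu] x [zl, zu]:
    returns (lower, upper) as the tightest of the four planes."""
    lo = max(yl * z + zl * y - yl * zl,
             yu * z + zu * y - yu * zu)
    hi = min(yu * z + zl * y - yu * zl,
             yl * z + zu * y - yl * zu)
    return lo, hi
```

Note that the McCormick envelope is exact at the corners of the box and loosest in its interior, which is why shrinking the interval regions tightens the relaxation.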
\subsection{Interval DP Relaxation}\label{sec:AppRelax}
In this section, we describe a concrete implementation of the interval relaxation procedure \eqref{eq:PropBoundRelax}:
\begin{subequations}
\begin{align}
&\Ext_{s_i,\{S_k,\mathfrak{v}_k,\mathfrak{i}_k,\mathrm{Sq}^{P}_k,\mathrm{Sq}^{Q}_k,\mathrm{Prod}^{\mathfrak{v},\mathfrak{i}}_k\}_{k\in \Chi{i}\cup\{i\}}} \{\mathfrak{v}_i,P_i,Q_i,a_i\br{t}p_i+b_i\br{t}q_i+c_i\br{t}\} \\
& \text{Subject to } \nonumber\\
& P_i = p_i+\sum_{k \in \Chi{i}}\br{P_k-\mathfrak{i}_kr_k} \label{eq:RelaxDPaa}\\
& Q_i = q_i+\sum_{k \in \Chi{i}}\br{Q_k-\mathfrak{i}_kx_k} \label{eq:RelaxDPab}\\
& \mathfrak{v}_i = \mathfrak{v}_k+\mathfrak{i}_k\br{r_k^2+x_k^2}-2\br{P_kr_k+Q_kx_k},k \in \Chi{i} \label{eq:RelaxDPa}\\
& \sqrt{P_k^2+Q_k^2} \leq \sqrt{\mathfrak{v}_k\mathfrak{i}_k}\quad k\in \Chi{i}\cup\{i\} \label{eq:RelaxDPb}\\
& \mathrm{Sq}^{P}_k+\mathrm{Sq}^{Q}_k=\mathrm{Prod}^{\mathfrak{v},\mathfrak{i}}_k \quad k\in \Chi{i}\cup\{i\}\label{eq:RelaxDPc}\\
& \mathrm{Sq}^{P}_k \in \mathrm{SqRel}\br{P_k,P\br{\mathbb{I}_k}} \quad k\in \Chi{i}\cup\{i\} \label{eq:RelaxDPd}\\
& \mathrm{Sq}^{Q}_k \in \mathrm{SqRel}\br{Q_k,Q\br{\mathbb{I}_k}} \quad k\in \Chi{i}\cup\{i\} \label{eq:RelaxDPe}\\
& \mathrm{Prod}^{\mathfrak{v},\mathfrak{i}}_k \in \mathrm{McCormick}\br{\mathfrak{v}_k,\mathfrak{i}_k,\mathfrak{v}\br{\mathbb{I}_k},\mathfrak{i}\br{\mathbb{I}_k}}\quad k\in \Chi{i}\cup\{i\} \label{eq:RelaxDPf} \\
&\mathfrak{v}_k \in \mathfrak{v}\br{\mathbb{I}_k}, \mathfrak{i}_k \in \mathfrak{i}\br{\mathbb{I}_k},P_k\in P\br{\mathbb{I}_k},Q_k\in Q\br{\mathbb{I}_k},k\in\{i\}\cup\Chi{i} \\
& p_i \in [\lb{p_i}\br{t},\ub{p_i}\br{t}],q_i \in [\lb{q_i}\br{t},\ub{q_i}\br{t}]
\end{align}\label{eq:RelaxDP}
\end{subequations}
This requires the solution of a small number of SOCPs within each DP update (specifically $10$ SOCPs in $6d$ variables, where $d$ is the maximum degree of a node in the tree; note that we can always choose $d\leq 2$ by modifying the original problem as in Lemma \ref{lem:DegLem}). Note also that as the intervals $\mathbb{I}_k$ get smaller, the relaxation gets tighter; this is formalized in the lemma below:
\begin{lemma}\label{lem:AppRelax}
The relaxation defined by \eqref{eq:RelaxDP} is a valid interval relaxation and satisfies the conditions of the definition \ref{def:IntervalRelax}.
\end{lemma}
\begin{proof}
Throughout this proof, we use $\eta\br{M}$ to refer to some constant that depends on the number $M$ from Assumption \ref{assump:B}. Properties \eqref{eq:IntervalRelaxDefb} and \eqref{eq:IntervalDefb} are obvious, since \eqref{eq:RelaxDP} is a valid relaxation of the problem \eqref{eq:PropBound}. The property \eqref{eq:IntervalDefa} follows from the tightness of the McCormick relaxation. If $\mathbb{I}_i,\br{\mathbb{I}_k }_{k\in \Chi{i}}$ are of radius at most $\epsilon$, we know that
\[\lb{\mathfrak{v}_k}\mathfrak{i}_k+\lb{\mathfrak{i}_k}\mathfrak{v}_k-\lb{\mathfrak{v}_k}\lb{\mathfrak{i}_k} \leq \mathrm{Prod}^{\mathfrak{v},\mathfrak{i}}_k \leq \ub{\mathfrak{v}_k}\mathfrak{i}_k+\lb{\mathfrak{i}_k}\mathfrak{v}_k-\ub{\mathfrak{v}_k}\lb{\mathfrak{i}_k}\]
so that the range of $\mathrm{Prod}^{\mathfrak{v},\mathfrak{i}}_k$ is of size at most
\[\br{\ub{\mathfrak{v}_k}-\lb{\mathfrak{v}_k}}\br{\mathfrak{i}_k-\lb{\mathfrak{i}_k}}\leq \br{\ub{\mathfrak{v}_k}-\lb{\mathfrak{v}_k}}\br{\ub{\mathfrak{i}_k}-\lb{\mathfrak{i}_k}}\leq \eta \epsilon\] since $\mathfrak{i}_k$ has upper and lower bounds depending on the problem data. Thus, we know that $\mathrm{Prod}_k^{\mathfrak{v},\mathfrak{i}}$ is at most $\eta\br{M} \epsilon$ away from $\mathfrak{v}_k\mathfrak{i}_k$.
Similarly, $\mathrm{Sq}^{P}_k$ is at most $\eta\br{M}\epsilon$ away from $P_k^2$, and $\mathrm{Sq}^{Q}_k$ is at most $\eta\br{M}\epsilon$ away from $Q_k^2$. Combining these results, we get that \eqref{eq:IntervalDefa} holds.
\end{proof}
\subsection{Bound-tightening procedure}\label{sec:Relax}
We use a scheme similar to the one described in \cite{coffrin2015strengthening} that infers bounds on the variables $\mathfrak{v}_i,P_i,Q_i,\mathfrak{i}_i$ given the constraints in the ACOPF problem \eqref{eq:DPMain}. Let $\mathrm{Conv}\br{\mathbf{S}_i}$ denote the convex hull of the set $\mathbf{S}_i$. We use a convex relaxation of the constraints \eqref{eq:OPFform} that depends on the variable bounds. We then iterate this procedure: we use the relaxation to infer tighter bounds, and then tighten the relaxation using the inferred bounds. In practice, we find that the procedure typically converges in a few iterations to a stable set of bounds.
The nonconvex constraint $P_k^2+Q_k^2=\mathfrak{v}_k\mathfrak{i}_k$ can be relaxed to a convex constraint: $\sqrt{P_k^2+Q_k^2}\leq\sqrt{\mathfrak{i}_k\mathfrak{v}_k}$. This can be tightened by replacing the nonlinear terms in the equation $\powb{P_k}{2}+\powb{Q_k}{2}=\mathfrak{v}_k\mathfrak{i}_k$ with their McCormick envelopes. Plugging all this into a single formulation, we obtain:
\begin{subequations}
\begin{align}
\Ext_{\mathfrak{v},\mathfrak{i},S,s,\mathrm{Sq}^{P},\mathrm{Sq}^{Q},\mathrm{Prod}^{\mathfrak{v},\mathfrak{i}}} &\quad \{\mathfrak{v}_i,\mathfrak{i}_i,P_i,Q_i\}_{i=1}^n \\
\text{ Subject to } & S_i-\sum_{k \in \Chi{i}} \br{S_k-\mathfrak{i}_kz_k} \in \mathrm{Conv}\br{\mathbf{S}_i}, i \in \{0,\ldots,n\} \\
& \mathfrak{v}_i = \mathfrak{v}_k+\mathfrak{i}_k|z_k|^2-2\br{P_kr_k+Q_kx_k}, i\in \{0,\ldots,n\}, k \in \Chi{i} \\
& \sqrt{P_k^2+Q_k^2} \leq \sqrt{\mathfrak{v}_k\mathfrak{i}_k}, k\in \{1,\ldots,n\} \\
& \mathrm{Sq}^{P}_k+\mathrm{Sq}^{Q}_k=\mathrm{Prod}^{\mathfrak{v},\mathfrak{i}}_k \quad k\in \{1,\ldots,n\}\\
& \mathrm{Sq}^{P}_k \in \mathrm{SqRel}\br{P_k,[\lb{P_k},\ub{P_k}]}, k\in \{1,\ldots,n\}\\
& \mathrm{Sq}^{Q}_k \in \mathrm{SqRel}\br{Q_k,[\lb{Q_k},\ub{Q_k}]}, k\in \{1,\ldots,n\}\\
& \mathrm{Prod}^{\mathfrak{v},I}_k \in \mathrm{McCormick}\br{\mathfrak{v}_k,\mathfrak{i}_k,[\lb{\mathfrak{v}_k},\ub{\mathfrak{v}_k}],[\lb{\mathfrak{i}_k},\ub{\mathfrak{i}_k}]}, k\in \{1,\ldots,n\}
\end{align} \label{eq:BoundTighten}
\end{subequations}
Each minimum/maximum value involves solving a Second Order Cone Program (SOCP) and can be done in parallel over the variables involved. This entire procedure can be viewed as a mapping:
\[{\begin{pmatrix} \lb{\mathfrak{v}} & \ub{\mathfrak{v}} & \lb{S} & \ub{S} & \lb{\mathfrak{i}} & \ub{\mathfrak{i}} \end{pmatrix}}_t \mapsto {\begin{pmatrix} \lb{\mathfrak{v}} & \ub{\mathfrak{v}} & \lb{S} & \ub{S} & \lb{\mathfrak{i}} & \ub{\mathfrak{i}} \end{pmatrix}}_{t+1}\]
We iterate this mapping until the improvement in bounds is smaller than some threshold. The obtained bounds are used to redefine the domains $\mathcal{X}_i$ for each of the variables.
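Schematically, the iterated mapping reads as follows, where `tighten_step` stands in for solving the SOCPs in \eqref{eq:BoundTighten} for each variable bound; the tolerance and iteration cap are illustrative choices:

```python
def tighten_until_stable(bounds, tighten_step, tol=1e-4, max_iter=20):
    """Iterate a bound-tightening map until the largest change in any
    bound falls below `tol`, or an iteration cap is reached."""
    for _ in range(max_iter):
        new = tighten_step(bounds)
        gap = max(abs(a - b) for a, b in zip(new, bounds))
        bounds = new
        if gap < tol:
            break
    return bounds
```

Since each application of the map can only shrink the valid region, the sequence of bounds is monotone and the loop terminates once the improvement per sweep falls below the threshold.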
\subsection{Proof of Lemma \ref{lem:DegLem}}\label{sec:AppDeg}
We describe a transformation that takes a node $i$ with $m$ children and adds at most $r=\lceil\log_2\br{m}\rceil$ additional buses to create a new network where each node has at most $2$ children.
\noindent
We add children in ``levels'' $p=1,\ldots,r$: At level $1$, we add children $c_{10},c_{11}$ connected to bus $i$ by $0$-impedance transmission lines. We have the following constraints between $i$ and its children ${\Chi{i}}^{\prime}=\{c_{10},c_{11}\}$:
\begin{align*}
S_i & =s_i+\sum_{k \in {\Chi{i}}^{\prime}}S_k \\
\mathfrak{v}_i & =\mathfrak{v}_k, k \in \Chi{i}^{\prime} \\
\mathfrak{i}_k\mathfrak{v}_k &= |S_k|^2, k \in {\Chi{i}}^{\prime}
\end{align*}
\noindent
At any level $p \leq r$, all nodes are of the form $c_{i_1\ldots i_p}$. We add its children $\Chi{c_{i_1\ldots i_p}}=\{c_{i_1\ldots i_p 0},c_{i_1\ldots i_p 1}\}$ connected to it by $0$-impedance lines with the constraints:
\begin{align*}
S_{c_{i_1\ldots i_p}} & =\sum_{k \in \Chi{c_{i_1\ldots i_p}}}S_k \\
\mathfrak{v}_{c_{i_1\ldots i_p}} & =\mathfrak{v}_k, k \in \Chi{c_{i_1\ldots i_p}} \\
\mathfrak{i}_k\mathfrak{v}_k &= |S_k|^2, k \in \Chi{c_{i_1\ldots i_p}}
\end{align*}
At the final level $p=r-1$, every node is of the form $c_{i_1\ldots i_{r-1}}$ and its children are picked from the set of original children $\Chi{i}$. One way of doing this is to assign children in order: $\Chi{c_{10\ldots 0}}=\{c_1,c_2\},\Chi{c_{10\ldots 1}}=\{c_3,c_4\},\ldots$. Then, we add the balance equations:
\begin{align*}
S_{c_{i_1i_2\ldots i_{r-1}}} & =\sum_{k \in \Chi{c_{i_1i_2\ldots i_{r-1}}}}\br{S_k} \\
\mathfrak{v}_i & =\mathfrak{v}_k, k \in \Chi{i}^{\prime} \\
\mathfrak{i}_k\mathfrak{v}_k &= |S_k|^2, k \in {\Chi{i}}^{\prime}
\end{align*}
Adding the power balance equations at all the intermediate buses, we recover the original power balance condition
\[s_i=\sum_{k \in \Chi{i}} \br{S_k-z_k\mathfrak{i}_k} \]
Further, we have that $\mathfrak{v}_i=\mathfrak{v}_{c_{i_1i_2\ldots i_{p}}}$ for every $1\leq p \leq r-1$.
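The recursive pairing used in this construction can be illustrated by the following sketch, which groups an arbitrary child list into nested pairs. This is a schematic of the bookkeeping only; the zero-impedance lines and balance constraints are added at each level as described above:

```python
def binarize(children):
    """Group a node's children into a binary tree of auxiliary nodes,
    adding at most ceil(log2(m)) levels for m original children, so
    every node in the result has at most 2 children."""
    if len(children) <= 2:
        return list(children)
    mid = (len(children) + 1) // 2
    return [binarize(children[:mid]), binarize(children[mid:])]
```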
\begin{figure}
\begin{center}
\includegraphics[width=.75\textwidth]{Binarize.png}
\end{center}
\end{figure}
\subsection{Proof of Theorem \ref{thm:ApproxMain}}\label{sec:AppMain}
\begin{proof}
Throughout this proof, we will use $\zeta\br{M}$ to denote an arbitrary function of $M$. The proof breaks down into three key statements:
\begin{itemize}
\item[1] The size of the messages $|\beta^i|$ is bounded by $\frac{\zeta\br{M}}{\epsilon^3}$.
\item[2] For each message, $\mathrm{Rad}\br{\beta^i\br{t}}\leq \zeta\br{M}\epsilon,t=1\ldots,|\beta^i|$.
\item[3] The interval DP update (Algorithm \ref{Alg:DPUpdate}) makes at most $\frac{\zeta\br{M}}{\epsilon^5}$ calls to the $\mathrm{PropBound}$ routine.
\end{itemize}
\emph{Proof of 1,2}\\
For the leaf nodes, the size of the messages is bounded by $|\mathbf{S}_i|\br{\frac{\ub{\mathfrak{v}_i}-\lb{\mathfrak{v}_i}}{\epsilon}+1}\leq \frac{\zeta\br{M}}{\epsilon}$ (since $\ub{\mathfrak{v}}_i\leq M, \lb{\mathfrak{v}_i} \geq \frac{1}{M},|\mathbf{S}_i|\leq M$). Since $0<\epsilon<1$, this is smaller than $\frac{\zeta\br{M}}{\epsilon^3}$. Further, since $\ub{p_i}\br{t}-p_i\br{t},\ub{q_i}\br{t}-q_i\br{t}\leq \frac{1}{M}$, we know that $\mathrm{Rad}\br{\beta^i\br{t}}\leq \epsilon$ for each $t=1,\ldots,|\beta^i|$.
For non-leaf node, the size of the messages is bounded by the size of $|\mathrm{Partition}\br{\mathcal{X}_i,\epsilon}|\leq\frac{\zeta\br{M}}{\epsilon^3}$. Further, since $\beta^i\br{t} \subset \mathbb{I}_i \in \mathrm{Partition}\br{\mathcal{X}_i,\epsilon}$, we know that $\mathrm{Rad}\br{\beta^i\br{t}}\leq \epsilon$.
Thus, using \eqref{eq:IntervalRelaxDef}, the error of the PF equations $|p_i-p_i\br{\beta_i,\beta_{\Chi{i}}}|,|q_i-q_i\br{\beta_i,\beta_{\Chi{i}}}|,|\mathfrak{v}_i-\mathfrak{v}_i\br{\beta_k}|$ can be bounded by $\zeta\br{M}\epsilon$. This proves claim $1$ of the theorem. Further, since at each step we propagate all interval regions consistent with at least one interval value of the child, the optimal solution to the original problem is feasible for the interval relaxation. Thus, for each $i$ we have that $c_i\br{p_i^\ast,q_i^\ast}\leq c_i\br{p_i^{OPT},q_i^{OPT}}$. Adding this over all $i$ gives us claim $2$ of the theorem.
Finally, we show that the DP update can be implemented with $\frac{\zeta\br{M}}{\epsilon^5}$ calls to the $\mathrm{PropBound}$ routine. In the loops in algorithm \ref{Alg:DPMain}, we are implicitly looping over possible interval values of $\mathfrak{v}_i,P_i,Q_i,\mathfrak{v}_j,P_j,Q_j,\mathfrak{v}_k,P_k,Q_k$. However, these variables are linked by the constraints:
\begin{align*}
P_i &=p_i+\br{P_k-r_k \frac{P_k^2+Q_k^2}{\mathfrak{v}_k}}+\br{P_j-r_j \frac{P_j^2+Q_j^2}{\mathfrak{v}_j}} \\
Q_i & =q_i+\br{Q_k-x_k \frac{P_k^2+Q_k^2}{\mathfrak{v}_k}}+\br{Q_j-x_j \frac{P_j^2+Q_j^2}{\mathfrak{v}_j}} \\
\mathfrak{v}_i & =\mathfrak{v}_k+\br{r_k^2+x_k^2}\frac{\br{P_k^2+Q_k^2}}{\mathfrak{v}_k}-2\br{P_kr_k+Q_kx_k} \\
\mathfrak{v}_i & =\mathfrak{v}_j+\br{r_j^2+x_j^2}\frac{\br{P_j^2+Q_j^2}}{\mathfrak{v}_j}-2\br{P_jr_j+Q_jx_j}
\end{align*}
Thus, if $p_i,q_i$ are fixed, the $9$ variables are constrained to lie on a $5$-dimensional manifold (since there are $4$ non-redundant constraints). This suggests that we only need to do an exhaustive search over a $5$-dimensional space rather than a $9$-dimensional space.
Suppose $p_i\in [\lb{p_i}\br{t},\ub{p_i}\br{t}],q_i\in [\lb{q_i}\br{t},\ub{q_i}\br{t}]$ and we fix particular interval values (of radius smaller than $\epsilon$) for variables $\mathfrak{v}_i,P_k,Q_k,P_j,Q_j$. Then, from the first two equations, we know that $P_i,Q_i$ must lie in an interval of size $\zeta\br{M}\epsilon$ (since $p_i,q_i$ lie in intervals of size $\zeta\br{M}$). Thus, we need to loop over at most $\zeta\br{M}$ possible values of $P_i,Q_i$. Similarly, if $\mathfrak{v}_i,P_k,Q_k$ are fixed to interval values of radius $\epsilon$, the third equation says that $\mathfrak{v}_k$ must lie in an interval of size $\zeta\br{M}\epsilon$, and similarly if $\mathfrak{v}_i,P_j,Q_j$ are fixed to intervals of radius of $\epsilon$, $\mathfrak{v}_j$ must lie in an interval of size $\zeta\br{M}\epsilon$. Thus, we need to loop over at most $\zeta\br{M}$ possible values of $\mathfrak{v}_j,\mathfrak{v}_k,P_i,Q_i$ once the values of $\mathfrak{v}_i,P_k,Q_k,P_j,Q_j$ are fixed to intervals of radius $\epsilon$. Finally, the total number of loops in algorithm \ref{Alg:DPUpdate} is at most $\frac{\zeta\br{M}}{\epsilon^5}$. Adding this over all nodes of the network, we get the third claim of Theorem \ref{thm:ApproxMain}.
\end{proof}
\section{Conclusions}\label{sec:Conc}
We have presented a novel dynamic programming based algorithm for solving optimal power flow over tree networks. Preliminary experiments have indicated that the approach is promising and that it can solve difficult mixed integer NLP problems arising in power systems. We note that these conclusions are still preliminary and further work needs to be done to carefully test and validate the performance of this approach across a range of test problems. Overall, we envision that graphical models will be a powerful paradigm for analysis and control of power systems and other infrastructure networks. We plan to explore the following concrete directions in future work: \\
(1) Extension to probabilistic inference problems: As solar penetration increases, the notion of security analysis (making sure that all voltages, flows, currents are within bounds) will need to be phrased in a probabilistic manner. For example, given a joint spatial distribution of solar generation at various points in the network, compute the probability that a given physical quantity (voltage/current/flow) deviates beyond its acceptable bounds. This problem can be phrased as the sum-product analog of the problem solved here. \\
(2) Extensions to loopy graphs: There are several possibilities for extending the algorithms presented here to loopy graphs. The most straightforward extensions would be based on junction trees \cite{koller2009probabilistic} (cluster nodes into supernodes to form a tree) or on cutset conditioning \cite{dechter2003constraint} (fix values of variables on a cutset of the graph, and given for each fixed value, use inference on the remaining tree-structured graph). Another route is to use loopy belief propagation or the corresponding Linear Programming relaxation of the inference problem \cite{koller2009probabilistic}, and subsequent hierarchies of relaxations, in the spirit of \cite{sontag2010approximate}\cite{johnson2008convex}. \\
(3) Parameterized messages: We represented messages with piecewise-constant approximations. Another option is to use a parameterized representation (polynomial/piecewise linear/piecewise polynomial for ex). An interesting related development is \cite{gamarnik2012belief}, where the authors show that belief propagation with piecewise-linear messages is guaranteed to find the global optimum of a certain special minimum cost flow problem in polynomial time. Extending this to ACOPF is another promising direction for future work.
\end{comment}
\bibliographystyle{spmpsci}
\section{Finite Algorithm based on Interval Discretization}
\label{sec:DPInt}
The algorithm described in the preceding section, as it stands, cannot
be implemented on a computer, since it requires representing the
functional objects $\kappa^i\br{\beta_i}$. A straightforward approach to deal
with this would be to discretize the variables $\beta_i$ to a finite set
and allow for some error tolerance on each of the constraints in
\eqref{eq:OPFform}. However, our experiments have indicated that, in
order to produce solutions of acceptable accuracy, one needs an
intractably fine discretization of $\beta_i$ resulting in prohibitive
computation times, and that an accurate estimation of the error
tolerance parameter can be problematic.
Hence, we need an alternative procedure to approximate the infinite dimensional dynamic program. We take the approach of using an interval-based discretization of the power flow variables $\mathfrak{v}_i,P_i,Q_i$ (i.e., each variable can take values in the respective interval). Given the constraints $\beta_i \in \mathcal{X}_i$, we partition the set $\mathcal{X}_i$ into a finite union of interval regions defined by interval constraints on the components of $\beta_i$. For any practical OPF problem, $\mathcal{X}_i$ is a compact set (since the bounds on voltages/flows are always finite), so such a decomposition is always possible. Naturally, the computational complexity of the algorithm depends on the number of interval regions. If we fix an interval resolution $\epsilon$ (each interval region in our decomposition is made up of interval constraints of width at most $\epsilon$ in each variable), the number of interval regions depends on the bounds defining the region $\mathcal{X}_i$. Thus, it is of interest to have as tight bounds as possible on the variables $\beta_i$. We describe a procedure to infer tight bounds on the power flow variables using convex relaxations in Section \ref{sec:Relax}. We use the inferred bounds to redefine $\mathcal{X}_i$ and perform the interval discretization on this refined domain.
Then, we perform the dynamic programming update in the space of intervals: For every interval-value of the parent, we look at all possible interval values of the children, and for each combination of intervals, compute the minimum cost solution given the interval constraints. This gives us a lower bound on the message for each interval value of the parent, thus leading to a piecewise-constant lower bound on the message functions $\kappa^i\br{\beta_i}$. The algorithm is described in Section \ref{sec:DPInt1}.
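This update over interval values can be sketched as follows, where `relax_lower_bound` stands in for the interval relaxation \eqref{eq:RelaxDP} and returns `None` for infeasible combinations; the data structures here are illustrative:

```python
import itertools
import math

def interval_dp_update(parent_regions, child_messages, relax_lower_bound):
    """Piecewise-constant message update: for every parent interval
    region, minimize (sum of child message values + relaxation lower
    bound) over all combinations of child interval values.
    child_messages: one list per child of (region, cost) pairs."""
    message = {}
    for region in parent_regions:
        best = math.inf
        for combo in itertools.product(*child_messages):
            child_regions = [r for r, _ in combo]
            child_cost = sum(c for _, c in combo)
            lb = relax_lower_bound(region, child_regions)
            if lb is not None:             # None marks an infeasible combo
                best = min(best, child_cost + lb)
        message[region] = best
    return message
```

Because each entry is a lower bound on the true message value over its interval region, the resulting piecewise-constant function lower-bounds $\kappa^i\br{\beta_i}$, as required.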
As the interval discretization gets finer, the relaxation becomes tighter, reducing the errors incurred in the power flow equations due to the relaxation. The errors can be made smaller than $\epsilon$ for any $\epsilon>0$, with the running time polynomial in $\frac{1}{\epsilon}$. These results are formalized in Theorem \ref{thm:ApproxMain} in Section \ref{sec:DPThm}.
\subsection{Interval Dynamic Programming with Adaptive Refinement}\label{sec:DPInt1}
We use Algorithm \ref{Alg:DPMain} with the DP update step replaced with a tractable lower bound based on an interval partition of the variable domains. We develop such a scheme guaranteed to produce a lower bound on the optimal value and an approximately feasible solution. The algorithm is based on a set of operators that will replace the DP update \eqref{eq:DPMain}:
\begin{itemize}
\item[1]\emph{Interval Partition Operator}: A procedure that takes a set and creates a collection of intervals such that every point in the set lies in an interval box of size at most $\epsilon$, for a specified tolerance $\epsilon$.
\item[2] \emph{Interval DP Relaxation}: A procedure that takes a set of interval constraints on children of a node $i$ and produces a lower bound on $\kappa^i\br{\beta_i}$.
\end{itemize}
We now define these formally.
\begin{definition}
An interval constraint on a real variable $x\in\mathbb{R}$ is a constraint of the type $a \leq x \leq b$, parameterized by real numbers $a,b, a\leq b$.
\end{definition}
\begin{definition}
An interval region $\mathbb{I}_i$ is a subset of $\mathcal{X}_i$ specified by interval constraints on the variables $\mathfrak{v}_i,P_i,Q_i$:
\begin{align*}
\mathbb{I}_i=\left\{\br{\mathfrak{v}_i,P_i,Q_i}: \begin{array}{ccc}
\lb{\mathfrak{v}_i} \leq \mathfrak{v}_i \leq \ub{\mathfrak{v}_i} \\
\lb{P_i} \leq P_i \leq \ub{P_i} \\
\lb{Q_i} \leq Q_i \leq \ub{Q_i}
\end{array}\right\}
\end{align*}
We use $\mathfrak{v}\br{\mathbb{I}_i}=[\lb{\mathfrak{v}_i},\ub{\mathfrak{v}_i}],P\br{\mathbb{I}_i}=[\lb{P_i},\ub{P_i}],Q\br{\mathbb{I}_i}=[\lb{Q_i},\ub{Q_i}]$ to denote the interval constraints in $\mathbb{I}_i$ corresponding to each of the variables,
\begin{align*}
& \mathrm{mid}\br{\mathbb{I}_i} = \br{\frac{\lb{\mathfrak{v}_i}+\ub{\mathfrak{v}_i}}{2},\frac{\lb{P_i}+\ub{P_i}}{2},\frac{\lb{Q_i}+\ub{Q_i}}{2}}
\end{align*}
to select the midpoint of an interval region, and
\[\mathrm{Rad}\br{\mathbb{I}_i}=\max\br{|\lb{\mathfrak{v}_i}-\ub{\mathfrak{v}_i}|,|\lb{P_i}-\ub{P_i}|,|\lb{Q_i}-\ub{Q_i}|}\]
to denote the radius (the largest side length) of the region.
\end{definition}
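To make the bookkeeping concrete, an interval region together with its $\mathrm{mid}$ and $\mathrm{Rad}$ operators can be sketched as follows (a minimal illustration; the class and field names are ours, not part of the formal development):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntervalRegion:
    """Interval region I_i: box constraints on (v_i, P_i, Q_i)."""
    v: tuple  # (lower, upper) bounds on the squared voltage magnitude
    P: tuple  # (lower, upper) bounds on the real power flow
    Q: tuple  # (lower, upper) bounds on the reactive power flow

    def mid(self):
        # mid(I_i): componentwise midpoint of the box
        return tuple((lo + hi) / 2.0 for (lo, hi) in (self.v, self.P, self.Q))

    def rad(self):
        # Rad(I_i): largest side length of the box
        return max(hi - lo for (lo, hi) in (self.v, self.P, self.Q))
```

For example, the region with $\mathfrak{v}_i\in[0.5,1.5]$, $P_i\in[0,0.5]$, $Q_i\in[-0.25,0.25]$ has midpoint $(1.0,0.25,0.0)$ and radius $1.0$.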
\begin{definition}
An interval partition $\mathcal{I}_i$ of a set $X \subseteq \mathcal{X}_i$ is a collection of interval regions satisfying the following conditions:
\begin{subequations}
\begin{align}
& \mathcal{I}_i\br{t} = \left\{\beta_i: \begin{array}{ccc}
\lb{\mathfrak{v}_i}\br{t} \leq & \mathfrak{v}_i & \leq \ub{\mathfrak{v}_i}\br{t} \\
\lb{P_i}\br{t} \leq & P_i & \leq \ub{P_i}\br{t} \\
\lb{Q_i}\br{t} \leq & Q_i & \leq \ub{Q_i}\br{t}
\end{array}\right\},t=1,\ldots,|\mathcal{I}_i| \label{eq:IntervalDefa}\\
& \cup_{t=1}^{|\mathcal{I}_i|} \mathcal{I}_i\br{t} = X \label{eq:IntervalDefb}\end{align}\label{eq:IntervalDef}
\end{subequations}
\noindent
The number of regions in the partition is denoted by $|\mathcal{I}_i|$. A partition such that $
\max_t \mathrm{Rad}\br{\mathcal{I}_i\br{t}} \leq \epsilon $
is denoted as $\mathrm{Partition}\br{\mathcal{X}_i;\epsilon}$.
\end{definition}
\begin{definition}\label{def:IntervalRelax}
An interval relaxation of the DP update step \eqref{eq:DPMain} is a computational procedure that, given a bus $i \in \{0,\ldots,n\}$ with $\Chi{i}=\{k_1,\ldots,k_m\}$ and interval regions $\beta_i \in \mathbb{I}_i \subseteq \mathcal{X}_i,\beta_k \in \mathbb{I}_k\subseteq \mathcal{X}_k$ for each $k\in\Chi{i}$, with $\max_{k\in \{i\}\cup\Chi{i}}\mathrm{Rad}\br{\mathbb{I}_k}\leq \epsilon$, produces as output an interval region $\mathbb{I}_i^{\prime}$ and values $p_i,q_i$ such that
\begin{subequations}
\begin{align}
& \exists \{\beta_k\in \mathbb{I}_k\}_{k \in \Chi{i}},\beta_i\in \mathbb{I}_i^{\prime} \nonumber \\
& \text{ s.t } \left\{\begin{array}{ll} |p_i-p_i\br{\beta_i,\beta_{\Chi{i}}}| & \leq \eta \epsilon \\
|q_i-q_i\br{\beta_i,\beta_{\Chi{i}}}| & \leq \eta \epsilon \\
\left| \mathfrak{v}_i -\br{\mathfrak{v}_k+\frac{P_k^2+Q_k^2}{\mathfrak{v}_k}\br{r_k^2+x_k^2}-2\br{P_kr_k+Q_kx_k}}\right| & \leq \eta \epsilon \quad \forall k \in \Chi{i}
\end{array}\right. \label{eq:IntervalRelaxDefa}\\
& c_i\br{p_i,q_i} \leq \min_{\{\beta_k\in \mathbb{I}_k\}_{k \in \Chi{i}\cup\{i\}}} H_i\br{\beta_i,\beta_{\Chi{i}}} \label{eq:IntervalRelaxDefb}\\
& \{\beta_i:\exists \{\beta_k \in \mathbb{I}_k\}_{k \in \Chi{i}} \text{ s.t }H_i\br{\beta_i,\beta_{\Chi{i}}}<\infty\} \subseteq \mathbb{I}_i^\prime \subseteq \mathbb{I}_i \label{eq:IntervalRelaxDefc}
\end{align}\label{eq:IntervalRelaxDef}
\end{subequations}
\noindent
where $\eta$ is a constant that depends only on the number $M$ from Assumption \ref{assump:B}. We denote this computation as
\[\br{\mathbb{I}_i^\prime,p_i,q_i}=\mathrm{PropBound}\br{i,\mathbb{I}_i,\mathbb{I}_{k_1},\ldots,\mathbb{I}_{k_m}}.\]
\end{definition}
These conditions can be interpreted as follows:
\begin{itemize}
\item \eqref{eq:IntervalRelaxDefa} states that the relaxation gets tighter as the intervals get smaller. A natural relaxation that satisfies this requirement is to take convex envelopes of the nonlinear terms over the bound constraints defined by the intervals: This is what we use in the concrete implementation described in Section \ref{sec:ConcreteIntervalRelax}.
\item \eqref{eq:IntervalRelaxDefb} states that the injections produced by the relaxation step are super-optimal, so that we are guaranteed to get a lower bound on the optimal solution through the DP procedure.
\item \eqref{eq:IntervalRelaxDefc} states that the bound propagation (which shrinks the interval $\mathbb{I}_i$ to $\mathbb{I}_i^{\prime}$ using the constraints implicit in $H_i$) cannot cut off any feasible points.
\end{itemize}
\noindent
Given this computational procedure, we can construct a DP-like
algorithm where the intractable DP update \eqref{eq:DPMain} is
replaced with a tractable procedure based on $\mathrm{PropBound}$,
thereby producing a lower bound on the message function
$\kappa^i\br{\beta_i}$. The algorithm is described in Algorithms
\ref{Alg:DPUpdate} and \ref{Alg:DPUpdateLeaf}. The algorithm starts by
partitioning the space $\mathcal{X}_i$ using the interval partition
operator. For each element of the interval partition, we loop over the
pieces in the messages corresponding to each child, and propagate
constraints from the children to the parent $\beta_i$. If the propagated
interval is non-empty (that is, there exists a feasible setting for
the parents and children within the interval constraints), the lower
bound computed on $c_i\br{p_i,q_i}$ is used and a new piece is
added to the messages $\kappa^i\br{\beta_i},\eta^i\br{\beta_i}$. In
comparison to the DP Algorithm \ref{Alg:DPMain}, we also maintain
functions $p^i,q^i,\beta^i$ which store the optimal injections
and intervals for every variable computed in the DP procedure.
\subsection{Implementation of operators}
\subsubsection{Interval Discretization Operator}
The $\epsilon$-partition operator can be implemented by using a uniform discretization. The bounds $\lb{\mathfrak{v}},\ub{\mathfrak{v}},\lb{S},\ub{S}$ are obtained from the bound tightening procedure \eqref{eq:BoundTighten} described in Section \ref{sec:Relax}. For the variable $\mathfrak{v}_i$ with bounds $\lb{\mathfrak{v}_i},\ub{\mathfrak{v}_i}$, the partition operator will create the intervals
\[\left\{[\lb{\mathfrak{v}_i},\lb{\mathfrak{v}_i}+\epsilon^{\mathfrak{v}}],[\lb{\mathfrak{v}_i}+\epsilon^{\mathfrak{v}},\lb{\mathfrak{v}_i}+2\epsilon^{\mathfrak{v}}],\ldots,\left[\lb{\mathfrak{v}_i}+\left(\left\lceil\frac{\ub{\mathfrak{v}_i}-\lb{\mathfrak{v}_i}}{\epsilon^{\mathfrak{v}}}\right\rceil-1\right)\epsilon^{\mathfrak{v}},\ub{\mathfrak{v}_i}\right]\right\}\]
A similar interval discretization procedure is used for $P_i,Q_i$.
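The uniform discretization above can be sketched as follows (an illustrative implementation; the function names and the clamping of the last interval at the upper bound are our choices):

```python
from itertools import product

def partition_1d(lb, ub, eps):
    """Uniform eps-partition of [lb, ub]; the last interval is clamped at ub
    so that the union of the intervals is exactly [lb, ub]."""
    intervals = []
    a = lb
    while a < ub:
        b = min(a + eps, ub)
        intervals.append((a, b))
        a = b
    return intervals

def partition_box(bounds, eps):
    """Interval regions for (v_i, P_i, Q_i): the Cartesian product of the
    1-D partitions.  `bounds` is a list of (lb, ub) pairs, one per variable."""
    grids = [partition_1d(lb, ub, eps) for (lb, ub) in bounds]
    return list(product(*grids))
```

The number of regions grows as the product of the per-variable counts, which is why the tightened bounds from Section \ref{sec:Relax} matter: tighter bounds mean fewer intervals at a given resolution $\epsilon$.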
\subsubsection{Interval Relaxation Operator}\label{sec:ConcreteIntervalRelax}
In order to define the interval relaxation operator, it is convenient to introduce the square of the current magnitude $\mathfrak{i}_i=\frac{P_i^2+Q_i^2}{\mathfrak{v}_i}$. This serves to isolate the nonconvexities in the problem and simplify the derivation of convex relaxations. Note also that using the current is natural in view of its edge-invariance: in contrast to power flows, current is conserved along any edge (power line).
The interval relaxation operator requires solution of the following problem:
\begin{subequations}
\begin{align}
\Ext_{p_i,q_i,\{\mathfrak{v}_k,P_k,Q_k\}_{k \in \Chi{i}\cup\{i\}}} & \{\mathfrak{v}_i,P_i,Q_i,c_i\br{p_i,q_i}\} \\
\text{Subject to } & P_i = p_i+\sum_{k \in \Chi{i}}\br{P_k-\mathfrak{i}_kr_k} \label{eq:PFnewa}\\
& Q_i = q_i+\sum_{k \in \Chi{i}}\br{Q_k-\mathfrak{i}_kx_k} \label{eq:PFnewb}\\
& \mathfrak{v}_i = \mathfrak{v}_k+\mathfrak{i}_k\br{r_k^2+x_k^2}-2\br{P_kr_k+Q_kx_k},k \in \Chi{i} \label{eq:PFnewc}\\
& \mathfrak{v}_k\mathfrak{i}_k= P_k^2+Q_k^2, k \in \Chi{i}\cup\{i\} \label{eq:PropNC}\\
& \mathfrak{v}_k \in \mathfrak{v}\br{\mathbb{I}_k}, P_k\in P\br{\mathbb{I}_k},Q_k\in Q\br{\mathbb{I}_k},k\in\{i\}\cup\Chi{i} \label{eq:PFnewd}\\
& \br{p_i,q_i} \in \mathbf{S}_i
\end{align} \label{eq:PropBound}
\end{subequations}
\noindent
where $\Ext$ means that we both maximize and minimize every term in the set of objectives subject to the constraints specified. Thus, we obtain tighter bounds on the variables $\mathfrak{v}_i,P_i,Q_i$ and a lower bound on the objective given the interval constraints. The nonconvexity in the above problem is due to the constraint \eqref{eq:PropNC} and the possibly nonconvex cost $c_i$ and constraints $\mathbf{S}_i$. To deal with the latter, we explicitly enumerate over $\mathbb{I}_i^{s}\br{t},t=1,\ldots,|\mathbf{S}_i|$ so that for each $t$, the injection constraints and costs are linear. To deal with the nonconvexity of \eqref{eq:PropNC}, we use convex envelopes of the bilinear and quadratic terms. An abstract version of the problem we solve is given below (exact details are deferred to Section \ref{sec:AppRelax}):
\begin{subequations}
$\forall t \in \{1,\ldots,|\mathbf{S}_i|\}$
\begin{align}
\Ext_{p_i,q_i,\{\mathfrak{v}_k,P_k,Q_k\}_{k \in \Chi{i}\cup\{i\}}} & \{\mathfrak{v}_i,P_i,Q_i,a_i\br{t}p_i+b_i\br{t}q_i+c_i\br{t}\} \\
\text{Subject to } & \eqref{eq:PFnewa},\eqref{eq:PFnewb},\eqref{eq:PFnewc} \\
& 0 \in \text{Relax}\br{\mathfrak{v}_k\mathfrak{i}_k-\br{P_k^2+Q_k^2},\mathbb{I}_k}, k \in \Chi{i}\cup\{i\} \label{eq:PropNCRelax}\\
& \mathfrak{v}_k \in \mathfrak{v}\br{\mathbb{I}_k}, P_k\in P\br{\mathbb{I}_k},Q_k\in Q\br{\mathbb{I}_k},k\in\{i\}\cup\Chi{i} \\
& p_i\in [\lb{p_i}\br{t},\ub{p_i}\br{t}],q_i\in [\lb{q_i}\br{t},\ub{q_i}\br{t}]
\end{align} \label{eq:PropBoundRelax}
\end{subequations}
\noindent
The relaxations we use depend on the bound constraints $\mathbb{I}_k$ and are denoted as $\text{Relax}\br{\mathfrak{v}_k\mathfrak{i}_k-\br{P_k^2+Q_k^2},\mathbb{I}_k}$. A simple example of this kind of relaxation is shown pictorially in Figure \ref{fig:ConvexRelaxInt} for the constraint $xy=1$. This relaxation gets tighter as the bound constraints on $x$ get tighter, leading to the property that any feasible solution $\br{x,y}$ of the relaxation satisfies $|xy-1| = O\br{\epsilon}$, where $\epsilon$ is the size of the interval constraint on $x$ (this is formalized in Lemma \ref{lem:AppRelax} in the Appendix Section \ref{sec:AppRelax}).
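The tightening behavior of the standard McCormick envelope for a bilinear term is easy to verify numerically (a sketch only; we evaluate the envelope at a fixed point rather than solving the full relaxation, and the function name is ours):

```python
def mccormick(x, y, xl, xu, yl, yu):
    """McCormick envelope bounds on the bilinear term w = x*y over the
    box [xl, xu] x [yl, yu]: returns (lower, upper) valid for any
    (x, y) in the box."""
    lo = max(xl * y + x * yl - xl * yl, xu * y + x * yu - xu * yu)
    hi = min(xu * y + x * yl - xu * yl, xl * y + x * yu - xl * yu)
    return lo, hi

# The point (0.5, 2.0) satisfies x*y = 1 exactly; the envelope around it
# must contain 1, and it tightens as the box shrinks.
wide = mccormick(0.5, 2.0, 0.3, 0.7, 1 / 0.7, 1 / 0.3)
tight = mccormick(0.5, 2.0, 0.45, 0.55, 1 / 0.55, 1 / 0.45)
```

Here `wide` corresponds to the box of Figure \ref{fig:ConvexRelaxInt}, and `tight` to a box of roughly a quarter its width; the envelope gap shrinks accordingly.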
\begin{figure}
\begin{center}
\includegraphics[width=.4\textwidth]{McCormickSimple.pdf}
\end{center}
\caption{Convex relaxation of nonlinear constraint $xy=1$ over the region $x\in [0.3,0.7],y\in [\frac{1}{0.7},\frac{1}{0.3}]$: The set of points satisfying the constraint $xy=1,x \in [.3,.7]$ is plotted in the solid curve (blue). The convex region enclosed by the dashed lines (black) is the feasible region of the convex relaxation. }\label{fig:ConvexRelaxInt}
\end{figure}
\begin{algorithm}
\begin{algorithmic}
\State $\br{j,k} \gets \Chi{i}$
\State $n_i \gets 0$
\State $\mathcal{I}_i \gets \mathrm{Partition}\br{\mathcal{X}_i,\epsilon}$
\For{$m_i=1,\ldots,|\mathcal{I}_i|$}
\For{$m_j \in 1,\ldots,|\eta^j|$}
\For{$m_k \in 1,\ldots,|\eta^k|$}
\State $\br{\mathbb{I},p,q} \gets \mathrm{PropBound}\br{i,\mathcal{I}_i\br{m_i},\beta^{j}\br{m_j},\beta^k\br{m_k}}$
\If {$\mathbb{I} \neq \emptyset$}
\State $n_i \gets n_i+1,\eta^i\br{n_i}\gets \br{m_j,m_k}$
\State $\beta^i\br{n_i} \gets \mathbb{I},\kappa^i\br{n_i} \gets c_i\br{p,q}+\kappa^j\br{m_j}+\kappa^k\br{m_k}$
\State $p^i\br{n_i} \gets p,q^i\br{n_i} \gets q$
\EndIf
\EndFor
\EndFor
\EndFor
\State \Return $\eta^i,\kappa^i,p^i,q^i,\beta^i$
\end{algorithmic}
\caption{Interval DP update at node $i$ with children $\br{j,k}$} \label{Alg:DPUpdate}
\end{algorithm}
\begin{algorithm}
\begin{algorithmic}
\State $n_i \gets 0$
\For{$m_i \in \left\{1,\ldots,\left\lceil \frac{\ub{\mathfrak{v}_i}-\lb{\mathfrak{v}_i}}{\epsilon}\right\rceil\right\}$}
\For{$t=1,\ldots,|\mathbf{S}_i|$}
\If {$[\lb{P},\ub{P}]\cap [\lb{p_i\br{t}},\ub{p_i\br{t}}]\neq \emptyset$ and $[\lb{Q},\ub{Q}]\cap [\lb{q_i\br{t}},\ub{q_i\br{t}}]\neq \emptyset$}
\State $n_i \gets n_i+1$
\State $\mathbb{I}^{\mathfrak{v}} \gets [\lb{\mathfrak{v}_i}+\br{m_i-1}\epsilon,\min\br{\lb{\mathfrak{v}_i}+m_i\epsilon,\ub{\mathfrak{v}_i}}],\mathbb{I}^{P} \gets [\lb{P},\ub{P}]\cap [\lb{p_i\br{t}},\ub{p_i\br{t}}]$
\State $\mathbb{I}^{Q} \gets [\lb{Q},\ub{Q}]\cap [\lb{q_i\br{t}},\ub{q_i\br{t}}],\beta^{i}\br{n_i} \gets \mathbb{I}^{\mathfrak{v}}\times \mathbb{I}^{P} \times \mathbb{I}^{Q}$
\State $\br{p^i\br{n_i},q^i\br{n_i}}\gets \displaystyle\argmin_{p_i\in \mathbb{I}^{P},q_i\in \mathbb{I}^{Q}} c_i\br{t}+a_i\br{t}p_i+b_i\br{t}q_i$
\State $\kappa^i\br{n_i} \gets \displaystyle\min_{p_i\in \mathbb{I}^{P},q_i\in \mathbb{I}^{Q}} c_i\br{t}+a_i\br{t}p_i+b_i\br{t}q_i$
\EndIf
\EndFor
\EndFor
\State \Return $\kappa^i,p^i,q^i,\beta^i$
\end{algorithmic}
\caption{Interval DP update at leaf node} \label{Alg:DPUpdateLeaf}
\end{algorithm}
\subsection{Analysis of the interval DP algorithm} \label{sec:DPThm}
We now present formal results verifying the correctness, optimality, and feasibility properties of the solutions produced by our DP algorithm.
\noindent
Before we state our main theorem that provides an approximation guarantee, we note that we can always convert an OPF problem on an arbitrary tree network to a problem on a tree network with maximum degree $3$:
\begin{lemma}\label{lem:DegLem}
An OPF problem on an arbitrary tree network with $n$ nodes and maximum degree $d$ can be converted to an OPF problem on a modified tree network with maximum degree $3$ and at most $nd$ nodes.
\end{lemma}
\begin{proof}
See Appendix Section \ref{sec:AppDeg}.
\end{proof}
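The node-splitting construction behind Lemma \ref{lem:DegLem} can be sketched as follows (a hypothetical rendering: we assume the standard trick of chaining auxiliary copies of a high-degree node; in the OPF setting the auxiliary links would carry zero impedance and zero injection):

```python
def binarize(node, children):
    """Node-splitting: a node with children c1..cm is replaced by a chain
    of auxiliary copies (node, node', node'', ...) so that every node has
    at most two children, i.e., degree at most 3 counting the parent edge.
    Returns a list of (parent, left_child, right_child) triples."""
    if len(children) <= 2:
        return [(node, *children)]
    aux = node + "'"
    # keep the first child; hand the remaining children to an auxiliary copy
    return [(node, children[0], aux)] + binarize(aux, children[1:])
```

A node with $m$ children is replaced by a chain of $m-1$ binary nodes, so the transformed tree has at most $nd$ nodes, matching the bound in the lemma.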
\begin{theorem}[Approximate optimality property]\label{thm:ApproxMain}
Suppose that Assumptions \ref{assump:A} and \ref{assump:B} (see Section \ref{sec:OPFIntro}) hold and that the DP Algorithm \ref{Alg:DPMain} (with the update rule from Algorithm \ref{Alg:DPUpdate}) is run on a tree network with maximum degree $3$ and with $0<\epsilon<1$. Let $\opt{m}_1,\opt{m}_2,\ldots,\opt{m}_n$ be the indices of the variables in the optimal solution. Let $\br{\opt{\mathfrak{v}}_i,\opt{P}_i,\opt{Q}_i}=\mathrm{mid}\br{\beta^i\br{\opt{m}_i}}$ and $\opt{p}_i=p^i\br{\opt{m}_i},\opt{q}_i=q^i\br{\opt{m}_i}$. Then, the following guarantees hold:
\begin{itemize}
\item[1]\emph{Approximation guarantee}: $\br{\opt{\mathfrak{v}},\opt{P},\opt{Q},\opt{p},\opt{q}}$ satisfies each constraint of \eqref{eq:OPFform} with a bounded error $\zeta \epsilon$ where $\zeta$ is a constant that depends only on $M$ (the constant from Assumption \ref{assump:B} in Section \ref{sec:OPFIntro}).
\item[2] \emph{Runtime bound}: There is a constant $\zeta^\prime$ (depending on $M$) such that the algorithm requires at most $n \zeta^\prime \powb{\frac{1}{\epsilon}}{5}$ calls to the $\mathrm{PropBound}$ routine.
\item[3] \emph{Optimality guarantee}: The cost of the solution is bounded as:
\[\sum_{i=0}^n c_i\br{\opt{p}_i,\opt{q}_i} \leq \mathrm{OPT}\]
where $\mathrm{OPT}$ is the optimal cost of the original problem \eqref{eq:OPFform}.
\end{itemize}
Thus, we find a super-optimal approximately feasible solution in time linear in the size of the network and polynomial in the error tolerance.
\end{theorem}
\begin{proof}
See Appendix Section \ref{sec:AppMain}.
\end{proof}
\begin{remark} Theorem \ref{thm:ApproxMain} formalizes the intuition that as we use finer intervals in the Interval DP algorithm, we get closer to the optimal solution in terms of cost, and we get a tighter relaxation as well. The numerical results in Section \ref{sec:Num} show that our algorithm often finds the true optimal solutions even with a finite error tolerance.
\end{remark}
\section{Introduction}
In this paper, we study a novel application of well-known AI
techniques (constraint programming and inference in graphical models)
to a difficult engineering problem - the optimization of resources in
a power distribution network. As larger amounts of renewable
generation sources (solar/wind) are incorporated into the power grid,
the availability of generation capacity becomes uncertain (due to
dependence on unpredictable weather phenomena). However, the physics
of the power grid imply the need to maintain real-time balance between
demand and generation. Exploiting the flexibility of electricity
demand becomes central for solving this problem, which is the core of
the ``smart grid'' vision ({\tt https://www.smartgrid.gov/}). Thus,
future grids will require efficient algorithms that can process data
from millions of consumers and efficiently compute optimal ways of
exploiting demand-side flexibility while respecting engineering
constraints. In this paper, we develop a novel algorithm guaranteed to
compute an approximately optimal solution for this problem in
polynomial time.
More concretely, we study the Optimal Power Flow (OPF) problem
\cite{carpentier1962contribution}. At an abstract level, one can view
this as a network flow optimization:
\begin{align*}
\text{ minimize } & \text{ cost of generating electricity } \\
\text{ subject to } & \text{ conservation of flows, flows consistent with voltages,} \\
& \text{ demands are met}, \text{ engineering limits are respected}
\end{align*}
where the engineering limits typically refer to capacities of the
transmission lines in the power grid and limits on voltages.
However, as opposed to a standard network-flow problem for which there
are well-known efficient algorithms, the physics of electric power
flow make the above problem challenging. Electrical flows cannot be
arbitrary, but are driven by differences in voltages, so that the flow
on a transmission line is a nonlinear function of the voltage difference
between the two ends of the line. Due to this nonlinear constraint,
the OPF problem becomes non-convex. In fact, it is strongly
NP-hard over arbitrary networks \cite{bienstock2015strong} and weakly
NP-hard over tree-structured networks \cite{PascalHardness}. The special
case of tree-structured networks is particularly important in the
context of the smart grid as distribution networks (which connect
high-voltage long-distance power transmission network to individual
consumers) are typically tree-structured. In order to exploit
demand side flexibility, we will need efficient OPF algorithms on
tree-networks.
In recent years, several researchers have studied applications of
convex relaxation techniques to the OPF problem
\cite{low2014convex}. In particular, for the tree OPF problem, elegant
results have been developed characterizing the conditions under which
convex relaxations of OPF are guaranteed to be exact
\cite{BaseQCQP,lavaei2013geometry,sojoudi2014exactness,gan2015exact}. The
most general results are presented in \cite{gan2015exact} and cover
several practical instances of OPF over tree networks. However, the
conditions for exactness require assumptions that are incompatible
with the above ``smart grid'' applications (absence of discrete
variables, limits on flow reversal, ...).
In this paper, we develop a new approach to solving optimal power flow
on tree-structured networks by using techniques from Constraint
Programming (CP) and Graphical Models (GM). We first restate the tree OPF
problem as an inference problem over a tree-structured factor
graph. Based on this representation, we develop an algorithm that
computes a super-optimal approximately feasible solution. The running time of the algorithm is linear
in the size of the network and polynomial in $\frac{1}{\epsilon}$,
where $\epsilon$ is the error tolerance allowed. Relative to the
existing algorithms based on convex relaxations, the approach we
develop has the following advantages:
\begin{itemize}
\item It can handle mixed-integer optimization problems (involving
both discrete and continuous variables) and hence is capable of
addressing load-control and distributed generation applications with
discrete components such as on/off constraints, switching
transformer taps, and capacitor banks.
\item Unlike \cite{gan2015exact}, the approach does not require
  restrictive assumptions on flow directionality or voltage limits. It
  also does not require costs to be convex, allowing arbitrary (possibly
  discontinuous) piecewise-linear costs.
\item The resulting algorithm is inherently distributed and can be
implemented using a message-passing framework, which is expected to
be a significant advantage for future distribution networks.
\end{itemize}
\noindent
On the other hand, a disadvantage of our algorithm is that we only
produce approximate solutions and there may be cases where achieving
acceptable error tolerances will require intractably fine
discretizations. However, we show that, for several practical OPF
problems, this problem can be alleviated by leveraging CP techniques.
A closely related approach, developed in an abstract form and stated in the language of linear programming, was presented in \cite{bienstock2015lp}. The authors present a general framework that computes super-optimal and approximately feasible solutions to graph-structured mixed-integer polynomial optimization problems. While our approach provides similar guarantees, our main contributions relative to that work are as follows:
\begin{itemize}
\item[1] We restate the approach in the intuitive language of graphical models. This allows us to take advantage of the inference techniques developed for graphical models \cite{wainwright2008graphical,sontag2010approximate}, and study problems beyond optimization (probabilistic inference, for example).
\item[2] As opposed to the point discretization approach employed in \cite{bienstock2015lp}, we develop an interval discretization approach that allows us to use interval CP techniques (bound-tightening, constraint propagation, etc.) to achieve a practically efficient implementation of the algorithm.
\end{itemize}
\noindent
We note that a related approach has been used for finding approximate Nash equilibria in tree-structured graphical games \cite{kearns2001graphical}. Related ideas have also been explored in detail in the graphical models and constraint programming literature \cite{dechter2003constraint,wainwright2008graphical,johnson2008convex,sontag2010approximate}.
The rest of this paper is organized as follows. Section
\ref{sec:TechIntro} provides the background on power systems and
graphical models necessary for this paper. Section \ref{sec:OPFasGM}
formulates the OPF problem over a tree network as a MAP-inference
problem over a tree-structured factor graph. Section \ref{sec:DPInt}
describes a finite dynamic-programming algorithm based on an interval
discretization of the variables in the factor graph and presents
guarantees associated with the algorithm. Section \ref{sec:Num}
evaluates our algorithm and compares it to off-the-shelf solvers on a
number of IEEE distribution network test cases. Finally, Section
\ref{sec:Conc} summarizes our findings and presents directions for
future work.
\section{Numerical Illustrations}\label{sec:Num}
In this section, we present numerical tests of our approach on some IEEE benchmark networks - power grids with network topologies and loads that are deemed representative of real power systems. In particular, we use a set of sub-networks of the 56-bus distribution network \cite{saverioTest} (based on the IEEE 123 bus distribution feeder network \cite{IEEEdist}). We create additional subnetworks (14/30/56 bus) by aggregating nodes in the original network.
We study discrete load-curtailment problems, which are mixed-integer nonconvex nonlinear optimization problems (MINLPs). We analyze a highly overloaded distribution network: a scenario that might arise just before a blackout, or after the loss of a major generator or transmission line. The goal is to curtail (reduce the consumption of) a small number of loads so that the power grid is restored to its normal operating state (bringing voltages back to the acceptable range). A cost is incurred for curtailing a load (typically proportional to the reduction of load). The total cost is the sum of the load-shedding costs plus a generation cost at the substation (bus $0$). The formal statement of the problem is as follows:
\begin{align*}
\mini_{\sigma,\mathfrak{v},S} \quad &c_0\br{p_0,q_0}+\sum_{i=1}^n c_i\br{\sigma_i} \\
\text{ Subject to } & \eqref{eq:ACpbreac},\eqref{eq:ACpb},\eqref{eq:ACPFohm} \\
& p_i=p_i^{nom}(1-\sigma_i)+p_i^{red}\sigma_i,q_i=q_i^{nom}(1-\sigma_i)+q_i^{red}\sigma_i \\
& \sigma_i\in \{0,1\}, \mathfrak{v}^{L}_i \leq \mathfrak{v}_i \leq \mathfrak{v}^{U}_i
\end{align*}
The values $p_i^{nom},q_i^{nom}$ denote the nominal values of the real and reactive demands at bus $i$, and $p_i^{red},q_i^{red}$ denote the reduced (curtailed) values of the loads. $\sigma_i \in \{0,1\}$ denotes the curtailment decision ($\sigma_i=1$ denotes curtailment). Curtailing load $i$ incurs a cost $c_i\br{\sigma_i}$.
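The curtailment model for the injections can be written directly (a trivial but clarifying sketch; the variable names follow the formulation above):

```python
def curtailed_injection(p_nom, q_nom, p_red, q_red, sigma):
    """Injections at a bus under the curtailment decision sigma in {0, 1}:
    sigma = 0 keeps the nominal load, sigma = 1 switches to the reduced load."""
    assert sigma in (0, 1)
    p = p_nom * (1 - sigma) + p_red * sigma
    q = q_nom * (1 - sigma) + q_red * sigma
    return p, q
```

Because $\sigma_i$ is binary, each bus contributes exactly two pieces to $\mathbf{S}_i$, which is what makes the enumeration over $|\mathbf{S}_i|$ in the interval relaxation operator cheap for this problem.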
\noindent
We run the DP algorithm (Algorithm \ref{Alg:DPMain} with the update
step from Algorithms \ref{Alg:DPUpdate} and \ref{Alg:DPUpdateLeaf}) on
our three test cases (14, 30, and 56 buses). We study the ratio of the
DP optimum to the true optimum (or an upper bound on it), the maximum
constraint violation, and the CPU time as functions of $\epsilon$. To
ensure that the results are not artifacts of the particular test cases
used, the results are averaged over $50$ instances of the problem
generated by randomly perturbing the loads in the original problem by
up to $10\%$ at each bus. We show both the mean and standard deviation
of each quantity. We summarize our observations below.
\begin{itemize}
\item Since our approach is based on a relaxation, it may produce
  infeasible solutions. However, as the radius $\epsilon$ of the
  interval discretization decreases, the degree of infeasibility, as
  measured by the maximum constraint violation, decreases. We quantify
  this dependence by taking the optimal configuration produced by the
  DP algorithm and solving the power flow equations (using Newton's
  method). We then check whether the power flow solution satisfies the
  bound constraints on voltages and, if not, compute the maximum
  violation, as shown in Figures \ref{fig:14busa},\ref{fig:40busa},
  and \ref{fig:56busa}. The results show that our DP algorithm
  consistently finds near-feasible super-optimal solutions.
\item The other parameter of interest is the degree of
super-optimality, which measures how close the optimal cost of the
solution found by the DP is to the true optimal cost of the original
problem \eqref{eq:OPFform}. For the $14$ bus network, it is feasible
to find the true optimal cost using a brute-force search. However,
for larger networks, we rely on the BONMIN solver to get a feasible
solution and bound the optimality gap. The results shown in Figures
\ref{fig:14busb},\ref{fig:40busb},\ref{fig:56busb} show that when
$\epsilon$ is sufficiently small, the DP optimum is within a factor of
$0.99$ of the true optimum. Note that the non-monotonic behavior of
the optimum is due to the fact that we use an adaptive
discretization. Even though our interval discretization gets finer
as $\epsilon$ gets smaller, it is not guaranteed to be a strict
refinement of the intervals corresponding to a larger $\epsilon$.
\item Finally, we study the dependence of the running time of the
algorithm on $\epsilon$, in Figures
\ref{fig:14busc},\ref{fig:40busc},\ref{fig:56busc}. The running time
of the algorithm grows as $\epsilon$ decreases, but the plots show
that a good optimality ratio and an acceptable error can be achieved
with a fairly small running time of several seconds.
\end{itemize}
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}\includegraphics[width=.95\columnwidth]{Res14busCviols}\caption{Constraint Violation}\label{fig:14busa}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=.95\columnwidth]{Res14busObjs}
\caption{Superoptimality ratio}\label{fig:14busb}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=.95\columnwidth]{Res14busTims}
\caption{Computation time}\label{fig:14busc}
\end{subfigure}
\caption{14 bus network}\label{fig:ExptA}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.33\textwidth}\includegraphics[width=.95\columnwidth]{Res40busCviols}\caption{Constraint Violation}\label{fig:40busa}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=.95\columnwidth]{Res40busObjs}
\caption{Superoptimality ratio}\label{fig:40busb}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=.95\columnwidth]{Res40busTimsdf}
\caption{Computation time}\label{fig:40busc}
\end{subfigure}
\caption{40 bus network}\label{fig:ExptB}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.33\textwidth}\includegraphics[width=.95\columnwidth]{Res56busCviols}\caption{Constraint Violation}\label{fig:56busa}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=.95\columnwidth]{Res56busObjs}
\caption{Superoptimality ratio}\label{fig:56busb}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=.95\columnwidth]{Res56busTimsdf}
\caption{Computation time}\label{fig:56busc}
\end{subfigure}
\caption{56 bus network}\label{fig:ExptC}
\end{figure}
\noindent
We also compared our algorithm to other available MINLP solvers: (1)
BONMIN \cite{bonami2008algorithmic} is a solver guaranteed to find
global optima of convex MINLPs. It can be used as a heuristic solver
for nonconvex MINLPs but with no guarantees on global optimality. (2)
COUENNE ({\tt http://www.coin-or.org/Couenne/}) is a solver guaranteed
to find global optima of nonconvex MINLPs based on a spatial branch
and bound algorithm. We access both solvers through the Julia
interface available via the JuMP package
\cite{DunningHuchetteLubin2015}. For the problems we studied, COUENNE
failed to converge within an acceptable time limit ($1$ hr) so we do
not report results from COUENNE. The BONMIN results are summarized in
Table \ref{tab:BONMIN}. While the BONMIN solver was faster in our
experiments, it is a heuristic solver, i.e, it is not guaranteed to
find a globally optimal solution. Indeed, BONMIN indeed fail to find
optimal solutions in the $56$ bus network. In contrast, our DP
approach always succeeds in finding a global optimum (see table
\ref{tab:DP}), although for the $56$ bus network, it requires a very
small $\epsilon$ which drives up the running time of the algorithm to
$1170s$. The reason for this behavior is that there is a super-optimal
solution that violates the voltage constraints by only $10^{-4}
\%$. This means that the discretization of voltages has to be smaller
than this for the solver to be able to recognize infeasibility of this
solution, and find the true global optimum.
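The role of the discretization parameter can be illustrated with a toy feasibility check (a sketch, not the actual DP code; all numbers are illustrative): a constraint enforced on a grid of spacing $\epsilon$ can only reject a candidate whose violation exceeds $\epsilon$, which is why the $56$ bus case requires $\epsilon$ below the $10^{-4}\%$ violation of the super-optimal point.

```python
# Toy illustration: a feasibility check discretized to tolerance eps can
# only reject a candidate whose constraint violation exceeds eps.
def passes_discretized_check(v, v_min, eps):
    """Constraint v >= v_min, enforced only up to the grid spacing eps."""
    return v >= v_min - eps

violation = 1e-6             # mimics the tiny violation in the 56 bus case
candidate = 1.0 - violation  # super-optimal but slightly infeasible point

accepted_coarse = passes_discretized_check(candidate, 1.0, eps=1e-4)  # slips through
accepted_fine = passes_discretized_check(candidate, 1.0, eps=1e-8)    # rejected
```

With the coarse grid the infeasible candidate is accepted and its (lower) cost is reported as the optimum; only a grid finer than the violation restores correctness.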
\begin{table}
\begin{center}
\caption{Performance of BONMIN solver on discrete load-control: Optimality gap computed using lower bound from DP algorithm.} \label{tab:BONMIN}
\begin{tabular}{| l | l | l |}
\hline
Test Case & Optimality Gap & Computation Time \\ \hline
14 bus & $0 \%$ & .1s \\ \hline
30 bus & $0 \%$ & .2s \\ \hline
56 bus & $0.1 \%$ & .5s \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Performance of DP solver on discrete load-control.} \label{tab:DP}
\begin{tabular}{| l | l | l | l |}
\hline
Solver & Test Case & Optimality Gap & Computation Time \\ \hline
DP & 14 bus & $0 \%$ & 1s \\ \hline
DP & 30 bus & $0 \%$ & 40s \\ \hline
DP & 56 bus & $0 \%$ & 1170s \\ \hline
\end{tabular}
\end{center}
\end{table}
\section{OPF as a Graphical Model}\label{sec:OPFasGM}
This section shows how to rewrite the OPF problem \eqref{eq:OPFform} as
a graphical model inference problem. We first define the augmented
variables $\beta_i=\begin{pmatrix}
\mathfrak{v}_i & P_i & Q_i \end{pmatrix},i=1,\ldots,n,\beta_0 =\begin{pmatrix}
\mathfrak{v}^{ref} & 0 & 0
\end{pmatrix}$. Note that $\beta_0$ is fixed and is only introduced for notational convenience. In terms of the augmented variables $\beta_i$, the constraints \eqref{eq:ACPFv},\eqref{eq:ACPFf} are simply bound constraints on components of $\beta_i$. The domain of the variable $\beta_i$ is defined as $\mathcal{X}_i =[\lb{\mathfrak{v}}_i,\ub{\mathfrak{v}}_i] \times [\lb{P}_i,\ub{P}_i] \times [\lb{Q}_i,\ub{Q}_i]$. Further, using \eqref{eq:PF}, we can define
\begin{subequations}
\begin{align}
p_i\br{\beta_i,\beta_{\Chi{i}}} & =P_i-\sum_{k\in\Chi{i}} \br{P_k-r_k \frac{P_k^2+Q_k^2}{\mathfrak{v}_k}} \\
q_i\br{\beta_i,\beta_{\Chi{i}}} & =Q_i-\sum_{k\in\Chi{i}} \br{Q_k-x_k \frac{P_k^2+Q_k^2}{\mathfrak{v}_k}} \\
\mathfrak{v}_i\br{\beta_k} & = \mathfrak{v}_k+\br{r_k^2+x_k^2}\frac{\br{P_k^2+Q_k^2}}{\mathfrak{v}_k}-2\br{P_kr_k+Q_kx_k}
\end{align}
\end{subequations}
\noindent
For each $i=0,\ldots,n$, we define
\begin{align}
&H_i\br{\beta_i,\beta_{\Chi{i}}}= \\
&\begin{cases}
c_i\br{p_i\br{\beta_i,\beta_{\Chi{i}}},q_i\br{\beta_i,\beta_{\Chi{i}}}} & \text{ if } \left\{\begin{array}{l} \br{p_i\br{\beta_i,\beta_{\Chi{i}}},q_i\br{\beta_i,\beta_{\Chi{i}}}} \in \mathbf{S}_i \\
\mathfrak{v}_i = \mathfrak{v}_i\br{\beta_k} \quad \forall k \in \Chi{i} \end{array}\right. \\
\infty & \text{ otherwise }
\end{cases} \label{eq:Hdef}
\end{align}
This is a convenient shorthand that allows us to write the constrained optimization problem \eqref{eq:OPFform} as an unconstrained problem, thereby simplifying notation. The OPF problem \eqref{eq:OPFform} is equivalent to
\begin{align}
\mini_{\{\beta_i\in \mathcal{X}_i\}_{i=1}^n} \sum_{i=0}^n H_i\br{\beta_i,\beta_{\Chi{i}}}\label{eq:optDP}
\end{align}
This corresponds to a graphical model in the factor graph representation \cite{kschischang2001factor} where the nodal variables are $\beta_i$ and the factors correspond to the $H_i$.
\begin{definition}[OPF factor graph]\label{def:FactG}
The problem \eqref{eq:optDP} corresponds to a factor graph: The set of variable nodes is
\[\beta_i\in \mathcal{X}_i \;\;\; (i=1,\ldots,n)\] and the set of factors is
\[H_i \;\;\; (i=0,\ldots,n)\]
For $i=1,\ldots,n$, $\beta_i$ is connected to $H_i,H_{\Par{i}}$ (see Figure \ref{fig:FactGraph}).
\end{definition}
\begin{theorem}
The factor graph from definition \ref{def:FactG} is a tree. Hence, the problem \eqref{eq:optDP} can be solved exactly by a two-pass dynamic programming algorithm.
\end{theorem}
\begin{proof}
Every variable node $\beta_i \;\; (i=1,\ldots,n)$ is connected to exactly two factors, $H_{\Par{i}}$ and $H_i$. Thus, the total number of edges in the factor graph is $2n$ and the total number of nodes is $2n+1$ (there are $n+1$ factors $H_0,\ldots,H_n$ and $n$ variable nodes $\beta_1,\ldots,\beta_n$). Further, the factor graph is connected: variable nodes that are neighbors in the power network are two hops apart in the factor graph. The factor graph is therefore a connected graph in which the number of vertices equals the number of edges plus one, so the graph is a tree. The result on efficient inference over a tree factor graph is standard \cite{kschischang2001factor}.
\end{proof}
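The counting argument in the proof can be checked mechanically; the following Python sketch (illustrative only; the tree is random, not a power network) builds the factor graph of Definition \ref{def:FactG} for a random rooted tree and verifies the node, edge, and connectivity counts.

```python
import random
from collections import defaultdict, deque

# For a rooted tree on buses 0..n, the factor graph has n variable nodes
# beta_1..beta_n and n+1 factors H_0..H_n (2n+1 nodes in total), and each
# variable connects to exactly two factors, giving 2n edges.
random.seed(0)
n = 20
parent = {i: random.randrange(0, i) for i in range(1, n + 1)}  # random rooted tree

nodes = [('beta', i) for i in range(1, n + 1)] + [('H', i) for i in range(0, n + 1)]
edges = []
for i in range(1, n + 1):
    edges.append((('beta', i), ('H', i)))          # beta_i appears in H_i ...
    edges.append((('beta', i), ('H', parent[i])))  # ... and in H_{parent(i)}

num_nodes, num_edges = len(nodes), len(edges)

# Connectivity check by breadth-first search: connected + (V = E + 1) => tree.
adj = defaultdict(list)
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)
seen, queue = {nodes[0]}, deque([nodes[0]])
while queue:
    u = queue.popleft()
    for v in adj[u]:
        if v not in seen:
            seen.add(v)
            queue.append(v)
is_connected = len(seen) == num_nodes
```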
\begin{figure}
\centering\includegraphics[width=.55\textwidth]{FactorGraph.png}
\caption{Transformation from the power network to factor graph: Each variable node $i$ is connected to two factors - $H_{\Par{i}}$ and $H_i$} \label{fig:FactGraph}
\end{figure}
\noindent
The dynamic-programming (DP) approach is formalized in Algorithm
\ref{Alg:DPMain}. The algorithm works by passing ``messages''
$\kappa_t\br{\beta_t}$. Let $\Desc{t}$ denote all the nodes in the
subtree rooted at $t$ (the descendants of $t$, including $t$ itself).
\begin{algorithm}
\begin{algorithmic}
\State $\mathrm{Processed}=\emptyset$
\State $\kappa_i\br{\beta_i}=0,\eta_i\br{\beta_i}=\emptyset,\;i=0,1,\ldots,n$
\While{$|\mathrm{Processed}|<n$}
\State Choose $i \in \{k: k \not\in \mathrm{Processed},\Chi{k} \subseteq \mathrm{Processed}\}$
\State
\begin{subequations}
\begin{align}
\kappa_{i}\br{\beta_i} \gets \displaystyle\min_{\beta_{\Chi{i}}}H_i\br{\beta_i,\beta_{\Chi{i}}} +\sum_{k \in \Chi{i}}\kappa_{k}\br{\beta_k} \\
\eta_i\br{\beta_i} \gets \displaystyle\argmin_{\beta_{\Chi{i}}}H_i\br{\beta_i,\beta_{\Chi{i}}} +\sum_{k \in \Chi{i}}\kappa_{k}\br{\beta_k}
\end{align} \label{eq:DPMain}
\end{subequations}
\State $\mathrm{Processed}=\mathrm{Processed}\cup\{i\}$
\EndWhile
\State $\br{\opt{c},\opt{\beta_{\Chi{0}}}} \gets\displaystyle\min_{\beta_{\Chi{0}}}H_0\br{\beta_0,\beta_{\Chi{0}}} +\sum_{k \in \Chi{0}}\kappa_{k}\br{\beta_k} $
\State $\mathrm{Processed}=\{0\}$.
\While{$|\mathrm{Processed}|\leq n$}
\State Choose $i \in \{k: k \not\in \mathrm{Processed},\Par{k}\in \mathrm{Processed}\} $
\State $\opt{\beta}_{\Chi{i}}\gets\eta_{i}\br{\opt{\beta}_i}$
\State $\mathrm{Processed}\gets \mathrm{Processed} \cup \Chi{i}$
\EndWhile \\
\Return $\br{\opt{c},\opt{\beta}}$
\end{algorithmic}
\caption{DP Algorithm: (optimization implicitly subject to $\beta_i \in \mathcal{X}_i$)} \label{Alg:DPMain}
\end{algorithm}
The message $\kappa_t\br{\beta_t}$ denotes the optimal value of the following subproblem of \eqref{eq:optDP}:
\begin{subequations}
\begin{align*}
\kappa_t\br{\beta_t}=& \min_{\beta_i\in \mathcal{X}_i :i \in \Desc{t}\setminus \{t\}} \sum_{i\in \Desc{t}} H_i\br{\beta_i,\beta_{\Chi{i}}}. \end{align*}
\end{subequations}
The messages can be computed recursively starting at the leaves of the tree.
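For concreteness, the two passes of Algorithm \ref{Alg:DPMain} can be written out on a toy tree with small discretized domains. The Python sketch below is illustrative only: the factor $H$ is an arbitrary quadratic coupling, not a power-flow quantity, and the recovered optimum is checked against brute-force enumeration.

```python
import itertools

# Toy tree: node 0 is the root (substation), with children as below.
children = {0: [1], 1: [2, 3], 2: [], 3: []}
domain = {i: [0, 1, 2] for i in (1, 2, 3)}   # discretized domains X_i

def H(i, b_i, b_children):
    # arbitrary illustrative factor, not a power-flow quantity
    return (b_i - 1) ** 2 + sum((b_i - c) ** 2 for c in b_children)

# Forward pass: process a node once all its children are processed.
kappa, eta = {}, {}
for i in (2, 3, 1):
    kappa[i], eta[i] = {}, {}
    for b in domain[i]:
        val, combo = min(
            (H(i, b, cmb) + sum(kappa[k][c] for k, c in zip(children[i], cmb)), cmb)
            for cmb in itertools.product(*(domain[k] for k in children[i]))
        )
        kappa[i][b], eta[i][b] = val, combo

# Root step (beta_0 is fixed; we take its value to be 1 here).
opt_val, b1_star = min((H(0, 1, (b1,)) + kappa[1][b1], b1) for b1 in domain[1])

# Backward pass: recover the minimizing children from the eta tables.
assignment = {1: b1_star}
assignment[2], assignment[3] = eta[1][b1_star]

# Brute-force check over all assignments.
brute = min(
    H(0, 1, (b1,)) + H(1, b1, (b2, b3)) + H(2, b2, ()) + H(3, b3, ())
    for b1 in domain[1] for b2 in domain[2] for b3 in domain[3]
)
```

The DP value `opt_val` coincides with `brute`, while touching far fewer assignments than full enumeration on larger trees.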
For the 5 bus network shown in Figure \ref{fig:5buspic}, Algorithm \ref{Alg:DPMain} proceeds as follows: At $T=0$, nodes $3,4$ send messages $\kappa_3,\kappa_4$ (which are simply equal to $H_3,H_4$) to their parent nodes. For brevity, we do not write the constraint $\beta_i \in \mathcal{X}_i$ in the updates below explicitly. At $T=1$, node $2$ computes
\[\kappa_2\br{\beta_2}=\min_{\beta_3\in \mathcal{X}_3} H_2\br{\beta_2,\beta_3}+\kappa_3\br{\beta_3},\eta_2\br{\beta_2}=\argmin_{\beta_3\in \mathcal{X}_3} H_2\br{\beta_2,\beta_3}+\kappa_3\br{\beta_3}.\]At $T=2$, node $1$ computes
\begin{align*}
\kappa_1\br{\beta_1}=\min_{\beta_2\in \mathcal{X}_2,\beta_4 \in \mathcal{X}_4} H_1\br{\beta_1,\beta_2,\beta_4}+\kappa_2\br{\beta_2}+\kappa_4\br{\beta_4} \\
\eta_1\br{\beta_1} = \argmin_{\beta_2\in \mathcal{X}_2,\beta_4 \in \mathcal{X}_4} H_1\br{\beta_1,\beta_2,\beta_4}+\kappa_2\br{\beta_2}+\kappa_4\br{\beta_4}.
\end{align*}
The forward pass ends here, and the backward pass proceeds similarly in the reverse direction:
\begin{align*}
T=3: & \br{\text{OPT},\opt{\beta_1}}=\min_{\beta_1 \in \mathcal{X}_1} \kappa_1\br{\beta_1} \\
T=4: & \br{\opt{\beta_2},\opt{\beta_4}}=\eta_1\br{\opt{\beta_1}}\\
T=5: & \opt{\beta_3}=\eta_2\br{\opt{\beta_2}}.
\end{align*}
\begin{figure}
\centering \includegraphics[width=.32\textwidth]{5bus.png}
\caption{Transformation of ACOPF to Graphical Model} \label{fig:5buspic}
\end{figure}
\section{Background}\label{sec:TechIntro}
In this Section, we introduce all the necessary background on
Alternating Current (AC) power flows and graphical models required to
follow the development of the algorithm in this paper. We have attempted
to make this Section self-contained, providing sufficient details for
the purposes of this paper. Interested readers may consult
textbooks on power engineering \cite{BergenVittal,pai2014computer} and
graphical models
\cite{koller2009probabilistic,dechter2003constraint,wainwright2008graphical} for further details.
In the following, $\mathbb{C}$ denotes the set of complex numbers and $\mathbb{R}$
the set of real numbers. We write $\mathbf{j}=\sqrt{-1}$ for the imaginary
unit, to avoid confusion with currents, as is traditional in power
systems. For $x\in\mathbb{C}$, we
use $\herm{x}$ to denote the complex conjugate of $x$, $\Rep{x}$ the real part,
$\Imp{x}$ the imaginary part, and $\angle x$ the phase of $x$. For $a,b\in\mathbb{C}$, $a\leq b$ denotes the pair of
inequalities $\Rep{a}\leq \Rep{b},\Imp{a}\leq \Imp{b}$.
\begin{comment}
\begin{itemize}
\item{$\mathbb{C}$:}{ The set of complex numbers}%
\item{$\mathbb{R}$:} {The set of real numbers}%
\item{$\mathbf{j}$:}{ The imaginary unit, $\sqrt{-1}$}%
\item{$\herm{x}$:}{ Conjugate of the complex number $x$}%
\item{$\Rep{x}$:}{ Real part of the complex number $x$}%
\item{$\Imp{x}$:}{ Imaginary part of the complex number $x$}%
\item{$\angle x$:}{ The phase of the complex number x ($\in [0,2\pi)]$)}
\item{$a\leq b,a,b\in\mathbb{C}$}: Inequality between complex numbers, equivalent to
\[\Rep{a}\leq\Rep{b},\Imp{a}\leq \Imp{b}.\]
\item{$\mathfrak{i}$}: Square of branch current magnitude
\item{$\mathfrak{v}$}: Square of nodal voltage magnitude
\item{$S$}: Complex branch power flow
\item{$P$}: Real branch power flow
\item{$Q$}: Reactive branch power flow
\item{$s$}: Complex power injection
\end{itemize}
\end{comment}
\subsection{AC Power Flow over a Tree Network}
We will work with a power distribution network transporting
Alternating-Current power (AC power). Mimicking power engineering
terminology, nodes in the network are called buses and edges are
called transmission lines (or simply lines or branches). These
networks are typically tree-structured, and the root of the tree is
known as the \emph{substation bus} - physically, this is the point at
which the power distribution network is connected to the high voltage
power transmission network. We label the nodes $0,\ldots,n$, where $0$
is the substation bus. The network is represented as a directed graph,
with all edges pointing towards the substation bus (this
directionality is simply a convention and has no physical
meaning - the physical power flow over this edge can be in either direction). Each node $k$ (except the substation) has a unique outgoing
edge, connecting it to its \emph{parent node}, denoted $\Par{k}$, and
$k$ is said to be a \emph{child} of $\Par{k}$. $\Chi{i}$ denotes the
set of children of bus $i$.
\begin{figure}
\centering\includegraphics[width=.95\columnwidth]{FlowPicB}
\caption{AC Power flow in a tree network }\label{fig:FlowPicB}
\end{figure}
AC power flow in steady-state is described
by complex voltages (known as voltage phasors) that represent
sinusoidal steady-state voltage profiles. Let the complex voltage at
node $i$ be $V_i$ and let $\mathfrak{v}_i=|V_i|^2$ denote the squared voltage
magnitude. The power-flow is also a complex number, whose real part is
known as \emph{active power} and imaginary part as \emph{reactive
power}. In a tree network, every edge connects a bus and its
parent. Thus, we can identify every edge with the child node $i$
incident upon it, and denote its complex impedance by $z_i=r_i+\mathbf{j}
x_i$ ($r_i$ is the resistance and $x_i$ the inductance of the
line). We denote the sending-end power flow from bus $i$ to bus
$\Par{i}$ by $S_i=P_i+\mathbf{j} Q_i$. Note that because of losses,
the power received at $\Par{i}$ is not equal to $S_i$. The power
losses are given by $z_{i}\frac{|S_i|^2}{\mathfrak{v}_i}$ (see
\cite{BergenVittal} or \cite{low2014convexb} for further details).
Combined with the conservation of flow at each node in the network,
this leads to the AC power flow equations, in the so-called
branch-flow model (first presented in \cite{baran1989optimal}),
illustrated in Figure \ref{fig:FlowPicB}. The power flow equations in
the branch flow (or Baran-Wu) form can be written as:
\begin{subequations}\begin{align}
P_i &=p_i+\sum_{k \in \Chi{i}}\br{P_k-r_k\br{\frac{P_k^2+Q_k^2}{\mathfrak{v}_k}}}\quad \forall i \in \{0,\ldots,n\}\label{eq:ACpb} \\
Q_i &=q_i+\sum_{k \in \Chi{i}}\br{Q_k-x_k\br{\frac{P_k^2+Q_k^2}{\mathfrak{v}_k}}}\quad \forall i \in \{0,\ldots,n\}\label{eq:ACpbreac} \\
\mathfrak{v}_i &= \mathfrak{v}_k+\br{r_k^2+x_k^2}\frac{\br{P_k^2+Q_k^2}}{\mathfrak{v}_k}-2\br{r_kP_k+x_kQ_k} \quad \forall i \in \{0,\ldots,n\},k \in \Chi{i}\label{eq:ACPFohm}
\end{align} \label{eq:PF}
\end{subequations}
\noindent
where $p_i$ and $q_i$ are the real and reactive power
injections/consumptions at bus $i$, as discussed in the next
Section. For the rest of this paper, this is the form of the PF
equations we will use.
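As a numerical sanity check of \eqref{eq:PF} on a single line (with arbitrary per-unit values chosen purely for illustration), the loss and voltage-drop terms can be evaluated directly:

```python
# One line: bus 1 sends S_1 = P_1 + j Q_1 towards its parent; the loss terms
# and the parent's squared voltage follow the branch-flow relations.
# All numbers are illustrative per-unit values.
r1, x1 = 0.1, 0.2            # line impedance z_1 = r_1 + j x_1
P1, Q1, v1 = 1.0, 0.5, 1.0   # sending-end flow and squared voltage at bus 1

loss_P = r1 * (P1**2 + Q1**2) / v1   # active-power loss on the line
loss_Q = x1 * (P1**2 + Q1**2) / v1   # reactive-power loss
P_received = P1 - loss_P             # active power arriving at the parent bus
v_parent = v1 + (r1**2 + x1**2) * (P1**2 + Q1**2) / v1 - 2 * (r1 * P1 + x1 * Q1)
```

Here the parent receives $0.875$ rather than the full $1.0$ of active power, the difference being the $r_1 |S_1|^2/\mathfrak{v}_1$ loss term.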
\subsection{Optimal Power Flow (OPF) on a tree network}\label{sec:OPFIntro}
The optimal power flow (OPF) problem aims at finding the most
efficient utilization of generation and flexible demand resources in a
grid subject to the power flow constraints and engineering limits on
voltages, currents and flows. At each node, there is generation and/or
consumption of power. To simplify notation, we assume that there is
only one entity (generator or consumer) at each node in the network (but this restriction is not necessary). Both generators and
flexible consumers are characterized by their injection domain
$\br{p_i,q_i}\in\mathbf{S}_i$. The domain may be a finite set (for modeling
discrete load control where a load can take on one of a set of
possible values, for example) or an infinite set (for modeling
continuous constraints like minimum/maximum generation
limits). Inflexible generators or consumers are modeled by choosing
$\mathbf{S}_i$ to be a singleton set. Additionally, each generator has a
cost of production $c_i\br{p_i,q_i}$ and similarly every
flexible consumer may be compensated for adjusting consumption, and
this compensation is also denoted $c_i\br{p_i,q_i}$.
With these assumptions, a generic OPF problem can be stated as
\begin{subequations}
\begin{align}
\mini_{p,q,\mathfrak{v},P,Q} &\sum_{i=0}^n c_i\br{p_i,q_i} \text{ (Minimize Cost)} \\
\;\;\;\:\;\;\text{ Subject to }
& \eqref{eq:ACpb},\eqref{eq:ACpbreac},\eqref{eq:ACPFohm} \\
& \lb{\mathfrak{v}_i} \leq \mathfrak{v}_i \leq \ub{\mathfrak{v}_i} \quad \forall i \in \{1,\ldots,n\} \label{eq:ACPFv} \\
& \lb{P_i} \leq P_i \leq \ub{P_i},\lb{Q_i} \leq Q_i \leq \ub{Q_i} \, \forall i \in \{1,\ldots,n\} \label{eq:ACPFf} \\
& \br{p_i,q_i}\in \mathbf{S}_i\quad \forall i \in \{1,\ldots,n\} \label{eq:ACPFpq} \\
& P_0=0,Q_0=0,\mathfrak{v}_0=\mathfrak{v}^{ref} \label{eq:Slackbus}
\end{align}\label{eq:OPFform}
\end{subequations}
\noindent
The costs and flow balance constraints are described above. The
constraints \eqref{eq:ACPFv} and \eqref{eq:ACPFf} come from
engineering limits - devices connected to the grid only work properly
for a certain range of voltages, and the flow limits are related to
the capacity of transmission lines (both due to dynamic stability and
thermal limitations). The constraint \eqref{eq:Slackbus} enforces that
there is no current or flow going upstream from the substation bus and
that the substation bus voltage is set to a fixed reference value,
$\mathfrak{v}^{ref}$. Note that more general OPF problems can be handled with
our approach (adding tap transformer positions, capacitor banks etc.),
but we restrict ourselves to this basic problem to simplify notation
and make the exposition clear. We make assumptions on the cost
function and problem data as stated below:
\begin{assumption}\label{assump:A}
The injection constraint set can be partitioned as:
\begin{align*}
\mathbf{S}_i=\cup_{t=1}^{|\mathbf{S}_i|}\mathbb{I}^{s}\br{t},\mathbb{I}^{s}\br{t}=\left\{\br{p_i,q_i}: \lb{p_i}\br{t}\leq p_i \leq \ub{p_i}\br{t}, \lb{q_i}\br{t}\leq q_i \leq \ub{q_i}\br{t}\right\}
\end{align*}
and the cost function $c_i\br{p_i,q_i}$ is a linear function over $\mathbb{I}^{s}\br{t}$:
\[c_i\br{p_i,q_i}=a_i\br{t}p_i+b_i\br{t}q_i+c_i\br{t}\]
\end{assumption}
\begin{assumption}\label{assump:B}
$\exists M>0$ such that:
\begin{itemize}
\item[1] $[\lb{P_i},\ub{P_i}],[\lb{Q_i},\ub{Q_i}],[\lb{\mathfrak{v}_i},\ub{\mathfrak{v}_i}] \subseteq [-M,M] \quad i\in \{1,\ldots,n\}$ (voltages/flows are bounded uniformly across all nodes).
\item[2] $\lb{\mathfrak{v}_i}\geq \frac{1}{M}$ for $i=1,\ldots,n$ (voltage lower bounds are bounded away from zero).
\item[3] $|z_i|\leq M,i=1,\ldots,n$ (impedances are bounded uniformly across all nodes).
\item[4] $|\mathbf{S}_i|\leq M,i=0,\ldots,n$ (number of pieces in the cost is bounded).
\item[5] $\forall i\in \{0,\ldots,n\},t\in\{1,\ldots,|\mathbf{S}_i|\}$: $\max\br{\ub{p_i}\br{t}-\lb{p_i}\br{t},\ub{q_i}\br{t}-\lb{q_i}\br{t}} \leq \frac{1}{M}$ (the size of each piece in the piecewise linear cost is small).
\end{itemize}
\end{assumption}
\noindent
Assumptions \ref{assump:A} and \ref{assump:B} are non-restrictive: Assumption \ref{assump:A} simply requires that
the cost function is piecewise-linear (or can be approximated by one), and Assumption \ref{assump:B} requires that all parameters are bounded.
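Assumption \ref{assump:A} can be realized, for instance, by interpolating a smooth cost on a uniform grid. The Python sketch below is illustrative (the quadratic cost and the value $M=10$ are assumptions, not data from this paper): it approximates $c(p)=p^2$ on $[0,1]$ by $M$ linear pieces of width $h=1/M$ and checks the standard linear-interpolation error bound $\max|c''|\,h^2/8 = h^2/4$.

```python
# Piecewise-linear interpolation of an illustrative quadratic cost c(p) = p^2
# on [0, 1] with M pieces of width h = 1/M; for c'' = 2 the interpolation
# error is at most h^2/4, attained at the midpoint of each piece.
M = 10
h = 1.0 / M

def c(p):
    return p * p

def c_pwl(p):
    t = min(int(p / h), M - 1)        # index of the piece containing p
    p_lo, p_hi = t * h, (t + 1) * h
    slope = (c(p_hi) - c(p_lo)) / h   # the coefficient a_i(t) of the piece
    return c(p_lo) + slope * (p - p_lo)

max_err = max(abs(c_pwl(k / 1000.0) - c(k / 1000.0)) for k in range(1001))
```

Shrinking the pieces (item 5 of Assumption \ref{assump:B}) drives the approximation error to zero quadratically in $h$.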
\subsection{Factor Graphs}
\begin{figure}
\centering\includegraphics[width=.23\columnwidth]{FactorGraphExample}
\caption{Factor graph corresponding to \eqref{eq:FactExample}}\label{fig:FactExample}
\end{figure}
A factor graph \cite{kschischang2001factor} is a formal tool to express the structure of an optimization problem or a probability distribution. In this paper, we focus on optimization problems and do not discuss the use of factor graphs in probabilistic inference. A factor graph is defined by specifying:
\begin{itemize}
\item[1] A set of $n$ variables $\{\beta_i\}_{i=1}^{n}$ and their domains $\beta_i \in \mathcal{X}_i$.
\item[2] A set of functions, called \emph{factors}, $\{H_{k}:\displaystyle\prod_{t \in \alpha\br{k}} \mathcal{X}_t \mapsto \mathbb{R}\}_{k=1}^m$, where $\alpha\br{k}\subseteq \{1,\ldots,n\}$ is the set of variables that $H_k$ depends on.
\end{itemize}
\noindent
The factor graph is represented as a bipartite graph where the variables live on one side of the graph and the factors on the other side. A variable is connected to a factor if the factor depends on the variable, so that there is an edge between $\beta_i$ and $H_j$ if and only if $i \in \alpha\br{j}$.
The optimization problem associated with the factor graph is $
\mini_{\{\beta_i \in \mathcal{X}_i\}_{i=1}^n} \sum_{k=1}^m H_k\br{\beta_{\alpha\br{k}}}$.
As a concrete example, the optimization problem represented by the factor graph shown in Figure \ref{fig:FactExample} is:
\begin{align}
\min_{\{\beta_i \in \{0,1\}\}_{i=1}^5} H_1\br{\beta_1,\beta_2,\beta_3}+H_2\br{\beta_2,\beta_4}+H_3\br{\beta_4,\beta_5} \label{eq:FactExample}
\end{align}
\noindent
A naive approach to solving \eqref{eq:FactExample} would be to simply enumerate all $2^5$ assignments to the variables $\beta_1,\ldots,\beta_5$. However, one can solve this problem efficiently by exploiting the factor graph structure as follows: We first note that $\beta_5$ only appears in the factor $H_3$, so we can rewrite the optimization as
\[\min_{\beta_1,\beta_2,\beta_3,\beta_4} H_1\br{\beta_1,\beta_2,\beta_3}+H_2\br{\beta_2,\beta_4}+\min_{\beta_5}H_3\br{\beta_4,\beta_5}\]
Define $\kappa_3\br{\beta_4}=\min_{\beta_5} H_3\br{\beta_4,\beta_5}$ (this function can be evaluated using $4$ units of time assuming each function evaluation takes $1$ unit of time, by evaluating $H_3$ for all $4$ assignments to its arguments). The problem then reduces to
\[\min_{\beta_1,\beta_2,\beta_3} H_1\br{\beta_1,\beta_2,\beta_3}+\min_{\beta_4} H_2\br{\beta_2,\beta_4}+\kappa_3\br{\beta_4}\]
Again, define $\kappa_2\br{\beta_2}=\min_{\beta_4} H_2\br{\beta_2,\beta_4}+\kappa_3\br{\beta_4}$ (this can again be evaluated in $4$ units of time). Then, the problem reduces to
\[\min_{\beta_1,\beta_2,\beta_3} H_1\br{\beta_1,\beta_2,\beta_3}+\kappa_2\br{\beta_2}\]
No further simplification is possible since $H_1$ depends on all the remaining variables. The optimal value can be computed now in $8$ units of time (since there are $2^3$ possible assignments to the variables).
Thus the global optimum can be computed using $4+4+8=16$ units of time as opposed to the $32$ units of time taken by the naive brute-force approach. This algorithm generalizes to arbitrary tree-structured factor graphs (factor graphs with no cycles). In Section \ref{sec:OPFasGM}, we formulate the ACOPF \eqref{eq:OPFform} as a tree-structured factor graph and show how to exploit factor graph techniques to solve the ACOPF with approximation guarantees.
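The elimination computation above can be checked numerically. The following Python sketch (with random factor tables, purely for illustration) reproduces the elimination order for \eqref{eq:FactExample} ($4+4+8=16$ function evaluations) and compares the result against brute force over all $32$ assignments:

```python
import itertools
import random

# Random tables for the factors of the example: H1 over (b1,b2,b3),
# H2 over (b2,b4), H3 over (b4,b5), all binary variables.
random.seed(1)
H1 = {b: random.random() for b in itertools.product((0, 1), repeat=3)}
H2 = {b: random.random() for b in itertools.product((0, 1), repeat=2)}
H3 = {b: random.random() for b in itertools.product((0, 1), repeat=2)}

# Eliminate b5, then b4, exactly as in the text (4 + 4 evaluations).
kappa3 = {b4: min(H3[(b4, b5)] for b5 in (0, 1)) for b4 in (0, 1)}
kappa2 = {b2: min(H2[(b2, b4)] + kappa3[b4] for b4 in (0, 1)) for b2 in (0, 1)}

# Final minimization over the remaining three variables (8 evaluations).
elim = min(H1[(b1, b2, b3)] + kappa2[b2]
           for b1, b2, b3 in itertools.product((0, 1), repeat=3))

# Brute force over all 2^5 assignments.
brute = min(H1[(b1, b2, b3)] + H2[(b2, b4)] + H3[(b4, b5)]
            for b1, b2, b3, b4, b5 in itertools.product((0, 1), repeat=5))
```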
\section{Contact terms and the master orbit}
Many years ago, it was suggested in \cite{Witten} that because of
the factorization of correlation functions at large-$N$ and translational
invariance, the functional measure for $YM$ theories at $N=\infty$ should be
localized on the gauge orbit of some constant connection $A_{\mu}$,
the master orbit:
\begin{eqnarray}
A_{\mu}^{g}=gA_{\mu}g^{-1}-i g^{-1}\partial_{\mu}g~.
\end{eqnarray}
However, in this paper I show that if the gauge transformation $g$ is not smooth,
some gauge-invariant local
operators need not be constant when computed on the master orbit:
in addition to the constant part they receive the contribution of some
`contact terms'.\\
It turns out that the most general structure of the contact terms for
gauge-invariant local operators at $N=\infty$, compatible
with translational invariance and factorization, is that they are ultralocal
distributions (linear combinations of delta functions and their derivatives)
localized at submanifolds
that depend on moduli that contain some copies of the translations.
For example I show in this paper that, in the gauge $F^{ch}_{z \bar z}=0$
(the superscript $^{ch}$ means the charged part with respect to the diagonal
$U(1)^{N-1}$), in
$SU(N)$-$YM_4$ on a product of two
Riemann surfaces $Z \times W$, the structure of the master field for
the neutral part of $F_{z \bar z}$, that is the eigenvalues of $F_{z \bar z}$
(that determine the correlation functions of all the traces of $F_{z \bar z}$ ),
may be in general the
sum of a constant `bulk' contribution and some `contact' terms that are
delta-like distributions.\\
This situation is reminiscent of topological field theories
\cite{Witten1},\cite{Witten2},\cite{Verlinde},
in which contact terms arise because of
the topological non-triviality of the gauge orbit
\cite{Imbimbo}.\\
From now on I will restrict my argument to $SU(N)$-$YM_4$ on a product of two
Riemann surfaces $Z \times W$.\\
I will assume that Eq.(1) holds for some not-necessarily smooth gauge
transformation $g$.
Let us suppose that the gauge $F^{ch}_{z \bar z}=0$ can be reached.
Should the gauge
transformation $g$ in Eq.(1) be smooth on the entire orbit,
$F^{0}_{z \bar z}$ would simply be a constant.
However, under a singular gauge transformation $F_{z \bar z}$ transforms as:
\begin{eqnarray}
F^g_{z \bar z}=gF_{z \bar z}g^{-1} + F_{z \bar z}( g^{-1} \partial_{z} g,
g^{-1} \partial_{\bar z}g)
\end{eqnarray}
where the second term represents the field strength of a connection that is
locally a pure gauge.
In the gauge $F^{ch}_{z \bar z}=0$, that allows residual $U(1)^{N-1}$
transformations, Eq.(2) reduces to:
\begin{eqnarray}
F^{0 \omega}_{z \bar z}=F^0_{z \bar z}+\partial_{z}\partial_{\bar z}\omega^0-
\partial_{\bar z} \partial_{z}\omega^0 ~.
\end{eqnarray}
The second term is zero for smooth $\omega^0$, but it is proportional to
a singular delta function
if the gauge transformation $\omega^0$ carries a non-trivial $\pi_1$, that is
if there is a magnetic vortex in the theory:
\begin{eqnarray}
\omega^0=h^0~ {\rm Im}\log(z-z_1)~.
\end{eqnarray}
Only $\pi_1$ needs to be considered here since the $w$-coordinates appear
as a parameter in Eq.(3).\\
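As a numerical illustration (not part of the argument above), the multivaluedness of $\omega^0$ can be checked directly: the phase ${\rm Im}\log(z-z_1)$ winds by $2\pi$ along a loop enclosing $z_1$ and by $0$ otherwise, which is precisely what makes the commutator of derivatives in Eq.(3) a delta function at the vortex. A Python sketch:

```python
import cmath
import math

# Total change of Im log(z - z1) along a closed circular loop, computed by
# accumulating unwrapped phase increments; the loop parameters are illustrative.
z1 = 0.3 + 0.4j

def winding(center, radius, steps=4000):
    total = 0.0
    prev = cmath.phase(center + radius - z1)
    for k in range(1, steps + 1):
        z = center + radius * cmath.exp(2j * math.pi * k / steps)
        cur = cmath.phase(z - z1)
        d = cur - prev
        # unwrap the phase jump at the branch cut
        if d > math.pi:
            d -= 2 * math.pi
        if d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = cur
    return total

around = winding(z1, 1.0)      # loop enclosing the vortex: winds by 2*pi
away = winding(z1 + 5.0, 1.0)  # loop not enclosing it: winds by 0
```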
I conclude that the most general structure for the master field
of $F^{0}_{z \bar z}$ is a constant part plus a vortex condensate:
\begin{eqnarray}
i F^0_{z \bar z}(z,w)=H^0_{0}+ \sum_{i} h^0_i(z,w) \delta^{(2)}(z-z_i(w))
\end{eqnarray}
or, in a
singular $U(1)^{N-1}$ gauge in which the phase of the charged connections is
multivalued, a condensate of strings. The factor of $i$ has been introduced
into Eq.(5) to make the constant $H^0_0$ real.\\
The occurrence of these `contact' terms is obviously compatible with
translational invariance since the functional measure will contain the
integration over the moduli (positions) of the `contacts'.
In the next section I show that it is also compatible with the large-$N$
factorization of the gauge invariant correlations.
\section{Contact terms and factorization}
I start presenting a slightly different argument that does not make explicit
use of the assumption that Eq.(1) holds.
In the gauge $F^{ch}_{z \bar z}=0$, the effective action for $F^{0}_{z \bar z}$,
$\Gamma$,
\begin{eqnarray}
Z&=&\int \exp[-S_{YM}] \delta(F^{ch}_{z \bar z}) \delta(F^{0}_{z \bar z} - H^0)
\Delta_{FP} DA DH^0 \nonumber \\
&=&\int \exp[-\Gamma(H^0)] DH^0~,
\end{eqnarray}
defined by integrating out all the other fields but $F^{0}_{z \bar z}$,
though a priori
unknown, is of order $N^2$ for $N$ large, while the integration measure
grows as $N$. Therefore, for the functional integral in which $\Gamma$
occurs, the saddle-point method applies for large $N$.
Because of translational invariance the minima or the saddles of $\Gamma$
must be either constant or non-constant configurations containing the
translations
among their moduli, that is:
\begin{eqnarray}
H^0(x)= H^0_0+ H^0(x;[x_i])
\end{eqnarray}
Because of large-$N$ clustering, for the non-constant part of $H^0(x)$ I
may assume:
\begin{eqnarray}
H^0(x;[x_i])= \sum_i H^0(x-x_i)
\end{eqnarray}
where for simplicity I made the unnecessary assumption that the irreducible
constituents of the master field with respect to translations $H^0(x-x_i)$ are
all of the same `type'.
Now I compute the two-point correlation function making the `ansatz'
of Eqs.(7)-(8):
\begin{eqnarray}
&&<H^i(x)H^j(y)>= H^i H^j + H^i \frac{n}{V} \int H^j(y-x_1) dx_1+ \nonumber\\
&& + H^j \frac{n}{V} \int H^i(x-x_1) dx_1+ \nonumber\\
&&+\frac{1}{V^n} \int \sum_k H^i(x-x_k) \sum_{k^{'}}
H^{j}(y-x_{k^{'}})
\prod dx_i = \nonumber\\
&&=H^i H^j + H^i \frac{n}{V} \int H^j(y-x_1) dx_1+ \nonumber\\
&& + H^j \frac{n}{V} \int H^i(x-x_1) dx_1+ \nonumber\\
&&+ {\frac{1}{V}} \sum_k \int H^i(x-x_k) H^j(y-x_k) dx_k+ \nonumber\\
&&+\sum_{k \neq {k^{'}}}
\frac{1}{V} \int H^i(x-x_k) dx_k
\frac{1}{V} \int H^j(y-x_{k^{'}}) dx_{k^{'}} = \nonumber\\
&&=H^i H^j + H^i \frac{n}{V} \int H^j(y-x_1) dx_1+ \nonumber\\
&& + H^j \frac{n}{V} \int H^i(x-x_1) dx_1+ \nonumber\\
&&+ {\frac{n}{V}} \int H^i(x-x_1) H^j(y-x_1) dx_1 + \nonumber\\
&&+\frac{n^2-n}{V^2} [\int H^i(x-x_1) dx_1] [\int H^j(y-x_1) dx_1]
\end{eqnarray}
This correlation function should be compared with the disconnected
product:
\begin{eqnarray}
<H^i(x)> <H^i(y)>&=&
[H^i+ \frac{n}{V} \int H^i(x-x_1) dx_1] \times \nonumber\\
&&\times [H^j+ \frac{n}{V} \int H^j(y-x_1) dx_1]
\end{eqnarray}
where $n$ is the number of irreducible constituents with respect to
translations.
I also assume that the limit $n \rightarrow \infty$ is taken keeping
constant the number of constituents per unit of `volume'.
The two-point correlation function factorizes only if the
constituents
of the master field are either a constant or an ultralocal distribution,
otherwise the non-trivial
overlap between different irreducible constituents that appears after the last
equality in Eq.(9) would imply a non-vanishing
two-point connected function.
Quite analogous formulae hold for all the other correlation functions of
$F^{0}_{z \bar z }$.\\
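The role of ultralocality in Eq.(9) can be made concrete in a toy one-dimensional model (illustrative only, not part of the computation above): the connected part of the two-point function at separation $|x-y|$ is controlled by the overlap integral $\int H(x-u)H(y-u)\,du$, which survives for smooth constituents but vanishes for delta-like ones. A Python sketch, assuming Gaussian constituents of width $\sigma$ normalized to unit mass:

```python
import math

# Overlap integral int g(u) g(u - sep) du for unit-mass Gaussian bumps of
# width sigma, evaluated by a simple Riemann sum over [-L, L].
def overlap(sep, sigma, L=10.0, steps=20001):
    du = 2 * L / (steps - 1)
    total = 0.0
    for k in range(steps):
        u = -L + k * du
        g1 = math.exp(-u * u / (2 * sigma**2))
        g2 = math.exp(-(u - sep) ** 2 / (2 * sigma**2))
        total += g1 * g2 * du
    # divide by the product of the two Gaussian normalizations (sigma*sqrt(2*pi))^2
    return total / (2 * math.pi * sigma**2)

wide = overlap(sep=1.0, sigma=0.5)     # smooth constituent: connected part survives
narrow = overlap(sep=1.0, sigma=0.01)  # delta-like constituent: overlap ~ 0
```

In the delta-like limit $\sigma\to 0$ the overlap at $x\neq y$ vanishes and the two-point function factorizes, in agreement with the argument above.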
Since $F^{0}_{z \bar z }$ has scaling dimension two, it is not restrictive to
assume that $F^{0}_{z \bar z}$ is a linear combination, with dimensionless
coefficients,
of two-dimensional delta functions (anomalous dimensions are expected to appear only as $\frac{1}{N}$
corrections). All the other ultralocal distributions can be obtained as
limits of linear combinations of delta functions, because Dirac measures are dense in the
distributions \cite{Barry-Simon}.
Now I show explicitly that the ansatz in Eq.(5) is compatible with
factorization, provided the v.e.v.'s of $h^0_i(z,w)$ factorize and are
translational invariant (for example $h^0_i(z,w)$ are constant) :
\begin{eqnarray}
&&i^2<F^i_{z \bar z}(z_1,w_1) F^j_{z \bar z}(z_2,w_2)>
= H_0^i H_0^j + H_0^i \frac{<\sum_k h^j_k(z_k,w_2)>}{A}+ \nonumber\\
&& + H_0^j \frac{<\sum_k h^i_k(z_k,w_1)>}{A} + \nonumber\\
&&+\frac{1}{A^{2n}} < \int \sum_k h^i_k(z_1,w_1) \delta^{(2)}(z_1-z_k(w_1))
\prod dz_k(w_1) \times \nonumber\\
&&\times \int \sum_{k'} h^j_{k'}(z_2,w_2)
\delta^{(2)}(z_2-z_{k'}(w_2))
\prod dz_{k'}(w_2)>~.
\end{eqnarray}
For $w_1 \neq w_2$
the two-point function should be compared with the disconnected
product:
\begin{eqnarray}
i^2<F^i_{z \bar z}(z_1,w_1) > <F^j_{z \bar z}(z_2,w_2)>&=&
[H_0^i+ \frac{<\sum_k h^i_k(z_k,w_1)>}{A}] \times \nonumber\\
&&\times [H_0^j+ \frac{<\sum_{k'} h^j_{k'}(z_{k'},w_2)>}{A} ]
\end{eqnarray}
In this case factorization and translational invariance follow from
the assumed factorization and translational invariance of $<h^j_k(z,w)>$.
When $w_1=w_2$ the factorization follows from the preceding assumptions
about the v.e.v. of $h^j_k(z,w)$ and from the computation
in Eq.(9)-Eq.(10). \\
The occurrence of the two-dimensional delta functions can be interpreted in terms of vortices, as I did in
the first section.
The vortex charge is quantized according to
the topological class defined by $\pi_1(SU(N)/Z_N)$.
Alternatively,
the gauge fixing $F^{ch}_{z \bar z}=0$ leaves a residual $U(1)^{N-1}$, and solutions
of the system:
\begin{eqnarray}
&& F^{ch}_{z \bar z}(z,w)=0 \nonumber\\
&&i F^0_{z \bar z}(z,w)=H^0_{0} + H^0(z,w;[z_i])
\end{eqnarray}
can be classified by $\pi_1(U(1)^{N-1})$.
If we allow $k$-fold covers of the original surface $Z$,
rational holonomies and vortex charges are allowed at large $N$.
This completes the classification of the contact terms that may occur at large
$N$. In the next section I present a physical interpretation of the structure of
the master field for $F^{0}_{z \bar z}$.
\section{Confinement and the master field}
According to \cite{'t Hooft}, if there is a mass gap,
the phase of pure $SU(N)$-gauge theories
can be classified either as the Higgs or the confining one, depending on
whether the electric or the magnetic fluxes condense, respectively.
If there is no mass gap and the gauge symmetry is unbroken, the theory is in
the Coulomb phase.\\
It is quite obvious that if pure $SU(N)$-$YM_4$ were in the Higgs phase,
free vortices could not occur in the master field, since the magnetic charge
is confined in this phase.
Indeed if $F^{0}_{z \bar z}$ is a non-zero constant, the $SU(N)$ gauge group must be
spontaneously broken to the isotropy subgroup of $H^0_0$,
since the gauge orbit under global gauge transformations
is non-trivial.
This case is analogous to the one in which the eigenvalues of a scalar field
in the adjoint representation condense in the vacuum.
This theory looks like a superconductor of type one.\\
If a constant field and a vortex condensate occur at the same time in the
master field for the same eigenvalues, the theory resembles a superconductor
of type two, which can be penetrated by magnetic flux vortices
(they would form lines in 3d and sheets in 4d).\\
One way of seeing this is to look at the equations (for $U(N)$):
\begin{eqnarray}
&&F^{\phantom{0}ih}_{\bar z z}=\partial_{[\bar z }
A^{ih}_{z]}+i(A^i-A^h)_{[\bar z }A^{ih}_{z]}+i\sum_{j}A^{ij}_{[\bar z }A^{jh}_{z]}
=0 \nonumber \\
&&i F^{\phantom{0}i}_{\bar z z}=i(\partial_{[\bar z }
A^{i}_{z]}+i\sum_{j}A^{ij}_{[\bar z }A^{ji}_{z]})
=-H^{i}
\end{eqnarray}
that appear as a constraint at $N=\infty$ in the functional integral.
Using the ansatz (reduction):
\begin{eqnarray}
&& A^{ih}_{z}=A^{ih}_{\bar z }=0~~~~\vert i-h \vert > 1 \nonumber\\
&& A^{ii+1}_{z}=0 ~~~~\mbox{vortex} \nonumber\\
&& A^{ii+1}_{\bar z }=0 ~~~~\mbox{anti-vortex}~,
\end{eqnarray}
the following Toda equations corresponding to vortices or antivortices are
obtained:
\begin{eqnarray}
&& (A^i-A^{i+1})_{z}=-i \partial_{z} \log A^{i i+1}_{\bar z } \nonumber \\
&& -\partial_{z} \partial_{\bar z } \log \vert A^{i i+1}_{\bar z } \vert^2-
2 \vert A^{i i+1}_{\bar z } \vert^2 + \vert A^{i-1 i}_{\bar z } \vert^2
+ \vert A^{i+1 i+2}_{\bar z } \vert^2 =H^{i+1}- H^i \nonumber \\
&&i \sum_{i} \partial_{[\bar z }A^{i}_{z]}=-\sum_{i}H^i
\end{eqnarray}
and
\begin{eqnarray}
&& (A^i-A^{i+1})_{\bar z }=i \partial_{\bar z } \log A^{i i+1}_{z} \nonumber \\
&& \partial_{z} \partial_{\bar z } \log \vert A^{i i+1}_{z} \vert^2+
2 \vert A^{i i+1}_{z} \vert^2- \vert A^{i-1 i}_{z} \vert^2
- \vert A^{i+1 i+2}_{z} \vert^2 =H^{i+1}-H^i \nonumber \\
&&i \sum_{i}\partial_{[\bar z } A^{i}_{z]}=-\sum_{i}H^i~.
\end{eqnarray}
Toda equations are a $SU(N)$ generalization of the Liouville equation
($SU(2)$) involving only $3N-1$ generators,
the Cartan generators and the immediately off-diagonal charged generators.\\
Liouville and Toda equations are known to possess vortex solutions.
In fact they are the paradigm of vortex equations \cite{Yaffe}.\\
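For orientation, the simplest instance is $N=2$: the Toda system above then has a single charged component $A^{12}_{\bar z}$, the neighbouring terms $\vert A^{01}_{\bar z}\vert^2$ and $\vert A^{23}_{\bar z}\vert^2$ are absent, and the vortex equation reduces to a Liouville-type equation:

```latex
% SU(2) reduction of the vortex Toda equation: only the pair (1,2) survives,
% so the neighbouring terms drop out of the second equation of the system.
\begin{equation}
-\partial_{z} \partial_{\bar z } \log \vert A^{1 2}_{\bar z } \vert^2
- 2 \vert A^{1 2}_{\bar z } \vert^2 = H^{2}- H^{1}~,
\end{equation}
```

which is the Liouville equation for $\log\vert A^{12}_{\bar z}\vert^2$ up to the inhomogeneous term $H^2-H^1$.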
To be more precise, a vortex with magnetic charge:
\begin{equation}
\frac{2\pi n^{ii+1}}{k}
\end{equation}
arises wherever the charged field $A^{ii+1}_{\bar z}$ has a zero of the form:
\begin{equation}
h(z,\bar z )^{ii+1}(\bar z - \bar z _{1})^{\frac{n^{ii+1}}{k}}~.
\end{equation}
This corresponds to a pole singularity of the vector potential
in the Cartan subalgebra and a $\delta$-like singularity in the
Abelian field strength.
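The quantization of the charge can be checked directly from the zero above. With the standard normalization $\partial_{z}\partial_{\bar z}\log\vert z\vert^2=\pi\,\delta^{(2)}(z)$ (conventions differ by factors of $2$), the zero of order $n^{ii+1}/k$ contributes

```latex
% delta-like term in the Abelian field strength generated by the zero,
% using |\bar z - \bar z_1| = |z - z_1|
-\partial_{z}\partial_{\bar z}
\log\big\vert (\bar z - \bar z_{1})^{n^{ii+1}/k}\big\vert^{2}
= -\frac{\pi\, n^{ii+1}}{k}\,\delta^{(2)}(z-z_{1})~,
```

so integrating the Abelian field strength over a small disc around $z_1$ yields a flux proportional to $n^{ii+1}/k$, reproducing the charge $2\pi n^{ii+1}/k$ up to the overall normalization of the flux integral.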
An anti-vortex corresponds to a zero involving the complex conjugate variable
and component of the connection on the $Z$ surface.
The `order parameter' $A^{i,i+1}_{\bar z}$ for vortices of type $i,i+1$
vanishes at the centre of the vortex and approaches its asymptotic value,
set by $H^{i+1}-H^{i}$, exponentially fast,
with a rate of order $|H^i-H^{i+1}|^{\frac{1}{2}}$.
Hence in 3d the magnetic flux would be squeezed into long flux tubes of
transverse width of the order $|H^i-H^{i+1}|^{-\frac{1}{2}}$.
In the case where vortices and a non-vanishing zero mode occur in the master field,
the v.e.v. of $F^{0}_{z \bar z}$ is in general still different from zero
and the symmetry is broken, unless there is enough magnetic flux to
compensate the constant part and the v.e.v. of $F^{0}_{z \bar z}$ is zero.
In this last case the symmetry is unbroken, and the superconductor is at its
transition point with the Coulomb phase. \\
There is only one possibility left.
The constant part of the master field vanishes and the magnetic vortices
condense in the vacuum: this is the confining phase of $YM_4$.
Hence at large $N$, in the gauge $F^{ch}_{z \bar z}=0$, the $SU(N)$
functional integral must be localized on the moduli space of flat connections
with punctures, if the theory confines the electric charge.
\section{Acknowledgements}
I would like to thank Camillo Imbimbo, Giorgio Parisi and Massimo
Testa for several clarifying discussions.
\section{Introduction}
In this paper, we will consider the viscosity solutions for the following Dirichlet (exterior) problem
\begin{equation}\label{e:pde}
\begin{cases}
-\phi(-\Delta)u=f &\text{in} ~ D, \\
u=0 &\text{in} ~ \mathbb{R}^n \backslash D,
\end{cases}
\end{equation}
where $\phi$ belongs to the class of functions called \textit{Bernstein functions}, which contains $\phi(\lambda)=\lambda^{\alpha}$ with $0<\alpha<1$, and $D$ is a bounded $C^{1,1}$ open set in $\mathbb{R}^n$. For example, if $\phi(\lambda)=\lambda^\alpha$, then $-\phi(-\Delta)=-(-\Delta)^\alpha$ is the fractional Laplacian.
We will focus on the boundary behavior of the viscosity solutions of the Dirichlet problem \eqref{e:pde} under assumptions \eqref{e:phi-wsc} and \eqref{e:exp-j} below.
\subsection{Probabilistic point of view} \label{s:prob}
The operator $-\phi (-\Delta)$ can be understood as the infinitesimal generator of subordinate Brownian motions, thus we can use probabilistic tools to study the behavior of solutions of \eqref{e:pde}.
Let $S=(S_t)_{t \ge 0}$ be a subordinator, that is, an increasing L\'evy process in $\bR$. It is known that its Laplace exponent is given by
$$ {\mathbb E}[e^{-\lambda S_t}] = \exp(-t\phi(\lambda)), \quad \lambda > 0, $$
where the function $\phi:(0,\infty) \rightarrow (0,\infty)$ satisfies $\displaystyle \lim_{\lambda \downarrow 0} \phi(\lambda)=0$ and
\begin{equation}\label{d:phi}
\phi(\lambda)= b\lambda + \int_{(0,\infty)} (1-e^{-\lambda x}) \mu(dx)
\end{equation}
with a drift $b \ge 0$ and a measure $\mu$ on $(0,\infty)$ satisfying $\int_{(0,\infty)} (1 \land x) \mu(dx) < \infty.$
It is known that the function $\phi$ of the form \eqref{d:phi} is a \textit{Bernstein function}; that is, $\phi:(0,\infty) \rightarrow (0,\infty)$ is a $C^\infty$-function satisfying
$$ (-1)^{n+1} \phi^{(n)}(\lambda) \ge 0 \quad \mbox{for all} \quad n \in \bN. $$
Here $\phi^{(n)}$ is the $n$-th derivative of $\phi$. Also, it is known that every Bernstein function can be uniquely represented by \eqref{d:phi}.
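As a concrete check of the representation \eqref{d:phi}, the stable case $\phi(\lambda)=\lambda^{\alpha}$, $0<\alpha<1$, corresponds to $b=0$ and $\mu(dx)=\frac{\alpha}{\Gamma(1-\alpha)}\,x^{-1-\alpha}\,dx$; indeed, integrating by parts,

```latex
% Levy measure of the alpha-stable subordinator: integration by parts
% reduces the integral to the Gamma function.
\frac{\alpha}{\Gamma(1-\alpha)}\int_{0}^{\infty} (1-e^{-\lambda x})\, x^{-1-\alpha}\, dx
= \frac{\lambda}{\Gamma(1-\alpha)}\int_{0}^{\infty} e^{-\lambda x}\, x^{-\alpha}\, dx
= \frac{\lambda\cdot\lambda^{\alpha-1}\,\Gamma(1-\alpha)}{\Gamma(1-\alpha)}
= \lambda^{\alpha}.
```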
Subordinate Brownian motion $Y=(Y_t)_{t\geq 0} = (B_{S_t})_{t \geq 0}$ in $\mathbb{R}^n$ is a L\'evy process obtained by replacing the time of Brownian motion in $\mathbb{R}^n$ by an independent subordinator. Then, the characteristic exponent of $Y$ is given by $z \mapsto \phi(\vert z \vert^2)$. Also, the L\'evy measure of the process has a density $y \mapsto j(|y|)$ where $j:(0,\infty) \rightarrow (0, \infty)$ is the function given by
\begin{equation}\label{d:j}
j(r) =j_n(r)= \int_0^\infty (4\pi t)^{-n/2} e^{-\frac{r^2}{4t}} \mu(dt),
\end{equation}
and we have
\begin{equation}\label{d:Phi}
\phi(\vert z\vert^2)= \int_{\mathbb{R}^n \backslash \{0\}} (1- \cos ( z \cdot y)) j(|y|)dy.
\end{equation}
Let $A$ be the infinitesimal generator of $Y$. Then, by \cite[Section 4.1]{Sko} we have
\begin{equation}\label{e:SBM}
Au(x) =-\phi(-\Delta)u(x)= \int_{\mathbb{R}^n \setminus \left\lbrace 0 \right\rbrace} \left( u(x+y) - u(x) - {\bf 1}_{\left\lbrace \vert y \vert \le 1 \right\rbrace} y \cdot \nabla u(x) \right) j(|y|)dy
\end{equation}
for any $u \in C^2({\mathbb R}^n)$. See
Section \ref{s:N} for the definition of function spaces and
Section \ref{s:preliminaries} for the definition of infinitesimal generator.
Note that when $\phi(\lambda)=\lambda^\alpha$ with $0<\alpha<1$, the corresponding subordinate Brownian motion in $\mathbb{R}^n$ is a rotationally symmetric $2\alpha$-stable process. We also have $j(|y|)=c(n,\alpha)|y|^{-n-2\alpha}$. Thus the corresponding infinitesimal generator is the fractional Laplacian $-(-\Delta)^\alpha$.
Now we introduce some conditions which we will impose in this paper. The first condition is \textit{weak scaling condition at the infinity} for $\phi$, that is, there exist constants $0 < \alpha_1 \le \alpha_2 < 1$ and $b_1 \geq 1$ such that
\begin{equation}\label{e:phi-wsc}
b_1^{-1} \left( \frac{R}{r} \right)^{\alpha_1} \le \frac{\phi(R)}{\phi(r)} \le b_1 \left( \frac{R}{r} \right)^{\alpha_2} \quad \mbox{for all} \quad 1 \le r \le R < \infty.
\end{equation}
The constant $1$ in the condition above can be replaced by any other positive constant without loss of generality. Note that \eqref{d:phi} and \eqref{e:phi-wsc} imply that $b =0$ and that $\mu$ is an infinite measure. The second condition is that the L\'evy density of the process satisfies
\begin{equation}\label{e:exp-j}
j(r+1) \le b_2 j(r) \quad \mbox{for all} \quad r \ge 1
\end{equation}
for some constant $b_2 > 0$. Condition \eqref{e:exp-j} is valid for any complete Bernstein function satisfying \eqref{e:phi-wsc}; see \cite[Definition 6.1]{SSV} and \cite[Theorem 13.3.5]{KSV} for details. Moreover, \eqref{e:exp-j} also holds when \eqref{e:phi-wsc} holds for all $0< r \le R < \infty$ (see \cite[Corollary 22]{BGR1}).
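A simple example beyond the pure power case is $\phi(\lambda)=\lambda^{\alpha}+\lambda^{\beta}$ with $0<\alpha<\beta<1$, a Bernstein function (a sum of Bernstein functions) which satisfies \eqref{e:phi-wsc} with $\alpha_1=\alpha$, $\alpha_2=\beta$ and $b_1=1$: for $1\le r\le R$,

```latex
% weak scaling at infinity for phi(lambda) = lambda^alpha + lambda^beta:
% bound each term of phi(R) = r^a (R/r)^a + r^b (R/r)^b using R/r >= 1.
\left(\frac{R}{r}\right)^{\alpha}
= \frac{r^{\alpha}(R/r)^{\alpha}+r^{\beta}(R/r)^{\alpha}}{r^{\alpha}+r^{\beta}}
\le \frac{\phi(R)}{\phi(r)}
\le \frac{r^{\alpha}(R/r)^{\beta}+r^{\beta}(R/r)^{\beta}}{r^{\alpha}+r^{\beta}}
= \left(\frac{R}{r}\right)^{\beta}.
```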
We will see that the \textit{renewal function} $V$ with respect to one dimensional L\'evy process is related to the boundary behavior of solutions. This function plays an important role throughout this paper. For the definition of the \textit{renewal function}, see Section \ref{s:renewal function}.
\subsection{Analytic point of view}
From the analytic point of view, nonlocal operators can be defined via the Fourier transform. For example, the fractional Laplacian is defined by
\begin{align*}
-(-\Delta)^{\sigma/2} f(x)
&:= -(\vert \xi \vert^\sigma \hat{f})^\vee (x) \\
&= P.V. \int_{\mathbb{R}^n} \frac{f(y) - f(x)}{\vert y - x \vert^{n+\sigma}} \, dy \\
&= \int_{\mathbb{R}^n} \frac{f(y) - f(x) - \nabla f(x) \cdot (y-x) {\bf 1}_{\left\lbrace \vert y - x \vert < k\right\rbrace}}{\vert y - x \vert^{n+\sigma}} \, dy
\end{align*}
for $f \in C^\infty_c(\mathbb{R}^n)$ and it is well-known that
\begin{align*}
\lim_{\sigma \rightarrow 2} (2-\sigma) c(n, \sigma) (-\Delta)^{\sigma/2} f(x) = - \Delta f(x).
\end{align*}
Moreover, Caffarelli and Silvestre \cite{CS2} provided Harnack inequality and interior $C^{1,\alpha}$ regularity for fully nonlinear integro-differential equations associated with kernels comparable to that of fractional Laplacian, which remain uniform as $\sigma \rightarrow 2$. These results were generalized in \cite{KL4} and \cite{KKL} to more general integro-differential equations. These results make the theory of integro-differential operators and elliptic differential operators become unified.
The fractional Laplacian $(-\Delta)^{\sigma/2} f$ can also be thought of as the normal derivative of an extension of $f$ (the Dirichlet-to-Neumann operator of $f$). Consider the extension problem
\begin{align*}
\begin{cases}
- \nabla (y^{1-\sigma} \nabla u) = 0 &\text{in} ~ \mathbb{R}^n \times (0, \infty), \\
u(x,0) = f(x) &\text{for} ~ x \in \mathbb{R}^n.
\end{cases}
\end{align*}
It is known in \cite{CS1} that the following holds:
\begin{align*}
(-\Delta)^{\sigma/2} f(x) = \partial_\nu u(x,0) = - \lim_{y \rightarrow 0} y^{1-\sigma} u_y(x,y),
\end{align*}
where $\partial_\nu u$ is the outward normal derivative of $u$ on the boundary $\left\lbrace y = 0\right\rbrace$.
We are interested in the operator of the form
\begin{equation}\label{e:operator}
Lu(x) = P.V. \int_{\mathbb{R}^n \setminus \left\lbrace 0 \right\rbrace} \left( u(x+y) - u(x) \right) j(\vert y \vert) \, dy
\end{equation}
where $j: (0, \infty) \rightarrow (0, \infty)$ is a non-increasing function satisfying
\eqref{d:Phi}, \eqref{e:phi-wsc} and \eqref{e:exp-j}, or satisfying \eqref{e:Phi-wsc} and \eqref{e:J} in Section \ref{s:Levy process}. We call the function $j(|y|)$ the \textit{kernel} of the operator $L$. Note that
$Lu(x)$ is well-defined if $u \in C^2(x) \cap B(\mathbb{R}^n)$, where $C^2(x)$ denotes the family of all functions which are $C^2$ in some neighborhood of $x$ and $B(\mathbb{R}^n)$ denotes the family of all bounded functions defined on $\mathbb{R}^n$; this is why we need the assumption $0 < \alpha_1 \leq \alpha_2 < 1$. Due to the symmetry of the kernel $j(|y|)dy$, the operator can be rewritten without the principal value as
\begin{align} \label{e:operator1}
\begin{split}
Lu(x) &= \int_{\mathbb{R}^n \setminus \left\lbrace 0 \right\rbrace} \left( u(x+y) - u(x) - {\bf 1}_{\left\lbrace \vert y \vert \le 1 \right\rbrace} y \cdot \nabla u(x) \right) j(|y|)dy \\
&= \frac{1}{2}\int_{\mathbb{R}^n \setminus \left\lbrace 0 \right\rbrace} \left( u(x+y) + u(x-y) -2u(x) \right) j(\vert y \vert) \, dy
\end{split}
\end{align}
when $u \in C^2(x) \cap B(\mathbb{R}^n)$. The important point to note here is that $Lu=Au$ for $u \in C^2({\mathbb R}^n)$ when $j(|y|)$ in \eqref{e:SBM} and \eqref{e:operator} are the same. In Section \ref{s:MH} we discuss the connection between two operators in \eqref{e:SBM} and \eqref{e:operator}.
We will consider the {\it viscosity solution} of $Lu = f$ in $D$. A function $u : \mathbb{R}^n \rightarrow {\mathbb R}$ which is upper (resp. lower) semicontinuous on $\overline{D}$ is said to be a {\it viscosity subsolution} (resp. {\it viscosity supersolution}) to $Lu = f$, and we write $Lu \geq f$ (resp. $Lu \leq f$) in {\it viscosity sense}, if for any $x \in D$ and a test function $v \in C^2(x)$ satisfying
$v(x) = u(x)$ and $$v(y) > u(y) \quad (\mbox{resp. }< \,), \quad y \in \mathbb{R}^n \setminus \left\lbrace x \right\rbrace,$$
it holds that
\begin{align*}
Lv(x) \geq f(x) \quad (\mbox{resp.} \le).
\end{align*}
A function $u$ is said to be a {\it viscosity solution} if $u$ is both sub and supersolution.
We are going to prove the H\"older regularity of viscosity solutions of nonlocal Dirichlet problem
\begin{align} \label{e:pde2}
\begin{cases}
Lu = f &\text{in } D, \\
u = 0 &\text{in } \mathbb{R}^n \setminus D,
\end{cases}
\end{align}
up to the boundary using the gradient heat kernel estimates and prove higher boundary regularity using PDE tools: barriers, comparison principle, and Harnack inequality. It is important that the boundary condition in \eqref{e:pde2} is given not only on $\partial D$ but on the whole complement of $D$ because of the nonlocal character of the operator $L$. See Section \ref{s:MH} for details.
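For the model case $\phi(\lambda)=\lambda^{\alpha}$, i.e. $L=-(-\Delta)^{\alpha}$, and $D=B(0,1)$, an explicit solution (going back to Getoor) illustrates the boundary behavior we are after:

```latex
% explicit solution in the unit ball for the fractional Laplacian;
% c(n,alpha) > 0 is an explicit constant whose value is not needed here.
-(-\Delta)^{\alpha}\Big[\big(1-\vert x\vert^{2}\big)_{+}^{\alpha}\Big]
= -\,c(n,\alpha) \qquad \text{in } B(0,1),
```

so the solution of \eqref{e:pde2} with constant right-hand side vanishes near $\partial D$ exactly like $d_D(x)^{\alpha}$, which matches $\overline{\phi}(r)=r^{\alpha}$ in this case.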
The PDE approach can be applied to nonlinear integro-differential equations. There is a large literature on regularity results obtained with the PDE approach; see \cite{CS2, KL2, KL4, Bae, BK3, ROS2} and \cite{KKL}. We expect that similar results, such as the Harnack inequality and H\"older regularity, hold for nonlinear equations with our $L$.
\subsection{History}
Over the last few decades there have been many studies of nonlocal operators, and regularity theory for nonlocal operators has become one of the main areas, as it is for local operators. In \cite{BL} Bass and Levin proved H\"older regularity of harmonic functions with respect to a class of pure jump Markov processes in $\mathbb{R}^n$ whose kernels are comparable to those of symmetric stable processes. Bass and Kassmann generalized this result to kernels of variable order in \cite{BK1, BK2}. Bass also established in \cite{Bas} the Schauder estimates for stable-like operators in $\mathbb{R}^n$. All these works were carried out by probabilistic methods.
On the other hand, in \cite{Sil} Silvestre provided a purely analytic proof of H\"older estimates for solutions to integro-differential equations. His assumptions include the case of an operator with variable order. In \cite{CS2} Caffarelli and Silvestre generalized this result to fully nonlinear integro-differential equations associated with symmetric kernels comparable to the fractional Laplacian by PDE methods. Kim and Lee, in \cite{KL2} and \cite{KL4}, extended this result to fully nonlinear integro-differential equations associated with nonsymmetric kernels. A singular regularity theory for parabolic nonlocal nonlinear equations was also established in \cite{KL3}. Bae \cite{Bae} proved H\"older regularity for solutions of fully nonlinear integro-differential equations with kernels of variable order. Bae and Kassmann \cite{BK3} established Schauder estimates for integro-differential equations with kernels of variable order. In \cite{KKL}, the authors extended the regularity results of Caffarelli and Silvestre \cite{CS2} for integro-differential operators of fractional Laplacian type to integro-differential operators associated with symmetric kernels regularly varying at zero.
There are relatively few results concerning boundary regularity of solutions of the Dirichlet problem. For the boundary regularity for local operators, see \cite{DL}. Kim and Lee proved regularity up to the boundary for the fractional heat flow in \cite{KL1}. Regularity up to the boundary is well known for the fractional Laplacian, and for fully nonlinear integro-differential equations, when $D$ is a bounded $C^{1,1}$ domain; see \cite{ROS1, ROS2}. Ros-Oton and Serra also proved a similar result when $D$ is a bounded $C^{1, \alpha}$ or $C^1$ domain in \cite{ROS3}. However, there has been no boundary regularity result for operators with kernels of variable order.
\subsection{Notation} \label{s:N}
In this paper, we denote $a \land b = \min \{a,b\}$ and $a \lor b = \max\{a,b\}$. For any nonnegative functions $f$ and $g$, $f(r) \asymp g(r)$ for $r > 0$ (resp. $0<r \le r_0$) means that there is a constant $ c \ge 1$ such that $c^{-1} f(r) \le g(r) \le c f(r)$ for $r>0$ (resp. $0<r \le r_0$). We call $c$ the \textit{comparison constant} of $f$ and $g$. We also denote $B(x,r):= \{ y \in {\mathbb R}^n : |x-y|<r \}$ for the open ball and $d_D(x) := \mathrm{dist} (x, D^c)$ for the distance between $x \in D$ and $D^c$. For $n \ge 1$, let $\omega_n= \int_{{\mathbb R}^n} {\bf 1}_{ \{|y| \le 1\} } dy$ be the volume of $n$-dimensional ball.
We denote by $C(D)$ the Banach space of bounded and continuous functions on $D$, equipped with the supremum norm $\Vert f \Vert_{C(D)} := \sup_{x \in D} \vert f(x) \vert$, and denote by $C^k(D), k \geq 1$, the Banach space of $k$-times continuously differentiable functions on $D$, equipped with the norm $\Vert f \Vert_{C^k(D)} := \sum_{\vert \gamma \vert \leq k} \sup_{x\in D} \vert D^\gamma f(x) \vert$. Also, denote $C_0(D):= \{ u \in C(D): u \mbox{ vanishes at the boundary of } D \}$. For $x \in {\mathbb R}^n$, define $C^{1}(x)$ as the collection of functions which are $C^{1}$ in some neighborhood of $x$. Similarly, we define $C^2(x)$, $C^{1,1}(x)$, etc. For $0<\alpha<1$, the H\"older space $C^\alpha({\mathbb R}^n)$ is defined as
\begin{align} \label{e:Holder}
C^\alpha({\mathbb R}^n) := \left\lbrace f \in C({\mathbb R}^n) ~ | ~ \Vert f \Vert_{C^\alpha(\mathbb{R}^n)} < \infty \right\rbrace,
\end{align}
equipped with the $C^\alpha$-norm
$$ \Vert f \Vert_{C^\alpha({\mathbb R}^n)} := \Vert f \Vert_{C({\mathbb R}^n)} + \sup_{x,y \in \mathbb{R}^n, x\neq y} \frac{|f(x)-f(y)|}{|x-y|^\alpha}.$$
Also, for given open set $D \subset \mathbb{R}^n$ we define $C^\alpha(D)$ by
$$ C^\alpha(D) := \left\lbrace f \in C(D) ~| ~ \Vert f \Vert_{C^{\alpha}(D)} < \infty \right\rbrace $$
with the norm
$$ \Vert f \Vert_{C^\alpha(D)} := \Vert f \Vert_{C(D)} + \sup_{x,y \in D, x\neq y} \frac{|f(x)-f(y)|}{|x-y|^\alpha}.$$
For a given function $h : (0, \infty) \rightarrow (0, \infty)$, we define the generalized H\"older space $C^h(D)$ for a bounded open set $D$ by
\begin{align} \label{e:gen Holder}
C^h(D) := \left\{f \in C(D) ~ | ~ \Vert f \Vert_{C^h(D)} < \infty \right\rbrace,
\end{align}
equipped with the norm
$$\Vert f \Vert_{C^h(D)} := \Vert f \Vert_{C(D)} + \sup_{x,y \in D, x\neq y}\frac{|f(x)-f(y)|}{h(|x-y|)}.
$$
We define the seminorm $[\, \cdot \,]_{C^h(D)}$ by
$$ [f]_{C^h(D)} := \sup_{x,y \in D, x\neq y}\frac{|f(x)-f(y)|}{h(|x-y|)}. $$
We denote the diameter of $D$ by diam$(D)$.
Note that if $h_1 \asymp h_2$ for $0<r \le \mathrm{diam}(D)$, then the norms $\Vert \cdot \Vert_{C^{h_1}(D)}$ and $\Vert \cdot \Vert_{C^{h_2}(D)}$ are equivalent and $C^{h_1}(D) = C^{h_2}(D)$.
We say that $D\subset \mathbb{R}^n$ (when $n\ge 2$) is a $C^{1,1}$ open set if there exist a localization radius $ R_0>0 $ and a constant $\Lambda>0$ such that for every $z\in\partial D$ there exist a $C^{1,1}$-function $\varphi=\varphi_z: {\mathbb R}^{n-1}\to {\mathbb R}$ satisfying $\varphi(0)=0$, $\nabla\varphi (0)=(0, \dots, 0)$, $\| \nabla\varphi \|_\infty \leq \Lambda$, $| \nabla \varphi(x)-\nabla \varphi(w)| \leq \Lambda |x-w|$ and an orthonormal coordinate system $CS_z$ of $z=(z_1, \cdots, z_{n-1}, z_n):=(\widetilde z, \, z_n)$ with origin at $z$ such that $ D\cap B(z, R_0 )= \{y=({\tilde y}, y_n) \in B(0, R_0) \mbox{ in } CS_z: y_n > \varphi (\widetilde y) \}$.
The pair $( R_0, \Lambda)$ will be called the $C^{1,1}$ characteristics of the open set $D$.
Note that a $C^{1,1}$ open set $D$ with characteristics $(R_0, \Lambda)$ can be unbounded and disconnected, and the distance between two distinct components of $D$ is at least $R_0$.
By a $C^{1,1}$ open set in ${\mathbb R}$ with a characteristic $R_0>0$, we mean an open set that can be written as the union of disjoint intervals so that the {infimum} of the lengths of all these intervals is {at least $R_0$} and the {infimum} of the distances between these intervals is {at least $R_0$}.
\subsection{Main theorems}
The main results of this paper are the existence and the uniqueness of the viscosity solution $u$ of \eqref{e:pde}, the generalized H\"older regularity estimates for such a solution $u$, and the regularity of the quotient $u\,\phi(d_D^{-2})^{1/2}$ up to the boundary.
The boundary estimate for nonlinear PDEs has been studied for a long time, in settings where the solution behaves like a linear function; see \cite{CC} and references therein. For degenerate or singular PDEs \cite{KL3}, it has been proved that the solution can behave in various ways, just as for the fractional Laplace equation. In \cite{ROS1}, Ros-Oton and Serra applied the known techniques for local operators to the fractional Laplacian, which has a nice scaling invariance and a simple barrier of the form $x_n^{\alpha}$. On the other hand, our $\phi$ satisfies only a weak scaling condition at infinity and has a general form which allows nontrivial boundary behavior different from $x_n^{\alpha}$.
In this paper, we track $u$ at every scale to find scale-invariant uniform estimates using only the weak scaling condition at infinity. We also construct the renewal function $V(\cdot)$ of the ladder height process, defined in \eqref{e:V}, to overcome the lack of a simple barrier.
In addition, we provide the existence and uniqueness theory for given Dirichlet problem by utilizing the concept of viscosity solution.
The first result is the H\"older estimate up to the boundary for solutions of the Dirichlet problem \eqref{e:pde}. Unlike the case of the fractional Laplacian, it is inappropriate to express the H\"older regularity by a single number, since the kernel in \eqref{e:operator} has variable order. Therefore it is natural to consider a generalized H\"older space.
\begin{theorem}[H\"older estimates up to the boundary] \label{t:est-u}
Assume that $D$ is a bounded $C^{1,1}$ open set in $\mathbb{R}^n$, and $\phi$ is a Bernstein function satisfying \eqref{e:phi-wsc} and \eqref{e:exp-j}. If $f \in C (D)$, then there exists a unique viscosity solution $u$ of \eqref{e:pde2} and $u \in C^{\overline{\phi}}(D)$. Moreover, we have
$$ \Vert u \Vert_{C^{\overline{\phi}} (D)} \leq C \Vert f \Vert_{C(D)}, $$
where $\overline{\phi}(r) := \phi(r^{-2})^{-1/2}$, for some constant $C > 0$ depending only on $n, D$, and $\phi$.
\end{theorem}
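In the stable case $\phi(\lambda)=\lambda^{\alpha}$, Theorem \ref{t:est-u} reduces to a familiar statement: the exponent function becomes

```latex
% exponent function in the stable case
\overline{\phi}(r) = \phi(r^{-2})^{-1/2} = \big(r^{-2\alpha}\big)^{-1/2} = r^{\alpha},
```

so $C^{\overline{\phi}}(D)=C^{\alpha}(D)$ and the estimate becomes $\Vert u\Vert_{C^{\alpha}(D)}\le C\Vert f\Vert_{C(D)}$, the known boundary H\"older regularity for the fractional Laplacian \cite{ROS1}.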
We will prove Theorem \ref{t:est-u} using the potential operator, which is the inverse of the operator $L$, and the estimates on the transition density and its spatial derivatives; see Section \ref{s:H} for details. In the whole space ${\mathbb R}^n$, estimates on spatial derivatives of the transition density of any order are known. Based on these estimates, Bae and Kassmann established Schauder estimates for integro-differential operators with kernels of variable order in \cite{BK3}. However, in a bounded $C^{1,1}$ open set, only estimates on the first order derivative of the transition density are known. Higher order regularity up to the boundary requires further research.
It is well known that $\bar{\phi}$ is comparable to the renewal function $V$ (see Section \ref{s:renewal function}). Thus any solution $u$ of the Dirichlet problem \eqref{e:pde} is in $C^V$ up to the boundary by Theorem \ref{t:est-u}. Hence it is of importance to study the regularity of $u/V(d_D)$ up to the boundary. The following is our second main result.
\begin{theorem}[Boundary estimates] \label{t:est-u/V}
Assume that $D$ is a bounded $C^{1,1}$ open set in $\mathbb{R}^n$, and $\phi$ is a Bernstein function satisfying \eqref{e:phi-wsc} and \eqref{e:exp-j}. If $f \in C (D)$ and $u$ is the viscosity solution of \eqref{e:pde2}, then $u / V(d_D) \in C^\alpha(D)$ and
\begin{align*}
\left\Vert \frac{u}{V(d_D)} \right\Vert_{C^\alpha(D)} \leq C \Vert f \Vert_{C(D)}
\end{align*}
for some constants $\alpha > 0$ and $C > 0$ depending only on $n,D$, and $\phi$.
\end{theorem}
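Again in the stable case $\phi(\lambda)=\lambda^{\alpha}$, the renewal function satisfies $V(t)=c\,t^{\alpha}$ for a constant $c>0$ (see Section \ref{s:renewal function}), so Theorem \ref{t:est-u/V} takes the form

```latex
% boundary estimate in the stable case: V(d_D) is a multiple of d_D^alpha
\left\Vert \frac{u}{d_D^{\alpha}} \right\Vert_{C^{\alpha'}(D)}
\le C\, \Vert f \Vert_{C(D)} \quad\text{for some } \alpha'>0,
```

which recovers the boundary regularity of $u/d_D^{\alpha}$ for the fractional Laplacian proved in \cite{ROS1}.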
Our method of proving the above result follows the standard argument of Krylov in \cite{Kry}. In other words, we control the oscillation of the function $u\, \phi(d_D^{-2})^{1/2}$ near the boundary using barriers, the comparison principle, and the Harnack inequality. However, the construction of barriers is highly nontrivial. The difficulty mainly comes from the fact that the operator \eqref{e:operator} is not scale-invariant.
In fact, we will prove Theorems \ref{t:est-u} and \ref{t:est-u/V} for slightly more general operators including $-\phi(-\Delta)$. In Section \ref{s:preliminaries} we state the generalization of these theorems and collect some known results about the renewal function $V$. We prove Theorem \ref{t:est-u} in Section \ref{s:H}, and Theorem \ref{t:est-u/V} in Section \ref{s:bdry reg}.
\section{Preliminaries} \label{s:preliminaries}
The operators we consider in this paper coincide with infinitesimal generators of isotropic unimodal L\'evy processes on $C^2({\mathbb R}^n)$ functions. Thus, in Section \ref{s:Levy process} we first recall the definitions and properties of L\'evy processes and some related concepts. Then we introduce some additional conditions that will be needed in this paper. With these concepts, we state Theorems \ref{t:1} and \ref{t:2}, which are generalized versions of Theorems \ref{t:est-u} and \ref{t:est-u/V}. Throughout this paper, we prove Theorems \ref{t:1} and \ref{t:2}.
Next, in Section \ref{s:renewal function} we define the renewal function $V$, which will act as a barrier, and record some properties of the renewal function.
\subsection{L\'evy processes} \label{s:Levy process}
Let $X=(X_t,{\mathbb P}^x, t \ge 0, x \in {\mathbb R}^n)$ be a L\'evy process in $\mathbb{R}^n$ defined on the probability space $(\Omega, {\mathcal F}, {\mathbb P}^x)$ with ${\mathbb P}^x(X_0=x)=1$. For the precise definition of L\'evy process, see \cite[Definition 1.5]{Sat}. Note that ${\mathbb P}^x(X_t \in A)={\mathbb P}^0(X_t+x \in A)$. By L\'evy-Khintchine formula, the characteristic exponent of L\'evy process is given by
$$ {\mathbb E}^0[e^{i z \cdot X_t}] = e^{t\Phi(z)}, \quad z \in \mathbb{R}^n, $$
where
$$ \Phi(z) = -\frac{1}{2}z \cdot Uz + i\gamma \cdot z + \int_{\mathbb{R}^n} \big( e^{iz \cdot x} -1 -iz \cdot x {\bf 1}_{\{|x| \le 1 \} } \big) J(dx) $$
with an $n \times n$ symmetric nonnegative-definite matrix $U = (U_{ij})$, $\gamma \in \mathbb{R}^n$ and a measure $J(dx)$ on $\mathbb{R}^n \backslash \{0\}$ satisfying
$$ \int_{\mathbb{R}^n \backslash \{0\}} \big( 1 \land |x|^2 \big) J(dx) < \infty. $$
Let $(P_t)_{t \ge 0 }$ be the transition semigroup of $X$; that is,
$$P_t f(x) := {\mathbb E}^x[f(X_t)]= {\mathbb E}^0[f(x+X_t)]. $$
Now, define \textit{the infinitesimal generator} $A$ of $X$ by
$$Au(x) := \lim_{t \downarrow 0} \frac{P_t u(x) - u(x)}{t}$$
if the limit exists.
By \cite[Section 4.1]{Sko}, $Au$ is well-defined for $u \in C^2(\mathbb{R}^n)$ and represented by
$$Au(x) = \frac{1}{2} \sum_{i,j=1}^n U_{ij}\partial_{ij}u(x) + \sum_{i=1}^n \gamma_i\partial_i u(x) + \int_{\mathbb{R}^n \setminus \left\lbrace 0 \right\rbrace} \left( u(x+y) - u(x) - {\bf 1}_{\left\lbrace \vert y \vert \le 1 \right\rbrace} y \cdot \nabla u(x) \right) J(dy). $$
Throughout this paper, we will assume that $X$ is an isotropic unimodal pure jump L\'evy process with an infinite L\'evy measure, that is, $U=0$, $\gamma=0$ and $J(dy)$ is an infinite measure with an isotropic density $J(|y|)dy$, where $r \mapsto J(r)$ is non-increasing. Under these assumptions, $X$ possesses transition density $p:(0,\infty) \times {\mathbb R}_+ \rightarrow {\mathbb R}_+$ satisfying
$$P_t f(x) = {\mathbb E}^x[f(X_t)] = \int_{\mathbb{R}^n} f(y) p(t,|x-y|)dy $$
and characteristic exponent $\Phi: {\mathbb R}^n \rightarrow {\mathbb R}_+$ is an isotropic function. From now on, we regard isotropic functions $J$ and $\Phi$ as functions on ${\mathbb R}_+$.
For every open subset $D \subset {\mathbb R}^n$, let $\tau_D:= \inf \{ t>0 : X_t \notin D \}$ be the first exit time of $D$ by $X$. We define the subprocess $X^D=(X_t^D)_{t \ge 0}$, called \textit{the process $X$ killed upon exiting $D$}, by $X_t^D=X_t$ when $t < \tau_D$ and $X_t^D=\partial$ when $t \ge \tau_D$, where $\partial$ is a cemetery point. Since $X$ has a transition density, $X^D$ also possesses a transition density $p_D(t,x,y)$ with
$$p_D(t,x,y) = p(t,|x-y|)-{\mathbb E}^x[p(t-\tau_D,|X_{\tau_D}-y|);\tau_D<t],$$
and its transition semigroup $(P^D_t)_{t \ge 0}$ is represented by
$$P^D_t f(x) : = {\mathbb E}^x[f(X^D_t)] = \int_{D}f(y)p_D(t,x,y) \, dy. $$
Now we are ready to introduce main assumptions in this paper. Note that, under settings above, the infinitesimal generator can be rewritten as
\begin{align}\label{d:A}
Au(x) &= \frac{1}{2} \int_{\mathbb{R}^n \setminus \left\lbrace 0 \right\rbrace} \left( u(x+y)+ u(x-y) - 2u(x) \right)J(|y|)dy
\end{align}
for $u \in C^2(\mathbb{R}^n)$. Moreover, it is known in \cite[Lemma 2.6]{BLM} that \eqref{d:A} still holds for $u \in C^2(x) \cap C_0(\mathbb{R}^n)$.
Recall that the operator $L$ in \eqref{e:operator} with kernel $J(|y|)$ is represented as
\begin{align}
\begin{split}\label{d:L}
Lu(x) &= \frac{1}{2} \int_{\mathbb{R}^n \setminus \left\lbrace 0 \right\rbrace} \left( u(x+y)+ u(x-y) - 2u(x) \right)J(|y|)dy
\end{split}
\end{align}
for $u \in C^2(x) \cap B(\mathbb{R}^n)$ since $J$ is symmetric. We record for later use that $Au(x)=Lu(x)$ for any $u \in C^{2}(x) \cap C_0(\mathbb{R}^n)$.
We first assume that the characteristic exponent $\Phi$ satisfies the weak scaling condition with constants $a_1 \ge 1$ and $0<\alpha_1 \le \alpha_2<1$:
\begin{equation}\label{e:Phi-wsc}
a_1^{-1} \left( \frac{R}{r} \right)^{2\alpha_1} \le \frac{\Phi(R)}{\Phi(r)} \le a_1 \left( \frac{R}{r} \right)^{2\alpha_2} ~~ \text{for all} ~~ 1 \le r \le R < \infty.
\end{equation}
We also assume that the L\'evy measure of the isotropic unimodal pure jump L\'evy process $X$ has a density $y \mapsto J(|y|)$ satisfying the following: there exists a constant $a_2 >0$ such that
\begin{equation}\label{e:J}
J(r+1) \le a_2 J(r) \,\, \mbox{for all} \,\,r>0, \,\, \mbox{and} \quad r \mapsto -\frac{J'(r)}{r} \quad \mbox{is non-increasing}.
\end{equation}
\noindent Let
\begin{equation*}
\varphi(r): = \frac{J(1)}{J(r) r^n}.
\end{equation*}
By \cite{BGR1}, for any $c>0$ we have $\Phi(r^{-1})^{-1} \asymp \varphi(r)$ in $0< r \leq c$ with comparison constant depending only on $c$ and $n$. Thus, there exists a constant $a_3=a_3(n,a_1) \geq 1$ such that
\begin{equation}\label{e:varphi-wsc}
a_3^{-1} \left( \frac{R}{r} \right)^{2\alpha_1} \le \frac{\varphi(R)}{\varphi(r)} \le a_3 \left( \frac{R}{r} \right)^{2\alpha_2} ~~ \text{for all} ~~ 0 < r \le R \le 1,
\end{equation}
where $\alpha_1$ and $\alpha_2$ are constants in \eqref{e:Phi-wsc}.
Note that
\eqref{e:varphi-wsc} implies that $\varphi(r) \le c r^{2\alpha_1}$ for $r \le 1$, so by definition of $\varphi$ we see that $J(|y|)dy$ is an infinite measure.
We say that $D\subset {\mathbb R}^d$ (when $d\ge 2$) is a $C^{1,1}$ open set with $C^{1, 1}$ characteristics $(R_0, \Lambda)$ if there exist a localization radius $ R_0>0 $ and a constant $\Lambda>0$ such that for every $z\in\partial D$ there exist a $C^{1,1}$-function $\varphi=\varphi_z: {\mathbb R}^{d-1}\to {\mathbb R}$ satisfying $\varphi(0)=0$, $\nabla\varphi (0)=(0, \dots, 0)$, $\| \nabla\varphi \|_\infty \leq \Lambda$, $| \nabla \varphi(x)-\nabla \varphi(w)| \leq \Lambda |x-w|$ and an orthonormal coordinate system $CS_z$ of $z=(z_1, \cdots, z_{d-1}, z_d):=(\widetilde z, \, z_d)$ with origin at $z$ such that $ D\cap B(z, R_0 )= \{y=({\tilde y}, y_d) \in B(0, R_0) \mbox{ in } CS_z: y_d > \varphi (\widetilde y) \}$.
The pair $( R_0, \Lambda)$ will be called the $C^{1,1}$ characteristics of the open set $D$.
Note that a bounded $C^{1,1}$ open set $D$ with characteristics $(R_0, \Lambda)$ can be disconnected, and the distance between two distinct components of $D$ is at least $R_0$.
By a $C^{1,1}$ open set in ${\mathbb R}$ with a characteristic $R_0>0$, we mean an open set that can be written as the union of disjoint intervals so that the infimum of the lengths of all these intervals is at least $R_0$ and the infimum of the distances between these intervals is at least $R_0$.
Now, consider the following Dirichlet (exterior) problem on a bounded $C^{1,1}$ open set $D \subset {\mathbb R}^n$:
\begin{equation}\label{e:pde1}
\begin{cases}
Lu=f &\text{in} ~ D , \\
u=0 &\text{in} ~ \mathbb{R}^n \backslash D ,
\end{cases}
\end{equation}
where $L$ is the operator in \eqref{d:L}, which coincides with \eqref{e:pde2} when the process $X$ is a subordinate Brownian motion. We will prove the following theorems, which contain Theorems \ref{t:est-u} and \ref{t:est-u/V} (see Remark \ref{r:SBM} below), in Sections \ref{s:H} and \ref{s:bdry reg}, respectively.
\begin{theorem} [H\"older estimates up to the boundary] \label{t:1}
Assume that $D$ is a bounded $C^{1,1}$ open set in $\mathbb{R}^n$, and $X$ is an isotropic pure jump L\'evy process satisfying \eqref{e:Phi-wsc} and \eqref{e:J}. If $f \in C(D)$, then there exists a unique viscosity solution $u$ of \eqref{e:pde1} and $u \in C^{\overline{\phi}}(D)$. Moreover, we have
$$ \Vert u \Vert_{C^{\overline{\phi}} (D)} \leq C \Vert f \Vert_{C(D)}, $$
where $\overline{\phi}(r) := \varphi(r)^{1/2}$, for some constant $C > 0$ depending only on $n, D$, and $\Phi$.
\end{theorem}
\begin{theorem} [Boundary estimates] \label{t:2}
Assume that $D$ is a bounded $C^{1,1}$ open set in $\mathbb{R}^n$, and $X$ is an isotropic pure jump L\'evy process satisfying \eqref{e:Phi-wsc} and \eqref{e:J}. If $f \in C (D)$ and $u$ is the viscosity solution of \eqref{e:pde1}, then $u / V(d_D) \in C^\alpha(D)$ and
\begin{align*}
\left\Vert \frac{u}{V(d_D)} \right\Vert_{C^\alpha(D)} \leq C \Vert f \Vert_{C(D)}
\end{align*}
for some constants $\alpha>0$ and $C>0$ depending only on $n,D$, and $\Phi$.
\end{theorem}
\noindent In the next remark, we explain that the assumptions in Theorems \ref{t:est-u} and \ref{t:est-u/V} imply those in Theorems \ref{t:1} and \ref{t:2}.
\begin{remark}\label{r:SBM}
When $X$ is a subordinate Brownian motion satisfying \eqref{e:phi-wsc} and \eqref{e:exp-j}, \eqref{e:Phi-wsc} follows from $\Phi(r)=\phi(r^2)$ and \eqref{e:phi-wsc}. Moreover, by \eqref{d:j} we have
$$
J(r) = J_n(r) = \int_0^\infty (4\pi t)^{-n/2} e^{-\frac{r^2}{4t}} \mu(dt).
$$
Thus $J(r)$ is decreasing. Also, differentiating the above equation we obtain
$$ -\frac{J'_n(r)}{r}= 2\pi J_{n+2}(r), \quad r>0, $$
so $r \mapsto -\frac{J'(r)}{r}$ is non-increasing. Therefore, \eqref{e:J} holds.
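For completeness, the identity $-\frac{J_n'(r)}{r} = 2\pi J_{n+2}(r)$ follows by differentiating under the integral sign: since $\partial_r e^{-\frac{r^2}{4t}} = -\frac{r}{2t}\, e^{-\frac{r^2}{4t}}$ and $(4\pi t)^{-n/2}\,\frac{1}{2t} = 2\pi\,(4\pi t)^{-(n+2)/2}$, we have
\begin{equation*}
-\frac{J_n'(r)}{r} = \int_0^\infty (4\pi t)^{-n/2}\, \frac{1}{2t}\, e^{-\frac{r^2}{4t}}\, \mu(dt) = 2\pi \int_0^\infty (4\pi t)^{-(n+2)/2}\, e^{-\frac{r^2}{4t}}\, \mu(dt) = 2\pi J_{n+2}(r).
\end{equation*}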
Note that by \cite[Corollary 23]{BGR1} and \eqref{e:Phi-wsc} we have $\varphi(r) \asymp \Phi(r^{-1})^{-1}$. Using this and $\Phi(r)=\phi(r^2)$, the functions $\overline{\phi}$ in Theorem \ref{t:est-u} and Theorem \ref{t:1} are comparable, and therefore the two $C^{\overline{\phi}}(D)$ norms are equivalent.
\end{remark}
\subsection{Renewal function} \label{s:renewal function}
Let $Z = (Z_t)_{t \geq 0}$ be a one-dimensional L\'evy process with characteristic exponent $\Phi(\vert z \vert)$ and let $M_t := \sup \{ Z_s : 0 \le s \le t\}$ be the supremum of $Z$. Let $L=(L_t)_{t \ge 0}$ be a local time of $M - Z$ at 0, which satisfies
$$ L_t = \int_0^t {\bf 1}_{\{M_s = Z_s\}}\,ds. $$
Note that since $t \mapsto L_t$ is non-decreasing and continuous with probability 1, we can define the right-continuous inverse of $L$ by
$$L^{-1}(t) := \inf \{s>0 : L(s) >t \}. $$
The mapping $t \mapsto L^{-1}(t)$ is non-decreasing and right-continuous a.s. The process $L^{-1}=(L^{-1}_t)_ { t \ge 0 }$ with $L^{-1}_t = L^{-1}(t)$ is called \textit{the ascending ladder time process} of $Z$. \textit{The ascending ladder height process} $H=(H_t)_{t \ge 0}$ is defined as
$$ H_t := \begin{cases} M_{L^{-1}_t} (= Z_{L^{-1}_t} ) \quad &\mbox{if} \quad L_t^{-1} < \infty, \\ \infty \quad &\mbox{otherwise}. \end{cases} $$
\noindent (See \cite{Fri} for details.) Define the renewal function of the ladder height process $H$ with respect to $\Phi$ by
\begin{align} \label{e:V}
V(x) = \int_0^\infty {\mathbb P}(H_s \le x ) ds, \quad x \in \bR.
\end{align}
It is known that $V(x) =0 $ for $ x\le 0$, $V(\infty)=\infty$, and $V$ is strictly increasing and differentiable on $[0, \infty)$. Hence the inverse function $V^{-1}:[0,\infty) \rightarrow [0,\infty)$ exists.
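As a guiding example, consider the isotropic $\alpha$-stable process with $\alpha \in (0,2)$, for which $\Phi(r)=r^\alpha$. In this case it is known that $V$ is a constant multiple (depending on the normalization of the local time $L$) of
\begin{equation*}
x \mapsto x^{\alpha/2}, \quad x \ge 0,
\end{equation*}
so that the weight $V(d_D)$ appearing in Theorem \ref{t:2} reduces to the familiar boundary factor $d_D^{\alpha/2}$ from the theory of the fractional Laplacian.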
In the following lemma we collect some basic scaling properties of the renewal function from \cite{BGR1} and \cite{BGR2}.
\begin{lemma}\label{l:V0}
For any $c>0$, there exist constants $C_i(c)= C_i(c,n,a_1,\alpha_1,\alpha_2)>0$ for $i=1,2,3$ such that
\begin{equation}\label{e:V-asymp}
C_1^{-1}\varphi(r) \le V(r)^2 \le C_1\varphi(r),\quad 0<r \le c,
\end{equation}
\begin{equation}\label{e:V-wsc}
C_2^{-1} \left( \frac{R}{r} \right)^{\alpha_1} \le \frac{V(R)}{V(r)} \le C_2 \left( \frac{R}{r} \right)^{\alpha_2}, \quad 0<r\le R \le c \quad \mbox{and}
\end{equation}
\begin{equation}\label{e:V-inv-wsc}
C_3^{-1} \left(\frac{T}{t} \right)^{1/\alpha_2} \le \frac{V^{-1}(T)}{V^{-1}(t)} \le C_3 \left(\frac{T}{t} \right)^{1/\alpha_1}, \quad 0<t \le T < V(c).
\end{equation}
\end{lemma}
\noindent{\bf Proof.} By \cite[Corollary 3]{BGR1} and \cite[Proposition 2.4]{BGR2}, we have
\begin{equation*}
(V(r))^{-2} \asymp \Phi(r^{-1}), \quad r>0,
\end{equation*}
with comparison constant depending only on $n$. Combining this with $\Phi(r^{-1})^{-1} \asymp \varphi(r)$ for $0<r\le c$, we conclude \eqref{e:V-asymp}.
By \eqref{e:V-asymp} and \eqref{e:varphi-wsc} we have \eqref{e:V-wsc}. Using \cite[Remark 4]{BGR1}, we also obtain the weak scaling property of the inverse function in \eqref{e:V-inv-wsc}. {\hfill $\Box$ \bigskip}
The most important property of the renewal function for this paper is the following: $w(x):=V(x_n)$ is a solution of the Dirichlet problem
\begin{equation}\label{e:pde-V}
\begin{cases}
Lw=0 &\text{in} \quad \mathbb{R}^n_+, \\
w=0 &\text{in} \quad \mathbb{R}^n \backslash \mathbb{R}^n_+,
\end{cases}
\end{equation}
where $L$ is of the form \eqref{d:L} and $\bR_+^n := \left\{x=(x_1,...,x_n) \in \bR^n ~|~ x_n >0 \right\}$ is the upper half-space (see \cite[Theorem 3.3]{GKK}).
The following estimates for the derivatives of $V$ are from \cite[Proposition 3.1]{GKK} and \cite[Theorem 1.2]{KR1}.
\begin{lemma}\label{l:V}
Assume $X$ is an isotropic pure jump L\'evy process satisfying \eqref{e:Phi-wsc} and \eqref{e:J}. Then $r \mapsto V(r)$ is twice-differentiable for any $r>0$. Moreover, for any $c>0$ there exists a constant $C(c)=C(c,n,a_1,\alpha_1, \alpha_2)>0$ such that
\begin{equation}
\label{e:V-diff}
|V''(r)| \le C \frac{V'(r)}{r \land c}, \quad V'(r) \le C \frac{V(r)}{r \land c}.
\end{equation}
\end{lemma}
\noindent We are going to utilize the space $C^V(D)$ in Section \ref{s:H} and adopt $V(d_D)$ as a barrier in Section \ref{s:bdry reg}.
\section{H\"older regularity up to the boundary} \label{s:H}
In this section, we give the proof of Theorem \ref{t:1}. First we introduce the following Dirichlet heat kernel estimates from \cite[Corollary 1.6]{CKS} and \cite[Theorems 1.1 and 1.2]{KR2}, reformulated for use in our proofs.
\begin{theorem}\label{t:est-p}
Let $X$ be an isotropic unimodal L\'evy process satisfying \eqref{e:Phi-wsc} and \eqref{e:J}. Let $D \subset \mathbb{R}^n$ be a bounded $C^{1,1}$ open set satisfying $\mathrm{diam}(D) \le 1$ and $p_D(t,x,y)$ be the Dirichlet heat kernel for $X$ on $D$. Then $x \mapsto p_D(t,x,y)$ is differentiable for any $y \in D, t >0$, and there exist constants $C_i=C_i(n,D,a_1,a_2,\alpha_1,\alpha_2,\Phi(1))>0$, $i=1,\dots,4$ satisfying the following estimates:
\begin{enumerate}
\item[(a)] For any $(t,x,y) \in (0,1] \times D \times D$,
\begin{align*}
p_D(t,x,y) \le C_1 \left( 1 \land \frac{V(d_D(x))}{t^{1/2}} \right) \left( 1 \land \frac{V(d_D(y))}{t^{1/2}} \right) p\left( t, |x-y|/4 \right)
\end{align*}
and
$$|\nabla_x p_D(t,x,y) | \le C_2 \left[ \frac{1}{d_D(x) \land 1} \lor \frac{1}{V^{-1}(\sqrt t)} \right] p_D(t,x,y). $$
\item[(b)] For any $(t,x,y) \in [1,\infty) \times D \times D$,
$$ p_D(t,x,y) \le C_3 e^{-\lambda_1 t} V(d_D(x)) V(d_D(y)) $$
and
$$|\nabla_x p_D(t,x,y) | \le C_4 \left[ \frac{1}{d_D(x) \land 1} \lor \frac{1}{V^{-1}(1)} \right] p_D(t,x,y), $$
\end{enumerate}
where $-\lambda_1 = -\lambda_1(n,a_1,a_2,\alpha_1,\alpha_2,\Phi(1))<0$ is the largest eigenvalue of the generator of $X^{B(0,1)}$.
\end{theorem}
In the estimates of Theorem \ref{t:est-p}, we used $d_D(x) \lor d_D(y) \le \mathrm{diam}(D) \le 1$, $V(r) \asymp \varphi(r)^{1/2}$ for $0<r\le 1$ and $V^{-1}(\sqrt t) \asymp \varphi^{-1}(t)$ to reformulate the theorems in our references. In addition, the estimates in \cite[Corollary 1.6]{CKS}
are of the form
$$ p_D(t,x,y) \le ce^{-\lambda(D)t} V(d_D(x))V(d_D(y))$$
where $-\lambda(D)<0$ is the largest eigenvalue of the generator of $X^{D}$. Using \cite[(6.4.14) and Lemma 6.4.5]{FOT}, we have $\lambda(D)=\inf \{ \int_{{\mathbb R}^n} -Lu(x)u(x)dx \, | \, \Vert u \Vert_2 =1, \mbox{supp}(u) \subset D \}$, and hence $\lambda_1 \le \lambda(D)$. This implies the heat kernel estimate in Theorem \ref{t:est-p}(b).
Without loss of generality, we will always assume $\mathrm{diam}(D) \le 1$ in this paper.
\subsection{Potential operator for the killed process of subordinate Brownian motion}
In this subsection, we assume that $D \subset {\mathbb R}^n$ is a bounded $C^{1,1}$ open set with diam$(D)\le 1$ and $X$ is a L\'evy process satisfying \eqref{e:Phi-wsc} and \eqref{e:J}, which are conditions in Theorem \ref{t:est-p}. We define the \textit{Green function of $X^D$} by
$$ G^D (x,y) = \int_0^\infty p_D(t,x,y) dt $$
for $x,y \in D$ with $x \neq y$. Note that by Theorem \ref{t:est-p}(b), $G^D(x,y)$ is finite for any $x \neq y$.
We define a potential operator $R^D$ for $X^D$ as
\begin{equation}\label{d:R}
R^D f(x) := \int_0^\infty \int_{D} p_D(t,x,y) f(y) dy dt.
\end{equation}
Using definitions of $P_t^D$ and $G^D$, we also have
\begin{align} \label{e:R^D}
R^Df(x) = \int_{D
\backslash \{x\}} G^D(x,y) f(y) dy = \int_0^\infty P_t^D f(x) dt.
\end{align}
In the next subsection, we will see that $R^D$ acts as the inverse of $-A$.
First we prove an interior H\"older estimate for $R^D f$. For later use, we state the following proposition for functions in $L^\infty(D)$.
\begin{proposition}\label{p:R}
For any $f \in L^\infty(D)$ and any ball $B(x_0,r) \subset D$ satisfying $d_D(x_0) \le 2r$, we have $R^Df \in C^V(B/2)$ and there is a constant $C=C(n,a_1,a_2,\alpha_1,\alpha_2,D,\Phi(1))>0$ satisfying
\begin{equation}
\label{e:R}
\Vert R^Df \Vert_{C^V(B/2)} \le C \left( \Vert f \Vert_{L^\infty(D)} + \Vert R^Df \Vert_{C(B)} \right).
\end{equation}
Here we have denoted $B=B(x_0,r)$ and $B/2=B(x_0,r/2)$.
\end{proposition}
\noindent{\bf Proof.} We have $\vert x-y \vert < r$ for any $x, y \in B/2$. Thus, we have
\begin{align*}
[R^Df]_{C^V(B/2)}
&\leq \sup_{\vert h \vert \le r} \sup_{x \in B/2} \frac{\vert R^Df(x+h) - R^Df(x) \vert}{V(\vert h \vert)} \\
&\leq \sup_{\vert h \vert \le r} \int_0^\infty \sup_{x\in B/2} \frac{\vert P_s^Df(x+h) - P_s^Df(x) \vert}{V(\vert h \vert)} \, ds \\
&\leq \sup_{|h| \le r} \left( \int_0^{V(|h|)V(r)} + \int_{V(|h|)V(r)}^{V(r)^2} + \int_{V(r)^2}^\infty \right) \sup_{x\in B/2} \frac{|P_s^D f(x+h)-P_s^D f(x)|}{V(|h|)} \, ds \\
&=: \sup_{|h| \le r} \left( {\rm \rom{1}} + {\rm \rom{2}}+ {\rm \rom{3}} \right).
\end{align*}
To estimate ${\rm \rom{1}}$, we use $\vert P_s^D f(x) \vert \le \Vert f \Vert_{L^\infty(D)}$ so that
\begin{equation} \label{e:R1}
\begin{split}
{\rm \rom{1}} &=\int_0^{V(|h|)V(r)} \sup_{x\in B/2} \frac{|P_s^D f(x+h)-P_s^D f(x)|}{V(|h|)} \, ds \\
&\le \int_0^{V(|h|)V(r)} \frac{2\Vert f \Vert_{L^\infty(D)}}{V(|h|)} \, ds \le c_1V(r) \Vert f \Vert_{L^\infty(D)}.
\end{split}
\end{equation}
To estimate ${\rm \rom{2}}$, we will use Theorem \ref{t:est-p}(a). Since $s\le V(r)^2$ and $x \in B/2$, we obtain
$$ \frac{1}{d_D(x) \land 1} \lor \frac{1}{V^{-1}(\sqrt{s})} \le \frac{c_2}{V^{-1}(\sqrt{s})}. $$
Therefore, for $s \le V(r)^2$ we have
$$|\nabla_x P_s^D f(x)| \le c_3 \left( \frac{1}{d_D(x) \land 1} \lor \frac{1}{V^{-1}(\sqrt{s})}\right) \Vert P_s^D f\Vert_{L^\infty(D)} \le \frac{c_2c_3 }{V^{-1}(\sqrt s)} \Vert f \Vert_{L^\infty(D)} $$
for every $x \in D$. Here we used Theorem \ref{t:est-p}(a) for the first inequality. Using the above inequality we conclude
\begin{equation} \label{e:R2-1}
\begin{split}
{\rm \rom{2}} &= \int_{V(|h|)V(r)}^{V(r)^2} \sup_{x\in B/2} \frac{|P_s^D f(x+h)-P_s^D f(x)|}{V(|h|)} \, ds \\
&\le \frac{|h|}{V(|h|)} \int_{V(|h|)V(r)}^{V(r)^2} \sup_{x\in B/2} |\nabla_x{P_s^D f(x^*)}| \, ds \\
&\le c_2c_3 \Vert f \Vert_{L^\infty(D)} \frac{|h|}{V(|h|)} \int_{V(|h|)V(r)}^{V(r)^2}\frac{1}{V^{-1}(\sqrt s)} \, ds,
\end{split}
\end{equation}
where $x^*$ is a point on the segment between $x$ and $x+h$. Using the change of variables $s=V(t)^2$ in the first equality and Lemma \ref{l:V} for the second inequality, we get
\begin{align}\label{e:R2-2}
\int_{V(r)V(|h|)}^{V(r)^2} \frac{1}{V^{-1}(\sqrt s)} ds
= 2\int_{V^{-1}(V(r)^{1/2}V(|h|)^{1/2})}^r \frac{V(t)V'(t)}{t} dt
\le c_4 \int_\varepsilon^r \frac{V(t)}{t} \frac{V(t)}{t} dt,
\end{align}
where $\varepsilon := V^{-1}(V(|h|)^{1/2}V(r)^{1/2})$. Also, by \eqref{e:V-wsc} we have
\begin{align*}
\frac{V(t)}{V(\varepsilon)} \le c_5\left(\frac{t}{\varepsilon} \right)^{\alpha_2} \le c_5 \frac{t}{\varepsilon}, \quad t \ge \varepsilon
\end{align*}
and
$$\int_0^r \frac{V(t)}{t} dt = \int_0^r \frac{V(r)}{t} \frac{V(t)}{V(r)} dt \le c_6V(r) \int_0^r \frac{1}{t} \left(\frac{t}{r} \right)^{\alpha_1} dt \le c_7V(r). $$
Using the above two inequalities, we deduce from \eqref{e:R2-2} that
\begin{align}
\begin{split} \label{e:R2-3}
\int_{V(r)V(|h|)}^{V(r)^2} \frac{1}{V^{-1}(\sqrt s)} ds &\le c_4\int_\varepsilon^r \frac{V(t)}{t} \frac{V(t)}{t} dt
\le c_8 \frac{V(\varepsilon)}{\varepsilon} \int_0^r \frac{V(t)}{t}dt \\ &\le c_9 V(r) \frac{V(\varepsilon)}{\varepsilon} = c_9 V(r) \frac{V(|h|)^{1/2}V(r)^{1/2}}{V^{-1}(V(|h|)^{1/2}V(r)^{1/2})}.
\end{split}
\end{align}
Combining \eqref{e:R2-1} and \eqref{e:R2-3}, we conclude that
\begin{align*}
&{\rm \rom{2}} \le c_{10} \Vert f \Vert_{L^\infty(D)} \frac{|h|}{V(|h|)} \cdot V(r) \frac{V(|h|)^{1/2}V(r)^{1/2}}{V^{-1}(V(|h|)^{1/2}V(r)^{1/2})} \\
&= c_{10} \Vert f \Vert_{L^\infty(D)} V(r) \frac{V(r)}{u}\frac{V^{-1}(u^2/V(r))}{ V^{-1}(u)} \le c_{11} \Vert f \Vert_{L^\infty(D)}V(r) \left( \frac{u}{V(r)} \right)^{\frac{1}{\alpha_2} -1} \le c_{11} V(r) \Vert f \Vert_{L^\infty(D)},
\end{align*}
where $u := V(|h|)^{1/2}V(r)^{1/2} \le V(r)$. Here we used \eqref{e:V-inv-wsc} and $\alpha_2<1$ for the second line.
For \rom{3}, first note that for any $V(r)^2 \le s \le 1$ and $x \in B/2$,
$$\frac{1}{d_D(x) \land 1} \lor \frac{1}{V^{-1}(\sqrt s)} \lor \frac{1}{V^{-1}(1)} \le \frac{2}{r} \lor \frac{1}{V^{-1}(\sqrt s)} \lor \frac{1}{V^{-1}(1)} \le \frac{2}{r}.$$
So, by Theorem \ref{t:est-p}(a) we have for $V(r)^2 \leq s \leq 1$,
\begin{equation} \label{e:R3-1}
\begin{split}
|\nabla_x p_D(s,x,y)| &\le \frac{c_{12}}{r} p_D(s,x,y)
\le \frac{c_{13}}{r}\left(1 \land {\frac{V(d_D(x))}{s^{1/2}}}\right) \left(1 \land {\frac{V(d_D(y))}{s^{1/2}}}\right) p(s, |x-y|/4) \\
&\le \frac{c_{14}}{r} \frac{V(r)}{\sqrt s} p(s,|x-y|/4).
\end{split}
\end{equation}
Here in the second line we used $V(d_D(x)) \le c_{15}V(r)$, which follows from \eqref{e:V-wsc} and $d_D(x) \le 2r$. Thus, we obtain
\begin{equation} \label{e:1}
\begin{split}
|P^D_sf(x+h)&-P^D_sf(x)| \le |h| |\nabla_x P^D_s f(x^*)|
\le |h| \Vert f \Vert_{L^\infty(D)} \int_{D} |\nabla_x p_D(s,x^*,y) | \, dy \\
&\le c_{16} \vert h \vert \Vert f \Vert_{L^\infty(D)} \frac{V(r)}{r \sqrt s} \int_{D} p\left(s,\frac{|x^*-y|}{4} \right) \, dy \le c_{17} \vert h \vert \Vert f \Vert_{L^\infty(D)} \frac{V(r)}{r \sqrt s},
\end{split}
\end{equation}
where $x^*$ is a point on the line segment between $x$ and $x+h$. Here we used $\int_{\mathbb{R}^n}p(s,|y|/4)\,dy =4^n$ for the last inequality.
For $s \ge 1$, using Theorem \ref{t:est-p}(b) we have
\begin{equation}\label{e:R3-2}
|\nabla_x p_D(s,x,y)| \le \frac{c_{18}}{r} p_D (s,x,y) \le \frac{c_{19}}{r} e^{-\lambda_1 s} V(d_D(x))V(d_D(y)) \le \frac{c_{20}V(r)}{r} e^{-\lambda_1 s}.
\end{equation}
Here we used $d_D(x) \le 2r$, $d_D(y) \le 1$ and \eqref{e:V-wsc} in the last inequality. Thus we arrive at
\begin{equation} \label{e:2}
\begin{split}
|P^D_sf(x+h)-P^D_sf(x)| &\le |h| |\nabla_x P^D_s f(x^*)|
\le |h| \Vert f \Vert_{L^\infty(D)} \int_{D} |\nabla_x p_D(s,x^*,y) | \, dy \\
&\le c_{21} \vert h \vert \Vert f \Vert_{L^\infty(D)} \frac{V(r)}{r} \int_D e^{-\lambda_1 s} \, dy \le c_{22} \vert h \vert \Vert f \Vert_{L^\infty(D)} \frac{V(r)}{r} e^{-\lambda_1 s},
\end{split}
\end{equation}
where $x^*$ is a point on the line segment between $x$ and $x+h$.
Now combining \eqref{e:1} and \eqref{e:2}, we obtain
\begin{equation} \label{e:R3-3}
\begin{split}
{\rm \rom{3}}&=\int_{V(r)^2}^\infty \frac{|P_s^D f(x+h)-P_s^D f(x)|}{V(|h|)} \,ds= \left(\int_{V(r)^2}^1 + \int_1^\infty \right) \frac{|P_s^D f(x+h)-P_s^D f(x)|}{V(|h|)}ds \\
&\le c_{23} \frac{V(r)}{r}\frac{|h|}{V(|h|)} \Vert f \Vert_{L^\infty(D)} \left( \int_{V(r)^2}^1 \frac{1}{ \sqrt s} \, ds + \int_1^\infty e^{-\lambda_1 s} \, ds \right) \\
&\le c_{24} \Vert f \Vert_{L^\infty(D)}(2-2V(r)+\lambda_1^{-1}).
\end{split}
\end{equation}
The last inequality follows from $\frac{V(r)}{V(|h|)} \le c_{25} \big( \frac{r}{|h|} \big)^{\alpha_2} \le c_{25} \big( \frac{r}{|h|} \big)$ since $|h| \le r$.
Combining \eqref{e:R1}, \eqref{e:R2-3} and \eqref{e:R3-3}, we conclude
$$ [R^Df]_{C^V(B/2)} \le c_{26}(1+V(r))\Vert f \Vert_{L^\infty(D)} \le c_{26}(1+V(1)) \Vert f \Vert_{L^\infty(D)}. $$
The above inequality together with $\Vert R^Df \Vert_{C^V(B/2)}= [R^Df]_{C^V(B/2)} + \Vert R^Df \Vert_{C(B/2)}$ finishes the proof.{\hfill $\Box$ \bigskip}
We next provide an upper bound of $R^D f$ near the boundary. In the proof we apply the estimates on the Green function in \cite[Theorem 1.6]{GKK}.
\begin{lemma} \label{l:u}
There exists a constant $C=C(n,a_1,a_2,\alpha_1,\alpha_2,D,\Phi(1)) > 0$ such that
\begin{align*}
\vert R^D f(x) \vert \leq C \Vert f \Vert_{L^\infty(D)} V(\mathrm{diam}(D)) V(d_D(x))
\end{align*}
for any $f \in L^\infty(D)$ and $x \in D$.
\end{lemma}
\noindent{\bf Proof.}
The estimate on the Green function in \cite[Theorem 1.6]{GKK} and \eqref{e:V-asymp} give that for any $x,y \in D$,
\begin{equation} \label{e:Green}
\begin{split}
G^D (x, y) &\leq c_1 \frac{\varphi(\vert x-y \vert)}{\vert x-y \vert^n} \left( 1 \wedge \frac{\varphi(d_D(x))}{\varphi(\vert x-y \vert)} \right)^{1/2} \left( 1 \wedge \frac{\varphi(d_D(y))}{\varphi(\vert x-y \vert)} \right)^{1/2} \\
&\leq c_1 \frac{\varphi(\vert x-y \vert)^{1/2}}{\vert x-y \vert^n} \varphi(d_D(x))^{1/2} \le c_2 \frac{V(\vert x-y \vert)}{\vert x-y \vert^n} V(d_D(x)).
\end{split}
\end{equation}
Substituting \eqref{e:Green} into \eqref{e:R^D} we obtain
\begin{align}
\vert R^D f(x) \vert &\leq c_3 \Vert f \Vert_{L^\infty(D)} V(d_D(x)) \int_{D} \frac{V(\vert x-y \vert)}{\vert x-y \vert^n} dy.
\end{align}
Also, using \eqref{e:V-wsc} we have
\begin{align}\begin{split}
\int_{D} \frac{V (\vert x-y \vert)}{\vert x-y \vert^n} \, dy &\leq \int_{B(x,\mathrm{diam}(D))}\frac{V (\vert x-y \vert)}{\vert x-y \vert^n} \, dy \le c_4 \int_0^{\mathrm{diam}(D)} \frac{V(r)}{r} \, dr \\ &\leq c_5 \frac{V(\mathrm{diam}(D))}{\mathrm{diam}(D)^{\alpha_1}} \int_0^{\mathrm{diam}(D)} r^{\alpha_1 - 1} \, dr \leq c_6 V(\mathrm{diam}(D)).
\end{split}\end{align}
Combining the above two inequalities, we obtain the lemma.
{\hfill $\Box$ \bigskip}
\begin{remark} \label{r:3.7}
As a corollary of Lemma \ref{l:u}, we have
\begin{align*}
\Vert R^D f \Vert_{L^\infty(D)} \leq C\Vert f \Vert_{L^\infty(D)}.
\end{align*}
Hence we can simplify \eqref{e:R} to
\begin{align} \label{e:CV}
\Vert R^D f \Vert_{C^{V}(B/2)} \leq \tilde{C} \Vert f \Vert_{L^\infty(D)}
\end{align}
for some constant $\tilde{C} = \tilde{C}(n, a_1,a_2, \alpha_1, \alpha_2, D,\Phi(1)) > 0$.
\end{remark}
Now we are ready to prove Theorem \ref{t:1} for the function $R^D f$.
\begin{proposition}
\label{p:R^D}
Assume $f \in L^\infty(D)$. Then, $R^D f \in C^{V}(D)$ and there exists a constant $C>0$ such that
\begin{equation}
\label{e:RD}
\Vert R^D f \Vert_{C^{V}(D)} \le C \Vert f \Vert_{L^\infty(D)}.
\end{equation}
The constant $C>0$ depends only on $n,a_1,a_2,\alpha_1,\alpha_2,D$ and $\Phi(1)$.
\end{proposition}
\noindent{\bf Proof.}
By \eqref{e:CV} we have
\begin{align} \label{e:Holder ineq}
\vert R^Df(x) - R^Df(y) \vert &\leq c_1 \Vert f \Vert_{L^\infty(D)} V(\vert x-y \vert)
\end{align}
for all $x, y$ satisfying $\vert x-y \vert < d_D(x)/2$. We want to show that \eqref{e:Holder ineq} holds, possibly with a larger constant, for all $x,y \in D$.
Let $(R_0, \Lambda)$ be the $C^{1,1}$ characteristics of $D$. Then $D$ can be covered by finitely many balls of the form $B(z_i, d_D(z_i)/2)$ with $z_i \in D$ and finitely many sets of the form $B(z^*_j, R_0) \cap D$ with $z^*_j \in \partial D$. Thus, it is enough to show that \eqref{e:Holder ineq} holds for all $x, y \in B(z_j^*, R_0) \cap D$ possibly with a larger constant.
Fix $B(z_0^*, R_0) \cap D$ and assume that the outward normal vector at $z_0^*$ is $(0, \cdots, 0, -1)$; this is possible because the operator is invariant under rotations. Now let $x=(x', x_n)$ and $y = (y', y_n)$ be two points in $B(z_0^*, R_0) \cap D$, and let $r = \vert x - y \vert$. Let us define for $k \geq 0$
\begin{align*}
x^k = (x', x_n + \lambda^k r) \quad \text{and} \quad y^k = (y', y_n + \lambda^k r),
\end{align*}
for some $1- 2^{-1} (1+\Lambda^2)^{-1/2} \leq \lambda <1$. Since $(1+\Lambda^2)^{-1/2} (x^k)_n \leq d_D(x^k)$, we have
\begin{align*}
\vert x^k - x^{k+1} \vert = \lambda^k (1-\lambda) r \leq \frac{1}{2\sqrt{1+\Lambda^2}} (x^k)_n \leq \frac{1}{2} d_D(x^k).
\end{align*}
Thus, we have from \eqref{e:Holder ineq} that
\begin{align*}
\vert R^Df(x^k) - R^Df(x^{k+1}) \vert \leq c_1 \Vert f \Vert_{L^\infty(D)} V ( \vert x^k - x^{k+1} \vert ) = c_1 \Vert f \Vert_{L^\infty(D)} V ( \lambda^k (1- \lambda) r )
\end{align*}
and similarly that $\vert R^Df(y^k) - R^Df(y^{k+1}) \vert \leq c_1 \Vert f \Vert_{L^\infty(D)} V ( \lambda^k (1- \lambda) r )$. Moreover, note that the distance from the line segment joining $x^0$ and $y^0$ to the boundary $\partial D$ is more than $r(1-\Lambda/2)$. Thus, this segment can be split into finitely many line segments of length less than $r(1-\Lambda/2)/2$, and the number of such segments depends only on $\Lambda$. Therefore, we have $\vert R^Df(x^0) - R^Df(y^0) \vert \leq c_2 \Vert f \Vert_{L^\infty(D)} V(r)$ and hence
\begin{align*}
&\vert R^Df(x) - R^Df(y) \vert \\ &\leq \vert R^Df(x^0) - R^Df(y^0) \vert + \sum_{k \geq 0} \big( \vert R^Df(x^k) - R^Df(x^{k+1}) \vert + \vert R^Df(y^k) - R^Df(y^{k+1}) \vert \big) \\
&\leq c_3 \Vert f \Vert_{L^\infty(D)} \big( V(r) + \sum_{k \geq 0} V( \lambda^k(1- \lambda) r) \big) \\
&\leq c_4 \Vert f \Vert_{L^\infty(D)} V(r) \bigg( 1 + c_5 \sum_{k \geq 0} \big( \lambda^k (1-\lambda) \big)^{\alpha_1} \bigg) \\
&\leq c_6 \Vert f \Vert_{L^\infty(D)} V(r).
\end{align*}
Recall that $r=|x-y|$. This finishes the proof. {\hfill $\Box$ \bigskip}
In the next subsection, we will prove that the function $u=-R^D f$ is the unique viscosity solution for \eqref{e:pde1} when $f \in C(D)$.
\subsection{Nonlocal operator and infinitesimal generator} \label{s:MH}
In this section we establish the relation between viscosity solutions of \eqref{e:pde1} and solutions of the following:
\begin{equation}
\label{e:pde3}
\begin{cases}
Au=f &\text{in} ~ D , \\
u=0 &\text{in} ~ \mathbb{R}^n \backslash D.
\end{cases}
\end{equation}
In \cite{BLM}, the authors discussed the relation between the operators $A$ and $L$, for instance their domains and values; see \cite{BLM} for the application to heat equations.
At the beginning of this section we apply the strategies of \cite{BLM} to our setting and obtain some related properties. After that, we obtain a comparison principle for viscosity solutions. Combining these results, we finally obtain the existence and uniqueness for the Dirichlet problems \eqref{e:pde1} and \eqref{e:pde3}; moreover, these two solutions coincide under some conditions. Also, in Section \ref{s:Harnack} we obtain a Harnack inequality, which is one of the key ingredients for the standard argument of Krylov in \cite{Kry}. In Section \ref{s:pf of Thm 1.2} we will make use of the Harnack inequality and the comparison principle to prove Theorem \ref{t:2}. \\
Let $D \subset {\mathbb R}^n$ be a bounded $C^{1,1}$ open set and let
$${\mathcal D}={\mathcal D}(D):= \{ u \in C_0(D) : Au \in C(D) \} $$
be the domain of operator $A$. Recall that by \cite[Lemma 2.6]{BLM} we have
\begin{equation}
\label{e:gen2} Au(x) = Lu(x)
\end{equation}
for any $u \in C^2(x) \cap C_0(\mathbb{R}^n)$, $x \in D$. We first show that $u=-R^D f$ satisfies \eqref{e:pde3} when $f$ is continuous.
\begin{lemma}\label{t:A} Let $f \in C(D)$ and define $u = - R^D f$. Then, $u$ is a solution for \eqref{e:pde3}.
\end{lemma}
\noindent{\bf Proof.} First we claim that for any $u \in {\mathcal D}$ and $x \in D$,
\begin{equation}
\label{e:A}
Au(x) = \lim_{t \downarrow 0} \frac{P_t^D u(x) - u(x)}{t}.
\end{equation}
To show \eqref{e:A}, we follow the proof of \cite[Theorem 2.3]{BLM}. Note that our domain of the operator is slightly different from that in \cite[(2.8)]{BLM}.
We first observe that for any $u \in {\mathcal D}$ and $x \in D$,
\begin{align*}
P_t^D u(x) - P_t u(x) &= {\mathbb E}^x u(X_t^D) - {\mathbb E}^x u(X_t) \\
&= {\mathbb E}^x [u(X_t^D) {\bf 1}_{\{\tau_D \ge t \}}] -{\mathbb E}^x [u(X_t) {\bf 1}_{\{\tau_D \ge t \}}] - {\mathbb E}^x [u(X_t) {\bf 1}_{\{\tau_D < t \}}] \\ &= - {\mathbb E}^x [u(X_t){\bf 1}_{\{\tau_D < t\}}].
\end{align*}
Indeed, the first two terms in the second line cancel, since $X_t^D = X_t$ on $\{\tau_D \ge t\}$. Hence
\begin{equation}
\label{e:P}
\frac{P_t^D u(x)-u(x)}{t} - \frac{P_t u(x)- u(x)}{t} = -\frac{{\mathbb E}^x [ u(X_t) {\bf 1}_{\{\tau_D <t \}}]}{t}= \frac{{\mathbb E}^x[\big(u(X_{\tau_D}) - u(X_t)\big) {\bf 1}_{\{\tau_D <t\}}]}{t}.
\end{equation}
Meanwhile, by the strong Markov property we obtain
\begin{align*}
\left| {\mathbb E}^x\left[ \big(u(X_{\tau_D}) - u(X_t)\big) {\bf 1}_{\{\tau_D <t\}}\right] \right| \le {\mathbb E}^x\left[ \left|{\mathbb E}^{X_{\tau_D}}[u(X_0)-u(X_{t-\tau_D})] \right| {\bf 1}_{\{\tau_D <t \}} \right].
\end{align*}
Since $u \in C_0(D)$ is uniformly continuous and the L\'evy process is stochastically continuous, for any $\varepsilon>0$ there is $\delta=\delta(\varepsilon)>0$ such that
$$ | {\mathbb E}^z[u(X_s)]-u(z)| < \varepsilon $$
for any $z \in D$ and $0<s \le \delta$. Combining the above two displays we conclude
\begin{align*}
\big| {\mathbb E}^x[\big(u(X_{\tau_D}) - u(X_t)\big) {\bf 1}_{\{\tau_D <t\}}] \big| \le \varepsilon {\mathbb P}^x (\tau_D < t)
\end{align*}
for $0<t \le \delta$. Since $D$ is open, for any $x \in D$ there is a constant $r_x>0$ such that $B(x,r_x) \subset D$. By \cite[Theorem 5.1 and Proposition 2.27(d)]{BSW}, there exists a constant $M>0$ such that
$$\frac{{\mathbb P}^x(\tau_D < t)}{t} \le \frac{{\mathbb P}^x (\tau_{B(x,r_x)} < t)}{t} \le M \quad \mbox{for all} \quad t>0.$$
Combining the above inequalities we obtain that
\begin{align*}
\lim_{t \downarrow 0} \left| \frac{P_t^D u(x) - u(x)}{t} - Au(x) \right| &= \lim_{t \downarrow 0} \left| \frac{P_t^D u(x)-u(x)}{t} - \frac{P_t u(x)- u(x)}{t} \right| \\
&\le \varepsilon \lim_{t \downarrow 0}\frac{{\mathbb P}^x [\tau_D < t]}{t} \le \varepsilon M.
\end{align*}
Since $\varepsilon>0$ is arbitrary, this proves the claim.
Now we prove the lemma. Note that $u=0$ in $D^c$ immediately follows from the definition of $R^D$. Then, by \eqref{e:A} and \eqref{e:R^D} we have that for $x \in D$,
\begin{align}
\begin{split}\label{e:Au}
Au(x) &= A(-R^Df)(x) = -\lim_{t \downarrow 0}\frac{ P^D_t (R^D f) (x) - R^D f(x)}{t} \\
&=-\lim_{t \downarrow 0} \frac{1}{t} \left[ P^D_t \Big( \int_0^\infty P_s^D f(\cdot)ds \Big)(x) - \int_0^\infty P_s^D f(x)ds \right] \\
&=\lim_{t \downarrow 0} \frac{1}{t} \left( -\int_0^\infty P_{t+s}^D f(x)ds + \int_0^\infty P_s^D f(x )ds \right) \\
&= \lim_{t \downarrow 0} \frac{1}{t} \left( - \int_t^\infty P_{s}^D f(x)ds + \int_0^\infty P_s^D f(x )ds \right) \\
&= \lim_{t \downarrow 0} \frac{\int_0^t P_s^D f(x)ds}{t} = f(x).
\end{split}
\end{align}
Indeed, the third line follows from the semigroup property $P_s^D P_t^D= P_{s+t}^D$ and the fact that $R^D f \in C_0(D)$, which follows from Proposition \ref{p:R}. This finishes the proof. {\hfill $\Box$ \bigskip}
The next lemma shows that every solution of \eqref{e:pde3} is a viscosity solution of \eqref{e:pde1}.
\begin{lemma}
\label{l:rel}
Assume that $f \in C(D)$ and $u \in {\mathcal D}$ satisfies $Au = f$ in $D$. Then, $u$ is a viscosity solution of $Lu = f$.
\end{lemma}
\noindent{\bf Proof.} For any $x_0 \in D$ and test function $v \in C^2({\mathbb R}^n)$ with $v(x_0) = u(x_0)$ and $v(y) > u(y)$ for $y \in {\mathbb R}^n \setminus \{x_0\}$, we have
$$ Av(x_0) = Lv(x_0). $$
Since $v(x_0) = u(x_0)$ and $P_t^D v(x_0) \ge P_t^D u(x_0)$ for every $t>0$, we have
$$ Av(x_0) = \lim_{t \downarrow 0} \frac{P_t^D v(x_0) - v(x_0)}{t} \ge \lim_{t \downarrow 0} \frac{P_t^D u(x_0) - u(x_0)}{t} = Au(x_0). $$
Thus, we arrive at
$$ Lv(x_0) \ge Au(x_0), $$
which concludes that $u$ is a viscosity solution of \eqref{e:pde1}. {\hfill $\Box$ \bigskip}
Next we present the comparison principle from \cite{CS2}, which implies the uniqueness of viscosity solutions of \eqref{e:pde1}.
\begin{theorem} [Comparison principle] \label{t:cp}
Let $D$ be a bounded open set in $\mathbb{R}^n$. Let $u$ and $v$ be bounded functions satisfying $Lu \geq f$ and $Lv \leq f$ in $D$ in viscosity sense for some continuous function $f$, and let $u \leq v$ in $\mathbb{R}^n \setminus D$. Then $u \leq v$ in $D$.
\end{theorem}
\noindent{\bf Proof.} We first claim that $L$ satisfies \cite[Assumption 5.1]{CS2}. More precisely, there exists a constant $r_0 \geq 1$ such that for every $r \geq r_0$, there exists a constant $\delta= \delta(r) > 0$ satisfying $Lw > \delta$ in $B_r$, where $w(x) = 1 \land \frac{ \vert x \vert^2}{r^3}$.
Let $r_0=4$, $r \geq 4$ and $x \in B_r$. Note that since $r \ge 4$ we have
$$\frac{|y|^2}{r^3} \le \frac{4r^2}{r^3} \le 1, \qquad y \in B_{2r}. $$
Thus, for $y \in B_r$ we obtain
\begin{align*}
w(x+y) + w(x-y) - 2w(x) = \frac{\vert x +y \vert^2 + \vert x - y \vert^2 - 2\vert x \vert^2}{r^3} = \frac{2\vert y \vert^2}{r^3}.
\end{align*}
On the other hand, for $y \in B_r^c$ we have $$w(x+y) + w(x-y) - 2w(x) \geq \frac{2|y|^2}{r^3} \land (1 - 2w(x)) > 0.$$
Therefore, since $w$ is $C^2$ in a neighborhood of $\overline{B_r}$ (note that $r < r^{3/2}$), we have
\begin{align*}
&Lw(x)= \frac{1}{2}\int_{{\mathbb R}^n} \left( w(x+y) + w(x-y) - 2w(x) \right) J(|y|) \, dy \\
&= \frac{1}{2}\int_{B_r} \left( w(x+y) + w(x-y) - 2w(x) \right) J(|y|)\, dy + \frac{1}{2}\int_{B_r^c} \left( w(x+y) + w(x-y) - 2w(x) \right) J(|y|) \, dy
\\ &\ge \frac{1}{r^3} \int_{B_r} |y|^2 J(|y|)\,dy \, =: \delta(r)>0
\end{align*}
for every $r \geq r_0=4$ and $x \in B_r$. Since $L$ satisfies \cite[Assumption 5.1]{CS2}, we can apply Theorem 5.2 therein, which proves the theorem.
{\hfill $\Box$ \bigskip}
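To illustrate the size of $\delta(r)$ (this computation is not used elsewhere), consider the $\alpha$-stable case $J(|y|)=|y|^{-n-\alpha}$ with $\alpha \in (0,2)$. Writing $\omega_{n-1}$ for the surface measure of the unit sphere, we get
\begin{equation*}
\delta(r) = \frac{1}{r^3}\int_{B_r} |y|^{2-n-\alpha}\,dy = \frac{\omega_{n-1}}{r^3}\int_0^r t^{1-\alpha}\,dt = \frac{\omega_{n-1}}{2-\alpha}\, r^{-1-\alpha} > 0.
\end{equation*}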
The following uniqueness of viscosity solution is immediate.
\begin{corollary} \label{c:uniqueness}
Let $D$ be a bounded open set in $\mathbb{R}^n$ and let $f \in C(D)$. Then there is at most one viscosity solution of \eqref{e:pde1}.
\end{corollary}
Here is the main result in this section.
\begin{theorem}\label{t:sol}
Assume that $f \in C(D)$. Then, $u=-R^D f \in {\mathcal D}$ is the unique solution of \eqref{e:pde3}. Also, $u$ is the unique viscosity solution of \eqref{e:pde1}.
\end{theorem}
\noindent{\bf Proof.} By Lemma \ref{t:A}, we have that $u=-R^D f \in {\mathcal D}$ is solution of \eqref{e:pde3}. Now, Lemma \ref{l:rel} and Corollary \ref{c:uniqueness} conclude the proof. {\hfill $\Box$ \bigskip}
\noindent \textbf{Proof of Theorem \ref{t:1}.} By Theorem \ref{t:sol}, the unique viscosity solution of \eqref{e:pde1} is given by $u=-R^Df$. Therefore, Proposition \ref{p:R^D} yields the H\"older regularity of the viscosity solution with respect to the $C^V$-norm. By \eqref{e:V-asymp} we have $V \asymp \overline{\phi}$, which concludes the proof. {\hfill $\Box$ \bigskip}
\section{Boundary regularity} \label{s:bdry reg}
\subsection{Barriers} \label{s:barriers}
Throughout this section, $D \subset \mathbb{R}^n$ is a bounded $C^{1,1}$ open set. Without loss of generality, we assume that $\mathrm{diam}(D) \leq 1$.
Since $d_D$ is $C^{1,1}$ only near $\partial D$, we need to consider the following ``regularized version'' of $d_D$.
\begin{definition}\label{d:psi}
We call $\psi : D \rightarrow (0, \infty)$ a regularized version of $d_D$ if $\psi \in C^{1,1}(D)$ and it satisfies
\begin{equation}\label{e:psi1}
\tilde{C}^{-1}d_D(x) \le \psi(x) \le \tilde{C}d_D(x), \quad \Vert\nabla \psi(x)\Vert \le \tilde{C} \quad \mbox{and} \quad \Vert \nabla \psi(x) - \nabla \psi(y)\Vert \le \tilde{C}|x-y|
\end{equation}
for any $x,y \in D$, where the constant $\tilde{C}>0$ depends only on $D$.
\end{definition}
For $D=B(0,1)$, there exists a regularized version of $d_{B(0,1)}$ which is $C^2$ and isotropic. Denote this function by $\Psi$ and let $C=C(n)$ be the constant in \eqref{e:psi1} for the function $\Psi$. For any open ball $B_r:=B(x_0,r)$, we will take the regularized version of $d_{B_r}$ defined by
$\Psi_r(x):= r\,\Psi(\frac{x-x_0}{r})$. Then, $\Psi_r$ satisfies
\begin{equation}\label{e:psi2}
C^{-1}d_{B_r}(x) \le \Psi_r(x) \le Cd_{B_r}(x), \quad \Vert \nabla \Psi_r\Vert \le C \quad \mbox{and} \quad \Vert \nabla^2\Psi_r(x)\Vert \le \frac{C}{r}
\end{equation}
for any $x \in B(x_0,r)$. The last estimate follows from the fact that $\Psi \in C^2(B(0,1))$.
We first introduce the following three lemmas which will be used to construct a barrier for $L$.
\begin{lemma}\label{l:3.4}
Assume that $D$ is a bounded $C^{1,1}$ open set and let $\psi$ be a regularized version of $d_D$. Then, for every $x \in \mathbb{R}^n$ and $x_0 \in D$ we have
\begin{equation}\label{e:3.4}
|\psi(x)-(\psi(x_0)+\nabla \psi(x_0) \cdot (x-x_0))_+| \le \tilde{C}|x-x_0|^2
\end{equation}
where $\tilde{C}$ is the constant in \eqref{e:psi1}.
In addition, when $D=B(0,r)$ and $\psi=\Psi_r$ we have \eqref{e:3.4} with $\tilde{C}=\frac{C}{r}$ where $C$ is the constant in \eqref{e:psi2}.
\end{lemma}
\noindent{\bf Proof.} Let $\tilde{\psi}$ be a $C^{1,1}$ extension of $\psi |_D$ satisfying $\tilde{\psi} \le 0$ in ${\mathbb R}^n \backslash D.$ Then, since $\tilde{\psi} \in C^{1,1}({\mathbb R}^n)$ we clearly have
\begin{equation}\label{e:4.3}
|\tilde{\psi} (x) - \psi(x_0) - \nabla \psi(x_0) \cdot (x-x_0)|=|\tilde{\psi} (x) - \tilde{\psi}(x_0) - \nabla \tilde{\psi}(x_0) \cdot (x-x_0)| \le \tilde{C}|x-x_0|^{2}
\end{equation}
for all $x \in {\mathbb R}^n$. Using $|a_+-b_+| \le |a-b|$ and $ (\tilde{\psi})_+=\psi$, we have
$$
|\psi(x)-(\psi(x_0)+\nabla \psi(x_0) \cdot (x-x_0))_+| \le |\tilde{\psi} (x) - \psi(x_0) - \nabla \psi(x_0) \cdot (x-x_0)| \le \tilde{C}|x-x_0|^2
$$
for all $x\in {\mathbb R}^n$. If $D=B(0,r)$ and $\psi=\Psi_r$, the constant $\tilde{C}$ in \eqref{e:4.3} becomes $\frac{C}{r}$. Thus, the conclusion of the lemma follows. {\hfill $\Box$ \bigskip}
The next lemma is a collection of inequalities which will be used throughout this section. Note that one can easily check these inequalities when $\varphi(r)=r^{2\alpha}$ and $V(r)=r^{\alpha}$ with $0<\alpha<1$. The inequalities \eqref{e:varphi-inf} and \eqref{e:V-inf} are in \cite[Lemma 3.5]{BGR2}. We provide the proof for completeness.
\begin{lemma}
\label{l:phi-inf}
There exists a constant $C_1=C_1(n, a_1, \alpha_1,\alpha_2)>0$ such that for any $0<r \le 1$,
\begin{equation}\label{e:varphi-0}
\int_0^r \frac{s}{\varphi(s)}ds \le \frac{C_1 r^2}{\varphi(r)},
\end{equation}
\begin{equation}
\label{e:varphi-inf}
\int_{r}^\infty \frac{1}{s\varphi(s)}ds \le \frac{C_1}{\varphi(r)},
\end{equation}
\begin{equation}\label{e:V-0}
\int_0^r \frac{1}{V(s)}ds \le \frac{C_1 r}{V(r)}, \quad \int_0^r \frac{V(s)}{s}ds \le C_1 V(r)
\end{equation}
and
\begin{equation}
\label{e:V-inf}
\int_{r}^\infty \frac{V(s)}{s\varphi(s)}ds \le \frac{C_1}{V(r)}.
\end{equation}
\end{lemma}
\noindent{\bf Proof.} The inequalities \eqref{e:varphi-0} and \eqref{e:V-0} can be proved using the weak scaling conditions \eqref{e:varphi-wsc} and \eqref{e:V-wsc}: by \eqref{e:varphi-wsc}, we have
$$\int_0^r \frac{s}{\varphi(s)}ds = \int_0^r \frac{s}{\varphi(r)} \frac{\varphi(r)}{\varphi(s)}ds \le c_1\int_0^r \frac{s}{\varphi(r)} \Big(\frac{r}{s}\Big)^{2\alpha_2} ds = \frac{c_1}{2-2\alpha_2} \frac{r^2}{\varphi(r)},$$
and by \eqref{e:V-wsc} we have
$$\int_0^r \frac{1}{V(s)}ds = \int_0^r \frac{1}{V(r)} \frac{V(r)}{V(s)} ds \le \int_0^r c_2 \Big( \frac{r}{s} \Big)^{\alpha_2} ds = \frac{c_2}{1-\alpha_2} \frac{r}{V(r)} $$
and
$$\int_0^r \frac{V(s)}{s}ds = \int_0^r \frac{V(r)}{s} \frac{V(s)}{V(r)} ds \le \int_0^r \frac{V(r)}{s} c_2 \Big( \frac{s}{r} \Big)^{\alpha_1} ds = \frac{c_2}{\alpha_1} V(r). $$
Let $\mathcal{P}(r):= \int_{{\mathbb R}^n} \big( 1 \land \frac{|x|^2}{r^2} \big) J(x)dx$ be the Pruitt function of $X$. By \cite[(6) and Lemma 1]{BGR1} and \eqref{e:V-asymp}, there is a constant $c_3>0$ satisfying
\begin{equation}
\label{e1}\mathcal{P}(r) \le c\varphi(r)^{-1} \le c_3 V(r)^{-2}, \quad r>0.
\end{equation}
Let $\mathcal{P}_1(r):= \int_r^\infty \frac{1}{s\varphi(s)}ds$. Note that we have
\begin{equation}
\label{e2}
\mathcal{P}_1(r)= \frac{1}{J(1)\,\omega_{n-1}} \int_{B(0,r)^c} \Big( 1 \land \frac{|x|^2}{r^2} \Big) J(|x|)dx \le \frac{\mathcal{P}(r)}{J(1)\,\omega_{n-1}} \le c_4 V(r)^{-2}, \quad r>0.
\end{equation}
Thus, \eqref{e2} and \eqref{e:V-asymp} imply \eqref{e:varphi-inf}.
Also, using integration by parts and \eqref{e2} we have
\begin{align*}
\int_r^{\infty} \frac{V(s)}{s\varphi(s)} ds &= \int_r^\infty V(s) d(-\mathcal{P}_1)(s) \\
&= V(r) \mathcal{P}_1(r) - \lim_{s \rightarrow \infty} V(s) \mathcal{P}_1(s) + \int_r^\infty V'(s) \mathcal{P}_1(s)ds \\
&\le c_5 \left(\frac{1}{V(r)} - \lim_{s \rightarrow \infty} \frac{1}{V(s)} + \int_r^\infty \frac{V'(s)}{V(s)^2}ds \right) = \frac{2c_5}{V(r)},
\end{align*}
which concludes \eqref{e:V-inf}. {\hfill $\Box$ \bigskip}
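As noted before Lemma \ref{l:phi-inf}, all of these bounds can also be verified by direct integration in the stable case $\varphi(r)=r^{2\alpha}$, $V(r)=r^{\alpha}$ with $0<\alpha<1$:
$$\int_0^r \frac{s}{\varphi(s)}\,ds = \frac{r^{2-2\alpha}}{2-2\alpha} = \frac{1}{2-2\alpha}\frac{r^2}{\varphi(r)}, \qquad \int_r^\infty \frac{ds}{s\varphi(s)} = \frac{r^{-2\alpha}}{2\alpha} = \frac{1}{2\alpha\,\varphi(r)},$$
$$\int_0^r \frac{ds}{V(s)} = \frac{1}{1-\alpha}\frac{r}{V(r)}, \qquad \int_0^r \frac{V(s)}{s}\,ds = \frac{V(r)}{\alpha}, \qquad \int_r^\infty \frac{V(s)}{s\varphi(s)}\,ds = \int_r^\infty s^{-1-\alpha}\,ds = \frac{1}{\alpha V(r)}.$$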
\begin{lemma}\label{l:3.5} Let $U \subset {\mathbb R}^n$ be a $C^{1,1}$ open set, which can be unbounded. Then there exists a constant $C_2=C_2(n,U, a_1,a_2,\alpha_1, \alpha_2)>0$ such that for any $x \in U$ and $0<r \le 1$,
\begin{equation}\label{e:4.4.1}
\int_{ U \cap \big(B(x,r) \backslash B(x,d_U(x)/2) \big)} \frac {V(d_U(y))}{d_U(y)} \frac{dy}{|x-y|^{n-2} \varphi(|x-y|)} \le \frac{C_2 r}{V(r)} .
\end{equation}
\end{lemma}
\noindent{\bf Proof.} Fix $x \in U$ and denote $\rho:=d_U(x)$; we may assume $\rho < 2r$, since otherwise the domain of integration in \eqref{e:4.4.1} is empty. Write $B_r := B(x,r)$ for $r > 0$ and $B_r = \emptyset$ for $r\le 0$. First note that there is a constant $\kappa= \kappa(U)>0$ such that the level set $\{d_U \ge t \} = \{x \in U : d_U(x) \ge t \}$ is $C^{1,1}$ for any $t \in (0, \kappa]$ since $U$ is $C^{1,1}$. Without loss of generality we may assume $\kappa \le r$, since $\kappa$ can be taken arbitrarily small.
Since $B_R \cap \left\{ d_U \ge \kappa \right\} = \emptyset$ for every $R \le \kappa - \rho$, we have
\begin{align*}
&\int_{(B_r \backslash B_{\rho/2}) \cap \left\{ d_U \ge \kappa \right\}}\frac{V(d_U(y))}{d_U(y)} \frac{dy}{|x-y|^{n-2} \varphi(|x-y|)} \\ &= \int_{(B_r \backslash B_{\max\{\rho/2,\kappa-\rho\}}) \cap \{ d_U \ge \kappa \}}\frac{V(d_U(y))}{d_U(y)} \frac{dy}{|x-y|^{n-2} \varphi(|x-y|)} \\ &\le
\int_{(B_r \backslash B_{\kappa /3}) \cap \{ d_U \ge \kappa \}}\frac{V(d_U(y))}{d_U(y)} \frac{dy}{|x-y|^{n-2} \varphi(|x-y|)},
\end{align*}
where the last line follows from $\rho/2 \lor (\kappa-\rho) \ge \frac{\kappa}{3}$; indeed, if $\rho \ge 2\kappa/3$ then $\rho/2 \ge \kappa/3$, and otherwise $\kappa-\rho \ge \kappa/3$. Using
$$\kappa \le d_U(y) \le \rho + r \le 3r \quad \mbox{and} \quad \frac{\kappa}{3} \le |x-y| \le r $$
for every $y \in (B_r \backslash B_{\kappa/3}) \cap \{d_U \ge \kappa \}$, we arrive at: for any $x \in U$,
\begin{align}\begin{split}
&\int_{(B_r \backslash B_{\kappa/3}) \cap \left\{ d_U \ge \kappa \right\}}\frac{V(d_U(y))}{d_U(y)} \frac{dy}{|x-y|^{n-2} \varphi(|x-y|)} \\ &\le \int_{(B_r \backslash B_{\kappa /3}) \cap \{ d_U \ge \kappa \}}\frac{V(3r)}{\kappa} \frac{dy}{|x-y|^{n-2} \varphi(|x-y|)}
\\ &\le c_1\frac{V(r)}{\kappa} \int_0^r \frac{s}{\varphi(s)}ds \le c_2(\kappa) \frac{r^2}{V(r)} \le c_2(\kappa) \frac{r}{V(r)}, \label{e:3.5}
\end{split}
\end{align}
where we used \eqref{e:V-asymp} and \eqref{e:varphi-0} for the second last inequality.
Thus, it suffices to estimate the integral in \eqref{e:4.4.1} over the set $(B_r \backslash B_{\rho/2}) \cap \{ 0 < d_U < \kappa \}.$
We will utilize the following estimate on Hausdorff measures from \cite{ROV}: there exists a constant $c_3=c_3(U)>0$ such that for every $x \in U$, $k \ge 1$ and $t \in (0,\kappa)$,
\begin{equation}\label{e:3.5-3}
{\mathcal H}^{n-1}(\left\{ d_U=t \right\} \cap (B_{2^{-k+1}r} \backslash B_{2^{-k}r})) \le c_3(2^{-k}r)^{n-1}
\end{equation}
which follows from the fact that the level set $\left\{d_U=t\right\}$ is $C^{1,1}$ for $t \in (0,\kappa)$.
Let us denote $C_k := B_{2^{-k}r}$ for $k \ge 0$ and let $M \in {\mathbb N}$ be the natural number satisfying $2^{-M}r \le \rho/2 \le 2^{-M+1}r.$ Using that $|x-y| \ge 2^{-k}r$ for every $y \in C_{k-1} \backslash C_k$ and that $\varphi$ is increasing for the third line, we have
\begin{align}
&\int_{(B_r \backslash B_{\rho/2}) \cap \{ 0<d_U < \kappa \}}\frac{V(d_U(y))}{d_U(y)} \frac{dy}{|x-y|^{n-2} \varphi(|x-y|)} \nonumber \\ & \le \quad \sum_{k=1}^M \int_{(C_{k-1} \backslash C_k) \cap \{ 0<d_U < \kappa \}}\frac{V(d_U(y))}{d_U(y)} \frac{dy}{|x-y|^{n-2} \varphi(|x-y|)} \nonumber \\
& \le \quad \sum_{k=1}^M \frac{1}{(2^{-k}r)^{n-2}\varphi(2^{-k}r)} \int_{(C_{k-1} \backslash C_k) \cap \{ 0< d_U < \kappa \}}\frac{V(d_U(y))}{d_U(y)}dy \nonumber \\
& = \quad \sum_{k=1}^M \frac{1}{(2^{-k}r)^{n-2}\varphi(2^{-k}r)} \int_{(C_{k-1} \backslash C_k) \cap \{ 0< d_U < \kappa \}}\frac{V(d_U(y))}{d_U(y)}|\nabla d_U(y)|dy. \nonumber
\end{align}
Here we used $|\nabla d_U(y)|=1$ for $y \in \{0<d_U<\kappa\}$ in the last line; see \cite{ROV}.
For any $1 \le k \le M$ and $y \in C_{k-1}$ we have $d_U(y) \le 2^{-k+1}r + \rho \le (2^{-k+1} + 2^{-M+2})r \le 6 \cdot 2^{-k}r$, which implies $C_{k-1} \subset \{d_U < 6 \cdot 2^{-k}r \}$. Thus, combining this with the above inequality we have
\begin{align}\begin{split}
&\int_{(B_r \backslash B_{\rho/2}) \cap \{ 0<d_U < \kappa \}}\frac{V(d_U(y))}{d_U(y)} \frac{dy}{|x-y|^{n-2} \varphi(|x-y|)} \\ &\le \quad \sum_{k=1}^M \frac{1}{(2^{-k}r)^{n-2}\varphi(2^{-k}r)} \int_{(C_{k-1} \backslash C_k) \cap \{ 0< d_U < 6 \cdot 2^{-k}r \}}\frac{V(d_U(y))}{d_U(y)}|\nabla d_U(y)|dy. \label{e:3.5-2}
\end{split}
\end{align}
Plugging $u(y)=d_U(y)$ and $g(y)=\frac{V(d_U(y))}{d_U(y)}$ into the coarea formula
$$\int_{\Omega}g(y) | \nabla u(y)| dy = \int_{-\infty}^{\infty} \left( \int_{\Omega \cap u^{-1}(t)} g(y)d{\mathcal H}^{n-1}(y) \right) dt, $$
applied on each set $\Omega = (C_{k-1} \backslash C_k) \cap \{ 0< d_U < 6 \cdot 2^{-k}r \}$,
we obtain
\begin{align}\label{e:3.5-4}
\begin{split}
&\sum_{k=1}^M \frac{1}{(2^{-k}r)^{n-2}\varphi(2^{-k}r)} \int_{(C_{k-1} \backslash C_k) \cap \{ 0< d_U < 6 \cdot 2^{-k}r \}}\frac{V(d_U(y))}{d_U(y)}|\nabla d_U(y)|dy \\ &= \quad \sum_{k=1}^M \frac{1}{(2^{-k}r)^{n-2}\varphi(2^{-k}r)} \int_0^{6 \cdot 2^{-k} r} \int_{(C_{k-1} \backslash C_k) \cap \left\{ d_U = t \right\}} \frac{V(t)}{t} d\mathcal{H}^{n-1}(y)dt \\ &
\le \quad \sum_{k=1}^M \frac{1}{(2^{-k}r)^{n-2}\varphi(2^{-k}r)} \int_0^{6\cdot 2^{-k}r} c_3 (2^{-k}r)^{n-1} \frac{V(t)}{t}dt \\& = \quad c_3 \sum_{k=1}^M \frac{ 2^{-k}r}{\varphi(2^{-k}r)} \int_0^{6\cdot 2^{-k}r}\frac{V(t)}{t}dt \le c_4 \sum_{k=1}^M \frac{ 2^{-k}r}{\varphi(2^{-k}r)} V(6 \cdot 2^{-k}r),
\end{split}
\end{align}
where we used \eqref{e:3.5-3} for the third line and \eqref{e:V-0} for the last line. Also, by \eqref{e:V-wsc} and \eqref{e:V-asymp},
\begin{equation}\label{e:3.5-5}
\begin{split}
\sum_{k=1}^M \frac{ 2^{-k}r}{\varphi(2^{-k}r)} V(6 \cdot 2^{-k}r) &\le c\sum_{k=1}^M \frac{ 2^{-k}r}{V(2^{-k}r)} = c\sum_{k=1}^M \int_{2^{-k-1}r}^{2^{-k}r} \frac{2}{V(2^{-k}r)}ds \\
&\le 2c\int_0^r \frac{1}{V(s)}ds \le c_5 \frac{r}{V(r)},
\end{split}
\end{equation}
where in the last two inequalities we have used that $V$ is increasing and \eqref{e:V-0}. \\
\noindent Using \eqref{e:3.5-2}, \eqref{e:3.5-4}, and \eqref{e:3.5-5}, we conclude
$$ \int_{(B_r \backslash B_{\rho/2}) \cap \{ 0 < d_U < \kappa \}}\frac{V(d_U(y))}{d_U(y)} \frac{dy}{|x-y|^{n-2} \varphi(|x-y|)} \le \frac{c_4c_5 r}{V(r)}. $$
This and \eqref{e:3.5} finish the proof. {\hfill $\Box$ \bigskip}
\noindent Now we are ready to show that $V(\psi)$ acts as a barrier of $L$ on $D$.
\begin{proposition}\label{p:barrier1}
Let $L$ be given by \eqref{d:L} and let $\psi$ be a regularized version of $d_D$. Then there exists a constant $\tilde{C}_3=\tilde{C}_3(n,a_1,a_2,\alpha_1,\alpha_2,D)>0$ such that
\begin{equation}\label{e:5.1}
|L(V(\psi))| \le \tilde{C}_3 \quad \mbox{in} \,\, D,
\end{equation}
where $V$ is the renewal function with respect to $\Phi$. In addition, if $D=B(0,r)$ is a ball with radius $r$, there exists a constant $C_3=C_3(n,a_1,a_2,\alpha_1,\alpha_2)>0$ such that
\begin{equation}\label{e:5.2}
|L(V(\psi))| \le \frac{C_3}{V(r)} \quad \mbox{in} \,\, B(0,r),
\end{equation}
where $\psi=\Psi_r$ is the regularized version of $d_{B(0,r)}$ defined in \eqref{e:psi2}. Note that $C_3$ is independent of $r$.
\end{proposition}
\noindent{\bf Proof.} We prove \eqref{e:5.2} only. The proof of \eqref{e:5.1} is similar.
Let $x_0 \in B_r:=B(0,r)$ and $\rho := d_{B_r}(x_0)$.
First we prove \eqref{e:5.2} for the case $\rho \ge \kappa r > 0$ with $\kappa=1/(8C^2)$. In this case, we have
\begin{equation} \label{e:5.3}
\begin{split}
|L(V(\psi))(x_0)|
=& \left\vert\int_{\mathbb{R}^n}\left(\frac{V(\psi(x_0+y))+V(\psi(x_0-y))}{2}-V(\psi(x_0)) \right)\frac{J(1)}{|y|^n \varphi(|y|)}dy \right\vert \\
\le& \int_{B_{\kappa r/2}}\left\Vert \nabla^2 [V(\psi(x_*))]\right\Vert \frac{J(1)}{|y|^{n-2} \varphi(|y|)}dy \\
&+\int_{B^c_{\kappa r/2}} \left\vert \frac{V(\psi(x_0+y))+V(\psi(x_0-y))}{2}-V(\psi(x_0)) \right\vert\frac{J(1)}{|y|^n \varphi(|y|)}dy,
\end{split}
\end{equation}
where $x_*$ is a point on the segment between $x_0-y$ and $x_0+y$, so that $d_{B_r}(x_*) \ge \kappa r/2$ when $y \in B_{\kappa r /2 }$. Using \eqref{e:V-wsc}, \eqref{e:psi2}, and Lemma \ref{l:V}, we have
$$\Vert \nabla^2 [V(\psi(x_*))] \Vert \le |V''(\psi(x_*))| \Vert \nabla \psi(x_*) \Vert ^2+|V'(\psi(x_*))| \Vert \nabla^2 \psi(x_*) \Vert \le \frac{c_1(\kappa) V(r)}{r^2},$$
which allows us to estimate the first term of \eqref{e:5.3} by
\begin{align*}
\int_{B_{\kappa r/2}}\Vert \nabla^2 [V(\psi(x_*))]\Vert \frac{J(1)}{|y|^{n-2} \varphi(|y|)}dy
&\le c_1 \frac{V(r)}{r^2} \int_{B_{\kappa r/2}}\frac{1}{|y|^{n-2} \varphi(|y|)}dy \\
&= c_2 \frac{V(r)}{r^2} \int_0^{\kappa r /2 } \frac{s}{\varphi(s)} ds \le \frac{c_3}{V(r)}.
\end{align*}
In the last inequality above, we have used \eqref{e:varphi-0}, \eqref{e:varphi-wsc}, and \eqref{e:V-asymp}. For the second term, using $\psi(x) \le C d_{B_r}(x) \le Cr$ for any $x \in B_r$, we have $$\left\vert \frac{V(\psi(x_0+y))+V(\psi(x_0-y))}{2}-V(\psi(x_0)) \right\vert \le 2 V(C r) \le c_4 V(r).$$
Therefore,
$$ \int_{B^c_{\kappa r/2}} \left\vert \frac{V(\psi(x_0+y))+V(\psi(x_0-y))}{2}-V(\psi(x_0)) \right\vert\frac{J(1)}{|y|^n \varphi(|y|)}dy \le c_5V(r) \int_{\kappa r /2}^{\infty} \frac{1}{ s\varphi(s)} ds \le \frac{c_6(\kappa)}{V(r)}.$$
In the last inequality we have used \eqref{e:varphi-inf}, \eqref{e:varphi-wsc}, and \eqref{e:V-asymp}.
Therefore, \eqref{e:5.2} for the case $\rho \ge \kappa r$ holds with $C_3 = c_3 + c_6$.
Now it suffices to consider the case $\rho < \kappa r$. Denote
$$\ell(x):=(\psi(x_0)+\nabla \psi(x_0) \cdot (x-x_0))_+,$$
which satisfies
$$ L(V(\ell))=0 \quad \mbox{on} \quad \{\ell>0\} $$
by \eqref{e:pde-V}. Note that
$ \psi(x_0)=\ell(x_0)$ and $\nabla \psi(x_0)=\nabla \ell(x_0).$
Moreover, by \eqref{e:3.4} we have
\begin{equation}\label{e:3.6.1}
|\psi(x)-l(x)| \le \frac{C}{r}|x-x_0|^{2}.
\end{equation}
For any $0<a\le b \le C$, there exists $a_* \in [a,b]$ satisfying $|V(a)-V(b)|=|a-b|V'(a_*)$. Using Lemma \ref{l:V} in the first inequality we have
$$
|V(a)-V(b)| = |a-b|V'(a_*) \le c_7|a-b| \frac{V(a_*)}{a_*}
\le c_8 |a-b| \frac{V(a)}{a} .
$$
Here we used \eqref{e:V-wsc} with $c=C$ for the second inequality. Therefore, for any $a,b \in (0,C]$ we have
$$|V(a)-V(b)| \le c_8|a-b| \left(\frac{V(a)}{a}+\frac{V(b)}{b}\right). $$
Also, one can easily see the following inequality
\begin{equation}\label{e:3.6.2}
|V(a)-V(b)| \le c_8|a-b| \left(\frac{V(a)}{a} {\bf 1}_{\{a>0\}}+\frac{V(b)}{b}\cdot {\bf 1}_{\{b>0\}}\right)
\end{equation}
for any $0 \le a,b \le C$ by using Lemma \ref{l:V}.
By \eqref{e:3.6.1} and \eqref{e:3.6.2} we have that for any $x \in B_r(x_0)$,
\begin{align}\label{e:3.6.3}
|V(\psi(x))-V(\ell(x))| &\le \frac{c_8}{r}|x-x_0|^{2}\left(\frac{V(\psi(x))}{\psi(x)} {\bf 1}_{\{\psi(x)>0 \}}+\frac{V(\ell(x))}{\ell(x)} {\bf 1}_{\{\ell(x)>0\}}\right) \\ &\le \frac{c_9}{r}|x-x_0|^{2}\left(\frac{V(d_{B_r}(x))}{d_{B_r}(x)} {\bf 1}_{\{d_{B_r}(x)>0\}}+\frac{V(\ell(x))}{\ell(x)}{\bf 1}_{\{\ell(x)>0\}}\right), \nonumber
\end{align}
where we used $\psi(x) \le Cd_{B_r}(x) \le C$ and $\ell(x)=(\psi(x_0)+\nabla \psi(x_0) \cdot (x-x_0))_+ \le Cd_{B_r}(x_0) + Cr \le C$ for the first inequality and \eqref{e:V-wsc} for the second. \\
On the other hand, for any $x \in B_{\rho/2}(x_0)$ with $\rho \le \kappa r$ we have
$$ |\ell(x) - \psi(x)| \le \frac{C}{r}|x-x_0|^2 \le \frac{C}{r} \rho^2 \le C \kappa \rho$$
and
$$ C^{-1} \frac{\rho}{2} \le C^{-1}d_{B_r}(x) \le \psi(x). $$
Thus, using $\kappa = 1/(8C^2)$ we obtain
$$ \frac{1}{2} \psi(x) \le \ell(x) \le 2\psi(x) \quad \mbox{for any} \quad x \in B_{\rho/2}(x_0).$$
Using $\frac{\rho}{2} \le d_{B_r}(x) \le 2\rho$ and $\frac{1}{2}\psi(x) \le \ell(x) \le 2\psi(x)$, we arrive at
$$ \psi(x), \ell(x) \in [(8C)^{-1} \rho, 8C\rho]. $$
Therefore, there exists $y \in ((8C)^{-1} \rho, 8C \rho)$ satisfying $$\frac{V(\psi(x)) - V(\ell(x))} {\psi(x)-\ell(x)} = V'(y),$$ so using \eqref{e:3.6.1} and \eqref{e:V-diff}, we have
\begin{align}
|V(\psi(x))-V(\ell(x))|&=|\psi(x)-\ell(x)|V'(y) \le \frac{c_{10}}{r} |x-x_0|^2 \frac{V(y)}{y} \label{e:5.4} \\
&\le \frac{c_{11}}{r} |x-x_0|^2 \frac{V((8C)^{-1} \rho)}{(8C)^{-1}\rho}
\le \frac{c_{12}}{r} |x-x_0|^2 \frac{V(\rho)}{\rho} \nonumber
\end{align}
for $x \in B_{\rho/2}(x_0)$. Here we used \eqref{e:V-diff} and \eqref{e:V-wsc} for the second line. Also, for any $x \in B^c_r(x_0)$ we have
$$V(\ell(x)) = V((\psi(x_0)+\nabla \psi(x_0) \cdot (x-x_0))_+) \le V(C\rho + C|x-x_0|) \le V(2C|x-x_0|) \le c_{13} V(|x-x_0|)$$
and
$$V(\psi(x))\le V(Cr) \le V(C|x-x_0|) \le c_{13} V(|x-x_0|),$$
where we have used \eqref{e:V-wsc} and $\rho \le r \le |x-x_0|.$
Thus we obtain
\begin{equation}\label{e:5.5}
|V(\psi)-V(\ell)|(x) \le c_{14} V(|x-x_0|)
\end{equation}
for $x \in B^c_r(x_0)$. Therefore, taking $x=y+x_0$ in \eqref{e:3.6.3}, \eqref{e:5.4}, and \eqref{e:5.5}, we have
\begin{align}
|V(\psi)-V(\ell)|(y+x_0) \le c
\begin{cases}
\frac{1}{r}\frac{V(\rho)}{\rho}|y|^2 &\mbox{for} \ y \in B_{\rho/2} \\
\frac{|y|^2}{r}\left(\frac{V(d_{B_r}(x_0+y))}{d_{B_r}(x_0+y)} {\bf 1}_{\{d_{B_r}(x_0+y)>0\}}+\frac{V(\ell(x_0+y))}{\ell(x_0+y)}{\bf 1}_{\{\ell(x_0+y)>0\}}\right) &\mbox{for} \
y \in B_r \backslash B_{\rho/2} \\
V(|y|) &\mbox{for} \ y \in B^c_r
\end{cases} \nonumber
\end{align}
where $c = c_9 \lor c_{12} \lor c_{14}.$
Hence, recalling that $L(V(\ell))(x_0)=0$ and $\psi(x_0)=\ell(x_0)$, we find that
\begin{align*}
|L(V(\psi))(x_0)| &= |L\big(V(\psi)-V(\ell)\big)(x_0)| \\
& \le \int_{\mathbb{R}^n} |V(\psi)-V(\ell)|(x_0+y) \frac{J(1)}{|y|^n \varphi(|y|)}dy \\
& \le \frac{c}{r} \frac{V(\rho)}{\rho} \int_{B_{\rho/2}}|y|^2 \frac{J(1)}{|y|^n \varphi(|y|)}dy + c\int_{B^c_r} V(|y|)\frac{J(1)}{|y|^n \varphi(|y|)}dy \\
&\quad + c\int_{B_r \backslash B_{\rho/2}} \frac{|y|^2}{r}\left(\frac{V(d_{B_r}(x_0+y))}{d_{B_r}(x_0+y)} {\bf 1}_{\{d_{B_r}(x_0+y)>0\}}+\frac{V(\ell(x_0+y))}{\ell(x_0+y)}{\bf 1}_{\{\ell(x_0+y)>0\}}\right)\frac{J(1)}{|y|^n \varphi(|y|)}dy \\
&=: {\rm {\rom{1}}}+{\rm {\rom{2}}}+{\rm {\rom{3}}}.
\end{align*}
For \rom{1}, using \eqref{e:varphi-0} we have
\begin{align*}
{\rm {\rom{1}}} &= \frac{c}{r} \frac{V(\rho)}{\rho} \int_{B_{\rho/2}}|y|^2 \frac{J(1)}{|y|^n \varphi(|y|)}dy = \frac{c_{15}}{r} \frac{V(\rho)}{\rho} \int_0^{\rho/2} \frac{s}{\varphi(s)}ds \\ &\le \frac{c_{16}}{r} \frac{V(\rho)}{\rho} \frac{(\rho/2)^2}{\varphi(\rho/2)} \le \frac{c_{17}}{V(r)} \left( \frac{\rho}{r} \frac{V(r)}{V(\rho)} \right) \le \frac{ c_{18} }{V(r)},
\end{align*}
where we used \eqref{e:V-asymp} and \eqref{e:V-wsc} for the last two inequalities. Also, using \eqref{e:V-inf} we obtain
\begin{align*}
{\rm {\rom{2}}}= c\int_{B^c_r} V(|y|)\frac{J(1)}{|y|^n \varphi(|y|)}dy = c_{19} \int_r^\infty \frac{V(s)}{s\varphi(s)} ds \le \frac{c_{20}}{V(r)}.
\end{align*}
For the estimate of \rom{3}, we first observe that for any $y \in H := \{ \ell >0\}$,
$$ \left\vert \frac{\ell(y)}{d_H(y)} \right\vert = \Vert \nabla \psi(x_0) \Vert \le C. $$
Thus, by \eqref{e:V-wsc} we have
$$ \frac{V(\ell(y))}{\ell(y)} \le c_{21} \frac{V(C d_H(y))}{d_H(y)} \le c_{22} \frac{V(d_H(y))}{d_H(y)}. $$
Therefore, applying Lemma \ref{l:3.5} to $B_r$ and to the half-space $H:= \{\ell >0\}$, respectively, we conclude
\begin{align*}
{\rm {\rom{3}}}&=\frac{c}{r} \int_{{B_r} \cap \big( B_r(x_0) \backslash B_{\rho/2}(x_0) \big)} \frac{V(d_{B_r}(y))}{d_{B_r}(y)}\frac{J(1)}{|y-x_0|^{n-2} \varphi(|y-x_0|)}dy
\\ & \quad + \frac{c}{r}\int_{ H \cap \big( B_r(x_0) \backslash B_{\rho/2}(x_0) \big) } \frac{V(\ell(y))}{\ell(y)}\frac{J(1)}{|y-x_0|^{n-2} \varphi(|y-x_0|)}dy\\
&\le \frac{c_{23}}{r} \frac{r}{V(r)} + \frac{c_{24}}{r}\int_{ H \cap \big( B_r(x_0) \backslash B_{\rho/2}(x_0) \big) } \frac{V(d_H(y))}{d_H(y)}\frac{dy}{|y-x_0|^{n-2} \varphi(|y-x_0|)} \le \frac{c_{25}}{V(r)}.
\end{align*}
Combining the estimates of \rom{1}, \rom{2} and \rom{3}, we arrive at
$$ |L(V(\psi))(x_0)| \le {\rm {\rom{1}}}+{\rm {\rom{2}}}+{\rm {\rom{3}}} \le (c_{18}+c_{20}+c_{25}) \frac{1}{V(r)} $$
and \eqref{e:5.2} follows. {\hfill $\Box$ \bigskip}
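We remark that in the stable case $\varphi(r)=r^{2\alpha}$, where $V(r)$ is comparable to $r^{\alpha}$, the estimate \eqref{e:5.2} takes the familiar form
$$ |L(V(\Psi_r))| \le \frac{C_3'}{r^{\alpha}} \quad \mbox{in} \,\, B(0,r) $$
for some constant $C_3'>0$; this is the analogue for $L$ of the well-known fact that functions comparable to $d^{\alpha}$ serve as barriers for the fractional Laplacian on $C^{1,1}$ domains, cf. \cite{ROS1}.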
\subsection{Subsolution and Harnack inequality}\label{s:Harnack}
In this section we construct a subsolution from the barrier obtained in Proposition \ref{p:barrier1}. Recall that we defined the domain of the infinitesimal generator $A$ by
$$ {\mathcal D}= {\mathcal D}(D)= \{ u \in C_0(D) : Au \in C(D) \} $$
in Section \ref{s:MH}. It is unclear whether $V(\psi) \in {\mathcal D}(D)$, since $A(V(\psi))$ need not be continuous. To include our barrier in the domain of the operator, we introduce a larger class which contains $V(\psi)$. For a given bounded $C^{1,1}$ open set $D$ and an open subset $U$ of $D$, define
$$ {\mathcal F}= {\mathcal F}(D,U):= \{ u \in C_0(D) : Au \in L^\infty(U) \}. $$
Denote ${\mathcal F}(D)={\mathcal F}(D,D)$. Clearly ${\mathcal F}(D,U_2) \subset {\mathcal F}(D,U_1)$ for any $U_1 \subset U_2$. We first prove that $V(\psi) \in {\mathcal F}(D)$.
\begin{lemma}\label{l:FF}
Let $\psi$ be a regularized version of $d_D$. Then, $A(V(\psi))= L(V(\psi))$ in $D$. Moreover, $V(\psi) \in {\mathcal F}(D)$.
\end{lemma}
\noindent{\bf Proof.} Let $u \in C_0(D)$ be a twice-differentiable function in $D$. Assume that $\nabla^2 u$ is bounded in some $U \subset \subset D$. We first claim that
\begin{equation}\label{e:AL} Lu(x) = Au(x) \quad \mbox{for any} \quad x \in U. \end{equation}
Indeed, fix $x \in U$ and let $r_x>0$ be a constant satisfying $B=B(x,r_x) \subset U$. Without loss of generality we can assume $r_x \le 1$. Note that there exists a constant $c_1>0$ such that $2|u| + r_x^2\Vert \nabla^2 u\Vert \le c_1$ in $U$. Then we have
\begin{align}\begin{split}\label{e:AL1}
Au(x) &= \lim_{t \downarrow 0} \frac{P_tu(x) - u(x)}{t} = \lim_{t \downarrow 0} \frac{1}{t} \left(\int_{{\mathbb R}^n} u(x+y) p(t,|y|)dy - u(x)\right) \\
&= \lim_{t \downarrow 0} \int_{{\mathbb R}^n} \left( \frac{u(x+y) + u(x-y) }{2} -u(x) \right) \frac{p(t,|y|)}{t} dy.\end{split}
\end{align}
Since there is a constant $c_2>0$ such that $\frac{p(t,r)}{t} \le c_2 J(r)$ for any $t>0$ and $r>0$, we have
\begin{align*}
&\int_{{\mathbb R}^n} \left| \frac{u(x+y) + u(x-y) }{2} -u(x) \right| \frac{p(t,|y|)}{t} dy \\ &\le \int_{B} \left| \frac{u(x+y) + u(x-y) }{2} -u(x) \right| \frac{p(t,|y|)}{t} dy + \int_{B^c} \left| \frac{u(x+y) + u(x-y) }{2} -u(x) \right| \frac{p(t,|y|)}{t} dy \\ &\le c_1 \int_{B} \frac{|y|^2}{r_x^2} \frac{p(t,|y|)}{t}dy + c_1\int_{B^c} \frac{p(t,|y|)}{t}dy \le c_1 c_2 \int_{{\mathbb R}^n} \Big(\frac{|y|^2}{r_x^2} \land 1\Big) J(|y|)dy< \infty
\end{align*}
for any $t>0$, so that we can apply the dominated convergence theorem to the right-hand side of \eqref{e:AL1}. Thus, using $\displaystyle \lim_{t \downarrow 0} \frac{p(t,r)}{t} = J(r)$ we obtain
\begin{align*}Au(x) &= \lim_{t \downarrow 0} \int_{{\mathbb R}^n} \left( \frac{u(x+y) + u(x-y) }{2} -u(x) \right) \frac{p(t,|y|)}{t} dy \\ &= \int_{{\mathbb R}^n}\left( \frac{u(x+y) + u(x-y) }{2} -u(x) \right) J(|y|) dy = Lu(x). \end{align*}
This concludes the claim. Now, by Lemma \ref{l:V}, $V(\psi) \in C_0(D)$ is twice differentiable and $\nabla^2 V(\psi)$ is locally bounded on $D$. Therefore, we arrive at $L(V(\psi))= A(V(\psi))$ in $D$. It immediately follows from \eqref{e:5.1} that $V(\psi) \in {\mathcal F}(D)$. {\hfill $\Box$ \bigskip}
Now we are ready to construct a subsolution with respect to the generator $A$.
\begin{lemma}[subsolution]\label{l:sub}
There exist a constant $C_4=C_4(n,a_1, a_2, \alpha_1, \alpha_2) > 0$ independent of $r$ and a radial function $w= w_r \in {\mathcal F}(B_{4r})$ satisfying
\begin{align*}
\begin{cases}
Aw \geq 0 &\text{in} ~ B_{4r} \setminus B_{r}, \\
w \le V(r) &\text{in} ~ B_{r}, \\
w \geq C_4 V( 4r - \vert x \vert) &\text{in} ~ B_{4r} \setminus B_{r}, \\
w \equiv 0 &\text{in} ~ \mathbb{R}^n \setminus B_{4r},
\end{cases}
\end{align*}
where $B_r := B(0, r)$.
\end{lemma}
\noindent{\bf Proof.} Let $\Psi = \Psi_{4r}$ be the regularized version of $d_{B_{4r}}$ in \eqref{e:psi2} and choose a function $\eta \in C_c^\infty(B_1)$ satisfying $\Vert \eta \Vert_{C(B_1)}=1$ and $\eta \equiv 1$ on $B_{1/2}$. Define $\eta_r(x):=V(r) \eta(x/r) \in C_c^\infty(B_{r})$. Then, we have
\begin{align*}
|A\eta_r (x) |
&= |L\eta_r(x)| \le \int_{\mathbb{R}^n} \left| \frac{\eta_r(x+y)+\eta_r(x-y)}{2} - \eta_r(x) \right| J(|y|) dy \\
&\le \left( \Vert \nabla^2 \eta_r \Vert_{L^\infty(B_r)} + \Vert \eta_r \Vert_{L^\infty(B_r)} \right) \int_{\mathbb{R}^n} \big( |y|^2 \land 1 \big) J(|y|)dy < \infty
\end{align*}
for any $x \in {\mathbb R}^n$, which implies $\eta_r \in {\mathcal F}(B_{4r})$. Also, for $x \in B_{4r} \backslash B_{r}$,
\begin{align*}
A\eta_r (x)
&= L\eta_r(x) = \int_{\mathbb{R}^n} \frac{\eta_r(x+y)+\eta_r(x-y)}{2} \frac{J(1)}{|y|^n\varphi(|y|)} dy \\
&= \int_{\mathbb{R}^n} \eta_r (x+y) \frac{J(1)}{|y|^n\varphi(|y|)} dy \ge \int_{ B(-x,r/2 ) } \frac{V(r) J(1)}{|y|^n\varphi(|y|)} dy \ge \frac{ c_1(r/2)^n V(r)}{(9r/2)^n \varphi(9r/2)} \ge \frac{c_2}{V(4r)}.
\end{align*}
Here we used \eqref{e:V-asymp} and \eqref{e:V-wsc} for the last inequality.
Define a function $\tilde{w}_r$ by
\begin{align*}
\tilde{w}_r = \frac{c_2}{C_3} V(\Psi) + \eta_r,
\end{align*}
where $C_3$ is the constant in Proposition \ref{p:barrier1}. We have $\tilde{w}_r \in {\mathcal F}(B_{4r})$ by Lemma \ref{l:FF}. Also, for $x \in B_{4r} \backslash B_{r}$, using Proposition \ref{p:barrier1} and Lemma \ref{l:FF} again, we have
\begin{align*}
A\tilde{w}_r(x) = \frac{c_2}{C_3} AV(\Psi)(x) + A\eta_r(x) \ge - \frac{c_2}{C_3} |LV(\Psi)(x)| + A\eta_r(x) \ge -\frac{c_2}{V(4r)} + \frac{c_2}{V(4r)} = 0
\end{align*}
and
\begin{align*}
\tilde{w}_r(x) = \frac{c_2}{C_3} V(\Psi(x)) \ge c_3 V(d_{B_{4r}}(x)) = c_3 V(4r -|x|).
\end{align*}
For $x \in B_{r}$,
$$ \tilde{w}_r(x) \le \frac{c_2}{C_3}V(4C r) + V(r) \le c_4 V(r) $$
by \eqref{e:psi2} and \eqref{e:V-wsc}.
Define $w_r(x) := \frac{1}{c_4} \tilde{w}_r(x)$. Then $w_r$ satisfies all assertions in Lemma \ref{l:sub} with constant $C_4 = \frac{c_3}{c_4}$, which is independent of $r$. {\hfill $\Box$ \bigskip}
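For orientation, note that in the stable case $V(r)=r^{\alpha}$ the lower bound in Lemma \ref{l:sub} reads
$$ w(x) \ge C_4 (4r-|x|)^{\alpha} = C_4\, d_{B_{4r}}(x)^{\alpha} \quad \mbox{in} \,\, B_{4r} \setminus B_r, $$
so the subsolution detaches from zero at the boundary at exactly the rate $V(d_{B_{4r}})$; this is the growth rate that will be propagated to harmonic functions in the oscillation argument of Section \ref{s:pf of Thm 1.2}.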
We end this section with the Harnack inequality and a probabilistic version of the maximum principle. For local operators, the Harnack inequality implies H\"older regularity of solutions of differential equations. However, for nonlocal operators, as Silvestre mentioned in \cite{Sil}, this is not true because nonnegativity of the function $u$ is required in the whole space $\mathbb{R}^n$. The Harnack inequality, the maximum principle, and the subsolution constructed in Lemma \ref{l:sub} will play a key role in the proof of Theorem \ref{t:2}. We emphasize that the following theorem is the Harnack inequality for harmonic functions with respect to $A$, and it does not imply the Harnack inequality for viscosity solutions with respect to $L$. See \cite{CS2} for the statement of the Harnack inequality for viscosity solutions.
\begin{theorem} [Harnack inequality] \cite[Theorem 2.2]{SV} \label{t:Harnack}
Let $D$ be a bounded $C^{1,1}$ open set. Then, there exists a constant $C > 0$ such that for any ball $B(x_0,r) \subset D$, and any nonnegative function $u \in {\mathcal F}(D)$ satisfying $Au = 0$ a.e. in $B(x_0, r)$, we have
\begin{align*}
\sup_{B(x_0, r/2)} u \leq C \inf_{B(x_0, r/2)} u.
\end{align*}
\end{theorem}
Also, we have the following maximum principle.
\begin{lemma}[Maximum principle]\label{l:m} Let $D$ be a bounded $C^{1,1}$ open set and $U$ be an open subset of $D$. If the function $u \in {\mathcal F}(D,U)$ satisfies $Au=0$ a.e. in $U$ and $u \ge 0$ in $U^c$, then $u \ge 0$ in ${\mathbb R}^n$.
\end{lemma}
\noindent{\bf Proof.} Suppose that there exists $x \in U$ satisfying $u(x) < 0$. Since $u \in C_0(D)$, the set $U_- := \{ x \in {\mathbb R}^n : u(x) < 0 \}$ is a bounded open set with positive Lebesgue measure. For any $t>0$ we have
\begin{align*}
\int_{U_-} P_t u(x) - u(x) dx &= \int_{U_-} \int_{{\mathbb R}^n} u(y) p(t,|x-y|) dy dx - \int_{U_-} u(x)dx \\
&= \int_{{\mathbb R}^n} u(y) \int_{U_-} p(t,|x-y|)dx dy - \int_{U_-} u(y)dy \\
&= \int_{U_-^c} u(y) \int_{U_-} p(t,|x-y|)dx dy + \int_{U_-} u(y) \left( \int_{U_-} p(t,|x-y|)dx - 1 \right) dy \\
&\ge \int_{U_-} u(y) \left( \int_{U_-} p(t,|x-y|)dx - 1 \right) dy.
\end{align*}
Since $U_-$ is bounded, $\mathrm{diam}(U_-)=:R<\infty$. Thus, for any $y \in U_- \subset B(y,R)$,
\begin{align*}
\frac{1-\int_{U_-} p(t,|x-y|)dx}{t} \ge \frac{1-\int_{B(y,R)} p(t,|x-y|)dx }{t} = \frac{1}{t} \left( 1 - {\mathbb P}^y (X_t \in B(y,R)) \right) = \frac{{\mathbb P}^0 (|X_t| \ge R)}{t}.
\end{align*}
Using heat kernel estimates in \cite[Theorem 21]{BGR1}, we have $p(t,r) \asymp \left(\varphi^{-1}(t)^{-n} \land \frac{t}{r^n \varphi(r)}\right)$ for $(t,r) \in (0,1] \times {\mathbb R}_+$. Note that $\frac{t}{r^n\varphi(r)} \le \varphi^{-1}(t)^{-n}$ for $t \le \varphi(r) $. Thus, there exists $\varepsilon = \varepsilon(R)>0$ satisfying
$$ \frac{{\mathbb P}^0(|X_t| \ge R)}{t} \ge \frac{1}{t}\int_{R \le |z| \le 2R} p(t,|z|)dz \ge c \int_R^{2R} \frac{1}{r\varphi(r)}dr \ge \varepsilon \quad \mbox{for all} \quad t \in (0,\varphi(R)]. $$
Combining above estimates we obtain
$$ \int_{U_-} \frac{P_t u(x)- u(x)}{t} dx \ge -\varepsilon \int_{U_-} u(y) dy \quad \mbox{for all} \quad t \in (0,\varphi(R)]. $$
Letting $t \to 0$, we conclude
$$ 0= \int_{U_-} Au(x)dx = \lim_{t \to 0} \int_{U_-} \frac{P_t u(x)- u(x)}{t} dx \ge -\varepsilon \int_{U_-} u(y)dy>0, $$
which is a contradiction. Therefore, $u \ge 0$ in ${\mathbb R}^n$.
{\hfill $\Box$ \bigskip}
\subsection{Proof of Theorem \ref{t:2}} \label{s:pf of Thm 1.2}
In this section we prove Theorem \ref{t:2}. More precisely, we prove the H\"older regularity of the function $u/V(d_D)$ up to the boundary of $D$. We will control the oscillation of this function using the Harnack inequality, the maximum principle, and the subsolution constructed in Lemma \ref{l:sub}.
Let us adopt the notation of \cite[Definition 3.3]{ROS1}. Let $\kappa > 0$ be a fixed small constant and let $\kappa' = 1/2 + 2\kappa$. Given $x_0 \in \partial D$ and $r>0$, define
\begin{align*}
D_r = D_r(x_0) = B(x_0, r) \cap D
\end{align*}
and
\begin{align*}
D_{\kappa'r}^+ = D_{\kappa'r}^+(x_0) = B(x_0, \kappa'r) \cap \left\lbrace x \in D : -x \cdot \nu(x_0) \geq 2\kappa r \right\rbrace,
\end{align*}
where $\nu(x_0)$ is the unit outward normal at $x_0$. Since $D$ is a bounded $C^{1,1}$ open set, there exists $\rho_0 > 0$ such that for each $x_0 \in \partial D$ and $r \leq \rho_0$, there exists an orthonormal system $CS_{x_0}$ with its origin at $x_0$ and a $C^{1,1}$-function $\Psi : {\mathbb R}^{n-1} \rightarrow {\mathbb R}$ satisfying $\Psi(\tilde{0}) = 0, \nabla_{CS_{x_0}} \Psi(\tilde{0}) = 0, \Vert \Psi \Vert_{C^{1,1}} \leq \kappa$, and
\begin{align*}
\left\lbrace y=(\tilde{y}, y_n) ~ \text{in} ~ CS_{x_0} : \vert \tilde{y} \vert < 2r, \Psi(\tilde{y}) < y_n < 2r \right\rbrace \subset D.
\end{align*}
Then we have
\begin{align} \label{e:Dr1}
B(y, \kappa r) \subset D_r(x_0) ~~ \text{for all} ~~ y \in D^+_{\kappa' r}(x_0),
\end{align}
and we can take a $C^{1,1}$ subdomain $D_r^{1,1}$ satisfying $D_r \subset D_r^{1,1} \subset D_{2r}$ and
\begin{align} \label{e:Dr11}
\mathrm{dist} (y, \partial D_r^{1,1}) = d_D(y)
\end{align}
for all $y\in D_r$. Since $D_r$ is not $C^{1,1}$ in general, we will use this subdomain instead of $D_r$.
Since $D$ is a bounded $C^{1,1}$ open set, we can further assume that for each $x_0 \in \partial D$ and $r \leq \rho_0$,
\begin{align} \label{e:Dr2}
B( y^* - 4\kappa r \nu(y^*), 4\kappa r) \subset D_r(x_0) ~~\text{and} ~~ B(y^* - 4\kappa r \nu(y^*), \kappa r) \subset D_{\kappa' r}^+ (x_0)
\end{align}
for all $y \in D_{r/2}(x_0)$, where $y^* \in \partial D$ is the unique boundary point satisfying $\vert y - y^* \vert = d_D(y)$.
The following oscillation lemma is the key lemma to prove Theorem \ref{t:2}.
\begin{lemma} [Oscillation lemma] \label{l:osc}
Assume $f \in C(D)$ and let $u \in {\mathcal D}$ be the viscosity solution of \eqref{e:pde1}. Then there exist constants $\gamma \in (0, 1)$ and $C_1 > 0$, depending only on $n, a_1, a_2, \alpha_1, \alpha_2$ and $D$, such that
\begin{align} \label{e:osc}
\sup_{D_r(x_0)} \frac{u}{V(d_D)} - \inf_{D_r(x_0)} \frac{u}{V(d_D)} \leq C_1 V(r)^\gamma \Vert f \Vert_{L^\infty(D)}
\end{align}
for any $x_0 \in \partial D$ and $r > 0$.
\end{lemma}
To prove the oscillation lemma, we need some preparation. Note that in the following two lemmas we aim to verify inequalities for every function $u \in {\mathcal F}$, since we want to utilize the subsolution constructed in Lemma \ref{l:sub}. The first one is a generalized version of the Harnack inequality.
\begin{lemma} [Harnack inequality] \label{l:harnack}
There exists a constant $C_2 = C_2(n, a_1, a_2, \alpha_1, \alpha_2, D) > 0$ such that for any $r \leq \rho_0, x_0 \in \partial D$ and nonnegative function $u \in {\mathcal F}(D,D_r^{1,1})$,
\begin{align} \label{e:Harnack}
\sup_{D^+_{\kappa' r}(x_0)} \frac{u}{V(d_D)} \leq C_2 \left( \inf_{D^+_{\kappa' r}(x_0)} \frac{u}{V(d_D)} + \Vert Au \Vert_{L^\infty(D_r^{1,1})} V(r) \right).
\end{align}
\end{lemma}
\noindent{\bf Proof.}
We first prove that if a nonnegative function $v$ satisfies $Av = 0$ a.e. in $D^{1,1}_r$, then
\begin{align} \label{e:h for v/V}
\sup_{D^+_{\kappa' r}(x_0)} \frac{v}{V(d_D)} \leq c \inf_{D^+_{\kappa' r}(x_0)} \frac{v}{V(d_D)}
\end{align}
for a constant $c>0$ which is independent of $r$ and $v$. Indeed, for each $y \in D^+_{\kappa' r}$, we have $B(y, \kappa r) \subset D^{1,1}_r$ by \eqref{e:Dr1}, hence $Av = 0$ a.e. in $B(y, \kappa r)$. We may cover $D^+_{\kappa' r}$ by finitely many balls $B(y_i, \kappa r/2)$. Here the number of balls is independent of $r$. By Theorem \ref{t:Harnack}, we have for each $i$,
\begin{align*}
\sup_{B(y_i, \kappa r/2)} v \leq c_1 \inf_{B(y_i, \kappa r/2)} v.
\end{align*}
If $x \in B(y_i, \kappa r/2)$, we have $\kappa r /2 \leq d_D(x) \leq r/2 + 5\kappa r / 2$. Thus, using \eqref{e:V-wsc} we obtain
\begin{align*}
\sup_{B(y_i, \kappa r/2)} \frac{v}{V(d_D)} \leq \sup_{B(y_i, \kappa r/2)} \frac{v}{V(\kappa r / 2)} \leq c_2 \inf_{B(y_i, \kappa r/2)} \frac{v}{V(r/2 + 5\kappa r/2)} \leq c_2 \inf_{B(y_i, \kappa r/2)} \frac{v}{V(d_D)}.
\end{align*}
Now \eqref{e:h for v/V} follows from the standard covering argument, possibly with a larger constant.
We next prove \eqref{e:Harnack}. Let us write $u = u_1 + u_2$, where $u_1 := u + R^{D_r^{1,1}}Au$ and $u_2 := -R^{D_r^{1,1}} Au$. We claim that $u_1 \ge 0$ in $\mathbb{R}^n$ and $Au_1 = 0$ a.e. in $D^{1,1}_r$.
Following the calculations of \eqref{e:A} we obtain that for any open subset $U \subset D$, $x \in U$ and $u \in {\mathcal F}(D,U)$,
\begin{equation}\label{e:A1}
Au(x) = \lim_{ t \downarrow 0} \frac{P_t u(x) - u(x)}{t}=\lim_{ t \downarrow 0} \frac{P_t^U u(x) - u(x)}{t}.
\end{equation}
Let us emphasize that we have only used $u \in C_0(D)$ in \eqref{e:A}, so we can repeat the same argument for $u \in {\mathcal F}(D,U)$.
Let $g \in L^\infty(U)$. Deducing $R^U g \in C_0(U)$ from Proposition \ref{p:R} and \eqref{d:R}, we obtain the following counterpart of \eqref{e:Au}: For any $x \in U$,
\begin{align}
\begin{split}\label{e:A2}
AR^U g(x) &= A \left(\int_0^\infty P_s^U g(\cdot ) ds \right)(x) = \lim_{t \downarrow 0} \frac{1}{t} \left( P^U_t\left(\int_0^\infty P_s^U g(\cdot ) ds \right)(x) - \int_0^\infty P_s^U g(x) ds \right) \\
& = \lim_{t \downarrow 0} \frac{1}{t} \left( \int_0^\infty P_{s+t}^U g(x) ds - \int_0^\infty P_s^U g(x)ds \right) \\ &= -\lim_{t \downarrow 0} \frac{ \int_0^t P_s^U g(x)ds}{t} =-\lim_{t \downarrow 0} \frac{ \int_0^t P_s g(x)ds}{t}.
\end{split}
\end{align}
Here we used \eqref{e:A1} for the first line. Let
$$U_g := \{ x \in U : \lim_{r \downarrow 0} \frac{1}{r^n} \int_{B(x,r)} |g(x)-g(y)|dy =0 \}. $$
Then, we have $| U \setminus U_g |=0$ since $g \in L^\infty(U) \subset L^1(U)$. For $x \in U_g$, we have
$$|P_t g(x) - g(x) | = \left\vert \int_{{\mathbb R}^n} p(t,|x-y|)(g(y)-g(x))dy \right\vert \le \int_{{\mathbb R}^n} p(t,|x-y|) |g(y)-g(x)|dy. $$
Let $\varepsilon>0$. Using $p(t,r) \asymp \left( \varphi^{-1}(t)^{-n} \land \frac{t}{r^n \varphi(r)}\right)$ for $(t,r) \in (0,1] \times {\mathbb R}_+$ in \cite[Theorem 21]{BGR1} again, there exist constants $c_3(\varepsilon), c_4(\varepsilon)>0$ such that for any $t \in (0,1]$ and $r>0$,
$$ p(t,r) \le c_4 \varphi^{-1}(t)^{-n} $$
and
$$ {\mathbb P}^x (|X_t| > c_3 \varphi^{-1}(t)) \le \varepsilon. $$
Indeed, using \eqref{e:varphi-inf} and \eqref{e:varphi-wsc} we have
$${\mathbb P}^x (|X_t| > c_3 \varphi^{-1}(t)) = \int_{|z|>c_3 \varphi^{-1}(t)} p(t,|z|)dz \le c_4 t\int_{c_3 \varphi^{-1}(t)}^\infty \frac{dr}{r\varphi(r)} \le \frac{c_5t} {\varphi(c_3\varphi^{-1}(t))} \le c_6c_3^{-2\alpha_1}. $$
Thus, we obtain
\begin{align*}
|P_t g(x) - g(x) | &\le \int_{B(x, c_3\varphi^{-1}(t))} p(t,|x-y|) |g(y)-g(x)| dy + \int_{B(x, c_3\varphi^{-1}(t))^c} p(t,|x-y|) |g(y)-g(x)|dy \\
&\le c_4 \varphi^{-1}(t)^{-n} \int_{B(x, c_3 \varphi^{-1}(t))} |g(y)-g(x)|dy + 2 \Vert g \Vert_{\infty} \int_{B(x, c_3\varphi^{-1}(t))^c} p(t,|x-y|)dy \\
&\le c_4 \varphi^{-1}(t)^{-n} \int_{B(x, c_3 \varphi^{-1}(t))} |g(y)-g(x)|dy + 2\Vert g \Vert_{\infty} \varepsilon.
\end{align*}
Since $\varepsilon>0$ is arbitrary and $x \in U_g$, we conclude
$$ \lim_{t \downarrow 0} |P_t g(x)- g(x)| =0. $$
Combining this with \eqref{e:A2}, we conclude that for any open subset $U \subset D$ and $g \in L^\infty(U)$,
\begin{equation}\label{e:g}
AR^{U} g= -g \quad \mbox{a.e. in} \quad U.
\end{equation}
Since $u \in {\mathcal F}(D,U)$, we have $Au \in L^\infty(U)$. Thus, taking $U=D_r^{1,1}$ and $g=Au$ in \eqref{e:g} we conclude
$$ Au_1 = Au + AR^{D_r^{1,1}}Au = 0 \quad \mbox{a.e. in } D_r^{1,1}. $$
Also, $u_1 \ge 0$ follows by applying Lemma \ref{l:m} with the above equation and the fact that $u_1 = u \ge 0$ in ${\mathbb R}^n \backslash D_r^{1,1}$.
Applying \eqref{e:h for v/V} to $u_1$, we get
\begin{align*}
\sup_{D^+_{\kappa' r}} \frac{u_1}{V(d_D)} \leq c_7 \inf_{D^+_{\kappa' r}} \frac{u_1}{V(d_D)}.
\end{align*}
Meanwhile, using \eqref{e:Dr11} and Lemma \ref{l:u} we have
\begin{align*}
\vert u_2(x) \vert \leq c_8 \Vert Au \Vert_{L^\infty(D_r^{1,1})} V(\mathrm{diam}(D_r^{1,1})) V(\mathrm{dist}(x, \partial D^{1,1}_r)) \leq c_9 \Vert Au \Vert_{L^\infty(D_r^{1,1})} V(r) V(d_D(x))
\end{align*}
for all $x \in D_r^{1,1}$. Therefore, combining the above two inequalities, we conclude that
\begin{align*}
\sup_{D^+_{\kappa' r}} \frac{u}{V(d_D)}
&\leq \sup_{D^+_{\kappa' r}} \frac{u_1}{V(d_D)} + \sup_{D^+_{\kappa' r}} \frac{u_2}{V(d_D)} \leq c_7 \inf_{D^+_{\kappa' r}} \frac{u_1}{V(d_D)} + \sup_{D^+_{\kappa' r}} \frac{u_2}{V(d_D)} \\
&\leq c_7 \inf_{D^+_{\kappa' r}} \frac{u}{V(d_D)} + (c_7 + 1) \sup_{D^+_{\kappa' r}} \frac{\vert u_2 \vert}{V(d_D)} \leq C_2 \left( \inf_{D^+_{\kappa' r}} \frac{u}{V(d_D)} + \Vert Au \Vert_{L^\infty(D_r^{1,1})} V(r) \right).
\end{align*}
{\hfill $\Box$ \bigskip}
The next lemma gives the link between $D^+_{\kappa' r}$ and $D_{r/2}$. Here we are going to use the subsolution $w$ in Lemma \ref{l:sub}.
\begin{lemma} \label{l:4.9}
Let $r \leq \rho_0, x_0 \in \partial D$. If $u \in {\mathcal F}(D,D_r^{1,1})$ is nonnegative, then there exists a constant $C_3 = C_3(n, a_1, a_2, \alpha_1, \alpha_2, D) > 0$ such that
\begin{align*}
\inf_{D^+_{\kappa' r}(x_0)} \frac{u}{V(d_D)} \leq C_3 \left( \inf_{D_{r/2}(x_0)} \frac{u}{V(d_D)} + \Vert Au \Vert_{L^\infty(D_r^{1,1})} V(r) \right).
\end{align*}
\end{lemma}
\noindent{\bf Proof.}
First assume that $Au$ is nonnegative. As in the proof of Lemma \ref{l:harnack}, we write $u = u_1 + u_2$, where $u_1 = u + R^{D_r^{1,1}} Au$ and $u_2 = -R^{D_r^{1,1}} Au$. Then $u_1$ is a nonnegative solution for
\begin{align*}
\begin{cases}
Au_1 = 0 &\text{a.e. in} ~ D^{1,1}_r, \\
u_1 = u &\text{in} ~ \mathbb{R}^n \setminus D^{1,1}_r.
\end{cases}
\end{align*}
Let
\begin{align*}
m:= \inf_{D^+_{\kappa' r}} \frac{u_1}{V(d_D)} \geq 0.
\end{align*}
For $y \in D_{r/2}$, we have either $y\in D^+_{\kappa' r}$ or $d_D(y) < 4\kappa r$ by \eqref{e:Dr2}.
If $y\in D^+_{\kappa' r}$, then clearly
\begin{align} \label{e:inf}
m \leq \frac{u_1(y)}{V(d_D(y))}.
\end{align}
If $d_D(y) < 4 \kappa r$, let $y^*$ be the closest point to $y$ on $\partial D_r^{1,1}$ and let $\tilde{y} = y^* - 4\kappa r \nu (y^*)$. By \eqref{e:Dr2}, we have $B_{4\kappa r}(\tilde{y}) \subset D_r$ and $B_{\kappa r}(\tilde{y}) \subset D^+_{\kappa' r}$.
Now consider $w \in {\mathcal F}(B_{4\kappa r} (\tilde{y})) \subset {\mathcal F}(D,B_{4\kappa r} (\tilde{y}) \backslash B_{\kappa r}(\tilde{y}))$ satisfying
\begin{align*}
\begin{cases}
Aw \geq 0 &\text{in} ~ B_{4\kappa r} (\tilde{y}) \setminus B_{\kappa r}(\tilde{y}), \\
w \leq V(\kappa r) &\text{in} ~ B_{\kappa r} (\tilde{y}), \\
w \geq c_1 V(4\kappa r - \vert x - \tilde{y} \vert) &\text{in} ~ B_{4\kappa r} (\tilde{y}) \setminus B_{\kappa r} (\tilde{y}), \\
w \equiv 0 &\text{in} ~ \mathbb{R}^n \setminus B_{4\kappa r} (\tilde{y}),
\end{cases}
\end{align*}
which can be obtained by translating the subsolution in Lemma \ref{l:sub}. Since $Au_1 = 0$ a.e. in $B_{4\kappa r} (\tilde{y})$, we have
\begin{align*}
\begin{cases}
Au_1 = 0 \leq A(mw) &\text{a.e. in} ~ B_{4\kappa r} (\tilde{y}) \setminus B_{\kappa r}(\tilde{y}), \\
u_1 \geq mV(d_D) \geq mw &\text{in} ~ B_{\kappa r} (\tilde{y}), \\
u_1 \geq 0 = m w &\text{in} ~ \mathbb{R}^n \setminus B_{4\kappa r} (\tilde{y}).
\end{cases}
\end{align*}
Now by the maximum principle in Lemma \ref{l:m} with the function $u_1 -mw$ and $U=B_{4\kappa r} (\tilde{y}) \setminus B_{\kappa r}(\tilde{y})$, we obtain $u_1 \geq mw$ in $\mathbb{R}^n$. In particular, for $y \in B_{4\kappa r} (\tilde{y}) \setminus B_{\kappa r}(\tilde{y})$,
\begin{align*}
u_1(y) \geq c_1 m V(4\kappa r - \vert y - \tilde{y} \vert ) = c_1 m V(d_D(y)).
\end{align*}
Therefore, we obtain
\begin{align*}
\inf_{D^+_{\kappa' r}} \frac{u_1}{V(d_D)} \leq c_2 \inf_{D_{r/2}} \frac{u_1}{V(d_D)}.
\end{align*}
On the other hand, $u_2$ satisfies
\begin{align*}
\vert u_2(x) \vert \leq c_3 \Vert Au \Vert_{L^\infty(D_r^{1,1})} V(r) V(d_D(x))
\end{align*}
for all $x \in D_r^{1,1}$, which gives the desired result. {\hfill $\Box$ \bigskip}
We now prove the oscillation lemma (Lemma \ref{l:osc}) using Lemmas \ref{l:harnack} and \ref{l:4.9}.
\\
\noindent{\bf Proof of Lemma \ref{l:osc}}
As a consequence of Remark \ref{r:3.7}, by dividing both sides of \eqref{e:pde1} by $\Vert f \Vert_{L^\infty(D)}$ if necessary, we may assume $\Vert f \Vert_{L^\infty (D)} \leq 1$ and $\Vert u \Vert_{C(D)}=\Vert R^Df \Vert_{C(D)} \leq c_1$ without loss of generality. Fix $x_0 \in \partial D$. We will prove that there exist constants $c_2 > 0, \rho_1 \in (0, \rho_0 /16]$, and $\gamma \in (0,1)$
and monotone sequences $(m_k)_{k\geq 0}$ and $(M_k)_{k \geq 0}$ such that $M_k - m_k = V(r_{k+1} / 2)^\gamma,$
\begin{align*}
-V(\rho_1 / 16) \leq m_k \leq m_{k+1} < M_{k+1} \leq M_k \leq V(\rho_1 / 16),
\end{align*}
and
\begin{align*}
m_k \leq \frac{u}{c_2 V(d_D)} \leq M_k ~~ \text{in} ~ D_{r_k} = D_{r_k}(x_0)
\end{align*}
for all $k \geq 0$, where $r_k = \rho_1 8^{-k}$. If we have such constants and sequences, then for any $0< r \leq \rho_1$ there exists $k \ge 0$ satisfying $r \in (r_{k+1},r_k]$ and
\begin{align*}
\sup_{D_r} \frac{u}{V(d_D)} - \inf_{D_r} \frac{u}{V(d_D)} \leq \sup_{D_{r_k}} \frac{u}{V(d_D)} - \inf_{D_{r_k}} \frac{u}{V(d_D)} \leq c_2 (M_k-m_k) = c_2 V(r_{k+1} / 2)^\gamma \leq c_2 V(r)^\gamma.
\end{align*}
Also, for any $r > \rho_1$ we have
\begin{align*}
\sup_{D_r} \frac{u}{V(d_D)} - \inf_{D_r} \frac{u}{V(d_D)} \leq c_3 \leq c_4 V(\rho_1)^\gamma \leq c_4 V(r)^\gamma
\end{align*}
by Lemma \ref{l:u}. The above two inequalities prove the lemma, so it suffices to construct such constants and sequences.
We proceed by induction on $k$. The case $k=0$ follows from Lemma \ref{l:u} provided we take $c_2$ large enough. The constants $\rho_1$ and $\gamma$ will be chosen later. Assume that we have sequences up to $m_k$ and $M_k$. Let $\psi$ be the regularized version of $d_D$. We may assume that $\psi = d_D$ in $\left\lbrace d_D(x) \leq \rho_1 \right\rbrace$. Define
\begin{align*}
u_k = V(\psi) \left( \frac{u}{c_2 V(\psi)} - m_k \right) = \frac{1}{c_2} u - m_k V(\psi)
\end{align*}
in $\mathbb{R}^n$. Note that $u_k \in {\mathcal F}(D)$ since $Au=f$ as a consequence of Theorem \ref{t:sol}. Moreover, for $x \in D^{1,1}_{r_k / 4}$ we have $u_k^- \in C^2(x)$ since we know that $u_k^- \equiv 0$ in $B(x_0, r_k)$ by the induction hypothesis. Thus, we have $Au_k^-(x) = Lu_k^-(x)$ by \eqref{e:gen2}, which implies that $Au_k^-$ is well-defined in $D^{1,1}_{r_k / 4}$, and so is $Au_k^+$. We will apply Lemmas \ref{l:harnack} and \ref{l:4.9} for the function $u_k^+$ and $r = r_k / 4$ to find $m_{k+1}$ and $M_{k+1}$. By \eqref{e:5.1} and Lemma \ref{l:FF}, we have
\begin{align} \label{e:Lu+}
\begin{split}
\vert Au_k^+ \vert
&\leq \vert Au_k \vert + \vert Au_k^- \vert \leq \left\vert \frac{1}{c_2} Au - m_k AV(\psi) \right\vert + \vert Au_k^- \vert \\
&\leq \left(\frac{1}{c_2} \vert f \vert + V(\rho_1/16) \vert L(V(\psi)) \vert\right) + \vert Au_k^- \vert \leq c_3 + \vert Au_k^- \vert
\end{split}
\end{align}
in $D$. Thus, we need to estimate $\vert Au_k^- \vert$ in $D_{r_k / 4}^{1,1}$ for the usage of Lemmas \ref{l:harnack} and \ref{l:4.9}.
Let $x \in D^{1,1}_{r_k / 4}$. By the induction hypothesis, we have $u_k^- \equiv 0$ in $B(x_0, r_k)$, which implies that $u_k^- \in C^2(x)$. Thus, we compute the value $Au_k^-(x)$ using the operator $L$ as follows:
\begin{equation} \label{e:Lu_k^-}
\begin{split}
0 \leq Au_k^-(x) = Lu_k^-(x) &= \frac{1}{2} \int_{\mathbb{R}^n} \left( u_k^-(x+h) + u_k^-(x-h) \right) \frac{J(1)}{\vert h \vert^n \varphi(\vert h \vert)} \, dh \\
&= \int_{x+h \notin B_{r_k}} u_k^-(x+h) \frac{J(1)}{\vert h \vert^n \varphi(\vert h \vert)} \, dh.
\end{split}
\end{equation}
For any $y \in B_{r_0} \setminus B_{r_k}$, there is $0 \le j<k$ such that $y\in B_{r_j} \setminus B_{r_{j+1}}$. Since $c_2^{-1} u \geq m_j V(\psi)$ and $d_D = \psi$ in $B_{r_j}$, we have
\begin{align*}
u_k(y) &= c_2^{-1} u(y) - m_k V(\psi(y)) \geq (m_j - m_k)V(\psi(y)) \\
&\geq (m_j - M_j + M_k - m_k) V(d_D(y)) \geq - (V(r_{j+1} / 2)^\gamma - V(r_{k+1} / 2)^\gamma ) V(r_j).
\end{align*}
It follows from $r_{j+1} \leq \vert y - x_0 \vert < r_j \leq 8\vert y - x_0 \vert \leq 1$ that
\begin{equation} \label{e:u_k^-}
\begin{split}
u_k^- (y)
&\leq c_4 \left( V(\vert y - x_0 \vert / 2)^\gamma - V(r_k / 16)^\gamma \right) V(8 \vert y - x_0 \vert) \\
&\leq c_5 \left( V(\vert y - x_0 \vert / 2)^\gamma - V(r_k / 16)^\gamma \right) V( \vert y - x_0 \vert / 2).
\end{split}
\end{equation}
Note that \eqref{e:u_k^-} possibly with a larger constant also holds for $y \in \mathbb{R}^n \setminus B_{r_0}$ because $\Vert u_k \Vert_{C(\mathbb{R}^n)} \leq c_1 c_2^{-1} + V(1/16) V(\tilde{C})$ for any $k$ and
\begin{align*}
\left( V(\vert y - x_0 \vert / 2)^\gamma - V(r_k / 16)^\gamma \right) V(\vert y - x_0 \vert / 2) \geq \left( V(\rho_1 / 2)^\gamma - V(\rho_1 /16)^\gamma \right) V(\rho_1 / 2) > 0
\end{align*}
for any $y \in \mathbb{R}^n \setminus B_{r_0}$. Thus, by \eqref{e:Lu_k^-} and \eqref{e:u_k^-}, we have
\begin{align*}
\vert Au_k^-(x) \vert
&\leq c_6 \int_{x+h \notin B_{r_k}} \left( V(\vert x+h - x_0 \vert / 2)^\gamma - V(r_k / 16)^\gamma \right) \frac{V(\vert x+h-x_0 \vert / 2)}{\vert h \vert^n \varphi(\vert h \vert)} \, dh.
\end{align*}
If $x+h \notin B_{r_k}$, then $\vert h \vert \geq \vert x+h-x_0 \vert - \vert x-x_0 \vert \geq r_k - r_k / 2 = r_k / 2$ and $\vert x+h-x_0 \vert \leq r_k / 2 + \vert h \vert \leq 2\vert h \vert$. Thus, recalling that ${\mathcal P}_1(r)=\int_r^\infty \frac{ds}{s\varphi(s)}$, we obtain
\begin{align*}
\vert Au_k^-(x) \vert
\leq c_6& \int_{\vert h \vert \geq r_k / 2} \left( V(\vert h \vert)^\gamma - V(r_k / 16)^\gamma \right) \frac{V(\vert h \vert)}{\vert h \vert^n \varphi(\vert h \vert)} \, dh \\
\leq c_7& \int_{r_k/2}^\infty \left( V(s)^\gamma - V(r_k / 16)^\gamma \right) V(s) d(-\mathcal{P}_1)(s) \\
= c_7& \bigg( \left[ - \left( V(s)^\gamma - V(r_k / 16)^\gamma \right) V(s) \mathcal{P}_1(s) \right]_{r_k/2}^\infty \\
&+ \int_{r_k / 2}^\infty \left( (1+\gamma)V(s)^\gamma - V(r_k / 16)^\gamma \right) V'(s) \mathcal{P}_1(s) ds \bigg) =: c_7 \left( {\rm \rom{1}} + {\rm \rom{2}} \right).
\end{align*}
By \eqref{e2} we have
\begin{align*}
\lim_{s\rightarrow \infty} \left( V(s)^\gamma -V(r_k / 16)^\gamma \right) V(s) \mathcal{P}_1(s) \leq c_8 \lim_{s\rightarrow\infty} \frac{V(s)^\gamma - V(r_k / 16)^\gamma}{V(s)} = 0,
\end{align*}
hence
\begin{align*}
{\rm \rom{1}} \leq c_8 \frac{ V(r_k/2)^\gamma - V(r_k / 16)^\gamma }{V(r_k / 2)}.
\end{align*}
Also, using \eqref{e2} again we have
\begin{equation*}
\begin{split}
{\rm \rom{2}} &\leq c_8 \int_{r_k / 2}^\infty \left( (1+\gamma)V(s)^\gamma - V(r_k / 16)^\gamma \right) \frac{V'(s)}{V(s)^2} \, ds \\
&= c_8 \left( \frac{1+\gamma}{1-\gamma} V(r_k/2)^\gamma - V(r_k/16)^\gamma \right) \frac{1}{V(r_k/2)}.
\end{split}
\end{equation*}
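The last equality can be verified by computing the antiderivatives explicitly: recalling that $\gamma \in (0,1)$ and that $V$ is increasing and unbounded, we have
\begin{align*}
\int_{r_k / 2}^\infty (1+\gamma) \frac{V'(s)}{V(s)^{2-\gamma}} \, ds = \left[ -\frac{1+\gamma}{1-\gamma} V(s)^{\gamma - 1} \right]_{r_k/2}^\infty = \frac{1+\gamma}{1-\gamma} \frac{V(r_k/2)^\gamma}{V(r_k/2)}
\end{align*}
and
\begin{align*}
\int_{r_k / 2}^\infty V(r_k/16)^\gamma \frac{V'(s)}{V(s)^{2}} \, ds = V(r_k / 16)^\gamma \left[ -\frac{1}{V(s)} \right]_{r_k/2}^\infty = \frac{V(r_k/16)^\gamma}{V(r_k/2)}.
\end{align*}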
Therefore, combining the above two inequalities and using \eqref{e:V-wsc}, we get
\begin{align*}
\vert Au_k^-(x) \vert
&\leq c_9 \left( \frac{2}{1-\gamma} V(r_k / 2)^\gamma - 2V(r_k / 16)^\gamma \right) \frac{1}{V(r_k / 2)} \\
&\leq c_9 \left( \frac{2}{1-\gamma} \left( c_{10} 64^{\alpha_2} \right)^\gamma - 2 \left( c_{10}^{-1} 8^{\alpha_1} \right)^\gamma \right) \frac{V(r_{k+2}/2)^\gamma }{V(r_k / 4)} \\
&=: c_9 \varepsilon_\gamma \frac{V(r_{k+2}/2)^\gamma }{V(r_k / 4)}
\end{align*}
and hence
\begin{align*}
\Vert Au_k^+ \Vert_{L^\infty(D_{r_k / 4}^{1,1})} \leq c_{11} \left( 1+ \varepsilon_\gamma \frac{V(r_{k+2}/2)^\gamma}{V(r_k / 4)} \right).
\end{align*}
Note that $\varepsilon_\gamma \rightarrow 0$ as $\gamma \rightarrow 0$.
Now we apply Lemmas \ref{l:harnack} and \ref{l:4.9} for $u_k^+ \in {\mathcal F}(D,D_{r_k/4}^{1,1})$. Since $u_k = u_k^+$ and $d_D = \psi$ in $D_{r_k}$, we have
\begin{align*}
\sup_{D^+_{\kappa' r_k / 4}} \left( \frac{u}{c_2 V(\psi)} - m_k \right)
&\leq c_{12} \left( \inf_{D^+_{\kappa' r_k / 4}} \left( \frac{u}{c_2 V(\psi)} - m_k \right) + V(r_k / 4) + \varepsilon_\gamma V(r_{k+2} / 2)^\gamma \right) \\
&\leq c_{13} \left( \inf_{D_{r_{k+1}}} \left( \frac{u}{c_2 V(\psi)} - m_k \right) + V(r_k / 4) + \varepsilon_\gamma V(r_{k+2}/2)^\gamma \right).
\end{align*}
Repeating this procedure with the function $u_k = M_k V(d_D) - c_2^{-1}u$ instead of $u_k = c_2^{-1} u - m_k V(d_D)$, we also have
\begin{align*}
\sup_{D^+_{\kappa' r_k / 4}} \left( M_k - \frac{u}{c_2 V(\psi)} \right) \leq c_{14} \left( \inf_{D_{r_{k+1}}} \left( M_k - \frac{u}{c_2 V(\psi)} \right) + V(r_k / 4) + \varepsilon_\gamma V(r_{k+2}/2)^\gamma \right).
\end{align*}
Adding up these two inequalities, we obtain
\begin{align*}
M_k - m_k \leq c_{15} \left( \inf_{D_{r_{k+1}}} \frac{u}{c_2 V(\psi)} - \sup_{D_{r_{k+1}}} \frac{u}{c_2 V(\psi)} + M_k - m_k + V(r_k / 4) + \varepsilon_\gamma V(r_{k+2} / 2)^\gamma \right).
\end{align*}
Thus, recalling that $M_k - m_k = V(r_{k+1} / 2)^\gamma$, we get
\begin{align*}
\sup_{D_{r_{k+1}}} \frac{u}{c_2 V(\psi)} - \inf_{D_{r_{k+1}}} \frac{u}{c_2 V(\psi)}
&\leq \frac{c_{15}-1}{c_{15}} V(r_{k+1}/2)^\gamma + V(r_k / 4) + \varepsilon_\gamma V(r_{k+2}/2)^\gamma \\
&\leq \left( \frac{c_{15}-1}{c_{15}} c_{16}^\gamma + c_{17}^\gamma V(\rho_1)^{1-\gamma} + \varepsilon_\gamma \right) V(r_{k+2}/2)^\gamma.
\end{align*}
Now we choose $\gamma$ and $\rho_1$ small enough so that
\begin{align*}
\frac{c_{15}-1}{c_{15}} c_{16}^\gamma + c_{17}^\gamma V(\rho_1)^{1-\gamma} + \varepsilon_\gamma \leq 1,
\end{align*}
and it yields that
\begin{align*}
\sup_{D_{r_{k+1}}} \frac{u}{c_2V(\psi)} - \inf_{D_{r_{k+1}}} \frac{u}{c_2 V(\psi)} \leq V(r_{k+2}/2)^\gamma.
\end{align*}
Therefore, we are able to choose $m_{k+1}$ and $M_{k+1}$.
{\hfill $\Box$ \bigskip}
Finally, we prove Theorem \ref{t:2} using Lemma \ref{l:osc}. \\
\noindent{\bf Proof of Theorem \ref{t:2}}
By Remark \ref{r:3.7}, dividing both sides of \eqref{e:pde1} by $\Vert f \Vert_{L^\infty(D)}$ if necessary, we may assume that $\Vert f \Vert_{L^\infty(D)} \leq 1$ and $\Vert u \Vert_{C(D)} \leq c_1$. We first show that the following holds for any $x \in D$:
\begin{align*}
\left[ \frac{u}{V(d_D)} \right]_{C^\beta (B(x, r/2))} \leq \frac{C}{r^{\beta}V(r)}
\end{align*}
for each $0<\beta \leq \alpha_1$, where $r = d_D(x)$. We are going to use the inequality
\begin{align} \label{e:Cbeta}
\left[ \frac{u}{V(d_D)} \right]_{C^\beta} \leq \Vert u \Vert_C \left[ \frac{1}{V(d_D)} \right]_{C^\beta} + [u]_{C^\beta} \left\Vert \frac{1}{V(d_D)} \right\Vert_C.
\end{align}
From \eqref{e:CV} we know that $[u]_{C^V ( B(x, r/2))} \leq c_2$. Thus, we have $[u]_{C^\beta (B(x, r/2))} \leq c_3$ for each $0<\beta \leq \alpha_1$. Since $d_D(y) \geq r/2$ for $y \in B(x, r/2)$, we have
\begin{align*}
\left\Vert \frac{1}{V(d_D)} \right\Vert_{C (B(x, r/2))} \leq \frac{c_4}{V(r)}
\end{align*}
and
\begin{align*}
\left[ \frac{1}{V(d_D)} \right]_{C^{0,1}(B(x, r/2))} &\leq \sup_{y,z\in B(x, r/2)} \frac{\vert V(d_D(y))^{-1} - V(d_D(z))^{-1} \vert}{\vert y - z \vert} \\
&\leq \sup_{y, z \in B(x, r/2)} \frac{V'(d^*)}{V(d^*)^2} \frac{ \vert d_D(y) - d_D(z) \vert}{\vert y - z \vert} \\
&\leq c_5 \left( \sup_{y, z \in B(x, r/2)} \frac{1}{d^*V(d^*)} \right) [d]_{C^{0,1}(B(x, r/2))} \\
&\leq \frac{c_6}{rV(r)},
\end{align*}
where $d^*$ is a value in $[d_D(y), d_D(z)]$, so $d^* \geq r/2$. Thus, by interpolation, we obtain
\begin{align*}
\left[ \frac{1}{V(d_D)} \right]_{C^\beta (B(x, r/2))} &\leq c_7 \left\Vert \frac{1}{V(d_D)} \right\Vert_{C (B(x, r/2))}^{1-\beta} \left[ \frac{1}{V(d_D)} \right]_{C^{0,1}(B(x, r/2))}^{\beta} \leq \frac{c_8}{r^\beta V(r)}
\end{align*}
and it follows from \eqref{e:Cbeta} that
\begin{align} \label{e:u/V}
\left[ \frac{u}{V(d_D)} \right]_{C^\beta} \leq \frac{c_1 c_8}{r^\beta V(r)} + \frac{c_3 c_4}{V(r)} \leq \frac{c_9}{r^{\beta}V(r)}.
\end{align}
Next, let $x, y \in D$ and let us show that
\begin{align*}
\left\vert \frac{u(x)}{V(d_D(x))} - \frac{u(y)}{V(d_D(y))} \right\vert \leq C \vert x - y \vert^\alpha
\end{align*}
for some $\alpha > 0$. Without loss of generality, we may assume that $r:= d_D(x) \geq d_D(y)$. Fix any $0 < \beta \leq \alpha_1$ and let $p > 1+ \alpha_2/\beta$. If $\vert x - y \vert \leq r^p / 2$, then we have $\vert x - y \vert \leq r/2$ and $y \in B(x, r/2)$ since $r \leq 1$. Thus, by \eqref{e:u/V} we obtain
\begin{align*}
\left\vert \frac{u(x)}{V(d_D(x))} - \frac{u(y)}{V(d_D(y))} \right\vert \leq c_9 \frac{\vert x - y \vert^\beta}{r^\beta V(r)} \leq c_{10} \frac{\vert x - y \vert^{\beta - \beta / p}}{V(\vert x - y \vert^{1/p})} \leq c_{11} \vert x - y \vert^{\beta - (\beta + \alpha_2)/p}.
\end{align*}
On the other hand, if $\vert x - y \vert \geq r^p / 2$, let $x_0, y_0 \in \partial D$ be boundary points satisfying $d_D(x) = \vert x - x_0 \vert$ and $d_D(y) = \vert y - y_0 \vert$. Then by the oscillation lemma (Lemma \ref{l:osc}) we have
\begin{align} \label{e:osc1}
\left\vert \frac{u}{V(d_D)}(x) - \frac{u}{V(d_D)}(x_0) \right\vert \leq c_{12} V(d_D(x))^\gamma, \quad \left\vert \frac{u}{V(d_D)}(y) - \frac{u}{V(d_D)}(y_0) \right\vert \leq c_{12} V(d_D(y))^\gamma
\end{align}
and
\begin{align} \label{e:osc2}
\left\vert \frac{u}{V(d_D)}(x_0) - \frac{u}{V(d_D)}(y_0) \right\vert \leq c_{12} V\left( d_D(x) + \vert x - y \vert + d_D(y) \right)^\gamma.
\end{align}
Using inequalities \eqref{e:osc1} and \eqref{e:osc2} we obtain
\begin{align*}
\left\vert \frac{u}{V(d_D)}(x) - \frac{u}{V(d_D)}(y) \right\vert \leq c_{12} \left( 2V(r)^\gamma + V(2r + \vert x - y \vert)^\gamma \right) \leq c_{13} \vert x - y \vert^{\alpha_1 \gamma / p}.
\end{align*}
Therefore, taking $\alpha = \min \left\lbrace \beta - (\beta + \alpha_2)/p, \alpha_1 \gamma / p \right\rbrace$ gives the result.
{\hfill $\Box$ \bigskip}
\section*{Acknowledgement}
The research of Minhyun Kim and Jaehun Lee is supported by
the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) : NRF-2016K2A9A2A13003815.
The research of Panki Kim and Kiahm Lee is supported by the National Research Foundation of Korea(NRF) grant funded by the Korea government(MSIP)
(No. NRF-2015R1A4A1041675).
\section*{Introduction}
\emph{Entangled quantum systems are more strongly correlated than classical systems can be.}
This simplified statement proves to be remarkably rich by raising several questions, from practical ones -
``How do we measure those correlations?''
- to more subtle ones -
``What does \emph{stronger} mean here and \emph{how} much stronger are the correlations? Are there exceptions to this rule? Can we use this relation to characterize entanglement?''
While entanglement of bipartite or multipartite systems is typically detected using a set of carefully chosen measurements on all subsystems, here, we discuss a simpler and, as it turns out, yet powerful method to analyze correlations and entanglement.
There, one samples from the entire set of correlations by measuring in random directions, instead of considering a specific set of correlations.
This type of measurement is often called a \emph{random} or \emph{randomized measurement}.
Depending on the context, these can be either controlled measurements without a shared reference frame, measurements without active control over (but knowledge of) the measurement direction, or even measurements with neither control nor knowledge about the measurement direction.
Previous works showed the violation of Bell-type inequalities without the need for a shared reference frame~\cite{liang_nonclassical_2010,palsson_experimentally_2012,shadbolt_generating_2012,shadbolt_guaranteed_2012,de_rosier_multipartite_2017,fonseca_survey_2018}, but with the ability to repeat previously conducted measurements.
Random measurements in the sense of measurements without control have been used in the context of many-body systems~\cite{elben_renyi_2018,elben_statistical_2019,brydges_probing_2019,elben_many-body_2020,elben_mixed-state_2020}, for the verification of quantum devices~\cite{elben_cross-platform_2020}, for the detection of entanglement~\cite{tran_quantum_2015,tran_correlations_2016,dimic_single-copy_2018,saggio_experimental_2019}, for the prediction of fidelities, entanglement entropies and various other properties~\cite{huang_predicting_2020} as well as for the characterization and classification of genuine multipartite entanglement~\cite{saggio_experimental_2019,ketterer_characterizing_2019,ketterer_entanglement_2020,knips_multipartite_2020}.
Recently, it was shown that even bound entangled states, i.e., states so weakly entangled that their entanglement is not recognized by the PPT (positive partial transpose) criterion~\cite{peres_separability_1996,horodecki_separability_1996}, can be characterized in a reference-frame independent manner~\cite{imai_multiparticle_2020}.
In this perspective article, we will discuss a work of Ketterer et al., which recently appeared in \textit{Quantum}~\cite{ketterer_entanglement_2020}.
Before we will review their means of entanglement detection and classification, we will introduce the general concept of random measurements, provide an intuitive understanding for them and give context by discussing other methods for detection and classification of entanglement in this scenario.
Finally, we will discuss their approach for selecting local measurement directions based on spherical $t$-designs.
\section*{Scenario}
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{concept_network4d.pdf}
\caption{Concept of random measurements: a quantum state is distributed and is assumed to undergo (unknown) local unitary transformations.}
\label{fig:concept}
\end{figure}
To illustrate the scenario, we can first think of a two-qubit state whose subsystems are sent to two observers via unitary but unknown quantum channels as shown in Fig.~\ref{fig:concept}. Hence, the goal is to characterize the quantum state as well as possible despite the lack of a shared reference frame.
Interestingly, although the correlation value of the outcomes of both observers (which quantifies the probability for both results being equal or opposite) is random as it depends on the respective measurement directions, the \emph{distribution} of correlation values turns out to be a useful resource for describing the state.
If the unknown unitary transformations of the quantum channels are furthermore time-dependent, i.e., the channels are \emph{fluctuating} or \emph{noisy}, one could naively perform measurements in any fixed direction.
However, the type of fluctuations strongly influences the measurement results: are the fluctuations distributed uniformly (according to the Haar measure) or are we oversampling some and undersampling other directions?
To mitigate this problem and avoid any dependence on the type of noise of the quantum channels, the measurement directions can themselves be chosen Haar-randomly.
In this way, concatenating the quantum channel of the noisy environment, with its possibly biased noise, with a channel applying intentionally Haar-random unitary rotations removes any bias.
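This debiasing rests on the invariance of the Haar measure: if $U$ is distributed according to the Haar measure and $V$ is any fixed unitary describing the channel, then $VU$ is again Haar distributed, i.e., for every integrable function $f$,
\begin{align*}
\int f(VU) \, \mathrm{d}\mu_{\mathrm{Haar}}(U) = \int f(U) \, \mathrm{d}\mu_{\mathrm{Haar}}(U).
\end{align*}
Averaging this identity over the (possibly biased) distribution of channel realizations $V$ shows that the effective measurement directions are uniform, whatever the noise.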
\section*{Intuitive Picture}
To establish an intuitive understanding how entangled and separable quantum states behave in this scenario and how we can make use of the stronger correlations of an entangled state, let us do a short numerical experiment.
Take the pure product state $|\psi^{\mathrm{prod}}\rangle=|00\rangle$ and a Bell state, e.g., $|\psi^-\rangle=(|01\rangle-|10\rangle)/\sqrt{2}$.
Now repeatedly draw two $2\times2$ unitary matrices according to the Haar measure (see, e.g., \cite{mezzadri_how_2007} for a practical recipe).
Apply the first (second) of those unitaries to the first (second) qubit of both states and evaluate, e.g., $\langle\sigma_z\otimes\sigma_z\rangle$, i.e., the correlation value in the $zz$-direction.
A histogram of the sampled correlation values reveals the very distinct behavior of the product and the maximally entangled state, see Fig.~\ref{fig:distributions}.
The maximally entangled state results in a uniform distribution of the expectation values of the correlations, whereas small absolute values are much more probable than large ones for the product state.
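This numerical experiment can be reproduced in a few lines (a minimal sketch using NumPy; the state vectors and the sample size are illustrative choices, and the QR-based recipe for Haar-random unitaries follows \cite{mezzadri_how_2007}):

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(n):
    # QR decomposition of a complex Ginibre matrix; fixing the phases of the
    # diagonal of R yields a Haar-distributed unitary (Mezzadri's recipe)
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

sz = np.diag([1.0, -1.0])                                   # sigma_z
zz = np.kron(sz, sz)                                        # sigma_z (x) sigma_z
product = np.array([1, 0, 0, 0], dtype=complex)             # |00>
bell = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # |psi^->

def sampled_correlations(state, samples=20000):
    vals = np.empty(samples)
    for i in range(samples):
        psi = np.kron(haar_unitary(2), haar_unitary(2)) @ state
        vals[i] = np.real(psi.conj() @ zz @ psi)            # <sigma_z x sigma_z>
    return vals

e_prod = sampled_correlations(product)
e_bell = sampled_correlations(bell)
# e_bell is (up to sampling noise) uniform on [-1, 1],
# while e_prod piles up around zero
```

Plotting histograms of `e_bell` and `e_prod` reproduces the qualitative behavior described above: a flat distribution for the Bell state and a pronounced peak near zero for the product state.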
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{distributions_mod_v3.pdf}
\caption{Measuring different quantum states in many different randomly chosen directions may lead to distinct distributions of correlations.}
\label{fig:distributions}
\end{figure}
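The numerical experiment described above can be reproduced in a few lines. The following is a minimal sketch (the seed and the sample size are arbitrary choices), drawing Haar-random unitaries via the QR-based recipe of \cite{mezzadri_how_2007}:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(rng):
    """Haar-random 2x2 unitary via QR decomposition (Mezzadri's recipe)."""
    z = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))   # fix the column phases to get the Haar measure

sz = np.diag([1.0, -1.0])
zz = np.kron(sz, sz)
psi_prod = np.array([1, 0, 0, 0], dtype=complex)                 # |00>
psi_bell = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # |psi^->

def zz_samples(psi, n_samples):
    """Sample <sigma_z x sigma_z> after independent Haar-random local rotations."""
    vals = np.empty(n_samples)
    for k in range(n_samples):
        phi = np.kron(haar_unitary(rng), haar_unitary(rng)) @ psi
        vals[k] = np.real(phi.conj() @ zz @ phi)
    return vals

e_prod = zz_samples(psi_prod, 20000)
e_bell = zz_samples(psi_bell, 20000)
# Second moments approach 1/9 (product state) and 1/3 (Bell state).
print(np.mean(e_prod**2), np.mean(e_bell**2))
```

Histogramming `e_prod` and `e_bell` reproduces the logarithmic and uniform shapes of Fig.~\ref{fig:distributions}.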
For those two states, it is still easy to understand the origin of the corresponding distributions.
Consider the schematic arrow diagrams in Fig. \ref{fig:arrows}, where in a) the two red arrows represent the spins of a product state after applying arbitrary local unitary (LU) transformations each parametrized here by a single angle.
A measurement of $\sigma_z\otimes\sigma_z$ is just given by the product of the results, which are obtained by the projection onto the $\sigma_z$ directions.
Both angles, $\alpha$ and $\beta$, have to be close to $0$ or to $\pi$ to give a large absolute correlation value.
In the example, we show the case of $\alpha=75^\circ$ and $\beta=60^\circ$ leading to a $zz$-correlation of about $0.13$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{arrows_v4.pdf}
\caption{If we compare a pure two-qubit product state with a maximally entangled state, we find that correlation values for the former state depend on two angles, whereas only a single angle suffices to describe the correlations of the latter state.}
\label{fig:arrows}
\end{figure}
For the maximally entangled state in Fig.~\ref{fig:arrows} b), the situation is very different.
We illustrate the state as a superposition of both red arrows with both yellow arrows.
By applying an LU transformation of the form $U\otimes U$ on both qubits such that, say, the first qubit is aligned with its measurement direction (i.e., such that $U$ is a rotation by an angle $-\alpha$), the state does not change (up to a global phase factor).
The expectation value of $\sigma_z\otimes\sigma_z$ therefore now obviously only depends on the single relative angle $\beta-\alpha$ instead of the two angles $\alpha$ and $\beta$.
For the same angles as in the product-state case, we now obtain a $zz$-correlation of about $-0.71$.
Formally, we can obtain the distributions of correlations $E$ by integration over the Bloch spheres $S^{2}\times S^{2}$ as
\begin{align}
&p_{\mathrm{prod}}(E) = \frac{1}{(4\pi)^2} \int_{S^2} {\mathrm{d}U_1}\int_{S^2} {\mathrm{d}U_2} \delta(E-E_{\mathrm{prod}}(U_1,U_2)) \nonumber \\
&= \frac{1}{4} \int_0^{\pi} \sin(\theta_1) {\mathrm{d}\theta_1}\int_0^{\pi} \sin(\theta_2){\mathrm{d}\theta_2} \delta(E - \cos(\theta_1) \cos(\theta_2) ) \nonumber \\
&= -\frac{1}{2} \log(|E|),\\
&p_{\mathrm{Bell}}(E) = \frac{1}{(4\pi)^2} \int_{S^2} {\mathrm{d}U_1}\int_{S^2} {\mathrm{d}U_2} \delta(E-E_{\mathrm{Bell}}(U_1,U_2)) \nonumber \\
&= \frac{1}{4} \int_0^{\pi} \sin(\theta_1) {\mathrm{d}\theta_1}\int_0^{\pi} \sin(\theta_2){\mathrm{d}\theta_2} \delta(E - \cos(\theta_1-\theta_2) ) \nonumber \\
&= \frac{1}{2}.
\end{align}
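As a quick sanity check on these closed forms, one can verify numerically that both densities are normalized and yield the second moments $1/9$ and $1/3$. Both densities are even in $E$, so it suffices to integrate over $(0,1]$ and double; the grid and the small cutoff at the integrable logarithmic singularity are arbitrary choices:

```python
import numpy as np

trapz = getattr(np, "trapezoid", None) or np.trapz  # NumPy 2.x renamed trapz

# Grid on (0, 1]; the tiny lower cutoff avoids the (integrable) log
# singularity of p_prod at E = 0.
e = np.linspace(1e-6, 1.0, 200001)
p_prod = -0.5 * np.log(e)        # p_prod(E) = -log|E| / 2
p_bell = 0.5 * np.ones_like(e)   # p_Bell(E) = 1 / 2

norm_prod = 2 * trapz(p_prod, e)         # should be 1
norm_bell = 2 * trapz(p_bell, e)         # should be 1
m2_prod = 2 * trapz(e**2 * p_prod, e)    # should be 1/9
m2_bell = 2 * trapz(e**2 * p_bell, e)    # should be 1/3
print(norm_prod, norm_bell, m2_prod, m2_bell)
```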
In the histograms of Fig.~\ref{fig:distributions}, we have also shown the distributions of two other states.
The Werner state is the mixture, with parameter $p$, of the Bell state (corresponding to a uniform distribution) and the maximally mixed state (corresponding to a Dirac delta peak at $0$, since the maximally mixed state $\mathbbm{1}/4$ always yields $\langle \sigma_z\otimes\sigma_z\rangle=0$, independently of the measurement directions).
It results in a uniform distribution on $[-p,p]$ (shown above for $p=1/\sqrt{3}$), i.e., admixing white noise to a Bell state bounds the possible correlations in this sense.
The two-qubit marginal of a tripartite $W$ state, however, gives yet another distinct distribution.
Due to the nature of the measurements, two quantum states which are equivalent up to local unitary transformations will show the same distribution of correlations.
Therefore, all pure two-qubit product states result in a logarithmic distribution, while, for example, all maximally entangled two-qubit states result in a uniform distribution.
\section*{Statistical Moments}
To characterize such distributions of correlations, statistical moments have proven to be powerful.
The $t$-th moment of a probability distribution $p(x)$ is given by
\begin{equation}
m^{(t)} = \int x^t p(x) {\mathrm{d}x}.
\end{equation}
The first moment ($t=1$) is the mean value.
The next higher
\emph{centralized}
moments (i.e., the moments after centering the distribution at its mean) are the variance, the skewness, and the kurtosis.
As we are dealing with a symmetric distribution here, the mean value vanishes and, hence, the centralized moments are identical to the moments themselves.
In our scenario, where all moments are finite and the moment-generating function has a positive radius of convergence, knowing all moments allows one to uniquely determine the distribution~\cite{papoulis_probability_1984}.
Random measurements can be used to obtain the statistical moments of the distribution of expectation values.
The scheme naturally generalizes to the case of more than two parties~\cite{van_enk_measuring_2012,klockl_characterizing_2015,tran_quantum_2015} and is not limited to qubits \cite{tran_quantum_2015,klockl_characterizing_2015,tran_correlations_2016}.
By considering the measurement results on a subset of parties, also distributions of correlations of marginal states can be retrieved.
As the second moment is the first non-trivial one here, we will from now on denote it just as $m:= m^{(2)}$ and specify the respective set of parties it pertains to using a subscript.
The combined information of the second moments of the
\emph{full}
distribution ($m_{1,2,\dots,n}$) involving all $n$ observers together with those of all marginal states ($m_{1}$, $m_{1,2}$, $m_{1,2,3}$, $\dots$, $m_{(n-1),n}$, $\dots$, $m_{n}$) allows us to determine the purity of an $n$-qubit state~\cite{van_enk_measuring_2012,lawson_reliable_2014,klockl_characterizing_2015,knips_multipartite_2020} as
\begin{align}
\operatorname{tr}{\varrho^2}=\frac{1}{2^n} \sum_{\mathcal{A} \in \mathbbm{P}(\mathcal{S})}{ 3^{|\mathcal{A}|} \, m_\mathcal{A}},
\end{align}
where $\mathbbm{P}(\mathcal{S})$ is the set of all subsets of $\mathcal{S}=\{1,\ldots,n\}$ and $|\mathcal{A}|$ denotes the cardinality (number of elements) of the set $\mathcal{A}$.
Here, $m_\mathcal{A}$ is the second moment of the distribution involving the observers given by $\mathcal{A}$.
In other words, the purity is given by the weighted sum of second moments of the full distribution and all marginal distributions.
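As an illustration of this purity formula, the following sketch computes the second moments $m_\mathcal{A}$ directly from the Pauli correlations of a density matrix (using that the Haar average of $E^2$ equals the sum of squared correlations on $\mathcal{A}$ divided by $3^{|\mathcal{A}|}$) and recovers $\operatorname{tr}\varrho^2$ for a two-qubit Werner state; the state and mixing parameter are arbitrary choices:

```python
import numpy as np
from itertools import combinations, product

si = np.eye(2, dtype=complex)
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),   # sigma_x
         np.array([[0, -1j], [1j, 0]]),               # sigma_y
         np.diag([1.0 + 0j, -1.0])]                   # sigma_z

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def second_moment(rho, subset, n):
    """m_A: Haar average of E^2 over the parties in `subset`.

    Equals the sum of squared Pauli correlations on those parties over 3^|A|.
    """
    total = 0.0
    for idx in product(range(3), repeat=len(subset)):
        ops = [si] * n
        for pos, k in zip(subset, idx):
            ops[pos] = pauli[k]
        total += np.real(np.trace(rho @ kron_all(ops))) ** 2
    return total / 3 ** len(subset)

def purity_from_moments(rho, n):
    """Weighted sum of second moments over all subsets of parties."""
    tot = 1.0  # the empty subset contributes 3^0 * m_{} = 1
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            tot += 3 ** r * second_moment(rho, subset, n)
    return tot / 2 ** n

# Two-qubit Werner state with mixing parameter p.
psi_m = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
p = 1 / np.sqrt(3)
rho = p * np.outer(psi_m, psi_m.conj()) + (1 - p) * np.eye(4) / 4
print(purity_from_moments(rho, 2), np.real(np.trace(rho @ rho)))
```

For the Werner state one finds $\operatorname{tr}\varrho^2=(1+3p^2)/4$, i.e., $1/2$ for $p=1/\sqrt{3}$, which the weighted sum of moments reproduces exactly.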
In the remainder of this perspective, we will discuss how to detect and to characterize entanglement based on statistical moments.
\section*{Detecting Entanglement}
A remarkably simple method to detect entanglement was presented in~\cite{tran_quantum_2015}.
For any pure, fully separable $n$-qubit state $|\psi\rangle=|\psi_1\rangle\otimes|\psi_2\rangle\otimes\dots\otimes|\psi_n\rangle$, the length of correlations (the sum of the squared correlations over all basis directions) is $1$.
If
\emph{each}
observer $j$ aligns their measurement apparatus along $|\psi_j\rangle$, they jointly observe a correlation of $1$, whereas any orthogonal measurement direction will result in the loss of correlated outcomes.
This relation holds independently of the number of parties and can be adapted for arbitrary dimensions.
Choosing $M$ measurement directions completely at random with $K$ repetitions each allows one to estimate the length of correlations.
If the estimate significantly exceeds the threshold of a product state, one can, after proper statistical analysis, conclude that the state must carry \emph{some} entanglement.
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.95\textwidth]{gme_all_8b.pdf}
\caption{For different four-qubit states, one can evaluate to what extent the distribution of correlations can be described by the product of moments of its subsystems.
If one finds the quantity ${\mathcal M}_{4}$ above its purity-dependent bound, the state is proven to be genuinely fourpartite entangled as shown for the four-qubit GHZ and the four-qubit linear Cluster state.
For the tri- and biseparable states, no genuine fourpartite entanglement is detected.
However, the marginals of the first two qubits (for both states) as well as of the last two qubits (for the biseparable state) give values of ${\mathcal M}_{2}$ above the respective bound, indicating entanglement within those marginals.}
\label{fig:gme}
\end{figure*}
Subsequent work~\cite{tran_correlations_2016} studied the length of correlations of various genuinely multipartite entangled states and extended the results of~\cite{tran_quantum_2015} to mixed states.
There, entanglement detection based on a single measurement setting is derived explicitly.
However, it was found that the length of correlations (alone) is not an entanglement measure as it can increase under local operations and classical communication (LOCC).
Nevertheless, random measurements can be used to witness genuine multipartite entanglement, i.e., entanglement truly involving all parties, as we will see below.
Moreover, classes of states inequivalent under stochastic local operations and classical communication (SLOCC)~\cite{dur_three_2000,acin_classification_2001} can be distinguished in this way, as we will discuss.
Yet another approach for entanglement detection is used in \cite{dimic_single-copy_2018,saggio_experimental_2019}.
There, a measurement direction is randomly drawn from a specific set.
In addition, a framework for the probabilistic use of entanglement witnesses is provided.
With this, entanglement can in some cases be detected with a single copy with high confidence~\cite{dimic_single-copy_2018} or different classes of entanglement can be discriminated using a few copies of the state in a more general approach~\cite{saggio_experimental_2019}.
Also note that it is common that entanglement criteria are sufficient, but not necessary.
For example, any method, whether based on random measurements or on fully controlled ones, that is only using \emph{full} correlations, i.e., correlations between all $n$ parties without inspecting correlations between subsets of parties, will miss some entangled states: There are genuinely $n$-partite entangled states without any correlations between all $n$ parties~\cite{schwemmer_genuine_2015,tran_genuine_2017,klobus_higher_2019}, which therefore will result in a Dirac-delta peak for the full distribution and are on that level indistinguishable from white noise.
\section*{Detecting Genuine Multipartite Entanglement}
\subsection*{Using Second Moments of Marginals}
In \cite{knips_multipartite_2020}, the second moments of the distributions are used for the detection of genuine multipartite entanglement (entanglement which truly involves all parties and cannot be broken down into pure biseparable states or even mixtures of states biseparable with respect to different bipartitions).
The key ingredient of that strategy is to relate the second moment of the distribution of correlations to the second moments of marginal distributions and test that quantifier with a purity-dependent threshold.
For a pure product state $|\psi_{\mathcal S}\rangle=|\psi_{{\mathcal A}_1}\rangle\otimes|\psi_{{\mathcal A}_2}\rangle$, the second moment of the distribution of correlations when considering the set of parties $\mathcal S$ factorizes into the second moments of the corresponding marginal distributions, i.e., $m_{{\mathcal S}}=m_{{\mathcal A}_1}m_{{\mathcal A}_2}$.
For a pure state, genuine multipartite entanglement can therefore be detected by verifying $m_{{\mathcal S}}>m_{{\mathcal A}_1}m_{{\mathcal A}_2}$ for every possible bipartition ${\mathcal A}_1|{\mathcal A}_2$ such that ${\mathcal S}={\mathcal A}_1\cup{\mathcal A}_2$.
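For pure states, this factorization test is easy to check numerically. The following sketch computes the second moments from the Pauli correlations of a two-qubit pure state (the example states are arbitrary): for $|00\rangle$ one finds $m_{12}=m_1 m_2=1/9$, while for a Bell state $m_{12}=1/3$ strictly exceeds $m_1 m_2=0$, signaling entanglement across the bipartition:

```python
import numpy as np

pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.diag([1.0 + 0j, -1.0])]
si = np.eye(2, dtype=complex)

def moments(psi):
    """Second moments (m_12, m_1, m_2) of a pure two-qubit state."""
    rho = np.outer(psi, psi.conj())
    T = [[np.real(np.trace(rho @ np.kron(a, b))) for b in pauli] for a in pauli]
    a1 = [np.real(np.trace(rho @ np.kron(a, si))) for a in pauli]
    a2 = [np.real(np.trace(rho @ np.kron(si, b))) for b in pauli]
    return (np.sum(np.square(T)) / 9,
            np.sum(np.square(a1)) / 3,
            np.sum(np.square(a2)) / 3)

m12_p, m1_p, m2_p = moments(np.array([1, 0, 0, 0], dtype=complex))   # |00>
m12_b, m1_b, m2_b = moments(np.array([0, 1, -1, 0]) / np.sqrt(2))    # |psi^->
print(m12_p, m1_p * m2_p)   # product state: both equal 1/9
print(m12_b, m1_b * m2_b)   # Bell state: 1/3 versus 0
```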
Unfortunately, for non-pure biseparable states, the second moment of the full distribution might be larger than the product of the marginals' second moments.
To mitigate this problem, purity-dependent bounds for detecting genuine multipartite entanglement can be used.
For an $n$-qubit state we can consider
\begin{equation}
{\mathcal M}_{n} := m_{{\mathcal S}} - \frac{1}{2} \sum_{{\mathcal A} \in \mathbbm{P}({\mathcal S}) \setminus \{{\mathcal S}, \varnothing\}}{ m_{\mathcal A} m_{{\mathcal S} \setminus {\mathcal A}}}
\end{equation}
to capture by how much the full distribution's second moment can be expressed in terms of the marginals' ones.
For example, it has been found~\cite{knips_multipartite_2020} that in the case of $n=4$ qubits, all biseparable states fulfill the relation
\begin{eqnarray}
{\mathcal M}_{4}
\le \tfrac{8}{81}(1-\operatorname{tr}\varrho^2).
\label{eq:M4}
\end{eqnarray}
A violation of the latter inequality therefore indicates genuine fourpartite entanglement.
In Fig.~\ref{fig:gme}, the values of ${\mathcal M}_{4}$ based on the distribution of fourpartite correlations for experimental photonic states (four qubits encoded in polarization and path degrees of freedom of two photons from type-I spontaneous parametric down-conversion, see~\cite{knips_multipartite_2016} for details on the setup) are compared with the purity-dependent threshold given by Eq.~(\ref{eq:M4})~\cite{knips_multipartite_2020}.
As the red (four-qubit GHZ state) and blue (linear four-qubit Cluster state) data points are above the threshold, those states are shown to be genuinely fourpartite entangled.
The distributions of a prepared triseparable state [$\propto \left(|00\rangle+|11\rangle\right)\otimes|0\rangle\otimes|0\rangle$] and of a biseparable state [$\propto \left(|00\rangle+|11\rangle\right)\otimes\left(\sin\varphi\,|00\rangle+\cos\varphi\,|11\rangle\right)$ with $\varphi\approx0.2$], however, contain only one and two bipartite entangled marginals, respectively, as one finds by considering ${\mathcal M}_{3}$ for all tripartite and ${\mathcal M}_{2}$ for all bipartite marginals of those states.
Thus, this approach not only allows one to detect genuine multipartite entanglement for mixed states, but also provides more general insights into the entanglement structure.
\subsection*{Using Higher Order Moments}
The combination of moments of various orders may in general capture more information about a distribution of correlations and, hence, about the underlying quantum state than restricting the analysis to second-order moments only.
In two recent works~\cite{ketterer_characterizing_2019,ketterer_entanglement_2020}, the latter of which was published in \emph{Quantum}~\cite{ketterer_entanglement_2020}, Ketterer et al.\ use a combination of the second and the fourth moment, denoted there by $\mathcal{R}^{(2)}$ and $\mathcal{R}^{(4)}$ (with $\mathcal{R}^{(t)}\equiv m^{(t)}$), respectively.
Obviously, those moments are not entirely independent of each other.
For example, a vanishing second moment $\mathcal{R}^{(2)}$ indicates a Dirac delta distribution, which in turn requires $\mathcal{R}^{(4)}$ to vanish.
They discuss possible combinations of those two moments for two, three and four qubits.
In a combined analytical and numerical study, the authors identify regions in an $\mathcal{R}^{(2)}$-$\mathcal{R}^{(4)}$ plane which allow them to directly indicate that a state is, e.g., biseparable or that it cannot belong to a specific SLOCC class.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{3qubit_GME.pdf}
\caption{Three-qubit states are being analyzed based on their second and fourth moments $\mathcal{R}^{(2)}$ and $\mathcal{R}^{(4)}$, respectively. The blue-shaded region marks biseparable states. Therefore, the green-dashed line as described by Eq.~(\ref{eq:threequbit_bisep}) can be used to detect genuine tripartite entanglement. Figure from~\cite{ketterer_entanglement_2020}.}
\label{fig:3qubitgme}
\end{figure}
In Fig.~\ref{fig:3qubitgme}, three-qubit states are sampled and represented in the $\mathcal{R}^{(2)}$-$\mathcal{R}^{(4)}$ plane.
The blue-shaded area is outlined by different LU-inequivalent types of biseparable states.
Ketterer et al. propose the inequality
\begin{equation}
\mathcal{R}^{(4)} \geq \frac{1}{425}\left[972 \left(\mathcal{R}^{(2)}\right)^2+90\mathcal{R}^{(2)}-5\right],\label{eq:threequbit_bisep}
\end{equation}
which is shown as the green dashed line, as a demarcation between biseparable and genuinely multipartite entangled states.
Therefore, states for which the fourth moment $\mathcal{R}^{(4)}$ lies below this threshold are not biseparable and are hence shown to be genuinely multipartite entangled.
\section*{Witnessing SLOCC classes}
Furthermore, in \cite{ketterer_entanglement_2020} the authors discuss moments of distributions of correlations for witnessing SLOCC classes, which allows one to decide whether a state is reversibly convertible into, say, a $W$ state.
Figure \ref{fig:slocc} shows sampled four-qubit states in the $\mathcal{R}^{(2)}$-$\mathcal{R}^{(4)}$ plane.
The region with the solid border contains states of the $\mathcal{W}^{(4)}$ class, i.e., the SLOCC class of $W$ states, whereas the region surrounded by the dashed line encompasses its convex hull $\operatorname{Conv}(\mathcal{W}^{(4)})$.
States whose moments lie outside of the regions enclosed by the solid and dashed lines are shown not to belong to the SLOCC classes $\mathcal{W}^{(4)}$ and $\operatorname{Conv}(\mathcal{W}^{(4)})$, respectively.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{SLOCC_3.pdf}
\caption{The second moment $\mathcal{R}^{(2)}$ and the fourth moment $\mathcal{R}^{(4)}$ allow to discriminate four-qubit states as shown here. With this, also different entanglement classes can be distinguished. The solid black line encloses the SLOCC class of $W$ states, whereas the dashed line surrounds the convex hull of the $W$ states. Figure from~\cite{ketterer_entanglement_2020}.}
\label{fig:slocc}
\end{figure}
We already see from Fig.~\ref{fig:slocc} that $\mathcal{R}^{(2)}$ might be very helpful for witnessing that a state is not a member of the mixed $W$ class $\operatorname{Conv}(\mathcal{W}^{(4)})$.
In contrast, additionally considering $\mathcal{R}^{(4)}$ does not significantly improve the detection ability.
More generally, the authors of~\cite{ketterer_entanglement_2020} derive a witness for $\operatorname{Conv}(\mathcal{W}^{(n)})$ for an arbitrary number $n$ of qubits.
If the second moment $\mathcal{R}^{(2)}$ of a distribution of correlations is larger than
\begin{equation}
\chi^{(n)}:=\frac{5-\frac{4}{n}}{3^n},
\end{equation}
the $n$-qubit state is not a member of the mixed $W$ class $\operatorname{Conv}(\mathcal{W}^{(n)})$~\cite{ketterer_entanglement_2020}.
For $4$ qubits, the threshold is $\chi^{(4)}=4/81\approx0.049$.
Hence, all states with $\mathcal{R}^{(2)}>\chi^{(4)}=4/81$, i.e., on the right-hand side of $|\mathrm{W}_4\rangle$ and $|\phi\rangle\otimes|\mathrm{GHZ}_3\rangle$ in Fig.~\ref{fig:slocc}, are shown not to belong to the mixed $W$ class.
Please note that this is not a statement about genuine multipartite entanglement.
For example, both the biseparable state $|\mathrm{Bell}\rangle\otimes|\mathrm{Bell}\rangle$ and the genuinely fourpartite entangled state $|\mathrm{GHZ}_4\rangle$ are outside of this region as both are not members of $\operatorname{Conv}(\mathcal{W}^{(4)})$.
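This can be checked directly. Using that the second moment of the full $n$-partite distribution equals the sum of squared full-correlation tensor elements divided by $3^n$, a short computation (a sketch; the test states follow the figure) gives $\mathcal{R}^{(2)}=1/9$ for both $|\mathrm{GHZ}_4\rangle$ and $|\mathrm{Bell}\rangle\otimes|\mathrm{Bell}\rangle$, above the threshold $\chi^{(4)}=4/81$:

```python
import numpy as np
from itertools import product

pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.diag([1.0 + 0j, -1.0])]

def r2_full(psi, n):
    """Second moment of the full n-partite correlation distribution."""
    rho = np.outer(psi, psi.conj())
    total = 0.0
    for idx in product(range(3), repeat=n):
        op = np.array([[1.0 + 0j]])
        for k in idx:
            op = np.kron(op, pauli[k])
        total += np.real(np.trace(rho @ op)) ** 2
    return total / 3 ** n

ghz4 = np.zeros(16, dtype=complex)
ghz4[0] = ghz4[-1] = 1 / np.sqrt(2)                      # |GHZ_4>
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
bell_bell = np.kron(bell, bell)                          # |Bell> x |Bell>

chi4 = (5 - 4 / 4) / 3 ** 4   # threshold 4/81 for n = 4
print(r2_full(ghz4, 4), r2_full(bell_bell, 4), chi4)
```

Both states give $1/9 > 4/81$ and are thus certified to lie outside $\operatorname{Conv}(\mathcal{W}^{(4)})$, even though only one of them is genuinely fourpartite entangled.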
\section*{Spherical Designs}
Ketterer et al.~\cite{ketterer_characterizing_2019,ketterer_entanglement_2020} not only use higher-order moments to detect and characterize entanglement, but they also follow a different approach for selecting measurement directions.
Up to this point of the discussion it has been assumed that the distributions of correlations are obtained by sampling over a large set of random directions, where the distribution of directions should follow a Haar random distribution.
Whereas in, e.g., \cite{knips_multipartite_2020} the sampling was done over a large set of Haar randomly distributed measurement directions to describe the distributions of correlations, the approach of Ketterer et al. allows one to fix the number of measurement directions if a specific moment is to be calculated.
This significantly reduces the measurement effort at the cost of requiring active control over the local measurement directions.
Using the notation of Ref.~\cite{ketterer_entanglement_2020}, the $t$-th moment $\mathcal{R}^{(t)}$ of the distribution of correlations of an $n$-qubit state can be obtained from
\begin{align}
\mathcal{R}^{(t)} &= \!\!\int\limits_{\mathcal{U}(2)} \!\!\!\mathrm{d}\eta (U_1) \cdots\!\! \int\limits_{\mathcal{U}(2)}\!\!\! \mathrm{d}\eta(U_n)\langle U_1\sigma_z U_1^\dagger\otimes \dots \otimes U_n\sigma_z U_n^\dagger \rangle^t \nonumber \\
&= \frac{1}{\left(4\pi\right)^n} \int_{S^2} \mathrm{d}\vec{u}_1 \cdots \int_{S^2} \mathrm{d}\vec{u}_n E(\vec{u}_1,\dots,\vec{u}_n)^{t}, \label{eq:tth_moment_integration}
\end{align}
where $E(\vec{u}_1,\dots,\vec{u}_n):=\langle \sigma_{\vec{u}_1}\otimes\dots\otimes\sigma_{\vec{u}_n} \rangle$ denotes the correlation along specific local measurement directions $\vec{u}_i$ with $\sigma_{\vec{u}_i}=\vec{u}_i\cdot\vec{\sigma}$, where $\vec{\sigma}=\left(\sigma_x,\sigma_y,\sigma_z\right)^{T}$ is a vector of the Pauli matrices $\sigma_x$, $\sigma_y$ and $\sigma_z$, while $\eta$ and $\mathrm{d}\vec{u}_i=\sin\theta_i\mathrm{d}\theta_i\mathrm{d}\varphi_i$ are the Haar measure on the unitary group $\mathcal{U}(2)$ and the uniform measure on the Bloch sphere $S^2$, respectively.
To determine the average of a homogeneous polynomial $P_{t^\prime}:S^2\rightarrow\mathbb{R}$ of order $t^\prime$ over the Bloch sphere $S^2$, it is sufficient to sample a finite set of points as shown in~\cite{ketterer_characterizing_2019,ketterer_entanglement_2020}.
For that, they use a so-called spherical $t$-design in dimension three, defined as a finite set of points $\{\vec{u}_k|k=1,\dots,L^{(t)}\}\subset S^2$ such that
\begin{equation}
\frac{1}{4\pi}\int_{S^2} \mathrm{d}\vec{u}\, P_{t^\prime}(\vec{u}) = \frac{1}{L^{(t)}}\sum_{k=1}^{L^{(t)}}P_{t^\prime}(\vec{u}_k)
\end{equation}
holds for all homogeneous polynomials of order $t^\prime$ with $t^\prime\leq t$.
Hence, for the respective spherical $t$-design, $L^{(t)}$ determines the number of measurement directions to consider.
Using this framework, Ketterer et al. evaluate the $t$-th moment of the correlations of an $n$-qubit state as
\begin{equation}
\mathcal{R}^{(t)} = \frac{1}{\left(L^{(t)}\right)^n}\sum_{k_1,\dots,k_n=1}^{L^{(t)}} \langle \sigma_{\vec{u}_{k_1}}\otimes\dots\otimes\sigma_{\vec{u}_{k_n}} \rangle^{t},
\end{equation}
instead of using the integration as in Eq.~(\ref{eq:tth_moment_integration}).
Although they also show a similar derivation for qudit states employing
\emph{unitary}
$t$-designs, we here restrict our discussion to the qubit case using
\emph{spherical}
$t$-designs.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{spherical_designs.png}
\caption{Spherical $t$-designs as the shown $3$-design (a) and $5$-design (b) can be used to find measurement directions when calculating moments of a specific order. Figure from~\cite{ketterer_entanglement_2020}.}
\label{fig:sphericaldesign}
\end{figure}
In Fig.~\ref{fig:sphericaldesign}, the $L^{(3)}=6$ directions $\{\pm \vec{e}_i|i=x,y,z\}$ of a spherical $3$-design as well as the $L^{(5)}=12$ directions of a $5$-design are shown.
If one is only interested in the second moment (a polynomial of order $2$), the $L^{(3)}=6$ local measurement directions of the $3$-design are sufficient.
Moreover, as even-order moments are invariant under a parity transformation of the measurement direction, skipping $\{-\vec{e}_i|i=x,y,z\}$ does no harm.
For obtaining the fourth moment, $L^{(5)}/2=6$ local measurement directions suffice.
With this method, the selection of measurement directions from the pseudo-random process of a spherical design allows one to mimic uniform averages over the sphere~\cite{ketterer_entanglement_2020}.
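The equivalence is easy to verify numerically for the second moment: averaging $E^2$ over the three design axes per qubit reproduces the Haar average estimated by random sampling. A small sketch for a Bell state (seed and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
axes = [sx, sy, sz]   # half the spherical 3-design; signs drop out for even moments

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(bell, bell.conj())

# Exact second moment from the design: average E^2 over the 3 x 3 axis pairs.
r2_design = np.mean([np.real(np.trace(rho @ np.kron(a, b))) ** 2
                     for a in axes for b in axes])

def sigma_u(rng):
    """Pauli operator along a uniformly random Bloch direction."""
    v = rng.standard_normal(3)
    v /= np.linalg.norm(v)
    return v[0] * sx + v[1] * sy + v[2] * sz

# Haar-random Monte Carlo estimate for comparison.
r2_haar = np.mean([np.real(np.trace(rho @ np.kron(sigma_u(rng), sigma_u(rng)))) ** 2
                   for _ in range(20000)])
print(r2_design, r2_haar)   # both close to 1/3
```

The design value is exact with only $9$ settings, whereas the Haar estimate fluctuates around it, illustrating the reduction in measurement effort.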
\section*{Conclusion and Future Research}
In this perspective, we have discussed what we can learn from random measurements despite the lack of shared or even local reference frames.
Distributions of correlations can reveal entanglement and exclude any type of separability.
Although considering only the second moment of these distributions is not sufficient for constructing an entanglement measure, it can be used for witnessing SLOCC classes as recently shown in \cite{ketterer_characterizing_2019,ketterer_entanglement_2020}.
As random measurements are inherently insensitive to local unitary transformations, they naturally direct one's attention to LU-invariant properties.
Random measurements turn out to be a powerful tool for entanglement detection and classification.
Here, we did not elaborate on statistical errors involved in those measurements, which require some further research.
Also, we focused on quantum
\emph{states}.
Of course, random measurements are also of interest for characterizing quantum
\emph{processes}.
Another open question is the tomographic reconstruction using random measurements: Which states can be discriminated and what information will stay hidden?
Also, it is worthwhile to discuss how and to what degree random measurements can be employed for applications such as quantum metrology.
\section*{Acknowledgments}
I am grateful to Tomasz Paterek, Jasmin D. A. Meinecke, Karen Wintersperger, and Nicolai Friis for their helpful comments and suggestions for improvements of the manuscript.
\section{Introduction}\label{sec:Intro}
This chapter will discuss settings in which the dependent variable we seek to
model takes on a range of values that are restricted, broadly defined as
\textbf{limited dependent variable models}. Within the class of limited
dependent variables, a special case arises when the outcome is no longer a
continuous measure but a discrete variable. Such data often arise as
individuals making a choice from a set of potential discrete outcomes, thus
earning the name \textbf{discrete choice} models. The most common case of
such models occurs when $y$ is a binary response and takes on the values zero
and one, indicating whether or not the event has occurred, giving rise to
\textbf{binary models}\footnote{Binary models are special cases of both
ordinal and multinomial models with more than two categories.}. Consider for
example, participation in the labour force, whether or not an individual will
buy a vehicle, or whether or not a country is part of a free trade agreement. In
other cases, $y$ may take on multiple (more than two) discrete values, with
no natural ordering. Consider for example, choice of brand of toothpaste or
mode of transportation. These are referred to as \textbf{multinomial models}.
We refer the readers to \citet{Train-2009} for a detailed discussion on
multinomial models, their estimation and inference. Further, there could be
situations where $y$ takes on multiple (more than two) discrete values that
are inherently ordered or ranked. For example, scores attached to opinion on
surveys (oppose, neutral, support), classification of educational attainment,
or ratings on bonds. These give rise to \textbf{ordinal models} or
\textbf{ordered choice models}. Here, we discuss four discrete choice models
-- ordinal probit, ordinal logit, binary probit and binary logit models.
Discrete choice models have their foundations in the theory of choice in
economics, which itself is inherently related with the random utility model
\citep{Luce-1959, Luce-Suppes-1965, Marschak-1960}. The random utility
framework involves a utility maximizing rational individual whose objective
is to choose an alternative from a set of \textit{mutually exclusive} and
completely \textit{exhaustive} alternatives. The utilities attached to each
alternative are completely known to the decision maker and the agent chooses
the same alternative in replications of the experiment. However, to a
researcher the utilities are unknown, since s/he only observes a vector of
characteristics (such as age, gender, income, etc.) of the decision maker;
the part of utility explained by these observables is referred to as
\textit{representative utility}. This forms the systematic
component. The unobserved factors form the stochastic part. The stochastic
component is assigned a distribution, typically continuous, to make
probabilistic statements about the observed choices conditional on the
representative utility. The distributional specification implies that there
exists a continuous latent random variable (or a continuous latent utility)
that underlies the discrete outcomes.
When the set of alternatives or outcomes are inherently ordered or ranked,
individual choice of a particular alternative can be associated as the latent
variable crossing a particular threshold or cut-point. This latent variable
threshold-crossing formulation of the ordered choices elegantly connects
individual choice behavior and ordinal data models serve as a useful tool in
the estimation process. While the theoretical support relates to choice and
random utility theory, the econometric techniques are completely general and
applicable when the ordering conditions of the data are met.
To understand the application of discrete choice models, we consider the case
of legalization of marijuana in the United States. The debate around
legalization of marijuana has been an important yet controversial policy
issue. Marijuana has been proved to be effective in treatment of several
diseases and a wealth of new scientific understanding regarding its
medicinal benefits are documented in \citet{Berman-etal-2004,
Wilsey-etal-2013, Abrams-etal-2003, Abrams-etal-2007, Ellis-etal-2009,
Johnson-etal-2010, McAllister-etal-2011, Guzman-2003,Duran-etal-2010}.
\footnote{The reader is directed to the website www.procon.org for a list of
60 peer-reviewed articles
(\url{http://medicalmarijuana.procon.org/view.resource.php?resourceID=000884})
on the effect of marijuana in treatment of the above mentioned diseases.}
However, despite the medicinal benefits, smoking or consumption of marijuana
is not completely benign and may cause harmful effects, especially associated
with respiratory illnesses and cognitive development \citep{Kalant-2004,
Polen-etal-1993, Meier-etal-2012}.\footnote{A list of peer reviewed articles
on the public health consequences of marijuana can be obtained from the
Office of National Drug Control Policy. Refer to
\url{https://www.whitehouse.gov/ondcp/marijuana}.} As a result, several surveys
have been conducted to assess public opinion on the matter. For the purpose
of this chapter, we specifically utilize poll data collected by the Pew
Research Center for the periods 2013 and 2014 to demonstrate the application
of binary models and ordinal models. While there is an increasing trend in
favor of legalizing marijuana based on public opinion, it is noteworthy to
study these specific time periods given that the year 2013 marked the first
time in more than four decades that a majority of Americans favored legalizing
the use of marijuana in the United States \citep{Dimock-etal-2013}.
\section{Ordinal Models}\label{sec:Model1}
In \emph{ordinal} regression models, the outcomes of a dependent variable are
categorical and follow a natural ordering. Each outcome or category is
assigned a score (value or number) with the characteristic that the scores
have an ordinal meaning but hold no cardinal interpretation. Therefore, the
difference between categories is not directly comparable. For example, the
study presented in Section~\ref{sec:Study2} codifies the public response to
marijuana legalization as follows: 1 for `oppose legalization', 2 for `legal
only for medicinal use', and 3 for `legal for personal use'. Here, a score of
2 implies more support for legalization as compared to 1, but we cannot
interpret a score of 2 as twice the support compared to a score of 1.
Similarly, the difference in support between 2 and 1 is not the same as that
between 3 and 2.
While the ordinal regression model has a dependent variable that takes
discrete values, the model can be conveniently expressed in terms of a
continuous latent variable $z_{i}$\footnote{The continuous latent construct
may represent underlying latent utility, some kind of propensity, or strength
of preference \citep{Greene-Hensher-2010}.} as follows:
\begin{equation}
z_{i} = x'_{i} \beta + \epsilon_{i}, \hspace{0.75in} \forall \; i=1, \cdots, n,
\label{eq:model}
\end{equation}
where $x_{i}$ is a $k \times 1$ vector of covariates, $\beta$ is a $k \times
1$ vector of unknown parameters and $n$ denotes the number of observations.
Like most applications, the stochastic term $\epsilon_{i}$ is assumed to be
\emph{independently and identically distributed} (\emph{iid}) as a standard
normal distribution, i.e., $\epsilon_{i} \mathop\sim\limits^{iid} N(0,1)$ for
$i=1,\ldots,n$; which gives rise to an \emph{ordinal probit model} (also
known as \emph{ordered probit model}). The latent variable $z_{i}$ is related
to the observed discrete response $y_{i}$ as follows:
\begin{equation}
\gamma_{j-1} < z_{i} \leq \gamma_{j} \; \Rightarrow \;
y_{i} = j, \hspace{0.75in}
\forall \; i=1,\cdots, n; \; j=1,\cdots, J,
\label{eq:cutpoints}
\end{equation}
where $-\infty = \gamma_{0} < \gamma_{1} \cdots < \gamma_{J-1} < \gamma_{J} =
\infty$ are the cut-points (or thresholds) and $y_{i}$ is assumed to have
$J$ categories or outcomes. A visual representation of the outcome
probabilities (for the case of marijuana legalization) and the cut-points is
presented in Figure~\ref{fig:ordinalpdf3}. One may observe from
Figure~\ref{fig:ordinalpdf3} that different combinations of $(\beta, \gamma)$
can produce the same outcome probabilities, giving rise to a parameter
identification problem. We therefore need to anchor the location and scale of
the distribution to identify the model parameters. The former is achieved by
setting $\gamma_{1}=0$ and the latter by assuming
$\textrm{var}(\epsilon_{i})=1$. Other identification schemes are possible and
the reader is referred to \citet{Jeliazkov-etal-2008} and
\citet{Jeliazkov-Rahman-2012} for details.
\begin{figure}[!t]
\centerline{
\mbox{\includegraphics[width=7.50in, height=2.25in]{fig-ordinalpdf3}}
}
\caption{The two cut-points ($\gamma_{1},\gamma_{2}$) divide the
area under the curve into three parts, with each part representing the
probability of a response falling in the three response categories. The three
probabilities P$(y_{i}=1)$, P$(y_{i}=2)$ and P$(y_{i}=3)$ correspond to
`oppose legalization', `legal only for medical use' and `legal for personal
use', respectively. Note that for each individual $i$ the mean $x'_{i}\beta$
will be different and so will be the category probabilities.}
\label{fig:ordinalpdf3}
\end{figure}
Given a data vector $y$ = $(y_{1}, \cdots, y_{n})'$, the likelihood for the
ordinal probit model expressed as a function of unknown parameters $(\beta,
\gamma)$ is the following,
\begin{equation}
\begin{split}
\ell(\beta, \gamma; y)
& = \prod_{i=1}^{n} \prod_{j=1}^{J} \Pr(y_{i} = j | \beta,
\gamma)^{ I(y_{i} = j)}, \\
& = \prod_{i=1}^{n} \prod_{j=1}^{J}
\bigg[ \Phi(\gamma_{j} - x'_{i}\beta) -
\Phi(\gamma_{j-1} - x'_{i}\beta) \bigg]^{ I(y_{i} = j)},
\label{eq:likelihoodOP}
\end{split}
\end{equation}
where $\Phi(\cdot)$ denotes the \emph{cumulative distribution function (cdf)}
of a standard normal distribution and $I(y_{i}=j)$ is an indicator function,
which equals 1 if the condition within parentheses is true and 0 otherwise.
The parameter estimates for ($\beta, \gamma$) are obtained by maximizing the
logarithm of the likelihood (equation~\ref{eq:likelihoodOP}) using numerical
techniques such as the Newton-Raphson method or BHHH procedure
\citep{Train-2009}. The principle behind maximizing the likelihood -- known
as maximum likelihood (ML) estimation -- is to obtain those parameter values
that are most probable to have produced the data under the assumed
statistical model. Note that it is convenient to work with the logarithm of
the likelihood (log-likelihood) since the logarithm is a monotonic function,
so the maximum of the log-likelihood and that of the likelihood occur at the
same parameter values. Once the parameter estimates are available, they may
be used to calculate the covariate effects, make predictions or assess model
fit. Interested readers may look into \citet{Greene-Hensher-2010} or
\citet[][Chap.~4]{Johnson-Albert-2000} for a detailed review of ordinal data
modeling.
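To make the estimation procedure concrete, the following sketch (in Python,
assuming NumPy and SciPy are available; the data are simulated and all
variable names are illustrative) maximizes the ordinal probit log-likelihood
of equation~\eqref{eq:likelihoodOP} for $J=3$ categories, with the
identification restrictions $\gamma_{1}=0$ and
$\textrm{var}(\epsilon_{i})=1$ imposed so that the only free cut-point is
$\gamma_{2}$.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulate data from an ordinal probit with J = 3 categories.
# Identification: gamma_1 = 0, var(eps) = 1; the free cut-point is gamma_2.
n, beta_true, gamma2_true = 2000, np.array([0.5, -1.0]), 1.2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
z = X @ beta_true + rng.normal(size=n)
y = 1 + (z > 0).astype(int) + (z > gamma2_true).astype(int)  # categories 1, 2, 3

def neg_loglik(theta, X, y):
    beta, gamma2 = theta[:-1], theta[-1]
    cuts = np.array([-np.inf, 0.0, gamma2, np.inf])   # gamma_0, ..., gamma_3
    xb = X @ beta
    # Pr(y_i = j) = Phi(gamma_j - x_i'beta) - Phi(gamma_{j-1} - x_i'beta)
    p = norm.cdf(cuts[y] - xb) - norm.cdf(cuts[y - 1] - xb)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

res = minimize(neg_loglik, x0=np.array([0.0, 0.0, 0.5]),
               args=(X, y), method="BFGS")
beta_hat, gamma2_hat = res.x[:-1], res.x[-1]
```

With a sample of this size the ML estimates land close to the values used to
generate the data, which is a quick sanity check on the likelihood code.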
Thus far, we have described the ordinal probit model but the framework can be
transformed into an \emph{ordinal logit model} (or \emph{ordered logit
model}) by simply assuming that the error follows a logistic distribution
\citep{McKelvey-Zavoina-1975,McCullagh-1980}. Therefore, for the model in
equation~\eqref{eq:model}, we now assume that $\epsilon_{i} \sim L(0,1)$ for
$i=1,\ldots, n$, where $L$ denotes a logistic distribution with mean $0$ and
variance $\pi^{2}/3$. Like the normal distribution, the logistic distribution
is symmetric but has heavier tails relative to a normal distribution. The
likelihood for the ordinal logit model has the same structure as
equation~\eqref{eq:likelihoodOP} with $\Phi(w)$ replaced by $\Lambda(w) =
\exp(w)/[1 + \exp(w)]$, where $w$ is the argument inside the parenthesis.
Analogous to the ordinal probit model, the parameters are estimated using the
ML technique.
An interesting property of the ordinal logit model is that the ratio of odds
of not exceeding a certain category (say $j$) for any two individuals is
constant across response categories. This earns it the name
\emph{proportional odds model}. To see this property in effect, let
$\theta_{ij} = \Pr(y_{i} \le j)$ denote the cumulative probability that
individual $i$ chooses category $j$ or below. For the ordinal logit model,
this implies: $\theta_{ij} = \exp{(\gamma_{j} - x'_{i}\beta)}/ \big[1 +
\exp{(\gamma_{j} - x'_{i}\beta )} \big]$, and $ \theta_{ij}/(1-\theta_{ij}) =
\exp \big[\gamma_{j} - x'_{i}\beta \big]$, where the latter represents the
odds for the event $y_{i} \le j$. Accordingly, for any two individuals (say
$1$ and $2$), the ratio of odds is,
\begin{equation}
\frac{\theta_{1j}/(1-\theta_{1j})}{\theta_{2j}/(1-\theta_{2j})} =
\exp\big[ - (x_{1} - x_{2})' \beta \big].
\label{eq:OddsRatio}
\end{equation}
The odds ratio presented in equation~\eqref{eq:OddsRatio} does not depend on
the response category $j$: the odds for the two individuals remain
proportional across all categories, with $\exp\big[-(x_{1} - x_{2})'\beta
\big]$ as the constant of proportionality. Thus, the name
\emph{proportional odds model}.
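The proportional odds property is easy to verify numerically. The short
sketch below (Python with NumPy; the coefficient and cut-point values are
made up purely for illustration) computes the cumulative odds for two
hypothetical individuals and checks that their ratio is the same at every
cut-point and equals $\exp[-(x_{1}-x_{2})'\beta]$.

```python
import numpy as np

# Illustrative parameter values for an ordinal logit with J = 3 categories.
beta = np.array([0.5, -1.0])
gammas = np.array([0.0, 1.2])          # cut-points gamma_1, gamma_2
x1, x2 = np.array([1.0, 0.3]), np.array([1.0, -0.8])

def cum_odds(x):
    # theta_ij = exp(gamma_j - x'beta) / (1 + exp(gamma_j - x'beta))
    eta = gammas - x @ beta            # one value per cut-point j
    theta = np.exp(eta) / (1 + np.exp(eta))
    return theta / (1 - theta)         # odds of the event y <= j

ratio = cum_odds(x1) / cum_odds(x2)    # one odds ratio per cut-point j
# The ratio is identical across j and equals exp(-(x1 - x2)'beta).
```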
\section{Binary Models}\label{sec:Model2}
\emph{Binary} models are a simplification of ordinal models and are designed
to deal with situations where the outcome (response or dependent) variable is
dichotomous, i.e., it takes only two values, typically coded as 1 for `success'
and 0 for `failure'. For example, the application presented in
Section~\ref{subsec:Study1} models the response as a `success' if an opinion
is in favor of legalization and a `failure' otherwise. The general set up of
a binary probit model (or simply probit model) is similar to an ordinal
probit model and can be written in terms of a continuous latent variable
$z_{i}$\footnote{The continuous latent variable can be interpreted as the
difference between utilities from choice $1$ and $0$ i.e., $z_{i} = U_{i1} -
U_{i0}$, where $U$ denotes utility \citep{Jeliazkov-Rahman-2012}.} as
follows,
\begin{equation}
\begin{split}
z_{i} & = x'_{i} \beta + \epsilon_{i}, \hspace{0.75in} \forall \;
i=1, \cdots, n, \\
y_{i} & = \left\{ \begin{array}{ll}
1 & \textrm{if} \; z_{i} > 0,\\
0 & \textrm{otherwise}.
\end{array} \right.
\end{split}
\label{eq:ModelBinary}
\end{equation}
where $\epsilon_{i} \sim N(0,1)$ for $i=1,\ldots,n$. With only two responses,
there is a single cut-point which is typically fixed at 0 for the sake of
simplicity. A pictorial representation of the binary outcome probabilities
for marijuana legalization is
shown in Figure~\ref{fig:ordinalpdf2}. The figure also shows that the
cut-point $\gamma_{1}$ is fixed at 0 to anchor the location of the
distribution. In addition, the scale is fixed by assuming that the variance
of the normal distribution is 1. Both restrictions, as mentioned earlier, are
necessary to identify the model parameters.
\begin{figure*}[!t]
\centerline{
\mbox{\includegraphics[width=7.50in, height=2.25in]{fig-ordinalpdf2}}
}
\caption{The cut-point $\gamma_{1}$ divides the area under the curve into two
parts, the probability of failure and probability of success. In our study,
P$(y_{i}=0)$ and P$(y_{i}=1)$ correspond to probability of
opposing and supporting marijuana legalization, respectively. Note that for
each individual $i$ the mean $x'_{i}\beta$ and hence the probabilities,
P$(y_{i}=0)$ and P$(y_{i}=1)$, will be different.}
\label{fig:ordinalpdf2}
\end{figure*}
The likelihood for the binary probit model can be expressed as,
\begin{equation}
\begin{split}
\ell(\beta; y)
& = \prod_{i=1}^{n}
\big\{ \Pr(y_{i} = 0|x'_{i}\beta)^{(1-y_{i})}
\Pr(y_{i} = 1|x'_{i}\beta)^{y_{i}} \big\}, \\
& = \prod_{i=1}^{n}
\big\{ \Phi(- x'_{i}\beta)^{(1- y_{i})}
\Phi(x'_{i}\beta)^{y_{i}} \big\}.
\label{eq:likelihoodBP}
\end{split}
\end{equation}
Given the likelihood, the model parameters are estimated using the ML
technique, i.e., by maximizing the logarithm of the likelihood in
equation~\eqref{eq:likelihoodBP} with respect to the parameter vector
$\beta$. Once the parameter estimates are available, we may calculate objects
of interest, such as the covariate effects and predicted probabilities.
Measures for goodness of fit can also be calculated to assess model fit.
Readers interested in further details about binary data modeling may look
into \citet[][Chap.~3]{Johnson-Albert-2000}.
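As an illustration, the sketch below (Python with NumPy/SciPy; the data are
simulated and the names are illustrative) maximizes the binary probit
log-likelihood of equation~\eqref{eq:likelihoodBP} and recovers the
coefficients used to generate the data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simulate binary probit data: y_i = 1 if x_i'beta + eps_i > 0, eps ~ N(0,1).
n, beta_true = 2000, np.array([-0.3, 0.8])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

def neg_loglik(beta, X, y):
    xb = X @ beta
    # log L = sum_i [ y_i log Phi(x'b) + (1 - y_i) log Phi(-x'b) ];
    # norm.logcdf is used for numerical stability in the tails.
    return -np.sum(y * norm.logcdf(xb) + (1 - y) * norm.logcdf(-xb))

res = minimize(neg_loglik, x0=np.zeros(2), args=(X, y), method="BFGS")
beta_hat = res.x
```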
Similar to ordinal models, the framework for the binary probit model given by
equation~\eqref{eq:ModelBinary} can be utilized to describe a binary logit
model (or simply logit model) with the modification that the errors follow a
logistic distribution \citep{Hosmer-etal-2013}. Both the location and scale
restrictions still apply to the logit model, but note that the variance is
now fixed at $\pi^{2}/3$ as compared to 1 in a probit model. To obtain the
logit likelihood, the normal \emph{cdf} $\Phi(x'_{i}\beta)$ in
equation~\eqref{eq:likelihoodBP} is replaced by the logistic \emph{cdf}:
$\Lambda (x'_{i}\beta) = \exp{(x'_{i}\beta)}/ \big[1 + \exp{(x'_{i}\beta )}
\big] $. We can maximize the resulting log-likelihood to obtain the parameter
estimates for the logit model.
The logit model is appealing to researchers in many fields (including
epidemiologists) because of the ease in interpreting its slope coefficient.
To see this, let $\theta_{i} = \Pr(y_{i}=1|x'_{i}\beta)$ denote the
probability of success and $x_{.,l}$ be a continuous covariate (or
independent variable). Then the logarithm of the odds (log-odds) of success
can be expressed as,
\begin{equation*}
\log \bigg( \frac{\theta_{i}}{1 - \theta_{i}} \bigg) = x'_{i} \beta
= x_{i,l} \, \beta_{l} + x'_{i,-l} \, \beta_{-l},
\end{equation*}
where $x_{i} = (x_{i,l}, \, x_{i,-l} )$, $\beta = (\beta_{l}, \,
\beta_{-l})$, and $-l$ in the subscript denotes all covariates/parameters
except the $l$-th covariate/parameter. If we differentiate the log-odds with
respect to the $l$-th covariate, we obtain $\beta_{l}$. Therefore, the slope
coefficient $\beta_{l}$ represents the change in the log-odds for a one unit
change in the $l$-th covariate.
Similarly, the coefficient of an indicator variable (dummy or dichotomous
variable) has an interesting interpretation. Let $x_{.,m}$ be an indicator
variable, $\theta_{i}^{1}$ be the probability of success when $x_{i,m}=1$,
$\theta_{i}^{0}$ be the probability of success when $x_{i,m}=0$. Our goal is
to find the expression for the odds-ratio, which measures the odds of success
among those with $x_{i,m}=1$ compared to those with $x_{i,m}=0$. Then the
logarithm of the odds-ratio is,
\begin{equation*}
\log \bigg( \frac{\theta_{i}^{1}/(1 - \theta_{i}^{1})}
{\theta_{i}^{0}/(1 - \theta_{i}^{0})}
\bigg) = \beta_{m} + x'_{i,-m} \, \beta_{-m} - x'_{i,-m} \, \beta_{-m}
= \beta_{m}.
\end{equation*}
The odds-ratio is better understood with the help of an example. Suppose $y$
denotes the presence or absence of a heart disease and $x_{.,m}$ denotes
whether the person is a smoker or non-smoker. Then, an odds-ratio $=2$ implies
that heart disease is twice as likely to occur among smokers as compared to
non-smokers for the population under study.
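A minimal numerical illustration (plain Python; the coefficient and the
baseline probability of 0.10 are hypothetical) converts a logit coefficient
into an odds ratio and then into probabilities. It also shows that an odds
ratio of 2 does not mean the probability of disease doubles.

```python
import math

# Hypothetical logit coefficient on a smoking indicator; exp(beta_m) is the
# odds ratio of heart disease for smokers versus non-smokers.
beta_m = math.log(2.0)
odds_ratio = math.exp(beta_m)       # equals 2

p0 = 0.10                           # assumed disease probability, non-smokers
odds0 = p0 / (1 - p0)               # baseline odds
odds1 = odds_ratio * odds0          # smokers' odds are twice the baseline
p1 = odds1 / (1 + odds1)            # back to a probability, about 0.182
```

Note that $p_1/p_0 \approx 1.82$, so the risk ratio is below 2 even though
the odds ratio is exactly 2.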
\section{Covariate Effects and Model Fitting}\label{sec:CEMF}
In ordinal models, the coefficients do not directly give covariate effects
because the link function is non-linear and, for the middle categories, the
outcome probabilities are not monotonic in the covariates. Consequently, we
need to calculate the covariate effect for each outcome. Let $x_{.,l}$ be a
continuous covariate, then covariate effect for the $i$-th observation (or
individual) in an ordinal probit model is calculated as,
\begin{equation}
\begin{split}
\frac{\partial \Pr(y_{i}=j)}{\partial x_{i,l}} & = -\beta_{l} \;
\big[ \phi(\gamma_{j} - x'_{i}\beta) - \phi(\gamma_{j-1}-x'_{i}\beta) \big] \\
& \simeq -\hat{\beta}_{l} \;
\big[ \phi(\hat{\gamma}_{j} - x'_{i}\hat{\beta})
- \phi(\hat{\gamma}_{j-1}-x'_{i}\hat{\beta}) \big],
\end{split}
\label{eq:CEContOrdinal}
\end{equation}
where $\phi(\cdot)$ denotes the probability density function (\emph{pdf}) of
a standard normal distribution and $(\hat{\beta}, \hat{\gamma})$ are the ML
estimates of the parameters $(\beta, \gamma)$. The average covariate effect
is computed by averaging the covariate effect in
equation~\eqref{eq:CEContOrdinal} across all observations. If the covariate
is an indicator variable (say $x_{.,m})$, then the covariate effect for the
$i$-th observation on outcome $j$ ($=1,\ldots,J$) is calculated as,
\begin{equation}
\begin{split}
& \Pr(y_{i}=j|x_{i,-m}, x_{i,m}=1) - \Pr(y_{i}=j|x_{i,-m}, x_{i,m}=0) \\
& = \big[ \Phi(\gamma_{j} - {x'}^{\dagger}_{i}\beta) - \Phi(\gamma_{j-1}
- {x'}^{\dagger}_{i}\beta)\big] -
\big[ \Phi(\gamma_{j} - {x'}^{\ddagger}_{i}\beta)
- \Phi(\gamma_{j-1} - {x'}^{\ddagger}_{i}\beta)\big]\\
& \simeq \big[ \Phi(\hat{\gamma}_{j} - {x'}^{\dagger}_{i}\hat{\beta})
- \Phi(\hat{\gamma}_{j-1} - {x'}^{\dagger}_{i} \hat{\beta})\big] -
\big[ \Phi(\hat{\gamma}_{j} - {x'}^{\ddagger}_{i} \hat{\beta})
- \Phi(\hat{\gamma}_{j-1} - {x'}^{\ddagger}_{i}\hat{\beta})\big],
\end{split}
\label{eq:CEIndOrdinal}
\end{equation}
where ${x}^{\dagger}_{i} = (x_{i,-m}, \, x_{i,m}=1)$ and ${x}^{\ddagger}_{i}
= (x_{i,-m}, \, x_{i,m}=0)$. The average covariate effect is calculated by
averaging the covariate effect given in equation~\eqref{eq:CEIndOrdinal}
across all observations. Note that for ordinal models, the sign of the
regression coefficient translates unambiguously into the sign of covariate
effect only for the lowest and highest categories of the response variable.
Covariate effect for the middle categories cannot be known \emph{a priori}.
Moving on to binary probit model, the expressions for the covariate effects
simplify. For a continuous variable, the covariate effect is given by the
expression,
\begin{equation}
\frac{\partial \Pr(y_{i}=1)}{\partial x_{i,l}} = \beta_{l} \,
\phi(x'_{i}\beta) \simeq \hat{\beta}_{l} \; \phi(x'_{i} \hat{\beta}),
\label{eq:CEContBinary}
\end{equation}
and the same for an indicator variable is given by the expression,
\begin{equation}
\begin{split}
& \Pr(y_{i}=1|x_{i,-m},x_{i,m}=1) - \Pr(y_{i}=1|x_{i,-m}, x_{i,m}=0) \\
& = \Phi({x'}^{\dagger}_{i}\beta) - \Phi({x'}^{\ddagger}_{i}\beta)
\simeq \Phi({x'}^{\dagger}_{i}\hat{\beta}) - \Phi({x'}^{\ddagger}_{i}\hat{\beta}),
\end{split}
\label{eq:CEIndBinary}
\end{equation}
where all the notations have been explained in the previous paragraph. Once
again, the average covariate effect is computed by averaging across all
observations. While the discussion on covariate effects has considered
ordinal and binary probit models because of their implementation in the
applications, covariate effects for the ordinal and binary logit models can
be calculated analogously by replacing the normal \emph{pdf}'s and
\emph{cdf}'s with logistic \emph{pdf}'s and \emph{cdf}'s at appropriate
places.
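The average covariate effects in equations~\eqref{eq:CEContBinary} and
\eqref{eq:CEIndBinary} are straightforward to compute once estimates are in
hand. The following sketch (Python with NumPy/SciPy; the design matrix and
the ``estimates'' are fabricated for illustration) averages both effects over
the sample for a binary probit model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Illustrative data and (pretend) ML estimates for a binary probit;
# covariates are [intercept, continuous x_l, indicator x_m].
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.integers(0, 2, n)])
beta_hat = np.array([-0.2, 0.6, 0.4])

# Average effect of the continuous covariate: mean of beta_l * phi(x'beta).
ace_cont = np.mean(beta_hat[1] * norm.pdf(X @ beta_hat))

# Average effect of the indicator: mean of Phi(x'beta with x_m = 1)
# minus Phi(x'beta with x_m = 0), holding other covariates at observed values.
X1, X0 = X.copy(), X.copy()
X1[:, 2], X0[:, 2] = 1.0, 0.0
ace_ind = np.mean(norm.cdf(X1 @ beta_hat) - norm.cdf(X0 @ beta_hat))
```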
To assess the goodness of model fit, we calculate three measures: likelihood
ratio (LR) test statistic, McFadden's R-square \citep{McFadden-1974} and
hit-rate \citep{Johnson-Albert-2000}. For the null hypothesis $H_{0}:
\beta_{2} = \ldots = \beta_{k}=0$, the LR test statistic $\lambda_{LR}$ is
defined as follows:
\begin{equation*}
\lambda_{LR} = - 2 [\ln L_{0} - \ln L_{\mathrm{fit}}] \quad
\mathop\sim\limits^{H_{0}} \quad \chi^{2}_{k-1},
\end{equation*}
where $\ln L_{\mathrm{fit}}$ is the log-likelihood of the fitted model and
$\ln L_{0}$ is the log-likelihood of the intercept-only model. Under the null
hypothesis, $\lambda_{LR}$ follows a chi-square distribution with degrees of
freedom equal to $k-1$, i.e., the number of restrictions under the null
hypothesis. So, we calculate the statistic $\lambda_{LR}$ and compare it with
$\chi^{2}_{k-1}$ for a given level of significance. If $\lambda_{LR} >
\chi^{2}_{k-1}$, then we reject the null hypothesis. Otherwise, we do not
reject the null hypothesis.
Another popular goodness of fit measure for discrete choice models is
McFadden's R-square ($R^{2}_{M}$), due to \citet{McFadden-1974}. The
McFadden's R-square, also referred to as pseudo R-square or likelihood ratio
index, is defined as follows,
\begin{equation*}
R^{2}_{M} = 1 - \frac{\ln L_{\mathrm{fit}}}{\ln L_{0}}.
\end{equation*}
The $R^{2}_{M}$ is intuitively appealing because it is bounded between 0 and
1, similar to the coefficient of determination $(R^{2})$ in linear regression
models. When all slope coefficients are zero, the $R^{2}_{M}$ equals zero;
but in discrete choice models $R^{2}_{M}$ can never equal 1, although it can
come close to 1. While higher values of $R^{2}_{M}$ imply better
fit, the value as such has no natural interpretation in sharp contrast to
$R^{2}$ which denotes the proportion of variation in the dependent variable
explained by the covariates.
While both LR test statistic and McFadden's R-square are commonly used in
applied studies, the hit-rate is relatively uncommon. The hit-rate (HR) is
defined as the percentage of correct predictions i.e., percentage of
observations for which the model correctly assigns the highest probability to
the observed response category. Mathematically, the HR can be defined as
follows,
\begin{equation*}
HR = \frac{1}{n} \sum_{i=1}^{n} I\bigg(
\Big( \arg\max_{j \in \{1,\ldots,J\}} \; \hat{p}_{ij} \Big) = y_{i} \bigg),
\end{equation*}
where $\hat{p}_{ij}$ is the predicted probability that individual $i$ selects
outcome $j$, and $I(\cdot)$ is the indicator function as defined earlier.
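All three fit measures can be computed directly from a fitted model. The
sketch below (Python with NumPy/SciPy; simulated data) fits a binary probit
by ML and then evaluates the LR statistic, McFadden's R-square and the
hit-rate. The intercept-only log-likelihood uses the fact that, at its
maximum, the fitted success probability equals the sample proportion of
successes.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, chi2

rng = np.random.default_rng(3)
n, beta_true = 1500, np.array([0.2, 0.9])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

def neg_loglik(beta, X, y):
    xb = X @ beta
    return -np.sum(y * norm.logcdf(xb) + (1 - y) * norm.logcdf(-xb))

res = minimize(neg_loglik, np.zeros(2), args=(X, y), method="BFGS")
ll_fit = -res.fun
# Intercept-only model: at its MLE the fitted probability equals mean(y).
ybar = y.mean()
ll_0 = n * (ybar * np.log(ybar) + (1 - ybar) * np.log(1 - ybar))

lr_stat = -2 * (ll_0 - ll_fit)
p_value = chi2.sf(lr_stat, df=1)            # k - 1 = 1 restriction here
mcfadden_r2 = 1 - ll_fit / ll_0
p_hat = norm.cdf(X @ res.x)
hit_rate = np.mean((p_hat > 0.5).astype(int) == y)  # predict modal category
```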
\section{Some Advances in Discrete Choice Modeling}\label{sec:SomeAdv}
Thus far, we have looked at ordinal and binary models and their Classical (or
Frequentist) approach to estimation via the maximum likelihood technique,
which only involves the likelihood function (say $f(y|\theta)$, where
$\theta$ is the parameter vector). The Classical approach assumes that model
parameters are unknown but have fixed values and hence the parameters cannot
be treated as random variables. In contrast, Bayesian approach to estimation
utilizes the Bayes' theorem,
\begin{equation*}
\pi(\theta|y) = \frac{f(y|\theta) \pi(\theta)}{\int f(y|\theta) \pi(\theta)
\; d\theta} \; ,
\label{eq:BayesTheorem}
\end{equation*}
to update the belief/information about $\theta$ (considered a random
variable) by combining information from the observed sample (via the
likelihood function) and non-sample or prior beliefs (arising from previous
studies, theoretical consideration, researcher's belief, etc.) represented by
the prior distribution $\pi(\theta)$. Inference is based on the posterior
distribution $\pi(\theta|y)$. Bayesian approach provides several advantages
including finite sample inference, working with likelihoods which are
difficult to evaluate, and advantages in computation. Interested readers may
look into \citet{Greenberg-2012} for details on the Bayesian approach and
\citet{PoirierBook-1995} for a comparison of Classical and Bayesian
estimation methods.
The posterior densities for ordinal and binary models do not have a tractable
form and so the parameters cannot be sampled directly. While the
Metropolis-Hastings (MH) algorithm \citep{Metropolis-etal-1953,
Hastings-1970} can be employed to sample the parameters, the standard and
more convenient approach is to consider \emph{data augmentation}
\citep{Tanner-Wong-1987}. In this approach, the joint posterior density is
augmented by a latent variable $z$ and the augmented joint posterior
$\pi(\theta,z|y)$ is written as,
\begin{equation*}
\pi(\theta,z|y) \propto \pi(\theta) f(z,y|\theta) =
\pi(\theta) f(z|\theta) f(y|z,\theta).
\label{eq:AugJointPost}
\end{equation*}
For an ordinal probit model, $\theta = (\beta, \gamma)$ and $f(y|z,\beta,
\gamma) \equiv f(y|z,\gamma)$; whereas for a binary probit model $\theta =
\beta$ and $f(y|z,\beta) \equiv f(y|z)$. The two equivalences arise because,
given a latent observation $z_{i}$, $y_{i}$ is known with certainty
regardless of $\beta$ for $i=1,\ldots,n$. This de-linking of the likelihood
function from $\beta$, made possible through data augmentation, simplifies
the estimation procedure and allows sampling of $\beta$ through a Gibbs
process \citep{Geman-Geman-1984} -- a well known Markov chain Monte Carlo
(MCMC) technique. The latent variable $z$ is sampled element-wise from a
truncated normal distribution. For ordinal models, a monotone transformation
of the cut-points, $\gamma$, is sampled using an MH algorithm. The MCMC
algorithms for estimating ordinal and binary probit models outlined
here were introduced in \citet{Albert-Chib-1993}. Other notable references
that describe the Bayesian modeling and estimation of ordinal and binary
responses in great detail include \citet{Johnson-Albert-2000},
\citet{Greenberg-2012}, and \citet{Jeliazkov-Rahman-2012}. Bayesian
estimation of logit model is based on the same principle and presented in
\citet{Holmes-Held-2006} and \citet{Jeliazkov-Rahman-2012}.
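To illustrate the data augmentation approach, the sketch below (Python with
NumPy/SciPy; simulated data, a diffuse normal prior, and run lengths chosen
purely for illustration) implements the \citet{Albert-Chib-1993} Gibbs
sampler for the binary probit model: the latent $z$ is drawn element-wise
from truncated normals and $\beta$ from its conditional normal posterior.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Simulated binary probit data.
n, beta_true = 800, np.array([0.3, -0.7])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

# Prior: beta ~ N(b0, B0); here a diffuse prior with B0 = 100 * I.
b0, B0inv = np.zeros(2), np.eye(2) / 100.0
Bpost = np.linalg.inv(B0inv + X.T @ X)    # posterior covariance of beta | z

beta, draws = np.zeros(2), []
for it in range(2000):
    # Step 1: z_i | beta, y_i ~ N(x_i'beta, 1), truncated to (0, inf) if
    # y_i = 1 and to (-inf, 0] if y_i = 0 (inverse-CDF sampling).
    mu = X @ beta
    plo = norm.cdf(-mu)                   # P(z_i <= 0 | beta)
    u = rng.uniform(size=n)
    cdf = np.where(y == 1, plo + u * (1 - plo), u * plo)
    z = mu + norm.ppf(np.clip(cdf, 1e-12, 1 - 1e-12))
    # Step 2: beta | z ~ N(Bpost (B0inv b0 + X'z), Bpost).
    beta = rng.multivariate_normal(Bpost @ (B0inv @ b0 + X.T @ z), Bpost)
    if it >= 500:                         # discard burn-in draws
        draws.append(beta)

beta_post_mean = np.mean(draws, axis=0)
```

The posterior mean of $\beta$ settles near the values used to generate the
data, as expected with a diffuse prior and a moderately large sample.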
The ordinal and binary models considered in this chapter, whether estimated
using the Classical or the Bayesian techniques, provide information on the average
probability of outcomes conditional on the covariates. However, interests in
the quantiles of the response variable as a robust alternative to mean
regression have grown enormously since the introduction of quantile
regression in \citet{Koenker-Basset-1978}. Quantile modeling gained further
momentum with the development of Bayesian quantile regression by
\citet{Yu-Moyeed-2001}, where the authors create a working likelihood by
assuming that the errors follow an asymmetric Laplace (AL) distribution
\citep{Yu-Zhang-2005}. Binary quantile regression was proposed by
\citet{Kordas-2006} and its Bayesian formulation was presented by
\citet{Benoit-Poel-2010}. \citet{Rahman-2016} introduced Bayesian quantile
regression with ordinal responses and estimated the model using MCMC
techniques. A flexible form of Bayesian ordinal quantile regression was
proposed in \citet{Rahman-Karnawat-2019}. Some recent research on ordinal and
binary quantile regression in the panel/longitudinal set up include
\citet{Alhamzawi-Ali-Longitudinal2018}, \citet{Rahman-Vossmeyer-2019}, and
\citet{Bresson-etal-2021}. Two applied studies employing ordinal and binary
quantile framework are \citet{Omata-etal-2017} and \citet{Ojha-Rahman-2021},
respectively. Interested readers may explore the above mentioned papers and
references therein to develop a thorough understanding on ordinal and binary
quantile modeling and their applications.
\section{Application: Public Opinion on Legalization of Marijuana in the
United States}
In the United States (US), marijuana is illegal under the federal law as per
the Controlled Substances Act of 1970. The Act classifies marijuana as a
schedule I drug i.e., a drug with no accepted medical value, high potential
for abuse and not safe to use even under medical supervision
\citep{DEA-2011}. However, state laws pertaining to marijuana have evolved
over time. Up until 2016, 29 states had either legalized marijuana, allowed
access for medical reasons or decriminalized its use (see
Figure~\ref{fig:USmap}). More legalization efforts are appearing in the
remaining states of the US. Such
revisions in state laws represent a change in public attitude that is aptly
reflected in survey data collected by independent polling agencies such as
Pew Research Center, General Social Survey and Gallup. Figure~\ref{fig:Trend}
shows an increasing trend in favor of legalizing marijuana based on public
opinion. Besides, political standing on marijuana has also popularized the
debate on its legalization and may have affected public opinion regarding it.
The gradual growth in support of marijuana perhaps echoes a better public
insight on the medicinal value of marijuana \citep{EarleywineBook-2005} and
the social cost of prohibition that includes illegal trade, racially skewed
arrests of African Americans and huge enforcement cost
\citep{Shepard-Blackley-2007}.
\begin{figure}[!t]
\centerline{
\mbox{\includegraphics[width=6.50in, height=3.75in]{fig-USmap}}
}
\caption{Marijuana state laws in the US as of 6th January, 2016.}
\label{fig:USmap}
\end{figure}
\begin{figure}[!t]
\centerline{
\mbox{\includegraphics[width=6.50in, height=3.00in]{fig-Trend}}
}
\caption{Public opinion on marijuana legalization for the period 1969-2015.
The black dashed line is the 50 percent benchmark. Data source: Pew Research
Center, General Social Survey and Gallup. We have averaged the percentages
for years with multiple surveys. The combined percentage of the two opinions
is below 100 since on average 4 percent of respondents answered
``don't know'' or ``refused to answer''.}
\label{fig:Trend}
\end{figure}
While policies on marijuana use are in the early stages of formulation as
states evaluate its costs and benefits \citep{Winterbourne-2012}, more states
decriminalizing its use may cause a major policy shift at the federal level
\citep{FernerNews-2015}. In this regard, some scholars argue that public
policies ought to be guided by public opinion such that mass opinion and
democracy is upheld \citep{Monroe-1998,Paletz-etal-2015}. Besides,
\citet{Shapiro-2011} cites a large number of studies to argue that public
opinion influences government policy making in the US. Therefore, it is
imperative to study and identify the factors that significantly impact US
public opinion towards marijuana legalization.
In the next section, we employ a binary probit model to analyze public
opinion on marijuana legalization and thereafter implement the ordinal probit
model to analyze public opinion on the extent of marijuana legalization. The
choice of probit models over their logit counterparts is driven by practical
considerations -- the probit models are tractable in univariate cases and can
be generalized to multivariate and hierarchical settings
\citep{Jeliazkov-etal-2008}. In contrast, the logit model, being based on the
logistic distribution, cannot easily model correlations in multivariate
settings.
\subsection{Binary Probit Model}\label{subsec:Study1}
\subsubsection{Data}\label{subsubsec:bpData}
We utilize the March 2013 Political Survey data from the Pew Research Center
to analyze public opinion on marijuana and identify the factors that
significantly impact the probability of supporting its legalization. The
survey was conducted during the period March 13-17, 2013, by Abt SRBI
(Schulman, Ronca \& Bucuvalas, Inc.) for the Pew Research Center for the People
and the Press. The survey selected and interviewed a representative sample of
1,501 adults living in the US. Of the 1,501 adults, 750 individuals were
interviewed over landline and the remaining 751 individuals over cell phone.
The available sample had several respondents with missing values (``don't
know'' or ``refused to answer'') on the variables of interest, along with 49
respondents who were unsure about marijuana legalization. After removing data
on these respondents, we have a sample of 1182 observations available for the
study.
\begin{table}[!t]
\centering \small \setlength{\tabcolsep}{8pt} \setlength{\extrarowheight}{2pt}
\setlength\arrayrulewidth{1pt}
\caption{Descriptive summary of the variables (March 2013 Political Survey).}\label{Table:Summary1}
\begin{tabular}{llr r}
\toprule
\textsc{variable} & & \textsc{mean} & \textsc{std} \\
\midrule
\textsc{log age} & & 3.86 & 0.40 \\
\textsc{log income} & & 10.64 & 0.98 \\
\textsc{household size} & & 2.72 & 1.44 \\
\midrule
& \textsc{category} & \textsc{counts} & \textsc{percentage} \\
\midrule
\textsc{past use} & & 554 & 46.87 \\
\textsc{male} & & 570 & 48.22 \\
\cmidrule{2-2}
& \textsc{bachelors \& above}
& 426 & 36.04 \\
\textsc{education} & \textsc{below bachelors}
& 360 & 30.46 \\
& \textsc{high school \& below}
& 396 & 33.50 \\
\cmidrule{2-2}
\textsc{tolerant states} & & 374 & 31.64 \\
\cmidrule{2-2}
& \textsc{white} & 938 & 79.36 \\
\textsc{race}
& \textsc{african american}
& 142 & 12.01 \\
& \textsc{other races}
& 102 & 8.63 \\
\cmidrule{2-2}
& \textsc{republican}
& 353 & 29.86 \\
\textsc{party affiliation}
& \textsc{democrat}& 404 & 34.18 \\
& \textsc{independent \& others}
& 425 & 35.96 \\
\cmidrule{2-2}
& \textsc{protestant}
& 494 & 41.79 \\
& \textsc{roman catholic}
& 258 & 21.83 \\
\textsc{religion}
& \textsc{christian}
& 138 & 11.68 \\
& \textsc{conservative}
& 72 & 6.09 \\
& \textsc{liberal}
& 220 & 18.61 \\
\midrule \textsc{public opinion}
& \textsc{favor legalization}
& 622 & 52.62 \\
& \textsc{oppose legalization}
& 560 & 47.38 \\
\bottomrule
\end{tabular}
\end{table}
In this application, the dependent variable is response to the question: ``Do
you think the use of marijuana should be made legal, or not?''. The responses
were recorded as `yes, legal' (i.e., favor legalization), `no, illegal' (i.e.,
oppose legalization) or `don't know or refused'. We remove the last category
as it constitutes missing responses. This makes the response variable binary
and hence a binary probit model is utilized to analyze the response based on
the following set of covariates: age, income, household size, past use of
marijuana, gender, education, state of residence, race, party affiliation and
religion.
Age was recorded in years. Income (measured in US Dollars) was reported as
belonging to one of 9 groups (0-10k, 10k-20k, $\cdots$, 40k-50k, 50k-75k,
75k-100k, 100k-150k, 150k and above, where `k' denotes thousand). We convert
income to a continuous variable by taking the mid-point of the first 8 income
groups and impute 150k for the last group. Household size represents the
number of members in the family. Past use of marijuana and gender are
indicator variables in the model. Educational attainment of the respondents
is classified into three categories
and the category `high school and below' forms the base or reference category
in the regressions. The variable `tolerant states' indicates whether a
respondent lives in one of the 20 states where recreational use is legal,
possession is decriminalized and/or use is allowed for medical purposes
only.\footnote{The list of
tolerant states before the date of the survey include Alaska, Arizona,
California, Colorado, Connecticut, Delaware, Hawaii, Maine, Maryland,
Massachusetts, Michigan, Montana, Nevada, New Jersey, New Mexico, Oregon,
Rhode Island, Vermont, Washington, Washington DC. Source:
https://www.whitehouse.gov/ondcp/state-laws-related-to-marijuana.} Race is
classified into three categories and
`White' race is used as the base category in the regressions. The category
`Other Races' comprises Asian, Hispanic, Native American, Pacific
Islander and remaining races. Party affiliation is also classified into
three categories and Republican Party is used as the reference category in
the models. Religion is classified into five categories and `Protestant' is
used as the base category. Here, the category `Conservative' comprises
respondents belonging to one of the following religions: Buddhist, Hindu,
Islam, Jew, Mormon and the Orthodox Church. The category `Liberal' comprises
respondents who claim to be Agnostic, Atheist, Universalist or nothing in
particular. The descriptive statistics for all the variables are presented in
Table~\ref{Table:Summary1}.
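The bracket-to-midpoint conversion of income described above can be sketched in a few lines. The function below is illustrative only; the 1-9 group coding is an assumption about how the survey records the income categories.

```python
def income_midpoint(group):
    """Map the survey's ordered income group (1-9) to a continuous value
    in US Dollars: mid-point for the first 8 brackets, 150k for the top."""
    bounds = [(0, 10_000), (10_000, 20_000), (20_000, 30_000),
              (30_000, 40_000), (40_000, 50_000), (50_000, 75_000),
              (75_000, 100_000), (100_000, 150_000)]
    if group == 9:               # '150k and above' has no upper bound,
        return 150_000           # so 150k is imputed, as in the text
    lo, hi = bounds[group - 1]   # mid-point of the reported bracket
    return (lo + hi) / 2
```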
Let us now look at the socio-demographic characteristics of a typical
respondent in the sample. An average respondent is about 51 years old and
s/he belongs to a household of size 3 with an annual income of 60,527 US
Dollars. The sample is almost evenly split between males and females and
46.87 percent of respondents have a history of marijuana use. In the sample,
the largest proportion of respondents (36.04 percent) have `bachelors and
above' degree, followed by `below bachelors' degree (30.46 percent). A
significant fraction of the respondents (31.64 percent) reside in states that
have some favorable laws towards marijuana. The sample is predominantly White
(79.36 percent) with a good representation (12.01 percent) of the African
American population. Amongst the respondents, almost 30 percent consider
themselves as Republican, about 34 percent declare themselves as Democrats
and the remaining are Independent or belong to other parties. With respect to
religious codification, the largest proportion (41.79 percent) is
Protestants, followed by Roman Catholics (21.83 percent). A good proportion,
11.68 percent, declare themselves to be simply Christian. The Liberal
category forms about 18.61 percent and the Conservatives have the lowest
fraction at 6.09 percent.
\subsubsection{Estimation }\label{subsubsec:bpEstimation}
We estimate four different binary probit models and present the estimated
coefficients and standard errors for each model in
Table~\ref{Table:BinaryModelResults}. To begin with, Model~1 considers a
basic set of covariates that includes log age, log income, past use of marijuana,
gender, education categories, household size and state of residence of the
respondents. Subsequent models generalize Model~1 by adding more variables to
the basic set of regressors. Specifically, Model~2 adds the race variable,
Model~3 adds party affiliation to the list of variables in Model~2, and
Model~4 incorporates religious denomination to the regressors in Model~3. All
the four models have high LR statistic as shown in
Table~\ref{Table:BinaryModelResults} and hence we reject the null hypothesis
that the coefficients are jointly zero in each model. The two other goodness
of fit statistics, McFadden's $R^{2}$ and hit-rate, also show that all
models provide a good fit, with each subsequent model providing a better fit
than the previous model.
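The three fit measures just mentioned can be computed directly from the maximized log-likelihoods and fitted probabilities. The sketch below is a minimal illustration, assuming \texttt{ll\_full} and \texttt{ll\_null} are the log-likelihoods of the fitted and intercept-only models and \texttt{k} is the number of restrictions; all names are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def fit_statistics(ll_full, ll_null, k, y, p_hat):
    """Goodness-of-fit measures for a binary choice model.
    ll_full, ll_null : maximized log-likelihoods of the fitted and
                       intercept-only models; k : number of restrictions.
    y : 0/1 outcomes; p_hat : fitted probabilities P(y = 1)."""
    lr = 2.0 * (ll_full - ll_null)                    # LR (chi^2) statistic
    p_value = chi2.sf(lr, df=k)                       # reject H0 of jointly-zero
    mcfadden = 1.0 - ll_full / ll_null                # McFadden's pseudo R^2
    hit_rate = 100.0 * np.mean((p_hat >= 0.5) == y)   # % correctly classified
    return lr, p_value, mcfadden, hit_rate
```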
\subsubsection{Results}\label{subsubsec:bpResults}
We focus on the results from Model~4, since it provides the best model fit.
Table~\ref{Table:BinaryModelResults} shows that log age has a negative
coefficient and is statistically significant at 5 percent level.\footnote{The
default significance level is 5 percent and henceforth reference to
significance level will be omitted.} This implies that young people are more
supportive of marijuana legalization. This is not surprising since risk
taking or deviant behavior is high amongst the younger population and is well
documented in the literature \citep{Brown-etal-1974}. Moreover,
\citet{Saieva-2008} reports a negative association between age and
probability of supporting legalization, while \citet{Alfonso-Dunn-2007} and
\citet{Delforterie-etal-2015} note that marijuana prevalence rate is higher
among the younger population.
\begin{table}[!t]
\centering \small \setlength{\tabcolsep}{4pt} \setlength{\extrarowheight}{2pt}
\setlength\arrayrulewidth{1pt}\caption{Estimation results for the binary probit model.}
\begin{tabular}{ll d{3.4} d{1.2} r
d{3.4} d{1.2} r
d{3.4} d{1.2} r
d{3.4} d{1.2} }
\toprule
&& \multicolumn{2}{c}{\textsc{model 1}} & &
\multicolumn{2}{c}{\textsc{model 2}} & &
\multicolumn{2}{c}{\textsc{model 3}} & &
\multicolumn{2}{c}{\textsc{model 4}} \\
\cmidrule{3-4} \cmidrule{6-7} \cmidrule{9-10} \cmidrule{12-13}
&& \multicolumn{1}{c}{\textsc{coef}} & \multicolumn{1}{c}{\textsc{se}} &&
\multicolumn{1}{c}{\textsc{coef}} & \multicolumn{1}{c}{\textsc{se}} &&
\multicolumn{1}{c}{\textsc{coef}} & \multicolumn{1}{c}{\textsc{se}} &&
\multicolumn{1}{c}{\textsc{coef}} & \multicolumn{1}{c}{\textsc{se}} \\
\midrule
\textsc{intercept} && 1.74^{**} & 0.61 && 1.70^{**} & 0.64 &&
0.11 & 0.66 && 0.51 & 0.69 \\
\textsc{log age} && -0.41^{**} & 0.11 && -0.41^{**} & 0.11 &&
-0.39^{**} & 0.11 && -0.29^{**} & 0.12 \\
\textsc{log income} && -0.06 & 0.05 && -0.06 & 0.05 &&
-0.02 & 0.05 && -0.03 & 0.05 \\
\textsc{past use} && 0.81^{**} & 0.08 && 0.82^{**} & 0.08 &&
0.82^{**} & 0.08 && 0.81^{**} & 0.08 \\
\textsc{male} && 0.18^{**} & 0.08 && 0.17^{**} & 0.08 &&
0.21^{**} & 0.08 && 0.17^{**} & 0.08 \\
\textsc{bachelors \& above} && 0.23^{**} & 0.10 && 0.23^{**} & 0.10 &&
0.23^{**} & 0.11 && 0.21^{*} & 0.11 \\
\textsc{below bachelors} && 0.19^{*} & 0.10 && 0.19^{*} & 0.10 &&
0.24^{**} & 0.10 && 0.26^{**} & 0.10 \\
\textsc{household size} && -0.04 & 0.03 && -0.05 & 0.03 &&
-0.04 & 0.03 && -0.03 & 0.03 \\
\textsc{tolerant states} && 0.13 & 0.08 && 0.12 & 0.08 &&
0.12 & 0.08 && 0.08 & 0.09 \\
\textsc{african american} && .. & .. && -0.05 & 0.12 &&
-0.26^{**} & 0.13 && -0.14 & 0.13 \\
\textsc{other races} && .. & .. && 0.12 & 0.14 &&
0.03 & 0.14 && 0.02 & 0.15 \\
\textsc{democrat} && .. & .. && .. & .. &&
0.68^{**} & 0.10 && 0.57^{**} & 0.11 \\
\textsc{other parties} && .. & .. && .. & .. &&
0.48^{**} & 0.10 && 0.40^{**} & 0.10 \\
\textsc{roman catholic} && .. & .. && .. & .. &&
.. & .. && 0.18^{*} & 0.10 \\
\textsc{christian} && .. & .. && .. & .. &&
.. & .. && -0.06 & 0.13 \\
\textsc{conservative} && .. & .. && .. & .. &&
.. & .. && 0.44^{**} & 0.18 \\
\textsc{liberal} && .. & .. && .. & .. &&
.. & .. && 0.66^{**} & 0.12 \\
\midrule
\textsc{LR ($\chi^{2}$) statistic}
&& & 164.37 && & 165.38 && & 211.44 && & 248.55 \\
\textsc{mcfadden's $R^{2}$}
&& & 0.10 && & 0.10 && & 0.13 && & 0.15 \\
\textsc{hit-rate}
&& & 66.67 && & 66.58 && & 67.51 && & 69.03 \\
\bottomrule
\\
\multicolumn{5}{l}{\footnotesize{$\ast \ast$ p $<$ 0.05, $\ast$ p $<$ 0.10}}
\end{tabular}
\label{Table:BinaryModelResults}
\end{table}
The coefficient for income is negative, but statistically insignificant as
also documented in \citet{Nielsen-2010} and \citet{Saieva-2008}. As such, we
do not note a significant relationship between income and probability of
supporting marijuana legalization. This is in disagreement with the
hypothesis that the economically weaker section will support legalization
since marijuana is popular within the lower income group. Past use of
marijuana may drive support for legalization, but this was not controlled
either in \citet{Nielsen-2010} or \citet{Saieva-2008}. Controlling for this
variable in our models, we find that the coefficient for past use is largest
amongst all variables and highly significant. This finding provides support
to the hypothesis that individuals who have used marijuana in the past
strongly favor its legalization and the large coefficient value implies that
past use is an important factor in favoring legalization. Males are more
supportive of legalizing marijuana compared to females and the coefficient is
significant, a result also documented in \citet{Nielsen-2010} and
\citet{Delforterie-etal-2015}. Similarly, \citet{Rodriguez-2015} finds that
boys are more likely to use marijuana during their adolescent years. Since
past use is an important determinant for supporting legalization and
marijuana use is more prevalent among males, it is not surprising to find
that males are more supportive of legalization.
The indicator variables for higher education i.e., `bachelors \& above' and
`below bachelors' have positive coefficients and are statistically
significant (either at 10 or 5 percent significance level) relative to the
base category, `high school \& below'. Thus, higher education leads to
increased support for legalization possibly because a more educated
individual can better understand the costs and medicinal benefits of
marijuana. However, some studies have found that early use of marijuana leads
to lower educational attainment and poor performance in school
\citep{Lynskey-Hall-2000, VanOurs-Williams-2009, Horwood-etal-2010}.
Household size has a negative effect, but the coefficient is not statistically
significant. Similarly, the coefficient for `tolerant states' is positive,
but insignificant. This implies that residing in one of the 20 states that
offer some relaxation on marijuana use/offence does not statistically increase the
probability of supporting legalization.
The coefficients for African American and `Other Races' are not statistically
different from the base category, White, with the exception of Model~3. The
negative coefficient for African American, although insignificant, is rather
surprising because one would expect African Americans to support legalization
in order to curtail the large number of marijuana related arrests from the
African American community. In line with this, \citet{Chen-KilleyaJones-2006}
also examine the extent of marijuana use across race and find that marijuana
use is higher among suburban White students compared to their African
American counterparts. Moreover, \citet{Nasim-etal-2007} document the
cultural orientation for African American young women and find that
traditional religious beliefs and practices could be the reason behind less
marijuana usage among African Americans.
Political affiliation often represents ideological differences towards any
public policy and several poll studies conducted by Pew and Gallup have found
that Republicans (Democrats) are more likely to oppose (favor) legalization
of marijuana. We also arrive at a similar conclusion, with the coefficients
for Democrat and `Other Parties' being positive and significant. This
suggests that individuals with either of these political affiliations are more
supportive of legalization compared to Republicans and the result is
consistent with \citet{Nielsen-2010}. Lastly, we look at the effect of
religious affiliations since religious beliefs sometimes act as a protective
factor against alcohol usage and smoking. The results show that the
coefficient for Roman Catholic is positive and statistically significant at
10 percent, while coefficient for Christian is negative but statistically
insignificant. In contrast, the coefficients for Conservative and Liberal are
both positive and statistically significant and hence both groups are more
supportive of legalization compared to Protestants. However, individual
opinions often do not strictly adhere to religious codes and conducts, so
these results may vary significantly across samples.
\begin{table}[!t]\centering
\small \setlength{\tabcolsep}{6pt} \setlength{\extrarowheight}{2pt}
\caption{Average covariate effects from Model~4.}
\label{Table:CovEffectBM}
\begin{tabular}{ll S[table-format=-1.4] }
\toprule
\textsc{covariate} & & \text{$\Delta$P(favor legalization)} \\
\midrule
\textsc{age, 10 years} & & -0.019 \\
\textsc{past use} & & 0.285 \\
\textsc{male} & & 0.057 \\
\textsc{bachelors \& above} & & 0.068 \\
\textsc{below bachelors} & & 0.085 \\
\textsc{democrat} & & 0.188 \\
\textsc{other parties} & & 0.134 \\
\textsc{roman catholic} & & 0.058 \\
\textsc{conservative} & & 0.143 \\
\textsc{liberal} & & 0.220 \\
\bottomrule
\end{tabular}
\end{table}
The above discussion suggests that the signs of the coefficients, except the
race variables, are consistent with what one would typically expect. However,
the coefficients by themselves do not give the covariate effects (see
Section~\ref{sec:Model1} and \ref{sec:Model2}). Table~\ref{Table:CovEffectBM}
presents the average covariate effects for all significant variables, either
at 5 or 10 percent level. Results show that an increase in age by 10 years
decreases the probability of support by 1.9 percent. The highest positive
impact comes from past use, which shows that an individual who has used
marijuana is 28.5 percent more likely to support legalization relative to
someone who has never used it. Males are 5.7 percent more likely to support
legalization relative to females. Higher education increases the probability
of support and an individual with bachelors or higher degree (below
bachelors) is 6.8 (8.5) percent more likely to support legalization relative
to an individual with a high school degree or below. Political affiliation to
the Democratic Party increases the probability of support by 18.8 percent.
Similarly, an individual who identifies themselves with Independent and other
parties is 13.4 percent more likely to support legalization compared to a
Republican. Finally, an individual who is
Conservative (Liberal) is 14.3 (22.0)
percent more likely to support legalization relative to a Protestant.
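The average covariate effect of an indicator variable in the binary probit, as reported in Table~\ref{Table:CovEffectBM}, is the sample mean of the difference in fitted probabilities with the indicator switched on versus off, holding all other covariates at their observed values. The sketch below assumes \texttt{X} (including an intercept column) and \texttt{beta} come from an already estimated model; the names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def avg_indicator_effect(X, beta, j):
    """Average covariate effect of indicator column j in a binary probit:
    mean over the sample of P(y=1 | x_j = 1) - P(y=1 | x_j = 0)."""
    X1, X0 = X.copy(), X.copy()
    X1[:, j], X0[:, j] = 1.0, 0.0        # switch the indicator on / off
    return np.mean(norm.cdf(X1 @ beta) - norm.cdf(X0 @ beta))
```

For a continuous covariate such as age, the analogous computation differences the fitted probabilities at the observed value and at the value shifted by the increment of interest (e.g.\ 10 years).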
\subsection{Ordinal Probit Model - The Extent of Marijuana Legalization}\label{sec:Study2}
The year 2013 was the first year in four decades that a majority of Americans
favored legalization of marijuana. While its support has grown over time as
shown in Figure~\ref{fig:Trend}, it is important to distinguish between
different levels of support. This distinction is crucial because support for
personal use of marijuana is stronger than support for its medical use and
has different policy implications. Individuals may opine to support marijuana
for medicinal benefits, but not for personal use. The February 2014 Political
Survey recorded individual response as a three level categorical variable,
which permits use of an ordinal probit model to study the effect of
covariates on public opinion about the extent of legalization.
\subsubsection{Data}\label{subsubsec:opData}
The February 2014 Political Survey was conducted during February 14-23, 2014
by the Princeton Survey Research Associates and sponsored by the Pew Research
Center for the People and the Press. In the survey, a representative sample
of 1,821 adults living in the US were interviewed over telephone, with 481
individuals interviewed over land line and 1,340 over cell phone (including
786 individuals without a land line phone). The sampled data contain several
missing observations and many respondents were unsure about their opinion on
legalization. As before, we remove data on these respondents and are left with
a sample of 1,492 observations for the study.
\begin{table}[!t]
\centering \small \setlength{\tabcolsep}{8pt} \setlength{\extrarowheight}{2pt}
\setlength\arrayrulewidth{1pt}
\caption{Descriptive summary of the variables (February 2014 Political Survey).}\label{Table:Summary2}
\begin{tabular}{llr r}
\toprule
\textsc{variable} & & \textsc{mean} & \textsc{std} \\
\midrule
\textsc{log age} & & 3.72 & 0.44 \\
\textsc{log income} & & 10.63 & 0.98 \\
\textsc{household size} & & 2.74 & 1.42 \\
\midrule
& \textsc{category} & \textsc{counts} & \textsc{percentage} \\
\midrule
\textsc{past use} & & 719 & 48.19 \\
\textsc{male} & & 792 & 53.02 \\
\cmidrule{2-2}
& \textsc{bachelors \& above}
& 551 & 36.93 \\
\textsc{education} & \textsc{below bachelors}
& 434 & 29.09 \\
& \textsc{high school \& below}
& 507 & 33.98 \\
\cmidrule{2-2}
\textsc{tolerant states} & & 556 & 37.27 \\
\textsc{eventually legal} & & 1,154 & 77.35 \\
\cmidrule{2-2}
& \textsc{white} & 1149 & 77.01 \\
\textsc{race}
& \textsc{african american}
& 202 & 13.54 \\
& \textsc{other races}
& 141 & 9.45 \\
\cmidrule{2-2}
& \textsc{republican}
& 333 & 22.32 \\
\textsc{party affiliation}
& \textsc{democrat}& 511 & 34.25 \\
& \textsc{independent \& others}
& 648 & 43.43 \\
\cmidrule{2-2}
& \textsc{protestant}
& 550 & 36.86 \\
& \textsc{roman catholic}
& 290 & 19.44 \\
\textsc{religion}
& \textsc{christian}
& 182 & 12.20 \\
& \textsc{conservative}
& 122 & 8.18 \\
& \textsc{liberal}
& 348 & 23.32 \\
\cmidrule{2-2}
& \textsc{oppose legalization}
& 218 & 14.61 \\
\textsc{public opinion}
& \textsc{legal only for medicinal use}
& 640 & 42.90 \\
& \textsc{legal for personal use}
& 634 & 42.49 \\
\bottomrule
\end{tabular}
\end{table}
The dependent variable in the model is the respondents' answer to the
question, ``Which comes closer to your view about the use of marijuana by
adults?''. The options provided were, `It should not be legal,' `It should be
legal only for medicinal use,' or `It should be legal for personal use'. The
fourth category labeled, `Don't know/Refused' is removed from the study.
Similar to the March 2013 Survey, the February 2014 Survey also collected
information on the age, income, household size, past use, gender, education,
race, party affiliation and religion. We use these variables as independent
variables in the models. All the definitions and categories for the variables
remain the same as in Section~\ref{subsubsec:bpData}. We also include the
indicator variable `tolerant states', with the definition modified to include
Illinois and New Hampshire to the previous list of 20 states.\footnote{Note
that marijuana related laws were passed in Illinois and New Hampshire after
the March 2013 Political Survey, but before the February 2014 Political
Survey.} Finally, we include an additional variable, labeled `eventually
legal', for which data was collected only in the February 2014 Survey. This
variable indicates whether respondents expect marijuana to be legal
irrespective of their individual opinion. We present the descriptive
statistics for all the variables in Table~\ref{Table:Summary2}.
Upon exploration of the socio-demographic characteristics of the current
sample, we note that an average respondent is about 45.5 years old and s/he
belongs to a household of size 3 with an annual income of 60,647 US Dollars.
Thus, the typical respondent is about 6 years younger compared to the March
2013 data and has approximately the same household size and income. The
percentage of males is higher in the current sample by 5 percent, but still
close to a fair split between males and females. Similarly, the sample is
almost evenly split between respondents who have used marijuana and those who
have not. The largest proportion of respondents (36.93 percent) have
`bachelors and above' degree, followed by `high school \& below' degree
(33.98 percent). A significant fraction of the respondents (37.27 percent)
reside in states that have some favorable law on marijuana. Looking at the
additional variable `eventually legal', note that 77.35 percent of the
surveyed people expect marijuana to be legal irrespective of their opinion.
Similar to the earlier data, the sample is predominantly White (77.01
percent) with a good representation (13.54 percent) of the African American
population. Party affiliation shows that 34.25 percent of the sample is
comprised of Democrats, 22.32 percent Republicans and the remaining fraction
are `independent \& others'. With respect to religious classifications, the
largest proportion of respondents are Protestant (36.86 percent), followed by
Liberal (23.32 percent) and Roman Catholics (19.44 percent).
\subsubsection{Estimation}\label{subsubsec:opEstimation}
The ordinal probit model results are presented in
Table~\ref{Table:OrdinalModelResults}, which show the coefficient estimates
and standard errors of four different models. The estimation of models follows
a similar sequence as in Table~\ref{Table:BinaryModelResults}. Model~5 is the
base model and contains log age, log income, past use of marijuana, male,
education categories, household size, tolerant states and eventually legal.
Model~6 adds the race categories to Model~5 and Model~7 adds party
affiliation to the list of variables in Model~6. Finally, Model~8 contains
all the variables in Model~7 and religious categories. The goodness of fit
statistics are presented in the last three rows of
Table~\ref{Table:OrdinalModelResults}. The LR statistics are large and each
model fits better than the respective intercept model. The other two
measures, McFadden's $R^{2}$ and hit-rate, show that Model~7 and Model~8
outperform the remaining two models. While Model~8 provides a better fit
compared to Model~7 as per McFadden's $R^{2}$ (0.1254 compared to 0.1187),
Model~7 has a better hit-rate.
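With three ordered categories, the category probabilities follow from the latent index and the cut-points. The sketch below assumes the first cut-point is normalized to zero, so the single reported cut-point in Table~\ref{Table:OrdinalModelResults} separates the second and third categories; this is a common identification choice, and the exact normalization used here is an assumption.

```python
import numpy as np
from scipy.stats import norm

def ordinal_probit_probs(X, beta, cut):
    """Category probabilities in a 3-category ordinal probit.
    First cut-point normalized to 0; `cut` is the second (reported) cut-point.
    Returns an (n, 3) array: [oppose, medicinal use only, personal use]."""
    z = X @ beta                       # latent index (X includes the intercept)
    p1 = norm.cdf(0.0 - z)             # 'it should not be legal'
    p2 = norm.cdf(cut - z) - p1        # 'legal only for medicinal use'
    p3 = 1.0 - norm.cdf(cut - z)       # 'legal for personal use'
    return np.column_stack([p1, p2, p3])
```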
\begin{table}[!t]
\centering \small \setlength{\tabcolsep}{4pt} \setlength{\extrarowheight}{2pt}
\setlength\arrayrulewidth{1pt}\caption{Estimation results for the ordinal probit model.}
\begin{tabular}{ll d{3.4} d{1.2} r
d{3.4} d{1.2} r
d{3.4} d{1.2} r
d{3.4} d{1.2} }
\toprule
&& \multicolumn{2}{c}{\textsc{model 5}} & &
\multicolumn{2}{c}{\textsc{model 6}} & &
\multicolumn{2}{c}{\textsc{model 7}} & &
\multicolumn{2}{c}{\textsc{model 8}} \\
\cmidrule{3-4} \cmidrule{6-7} \cmidrule{9-10} \cmidrule{12-13}
&& \multicolumn{1}{c}{\textsc{coef}} & \multicolumn{1}{c}{\textsc{se}} &&
\multicolumn{1}{c}{\textsc{coef}} & \multicolumn{1}{c}{\textsc{se}} &&
\multicolumn{1}{c}{\textsc{coef}} & \multicolumn{1}{c}{\textsc{se}} &&
\multicolumn{1}{c}{\textsc{coef}} & \multicolumn{1}{c}{\textsc{se}} \\
\midrule
\textsc{intercept} && 1.26^{**} & 0.44 && 1.27^{**} & 0.45 &&
0.83^{*} & 0.46 && 0.34 & 0.48 \\
\textsc{log age} && -0.44^{**} & 0.07 && -0.45^{**} & 0.07 &&
-0.45^{**} & 0.08 && -0.35^{**} & 0.08 \\
\textsc{log income} && 0.07^{*} & 0.03 && 0.07^{*} & 0.03 &&
0.08^{**} & 0.03 && 0.09^{**} & 0.04 \\
\textsc{past use} && 0.74^{**} & 0.06 && 0.73^{**} & 0.06 &&
0.71^{**} & 0.06 && 0.69^{**} & 0.06 \\
\textsc{male} && 0.06 & 0.06 && 0.06 & 0.06 &&
0.07 & 0.06 && 0.06 & 0.06 \\
\textsc{bachelors \& above} && 0.26^{**} & 0.08 && 0.26^{**} & 0.08 &&
0.25^{**} & 0.08 && 0.24^{**} & 0.08 \\
\textsc{below bachelors} && 0.06 & 0.08 && 0.05 & 0.08 &&
0.05 & 0.08 && 0.05 & 0.08 \\
\textsc{household size} && -0.04^{*} & 0.02 && -0.04 & 0.02 &&
-0.03 & 0.02 && -0.02 & 0.02 \\
\textsc{tolerant states} && 0.11^{*} & 0.06 && 0.13^{**} & 0.06 &&
0.09 & 0.06 && 0.07 & 0.07 \\
\textsc{eventually legal} && 0.58^{**} & 0.07 && 0.58^{**} & 0.07 &&
0.56^{**} & 0.07 && 0.57^{**} & 0.07 \\
\textsc{african american} && .. & .. && 0.11 & 0.09 &&
-0.01 & 0.10 && 0.03 & 0.10 \\
\textsc{other races} && .. & .. && -0.18^{*} & 0.11 &&
-0.26^{**} & 0.11 && -0.27^{**} & 0.11 \\
\textsc{democrat} && .. & .. && .. & .. &&
0.48^{**} & 0.09 && 0.44^{**} & 0.09 \\
\textsc{other parties} && .. & .. && .. & .. &&
0.40^{**} & 0.08 && 0.36^{**} & 0.08 \\
\textsc{roman catholic} && .. & .. && .. & .. &&
.. & .. && 0.10 & 0.09 \\
\textsc{christian} && .. & .. && .. & .. &&
.. & .. && 0.16 & 0.10 \\
\textsc{conservative} && .. & .. && .. & .. &&
.. & .. && 0.09 & 0.12 \\
\textsc{liberal} && .. & .. && .. & .. &&
.. & .. && 0.39^{**} & 0.09 \\
\textsc{cut-point} && 1.43^{**} & 0.05 && 1.43^{**} & 0.05 &&
1.45^{**} & 0.05 && 1.46^{**} & 0.05 \\
\midrule
\textsc{LR ($\chi^{2}$) statistic}
&& & 316.27 && & 321.47 && & 356.99 && & 377.02 \\
\textsc{mcfadden's $R^{2}$}
&& & 0.10 && & 0.11 && & 0.12 && & 0.12 \\
\textsc{hit-rate}
&& & 57.77 && & 57.44 && & 59.11 && & 58.91 \\
\bottomrule
\\
\multicolumn{5}{l}{\footnotesize{$\ast \ast$ p $<$ 0.05, $\ast$ p $<$ 0.10}}
\end{tabular}
\label{Table:OrdinalModelResults}
\end{table}
\subsubsection{Results}\label{subsubsec:opResults}
We focus on the results from Model~8 because it is the most general model,
provides the best fit according to McFadden's $R^{2}$ and no variables change
sign. The results indicate that log age has a negative effect on the support
for personal use (third category) and is statistically significant at 5
percent level. Alternatively, log age has a positive effect on opposing
legalization (first category) and is statistically significant. However, the
effect of age on medicinal use only (second category) cannot be determined
\emph{a priori}. Henceforth, we shall only discuss the effect on personal use
and the impact on opposing legalization will be opposite to that of personal
use. As before, the default level of significance used is 5 percent and
further discussion will omit reference to significance level.
We note that the coefficient for log income is positive and statistically
significant. This implies that individuals with higher income are more likely
to support legalization for personal use. Past use of marijuana has a
statistically significant positive effect on the probability of supporting
personal use of marijuana. Moreover, the coefficient for past use is largest
among all the variables, a result which is similar to that obtained in the
binary probit model. Contrary to the finding in
Section~\ref{subsubsec:bpResults}, the coefficient for male is positive, but
is not statistically significant. Thus, the current data do not confirm any
role of gender on public opinion towards marijuana. Higher education is
positively associated with support for personal use of marijuana. However,
only the coefficient for `bachelors degree \& above' is statistically
significant. The coefficients for household size and `tolerant states' are
not significant and conform to the results from the binary probit model. We
find that `eventually legal' has a significant positive effect, indicating
that if an individual expects marijuana to be legal irrespective of his or
her opinion, then s/he is more likely to support legalization for personal
use.
The race variables suggest that opinions of African Americans on personal use
of marijuana are not significantly different compared to the Whites. Such a
similarity of opinions across race was also observed in the binary probit
model. In contrast, `Other Races' has a significant negative coefficient and
is more opposed to legalization as compared to Whites. The coefficients for
political party affiliations are in consonance with the results from
Section~\ref{subsubsec:bpResults}. Affiliation to Democratic Party or `Other
Parties' increases the support for personal use and the coefficients are
significant. Lastly, religious affiliations do not show a strong effect.
Here, only the Liberals are more supportive of personal use of marijuana,
while the opinions of the remaining religious categories are not
significantly different from the base category, Protestant.
\begin{table}[!t]\centering
\small \setlength{\tabcolsep}{6pt} \setlength{\extrarowheight}{2pt}
\caption{Average covariate effects from Model~8.}
\label{Table:CovEffectOM}
\begin{tabular}{ll S[table-format=-1.4] S[table-format=-1.4] S[table-format=-1.4] }
\toprule
\textsc{covariate} & & \text{$\Delta$P(not legal)} & \text{$\Delta$P(medicinal use)}
& \text{$\Delta$P(personal use)} \\
\midrule
\textsc{age, 10 years} & & 0.015 & 0.012 & -0.028 \\
\textsc{income, \$10,000} & & -0.005 & -0.003 & 0.008 \\
\textsc{past use} & & -0.129 & -0.113 & 0.243 \\
\textsc{bachelors \& above} & & -0.045 & -0.035 & 0.080 \\
\textsc{eventually legal} & & -0.126 & -0.060 & 0.186 \\
\textsc{other races} & & 0.059 & 0.031 & -0.089 \\
\textsc{democrat} & & -0.080 & -0.066 & 0.147 \\
\textsc{other parties} & & -0.070 & -0.051 & 0.121 \\
\textsc{liberal} & & -0.068 & -0.066 & 0.134 \\
\bottomrule
\end{tabular}
\end{table}
We mentioned in Section~\ref{sec:Model1} that the coefficients of the ordinal
probit model only give the direction of impact for the first and last
categories, but not the remaining categories. The actual covariate effects
need to be calculated for all the categories. We compute the average
covariate effects for all significant variables and present them in
Table~\ref{Table:CovEffectOM}. From the table, note that past use,
eventually legal, and identifying oneself as a Democrat are three variables
with the highest impact on public opinion. Past use of marijuana increases
support for personal use by 24.3 percent and decreases the support for
medicinal use and oppose legalization by 11.3 and 12.9 percent,
respectively. Similarly, a respondent who expects marijuana to be legal is
18.6 percent more likely to support marijuana for personal use. This increase
comes from a decrease in probability for medicinal use and oppose
legalization, which are 6.0 and 12.6 percent, respectively. In the same way,
a respondent who is a Democrat is 14.7 percent more likely to favor personal
use, and 6.6 and 8.0 percent less likely to favor medicinal use and oppose
legalization, respectively. The covariate effects for the remaining variables
can be interpreted similarly.
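The average covariate effects in Table~\ref{Table:CovEffectOM} can be computed by averaging, over the sample, the change in each of the three category probabilities when an indicator is switched from 0 to 1. The sketch below again assumes the first cut-point is normalized to zero and that \texttt{X} and \texttt{beta} come from an estimated model; all inputs are illustrative.

```python
import numpy as np
from scipy.stats import norm

def avg_ordinal_indicator_effect(X, beta, cut, j):
    """Average effect of indicator column j on each category probability
    of a 3-category ordinal probit (first cut-point normalized to 0).
    Returns [d P(not legal), d P(medicinal use), d P(personal use)]."""
    def probs(Z):
        z = Z @ beta
        p1 = norm.cdf(-z)
        p2 = norm.cdf(cut - z) - p1
        return np.column_stack([p1, p2, 1.0 - norm.cdf(cut - z)])
    X1, X0 = X.copy(), X.copy()
    X1[:, j], X0[:, j] = 1.0, 0.0         # switch the indicator on / off
    return (probs(X1) - probs(X0)).mean(axis=0)
```

Because the three probabilities sum to one for every respondent, the three effects always sum to zero, which serves as a useful sanity check on the reported rows.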
\section{Conclusion}\label{sec:Conclusion}
This chapter presents an overview of two popular ordinal models (ordinal
probit and ordinal logit models) as well as two widely used binary models
(probit and logit models). These models fall within the class of discrete
choice models and are extremely popular in several disciplines including
economics, epidemiology, finance, and sociology. The models are described
using the latent variable threshold-crossing framework since it elegantly
connects individual choice behavior with the random utility model in
economics. While we focus on the ordinal probit and binary probit models as
prototypes to derive the likelihood and outline the estimation procedure, the
approach is completely applicable to its logit counterparts. Some interesting
aspects about interpreting the coefficients of logit models are emphasized.
Since the models considered here are non-linear models, the coefficients
cannot be interpreted as covariate effects. We explain how to compute the
covariate effects when the covariates are continuous and when they are binary
(indicator variable). Measures to assess model fitting, namely, likelihood
ratio statistic, McFadden's R-square, and hit-rate are also described. We
also include specific applications of discrete choice models, wherein we
utilize the ordinal probit and binary probit models to analyze public opinion
on marijuana legalization and extent of legalization in the United States --
a notable and contentious topic with credible arguments from proponents as
well as critics of the policy.
The first application utilizes a binary probit model to analyze the response
to legalization (`oppose legalization' or `favor legalization') based on
individual demographic variables, educational background, racial
characteristics and political affiliation of the respondents. The data is
taken from the March 2013 Political Survey collected by the Pew Research
Center. The results suggest that while log age has a negative effect, past
use of marijuana, male, higher education, affiliation to the Democratic or
`Independent \& Other' parties, as well as Roman Catholic, Conservative and
Liberal religious beliefs have a positive effect on the probability of
supporting legalization. Not surprisingly, past use of marijuana has the
highest positive effect and increases the probability of support by 28.5
percent. The proposed model performs well and correctly classifies 69 percent
of the responses.
The second study employs an ordinal probit model to analyze the ordered
response to legalization (`oppose legalization', `legal for medicinal use' or
`legal for personal use') based on a similar set of covariates as in the
first study. Data for this study is taken from the February 2014 Political
Survey collected by the Pew Research Center. The results show that log age
and belonging to `Other Races' (non-White and non-African American)
negatively (positively) affects the probability of supporting personal use
(oppose legalization) and the latter has the highest negative effect at $8.9$
percent. The variables that have a positive (negative) effect on personal use
(oppose legalization) include income and indicators for past use of
marijuana, bachelors or higher education, individual expectation on eventual
legalization, Democratic Party, `Other Parties' and Liberal religious
beliefs. Amongst these, past use of marijuana has the highest positive effect
on personal use at 24.3 percent. The proposed ordinal model performs well and
correctly classifies approximately 59 percent of the responses.
The insights from these studies are interesting and may assist policymakers
to better assess public preference regarding marijuana legalization, the
extent of legalization (particularly, medical marijuana) and the factors
associated with such preferences. For instance, the finding that educational
attainment has a positive impact on public support for legalization implies
that providing information on the medicinal findings on marijuana and
emphasizing college and university education is likely to increase support
for it. If people believe that the benefits outweigh the costs, then public
opinion will move further towards legalization and vice-versa. These findings
may also be helpful to various advocacy groups and opposition lobbies engaged
in promoting or opposing unrestricted legalization of marijuana respectively.
A clear understanding of the underlying factors that drive an individual's
opinion will help these groups better plan their campaigns. For example,
since support for legalization is negatively related with age, groups
opposing legalization may consider not campaigning amongst the youth as it is
unlikely to yield support. Similarly, lobbies engaged in opposing
legalization may spend their time on better prospects than trying to
convince an individual or group with a history of marijuana use to oppose
legalization.
\clearpage \pagebreak
\pdfbookmark[1]{References}{unnumbered}
\section*{Introduction}
Though QSO absorption lines have contributed many impressive strides
in our understanding of the UV universe (Charlton, this volume),
{\it terra--incognita\/} remains to be explored and scientifically
exploited.
A few of the ultimate aims of QSO absorption line studies are to
establish the history of cosmic chemical evolution, the shape and
intensity of the UV background, the evolving rate of galactic
accretion events and star bursting outflows, and the reciprocal
roles these play in galactic formation and evolution over $\sim 95$\%
of the age of the universe.
The UV rest--frame gaseous conditions seen in absorption at high
redshift do {\it not\/} suffer cosmologically induced effects that
might otherwise be mistaken as evolutionary processes
(i.e.~k--corrections).
Thus, from $z \sim 4$ to $z = 0$, statistical changes in absorbing
conditions, such as profile velocity spreads, numbers of
subcomponents, and ionization levels, can be unambiguously attributed
to evolution in either the structures, dynamics, numbers, and/or
ionization and chemical conditions of UV flux sensitive gas--phase
baryons.
In this contribution, the unknown ``absorbing'' UV universe is
discussed and scientific motives for its exploration are given.
Neutral gas is easily traced using the {{\rm Ly}\kern 0.1em$\alpha$}
$\lambda 1216$ transition.
Traditionally, the {{\rm Mg}\kern 0.1em{\sc ii}} $\lambda\lambda 2796,
2803$ and the {{\rm C}\kern 0.1em{\sc iv}} $\lambda\lambda 1548,
1550$ resonant doublets have been used as tracers of the low and high
ionization gas, respectively, because they are very strong in
absorption and have easily identified doublet patterns.
Presented in Figure~\ref{cwcfig:hiresdata} are the {{\rm Mg}\kern
0.1em{\sc ii}} $\lambda 2796$ profiles from a HIRES/Keck survey
\cite{cwcref:thesis}.
For 15 of these systems, Churchill et~al. \cite{cwcref:csv97}
compared the absorption and luminous properties of the galaxies.
They concluded that {{\rm Mg}\kern 0.1em{\sc ii}} absorbing gas
exhibits no clear spatial distribution or systematic kinematics and
suggested the gas results from episodic galactic processes.
The implications for galaxy evolution are not entirely clear.
The unexplored kinematics of the high ionization gas
(i.e.~{{\rm C}\kern 0.1em{\sc iv}}) in these systems will be key for
further interpretation.
\begin{figure}[hbt]
\cwcplotmacro{cwchurchill_fig1.eps}{3.5in}{0}{63.}{55.}{-256}{-28}
\caption{The $z < 2$ HIRES/Keck {{\rm Mg}\kern 0.1em{\sc ii}}
$\lambda$2796 transition in absorption presented in line of sight
velocity of the rest frame and in order of increasing redshift (marked
above the continuum). The absorption arises in the extended low
ionization gas surrounding galaxies. For each galaxy, the
$\lambda$2803 transition and several {{\rm Fe}\kern 0.1em{\sc ii}}
transitions have also been observed.
\protect\label{cwcfig:hiresdata}}
\end{figure}
Shown in the left panels of Figure~\ref{cwcfig:tpcf_dndz} are the {{\rm
Mg}\kern 0.1em{\sc ii}} Two--Point Clustering Functions (TPCFs) of
subcomponents (blended ``clouds'' decomposed with Voigt Profiles).
The TPCF gives the probability of finding two clouds separated by
$\Delta v$ in an absorbing system \cite{cwcref:cvc97}.
In principle, parameterizing the TPCF by multi--component Gaussians
provides a means for statistically quantifying kinematic
evolution by comparing different redshift regimes.
Additionally, the TPCFs of low and high ionization gas can be compared
at similar epochs for quantifying both the relative ionization and
kinematic conditions.
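A TPCF of this kind can be tabulated by pooling the pairwise velocity splittings of the Voigt-profile subcomponents over all systems and normalizing the histogram. The sketch below uses toy velocities and uniform bins; it omits the pair weighting and multi-component Gaussian fitting of the cited analyses:

```python
from itertools import combinations

def tpcf_histogram(systems, bin_width=20.0, v_max=400.0):
    """Normalized distribution of pairwise velocity separations |dv|
    between 'clouds', pooled over all absorbing systems.

    systems : list of per-system cloud velocities (km/s)
    """
    n_bins = int(v_max / bin_width)
    counts = [0] * n_bins
    for velocities in systems:
        for v1, v2 in combinations(velocities, 2):
            b = int(abs(v1 - v2) / bin_width)
            if b < n_bins:
                counts[b] += 1
    total = sum(counts) or 1  # avoid division by zero for empty input
    return [k / total for k in counts]

# Two toy systems of Voigt-profile subcomponents (velocities in km/s)
tpcf = tpcf_histogram([[0.0, 35.0, 60.0], [-120.0, -90.0, 0.0, 15.0]])
```

Comparing such histograms between redshift subsamples, or between low and high ionization species, is what allows the kinematic evolution to be quantified statistically.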
The high resolution spectra required to study either {{\rm Mg}\kern
0.1em{\sc ii}} or {{\rm C}\kern 0.1em{\sc iv}} kinematic evolution
from $0 \leq z \leq 4$ do not exist---nor do the spectra to study
their relative kinematics at $z\leq 1.2$ and $z\geq 2.2$.
The former represents 50--70\% of the age of the universe and the
latter covers the epoch when galaxy evolution is believed to be
very active.
What data exist, and what remains unknown?
\begin{figure}[hbt]
\cwcplotmacro{cwchurchill_fig2a.eps}{1.85in}{0}{35.}{35.}{0}{-106}
\cwcplotmacro{cwchurchill_fig2b.eps}{1.85in}{0}{35.}{35.}{-203}{43}
\vglue -0.72in
\caption{
(left panels) The Two Point Clustering Function of {{\rm Mg}\kern
0.1em{\sc ii}} clouds in galactic halos. The upper panel is for $0.4
\leq z \leq 1.7$, and the lower panel is for $0.4 \leq z \leq 1.0$,
the latter presented because the higher redshift subsample is biased
toward the strongest absorption strengths.
--- (right panel) The number per unit redshift for tracers of neutral
({{\rm H}\kern 0.1em{\sc i}}), low ({{\rm Mg}\kern 0.1em{\sc ii}}),
and high ({{\rm C}\kern 0.1em{\sc iv}}) ionization gas (adapted from
Steidel 1993 \protect\cite{cwcref:steidel93}).
The unexplored UV absorbing universe and the required technology are
shown.
\protect\label{cwcfig:tpcf_dndz}}
\end{figure}
The numbers of neutral, low, and high ionization systems are shown in
the right panel of Figure~\ref{cwcfig:tpcf_dndz}.
These numbers are the product of system number density and
gas cross section (ionization and metallicity dependent).
{{\rm Mg}\kern 0.1em{\sc ii}} has been surveyed from $0.3 < z < 2.2$
and traces the number of Lyman limit systems (LLS).
{{\rm C}\kern 0.1em{\sc iv}} has been surveyed from $1.2 < z < 3.7$
and shows a dramatic increase in the number of strong systems.
The HST Key Project will soon provide overlap between {{\rm C}\kern
0.1em{\sc iv}} and {{\rm Mg}\kern 0.1em{\sc ii}} for $z \leq 1.2$ (low
resolution only).
The $z \leq 1.2$ {{\rm C}\kern 0.1em{\sc iv}} kinematics are unknown,
and {{\rm Mg}\kern 0.1em{\sc ii}} is unexplored for $z \geq 2.2$.
High resolution spaced--based (UV) spectra are needed to extend {{\rm
C}\kern 0.1em{\sc iv}} kinematic studies to $z \leq 1.2$.
Near Infrared (IR) spectra are needed for a {\it complete
and uniform\/} survey of {{\rm Mg}\kern 0.1em{\sc ii}} to $z \geq
2.2$, followed by a higher resolution study of the gas kinematics.
These data are necessary if we are to establish a complete picture of
absorbing gas to $z=4$ for tracking the UV background, chemical
evolution, and inferring their roles in galactic evolution.
\section*{Detailing High Ionization at $z \leq 1.2$}
Consider the $z=0.927$ systems shown in the upper right panel of
Figure~\ref{cwcfig:q1206}.
The complex {{\rm Mg}\kern 0.1em{\sc ii}} doublets have a velocity
spread of 500 km~s$^{-1}$.
An additional system is present 1100 km~s$^{-1}$ to the red,
at $z=0.934$.
The lower right panels show the {{\rm C}\kern 0.1em{\sc iv}}, {{\rm
Si}\kern 0.1em{\sc iv}} $\lambda\lambda 1393, 1402$ and {{\rm N}\kern
0.1em{\sc v}} $\lambda\lambda 1238, 1242$ doublets aligned by their
zero point velocities.
The {{\rm C}\kern 0.1em{\sc iv}} doublet has not been resolved, but
the profile suggests complex kinematics.
In the ``CIV1550'' panel, note the previously unreported {{\rm C}\kern
0.1em{\sc iv}} doublet from the $z=0.934$ system.
\begin{figure}[hbt]
\cwcplotmacro{cwchurchill_fig3a.eps}{1.7in}{0}{42.}{42.}{-108}{-90}
\cwcplotmacro{cwchurchill_fig3b.eps}{1.7in}{0}{42.}{42.}{-225}{47}
\vglue -0.85in
\caption{
(right panels) The $z \sim 0.93$ cluster of systems toward QSO
1206+459 (FOS/HST data courtesy D. Schneider).
--- (left panel) An example of resolved {{\rm C}\kern 0.1em{\sc iv}}
profiles. These HIRES/Keck {{\rm C}\kern 0.1em{\sc iv}} profiles show
each transition of the doublet highly resolved into multiple
subcomponents (upper panels: $z=2.106$; lower panels: $z=1.937$).
\label{cwcfig:q1206}}
\end{figure}
Photoionization modeling (CLOUDY) of the {{\rm Mg}\kern 0.1em{\sc ii}}
profiles, {{\rm Ly}\kern 0.1em$\alpha$} (not shown), {{\rm C}\kern
0.1em{\sc iv}}, {{\rm Si}\kern 0.1em{\sc iv}}, and {{\rm N}\kern
0.1em{\sc v}} equivalent widths, was unsuccessful at matching the data
(the ionization parameter and {{\rm H}\kern 0.1em{\sc i}} column
densities were varied for each subcomponent).
Apparently, the systems are not multiple single--phase photoionized
``clouds'' (cf.~\cite{cwcref:bergeron94}).
The gas could be shock heated (starburst), or the high
ionization gas could be intercluster material not spatially distributed
with the low ionization gas.
An example of how the FOS/HST {{\rm C}\kern 0.1em{\sc iv}} profiles
may appear when resolved is shown in the left panels of
Figure~\ref{cwcfig:q1206}.
Three galaxy candidates, one with a confirmed redshift, have been
identified within 10{\hbox{$^{\prime\prime}$}} of the QSO.
High resolution STIS/HST spectra are required if we are to gain an
appreciation of gas associated with galaxies and their
environments.
A sample of 15 systems has been selected by their HIRES/Keck
{{\rm Mg}\kern 0.1em{\sc ii}} absorption properties for a STIS/HST
study to obtain the first view of the velocity spreads and
cloud--cloud clustering of {{\rm C}\kern 0.1em{\sc iv}} in $z \sim 1$
galactic halos.
This program allows the first direct comparison of {{\rm Mg}\kern
0.1em{\sc ii}} and {{\rm C}\kern 0.1em{\sc iv}} kinematics in $z\sim
1$ galaxies.
Additionally, the $z \leq 1$ {{\rm C}\kern 0.1em{\sc iv}} TPCF from
STIS can be compared to that measured at $z\sim 3.0$
\cite{cwcref:rauch97}, allowing a direct quantification of the
kinematic evolution of high ionization gas.
\section*{The $2.2 \leq z \leq 3.8$ UV Universe}
As seen in the right panel of Figure~\ref{cwcfig:tpcf_dndz}, the
strong {{\rm C}\kern 0.1em{\sc iv}} systems increase in number from
$z=4$ to $z=1.2$.
It has been suggested (see \cite{cwcref:lauroesch96}) that this rapid
evolution is due to an epoch of increased metallicity, and is likely
not due to an evolving UV background flux.
However, the metallicity enrichment scenario is corroborated only by a
few, small, non--uniform data sets of singlet low ionization
transitions, and, as such, is plagued by several uncertainties.
A uniform sample of {{\rm Mg}\kern 0.1em{\sc ii}} doublets would yield
an unambiguous look at how the low ionization gas evolves out to
$z=4$, providing the leverage needed to settle the ``chemical
enrichment -- UV flux evolution'' debate.
With an IR spectrograph attached, the Hobby--Eberly Telescope (HET) is
ideally suited for a large {{\rm Mg}\kern 0.1em{\sc ii}} survey.
At Penn State, we (including Beatty, Charlton, Ramsey, \& Schneider)
are building JCAM, an $R = 10,000 - 20,000$ IR spectrograph, for the
HET.
With JCAM/HET, we can obtain a 0.15~{\AA} 3$\sigma$ rest--frame
equivalent width limit for $2.2 \leq z \leq 3.8$ in $\sim 1$~hr for
a $V=19$ QSO.
With the $R=10,000$ survey, we will obtain the first uniform and
complete sample of low ionization gas to $z=4$.
Our goal is to explore the implications of metallicity enrichment for
galaxy evolution and the redshift evolution of the UV meta--galactic
background.
At $R=20,000$, we will perform follow--up high resolution observations
to study the absorbing gas kinematics.
We aim to construct a $z\sim 3.5$ {{\rm Mg}\kern 0.1em{\sc ii}} TPCF
and directly measure the clustering evolution of low ionization gas by
comparing it to the $z\sim 1$ TPCF measured by Churchill et~al.
\cite{cwcref:cvc97}.
We also plan to compare the {{\rm Mg}\kern 0.1em{\sc ii}} TPCF with
the {{\rm C}\kern 0.1em{\sc iv}} TPCF measured by Rauch et~al.
\cite{cwcref:rauch97}.
The ultimate goal of our research program is (1) to make the unknown
UV rest--frame known in the currently unexplored redshift regimes,
and (2) to help develop a complete view of the evolution of gas and
its role in galaxy and chemical evolution.
\section{Introduction}\label{sec:intro}
\begin{figure}[h]
\centering
\includegraphics[height=60mm,width=\linewidth,keepaspectratio]{shema_general.pdf}
\caption{Overview of the proposed babbling approach. The robot chooses a point in the environment with which to interact and then observes whether an object has moved as a result of its action. On the basis of such interactions, a classifier is trained online using the information gathered. Finally, a relevance map is computed to guide exploration.}
\label{fig:schema_gen}
\end{figure}
Beyond a preprogrammed scenario, building robots that are able to act and fulfill a mission in uncontrolled and unstructured environments remains a challenge. Even everyday environments, which tend to be highly structured, are hard to deal with given their variability. To achieve tasks in such environments, a robotic system needs a robust and adaptive perception of the world. The focus of this work is to bootstrap a world's representation learning process for a robot, i.e. a robotic ecological perception.
Vision is a rich modality that carries a dense set of information reflecting the complexity of realistic environments. To understand a visual scene, a robot must first be able to focus on important components of its visual field according to its embodiment, skills, and current goal. To select an appropriate action, it must simplify this sensor flow. Certain hypotheses can be formulated related to the structure of an environment, e.g. the tabletop hypothesis, or to the shapes of objects; however, these hypotheses limit the ability of the robot to adapt to new environments. A way to avoid making such assumptions is to allow the robot to explore its surroundings via direct interaction. In this way, by observing the effect of its actions on the environment, the robot can gain novel sensory signals and learn from regularities in the sensorimotor space. This domain of research is known as interactive perception: action to enhance perception, and perception to enhance interaction \citep{bohg2017interactive}.
Most previous studies on interactive perception have object segmentation, recognition and manipulation as goals \citep{bohg2017interactive}. To achieve these complex objectives, researchers use a passive image processing bootstrap step to produce object hypotheses. Then, through interactions with the environment, the robot confirms or rejects these hypotheses. Certain assumptions must be introduced to implement this preliminary step, e.g. objects are on a planar surface \citep{VanHoof2014,Gupta,Chang2012,Bersch2012,Metta2002,Fitzpatrick,Fitzpatrick2003,schiebener2011segmentation,hermans2012guided} or object shapes are close to predefined primitives (spheres, cubes, cylinders, etc.) \citep{Schiebener2014,kuzmivc2010object}. The present work aims at removing the need for these hypotheses. To this end, we propose to build a useful segmentation of a scene with an interactive perception approach.
To deal with the complexity of visual scenes, humans do not consider all parts of a scene as equivalent. They possess a visual attention process that lowers energy consumption during the analysis of a scene \citep{carrasco2011visual} by focusing attention on elements of the environment that are considered salient \citep{Itti2001}.
In computer vision, the study of visual saliency in human attention has led to salient object detection within an image \citep{Borji2014}. Works in this field aim at producing a saliency map of images, i.e. a binary map representing an accurate segmentation of an object in an image.
With a saliency map, a robot could directly focus its attention on elements important to its task, for instance part of the environment that can move as a result of its own action, and thus decrease computational time. This represents a starting point for developing an autonomous capacity with which a robot can deal with realistic environments prior to having any representation of objects.
The goal of this work is to define a method that associates the concepts of interactive perception and salient object detection \textit{to autonomously learn a perceptual map which represents moveable parts of an environment}.
The perceptual map is built on the basis of an autonomous exploration involving interactions of a robotic arm with the environment. It represents relevant areas according to the robot's capabilities and to the task being undertaken. Saliency maps are focused on what is considered as salient for a human. A robot has different goals and sensorimotor capabilities and may thus not consider the same areas as relevant. To avoid any ambiguity, we thus term this map a \textit{relevance map}. We assume that \textit{something moveable by a robot's end-effector is potentially an object}. The robot learns to identify the relevant features of components that are moveable as a result of its actions. Relevant areas shown in the map could be one object, a group of objects, part of an object, or an articulated object. By learning a simpler representation than object models for recognition or manipulation, our method requires fewer assumptions specific to the environment or to the objects. Scenarios that differ from tabletop settings, as well as objects with complex shapes and textures, are therefore considered in this study.
The main contribution of this work is \textit{a segmentation process to identify relevant parts of the visual scene using the interactive perception paradigm with minimum a priori knowledge about the environment structure.}
This work is not focused on object extraction for recognition or manipulation; rather, it constitutes a preliminary step, allowing an initial identification prior to a more targeted exploration. With minimum a priori knowledge of the environment, this work represents the very first step of a developmental process which could lead to a robust and adaptive extraction and identification of objects that are completely unknown to the robot designer.
\section{Related Work}\label{sec:rela_works}
\subsection{Saliency Map}\label{sec:SalM}
Saliency is directly linked to the study of human attention. A saliency map shows the distribution of salient components in the visual field, i.e. parts of the visual field that attract the gaze \citep{Itti2001}. It assesses which object in a picture will most attract the visual attention of a human \citep{Borji2014}.
Saliency maps can be built by different methods. The three main methods are salient object detection (SOD), fixation prediction (FP) and object proposal generation (OPG) \citep{Borji2014}. These methods have the same goal, i.e. \textit{detecting objects based on the study of human attention.} In other words, they aim to determine which object in a picture would most attract the visual attention of a human. The aim of SOD is to generate a saliency map that represents with high accuracy the most salient object in a picture. The saliency map is composed of regions in the picture that represent salient objects. The aim of FP is to produce a saliency map that represents possible fixation points for human attention; the corresponding saliency map is a set of points. Finally, the aim of OPG is to propose bounding boxes that might include an object. The result of OPG is not exactly a saliency map, but it shares the same properties.
In the following, the focus will be on SOD methods, which are the closest methods to the one introduced later herein.
SOD is used to detect the most salient object in an image and then produce a clean segmentation of the object boundaries. Most methods focus on detecting one salient object (the most salient), but some attempt to detect several objects \citep{Borji2014}. SOD methods have two components. First, they detect the salient parts in the image to yield a grayscale map, with a white color indicating the most salient parts. Second, they build an accurate segmentation of the object boundaries by applying a threshold and generating a binary map in which white areas represent the most salient object.
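The second, binarization stage can be as simple as an adaptive threshold over the grayscale map. The sketch below uses a twice-the-mean rule, which is one common convention and not the specific method of any surveyed approach:

```python
def binarize_saliency(gray_map, threshold=None):
    """Second SOD stage: turn a grayscale saliency map (values in
    [0, 1]) into a binary segmentation.  Without an explicit
    threshold, twice the mean saliency is used (a common adaptive
    rule; Otsu's method is another option)."""
    flat = [v for row in gray_map for v in row]
    if threshold is None:
        threshold = 2.0 * sum(flat) / len(flat)
    return [[1 if v >= threshold else 0 for v in row] for row in gray_map]

# A 2x2 toy saliency map: only the bright pixel survives binarization
binary = binarize_saliency([[0.05, 0.9], [0.1, 0.05]])
```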
SOD is divided into two main categories: approaches using heuristics and approaches using machine learning algorithms. In both cases, strong assumptions are made in relation to what object in a picture will be the most attractive to a human:
\begin{itemize}
\item Center prior: salient objects are more likely to be in the center of the picture \citep{liu2014adaptive,jiang2013submodular,peng2013salient}
\item Background prior: the narrow borders of the image is part of the background \citep{liu2014adaptive,li2013saliency,jiang2013saliency}
\item Focusness prior: the camera often focuses on a salient object to attract attention; this can be defined as the degree of focus blur \citep{jiang2013salient}
\item Boundary connectivity prior: salient objects are less connected to the image borders \citep{zou2013segmentation,zhu2014saliency}
\item Color prior: certain colors seem to be more attractive to humans, e.g. salient objects are more likely to contain warm colors such as red or yellow \citep{shen2012unified,liu2014adaptive,jiang2013submodular,peng2013salient}
\item Semantic prior: humans pay more attention to certain objects such as faces, cars, dogs, etc. \citep{shen2012unified}
\end{itemize}
The boundary connectivity, background, and center priors suggest that salient areas always exist around the center of the image. The focusness prior assumes that the image has been taken by a human who knows where to focus. These priors cannot be used to detect objects as this would imply that a robot knows where to center its camera and therefore knows where the object is located. The color and semantic priors are specific to humans and may not be relevant in certain situations.
Other priors are heuristics in relation to what an object may look like in a 2D image:
\begin{itemize}
\item Objectness prior: a measure of ``objectness'' is defined based on a provided definition of what an object is, and saliency is then computed from this measure \citep{jia2013category,jiang2013salient}
\item Spatial distribution prior: if a color is widely distributed in an image, the salient object will likely not contain this color \citep{jiang2013submodular}
\end{itemize}
These priors can be useful, but they are not essential and can reduce the generality of the method. In the method proposed herein, most of these priors are replaced by the interaction of the robot with the environment.
In addition to these priors, saliency map methods decompose the visual scene into either blocks or regions. Blocks are rectangles on the image used to compute the visual features. Regions can have an arbitrary shape. They rely on superpixels, which are clusters of similar pixels based on color and contrast.
According to \citet{Borji2015}, the best performing methods have three features in common:
\begin{itemize}
\item Superpixels: contrary to block-based approaches, superpixels produce an accurate object boundary segmentation.
\item Background prior is used. This contrasts with the location prior, which assumes a specific location for a salient object in an image; usually, this is the center of the image. This assumption is strong and restricts the method to single object detection. Moreover, an autonomous robot with no concept of object will not be able to center the image around the area of interest.
\item Machine learning algorithms are used to train a model of saliency. Discriminative regional feature integration \citep{jiang2013discri} can be used to train a regression model based on a 93-dimensional feature vector. This allows the method to be adaptable and scalable to more complex scenarios.
\end{itemize}
These methods are used to build models of what humans would consider as salient. Furthermore, they are focused on static 2D pictures, rather than on the stream of images that a robot can collect while interacting with its environment. The focus here is not on building human-like saliency estimation. The question addressed is \textit{what are the most relevant areas of a real scene for an agent with given capabilities and with a certain goal?} According to \citet{Borji2015}, a region-based approach (i.e. superpixels) with supervised learning is an efficient method for building a saliency map. In this paper, we thus propose a new region-based method to detect relevant objects based on self-supervised learning. Relevance is a similar concept to saliency, but depends on the task and the robot's features instead of human features.
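Such a region-based, self-supervised pipeline can be sketched as an online logistic regression over per-region feature vectors, each labeled by the outcome of an interaction (moved or not). The two-feature encoding, learning rate, and synthetic labels below are illustrative assumptions, not the features or model of any cited method:

```python
import math
import random

class OnlineRelevanceClassifier:
    """Tiny online logistic regression: each sample is the feature
    vector of one region (e.g. mean color, size) with a binary label
    (the region moved / did not move after an interaction)."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, label):
        err = label - self.predict(x)  # gradient step on the log-loss
        for i, xi in enumerate(x):
            self.w[i] += self.lr * err * xi
        self.b += self.lr * err

# Synthetic interactions: feature [1, 0] for regions that moved,
# [0, 1] for regions that did not
clf = OnlineRelevanceClassifier(n_features=2)
random.seed(0)
for _ in range(500):
    moved = random.random() < 0.5
    clf.update([1.0, 0.0] if moved else [0.0, 1.0], 1.0 if moved else 0.0)
```

After training, the per-region prediction plays the role of a relevance score from which a relevance map over the scene can be assembled.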
\subsection{Object Segmentation by Interactive Perception} \label{sec:ip}
\begin{table*}[t!]
\begin{center}
\begin{tabular}{| l | l | l | l |}
\hline
Ref & Goal & Priors & Initial Segmentation \\
\hline
\citet{Kenney2009} & OS & RB, PM & - \\
\hline
\citet{Fitzpatrick2003} & OS & TS, RB, PM, OD & - \\
\hline
\citet{VanHoof2014} & OS & TS, RB, AP & pixel clustering, PS \\
\hline
\citet{ude2008making} & OS, OR & OH, HA & - \\
\hline
\citet{Gupta} & OS, OR & TS, RB, AP & color-based clustering, PS \\
\hline
\citet{Chang2012} & OS, OR & TS, RB, AP & pixel clustering, PS \\
\hline
\citet{hermans2012guided} & OS & TS, RB, AP & PS \\
\hline
\citet{Hausman2013} & OS, OR & TS, RB, AP & RANSAC (shape primitives) \\
\hline
\citet{Bersch2012} & OS, OR & TS, RB, AP & PS \\
\hline
\citet{kuzmivc2010object} & OS, OR & RB, AP, SP & SIFT, RANSAC \\
\hline
\citet{schiebener2011segmentation} & OS, OR & RB, AP, SP, TO & Harris Corner, RANSAC, PS \\
\hline
\citet{Schiebener2014} & OS, OR & TS, RB, AP, SP & saliency map, difference of gaussian \\
\hline
\citet{Bergstrom2011} & OS & RB, PM, AP, TS & HSV Histograms, 3D Ellipsoids \\
\hline
\citet{Xu2014} & OS & RB, PM, AP, TS & Supervoxels \\
\hline
\citet{eitel2017learning} & OS & RB, PM, AP, TS & surface-based \\
\hline
\hline
Our Approach & Relevance Map & AP & Supervoxels \\
\hline
\end{tabular}
\end{center}
\caption{Summary of methods in Interactive Perception to segment objects. The goals are object segmentation (OS) and object recognition (OR). The priors are the following : tabletop scenario (TS), rigid body (RB), action primitives (AP), planar motion of the objects (PM), object database (OD), textured objects (TO), object in hand (OH); shape primitives (SP), and human assistance (HA). In object hypothesis generation, PS stands for plane segmentation}
\label{tab:ip}
\end{table*}
Learning to perceive the world via interaction is known in robotics as interactive perception \citep{bohg2017interactive}. By interacting with its surrounding, a robot learns a representation of the environment through the relation between its capabilities and the effect of its actions.
Early works on this topic have been conducted by \citet{tsikos1988segmentation} in which they proposed a method to separate stacked and heaped objects thanks to interactions (like push, pick and shake) executed by a robotic system. This approach makes further image processing easier. Fifteen years later, interactive perception defined as above was studied by \citet{Metta2002,Fitzpatrick,Fitzpatrick2003}. In their works, a humanoid robot learns to segment a single object on a table and to recognize its arm. The robot interacts with its surrounding and uses optical flow extracted from 2D images to detect motions. By observing the motion as a consequence of its actions the system is able to segment the object from the background. In those studies, motions are restricted to planes and the experiments are tabletop scenarios in order to simplify the problem.
Most works on interactive perception start with a passive image processing step in which a first segmentation is done. This segmentation can be an oversegmentation with segments that are smaller than the objects \citep{VanHoof2014,schiebener2011segmentation,patten2018action}. In this case, assumptions are used to maximize the probability of interacting with segments that are part of objects. In other approaches, segments are object candidates \citep{Gupta,Chang2012,Bergstrom2011,hermans2012guided}. Object candidates are clusters of pixels in 2D images or clusters of points in pointclouds. The actions of a robot are then designed to reject or confirm these hypotheses. The complete interactive perception method generally follows the cycle depicted in figure \ref{fig:IP}: choice of a segment or an object candidate with which to interact, application of an action to the chosen part of the environment, observation of the effect, and update of the perception by merging segments and confirming or rejecting the chosen hypothesis. Some methods use the interaction to collect data and train a model with machine learning algorithms.
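The cycle described above can be written as a generic loop. All four callables below are hypothetical placeholders for robot-specific components, and selecting the most uncertain segment is only one possible heuristic:

```python
def interactive_perception_cycle(segments, act, observe_motion, update, n_steps):
    """Generic interactive-perception loop: select a segment or object
    hypothesis, act on it, observe the effect, update the perception."""
    for _ in range(n_steps):
        segment = max(segments, key=lambda s: s["uncertainty"])
        act(segment)                     # e.g. a push primitive
        moved = observe_motion(segment)  # did anything move?
        update(segment, moved)           # refine the model or hypotheses

# Toy run with two segment hypotheses
log = []
segs = [{"id": 0, "uncertainty": 0.9}, {"id": 1, "uncertainty": 0.2}]
interactive_perception_cycle(
    segs,
    act=lambda s: log.append(("push", s["id"])),
    observe_motion=lambda s: s["id"] == 0,
    update=lambda s, moved: s.update(
        uncertainty=0.0 if moved else s["uncertainty"] / 2),
    n_steps=2,
)
```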
The bootstrap step often uses passive computer vision methods and no interactions. This step relies on assumptions about objects and environments to bootstrap the system. Table \ref{tab:ip} summarizes the priors and techniques used for the bootstrap step. \citet{schiebener2011segmentation} used random sample consensus (RANSAC) to find planes and cylinders in a picture and generate object hypotheses. As they used 2D images, they needed highly textured objects to estimate 3D shapes. Other methods exploit depth information from 3D sensors, such as stereoscopic cameras or IR laser cameras, to detect planes \citep{VanHoof2014,Gupta,Chang2012, Bergstrom2011,hermans2012guided,eitel2017learning}. Indeed, the environments used in these studies were limited to tabletop scenarios to allow segmentation of the table from the objects on top of it. This simplified the initial segmentation, which relied on a clustering method based on color. Their methods also aimed to separate objects via interaction.
To handle textureless objects, an alternative approach is to make assumptions about the shapes of the objects. Candidate object generation then relies on comparisons of objects with primitive shapes, such as cylinders, planes, or spheres \citep{Bersch2012,Hausman2013,Schiebener2014}. \citet{Bersch2012} also segment the table, and \citet{kuzmivc2010object} consider only objects containing planes.
In another study \citep{ude2008making}, no assumptions about object type are made; the method does not rely on candidate object hypotheses. The robot starts with an unknown object in its hand and analyzes it in front of a complex, cluttered environment. This allows the robot to learn a model of the background and of the object. This learning exercise cannot be achieved autonomously by the robot as it needs to start with the unknown object in hand.
These approaches aim at both discovering and separating objects. They require a predefinition of what an object is and also require a dedicated candidate object generation method. Scenarios are typically restricted to tabletop scenarios, in which objects are on a flat surface, to easily distinguish objects from the background. This is a significant limitation to the range of environments that can be handled by the robot. An alternative is to make assumptions about objects, e.g. about their shapes or their textures, but it also requires an a priori definition of the possible shapes of all objects with which the robot may interact. Finally, to prevent any assumptions about the environment or the objects, a human teacher or helper can be involved; however, it also reduces the autonomy of the robot.
\begin{figure}[h]
\centering
\includegraphics[height=60mm,width=1\linewidth,keepaspectratio]{IP_schema.pdf}
\caption{General workflow of interactive perception methods.}
\label{fig:IP}
\end{figure}
In the present study, the goal is to reduce the number of assumptions related to the objects and the environment in order to pave the way to more adaptive robot behaviors \citep{2016ACLN3729}. Provided that the perception system actually sees the objects, a single assumption is used: objects are parts of the environment that the robot can move. The robot uses interactive perception to learn to distinguish relevant objects from the background. An important feature of the proposed method is that \textit{the concept of object does not need to be defined}; it relies only on the concept of relevance.
\subsection{Interactive Perception to Build a Saliency Map}
Little work has focused on making a robot build a saliency map using an interactive perception process. Saliency map building is considered as a pure computer vision problem \citep{Borji2014,Borji2015}, and interactive perception aims to produce detailed representations of objects \citep{bohg2017interactive}. However, several studies make the association between these two fields, albeit indirectly. \citet{Craye2015} build a saliency map of an environment using a mobile robot. \citet{kim2015interactive} and \citet{Ugur2007} build a map that represents where in the environment a robot can apply certain actions. In these studies, the concepts of saliency or of relevance are not mentioned, but this map can be called a \textit{relevance} map as the robot focuses on regions with which it can interact.
\citet{Craye2015} studied how to build a saliency map of their experimental rooms using mobile robots. The robots navigate the laboratory and build a saliency map of objects. In that study, salient objects are defined as elements protruding from a flat surface. The robot could then build saliency maps of objects on the floor (chairs, tables, etc.) or of objects on a table or desk. The saliency map is built through robot exploration, but this is not strictly interactive perception, as the robot's actions produce no change in the environment. Moreover, the assumption that objects lie on a flat surface limits the range of objects that can be recognized as salient.
\citet{Ugur2007} proposed a method for learning a "traversability" affordance with a wheeled mobile robot exploring a simulated environment. The robot tries to drive through different obstacles: lying cylinders, upright cylinders, rectangular boxes, and spheres. The lying cylinders and the spheres are traversable, whereas the boxes and the upright cylinders are not. The robot is equipped with a simulated RGB-D camera and, after each action, collects a sample labeled with the success of driving through the object. An online SVM \citep{bordes2005fast} is then trained on the collected data. The resulting model predicts the traversability of objects from local features. To drive the exploration, an uncertainty measure is computed based on the soft margin of the model's decision hyperplane. Finally, the authors tested their method on a navigation problem with real robots in a realistic environment. Using the model learned in simulation, the robot was able to navigate through a room full of boxes, spherical objects, and cylindrical objects such as trash bins without colliding with non-traversable objects.
In a similar spirit, \citet{kim2015interactive} learn which objects are pushable in a simulated environment using a PR2 equipped with an RGB-D camera. The objects are blocks of the size of the robot that are pushable in one direction, in two directions, or not at all. The PR2 uses its two arms to try to push the blocks. The learning process relies on a logistic regression classifier, and a Markov random field is used to spatially smooth the predictions. The robot explores the environment and collects data by trying to push the blocks. The outcome of the framework is what the authors call an affordance map, indicating the probability that a block is pushable. Whereas in \citet{Ugur2007} learning takes place in a continuous space, in \citet{kim2015interactive} the environment is discretized into a grid whose cells are the size of a block; the learning space is thus discrete. Finally, an exploration strategy based on uncertainty reduction is used to select the next block to interact with.
Kim et al. and U\v{g}ur et al. thus study how a robot can build, through interaction, a \textit{relevance} map relative to a task or capability. They do not use the term explicitly, but the affordance map of \citet{kim2015interactive} is close to our relevance map in that both segment the elements of interest for the agent. However, exploration and learning were conducted in simulation only and in simple environments.
The present work aimed to consider more realistic environments in experiments performed with a real robot equipped with an arm.
\section{Method}
\subsection{Overview}
The goal of our method is to produce a \textit{relevance map} through an autonomous exploration driven by a robotic arm. The robot explores an unknown, dynamic environment\endnote{In this work, "dynamic" means that the state of the environment is not reinitialized at the beginning of each iteration.}. This exploration is driven by a relevance map of the environment, which is built online. Our approach could actually bootstrap most of the methods described in section \ref{sec:ip}. We follow the main principles of interactive perception and SOD. The system first oversegments the scene into regions and then classifies them to generate a grayscale map representing the relevance of the regions. It then chooses an area to explore, interacts with it, observes the effects on the environment, and updates the classifier and the relevance map (see Figure \ref{fig:schema_gen}). The perception relies on an RGB-D camera (a Microsoft Kinect 2\endnote{Other kinds of 3D cameras could be used, such as stereoscopic cameras.}) that provides a 3D pointcloud of the scene. This camera is an active depth camera that has trouble perceiving dark and reflective surfaces \citep{lachat2015first}; this limitation comes from the perception device, not from the proposed method.
\begin{figure}[h]
\centering
\includegraphics[height=60mm,width=.9\linewidth,keepaspectratio]{schema_exp.pdf}
\caption{General workflow of the approach.}
\label{fig:schema_exp}
\end{figure}
Figure \ref{fig:schema_exp} presents the general workflow of the method. The exploration is sequential, with each iteration structured into five steps:
\begin{itemize}
\item[\textbf{Step 1}] Oversegmentation of the pointcloud into regions of the same size, called supervoxels. The oversegmentation is described in section \ref{sec:vccs}. Visual features are extracted to characterize each region (see section \ref{sec:feat}).
\item[\textbf{Step 2}] Computation of the \textit{relevance map} based on the oversegmentation and according to the prediction of the classifier. This step is described in section \ref{sec:sm}.
\item[\textbf{Step 3}] Choice of a supervoxel with which to interact. This choice is driven by a sampling process that relies on the relevance map (see section \ref{sec:choice}).
\item[\textbf{Step 4}] The robot interacts with the selected supervoxel with its push primitive (see section \ref{sec:contr}).
\item[\textbf{Step 5}] Observation of a possible effect. A basic change detection method is applied locally on the chosen supervoxel. The features of the supervoxel are stored in the database as samples. A label of 1 is used if a change is detected, and a label of 0 is used otherwise (see section \ref{sec:motdet}).
\end{itemize}
At the beginning of the exploration, all relevance scores are initialized to 0.5. Without any information about the environment, all supervoxels are assumed to be uncertain and must be explored.
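The five steps above can be sketched as a toy exploration loop. The environment, the features, and the classifier update below are drastically simplified stand-ins for the real perception and control pipeline (all names and the uncertainty-based weighting are illustrative, not the paper's exact formulas):

```python
import random

def explore(supervoxels, n_iterations, seed=0):
    """Toy version of the five-step exploration loop.

    `supervoxels` maps an id to a (feature, moveable) pair; the real system
    extracts features from a pointcloud (Step 1) and pushes with a robot arm
    (Step 4)."""
    rng = random.Random(seed)
    relevance = {sv: 0.5 for sv in supervoxels}  # Step 2 init: everything uncertain
    samples = []                                 # labeled sample database (Step 5)
    for _ in range(n_iterations):
        # Step 3: sample a supervoxel, favoring uncertain ones (weight peaks at p = 0.5)
        weights = [1.0 - abs(2 * relevance[sv] - 1.0) + 1e-6 for sv in supervoxels]
        target = rng.choices(list(supervoxels), weights=weights)[0]
        # Steps 4-5: push the target and observe whether anything moved
        feature, moveable = supervoxels[target]
        samples.append((feature, 1 if moveable else 0))
        # Stand-in for the classifier update + relevance-map recomputation (Step 2):
        # here, relevance is just the empirical "moveable" frequency per feature.
        per_feature = [l for f, l in samples if f == feature]
        for sv, (f, _) in supervoxels.items():
            if f == feature:
                relevance[sv] = sum(per_feature) / len(per_feature)
    return relevance, samples
```

In this toy setting, supervoxels sharing a feature quickly converge to relevance 1 (moveable) or 0 (background), after which the sampling weights collapse and exploration becomes uniform.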
\subsection{Supervoxels}\label{sec:vccs}
The relevance map is based on an oversegmentation into supervoxels using voxel cloud connectivity segmentation (VCCS) \citep{Papon2013}. As illustrated in Figure \ref{fig:supervox}, supervoxels are clusters of voxels. Voxels are the smallest unit of a 3D image as pixels are for 2D images.
\begin{figure}[h]
\centering
\includegraphics[height=60mm,width=.5\linewidth,keepaspectratio]{sv2.pdf}
\caption{Example of a supervoxel segmentation. Both images show pointclouds: the top one is a pointcloud captured with a Kinect, and the bottom one shows the supervoxels extracted from it. In these pictures, the voxels appear as points because they are points of the pointclouds.}
\label{fig:supervox}
\end{figure}
VCCS is similar to superpixel methods such as SLIC \citep{Achanta2010} and turbopixels \citep{levinshtein2009turbopixels}, but it builds clusters of voxels directly on 3D pointclouds. The use of depth information to build supervoxels is a significant improvement over superpixel methods because the resulting segmentation respects object boundaries: the samples stored to update the classifier are more likely to be associated with a single object, which removes a significant source of noise in the classification. Moreover, VCCS works in all kinds of environments because the algorithm uses only low-level features such as color, normals, and geometric descriptors (the fast point feature histograms (FPFH) proposed by \citet{rusu2009fast}). VCCS therefore produces a meaningful oversegmentation of RGB-D images.
VCCS is based on a region-growing algorithm. At the beginning, voxel seeds are evenly distributed over the pointcloud. Then, a local nearest-neighbor algorithm is applied to each seed, controlled by a distance that combines color, spatial, and shape distances (see equation \ref{eq:vccsdist}). The color distance is computed in the CIELab color space and the shape distance is computed with FPFH.
\begin{equation}\label{eq:vccsdist}
D = \sqrt{\frac{\lambda D_c^2}{m^2} + \frac{\mu D_s^2}{3 R_{seed}^2} + \epsilon D^2_{f}}
\end{equation}
where $D_c$ is the distance in the CIELab color space divided by a constant $m$, $D_s$ is the spatial distance divided by $R_{seed}$, the maximal distance considered for clustering, and $D_{f}$ is the distance between FPFH descriptors. The hyperparameters $\lambda$, $\mu$, and $\epsilon$ control the relative importance of color, spatial distance, and geometric similarity. The equation thus exposes the four important hyperparameters of VCCS, which control the size ($R_{seed}$) and the shape ($\lambda$, $\mu$, $\epsilon$) of the supervoxels. The shape hyperparameters are not environment-specific, so they do not limit the variety of environments to which our method can be applied; for the experiments presented in this article, they were fixed to the values proposed by \citet{Papon2013}. The size of the supervoxels is more critical, however, as objects must be at least larger than a supervoxel.
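Equation \ref{eq:vccsdist} translates directly into code. In the sketch below, the default values of $m$, $R_{seed}$, $\lambda$, $\mu$, and $\epsilon$ are placeholders, not the values of \citet{Papon2013}:

```python
import math

def vccs_distance(d_c, d_s, d_f, m=10.0, r_seed=0.05,
                  lam=1.0, mu=1.0, eps=1.0):
    """VCCS clustering distance (Eq. vccsdist): combines the CIELab color
    distance d_c, the spatial distance d_s, and the FPFH distance d_f.
    The default hyperparameter values are illustrative only."""
    return math.sqrt(lam * d_c**2 / m**2
                     + mu * d_s**2 / (3.0 * r_seed**2)
                     + eps * d_f**2)
```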
A further drawback of VCCS is that, when supervoxels are extracted from a video stream of a static scene, the segmentation changes at each frame. This inconsistency of the segmentation over time prevents the application of supervoxel tracking methods.
In this work, supervoxels are the smallest visual unit considered: all visual features are extracted at the supervoxel level, and the robot interacts with supervoxels, which are at least the size of its end-effector. We use the implementation of VCCS available in the Point Cloud Library (PCL) \citep{Rusu_ICRA2011_PCL}.
The output of the VCCS algorithm, as implemented in PCL, is a set of supervoxels with a centroid point for each supervoxel. The centroid point is at the average position and carries the average color and average normal of the points of its supervoxel. The output also includes an adjacency map representing the graph of geographical proximity between supervoxels; accessing the neighbors of a supervoxel is thus a simple lookup in this map.
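For illustration, the adjacency output can be turned into a neighbor lookup as follows. The input pair list mimics the multimap returned by PCL's VCCS implementation, but the data structure here is a simplification:

```python
from collections import defaultdict

def build_adjacency(edges):
    """Build a neighbor lookup from (supervoxel, neighbor) id pairs, as
    provided by the VCCS adjacency map (the pair list is illustrative)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)  # geographical proximity is symmetric
    return adj
```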
\subsection{Features Extraction}\label{sec:feat}
To estimate the relevance of a supervoxel, a classifier is trained on features based on the color and shape of the supervoxels. Each feature characterizes one supervoxel. The features used in this paper are the following:
\begin{itemize}
\item Color CIELab histogram: a five-bin histogram is computed for each dimension of the CIELab\endnote{CIELab is an international colorimetry standard defined by the International Commission on Illumination (CIE) in 1976.} color space on the colored pointcloud of a supervoxel. These three histograms are then concatenated into a 15-bin histogram.
\item Fast point feature histogram (FPFH): a common descriptor that characterizes shape based on a pointcloud of normals \citep{rusu2009fast}. Among other uses, it enters the computation of the VCCS oversegmentation into supervoxels. It is generally used for 3D registration thanks to its high shape-discrimination capacity. In this paper, FPFH is extracted from the pointcloud comprising the targeted supervoxel and its neighbors, and the average descriptor is computed to finally obtain a 33-dimensional feature for the supervoxel.
\end{itemize}
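Assembling the 48-dimensional feature can be sketched as follows. The CIELab value ranges and the histogram normalization are assumptions, as the exact binning is not specified above:

```python
import numpy as np

def supervoxel_feature(lab_points, fpfh_descriptors):
    """Concatenate the 15-bin CIELab histogram of a supervoxel's points with
    the mean of the 33-dimensional FPFH descriptors of its neighborhood.
    Value ranges are assumptions: L in [0, 100], a and b in [-128, 127]."""
    ranges = [(0.0, 100.0), (-128.0, 127.0), (-128.0, 127.0)]
    hist = []
    for dim, (lo, hi) in enumerate(ranges):
        h, _ = np.histogram(lab_points[:, dim], bins=5, range=(lo, hi))
        hist.append(h / max(1, len(lab_points)))   # normalized 5-bin histogram
    color = np.concatenate(hist)                   # 15 values
    shape = np.asarray(fpfh_descriptors).mean(axis=0)  # 33 values
    return np.concatenate([color, shape])          # 48-dimensional feature
```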
FPFH is a simplification of the point feature histogram (PFH) designed to be faster to compute. An FPFH is a combination of simplified PFHs (SPFHs), which are histograms of triplets of values $(\alpha,\phi,\theta)$ computed with equations \ref{eq:spfh}.
\begin{equation}\label{eq:spfh}
\begin{split}
\alpha & = v*n_t \\
\phi & = u*\frac{p_t-p_s}{\Vert p_t - p_s \Vert} \\
\theta & = \arctan(w*n_t,u*n_t)
\end{split}
\end{equation}
where $(. * .)$ denotes the scalar product, $(u,v,w)$ is the orthogonal frame defined in equation \ref{eq:spfhframe} and represented in figure \ref{fig:spfh}, and $n_t$ and $n_s$ are the normals to the surface at points $p_t$ and $p_s$.
To apply these equations, a coordinate frame is defined at one of the points, as shown in figure \ref{fig:spfh} and written in equation \ref{eq:spfhframe}.
\begin{figure}[h]
\centering
\includegraphics[width=.9\linewidth]{pfh_frame.pdf}
\caption{Schema of how the orthogonal frame $(u,v,w)$ is defined on which the computation of SPFH is based. Figure reproduced from \citet{rusu2009fast}.}
\label{fig:spfh}
\end{figure}
\begin{equation}\label{eq:spfhframe}
\begin{split}
u & = n_s \\
v & = u \wedge \frac{p_t-p_s}{\Vert p_t - p_s \Vert} \\
w & = u \wedge v
\end{split}
\end{equation}
where $(. \wedge .)$ denotes the cross product.
An SPFH is computed for the query point $p_q$ and for each of its neighbors. Thus, the computation of the FPFH of $p_q$ involves the neighbors of $p_q$ as well as the neighbors of those neighbors.
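Equations \ref{eq:spfh} and \ref{eq:spfhframe} translate directly into code; the sketch below computes a single $(\alpha,\phi,\theta)$ triplet for a source/target point pair:

```python
import numpy as np

def spfh_triplet(p_s, n_s, p_t, n_t):
    """Compute the (alpha, phi, theta) triplet of Eqs. spfh/spfhframe for a
    source point/normal (p_s, n_s) and a target point/normal (p_t, n_t)."""
    d = (p_t - p_s) / np.linalg.norm(p_t - p_s)
    u = n_s              # frame: u along the source normal
    v = np.cross(u, d)   # v = u ^ (p_t - p_s)/||p_t - p_s||
    w = np.cross(u, v)   # w = u ^ v
    alpha = np.dot(v, n_t)
    phi = np.dot(u, d)
    theta = np.arctan2(np.dot(w, n_t), np.dot(u, n_t))
    return alpha, phi, theta
```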
The concatenation of the CIELab histogram and FPFH features characterizes a supervoxel. It is a vector of size 48.
\subsection{Building the Relevance Map}\label{sec:sm}
Each supervoxel is weighted with a value between 0 and 1 representing its relevance. These values are predictions of a classifier trained online during the exploration; they represent the probability that a supervoxel is part of "something" moveable by the robot, i.e. an object. The relevance map is represented in this article as a map on a 3D pointcloud segmented into supervoxels, with each supervoxel colored between yellow (maximum relevance) and black (minimum relevance). The classifier is described in detail in section \ref{sec:method}.
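The yellow-to-black coloring can be obtained, for instance, with a simple linear interpolation. The exact rendering used in the figures is not specified, so this mapping is only illustrative:

```python
def relevance_color(p):
    """Map a relevance value p in [0, 1] to an RGB color between black
    (non-relevant) and yellow (relevant); an illustrative choice."""
    c = int(round(255 * max(0.0, min(1.0, p))))
    return (c, c, 0)  # yellow = full red + full green, no blue
```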
\subsection{Choice of the Next Area to Explore}\label{sec:choice}
After computation of the relevance map, the process must select the next region of the environment to explore. Therefore, a \textit{choice distribution map} is computed, which represents the probability that each region will be chosen. The computation of these probabilities is based on the \textit{uncertainty} and \textit{confidence} of the classifier: the exploration is motivated by a reduction of uncertainty, i.e. an increase in the level of information about the environment. The computation of the \textit{choice distribution map} is described in detail in section \ref{sec:samplproc}.
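As a rough illustration (the actual computation, described in section \ref{sec:samplproc}, combines uncertainty and confidence), a choice distribution map could weight each supervoxel by how uncertain the classifier is about it:

```python
import random

def choice_distribution(relevance):
    """Illustrative choice map: weight each supervoxel by the classifier's
    uncertainty about it (weight 1 at p = 0.5, weight 0 at p = 0 or 1).
    This is a simplification of the map described in the paper."""
    weights = {sv: 1.0 - abs(2.0 * p - 1.0) for sv, p in relevance.items()}
    total = sum(weights.values())
    if total == 0.0:
        return {sv: 1.0 / len(relevance) for sv in relevance}  # fall back to uniform
    return {sv: w / total for sv, w in weights.items()}

def sample_supervoxel(distribution, rng=random):
    """Draw the next supervoxel to explore from the choice distribution."""
    ids, probs = zip(*distribution.items())
    return rng.choices(ids, weights=probs)[0]
```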
\subsection{Push Primitive}\label{sec:contr}
To interact with the chosen supervoxel, a push primitive is used. This primitive is divided into three phases: an approach movement, a straight-line motion towards the supervoxel center, and a reverse motion.
A planning algorithm with obstacle avoidance is used for the approach phase \citep{sucan2012ompl,moveit}. In this phase, the end-effector of the robot moves to an approach position near the target position, which is the supervoxel center. The approach position is chosen randomly among those associated with a valid motion plan, i.e. a plan without self-collisions or collisions with the scene. At the end of this phase, the end-effector is at the approach point, oriented towards the target. The end-effector then moves towards the target along a straight line of 5 centimeters and attempts to pursue this trajectory for a few further centimeters\endnote{This control sequence is open-loop: the system updates its perception of the environment only after the execution of the control program.}; in other words, the robotic arm tries to push the target. As a consequence, the end-effector can push the target from different orientations. Finally, the robot arm returns to its home position by following the reverse trajectory.
\subsection{Change Detection} \label{sec:motdet}
As the exploration is sequential, change detection is simply a comparison between the pointclouds before and after the interaction; the detection is thus based only on vision. The comparison follows these steps:
\begin{itemize}
\item The \textit{octree pointCloud change detector} method provided in PCL \citep{Rusu_ICRA2011_PCL} is used to subtract the initial pointcloud (before the interaction) from the current pointcloud. This operation produces a pointcloud limited to the difference between both pointclouds.
\item The noise in the difference pointcloud is reduced with the \textit{statistical outlier removal} filter provided in PCL.
\item Finally, the points \textit{of the selected supervoxel only} are compared with the difference pointcloud, using the \textit{iterative closest point} algorithm implemented in PCL, to determine whether this group of points belongs to the difference pointcloud. If it does, the action of the robot is considered to have produced a movement, and the feature of the supervoxel is given a "moveable" label; otherwise, it is given a "non-moveable" label.
\end{itemize}
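A minimal stand-in for this pipeline, using a voxel-set difference instead of PCL's octree change detector, outlier removal, and ICP, could look as follows (the voxel size and the decision threshold are illustrative):

```python
import numpy as np

def supervoxel_moved(before, after, target_points, voxel=0.01, min_ratio=0.5):
    """Toy stand-in for the change detector: voxelize both clouds, take the
    voxels that differ between `before` and `after` (symmetric difference,
    a simplification of the one-way subtraction in the paper), and label the
    push "moveable" if enough of the target supervoxel falls in changed voxels."""
    def voxelize(points):
        return {tuple(v) for v in np.floor(np.asarray(points) / voxel).astype(int)}
    changed = voxelize(after) ^ voxelize(before)
    target = [tuple(v) for v in np.floor(np.asarray(target_points) / voxel).astype(int)]
    hits = sum(1 for v in target if v in changed)
    return hits / len(target) >= min_ratio  # True -> "moveable", False -> "non-moveable"
```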
\begin{figure}[!h]
\centering
\includegraphics[width=.9\linewidth]{change_detector.pdf}
\caption{Visualisation of the change detector. The right picture shows a part of a scene before a push and the left picture the same part after the push. The red dot on both pictures marks the target of the push primitive, here the upper part of the blue toy; this target corresponds to the center of a supervoxel. The white areas represent the parts detected as different between the two images.}
\label{fig:chdet}
\end{figure}
Figure \ref{fig:chdet} shows a visualisation of how the change detector works. The right picture represents a part of a scene before a push and the left picture after a push. The red dot represents the center of the supervoxel targeted by the system. The white areas represent the differences detected between both images. If the target supervoxel is included in the white areas, then a change is detected. The change detector considers only the targeted supervoxel, i.e. a small area around the target.
\section{Collaborative Mixture Models}\label{sec:method}
To classify the collected samples, a new classification method is introduced: the collaborative mixture models (CMMs).
The proposed method classifies the samples extracted from the supervoxels into two categories: samples from a relevant object and samples from the background. CMMs predict the class of newly perceived supervoxels, and this information is used to build the relevance map. The classification is supervised: the system labels the gathered samples thanks to (i) the interactions of the robot with the environment and (ii) the change detector. As we make no assumption about the structure of the feature space or about the environment, the classes may be nonconvex and the collected samples may be nonlinearly separable. Each category is represented as a set of clusters of the already seen samples. A multivariate normal distribution is associated with each cluster, and these distributions are summed to form a mixture model; there is thus one Gaussian mixture model (GMM) per category. The parameters of the multivariate normal distributions are estimated statistically using the sample mean and sample covariance estimators. Each Gaussian of a mixture model, together with its associated cluster of samples, is called a \textit{component}. The number of components in each model is not given \textit{a priori} and is adapted to the training set: both models start with one component and add or remove components through \textit{merge} and \textit{split} operations.
\begin{table*}[t!]
\centering
\resizebox{2\columnwidth}{!}{
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
Ref & Type & non-convex & non-linear & uncertainty & environment specific hyperparameters & supervised \\ \hline
\citet{cauwenberghs2001incremental} & SVM & yes & yes & no & kernel, soft margin & yes \\ \hline
\citet{bordes2005fast} & SVM & yes & yes & yes & kernel, soft margin & yes \\ \hline
\citet{bordes2005huller} & SVM & yes & yes & no & kernel, soft margin & yes \\ \hline
\citet{tax2003online} & SVM & yes & yes & no & kernel, soft margin & no \\ \hline
\citet{saffari2009line} & random forest & yes & yes & no & -- & yes \\ \hline
\citet{saffari2010online} & boosting & no & yes & no & -- & yes \\ \hline
\citet{cappe2009line} & mixture model & yes & yes & yes & number of components & no \\ \hline
\citet{kristan2008incremental,kristan2011multivariate} & mixture model & yes & yes & yes & level of compression & no \\ \hline \hline
Our Approach & GMM & yes & yes & yes & tolerance ellipse size ($\alpha$) & yes \\ \hline
\end{tabular}
}
\caption{Online learning algorithms review}
\label{tab:online}
\end{table*}
\subsection{Choice of Classifier}\label{sec:choiceclassif}
The main goal of this work is to develop a method that can handle a large variety of environments. The classifier must therefore be able to adapt to different environments, and the values of its hyperparameters should not be specific to a particular environment. In a complex environment, the classes may not be convex, and the training dataset extracted from it may not be linearly separable. Furthermore, the classifier must provide a measure of uncertainty to be used by the exploration process; its output therefore needs to be a probability rather than a single crisp value. The classification algorithm must thus fulfill the following criteria:
\begin{itemize}
\item[\textbf{1}] Handle nonconvex/nonlinearly separable datasets and feature spaces
\item[\textbf{2}] Provide uncertainty measurement
\item[\textbf{3}] Have hyperparameters that are not specific to a particular environment
\item[\textbf{4}] Be supervised and adapted for classification
\item[\textbf{5}] Be trained online
\end{itemize}
We chose the GMM as the basis of the classifier. This model offers a good classification approach that meets the first and second criteria: a GMM can approximate a large family of probability distributions, so by encoding each category with a GMM, we obtain a supervised classifier that outputs the probability of membership in each category (criterion 2). A GMM can also be seen as an ensemble learning method in which weak classifiers are combined to form a strong classifier; it can thus handle nonconvex classes and nonlinearly separable data (criterion 1).
The number of components (i.e. multivariate normal distributions) is generally a hyperparameter fixed before training the GMM \citep{mclachlan2004finite}. In our setting, this parameter depends on the environment: the more complex the scene is in the feature space, the more components are needed (see section \ref{sec:res}). Moreover, GMMs are classically trained with the expectation-maximization algorithm, which is, in its standard form, an offline algorithm. Expectation-maximization can estimate the latent parameters of a probabilistic distribution, but, for mixture models, the number of components must be known beforehand. This is problematic with respect to the third and fifth criteria.
\citet{cappe2009line} proposed an online expectation-maximization algorithm in which the E-step is replaced by a stochastic approximation of the expectation while the M-step is left unchanged. GMMs with an unknown number of components were studied by \citet{richardson1997bayesian} and \citet{rasmussen2000infinite}, who used a Markov chain Monte Carlo approach to estimate the parameters and the number of components of the mixture; in these studies, however, the GMM is trained offline. These two lines of work are not directly compatible, because the first uses expectation-maximization while the other two use Markov chain Monte Carlo. We could either extend the work of Capp\'{e} and Moulines to an unknown number of components, or extend the work of Richardson and Green or of Rasmussen to online training. Richardson and Green chose a Bayesian formalism and Markov chain Monte Carlo because, they argued, it is more suitable than expectation-maximization for estimating the number of components.
An alternative approach is to build a mixture model with a kernel density estimator (KDE), as in the studies of \citet{kristan2008incremental,kristan2011multivariate}. At each training iteration, their algorithms add a new component centered on the latest sample and then reduce the mixture with a compression operation. This approach has two advantages: training is online, and the number of components does not have to be specified beforehand. However, the compression step can be computationally heavy, especially in the multivariate case. Also, the frequency of the compression step must be chosen as a compromise between the precision of the model on the training data and its generalization ability: with too little compression the model overfits the data because it keeps too many components, whereas with too much compression it becomes inaccurate on the training data.
In a similar vein, \citet{declercq2008online} proposed a regression method based on bivariate GMMs. The model is trained online, and the number of components is estimated thanks to split and merge operations. As in \citet{kristan2008incremental, kristan2011multivariate}, each new datum is added to the model as a Gaussian component with a prior covariance representing the observation noise. This new component is merged with the most uncertain component of the existing mixture. The uncertainty of a component is estimated thanks to an uncertain Gaussian model, which provides a quantitative estimate, called \textit{fidelity}, of how well the component describes data following a Gaussian distribution. If the fidelity of the newly merged component is below a certain threshold, the component is split into two new components whose parameters are estimated via expectation-maximization.
The methods of \citet{kristan2008incremental, kristan2011multivariate,declercq2008online} are interesting in that they combine online learning with an estimation of the number of components, but they do not address the question of the sampling process. They also consider regression problems, whereas our problem is formulated as a classification problem: the output space of classification is discrete (the labels), while that of regression is continuous. In our opinion, labels suit our problem better, since the goal is to discriminate features characterizing moveable elements from features characterizing non-moveable elements, whereas regression is more practical for estimating trajectories or continuous signals.
In our approach, we chose a Bayesian formalism and a statistical estimation of the parameters, with geometrical criteria exploiting the labels of the training dataset to estimate the number of components. These choices leave us with fewer latent parameters than Richardson and Green or Rasmussen, so we avoid choosing a priori probability distributions for the parameters. Other online classification methods have also been proposed in the literature; the next section gives a short review of some of them.
\subsection{Short Review of Online Classification Algorithms}
In offline learning, a set of training data is available beforehand, and the training can thus be performed in batch. In online learning, the data arrive progressively while the system is learning; the classification algorithm must therefore make predictions while it is learning, on the basis of the previously encountered data. Online learning is of interest when not all the data are available beforehand. In a first case, the data arrive as a stream, as in financial prediction or real-time video analysis, and the system has no control over the order of arrival of the samples. In a second case, the one considered in the present study, the system has an exploration process that allows it to choose the next sample, and to this end it needs to train a model online. Online learning has been widely studied for several kinds of problems, e.g. speech or pattern recognition, failure prediction, and medical imaging \citep{obermaier2001hidden,salfner2010survey,saffari2010online}.
Table \ref{tab:online} summarizes the features of some state-of-the-art methods for online classification with respect to the criteria listed in section \ref{sec:choiceclassif}. These methods achieve an efficiency close to that of batch learning, but none of them fulfills all of our criteria.
Saffari et al. proposed an online random forest \citep{saffari2009line} and an online boosting method \citep{saffari2010online}. Random forest is an efficient algorithm for nonconvex and nonlinear classification problems; boosting is faster but less effective on nonconvex problems. These algorithms were used to track faces in videos in real time. This task requires no exploration, so estimating the uncertainty or confidence of the classification to drive a sampling process is not needed, and this issue is not addressed.
With a random forest, one option for measuring uncertainty is to use the classification confidence of each tree. This confidence is used to estimate which tree will give the best prediction for a given sample; in this way, the algorithm estimates the region of the feature space in which each weak classifier is reliable.
\citet{Craye2015} used an online random forest as a classifier and intelligent adaptive curiosity (IAC) \citep{oudeyer2004intelligent} as the exploration process. IAC does not involve uncertainty estimation; it is based on the maximization of learning progress, which requires segmenting the environment into areas in order to compute the learning progress, i.e. the entropy gain. In the study of Craye et al., these areas are defined by a human.
Online versions of support vector machines (SVMs) are also available \citep{tax2003online,bordes2005fast,bordes2005huller,cauwenberghs2001incremental}. By definition, SVM methods fulfill the first and fourth criteria. However, SVMs depend strongly on the kernel used to separate the data: to classify efficiently, the kernel and its hyperparameters must be chosen according to the problem \citep{burges1998tutorial,smits2002improved}, which does not meet our third criterion. An uncertainty measurement (second criterion) can, however, be defined from the soft margin, which delimits an uncertain classification area \citep{bordes2005fast}; of course, the soft-margin parameter must then strike a balance between precise classification and large exploration areas.
Random forest may be a good choice as it is efficient and general; however, it is not an algorithm designed to drive the exploration of an unknown feature space. A multivariate normal distribution gives a better estimation of uncertainty as it is a statistical approximation of a sample distribution.
Our problem therefore requires a new classification algorithm that can draw inspiration from previous algorithms: in particular, online mixture models and mixture models with an unknown number of components \citep{cappe2009line,richardson1997bayesian,kristan2008incremental, kristan2011multivariate,declercq2008online}.
\subsection{Definition of the Classifier}
CMMs are formalized in a Bayesian framework with conditional probabilities.
The classifier has the following parameters:
\begin{itemize}
\item $K_l$ : number of components of the mixture model of class $l$, with $l \in \{0,1\}$.
\item $S = \{s_i,l_i\}_{i<I}$: database of samples and their corresponding label.
\item $\Theta_l = \{\mu_k,\Sigma_k\}_{k<K_l}$: multivariate normal distribution parameters of both models with mean $\mu_k$ and covariance matrix $\Sigma_k$.
\item $W_l = \{w_k\}_{k<K_l}$ : weights of the mixture model of class $l \in \{0,1\}$.
\item $L = \{0,1\}$ : label that the classifier is asked to predict.
\end{itemize}
\paragraph*{Class Definition} A class is a subspace of the feature space designated by a label. Equation \ref{eq:class_est} gives the probability of a sample $X$ being part of class 1.
\begin{equation}\label{eq:class_est}
P(L = 1 | W, \Theta, X) = \frac{1 + \Gamma(W_1,\Theta_1,X)}{2 + \Gamma(W_1,\Theta_1,X)+\Gamma(W_0,\Theta_0,X)}
\end{equation}
Where $\Gamma(W_i,\Theta_i,\cdot)$ is the GMM of label $i$, $W = W_0 \cup W_1$ and $\Theta = \Theta_0 \cup \Theta_1$.
Equation \ref{eq:class_est} generates the output of the classifier.
\paragraph*{Component Definition} A component is a set of points of the feature space statistically represented by a multivariate normal distribution. A component is part of a class, i.e. all samples of a component have the same label and are members of the same class. Equation \ref{eq:comp_est} gives the probability of sample $X$ being part of a given component $i$.
\begin{equation}\label{eq:comp_est}
P(k=i|X,\Theta,l) = \frac{w_i*G(\mu_i,\Sigma_i,X)}{\sum^{K-1}_{k=0}{w_k*G(\mu_k,\Sigma_k,X)}}
\end{equation}
Let $C_k(X) = (w_k,G(\mu_k,\Sigma_k,X),S_k,l)$ be a component and let $M_l = \{C_k\}_{k<K_l}$ be the set of $K_l$ components of class $l$, where $S_k$ is the set of samples used to estimate $\mu_k$, $\Sigma_k$, and $w_k$.
The weights of the mixture models are computed using equation \ref{eq:weights}.
\begin{equation}\label{eq:weights}
w_k = \frac{\vert C_k \vert}{\sum_i^K \vert C_i \vert}
\end{equation}
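The three definitions above (class posterior, component posterior, and component-size weights) can be sketched numerically. The following is a minimal Python illustration, not the authors' implementation; the function names and the tuple representation of a model as \texttt{(weights, means, covs)} are assumptions made here for clarity.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_density(weights, means, covs, x):
    """Gamma(W_l, Theta_l, x): weighted sum of multivariate normal densities."""
    return sum(w * multivariate_normal.pdf(x, mean=m, cov=c)
               for w, m, c in zip(weights, means, covs))

def class_posterior(model0, model1, x):
    """Eq. (class_est): P(L = 1 | W, Theta, x) with add-one smoothing."""
    g0 = gmm_density(*model0, x)
    g1 = gmm_density(*model1, x)
    return (1.0 + g1) / (2.0 + g1 + g0)

def component_posterior(weights, means, covs, x):
    """Eq. (comp_est): P(k = i | x, Theta, l) for every component i of one class."""
    dens = np.array([w * multivariate_normal.pdf(x, mean=m, cov=c)
                     for w, m, c in zip(weights, means, covs)])
    return dens / dens.sum()

def mixture_weights(sizes):
    """Eq. (weights): component weights proportional to component sizes |C_k|."""
    sizes = np.asarray(sizes, dtype=float)
    return sizes / sizes.sum()
```

Note that the smoothing constants in the class posterior keep the output strictly inside $(0,1)$ even when both mixture densities vanish.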
\begin{algorithm*}[!t]
\caption{IAGMM algorithm}\label{algo:genalgo}
\begin{algorithmic}[1]
\Procedure{IAGMM}{$(s,l),M_0, M_1$}
\For{iter = $1 \rightarrow N$}
\State Add new sample $(s,l)$ to the model :
\If{$M_l = \emptyset$}
\State $C \leftarrow \{w,G(s,cI,X),\{s\}\}$ \Comment{Create a new component}
\State $M_l \leftarrow M_l \cup C$ \Comment{Add the new component to the model of label l}
\Else
\State $C \leftarrow closest\_component(s,l)$ \Comment{Find the closest component from $s$ with label $l$}
\State Update the parameters of $C$ with the new sample $s$
\If{SPLIT($C$,l,$M_0, M_1$) is not successful}
\State MERGE($C$,l,$M_0, M_1$)
\EndIf
\EndIf
\State Randomly choose one Gaussian per class model :
\For{$i = {0,1}$}
\State $C \leftarrow random\_choice(M_i)$ \Comment{Randomly choose a component from $M_i$}
\If{SPLIT($C$,l,$M_0, M_1$) is not successful} \Comment{Apply the split operation thanks to algorithm \ref{algo:split}}
\State MERGE($C$,l,$M_0, M_1$) \Comment{Apply the merge operation thanks to algorithm \ref{algo:merge}}
\EndIf
\EndFor
\EndFor
\State \Return $M = M_0 \cup M_1$ \Comment{Return the whole model}
\EndProcedure
\end{algorithmic}
\end{algorithm*}
\subsection{Algorithm}\label{sec:alg}
CMMs rely on a supervised learning algorithm, which is used here to solve a binary classification problem (relevant vs not relevant). The algorithm builds two GMMs, one for each class. Each GMM is made up of several Gaussian distributions, each associated with a weight. Each distribution supports a cluster of samples, called a component (see the previous section). At each iteration, the robot arm interacts with a supervoxel and stores the corresponding sample with the label deduced from the observation of the interaction result. Thus, each iteration consists of adding one sample with its label to the dataset.
Adding a sample consists of three main steps (Algorithm \ref{algo:genalgo}):
\begin{itemize}
\item[1.] If there is no component yet in the class of the new sample, create a new one; otherwise, find the closest component and add the sample to this component. Finally, update the parameters of the component.
\item[2.] A \textit{split} operation is applied to the updated component. If it is not successful, the \textit{merge} operation is then applied.
\item[3.] One component per class is randomly chosen, and the \textit{split} operation is applied to each. If a selected component is not split then the \textit{merge} operation is applied.
\end{itemize}
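Step 1 above can be sketched concisely. In this hedged Python sketch (not the authors' code), a component is assumed to be a tuple \texttt{(w, mu, cov, samples)}, and the component parameters are re-estimated in batch from the stored samples $S_k$; an incremental update would be equivalent.

```python
import numpy as np
from scipy.stats import multivariate_normal

def closest_component(components, x):
    """Step 1: pick the component whose weighted Gaussian density at x is
    highest, i.e. the argmax of eq. (comp_est) over the class's components.
    Each component is assumed to be a tuple (w, mu, cov, samples)."""
    scores = [w * multivariate_normal.pdf(x, mean=mu, cov=cov)
              for (w, mu, cov, samples) in components]
    return int(np.argmax(scores))

def update_component(samples, x):
    """Re-estimate the component's mean and covariance after adding sample x
    (batch re-estimate over the stored samples S_k)."""
    samples = np.vstack([samples, x])
    return samples.mean(axis=0), np.cov(samples, rowvar=False), samples
```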
The goal of the \textit{split} operation is to keep only convex components. Indeed, multivariate normal distributions are only suited to modelling convex sets of data, as they can be represented as hyperellipsoids. The \textit{merge} operation aims at reducing the number of components to avoid overfitting and to reduce computational cost. Figure \ref{fig:split_merge} illustrates the cases in which the \textit{split} and \textit{merge} operations are applied:
\begin{itemize}
\item \textbf{Split Case:} When two components of different categories cross or intersect, one of these components is nonconvex. This component must be split into two new components.
\item \textbf{Merge Case:} When two components of the same category cross or intersect, they can be merged to form a larger convex component.
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[height=60mm,width=.8\linewidth,keepaspectratio]{split_merge.pdf}
\caption{Schema showing the cases in which the split and merge operations are applied. Top: if two components of opposite classes intersect, apply the split operation. Bottom: if two components of the same class intersect, apply the merge operation.}
\label{fig:split_merge}
\end{figure}
\paragraph*{Component intersection} The \textit{split} and \textit{merge} operations are based on a geometrical interpretation of the components. A multivariate normal distribution can be represented geometrically as a hyperellipsoid. The proposed geometrical interpretation relies on the region of tolerance of the distribution. A region of tolerance represents the area in which all points have a probability greater than $1-\alpha$ of being in the corresponding component.
The axes of the hyperellipsoid are the eigenvectors of the inverse of the covariance matrix. In both cases (merge and split), there are two components that intersect each other. The parameters of the hyperellipsoid of tolerance can thus be computed as:
\begin{equation}\label{eq:elltole}
(X - \mu)^{T}\Sigma^{-1}(X - \mu) = \frac{(n-1)p}{n-p}\frac{n+1}{n}F_{1-\alpha}(p,n-p)
\end{equation}
Where $n$ is the number of samples used to estimate $\mu$ and $\Sigma$; $p$ is the dimension of the feature space; and $F_{1-\alpha}$ is the quantile function of the Fisher distribution.
Equation \ref{eq:intercond} is used to determine whether component $C_1$ intersects with another component $C_2$, where $C_1$ is the component candidate to be split or merged with $C_2$:
\begin{equation}\label{eq:intercond}
(\rho - \mu)^{T}\Sigma^{-1}(\rho - \mu) \leq \frac{(n-1)p}{n-p}\frac{n+1}{n}F_{1-\alpha}(p,n-p)
\end{equation}
Where $\mu$ and $\Sigma$ are the mean and covariance matrix of $C_1$; $\rho$ is the mean of $C_2$; and $n$ is the number of samples in $C_1$.
It is worth noting that $n$ must be strictly greater than $p$ because both arguments of $F_{1-\alpha}$ must be strictly positive. Then, the candidate component must have more samples ($n$) than the number of dimensions of the feature space ($p$). In our case $n$ must be greater than $p = 48$.
For all experiments, the parameter $\alpha$ is fixed at $0.25$.
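The intersection test of equations \ref{eq:elltole} and \ref{eq:intercond} amounts to checking whether the squared Mahalanobis distance from $C_1$ to the mean of $C_2$ falls below the Fisher-quantile threshold. A minimal Python sketch follows (illustrative only; the function names are assumptions, and the quantile is obtained from SciPy's Fisher $F$ distribution):

```python
import numpy as np
from scipy.stats import f as fisher_f

def tolerance_threshold(n, p, alpha=0.25):
    """Right-hand side of eq. (elltole): squared radius of the hyperellipsoid
    of tolerance at level 1 - alpha, valid only for n strictly greater than p."""
    assert n > p, "need strictly more samples than feature dimensions"
    return ((n - 1) * p / (n - p)) * ((n + 1) / n) * fisher_f.ppf(1 - alpha, p, n - p)

def intersects(mu1, sigma1, n1, mu2, alpha=0.25):
    """Eq. (intercond): C1 and C2 intersect when the mean of C2 lies inside
    C1's tolerance region (Mahalanobis distance below the threshold)."""
    d = np.asarray(mu2) - np.asarray(mu1)
    mahalanobis_sq = d @ np.linalg.inv(sigma1) @ d
    return bool(mahalanobis_sq <= tolerance_threshold(n1, len(d), alpha))
```

Note that with $\alpha = 1$ the quantile is zero, so no intersection is ever detected; this is why splits and merges are disabled at $\alpha = 1$ in the experiments of section \ref{sec:res}.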
\paragraph*{Split operation}
Algorithm \ref{algo:split} describes the split operation. Let $C$ be a component. If $C$ intersects a component of the other class, a candidate model is computed in which $C$ is split into two new components. If this candidate model has a greater log-likelihood than the current model, the candidate is kept.
\begin{algorithm}[h]
\caption{SPLIT algorithm}\label{algo:split}
\begin{algorithmic}[1]
\Procedure {SPLIT}{$C$,l,$M_0$,$M_1$}
\State $lbl \leftarrow |l - 1|$
\For{Each $C' \in M_{lbl}$}
\If{$C' \cap C \neq \emptyset$}
\State $C_1, C_2$ = $split(C)$
\State $\tilde{M}_l \leftarrow (M_l \setminus \{C\}) \cup \{C_1,C_2\}$
\If{$\tilde{L} > L$} \Comment{L is the log-likelihood of M}
\State $M_l \leftarrow \tilde{M}_l$
\EndIf
\EndIf
\EndFor
\State \Return $M_0 \cup M_1$
\EndProcedure
\end{algorithmic}
\end{algorithm}
The split algorithm used to share the samples of the query component between two new components is the following:
\begin{itemize}
\item[Step 1:] Build a graph of minimal distances between the samples of the components
\item[Step 2:] Build a set of samples per sub-graph in which all vertices are connected.
\begin{itemize}
\item If there is only one set, then cancel the split.
\item If there are 2 sets, then go to step 3.
\item If there are more than 2 sets, then merge the closest sets (by average distance) until only 2 sets remain, and go to step 3.
\end{itemize}
\item[Step 3:] Make two new components based on the two sets of samples, by computing the sample covariance and the sample mean.
\end{itemize}
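The three steps above can be approximated by a single-linkage 2-clustering: building the minimum spanning tree of the pairwise-distance graph and cutting its longest edge yields two connected sets of samples. This is a simplified Python sketch of the sample-sharing procedure, not the exact implementation; the function name and the MST shortcut are assumptions made here.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def split_component_samples(samples):
    """Steps 1-3: split a component's samples into two sets by cutting the
    longest edge of the minimum spanning tree, then re-estimate the
    mean and covariance of each set."""
    samples = np.asarray(samples)
    dist = squareform(pdist(samples))            # step 1: pairwise-distance graph
    mst = minimum_spanning_tree(dist).toarray()
    i, j = np.unravel_index(np.argmax(mst), mst.shape)
    mst[i, j] = 0.0                              # cut the longest MST edge -> 2 sets
    _, labels = connected_components(mst + mst.T, directed=False)
    sets = [samples[labels == k] for k in (0, 1)]
    params = [(s.mean(axis=0), np.cov(s, rowvar=False)) for s in sets]  # step 3
    return params, labels
```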
This algorithm is illustrated in figure \ref{fig:split_schema}.
\begin{figure}[h]
\centering
\subfloat[Step 1]{\label{fig:split1}
\includegraphics[height=60mm,width=.3\linewidth,keepaspectratio]{split1.pdf}
}
\subfloat[Step 2]{\label{fig:split2}
\includegraphics[height=60mm,width=.3\linewidth,keepaspectratio]{split2.pdf}
}
\subfloat[Step 3]{\label{fig:split3}
\includegraphics[height=60mm,width=.3\linewidth,keepaspectratio]{split3.pdf}
}
\caption{Illustration of how the samples are shared between two new components during a split.}
\label{fig:split_schema}
\end{figure}
\paragraph*{Merge operation}
Algorithm \ref{algo:merge} describes the merge operation. Let $C$ be a component. If $C$ intersects a component $C'$ of the same class, a candidate model is computed in which $C$ and $C'$ are merged. As with the split operation, if this candidate model has a greater log-likelihood than the current model, the candidate is kept.
\begin{algorithm}[h]
\caption{MERGE algorithm}\label{algo:merge}
\begin{algorithmic}[1]
\Procedure {MERGE}{$C$,l,$M_0$,$M_1$}
\For{Each $C' \in M_l$}
\If{$C \cap C' \neq \emptyset$}
\State $\tilde{C} \leftarrow C \cup C'$
\State $\tilde{M}_l \leftarrow (M_l \setminus \{C,C'\}) \cup \{\tilde{C}\}$
\If{$\tilde{L} > L$} \Comment{L is the log-likelihood of M}
\State $M_l \leftarrow \tilde{M}_l$
\EndIf
\EndIf
\EndFor
\State \Return $M_0 \cup M_1$
\EndProcedure
\end{algorithmic}
\end{algorithm}
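Both operations accept a candidate only if it increases the model log-likelihood over the stored samples. A minimal Python sketch of this acceptance test follows (illustrative, with an assumed \texttt{(weights, means, covs)} model representation):

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_likelihood(model, samples):
    """Log-likelihood of a GMM (weights, means, covs) over a set of samples."""
    weights, means, covs = model
    dens = np.zeros(len(samples))
    for w, m, c in zip(weights, means, covs):
        dens += w * multivariate_normal.pdf(samples, mean=m, cov=c)
    return float(np.log(dens).sum())

def accept_candidate(current, candidate, samples):
    """Keep the split/merge candidate only if it increases the log-likelihood."""
    return log_likelihood(candidate, samples) > log_likelihood(current, samples)
```

For instance, on data drawn from two well-separated clusters, a two-component candidate is accepted over a single-component model.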
\subsection{Sampling Process}\label{sec:samplproc}
As the classifier is trained sample by sample, i.e. online, the choice of the next sample is critical. A dedicated process chooses the next sample to explore: it generates a choice distribution over the supervoxels based on the predictions of the classifier.
For a pointcloud with N extracted supervoxels and $\{X_i\}_{i<N}$ as the set of features of the supervoxels, the choice probability $P_c(X_i)$ of the feature $X_i$ of the $i^{th}$ supervoxel is defined as follows:
\begin{equation}\label{eq:samplproc}
P_c(X_i) = u(X_i)*(1-c(X_i))
\end{equation}
Where $u()$ is the classification uncertainty and $c()$ is the classification confidence.
\paragraph*{Uncertainty} As the classification is probabilistic, the output of the classifier provides information on how certain the classification is. According to equation \ref{eq:class_est}, the closer the probability of a sample being part of a class is to $\frac{1}{2}$, the higher the \textit{uncertainty}. The following equation describes how the uncertainty is computed:
\begin{equation}\label{eq:u}
u(X_i) =
\begin{cases}
f(p) & \vert S_1 \vert \leq \vert S_0 \vert \\
f(1-p) & \vert S_1 \vert > \vert S_0 \vert
\end{cases}
\end{equation}
where $p = P(L = 1 | W, \Theta, X)$ and $f$ is the following function:
\begin{equation}\label{eq:xlogx}
f(x) =
\begin{cases}
-2x(\log(2x)-1) & x \geq 0.5 \\
-4x^2(\log(4x^2)-1) & x < 0.5
\end{cases}
\end{equation}
Theoretically, with this definition of uncertainty, the exploration focuses in priority on areas whose class membership is uncertain (equation \ref{eq:xlogx} and figure \ref{fig:xlogx}). It also tries to keep the same number of samples in each class by favouring the class with the fewest samples gathered so far (equation \ref{eq:u}).
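Equations \ref{eq:u} and \ref{eq:xlogx} are easy to check numerically: $f$ is continuous at $0.5$, vanishes at $0$, peaks at $f(0.5)=1$, and remains positive at $f(1)=2-2\log 2$. A minimal Python sketch (function names assumed here; the case condition follows eq. \ref{eq:u} with $\vert S_1 \vert \leq \vert S_0 \vert$):

```python
import math

def f(x):
    """Eq. (xlogx): sampling weight as a function of the class-1 posterior p.
    Maximal (value 1) at x = 0.5, zero at x = 0, and 2 - 2*log(2) at x = 1."""
    if x >= 0.5:
        return -2.0 * x * (math.log(2.0 * x) - 1.0)
    if x == 0.0:
        return 0.0                     # limit of -4x^2 (log(4x^2) - 1) at 0
    return -4.0 * x**2 * (math.log(4.0 * x**2) - 1.0)

def uncertainty(p, n_pos, n_neg):
    """Eq. (u): favour the class with fewer gathered samples (|S_1| vs |S_0|)."""
    return f(p) if n_pos <= n_neg else f(1.0 - p)
```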
\begin{figure}[h!]
\centering
\includegraphics[width=.8\linewidth]{xlogx.pdf}
\caption{Function used for uncertainty estimation. This function gives a higher probability of choice to uncertain classifications, but also to confident classifications of the chosen class, i.e. the one with the fewest samples.}
\label{fig:xlogx}
\end{figure}
This last feature is motivated by (i) the fact that, in most supervised learning problems, it is better to have a balanced number of samples, and (ii) the assumption that a balanced number of samples in each class better represents the environment. Issues related to the exploration process are discussed in section \ref{sec:disc}.
\paragraph*{Confidence} A GMM is a weighted sum of Gaussian distributions. The classification is supported by a mapping of the feature space made by these Gaussian functions. Thus, the probability given by a multivariate normal distribution provides useful information about the structure of the dataset. Given a sample $X$, its classification confidence is the probability of membership to the closest component, as defined in equation \ref{eq:comp_est}. The confidence gives a measure of the dataset density. In this way, the exploration focuses on areas with less information; confidence can therefore be interpreted as an approximation of entropy.
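Given per-supervoxel uncertainty and confidence values, equation \ref{eq:samplproc} yields an unnormalised choice map. The sketch below (illustrative Python; the renormalisation and the uniform fallback are assumptions not stated in the text) turns it into a sampling distribution over supervoxels:

```python
import numpy as np

def choice_distribution(u, c):
    """Eq. (samplproc): P_c(X_i) = u(X_i) * (1 - c(X_i)), renormalised here so
    it can be used directly as a categorical sampling distribution."""
    p = np.asarray(u, dtype=float) * (1.0 - np.asarray(c, dtype=float))
    total = p.sum()
    if total == 0.0:                   # no information yet: uniform fallback
        return np.full(len(p), 1.0 / len(p))
    return p / total

def next_supervoxel(u, c, rng=None):
    """Draw the index of the next supervoxel to interact with."""
    rng = np.random.default_rng() if rng is None else rng
    p = choice_distribution(u, c)
    return int(rng.choice(len(p), p=p))
```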
\section{Experiments}\label{sec:exp}
\subsection{Protocol}\label{sec:proto}
Two types of experiments were performed to validate the method: experiments in a simplified setup and experiments on a real robotic platform. All experiments have a fixed budget of interactions. Each experiment has a single fixed background and a set of mobile objects.
An expert is used to evaluate the quality of the classifier trained during an experiment. The expert is built by saving the pointcloud of the background without the objects at the beginning of the experiment; thus, the objects are easily separated from the background. To determine whether a supervoxel is part of the background, the points of the supervoxel are simply compared with the saved background pointcloud. This defines the ground truth of the classification, which is of course not known by the classifier.
\begin{figure}[t]
\centering
\subfloat[\textbf{MoveableBalls:} Table with moving balls]{\label{fig:setup0}
\includegraphics[height=60mm,width=.7\linewidth,keepaspectratio]{env0_obj0-reduced.pdf}
} \\
\subfloat[\textbf{MoveableBricks1:} Table with moving bricks]{\label{fig:setup1}
\includegraphics[height=60mm,width=.7\linewidth,keepaspectratio]{env0_obj1-reduced.pdf}
} \\
\subfloat[\textbf{MoveableBricks2:} Table with fixed spheres and moveable bricks ]{\label{fig:setup3}
\includegraphics[height=60mm,width=.7\linewidth,keepaspectratio]{env2_obj1-reduced.pdf}
} \\
\subfloat[\textbf{WhiteMoveableBalls:} Table with fixed spheres and moveable bricks ]{\label{fig:setup4}
\includegraphics[height=60mm,width=.7\linewidth,keepaspectratio]{env3_obj2.pdf}
} \\
\subfloat[\textbf{WhiteMoveableBricks:} Table with fixed spheres and moveable bricks all white]{\label{fig:setup5}
\includegraphics[height=60mm,width=.7\linewidth,keepaspectratio]{env4_obj3.pdf}
} \\
\subfloat[\textbf{SimKitchen:} A kitchen with a teapot, a bowl, a cup and a spray cleaner]{\label{fig:simkitchen}
\includegraphics[height=60mm,width=.7\linewidth,keepaspectratio]{kitchen_sim.pdf}
}
\caption{Experimental simplified setups ordered by increasing difficulty. The moveable objects are marked by black circles.}
\label{fig:setups}
\end{figure}
\paragraph*{Simplified setup}
The experiments conducted in the simplified setup are used to evaluate the method in an ideal case. It is ideal because we use the Gazebo simulator without any robot and with a simulated Kinect. Thus, the interactions are simulated, i.e. no robot actually interacts with the environment. The category (moveable or not) of the explored supervoxel is assessed by an expert that knows in advance which supervoxels are part of an object and which are part of the background (as described in the previous paragraph). In the simplified setup, there is therefore no noise and no mislabelled sample. The results in this case are an upper bound of what can be expected in reality.
The experiments are conducted in several environments (see figure \ref{fig:setups}). They are designed to have an increasing difficulty with an increasing number of shared features between the objects and the background. For instance, the setup \textbf{MoveableBalls} (\ref{fig:setup0}) is very simple because the background is a wooden table, a flat surface with colors between orange and yellow, whereas the moving objects are blue and green spheres. Thus, the feature space is very easy to split. At each iteration during the experiment, each object is spawned in a random position and with a random orientation.
The objects chosen for the first 5 setups are very simple (cubes and spheres) to simplify the analysis of the results with respect to the feature space. The setup \textbf{SimKitchen} (\ref{fig:simkitchen}) is more realistic. The experiments performed with the real robot use more complex object shapes.
\begin{figure}[h]
\centering
\subfloat[ \textbf{Workbench1:} Simple toy workbench with three moveable toy cars.]{\label{fig:real_setup0}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{workbench-1-reduced.pdf}
}
\subfloat[\textbf{Workbench2:} Toy workbench with fixed object on it and three moveable toy cars]{\label{fig:real_setup1}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{workbench-5-setup-reduced.pdf}
} \\
\caption{Experimental setups with the real robot ordered by increasing difficulty. The moveable objects are marked by black circles.}
\label{fig:real_setups}
\end{figure}
\paragraph*{Real world setup}
Experiments are also conducted with a real robot to evaluate the method in a realistic scenario. The PR2 robot is used with a Kinect version 2 sensor. Figure \ref{fig:real_setups} depicts the setups used for the experiments. They are based on a modular toy workbench in two different configurations. These environments are colorful and have complex shapes, which allows us to test the method in a complex and realistic scenario. In these experiments, the robot interacts with the environment and the classifier learns from the labels produced by these interactions.
\subsection{Classification Quality Measures}
\paragraph*{Precision, Recall, and Accuracy}
To measure the performance of the method, precision, recall, and accuracy are used. These are classical measures used in computer vision and more generally in classification tasks. In particular, these measures are used in most studies on SOD \citep{Borji2015}. The following equations define precision, recall, and accuracy, as used in this study:
\begin{equation}\label{eq:pra}
\begin{split}
precision & = \frac{tp}{tp + fp} \\
recall & = \frac{tp}{tp + fn} \\
accuracy & = \frac{1}{2}(\frac{tp}{G_{obj}} + \frac{tn}{G_{back}})
\end{split}
\end{equation}
Where $tp$ is the number of true positives and $tn$ the number of true negatives (i.e. supervoxels correctly classified as part of moveable objects or as part of the background, respectively); $fp$ is the number of false positives, i.e. supervoxels misclassified as moveable, and $fn$ the number of false negatives, i.e. supervoxels misclassified as non-moveable; $G_{obj}$ is the ground truth for parts of the environment that are objects and $G_{back}$ the ground truth for parts that are fixed. For N supervoxels extracted from a pointcloud, their definitions are the following:
\begin{equation} \label{eq:tptn}
\begin{split}
tp & = \sum_i^N{P(L = 1 | W, \Theta, x_i)*(1 - \delta_i)} \\
tn & = \sum_i^N{P(L = 0 | W, \Theta, x_i)*\delta_i} \\
fp & = \sum_i^N{P(L = 1 | W, \Theta, x_i)*\delta_i} \\
fn & = \sum_i^N{P(L = 0 | W, \Theta, x_i)*(1 - \delta_i)} \\
G_{obj} & = \sum_i^N{1 - \delta_i} \\
G_{back} & = \sum_i^N{\delta_i}
\end{split}
\end{equation}
Where $\delta_i$ is the Kronecker symbol equal to $1$ if the i$^{th}$ supervoxel is part of the background, and otherwise equal to $0$; $x_i$ represents the features of the i$^{th}$ supervoxel.
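The soft counts of equation \ref{eq:tptn} and the measures of equation \ref{eq:pra} translate directly into a few vectorised lines. A minimal Python sketch (the function name is an assumption; inputs are the per-supervoxel class-1 posteriors and the binary background indicator $\delta_i$):

```python
import numpy as np

def evaluation(pred_moveable, is_background):
    """Soft counts of eq. (tptn) and precision/recall/accuracy of eq. (pra).
    pred_moveable[i] = P(L = 1 | W, Theta, x_i); is_background[i] = delta_i."""
    p = np.asarray(pred_moveable, dtype=float)
    delta = np.asarray(is_background, dtype=float)
    tp = np.sum(p * (1.0 - delta))          # moveable mass on object supervoxels
    tn = np.sum((1.0 - p) * delta)          # background mass on background
    fp = np.sum(p * delta)
    fn = np.sum((1.0 - p) * (1.0 - delta))
    g_obj, g_back = np.sum(1.0 - delta), np.sum(delta)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = 0.5 * (tp / g_obj + tn / g_back)
    return precision, recall, accuracy
```

Note that the accuracy is balanced: it averages the per-class rates, so a degenerate classifier that labels everything as background scores 0.5, not the background's share of the scene.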
\paragraph*{Measure of the Exploration Dynamic}
We also measure the number of samples gathered during the exploration of each category. This allows us to assess how the exploration is conducted according to the knowledge available. Also, the number of components is monitored to determine whether it increases with the complexity of an environment. For the experiments with the real robot, the number of mislabeled samples, which corresponds to failed interactions, is counted.
\section{Results}\label{sec:res}
\begin{figure}[h]
\centering
\subfloat[Plot for setup \ref{fig:setup0}]{\label{fig:praid0}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{graph_pra_env0_obj0.pdf}
}
\subfloat[Plot for setup \ref{fig:setup1}]{\label{fig:praid1}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{graph_pra_env0_obj1.pdf}
}\\
\subfloat[Plot for setup \ref{fig:setup3}]{\label{fig:praid3}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{graph_pra_env2_obj1.pdf}
}
\subfloat[Plot for setup \ref{fig:setup4}]{\label{fig:praid4}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{graph_pra_env3_obj2.pdf}
} \\
\subfloat[Plot for setup \ref{fig:simkitchen}]{\label{fig:simkitchenpra}
\includegraphics[height=60mm,width=.9\linewidth,keepaspectratio]{kitchen_pra.pdf}
}
\\
\subfloat[Plot for setup \ref{fig:setup5}]{\label{fig:praid5}
\includegraphics[height=60mm,width=.9\linewidth,keepaspectratio]{graph_pra_env4_obj3.pdf}
}\\
\subfloat{
\includegraphics[height=60mm,width=.9\linewidth,keepaspectratio]{legend_pra.pdf}
}
\caption{Plots of precision, recall and accuracy for each setup presented in figure \ref{fig:setups}}
\label{fig:praid}
\end{figure}
\subsection{Simplified setup}
As expected, the classification reaches scores of almost 1 for the \textbf{MoveableBalls} setup (\ref{fig:setup0}) (see figure \ref{fig:praid0}). There is nothing complex about this setup, as the moveable objects share no features with the background. However, in the \textbf{MoveableBricks1} (\ref{fig:setup1}) and \textbf{MoveableBricks2} (\ref{fig:setup3}) setups, the performance does not immediately reach its maximum, as shown in figures \ref{fig:praid1} and \ref{fig:praid3}. This is due to the similarities between the table and the cubes in terms of color and shape. As the classifier takes more time to converge, the sampling process is less efficient at choosing suitable samples at each iteration; therefore, the system takes more time to gather a representative dataset, as shown in figure \ref{fig:posneg}.
A minimum number of samples is needed before components can be split, because of the constraint introduced by the component intersection criterion (see equation \ref{eq:intercond}); this explains the decrease in performance until iteration 100.
\begin{figure}[h]
\centering
\subfloat[Plot for setup \ref{fig:setup1}]{\label{fig:praid_fix_comp0}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{graph_pra_env0_obj1_fix_comp.pdf}
}
\subfloat[Plot for setup \ref{fig:setup3}]{\label{fig:praid_fix_comp1}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{graph_pra_env2_obj1_fix_comp.pdf}
}\\
\subfloat[Plot for setup \ref{fig:setup5}]{\label{fig:praid_fix_comp2}
\includegraphics[height=60mm,width=.9\linewidth,keepaspectratio]{graph_pra_env4_obj3_fix_comp.pdf}
} \\
\subfloat{
\includegraphics[height=60mm,width=.9\linewidth,keepaspectratio]{legend_pra.pdf}
}
\caption{Plots of precision, recall, and accuracy for setups \ref{fig:setup1}, \ref{fig:setup3} and \ref{fig:setup5} for experiments conducted with one component per model. In these experiments, no split or merge operations were applied. These experiments were performed to assess the contribution of the split and merge operations.}
\label{fig:praid_fix_comp}
\end{figure}
The shape feature is sufficient for setup \textbf{WhiteMoveableBalls} (\ref{fig:setup4}), as the classification reaches scores of almost 1 (see figure \ref{fig:praid4}). For setup \textbf{WhiteMoveableBricks} (\ref{fig:setup5}), the classification does not reach maximum performance (see figure \ref{fig:praid5}), but this is expected, as the background and the moveable objects have similar shapes. Even in this setup, the accuracy converges to a value above 0.8, which seems sufficient to perform a relevant segmentation, as can be seen in figure \ref{fig:final_rm0}, which depicts a relevance map obtained using setup \textbf{WhiteMoveableBricks} (\ref{fig:setup5}). The segmentation is not perfect, but it is accurate enough to identify object hypotheses that can be validated in the next step of the robot's developmental process (see section \ref{sec:ip}).
On setup \textbf{SimKitchen} (\ref{fig:simkitchen}), the performance reaches a value above 0.8 after 150 interactions (see figure \ref{fig:simkitchenpra}). It is better than the results for setup \textbf{WhiteMoveableBricks}, probably because the simulated kitchen is richer in shapes and colors.
\begin{figure}[h]
\centering
\subfloat[Plot for setup \ref{fig:setup0}]{\label{fig:posneg0}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{graph_pos_neg_env0_obj0.pdf}
}
\subfloat[Plot for setup \ref{fig:setup1}]{\label{fig:posneg1}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{graph_pos_neg_env0_obj1.pdf}
}\\
\subfloat[Plot for setup \ref{fig:setup3}]{\label{fig:posneg3}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{graph_pos_neg_env2_obj1.pdf}
}
\subfloat[Plot for setup \ref{fig:setup4}]{\label{fig:posneg4}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{graph_pos_neg_env3_obj2.pdf}
} \\
\subfloat[Plot for setup \ref{fig:simkitchen}]{\label{fig:simkitchen_posneg}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{kitchen_pos_neg.pdf}
}
\subfloat[Plot for setup \ref{fig:setup5}]{\label{fig:posneg5}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{graph_pos_neg_env4_obj3.pdf}
} \\
\subfloat{
\includegraphics[height=60mm,width=.9\linewidth,keepaspectratio]{legend_pos_neg.pdf}
}
\caption{Plots of the number of samples gathered for each class at each iteration during the experiments for each simplified setup presented in figure \ref{fig:setups}.}
\label{fig:posneg}
\end{figure}
The selection process is far from uniform random sampling, which would draw a majority of samples from the background. The comparison of the number of samples in each class can be considered a convergence criterion: if the dataset converges to the same number of samples in both classes, it means that the classifier is able to distinguish the two classes with sufficient precision. As shown in figure \ref{fig:posneg}, all explorations reach the same number of samples in each class with no variability over the replications, indicating the stability of the selection process.
According to these results, setup \textbf{MoveableBricks2} (\ref{fig:setup3}) is actually harder for the classifier to deal with than setup \textbf{WhiteMoveableBalls} (\ref{fig:setup4}).
\begin{figure}[h]
\centering
\subfloat[Plot for setup \ref{fig:setup0}]{\label{fig:nbcomp0}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{env0_obj0_nbr_comp.pdf}
}
\subfloat[Plot for setup \ref{fig:setup1}]{\label{fig:nbcomp1}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{env0_obj1_nbr_comp.pdf}
}\\
\subfloat[Plot for setup \ref{fig:setup3}]{\label{fig:nbcomp3}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{env2_obj1_nbr_comp.pdf}
}
\subfloat[Plot for setup \ref{fig:setup4}]{\label{fig:nbcomp4}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{env3_obj2_nbr_comp.pdf}
}\\
\subfloat[Plot for setup \ref{fig:simkitchen}]{\label{fig:kitchencomp}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{kitchen_comp.pdf}
}
\subfloat[Plot for setup \ref{fig:setup5}]{\label{fig:nbcomp5}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{env4_obj3_nbr_comp.pdf}
} \\
\subfloat{
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{legend_nbr_comp.pdf}
}
\caption{Plots of the number of components of each class at each iteration during the experiments for each simplified setup presented in figure \ref{fig:setups}.}
\label{fig:nbcomp}
\end{figure}
Figure \ref{fig:nbcomp} shows the number of components for each class at each iteration.
The increase in the number of components as the setup becomes more complex is worth noting. Most likely this corresponds to cases in which both classes share common features. Setup \textbf{WhiteMoveableBricks} (\ref{fig:setup5}), which led to the worst results (see figure \ref{fig:praid5}), has a rapidly increasing number of components and does not reach a plateau within 1000 iterations (see figure \ref{fig:nbcomp5}). Also, for all setups except setup \ref{fig:setup5}, new components are created only after iteration 100; this is a consequence of the constraint introduced by the intersection criterion (see equation \ref{eq:intercond}). For the setup on the simulated kitchen (\ref{fig:simkitchen}), the number of components is also increasing but seems to slow down (see figure \ref{fig:kitchencomp}). Moreover, the number of components for the non-moveable category is much lower than for the moveable category. This suggests that the background is less complex than the objects, which seems to be the case (see figure \ref{fig:simkitchen}).
Figure \ref{fig:praid_fix_comp} shows the performance of experiments conducted with only one component per model, i.e. when no split or merge operations are applied. In this case, performances are poorer than those including split and merge operations. This indicates that setups \textbf{MoveableBricks1} (\ref{fig:setup1}), \textbf{MoveableBricks2} (\ref{fig:setup3}) and \textbf{WhiteMoveableBricks} (\ref{fig:setup5}) involve nonconvex classes and nonlinearly separable datasets and justifies the need for the proposed split and merge operations.
\begin{figure}[h]
\centering
\subfloat[Accuracy]{\label{fig:alphas_acc}
\includegraphics[height=60mm,width=.9\linewidth,keepaspectratio]{accuracy_alphas.pdf}
} \\
\subfloat[Precision]{\label{fig:alphas_pre}
\includegraphics[height=60mm,width=.9\linewidth,keepaspectratio]{precision_alphas.pdf}
}\\
\subfloat[Recall]{\label{fig:alphas_rec}
\includegraphics[height=60mm,width=.9\linewidth,keepaspectratio]{recall_alphas.pdf}
} \\
\subfloat{
\includegraphics[height=60mm,width=.2\linewidth,keepaspectratio]{legend_alphas.pdf}
}
\caption{Results of the experiments conducted on the simulated kitchen with $\alpha$ varying between 0 and 1 with a step of 0.1. The replications with $\alpha$ strictly less than 1 are grouped into the black curve, and those with $\alpha$ equal to 1 into the red curve.}
\label{fig:alphas}
\end{figure}
Figure \ref{fig:alphas} represents the results of experiments conducted on the simulated kitchen (\ref{fig:simkitchen}) with $\alpha$ varying between 0 and 1 with a step of 0.1. For better clarity, the replications with $\alpha$ between 0 and 0.9 are grouped in the black curve; thus, the black curve gathers 100 replications and the red one 10 replications. The replications are grouped this way because splits and merges are absent only for $\alpha$ equal to 1. The accuracy (see figure \ref{fig:alphas_acc}) is approximately the same for any value of $\alpha$. For $\alpha$ strictly less than 1, the precision converges to a value around 0.9 (see figure \ref{fig:alphas_pre}) and the recall to a value around 0.8 (see figure \ref{fig:alphas_rec}), while, for $\alpha$ equal to 1, the precision keeps decreasing and the recall converges quickly to a value close to 1. Also, for $\alpha$ strictly less than 1, recalls and precisions have a low variability. The results indicate that with split and merge, for any value of $\alpha$ strictly less than 1, the classification reaches a sufficient quality with a somewhat low recall, while without split and merge ($\alpha$ equal to 1), the classification has a very low precision despite a high accuracy and recall.
Finally, the $\alpha$ parameter could be fixed to any value between 0 and 0.9. The split and merge operations allow the system to have a better precision in classification, but they lower the recall.
\subsection{Real world setup}
\begin{figure}[h]
\centering
\subfloat[Plot for setup \ref{fig:real_setup0}]{\label{fig:prareal0}
\includegraphics[height=60mm,width=.95\linewidth,keepaspectratio]{graph_pra_workbench1.pdf}
} \\
\subfloat[Plot for setup \ref{fig:real_setup1}]{\label{fig:prareal1}
\includegraphics[height=60mm,width=.95\linewidth,keepaspectratio]{graph_pra_workbench2.pdf}
} \\
\subfloat{
\includegraphics[height=60mm,width=.9\linewidth,keepaspectratio]{legend_pra.pdf}
}
\caption{Plots of precision, recall, and accuracy for each real setup presented in figure \ref{fig:real_setups} }
\label{fig:prareal}
\end{figure}
In this setup, the data is obtained after the application of the push movement primitive and the detection of an eventual change in the targeted area.
As expected, the performance in the real environment does not reach the maximum level, but it does reach 0.8 for accuracy (see figure \ref{fig:prareal}). This is enough to produce a useful segmentation of the scene, as shown in figures \ref{fig:final_rm1} and \ref{fig:final_rm2}. The classification is less efficient for setup \textbf{Workbench2} (\ref{fig:real_setup1}) than for setup \textbf{Workbench1} (\ref{fig:real_setup0}), and it is less stable over the iterations. This is expected as setup \textbf{Workbench2} (\ref{fig:real_setup1}) is more complex than setup \textbf{Workbench1} (\ref{fig:real_setup0}).
\begin{figure}[h]
\centering
\subfloat[Plot for setup \ref{fig:real_setup0}]{\label{fig:posnegreal0}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{graph_pos_neg_w1.pdf}
}
\subfloat[Plot for setup \ref{fig:real_setup1}]{\label{fig:posnegreal1}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{graph_pos_neg_w2.pdf}
} \\
\subfloat{
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{legend_pos_neg2.pdf}
}
\caption{Plots of the number of samples gathered for each class at each iteration during the experiments for each real setup presented in figure \ref{fig:real_setups}. }
\label{fig:posnegreal}
\end{figure}
A total of 400 iterations is enough to achieve almost the same number of samples in both classes, as shown in figure \ref{fig:posnegreal}. Compared with the simulation, the exploration produces mislabeled samples owing to failed interactions and failures in the detection of motion. As the classifier is trained online, it is more sensitive to mislabeled samples, which introduce instability over the iterations and variability over the replications.
\begin{figure}[h]
\centering
\subfloat[Plot for setup \ref{fig:real_setup0}]{\label{fig:nbrcompreal0}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{workbench1_nbr_comp.pdf}
}
\subfloat[Plot for setup \ref{fig:real_setup1}]{\label{fig:nbrcompreal1}
\includegraphics[height=60mm,width=.45\linewidth,keepaspectratio]{workbench2_nbr_comp.pdf}
} \\
\subfloat{
\includegraphics[height=60mm,width=.7\linewidth,keepaspectratio]{legend_nbr_comp.pdf}
}
\caption{Plots of the number of components of each class at each iteration during the experiments for each real setup presented in figure \ref{fig:real_setups}.}
\label{fig:nbrcompreal}
\end{figure}
The increasing number of components seen in the simulation is related to the complexity of the environment; with the real robot, it is also related to mislabeled samples. Indeed, mislabeled samples introduce a higher complexity in the distribution of samples in the feature space; therefore, component splits are more likely to occur. This explains the large variability in the number of components in figure \ref{fig:nbrcompreal0}.
\begin{figure}[h]
\centering
\subfloat[Plot for setup \ref{fig:setup5}]{\label{fig:final_rm0}
\includegraphics[height=60mm,width=.95\linewidth,keepaspectratio]{maps_env4_obj3-reduced.pdf}
} \\
\subfloat[Plot for setup \ref{fig:real_setup0}]{\label{fig:final_rm1}
\includegraphics[height=60mm,width=.95\linewidth,keepaspectratio]{maps_w1-reduced.pdf}
} \\
\subfloat[Plot for setup \ref{fig:real_setup1}]{\label{fig:final_rm2}
\includegraphics[height=60mm,width=.95\linewidth,keepaspectratio]{maps_w2-reduced.pdf}
}
\caption{From right to left: cloud pictures of both real setups (\ref{fig:real_setups}) and setup \textbf{WhiteMoveableBricks} (\ref{fig:setup5}); relevance map with in yellow the highest probability to be moveable and in black the lowest (from 0.0 to 1.0); average choice distribution map over an exploration with in red the most explored areas and in blue the least explored areas (from 0.0 to 0.6). The selection process is far from random sampling.}
\label{fig:final_rm}
\end{figure}
Figure \ref{fig:final_rm} presents the best performing relevance map for both real setups and for the most complex simulated setup, \textbf{WhiteMoveableBricks} (\ref{fig:setup5}). It also shows an average choice distribution map over all iterations, which represents the most probable exploration areas. This map provides an insight into which parts of the environment are most considered during exploration. For the three setups, the exploration is more focused on complex areas such as moveable or fixed objects (that are then part of the background). With an exploration driven by uncertainty, this is an expected feature as the complex areas are the slowest to decrease their uncertainty.
Finally, figure \ref{fig:rm_seq} shows a sequence of relevance maps generated from classifiers taken at different moments of the exploration on the setup \textbf{Workbench2} (\ref{fig:real_setup1}). The first relevance map, after 1 interaction, is totally neutral, showing that the whole environment is considered moveable with a probability of $\frac{1}{2}$, while the last relevance map attributes a high probability almost exclusively to the cars.
All the source code used to produce these results can be found on github\endnote{CMMs source code: \url{https://github.com/LeniLeGoff/IAGMM_Lib/tree/rm-pub-18}; Experiments source code: \url{https://github.com/robotsthatdream/babbling_experiments/}}.
\begin{figure*}[t!]
\centering
\subfloat[Relevance map after 1 interaction]{\label{fig:rm_it1}
\includegraphics[height=60mm,width=.19\linewidth,keepaspectratio]{wb-it1-rm.pdf}
}
\subfloat[Relevance map after 10 interactions]{\label{fig:rm_it10}
\includegraphics[height=60mm,width=.19\linewidth,keepaspectratio]{wb-it10-rm.pdf}
}
\subfloat[Relevance map after 50 interactions]{\label{fig:rm_it50}
\includegraphics[height=60mm,width=.19\linewidth,keepaspectratio]{wb-it50-rm.pdf}
}
\subfloat[Relevance map after 100 interactions]{\label{fig:rm_it100}
\includegraphics[height=60mm,width=.19\linewidth,keepaspectratio]{wb-it100-rm.pdf}
}
\subfloat[Relevance map after 400 interactions]{\label{fig:rm_it400}
\includegraphics[height=60mm,width=.19\linewidth,keepaspectratio]{wb-it400-rm.pdf}
}
\subfloat{
\includegraphics[height=60mm,width=.013\linewidth,keepaspectratio]{legend_rm.pdf}
} \\
\subfloat[Pointcloud used to generate above image]{\label{fig:cloud_it1}
\includegraphics[height=60mm,width=.19\linewidth,keepaspectratio]{wb-it1-cloud.pdf}
}
\subfloat[Pointcloud used to generate above image]{\label{fig:cloud_it10}
\includegraphics[height=60mm,width=.19\linewidth,keepaspectratio]{wb-it10-cloud.pdf}
}
\subfloat[Pointcloud used to generate above image]{\label{fig:cloud_it50}
\includegraphics[height=60mm,width=.19\linewidth,keepaspectratio]{wb-it50-cloud.pdf}
}
\subfloat[Pointcloud used to generate above image]{\label{fig:cloud_it100}
\includegraphics[height=60mm,width=.19\linewidth,keepaspectratio]{wb-it100-cloud.pdf}
}
\subfloat[Pointcloud used to generate above image]{\label{fig:cloud_it400}
\includegraphics[height=60mm,width=.19\linewidth,keepaspectratio]{wb-it400-cloud.pdf}
}
\caption{Sequence of pointclouds representing a relevance map at different points during the exploration. These images have been generated after an exploration.}
\label{fig:rm_seq}
\end{figure*}
\section{Discussion and Future work}\label{sec:disc}
The exploration process is the most crucial component of this method. On the one hand, the classifier requires a representative dataset of the scene; on the other hand, it requires a suitable sample at each iteration to learn efficiently. With only a uniform random exploration, the dataset would be composed of an overwhelming majority of background samples; therefore, the classifier would have difficulty converging (see figures \ref{fig:posneg} and \ref{fig:posnegreal}). In the proposed approach, the sampling process is based on an uncertainty reduction. Uncertainty is measured based on the probability of membership to each class being close to 0.5, i.e. at the border of both Gaussian mixture models. This focuses exploration on unknown or poorly known areas. The confidence of the classification is used to focus the exploration on areas in which the dataset has a low density. Confidence draws inspiration from entropy while being less costly to compute. The entropy of models is a measure of information quantity. The selection process thus increases the representativeness of the dataset. Combining uncertainty and confidence allows the exploration process to focus on unknown and informative areas, as shown in the left picture of figure \ref{fig:final_rm}. Finally, to balance the dataset between the two classes, priority is given to sampling the less represented class of the dataset.
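As a rough illustration of this selection principle, the sketch below scores candidate regions by how close their predicted probability of being moveable is to 0.5, and up-weights the class that is currently under-represented in the dataset. The function names, the weighting factor of 2 and the weighted draw are illustrative assumptions, not the implementation used in the experiments.

```python
import random

def uncertainty(p_move):
    """Closeness of the membership probability to the 0.5 decision
    boundary: 1 at p = 0.5, 0 at p = 0 or p = 1."""
    return 1.0 - 2.0 * abs(p_move - 0.5)

def choose_next_sample(candidates, n_pos, n_neg, rng=None):
    """candidates: list of (region_id, p_move) pairs.
    Draw a region with probability proportional to its uncertainty,
    biased towards the class that is under-represented so far."""
    rng = rng or random.Random(0)
    weights = []
    for _, p in candidates:
        w = uncertainty(p)
        # favour the less represented class of the dataset
        if n_pos < n_neg and p >= 0.5:
            w *= 2.0
        elif n_neg < n_pos and p < 0.5:
            w *= 2.0
        weights.append(w)
    total = sum(weights)
    if total == 0.0:
        return rng.choice(candidates)[0]
    r = rng.uniform(0.0, total)
    acc = 0.0
    for (region_id, _), w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return region_id
    return candidates[-1][0]
```

With such a score, a region the classifier is already sure about ($p$ close to 0 or 1) is almost never selected, which is consistent with the behaviour seen in the choice distribution maps: exploration concentrates on the areas whose uncertainty decreases slowest.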
The quality of the relevance map depends on the precision of the change detector. In this work, the change detector is a simple frame comparison between before and after the execution of the push primitive. This component of the framework could easily be enhanced by adding haptic sensors on the robot's end-effector or a real-time tracking system for motion detection. The other components are mostly independent of the change detector; thus, it can be changed without major issues.
The online training of CMMs does not offer a precise measure of convergence. In batch learning, test steps give a measure of overfitting and the generalization error, which is not achievable in online learning. In online learning, establishing a test dataset is complicated, as the test dataset must be sufficiently different from the training dataset to detect overfitting and to compute the generalization error. Within the budget fixed for the experiment, the classifier converges, as shown in figures \ref{fig:praid} and \ref{fig:prareal}, in which precision, recall, and accuracy converge to a mean value. In addition, the exploration always reaches a balanced dataset, as shown in figures \ref{fig:posneg} and \ref{fig:posnegreal}, suggesting the convergence of the classifier. However, for the most complex setup \ref{fig:real_setup1}, the performance is unstable and decreases several times. Thus, accumulating more samples does not guarantee an increase in performance.
The log-likelihood is a promising path to explore as a measure of convergence, as it is used, for example, in the M-step of the EM algorithm.
The classifier used in this paper is designed to be non-specific to a particular kind of environment. In particular, the hyperparameters of the model should be the same for all environments. CMMs have one critical hyperparameter: the tolerance ellipse size ($\alpha$). It determines the sensitivity of the \textit{merge} and \textit{split} operations. This parameter could be tuned to obtain the best classification efficiency, but the experiments show a low variability in the results when changing $\alpha$ (see figure \ref{fig:alphas}). In this study, the value of $\alpha$ was fixed to the same value for all the experiments ($\alpha = 0.25$) (see sections \ref{sec:alg} and \ref{sec:proto}). Thus, a compromise for a large set of environments can be found for the value of this hyperparameter. Moreover, varying the value of $\alpha$ between 0 and 0.9 does not introduce a high variability in classification quality.
The quality of the classification is conditioned by the features used. A complex environment would require features that can capture this complexity to generate an efficient segmentation. However, as the feature extractor is designed prior to exploration, it could potentially reduce the kinds of environments to which the robot can adapt. The results of setups \textbf{WhiteMoveableBalls} (\ref{fig:setup4}) and \textbf{WhiteMoveableBricks} (\ref{fig:setup5}) show that the feature space can be more complex than required in practice (in these setups, the color descriptor is ignored by the method). In the work of \citet{jiang2013discri}, the feature space is 93-dimensional, which is more than what is actually required for a lot of problems; however, a feature space richer than strictly necessary is not a drawback. In the proposed approach, the feature space used is 48-dimensional, which is already large and integrates rich shape and color information. It allows the method to be more adaptive. Thus, to deal with any situation the robot may encounter, the use of a descriptor as rich as possible is recommended. One option is to use designed visual features (as in this work); another option is to use the features built by a pretrained convolutional neural network \citep{hariharan2015hypercolumns}.
The interactive perception paradigm has a strong link with affordances. An affordance is a relational property which emerges from the agent-environment system. This concept makes it possible to formalize a representation of the environment through the actions of the robot \citep{gibson1979}. As the relevance map is built through the interaction of the robot with a push primitive, it represents the probability of parts of the environment being "pushable"; in other words, it represents areas of the environment that afford the push action for the robot. As for the change detector, the primitive is an independent component of the framework, so it is possible to change the primitive with minimal effort. Other experiments with primitives like lifting, pulling, or grasping could be conducted with the proposed method. Of course, a change detector adapted to the new primitive needs to be provided. In these cases, the relevance map would represent affordances like "liftable", "pullable" or "graspable".
\section{Conclusion}\label{sec:con}
A method has been introduced that allows a robot to segment a visual scene into two different classes: regions that belong to moveable areas and regions that belong to non-moveable areas. The method relies on interactive perception. It includes a classifier that is trained online and a sampling process that selects the regions upon which to focus. The classifier is called collaborative mixture models (CMMs). It has been designed to exhibit five properties: (1) ability to handle non-convex/non-linearly separable data, (2) ability to estimate classification uncertainty, (3) environment agnostic parameter tuning, (4) supervision, and (5) online training. The robot interacts with areas selected via a sampling process that aims to reduce uncertainty in the classification and balancing of samples in each class. A change detector determines the class of the region with which the robot has interacted. The corresponding data are added to a dataset used to train the classifier. The approach generates a relevance map segmenting potential objects from the background. This information can be used to bootstrap an object discovery method, reducing the assumptions on the structure of the environment and thus paving the way to approaches that can adapt to a wider range of environments. The approach has been tested on setups of increasing complexity using simulations and a real PR2 robot.
\section*{Acknowledgements}
This work is supported by the DREAM project \endnote{\url{http://www.robotsthatdream.eu}} through the European Union's Horizon 2020 research and innovation program under grant agreement No 640891. This work has been partially sponsored by the French government research program Investissements d'avenir through the Robotex Equipment of Excellence (ANR-10-EQPX-44).
\theendnotes
\bibliographystyle{SageH}
\section{Introduction}
Data related to meteorology, geology and hydrology are often connected to geographical locations. The data are typically linked to point locations, but there are also data observed over an areal unit, e.g. over a crop field, a forest, a grid from a satellite observation or an administrative unit like a country. While point referenced data give information about the process of interest at one specific location, the areal referenced data impose a constraint on the process and/or contain information about aggregated or mean values in a larger area.
For some processes, there exist point data \textit{and} areal data that give information about the same underlying process, and studies show that both observation types should be taken into account when making statistical inference and predictions \citep{areadata1,areadata2}. There are several challenges connected to simultaneously using data of different spatial support: The data types must be connected to the process of interest in a meaningful way, and expert opinions about the involved measurement uncertainties should be taken into account. In addition, information about how the point and areal data are related to each other is important, such that the observation types can be combined in a mathematically consistent way that preserves basic physical laws (i.e. the conservation of mass and energy).
In this article we consider runoff, which is an example of a process that can be observed through point and areal data. Runoff is defined as the part of the precipitation that flows towards a river on the ground surface (surface runoff) or within the soil (subsurface runoff or interflow) \citep{WMO}. Every point in the landscape contributes to runoff generation, and on an annual scale runoff can be approximated by the estimated point precipitation minus the actual point evaporation at a location of interest \citep{Sauquet2000}. With this interpretation, runoff is a continuous point referenced process in space. However, runoff accumulated over an area is typically observed by measuring the amount of water that flows through the outlet of a stream. The observed value does not primarily provide information about the runoff at the location of the stream outlet: It primarily provides information about the runoff generating process in the whole drainage area which is called a catchment. Such observations of runoff are therefore areal referenced.
Since most catchments in the world are ungauged (i.e. without runoff observations), a common task for hydrologists is to predict runoff in these catchments. In this article we consider predictions of \emph{annual} runoff which is a key hydrological signature. The annual runoff gives information about the total amount of water available in an area of interest and is fundamental for water resources management, i.e. in the planning of domestic, agricultural, and industrial water supply, and for allocation of water between stakeholders. Annual runoff is also commonly used as a key variable when predicting other runoff properties in ungauged catchments, i.e. low flows and floods \citep{PUB2}. Furthermore, the variability in annual runoff is interesting as it is a key quantity for understanding runoff's sensitivity to driving climatic factors in today's climate, and can be used to make inference about the runoff variability also for future climates.
There are several approaches to predict runoff in ungauged catchments in hydrology, e.g. process-based methods \citep{Beldring,wasmod} and geostatistical methods \citep{Gottschalk1993b, Sauquet2000,topkriging}. In this article, we choose a geostatistical approach. Within the geostatistical framework, runoff predictions in ungauged catchments have typically been done by interpolation of areal referenced runoff data by using Kriging methods (see e.g. \cite{topkriging} or \cite{Sauquet2000}). This has shown promising results. In these methods, precipitation data have often been avoided as an information source because these data are known to be uncertain and/or biased (see e.g. \cite{raingauge}, \cite{Groisman} or \cite{Regnusikkerhet}). Evaporation data are even more uncertain: they are seldom observed directly, but derived from meteorological observations and process-based models like in e.g. \cite{evap1} or \cite{evap3}. In spite of the large uncertainties linked to precipitation and evaporation measurements, precipitation and evaporation are the main drivers behind runoff, and it is reasonable to believe that these data sources can contribute to an increased understanding of the runoff generating process if used cleverly, particularly in areas with few streamflow observations.
Motivated by this, we present a Bayesian geostatistical model for annual runoff where we in addition to runoff data, use precipitation and evaporation data for spatial interpolation. The suggested model is a Bayesian hierarchical model where the observation likelihood consists of areal referenced runoff observations from catchments and/or point observations of runoff, where the point observations are annual evaporation subtracted from annual precipitation. Informative priors based on expert knowledge are used on the measurement uncertainties to express our doubt on the precipitation and evaporation data, and to put more weight on the runoff observations that are considered more reliable.
The catchments we study in this article are located around Voss in western Norway. Voss is a mountainous area, and the areas west of Voss are among the wettest in Europe, with annual precipitation around 3 m/year. This makes Voss flood exposed, and accurate runoff models are of high importance. Voss is also a challenging area when it comes to runoff estimation due to large spatial variability and low stream gauge density. However, there are several precipitation gauges in the area that can be exploited to increase the hydrological understanding. This makes the Voss area a good candidate for performing spatial interpolation of runoff by also including precipitation and evaporation data.
The large annual precipitation in western Norway is mainly caused by the orographic enhancement of frontal precipitation formed around extratropical cyclones. The orographic enhancement is explained by steep mountains that create a topographic barrier for the western wind belt, which transports moist air across the North Atlantic \citep{Stohl}. The topography and the elevation differences result in prominent patterns in precipitation and runoff.
Motivated by the strong orographic effect, we include a spatial component in the model that is constant over the years for which we have runoff observations. This represents the spatial variability of runoff caused by climatic conditions in the study area. Furthermore, it is reasonable to assume that not all of the spatial variability can be explained by the climate, and we include an additional spatial effect to describe the annual discrepancy from the climate.
The climatic part of the model is interesting because it let us quantify how much of the spatial variability that can be explained by long-term effects. Separating long-term spatial variability from year dependent effects can lead to a better understanding of systematic biases and uncertainties that occur in the prediction of environmental variables due to weather patterns and processes that are more or less apparent each year. A consequence of including the climatic component is also that we obtain a model for which it is possible to exploit short records of data: The climatic component captures how the short records vary relatively to longer data series from nearby catchments. This is a valuable property because sparse datasets are common in hydrology. There are several studies on how short records of runoff can be used to estimate different hydrological signatures \citep{shortrec1,shortrec2}, but our framework represents a new approach by incorporating the short records into a geostatistical framework where several years of runoff are modeled simultaneously through a climatic spatial field.
Making inference and predictions with geostatistical models often leads to computational challenges due to matrix operations on (dense) covariance matrices, and in our suggested model we have not only one, but two spatial fields. Our solution to the computational challenges is to use the \textsc{spde}-approach to spatial modeling from \cite{mainINLA}. \cite{mainINLA} utilizes that a Gaussian random field (\textsc{grf}) with a Matérn covariance function can be expressed as the solution of a stochastic partial differential equation (\textsc{spde}). By approximating the solution of the \textsc{spde} by using the finite element method \citep{FEM}, the involved \textsc{grf}s can be expressed as Gaussian Markov random fields (\textsc{gmrf}s). The \textsc{gmrf} approximations enable fast simulation and inference \citep{GMRFbook}, and integrated nested Laplace approximations (\textsc{inla}) can be applied \citep{mainINLA}.
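To give a concrete sense of the Matérn family underlying the \textsc{spde} approach, the sketch below evaluates the covariance $C(d)=\sigma^2 \frac{2^{1-\nu}}{\Gamma(\nu)}(\kappa d)^{\nu}K_{\nu}(\kappa d)$ for the two smoothness values that admit simple closed forms; the parameter names follow the \textsc{spde} literature, and the restriction to $\nu \in \{1/2, 3/2\}$ is only to keep the example dependency-free.

```python
import math

def matern_cov(d, sigma2=1.0, kappa=1.0, nu=0.5):
    """Matérn covariance between two points a distance d apart.
    nu = 1/2 gives the exponential covariance sigma2 * exp(-kappa*d);
    nu = 3/2 gives sigma2 * (1 + kappa*d) * exp(-kappa*d)."""
    kd = kappa * d
    if nu == 0.5:
        return sigma2 * math.exp(-kd)
    if nu == 1.5:
        return sigma2 * (1.0 + kd) * math.exp(-kd)
    raise ValueError("closed form implemented only for nu in {0.5, 1.5}")
```

Here $\sigma^2$ is the marginal variance and $\kappa$ controls the spatial range; it is this covariance function that the finite element approximation replaces by a sparse \textsc{gmrf} precision structure.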
In geostatistical methods used for runoff interpolation it is common to link the involved catchments to point locations in space, not to areas (see e.g. \cite{euclid2} or \cite{euclid3}). However, interpreting catchment runoff as point referenced can lead to a violation of basic conservation laws: A significant property of catchments is that they are organized into subcatchments, and for annual runoff the water balance must be conserved for all subcatchments. That is, the total amount of annual runoff in a subcatchment cannot be larger than the total annual runoff in the main catchment. In the Top-Kriging approach developed by \cite{topkriging} the nested structure of catchments is taken into account by computing the covariance between two catchments based on the pairwise distance between all the grid nodes in a discretization of the target catchments. This way, information from a subcatchment is weighted more than information from a nearby non-overlapping catchment. The Top-Kriging approach is currently one of the leading interpolation methods for runoff, and has outperformed other methods in predicting several hydrological signatures in Austria \citep{Austriacompare}.
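The idea of computing covariances between areal units from their discretizations can be sketched as follows; the exponential point covariance and the parameter values are illustrative stand-ins, not the covariance actually used by Top-Kriging or by our model.

```python
import math

def point_cov(p1, p2, sigma2=1.0, rho=10.0):
    """Exponential (Matérn with nu = 1/2) covariance between points."""
    return sigma2 * math.exp(-math.dist(p1, p2) / rho)

def areal_cov(grid_a, grid_b, **kw):
    """Covariance between two catchments, approximated by averaging the
    point covariance over all pairs of discretization nodes.  Nodes
    shared by a catchment and its subcatchment automatically raise the
    covariance, which is how the nested structure enters."""
    s = sum(point_cov(a, b, **kw) for a in grid_a for b in grid_b)
    return s / (len(grid_a) * len(grid_b))
```

Because a subcatchment's grid nodes lie inside the main catchment's grid, the resulting covariance between the two is larger than the covariance with an equally distant but non-overlapping catchment, so observations from subcatchments receive more weight.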
Our model is similar to the Top-Kriging approach by that we consider streamflow observations as areal referenced and compute the covariance between two catchments accordingly. However, our methodology differs from Top-Kriging and other hydrological interpolation methods by using precipitation (point) data in the interpolation framework in addition to nested streamflow (areal) data. As this is an important difference, one of the main objectives of this paper is to: \\
1) Explore how the runoff predictions in Voss are influenced by the two different observation types (point and areal observations), and assess if the combination of point and areal data can contribute to an increased predictive performance. \\\vspace{-3mm}
Furthermore, the model we suggest ensures that the water balance is preserved for any point in the landscape by defining annual runoff in a catchment as the integral of the point runoff over the catchment's area. Top-Kriging and other geostatistical models don't necessarily provide a full preservation of the water balance. A second objective is therefore to: \\
2) Show by example how the interaction between point observations and nested areal observations can contribute to improved predictions of annual runoff because the water balance is taken into account.\\ \vspace{-3mm}
A geostatistical model that combines point and areal data in the same way as we do already exists in the literature in \cite{areadata1}. What is new in our model in terms of statistical modeling is the climatic spatial component. A final objective of the paper is thus to:\\
3) Present a model for which the spatial variability due to long-term spatial patterns can be quantified, and show how this can be used as a tool for understanding the uncertainty and biases in the modeling of environmental variables, and for exploiting short records of data.
In the section that follows, we present the study area and the available data. Next, we introduce the theoretical background needed to develop the suggested runoff model that is presented in Section \ref{sec:models}. In Section \ref{sec:casestudy} the suggested model is fitted to the Voss data. Based on some observation schemes described in Section \ref{sec:eval}, the predictability of annual runoff in Voss is evaluated and discussed. To further demonstrate the value of including a climatic spatial field in the model, a simulation study was carried out. This is presented in Section \ref{sec:simulationstudy}. Finally, our key findings are discussed in Section \ref{sec:discussion}.
\section{Study area and data}\label{sec:data}
\begin{figure*}[h!!]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[page=1, trim = 0mm 0mm 40mm 0mm, clip, width=6cm]{vossieuropa2.pdf}
\caption{\small The study area is located around Voss in Western Norway.}
\label{fig:vossieuropa}
\end{subfigure}
~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[page=1, trim = 0mm 0mm 0mm 0mm, clip, width=8cm]{data10yr.pdf}
\caption{\small Mean annual observations [m/year].}\label{fig:data1}
\end{subfigure}
\caption{Mean annual runoff from 5 catchments and mean annual precipitation minus evaporation (m/year) at 15 precipitation gauges for 1988-1997. Catchment 3 is a subcatchment of Catchment 4 and 5, and Catchment 4 is a subcatchment of Catchment 5. Catchment 1 and Catchment 2 don't overlap with any of the other catchments. The coordinate system in Figure \ref{fig:data1} is utm32.}\label{fig:data}
\end{figure*}
When modeling hydrological processes on an annual scale, it is common to use the hydrological definition of a year. The basic water balance equation is given as $P = Q + E + S$, where $P$ is precipitation, $Q$ is runoff, $E$ is evapotranspiration and S is the change in stored water (i.e. snow, or groundwater). A hydrological year is defined such that the storage component in the water balance equation can be neglected, i.e. $S$ is much smaller than $P$ and $Q$. In Norway a hydrological year starts September 1st and ends August 31st, e.g. 1988 begins September 1st 1987 and ends August 31st 1988.
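As a hedged sketch of this bookkeeping (the function names are ours): a date is mapped to the hydrological year in which its September-August period ends, and daily values are summed per hydrological year.

```python
from datetime import date

def hydrological_year(d):
    """Norwegian hydrological year: runs from September 1st to August
    31st and is named after the calendar year in which it ends."""
    return d.year + 1 if d.month >= 9 else d.year

def aggregate_to_hydro_years(daily):
    """daily: iterable of (date, value) pairs.
    Returns a dict {hydrological_year: annual sum}."""
    totals = {}
    for d, v in daily:
        y = hydrological_year(d)
        totals[y] = totals.get(y, 0.0) + v
    return totals
```

For example, both September 1st 1987 and August 31st 1988 fall in the hydrological year 1988, matching the convention used for the runoff dataset.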
In this analysis, we have runoff data from the hydrological years 1988-2014. The dataset was provided by the Norwegian Water Resources and Energy directorate (\textsc{nve}) and consists of annual runoff observations from five catchments where three of them are nested (see Figure \ref{fig:data}). The unit of the data is m/year and gives the spatial average of the runoff within a catchment. The observations from 1988-1997 are used to make statistical inference, while the observations from 1998-2014 are used as a test set for assessing the model's ability to predict runoff for future years.
The annual runoff data were created by aggregating daily streamflow measurements. The stream gauges that gather the daily observations do not measure runoff directly; they measure the river's daily stage. Runoff observations are then obtained by using a rating curve that gives the relationship between the stage of the water and the discharge or runoff at a specific point in the stream. The stage-discharge relationship is developed empirically by measuring the discharge across a cross-section of the specific river for a range of stream stages.
Errors in the observed runoff are composed of errors related to the river stage measurement process and errors in the rating curve model. However, on an annual time scale, the river stage measurement errors tend to average out, and the main contribution to errors originates from uncertainties in the rating curve. The dataset provided by \textsc{nve} includes an estimate of the standard deviation of the observation uncertainty for each (annual) runoff observation, and the standard deviations are relatively small ranging from 0.65 \% to 3.2 \% of the corresponding observed value. This information is used to make informative priors for the measurement uncertainties in Section \ref{sec:priors}. We refer to \cite{Reitan2009} for details on how the observation (rating curve model) uncertainty is obtained.
In addition to runoff data, we have precipitation data from 15 precipitation gauges. Daily precipitation data were downloaded from \texttt{www.eKlima.no} which is a web portal maintained by the Norwegian Meteorological Institute. The observations were aggregated to annual values for the hydrological years 1988-1997. The observed precipitation ranges from 0.55 m/year to 4.6 m/year.
The evaporation data used originate from the satellite remote sensing-based evapotranspiration algorithm presented in \cite{evap4}. The dataset consists of global monthly land surface evapotranspiration with a spatial resolution of 1 degree (longitude, latitude). Evaporation data for the locations of the precipitation gauges around Voss were extracted, and monthly values were aggregated to hydrological years (1988-1997). As the spatial resolution of the gridded evaporation dataset is 1 degree and the study area is rather small, the observed annual evaporation within a specific year is the same for almost all of the precipitation gauges. The observed evaporation ranges from 0.23-0.32 m/year with mean 0.25 m/year and standard deviation 0.02 m/year. This means that approximately $12 \%$ of the annual precipitation evaporates around Voss, which is a small amount in a global perspective. The observations of evaporation must be considered as approximate estimates of the actual evaporation in the area of interest, with large uncertainties.
Figure \ref{fig:data} shows the 5 catchments where we have measurements of runoff and the locations of the 15 precipitation gauges. Mean annual values for areal referenced runoff and point referenced runoff (precipitation-evaporation) for 1988-1997 are included. We see a spatial pattern with high values of annual runoff in the western part of the study area and low values in the eastern part. This pattern is prominent for all years for which we have data, and indicates that climatic spatial effects dominate over annual spatial effects around Voss.
\section{Background}
We propose a Latent Gaussian model (\textsc{lgm}) for annual runoff that is computationally feasible due to a stochastic partial differential equation (\textsc{spde}) formulation of Gaussian random fields (\textsc{grf}s). In this section we give a brief introduction of these concepts and other relevant background theory and notation for developing and evaluating the model for annual runoff that is presented in Section \ref{sec:models}.
\subsection{Latent Gaussian Models}\label{sec:LGM}
In this article we suggest a Latent Gaussian model (\textsc{lgm}) for combining point and areal observations of annual runoff. An \textsc{lgm} can be represented in a hierarchical structure consisting of three levels (see e.g. \cite{Gelman}). The first level is the observation likelihood, in this case consisting of two data types $(y_1,...,y_{n})$ and $(z_1,...,z_m)$. The data are observed with conditionally independent likelihood $\Pi_{i=1}^{n} \pi(y_i|q_{i},\boldsymbol{\theta_1^y})\Pi_{j=1}^{m} \pi(z_j|Q_{j},\boldsymbol{\theta_1^z})$ given two linear predictors $q_{i}$ and $Q_{j}$, and some parameters ($\boldsymbol{\theta_1^y}$,$\boldsymbol{\theta_1^z}$) which we refer to as hyperparameters. The two linear predictors depend on the same set of latent variables $\boldsymbol{x}$, but connect the data to the latent field differently, through different projection matrices, e.g. $q_{i}=\boldsymbol{A}_i \boldsymbol{x}$ and $Q_{j}=\boldsymbol{B}_j \boldsymbol{x}$. Here, $\boldsymbol{A}$ and $\boldsymbol{B}$ are matrices that link elements in the latent field to the observations, and $\boldsymbol{A}_i$ and $\boldsymbol{B}_j$ denote row number $i$ and $j$ of the two matrices. The second level of the \textsc{lgm} is formed by the prior of the latent field $\boldsymbol{x}$ and is of the form $ \pi(\boldsymbol{x}|\boldsymbol{\theta_2})\sim \mathcal{N}(\boldsymbol{\mu}(\boldsymbol{\theta_2}),\boldsymbol{\Sigma}(\boldsymbol{\theta_2})),$
i.e. it is Gaussian conditioned on some hyperparameters $\boldsymbol{\theta_2}$. The third level is given by $\pi(\boldsymbol{\theta})$ which is the prior distribution of the hyperparameters $\boldsymbol{\theta}=(\boldsymbol{\theta_1^y},\boldsymbol{\theta_1^z},\boldsymbol{\theta_2})$.
\subsection{Gaussian random fields}
We use Gaussian random fields (\textsc{grf}s) to model the spatial variability of annual runoff. A continuous field $\{x(\boldsymbol{u});\boldsymbol{u}\in \mathcal{D}\}$ defined on a spatial domain $\mathcal{D}\in \mathcal{R}^2$ is a \textsc{grf} if for any collection of locations $\boldsymbol{u}_1,...,\boldsymbol{u}_n\in \mathcal{D}$ the vector $(x(\boldsymbol{u}_1),...,x(\boldsymbol{u}_n))$ follows a multivariate normal distribution \citep{Cressie}, i.e. $(x(\boldsymbol{u}_1),...,x(\boldsymbol{u}_n)) \sim \mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})$. The covariance matrix $\boldsymbol{\Sigma }$ defines the dependency structure in the spatial domain, and can be constructed from a covariance function $C(\boldsymbol{u}_i,\boldsymbol{u}_j)$. Furthermore, the dependency structure for a spatial process is often characterized by two parameters: The marginal variance $\sigma^2$ and the range $\rho$. The marginal variance gives information about the spatial variability of the process of interest, while the range gives information about how the correlation between the process at two locations decays with distance. If the range and marginal variance are constant over the spatial domain, we have a stationary \textsc{grf}.
One popular choice of covariance function is the Matérn covariance function which is given by
\begin{equation}\label{eq:Matern}
C(\boldsymbol{u_i},\boldsymbol{u_j})=\frac{\sigma^2}{2^{\nu - 1}\Gamma(\nu)}(\kappa ||\boldsymbol{u_j}-\boldsymbol{u_i}||)^{\nu}K_{\nu}(\kappa||\boldsymbol{u_j}-\boldsymbol{u_i}||),
\end{equation}
where $||\boldsymbol{u_j}-\boldsymbol{u_i}||$ is the Euclidean distance between two locations $\boldsymbol{u_i}, \boldsymbol{u_j} \in \mathcal{R}^d$, $K_{\nu}$ is the modified Bessel function of the second kind and order $\nu>0$, and $\sigma^2$ is the marginal variance \citep{Matern}. The parameter $\kappa$ is the scale parameter, and it can be shown empirically that the spatial range can be expressed as $\rho = \sqrt{8 \nu}/\kappa$, where $\rho$ is defined as the distance where the spatial correlation between two locations has dropped to 0.1 \citep{SPDELindgren}. Using a Matérn \textsc{grf} is convenient because it makes it possible to apply the \textsc{spde} approach to spatial modeling which is outlined in the next subsection.
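As an illustration, the Matérn covariance in Equation \eqref{eq:Matern} is easy to evaluate numerically. The Python sketch below (with arbitrary, hypothetical parameter values) also checks the range convention: at distance $\rho=\sqrt{8\nu}/\kappa$ the correlation is close to, though not exactly, 0.1 (about 0.14 for $\nu=1$).

```python
import numpy as np
from scipy.special import kv, gamma

def matern_cov(d, sigma2=1.0, kappa=1.0, nu=1.0):
    # Matérn covariance C(u_i, u_j) as a function of distance d
    d = np.atleast_1d(np.asarray(d, dtype=float))
    c = np.full_like(d, sigma2)  # C(0) equals the marginal variance sigma^2
    nz = d > 0
    kd = kappa * d[nz]
    c[nz] = sigma2 / (2.0**(nu - 1.0) * gamma(nu)) * kd**nu * kv(nu, kd)
    return c

# Empirical range rho = sqrt(8*nu)/kappa; the correlation there is
# approximately 0.14 for nu = 1, close to the 0.1 convention
nu, kappa = 1.0, 0.5
rho = np.sqrt(8.0 * nu) / kappa
print(matern_cov(rho, kappa=kappa, nu=nu)[0])
```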
\subsection{The \textsc{spde} approach to spatial modeling}\label{sec:SPDE}
Making statistical inference and predictions on models including \textsc{grf}s involves matrix operations on the covariance matrix $\boldsymbol{\Sigma}$. This can lead to computational challenges if the covariance matrix is dense. In this paper, we suggest a model for annual runoff that includes not only one, but two \textsc{grf}s. Consequently, some simplifications have to be done to make the model computationally feasible. To achieve this, we use the fact that the exact solution of the \textsc{spde}
\begin{equation}\label{eq:SPDE}
(\kappa^2-\Delta)^{\frac{\alpha}{2}}\tau x(\boldsymbol{u})=\mathcal{W}(\boldsymbol{u}), \quad \boldsymbol{u} \in \mathcal{R}^d, \quad \kappa>0, \quad \nu >0,
\end{equation}
is a Gaussian random field with Matérn covariance function. Here, $\mathcal{W}(\cdot)$ is spatial Gaussian white noise, $\Delta$ is the Laplacian, $\alpha$ is a smoothness parameter, $\kappa$ is the scale parameter in Equation \eqref{eq:Matern}, $d$ is the dimension of the spatial domain and $\tau$ is a parameter controlling the variance. The parameters of the Matérn covariance function in Equation \eqref{eq:Matern} are linked to the \textsc{spde} through
\begin{equation*} \sigma^2=\frac{\Gamma(\nu)}{\Gamma(\alpha)(4\pi)^{d/2}\kappa^{2\nu}\tau^2}; \hspace{20mm} \nu=\alpha - d/2 ,
\end{equation*}
where we use $d=2$ and set $\alpha = 2$, such that $\nu$ is fixed at $\nu=1$. The parameter $\nu$ is fixed because it is difficult to identify from data, and $\alpha=2$, $\nu=1$ are commonly used values for these parameters \citep{Rikke1,Chapter6}.
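The parameter link can be written out as a small numerical sketch; with $\alpha=2$ and $d=2$ (so $\nu=1$) the expression reduces to $\sigma^2=1/(4\pi\kappa^2\tau^2)$. The parameter values below are arbitrary.

```python
from math import gamma, pi, sqrt

def spde_to_matern(kappa, tau, alpha=2.0, d=2):
    # Map SPDE parameters (kappa, tau) to Matern (sigma^2, rho)
    nu = alpha - d / 2.0
    sigma2 = gamma(nu) / (gamma(alpha) * (4.0 * pi)**(d / 2.0)
                          * kappa**(2.0 * nu) * tau**2)
    rho = sqrt(8.0 * nu) / kappa
    return sigma2, rho

# With alpha = 2 and d = 2 this reduces to sigma^2 = 1/(4*pi*kappa^2*tau^2)
sigma2, rho = spde_to_matern(kappa=0.5, tau=2.0)
print(sigma2, rho)
```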
The link between the above \textsc{spde} and the Matérn \textsc{grf}, which was developed by \cite{Whittle54, Whittle63}, is used by \cite{SPDELindgren} to show that a \textsc{grf} can be approximated by a Gaussian Markov random field (\textsc{gmrf}). This is done by solving the \textsc{spde} in Equation \eqref{eq:SPDE} by the finite element method (\textsc{fem}) (see e.g. \cite{FEM}). A \textsc{gmrf} is simply a multivariate Gaussian vector that is parametrized by the precision matrix $\boldsymbol{Q}$, which is the inverse $\boldsymbol{\Sigma}^{-1}$ of the covariance matrix. The term \textsc{gmrf} is mostly used for Gaussian processes with sparse precision matrices, i.e. matrices that contain many zero elements. The zero elements correspond to Markov properties, in this case conditional independence between locations in the spatial domain. It is convenient to work with \textsc{gmrf}s because there exist computationally efficient algorithms for sparse matrix operations \citep{GMRFbook}. Hence, through the \textsc{spde} approach from \cite{SPDELindgren} a \textsc{grf} with a dense precision matrix can be replaced by a \textsc{gmrf} with a sparser precision matrix with computational benefits.
\subsection{\textsc{pc} priors}
As we use a Bayesian approach, the hyperparameters $\boldsymbol{\theta}$ from Section \ref{sec:LGM} must be given prior distributions. For the majority of the hyperparameters we use penalized complexity (\textsc{pc}) priors. \textsc{pc} priors are proper prior distributions developed by \cite{PC1}. The main idea behind \textsc{pc} priors is to penalize the increased complexity induced by deviating from a simple base model. One of the goals is to avoid overfitting.
The \textsc{pc} prior for the precision $\tau$ of a Gaussian effect $\mathcal{N}(0,\tau^{-1})$ has density
\begin{equation}\label{eq:PCprior}
\pi(\tau)=\frac{\lambda}{2}\tau^{-3/2}\exp(-\lambda \tau^{-1/2}),\quad\quad \tau >0, \quad \lambda>0,
\end{equation}
where $\lambda$ is a parameter that determines the penalty of deviating from the base model. The parameter $\lambda$ can be specified through a quantile $u$ and probability $\alpha$ by $\mathrm{Prob}(1/\sqrt{\tau}>u)=\alpha$, where $u>0$, $0<\alpha<1$ and $\lambda=-\ln(\alpha)/u$. Here, $1/\sqrt{\tau}$ is the standard deviation of the Gaussian distribution.
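Since the density in Equation \eqref{eq:PCprior} implies that $1/\sqrt{\tau}$ is exponentially distributed with rate $\lambda$, the identity $\lambda=-\ln(\alpha)/u$ can be checked numerically. A minimal Python sketch, using the values $u=1.5$, $\alpha=0.1$ that appear later in Section \ref{sec:priors}:

```python
import numpy as np
from scipy import integrate

u, alpha = 1.5, 0.1
lam = -np.log(alpha) / u  # lambda = -ln(alpha)/u

def pc_density(tau):
    # PC prior density for a precision parameter tau
    return 0.5 * lam * tau**-1.5 * np.exp(-lam * tau**-0.5)

# Prob(1/sqrt(tau) > u) = Prob(tau < 1/u^2); integrate the density
prob, _ = integrate.quad(pc_density, 0.0, 1.0 / u**2)
print(prob)  # recovers the target probability alpha
```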
As the range and the marginal variance are easier to interpret than the Matérn covariance function parameters $\kappa$ and $\tau$ in Equation \eqref{eq:Matern}, we parametrize our model through $\rho$ and $\sigma$. For $\rho$ and $\sigma$ we use the prior suggested in \cite{PC2}. This is a joint prior for the spatial range $\rho$ and the marginal variance $\sigma$ constructed from \textsc{pc} priors. The joint prior can be specified through \begin{align*}
\mathrm{Prob}(\rho<u_\rho )=\alpha_\rho; \hspace{20mm} \mathrm{Prob}(\sigma>u_\sigma)=\alpha_\sigma,
\end{align*}
where $u_\rho$, $u_\sigma$, $\alpha_\rho$ and $\alpha_\sigma$ are quantiles and probabilities that must be determined.
\subsection{Evaluating the predictive performance}
To evaluate the predictive performance of the suggested runoff model, we use two criteria: The first criterion is the root mean squared error (RMSE). The RMSE measures the difference between a point prediction $\hat{y}_i$ and the observed value $y_i$ by
\begin{equation*}
\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i-\hat{y}_i)^2},
\end{equation*}
where $n$ is the total number of pairs of predictions and observations. We use the posterior mean as a point prediction when computing the RMSE. The second criterion is the continuous ranked probability score (CRPS). The CRPS is defined as
\begin{equation*}
\mathrm{CRPS}(F,y)=\int_{-\infty}^{\infty}(F(u)-1\{y\leq u\})^2du,
\end{equation*}
where $F$ is the predictive cumulative distribution and $y$ is the observed value \citep{Gneiting}. The CRPS takes the whole posterior predictive distribution into account, not only the posterior mean or median, and is penalized if the observed value falls outside the posterior predictive distribution.
Both the RMSE and the CRPS are negatively oriented, and a smaller value indicates a better prediction.
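Both criteria are straightforward to compute. The sketch below uses hypothetical predictions and observations; for the CRPS it uses the closed-form expression available when the predictive distribution is Gaussian, which is a convenient special case of the integral definition above.

```python
import numpy as np
from scipy.stats import norm

def rmse(y, yhat):
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.sqrt(np.mean((y - yhat)**2)))

def crps_gaussian(y, mu, sigma):
    # Closed-form CRPS when the predictive distribution is N(mu, sigma^2)
    z = (np.asarray(y, float) - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

y_obs = np.array([2.1, 1.8, 2.5])  # hypothetical runoff observations [m/year]
mu = np.array([2.0, 2.0, 2.0])     # posterior predictive means
sd = np.array([0.3, 0.3, 0.3])     # posterior predictive standard deviations
print(rmse(y_obs, mu), float(crps_gaussian(y_obs, mu, sd).mean()))
```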
\subsection{Interpolation by using Top-Kriging}\label{sec:topkriging}
The focus of this article is mainly on highlighting properties of the suggested point and areal runoff model. However, we also compare some of our results to the predictive performance of Top-Kriging. Top-Kriging \citep{topkriging} is one of the leading methods for runoff interpolation. It is a Kriging approach \citep{Cressie} where it is assumed that the variable of interest can be modeled as a \textsc{grf}. A prediction of the target variable at an unobserved location is given by a weighted sum of the available observations, and the interpolation weights are estimated by finding the so-called best linear unbiased estimator (\textsc{blue}).
In the computation of the interpolation weights, the Top-Kriging approach calculates the covariance between two catchments based on the distance between all the grid nodes in a discretization of the involved catchments. As a consequence, a subcatchment gets a higher Kriging weight than a nearby, non-overlapping catchment. This is different from other Kriging approaches traditionally used in hydrology, for which streamflow observations have been treated as point referenced (see e.g. \cite{euclid2, euclid3,euclid11}).
While the suggested Bayesian approach for runoff interpolation supports both areal and point observations, Top-Kriging only considers runoff (areal) data. Furthermore, Top-Kriging estimates the covariance (or variogram) empirically, while we take a fully Bayesian approach where the latent field and the parameters are estimated jointly. Another main difference is that Top-Kriging treats each year of runoff data separately, while we can model several years of runoff simultaneously through our two-field model.
\section{Statistical Model for Annual Runoff}
\label{sec:models}
In this section we present the proposed \textsc{lgm} for annual runoff, which is suitable for combining observations with different spatial support and has a climatic spatial field that lets us quantify long-term spatial variability.
\subsection{Spatial model for runoff}\label{sec:spatialmodels}
Let the spatial process $\{q_j(\boldsymbol{u}): \boldsymbol{u} \in \mathcal{D} \}$ denote the runoff generating process at a point location $\boldsymbol{u}$ in the spatial domain $\mathcal{D}\in \mathcal{R}^2$ in year $j$. The true runoff generation at
point location $\boldsymbol{u}$ is modeled as
\begin{equation}\label{eq:precip}
q_j(\boldsymbol{u})=\beta_c + c(\boldsymbol{u})+\beta_j + x_j(\boldsymbol{u}),\hspace{3mm} j=1,...,r.
\end{equation}
Here, the parameter $\beta_c$ is an intercept common for all years $j=1,...,r$, while $c(\boldsymbol{u})$ is a spatial effect common for all years. These two model components represent the runoff generation caused by the climate in the study area. Note that the term climate here covers all long-term effects, i.e. both long-term weather patterns \textit{and} patterns that are repeated due to catchment characteristics. Further, we include a year specific intercept $\beta_j$ and a year specific spatial effect $x_j(\boldsymbol{u})$ for $j=1,...,r$ to model the runoff generation due to the annual discrepancy from the climate. Both spatial effects $c(\boldsymbol{u})$ and $x_j(\boldsymbol{u})$ are modeled as \textsc{grf}s with zero mean and Matérn covariance functions given the model parameters; $c(\boldsymbol{u})$ with range parameter $\rho_c$ and marginal variance $\sigma_c^2$, and $x_j(\boldsymbol{u})$ with range $\rho_x$ and marginal variance $\sigma_x^2$. The spatial fields $x_j(\boldsymbol{u})$, $j=1,...,r$, are assumed to be independent realizations, or replicates, of the same underlying \textsc{grf}. The same applies for the year specific intercepts $\beta_j$, which are assumed to be independent and identically distributed as $\mathcal{N}(0,\tau_{\beta}^{-1})$ given the parameter $\tau_{\beta}$, with $\beta_1,...,\beta_r$ being independent realizations of this Gaussian distribution.
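To make the model structure concrete, the following Python sketch simulates point runoff $q_j(\boldsymbol{u})$ from Equation \eqref{eq:precip} at a few locations: the climatic field is sampled once and shared across years, while a new year-specific intercept and field are drawn for each year. All locations and parameter values are hypothetical.

```python
import numpy as np
from scipy.special import kv, gamma
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)

def matern_cov_matrix(locs, sigma2, rho, nu=1.0):
    # Matern covariance matrix, with range parametrization kappa = sqrt(8 nu)/rho
    kappa = np.sqrt(8.0 * nu) / rho
    d = cdist(locs, locs)
    kd = np.maximum(kappa * d, 1e-12)  # avoid evaluating kv(nu, 0)
    c = sigma2 / (2.0**(nu - 1.0) * gamma(nu)) * kd**nu * kv(nu, kd)
    np.fill_diagonal(c, sigma2)  # C(0) = marginal variance
    return c

locs = rng.uniform(0.0, 80.0, size=(15, 2))  # hypothetical gauge locations [km]
beta_c, tau_beta = 2.0, 1.0
Sigma_c = matern_cov_matrix(locs, sigma2=0.5, rho=30.0)  # climatic field c(u)
Sigma_x = matern_cov_matrix(locs, sigma2=0.3, rho=30.0)  # annual field x_j(u)

c = rng.multivariate_normal(np.zeros(15), Sigma_c)  # shared across all years
for j in range(3):
    beta_j = rng.normal(0.0, 1.0 / np.sqrt(tau_beta))  # year-specific intercept
    x_j = rng.multivariate_normal(np.zeros(15), Sigma_x)
    q_j = beta_c + c + beta_j + x_j  # q_j(u) = beta_c + c(u) + beta_j + x_j(u)
    print(j, q_j.mean())
```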
The true mean runoff generated inside a catchment $\mathcal{A}$ in year $j$ can be expressed as
\begin{equation}\label{eq:runoff1}
Q_j(\mathcal{A})=\frac{1}{|\mathcal{A}|}\int_{\boldsymbol{u} \in \mathcal{A}}q_j(\boldsymbol{u})d\boldsymbol{u}, \hspace{3mm} j=1,...,r,
\end{equation}
where $|\mathcal{A}|$ is the area of catchment $\mathcal{A}$. By interpreting catchment runoff as an integral of point referenced runoff $q_j(\boldsymbol{u})$, we obtain a mathematically consistent model where the water balance is conserved for any point in the landscape.
\subsection{Observation model}
Annual precipitation and evaporation are observed at $n$ locations $\boldsymbol{u}_i \in \mathcal{D}$ for $i=1,...,n$ and for $r$ years $j=1,...,r$. The observed annual runoff generation at point location $\boldsymbol{u}_i$ in year $j$ is modeled as the difference between the observed annual precipitation $p_{ij}$ and annual evaporation $e_{ij}$,
\begin{equation}\label{eq:pointobs}
y_{ij}=p_{ij}-e_{ij}=q_j(\boldsymbol{u}_i)+\epsilon^y_{ij} \quad i=1,...,n; \quad j=1,...,r,
\end{equation}
where $q_j(\boldsymbol{u}_i)$ is the true annual point runoff from Equation \eqref{eq:precip}. The error terms $\epsilon_{ij}^y$ are independent and identically distributed as $ \mathcal{N}(0,s_{ij}^y \cdot \tau_y^{-1})$ and independent of the other model components. The measurement uncertainties for precipitation and evaporation are assumed to increase with the magnitude of the observed value, and we want to include this assumption in the model. This is done by scaling the precision parameter of the error terms $\tau_{y}$ with a fixed factor $s_{ij}^y$, that is further described in Section \ref{sec:priors}. \\\\
Runoff at catchment level is observed through streamflow data from $K$ catchments denoted $\mathcal{A}_1,...,\mathcal{A}_K$ for $r$ years denoted $j=1,...,r$. We use the following model for the annual runoff observed in catchment $\mathcal{A}_k$ in year $j$
\begin{equation}\label{eq:arealobs}
z_{kj}=Q_j(\mathcal{A}_k)+\epsilon^z_{kj} \quad k=1,...,K; j=1,...,r,
\end{equation}
where $Q_j(\mathcal{A}_k)$ is the true annual areal runoff from Equation \eqref{eq:runoff1}. The measurement errors $\epsilon^z_{kj}$ are independent and identically distributed as $\mathcal{N}(0,s_{kj}^z\cdot \tau_z^{-1})$ and independent of the other model components. As for the point referenced observations, the precision parameter of the error terms $\tau_z$ is scaled with a fixed factor $s_{kj}^z$ that is further described in the next subsection. This way the uncertainty estimates that the data provider \textsc{nve} has for each annual observation can be included in the modeling.
In Equation \eqref{eq:arealobs} the variable $Q_j(\mathcal{A}_k)$ defines an areal representation of the annual runoff in catchment $\mathcal{A}_k$. Hence, through the likelihood, the annual runoff in catchment $\mathcal{A}_k$ is constrained to be close to the actually observed value (with some uncertainty).
So far we have defined the observation likelihoods for the point and areal observations separately. To construct a joint model for point and areal runoff, we multiply the likelihoods defined in Equation \eqref{eq:pointobs} and \eqref{eq:arealobs} together as described in Section \ref{sec:LGM}. This is done for all $n$ precipitation gauge locations $i=1,...,n$, for all catchments $k=1,...,K$ and for all years $j=1,...,r$, such that we obtain a model that simultaneously models several years of runoff. Different years are linked together through the climatic part of the model $c(\boldsymbol{u})+\beta_c$ from Equation \eqref{eq:precip}.
\subsection{Prior distributions}\label{sec:priors}
In the suggested model for annual runoff there are 8 parameters ($\tau_y$, $\tau_z$, $\rho_c$, $\rho_x$, $\sigma_c$, $\sigma_x$, $\beta_c$, $\tau_\beta$) that must be given prior distributions. We start by formulating priors for the measurement errors for the point and areal observations.
The variance of the measurement error of the point referenced observation from precipitation gauge $i$, year $j$, is given by $s_{ij}^y \tau_y^{-1}$ where $\tau_y$ is a hyperparameter and $s_{ij}^y$ is a deterministic value that scales the variance based on expert opinions from \textsc{nve} about the measurement errors for precipitation and evaporation.
The precipitation data are obtained by observing the amount of water or snow that falls into a bucket, but the buckets often fail to catch a large proportion of the actual precipitation, particularly for windy snow events \citep{raingauge, Groisman, Regnusikkerhet}. Based on this and recommendations from \textsc{nve}, the standard deviation of the observation uncertainty for precipitation is assumed to be $10 \%$ of the observed value $p_{ij}$. The evaporation data are obtained from satellite observations and process-models, and are more uncertain than the precipitation data. We assume that the standard deviation for evaporation is $20 \%$ of the observed value $e_{ij}$. The prior knowledge about the point data is used to specify the scale $s_{ij}^y$ for the point observation $y_{ij}$ at location $i$ and year $j$ as follows
\begin{equation*}
s_{ij}^y=\mathrm{Var}(y_{ij})=\mathrm{Var}(p_{ij}-e_{ij})=\mathrm{Var}(p_{ij})+\mathrm{Var}(e_{ij})-2\cdot \mathrm{Cov}(p_{ij},e_{ij})=(0.1 p_{ij})^2+(0.2e_{ij})^2-2\cdot\mathrm{Cov}(p_{ij},e_{ij}).
\end{equation*}
Here, the covariance between the observed precipitation and evaporation is estimated by
\begin{align*}
&\mathrm{Cov}(p_{ij},e_{ij})=\sqrt{\mathrm{Var}(p_{ij})}\cdot \sqrt{\mathrm{Var}(e_{ij})}\cdot \mathrm{Cor}\{ (p_{i1},...,p_{ir}),(e_{i1},...,e_{ir}) \},
\end{align*}
where $\mathrm{Cor}\{\cdot,\cdot\}$ is the Pearson correlation between all available observations of precipitation and evaporation at precipitation gauge $i$. Further, we assign the precision $\tau_{y}$ the \textsc{pc} prior from Equation \eqref{eq:PCprior} with $\alpha=0.1$ and $u=1.5$. With this prior, a prior 95 \% credible interval for the standard deviation $\sqrt{s^y_{ij} \tau_y^{-1}}$ of the measurement error for point runoff becomes approximately (0.002-30)\% of the corresponding observed value $y_{ij}$. This interval corresponds well to what \textsc{nve} knows about the measurement uncertainty for precipitation and evaporation.
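The scale computation can be sketched as follows; the precipitation and evaporation series below are hypothetical, and the 10\% and 20\% standard deviations follow the assumptions stated above.

```python
import numpy as np

def point_obs_scale(p, e):
    # Scale s_ij^y for y_ij = p_ij - e_ij at one precipitation gauge,
    # assuming sd(p) = 0.1*p and sd(e) = 0.2*e as described in the text
    p, e = np.asarray(p, float), np.asarray(e, float)
    sd_p, sd_e = 0.1 * p, 0.2 * e
    r = np.corrcoef(p, e)[0, 1]  # Pearson correlation of the two series
    return sd_p**2 + sd_e**2 - 2.0 * sd_p * sd_e * r

p = np.array([2.4, 1.9, 3.1, 2.7])      # hypothetical precipitation [m/year]
e = np.array([0.25, 0.23, 0.28, 0.26])  # hypothetical evaporation [m/year]
print(point_obs_scale(p, e))
```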
The same approach is used to make a prior for the variance of the measurement error for the areal referenced observations $z_{kj}$. The precision $\tau_z$ is given a \textsc{pc} prior with $\alpha=0.1$ and $u=1.5$, while the scale $s_{kj}^{z}$ for catchment $k$, year $j$ is given by
\begin{equation}\label{eq:scales}
s_{kj}^z=\mathrm{Var}(z_{kj}).
\end{equation}
For the streamflow data, information about the variance of the observations is directly available through the dataset provided by \textsc{nve}. These data are inserted into Equation \eqref{eq:scales}. With the suggested prior, a prior 95 \% credible interval for the standard deviation $\sqrt{s^z_{kj} \tau_z^{-1}}$ of an areal observation, is approximately (0.002,4.0) $\%$ of the corresponding observed value $z_{kj}$. This is an informative prior that just covers the range of values suggested by \textsc{nve}. We have chosen a low prior standard deviation in order to put more weight on the runoff observations than on the point observations. There are only 5 areal observations available for each year in the dataset, but 15 point observations, and the aim is to prevent the more unreliable point data from dominating the areal data.
For the spatial ranges and the marginal variances of the spatial fields $x_j(\boldsymbol{u})$ and $c(\boldsymbol{u})$, the joint \textsc{pc} prior from \cite{PC2} is used. The \textsc{pc} priors for $\sigma_x$, $\rho_x$, $\sigma_c$ and $\rho_c$ are specified through the following probabilities and quantiles:
\begin{align*}
\mathrm{Prob}(\rho_x<10 \text{ km})=0.1, \hspace{2mm} \mathrm{Prob}(\sigma_x>2 \text{ m/year})=0.1,\\ \mathrm{Prob}(\rho_c<10 \text{ km} )=0.1,\hspace{2mm} \mathrm{Prob}(\sigma_c>2 \text{ m/year})=0.1.
\end{align*}
The probabilities and quantiles are chosen based on expert knowledge about the spatial variability in the area. The study area is approximately 80 km $\times$ 80 km, and it is reasonable to assume that there is a correlation larger than 0.1 between two locations that are less than 10 km apart. Furthermore, the spatial variability in the study area is large, and we can observe runoff values from 0.8 m/year to 3.2 m/year within the same year. However, it is reasonable to assume that the marginal standard deviation of the runoff generating process does not exceed 2 m/year. The parameters of the climatic \textsc{grf} $c(\boldsymbol{u})$ and the annual \textsc{grf} $x_j(\boldsymbol{u})$ are given the same prior as it is difficult to identify if the spatial variability mainly comes from climatic processes or from annual variations. We also want the data to determine which of the two effects dominates in the study area.
As described in Section \ref{sec:spatialmodels}, the year specific intercept $\beta_j$ has prior $\mathcal{N}(0 , \tau_{\beta}^{-1})$ for all years $j=1,...,r$. Its precision $\tau_{\beta}$ is given the \textsc{pc} prior from Equation \eqref{eq:PCprior} with $u=10$ and $\alpha=0.2$. This is a weakly informative wide prior with a prior $95 \%$ interval (0.002,40.5) m/year for the standard deviation $\sqrt{\tau_{\beta}^{-1}}$ of $\beta_j$. Finally, the climatic intercept $\beta_c$ is given a normal prior, $\beta_c\sim \mathcal{N}(2,0.5^2)$. This gives a prior $95 \%$ credible interval of (1.0,3.0) m/year for $\beta_c$ which covers all reasonable mean values of annual runoff around Voss.
\subsection{Inference}
In order to make the model computationally feasible, some simplifications of the suggested model are necessary. In Section \ref{sec:spatialmodels} the annual runoff for a catchment $\mathcal{A}_k$ was modeled as the integral of point referenced runoff over the catchment area. In practice, the integral in Equation \eqref{eq:runoff1} is calculated by a finite sum over a discretization of the target catchment. More specifically, let $\mathcal{L}_k$ denote the discretization of catchment $\mathcal{A}_k$. The total annual runoff in catchment $\mathcal{A}_k$ in year $j$ is approximated by
\begin{align}
Q_j(\mathcal{A}_k)&=\frac{1}{N_k}\sum_{\boldsymbol{u} \in \mathcal{L}_k}q_j(\boldsymbol{u}),
\label{eq:runoffbasis}
\end{align}
where $N_k$ is the total number of grid nodes in $\mathcal{L}_k$ and $q_j(\boldsymbol{u})$ is the point runoff at grid node $\boldsymbol{u} \in \mathcal{L}_k$. It is important that a subcatchment shares grid nodes with the main catchment in order to preserve the water balance. The discretization used in this analysis has 1 km spacing and is shown in Figure \ref{fig:catchgrid}.
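A minimal sketch of the discretized areal runoff in Equation \eqref{eq:runoffbasis}, illustrating why shared grid nodes preserve the water balance between a subcatchment and its main catchment (all runoff values are hypothetical):

```python
import numpy as np

def areal_runoff(q_point, nodes):
    # Q_j(A_k) approximated as the mean of point runoff over the
    # grid nodes in the discretization L_k of catchment A_k
    return float(np.asarray(q_point, float)[nodes].mean())

q_point = np.array([1.2, 1.4, 1.1, 1.3, 1.5, 1.6])  # point runoff on a grid
sub = [0, 1]                  # nodes of a subcatchment
main = [0, 1, 2, 3, 4, 5]     # main catchment shares the subcatchment's nodes
rest = [2, 3, 4, 5]           # remaining nodes of the main catchment

Q_sub, Q_main, Q_rest = (areal_runoff(q_point, g) for g in (sub, main, rest))
# Water balance: the main-catchment mean is the node-weighted average
print(Q_main, (2 * Q_sub + 4 * Q_rest) / 6)  # identical
```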
The model suggested for annual runoff is a latent Gaussian model with the structure described in Section \ref{sec:LGM}. Modeling annual runoff as an \textsc{lgm} is convenient because it allows us to use integrated nested Laplace approximations (\textsc{inla}) to make inference and predictions. \textsc{inla} can be used for making Bayesian inference on \textsc{lgm}s and is a faster alternative to \textsc{mcmc} algorithms \citep{mcmc}. The approach is based on approximating the marginal distributions by using Laplace or other analytic approximations, and on numerical integration schemes. The main computational tool is the sparse matrix calculations described in \cite{GMRFbook}, so in order to work fast, the latent field of the \textsc{lgm} should be a \textsc{gmrf} with a sparse precision matrix. In our case, sparsity is obtained by using the \textsc{spde} approach from Section \ref{sec:SPDE} to approximate the \textsc{grf}s $x_j(\boldsymbol{u})$ and $c(\boldsymbol{u})$ by \textsc{gmrf}s. This is done through the finite element method (\textsc{fem}), and the triangulation used for \textsc{fem} is shown in Figure \ref{fig:catchmesh}. In order to obtain accurate approximations of the two underlying \textsc{grf}s, this triangular mesh must be dense enough to capture the rapid spatial variability of annual runoff around Voss. If the mesh is too coarse, unrealistic results such as negative runoff can occur, or numerical problems can arise.
The R-package \texttt{r-inla} was used to make inference and predictions for the suggested model. This package provides a user-friendly interface for applying \textsc{inla} and the \textsc{spde} approach to spatial modeling without requiring that the user has deep knowledge about \textsc{spde}s. See \texttt{r-inla.org} or \cite{Chapter6} and \cite{Elias} for tutorials and examples. In particular, \cite{areadata1} is recommended for a description of how a model with point and areal data can be implemented in \texttt{r-inla}.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[page=1, trim = 20mm 20mm 10mm 10mm, clip, width=4cm]{diskretisering.pdf}
\caption{\small Discretization of catchments used to model areal runoff. We use a regular grid with 1 km spacing.}
\label{fig:catchgrid}
\end{subfigure}
~
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[page=1, trim = 20mm 10mm 1mm 20mm, clip, width=6cm]{meshplot.pdf}
\caption{\small Triangulation of the spatial domain used for \textsc{fem}. The red points are the locations of the precipitation gauges.}
\label{fig:catchmesh}
\end{subfigure}
\caption{Discretization and triangular mesh used to make the model computationally feasible.}
\end{figure*}
\section{Case study of annual runoff in Voss}\label{sec:casestudy}
The model presented in Section \ref{sec:models} is used to explore the predictability of annual runoff in the Voss area. Recall that the main goals are to investigate how the predictions are affected by the two different observation types (point and areal data), to demonstrate how the water balance considerations can be beneficial, and to explore the properties of the climatic part of the model. To address this, we perform four tests that are inspired by common applications in hydrology. These are presented in the next subsection. In Section \ref{sec:results} the results from the tests are presented and discussed.
\subsection{Model evaluation}\label{sec:eval}
To explore how the two different observation types influence the predictions of annual runoff around Voss, we compare three observation designs: An observation design where only point referenced observations are included in the likelihood ($P$), an observation design where only areal referenced observations are included in the likelihood ($A$) and an observation design where all available observations are included in the likelihood ($P+A$). Recall that using only areal observations ($A$) corresponds to what typically has been done in hydrological applications, and we want to investigate whether we can improve the predictability of runoff by also including point observations in the likelihood ($P+A$). Including $P$ as an observation design gives information about what influence the point data have on the predictions. The three observation designs are evaluated according to four tests that are described as follows:\\
\textbf{T1 - Inference:} The model from Section \ref{sec:models} is fitted to all available observations between 1988 and 1997 from Figure \ref{fig:data}. This is done for $P$, $A$ and $P+A$, such that we get information about how the different observation types affect the posterior estimates of the parameters.\\
\textbf{T2 - Spatial predictions in ungauged catchments:} In hydrological applications, the main interest is in estimating runoff at catchment level. Motivated by this, we perform spatial predictions of annual runoff for each of the five catchments $\mathcal{A}_1,...,\mathcal{A}_5$ by leave-one-out cross-validation for $P$, $A$ and $P+A$. That is, data from the target catchment are left out and the catchment of interest is treated as ungauged. Runoff predictions are done for the target catchment for 1988-1997 and are based on observations from the remaining 4 catchments and/or point data from 1988-1997. The predictive performance is assessed by computing the RMSE and CRPS for each catchment based on the 10 years of predictions.
In \textbf{T2}, we also compare our results to the Top-Kriging approach described in Section \ref{sec:topkriging}. For Top-Kriging, we fit the default covariance function (or variogram) from the R package \texttt{rtop}. This is a product of a modified exponential and a fractal variogram model \citep{topkriging}. Recall that Top-Kriging only supports areal referenced (runoff) observations.\\
\textbf{T3u - Future predictions in ungauged catchments:} In \textbf{T2} we estimate the runoff that was generated in ungauged catchments in the past. However, quantifying the annual runoff we can expect in the future is more interesting for most hydrological applications. In \textbf{T3u} we therefore estimate annual runoff for a future year, i.e. for a year for which there are no observations of runoff, precipitation or evaporation. For an unobserved year ($j>10$) the posterior means of the year specific effects $\beta_j$ and $x_j(\boldsymbol{u})$ are zero. Thus, the posterior predicted future runoff is given by the posterior means of the climatic components $\beta_c$ and $c(\boldsymbol{u})$. However, all four model components as well as the observation uncertainty contribute to the predictive uncertainty.
In \textbf{T3u} the catchment of interest is treated as ungauged and left out of the dataset, and we use the remaining observations from 1988-1997 to predict annual runoff for 1998-2014. This is done for catchment $\mathcal{A}_1,...,\mathcal{A}_5$ in turn. The predictive performance is evaluated by computing the RMSE and CRPS for predictions of runoff for each of the 5 catchments for 17 future years. The average RMSE and CRPS over the 5 catchments are used as summary scores. As the posterior mean for an unobserved year is given by the posterior mean of the climatic effects $\beta_c$ and $c(\boldsymbol{u})$, this test lets us quantify the climatology in the study area.\\
\textbf{T3g - Future predictions in partially gauged catchments:} We predict annual runoff in catchment $\mathcal{A}_1,...,\mathcal{A}_5$ for a future year as in \textbf{T3u}. However, we allow the observation likelihood to contain 1 to 10 annual runoff observations from the catchment in which we want to predict runoff. This way, we assess the model's ability to exploit short records of runoff, which is a property enabled by the climatic component of the model. We denote this test \textbf{T3g}, for gauged, as opposed to \textbf{T3u} for ungauged.
The test is carried out by drawing $i$ observations between 1988-1997 randomly from the target catchment. Next, these observations are used together with the other point and/or areal observations of $P$, $A$ and $P+A$ from 1988-1997 to predict the annual runoff in 1998-2014 for this particular catchment. As the experimental results might depend on which runoff observations we pick from the target catchment, the experiment is repeated 10 times such that different observations are included for each experiment.
The above procedure is carried out with an increasing number of years included in the short record, i.e. for $i\in \{1,2,3,4,5,6,7,8,9,10\}$. The predictive performance is then evaluated for each $i$ by computing the RMSE and CRPS for each catchment $\mathcal{A}_1,...,\mathcal{A}_5$ based on 17 years of future predictions. The average RMSE and CRPS over 5 catchments and 10 experiments are reported as summary scores. \\
For our experiments we use the posterior mean as the predicted value when computing the RMSE. Furthermore, when evaluating the CRPS and when computing the coverage of the predictions, we assume that the posterior distributions are Gaussian with mean given by the posterior mean and standard deviation given by the posterior standard deviation. In the posterior standard deviation, we take the measurement uncertainty given by $s_{kj}^z\tau_z^{-1}$ into account, in addition to the uncertainty of the model components of the linear predictor in Equation \eqref{eq:precip}. The Gaussian distribution should be a good approximation for the resulting posterior distributions as they typically are symmetric with neither particularly short nor long tails.
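Under the Gaussian assumption above, the CRPS has a well-known closed form \citep[see][]{topkriging}. As a minimal stand-alone sketch of the two scores (this is an illustration, not the code used in the study; all function names are our own):

```python
import math

def crps_gaussian(y, mu, sigma):
    # Closed-form CRPS of a Gaussian predictive N(mu, sigma^2) at observation y
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

def rmse(obs, pred):
    # Root mean squared error of the posterior means against the observations
    return math.sqrt(sum((y - p) ** 2 for y, p in zip(obs, pred)) / len(obs))
```

Both scores are negatively oriented, so lower values indicate better predictive performance, and the CRPS rewards sharp predictive distributions only insofar as they remain calibrated.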
\subsection{Results from the case study} \label{sec:results}
\begin{figure*}[h!]
\begin{subfigure}[b]{1\textwidth}
\includegraphics[width=15cm]{posterior_future.pdf}
\caption{\small Posterior mean [m/year].}
\label{fig:future_eta_mean}
\end{subfigure}\\
\begin{subfigure}[b]{1\textwidth}
\includegraphics[width=15cm]{posterior_future_sd.pdf}
\caption{\small Posterior standard deviation [m/year].}
\label{fig:future_eta_sd}
\end{subfigure}
~
\caption{\small Posterior mean and standard deviation of annual runoff for a future unobserved year when all available point observations ($P$, left), areal observations ($A$, middle) or both point and areal observations ($P+A$, right) from 1988-1997 are used, i.e. all catchments are treated as gauged for $A$ and $P+A$, but ungauged for $P$ (test \textbf{T1}).}\label{fig:futurecompare}
\end{figure*}
We now present the results from the case study for our four tests \textbf{T1}, \textbf{T2}, \textbf{T3u} and \textbf{T3g} in turn.
Table \ref{tab:parameter_estimates} shows the posterior medians and the 0.025 and 0.975 quantiles for the hyperparameters for $P$ (point observations), $A$ (areal observations) and $P+A$ (point and areal observations) when all respective available observations from 1988-1997 are used to make inference (\textbf{T1}). In general, $P$ gives lower runoff values with a posterior median of the climatic intercept $\beta_c$ equal to 1.87 m/year compared to $A$ giving $\beta_c$ equal to $2.21$ m/year. Furthermore, the posterior median of the marginal standard deviation of the climatic \textsc{grf} $\sigma_c$ is considerably larger for $P$ with $\sigma_c=0.97$ m/year compared to $A$ and $P+A$ which give posterior medians 0.63 m/year and 0.76 m/year respectively. The posterior median of the range of the climatic \textsc{grf} $\rho_c$ is also larger for $P$ with 70 km compared to values around 20 km for $A$ and $P+A$.
The spatial runoff patterns corresponding to these parameter values are shown in Figure \ref{fig:futurecompare}. These figures show the posterior mean and standard deviation for runoff for an unobserved, future year. We see that larger values for $\rho_c$ and $\sigma_c$ lead to a more prominent spatial pattern for $P$ with large runoff values in the western part of the study area and lower values in the eastern part. A high climatic range $\rho_c$ also leads to a reduction of the posterior predictive uncertainty in a larger part of the study area for $P$, as can be seen in Figure \ref{fig:future_eta_sd}. The maps show that the choice of observation scheme ($P$, $A$ or $P+A$) has a large impact on the resulting predictions of annual runoff in terms of posterior mean and/or posterior standard deviation.
\begin{table}\small \centering
\caption{\small Posterior median (0.025 quantile, 0.975 quantile) when all available point (P), areal (A) and both point and areal (P+A) referenced observations from 1988-1997 are used for making inference (test \textbf{T1}). The precision parameters are transformed to standard deviations to make them more interpretable. Recall that the posterior estimates of the standard deviations $1/\sqrt{\tau_y}$ and $1/\sqrt{\tau_z}$ of the measurement uncertainties are multiplied with the scales from Section \ref{sec:priors} in order to obtain the final posterior observation uncertainty with unit [m/year].\vspace{3mm}}\label{tab:parameter_estimates}
\begin{tabular}{llllllllll}
\toprule
Parameter [unit]&& \multicolumn{3}{c}{Posterior median (0.025 quantile, 0.975 quantile)}\\
\cmidrule(r){1-1} \cmidrule(r){3-5}
& &P & A & P+A \\
\midrule
$\rho_x$ [km]& & 236 (148, 379) & 104 (32, 262) &102 (41, 249)\\
$\sigma_x$ [m/year]&& 0.27 (0.20, 0.38) &0.34 (0.18, 0.56)&0.29 (0.19, 0.44)\\
$\rho_c$ [km]& &70 (30, 180) & 25 (9, 74) & 20 (9, 46) \\
$\sigma_c$ [m/year]& &0.97 (0.56, 1.79) & 0.63 (0.34, 1.34) & 0.76 (0.53, 1.1)\\
$\beta_c$ [m/year]&&1.87 (1.13, 2.68) & 2.21 (1.57, 2.82) &1.96 (1.40, 2.50)\\
$1/\sqrt{\tau_y}$ [unitless]& & 0.48 (0.40, 0.57) & $\times$ & 0.37 (0.22, 0.54) \\
$1/\sqrt{\tau_z}$ [unitless]& &$\times$ &3.6 (2.3, 5.1) & 5.3 (3.8,6.8) \\
$1/\sqrt{\tau_\beta}$ [$\text{m}/\text{year}$]& &0.26 (0.01, 0.78) & 0.61 (0.31,1.0) & 0.48 (0.24, 0.75) \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure*}[h!] \centering
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[page=1, trim = 0mm 0mm 0mm 0mm, clip, width=7cm]{RMSE_real_TK.pdf}
\caption{\small RMSE.}
\label{fig:RMSE_real}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[page=1, trim = 0mm 0mm 0mm 0mm, clip, width=7cm]{CRPS_real_TK.pdf}
\caption{\small CRPS.}
\label{fig:CRPS_real}
\end{subfigure}
~
\caption{\small Predictive performance for spatial predictions of runoff in 1988-1997 when the target catchment is treated as ungauged (test \textbf{T2}) for P, A and P+A. Here, we have also included results from the reference method Top-Kriging (TK) that only considers areal observations. Dashed lines mark the average performance over all catchments.}
\label{fig:RMSE_CRPS_real}
\end{figure*}
In \textbf{T2} we perform spatial predictions of annual runoff in 1988-1997 for a catchment that is left out of the dataset. The predictive performance for spatial predictions is summarized in Figure \ref{fig:RMSE_CRPS_real}. For four out of five catchments, $P+A$ gives the lowest RMSE and CRPS, or an RMSE and CRPS approximately equal to that of $A$, $P$ or Top-Kriging (TK). We see that the Top-Kriging approach performs similarly to $A$, which is reasonable as Top-Kriging only considers areal observations and uses a similar interpretation of covariance as our suggested model.
In Figure \ref{fig:RMSE_CRPS_real} we particularly highlight Catchment 3 because it provides an example of how the water balance properties of the model can be beneficial. Figure \ref{fig:RMSE_CRPS_real} shows that for Catchment 3, $P$ gives an RMSE around 0.9, while $A$ gives an RMSE around 0.4. Considering the posterior prediction intervals for Catchment 3 in Figure \ref{fig:Catchment3_T2}, we see that $P$ leads to an underestimation of the annual runoff. This can be explained by looking at the observations in Figure \ref{fig:data}: The point observations close to Catchment 3 all have mean values lower than the true mean annual runoff in this catchment. Next, considering the results for the areal observations ($A$), Figure \ref{fig:Catchment3_T2} shows that these also lead to an underestimation of Catchment 3's runoff. Intuitively, we would thus expect that combining $P$ and $A$ would result in underestimation. Instead, we get a large improvement in the predictions in Figure \ref{fig:Catchment3_T2} when $P$ and $A$ are combined, with an RMSE around 0.1 (Figure \ref{fig:RMSE_CRPS_real}). The predictions for Catchment 3 also turn out to be larger than any of the nearby observed values.
The result can be understood by looking at the nested structure of the catchments in the dataset. Catchment 4 and Catchment 5 cover Catchment 3, and through our model formulation they put constraints on the total runoff in this area. As Figure \ref{fig:data} shows, there are two precipitation gauges inside Catchment 5 for which the point runoff generated is lower than the mean annual runoff in the surrounding two catchments. To preserve the water balance, the predicted annual runoff in the remaining parts of Catchment 4 and Catchment 5 has to be larger than any of the values that are observed in the surrounding area. This interaction between nested areal observations and point observations enables the model to correctly identify Catchment 3 as a wetter catchment than any of the nearby catchments, demonstrating that we have a geostatistical model that does more than smoothing.
\begin{figure*} \centering
\hspace{17mm} $P$\hspace{39mm} $A$ \hspace{34mm} $P+A$\\ \vspace{-3mm}
\includegraphics[page=1,trim = 0mm 1mm 2mm 11mm,clip, width=15cm]{Catchment3_C2.pdf}
\caption{ \small The posterior mean for spatial predictions in Catchment 3 (test \textbf{T2}) with corresponding 95 $\%$ posterior prediction intervals for observation design $P$ (left), $A$ (middle) and $P+A$ (right).}\label{fig:Catchment3_T2}
\end{figure*}
This does not mean that the interaction between point and areal observations always leads to improved predictions (see e.g. Catchment 5 in Figure \ref{fig:RMSE_CRPS_real}). However, overall the results in Figure \ref{fig:RMSE_CRPS_real} show that on average we benefit from including all available data ($P+A$) in the analysis when making spatial predictions, and that using only point observations gives poor predictions: $P$ performs considerably worse than $A$, $P+A$ and Top-Kriging for three of the catchments (Catchments 3, 4 and 5).
\begin{figure*}
\centering
\includegraphics[page=1, trim = 0mm 1mm 1mm 0mm, clip, width=14cm]{scatterplot_real2.pdf}
\caption{\small The posterior mean for spatial predictions in ungauged catchments for $P$ (left), $A$ (middle) and $P+A$ (right) compared to the corresponding observed value (\textbf{T2}).}\label{fig:scatter_real}
\end{figure*}
The scatterplots in Figure \ref{fig:scatter_real} compare the spatial predictions from 1988-1997 (\textbf{T2}) to the actual observations for each (ungauged) catchment for $P$, $A$ and $P+A$. Overall, observation designs $A$ and $P+A$ provide predictions that are symmetric around the corresponding observed runoff. However, if we look more closely at the predictions for each catchment, we see that $A$ and $P+A$ tend to either overestimate or underestimate the annual runoff within a catchment. This is seen most clearly for Catchment 1 where the annual runoff is overestimated for $A$ and $P+A$, and for Catchment 2 where the runoff is underestimated for $A$. Top-Kriging is not visualized here, but this reference approach gives similar results as observation scheme $A$.
The results in Figure \ref{fig:scatter_real} show that the same systematic prediction error is typically made each year for a specific catchment. The biases are however small enough that the actual observations are covered by the corresponding $95 \%$ posterior prediction intervals for $A$ and $P+A$ for most catchments. This can be seen in Table \ref{tab:coverage_spatPred}. %
For $P$ the situation is different: Figure \ref{fig:scatter_real} shows that the annual runoff is underestimated for all catchments. In addition, the posterior standard deviation for runoff is typically unrealistically small for $P$ contributing to narrow posterior prediction intervals. Large biases combined with small posterior standard deviations lead to a low empirical coverage for the spatial predictions for $P$, and on average the coverage of a $95 \%$ posterior prediction interval is as low as $42 \%$. For $P$, neither the posterior mean nor the posterior variance reflects the properties of the underlying process.
\begin{table}\caption{\small The proportion of the observations that falls into the corresponding $95 \%$ posterior prediction interval for spatial predictions of runoff (\textbf{T2}) in catchment $\mathcal{A}_1,..,\mathcal{A}_5$ for 1988-1997 when the target catchment is treated as ungauged.} \label{tab:coverage_spatPred}
\small
\centering
\begin{tabular}{lllllll}
\hline
& $\mathcal{A}_1$ & $\mathcal{A}_2$ & $\mathcal{A}_3$ & $\mathcal{A}_4$ & $\mathcal{A}_5$ & All \\
\hline
$P$ & 1 & 0.5 & 0.5 & 0.1 & 0 & 0.42 \\
$A$ & 1 & 0.7 & 1 & 0.9 & 1 & 0.92 \\
$P+A$ & 1 & 1 & 1 & 1 & 0.60 & 0.92 \\
\hline
\end{tabular}
\end{table}
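The empirical coverages in Table \ref{tab:coverage_spatPred} are simply the fractions of observations falling inside their $95 \%$ posterior prediction intervals; under the Gaussian assumption from Section \ref{sec:eval}, the interval endpoints are the posterior mean $\pm 1.96$ posterior standard deviations. A minimal sketch of the computation (function name and inputs are our own, for illustration only):

```python
def empirical_coverage(obs, post_mean, post_sd, z=1.96):
    # Fraction of observations inside the mean +/- z * sd prediction intervals
    hits = sum(1 for y, m, s in zip(obs, post_mean, post_sd)
               if m - z * s <= y <= m + z * s)
    return hits / len(obs)
```

For a well-calibrated model, this fraction should be close to the nominal $0.95$ when averaged over many predictions.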
In tests \textbf{T3u} and \textbf{T3g}, annual runoff was predicted for unobserved future years (1998-2014) when 0-10 observations from the target catchment between 1988 and 1997 were included in the likelihood, together with observations of $P$ and/or $A$ from other locations and catchments. The resulting predictive performance is visualized in Figure \ref{fig:RMSE_CRPS_time_real}. As for the spatial predictions, $P+A$ gives the lowest RMSE and CRPS on average. For ungauged catchments (when 0 years of observations from the target catchment are included), $P$ and $A$ perform considerably worse than $P+A$. However, when we include some years of observations from the target catchment, we see a large drop in the RMSE and CRPS for $P$ and $A$. The posterior mean for a future year is given by the posterior mean of $\int_{\boldsymbol{u} \in \mathcal{A}_k}( \beta_c+c(\boldsymbol{u}))d\boldsymbol{u}$, i.e. the plots show that we get a large change in the climatic part of the model when we include information from a new location or catchment.
This result can be understood from the results from the parameter estimation in \textbf{T1}: The posterior median of the standard deviation of the climatic \textsc{grf} $\sigma_c$ is approximately twice as large as the median of the marginal standard deviation for the annual \textsc{grf} $\sigma_x$ for all observation designs (Table \ref{tab:parameter_estimates}). Hence, the potential value of a new data point from an unobserved location can be large, as the new observation affects the climatic part of the model that has a substantial impact on the predictions for all years under study. Furthermore, the large spatial climatic effect can also be a possible explanation for the systematic errors we saw for the spatial predictions in Figure \ref{fig:Catchment3_T2} and Figure \ref{fig:scatter_real} (\textbf{T2}). A strong climatic field $c(\boldsymbol{u})$ indicates that the same spatial runoff pattern is repeated each year, and if we fail to characterize it, systematic errors are a reasonable consequence.
\begin{figure*}[h!]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[page=1, trim = 5mm 2mm 0mm 1mm, clip, width=6cm]{RMSE_future_real.pdf}
\caption{\small $\text{RMSE}$.}
\label{fig:RMSE_time_real}
\end{subfigure}
~
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[page=1, trim = 5mm 1mm 0mm 1mm, clip, width=6cm]{CRPS_future_real.pdf}
\caption{\small $\text{CRPS}$.}
\label{fig:CRPS_time_real}
\end{subfigure}
\caption{\small Predictive performance for future runoff (1998-2014) in catchment $\mathcal{A}_1,...,\mathcal{A}_5$ when 0-10 years of observations from the target catchment between 1988 and 1997 are included in the observation likelihood together with other observations of $P$ and/or $A$ (tests \textbf{T3u} and \textbf{T3g}).}\label{fig:RMSE_CRPS_time_real}
\end{figure*}
\section{Simulation study}\label{sec:simulationstudy}
One of the objectives of this paper was to show how quantifying long-term spatial variability can be used as a tool for understanding the uncertainty and biases in the modeling of environmental variables. In the case study we have already suggested that a strong climatic field $c(\boldsymbol{u})$ can be an explanation for the systematic over- and underestimation we saw for some of the catchments. In the simulation study we present here, we aim to investigate this further, i.e. we explore whether the over- and underestimation actually is a model property, and not only caused by, e.g., a mismatch between the model and the runoff data in Voss. More specifically, if the true underlying process is driven by two different spatial processes, one climatic (common for all years) and one annual (different each year), can these systematic predictive biases be expected for a given catchment and set of observation locations?
In the simulation study, we explore the model properties for different values of the spatial parameters $\rho_c , \rho_x , \sigma_c$ and $\sigma_x$. The parameters could represent different environmental variables or different study areas. By this, we aim to show what insight one can obtain about a spatio-temporal environmental variable of interest and the corresponding study area by separating climatic spatial variability from year dependent effects.
\subsection{Experimental set-up}
In the simulation study, we simulate from the model described in Section \ref{sec:models} for 9 different configurations of the range parameters $\rho_c$, $\rho_x$ and the marginal standard deviations $\sigma_c$ and $\sigma_x$. These are shown in Table \ref{tab:simparameters}. We here refer to the proportion $\sigma_c^2/(\sigma_c^2 + \sigma_x^2)$ as the \textit{climatic spatial dominance} as it represents a quantification of how large the climatic spatial effect $c(\boldsymbol{u})$ is relative to the year specific spatial effects $x_j(\boldsymbol{u})$. Note that Parameter set 1 with $\sigma_c =0.8$, $\sigma_x=0.3$, $\rho_c=20$ and $\rho_x=100$ corresponds to the posterior medians obtained for the real case study for $P+A$ (Table \ref{tab:parameter_estimates}). The other parameter sets could represent the dependency structure of another climatic variable, e.g. temperature or \textit{monthly} runoff, or the annual runoff in another part of the world.
\begin{table}[h!]\caption{ \small Parameters used for the simulation study. Parameter set 1 corresponds to the parameters obtained for the case study for $P+A$ in Table \ref{tab:parameter_estimates}. We refer to the proportion $\sigma_c^2/(\sigma_c^2 + \sigma_x^2)$ as the climatic spatial dominance.}\label{tab:simparameters}
\begin{tabular}{llllll}\hline \small
Parameter set & $\sigma_c$ [m/year] & $\sigma_x$ [m/year] & $\rho_c$ [km] & $\rho_x$ [km] & $\sigma_c^2/(\sigma_c^2 + \sigma_x^2)$ \\
\hline
1 & 0.8 & 0.3 & 20 & 100 & 0.88 \\
2 & 0.5 & 0.5 & 20 & 100 & 0.50 \\
3 & 0.3 & 0.8 & 20 & 100 & 0.12 \\
\hline
4 & 0.8 & 0.3 & 50 & 100 & 0.88 \\
5 & 0.5 & 0.5 & 50 & 100 & 0.50 \\
6 & 0.3 & 0.8 & 50 & 100 & 0.12 \\ \hline
7 & 0.8 & 0.3 & 100 & 100 &0.88 \\
8 & 0.5 & 0.5 & 100 & 100 & 0.50 \\
9 & 0.3 & 0.8 & 100 & 100 & 0.12 \\
\hline
\end{tabular}
\end{table}
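The climatic spatial dominance column in Table \ref{tab:simparameters} follows directly from the two marginal standard deviations. A one-line sketch (function name is our own):

```python
def climatic_dominance(sigma_c, sigma_x):
    # Share of the total spatial variance carried by the climatic GRF c(u),
    # relative to the year specific GRFs x_j(u)
    return sigma_c ** 2 / (sigma_c ** 2 + sigma_x ** 2)
```

For Parameter set 1, `climatic_dominance(0.8, 0.3)` gives approximately $0.88$, matching the table.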
The remaining two parameters are set to $\beta_c=2$ and $\tau_\beta=5$ for all experiments, i.e. similar to the posterior medians for $P+A$ in Table \ref{tab:parameter_estimates}. Furthermore, we assume that the measurement errors of the point observations are normally distributed with standard deviation $15 \%$ of the corresponding simulated value, while the measurement errors of the areal observations are normally distributed with standard deviation $3 \%$ of the corresponding simulated value. These estimates are set based on recommendations from the data provider \textsc{nve} regarding the measurement errors we typically see for precipitation and runoff.
For all 9 parameter configurations, annual runoff is simulated for the points and areas in Figure \ref{fig:data}. This way we obtain a realistic distribution of observations. In total 50 datasets were generated for each parameter set, i.e. there are 50 simulated climates $c(\boldsymbol{u})+\beta_c$, and for each climate there are 10 replicates of the year specific component $x_j(\boldsymbol{u})+\beta_j$.
In our experiments, we predict runoff for two of the catchments in Figure \ref{fig:data}: Catchment 1 that is not nested and located relatively far from most point observations, and Catchment 4 that is nested and located in the middle of the study area with many surrounding observations. In turn, Catchment 1 or Catchment 4 is left out of the dataset, and 10 years of annual runoff (1988-1997) are predicted for the target catchment based on all point observations and the remaining areal observations from the same time period (1988-1997). That is, we use the setting $P+A$ for all simulated experiments. Furthermore, the predictions are done both when the target catchment is treated as ungauged with 0 annual runoff observations included in the likelihood, and when the target catchment is treated as partially gauged with 1 randomly drawn annual runoff observation (out of 10 years) included in the likelihood.
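The study simulates the Matérn fields through the \textsc{spde} machinery; as a self-contained illustration of the two-component structure $\beta_c + c(\boldsymbol{u}) + x_j(\boldsymbol{u})$, the sketch below instead draws dense Gaussian random fields with an exponential covariance via a Cholesky factor. Locations, parameter values (Parameter set 1) and the omission of the year intercepts $\beta_j$ are simplifications for illustration only:

```python
import math
import random

def exp_cov(d, sigma, rho):
    # Exponential covariance, a simple stand-in for the Matern family used in the paper
    return sigma ** 2 * math.exp(-d / rho)

def cholesky(a):
    # Lower-triangular Cholesky factor of a symmetric positive definite matrix
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            l[i][j] = math.sqrt(a[i][i] - s) if i == j else (a[i][j] - s) / l[j][j]
    return l

def simulate_grf(locs, sigma, rho, rng):
    # Draw one mean-zero Gaussian random field at the given 2D locations
    n = len(locs)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    c = [[exp_cov(dist(locs[i], locs[j]), sigma, rho) + (1e-9 if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    l = cholesky(c)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(l[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

# One simulated "climate" with 10 yearly replicates, as in the experimental design
rng = random.Random(42)
locs = [(x * 10.0, y * 10.0) for x in range(4) for y in range(4)]  # hypothetical grid [km]
beta_c, sigma_c, rho_c = 2.0, 0.8, 20.0  # Parameter set 1
sigma_x, rho_x = 0.3, 100.0
climate = simulate_grf(locs, sigma_c, rho_c, rng)
years = [[beta_c + climate[i] + xi
          for i, xi in enumerate(simulate_grf(locs, sigma_x, rho_x, rng))]
         for _ in range(10)]
```

The essential point is that the field `climate` is drawn once and reused across all 10 years, while a fresh year specific field is drawn per replicate, mirroring the 50-climates-by-10-years design described above.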
In order to investigate the relationship between the model parameters and prediction bias over time, we quantify bias as follows: For each of the 50 climates, we predict runoff for Catchment 1 and Catchment 4 for 10 years. Then, we compute the empirical probability that all of the 10 true (simulated) values of annual runoff are either below or above the 10 corresponding posterior medians for a specific catchment. We refer to this as the probability of \textit{systematic bias}, i.e.
\begin{equation*}
\text{Prob(Systematic bias)}=\text{Prob(All 10 simulated values are either below or above the 10 posterior medians)}.
\end{equation*}
Systematic bias was common in the case study, and can be seen for example for Catchment 5 in Figure \ref{fig:scatter_real} for P+A. We report the probability of systematic bias as \textit{one} value per parameter set, estimated based on 100 events (50 climates and 2 target catchments).
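The bias diagnostic, and its baseline of $2 \cdot 0.5^{10} \approx 0.2 \%$ under independence between years, can be sketched as follows (a minimal illustration with our own function names):

```python
def systematic_bias(true_vals, medians):
    # True if all simulated values fall on the same side of their posterior medians
    above = [t > m for t, m in zip(true_vals, medians)]
    return all(above) or not any(above)

def independence_baseline(n_years=10):
    # Probability that all years fall on one side if prediction errors were
    # independent across years and the posterior medians were unbiased
    return 2.0 * 0.5 ** n_years
```

A probability of systematic bias well above this baseline therefore signals strong between-year dependence induced by the climatic component.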
\subsection{Results from the simulation study}
We first present the overall 95 $\%$ coverage for the simulation study, based on predictions of 10 years of runoff for 2 catchments and 50 climates. These are shown in Table \ref{tab:coverage_sim}, and we find that the empirical coverages are close to 95 $\%$ for all the parameter sets in Table \ref{tab:simparameters}. If we next consider a scatter plot of the 1000 true and predicted values (not included here), the predictions are also unbiased with respect to the true runoff values. The 95 $\%$ coverages and the scatter plots confirm that the model behaves as expected asymptotically for all parameter sets.
\begin{table}[h!] \small \caption{\small Overall 95 $\%$ coverage for the simulation study over 50 climates, 2 catchments and 10 years of predictions for $P+A$. For ungauged catchments, there are 0 observations from the target catchment in the likelihood while for partially gauged catchments there is 1 annual observation available from the target catchment.} \label{tab:coverage_sim}
\begin{tabular}{llllllllll}\hline
Parameter set & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\\hline
Ungauged catchments & 0.96 & 0.96 & 0.94 & 0.94 & 0.98 & 0.96 & 0.96 & 0.96 & 0.96 \\
Partially gauged catchments & 0.93 & 0.93 & 0.94 & 0.95 & 0.95 & 0.96 & 0.96 & 0.96 & 0.95\\
\hline
\end{tabular}
\end{table}
Next, Figure \ref{fig:bias0} shows a visualization of the systematic bias obtained for the simulation study when the target catchments (Catchment 1 and Catchment 4) are treated as ungauged. Recall that systematic bias here is measured as the probability that all 10 true annual runoff values are either below or above the corresponding predicted value for a specific climate and catchment. We see a clear relationship between this bias and the climatic spatial dominance given by the proportion $\sigma_c^2/(\sigma_c^2 + \sigma_x^2)$:
When annual spatial effects dominate over climatic spatial effects and $\sigma_c \ll \sigma_x$, the probability of systematic bias is close to zero (around 0.2 $\%$). However, when most of the spatial variability is due to the climate ($\sigma_c\gg\sigma_x$), this probability increases to 30-65$\%$ depending on the values of the range parameters $\rho_c$ and $\rho_x$. For the parameters corresponding to the Norwegian case study, the probability of systematic bias was 65 $\%$. Hence, systematic errors like those seen for, e.g., Catchment 5 ($P+A$) in Figure \ref{fig:scatter_real} can be expected quite often for these parameter values. Figure \ref{fig:bias0} also shows that the probability of systematic bias is largest when the climatic range $\rho_c$ is low, i.e. when the information gain from the neighboring catchments is low.
From a statistical point of view, the above results are intuitive: If most of the spatial variability can be explained by climatic conditions, there are large dependencies between years. Either we typically perform accurate predictions all years, or poor predictions all years. Considering all ungauged catchments in Norway, we can expect that $95 \%$ of the true runoff values are inside the corresponding $95 \%$ posterior prediction intervals on average (Table \ref{tab:coverage_sim}), but if we consider predictions for individual catchments over time, a large proportion of the predictions will be biased in one direction or the other (Figure \ref{fig:bias0}). The simulation study shows that the systematic bias we obtained for the case study is not necessarily a result of mismatch between the data and the fitted model, but can indeed be a result of the strong climate around Voss ($\sigma_c\gg\sigma_x$).
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=0.9\textwidth]{bias0}
\caption{\small 0 observations from the target catchment.}
\label{fig:bias0}
\end{subfigure}
~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=0.9\textwidth]{bias1}
\caption{\small 1 observation from the target catchment.}
\label{fig:bias1}
\end{subfigure}
~
\caption{ \small The empirical probability that all of the 10 true annual values are either above or below the posterior median value for Catchment 1 and Catchment 4 over 50 climates for ungauged catchments (Figure \ref{fig:bias0}) and partially gauged catchments (Figure \ref{fig:bias1}). The black circle corresponds to the parameter values we have for the case study from Voss. The black dashed line is the theoretical probability that all the observed values are above or below the posterior median when studying a process that actually is independent over years ($2 \cdot 0.5^{10} \approx 0.2 \%$). This is included as a reference. }\label{fig:bias}
\end{figure}
So far we have considered the probability of systematic bias when there are no data from the target catchments available. Next, in Figure \ref{fig:bias1}, we present the probability of systematic bias when there is one annual observation included in the likelihood. For $\sigma_c\ll \sigma_x$, i.e. when $\sigma_c^2/(\sigma_c^2+\sigma_x^2)$ is close to zero, we see that the probability of systematic bias in general is low for both ungauged catchments (Figure \ref{fig:bias0}) and partially gauged catchments (Figure \ref{fig:bias1}). For this scenario, a new data point from the target catchment does not have a considerable impact on the probability of systematic bias. However, if $\sigma_c \gg \sigma_x$ as in Voss, we find that the extra data point on average leads to a large reduction in the systematic bias probability in Figure \ref{fig:bias1} compared to the systematic bias probability we saw for the ungauged catchments in Figure \ref{fig:bias0}. This is found for all combinations of $\rho_c$ and $\rho_x$, but the tendency is strongest if $\rho_c \ll \rho_x$ as in Voss. The results in Figure \ref{fig:bias} are thus comparable to the results in Figure \ref{fig:RMSE_CRPS_time_real} for the case study, and illustrate the potential value of data from a new location for different parameter values.
\section{Discussion}\label{sec:discussion}
In this paper we have presented a model for annual runoff that consistently combines data of different spatial support. The suggested model is a geostatistical model with two spatial effects: A climatic long-term effect and a year dependent effect that describes the annual discrepancy from the climate. The model was used to estimate mean annual runoff in the Voss area in Norway.
The main focus of the study was on exploring how the combination of point and nested areal observations affects runoff predictions, to demonstrate that our model has mass-conserving properties and to show how quantifying long-term spatial variability can be used as a tool for understanding biases in environmental modeling and for exploiting short records of data. There are three key findings: 1) On average we benefit from including all available observations in the likelihood, both point and areal data. $P+A$ performed better than $P$ and $A$ in terms of RMSE, CRPS and the coverage of the 95 $\%$ posterior prediction intervals in our case study. $P+A$ also performed better than the reference method Top-Kriging, which only supports areal observations. 2) The suggested model that combines point and areal observations is particularly suitable for modeling the nested structure of catchments. The case study showed that the model was able to identify Catchment 3 as a wetter catchment than any of the surrounding catchments and precipitation stations. This was a consequence of using information from two overlapping catchments to constrain and distribute the annual runoff correctly. The interaction between the point and nested areal observations gives a geostatistical model that does more than smoothing, and this represents a main difference from e.g. Top-Kriging, which does not constrain the predicted runoff. 3) How dominating the climatic spatial effects are compared to the annual spatial effects has a large influence on the predictability of runoff. If most of the spatial variability can be explained by long-term (climatic) weather patterns and processes, systematic biases for a location over time can be expected as long as the same observation design is used.
The fact that $P+A$ performed better than $A$ for most catchments around Voss indicates that the point and areal observations of runoff were sufficiently compatible for most catchments, i.e. that evaporation subtracted from precipitation was a valid approximation of point runoff. This interpretation of point runoff is reasonable in areas like Voss where the annual precipitation is considerably larger than the annual evaporation. The evaporation data are uncertain, but they should not have a large impact on the resulting predictions. In many areas of the world, the observed annual evaporation is more than 50 $\%$ of the annual observed precipitation. In such areas, our framework could provide negative point observations and results that are hard to interpret. Negative runoff can in general be a problem in our Gaussian model. Log transforming the data is a solution if considering only point data ($P$), but is not an option when modeling areal data ($A$ and $P+A$) because the log transformation does not work well with the linear aggregation in Equation \eqref{eq:runoffbasis}. For areas with observed values close to zero, extra caution should therefore be taken regarding negative, non-physical results. To avoid negative predictions it is also important to make sure that the mesh used in the \textsc{spde} approach (Figure \ref{fig:catchmesh}) is fine enough to capture the rapid spatial variability in the study area.
Precipitation observations are often avoided as an information source when performing interpolation of runoff in hydrological applications, but the results presented here show that the point observations can contain valuable information when used together with areal observations, at least in data-sparse areas with few streamflow observations. However, there is still room for improvement in the compatibility between the two observation types: The observation design including only point observations ($P$) provided a clear underestimation of annual runoff for most catchments in the case study. It was also seen that the spatial field provided by the precipitation observations ($P$) was smoother than the spatial field provided by the runoff observations ($A$) in Figure \ref{fig:future_eta_mean}, which is a typical result: The increase in spatial variability from precipitation to runoff is mainly explained by small scale variability introduced by soil and vegetation \citep{smoothfield}. Consequently, if the point data are allowed to dominate over the areal data, the point data can cause a runoff field that is too smooth, which affects both the posterior mean and the posterior standard deviation disadvantageously.
Furthermore, it is worth mentioning that all of the available precipitation gauges are located at a lower elevation than the mean elevation of the five catchments in the dataset. This is a common problem: precipitation gauges are often located at low elevations, close to settlements where the gauges are easy to maintain, while the amount of precipitation typically increases with elevation. There is therefore a lack of information about precipitation at high elevations in the data. In addition, precipitation gauges often fail to catch a large proportion of the precipitation, in particular when it comes as snow under windy conditions \citep{kochen1}, so essential information about the precipitation and runoff fields could be lost. To solve the compatibility issues, elevation was considered as a covariate in a preliminary study \citep{Jorid}, but this did not lead to significant improvements, and the results are not included here. Another option could be a preferential sampling approach where we assume that the locations of the precipitation gauges are distributed according to a log-Gaussian Cox process that depends on the response variable, here through elevation implicitly \citep{diggle}.
Elevation is also known to be a factor that affects the spatial dependency structure of precipitation, and Voss is a mountainous area. The spatial range is typically larger in lowlands and decreases with elevation. A non-stationary model similar to the one presented in \cite{Rikke1}, with a range and a marginal variance that change with elevation, could be considered. This can easily be implemented within the \textsc{inla}-\textsc{spde} framework. However, in this case the dataset is small and the complexity of the spatial variability is large. We also have a model with only one replication of the climatic spatial effect, which was the dominating spatial component. A non-stationary model would probably be too complicated and lead to identifiability issues \citep{Rikke2}.
Regardless of the increased complexity in an extended model, it is reasonable to believe that an accurate representation of the climatic conditions at a target location is crucial when predicting annual runoff and other climate-related variables. In the simulation study, we demonstrated how systematic under- and overestimation of a target variable can be expected over time when we fail to characterize the underlying climate in areas where the climatic spatial field's marginal standard deviation $\sigma_c$ is large relative to the other model standard deviations. We also found a clear relationship between the model parameters of the suggested model and systematic prediction bias over time. This shows that the two-field model (and its parameters) can provide useful insight into the properties of a study area and/or an environmental variable of interest.
In spite of the large biases documented for annual runoff predictions in this article, a dominating climate also gives opportunities. In this article, a model with a climatic component was suggested. The climatic component included a spatial effect that is common to all years of observations. This component made it relatively simple to exploit short records of data, and the runoff predictions could easily be improved by including a few observations from the target catchments. Time series from several years are not needed, because one or two observations from a new catchment update the climatic component, which has a large impact on the final model if $\sigma_c$ dominates over the other model variances. Here, we again note how the model parameters can provide useful information about the study area and/or the environmental variable of interest: The potential gain of collecting a new data point from a new location, i.e. a short record, can be indicated by the spatial parameters, in particular by the proportion $\sigma_c^2/(\sigma_c^2 + \sigma_x^2).$
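The role of the proportion $\sigma_c^2/(\sigma_c^2+\sigma_x^2)$ can be made explicit by standard Gaussian conditioning in a one-location version of the model ($y_1 = c + x_1$ with $c\sim N(0,\sigma_c^2)$, $x_1\sim N(0,\sigma_x^2)$). This is a sketch under these simplifying assumptions, with helper names of our own:

```python
def climate_update(y1, s_c, s_x):
    """Posterior of the climatic effect c given one annual observation
    y1 = c + x1, with c ~ N(0, s_c^2) and x1 ~ N(0, s_x^2)."""
    rho = s_c**2 / (s_c**2 + s_x**2)        # the proportion from the text
    return rho * y1, (1.0 - rho) * s_c**2   # posterior mean and variance

# Dominating climate (s_c >> s_x): one observation moves the prediction a lot.
print(climate_update(2.0, 3.0, 1.0))   # (1.8, 0.9)
# Weak climate (s_c << s_x): the same observation barely matters.
print(climate_update(2.0, 0.3, 1.0))   # (0.165..., 0.082...)
```

The posterior mean of the climatic effect is exactly $\sigma_c^2/(\sigma_c^2+\sigma_x^2)\cdot y_1$, which is why this proportion indicates the value of a short record.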
The ability to exploit short records is another main benefit of the suggested model over existing spatial models used for runoff interpolation, such as Top-Kriging. For practitioners, a model with the described properties can be useful in situations where there exist only one or a few observations from a catchment of interest. Short-duration runoff observations are quite common in hydrological datasets, e.g. from planned short-duration missions for water resources assessments, or from gauging stations that were closed after a revision of the gauging network. It is also feasible for large infrastructure projects to measure a few years of annual runoff for a relevant catchment.
\bigskip
\bibliographystyle{plainnat}
\section{Introduction}
In this paper, we study the deformation space of geodesic triangulations of a surface within a fixed homotopy type. Such space can be viewed as a discrete analogue of the space of surface diffeomorphisms homotopic to the identity. Our main theorem is the following.
\begin{theorem}
For a closed orientable surface of negative curvature, the space of geodesic triangulations in a homotopy class is contractible. In particular, it is connected.
\end{theorem}
The group of diffeomorphisms of a smooth surface is one of the fundamental objects in the study of low-dimensional topology. Determining the homotopy types of diffeomorphism groups has profound implications for a wide range of problems in Teichm\"uller spaces, mapping class groups, and the geometry and topology of 3-manifolds.
Smale \cite{smale1959diffeomorphisms} proved that the group of diffeomorphisms of a closed 2-disk which fix the boundary pointwise is contractible. This enabled him to show that the group of orientation-preserving diffeomorphisms of the 2-sphere is homotopy equivalent to $SO(3)$ \cite{smale1959diffeomorphisms}.
Earle-Eells \cite{earle1969fibre} identified the homotopy type of the group of diffeomorphisms homotopic to the identity for any closed surface. In particular, this topological group is contractible for a closed orientable surface with genus greater than one, consistent with our Theorem 1.1 for the discrete analogue.
Cairns \cite{cairns1944isotopic} initiated the investigation of the topology of the space of geodesic triangulations, and proved that if the surface is a geometric triangle in the Euclidean plane, the space of geodesic triangulations with fixed boundary edges is connected. A series of further developments culminated in a discrete version of Smale's theorem proved by Bloch-Connelly-Henderson \cite{bloch1984space} as follows.
\begin{theorem}
The space of geodesic triangulations of a convex polygon with fixed boundary edges is homeomorphic to some Euclidean space. In particular, it is contractible.
\end{theorem}
A simple proof of the contractibility of the space above is provided in \cite{1910.03070} using Tutte's embedding theorem \cite{tutte1963draw}. It also provides examples showing that the homotopy type of this space can be complicated if the boundary of the polygon is not convex.
For closed surfaces, it is conjectured in \cite{connelly1983problems} that
\begin{conj}
The space of geodesic triangulations of a closed orientable surface with constant curvature deformation retracts to the group of isometries of the surface homotopic to the identity.
\end{conj}
The connectivity of these spaces has been explored in \cite{cairns1944isotopic, chambers2021morph, hass2012simplicial}. Awartani-Henderson \cite{awartani1987spaces} identified a contractible subspace in the space of geodesic triangulations of the 2-sphere. Hass-Scott \cite{hass2012simplicial} showed that the space of geodesic triangulations of a surface with a hyperbolic metric is contractible if the triangulation contains only one vertex.
The main result of this paper affirms Conjecture 1.3 in the case of hyperbolic surfaces.
\subsection{Set Up and the Main Theorem}
Assume $M$ is a connected closed orientable smooth surface with a smooth Riemannian metric $g$ of non-positive Gaussian curvature. A topological triangulation of $M$ can be identified as a homeomorphism $\psi$ from $|T|$ to $M$, where $|T|$ is the carrier of a 2-dimensional simplicial complex $T=(V,E,F)$ with the vertex set $V$, the edge set $E$, and the face set $F$.
For convenience, we label the vertices as $1,2,...,n$ where $n=|V|$ is the number of vertices.
The edge in $E$ determined by vertices $i$ and $j$ is denoted as $ij$. Each edge is identified with the closed unit interval $[0,1]$.
Let $T^{(1)}$ be the 1-skeleton of $T$, and denote $X=X(M,T,\psi)$ as the space of geodesic triangulations homotopic to $\psi|_{T^{(1)}}$. More specifically, $X$ contains all the embeddings $\varphi:T^{(1)}\rightarrow M$ satisfying that
\begin{enumerate}
\item The restriction $\varphi_{ij}$ of $\varphi$ on the edge $ij$ is a geodesic parameterized with constant speed, and
\item $\varphi$ is homotopic to $\psi|_{T^{(1)}}$.
\end{enumerate}
It has been proved by Colin de Verdi{\`e}re \cite{de1991comment} that such $X(M, T,\psi)$ is always non-empty. Further, $X$ is naturally a metric space, with the distance function
$$
d_X(\varphi,\phi)=\max_{x}d_{g}(\varphi(x),\phi(x)).
$$
Then our main theorem is formally stated as follows.
\begin{theorem}
\label{main}
If $(M,g)$ has strictly negative Gaussian curvature, then $X(M,T,\psi)$ is contractible. In particular, it is connected.
\end{theorem}
\subsection{Generalized Tutte's Embedding}
Let $\tilde X=\tilde X (M,T,\psi)$ be the super space of $X$, containing all the continuous maps $\varphi:T^{(1)}\rightarrow M$ satisfying that
\begin{enumerate}
\item The restriction $\varphi_{ij}$ of $\varphi$ on the edge $ij$ is a geodesic parameterized with constant speed, and
\item $\varphi$ is homotopic to $\psi|_{T^{(1)}}$.
\end{enumerate}
Notice that elements in $\tilde X$ may not be embeddings of $T^{(1)}$ to $M$. The space $\tilde{X}$ is also naturally a metric space, with the same distance function
$$
d_{\tilde X}(\varphi,\phi)=\max_{x}d_{g}(\varphi(x),\phi(x)).
$$
We call an element in $\tilde{X}$ a \textit{geodesic mapping}. A geodesic mapping is determined by the positions $q_i = \varphi(i)$ of the vertices and the homotopy classes of $\varphi_{ij}$ relative to the endpoints $q_i$ and $q_j$. In particular, this holds for geodesic triangulations. Since we can perturb the vertices of a geodesic triangulation to generate another, $X$ is a $2n$-dimensional manifold.
Let $(i, j)$ be the directed edge starting from the vertex $i$ and ending at the vertex $j$. Denote
$
\vec E=\{(i,j):ij\in E\}
$
as the set of directed edges of $T$. A positive vector $w\in\mathbb R^{\vec E}_{>0}$ is called a \emph{weight} of $T$. For any weight $w$ and geodesic mapping $\varphi\in \tilde X$, we say $\varphi$ is \textit{$w$-balanced} if for any $i\in V$,
$$
\sum_{j:ij\in E}w_{ij}{v}_{ij}=0.
$$
Here $ v_{ij}\in T_{q_i}M$ is defined with the exponential map $\exp:TM\to M$ such that $\exp_{q_i}(t{v}_{ij}) = \varphi_{ij}(t)$ for $t\in[0, 1]$.
The main part of the proof of Theorem \ref{main} is to generalize Tutte's embedding theorem (see Theorem 9.2 in \cite{tutte1963draw} or Theorem 6.1 in \cite{floater2003one}) to closed surfaces of negative curvature. Specifically, we will prove
the following two theorems.
\begin{theorem}
\label{existence}
Assume $(M,g)$ has strictly negative Gaussian curvature. For any weight $w$, there exists a unique geodesic mapping $\varphi\in \tilde X(M, T,\psi)$ that is $w$-balanced. Moreover, the induced map $\Phi(w)=\varphi$ is continuous from $\mathbb R^{\vec E}_{>0}$ to $\tilde X$.
\end{theorem}
\begin{theorem}
\label{embedding}
If $\varphi\in\tilde X$ is $w$-balanced for some weight $w$, then $\varphi\in X$.
\end{theorem}
Theorem \ref{embedding} can be regarded as a generalization of the embedding theorems by Colin de Verdi{\`e}re (see Theorem 2 in \cite{de1991comment}) and Hass-Scott (see Lemma 10.12 in \cite{hass2012simplicial}), which imply that the minimizer of the following discrete Dirichlet energy
$$E(\varphi) = \frac{1}{2}\sum_{ij\in E}w_{ij}l^2_{ij}$$
among the maps $\varphi$ in the homotopy class of $\psi|_{T^{(1)}}$ is a geodesic triangulation. Here $l_{ij}$ is the geodesic length of $\varphi_{ij}$ in $M$. The minimizer is a $w$-balanced geodesic mapping with $w_{ij} = w_{ji}$ for $ij\in E$. Hence, Theorem \ref{embedding} extends the previous results from the cases of symmetric weights to non-symmetric weights.
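In the flat analogue, the minimizer of this Dirichlet energy with a fixed convex boundary is the classical Tutte embedding, and the $w$-balanced condition $\sum_j w_{ij}(x_j-x_i)=0$ becomes linear. The toy sketch below (Euclidean square, two interior vertices, uniform weights $w_{ij}=1$) is an illustration of this flat case only, not the hyperbolic construction of this paper:

```python
# Minimize E = 1/2 * sum_{ij} w_ij |x_i - x_j|^2 with the boundary fixed at
# the corners of a square. The minimizer is w-balanced: each interior vertex
# is the (weighted) average of its neighbors.
corners = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
a, b = (0.5, 0.5), (0.5, 0.5)                  # interior vertices, any start
nbrs_a = lambda: [corners[0], corners[1], corners[3], b]
nbrs_b = lambda: [corners[1], corners[2], corners[3], a]

def avg(pts):
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

for _ in range(100):                           # Jacobi iteration; a contraction
    a, b = avg(nbrs_a()), avg(nbrs_b())

print(a, b)  # converges to (0.4, 0.4) and (0.6, 0.6) up to rounding
residual = tuple(sum(q[k] - a[k] for q in nbrs_a()) for k in (0, 1))
print(residual)  # ~ (0, 0): the balance condition at vertex a
```

The fixed point can be checked by hand: $a = \tfrac14\big((0,0)+(1,0)+(0,1)+b\big)$ and $b = \tfrac14\big((1,0)+(1,1)+(0,1)+a\big)$ give $a=(0.4,0.4)$, $b=(0.6,0.6)$.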
We believe that the proofs in Colin de Verdi{\`e}re \cite{de1991comment} and Hass-Scott \cite{hass2012simplicial} could be easily modified to work with our non-symmetric case. Nevertheless, we will give a new proof in Section 3 to make the paper self-contained.
\subsection{Mean Value Coordinates and the Proof of Theorem \ref{main}}
Theorems \ref{existence} and \ref{embedding} give a continuous map $\Phi$ from $\mathbb R^{\vec E}_{>0}$ to $X$.
In the opposite direction, we can construct a weight $w$ for a geodesic embedding $\varphi\in X$ using the \emph{mean value coordinates}, which were first introduced by Floater \cite{floater2003mean}. Given $\varphi\in X$, the mean value coordinates are defined to be
$$
w_{ij}=\frac{\tan(\alpha_{ij}/2)+\tan(\beta_{ij}/2)}{| v_{ij}|},
$$
where $| v_{ij}|$ equals the geodesic length of $\varphi_{ij}([0,1])$, and $\alpha_{ij}$ and $\beta_{ij}$ are the two inner angles in $\varphi(T^{(1)})$ at the vertex $\varphi(i)$ sharing the edge $\varphi_{ij}([0,1])$. The construction of mean value coordinates gives a continuous map $\Psi$ from $X$ to $\mathbb R^{\vec E}_{>0}$. Further,
by Floater's mean value theorem (see Proposition 1 in \cite{floater2003mean}),
any $\varphi\in X$ is $\Psi(\varphi)$-balanced.
Namely, $\Phi\circ \Psi=id_X$. Then Theorem \ref{main} is a direct consequence of Theorems \ref{existence} and \ref{embedding}.
\begin{proof}[Proof of Theorem \ref{main}]
Since $\mathbb R^{\vec E}_{>0}$ is contractible, $\Psi\circ\Phi$ is homotopic to the identity map. Since $\Phi\circ \Psi=id_X$, $X$ is homotopy equivalent to the contractible space $\mathbb R^{\vec E}_{>0}$.
\end{proof}
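Floater's mean value theorem can be verified numerically in the flat case: for any star of neighbors around a point $p$ in the Euclidean plane, the mean value weights make $p$ $w$-balanced, i.e. $\sum_j w_{ij}(p_j - p) = 0$. The sketch below illustrates exactly this planar identity (not the hyperbolic setting of the paper); all helper names are ours:

```python
import math

def mean_value_weights(p, nbrs):
    """Mean value weights w_j = (tan(a_{j-1}/2) + tan(a_j/2)) / |p_j - p|,
    where a_j is the angle at p between consecutive edges; nbrs must be
    listed in cyclic order around p with each sector angle < pi."""
    n = len(nbrs)
    vecs = [(q[0] - p[0], q[1] - p[1]) for q in nbrs]
    angs = [math.atan2(v[1], v[0]) for v in vecs]
    def sector(j):  # angle between edge j and edge j+1
        return (angs[(j + 1) % n] - angs[j]) % (2 * math.pi)
    w = []
    for j in range(n):
        a_prev, a_next = sector((j - 1) % n), sector(j)
        r = math.hypot(*vecs[j])
        w.append((math.tan(a_prev / 2) + math.tan(a_next / 2)) / r)
    return w

p = (0.3, -0.2)
nbrs = [(p[0] + r * math.cos(t), p[1] + r * math.sin(t))
        for r, t in [(1.0, 0.0), (1.7, 1.2), (0.8, 2.5),
                     (1.3, 3.6), (2.0, 4.9)]]
w = mean_value_weights(p, nbrs)
rx = sum(wj * (q[0] - p[0]) for wj, q in zip(w, nbrs))
ry = sum(wj * (q[1] - p[1]) for wj, q in zip(w, nbrs))
print(rx, ry)  # both ~ 0: p is w-balanced for its mean value weights
```

The weights are positive because every sector angle is below $\pi$, and the vanishing residue is exactly Proposition 1 of \cite{floater2003mean} in the plane.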
In the remainder of the paper, we will prove Theorem \ref{existence} in Section 2 and Theorem \ref{embedding} in Section 3.
\section{Proof of Theorem \ref{existence}}
Theorem \ref{existence} consists of three parts: the existence and the uniqueness of the $w$-balanced geodesic mapping, and the continuity of the map $\Phi$.
In this section, we will first parametrize $\tilde X$ by $\tilde M^n$, and then prove the uniqueness, the existence, and the continuity in Subsections 2.1, 2.2, and 2.3, respectively.
Assume that $p$ is the covering map from $\tilde M$ to $M$, and $\Gamma$ is the corresponding group of deck transformations of the covering so that $\tilde{M}/\Gamma = M$. For any $i\in V$, fix a lifting $\tilde q_i\in\tilde M$ of $q_i\in M$. For any edge $ij$, denote $\tilde \varphi_{ij}(t)$ as the lifting of $\varphi_{ij}(t)$ such that $\tilde \varphi_{ij}(0) = \tilde q_i$. Then $p(\tilde \varphi_{ij}(1)) = \varphi_{ij}(1)=q_j=p(\tilde q_j)$, and there exists a unique deck transformation $A_{ij}\in\Gamma$ such that $\tilde \varphi_{ij}(1) = A_{ij} \tilde q_j$. It is easy to see that $A_{ij}=A_{ji}^{-1}$ for any edge $ij$.
Equip $\tilde M$ with the natural pullback Riemannian metric $\tilde g$ of $g$ with negative Gaussian curvature. This metric is invariant under $\Gamma$. For any $ x, y\in\tilde M$, there exists a unique geodesic with constant speed parameterization $\gamma_{x,y}:[0,1]\rightarrow\tilde M$ such that $\gamma_{x,y}(0)=x$ and $\gamma_{x,y}(1)=y$. We can naturally parametrize $\tilde X$ as follows.
\begin{theorem}
For any $( x_1,..., x_n)\in\tilde M^n$, define $\varphi=\varphi[x_1,...,x_n]$ as
$$
\varphi_{ij}(t)= p\circ\gamma_{x_i,A_{ij}x_j}(t)
$$
for any $ij\in E$ and $t\in[0,1]$.
Then such $\varphi$ is a well-defined geodesic mapping in $\tilde{X}$, and the map $(x_1,...,x_n)\mapsto\varphi[x_1,...,x_n]$ is a homeomorphism from $\tilde M^n$ to $\tilde X$.
\end{theorem}
Here we omit the proof of Theorem 2.1, which is routine but lengthy.
In the remainder of this section,
for any $x,y,z\in\tilde M$ and $u,v\in T_x\tilde M$, we denote
\begin{enumerate}
\item $d(x,y)$ as the intrinsic distance between $x,y$ in $(\tilde M,\tilde g)$, and
\item $v(x,y)=\exp_x^{-1}y\in T_x\tilde M$, and
\item $\triangle xyz$ as the geodesic triangle in $\tilde M$ with vertices $x,y,z$, which could possibly be degenerate, and
\item $\angle yxz$ as the inner angle of $\triangle xyz$ at $x$ if $d(x,y)>0$ and $d(x,z)>0$, and
\item $|v|$ as the norm of $v$ under the metric $\tilde g_x$, and
\item $u\cdot v$ as the inner product of $u$ and $v$ under the metric $\tilde g_x$.
\end{enumerate}
By scaling the metric if necessary, we may assume that the Gaussian curvatures of $(M,g)$ and $(\tilde M,\tilde g)$ are bounded above by $-1$.
\subsection{Proof of the Uniqueness}
We first prove the following Lemma \ref{CAT} using CAT($0$) geometry. See Theorem 4.3.5 in \cite{burago2001course} and Theorem 1A.6 in \cite{bridson2013metric} for the well-known comparison theorems.
\begin{lemma}
\label{CAT}
Assume $x,y,z\in\tilde M$, then
\begin{enumerate}
\item $|v(z,x) - v(z,y)|\leq d(x,y)$, and
\item $ v(x,y)\cdot v(x,z)+ v(y,x)\cdot v(y,z)\geq d(x,y)^2$,
\end{enumerate}
and the equality holds if and only if $\triangle xyz$ is degenerate.
\end{lemma}
\begin{proof}
If $\triangle xyz$ is degenerate, then there exists a geodesic $\gamma$ in $\tilde M$ such that $x,y,z\in\gamma$, and then the proof is straightforward. So we assume that $\triangle xyz$ is non-degenerate.
(1) Three points $v(z,x), v(z,y)$, and $0$ in $T_z\tilde M$ determine a Euclidean triangle, where $|v(z,x)|=d(x,z)$, and $|v(z,y)|=d(z,y)$ and the angle between $v(z,x)$ and $v(z,y)$ is equal to $\angle xzy$. Then by the CAT(0) comparison theorem,
$$
|v(z,x)-v(z,y)|< d(x,y).
$$
(2) Let $x',y',z'\in\mathbb R^2$ be such that
$$
|x'-z'|_2=|v(x,z)|,\quad\quad\quad |y'-z'|_2=|v(y,z)|,\quad\text{and}\quad|x'-y'|_2=|v(x,y)|.
$$
Then by the CAT$(0)$ comparison theorem,
$\angle yxz< \angle y'x'z'$, and $\angle xyz< \angle x'y'z'$.
Hence,
$$
v(x,y)\cdot v(x,z)+ v(y,x)\cdot v(y,z)>
(y'-x')\cdot(z'-x')+(x'-y')\cdot(z'-y')=|x'-y'|_2^2
= d(x,y)^2.
$$
\end{proof}
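Lemma \ref{CAT} can be sanity-checked numerically in the hyperboloid model of the hyperbolic plane (constant curvature $-1$). The sketch below, an illustration only and not part of the proof, builds $v(x,y)=\exp_x^{-1}y$ explicitly and checks both strict inequalities for a non-degenerate triangle:

```python
import math

def mink(u, v):  # Minkowski form <u,v> = u1*v1 + u2*v2 - u3*v3
    return u[0] * v[0] + u[1] * v[1] - u[2] * v[2]

def lift(a, b):  # map (a,b) in R^2 to the hyperboloid model of H^2
    return (a, b, math.sqrt(1.0 + a * a + b * b))

def dist(x, y):  # hyperbolic distance d(x,y) = arccosh(-<x,y>)
    return math.acosh(max(1.0, -mink(x, y)))

def log_map(x, y):  # v(x,y) = exp_x^{-1}(y), a tangent vector at x
    d = dist(x, y)
    if d == 0.0:
        return (0.0, 0.0, 0.0)
    w = tuple(y[i] + mink(x, y) * x[i] for i in range(3))  # <x,w> = 0
    s = math.sqrt(mink(w, w))                              # equals sinh(d)
    return tuple(d * wi / s for wi in w)

# a non-degenerate geodesic triangle in H^2
x, y, z = lift(0.0, 0.0), lift(1.0, 0.0), lift(0.5, 1.0)

# (1): |v(z,x) - v(z,y)| < d(x,y); tangent norms use the Minkowski form
diff = tuple(u - v for u, v in zip(log_map(z, x), log_map(z, y)))
print(math.sqrt(mink(diff, diff)), "<", dist(x, y))

# (2): v(x,y).v(x,z) + v(y,x).v(y,z) > d(x,y)^2
lhs = mink(log_map(x, y), log_map(x, z)) + mink(log_map(y, x), log_map(y, z))
print(lhs, ">", dist(x, y) ** 2)
```

Both inequalities come out strict here, as the lemma predicts for a non-degenerate triangle in strictly negative curvature.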
\begin{proof}[Proof of the uniqueness part in Theorem \ref{existence} ]
If not, assume $\varphi[x_1,...,x_n]$ and $\varphi[x_1',...,x_n']$ are two different geodesic mappings that are both $w$-balanced for some weight $w$.
We are going to prove a discrete maximum principle for the function $j\mapsto d(x_j,x_j')$.
Assume $i\in V$ is such that $d(x_i,x_i')=\max_{j\in V}d(x_j,x_j')>0$. By lifting the $w$-balanced assumption to $\tilde M$, we have that
\begin{equation}
\sum_{j:ij\in E}w_{ij} v(x_i,A_{ij}x_j)=0,
\end{equation}
and
\begin{equation}
\sum_{j:ij\in E}w_{ij}v(x_i',A_{ij}x_j')=0.
\end{equation}
Then by part (1) of Lemma \ref{CAT} and equation (1),
\begin{align*}
&\left|\sum_{j:ij\in E}w_{ij}v(x_i,A_{ij}x_j')\right| \\
= &\left|\sum_{j:ij\in E}w_{ij}v(x_i,A_{ij}x_j') - \sum_{j:ij\in E}w_{ij}v(x_i,A_{ij}x_j)\right|\\
\leq&\sum_{j:ij\in E}w_{ij} d(A_{ij}x_j,A_{ij}x_j')\\
=&\sum_{j:ij\in E}w_{ij} d(x_j,x_j')\\
\leq &d(x_i,x_i')\sum_{j:ij\in E}w_{ij}.
\end{align*}
By part (2) of Lemma \ref{CAT}, equation (2), and the Cauchy-Schwarz inequality,
\begin{align*}
&d(x_i,x_i')\cdot\left|\sum_{j:ij\in E}w_{ij}v(x_i,A_{ij}x_j')\right| \\
\geq &v(x_i,x_i')\cdot\sum_{j:ij\in E}w_{ij}v(x_i,A_{ij}x_j') +
v(x_i',x_i)\cdot\sum_{j:ij\in E}w_{ij}v(x_i',A_{ij}x_j')\\
\geq &\sum_{j:ij\in E}w_{ij}\cdot d(x_i,x_i')^2.
\end{align*}
Therefore, the equalities hold in both inequalities above. Then
for any neighbor $j$ of $i$, $d(x_j,x_j')=d(x_i,x_i')=\max_{k\in V}d(x_k,x_k')$, and
$A_{ij}x_j$ is on the geodesic determined by $x_i$ and $x_i'$. Then the one-ring neighborhood of $p(x_i)$ in $\varphi[x_1,...,x_n](T^{(1)})$ degenerates to a geodesic arc.
By the connectedness of the surface, we can repeat the above argument and deduce that $d(x_j,x_j')=d(x_i,x_i')$ for any $j\in V$. Further, for any triangle $\sigma\in F$, $\varphi[x_1,...,x_n](\partial\sigma)$ degenerates to a geodesic arc.
It is not difficult to extend $\varphi[x_1,...,x_n]$ to a continuous map $\tilde\varphi$ from $|T|$ to $M$, such that for any triangle $\sigma\in F$, $\tilde\varphi(\sigma)=\varphi[x_1,...,x_n](\partial\sigma)$ is a geodesic arc.
It is also not difficult to prove that $\tilde\varphi$ is homotopic to $\psi$. Therefore, $\tilde\varphi$ has
degree one and is surjective. This contradicts the fact that $\tilde\varphi(|T|)$ is a finite union of geodesic arcs.
\end{proof}
\subsection{Proof of the Existence}
Here we prove a stronger existence result.
\begin{theorem}
\label{proper}
Given a compact subset $K$ of $\mathbb R^{\vec E}_{>0}$, there exists a compact subset $K'=K'(M,T,\psi,K)$ of $\tilde X$ such that for any $w\in K$, there exists a $w$-balanced geodesic mapping $\varphi\in K'$.
\end{theorem}
We first introduce a topological Lemma \ref{toplemma} and then reduce Theorem \ref{proper} to Lemma \ref{keylemma}.
\begin{lemma}
\label{toplemma}
Suppose $B^{n}=\{x\in\mathbb R^n: |x|\leq1\}$ is the unit ball in $\mathbb R^n$, and $f:B^{n}\rightarrow\mathbb R^n$ is a continuous map such that $x\neq f(x)/|f(x)|$ for any $x\in\partial B^n=S^{n-1}$ with $f(x)\neq0$. Then $f$ has a zero in $B^n$.
\end{lemma}
\begin{proof}
If not, $g(x) = f(x)/|f(x)|$ is a continuous map from $B^n$ to $\partial B^n$. Since $B^n$ is contractible, $g(x)$ is null-homotopic, and thus $g|_{S^{n-1}}$ is also null-homotopic. Since $g(x)\neq x$,
it is easy to verify that
$$
H(x,t)=\frac{tg(x)+(1-t)(-x)}{|tg(x)+(1-t)(-x)|}
$$
is a homotopy between $g|_{S^{n-1}}$ and $-id|_{S^{n-1}}$. This contradicts the fact that $-id|_{S^{n-1}}$ is not null-homotopic.
\end{proof}
\begin{lemma}
\label{keylemma}
We fix an arbitrary point $q\in\tilde M$. If $w\in \mathbb R^{\vec E}_{>0}$ and $(x_1,...,x_n)\in\tilde M^n$ satisfy that
\begin{equation}
\label{residue}
v (x_i,q)\cdot\sum_{j:ij\in E}w_{ij} v(x_i,A_{ij}x_j)\leq 0
\end{equation}
for any $i\in V$, then
$$
\sum_{i\in V}d(x_i,q)^2< R^2
$$
for some constant $R>0$ which depends only on $M,T,\psi,q$ and
$$
\lambda_w:=\frac{\max_{ij\in E} w_{ij}}{\min_{ij\in E} w_{ij}} .
$$
\end{lemma}
The vector in Figure \ref{residuev}
$$ r_i = \sum_{j:ij\in E}w_{ij} v(x_i,A_{ij}x_j)$$
is defined as the \textit{residue vector} $ r_i$ at $x_i$ of $\varphi[x_1, \cdots, x_n]$ with respect to the weight $w$. Notice that a geodesic mapping $\varphi$ is $w$-balanced if and only if all its residue vectors vanish with respect to $w$. Lemma \ref{keylemma} means that if all the residue vectors drag the $x_i$'s away from $q$, then all the $x_i$'s must stay within a bounded distance of $q$.
The definition of the residue vector is similar to the concept of \textit{discrete tension field} in \cite{gaster2018computing}.
\begin{figure}[h!]
\includegraphics[width=0.4\linewidth]{picture2.eps}
\caption{The residue vector and Lemma \ref{keylemma}.}
\label{residuev}
\end{figure}
\begin{proof}[Proof of Theorem \ref{proper}]
Fix an arbitrary base point $q\in\tilde M$, and then by Lemma \ref{keylemma} we can pick a sufficiently large constant $R=R(M,T,\psi,K)>0$ such that if
$$
\sum_{i=1}^n d(x_i,q)^2=R^2,
$$
there exists $i\in V$ such that
$$
v (x_i,q)\cdot\sum_{j:ij\in E}w_{ij} v(x_i,A_{ij}x_j)> 0.
$$
We will prove that the compact set
$$
K'=\{\varphi[x_1,...,x_n]:\sum_{i=1}^n d(x_i,q)^2\leq R^2\}
$$
is satisfactory.
For any $x\in \tilde M$, let $P_x:T_x\tilde M\rightarrow T_q\tilde M$ be the parallel transport along the geodesic $\gamma_{x,q}$. Set
$$
B=\{( v_1,..., v_n)\in (T_q\tilde M)^n:\sum_{i=1}^n| v_i|^2\leq 1\}
$$
as a Euclidean $2n$-dimensional unit ball,
and construct a map
$
F:B\rightarrow (T_q\tilde M)^n
$
in the following three steps. Firstly, we construct $n$ points $x_1,...,x_n\in\tilde M$ as
$
x_i(v_1,...,v_n)=\exp_q(Rv_i).
$
Secondly, we compute the residue vector at each $x_i$ as
$$
r_i=\sum_{j:ij\in E} w_{ij} v(x_i,A_{ij}x_j)\in T_{x_i}\tilde M.
$$
Lastly, we pull back the residues to $T_q\tilde M$ as
$F(v_1,...,v_n) = \left(P_{x_1}(r_1),...,P_{x_n}(r_n)\right).$
Notice that the map $(v_1,...,v_n)\mapsto \varphi[x_1,...,x_n]$ is a homeomorphism from $B$ to $K'$, and $F(v_1,...,v_n)=0$ if and only if the corresponding $\varphi[x_1,...,x_n]$ in $K'$ is a $w$-balanced map. Hence, it suffices to prove that $F$ has a zero in $B$. By Lemma \ref{toplemma}, it suffices to prove that for any $(v_1,...,v_n)\in\partial B$,
$$
(v_1,...,v_n)\neq \frac{F(v_1,..,v_n)}{|F(v_1,...,v_n)|}.
$$
Suppose $(v_1,...,v_n)$ is an arbitrary point on $\partial B$, and then
it suffices to prove that there exists $i\in V$ such that
$v_i\cdot F_i(v_1,...,v_n)=v_i\cdot P_{x_i}(r_i)<0$.
Notice that $x_1(v_1,...,v_n),...,x_n(v_1,...,v_n)$ satisfy that $\sum_{i=1}^nd(q,x_i)^2=R^2$, so by our assumption on $R$, there exists $i\in V$ such that
$$
v (x_i,q)\cdot\sum_{j:ij\in E}w_{ij} v(x_i,A_{ij}x_j)=v(x_i,q)\cdot r_i> 0,
$$
and thus,
$$
v_i\cdot P_{x_i}(r_i)=-\frac{1}{d(q,x_i)}P_{x_i}\left( v(x_i,q)\right)\cdot P_{x_i}(r_i)
=-\frac{1}{d(q,x_i)} v(x_i,q)\cdot r_i< 0.
$$
\end{proof}
In the rest of this subsection, we will prove Lemma \ref{keylemma} by contradiction. Let us first sketch the idea of the proof. Assume
$\sum_{i\in V}d(x_i,q)^2$ is very large.
Then, by a standard compactness argument, there exists a long edge $ij$ in the geodesic mapping $\varphi[x_1,..., x_n]$. Assume $d(q,x_i)\geq d(q,x_j)$; then the corresponding long edge $\gamma_{x_i,A_{ij}x_j}$ in $\tilde M$ is pulling $x_i$ towards $q$. This implies that there exists another long edge $\gamma_{x_i,A_{ik}x_k}$ dragging $x_i$ away from $q$, since otherwise the residue vector $r_i$ would not drag $x_i$ away from $q$. It can be shown that $d(q,x_k)>d(q,x_i)$.
Repeating the above steps, we can find an arbitrarily long sequence of vertices such that the distances from these vertices to $q$ are increasing. This is impossible, as we only have finitely many vertices.
\begin{figure}[h!]
\includegraphics[width=0.8\linewidth]{triangles.eps}
\caption{Triangles in Step (b), (c), and (d).}
\label{triangles}
\end{figure}
Here is a list of properties that serve as the building blocks of the proof of Lemma \ref{keylemma}.
\begin{lemma}
\label{steps}
(a) For any constant $C>0$, there exists a constant $C_1=C_1(M,T,\psi,C)>0$ such that if
$$
\sum_{i\in V}d(x_i,q)^2\geq C_1,
$$
then
$$
\max_{ij\in E}d(x_i,A_{ij}x_j)\geq C.
$$
(b) There exists a constant $C_2=C_2(M,T,\psi)>0$ such that if
$$
d(A_{ij}x_j,q)\geq C_2,
$$
then
$$
\angle (A_{ji}q)x_j q = \angle q(A_{ij}x_j)(A_{ij}q) \leq\frac{\pi}{8}.
$$
(c) There exists a constant $C_3>0$ such that if $x,y\in\tilde M$ satisfy that
$$
d(y,q)\geq d(x,q)+C_3,
$$
then
$$\angle xyq\leq \frac{\pi}{4}.
$$
(d) There exists a constant $C_4>0$ such that if $x,y\in\tilde M$ satisfy that
$$
d(x,y)\geq C_4, \quad\text{ and }\quad d(x,q)\geq d(y,q),
$$
then
$$
\angle yxq\leq\frac{\pi}{8}.
$$
(e) For any constant $C>0$, there exists a constant $C_5=C_5(M,T,\psi,C)>0$ such that if
$$
\max_{ij\in E}d(x_i,A_{ij}x_j)\geq C_5,
$$
then there exists $ij\in E$ such that
$$
\frac{ v(x_i,q)}{| v(x_i,q)|}\cdot v(x_i,A_{ij}x_j)\geq C.
$$
(f) For any constant $C>0$, there exists a constant $C_6=C_6(M,T,\psi ,\lambda_w,C)>0$ such that if
$$
\frac{ v(x_i,q)}{| v(x_i,q)|}\cdot v(x_i,A_{ij}x_j)\geq C_6
$$
for some edge $ij\in E$, then there exists $ik\in E$ such that
$$
\frac{ v(x_i,q)}{| v(x_i,q)|}\cdot v(x_i,A_{ik}x_k)\leq -C.
$$
(g) For any constant $C>0$, there exists a constant $C_7=C_7(M,T,\psi,C)>0$ such that if
$$
\frac{ v(x_i,q)}{| v(x_i,q)|}\cdot v(x_i,A_{ik}x_k)\leq -C_7,
$$
then
$$
d(x_k,q)\geq d(x_i,q)+C.
$$
(h) For any constant $C>0$, there exists a constant $C_8=C_8(M,T,\psi,C)>0$ such that if
$$
d(x_j,q)\geq d(x_i,q)+C_8,
$$
then
$$
\frac{ v(x_j,q)}{| v(x_j,q)|}\cdot v(x_j,A_{ji}x_i)\geq C.
$$
\end{lemma}
\begin{figure}[h!]
\includegraphics[width=0.4\linewidth]{picture3.eps}
\caption{Vertices leaving the point $q$.}
\label{drag}
\end{figure}
\begin{proof}[Proof of Lemma \ref{keylemma} assuming Lemma \ref{steps}]
For any $C>0$, there exists a sufficiently large constant $\tilde{C} = \tilde{C} (M,T,\psi,\lambda_w,C) $ determined from (a), (e), (f) and (g) in Lemma \ref{steps} such that if
$$
\sum_{i\in V}d(x_i,q)^2\geq \tilde{C} ,
$$
then there exist three vertices $x_i$, $x_j$, and $x_k$ shown in Figure \ref{drag} with
$$
d(x_k, q)\geq d(x_j, q) + C.
$$
Moreover, by (h), (f) and (g) of Lemma \ref{steps}, we can find another vertex $x_l$ such that
$$
d(x_l,q)\geq d(x_k, q)+C\geq d(x_j, q) + 2C,
$$
if the constant $\tilde{C} (M,T,\psi,\lambda_w,C)$ is sufficiently large.
Inductively, we can find a sequence $i_1,...,i_{n+1}\in V$ such that
$$
d(x_{i_1},q)>d(x_{i_2},q)>...>d(x_{i_{n+1}},q).
$$
This contradicts the fact that $V$ has only $n$ elements.
\end{proof}
\begin{proof}[Proof of Lemma \ref{steps}]
(a)
By a standard compactness argument, the set
$$
\{\varphi\in\tilde X:\max_{ij\in E}~~\mathrm{length}(\varphi_{ij}([0,1]))\leq C\}
$$
is a compact subset of $\tilde X$. Notice that $(x_1,...,x_n)\mapsto\varphi[x_1, ..., x_n]$ is a homeomorphism from $\tilde M^n$ to $\tilde X$ and
$$
\mathrm{length}(\varphi_{ij}([0,1])) = d(x_i,A_{ij}x_j).
$$
Therefore,
$$
\{(x_1,...,x_n)\in\tilde M^n:\max_{ij\in E}~~d(x_i,A_{ij}x_j)\leq C\}
$$
is compact and the conclusion follows.
(b)
We claim that the constant $C_2$, which is determined by
$$
\sinh C_2=\frac{\max_{ij\in E}\sinh d(A_{ij}q,q)}{\sin\frac{\pi}{8}},
$$
is satisfactory.
Let $\triangle ABC$ be the hyperbolic triangle with the corresponding edge lengths
$$
a = d(A_{ij}x_j,q),\quad b=d(A_{ij}x_j,A_{ij}q),\quad c=d(A_{ij}q,q).
$$
Since $\tilde{M}$ is a CAT($-1$) space, it suffices to show that $\angle C\leq\pi/8$. By the hyperbolic law of sines,
$$
\sin\angle C=\frac{\sinh c\cdot\sin\angle A}{\sinh a}\leq\frac{\max_{ij\in E}\sinh d(A_{ij}q,q)\cdot 1}{\sinh C_2}=\sin\frac{\pi}{8}.
$$
(c)
We claim that the constant $C_3$ determined by
$$
\sinh C_3=\frac{1}{\sin\frac{\pi}{8}}
$$
is satisfactory.
Let $\triangle ABC$ be the hyperbolic triangle with the corresponding edge lengths
$$
a = d(x,y),\quad b=d(y,q),\quad c=d(x,q).
$$
Since $\tilde{M}$ is a CAT($-1$) space, it suffices to show that $\angle C\leq\pi/8$. By the hyperbolic law of sines,
$$
\sin\angle C=\frac{\sinh c\cdot\sin\angle B}{\sinh b}\leq\frac{\sinh c}{\sinh b}\leq
\frac{\sinh c}{\sinh (c+C_3)}\leq\frac{\sinh c}{\sinh c\cdot\sinh C_3}=\sin\frac{\pi}{8}.
$$
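For completeness, the last inequality in this chain uses the addition formula for $\sinh$ together with $\cosh c\geq\sinh c$:
$$
\sinh (c+C_3)=\sinh c\cosh C_3+\cosh c\sinh C_3\geq\cosh c\sinh C_3\geq\sinh c\sinh C_3 .
$$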
(d)
We claim that the constant $C_4$ determined by
$$
\sin^2\frac{\pi}{8}\cdot \cosh C_4=2
$$
is satisfactory.
Let $\triangle ABC$ be the hyperbolic triangle with the corresponding edge lengths
$$
a = d(x,y),\quad b=d(y,q),\quad c=d(x,q).
$$
Since $\tilde{M}$ is a CAT($-1$) space, it suffices to show that $\angle B\leq\pi/8$. By the hyperbolic law of cosines,
$$
\cos A = -\cos B\cos C+\sin B\sin C\cosh a.
$$
Then,
$$
2\geq\sin B\sin C\cosh a\geq\sin B\sin C\cosh C_4=2\cdot\frac{\sin B\sin C}{\sin^2\frac{\pi}{8}}\geq2\cdot
\frac{\sin^2 B}{\sin^2\frac{\pi}{8}}.
$$
Thus, $\angle B\leq\pi/8$.
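The two hyperbolic trigonometric identities used in Steps (b), (c) and (d) can be checked numerically. The following Python sketch (not part of the paper; the side lengths are arbitrary) recovers the angles of a hyperbolic triangle from its side lengths and verifies both laws.

```python
import math

def hyperbolic_angles(a, b, c):
    """Angles of a geodesic triangle in the hyperbolic plane (curvature -1)
    from its side lengths, via the law of cosines for sides:
        cosh a = cosh b * cosh c - sinh b * sinh c * cos A."""
    def angle(opp, s1, s2):
        cos_val = (math.cosh(s1) * math.cosh(s2) - math.cosh(opp)) / (
            math.sinh(s1) * math.sinh(s2))
        return math.acos(cos_val)
    return angle(a, b, c), angle(b, a, c), angle(c, a, b)

# Arbitrary side lengths satisfying the triangle inequality.
a, b, c = 1.3, 0.9, 1.7
A, B, C = hyperbolic_angles(a, b, c)

# Hyperbolic law of cosines for angles, as used in Step (d):
#     cos A = -cos B * cos C + sin B * sin C * cosh a
assert abs(math.cos(A)
           - (-math.cos(B) * math.cos(C)
              + math.sin(B) * math.sin(C) * math.cosh(a))) < 1e-12

# Hyperbolic law of sines, as used in Steps (b) and (c):
#     sinh a / sin A = sinh b / sin B = sinh c / sin C
r = math.sinh(a) / math.sin(A)
assert abs(r - math.sinh(b) / math.sin(B)) < 1e-12
assert abs(r - math.sinh(c) / math.sin(C)) < 1e-12
```

As expected in negative curvature, the computed angle sum satisfies $A+B+C<\pi$.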
\begin{figure}[h!]
\includegraphics[width=0.65\linewidth]{triangle2.eps}
\caption{Triangles in Step (e).}
\label{step5}
\end{figure}
(e)
We claim that the constant $C_5$ determined by
$$
C_5=\max\{C_4,2C_2,\sqrt 2C\}
$$
is satisfactory. Assume $ij\in E$ and $d(x_i,A_{ij}x_j)\geq C_5$, and we have two cases shown in Figure \ref{step5}.
If $d(x_i,q)\geq d(A_{ij}x_j,q)$, then by part (d)
$$
\angle (A_{ij}x_j)x_iq\leq\frac{\pi}{8}\leq\frac{\pi}{4},
$$
and
$$
\frac{ v(x_i,q)}{| v(x_i,q)|}\cdot v(x_i,A_{ij}x_j)=\cos (\angle (A_{ij}x_j)x_iq) \cdot d(x_i,A_{ij}x_j)\geq \frac{1}{\sqrt 2}C_5\geq C.
$$
If $d(x_i,q)\leq d(A_{ij}x_j,q)$, then by the triangle inequality $2d(A_{ij}x_j,q)\geq d(x_i,q)+d(A_{ij}x_j,q)\geq d(x_i,A_{ij}x_j)\geq C_5\geq 2C_2$, and hence $d(A_{ij}x_j,q)\geq C_2$. By part (b) and part (d),
$$
\angle(A_{ji}q)x_j q \leq\frac{\pi}{8},\quad \text{ and }\quad
\angle (A_{ji}q)x_j(A_{ji}x_i)=\angle q(A_{ij}x_j)x_i\leq\frac{\pi}{8},
$$
and
$\angle qx_j(A_{ji}x_i)\leq\pi/4$. Therefore,
$$
\frac{ v(x_j,q)}{| v(x_j,q)|}\cdot v(x_j,A_{ji}x_i)=\cos (\angle (A_{ji}x_i)x_jq) \cdot d(x_j,A_{ji}x_i)\geq \frac{1}{\sqrt 2}C_5\geq C.
$$
(f)
We claim that the constant $C_6$ determined by
$$
C_6= n\lambda_w\cdot C
$$
is satisfactory.
If not, for any $ik\in E$, we have
$$
\frac{ v(x_i,q)}{| v(x_i,q)|}\cdot v(x_i,A_{ik}x_k)> -C.
$$
Then
$$
0\geq\frac{ v (x_i,q)}{| v (x_i,q)|}\cdot\sum_{ik\in E}w_{ik} v(x_i,A_{ik}x_k)
> w_{ij}C_6+\sum_{ik\in E}w_{ik}(-C)
$$
$$
\geq w_{ij}C_6+\sum_{ik\in E}\lambda_w w_{ij}(-C)\geq
w_{ij}(C_6-n\lambda_w C)\geq0,
$$
a contradiction.
(g) We claim that $C_7=C + \max_{ij\in E}d(A_{ij}q,q)$ is satisfactory. Notice that
$$d(A_{ik}x_k, q) = d(x_k, A_{ki}q) \leq d(x_k, q) + d(q, A_{ki}q) \leq d(x_k, q) +\max_{ij\in E}d(A_{ij}q,q).$$
By part (1) of Lemma \ref{CAT},
$$
d(A_{ik}x_k,q)
\geq| v(x_i,A_{ik}x_k)- v(x_i,q)|
\geq -\left( v(x_i,A_{ik}x_k)- v(x_i,q)\right)\cdot \frac{ v(x_i,q)}{| v(x_i,q)|}
$$
$$
\geq C_7+| v(x_i,q)|=C_7+d(x_i,q).
$$
Then
$$d(x_k, q) - d(x_i, q) \geq C_7 - \max_{ij\in E}d(A_{ij}q,q) = C.$$
(h)
We claim that the constant $C_8$ determined by
$$
C_8=\max\{C_3,\sqrt 2 C\}+\max_{ij\in E}d(A_{ij}q,q)
$$
is satisfactory. Notice that
$$
d(x_j,q)\geq d(x_i,q)+C_8\geq d(x_i,A_{ij}q)-d(A_{ij}q,q)+C_8\geq d(A_{ji}x_i,q)+\max\{C_3,\sqrt2C\}.
$$
Then by part (c), $\angle (A_{ji}x_i)x_jq\leq\pi/4$, and by the triangle inequality,
$$d(x_j, A_{ji}x_i) \geq d(x_j, q) - d(A_{ji}x_i, q) \geq \sqrt{2}C. $$
Therefore,
$$
\frac{ v(x_j,q)}{| v(x_j,q)|}\cdot v(x_j,A_{ji}x_i)=\cos (\angle (A_{ji}x_i)x_jq) \cdot d(x_j,A_{ji}x_i)\geq \frac{1}{\sqrt 2}\cdot \sqrt2C=C.
$$
\end{proof}
\subsection{Proof of the Continuity}
\begin{proof}[Proof of the continuity part of Theorem \ref{existence}]
If not, there exist $\epsilon>0$, a weight $w$, and a sequence of weights $w^{(k)}$ such that
\begin{enumerate}
\item $w^{(k)}$ converge to $w$, and
\item $d_{\tilde X}(\Phi(w^{(k)}),\Phi(w))\geq\epsilon$ for any $k\geq1$.
\end{enumerate}
By the stronger existence result Theorem \ref{proper}, the sequence $\Phi(w^{(k)})$ lies in some fixed compact subset $K'$ of $\tilde X$. By picking a subsequence, we may assume that $\Phi(w^{(k)})$ converges to some $\varphi\in \tilde X$. Since $\Phi(w^{(k)})$ is $w^{(k)}$-balanced,
then by the continuity of the residue vectors $r_i$,
$\varphi$ is $w$-balanced, and thus $\Phi(w)=\varphi$, contradicting the assumption that $\Phi(w^{(k)})$ does not converge to $\Phi(w)$.
\end{proof}
\section{Proof of Theorem \ref{embedding}}
\subsection{Set up and preparations}
Assume $\varphi\in\tilde X$ is $w$-balanced for some weight $w$, and we will prove that $\varphi$ is an embedding. Recall that $q_i=\varphi(i)$ for each $i\in V$, and denote $l_{ij}$ as the length of $\varphi_{ij}([0,1])$ for any $ij\in E$. It is not difficult to show that $\varphi$ has a continuous extension $\tilde\varphi$ defined on $|T|$, such that for any triangle $\sigma\in F$ a continuous lifting map $\Phi_\sigma$ of $\tilde\varphi|_\sigma$ from $\sigma$ to $\tilde M$ will
\begin{enumerate}
\item map $\sigma$ to a geodesic triangle in $\tilde M$ homeomorphically if $\varphi(\partial\sigma)$ does not degenerate to a geodesic, and
\item map $\sigma$ to $\Phi_\sigma(\partial\sigma)$ if $\varphi(\partial\sigma)$ degenerates to a geodesic.
\end{enumerate}
The main tool to prove Theorem \ref{embedding} is the Gauss-Bonnet formula. We will need to define the inner angles for each triangle in $\varphi(T^{(1)})$, even for the degenerate triangles. A convenient way is to assign a ``direction" to each edge, even for the degenerate edges with zero length.
\begin{definition}
A \textit{direction field} is a map $v:\vec E\rightarrow TM$ satisfying that
\begin{enumerate}
\item $ v_{ij}\in T_{q_i}M$ for any $(i,j)\in \vec E$, and
\item $| v_{ij}|=1$ for any $(i,j)\in \vec E$.
\end{enumerate}
Given a direction field $v$, define the inner angle of the triangle $\sigma = \triangle ijk$ at the vertex $i$ as
$$
\theta^i_\sigma=\theta^i_\sigma(v)=\angle v_{ij}0 v_{ik}=\arccos (v_{ij} \cdot v_{ik}),
$$
where $0$ is the origin and $\angle v_{ij}0 v_{ik}$ is the angle between $ v_{ij}$ and $ v_{ik}$ in $T_{q_i}M$.
\end{definition}
A direction field $v$ assigns a unit tangent vector in $T_{q_i}M$ to each directed edge starting from $i$, and determines the inner angles in $T$.
\begin{definition}
A direction field $v$ is \emph{admissible} if
\begin{enumerate}
\item
$$ v_{ij} = \frac{\varphi_{ij}'(0)}{l_{ij}}$$
if $l_{ij}>0$, and
\item $ v_{ij}=- v_{ji}$ in $T_{q_i}M =T_{q_j}M $ if $l_{ij}=0$, and
\item for a fixed vertex $i\in V$, if $l_{ij}=0$ for any neighbor $j$ of $i$, then there exist neighbors $j$ and $k$ of $i$ such that $ v_{ij}=- v_{ik}$, and
\item if $\sigma=\triangle ijk\in F$ and $l_{ij}=l_{jk}=l_{ik}=0$, then $\theta^i_\sigma(v)+\theta^j_\sigma(v)+\theta^k_\sigma(v)=\pi$.
\end{enumerate}
\end{definition}
An admissible direction field encodes the directions of the non-degenerate edges in $\varphi(T^{(1)})$, and the induced angle sum of a degenerate triangle is always $\pi$. Then for any admissible $v$ and triangle $\sigma\in F$, by the Gauss-Bonnet formula
\begin{equation}
\label{GB}
\pi=\sum_{i\in \sigma}\theta^i_\sigma(v)-\int_{\Phi_\sigma(\sigma)}KdA
\geq\sum_{i\in \sigma}\theta^i_\sigma(v)-\int_{\tilde\varphi(\sigma)}KdA.
\end{equation}
Here $dA$ is the area form on $(\tilde M,\tilde g)$ or $(M,g)$.
The concept of the direction field is similar to the \textit{discrete one form} defined in \cite{gortler2006discrete}.
\subsection{Proof of Theorem \ref{embedding}}
The proof of Theorem \ref{embedding} uses the four lemmas below. We will postpone their proofs to the subsequent subsections.
\begin{lemma}
\label{star}
If $v$ is admissible and $\theta=\theta(v)$, then for any $i\in V$,
$$
\sum_{\sigma:i\in \sigma}\theta^i_\sigma=2\pi,
$$
and for any $\sigma,\sigma'\in F$,
$\tilde\varphi(\sigma)\cap \tilde\varphi(\sigma')$ has area $0$.
\end{lemma}
Based on Lemma \ref{star}, if admissible direction fields exist, the image of the star of each vertex determined by $\tilde\varphi$ does not contain any flipped triangles overlapping with each other. If $\tilde\varphi(\sigma)$ does not degenerate to a geodesic arc for any triangle $\sigma\in F$, then $\tilde\varphi$ is locally homeomorphic and thus globally homeomorphic as a degree-one map. Therefore, we only need to exclude the existence of degenerate triangles.
Define an equivalence relation on $V$ as follows. Two vertices $i,j$ are equivalent if there exists a sequence of vertices $i=i_0,i_1,...,i_k=j$ such that $l_{i_0i_1}=...=l_{i_{k-1}i_k}=0$. This equivalence relation introduces a partition $V=V_1\cup...\cup V_m$. Let $y_k\in M$ denote the unique point in $\varphi(V_k)$. For any $x\in M$ and $u,v\in T_xM$, we write $u\| v$ if $u$ and $v$ are parallel, i.e., if there exists $(\alpha,\beta)\neq(0,0)$ such that $\alpha u+\beta v=0$.
The following Lemma \ref{prescribe} shows that there are plenty of choices of admissible direction fields.
\begin{lemma}
\label{prescribe}
For any $ v_1\in T_{y_1}M,..., v_m\in T_{y_m}M$, there exists an admissible $v$ such that $ v_{ij}\| v_k$ if $i\in V_k$ and $l_{ij}=0$.
\end{lemma}
The following Lemma \ref{flatstar} shows that for any $V_k$ with at least two vertices, the image of its ``neighborhood" lies in a geodesic.
\begin{lemma}
\label{flatstar}
If $|V_k|\geq2$, then there exists $ v_k\in T_{y_k}M$ such that $ v_k\| \varphi_{ij}'(0)$ if $i\in V_k$ and $l_{ij}>0$.
\end{lemma}
Now let $v_k$ be as in Lemma \ref{flatstar} if $|V_k|\geq2$, and arbitrary if $|V_k|=1$. Then construct an admissible direction field $v$ as in Lemma \ref{prescribe}, with induced inner angles $\theta^i_\sigma=\theta^i_\sigma(v)$. If the image of a triangle $\sigma$ under $\varphi$ degenerates to a geodesic, then its inner angles $\theta^i_\sigma$ are $\pi$ or $0$.
Suppose, for contradiction, that the set $F'$ of degenerate triangles under $\varphi$ is non-empty.
\begin{lemma}
\label{degenerate}
If $\sigma\in F'$, $i\in\sigma$, and $\theta^i_\sigma=\pi$, then $\sigma'\in F'$ for any $\sigma'$ in the star neighborhood of the vertex $i$.
\end{lemma}
Let $\Omega$ be a connected component of the interior of $\cup \{\sigma:\sigma\in F'\}\subset|T|$, and $\tilde\Omega$ be the completion of $\Omega$ under the natural path metric on $\Omega$.
Notice that $\tilde \Omega$ could be different from the closure of $\Omega$ in $M$.
Since $\tilde\varphi$ is surjective, $F'\neq F$; hence $\Omega\neq|T|$ and $\tilde\Omega$ has non-empty boundary.
Then $\tilde\Omega$ is a connected surface with a natural triangulation $T'=(V',E',F')$, and
$$
\chi(\tilde\Omega)=2-2\times(\text{genus of } \tilde\Omega)-\#\{\text{boundary components of $\tilde\Omega$}\}\leq1.
$$
Assume $V'_I$ is the set of interior vertices, and $V'_B$ is the set of boundary vertices, and $E'_I$ is the set of interior edges, and $E'_B$ is the set of boundary edges of $\tilde{\Omega}$. Then $|V'_B|=|E'_B|$, and by Lemma \ref{degenerate}, if $i\in V'_B$ and $i\in \sigma$, then $\theta^i_\sigma=0$. Therefore,
$$
\pi|F'|=\sum_{\sigma\in F',i\in \sigma}\theta^i_{\sigma}=\sum_{i\in V'_I}\sum_{\sigma\in F':i\in \sigma}\theta^i_\sigma=2\pi|V_I'|.
$$
Thus,
\begin{align*}
1\geq \chi(\tilde{\Omega})= & |V'|-|E'|+|F'|=|V'_I|+|V'_B|-|E'_I|-|E'_B|+|F'| \\
= & |V'_B|-|E'_I|-|E'_B|+\frac{3}{2}|F'|=-|E'_I|+\frac{3}{2}|F'| \\
= & -|E'_I|+\frac{1}{2}(|E'_B|+2|E'_I|) =\frac{1}{2}|E_B'|.
\end{align*}
Therefore, $|V'_B|=|E'_B|\leq 2$. Since $\tilde\Omega$ has non-empty boundary, $|E_B'|=1$ or $2$. In either case, this contradicts the fact that $T$ is a simplicial complex.
\subsection{Proof of Lemma \ref{star}}
We claim that for any $i\in V$,
$$
\sum_{\sigma:i\in \sigma}\theta^i_\sigma\geq2\pi.
$$
If $l_{ij}=0$ for any neighbor $j$ of $i$, this is a consequence of condition (3) in the definition of an admissible direction field. Otherwise, by the $w$-balanced condition, the set
$\{\varphi_{ij}'(0)/l_{ij}:ij\in E,\ l_{ij}>0 \}$ cannot be contained in any open half unit circle, so the angle sum around $i$ is at least $2\pi$.
By the fact that $\tilde\varphi$ is surjective and equation (\ref{GB}), we have
$$
\sum_{i\in V}(2\pi-\sum_{\sigma:i\in \sigma}\theta^i_\sigma)+\sum_{\sigma\in F}\int_{\tilde\varphi (\sigma)}KdA
\leq\sum_{\sigma\in F}\int_{\tilde\varphi (\sigma)}KdA\leq \int_M KdA=2\pi\chi(M),
$$
and
$$
\sum_{i\in V}(2\pi-\sum_{\sigma:i\in \sigma}\theta^i_\sigma)+\sum_{\sigma\in F}\int_{\tilde\varphi (\sigma)}KdA
=2\pi|V|-\sum_{\sigma\in F}\Big(\sum_{i\in \sigma}\theta^i_\sigma-\int_{\tilde{\varphi}(\sigma)}KdA\Big)\geq2\pi|V|-\pi|F|=2\pi\chi(M),
$$
where the inequality follows from equation (\ref{GB}) and the last equality from $2|E|=3|F|$.
Hence, the inequalities above are equalities. This fact implies that
$$\sum_{i\in V}(2\pi-\sum_{\sigma:i\in \sigma}\theta^i_\sigma) = 0.$$
Since each term in this summation is non-positive, $\sum_{\sigma:i\in \sigma}\theta^i_\sigma = 2\pi$ for every $i\in V$. The statement on the area follows similarly.
\subsection{Proof of Lemma \ref{prescribe}}
We claim that for any $k$, there exists a map $h:V_k\rightarrow\mathbb R$ such that
\begin{enumerate}
\item $h(i)\neq h(j)$ if $i\neq j$, and
\item for a fixed $i\in V_k$, if $l_{ij}=0$ for any neighbor $j$ of $i$, then there exist neighbors $j$ and $j'$ of $i$ in $V_k$ such that $h(j)<h(i)<h(j')$.
\end{enumerate}
Given such $h$, set $v$ as
$$ v_{ij} = \begin{cases}
\text{sgn}[h(j)-h(i)]\cdot v_k, & \text{ if } i\in V_k \text{ and } l_{ij}=0,\\
\varphi_{ij}'(0), & \text{ if } l_{ij}>0,
\end{cases}
$$
where $\text{sgn}$ is the sign function. It is easy to verify that such $v$ is satisfactory.
To construct such a function $h$, we will prove the following lemma, which is more general than our claim.
\begin{lemma}
Assume $G=(V',E')$ is a subgraph of the $1$-skeleton $T^{(1)}$, and $E'\neq E$. Denote
$$
int(G)=\{i\in V':ij\in E \Rightarrow ij\in E'\},
$$
and $\partial G=V'-int(G)$. Then there exists $h:V'\rightarrow \mathbb R$ such that
\begin{enumerate}
\item $h(i)\neq h(j)$ if $i\neq j$, and
\item for any $i\in int(G)$ there exist neighbors $j$ and $j'$ of $i$ in $V'$ such that $h(j)<h(i)<h(j')$.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove by the induction on the size of $V'$. The case $|V'|=1$ is trivial. For the case $|V'|\geq2$, first notice that $|\partial G| \geq 2$ for any proper subgraph $G$ of $T^{(1)}$. Assign distinct values $\bar h(i)$ to each $i\in \partial G$, then solve the discrete harmonic equation
$$
\sum_{j:ij\in E} (\bar h(j)-\bar h(i))=0, \quad\forall i\in int(G),
$$
with the given Dirichlet boundary condition on $\partial G$.
Let $s_1<...<s_k$ be all the distinct values that appear in $\{\bar h(i):i\in V'\}$. Then consider the subgraphs $G_i = (V'_i, E'_i)$ defined as
$$
V_i'=\{j\in V':\bar h(j)=s_i\},
$$
and
$$
E_i'=\{jj'\in E': j,j'\in V_i'\}.
$$
Notice that $|\partial G|\geq2$, so $k\geq2$ and $|V_i'|<|V'|$ for any $i=1,...,k$.
By the induction hypothesis, there exists a function $h_i:V_i'\rightarrow \mathbb R$ such that
\begin{enumerate}
\item $h_i(j)\neq h_i(j')$ if $j\neq j'$, and
\item for any $j\in int(G_i)$ there exist neighbors $j',j''$ of $j$ in $V'_i$ such that $h_i(j')<h_i(j)<h_i(j'')$.
\end{enumerate}
Define $h_i(j)=0$ if $j\notin V_i'$, then for sufficiently small positive $\epsilon_1,...,\epsilon_k$,
$${h} = \bar h+\sum_{i=1}^k \epsilon_ih_i$$
is the desired function.
\end{proof}
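The induction step above solves a discrete Dirichlet problem. As an illustration (the graph below is a hypothetical example, not from the construction), the discrete harmonic equation can be solved by Jacobi iteration, and the resulting interior values lie strictly between neighboring values, which is exactly the property needed for the function $h$.

```python
# Discrete Dirichlet problem on a small hypothetical graph, solved by
# Jacobi iteration: repeatedly replace each interior value by the
# average of its neighbors' values.
edges = {0: [1, 3, 4], 1: [0, 2, 4], 2: [1, 3, 4], 3: [0, 2, 4],
         4: [0, 1, 2, 3]}                     # a 4-cycle plus a hub vertex 4
boundary = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}   # distinct Dirichlet data
interior = [v for v in edges if v not in boundary]

h = {v: boundary.get(v, 0.0) for v in edges}
for _ in range(200):
    h.update({v: sum(h[u] for u in edges[v]) / len(edges[v])
              for v in interior})

# Discrete maximum principle: each interior value lies strictly between
# the values at some pair of its neighbors.
for v in interior:
    assert min(h[u] for u in edges[v]) < h[v] < max(h[u] for u in edges[v])
```

The strict inequalities are what allow the harmonic solution to be perturbed into a function with pairwise distinct values, as in the proof.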
\subsection{Proof of Lemma \ref{flatstar}}
It suffices to prove that if $i,i'\in V_k$, $ij,i'j'\in E$, $l_{ij}>0$, and $l_{i'j'}>0$, then $\varphi'_{ij}(0)\|\varphi'_{i'j'}(0)$.
Let
$$
D=\left(\cup_{i,i',i''\in V_k}\triangle i i' i''\right) \cup \left(\cup_{i,i'\in V_k}ii'\right),
$$
which is a closed path-connected set in $|T|$.
For any $i\in V_k$, $i\in \partial D$ if and only if there exists $ij\in E$ with $l_{ij}>0$. Therefore, it suffices to prove that
\begin{enumerate}
\item for any $i\in V_k$ and edges $ij,ij'$ with $l_{ij}>0$ and $l_{ij'}>0$, $\varphi_{ij}'(0)\|\varphi_{ij'}'(0)$, and
\item for any $ij\in E$ satisfying that $ij\subset\partial D$, there exists $m\in V-V_k$ such that $\triangle ijm\in F$, and thus $\varphi_{im}'(0)=\varphi_{jm}'(0)$, and
\item $\partial D$ is connected.
\end{enumerate}
For part (1), suppose it fails: then there exist $i\in V_k$ and edges $ij,ij'\in E$ with $l_{ij}>0$ and $l_{ij'}>0$ such that $ \varphi_{ij}'(0)$ is not parallel to $\varphi_{ij'}'(0)$. Then by the $w$-balanced condition, there exists $ij''\in E$ with $l_{ij''}>0$ such that the three vectors $ \varphi_{ij}'(0), \varphi_{ij'}'(0), \varphi_{ij''}'(0)$ are not contained in any closed half-space in $T_{q_i}M$. Assume $im\in E$ and $l_{im}=0$, and without loss of generality, $ij,im,ij',ij''$ are ordered counter-clockwise in the one-ring neighborhood of $i$ in $T$. By Lemma \ref{prescribe}, there exists an admissible $v$ such that $ v_{im}\| \varphi_{ij''}'(0)$. By possibly changing a sign, we may assume that $ v_{im}=\varphi_{ij''}'(0)/l_{ij''}$.
\begin{figure}[h!]
\includegraphics[width=0.6\linewidth]{balance.eps}
\caption{Overlapping triangles lead to angle surplus.}
\label{balanced}
\end{figure}
Then as Figure \ref{balanced} shows, a contradiction follows
$$
2\pi=\sum_{\sigma: i\in \sigma}\theta^i_\sigma\geq\angle v_{ij}0 v_{im}+\angle v_{im}0 v_{ij'}+\angle v_{ij'}0 v_{ij''}+\angle v_{ij''}0 v_{ij}
$$
$$
=2\angle v_{ij}0 v_{ij''}+2\angle v_{ij''}0 v_{ij'}>2\pi.
$$
Part (2) is straightforward and we will prove part (3).
By our assumption on the extension $\tilde\varphi$, $\tilde\varphi(D)$ contains only one point, so the embedding map $i_D=\psi^{-1}\circ(\psi|_D)$ from $D$ to $|T|$ is homotopic to the constant map $\psi^{-1}\circ(\tilde\varphi|_D)$, meaning that $D$ is contractible in $|T|$. If $\partial D$ has at least two connected components, then it is not difficult to show that $|T|-D$ has a connected component $D'$ homeomorphic to an open disk. Let $\Phi_{ D}:D\to\tilde M$ be a lifting of $\tilde\varphi|_{D}$.
Then $\Phi_{ D}(\partial D')\subset\Phi_{D}(D)$ contains only a single point $x\in\tilde M$. By the $w$-balanced condition, it is not difficult to derive a maximum principle and show that $\tilde\varphi|_{ D'}$ is constant, equal to $x$. Then by the definition of $D$, $D'$ would be a subset of $D$, a contradiction.
\subsection{Proof of Lemma \ref{degenerate}}
Assume $ij$ and $ij'$ are two edges in $\sigma$. If the conclusion is not true, then there exists $ik\in E$ such that $l_{ik}>0$ and $ v_{ik}$ is not parallel to $ v_{ij}$. Notice that $ v_{ij}=- v_{ij'}$, and we have
$$
2\pi=\sum_{\sigma\in F:i\in \sigma}\theta^i_\sigma\geq \angle v_{ij}0 v_{ij'}+\angle v_{ij}0 v_{ik}+\angle v_{ij'}0 v_{ik}=2\pi.
$$
Then the equality holds in the above inequality, and for any $ik'\in E$, $ v_{ik'}$ should be on the half circle that contains $ v_{ij}, v_{ik}, v_{ij'}$. Let $ v_m$ be the middle point of this half circle, then
$$
v_m\cdot\sum_{j:ij\in E}w_{ij}l_{ij} v_{ij}\geq w_{ik}l_{ik} v_m\cdot v_{ik}>0.
$$
This contradicts the fact that $\varphi$ is $w$-balanced.
\section{Introduction}
{\bf Introduction.} Dark matter (DM) constitutes a significant
fraction of the energy density in the universe, $\Omega_{\rm DM}=0.229
\pm 0.015$ \cite{Komatsu:2010fb}. This conclusion is based entirely on
gravitational effects of DM. A fundamental question is whether DM
interacts also non-gravitationally. There are a number of experiments
searching for signs of such DM interactions. Direct detection
experiments, for instance, are looking for a signal of DM particles
from the galactic halo that would scatter in underground detectors. A
characteristic feature of the resulting signal will be an annual
modulation, because the earth rotates around the sun, while at the
same time the sun moves relative to the DM halo~\cite{Drukier:1986tm}.
At present two experiments are reporting annually modulated signals,
DAMA/LIBRA~\cite{Bernabei:2008yi} (DAMA for short) and
CoGeNT~\cite{Aalseth:2011wp}, with significances of 8.9$\sigma$ and
$2.8\sigma$, respectively. Are these signals due to DM? The answer is
readily obtained by (i) assuming a specific local DM velocity
distribution and (ii) postulating the predominant DM--nucleus
interaction. Usually a simple Maxwellian DM halo is adopted. If
interpreted in terms of elastic spin-independent DM scattering both
claims are in tension \cite{Schwetz:2011xm, Fox:2011px} with bounds on
time integrated rates from other direct detection experiments such as
XENON10~\cite{Angle:2011th}, XENON100~\cite{Aprile:2011hi}, or
CDMS~\cite{Ahmed:2010wy}. The situation may change in the case of
non-standard DM halos with, e.g., highly anisotropic velocity
distributions, DM streams, or DM debris flows. Recently CDMS provided a
direct bound on the modulation signal, which disfavors the CoGeNT
modulation without referring to any halo or particle physics model
\cite{Ahmed:2012vq}. Therefore we focus below mainly on DAMA.
In this Letter we present a general method that avoids astrophysical uncertainties
when comparing putative DM modulation signals with the bounds on time
averaged DM scattering rates from different experiments. The method combines
the results from \cite{Fox:2010bz, Fox:2010bu} with the bounds on the
modulation derived by us in \cite{HerreroGarcia:2011aa}. We are then
able to translate the bound on the DM scattering rate in one
experiment into a bound on the annual modulation amplitude in a
different experiment. The resulting bounds present roughly an order of
magnitude improvement over \cite{Fox:2010bz, Fox:2010bu} and
\cite{HerreroGarcia:2011aa}.
The bounds are (almost completely) astrophysics independent. Only very
mild assumptions about DM halo properties are used: (i) that it does
not change on the time-scales of months, (ii) that the density of DM
in the halo is constant on the scales of the earth-sun distance, and
(iii) that the DM velocity distribution is smooth on the scale of the
earth velocity $v_e= 29.8$~km/s. If the modulation signal is due to
DM, then the modulation amplitude has to obey the bounds. In the
derivation an expansion in $v_e$ over the typical DM velocity $\sim
200$ km/s is used. The validity of the expansion can be checked
experimentally, by searching for the presence of higher harmonics in
the time-stamped DM scattering data \cite{HerreroGarcia:2011aa}.
{\bf Bounds on the annual modulation.} We focus on the case of DM $\chi$
elastically scattering off a nucleus $(A,Z)$ and depositing the nuclear
recoil energy $E_{nr}$ in the detector. The differential rate in
events/keV/kg/day is then given by
\begin{equation}\label{eq:R}
R_A(E_{nr}, t) =
\frac{\rho_\chi \sigma_A^0}{2 m_\chi \mu_{\chi A}^2} \, F_A^2(E_{nr}) \,
\eta(v_m, t) \,,
\end{equation}
with $\rho_\chi$ the local DM density, $\sigma_A^0$ the total DM--nucleus
scattering cross section at zero momentum transfer, $m_\chi$ the DM mass,
and $\mu_{\chi A}$ the reduced mass of the DM--nucleus system. $F_A(E_{nr})$
is a nuclear form factor.
For SI interactions with a nucleus $(A,Z)$, $\sigma_A^0$ can be written as
$
\sigma^{\rm SI}_A = \sigma_p [Z+(A-Z)(f_n/f_p)]^2
\mu_{\chi A}^2 / \mu_{\chi p}^2,
$
where $\sigma_p$ is the DM--proton cross-section and $f_{n,p}$ are
coupling strengths to neutron and proton, respectively. Apart from a
common overall factor $\rho_{\chi}$ the astrophysics enters the
predicted rate in Eq.~\eqref{eq:R} through the halo integral
\begin{equation}\label{eq:eta}
\eta(v_m, t) \equiv
\int_{v > v_m} \negthickspace \negthickspace d^3 v
\frac{f_{\rm det}(\vect{v}, t)}{v} ,
\quad
v_{m}=\sqrt{ \frac{m_A E_{nr}}{2 \mu_{\chi A}^2}},
\end{equation}
where $v_m$ is the minimal velocity required to have at least $E_{nr}$
energy deposited in the detector. The function $f_{\rm det}(\vect v, t)$
describes the distribution of DM particle velocities in the detector rest
frame with $f_{\rm det}(\vect v, t) \ge 0$ and $\int d^3 v f_{\rm det}(\vect
v, t) = 1$. It is related to the velocity distribution in the rest frame of
the sun by $ f_{\rm det}(\vect{v},t) = f_{\rm sun}(\vect{v} + \vect{v}_e(t))$, where $\vect{v}_e(t)$ is the velocity vector of the earth. The
rotation of the earth around the sun introduces a time dependence in the
DM-nucleus scattering rate through
$
\eta(v_m,t)=\overline \eta (v_m) + \delta\eta(v_m, t),
$
where
\begin{equation}
\delta \eta(v_m, t) = A_\eta(v_m) \cos 2\pi[t - t_0(E_{nr})] ,
\end{equation}
when expanding to first order in $v_{\rm e}=29.8$ km/s $\ll v_{\rm
sun}\simeq$ 230 km/s. Here, $A_\eta(v_m)$ is defined to be positive.
Let us now assume that $f_{\rm sun}(v)$ is smooth on the scale of
$v_e$, and the only time dependence comes from the rotation of the
earth around the sun and $f_{\rm sun}(v)$ itself is constant in time
and space.
Then the modulation amplitude $A_\eta(v_m)$ can be
bounded in terms of the unmodulated halo integral $\overline\eta$ in
the following way \cite{HerreroGarcia:2011aa}:
\begin{equation} \label{eq:bound_gen}
A_\eta(v_m) \leq v_e
\left[-\frac{d \overline \eta}{d v_m}+
\frac{\overline{\eta}(v_m)}{v_m}
- \int_{v_m} dv \frac{\overline{\eta}(v)}{v^2}
\right] \,.
\end{equation}
The first term in \eqref{eq:bound_gen} is positive since $\overline
\eta(v_m)$ is a monotonically decreasing function of $v_m$.
If we further assume that the DM halo is symmetric, so that there is
only one single direction related to the DM flow (see
\cite{HerreroGarcia:2011aa} for details), then one obtains a more
stringent constraint:
\begin{equation}\label{eq:bound_spec}
\int_{v_{1}}^{v_{2}} \negthickspace \negthickspace dv_m A_\eta(v_m)
\leq \sin\alpha \, v_e\left[\overline \eta(v_{1}) -
v_{1} \negthickspace \int_{v_{1}} \negthickspace \negthickspace
\negthickspace dv \frac{\overline\eta(v)}{v^2} \right] \,.
\end{equation}
Here $\alpha$ is the angle between the DM flow and the direction orthogonal
to the ecliptic. The most conservative bound is obtained for $\sin\alpha =
1$ (which would correspond to a DM stream parallel to the ecliptic). However, in many cases the DM
flow will be aligned with the motion of the sun within the galaxy.
This holds for any isotropic velocity distribution and, up to a
small correction due to the peculiar velocity of the sun, also for tri-axial
halos or a significant contribution from a possible dark-disc. In this case
we have $\sin\alpha \simeq 0.5$.
In the following we will use time averaged rates from various
experiments to derive an upper bound on $\overline \eta(v_m)$. In
order to be able to apply this information we integrate
Eq.~\eqref{eq:bound_gen} over $v_m$ and drop the negative terms in
Eqs.~\eqref{eq:bound_gen} and \eqref{eq:bound_spec}.
This gives the bounds
\begin{align}
\int_{v_{1}}^{v_{2}} \negthickspace \negthickspace dv_m A_\eta(v_m) &
\leq v_e\left[\overline \eta(v_{1}) +
\int_{v_{1}}^{v_{2}} \negthickspace \negthickspace
dv \frac{\overline\eta(v)}{v} \right] \,,
\label{eq:bound_gen2}\\
\int_{v_{1}}^{v_{2}} \negthickspace \negthickspace dv_m A_\eta(v_m) &
\leq \sin\alpha \, v_e \, \overline \eta(v_{1}) \,.
\label{eq:bound_spec2}
\end{align}
In practice the integrals on the l.h.s.\ are replaced by a sum over bins.
Below we will refer to the relations \eqref{eq:bound_gen2} and
\eqref{eq:bound_spec2} as the bounds from ``general halo'' and ``symmetric
halo'' (where we will take $\sin\alpha = 0.5$), respectively.
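As an illustration, both sides of Eqs.~\eqref{eq:bound_gen2} and \eqref{eq:bound_spec2} can be evaluated for a concrete halo model. The Python sketch below does this for a Maxwellian velocity distribution; the halo parameters are illustrative, and the modulation amplitude is obtained by a finite difference in the boost velocity, assuming a DM flow aligned with the sun's motion ($\sin\alpha=0.5$).

```python
import math

# Illustrative Maxwellian halo parameters (km/s); not fitted to any data.
V0, V_SUN, V_E, SIN_ALPHA = 220.0, 230.0, 29.8, 0.5

def eta(v_min, v_obs):
    """Halo integral for a Maxwellian f(v) ~ exp(-v^2/V0^2) boosted by
    v_obs (no escape-velocity cutoff), in closed form via erf."""
    return (math.erf((v_min + v_obs) / V0)
            - math.erf((v_min - v_obs) / V0)) / (2.0 * v_obs)

def eta_bar(v_min):                    # unmodulated halo integral
    return eta(v_min, V_SUN)

def a_eta(v_min):
    """Modulation amplitude to first order in v_e: the boost speed
    varies by +- SIN_ALPHA * V_E over the year for an aligned flow."""
    return 0.5 * abs(eta(v_min, V_SUN + SIN_ALPHA * V_E)
                     - eta(v_min, V_SUN - SIN_ALPHA * V_E))

v1, v2, n = 300.0, 600.0, 3000         # km/s; midpoint Riemann sums
dv = (v2 - v1) / n
mids = [v1 + (i + 0.5) * dv for i in range(n)]

lhs = sum(a_eta(v) for v in mids) * dv            # l.h.s. of both bounds
rhs_sym = SIN_ALPHA * V_E * eta_bar(v1)           # r.h.s., "symmetric halo"
rhs_gen = V_E * (eta_bar(v1)
                 + sum(eta_bar(v) / v for v in mids) * dv)  # "general halo"

assert lhs <= rhs_sym <= rhs_gen       # both bounds hold for this halo
```

For these parameters the symmetric-halo bound holds with a sizable margin, as expected for a smooth Maxwellian distribution.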
{\bf Bounds on the unmodulated halo integral.} Let us first consider
SI scattering with $f_n = f_p$. Generalization to isospin violating scattering
with $f_n \neq f_p$ and to SD scattering is straightforward. The
predicted number of events in an interval of observed energies
$[E_1,E_2]$ is given by
\begin{equation}\label{eq:events-pred}
N_{[E_1,E_2]}^{\rm pred}= M T A^2
\int_0^{\infty}
\negthickspace\negthickspace\negthickspace dE_{nr} F_A^2(E_{nr})
G_{[E_1,E_2]}(E_{nr}) \tilde\eta(v_m).
\end{equation}
Here $G_{[E_1,E_2]}(E_{nr})$ is the detector response function, which
describes the contribution of
events with true nuclear-recoil energy $E_{nr}$ to
the observed energy interval $[E_1,E_2]$. It may be
non-zero outside the $E_{nr}\in [E_1,E_2]$ interval due to the finite
energy resolution and includes also (possibly energy dependent)
efficiencies. $M$ and $T$ are the detector mass and exposure time,
respectively, and we defined
\begin{equation}\label{eq:etatilde}
\tilde\eta \equiv \frac{\sigma_p \rho_\chi}{2m_\chi \mu^2_{\chi p}} \overline\eta \,,
\end{equation}
where $\tilde\eta$ has units of events/kg/day/keV.
Now we can use the fact that $\tilde \eta$ is a falling function
\cite{Fox:2010bz} (see also \cite{Frandsen:2011gi,
Gondolo:2012rs}). Among all possible forms for $\tilde \eta$ such
that they pass through $\tilde \eta(v_m)$ at $v_m$, the minimal number
of events is obtained for $\tilde \eta$ constant and equal to $\tilde
\eta(v_m)$ until $v_m$ and zero afterwards. Therefore, for a given
$v_m$ we have a lower bound $N_{[E_1,E_2]}^{\rm pred}(v_m) \ge
\mu(v_m)$ with
\begin{equation}\label{eq:mu}
\mu(v_m) = M T A^2 \tilde\eta (v_m) \int_0^{E(v_m)}
\negthickspace dE_{nr} F_A^2(E_{nr}) G_{[E_1,E_2]}(E_{nr}),
\end{equation}
where $E(v_m)$ is given in \eqref{eq:eta}. Suppose an experiment
observes $N_{[E_1,E_2]}^{\rm obs}$ events in the interval $[E_1,E_2]$. Then
we can obtain an upper bound on $\tilde\eta$ for a fixed $v_m$ at a
confidence level CL by requiring that the probability of obtaining
$N_{[E_1,E_2]}^{\rm obs}$ events or less for a Poisson mean of
$\mu(v_m)$ is equal to $1-$CL. Note that this is actually a lower
bound on the CL, since Eq.~\eqref{eq:mu} provides only a lower bound
on the true Poisson mean. For the same reason we cannot use the
commonly applied maximum-gap method to derive a bound on $\tilde\eta$.
If several different nuclei are present, there will be a corresponding
sum in Eqs.~\eqref{eq:events-pred} and \eqref{eq:mu}.
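The numerical construction of the bound can be sketched as follows. The fragment below is an illustrative sketch (not the analysis code used here): it finds, by bisection, the Poisson mean $\mu$ for which the probability of observing $N^{\rm obs}_{[E_1,E_2]}$ events or fewer equals $1-$CL; dividing by the detector factor $M T A^2 \int_0^{E(v_m)} dE_{nr}\, F_A^2 G_{[E_1,E_2]}$ of Eq.~\eqref{eq:mu} then converts this into the upper bound on $\tilde\eta(v_m)$.

```python
import math

def poisson_cdf(n_obs, mu):
    """P(N <= n_obs) for a Poisson distribution with mean mu."""
    term = total = math.exp(-mu)
    for k in range(1, n_obs + 1):
        term *= mu / k
        total += term
    return total

def poisson_mean_limit(n_obs, cl=0.9973, mu_max=1e4):
    """Poisson mean mu solving P(N <= n_obs | mu) = 1 - CL, by bisection.

    Since Eq. (mu) provides only a lower bound on the true mean, the
    resulting limit on eta-tilde is conservative (a lower bound on the CL).
    """
    lo, hi = 0.0, mu_max
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, mid) > 1.0 - cl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $N^{\rm obs}=0$ this reduces to the familiar $\mu=-\ln(1-{\rm CL})$, i.e.\ $\mu\simeq 5.9$ at the $3\sigma$ level.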
The limit on $\tilde\eta$ can then be used in the r.h.s.\ of
Eq.~\eqref{eq:bound_gen2} or \eqref{eq:bound_spec2} to constrain the
modulation amplitude. For concreteness we first focus on the annual
modulation in DAMA. If $m_\chi$ is around 10~GeV, then DM particles do not
have enough energy to produce iodine recoils above the DAMA
threshold. We can thus assume that the DAMA signal is entirely due to
the scattering on sodium. We define $\tilde A_\eta \equiv \sigma_p
\rho_\chi /(2m_\chi \mu^2_p) A_\eta$, which is related to the observed
modulation amplitude $A_i^{\rm obs}$ by
\begin{equation}\label{eq:Atilde}
\tilde A_\eta^{\rm obs}(v_m^i) = \frac{A_i^{\rm obs} q_{\rm Na}}
{A^2_{\rm Na} \langle F^2_{\rm Na} \rangle_i f_{\rm Na}} \,.
\end{equation}
Here $q_{\rm Na} = dE_{ee}/dE_{nr}$ is the sodium
quenching factor translating keVee into keVnr, for which we take $q_{\rm Na} =
0.3$. The index $i$ labels energy bins, with $v_m^i$ given by the corresponding energy bin
center using Eq.~\eqref{eq:eta}. Further, $\langle
F^2_{\rm Na} \rangle_i$ is the sodium form factor averaged over the bin
width and $f_{\rm Na} = m_{\rm Na} / (m_{\rm Na} + m_{\rm I})$ is the
sodium mass fraction of the NaI crystal. For the modulation amplitude in
CoGeNT we proceed analogously.
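For illustration, Eq.~\eqref{eq:Atilde} amounts to the following one-line conversion; the atomic masses entering $f_{\rm Na}$ are standard values inserted here for the sketch, not numbers quoted in the text.

```python
def sodium_mass_fraction(m_na=22.99, m_i=126.90):
    """f_Na = m_Na / (m_Na + m_I), the sodium mass fraction of NaI."""
    return m_na / (m_na + m_i)

def a_eta_tilde_obs(a_obs, f2_na_avg, q_na=0.3, a_na=23):
    """Eq. (Atilde): observed modulation amplitude A_i^obs (per keVee)
    converted to A-tilde_eta^obs using the quenching factor q_Na, the
    bin-averaged form factor <F^2_Na>_i and the mass fraction f_Na."""
    return a_obs * q_na / (a_na ** 2 * f2_na_avg * sodium_mass_fraction())
```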
Note that the conversion factor from $\bar \eta$ to $\tilde\eta$ is
the same as for $A_\eta$ to $\tilde A_\eta$, and does not depend on
the nucleus. Therefore, the bounds \eqref{eq:bound_gen2} and
\eqref{eq:bound_spec2} apply to $\tilde \eta$, $\tilde A_\eta$ without
change, even if the l.h.s. and r.h.s. refer to different experiments.
Let us briefly describe the data we use to derive the upper bounds on
$\tilde\eta$. We consider results from XENON10~\cite{Angle:2011th}
(XE10) and XENON100~\cite{Aprile:2011hi} (XE100). In both cases we
take into account the energy resolution due to Poisson fluctuations of
single electrons. XE100 is sensitive to the interesting region of $v_m$
only because of upward fluctuations from below the threshold. We adopt
the best-fit light-yield efficiency $L_{\rm eff}$ from
\cite{Aprile:2011hi}. The XE10 analysis is based on the so-called S2
ionization signal, which allows one to reach a rather low threshold. We
follow \cite{Angle:2011th} and impose a sharp cut-off of the
efficiency below the threshold. From CDMS we use results from a
dedicated low-threshold (LT) analysis~\cite{Ahmed:2010wy} of Ge data,
as well as data on Si \cite{Akerib:2005kh}. In the case of SD
scattering on protons particularly strong bounds are obtained from
experiments with a fluorine target. We consider the results from
SIMPLE \cite{Felizardo:2011uw}, which uses C$_2$ClF$_5$. We use the
observed number of events and expected background events to calculate
the combined Poisson probability for Stages 1 and 2. For the prediction
we include energy dependent threshold efficiencies from
\cite{Felizardo:2011uw}.
For all experiments we use the lower bound on the expected events,
Eq.~\eqref{eq:mu}, to calculate the probability of obtaining
at most as many events as observed. For XE100, CDMS Si, and SIMPLE we just use
the total number of events in the entire reported energy range. For
XE10 and CDMS LT the limit can be improved if data are binned and the
corresponding probabilities for each bin are multiplied. This assumes
that the bins are statistically independent, which requires making
bins larger than the energy resolution. For XE10 we only use two
bins. For CDMS LT we combine the 36 bins from Fig.~1 of
\cite{Ahmed:2010wy} into 9 bins of 2~keV where the energy resolution
is 0.2~keV.
\begin{figure}
\includegraphics[width=0.40\textwidth]{final_plots/rates.pdf}
\caption{\label{fig:rates} Upper bounds on $\tilde \eta$ at $3\sigma$ from
XENON100, XENON10, CDMS LT, CDMS Si, and SIMPLE.
The modulation amplitude $\tilde A_\eta$ is shown for DAMA (for $q_{\rm Na}=0.3$) and
CoGeNT for both free phase fit (general) and fixing the phase to June 2nd
(symmetric). We assume a DM mass of 10~GeV and SI interactions.}
\end{figure}
\begin{figure}
\includegraphics[width=0.4\textwidth]{final_plots/SI_general.pdf}
\caption{\label{fig:SI} Integrated modulation signals, $\int_{v_1}^{v_2} d v A_{\tilde \eta}$, from DAMA and
CoGeNT compared to the $3\sigma$ upper bounds for the general
halo, Eq.~\eqref{eq:bound_gen2}. We assume SI interactions and a
DM mass of 10~GeV. The integral runs from $v_1=v_{\rm min}$ till
$v_2=743$ km/s (end of the 12th bin in DAMA).}
\end{figure}
{\bf Results.} In Fig.~\ref{fig:rates} we show the 3$\sigma$ limits
(CL~$= 99.73\%$) on $\tilde \eta$ compared to the modulation
amplitudes $\tilde A_\eta$ from DAMA and CoGeNT for a DM mass of
10~GeV. Similar results have been presented in \cite{Frandsen:2011gi,
Gondolo:2012rs}. The CoGeNT amplitude depends on whether the phase
is floated in the fit or fixed at June 2nd \cite{Fox:2011px}, which
applies to the ``general'' and ``symmetric'' halos, respectively.
Already at this level XE100 is in tension with the modulation from
DAMA (and to some extent also CoGeNT).
We now apply our method. As shown in Fig.~\ref{fig:SI} the null search results become significantly more constraining after applying the bounds on the integrated annual modulation $\int_{v_1}^{v_2}dv A_{\tilde \eta}$
from Eq.~\eqref{eq:bound_gen2}. DAMA
and CoGeNT are strongly excluded by the bounds from XE100, XE10, CDMS
LT even for the general halo. If one were to assume in addition that the halo is
symmetric, the bounds would get even stronger. Then also CDMS Si excludes DAMA,
and there is some tension with SIMPLE (not shown).
\begin{figure}
\includegraphics[width=0.4\textwidth]{final_plots/SD.pdf}
\includegraphics[width=0.39\textwidth]{final_plots/IV.pdf}
\caption{\label{fig:SD-IV} Integrated modulation signal
$\int_{v_{\rm min}}^{v_2}dv A_{\tilde \eta}$ from DAMA
compared to the $3\sigma$ upper bounds for the general halo,
Eq.~\eqref{eq:bound_gen2} (solid), and symmetric halo,
Eq.~\eqref{eq:bound_spec2} with $\sin\alpha = 0.5$ (dotted). We
assume a DM mass of 10~GeV, and SD interactions on protons (upper
panel) and SI interactions with $f_n/f_p = -0.7$ (lower panel).
The upper limit of the integration is $v_2=743$~km/s.}
\end{figure}
In Fig.~\ref{fig:SD-IV} we consider two variations of
DM--nucleus interaction. The upper panel is for the case when
the DM particle couples to the spin of the proton. The null search results of
Xe and Ge experiments are then irrelevant. However, the bound from SIMPLE
is in strong disagreement with the modulation signal in DAMA, due to
the presence of fluorine in their target. (A comparable limit from fluorine
has been published recently by PICASSO~\cite{Archambault:2012pm}.) In
the lower panel of Fig.~\ref{fig:SD-IV} we show the case of SI isospin violating interactions
with $f_n/f_p=-0.7$. This choice evades bounds from Xe, but now the
DAMA modulation is excluded by the bounds from CDMS~Si for the general
halo and CDMS Si, LT, and SIMPLE for the symmetric halo.
Let us now quantify the disagreement between the observed DAMA
modulation and the rate from another null-result experiment using our
bounds. We first fix $v_m$. To each value of $\tilde\eta(v_m)$
Eq.~\eqref{eq:mu} provides a Poisson mean $\mu(v_m)$. We can then
calculate the probability $p_\eta$ of obtaining at most as many events as
measured by the null-result experiment. Then we construct the bound on
the modulation using the same value $\tilde \eta(v_m)$ on the
r.h.s.\ of Eq.~\eqref{eq:bound_gen2} or \eqref{eq:bound_spec2} (the
integrand $\tilde\eta(v)$ in Eq.~\eqref{eq:bound_gen2} is calculated
using the same $p_\eta$ but with $v>v_m$ in Eq.~\eqref{eq:mu}). We
calculate the probability $p_A$ that the bound is not violated by
assuming on the l.h.s.\ of Eq.~\eqref{eq:bound_gen2} or
\eqref{eq:bound_spec2} a Gaussian distribution for the DAMA modulation
signal with the measured standard deviations in each bin. Then $p_{\rm
joint}(\tilde\eta) = p_\eta p_A$ is the combined probability of
obtaining the experimental result for the chosen value of
$\tilde\eta$. Then we maximize $p_{\rm joint}(\tilde\eta)$ with
respect to $\tilde\eta$ to obtain the highest possible joint
probability.
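This maximization can be sketched as a scan over trial values of $\tilde\eta$. In the toy fragment below the mappings from $\tilde\eta$ to the Poisson mean and to the modulation bound are placeholder functions, to be supplied from Eqs.~\eqref{eq:mu} and \eqref{eq:bound_gen2} in an actual analysis.

```python
import math

def poisson_cdf(n_obs, mu):
    """P(N <= n_obs) for a Poisson distribution with mean mu."""
    term = total = math.exp(-mu)
    for k in range(1, n_obs + 1):
        term *= mu / k
        total += term
    return total

def gaussian_cdf(x, mean, sigma):
    """P(X <= x) for a Gaussian with the given mean and standard deviation."""
    return 0.5 * (1.0 + math.erf((x - mean) / (math.sqrt(2.0) * sigma)))

def max_joint_probability(mu_of_eta, bound_of_eta, n_obs, a_obs, sigma_a,
                          eta_grid):
    """Scan eta-tilde(v_m): p_eta is the Poisson probability of seeing at
    most n_obs events for mean mu(eta); p_A is the Gaussian probability
    that the measured modulation does not violate the bound implied by
    eta.  Returns the maximum of p_joint = p_eta * p_A over the grid."""
    best = 0.0
    for eta in eta_grid:
        p_eta = poisson_cdf(n_obs, mu_of_eta(eta))
        p_a = gaussian_cdf(bound_of_eta(eta), a_obs, sigma_a)
        best = max(best, p_eta * p_a)
    return best
```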
The results of such an analysis are shown in Fig.~\ref{fig:prob}. The
analysis is performed at the fixed $v_m$ corresponding to the 3rd
modulation data point in DAMA, depending on the DM mass $m_\chi$. We
find that for all considered interaction types and $m_\chi \lesssim
15$~GeV at least one experiment disfavors a DM interpretation of the
DAMA modulation at more than $4\sigma$ even under the very modest
assumptions of the ``general halo''. In the case of SI interactions
the tension with XE100 is at more than $6\sigma$ for $m_\chi \gtrsim
8$~GeV and saturates at the significance of the modulation data point
itself at about $6.4\sigma$ for $m_\chi \gtrsim 13$~GeV. The exclusion
from XE10 is nearly independent of the DM mass, remaining slightly below
$6\sigma$. We also show a few examples of the joint probability
in case of a ``symmetric halo'' (dashed curves).
\begin{figure}[t]
\includegraphics[width=0.4\textwidth]{final_plots/prob.pdf}
\caption{\label{fig:prob} The probability that the integrated modulation
amplitude in DAMA (summed starting from the 3rd bin) is compatible with
the bound derived from the constraints on $\tilde\eta$ for various
experiments as a function of the DM mass. The label SI (SD), refers to
spin-independent (spin-dependent) interactions with $f_n=f_p$ ($f_n = 0$),
and IV refers to isospin-violating SI interactions with $f_n/f_p = -0.7$.
For solid and dashed curves we use the bounds from
Eqs.~\eqref{eq:bound_gen2} and \eqref{eq:bound_spec2}, respectively.}
\end{figure}
While astrophysical uncertainties are avoided, the obtained bounds are
still subject to nuclear physics, particle physics, and experimental uncertainties. For instance, the tension
between the DAMA signal and the bounds depends on the value of the Na
quenching factor $q_{\rm Na}$, light yield or ionization yield
efficiencies in Xe, upward fluctuations from below threshold, and so
on. For example, if a value of $q_{\rm Na} = 0.45$ is adopted instead of
the fiducial value of 0.3, consistency for SD and isospin-violating
interactions can be achieved in the case of the general halo at around
$3\sigma$, while for SI interactions the XE10 bound still implies
tension at more than $5\sigma$ for $m_\chi \gtrsim 10$~GeV. Hence, the
precise CL of exclusion may depend on systematic uncertainties.
In conclusion, we have presented a powerful method to check the
consistency of an annual modulation signal in a DM direct detection
experiment with bounds on the total DM scattering rate from other
experiments, almost completely independent of astrophysics, for a
given type of DM--nucleus interaction. While our bounds strongly
disfavor a DM interpretation of present annually modulated signals in
the case of SI and SD elastic scattering, the method will be an
important test that any future modulated signal will have to pass
before a DM interpretation can be accepted.
{\bf Acknowledgements:}
J.H.-G.\ is supported by the MICINN under the FPU program.
\section{Introduction}
Flaring events at radio frequencies are known to take place in
Active Galactic Nuclei (AGN), usually followed by the observation
of new radio features in the parsec-scale jets \citep[e.g.,][]{sa02}. It has been shown that the ejection of those features, or components, is related to dips in the X-ray emission from the active nucleus in the case of 3C~120 \citep{Mar02}, and perhaps also in 3C~111 \citep{Mar06}. The dips in X-rays precede the observations of new radio-components. The decrease in X-ray emission may be caused by the loss of the inner regions of the disc. In this scenario, a fraction of the accreted material is injected in the jet and a new component is later observed in VLBI images,
after the material becomes detectable at the observing frequencies, as it evolves downstream. The components are interpreted as the shocks produced by the ejection of denser and/or faster plasma in the flaring event from the accretion disc \citep{mar85}. The conditions for triggering the ejection of the material in those radio features are still unknown. Hydrodynamical simulations \citep[][ A03 hereafter]{Alo03} have shown that such jet perturbations produce a forward and a reverse structure, which would be expected to be observed as a fast front and a slower back component.
In the jet of 3C~111 ($z=0.049$, $1\,\rm{mas}\simeq 1\,\rm{pc}$), a very strong flaring event in early 1996 gave rise to the ejection of two jet features observed at 15 GHz with the Very Long Baseline Array \citep[labeled as components E and F; see Fig.~\ref{fig:0} and][K08 hereafter]{Kad08}. Both component trajectories can be back-extrapolated to similar ejection epochs within 3 months (around 1996.10). However, they show different speeds and the time
evolution of their brightness is different (see Fig.~\ref{fig:0}):
whereas the inner component F is initially brighter (1996.82 and 1997.19)
and fades out very rapidly (1997.66 and 1998.18), the leading component E
shows a slower decrease in flux density. After 1999,
F has disappeared and E evolves, accelerating. In \cite{pe08} we proposed that these components are the front and rear regions of a single perturbation. Here we review this result and discuss the problems that other possible scenarios would face.
\begin{figure}[t]
\includegraphics[clip,angle=0,width=0.23\textwidth]{fig1a}
\includegraphics[clip,angle=0,width=0.24\textwidth]{fig1b}
\caption{Core distance and flux density evolution with time of components E and F in 3C~111, based on the results from K08.}
\label{fig:0}
\end{figure}
\section{RHD and Emission Simulations}
We have performed one-dimensional numerical relativistic hydrodynamics (RHD) simulations in
which a square perturbation in density is injected into a steady jet, without modifying the initial Lorentz factor, and relaxing the condition that the initial jet flow is reestablished immediately after the perturbation. We have replaced it with a rarefied flow, representing a reduction of the injection rate. In this picture, the original jet injection rates should be recovered after some time; however, in this work we focus only on the evolution of the strong ejection and the period before the reestablishment of the jet flow. Multidimensional simulations are beyond the scope of this work due to the computational effort required and to the one-dimensional character of this problem. The simulations have been performed using a numerical code that solves the equations of relativistic hydrodynamics written in conservation form, as described in \cite{pe05} and \cite{mart97}.
The details of the simulation are given in the caption of Fig.~\ref{fig:1}. The top panels in Fig.~\ref{fig:1} show different snapshots of the evolution of the
square perturbation injected in a steady flow, in pressure, Lorentz factor and specific internal energy.
\begin{figure*}[!t]
\centering
\includegraphics[clip,angle=0,width=\linewidth]{fig2.eps}
\caption{
Snapshots of the evolution (left to right) of a square perturbation
injected in a steady jet, followed by a strong rarefaction. The
dotted-light-blue lines stand for Lorentz factor, the solid-dark-blue
line stands for pressure and the dashed-red lines stand for specific internal energy.
The simulation is run with 24000~cells; the velocity of the initial flow is $v_j=0.9\,c$; the perturbation is
injected with a density twice that of the jet and velocity $v_p=0.9\,c$; the rarefied medium is injected after the perturbation with the same velocity as the initial flow, and pressure ten times
smaller than that of the initial flow. Please note the change of scale on the abscissae. The bottom panels show the simulated total intensity emission along the jet axis at four representative epochs. The identification of the features in the simulation with the observed
components E and F in K08 is indicated in each panel. A jet width of 100 cells and axial symmetry are used to compute the emission.}
\label{fig:1}
\end{figure*}
Using the RHD simulations outlined above as input, we have computed the
corresponding 1D optically thin radio synchrotron emission as seen by an
observer with a line of sight at $19^{\circ}$ to the jet axis (K08). For these
computations, we used the numerical code and the procedure described in
\cite{go97} and references therein. This code takes into
account all the relevant relativistic effects, including the light
travel time delays.
In the simulation (see Fig.~\ref{fig:1}), the \textit{front} region includes the leading part of the perturbation and is identified with component E in K08, whereas we define the \textit{fading} region as the rear part of the perturbation and identify it with component F (see Fig.~\ref{fig:1}). The material in the front region, consisting of shocked material from the steady jet and rarefied material from the perturbation separated by a contact discontinuity, shows smaller values of the pressure and some acceleration due to its propagation into the lower-pressure steady-jet fluid. The material in the fading region crosses the receding rarefaction that separates it from the front region (top panels in Fig.~\ref{fig:1}) and is also ``eroded'' by the back rarefaction. Consequently, the front structure evolves, increasing its size as the front shock incorporates material from the steady jet and material from the fading region crosses the receding rarefaction. Thus, the front region consists of the forward shock structure of the perturbation (E in Fig.~\ref{fig:1}), and the fading region is formed by the remains of the perturbation that have not crossed the receding rarefaction (F). The synchrotron emissivity (bottom panels in Fig.~\ref{fig:1}) is governed by the jet pressure, and hence the emission evolution is very similar to the pressure evolution of the RHD simulations. In the emission results, the front region (component E) propagates without much flux-density evolution since injection. However, the fading structure (component F), which initially shows a notably larger flux density than component E, rapidly decreases in emission as the receding and back rarefactions erode it. The reverse shock (see A03) is neither relevant nor observationally significant in our simulations, as it propagates in a very rarefied medium. For that reason it is not shown in Fig.~\ref{fig:1}.
Notice that the Lorentz factor values in Fig.~\ref{fig:1} are those corresponding to the fluid. In contrast, VLBI observations provide us with pattern velocities. In the simulation, the velocity of the front shock is measured to be $v_{s}\sim 0.96\,c$ ($v_{\rm{E}}^{obs} \sim 3.5 c$), whereas that of the fading region is
$v_{r}\sim 0.87\,c$ ($v_{\rm{F}}^{obs} \sim 1.7 c$), both similar to those found in the observations (K08). We note that the material in the fading region moves faster than the receding rarefaction (cf. Fig.~\ref{fig:1}), as expected from the explanation in the previous paragraph. We also point out that the dilute material shown in Fig.~\ref{fig:1} has a modified velocity due to its passage through the reverse shock.
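For reference, the fluid and pattern speeds quoted above translate into apparent speeds through the standard superluminal-motion formula, $\beta_{\rm app}=\beta\sin\theta/(1-\beta\cos\theta)$. A quick check with $\theta=19^{\circ}$ (a sketch; the quoted observed values carry measurement uncertainties) approximately reproduces the observed speeds:

```python
import math

def beta_app(beta, theta_deg=19.0):
    """Apparent transverse speed (in units of c) for intrinsic speed
    beta*c viewed at angle theta to the jet axis."""
    th = math.radians(theta_deg)
    return beta * math.sin(th) / (1.0 - beta * math.cos(th))
```

Here \texttt{beta\_app(0.96)}~$\simeq 3.4$ and \texttt{beta\_app(0.87)}~$\simeq 1.6$, close to the quoted $\sim 3.5\,c$ and $\sim 1.7\,c$.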
A second simulation was performed for a faster perturbation, with Lorentz factor $\Gamma=3.6$, while keeping the rest of the parameters as in the previous simulation. The results \citep[see ][]{pe08} show that the front region of the perturbation is overpressured with respect to the rear region and, thus, the former is brighter than the latter, as shown by the emission simulations. This is in clear contradiction with the observations of the jet in 3C~111 (Fig.~\ref{fig:0} and K08). The difference is due to the stronger front shock produced in this case. It is also important to mention that the wave separating both regions is now a reverse shock, instead of the receding rarefaction shown in Fig.~\ref{fig:1}. This is a general result for fast perturbations.
\section{Discussion}
In any of the scenarios mentioned above, the perturbed regions have enhanced emission with
respect to the underlying jet. However, only 1) an overpressured perturbation with the same Lorentz factor as the underlying flow prevents the front region from being brighter from the beginning, and 2)
a rarefaction behind the perturbation prevents the formation of a strong reverse shock. With these restrictions, the second component fades out rapidly and the first one then dominates the emission, as observed for components E and F in 3C~111.
We note that a denser medium injected after the perturbation would lead to the formation of a brighter feature behind component F, produced by the reverse shock formed between the end of the perturbation and the medium injected afterwards. As stated above, this shock also appears in our simulations, but it has a negligible effect in terms of emission. Observational support for the inclusion of this tenuous material in the simulations can be found in the prominent emission gap following behind the E/F complex in 3C~111 (see K08). New ejection of emitting material is detected on a time scale of more than 2 years, corresponding to a gap width of up to 2\,mas in 1999 (cf. K08). We have exaggerated the effect of the dilute medium in order to focus on the evolution of the perturbation alone. However, such a dilute medium (a factor of 10 less dense than the steady jet) is not needed for the conclusions of this work to be valid: a denser medium, still underdense with respect to the steady jet, would produce a similar effect, with little emission contributed by the reverse shock formed.
Thus, our result explains the evolution of these radio components on the basis of the scenario explained above. From this work, we can derive a recipe for distinguishing between ejection events composed of denser material only and those composed of denser and/or faster material. On top of this, we want to remark that these results are based on the hypothesis that the injection of bulk particles in the jet is reduced after the injection of the perturbation, which could be related to the processes taking place in the accretion-disk/black-hole system, in line with the results of \cite{Mar02}, who showed the existence of a relation between the processes taking place in the accretion disk and the jet \citep[see][for discussion on this issue]{pe08}.
At present we are following a new radio flare in this source with denser sampling. The aim is to perform a deeper analysis of the early stages of evolution of the expected radio components and to compare this evolution with the model presented here.
\begin{acknowledgements}
MP acknowledges support from a postdoctoral fellowship of the
``Generalitat Valenciana'' (``Beca Postdoctoral
d'Excel$\cdot$l\`encia''). IA is
supported by an I3P contract with the Spanish ``Consejo Superior
de Investigaciones Cient\'{i}ficas''. MP, IA and JLG acknowledge support by
the Spanish ``Ministerio de Educaci\'on y Ciencia'' and the European Fund
for Regional Development through grants AYA2007-67627-C03-01 and AYA2007-67627-C03-03.
MK has been supported by the NASA Postdoctoral Program at the Goddard Space Flight Center, administered by the Oak Ridge Associated Universities through a contract with NASA. YK is a research fellow of the
Alexander von Humboldt Foundation. We thank J-M Mart\'{i} and M.-A. Aloy for useful discussion and comments.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
The relationship between the neutron matter equation of state (EOS) and
the neutron skin thickness has been studied extensively by using
the Skyrme Hartree-Fock (HF) model and a relativistic mean field (RMF) model
\cite{Brown,Furn,Yoshi04}.
The neutron matter EOS is essential for studying the
properties of neutron stars, e.g., their size
\cite{Prakash}. It is also known that isovector nuclear matter
properties, including the symmetry energy, correlate strongly
with the neutron skin thickness in heavy nuclei
\cite{Furn,Dani,Yoshi06}.
Elastic electron scattering has provided accurate data on the
charge distributions of nuclei. Several experimental attempts have been
made to measure neutron distributions, for example,
by proton elastic
scattering \cite{pscatt1,pscatt2,Hoffman,Starodubsky}
and by inelastic alpha scattering to
giant dipole resonance excitations \cite{GDR}.
However, empirical results for the
neutron skin thickness obtained by proton scattering are
controversial and do not agree with each other even within experimental
error. The accuracy of empirical data on neutron distributions from giant
resonance experiments is also rather poor, insufficient to extract accurate
information on the neutron matter EOS. One promising tool for
studying neutron distributions is the parity violation electron
scattering experiment \cite{escatt}.
Unfortunately, no data from parity violation electron
scattering experiments are available so far.
The model-independent sum rule strength of charge exchange
SD excitations is directly related to
the neutron skin thickness \cite{Gaarde}.
Recently, SD excitations were studied in $^{90}$Zr by the charge
exchange reactions
$^{90}$Zr(p,n)$^{90}$Nb \cite{Wakasa} and $^{90}$Zr(n,p)$^{90}$Y
\cite{Yako}, and the model-independent sum rule strengths for the
SD excitations were extracted in Ref. \cite{Yako06} by using multipole
decomposition (MD) analysis \cite{Ichi}.
The charge exchange reactions
($^3$He,$t$) on Sn isotopes
were also studied to extract the neutron skin thickness
\cite{SD-Pb}.
However, one needs the counterpart experiment, ($t, ^3$He) or (n,p)
on Sn isotopes,
in order to extract
the model-independent sum rule value from the experimental data. This
counterpart experiment is still missing.
It is known that the SD strength contributes almost as much to
neutrino reactions as the Gamow-Teller strength \cite{Suzuki}.
The Pb target is considered to be the most promising candidate for detecting
the heavy-flavor neutrinos from supernovae. Thus, it
is quite important to study the SD strength in the Pb target for a
precise evaluation of the cross-sections of charge-induced neutrino
reactions.
In this paper, we study the SD excitations
and the neutron skin thickness by using the
HF and HF + random phase approximation (RPA) models with Skyrme interactions.
As a theoretical model, the HF+RPA model
has been extensively applied to giant resonances
in a broad region of the mass table \cite{Ber75,HSZ97}.
The same model was used for the study of
spin-dependent charge exchange excitations \cite{SG2,Colo,SSG,HS2000}.
It was shown that the model successfully predicts GT and SD states
in $^{48}$Sc and $^{90}$Nb \cite{Colo,SSG}.
First, we calculate the SD states in nuclei with mass A=90 and 208
by using the charge exchange HF + RPA
model with various Skyrme interactions. We will compare
calculated results of SD strength distributions with
empirical data obtained by
charge exchange (p,n) and (n,p) reactions on $^{90}$Zr.
The sum rule values are also compared with the empirical values in
Section 2.
Next, the correlations between the neutron matter EOS and
the SD sum rules are studied in the Skyrme HF model.
We will discuss the neutron matter EOS by using the experimental
SD data and other empirical information on the neutron skin.
This paper is organized as follows.
In Section 2, the SD strength
of the HF+RPA calculations
is presented
for both the $t_-$ and $t_+$ isospin channels on $^{90}$Zr and $^{208}$Pb.
The calculated results are
compared with experimental results of $^{90}$Zr(p,n)$^{90}$Nb
and $^{90}$Zr(n,p)$^{90}$Y reactions.
We study the correlations between the sum rules of SD strength
and the pressure of neutron matter EOS in Section 3.
A summary is given in Section 4.
\section{HF+RPA calculations of SD strength}
The operators for SD transitions are defined as
\begin{eqnarray}
\hat{S}_{\pm} &=& \sum_{im\mu} t_{\pm}^{i}\sigma_{m}^{i} r_{i}Y_{1}^{\mu}
(\hat{r}_{i})
\label{eq:eq1}
\end{eqnarray}
with the isospin operators $t_{3} = t_{z},
t_{\pm} = (t_{x}\pm it_{y})$.
The model-independent sum rule for the
$\lambda -$pole SD operator $\hat{S}^{\lambda}_{\pm } =
\sum_{i} t_{\pm}^{i}$
$r_{i}[\sigma \times Y_{1}(\hat{r}_{i})]^{\lambda}$ can be
obtained as
\begin{eqnarray}
S^{\lambda}_{-}- S^{\lambda}_{+}&=&
\sum _{i \in all} \mid \langle i\mid\ \hat{S}^{\lambda}_{-} \mid0\rangle
\mid ^2 -
\sum _{i \in all} \mid \langle i\mid\ \hat{S}^{\lambda}_{+} \mid0\rangle
\mid ^2 \nonumber \\
&=& \langle 0\mid\ [\hat{S}^{\lambda}_{-}, \hat{S}^{\lambda}_{+}] \mid0\rangle
= \frac{(2\lambda+1)}{4\pi}(N\langle r^2\rangle _n -Z\langle r^2\rangle _p).
\label{eq:sum_sd_a}
\end{eqnarray}
The sum rule for the spin-dipole operator (\ref{eq:eq1})
then becomes
\begin{eqnarray}
S_{-}- S_{+}=\sum_{\lambda}(S^{\lambda}_{-}- S^{\lambda}_{+})=
\frac{9}{4\pi}(N\langle r^2\rangle _n -Z\langle r^2\rangle _p).
\label{eq:sum_sd}
\end{eqnarray}
It should be noted that
the sum rule (\ref{eq:sum_sd}) is directly related to the difference between
the mean square radius of neutrons and protons
with the weight of neutron and
proton numbers.
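As a numerical check of Eq.~(\ref{eq:sum_sd}), the sum rule can be evaluated directly from the HF radii; the sketch below uses the SIII radii of $^{90}$Zr from Table~\ref{tab:hf-zr} below and reproduces the tabulated $\Delta S$:

```python
import math

def sd_sum_rule(n_neutrons, n_protons, r_n, r_p):
    """S_- minus S_+ = (9/4pi)(N <r^2>_n - Z <r^2>_p); rms radii r_n, r_p
    in fm, result in fm^2."""
    return 9.0 / (4.0 * math.pi) * (n_neutrons * r_n ** 2
                                    - n_protons * r_p ** 2)
```

For $^{90}$Zr with the SIII radii ($r_n=4.312$~fm, $r_p=4.257$~fm) this gives $\Delta S \simeq 146.7$~fm$^2$, in agreement with Table~\ref{tab:hf-zr}.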
We adopt four Skyrme interactions, namely, SIII, SGII, SkI3 and SLy4,
for the HF+RPA
calculations. The Landau parameters and nuclear matter properties
of these interactions
are shown in Table \ref{tab:landau}. For the spin-isospin excitations,
the value of $G_0'$ plays an important role in determining the collective
properties of the excitation \cite{SG2}.
The RPA equation is solved in a basis expanded in harmonic
oscillator wave functions up to the maximum major quantum number $N_{max}$
=10 for $^{90}$Zr and $N_{max}$=12 for $^{208}$Pb.
The HF calculations are performed without the spin-gradient terms
(J$^2$ terms) since the adopted Skyrme interactions have been fitted
without them \cite{Bei,SG2}, but the RPA calculations incorporate
the spin-gradient terms. The two-body spin-orbit and
two-body Coulomb interactions are neglected in the RPA calculations.
We also performed the continuum HF+RPA calculations with one of the
interactions and found essentially the same strength
distributions as in the present calculations, except for
the width due to the coupling to the continuum
\cite{HSZ97}.
The calculated results are smoothed out
by using a weighting function, $\rho $:
\begin{equation}
\frac{dB(SD)_{ave}}{dE_x}=\int \frac{dB(SD)}{dE_x '}\rho(E_x '-E_x)dE_x '
\label{eq:ave}
\end{equation}
where the weighting function is defined as
\begin{equation}
\rho(E_x '-E_x)=\frac{1}{\pi}\frac{\Delta/2}{(E_x '-E_x)^2+(\Delta/2)^2}
\label{eq:weight}
\end{equation}
taking the width parameter $\Delta$.
In the present calculations with the discrete basis, the SD strength is given by
\begin{equation}
\frac{dB(SD)}{dE_x '}=\sum_i B(SD;E_i)\delta(E_i-E_x '). \nonumber
\end{equation}
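A minimal sketch of the folding in Eqs.~(\ref{eq:ave}) and (\ref{eq:weight}); since the weighting function is normalized to unity, the energy-integrated strength is preserved by the smoothing:

```python
import math

def lorentzian(x, delta):
    """Eq. (weight): normalized Lorentzian with width parameter delta."""
    return (delta / (2.0 * math.pi)) / (x * x + (delta / 2.0) ** 2)

def smoothed_strength(e_x, discrete_states, delta=1.0):
    """Eq. (ave) for a discrete spectrum: sum of B(SD; E_i) weighted by
    the Lorentzian centred at each RPA eigenvalue E_i."""
    return sum(b * lorentzian(e_i - e_x, delta)
               for e_i, b in discrete_states)
```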
\begin{table}[htp]
\footnotesize\rm
\caption{\label{tab:landau}
Landau parameters, effective mass $m^*$ and symmetry energy J
of Skyrme interactions
}
\begin{tabular}{l|c|c|c|c}
\hline
& SIII & SGII & SkI3 &SLy4 \\ \hline
$F_0$ & 0.309 & -0.235 & -0.318 & -0.273\\
$F_0'$ & 0.862 & 0.733 & 0.653 & 0.818 \\
$G_0 $ & 0.052 & 0.014 & 0.569 & 1.120 \\
$G_0'$ & 0.457 & 0.509 & 0.203 & -0.138 \\ \hline
$F_1$ & -0.709 & -0.646 & -1.269 &-0.926 \\
$F_1'$ & 0.490 & 0.521 &-0.843 & -0.399\\
$G_1 $ & 0.490 & 0.612 & 1.33 & 0.279 \\
$G_1'$ & 0.490 & 0.432 & 0.65 & 1.047\\ \hline
$m^*/m$& 0.76 &0.78 & 0.58 & 0.69 \\
J(MeV) & 28.1 & 26.9 & 34.8 & 32.3 \\ \hline
\end{tabular}
\end{table}
\subsection{Charge exchange SD excitations of $^{90}$Zr}
The HF calculations are performed by using four Skyrme
interactions in Table \ref{tab:landau}. The proton, charge and neutron radii of $^{90}$Zr are
listed in Table \ref{tab:hf-zr}
together with the sum rule values $\Delta S=S_--S_+$ calculated through the
analytic equation (\ref{eq:sum_sd}). By using the same HF wave functions,
the charge exchange RPA calculations give the
SD strengths in $^{90}$Nb and $^{90}$Y excited by
the $t_{\pm} r\sigma Y_{1}(\hat{r})$ operators from the parent nucleus
$^{90}$Zr, as shown in
Figs. \ref{fig:zr90_sdm} and \ref{fig:zr90_sdp}.
The experimentally obtained distributions of the SD strengths
are also plotted in Figs. \ref{fig:zr90_sdm} and \ref{fig:zr90_sdp}.
The experimental SD strength distributions for the $t_-$ and
the $t_+$ channels were obtained from the $^{90}$Zr(p,n)$^{90}$Nb
and the $^{90}$Zr(n,p)$^{90}$Y data, respectively, by performing
MD analysis \cite{Yako06}. A comprehensive description of the MD
analysis can be found in ref. \cite{Ichi}.
\begin{table}[htp]
\caption{\label{tab:hf-zr}
Proton, neutron and charge radii of $^{90}$Zr.
The charge radius is obtained by folding the proton finite size.
The sum rule values $\Delta S=S_--S_+$ of spin-dipole excitations are
calculated
by Eq. (\ref{eq:sum_sd}) with the HF neutron and proton mean square radii.
Experimental data on the charge radius are taken from ref. \cite{Vries}.
The experimental values $r_n-r_p$ are taken from \cite{pscatt1,Yako06}.
The radii are given in
units of fm, while the SD sum rules are given in units of fm$^2$.}
\begin{tabular}{l|c|c|c|c|c}
\hline
& SIII & SGII & SkI3 &SLy4 &exp \\ \hline
$r_p$ & 4.257 & 4.198 & 4.174 & 4.225 & ----\\
$r_c$ & 4.321 & 4.263 & 4.240 & 4.290 &4.258$\pm$0.008\\
$r_n$ & 4.312 & 4.253 & 4.280 & 4.287& ---- \\ \hline
$r_n-r_p$ & 0.055 & 0.055 & 0.106 & 0.064 &0.09$\pm$0.07 \cite{pscatt1},
\,\,\, 0.07$\pm$0.04 \cite{Yako06} \\ \hline
$\Delta S$ & 146.7 & 142.9 & 156.9 & 146.9 & \\
\hline
\end{tabular}
\end{table}
\begin{figure}[htp]
\vspace{-2cm}
\includegraphics[width=3.0in,clip]{zr90_sdm_s3.eps}
\vspace{-2.0cm}
\includegraphics[width=3.0in,clip]{zr90_sdm_sg2.eps}
\includegraphics[width=3.0in,clip]{zr90_sdm_ski3.eps}
\includegraphics[width=3.0in,clip]{zr90_sdm_sly4.eps}
\caption{\label{fig:zr90_sdm}(Color online)
Charge exchange SD strengths for
the operators $\hat{S}^{\lambda}_{-} =
\sum_{i} t_{-}^{i}$
$r_{i}[\sigma \times Y_{1}(\hat{r}_{i})]^{\lambda}$
calculated by the HF+RPA model with
the Skyrme interactions (a) SIII, (b) SGII, (c) SkI3 and (d) SLy4.
The excitation energy is referred to the ground state of the
parent nucleus $^{90}$Zr.
The dotted, dashed and long-dashed lines show the SD strengths of
$\lambda=0^-, 1^-$ and $2^-$, respectively, while the solid curve shows
the sum of three multipoles.
The SD strength is averaged by the weighting function (\ref{eq:weight})
with the width
$\Delta$ = 1 MeV. The experimental data shown by the
black dots are taken from ref. \cite{Yako06}.}
\end{figure}
\begin{figure}[htp]
\vspace{-2cm}
\includegraphics[width=3in,clip]{zr90_sdp_s3.eps}
\vspace{-2cm}
\includegraphics[width=3in,clip]{zr90_sdp_sg2.eps}
\includegraphics[width=3in,clip]{zr90_sdp_ski3.eps}
\includegraphics[width=3in,clip]{zr90_sdp_sly4.eps}
\caption{\label{fig:zr90_sdp}(Color online)
Charge exchange SD strengths for the operators $\hat{S}^{\lambda}_{+} =
\sum_{i} t_{+}^{i}$
$r_{i}[\sigma \times Y_{1}(\hat{r}_{i})]^{\lambda}$
calculated by the HF+RPA model using
the Skyrme interactions (a) SIII, (b) SGII, (c) SkI3 and (d) SLy4.
The excitation energy is referred to the ground state of the
parent nucleus $^{90}$Zr.
The SD strength is averaged by the weighting function (\ref{eq:weight})
with a width of $\Delta$ = 1 MeV.
The experimental data shown by the
black dots are taken from ref. \cite{Yako06}.
See the captions to Fig. \ref{fig:zr90_sdm} for details.}
\end{figure}
\begin{table}[htp]
\caption{\label{tab:Epeak}
Peak energies and the average energies
of charge exchange SD excitations in the A=90 nuclei
obtained by the self-consistent HF+RPA calculations: $t_-$ in $^{90}$Nb and
$t_+$ in $^{90}$Y. The average energy is calculated
by the ratio of EWSR to NEWSR: \=E(MeV)=$m_1/m_0$. See the text
for details.}
\footnotesize\rm
\begin{tabular}{l|c|c|c|c}
\hline
&\multicolumn{2}{c|}{$t_-$} &\multicolumn{2}{c}{$t_+$} \\ \hline
& $E_{peak}$(MeV) & \=E(MeV)& $E_{peak}$(MeV) & \=E(MeV) \\ \hline
SIII & 28.5 & 25.7 & 13.5 & 10.9 \\
SGII & 27.7 & 26.7 & 11.7 & 9.47 \\
SkI3 & 29.3 & 28.2 & 12.8 & 11.6 \\
SLy4 & 26.1 & 24.9 & 11.4 & 10.5 \\ \hline
\end{tabular}
\end{table}
In general,
the $t_-$ SD strength distributions
for the 0$^{-}$ and 1$^{-}$ states in $^{90}$Nb
are concentrated in one
state at $E_{x}\sim$ 30 MeV,
having a large portion of the non-energy weighted sum
rule (NEWSR) strength, while those for the 2$^{-}$ states
are separated into two dominant peaks, as shown in Fig. \ref{fig:zr90_sdm}.
The 0$^{-}$ peak appears at $E_x\sim$ 30 MeV, having 73\%, 65\%, and 58\%
of
the NEWSR value for the SIII, SGII and SkI3
interactions, respectively.
The calculated results for the 1$^{-}$ states show a peak at $E_x\sim$ 29 MeV
having 50\%, 59\% and 48\% of the NEWSR value
for the SIII, SGII and SkI3
interactions, respectively. The three
results in Figs. \ref{fig:zr90_sdm}
(a), (b) and (c) show the 0$^{-}$ peak
at a very similar excitation energy,
while the values of NEWSR are somewhat different.
The same is true
for the 1$^{-}$ peak in the three results.
For the SLy4 interaction
in Fig. \ref{fig:zr90_sdm}(d), the 0$^{-}$ and 1$^{-}$ peaks
appear at about 3 MeV lower than the other three results, having
76\% and 68\% of the NEWSR, respectively.
This is due to the negative value of the Landau parameter $G_0'$ in
SLy4 for the spin-isospin channel.
The dominant configurations of the collective 0$^{-}$ and 1$^{-}$ states
are the $(\pi1h_{9/2}\nu1g_{9/2}^{-1})$ and $(\pi1g_{7/2}\nu1f_{7/2}^{-1})$
configurations.
For the 2$^-$ excitations, the number of p-h configurations is larger than
for the 0$^{-}$ and 1$^{-}$ excitations, and therefore
the strength is fragmented over a wider energy range.
There is a small low-lying peak with $J^{\pi}=2^-$
at $E_{x}$ = 12.4 (14.1) MeV with
10.0 (9.0)\% of the NEWSR value in the case of the SIII (SGII) interaction.
This state is mainly due to the
$\pi1g_{9/2}\nu1f_{5/2}^{-1}$ configuration.
The major strengths are found in the two peaks around 21 and 27 MeV in both the
SIII and SGII results. The strength around $E_x$ = 21 MeV exhausts
50(41)\% of the NEWSR value, while the peak around $E_x$ = 27 MeV exhausts
30(37)\% of the NEWSR value for the SIII (SGII) interaction.
The peak energies in the two results are similar, while more SD strength is
shifted to the peak around $E_{x}$ = 21 MeV in the case of the SIII interaction.
The main configurations of the higher peak at $E_{x}$ = 27 MeV are the
same as those of the 0$^{-}$ and 1$^{-}$ peaks, namely,
$(\pi1h_{9/2}\nu1g_{9/2}^{-1})$ and $(\pi1g_{7/2}\nu1f_{7/2}^{-1})$. On the other hand, the main configurations of
the peak around $E_{x}$ = 21 MeV are
$(\pi1h_{11/2}\nu1g_{9/2}^{-1})$,
$(\pi2d_{5/2}\nu2p_{1/2}^{-1})$ and $(\pi2d_{5/2}\nu2p_{3/2}^{-1})$.
The 2$^{-}$ strength distributions of the SkI3 and SLy4 interactions are
somewhat different
from those of the SIII and SGII interactions. There is no isolated
low-energy peak in the results for the SkI3 and SLy4 interactions.
Three large peaks are seen at $E_{x}$ = 23.5, 26.5 and 29.5 MeV
together with several small peaks in the case of SkI3, while the two peaks
at 20 and 26 MeV exhaust most of the strength in the case of SLy4.
The SD strengths calculated by the
SD operator $\hat{S}^{\lambda}_{+ } =
\sum_{i} t_{+}^{i}$
$r_{i}[\sigma \times Y_{1}(\hat{r}_{i})]^{\lambda}$
are shown in Fig. \ref{fig:zr90_sdp} (a), (b), (c) and (d)
for the SIII, SGII, SkI3 and SLy4 interactions,
respectively. The strength distributions are divided into two energy regions: a broad bump below 10 MeV and a peak around $E_{x}$ = 13 MeV.
The strengths below 10 MeV are due to the $\lambda^{\pi}$=1$^-$ and 2$^-$ states,
while the high
energy peak is induced mainly by the 1$^-$ states. The large 0$^-$ strength
is also found just above the high energy 1$^-$ peak.
The summed NEWSR values of all multipoles below 10 MeV are almost
equal to the strength of the high energy peak around $E_{x}$ = 13 MeV in
the case of the SIII and SGII interactions. The high energy peak of the
SIII interaction in Fig.
\ref{fig:zr90_sdp}(a) is
about 2 MeV higher than those of
SGII and
SLy4, as listed in Table \ref{tab:Epeak}. The main
configuration for the high energy peaks with $\lambda^{\pi}$=0$^-$, 1$^-$ and 2$^-$
is the $(\nu1g_{7/2}\pi1f_{7/2}^{-1})$ excitation. For the
low energy 1$^-$
strength, the $(\nu2d_{3/2}\pi2p_{3/2}^{-1})$ and
$(\nu1g_{7/2}\pi1f_{5/2}^{-1})$ configurations play the dominant roles.
The $(\nu2d_{5/2}\pi2p_{1/2}^{-1})$,
$(\nu2d_{5/2}\pi2p_{3/2}^{-1})$ and $(\nu3s_{1/2}\pi2p_{3/2}^{-1})$
configurations have a large contribution in the low energy peak of
$\lambda^{\pi}=2^-$.
The $(\nu2d_{3/2}\pi1f_{7/2}^{-1})$ configuration contributes substantially
to the high energy 2$^-$ peak together with the
$(\nu1g_{7/2}\pi1f_{7/2}^{-1})$ configuration. The large spread in the distributions
of the SD
strengths in Figs. \ref{fig:zr90_sdm} and \ref{fig:zr90_sdp} is
due to the fact that the $p-h$ excitations differ considerably in
unperturbed energy. Thus,
the collision-less Landau damping effect plays an important role
in the large
observed width of the SD resonance, while the coupling to the continuum
plays a minor role. The coupling to 2-particle-2-hole (2p-2h) states
was shown to substantially increase the width of the main peak of
the $t_-$ SD excitations of $^{90}$Zr in ref. \cite{Dro87}.
The energies of the main peaks E$_{peak}$ are
tabulated in Table \ref{tab:Epeak} along with
the average excitation energies, which are calculated by the
ratio of the energy-weighted sum rule (EWSR) $m_1$ to the non-energy
weighted sum rule (NEWSR) $m_0$, \={E}$=m_1/m_0$.
The \={E} is always lower than
the E$_{peak}$ because of the low energy peak in the excitation
spectra.
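In terms of the discrete RPA strengths, the moments entering this ratio are
\begin{equation}
m_k=\sum_i E_i^{\,k}\,B(SD;E_i), \qquad k=0,1, \nonumber
\end{equation}
so that $m_0$ coincides with the NEWSR and $m_1$ with the EWSR value.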
For the $t_-$ response, the SkI3 interaction gives the highest excitation
energy for the peak, while SLy4 gives the lowest. Notice
that the energy of
SkI3 is the highest due to its small effective mass $m^*/m$, while the negative
Landau parameter $G_{0}'$ is responsible for the fact that SLy4 yields
the lowest energy value in Table \ref{tab:Epeak}.
The general trend of the average
excitation energy \={E} is the same for the $t_+$ response.
The SIII, however, gives a somewhat
higher energy for the E$_{peak}$ than SkI3 does.
\begin{figure}[htp]
\includegraphics[width=4.5in,clip]{zr-sd-exp.eps}
\caption{\label{fig:zr90_sd_exp}
Charge exchange SD strength $\frac{dB(SD_-)}{dE}$ (upper panel)
and $\frac{dB(SD_+)}{dE}$ (lower panel) of $^{90}$Zr. The circles and squares
are the experimental data taken from ref. \cite{Yako06}.
The spectra $\frac{dB(SD_+)}{dE}$ are shifted by the Coulomb
energy difference between the two daughter nuclei $^{90}$Nb and $^{90}$Y
(+23.6 MeV) to adjust
the isospin difference between the two nuclei.
The calculated results are plotted with the quenching factor
quf = 0.68. The SD strength is averaged by the weighting
function (\ref{eq:weight}) with the width $\Delta $ = 2 MeV.}
\end{figure}
The calculated results of SD strength are shown in Fig.~\ref{fig:zr90_sd_exp}
together with the experimentally obtained distributions of the
SD strengths \cite{Yako06}. The spectra for the $t_+$ channel are
shifted by +23.6 MeV, accounting for the Coulomb energy difference
between the daughter nuclei $^{90}$Nb and $^{90}$Y. We introduce
the quenching factor quf = 0.68 for both the $t_-$ and $t_+$
channels. For the $t_-$ channel,
the experimental strength distribution peaked
at $E_x\sim$ 26 MeV is well described by the SLy4 interaction.
The results of SGII and SIII also give reasonable agreement with the
experimental peak energy.
None of the calculated results show any substantial strength above
$E_x\sim$ 36 MeV, while a significant portion of the sum rule value
is found above $E_x\sim$ 36 MeV in the experimental data.
This difference may be due to the lack of coupling to many-particle
many-hole states in the present RPA calculations.
In ref. \cite{Dro87}, the $t_-$ SD strengths in $^{90}$Zr have been
studied using the RPA model including the couplings to
2p-2h states. It was found that the
mixing between 1p-1h and 2p-2h
states gives a large asymmetric spread in the strength of the
SD resonances, and about 30$\%$ of the total strength is shifted to
excitation energies above 35 MeV, referred to the parent nucleus
$^{90}$Zr. This result is consistent with the
quenching factor adopted in Fig. \ref{fig:zr90_sd_exp}. It should be mentioned that
the peak energy of the $t_-$ SD strength is not changed
appreciably by the coupling to the 2p-2h states, while the peak height is decreased
substantially.
For the $t_+$ channel,
the two peak structures can be seen
in both the calculated and experimental results.
SkI3 and SLy4 describe
the low energy part of the SD strength well. The calculated strength
up to $E_x = $40 MeV exhausts 100\% of the sum rule value,
while the experimental data show
appreciable strength above $E_x = $40 MeV.
This difference may be due to the couplings to many-particle many-hole
states similar to the $t_-$ channel.
\begin{figure}[htp]
\includegraphics[width=4.5in,clip]{zr-sd-exp-s.eps}
\caption{\label{fig:zrsum}
Integrated charge exchange SD strength (\ref{eq:sum-ex})
excited by the operators $
\hat{S}_{-} = \sum_{i,m,\mu} t_{-}^{i}\sigma_{m}^{i} r_{i}Y_{1}^{\mu}
(\hat{r}_{i})$ and $
\hat{S}_{+} = \sum_{i,m,\mu} t_{+}^{i}\sigma_{m}^{i} r_{i}Y_{1}^{\mu}
(\hat{r}_{i})$ on $^{90}$Zr. The
calculated results are obtained by the HF+RPA model using
the Skyrme interactions SIII, SGII, SLy4 and SkI3.
The upper panel shows the $S_-$ and $S_+$ strength, while
the lower panel shows the $S_--S_+$ strength.
All strengths for the
three multipoles $\lambda^{\pi}$=0$^{-}$, 1$^{-}$ and 2$^{-}$ are
summed up in the results.
The experimental data are taken from ref. \cite{Yako06}.
No quenching
factor is introduced in the calculation of the integrated strength.
}
\end{figure}
Let us now discuss the integrated SD strength.
The integrated SD strength
\begin{equation}
m_0(E_x)=\sum_{\lambda^{\pi}=0^-,1^-,2^-}\int_0^{E_x}\frac{dB(\lambda)}{dE'}
dE'
\label{eq:sum-ex}
\end{equation}
is plotted as a function of
the excitation energy $E_x$ in Fig. \ref{fig:zrsum}
for the operators $\hat{S}^{\lambda}_{- } =
\sum_{i} t_{-}^{i}$
$r_{i}[\sigma \times Y_{1}(\hat{r}_{i})]^{\lambda}$
and
$\hat{S}^{\lambda}_{+} =
\sum_{i} t_{+}^{i}$
$r_{i}[\sigma \times Y_{1}(\hat{r}_{i})]^{\lambda}$.
The experimental data
are taken from ref. \cite{Yako06}.
The value $S_-$ is obtained by integrating up to $E_x$ = 50 MeV from the
ground state of the daughter nucleus $^{90}$Nb ($E_x$ = 57 MeV from
the ground state of the parent nucleus $^{90}$Zr),
while the corresponding value $S_+$ is evaluated up to $E_x$ = 26 MeV
from the ground state of $^{90}$Y ($E_x$ = 27.5 MeV from
the ground state of $^{90}$Zr).
This difference between the two maximum energies of the integrals
stems from the
isospin difference
between the ground states of the daughter nuclei, i.e.,
T=4 in $^{90}$Nb and T=6 in $^{90}$Y.
That is, the 23.6 MeV difference originates from the difference in
excitation energy
between the T=6
Gamow-Teller
states in the (p,n) and (n,p)
channels\cite{Yako06}.
For both the $S_-$ and $S_+$ strength,
the calculated results overshoot the experimental data
in the energy range $E_x$ = 20-40 MeV.
These results suggest the quenching of
30-40\% of the calculated strength around the peak region, as was already
mentioned.
However, the experimental integrated strengths up to $E_x$ = 56 MeV in Fig.
\ref{fig:zrsum}
approach the calculated values for both the
$t_{-}$ and $t_{+}$ channels.
\begin{table}[htp]
\caption{\label{tab:sum}
Sum rule values of charge exchange SD excitations in A=90 nuclei
obtained by the HF+RPA calculations; S$_-$ for $^{90}$Nb and
S$_+$ for $^{90}$Y. The SD strength is integrated up to $E_x$ = 50 MeV
for S$_-$ and $E_x$ = 26 MeV for S$_+$, respectively.
The experimental data are taken from ref. \cite{Yako06}.
The SD sum rules are given in units of fm$^2$.
See the text for details.}
\footnotesize
\begin{tabular}{l||c|c|c||c|c|c||c|c|c||c|c|c}
\hline
&\multicolumn{3}{c||}{SIII} &\multicolumn{3}{c||}{SGII}
&\multicolumn{3}{c||}{SkI3}&\multicolumn{3}{c}{SLy4} \\ \hline
$\lambda^{\pi} $ & $S_-$ & $S_+$ & $\Delta S$ & $S_-$ & $S_+$ & $\Delta S$
& $S_-$ & $S_+$ & $\Delta S$ & $S_-$ & $S_+$ & $\Delta S$ \\ \hline
0$^-$ & 34.8 &18.5 &16.4 & 33.2 & 17.4 & 15.8 & 36.6 & 19.1& 17.5&37.8 &21.4 & 16.4\\\hline
1$^-$ & 120.8 & 71.7 & 49.1 & 122.0 & 74.3 & 47.7 & 120.8 & 68.2 & 52.7 &115.8 & 66.4& 49.4\\\hline
2$^-$ & 130.1 & 48.5 & 81.6 & 125.5 & 45.9 & 79.5 & 139.0 & 51.1 & 87.9&138.7 & 56.4& 82.3\\\hline
sum & 285.7 & 138.6 &147.1 & 280.7 &137.6 &143.1 & 296.3 & 138.3 & 158.1 & 292.3 & 144.2 & 148.2\\ \hline \hline
exp & \multicolumn{3}{c||}{ $S_-=271\pm14$} &\multicolumn{3}{c||}{
$S_+=124\pm11$ }
&\multicolumn{3}{c||}{$\Delta S=147\pm13$}&\multicolumn{3}{c}{} \\ \hline
\end{tabular}
\end{table}
The calculated SD sum rule values in A=90 nuclei obtained by using the HF+RPA results
are tabulated in Table \ref{tab:sum} for the transitions with $\lambda ^{\pi}$=
0$^-$, 1$^-$ and 2$^-$.
Clearly, the $\Delta S$ values follow approximately the
(2$\lambda$+1) multipole proportionality,
even though $S_-$ and $S_+$ themselves
do not show any clear multipole dependence.
The present
RPA results for $^{90}$Zr listed in Table \ref{tab:sum}
satisfy the analytic sum rule values (\ref{eq:sum_sd})
listed in Table \ref{tab:hf-zr}
with high accuracy, to an error of only (0.1$\sim$0.2)\%.
This agreement guarantees the numerical
accuracy of the present RPA calculations.
This is also the case in $^{208}$Pb, as will be shown in Section IIB.
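As an explicit numerical check, the SIII radii in Table \ref{tab:hf-zr} ($r_n$ = 4.312 fm, $r_p$ = 4.257 fm) together with $N=50$ and $Z=40$ give, through the sum rule (\ref{eq:sum_sd}),
\begin{equation}
\Delta S=\frac{9}{4\pi}\left(N\langle r^2\rangle_n-Z\langle r^2\rangle_p\right)
=\frac{9}{4\pi}\left(929.7-724.9\right)~{\rm fm}^2\simeq 146.7~{\rm fm}^2, \nonumber
\end{equation}
which reproduces the analytic value listed in Table \ref{tab:hf-zr}.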
The $\Delta S=S_--S_+$ value is shown as a function of $E_x$
in the lower panel of Fig.
\ref{fig:zrsum}. We note that the
$\Delta S$ value saturates both in the calculated and the experimental
values above $E_x=$ 40 MeV, while the empirical values $S_-$ and $S_+$
themselves increase gradually above $E_x=$ 40 MeV.
This is the crucial feature for extracting the model-independent sum rule
$\Delta S=S_--S_+$ from the experimental data.
The empirical values $S_-$, $S_+$ and $\Delta S$ obtained from these
analyses are shown in Table \ref{tab:sum}.
The indicated uncertainties of $S_-$, $S_+$ and $\Delta S$ contain not only
the statistical error of the data, but also errors due to the various
input of the DWIA calculations used in the MD analysis, such as
the optical model parameters and the single-particle potentials
\cite{Yako}. There is an additional uncertainty in the
estimation of the SD unit cross-section, namely, the overall
normalization factor \cite{Yako06}, which should be studied further
experimentally.
From $\Delta S$, the neutron radius of
$^{90}$Zr is extracted to be $\sqrt{\langle r^2\rangle_n}$ = (4.26$\pm$0.04) fm
from the model-independent SD sum rule (\ref{eq:sum_sd}), where the empirical
proton radius $\sqrt{\langle r^2\rangle_p}$ = 4.19 fm is used. The proton radius
is obtained from the charge radius in Table \ref{tab:hf-zr} by
subtracting the
proton finite size correction.
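Explicitly, inverting the model-independent sum rule (\ref{eq:sum_sd}) for the neutron mean square radius gives
\begin{equation}
\langle r^2\rangle_n=\frac{1}{N}\left(\frac{4\pi}{9}\Delta S+Z\langle r^2\rangle_p\right), \nonumber
\end{equation}
which with $\Delta S=(147\pm13)$ fm$^2$, $\langle r^2\rangle_p=(4.19~{\rm fm})^2$, $Z=40$ and $N=50$ yields the quoted value $\sqrt{\langle r^2\rangle_n}=(4.26\pm0.04)$ fm.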
The experimental uncertainty in the neutron skin thickness
obtained by proton scattering is rather large:
$\delta_{np}=r_n-r_p=(0.09\pm0.07)$ fm. This is
because of the model-dependent
analysis of the proton scattering, which relies on effective nucleon-nucleon
interactions in the nuclear medium \cite{pscatt1}.
On the other hand,
the sum rule analysis of the SD strength
determines the neutron radius with 1\% accuracy, which is
almost the same as that expected for the parity-violating electron
scattering experiment.
The obtained value $r_n-r_p = (0.07\pm0.04)$ fm
can be used to disentangle the neutron matter
EOS by using the strong linear correlation between the two quantities
\cite{Brown,Furn,Yoshi04}, as will be discussed in Section 3.
\subsection{Charge exchange SD excitations of $^{208}$Pb}
\begin{table}[htp]
\caption{\label{tab:hf-pb}
Proton, neutron and charge radii of $^{208}$Pb.
The charge radius is obtained by folding the proton finite size.
The sum rule values $\Delta S=S_--S_+$ of the SD excitations are calculated
by Eq. (\ref{eq:sum_sd}) with the HF neutron and proton mean square radii.
Experimental data on the charge radius are taken from ref. \cite{Vries}.
Experimental data on $\delta_{np}=r_n-r_p$ are obtained by the
proton scattering \cite{pscatt2,Hoffman,Starodubsky} and the
giant dipole excitations of $^{208}$Pb\cite{GDR}. The radii are given in
units of fm, while the SD sum rules are given in units of fm$^2$.
}
\begin{tabular}{l|c|c|c|c|l}
\hline
& SIII & SGII & SkI3 & SLy4 & exp \\\hline
$r_p$ & 5.521 & 5.454 & 5.421 & 5.457 & --- \\
$r_c$ & 5.578 & 5.512 & 5.479 & 5.515 & 5.503 $\pm$ 0.002 \\
$r_n$ & 5.646 & 5.589 & 5.649 & 5.617 & --- \\ \hline
$\delta_{np}=r_n-r_p$ & 0.125 & 0.135 & 0.228 & 0.160 & $0.083 < \delta_{np} < 0.111$\cite{pscatt2}, $0.19 \pm0.09$\cite{GDR} \\
\hline
$\Delta S$ & 1086. & 1072. & 1154. & 1098. & \\ \hline
\end{tabular}
\end{table}
\begin{figure}[htp]
\includegraphics[width=3.2in,clip]{pb-fig5a.eps}
\vspace{1cm}
\includegraphics[width=3.2in,clip]{pb-fig5b.eps}
\includegraphics[width=3.2in,clip]{pb-fig5c.eps}
\includegraphics[width=3.2in,clip]{pb-fig5d.eps}
\caption{\label{fig:pb208_sdm}(Color online)
Charge exchange SD strengths for the operators $\hat{S}^{\lambda}_{-} =
\sum_{i} t_{-}^{i}$
$r_{i}[\sigma \times Y_{1}(\hat{r}_{i})]^{\lambda}$
calculated by the HF+RPA model with
the Skyrme interactions (a) SIII, (b) SGII, (c) SkI3, and (d) SLy4.
The excitation energy is referred to the ground state of the
parent nucleus $^{208}$Pb.
The SD strength is averaged by the weighting function in Eq. (\ref{eq:weight})
with the width $\Delta $ = 1 MeV.}
\end{figure}
\begin{figure}[htp]
\includegraphics[width=3.2in,clip]{pb-fig6a.eps}
\vspace{1cm}
\includegraphics[width=3.2in,clip]{pb-fig6b.eps}
\includegraphics[width=3.2in,clip]{pb-fig6c.eps}
\includegraphics[width=3.2in,clip]{pb-fig6d.eps}
\caption{\label{fig:pb208_sdp}(Color online)
Charge exchange SD strengths for
the operators $\hat{S}^{\lambda}_{+} =
\sum_{i} t_{+}^{i}$
$r_{i}[\sigma \times Y_{1}(\hat{r}_{i})]^{\lambda}$
calculated by the HF+RPA model with
the Skyrme interactions (a) SIII, (b) SGII, (c) SkI3 and (d) SLy4.
The excitation energy is referred to the ground state of the
parent nucleus $^{208}$Pb.
The SD strength is averaged by the weighting function in
Eq. (\ref{eq:weight}) with the width
$\Delta$ = 1 MeV.}
\end{figure}
\begin{figure}[htp]
\includegraphics[width=4.in,clip]{pb-fig7.eps}
\caption{\label{fig:pb_sdmp}
Charge exchange SD strength $\frac{dB(SD_-)}{dE}$ (upper panel)
and $\frac{dB(SD_+)}{dE}$ (lower panel) of $^{208}$Pb.
The spectra $\frac{dB(SD_+)}{dE}$ are shifted by +37.2 MeV due to
the Coulomb energy difference between the two daughter nuclei $^{208}$Bi and
$^{208}$Tl. The arrow in the upper panel
shows a peak energy at $E_x$ = 24.8 MeV
observed by the charge exchange reaction
$^{208}$Pb($^3$He,$t$)$^{208}$Bi \cite{Aki}.
}
\end{figure}
\begin{figure}[htp]
\includegraphics[width=4.in,clip]{pb-fig8.eps}
\caption{\label{fig:pbsum}
Integrated charge exchange SD strength (\ref{eq:sum-ex}) of $^{208}$Pb
for the operators $
\hat{S}_{-} = \sum_{i,m,\mu} t_{-}^{i}\sigma_{m}^{i} r_{i}Y_{1}^{\mu}
(\hat{r}_{i})$ and $
\hat{S}_{+} = \sum_{i,m,\mu} t_{+}^{i}\sigma_{m}^{i} r_{i}Y_{1}^{\mu}
(\hat{r}_{i})$
calculated by the HF+RPA model with
the Skyrme interactions SIII, SGII, SkI3 and SLy4.
The upper panel shows the $S_-$ and $S_+$ strength, while
the lower panel shows the $\Delta S=S_--S_+$ strength.
All strengths for the
three multipoles $\lambda^{\pi}$=0$^{-}$, 1$^{-}$ and 2$^{-}$ are
summed up in the results.
}
\end{figure}
The HF results of $^{208}$Pb are summarized in Table
\ref{tab:hf-pb}.
The RPA results of SD excitations of $^{208}$Pb are given
in Figs. \ref{fig:pb208_sdm} and \ref{fig:pb208_sdp}
for the four different Skyrme
interactions, namely, SIII, SGII, SkI3, and SLy4.
For the $t_-$ channel, the strength distributions are spread over a broad
energy region (15 MeV $<E_x<$ 35 MeV) except for a tiny peak
at $E_x\sim$ 5 MeV. On the other hand, the strength for the $t_+$ channel is
concentrated in a single narrow peak.
The highest peak of the $t_-$ channel
occurs at $E_x\sim$ 27-28 MeV in the cases of the
SIII, SGII, and SLy4 interactions, while it is shifted to
higher energies ($E_x\sim$ 33 MeV) in the case of SkI3.
The 0$^-$ and 1$^-$ excitations
merge into one
peak,
having more than 40\% of the total strength on the high energy side,
while the 2$^-$ strength is spread over a broad energy region.
The low-energy 2$^-$ state at around $E_x$ = 4 MeV
is mainly due to the ($\pi1h_{9/2}\nu1i_{13/2}^{-1}$)
excitation.
The 0$^-$ peak is predicted to occur at a slightly higher energy than the 1$^-$
peak. However,
it might be difficult to observe this peak experimentally
because
of its rather low strength.
There are appreciable differences in the peak energies between the Skyrme
interactions
for the $t_+$ channel:
$E_x\sim$ 3 MeV for SGII, $E_x\sim$ 5 MeV for SIII and $E_x\sim$ 6 MeV
for SLy4 and SkI3, as listed in Table \ref{tab:Epeak-pb}.
The sum rule values $S_-$ and $S_+$ are listed in Table
\ref{tab:sum-pb}. Because of the strong Pauli
blocking by the neutron excess
in $^{208}$Pb, the $S_+$ value is much smaller than the $S_-$ value,
amounting to at most 20\% of the corresponding $S_-$ value for each multipole.
In contrast, the $S_+$ value is substantial in the case of A=90,
as shown in Table \ref{tab:sum}, reaching more
than 55\% of $S_{-}$ in some cases. However, $\Delta S=S_--S_+$ obeys
the (2$\lambda $+1) proportionality, as expected from Eq. (\ref{eq:sum_sd_a}).
The charge exchange $^{208}$Pb($^3$He,$t$)$^{208}$Bi reaction
was performed to study the SD strength in $^{208}$Bi.
The data were analyzed by a least-squares fitting method and the peak
of the SD strength was found to be at $E_{x}$ = 24.8$\pm$0.8 MeV, as measured
from the ground state of $^{208}$Pb \cite{Aki}. This empirical
peak energy is close to the average energy $\bar{E}$ of SD strength
obtained by SIII and SGII in
Table \ref{tab:Epeak-pb}. Further experimental effort is
urgently needed to obtain more quantitative strength
distributions, for example through a multipole decomposition analysis
of charge exchange reactions on a $^{208}$Pb target.
\begin{table}[htp]
\caption{\label{tab:Epeak-pb}
Peak energies and the average energies
of charge exchange SD excitations in A=208 nuclei calculated
by the HF+RPA model; S$_-$ for $^{208}$Bi and
S$_+$ for $^{208}$Tl. The average energy is calculated
by the ratio of EWSR to NEWSR: \=E(MeV)=$m_1/m_0$. See the text
for details.}
\footnotesize\rm
\begin{tabular}{l|c|c|c|c}
\hline
&\multicolumn{2}{c|}{$t_-$} &\multicolumn{2}{c}{$t_+$} \\ \hline
& $E_{peak}$(MeV) & \=E(MeV) & $E_{peak}$(MeV) & \=E(MeV) \\ \hline
SIII & 26.7 & 24.2 & 5.0 & 7.3 \\
SGII & 28.1 & 24.6 & 2.5 & 6.0 \\
SkI3 & 32.7 & 27.9 & 5.6 & 7.3 \\
SLy4 & 27.4 & 23.6 & 6.3 & 8.0 \\ \hline
\end{tabular}
\end{table}
\begin{table}[htp]
\caption{\label{tab:sum-pb}
Sum rule values of charge exchange SD excitations in A=208 nuclei
calculated by the HF+RPA model; S$_-$ for $^{208}$Bi and
S$_+$ for $^{208}$Tl. The SD strength is integrated up to $E_x$ = 57 MeV
for S$_-$ and $E_x$ = 20 MeV for S$_+$; the excitation energy is referred
to the ground state of $^{208}$Pb. The SD sum rules are given in units of fm$^2$.
See the text for details.}
\footnotesize\rm
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c}
\hline
&\multicolumn{3}{c|}{SIII} &\multicolumn{3}{c|}{SGII}
&\multicolumn{3}{c|}{SkI3}&\multicolumn{3}{c}{SLy4} \\ \hline
$\lambda^{\pi} $ & $S_-$ & $S_+$ & $\Delta S$ & $S_-$ & $S_+$ & $\Delta S$
& $S_-$ & $S_+$ & $\Delta S$ & $S_-$ & $S_+$ & $\Delta S$ \\ \hline
0$^-$ & 148.6 & 27.0 &121.6& 114.1 & 24.3 & 119.8 & 158.0 & 29.7 & 128.3&158.5 & 36.0 & 122.5\\\hline
1$^-$ & 442.7 & 78.8 & 363.9& 440.4 & 82.3 & 358.1 & 454.5 & 69.2 & 385.3 &430.8& 63.6 & 367.2\\\hline
2$^-$ & 632.2 & 28.3& 603.9 & 620.7 & 26.4 & 595.3 & 669.8 & 28.2 & 641.6&644.5 & 34.1 & 610.5\\\hline
sum & 1224. & 134.1 & 1089. & 1205. &132.0 & 1073. & 1282. & 127.1 & 1155. & 1234. & 133.7 & 1100. \\ \hline
\end{tabular}
\end{table}
One can see only one sharp peak in the $t_+$ channel in Fig.
\ref{fig:pb208_sdp}.
There are only two allowed 1p-1h configurations
($\nu2g_{9/2}\pi1h_{11/2}^{-1}$) and ($\nu1i_{11/2}\pi1h_{11/2}^{-1}$)
for both 1$^-$ and 2$^-$ excitations because of the strong
Pauli blocking effect of excess neutrons. Moreover, the
$\nu2g_{9/2}$ and $\nu1i_{11/2}$ states are almost degenerate
in energy in the HF potential. These are the reasons why there is only
one sharp peak in the $t_+$ channel of $^{208}$Pb.
It might be interesting to perform $^{208}$Pb(n,p)$^{208}$Tl or
$^{208}$Pb($t,^3$He)$^{208}$Tl
reactions in order to observe this peak experimentally.
The $^{208}$Pb(n,p)$^{208}$Tl reaction has been reported for
the $t_+$ channel, and a broad peak was found at $E_{x}\sim$ 8 MeV,
as measured from the ground
state of $^{208}$Pb, although with rather poor statistics \cite{Long}.
The integrated SD strengths for both the $t_-$ and $t_+$ channels
are shown in Fig. \ref{fig:pbsum}. The calculated NEWSR shows a
saturation at around $E_x\sim$ 30 MeV as can be seen in Fig. \ref{fig:pbsum}.
As noted previously, the $t_+$ channel has only a small contribution
to the model-independent sum rule $\Delta S$.
The couplings to the 2p-2h states may increase the spread in the SD strength in A=208 nuclei as well as in A=90 nuclei.
So far, the charge exchange Gamow-Teller (GT) states in $^{208}$Bi
have been studied by taking into account the couplings to 2p-2h states in
the particle-vibration model \cite{Colo}. While a large spread was found in the
GT states in the particle-vibration model calculations,
the peak energy did not change appreciably due to the couplings to
2p-2h states.
There have been no microscopic studies of SD states that take
into account the couplings to 2p-2h states in A=208 nuclei.
\section{SD sum rules and neutron matter EOS}
Sum rules are useful tools to study the collective nature of excitation modes
in many-body systems. In particular, for charge exchange excitations,
model-independent sum rules
are derived and used to analyze experimental
data on Gamow-Teller resonances and SD resonances \cite{Gaarde}.
For SD states, the sum rules can be used to extract the
neutron skin thickness, as was discussed in Section 2.
References \cite{Brown,Furn,Yoshi04} have reported a strong correlation
between the neutron skin thickness and the neutron matter EOS, as obtained
by using Skyrme and relativistic mean field theories.
In this section, we will study the relation between
the SD sum rules and the neutron matter EOS.
The strong linear correlation between the neutron skin thickness
\begin{equation}
\delta_{np}=\sqrt{\langle r^2\rangle _n}-\sqrt{\langle r^2\rangle _p}
\end{equation}
and the pressure of neutron matter
\begin{equation}
P=\rho_n \frac{d(E(\rho_n)/\rho_n)}{d\rho_n}
\end{equation}
is essential for this study.
Other linear correlations between the
neutron skin thickness and various isovector nuclear matter
properties have also been pointed out recently \cite{Yoshi06}. Given these
correlations, accurate information on the neutron skin thickness
will be quite useful in determining empirically the pressure of neutron
matter EOS
and
isovector nuclear properties,
such as the volume and surface symmetry energies.
\begin{figure}[htp]
\includegraphics[width=4.5in,clip]{zr-fig9.eps}
\caption{\label{fig:zr90_np}
Correlations between the pressure of neutron matter and the SD sum rule
values of $^{90}$Zr with 12 different Skyrme interactions. The numbers denote
different Skyrme parameter sets: 1 for SI, 2 for SIII, 3 for
SIV, 4 for SVI, 5 for Skya, 6 for SkM, 7 for SkM$^{*}$, 8 for SLy4, 9 for MSkA,
10 for SkI3, 11 for SkX and 12 for SGII.
The correlation coefficient
is found to be r = 0.811. }
\end{figure}
\medskip
\medskip
\begin{figure}[htp]
\includegraphics[width=4.5in,clip]{pb-fig10.eps}
\caption{\label{fig:pb208_np}
Correlations between the pressure of neutron matter and the SD sum rule
values of $^{208}$Pb with 12 different Skyrme interactions. The numbers
denote different Skyrme parameter sets: 1 for SI, 2 for SIII, 3 for
SIV, 4 for SVI, 5 for Skya, 6 for SkM, 7 for SkM$^{*}$, 8 for SLy4, 9 for MSkA,
10 for SkI3, 11 for SkX and 12 for SGII. The dashed
line represents the result obtained by the least-squares method. The correlation coefficient
is found to be r = 0.888.
}
\end{figure}
The correlations between the pressure of
neutron matter at the neutron density $\rho_n$ = 0.1 fm$^{-3}$
and the charge exchange
SD sum rules of $^{90}$Zr and $^{208}$Pb
are shown in Figs. \ref{fig:zr90_np} and \ref{fig:pb208_np}
with 12 different Skyrme interactions, labelled as in the figure captions.
The correlation coefficients of the least-squares fitted lines are
r = 0.888 and 0.811
for $^{208}$Pb and $^{90}$Zr, respectively. The correlation coefficients are
somewhat smaller than those of the calculated correlation between
the neutron skin thickness
$\delta_{np}$ and the pressure $P$ in ref. \cite{Yoshi04},
but still, we can see fairly good correlations in Figs. \ref{fig:zr90_np}
and \ref{fig:pb208_np}.
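The quantities behind Figs.~\ref{fig:zr90_np} and \ref{fig:pb208_np} are a Pearson correlation coefficient $r$ and a least-squares line through 12 $(P,\Delta S)$ pairs. The sketch below shows the computation; the arrays are invented placeholders standing in for the 12 Skyrme-interaction results, not the actual values.

```python
import numpy as np

# Correlation analysis sketch: Pearson r and the dashed least-squares line
# for 12 (P, Delta S) pairs.  All numbers below are fake placeholders.

pressure = np.linspace(0.45, 1.15, 12)        # MeV fm^-3 (fake)
scatter = 2.0 * np.sin(np.arange(12))         # fake model-to-model scatter
sum_rule = 120.0 + 35.0 * pressure + scatter  # fm^2 (fake)

r = np.corrcoef(pressure, sum_rule)[0, 1]       # Pearson r
slope, intercept = np.polyfit(pressure, sum_rule, 1)  # fitted line

print(f"r = {r:.3f}, Delta S = {slope:.1f} P + {intercept:.1f}")
```

With tight placeholder scatter the coefficient comes out close to 1; the looser spread of real Skyrme predictions is what lowers $r$ to the 0.8--0.9 level quoted in the captions.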
The rms proton, charge, and neutron radii in $^{90}$Zr
calculated by the HF model with the four interactions
SIII, SGII, SkI3 and SLy4 are shown in Table \ref{tab:hf-zr}.
The calculated charge radii of the SGII and SkI3 interactions
show reasonable agreement with the experimental values. However, there
is a factor of 2 difference in the neutron skin thickness $\delta_{np}$
between the two interactions.
As seen in Table \ref{tab:hf-zr},
the neutron skin thickness $\delta_{np}$ obtained by the SD sum rules
is consistent with the value previously obtained from
the proton scattering data. However, the experimental uncertainty
in the value
$\delta_{np} = (0.07\pm0.04)$ fm obtained by the SD sum rules is
half that obtained through the proton data. This small uncertainty will
help to disentangle the neutron matter EOS using the strong
correlation with the neutron skin thickness.
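The extraction of $\delta_{np}$ from $\Delta S$ can be sketched with the standard model-independent SD sum rule, $\Delta S = S_- - S_+ = (9/4\pi)(N\langle r^2\rangle_n - Z\langle r^2\rangle_p)$, solved for the neutron rms radius. Here $N=50$, $Z=40$ are for $^{90}$Zr; the proton rms radius $r_p = 4.19$\,fm is an assumed input close to the HF values discussed in the text, not a quoted experimental number.

```python
import math

# Neutron skin from the model-independent SD sum rule
# Delta S = (9 / 4 pi) (N <r^2>_n - Z <r^2>_p), solved for <r^2>_n^{1/2}.

def skin_from_sum_rule(delta_s, n_neut, z_prot, r_p):
    """Return (r_n, delta_np) in fm from the SD sum rule value (fm^2)."""
    r2_n = (4.0 * math.pi * delta_s / 9.0 + z_prot * r_p ** 2) / n_neut
    r_n = math.sqrt(r2_n)
    return r_n, r_n - r_p

r_n, skin = skin_from_sum_rule(147.0, 50, 40, 4.19)
print(f"r_n = {r_n:.2f} fm, delta_np = {skin:.2f} fm")  # -> delta_np = 0.07 fm
```

Propagating the $\pm13$\,fm$^2$ sum-rule uncertainty through the same relation reproduces the $\pm0.04$\,fm error quoted above.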
The experimental skin thickness
$\delta_{np} = (0.07\pm0.04)$\,fm is close to the HF results
of SLy4, as well as SGII and SIII.
The SkI3 result is not favored by the empirical value, even taking the
experimental uncertainties into consideration. We should also note that the
experimental peak energy
of $t_-$ SD strength
in $^{90}$Nb coincides with the calculated peak energy of the SLy4
interaction, while that of SkI3 is 4 MeV above the experimental value,
as seen in Fig. \ref{fig:zr90_sd_exp}.
While all interactions lie within the experimental value
$\Delta S=(147\pm 13)$\,fm$^2$ in Fig. \ref{fig:zr90_np},
the empirical data favor the interactions indicated by the numbers 2(SIII),
11(SkX), 8(SLy4), 7(SkM*) and 6(SkM).
These interactions suggest a soft neutron matter EOS
with the pressure P($\rho_n$=0.1\,fm$^{-3}$) = (0.65$\pm$0.2) MeV\,fm$^{-3}$.
Thus, the preferred nuclear matter symmetry energy extracted
from the SD experiment
is found to be
J = (30$\pm$2) MeV as a result of the strong correlation between the
neutron skin thickness and the symmetry energy \cite{Furn,Dani}.
Table \ref{tab:hf-pb} tabulates the rms proton, charge
and neutron radii in $^{208}$Pb
calculated by the
HF model, along with the experimental charge radius.
The HF results of SGII and SLy4 account for
the experimental charge radius, while there is a large variation in the predictions for the neutron skin thickness $\delta_{np}$.
The empirical value of the neutron skin thickness
$\delta_{np}$ in $^{208}$Pb was
obtained by proton scattering experiments.
However, the values obtained depend very much on the experiments
and analyses. That is,
the experimental errors are still large and some of
the values obtained have
no overlap, even when the uncertainty in the analyses is taken into account;
$\delta_{np} = (0.14 \pm 0.02)$ fm in ref.
\cite{Hoffman}, $\delta_{np} = (0.20 \pm 0.04)$ fm in
ref. \cite{Starodubsky}
and $(0.083 < \delta_{np} < 0.111)$ fm in ref. \cite{pscatt2}.
We quote in Table \ref{tab:hf-pb} the value in ref. \cite{pscatt2}
where the analyses were
performed comprehensively with many different sets
of data including those adopted in refs. \cite{Hoffman,Starodubsky}.
Although these results depend on the effective nucleon-nucleon
interactions in nuclei used in the analysis,
the comprehensive study of proton scattering
in ref. \cite{pscatt2}
reports rather small neutron skin thicknesses, even smaller than the
smallest value in Table
\ref{tab:hf-pb} obtained using the SIII interaction. Again, this small $\delta_{np}$
suggests a
soft neutron matter EOS similar to the conclusion reached by the
SD sum rules of $^{90}$Zr.
The charge exchange $^{208}$Pb($^3$He,$t$)$^{208}$Bi
reaction data \cite{Aki}
show an SD peak in $^{208}$Bi
at $E_{x}$ = 24.8$\pm$0.8 MeV measured
from the ground state of $^{208}$Pb, as marked by an arrow
in Fig. \ref{fig:pb_sdmp}. This peak position is close to the
calculated value of the SGII interaction, while the SkI3 peak is a few
MeV higher than the empirical value. This comparison may exclude the prediction
by SkI3, which gives a hard neutron matter EOS in Fig. \ref{fig:pb208_np}
marked by the number 10.
The neutron skin thickness was determined by the giant dipole
resonance experiment to be $\delta_{np}=(0.19\pm0.09)$\,fm \cite{GDR}.
This analysis depends on
the adopted transition density and also
the optical potentials so that the result is highly
model-dependent. We
definitely need more quantitative information, i.e.,
model-independent information on the
neutron skin thickness in $^{208}$Pb
for precise determination of the neutron matter EOS as well as the
isovector nuclear matter properties. To this end, the charge
exchange SD experiments of $^{208}$Pb will provide useful model-independent
information with the same accuracy as the parity violation electron
scattering experiment.
\section{Summary}
We have investigated the SD excitations in $^{90}$Zr and $^{208}$Pb
using the
HF + RPA model with four
Skyrme interactions, viz., SIII, SGII, SkI3 and SLy4.
It is shown that
the Landau damping effect plays an important role in explaining
the large observed width of SD resonance, while
the coupling to the continuum is rather weak.
Among the four interactions, the peak position of the
experimental $t_-$ SD strength in $^{90}$Nb
is well described by the SLy4 interaction,
while the results of SIII and SGII are also acceptable.
For the $t_+$ excitation of $^{90}$Zr,
a two-peak structure was found in both
the experimental and calculated results. The
SLy4 and SkI3 results showed good
agreement with the observed low energy peak.
We pointed out that the calculated results need a quenching factor
quf$\simeq$0.68 to allow a quantitative comparison with the
experimental data up to $E_x$ = 36(40) MeV for the $t_- (t_+)$ channel in Fig.
\ref{fig:zr90_sd_exp}.
About 30\% of the NEWSR value is found in the excitation energy above
$E_x = $36(40) MeV for the $^{90}$Zr(p,n) $^{90}$Nb ($^{90}$Zr(n,p) $^{90}$Y)
experiments.
The calculated SD sum rule $\Delta S=S_- -S_+$
shows good saturation properties above $E_x = $40 MeV without any
quenching factor relative to
the observed data despite the fact that sum rules $S_-$ and $S_+$
themselves
increase gradually above $E_x\geq$40 MeV.
The neutron skin thickness $\delta_{np} = (0.07\pm0.04)$\,fm
extracted from the SD sum rules is
close to the calculated values obtained using SLy4 as well as SIII and SGII.
However, the extracted value does not favor
the SkI3 interaction which gives almost twice as large a neutron
skin thickness as SIII and SGII.
This is indicative of the soft neutron matter EOS induced by the
strong linear correlation between the neutron matter EOS and the neutron
skin thickness.
We showed that the SD strength of the $t_-$ excitation of $^{208}$Pb
has a large width due to the Landau damping effect.
In contrast, the $t_+$ excitation of $^{208}$Pb turns out to be
a single peak in a rather low energy region because of the strong
Pauli blocking effect of the excess neutrons.
The peak of the $t_-$ SD strength was observed by $^{208}$Pb($^3$He,t)
$^{208}$Bi at $E_x\sim$25 MeV. This peak energy
coincides with the peak calculated using
the SGII interaction, while the SkI3 interaction yields a peak that is a few MeV higher
than the empirical peak.
Thus,
the empirical SD sum rule values of $^{90}$Zr and the observed peak
energies of the $t_-$ SD strength distributions in $^{90}$Nb and $^{208}$Bi
indicate a soft neutron matter EOS with
a pressure of
P($\rho_n$=0.1\,fm$^{-3})$ = (0.65$\pm$0.2) MeV\,fm$^{-3}$.
The nuclear matter symmetry energy is also
determined to be
J = (30$\pm$2) MeV from the strong correlation between the
neutron skin thickness and the symmetry energy.
In order to draw a more definite conclusion on the SD sum rules,
as well as the neutron skin thickness and the neutron matter EOS,
we need quantitative experimental work to obtain
the SD sum rules in heavy nuclei like $^{208}$Pb, both in
the $t_-$ and $t_+$ channels.
\section*{Acknowledgments}
The authors would like to express their thanks to I. Hamamoto
for useful discussions.
This work was supported in part by Grant-in-Aid for Scientific Research
No. 16540259 and No. 17002003 from
the Ministry of Education, Science, Culture and Sports and the
National Natural Science Foundation of China (No.~10605018).
\section{Introduction}
\label{sec:intro}
Recent observations indicate that probably only a small fraction of about
$7\pm3$\% of O-type stars with masses exceeding 18~${\rm M}_\odot$
\citep{Grunhut2017}
and about $6\pm3$\% of early B- and O-type stars
\citep{Schoeller2017}
have measurable, mostly dipolar magnetic fields. Among the polarimetrically
studied O-type stars, all five Galactic stars with Of?p classification
possess measurable magnetic fields. These stars exhibit recurrent, and
apparently periodic, spectral variations in Balmer, \ion{He}{i},
\ion{C}{iii}, and \ion{Si}{iii} lines, sharp emission or P\,Cygni profiles in
\ion{He}{i} and the Balmer lines, and a strong \ion{C}{iii} blend in emission
around 4650\,\AA{}
\citep{Walborn1972}.
The presence of variable emission in the \ion{C}{iii} blend and in H$\alpha$
in Of?p stars is indicative of circumstellar structure, related to their
magnetospheres. Previous measurements of the mean longitudinal magnetic field
for the O7f?p star \mbox{NGC\,1624-2} carried out with ESPaDOnS (Echelle
SpectroPolarimetric Device for the Observation of Stars) at the
Canada-France-Hawaii Telescope between 2012 February 1 and 9 indicated
$\bz =5.35\pm0.5$\,kG with a polar magnetic field strength of about 20\,kG,
which is the strongest magnetic field ever measured in an O-type star
\citep{Wade2012a}.
All the other Of?p stars with studied magnetic field geometries have
$B_{\rm pole}\le2.6$\,kG
\citep{David-Uraz2019}.
The existence of a giant wind-fed dynamical magnetosphere around
\mbox{NGC\,1624-2} was discussed by
\citet{Petit2015}.
A very recent study by
\citet{Kurtz}
reported the detection of coherent pulsation modes in \mbox{NGC\,1624-2}
based on Transiting Exoplanet Survey Satellite
\citep[{\em TESS};][]{tess}
high-cadence photometry.
The study of spectral variability using archival optical spectroscopic
observations indicated $v\,\sin\,i \le3$\,\kms{} and a rotation period of
$157.99\pm0.94$\,d
\citep{Wade2012a}.
Due to the presence of the strong magnetic field, slow rotation, and the low
projected rotational velocity, a few spectral lines showed magnetic splitting
into Zeeman components corresponding to a maximum mean magnetic field modulus
of about $\langle B \rangle =14\pm1$\,kG. Furthermore,
\citet{Wade2012a}
reported the detection of reversed Stokes~$V$ profiles associated with weak,
high-excitation emission \ion{O}{iii} lines, and suggested that they may form
in the low-velocity plasma confined in closed magnetic loops above the
stellar surface. They measured a longitudinal field $\bz =2.58\pm0.7$\,kG,
which is by more than a factor of 2 smaller than the field measured using
photospheric absorption lines. On the other hand, the emission lines
\ion{He}{i}~5876, 6678, and 7065 exhibited Stokes~$V$ profiles with signs
consistent with those of the photospheric absorption lines, but yielding a
negative mean longitudinal magnetic field. Unfortunately, no field
measurements using these lines were presented in the work of
\citet{Wade2012a}.
The \ion{He}{i} emission lines are marked in the paper of
\citeauthor{Wade2012a}
as ``wind lines''. This expression was already used by
\citet{Howarth}
for H$\alpha$, where the authors discuss some lines as contaminated by
``windy'' emission. Several studies of magnetic O-type stars concluded that
these lines are not formed in the wind, but are possibly resulting from
confined circumstellar material
\citep[e.g.][]{Grunhut2009}.
In the context of the magnetically confined wind-shock model, it was
suggested that most emission comes from scattering by the cooling disk and
very little from the wind
\citep{Donati2002}.
The detected remarkable characteristics of \mbox{NGC\,1624-2} make it an
excellent laboratory to study the impact of a very strong magnetic field on
the behaviour of different elements in the stellar photosphere and
circumstellar environment. In this work, we discuss for the first time the
magnetic field geometry of \mbox{NGC\,1624-2} based on the variability of the
longitudinal magnetic field measured using photospheric lines, the
longitudinal field measurements using the emission \ion{O}{iii} and
\ion{He}{i} lines, and the variability of the magnetic field modulus over the
rotation period. Finally, using the {\sc NIRVANA} code, we discuss the first
magnetohydrodynamical (MHD) numerical simulations carried out to understand
the field geometry, mass distribution, and the gas motions in the vicinity of
\mbox{NGC\,1624-2}.
\section{Observations}
\label{sect:obs}
A few polarimetric observations were obtained at the beginning of 2012
\citep{Wade2012a}
with the ESPaDOnS spectropolarimeter installed at the 3.6\,m
Canada-France-Hawaii Telescope (CFHT) together with one observation from its
twin Narval, installed at the 2\,m Bernard Lyot Telescope (BLT) on Pic-du-Midi.
Six additional ESPaDOnS observations from 2012 September to 2013 January were
presented by
\citet{Grunhut2017}.
Subsequently, sixteen ESPaDOnS observations were obtained between 2013 August
and 2015 September. They are described in more detail in
\citet{daviduraz2020}.
All the ESPaDOnS data are publicly available in the CFHT Science
Archive\footnote{https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/cfht/}
and both ESPaDOnS and Narval data can be retrieved from the
PolarBase\footnote{http://polarbase.irap.omp.eu/} archive
\citep{PolarBase}.
Additionally, one Potsdam Echelle Polarimetric and Spectroscopic Instrument
\citep[PEPSI;][]{pepsi2015,pepsi2018}
spectrum has been recorded in linear polarized light in 2017 October at the
2$\times$8.4 m Large Binocular Telescope (LBT) on Mt.~Graham, Arizona and one
unpolarized spectrum was also obtained with the Astrophysical Research
Consortium Echelle Spectrograph
\citep[ARCES;][]{arces}
on the ARC 3.5\,m telescope at the Apache Point Observatory (APO) on 2019
March~1.
\begin{table}
\centering
\caption{
Logbook of the observations. The columns give the telescope and instrument
configuration, the heliocentric Julian date (HJD), the exposure time, and
the $S/N$ around 6100\,\AA{}. HJDs followed by a $-$ or \textbar{} indicate
subsequent spectra that were combined to increase the $S/N$ in Stokes~$V$
profiles that were used for the $\bz$ analysis. The HJD followed by a $\#$
indicates the spectrum with the overall lowest $S/N$, and which was not used
in our measurements. HJDs followed by a $*$ indicate noisy spectra that were
not combined with other spectra obtained at similar epochs.
}
\label{T:obs}
\begin{tabular}{llcc}
\noalign{\smallskip}\hline \noalign{\smallskip}
\multicolumn{1}{l}{Configuration} &
\multicolumn{1}{c}{HJD} &
\multicolumn{1}{c}{Exp.\ time} &
\multicolumn{1}{c}{$S/N$} \\
&
2\,450\,000+ &
[s] &
\\
\noalign{\smallskip}\hline \noalign{\smallskip}
CFHT + ESPaDOnS & 55958.719 & 2400 & 109 \\
CFHT + ESPaDOnS & 55959.719 & 2400 & 88 \\
CFHT + ESPaDOnS & 55960.717 & 2400 & 71 \\
CFHT + ESPaDOnS & 55961.717 & 2400 & 112 \\
CFHT + ESPaDOnS & 55966.725 & 2400 & 90 \\
TBL + Narval & 56011.330 & 4800 & 78 \\
CFHT + ESPaDOnS & 56197.950$-$ & 5400 & 146 \\
CFHT + ESPaDOnS & 56198.014$-$ & 5400 & 146 \\
CFHT + ESPaDOnS & 56200.974 \textbar & 5400 & 130 \\
CFHT + ESPaDOnS & 56201.039 \textbar & 5400 & 148 \\
CFHT + ESPaDOnS & 56270.751$-$ & 5400 & 115 \\
CFHT + ESPaDOnS & 56270.816$-$ & 5400 & 138 \\
CFHT + ESPaDOnS & 56285.761$*$ & 5400 & 93 \\
CFHT + ESPaDOnS & 56285.826 \textbar & 5400 & 127 \\
CFHT + ESPaDOnS & 56285.890 \textbar & 5400 & 131 \\
CFHT + ESPaDOnS & 56293.733$-$ & 5400 & 136 \\
CFHT + ESPaDOnS & 56293.798$-$ & 5400 & 138 \\
CFHT + ESPaDOnS & 56295.861$\#$ & 5400 & 19 \\
CFHT + ESPaDOnS & 56532.126 & 5160 & 156 \\
CFHT + ESPaDOnS & 56534.063 \textbar & 5160 & 154 \\
CFHT + ESPaDOnS & 56534.124 \textbar & 5160 & 135 \\
CFHT + ESPaDOnS & 56549.039$-$ & 5160 & 146 \\
CFHT + ESPaDOnS & 56549.102$-$ & 5160 & 141 \\
CFHT + ESPaDOnS & 56561.028 \textbar & 5160 & 150 \\
CFHT + ESPaDOnS & 56561.091 \textbar & 5160 & 123 \\
CFHT + ESPaDOnS & 56613.913$-$ & 5160 & 112 \\
CFHT + ESPaDOnS & 56614.045$-$ & 5160 & 125 \\
CFHT + ESPaDOnS & 56621.039 \textbar & 5160 & 140 \\
CFHT + ESPaDOnS & 56621.102 \textbar & 5160 & 135 \\
CFHT + ESPaDOnS & 56665.828$-$ & 5160 & 123 \\
CFHT + ESPaDOnS & 56665.889$-$ & 5160 & 149 \\
CFHT + ESPaDOnS & 57289.978$*$ & 5160 & 82 \\
CFHT + ESPaDOnS & 57290.041 \textbar & 5160 & 106 \\
CFHT + ESPaDOnS & 57291.059 \textbar & 5160 & 108 \\
LBT + PEPSI & 58042.022 & 3600 & 70 \\
ARC + ARCES & 58543.653 & 3600 & 98 \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\end{tabular}
\end{table}
The logbook of all available observations is presented in Table~\ref{T:obs}.
The recorded linear polarization PEPSI Stokes~$Q$ and $U$ spectra and the ARCES
spectrum were however very noisy, hence only the Stokes~$I$ spectrum was used
for measuring the magnetic field modulus. The resolving power of all archival
observations is $\sim 65\,000$, while the PEPSI spectrum has a resolution of
130\,000 and the ARCES spectrum of only 31\,500. As most of the spectra have
moderate signal-to-noise ratios ($S/N$)
in the Stokes~$I$ spectra, mostly in the range of 100--150, some spectra
obtained during the same night or, in one case, during subsequent nights have
been combined to increase the signal for the Stokes~$V$ profiles. However, in
cases where the $S/N$ of the spectrum was significantly lower than that of
the other spectra obtained during the same night, only the spectra with
better $S/N$ were used for the analysis of the mean longitudinal magnetic
field. Some spectra with low $S/N$ have been included in the analysis of the
line intensity variability or used for the measurements of the mean magnetic
field modulus, if the Zeeman splitting was detectable on them. Further, to
increase the accuracy of the mean longitudinal magnetic field determination,
we applied the least-squares deconvolution technique (LSD) described in
detail by
\citet{Donati1997}.
To create line masks for the measurements, we have used the Vienna Atomic
Line Database
\citep[VALD; e.g.,][]{Kupka2011,VALD3},
adopting the stellar parameters of \mbox{NGC\,1624-2}, $\teff=35\,000$\,K
and $\logg=4.0$
\citep{Wade2012a}.
The mask for the photospheric absorption lines includes spectral lines
belonging to the elements He, O, and C and is identical to that presented in
Table\,7 in the work of
\citet{Wade2012a},
apart from \ion{He}{i} $\lambda$7281, which is heavily distorted by numerous
strong telluric lines in most of the spectra. In massive magnetic early
B-type stars, He and Si are usually inhomogeneously distributed on the
stellar surface, with the He abundance spots located close to the magnetic
poles and the Si abundance spots close to the magnetic equator. Thus, to test
a possible inhomogeneous distribution of Si, we also included in our line
mask the \ion{Si}{iv} lines at $\lambda$4631 and $\lambda$4654. The available
data set allows us, in addition, for the first time to study the variability
of the high-excitation emission \ion{O}{iii} lines and of the emission wind
\ion{He}{i} lines over the rotation cycle.
\section{Magnetic field measurements}
\label{sect:B}
\subsection{Mean longitudinal magnetic field $\bz$}
\begin{figure}
\centering
\includegraphics[width=0.80\columnwidth]{Bvsphase_e.eps}
\caption{
Mean longitudinal magnetic field $\bz$ measured using two different
line masks and using only the high-excitation line \ion{O}{iii}
$\lambda$7455 against the rotation phase. The dashed line represents
the best sinusoidal least-squares fit solution for the phase curve
defined by our magnetic field measurements (see text).
}
\label{fig:Bphase}
\end{figure}
\begin{table*}
\centering
\caption{
Longitudinal magnetic field measurements of \mbox{NGC\,1624-2}. The dates
are as in Table~\ref{T:obs}, but with the average of the combined dates if
observations were co-added. The rotation phases are listed in Column~2,
followed by the measurements using the whole sample of photospheric
absorption lines and the sample without the \ion{Si}{iv} lines. In
Columns~5, 6, and 7 we list the measurements using the high-excitation
emission \ion{O}{iii} $\lambda$7455 line and the wind \ion{He}{i}
$\lambda$5876 and \ion{He}{i} $\lambda$6678 lines.}
\label{T:Bz}
\begin{tabular}{l c r@{$\pm$}l r@{$\pm$}l r@{$\pm$}l r@{$\pm$}l r@{$\pm$}l}
\hline
\multicolumn{1}{c}{Date} &
\multicolumn{1}{c}{Phase} &
\multicolumn{10}{c}{$\bz$ (G)} \\
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{2}{c}{ALL} &
\multicolumn{2}{c}{ALL $-$ Si} &
\multicolumn{2}{c}{\ion{O}{iii} $\lambda$7455} &
\multicolumn{2}{c}{\ion{He}{i} $\lambda$5876} &
\multicolumn{2}{c}{\ion{He}{i} $\lambda$6678} \\
\hline
55958.719 & 0.018 & 2944 & 360 & 4438 & 433 & 2324 & 851 & $-$2713 & 314 & $-$7420 & 791 \\
55959.719 & 0.024 & 3787 & 261 & 4125 & 425 & \multicolumn{2}{c}{} & $-$2538 & 294 & $-$6149 & 903 \\
55960.717 & 0.030 & 3710 & 540 & 4518 & 721 & \multicolumn{2}{c}{} & $-$1418 & 453 & $-$7050 & 1050 \\
55961.717 & 0.037 & 2689 & 376 & 3189 & 490 & 1899 & 761 & $-$1818 & 259 & $-$6838 & 800 \\
55966.724 & 0.068 & 2975 & 406 & 3905 & 573 & \multicolumn{2}{c}{} & $-$2869 & 401 & $-$4837 & 1066 \\
56011.330 & 0.351 & 1048 & 548 & 1315 & 549 & \multicolumn{2}{c}{} & $-$820 & 1016 & $-$2417 & 1813 \\
56197.982 & 0.532 & $-$80 & 137 & $-$74 & 139 & 578 & 524 & 1674 & 224 & $-$65 & 468 \\
56201.006 & 0.551 & $-$848 & 164 & $-$316 & 199 & 394 & 735 & 182 & 336 & 312 & 508 \\
56270.783 & 0.993 & 3314 & 304 & 3456 & 430 & 1846 & 544 & $-$2550 & 235 & $-$4074 & 750 \\
56285.858 & 0.088 & 3239 & 269 & 3945 & 376 & 1189 & 635 & $-$3201 & 185 & $-$4074 & 523 \\
56293.765 & 0.138 & 2688 & 279 & 2981 & 334 & \multicolumn{2}{c}{} & $-$2176 & 263 & $-$4121 & 595 \\
56532.126 & 0.647 & 1332 & 178 & 1284 & 215 & \multicolumn{2}{c}{} & $-$1792 & 504 & $-$1097 & 546 \\
56534.093 & 0.659 & 1478 & 190 & 1992 & 282 & 920 & 629 & $-$2150 & 379 & $-$ 309 & 569 \\
56549.070 & 0.754 & 2317 & 234 & 2800 & 306 & 1415 & 690 & $-$3524 & 232 & $-$3669 & 540 \\
56561.059 & 0.830 & 3195 & 273 & 3893 & 411 & 2454 & 556 & $-$3727 & 265 & $-$2431 & 567 \\
56613.979 & 0.165 & 2457 & 310 & 2559 & 300 & 1872 & 723 & $-$5120 & 488 & $-$5240 & 783 \\
56621.070 & 0.210 & 3209 & 311 & 2485 & 348 & \multicolumn{2}{c}{} & $-$7846 & 460 & $-$3324 & 628 \\
56665.858 & 0.493 & 363 & 128 & 272 & 144 & \multicolumn{2}{c}{} & $-$480 & 897 & 15 & 697 \\
57290.550 & 0.447 & 73 & 195 & $-$244 & 249 & \multicolumn{2}{c}{} & 155 & 1058 & $-$620 & 732 \\
\hline
\end{tabular}
\end{table*}
The mean longitudinal magnetic field is determined by computing the
first-order moment of the LSD Stokes~$V$ profile according to
\citet[][]{Mathys1989}:
\begin{equation}
\left<B_{\mathrm z}\right> = -2.14 \times 10^{11}\frac{\int \upsilon V
(\upsilon){\mathrm d}\upsilon }{\lambda_{0}g_{0}c\int
[I_{c}-I(\upsilon )]{\mathrm d}\upsilon},
\end{equation}
\noindent
where $\upsilon$ is the velocity shift from the line centre, in \kms, and
$\lambda_{0}$ and $g_{0}$ are the normalization values of the line weights.
We adopted for them the average values of, respectively, the
wavelengths (in nm) and the effective Land\'e factors of all the lines used
to compute the LSD profile. The results of our magnetic field measurements
using the line mask for the photospheric absorption lines, including the Si
lines, the line mask without the Si lines, and the measurements using the
high excitation emission line \ion{O}{iii} $\lambda$7455 and the strongest
emission lines \ion{He}{i} $\lambda$5876 and \ion{He}{i} $\lambda$6678 are
presented in Fig.~\ref{fig:Bphase} and in Table~\ref{T:Bz} in Columns~3 to 7.
All LSD profiles for the selected line masks, as well as Stokes~$I$, $V$,
and null profiles for the individual emission lines are shown in
Figs.~\ref{afig:IVNabsl}--\ref{afig:IVN7065} in the Appendix. The average
effective Land\'e factors used for the normalization for the line mask
including the Si lines and the mask without the Si lines are 1.55 and 1.24,
respectively. The corresponding wavelengths are 5013 and 5224\,\AA. For the
profiles calculated using all lines and the line list without the Si lines an
integration was made from $-120$ to $+25$\,\kms, for \ion{O}{iii}
$\lambda$7455 from $-70$ to $+10$\,\kms, for \ion{He}{i} $\lambda$5876 and
for \ion{He}{i} $\lambda$6678 from $-200$ to $+150$\,\kms.
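The first-moment measurement of Equation~(1) can be sketched numerically as follows. The Gaussian intensity profile and antisymmetric Stokes~$V$ signature below are synthetic test inputs, not NGC\,1624-2 data; $\lambda_0=522.4$\,nm and $g_0=1.24$ are the normalization values quoted above for the mask without the Si lines.

```python
import numpy as np

# Numerical sketch of Eq. (1): <Bz> as the first-order moment of the LSD
# Stokes V profile (Mathys 1989), on synthetic profiles.

def _trapz(y, x):
    """Trapezoidal integral (written out explicitly for clarity)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def bz_first_moment(v_kms, stokes_i, stokes_v, lam0_nm, g0):
    """<Bz> in G from the first moment of Stokes V; lam0 in nm, v in km/s."""
    c_kms = 2.99792458e5
    num = _trapz(v_kms * stokes_v, v_kms)
    den = lam0_nm * g0 * c_kms * _trapz(1.0 - stokes_i, v_kms)
    return -2.14e11 * num / den

v = np.linspace(-120.0, 25.0, 400)   # integration range used in the text
line = np.exp(-((v + 47.5) / 20.0) ** 2)
stokes_i = 1.0 - 0.3 * line          # synthetic absorption profile
stokes_v = 1e-4 * (v + 47.5) * line  # synthetic antisymmetric V signature

print(f"<Bz> = {bz_first_moment(v, stokes_i, stokes_v, 522.4, 1.24):.0f} G")
# -> about -73 G for this synthetic input
```

Flipping the sign of Stokes~$V$ flips the sign of $\left<B_{\mathrm z}\right>$, as expected from the odd first-order moment.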
In most cases the false alarm probability (FAP) for the field detections is
less than $10^{-6}$. According to
\citet{Donati1992},
a Zeeman profile with FAP $\leq 10^{-5}$ is considered as a definite
detection, $10^{-5} <$ FAP $\leq 10^{-3}$ as a marginal detection, and FAP
$> 10^{-3}$ as a non-detection. Our measurements show that the $\bz$ values
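The quoted detection thresholds amount to a simple three-way classification of each LSD Stokes~$V$ profile, sketched here as a small helper:

```python
# Detection criteria of Donati et al. (1992), as quoted in the text.

def detection_class(fap):
    """Classify a false alarm probability into the three quoted classes."""
    if fap <= 1e-5:
        return "definite detection"
    if fap <= 1e-3:
        return "marginal detection"
    return "non-detection"

print(detection_class(1e-7))  # -> definite detection
```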
become lower if the \ion{Si}{iv} lines are included in the line mask. A
possible scenario to explain these measurements would be an inhomogeneous
surface element distribution with a region of Si concentration located closer
to the magnetic equator. It is also possible that we observe a filling of the
lines by emission. Our $\bz$ measurements in the range from $-0.2$ to 4.5\,kG
obtained for the sample of the photospheric absorption lines are in good
agreement with the $\bz$ values reported by
\citet{Grunhut2017},
but disagree with the measurement of $\bz=5.35$\,kG reported by
\citet{Wade2012a}.
Assuming a limb-darkening coefficient $u=0.3$
\citep{Claret2011}
and a centred dipolar magnetic field, we estimate the dipole strength of
\mbox{NGC\,1624-2} $B_{\rm d}\ge16$\,kG according to Equation~(1) of
\citet{Preston1967}.
The dashed line in Fig.~\ref{fig:Bphase} represents the best sinusoidal
least-squares fit solution for the phase curve defined by our magnetic field
measurements using the line mask for photospheric absorption lines, without
the \ion{Si}{iv} lines. The rotation phase presented in Table~\ref{T:Bz} in
the second column is calculated using the rotation period of 157.99\,d
adopted by
\citet{Wade2012a}
and the initial epoch for the phase origin HJD$_{0} = 2455955.94\pm0.21$
corresponding to the maximum of the longitudinal field phase curve. The
fitted sinusoid has an amplitude $A_{\left<B_{\rm z}\right>} = 2132\pm364$\,G
and a mean value $\overline{\left<B_{\rm z}\right>} = 2063\pm328$\,G, thus
$\bz_{\rm max}=4195$\,G and $\bz_{\rm min}=-69$\,G.
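The two estimates above can be sketched together: (i) the sinusoidal least-squares fit $B_0 + A\cos 2\pi(\phi-\phi_0)$ to the $\bz$ phase curve, done here as a linear fit in cosine/sine components, and (ii) the lower limit on the dipole strength from Equation~(1) of Preston (1967) with $u=0.3$. The phases and field values below are synthetic stand-ins for the 19 measurements of Table~\ref{T:Bz}, not the real data.

```python
import numpy as np

# (i) Sinusoid fit to <Bz>(phase) via linear least squares on synthetic data.
rng = np.random.default_rng(7)
phase = rng.uniform(0.0, 1.0, 19)
bz = 2063.0 + 2132.0 * np.cos(2 * np.pi * phase) + rng.normal(0.0, 150.0, 19)

design = np.column_stack([np.ones_like(phase),
                          np.cos(2 * np.pi * phase),
                          np.sin(2 * np.pi * phase)])
b0, a_cos, a_sin = np.linalg.lstsq(design, bz, rcond=None)[0]
amplitude = np.hypot(a_cos, a_sin)

# (ii) Preston (1967): <Bz>_max / B_d <= (15 + u) / (20 (3 - u)) for a
# centred dipole; dropping the geometry factor gives a lower limit on B_d.
u = 0.3
b_dipole_min = (b0 + amplitude) * 20.0 * (3.0 - u) / (15.0 + u)

print(f"B0 = {b0:.0f} G, A = {amplitude:.0f} G, "
      f"B_d >= {b_dipole_min / 1e3:.1f} kG")
```

With the fitted extrema quoted above ($\bz_{\rm max}\simeq4.2$\,kG) this relation gives $B_{\rm d}\gtrsim15$\,kG; the $\ge16$\,kG quoted in the text presumably follows from the largest individual $\bz$ measurements.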
\begin{figure*}
\centering
\includegraphics[width=0.245\textwidth]{HeII_4686_dyn.eps}
\includegraphics[width=0.245\textwidth]{HeI_6678_dyn.eps}
\includegraphics[width=0.245\textwidth]{HeI_4713_dyn.eps}
\includegraphics[width=0.245\textwidth]{CIV_5801_dyn.eps}
\caption{Examples of line intensity variability of the emission lines
\ion{He}{ii} $\lambda$4686 and
\ion{He}{i} $\lambda$6678, and the photospheric absorption lines
\ion{He}{i} $\lambda$4713 and
\ion{C}{iv} $\lambda$5801 as a function of the rotation phase.
In the upper panel we present all profiles overplotted with the
observed weakest profiles highlighted in red colour.
The lower panels show the dynamic spectra of the difference
between the individual line profiles and the weakest emission or
absorption profile.
}
\label{fig:dyn4686}
\end{figure*}
Knowing the rotation period, we can estimate the equatorial velocity and the
inclination of the rotation axis of the star $i$. Assuming a radius
$R=9.7\,{\rm R}_{\odot}$
\citep{Petit2013}
and a projected rotation velocity $v\,\sin\,i\le3$\,\kms{}
\citep{Wade2012a},
we obtain $i\le75^\circ$. Using the well known relations developed by
\citet{Stibbs1950}
and
\citet{Preston1967}
for a centred magnetic dipole tilted to the rotation axis by angle $\beta$,
we estimate an obliquity angle $\beta \ge 15.5^{\circ}$. However, with
$v\,\sin\,i = 1$\,\kms, we obtain $i = 19^{\circ}$ and
$\beta = 72^{\circ}$, i.e.\ nearly a flip of the two axes, showing that the
exact geometry of the magnetic field can not be determined with the still too
loose constraint on $v\,\sin\,i$. The measured values of the longitudinal
magnetic field of \mbox{NGC\,1624-2} with predominantly positive polarity
imply that only one magnetic pole is observable over the rotation cycle. The
dynamic spectra of emission and photospheric lines show a single-wave
variation of their intensity: the intensity of the lines H$\alpha$,
\ion{He}{ii} $\lambda$6486, \ion{He}{i} $\lambda$5876 and $\lambda$6678 are
the strongest close to the positive extremum of $\bz$ whereas the
photospheric absorption lines of \ion{He}{i}, \ion{C}{iv}, \ion{Si}{iv}, and
the emission \ion{O}{iii} lines are the strongest close to the rotation phase
0.5. Examples of the variability of different lines are presented in
Fig.~\ref{fig:dyn4686}.
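The geometry estimates above follow from elementary relations: the inclination $i$ from $v\sin i$ with $R=9.7\,{\rm R}_\odot$ and $P_{\rm rot}=157.99$\,d, and the obliquity $\beta$ from the ratio $r=\bz_{\rm min}/\bz_{\rm max}$ through the oblique-rotator relation $\tan\beta\,\tan i = (1-r)/(1+r)$ (Stibbs 1950; Preston 1967). The sketch below uses the fitted extrema quoted above; it is an illustration, not the authors' code.

```python
import math

# Oblique-rotator geometry: inclination from v sin i, obliquity from the
# ratio of the <Bz> extrema.

R_SUN_KM = 6.957e5  # solar radius in km
V_EQ = 2.0 * math.pi * 9.7 * R_SUN_KM / (157.99 * 86400.0)  # ~3.1 km/s

def inclination_deg(v_sini_kms):
    """Inclination of the rotation axis from v sin i (deg)."""
    return math.degrees(math.asin(v_sini_kms / V_EQ))

def obliquity_deg(i_deg, bz_min=-69.0, bz_max=4195.0):
    """Magnetic obliquity beta for a centred oblique dipole (deg)."""
    r = bz_min / bz_max
    return math.degrees(math.atan(
        (1.0 - r) / (1.0 + r) / math.tan(math.radians(i_deg))))

i_hi = inclination_deg(3.0)  # upper limit v sin i = 3 km/s
i_lo = inclination_deg(1.0)  # alternative v sin i = 1 km/s
print(f"i = {i_hi:.0f} deg -> beta = {obliquity_deg(i_hi):.1f} deg")
print(f"i = {i_lo:.0f} deg -> beta = {obliquity_deg(i_lo):.1f} deg")
# -> i = 75 deg -> beta = 15.5 deg;  i = 19 deg -> beta = 71.8 deg
```

This reproduces both solutions quoted in the text, including the near-flip of the two axes between the $v\sin i = 3$ and 1\,\kms{} cases.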
\subsection{Mean magnetic field modulus $\langle B \rangle$}
\begin{table}
\centering
\caption{
The mean magnetic field modulus $\langle B \rangle$ values obtained in the
spectra with measurable splitting in the \ion{C}{iv} $\lambda$5812 line.
}
\label{T:Bmod}
\begin{tabular}{l c r@{$\pm$}l}
\hline
\multicolumn{1}{c}{Date} &
\multicolumn{1}{c}{Phase} &
\multicolumn{2}{c}{$\langle B \rangle$ (G)} \\
\noalign{\smallskip}\hline \noalign{\smallskip}
55959.719 & 0.024 & 9731 & 431 \\
55960.717 & 0.030 & 9433 & 526 \\
55961.717 & 0.037 & 9650 & 950 \\
55966.725 & 0.068 & 10927 & 300 \\
56011.330 & 0.351 & 11546 & 1053 \\
56197.950 & 0.532 & 11424 & 304 \\
56198.014 & 0.532 & 10975 & 409 \\
56200.974 & 0.551 & 11059 & 356 \\
56201.039 & 0.551 & 10990 & 360 \\
56270.751 & 0.993 & 10299 & 385 \\
56270.816 & 0.993 & 10973 & 491 \\
56285.826 & 0.088 & 9253 & 713 \\
56285.890 & 0.088 & 9682 & 423 \\
56293.733 & 0.138 & 10322 & 616 \\
56293.798 & 0.138 & 9932 & 799 \\
56532.126 & 0.647 & 11099 & 301 \\
56534.063 & 0.659 & 11384 & 310 \\
56534.124 & 0.660 & 11405 & 377 \\
56549.039 & 0.754 & 11586 & 497 \\
56549.102 & 0.754 & 11372 & 408 \\
56561.028 & 0.830 & 10616 & 379 \\
56613.913 & 0.165 & 10751 & 485 \\
56614.045 & 0.165 & 9127 & 950 \\
56621.039 & 0.210 & 10433 & 713 \\
56621.102 & 0.210 & 9510 & 475 \\
56665.828 & 0.493 & 10840 & 341 \\
56665.889 & 0.494 & 10803 & 346 \\
57289.978 & 0.444 & 11396 & 319 \\
57290.041 & 0.444 & 10886 & 294 \\
57291.059 & 0.451 & 11683 & 555 \\
58042.022 & 0.204 & 11122 & 600 \\
58543.653 & 0.379 & 10913 & 425 \\
\noalign{\smallskip}\hline \noalign{\smallskip}
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.80\columnwidth]{Bmod.eps}
\caption{
Mean magnetic field modulus $\langle B \rangle$ of
\mbox{NGC\,1624-2} measured using the split components of the
magnetically resolved line \ion{C}{iv} $\lambda$5812 against the
rotation phase.
}
\label{fig:Bmod}
\end{figure}
The availability of the variation curve of the mean magnetic field modulus
$\langle B \rangle$ allows us to put stronger constraints on the structure of
the magnetic field of \mbox{NGC\,1624-2} as compared to any other magnetic
O-type star with unsplit lines. Our measurements of $\langle B \rangle$,
which is the average over the visible stellar hemisphere of the modulus of
the magnetic field vector, weighted by the local line intensity, are
presented in Table~\ref{T:Bmod} and Fig.~\ref{fig:Bmod}. The field modulus
was calculated by fitting the magnetically resolved \ion{C}{iv} $\lambda$5812
line with two Gaussian profiles and using the relation given e.g.\ by
\citet{HubrigNesvacil2007}.
The uncertainties in the presented $\langle B \rangle$ values are due to the
accuracy of the measured wavelengths of the blue and red split line components.
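As an illustrative sketch of this conversion (not the fitting pipeline used for Table~\ref{T:Bmod}; the effective Land\'e factor below is an assumed input for the pseudo-doublet treatment, not a value adopted from this work), the standard splitting relation can be evaluated directly:

```python
def field_modulus(delta_lambda_A, lambda0_A, g_eff):
    """Mean magnetic field modulus <B> in gauss from the wavelength
    separation delta_lambda_A (Angstrom) of the two split components of a
    line at rest wavelength lambda0_A (Angstrom), using the standard
    relation <B> = delta_lambda / (4.67e-13 * g_eff * lambda0**2)."""
    return delta_lambda_A / (4.67e-13 * g_eff * lambda0_A**2)

# Illustrative call for C IV 5812 with an assumed effective Lande factor:
print(field_modulus(0.22, 5812.0, 1.33))  # roughly 1e4 G
```

Since the relation is linear in the separation, the measurement uncertainty in $\langle B \rangle$ scales directly with the wavelength accuracy of the two split components.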
Our measurements, indicating a field strength between 9 and 12\,kG, differ from
those reported by
\citet{Wade2012a},
who obtained $\langle B \rangle = 15$\,kG for the $\lambda$5801 line and
$\langle B \rangle = 13$\,kG for the $\lambda$5812 line using the Narval
spectrum. This difference can likely be explained by the Zeeman pattern
of the \ion{C}{iv} $\lambda$5812 line: in our measurements we interpret the
splitting as a pseudo-doublet, which leads to an underestimate of the field
modulus.
The approximately sinusoidal variation of $\bz$ and the ratio of the values
of the extrema of $\langle B \rangle$ indicate that there is an important
component of the field that is dipolar. Further, a comparison of the
variation curves of the mean magnetic field modulus and the mean longitudinal
magnetic field shows that the minimum of the mean magnetic field modulus
corresponds to the maximum of the mean longitudinal magnetic field. This
indicates that the field structure must significantly depart from a centred
dipole. It is possible that we see a small shift of the modulus minimum
with respect to the positive extremum of $\bz$, although the reality of
such a shift is only marginally supported, given the rather large dispersion
of the data points. Future confirmation of the presence of a shift is
necessary, as this would indicate that the magnetic field structure of
\mbox{NGC\,1624-2} slightly departs from a simple dipole.
\subsection{Emission lines}
We confirm the conclusion of
\citet{Wade2012a}
that the $\bz$ values measured using the high-excitation \ion{O}{iii}
emission lines, suggested to form in the low-velocity plasma confined in
closed magnetic loops above the stellar surface, are significantly lower than
those obtained for photospheric absorption lines. The $\bz$ values obtained
using the \ion{O}{iii} $\lambda$7455 line are in the range from 0.4 to
2.3\,kG and show a variability pattern similar to that detected for the
absorption lines.
Interestingly, we detect in the spectra the forbidden [\ion{O}{i}] 6300 and
6363 lines, indicating the presence of low-density atomic material in the
vicinity of \mbox{NGC\,1624-2}. The formation mechanism of these lines is
unclear. It was suggested that they originate from disk winds or regions
where stellar UV radiation impinges on the disk surface in pre-main
sequence Herbig Ae/Be stars
\citep[e.g.][]{Finkenzeller, Corcoran}.
It is also possible that the forbidden [\ion{O}{i}] 6300 and 6363 lines are
formed in magnetospheric accretion columns
\citep{Muzerolle}.
The variability of the strength of these lines is presented in
Fig.~\ref{bfig:nebular}. They are strongest at the rotational phases
0.03 and 0.35, but do not show a coherent rotational modulation. This
suggests that they are most likely formed beyond the Alfv\'en radius.
Our analysis of the observations of \mbox{NGC\,1624-2}, in the scenario of an
oblique magnetic rotator, indicates that stronger emission in the hydrogen and
helium lines is detected when the magnetically-confined cooling
disk is seen closer to face-on, while the emission contribution is reduced
when the cooling disk is seen closer to edge-on (see Fig.~\ref{fig:dyn4686}).
\begin{figure}
\centering
\includegraphics[width=0.235\textwidth]{He5876Bz_e.eps}
\includegraphics[width=0.235\textwidth]{He6678Bz_e.eps}
\caption{
As Fig.~\ref{fig:Bphase}, but $\bz$ measured using the
\ion{He}{i} 5876\,\AA{} and \ion{He}{i} 6678\,\AA{} lines.
}
\label{fig:HeBphase}
\end{figure}
No measurements using the \ion{He}{i} emission lines were presented in
the study of
\citet{Wade2012a}.
Our $\bz$ values are predominantly negative and, similar to the measurements
of other spectral lines, show clear variability over the rotation period,
with the negative $\bz$ extremum observed around phase 0, where the
photospheric absorption lines show a positive $\bz$ extremum. The
distribution of our measurement values over the rotation period obtained
using the \ion{He}{i} $\lambda$5876 and $\lambda$6678 lines, which are the
strongest in the $2^3P^0-n^3D$ and the $2^1P^0-n^1D$ series, respectively,
is presented in Fig.~\ref{fig:HeBphase}. The significant data point
dispersion and the presence of two outliers in the measurements of
\ion{He}{i} $\lambda$5876 close to the rotation phase 0.2 are caused by the
short-term variability of the line intensity on a time scale of several days,
and are likely a consequence of either the pulsational variability recently
detected by
\citet{Kurtz}
or a time-variable structure of gas flows within the magnetosphere
\citep{udDoula2013}.
The origin of the emission in the \ion{He}{i} lines in the red spectral
region is not well understood. A long time ago,
\citet{Auer}
showed that NLTE effects are quite small for \ion{He}{i} lines in the
blue-violet region of the spectrum, whereas the red \ion{He}{i} lines, in
particular \ion{He}{i} $\lambda$5876 and \ion{He}{i} $\lambda$6678, are
strongly affected by departures from LTE. A large discrepancy between
observations and their NLTE results occurred for \ion{He}{i} $\lambda$5876,
where the observed lines are consistently stronger than computed. However,
the presented NLTE calculations always remained in absorption. Only at the
highest temperatures of about 50\,000\,K did the NLTE results predict that
these lines come into emission.
Several mechanisms have since been suggested to explain the appearance of
the emission observed in these lines. To explain the emissions in the
\ion{He}{i} $\lambda$5876, $\lambda$6678 lines in the early type star
$\lambda$~Eri,
\citet{Smithetal}
suggested that emission can be produced in a dense slab that is optically
thick in the Lyman continuum. To produce a large enough column density for an
observable feature, the requirement was made that the slab must be cohesive
and confined in a magnetic loop to resist the acceleration of the ambient
wind.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{windIVN.eps}
\caption{
Examples of Stokes~$I$ (top), Stokes~$V$ (middle, shifted to 0.8
and multiplied by two) and null (bottom, shifted to 0.6) profiles of
the emission lines
\ion{He}{i} $\lambda$5876, \ion{He}{i} $\lambda$6678, and
\ion{He}{i} $\lambda$7065 at two phases.
}
\label{fig:windIVN}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{Vhisto.eps}
\caption{
Differences between the relative areas of the blue and red parts of
the Stokes~$V$ profiles calculated at different rotation phases for the
emission lines \ion{He}{i} $\lambda\lambda$5876, 6678, and 7065.
The lighter color represents the cases where the differences in the
areas are within the measurement uncertainties and the darker color
the cases where they are larger than the measurement uncertainties.
Error bars for each calculated blue and red part of the Stokes~$V$
profiles are presented at the top of the individual bars.
}
\label{fig:Vhisto}
\end{figure}
The presence of variable Zeeman features corresponding to a longitudinal
field of negative polarity observed in the \ion{He}{i} emission lines, while
high-excitation emission \ion{O}{iii} lines show the same polarity as the
photospheric absorption lines, is puzzling and currently difficult to
interpret. One alternative hypothesis could be that the \ion{He}{i}
Stokes~$I$ and $V$ profiles have a composite structure and present a
contribution of several components. In Fig.~\ref{fig:windIVN} we present the
Stokes~$I$, $V$, and null profiles for all three \ion{He}{i} emission lines
at two rotational phases. Indeed, depending on the line, the observed
Stokes~$I$ profiles indicate a rather complex variable structure consisting
of at least two or three components (see also
Figs.~\ref{afig:IVN5875}--\ref{afig:IVN7065}). It is quite possible that
there is a correspondence between the different parts of the Stokes~$I$
profiles and those of the Stokes~$V$ profiles. The resulting composite
Stokes~$V$ profiles would then present just a blend of the individual
Stokes~$V$ profiles corresponding to different parts of the Stokes~$I$
profiles. It is noteworthy that for the majority of the obtained
observations, the shape of the Stokes~$V$ profiles displayed in
Figs.~\ref{afig:IVN5875}--\ref{afig:IVN7065} cannot be explained in terms
of the Zeeman effect: if these profiles are indeed formed in a hydrostatic
stellar atmosphere lacking gradients in velocity and magnetic field as a
function of optical depth
\citep[e.g.][]{sanchez, lopez},
they always have the zero-order moment equal to zero, that is the integral of
Stokes~$V$ over the region of the spectral line in Stokes~$I$ is equal to
zero. The observed differences between the areas corresponding to the blue
and red parts of the Stokes~$V$ profiles calculated for all three \ion{He}{i}
emission lines are presented in Fig.~\ref{fig:Vhisto}. The differences were
calculated only at rotation phases where the noise has a lower impact on the
shape of the Stokes~$V$ profiles. As this figure shows, for all three lines
the differences between the areas corresponding to the blue and red parts of
the Stokes~$V$ profiles are significant.
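This null property is easy to verify numerically. The toy sketch below (purely illustrative; it is not the measurement pipeline behind Fig.~\ref{fig:Vhisto}) shows that for a synthetic antisymmetric, Zeeman-like Stokes~$V$ profile the blue- and red-lobe areas cancel, so a significant imbalance flags a profile that cannot arise from the Zeeman effect alone:

```python
import math

def trapezoid(y, x):
    """Trapezoidal integral of samples y over abscissae x."""
    return sum(0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i])
               for i in range(len(x) - 1))

def lobe_area_imbalance(wl, v, center):
    """|area(blue lobe)| - |area(red lobe)| of a Stokes V profile.
    For a profile formed purely by the Zeeman effect in a static
    atmosphere the integral of V over the line vanishes, so the
    two lobe areas cancel and the imbalance is ~0."""
    blue_x = [x for x in wl if x <= center]
    blue_y = [s for x, s in zip(wl, v) if x <= center]
    red_x = [x for x in wl if x >= center]
    red_y = [s for x, s in zip(wl, v) if x >= center]
    return abs(trapezoid(blue_y, blue_x)) - abs(trapezoid(red_y, red_x))

# Toy antisymmetric (derivative-of-Gaussian) profile: imbalance ~ 0.
xs = [i * 0.01 - 2.0 for i in range(401)]
vs = [-x * math.exp(-x * x) for x in xs]
print(lobe_area_imbalance(xs, vs, 0.0))  # close to 0
```

Damping one lobe of the same profile produces a large imbalance, which is the signature reported above for the \ion{He}{i} emission lines.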
The changes in emission strength of the He lines and the different sign of
the longitudinal magnetic field in these lines can also be understood within
the following scenario. For a dipole magnetic field, the magnetic field lines
reverse their orientation at the equator relative to the pole. If emitting
material accumulates in the magnetosphere at the magnetic equator, it
experiences a magnetic field of opposite orientation to that at the magnetic
pole. Thus the longitudinal magnetic field measured in the \ion{He}{i} lines
as seen in emission from the magnetospheric material at the magnetic equator
would have a sign opposite from the longitudinal field measured from a
photospheric absorption line. The extended region near the equator with an
elevated density seems like a plausible source of circumstellar emission (see
Fig.~\ref{fig:sim1} for an illustration). The magnetic field there has an
opposite orientation to that dominating the upper hemisphere of the
photosphere. Then, as the star rotates, the longitudinal component of the
magnetic field in the emitting region becomes weaker, the projected area
of the emitting region diminishes, and the region possibly becomes partially
occulted by the star.
Another alternative hypothesis could be related to the complex structure of the
magnetospheric circumstellar environment of \mbox{NGC\,1624-2} and the possible
presence of a reconnection-driven emission process. To understand the
behaviour of the gas flow and the mass distribution within the magnetosphere,
we carried out the first MHD numerical simulations for this star.
\section{MHD numerical simulations}
A dipolar magnetic field with a polar field strength of 16\,kG will strongly
alter the gas flow from the star and inhibit mass loss. The impact of the
magnetic field is described by the wind confinement parameter,
$\eta_* = B_{\rm eq}^2R_*^2/\dot{M}\varv_\infty$, where $B_{\rm eq}$ is the
field strength on the stellar surface at the magnetic equator, $R_*$ the
stellar radius, $\dot{M}$ the mass loss rate with no magnetic field, and
$\varv_\infty$ the terminal velocity of the stellar wind
\citep{2002ApJ...576..413U}.
For \mbox{NGC\,1624-2}, $\eta_*=1.5 \times 10^4$ was found by
\citet{Wade2012a},
resulting in an Alfv\'en radius of $11.3\,R_*$, which
indicates the extent of the magnetically-dominated region. Within that
region, the magnetic field will keep its dipole geometry and gas flowing from
the star at low to mid-latitudes will be trapped. Only gas originating from
the polar caps leaves the system unimpeded along open field lines.
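For orientation, these numbers can be reproduced from the definitions above. The sketch below is illustrative only: the Alfv\'en-radius scaling $R_{\rm A}/R_* \approx 0.3 + (\eta_* + 1/4)^{1/4}$ is the approximate relation commonly used in the magnetic massive-star literature, and the $\eta_*$ obtained from the wind parameters quoted later in this section comes out of order $10^4$ rather than exactly the published $1.5\times10^4$, which was derived with slightly different stellar parameters.

```python
MSUN_G = 1.989e33    # solar mass in g
RSUN_CM = 6.957e10   # solar radius in cm
YEAR_S = 3.156e7     # year in s

def eta_star(b_eq_gauss, r_star_rsun, mdot_msun_yr, v_inf_kms):
    """Wind confinement parameter eta_* = B_eq^2 R_*^2 / (Mdot v_inf)."""
    r = r_star_rsun * RSUN_CM
    mdot = mdot_msun_yr * MSUN_G / YEAR_S
    v = v_inf_kms * 1e5
    return b_eq_gauss**2 * r**2 / (mdot * v)

def alfven_radius(eta):
    """Approximate Alfven radius in stellar radii,
    R_A/R_* ~ 0.3 + (eta + 0.25)**0.25."""
    return 0.3 + (eta + 0.25) ** 0.25

# B_eq = B_pole / 2 for a dipole; wind parameters as quoted in the text.
eta = eta_star(8000.0, 10.0, 1.6e-7, 2785.0)
print(eta)                    # of order 1e4
print(alfven_radius(1.5e4))   # about 11.4 (quoted: 11.3 R_*)
print(alfven_radius(700.0))   # about 5.4
```

The same function with $\eta_* \approx 700$ recovers the $5.4\,R_*$ Alfv\'en radius used in the reduced-field simulation described below.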
To study the field geometry, mass distribution, and gas motions in the
vicinity of the star, we ran a series of numerical simulations. The general
model setup is described in
\citet{2017AN....338..868K}.
We used the {\sc NIRVANA} MHD code
\citep{2008CoPhC.179..227Z}
in spherical polar coordinates. The geometry is axisymmetric with respect to
the polar axis of the magnetic field and the gas is assumed isothermal, with
the gas temperature being the effective temperature of the star. No rotation
is included. As the model does not include cooling, magnetic diffusion has
been added with a diffusion coefficient of $10^{15}\,{\rm cm^2\,s^{-1}}$.
The stellar and wind parameters have been adopted from
\citet{Wade2012a}.
We thus use a stellar mass of 34 solar masses, a stellar radius of 10 solar
radii and an effective temperature of 35\,000\,K. The mass loss rate is
$1.6 \times 10^{-7} M_\odot {\rm yr^{-1}}$ and the terminal wind velocity
is 2785\,\kms{}. The start solution is a dipole magnetic field and a velocity
law given by
\begin{equation}
\varv(r) = \varv_\infty \left ( 1-\frac{R_*}{r}\right). \label{v0}
\end{equation}
The initial mass distribution is determined by the velocity law (\ref{v0})
and the mass loss rate, i.e.\
\begin{equation}
\rho=\frac{\dot{M}}{4 \pi \varv r^2}.
\end{equation}
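As a minimal sketch (not the {\sc NIRVANA} configuration itself), the start solution can be evaluated directly with the wind parameters quoted above; by construction the mass flux $4\pi r^2 \rho \varv$ equals $\dot M$ at every radius:

```python
import math

def wind_velocity(r_over_rstar, v_inf_kms):
    """Beta = 1 velocity law of the start solution: v = v_inf (1 - R_*/r)."""
    return v_inf_kms * (1.0 - 1.0 / r_over_rstar)

def wind_density(r_over_rstar, r_star_cm, mdot_gs, v_kms):
    """Initial density from mass continuity: rho = Mdot / (4 pi v r^2),
    returned in g cm^-3."""
    r_cm = r_over_rstar * r_star_cm
    return mdot_gs / (4.0 * math.pi * (v_kms * 1e5) * r_cm**2)

# Parameters from the text: R_* = 10 R_sun, Mdot = 1.6e-7 M_sun/yr,
# v_inf = 2785 km/s.
R_STAR = 10.0 * 6.957e10              # cm
MDOT = 1.6e-7 * 1.989e33 / 3.156e7    # g/s
for x in (1.5, 3.0, 5.0):
    v = wind_velocity(x, 2785.0)
    print(x, v, wind_density(x, R_STAR, MDOT, v))
```

The velocity vanishes at the stellar surface and approaches $\varv_\infty$ at large radii, so the initial density diverges towards the surface and falls off slightly faster than $r^{-2}$ in the inner wind.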
A radiative force term after
\citet{1975ApJ...195..157C}
with $\bar{Q}=550$ and $\alpha=0.55$ is used to drive the wind. The parameters
have been chosen such that without the magnetic field a stationary solution
with the correct wind parameters is produced. We then repeat the computation
with a dipole field included. To limit the cost in CPU time, the outer
boundary of the simulation box is placed at 5 stellar radii and a polar field
strength of 4\,kG is chosen. While this is smaller than the observed field
strength of 16\,kG, it still leads to an extended region of magnetic
confinement with $\eta_* \approx 700$ and an Alfv\'en radius of 5.4 stellar
radii. Within the simulation box, the magnetic field geometry stays close to
the original dipolar geometry and only a fraction of the gas flowing from the
stellar surface can escape the system. The mass loss rate found by
\citet{Wade2012a}
is based on a prescription by
\citet{2001A&A...369..574V}
that does not take the magnetic field into account. It therefore refers to
the amount of gas that would leave the system if there were no magnetic field.
\begin{figure}
\centering
\includegraphics[width=.23\textwidth]{ngc1624_dens}
\includegraphics[width=.23\textwidth]{ngc1624_dens_zoom}
\caption{
Mass density and magnetic field structure in the vicinity of the
star. {\em Left}: the whole simulation box. {\em Right}: zoom into the central
region. The unit of the time stamps is ks.
}
\label{fig:sim1}
\end{figure}
In the initial phase of the simulation run, the gas flow and mass density
undergo drastic changes. While the gas density at high latitude drops, gas
piles up in the equatorial plane, where a thin disk forms from which
filaments grow that are aligned with the magnetic field.
Figure~\ref{fig:sim1} shows a snapshot of the mass density taken at a time
when the system has reached a semi-stationary state insofar as the general
mass distribution with a high mass concentration at low to mid latitudes does
not evolve much any more. The shape and length of the individual filaments
is highly variable, though, and the system does not reach a truly stationary
state. Note that all the field lines shown are closed. Compared to the original
dipole, the field is slightly compressed by the gas load but the general
field topology is unchanged. This is generally different for weak surface
fields or at greater distances from the star, where the field lines are
stretched away from the star and a thin disk of outflowing material forms.
\begin{figure}
\centering
\includegraphics[width=.23\textwidth]{ngc1624_b1000_dens}
\caption{
Mass density and magnetic field from a simulation with
$B_0 = 1$\,kG.
}
\label{fig:sim2}
\end{figure}
Figure~\ref{fig:sim2} shows a snapshot from a simulation with a weaker field of
1\,kG. The magnetosphere is much less extended both in radius and latitude
and far away from the star the magnetic field is stretched out into a
configuration resembling a split monopole with a current sheet in the
equatorial plane. The gas forms a thin disk of enhanced mass density in the
equatorial plane. The disk is fragmented and the magnetic field is frozen
into the fragments. In the equatorial plane the magnetic field forms an
elongated tail. At the tip, reconnection occurs and magnetic flux is carried
away by the disk material. We expect a similar picture in case of stronger
fields, but at distances from the star outside our current simulation box.
Note also that for the stronger magnetic field in Fig.~\ref{fig:sim1}, the
regions of the stellar surface where open field lines originate are limited to
a small range of latitudes around the poles. We expect this effect to be even
stronger with a dipole field strength as large as 20\,kG. The right panel of
Fig.~\ref{fig:sim1} shows a zoom-in to the inner part of the simulation box.
At the time shown, the magnetosphere is still forming and the filaments
become longer. Close to the star the mass distribution is quite complex,
with an increased mass density at low latitudes close to the stellar surface
and blobs of gas at low to mid-latitudes. Within the filament structure, gas
moves back and forth along the field lines and inward towards the star. The
mass distribution is distinctly asymmetric with respect to the equatorial
plane. At this point we cannot predict the final structure and extent of the
magnetosphere but we notice the high mass density at low to mid-latitudes.
To summarize, while there must be strong currents present to exert a Lorentz
force strong enough to trap the gas, we do not find any changes of the field
topology close to the star that would hint at possible reconnection events in
that region.
\section{Discussion}
\label{sec:disc}
Slow rotators with strong magnetic fields, such as \mbox{NGC\,1624-2}, form
dynamical magnetospheres, in which material flows along closed magnetic
field loops from both magnetic poles, colliding near the magnetic equator and
then falling back onto the stellar surface, leading to complex flows in the
magnetosphere. \mbox{NGC\,1624-2}'s magnetosphere was estimated to extend to
an Alfv\'en radius $R_{\rm A}$ of more than eleven stellar radii, hence
trapping 95\% of the outflowing wind, much more than other magnetic O stars
that have an $R_{\rm A}$ of just a few stellar radii
\citep{David-Uraz2019}.
A much larger and denser magnetosphere compared to that of any other magnetic
O-type star was also confirmed by Chandra observations, where the high X-ray
luminosity, its variation with stellar rotation, and its large attenuation are
consistent with a large dynamical magnetosphere with magnetically confined
wind shocks
\citep{Petit2015}.
\mbox{NGC\,1624-2} is the only magnetic O-type star for which the variability
of both the longitudinal magnetic field and the mean magnetic field modulus
is currently studied. The approximately sinusoidal variation of $\bz$ and
the ratio of the values of the extrema of $\langle B \rangle$ indicate that
there is an important component of the field that is dipolar. The absence of
a sign reversal of the mean longitudinal field over the rotation cycle
indicates that only one magnetic pole is visible over the rotation cycle.
Involving the mean field modulus measurements, we find that the geometrical
structure of the magnetic field of \mbox{NGC\,1624-2} must depart from a
centred dipole. A similar conclusion has been independently obtained by
\citet{daviduraz2020}.
Notably, among the eleven magnetic O-type stars with known $\bz$ variation
curves, for seven stars (HD\,108, HD\,37022 (=$\theta^1$\,Ori\,C), HD\,54879,
HD\,148937, HD\,191612, Tr\,16-22, and \mbox{NGC\,1624-2}) only one magnetic
pole is visible throughout the rotation cycle
\citep[][this work]{ShultzWade2017,Petit2008,Hubrig2020,Wade2012b,Donati2006,Naze2016}.
This implies that, due to an unfavorable inclination of the rotation axis,
a larger fraction of their surfaces can never be observed, leaving the
structure of the field over the invisible surface only constrained by the
assumption of a tilted centred dipole. There will thus always be a
considerable degree of ambiguity left in the magnetic models of these stars.
Our $\bz$ values measured using \ion{He}{i} emission lines are predominantly
negative and, similar to the measurements of other spectral lines, show clear
variability over the rotation period, with the negative $\bz$ extremum
observed around phase 0, where the photospheric absorption lines show a
positive $\bz$ extremum. To explain the presence of variable Stokes~$V$
signatures corresponding to a longitudinal field of negative polarity in
these lines, we suggest three alternative scenarios: one is related to a
composite structure representing the contribution of several components,
another one referring to the complex structure of the magnetospheric
circumstellar environment of \mbox{NGC\,1624-2} with the possible presence of
a reconnection-driven emission process, and a third one where the \ion{He}{i}
emission comes from circumstellar material accumulated in the equatorial
region of the magnetosphere, where the dipolar magnetic field shows an
inverted sign with respect to the pole.
As of today, magnetic reconnection is best studied in the solar photosphere,
but it is not clear how the scenarios developed for the Sun can apply for
massive stars. Although the detection of a negative longitudinal field
inferred from the \ion{He}{i} emission lines was reported about eight years
ago, neither magnetohydrodynamical (MHD) simulations nor analytical models
have been developed to provide a description of the magnetospheric structure
permitting the reproduction of the observed phenomenon. According to
\citet{Owocki2016}
the presence of a huge magnetosphere and a very strong surface field are
prohibitively expensive to model using numerical MHD. Our MHD numerical
simulations show that while reconnection events indeed occur in
magnetospheres of massive stars and the magnetic flux is carried away by the
disk material, such events take place too far from the stellar surface, where
the \ion{He}{i} emission lines are formed.
On the other hand, as the shape of the Stokes~$V$ profiles of the \ion{He}{i}
emission lines displayed in Figs.~\ref{afig:IVN5875}--\ref{afig:IVN7065}
cannot be explained in terms of the Zeeman effect, the alternative hypothesis
related to a composite structure of Stokes~$I$ and $V$ profiles presenting
the contribution of several components appears currently more promising.
It is possible that future observations of all four Stokes parameters in
these lines will help to disentangle the contribution of different line
components.
\section*{Acknowledgements}
We would like to thank the anonymous referee for their rigorous review of our
article, especially for their suggestions concerning the interpretation of the
inverted magnetic field in the helium lines.
We also thank G.~Mathys for a discussion on the field modulus.
SPJ is supported by the German Leibniz-Gemeinschaft, project number P67-2018.
Based on observations collected at the Canada-France-Hawaii Telescope (CFHT),
which is operated by the National Research Council of Canada, the Institut
National des Sciences de l'Univers of the Centre National de la Recherche
Scientifique of France, and the University of Hawaii.
Also based on archival observations obtained at the Bernard Lyot Telescope
(Pic du Midi, France) of the Midi-Pyr\'en\'ees Observatory, which is operated
by the Institut National des Sciences de l'Univers of the Centre National de
la Recherche Scientifique of France.
Based on data acquired with the Potsdam Echelle Polarimetric and
Spectroscopic Instrument (PEPSI) using the Large Binocular Telescope (LBT) in
Arizona.
This work has made use of the VALD database, operated at Uppsala University,
the Institute of Astronomy RAS in Moscow, and the University of Vienna.
\section*{Data Availability}
The ESPaDOnS data underlying this article are available in the CFHT Science
Archive at https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/cfht/ and can be
accessed with the object name. Similarly, the Narval as well as
the ESPaDOnS data are available in PolarBase at http://polarbase.irap.omp.eu. The
PEPSI and ARCES data underlying this article will be shared on a reasonable
request to the corresponding author.
\section{Introduction}\label{intro}
The problem of classifying a category of objects by assigning objects of another category as complete invariants is fundamental to many disciplines of mathematics. This is particularly true in C$^*$-algebra theory, where the problem of classifying the nuclear simple separable C$^*$-algebras up to isomorphism is a major theme of the modern theory. Recent contact between descriptive set theorists and operator algebraists has highlighted two quite different views of what it means to have such a classification. Operator algebraists have concentrated on finding complete invariants which are assigned in a functorial manner, and for which there are good computational tools ($\mathrm{K}$-theory, for instance.) Descriptive set theorists, on the other hand, have developed an abstract \emph{degree theory} of classification problems, and have found tools that allow us to compare the complexity of different classification problems, and, importantly, allow us to rule out the use of certain types of invariants in a complete classification of highly complex concrete classification problems.
The aim of this paper is to investigate the complexity of the classification problem for nuclear simple separable C$^*$-algebras from the descriptive set theoretic point of view.
A minimal requirement of any reasonable classification is that the invariants are somehow definable or calculable from the objects being classified themselves. For example, it is easily seen that there are at most continuum many non-isomorphic separable C$^*$-algebras, and so it is possible, in principle, to assign to each isomorphism class of separable C$^*$-algebras a unique real number, thereby classifying the separable C$^*$-algebras completely up to isomorphism. Few mathematicians working in C$^*$-algebras would find this a satisfactory solution to the classification problem for separable C$^*$-algebras, let alone nuclear simple separable C$^*$-algebras, since we do not obtain a way of computing the invariant, and therefore do not have a way of effectively distinguishing the isomorphism classes.
Since descriptive set theory is the theory of definable sets and functions in Polish spaces, it provides a natural framework for a theory of classification problems. In the past 30 years, such an abstract theory has been developed. This theory builds on the fundamental observation that in most cases where the objects to be classified are themselves either countable or separable, there is a natural standard Borel space which parameterizes (up to isomorphism) all the objects in the class. From a descriptive set theoretic point of view, a classification problem is therefore a pair $(X,E)$ consisting of a standard Borel space $X$, the (parameters for) objects to be classified, and an equivalence relation $E$, the relation of isomorphism among the objects in $X$. In most interesting cases, the equivalence relation $E$ is easily definable from the elements of $X$, and is seen to be Borel or, at worst, analytic.
\begin{definition}
Let $(X,E)$ and $(Y,F)$ be classification problems, in the above sense. A \emph{Borel reduction} of $E$ to $F$ is a Borel function $f:X\to Y$ such that
$$
xEy\iff f(x) F f(y).
$$
If such a function $f$ exists then we say that $E$ is \emph{Borel reducible} to $F$, and we write $E\leq_B F$.
\end{definition}
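To illustrate the definition with a standard toy example (included only for orientation, and not needed in the sequel): let $E_0$ denote the relation of eventual equality on $2^{\mathbb{N}}$. The map
$$
f(x)=x(0)\; x(0)x(1)\; x(0)x(1)x(2)\cdots,
$$
which concatenates the successive initial segments of $x$, is a continuous (hence Borel) reduction of the identity relation on $2^{\mathbb{N}}$ to $E_0$: if $x=y$ then $f(x)=f(y)$, while if $x(k)\neq y(k)$ for some $k$, then $f(x)$ and $f(y)$ disagree in every concatenated block of length greater than $k$, so they are not eventually equal.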
If $f$ is a Borel reduction of $E$ to $F$, then evidently $f$ provides a complete classification of the points of $X$ up to $E$ equivalence by an assignment of $F$ equivalence classes. The ``effective'' descriptive set theory developed in the 1960s and 1970s (see e.g. \cite{moschovakis80}) established in a precise way that the class of Borel functions may be thought of as a very general class of calculable functions. Therefore the notion of Borel reducibility provides a natural starting point for a systematic theory of classification which is both generally applicable and manages to ban the trivialities provided by the Axiom of Choice. Borel reductions in operator algebras have been studied in the recent work of Sasyk-T\"ornquist \cite{sato09a,sato09b,sato09c}, who consider the complexity of isomorphism for various classes of von Neumann factors, and in that of Kerr-Li-Pichot \cite{kelipi} and Farah \cite{Fa:Dichotomy}, who concentrate on certain representation spaces and, in \cite{kelipi}, group actions on the hyperfinite $\mathrm{II}_1$ factor. This article initiates the study of Borel reducibility in separable C$^*$-algebras.
In \cite{Kec:C*}, Kechris introduced a standard Borel structure on the space of separable C$^*$-algebras, providing a natural setting for the study of the isomorphism relation on such algebras. This relation is of particular interest for the subset of (unital) nuclear simple separable C$^*$-algebras, as these are the focus of G. A. Elliott's long running program to classify such algebras via $\mathrm{K}$-theoretic invariants. To situate our main result for functional analysts, let us mention that an attractive class of invariants to use in a complete classification are the countable structures type invariants, which include the countable groups and countable ordered groups, as well as countable graphs, fields, boolean algebras, etc. If $(X,E)$ is a classification problem, we will say that $E$ is \emph{classifiable by countable structures} if there is a Borel reduction of $E$ to the isomorphism relation for some countable structures type invariant. If $(X,E)$ is not classifiable by countable structures, then it may still allow \emph{some} reasonable classification, in the sense that it is Borel reducible to the orbit equivalence relation of a Polish group action on a standard Borel space. Our main result is the following theorem (which is proved in \S\ref{borelreduction} and in \S\ref{s.bga}):
\begin{theorem}\label{mainintro}
The isomorphism relation $E$ for unital simple separable nuclear C$^*$-algebras is turbulent, hence not classifiable by countable structures.
Moreover, if $\mathcal L$ is any countable language and $\simeq^{\Mod(\mathcal L)}$ denotes the isomorphism relation for countable models of $\mathcal L$, then $\simeq^{\Mod(\mathcal L)}$ is Borel reducible to $E$.
On the other hand, $E$ is Borel reducible to the orbit equivalence relation of a Polish group action, namely, the action of $\mathrm{Aut}(\mathcal{O}_2)$ on the closed subsets of $\mathcal{O}_2$.
\end{theorem}
\noindent
This establishes that the isomorphism problem for nuclear simple separable unital C$^*$-algebras does not have the maximal complexity among analytic classification problems, and rules out the usefulness of some additional types of invariants for a complete classification of nuclear simple separable unital C$^*$-algebras. It also establishes that this relation has higher complexity than the isomorphism relation of any class of countable structures.
Remarkably, establishing both the lower and upper $\leq_B$ bounds of Theorem \ref{mainintro} requires that we prove Borel versions of two well-known results from Elliott's $\mathrm{K}$-theoretic classification program for nuclear simple separable C$^*$-algebras. The lower bound uses the classification of the unital simple approximately interval (AI) algebras via their $\mathrm{K}_0$-group and simplex of tracial states, while the upper bound requires that we prove a Borel version of Kirchberg's Theorem that a simple unital nuclear separable C$^*$-algebra satisfies $A \otimes \mathcal{O}_2 \cong \mathcal{O}_2$.
By contrast with Theorem \ref{mainintro}, we shall establish in \cite{FaToTo} that Elliott's classification of unital AF algebras via the ordered $\mathrm{K}_0$-group amounts to a classification by countable structures. This will follow from a more general result regarding the Borel computability of the Elliott invariant. We note that there are non-classification results in the study of simple nuclear C$^*$-algebras which rule out the possibility of classifying all simple nuclear separable C$^*$-algebras via the Elliott invariant in a functorial manner (see \cite{Ror} and \cite{Toms}). At heart, these examples exploit the structure of the Cuntz semigroup, an invariant whose descriptive set theory will be examined in \cite{FaToTo}.
The proof of Theorem \ref{mainintro} allows us to draw conclusions about the complexity of metrizable Choquet simplexes, too.
\begin{theorem}\label{mainchoquet}
The relation of affine homeomorphism on metrizable Choquet simplexes is turbulent, yet Borel reducible to the orbit equivalence relation of a Polish group action.
\end{theorem}
\noindent
Furthermore, and again as a by-product of Theorem \ref{mainintro}, we recover an unpublished result of Kechris and Solecki:
\begin{theorem}[Kechris-Solecki, 2006]\label{T.KS}
The relation of homeomorphism on compact subsets of the Hilbert cube is Borel reducible to the orbit equivalence relation of a Polish group action.
\end{theorem}
Finally, we show that a Borel equivalence relation which is {\it not} Borel reducible to any orbit equivalence relation of a Polish group action
has a C*-algebraic witness.
Recall that $E_{K_\sigma}$ is the complete $K_\sigma$ equivalence relation.
\begin{theorem}\label{biembeda}
$E_{K_\sigma}$ is Borel reducible to
bi-embeddability of unital AF algebras.
\end{theorem}
The early sections of this paper are dedicated to establishing that a variety of standard constructions in C$^*$-algebra theory are Borel computable, and that a number of important theorems in C$^*$-algebra theory have Borel computable counterparts. This is done both of necessity---Theorems \ref{mainintro}--\ref{biembeda} depend on these facts---and to provide the foundations for a general theory of calculability for constructions in C$^*$-algebra theory. Constructions that are shown to be Borel computable include passage to direct limits, minimal tensor products, unitization, and the calculation of states, pure states, and traces. Theorems for which we establish Borel counterparts include Kirchberg's Exact Embedding Theorem, as well as Kirchberg's $A\otimes\mathcal O_2\simeq\mathcal O_2$ Theorem for unital simple separable nuclear $A$.
\begin{figure}
\usetikzlibrary{shapes}
\tikzstyle{smooth}=[ellipse,
font={\small},
thick,
text centered,
minimum size=1.2cm,
draw=gray!80,
fill=gray!05]
\tikzstyle{dark-side}=[rectangle,
font={\small},
thin,
minimum size=.7cm,
draw=gray!80,
fill=gray!05]
\tikzstyle{unknown}=[rectangle,
font={\small},
thick,
minimum size=.3in,
draw=gray!80,
fill=gray!05]
\tikzstyle{orbit}=[rectangle,
font={\small},
thick,
minimum size=.3in,
draw=gray!80,
fill=gray!05]
\tikzstyle{pioneone}=[rectangle,
font={\small},
thick,
minimum size=1.2cm,
draw=gray!80,
fill=gray!30]
\tikzstyle{ctble}=[rectangle,
font={\small},
thick,
minimum size=1.2cm,
draw=gray!80,
fill=gray!05]
\noindent\begin{tikzpicture}
\matrix[row sep=0.5cm,column sep=0.1cm] {
&& & \node (Banach)[dark-side] {\parbox{1in}{isomorphism of\\ Banach spaces}}; & &
\\
\node (biAF)[dark-side] {\parbox{1in}{biembeddability of AF algebras}};
& \node (orbit-l) {};
& & &
& \node (orbit-r) {}; &
\node (Cuntz)[unknown] {\parbox{.8in}{Cuntz\\ semigroups}};
\\
\node (EKsigma)[dark-side] {$E_{K_\sigma}$};
& & &
\node (max-orbit)[orbit] {$E^{X_\infty}_{G_\infty}$};
\\
&& &
\node(nuclear)[orbit] {simple nuclear C*-algebras};
& & &
\node(Ell)[orbit]{Elliott invariants};
\\
&
&
&
\node (AI)[orbit] {simple AI algebras}; &
\\
& & & \node (Choquet)[orbit] {\parbox{1.2in}{Choquet simplexes}};
& &
\\
&&
& \node (cpct)[orbit]{\parbox{1.4in}{ {homeomorphism of\\ compact metric spaces}}};
&\node (abelian) [orbit]{abelian C*-algebras};
&
\\
\node (Eone)[dark-side]{$E_1$};
& \node (ctble-l) {};
& & & & & \node (ctble-r) {};
\\
& & & \node (AF)[dark-side]{AF algebras} ;
\\
& & &
\node (Ezero) [dark-side] {$E_0$};
&
&
\\
& \node (smooth-l) {};
& & & & &
\node (smooth-r) {};
\\
& & & \node (UHF) [smooth]{UHF};
&
\node (biUHF)[smooth] {\parbox{1in}{biembeddability of UHF algebras}};
\\
\node (bottom-l) {};
\\
};
\path (Banach)edge (biAF);
\path(AI) edge (Choquet);
\path[double](abelian) edge[<->] (cpct);
\path (Ezero) edge (AF);
\path (AF) edge (cpct);
\path (Eone) edge (Ezero);
\path (Ezero) edge (UHF);
\path (Eone) edge (EKsigma);
\path (EKsigma) edge (biAF);
\path (AI) edge (nuclear);
\path (AI) edge (Cuntz);
\path (AI) edge (Ell);
\path (nuclear) edge (max-orbit);
\path(max-orbit) edge (Banach);
\path (Cuntz) edge (Banach);
\path (Ell) edge (max-orbit);
\path[double] (UHF) edge[<->](biUHF);
\path (cpct) edge (Choquet);
\path[dotted] (orbit-l) edge node[below]{\tt orbit equivalence relations} (orbit-r);
\path[dotted](orbit-l) edge (ctble-l);
\path[dotted](ctble-l) edge (smooth-l);
\path[dotted](ctble-r) edge (smooth-r);
\path[dotted] (ctble-l) edge node[below]{\tt countable structures} (ctble-r);
\path[dotted] (smooth-l) edge node[below]{\tt smooth} (smooth-r);
\path[dotted] (bottom-l) edge (smooth-l);
\end{tikzpicture}
\caption{Borel reducibility diagram}\label{Figure1}
\end{figure}
Figure~\ref{Figure1} summarizes the Borel reductions we obtain in this article, in addition to some known reductions. All classes of C*-algebras occurring in the diagram are unital and separable.
Unless otherwise specified, the equivalence relation on a given class is the isomorphism relation.
The bi-reducibility between isomorphism and bi-embeddability of UHF algebras
is an immediate consequence of Glimm's characterization of UHF algebras, or rather of its (straightforward) Borel version.
$E_0$ denotes the eventual equality relation in the space $2^{{\mathbb N}}$.
The fact that $E_0$ is the minimal non-smooth Borel equivalence relation is
the Glimm--Effros dichotomy, proved by Harrington, Kechris and Louveau (see \cite{Hj:Borel}).
$E_1$ denotes the eventual equality relation in $[0,1]^{\mathbb N}$.
By \cite{KeLou:Structure}, $E_1$ is not Borel reducible to the orbit equivalence relation of any Polish group action.
$E_{K_\sigma}$ is the complete $K_\sigma$ equivalence relation, and
$E^{X_\infty}_{G_\infty}$ is the maximal orbit equivalence relation of a Polish group action (see \cite{Hj:Borel}).
The nontrivial direction of Borel bi-reducibility between abelian C*-algebras and compact metric spaces follows from
Lemma~\ref{L.SPT}.
A Borel reduction from compact metric spaces to Choquet simplexes is given in Lemma~\ref{L.Choq.4}.
A Borel reduction from Choquet simplexes to simple AI algebras is given in
Corollary~\ref{c.reduction}.
The Borel version of Elliott's reduction of simple AI algebras to the Elliott invariant follows from Elliott's classification result and the fact, proved in \cite{FaToTo}, that the computation of the Elliott invariant is Borel.
The reduction of the Elliott invariant to $E^{X_\infty}_{G_\infty}$, as well as the facts about the Cuntz semigroup, is proved in
\cite{FaToTo}.
Bi-embeddability of AF algebras is proved to be above $E_{K_\sigma}$ in Section~\ref{S.biembed}.
The isomorphism of separable Banach spaces is the complete analytic equivalence relation by \cite{FeLouRo}. Some of the reductions in Figure~\ref{Figure1} are not known to be sharp.
For example, it is not known whether the homeomorphism of compact metric spaces is
equireducible with $E^{X_\infty}_{G_\infty}$ (cf. remark at the end of \S\ref{S.problems}).
The standard reference for descriptive set theory is \cite{Ke:Classical} and specific facts about Borel-reducibility can
be found in \cite{hjorth00, Hj:Borel} and \cite{gao09}.
The general theory of C*-algebras can be found in \cite{blackadar} and references for Elliott's classification
program are \cite{Ror:Classification} and \cite{et}.
The paper is organized as follows: In Section \ref{S.parameterize} we introduce a notion of standard Borel parameterization of a category of objects, and define several equivalent parameterizations of the class of separable C$^*$-algebras. In Section \ref{S.Basicdef} we prove that most standard constructions in C$^*$-algebra theory correspond to Borel functions and relations.
In Section \ref{S.choquet} we give a parameterization of the set of metrizable separable Choquet simplexes. In Section \ref{borelreduction} we establish the lower bound of Theorem \ref{mainintro}. A Borel version of Kirchberg's Exact Embedding Theorem is obtained in Section \ref{s.exact}. The upper bound in Theorem \ref{mainintro} is proved in Section \ref{s.bga} using the Borel version of Kirchberg's $A\otimes\mathcal O_2\simeq\mathcal O_2$ Theorem; Theorem \ref{mainchoquet} is also established there.
Section \ref{S.biembed} establishes Theorem \ref{biembeda}, and Section \ref{S.problems} discusses several questions that remain open and warrant further investigation.
\subsection*{Acknowledgements} Ilijas Farah was partially supported by NSERC. A. Toms was supported by NSF grant DMS-0969246 and the 2011 AMS Centennial Fellowship.
Asger T\"ornquist wishes to acknowledge generous support from the following grants: The Austrian Science Fund FWF grant no. P 19375-N18, The Danish Council for Independent Research (Natural Sciences) grant no. 10-082689/FNU, and Marie Curie re-integration grant no. IRG- 249167, from the European Union.
We would like to thank the referee for a very useful report.
\section{Parameterizing separable C$^*$-algebras}\label{S.parameterize}
\label{S.Space}
In this section we describe several standard Borel spaces that in a natural way parameterize the set of all separable C$^*$-algebras. To make this precise, we adopt the following definition.
\begin{definition}\label{d.parameterization}
Let $\mathscr C$ be a category of objects.
\begin{enumerate}[\indent (1)]
\item A \emph{standard Borel parameterization} of $\mathscr C$ is a pair $(X,f)$ consisting of a standard Borel space $X$ and a function $f:X\to\mathscr C$ such that $f(X)$ meets each isomorphism class in $\mathscr C$. (For brevity, we often simply call $(X,f)$ a \emph{parameterization} of $\mathscr C$. We will also usually abuse notation by suppressing $f$ and writing $X$. Finally, note that despite the terminology, it is $X$ rather than the parameterization that is standard Borel.)
\item The equivalence relation $\simeq^{(X,f)}$ on $X$ is defined by
$$
x\simeq^{(X,f)} y\iff f(x)\text{ is isomorphic to } f(y).
$$
\item A parameterization $(X,f)$ is called \emph{good} if $\simeq^{(X,f)}$ is analytic as a subset of $X\times X$.
\item Let $(X,f)$ and $(Y,g)$ be two parameterizations of the same category $\mathscr C$. A \emph{homomorphism} of $(X,f)$ to $(Y,g)$ is a function $\psi:X\to Y$ such that $f(x)$ is isomorphic to $g(\psi(x))$ for all $x\in X$. An isomorphism of $(X,f)$ and $(Y,g)$ is a bijective homomorphism; a monomorphism is an injective homomorphism.
\item We say that $(X,f)$ and $(Y,g)$ are \emph{equivalent} if there is a Borel isomorphism from $(X,f)$ to $(Y,g)$.\label{it.equiv}
\item We say that $(X,f)$ and $(Y,g)$ are \emph{weakly equivalent} if there are Borel homomorphisms $\psi:X\to Y$ of $(X,f)$ to $(Y,g)$ and $\phi:Y\to X$ of $(Y,g)$ to $(X,f)$.
\end{enumerate}
\end{definition}
When $f$ is clear from the context, we will allow a slight \emph{abus de langage} and say that $X$ is a parameterization of $\mathscr C$ when $(X,f)$ is. Further, we will usually write $\simeq^X$ for $\simeq^{(X,f)}$. Note that by the Borel Schr\"oder-Bernstein Theorem (\cite[Theorem~15.7]{Ke:Classical}), (\ref*{it.equiv}) is equivalent to
\begin{enumerate}[\indent({\ref*{it.equiv}}')]
\item There are Borel monomorphisms $\psi:X\to Y$ of $(X,f)$ to $(Y,g)$ and $\phi: Y\to X$ of $(Y,g)$ to $(X,f)$.
\end{enumerate}
We now introduce four different parameterizations of the class of separable C$^*$-algebras, which we will later see are all (essentially) equivalent and good.
\subsection{The space $\Gamma(H)$.} \label{ss.gamma}
Let $H$ be a separable infinite dimensional Hilbert space and let as usual $\mathcal B(H)$ denote the space of bounded operators on $H$. The space $\mathcal B(H)$ becomes a standard Borel space when equipped with the Borel structure generated by the weakly open subsets. Following \cite{Kec:C*} we let
$$
\Gamma(H)=\mathcal B(H)^{{\mathbb N}},
$$
and equip this with the product Borel structure. Every $\gamma\in \Gamma(H)$ is identified with a sequence $\gamma_n$, for $n\in {\mathbb N}$, of elements of $\mathcal B(H)$.
For each $\gamma\in\Gamma(H)$ we let $C^*(\gamma)$ be the C$^*$-algebra generated by the sequence $\gamma$. If we identify each $\gamma\in\Gamma(H)$ with the algebra $C^*(\gamma)$, then naturally $\Gamma(H)$ parameterizes all separable C$^*$-algebras acting on $H$. Since every separable C$^*$-algebra is isomorphic to a C$^*$-subalgebra of $\mathcal B(H)$ this gives us a standard Borel parameterization of the category of all separable C$^*$-algebras. If the Hilbert space $H$ is clear from the context we will write $\Gamma$ instead of $\Gamma(H)$. Following Definition \ref{d.parameterization}, we define
$$
\gamma\simeq^{\Gamma}\gamma'\iff C^*(\gamma)\text{ is isomorphic to } C^*(\gamma').
$$
\subsection{The space $\hat\Gamma(H)$.} \label{ss.hatgamma}
Let ${\mathbb Q}(i)={\mathbb Q}+i{\mathbb Q}$ denote the complex rationals. Following \cite{Kec:C*}, let $(\mathfrak p_j: j\in {\mathbb N})$ enumerate the non-commutative $*$-polynomials without constant term in the formal variables $X_k$, $k\in{\mathbb N}$, with coefficients in ${\mathbb Q}(i)$, and for $\gamma\in \Gamma$ write $\mathfrak p_j(\gamma)$ for the evaluation of $\mathfrak p_j$ with $X_k=\gamma(k)$. Then $C^*(\gamma)$ is the norm-closure of $\{\mathfrak p_j(\gamma): j\in {\mathbb N}\}$. The map $\Gamma\to\Gamma:\gamma\mapsto\hat\gamma$ where $\hat\gamma(j)=\mathfrak p_j(\gamma)$ is clearly a Borel map from $\Gamma$ to $\Gamma$. If we let
$$
\hat\Gamma(H)=\{\hat\gamma:\gamma\in\Gamma(H)\},
$$
then $\hat\Gamma(H)$ is a standard Borel space and provides another parameterization of the C$^*$-algebras acting on $H$; we suppress $H$ and write $\hat\Gamma$ whenever possible. For $\gamma\in\hat\Gamma$, let $\check\gamma\in\Gamma$ be defined by
$$
\check\gamma(n)=\gamma(i)\iff \mathfrak p_i=X_n,
$$
and note that $\hat\Gamma\to\Gamma:\gamma\mapsto\check\gamma$ is the inverse of $\Gamma\to\hat\Gamma:\gamma\mapsto\hat\gamma$. We let $\simeq^{\hat\Gamma}$ be
$$
\gamma\simeq^{\hat\Gamma}\gamma'\iff C^*(\gamma)\text{ is isomorphic to } C^*(\gamma').
$$
It is clear from the above that $\Gamma$ and $\hat\Gamma$ are equivalent parameterizations.
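To illustrate the passage from $\gamma$ to $\hat\gamma$, suppose for instance that the fixed enumeration happens to begin with $\mathfrak p_1=X_0$, $\mathfrak p_2=X_0^*$ and $\mathfrak p_3=X_0X_1+iX_2$ (the particular ordering is immaterial for what follows). Then for $\gamma\in\Gamma(H)$ we have
$$
\hat\gamma(1)=\gamma(0),\qquad \hat\gamma(2)=\gamma(0)^*,\qquad \hat\gamma(3)=\gamma(0)\gamma(1)+i\,\gamma(2),
$$
so that $\hat\gamma$ enumerates a dense ${\mathbb Q}(i)$-$*$-subalgebra of $C^*(\gamma)$.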
An alternative picture of $\hat\Gamma(H)$ is obtained by considering the free (i.e., surjectively universal) countable unnormed ${\mathbb Q}(i)$-$*$-algebra $\mathfrak A$. We can identify $\mathfrak A$ with the set $\{\mathfrak p_n:n\in{\mathbb N}\}$. Then
$$
\hat \Gamma_{\mathfrak A}(H)=\{f:\mathfrak A\to\mathcal B(H):f\text{ is a $*$-homomorphism}\}
$$
is easily seen to be a Borel subset of $\mathcal B(H)^{\mathfrak A}$. For $f\in\hat\Gamma_{\mathfrak A}$ let $C^*(f)$ be the norm closure of $\im(f)$, and define
$$
f\simeq^{\hat\Gamma_{\mathfrak A}} f'\iff C^*(f)\text{ is isomorphic to } C^*(f').
$$
Clearly the map $\hat\Gamma\to\hat\Gamma_{\mathfrak A}:\gamma\mapsto f_\gamma$ defined by $f_\gamma(\mathfrak p_j)=\gamma(j)$ provides a Borel bijection witnessing that $\hat\Gamma$ and $\hat\Gamma_{\mathfrak A}$ are equivalent (and therefore they are also equivalent to $\Gamma$.)
We note for future reference that if we instead consider the free countable \emph{unital} unnormed ${\mathbb Q}(i)$-$*$-algebra $\Au$ and let
$$
\hat \Gamma_{\Au}(H)=\{f:\Au\to\mathcal B(H):f\text{ is a \emph{unital} $*$-homomorphism}\},
$$
then this gives a parameterization of all unital C$^*$-subalgebras of $\mathcal B(H)$. Note that $\Au$ may be identified with the set of all formal $*$-polynomials in the variables $X_k$ with coefficients in ${\mathbb Q}(i)$ (allowing a constant term.)
\subsection{The space $\Xi$.}\label{ss.xi}
Consider the Polish space ${\mathbb R}^{\mathbb N}$. We let $\Xi$ be the space of all $\delta\in {\mathbb R}^{\mathbb N}$ such that for some separable C$^*$-algebra $A$ and a sequence $y=(y_n)$ in $A$ generating it we have that
$$
\delta(j)=\|\mathfrak p_j(y)\|_A.
$$
Each $\delta\in\Xi$ defines a seminorm $\|\mathfrak p_j\|_\delta=\delta(j)$ on $\mathfrak A$ which satisfies the C$^*$-axiom. Letting $I=\{\mathfrak p_j:\delta(j)=0\}$ we obtain a norm on $\mathfrak A/I$. The completion of this algebra is then a C$^*$-algebra, which we denote by $B(\delta)$. It is clearly isomorphic to any C$^*$-algebra $A$ with $y=(y_n)$ as above satisfying $\|\mathfrak p_j(y)\|=\delta(j)$.
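In summary, the passage from a point $\delta\in\Xi$ to the algebra $B(\delta)$ can be recorded in three steps:
\begin{align*}
\|\mathfrak p_j\|_\delta&=\delta(j) && \text{(a C$^*$-seminorm on $\mathfrak A$)},\\
I&=\{\mathfrak p_j:\delta(j)=0\} && \text{(its null ideal)},\\
B(\delta)&=\overline{\mathfrak A/I}^{\,\|\cdot\|_\delta} && \text{(the completion in the induced norm)}.
\end{align*}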
\begin{lemma}
The set $\Xi$ is closed in $\mathbb R^{{\mathbb N}}$.
\end{lemma}
\begin{proof} Assume $\delta^n\in \Xi$ converges to $\delta\in {\mathbb R}^{\mathbb N}$ pointwise. Fix C$^*$-algebras $A_n$ and sequences $y^n=(y^n_i\in A_n: i\in {\mathbb N})$ such that $\delta^n(j)=\|\mathfrak p_j(y^n)\|_{A_n}$ for all $n$ and $j$. For a nonprincipal ultrafilter ${\mathcal U}$ on ${\mathbb N}$, let $A_\infty$ be the subalgebra of the ultraproduct $\prod_{{\mathcal U}} A_n$ generated by the elements $\mathbf y_i=(y^0_i,y^1_i,\dots)$, for $i\in {\mathbb N}$. Then clearly $\delta(j)=\lim_{n\to {\mathcal U}} \|\mathfrak p_j(y^n)\|_{A_n}=\|\mathfrak p_j(\mathbf y)\|_{A_\infty}$, where $\mathbf y=(\mathbf y_i)_{i\in{\mathbb N}}$, hence $A_\infty$ witnesses $\delta\in \Xi$.
\end{proof}
Thus $\Xi$ provides yet another parameterization of the category of separable C$^*$-algebras, and we define in $\Xi$ the equivalence relation
$$
\delta\simeq^{\Xi}\delta'\iff B(\delta)\text{ is isomorphic to } B(\delta').
$$
Below we will prove that this parameterization is equivalent to $\Gamma$ and $\hat\Gamma$. Note that an alternative description of $\Xi$ is obtained by considering the set of $f\in{\mathbb R}^{\mathfrak A}$ which define a C$^*$-seminorm on $\mathfrak A$; this set is easily seen to be Borel since the requirements of being a C$^*$-seminorm are Borel conditions.
\subsection{The space $\hat\Xi$.}\label{ss.hatxi}
Our last parameterization is obtained by considering the set
$$
\hat\Xi \subseteq {\mathbb N}^{{\mathbb N}\times{\mathbb N}}\times{\mathbb N}^{{\mathbb Q}(i)\times{\mathbb N}}\times{\mathbb N}^{{\mathbb N}\times{\mathbb N}}\times{\mathbb N}^{\mathbb N}\times{\mathbb R}^{\mathbb N}
$$
of all tuples $(f,g,h,k,r)$ such that the operations (with $m,n$ in ${\mathbb N}$ and $q\in \mathbb Q(i)$) defined by
\begin{align*}
& m+_f n=f(m,n) \\
& q\cdot_g n = g(q,n)\\
& m\cdot_h n=h(m,n)\\
& m^{*_k}=k(m)\\
& \|n\|_r=r(n)
\end{align*}
give ${\mathbb N}$ the structure of a normed $*$-algebra over ${\mathbb Q}(i)$ which further satisfies the ``C$^*$-axiom'',
$$
\|n\cdot_{h} n^{*_k}\|_r=\|n\|_r^2
$$
for all $n\in{\mathbb N}$. The set $\hat\Xi$ is Borel since the axioms of being a normed $*$-algebra over ${\mathbb Q}(i)$ are Borel conditions. For $A\in\hat\Xi$, let $\hat B(A)$ denote the completion of $A$ with respect to the norm and equipped with the extension of the operations on $A$ to $\hat B(A)$. Note in particular that the operation of scalar multiplication may be uniquely extended from ${\mathbb Q}(i)$ to ${\mathbb C}$. We define for $A_0,A_1\in\hat\Xi$ the equivalence relation
$$
A_0\simeq^{\hat\Xi} A_1\iff \hat B(A_0)\text{ is isomorphic to } \hat B(A_1).
$$
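Note that, as for any norm satisfying the C$^*$-axiom, submultiplicativity of $\|\cdot\|_r$ forces the involution $*_k$ to be isometric: for every $n\in{\mathbb N}$,
$$
\|n\|_r^2=\|n\cdot_h n^{*_k}\|_r\leq\|n\|_r\,\|n^{*_k}\|_r,
$$
so $\|n\|_r\leq\|n^{*_k}\|_r$, and applying this with $n^{*_k}$ in place of $n$ yields $\|n^{*_k}\|_r=\|n\|_r$.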
For future reference, we note that the infinite symmetric group $\Sym({\mathbb N})$ acts naturally on $\hat\Xi$: If $\sigma\in\Sym({\mathbb N})$ and $(f,g,h,k,r)\in\hat\Xi$, we let $\sigma\cdot f\in{\mathbb N}^{{\mathbb N}\times{\mathbb N}}$ be defined by
$$
(\sigma\cdot f)(m,n)=k\iff f(\sigma^{-1}(m),\sigma^{-1}(n))=\sigma^{-1}(k),
$$
and define $\sigma\cdot g$, $\sigma\cdot h$, $\sigma\cdot k$ and $\sigma\cdot r$ similarly. Then we let $\sigma\cdot (f,g,h,k,r)=(\sigma\cdot f, \sigma\cdot g,\sigma\cdot h, \sigma\cdot k, \sigma\cdot r)$. It is clear that $\sigma$ induces an isomorphism of the structures $(f,g,h,k,r)$ and $\sigma\cdot (f,g,h,k,r)$. However, the orbit equivalence relation of this action clearly \emph{does not} coincide with $\simeq^{\hat\Xi}$, which is strictly coarser.
\begin{remark}
(1) It is useful to think of $\Gamma$ and $\hat\Gamma$ as parameterizations of \emph{concrete} C$^*$-algebras, while $\Xi$ and $\hat\Xi$ can be thought of as parameterizing \emph{abstract} C$^*$-algebras.
(2) The parameterizations $\Gamma$, $\hat\Gamma$ and $\Xi$ all contain a unique element corresponding to the trivial C$^*$-algebra, which we denote by $0$ in all cases. Note that $\hat\Xi$ does not parameterize the trivial C$^*$-algebra.
\end{remark}
\subsection{Equivalence of $\Gamma$, $\hat\Gamma$, $\Xi$ and $\hat\Xi$.}
\label{S.Equivalence}
We now establish that the four parameterizations described above give us equivalent parameterizations of the non-trivial separable C$^*$-algebras. First we need the following lemma.
\begin{lemma}\label{l.injection}
Let $X$ be a Polish space and let $Y$ be any of the spaces $\Gamma,\hat\Gamma,\Xi$ or $\hat\Xi$. Let $f:X\to Y$ be a Borel function such that $f(x)\neq 0$ for all $x\in X$. Then there is a Borel injection $\tilde f:X\to Y$ such that for all $x\in X$, $f(x)\simeq^Y \tilde f(x)$.
\end{lemma}
\begin{proof}
$Y=\Gamma$: We may assume that $X=[{\mathbb N}]^\infty$, the space of infinite subsets of ${\mathbb N}$. (Under the natural identification this is a $G_\delta$ subset of $2^{\mathbb N}$ and therefore Polish. It is then homeomorphic to the set of irrationals.) Given $\gamma\in\Gamma\setminus \{0\}$ and $x\in X$, let $n_0(\gamma)\in{\mathbb N}$ be least such that $\gamma(n_0(\gamma))\neq 0$ and define
$$
\gamma_x(k)=\left\{\begin{array}{ll}
i\gamma(n_0(\gamma)) & \text{ if } k=2^i \text{ for some } i\in x;\\
\gamma(j) & \text{ if } k=3^j \text{ for some } j\in{\mathbb N};\\
0 & \text{otherwise}.
\end{array}
\right.
$$
Clearly $C^*(\gamma)=C^*(\gamma_x)$, and $\tilde f:X\to \Gamma\setminus\{0\}$ defined by $\tilde f(x)= (f(x))_x$ is a Borel injection.
$Y=\hat\Gamma$: Clear, since $\Gamma$ and $\hat\Gamma$ are equivalent.
$Y=\Xi$. We may assume that $X={\mathbb R}_+$. Fix $x\in X$, let $\delta=f(x)$, and let $n_0(\delta)\in{\mathbb N}$ be least such that $\mathfrak p_{n_0}=X_{i_0}$ for some $i_0\in{\mathbb N}$ and $\delta(n_0)\neq 0$. Let $A$ be a C$^*$-algebra and $y=(y_n)$ be a dense sequence in $A$ such that $\delta(n)=\|\mathfrak p_n(y)\|_A$, and let $\tilde y=(\tilde y_n)$ be
$$
\tilde y_n=\left\{\begin{array}{ll}
\frac x {\|y_n\|_A}y_n & \text{if } n=i_0;\\
y_n & \text{otherwise.}
\end{array}\right.
$$
Then define $\tilde f(x)(n)=\|\mathfrak p_n(\tilde y)\|_A$. Clearly $\tilde f(x)\simeq^{\Xi} f(x)$ for all $x\in X$, and since $\|\mathfrak p_{n_0}(\tilde y)\|_A=x$, the function $\tilde f$ is injective. (Note that $\tilde f(x)$ does not depend on the choice of $A$ and $y$, so it is in fact a function.) Finally, $\tilde f$ is Borel by \cite[14.12]{Ke:Classical}, since
\begin{align*}
\tilde f(x)=\delta'\iff &(\exists\gamma,\gamma'\in\Gamma)(\exists\delta\in\Xi\setminus\{0\}) f(x)=\delta\wedge (\forall n) \delta(n)=\|\mathfrak p_n(\gamma)\|\wedge\\
&(\forall i) ((i\neq n_0(\delta)\wedge \gamma'(i)=\gamma(i))\vee (i=n_0(\delta)\wedge\gamma'(i)=\frac x {\|\gamma(i)\|}\gamma(i))),
\end{align*}
gives an analytic definition of the graph of $\tilde f$ (with $n_0(\delta)$ defined as above.)
$Y=\hat\Xi$: Assume $X=[2{\mathbb N}]^\infty$, the infinite subsets of the \emph{even} natural numbers. Given $(f,g,h,k,r)\in\hat\Xi$ and $x\in X$, we can find, in a Borel way, a permutation $\sigma=\sigma_{(f,g,h,k,r),x}$ of ${\mathbb N}$ so that
$$
x=\{2^n\cdot_{\sigma\cdot g} 1: n\in{\mathbb N}\}.
$$
Then $\tilde f(x)=\sigma_{f(x),x}\cdot f(x)$ works.
\end{proof}
\begin{remark} The classical principle \cite[14.12]{Ke:Classical} that a function whose graph is analytic is Borel will be used frequently in what follows, usually without comment.
\end{remark}
\begin{prop}\label{p.xihatxiequiv}
$\Xi\setminus\{0\}$ and $\hat\Xi$ are equivalent.
\end{prop}
\begin{proof}
By the previous Lemma, it suffices to show that $\Xi\setminus\{0\}$ and $\hat\Xi$ are weakly equivalent.
Identify $\Xi\setminus\{0\}$ with a subset of ${\mathbb R}^{\mathfrak A}$ in the natural way, and define
$$
E=\{(\delta,m,n)\in\Xi\setminus\{0\}\times{\mathbb N}\times{\mathbb N}: \delta(\mathfrak p_m-\mathfrak p_n)=0\}.
$$
Then the section $E_\delta=\{(m,n)\in{\mathbb N}^2: (\delta,m,n)\in E\}$ defines an equivalence relation on ${\mathbb N}$. Let $f_n:\Xi\setminus\{0\}\to{\mathbb N}$ be Borel functions such that each $f_n(\delta)$ is the least element in ${\mathbb N}$ not $E_\delta$-equivalent to $f_m(\delta)$ for $m<n$. If we let $I_\delta=\{\mathfrak p_n: \delta(\mathfrak p_n)=0\}$, then $n\mapsto \mathfrak p_{f_n(\delta)}+I_\delta$ provides a bijection between ${\mathbb N}$ and $\mathfrak A/I_\delta$, and from this we can define (in a Borel way) algebra operations and the norm on ${\mathbb N}$ corresponding to $A\in\hat\Xi$ such that $A\simeq \mathfrak A/I_\delta$.
Conversely, given a normed ${\mathbb Q}(i)$-$*$-algebra $A\in\hat\Xi$ (with underlying set ${\mathbb N}$), an element $\delta_A\in\Xi$ is defined by letting $\delta_A(n)=\|\mathfrak p_n(X_i=i:i\in{\mathbb N})\|_A$, where $\mathfrak p_n(X_i=i:i\in{\mathbb N})$ denotes the evaluation of $\mathfrak p_n$ in $A$ when letting $X_i=i$.
\end{proof}
\begin{prop}\label{p.gammaxiequiv}
$\Gamma$ and $\Xi$ are equivalent. Thus $\Gamma$, $\hat\Gamma$ and $\Xi$ are equivalent parameterizations of the separable C$^*$-algebras, and $\Gamma\setminus\{0\}$, $\hat\Gamma\setminus\{0\}$, $\Xi\setminus\{0\}$ and $\hat\Xi$ are equivalent parameterizations of the non-trivial separable C$^*$-algebras.
\end{prop}
For the proof of this we need the following easy (but useful) Lemma:
\begin{lemma}\label{l.gammamaps}
Let $H$ be a separable infinite dimensional Hilbert space. Then:
(1) A function $f:X\to\Gamma(H)$ on a Polish space $X$ is Borel if and only if for some (any) sequence $(e_i)$ with dense span in $H$ we have that the functions
$$
x\mapsto (f(x)(n)e_i|e_j)
$$
are Borel, for all $n,i,j\in{\mathbb N}$.
(2) Suppose $g:X\to\bigcup_{x\in X}\Gamma(H_x)$ is a function such that for each $x\in X$ we have $g(x)\in\Gamma(H_x)$, where $H_x$ is a separable infinite dimensional Hilbert space, and there is a system $(e^x_i)_{i\in{\mathbb N}}$ with $\Span\{e^x_i:i\in{\mathbb N}\}$ dense in $H_x$. If for all $n,i,j\in{\mathbb N}$ we have that the functions
$$
X\to{\mathbb C}: x\mapsto (e_i^x|e_j^x)
$$
and
$$
X\to{\mathbb C}: x\mapsto (g(x)(n)e_i^x|e_j^x)
$$
are Borel, then there is a Borel $\hat g: X\to\mathcal B(H)$ and a family $T_x:H\to H_x$ of linear isometries such that for all $n\in{\mathbb N}$,
$$
g(x)(n)=T_x\hat g(x)(n) T_x^{-1}.
$$
\end{lemma}
We postpone the proof of Lemma \ref{l.gammamaps} until after the proof of Proposition \ref{p.gammaxiequiv}.
\begin{proof}[Proof of Proposition \ref{p.gammaxiequiv}]
By Lemma \ref{l.injection}, it is again enough to show that $\Gamma$ and $\Xi$ are weakly equivalent. For the first direction, the map $\psi:\Gamma\to\Xi$ given by
$$
\psi(\gamma)(n)=\|\mathfrak p_n(\gamma)\|
$$
clearly works.
For the other direction we rely on the GNS construction (e.g. \cite[II.6.4]{blackadar}). For each $\delta\in \Xi$ let $S(\delta)$ be the space of all $\phi\in \mathbb C^{{\mathbb N}}$ such that
\begin{enumerate}
\item $|\phi(k)|\leq \delta(k)$ for all $k$,
\item $\phi(k)=\phi(m)+\phi(n)$, whenever $\mathfrak p_k=\mathfrak p_m+\mathfrak p_n$,
\item $\phi(k)\geq 0$ whenever $\mathfrak p_k=\mathfrak p_m^*\mathfrak p_m$ for some $m$.
\end{enumerate}
Then $S(\delta)$ is a compact subset of $\mathbb C^{{\mathbb N}}$ for each $\delta\in\Xi$, and so since the relation
$$
\{(\delta,\phi)\in\Xi\times{\mathbb C}^{\mathbb N}:\phi\in S(\delta)\}
$$
is Borel, it follows by \cite[28.8]{Ke:Classical} that $\Xi\to K(\mathbb C^{{\mathbb N}}): \delta\mapsto S(\delta) $ is a Borel function into the Polish space $K(\mathbb C^{{\mathbb N}})$ of compact subsets of $\mathbb C^{{\mathbb N}}$. Consider the set
$$
N=\{(\delta,\phi,n,m)\in\Xi\times{\mathbb C}^{\mathbb N}\times{\mathbb N}\times{\mathbb N}: \phi\in S(\delta)\wedge (\exists k) \mathfrak p_k=(\mathfrak p_n-\mathfrak p_m)^*(\mathfrak p_n-\mathfrak p_m)\wedge \phi(k)=0\}.
$$
Then for each $\delta$ and $\phi$ the relation $N_{\delta,\phi}=\{(n,m)\in{\mathbb N}^2:(\delta,\phi,n,m)\in N\}$ is an equivalence relation on ${\mathbb N}$. Without any real loss of generality we can assume that $N_{\delta,\phi}$ always has infinitely many classes. Let $\sigma_n:\Xi\times{\mathbb C}^{\mathbb N}\to{\mathbb N}$ be a sequence of Borel maps such that for all $\delta$ and $\phi$ fixed the set
$$
\{\sigma_n(\delta,\phi):n\in{\mathbb N}\}
$$
meets every $N_{\delta,\phi}$ class once. For $\delta$ and $\phi$ fixed we can then define an inner product on ${\mathbb N}$ by
$$
(n|m)_{\delta,\phi}=\phi(k)\iff \mathfrak p_k=\mathfrak p_{\sigma_n(\delta,\phi)}^* \mathfrak p_{\sigma_m(\delta,\phi)}.
$$
Let $H(\delta,\phi)$ denote the completion of this pre-Hilbert space. Then there is a unique operator $\gamma_{\delta,\phi}(n)\in \mathcal B(H(\delta,\phi))$ extending the operator acting on $({\mathbb N},(\cdot|\cdot)_{\delta,\phi})$ defined by letting $\gamma_{\delta,\phi}(n)(m)=k$ iff there is some $k'\in{\mathbb N}$ such that
$$
\mathfrak p_{\sigma_n(\delta,\phi)}\mathfrak p_{\sigma_m(\delta,\phi)}=\mathfrak p_{k'}
$$
and $(\delta,\phi,k',\sigma_k(\delta,\phi))\in N$. Note that $n\mapsto \gamma_{\delta,\phi}(n)$ corresponds to the GNS representation, associated with $\phi$, of the normed $*$-algebra over ${\mathbb Q}(i)$ that corresponds to $\delta$. Since the elements of ${\mathbb N}$ generate $H(\delta,\phi)$ and the map
$$
(\delta,\phi)\mapsto (\gamma_{\delta,\phi}(n)(i)|j)_{\delta,\phi}
$$
is Borel, it follows from Lemma \ref{l.gammamaps} that there is a Borel function
$$
\Xi\times{\mathbb C}^{\mathbb N}\to\Gamma(H): (\delta,\phi)\mapsto \tilde\gamma(\delta,\phi)
$$
such that $\tilde\gamma(\delta,\phi)\in\Gamma(H)$ is conjugate to $(\gamma_{\delta,\phi}(n))_{n\in{\mathbb N}}\in\Gamma(H(\delta,\phi))$ for all $\delta,\phi$.
Since the map $\Xi\to K(\mathbb C^{{\mathbb N}}): \delta\mapsto S(\delta)$ is Borel,
by the Kuratowski--Ryll-Nardzewski theorem (\cite[Theorem~12.13]{Ke:Classical})
there are Borel maps $\phi_n\colon \Xi\to \mathbb C^{{\mathbb N}}$ such that for every $\delta\in \Xi$
the set $\phi_n(\delta)$, for $n\in {\mathbb N}$, is dense in $S(\delta)$. Writing $H=\bigoplus_{n=1}^\infty H_n$ where $H_n$ are infinite dimensional Hilbert spaces, we may then by the above find a Borel map $\Xi\to\Gamma(H):\delta\mapsto\gamma(\delta)$ such that the restriction $\gamma(\delta)\upharpoonright H_n$ is conjugate to $\tilde\gamma(\delta,\phi_n(\delta))$. Since the sequence $(\phi_n(\delta))$ is dense in $S(\delta)$, it follows that $\gamma(\delta)$ is a faithful representation of the algebra corresponding to $\delta$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{l.gammamaps}]
(1) is clear from the definition of $\Gamma(H)$. To see (2), first note that the Gram-Schmidt process provides orthonormal bases $(f_i^x)_{i\in{\mathbb N}}$ for the $H_x$ such that
$$
f^x_i=\sum_{j=1}^i r_{i,j}^xe_j^x
$$
and the coefficient maps $x\mapsto r_{i,j}^x$ are Borel. Therefore the maps
$$
X\to{\mathbb C}: x\mapsto (g(n)(x)f_i^x|f_j^x)
$$
are Borel for all $n,i,j\in{\mathbb N}$, and so we may in fact assume that $(e_i^x)_{i\in{\mathbb N}}$ forms an orthonormal basis to begin with. But then, if $(e_i)_{i\in{\mathbb N}}$ is an orthonormal basis for $H$, a function $\hat g:X\to\Gamma(H)$ is defined by
$$
\hat g(x)(n)=S\iff (\forall i,j)\ (Se_i|e_j)=(g(n)(x)e_i^x|e_j^x),
$$
and since this also provides a Borel description of the graph of $\hat g$, $\hat g$ is Borel by \cite[14.12]{Ke:Classical}. Finally, defining $T_x:H\to H_x$ to be the isometry mapping $e_i$ to $e_i^x$ for each $x$ provides the desired conjugating map.
\end{proof}
\subsection{Parameterizing unital C$^*$-algebras}\label{ss.paramunital}
We briefly discuss the parameterization of unital C$^*$-algebras. Define
$$
\Gammau=\{\gamma\in\Gamma: C^*(\gamma)\text{ is unital}\}.
$$
We will see (Lemma \ref{l.unital}) that this set is Borel. We can similarly define $\hatGammau\subseteq\hat\Gamma$, $\Xiu\subseteq\Xi$ and $\hatXiu\subseteq\hat\Xi$. However, as noted in \ref{ss.hatgamma}, the set
$$
\hat \Gamma_{\Au}(H)=\{f:\Au\to\mathcal B(H):f\text{ is a \emph{unital} $*$-homomorphism}\}
$$
is Borel and naturally parameterizes the unital C$^*$-subalgebras of $\mathcal B(H)$. In analogy, we define
$$
\Xi_{\Au}=\{f\in{\mathbb R}^{\Au}:f\text{ defines a C$^*$-seminorm on } \Au\text{ with } f(1)=1\},
$$
which is also Borel. Then a proof similar to that of Proposition \ref{p.gammaxiequiv} shows:
\begin{prop}\label{p.gammaxiunitalequiv}
The Borel sets $\hat\Gamma_{\Au}$ and $\Xi_{\Au}$ provide equivalent parameterizations of the unital $C^*$-algebras.
\end{prop}
\noindent In \S 3 we will see that $\hat\Gamma_{\Au}$ and $\Xi_{\Au}$ are also equivalent to $\Gammau$ (and therefore also to $\hatGammau$, $\Xiu$ and $\hatXiu$). For future use, we fix once and for all an enumeration $(\mathfrak q_n)_{n\in{\mathbb N}}$ of all the formal ${\mathbb Q}(i)$-$*$-polynomials (allowing constant terms), so that naturally $\Au=\{\mathfrak q_n:n\in{\mathbb N}\}$. Also for future reference, we note that Lemma \ref{l.injection} holds for $Y=\hat\Gamma_{\Au}$ and $Y=\Xi_{\Au}$ (the easy proof is left to the reader).
\subsection{Basic maps and relations} We close this section by making two simple, but useful, observations pertaining to the parameterization $\Gamma$. While the Borel structures generated by the weak operator topology, the strong operator topology, the $\sigma$-weak operator topology and the $\sigma$-strong operator topology all coincide, the Borel structure of the norm topology is strictly finer. However, we have:
\begin{lemma} \label{L.B.1} Every norm open ball in $\mathcal B(H)$ is a Borel subset of $\Gamma$, and for every $\varepsilon>0$ the set $\{(a,b): \|a-b\|<\varepsilon\}$ is Borel.
It follows that the maps
$$
\mathcal B(H)\to\mathbb R: a\mapsto \|a\|, \ \ \mathcal B(H)^2\to{\mathbb R}: (a,b)\mapsto \|a-b\|
$$
are Borel.
\end{lemma}
\begin{proof} Clearly $\{a: \|a\|>\varepsilon\}$ is weakly open for all $\varepsilon\geq 0$, since $\|a\|=\sup\{|(a\xi|\eta)|: \|\xi\|=\|\eta\|=1\}$.
Hence closed norm balls are weakly closed, and open norm balls, being countable unions of closed balls, are $F_\sigma$. The same argument applies to $\{(a,b): \|a-b\|<\varepsilon\}$, and Borel-measurability of the two maps follows.
\end{proof}
\begin{lemma} \label{L.equality.borel}
The relations
$$
\{(\gamma,\gamma')\in \Gamma\times\Gamma: C^*(\gamma)\subseteq C^*(\gamma')\}
$$
and
$$
\{(\gamma,\gamma')\in\Gamma\times\Gamma: C^*(\gamma)= C^*(\gamma')\}
$$
are Borel.
\end{lemma}
\begin{proof}
We have
$$
C^*(\gamma)\subseteq C^*(\gamma')\iff (\forall n)(\forall \varepsilon>0)(\exists m)\|\gamma_n'-\mathfrak p_m(\gamma)\|<\varepsilon,
$$
which is Borel by Lemma~\ref{L.B.1}. The second relation is the intersection of the first with its transpose, and is therefore also Borel.
\end{proof}
\section{Basic definability results}\label{S.Basicdef}
In this section we will show that a wide variety of standard C$^*$-algebra constructions correspond to Borel relations and functions in the spaces $\Gamma$ and $\Xi$.
\begin{prop} \label{P.isomorphism}
\
\begin{enumerate}[{\rm \indent (1)}]
\item The relation $\precsim$ on $\Gamma$, defined by $\gamma\precsim\delta$ if
and only if $C^*(\gamma)$ is isomorphic to a subalgebra of
$C^*(\delta)$, is analytic.
\item The relation $\simeq^{\Gamma}$ is analytic. In particular, $\Gamma$, $\hat\Gamma$, $\Xi$ and $\hat\Xi$ are good standard Borel parameterizations of the class of separable $C^*$-algebras.
\end{enumerate}
\end{prop}
Before the proof of Proposition~\ref{P.isomorphism} we introduce some terminology
and prove a lemma.
The following terminology will be useful both here and later: We
call $\Phi:{\mathbb N}\to{\mathbb N}^{\mathbb N}$ a \emph{code} for a $*$-homomorphism
$C^*(\gamma)\to C^*(\gamma')$ if for all $m,n,k$ we have:
\begin{enumerate}
\item For each fixed $m$ the sequence $a_{m,k}=\mathfrak p_{\Phi(m)(k)}(\gamma')$, $k\in {\mathbb N}$, is Cauchy. Write $a_m=\lim_k a_{m,k}$.
\item If $\mathfrak p_m(\gamma)+\mathfrak p_n(\gamma)=\mathfrak p_k(\gamma)$ then $a_m+a_n=a_k$.
\item If $\mathfrak p_m(\gamma)\mathfrak p_n(\gamma)=\mathfrak p_k(\gamma)$ then $a_m a_n=a_k$.
\item If $\mathfrak p_m(\gamma)^*=\mathfrak p_k(\gamma)$ then $a_m^*=a_k$.
\item \label{L.subalgebra.4} $\|a_m\|\leq\|\mathfrak p_m(\gamma)\|$.
\end{enumerate}
We call $\Phi$ a code for a {\it monomorphism} if equality holds in
(\ref{L.subalgebra.4}).
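For the record, here is the routine verification that a code for a monomorphism yields a well-defined map on the generators. Suppose $\mathfrak p_m(\gamma)=\mathfrak p_n(\gamma)$, let $\mathfrak p_j$ be the formal polynomial $-\mathfrak p_n$ and let $\mathfrak p_l$ be the zero polynomial. Then $\mathfrak p_j(\gamma)+\mathfrak p_n(\gamma)=\mathfrak p_l(\gamma)$ and $\mathfrak p_m(\gamma)+\mathfrak p_j(\gamma)=\mathfrak p_l(\gamma)$, so condition (2) gives $a_j+a_n=a_l$ and $a_m+a_j=a_l$, while equality in (\ref{L.subalgebra.4}) gives $\|a_l\|=\|\mathfrak p_l(\gamma)\|=0$. Hence
$$
a_m=a_l-a_j=a_n.
$$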
\subsection{Definitions of $\Rhom$, $\Rmono$ and $\Riso$}\label{S.Rhom}
Let $H_0$ and $H_1$ be separable complex Hilbert spaces. Then it is easy to see that the relations $\Rhomx{H_0,H_1}, \Rmonox{H_0,H_1}\subseteq
\Gamma(H_0)\times\Gamma(H_1)\times({\mathbb N}^{\mathbb N})^{\mathbb N}$ defined by
\begin{align*}
& \Rhomx{H_0,H_1}(\gamma,\gamma',\Phi)\iff \ \Phi \text{ is a code for a *-homomorphism } C^*(\gamma)\to C^*(\gamma')\\
& \Rmonox{H_0,H_1}(\gamma,\gamma',\Phi)\iff \ \Phi \text{ is a code for a
*-monomorphism } C^*(\gamma)\to C^*(\gamma')
\end{align*}
are Borel. We let $\Rhomx{H}=\Rhomx{H,H}$ and $\Rmonox{H}=\Rmonox{H,H}$ for any Hilbert space $H$. If $H_0, H_1$ or $H$ are clear from the context or can be taken to be any (separable) Hilbert spaces then we will suppress the superscript and write $\Rhom$ and $\Rmono$.
The following is immediate from the definitions:
\begin{lemma}
If $(\gamma,\gamma',\Phi)\in \Rhom$ then there is a unique
homomorphism $\hat\Phi: C^*(\gamma)\to C^*(\gamma')$ which satisfies
$$
\hat\Phi(\mathfrak p_j(\gamma))=a_j
$$
for all $j\in{\mathbb N}$. If $(\gamma,\gamma',\Phi)\in \Rmono{}$ then
$\hat\Phi$ is a monomorphism.
\end{lemma}
\begin{proof}
If $\Rhom{}(\gamma,\gamma',\Phi)$ then
$$
\mathfrak p_m(\gamma)\mapsto a_m
$$
is a *-homomorphism from a dense subalgebra of $C^*(\gamma)$ into a
subalgebra of $C^*(\gamma')$. Since it is a contraction it extends to
a *-homomorphism $\hat\Phi:C^*(\gamma)\to C^*(\gamma')$ onto a
subalgebra of $C^*(\gamma')$. If $\Rmono{}(\gamma,\gamma',\Phi)$
holds then $\hat\Phi$ is clearly a monomorphism.
\end{proof}
We also define a relation $\Riso$ (we are suppressing $H_0$ and $H_1$) by
\begin{align*}
\Riso(\gamma,\gamma',\Phi) \iff & \Rmono{}(\gamma,\gamma',\Phi)\wedge\\
&(\forall m)(\forall\varepsilon>0)(\exists j)(\exists k\in{\mathbb N})(\forall n>k)
\|\mathfrak p_{\Phi(j)(n)}(\gamma')-\mathfrak p_m(\gamma')\|<\varepsilon.
\end{align*}
This relation states that $\Phi$ codes a monomorphism whose range contains each $\mathfrak p_m(\gamma')$ in its closure; since the range of a $*$-homomorphism is closed, $\hat\Phi$ is then also an epimorphism, and therefore an isomorphism.
It is Borel because $\Rmono$ is Borel and, by Lemma~\ref{L.B.1}, so is the additional clause.
\begin{proof}[Proof of Proposition \ref{P.isomorphism}] (1) Clear, since
$$
\gamma\precsim \gamma'\iff (\exists\Phi:{\mathbb N}\to{\mathbb N}^{\mathbb N})
\Rmono{}(\gamma,\gamma',\Phi).
$$
(2) We have
$$
C^*(\gamma)\simeq C^*(\gamma')\iff (\exists \Phi:{\mathbb N}\to{\mathbb N}^{{\mathbb N}})\Riso(\gamma,\gamma',\Phi),
$$
giving an analytic definition of $\simeq^{\Gamma}$, and so $\Gamma$ is a good parameterization. The last assertion follows from the equivalence of the four parameterizations.
\end{proof}
\begin{remark}
Note that the equivalence relation $\rE$ on $\Gamma$ defined by $\gamma\rE \delta$ if and only if there is a unitary $u\in\mathcal B(H)$ such that $u C^*(\gamma)u^*=C^*(\delta)$ is a proper subrelation of $\simeq^{\Gamma}$ and that $\rE$ is induced by a continuous action of the unitary group. We do not know whether the relation $\simeq^\Gamma$ is an orbit equivalence relation induced by the action of a Polish group on $\Gamma$; see the discussion at the end of \S9.
\end{remark}
For future use, let us note the following.
\begin{lemma} \label{L.B.2} The set $Y$ of all $\gamma\in \Gamma$ such that $\gamma_n$, $n\in {\mathbb N}$, is a
Cauchy sequence (in norm) is Borel.
The function $\Psi\colon Y\to \mathcal B(H)$ that assigns the limit to a Cauchy
sequence is Borel.
\end{lemma}
\begin{proof} We have $\gamma\in Y$ if and only if $(\forall \varepsilon>0)(\exists m)
(\forall n\geq m) \|\gamma_m-\gamma_n\|<\varepsilon$. By Lemma~\ref{L.B.1}, the
conclusion follows.
It suffices to show that the graph $G$ of $\Psi$ is a Borel subset of
$\mathcal B(H)^{{\mathbb N}}\times \mathcal B(H)$. But $(\gamma,a)\in G$ if and only if for all
$\varepsilon>0$ there is $m$ such that for all $n\geq m$ we have $\|\gamma_n-a\|\leq
\varepsilon$, which is by Lemma~\ref{L.B.1} a Borel set.
\end{proof}
\subsection{Directed systems, inductive limits, and $\Rdir$.}
A directed system of C*-algebras can be coded by a sequence $(\gamma_i)_{i\in {\mathbb N}}$ in $\Gamma$
and a sequence $\Phi_i: {\mathbb N}\to {\mathbb N}^{{\mathbb N}}$, for $i\in {\mathbb N}$, such that
$$
(\forall i\in{\mathbb N}) \Rhom{}(\gamma_i,\gamma_{i+1},\Phi_i).
$$
The set $\Rdir{}\subseteq \Gamma^{\mathbb N}\times(({\mathbb N}^{\mathbb N})^{\mathbb N})^{\mathbb N}$ of codes for inductive systems is defined by
$$
((\gamma_i)_{i\in {\mathbb N}},(\Phi_i)_{i\in {\mathbb N}})\in \Rdir{}
\iff (\forall i\in{\mathbb N}) \Rhom{}(\gamma_i,\gamma_{i+1},\Phi_i)
$$
and is clearly Borel.
\begin{prop} \label{P.Directed}
There are Borel maps $\LIM: \Rdir{}\to \Gamma$ and $\Psi_i:\Rdir{}\to ({\mathbb N}^{\mathbb N})^{\mathbb N}$ such that
$$
C^*(\LIM((\gamma_i)_{i\in {\mathbb N}},(\Phi_i)_{i\in {\mathbb N}}))\simeq\lim_{i\to\infty} (C^*(\gamma_i),\hat\Phi_i)
$$
and it holds that
$$
(\forall n\in{\mathbb N}) \Rhom{}(\gamma_n,\LIM((\gamma_i)_{i\in {\mathbb N}},(\Phi_i)_{i\in {\mathbb N}}),\Psi_n((\gamma_i)_{i\in {\mathbb N}},(\Phi_i)_{i\in {\mathbb N}}))
$$
and $\hat\Psi_n((\gamma_i)_{i\in {\mathbb N}},(\Phi_i)_{i\in {\mathbb N}}):
C^*(\gamma_n)\to C^*(\LIM((\gamma_i)_{i\in {\mathbb N}},(\Phi_i)_{i\in {\mathbb N}}))$ satisfies
$$
\hat\Psi_{n+1}\circ\hat\Phi_{n}=\hat\Psi_{n},
$$
i.e. the diagram
\[
\diagram
C^*(\gamma_{n+1}) \rrto^{\hat\Psi_{n+1}}
&& \LIM((\gamma_i)_{i\in {\mathbb N}},(\Phi_i)_{i\in {\mathbb N}})\\
C^*(\gamma_n)
\uto^{\hat\Phi_n}
\urrto^{\hat\Psi_n}
\enddiagram
\]
commutes.
\end{prop}
We start by noting the simpler Lemma~\ref{L.1step} below.
The constant $i$ sequence is denoted $\overline i$.
For $(\gamma,\gamma',\Phi)\in \Rhom$ define
the function $f:\Rhom{}\to\Gamma$ by
$$
f(\gamma,\gamma',\Phi)(m)=\left\{\begin{array}{ll}
\gamma'_k & \text{ if } m=3^k \text{ for } k\geq 1\\
a & \text{ if } m=2^k\text{ and } \lim_{i\to\infty}\gamma'_{\Phi(k)(i)}=a\\
0 & \text{ otherwise.}
\end{array}\right.
$$
Then the following is obvious:
\begin{lemma}\label{L.1step}
The function $f$
introduced above is Borel and
for all $(\gamma,\gamma',\Phi)\in \Rhom{}$ we have
$$
C^*(\gamma')\simeq C^*(f(\gamma,\gamma',\Phi)).
$$
Moreover, for $\Psi,\Phi':{\mathbb N}\to{\mathbb N}^{\mathbb N}$ defined by $\Phi'(m)=\overline{2^m}$ and $\Psi(m)=\overline{3^m}$ for $m\geq 1$, we have that
$\Riso(\gamma',f(\gamma,\gamma',\Phi),\Psi)$, $\Rhom{}(\gamma,f(\gamma,\gamma',\Phi),\Phi')$ and
$$
\hat\Psi\circ\hat\Phi=\hat\Phi'.
$$
\end{lemma}
\begin{proof}[Proof of Proposition~\ref{P.Directed}] By Proposition~\ref{p.gammaxiequiv}
it will suffice to define $\LIM$ with the range in $\Xi$.
Fix $((\gamma_i)_{i\in {\mathbb N}},(\Phi_i)_{i\in {\mathbb N}})\in \Rdir{}$, let
$$
A=\lim_{i\to\infty} (C^*(\gamma_i),\hat\Phi_i),
$$
and let $f_i:C^*(\gamma_i)\to A$ be the connecting maps satisfying
$f_{i+1}\circ\hat\Phi_i=f_i$. By Lemma~\ref{L.1step} we may assume
that for all $m\in{\mathbb N}$ the sequence $\Phi_i(m)$, for $i\in{\mathbb N}$, is constant.
Let $\varphi_i(m)=\Phi_i(m)(1)$, define
$\varphi_{i,j}=\varphi_i\circ\cdots\circ\varphi_j$ for $j<i$, and
let $\beta:{\mathbb N}\to{\mathbb N}\times{\mathbb N}$ be a fixed bijection. Let
$\tilde\gamma\in\Gamma$ be defined by
$$
\tilde\gamma(i)=f_{\beta(i)_0}(\gamma_{\beta(i)_0}(\beta(i)_1)).
$$
Then a code $\delta\in\Xi$ for $\tilde\gamma$ is given by
$$
\delta(i)=\lim_{k\to\infty}\|\mathfrak p_{\varphi_{k,\beta(i)_0}(\beta(i)_1)}(\gamma_k)\|
$$
and if we define
$$
\Psi_j((\gamma_i)_{i\in {\mathbb N}},(\Phi_i)_{i\in {\mathbb N}})(m)(n)=k\iff \beta(k)_0=j\wedge
\beta(k)_1=m
$$
then $\Psi_j$ is a code for $f_j$.
\end{proof}
Next we prove that most standard constructions and relations that occur in C$^*$-algebra theory correspond to Borel maps and relations in the parameterizations we have introduced. The first lemma follows easily from the definitions, and we leave the proof to the reader.
\begin{lemma}\label{L.B.3.0} The following maps are Borel.
\begin{enumerate}
\item $\mathcal B(H)\times \mathcal B(H)\to\mathcal B(H):(a,b)\mapsto ab$,
\item $\mathcal B(H)\times \mathcal B(H)\to\mathcal B(H): (a,b)\mapsto a+b$,
\item $\mathcal B(H)\times \mathbb C\to\mathcal B(H): (a,\lambda)\mapsto \lambda a$,
\item $\mathcal B(H)\to\mathcal B(H): a\mapsto a^*$,
\item $\mathcal B(H)\times \mathcal B(H)\to\mathcal B(H)\otimes_{\min}\mathcal B(H): (a,b)\mapsto a\otimes b
$ (where $\mathcal B(H)\otimes_{\min} \mathcal B(H)$ is identified
with $\mathcal B(H)$ by fixing a $*$-isomorphism),
\item $\mathcal B(H)\times \mathcal B(H)\to M_2(\mathcal B(H)): (a,b)\mapsto \begin{pmatrix} a & 0 \\ 0 & b
\end{pmatrix}$ (where $M_2(\mathcal B(H))$ is identified with $\mathcal B(H)$
by fixing a $*$-isomorphism).
\end{enumerate}
\end{lemma}
\begin{lemma}\label{L.B.3} The following subsets of $\mathcal B(H)$ and $\mathcal B(H)^2$ are Borel.
\begin{enumerate}
\item $\{(a,b): ab=ba\}$.
\item \label{L.B.3.sa} $\mathcal B(H)_{\sa}=\{a: a=a^*\}$.
\item $\mathcal B(H)_+=\{a\in \mathcal B(H)_{\sa}: a\geq 0\}$.
\item \label{L.B.3.projection} ${\mathcal P}(\mathcal B(H))=\{a\in \mathcal B(H): a$ is a projection$\}$.
\item \label{L.B.3.isometry} $\{a: a$ is a partial isometry$\}$.
\item \label{L.B.3.invertible} $\{a: a$ is invertible$\}$.
\item \label{L.B.3.normal} $\{a: a$ is normal$\}$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) and \eqref{L.B.3.sa} are immediate since the maps $(a,b)\mapsto ab-ba$ and $a\mapsto a-a^*$ are Borel by Lemma~\ref{L.B.3.0}, and (3) follows since $a\geq 0$ if and only if $a=a^*$ and $(a\xi_n|\xi_n)\geq 0$ for all $n$, where $(\xi_n)$ is a fixed dense sequence in $H$.
\eqref{L.B.3.projection} Immediate since the maps $a\mapsto a-a^2$ and
$a\mapsto a-a^*$ are Borel measurable.
\eqref{L.B.3.isometry} Since $a$ is a partial isometry if and only if $a^*a$
and $aa^*$ are both projections, this follows from the Borel-measurability of
these maps and~\eqref{L.B.3.projection}.
\eqref{L.B.3.invertible} Let $\xi_n$ be a countable dense subset of the unit
ball of $\mathcal B(H)$. Then $a$ is invertible if and only if there is $\varepsilon>0$ such
that $\|a\xi_n\|\geq \varepsilon$ and $\|a^*\xi_n\|\geq \varepsilon$ for all $n$ (\cite[3.2.6]{Pede:Analysis}).
\eqref{L.B.3.normal} Immediate since the map $a\mapsto [a,a^*]$ is Borel.
\end{proof}
Next we consider formation of the matrix algebra over a C$^*$-algebra. For this purpose, fix bijections $\beta_n:{\mathbb N}\to {\mathbb N}^{n\times n}$ for each $n$. While the next Lemma is in some sense a special case of the Lemma that follows it (which deals with tensor products) the formulation given below will be used later for the proof of Theorem \ref{mainintro}.
\begin{lemma}\label{L.matrix}
For each $n\in{\mathbb N}$ there are Borel functions $M_n:\Gamma(H)\to \Gamma(H^n)$ and $\theta_n:\Gamma(H)\times({\mathbb N}^{\mathbb N})^n\to{\mathbb N}^{\mathbb N}$ such that
(1)
$$
M_n(\gamma)=(\left(\begin{array}{ccc}
\gamma_{\beta_n(l)(1,1)} &\cdots & \gamma_{\beta_n(l)(1,n)}\\
\vdots & & \vdots\\
\gamma_{\beta_n(l)(n,1)} &\cdots & \gamma_{\beta_n(l)(n,n)}\\
\end{array}\right):l\in{\mathbb N})
$$
(2) If $(\gamma,\gamma,\Psi_i)\in \Rhomx{H}$ for all $i=1,\ldots, n$ then
$$
(\gamma,M_n(\gamma),\theta_n(\gamma,\Psi_1,\ldots,\Psi_n))\in \Rhomx{H,H^n}
$$
and
$$
\theta_n(\gamma,\Psi_1,\ldots,\Psi_n)(k)(i)=m\implies \mathfrak p_m(M_n(\gamma))=\diag(\gamma(\Psi_1(k)(i)),\ldots,\gamma(\Psi_n(k)(i))).
$$
That is, $\theta_n(\gamma,\Psi_1,\ldots,\Psi_n)$ codes the diagonal embedding twisted by the homomorphisms $\hat\Psi_i$.
\end{lemma}
\begin{proof}
(1) is clear. (2) follows by letting $\theta_n(\gamma,\Psi_1,\ldots,\Psi_n)(k)(i)=m$ if and only if $m$ is the least such that
$$
\mathfrak p_m(M_n(\gamma))=\diag(\gamma(\Psi_1(k)(i)),\ldots,\gamma(\Psi_n(k)(i))).
$$
\end{proof}
\begin{lemma}\label{L.tensor} There is a Borel-measurable map $\Tensor\colon
\Gamma\times \Gamma\to \Gamma$ such that
\[
C^*(\Tensor(\gamma,\delta))\cong C^*(\gamma)\otimes_{\min}
C^*(\delta)
\]
for all $\gamma$ and $\delta$ in $\Gamma$.
Moreover, there is a Borel-measurable map
$\Tenx\colon \Gamma\times\Gamma\to \Gamma$ such that if $1\in C^*(\delta)$
then $C^*(\Tenx(\gamma,\delta))$ is the canonical copy of $C^*(\gamma)$ inside
$C^*(\gamma)\otimes_{\min} C^*(\delta)$.
\end{lemma}
\begin{proof}
Fix a *-isomorphism $\Psi\colon \mathcal B(H)\otimes_{\min} \mathcal B(H)\to \mathcal B(H) $. Define
$$
\Tensor(\gamma,\delta)_{2^m(2n+1)}=\Psi(\gamma_m \otimes \delta_n).
$$
Then $\Tensor$ is clearly Borel and the algebra generated by
$\Tensor(\gamma,\delta)$ is $C^*(\gamma)\otimes_{\min} C^*(\delta)$. For the moreover part, $\Tenx(\gamma,\delta)_m=\Psi(\gamma_m\otimes 1)$ clearly works.
\end{proof}
It is not difficult to see that the set $\{\gamma\in \Gamma: 1\in C^*(\gamma)\}$ is Borel (cf. Lemma~\ref{l.unital}) but we shall not need this fact.
\begin{lemma}\label{L.C(X,A)} If $X$ is a second countable, locally compact Hausdorff
space then there is a Borel measurable map $\Phi\colon \Gamma\to
\Gamma$ such that
\[
C^*(\Phi(\gamma))\cong C_0(X,C^*(\gamma))
\]
for all $\gamma\in\Gamma$. In particular, letting $X=(0,1)$, we conclude that
there is a Borel map $\Phi$ such that $\Phi(\gamma)$ is isomorphic to
the suspension of $C^*(\gamma)$.
\end{lemma}
\begin{proof}
This is immediate from Lemma~\ref{L.tensor} since $C_0(X,A)\cong C_0(X)\otimes_{\min} A$.
\end{proof}
\begin{lemma}\label{L.unitization}
There is a Borel function $\Unit\colon \Gamma\to \Gamma$ such that
$C^*(\Unit(\gamma))$ is isomorphic to the unitization of
$C^*(\gamma)$.
\end{lemma}
\begin{proof} Fix a partial isometry $v$ such that $vv^*=1$ and $v^*v$ is a projection onto a space of codimension 1. Let $\Unit(\gamma)_0=1$ and
$\Unit(\gamma)_{n+1}=v^* \gamma_n v$. Then $C^*(\Unit(\gamma))$ is as
required.
\end{proof}
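For the record, here is a sketch of why this works (a routine verification under the stated choice of $v$): the map $a+\lambda 1\mapsto v^*av+\lambda 1$ from the unitization of $C^*(\gamma)$ to $C^*(\Unit(\gamma))$ is multiplicative, since $vv^*=1$ gives
$$
(v^*av+\lambda 1)(v^*bv+\mu 1)=v^*(ab)v+\lambda\, v^*bv+\mu\, v^*av+\lambda\mu 1,
$$
it is clearly $*$-preserving with dense range, and it is injective because $v^*av$ vanishes on the one-dimensional space $(v^*vH)^\perp$, on which $v^*av+\lambda 1$ therefore acts as the scalar $\lambda$.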
\subsection{Effective enumerations}
\begin{lemma}\label{L.E.1}
\
\begin{enumerate}[\indent $(1)$]
\item \label{L.E.1.2} There is a Borel map $\Sa\colon \Gamma\to \Gamma$ such that for
every $\gamma\in \Gamma$ the set $\{\Sa(\gamma)(n): n\in {\mathbb N}\}$ is a
norm-dense subset of the set of self-adjoint elements
of~$C^*(\gamma)$.
\item \label{L.E.1.4} There is a Borel map $\Un:\Gamma\to\Gamma$ such that the set $\{\Un(\gamma)(n):n\in{\mathbb N}\}$ is norm-dense in the set of unitaries in $C^*(\gamma)$ whenever $C^*(\gamma)$ is unital.
\item \label{L.E.1.1} There is a Borel map $\Pos\colon \Gamma\to \Gamma$
such that for every $\gamma\in \Gamma$ the set $\{\Pos(\gamma)(n):n\in {\mathbb N}\}$ is a norm-dense subset of the set of positive elements
of~$C^*(\gamma)$.
\item \label{L.E.1.3} There is a Borel map $\Proj\colon \Gamma\to \Gamma$ such that for every
$\gamma\in \Gamma$ the set $\{\Proj(\gamma)(n): n\in {\mathbb N}\}$ is a
norm-dense subset of the set of projections of~$C^*(\gamma)$.
\end{enumerate}
\end{lemma}
\begin{proof}
\eqref{L.E.1.2} Let $\Sa(\gamma)(n)=\frac 12(\mathfrak p_n(\gamma)+\mathfrak p_n(\gamma)^*)$ for all $n$.
Clearly each $\Sa(\gamma)(n)$ is self-adjoint. If $a\in C^*(\gamma)$ is self-adjoint
then $\|a-\Sa(\gamma)(n)\|\leq \|a-\mathfrak p_n(\gamma)\|$. Therefore $\{\Sa(\gamma)(n): n\in{\mathbb N}\}$ is a norm-dense
subset of the set of self-adjoint elements of $C^*(\gamma)$.
\eqref{L.E.1.4} Let $\Un(\gamma)(n)=\exp(i\Sa(\gamma)(n))$.
\eqref{L.E.1.1} Let
$\Pos(\gamma)(n)=\mathfrak p_n(\gamma)^* \mathfrak p_n(\gamma)$ for all $n$.
Pick a positive $a\in C^*(\gamma)$ and fix $\varepsilon>0$.
Pick $b\in C^*(\gamma)$ such that $a=b^*b$. Let $n$ be such that
$\|\mathfrak p_n(\gamma)-b\|<\varepsilon/(2\|b\|)$ and $\|\mathfrak p_n(\gamma)\|\leq \|b\|$.
Then
\[
\mathfrak p_n(\gamma)^*\mathfrak p_n(\gamma)-a=(\mathfrak p_n(\gamma)^*-b^*)\mathfrak p_n(\gamma)+
b^*(\mathfrak p_n(\gamma)-b)
\]
and the right hand side clearly has norm $<\varepsilon$.
\eqref{L.E.1.3} Fix a function $f\colon \mathbb R\to [0,1]$ such that the
iterates $f^n$, $n\in {\mathbb N}$, of $f$ converge uniformly on
$(-\infty,1/4]\cup [3/4,\infty)$ to the function $g$ defined by
$g(x)=0$ for $x\leq 1/4$ and $g(x)=1$ for $x\geq 3/4$. For example, we can take
\[
f(x)=\begin{cases} 0, & x\leq 0\\
\frac x2, & 0<x\leq \frac 14\\
\frac 32 x -\frac 14,& \frac 14<x\leq \frac 34\\
1-(1-x)/2, & \frac 34 <x\leq 1\\
1, & x>1.
\end{cases}
\]
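For the record, a quick check that this particular $f$ works (routine, and not needed elsewhere): $f$ fixes $(-\infty,0]$ at $0$ and $[1,\infty)$ at $1$, while for $x\in[0,1/4]$ we have $f^n(x)=2^{-n}x$ and for $x\in[3/4,1]$ we have $1-f^n(x)=2^{-n}(1-x)$. Hence
$$
\sup\{|f^n(x)-g(x)|: x\in(-\infty,1/4]\cup[3/4,\infty)\}\leq 2^{-n-2},
$$
so the convergence is uniform on this set.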
The set $X=\mathcal B(H)_{\sa}$ is a Borel subset of $\mathcal B(H)$ by
Lemma~\ref{L.B.3}. Note that by \eqref{L.E.1.1} the set $\{\Pos(\gamma)(n): n\in {\mathbb N}\}$ is
dense in the set of positive elements of $C^*(\gamma)$; in particular every projection of $C^*(\gamma)$ is a limit of elements of this set. Let $\Psi\colon X\to \mathcal B(H)^{\mathbb N}$ be
defined~by
\[
\Psi(a)(n)=f^n(a).
\]
By Lemma~\ref{L.B.2} the set $Y=\{b\in X: \Psi(b)$ is Cauchy$\}$ is Borel. For $n$ such that $\Pos(\gamma)(n)\in Y$ let $\Proj(\gamma)(n)$ be the limit of the sequence $\Psi(\Pos(\gamma)(n))$, and let $\Proj(\gamma)(n)=0$ otherwise. By Lemma~\ref{L.B.2} again, $\Proj$ is Borel.
Fix $\gamma$ and $n$. Clearly, the operator $\Proj(\gamma)(n)$ is
positive and its spectrum is a subset of $\{0,1\}$. Therefore it is a
projection in $C^*(\gamma)$. We need to check that for every
projection $p\in C^*(\gamma)$ and $\varepsilon>0$ there is $n$ such that
$\|\Proj(\gamma)(n)-p\|<\varepsilon$.
We may assume $\varepsilon<1/4$. Pick $n$ so that $\|\Pos(\gamma)(n)-p\|<\varepsilon$.
Since $\varepsilon<1/4$, the spectrum of $\Pos(\gamma)(n)$
is included in $(-\varepsilon,\varepsilon)\cup (1-\varepsilon,1+\varepsilon)\subseteq (-1/4,1/4)\cup (3/4,5/4)$ and therefore the sequence
$f^j(\Pos(\gamma)(n))$, $j\in {\mathbb N}$, converges to a projection, $q$.
Clearly $\|p-q\|<2\varepsilon$.
\end{proof}
Recall from \ref{ss.paramunital} that $\Gammau$ denotes the set of $\gamma\in\Gamma$ parameterizing unital C$^*$-algebras. From the previous Lemma we now obtain:
\begin{lemma} \label{l.unital}
The set $\Gammau$ is Borel, and there is a Borel map $u:\Gammau\to {\mathbb N}$ such that $\Proj(\gamma)(u(\gamma))$ is the unit in $C^*(\gamma)$.
\end{lemma}
\begin{proof} For projections $p$ and $q$ we have that $p\leq q$ and
$p\neq q$ implies $\|p-q\|=1$. Therefore $C^*(\gamma)$ is unital if
and only if $\Proj(\gamma)(n)$ is its unit for some $n$. Also, $p$ is a
unit in $A$ if and only if $pa=a=ap$ when $a$ ranges over a dense
subset of $A$. Therefore $C^*(\gamma)$ is unital if and only if there
is $m$ such that for all $n$ we have
\begin{equation}\label{eq.unit}
\Proj(\gamma)(m)\mathfrak p_n(\gamma)=\mathfrak p_n(\gamma)\Proj(\gamma)(m)=\mathfrak p_n(\gamma).
\end{equation} To define $u:\Gammau\to{\mathbb N}$, simply let $u(\gamma)$ be the least $m\in{\mathbb N}$ such that \eqref{eq.unit} holds for all $n\in{\mathbb N}$.
\end{proof}
\begin{corollary}\label{c.paramunital}
The parameterizations $\hat\Gamma_{\Au}$, $\Xi_{\Au}$, $\Gammau$, $\hatGammau$, $\Xiu$ and $\hatXiu$ of the unital separable C$^*$-algebras are all equivalent.
\end{corollary}
\begin{proof}
It is clear from the previous Lemma and Propositions \ref{p.xihatxiequiv} and \ref{p.gammaxiequiv} that $\Gammau$, $\hatGammau$, $\Xiu$ and $\hatXiu$ are equivalent standard Borel parameterizations. On the other hand, it is easy to see that Lemma \ref{l.injection} holds for $Y=\hat\Gamma_{\Au}$, and so it is enough to show weak equivalence of $\Gammau$ and $\hat\Gamma_{\Au}$. In one direction, the natural map $\hat\Gamma_{\Au}\to\Gamma:f\mapsto\gamma(f)$ given by $\gamma(f)(n)=f(\mathfrak q_n)$ clearly works. The other direction can be proven by a GNS argument analogous to the proof of Proposition \ref{p.gammaxiequiv}.
\end{proof}
\subsection{Effros Borel structure}\label{S.Eff}
If $X$ is a Polish space then $F(X)$ denotes the space of all
closed subsets of $X$ equipped with the $\sigma$-algebra generated by the sets
\[
\{K\in F(X): K\cap U\neq \emptyset\}
\]
for $U\subseteq X$ open.
This is a standard Borel space (\cite[\S 12.C]{Ke:Classical})
and its subspaces are typically used as Borel spaces of
separable Banach spaces, von Neumann algebras, etc. Since by a result of
Junge and Pisier there is no universal separable C*-algebra (\cite{JunPis}), the space of subalgebras of a given separable C*-algebra cannot serve as a Borel space of all separable C*-algebras.
However, the subspace of $F({\mathcal O}_2)$ consisting of subalgebras
of ${\mathcal O}_2$ (where ${\mathcal O}_2$ is the Cuntz algebra with two generators)
is, by a result of Kirchberg, a Borel space of all exact C*-algebras (see \S\ref{s.bga} and cf.
Lemma~\ref{l.borelassign}).
For $A\subseteq X\times Y$ and $x\in X$ let $A_x$ denote the (vertical)
\emph{section} of $A$ at $x$, that is, $A_x=\{y: (x,y)\in A\}$. Below (and later) we will need the following well-known fact (see \cite[28.8]{Ke:Classical}).
\begin{lemma}\label{L.Effros} Let $X$ and $Y$ be Polish spaces and assume that
$A\subseteq X\times Y$ is Borel and all sections $A_x$ are compact. Then the set $A^+=\{(x,A_x): x\in X\}$ is a Borel subset of
$X\times F(Y)$, and the map $x\mapsto A_x$ is Borel.
\end{lemma}
\subsection{Coding states}\label{ss.codingstates}
Roughly following \cite[\S 2]{Kec:C*}, we shall describe a coding of states
on $C^*(\gamma)$. If $\phi$ is a functional on $C^*(\gamma)$ then,
being norm-continuous, it is uniquely determined by its restriction
to $\{\mathfrak p_n(\gamma): n\in {\mathbb N}\}$. Also, writing $\Delta(r)=\{z\in
\mathbb C: |z|\leq r\}$ we have $\|\phi\|\leq 1$ if and only if for every
$n$ we have $\phi(\mathfrak p_n(\gamma))\in \Delta(\|\mathfrak p_n(\gamma)\|)$.
Therefore we can identify $\phi$ with $\hat\phi\in \prod_n \Delta(\|\mathfrak p_n(\gamma)\|)$.
Clearly, the set of $\hat\phi$ such that $\phi$ is additive is compact in the product
metric topology. Since $\phi$ is positive if and only if $\phi(\mathfrak p_n(\gamma)^*\mathfrak p_n(\gamma))\geq 0$
for all $n$, the set of all states is also compact.
Similarly, the set of all traces is compact.
By the obvious rescaling of the coordinates, we can identify $\prod_n \Delta(\|\mathfrak p_n(\gamma)\|)$
with $\Delta^{\bbN}$ (writing $\Delta$ for $\Delta(1)$).
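One explicit Borel choice of this rescaling (any other Borel choice would do equally well): identify $\hat\phi\in\prod_n \Delta(\|\mathfrak p_n(\gamma)\|)$ with the element of $\Delta^{\bbN}$ whose $n$-th coordinate is
$$
\hat\phi(n)/\|\mathfrak p_n(\gamma)\|\quad\text{if }\|\mathfrak p_n(\gamma)\|\neq 0,\qquad 0\quad\text{otherwise.}
$$
Since $\gamma\mapsto\|\mathfrak p_n(\gamma)\|$ is Borel by Lemma~\ref{L.B.1}, this identification is Borel uniformly in $\gamma$.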
Consider the space $\mathbb K=K_c(\Delta^{\bbN})$ of compact
subsets of $\Delta^{\bbN}$ and its subspace $\Kconv$ of compact convex subsets of $\Delta^{\bbN}$.
\begin{lemma} \label{L.SPT}
With the above identifications, there are Borel maps
${\mathbb S}\colon \Gamma\to \mathbb K$, ${\mathbb P}\colon \Gamma\to \mathbb K$ and
${\mathbb T}\colon \Gamma\to \mathbb K$ such that ${\mathbb S}(\gamma)$ is the set of
all states on $C^*(\gamma)$, ${\mathbb P}(\gamma)$ is the closure of the set of all pure
states on $C^*(\gamma)$ and ${\mathbb T}(\gamma)$ is the set of all tracial
states on $C^*(\gamma)$.
\end{lemma}
\begin{proof}
For ${\mathbb S}$ and ${\mathbb T}$ this is obvious from the above discussion and Lemma~\ref{L.Effros}.
The existence of ${\mathbb P}$ can be proved by a proof similar to that of
\cite[Lemma~2.2]{Kec:C*}.
\end{proof}
\begin{lemma} \label{L.States} There are Borel maps
$\State\colon \Gamma\to (\Delta^{\bbN})^{{\mathbb N}}$, $\Pure\colon
\Gamma\to (\Delta^{\bbN})^{{\mathbb N}}$ and $\Trace\colon \Gamma\to
(\Delta^{\bbN})^{{\mathbb N}}$ such that $\State(\gamma)(m)$, for $m\in
{\mathbb N}$, is a dense subset of ${\mathbb S}(\gamma)$, $\Pure(\gamma)(m)$, for
$m\in {\mathbb N}$, is a dense subset of ${\mathbb P}(\gamma)$ and
$\Trace(\gamma)(m)$, for $m\in {\mathbb N}$, is a dense subset of
${\mathbb T}(\gamma)$.
\end{lemma}
\begin{proof} For $\State$ and $\Trace$ this is a consequence of the
previous lemma and the Kuratowski--Ryll-Nardzewski Theorem
(\cite[Theorem~12.13]{Ke:Classical}).
The construction of the map $\Pure$ was given in \cite[Corollary~2.3]{Kec:C*}.
\end{proof}
\section{Choquet and Bauer simplexes}
\label{S.choquet}
Let us first recall the pertinent definitions.
All compact convex sets considered here will be metrizable, and
therefore without loss of generality subsets of the Hilbert cube.
For such $S$ its \emph{extreme boundary}, denoted $\partial S$,
is the set of its extremal points. By the Krein--Milman theorem
$S$ is the closure of the convex hull of $\partial S$.
A {\it metrizable Choquet simplex} is a compact convex set $S$ as above with the following property: for every point $x$ in
$S$ there exists a unique probability boundary measure $\mu$
(i.e., a measure concentrated on $\partial S$)
such that $x$ is the barycentre of $\mu$.
This notion has a number of equivalent definitions, see \cite[\S II.3]{alfsen71}.
The isomorphism relation in the category of Choquet simplexes is affine homeomorphism.
The extreme boundary of a Choquet simplex $S$
is always $G_\delta$, and in the case that it is compact $S$ is said to be a \emph{Bauer simplex}.
It is not difficult to see that in this case $S$ is isomorphic to the space $P(\partial S)$ of Borel probability measures on $\partial S$. In particular Bauer simplexes $S$ and $L$ are isomorphic if and only if
their extreme boundaries $\partial S$ and $\partial L$ are homeomorphic.
Let $\Delta_{n}$ denote the $n$-simplex ($n\in{\mathbb N}$).
Every metrizable Choquet simplex $S$ can be represented as
an inverse limit of finite-dimensional Choquet simplexes
\begin{equation}\label{invlim}
S\simeq\lim_{\leftarrow} (\Delta_{n_i},\psi_i),
\end{equation}
where $\psi_i:\Delta_{n_i} \to \Delta_{n_{i-1}}$ is an affine surjection
for each $i \in \mathbb{N}$.
This was proved in \cite[Corollary to Theorem~5.2]{LaLi} and we shall prove a Borel
version of this result in Lemma~\ref{L.Choq.2}.
\subsubsection{Order unit spaces} \label{OrderUnit}
Let $(A,A^+)$ be an ordered real Banach space.
Here $A^+$ is a cone in $A$ and the order is defined by $a\leq b$ if and only if $b-a\in A^+$.
Such a space is \emph{Archimedean} if for every $a\in A$ the set $\{ra: r\in \mathbb R^+\}$ has
an upper bound precisely when $a$ is \emph{negative}, i.e., $a\leq 0$.
An element $1_A\in A$ is an \emph{order unit} if for every $a\in A$ there is $r\in \mathbb R^+$ such that
$-r1_A\leq a\leq r1_A$.
We say that an Archimedean ordered vector space with a distinguished unit $(A,A^+,1_A)$ is
an \emph{order unit space}, and define a norm on $A$ by
\[
\|a\|=\inf\{r>0: -r1_A\leq a\leq r1_A\}.
\]
Our interest in order unit spaces stems from the fact that the
category of separable complete order unit spaces is the dual category to the category of metrizable Choquet simplexes. For a Choquet simplex $S$, the associated dual object is
$\Aff(S)$, the real-valued affine functions on $S$, with the natural
ordering and order unit set to be the constant function with value 1.
Conversely, given an order unit space $(A,A^+,1_A)$, the associated dual object is the space of positive real functionals $\phi$ on $A$ of norm one, equipped with the weak*-topology.
In the case of Bauer simplexes $S$ there is also a natural identification of the
complete separable order unit spaces $\Aff(S)$ and $C_\mathbb{R}(\partial S)$
obtained by restriction.
In particular,
for the simplex $\Delta_n$ we have
\[
\Aff(\Delta_n) \cong (\mathbb{R}^{n+1}, (\mathbb{R}^+)^{n+1}, (1,1,\ldots,1)).
\]
Setting $e_0$ to be the origin in $\mathbb{R}^n$, the coordinate functions $f_k:\Delta_n \to \mathbb{R}$, $0 \leq k \leq n$, given by the formula
\[
f_k(e_i) = \left \{ \begin{array}{ll} 1 & i=k \\ 0 & i \neq k \end{array} \right.
\]
on vertices and extended affinely, form a canonical basis for $\Aff(\Delta_n)$.
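In particular, every $g\in\Aff(\Delta_n)$ is recovered from its values at the vertices via
$$
g=\sum_{k=0}^n g(e_k)f_k,
$$
which makes the identification $\Aff(\Delta_n)\cong\mathbb{R}^{n+1}$ above explicit.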
Let $X$ and $Y$ be separable order unit spaces
with order units $1_X$ and $1_Y$.
Let as usual $L(X,Y)$ denote the set of linear, continuous maps, and let $L_1(X,Y)=\{T\in L(X,Y): \|T\|\leq 1\}$. The space $L_1(X,Y)$ is a Polish space when given the strong topology.
The set of order unit preserving maps in $L_1$,
$$
L_{\rm ou}(X,Y)=\{T\in L_1(X,Y): T(1_X)=1_Y\text{ and }(\forall x,x'\in X)\, x\leq x'\implies T(x) \leq T(x')\}
$$
is a closed subset of $L_1(X,Y)$, and is therefore Polish in its own right.
(Our definition of $L_{\rm ou}$ involves some redundancy, since it is
a standard fact that any $T\in L_1(X,Y)$ with $T(1_X)=1_Y$ is automatically order preserving.)
\subsection{Parameterizing metrizable Choquet simplexes and their duals.}\label{simplexbasic}
\subsubsection{The space $\Lambda$} \label{S.Lambda}
If $X=\Aff(K)$ and $Y=\Aff(L)$ for metrizable Choquet simplexes $K$ and $L$, then $L_{\rm ou}(X,Y)$ is the set of morphisms dual to the affine continuous maps from $L$ to $K$. It follows from (\ref{invlim}) that the separable complete order unit spaces all arise as direct limits of sequences
\[
\mathbb{R}^{m_1} \stackrel{\phi_1}{\longrightarrow} \mathbb{R}^{m_2} \stackrel{\phi_2}{\longrightarrow} \mathbb{R}^{m_3} \stackrel{\phi_3}{\longrightarrow} \cdots
\]
with $\phi_n \in L_{\rm ou}(\mathbb{R}^{m_n},\mathbb{R}^{m_{n+1}})$.
Since we can identify an operator in
$L_{\rm ou}(\mathbb{R}^{m_n},\mathbb{R}^{m_{n+1}})$
with its matrix, $L_{\rm ou}(\mathbb{R}^{m_n},\mathbb{R}^{m_{n+1}})$
is affinely homeomorphic with a closed
subspace of $m_n\times m_{n+1}$ matrices.
We can therefore parameterize the separable complete order unit spaces (and therefore their duals) using
$$
\Lambda={\mathbb N}^{\mathbb N}\times \prod_{(m,n)\in {\mathbb N}^2} L_{\rm ou}({\mathbb R}^m,{\mathbb R}^n)
$$
in the following way: each $(f,\psi)\in \Lambda$ corresponds to the limit $X(f,\psi)$ of the system
$$
{\mathbb R}^{f(1)}\underset{\psi(f(1),f(2))}{\longrightarrow} {\mathbb R}^{f(2)}\underset{\psi(f(2),f(3))}{\longrightarrow} {\mathbb R}^{f(3)}\underset{\psi(f(3),f(4))}{\longrightarrow}\cdots.
$$
Since $\Lambda$ is a Polish space with respect to the product topology, we have what we will refer to as the \emph{standard Borel space of metrizable Choquet simplexes}. We note that our parameterization is similar in spirit to that of $\Gamma$, as we identify our objects with something akin to a dense sequence.
This is a good Borel parameterization (see Definition~\ref{d.parameterization}).
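For instance, if $f(n)=1$ for all $n$ then $L_{\rm ou}(\mathbb R^{f(n)},\mathbb R^{f(n+1)})$ contains only the identity map, so $X(f,\psi)\cong\mathbb R$ regardless of $\psi$, and the corresponding Choquet simplex is a single point.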
\subsubsection{The space $\Lambda_2$} \label{S.LLambda}
The following Borel space
of metrizable Choquet simplexes was essentially defined by Lazar and
Lindenstrauss in \cite{LaLi} where the emphasis was put on Banach spaces $X$ instead of
the simplexes $B(X^*)$. Another difference is that in \cite{LaLi} the authors studied a wider class
of spaces whose dual is $L^1$.
A simple analysis of an $n\times (n+1)$ matrix
shows that, modulo permuting the basis of $\mathbb R^{n+1}$,
every $\phi\in L_{\rm ou}(\mathbb R^n,\mathbb R^{n+1})$ is of the form
\begin{equation}\label{representing}
\textstyle\phi (x_1,x_2,\dots, x_n)=(x_1,x_2,\dots, x_n, \sum_{i=1}^n a_{i} x_i)
\end{equation}
where $0\leq a_i \leq 1$ and $\sum_i a_i=1$; conversely, every map of this form is in $L_{\rm ou}(\mathbb R^n,\mathbb R^{n+1})$.
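For instance, for $n=2$ and $a_1=a_2=\frac12$ the map
\[
\phi(x_1,x_2)=(x_1,x_2,\tfrac12 x_1+\tfrac12 x_2)
\]
is dual to the affine surjection from $\Delta_2$ onto $\Delta_1$ fixing the first two vertices and sending the third vertex to the midpoint of $\Delta_1$.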
A \emph{representing matrix} of a Choquet simplex
is a matrix $(a_{ij})_{(i,j) \in \mathbb{N}^2}$ in which all entries are non-negative,
$\sum_{i=1}^n a_{in}=1$, and $a_{in}=0$ for $i>n$.
By the above, such a matrix codes a directed system
\[
\mathbb{R}^{1} \stackrel{\phi_1}{\longrightarrow} \mathbb{R}^{2} \stackrel{\phi_2}{\longrightarrow} \mathbb{R}^{3} \stackrel{\phi_3}{\longrightarrow} \cdots
\]
where
$\phi_n (x_1,x_2,\dots, x_n)=(x_1,x_2,\dots, x_n, \sum_{i=1}^n a_{in} x_i)$.
A limit of this directed system is a Banach space $X$ and the unit ball of its dual is
a Choquet simplex with respect to its weak*-topology. This is because an inverse limit of Choquet simplexes is again a Choquet simplex. We let $\Lambda_2$ denote the set of all representing matrices, which is a closed set when viewed as a subset of $[0,\infty)^{{\mathbb N}\times{\mathbb N}}$.
On p. 184 of \cite{LaLi} the authors refer to the Borel space of representing matrices
when they point out that
``It seems to be a very difficult problem to determine the set of all representing matrices of a given separable infinite-dimensional predual of $L_1(\mu)$.
We know the answer to this question only for one such space, namely the space of Gurarii and even here the situation is not entirely clear.''
The Gurarii space is dual to the Poulsen simplex and the Lazar--Lindenstrauss characterization
alluded to above implies that a dense $G_\delta$ set of representing matrices corresponds to
the Poulsen simplex. (By removing zeros,
here we identify the matrix $a_{in}$, $i\leq n\in {\mathbb N}$ with an element of $\prod_n [0,1]^n$.)
This can be taken as a remark about the Borel complexity of a certain set,
close to the point of view of the present paper or of~\cite{Kec:C*}.
\subsubsection{The space $\Lambda_3$} \label{S.LLLambda}
Let $\delta_n=2^{-2n}$ and for
each $n$
consider the set of all $\phi\in L_{\rm ou}(\mathbb R^n,\mathbb R^{n+1})$
of the form \eqref{representing} such that all $a_{i}$ are
of the form $k2^{-2n}$ for $k\in {\mathbb N}$.
Let $\mathcal F_n$ be the set of all $n\times (n+1)$ matrices
representing such $\phi$.
Modulo permuting the basis of $\mathbb R^{n+1}$, the set $\mathcal F_n$ is $\delta_n$-dense in
$L_{\rm ou}(\mathbb R^n,\mathbb R^{n+1})$.
\begin{lemma} \label{L.Choq.-1}
For all $m\leq n$ in ${\mathbb N}$ and every $\Phi\in L_{\rm ou}(\mathbb R^m,\mathbb R^n)$
there are $F_i\in \mathcal F_i$ for $m\leq i\leq n-1$ such that
$F_{n-1}\circ \dots \circ F_{m+1}\circ F_m$
is within $2^{-m}$
from $\Phi$ composed with a permutation of the canonical basis of $\mathbb R^n$ in the operator norm.
\end{lemma}
\begin{proof}
The linear operator $\Phi$
is coded by an $n\times m$-matrix $(a_{ij})$ that has at least one entry equal to 1 in each column.
After possibly re-ordering the basis, we may then assume $a_{ii}=1$ for all $i\leq m$.
Furthermore, we can canonically write $\Phi$ as a composition of
$n-m$ operators
\[
\Phi_{n-1}\circ \Phi_{n-2}\circ \dots \circ \Phi_{m}
\]
so that $\Phi_{k}\in L_{\rm ou}(\mathbb R^k,\mathbb R^{k+1})$, and the last row of the matrix of
$\Phi_{k}$ is the $(k+1)$-st row of the matrix of $\Phi$ padded with zeros.
Now choose
$F_{n-1}, \dots, F_{m}$ in $\prod_{k=m}^{n-1} \mathcal F_k$
such that $\|F_k-\Phi_{k}\|<2^{-2k}$.
Then
$F=F_{n-1}\circ \dots\circ F_{m}$
is within $2^{-m}$ of $\Phi$ in the operator norm, as required.
\end{proof}
Let $\Lambda_3$ be the compact metric space $\prod_n \mathcal F_n$.
By identifying $\psi\in \Lambda_3$ with $(\id,\psi)\in \Lambda$, one sees that each element of $\Lambda_3$ represents a Choquet simplex.
We fix a well-ordering $\prec_{\mathcal F}$ of finite sequences of elements of $\bigcup_n \mathcal F_n$, to be
used in the proof of Lemma~\ref{L.Choq.2}.
\subsubsection{The space $\Kchoq$} Recall that $\Kconv$ is the space of all compact convex subsets of the Hilbert cube. Let $\Kchoq$ denote the space of all Choquet
simplexes $K\in \Kconv$. In Lemma~\ref{L.Choq.2} we shall show that $\Kchoq$ is a
Borel subspace of $\mathbb K$, and therefore $\Kchoq$ is the `natural' parameterization
of Choquet simplexes.
\subsubsection{Our Borel parameterizations of Choquet simplexes are weakly equivalent}
Weak equivalence of Borel parameterizations was defined in
(4') of Definition~\ref{d.parameterization}.
\begin{prop} \label{P.Choquet}
The four Borel parameterizations of Choquet simplexes introduced above,
$\Lambda$,
$\Lambda_2$,
$\Lambda_3$,
and $\Kchoq$,
are all weakly
equivalent.
\end{prop}
A proof of Proposition~\ref{P.Choquet} will take up the rest of this section.
Clearly the space $\Kconv$ is a closed subset
of $K_c(\Delta^{{\mathbb N}})$.
In the following consider the Effros Borel space $F(\CRD)$ of all closed subsets
of $\CRD$ (see \S\ref{S.Eff}).
Recall that a \emph{peaked partition of unity} in an order-unit space $(A,A^+,1_A)$
is a finite set $f_1,\dots, f_n$ of positive elements of $A$ such that $\sum_i f_i=1_A$
and $\|f_i\|=1$ for all $i$. A peaked partition of unity $P'$ \emph{refines} a peaked
partition of unity $P$ if every element of $P$ is a convex combination of
the elements of $P'$.
We shall need two facts about real Banach spaces.
For a separable Banach space $X$ let $S(X)$ denote the space of closed subspaces of $X$,
with respect to the Effros Borel structure (\S\ref{S.Eff}).
It was proved by Banach that $S(\CRD)$ is universal for separable Banach spaces, and therefore this space with respect to its Effros Borel structure can be considered as the standard Borel space of separable Banach spaces (see Lemma~\ref{L.dual.2}).
Consider the space
$\Kconv \subseteq K_c(\Delta^{\bbN})$ of compact convex
subsets of the Hilbert cube, $\Delta^{\bbN}$. With respect to the Borel structure induced by the Hausdorff metric, this is
the standard Borel space of all compact convex metrizable spaces.
For a Banach space $X$ let $B(X^*)$ denote the unit ball of the dual of $X$, with respect to the weak*-topology. Then $B(X^*)$ is a compact convex space, and it is metrizable if $X$ is
separable.
The idea in the following is taken from the proof of
\cite[Lemma~2.2]{Kec:C*}.
\begin{lemma} \label{L.dual.1}
If $X$ is a separable Banach space
then there is a Borel map $\Phi\colon S(X)\to \Kconv$ such that
$\Phi(Y)$ is affinely homeomorphic to the unit ball $B(Y^*)$ of $Y^*$, with respect to
its weak*-topology.
\end{lemma}
\begin{proof} By Kuratowski--Ryll-Nardzewski's theorem (\cite[Theorem~12.13]{Ke:Classical})
there are Borel $f_n\colon S(X)\to X$ for $n\in {\mathbb N}$
such that $\{f_n(Y): n \in {\mathbb N}\}$
is a dense subset of $Y$ for every $Y$.
Fix an enumeration $F_n=(r_{n,i}: i\leq k_n)$, for $n\in {\mathbb N}$, of finite sequences of rationals.
Define $h_n\colon S(X) \to X$ by
\[
h_n(Y)=\sum_{i=1}^{k_n} r_{n,i} f_i(Y).
\]
Then $\{h_n(Y): n\in {\mathbb N}\}$ is a dense linear
subspace of $Y$ for each $Y\in S(X)$.
Let $\Delta(Y)=\prod_n [-\|h_n(Y)\|,\|h_n(Y)\|]$.
Let $K(Y)$ be the set of all $\phi \in \Delta(Y)$ such that
\begin{enumerate}
\item [(*)] $F_i + F_j = F_l$ (where the sum is taken pointwise) implies $\phi(i)+\phi(j)=\phi(l)$,
for all $i,j$ and $l$.
\end{enumerate}
Such a $\phi$ defines a functional of norm $\leq 1$ on a dense subspace of $Y$,
and therefore extends to an element of $B(Y^*)$.
Moreover, every functional in $B(Y^*)$ is obtained in this way.
Therefore the set of $\phi$ satisfying (*)
is affinely homeomorphic to $B(Y^*)$.
It remains to rescale $K(Y)$.
Let $\Phi(Y)=\{\phi \in \Delta^{\bbN}: (\phi(n)\|h_n(Y)\|)_{n\in {\mathbb N}}\in K(Y)\}$.
Then $\phi\in \Phi(Y)$ if and only if
\[
\phi(i)\|h_i(Y)\|+ \phi(j)\|h_j(Y)\|=\phi(l)\|h_l(Y)\|
\]
for all triples $i,j,l$ satisfying $F_i + F_j = F_l$ (a condition not depending on $Y$).
Since the map $y\mapsto \|y\|$ is continuous, the map $Y\mapsto \Phi(Y)$
is Borel, and clearly $\Phi(Y)$ is affinely homeomorphic to $K(Y)$ and therefore to $B(Y^*)$.
\end{proof}
\begin{lemma} \label{L.dual.2}
There is a Borel map $\Psi\colon \Kconv\to S(\CRD)$ such that the
Banach spaces $\AffR(K)$ and $ \Psi(K)$ are isometrically isomorphic for all $K$.
\end{lemma}
\begin{proof}
Identify $\Delta^{\bbN}$ with $\prod_n [-1/n,1/n]$. Consider the compatible
$\ell_2$ metric $d_2$ on $\Delta^{\bbN}$ and the set
\[
{\mathcal Z}=\{(K,x,y): K\in \Kconv, x\in \Delta^{\bbN}, y\in K, \text{ and } d_2(x,y)=\inf_{z\in K} d_2(x,z)\}.
\]
Since the map $(K,x)\mapsto \inf_{z\in K} d_2(x,z)$ is continuous
on $\{K\in \Kconv: K\neq\emptyset\}$,
this set is closed. Also, for every pair $K,x$ there is a unique point $y$ such that
$(K,x,y)\in {\mathcal Z}$ (e.g., \cite[Lemma~3.1.6]{Pede:Analysis}).
By compactness, the function $\chi$ that sends $(K,x)$ to the unique $y$
such that $(K,x,y)\in {\mathcal Z}$ is
continuous. Fix a continuous surjective map $\eta\colon \Delta\to \Delta^{\bbN}$.
Then $\chi_K(x)=\chi(K,\eta(x))$ defines a continuous surjection
from $\Delta$ onto $K$ and $K\mapsto \chi_K$ is a continuous
map from $\Kconv$ into $C(\Delta, \Delta^{\bbN})$ with respect to the uniform metric.
The set
\[
{\mathcal Y}=\{(K,f)\in \Kconv\times C_{\mathbb R}(\Delta) :
f=g\circ \chi_K
\text{ for some }
g\in \AffR(K)\}
\]
is closed.
To see this, note that $(K,f)\notin {\mathcal Y}$ iff one of the following two conditions
happens:
\begin{enumerate}
\item There are $x$ and $y$ such that $f(x)\neq f(y)$ but $\chi_K(x)=\chi_K(y)$, or
\item There are $x,y,z$ and $0<t<1$ such that
$f(z)\neq tf(x)+(1-t)f(y)$ but
$$
t\chi_K(x)+(1-t)\chi_K(y)=\chi_K(z).
$$
\end{enumerate}
We need to prove that the map that sends $K$ to ${\mathcal Y}_K=\{f: (K,f)\in {\mathcal Y}\}$ is Borel.
Since ${\mathcal Y}_K$ is clearly isometric to $\Aff(K)$, this will conclude the proof.
Let $g_n$, for $n\in {\mathbb N}$, be a countable dense subset of $\CRD$.
By compactness
\[
h_n(K)= (g_n\restriction K)\circ \chi_K\circ \eta
\]
is a continuous map from $\Kconv$ to $\CRD$ such that $h_n(K)\in {\mathcal Y}_K$.
Moreover, the set $\{h_n(K): n\in {\mathbb N}\}$ is dense in ${\mathcal Y}_K$ for every $K$.
Since ${\mathcal Y}_K\neq \emptyset$,
we conclude that the map $\Psi(K)={\mathcal Y}_K$ is Borel.
This follows by \cite[12.14]{Ke:Classical} or directly by noticing that
$\Psi^{-1}(\{X\in S(\CRD): X\cap U\neq \emptyset\})=\bigcup_n h_n^{-1}(U)$
is Borel for every open
$U\subseteq \CRD$.
\end{proof}
Let $\Psi\colon \Kconv\to S(\CRD)$ be the Borel-measurable
map that sends $K$ to $\Aff(K)\subseteq \CRD$ from Lemma~\ref{L.dual.2}.
For every $K$ and $n$ the set
\[
\PPU_n(K)\subseteq (\CRD)^n
\]
of all $n$-tuples in $\Psi(K)$ forming a peaked partition of unity
is closed, by compactness of $K$.
The following lemma is a reformulation of \eqref{invlim}.
\begin{lemma} \label{L.Choq.1} For a metrizable compact convex
set $K$ the following are equivalent.
\begin{enumerate}
\item\label{L.Choq.1.1} $K$ is a Choquet simplex,
\item\label{L.Choq.1.2} for every finite $F\subseteq \Aff(K)$, every $\varepsilon>0$
and every peaked partition of unity $P$ in $\Aff(K)$ there is a peaked partition of unity $P'$
that refines $P$ and is such that every element of $F$ is within $\varepsilon$ of the span of $P'$. \qed
\end{enumerate}
\end{lemma}
Another equivalent condition, in which (2) is weakened to approximate refinement,
follows from~\cite{Vill:Range} and will be reproved
during the course of the proof of Lemma~\ref{L.Choq.2} below.
\begin{lemma} \label{L.Choq.0}
The map from $\Kconv$ to $F(\CRD^n)$
that sends $K$ to $ \PPU_n(K)$
is Borel for every fixed~$n$.
\end{lemma}
\begin{proof} The set of all $(K,f_1,f_2,\dots, f_n)\in \Kconv\times (\CRD)^n$
such that $f_i\in \Psi(K)$ for $1\leq i\leq n$ and $\sum_{i\leq n} f_i\equiv 1$
is a relatively closed subset of the set of all $(K,f_1,\dots, f_n)$ such that $f_i\in \Psi(K)$ for
all $i\leq n$, and the conclusion follows.
\end{proof}
By \cite[Theorem~12.13]{Ke:Classical} or the proof of Lemma~\ref{L.dual.2} and the above
we have Borel maps $h_n\colon \Kconv\to \CRD$
such that $\{h_n(K): n\in {\mathbb N}\}$ is a dense subset of $\Psi(K)$, and Borel maps $P_{i,n}\colon \Kconv\to (\CRD)^n$, for $i\in {\mathbb N}$, such that $\{P_{i,n}(K): i\in {\mathbb N}\}$ is a dense subset of $\PPU_n(K)$,
for every $K\in \Kconv$. Also fix $h_i\colon \mathbb K\to \Delta^{\bbN}$
such that $\{h_i(K): i\in {\mathbb N}\}$ is a dense subset of $K$ for all $K$.
\begin{lemma}\label{L.Choq.2}
The set $\Kchoq$ is a Borel subset of $\mathbb K$.
Moreover, there is a Borel map $\Upsilon\colon \Kchoq\to \Lambda_3$
such that $\Upsilon(K)$ is a parameter for~$K$.
\end{lemma}
\begin{proof}
We shall prove both assertions simultaneously.
Let $\varepsilon_i=i^{-2}2^{-i-4}$.
Fix $K\in \Kchoq$ for a moment. Let us say that a partition of unity $P$ \emph{$\varepsilon$-refines} a partition of unity
$P'$ if every element of $P'$ is within $\varepsilon$ of the span of $P$.
By Lemma~\ref{L.Choq.1}, there are sequences $d(j)=d(j,K)$,
$i(j)=i(j,K)$ and $n(j)=n(j,K)$, for $j\in {\mathbb N}$,
such that for each $j$ we have
\begin{enumerate}
\item $P_{i(j+1), n(j+1)}(K)$ is in $\PPU_{d(j)}(K)$,
\item $\{h_i(K)\restriction K: i\leq j\}$ and the restriction of all elements of $P_{i(j),n(j)}(K)$ to $K$
are within $\varepsilon_j$ of the rational linear span of the restrictions of elements of
$P_{i(j+1),n(j+1)}(K)$ to $K$,
\item $i(j+1), n(j+1)$ is the lexicographically minimal pair for which (1) and (2) hold.
\end{enumerate}
The set of all triples $(K,(i(j): j\in {\mathbb N}), (n(j): j\in {\mathbb N}))$ such that (1) and (2) hold
is Borel. Since a function is Borel if and only if its graph is Borel (\cite{Ke:Classical}),
the function sending $K$ to $((i(j,K), n(j,K)): j\in {\mathbb N})$ is Borel.
Still having $K$ fixed,
let us write $P_j$ for $P_{i(j,K), n(j,K)}(K)$.
Since each $f\in P_j$ is within $\varepsilon_j$ of
the span of $P_{j+1}$,
\cite[Lemma~2.7]{Vill:Range} implies
there is an isometry $\Phi_j\colon \Span(P_j)\to \Span(P_{j+1})$ such that
$\|\Phi_j(f)-f\|<2^{-j}$ for all $f\in \Span(P_j)$.
Using Lemma~\ref{L.Choq.-1}, we can fix the $\prec_{\mathcal F}$-least
composition of operators in $\bigcup_n \mathcal F_n$ (see \S\ref{S.LLLambda}), $\psi(j)$,
that $2^{-j}$-approximates $\Phi_j$ in the operator norm.
This defines an element $\psi$ of $\Lambda_3$.
Again, the function that associates $\psi$ to $K$ is Borel since its graph is a Borel set.
It remains to prove that the direct limit of $\mathbb R^{d(j)}$, for $j\in {\mathbb N}$,
determined by $\psi$ is isometric to $\Aff(K)$.
For every fixed $k$ the sequence
of linear operators
$\psi(k+j)\circ \psi(k+j-1)\circ \dots \circ \psi(k)$ for $j\in {\mathbb N}$ forms
a Cauchy sequence in the supremum norm.
Therefore the image of $P_k$ under this sequence converges to a peaked partition of unity,
denoted by $Q_k$,
of $\Aff(K)$. Then $Q_k$, for $k\in {\mathbb N}$, form a refining sequence of peaked partitions
of unity of $\Aff(K)$ such that the span of $\bigcup_ k Q_k$ is dense in $\Aff(K)$.
Thus with the dependence of $\psi$ on $K$ understood, we have that $\Upsilon(K):=\psi$ is the required parameter for $K$ in $\Lambda_3$.
\end{proof}
The following lemma will only be used later, in Section \ref{borelreduction}, but we include it here as it fits thematically.
\begin{lemma} \label{L.Choq.4} There is a Borel map $\Psi\colon K_c(\Delta^{\bbN})
\to \Lambda_3$ such that $\Psi(K)$ represents a Choquet simplex
affinely homeomorphic to the Bauer simplex $P(K)$.
\end{lemma}
\begin{proof} By Lemma~\ref{L.Choq.2} it suffices to define a Borel
map $\Psi_0\colon K_c(\Delta^{\bbN})\to \Kchoq$ so that $\Psi_0(K)$ is affinely homeomorphic to $P(K)$ for all $K$.
For each $K\in K_c(\Delta^{\bbN})$ the set $P(K)$ is affinely homeomorphic to
a closed convex subset $Y_K$ of $P(\Delta^{{\mathbb N}})$, by identifying each measure $\nu$ on $K$
with its canonical extension $\nu'$ to $\Delta^{{\mathbb N}}$, $\nu'(A)=\nu(A\cap K)$.
Moreover, the map $K\mapsto Y_K$ is continuous with respect to the Hausdorff metric.
Fix an affine homeomorphism of $P(\Delta^{{\mathbb N}})$ into
$\Delta^{{\mathbb N}}$. For example, if $f_n$, for $n\in {\mathbb N}$, is a sequence uniformly dense
in the set of continuous functions $f\colon \Delta^{{\mathbb N}}\to \Delta$, then take $\nu\mapsto (\int f_n \,d\nu: n\in {\mathbb N})$.
By composing the map $K\mapsto Y_K$ with this map we conclude the proof.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{P.Choquet}]
A Borel homomorphism from the parameterization $\Kchoq$ to $\Lambda_3$ was given in Lemma~\ref{L.Choq.2}.
If $\psi\in \Lambda_3$ then (possibly after permuting the basis of $\mathbb R^{n+1}$)
each $\psi(n)$ defines $a_{1n}, \dots, a_{nn}$ as in \S\ref{S.LLambda}.
Therefore we have a canonical Borel homomorphism from the parameterization $\Lambda_3$ into $\Lambda_2$.
This map is continuous, and even Lipschitz in the sense that $\psi(n)$ determines
all $a_{in}$ for $i\leq n$. Similarly, every representing matrix in $\Lambda_2$ canonically
defines a directed system in $\Lambda$.
We therefore only need to check that there is a Borel homomorphism from $\Lambda$ to $\Kchoq$.
Given $(f,\psi)\in \Lambda$, we define $K=K(f,\psi)$ as follows. With $f(0)=0$ let $k_n=\sum_{i=0}^n f(i)$. For $a\in \mathbb R^{{\mathbb N}}$ and $n\geq 0$ let $a_n=a\restriction [k_n,k_{n+1})$ and identify $a\in \mathbb R^{{\mathbb N}}$ with $(a_n)_{n\in {\mathbb N}}$. Let $B=\{ a\in \mathbb R^{{\mathbb N}}: (\forall n) \psi_n (a_n)=a_{n+1}\}$.
Then $B=B(f,\psi)$ is a separable subspace of $\ell_\infty$ closed in the product topology.
Also, $(f,\psi)\mapsto B(f,\psi)\cap \Delta^{{\mathbb N}}$ is a continuous map from $\Lambda$ into the hyperspace of $\Delta^{{\mathbb N}}$, and therefore the map $(f,\psi)\mapsto B(f,\psi)$ is a Borel map from $\Lambda$ into $F(\mathbb R^{{\mathbb N}})$.
Let $K(f,\psi)$ denote the unit ball $B^*(f,\psi)$ of the dual of the Banach space $B(f,\psi)$. When equipped with the weak*-topology, $K(f,\psi)$ is affinely homeomorphic to the Choquet simplex represented by $(f,\psi)$.
We complete the proof by applying Lemma~\ref{L.dual.1}.
\end{proof}
\section{The isomorphism relation for AI algebras}\label{borelreduction}
Recall that an \emph{approximately interval} (or \emph{AI}) C$^*$-algebra is a direct limit
$$
A = \lim_{\longrightarrow} (A_i,\phi_i),
$$
where, for each $i \in \mathbb{N}$, $A_i \cong F_i \otimes \mathrm{C}([0,1])$ for some finite-dimensional C$^*$-algebra $F_i$ and $\phi_i:A_i \to A_{i+1}$ is a $*$-homomorphism. In this section we will prove the following ($\Lambda$ is the space defined in~\S \ref{S.choquet} and
notation $X(f,\psi)$ was introduced in \S\ref{S.Lambda}):
\begin{theorem}\label{t.reduction}
There is a Borel function $\zeta: \Lambda\to \Gamma$ such
that for all $(f,\psi)\in \Lambda$,
\begin{enumerate}
\item $C^*(\zeta(f,\psi))$ is a unital simple AI algebra.
\item $(K_0(C^*(\zeta(f,\psi))), K_0^+(C^*(\zeta(f,\psi))), 1)\simeq ({\mathbb Q},{\mathbb Q}^+, 1)$ and $K_1(C^*(\zeta(f,\psi)))\simeq \{1\}$.
\item If $T$ is the tracial state simplex of $C^*(\zeta(f,\psi))$ then $\Aff(T)\simeq X(f,\psi)$.
\end{enumerate}
\end{theorem}
We note that Theorem \ref{t.reduction} immediately implies Theorem \ref{mainintro}:
\begin{corollary}\label{c.reduction}
The following relations are Borel reducible to isomorphism of simple unital AI algebras:
\begin{enumerate}
\item Affine homeomorphism of Choquet simplexes.
\item Homeomorphism of compact Polish spaces.
\item For any countable language $\mathcal L$, the isomorphism relation $\simeq^{\Mod(\mathcal L)}$ on countable models of $\mathcal L$.
\end{enumerate}
Moreover, isomorphism of simple unital AI algebras is not classifiable by countable structures, and is not a Borel equivalence relation.
\end{corollary}
\begin{proof}
For (1), let $\zeta$ be as in Theorem \ref{t.reduction}. Since simple unital AI algebras are classified by their Elliott invariant and since $(\mathbb Q,\mathbb Q^+,1)$ has a unique state, it follows that $(f,\psi)\simeq^\Lambda (f',\psi')$ if and only if $C^*(\zeta(f,\psi))\simeq C^*(\zeta(f',\psi'))$.
For (2), note that by Lemma \ref{L.Choq.4}, homeomorphism of compact subsets of $[0,1]^{\mathbb N}$ is Borel reducible to affine homeomorphism in $\Lambda$.
(3) follows from (2) and \cite[4.21]{hjorth00}, where it was shown that $\simeq^{\Mod(\mathcal L)}$ is Borel reducible to homeomorphism of compact Polish spaces.
It was shown in \cite[4.22]{hjorth00} that homeomorphism of compact subsets of $[0,1]^{\mathbb N}$ is not classifiable by countable structures, and so by (2) neither is isomorphism of AI algebras. Finally, it was shown in \cite{frst89} that $\simeq^{\Mod(\mathcal L)}$ is not Borel when $\mathcal L$ consists of just a single binary relation symbol, and so it follows from (3) that isomorphism of simple unital AI algebras is not Borel.
\end{proof}
The strategy underlying the proof of Theorem \ref{t.reduction} is parallel to the main argument in \cite{thomsen}. As a first step, we prove the following:
\begin{lemma}\label{l.conversion}
There is a Borel map $\varsigma: \Lambda\to L_{\rm ou}(C_{\mathbb R}[0,1])^{\mathbb N}$ such that for all $(f,\psi)\in \Lambda$ we have
\begin{equation}\label{eq.conversion}
X(f,\psi)\simeq \lim (C_{\mathbb R}[0,1],\varsigma(f,\psi)_n).
\end{equation}
\end{lemma}
\begin{proof}
Let $f_{1,0}\in C_{\mathbb R}[0,1]$ be the constant 1 function, and for each $n>1$ and $0\leq i\leq n-1$, let $f_{n,i}:[0,1]\to{\mathbb R}$ be the function such that
$$
f_{n,i}\left(\frac j {n-1}\right)=\left\{ \begin{array}{ll}
1 & \text{if } j=i\\
0 & \text{if } j\neq i
\end{array}\right.
$$
and which is piecewise linear elsewhere. Then $\mathcal P_n=\{f_{n,i}: 0\leq i\leq n-1\}$ is a peaked partition of unity. For each $n$, let $\eta_n:{\mathbb R}^n\to C_{\mathbb R}[0,1]$ be the linear map given on the standard basis $(e_i)$ of ${\mathbb R}^n$ by $\eta_n(e_{i+1})=f_{n,i}$, and let $\beta_n: C_{\mathbb R}[0,1]\to {\mathbb R}^n$ be given by $\beta_n(f)_i=f(\frac {i-1} {n-1})$. Then $\eta_n$ and $\beta_n$ are order unit space homomorphisms and $\beta_n\circ\eta_n=\id_{{\mathbb R}^n}$. Define $\varsigma(f,\psi)_n=\eta_{f(n+1)}\circ \psi(f(n),f(n+1))\circ\beta_{f(n)}$ and note that $\varsigma$ is continuous, and so it is Borel. Since the diagram
$$
\xymatrixcolsep{4pc}\xymatrix{
C_{\mathbb R}[0,1] \ar[r]^{\varsigma(f,\psi)_1}\ar[d]_{\beta_{f(1)}} & C_{\mathbb R}[0,1] \ar[r]^{\varsigma(f,\psi)_2}\ar[d]_{\beta_{f(2)}} & C_{\mathbb R}[0,1] \ar[r]^{\varsigma(f,\psi)_3}\ar[d]_{\beta_{f(3)}} & \ \cdots\\
{\mathbb R}^{f(1)} \ar[r]_{\psi(f(1),f(2))} & {\mathbb R}^{f(2)} \ar[r]_{\psi(f(2),f(3))} & {\mathbb R}^{f(3)} \ar[r]_{\psi(f(3),f(4))} & {\ }\cdots
}
$$
commutes, \eqref{eq.conversion} holds.
\end{proof}
Before proceeding, we fix our notation and collect the key results from \cite{thomsen} that we need. We identify $C[0,1]\otimes {\mathbb M}_n({\mathbb C})$ and ${\mathbb M}_n(C[0,1])$ in the natural way. We call a *-homomorphism $\phi: {\mathbb M}_n(C[0,1])\to {\mathbb M}_m(C[0,1])$ a \emph{standard homomorphism} when there are continuous functions
$$
f_1,\ldots, f_{\frac m n}: [0,1]\to [0,1]
$$
such that $\phi(g)=\diag(g\circ f_1,\ldots, g\circ f_{\frac m n})$. Following \cite{thomsen}, we will call the sequence $f_1,\ldots, f_{\frac m n}$ the \emph{characteristic functions} of the standard homomorphism $\phi$. The tracial state space of ${\mathbb M}_n(C[0,1])$ is canonically identified with the Borel probability measures on $[0,1]$ (see \cite[p. 606]{thomsen}), and so we canonically identify $\Aff(T({\mathbb M}_n(C[0,1])))$ and $C_{\mathbb R}[0,1]$.
The following Lemma collects the results from \cite{thomsen} that we need.
\begin{lemma}[Thomsen]\label{l.thomsen} \
\begin{enumerate}
\item Any AI algebra
can be represented as an inductive limit $\lim_n ({\mathbb M}_{k_n}(C[0,1]),\phi_n)$ for some sequence $(k_n)$ with $k_n\mid k_{n+1}$, where each $\phi_n$ is a standard homomorphism.
\item If $\phi: {\mathbb M}_n(C[0,1])\to {\mathbb M}_m(C[0,1])$ is a standard homomorphism with characteristic functions $f_1,\ldots, f_{\frac m n}$, then the induced order unit space homomorphism $\hat\phi: C_{\mathbb R}[0,1]\to C_{\mathbb R}[0,1]$ (under the natural identification with the tracial state spaces) is given by
$$
\hat\phi(g)=\frac n m \sum_{i=1}^{\frac m n} g\circ f_i.
$$
\item Let $\phi_i,\psi_i\in L_{\rm ou} (C_{\mathbb R}[0,1])$ be order unit morphisms $(i\in{\mathbb N})$ and let $\delta_i\in {\mathbb R}_+$ be a sequence such that $\sum_{i=1}^\infty \delta_i<\infty$. Suppose there are finite sets $F_k\subseteq C_{\mathbb R}[0,1]$ such that
\begin{enumerate}
\item $F_k\subseteq F_{k+1}$ for all $k\in{\mathbb N}$;
\item $\bigcup_{k} F_k$ has dense span in $C_{\mathbb R}[0,1]$;
\item for all $f\in F_k$ there are $g,h\in F_{k+1}$ such that $\|\phi_i(f)-g\|,\|\psi_i(f)-h\|\leq \delta_{k+1}$ for all $i\leq k$;
\item for all $f\in F_k$ we have $\|\phi_k(f)-\psi_k(f)\|\leq \delta_k$.
\end{enumerate}
Then $\lim_{\to} (C_{\mathbb R}[0,1],\phi_i)$ and $\lim_{\to} (C_{\mathbb R}[0,1],\psi_i)$ are isomorphic as order unit spaces.
\item For any order unit homomorphism $\psi: C_{\mathbb R}[0,1]\to C_{\mathbb R}[0,1]$, $f_0\in C_{\mathbb R}[0,1]$, finite $F\subseteq C_{\mathbb R}[0,1]$, $n,k\in{\mathbb N}$ and $\varepsilon>0$ there is $m\in{\mathbb N}$ divisible by $nk$ and continuous $f_1,\ldots, f_{\frac m n}:[0,1]\to [0,1]$ such that for all $g\in F$ we have
\begin{equation}\label{eq.standardapprox}
\|\psi(g)-\frac n m\sum_{i=1}^{\frac m n} g\circ f_i\|_\infty<\varepsilon.
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
(1) and (2) are simply restatements of Lemma 1.1 and Lemma 3.5 in \cite{thomsen}, while (3) follows immediately from \cite[Lemma 3.4]{thomsen}. For (4), note that by the Krein--Milman type theorem \cite[Theorem 2.1]{thomsen}, we can find a multiple $d$ of $k$ and continuous $\tilde f_1,\ldots,\tilde f_N: [0,1]\to [0,1]$ such that $\|\psi(g)- \sum_{i=1}^{N} \frac {n_i} {d} (g\circ \tilde f_i)\|_\infty<\varepsilon$, where $1\leq n_i\leq d$ satisfy $\sum_{i=1}^N n_i=d$. Let $m=dn$ and let $f_1,\ldots, f_{\frac m n}$ be the list of functions obtained by repeating $\tilde f_1$ $n_1$ times, then $\tilde f_2$ $n_2$ times, and so on. Then \eqref{eq.standardapprox} is clearly satisfied.
\end{proof}
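For instance, the standard homomorphism $\phi\colon C[0,1]\to{\mathbb M}_2(C[0,1])$ with characteristic functions $f_1(x)=x$ and $f_2(x)=1-x$ is given by $\phi(g)=\diag(g\circ f_1, g\circ f_2)$, and by (2) of Lemma~\ref{l.thomsen} the induced map on $C_{\mathbb R}[0,1]$ is
\[
\hat\phi(g)(x)=\tfrac12\left(g(x)+g(1-x)\right).
\]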
For the next lemma we refer back to \S \ref{S.Rhom} and Lemma~\ref{L.matrix}
for the definition of the relation $\Rhomx{H_0,H_1}$ and the functions $M_n:\Gamma(H)\to \Gamma(H^n)$ and $\theta_n:\Gamma(H)\times({\mathbb N}^{{\mathbb N}})^n\to{\mathbb N}^{{\mathbb N}}$.
\begin{lemma}\label{biglimit}
View $C[0,1]$ as multiplication operators on $H=L^2([0,1])$. Then there is an element $\gamma\in\Gamma(H)$ such that $C^*(\gamma)$ is equal to $C[0,1]$ and such that there are Borel maps
$$
d_N:L_{\rm ou}(C_{\mathbb R}[0,1])^{\mathbb N}\to {\mathbb N} \ \ \text{and} \ \ \Phi_N: L_{\rm ou}(C_{\mathbb R}[0,1])^{\mathbb N}\to {\mathbb N}^{\mathbb N}
$$
for all $N\in{\mathbb N}$, so that for all $\vec\varsigma\in L_{\rm ou}(C_{\mathbb R}[0,1])^{\mathbb N}$ we have:
\begin{enumerate}[$(I)$]
\item For all $N\in{\mathbb N}$ we have $(M_{d_N(\vec\varsigma)}(\gamma),M_{d_{N+1}(\vec\varsigma)}(\gamma),\Phi_N(\vec\varsigma))\in \Rhomx{H^{d_N(\vec\varsigma)},H^{d_{N+1}(\vec\varsigma)}}$.
\item The limit
$$
A_{\vec\varsigma}=\lim_N (C^*(M_{d_N(\vec\varsigma)}(\gamma)),\hat\Phi_N(\vec\varsigma))
$$
is a unital simple AI algebra, which satisfies
$$
(K_0(A_{\vec\varsigma}),K_0^+(A_{\vec\varsigma}), [1_{A_{\vec\varsigma}}])\simeq ({\mathbb Q},{\mathbb Q}^+,1),\qquad K_1(A_{\vec\varsigma})=\{1\}
$$
and
$$
\Aff(T(A_{\vec\varsigma}))\simeq \lim_N (C_{\mathbb R}[0,1],\vec\varsigma_N).
$$
\end{enumerate}
\end{lemma}
\begin{proof}
Fix a sequence of continuous functions $\lambda_n:[0,1]\to [0,1]$ which is dense in $C([0,1],[0,1])$ and such that $\lambda_1(x)=x$ and the functions $\lambda_{2n}$ enumerate all rational-valued constant functions, each with infinite repetition. Also fix a dense sequence $g_n\in C_{\mathbb R}[0,1]$, $n\in{\mathbb N}$, closed under composition with the $\lambda_n$ (i.e., for all $i,j\in{\mathbb N}$ there is $k\in{\mathbb N}$ such that $g_i\circ \lambda_j=g_k$).
Pick $\gamma\in\Gamma(H)$ to consist of the operators on $H$ that correspond to multiplication by the $g_n$. Each $\lambda_n$ induces an endomorphism $\psi_{n,m}$ of $C^*(M_m(\gamma))$ by entry-wise composition. Let $\Psi_{n,m}:{\mathbb N}\to{\mathbb N}^{\mathbb N}$ enumerate a sequence of codes corresponding to the $\psi_{n,m}$. These may even be chosen so that $\Psi_{n,m}(l)$ is always a constant sequence since we assumed that the sequence $(g_n)$ is closed under composition with the $\lambda_k$.
Define for each $N\in{\mathbb N}$ a relation $R_N\subseteq L_{\rm ou}(C_{\mathbb R}[0,1])\times{\mathbb N}\times{\mathbb N}\times{\mathbb Q}_+\times{\mathbb N}\times {\mathbb N}^{<{\mathbb N}}$ by
\begin{align*}
R_N(\psi,n,k,\varepsilon,m,t)\iff
&\frac m {nk}\in{\mathbb N}\wedge \length(t)=\frac m n\wedge t(1)=1\wedge t(2)=2N\wedge \\
&(\forall j\leq N)\|\psi(g_j)-\frac n m \sum_{i=1}^{\frac m n} g_j\circ \lambda_{t(i)}\|_\infty<\varepsilon
\end{align*}
Note that this is an open relation in the product space when $L_{\rm ou}(C_{\mathbb R}[0,1])$ has the strong topology (and ${\mathbb N}$, ${\mathbb Q}_+$ and ${\mathbb N}^{<{\mathbb N}}$ have the discrete topology.) By Lemma \ref{l.thomsen}.(4) it holds that for all $\psi$, $n$, $k$ and $\varepsilon$ there is $m$ and $t$ such that $R_N(\psi,n,k,\varepsilon,m,t)$ holds. (Note that this still holds although we have fixed the first two elements of the sequence $t$, since $m$ can be picked arbitrarily large.) Let $t_N(\psi,n,k,\varepsilon)$ be the lexicographically least $t\in {\mathbb N}^{<{\mathbb N}}$ such that $R_N(\psi,n,k,\varepsilon,n\length(t),t)$ holds. We let $m_N(\psi,n,k,\varepsilon)=n\length(t)$, and note that $t_N$ and $m_N$ define Borel functions.
Fix a sequence $(\delta_i)_{i\in{\mathbb N}}$ in ${\mathbb Q}_+$ such that $\sum_{i=1}^\infty \delta_i<\infty$. Let $q_i\in{\mathbb N}$ enumerate the primes with each prime repeated infinitely often. We can then define Borel functions $G_N :L_{\rm ou}(C_{\mathbb R}[0,1])^{\mathbb N}\to{\mathbb N}$, $d_N: L_{\rm ou}(C_{\mathbb R}[0,1])^{\mathbb N}\to {\mathbb N}$, $\mathbf{d}_N: L_{\rm ou}(C_{\mathbb R}[0,1])^{\mathbb N}\to{\mathbb N}$ and $s_N: L_{\rm ou}(C_{\mathbb R}[0,1])\to {\mathbb N}^{<{\mathbb N}}$ recursively such that the following is satisfied:
\begin{enumerate}[\indent $(A)$]
\item $G_1$, $d_1$ and $\mathbf {d}_1$ are the constant functions with value $1$, and $s_1$ is constantly the empty sequence.
\item $G_{N+1}(\vec\varsigma)$ is the least natural number $k$ such that for all $i\leq N$, $j\leq G_{N}(\vec\varsigma)$ and $l\leq\mathbf{d}_i(\vec\varsigma)$ there are $j_0,j_1\leq k$ such that
$$
\Big\|\frac 1{\mathbf{d}_i(\vec\varsigma)}\sum_{l'=1}^{\mathbf{d}_i(\vec\varsigma)} g_j\circ \lambda_{s_i(\vec\varsigma)_{l'}}-g_{j_0}\Big\|\leq\delta_N
$$
and
$$
\|\vec\varsigma_i(g_j)\circ \lambda_{s_i(\vec\varsigma)_l}-g_{j_1}\|\leq\delta_N.
$$
\item $d_{N+1}(\vec\varsigma)=m_{G_{N+1}(\vec\varsigma)}(\vec\varsigma_{N+1}, d_N(\vec\varsigma), q_1\cdots q_N,\delta_{N+1})$.
\item $s_{N+1}(\vec\varsigma)=t_{G_{N+1}(\vec\varsigma)}(\vec\varsigma_{N+1},d_N(\vec\varsigma),q_1\cdots q_N,\delta_{N+1})$.
\item $\mathbf{d}_{N+1}(\vec\varsigma)=\frac {d_{N+1}(\vec\varsigma)}{d_{N}(\vec\varsigma)}=\length(s_{N+1}(\vec\varsigma))$.
\end{enumerate}
Note that $\mathbf{d}_N$ takes integer values by the definition of $d_N$. Define
$$
\Phi_N(\vec\varsigma)=\theta_{\mathbf{d}_{N+1}(\vec\varsigma)}(M_{d_N(\vec\varsigma)}(\gamma),\Psi_{s_{N+1}(\vec\varsigma)_1,d_N(\vec\varsigma)},\ldots, \Psi_{s_{N+1}(\vec\varsigma)_{\mathbf{d}_{N+1}(\vec\varsigma)},d_N(\vec\varsigma)}).
$$
Then $\Phi_N$ and $d_N$ are Borel functions for all $N\in{\mathbb N}$, and (I) of the Lemma holds by definition of $\theta_n$.
We proceed to prove that (II) also holds. Fix $\vec\varsigma\in L_{\rm ou}(C_{\mathbb R}[0,1])^{\mathbb N}$. Note that the inductive system $(C^*(M_{d_N(\vec\varsigma)}(\gamma)), \hat\Phi_N(\vec\varsigma))$ is isomorphic to the system $({\mathbb M}_{d_N(\vec\varsigma)}(C[0,1]),\phi_N)$ where
$$
\phi_N(f)=\diag(f\circ\lambda_{s_{N+1}(\vec\varsigma)_1},\ldots,f\circ \lambda_{s_{N+1}(\vec\varsigma)_{\mathbf{d}_{N+1}(\vec\varsigma)}}).
$$
Since each natural number divides some $d_N(\vec\varsigma)$ we have
$$
(K_0(A_{\vec\varsigma}),K_0^+(A_{\vec\varsigma}), [1_{A_{\vec\varsigma}}])\simeq ({\mathbb Q},{\mathbb Q}^+,1)
$$
while $K_1(A_{\vec\varsigma})=\{1\}$ since $[0,1]$ is contractible.
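To make the $K_0$ computation explicit (a sketch of the standard inductive-limit calculation, only rephrasing the divisibility remark above): under the identification $K_0({\mathbb M}_{d}(C[0,1]))\simeq{\mathbb Z}$, generated by the class of a rank-one projection, the map $\phi_N$ multiplies $K_0$-classes by $\mathbf{d}_{N+1}(\vec\varsigma)$, so
$$
K_0(A_{\vec\varsigma})\simeq \lim_N\big({\mathbb Z},\times\mathbf{d}_{N+1}(\vec\varsigma)\big)\simeq\bigcup_{N\in{\mathbb N}}\tfrac 1{d_N(\vec\varsigma)}{\mathbb Z}\subseteq{\mathbb Q},
$$
with $[1_{A_{\vec\varsigma}}]$ corresponding to $1$. Since the clause $\frac m{nk}\in{\mathbb N}$ in $R_N$ guarantees that $q_1\cdots q_N$ divides $\frac{d_{N+1}(\vec\varsigma)}{d_N(\vec\varsigma)}$, and the $q_i$ enumerate the primes with infinite repetition, every natural number divides some $d_N(\vec\varsigma)$, so the union above is all of ${\mathbb Q}$.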
To establish that $\Aff(T(A_{\vec\varsigma}))\simeq \lim_i (C_{\mathbb R}[0,1],\vec\varsigma_i)$ we apply Lemma \ref{l.thomsen}. By Lemma \ref{l.thomsen}.(2) the order unit space morphism induced by $\phi_N$ is given by
$$
\hat\phi_N(f)=\frac 1 {\mathbf{d}_{N+1}(\vec\varsigma)} \sum_{i=1}^{\mathbf{d}_{N+1}(\vec\varsigma)} f\circ \lambda_{s_{N+1}(\vec\varsigma)_i}.
$$
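As a sanity check of this formula (we only verify it at the level of normalized traces, which is how the identification in Lemma \ref{l.thomsen}.(2) arises): for scalar-valued $f\in C_{\mathbb R}[0,1]$ and $x\in[0,1]$, the diagonal image has $\mathbf{d}_{N+1}(\vec\varsigma)$ scalar entries, and so
$$
\operatorname{tr}\big(\phi_N(f)(x)\big)=\frac 1{\mathbf{d}_{N+1}(\vec\varsigma)}\sum_{i=1}^{\mathbf{d}_{N+1}(\vec\varsigma)} f\big(\lambda_{s_{N+1}(\vec\varsigma)_i}(x)\big)=\hat\phi_N(f)(x),
$$
where $\operatorname{tr}$ denotes the normalized trace.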
Letting $F_N=\{g_i:i\leq G_N(\vec\varsigma)\}$, it is clear that (a) and (b) of Lemma \ref{l.thomsen}.(3) are satisfied. That (c) of \ref{l.thomsen}.(3) is then also satisfied for the sequences $\hat\phi_N$ and $\vec\varsigma_N$ follows from property (B) above. Finally, \ref{l.thomsen}.(3).(d) holds by (D) and the definition of $t_N$ and $R_N$. Thus
$$
\lim_i(C_{\mathbb R}[0,1],\vec\varsigma_i)\simeq \lim_i(C_{\mathbb R}[0,1],\hat\phi_i)\simeq\Aff(T(A_{\vec\varsigma})).
$$
It remains only to verify that $A_{\vec\varsigma}$ is simple. For this we need only prove that if $0 \neq f \in {\mathbb M}_{d_N(\vec\varsigma)}(C[0,1])$, then for all $t\in [0,1]$ we have
$$
\phi_{N,j}(f) := \left( \phi_{j-1} \circ \phi_{j-2} \circ \cdots \circ \phi_N \right)(f)
$$
is nonzero at $t$ for some (and hence all larger) $j \geq N$. By the definition of the sequence $(\lambda_n)$, there is some $j \geq N$ such that $f\circ\lambda_{2j}\neq 0$. By the definition of the relations $R_n$, $f$ is a direct summand of $\phi_{N,j}(f)$, and so the constant function $f\circ\lambda_{2j} \neq 0$ is a direct summand of $\phi_{N,j+1}(f)$. This implies $\phi_{N,j+1}(f)(t) \neq 0$ for each $t \in [0,1]$, as required.
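To see why $f$ persists as a direct summand (unwinding the definitions): every sequence $t$ occurring in $R_n$ has $t(1)=1$, and $\lambda_1=\operatorname{id}$, so each connecting map has the form
$$
\phi_j(g)=\diag\big(g,\,g\circ\lambda_{s_{j+1}(\vec\varsigma)_2},\ldots,g\circ\lambda_{s_{j+1}(\vec\varsigma)_{\mathbf{d}_{j+1}(\vec\varsigma)}}\big),
$$
and hence, by induction, $f$ is a direct summand of $\phi_{N,j}(f)$ for every $j\geq N$.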
\end{proof}
\begin{proof}[Proof of Theorem \ref{t.reduction}]
Combine Lemma \ref{l.conversion} with Lemma \ref{biglimit}.
\end{proof}
\begin{corollary} There is a Borel measurable map $\Phi$ from $\{\gamma: C^*(\gamma)$ is
unital and abelian$\}$ into $\{\gamma: C^*(\gamma)$ is simple and unital AI$\}$ such that $C^*(\gamma)\cong C^*(\gamma')$
if and only if $C^*(\Phi(\gamma))\cong C^*(\Phi(\gamma'))$.
In other words, unital abelian C*-algebras can be effectively classified by simple, unital AI algebras.
\end{corollary}
\begin{proof}
By Gelfand--Naimark duality a unital abelian C*-algebra $A$ is isomorphic to $C({\mathbb P}(A))$, where ${\mathbb P}(A)$ denotes the pure states of $A$.
We therefore only need to compose three Borel maps:
The map taking the algebra $A$ to the space of its pure states (Lemma~\ref{L.SPT}),
the map taking a compact Hausdorff space $X$ to the Bauer simplex $P(X)$
(Lemma~\ref{L.Choq.4}), and the map from the space of Choquet simplexes into the set of AI-algebras that was defined in Theorem~\ref{t.reduction}.
\end{proof}
\section{A selection theorem for exact C$^*$-algebras}\label{s.exact}
For $2\leq n<\infty$, we will denote by $\mathcal O_n$ the Cuntz algebra generated by $n$ isometries $s_1,\ldots, s_n$ satisfying $\sum_{i=1}^n s_is_i^*=1$ (see \cite[4.2]{Ror:Classification}.)
Kirchberg's exact embedding Theorem states that the exact separable C$^*$-algebras are precisely those which can be embedded into $\mathcal O_2$. The purpose of this section is to prove a Borel version of this: There is a Borel function on $\Gamma$ selecting an embedding of $C^*(\gamma)$ into $\mathcal O_2$ for each $\gamma\in\Gamma$ that codes an exact C$^*$-algebra. In the process we will also see that the set of $\gamma\in\Gamma$ such that $C^*(\gamma)$ is exact forms a Borel set.
\subsection{Parameterizing exact C$^*$-algebras.}\label{ss.exact} There is a multitude of ways of parameterizing exact separable C$^*$-algebras, which we now describe. Eventually, we will see that they are all equivalent good standard Borel parameterizations.
Define
$$
\Gammaex=\{\gamma\in\Gamma: C^*(\gamma)\text{ is exact}\},
$$
and let $\Gammaexu=\Gammaex\cap\Gammau$ denote the set of unital exact C$^*$-algebras\footnote{The sets $\Gammaex$ and $\Gammaexu$ are prima facie analytic, but since we will show they are Borel, the use of the language of Definition \ref{d.parameterization} is warranted.}. An alternative parameterization of the exact separable C$^*$-algebras is given by elements of $\Gamma(\mathcal O_2)=\mathcal O_2^{\mathbb N}$, equipped with the product Borel structure, where we identify $\gamma\in\mathcal O_2^{\mathbb N}$ with the C$^*$-subalgebra generated by this sequence. Let $\Gammau(\mathcal O_2)$ denote the set of $\gamma\in\Gamma(\mathcal O_2)$ which code unital C$^*$-subalgebras of $\mathcal O_2$.
Note that a parameterization weakly equivalent to $\Gamma(\mathcal O_2)$ is obtained by considering in the Effros Borel space $F(\mathcal O_2)$ of closed subsets of $\mathcal O_2$, the (Borel) set
$$
\SA(\mathcal O_2)=\{A\in F(\mathcal O_2): A\text{ is a sub-C}^*\text{-algebra of } \mathcal O_2\}.
$$
Recall the parameterization $\Xi_{\Au}$ of unital separable C$^*$-algebras from \ref{ss.paramunital}. We define $\XiAuex$ to be the subset of $\Xi_{\Au}$ corresponding to exact unital C$^*$-algebras. Recall also that $\mathfrak A$ is the free countable unnormed ${\mathbb Q}(i)$-$*$-algebra, $\Au$ the unital counterpart. Define
$$
\hat\Gamma_{\mathfrak A}(\mathcal O_2)=\{\xi:\mathfrak A\to\mathcal O_2: \xi\text{ is a ${\mathbb Q}(i)$-$*$-algebra homomorphism } \mathfrak A\to\mathcal O_2\}
$$
and
$$
\hat\Gamma_{\Au}(\mathcal O_2)=\{\xi:\Au\to\mathcal O_2: \xi\text{ is a unital ${\mathbb Q}(i)$-$*$-algebra homomorphism } \Au\to\mathcal O_2\},
$$
and note that $\hat\Gamma_{\mathfrak A}(\mathcal O_2)$ and $\hat\Gamma_{\Au}(\mathcal O_2)$ are closed (and therefore Polish) in the subspace topology, when $\mathcal O_2^{\mathfrak A}$ and $\mathcal O_2^{\Au}$ are given the product topology. As previously noted, $\mathfrak A$ can be identified with the set of formal ${\mathbb Q}(i)$-$*$-polynomials $\mathfrak p_n$ in the formal variables $X_i$ without constant term, and $\Au$ with the formal ${\mathbb Q}(i)$-$*$-polynomials (allowing a constant term), which we enumerated as $\mathfrak q_n$. We define $g:\hat\Gamma_{\Au}(\mathcal O_2)\to\Xi_{\Au}$ by $g(\xi)(\mathfrak q_n)=\|\xi(\mathfrak q_n)\|_{\mathcal O_2}$. Note that $g$ is continuous. By the exact embedding Theorem we have $g(\hat\Gamma_{\Au}(\mathcal O_2))=\XiAuex$.
Define an equivalence relation $E^g$ in $\hat\Gamma_{\Au}(\mathcal O_2)$ by
$$
\xi E^g\xi'\iff g(\xi)=g(\xi').
$$
For $\xi\in\hat\Gamma_{\Au}(\mathcal O_2)$, a norm is defined on $\Au / \ker(\xi)$ by letting $\|\mathfrak q_n\ker(\xi)\|_{\xi}=\|\xi(\mathfrak q_n)\|_{\mathcal O_2}$. We define $A_u(\xi)$ to be the unital C$^*$-algebra obtained by completing $(\Au/\ker(\xi),\|\cdot\|_\xi)$, and we note that $\xi$ induces an injection $\bar\xi:A_u(\xi)\to\mathcal O_2$. It is clear that the definition of $A_u$ is $E^g$-invariant.
\begin{prop}\label{pr.selector}
With notation as above, there is a Borel set in $\hat\Gamma_{\Au}(\mathcal O_2)$ meeting every $E^g$ class exactly once (i.e., there is a Borel \emph{transversal} for $E^g$).
\end{prop}
Before giving the proof, we first prove two general lemmas.
\begin{lemma}\label{l.borelassign}
Let $X,Y$ be Polish spaces. Suppose $B\subseteq X\times Y$ is a Borel relation such that for all $x\in X$ the section $B_x$ is closed (and possibly $\emptyset$.) Then the following are equivalent:
\begin{enumerate}
\item The map $X\to F(Y):x\mapsto B_x$ is Borel;
\item $\proj_X(B)$ is Borel and there are Borel functions $f_n:\proj_X(B)\to Y$ such that for all $x\in\proj_X(B)$ we have $f_n(x)\in B_x$ and $(f_n(x))_{n\in{\mathbb N}}$ enumerates a dense sequence in $B_x$;
\item the relation $R\subseteq X\times{\mathbb N}\times{\mathbb Q}_+$ defined by
$$
R(x,n,\varepsilon)\iff (\exists y\in Y) y\in B_x\wedge d(y,y_n)<\varepsilon
$$
is Borel for some (any) complete metric $d$ inducing the topology on $Y$ and $(y_n)_{n\in{\mathbb N}}$ dense in $Y$.
\end{enumerate}
In particular, if any of (1)--(3) above hold, there is a Borel function $F_0:X\to Y$ such that $F_0(x)\in B_x$ for all $x\in\proj_X(B)$, and $F_0(x)$ depends only on $B_x$.
\end{lemma}
\begin{proof}
The equivalence of the first two is well-known, see \cite[12.13 and 12.14]{Ke:Classical}. Clearly (2) implies (3) since
$$
R(x,n,\varepsilon)\iff (\exists i) d(y_n,f_i(x))<\varepsilon.
$$
To see (3)$\implies$(1), simply notice that (3) immediately implies that for all $n\in{\mathbb N}$ and $\varepsilon\in{\mathbb Q}_+$ the set
$$
\{x\in X: B_x\cap \{y\in Y: d(y,y_n)<\varepsilon\}\neq\emptyset\}
$$
is Borel. This shows that the inverse images under the map $x\mapsto B_x$ of the sets $\{F\in F(Y):F\cap\{y\in Y:d(y,y_n)<\varepsilon\}\neq\emptyset\}$, for $n\in{\mathbb N}$ and $\varepsilon\in{\mathbb Q}_+$, are Borel, and since the latter sets generate the Effros Borel structure, it follows that the map $X\to F(Y):x\mapsto B_x$ is Borel, as required.
Finally, the last statement follows from (1) and the Kuratowski--Ryll-Nardzewski Theorem (\cite[Theorem~12.13]{Ke:Classical}).
\end{proof}
\begin{lemma}\label{l.select}
Let $X,Y$ and $B\subseteq X\times Y$ be as in Lemma \ref{l.borelassign}, and suppose moreover that $\proj_X(B)$ is Borel. Let $G$ be a Polish group, and suppose there is a continuous $G$-action on $Y$ such that the sets $B_x$ are $G$-invariant for all $x\in X$, and that for all $(x,y)\in B$ we have that the $G$-orbit of $y\in B_x$ is dense in $B_x$. Let $d$ be a complete metric on $Y$ and let $y_n$ be dense in $Y$. Then $R$ defined as in the previous Lemma is Borel, and so in particular (1) and (2) hold for $B$.
\end{lemma}
\begin{proof}
It is clear from the definition that
$$
R(x,n,\varepsilon)\iff (\exists y\in Y) y\in B_x\wedge d(y_n,y)<\varepsilon
$$
is an analytic set. To see that it is in fact Borel, fix a dense sequence $g_n\in G$. Then since all $G$-orbits are dense in $B_x$ we also have
$$
R(x,n,\varepsilon)\iff x\in\proj_X(B)\wedge (\forall y\in Y) y\notin B_x\vee (\exists i) d(g_i\cdot y,y_n)<\varepsilon,
$$
which gives a co-analytic definition of $R$, so that $R$ is Borel.
\end{proof}
We now turn to the proof of Proposition \ref{pr.selector}. Recall that if $A,B$ are C$^*$-algebras, $B$ is unital, and $\varphi_0,\varphi_1:A\to B$ are $*$-homomorphisms, we say that $\varphi_0$ and $\varphi_1$ are \emph{approximately unitarily equivalent} if for all finite $F\subseteq A$ and all $\varepsilon>0$ there is a unitary $u\in B$ such that $\|u^*\varphi_0(x)u-\varphi_1(x)\|<\varepsilon$ for all $x\in F$.
\begin{proof}[Proof of Proposition \ref{pr.selector}]
Let $U(\mathcal O_2)$ denote the unitary group of $\mathcal O_2$. The group $U(\mathcal O_2)$ acts continuously on $\hat\Gamma_{\Au}(\mathcal O_2)$ by
$$
u\cdot\xi(\mathfrak q_n)=u^*\xi(\mathfrak q_n)u=\Ad_u(\xi(\mathfrak q_n)),
$$
and this action preserves the equivalence classes of $E^g$. Further, it is clear that $E^g$ is closed as a subset of $\hat\Gamma_{\Au}(\mathcal O_2)^2$.
We claim that for all $\xi\in\hat\Gamma_{\Au}(\mathcal O_2)$, the $U(\mathcal O_2)$-orbits in $[\xi]_{E^g}$ are dense. To see this, let $\xi'E^g \xi$, and let $\bar\xi: A_u(\xi)\to\mathcal O_2$, $\bar\xi': A_u(\xi')\to\mathcal O_2$ be the injections defined before Proposition \ref{pr.selector}. Since $A_u(\xi)=A_u(\xi')$, it follows by \cite[Theorem 6.3.8]{Ror:Classification} that $\bar\xi$ and $\bar\xi'$ are approximately unitarily equivalent, and so we can find $u\in U(\mathcal O_2)$ such that $u\cdot\xi$ is as close to $\xi'$ as we like in $\mathcal O_2^{\Au}$.
Applying Lemmas \ref{l.select} and \ref{l.borelassign}, we get a Borel function $F_0:\hat\Gamma_{\Au}(\mathcal O_2)\to \hat\Gamma_{\Au}(\mathcal O_2)$ selecting a unique point in each $E^g$-class. Then the set $F_0(\hat\Gamma_{\Au}(\mathcal O_2))=\{\gamma\in\hat\Gamma_{\Au}(\mathcal O_2): F_0(\gamma)=\gamma\}$
is clearly a Borel transversal.
\end{proof}
From Proposition \ref{pr.selector} we can obtain a Borel version of Kirchberg's exact embedding theorem. We first need a definition.
\begin{definition} Let $A$ be a separable C$^*$-algebra and $\gamma\in\Gamma$. Call $\Psi:{\mathbb N}\to A$ a \emph{code} for an embedding of $C^*(\gamma)$ into $A$ if for all $n,m,k\in{\mathbb N}$ we have:
\begin{enumerate}[\indent (1)]
\item If $\mathfrak p_m(\gamma)+\mathfrak p_n(\gamma)=\mathfrak p_k(\gamma)$ then $\Psi(m)+\Psi(n)=\Psi(k)$;
\item if $\mathfrak p_m(\gamma)=\mathfrak p_n^*(\gamma)$ then $\Psi(m)=\Psi(n)^*$;
\item if $\mathfrak p_m(\gamma)\mathfrak p_n(\gamma)=\mathfrak p_k(\gamma)$ then $\Psi(m)\Psi(n)=\Psi(k)$;
\item $\|\Psi(m)\|_{A}=\|\mathfrak p_m(\gamma)\|$.
\end{enumerate}
It is clear that if $\Psi:{\mathbb N}\to A$ is such a code then there is a unique $*$-monomorphism $\hat\Psi:C^*(\gamma)\to A$ satisfying $\hat\Psi(\mathfrak p_n(\gamma))=\Psi(n)$. If $A$ is unital with unit $1_A$ and $C^*(\gamma)$ is unital, and $\Psi$ further satisfies
\begin{enumerate}[\indent (5)]
\item $\hat\Psi(1_{C^*(\gamma)})=1_{A}$
\end{enumerate}
then we will call $\Psi$ a code for a \emph{unital} embedding into $A$.
Let $P^A\subseteq\Gamma\times A^{\mathbb N}$ be the relation
$$
P^A(\gamma,\Psi)\iff \Psi \text{ is a code for an embedding into } A
$$
and, assuming $A$ is unital, let $P^A_{\text{u}}\subseteq \Gammau\times A^{\mathbb N}$ be
$$
P^A_{\text{u}}(\gamma,\Psi)\iff \Psi \text{ is a code for a unital embedding into } A.
$$
\end{definition}
We note that the sections $P^A_\gamma$ and $(P^A_u)_\gamma$ are closed for all $\gamma\in\Gamma$.
\begin{theorem}[Borel Kirchberg exact embedding Theorem, unital case]
\
\begin{enumerate}[\indent\rm (1)]
\item The sets $\Gammaexu$, $\XiAuex$, $\Gammau(\mathcal O_2)$ and $\hat\Gamma_{\Au}(\mathcal O_2)$ are Borel and provide equivalent good parameterizations of the unital separable exact $C^*$-algebras.
\item There is a Borel function $f:\Gammaexu\to\mathcal O_2^{\mathbb N}$ such that $f(\gamma)$ is a code for a unital embedding of $C^*(\gamma)$ into $\mathcal O_2$ for all $\gamma\in\Gammaexu$. In other words, the relation $P_{\text{u}}^{\mathcal O_2}$ admits a Borel uniformization.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) Let $T$ be a selector for $E^g$ as guaranteed by Proposition \ref{pr.selector}. Then $g:\hat\Gamma_{\Au}(\mathcal O_2)\to\Xi_{\Au}$ is injective on $T$, and so $\ran(g)=\ran(g\upharpoonright T)=\XiAuex$ is Borel, and admits a Borel right inverse $h:\XiAuex\to\hat\Gamma_{\Au}(\mathcal O_2)$. Since Lemma \ref{l.injection} also holds with $Y=\Xi_{\Au}$, there is a Borel injection $\tilde g:\hat\Gamma_{\Au}(\mathcal O_2)\to\Xi_{\Au}$ such that $g(\xi)\simeq^{\Xi_{\Au}} \tilde g(\xi)$ for all $\xi\in \hat\Gamma_{\Au}(\mathcal O_2)$, and this shows that $\hat\Gamma_{\Au}(\mathcal O_2)$ and $\XiAuex$ are equivalent good parameterizations.
Since $\Gammau$ and $\Xi_{\Au}$ are equivalent (Corollary \ref{c.paramunital}), any witness to this also witnesses that $\Gammaexu$ and $\XiAuex$ are equivalent; in particular $\Gammaexu$ is also Borel. Finally, by fixing a faithful representation of $\mathcal O_2$ on the Hilbert space $H$ we obtain a Borel injection of $\Gammau(\mathcal O_2)$ into $\Gammaexu$, while on the other hand there clearly is a natural Borel injection from $\hat\Gamma_{\Au}(\mathcal O_2)$ into $\Gammau(\mathcal O_2)$. This finishes the proof of (1).
(2) Arguing exactly as in the proof of Proposition \ref{pr.selector}, the action of $U(\mathcal O_2)$ on the sections of $P^{\mathcal O_2}_{\text{u}}$ satisfies the hypotheses of Lemma \ref{l.select}, since any two unital embeddings of $C^*(\gamma)$ into $\mathcal O_2$ are approximately unitarily equivalent for $\gamma\in\Gammaexu$.
\end{proof}
Since by Lemma \ref{L.unitization} the map that assigns to $\gamma\in\Gamma$ its unitization is Borel, we obtain:
\begin{theorem}[Borel Kirchberg exact embedding Theorem, non-unital case]
\
\begin{enumerate}[\indent\rm (1)]
\item The sets $\Gammaex$, $\Gamma(\mathcal O_2)$ and $\hat\Gamma_{\mathfrak A}(\mathcal O_2)$ are Borel and provide equivalent parameterizations of the separable exact $C^*$-algebras.
\item There is a Borel function $f:\Gammaex\to\mathcal O_2^{\mathbb N}$ such that $f(\gamma)$ is a code for an embedding of $C^*(\gamma)$ into $\mathcal O_2$ for all $\gamma\in\Gammaex$. In other words, the relation $P^{\mathcal O_2}$ admits a Borel uniformization.
\end{enumerate}
\end{theorem}
\section{Below a group action}\label{s.bga}
Conjugacy of unitary operators on a separable Hilbert space cannot be reduced to isomorphism of countable structures by \cite{KecSof:Strong}. However, a complete classification of this relation is provided by the spectral measures. We may therefore consider a more general notion of classifiability: being reducible to an orbit equivalence relation of a Polish group action.
We do not know whether isomorphism of separable (simple) C*-algebras is implemented
by a Polish group action (see Question~\ref{Q.action} and Problem~\ref{P.action}).
In this section we will prove that isomorphism of \emph{unital} nuclear simple separable C$^*$-algebras is indeed Borel reducible to an orbit equivalence relation induced by a Polish group action
(Theorem \ref{t.belowgrp} below). We also note the simpler fact that the same applies to isomorphism
of abelian separable C*-algebras (Proposition~\ref{P.abelian}).
Before turning to the proof of this we briefly discuss Question \ref{Q.action} in general.
\begin{theorem} Assume $A$ and $B$ are separable weakly dense
subalgebras of $\mathcal B(H)$. Then $A\cong B$ if and only if there is a unitary $u$ such that
$uAu^*=B$.
\end{theorem}
\begin{proof} Only the direct implication
requires a proof. Let $\alpha\colon A\to B$ be an isomorphism.
Fix a unit vector $\xi\in H$ and let $\omega_\xi$ denote the
vector state corresponding to $\xi$, $\omega_\xi(a)=(a\xi|\xi)$.
Since $A$ is weakly dense in $\mathcal B(H)$, the restriction of $\omega_\xi$
to $A$ is a pure state of $A$, and similarly the restriction of $\omega_\xi$ to $B$
is a pure state of $B$.
By \cite{KiOzSa} there is an automorphism $\beta$ of $B$ such that
$(\omega_\xi\restriction B)\circ\beta\circ \alpha=\omega_\xi\restriction A$. The isomorphism
$\beta\circ \alpha$ extends to the isomorphism between the
GNS representations of $A$ and $B$ corresponding to the pure states
$\omega_\xi\restriction A$
and $\omega_\xi\restriction B$. This automorphism of $\mathcal B(H)$ is implemented by a
unitary $u$ as required.
\end{proof}
By the previous theorem, in order to give a positive answer to Question~\ref{Q.action}
it would suffice to have a natural Borel space whose points are separable C*-subalgebras of $\mathcal B(H)$. The space $\Gamma$ defined in \ref{ss.gamma} appears to be similar to such a space, but
the following proposition, suggested to the first author by Alekos Kechris, is an obstacle to the direct approach.
\begin{prop} \label{P.nonstandard}
On the space $\Gamma$ consider the relation $\gamma\, E\,\gamma'$ iff
$C^*(\gamma')=C^*(\gamma)$. Then the quotient Borel structure is nonstandard, even when restricted to $\{\gamma\in \Gamma: C^*(\gamma)$ is simple, unital, and nuclear$\}$.
\end{prop}
\begin{proof} Note that $E$ is Borel by Lemma~\ref{L.equality.borel}.
It will suffice to construct a Borel map $\Phi\colon 2^{{\mathbb N}}\to \Gamma$ such that
$x\, E_0 \, y$ if and only if $C^*(\Phi(x))=C^*(\Phi(y))$. (Here $E_0$ denotes eventual equality in the space $2^{\mathbb N}$.) We will ensure that every parameter
in the range of
$\Phi$ corresponds to a simple nuclear algebra.
Let $H_j$ be the two-dimensional complex Hilbert space and
let $\zeta_j$ denote the vector $\begin{pmatrix} 1 & 0 \end{pmatrix}$ in $H_j$.
Identify $H$ with $\bigotimes_{j\in {\mathbb N}} (H_j,\zeta_j)$.
For $x\subseteq {\mathbb N}$ let
\[
u_x=\bigotimes_{j\in x}
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}.
\]
This is a unitary operator on $H$.
Fix a sequence $\gamma_j$, for $j\geq 1$, that generates the CAR algebra
$A=\bigotimes_j M_2(\mathbb C)$, represented on $H$ so that the $j$'th copy of $M_2(\mathbb C)$
maps to $\mathcal B(H_j)$.
Let $\Phi(x)=\gamma$ be such that $\gamma_0=u_x$ and $\gamma_j$, for $j\geq 1$,
are as above.
Then $C^*(\Phi(x))=C^*(\Phi(y))$ if and only if $u_x u_y^*\in A$,
if and only if $x\Delta y$ is finite.
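To verify the last equivalence concretely: since $\begin{pmatrix} 1 & 0\\ 0 & -1\end{pmatrix}^2=1$, we have
$$
u_xu_y^*=\bigotimes_{j\in x\Delta y}\begin{pmatrix} 1 & 0\\ 0 & -1\end{pmatrix},
$$
and an elementary tensor of this form lies in the CAR algebra $A$ precisely when all but finitely many of its factors are the identity, that is, precisely when $x\Delta y$ is finite.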
\end{proof}
\subsection{A reduction to an action of $\Aut(\mathcal O_2)$.}
Let $\Aut(\mathcal O_2)$ denote the automorphism group of $\mathcal O_2$, and equip $\Aut(\mathcal O_2)$ with the strong topology, which makes it a Polish group. We now aim to prove:
\begin{theorem}\label{t.belowgrp}
The isomorphism relation for nuclear simple unital separable C$^*$-algebras is Borel reducible to an orbit equivalence relation induced by a Borel action of $\Aut(\mathcal O_2)$ on a standard Borel space.
\end{theorem}
The proof of this requires some preparation, the most substantial part being a version of Kirchberg's ``$A\otimes\mathcal O_2\simeq\mathcal O_2\otimes\mathcal O_2$ Theorem'' for nuclear simple unital and separable $A$. However, we start by noting the following:
\begin{prop}\label{p.simple}
The set
$$
\{\gamma\in\Gamma: C^*(\gamma) \text{ is simple}\}
$$
is Borel.
\end{prop}
\begin{proof} We use the facts that a C*-algebra $A$ is simple if and only if for
every state $\phi$ the GNS representation
$\pi_\phi$ is isometric, and that the operator norm in the GNS representation is given by
$\|\pi_\phi(a)\|^2=\sup_{\phi(b^*b)\leq 1} \phi(b^*a^*ab)$ for all $a\in A$.
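The second fact is the standard GNS computation: writing $[b]$ for the image of $b$ in the GNS Hilbert space $H_\phi$, we have
$$
\|\pi_\phi(a)[b]\|^2=\langle\pi_\phi(a^*a)[b],[b]\rangle=\phi(b^*a^*ab),
$$
and taking the supremum over $b$ with $\phi(b^*b)\leq 1$, i.e., over the unit ball of a dense subspace of $H_\phi$, computes $\|\pi_\phi(a)\|^2$.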
Recall from \ref{ss.codingstates} the coding of states. We define $R\subseteq\Gamma\times{\mathbb C}^{\mathbb N}$ by
$$
R(\gamma,\hat\phi)\iff \hat\phi\text{ codes a state on } C^*(\gamma).
$$
Then $R$ is easily seen to be Borel, and as noted in \ref{ss.codingstates}, the sections $R_\gamma=\{\hat\phi\in{\mathbb C}^{\mathbb N}: R(\gamma,\hat\phi)\}$ are compact. For $n\in{\mathbb N}$ and $\varepsilon>0$, define $Q_{n,\varepsilon}\subseteq\Gamma\times{\mathbb C}^{\mathbb N}$ by
\begin{align*}
Q_{n,\varepsilon}(\gamma,\hat\phi)\iff
(\forall k)(\forall l)(\forall m) &(\mathfrak p_k(\gamma)=\mathfrak p_m(\gamma)^*\mathfrak p_n(\gamma)^*\mathfrak p_n(\gamma)\mathfrak p_m(\gamma)\\
& \land \mathfrak p_l(\gamma)=\mathfrak p_m(\gamma)^*\mathfrak p_m(\gamma)
\land \hat\phi(l)\leq 1 \implies \hat\phi(k)+\varepsilon\leq\|\mathfrak p_n(\gamma)\|^2).
\end{align*}
Then $Q_{n,\varepsilon}$ is Borel, and the sections $(Q_{n,\varepsilon})_\gamma$ are closed, and therefore compact. Thus the sets
$$
S_{n,\varepsilon}=\{\gamma\in\Gamma:(\exists\hat\phi) Q_{n,\varepsilon}(\gamma,\hat\phi)\}
$$
are Borel, by \cite[Theorem 28.8]{Ke:Classical}. We claim that
$$
\{\gamma\in\Gamma: C^*(\gamma)\text{ is simple}\}=\Gamma\setminus\bigcup_{n\in{\mathbb N}, \varepsilon>0} S_{n,\varepsilon}.
$$
To see this, first note that if $C^*(\gamma)$ is simple then the GNS representation of any state $\phi$ on $C^*(\gamma)$ is faithful, and so for any $n\in{\mathbb N}$ we have
\begin{equation}\label{eq.faithful}
\sup\{\phi(\mathfrak p_m(\gamma)^*\mathfrak p_n(\gamma)^*\mathfrak p_n(\gamma)\mathfrak p_m(\gamma)):
\phi(\mathfrak p_m(\gamma)^*\mathfrak p_m(\gamma))\leq 1 \}
=\|\mathfrak p_n(\gamma)\|^2.
\end{equation}
Hence $\gamma\notin S_{n,\varepsilon}$ for all $n\in{\mathbb N}$ and $\varepsilon>0$. On the other hand, if $\gamma\notin S_{n,\varepsilon}$ for all $n\in{\mathbb N}$ and $\varepsilon>0$, then \eqref{eq.faithful} holds, and so the GNS representation of every state is faithful. Hence $C^*(\gamma)$ is simple.
\end{proof}
A strengthening of Proposition~\ref{p.simple} will be given in \cite{FaToTo}.
\medskip
Since Effros has shown that the class of nuclear separable C$^*$-algebras is Borel (see \cite[\S 5]{Kec:C*} for a proof), we now have:
\begin{corollary}\label{c.unsBorel}
The set
$$
\{\gamma\in\Gamma: C^*(\gamma)\text{ is simple, nuclear and unital}\}
$$
is Borel.
\end{corollary}
\subsection{A Borel version of Kirchberg's $A\otimes \mathcal O_2$ Theorem}
For $A$ and $B$ fixed separable C$^*$-algebras, let
$$
\Hom(A,B)=\{f:A\to B: f \text{ is a $*$-homomorphism}\}.
$$
Then $\Hom(A,B)\subseteq L_1(A,B)$, the set of bounded linear maps from $A$ to $B$ with operator norm at most 1, and is closed in the strong operator topology, hence is a Polish space. We let $\End(A)=\Hom(A,A)$.
Kirchberg's $A\otimes\mathcal O_2$ Theorem states that $A$ is nuclear, simple, separable and unital if and only if $A\otimes\mathcal O_2$ is isomorphic to $\mathcal O_2\otimes\mathcal O_2$. The latter is itself isomorphic to $\mathcal O_2$ by a result of Elliott; see e.g.\ \cite[7.1.2 and 5.2.1]{Ror:Classification}. Our next theorem is an effective version of this result.
Let $\SAu(\mathcal O_2)$ denote the standard Borel space of closed unital $*$-subalgebras of $\mathcal O_2$. Since the parameterizations $\Gammaexu$ and $\SAu(\mathcal O_2)$ are weakly equivalent (see \ref{ss.exact}), it follows from Corollary~\ref{c.unsBorel} that the set
$$
\SAuns(\mathcal O_2)=\{A\in \SAu(\mathcal O_2): A\text{ is nuclear and simple}\}
$$
is Borel. We will work with this parameterization of unital nuclear simple separable C$^*$-algebras below.
\begin{theorem}\label{t.AtensorO2}
There is a Borel map $F:\SAuns(\mathcal O_2)\to \mathrm{End}(\mathcal O_2\otimes\mathcal O_2)$ such that $F(A)$ is a monomorphism of $\mathcal O_2\otimes\mathcal O_2$ onto $A\otimes\mathcal O_2$.
\end{theorem}
The proof uses an approximate intertwining argument\footnote{We refer the reader to \cite[2.3]{Ror:Classification} for a general discussion of approximate intertwining. The argument is also known as Elliott's Intertwining Argument.} that we now describe. Let $A$ be a simple unital separable nuclear C$^*$-algebra, viewed as a unital subalgebra of $\mathcal{O}_2$. Recall that for such $A$, the algebra $A \otimes \mathcal{O}_2$ (and so in particular $\mathcal{O}_2 \otimes \mathcal{O}_2 \cong \mathcal{O}_2$) has the property that every unital $*$-endomorphism is approximately inner (see \cite[6.3.8]{Ror:Classification}.) Fix a $*$-isomorphism $\gamma: \mathcal{O}_2 \otimes \mathcal{O}_2 \to \mathcal{O}_2$ and a summable sequence $(\epsilon_n)$ of strictly positive tolerances. We will apply Elliott's Intertwining Argument to the {\it a priori} non-commuting diagram
\[
\xymatrix{
{A \otimes \mathcal{O}_2}\ar[r]^{\mathbf{id}}\ar[d]^{\iota} &
{A \otimes \mathcal{O}_2}\ar[r]^{\mathbf{id}}\ar[d]^{\iota} &
{A \otimes \mathcal{O}_2}\ar[r]^{\mathbf{id}}\ar[d]^{\iota} & \cdots \\
{\mathcal{O}_2 \otimes \mathcal{O}_2}\ar[r]^{\mathbf{id}}\ar[ur]^{\eta} &
{\mathcal{O}_2 \otimes \mathcal{O}_2}\ar[r]^{\mathbf{id}}\ar[ur]^{\eta} &
{\mathcal{O}_2 \otimes \mathcal{O}_2}\ar[r]^{\mathbf{id}}\ar[ur]^{\eta} & \cdots
}
\]
where $\iota$ is the tensor product of the inclusion $A \hookrightarrow \mathcal{O}_2$ with the identity map on $\mathcal{O}_2$ and $\eta$ is given by $a \mapsto 1_A \otimes \gamma(a)$. Let us describe the procedure step-by-step, so that we may refer back to this description when arguing that the intertwining can be carried out effectively. \vspace{2mm}
Fix a dense sequence $(x_n^A)$ in $A \otimes \mathcal{O}_2$ and a dense sequence $(y_n)$ in $\mathcal{O}_2 \otimes \mathcal{O}_2$. Fix also a dense sequence of unitaries $u_n^A\in A\otimes\mathcal O_2$ and a dense sequence of unitaries $v_n\in\mathcal O_2\otimes\mathcal O_2$. We assume that $u_1^A=1_{A\otimes\mathcal O_2}$ and $v_1=1_{\mathcal O_2\otimes\mathcal O_2}$. We define by recursion a sequence of finite sets $F_k^A\subseteq A\otimes\mathcal O_2$ and $G_k^A\subseteq\mathcal O_2\otimes\mathcal O_2$, sequences $(n_k^A)_{k\in{\mathbb N}}$ and $(m_k^A)_{k\in{\mathbb N}}$ of natural numbers, and homomorphisms $\iota_k^A:A\otimes \mathcal O_2\to\mathcal O_2\otimes\mathcal O_2$, $\eta_k^A:\mathcal O_2\otimes\mathcal O_2\to A\otimes\mathcal O_2$ subject to the following conditions: $n_1^A=1$, $m_1^A=1$, $F_1^A=G_1^A=\emptyset$, $\iota_1^A=\iota$, $\eta_1^A=\eta$, and for $k>1$ we require that
\begin{enumerate}
\item $F_k^A=\{x_{k-1}^A\}\cup F_{k-1}^A\cup \eta_{k-1}(G_{k-1}^A)$.
\item $G_k^A=\{y_{k-1}\}\cup G_{k-1}^A\cup\iota_{k-1}(F_k^A)$.
\item $n_k^A> n_{k-1}^A$ is least such that if we let $\eta_k^A=\Ad(u^A_{n_k^A})\circ\eta$ then the diagram
$$
\xymatrix{
{A \otimes \mathcal{O}_2}\ar[r]^{\mathbf{id}}\ar[d]^{\iota_{k-1}^A} &
{A \otimes \mathcal{O}_2}\\
{\mathcal{O}_2 \otimes \mathcal{O}_2}\ar[ur]^{\eta_k^A} &\\
}
$$
commutes up to $\epsilon_k$ on $F_k^A$. This is possible because any two endomorphisms of $A \otimes \mathcal{O}_2$ are approximately unitarily equivalent and the sequence $(u^A_n)$ is dense in the unitaries of $A \otimes \mathcal{O}_2$.
\item $m_k^A>m_{k-1}^A$ is least such that if we let $\iota_k^A=\Ad(v_{m_k^A})\circ\iota$ then the diagram
$$
\xymatrix{
&
{A \otimes \mathcal{O}_2}\ar[d]^{\iota_k^A}\\
{\mathcal{O}_2 \otimes \mathcal{O}_2}\ar[r]^{\mathbf{id}}\ar[ur]^{\eta_k^A} &
{\mathcal{O}_2 \otimes \mathcal{O}_2}
}
$$
commutes up to $\epsilon_k$ on $G_k^A$. This is possible because any two endomorphisms of $\mathcal{O}_2 \otimes \mathcal{O}_2$ are approximately unitarily equivalent and the sequence $(v_n)$ is dense in the unitaries of $\mathcal{O}_2 \otimes \mathcal{O}_2$.
\end{enumerate}
With these definitions the diagram
$$
\xymatrix{
{A \otimes \mathcal{O}_2}\ar[r]^{\mathbf{id}}\ar[d]^{\iota_1^A} &
{A \otimes \mathcal{O}_2}\ar[r]^{\mathbf{id}}\ar[d]^{\iota_2^A} &
{A \otimes \mathcal{O}_2}\ar[r]^{\mathbf{id}}\ar[d]^{\iota_3^A} & \cdots \\
{\mathcal{O}_2 \otimes \mathcal{O}_2}\ar[r]^{\mathbf{id}}\ar[ur]^{\eta_2^A} &
{\mathcal{O}_2 \otimes \mathcal{O}_2}\ar[r]^{\mathbf{id}}\ar[ur]^{\eta_3^A} &
{\mathcal{O}_2 \otimes \mathcal{O}_2}\ar[r]^{\mathbf{id}}\ar[ur]^{\eta_4^A} & \cdots
}
$$
is an approximate intertwining (in the sense of \cite[2.3.1]{Ror:Classification}), and so
$$
\eta_\infty^A:\mathcal O_2\otimes\mathcal O_2\to A\otimes\mathcal O_2:\eta^A_\infty(b)=\lim_{k\to\infty}\eta_k^A(b)
$$
defines an isomorphism.
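For the reader's convenience we spell out the estimate behind the approximate intertwining; it is the standard computation, under the implicit assumption (as in \cite[2.3.1]{Ror:Classification}) that the tolerances satisfy $\sum_k\epsilon_k<\infty$. For $b\in G_k^A$ and $l\ge k$, conditions (1), (3) and (4) give
\begin{align*}
\|\eta_{l+1}^A(b)-\eta_l^A(b)\|
&\le \|\eta_{l+1}^A\big(b-\iota_l^A\eta_l^A(b)\big)\| + \|\eta_{l+1}^A\iota_l^A\big(\eta_l^A(b)\big)-\eta_l^A(b)\|\\
&< \epsilon_l+\epsilon_{l+1},
\end{align*}
using that $\eta_{l+1}^A$ is contractive, that $b\in G_l^A$ by (2), and that $\eta_l^A(G_l^A)\subseteq F_{l+1}^A$ by (1). Summability of the $\epsilon_k$ then makes $(\eta_k^A(b))_k$ Cauchy on a dense set, so the limit above exists.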
\begin{proof}[Proof of Theorem \ref{t.AtensorO2}]
Fix a dense sequence $z_n\in\mathcal O_2$. Also, let $y_n, v_n\in\mathcal O_2\otimes\mathcal O_2$ be as above. By the Kuratowski-Ryll-Nardzewski Selection Theorem we can find Borel maps $f_n:\SAu(\mathcal O_2)\to\mathcal O_2$ such that $(f_n(A))_{n\in{\mathbb N}}$ is a dense sequence in $A$. Let $\pi:\mathbb{N} \to \mathbb{N}^2$ be a bijection with $\pi(n) = (\pi_1(n),\pi_2(n))$. Associating the $\mathbb{Q} + i \mathbb{Q}$ span of $\{ f_{\pi_1(n)}(A) \otimes z_{\pi_2(n)} \ | \ n \in \mathbb{N} \}$ to $A$ is clearly Borel, and this span is dense in $A\otimes\mathcal O_2$; let $(x_n^A)_{n\in{\mathbb N}}$ enumerate it. From (the proof of) Lemma \ref{L.E.1}.(\ref{L.E.1.4}), we obtain a sequence of Borel maps $\Un_k:\SAu(\mathcal O_2)\to \mathcal O_2\otimes\mathcal O_2$ such that $(\Un_k(A))_{k\in{\mathbb N}}$ is dense in the set of unitaries in $A\otimes \mathcal O_2$. We let $u_k^A=\Un_k(A)$.
With these definitions there are unique Borel maps $A\mapsto F_k^A$, $A\mapsto G_k^A$, $A\mapsto n_k^A$ and $A\mapsto m_k^A$ satisfying (1)--(4) above; in particular, if these maps have been defined for $k=l-1$, and $F_l^A$ is defined, then $n_l^A$ is defined as the least natural number $n$ greater than $n_{l-1}^A$ such that
$$
(\forall a\in F_l^A)\ \|\Ad(u^A_{n} u^A_{n_{l-1}^A} \cdots u^A_{n_1^A})\circ\eta\circ \Ad(v_{m_{l-1}^A}\cdots v_{m_1^A})\circ\iota(a)-a\|_{\mathcal O_2}<\epsilon_l.
$$
Thus the graph of $A\mapsto n_l^A$ is Borel. Similarly, $A\mapsto m_k^A$ is seen to be Borel for all $k\in{\mathbb N}$. But now we also have that the map $F:\SAu(\mathcal O_2)\to\mathrm{End}(\mathcal O_2\otimes\mathcal O_2):A\mapsto\eta_\infty^A$ is Borel, since
$$
F(A)=\eta_\infty^A\iff (\forall l) F(A)(y_l)=\lim_{k\to\infty} \Ad(u^A_{n_k^A}u^A_{n_{k-1}^A}\cdots u^A_{n_1^A})\circ\eta(y_l)
$$
provides a Borel definition of the graph of $F$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{t.belowgrp}]
The Polish group $\Aut(\mathcal O_2)$ acts naturally in a Borel way on $\SAu(\mathcal O_2)$ by $\sigma\cdot A=\sigma(A)$. Let $E$ be the corresponding orbit equivalence relation. We claim that isomorphism in $\SAuns(\mathcal O_2)$ is Borel reducible to $E$.
By the previous Theorem there is a Borel map $g:\SAuns(\mathcal O_2)\to\SAuns(\mathcal O_2)$ with the following properties:
\begin{itemize}
\item $g(A)\cong A$.
\item For all $A\in\SAuns(\mathcal O_2)$ there is an isomorphism $G_A:A\otimes\mathcal O_2\to\mathcal O_2$ under which\\ $G_A(A\otimes 1_{\mathcal O_2})=g(A)$.
\end{itemize}
In other words, there is an effective unital embedding of nuclear unital simple separable C$^*$-algebras into $\mathcal{O}_2$ with the property that the relative commutant of the image of any such algebra in $\mathcal{O}_2$ is in fact isomorphic to $\mathcal{O}_2$. We claim that $g$ is a Borel reduction of $\cong^{\SAuns(\mathcal O_2)}$ to $E$.
Fix $A,B\in\SAuns(\mathcal O_2)$. Clearly if $g(A) E g(B)$ then $A\cong B$. On the other hand, if $A\cong B$ then there is an isomorphism $\varphi:A\otimes \mathcal O_2\to B\otimes\mathcal O_2$ which maps $A\otimes 1_{\mathcal O_2}$ to $B\otimes 1_{\mathcal O_2}$. Thus
$$
\sigma=G_B\circ\varphi\circ G_A^{-1}\in\Aut(\mathcal O_2)
$$
satisfies $\sigma\cdot A=B$.
\end{proof}
Since by Corollary \ref{c.reduction} it holds that the homeomorphism relation for compact subsets of $[0,1]^{\mathbb N}$ is Borel reducible to isomorphism of nuclear simple unital AI algebras, we recover the following unpublished result of Kechris and Solecki:
\begin{theorem}[Kechris-Solecki]\label{t.kechrissolecki}
The homeomorphism relation for compact subsets of $[0,1]^{\mathbb N}$ is below a group action.
\end{theorem}
Note that the set $A=\{\gamma\in \Gamma: C^*(\gamma)$ is abelian$\}$ is
Borel since $A$ is clearly closed in the weak operator topology in $\Gamma$.
As a subspace of $\Gamma$ it therefore provides a good standard Borel parameterization
for abelian C*-algebras, used in the following.
\begin{prop}\label{P.abelian}
The isomorphism relation for unital
abelian separable C*-algebras is Borel reducible to isomorphism of AI algebras, and therefore to an orbit equivalence relation induced by a Polish group action.
\end{prop}
\begin{proof}
For $\gamma\in A$ we have that
$C^*(\gamma)\cong C(X)$ where $X$ is the pure state space of $C^*(\gamma)$.
The result now follows by Lemma~\ref{L.SPT} and Corollary \ref{c.reduction}. An alternative proof of the last claim appeals to Theorem~\ref{t.kechrissolecki} instead.
\end{proof}
\section{Bi-embeddability of AF algebras}\label{S.biembed}
In this section we will show that the bi-embeddability relation of separable unital AF algebras is not Borel-reducible to a Polish group action (Corollary~\ref{C.Biembed}).
More precisely, we prove that every $K_\sigma$-equivalence relation is Borel reducible to this analytic equivalence relation. (Recall that a subset of a Polish space is $K_\sigma$ if it is a countable union of compact sets.)
This Borel reduction is curious since bi-embeddability of separable unital UHF algebras is bi-reducible
with the isomorphism of separable unital UHF algebras, and therefore smooth.
For $f$ and $g$ in the Baire space ${\mathbb N}^{\mathbb N}$ we define
\begin{align*}
f\leq^\infty g& \text{ if and only if } (\exists m)(\forall i) f(i)\leq g(i)+m\\
f=^\infty g & \text{ if and only if } f\leq^\infty g \text{ and } g\leq^\infty f
\end{align*}
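The quantifier structure of $\leq^\infty$ can be illustrated with a short sketch. The function names are ours, and since the genuine relation quantifies over all of ${\mathbb N}$, no finite computation decides it: the check below is only a proxy that inspects a finite window of indices and a bounded range of the witness $m$.

```python
def leq_inf(f, g, N=1000, max_m=100):
    """Finite-window proxy for f <=^infty g: is there an m <= max_m
    with f(i) <= g(i) + m for all i < N?  Illustrative only."""
    needed = max(f(i) - g(i) for i in range(N))
    return needed <= max_m

def eq_inf(f, g, N=1000, max_m=100):
    """Finite-window proxy for f =^infty g: both one-sided relations."""
    return leq_inf(f, g, N, max_m) and leq_inf(g, f, N, max_m)

f = lambda i: i          # f and g differ by a bounded amount: f =^infty g
g = lambda i: i + 7
h = lambda i: 2 * i      # h - f is unbounded: f <=^infty h but not h <=^infty f
```

The asymmetric pair $(f,h)$ mirrors the embedding criterion used below: $A_f$ embeds into $A_h\otimes B_n$ for no fixed $n$ in the reverse direction.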
This equivalence relation, also denoted $E_{K_\sigma}$, was introduced by Rosendal in \cite{Ro:Cofinal}. Rosendal proved that $E_{K_\sigma}$ is \emph{complete} for $K_\sigma$ equivalence relations in the sense that (i) every $K_\sigma$ equivalence relation is Borel reducible to it and (ii) $E_{K_\sigma}$ is itself $K_\sigma$. By a result of Kechris and Louveau (\cite{KeLou:Structure}), $E_{K_\sigma}$ is not Borel reducible to any orbit equivalence relation of a Polish group action. In particular, $E_{K_\sigma}$, or any analytic equivalence relation that Borel-reduces $E_{K_\sigma}$, is not effectively classifiable by
the Elliott invariant.
If $A$ and $B$ are C*-algebras then we denote by $A\hookrightarrow B$ the existence of a $*$-monomorphism of $A$ into $B$; $A$ is therefore bi-embeddable with $B$ if and only if $A\hookrightarrow B$ and $B\hookrightarrow A$.
\begin{prop} There is a Borel-measurable map $ {\mathbb N}^{\mathbb N}\to\Gamma:f\mapsto C_f$
such that
\begin{enumerate}
\item each $C_f$ is a unital AF algebra;
\item $C_f$ isomorphically embeds into $C_g$ if and only if $f\leq^\infty g$;
\item $C_f$ is bi-embeddable with $C_g$ if and only if $f=^\infty g$.
\end{enumerate}
\end{prop}
\begin{proof}
We first describe the construction and then verify it is Borel. Let $p_i$, $i\in {\mathbb N}$, be the increasing enumeration of all primes. For $f\in {\mathbb N}^{\mathbb N}$ and $n\in {\mathbb N}$ define UHF algebras
\begin{align*}
A_f& = \bigotimes_{i=1}^\infty M_{p_i^{f(i)}}(\mathbb C)\qquad\text{and}\qquad
B_n =\bigotimes_{i=1}^\infty M_{p_i^n}(\mathbb C).
\end{align*}
Hence $B_n$ is isomorphic to $A_f$ if $f(i)=n$ for all $i$. For $f$ and $g$ we have that $A_f\hookrightarrow A_g$ if and only if $f(i)\leq g(i)$ for all $i$. Also, $A_f\hookrightarrow A_g\otimes B_n$ if and only if $f(i)\leq g(i)+n$ for all $i$. Therefore, $f\leq^\infty g$ if and only if $A_f\hookrightarrow A_g\otimes B_n$ for a large enough $n$. Let $C_f$ be the unitization of
$
A_f\otimes \bigoplus_{n=1}^\infty B_n.
$
We claim that $f\leq^\infty g$ if and only if $C_f\hookrightarrow C_g$.
First assume $f\leq^\infty g$ and let $n$ be such that $f(i)\leq g(i)+n$ for all $i$. Then by the above $A_f\otimes B_m \hookrightarrow A_g\otimes B_{m+n}$ for all $m$, and therefore $C_f\hookrightarrow C_g$.
Now assume $C_f\hookrightarrow C_g$. Then in particular $A_f\hookrightarrow \bigoplus_{n=1}^\infty A_g\otimes B_n$. Since $A_f$ is simple, we have $A_f\hookrightarrow A_g\otimes B_n$ for some $n$ and therefore $f\leq^\infty g$. We have therefore proved that the map $f\mapsto C_f$ satisfies (2). Clause (3) follows immediately.
It remains to find a Borel measurable map $\Phi\colon {\mathbb N}^{\mathbb N}\to \Gamma$ such that $C^*(\Phi(f))$ is isomorphic to $C_f$ for all $f$. Since $\otimes$ (for nuclear C*-algebras) and $\bigoplus$ are Borel (by Lemma~\ref{L.B.3.0} for the former; the latter is trivial), it suffices to show that there is a Borel map $\Psi\colon {\mathbb N}^{\mathbb N}\to \Gamma$ such that $C^*(\Psi(f))$ is isomorphic to $A_f$ for all $f$.
Let $D$ denote the maximal separable UHF algebra, $\bigotimes_{i=1}^\infty \bigotimes_{n=1}^\infty M_{p_i}(\mathbb C)$. Let $\phi$ be its unique trace and let $\pi_\phi\colon D\to \mathcal B(H_\phi)$
be the GNS representation corresponding to $\phi$. Then $H_\phi$ is a
tensor product of finite-dimensional Hilbert spaces $H_{n,i}$
such that $\dim(H_{n,i})=p_i^2$. Also, for each pair $n,i$ there is an isomorphic copy $D_{n,i}$ of $M_{p_i}(\mathbb C)$ acting on $H_{n,i}$ and a unit vector $\xi_{n,i}$ such that $\omega_{\xi_{n,i}}$ agrees with the normalized trace on $D_{n,i}$.
The algebra generated by $D_{n,i}$, for $n,i\in{\mathbb N}$, is isomorphic to $D$.
Now identify $H_\phi$ with $H$ as used to define $\Gamma$. Each $D_{n,i}$ is singly generated, so we can fix a generator $\gamma_{n,i}$. Fix a bijection $\chi$ between ${\mathbb N}$ and ${\mathbb N}^2$, and write $\chi(n)=(\chi_0(n), \chi_1(n))$. For $f\in {\mathbb N}^{\mathbb N}$ let $\Psi(f)=\gamma$ be defined by $\gamma_n=0$ if $f(\chi_1(n))<\chi_0(n)$ and $\gamma_n=\gamma_{\chi_0(n),\chi_1(n)}$ if $f(\chi_1(n))\geq \chi_0(n)$. Then $C^*(\gamma)$ is isomorphic to the tensor product of the $D_{n,i}$ for $n\leq f(i)$, which is in turn isomorphic to $A_f$. Moreover, the map $f\mapsto \Psi(f)$ is continuous when $\Gamma$ is considered with the
product topology, because finite initial segments of $\Psi(f)$ are determined by finite initial segments of $f$.
\end{proof}
\begin{corollary} \label{C.Biembed}
$E_{K_\sigma}$ is Borel reducible to the bi-embeddability relation $E$ on separable AF C*-algebras. Therefore $E$ is not Borel reducible to a Polish group action. \qed
\end{corollary}
\section{Concluding remarks and open problems}\label{S.problems}
In this section we discuss several open problems and possible directions for further investigations related to the theme of this paper. The first is related to Section \ref{S.biembed}.
\begin{problem}\label{nucbiembed}
Is the bi-embeddability relation for nuclear simple separable C$^*$-algebras a complete analytic equivalence relation? What about bi-embeddability of AF algebras?
\end{problem}
\noindent
We remark that the bi-embeddability, and even isomorphism, of separable Banach spaces is known to be complete for analytic equivalence relations (\cite{FeLouRo}). Moreover, bi-embeddability of countable graphs is already complete for analytic equivalence relations by \cite{LouRo}.
\begin{quest} \label{Q.action}
Is isomorphism of separable (simple) C*-algebras implemented
by a Polish group action?
\end{quest}
George Elliott observed that
the isomorphism
of nuclear simple separable C*-algebras is Borel reducible to an orbit equivalence relation
induced by a Borel action of the automorphism group of ${\mathcal O}_2\otimes \mathcal K$.
This is proved by an extension of the proof of Theorem~\ref{t.belowgrp} together
with a Borel version of
Kirchberg's result that $A\otimes {\mathcal O}_2\otimes\mathcal K$ is isomorphic to ${\mathcal O}_2\otimes \mathcal K$
for every nuclear simple separable C*-algebra $A$. The following is an extension of Question~\ref{Q.action}.
\begin{problem} \label{P.action}
What is the Borel cardinality of the isomorphism relation of
larger classes of separable C*-algebras (simple or not), such as:
\begin{enumerate}
\item[(i)] nuclear C*-algebras;
\item[(ii)] exact C*-algebras;
\item[(iii)] arbitrary C*-algebras?
\end{enumerate}
Do these problems have strictly increasing Borel cardinality?
Are all of them Borel reducible to the orbit equivalence relation of a Polish group action on a standard Borel space?
\end{problem}
\noindent
We can ask still more of the classes in Problem \ref{P.action}.
On the space $\Delta^{{\mathbb N}}$ (recall that $\Delta$ is the closed unit disk in $\mathbb{C}$) define the relation $E_1$ by letting $x\, E_1\, y$ if
$x(n)=y(n)$ for all but finitely many $n$.
In \cite{KeLou:Structure} it was proved that $E_1$ is not Borel-reducible
to any orbit equivalence relation of a Polish group action, and therefore $E_1\leq_B E$
implies $E$ is not Borel-reducible to
an orbit equivalence relation of a Polish group action.
Kechris and Louveau
have even conjectured that, for Borel equivalence relations,
Borel-reducing $E_1$ is equivalent to not being Borel-reducible to an orbit equivalence relation of a Polish group action.
While Corollary~\ref{c.reduction} implies that
the relations considered in Problem~\ref{P.action} are not Borel,
it is natural to expect that they either reduce~$E_1$ or can be reduced to
an orbit equivalence relation of a Polish group action.
We defined several Borel parameterizations (see Definition~\ref{d.parameterization})
of separable C*-algebras that were subsequently shown to be equivalent.
The phenomenon that all natural Borel parameterizations of a given classification problem
seem to wind up being equivalent has been observed by other authors.
One may ask the following general question.
\begin{problem} Assume $\Gamma_1$ and $\Gamma_2$ are good standard
Borel parameterizations that model isomorphism of structures in the same category $\mathcal C$.
Find optimal assumptions that guarantee the existence of a Borel-isomorphism $\Phi\colon \Gamma_1\to \Gamma_2$ that preserves the isomorphism in class $\mathcal C$ in the sense that
\[
A\cong B\text{ if and only if } \Phi(A)\cong \Phi(B).
\]
\end{problem}
\noindent
A theorem of this kind could be regarded as an analogue of an automatic continuity theorem.
We have addressed many basic C*-algebra constructions here and proved that they are Borel. We have further proved that various natural subclasses of separable C*-algebras are Borel. There is, however, much more to consider.
\begin{problem}\label{classes} Determine whether the following C*-algebra constructions and/or subclasses are Borel:
\begin{enumerate}
\item[(i)] the maximum tensor product, and tensor products more generally;
\item[(ii)] crossed products, full and reduced;
\item[(iii)] groupoid C*-algebras;
\item[(iv)] $\mathcal{Z}$-stable C*-algebras;
\item[(v)] C*-algebras of finite or locally finite nuclear dimension;
\item[(vi)] approximately subhomogeneous (ASH) algebras;
\item[(vii)] the Thomsen semigroup of a C$^*$-algebra.
\end{enumerate}
\end{problem}
\noindent
Items (iv)--(vi) above are of particular interest to us as they are connected to Elliott's classification program (see \cite{et}).
Items (iv) and (v) are connected to the radius of comparison by the following conjecture of Winter and the second author.
\begin{conj} Let $A$ be a simple separable unital nuclear C*-algebra. The following are equivalent:
\begin{enumerate}
\item[(i)] $A$ has finite nuclear dimension;
\item[(ii)] $A$ is $\mathcal{Z}$-stable;
\item[(iii)] $A$ has radius of comparison zero.
\end{enumerate}
\end{conj}
\noindent
We have shown here that simple unital nuclear separable C$^*$-algebras form a Borel set.
We will show in a forthcoming article that separable C*-algebras with radius of comparison zero also form a Borel set. Thus, those $A$ as in the conjecture which satisfy (iii) form a Borel set. It would be interesting to see if the same is true if one asks instead for (i) or (ii).
Clearly, the ${\mathcal Z}$-stable algebras form an analytic set.
As for item (vi) of Problem \ref{classes}, the question of whether every unital simple separable nuclear C$^*$-algebra with a trace is ASH has been open for some time. One might try to attack this question by asking where these formally different classes of algebras sit in the Borel hierarchy.
In \cite{Ell:Towards}, Elliott introduced an abstract approach to functorial classification.
A feature of his construction is that morphisms between classifying invariants lift
to morphisms between the objects to be classified.
This property is shared with the classification of C*-algebras, where
morphisms between K-theoretic invariants lift to (outer, and typically not unique)
automorphisms of the original objects.
It would be interesting to have a set-theoretic analysis of this phenomenon
parallel to the set-theoretic analysis of abstract classification problems
used in the present paper.
\begin{problem} Is there a set-theoretic model for the functorial inverse in the sense of
Elliott? More precisely, if the categories are modelled by Borel spaces and the
functor that assigns invariants is Borel, is the inverse functor necessarily Borel?
\end{problem}
Finally, the following question was posed by Greg Hjorth to the third author:
\begin{problem}
Is $\simeq^{{\bf Ell}}$ Borel reducible to $\simeq^{\Lambda}$? That is, does isomorphism of Elliott invariants Borel reduce to affine isomorphism of separable metrizable Choquet simplexes?
\end{problem}
We will show in a forthcoming article that at least the Elliott invariant is below a group action.
It would also be natural to try to obtain an answer to the following:
\begin{problem}\label{Pr.Choquet}
Is isomorphism of separable metrizable Choquet simplexes Borel reducible to homeomorphism of compact Polish spaces? I.e., is $\simeq^\Lambda$ Borel reducible to $\simeq^{{\mathbb K}}_{{\rm homeo}}$?
\end{problem}
In connection to this problem we should point out a misstatement in \cite{Hj:Borel}.
In \cite[p. 326]{Hj:Borel} it was stated that Kechris and Solecki have proved that
the homeomorphism
of compact Polish spaces is Borel bi-reducible to $E^{X_\infty}_{G_\infty}$.
The latter is
an orbit equivalence relation of a Polish group action with the property that every
other orbit equivalence relation of a Polish group action is Borel-reducible to it.
If true, this would give a positive solution to Problem~\ref{Pr.Choquet}.
However, Kechris and Solecki have proved only one direction, referred to in Theorem~\ref{T.KS}.
The other direction is still open.
\bibliographystyle{amsplain}
\section{Introduction}
Open star clusters offer an opportunity to investigate stellar evolution in detail due to the shared properties of their member stars. Analysis of eclipsing binary stars and asteroseismology are strong tools to improve such studies further \citep[e.g.][]{Brogaard2011,Brogaard2012,Brogaard2017, Brogaard2018,Brogaard2018A,Miglio2012,Miglio2016,Handberg2017,Arentoft2019,Sandquist2020}. Here, we investigate the eclipsing binary HD\,27130 and the oscillating giant $\epsilon$ Tau to constrain the properties of the Hyades open cluster.
The Hyades is a relatively young open cluster. According to the literature its age is $625-790$\,Myr \citep[e.g.][]{Perryman98, Brandt15, Gossage18, Martin18,Gaia2018}, it has super-solar metallicity ($\text{[Fe/H]}$ determinations range from $+0.1$\,dex to $+0.2$\,dex, \citealt{Takeda2020} and references therein), and it is located at a distance from the Sun of $46.75\pm{0.46}$\,pc \citep{Gaia17}.
\object{HD\,27130} (BD\,+16 577, vB\,22, V818\,Tau, HIP 20019) is a double-lined eclipsing binary system \citep{Schiller87} in the Hyades open cluster \citep[see its membership information in][]{Griffin88, Schwan91, Perryman98, Douglas14}. According to previous studies, the components constituting the HD\,27130 binary system are G8\,V (hereafter primary) and K3-5\,V (hereafter secondary) stars \citep{Schiller87, Svechnikov04}, with a period of about 5.6 days \citep{McClure82, Svechnikov04, Watson06, Peterson87, Peterson88}.
\citet{Schiller87} carried out the $UBVRI$ observations of HD\,27130 and used the light curves to obtain the effective temperatures of $5470$\,K and $3977$\,K for the primary and secondary components, respectively. Slightly higher $T_\text{eff}$ values equal to $5530\pm{100}$\,K and $4220\pm{150}$\,K were determined by \citet{Torres02}. They also derived masses of $1.08\pm{0.017}$\,$M_\odot$ and $0.771\pm{0.011}$\,$M_\odot$ as well as radii of $0.905\pm{0.019}$\,$R_\odot$ and $0.773\pm{0.010}$\,$R_\odot$ for the primary and secondary members of the system, respectively. Similar mass values of $1.064\pm{0.011}$\,$M_\odot$, $1.072\pm{0.010}$\,$M_\odot$, $1.0591\pm{0.0062}$\,$M_\odot$ for the primary and $0.763\pm{0.005}$\,$M_\odot$, $0.769\pm{0.005}$\,$M_\odot$, $0.7605\pm{0.0062}$\,$M_\odot$ for the secondary components were attributed by \cite{Peterson87, Peterson88, Torres02}, respectively.
The observational spectroscopic data, and orbital elements of the star system were reviewed by \citet{Griffin12}, including the detection of a faint third component. The impact of this on the physical parameters of HD\,27130 seems to have gone unnoticed. Recently, \citet{Torres2019A,Torres2019B} re-examined the mass-$M_V$ relation for the Hyades based on new and updated parameters for some of the astrometric spectroscopic binaries in the cluster. Along with their new measurements, they included literature measurements for other spectroscopic binaries, including HD\,27130, for which they adopt the measurements of \citet{Torres02}. As stressed by \citet{Torres2019B}, significant improvements are to be expected from new measurements of more of these systems. Specifically, some of the systems have properties that are based on rather old measurements. \citet{Torres2019A} mention in their introduction that HD\,27130 has its masses measured to better than 1\%, presumably based on published numbers, including those of \citet{Torres02} that they adopt for their mass-luminosity relation of the Hyades. However, given the triple nature of the system, and the updated spectroscopic parameters given by \citet{Griffin12}, the previously published masses \citep{Peterson87, Peterson88, Torres02} are likely not accurate at the precision level of 1\%.
Here, we obtain new photometric and spectroscopic measurements of HD\,27130, and measure new precise and accurate properties of the components. This is used to constrain the helium content of the Hyades open cluster.
The giant star $\epsilon$ Tau is also a member of the Hyades. Given its late evolutionary stage, it is much better suited for an age estimate than the main-sequence components of HD\,27130. \citet{Arentoft2019} showed that $\epsilon$ Tau displays solar-like oscillations and measured its asteroseismic parameters $\Delta\nu$ and $\nu_{\rm max}$. We adopt the asteroseismic measurements of \citet{Arentoft2019} and re-assess the physical properties of $\epsilon$ Tau to estimate the age of the Hyades cluster.
The paper outline is as follows. First, we present our observations of HD\,27130 in Sect.~\ref{sec:observations}. Sect.~\ref{data reduction} contains our data reduction and basic analysis, while Sect.~\ref{sec:binary} describes our eclipsing binary modelling. $\epsilon$ Tau is presented and re-assessed in Sect.~\ref{sec:epstau}.
The properties of HD\,27130 and $\epsilon$ Tau are used in Sect.~\ref{sec:results} to establish the helium content and the age of the Hyades. Summary, conclusions, and outlook are given in Sect.~\ref{conclusions}.
\section{Observations}
\label{sec:observations}
HD\,27130 was observed both photometrically and spectroscopically.
Time series of photometric and spectral observations were taken at the Mol\.{e}tai Astronomical Observatory of the Vilnius University (MAO, Lithuania). Additional photometric observations of the secondary eclipse were performed at the Astronomical Observatory of the Siena University (AO SU) in Italy. We also include a light curve from the MASCARA telescope. All light curves are available as electronic tables at CDS.
Spectral observations were also obtained from the Stellar Observations Network Group (SONG) Hertzsprung telescope located at the Teide Observatory of the Astrophysics Institute of the Canary Islands, Spain.
\subsection{Photometric observations at the Mol{\. e}tai Astronomical Observatory }
At MAO we used a 51\,cm Maksutov-type telescope with the 35\,cm working diameter of the primary mirror and the Apogee Alta U47 CCD camera. The observations were carried out in a semi-robotic mode. During the period between 16 September, 2018, and 22 October, 2019, we observed 17 runs and took in total 11906 images in the Johnson-Cousins photometric system $B$, $V$, and $I$ filters. Exposure times varied between 2.5\,s for the $I$ filter and 5\,s for the $B$ filter. In Table~\ref{table:MAO-phot}, we present information about the observing runs at MAO.
\begin{table}
\caption{Dates of runs and numbers of images in different filters observed for HD~27130 at MAO and AO SU.}
\label{table:MAO-phot}
\centering
\begin{tabular}{c c c c }
\hline\hline
Dates of runs & $B$ images & $V$ images & $I$ images \\
(JD-2458000) & & & \\
\hline
378.46496-378.56400 &122 & 136 & 134 \\
383.43266-383.56133 &120 & 119 & 110 \\
393.43604-393.59516 &46 & 23 & 5 \\
401.41744-401.46681 &55 & 51 & 54 \\
402.41152-402.64392 &218 & 236 & 232 \\
403.41291-403.64824 &362 & 368 & 370 \\
404.41396-404.63789 &340 & 354 & 338 \\
405.42203-405.62959 &338 & 323 & 286 \\
406.42025-406.62940 &395 & 400 & 397 \\
407.42510-407.62244 &372 & 366 & 344 \\
409.40506-409.64799 &463 & 300 & 314 \\
451.42251-451.59986 &286 & 270 & 259 \\
452.42069-452.59040 &222 & 220 & 224 \\
493.15244-493.39132 &443 & 426 & 101 \\
555.21378-555.35725 &bad & 144 & 133 \\
751.39102-751.62263 & -- & 444 & 443 \\
779.54398-779.62327 & -- & 101 & 99 \\
779.38032-779.68534 & -- & 246 & 242 \\
\hline
\end{tabular}
{\it Note}. The last line corresponds to observations at AO SU.
\end{table}
\subsection{Photometric observations at the Astronomical Observatory of the Siena University}
At AO SU we used a 0.30-m f/5.6 Maksutov-Cassegrain type telescope atop a Comec 10~micron GM2000-QCI equatorial mount.
Image acquisition was provided by a Sbig STL-6303 CCD camera (with a 3072~$\times$~2048 pixels of 1.15~arcsec size) and an Optec TCF-S precision focuser with temperature compensation.
The images were acquired by alternating the Johnson-Cousins photometric system $V$ and $I$ filters with exposure times of 20~seconds for both filters, defocusing the telescope to increase the signal-to-noise ratio. HD\,27130 was observed on the night of the $22^{\rm nd}$ of October, 2019, when a secondary eclipse of HD\,27130 was expected.
\subsection{Photometric observations with MASCARA}
In addition to the $BVI$ photometry we also use observations obtained by the MASCARA survey \citep{Talens2017}. The MASCARA survey utilizes two stations, located in the northern and southern hemisphere, to search for transiting exoplanets. For this purpose it obtains continuous, high-cadence observations of the brightest stars in the sky ($4 < V < 8.4$), and such observations are also very well suited for use in variable star studies \citep{Mellon2019}. HD\,27130 passes over the East, West and Zenith pointing cameras of the northern MASCARA station and we use data obtained from February 2015 to August 2018.
MASCARA uses no filter in its optics; as such, the bandpass is determined by the transmission of the windows and lenses and the response of the CCDs. No laboratory measurement of this combined response was carried out; however, investigation of the measured relative MASCARA magnitudes reveals no deviations from $V$-band magnitudes. The quantum efficiency as a function of wavelength for the CCD can be seen in fig.~4 of \citet{Talens2017}.
The raw MASCARA photometry was initially processed with the primary and secondary calibration steps of the MASCARA calibration pipeline \citep{Talens2018}, removing common mode systematics present in all light curves and residual systematics in individual light curves, respectively. In order to better disentangle the eclipse signal and the residual systematics we obtained a running mean of the fully calibrated phase-folded light curve using a period of 5.6092159 days and window size of 15 minutes. The running mean was then subtracted from the post-primary calibration light curve before re-running the secondary calibration step, and this procedure was iterated until convergence was reached.
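The folding-and-smoothing step of this iteration can be sketched as follows. This is a minimal, hypothetical stand-in for the MASCARA pipeline: the function names are ours, and in the actual pipeline the 15-minute running mean is interleaved with the primary and secondary calibration steps of \citet{Talens2018}.

```python
import numpy as np

def phase_fold(t, period, t0=0.0):
    """Orbital phase in [0, 1) for observation times t (days)."""
    return ((t - t0) / period) % 1.0

def running_mean_model(phase, flux, window):
    """Mean flux in a sliding window of width `window` (phase units)
    around each point, wrapping across phase 1 -> 0."""
    model = np.empty_like(flux)
    for k, p in enumerate(phase):
        d = np.abs(phase - p)
        d = np.minimum(d, 1.0 - d)        # wrap-around phase distance
        model[k] = flux[d < window / 2].mean()
    return model

period = 5.6092159                         # eclipse period in days (from the text)
window = 15.0 / (24.0 * 60.0) / period     # 15 minutes expressed in phase units
# One detrending pass: estimate the phase-locked eclipse signal with
# running_mean_model and subtract it before re-running the secondary
# calibration; the pipeline iterates this until convergence.
```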
\subsection{Spectroscopic observations at the Mol{\. e}tai Astronomical Observatory}
Spectral observations of HD\,27130 were carried out on the f/12 1.65~m Ritchey-Chretien telescope at MAO with the high-resolution Vilnius University Echelle Spectrograph (VUES, \citealt{Jurgenson2014,Jurgenson2016}). A log of these observations is provided in Table~\ref{table:maoobs}.
The VUES is designed to observe spectra in the 4\,000~to~8\,800~\AA~wavelength range with three spectral resolution modes ($R$~=~30\,000, 45\,000, and 60\,000). The observations of HD\,27130 were carried out using the 60\,000 mode; however, the actual resolution of this mode, as measured using Thorium-Argon (ThAr) lines, is around 68\,000. The data were reduced and calibrated following standard reduction procedures which included a subtraction of the bias frame, correction for flat field, extraction of orders, wavelength calibration, and a cosmic ray removal as described by \citet{Jurgenson2016}. The spectra consist of 83 spectral orders. For most spectra the ThAr lamp spectra were taken just prior and just after each exposure, and 1583 ThAr lines were used for the wavelength calibration.
\subsection{Spectroscopic observations with the Hertzsprung SONG telescope}
Spectroscopic data were obtained with the Hertzsprung SONG telescope \citep{Andersen2014, Grundahl2017,Frandsen2018,Fredslund2019} at the Teide observatory on Tenerife island on $9-26$ January, 2018. A log of these observations is provided in Table~\ref{table:songobs}.
Briefly, the observations were carried out with the SONG spectrograph at a resolution
of 90\,000 (1.2" slit width) and exposure times of 1800\,s. The spectra consist of 51 spectral orders,
spanning $4400-6800$~{\AA}. To ensure a precise wavelength calibration, we obtained one spectrum of a
ThAr calibration lamp just prior and just after each exposure. The number of ThAr lines used
in the wavelength calibration varied between 1053 and 1102.
\begin{table}
\caption{Information on spectroscopic observations with the MAO 1.65~m telescope.}
\label{table:maoobs}
\centering
\begin{tabular}{c c c c}
\hline\hline
Midtime of exp. & Exp. time & Resolution & Altitude \\
(JD-2458000) & (s) & & (deg)\\
\hline
493.33566 & 600 & 68740 & 51.28 \\
493.34305 & 600 & 68740 & 50.89 \\
493.35039 & 600 & 68740 & 50.41 \\
512.30415 & 1800 & 68280 & 49.97 \\
521.29437 & 1800 & 68113 & 48.57 \\
525.18322 & 1800 & 68115 & 50.04 \\
525.20444 & 1800 & 68115 & 51.35 \\
525.22566 & 1800 & 68115 & 51.79 \\
537.21869 & 1800 & 68344 & 51.12 \\
555.26402 & 1800 & 68220 & 39.50 \\
575.28386 & 1800 & 68054 & 25.06 \\
576.27406 & 1800 & 68049 & 26.50 \\
578.25433 & 1800 & 68370 & 29.39 \\
587.26544 & 1800 & 68014 & 22.11 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{Information on spectroscopic observations with SONG.}
\label{table:songobs}
\centering
\begin{tabular}{c c c c c }
\hline\hline
Midtime of exp. & Slit & Exp. time & Flux & Altitude \\
(JD-2458000) & & (s) & (ADU) & (deg)\\
\hline
128.32321 & 6 & 1800.0 & 5743 & 59.00 \\
130.44565 & 6 & 1800.0 & 7634 & 72.37 \\
131.36576 & 6 & 1800.0 & 3844 & 73.48 \\
132.42151 & 6 & 1800.0 & 6536 & 76.50 \\
133.42460 & 6 & 1800.0 & 5150 & 75.34 \\
139.30799 & 6 & 1800.0 & 5209 & 63.49 \\
140.30879 & 6 & 1800.0 & 2029 & 64.55 \\
140.33224 & 6 & 1800.0 & 2212 & 71.19 \\
141.30902 & 6 & 1800.0 & 1516 & 65.41 \\
141.38665 & 6 & 1800.0 & 4756 & 78.02 \\
143.35550 & 6 & 1800.0 & 1541 & 77.78 \\
144.33251 & 6 & 1800.0 & 4204 & 74.01 \\
144.55573 & 6 & 1800.0 & 2174 & 26.77 \\
\hline
\end{tabular}
\end{table}
\section{Data reduction and analysis}
\label{data reduction}
Phased light curves of the photometric observations are presented in Fig.~\ref{phaseMol} and Fig.~\ref{phaseMas}.
The photometric images observed at MAO were processed with the Muniwin program from the software package C-Munipack\footnote{http://c-munipack.sourceforge.net/} (\citealt{Muniwin14}). The Muniwin program is built on the basis of the software package DAOPHOT for stellar photometry in crowded stellar fields (\citealt{Daophot87}) and is designed for time-series differential aperture photometry and searches for variable stars. We used the {\it Advanced} image calibration procedure to perform the bias and dark frame subtraction and the flat-field correction. For this purpose we used more than 10 images of the bias, dark, and sky flat fields in each filter for the CCD image calibration of each night's observations.
We used the Muniwin program to determine the instrumental magnitudes of all detected stars in the field using an aperture of 8~arcsec, and selected as comparison star the brightest star in the field that showed no sign of variability.
For the further analysis, we calculated differential magnitudes of the target with respect to this comparison star.
Photometric analysis of images observed at AO SU was done with the MaxIm DL\footnote{http://diffractionlimited.com/product/maxim-dl/} software
using an aperture of 14~arcsec for the $V$ filtered images and of 18~arcsec for the $I$ filtered images. Differential photometry was performed using a photometric sequence from the AAVSO Variable Star Charts service\footnote{https://www.aavso.org/variable-star-charts/}.
We calculated apparent magnitudes of HD\,27130 in the Johnson $B$ and $V$ filters observed at MAO using HD\,285678 as a reference star. We transformed the $B_T$ and $V_T$ magnitudes of the reference star from the Tycho-2 catalogue \citep{Tycho-2-2000} to Johnson $B=11.45$ and $V=10.64$ magnitudes using the equations in Appendix C of \citet{Mamajek2002}, accounting for the sign error as corrected in \citet{Mamajek2006}. There was no suitable reference star with a known magnitude in the $I$ filter. The light curve of HD\,27130 observed in the $V$ filter at AO~SU was fitted to the MAO observations. We could combine light curves observed at the two observatories since both observed HD\,27130 at the same phases. The apparent magnitude in the $I$ filter was calculated using the AO~SU observations, as its field of view is larger and we could use another reference star, HD\,27110, recommended by AAVSO. Its $I$ magnitude, 7.852~mag, was taken from The Amateur Sky Survey TASS \citep{TASS1997}. The MAO light curve of HD\,27130 in the $I$ filter was then fitted to the AO~SU observations at the phases covered by both data sets.
Since the reference stars were chosen to optimize relative photometry rather than absolute, we do not use our measurements to derive precise absolute magnitudes of HD\,27130. The light curve analysis we perform later is invariant to the photometric zero-point.
\begin{figure}
\centering
\includegraphics[width=\hsize]{Phased_LCs_mag.eps}
\caption{Phased $BVI$ light curves of HD\,27130
calculated with a period of 5.6092159~days.}
\label{phaseMol}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\hsize]{MASCARA_folded_LC.eps}
\caption{Phased MASCARA light curve of HD\,27130
calculated with a period of 5.6092159~days.}
\label{phaseMas}
\end{figure}
\subsection{Radial velocity measurements}
For measuring radial velocities (RVs) of the binary components at each epoch, and to separate their spectra, we
used a spectral separation code based on the description of
\citet{Gonzalez2006} combined with the broadening
function (BF) formalism by \citet{Rucinski1999, Rucinski2002}.
This method works best when the observations sample
a wide range in radial velocity as evenly as possible. Calculations were done order by order. For each epoch the final radial velocity
was taken as the mean of the results from each order, and the RMS scatter across orders divided by the square root of the number of orders was taken as
a first estimate of the uncertainty. Later, when we fitted the
binary solution, we found that the first estimate of the RV uncertainties of the primary component were significantly smaller than the
RMS of the O–C of the best fit to the RVs. Our first solutions had O–C errors of up to 0.5~km\,s$^{-1}$ in
a systematic pattern identical for the primary and secondary
components. This indicated that the radial velocity errors were
dominated by radial velocity zero-point offsets between epochs.
Since (almost all) our spectra are calibrated with ThAr spectra taken immediately after the stellar spectra, these offsets cannot be attributed to wavelength calibration errors.
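The per-epoch combination described above is simply a mean over the order-by-order velocities, with the standard error of the mean as a first uncertainty estimate. A minimal sketch (the function name and the synthetic numbers are ours, purely for illustration):

```python
import statistics

def epoch_rv(order_rvs):
    """Combine order-by-order radial velocities into one epoch value.

    Returns (mean RV, first-estimate uncertainty), where the uncertainty
    is the RMS scatter across orders divided by the square root of the
    number of orders, i.e. the standard error of the mean.
    """
    mean_rv = statistics.mean(order_rvs)
    rms = statistics.stdev(order_rvs)  # scatter across the orders
    return mean_rv, rms / len(order_rvs) ** 0.5

# Synthetic example: 51 orders scattered around 60.70 km/s
rvs = [60.70 + 0.05 * ((7 * i) % 11 - 5) for i in range(51)]
rv, err = epoch_rv(rvs)
```

As noted in the text, such a first estimate cannot capture epoch-to-epoch zero-point offsets, which is why an additional 100~m\,s$^{-1}$ term is later added in quadrature.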
As discussed by \citet{Griffin12}, HD\,27130 is a hierarchical triple system with a very faint tertiary star that shifts the system velocity of the primary-secondary orbit periodically with a semiamplitude of about 2.3~km\,s$^{-1}$ and a period of about 3079 days. Our observations with VUES were spread randomly in time over almost 100 days (see Table \ref{table:maoobs}) and were therefore significantly affected by this. The SONG spectra, by contrast, were all observed within just 17 days (see Table \ref{table:songobs}) and should therefore not be significantly affected. This led us to continue with the radial velocity measurements only for the SONG spectra, given that we did not have spectra covering a long enough period to model the presence of the third component.
We did first attempt to use radial velocities from the VUES spectra by correcting for the motion of the inner pair's center of mass in the outer orbit at each epoch using the elements from \citet{Griffin12}. However, we could not reach a precision level on the orbit comparable to that with the SONG spectra. This is likely due to a combination of several issues, of which we mention the most significant:
1) Five VUES epochs happened to be at times with small RV differences between components, resulting in overlapping spectral lines, and thus potential systematic uncertainties on the radial velocities. 2) Although we had the elements of \citet{Griffin12} to correct for the outer orbit, we found that our measurements were not on the same RV zero-point as the SONG spectra, and even when accounting for that, we still found an additional mean offset in systemic velocity between the SONG and VUES spectra of more than 1 km\,s$^{-1}$. Given a remaining larger-than-expected RV O-C scatter among the VUES spectra, it was not clear how to best subtract this additional RV offset. 3) We therefore attempted to force an RV offset so that at each VUES epoch the RV of the primary matched the orbital solution,
and used the corrected RVs thus obtained for the secondary to improve the solution. However, this also did not work well. The secondary component RVs still had an O-C scatter of the order of 1 km\,s$^{-1}$, much larger than for the SONG spectra. A significant cause for this is the lower resolution of the VUES spectra compared to SONG, meaning that more systematic noise is smoothed into the BF peak.
Future simultaneous observations with the two instruments SONG and VUES will help improve the understanding of these issues.
Assuming that the radial velocities from the SONG spectra were not significantly affected by the third component, another reason had to be found for their large O–C errors. They occurred because the
starlight cannot be exactly centred on the SONG spectrograph slit. This shifts the spectrum in wavelength relative to that of a star perfectly centred on the slit, thereby mimicking a radial velocity shift.
To correct for this shift, we used the following procedure originally developed for the analysis of \citet{Brogaard2011}: We identified a spectral order with very strong telluric absorption lines ($\lambda = 6282-6311$~\AA). In the adjacent order we measured the combined BF profile of both components using a template spectrum. Tests show that the best reproduction of the combined spectrum from the BF is obtained when the template spectrum matches the dominant contributor. Therefore, we used a template from \citet{Coelho2005} with parameters matching the primary component. The BF profile obtained in this way is nearly identical for close-by orders. We then convolved our synthetic stellar template spectrum, covering the
wavelength region with telluric absorption lines, with the obtained BF, to produce the expected stellar spectrum in that order. This spectrum was then multiplied with a synthetic telluric spectrum \citep{Bertaux2014} which was broadened to the spectrograph resolution, shifted
in radial velocity from $-1$ to +1~km\,s$^{-1}$ in steps of 10~m\,s$^{-1}$, and
multiplied with factors between 0.5 and 1.5 in steps of 0.1. The
resulting set of artificial spectra of stellar+telluric lines were all
cross-correlated with the observed spectra, and the one giving
the highest cross correlation determined the shift of the telluric
lines. This shift corresponds to the radial velocity zero-point correction, since the telluric absorption lines are always at zero radial velocity
(except for small shifts due to winds in the Earth’s atmosphere),
and the observed shift can only be caused by imperfect slit centring.
Applying the zero-point radial velocity shifts reduced the errors on the orbital parameters very significantly, from an O–C RMS of 300~m\,s$^{-1}$ to about 100~m\,s$^{-1}$. The uncertainties were, however, still underestimated. Therefore, an RV zero-point uncertainty of 100~m\,s$^{-1}$ was added in quadrature to the RV
uncertainties in order for the binary analysis to yield a reduced $\chi^2$
close to 1 for the RV of both components. The final radial velocity estimates are given in Table \ref{table:RV}.
\begin{table}
\centering
\caption{Radial velocity measurements of HD\,27130}
\label{table:RV}
\begin{tabular}{lrr}
\hline
\hline
BJD-TDB & RV primary & RV secondary \\
& $({\rm km}\cdot {\rm s}^{-1})$ & $({\rm km}\cdot {\rm s}^{-1})$ \\
\hline
58128.32814 & $ 3.15\pm0.11$ & $88.50\pm0.16 $\\
58130.45043 & $98.85\pm0.11$ & $-43.47\pm0.12$\\
58131.37047 & $60.62\pm0.11$ & $ 9.41\pm0.13$\\
58132.42615 & $-5.37\pm0.11$ & $100.26\pm0.12$\\
58133.42916 &$-17.43\pm0.11$ & $116.92\pm0.12$\\
58139.31208 & $-8.25\pm0.11$ & $ 103.97\pm0.12$\\
58140.31280 & $52.79\pm0.11$ & $ 20.17\pm0.13$\\
58140.33625 & $54.20\pm0.11$ & $ 17.77\pm0.13$\\
58141.31295 & $98.25\pm0.11$ & $-42.65\pm0.13$\\
58141.39057 & $99.39\pm0.11$ & $-43.99\pm0.13$\\
58143.35926 & $9.85\pm0.11$ & $79.09\pm0.13$\\
58144.33618 & $-21.48\pm0.11$ & $122.88\pm0.12$\\
58144.55938 & $-19.20\pm0.11$ & $119.79\pm0.13$\\
\hline
\end{tabular}
\end{table}
\subsection{Light ratio}
\label{sec:LR}
The light ratio between the components is an important parameter for both the spectral separation before the spectroscopic analysis, and for the binary analysis of systems without a total eclipse, as is the case for HD\,27130.
We estimated the light ratio from an iterative procedure. First, we identified HD\,27130 in the Gaia colour-magnitude diagram (CMD) of the Hyades by \citet{Lodieu2019}. We then fitted the shape of the main sequence of the Hyades with a low-order polynomial. The light ratio in the $G$-band was estimated by requiring that both components are located on this main sequence while reproducing the system photometry when combined. The colours of the components were then used to estimate the $T_{\rm eff}$ values of the components: guess values for $T_{\rm eff}$ were adjusted until bolometric corrections from the calibration of \citet{Casagrande2018} reproduced the component colours. These $T_{\rm eff}$ estimates were used to select template spectra from the library of \citet{Coelho2005} to fit the spectroscopic broadening functions of the separated spectra with a rotational profile. For this fit, we assumed a fixed linear limb darkening coefficient of 0.6. The ratio of the areas under the rotational profiles of the binary components corresponds to the luminosity ratio of the stars, assuming that the effective temperatures are correct. The luminosity ratio is wavelength dependent, so we weighted results from the individual spectral orders in order for the averaged value to correspond to the $V$-band. This was then extended to the Gaia $G$-band by the use of Planck functions and the $T_{\rm eff}$ estimates. With the new $G$-band estimate of the luminosity ratio, the next iteration begins. In this iteration the light ratio is fixed in the CMD, so both components can no longer be forced to be on the main sequence. We required the primary star to be on the sequence and allowed the secondary to deviate. It came out slightly redder than the main sequence, which is perhaps a sign of the very faint third component.
Within a few iterations, we obtained $(L_2/L_1)_V = 0.127\pm0.015$, $T_{\rm p}$ = 5660 K, and $T_{\rm s}$ = 4235 K.
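The extension of the $V$-band luminosity ratio to the Gaia $G$-band via Planck functions amounts to rescaling the ratio by the blackbody flux ratios at the two effective wavelengths. A sketch (the effective wavelengths below are rough values we assume for illustration, not necessarily those used in the analysis):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def planck(wav, t_eff):
    """Planck spectral radiance B_lambda(T) for wavelength in metres."""
    return (2.0 * H * C**2 / wav**5) / math.expm1(H * C / (wav * K_B * t_eff))

def scale_ratio(lr, t_1, t_2, wav_from, wav_to):
    """Move a luminosity ratio L2/L1 from one effective wavelength to
    another, approximating both components as blackbodies."""
    ratio_to = planck(wav_to, t_2) / planck(wav_to, t_1)
    ratio_from = planck(wav_from, t_2) / planck(wav_from, t_1)
    return lr * ratio_to / ratio_from

# V-band ratio and T_eff values from the iteration above; 551 nm and
# 640 nm are our assumed effective wavelengths for V and Gaia G
lr_G = scale_ratio(0.127, 5660.0, 4235.0, 551e-9, 640e-9)
```

Because the secondary is much cooler, the ratio grows toward redder bandpasses (here lr_G comes out around 0.16), which is why the $G$-band ratio entering the CMD step is larger than the $V$-band value.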
\subsection{Spectral analysis}
\label{sec:specanal}
The separated spectra of the binary components resulting from the spectral separation procedure were adjusted according to the light ratio based on the Gaia CMD analysis and BF area ratios (cf. Sect.~\ref{sec:LR}) to recover the true depths of the spectral lines. This was done to remove the continuum contribution from the other component. Specifically, for the primary star, this was achieved by multiplying the separated spectrum by a factor of $\frac{L_{\rm 1} + L_{\rm 2}}{L_{\rm 1}}$ followed by a subtraction of $\frac{L_{\rm 2}}{L_{\rm 1}}$. Here, $L_{\rm 1}$ and $L_{\rm 2}$ refer to the luminosities of the primary and secondary star, respectively. We used the light ratio from the $G$-band CMD solution (cf. Sect.~\ref{sec:LR}) scaled linearly to the $V$-band. After this procedure, we estimated the S/N of the spectra as the RMS scatter in narrow wavelength-regions that were visually inspected and judged to be without spectral lines. In this way, the S/N of the separated SONG spectra were found to be roughly 150 for the primary component and 30 for the secondary at 5000~\AA, increasing to 250 and 55 at 6000~\AA. The corresponding numbers for the separated VUES spectra are 90 and 25 at 5000~\AA, and 200 and 45 at 6000~\AA.
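The rescaling applied to the separated primary spectrum can be checked on a toy example: diluting a spectrum by the companion's continuum and then applying the factor $\frac{L_{\rm 1}+L_{\rm 2}}{L_{\rm 1}}$ and subtracting $\frac{L_{\rm 2}}{L_{\rm 1}}$ recovers the original line depths exactly (a sketch; the helper names are ours):

```python
def dilute(true_spec, l1, l2):
    """Separated spectrum of star 1 normalised to the combined continuum:
    the companion contributes a featureless continuum L2/(L1+L2)."""
    return [(s * l1 + l2) / (l1 + l2) for s in true_spec]

def restore(sep_spec, l1, l2):
    """Undo the dilution: multiply by (L1+L2)/L1 and subtract L2/L1."""
    return [d * (l1 + l2) / l1 - l2 / l1 for d in sep_spec]

# Toy primary spectrum with two absorption lines; V-band light ratio 0.127
true_spec = [1.0, 0.6, 1.0, 0.8, 1.0]
restored = restore(dilute(true_spec, 1.0, 0.127), 1.0, 0.127)
```

The same correction applied to the secondary's separated spectrum amplifies noise by the much larger factor $(L_{\rm 1}+L_{\rm 2})/L_{\rm 2}$, which is why the recovered line depths of the faint component are so sensitive to the adopted light ratio.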
We then performed classical equivalent width spectral analysis. This was done using two different methods, and for the SONG and the VUES spectra separately.
The first setup was as described in \citet{Slumstrup2019} with log$g$ fixed to the values from the binary solution (cf. Sect.~\ref{sec:binary}). This yielded $T_{\rm eff}$ = $5750\pm107$ K and [Fe/H]=$+0.14\pm0.02$ from the SONG spectrum of the primary star, and $T_{\rm eff}$ = $4630\pm202$ K and [Fe/H]=$+0.05\pm0.03$ from the SONG spectrum of the secondary star. The microturbulence for the secondary was unrealistically high when left unconstrained ($2.30\pm0.62$~km\,s$^{-1}$). If the microturbulence for the secondary was instead fixed according to e.g. the calibration of \citet{Bruntt2010}, [Fe/H] came out higher, in agreement with that of the primary.
Repeating the analysis for the VUES spectra gave $T_{\rm eff}$ = $5650\pm99$ K and [Fe/H]=$+0.03\pm0.02$ for the primary star, while robust results were not obtained for the secondary.
In the second setup, log\,$g$ was not fixed, but the results converged naturally to the expected values. For the spectra from SONG (VUES), the analysis made use of 161 (164) Fe\,{\sc i} lines and 21 (14) Fe\,{\sc ii} lines for the primary star, and 64 (87) Fe\,{\sc i} lines and 2 (2) Fe\,{\sc ii} lines for the secondary star. This yielded $T_{\rm eff} = 5614\pm40$ $(5571\pm48)$~K and [Fe/H]=$+0.00\pm0.12$ ($-0.07\pm0.10$) for the primary star and $T_{\rm eff} = 4484\pm51$ $(4534\pm63)$~K and [Fe/H]=$-0.07\pm0.11$ ($-0.05\pm0.10$) for the secondary star, using the spectra from SONG and VUES (in parentheses), respectively.
As mentioned previously, we used the Gaia CMD to establish a first estimate of the luminosity ratio of HD\,27130. The Gaia colours obtained from this exercise resulted in first-estimate $T_{\rm eff}$ values of 5645~K and 4420~K for the primary and secondary, respectively. For the second iteration, with the light ratio constrained from the relative spectral line strengths, the $T_{\rm eff}$ of the primary changed slightly to 5660~K, while that of the secondary changed more significantly. Enforcing the spectroscopic constraint on the luminosity ratio, the secondary is no longer on the main sequence if the colours are to be matched. In this case, the secondary $T_{\rm eff}$ becomes 4235~K, while it is 4310~K if we instead choose the point on the main sequence giving the correct light ratio in the Gaia $G$-band, but not in the $G_{RP}$-band.
From all the different spectroscopic $T_{\rm eff}$ estimates and the corresponding photometric measurements, we adopt $5650\pm50$~K and $4300\pm100$~K as our best estimates of the effective temperatures. For the secondary, this $T_{\rm eff}$ is somewhat lower than all our spectroscopic estimates above. However, the S/N is very low for the spectra of the secondary star, and the procedure to re-establish the true line depths of the faint component of a spectroscopic binary is very sensitive to uncertainties in the adopted light ratio. Therefore, we rely more on the photometric measurements. Our a posteriori comparison using the Gaia parallax in Sect.~\ref{sec:dEBres} supports this choice, with agreement within 40~K.
The [Fe/H] values that we obtained are generally below the current best estimates of the metallicity of the Hyades, e.g. +0.15 \citep{Arentoft2019}. We suspect that this is due to systematic effects arising from the spectral separation and light ratio correction procedures on these relatively low S/N spectra, especially for the secondary star.
\section{Eclipsing binary analysis}
\label{sec:binary}
To determine model independent stellar parameters we used the JKTEBOP\footnote{https://www.astro.keele.ac.uk/~jkt/codes/jktebop.html} eclipsing binary code \citep{Southworth2004} which is based on the EBOP program developed by P. Etzel \citep{Etzel1981,Popper1981}. We made use of non-linear limb darkening \citep{Southworth2007}, and simultaneous fitting of the light curve and the measured radial velocities \citep{Southworth2013}.
As seen in Fig.~\ref{phaseMol}, the light curves of HD\,27130 display small variations outside the eclipses, likely due to spot activity. Consequently, the magnitude level is different just before and after the primary eclipse compared to just before and after the secondary eclipse. To minimise this effect in the binary analysis, we kept only the parts of the light curves very close to or in eclipse. We adjusted the light curves on a night-by-night basis such that the mean magnitudes of observations taken outside eclipse aligned. We did not use the $B$-band light curve because we did not succeed in obtaining coverage of the secondary eclipse.
For each of the other three filters separately, we fitted for the following parameters: orbital period $P$, time of first primary eclipse $T_0$, central surface brightness ratio $J$, sum of the relative radii $r_{\rm 1}+r_{\rm 2}$, ratio of the radii $k=\frac{r_{\rm 2}}{r_{\rm 1}}$, orbital inclination $i$, $e\cos\omega$, $e\sin\omega$, semi-amplitudes of the components $K_{\rm 1}$ and $K_{\rm 2}$, and system velocities of the components $\gamma_{\rm 1}$ and $\gamma_{\rm 2}$. We allowed for two system velocities because the components and their analysis could be affected differently by gravitational redshift and convective blueshift \citep{Gray2009} effects. As it turns out, they appear not to be.
We used a quadratic limb darkening law with coefficients calculated using JKTLD \citep{Southworth2015} with tabulations for the $V$ and $I$ bandpasses by \citet{Claret2000}. We ran JKTEBOP iteratively, starting with limb darkening coefficients from first guesses based on the Gaia CMD analysis of HD\,27130 and then using $T_{\rm eff}$ values from the spectral analysis. New limb darkening coefficients were then calculated with JKTLD using these $T_{\rm eff}$ and log$g$ values from the solution in the next JKTEBOP iteration.
Gravity darkening coefficients were taken from \citet{Claret2011} though even very large changes to these numbers had negligible effects as expected for nearly spherical stars. The same is true for reflection effects, which were set to zero due to our pre-analysis adjustment of the out-of-eclipse magnitude level.
As is usually the case for eclipsing binary stars without a total eclipse, $k$, the ratio of the radii was poorly constrained by the light curve. We ran tests with $k$ fixed to different values and found that a very large range of $k$ values produced almost equally good solutions. Therefore, without an external constraint on $k$, the derived stellar parameters are too uncertain to be useful. This suggests that \citet{Schiller87} were too optimistic when estimating uncertainties in their light curve analysis. To circumvent this problem, we added the light ratio $(L_2/L_1)_V=0.127\pm0.015$ as an external constraint in JKTEBOP. This light ratio was derived using the separated spectra and the Gaia CMD (cf. Sect~\ref{sec:LR}). Using Planck functions we determined and employed also the corresponding light ratio for the MASCARA filter, which is very close to the $V$-band, since these filters have nearly the same effective wavelength despite the MASCARA filter being much broader. For the $I_C$-band we proceeded differently, since $I$-filter transmissions are not so similar. We derived the $k$ values for the other filters first and then fixed the $k$ value in the $I_C$-band solution to the weighted mean from the other two filters.
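The fixed value of $k$ for the $I_C$ band is the inverse-variance weighted mean of the $V$ and MASCARA results, $k=0.790\pm0.027$ and $k=0.815\pm0.047$:

```python
def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its formal uncertainty."""
    weights = [1.0 / e**2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, (1.0 / sum(weights)) ** 0.5

k_fixed, k_err = weighted_mean([0.790, 0.815], [0.027, 0.047])
# k_fixed ≈ 0.7962, the value fixed in the I_C-band solution
```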
\begin{figure}
\centering
\includegraphics[width=\hsize]{HD27130_multi_model.eps}
\caption{Light curves and radial velocity measurements of HD27130 compared to eclipsing binary models.
Upper panels: Phased light curves showing relative magnitude in MASCARA, $V$ and $I$ filters at phases at or close to eclipse. Green lines are the best model for the given filter. In the $V$-band panel, the red dashed line is the best model using the MASCARA light curve.
Bottom panels: Radial velocity measurements of the HD\,27130 components from SONG spectra. Red indicates the primary component and blue the secondary. In the O–C panel, the measurements of the primary and secondary components are shifted in phase by $+0.005$ and $-0.005$, respectively, for clarity. Note that the upwards-pointing errorbar gives the measurement error, while the downwards-pointing errorbar indicates the maximum difference between models based on different light curves.}
\label{fig:jktebop}
\end{figure}
The JKTEBOP solutions are given in Table~\ref{table:EBdata} and compared to the observed light curves and measured radial velocities in Fig.~\ref{fig:jktebop}. To estimate uncertainties we used the residual-permutation method of JKTEBOP, which estimates parameter uncertainties while accounting for correlated noise. The $3\sigma$ ranges of these parameter distributions correspond roughly to the filter-to-filter solution differences, and we therefore adopted these as our final $1\sigma$ uncertainties. This inflation of the internal errors seems justified by the comparison of the $V$-band and MASCARA light curve solutions in the middle panel of Fig.~\ref{fig:jktebop}; both limb darkening coefficients and light ratios are very close to identical for these two filters. Therefore, the difference in model eclipse depth must arise from a combination of measurement errors, improper rectification of the light curves, time-variable spot patterns on the stars, and potential blended light from other sources in the MASCARA light curve, none of which is accounted for by the internal errors. A flat mean of the three filter solutions was used as the best parameter estimate.
\begin{table}
\caption{Binary solutions for HD\,27130}
\label{table:EBdata}
\begin{tabular}{lrrr}
\hline
\hline
Parameter & $V$ & MASCARA & $I_C$ \\
\hline
Constraints: & & &\\
$L_{\rm s}/L_{\rm p}$ & 0.127(15) & 0.128(15)& -- \\
$k=(r_{\rm s}/r_{\rm p})$ & -- & -- & 0.7962\tablefootmark{*} \\
\hline
$P$ ($\rm days$) & 5.609217(7) & 5.609209(5) & 5.609233(6) \\
Inclination \emph{i} (deg) & 85.830(54)& 85.495(112) & 85.640(19)\\
$k = r_{\rm s}/r_{\rm p}$ & 0.790(27)& 0.815(47)& 0.7962\tablefootmark{*}\\
$r_{\rm s} + r_{\rm p}$ & 0.1008(7)& 0.1053(17)& 0.1043(4) \\
$J_{\rm s}/J_{\rm p}$ & 0.2374(48)& 0.2148(121)& 0.4180(52) \\
$L_{\rm s}/L_{\rm p}$ & 0.133(15) & 0.129(15)& 0.248 \\
$\sigma (\rm{mmag})$ & 8.38 & 14.88 & 9.23 \\
\hline
$K_{\rm p} ({\rm km}\cdot {\rm s}^{-1})$ & 60.705(41)& 60.702(41)& 60.702(41) \\
$K_{\rm s} ({\rm km}\cdot {\rm s}^{-1})$ & 83.754(47)& 83.750(47)& 83.751(47) \\
$\gamma_{\rm p} ({\rm km}\cdot {\rm s}^{-1})$ & 39.049(31)& 39.039(31) & 39.040(31)\\
$\gamma_{\rm s} ({\rm km}\cdot {\rm s}^{-1})$ & 39.041(36)& 39.052(36)& 39.052(36) \\
$e$ & 0.0013 & 0.0013 & 0.0013 \\
$\omega$ (deg) & 15.47 & 1.38 & 1.98 \\
\emph{a} $\rm(R_{\odot})$ & 16.05 & 16.06 & 16.06 \\
$M_{\rm p} \rm(M_{\odot})$ & 1.0239 & 1.0251 & 1.0246 \\
$M_{\rm s} \rm(M_{\odot})$ & 0.7421 & 0.7430 & 0.7426 \\
$R_{\rm p} \rm(R_{\odot})$ & 0.9033 & 0.9317 & 0.9330 \\
$R_{\rm s} \rm(R_{\odot})$ & 0.7139 & 0.7597 & 0.7429 \\
\hline
\end{tabular}
\tablefoot{
\tablefoottext{*}{The value was fixed.}
}
\end{table}
\begin{table}
\centering
\caption{Properties of HD\,27130}
\label{table:EBparameters}
\begin{tabular}{lr}
\hline
\hline
IDs & HD\,27130, vB\,22, V818\,Tau \\
$\alpha_{\rm J2000}$ & 04 17 38.946 \\
$\delta_{\rm J2000}$ & +16 56 52.21 \\
$G_{\rm TOT}$ & 8.098\\
$V_{\rm TOT}$ & 8.315\tablefootmark{1}\\
$T_{\rm p} (\rm K)$ & 5650(50) \\
$T_{\rm s} (\rm K)$ & 4300(100) \\
$M_{\rm p} (\rm M_{\odot})$ & 1.0245(24) \\
$M_{\rm s} (\rm M_{\odot})$ & 0.7426(16) \\
$R_{\rm p} (\rm R_{\odot})$ & 0.9226(150) \\
$R_{\rm s} (\rm R_{\odot})$ & 0.7388(260) \\
$V_{\rm p}$ & 8.445(14)\\
$V_{\rm s}$ & 10.685(210)\\
$BC_{V \rm ,p}$ & $-0.080$\\
$BC_{V \rm ,s}$ & $-0.746$\\
parallax, Gaia DR2 (mas) & 21.399(69) \\
parallax, derived, primary (mas)& 21.38\\
parallax, derived, secondary (mas)& 22.33\\
\hline
\end{tabular}
\tablefoot{
\tablefoottext{1}{Adopted from \citet{Kharchenko2001}}
}
\end{table}
We did not attempt to derive a precise $V$-band system magnitude for HD\,27130 from our photometry, since it was obtained with relative photometry in mind, not absolute. Instead, we estimated the apparent magnitudes of the binary components by correcting the $V$-band system photometry of HD\,27130 by \citet{Kharchenko2001} according to our spectroscopically measured $V$-band light ratio of 0.127.
We then calculated the luminosity of the components from their radii and $T_{\rm eff}$ and used equation (10) of \citet{Torres2010} generalised to an arbitrary filter $X$ to calculate their absolute component magnitudes:
\begin{equation}
M_X=-2.5\log\left(\frac{L}{L_\odot}\right)+V_\odot+31.572-\left(\mathrm{BC}_X-\mathrm{BC}_{V,\odot}\right)
\end{equation}
Here, $V_\odot=-26.76$ as recommended by \citet{Torres2010} and BC$_{V,\odot}=-0.068$ as obtained from the calibration of \citet{Casagrande2014}. Bolometric corrections were calculated from the same calibration.
Assuming zero interstellar reddening and extinction, we then calculated a predicted parallax for the components from the $V$-band magnitudes:
\begin{equation}
\pi = 10^{(\frac{M_V-V-5}{5})}
\end{equation}
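Numerically, the chain from radius and $T_{\rm eff}$ to predicted parallax looks as follows (the solar effective temperature of 5777~K is our assumption; $V_\odot$ and the bolometric corrections are the values quoted above; the formula yields the parallax in arcsec, converted to mas here):

```python
import math

T_SUN = 5777.0     # assumed solar effective temperature, K
V_SUN = -26.76     # apparent solar V magnitude (Torres 2010)
BC_V_SUN = -0.068  # solar BC_V (Casagrande & VandenBerg 2014)

def predicted_parallax_mas(radius, t_eff, bc_v, v_app):
    """Parallax in mas from radius (R_sun), T_eff (K), BC_V, and apparent V,
    assuming zero reddening and extinction."""
    lum = radius**2 * (t_eff / T_SUN) ** 4                        # L/L_sun
    m_v = -2.5 * math.log10(lum) + V_SUN + 31.572 - (bc_v - BC_V_SUN)
    return 1e3 * 10.0 ** ((m_v - v_app - 5.0) / 5.0)              # arcsec -> mas

plx_p = predicted_parallax_mas(0.9226, 5650.0, -0.080, 8.445)   # primary
plx_s = predicted_parallax_mas(0.7388, 4300.0, -0.746, 10.685)  # secondary
# plx_p ≈ 21.38 mas, plx_s ≈ 22.33 mas, the derived parallaxes quoted above
```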
The predicted parallaxes are given in Table \ref{table:EBparameters} along with the Gaia DR2 parallax \citep{Gaia2018}. As seen, they are in excellent agreement; a $T_{\rm eff}$ decrease of less than 10~K would be enough to obtain exact agreement for the primary. Even for the secondary, which is more sensitive to the assumed light ratio, a $T_{\rm eff}$ increase of less than 40~K is needed for exact agreement with the Gaia DR2 parallax. Given the relatively large parallax of the Hyades, the systematic parallax offset of the order of 0.05~mas in Gaia DR2, which has been investigated quite extensively by now (e.g. \citealp[and references therein]{Brogaard2018}, \citealt{Khan2019}), is of little consequence here. The distance to HD\,27130 is thus 46.5~pc as calculated from the binary parameters, very close to the 46.7~pc implied by the Gaia DR2 parallax.
\label{sec:dEBres}
The final parameters and their uncertainties for HD\,27130 are given in Table~\ref{table:EBparameters}.
The masses are in agreement within uncertainties with what was obtained from spectroscopy by \citet{Griffin12}.
Our velocity semi-amplitudes, $K_{\rm p}=60.703(41)\, {\rm km\, s}^{-1}$ and $K_{\rm s}=83.752(47)\, {\rm km\, s}^{-1}$, are much more precise than, but still in reasonable agreement with, those derived by \citet{Griffin12}, $K_{\rm p}=60.87(6)\, {\rm km\, s}^{-1}$ and $K_{\rm s}=84.09(49)\, {\rm km \,s}^{-1}$.
Our mass estimates also agree well with those given in Table~17 of \citet{Griffin12}, which are only very slightly larger than ours. Since all other mass estimates we have found in the literature are larger still, our values are lower than any previous estimate. We ascribe this to the effects of the third component, which were extensively discussed by \citet{Griffin12} but not properly accounted for in other studies.
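As a cross-check, the masses follow from the spectroscopic orbit and the inclination alone. For a (near-)circular double-lined orbit (the numerical constant is the standard one from \citet{Torres2010} for $K$ in km\,s$^{-1}$ and $P$ in days; the inclination below is the mean of our three filter solutions):

```python
import math

def orbit_masses(k1, k2, p_days, incl_deg, ecc=0.0):
    """Component masses in M_sun from a double-lined spectroscopic orbit;
    constant 1.036149e-7 for K in km/s and P in days (Torres et al. 2010)."""
    sin3i = math.sin(math.radians(incl_deg)) ** 3
    common = 1.036149e-7 * (1.0 - ecc**2) ** 1.5 * (k1 + k2) ** 2 * p_days
    return common * k2 / sin3i, common * k1 / sin3i

# Mean semi-amplitudes and mean inclination of the three filter solutions
m_p, m_s = orbit_masses(60.703, 83.752, 5.609217, 85.655)
# m_p ≈ 1.0245 M_sun, m_s ≈ 0.7426 M_sun, matching the adopted values
```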
\section{Re-evaluation of the properties of $\epsilon$ Tau}
\label{sec:epstau}
In order to get a tight age constraint on the Hyades age, we need a more evolved star than the components of HD\,27130. We therefore turn our attention to the giant Hyades member $\epsilon$ Tau (HD\,28305).
The physical properties of $\epsilon$ Tau were measured by \citet{Arentoft2019} from a combination of their own asteroseismic, interferometric, and spectroscopic measurements, coupled to photometric observations and the parallax measurement from Hipparcos data \citep{vanLeeuwen2007}.
They obtained a radius of $R=12.06\pm0.16\,R_{\odot}$ and a mass of $M=2.458\pm0.072\,M_{\odot}$ using a method that also empirically determines the correction needed for the asteroseismic scaling relations. However, as mentioned by \citet{Arentoft2019}, that solution suggests that $\epsilon$ Tau is an RGB star, in conflict with the relative timescales of the secondary clump (RC2) and RGB phases, with the CMD position, and even with a neural-network evaluation of the oscillation power spectrum.
Therefore, we consider the properties obtained instead by using a model-predicted correction to the asteroseismic scaling relation, assuming that $\epsilon$ Tau is in the RC2 phase of evolution. Doing that, we obtained $R=12.42\,R_{\odot}$ and $M=2.607\,M_{\odot}$ for a self-consistent model-predicted correction $f_{\Delta\nu}=1.005$ from the figures of \citet{Rodrigues2017} and corresponding unpublished ones showing the correction as a function of $\nu_{\rm max}$ (priv. comm. A. Miglio, see Appendix \ref{fig:scaling}). This solution requires a parallax of $\pi=21.60$ mas for an exact match between the asteroseismic and interferometric radius, 2.56$\sigma$ away from the Hipparcos measurement of $22.24\pm0.25$ mas. However, if using the $1\sigma$ lower boundary value for $\nu_{\rm max}$, the parallax increases to $\pi=22.03$ mas, just within the $1\sigma$ lower boundary of the Hipparcos parallax. The values of radius and mass for this solution are $R=12.17\,R_{\odot}$ and $M=2.458\,M_{\odot}$, close to those of \citet{Arentoft2019}. This solution seems to be the one that best agrees with all observations. We summarise the parameters of $\epsilon$\,Tau in Table~\ref{table:epstauparameters}.
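The unconstrained scaling-relation solution can be reproduced approximately from the seismic observables (the solar reference values $\nu_{\rm max,\odot}=3090\,\mu$Hz, $\Delta\nu_\odot=135.1\,\mu$Hz, and $T_{\rm eff,\odot}=5777$~K are our assumptions; the exact references adopted in the analysis may differ slightly, so the last digits should not be over-interpreted):

```python
NU_MAX_SUN = 3090.0  # muHz, assumed solar reference
DNU_SUN = 135.1      # muHz, assumed solar reference
T_SUN = 5777.0       # K

def seismic_mass_radius(nu_max, dnu, t_eff, f_dnu=1.0):
    """Mass and radius (solar units) from the asteroseismic scaling
    relations, with the correction factor f_dnu applied to Delta nu."""
    x_nu = nu_max / NU_MAX_SUN
    x_dnu = (dnu / f_dnu) / DNU_SUN
    x_t = t_eff / T_SUN
    return x_nu**3 * x_dnu**-4 * x_t**1.5, x_nu * x_dnu**-2 * x_t**0.5

# epsilon Tau observables with the RC2 correction f_dnu = 1.005
m, r = seismic_mass_radius(56.4, 5.00, 4950.0, f_dnu=1.005)
```

With these assumed references the sketch gives $M\approx2.62\,M_\odot$ and $R\approx12.46\,R_\odot$, within about 1\% of the values quoted above; lowering $\nu_{\rm max}$ by its $1\sigma$ uncertainty lowers both, as in the Hipparcos-constrained solution.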
\begin{table}
\centering
\caption{Properties of $\epsilon$\,Tau}
\label{table:epstauparameters}
\begin{tabular}{lr}
\hline
\hline
IDs & HD\,28305, $\epsilon$\,Tau \\
$\alpha_{\rm J2000}$ & 04 28 36.999 \\
$\delta_{\rm J2000}$ & +19 10 49.54 \\
$V$ (mag) & 3.53\\
$T_{\rm eff}(\rm K)$ & $4950\pm22$\tablefootmark{*} \\
$BC_{V}$ (mag) & $-0.257$\\
$\nu _{\rm max}(\mu$Hz) & $56.4\pm1.1$$^*$\\
$\Delta \nu(\mu$Hz) & $5.00\pm0.01$$^*$\\
$\theta _{\rm LD}$ (mas) & $2.493\pm0.019$$^*$ \\
parallax, Gaia DR2 (mas) & $20.31\pm0.43$ \\
parallax, Hipparcos (mas)& $22.24\pm0.25$ \\
\hline
\multicolumn{2}{l}{Asteroseismic parameters constrained by exact Hipparcos $\pi$,}\\
\multicolumn{2}{l}{Arentoft et al. (2019):}\\
$M (\rm M_{\odot})$ & $2.458\pm0.072$$^*$ \\
$R (\rm R_{\odot})$ & $12.06\pm0.16$$^*$ \\
\hline
\multicolumn{2}{l}{Asteroseismic parameters constrained by Hipparcos $\pi$,}\\
\multicolumn{2}{l}{This work:}\\
$\nu _{\rm max}(\mu$Hz) & $55.3$\\
$M (\rm M_{\odot})$ & 2.458 \\
$R (\rm R_{\odot})$ & 12.17 \\
parallax, derived (mas)& 22.03\\
\hline
\multicolumn{2}{l}{Asteroseismic parameters NOT constrained by Hipparcos $\pi$:}\\
\multicolumn{2}{l}{This work:}\\
$M (\rm M_{\odot}$) & 2.607 \\
$R (\rm R_{\odot}$) & 12.42 \\
parallax, derived (mas)& 21.60\\
\hline
\end{tabular}
\tablefoot{
\tablefoottext{*}{Derived by \citet{Arentoft2019}}
}
\end{table}
Both \citet{Schroder2020} and \citet{Gray2019} find significantly larger radii for $\epsilon$ Tau and the other three well-known Hyades giants.
\citet{Gray2019} do so while noting that for the four Hyades giants the Hipparcos and Gaia DR2 parallaxes have ``disquieting differences of about 10\%'', which can also be seen in Table \ref{table:epstauparameters}. We add that even the relative star-to-star differences are inconsistent. The fact that the Gaia DR2 parallax of $\epsilon$\,Tau is $20.31\pm0.43$~mas, many $\sigma$ away from the Hipparcos value, calls for a reconsideration of the physical properties in the future, when Gaia parallaxes of bright stars are better estimated.
Part of the larger radii found by both \citet{Schroder2020} and \citet{Gray2019} is, however, not related to the parallax, but rather to the use of bolometric corrections (BCs) that are quite different from the ones we adopt from \citet{Casagrande2014}. \citet{Schroder2020} adopts a BC$_{V}$ of $-0.5$~mag, mentioning also the consequences for radius and mass given a BC of $-0.4$~mag. \citet{Gray2019} do not mention the source or value of the BCs they use, but by reversing their calculations of the radii under the assumption that BC$_{V,\odot}=-0.068$ \citep{Casagrande2014}, we found that they used BC$_V=-0.39$ for $\epsilon$\,Tau and $\delta$\,Tau (and BC$_V=-0.36$ for the other two giants).
The value we use for $\epsilon$\,Tau, corresponding to $T_{\rm eff}=4950$~K and $\log g=2.7$ from \citet{Casagrande2014}, is BC$_V=-0.257$.
By comparing the physical radius obtained from interferometry to the physical radius obtained from the spectroscopic $T_{\rm eff}$ and $V$-band magnitude for the same parallax value, the validity of the bolometric correction can be tested. Doing this, we confirmed the excellent agreement for our value of BC$_V=-0.257$ that was also established in \citet{Arentoft2019}; the two radius estimates agree within 0.2\%. Using instead the bolometric corrections suggested by \citet{Schroder2020} or \citet{Gray2019}, the radii become 12\% or 7\% larger than the interferometric radius. This disagreement remains regardless of which parallax is adopted, suggesting that the $BC_V$ of \citet{Casagrande2014} is the more accurate one.
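This consistency test is straightforward to reproduce: derive the luminosity from $V$, BC$_V$, and a parallax, convert it to a radius via the Stefan-Boltzmann law, and compare with the interferometric radius at the same parallax. In the sketch below, $M_{\rm bol,\odot}=4.74$, $T_{\rm eff,\odot}=5772$~K, and ${\rm AU}/R_\odot=215.03$ are assumed reference values, so the residuals differ marginally from the 0.2\% quoted above.

```python
import math

# Assumed reference values (not taken from the paper):
MBOL_SUN, TEFF_SUN, AU_OVER_RSUN = 4.74, 5772.0, 215.03

def photometric_radius(v_mag, bc_v, plx_mas, teff):
    """R/Rsun from V, a bolometric correction, and a parallax."""
    m_v = v_mag + 5.0 + 5.0 * math.log10(plx_mas / 1000.0)  # absolute V mag
    m_bol = m_v + bc_v
    lum = 10.0 ** (-0.4 * (m_bol - MBOL_SUN))               # L/Lsun
    return math.sqrt(lum) * (TEFF_SUN / teff) ** 2          # Stefan-Boltzmann

def interferometric_radius(theta_ld_mas, plx_mas):
    """R/Rsun from an angular diameter and a parallax."""
    return (theta_ld_mas / 2.0) * AU_OVER_RSUN / plx_mas

plx = 22.24                                             # Hipparcos, mas
r_int = interferometric_radius(2.493, plx)
r_phot = photometric_radius(3.53, -0.257, plx, 4950.0)  # Casagrande-style BC
r_alt = photometric_radius(3.53, -0.39, plx, 4950.0)    # Gray-style BC
print(f"interferometric {r_int:.2f}, BC=-0.257 -> {r_phot:.2f}, "
      f"BC=-0.39 -> {r_alt:.2f}")
```

With these assumptions the BC$_V=-0.257$ radius agrees with the interferometric one at the sub-percent level, while the BC$_V=-0.39$ radius is several percent larger, mirroring the disagreement described above. The same qualitative outcome holds for any common parallax, since both radii scale with it in opposite directions.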
\section{Stellar models and isochrone comparisons}
\label{sec:results}
We used Victoria models \citep{VandenBerg2014}, PARSEC models \citep{Bressan2012}, and two different MESA model grids \citep{Paxton2011,Paxton2018}.
The Victoria models assume the \citet{Asplund2009} solar abundances, scaled to different [Fe/H] values. \citet{Choi2016} find that an initial $Z = 0.015$ and $Y = 0.261$ are reduced to the Asplund $Z$ and $Y$ consistent with asteroseismology when diffusion acts for the solar age (4.57~Gyr). For the Victoria models generated here:
[Fe/H] = 0.10, $Y$ = 0.27, $Z$ = 0.0163;
[Fe/H] = 0.10, $Y$ = 0.30, $Z$ = 0.0155;
[Fe/H] = 0.15, $Y$ = 0.27, $Z$ = 0.0182. \\
One might expect the Hyades to have $Y$ fairly close to 0.27 if it has an [Fe/H] between 0.10 and 0.15, as suggested by spectroscopic investigations (e.g. \citealt{Arentoft2019}). The Victoria models do not include diffusion of metals, only settling of helium, but at an age of $\lesssim 1$~Gyr the effects of metal diffusion will be minimal.
The PARSEC models are those of \citet{Bressan2012}, with revised and calibrated surface boundary conditions for low-mass dwarfs as described by \citet{Chen2014}.
We use two different versions of MESA models, which both include diffusion of helium and metals: the MIST models \citep{Dotter2016,Choi2016}, which assume the \citet{Asplund2009} solar abundances, and those from a specific grid of \citet{Miglio2020}, which assume the \citet{Grevesse1993} solar abundances. The latter models are as described in \citet{Rodrigues2017}, except that they include diffusion.
\subsection{Model comparisons to HD\,27130}
Fig.~\ref{fig:M-R-T} shows the Mass-Radius and Mass-$T_{\rm eff}$ diagrams of the HD\,27130 components compared to selected isochrones. The primary component matches most of the isochrones well: in the Mass-Radius diagram, all isochrones except the one assuming a very high helium content agree well within 1 $\sigma$. In the Mass-$T_{\rm eff}$ diagram, both the high-helium and the lower-metallicity Victoria isochrones, as well as the PARSEC isochrone, are hotter than the HD\,27130 primary. The secondary is larger and cooler than all the isochrones. For comparison, we also show the measurement by \citet{Torres02}. They also found the secondary to be larger and cooler than the isochrones, but their estimates for the primary also do not match the isochrones shown.
At the mass range spanned by the components of HD\,27130, the isochrones have no significant dependence on age. Therefore, the measurements of HD\,27130 can be used to make predictions about the helium content of the Hyades by adopting a measured [Fe/H] for the cluster.
We disregard the secondary component, since it is larger and cooler than all the isochrones regardless of the composition. This is likely related to inflation caused by magnetic fields, as suggested for other binary components of similar mass in the literature (e.g. \citealt{Sandquist2016}). The PARSEC isochrones are corrected empirically to match the lower main sequences of open and globular clusters \citep{Chen2014}. This can be seen in both panels of Fig.~\ref{fig:M-R-T}, where the PARSEC isochrones bend slightly at a mass very close to the mass of the HD\,27130 secondary. However, the effect is too small to make a difference, which is in accord with the findings of \citet{Chen2014} that the effects are very small at this mass. Thus, the inflated radius and decreased $T_{\rm eff}$ of the HD\,27130 secondary relative to isochrone predictions are likely caused by an increased magnetic activity relative to single low-mass stars.
Comparing the position of the primary to the Victoria isochrones, we estimate $Y=0.274\pm0.017$ from the Mass-Radius diagram and $Y=0.267\pm0.006$ from the Mass-$T_{\rm eff}$ diagram at [Fe/H]=$+0.15$. $Y=0.27$ implies a helium enrichment law close to $\frac{\Delta Y}{\Delta Z}=1.2$ when calculated from a primordial value of $Y=0.248$. This is in agreement with $\frac{\Delta Y}{\Delta Z}=1.4\pm0.1$ estimated for the open cluster NGC6791 by \citet{Brogaard2012} through analysis of eclipsing members. It thus appears that the helium enrichment of the Hyades is similar to other open clusters.
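The enrichment slope quoted here follows from simple arithmetic on the adopted composition; the sketch below assumes the primordial values $Y_{\rm p}=0.248$ (as stated) and $Z_{\rm p}\approx0$.

```python
# Helium enrichment slope from the adopted Hyades composition:
# Y = 0.27 and Z = 0.0182 (the [Fe/H] = +0.15 Victoria composition),
# relative to assumed primordial values Y_p = 0.248, Z_p ~ 0.
Y, Z = 0.27, 0.0182
Y_P, Z_P = 0.248, 0.0
dy_dz = (Y - Y_P) / (Z - Z_P)
print(f"dY/dZ = {dy_dz:.2f}")  # close to the quoted value of 1.2
```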
The MIST models, which also assume the \citet{Asplund2009} solar abundances, suggest very nearly the same helium content: $Y=0.274$ from the Mass-Radius diagram but $Y=0.277$ from the Mass-$T_{\rm eff}$ diagram. This illustrates that different assumptions about model physics contribute at least 0.005 to the uncertainty in $Y$ when determined from the Mass-Radius and Mass-$T_{\rm eff}$ diagrams.
The PARSEC models assume another solar abundance reference \citep{Caffau2011}. They suggest $Y=0.285$ from the Mass-Radius diagram but $Y=0.277$ from the Mass-$T_{\rm eff}$ diagram. A higher [Fe/H] is needed in order for these isochrones to yield a consistent value for $Y$. Therefore, although these models suggest a larger helium content, the predicted helium enrichment law will be similar, because of the corresponding larger $Z$ value.
The lower helium content of $Y=0.255$ derived by \citet{Lebreton2001} from HD\,27130 was likely an artifact caused mainly by the previous mass overestimate due to the unseen third component.
\begin{figure}
\centering
\includegraphics[width=\hsize]{hyades_mr.eps}
\includegraphics[width=\hsize]{hyades_mt.eps}
\caption{Mass-Radius (upper panel) and Mass-$T_{\rm eff}$ (lower panel) diagrams comparing the components of HD\,27130 to isochrones from different model sets and different compositions. Notation is such that e.g. p15y27 means [Fe/H]$=+0.15$ and $Y=0.27$.
The red squares are the measurement from this work, while the gray squares are the corresponding values from \citet{Torres02}.}
\label{fig:M-R-T}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\hsize]{hyades_mr3.eps}
\includegraphics[width=\hsize]{hyades_mt3.eps}
\caption{Mass-Radius (upper panel) and Mass-$T_{\rm eff}$ (lower panel) diagrams comparing the asteroseismic measurements of $\epsilon$ Tau to isochrones from different model sets and different compositions. The green square is the measurement of $\epsilon$ Tau from \citet{Arentoft2019}. The red squares are the measurements of HD27130 from this work, while the gray squares are the corresponding values from \citet{Torres02}.}
\label{fig:M-R-T3}
\end{figure}
\subsection{Model comparisons to $\epsilon$\,Tau}
With the helium content constrained by the primary component of HD\,27130, we can constrain the age of the Hyades using an evolved star. We make use of the secondary clump star member $\epsilon$ Tau with properties determined by \citet{Arentoft2019} as re-evaluated in the present paper. Since our best estimates of the mass and radius were very close to those of \citet{Arentoft2019} we use their measured values in Fig.~\ref{fig:M-R-T3}, where we compare to some of the same isochrones as in Fig.~\ref{fig:M-R-T}. As seen in the upper panel, an age close to $0.8$ Gyr is compatible with the observations if $\epsilon$ Tau is in the secondary clump phase, as the evidence strongly suggests.
The age needed for an exact match at the measured mass of $\epsilon$ Tau is $0.90$~Gyr for the MESA isochrones and $0.83$~Gyr for the PARSEC isochrone (the Victoria models were not extended to the core He-burning phase). The size of the $1-\sigma$ uncertainty in mass can be seen to correspond to a 0.1~Gyr $1-\sigma$ uncertainty in age. It is also worth noting that, due to the mass-radius correlation in the asteroseismic estimates, the best age estimates are younger by only about 0.1~Gyr if we adopt our alternative mass and radius estimates of $\epsilon$ Tau that were not consistent with the Hipparcos distance.
The lower panel of Fig.~\ref{fig:M-R-T3} compares $\epsilon$ Tau to the same models in the Mass-$T_{\rm eff}$ diagram. The effective temperatures of all models are within or just outside the 1-$\sigma$ uncertainty in $T_{\rm eff}$, although different models show differences of more than 100 K for the same age. Unfortunately, the model $T_{\rm eff}$ of evolved stars is sensitive to e.g. the calibration of the mixing-length parameter, the adopted surface boundary conditions, the detailed abundance pattern, and convective overshooting. Therefore, it makes little sense to attempt to force a finer match within the level of the 1-$\sigma$ error bar. While all of the many spectroscopic investigations of the Hyades giants have found their $T_{\rm eff}$ to be below 5000 K, some of the mentioned model assumptions may be responsible for the models reaching such high $T_{\rm eff}$. The two isochrones with identical assumptions except the age clearly demonstrate that the $T_{\rm eff}$ of $\epsilon$ Tau is insensitive to the assumed age. Instead, the good level of agreement shows that these stellar models self-consistently predict the correct $T_{\rm eff}$ of the secondary clump ($\epsilon$ Tau) within errors if constrained to do so at the unevolved part of the main sequence (HD\,27130).
\subsection{Mass-luminosity and Gaia CMD analysis}
Fig.~\ref{fig:M-L} reproduces the Mass-$M_V$ diagram of \citet{Torres2019B} with the addition of our measurements of HD\,27130 and the measurements of $\epsilon$ Tau by \citet{Arentoft2019}. Some of the isochrones from previous figures are also shown for comparison. Although the positions of the HD\,27130 components in this diagram have changed with our measurements relative to those of \citet{Torres02}, they have shifted along the isochrone in such a way that the same isochrone would be preferred from either set of measurements. The measurements of $\epsilon$ Tau also fall nicely along the model predictions.
\begin{figure}
\centering
\includegraphics[width=\hsize]{hyades_ml.eps}
\caption{Mass-$M_V$ diagram comparing the components of HD\,27130 to isochrones from different model sets and different compositions. The red squares are the measurement from this work. The gray squares are the corresponding values from \citet{Torres02} according to \citet{Torres2019B} (we were unable to recover those numbers in \citealt{Torres02}). Blue squares are measurements of the triple HD\,28263 by \citet{Torres2019A}. Black circles are measurements of additional binary stars belonging to the Hyades, measured by different authors and presented by \citet{Torres2019A, Torres2019B}. The green square is the asteroseismic measurements of $\epsilon$ Tau by \citet{Arentoft2019}. Isochrones are a selected subset of those given in the legend of Fig.~\ref{fig:M-R-T}. Those covering the full mass range are both of age 800~Myr. The Victoria models are only calculated for low masses, where the isochrone shape is not significantly affected by age.}
\label{fig:M-L}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\hsize]{hyades_cmd.eps}
\includegraphics[width=\hsize]{hyades_cmd_RC2.eps}
\caption{Upper panel: Gaia colour-magnitude diagram of the Hyades with members from \citet{Lodieu2019} compared to isochrones with the same ages as in Fig. \ref{fig:M-R-T3} and compositions as in Fig. \ref{fig:M-R-T}.
Lower panel: Zoom on the giant part of the same colour-magnitude diagram. The positions of the four well-known Hyades giants are marked. Circles indicate their positions when adopting the Hipparcos parallaxes. Squares mark the corresponding positions if adopting the Gaia parallaxes.}
\label{fig:CMD}
\end{figure}
In the upper panel of Fig. \ref{fig:CMD} we compare the same models as in Fig. \ref{fig:M-R-T3} to the Gaia CMD of selected Gaia members from \citet{Lodieu2019}. The $M_G$ magnitudes have been obtained by adjusting the $G$ magnitudes according to their parallax. The four giants have been marked individually. $\theta_{1}$\,Tau was added, since it was not selected as a member by \citet{Lodieu2019}.
As seen, the upper main sequence and turn-off of these models are quite different, even though they predict a very similar mass for $\epsilon$\,Tau. Unfortunately, however, the upper main sequence close to the terminal-age main sequence (TAMS) is very poorly defined, and there are measurements for at most one subgiant star. Without further investigations, it would not be well justified to demand that isochrones match this one star.
The star marked with a blue diamond is $\theta_{2}$\,Tau, a known binary where two stars contribute to the combined light. Isochrones should therefore not match this point. Further investigations of the stars close to the TAMS would be needed if tight isochrone constraints are to be obtained from this Gaia CMD. However, it does appear that the PARSEC models are too blue as they fail to match the upper main-sequence stars of the Hyades, which is consistent with the isochrones being too hot compared to the HD\,27130 primary in Fig.~\ref{fig:M-R-T3}. The 0.8~Gyr MESA isochrone is somewhat too red before it bends back to the blue at the end of the core H-burning phase.
The lower panel of Fig. \ref{fig:CMD} shows only the four giants in the CMD. Squares mark their CMD positions if using the Gaia DR2 parallaxes, while circles indicate their positions if Hipparcos parallaxes are used. As seen, it makes a very significant difference if one or the other measures of parallaxes are used. On top of that, the $T_{\rm eff}$ differences among the four giants implied by the relative $G_{BP}-G_{RP}$ colours amount to a range of 218~K, while relative spectroscopic studies suggest a much smaller $T_{\rm eff}$ range of only 50~K or less \citep{Gray2019, Schroder2020}. $B-V$ colour differences suggest a range of 130~K among the giants.
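The size of the vertical CMD shift implied by switching between the two parallax scales follows directly from the distance modulus; the sketch below uses the $\epsilon$\,Tau parallaxes from Table~\ref{table:epstauparameters}.

```python
import math

def mag_shift(plx_new_mas, plx_old_mas):
    """Change in absolute magnitude when switching parallax values."""
    return 5.0 * math.log10(plx_new_mas / plx_old_mas)

# epsilon Tau: Hipparcos 22.24 mas vs Gaia DR2 20.31 mas.
# A larger parallax means a smaller distance, hence a numerically
# larger (fainter) absolute magnitude.
shift = mag_shift(22.24, 20.31)
print(f"Delta M_G = {shift:.3f} mag")
```

The Hipparcos parallax thus places $\epsilon$\,Tau about 0.2 mag fainter in $M_G$ than the Gaia DR2 value, comparable to the separation between the circles and squares in the lower panel of Fig.~\ref{fig:CMD}.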
This causes considerable complications when trying to compare the Hyades giants to the Gaia CMD observations. If, however, these four giants all belong to the secondary clump, as evolutionary timescales, their similar spectroscopic $T_{\rm eff}$, and the asteroseismic analysis of mixed modes \citep{Bedding2011,Arentoft2017,Arentoft2019} in $\epsilon$\,Tau all suggest, then the lower panel of Fig.~\ref{fig:CMD} implies that the Hipparcos parallaxes are the correct ones. The Gaia DR2 parallaxes simply position the giants too far from each other in both colour and magnitude for them to be matched by the secondary clump phase of \emph{any} isochrone. We checked that applying the Gaia saturation-effect corrections in appendix B of \citet{Evans2018} did not change this conclusion significantly. Applying these corrections changes the magnitudes of all four giants by close to identical amounts, which is not surprising, since their magnitudes are similar. The $G$ magnitudes change by very close to +0.11~mag for all four stars, moving them closer to the isochrone predictions for the clump phase. The $G_{BP}-G_{RP}$ colours change by less than 0.01 mag for each star, but in such a way that the colour range covered by the stars decreases by 0.016 mag, corresponding to a reduction of the photometric $T_{\rm eff}$ range by 38~K to 180~K, which is still very large. Therefore, if saturation effects are at play, we suspect that random star-to-star saturation variations are significant. We have not applied saturation corrections in Fig.~\ref{fig:CMD}.
\subsection{Comparisons to other cluster age estimates.}
Our investigations above suggest that the age of the Hyades is 0.9 $\pm$ 0.1 (stat) $\pm$ 0.1 (sys)~Gyr.
This age estimate is significantly larger than the $588\pm60$~Myr found by \citet{Schroder2020}. Part of the age difference (60~Myr) is explained by differences in the bolometric corrections as discussed above and in \citet{Schroder2020}. Thus, in light of our investigation of the bolometric corrections for the Hyades giants, the best age estimate from \citet{Schroder2020} is instead $648\pm60$~Myr. This is still younger than our age estimate.
The age of the Hyades derived in this work is also larger than the white dwarf (WD) cooling age of $640^{+67}_{-49}$~Myr found by \citet{El-Badry2018}. However, we believe that their younger age could be partly due to their assumption of solar composition for the calculation of the pre-WD evolution time, and perhaps also the assumption of solar composition in the derivation of the initial-final mass relation (IFMR). \citet{Salaris2018} derive the IFMR for the Hyades by {\it assuming} an age of 800 Myr, closer to the age that we found. This IFMR is not very different from the one by \citet{El-Badry2018} for the lower-mass WDs, but deviates for the higher-mass WDs. This indicates that perhaps the slight age discrepancy arises due to inaccuracies in the model treatment of convective core overshooting, which affects the pre-WD age, and thus the initial masses.
\section{Summary, conclusions and outlook}
\label{conclusions}
We used new observations to establish the physical properties of the components of the eclipsing Hyades member HD\,27130 as given in Table \ref{table:EBparameters}. The properties of the primary component were used to constrain the helium content, which we found to be $Y=0.27$, corresponding to a helium enrichment law close to $\frac{\Delta Y}{\Delta Z}=1.2$.
The properties of $\epsilon$\,Tau were re-analysed, finding that the mass and radius established by \citet{Arentoft2019} seem robust. A higher mass is only likely if the Hipparcos parallax of $\epsilon$\,Tau is too large by $2.56 \sigma$.
We estimated the age of the Hyades to be $0.9\pm0.1$~(stat)~$\pm0.1$~(sys)~Gyr, in slight tension with recent age estimates based on the cluster white dwarfs and on the Hyades giants without asteroseismic constraints.
The age precision can be much improved through asteroseismology of the other three Hyades giants in a similar fashion as was done for $\epsilon$\,Tau by \citet{Arentoft2019}. That would reduce the random error on the age and help assess the accuracy of the Hipparcos parallaxes of the Hyades giants. Potential future improvements to the Gaia parallaxes of bright stars would also help clarify the situation.
Only then can the age of the Hyades be accurately established and used to investigate convective core overshoot through its effects on the turn-off morphology, the mass of the helium-burning giants, and the WD cooling sequence. We have taken the first steps down this path by providing constraints on the helium content from HD\,27130 and investigating the constraints on the mass of the giant $\epsilon$\,Tau.
\begin{acknowledgements}
We gratefully acknowledge the grant from the European Social Fund via the Lithuanian Science Council (LMTLT) grant No. 09.3.3-LMT-K-712-01-0103.\\
Observations were partially made with the Hertzsprung SONG telescope operated on the Spanish Observatorio del Teide on the island of Tenerife by the Aarhus and Copenhagen Universities and by the Instituto de Astrofísica de Canarias.\\
This work has made use of data from the European Space Agency (ESA) mission
{\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia}
Data Processing and Analysis Consortium (DPAC,\url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions
participating in the {\it Gaia} Multilateral Agreement.\\
This research has made use of the SIMBAD database,
operated at CDS, Strasbourg, France\\
Funding for the Stellar Astrophysics Centre is provided by The Danish National Research Foundation (Grant agreement no.: DNRF106).\\
AM acknowledges support from the ERC Consolidator Grant funding scheme (project ASTEROCHRONOMETRY, \url{https://www.asterochronometry.eu}, G.A. n. 772293).
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
The production of $W$-boson pairs at electron-positron colliders
is a process of crucial relevance for a precise determination
of the $W$ mass. If the International Linear Collider measures
the total cross section at the per-mille level~\cite{AguilarSaavedra:2001rg},
a direct reconstruction of the $W$-decay products will make it
possible to reach a $10$ MeV accuracy on the determination of the $W$
mass~\cite{monig}. A higher precision could be achieved through a
dedicated threshold scan leading to a $6$ MeV accuracy~\cite{wilson}.
The aforementioned estimates rely on statistics and the performance
of the future collider, and assume that the cross section for $W$-pair
production is theoretically under control. In particular, in view of the
$6$ MeV precision goal, accurate predictions are needed for a final
state containing the fermion pairs produced by $W$ decay, instead of
on-shell $W$ bosons.
A full next-to-leading order (NLO) evaluation of four-fermion
production in the complex-mass scheme has been performed by
the authors of~\cite{Denner:2005es}, extending the methods introduced
in ~\cite{Denner:1999gp}. Recently, a compact analytic result for the
threshold region has been derived in ~\cite{Beneke:2007zg}
(see~\cite{Schwinn:2007mv} for reviews) using the method of
unstable-particle effective field theory~\cite{Beneke:2003xh}.
The work of~\cite{Beneke:2007zg} has concluded that collinear
logarithms arising from initial-state radiation have to be re-summed
at next-to-leading accuracy in order to reduce the threshold-scan error
on the $W$ mass to less than $30$ MeV. Furthermore, it has been
shown that the NLO partonic evaluation in the effective-theory
framework is affected by a residual error of $10-15$ MeV. Although
a large part of the uncertainty at the partonic level can be removed
using the full NLO result of ~\cite{Denner:2005es}, the evaluation
of the dominant next-to-next-to-leading order (NNLO) corrections
is mandatory to secure the $6$ MeV threshold-scan accuracy goal.
In~\cite{Actis:2008rb} we have evaluated the parametrically
dominant NNLO corrections to the total cross section for the
production process $e^- e^+ \to \mu^- \overline{\nu}_\mu u
\overline{d} + X$, where $X$ is an arbitrary flavor-singlet state.
The result is expressed through a compact semi-analytic formula
that can be easily added on top of both
effective-theory~\cite{Beneke:2007zg} and full NLO~\cite{Denner:2005es}
predictions.
In Section~\ref{sec:NNLO} of this note we show an overview
of the NNLO corrections. Next, in Section~\ref{sec:num}, we
discuss their numerical impact.
\section{Overview of the dominant NNLO corrections}
\label{sec:NNLO}
The inclusive cross section for the process $e^- e^+ \to
\mu^- \overline{\nu}_\mu u \overline{d} + X$ is computed
in the context of the effective theory~\cite{Beneke:2003xh}
by means of a non-standard perturbative expansion in three
small parameters of the same order $\delta$:
1) $\alpha_{ew} \equiv \alpha \slash \sin^2 \theta_w $, where
$\alpha$ is the fine-structure constant and $\theta_w$ stands
for the weak-mixing angle;
2) $(s - 4 M_W^2) \slash (4 M_W^2) \sim v^2$, where $s \equiv
(p_{e^-} + p_{e^+})^2$, $M_W$ is the $W$ mass and $v$ is the
non-relativistic velocity of the $W$;
3) $\Gamma_W \slash M_W$, with $\Gamma_W$ denoting the $W$ decay
width.
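To make the counting concrete, the sketch below evaluates the three parameters for representative inputs; the numerical values $M_W\approx80.4$ GeV, $\Gamma_W\approx2.09$ GeV, $\alpha(M_W)\approx1/128$, and $\sin^2\theta_w\approx0.23$ are our assumptions for illustration, not values taken from the references.

```python
# Numerical size of the three expansion parameters near the W-pair
# threshold.  All input values below are assumed, order-of-magnitude only.
M_W, GAMMA_W = 80.4, 2.09           # GeV (assumed)
ALPHA, SIN2_TW = 1.0 / 128.0, 0.23  # assumed running alpha and weak mixing

alpha_ew = ALPHA / SIN2_TW   # parameter 1: alpha / sin^2(theta_w) ~ delta
gamma_ratio = GAMMA_W / M_W  # parameter 3: Gamma_W / M_W ~ delta

def v_squared(sqrt_s):
    """Parameter 2: (s - 4 M_W^2) / (4 M_W^2) ~ v^2 ~ delta."""
    s = sqrt_s ** 2
    return (s - 4.0 * M_W ** 2) / (4.0 * M_W ** 2)

print(f"alpha_ew = {alpha_ew:.3f}, Gamma_W/M_W = {gamma_ratio:.3f}, "
      f"v^2(164 GeV) = {v_squared(164.0):.3f}")
```

All three quantities come out in the few-percent range a little above threshold, which is what justifies treating them as a single expansion parameter $\delta$.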
The re-organized loop and kinematical expansion is performed
through the method of regions~\cite{Beneke:1997zp} and relies
on the identification of different momentum scalings in the
center-of-mass frame in order to exploit the hierarchy of scales
around threshold. Denoting by $k$ an arbitrary loop-integration
momentum, we deal with hard ($k_0\sim|\vec{k}|\sim M_W$), potential
($k_0\sim M_W \delta, |\vec{k}| \sim M_W \sqrt{\delta}$), soft
($k_0\sim |\vec{k}| \sim M_W \delta$), collinear ($k_0 \sim M_W,
k^2 \sim M_W^2 \delta$) and semi-soft ($k_0\sim |\vec{k}|\sim M_W
\sqrt{\delta}$) momentum scalings. Semi-soft modes are not relevant
for the NLO evaluation~\cite{Beneke:2007zg}, and start playing a
role for the NNLO calculation~\cite{Actis:2008rb}.
After integrating hard modes out, the residual dynamical degrees
of freedom contribute to genuine loop computations in the context
of the effective theory. The different scaling properties lead to a
peculiar half-integer power counting in the expansion parameter
$\delta$ and to a straightforward identification of the parametrically
dominant radiative corrections.
\begin{wrapfigure}{r}{0.7\columnwidth}
\centerline{\includegraphics[width=0.65\columnwidth]{figure1.eps}}
\caption{sample LO and NLO diagrams in the effective theory
(first line) and in the full Standard Model (second line).}
\label{Fig:Cuts2}
\end{wrapfigure}
The total cross section for four-fermion production is computed
from the cuts of the $e^- e^+$ forward-scattering amplitude, as
shown in Figure~\ref{Fig:Cuts2}a for the leading order (LO)
diagram (see Figure~\ref{Fig:Cuts2}d for the Standard Model
counterpart). Here the LO operator ${\cal O}_p^{(0)}$
(${\cal O}_p^{(0)\dagger}$) accounts for the production (destruction)
of a pair of non-relativistic $W$ bosons, denoted by $\Omega$. In
Figure~\ref{Fig:Cuts2}b and Figure~\ref{Fig:Cuts2}c we show also
the NLO Coulomb- and soft-photon corrections evaluated
in~\cite{Beneke:2007zg}. The conventional Standard Model (SM)
loop expansion of Figure~\ref{Fig:Cuts2}e and Figure~\ref{Fig:Cuts2}f
treats virtual Coulomb effects ($\gamma_c$) and soft real-photon
contributions ($\gamma_s$) as genuine NLO terms. In the framework
of the effective theory, instead, a simple power-counting argument
shows that Coulomb corrections at the $W$-pair threshold are
suppressed by a factor $\delta^{1\slash 2}$ with respect to the
LO result, and can be classified as dominant NLO effects, whereas
soft-photon diagrams, being weighted by one power of $\delta$,
lead to sub-dominant NLO effects.
Relying on analogous observations, we have analyzed in~\cite{Actis:2008rb}
the set of SM diagrams which are suppressed in the effective-theory
framework by a factor $\delta^{3\slash 2}$ rather than $\delta^2$ with
respect to the LO cross section, and can be classified as parametrically
dominant NNLO corrections. They can be conveniently organized
in three sub-sets: 1) mixed hard-Coulomb corrections; 2) interference
effects of Coulomb and soft (collinear) photons; 3) radiative corrections
to the Coulomb potential.
\begin{wrapfigure}{l}{0.7\columnwidth}
\centerline{\includegraphics[width=0.63\columnwidth]{figure2.eps}}
\caption{mixed hard-Coulomb corrections in the effective theory
(first line) and in the Standard Model (second line).}
\label{fig:NLOcount}
\end{wrapfigure}
Mixed hard-Coulomb corrections, given by diagrams with a Coulomb
photon and one insertion of a hard NLO correction, are illustrated
in Figure~\ref{fig:NLOcount}. Here a hard correction has been
inserted at the: {\rm a}) production stage, replacing the LO
production operator ${\cal O}_p^{(0)}$ with the NLO expression
${\cal O}_p^{(1)}$ as in Figure~\ref{fig:NLOcount}a; {\rm b}) decay stage,
as graphically shown in Figure~\ref{fig:NLOcount}b by the insertion of
the black dot labeled $\delta_{\text{decay}}$, summarizing flavor-specific
contributions to $W$ decay; {\rm c}) propagation stage, as illustrated by
the $\delta_{\text{residue}}$ insertion in Figure~\ref{fig:NLOcount}c.
The last contribution is inherent
to the inclusion of wave-function renormalization factors in the
effective-theory matching coefficients. SM counterparts for all three
cases are shown in Figure~\ref{fig:NLOcount}d, Figure~\ref{fig:NLOcount}e
and Figure~\ref{fig:NLOcount}f.
\begin{wrapfigure}{r}{0.7\columnwidth}
\centerline{\includegraphics[width=0.63\columnwidth]{figure3.eps}}
\caption{diagrams involving Coulomb, soft and collinear
photons and corrections to the Coulomb potential in the effective
theory (first line) and in the Standard Model (second line).}
\label{fig:NLOcount2}
\end{wrapfigure}
Interference effects of Coulomb and soft ($\gamma_s$) or
collinear ($\gamma_{coll.}$) photons are shown in
Figure~\ref{fig:NLOcount2}a and Figure~\ref{fig:NLOcount2}b
with their SM counterparts in Figure~\ref{fig:NLOcount2}d
and Figure~\ref{fig:NLOcount2}e. As discussed
in~\cite{Actis:2008rb}, they are naturally merged with
the mixed hard-Coulomb corrections at the production stage
of Figure~\ref{fig:NLOcount}a.
Radiative corrections to the Coulomb potential due to
the insertion of a semi-soft fermion bubble ($f_{ss}$)
are shown in Figure~\ref{fig:NLOcount2}c and
Figure~\ref{fig:NLOcount2}f.
\section{Results}
\label{sec:num}
The NNLO total cross section follows from the convolution of the
corrections shown in Figure~\ref{fig:NLOcount}
and Figure~\ref{fig:NLOcount2} with the electron structure functions
provided in ~\cite{Skrzypek:1992vk}, in order to re-sum collinear
logarithms from initial-state radiation.
\begin{wraptable}{l}{0.44\columnwidth}\centerline{
\begin{tabular}{|c|c|c|}
\hline
$\sqrt{s}$\,[GeV] & $\sigma_{\rm NLO}$ [fb] & $\Delta\sigma_{\rm NNLO}$ [fb] \\
\hline
161 &117.81(5)& 0.087 \\\hline
164 &234.9(1) & 0.544 \\\hline
167 &328.2(1) & 0.936 \\\hline
170 &398.0(2) & 1.207 \\\hline
\end{tabular}}
\caption{NLO total cross section for $e^- e^+ \to \mu^-
\overline{\nu}_\mu u \overline{d} + X$ and NNLO shift.}
\label{tab:limits}
\end{wraptable}
Results for the NLO evaluation of~\cite{Beneke:2007zg} and
the NNLO shifts of~\cite{Actis:2008rb},
ranging from $0.07 \%$ for $\sqrt{s}= 161$ GeV to
$0.3 \%$ for $\sqrt{s}= 170$ GeV,
are summarized in Table~\ref{tab:limits}.
Using the procedure of~\cite{Beneke:2007zg},
we have found that the impact of the dominant NNLO corrections on the $W$-mass
determination is about $3$ MeV. The result is well below the
$6$ MeV error in the measurement from an energy scan
in electron-positron collisions.
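As a quick cross-check, the relative size of the NNLO shifts can be recomputed directly from the central values quoted in Table~\ref{tab:limits} (a minimal sketch; the inputs are the tabulated numbers, not a recalculation from theory):

```python
# Central values from Table 1:
# sqrt(s) [GeV] -> (sigma_NLO [fb], Delta_sigma_NNLO [fb]).
table = {161: (117.81, 0.087), 164: (234.9, 0.544),
         167: (328.2, 0.936), 170: (398.0, 1.207)}

# NNLO shift relative to the NLO cross section, in percent.
rel = {s: 100 * delta / sigma for s, (sigma, delta) in table.items()}

for s in sorted(rel):
    print(f"sqrt(s) = {s} GeV: NNLO shift = {rel[s]:.2f} %")
```

The shift grows monotonically with the centre-of-mass energy, from about $0.07\%$ at $161$ GeV to about $0.30\%$ at $170$ GeV, in agreement with the range quoted in the text.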
We conclude by observing that, although a differential calculation in
the effective theory is not currently feasible (see developments for top-antitop
production in~\cite{Hoang:2008ud}), the analysis of~\cite{Actis:2008rb}
has shown that the inclusive NNLO result is adequate
for practical applications.
\section{Acknowledgments}
M. Beneke, P. Falgari and C. Schwinn are gratefully acknowledged for their collaboration.
Diagrams have been drawn with {\sc Axodraw}/{\sc Jaxodraw}~\cite{Vermaseren:1994je}.
\section{Linguistic Introduction}
The Lambek calculus~\citep{Lambek58} was introduced for mathematical modelling of natural language syntax via {\em categorial grammars.}
The concept of categorial grammar goes back to ideas of~\citet{Ajdukiewicz} and~\citet{BarHillel}.
The framework of categorial grammars aims to describe natural language by means of logical derivability
(see \citet{Buszkowski2003,Carpenter,MorrillBook,MootRetore} {\em etc.}).
From the modern logical point of view, the calculus of Lambek grammars (the Lambek calculus) is a variant of Girard's linear logic~\citep{Girard} in its
non-commutative intuitionistic version~\citep{Abrusci}.
Nowadays Lambek-style categorial grammars form one framework in a family of closely related formalisms, including combinatory categorial grammars~\citep{Steedman},
categorial dependency grammars~\citep{DikovskyDekhtyar}, and others.
A categorial grammar assigns logical formulae to lexemes (words) of the language. These formulae are syntactic categories, or {\em types,} of these words.
In Lambek grammars, types are constructed using three binary connectives, namely two divisions, $\mathop{\backslash}$ and $\mathop{/}$, and the product, $\cdot$.
Following the usual way of introducing categorial grammars, we start with the standard example: {\sl ``John loves Mary.''} Here {\sl ``John''} and {\sl ``Mary''} receive syntactic type $N$ (noun); {\sl ``loves,''} as a transitive
verb, is of type $(N \mathop{\backslash} S) \mathop{/} N$. Here $S$ is the syntactic category of grammatically valid sentences. Thus, a transitive verb is handled as something
that needs a noun phrase on the left and a noun phrase on the right to become a complete sentence. In the Lambek calculus, $A, A \mathop{\backslash} B$ yields $B$, and so does
$B \mathop{/} A, A$ (the complete formulation of the Lambek calculus is presented in Section~\ref{S:MALC}). Thus, $N, (N \mathop{\backslash} S) \mathop{/} N, N \to S$ is a theorem
of the Lambek calculus, which validates {\sl ``John loves Mary''} as a correct sentence.
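For concreteness, here is a cut-free derivation of this sequent, written with the rules presented in Section~\ref{S:MALC}:
$$
\infer[\mathop{/} L]{N, (N \mathop{\backslash} S) \mathop{/} N, N \to S}
{N \to N & \infer[\mathop{\backslash} L]{N, N \mathop{\backslash} S \to S}{N \to N & S \to S}}
$$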
Lambek grammars are also capable of handling more sophisticated syntactic constructions, in particular, coordination ({\sl ``and,''} {\sl ``or''}) and
some cases of dependent clauses. These cases include examples like {\sl ``the girl whom John loves''} (parsed as $N$). Here the most interesting syntactic type is
the one for {\sl ``whom'':} $(CN \mathop{\backslash} CN) \mathop{/} (S \mathop{/} N)$. The type $CN$ stands for ``common noun,'' {\em i.e.,} a noun without article. {\sl ``Whom''} takes,
as its right argument, an incomplete sentence {\sl ``John loves,''} which lacks a noun phrase on the right to become a complete sentence
(like {\sl ``John loves Mary''}) and is therefore of type $S \mathop{/} N$. The complete analysis of {\sl ``the girl whom John loves''} corresponds to the following
theorem of the Lambek calculus: $$N \mathop{/} CN, CN, (CN \mathop{\backslash} CN) \mathop{/} (S \mathop{/} N), N, (N \mathop{\backslash} S) \mathop{/} N \to N.$$
Coordination between two sentences ({\sl ``John loves Mary and Pete loves Ann''}) is handled by assigning $(S \mathop{\backslash} S) \mathop{/} S$ to {\sl ``and.''}
There are, however, serious limitations of the expressive power of Lambek grammars. Namely, the famous result of~\citet{PentusCF} states that
any language described by a Lambek grammar is necessarily context-free.
On the other hand, context-freeness of real natural language syntax had been a disputed question in the linguistic community, see~\citet{PullumGazdar}. Finally,~\citet{Shieber} demonstrated a non-context-free construction in Swiss German. Though examples like Shieber's may seem exotic, constructing context-free grammars for sophisticated natural phenomena, even if such grammars exist, is practically quite hard.
This discrepancy motivates extending and modifying the Lambek calculus in order to obtain more
powerful categorial grammar formalisms.
In this paper we consider some of these extensions. In the analysis of linguistic examples, we generally follow~\citet{MorrillBook} and
later papers by Morrill and his co-authors.
The first extension handles the syntactic phenomenon called {\em medial extraction} by means of a subexponential modality allowing permutation.
To make it clear what medial extraction is, recall the {\sl ``the girl whom John loves''} example. In this example, the dependent clause
{\sl ``John loves''} is a sentence which lacks a noun phrase. Let us call the place where this noun phrase is omitted a {\em gap} and denote it by
$[]$. A sentence with a gap in the end ({\sl ``John loves $[]$,''} cf. {\sl ``John loves Mary''}) is of type $S \mathop{/} N$. Symmetrically, a gap in the
beginning yields type $N \mathop{\backslash} S$, like for {\sl ``$[]$ loves Mary''} in {\sl ``the boy who loves Mary.''} Here {\sl ``who''} receives type
$(CN \mathop{\backslash} CN) \mathop{/} (N \mathop{\backslash} S)$. Unfortunately, this does not cover dependent clauses in which the gap is located in the middle of the sentence, {\em i.e.,}
examples like {\sl ``the girl whom John met $[]$ yesterday.''} This dependent clause is neither of type $S \mathop{/} N$, nor of type $N \mathop{\backslash} S$.
Medial extraction can be handled by adding a subexponential modality (cf.~\citet{KanKuzNigSce2018Dale}), denoted by ${!}$, which allows permutation.
In general, the Lambek calculus is non-commutative, thus, the order of the words in a sentence matters. For formulae of the form ${!}A$, however,
permutation is allowed, and they can be freely moved. Now the gap gets type ${!}N$ and can be relocated to an arbitrary place of the dependent clause;
the clause as a whole receives type $S \mathop{/} {!}N$.
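For illustration, assuming the standard adverb type $(N \mathop{\backslash} S) \mathop{\backslash} (N \mathop{\backslash} S)$ for {\sl ``yesterday,''} the dependent clause {\sl ``John met $[]$ yesterday''} is parsed as follows, using the rules for ${!}$ given in Section~\ref{S:MALC}:
$$
\infer[\mathop{/} R]{N, (N \mathop{\backslash} S) \mathop{/} N, (N \mathop{\backslash} S) \mathop{\backslash} (N \mathop{\backslash} S) \to S \mathop{/} {!}N}
{\infer[{!}P_1]{N, (N \mathop{\backslash} S) \mathop{/} N, (N \mathop{\backslash} S) \mathop{\backslash} (N \mathop{\backslash} S), {!}N \to S}
{\infer[{!}L]{N, (N \mathop{\backslash} S) \mathop{/} N, {!}N, (N \mathop{\backslash} S) \mathop{\backslash} (N \mathop{\backslash} S) \to S}
{\infer[\mathop{/} L]{N, (N \mathop{\backslash} S) \mathop{/} N, N, (N \mathop{\backslash} S) \mathop{\backslash} (N \mathop{\backslash} S) \to S}
{N \to N & \infer[\mathop{\backslash} L]{N, N \mathop{\backslash} S, (N \mathop{\backslash} S) \mathop{\backslash} (N \mathop{\backslash} S) \to S}
{N \mathop{\backslash} S \to N \mathop{\backslash} S & \infer[\mathop{\backslash} L]{N, N \mathop{\backslash} S \to S}{N \to N & S \to S}}}}}}
$$
Here the hypothesis ${!}N$, introduced on the right end by $\mathop{/} R$, is moved by permutation into the medial gap position.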
Another issue connected to dependent clauses is overgeneration ({\em i.e.,} wrong judgement of incorrect syntactic structures as valid ones), which arises
when dependent clauses and {\sl ``and''}-coordination appear together. An example is $^*${\sl ``the girl whom John loves Mary and Pete loves.''} This is
not a correct noun phrase (which is denoted by the asterisk put before it). Unfortunately, {\sl ``John loves Mary and Pete loves $[]$''} is still
of type $S \mathop{/} N$ (cf. {\sl ``John loves Mary and Pete loves Ann''} being of type $S$), which incorrectly validates our example as a noun phrase.
Another example is $^*${\sl ``the paper that John saw the person who wrote''} (again, we have {\sl ``John saw the person who wrote $[]$''} is of type
$S \mathop{/} N$).
These wrong derivations can be cut off using the mechanism of {\em brackets}~\citep{Morrill1992,MoortgatMultimodal}, which introduces controlled
non-associa\-ti\-vi\-ty. Brackets are instantiated by special bracket modalities (see Section~\ref{S:calculi} for details) and embrace certain parts of the sentence
into {\em islands.} Islands typically include {\sl and}-coordinated sentences, {\sl that}-clauses, gerund clauses, {\em etc.} Brackets (borders of islands)
cannot be penetrated by the permutation rules for ${!}N$. Thus, the dependent clause with brackets inside is no longer of type $S \mathop{/} {!}N$, and the whole wrong derivation
gets invalidated.
Finally, we consider
a more rare syntactic phenomenon called {\em parasitic extraction,} a typical example of which is given by the following noun phrase:
{\sl ``the paper that John signed without reading.''} In this example we have {\em two} gaps:
{\sl ``John signed $[]$ without reading $[]$,''} and both gaps should be filled with {\em the same} object of type $N$:
{\sl ``John signed \underline{the paper} without reading \underline{the paper}.''}
Of course, one can think of examples with three and more gaps, like {\sl ``the paper that the author of $[]$ signed $[]$ without reading $[]$,''} and so on.
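Schematically, in a calculus where ${!}$ admits permutation and contraction (such as the one presented in Section~\ref{S:MALC}, with ${!}NC_1$ being its non-local contraction rule), both gap positions are served by a single ${!}N$ hypothesis, which is duplicated by contraction and moved into place by permutation:
$$
\infer[\mathop{/} R]{\Gamma_1, \Gamma_2, \Gamma_3 \to S \mathop{/} {!}N}
{\infer[{!}NC_1]{\Gamma_1, \Gamma_2, \Gamma_3, {!}N \to S}
{\infer[{!}P_1]{\Gamma_1, {!}N, \Gamma_2, \Gamma_3, {!}N \to S}
{\Gamma_1, {!}N, \Gamma_2, {!}N, \Gamma_3 \to S}}}
$$
Here the topmost sequent represents the clause with both gaps filled by ${!}N$.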
In a series of papers~\citep{Morrill2014,MorrillValentin,MorrillLACompLing,MorrillPhilosophy,Morrill2018JLM,Morrill2019}, Morrill, with his co-author Valent\'{\i}n, uses several different
calculi for handling parasitic extraction. All these approaches, however, use a subexponential modality obeying the {\em contraction rule,}
which makes proof search problematic and
often yields algorithmic undecidability. Generally, contraction is a rule of the form
$$
\infer{\ldots, {!}A, \ldots \to C}{\ldots, {!}A, \ldots, {!}A, \ldots \to C}
$$
Morrill and his co-authors, however, suggest more sophisticated versions of contraction, which involve brackets. The general idea of their approaches
is as follows: in the situation of parasitic extraction, only one gap lies plainly in the dependent clause; other gaps, which are called
parasitic, reside in bracketed subislands of the clause. Moreover, they can get nested. Thus, the contraction rule becomes highly non-standard.
Morrill's systems differ one from another in the rules for ${!}$.
In this article, we give a logical analysis for two of these systems. One is presented in~\citet{MorrillValentin,MorrillLACompLing} and is closely related to the
one of~\citet{MorrillPhilosophy}. The other one is from~\citet{Morrill2019,Morrill2018JLM}. For both systems, we discuss issues connected to cut elimination, and then
prove cut elimination for modified versions of these systems. Next, we provide a generic method of encoding semi-Thue systems in extensions of the Lambek calculus with
subexponential modalities, and use this method to prove undecidability of the derivability problems for Morrill's systems. We also show that categorial grammars
based on these systems generate all recursively enumerable languages. Finally, using methods of~\citet{BuszkoZML}, we strengthen these algorithmic results by
restricting ourselves to smallest reasonable fragments, which includes only one division, subexponential, brackets, and bracket modalities.
This journal article extends our conference papers in the 21st International Symposium on Fundamentals of Computation Theory, FCT 2017, held
in Bordeaux in September 2017~\citep{KanKuzSceFCT},
and in the 24th Conference on Formal Grammar, FG 2019, held in Riga in August 2019~\citep{KanKuzSceFG19}. However, here we provide a significant refinement of the results presented in the FCT~'17 and FG~'19 papers. First, here we consider the system
with additive connectives. This makes cut elimination results stronger.
Second, besides undecidability, we also show that categorial grammars based on each of the calculi in question generate
{\em all} recursively enumerable languages, not just one $\Sigma_1^0$-hard one (Section~\ref{S:grammar}). Third, using a variant of Buszkowski's
translation~\citep{BuszkoZML}, we establish undecidability even for the one-division fragments of the calculi in question (Section~\ref{S:Buszko}).
In comparison with a series of our papers on the Lambek calculus and non-commutative linear logic with
subexponential modalities~\citep{KanKuzSceLFCS,KanKuzSceJLC,KanKuzSceFG,KanKuzNigSce2018Dale,KanKuzNigSce2018IJCAR},
the principal difference of this paper
is the presence of brackets and bracket modalities. Contraction rules used by Morrill in the bracketed calculi
essentially interact with brackets and become dysfunctional in the bracket-free fragment.
Undecidability results, in turn,
rely on contraction. Thus, they should be proved for calculi with brackets independently from the bracket-free case.
On the other hand, two papers on the bracketed Lambek calculus~\citep{KanKuzMorSceFSCD,MorKuzKanSce2018FG}
do not deal with the subexponential modality ($!$), and feature effective
algorithms instead of undecidability results.
\section{The Multiplicative-Additive Lambek Calculus with Exponential/Relevant Modality}\label{S:MALC}
We start with more traditional calculi without brackets and bracket modalities, namely, the multiplicative-additive Lambek calculus
extended with a (sub)expo\-nen\-tial modality.
Formulae of the calculi we are going to define in this section are constructed from a countable set $\mathrm{Var}$ of variables and
the unit constant $\mathbf{1}$ using five binary connectives: $\cdot$ (product, or multiplicative conjunction), $\mathop{\backslash}$ (left division), $\mathop{/}$ (right division),
$\wedge$ (additive conjunction), and $\vee$ (additive disjunction), and one unary connective, ${!}$ (exponential).
Sequents are expressions of the form $\Pi \to A$, where $A$ is a formula, and $\Pi$ is a finite linearly ordered sequence of formulae.
Notice that these calculi are in general non-commutative, $\Pi$ is a sequence, not a set or multiset.
The first calculus we consider is $\boldsymbol{!}^{\mathbf{r}} \mathbf{MALC}^{\boldsymbol{*}}$,
the multiplicative-additive Lambek calculus extended
with a relevant subexponential modality ($\mathbf{r}$ stands for ``relevant,'' see below). The axioms of $\boldsymbol{!}^{\mathbf{r}} \mathbf{MALC}^{\boldsymbol{*}}$ are sequents of the
form $A \to A$ and $\Lambda \to \mathbf{1}$, and the rules of inference are as follows:
$$
\infer[\mathop{\backslash} R]{\Pi \to A\mathop{\backslash} B}{A, \Pi \to B}
\qquad
\infer[\mathop{\backslash} L]{\Delta_1, \Pi, A \mathop{\backslash} B, \Delta_2 \to C}{\Pi \to A & \Delta_1, B, \Delta_2 \to C}
$$
$$
\infer[\mathop{/} R]{\Pi \to B \mathop{/} A}{\Pi, A \to B}
\qquad
\infer[\mathop{/} L]{\Delta_1, B \mathop{/} A, \Pi, \Delta_2 \to C}{\Pi \to A & \Delta_1, B, \Delta_2 \to C}
$$
$$
\infer[\cdot R]{\Gamma, \Delta \to A \cdot B}{\Gamma \to A & \Delta \to B}
\qquad
\infer[\cdot L]{\Delta_1, A \cdot B, \Delta_2 \to C}{\Delta_1, A, B, \Delta_2 \to C}
\qquad
\infer[\mathbf{1} L]{\Delta_1, \mathbf{1}, \Delta_2 \to C}{\Delta_1, \Delta_2 \to C}
$$
$$
\infer[\wedge R]{\Pi \to A_1 \wedge A_2}{\Pi \to A_1 & \Pi \to A_2}
\qquad
\infer[\wedge L_i\mbox{, $i = 1,2$}]{\Delta_1, A_1 \wedge A_2, \Delta_2 \to C}{\Delta_1, A_i, \Delta_2 \to C}
$$
$$
\infer[\vee R_i\mbox{, $i = 1,2$}]{\Pi \to A_1 \vee A_2}{\Pi \to A_i}
\qquad
\infer[\vee L]{\Delta_1, A_1 \vee A_2, \Delta_2 \to C}{\Delta_1, A_1, \Delta_2 \to C & \Delta_1, A_2, \Delta_2 \to C}
$$
$$
\infer[{!} R]{{!}A_1, \dots, {!}A_n \to {!}B}{{!}A_1, \dots, {!}A_n \to B}
\qquad
\infer[{!} L]{\Delta_1, {!}A, \Delta_2 \to C}{\Delta_1, A, \Delta_2 \to C}
$$
$$
\infer[{!}P_1]{\Delta_1, \Phi, {!}A, \Delta_2 \to C}{\Delta_1, {!}A, \Phi, \Delta_2 \to C}\qquad
\infer[{!}P_2]{\Delta_1, {!}A, \Phi, \Delta_2 \to C}{\Delta_1, \Phi, {!}A, \Delta_2 \to C}
$$ $$
\infer[{!}C]{\Delta_1, {!}A, \Delta_2 \to C}{\Delta_1, {!}A, {!}A, \Delta_2 \to C}
$$
$$
\infer[\mathrm{cut}]{\Delta_1, \Pi, \Delta_2 \to C}{\Pi \to A & \Delta_1, A, \Delta_2 \to C}
$$
Notice that, from the proof-theoretic point of view, it is better to use, instead of ${!}C$, the following non-local contraction rules~\citep{KanKuzNigSce2018Dale}:
$$
\infer[{!}NC_1]{\Delta_1, \Phi, {!}A, \Delta_2 \to C}{\Delta_1, {!}A, \Phi, {!}A, \Delta_2 \to C}\qquad
\infer[{!}NC_2]{\Delta_1, {!}A, \Phi, \Delta_2 \to C}{\Delta_1, {!}A, \Phi, {!}A, \Delta_2 \to C}
$$
In the presence of ${!}P_{1,2}$, however, ${!}C$ has the same power as ${!}NC_{1,2}$.
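Indeed, ${!}NC_1$, for instance, is simulated by first moving the left-hand copy of ${!}A$ with ${!}P_1$ and then contracting the two adjacent copies:
$$
\infer[{!}C]{\Delta_1, \Phi, {!}A, \Delta_2 \to C}
{\infer[{!}P_1]{\Delta_1, \Phi, {!}A, {!}A, \Delta_2 \to C}
{\Delta_1, {!}A, \Phi, {!}A, \Delta_2 \to C}}
$$
Conversely, ${!}C$ is the particular case of ${!}NC_{1,2}$ with empty $\Phi$.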
The ${!}$ modality here is called ``relevant,'' since it allows contraction and permutation, but not
weakening, like in relevant logic.
The second system without brackets is $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$, the multiplicative-addi\-ti\-ve Lambek calculus extended with a full-power exponential modality.
It is obtained from $\boldsymbol{!}^{\mathbf{r}} \mathbf{MALC}^{\boldsymbol{*}}$ by adding the lacking structural rule for ${!}$, namely weakening:
$$
\infer[{!}W]{\Gamma, {!}A, \Delta \to C}{\Gamma, \Delta \to C}
$$
Both $\boldsymbol{!}^{\mathbf{r}} \mathbf{MALC}^{\boldsymbol{*}}$ and $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$ are particular cases of $\mathbf{SMALC}_\Sigma$, the multi\-plicative-additive Lambek calculus extended with an arbitrary
family of subexponentials $\Sigma$, considered by~\cite{KanKuzNigSce2018Dale}. In that paper it is shown that these calculi
enjoy cut elimination and that the derivability problems for these calculi are undecidable.
Cut elimination yields the subformula property (each formula occurring in the cut-free derivation is a subformula of the
goal sequent) and thus conservativity of elementary fragments. Namely, if one wants to derive only sequents that include
formulae with a restricted set of connectives, it is sufficient just to restrict the set of rules of the calculus to this set
of connectives.
For convenience, we use a shorter notation, $\boldsymbol{!}^{\mathbf{r}} \mathbf{L}^{\boldsymbol{*}}$ and $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$, for the fragments without additive connectives
($\vee$ and $\wedge$) of $\boldsymbol{!}^{\mathbf{r}} \mathbf{MALC}^{\boldsymbol{*}}$ and $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$ respectively.
Let us formally define the notion of categorial grammar based on a non-commutative intuitionistic-style sequent calculus $\mathcal{L}$ without brackets, like the systems
$\boldsymbol{!}^{\mathbf{r}} \mathbf{MALC}^{\boldsymbol{*}}$ and $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$ defined above.
\begin{definition}
An $\mathcal{L}$-grammar is a triple $\mathcal{G} = \langle \Sigma, \rhd, H \rangle$, where $\Sigma$ is a finite alphabet,
$H$ is a formula, and $\rhd$ is a finite binary correspondence between letters of $\Sigma$ and formulae (called {\em lexicon}).
A word $w = a_1 \ldots a_n$ over $\Sigma$ is accepted by $\mathcal{G}$ if there exist formulae $A_1, \ldots, A_n$
such that $a_i \rhd A_i$ ($i = 1, \ldots, n$) and the sequent $A_1, \ldots, A_n \to H$ is derivable in $\mathcal{L}$.
The language generated, or recognised, by $\mathcal{G}$ consists of all words accepted by $\mathcal{G}$.
\end{definition}
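Operationally, acceptance reduces to trying all lexicon assignments and querying a derivability procedure for $\mathcal{L}$. The following sketch illustrates the definition; a hypothetical stub oracle stands in for a real prover, since derivability in the calculi considered below is undecidable in general:

```python
from itertools import product

def accepts(word, lexicon, goal, derivable):
    """Decide acceptance of `word` by the grammar (lexicon, goal),
    given an oracle `derivable(antecedent, goal)` for derivability
    of the sequent A1, ..., An -> goal in the underlying calculus."""
    choices = [lexicon[a] for a in word]        # possible types for each letter
    return any(derivable(list(types), goal)     # some assignment derives the goal
               for types in product(*choices))

# Toy illustration: a stub oracle recognising a single sequent.
LEX = {"John": ["N"], "Mary": ["N"], "loves": ["(N\\S)/N"]}
def stub(antecedent, goal):
    return antecedent == ["N", "(N\\S)/N", "N"] and goal == "S"

print(accepts(["John", "loves", "Mary"], LEX, "S", stub))  # True
```

The brute-force enumeration over type assignments is exponential in the word length; the point of the sketch is only to make the quantifier structure of the definition explicit.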
The system with weakening, $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$, has not so much to do with linguistic applications, but is interesting from the logical point of view.
In particular, we use it as an intermediate calculus in our undecidability proofs (Sections~\ref{S:undec} and~\ref{S:Buszko}).
The system with a relevant modality, $\boldsymbol{!}^{\mathbf{r}} \mathbf{MALC}^{\boldsymbol{*}}$, supports analysis of many cases of extraction from dependent clauses, including parasitic extraction.
For example, {\sl ``the paper that John signed without reading''} is analysed as follows. First, we define the necessary fragment of the lexicon:
\vspace*{-10pt}
{\small \begin{align*}
\mbox{\sl the} & {} \rhd N \mathop{/} CN && & \mbox{\sl John} & {} \rhd N \\
\mbox{\sl paper} & {} \rhd CN && & \mbox{\sl signed, reading} & {} \rhd (N \mathop{\backslash} S) \mathop{/} N \\
\mbox{\sl that} & {} \rhd (CN \mathop{\backslash} CN) \mathop{/} (S \mathop{/} {!}N) && & \mbox{\sl without} & {} \rhd ((N \mathop{\backslash} S) \mathop{\backslash} (N \mathop{\backslash} S)) \mathop{/} (N \mathop{\backslash} S)
\end{align*}}
Here $N$ stands for ``noun phrase,'' $CN$ stands for ``common noun'' (without an article), and $S$ stands for ``sentence.''
Next, we derive the sequent
\begin{multline*}
N \mathop{/} CN, CN, (CN \mathop{\backslash} CN) \mathop{/} (S \mathop{/} {!}N), N, (N \mathop{\backslash} S) \mathop{/} N, \\
((N \mathop{\backslash} S) \mathop{\backslash} (N \mathop{\backslash} S)) \mathop{/} (N \mathop{\backslash} S), (N \mathop{\backslash} S) \mathop{/} N \to N
\end{multline*}
in $\boldsymbol{!}^{\mathbf{r}} \mathbf{MALC}^{\boldsymbol{*}}$, as shown on Figure~\ref{Fig:exampleA}.
\begin{figure}
\centerline{\rotatebox{90}{
$$
\infer[\mathop{/} L]{N \mathop{/} CN, CN, (CN \mathop{\backslash} CN) \mathop{/} (S \mathop{/} {!}N), N, (N \mathop{\backslash} S) \mathop{/} N,
((N \mathop{\backslash} S) \mathop{\backslash} (N \mathop{\backslash} S)) \mathop{/} (N \mathop{\backslash} S), (N \mathop{\backslash} S) \mathop{/} N \to N}
{\infer[\mathop{/} R]{N, (N \mathop{\backslash} S) \mathop{/} N, ((N \mathop{\backslash} S) \mathop{\backslash} (N \mathop{\backslash} S)) \mathop{/} (N \mathop{\backslash} S), (N \mathop{\backslash} S) \mathop{/} N \to S \mathop{/} {!}N}
{\infer[{!}NC_1]{N, (N \mathop{\backslash} S) \mathop{/} N, ((N \mathop{\backslash} S) \mathop{\backslash} (N \mathop{\backslash} S)) \mathop{/} (N \mathop{\backslash} S), (N \mathop{\backslash} S) \mathop{/} N, {!}N \to S}
{\infer[{!}L]{N, (N \mathop{\backslash} S) \mathop{/} N, {!}N, ((N \mathop{\backslash} S) \mathop{\backslash} (N \mathop{\backslash} S)) \mathop{/} (N \mathop{\backslash} S), (N \mathop{\backslash} S) \mathop{/} N, {!}N \to S}
{\infer[{!}L]{N, (N \mathop{\backslash} S) \mathop{/} N, N, ((N \mathop{\backslash} S) \mathop{\backslash} (N \mathop{\backslash} S)) \mathop{/} (N \mathop{\backslash} S), (N \mathop{\backslash} S) \mathop{/} N, {!}N \to S}
{\infer[\mathop{/} L]{N, (N \mathop{\backslash} S) \mathop{/} N, N, ((N \mathop{\backslash} S) \mathop{\backslash} (N \mathop{\backslash} S)) \mathop{/} (N \mathop{\backslash} S), (N \mathop{\backslash} S) \mathop{/} N, N \to S}
{N \to N & \infer[\mathop{/} L]{N, N \mathop{\backslash} S, ((N \mathop{\backslash} S) \mathop{\backslash} (N \mathop{\backslash} S)) \mathop{/} (N \mathop{\backslash} S), (N \mathop{\backslash} S) \mathop{/} N, N \to S}
{N \to N & \infer[\mathop{/} L]{N, N \mathop{\backslash} S, ((N \mathop{\backslash} S) \mathop{\backslash} (N \mathop{\backslash} S)) \mathop{/} (N \mathop{\backslash} S), N \mathop{\backslash} S \to S}
{N \mathop{\backslash} S \to N \mathop{\backslash} S & \infer[\mathop{\backslash} L]{N, N \mathop{\backslash} S, (N \mathop{\backslash} S) \mathop{\backslash} (N \mathop{\backslash} S) \to S}
{N \mathop{\backslash} S \to N \mathop{\backslash} S & \infer[\mathop{\backslash} L]{N, N \mathop{\backslash} S \to S}{N \to N & S \to S}}}}}}}}}
&
\infer[\mathop{/} L]{N \mathop{/} CN, CN, CN \mathop{\backslash} CN \to N}
{CN \to CN & \infer[\mathop{/} L]{N \mathop{/} CN, CN \to N}{CN \to CN & N \to N}}}
$$
}}
\caption{Derivation for {\sl ``the paper that John signed without reading''} in $\boldsymbol{!}^{\mathbf{r}} \mathbf{MALC}^{\boldsymbol{*}}$ (like \cite[Fig.~24]{Morrill2019}, but with
brackets and bracket modalities removed)}\label{Fig:exampleA}
\end{figure}
Without brackets, however, categorial grammars based on $\boldsymbol{!}^{\mathbf{r}} \mathbf{MALC}^{\boldsymbol{*}}$ suffer from overgeneration, parsing ungrammatical phrases like
{\sl *``the girl whom John loves Mary and Pete loves''} (see Introduction). In the next section, we introduce systems including
both brackets and a restricted subexponential, developed by Morrill in a series of papers.
\section{Morrill's Calculi with Brackets and Subexponential}\label{S:calculi}
In this section we describe extensions of the Lambek calculus, which include both brackets (and bracket modalities which control them) and
a subexponential, which interacts with brackets in an intricate way.
In his papers, Morrill (sometimes with his co-author Valent\'{\i}n) introduces different variants of his calculus---the difference is in the most
interesting rule, contraction. We consider two of Morrill's calculi, and denote these calculi by $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$, by the year of first publication.
In this notation, ``(st)'' means the presence of stoups, the $\mathbf{b}$ on the right stands for ``brackets,'' and
$\boldsymbol{!}_{\mathbf{b}}$ means that the subexponential ${!}$ interacts with the bracketing structure.
The $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ system, in its version without stoups, appears in~\citet{MorrillValentin}, and then in~\cite{MorrillLACompLing}
(\cite{MorrillPhilosophy} features a slightly different version of this system). The $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ system appears in Morrill's recent papers~\citep{Morrill2018JLM,Morrill2019};
however, essentially here Morrill returns to an older formulation of the bracket-aware contraction rule~\citep{MorrillBook,Morrill2014}.
Morrill's systems are quite involved, including up to 45 connectives. In this article we consider their simpler fragments, including multiplicative and additive Lambek
connectives ($\mathop{\backslash}, \mathop{/},\cdot, \mathbf{1}, \vee, \wedge$), brackets and bracket modalities ($\langle\rangle$ and $[]^{-1}$), and the subexponential ${!}$. Since none of Morrill's systems
includes cut as a rule, these fragments are conservative inside the bigger systems, and our undecidability results also carry over to the latter. (The question of {\em admissibility} of cut in Morrill's systems is more subtle, and we discuss it later on.)
Before going forward, let us notice that full Morrill's systems also include Kleene star, axiomatised by means of an $\omega$-rule (Morrill calls it
``existential exponential'' and denotes by ``?''). In the presence of Kleene star, the Lambek calculus is known to be at least
$\Pi_1^0$-hard~\citep{BuszkoPalka,Kuzn2017WoLLIC}, if the $\omega$-rule is used, and at least $\Sigma_1^0$-hard~\citep{KuznLICS19}, if the Kleene star
is axiomatised by means of induction axioms. In both cases, this means undecidability.
Moreover, in view of Kozen's results on complexity of Horn theories
of Kleene algebras~\citep{Kozen2002}, the complexity of a system with both Kleene star (with an $\omega$-rule) and a subexponential modality allowing contraction is likely
to rise up to $\Pi_1^1$-completeness. Morrill, however, emphasizes the fact that in formulae used in categorial grammars designed for real
languages the Kleene star never occurs with positive polarity. Thus, the $\omega$-rule
is never used, and the Kleene star does not incur problems with decidability.
Thus, the only possible source of undecidability is the specific contraction rule
for the subexponential. We consider fragments of Morrill's systems with this
rule, which are sufficient to show undecidability.
The syntax and metasyntax of sequents in Morrill's systems (in particular, their fragments considered throughout this article) is more involved, if compared to the
calculi without brackets. First, in the antecedents we now have brackets which operate along with the structural comma
(a metasyntactic correspondent of the product connective), introducing partial non-associativity. Second, in order to
avoid superfluous usage of permutation rules for ${!}$-formulae and to facilitate proof search, in his systems Morrill groups the
${!}$-formulae to specifically designated commutative areas in the sequent. Using the terminology of~\citet{GirardStoupMSCS,GirardStoupAPAL}, Morrill calls these areas {\em stoups.} Morrill's calculi, both technically and ideologically, are close to the sequent system of~\cite{hodas94ic}. In that system, antecedents are split into two zones, $\zeta;\Delta$, where $\zeta$ is the intuitionistic zone (formulae there are allowed to contract and weaken) and $\Delta$ is the linear one. In Morrill's terms, $\zeta$ is the stoup. Morrill's rules are more complicated, because of non-commutativity of the system in general, and also partial non-associativity introduced by brackets.
Introducing
the stoups, in fact, is the first step towards a focused proof system~\citep{Andreoli,Morrill2015Fiji,KanKuzNigSce2018IJCAR}.
Since permutations for ${!}$-formulae cannot penetrate brackets, each pair of brackets has its own stoup.
Let us define the syntax formally.
Formulae will be built from variables (primitive types) $p,q,\ldots$ and the multiplicative unit constant $\U$ using
five binary operations: $\mathop{\backslash}$ (left division), $\mathop{/}$ (right division), $\cdot$ (product), $\wedge$ (additive conjunction), $\vee$ (additive disjunction), and three unary operations:
$\langle\rangle$ and $[]^{-1}$ (bracket modalities) and ${!}$ (subexponential). Sequents (in Morrill's terminology, {\em h-sequents}) are expressions of the form
$\Xi \to A$, where $A$ is a formula and $\Xi$ is a complex metasyntactic structure which we call {\em meta-formula} (Morrill calls them {\em zones}).
Meta-formulae are built from formulae using comma and brackets; also formulae which are intended to be marked by the subexponential ${!}$, which
allows permutation, are placed into stoups.
Following~\citet{Morrill2019}, we define the notion of meta-formula along with two auxiliary notions, stoup and {\em tree term}, simultaneously.
\begin{itemize}
\item A stoup is a multiset of formulae: $\zeta = \{ A_1, \ldots, A_n \}$. A stoup could be empty, the empty stoup is denoted by $\varnothing$.
\item A tree term is either a formula or a bracketed expression of the form $[\Xi]$, where $\Xi$ is a meta-formula.
\item A meta-formula is an expression of the form $\zeta; \Gamma$, where $\zeta$ is a stoup and $\Gamma$ is a linearly
ordered sequence of tree terms. Here $\Gamma$ could also be empty; the empty sequence is denoted by $\Lambda$.
\end{itemize}
We use comma both for concatenation of tree term sequences and for multiset union of stoups (Morrill uses $\uplus$ for the latter).
Moreover, for adding one formula into a stoup we write $\zeta, A$ instead of $\zeta, \{A\}$. Empty stoups are omitted: instead of
$\varnothing; \Gamma$ we write just $\Gamma$.
Let us first formulate the rules which do not operate ${!}$, since these rules are the same in all Morrill's systems.
$$
\infer[\mathrm{id}]{A \to A}{}
$$
$$
\infer[{\mathop{/}} L]{\Xi (\zeta_1, \zeta_2 ; \Delta_1, C \mathop{/} B, \Gamma, \Delta_2) \to D}
{\zeta_1; \Gamma \to B & \Xi(\zeta_2 ; \Delta_1, C, \Delta_2 ) \to D}
\qquad
\infer[{\mathop{/}} R]{\zeta; \Gamma \to C \mathop{/} B}{\zeta; \Gamma, B \to C}
$$
$$
\infer[{\mathop{\backslash}} L]{\Xi (\zeta_1, \zeta_2 ; \Delta_1, \Gamma, A \mathop{\backslash} C, \Delta_2) \to D}
{\zeta_1; \Gamma \to A & \Xi(\zeta_2 ; \Delta_1, C, \Delta_2 ) \to D}
\qquad
\infer[{\mathop{\backslash}} R]{\zeta; \Gamma \to A \mathop{\backslash} C}{\zeta; A, \Gamma \to C}
$$
$$
\infer[{\cdot} L]{\Xi (\zeta; \Delta_1, A \cdot B, \Delta_2) \to D}
{\Xi (\zeta; \Delta_1, A, B, \Delta_2) \to D}
\qquad
\infer[{\cdot} R]{\zeta_1, \zeta_2 ; \Delta, \Gamma \to A \cdot B}
{\zeta_1; \Delta \to A & \zeta_2; \Gamma \to B}
$$
$$
\infer[\vee R_i\ i=1,2]{\Xi \to A_1 \vee A_2}{\Xi \to A_i}
\qquad
\infer[{\U} L]{\Xi(\zeta; \Delta_1, \U, \Delta_2) \to A}{\Xi(\zeta;\Delta_1,\Delta_2) \to A}
$$
$$
\infer[\vee L]{\Xi(\zeta; \Delta_1, A_1 \vee A_2, \Delta_2) \to C}
{\Xi(\zeta; \Delta_1, A_1, \Delta_2) \to C & \Xi(\zeta; \Delta_1, A_2, \Delta_2) \to C}
\qquad
\infer[{\U} R]{\Lambda \to \U}{}
$$
$$
\infer[\wedge L_j\ j=1,2]{\Xi(\zeta; \Delta_1, A_1 \wedge A_2, \Delta_2) \to C}
{\Xi(\zeta; \Delta_1, A_j, \Delta_2) \to C}
\qquad
\infer[\wedge R]{\Xi \to A_1 \wedge A_2}{\Xi \to A_1 & \Xi \to A_2}
$$
$$
\infer[{[]^{-1}} L]{\Xi(\zeta; \Delta_1, [[]^{-1} A], \Delta_2) \to B}
{\Xi (\zeta; \Delta_1, A, \Delta_2) \to B}
\qquad
\infer[{[]^{-1}} R]{\Xi \to []^{-1} A}{[ \Xi ] \to A}
$$
$$
\infer[{\langle\rangle} L]{\Xi(\zeta; \Delta_1, \langle\rangle A, \Delta_2) \to B}
{\Xi(\zeta; \Delta_1, [ A], \Delta_2) \to B}
\qquad
\infer[{\langle\rangle} R]{ [\Xi] \to \langle\rangle A}{\Xi \to A}
$$
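For instance, these rules immediately yield the principles $\langle\rangle []^{-1} A \to A$ and $A \to []^{-1} \langle\rangle A$, which show how the two bracket modalities cancel each other:
$$
\infer[{\langle\rangle} L]{\langle\rangle []^{-1} A \to A}
{\infer[{[]^{-1}} L]{[\, []^{-1} A \,] \to A}{A \to A}}
\qquad
\infer[{[]^{-1}} R]{A \to []^{-1} \langle\rangle A}
{\infer[{\langle\rangle} R]{[\, A \,] \to \langle\rangle A}{A \to A}}
$$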
The two calculi, $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$, also share two rules for ${!}$:
$$
\infer[{!} L]{\Xi(\zeta; \Gamma_1, {!}A, \Gamma_2) \to B}{\Xi(\zeta, A ; \Gamma_1, \Gamma_2) \to B}
\qquad
\infer[{!} P]{\Xi(\zeta, A; \Gamma_1, \Gamma_2) \to B}{\Xi (\zeta; \Gamma_1, A, \Gamma_2) \to B}
$$
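These shared rules already give ${!}$ the dereliction property: the sequent ${!}A \to A$ is derivable in both systems by moving $A$ into the stoup and then back into the sequence:
$$
\infer[{!}L]{{!}A \to A}
{\infer[{!}P]{A; \Lambda \to A}{A \to A}}
$$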
However, the ${!}R$ rule and, most importantly, the contraction rule ${!}C$ are different.
In the ``older'' system $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ they are formulated as follows:
$$
\infer[{!}R]{\zeta; \Lambda \to {!}B}{\zeta; \Lambda \to B}
\qquad
\infer[{!}C,\ \zeta_2 \ne \varnothing]{\Xi(\zeta_1, \zeta_2; \Gamma_1, \Gamma_2, \Gamma_3) \to B}
{\Xi(\zeta_1, \zeta_2; \Gamma_1, [\zeta_2; \Gamma_2], \Gamma_3) \to B}
$$
The ``newer'' system $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ uses the following formulation of ${!}R$ and ${!}C$:
$$
\infer[{!} R]{{!}A \to {!}B}{{!}A \to B}
\qquad
\infer[{!} C]{\Xi(\zeta, A; \Gamma_1, [[\Gamma_2]], \Gamma_3) \to B}
{\Xi(\zeta, A; \Gamma_1, [A; \Gamma_2], \Gamma_3) \to B}
$$
As noted above, in the absence of cut we can easily formulate fragments of $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ without additive connectives: one just removes the corresponding
rules ($\vee L$, $\vee R_{1,2}$, $\wedge L_{1,2}$, $\wedge R$). In the notations, we just replace ``$\mathbf{MALC}$'' with ``$\mathbf{L}$'':
$\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$, $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$. In the following sections we use the same naming convention: if a calculus' name includes ``$\mathbf{MALC}$,'' then
replacing it with ``$\mathbf{L}$'' gives a name for the fragment of this calculus without additive connectives.
For calculi with brackets, defining recognition of words in categorial grammars is trickier. One can keep the definition
from Section~\ref{S:MALC} and say that $w = a_1 \dots a_n$ is accepted
by the grammar if $A_1, \dots, A_n \to H$ is derivable, for some $A_i$ such that $a_i \rhd A_i$ ($i = 1, \dots, n$). Notice that this
sequent does not include brackets, but may include bracket modalities, $\langle\rangle$ and $[]^{-1}$. Thus, brackets could appear inside the derivation.
This notion of recognition is called {\em s-recognition}~\citep{Jaeger2003}.
Linguistic applications, however, suggest another notion of recognition for Lambek grammars with brackets, called {\em t-recognition.}
A word $w = a_1 \dots a_n$ is t-accepted by a grammar $\mathcal{G}$ if the sequent $\Pi \to H$ is derivable for some $\Pi$ such that if one
removes all brackets (but not bracket modalities!) from $\Pi$, it yields $A_1, \dots, A_n$, where $a_i \rhd A_i$ ($i = 1,\dots,n$). In other words,
a word is accepted if the corresponding sequent is derivable {\em for some bracketing $\Pi$.}
In the implementation of Morrill's bracketed calculi in the CatLog parser, the bracket structure on $A_1, \ldots, A_n$ is requested from the user
as part of input data~\citep{CatLog3tech}. There is an ongoing project of implementing automatic guessing of the correct bracket structure (so-called {\em bracket induction});
at the present time, there exists such an algorithm for the fragment with only multiplicative connectives and bracket modalities, without
subexponential~\citep{MorKuzKanSce2018FG}.
As an example, we analyse the phrase {\sl ``the paper that John signed without reading''} using $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$. Our analysis is a simplification
of the one of~\citet{Morrill2019}. In comparison with the analysis in Section~\ref{S:MALC} (Figure~\ref{Fig:exampleA}), here we take care
of the bracketed domains, which cannot be penetrated by associativity of product or permutations of ${!}$-formulae. Also notice that the
contraction rule here implements parasitic extraction in the following sense: applying contraction to ${!}N$ (actually, to $N$ located in
the stoup) instantiates a secondary (parasitic) copy of ${!}N$ into an island. In order to prevent reuse of islands for parasitic extraction,
the island transforms from a strong (double-bracketed) to a weak (single-bracketed) one. The lexicon now is as follows (if compared to
the one in Section~\ref{S:MALC}, the types here are augmented with bracket modalities):
{\small
\begin{align*}
\mbox{\sl the} &\triangleright N \mathop{/} CN && & \mbox{\sl likes, signed} &\triangleright (\langle\rangle N \mathop{\backslash} S) \mathop{/} N \\
\mbox{\sl man, paper} &\triangleright CN && & \mbox{\sl without} &\triangleright ([]^{-1} ((\langle\rangle N \mathop{\backslash} S) \mathop{\backslash} (\langle\rangle N \mathop{\backslash} S))) \mathop{/} (\langle\rangle N \mathop{\backslash} S) \\
\mbox{\sl reading} &\triangleright (\langle\rangle N \mathop{\backslash} S) \mathop{/} N && & \mbox{\sl who, that} &\triangleright ([]^{-1}\NMod (CN \mathop{\backslash} CN)) \mathop{/} (S \mathop{/} {!}N)\\
\mbox{\sl John} &\triangleright N
\end{align*}}
Before parsing, we have to impose the right bracket structure on our phrase. This is done as follows:
{\sl ``the paper {\rm [[}that {\rm [}John{\rm]} signed {\rm [[}without reading{\rm]] ]]}.''}
Indeed, in Morrill's CatLog categorial grammar the subject group and the {\sl without}-clause form islands, and the {\sl that}-clause forms a strong island, embraced by
double brackets. Moreover, we also have to double-bracket our without-clause
(make it a ``strong island''), since it will be used for parasitic extraction.
Now the sequent we have to derive in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ is as follows:
\begin{multline*}
N \mathop{/} CN, [[\, ([]^{-1}\NMod (CN \mathop{\backslash} CN)) \mathop{/} (S \mathop{/} {!}N), [N], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N, \\
[[\, ([]^{-1} ((\langle\rangle N \mathop{\backslash} S) \mathop{\backslash} (\langle\rangle N \mathop{\backslash} S))) \mathop{/} (\langle\rangle N \mathop{\backslash} S), (\langle\rangle N \mathop{\backslash} S) \mathop{/} N\, ]]\;]] \to N
\end{multline*}
The derivation is presented in Figure~\ref{Fig:Johnsigned}.
\begin{figure}
\centerline{\rotatebox{90}{$
\infer[\mathop{/} L]{N \mathop{/} CN, CN, [[\, ([]^{-1}\NMod (CN \mathop{\backslash} CN)) \mathop{/} (S \mathop{/} {!}N), [N], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N,
[[\,([]^{-1} ((\langle\rangle N \mathop{\backslash} S) \mathop{\backslash} (\langle\rangle N \mathop{\backslash} S))) \mathop{/} (\langle\rangle N \mathop{\backslash} S), (\langle\rangle N \mathop{\backslash} S) \mathop{/} N\,]]\;]] \to N}
{\infer[\mathop{/} L]{CN, [[\, ([]^{-1}\NMod (CN \mathop{\backslash} CN)) \mathop{/} (S \mathop{/} {!}N), [N], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N,
[[\, ([]^{-1} ((\langle\rangle N \mathop{\backslash} S) \mathop{\backslash} (\langle\rangle N \mathop{\backslash} S))) \mathop{/} (\langle\rangle N \mathop{\backslash} S), (\langle\rangle N \mathop{\backslash} S) \mathop{/} N\,]]\:]] \to CN}
{\infer[\mathop{/} R]{[N], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N,
[[\,([]^{-1} ((\langle\rangle N \mathop{\backslash} S) \mathop{\backslash} (\langle\rangle N \mathop{\backslash} S))) \mathop{/} (\langle\rangle N \mathop{\backslash} S), (\langle\rangle N \mathop{\backslash} S) \mathop{/} N\,]] \to S \mathop{/} {!}N}
{\infer[{!} L]{[N], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N,
[[\, ([]^{-1} ((\langle\rangle N \mathop{\backslash} S) \mathop{\backslash} (\langle\rangle N \mathop{\backslash} S))) \mathop{/} (\langle\rangle N \mathop{\backslash} S), (\langle\rangle N \mathop{\backslash} S) \mathop{/} N\,]], {!}N \to S}
{\infer[{!} C]{N; [N], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N,
[[\, ([]^{-1} ((\langle\rangle N \mathop{\backslash} S) \mathop{\backslash} (\langle\rangle N \mathop{\backslash} S))) \mathop{/} (\langle\rangle N \mathop{\backslash} S), (\langle\rangle N \mathop{\backslash} S) \mathop{/} N\,]] \to S}
{\infer[{!} P]{N; [N], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N,
[\,N; ([]^{-1} ((\langle\rangle N \mathop{\backslash} S) \mathop{\backslash} (\langle\rangle N \mathop{\backslash} S))) \mathop{/} (\langle\rangle N \mathop{\backslash} S), (\langle\rangle N \mathop{\backslash} S) \mathop{/} N\,] \to S}
{\infer[{!} P]{[N], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N, N,
[\,N; ([]^{-1} ((\langle\rangle N \mathop{\backslash} S) \mathop{\backslash} (\langle\rangle N \mathop{\backslash} S))) \mathop{/} (\langle\rangle N \mathop{\backslash} S), (\langle\rangle N \mathop{\backslash} S) \mathop{/} N\,] \to S}
{\infer[\mathop{/} L]{[N], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N, N,
[\, ([]^{-1} ((\langle\rangle N \mathop{\backslash} S) \mathop{\backslash} (\langle\rangle N \mathop{\backslash} S))) \mathop{/} (\langle\rangle N \mathop{\backslash} S), (\langle\rangle N \mathop{\backslash} S) \mathop{/} N, N\,] \to S}
{N \to N &
\infer[\mathop{/} L]{[N], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N, N, [\,([]^{-1} ((\langle\rangle N \mathop{\backslash} S) \mathop{\backslash} (\langle\rangle N \mathop{\backslash} S))) \mathop{/} (\langle\rangle N \mathop{\backslash} S), \langle\rangle N \mathop{\backslash} S\,] \to S}
{\langle\rangle N \mathop{\backslash} S \to \langle\rangle N \mathop{\backslash} S &
\infer[[]^{-1} L]{[N], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N, N, [\,[]^{-1} ((\langle\rangle N \mathop{\backslash} S) \mathop{\backslash} (\langle\rangle N \mathop{\backslash} S))\,] \to S}
{\infer[\mathop{\backslash} L]{[N], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N, N, (\langle\rangle N \mathop{\backslash} S) \mathop{\backslash} (\langle\rangle N \mathop{\backslash} S) \to S}
{N \to N & \infer[\mathop{\backslash} L]{[N], \langle\rangle N \mathop{\backslash} S, (\langle\rangle N \mathop{\backslash} S) \mathop{\backslash} (\langle\rangle N \mathop{\backslash} S) \to S}
{\langle\rangle N \mathop{\backslash} S \to \langle\rangle N \mathop{\backslash} S & \infer[\mathop{\backslash} L]{[N], \langle\rangle N \mathop{\backslash} S \to S}
{\infer[\langle\rangle R]{[N] \to \langle\rangle N}{N \to N} & S \to S}}}}}}}}}}} &
\infer[[]^{-1} L]{CN, [[\, []^{-1}\NMod (CN \mathop{\backslash} CN)\, ]] \to CN}
{\infer[[]^{-1} L]{CN, [\,[]^{-1} (CN \mathop{\backslash} CN)\,] \to CN}
{\infer[\mathop{\backslash} L]{CN, CN \mathop{\backslash} CN \to CN}{ CN \to CN & CN \to CN}}}}
& N \to N}
$
}}
\caption{Derivation for {\sl ``the paper that John signed without reading''} in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ (cf. \citet[Fig.~24]{Morrill2019})}\label{Fig:Johnsigned}
\end{figure}
\section{Issues with Cut Elimination}\label{S:issues}
Cut elimination is one of the standard logical properties which is expected from a reasonable Gentzen-style sequent calculus.
Since in the systems discussed in this article cut is not included as an official rule, the question of cut elimination
appears as the question of the {\em admissibility} of cut.
From the linguistic perspective, cut supports the principle of compositionality: once we have proved that a phrase has
syntactic type, say, $NP$, we can use it at any place where a noun phrase is allowed.
\citet{Morrill2019} mentions a semantic approach to prove admissibility of cut in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ as an ongoing work by O.~Valent\'{\i}n. In this
paper, we wish to pursue the more traditional syntactic approach for cut elimination, both in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$.
Unfortunately, Morrill's systems, as formulated above (Section~\ref{S:calculi}), fail to enjoy cut elimination (cut admissibility).
For $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$, the counter-example is ${!}p, q \to q \cdot {!}p$. This sequent expresses the natural property that ${!}$-formulae
commute with arbitrary formulae, and it is derivable using cut:
$$
\infer[\mathrm{cut}]{{!}p, q \to q \cdot {!}p}
{\infer[!R]{{!}p \to {!}{!}p}{{!}p \to {!}p}
& \infer[!L]{{!}{!}p, q \to q \cdot {!}p}
{\infer[!P]{{!}p; q \to q \cdot {!}p}
{\infer[{\cdot} R]{q, {!}p \to q \cdot {!}p}{q \to q & {!}p \to {!}p}}}}
$$
However, no cut-free derivation is available. Indeed, the lowermost rule of such a derivation should be either ${\cdot}R$ or $!L$.
The former is impossible, since neither $\Lambda \to q$, nor ${!}p \to q$, nor ${!}p,q \to q$ is derivable. In the latter case,
we get $p; q \to q \cdot {!}p$ and again have two possibilities: ${\cdot} R$ or $!P$. For ${\cdot} R$, the only possible way
of splitting could be $q \to q$ and $p; \Lambda \to {!}p$. The latter, however, is not derivable (counter-intuitively, since $p$
in the stoup should mean ${!}p$): one cannot immediately apply $!R$, and applying $!P$ yields $p \to {!}p$, which is also underivable. In the remaining case, applying $!P$ to $p; q \to q \cdot {!}p$ would give
either $p,q \to q \cdot {!}p$ or $q,p \to q \cdot {!}p$, neither of which is derivable.
Notice that the proof search here is finite, since the contraction rule could not be used in the absence of brackets.
Thus, we have to modify $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ in order to restore the cut elimination property.
We do this by replacing ${!}R$ and ${!}C$ with the following rules:
$$
\infer[!R']{A; \Lambda \to {!}B}{A; \Lambda \to B}
\qquad
\infer[!C']
{\Xi(\zeta, A; \Gamma_1, [[ \zeta'; \Gamma_2 ]], \Gamma_3) \to B}
{\Xi(\zeta, A; \Gamma_1, [ \zeta', A; \Gamma_2 ], \Gamma_3) \to B}
$$
Notice that ${!}R'$ corresponds to the $!R$ rule of Morrill's $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$. The only difference is that here
the stoup should include exactly one formula.
We denote the modified calculus by $\sysB'$.
In what follows, we shall show (Theorem~\ref{Th:cutelim}) that $\sysB'$ admits the cut rule in the following stoup-aware form:
$$
\infer[\mathrm{cut}]{\Xi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2) \to C}
{\xi;\Pi \to A & \Xi(\zeta; \Gamma_1, A, \Gamma_2) \to C}
$$
Using cut and the left rules for ${!}$, one can derive the old ${!}R$ rule from the new ${!}R'$ one:
$$
\infer[{!}L]{{!}A \to {!}B}
{\infer[{!}R']{A; \Lambda \to {!}B}{\infer[\mathrm{cut}]{A; \Lambda \to B}
{\infer[{!}R']{A; \Lambda \to {!}A}{\infer[{!}P]{A; \Lambda \to A}{A \to A}} & {!}A \to B}}}
$$
As for ${!}C'$, the old rule ${!}C$ is just its particular case, for $\zeta' = \varnothing$. Thus,
$\sysB'$ is an extension of $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$.
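In particular, the sequent ${!}p, q \to q \cdot {!}p$ from the counter-example above now has a cut-free derivation in $\sysB'$, since ${!}R'$ applies to a formula in the stoup:
$$
\infer[{!}L]{{!}p, q \to q \cdot {!}p}
{\infer[{\cdot}R]{p; q \to q \cdot {!}p}
{q \to q &
\infer[{!}R']{p; \Lambda \to {!}p}{\infer[{!}P]{p; \Lambda \to p}{p \to p}}}}
$$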
For $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$, problems come from the non-emptiness restriction imposed on the contraction rule.
The ${!}C$ rule in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ is formulated in the ``multi-contraction'' form, which allows contracting
several formulae in the stoup at once. However, it should contract {\em at least one} formula.
This constraint can be easily violated by cut with $\Lambda \to {!}\mathbf{1}$ (which is derivable in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$).
In systems without brackets this would not be an issue, since in such systems contraction of zero
formulae does nothing. In $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$, however, ${!}C$ operates brackets, so such a ``zero-contraction'' would violate
bracket discipline.
The concrete counter-example is $q \to \langle\rangle q$. This sequent clearly has no cut-free derivation, but can be derived using cut:
$$
\infer[\mathrm{cut}]{q \to \langle\rangle q}{\infer[{!}R]{\Lambda \to {!}\mathbf{1}}{\Lambda \to \mathbf{1}} &
\infer[{!}L]{{!}\mathbf{1}, q \to \langle\rangle q}{\infer[{!}C]{\mathbf{1}; q \to \langle\rangle q}
{\infer[{!}P]{\mathbf{1}; [\mathbf{1}; q] \to \langle\rangle q}
{\infer[{!}P]{\mathbf{1}, [\mathbf{1}; q] \to \langle\rangle q}
{\infer[\mathbf{1} L]{\mathbf{1}, [\mathbf{1}, q] \to \langle\rangle q}
{\infer[\mathbf{1} L]{[\mathbf{1},q] \to \langle\rangle q}
{\infer[\langle\rangle R]{[q] \to \langle\rangle q}{q\to q}}}}}}}}
$$
We modify $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ in the following way, yielding the system $\sysA'$:
$$
\infer[{!}R',\ \zeta \ne \varnothing]{\zeta; \Lambda \to {!}B}{\zeta; \Lambda \to B}
\qquad
\infer[{!}C',\ \zeta_2 \ne \varnothing]{\Xi(\zeta_1, \zeta_2, \zeta'; \Gamma_1, \Gamma_2, \Gamma_3) \to C}
{\Xi(\zeta_1, \zeta_2; \Gamma_1, [\zeta', \zeta_2; \Gamma_2], \Gamma_3) \to C}
$$
Theorem \ref{Th:cutelimA} establishes cut admissibility in $\sysA'$.
\section{Lambek's Restriction}\label{S:restriction}
The original Lambek calculus~\citep{Lambek58} has an important difference from the systems discussed above,
namely {\em Lambek's non-emptiness restriction.}
Let us start with a linguistic example~\citep[Sect. 2.5]{MootRetore}. In the calculi defined above,
one can derive $(N \mathop{/} N) \mathop{/} (N \mathop{/} N), N \to N$. This sequent validates ``very book'' as an object of type $N$
(common noun), which is incorrect. Indeed, the type $(N \mathop{/} N) \mathop{/} (N \mathop{/} N)$ for ``very'' is a left modifier for
adjective, cf. ``very interesting book,'' analyzed as $(N \mathop{/} N) \mathop{/} (N \mathop{/} N), N \mathop{/} N, N \to N$.
This example motivates the following constraint: {\em left-hand sides of all sequents
are required to be non-empty.} This constraint existed in the original Lambek calculus~\citep{Lambek58}. It is quite strange
from the logical point of view, but is natural from the linguistic side and also in the view of algebraic interpretations
(considering residuated semigroups instead of monoids).
In the presence of a full-power exponential modality, however, imposing Lambek's restriction is quite a subtle matter~\citep{KanKuzSceLFCS,KanKuzSceJLC}.
Actually, there is no way of doing it without losing at least one of the desired properties of a good logical system---cut elimination and substitution.
Similar issues arise with reconciling Lambek's restriction with the relevant modality.
The subexponential modalities used by Morrill, however, are not that powerful, and their behaviour is constrained by brackets. This makes it possible
to impose Lambek's restriction in a linguistically consistent manner.
In this section, we present $\sysBr'$, a version of $\sysB'$ with Lambek's restriction imposed.
Before going into the formalism, let us consider one more linguistic example~\citep{Morrill2018JLM}.
This example features an incorrect noun phrase, {\sl *``man who likes.''} The dependent clause here
is analysed with two gaps, {\sl *``man who {\rm []} likes {\rm []}.''} The intended semantics (and the correct
version of the phrase) is {\sl ``man who likes himself,''} that is, both gaps should be filled with
the same $N$, using the parasitic extraction mechanism. The lexicon here is the same as in the example in
Section~\ref{S:calculi}.
Since the dependent clause forms a strong (double-bracketed) island, the brackets are imposed as follows:
{\sl ``man {\rm [[}who likes{\rm ]]}.''} Next, we recall that the subject should form a weak (single-bracketed) island, and
these brackets can be generated in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ by the contraction rule. This allows $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ to parse (incorrectly) {\sl ``likes''} as
a dependent clause with two gaps, a host one for the object and a parasitic one for the subject:
$$
\infer[\mathop{/} R]{(\langle\rangle N \mathop{\backslash} S) \mathop{/} N \to S \mathop{/} {!}N}
{\infer[{!}L]{(\langle\rangle N \mathop{\backslash} S) \mathop{/} N, {!}N \to S}
{\infer[{!}C]{N; (\langle\rangle N \mathop{\backslash} S) \mathop{/} N \to S}
{\infer[{!}P]{N; [N; \Lambda], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N \to S}
{\infer[{!}P]{[N; \Lambda], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N, N \to S}
{\infer[\mathop{/} L]{[N], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N, N \to S}
{N \to N & \infer[\mathop{\backslash} L]{[N], \langle\rangle N \mathop{\backslash} S \to S}
{\infer[\langle\rangle R]{[N] \to \langle\rangle N}{N \to N} & S \to S}}}}}}}
$$
The complete derivation for {\sl *``man who likes''} as a common noun group ($CN$) in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ is given in Figure~\ref{Fig:manwholikes}.
\begin{figure}
\centerline{
{
$$
\infer[\mathop{/} L]
{CN, [[ ([]^{-1}\NMod (CN \mathop{\backslash} CN)) \mathop{/} (S \mathop{/} {!}N), (\langle\rangle N \mathop{\backslash} S) \mathop{/} N ]] \to CN}
{\infer[\mathop{/} R]{(\langle\rangle N \mathop{\backslash} S) \mathop{/} N \to S \mathop{/} {!}N}
{\infer[{!}L]{(\langle\rangle N \mathop{\backslash} S) \mathop{/} N, {!}N \to S}
{\infer[{!}C]{N; (\langle\rangle N \mathop{\backslash} S) \mathop{/} N \to S}
{\infer[{!}P]{N; [N; \Lambda], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N \to S}
{\infer[{!}P]{[N; \Lambda], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N, N \to S}
{\infer[\mathop{/} L]{[N], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N, N \to S}
{N \to N & \infer[\mathop{\backslash} L]{[N], \langle\rangle N \mathop{\backslash} S \to S}
{\infer[\langle\rangle R]{[N] \to \langle\rangle N}{N \to N} & S \to S}}}}}}}
&
\infer=[[]^{-1} L]{CN, [[ []^{-1}\NMod (CN \mathop{\backslash} CN) ]] \to CN}
{\infer[\mathop{\backslash} L]{CN, CN \mathop{\backslash} CN \to CN}{CN \to CN & CN \to CN}}
}
$$
}
}
\caption{Derivation for {\sl *``man {\rm [[}who likes{\rm ]]}''} in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ (cf.~\cite{Morrill2018JLM})}
\label{Fig:manwholikes}
\end{figure}
The problem here is the empty island (subject of the dependent clause) generated by the ${!}C$ rule.
This issue was one of the motivations for~\citet{Morrill2018JLM} to introduce the new system~$\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$, which features
another version of ${!}C$.
With this new version, the island for parasitic extraction should be given in the bracketing of the goal sequent.
Moreover, it should be declared as a strong (double-bracketed) island, and then the ${!}C$ rule will transform it
into a weak one. The erroneous phrase {\sl *``man who likes,''} however, can still be parsed by $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$, but requires
the empty subject island to be explicitly introduced in the bracketing, see Figure~\ref{Fig:manwholikesB}.
\begin{figure}
\centerline{{
$$
\infer[\mathop{/} L]
{CN, [[ ([]^{-1}\NMod (CN \mathop{\backslash} CN)) \mathop{/} (S \mathop{/} {!}N), [[ \Lambda ]], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N ]] \to CN}
{\infer[\mathop{/} R]{[[\Lambda]], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N \to S \mathop{/} {!}N}
{\infer[{!}L]{[[\Lambda]],(\langle\rangle N \mathop{\backslash} S) \mathop{/} N, {!}N \to S}
{\infer[{!}C]{N; [[\Lambda]], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N \to S}
{\infer[{!}P]{N; [N; \Lambda], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N \to S}
{\infer[{!}P]{[N; \Lambda], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N, N \to S}
{\infer[\mathop{/} L]{[N], (\langle\rangle N \mathop{\backslash} S) \mathop{/} N, N \to S}
{N \to N & \infer[\mathop{\backslash} L]{[N], \langle\rangle N \mathop{\backslash} S \to S}
{\infer[\langle\rangle R]{[N] \to \langle\rangle N}{N \to N} & S \to S}}}}}}}
&
\infer=[[]^{-1} L]{CN, [[ []^{-1}\NMod (CN \mathop{\backslash} CN) ]] \to CN}
{\infer[\mathop{\backslash} L]{CN, CN \mathop{\backslash} CN \to CN}{CN \to CN & CN \to CN}}
}
$$
}
}
\caption{Derivation of {\sl *``man {\rm [[}who {\rm [[\ ]]} likes{\rm ]]}''} in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ (notice the empty
subject island)}\label{Fig:manwholikesB}
\end{figure}
An easy way of making phrases like {\sl *``man who likes''} invalid in grammars based on $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ is to forbid the user (or an automated
bracket-inducing system) from putting empty bracket domains on the original sentence. This is essentially the idea which motivates
the usage of $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ in favour of $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$: in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$, the brackets embracing an empty island appeared only inside the derivation, while
in the newer system $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ they should be provided as an input, which could be disallowed externally.
A more logically consistent approach, however, requires imposing non-emptiness restriction
systematically for all sequents in derivations. The restriction is formulated as follows:
\begin{center}
{\em every meta-formula, both the whole antecedent and each bracketed domain, should be non-empty.}
\end{center}
Non-emptiness of a meta-formula means that it is not equal to the empty one, $\varnothing;\Lambda$. In other words, it should
{\em either} include a non-empty sequence of formulae, {\em or} have a non-empty stoup.
The first thing one has to do in order to maintain Lambek's restriction is to remove the unit constant $\mathbf{1}$. The unit essentially
means ``empty,'' and there is no consistent way of reconciling it with Lambek's restriction. Indeed, having the unit, we can put it
into any meta-formula, thus making it formally non-empty. (Unfortunately, it seems that Morrill needs the unit for handling discontinuity,
which is why he does not impose Lambek's restriction on his systems.)
Most of the $\sysB'$ rules keep this restriction, {\em i.e.,} if in the premises all meta-formulae are non-empty, then the same holds for the conclusion.
Only three rules need specifically imposed restrictions:
\begin{itemize}
\item for $\mathop{\backslash} R$ and $\mathop{/} R$, we require that either $\Gamma \ne \Lambda$ or $\zeta \ne \varnothing$ (this is the original Lambek's restriction);
\item for the contraction rule, ${!}C'$, we require that either $\Gamma_2 \ne \Lambda$ or $\zeta' \ne \varnothing$.
\end{itemize}
The latter constraint exactly captures the idea that parasitic gapping into an empty bracketed island is ungrammatical (cf. the ``man who likes'' example above).
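Spelled out, the excluded instance of ${!}C'$ (the one with $\zeta' = \varnothing$ and $\Gamma_2 = \Lambda$) is exactly the one that creates an empty island:
$$
\infer[{!}C']{\Xi(\zeta, A; \Gamma_1, [[\,\Lambda\,]], \Gamma_3) \to B}
{\Xi(\zeta, A; \Gamma_1, [\, A; \Lambda \,], \Gamma_3) \to B}
$$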
We denote the version of $\sysB'$ with Lambek's restriction by $\sysBr'$.
\section{Cut Elimination in Modified Systems}\label{S:cutelim}
In this section we prove that the cut rule in the following form
$$
\infer[\mathrm{cut}]
{\Xi(\xi, \zeta; \Gamma_1, \Pi, \Gamma_2) \to C}
{\xi; \Pi \to A & \Xi(\zeta; \Gamma_1, A, \Gamma_2) \to C}
$$
is admissible in the following calculi: $\sysB'$, $\sysBr'$, and $\sysA'$. We show this by a
single inductive argument for $\sysB'$ and $\sysBr'$, and then make necessary changes for $\sysA'$.
\begin{theorem}\label{Th:cutelim}
Let sequents $\xi; \Pi \to A$ and $\Xi(\zeta; \Gamma_1, A, \Gamma_2) \to C$ be derivable
in $\sysB'$ or $\sysBr'$.
Then
$\Xi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2) \to C$ is also derivable
in $\sysB'$ or, respectively, $\sysBr'$.
\end{theorem}
The proof of cut elimination traditionally goes by nested induction: on the complexity of the formula being cut,
and on the depth of the cut, that is, the number of rules applied in the derivation above the cut.
For the original Lambek calculus, cut elimination was shown by~\citet{Lambek58}. \citet{MoortgatMultimodal}
extended Lambek's proof to the Lambek calculus with brackets.
The presence of ${!}$ and
stoups, however, makes cut elimination more involved. Namely, the principal case for ${!}$ moves the active formula
being cut to the stoup:
$$
\infer[\mathrm{cut}]
{\Xi(\zeta, B; \Gamma_1, \Gamma_2) \to C}
{\infer[{!}R']{B; \Lambda \to {!}A}{B; \Lambda \to A} &
\infer[{!}L]{\Xi(\zeta; \Gamma_1, {!}A, \Gamma_2) \to C}
{\Xi(\zeta, A; \Gamma_1, \Gamma_2) \to C}}
$$
Propagating the cut upwards in this situation would require a specific version of cut for formulae inside the stoup,
which would then have to be eliminated together with the usual cut rule by simultaneous induction.
Contraction, however, raises yet another issue with propagating cut. Namely, if we contract the formula $A$ being cut,
then after propagation we get two cut applications, one under another. For the lower cut, we fail to maintain the decrease
of induction parameters, see~\citet{KanKuzNigSce2018Dale}.
The standard strategy, going back to~\citet{Gentzen} and applied to linear logic with exponentials by~\citet{Girard} and~\citet{LMSS},
replaces the cut rule with a more general rule called mix. Mix is a combination of cut and contractions, and this more general rule
is then eliminated by a straightforward inductive argument. In the presence of brackets and stoups, however, formulating mix becomes
an extremely tedious job. In view of that, we follow another strategy, ``deep cut elimination'' by~\citet{BraunerBRICS,dePaiva};
see also~\cite{Brauner2000IGPL,EadesPaiva}.
\begin{proof}
Let $\xi; \Pi \to A$ and $\Xi(\zeta; \Gamma_1, A, \Gamma_2) \to C$
have cut-free derivations $\Der_{\mathrm{left}}$ and $\Der_{\mathrm{right}}$ respectively.
We proceed by nested induction on two parameters: $\kappa$, the complexity of the formula $A$ being cut;
$\sigma$, the total number of rule applications in $\Der_{\mathrm{left}}$ and $\Der_{\mathrm{right}}$.
In each case either $\kappa$ gets reduced, or $\sigma$ gets reduced with
the same $\kappa$.
We consider the lowermost rules of $\Der_{\mathrm{left}}$ and $\Der_{\mathrm{right}}$.
We call ${!}P$ and ${!}C$ {\em structural} rules; all other rules (excluding cut, which is not allowed
in our derivations) are {\em logical} ones. Being the lowermost rule of $\Der_{\mathrm{left}}$ or $\Der_{\mathrm{right}}$, a logical rule is called
{\em principal,} if it introduces the formula $A$ being cut.
The axiom $\Lambda\to\mathbf{1}$ in this proof is considered a principal rule (with no premises) introducing $\mathbf{1}$.
Structural rules are never principal, since they operate only inside the stoup, while the formula $A$
being cut is not in the stoup.
First we list all possible cases, with short comments, and then accurately consider each of them:
\begin{enumerate}
\item The lowermost rule in $\Der_{\mathrm{left}}$ is $!R'$ and the lowermost rule in $\Der_{\mathrm{right}}$ is $!L$ ({\em i.e.,} the principal case with ${!}$).
This is actually the most interesting case, in which deep cut elimination
differs from traditional cut elimination schemes. In this case, we are going to perform a non-local transformation
of the $\Der_{\mathrm{right}}$ tree, as shown below.
\item Both lowermost rules of $\Der_{\mathrm{left}}$ and $\Der_{\mathrm{right}}$ are principal, and $A$ is not of the form ${!}A'$ (if it is, we are in Case~1).
This is the standard principal case for cut elimination in the Lambek calculus: the tricky part with ${!}$ is considered in
Case~1, not here.
\item The lowermost rule in $\Der_{\mathrm{left}}$ is a non-principal one. In this case we propagate cut to the left.
\item The lowermost rule in $\Der_{\mathrm{right}}$ is a non-principal one. Propagate cut to the right.
\item One of the premises of cut is an axiom of the form $A \to A$. Cut disappears.
\end{enumerate}
{\bf Case 1 (deep: principal for ${!}$):} the lowermost rule in $\Der_{\mathrm{left}}$ is
${!}R$ and the lowermost rule in $\Der_{\mathrm{right}}$ is ${!}L$. Cut is applied as follows:
$$
\infer[\mathrm{cut}]{\Xi(\zeta,B; \Gamma', \Gamma'') \to C}
{\infer[!R]{B; \Lambda \to {!}A}{B; \Lambda \to A} &
\infer[!L]{\Xi(\zeta; \Gamma', {!}A, \Gamma'') \to C}
{\Xi(\zeta, A; \Gamma', \Gamma'') \to C}}
$$
Let us trace the designated occurrence of $A$ inside the stoup upwards along $\Der_{\mathrm{right}}$.
Each principal $!C'$ application branches the trace.
The trace also branches on applications of $\wedge R$ and $\vee L$.
Each branch ends at a principal application of ${!}P$ (see Figure~\ref{Fig:deep1}).
\begin{figure}
\includegraphics[scale=.9]{deepcut1b.pdf}
\caption{Tracing $A$ in the stoup up to ${!}P$}\label{Fig:deep1}
\end{figure}
Now we perform
the deep cut elimination step. In $\Der_{\mathrm{right}}$, we replace the designated occurrences of $A$ in the stoup with $B$.
The applications of $!C'$ remain valid. Other
rules do not operate $A$ in the stoup and therefore remain intact. After this replacement,
applications of ${!}P$ transform into applications of cut with $B;\Lambda \to A$ as the left
premise (Figure~\ref{Fig:deep2}). One trace could go through several instances of ${!}P$ with
the active $A$, like $\Xi_2$ and $\Xi_3$ in the example; in this case we go from top to
bottom.
\begin{figure}
\includegraphics[scale=.85]{deepcut2b.pdf}
\caption{Deep cut elimination}\label{Fig:deep2}
\end{figure}
The new cuts have lower $\kappa$ (the cut formula is $A$ instead of ${!}A$), and therefore
they are eliminable by induction hypothesis.
For the case with Lambek's restriction, notice that in the deep cut elimination step we just
changed $A$ to $B$ in the stoups, so Lambek's restriction could not get violated.
The figures illustrating deep cut elimination (Figure~\ref{Fig:deep1} and Figure~\ref{Fig:deep2}) are taken
from~\citet{KanKuzSceFCT}, with the necessary changes for calculi with stoups.
{\bf Case 2 (principal, but not ${!}$).} As said before, this case is the standard principal case for cut elimination
in the multiplicative-additive Lambek calculus with brackets~\citep{MoortgatMultimodal}, since ${!}$ does not appear in this case.
Adding the stoups makes only a minor difference.
All the interesting things about ${!}$ have already happened in the ``deep'' Case~1.
So, the main connective of $A$ is not ${!}$. Consider other possible cases.
{\em Subcase 2.a.} $A = A_1 \mathop{\backslash} A_2$ or $A = A_2 \mathop{/} A_1$ (the latter is of course symmetric to the former).
In this case
$$\small
\infer[\mathrm{cut}]{\Xi(\xi, \zeta_1, \zeta_2; \Delta_1, \Gamma, \Pi, \Delta_2) \to C}
{\infer[\mathop{\backslash} R]{\xi; \Pi \to A_1 \mathop{\backslash} A_2}{\xi; A_1, \Pi \to A_2} &
\infer[\mathop{\backslash} L]{\Xi(\zeta_1, \zeta_2; \Delta_1, \Gamma, A_1 \mathop{\backslash} A_2, \Delta_2) \to C}{\zeta_1; \Gamma \to A_1 & \Xi(\zeta_2; \Delta_1, A_2, \Delta_2) \to C}}
$$
transforms into
$$\small
\infer[\mathrm{cut}]{\Xi (\xi, \zeta_1, \zeta_2; \Delta_1, \Gamma, \Pi, \Delta_2) \to C}
{\zeta_1; \Gamma \to A_1 & \infer[\mathrm{cut}]{\Xi(\xi, \zeta_2; \Delta_1, A_1, \Pi, \Delta_2) \to C}{\xi; A_1, \Pi \to A_2 & \Xi(\zeta_2; \Delta_1, A_2, \Delta_2) \to C}}
$$
Both new cuts have a smaller $\kappa$ parameter (and then we do not care for $\sigma$).
In the new derivation no new $\mathop{\backslash} R$ instance was added, so Lambek's restriction is observed.
{\em Subcase 2.b.} $A = A_1 \cdot A_2$.
In this case
$$\small
\infer[\mathrm{cut}]{\Xi (\xi_1, \xi_2, \zeta; \Delta_1, \Pi_1, \Pi_2, \Delta_2) \to C}
{\infer[\cdot R]{\xi_1, \xi_2; \Pi_1, \Pi_2 \to A_1 \cdot A_2}{\xi_1; \Pi_1 \to A_1 & \xi_2; \Pi_2 \to A_2} &
\infer[\cdot L]{\Xi(\zeta; \Delta_1, A_1 \cdot A_2, \Delta_2) \to C}{\Xi(\zeta; \Delta_1, A_1, A_2, \Delta_2) \to C}}
$$
transforms into
$$\small
\infer[\mathrm{cut}]{\Xi (\xi_1, \xi_2, \zeta; \Delta_1, \Pi_1, \Pi_2, \Delta_2) \to C}
{\xi_1; \Pi_1 \to A_1 & \infer[\mathrm{cut}]{\Xi(\xi_2, \zeta; \Delta_1, A_1, \Pi_2, \Delta_2) \to C}{\xi_2; \Pi_2 \to A_2 &
\Xi(\zeta; \Delta_1, A_1, A_2, \Delta_2) \to C}}
$$
(again $\kappa$ decreases).
{\em Subcase 2.c.} $A = \mathbf{1}$:
$$\small
\infer[\mathrm{cut}]{\Xi(\zeta; \Delta_1, \Delta_2) \to C}{\Lambda \to \mathbf{1} & \infer[\mathbf{1} L]{\Xi(\zeta; \Delta_1, \mathbf{1}, \Delta_2) \to C}
{\Xi(\zeta; \Delta_1, \Delta_2) \to C}}
$$
The goal coincides with the right premise, so we just remove this detour and arrive at a cut-free proof of
$\Xi(\zeta; \Delta_1, \Delta_2) \to C$.
{\em Subcase 2.d.} $A = A_1 \vee A_2$:
$$\small
\infer[\mathrm{cut}]{\Xi(\xi,\zeta; \Delta_1, \Pi, \Delta_2) \to C}
{\infer[\vee R_i]{\xi; \Pi \to A_1 \vee A_2}{\xi; \Pi \to A_i} &
\infer[\vee L]{\Xi(\zeta; \Delta_1, A_1 \vee A_2, \Delta_2) \to C}
{\Xi(\zeta; \Delta_1, A_1, \Delta_2) \to C & \Xi(\zeta; \Delta_1, A_2, \Delta_2) \to C}}
$$
transforms into
$$\small
\infer[\mathrm{cut}]{\Xi(\xi,\zeta; \Delta_1, \Pi, \Delta_2) \to C}
{\xi; \Pi \to A_i & \Xi(\zeta; \Delta_1, A_i, \Delta_2) \to C}
$$
($\kappa$ gets decreased, and the derivation of $\Pi \to A_j$ for $j \ne i$ gets forgotten).
{\em Subcase 2.e.} $A = A_1 \wedge A_2$:
$$\small
\infer[\mathrm{cut}]{\Xi(\xi,\zeta; \Delta_1, \Pi, \Delta_2) \to C}
{\infer[\wedge R]{\xi; \Pi \to A_1 \wedge A_2}{\xi; \Pi \to A_1 & \xi; \Pi \to A_2}
& \infer[\wedge L_j]{\Xi(\zeta; \Delta_1, A_1 \wedge A_2, \Delta_2) \to C}
{\Xi(\zeta; \Delta_1, A_j, \Delta_2) \to C}}
$$
transforms into
$$\small
\infer[\mathrm{cut}]{\Xi(\xi,\zeta; \Delta_1, \Pi, \Delta_2) \to C}
{\xi; \Pi \to A_j & \Xi(\zeta; \Delta_1, A_j, \Delta_2) \to C}
$$
($\kappa$ gets decreased, and the derivation of $\xi; \Pi \to A_i$ for $i \ne j$ gets forgotten).
{\em Subcase 2.f.} $A = \langle\rangle A'$. Notice that here $\xi$ is empty, otherwise $\langle\rangle R$ could not be applied.
$$\small
\infer[\mathrm{cut}]{\Xi(\zeta; \Delta_1, [\Pi], \Delta_2) \to C}
{\infer[\langle\rangle R]{[\Pi] \to \langle\rangle A'}{\Pi \to A'} &
\infer[\langle\rangle L]{\Xi(\zeta; \Delta_1, \langle\rangle A', \Delta_2) \to C}{\Xi(\zeta; \Delta_1, [A'], \Delta_2) \to C}}
$$
transforms into
$$\small
\infer[\mathrm{cut}]{\Xi(\zeta; \Delta_1, [\Pi], \Delta_2) \to C}{\Pi \to A' & \Xi(\zeta; \Delta_1, [A'], \Delta_2) \to C}
$$
($\kappa$ decreases).
{\em Subcase 2.g.} $A = []^{-1} A'$. In this subcase, notice that the stoup of the meta-formula including the active $[]^{-1} A'$ is empty, by the $[]^{-1} L$ rule;
$\zeta'$ below is the stoup of a {\em different} bracketed domain.
$$\small
\infer[\mathrm{cut}]{\Xi(\zeta'; \Delta_1, [\xi; \Pi], \Delta_2) \to C}
{\infer[[]^{-1} R]{\xi;\Pi \to []^{-1} A'}{[\xi;\Pi] \to A'} &
\infer[[]^{-1} L]{\Xi(\zeta'; \Delta_1, [[]^{-1} A'], \Delta_2) \to C}{\Xi(\zeta'; \Delta_1, A', \Delta_2) \to C}}
$$
transforms into
$$\small
\infer[\mathrm{cut}]{\Xi(\zeta'; \Delta_1, [\xi; \Pi], \Delta_2) \to C}{[\xi; \Pi] \to A' & \Xi(\zeta'; \Delta_1, A', \Delta_2) \to C}
$$
($\kappa$ decreases).
{\bf Case 3 (left non-principal).}
The lowermost rule of $\Der_{\mathrm{left}}$ is non-principal if and only if it is a left rule, {\em i.e.,}
operates in the antecedent.
{\em Subcase 3.a.} The lowermost rule in $\Der_{\mathrm{left}}$ is one of the one-premise rules,
$\cdot L$, $\mathbf{1} L$, $\wedge L_i$, $\langle\rangle L$, $[]^{-1} L$, ${!}L$, ${!}P$, ${!} C'$.
Such rules are called {\em easy rules} in~\cite{KanKuzNigSce2018Dale}, therefore we denote this rule by $ER$:
$$\small
\infer[\mathrm{cut}]{\Xi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2) \to C}{\infer[ER]{\xi;\Pi \to A}{\xi'; \Pi' \to A} &
\Xi(\zeta; \Gamma_1, A, \Gamma_2) \to C}
$$
The easy rule is still valid in a larger context, where $\Pi$ is put between $\Gamma_1$ and $\Gamma_2$ and $\xi$ is
added to $\zeta$, therefore one can reconstruct the
derivation as follows:
$$\small
\infer[ER]{\Xi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2) \to C}
{\infer[\mathrm{cut}]{\Xi(\xi',\zeta; \Gamma_1, \Pi', \Gamma_2) \to C}{\xi';\Pi' \to A & \Xi(\zeta; \Gamma_1, A, \Gamma_2) \to C}}
$$
The new cut has a smaller $\sigma$ with the same $\kappa$, and can be eliminated by induction.
Lambek's restriction on ${!}C'$, if it was imposed, is kept.
{\em Subcase 3.b.} The lowermost rule in $\Der_{\mathrm{left}}$ is $\mathop{\backslash} L$ or $\mathop{/} L$.
Here
$$\small
\infer[\mathrm{cut}]
{\Xi(\xi,\zeta; \Gamma_1, \Pi(\sigma,\chi; \Delta_1, \Psi, E \mathop{\backslash} F, \Delta_2), \Gamma_2) \to C}
{\infer[\mathop{\backslash} L]{\xi; \Pi(\sigma,\chi; \Delta_1, \Psi, E \mathop{\backslash} F, \Delta_2) \to A}
{\chi; \Psi \to E & \xi; \Pi(\sigma; \Delta_1, F, \Delta_2) \to A} &
\Xi(\zeta; \Gamma_1, A, \Gamma_2) \to C}
$$
transforms into
$$\small
\infer[\mathop{\backslash} L]
{\Xi(\xi,\zeta; \Gamma_1, \Pi(\sigma,\chi; \Delta_1, \Psi, E \mathop{\backslash} F, \Delta_2), \Gamma_2) \to C}
{\chi; \Psi \to E &
\infer[\mathrm{cut}]{\Xi(\xi,\zeta; \Gamma_1, \Pi(\sigma; \Delta_1, F, \Delta_2), \Gamma_2) \to C}
{\xi; \Pi(\sigma; \Delta_1, F, \Delta_2) \to A & \Xi(\zeta; \Gamma_1, A, \Gamma_2) \to C}}
$$
In particular, $\Pi$ could just coincide with the designated meta-formula inside. In this case $\sigma,\chi$ gets
merged with $\xi,\zeta$ into one stoup.
The new cut has a smaller $\sigma$ with the same $\kappa$. The $\mathop{/}$ case is symmetric.
{\em Subcase 3.c.} The lowermost rule in $\Der_{\mathrm{left}}$ is $\vee L$. Then a derivation of the sequent $\Xi(\xi,\zeta; \Gamma_1, \Pi(\sigma; \Delta_1, B_1 \vee B_2, \Delta_2), \Gamma_2) \to C$ by cut from $\xi; \Pi(\sigma; \Delta_1, B_1 \vee B_2, \Delta_2) \to A$ and $\Xi(\zeta; \Gamma_1, A, \Gamma_2) \to C$ is transformed into a derivation by $\vee L$, whose premises are derived as follows ($i=1,2$):
$$\small
\infer[\mathrm{cut}]{\Xi(\xi,\zeta; \Gamma_1, \Pi(\sigma; \Delta_1, B_i, \Delta_2), \Gamma_2) \to C}
{\xi; \Pi(\sigma; \Delta_1, B_i, \Delta_2) \to A &
\Xi(\zeta; \Gamma_1, A, \Gamma_2) \to C}
$$
In particular, $\Pi$ could just coincide with the designated meta-formula inside. In this case $\sigma$ gets
merged with $\xi,\zeta$ into one stoup.
Now we have two cuts, but each of them has a smaller $\sigma$ parameter with the same $\kappa$, and they are independent, {\em i.e.,} derivations of the premises of
these cuts are already cut-free. Thus, we proceed by induction.
{\bf Case 4 (right non-principal).}
{\em Subcase 4.a.}
In the right case we also have the notion of {\em easy rule,} which is a one-premise rule that does something
in the context ($\Xi$, $\zeta$, $\Gamma_1$, $\Gamma_2$, $C$) while keeping the active occurrence of $A$ intact. These rules are non-principal instances of
$\mathop{\backslash} R$, $\mathop{/} R$, $\cdot L$, $\mathbf{1} L$, $\wedge L_i$, $\vee R_i$, $\langle\rangle L$, $\langle\rangle R$, $[]^{-1} L$, $[]^{-1} R$, ${!}L$, ${!}P$. Contraction,
${!}C'$, is also essentially an easy rule, but we consider it more accurately below (Subcase~4.a$'$).
For easy rules,
the transformation is as follows:
$$\small
\infer[\mathrm{cut}]{\Xi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2) \to C}
{\xi; \Pi \to A & \infer[ER]{\Xi(\zeta; \Gamma_1, A, \Gamma_2) \to C}{\Xi'(\zeta'; \Gamma'_1, A, \Gamma'_2) \to C'}}
$$
transforms into
$$\small
\infer[ER]{\Xi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2) \to C}
{\infer[\mathrm{cut}]{\Xi'(\xi,\zeta'; \Gamma'_1, \Pi, \Gamma'_2) \to C'}{\xi; \Pi \to A & \Xi'(\zeta'; \Gamma'_1, A, \Gamma'_2) \to C'}}
$$
which is legal, since the easy rule is still valid with $\Pi$ substituted for $A$ (recall that $A$ was not the active
formula in the easy rule) and $\xi$ added to the stoup. Moreover, if the original application of the easy rule, $\mathop{\backslash} R$ or $\mathop{/} R$,
obeyed Lambek's restriction, so will the new one. The $\sigma$ parameter decreases, with the same $\kappa$.
{\em Subcase 4.a$'$.} The lowermost rule in $\Der_{\mathrm{right}}$ is ${!}C'$. The interesting situation here is when contraction and cut are
performed in the same meta-formula; otherwise ${!}C'$ acts as an easy rule, considered in the previous subcase. Since ${!}C'$ operates two
bracketed domains (the outer domain and the island), we have two sub-situations.
Namely,
$$\small
\infer[\mathrm{cut}]{\Xi(\xi,\zeta,B; \Gamma'_1, \Pi, \Gamma''_1, [[\zeta'; \Gamma_2]], \Gamma_3) \to C}
{\xi;\Pi \to A & \infer[{!}C']{\Xi(\zeta, B; \Gamma'_1, A, \Gamma''_1, [[\zeta'; \Gamma_2]], \Gamma_3) \to C}
{\Xi(\zeta, B; \Gamma'_1, A, \Gamma''_1, [\zeta', B; \Gamma_2], \Gamma_3) \to C}}
$$
transforms into
$$\small
\infer[{!}C']{\Xi(\xi,\zeta,B; \Gamma'_1, \Pi, \Gamma''_1, [[\zeta'; \Gamma_2]], \Gamma_3) \to C}
{\infer[\mathrm{cut}]{\Xi(\xi,\zeta,B; \Gamma'_1, \Pi, \Gamma''_1, [\zeta',B; \Gamma_2], \Gamma_3) \to C}
{\xi;\Pi \to A & \Xi(\zeta, B; \Gamma'_1, A, \Gamma''_1, [\zeta', B; \Gamma_2], \Gamma_3) \to C}}
$$
(the case with $A$ in $\Gamma_3$ is considered symmetrically), and
$$\small
\infer[\mathrm{cut}]{\Xi(\zeta,B; \Gamma_1, [[\xi,\zeta'; \Gamma'_2, \Pi, \Gamma''_2]], \Gamma_3) \to C}
{\xi;\Pi \to A & \infer[{!}C']{\Xi(\zeta,B; \Gamma_1, [[\zeta'; \Gamma'_2, A, \Gamma''_2]], \Gamma_3) \to C}
{\Xi(\zeta,B; \Gamma_1, [\zeta', B; \Gamma'_2, A, \Gamma''_2], \Gamma_3) \to C}}
$$
transforms into
$$\small
\infer[{!}C']{\Xi(\zeta,B; \Gamma_1, [[\xi,\zeta'; \Gamma'_2, \Pi, \Gamma''_2]], \Gamma_3) \to C}
{\infer[\mathrm{cut}]{\Xi(\zeta,B; \Gamma_1, [\xi,\zeta',B; \Gamma'_2, \Pi, \Gamma''_2], \Gamma_3) \to C}
{\xi;\Pi \to A & \Xi(\zeta,B; \Gamma_1, [\zeta', B; \Gamma'_2, A, \Gamma''_2], \Gamma_3) \to C}}
$$
The $\sigma$ parameter decreases, with the same $\kappa$. Lambek's restriction on ${!}C'$, if imposed, is kept.
{\em Subcase 4.b} (the counterpart of 3.b). The lowermost rule in $\Der_{\mathrm{right}}$ is a non-principal instance of $\mathop{\backslash} L$ or $\mathop{/} L$.
We consider $\mathop{\backslash} L$; $\mathop{/} L$ is symmetric. There are three possible situations, depending on the relative locations of the active $A$ of cut
and the active $E \mathop{\backslash} F$ of $\mathop{\backslash} L$.
If the active $A$ goes to the left premise of $\mathop{\backslash} L$, then
$$\small
\infer[\mathrm{cut}]{\Xi(\chi,\sigma; \Delta_1, \Phi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2), E \mathop{\backslash} F, \Delta_2) \to C}
{\xi;\Pi \to A &
\infer[\mathop{\backslash} L]{\Xi(\chi,\sigma; \Delta_1, \Phi(\zeta; \Gamma_1, A, \Gamma_2), E \mathop{\backslash} F, \Delta_2) \to C}
{\chi; \Phi(\zeta; \Gamma_1, A, \Gamma_2) \to E & \Xi(\sigma; \Delta_1, F, \Delta_2) \to C}
}
$$
transforms into
$$\small
\infer[\mathop{\backslash} L]
{\Xi(\chi,\sigma; \Delta_1, \Phi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2), E \mathop{\backslash} F, \Delta_2) \to C}
{\infer[\mathrm{cut}]{\chi; \Phi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2) \to E}{\xi;\Pi \to A & \chi; \Phi(\zeta; \Gamma_1, A, \Gamma_2) \to E} &
\Xi(\sigma; \Delta_1, F, \Delta_2) \to C}
$$
In particular, $\Phi$ could be just $\xi,\zeta; \Gamma_1, \Pi, \Gamma_2$, in which case $\xi,\zeta$ gets merged with $\chi,\sigma$ into one stoup.
If the active $A$ goes to the right premise of $\mathop{\backslash} L$, there are two more cases:
$$\small
\infer[\mathrm{cut}]{\Xi(\chi,\sigma; \Delta_1 (\xi,\zeta; \Gamma_1, \Pi, \Gamma_2), \Phi, E \mathop{\backslash} F, \Delta_2) \to C}
{\xi; \Pi \to A &
\infer[\mathop{\backslash} L]{\Xi(\chi,\sigma; \Delta_1(\zeta; \Gamma_1, A, \Gamma_2), \Phi, E \mathop{\backslash} F, \Delta_2) \to C}
{\chi; \Phi \to E & \Xi(\sigma; \Delta_1(\zeta; \Gamma_1, A, \Gamma_2), F, \Delta_2) \to C}}
$$
transforms into
$$\small
\infer[\mathop{\backslash} L]{\Xi(\chi,\sigma; \Delta_1 (\xi,\zeta; \Gamma_1, \Pi, \Gamma_2), \Phi, E \mathop{\backslash} F, \Delta_2) \to C}
{\chi; \Phi \to E &
\infer[\mathrm{cut}]{\Xi(\sigma; \Delta_1(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2), F, \Delta_2) \to C}
{\xi; \Pi \to A & \Xi(\sigma; \Delta_1(\zeta; \Gamma_1, A, \Gamma_2), F, \Delta_2) \to C}}
$$
(in particular, $\Delta_1$ could be just $\xi, \zeta; \Gamma_1, \Pi, \Gamma_2$, in which case $\xi,\zeta$ gets merged with
$\chi,\sigma$ into one stoup; the $\Delta_2$ case is symmetric), and, finally,
$$\small
\infer[\mathrm{cut}]{\Xi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2)(\chi, \sigma; \Delta_1, \Phi, E \mathop{\backslash} F, \Delta_2) \to C}
{\xi; \Pi \to A & \infer[\mathop{\backslash} L]{\Xi(\zeta; \Gamma_1, A, \Gamma_2)(\chi, \sigma; \Delta_1, \Phi, E \mathop{\backslash} F, \Delta_2) \to C}
{\chi; \Phi \to E & \Xi(\zeta; \Gamma_1, A, \Gamma_2)(\sigma; \Delta_1, F, \Delta_2) \to C}}
$$
transforms into
$$\small
\infer[\mathop{\backslash} L]{\Xi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2)(\chi, \sigma; \Delta_1, \Phi, E \mathop{\backslash} F, \Delta_2) \to C}
{\chi; \Phi \to E & \infer[\mathrm{cut}]
{\Xi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2)(\sigma; \Delta_1, F, \Delta_2) \to C}
{\xi; \Pi \to A & \Xi(\zeta; \Gamma_1, A, \Gamma_2)(\sigma; \Delta_1, F, \Delta_2) \to C}}
$$
The notation $\Xi(\ldots)(\ldots)$ means a meta-formula with {\em two} designated sub-meta-formulae, which are
independent (i.e., do not intersect).
The $\sigma$ parameter decreases, with the same $\kappa$.
{\em Subcase 4.c} (similar to the previous one). The lowermost rule of $\Der_{\mathrm{right}}$ is $\cdot R$:
$$\small
\infer[\mathrm{cut}]{\sigma_1, \sigma_2; \Gamma_1(\xi,\zeta; \Delta_1, \Pi, \Delta_2), \Gamma_2 \to E \cdot F}
{\xi;\Pi \to A &
\infer[\cdot R]{\sigma_1, \sigma_2; \Gamma_1(\zeta; \Delta_1, A, \Delta_2), \Gamma_2 \to E \cdot F}
{\sigma_1; \Gamma_1(\zeta; \Delta_1, A, \Delta_2) \to E & \sigma_2; \Gamma_2 \to F}}
$$
transforms into
$$\small
\infer[\cdot R]{\sigma_1, \sigma_2; \Gamma_1(\xi,\zeta; \Delta_1, \Pi, \Delta_2), \Gamma_2 \to E \cdot F}
{\infer[\mathrm{cut}]{\sigma_1; \Gamma_1(\xi, \zeta; \Delta_1, \Pi, \Delta_2) \to E}
{\xi;\Pi \to A & \sigma_1; \Gamma_1(\zeta; \Delta_1, A, \Delta_2) \to E}
& \sigma_2; \Gamma_2 \to F}
$$
Again, $\Gamma_1$ could coincide with its designated part; in this situation, $\xi,\zeta$ gets merged into $\sigma_1$.
The case where $A$ appears in $\Gamma_2$ is symmetric.
Here $\sigma$ decreases, with the same $\kappa$.
{\em Subcase 4.d} (the counterpart of 3.c). The lowermost rule in $\Der_{\mathrm{right}}$ is $\vee L$. Here we have two cases, depending on the
mutual location of the active $A$ of cut and the active $B_1 \vee B_2$ of $\vee L$. Namely, if they are in different (non-intersecting)
meta-formulae, then
$$\small
\infer[\mathrm{cut}]{\Xi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2)(\sigma; \Delta_1, B_1 \vee B_2, \Delta_2) \to C}
{\xi;\Pi \to A &
\infer[\vee L]{\Xi(\zeta; \Gamma_1, A, \Gamma_2)(\sigma; \Delta_1, B_1 \vee B_2, \Delta_2) \to C}
{\Xi(\zeta; \Gamma_1, A, \Gamma_2)(\sigma; \Delta_1, B_1, \Delta_2) \to C &
\Xi(\zeta; \Gamma_1, A, \Gamma_2)(\sigma; \Delta_1, B_2, \Delta_2) \to C}}
$$
transforms into two cuts ($i=1,2$):
$$\small
\infer[\mathrm{cut}]{\Xi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2)(\sigma; \Delta_1, B_i, \Delta_2) \to C}
{\xi;\Pi \to A & \Xi(\zeta; \Gamma_1, A, \Gamma_2)(\sigma; \Delta_1, B_i, \Delta_2) \to C}
$$
whose goals are then merged by $\vee L$.
If $B_1 \vee B_2$ is in $\Gamma_1$ (or, symmetrically, in $\Gamma_2$), then
$$\small
\infer[\mathrm{cut}]{\Xi(\xi,\zeta; \Gamma_1(\sigma; \Delta_1, B_1 \vee B_2, \Delta_2), \Pi, \Gamma_2) \to C}
{\xi; \Pi \to A & \infer[\vee L]{\Xi(\zeta; \Gamma_1(\sigma; \Delta_1, B_1 \vee B_2, \Delta_2), A, \Gamma_2) \to C}
{\Xi(\zeta; \Gamma_1(\sigma; \Delta_1, B_1, \Delta_2), A, \Gamma_2) \to C
& \Xi(\zeta; \Gamma_1(\sigma; \Delta_1, B_2, \Delta_2), A, \Gamma_2) \to C}}
$$
transforms into two cuts ($i=1,2$), whose goals again can be merged by $\vee L$:
$$\small
\infer[\mathrm{cut}]{\Xi(\xi,\zeta; \Gamma_1(\sigma; \Delta_1, B_i, \Delta_2), \Pi, \Gamma_2) \to C}
{\xi; \Pi \to A & \Xi(\zeta; \Gamma_1(\sigma; \Delta_1, B_i, \Delta_2), A, \Gamma_2) \to C}
$$
Again, $\Gamma_1$ could coincide with its designated sub-meta-formula; in this case, $\sigma$ gets merged with
$\xi,\zeta$.
In both situations, the two new cuts are independent and have a smaller $\sigma$ parameter with the same $\kappa$.
{\em Subcase 4.e} (similar to the previous one). The lowermost rule in $\Der_{\mathrm{right}}$ is $\wedge R$:
$$\small
\infer[\mathrm{cut}]{\Xi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2) \to C_1 \wedge C_2}{\xi;\Pi \to A &
\infer[\wedge R]{\Xi(\zeta; \Gamma_1, A, \Gamma_2) \to C_1 \wedge C_2}
{\Xi(\zeta; \Gamma_1, A, \Gamma_2) \to C_1 & \Xi(\zeta; \Gamma_1, A, \Gamma_2) \to C_2}}
$$
transforms into
$$\small
\infer[\wedge R]{\Xi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2) \to C_1 \wedge C_2}
{\infer[\mathrm{cut}]{\Xi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2) \to C_1}{\xi;\Pi \to A & \Xi(\zeta; \Gamma_1, A, \Gamma_2) \to C_1}
&\infer[\mathrm{cut}]{\Xi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2) \to C_2}{\xi;\Pi \to A & \Xi(\zeta; \Gamma_1, A, \Gamma_2) \to C_2}}
$$
Again, the two new cuts have a smaller $\sigma$ and the same $\kappa$.
{\bf Case 5 (axiom).} One of the sequents is an axiom of the form $A \to A$.
Then cut disappears, since the other premise coincides with the goal.
\qed
\end{proof}
\begin{theorem}\label{Th:cutelimA}
Let sequents $\xi; \Pi \to A$ and $\Xi(\zeta; \Gamma_1, A, \Gamma_2) \to C$ be derivable
in $\sysA'$.
Then the sequent $\Xi(\xi,\zeta; \Gamma_1, \Pi, \Gamma_2) \to C$ is also derivable
in $\sysA'$.
\end{theorem}
\begin{proof}
The $\sysA'$ system differs from $\sysB'$ only in two rules: ${!}R'$ and ${!}C'$. Thus, we have to
reconsider Case~1 and Subcase~4.a$'$ (in Subcase~3.a, ${!}C'$ still acts as an ``easy rule'').
{\em Case~1.} The bottom of $\Der_{\mathrm{left}}$ now is
$$
\infer[!R']{\xi; \Lambda \to {!}A}{\xi;\Lambda \to A}
$$
with $\xi \ne \varnothing$. We perform the same deep cut elimination procedure as in Theorem~\ref{Th:cutelim}. Namely,
the lowermost rule application in $\Der_{\mathrm{right}}$ is
$$
\infer[!L]{\Xi(\zeta; \Gamma', {!}A, \Gamma'') \to C}{\Xi(\zeta, A; \Gamma', \Gamma'') \to C}
$$
and we trace the $A$ in the stoup upwards until applications of ${!}P$. Next, we replace these occurrences of $A$ with $\xi$.
The active ${!}P$ applications,
$$
\infer[{!}P]
{\Xi_i(\zeta_i, A; \Gamma'_i, \Gamma''_i) \to C_i}
{\Xi_i(\zeta_i; \Gamma'_i, A, \Gamma''_i) \to C_i}
$$
transform into cuts with a smaller $\kappa$:
$$
\infer[\mathrm{cut}]{\Xi_i(\zeta_i, \xi; \Gamma'_i, \Gamma''_i) \to C_i}
{\xi; \Lambda \to A & \Xi_i(\zeta_i; \Gamma'_i, A, \Gamma''_i) \to C_i}
$$
Rule applications along the trace remain valid. Notice that the stoup non-emptiness conditions on ${!}R'$ and ${!}C'$ are maintained by
non-emptiness of $\xi$.
{\em Subcase 4.a$'$.} In this case,
$$
\infer[\mathrm{cut}]{\Xi(\xi,\zeta_1,\zeta_2,\zeta'; \Gamma'_1, \Pi, \Gamma''_1, \Gamma_2, \Gamma_3) \to C}
{\xi; \Pi \to A &
\infer[{!}C']{\Xi(\zeta_1,\zeta_2,\zeta'; \Gamma'_1, A, \Gamma''_1, \Gamma_2, \Gamma_3) \to C}
{\Xi(\zeta_1, \zeta_2; \Gamma'_1, A, \Gamma''_1, [\zeta',\zeta_2; \Gamma_2], \Gamma_3) \to C}}
$$
(with $\zeta_2 \ne \varnothing$) transforms into
$$
\infer[{!}C']{\Xi(\xi,\zeta_1,\zeta_2,\zeta'; \Gamma'_1, \Pi, \Gamma''_1, \Gamma_2, \Gamma_3) \to C}
{\infer[\mathrm{cut}]{\Xi(\xi,\zeta_1,\zeta_2; \Gamma'_1, \Pi, \Gamma''_1, [\zeta',\zeta_2;\Gamma_2], \Gamma_3) \to C}
{\xi; \Pi \to A & \Xi(\zeta_1, \zeta_2; \Gamma'_1, A, \Gamma''_1, [\zeta',\zeta_2; \Gamma_2], \Gamma_3) \to C}}
$$
(the case with $A$ in $\Gamma_3$ is considered symmetrically), and
$$
\infer[\mathrm{cut}]{\Xi(\zeta_1,\zeta_2,\xi,\zeta'; \Gamma_1, \Gamma'_2, \Pi, \Gamma''_2, \Gamma_3) \to C}
{\xi;\Pi \to A &
\infer[{!}C']{\Xi(\zeta_1,\zeta_2,\zeta' ; \Gamma_1, \Gamma'_2, A, \Gamma''_2, \Gamma_3) \to C}
{\Xi(\zeta_1, \zeta_2; \Gamma_1, [\zeta',\zeta_2; \Gamma'_2, A, \Gamma''_2], \Gamma_3) \to C}}
$$
(again, $\zeta_2 \ne \varnothing$) transforms into
$$
\infer[{!}C']{\Xi(\zeta_1,\zeta_2,\xi,\zeta'; \Gamma_1, \Gamma'_2, \Pi, \Gamma''_2, \Gamma_3) \to C}
{\infer[\mathrm{cut}]{\Xi(\zeta_1,\zeta_2; \Gamma_1, [\xi,\zeta',\zeta_2; \Gamma'_2, \Pi, \Gamma''_2], \Gamma_3) \to C}
{\xi;\Pi \to A & \Xi(\zeta_1, \zeta_2; \Gamma_1, [\zeta',\zeta_2; \Gamma'_2, A, \Gamma''_2], \Gamma_3) \to C }}
$$\qed
\end{proof}
\section{Versions of Modified Morrill's Systems without Stoups}\label{S:nostoups}
In this section we provide alternative and, in a sense, more traditional formulations of $\sysA'$, $\sysB'$, and $\sysBr'$ without
stoups (like in~\cite{MorrillValentin}). We denote these calculi by $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}$, $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}$, and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALCb}$ respectively.
Formulae, meta-formulae, and sequents are defined as in Morrill's system (Section~\ref{S:calculi}), but all the stoups are empty now.
We start with $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}$ and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}$. Both calculi are built on top of~$\mathbf{MALC}^{\boldsymbol{*}}\mathbf{b}$, the multiplicative-additive Lambek calculus with
brackets~\citep{Morrill1992,MoortgatMultimodal}. Axioms and rules of $\mathbf{MALC}^{\boldsymbol{*}}\mathbf{b}$ are as follows:
$$
\infer[\mathrm{id}]{A \to A}{}
$$
$$
\infer[{\mathop{/}} L]{\Xi (\Delta_1, C \mathop{/} B, \Gamma, \Delta_2) \to D}
{\Gamma \to B & \Xi(\Delta_1, C, \Delta_2 ) \to D}
\qquad
\infer[{\mathop{/}} R]{\Gamma \to C \mathop{/} B}{\Gamma, B \to C}
$$
$$
\infer[{\mathop{\backslash}} L]{\Xi (\Delta_1, \Gamma, A \mathop{\backslash} C, \Delta_2) \to D}
{\Gamma \to A & \Xi(\Delta_1, C, \Delta_2 ) \to D}
\qquad
\infer[{\mathop{\backslash}} R]{\Gamma \to A \mathop{\backslash} C}{A, \Gamma \to C}
$$
$$
\infer[{\cdot} L]{\Xi (\Delta_1, A \cdot B, \Delta_2) \to D}
{\Xi (\Delta_1, A, B, \Delta_2) \to D}
\qquad
\infer[{\cdot} R]{\Delta, \Gamma \to A \cdot B}
{\Delta \to A & \Gamma \to B}
$$
$$
\infer[{\U} L]{\Xi(\Delta_1, \U, \Delta_2) \to A}{\Xi(\Delta_1,\Delta_2) \to A}
\qquad
\infer[{\U} R]{\Lambda \to \U}{}
$$
$$
\infer[{[]^{-1}} L]{\Xi(\Delta_1, [[]^{-1} A], \Delta_2) \to B}
{\Xi (\Delta_1, A, \Delta_2) \to B}
\qquad
\infer[{[]^{-1}} R]{\Xi \to []^{-1} A}{[ \Xi ] \to A}
$$
$$
\infer[{\langle\rangle} L]{\Xi(\Delta_1, \langle\rangle A, \Delta_2) \to B}
{\Xi(\Delta_1, [A], \Delta_2) \to B}
\qquad
\infer[{\langle\rangle} R]{[\Xi] \to \langle\rangle A}{\Xi \to A}
$$
The following rules for ${!}$ are the same in both systems, $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}$ and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}$:
$$
\infer[!L]{\Xi(\Delta_1, {!}A, \Delta_2) \to C}{\Xi(\Delta_1, A, \Delta_2) \to C}
$$
$$
\infer[!P_1]{\Xi(\Delta_1, \Phi, {!}A, \Delta_2) \to C}{\Xi(\Delta_1, {!}A, \Phi, \Delta_2) \to C}
\qquad
\infer[!P_2]{\Xi(\Delta_1, {!}A, \Phi, \Delta_2) \to C}{\Xi(\Delta_1, \Phi, {!}A, \Delta_2) \to C}
$$
The difference between $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}$ and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}$ is in the $!R$ and $!C$ rules. In $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}$, they are formulated as follows:
$$
\infer[!R,\ n \geq 1]{{!}A_1, \ldots, {!}A_n \to {!}B}{{!}A_1, \ldots, {!}A_n \to B}
$$
$$
\infer[!C,\ n \geq 1]{\Xi({!}A_1, \ldots, {!}A_n, \Gamma_1, \Gamma_2, \Gamma_3) \to C}{\Xi({!}A_1, \ldots, {!}A_n, \Gamma_1, [{!}A_1, \ldots, {!}A_n, \Gamma_2], \Gamma_3) \to C}
$$
For $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}$, these rules are formulated as follows:
$$
\infer[!R]{{!}A \to {!}B}{{!}A \to B}
\qquad
\infer[!C]{\Xi({!}A, \Gamma_1, [[\Gamma_2]], \Gamma_3) \to C}{\Xi({!}A, \Gamma_1, [{!}A, \Gamma_2], \Gamma_3) \to C}
$$
The cut rule of all stoup-free calculi is formulated as follows:
$$
\infer[\mathrm{cut}]{\Xi(\Gamma_1, \Pi, \Gamma_2) \to C}{\Pi \to A & \Xi(\Gamma_1, A, \Gamma_2) \to C}
$$
In order to obtain $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALCb}$, we impose Lambek's restriction on the rules of $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}$ in the following natural way:
\begin{itemize}
\item in $\mathop{\backslash} R$ and $\mathop{/} R$, we require $\Gamma \ne \Lambda$;
\item in ${!}C$, we require $\Gamma_2 \ne \Lambda$.
\end{itemize}
\begin{proposition}\label{Prop:nostoupB}
Let $\Xi \to C$ be a sequent without stoups (in the context of $\sysB'$ we consider it as a sequent with empty stoups).
Then the following are equivalent:
\begin{enumerate}\itemsep=3pt
\item $\Xi \to C$ is derivable in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}$ without cut;
\item $\Xi \to C$ is derivable in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}$, possibly using cut;
\item $\Xi \to C$ is derivable in $\sysB'$, possibly using cut;
\item $\Xi \to C$ is derivable in $\sysB'$ without cut.
\end{enumerate}
The same holds for the variants with Lambek's restriction, $\sysBr'$ and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALCb}$.
\end{proposition}
\begin{proof}
We proceed by round-robin implications: $1 \Rightarrow 2 \Rightarrow 3 \Rightarrow 4 \Rightarrow 1$.
\fbox{$1 \Rightarrow 2$} Obvious.
\fbox{$2 \Rightarrow 3$}
Consider a derivation of $\Xi \to C$ in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}$ (possibly with cuts) and translate it into $\sysB'$. The interesting case concerns
$!$-operating rules. These rules are simulated using cut, by temporarily moving the active $!$-formula to the stoup (which is
normally empty):
{\small
$$
\infer[!P_1]
{\Xi(\Gamma_1, {!}A, \Phi, \Gamma_2) \to C}
{\Xi(\Gamma_1, \Phi, {!}A, \Gamma_2) \to C}
\quad\mbox{\raisebox{.8em}{$\leadsto$}}\quad
\infer[!L]
{\Xi(\Gamma_1, {!}A, \Phi, \Gamma_2) \to C}
{\infer[\mathrm{cut}]{\Xi(A; \Gamma_1, \Phi, \Gamma_2) \to C}
{A; \Lambda \to {!}A & \Xi(\Gamma_1, \Phi, {!}A, \Gamma_2) \to C}}
$$
$$
\infer[!C]
{\Xi({!}A, \Gamma_1, [[\Gamma_2]], \Gamma_3) \to C}
{\Xi({!}A, \Gamma_1, [{!}A, \Gamma_2], \Gamma_3) \to C}
\quad\mbox{\raisebox{.8em}{$\leadsto$}}\!
\infer[!L]
{\Xi({!}A, \Gamma_1, [[\Gamma_2]], \Gamma_3) \to C}
{\infer[!C]{\Xi(A; \Gamma_1, [[\Gamma_2]], \Gamma_3) \to C}
{\infer[\mathrm{cut}]{\Xi(A; \Gamma_1, [A; \Gamma_2], \Gamma_3) \to C}
{A; \Lambda \to {!}A &
\infer[\mathrm{cut}]{\Xi({!}A, \Gamma_1, [A; \Gamma_2], \Gamma_3) \to C}
{A; \Lambda \to {!}A &
\Xi({!}A, \Gamma_1, [{!}A, \Gamma_2], \Gamma_3) \to C}}}}
$$
$$
\infer[!R]
{{!}A \to {!}B}
{{!}A \to B}
\quad\mbox{\raisebox{.8em}{$\leadsto$}}\quad
\infer[!L]
{{!}A \to {!}B}
{\infer[!R]{A; \Lambda \to {!}B}
{\infer[\mathrm{cut}]{A; \Lambda \to B}{A; \Lambda \to {!}A & {!}A \to B}}}
$$
$$
\infer[!L]
{\Xi(\Gamma_1, {!}A, \Gamma_2) \to C}
{\Xi(\Gamma_1, A, \Gamma_2) \to C}
\quad\mbox{\raisebox{.8em}{$\leadsto$}}\quad
\infer[!L]
{\Xi(\Gamma_1, {!}A, \Gamma_2) \to C}
{\infer[!P]{\Xi(A; \Gamma_1, \Gamma_2) \to C}{\Xi(\Gamma_1, A, \Gamma_2) \to C}}
$$
}
The sequent $A; \Lambda \to {!}A$ is derived as follows:
{\small
$$
\infer[{!}R]{A; \Lambda \to {!}A}
{\infer[{!}P]{A; \Lambda \to A}{A \to A}}
$$}
All other rules, including cut, are translated straightforwardly.
\fbox{$3 \Rightarrow 4$} This is due to cut elimination (Theorem~\ref{Th:cutelim}).
\fbox{$4 \Rightarrow 1$} Consider a cut-free proof of $\Xi \to C$ in $\sysB'$.
In the goal sequent, all stoups are empty, but this need not be the case for sequents inside the derivation.
In each sequent, we ``flatten'' the stoups, replacing each meta-formula $\zeta; \Gamma$, where
$\zeta = \{ A_1, \ldots, A_N \}$, with ${!}A_1, \ldots, {!}A_N, \Gamma$. For $\zeta = \{ A_1, \ldots, A_N \}$, let us
denote ${!}A_1, \ldots, {!}A_N$ by ${!}\zeta$.
The rules of $\sysB'$ that do not operate on the stoup are mapped to those of $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}$, adding permutations for ${!}$-formulae where necessary.
For example, this is how it is performed for $\mathop{\backslash} R$ and $\mathop{\backslash} L$:
{\small
$$
\infer[\mathop{\backslash} R]{\zeta;\Gamma \to B \mathop{\backslash} C}{\zeta;B,\Gamma \to C}
\quad\mbox{\raisebox{.8em}{$\leadsto$}}\quad
\infer[\mathop{\backslash} R]{{!}\zeta, \Gamma \to B \mathop{\backslash} C}
{\infer=[{!}P]{B, {!}\zeta, \Gamma \to C}{{!}\zeta, B, \Gamma \to C}}
$$
$$
\infer[\mathop{\backslash} L]{\Xi(\zeta_1, \zeta_2; \Delta_1, \Gamma, B \mathop{\backslash} C, \Delta_2) \to D}
{\zeta_1; \Gamma\to B & \Xi(\zeta_2; \Delta_1, C, \Delta_2) \to D}
\quad\mbox{\raisebox{.8em}{$\leadsto$}}\quad
\infer=[{!}P]{\Xi({!}\zeta_1, {!}\zeta_2, \Delta_1, \Gamma, B \mathop{\backslash} C, \Delta_2) \to D}
{\infer[\mathop{\backslash} L]{\Xi({!}\zeta_2, \Delta_1, {!}\zeta_1, \Gamma, B \mathop{\backslash} C, \Delta_2) \to D}
{{!}\zeta_1, \Gamma \to B & \Xi({!}\zeta_2, \Delta_1, C, \Delta_2) \to D}}
$$
}
Notice how Lambek's restriction is conserved in the $\mathop{\backslash} R$ rule.
Contraction is handled as follows:
{\small $$
\infer[!C']{\Xi(\zeta, A; \Gamma_1, [[\zeta'; \Gamma_2]], \Gamma_3) \to B}
{\Xi(\zeta, A; \Gamma_1, [\zeta', A; \Gamma_2], \Gamma_3) \to B}
\quad\mbox{\raisebox{.8em}{$\leadsto$}}\quad
\infer[!P]{\Xi({!}\zeta, {!}A, \Gamma_1, [[{!}\zeta', \Gamma_2]], \Gamma_3) \to B}
{\infer[!C]{\Xi({!}A, {!}\zeta, \Gamma_1, [[{!}\zeta', \Gamma_2]], \Gamma_3) \to B}
{\infer[!P]{\Xi({!}A, {!}\zeta, \Gamma_1, [{!}A, {!}\zeta', \Gamma_2], \Gamma_3) \to B}
{\infer[!P]{\Xi({!}A, {!}\zeta, \Gamma_1, [{!}\zeta', {!}A, \Gamma_2], \Gamma_3) \to B}
{\Xi({!}\zeta, {!}A, \Gamma_1, [{!}\zeta', {!}A, \Gamma_2], \Gamma_3) \to B}}}}
$$}
Again, we notice that Lambek's restriction is conserved: the bracketed zone ${!}A, {!}\zeta', \Gamma_2$ is non-empty, since it contains ${!}A$.
Finally, ${!}L$ becomes just permutation, ${!}R'$ maps to ${!}R$, and the version of ${!}P$ in $\sysB'$ maps to a combination of permutation and ${!}L$ of $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}$.
\qed
\end{proof}
\begin{proposition}\label{Prop:nostoupA}
Let $\Xi \to C$ be a sequent without stoups (in the context of $\sysA'$ we consider it as a sequent with empty stoups).
Then the following are equivalent:
\begin{enumerate}\itemsep=3pt
\item $\Xi \to C$ is derivable in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}$ without cut;
\item $\Xi \to C$ is derivable in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}$, possibly using cut;
\item $\Xi \to C$ is derivable in $\sysA'$, possibly using cut;
\item $\Xi \to C$ is derivable in $\sysA'$ without cut.
\end{enumerate}
\end{proposition}
\begin{proof}
\fbox{$1 \Rightarrow 2$} Obvious.
\fbox{$2 \Rightarrow 3$} The ${!}R$ and ${!}C$ rules of $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}$ are simulated in the system $\sysA'$ as follows:
{\small
$$
\infer[{!}R,\ n \ge 1]{{!}A_1, \ldots, {!}A_n \to {!}B}
{{!}A_1, \ldots, {!}A_n \to B}
\quad\mbox{\raisebox{.8em}{$\leadsto$}}\quad
\infer=[{!}L]{{!}A_1, \ldots, {!}A_n \to {!}B}
{\infer[{!}R',\ \{A_1,\ldots,A_n\} \ne \varnothing]{A_1, \ldots, A_n; \Lambda \to {!}B}
{\infer[\mathrm{cut}]{A_1, \ldots, A_n; \Lambda \to B}
{A_1; \Lambda \to {!}A_1 & \infer{A_2, \ldots, A_n; {!}A_1 \to B}
{\infer[\mathrm{cut}]{\vdots}{A_n; \Lambda \to {!}A_n & {!}A_1, \ldots, {!}A_n \to B}}}}}
$$
$$
\infer[{!}C,\ n \ge 1]{\Xi({!}A_1, \ldots, {!}A_n, \Gamma_1, \Gamma_2, \Gamma_3) \to C}
{\Xi({!}A_1, \ldots, {!}A_n, \Gamma_1, [{!}A_1, \ldots, {!}A_n, \Gamma_2], \Gamma_3) \to C}
$$
$$ \mbox{\rotatebox{270}{$\leadsto$}} $$
$$
\infer=[{!}L]{\Xi({!}A_1, \ldots, {!}A_n, \Gamma_1, \Gamma_2, \Gamma_3) \to C}
{\infer[{!}C',\ \{A_1,\ldots,A_n\} \ne \varnothing]{\Xi(A_1, \ldots, A_n; \Gamma_1, \Gamma_2, \Gamma_3) \to C}
{\infer[\mathrm{cut}]{\Xi(A_1, \ldots, A_n; \Gamma_1, [A_1, \ldots, A_n; \Gamma_2], \Gamma_3) \to C}
{A_1; \Lambda \to {!}A_1 & \infer{\Xi(A_2, \ldots, A_n; {!}A_1, \Gamma_1, [A_1, \ldots, A_n; \Gamma_2], \Gamma_3) \to C}
{\infer[\mathrm{cut}]{\vdots}{A_n; \Lambda \to {!}A_n & \Xi({!}A_1, \ldots, {!}A_n, \Gamma_1, [{!}A_1, \ldots, {!}A_n, \Gamma_2], \Gamma_3) \to C }}}}}
$$
}
where the sequents $A_i; \Lambda \to {!}A_i$ ($i = 1, \ldots, n$) are derived as follows:
{\small
$$
\infer[{!}R',\ \{A_i\} \ne \varnothing]{A_i; \Lambda \to {!}A_i}
{\infer[{!}P]{A_i; \Lambda \to A_i}{A_i \to A_i}}
$$
}
All other rules are translated exactly as in the proof of the $2 \Rightarrow 3$ implication of Proposition~\ref{Prop:nostoupB}.
\fbox{$3 \Rightarrow 4$} This is due to cut elimination (Theorem~\ref{Th:cutelimA}).
\fbox{$4 \Rightarrow 1$}
The ${!}R'$ rule of $\sysA'$ maps directly onto the ${!}R$ rule of $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}$. Contraction is handled as follows:
{\small
$$
\infer[{!}C',\ \zeta_2 \ne \varnothing]
{\Xi(\zeta_1, \zeta_2, \zeta'; \Gamma_1, \Gamma_2, \Gamma_3) \to C}
{\Xi(\zeta_1, \zeta_2; \Gamma_1, [\zeta', \zeta_2; \Gamma_2], \Gamma_3) \to C}
$$
}
transforms into
{\small
$$
\infer=[{!}P]
{\Xi({!}\zeta_1, {!}\zeta_2, {!}\zeta', \Gamma_1, \Gamma_2, \Gamma_3) \to C}
{\infer[{!}C,\ {!}\zeta_2 \ne\Lambda]{\Xi({!}\zeta_2, {!}\zeta_1, \Gamma_1, {!}\zeta', \Gamma_2, \Gamma_3) \to C}
{\infer=[{!}P]{\Xi({!}\zeta_2, {!}\zeta_1, \Gamma_1, [{!}\zeta_2, {!}\zeta', \Gamma_2], \Gamma_3) \to C}
{\Xi({!}\zeta_1, {!}\zeta_2, \Gamma_1, [{!}\zeta', {!}\zeta_2, \Gamma_2], \Gamma_3) \to C}}}
$$
}
All other rules are translated exactly as in the proof of the $4 \Rightarrow 1$ implication of Proposition~\ref{Prop:nostoupB}.
\qed
\end{proof}
Finally, for $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ we prove only one implication, since the other one does not hold (for a counter-example, see Section~\ref{S:issues}).
\begin{proposition}\label{Prop:nostoupBx}
If a sequent without stoups is derivable in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$, then it is also derivable in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}$.
\end{proposition}
\begin{proof}
Recall that derivations in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ are cut-free by definition. Thus, this proposition can be proved by modifying the $4 \Rightarrow 1$ implication
of Proposition~\ref{Prop:nostoupB}. The rules that are different in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$, if compared with
$\sysB'$, are ${!}R$ and ${!}C$. Thus, we need to reconsider these rules.
Fortunately, after ``flattening'' the stoups, ${!}R$ and ${!}C$ become identical to ${!}R'$ and ${!}C'$ respectively.
\qed
\end{proof}
\section{The $\pi$ and $\pi_q$ Projections}\label{S:nobrackets}
The calculi presented above are related to the bracket-free system $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$ by means of so-called
{\em bracket-forgetting projections (BFP).}
The BFPs are going to be used in the undecidability proofs (Section~\ref{S:undec} below), in order to make
use of the standard undecidability proof for $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$ in the undecidability proofs for more sophisticated sequents
with brackets. We define two versions of BFP, and for simplicity we
do this for the syntax without stoups.
\begin{definition}
The {\em $\pi$-projection} of a formula is defined in the following recursive way:
\begin{align*}
& \pi(p) = p \mbox{ for $p \in \mathrm{Var}$} && \pi(A \cdot B) = \pi(A) \cdot \pi(B)\\
& \pi(A \mathop{\backslash} B) = \pi(A) \mathop{\backslash} \pi(B) && \pi(B \mathop{/} A) = \pi(B) \mathop{/} \pi(A) \\
& \pi(A_1 \wedge A_2) = \pi(A_1) \wedge \pi(A_2) && \pi(A_1 \vee A_2) = \pi(A_1) \vee \pi(A_2) \\
& \pi({!}A) = {!} \pi(A) && \pi(\mathbf{1}) = \mathbf{1}\\
& \pi([]^{-1} A) = \pi(A) && \pi(\langle\rangle A) = \pi(A)
\end{align*}
For meta-formulae (tree terms) and sequents without stoups the $\pi$-projection is defined as follows:
\begin{align*}
& \pi(\Gamma_1, \ldots, \Gamma_k) = \pi(\Gamma_1), \ldots, \pi(\Gamma_k) && \pi(\Lambda) = \Lambda\\
& \pi([\Xi]) = \pi(\Xi) && \pi(\Xi \to C) = (\pi(\Xi) \to \pi(C))
\end{align*}
\end{definition}
For technical reasons we shall also need the following modification of $\pi$-projection.
\begin{definition}
Let $q$ be a designated variable. Then the {\em $\pi_q$-projection} is defined on variables as follows:
$$
\pi_q(p) = \left\{
\begin{aligned}
& p, && \mbox{if $p \ne q$}\\
& \mathbf{1}, && \mbox{if $p = q$}
\end{aligned}
\right.
$$
and then propagated to formulae, meta-formulae, and sequents exactly as $\pi$.
\end{definition}
The $\pi$-projection erases all bracket information from a sequent. Since brackets block some unwanted derivabilities, this projection is only one-way sound,
as formulated in the following definition. The $\pi_q$-projection additionally makes the special variable $q$ behave as a unit (this is going to be necessary when
handling $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALCb}\mathrm{(st)}$, the system with Lambek's restriction which does not include a unit constant).
\begin{definition}
A calculus $\mathcal{L}$ is {\em $\pi$-sound ($\pi_q$-sound)} in $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$ if for any sequent derivable in $\mathcal{L}$ its $\pi$-projection (resp., $\pi_q$-projection) is
derivable in $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$.
\end{definition}
Since the rules of all of Morrill's systems (in the versions without stoups), after applying the $\pi$-projection, map to rules of $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$,
we automatically get $\pi$-soundness for Morrill's systems. For $\pi_q$-soundness, we additionally notice that the axiom $q \to q$ maps to
a derivable sequent $\mathbf{1} \to \mathbf{1}$ (everything else remains as for the $\pi$-projection).
\begin{proposition}
The following calculi are $\pi$-sound and $\pi_q$-sound in $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$: $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}$, $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}$, and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALCb}$.
\end{proposition}
Notice that $\pi$ and $\pi_q$ lose essential information about bracketing, so the reverse implication, ``$\pi$-completeness'' or ``$\pi_q$-completeness,''
does not (and is not intended to) hold. For example, $\langle\rangle p \to p$ is not derivable in any of Morrill's systems,
while its $\pi$-projection (and also $\pi_q$-projection), $p \to p$, is an axiom of $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$. More interesting examples arise from the linguistic usage
of brackets (see Linguistic Introduction): {\em e.g.,} {\sl *``the girl whom John loves Mary and Pete loves''} is not assigned type $NP$
(parsed as a valid noun phrase) in bracketed calculi, but after forgetting the brackets and bracket modalities the corresponding sequent becomes
derivable in $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$.
For fragments without additives, there are analogous notions of $\pi$-sound\-ness and $\pi_q$-sound\-ness of a given calculus in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$.
\section{Undecidability}\label{S:undec}
In this section we prove algorithmic undecidability of the derivability problems for systems defined above.
In order to make our results stronger, we confine ourselves to fragments of these systems which include only
multiplicative connectives (product and divisions), brackets and bracket modalities, and the subexponential, but
not additive connectives. For the full $\mathbf{MALC}^{\boldsymbol{*}}$-variants of the calculi, the corresponding undecidability results follow as corollaries, by
conservativity.
Recall the well-known proof of undecidability for $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$, the Lambek calculus (without brackets) enriched with a full-power exponential
modality~\citep{LMSS,Kanazawa,KanKuzNigSce2018Dale}, via encoding of {\em semi-Thue systems}.
A {\em semi-Thue system}~\citep{Thue} is a pair $\mathcal{S} = \langle \mathfrak{A}, P \rangle$, where $\mathfrak{A}$ is a finite alphabet and
$P$ is a finite set of rewriting rules of the form $\alpha \Rightarrow \beta$, where $\alpha$ and $\beta$
are words (possibly empty) over $\mathfrak{A}$. A rule from $P$ can be {\em applied} as follows:
$\eta\,\alpha\,\theta \Rightarrow_\mathcal{S} \eta\,\beta\,\theta$, where $(\alpha \Rightarrow \beta) \in P$ and
$\eta$ and $\theta$ are arbitrary (possibly empty) words over $\mathfrak{A}$. By $\Rightarrow^*_\mathcal{S}$ (rewritability
relation in $\mathcal{S}$) we denote the
reflexive-transitive closure of $\Rightarrow_\mathcal{S}$.
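To make the rewriting relation concrete, one-step rewriting and a bounded search for $\Rightarrow^*_\mathcal{S}$ can be sketched as follows. The function names and the step bound are our assumptions; since $\Rightarrow^*_\mathcal{S}$ is undecidable in general, no bound suffices, and the sketch gives only a semi-decision.

```python
def step(word, rules):
    """All words obtainable from `word` by one application of a rule
    alpha => beta (rules are given as pairs of strings)."""
    res = set()
    for alpha, beta in rules:
        start = 0
        while True:
            i = word.find(alpha, start)
            if i < 0:
                break
            res.add(word[:i] + beta + word[i + len(alpha):])
            start = i + 1
    return res

def rewrites_to(gamma, delta, rules, max_steps=20):
    """Breadth-first search for gamma =>* delta, up to `max_steps`
    rewriting steps; only a semi-decision of the (undecidable) problem."""
    frontier, seen = {gamma}, {gamma}
    for _ in range(max_steps):
        if delta in seen:
            return True
        frontier = {w for v in frontier for w in step(v, rules)} - seen
        seen |= frontier
    return delta in seen
```

For instance, with the single rule $ab \Rightarrow ba$ the word $abab$ rewrites to $baba$ in two steps, while $aa$ never rewrites to $ab$.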
We use the following classical result by~\citet{markov47dan} and~\citet{post47jsl}:
\begin{theorem}\label{Th:Markov}
There exists a semi-Thue system $\mathcal{S}$ for which the $\Rightarrow^*_\mathcal{S}$ relation is algorithmically undecidable,
i.e., there exists no algorithm that, given words $\gamma$ and $\delta$, decides whether $\gamma \Rightarrow^*_\mathcal{S} \delta$.
\end{theorem}
Before going into the details of our undecidability proof, we sketch the ideas. As noticed by~\citet{BuszkoZML,Buszkowski2005nonlog}, semi-Thue systems can be encoded as finite {\em theories} (that is, sets of sequents considered as additional axioms) over the Lambek calculus.
Following~\citet{Buszkowski2005nonlog}, such an encoding can be
performed in a very natural way, by taking a new axiom $y_1, \ldots, y_m \to x_1 \cdot\ldots\cdot x_k$ for each rule
$x_1 \ldots x_k \Rightarrow y_1 \ldots y_m$ of the semi-Thue system. (Notice how the arrows change their direction here.)
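As a small illustration of this arrow-reversing translation, the axioms can be generated mechanically. In the sketch below (ours, not from the cited works), a rule is represented as a pair of tuples of letters and the resulting sequents are rendered as plain strings purely for display.

```python
def axioms_from_rules(rules):
    """Buszkowski-style axioms: for each rewriting rule
    x1 ... xk => y1 ... ym produce the sequent
    y1, ..., ym -> x1 * ... * xk (note the reversed arrow)."""
    axioms = []
    for lhs, rhs in rules:             # rule: lhs => rhs
        antecedent = ', '.join(rhs)
        succedent = ' * '.join(lhs)
        axioms.append(f'{antecedent} -> {succedent}')
    return axioms
```

For example, the rule $ab \Rightarrow c$ yields the axiom $c \to a \cdot b$.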
\begin{proposition}\label{Pr:STStoTheory}
The sequent $a_1, \ldots, a_n \to b_1 \cdot\ldots\cdot b_k$ is derivable in $\mathbf{MALC}^{\boldsymbol{*}}$ extended with the set
of new axioms produced from rules of a semi-Thue system $\mathcal{S}$ (as explained above) if and only if
$b_1 \ldots b_k \Rightarrow^*_{\mathcal{S}} a_1 \ldots a_n$.
\end{proposition}
By Theorem~\ref{Th:Markov}, this yields undecidability for the problem of derivability from finite sets of hypotheses in the Lambek calculus.
In his earlier paper, \citet{BuszkoZML} provides a much more sophisticated encoding of semi-Thue derivations using only one division operation
(the encoding above also uses product). We discuss this encoding later, in Section~\ref{S:Buszko}.
Reducing derivability from finite theories to ``pure'' derivability requires a sort of deduction theorem, which allows one to {\em internalise}
additional axioms (hypotheses) into the sequent being derived. For classical or intuitionistic logic, for example, this could be implemented
as follows: formula $\varphi$ is derivable from hypotheses $\psi_1, \ldots, \psi_n$ if and only if the formula $\psi_1 \to (\psi_2 \to \ldots \to (\psi_n \to \varphi)
\ldots)$ is derivable without hypotheses. In the Lambek calculus without (sub)exponentials, however, such a theorem is impossible, due to the substructural nature
of the system. This is due to the fact that in derivations from hypotheses each hypothesis can be used several times (or not at all), while in the absence
of weakening and contraction the ``pure'' calculus treats each formula as a ``resource'' which should be used exactly once.
The full-power exponential, as in $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$, enables the structural rules of weakening, contraction, and permutation, and
enjoys internalisation of extra axioms, in the following form.
\begin{proposition}\label{Pr:dedEL}
A sequent $\Delta \to D$ is derivable in $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$ from a set of sequents $\{ \Gamma_1 \to C_1, \ldots, \Gamma_N \to C_N \}$ (possibly using cut) if and only if
the sequent ${!}(C_1 \mathop{/} \prod\Gamma_1), \ldots, {!}(C_N \mathop{/} \prod\Gamma_N), \Delta \to D$ is derivable in $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$.
\end{proposition}
Here and further, for $\Gamma = E_1, \ldots, E_k$, we denote by $\prod \Gamma$ the product $E_1 \cdot \ldots \cdot E_k$.
Theorem~\ref{Th:Markov} and Propositions~\ref{Pr:STStoTheory} and~\ref{Pr:dedEL} together yield undecidability for $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$.
Subexponentials, with restricted sets of structural rules, also allow internalisation, but the
$!$-formulae used in order to obtain it are more complicated. For $\boldsymbol{!}^{\mathbf{r}} \mathbf{L}^{\boldsymbol{*}}$, this is performed by~\cite{KanKuzNigSce2018Dale}
(\cite{KanKuzSceFG} use a slightly different strategy).
We perform internalisation of finite sets of hypotheses in Morrill's systems, where ${!}$ interacts with brackets.
For our undecidability proofs, it will be convenient to use Chomsky's type-0 (unrestricted) grammars~\citep{Chomsky}, a formalism closely
related to semi-Thue systems. A type-0 grammar $\mathcal{G}$ can be defined as a semi-Thue system $\mathcal{S}$ with the following additional features:
\begin{itemize}
\item a designated symbol $s \in \mathfrak{A}$, called the {\em starting symbol};
\item a designated subset $\Sigma \subset \mathfrak{A}$, called the {\em terminal alphabet};
\item left-hand sides of rewriting rules are required to be non-empty.
\end{itemize}
The {\em language} generated by the type-0 grammar $\mathcal{G}$ is defined as the set of all words $w$ over the {\em terminal} alphabet $\Sigma$,
such that $s \Rightarrow^*_\mathcal{S} w$, where $\mathcal{S}$ is the rewriting (semi-Thue) system of $\mathcal{G}$. Further we use the notation $\Rightarrow^*_\mathcal{G}$ instead
of $\Rightarrow^*_\mathcal{S}$.
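A bounded membership search for such a grammar can be sketched as follows. As before, this is only a semi-decision (the general problem is undecidable), and the grammar representation, bounds, and names are our assumptions for this example.

```python
from collections import deque

def generates(grammar, word, max_len=8, max_steps=50):
    """Bounded breadth-first search for s =>* word in a type-0 grammar,
    given as (start, terminals, rules) with rules as pairs of strings.
    Sentential forms longer than `max_len` are pruned, so this is only
    a semi-decision of the (undecidable) membership problem."""
    start, terminals, rules = grammar
    if not set(word) <= terminals:     # the language contains terminal words only
        return False
    queue, seen = deque([start]), {start}
    for _ in range(max_steps):
        if not queue:
            break
        v = queue.popleft()
        if v == word:
            return True
        for lhs, rhs in rules:
            i = v.find(lhs)
            while i >= 0:
                w = v[:i] + rhs + v[i + len(lhs):]
                if len(w) <= max_len and w not in seen:
                    seen.add(w)
                    queue.append(w)
                i = v.find(lhs, i + 1)
    return word in seen
```

For instance, the grammar with rules $s \Rightarrow asb$ and $s \Rightarrow ab$ generates exactly the words $a^n b^n$ (for $n \ge 1$) over $\Sigma = \{a, b\}$.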
For type-0 grammars, there is the following form of Theorem~\ref{Th:Markov}:
\begin{theorem}\label{Th:Markov_Gc}
There exists a type-0 grammar $\mathcal{G}$ such that the language generated by $\mathcal{G}$ is algorithmically undecidable,
i.e., there exists no algorithm that, given a word $w$ over $\Sigma$, decides whether $s \Rightarrow^*_\mathcal{G} w$.
\end{theorem}
Wishing to prove undecidability for several closely related calculi,
we first introduce an abstract notion
of internalisation of finite theories in a deduction-theorem style. Then we prove undecidability for an arbitrary calculus $\mathcal{L}$ which enjoys
this property and is $\pi_q$-sound in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$. The internalisation property facilitates the ``forward'' direction of the encoding,
from a type-0 grammar (semi-Thue system) to $\mathcal{L}$, and $\pi_q$-soundness is used for the ``backwards'' direction, from derivations
in $\mathcal{L}$, via $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$, back to derivations in the grammar.
For simplicity, we are going to internalise formulae rather than sequents: recall that $\Gamma \to C$ corresponds to $C \mathop{/} \prod\Gamma$.
Further we shall designate two specific variables, $s$ and $q$. The $s$ variable is going to be the starting symbol of the grammar, and
$q$ is a fresh variable which should not appear in the grammar. The $q$ variable will be used {\em in lieu} of $\mathbf{1}$, since the latter could
be unavailable in the presence of Lambek's restriction.
\begin{definition}\label{Df:intern}
Let $\mathcal{A} = \{ A_1, \ldots, A_N \}$ be a finite set of formulae.
A meta-formula $\Phi$ {\em internalises} $\mathcal{A}$ in the calculus $\mathcal{L}$, if the following holds:
\begin{enumerate}
\item \label{It:internS} the sequent $\Phi, s \to s$ is derivable in $\mathcal{L}$;
\item \label{It:internLand} the following `landing' rule is admissible in $\mathcal{L}$:
$$
\infer[\mathrm{land},\ A_i \in \mathcal{A}]
{\Phi, \Delta_1, \Delta_2 \to C}
{\Phi, \Delta_1, A_i, \Delta_2 \to C}
$$
\item \label{It:internBack}
the sequent ${!}A_1, \ldots, {!}A_N \to \prod\pi_q(\Phi)$ is derivable in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$.
\end{enumerate}
If for any finite set $\mathcal{A}$ of Lambek formulae there exists a meta-formula $\Phi$ which internalises $\mathcal{A}$ in $\mathcal{L}$, then
we say that $\mathcal{L}$ {\em internalises finite sets of Lambek formulae.}
\end{definition}
Let $\mathcal{G}$ be a type-0 grammar and suppose that all letters of its alphabet are
variables ($\mathfrak{A} \subset \mathrm{Var}$). We also introduce an extra fresh variable $q \notin \mathfrak{A}$. Let
$$
\mathcal{A}_{\mathcal{G}} = \{ (x_1 \cdot\ldots\cdot x_k) \mathop{/} (y_1 \cdot\ldots\cdot y_m) \mid
x_1 \ldots x_k \Rightarrow y_1 \ldots y_m \mbox{ is a rewriting rule of $\mathcal{G}$}\}.
$$
By definition of a type-0 grammar, $x_1 \ldots x_k$ is always non-empty. On the other hand,
$y_1 \ldots y_m$ could be empty ($m = 0$), and in this case we include just $x_1 \cdot\ldots\cdot x_k$ into $\mathcal{A}_\mathcal{G}$.
Such a rule, with an empty right-hand side, is called an {\em $\varepsilon$-rule} and written as
$x_1 \ldots x_k \Rightarrow \varepsilon$ ($\varepsilon$ stands for the empty word).
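The construction of $\mathcal{A}_{\mathcal{G}}$, including the treatment of $\varepsilon$-rules, can be sketched as follows; the string rendering of formulae is purely illustrative and the function name is our assumption.

```python
def internalisation_formulas(rules):
    """The set A_G: for each rule x1 ... xk => y1 ... ym take the formula
    (x1 * ... * xk) / (y1 * ... * ym); for an eps-rule (m = 0) just the
    numerator x1 * ... * xk.  Rules are pairs of tuples of letters."""
    formulas = []
    for lhs, rhs in rules:
        numerator = ' * '.join(lhs)
        if rhs:
            formulas.append(f'({numerator}) / ({" * ".join(rhs)})')
        else:                          # eps-rule: empty right-hand side
            formulas.append(numerator)
    return formulas
```

Thus the rule $x \Rightarrow yz$ contributes $x \mathop{/} (y \cdot z)$, while the $\varepsilon$-rule $ab \Rightarrow \varepsilon$ contributes just $a \cdot b$.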
Now we are ready to formulate and prove the key lemma.
\begin{lemma}\label{Lm:undecRR}
Let $\mathcal{L}$ be $\pi_q$-sound in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$ and admit the $\cdot L$, $\cdot R$, and $\mathop{/} L$ rules.
Let $\Phi_{\mathcal{G}}$ internalise $\mathcal{A}_{\mathcal{G}} = \{ A_1, \ldots, A_N \}$ in $\mathcal{L}$ in the sense of Definition~\ref{Df:intern}.
Then the following are equivalent:
\begin{enumerate}
\item $s \Rightarrow_{\mathcal{G}}^* a_1 \ldots a_n$;
\item the sequent $\Phi_{\mathcal{G}}, a_1, \ldots, a_n \to s$ is derivable in $\mathcal{L}$;
\item there exists such a bracketing $\Delta$ of $a_1, \ldots, a_n$ that the sequent
$\Phi_{\mathcal{G}}, \Delta \to s$ is derivable in $\mathcal{L}$;
\item the sequent ${!}A_1, \ldots, {!}A_N, a_1, \ldots, a_n \to s$ is derivable in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove round-robin implications: $1 \Rightarrow 2 \Rightarrow 3 \Rightarrow 4 \Rightarrow 1$.
\fbox{$1 \Rightarrow 2$}
Proceed by induction on the number of rewriting steps in $\Rightarrow_{\mathcal{G}}^*$. The base case is no rewriting steps,
$s \Rightarrow^*_{\mathcal{G}} s$, and $\Phi_{\mathcal{G}}, s \to s$ is derivable by Definition~\ref{Df:intern}, item~\ref{It:internS}.
Next, let
$a_1 \ldots a_i x_1 \ldots x_m a_r \ldots a_n \Rightarrow_{\mathcal{G}} a_1 \ldots a_i y_1 \ldots y_k a_r \ldots a_n$ be the last rewriting
step. By induction hypothesis, since $s \Rightarrow_{\mathcal{G}}^* a_1 \ldots a_i x_1 \ldots x_m a_r \ldots a_n$ in fewer rewriting steps,
we have $\Phi_{\mathcal{G}}, a_1, \ldots, a_i, x_1, \ldots, x_m, a_r, \ldots, a_n \to s$. Since $x_1 \ldots x_m \Rightarrow y_1 \ldots y_k$ is a rule of
$\mathcal{G}$, we have $(x_1 \cdot \ldots \cdot x_m) \mathop{/} (y_1 \cdot \ldots \cdot y_k) \in \mathcal{A}_{\mathcal{G}}$.
Now the needed sequent $\Phi_{\mathcal{G}}, a_1, \ldots, a_i, y_1, \ldots, y_k, a_r, \ldots, a_n \to s$ is derived as follows,
using the `landing' rule provided by Definition~\ref{Df:intern}, item~\ref{It:internLand}:
$$
\infer[\mathrm{land}]{\Phi_{\mathcal{G}}, a_1, \ldots, a_i, y_1, \ldots, y_k, a_r, \ldots, a_n \to s}
{\infer[\mathop{/} L]{\Phi_{\mathcal{G}}, a_1, \ldots, a_i, (x_1 \cdot \ldots \cdot x_m) \mathop{/} (y_1 \cdot \ldots \cdot y_k), y_1, \ldots, y_k, a_r, \ldots, a_n \to s}
{\infer=[\cdot R]{\vphantom{\Phi} y_1, \ldots, y_k \to y_1 \cdot \ldots \cdot y_k}{\mbox{axioms}} &
\infer=[\cdot L]{\Phi_{\mathcal{G}}, a_1, \ldots, a_i, x_1 \cdot \ldots \cdot x_m, a_r, \ldots, a_n \to s}
{\Phi_{\mathcal{G}}, a_1, \ldots, a_i, x_1, \ldots, x_m, a_r, \ldots, a_n \to s}}}
$$
(Here and further, a double horizontal line means several applications of the same rule.)
\fbox{$2 \Rightarrow 3$} is obvious, since one just takes the trivial bracketing $\Delta = a_1, \ldots, a_n$.
\fbox{$3 \Rightarrow 4$}
Since $a_1, \ldots, a_n = \pi_q(\Delta)$ and $\mathcal{L}$ is $\pi_q$-sound in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$, the sequent
$\pi_q(\Phi_{\mathcal{G}}), a_1, \ldots, a_n \to s$ is derivable in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$. Next, proceed as follows, using item~\ref{It:internBack} of
Definition~\ref{Df:intern}:
$$
\infer[\mathrm{cut}]{{!}A_1, \ldots, {!}A_N, a_1, \ldots, a_n \to s}{{!}A_1, \ldots, {!}A_N \to \prod \pi_q(\Phi_{\mathcal{G}}) &
\infer=[\cdot L]{\prod \pi_q(\Phi_{\mathcal{G}}), a_1, \ldots, a_n \to s}{\pi_q(\Phi_{\mathcal{G}}), a_1, \ldots, a_n \to s}}
$$
\fbox{$4 \Rightarrow 1$}
This part comes directly from the standard undecidability proof for $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$, see~\cite{KanKuzNigSce2018Dale}.
Consider
the derivation of the sequent ${!}A_1, \ldots, {!}A_N, a_1, \ldots, a_n \to s$ in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$.
The cut rule in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$ is eliminable~\citep{KanKuzNigSce2018Dale},
so we can suppose that
this derivation is cut-free. All formulae in this
derivation are subformulae of the goal sequent, and the only applicable rules
are ${\cdot} L$, ${\cdot} R$, ${\mathop{/}} L$, and rules operating ${!}$ in the antecedent:
${!}L$, ${!}C_{1,2}$, ${!}W$.
Now let us hide all the formulae which include $\mathop{/}$ or ${!}$.
This trivialises all ${!}$-operating rules. Next, let us replace
all $\cdot$'s in the antecedents with commas.
This, in its turn, trivialises ${\cdot} L$.
All sequents in our derivation
are now of the form $b_1, \ldots, b_\ell \Rightarrow C$, where $\ell \ge 0$ and $C = c_1 \cdot \ldots \cdot c_r$ ($r \ge 1$) or $C = \mathbf{1}$.
For the sake of uniformity, we also write $C = \mathbf{1}$ as $C = c_1 \cdot \ldots \cdot c_r$ with $r = 0$.
The $\mathop{/} L$ rule reduces to
$$
\infer{b_1, \ldots, b_i, b_{i+1}, \ldots, b_j, b_{j+1}, \ldots, b_s \to C}
{b_{i+1}, \ldots, b_j \to y_1 \cdot \ldots \cdot y_k &
b_1, \ldots, b_i, x_1, \ldots, x_m, b_{j+1}, \ldots, b_s \to C}
$$
where $x_1 \ldots x_m \Rightarrow y_1 \ldots y_k$ is a rewriting rule of $\mathcal{G}$.
For each $\varepsilon$-rule $x_1 \ldots x_m \Rightarrow \varepsilon$ in $\mathcal{G}$ we get the
rule
$$
\infer{b_1, \ldots, b_i, b_{i+1}, \ldots, b_s \to C}
{b_1, \ldots, b_i, x_1, \ldots, x_m, b_{i+1}, \ldots, b_s \to C}
$$
(which is the reduction of ${!}L$ for ${!}(x_1 \cdot \ldots \cdot x_m)$).
Finally, $\cdot R$ transforms into
$$
\infer{b_1, \ldots, b_i, b_{i+1}, \ldots, b_\ell \to c_1 \cdot \ldots \cdot c_j \cdot c_{j+1} \cdot \ldots \cdot c_r}
{b_1, \ldots, b_i \to c_1 \cdot \ldots \cdot c_j & b_{i+1}, \ldots, b_\ell \to c_{j+1} \cdot \ldots \cdot c_r}
$$
and axioms are of the form $a \to a$.
Now a straightforward induction on the derivation establishes the following fact: if $b_1, \ldots, b_\ell \to c_1 \cdot \ldots \cdot c_r$ is derivable
in the simplified calculus presented above, then $b_1 \ldots b_\ell$ is derivable from $c_1 \ldots c_r$ in the type-0 grammar $\mathcal{G}$, that is, $c_1 \ldots c_r \Rightarrow^*_{\mathcal{G}} b_1 \ldots b_\ell$. This finishes
our proof.
\qed
\end{proof}
Theorem~\ref{Th:Markov_Gc} and Lemma~\ref{Lm:undecRR} immediately yield the following generic undecidability result (``meta-theorem'').
\begin{theorem}\label{Th:undec_generic}
Let $\mathcal{L}$ be $\pi_q$-sound in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$, admit the $\cdot L$, $\cdot R$, and $\mathop{/} L$ rules, and internalise
finite sets of Lambek formulae. Then the derivability problem in $\mathcal{L}$ is undecidable.
\end{theorem}
Now, in order to prove undecidability, it is sufficient to show that the calculi considered in this paper internalise finite sets of Lambek formulae.
The easiest example is $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$, the Lambek calculus (without brackets) with a full-power exponential modality.
A set $\mathcal{A} = \{ A_1, \ldots, A_N \}$ is internalised in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$ by $\Phi = {!}A_1, \ldots, {!}A_N$ (cf. Proposition~\ref{Pr:dedEL}).
For $\boldsymbol{!}^{\mathbf{r}} \mathbf{L}^{\boldsymbol{*}}$, the situation is a bit trickier, since in the absence of the weakening rule
${!}A_1, \ldots, {!}A_N, s \to s$ (item~\ref{It:internS} of Definition~\ref{Df:intern}) is not derivable.
This issue is handled by extending $\Phi$ with extra formulae which neutralise ${!}A_i$. Namely, $\Phi = \mathbf{1} \mathop{/} {!}A_1, {!}A_1, \ldots,
\mathbf{1} \mathop{/} {!}A_N, {!} A_N$ internalises $\mathcal{A}$ in $\boldsymbol{!}^{\mathbf{r}} \mathbf{L}^{\boldsymbol{*}}$~\citep{KanKuzNigSce2018Dale}.
Actually, the $\mathbf{1}$ constant here can be replaced by $s \mathop{/} s$.
For the calculi with brackets, which interact with the contraction rule, we go further along this line. We still have to add formulae like
$\mathbf{1} \mathop{/} {!}A_i$, for item~\ref{It:internS} of Definition~\ref{Df:intern}; but now we also need to neutralise the changes which the contraction rule
makes on the bracketing structures. We carry this strategy out in Propositions~\ref{Pr:internA} and~\ref{Pr:internB} below.
We start with systems without stoups: $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{L^{\boldsymbol{*}}b}$, $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}$, and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{Lb}$.
\begin{proposition}\label{Pr:internA}
The meta-formula
$$
\Phi = {!} ((s \mathop{/} s) \mathop{/} {!} []^{-1} A_1), {!}[]^{-1} A_1, \ldots,
{!} ((s \mathop{/} s) \mathop{/} {!} []^{-1} A_N), {!} []^{-1} A_N
$$
internalises $\{A_1, \ldots, A_N\}$ in~$\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{L^{\boldsymbol{*}}b}$.
\end{proposition}
\begin{proof}
1. The sequent $\Phi, s \to s$ is derived in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{L^{\boldsymbol{*}}b}$ as follows:
{\small
$$
\infer=[!L]{{!} ((s \mathop{/} s) \mathop{/} {!} []^{-1} A_1), {!}[]^{-1} A_1, \ldots,
{!} ((s \mathop{/} s) \mathop{/} {!} []^{-1} A_N), {!} []^{-1} A_N, s \to s}
{\infer=[\mathop{/} L]{(s \mathop{/} s) \mathop{/} {!} []^{-1} A_1, {!} []^{-1} A_1, \ldots,
(s \mathop{/} s) \mathop{/} {!} []^{-1} A_N, {!} []^{-1} A_N, s \to s}
{{!} []^{-1} A_1 \to {!} []^{-1} A_1 & \ldots & {!}[]^{-1} A_N \to {!}[]^{-1} A_N &
\infer[\mathop{/} L]{s \mathop{/} s, \ldots, s \mathop{/} s, s \mathop{/} s, s \to s}{s \to s & \infer[\mathop{/} L]{s \mathop{/} s, \ldots, s \mathop{/} s, s \to s}{s \to s & \infer{\vdots}{s \to s}}}}}
$$
}
2. The `landing' rule is derived in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{L^{\boldsymbol{*}}b}$ as follows:
{\small
$$
\infer[!C\mbox{ applied to ${!}[]^{-1} A_i$}]
{\Phi, \Delta_1, \Delta_2 \to C}
{\infer[{!}L]{\Phi, \Delta_1, [{!}[]^{-1} A_i], \Delta_2 \to C}
{\infer[[]^{-1} L]{\Phi, \Delta_1, [[]^{-1} A_i], \Delta_2 \to C}{\Phi, \Delta_1, A_i, \Delta_2 \to C}}}
$$
}
3. Notice that
$$
\pi_q(\Phi) = {!} ((s \mathop{/} s) \mathop{/} {!}A_1), {!}A_1, \ldots, {!}((s \mathop{/} s) \mathop{/} {!}A_N), {!}A_N.
$$
Next, by applying $\cdot R$ to $\Lambda \to {!} ((s \mathop{/} s) \mathop{/} {!}A_1)$; ${!}A_1 \to {!}A_1$; \ldots;
$\Lambda \to {!} ((s \mathop{/} s) \mathop{/} {!}A_N)$; ${!}A_N \to {!}A_N$, we get the necessary sequent
${!}A_1, \ldots, {!}A_N \to \prod \pi_q(\Phi)$.
The sequents $\Lambda \to {!} ((s \mathop{/} s) \mathop{/} {!}A_i)$ are derived in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$ as follows:
{\small
$$
\infer[{!}R]{\Lambda \to {!}((s \mathop{/} s) \mathop{/} {!}A_i)}
{\infer[\mathop{/} R]{\Lambda \to (s \mathop{/} s) \mathop{/} {!}A_i}
{\infer[{!}W]{{!}A_i \to s \mathop{/} s}
{\infer[\mathop{/} R]{\Lambda \to s \mathop{/} s}{s \to s}}}}
$$}
\qed
\end{proof}
\begin{proposition}\label{Pr:internB}
The meta-formula
$$
\Phi = {!} ((s \mathop{/} s) \mathop{/} {!} Z_1), {!}Z_1, \ldots, {!}((s \mathop{/} s) \mathop{/} {!} Z_n), {!}Z_n,
{!}((s \mathop{/} s) \mathop{/} \langle\rangle\PMod q), [[ q ]],
$$
where
$$
Z_i = ([]^{-1} ({!}A_i \cdot \langle\rangle\PMod q)) \mathop{/} q,
$$
internalises $\{A_1, \ldots, A_n\}$ in~$\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}$ and~$\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{Lb}$.
\end{proposition}
Notice that in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{Lb}$ we do not need to impose any additional non-emptiness restriction on the `landing' rule,
since Lambek's restriction is automatically satisfied by non-emptiness of $\Phi$. Thus, for our undecidability proof
$\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{Lb}$ is capable of encoding arbitrary semi-Thue systems, even the ones which include $\varepsilon$-rules.
For categorial grammars, however, Lambek's restriction will make a difference in the expressive power of $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{Lb}$ as opposed
to $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}$, as explained in the next section.
\begin{proof}
1. The sequent $\Phi, s \to s$ is derived in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{Lb}$ (and therefore in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}$) as follows:
{\small $$
\infer=[!L]{{!} ((s \mathop{/} s) \mathop{/} {!}Z_1), {!}Z_1, \ldots, {!}((s \mathop{/} s) \mathop{/} {!}Z_n), {!}Z_n,
{!}((s \mathop{/} s) \mathop{/} \langle\rangle\PMod q), [[ q ]], s \to s}
{\infer=[\mathop{/} L]{(s \mathop{/} s) \mathop{/} {!}Z_1, {!}Z_1, \ldots, (s \mathop{/} s) \mathop{/} {!}Z_n, {!}Z_n,
(s \mathop{/} s) \mathop{/} \langle\rangle\PMod q, [[ q ]], s \to s}
{{!}Z_1 \to {!}Z_1 & \ldots & {!}Z_n \to {!}Z_n & \infer[\langle\rangle R]{[[q]] \to \langle\rangle\PMod q}{\infer[\langle\rangle R]{[q] \to \langle\rangle q}{q \to q}} &
\infer[\mathop{/} L]{s \mathop{/} s, \ldots, s \mathop{/} s, s \mathop{/} s, s \to s}{s \to s & \infer[\mathop{/} L]{s \mathop{/} s, \ldots, s \mathop{/} s, s \to s}{s \to s & \infer{\vdots}{s \to s}}}}}
$$}
Notice that all antecedents in this derivation are non-empty, so Lambek's restriction is observed.
2. Let $\Phi'$ be
${!}((s \mathop{/} s) \mathop{/} {!}Z_1), {!}Z_1, \ldots, {!}((s \mathop{/} s) \mathop{/} {!}Z_n), {!}Z_n, {!}((s \mathop{/} s) \mathop{/} \langle\rangle\PMod q)$, that is, $\Phi = \Phi', [[q]]$,
and recall that ${!}Z_i = {!} (([]^{-1} ({!}A_i \cdot \langle\rangle\PMod q)) \mathop{/} q)$.
Now the `landing' rule is derived in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{Lb}$ (and therefore also in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}$) as follows.
{\small $$
\infer[!C\mbox{ applied to ${!}Z_i$}]{\Phi', [[q]], \Delta_1, \Delta_2 \to C}
{\infer[!L]{\Phi', [{!} (([]^{-1} ({!}A_i \cdot \langle\rangle\PMod q)) \mathop{/} q), q], \Delta_1, \Delta_2 \to C}
{\infer[\mathop{/} L]{\Phi', [ ([]^{-1} ({!}A_i \cdot \langle\rangle\PMod q)) \mathop{/} q, q ], \Delta_1, \Delta_2 \to C}
{q \to q & \infer[[]^{-1} L]{\Phi', [ []^{-1} ({!}A_i \cdot \langle\rangle\PMod q) ], \Delta_1, \Delta_2 \to C}
{\infer[\cdot L]{\Phi', {!}A_i \cdot \langle\rangle\PMod q, \Delta_1, \Delta_2 \to C}
{\infer[!P_2]{\Phi', {!}A_i, \langle\rangle\PMod q, \Delta_1, \Delta_2 \to C}
{\infer[!L]{\Phi', \langle\rangle\PMod q, \Delta_1, {!}A_i, \Delta_2 \to C}
{\infer[\langle\rangle L]{\Phi', \langle\rangle\PMod q, \Delta_1, A_i, \Delta_2 \to C}
{\infer[\langle\rangle L]{\Phi', [\langle\rangle q], \Delta_1, A_i, \Delta_2 \to C}
{\Phi', [[q]], \Delta_1, A_i, \Delta_2 \to C}}}}}}}}}
$$}
Again, notice how Lambek's restriction is maintained by non-emptiness of $\Phi'$ and by the $q$ in the brackets
(when applying $!C$).
3.
Notice that
\begin{multline*}
\pi_q(\Phi) = {!} ((s \mathop{/} s) \mathop{/} {!}({!}A_1 \cdot \mathbf{1})) \mathop{/} \mathbf{1},
{!}({!}A_1 \cdot \mathbf{1}), \ldots,\\
{!} ((s \mathop{/} s) \mathop{/} {!}({!}A_n \cdot \mathbf{1})) \mathop{/} \mathbf{1},
{!}({!}A_n \cdot \mathbf{1}), {!}((s \mathop{/} s) \mathop{/} \mathbf{1}), \mathbf{1}.
\end{multline*}
Next, we have the following derivations in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$:
{\small $$
\infer[\mathop{/} R]{\Lambda \to {!} ((s \mathop{/} s) \mathop{/} {!}({!}A_i \cdot \mathbf{1})) \mathop{/} \mathbf{1}}
{\infer[\mathbf{1} L]{\mathbf{1} \to {!} ((s \mathop{/} s) \mathop{/} {!}({!}A_i \cdot \mathbf{1}))}
{\infer[{!} R]{\Lambda \to {!} ((s \mathop{/} s) \mathop{/} {!}({!}A_i \cdot \mathbf{1}))}
{\infer[\mathop{/} R]{\Lambda \to (s \mathop{/} s) \mathop{/} {!}({!}A_i \cdot \mathbf{1})}
{\infer[{!} W]{{!}({!}A_i \cdot \mathbf{1}) \to s \mathop{/} s}
{\infer[\mathop{/} R]{\Lambda \to s \mathop{/} s}{s \to s}}}}}}
\qquad
\infer[{!} R]{{!}A_i \to {!}({!} A_i \cdot \mathbf{1})}
{\infer[\cdot R]{{!}A_i \to {!}A_i \cdot \mathbf{1}}
{{!}A_i \to {!}A_i & \Lambda \to \mathbf{1}}}
\qquad
\infer[{!} R]{\Lambda \to {!}((s \mathop{/} s) \mathop{/} \mathbf{1})}
{\infer[\mathop{/} R]{\Lambda \to (s \mathop{/} s) \mathop{/} \mathbf{1}}
{\infer[\mathbf{1} L]{\mathbf{1} \to s \mathop{/} s}
{\infer[\mathop{/} R]{\Lambda \to s \mathop{/} s}{s \to s}}}}
$$}
Also recall that $\Lambda \to \mathbf{1}$ is an axiom of $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$.
Now several applications of $\cdot R$ yield ${!}A_1, \ldots, {!}A_n \to \prod \pi_q(\Phi)$.
\qed
\end{proof}
Now by Theorem~\ref{Th:undec_generic} we get undecidability for the systems without stoups.
\begin{theorem}
The derivability problems for $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{L^{\boldsymbol{*}}b}$, $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}$, and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{Lb}$ are undecidable.
\end{theorem}
For systems with stoups, the situation is as follows.
First, concerning derivability of sequents with empty stoups, $\LsysA'$, $\LsysB'$, and $\LsysBr'$ are just equivalent to $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{L^{\boldsymbol{*}}b}$,
$\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}$, and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{Lb}$ respectively (Propositions~\ref{Prop:nostoupB} and~\ref{Prop:nostoupA}).
This gives undecidability for these systems.
\begin{theorem}
The derivability problems for $\LsysA'$, $\LsysB'$, and $\LsysBr'$ are undecidable.
\end{theorem}
Proving undecidability for original Morrill's systems, $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$ and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$ (which we altered in order to gain cut elimination, see
Section~\ref{S:issues}), requires some extra work. For $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$ it is easy.
The right rule for ${!}$ is never used in the proof of item~\ref{It:internS} in Proposition~\ref{Pr:internA}.
Also, in the landing rule (item~\ref{It:internLand}) there are no other ${!}$-formulae moved into the newly created
bracketed island. Thus, from the point of view of Proposition~\ref{Pr:internA} $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$ is indistinguishable from $\LsysA'$, and
this yields the necessary undecidability result.
\begin{theorem}
The derivability problem for $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$ is undecidable.
\end{theorem}
The case of $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$ is trickier. The issue is that the ${!}L$ rule in this calculus is not invertible, {\em i.e.,}
derivability of $\Xi(\zeta; \Delta_1, {!}A, \Delta_2) \to C$ is
not always equivalent to derivability of $\Xi(\zeta,A; \Delta_1, \Delta_2) \to C$. In particular,
${!}p \to {!}p$ is derivable, while $p; \Lambda \to {!}p$ is not. For our construction this issue
is crucial. Namely, when proving the `landing' rule (item~\ref{It:internLand} of Definition~\ref{Df:intern}), in
$\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$ we would have to move the ${!}$-formulae from $\Phi$ to the stoup. Otherwise, we could not operate the contraction
rule. This, however, would cause problems with item~\ref{It:internS}: the left premises of the derivation
(see proof of Proposition~\ref{Pr:internB}) become $Z_i; \Lambda \to {!}Z_i$ (instead of ${!}Z_i \to {!}Z_i$), and in general
are not derivable.
Fortunately, this issue is easily resolved by adding an extra ${!}$ on $Z_i$~\citep{KanKuzSceFG19}. Indeed,
${!}Z_i; \Lambda \to {!}Z_i$ {\em is} derivable from ${!}Z_i \to {!}Z_i$ by application of ${!}P$. This yields the
following internalisation property, where we essentially use the stoup for ${!}$-formulae in $\Phi$ (the rightmost
$[[q]]$ is kept outside the stoup).
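In derivation form, this observation is just one application of ${!}P$ (a sketch in the stoup notation, with an empty stoup omitted in the premise):
{\small
$$
\infer[{!}P]{{!}Z_i; \Lambda \to {!}Z_i}{{!}Z_i \to {!}Z_i}
$$
}
Thus, doubling the ${!}$ on $Z_i$ makes the required left premises derivable.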
\begin{proposition}\label{Pr:internBx}
The meta-formula
$$
\Phi = (s \mathop{/} s) \mathop{/} {!} Z_1, {!}Z_1, \ldots, (s \mathop{/} s) \mathop{/} {!} Z_N, {!}Z_N,
(s \mathop{/} s) \mathop{/} \langle\rangle\PMod q; [[ q ]],
$$
where
$$
Z_i = ([]^{-1} ({!}A_i \cdot \langle\rangle\PMod q)) \mathop{/} q,
$$
satisfies items~\ref{It:internS} and~\ref{It:internLand} of Definition~\ref{Df:intern} for internalisation of
$\{A_1, \ldots, A_N\}$ in~$\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$.
\end{proposition}
\begin{proof}
1. The sequent $\Phi,s \to s$ is derived in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$ as follows:
{\small
$$
\infer=[!P]{(s \mathop{/} s) \mathop{/} {!} Z_1, {!}Z_1, \ldots, (s \mathop{/} s) \mathop{/} {!} Z_N, {!}Z_N,
(s \mathop{/} s) \mathop{/} \langle\rangle\PMod q; [[ q ]], s \to s}
{\infer=[\mathop{/} L]{(s \mathop{/} s) \mathop{/} {!} Z_1, {!}Z_1, \ldots, (s \mathop{/} s) \mathop{/} {!} Z_N, {!}Z_N,
(s \mathop{/} s) \mathop{/} \langle\rangle\PMod q, [[ q ]], s \to s}
{{!}Z_1 \to {!}Z_1 & \ldots & {!}Z_N \to {!}Z_N & [[q]] \to \langle\rangle\PMod q &
s \mathop{/} s, \ldots, s \mathop{/} s, s \mathop{/} s, s \to s}}
$$
}
2. Let $\zeta$ be the stoup of $\Phi$, that is, $\Phi = \zeta; [[q]]$. Then the `landing' rule is established as follows:
{\small
$$
\infer[{!}C\mbox{ applied to ${!}Z_i$ in $\zeta$}]
{\zeta; [[q]], \Delta_1, \Delta_2 \to C}
{\infer[{!}P]{\zeta; [{!}Z_i; q], \Delta_1, \Delta_2 \to C}
{\infer[{!}L]{\zeta; [{!}Z_i, q], \Delta_1, \Delta_2 \to C}
{\infer[{!}P,\ Z_i = ([]^{-1} ({!}A_i \cdot \langle\rangle\PMod q)) \mathop{/} q]{\zeta; [Z_i; q], \Delta_1, \Delta_2 \to C}
{\infer[\mathop{/} L]{\zeta; [([]^{-1} ({!}A_i \cdot \langle\rangle\PMod q)) \mathop{/} q, q], \Delta_1, \Delta_2 \to C}
{q \to q & \infer[[]^{-1} L]{\zeta; [ []^{-1} ({!}A_i \cdot \langle\rangle\PMod q) ], \Delta_1, \Delta_2 \to C}
{\infer[\cdot L]{\zeta; {!}A_i \cdot \langle\rangle\PMod q, \Delta_1, \Delta_2 \to C}
{\infer[{!}L]{\zeta; {!}A_i, \langle\rangle\PMod q, \Delta_1, \Delta_2 \to C}
{\infer[{!}P]{\zeta, A_i; \langle\rangle\PMod q, \Delta_1, \Delta_2 \to C}
{\infer=[\langle\rangle L]{\zeta; \langle\rangle\PMod q, \Delta_1, A_i, \Delta_2 \to C}
{\zeta; [[ q ]], \Delta_1, A_i, \Delta_2 \to C}}}}}}}}}}
$$
}
\qed
\end{proof}
Now we can finalise our undecidability proof for $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$.
\begin{lemma}\label{Lm:keyBx}
Let $\mathcal{A}_{\mathcal{G}} = \{ A_1, \ldots, A_N \}$ and let $Z_i = ([]^{-1} ({!}A_i \cdot \langle\rangle\PMod q)) \mathop{/} q$ ($i = 1, \ldots, N$). Then the
sequent
$${!}((s \mathop{/} s) \mathop{/} {!} Z_1), {!}{!}Z_1, \ldots, {!}((s \mathop{/} s) \mathop{/} {!} Z_N), {!}{!}Z_N,
{!}((s \mathop{/} s) \mathop{/} \langle\rangle\PMod q), [[ q ]], a_1, \ldots, a_n \to s
$$
is derivable in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$ if and only if $s \Rightarrow^*_{\mathcal{G}} a_1 \ldots a_n$.
\end{lemma}
\begin{proof}
For the ``if'' direction, we use Proposition~\ref{Pr:internBx} and proceed exactly as in the $1 \Rightarrow 2$ implication from the proof
of Lemma~\ref{Lm:undecRR}. This gives derivability of
$$
(s \mathop{/} s) \mathop{/} {!} Z_1, {!}Z_1, \ldots, (s \mathop{/} s) \mathop{/} {!} Z_N, {!}Z_N,
(s \mathop{/} s) \mathop{/} \langle\rangle\PMod q; [[ q ]], a_1, \ldots, a_n \to s,
$$
which yields the necessary sequent
$${!}((s \mathop{/} s) \mathop{/} {!} Z_1), {!}{!}Z_1, \ldots, {!}((s \mathop{/} s) \mathop{/} {!} Z_N), {!}{!}Z_N,
{!}((s \mathop{/} s) \mathop{/} \langle\rangle\PMod q), [[ q ]], a_1, \ldots, a_n \to s
$$
by several applications of ${!}L$.
For the ``only if'' direction, since the sequent in question is derivable in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$, it is also derivable in
$\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}$ (Proposition~\ref{Prop:nostoupBx}). By cut with ${!}Z_i \to {!}{!}Z_i$ (which is derivable in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}$) we get derivability
of
$${!}((s \mathop{/} s) \mathop{/} {!} Z_1), {!}Z_1, \ldots, {!}((s \mathop{/} s) \mathop{/} {!} Z_N), {!}Z_N,
{!}((s \mathop{/} s) \mathop{/} \langle\rangle\PMod q), [[ q ]], a_1, \ldots, a_n \to s.
$$
Recall that cut is admissible in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}$ by Proposition~\ref{Prop:nostoupB}. Now we use Proposition~\ref{Pr:internB} and
the $4 \Rightarrow 1$ implication of Lemma~\ref{Lm:undecRR} and conclude that $s \Rightarrow^*_\mathcal{G} a_1 \ldots a_n$.
\qed
\end{proof}
\begin{theorem}
The derivability problem in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$ is undecidable.
\end{theorem}
\begin{proof}
Immediately by Lemma~\ref{Lm:keyBx} and Theorem~\ref{Th:Markov_Gc}.\qed
\end{proof}
\section{Generative Power of Categorial Grammars}\label{S:grammar}
Besides undecidability results,
Lemma~\ref{Lm:undecRR} has another corollary:
{\em categorial grammars} based on the calculi with subexponentials considered throughout this paper
generate exactly the class of all recursively enumerable languages.
The definitions of categorial grammars are given in Section~\ref{S:MALC} for calculi without brackets and in Section~\ref{S:calculi} for bracketed systems.
Recall that for the latter we distinguish two notions of recognition, namely s-recognition and t-recognition.
Let us briefly survey known results characterising classes of languages generated by categorial grammars over different extensions of the Lambek calculus.%
For the pure (multiplicative-only) Lambek calculus this class coincides with the class of context-free languages.
The hard part, Lambek to context-free, was done by~\citet{PentusCF}.
For the easier direction, context-free to Lambek, we refer to Gaifman~\citep{BGS1960}, who was the first to obtain the result,
and to~\citet{Buszko1985ZML} for a modern way of proving it using Greibach normal form~\citep{Greibach}.\footnote{The methods of Gaifman and Buszkowski work only for context-free languages without the empty word. The empty word case
was handled by~\citet{Kuz2012IGPL}.}
Adding the unit constant, $\mathbf{1}$, does not extend the class of languages generated by Lambek grammars---all $\mathbf{L}_{\U}$-languages are
still context-free~\citep{Kuz2012FG}.
Grammars based on the Lambek calculus with brackets, $\mathbf{L^{\boldsymbol{*}}b}$, also generate only context-free languages, both in the
sense of s- and t-recognition. This was initially claimed by~\citet{Jaeger2003}, but, as noticed by~\citet{FaddaMorrill2005},
his proof for t-recognition relied on an incorrect lemma by~\citet{Versmissen1996}. A correct proof was given by~\citet{Kanazawa2018}.
Additive connectives, $\vee$ and $\wedge$, increase the generative power
of Lambek grammars. Namely, as noticed by~\citet{KanazawaJoLLI},
$\mathbf{MALC}^{\boldsymbol{*}}$-grammars can generate finite intersections of context-free languages and, moreover,
images of such intersections under symbol-to-symbol homomorphisms
(that is, homomorphisms $h \colon \Sigma^* \to \Sigma^*$
that map $\Sigma$ to $\Sigma$). Furthermore, as shown by~\citet{Kuz2013}
and~\citet{KuznetsovOkhotin}, the class of $\mathbf{MALC}^{\boldsymbol{*}}$-languages
includes the class of languages generated by conjunctive grammars~\citep{OkhotinSurvey}. This latter class is strictly greater than the
class of intersections of context-free languages, but, unless $\mathsf{P} =
\mathsf{NP}$, is still not closed under symbol-to-symbol homomorphisms~\citep{KuznetsovOkhotin}.
In this section we show that adding the (sub)exponential modality, even constrained by brackets, increases
the power of Lambek categorial grammars to the highest possible level---all
recursively enumerable (r.e.) languages.
Among the many definitions of the class of r.e. languages, we choose the one based on type-0 grammars
(see Section~\ref{S:undec} above for definition). A language $M$ over alphabet $\Sigma$ is r.e. if and only if there
exists a type-0 grammar $\mathcal{G}$ such that $M = \{ w \in \Sigma^* \mid s \Rightarrow_\mathcal{G}^* w \}$, {\em i.e.,} $M$ is the
language generated by $\mathcal{G}$.
Obviously, all languages generated by categorial grammars based on the calculi considered in this paper are r.e.
The converse statement is non-trivial, and extends undecidability results of the previous section. For bracketed
calculi, moreover, we have two notions of recognition (s-recognition and t-recognition).
As in the previous section, we provide a generic result. Notice how Lambek's restriction comes into play here.
\begin{theorem}\label{Th:gram_generic}
\begin{enumerate}
\item Let $\mathcal{L}$ satisfy the conditions of Theorem~\ref{Th:undec_generic} and additionally admit
$\langle\rangle L$ and $\mathop{\backslash} R$ without Lambek's restriction. Then any r.e. language can be generated by a categorial
grammar based on $\mathcal{L}$.
\item Let $\mathcal{L}$ satisfy the conditions of Theorem~\ref{Th:undec_generic} and admit
$\langle\rangle L$ and $\mathop{\backslash} R$ with Lambek's restriction. Then any r.e. language without the empty word can be generated by a categorial
grammar based on $\mathcal{L}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Consider the type-0 grammar $\mathcal{G}$ which generates the given r.e. language.
Let $\Phi_\mathcal{G}$ internalise $\mathcal{G}$ in $\mathcal{L}$.
We are going to prove that the following three statements are equivalent:
\begin{enumerate}
\item a word $a_1 \ldots a_n$ belongs to the language generated
by $\mathcal{G}$ (that is, $s \Rightarrow^*_\mathcal{G} a_1 \ldots a_n$);
\item the sequent $a_1, \ldots, a_n \to \prod \Phi_\mathcal{G} \mathop{\backslash} s$ is derivable in $\mathcal{L}$;
\item the sequent $\Delta \to \prod \Phi_\mathcal{G} \mathop{\backslash} s$ is derivable in $\mathcal{L}$ for some bracketing $\Delta$ of
$a_1, \ldots, a_n$.
\end{enumerate}
The meta-formula $\Phi_\mathcal{G}$ could contain brackets; for brackets, we define $\prod$ as follows:
$\prod [ \Gamma ] = \langle\rangle (\prod \Gamma)$.
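For instance, for a meta-formula with a single bracketed island this definition gives
$$
\prod (A, [B, C], D) = A \cdot \langle\rangle (B \cdot C) \cdot D.
$$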
In order to establish \fbox{$1 \Rightarrow 2$}, apply Lemma~\ref{Lm:undecRR}. Let $s \Rightarrow^*_\mathcal{G} a_1 \ldots a_n$. We get
$\Phi_\mathcal{G}, a_1, \ldots, a_n \to s$ derivable in $\mathcal{L}$. Applying $\cdot L$ and $\langle\rangle L$ several times,
we obtain $\prod \Phi_\mathcal{G}, a_1, \ldots, a_n \to s$. Now we apply $\mathop{\backslash} R$. In Case~1, without Lambek's restriction,
we always get $a_1, \ldots, a_n \to \prod \Phi_\mathcal{G} \mathop{\backslash} s$. In Case~2, this is possible only for $n > 0$, and in
this case we consider only type-0 grammars which do not generate the empty word.
The \fbox{$2 \Rightarrow 3$} implication is established by taking the trivial bracketing $\Delta = a_1, \ldots, a_n$. Finally,
for the backwards \fbox{$3 \Rightarrow 1$} implication we use $\pi_q$-soundness of $\mathcal{L}$ in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$ and derive
$a_1, \ldots, a_n \to \prod \pi_q(\Phi_\mathcal{G}) \mathop{\backslash} s$ in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$.
In $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$, the $\mathop{\backslash} R$ rule is invertible using cut:
$$
\infer[\mathrm{cut}]{F, \Pi \to E}{\Pi \to F \mathop{\backslash} E & \infer[\mathop{\backslash} L]{F, F \mathop{\backslash} E \to E}{F \to F & E \to E}}
$$
Thus, we get $\prod \pi_q(\Phi_\mathcal{G}), a_1, \ldots, a_n \to s$, and by cut with item~\ref{It:internBack} of Definition~\ref{Df:intern}
we get ${!}A_1, \ldots, {!}A_N, a_1, \ldots, a_n \to s$. The $4 \Rightarrow 1$ implication of Lemma~\ref{Lm:undecRR} finishes the job.
Now the necessary categorial grammar is constructed as follows. The lexicon $\rhd$ is trivial, just the identity relation:
$\rhd = \{ \langle a,a \rangle \mid a \in \Sigma \}$. All the information is kept in the goal formula $H = \prod \Phi_\mathcal{G} \mathop{\backslash} s$.
By definition, this grammar generates the same language as $\mathcal{G}$, both in the sense of s-recognition and t-recognition. \qed
\end{proof}
Notice that the grammars constructed in Theorem~\ref{Th:gram_generic} have the property of {\em unique type assignment:} for each
letter $a$ there exists exactly one formula $A$ such that $a \rhd A$. For the pure Lambek calculus, constructing such grammars is
much harder. However, for each context-free language without the empty word there exists a Lambek grammar with unique type assignment,
as shown by~\citet{Safiullin}, see also~\citet{Kuzn2017WoLLIC}.
Internalisation properties (Propositions~\ref{Pr:internA}, \ref{Pr:internB}, \ref{Pr:internBx}) now yield the following theorems.
\begin{theorem}
Let $M$ be a recursively enumerable language. Then
\begin{enumerate}
\item there exists a $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$-grammar which generates $M$;
\item there exists a $\boldsymbol{!}^{\mathbf{r}} \mathbf{L}^{\boldsymbol{*}}$-grammar which generates $M$;
\item for each of the calculi $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$, $\LsysA'$, $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{L^{\boldsymbol{*}}b}$, $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$, $\LsysB'$, $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}$ there exists
a grammar which generates $M$, both in the sense of s-recognition and t-recognition.
\end{enumerate}
\end{theorem}
\begin{theorem}
Let $M$ be a recursively enumerable language without the empty word. Then
for each of the calculi $\LsysBr'$ and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{Lb}$ there exists a grammar
which generates $M$, both in the sense of s-recognition and t-recognition.
\end{theorem}
\section{Undecidability for One-Division Fragments}\label{S:Buszko}
In this section we strengthen our complexity lower bounds by restricting ourselves to the smallest non-trivial
fragment of the Lambek calculus with only one division operation,
extended by ${!}$ and, in the bracketed case, bracket modalities $\langle\rangle$ and $[]^{-1}$.
For these one-division systems, we obtain the same complexity results. Thus, the situation differs from that of
the pure Lambek calculus: while checking derivability in the Lambek calculus (without additives, brackets, and exponentials)
is an NP-complete problem~\citep{PentusNP}, for the one-division fragment there exists a polynomial time
algorithm~\citep{Savateev2010,KuznetsovTrMIAN2016}.
In contrast, in the presence of ${!}$ the one-division fragment is as powerful as the whole system, both with and without brackets.
Our construction is based on Buszkowski's method of encoding type-0 grammars in the one-division fragment of the Lambek calculus
extended with extra (non-logical) axioms~\citep{BuszkoZML}. Then these non-logical axioms are internalised using ${!}$, in the same way
as in Section~\ref{S:undec}.
We present a new, simplified version of Buszkowski's construction. This version does not require the type-0 grammar to be
translated into binary normal form, and, in particular, works also for languages with the empty word.
Recall that the one-division fragment of the Lambek calculus uses formulae constructed from variables using only one division operation, $\mathop{/}$.
We denote this fragment by $\mathbf{L}^{\boldsymbol{*}}(/)$ (the asterisk means that we do not impose Lambek's restriction). Axioms of $\mathbf{L}^{\boldsymbol{*}}(/)$ are of the form $A \to A$,
and its rules of inference are as follows:
$$
\infer[\mathop{/} L]{\Delta_1, B \mathop{/} A, \Pi, \Delta_2 \to C}{\Pi \to A & \Delta_1, B, \Delta_2 \to C}
\qquad
\infer[\mathop{/} R]{\Pi \to B \mathop{/} A}{\Pi, A \to B}
$$
$$
\infer[\mathrm{cut}]{\Delta_1, \Pi, \Delta_2 \to C}{\Pi \to A & \Delta_1, A, \Delta_2 \to C}
$$
Let us call a {\em B-rule} (after Buszkowski) an inference rule of the following form:
$$
\infer[B_{q_1,\ldots,q_m,r;p_1,\ldots,p_k,t}]{p_1,\ldots, p_k, \Delta \to t}{\Delta, q_1, \ldots, q_m \to r}
$$
In this rule, $q_1,\ldots,q_m,p_1,\ldots,p_k,r$, and $t$ are {\em concrete variables} from $\mathrm{Var}$, not meta-variables.
Each tuple of variables produces an independent B-rule.
On the other hand, $\Delta$ stands for an arbitrary sequence
of Lambek formulae (in the one-division language).
With each rule, we associate the following Gentzen-style rule, which we call a {\em $\mathrm{B}'$-rule:}
$$
\infer[B'_{q_1,\ldots,q_m,r;p_1,\ldots,p_k,t}]{\Pi_1, \ldots, \Pi_k, \Delta \to t}{\Pi_1 \to p_1 & \ldots & \Pi_k \to p_k & \Delta, q_1, \ldots, q_m \to r}
$$
and the following {\em B-formula:}\footnote{Here $E \mathop{/} F_1 \ldots F_n$ is used as a shortcut
for $(E \mathop{/} F_n) \mathop{/} \ldots \mathop{/} F_1$, thus, $B$ is a one-division formula.}
$$
B = (t \mathop{/} (r \mathop{/} q_1 \ldots q_m)) \mathop{/} p_1 \ldots p_k.
$$
For a B-formula $B$, we introduce the corresponding {\em B-axiom} $\Lambda \to B$.
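To illustrate the shortcut from the footnote, take $m = 2$ and $k = 1$; then the B-formula unfolds as
$$
B = (t \mathop{/} (r \mathop{/} q_1 q_2)) \mathop{/} p_1 = (t \mathop{/} ((r \mathop{/} q_2) \mathop{/} q_1)) \mathop{/} p_1,
$$
which is indeed a formula in the one-division language.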
B-rules, B-axioms, and $\mathrm{B}'$-rules will be used as extensions of the Lambek calculus with one division, and each of the three variants has its benefits:
\begin{enumerate}
\item with B-rules, a derivation of a sequent without Lambek connectives ({\em i.e.,} of the form $z_1, \ldots, z_m \to s$) is
non-branching, which will allow us to encode it using a ${!}$ modality with a restricted version of the contraction rule;
\item B-formulae are convenient for incorporating into the main sequent using a ``deduction theorem'' with $!$;
\item finally, the calculus with $\mathrm{B'}$-rules, unlike the first two variants, admits cut elimination (see Lemma~\ref{Lm:LdBcut} below),
which facilitates analysis of derivations.
\end{enumerate}
B-rules, B-axioms, and $\mathrm{B'}$-rules yield equivalent extensions of the Lambek calculus:
\begin{lemma}\label{Lm:eqBB}
In the presence of cut, the extensions of $\mathbf{L}^{\boldsymbol{*}}(/)$ with: (1) a set of B-rules; (2) the corresponding set of $\mathrm{B}'$-rules;
(3) the corresponding set of B-axioms are equivalent, i.e., derive the same set of sequents.
\end{lemma}
\begin{proof}
\fbox{$(1) \Rightarrow (2)$} (modelling B-rules using $\mathrm{B}'$-rules)
{\small
$$
\infer[B'_{q_1,\ldots,q_m,r;p_1,\ldots,p_k,t}]
{p_1, \ldots, p_k, \Delta \to t}
{p_1 \to p_1 & \ldots & p_k \to p_k & \Delta, q_1, \ldots, q_m \to r}
$$
}
\fbox{$(2) \Rightarrow (3)$} (modelling $\mathrm{B}'$-rules using B-axioms and cut)
{\small
$$
\infer=[\mathop{/} L]{(t \mathop{/} (r \mathop{/} q_1 \ldots q_m)) \mathop{/} p_1 \ldots p_k, \Pi_1, \ldots, \Pi_k, \Delta \to t}
{\Pi_1 \to p_1 & \ldots & \Pi_k \to p_k & \infer[\mathop{/} L]{t \mathop{/} (r \mathop{/} q_1 \ldots q_m), \Delta \to t}
{\infer=[\mathop{/} R]{\Delta \to r \mathop{/} q_1 \ldots q_m}{\Delta, q_1, \ldots, q_m \to r} & t \to t}}
$$
}
Then a cut with the B-axiom $\Lambda \to (t \mathop{/} (r \mathop{/} q_1 \ldots q_m)) \mathop{/} p_1 \ldots p_k$ yields the goal sequent
$\Pi_1, \ldots, \Pi_k, \Delta \to t$.
\fbox{$(3) \Rightarrow (1)$} (deriving B-axioms using B-rules)
{\small
$$
\infer=[\mathop{/} R]{\Lambda \to (t \mathop{/} (r \mathop{/} q_1 \ldots q_m)) \mathop{/} p_1 \ldots p_k}
{\infer[\mathop{/} R]{p_1, \ldots, p_k \to t \mathop{/} (r \mathop{/} q_1 \ldots q_m)}
{\infer[B_{q_1,\ldots,q_m,r;p_1,\ldots,p_k,t}]{p_1, \ldots, p_k, r \mathop{/} q_1 \ldots q_m \to t}
{\infer=[\mathop{/} L]{r \mathop{/} q_1 \ldots q_m, q_1, \ldots, q_m \to r}
{q_1 \to q_1 & \ldots & q_m \to q_m & r \to r}}}}
$$
}
\qed
\end{proof}
\begin{lemma}\label{Lm:LdBcut}
The one-division Lambek calculus $\mathbf{L}^{\boldsymbol{*}}(/)$, extended with an arbitrary finite set of $\mathrm{B}'$-rules, admits cut elimination.
\end{lemma}
\begin{proof}
The proof goes via a standard argument, exactly as for the Lambek calculus itself~\citep{Lambek58}. The global induction is on the number
of cuts in a derivation. Each cut is eliminated by a nested induction, where the outer parameter is the complexity of the formula being cut,
and the inner one is the total derivation depth of the premises of the cut.
The base case is cut with an axiom ($A \to A$), which just disappears. For the induction step, one distinguishes principal and non-principal cut premises.
A premise is called principal if the last rule in its derivation introduces the formula being cut. Thus, if the left premise is introduced by a
$\mathrm{B'}$-rule, it is always principal (since this rule introduces the succedent $t$). The key trick, however, is that if the {\em right} premise of a cut is introduced by a $\mathrm{B}'$-rule, then it is
{\em never} principal. This is due to the fact that a $\mathrm{B'}$-rule introduces nothing to the antecedent: the antecedent of its goal, $\Pi_1, \ldots, \Pi_k, \Delta$, is composed from antecedents of the premises.
Thus, there are three possible cases.
{\em Case 1:} the left premise is non-principal. Cut can be exchanged with the last rule in the derivation of the left premise, and propagates upward. The
inner induction parameter decreases, while the outer one stays intact.
Propagation of cut through non-principal $\mathop{/} R$ and $\mathop{/} L$ is standard and is performed exactly as in the
cut elimination proof of~\citet{Lambek58}. As for $\mathrm{B}'$-rules, such a rule cannot yield a non-principal left premise of cut.
{\em Case 2:} the right premise is non-principal. Cut propagates to the right.
{This is how cut gets propagated through a $\mathrm{B}'$-rule:
{\small $$
\infer[\mathrm{cut}]
{\Pi_1, \ldots, \Pi_k, \Delta', \Psi, \Delta'' \to t}{\Psi \to A & \infer[B']{\Pi_1, \ldots, \Pi_k, \Delta', A, \Delta'' \to t}
{\Pi_1 \to p_1 & \ldots & \Pi_k \to p_k & \Delta', A, \Delta'', q_1, \ldots, q_m \to r}}
$$}
transforms into
{\small
$$
\infer[B']
{\Pi_1, \ldots, \Pi_k, \Delta', \Psi, \Delta'' \to t}
{\Pi_1 \to p_1 & \ldots & \Pi_k \to p_k & \infer[\mathrm{cut}]{\Delta', \Psi, \Delta'', q_1, \ldots, q_m \to r}
{\Psi \to A & \Delta', A, \Delta'', q_1, \ldots, q_m \to r}}
$$}
and
{\small
$$
\infer[\mathrm{cut}]
{\Pi_1, \ldots, \Pi'_i, \Psi, \Pi''_i, \ldots, \Pi_k, \Delta \to t}{\Psi \to A &
\infer[B']{\Pi_1, \ldots, \Pi'_i, A, \Pi''_i, \ldots, \Pi_k, \Delta \to t}
{\Pi_1 \to p_1 & \ldots & \Pi'_i, A, \Pi''_i \to p_i & \ldots & \Pi_k \to p_k & \Delta, q_1, \ldots, q_m \to r}}
$$}
transforms into
{\small
$$
\infer[B']
{\Pi_1, \ldots, \Pi'_i, \Psi, \Pi''_i, \ldots, \Pi_k, \Delta \to t}
{\Pi_1 \to p_1 & \ldots &
\infer[\mathrm{cut}]{\Pi'_i, \Psi, \Pi''_i \to p_i}{\Psi \to A & \Pi'_i, A, \Pi''_i \to p_i} & \ldots & \Pi_k \to p_k &
\Delta, q_1, \ldots, q_m \to r}
$$}
\normalsize
}
Propagation of cut to the right through non-principal $\mathop{/} R$ and $\mathop{/} L$ is again due to Lambek.
{\em Case 3:} both left and right premises are principal, being introduced by $\mathop{/} R$ and $\mathop{/} L$ respectively. In this case
cut transforms into two cuts of lower complexity:
$$
\infer[\mathrm{cut}]{\Gamma, \Psi, \Pi, \Delta \to C}
{\infer[\mathop{/} R]{\Psi \to E \mathop{/} F}{\Psi, F \to E} &
\infer[\mathop{/} L]{\Gamma, E \mathop{/} F, \Pi, \Delta \to C}
{\Pi \to F & \Gamma, E, \Delta \to C}}
$$
becomes
$$
\infer[\mathrm{cut}]{\Gamma, \Psi, \Pi, \Delta \to C}
{\Pi \to F & \infer[\mathrm{cut}]{\Gamma, \Psi, F, \Delta \to C}
{\Psi, F \to E & \Gamma, E, \Delta \to C}}
$$
(This transformation, again, comes from Lambek's original proof.)
\qed
\end{proof}
Now we are ready to present the encoding of type-0 grammars. Consider a grammar $\mathcal{G} = \langle N, \Sigma, P, s \rangle$.
For each production $\mathfrak{p} = (v_1 \ldots v_m \Rightarrow w_1 \ldots w_k) \in P$ add $\mathbf{a}^{\pf}$, $\mathbf{b}^{\pf}$, $\mathbf{c}^{\pf}$, $\mathbf{d}^{\pf}$, $\mathbf{e}^{\pf}$, $\mathbf{f}^{\pf}$, and
$\overline{y}^{\pf}$, for each $y \in N \cup \Sigma$, as distinct variables to $\mathrm{Var}$, and consider the following seven B-rules:
\begin{align*}
& \infer[(1_{\mathfrak{p}})]{\mathbf{e}^{\pf}, \Delta \to \mathbf{a}^{\pf}}{\Delta \to s} && \infer[(4_{\mathfrak{p}})]{\overline{y}^{\pf}, \Delta \to \mathbf{b}^{\pf}}{\Delta, y \to \mathbf{b}^{\pf}} \\
& \infer[(2_{\mathfrak{p}})]{\overline{y}^{\pf}, \Delta \to \mathbf{a}^{\pf}}{\Delta, y \to \mathbf{a}^{\pf}} && \infer[(5_{\mathfrak{p}})]{\mathbf{f}^{\pf}, \Delta \to \mathbf{c}^{\pf}}{\Delta, \mathbf{e}^{\pf} \to \mathbf{b}^{\pf}} \\
& \infer[(3_{\mathfrak{p}})]{\overline{w}^{\pf}_1, \ldots, \overline{w}^{\pf}_k, \Delta \to \mathbf{b}^{\pf}}{\Delta, v_1, \ldots, v_m \to \mathbf{a}^{\pf}}
&& \infer[(6_{\mathfrak{p}})]{y, \Delta \to \mathbf{c}^{\pf}}{\Delta, \overline{y}^{\pf} \to \mathbf{c}^{\pf}} \\
& \infer[(7_{\mathfrak{p}})]{\Delta \to s}{\Delta, \mathbf{f}^{\pf} \to \mathbf{c}^{\pf}}
\end{align*}
By $\mathbf{B}_\mathcal{G}$ we denote the set of all B-rules obtained from production rules of $\mathcal{G}$ as shown above;
let $\mathcal{B}_\mathcal{G}$ be the corresponding set of B-formulae and $\mathbf{B}'_\mathcal{G}$ be the corresponding set of
$\mathrm{B}'$-rules.
Before going further, let us comment a bit on these B-rules. In the language without product, we cannot directly implement the `landing' rule which replaces one subword with another (that is, applies a semi-Thue transition) at an arbitrary place of the antecedent. However, if we manage to move the subword to the right-hand side of the antecedent, it can indeed be replaced by another one (and moved to the left-hand side) by a B-rule, which is our main rule $(3_\mathfrak{p})$. Other rules do the necessary preparations.
This idea is essentially due to Buszkowski; here we present it more straightforwardly. First, $(1_\mathfrak{p})$ starts the replacement procedure. Second, several applications of $(2_\mathfrak{p})$ rotate the antecedent so that the necessary subword is on the right-hand side of the antecedent. The usage of an alternative alphabet ($\overline{y}^{\pf}$ instead of $y$) and special variables in the succedent ($\mathbf{a}^{\pf}$, ...) here ensures that this process cannot be aborted, and other rules cannot be applied until we finish. Third, as said above, $(3_\mathfrak{p})$ performs the actual semi-Thue transition. Finally, $(4_\mathfrak{p})$--$(7_\mathfrak{p})$ perform the backwards rotation and quit the procedure. This strategy is formalized in the proof of the key Lemma~\ref{Lm:undecRRBuszko} below, which is the
version of Lemma~\ref{Lm:undecRR} for encoding Busz\-kow\-ski's rules.
\begin{lemma}\label{Lm:undecRRBuszko}
Let $\mathcal{L}$ be $\pi_q$-sound in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$ and admit the $\mathop{/} L$ and $\mathop{/} R$ rules (maybe with Lambek's restriction for the latter).
Let $\Psi_{\mathcal{G}}$ internalise $\mathcal{B}_{\mathcal{G}} = \{ B_1, \ldots, B_N \}$ in $\mathcal{L}$ (see Definition~\ref{Df:intern}).
Also let all formulae in $\Psi_{\mathcal{G}}$ be ${!}$-formulae, for which permutation rules are allowed in $\mathcal{L}$.
Then the following are equivalent:
\begin{enumerate}
\item $s \Rightarrow^*_{\mathcal{G}} z_1 \ldots z_n$;
\item $z_1, \ldots, z_n \to s$ is derivable from axiom $s \to s$, using only rules from $\mathbf{B}_{\mathcal{G}}$, without cut;
\item $\Psi_{\mathcal{G}}, z_1, \ldots, z_n \to s$ is derivable in~$\mathcal{L}$;
\item there exists a bracketing $\Delta$ of $z_1, \ldots, z_n$ such that the sequent $\Psi_{\mathcal{G}}, \Delta \to s$ is derivable in~$\mathcal{L}$;
\item the sequent ${!}B_1, \ldots, {!}B_N, z_1, \ldots, z_n \to s$ is derivable in $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$;
\item $z_1, \ldots, z_n \to s$ is derivable in $\mathbf{L}^{\boldsymbol{*}}(/)$ extended with rules from $\mathbf{B}'_\mathcal{G}$.
\end{enumerate}
\end{lemma}
\begin{proof}
This proof shares much with the proof of Lemma~\ref{Lm:undecRR}.
\fbox{$1 \Rightarrow 2$}
Proceed by induction on the derivation of $z_1 \ldots z_n$ from $s$ in $\mathcal{G}$.
The base case, $s \Rightarrow^*_\mathcal{G} s$, corresponds to the $s \to s$ axiom. For the induction step, consider the last
production rule $\mathfrak{p} = (v_1 \ldots v_m \Rightarrow w_1 \ldots w_k)$
applied in the derivation: $$s \Rightarrow^*_\mathcal{G} z_1 \ldots z_i v_1 \ldots v_m z_j \ldots z_n \Rightarrow_\mathcal{G}
z_1 \ldots z_i w_1 \ldots w_k z_j \ldots z_n.$$ By induction hypothesis, the sequent
$z_1, \ldots, z_i, v_1, \ldots, v_m, z_j, \ldots, z_n \to s$ is derivable. The necessary sequent
$z_1, \ldots, z_i, w_1, \ldots, w_k, z_j, \ldots, z_n \to s$ is now derived as follows:
$$
\infer[(7_\mathfrak{p})]{z_1, \ldots, z_i, w_1, \ldots, w_k, z_j, \ldots, z_n \to s}
{\infer=[(6_\mathfrak{p})]{z_1, \ldots, z_i, w_1, \ldots, w_k, z_j, \ldots, z_n, \mathbf{f}^{\pf} \to \mathbf{c}^{\pf}}
{\infer[(5_\mathfrak{p})]{\mathbf{f}^{\pf}, \overline{z}^{\pf}_1, \ldots, \overline{z}^{\pf}_i, \overline{w}^{\pf}_1, \ldots, \overline{w}^{\pf}_k, \overline{z}^{\pf}_j, \ldots, \overline{z}^{\pf}_n \to \mathbf{c}^{\pf}}
{\infer=[(4_\mathfrak{p})]{\overline{z}^{\pf}_1, \ldots, \overline{z}^{\pf}_i, \overline{w}^{\pf}_1, \ldots, \overline{w}^{\pf}_k, \overline{z}^{\pf}_j, \ldots, \overline{z}^{\pf}_n, \mathbf{e}^{\pf} \to \mathbf{b}^{\pf}}
{\infer[(3_\mathfrak{p})]{\overline{w}^{\pf}_1, \ldots, \overline{w}^{\pf}_k, \overline{z}^{\pf}_j, \ldots, \overline{z}^{\pf}_n, \mathbf{e}^{\pf}, z_1, \ldots, z_i \to \mathbf{b}^{\pf}}
{\infer=[(2_\mathfrak{p})]{\overline{z}^{\pf}_j, \ldots, \overline{z}^{\pf}_n, \mathbf{e}^{\pf}, z_1, \ldots, z_i, v_1, \ldots, v_m \to \mathbf{a}^{\pf}}
{\infer[(1_\mathfrak{p})]{\mathbf{e}^{\pf}, z_1, \ldots, z_i, v_1, \ldots, v_m, z_j, \ldots, z_n \to \mathbf{a}^{\pf}}
{z_1, \ldots, z_i, v_1, \ldots, v_m, z_j, \ldots, z_n \to s}}}}}}}
$$
\fbox{$2 \Rightarrow 3$}
Proceed by induction on derivation.
The base case, $\Psi_{\mathcal{G}}, s \to s$, is derivable by item~1 of Definition~\ref{Df:intern}. The induction step,
{\em i.e.,} application of a B-rule of the form
$$
\infer[B]{p_1, \ldots, p_k, \Delta \to t}{\Delta, q_1, \ldots, q_m \to r}
$$
is handled using the `landing' rule (item~2 of Definition~\ref{Df:intern}) as follows:
{\small
$$
\infer[\mathrm{land}]{\Psi_{\mathcal{G}}, p_1, \ldots, p_k, \Delta \to t}
{\infer=[\mathop{/} L]{\Psi_{\mathcal{G}}, (t \mathop{/} (r \mathop{/} q_1 \ldots q_m)) \mathop{/} p_1 \ldots p_k, p_1, \ldots, p_k, \Delta \to t}
{p_1 \to p_1 & \ldots & p_k \to p_k &
\infer=[!P_2\mbox{ applied to formulae of $\Psi_{\mathcal{G}}$}]{\Psi_{\mathcal{G}}, t \mathop{/} (r \mathop{/} q_1 \ldots q_m), \Delta \to t}
{\infer[\mathop{/} L]{t \mathop{/} (r \mathop{/} q_1 \ldots q_m), \Psi_{\mathcal{G}}, \Delta \to t}
{\infer=[\mathop{/} R]{\Psi_{\mathcal{G}}, \Delta \to r \mathop{/} q_1 \ldots q_m}{\Psi_{\mathcal{G}}, \Delta, q_1, \ldots, q_m \to r} & t \to t}}}}
$$}
Notice that here we essentially used the permutation rules for formulae of $\Psi_{\mathcal{G}}$.
\fbox{$3 \Rightarrow 4$} Obvious: take the trivial (empty) bracketing $\Delta = z_1, \ldots, z_n$.
\fbox{$4 \Rightarrow 5$} is handled exactly as in Lemma~\ref{Lm:undecRR}.
\fbox{$5 \Rightarrow 6$} Consider a cut-free derivation of ${!}B_1, \ldots, {!}B_N, z_1, \ldots, z_n \to s$ in $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$
and erase all $!$-formulae from it. Then applications of structural rules for ${!}$ become trivial, and $!L$ transforms into
$$
\infer{\Delta_1, \Delta_2 \to A}{\Delta_1, B, \Delta_2 \to A}
$$
where $B$ is a B-formula from $\mathcal{B}_\mathcal{G}$. This is equivalent to cut with the B-axiom $\Lambda \to B$.
Thus, $z_1, \ldots, z_n \to s$ is derivable in $\mathbf{L}^{\boldsymbol{*}}(/)$ extended by the set of B-axioms obtained from $\mathcal{G}$ and, by Lemma~\ref{Lm:eqBB}, in the corresponding extension by $\mathrm{B}'$-rules,
$\mathbf{B}'_\mathcal{G}$.
\fbox{$6 \Rightarrow 1$}
The extension of the Lambek calculus with $\mathbf{B}'_\mathcal{G}$ admits
cut elimination (Lemma~\ref{Lm:LdBcut}), and in the cut-free derivation the only rules that can be applied
are $\mathrm{B}'$-rules.
Proceed by induction on this derivation. The base case is the $s \to s$ axiom, and we have $s \Rightarrow_\mathcal{G}^* s$.
For the induction step, let us go upwards along the derivation, turning right at each application of a $\mathrm{B}'$-rule, and trace the succedent:
$$
\xymatrix{
s \ar[r]_{(7'_\mathfrak{p})} &
\ar@(ul,ur)^{(6'_\mathfrak{p})} \mathbf{c}^{\pf} \ar[r]_{(5'_\mathfrak{p})} &
\ar@(ul,ur)^{(4'_\mathfrak{p})} \mathbf{b}^{\pf} \ar[r]_{(3'_\mathfrak{p})} &
\ar@(ul,ur)^{(2'_\mathfrak{p})} \mathbf{a}^{\pf} \ar[r]_{(1'_\mathfrak{p})} &
s
}
$$
(Since variables $\mathbf{a}^{\pf}$, $\mathbf{b}^{\pf}$, and $\mathbf{c}^{\pf}$ could never appear in antecedents,
the derivation cannot stop at an axiom of the form $\mathbf{a}^{\pf} \to \mathbf{a}^{\pf}$ or alike.)
Essentially, as we shall see below, once we have started with $(7'_\mathfrak{p})$, we fix the production rule $\mathfrak{p}$
and perform, as a whole, the block of $\mathrm{B}'$-rules which emulates the application of $\mathfrak{p}$
(as the last production rule in the derivation). Then we return to a sequent of the form $\Delta \to s$,
ready to perform our backtracking further.
Variables $\mathbf{d}^{\pf}$, $\mathbf{e}^{\pf}$, $\mathbf{f}^{\pf}$, and $\overline{y}^{\pf}$ ($y \in N \cup \Sigma$) are never
succedents of conclusions of rules from $\mathbf{B}'_\mathcal{G}$. Therefore, left premises of the rules
$(1'_\mathfrak{p})$--$(5'_\mathfrak{p})$, which are of the form $\Pi_i \to p_i$, where $p_i$ is one of the
aforementioned variables, could only be axioms $p_i \to p_i$. This means that
$(1'_\mathfrak{p})$--$(5'_\mathfrak{p})$ actually transform into the corresponding B-rules,
$(1_\mathfrak{p})$--$(5_\mathfrak{p})$. As for $(7'_\mathfrak{p})$, it already coincides with $(7_\mathfrak{p})$.
Thus, the bottom of our derivation looks as follows, where $\Delta_1, \ldots, \Delta_{n'} = z_1, \ldots, z_n$:
$$
\infer[(7_\mathfrak{p})]
{\Delta_1, \ldots, \Delta_{n'} \to s}
{\infer=[(6'_\mathfrak{p})]{\Delta_1, \ldots, \Delta_{n'}, \mathbf{f}^{\pf} \to \mathbf{c}^{\pf}}
{\Delta_1 \to y_1 & \ldots & \Delta_{n'} \to y_{n'} &
\infer[(5_\mathfrak{p})]{\mathbf{f}^{\pf}, \overline{y}^{\pf}_1, \ldots, \overline{y}^{\pf}_i, \overline{w}^{\pf}_1, \ldots, \overline{w}^{\pf}_k, \overline{y}^{\pf}_j, \ldots, \overline{y}^{\pf}_{n'} \to \mathbf{c}^{\pf}}
{\infer=[(4_\mathfrak{p})]{\overline{y}^{\pf}_1, \ldots, \overline{y}^{\pf}_i, \overline{w}^{\pf}_1, \ldots, \overline{w}^{\pf}_k, \overline{y}^{\pf}_j, \ldots, \overline{y}^{\pf}_{n'}, \mathbf{e}^{\pf} \to \mathbf{b}^{\pf}}
{\infer[(3_\mathfrak{p})]{\overline{w}^{\pf}_1, \ldots, \overline{w}^{\pf}_k, \overline{y}^{\pf}_j, \ldots, \overline{y}^{\pf}_{n'}, \mathbf{e}^{\pf}, y_1, \ldots, y_i \to \mathbf{b}^{\pf}}
{\infer=[(2_\mathfrak{p})]{\overline{y}^{\pf}_j, \ldots, \overline{y}^{\pf}_{n'}, \mathbf{e}^{\pf}, y_1, \ldots, y_i, v_1, \ldots, v_m \to \mathbf{a}^{\pf}}
{\infer[(1_\mathfrak{p})]{\mathbf{e}^{\pf}, y_1, \ldots, y_i, v_1, \ldots, v_m, y_j, \ldots, y_{n'} \to \mathbf{a}^{\pf}}
{y_1, \ldots, y_i, v_1, \ldots, v_m, y_j, \ldots, y_{n'} \to s}}}}}}}
$$
Consider sequents of the form $\Delta_i \to y_i$ (left premises); $w_1, \ldots, w_k$ are also $y$'s. If $y_i \ne s$, then it could not be the succedent
of the conclusion of a rule from $\mathbf{B}'_\mathcal{G}$, therefore $\Delta_i = y_i$ and this is just an axiom. If $y_i = s$,
then by induction hypothesis we have $\Delta_i$ derivable from $s$ in $\mathcal{G}$. Thus, in both cases\footnote{This part can
be simplified a bit by modifying $\mathcal{G}$. Namely, we could introduce a new starting symbol $s'$ with a rule
$s' \Rightarrow s$. The language generated by $\mathcal{G}$ will not change. After this transformation, the starting symbol $s'$ will never
appear in the derivation, except for its start, and therefore we would always have $y_i \ne s'$, and $\Delta_i = y_i$.}
$\mathcal{G}$ derives
$\Delta_i$ from $y_i$.
By induction hypothesis we have $s \Rightarrow_\mathcal{G}^* y_1 \ldots y_i v_1 \ldots v_m y_j \ldots y_{n'}$, and since
$\mathfrak{p} = (v_1 \ldots v_m \Rightarrow w_1 \ldots w_k)$ is a production rule of $\mathcal{G}$ (the form of $\mathfrak{p}$ is taken from
$(3_\mathfrak{p})$), $s \Rightarrow_\mathcal{G}^* y_1 \ldots y_i w_1 \ldots w_k y_j \ldots y_{n'}$. Finally, we recall a well-known property of derivations in type-0 grammars: if $s \Rightarrow_\mathcal{G}^* y_1 \dots y_{n'}$ (the $w_i$'s are among the $y_j$'s) and $y_i \Rightarrow_\mathcal{G}^* \Delta_i$ for each $i$, then $s \Rightarrow^*_\mathcal{G} \Delta_1 \ldots \Delta_{n'} = z_1 \ldots z_n$.
\qed
\end{proof}
This lemma yields results on complexity and generative power of categorial grammars for one-division fragments, exactly as Lemma~\ref{Lm:undecRR} does in the
general case.
\begin{theorem}
The derivability problems for one-division fragments (that is, fragments including $\mathop{/}$, ${!}$, brackets and bracket modalities) of
$\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$, $\boldsymbol{!}^{\mathbf{r}} \mathbf{L}^{\boldsymbol{*}}$, $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{L^{\boldsymbol{*}}b}$, $\LsysA'$, and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$ are undecidable.
\end{theorem}
\begin{theorem}
For any r.e. language $M$ and for each of the calculi mentioned in the previous theorem there exists a categorial grammar for $M$ based on the
given calculus. For bracketed systems, such a grammar both s-recognises and t-recognises $M$.
\end{theorem}
For $\LsysB'$ and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALCb}\mathrm{(st)}$, however, we cannot directly use the internalisation given by Proposition~\ref{Pr:internB},
since the meta-formula $\Phi$ used there includes the product connective. Also, $\Phi$ includes $[[q]]$, which is not
a $!$-formula and does not allow permutation, as required in Lemma~\ref{Lm:undecRRBuszko}.
We overcome this issue by slightly modifying the notion of internalisation and proving a new version of Proposition~\ref{Pr:internB}.
\begin{definition}\label{Df:Vintern}
Let $\mathcal{B} = \{ B_1, \ldots, B_N \}$ be a finite set of formulae and $\mathcal{V} \subseteq \mathrm{Var}$ be a finite set of
variables.
A meta-formula $\Psi$ {\em $\mathcal{V}$-internalises} $\mathcal{B}$ in the calculus $\mathcal{L}$, if the following holds:
\begin{enumerate}
\item the sequent $\Psi, s \to s$ is derivable in $\mathcal{L}$;
\item the following `$t$-landing' rule is admissible in $\mathcal{L}$ for any $t \in \mathcal{V}$:
$$
\infer[\mathrm{land}_t,\ B_i \in \mathcal{B}]{\Psi, \Delta_1, B_i, \Delta_2 \to t}{\Psi, \Delta_1, \Delta_2 \to t}
$$
\item the sequent ${!}B_1, \ldots, {!}B_N \to \prod \pi_q(\Psi)$ is derivable in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$.
\end{enumerate}
\end{definition}
The new notion of $\mathcal{V}$-internalisation differs from the original notion of internalisation (Definition~\ref{Df:intern}) in item~2.
This item is formulated in a weaker form: we restrict the succedents of sequents in the `landing' rule to a finite set
$\mathcal{V}$ of variables. The key observation is that $\mathcal{V}$-internalisation, where $\mathcal{V}$ is the set of all variables
used in $\mathcal{B}$, is already sufficient for Lemma~\ref{Lm:undecRRBuszko}. Thus, now we only have to prove the $\mathcal{V}$-internalisation property for
$\LsysB'$.
\begin{proposition}
Let $\mathcal{V} = \{ t_1, \ldots, t_m \}$ be a finite set of variables and $\mathcal{B} = \{ B_1, \ldots, B_N \}$ be
a finite set of Lambek formulae. Then the following meta-formula
$$
\Psi_{\mathcal{B},\mathcal{V}} = {!}((s \mathop{/} s) \mathop{/} {!}Z_{1,1}), {!}Z_{1,1}, \ldots, {!}((s \mathop{/} s) \mathop{/} {!}Z_{m,N}), {!}Z_{m,N},
{!}((s \mathop{/} s) \mathop{/} \langle\rangle\PMod q), {!}\langle\rangle\PMod q,
$$
where
$$
Z_{i,j} = ([]^{-1} (t_j \mathop{/} ((t_j \mathop{/} {!}B_i) \mathop{/} {!}\langle\rangle\PMod q))) \mathop{/} q,
$$
$\mathcal{V}$-internalises $\mathcal{B}$ in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}$ and in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{Lb}$.
\end{proposition}
\begin{proof}
For short, denote $\Psi_{\mathcal{B},\mathcal{V}}$ by just $\Psi$. Item~1 of Definition~\ref{Df:Vintern} is checked exactly as in Proposition~\ref{Pr:internB}.
For item~2, let us check the $t$-landing rule for $t = t_j \in \mathcal{V}$ and $B_i \in \mathcal{B}$.
Let $\Psi = \Psi', {!}\langle\rangle\PMod q$.
{\small
$$
\infer[!L]{\Psi', {!}\langle\rangle\PMod q, \Delta_1, \Delta_2 \to t_j}
{\infer[\langle\rangle L]{\Psi', \langle\rangle\PMod q, \Delta_1, \Delta_2 \to t_j}
{\infer[\langle\rangle L]{\Psi', [\langle\rangle q], \Delta_1, \Delta_2 \to t_j}
{\infer[!C\mbox{ applied to ${!}Z_{i,j}$}]
{\Psi', [[q]], \Delta_1, \Delta_2 \to t_j}
{\infer[!L]{\Psi', [!(([]^{-1} (t_j \mathop{/} ((t_j \mathop{/} {!}B_i) \mathop{/} ({!}\langle\rangle\PMod q)))) \mathop{/} q), q], \Delta_1, \Delta_2 \to t_j}
{\infer[\mathop{/} L]{\Psi', [([]^{-1} (t_j \mathop{/} ((t_j \mathop{/} {!}B_i) \mathop{/} ({!}\langle\rangle\PMod q)))) \mathop{/} q, q], \Delta_1, \Delta_2 \to t_j}
{q \to q & \infer[[]^{-1} L]{\Psi', [[]^{-1} (t_j \mathop{/} ((t_j \mathop{/} {!}B_i) \mathop{/} ({!}\langle\rangle\PMod q)))], \Delta_1, \Delta_2 \to t_j}
{\infer=[!P_1\mbox{ applied to formulae of $\Psi'$}]{\Psi', t_j \mathop{/} ((t_j \mathop{/} {!}B_i) \mathop{/} ({!}\langle\rangle\PMod q)), \Delta_1, \Delta_2 \to t_j}
{\infer[\mathop{/} L]{t_j \mathop{/} ((t_j \mathop{/} {!}B_i) \mathop{/} ({!}\langle\rangle\PMod q)), \Psi', \Delta_1, \Delta_2 \to t_j}
{\infer[\mathop{/} R]{\Psi', \Delta_1, \Delta_2 \to (t_j \mathop{/} {!}B_i) \mathop{/} ({!}\langle\rangle\PMod q)}
{\infer[\mathop{/} R]{\Psi', \Delta_1, \Delta_2, {!} \langle\rangle\PMod q \to t_j \mathop{/} {!}B_i}
{\infer[!P_2]{\Psi', \Delta_1, \Delta_2, {!}\langle\rangle\PMod q, {!}B_i \to t_j}
{\infer[!L]{\Psi', \Delta_1, {!}B_i, \Delta_2, {!}\langle\rangle\PMod q \to t_j}
{\infer[!P_2]{\Psi', \Delta_1, B_i, \Delta_2, {!}\langle\rangle\PMod q \to t_j}
{\Psi', {!}\langle\rangle\PMod q, \Delta_1, B_i, \Delta_2 \to t_j
}}} }}
& t_j \to t_j}}}}}}
}}}
$$
}
Finally, let us check item~3. The $\pi_q$-projection of $\Psi$ includes the following formulae (the order does not matter due to permutation rules):
\begin{enumerate}
\item ${!}((s \mathop{/} s) \mathop{/} {!} \pi_q(Z_{i,j}))$;
\item ${!} \pi_q(Z_{i,j}) = {!}((t_j \mathop{/} ((t_j \mathop{/} {!}B_i) \mathop{/} {!}\mathbf{1})) \mathop{/}\mathbf{1})$;
\item ${!} ((s \mathop{/} s) \mathop{/} \mathbf{1})$ and ${!}\mathbf{1}$.
\end{enumerate}
We have the following derivations in $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$:
$$
\infer[{!} R]{\Lambda \to {!}((s \mathop{/} s) \mathop{/} {!} \pi_q(Z_{i,j}))}
{\infer[\mathop{/} R]{\Lambda \to (s \mathop{/} s) \mathop{/} {!} \pi_q(Z_{i,j})}
{\infer[{!} W]{{!} \pi_q(Z_{i,j}) \to s \mathop{/} s}{\infer[\mathop{/} R]{\Lambda \to s \mathop{/} s}{s \to s}}}}
\qquad
\infer [{!} R]{\Lambda \to {!}((s \mathop{/} s) \mathop{/} \mathbf{1})}
{\infer[\mathop{/} R]{\Lambda \to (s \mathop{/} s) \mathop{/} \mathbf{1}}{\infer[\mathbf{1} L]{\mathbf{1} \to s \mathop{/} s}
{\infer[\mathop{/} R]{\Lambda \to s \mathop{/} s}{s \to s}}}}
$$
$$
\infer[{!} R]{{!}B_i \to {!}((t_j \mathop{/} ((t_j \mathop{/} {!}B_i) \mathop{/} {!}\mathbf{1})) \mathop{/}\mathbf{1})}
{\infer[\mathop{/} R]{{!}B_i \to (t_j \mathop{/} ((t_j \mathop{/} {!}B_i) \mathop{/} {!}\mathbf{1})) \mathop{/}\mathbf{1}}
{\infer[\mathbf{1} L]{{!}B_i, \mathbf{1} \to t_j \mathop{/} ((t_j \mathop{/} {!}B_i) \mathop{/} {!}\mathbf{1})}
{\infer[\mathop{/} R]{{!}B_i \to t_j \mathop{/} ((t_j \mathop{/} {!}B_i) \mathop{/} {!}\mathbf{1})}
{\infer[\mathop{/} L]{{!}B_i, (t_j \mathop{/} {!}B_i) \mathop{/} {!}\mathbf{1} \to t_j}
{\infer[{!} R]{\Lambda \to {!}\mathbf{1}}{\Lambda\to\mathbf{1}} & \infer[{!}P]{{!}B_i, t_j \mathop{/} {!}B_i \to t_j}{\infer[\mathop{/} L]{t_j \mathop{/} {!}B_i, {!}B_i \to t_j}
{{!}B_i \to {!}B_i & t_j \to t_j}}}}}}}
$$
By $\cdot R$, we derive ${!}B_1, \ldots, {!}B_N, \ldots, {!}B_1, \ldots, {!}B_N \to \prod \pi_q(\Psi)$
(here ${!}B_1, \ldots, {!}B_N$ is repeated $m$ times). Permutations and contractions yield the needed sequent
${!}B_1, \ldots, {!}B_N \to \prod \pi_q(\Psi)$. \qed
\end{proof}
By Proposition~\ref{Prop:nostoupB}, we propagate this construction to $\LsysB'$ and $\LsysBr'$.
Finally, for the original Morrill's system $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$ we use the same trick as in Section~\ref{S:undec} (Proposition~\ref{Pr:internBx}),
adding an extra ${!}$ over $Z_{i,j}$. Altogether, this yields the necessary results.
\begin{theorem}
The derivability problems for one-division fragments (that is, fragments including $\mathop{/}$, ${!}$, brackets and bracket modalities) of
$\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}$, $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{Lb}$, $\LsysB'$, $\LsysBr'$, and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$ are undecidable.
\end{theorem}
\begin{theorem}
For any r.e. language $M$ and for each of the calculi mentioned in the previous theorem there exists a categorial grammar based on the
given calculus which both s-recognises and t-recognises $M$, for calculi without Lambek's restriction ($\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}$, $\LsysB'$, and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{L^{\boldsymbol{*}}b}\mathrm{(st)}$), and
$M - \{\varepsilon\}$, in the case with Lambek's restriction ($\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{Lb}$ and $\LsysBr'$).
\end{theorem}
\section{Conclusions and Future Work}\label{S:conclusion}
In this article, we have performed the logical analysis of two systems introduced by Morrill as a base for the CatLog categorial grammar
parser, $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ and $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$. We have pointed out issues with cut elimination in these systems, and provided the necessary modifications, for which cut elimination is proved.
We also discussed how Lambek's non-empti\-ness restriction can be imposed on $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$. From the algorithmic point of view, we have proved undecidability for each of
Morrill's systems, even in the smallest possible language with only one division, brackets and bracket modalities, and the subexponential. Moreover, we have
shown that categorial grammars based on Morrill's calculi can generate arbitrary recursively enumerable languages (in the case with Lambek's restriction---arbitrary
r.e. languages without the empty word).
One of the most interesting questions for future research is as follows. The undecidability results presented in this article look unfortunate, since
the calculi they apply to are intended to be used in natural language parsing software.
Thus, it is an important task to explore fragments of the calculi,
guarded by certain syntactic conditions on applying $!$, for which the derivability problem is decidable. For systems without brackets, in particular,
$\boldsymbol{!}^{\mathbf{r}} \mathbf{MALC}^{\boldsymbol{*}}$, such an algorithm exists under the condition that $!$ is applied only to variables~\citep{KanKuzSceFG}. The complexity of this algorithm
is the same as for the calculus without ${!}$: NP for $\boldsymbol{!}^{\mathbf{r}} \mathbf{L}^{\boldsymbol{*}}$ and PSPACE for $\boldsymbol{!}^{\mathbf{r}} \mathbf{MALC}^{\boldsymbol{*}}$. Moreover, for $\boldsymbol{!} \mathbf{L}^{\boldsymbol{*}}$ decidability is known for a broader class
of formulae allowed under ${!}$, namely, formulae of implication depth 1, that is, of the form $p_1 \ldots p_k \mathop{\backslash} q \mathop{/} r_1 \ldots r_m$~\citep{Fofanova2018}.
Extending this result to $\boldsymbol{!} \mathbf{MALC}^{\boldsymbol{*}}$ and $\boldsymbol{!}^{\mathbf{r}} \mathbf{MALC}^{\boldsymbol{*}}$ is still an open question.
For systems with brackets, the class of formulae for which the derivability problem becomes decidable appears to be much broader. For Morrill's first
system, $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$, this class is guarded by so-called bracket non-negative condition (BNNC) imposed on ${!}$-formulae. Under this condition, ${!}$ can be applied
to any formula which does not include negative occurrences of $\langle\rangle$ and does not include positive occurrences of $[]^{-1}$. In particular, ${!}$ is allowed
to be applied to any formula which does not include bracket modalities at all, no matter how complex this formula is. \citet{MorrillValentin} show that
the derivability problem in $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2015}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$ for sequents obeying BNNC is decidable; \citet{KanKuzSceFCT} establish an NP upper complexity bound for its fragment without additives, also with BNNC imposed.
We conjecture that for the full system, including additives, the complexity bound is PSPACE. These complexity boundaries are tight, since the multiplicative-only Lambek calculus is already NP-complete~\citep{PentusNP} and $\mathbf{MALC}^{\boldsymbol{*}}$ is PSPACE-complete~\citep{KanovichKazimierz}.
For Morrill's second system, $\boldsymbol{!}_{\mathbf{b}}^{\mathbf{2018}} \mathbf{MALC^{\boldsymbol{*}}b}\mathrm{(st)}$, formulating
the corresponding version of BNNC and proving decidability for the fragment guarded by this new condition is a problem for further investigation.
Another, potentially simpler but more technical question left for further research is that of extending our cut elimination proof to calculi with discontinuous operations~\citep{MorValDispl}. We conjecture that the proof could be obtained as a combination of our proof (using ``deep cut elimination'' for ${!}$-formulae) presented here and the proof by~\citet{MorValDispl} for displacement calculus. The notations, however, would become extremely complicated---thus, a digestible presentation of such a proof becomes a separate challenge.
\paragraph*{Acknowledgements.}\
We are grateful to Glyn Morrill for a number of very helpful interactions we benefited from at various stages of this work.
The work of Max Kanovich was partially supported by
EPSRC Programme Grant EP/R006865/1: ``Interface Reasoning for Interacting
Systems (IRIS).'' The work of Andre Scedrov and Stepan Kuznetsov was
prepared within the framework of the HSE University Basic Research Program
and partially funded by the Russian Academic Excellence Project `5--100.' The
work of Stepan Kuznetsov was also partially supported by the Council of the President of Russia for Support of Young Russian Researchers and Leading Research Schools of the Russian Federation, by the Young Russian Mathematics Award, and by
the Russian Foundation for Basic Research grant 20-01-00435.
\bibliographystyle{bbs}
\subsubsection{Abstract}
Deep Bayesian neural networks (BNNs) are a powerful tool, though computationally demanding, to perform parameter estimation while jointly estimating uncertainty around predictions.
BNNs are typically implemented using arbitrary normally distributed prior distributions on the model parameters.
Here, we explore the effects of different prior distributions on classification tasks in BNNs and evaluate the evidence supporting the predictions based on posterior probabilities approximated by Markov Chain Monte Carlo sampling and by computing Bayes factors.
We show that the choice of priors has a substantial impact on the ability of the model to confidently assign data to the correct class (true positive rates).
Prior choice also affects significantly the ability of a BNN to identify out-of-distribution instances as unknown (false positive rates).
When comparing our results against neural networks (NN) with Monte Carlo dropout, we found that BNNs generally outperform NNs.
Finally, in our tests we did not find a single best choice of prior distribution. Instead, each dataset yielded the best results under a different prior, indicating that testing alternative options can improve the performance of BNNs.
\clearpage
\newpage
\section{Introduction}
Neural networks (NN) have become a widespread and increasingly powerful tool for classification and regression, with applications in a wide range of research fields including medicine \cite{wang2020}, biology \cite{zhou2015, Silvestro2019}, sociology \cite{severyn2015}, and finance \cite{bao2017}.
An NN maps a set of features (input layer) onto an output layer of desired size, e.g. the number of classes in a classification task, through one or multiple hidden layers \cite{Goodfellow2016}.
While a lot of the development in machine learning classifiers has focused on improving the accuracy of the models and scalability of the algorithms \cite{wang2019deep, pytorch2019}, there is an increasing awareness of the tendency of neural networks to provide highly confident predictions, even when these are wrong \cite{Goodfellow2016, hendrycks2016}.
Erroneous classifications with high confidence (false positives) reduce the reliability of a classifier and can even be harmful in some contexts, prompting increased discussion about artificial intelligence safety \cite{amodei2016}.
This is an evident problem when the neural network is applied to out-of-distribution data, i.e. data from a class that was not included in the training set \cite{hendrycks2016}. In this case, samples that would ideally be identified as \textit{unknown} often result in highly confident assignment to a class, particularly when the probability of a predicted label from the output layer is directly interpreted as a measure of confidence \cite{Gal2016}.
Several methods have been developed to quantify uncertainties in predictions using neural networks, for instance using Monte Carlo (MC) dropout \cite{Gal2016} and deep ensembles \cite{lakshminarayanan2016}.
An explicit probabilistic approach to estimate uncertainties in the form of posterior credible intervals is the use of Bayesian neural networks (BNN; \cite{neal2012}), which place prior distributions (generally normal densities) on the weight parameters of an NN and sample them from their posterior distribution or an approximation of it.
BNNs typically implement Markov chain Monte Carlo (MCMC) algorithms \cite{neal2012, hoffman2014nuts,wenzel2020} or variational inference \cite{graves2011, blundell2015} to approximate the posterior distributions of the parameters from which predictions are sampled proportionally to their posterior probability.
Because the parameters of an NN do not usually have a direct interpretation, it is difficult to define prior distributions reflecting prior knowledge about the weights.
While some studies have described the theoretical implications of using different priors in BNNs
\cite{lee2004, vladimirova2018}, the sensitivity of BNN estimates to these choices remains poorly understood.
Here, we explore the effects of using different prior distributions on the performance of BNNs in classification tasks.
We assess the accuracy and true positive rate for in-distribution data and false positive rate for out-of-distribution data under different priors.
We evaluate the statistical support for predictions in two ways:
1) using posterior probabilities for each class as approximated by MCMC sampling and
2) by calculating Bayes factors to account for potentially uneven prior expectations across the classes.
We compare accuracy and true and false positive rates obtained from BNNs with analogous metrics obtained from a standard NN using MC dropout to estimate uncertainties.
\section{Methods}
\subsection{Models and implementation}
We implemented a BNN framework in Python v.3 based on the modules NumPy and SciPy and used a Metropolis-Hastings Markov chain Monte Carlo (MCMC) algorithm to sample the weights from their posterior distribution.
In our simulations we used fully connected deep neural networks \cite{lecun2015deeplearning} with two hidden layers, varying the number of nodes for each dataset (see below), and with an additional bias node in the input layer.
We used a rectified linear unit ($ReLU$) activation function for the hidden layers \cite{ReLU2010} and
a softmax function in the output layer to obtain the parameters of the categorical distribution used to compute the likelihood of the data.
The BNN implementation used in this study is available here: \href{https://github.com/dsilvestro/npBNN}{github.com/dsilvestro/npBNN}.
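To make the sampling scheme concrete, the following is a minimal, self-contained sketch of a Metropolis-Hastings update for the weights of a small two-hidden-layer ReLU network with a standard normal prior and a categorical (softmax) likelihood, as described above. This is an illustration only, not the released npBNN code; the network sizes, proposal step size, and toy data are arbitrary.

```python
import numpy as np

def log_prior_normal(weights):
    # Standard normal log-prior summed over all weight matrices (constants dropped)
    return sum(-0.5 * np.sum(w ** 2) for w in weights)

def forward(weights, x):
    # Two hidden ReLU layers and a softmax output layer
    h = x
    for w in weights[:-1]:
        h = np.maximum(h @ w, 0.0)                 # ReLU activation
    logits = h @ weights[-1]
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)        # class probabilities

def log_likelihood(weights, x, y):
    # Categorical log-likelihood of the observed labels
    p = forward(weights, x)
    return np.sum(np.log(p[np.arange(len(y)), y] + 1e-12))

def mh_step(weights, x, y, rng, step=0.05):
    # Perturb one randomly chosen weight matrix and accept/reject
    prop = [w.copy() for w in weights]
    i = rng.integers(len(prop))
    prop[i] += rng.normal(0.0, step, size=prop[i].shape)
    log_r = (log_likelihood(prop, x, y) + log_prior_normal(prop)
             - log_likelihood(weights, x, y) - log_prior_normal(weights))
    return prop if np.log(rng.random()) < log_r else weights

# Toy data and a tiny 4-8-5-2 network, for illustration only
rng = np.random.default_rng(0)
x = rng.normal(size=(20, 4))
y = rng.integers(0, 2, size=20)
weights = [rng.normal(0, 0.1, s) for s in [(4, 8), (8, 5), (5, 2)]]
for _ in range(100):
    weights = mh_step(weights, x, y, rng)
```

In the actual analyses the chain would be run for many more iterations and thinned before computing posterior class frequencies.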
We tested different prior distributions (Fig. \ref{figPriorPDF}) on the weights to explore their effect on the performance of posterior predictions:
\begin{itemize}
\item Uniform: $P(w) \sim \mathcal{U}(-b,b)$
\item Standard normal: $P(w) \sim \mathcal{N}(0,1)$
\item Truncated Cauchy: $ P(w) \sim \begin{cases} \mathcal{C}(0,1), & \mbox{if } -b \leq w \leq b \\ 0, & \mbox{otherwise } \end{cases} $
\item Laplace: $ P(w) \sim \mathcal{L}(0, 1)$
\end{itemize}
where the parameter $b$ defines the boundaries of the uniform and truncated Cauchy distributions, set to $b = 5$ in our analyses.
Although the Cauchy distribution, unlike the uniform, does not need hard boundaries to be a proper prior, we found that, compared to a non-truncated Cauchy prior, the truncated Cauchy did not noticeably change the accuracy in our analyses but improved the convergence of the MCMC.
While uniform priors result in the posterior distribution matching the likelihood surface (within the allowed range of model parameters), normal, Cauchy, and Laplace distributions introduce some shrinkage toward weights around 0.
However, their regularization effects differ in how they pull the parameters toward 0: by a constant proportion (normal), by a constant amount (Laplace), or by shrinking more strongly parameter values close to 0 (Cauchy) \cite{Gelman_et_al2013}.
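Under the parameterizations listed above, the four log-prior densities can be evaluated directly; the sketch below is illustrative, using SciPy's standard distributions, with the truncated Cauchy renormalized on $[-b, b]$.

```python
import numpy as np
from scipy.stats import norm, cauchy, laplace, uniform

b = 5.0  # boundary used for the uniform and truncated Cauchy priors

def log_prior(w, kind):
    # Log prior density of a single weight under the four prior choices
    if kind == "uniform":
        return uniform.logpdf(w, loc=-b, scale=2 * b)
    if kind == "normal":
        return norm.logpdf(w, 0.0, 1.0)
    if kind == "cauchy":
        # Truncated Cauchy: zero density outside [-b, b], renormalized inside
        if abs(w) > b:
            return -np.inf
        z = cauchy.cdf(b) - cauchy.cdf(-b)
        return cauchy.logpdf(w) - np.log(z)
    if kind == "laplace":
        return laplace.logpdf(w, 0.0, 1.0)
    raise ValueError(kind)
```

Summing these terms over all weights gives the log-prior used in the Metropolis-Hastings acceptance ratio.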
\subsection{Datasets}
We tested the effects of different priors on BNNs based on four datasets. For each dataset, we excluded one or more classes from the training set and used them as out-of-distribution data to quantify the rate of false positives.
All the data and scripts used to run and summarize the analyses are available at \href{https://doi.org/10.5281/zenodo.3816927}{\underline{doi:10.5281/zenodo.3816928}}.
I. Wine data --
To test the effect of prior choice on a simple classification task, we used the scikit-learn wine data (\href{https://scikit-learn.org/stable/}{scikit-learn.org}).
The dataset consists of 178 samples with 13 numeric features, which result from a chemical analysis of Italian wines. The wines are classified into three classes based on their provenance. We rescaled all features using a min-max scaler prior to training.
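The min-max rescaling applied to the wine features is the standard column-wise transform, sketched here for illustration:

```python
import numpy as np

def min_max_scale(X):
    # Rescale each feature column to [0, 1]; a constant column would map to 0
    mn, mx = X.min(axis=0), X.max(axis=0)
    span = np.where(mx > mn, mx - mn, 1.0)
    return (X - mn) / span
```

In practice the scaler would be fit on the training split and reused on the test split to avoid leakage.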
II. Virus data --
As a possible real-world application, we compiled and analyzed a dataset of influenza virus RNA sequences,
obtained from the Influenza Research Database \cite{zhang2017influenza} (\href{www.fludb.org}{www.fludb.org}).
Even though Influenza viruses are RNA viruses, the sequences that are available for download are coded in common DNA notation, i.e. the RNA nucleotide uracil (U) is coded as thymine (T).
Our dataset was restricted to Influenza A viruses (e.g. the swine flu virus H1N1), which are categorized into subtypes based on the combination of two surface proteins hemagglutinin (H) and neuraminidase (N).
We downloaded the coding RNA sequences (genes) of both of these proteins as sequence alignments (individually for each protein) and trimmed the alignments at the first methionine (Met) codon.
To deal with individual sequences of differing lengths, we trimmed the end of the alignments to ensure \textgreater 50\% of sequence coverage throughout the complete alignment.
Next, we randomly selected 600 sequences for each subtype, only choosing samples with sequence data for both genes; the final dataset included 6,600 samples belonging to 11 Influenza A subtypes.
We then transformed the nucleotide sequences into numerical arrays, by determining the frequencies of each possible nucleotide triplet (i.e. all possible 3-letter permutations of the 4 nucleotides A,C,G,T), independently of the amino acid reading frame.
The resulting arrays of $4^{3} = 64$ frequencies of nucleotide triplets for each of the two segments were concatenated between the two genes, resulting in a set of 128 features for each instance.
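The featurization described above can be sketched as follows. Whether triplets were counted in overlapping or consecutive windows is not specified in the text, so the sliding-window choice here is an assumption, and the helper names are illustrative.

```python
from itertools import product

TRIPLETS = ["".join(t) for t in product("ACGT", repeat=3)]  # 4^3 = 64 triplets

def triplet_frequencies(seq):
    # Count nucleotide triplets in a sliding window, ignoring the reading
    # frame; windows containing gaps or ambiguity codes are skipped.
    counts = dict.fromkeys(TRIPLETS, 0)
    total = 0
    for i in range(len(seq) - 2):
        t = seq[i:i + 3]
        if t in counts:
            counts[t] += 1
            total += 1
    return [counts[t] / total for t in TRIPLETS] if total else [0.0] * 64

def virus_features(ha_gene, na_gene):
    # 64 frequencies per gene, concatenated across H and N -> 128 features
    return triplet_frequencies(ha_gene) + triplet_frequencies(na_gene)
```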
III. Synthetic data --
We assessed the effect of prior choice on a classification task for which we expected a relatively low accuracy for in-distribution samples.
To this end we simulated a dataset in which features were largely overlapping among classes. The dataset included 20 classes each represented by 199 instances. For each class $k$ we sampled 10 features drawn from beta distributions, such that the $i^{\text{th}}$ feature was a random draw from $\mathcal{B}(a_{k_i}, b_{k_i})$, where $a_{k_i}, b_{k_i} \sim \mathcal{U}(0.2,5)$.
Although each class is defined by a distinct set of shape parameters of the beta distribution, these are not guaranteed to be substantially different, as they are drawn from the same uniform distribution. As a consequence of this and the limited number of features, the dataset is expected to produce a classification accuracy $<$ 1 for in-distribution instances and potentially a high frequency of false positives for out-of-distribution data.
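The simulation described above can be sketched directly from the stated sampling scheme, with class-specific beta shape parameters drawn from $\mathcal{U}(0.2, 5)$; the function name and seed are illustrative.

```python
import numpy as np

def simulate_dataset(n_classes=20, n_per_class=199, n_features=10, seed=1):
    # Each class k gets its own shape parameters a_ki, b_ki ~ U(0.2, 5);
    # features are then drawn from Beta(a_ki, b_ki) for every instance.
    rng = np.random.default_rng(seed)
    X, y = [], []
    for k in range(n_classes):
        a = rng.uniform(0.2, 5.0, n_features)
        b = rng.uniform(0.2, 5.0, n_features)
        X.append(rng.beta(a, b, size=(n_per_class, n_features)))
        y.append(np.full(n_per_class, k))
    return np.vstack(X), np.concatenate(y)
```

Because all shape parameters come from the same uniform distribution, class feature distributions overlap substantially, which is what drives the expected accuracy below 1.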
IV. MNIST data --
Finally we used a subset of the MNIST dataset of hand-written digits (10 classes) \cite{lecun1998mnist}, which includes 784 features. We randomly sampled 1,000 instances out of the original 60,000 to reduce computing time, even though this results in lower accuracy for in-distribution data.
\subsection{Training}
Since one of the aims of this study is to assess the ability of a BNN to distinguish between in-distribution and out-of-distribution data under different priors, we left out one or more classes during training and used them as test sets to estimate the false positive rates.
We performed cross-validation by repeating this procedure 10 times for datasets II--IV, each time leaving out a random subset of classes.
We repeated all the analyses under the four prior distributions described above (Fig. \ref{figPriorPDF}).
For all datasets we approximated the posterior distribution of the model parameters through 100 million MCMC iterations and assessed their convergence based on the effective sample sizes of the posterior samples.
For the wine dataset we split the data into in-distribution data (samples belonging to classes 1 and 2) and out-of-distribution data (samples belonging to class 3), thus training the BNN on two of the three classes. We set the number of nodes to 10 + 1 bias node and 5 for the first and second hidden layer, respectively.
For the virus dataset we generated 10 subsets of the training data, each containing the samples of 5 randomly selected subtypes (classes). The reason for this repeated random sampling of training classes is that some virus subtypes are more similar to each other than to others, and therefore the choice of the training classes is expected to affect the classification of out-of-distribution samples.
Of the 600 instances for each class, we used 450 for training and 50 to monitor the test accuracy during the MCMC sampling,
while setting aside the remaining 100 instances as test set for in- and out-of-distribution predictions.
The BNN included 20 nodes + 1 bias node in the first hidden layer and 5 nodes in the second.
Of the 199 samples for each class included in dataset III, 99 were used for training and 100 as test set.
Across the 10 cross-validation replicates, we partitioned the synthetic data into two subsets of 10 classes each and used them as in-distribution and out-of-distribution datasets. We configured the BNN with 15 nodes + 1 bias node in the first hidden layer and 10 in the second.
For the subset of the MNIST dataset, we trained the BNN using 500 instances from 5 classes randomly selected at each cross-validation replicate. The BNN included 5 nodes + 1 bias node in the first hidden layer and 5 in the second.
To evaluate the performance of the BNN we used the full MNIST test set (10,000 instances) split into in-distribution and out-of-distribution classes.
For comparison we used the same data to train a standard NN as implemented in Tensorflow v.2.1 (\href{https://tensorflow.org}{tensorflow.org}), using the same configurations
implemented in the BNN analyses. We split the datasets into training (90\%, of which 20\% was used for validation during training) and test (10\%) sets and used the ADAM optimizer \cite{kingma2014adam} with the default learning rate to minimize the cross-entropy loss function. The optimal number of epochs was selected so as to minimize the validation cross-entropy loss.
\subsection{Predictions}
We drew 5,000 samples from the posterior distribution of the weights estimated by the BNN to perform predictions on the in-distribution and out-of-distribution test sets.
We computed the following statistics to quantify the performance of the BNNs using different priors and that of NNs with MC dropout:
\begin{itemize}
\item Accuracy (for in-distribution data): rate of correct predictions based on the class with highest posterior probability for BNNs and MC dropout support for NNs
\item True positives (for in-distribution data): rate of correct predictions with significant statistical support quantified using posterior probability and Bayes Factors for BNN and MC dropout support for NNs (see below)
\item False positives (for both in-distribution and out-of-distribution data): rate of erroneous predictions that received significant statistical support (based on the same thresholds applied for true positives).
\end{itemize}
We quantified the statistical support for BNN predictions in two ways.
First we considered a predicted class as significantly supported if it was sampled by the MCMC with posterior probability (PP) greater than 0.95.
As an additional measure of evidence supporting a prediction we computed Bayes factors (BF) based on a comparison between posterior and prior sampling frequencies obtained by MCMC. BF is defined as the ratio between the posterior odds and the prior odds \cite{Kass_1995}. Thus for a class $k$, we computed
\begin{equation}
BF = \frac{P(k|D_i)}{1 - P(k|D_i)} / \frac{P(k)}{1-P(k)}
\end{equation}
where $P(k|D_i)$ is the posterior probability of the class, approximated for sample $D_i$ by the MCMC sampling frequency.
We computed the prior probability $P(k)$ empirically, as the sampling frequency of class $k$ for sample $D_i$ obtained from a distribution of BNN parameters sampled from the prior. This was achieved through an MCMC in which the acceptance probability of a state was based only on the prior ratio, regardless of the likelihood ratio.
We used a threshold of $BF > 150$ to determine very strong support for a prediction \cite{Kass_1995}.
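Once the posterior and prior sampling frequencies are available from the two MCMC runs, Eq. (1) reduces to a one-line odds-ratio computation; the helper names below are illustrative.

```python
def bayes_factor(posterior_p, prior_p):
    # BF = posterior odds / prior odds for a class (Eq. 1)
    posterior_odds = posterior_p / (1.0 - posterior_p)
    prior_odds = prior_p / (1.0 - prior_p)
    return posterior_odds / prior_odds

def mcmc_frequency(sampled_labels, k):
    # Approximates P(k | D_i) from the posterior run, or P(k) from the
    # prior-only run, as the fraction of MCMC samples predicting class k
    return sampled_labels.count(k) / len(sampled_labels)
```

For example, a posterior probability of 0.93 corresponds to a BF of about 53 when the empirical prior probability is 0.20, but about 252 when it is 0.05, so only the latter would exceed the $BF > 150$ threshold.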
To rank the performance of different BNN models based on in-distribution data we evaluated the balance between true and false positive rates.
We computed the informedness as the difference between the true positive rate and the false positive rate \cite{powers2011}. The informedness reaches a value of 1 when both sensitivity and specificity are 1.
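The informedness used for ranking is simply the difference of the two rates:

```python
def informedness(true_positive_rate, false_positive_rate):
    # Youden's J: equals 1 when sensitivity and specificity are both 1
    return true_positive_rate - false_positive_rate
```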
For standard NNs, we evaluated the statistical support of the prediction using the MC dropout technique, since a direct interpretation of the softmax probabilities is known to produce over-confident interpretations \cite{Gal2016} (but see \cite{hendrycks2016}).
We applied a dropout rate adjusted to the different datasets, which was set to 0.2 for the datasets I and II, 0.01 for dataset III, and 0.05 for dataset IV. Dropout rates were adjusted to best balance the true and false positive rates.
For each instance we sampled 1,000 predictions and used a threshold frequency of 0.95 to determine if the predicted class received significant statistical support.
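The MC dropout support criterion can be sketched generically: repeat stochastic forward passes with dropout active, record the winning class, and require its frequency to exceed the 0.95 threshold. The toy predictor below stands in for a dropout-enabled network; it is not the Tensorflow model used in the analyses.

```python
import numpy as np

def mc_dropout_support(stochastic_predict, x, n_samples=1000, threshold=0.95):
    # stochastic_predict must keep dropout active at prediction time, so
    # repeated calls on the same input give different class probabilities.
    winners = [int(np.argmax(stochastic_predict(x))) for _ in range(n_samples)]
    counts = np.bincount(winners)
    best = int(np.argmax(counts))
    freq = counts[best] / n_samples
    return best, freq, bool(freq > threshold)

# Toy stand-in for a dropout-enabled network over 3 classes
rng = np.random.default_rng(0)
def noisy_predict(_x):
    p = np.array([0.7, 0.2, 0.1]) + rng.normal(0.0, 0.02, 3)
    return p / p.sum()

best, freq, supported = mc_dropout_support(noisy_predict, None, n_samples=200)
```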
\section{Results}
\subsection{Prior choice and in-distribution data}
The accuracy and informedness obtained for in-distribution data in a simple classification task (dataset I) were greater than 0.95 regardless of the prior (Table \ref{tbl_res}, Fig. \ref{wine_data_predictions}).
Similar results were obtained for dataset II (Fig. \ref{virus_data_stats}), indicating that the numerical features scored from the two RNA regions considered here can be used to accurately and confidently identify the different Influenza subtypes.
In dataset III (simulated data) the test accuracy was around 0.85--0.90 and the informedness was around 0.5--0.6, thus showing a substantially lower level of confidence in the predictions (Fig. \ref{simulated_data_stats}).
The accuracy of dataset IV (a subset of the MNIST data) was greater than 0.90, which is lower than expected \cite{lecun1998mnist} due to the small subset of training instances used here (Fig. \ref{mnist_data_stats}).
In our tests, the informedness was largely driven by changes in true positive rates, since false positive rates were generally low for in-distribution data.
Our analyses show that different priors can have moderate effects on the test accuracy, which, for datasets III and IV, varied by 3--4\% across prior settings (Table \ref{tbl_res}).
Similarly, the true positive rates showed a variation up to 10\% depending on the dataset and prior.
For instance in dataset I, a uniform prior yielded a sensitivity of 1, whereas a Laplace prior resulted in a sensitivity of 0.95.
The false positive rates for in-distribution data were only appreciable for datasets III and IV, where the differences associated with prior choice were substantial, despite their limited effect on the overall informedness.
False positive rates under the Laplace prior were about 2-fold lower than under a uniform prior for dataset III and 4-fold lower for dataset IV.
\subsection{Prior choice and out-of-distribution data}
The false positive rates based on out-of-distribution data were significantly higher than for in-distribution data (Table \ref{tbl_res}).
Prior choice was linked with a variation of false positive rates ranging from 2-fold (Figs. \ref{simulated_data_stats}, \ref{mnist_data_stats}) to orders of magnitude (Fig. \ref{virus_data_stats}).
The lowest false positive rates were obtained under different priors depending on the dataset:
uniform and Laplace for dataset I, Laplace for dataset II, Cauchy for dataset III, and normal for dataset IV.
The ranking of the models based on informedness for in-distribution data did not match the relative performance of the priors with out-of-distribution data.
\subsection{Bayes factors versus posterior probabilities}
The true and false positive rates obtained from BFs were more conservative than those assessed based on posterior probabilities in dataset I, where both rates were consistently lower (Table \ref{tbl_res}).
While BF and PP returned comparable results for dataset III, BFs resulted in significantly lower false positive rates for out-of-distribution data for datasets I, II, and IV.
The BFs, as computed here, measure the statistical evidence supporting the best class against all the others combined. However they can also be computed to compare and rank all classes.
The calculation of BFs explicitly includes the empirical prior probability of each class and for each instance in the dataset, inferred through MCMC.
This might be particularly important when the priors on the weight parameters result in unexpected and unbalanced prior probabilities among classes for a given sample, as often observed in BNNs \cite{wenzel2020}.
For instance, a prediction for a sample with PP = 0.93 from a model with 5 classes results in moderate BF support if the prior probability is 0.20 (BF = 53), but in very strong support if the prior probability is 0.05, due to unbalanced priors (BF = 252).
Unbalanced prior probabilities among classes are also observed in our analyses, as shown in Fig. \ref{virus_prior_freq}, and they are not the product of an explicit choice based on prior knowledge, but rather the indirect result of how the features interact with the network architecture and prior distribution on the weight parameters.
Thus, a measure like BF that uses prior probabilities to quantify the marginal likelihood of a prediction might provide a more informed, if computationally more expensive, way to assess statistical support for a class than posterior probabilities alone.
\subsection{Comparison between BNNs and NNs}
In our tests, BNNs and NNs showed comparable levels of accuracy across the different datasets.
This is in contrast to previous research reporting a lower accuracy of BNNs and showing that a ``cold" MCMC sampler can alleviate this issue (with the caveat that the posterior is no longer exact) \cite{wenzel2020}.
The close consistency between BNN and NN accuracy in our tests might be linked to the relatively small size of the datasets analyzed here and the limited complexity of the networks.
Similarly, the informedness, which combines sensitivity and specificity, did not differ substantially between BNNs and NNs, based on in-distribution data, except for dataset IV, where it was substantially lower based on NN.
However, BNNs strongly outperformed NNs in their ability to identify out-of-distribution data as unknown. Compared with NNs, the false positive rates of BNNs for out-of-distribution data were orders of magnitude lower for datasets I and II, 3-fold lower for dataset III, and 6-fold lower for dataset IV.
Here we used the same network architectures among BNNs and NNs.
It is possible, however, that changing the configuration of the NN model (e.g. number of layers, number of nodes) can bring the performance of NNs closer to that of BNNs \cite{Gal2016, hendrycks2016,lakshminarayanan2016}.
\section{Conclusion}
We tested the effects of using different prior distributions on classification tasks performed through BNNs.
In our tests we used Metropolis-Hastings MCMC to obtain posterior estimates of the model parameters, but other more efficient algorithms can be used to extend these tests to larger datasets and more complex neural networks \cite{graves2011,hoffman2014nuts, blundell2015, polson2017,zhang2019}.
We show that BFs are an alternative approach to assess the statistical support for the predictions, and hypothesize that they might better account for uneven prior probabilities in classification tasks.
Understanding the impact of priors on BNNs is important for the application of machine learning methods within a Bayesian framework \cite{lee2004,vladimirova2018}.
While here we focused on small datasets, the sensitivity of BNNs to prior choice for larger datasets might need further exploration.
Prior choice affects the accuracy of BNNs to some extent, and more strongly the rates of true and false positives.
However, the most significant impact of prior choice is on the ability of a BNN to detect out-of-distributions samples.
False positives in out-of-distribution samples can be interpreted as a measure of the accuracy of a model in distinguishing known samples from unknowns.
This is important when applying classifiers to datasets extending beyond the classes used to train the model.
For instance, with biological data such as the Influenza subtypes used here, the ability to identify unknowns can be a tool to identify samples representing new virus types or undescribed species.
Gaussian priors are generally the default choice for BNNs \cite{zhang2017, wenzel2020}, and indeed performed comparatively well in our tests.
However, our results also show that different distributions can yield better results depending on the dataset.
In the absence of additional knowledge that can inform a decision about the most appropriate priors on the parameters of a neural network, testing and comparing models with different prior distributions, for instance using a validation dataset, can help to substantially improve the performance of classification using BNNs.
\section{Acknowledgements}
We thank Gytis Dudas for advice on the virus data and Stefano Goria and Xavier Meyer for feedback on the manuscript.
D.S. received funding from the Swiss National Science Foundation (PCEFP3\_187012; FN-1749) and from the Swedish Research Council (VR: 2019-04739).
\clearpage
\begin{table}[ht!]
\caption{Accuracy, true positive rate, false positive rate, and informedness for I) the wine dataset, II) the Influenza dataset, III) the simulated beta dataset, and IV) the MNIST dataset. For BNNs, true and false positive rates are shown based on PP $>$ 0.95 and BF $>$ 150 (in parentheses), whereas for NNs they were based on prediction frequency $>$ 0.95 based on MC dropout. We used the informedness (here indicated with $J$) to rank the performance of the different models. Values reported for datasets II--IV are averaged across 10 replicates based on different partitions of in-distribution and out-of-distribution classes.}
\label{tbl_res}
\begin{center}
\footnotesize
\renewcommand{\arraystretch}{1.1}
\begin{tabular} {l l l l l l l l l}
\hline
& & \multicolumn{4}{c}{In-distribution} & & Out-of-distribution \\
Dataset & Model & Accuracy & True positives & False positives & $J$ & & False positives \\
\hline
I & $\mathcal{U}(-5,5) $ & \textbf{1} & \textbf{1 (0.877)} & 0 (0) & 1 & & \textbf{0.042} (0) \\
& $\mathcal{N}(0,1) $ & \textbf{1} & 0.954 (0.785) & 0 (0) & 0.954 & & 0.063 (0) \\
& $\mathcal{C}_T(0,1,5)$ & \textbf{1} & 0.985 (0.823) & 0 (0) & 0.985 & & 0.083 (0) \\
& $\mathcal{L}(0,1) $ & \textbf{1} & 0.954 (0.769) & 0 (0) & 0.954 & & 0.042 (0) \\
& NN (dropout) & \textbf{1} & \textbf{1} & 0 & 0.946 & & 0.313 \\
& NN & \textbf{1} & - & - & - & & - \\
\hline
II & $\mathcal{U}(-5,5) $ & \textbf{1} & 0.989 (0.983) & 0 (0) & 0.989 & & 0.014 (0.005) \\
& $\mathcal{N}(0,1) $ & \textbf{1} & \textbf{0.997 (0.996)} & 0 (0) & 0.997 & & 0.040 (0.019) \\
& $\mathcal{C}_T(0,1,5)$ & \textbf{1} & 0.994 (0.991) & 0 (0) & 0.994 & & 0.002 ($<$0.001) \\
& $\mathcal{L}(0,1) $ & \textbf{1} & 0.995 (0.990) & 0 (0) & 0.995 & & \textbf{$<$0.001 ($<$0.001)} \\
& NN (dropout) & 0.999 & 0.977 & 0 & 0.979 & & 0.332 \\
& NN & \textbf{1} & - & - & - & & - \\
\hline
III & $\mathcal{U}(-5,5) $ & 0.863 & 0.598 (0.613) & 0.022 (0.024) & 0.591 & & 0.228 (0.243) \\
& $\mathcal{N}(0,1) $ & \textbf{0.895} & 0.600 (0.617) & 0.010 (0.011) & 0.576 & & 0.152 (0.164) \\
& $\mathcal{C}_T(0,1,5) $ & 0.883 & 0.564 (0.582) & 0.010 (0.011) & 0.534 & & \textbf{0.133 (0.147)} \\
& $\mathcal{L}(0,1) $ & 0.890 & 0.575 (0.593) & 0.009 (0.010) & 0.550 & & 0.148 (0.161) \\
& NN (dropout) & 0.856 & \textbf{0.681} & 0.061 & 0.620 & & 0.626 \\
& NN & 0.857 & - & - & - & & - \\
\hline
IV & $\mathcal{U}(-5,5) $ & 0.914 & \textbf{0.535 (0.462)} & 0.003 (0.001) & 0.769 & & 0.098 (0.068) \\
& $\mathcal{N}(0,1) $ & \textbf{0.931} & 0.490 (0.394) & $<$0.001 ($<$0.001) & 0.736 & & \textbf{0.047 (0.025)} \\
& $\mathcal{C}_T(0,1,5) $ & 0.923 & 0.501 (0.406) & 0.001 ($<$0.001) & 0.752 & & 0.057 (0.032) \\
& $\mathcal{L}(0,1) $ & 0.924 & 0.499 (0.403) & 0.001 ($<$0.001) & 0.753 & & 0.061 (0.034) \\
& NN (dropout) & 0.757 & 0.306 & 0.079 & 0.227 & & 0.303 \\
& NN & 0.882 & - & - & - & & - \\
\hline
\end{tabular}
\end{center}
\end{table}
\clearpage
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{figs/prior_pdf_functions.pdf}
\caption{Prior densities on weight distributions used in this study to assess their effect on the performance of BNNs.}
\label{figPriorPDF}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\textwidth]{figs/label_predictions_wine_data_dropout_020.pdf}
\caption{Class probabilities (y-axis) predicted for dataset I (x-axis) using four BNN configurations (with uniform, normal, truncated-Cauchy, and Laplace priors, color-coded as in Fig. \ref{figPriorPDF}) as well as by an NN with MC dropout (dark gray) and a regular NN in light gray. All models were trained on data belonging to classes 1 and 2. Dots and whiskers show the mean and 95\% interval of label probabilities for each sample. For regular NNs, we report the probability of a class resulting from the SoftMax function at the output layer. Gray shaded areas indicate out-of-distribution instances belonging to the third class.
}
\label{wine_data_predictions}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{figs/prediction_stats_plot_020_virus.pdf}
\caption{Summary of BNN and NN predictions for dataset II, showing the test accuracy of the predictions (1st panel), the true positive rates (2nd panel), and false positive rates for out-of-distribution data (3rd panel). The results are shown for BNNs with four priors (Uniform, Normal, truncated-Cauchy, and Laplace; color-coded as in Fig. \ref{figPriorPDF}) and an NN with MC dropout (dropout rate = 0.2). The first panel additionally shows the prediction accuracy of a regular NN (no dropout, light grey).
True and false positives for BNNs were computed based on posterior probabilities (PP $>$ 0.95; bars on the left).
For each model we summarized the results of 10 independent runs, trained on a different random subset of 5 of the total 11 virus subtypes, and the whiskers show the range of values derived from these 10 subsets.
Note that the third panel is plotted in log-space for better visibility of small values.
}
\label{virus_data_stats}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{figs/prediction_stats_plot_001_betasim.pdf}
\caption{Summary of BNN and NN predictions for dataset III, showing the test accuracy of the predictions (a), the true positive rates (b), and false positive rates for out-of-distribution data (c). The results are shown for BNNs with four priors (Uniform, Normal, truncated-Cauchy, and Laplace; color-coded as in Fig. \ref{figPriorPDF}) and an NN with MC dropout (dropout rate = 0.01). For each model we summarized the results of 10 independent runs, trained on a different random subset of 10 of the total 20 classes, and the whiskers show the range of values derived from these 10 subsets.}
\label{simulated_data_stats}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{figs/prediction_stats_plot_005_mnist.pdf}
\caption{Summary of BNN and NN predictions for dataset IV, showing the test accuracy of the predictions (a), the true positive rates (b), and false positive rates for out-of-distribution data (c). The results are shown for BNNs with four priors (Uniform, Normal, truncated-Cauchy, and Laplace; color-coded as in Fig. \ref{figPriorPDF}) and an NN with MC dropout (dropout rate = 0.05). For each model we summarized the results of 10 independent runs, trained on a different random subset of 5 of the total 10 classes, and the whiskers show the range of values derived from these 10 subsets.}
\label{mnist_data_stats}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{figs/virus_prior_freq.pdf}
\caption{Empirical prior probabilities of the 5 training classes across all instances within one of the 10 resampled virus datasets, obtained under four priors: Uniform, Normal, truncated-Cauchy, and Laplace. These plots show that all tested prior distributions applied to the BNN parameters result in unbalanced prior probabilities for the classes, although these probabilities are consistent among the 500 instances within the dataset. To produce these values, we ran an MCMC sampling the weights from their prior distributions. We then used the resulting models to predict labels for each sample (same as Fig. \ref{virus_data_stats}) and plotted the histograms of the resulting label probabilities of each class.}
\label{virus_prior_freq}
\end{figure}
\clearpage
\bibliographystyle{unsrt}
\section{Introduction}
The dynamic range is defined as the ratio of the intensity of the brightest point to the intensity of the darkest point in a scene or image. The dynamic range of a natural scene can go beyond
120 dB, which exceeds the dynamic range of almost all modern image sensors. The traditional approach for capturing a wide dynamic scene is to take multiple images with different exposures and fuse all these images together to form a wide dynamic range (WDR) image \cite{debevec1997recovering}. However, progress in image sensors has made direct capturing of wide dynamic range images possible. {Logarithmic response sensors \cite{bouvier2014logarithmic,kavadias2000logarithmic}, multimode sensors \cite{bae2016linear,storm2006extended}, capacitance adjustment sensors \cite{decker1998256,fossum2005high} and other technologies such as \cite{kronander2014unified} offer a possibility to extend the sensor dynamic range.}
However, the dynamic range of traditional display devices such as LCD, CRT and LED are usually limited to 8 bits, and hence in many cases it is impossible to properly reproduce the WDR image on the display directly. In order to close the gap of dynamic range difference between WDR image and LDR display devices, tone mapping algorithms were developed. Tone mapping algorithms, also called tone mapping operators (TMO), serve two purposes: the first one is to compress the WDR image to the dynamic range of display devices, and the second one is to generate {high quality images for different application scenarios.}
Based on the application, tone mapped images may require different attributes. For example, in photography human preference is the first priority, while in security and machine vision the satisfaction of certain image properties or visibility is the more important requirement.
A global tone mapping process applies a single global function to all pixels in the image, so that identical pixel values are given an identical output value within the range of the display device. Tumblin and Rushmeier \cite{tumblin1993tone} and Ward \cite{ward1994contrast} were the early researchers who developed global operators for tone mapping. Drago et al. \cite{drago2003adaptive} proposed an adaptive logarithmic mapping method that can change the base of the logarithmic function based on the brightness. Recently, Hor{\'e} et al.\cite{hore2014statistical,hore2014new} proposed a hybrid tone mapping algorithm and its hardware implementation \cite{ambalathankandy2016fpga} that takes into account image local statistics. Such kinds of algorithms can also be found in \cite{larson1997visibility,qiu2005optimal,tsai2012fast}. In general, global tone mapping algorithms are computationally easy to implement and mostly ``artifacts''-free, and they have unique advantages in hardware implementations.
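A minimal global operator of this kind is a plain logarithmic curve mapping world luminance into the display range; the sketch below is a generic illustration, not Drago et al.'s adaptive-base variant.

```python
import numpy as np

def global_log_tonemap(lum, l_max=None):
    # One global curve for every pixel: identical input luminances always
    # map to identical display values in [0, 1].
    l_max = lum.max() if l_max is None else l_max
    return np.log1p(lum) / np.log1p(l_max)

# Luminances spanning ~120 dB (ratio 10^6), as in a wide dynamic scene
wdr = np.array([0.01, 1.0, 100.0, 10000.0])
ldr = global_log_tonemap(wdr)
```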
However, the tone mapped images of these algorithms may suffer from low brightness, low contrast, or loss of details due to the global compression of the dynamic range. Local TMOs have therefore become the mainstream of tone mapping. Inspired by some features of the human visual system, some local tone mapping algorithms \cite{reinhard2005dynamic,van2006encoding,spitzer2003biological} try to mimic the dynamic range compression process of our photoreceptors.
Some researchers treat WDR compression as a constrained optimization problem. Mantiuk et al. \cite{mantiuk2008display} considered tone mapping as a minimum visible distortion problem. Ma et al. \cite{ma2014high} proposed a tone mapping method that optimizes the tone mapped image quality index. However, optimizing a single metric can hardly guarantee the best results. Additionally, solving a constrained optimization problem is computationally expensive and difficult to implement in real time.
In recent years, various algorithms have emerged based on the Retinex theory \cite{land1971lightness, herscovitz2004modified,meylan2006high}. Edge-preserving filters are used to separate the WDR image into illuminance and reflectance channels. The illuminance channel is regarded as the base layer, whose information is believed to be less important for our visual system, and hence its dynamic range is greatly compressed. On the other hand, the reflectance channel is treated as the detail layer, whose information is mostly preserved during tone mapping.
The tone mapped images using these edge-preserving filters give state-of-the-art quality \cite{durand2002fast,farbman2008edge,he2010guided, gu2013local, paris2015local}.
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.35]{Fig_1_new.png}
\end{center}
\caption{Tone mapping local blocks from Memorial Church image. {(a) Original WDR image (tone mapped for display).
(b) Logarithmic tone curve that maps the green block in (a) and the result block image.
(c) Logarithmic tone curve that maps the blue block in (a) and the result block image.
(d) Logarithmic tone curve that maps the red block in (a) and the result block image.
(e) Tone mapping curves of green, blue and red blocks in logarithmic domain.
Memorial radiance map courtesy of Paul Debevec, University of California at Berkeley.}}
\label{Local_Adaptation}
\end{figure}
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale = 0.65]{Fig_2.png}
\end{center}
\caption{Statistical measurements of the local image block dynamic range.}
\label{Fig_2}
\end{figure}
{We briefly introduced our new tone mapping algorithm based on multi-scale histogram synthesis (MS-Hist) in \cite{yang2017multi}. In this paper, we expand that work to include the motivation, a detailed description and analysis of the algorithm, and its optimization process. Moreover, comprehensive experiments and analyses, including an evaluation of a developed iOS application and objective and subjective assessments, were carried out and are presented here.}
{The MS-Hist algorithm can generate tone curves for every pixel of a WDR image based on local histograms. Some excellent work has been done using local histogram based tone curves. For example, Duan \textit{et al.} \cite{duan2010tone} proposed a tone mapping algorithm based on histogram adjustment which combines linear mapping and histogram equalized mapping. The adjustment is applied to non-overlapping local regions, and the final value of a pixel is a weighted average of the results from the tone mapping functions of adjacent regions. Boschetti \textit{et al.} \cite{boschetti2010high} reported an algorithm derived from the Contrast Limited Adaptive Histogram Equalization (CLAHE) technique; the algorithm adaptively controls the local contrast limit by combining the local mean and variance values.
Recently, Eilertsen \textit{et al.} \cite{eilertsen2015real} proposed an algorithm which employs histogram based locally adaptive tone curves and global tone curves to compress dynamic range and remove visible discontinuities.
However, beyond some similarities our algorithm shares with the mentioned works, it differs significantly in two ways. Firstly, all three works first split a WDR image into multiple regions and then interpolate tone curves for the pixels. The MS-Hist algorithm directly computes the per-pixel tone curve without any interpolation; moreover, this per-pixel tone curve computation does not increase our computational complexity because we take advantage of the integral image and integral histogram. Secondly, our final tone mapped pixel value is a weighted average over tone curves of multiple scales, where small scales are used to extract local detail and large scales are used to maintain global brightness consistency. The works of Duan \textit{et al.} \cite{duan2010tone} and Boschetti \textit{et al.} \cite{boschetti2010high} both use only a single scale. Although the work of Eilertsen \textit{et al.} \cite{eilertsen2015real} adopts a 2-scale tone-curve weighting mechanism, its main purpose is to reduce the artifacts caused by the discontinuity of the local tone curves. The discontinuity artifacts in our algorithm are inherently removed by the fusion process.}
This paper is organized as follows: Section II introduces the motivation of this work. Section III explains the algorithm and optimization process in detail. Experimental results and comparisons are reported in Section IV, and we conclude the paper in Section V.
\section{Motivation}
The human visual system (HVS) uses local adaptation to cope with the large dynamic range of real-world scenarios. Local adaptation is the ability to accommodate to the level of a certain visual field around the current fixation point \cite{larson1997visibility}. Experiments carried out by Stevens and Stevens \cite{stevens1963brightness} prove that the HVS can have different responses for the same luminance level under different backgrounds. For example, higher luminance points can be perceived as darker than lower ones when they are located in different backgrounds with uniform luminance. Their results also showed that overlapped reactions in the human visual system extend our visual response range to cope with high-contrast scenes in the real world. The {Weber-Fechner law} \cite{hecht1924visual} states that the relation between the actual change in luminance and the perceived change is logarithmic. A logarithmic function can compress the higher luminance values as well as increase the contrast and brightness of the low luminance values.
In \autoref{Local_Adaptation}, we apply the local adaptation and logarithmic processing principles to the memorial image. The positions of the three windows shown in \autoref{Local_Adaptation} (a) are randomly selected, and each window has a height and width of 80 pixels. We use three different functions (shown in (b), (c) and (d)) to adapt the local dynamic ranges of the three windows. The functions map the minimum pixel value $I_{min}$ and maximum pixel value $I_{max}$ of each window to 0 and 255, respectively. All three functions have the following form:
\begin{equation}
D(I) = a \cdot \log(I) - b
\label{linear log tone}
\end{equation}
where $I$ denotes the pixel value in each window. \autoref{Local_Adaptation} (e) shows the three functions in the logarithmic domain. It is obvious that the slope (the $a$ value) of the three functions varies greatly in order to adapt the three local windows. Despite the simple compression functions used for local adaptation, the details of each window are clearly visible (also shown in \autoref{Local_Adaptation} (b), (c) and (d)). The three windows have dynamic ranges of 25 dB, 43 dB and 90 dB, respectively. They are much smaller than the dynamic range of the whole WDR image, which is equal to 110.8 dB. To get a general idea about the local dynamic range level, we carried out a statistical measurement experiment.
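The per-window mapping of Eq. 1 can be sketched in a few lines; the function name and the toy block below are our own illustration under the paper's stated constraint that $I_{min}$ and $I_{max}$ map to 0 and 255, not the paper's code:

```python
import numpy as np

def log_tone_curve(block, d_max=255.0):
    """Fit D(I) = a*log(I) - b so that the block's minimum maps to 0
    and its maximum maps to d_max, then apply the curve to the block."""
    I = np.asarray(block, dtype=np.float64)
    lo, hi = np.log(I.min()), np.log(I.max())
    a = d_max / (hi - lo)   # slope in the log domain; varies per window
    b = a * lo              # offset so that D(I_min) = 0
    return a * np.log(I) - b

# a toy block spanning roughly 43 dB of dynamic range
block = np.array([[0.01, 0.1], [1.0, 1.4]])
mapped = log_tone_curve(block)
```

The slope $a = 255/(\log I_{max} - \log I_{min})$ is exactly the quantity that differs so strongly between the three windows discussed above.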
{We have collected over 200 WDR images from various sources including the accompanying
disk of \cite{reinhard2010high}, public web page of \cite{fattal2002gradient},
\cite{debevec1997recovering}, ETHyma database\footnote{http://ivc.univ-nantes.fr/en/databases/ETHyma/}, Ward database\footnote{http://www.anyhere.com/gward/hdrenc/pages/originals.html}
and Funt database\footnote{http://www.cs.sfu.ca/~colour/data/funt\_hdr/}. We used these WDR images because they are commonly used materials for WDR and tone mapping related research.}
For each WDR image, we randomly select 200 blocks of size 80 $\times$ 80, 120 $\times$ 120, 160 $\times$ 160, 200 $\times$ 200 and 240 $\times$ 240, respectively. We measure the average local dynamic range and standard deviation using an error bar,
the results are shown in \autoref{Fig_2}.
Local dynamic range increases with the size of the block. However, even when the block size reaches 240 $\times$ 240, the average local dynamic range does not exceed 40 dB.
Motivated by the local adaptation mechanism and the measured local dynamic range of WDR images, we believe that even a simple tone mapping algorithm applied to local areas can provide satisfactory tone mapping results. An apparent advantage of this local processing is that each local area can use the full display dynamic range and therefore better preserve details. In the following, we give the details of the proposed algorithm and its optimization process.
\begin{figure}[tb]
\begin{center}
\includegraphics[scale = 0.45]{Fig_3.png}
\end{center}
\caption{Tone mapping function based on histogram in logarithmic domain.}
\label{Hist_Equ}
\end{figure}
\section{The Proposed Algorithm}
\subsection{Local Adaptation with Histogram}
The histogram is a useful tool that takes the pixel distribution into account and has been used in previously reported tone mapping algorithms such as \cite{larson1997visibility}, \cite{duan2010tone} and \cite{eilertsen2015real}.
A histogram based tone mapping is shown in \autoref{Hist_Equ}, where $l$ represents the logarithmic luminance of the pixels, and $u$ is the luminance of the displayed value. The number of bins in the histogram is a user-defined parameter $n$. If $l_{max}$ and $l_{min}$ are the maximum and minimum values, then pixels within the same $(l_{max}-l_{min})/n$ interval fall into the same bin. The population of each bin determines the importance of the corresponding luminance levels. If one bin contains more pixels, we should assign more display levels to this bin so that details and contrast can be better preserved. Assuming $p_k$ is the population of the $k$-th bin, we assign the $k$-th bin a number of display levels proportional to its population while keeping the display levels monotonically increasing; the maximum display level for the $k$-th bin can be calculated as:
\begin{equation}
u_k = \frac{\sum_{i=1}^{k}p_i}{\sum_{i=1}^{n}p_i } \cdot 255
\end{equation}
The cumulative sum of the histogram is used as a piece-wise linear function, as shown in \autoref{Hist_Equ}. This function is locally adaptive and gives a logarithmic response.
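A minimal sketch of this cumulative-histogram tone curve follows; the helper name and the random test image are our own illustration of Eq. 2, not the paper's code:

```python
import numpy as np

def hist_tone_curve(l, n=5, d_max=255.0):
    """Map log-luminance values l through the cumulative histogram:
    bin k ends at display level u_k = d_max * (p_1+...+p_k) / sum(p)."""
    l = np.asarray(l, dtype=np.float64)
    counts, edges = np.histogram(l, bins=n)
    cdf = np.cumsum(counts) / counts.sum()
    u_k = np.concatenate(([0.0], d_max * cdf))  # display level at each bin edge
    return np.interp(l, edges, u_k)             # piece-wise linear in between

rng = np.random.default_rng(0)
l = np.log(rng.random((64, 64)) + 1e-3)
u = hist_tone_curve(l.ravel()).reshape(l.shape)
```

Because the cumulative sum is non-decreasing, the resulting curve is monotone, so pixel ordering is preserved while well-populated bins receive more display levels.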
To process a WDR image locally and adaptively, the simplest approach would be to divide the image into non-overlapping rectangular blocks, and compute the histogram, tone mapping function and tone mapped values of each block. \autoref{Fig_4} shows the tone mapping results using this method. As expected, these images show more details and local contrast in both dark and bright regions. However, there are obvious boundary artifacts and brightness inconsistencies between blocks which make the image unacceptable. In \autoref{Fig_4}, the boundary artifacts appear as visible edges between any two adjacent blocks, and the brightness inconsistency is visible in the snow area on the ground, where pixels are mapped to dark gray rather than white. The great variation between the histograms of two adjacent blocks leads to different tone mapping functions, which explains why pixels have different values on both sides of a boundary. The brightness inconsistency arises because the local minimal value is always mapped to 0 even if it represents high luminance; in other words, the local processing loses the global sense of the scene luminance. Besides, in uniform areas, pixels are more likely to fall into the same bin of the histogram; consequently, the slope $a$ in Eq. 1 becomes a very large number, which causes small fluctuations in uniform areas to be greatly exaggerated.
In order to remove the boundary artifacts,
{we plan for the adjacent blocks to have as much overlap as possible so that the variation of the histograms and tone mapping functions is minimal. Hence, we adopt a pixel-by-pixel tone mapping approach in our algorithm.}
For any pixel in a WDR image, we can always find a window $w$ whose center is that pixel. The pixel distribution and tone mapping function of the window, and the tone mapped value of the center pixel, can then be computed. If we process every pixel this way, the tone mapping functions of adjacent pixels will be almost the same, because the pixel distributions are highly similar between two windows with a one-pixel offset. However, the local histogram and the tone mapping function change with the scale of the window, and therefore the tone mapped value of the center pixel changes accordingly. \autoref{Fig_5} shows the tone mapped results using this pixel-by-pixel method with different window sizes. In \autoref{Fig_5} (a) and (d) the block size is the size of the image, and the block size is 1/2 and 1/4 of the image size for \autoref{Fig_5} (b, e) and \autoref{Fig_5} (c, f), respectively. We can see that in all images the boundary artifacts between blocks are gone. However, the brightness inconsistency issue remains for images tone mapped with scales smaller than the image size. In Fig. 5(b) the ground area pixels are mapped to low luminance values, and in Fig. 5(c) this area becomes totally dark.
In Fig. 5(a), the snow ground area is mapped to correct values because the window size is the same as the image size (in this case the tone mapping function is a global operator). Despite the brightness inconsistency in Fig. 5 (b) and (c), the contrast and brightness of details and texture areas increase. It is obvious that smaller window scales reveal more detail of the WDR image and make certain areas of the image brighter, while larger scales maintain better global brightness consistency.
\begin{figure}[tb]
\begin{center}
\includegraphics[scale = 0.27]{Fig_4.pdf}
\end{center}
\caption{Tone mapping results using the block-by-block method. Each
block shows good contrast and brightness. Boundary artifacts are visible at
the block edges; brightness inconsistency is also visible at some parts of the images. Radiance maps courtesy of corresponding author(s).}
\label{Fig_4}
\end{figure}
\begin{figure}[tb]
\begin{center}
\includegraphics[scale = 0.39]{Fig_5.pdf}
\end{center}
\caption{Tone mapping results using the pixel-by-pixel method. From the leftmost column to the right, the block size is the image size, 1/2 of the image size, and 1/4 of the image size, respectively.}
\label{Fig_5}
\end{figure}
\begin{figure}[tb]
\begin{center}
\includegraphics[scale = 0.399]{Fig_6.pdf}
\end{center}
\caption{$a_k$ values computed at different scales. (a) $w$ is the image size. (b) $w$ is 1/2 of the image size. (c) $w$ is 1/4 of the image size. (d) $w$ is 1/8 of the image size. (e) $w$ is 1/16 of the image size. (f) $w$ is 1/32 of the image size.
{The image is padded to handle boundary pixels.}
}
\label{Fig_6}
\end{figure}
\subsection{Multi-scale Fusion}
It is important to maintain brightness consistency and also preserve details during tone mapping, so that a good image can be obtained. Based on the analysis in Section III-A, we propose to tone map pixels in detail and texture areas with smaller scales, while larger scales are used for tone mapping pixels in uniform areas. In order to achieve this goal, we first need to detect uniform and texture areas in WDR images. There are a number of statistics, such as entropy or measures of dispersion, that can be used for detecting uniform areas. In our case, we use the following function from the recently proposed guided image filter \cite{he2010guided} to perform the task.
\begin{equation}
a_k = \frac{\sigma_w^2}{\sigma_w^2 + \epsilon}
\label{m-e representation}
\end{equation}
where $\sigma_w^2$ is the variance computed over window $w$. For window $w$, $a_k$ gives a score to its center pixel and measures whether $w$ is ``texture" or ``uniform". If the variance $\sigma_w^2$ is much larger than $\epsilon$, the $a_k$ value will be close to 1, which indicates that the center pixel of window $w$ is in a texture area (called a high variance patch in \cite{he2010guided}) in which pixel values vary significantly. If the variance is much less than $\epsilon$, the $a_k$ value will be close to 0, and the center pixel is regarded as belonging to a uniform area (called a flat patch in \cite{he2010guided}) in which pixel values are mostly the same. Therefore, we call this factor the textural score.
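The textural score of Eq. 3 for a single window can be sketched as follows; the function name and the toy patches are our own illustration:

```python
import numpy as np

def textural_score(window, eps=0.1):
    """a_k = var / (var + eps): near 0 for flat patches,
    near 1 for high-variance (texture) patches."""
    var = np.var(window)
    return var / (var + eps)

flat = np.full((9, 9), 0.5)                     # a uniform patch
rng = np.random.default_rng(0)
textured = rng.uniform(0.0, 10.0, size=(9, 9))  # a high-variance patch
```

The regularizer $\epsilon$ sets the variance scale at which a patch starts counting as texture, which is why decreasing $\epsilon$ reveals finer detail later in the parameter study.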
The variance and $a_k$ values change with the size of window $w$, which gives us the ability to detect uniform and texture areas at different scales. \autoref{Fig_6} shows the computed $a_k$ values for different window sizes. In these images, the brighter the pixels, the higher the corresponding $a_k$ values. It can be seen that the window size affects the values greatly. With large window sizes, the images are blurry and only the skeleton can be seen. As the window size reduces, fine detail and texture gradually appear. For example, in \autoref{Fig_6} (a-c) only a rough skeleton is visible, but in \autoref{Fig_6} (d-f), where the window sizes are much smaller, the leaf texture of the tree and object edges are detected.
{Unlike other weight functions such as entropy or the one proposed by Duan \textit{et al.} \cite{duan2010tone}, the proposed weight function has the unique feature that its computation becomes very simple when taking advantage of the integral image technique. We give more detail in the following section.}
If a pixel is in a texture area, we ideally want it to be tone mapped with a smaller window size, because this reveals more detail and contrast. If a pixel is in a uniform area, we ideally want it to be tone mapped with a large window size, because this maintains brightness consistency and introduces no artifacts. We fulfill this goal with the following equation:
\begin{equation}
u = \frac{\sum_{i=1}^{s-1}a_{w_i}^iu_{w_i}}{\sum_{i=1}^{s-1}a_{w_i}^i}
\label{histogram-equal-adjusted}
\end{equation}
For each pixel in the WDR image, $u_{w_i}$ and $a_{w_i}$ represent the tone mapped value and the textural score computed at scale $w_i$. $s$ is the number of scales that are used.
Considering a pixel in a uniform area, $a_{w_i}$ will be close to $0$, and $a_{w_i}^i$ will approach $0$ even more rapidly because of the exponent. This greatly reduces the weight of the corresponding $u_{w_i}$ value. However, if a pixel is in a texture area, then $a_{w_i}$ will be close to 1, and $a_{w_i}^i$ will also be close to 1, which gives more weight to the detail from small scales.
{The fusion function intuitively assigns more weight to large scales and less weight to small scales through the different exponent values in Eq. 4. Tone curves generated at larger scales are more global; they do not introduce artifacts, as shown in Fig. 5. Small scales are more local, and the corresponding images are more likely to have artifacts. The biased weighting of large and small scales minimizes any visible artifacts.}
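The fusion of Eq. 4 can be sketched as below. We assume here that index $i=1$ corresponds to the largest scale, which is our reading of the exponent scheme rather than something the text states explicitly; the function name and the toy one-pixel example are likewise ours:

```python
import numpy as np

def fuse(u_scales, a_scales):
    """Per-pixel fusion u = sum_i a_i**i * u_i / sum_i a_i**i.
    Assumption: u_scales[0] is the largest scale (exponent 1), so
    higher exponents increasingly suppress the small scales."""
    u = np.asarray(u_scales, dtype=np.float64)  # shape (s-1, H, W)
    a = np.asarray(a_scales, dtype=np.float64)
    i = np.arange(1, u.shape[0] + 1).reshape(-1, 1, 1)
    w = a ** i
    return (w * u).sum(axis=0) / w.sum(axis=0)

# large scale maps a uniform pixel to 100, small scale (wrongly) to 200;
# the low textural score at the small scale keeps the fused value near 100
u_scales = np.array([[[100.0]], [[200.0]]])
a_scales = np.array([[[0.9]], [[0.1]]])
fused = fuse(u_scales, a_scales)
```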
\subsection{Optimization}
The proposed algorithm needs to compute the variance and histogram at multiple scales for each pixel location. This is a time-consuming process, especially since the resolutions of current WDR images are mostly in the megapixel range. Optimizing the process is inevitable if an acceptable processing time is required. Here, we adopt the integral image \cite{viola2004robust} and integral histogram \cite{porikli2005integral} to reduce the complexity and processing time of the variance and histogram computations, respectively.
The variance value $\sigma_w^2$ can be computed by:
\begin{equation}
\sigma_w^2 = \frac{|w|\sum_w{i^2} - (\sum_w{i})^2}{|w|^2}
\label{variance}
\end{equation}
where $i$ is a pixel of the WDR image and $|w|^2$ is the number of pixels in window $w$. We accelerate the above calculation with the help of the integral image. If $T$ is the integral image of the WDR image, then the value at any location $T(x, y)$ is calculated by:
\begin{equation}
{T}(x, y) = \sum_{x' \leq x, \ y' \leq y}i(x', y')
\end{equation}
A great feature of the integral image $T$ is that the summation over any rectangular region of the original image can be computed with just four lookups. For example, if there are four points $A (x_0, y_0)$, $B (x_1, y_0)$, $C (x_1, y_1)$ and $D (x_0, y_1)$ in image $i$, the sum over the rectangle enclosed by the four points is equal to:
\begin{equation}
\sum_{\substack{x_0 < x \le x_1 \\ y_0 < y \le y_1}}i(x, y) = {T}(A) + {T}(C) - {T}(B) - {T}(D)
\label{integral sum}
\end{equation}
For fast computation of Eq. 5, we first compute the integral images of $i$ and $i^2$. Then, the summations $\sum i^2$ and $\sum i$ over any window $w$ can be replaced with simple addition and subtraction operations, as shown in Eq. 7.
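A sketch of this integral-image shortcut follows; the zero-padded border convention and the function names are our own, and the window variance is recovered from the two sums alone via the variance identity above:

```python
import numpy as np

def integral(img):
    """Padded integral image: T[y, x] = sum of img[:y, :x]."""
    T = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    T[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return T

def window_sum(T, y0, x0, y1, x1):
    """Sum over img[y0:y1, x0:x1] with four lookups."""
    return T[y1, x1] - T[y0, x1] - T[y1, x0] + T[y0, x0]

rng = np.random.default_rng(1)
img = rng.random((32, 32))
T1, T2 = integral(img), integral(img ** 2)
n = 8 * 8                                       # pixels in an 8x8 window
s1 = window_sum(T1, 4, 4, 12, 12)               # sum of i
s2 = window_sum(T2, 4, 4, 12, 12)               # sum of i^2
var = (n * s2 - s1 ** 2) / n ** 2               # window variance from the two sums
```

After the one-time cumulative-sum pass, the per-window cost is constant regardless of window size, which is what makes the per-pixel, multi-scale scores affordable.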
To facilitate the computation of histogram of any window size,
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale = 0.35]{ParamVar.pdf}
\end{center}
\caption{Tone mapped images obtained with various $n$ and $\epsilon$ values.}
\label{ParamsChange}
\end{figure}
we first construct an $n$-channel integral histogram $H$ where the $k$-th channel $H_k$ is an integral image. It is computed by:
\begin{equation}
H_k(x, y) = \sum_{\substack{x' \leq x, \ y'\leq y }} i_k(x',y')
\end{equation}
where
\begin{equation}
i_k (x', y')=\left\{
\begin{array}{rcl}
1 & & {i(x', y') \in b_k}\\
0 & & {i(x', y') \notin b_k}
\end{array} \right.
\end{equation}
$b_k$ is the $k$-th bin of the histogram. After $H$ is computed, the population of the $i$-th bin can be obtained instantly, again using simple additions and subtractions:
\begin{equation}
p_i = H_i(A) + H_i(C) - H_i(B) - H_i(D)
\label{bin population}
\end{equation}
After the $p_i$ values are computed, the tone mapping function can easily be obtained using Eq. 2.
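The integral-histogram query can be sketched as below; this is our own illustration, and the right-inclusive handling of the last bin (matching common histogram semantics) is an assumption:

```python
import numpy as np

def integral_histogram(img, edges):
    """One padded integral image per histogram bin; a window's bin
    populations then cost four lookups per bin."""
    n = len(edges) - 1
    H = np.zeros((n, img.shape[0] + 1, img.shape[1] + 1))
    for k in range(n):
        mask = (img >= edges[k]) & (img < edges[k + 1])
        if k == n - 1:                    # make the last bin right-inclusive
            mask |= img == edges[-1]
        H[k, 1:, 1:] = mask.cumsum(0).cumsum(1)
    return H

def window_hist(H, y0, x0, y1, x1):
    """Bin populations p_k of the window img[y0:y1, x0:x1]."""
    return H[:, y1, x1] - H[:, y0, x1] - H[:, y1, x0] + H[:, y0, x0]

rng = np.random.default_rng(2)
img = rng.random((16, 16))
edges = np.linspace(0.0, 1.0, 6)          # n = 5 bins
H = integral_histogram(img, edges)
p = window_hist(H, 2, 3, 10, 11)          # populations of an 8x8 window
```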
The integral image and integral histogram need to be computed only once and are reused for all scales. Subsequent computations are mostly simple addition and subtraction operations {which can be carried out in a matrix parallel fashion.}
Additional parallelization can further optimize the proposed algorithm because the computations of the different scales are independent and can also be performed in parallel. {The use of the integral image and integral histogram greatly reduces the per-pixel tone curve computation burden, because the majority of the algorithm can be computed in parallel.}
\section{Implementation and Experimental Results}
\subsection{Implementation}
WDR color images have three channels (red, green and blue), but our algorithm operates on the luminance channel. Consequently, we first convert the WDR image to a luminance image using \autoref{RGBtoGray} below:
\begin{equation}
L = 0.299R + 0.587G + 0.114B
\label{RGBtoGray}
\end{equation}
The color information is restored by using the following equation \cite{fattal2002gradient}:
\begin{equation}
C_{out} = (\frac{{C_{in}}}{L_{in}})^{sat} \cdot L_{out}
\label{Color}
\end{equation}
where $C = R, G, B$ represents the red, green and blue channels, respectively. $L_{in}$ and $L_{out}$ denote the luminance before and after tone mapping. The parameter $sat$ controls the color saturation of the resulting image.
{
If it is set too small, the resulting image looks pale, but if it is set too large, the color becomes oversaturated. \cite{fattal2002gradient} found that when this parameter is between 0.4 and 0.6, the results are satisfactory; Gu \textit{et al.} \cite{gu2013local} set it to 0.6 in their experiments. In our experiments, we also set $sat = 0.6$ because it gives good color performance for most images.}
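The luminance conversion and color restoration steps can be sketched together as below; the function name and the `tone` callback (which stands in for the full MS-Hist pipeline) are our own:

```python
import numpy as np

def tone_map_color(rgb, tone, sat=0.6):
    """Tone map the luminance channel, then restore color with the
    ratio (C_in / L_in)**sat * L_out, following the two equations above."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    L_in = 0.299 * R + 0.587 * G + 0.114 * B
    L_out = tone(L_in)
    return (rgb / L_in[..., None]) ** sat * L_out[..., None]

# a gray image stays gray: C_in / L_in = 1 in every channel,
# so the output equals L_out regardless of sat
gray = np.full((2, 2, 3), 0.5)
out = tone_map_color(gray, tone=lambda L: 2.0 * L)
```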
The number of scales $s$ is the most important parameter in the proposed algorithm. As stated previously, in order to maintain the image brightness consistency, the block size of the largest scale should be the same as the image size. To obtain the available details at different scales, we adopt the popular image pyramid method. Therefore, for any two adjacent scales, the block size $w_{i+1}$ is computed as $w_i/2$.
In many algorithms, $64 \times 64$ is considered a small enough block size for image processing. Hence, we choose the number of scales so that the smallest block size is equal to or less than $64 \times 64$. Considering a $2000 \times 2000$ WDR image, 6 scales are enough to make the smallest block size satisfy this criterion.
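The halving scheme for choosing the scales can be sketched as follows (the function name is ours); for a $2000 \times 2000$ image it yields the 6 scales mentioned above:

```python
def pyramid_scales(height, width, smallest=64):
    """Halve the window from the full image size until both dimensions
    are at most `smallest`, as in the image pyramid described above."""
    scales = [(height, width)]
    while scales[-1][0] > smallest or scales[-1][1] > smallest:
        h, w = scales[-1]
        scales.append((h // 2, w // 2))
    return scales

scales = pyramid_scales(2000, 2000)
```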
Two other free parameters are involved in our algorithm: the number of bins $n$ and the regularization term $\epsilon$. The effect of these parameters on a tone mapped image is shown in \autoref{ParamsChange}. Nine tone mapped images are presented in a matrix with $n$ varying vertically and $\epsilon$ varying horizontally. The overall image contrast increases with $n$, while more local details are revealed as $\epsilon$ decreases. We find that $n = 5$ and $\epsilon = 0.1$ usually produce satisfactory results: good brightness while preserving local contrast and details.
\subsection{Customized Application}
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale = 0.12]{Fig_9.pdf}
\end{center}
\caption{WDR image captured by phone camera and our application. (a-d) LDR image captured by phone camera. (e) merged image with iPhone 6's built-in HDR feature.
(f) WDR image captured by our developed application.}
\label{Fig_9}
\end{figure}
We implemented our algorithm on both the Android and iOS platforms. The developed application is called \textit{CaptureWDR}. The application first captures four sequential images under different exposures and merges them into a WDR image with Paul Debevec's method \cite{debevec2008recovering}. The WDR image is then tone mapped with the proposed algorithm. \autoref{Fig_9} (a)-(d) show four LDR images taken under different exposures with a phone camera. The exposure time increases steadily, so that both the highlighted cloudy sky and the low-luminance roof are captured across the different images. \autoref{Fig_9} (e) shows the image captured with the iPhone 6's built-in HDR feature. \autoref{Fig_9} (f) shows the tone mapped image of our application.
Our result is significantly brighter than \autoref{Fig_9} (e) and shows more detail, especially on the roof and the parking lot on the right side of the image. In the iOS and Android implementations, we set the number of scales $s = 5$, the number of bins $n = 5$ and $\epsilon = 0.1$ as default values. The response time of our application is less than a second on an iPhone 6, including WDR merging, tone mapping and display. We have not yet adopted any parallelism or GPU computing in our program; the response time would be reduced significantly with parallel programming. We have tested our application in various lighting situations such as sunny outdoors, dark indoors and shaded areas. The resulting images are satisfactory and exhibit good contrast and brightness.
\begin{figure*}[tbh]
\begin{center}
\includegraphics[scale = 0.38]{Compare-Memorial.pdf}
\end{center}
\caption{Comparison of the reproduced memorial church WDR image between our algorithm and other algorithms. (a) Result taken from \cite{reinhard2002photographic}. (b) Result taken from \cite{fattal2002gradient}. (c) Result taken from \cite{drago2003adaptive}. (d) Result of \cite{durand2002fast}. (e) Result of \cite{meylan2006high}. (f) Result of \cite{paris2015local}. (g) Result of \cite{gu2013local}. (h) Result of our proposed MS-Hist algorithm.}
\label{compare-memorial}
\end{figure*}
\subsection{Comparison Results}
Our proposed algorithm is compared with other reported works. We first choose the famous memorial church image for comparison because it is one of the most commonly used images for testing tone mapping algorithms.
\autoref{compare-memorial} shows the image tone mapped with eight different tone mapping algorithms, including ours. The seven other algorithms are state-of-the-art tone mapping algorithms, and all produce effective tone mapping results. Among the eight images, only (b), (g) and our result show an overall bright scene; the other images are dark, and the details of the upper left corner are lost. In images (b), (d) and (f), the upper left corner is visible, but only in images (g) and (h) are the details completely presented. The contrast near
the window area of the memorial church image is particularly challenging; this part is zoomed and also shown in \autoref{compare-memorial}. In images (a)-(d), this area is mostly saturated, especially the rightmost window: the paintings between the windows suffer from loss of contrast. In images (e)-(h), the window area is no longer saturated, and the painting on the rightmost window can be seen clearly. Among those four images, ours shows the best contrast, especially on the paintings between the windows.
The memorial image example in \autoref{compare-memorial} shows that our algorithm can produce visually bright and high-contrast images. Although our image may suffer some loss in naturalness, it is clear that this is in exchange for visibility, which is more important and critical. The following assessment, along with the TMQI \cite{yeganeh2013objective} scores in later experiments, validates that the tone mapped images are of overall high quality.
In the following, we use three simple quality measures to assess the performance of our tone mapping algorithm: the brightness value $\alpha$, the sharpness value $\beta$ and the standard deviation, which measure the image brightness, details and contrast, respectively.
The brightness value $\alpha$ and sharpness value $\beta$ are computed using the following equation:
\begin{equation}
{
\alpha =\frac{1}{N} \sum F, \ \ \ \beta = \sum |\nabla F|}
\label{local contrast}
\end{equation}
where $N$ is the number of pixels in the final tone mapped image $F$.
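A sketch of these two measures follows; this is our own illustration, and reading $|\nabla F|$ as the per-pixel gradient magnitude is an assumption:

```python
import numpy as np

def brightness_sharpness(F):
    """alpha: mean pixel value; beta: summed gradient magnitude,
    with |grad F| taken as the per-pixel gradient magnitude."""
    F = np.asarray(F, dtype=np.float64)
    gy, gx = np.gradient(F)
    return F.mean(), np.hypot(gx, gy).sum()

# a perfectly flat image: full brightness contribution, zero sharpness
alpha, beta = brightness_sharpness(np.full((4, 4), 7.0))
```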
The comparison results for the images in \autoref{compare-memorial} are shown in \autoref{overall compare}. The results are consistent with our visual experience: our tone mapped image obtains the highest sharpness and contrast values. In the psychophysical experiment, both Gu \textit{et al.}'s and our algorithm achieve high scores.
\begin{table}
\centering
\vfill
\caption{\textsc{Quantitative Measurements On \autoref{compare-memorial}}}
\begin{tabular}{ |l | l | l | l| }
\hline
\hline
Image & \tabincell{l}{ Sharpness} & Brightness & Contrast\\
\hline
Fig. 9(a) Reinhard \cite{reinhard2002photographic} & 8.4697& 91.3766 & 53.7200 \\
Fig. 9(b) Fattal \cite{fattal2002gradient} & 13.0405 & \textbf{117.1412} & 58.1158 \\
Fig. 9(c) Drago \cite{drago2003adaptive} &5.8122 & 85.4665& 47.7418\\
Fig. 9(d) Durand \cite{durand2002fast} & 7.6244 & 73.7916 & 48.5908 \\
Fig. 9(e) Meylan \cite{meylan2006high} & 8.461 & 104.6918 & 54.3960 \\
Fig. 9(f) Paris \cite{paris2015local} & 9.4782 & 85.2719 & 47.7444 \\
Fig. 9(g) Gu \cite{gu2013local} & 13.1790 & 110.0256 & 68.5587\\
\tabincell{l}{Fig. 9(h) Currently prop- \\osed algorithm MS-Hist} & \textbf{14.8941} & 115.1672 & \textbf{70.6124}\\
\hline
\end{tabular}
\label{overall compare}
\end{table}
Objective assessment of tone mapping algorithms is important. In recent years, indices such as the tone-mapped image quality index (TMQI) \cite{yeganeh2013objective}, the feature similarity index
for tone-mapped images (FSITM) \cite{nafchi2015fsitm}, the blind tone-mapped quality index (BTMQI) \cite{gu2016blind}, HDR-VDP \cite{mantiuk2011hdr} and DRIM \cite{gu2013local} were proposed to provide a single score for evaluating tone mapping algorithms. These indices mostly consider one or two aspects, such as the naturalness and structure of the tone mapped images, and give corresponding scores. However, the evaluation of images is also a psychophysical process determined by human observers, and it can hardly be fully described by only a few metrics. {In our experiment, we use both a psychophysical experiment and objective assessment to evaluate the different tone mapping algorithms.} We conduct the psychophysical experiment first and then make an objective assessment with TMQI.
We select 20 images for a psychophysical experiment. These images are standard WDR radiance maps that are widely used in tone mapping algorithm tests. All images and the corresponding WDR files can be found on our project site\footnote{https://github.com/jieyang1987/Tone-Mapping-Based-on-Multi-scale-Histogram-Synthesis}. In our experiment, {we give each participant a website address\footnote{https://surveyhero.com/c/53b8aa3} which links to our on-line survey. The participants do not have any prior knowledge of the tested algorithms. There are no constraints on where and when they take the survey, and they can use whatever device comes in handy, such as a mobile phone, tablet or computer screen. Unlike experiments that are carried out in a controlled environment, ours gives a more robust result reflecting how the algorithm performs in real life. In the on-line survey,}
anticipates were shown 20 questions, each question is related to four images that had been tone mapped with different algorithms. The four images were randomly marked with (a), (b), (c) and (d). Observers were asked to choose the image that they prefer the most. Observers were allowed to choose multiple images if they thought the images were equally pleasant. A total number of 129 volunteers anticipated to the psychophysical experiment. The results are summarized in \autoref{psychophysical_experimentation}. Overall, we got the most votes for 14 images and a total number of 877 votes. Gu \textit{et al.}'s algorithm \cite{gu2013local} won in four images, and it got 544 votes. After the experiment, we found that observers are more likely to choose images exhibit more brightness and contrast.
{We use the TMQI index to assess the 20 images used in the psychophysical experiment. The results are listed in \autoref{objective_score}. Of the 20 images, our algorithm achieves the highest TMQI score on 10 images. Paris \textit{et al.}, Gu \textit{et al.}, and Durand \textit{et al.} achieve the highest TMQI scores on 7, 2, and 1 images, respectively. MS-Hist also achieves the highest average TMQI value among the four algorithms. The TMQI score gives a rough idea of the overall quality, but it is not always precise. For example, the \textit{AtriumMorning} image tone mapped by our algorithm is the most preferable result in the psychophysical experiment, yet it has the lowest TMQI value. We show all the tested images in our supplementary material.}
\begin{table}
\centering
\vfill
\caption{\textsc{Statistic Results of Psychophysical Experimentation}}
\begin{tabular}{ l c c c c}
\hline
\hline
Image & \tabincell{c}{Durand \textit{et al.} \\ \cite{durand2002fast}} & \tabincell{c}{Paris \textit{et al.} \\ \cite{paris2015local}} & \tabincell{l}{Gu \textit{et al.} \\ \cite{gu2013local}} & Proposed \\
\hline
AtriumMorning & 2 & 8 & 30 & \textbf{84} \\
AtriumNight & 4 & 16 & 40 & \textbf{56} \\
belgium & 1 & 3 & \textbf{56} & 43 \\
cathedral & 13& 30 & \textbf{34} & 28 \\
crowfoot & 6 & 33 & 13 & \textbf{42} \\
designCenter & 4 & 6 & \textbf{43} & 42 \\
groveD & 8 & 5 & 31 & \textbf{47} \\
memorial & 17& 20 & 29 & \textbf{31} \\
moraine1 & 2 & 29 & 14 & \textbf{45} \\
moraine2 & 1 & 10 & 36 & \textbf{46} \\
orion & 9 & 17 & 29 & \textbf{38} \\
tmN & 19& 13 & \textbf{33} & 29 \\
vernicular & 5 & 10 & 23 &\textbf{54} \\
vinesunset & 2 & 21 & 31 &\textbf{41} \\
rend01 & \textbf{37} & 22 & 6 & 24 \\
Rockies3b & 3 & 16 & 12 & \textbf{56} \\
moto & 19 & \textbf{42} & 2 & 27 \\
tinterna & 2 & 16 & 12 & \textbf{61} \\
nancy\_cathedral & 3 & 6 & 38 & \textbf{44} \\
garage & 5 & 15 & 32 & \textbf{39} \\ \hline \hline
Total & 162 & 338 & 544 & \textbf{877}\\
\hline
\end{tabular}
\label{psychophysical_experimentation}
\end{table}
\begin{table}
\centering
\vfill
\caption{\textsc{TMQI Scores for the Tested Images}}
\begin{tabular}{ l c c c c}
\hline
\hline
Image & \tabincell{c}{Durand \textit{et al.} \\ \cite{durand2002fast}} & \tabincell{c}{Paris \textit{et al.} \\ \cite{paris2015local}} & \tabincell{l}{Gu \textit{et al.} \\ \cite{gu2013local}} & Proposed \\
\hline
AtriumMorning & \textbf{0.9722} & 0.9315 & 0.8797 & 0.8669 \\
AtriumNight & 0.8316 &\textbf{0.9306} & 0.9194& 0.9146\\
belgium & 0.8183 & 0.8650 & 0.9400& \textbf{0.9428} \\
cathedral & 0.8586 & 0.8933 & 0.8900 & \textbf{0.9154} \\
crowfoot & 0.8187 & \textbf{0.9119} &0.8705 & 0.8897 \\
designCenter & 0.7334 & 0.7984 & 0.8753 & \textbf{0.9529} \\
groveD & 0.9236 & 0.9160 & 0.8350 & \textbf{0.9601} \\
memorial & 0.8689 & \textbf{0.9034} & 0.8521 & 0.8476 \\
moraine1 & 0.8317 & 0.9025 & 0.8622 & \textbf{0.9027} \\
moraine2 & 0.8076 & 0.8666 & \textbf{0.9373}& 0.9230 \\
orion & 0.7016 & 0.7014 & 0.7655 & \textbf{0.7817} \\
tmN & 0.7898 & 0.8435 & 0.8054 & \textbf{0.8962} \\
vernicular & 0.8681 & \textbf{0.9440} & 0.9170 &0.9306 \\
vinesunset & 0.8130 & 0.8628 & 0.7847 &\textbf{0.8821} \\
rend01 & 0.9366 & \textbf{0.9400} & 0.7545 & 0.9074 \\
Rockies3b & 0.8328 & 0.8957 & 0.8422 & \textbf{0.9385} \\
moto & 0.8140 & \textbf{0.8692} & 0.8200 & 0.7619 \\
tinterna & 0.8586 & \textbf{0.9589}& 0.9431 & 0.9566\\
nancy\_cathedral & 0.7666 & 0.8365 & 0.9612 & \textbf{0.9657} \\
garage & 0.7551 & 0.8379 & \textbf{0.9656} &0.9605 \\ \hline \hline
Average & 0.8300 & 0.8805 & 0.8710 & \textbf{0.9048}\\
\hline
\end{tabular}
\label{objective_score}
\end{table}
\section{Conclusion and Further Work}
In this paper, we have presented a novel tone mapping algorithm based on multi-scale histograms. Histograms of different scales correspond to different tone mapping functions, where functions derived from large-scale histograms are used to maintain image brightness consistency and functions derived from small-scale histograms are used to preserve local details. A fusion function was proposed to combine the large-scale and small-scale functions so that both WDR image brightness consistency and local details are kept in the final tone mapped image. The proposed algorithm is also optimized to reduce computational complexity and processing time. We have compared our algorithm with several other tone mapping algorithms through objective and psychophysical experiments. The results show that our algorithm can produce appealing images with high brightness and high contrast.
Our future work will be to improve the user interface and image stabilization, implement a GPU version of the algorithm, and apply the algorithm to fields such as bio-medical imaging, infrared imaging, and security applications.
\section{Acknowledgement}
The authors would like to thank the Alberta
Innovates Technology Futures (AITF) and Natural Sciences and
Engineering Research Council of Canada (NSERC) for supporting this research. The authors would like to thank Nasir Mohamed Osman and Douglas Michael McDonald for developing the Android and iOS applications. The authors also would like to thank Dr. Alain Hor\'e for providing valuable suggestions on the writing of this paper. Finally, the authors would like to thank all participants of the psychophysical experiment.
{\small
\bibliographystyle{ieeetr}
\section{Introduction}
One of the most celebrated results in spectral geometry is due to Hermann Weyl in 1911: given any bounded domain $\Omega$ in $\mathbb{R}^n$, one can obtain its volume by understanding the eigenvalue distribution of the Laplacian on $\Omega$. This result, usually referred to as Weyl's law, has since been extended to the Laplace-Beltrami operator and other elliptic operators on Riemannian manifolds \cite{Stanton}.
However, as the Kohn Laplacian $\Box_b$ is not elliptic, it is an open problem to find an analog of Weyl's law for the Kohn Laplacian on functions for CR manifolds.
Stanton and Tartakoff \cite{Stanton1984TheHE} presented a version of Weyl's law for the Kohn Laplacian on $q$-forms, excluding the cases of functions and forms of top degree. Motivated by this problem, the authors of \cite{REU2020Weyl} prove an analog of Weyl's law for the Kohn Laplacian on functions on odd-dimensional spheres. Moreover, they conjecture that their result generalizes to compact strongly pseudoconvex embedded CR manifolds of hypersurface type.
In this paper, we first generalize the aforementioned result to lens spaces. In particular, we show that one can hear the order of the fundamental group of a lens space. Moreover, the universal constant appearing in the asymptotic behavior of the eigenvalue counting function for the Kohn Laplacian matches the universal constant computed on spheres, providing further evidence for the conjecture above.
The second portion of this paper is inspired by the 1979 result of Ikeda and Yamamoto \cite{ikeda1979} which shows that if two 3-dimensional lens spaces with fundamental group of order $k$ are isospectral with respect to the standard Laplacian, then they are isometric as Riemannian manifolds. That is, one can hear the shape of a lens space from its standard Laplacian.
We investigate the analogous question: does the spectrum of the Kohn Laplacian determine when two lens spaces are CR isometric? We say that two CR manifolds are CR isospectral if the spectra of the Kohn Laplacian on the two manifolds are identical. We suspect that, as in the Riemannian setting, one can hear the shape of a 3-dimensional lens space as a CR manifold from its Kohn Laplacian. As a partial result towards this conjecture, we show that two CR isospectral lens spaces must have fundamental groups of equal order. Moreover, we show that if the order is prime and the dimension is $3$, being CR isospectral is equivalent to being CR isometric.
The organization of this paper is as follows. We first prove the analog of Weyl's law for the Kohn Laplacian on lens spaces. The proof follows from a careful counting of solutions to a system of Diophantine equations and an asymptotic analysis of binomial coefficients. Next, we prove the result on isospectral lens spaces by using a chain of equivalences and extending some techniques of Ikeda and Yamamoto.
\section{Preliminaries}
Let $S^{2n-1}$ denote the unit sphere in $\mathbb{C}^n$. We begin by formally defining lens spaces.
\begin{definition}\label{def:lens_space}
Let $k$ be a positive integer and let $\ell_1,\ldots, \ell_n$ be integers relatively prime to $k$. Let $\zeta = e^{2\pi i/k}$. Define the map $g: S^{2n-1}\to S^{2n-1}$ by
\[g\left(z_1,\ldots, z_n\right) = \left(\zeta^{\ell_1} z_1,\ldots,\zeta^{\ell_n} z_n\right). \]
Let $G$ be the cyclic subgroup of the unitary group $U\left(n\right)$ generated by $g$. The \emph{lens space} $L\left(k;\ell_1,\ldots,\ell_n\right)$ is the quotient space, $G\setminus S^{2n-1}$.
\end{definition}
We now consider some preliminary results that are necessary in our analysis of the spectrum of the Kohn Laplacian on lens spaces. First we present Folland's calculation of the spectrum of $\Box_b$ on spheres.
\begin{theorem}[\cite{Folland}]\label{thm:spectrum_box_b} The space of square integrable functions on $S^{2n-1}$ yields the following spectral decomposition for $\square_b$,
\[L^2\left(S^{2n-1}\right) =\bigoplus_{p,q\geq 0} \calH_{p,q}\left(S^{2n-1}\right),\]
where $\calH_{p,q}\left(S^{2n-1}\right)$ denotes the space of harmonic polynomials on the sphere of bidegree $\left(p,q\right)$. Furthermore, $\calH_{p,q}\left(S^{2n-1}\right)$ has corresponding eigenvalue $2q\left(p+n-1\right)$ and
\begin{align*}
\dim \calH_{p,q} \left(S^{2n-1}\right)
=\left( \frac{p+q}{n-1}+1\right)\binom{p+n-2}{n-2}\binom{q+n-2}{n-2}.
\end{align*}
\end{theorem}
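Folland's formula makes the spectrum easy to tabulate: as a sanity check, the following sketch (in Python, assuming nothing beyond the formulas in the theorem; the function names are ours) computes the multiplicity of a positive eigenvalue $\lambda$ on $S^{2n-1}$ by summing $\dim\calH_{p,q}$ over all pairs with $2q(p+n-1)=\lambda$.

```python
from fractions import Fraction
from math import comb

def dim_Hpq(n, p, q):
    """dim H_{p,q}(S^{2n-1}) via Folland's formula; the value is always an integer."""
    d = (Fraction(p + q, n - 1) + 1) * comb(p + n - 2, n - 2) * comb(q + n - 2, n - 2)
    assert d.denominator == 1
    return int(d)

def multiplicity(n, lam):
    """Multiplicity of the positive eigenvalue lam of Box_b on S^{2n-1}."""
    total = 0
    for q in range(1, lam // 2 + 1):
        rest, rem = divmod(lam, 2 * q)       # rest should equal p + n - 1
        if rem == 0 and rest >= n - 1:
            total += dim_Hpq(n, rest - (n - 1), q)
    return total
```

For example, on $S^3$ (that is, $n=2$) the eigenvalue $4$ arises from $(p,q)=(1,1)$ and $(0,2)$, each contributing dimension $3$, for a total multiplicity of $6$.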
For the remainder of the paper we will write $\mathcal{H}_{p,q}$ for $\calH_{p,q}\left(S^{2n-1}\right)$. Note that Theorem \ref{thm:spectrum_box_b} allows us to quickly compute the spectrum of $\Box_b$ on $S^{2n-1}$; each eigenvalue is a non-negative even integer and the multiplicity of an eigenvalue $\lambda$ is
\[\sum_{2q(p+n-1) = \lambda} \dim\calH_{p,q}.\]
It is also important to note that a basis for $\mathcal{H}_{p,q}$ can be computed explicitly.
\begin{theorem}[\cite{REU18}]
For $\alpha \in \mathbb{N}^n$, define $\left|\alpha\right| = \sum_{j=1}^n\alpha_j$. For $\alpha,\beta \in\mathbb{N}^n$ let
\[\overline{D}^{\alpha} = \frac{\partial^{\left|\alpha\right|}}{\partial \overline{z}_1^{\alpha_1}\cdots \partial \overline{z}_n^{\alpha_n}} \quad \text{and} \quad \ D^{\beta} = \frac{\partial^{\left|\beta\right|}}{\partial z_1^{\beta_1}\cdots \partial z_n^{\beta_n}}. \]
The following yields a basis for $\mathcal{H}_{p,q} \left(S^{2n-1}\right)$:
\[
\left\{\overline{D}^{\alpha}D^{\beta}|z|^{2-2n} :\ |\alpha|=p,\ |\beta|=q,\ \alpha_1=0\textnormal{ or }\beta_1=0\right\}.\]
\end{theorem}
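One can cross-check that the cardinality of this basis matches Folland's dimension formula: counting the tuples with $\alpha_1=0$ or $\beta_1=0$ by stars and bars and inclusion-exclusion gives the same number. A quick verification sketch (function names are ours):

```python
from fractions import Fraction
from math import comb

def dim_formula(n, p, q):
    # Folland's dimension formula for H_{p,q}(S^{2n-1})
    return int((Fraction(p + q, n - 1) + 1)
               * comb(p + n - 2, n - 2) * comb(q + n - 2, n - 2))

def basis_count(n, p, q):
    # tuples (alpha, beta) with |alpha| = p, |beta| = q, and alpha_1 = 0 or beta_1 = 0,
    # counted by stars and bars with inclusion-exclusion
    a1_zero = comb(p + n - 2, n - 2) * comb(q + n - 1, n - 1)
    b1_zero = comb(p + n - 1, n - 1) * comb(q + n - 2, n - 2)
    both_zero = comb(p + n - 2, n - 2) * comb(q + n - 2, n - 2)
    return a1_zero + b1_zero - both_zero
```

The two counts agree identically, since $\binom{q+n-1}{n-1} = \frac{q+n-1}{n-1}\binom{q+n-2}{n-2}$ and similarly in $p$.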
Note that $G$ acts naturally on $L^2\left(S^{2n-1}\right)$ by precomposition. In particular, note that each $\calH_{p,q}$ space is invariant under $G$. It follows that,
\begin{theorem}
The space of square integrable functions on $L(k;\ell_1,\dots,\ell_n)$ has the following spectral decomposition for $\square_b$ on $L(k;\ell_1,\dots,\ell_n)$,
\[L^2\left(L\left(k;\ell_1,\dots,\ell_n\right)\right) = \bigoplus_{p,q\geq 0} \calH_{p,q}\left(L\left(k;\ell_1,\dots,\ell_n\right)\right) \cong \bigoplus_{p,q\geq 0} \calH^G_{p,q},\]
where $\calH_{p,q}^G$ is the subspace of $\calH_{p,q}$ consisting of elements invariant under the action of $G$. That is,
\[\calH_{p,q}^G = \left\{f \in \calH_{p,q} : f \circ g = f\right\}.\]
\end{theorem}
Just as on the sphere, each $\calH_{p,q}^G$ (presuming it is non-trivial) is an eigenspace of $\Box_b$ with eigenvalue $2q\left(p+n-1\right)$. It follows that the multiplicity of an eigenvalue $\lambda$ in the spectrum of $\Box_b$ on the lens space $G \setminus S^{2n-1}$ is given by
\[\sum_{2q\left(p+n-1\right) = \lambda} \dim\calH_{p,q}^G,\]
where we further restrict $p,q$ such that $\calH_{p,q}^G \neq \left\{0\right\}$.
This observation gives the following proposition.
\begin{proposition} \label{prop:system}
The dimension of $\calH^G_{p,q}$ is equal to the number of solutions $\left(\alpha,\beta\right)$ to the system
\begin{align*}
\left|\alpha\right| = p, \quad \left|\beta\right| = q,\\
\alpha_1 = 0 \quad \text{or} \quad \beta_1 = 0,\\
\sum_{j = 1}^{n} \ell_j\left(\alpha_j - \beta_j\right) \equiv 0\mod k,
\end{align*}
where $\alpha=\left(\alpha_1,\ldots,\alpha_n\right)$ and $\beta = \left(\beta_1,\ldots,\beta_n\right)$ are $n$-tuples of nonnegative integers and $\left|\cdot\right|$ denotes the sum of these integers.
\end{proposition}
\begin{proof}
Note that the first two conditions on the system are given by the basis of $\mathcal{H}_{p,q}$. Now let $f_{\alpha,\beta} = \overline{D}^{\alpha}D^{\beta}|z|^{2-2n}$. Recall that $f_{\alpha,\beta} \in \mathcal{H}_{p,q}^G$ if and only if $ f_{\alpha,\beta} \circ g = f_{\alpha,\beta}$. By the chain rule and the fact that $g$ can be thought of as a unitary matrix,
\begin{align*}
f_{\alpha,\beta} \left(w\right) &= \left(\overline{D}^\alpha D^\beta \left|z\right|^{2- 2n}\right)\big|_{z=w}\\
&= \left(\overline{D}^\alpha D^\beta \left| g z\right|^{2- 2n}\right)\big|_{z=w}\\
&= \zeta^{\sum_{j=1}^n \ell_j \left(\beta_j - \alpha_j\right)} \left(\overline{D}^\alpha D^\beta \left|z\right|^{2 - 2n}\right)\big|_{z=gw}\\
&=\zeta^{\sum_{j=1}^n \ell_j \left(\beta_j - \alpha_j\right)} \left(f_{\alpha,\beta}\circ g\right)\left(w\right).
\end{align*}
Since the collection of the $f_{\alpha,\beta}$ forms a basis for $\calH_{p,q}$ on which $g$ acts diagonally, the dimension of the $G$-invariant subspace of $\calH_{p,q}$ is simply the number of these basis vectors that are fixed by $g$. Thus, $f_{\alpha,\beta}\circ g = f_{\alpha,\beta}$ if and only if the last condition in the proposition holds.
\end{proof}
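The proposition reduces $\dim\calH^G_{p,q}$ to a finite count, so small examples can be checked by brute force. A sketch (for illustration only; `compositions` is a helper of ours enumerating weak compositions):

```python
def compositions(total, parts):
    """Yield all tuples of `parts` nonnegative integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def dim_HG(k, ells, p, q):
    """Brute-force count of the solutions (alpha, beta) in the proposition."""
    count = 0
    for alpha in compositions(p, len(ells)):
        for beta in compositions(q, len(ells)):
            if alpha[0] != 0 and beta[0] != 0:
                continue  # need alpha_1 = 0 or beta_1 = 0
            if sum(l * (a - b) for l, a, b in zip(ells, alpha, beta)) % k == 0:
                count += 1
    return count
```

For $k=1$ this recovers $\dim\calH_{p,q}$, while for $L(2;1,1)$ the congruence reduces to $p\equiv q\pmod 2$, so $\calH^G_{p,q}$ is either all of $\calH_{p,q}$ or trivial.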
\section{Results}
We now outline the main results of this paper. For $\lambda>0$, let $N\left(\lambda\right)$ denote the number of positive eigenvalues, including multiplicity, of $\square_b$ on $L^2\left(S^{2n-1}\right)$ that are at most $\lambda$. Similarly, let $N_L\left(\lambda\right)$ denote the number of positive eigenvalues, including multiplicity, of $\square_b$ on $L^2\left(L(k;\ell_1,\ldots,\ell_n)\right)$ that are at most $\lambda$.
\begin{theorem}\label{thm:oneoverk}
Given a lens space $L\left(k;\ell_1,\ldots,\ell_n\right)$, we have
\[\lim_{\lambda \rightarrow\infty}\frac{N_L(\lambda)}{N(\lambda)}=\frac{1}{k}.\]
\end{theorem}
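Before turning to the proof, the theorem can be illustrated numerically in the simplest nontrivial case $L(2;1,1)$ with $n=2$: here the invariance condition collapses to $p \equiv q \pmod 2$, so both counting functions can be tabulated directly and the ratio watched approaching $1/2$. (A rough sketch; the cutoff is arbitrary.)

```python
def N_sphere(lam):
    """Count positive eigenvalues <= 2*lam of Box_b on S^3 (n = 2)."""
    return sum(p + q + 1                      # dim H_{p,q}(S^3)
               for p in range(lam)
               for q in range(1, lam // (p + 1) + 1))

def N_lens(lam):
    """Same count on L(2;1,1): H^G_{p,q} = H_{p,q} iff p = q mod 2, else trivial."""
    return sum(p + q + 1
               for p in range(lam)
               for q in range(1, lam // (p + 1) + 1)
               if (p - q) % 2 == 0)

ratio = N_lens(60) / N_sphere(60)   # already close to 1/k = 1/2
```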
Note that since the lens space has $1/k$ times the volume of the sphere, it is not surprising that the eigenvalue counting function scales by the same factor. Furthermore, combining this result with the explicit calculations in \cite{REU2020Weyl} yields the following analog of Weyl's law.
\begin{corollary}
We have
\[\lim_{\lambda\to\infty} \frac{N_L\left(\lambda\right)}{\lambda^n} = u_n \frac{ \operatorname{vol}\left(S^{2n-1}\right)}{k} = u_n \operatorname{vol}\left(L\left(k;\ell_1,\ldots,\ell_n\right)\right),\]
where $u_n$ is a universal constant depending only on $n$, given by \[u_n = \frac{n - 1}{n \left(2\pi\right)^n \Gamma\left(n + 1\right)} \int_{-\infty}^{\infty} \left(\frac{x}{\sinh x}\right)^n e^{-(n - 2)x}\,dx.\]
where $u_n$ is a universal constant depending only on $n$, given by \[u_n = \frac{n - 1}{n \left(2\pi\right)^n \Gamma\left(n + 1\right)} \int_{-\infty}^{\infty} \left(\frac{x}{\sinh x}\right)^n e^{-(n - 2)x}\,dx.\]
\end{corollary}
As a consequence of Theorem \ref{thm:oneoverk}, the following corollary is immediate.
\begin{corollary} \label{cor:same_k}
If the lens spaces $L\left(k;\ell_1, \ldots, \ell_n\right)$ and $L\left(k'; \ell_1', \ldots, \ell_n'\right)$ are CR isospectral, then $k = k'$.
\end{corollary}
With Corollary \ref{cor:same_k}, we obtain the following result on isospectral quotients of $S^3$.
\begin{theorem}
\label{thm:isospec}
Let $k$ be an odd prime number. Let $L\left(k; \ell_1, \ell_2\right)$ and $L\left(k; \ell_1', \ell_2'\right)$ be the lens spaces generated by the groups $G$ and $G'$, respectively. The following are equivalent.
\begin{enumerate}
\item $L\left(k; \ell_1, \ell_2\right)$ and $L\left(k; \ell_1', \ell_2'\right)$ are CR isometric.
\item $L\left(k; \ell_1, \ell_2\right)$ and $L\left(k; \ell_1', \ell_2'\right)$ are CR isospectral.
\item $\dim \calH_{p,q}^G = \dim \calH_{p,q}^{G'}$ for all $p,q\geq 0$.
\item There exists an integer $a$ and a permutation $\sigma$ such that $\left(\ell_1',\ell_2'\right) \equiv \left(a\ell_{\sigma\left(1\right)},a\ell_{\sigma\left(2\right)}\right)\mod k.$
\end{enumerate}
\end{theorem}
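Condition (4) is a finite check, so CR isometry of two such lens spaces can be decided directly; a sketch of that check (the helper name is ours):

```python
from itertools import permutations

def satisfies_condition_4(k, ells, ells_prime):
    """Is there an integer a and a permutation sigma with
    ells_prime congruent to a * (ells permuted by sigma) mod k?"""
    for a in range(1, k):
        for perm in permutations(ells):
            if all((a * l - lp) % k == 0 for l, lp in zip(perm, ells_prime)):
                return True
    return False
```

For instance, $L(5;1,2)$ and $L(5;1,3)$ satisfy the condition (take $a=3$ and the transposition), while $L(5;1,1)$ and $L(5;1,4)$ do not.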
\section{Diophantine Equations and Proof of Theorem \ref{thm:oneoverk}}
We begin our proof of Theorem \ref{thm:oneoverk} with a series of technical lemmas.
\begin{lemma}\label{lem:lim=0}
For $p,q \in \mathbb{Z}$, $p,q\geq 0$, define
\[a_{p,q}=\binom{
p+n-2}{n-2}\binom{q+n-2}{n-2} \text{ and } b_{p,q}=\left(\frac{p+q}{n-1}+1\right)\binom{p+n-2}{n-2}\binom{q+n-2}{n-2}.\]
The following equality holds,
\[\lim_{\lambda \rightarrow \infty} \frac{\sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1} \right\rfloor}a_{p,q}}{\sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1} \right\rfloor}b_{p,q}}=0.\]
\end{lemma}
\begin{proof}
The claim is intuitive due to the fact that $b_{p,q} = \left(\frac{p+q}{n-1}+1\right) a_{p,q}$. Let $A_m = \sum_{p=0}^{ m - n + 1 }\sum_{q=1}^{\left\lfloor \frac{m}{p+n-1} \right\rfloor}a_{p,q}$ and $B_m = \sum_{p=0}^{ m - n + 1 }\sum_{q=1}^{\left\lfloor \frac{m}{p+n-1} \right\rfloor}b_{p,q}$.
Note that it suffices to show $B_m/A_m\to\infty$ as $m\to\infty$. Fix $R>0$, and note that $B_m/A_m > R$ if and only if
\[C_m = \sum_{p=0}^{m- n + 1} \sum_{q=1}^{\left\lfloor \frac{m}{p + n - 1}\right\rfloor} \left(\frac{p + q}{n-1} + 1 - R \right) a_{p,q} > 0.\]
In particular, there is a negative part $N_R$ of the above sum contributed by indices where $\frac{p + q}{n-1}+ 1 - R < 0$. But for large enough $m$, this negative part is always dominated by the positive part. By taking $m \geq M + n - 1$ where $M$ is such that $\frac{M+1}{n-1} + 1 - R + N_R > 0$, the term given by $p = m - n + 1$ and $q = 1$ contributes enough to make $C_m$ positive.
\end{proof}
\subsection{Lower and Upper Bounds}
In this section we establish a lower and upper bound that will be used to control the number of solutions to the Diophantine system in Proposition \ref{prop:system}.
We state first the lower bound and then the analogous upper bound.
\begin{lemma}\label{lem:lower_bound}
If $N,m,d \in \mathbb{Z}_{\geq 0}$, $m,d > 0$, then
\[
\sum_{r=0}^N\binom{r+n-3}{n-3}\sum_{j=0}^{\left\lfloor \frac{1}{m}(N-r+1)\right\rfloor - 1}\left\lfloor \frac{1}{d}(jm+1)\right \rfloor \geq \frac{1}{md}\sum_{q=0}^{N}\left(\frac{q+n-1}{n-1}-d-\frac{3}{2}m- \left( d + \frac{3}{2} m \right) \frac{n-2}{q + n - 2}\right) \binom{q + n - 2}{n-2},
\]
where, in the case that the upper index of a summation is $-1$, we take the sum to be zero.
\end{lemma}
\begin{lemma}\label{lem:upper_bound}
If $N,m,d \in \Z_{\geq 0}$, $m, d > 0$, then
\[ \sum_{r=0}^N\binom{r+n-3}{n-3}\sum_{j=1}^{\left\lceil \frac{1}{m}(N-r+1)\right\rceil}\left\lceil \frac{jm}{d}\right\rceil
\leq \frac{1}{md}\sum_{q=0}^N\left( \frac{q+n-1}{n-1}+d + \frac{3}{2}m + (m^2+md)\frac{n-2}{q+n-2}\right)\binom{q+n-2}{n-2}.
\]
\end{lemma}
We prove only the lower bound as the proof for the upper bound follows similarly.
\begin{proof}
First, note that for any $M \geq -1$,
\[\sum_{j=0}^M\left\lfloor \frac{jm+1}{d} \right\rfloor \geq \sum_{j=0}^M\frac{1}{d}\left(jm-d+1\right)
=\frac{1}{d}(M+1)\left(\frac{mM}{2}+1-d\right).\]
Thus, it suffices to give the claimed lower bound for the sum
\[\sum_{r=0}^N\binom{r+n-3}{n-3}\frac{1}{d}\left\lfloor \frac{N-r+1}{m}\right\rfloor \left( \frac{m\left(\lfloor \frac{N-r+1}{m}\rfloor - 1 \right)}{2} + 1 - d \right).\]
We see that for $r \leq N$,
\begin{align*}
\left\lfloor \frac{N-r+1}{m}\right\rfloor\left(\frac{m\left(\left\lfloor \frac{N-r+1}{m}\right\rfloor -1\right)}{2}+1-d\right)
&\geq \frac{1}{m} \left(N - r + 1 - m\right) \left(\frac{N - r + 1}{2} - m + 1 - d\right)\\
&=\frac{1}{m} \left( \frac{\left(N - r + 2\right)^2}{2} - \left(d + \frac{3}{2}m\right)\left(N- r + 2 \right) - \left(1 + m\right)\left(\frac{1}{2} - m - d \right)\right)\\
&\geq \frac{1}{m} \left( \frac{\left(N - r + 2\right)^2}{2} - \left(d + \frac{3}{2}m\right)\left(N- r + 1 \right) - \left(d + \frac{3}{2}m\right) \right)\\
&\geq \frac{1}{m} \left( \binom{N - r + 2}{2} - \left(d + \frac{3}{2}m\right)\left(N- r + 1 \right) - \left(d + \frac{3}{2}m\right) \right).
\end{align*}
Now, the combinatorial identities,
\[\sum_{q=0}^N\binom{q+n-1}{n-1}=\binom{N+n}{n}\quad \text{and} \quad \sum_{m=0}^n\binom{m}{j}\binom{n-m}{k-j}=\binom{n+1}{k+1}\text{ for }0 \leq j \leq k \leq n\]
imply
\[\sum_{r=0}^N\binom{r+n-3}{n-3}\binom{N-r+2}{2}=\sum_{q=0}^N\binom{q+n-1}{n-1}\text{ and }\sum_{r=0}^N\binom{r+n-3}{n-3}\binom{N-r+1}{1}=\sum_{q=0}^N\binom{q+n-2}{n-2}.\]
Applying these identities and the lower bound above, the claim follows,
\begin{align*}
\sum_{r=0}^N\binom{r+n-3}{n-3}\frac{1}{d}\left\lfloor \frac{N-r+1}{m}\right\rfloor &\left( \frac{m\left(\lfloor \frac{N-r+1}{m}\rfloor - 1 \right)}{2} + 1 - d \right)
\\
&\geq\sum_{r=0}^N\binom{r+n-3}{n-3}\frac{1}{md}\left( \binom{N - r + 2}{2} - \left(d + \frac{3}{2}m\right)\left(N- r + 1 \right) - \left(d + \frac{3}{2}m\right) \right)\\
&= \frac{1}{md} \sum_{q=0}^N \left( \frac{q + n - 1}{n - 1 } - d - \frac{3}{2} m - \left( d + \frac{3}{2} m \right) \frac{n-2}{q + n - 2}\right) \binom{q + n - 2}{n-2}.
\end{align*}
\end{proof}
\subsection{Bounding the Number of Solutions to the Diophantine System}
Given a lens space $L = L\left(k; \ell_1,\ldots, \ell_n\right)$, with eigenvalue counting function $N_L(\lambda)$, note
\[N_L\left(2\lambda\right) = \sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1}\right\rfloor }\dim\mathcal{H}_{p,q}^G\]
and
\[N\left(2\lambda\right) = \sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1}\right\rfloor }\dim\mathcal{H}_{p,q} = \sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1}\right\rfloor } \left(\frac{p+q}{n-1} + 1\right)\binom{p+n-2}{n-2}\binom{q+n-2}{n-2},\]
where $N\left(\lambda\right)$ is the eigenvalue counting function of the Kohn Laplacian on $S^{2n-1}$. These formulas follow from the fact that the eigenvalue corresponding to $\mathcal{H}_{p,q}$ and $\mathcal{H}_{p,q}^G$ is $2q \left(p + n - 1 \right)$.
Recall from Proposition \ref{prop:system}, $\dim\mathcal{H}_{p,q}^G$ is equal to the number of solutions $\left(\alpha,\beta\right)$ to the Diophantine system
\[
\sum_{j=1}^{n}\ell_j\left(\alpha_j-\beta_j\right) \equiv 0 \tn{ mod } k,\]
where $\alpha_j,\beta_j \geq 0$, $\alpha_1\beta_1 = 0$, and $\left|\alpha\right| = p$ and $\left|\beta\right| = q$.
Since our goal is to study $N_L\left(2\lambda\right)$, we can replace the conditions $\left|\alpha\right| = p$ and $\left|\beta\right| = q$ with $0 < \left|\beta\right|\left(\left|\alpha\right|+n-1\right) \leq \lambda$. That is, for fixed $\lambda > 0$, we study the number of solutions $\left(\alpha,\beta\right)$ to the Diophantine equation
\begin{align}
\sum_{j=1}^{n}\ell_j\left(\alpha_j-\beta_j\right) \equiv 0 \tn{ mod } k,
\label{eq:important!}
\end{align}
where $\alpha_j,\beta_j \geq 0$, $\alpha_1 \beta_1 = 0$, and $ 0 < \left|\beta\right|\left(\left|\alpha\right|+n-1\right) \leq \lambda $.
In particular, we establish upper and lower bounds for the number of solutions to the system (\ref{eq:important!}) in the following two lemmas, where they are distinguished by fixing $\alpha_1 = 0$ or $\beta_1 = 0$. We will make use of Lemma \ref{lem:lower_bound} and Lemma \ref{lem:upper_bound} as well as the following fact,
\begin{proposition*}
Given $a,b,c,d \in \N$, with $a \leq b$, the number of solutions to the system
\begin{align*}
a &\leq x \leq b\\
x &\equiv c \tn{ mod }d
\end{align*}
is either $\left\lfloor \frac{b-a+1}{d} \right\rfloor$ or $\left\lceil \frac{b-a+1}{d}\right \rceil$.
\end{proposition*}
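This counting fact is used repeatedly below; it is easy to confirm by brute force over small parameter ranges (a minimal sketch):

```python
def count_in_progression(a, b, c, d):
    """Number of x in [a, b] with x congruent to c (mod d), by direct enumeration."""
    return sum(1 for x in range(a, b + 1) if (x - c) % d == 0)
```

The count is always $\lfloor (b-a+1)/d \rfloor$ or $\lceil (b-a+1)/d \rceil$, since any window of $d$ consecutive integers contains exactly one member of the progression.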
\begin{lemma} \label{lem:alpha1=0}
For $\lambda>n$, when $\alpha_1=0$, the number of solutions to the system (\ref{eq:important!}) is greater than
\[\frac{1}{k}\sum_{p=0}^{\left\lfloor\lambda - n + 1\right\rfloor}\binom{p+n-2}{n-2}\sum_{q=0}^{\left\lfloor \frac{\lambda}{p+n-1}\right\rfloor}\left(\frac{q+n-1}{n-1}-d-\frac{3}{2}m - \left(d + \frac{3}{2} m\right) \frac{n-2}{q + n - 2}\right)\binom{q+n-2}{n-2}\]
and less than
\[\frac{1}{k}\sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\binom{p+n-2}{n-2}\sum_{q=0}^{\left\lfloor \frac \lambda {p+n-1} \right\rfloor}\left( \frac{q+n-1}{n-1}+ d + \frac{3}{2}m + \left(m^2+md\right)\frac{n-2}{q+n-2}\right)\binom{q+n-2}{n-2}\]
where $m = \gcd \left(k,\ell_1-\ell_2\right)$ and $d = k/m$.
\end{lemma}
\begin{lemma} \label{lem:beta1=0}
For $\lambda > n$, when $\beta_1=0$, the number of solutions to the system (\ref{eq:important!}) is greater than
\[\frac{1}{k}\sum_{q=1}^{\left\lfloor \frac{\lambda}{n-1}\right\rfloor}\binom{q+n-2}{n-2}\sum_{p=0}^{\left\lfloor \frac{\lambda}{q}-n+1 \right\rfloor}\left(\frac{p+n-1}{n-1}-d-\frac{3}{2}m - \left(d + \frac{3}{2}m\right) \frac{n-2}{p + n - 2}\right)\binom{p+n-2}{n-2}\]
and less than
\[\frac{1}{k}\sum_{q=1}^{\left\lfloor \frac{\lambda}{n-1} \right\rfloor}\binom{q+n-2}{n-2}\sum_{p=0}^{\left\lfloor\frac{\lambda}{q}-n+1\right\rfloor}\left( \frac{p+n-1}{n-1} + d + \frac{3}{2}m + \left(m^2+md\right)\frac{n-2}{p+n-2} \right)\binom{p+n-2}{n-2}\]
where $m = \gcd \left(k,\ell_1-\ell_2\right)$ and $d = k/m$.
\end{lemma}
We prove only Lemma \ref{lem:alpha1=0}, as the proof of Lemma \ref{lem:beta1=0} follows the same argument.
\begin{proof}
The outline of the argument is as follows. Recall that we want to estimate the number of $\left(\alpha,\beta\right)$ solving the Diophantine system, (\ref{eq:important!}). To do this, we first note that by stars and bars, we can find a number of candidates for $\alpha$. Thus, by fixing some $\alpha'$, it suffices to find the number of $\beta$ such that $\left(\alpha',\beta\right)$ is a solution to (\ref{eq:important!}). To reduce the problem further, by another application of stars and bars, we can find a number of values of $\beta_3,\ldots,\beta_n$ that may contribute to solutions of (\ref{eq:important!}). So for fixed $\alpha',\beta_3,\ldots,\beta_n$, by using the above proposition, we estimate the number of $\beta_1$ so that $\left(\alpha',\beta_1,\beta_2,\beta_3,\ldots,\beta_n\right)$ solve (\ref{eq:important!}). Note that $\beta_2$ is determined by $\beta_1,\beta_3,\ldots,\beta_n$.
We now proceed with the details. For $p \in \Z_{\geq 0}$, there are $\binom{p+n-2}{n-2}$ values of $\alpha$ where $\alpha_1=0$ and $\left|\alpha\right|=p$. Similarly, for $r \in \Z_{\geq 0}$, there are $\binom{r+n-3}{n-3}$ values for $\beta_3,\ldots,\beta_n$ such that $\sum_{j=3}^n\beta_j=r$. Now we would like to compute bounds on the number of $\beta_1$ that yield a solution to (\ref{eq:important!}) along with $\alpha,\beta_3,\ldots,\beta_n$ for a fixed value of $\left|\alpha\right|=p$ and $\sum_{j=3}^n\beta_j=r$, and fixed $0 \leq p \leq \left\lfloor\lambda-n+1\right\rfloor$. Note that from this information, $q = \left|\beta\right|$ is not fixed and it can take different values as $\beta_1$ changes. However, we may write $\beta_2 = q - r - \beta_1$ and therefore,
\[\left(\ell_1-\ell_2\right)\beta_1\equiv \sum_{j=2}^n\ell_j\alpha_j - \ell_2\left(q-r\right) - \sum_{j=3}^n\ell_j\beta_j \tn{ mod } k.\]
Since $m = \tn{gcd}\left(k,\ell_1-\ell_2\right)$ and $dm = k$, there exists a $d'$ coprime to $d$ so that $d'm = \ell_1 - \ell_2$. It follows that,
\[d'm\beta_1 \equiv \sum_{j=2}^n\ell_j\alpha_j - \ell_2\left(q-r\right) - \sum_{j=3}^n\ell_j\beta_j \tn{ mod } md.\]
The existence of $\beta_1$ that satisfies this equation is equivalent to $m$ dividing $\sum_{j=2}^n \ell_j\alpha_j -\ell_2\left(q-r\right)-\sum_{j=3}^n \ell_j \beta_j$ as $d$ and $d'$ are coprime. Furthermore, this division condition is equivalent to
\begin{equation}\label{eq:q_congruence}
q \equiv \ell_2^{-1}\left(\ell_2r+\sum_{j=2}^n\ell_j\alpha_j-\sum_{j=3}^{n}\ell_j\beta_j\right) \tn{ mod } m.
\end{equation}
Since $q$ is subject to the conditions $r \leq q \leq \left\lfloor\frac{\lambda}{p+n-1}\right\rfloor$, by the above proposition, the number of solutions to (\ref{eq:q_congruence}) is bounded by
\[\left\lfloor \frac{1}{m}\left( \left\lfloor \frac{\lambda}{p+n-1}\right\rfloor - r + 1\right) \right\rfloor\text{ and } \left\lceil \frac{1}{m}\left( \left\lfloor \frac{\lambda}{p+n-1}\right\rfloor - r + 1\right) \right\rceil.\]
Note that if there is no $\beta_1$ that solves (\ref{eq:important!}) given $\alpha,\beta_3,\ldots,\beta_n$, then the lower bound is trivially zero. This case corresponds to the sum $\sum_{j=0}^{\left\lfloor \frac{1}{m} \left(N - r + 1 \right)\right\rfloor - 1} \left\lfloor \frac{1}{d} \left(jm + 1 \right)\right\rfloor$ having its upper index equal to $-1$ in Lemma \ref{lem:lower_bound}. In particular, we continue with our analysis by assuming that there is such a $\beta_1$, and therefore such a $q$, satisfying (\ref{eq:q_congruence}).
Note that if (\ref{eq:q_congruence}) is satisfied, we can divide the equation by $m$ to obtain
\begin{equation}\label{eq:b1_congruence}
\beta_1 \equiv \left(d'\right)^{-1}\frac{1}{m}\left(\sum_{j=2}^n\ell_j\alpha_j-\ell_2\left(q-r\right)-\sum_{j=3}^n\ell_j\beta_j\right) \tn{ mod } d.
\end{equation}
For a fixed value of $q$ which solves (\ref{eq:q_congruence}), since $0 \leq \beta_1 \leq q-r$, we can bound the number of solutions to (\ref{eq:b1_congruence}) by
\[\left\lfloor \frac{1}{d}(q-r+1) \right\rfloor \text{ and }\left\lceil \frac{1}{d}(q-r+1) \right\rceil.\]
First we look at the lower bound.
The values for $q$ which solve (\ref{eq:q_congruence}) are of the form $r+c + jm$, where $0 \leq c < m$ is some integer and $j$ is also an integer ranging from $0$ to the number of solutions of (\ref{eq:q_congruence}) minus one. Note that for all $j$,
\[\left\lfloor \frac1d((r+c+jm) - r + 1) \right\rfloor \geq \left\lfloor \frac1d(jm+1) \right\rfloor,\]
and therefore a lower bound on the number of solutions to both (\ref{eq:q_congruence}) and (\ref{eq:b1_congruence}) is
\[\sum_{j=0}^{\left\lfloor \frac{1}{m}\left( \left\lfloor \frac{\lambda}{p+n-1}\right\rfloor - r + 1\right) \right\rfloor-1}\left\lfloor\frac{1}{d}(jm+1)\right\rfloor.\]
So a lower bound on the number of solutions to (\ref{eq:important!}) when $\alpha_1=0$ is
\[\sum_{p=0}^{\lfloor\lambda - n + 1\rfloor}\binom{p+n-2}{n-2}\sum_{r=0}^{\left\lfloor \frac{\lambda}{p+n-1}\right\rfloor}\binom{r+n-3}{n-3}\sum_{j=0}^{\left\lfloor \frac{1}{m}\left( \left\lfloor \frac{\lambda}{p+n-1}\right\rfloor - r + 1\right) \right\rfloor-1}\left\lfloor\frac{1}{d}(jm+1)\right\rfloor.\]
By Lemma \ref{lem:lower_bound}, the first part of the lemma follows.
Now we examine the upper bound. Note that for all $j$,
\[\left\lceil \frac1d((r+c+jm)-r+1)\right\rceil \leq \left\lceil \frac 1d(j+1)m \right\rceil.\]
So an upper bound on the number of solutions to the system when $\alpha_1 = 0$ is
\[\sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\binom{p+n-2}{n-2}\sum_{r=0}^{\left\lfloor \frac \lambda {p+n-1} \right\rfloor}\binom{r+n-3}{n-3}\sum_{j=1}^{\left\lceil \frac{1}{m}\left(\left\lfloor \frac \lambda {p+n-1} \right\rfloor-r+1\right) \right\rceil}\left\lceil \frac{jm}{d}\right\rceil.\]
By Lemma \ref{lem:upper_bound}, the second part of the lemma follows.
\end{proof}
\subsection{Proof of Theorem \ref{thm:oneoverk}}
We now use Lemma \ref{lem:alpha1=0} and Lemma \ref{lem:beta1=0} and our reformulation of $N_L\left(2\lambda\right)$ as the number of solutions to equation (\ref{eq:important!}) to prove Theorem \ref{thm:oneoverk}.
\begin{proof}
To get a lower bound on $N_L(2\lambda)$, we combine three parts.
First, from Lemma \ref{lem:alpha1=0}, the part where $\alpha_1=0$:
\begin{align*}
\frac{1}{k}\sum_{p=0}^{\lfloor \lambda - n + 1 \rfloor}\sum_{q=0}^{\left\lfloor \frac{\lambda}{p+n-1}\right\rfloor}&\left(\frac{q+n-1}{n-1}-d-\frac{3}{2}m - \left(d + \frac{3}{2} m \right) \frac{n-2}{q + n - 2} \right)\binom{p+n-2}{n-2}\binom{q+n-2}{n-2}\\
\hspace{-20mm}=& \frac{1}{k}\sum_{p=0}^{\lfloor \lambda - n + 1 \rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1}\right\rfloor}\left(\frac{q+n-1}{n-1}-d-\frac{3}{2}m - \left(d + \frac{3}{2} m \right) \frac{n-2}{q + n - 2} \right)\binom{p+n-2}{n-2}\binom{q+n-2}{n-2}\\
&+\frac{1}{k}\sum_{p=0}^{\lfloor\lambda-n+1\rfloor}\left(1-2d-3m \right)\binom{p+n-2}{n-2}.
\end{align*}
Next, from Lemma \ref{lem:beta1=0}, the part where $\beta_1=0$:
\[\frac{1}{k}\sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1} \right\rfloor}\left( \frac{p+n-1}{n-1} - d -\frac{3}{2}m - \left(d + \frac{3}{2} m \right) \frac{n-2}{p + n - 2}\right)\binom{p+n-2}{n-2}\binom{q+n-2}{n-2}.\]
Note that here we swapped the order of summation from the original statement of Lemma \ref{lem:beta1=0}.
Finally, we have the part to be subtracted off, which is given by $\alpha_1=\beta_1=0$. This part is bounded above by a standard stars-and-bars count:
\[\sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1} \right\rfloor}\binom{p+n-2}{n-2}\binom{q+n-2}{n-2}.\]
Putting this together, a lower bound of $N_L \left(2\lambda\right)$ is
\[
\frac{1}{k}\sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1} \right\rfloor} \left(\frac{p + q}{n - 1} + A +\frac{B}{q + n - 2} + \frac{C}{p + n - 2}\right)\binom{p+n-2}{n-2}\binom{q+n-2}{n-2} + \frac{1}{k}\sum_{p=0}^{\lfloor\lambda-n+1\rfloor}D\binom{p+n-2}{n-2},\]
where $A,B,C,D$ are constants depending only on $m$, $d$, and $n$. By Lemma \ref{lem:lim=0}, we know
\[\lim_{\lambda \rightarrow \infty} \frac{\sum_{p=0}^{\lfloor \lambda - n + 1 \rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1} \right\rfloor}\binom{p+n-2}{n-2}\binom{q+n-2}{n-2}}{\sum_{p=0}^{\lfloor \lambda - n + 1 \rfloor}\sum_{q=1}^{\left\lfloor \frac{\lambda}{p+n-1} \right\rfloor}\left(\frac{p+q}{n-1}+1\right)\binom{p+n-2}{n-2}\binom{q+n-2}{n-2}}=0.\]
Therefore, $\lim_{\lambda\to\infty} \frac{N_L\left(\lambda\right)}{N \left(\lambda\right)}\geq \frac{1}{k}$. The upper bound follows similarly, except there is no need to consider the part where $\alpha_1 = \beta_1 = 0$.
\end{proof}
\begin{remark}
This proof shows furthermore that $N_L\left(2\lambda\right)$ has asymptotic behavior, \[u_n \operatorname{vol} \left(L\left(k;\ell_1,\ldots,\ell_n\right)\right) + O \left(\sum_{p=0}^{\left\lfloor \lambda - n + 1 \right\rfloor} \sum_{q = 1}^{\left\lfloor \frac{\lambda}{p + n - 1 } \right\rfloor} \binom{ p + n - 2}{n - 2} \binom{q + n - 2}{n - 2}\right).\]By some
calculations, we conjecture that this remainder term can be bounded above by $O \left(\lambda^{n - 1} \log \lambda\right)$ but not $O \left(\lambda^{n-1}\right)$. If true, this would be similar to the remainder term
on compact Heisenberg manifolds in \cite{strichartz2015}, except that the authors there speculate that their estimate can be improved to $O\left(\lambda^{n}\right)$.
\end{remark}
\section{Isospectral lens spaces and Proof of Theorem \ref{thm:isospec}}
In this section, we provide more details and a proof of Theorem \ref{thm:isospec}. For convenience, we restate it here.
\noindent\textbf{Theorem \ref{thm:isospec}.}
Let $k$ be an odd prime number. Let $L(k; \ell_1, \ell_2)$ and $L(k; \ell_1', \ell_2')$ be the lens spaces generated by the groups $G$ and $G'$, respectively. The following are equivalent.
\begin{enumerate}
\item $L(k; \ell_1, \ell_2)$ and $L(k; \ell_1', \ell_2')$ are CR isometric.
\item $L(k; \ell_1, \ell_2)$ and $L(k; \ell_1', \ell_2')$ are CR isospectral.
\item $\dim \calH_{p,q}^G = \dim \calH_{p,q}^{G'}$ for all $p,q\geq 0$.
\item There exists an integer $a$ and a permutation $\sigma$ such that $\left(\ell_1',\ell_2'\right) \equiv \left(a\ell_{\sigma\left(1\right)},a\ell_{\sigma\left(2\right)}\right)\mod k$.
\end{enumerate}
\subsection{(2) implies (3)}
Of the four implications we show, this is the only one that requires $k$ to be an odd prime. In particular, we need this assumption after Lemma \ref{Lem:dimension}. Fix a lens space $L(k;\ell_1,\ell_2)$ and let $G$ be the corresponding group. Let $d= \gcd(k, \ell_1- \ell_2)$. We begin with some observations about $\dim\calH^G_{p,q}$.
\begin{lemma}
\label{lem:symmetric}
$\dim\calH^G_{p,q} = \dim\calH^G_{q,p}$ for all $p,q$.
\end{lemma}
This follows from Proposition \ref{prop:system}, observing that $(\alpha,\beta)\leftrightarrow(\beta,\alpha)$ is a one-to-one correspondence between solutions to the system for $\calH^G_{p,q}$ and solutions to the system for $\calH^G_{q,p}$.
\begin{lemma}
\label{lem:doesnotdivide}
If $d$ does not divide $p-q$, then $\dim\calH_{p,q}^G = 0$.
\end{lemma}
\begin{proof} From Proposition~\ref{prop:system}, we know $\dim\calH_{p,q}^G$ is the number of solutions to \[\ell_1(\alpha_1 - \beta_1) + \ell_2(\alpha_2 - \beta_2) \equiv 0\mod k\]
such that $\alpha_1 \beta_1 = 0$, and $\alpha_1 + \alpha_2 = p$, $\beta_1+\beta_2 =q$. Suppose for contradiction $d$ does not divide $p-q$, but a solution exists. We then have
\begin{align*}
\ell_1\left(\alpha_1 - \beta_1\right) + \ell_2\left(\left(p - \alpha_1\right) - \left(q-\beta_1\right)\right) &\equiv 0 \mod k\\
\left(\ell_1 - \ell_2\right)\left(\alpha_1 - \beta_1\right) &\equiv - \ell_2\left(p-q\right) \mod k.
\end{align*}
Note that $d$ is coprime to $\ell_2$, since $\ell_2$ is coprime to $k$ and $d$ divides $k$. Since $d$ does not divide $p-q$, we see that $d$ does not divide the right hand side of the equation. However, $d$ divides $\ell_1 - \ell_2$, which is a contradiction, meaning no such solution exists.
\end{proof}
We now relate the dimensions of invariant subspaces.
\begin{lemma}
If $d$ divides $p-q$, then \[\dim \calH^G_{p+k,q} =\dim \calH^G_{p,q+k} = d+ \dim\calH^G_{p,q}. \]
\end{lemma}
\begin{proof} From Proposition~\ref{prop:system}, $\dim\calH_{p,q}^G$ is the number of solutions to
\[\ell_1\left(\alpha_1 - \beta_1\right) + \ell_2\left(\alpha_2 - \beta_2\right) \equiv 0\mod k\]
such that $\alpha_1 + \alpha_2 = p$, $\beta_1+\beta_2 =q$, and either $\alpha_1 = 0$ or $\beta_1 = 0$. In the $\alpha_1 = 0$ case, this reduces to
\begin{equation}
\label{eq:alpha}
\ell_1\left(-\beta_1\right) + \ell_2\left(p - q +\beta_1\right) \equiv 0\mod k\quad\text{ where } 0\leq \beta_1 \leq q.
\end{equation}
Similarly, in the $\beta_1 = 0$ case, we obtain
\begin{equation}
\label{eq:beta}
\ell_1\left(\alpha_1\right) + \ell_2\left(p - q -\alpha_1\right) \equiv 0\mod k\quad\text{ where } 0\leq \alpha_1 \leq p.
\end{equation}
By inclusion-exclusion, $\dim\calH_{p,q}^G$ is the number of solutions to (\ref{eq:alpha}) plus the number of solutions to (\ref{eq:beta}) minus the number of solutions where $\alpha_1 = \beta_1 = 0$. Note that in the $\alpha_1 = \beta_1 = 0$ case, the Diophantine system reduces to $\ell_1\left(p-q\right) \equiv 0\mod k$, which has one solution if $k$ divides $p-q$, and zero solutions otherwise. For ease of notation, define the following indicator function for divisibility
\[\divides\left(k, a\right) = \begin{cases}
1 & k \text{ divides } a\\
0 & \text{otherwise}
\end{cases}.\]
Let $m_{p,q}$ denote the number of solutions to (\ref{eq:alpha}) and $n_{p,q}$ denote the number of solutions to (\ref{eq:beta}). Using this notation, the above inclusion-exclusion argument can be written as \[\dim\calH_{p,q}^G = m_{p,q} + n_{p,q} - \divides\left(k, p-q\right).\]
Note that $m_{p+k,q}$ is the number of solutions to $\ell_1\left(-\beta_1\right) + \ell_2\left(p+k-q+\beta_1\right) \equiv 0\mod k$ for $0\leq \beta_1\leq q$. This equation is equivalent to $\ell_1\left(-\beta_1\right) + \ell_2\left(p-q+\beta_1\right) \equiv 0\mod k$, which implies $m_{p+k,q} = m_{p,q}$. By the same logic, we can see that $n_{p,q+k} = n_{p,q}$.
We now show $m_{p,q+k} = m_{p,q}+d$ and $n_{p+k,q} = n_{p,q} + d$. Note that $m_{p,q+k} - m_{p,q}$ is the number of solutions to $\ell_1\left(-\beta_1\right) + \ell_2\left(p-q+\beta_1\right) \equiv 0\mod k$ where $q+1 \leq \beta_1\leq q+k$. Reducing modulo $k$, this is equal to the number of solutions where $1\leq \beta_1\leq k$. The equivalence may be rewritten as
\[\left(\ell_1 - \ell_2\right)\beta_1 \equiv \ell_2\left(p-q\right)\mod k, \] where $1 \leq \beta_1 \leq k$.
Since $d$ divides $\ell_1 - \ell_2$ and $p-q$, we obtain
\[\left(\frac{\ell_1-\ell_2}{d}\right)\beta_1 \equiv \ell_2\left(\frac{p-q}{d}\right)\mod{k/d}.\]
Since $d = \gcd(k,\ell_1-\ell_2)$, the above equivalence has a unique solution for $1\leq \beta_1\leq k/d$. Thus, there are precisely $d$ solutions in $1\leq \beta_1 \leq k$. Hence, $m_{p,q+k} = m_{p,q} + d$. Similarly, we see that $n_{p+k,q} = n_{p,q} + d$.
Combining these facts, we have
\[
\dim\calH_{p+k,q}^G = m_{p+k,q} + n_{p+k,q} - \divides\left(k, p-q\right)
= m_{p,q} + n_{p,q} + d - \divides\left(k, p-q\right)
= \dim\calH_{p,q}^G +d\]
and
\[
\dim\calH_{p,q+k}^G = m_{p,q+k} + n_{p,q+k} - \divides\left(k, p-q\right)= m_{p,q}+d + n_{p,q} -
\divides\left(k, p-q\right)= \dim\calH_{p,q}^G +d\]
completing the proof.
\end{proof}
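Although not needed for the argument, the three lemmas above are easy to check by brute force for small parameters. The following Python sketch counts solutions to the Diophantine system of Proposition \ref{prop:system} directly; the values $k = 15$ and $(\ell_1, \ell_2) = (1, 4)$, for which $d = 3$, are an arbitrary illustrative choice.

```python
from math import gcd

def dim_H(k, l1, l2, p, q):
    # Brute-force count of solutions to
    #   l1*(a1 - b1) + l2*(a2 - b2) = 0  (mod k)
    # with a1 + a2 = p, b1 + b2 = q, and a1 = 0 or b1 = 0,
    # which by Proposition prop:system equals dim H^G_{p,q}.
    return sum(
        1
        for a1 in range(p + 1)
        for b1 in range(q + 1)
        if a1 * b1 == 0
        and (l1 * (a1 - b1) + l2 * ((p - a1) - (q - b1))) % k == 0
    )

k, l1, l2 = 15, 1, 4
d = gcd(k, l1 - l2)  # here d = 3
```

For these parameters one can confirm, for instance, that $\dim\calH^G_{2,1} = 0$ (since $3 \nmid 2-1$), that the count is symmetric in $p$ and $q$, and that increasing $p$ or $q$ by $k$ increases the dimension by exactly $d$.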
In what follows, it will be useful to distinguish between ``mod" as an equivalence relation and ``mod" as a binary operator. To that end, we will continue to use ``$\mod k$" to denote the equivalence relation modulo $k$, and we will use ``$\%$" to denote the binary modulo operator. That is to say, $p\% k$ denotes the smallest non-negative integer equivalent to $p$ modulo $k$. For example, $11\%5 = 1$, $12\%4 = 0$, and $1\%7 = 1$. Using this notation, the previous lemma implies
\begin{lemma}[Dimension Lemma]\label{Lem:dimension}
If $d$ divides $p-q$, then
\[\dim \calH^G_{p,q} = \dim\calH^G_{p\% k,q\% k} + d\left(\left\lfloor\frac{p}{k}\right\rfloor + \left\lfloor\frac{q}{k}\right\rfloor\right). \]
\end{lemma}
Now, let $L(k;\ell_1',\ell_2')$ be another lens space generated by the group $G'$, and let $d' = \gcd(k,\ell_1' - \ell_2')$.
\begin{proposition}\label{Propd=d'} Consider any two lens spaces $L\left(k;\ell_1,\ell_2\right)$ and $L\left(k;\ell_1',\ell_2'\right)$ where $d = 2$ and $d' = 4$ do not occur simultaneously. If the lens spaces are CR isospectral, then $d = d'$.
\label{prop:gcd}
\end{proposition}
\begin{remark}
We suspect that the above statement for lens spaces with $d = 2$ and $d' = 4$ is also true. This case seems to require a more subtle analysis than the growth rate arguments considered below. However, since in the rest of this section we assume $k$ is prime, this restriction on $d$ and $d'$ does not impact the rest of the paper.
\end{remark}
\begin{proof} We show that if $d \neq d'$, then the spectra differ.
{\bf Case 1:} Assume $2 < d < d'$. Let $r$ be a prime to be determined such that $r \equiv 1 \mod d d'$. Let $\lambda = 4r$. It follows that the only $\left(p,q\right)$ so that $ 2 q \left( p + 1 \right) = \lambda$ are $\left(2r-1,1\right)$, $\left(r-1,2\right)$, $\left(1,r\right)$, and $\left(0,2r\right)$. Since $r\equiv 1 \mod d$ and $r\equiv 1 \mod d'$, by Lemma \ref{lem:doesnotdivide}
\begin{align*}
\operatorname{mult}_G \left(\lambda\right)
&= \dim \mathcal{H}^G_{2r - 1,1} + \dim \mathcal{H}^G_{1,r}\\
\operatorname{mult}_{G'} \left(\lambda\right)
&= \dim \mathcal{H}^{G'}_{2r - 1,1} + \dim \mathcal{H}^{G'}_{1,r}.
\end{align*}
By the dimension lemma,
\begin{align*}
\operatorname{mult}_G \left(\lambda\right)
&= \dim \mathcal{H}^G_{2r - 1\% k,1} + \dim \mathcal{H}^G_{1,r\% k} + d \left(\left\lfloor \frac{2r - 1}{k}\right\rfloor + \left\lfloor \frac{r}{k}\right\rfloor\right)\\
\operatorname{mult}_{G'} \left(\lambda\right)
&= \dim \mathcal{H}^{G'}_{2r - 1\% k,1} + \dim \mathcal{H}^{G'}_{1,r\% k}+ d' \left(\left\lfloor \frac{2r - 1}{k}\right\rfloor + \left\lfloor \frac{r}{k}\right\rfloor\right).
\end{align*}
Since $d < d'$, the multiplicity will differ for sufficiently large $r$.
{\bf Case 2:} Assume $2 = d < d'$ where $d' \neq 4$. Let $r$ be a prime to be determined such that $r \equiv 1 \mod d'$. Let $\lambda = 4r$. By Lemma \ref{lem:doesnotdivide}, we see that,
\begin{align*}
\operatorname{mult}_G \left(\lambda\right)
&= \dim \mathcal{H}_{1,r}^G + \dim \mathcal{H}_{2r-1,1}^G + \dim \mathcal{H}_{0,2r}^G + \dim \mathcal{H}_{r-1,2}^G\\
\operatorname{mult}_{G'} \left(\lambda\right)
&= \dim \mathcal{H}_{1,r}^{G'} + \dim \mathcal{H}_{2r-1,1}^{G'}.
\end{align*}
By the dimension lemma,
\begin{align*}
\operatorname{mult}_{G} \left(\lambda\right)
&= \dim \mathcal{H}_{1,r\% k}^G + \dim \mathcal{H}_{2r-1 \% k, 1}^G + \dim \mathcal{H}_{0,2r\% k}^G + \dim\mathcal{H}_{r - 1 \% k, 2}^G + 2 \left(\left\lfloor \frac{r}{k}\right\rfloor + \left\lfloor \frac{2r - 1}{k}\right\rfloor + \left\lfloor \frac{2r}{k}\right\rfloor + \left\lfloor \frac{r - 1}{k}\right\rfloor\right)\\
\operatorname{mult}_{G'} \left(\lambda\right)
&= \dim \mathcal{H}_{1,r\% k}^{G'} + \dim \mathcal{H}_{2r-1 \% k, 1}^{G'} + d' \left(\left\lfloor \frac{r}{k}\right\rfloor + \left\lfloor \frac{2r - 1}{k}\right\rfloor\right).
\end{align*}
This implies $\operatorname{mult}_G \left(\lambda\right)$ has growth rate $\frac{12}{k}r$ while $\operatorname{mult}_{G'} \left(\lambda\right)$ has growth rate $\frac{3d'}{k}r$. Since the growth rates are different, for large enough $r$ the multiplicities will differ.
{\bf Case 3:} Assume $1 = d < d'$. Let $r$ be any prime greater than $k$ so that $r\equiv 1 \mod d'$. Let $\lambda = 2r$. It follows that the only $\left(p,q\right)$ so that $2q \left(p+1\right) = \lambda$ are $\left(r-1,1\right)$ and $\left(0,r\right)$. Since $d'$ does not divide $r-2$ or $r$, $\calH_{r-1,1}^{G'}$ and $\calH_{0,r}^{G'}$ are both empty by Lemma \ref{lem:doesnotdivide}. This implies $\operatorname{mult}_{G'}\left(\lambda\right) = 0$. Since $r > k$, by the dimension lemma,
\[\operatorname{mult}_G\left(\lambda\right) = \dim\calH_{r-1,1}^{G} + \dim\calH_{0,r}^{G} = \dim\calH_{r-1\% k,1}^{G} + \dim\calH_{0,r\% k}^{G} + \lrfl{\frac {r-1}k} +\lrfl{\frac r k}> 0,\]
which implies the multiplicities differ.
\end{proof}
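As a numerical illustration of Case 3 (not part of the proof), take $k = 7$, the lens spaces $L(7;1,2)$ with $d = 1$ and $L(7;1,1)$ with $d' = 7$, and the prime $r = 29 \equiv 1 \mod 7$, so $\lambda = 2r = 58$. The sketch below computes multiplicities by summing the brute-force solution counts of Proposition \ref{prop:system} over all $(p,q)$ with $2q(p+1) = \lambda$:

```python
def dim_H(k, l1, l2, p, q):
    # dim H^G_{p,q}: brute-force solution count from Proposition prop:system
    return sum(
        1
        for a1 in range(p + 1)
        for b1 in range(q + 1)
        if a1 * b1 == 0
        and (l1 * (a1 - b1) + l2 * ((p - a1) - (q - b1))) % k == 0
    )

def mult(k, l1, l2, lam):
    # multiplicity of lam: sum of dim H^G_{p,q} over all (p, q) with 2q(p+1) = lam
    return sum(
        dim_H(k, l1, l2, lam // (2 * q) - 1, q)
        for q in range(1, lam // 2 + 1)
        if lam % (2 * q) == 0
    )
```

One finds $\operatorname{mult}_{G'}(58) = 0$ while $\operatorname{mult}_{G}(58) > 0$, so the spectra of these two lens spaces indeed differ.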
Recall that in this subsection we wish to show that (2) implies (3): if two CR isospectral lens spaces are given by groups $G$ and $G'$, then $\dim \mathcal{H}_{p,q}^G = \dim \mathcal{H}_{p,q}^{G'}$ for all $p,q\geq 0$.
For each $p, q \geq 0$, define \[x_{p,q} = \dim \calH^G_{p,q}-\dim \calH^{G'}_{p,q}.\]
By the dimension lemma and Proposition \ref{prop:gcd}, we have
\[
x_{p,q} =\dim\calH^G_{p\% k,q\% k} + d\left(\lrfl{\frac{p}{k}} + \lrfl{\frac{q}{k}}\right)-\left(\dim\calH^{G'}_{p\% k,q\% k} + d\left(\lrfl{\frac{p}{k}} + \lrfl{\frac{q}{k}}\right)\right)= x_{p\% k,q\% k}.\]
Thus, it suffices to show $x_{p,q} = 0$ for $0\leq p,q\leq k-1$. Define
\[X = \begin{pmatrix}
x_{0,0} & x_{0,1} & \cdots & x_{0,k-1} \\
x_{1,0} & x_{1,1} & \cdots & x_{1,k-1} \\
\vdots &\vdots & \ddots & \vdots \\
x_{k-1,0} & x_{k-1,1} & \cdots & x_{k-1,k-1}\\
\end{pmatrix}.\]
By our observation above, $X = 0$ implies $\dim \calH^G_{p,q} = \dim \calH^{G'}_{p,q}$ for all $p,q\geq 0$. From Lemma \ref{lem:symmetric}, we see that $X$ is symmetric.
Assuming our two lens spaces are CR isospectral, for every $\lambda \in 2\N$, we have $$\sum_{2q(p+1) = \lambda}\dim \calH^G_{p,q} = \sum_{2q(p+1) = \lambda}\dim \calH^{G'}_{p,q}.$$ This implies that
\begin{align*}
0=\sum_{2q(p+1) = \lambda}\left(\dim \calH^G_{p,q} -\dim \calH^{G'}_{p,q}\right)
=\sum_{2q(p+1) = \lambda}x_{p,q}
=\sum_{2q(p+1) = \lambda}x_{p\% k,q\% k}.
\end{align*}
For integers $a,b$ so that $0\leq a,b\leq k - 1$, define
\[c^\lambda_{a,b} = \#\left\{(p,q)\in \mathbb{Z}_{\geq 0}\times \mathbb{Z}_{\geq 0} :
p \equiv a\mod k,\,
q \equiv b\mod k,\,
2q \left(p+1\right) = \lambda\right\}.\]
Note that $c^\lambda_{a,b}$ is the number of times $x_{a,b}$ appears in the above sum. Therefore, we have \[\sum_{0 \leq a,b \leq k-1} c^\lambda_{a,b} x_{a,b} = 0. \]
For each $\lambda$, define the $k \times k$ matrix $C^{\lambda}$:
\[C^\lambda = \begin{pmatrix}
c_{0,0}^\lambda & c_{0,1}^\lambda & \cdots & c_{0,k-1}^\lambda \\
c_{1,0}^\lambda & c_{1,1}^\lambda & \cdots & c_{1,k-1}^\lambda \\
\vdots & \vdots & \ddots & \vdots \\
c_{k-1,0}^\lambda & c_{k-1,1}^\lambda & \cdots & c_{k-1,k-1}^\lambda\\
\end{pmatrix}.\]
Then, we may write the above equations more compactly as \[C^\lambda \cdot X = 0\text{ for all $\lambda \in 2\N$},\]
where $\cdot$ denotes the Frobenius inner product. We will show that the only symmetric matrix $X$ that satisfies all of these equations is the zero matrix.
\begin{definition} Define the operator $T$ on $k\times k$ matrices, which moves the top row to the bottom and shifts every other row up by one:
\[T: \begin{pmatrix}
a_{0,0} & a_{0,1} & \cdots & a_{0,k-1} \\
a_{1,0} & a_{1,1} & \cdots & a_{1,k-1} \\
\vdots &\vdots & \ddots & \vdots\\
a_{k-1,0} & a_{k-1,1} & \cdots & a_{k-1,k-1} \\
\end{pmatrix} \mapsto
\begin{pmatrix}
a_{1,0} & a_{1,1} & \cdots & a_{1,k-1} \\
a_{2,0} & a_{2,1} & \cdots & a_{2,k-1} \\
\vdots &\vdots & \ddots & \vdots\\
a_{k-1,0} & a_{k-1,1} & \cdots & a_{k-1,k-1} \\
a_{0,0} & a_{0,1} & \cdots & a_{0,k-1} \\
\end{pmatrix}. \]
\end{definition}
Note that $T^{-1}$ moves the bottom row to the top and shifts every other row down by one:
\[T^{-1}: \begin{pmatrix}
a_{0,0} & a_{0,1} & \cdots & a_{0,k-1} \\
a_{1,0} & a_{1,1} & \cdots & a_{1,k-1} \\
\vdots &\vdots & \ddots & \vdots\\
a_{k-1,0} & a_{k-1,1} & \cdots & a_{k-1,k-1} \\
\end{pmatrix} \mapsto
\begin{pmatrix}
a_{k-1,0} & a_{k-1,1} & \cdots & a_{k-1,k-1} \\
a_{0,0} & a_{0,1} & \cdots & a_{0,k-1} \\
a_{1,0} & a_{1,1} & \cdots & a_{1,k-1} \\
\vdots &\vdots & \ddots & \vdots\\
a_{k-2,0} & a_{k-2,1} & \cdots & a_{k-2,k-1} \\
\end{pmatrix}. \]
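Concretely, $T$ and $T^{-1}$ are cyclic shifts of the rows. As an illustrative sketch (storing a matrix as a list of rows), they can be written as:

```python
def T(A):
    # move the top row to the bottom and shift every other row up by one
    return A[1:] + A[:1]

def T_inv(A):
    # move the bottom row to the top and shift every other row down by one
    return A[-1:] + A[:-1]
```

In particular, $T^{-1}\circ T$ is the identity, and $T^k$ is the identity on $k\times k$ matrices.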
We now use $T$ to prove the following theorem about $\spn_\R\left\{C^\lambda\right\}_{\lambda\in2\N}$.
\begin{theorem}
Let $k$ be a prime number. It follows that,
\[\spn_\R\left\{C^\lambda\right\}_{\lambda\in2\N} = T\left(\Sym_k\right)\]
where $\Sym_k$ denotes the space of real $k\times k$ symmetric matrices.
\end{theorem}
\begin{proof}
We first show $\spn_\R\left\{C^\lambda\right\}_{\lambda\in2\N} \subseteq T\left(\Sym_k\right)$. Fix $\lambda\in 2\mathbb{N}$ and let
\[S^\lambda_{a,b} = \left\{\left(p,q\right)\in \mathbb{Z}_{\geq 0} \times \mathbb{Z}_{\geq 0}:
p \equiv a \mod{k},\,
q \equiv b \mod{k},\,
2q \left(p+1\right) = \lambda\right\}.\]
Note that $c^\lambda_{a,b} = \#S^\lambda_{a,b}$. The map \[(p,q) \mapsto (q-1,p+1)\]
is a bijection from $S^\lambda_{a,b}$ to $S^{\lambda}_{b-1,a+1}$. Therefore $c^\lambda_{a,b} = c^\lambda_{b-1,a+1}$ for all $a,b$. Note that \[T^{-1}\left(C^\lambda\right) = \begin{pmatrix}
c_{k-1,0}^{\lambda} & c_{k-1,1}^{\lambda} & c_{k-1,2}^{\lambda} & \cdots & c_{k-1,k-1}^{\lambda} \\
c_{0,0}^{\lambda} & c_{0,1}^{\lambda} & c_{0,2}^{\lambda} & \cdots & c_{0,k-1}^{\lambda} \\
c_{1,0}^{\lambda} & c_{1,1}^{\lambda} & c_{1,2}^{\lambda} &\cdots & c_{1,k-1}^{\lambda} \\
\vdots & \vdots &\vdots & \ddots & \vdots \\
c_{k-2,0}^{\lambda} & c_{k-2,1}^{\lambda} & c_{k-2,2}^{\lambda} & \cdots & c_{k-2,k-1}^{\lambda} \\
\end{pmatrix}.\]
Since $c^\lambda_{a,b} = c^\lambda_{b-1,a+1}$, the above matrix is symmetric. Hence, $T^{-1}\left(C^\lambda\right)$ is in $\Sym_k$ for each $\lambda$. This implies $C^\lambda$ is in $T\left(\Sym_k\right)$ for each $\lambda$.
We now show the other direction: $T\left(\Sym_k\right)\subseteq \spn_\R\left\{C^\lambda\right\}_{\lambda\in2\N}$. In particular, since $T$ is bijective, we show $\Sym_k\subseteq T^{-1}\left(\spn_\R\left\{C^\lambda\right\}_{\lambda\in2\N}\right)$. Let $E_{i,j}$ denote the $k\times k$ matrix whose $i,j$ entry is 1 and all other entries are 0. Note that $T^{-1}\left(E_{i,j}\right) = E_{\left(i+ 1\right) \% k , j}$ and the set
\[\left\{E_{i,j} + E_{j,i} : 0\leq i \leq j \leq k-1\right\}\]
is a basis for $\Sym_k$. We show that each of these basis elements is in $T^{-1}(\spn_\R\{C^\lambda\}_{\lambda\in2\N})$. Let $0\leq i\leq j\leq k-1$.
\textbf{Case 1:} Assume $0 = i = j $. Let $\lambda = 2k$. Since $k$ is prime, the only $\left(p,q\right)$ so that $2q\left(p+1\right) = \lambda$ are $\left(0,k\right)$ and $\left(k-1,1\right)$. This implies $C^{2k} = E_{0,0} + E_{k-1,1}$. Therefore,
\begin{equation}
\label{eq:twok}
T^{-1}\left(C^{2k}\right) = E_{1,0} + E_{0,1}.
\end{equation}
Now let $\lambda = 2k^2$. Similarly, the only pairs $\left(p,q\right)$ so that $2q\left(p+1\right) = \lambda$ are $\left(0,k^2\right)$, $\left(k-1,k\right)$, and $\left(k^2-1,1\right)$. This implies $C^{2k^2} = E_{0,0} + E_{k-1,0} + E_{k-1,1}$. Therefore,
\begin{equation}
\label{eq:twoksquared}
T^{-1}\left(C^{2k^2}\right) = E_{1,0} + E_{0,0} + E_{0,1}.
\end{equation}
From (\ref{eq:twok}) and (\ref{eq:twoksquared}), it follows that \[
E_{0,0} = T^{-1}\left(C^{2k^2}\right) - T^{-1}\left(C^{2k}\right).\]
Thus, the basis vector $E_{0,0} + E_{0,0} = 2E_{0,0}$ is in $T^{-1}\left(\spn_\R\{C^{\lambda}\}_{\lambda\in2\N}\right)$.
\textbf{Case 2:} Assume $0 = i < j$. Since $k$ is prime, $j$ is trivially coprime to $k$. By Dirichlet's theorem, there exists a prime $r$ so that $r \equiv j \mod k$. Let $\lambda = 2k r$. The only $\left(p,q\right)$ so that $2q\left(p+1\right) = \lambda$ are $\left(kr-1, 1\right)$, $\left(0, kr\right)$, $\left(k-1, r\right)$, and $\left(r-1, k\right)$. This implies,
\begin{equation}
\label{eq:twokr}
T^{-1}\left(C^{2kr}\right) = E_{0,1} + E_{1,0} + E_{0,j} + E_{j,0}.
\end{equation}
From (\ref{eq:twok}) and (\ref{eq:twokr}) \[E_{0,j} + E_{j,0} = T^{-1}\left(C^{2k r}\right) - T^{-1}\left(C^{2k}\right),\]
so $E_{0,j} + E_{j,0} \in T^{-1}\left(\spn_\R\{C^{\lambda}\}_{\lambda\in2\N}\right)$.
\textbf{Case 3:} Assume $0 < i \leq j$. Since $k$ is prime, $i$ and $j$ are coprime to $k$. Let $r$, $s$, and $t$ be primes such that $r \equiv j \mod k$, $s\equiv i \mod k$, and $t\equiv ij \mod k$. Let $\lambda = 2 rs$. Just as in case 2, we obtain the ordered pairs $\left(rs-1, 1\right)$, $\left(0, rs\right)$, $\left(s-1, r\right)$, and $\left(r-1, s\right)$, which implies
\begin{equation}
\label{eq:twors}
T^{-1}\left(C^{2rs}\right) = E_{ij\%k,1} + E_{1,ij\%k} + E_{i,j} + E_{j,i}.
\end{equation}
Finally, let $\lambda = 2t$. As in case 1, the only ordered pairs are $\left(t-1,1\right)$ and $\left(0,t\right)$, so
\begin{equation}
\label{eq:twot}
T^{-1}\left(C^{2t}\right) = E_{ij\%k,1} + E_{1,ij\%k}.
\end{equation}
From (\ref{eq:twors}), and (\ref{eq:twot}), we see that \[E_{i,j} + E_{j,i} = T^{-1}\left(C^{2rs}\right) - T^{-1}\left(C^{2 t}\right),\]
so $E_{i,j} + E_{j,i} \in T^{-1}\left(\spn_\R\{C^{\lambda}\}_{\lambda\in2\N}\right)$.
\end{proof}
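The combinatorics in this proof can also be checked numerically. The sketch below (with the arbitrary choice $k = 5$) tabulates $c^\lambda_{a,b}$ by enumerating all $(p,q)$ with $2q(p+1) = \lambda$, and confirms both the symmetry of $T^{-1}\left(C^\lambda\right)$ and the Case 1 identities $C^{2k} = E_{0,0} + E_{k-1,1}$ and $C^{2k^2} = E_{0,0} + E_{k-1,0} + E_{k-1,1}$:

```python
def C(k, lam):
    # c^lam_{a,b}: number of (p, q) with p = a, q = b (mod k) and 2q(p+1) = lam
    M = [[0] * k for _ in range(k)]
    for q in range(1, lam // 2 + 1):
        if lam % (2 * q) == 0:
            p = lam // (2 * q) - 1
            M[p % k][q % k] += 1
    return M

def T_inv(A):
    # move the bottom row to the top and shift every other row down by one
    return A[-1:] + A[:-1]

def is_symmetric(A):
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))
```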
The following corollary is immediate once we regard the vector space of all real $k\times k$ matrices as an inner product space equipped with the Frobenius inner product. Under this inner product, symmetric matrices and skew-symmetric matrices are orthogonal, and every $k\times k$ matrix is the sum of a symmetric matrix and a skew-symmetric matrix.
\begin{corollary}
If a matrix $X$ solves the system
\[C^\lambda \cdot X = 0\text{ for all }\lambda = 2,4,6,\ldots,\]
then $X\in T\left(\Sym_k\right)^\perp = T\left(\Skew_k\right)$, where $\Skew_k$ denotes the space of real $k\times k$ skew-symmetric (also known as antisymmetric) matrices.
\end{corollary}
We now complete the proof that (2) implies (3). Suppose we have two CR isospectral lens spaces $L\left(k;\ell_1,\ell_2\right)$ and $L\left(k;\ell_1',\ell_2'\right)$ where $k$ is an odd prime. Then the symmetric matrix $X$ defined above satisfies $C^\lambda\cdot X = 0$ for all $\lambda\in 2 \mathbb{N}$. By the above corollary, $X\in \Sym_k \cap T\left(\Skew_k\right)$. Equivalently, $T^{-1}\left(X\right)\in T^{-1}\left(\Sym_k\right) \cap \Skew_k$. Since $T^{-1}\left(X\right)\in\Skew_k$, we have
\[T^{-1}\left(X\right) = - T^{-1}\left(X\right)^t.\]
Applying $T$ to both sides yields,
\[ X = - T\left(T^{-1}\left(X\right)^t\right).\]
Since $X$ is symmetric, transposing both sides implies
\[X = - T\left(T^{-1}\left(X\right)^t\right)^t.\]
Writing this equation in matrix form, we obtain
\[\begin{pmatrix}
x_{0,0} & x_{0,1} & x_{0,2} & \cdots & x_{0,k-1} \\
x_{1,0} & x_{1,1} & x_{1,2} & \cdots & x_{1,k-1} \\
x_{2,0} & x_{2,1} & x_{2,2} & \cdots & x_{2,k-1} \\
\vdots &\vdots & \vdots & \ddots & \vdots\\
x_{k-1,0} & x_{k-1,1} & x_{k-1,2} & \cdots & x_{k-1,k-1} \\
\end{pmatrix} =
-\begin{pmatrix}
x_{k-1,1} & x_{k-1,2} & \cdots & x_{k-1,k-1} & x_{k-1,0} \\
x_{0,1} & x_{0,2} & \cdots & x_{0,k-1} & x_{0,0} \\
x_{1,1} & x_{1,2} & \cdots & x_{1,k-1} & x_{1,0} \\
\vdots &\vdots & \ddots & \vdots & \vdots\\
x_{k-2,1} & x_{k-2,2} & \cdots & x_{k-2,k-1} & x_{k-2,0} \\
\end{pmatrix}.\]
This means $x_{a,b} = -x_{\left(a-1\right)\% k,\left(b+1\right)\% k}$ for all $0\leq a,b\leq k-1$. Applying this fact $k$ times, we see that \[x_{a,b} = \left(-1\right)^k x_{\left(a-k\right)\% k,\left(b+k\right)\% k} = \left(-1\right)^k x_{a,b} = - x_{a,b},\] since $k$ is odd.
Therefore, we must have $x_{a,b} = 0$ for all $a,b$. Thus, $X = 0$. As noted above, this implies $\dim\calH_{p,q}^G = \dim\calH_{p,q}^{G'}$ for all $p,q$, completing the proof that (2) implies (3) in Theorem \ref{thm:isospec}.
\subsection{(3) implies (4)}
In this section, we extend techniques from \cite{ikeda1979} to the CR setting. For a lens space $L(k;\ell_1,\ldots,\ell_n)$ with corresponding group $G$, consider the following generating function, defined for pairs of complex numbers of sufficiently small modulus:
\[F_{\left(k; \ell_1,\ldots,\ell_n\right)}\left(z,w\right) = \sum_{p,q\geq 0} \left(\dim\calH_{p,q}^G\right)z^p w^q.\]
We now present a closed form for this generating function that connects the CR spectrum of this lens space and its geometry.
\begin{theorem}
\label{thm:genfunc}
Let $G$ be the group corresponding to the lens space $L\left(k;\ell_1,\ldots,\ell_n\right)$. It follows that,
\[F_{\left(k; \ell_1,\ldots,\ell_n\right)}\left(z,w\right) = \frac1k\sum_{m = 0}^{k-1} \frac{1 - z w} {\prod_{i = 1}^n\left(z-\zeta^{-m\ell_i}\right)\left(w-\zeta^{m\ell_i} \right)}\]
where $\zeta = e^{2\pi i/k}$.
\end{theorem}
\begin{proof}
Let $\calP_{p,q}$ denote the space of polynomials of bidegree $p,q$ on $S^{2n-1}$. We may consider $\calH_{p,q}$ and $\calP_{p,q}$ as $G$-modules over $\mathbb{C}$. Note that $\calP_{p,q} \cong \calH_{p,q}\oplus |z|^2\calP_{p-1,q-1}$ \cite{klima2004}.
Let $\chi_{p,q}$ be the character of $\calH_{p,q}$ and $\widetilde{\chi}_{p,q}$ be the character of $\calP_{p,q}$. Note that \[\left\{z^{\alpha}\overline{z}^\beta : \alpha,\beta\in \Z_{\geq 0}^n, \left|\alpha\right| = p, \left|\beta\right| = q\right\}\]
is a basis for $\calP_{p,q}$, and recall that
\[g^m \cdot z^{\alpha} \overline{z}^\beta = \zeta^{m \left(\sum_{i=1}^n{\ell_i \left(\alpha_i - \beta_i\right)}\right)} z^{\alpha} \overline{z}^\beta.\]
Since this is a basis on which $g^m$ acts diagonally, we see that \[\widetilde{\chi}_{p,q}\left(g^m\right) = \sum_{\left|\alpha\right| = p,\left|\beta\right| = q} \zeta^{m \left(\sum_{i=1}^n{\ell_i \left(\alpha_i - \beta_i\right)}\right)}.\]
Notice that in
\[\prod_{i = 1}^n \left(1 + \zeta^{m \ell_i}z + \zeta^{2m \ell_i}z^2 + \cdots\right) \left(1 + \zeta^{-m \ell_i}w + \zeta^{-2m \ell_i}w^2 + \cdots\right),\]
the coefficient of $z^p w^q$ is precisely $\widetilde{\chi}_{p,q}\left(g^m\right)$. It follows that,
\[\sum_{p,q\geq 0}\widetilde{\chi}_{p,q}\left(g^m\right) z^p w^q =
\frac{1}{\prod_{i = 1}^n\left(1 - \zeta^{m \ell_i}z\right) \left(1 - \zeta^{-m \ell_i}w\right)},\]
where we have used the fact that $1 + x + x^2 + \cdots = \left(1-x\right)^{-1}$.
We now relate the characters to the dimension of eigenspaces. Since $\calP_{p,q} \cong \calH_{p,q} \oplus |z|^2\calP_{p-1,q-1}$, it follows that \[\chi_{p,q} = \widetilde\chi_{p,q}- \widetilde{\chi}_{p-1,q-1}.\]
Note that when $p$ or $q$ is zero $\calP_{p,q}=\calH_{p,q}$; therefore, we set $\widetilde{\chi}_{0,-1}=\widetilde{\chi}_{-1,0}=\widetilde{\chi}_{-1,-1}=0$.
Moreover, $\dim\calH_{p,q}^G$ can be obtained by averaging the character $\chi_{p,q}$ over the group $G$ as follows,
\[\dim \mathcal{H}^G_{p,q} = \frac1k \sum_{m = 0}^{k-1} \chi_{p,q}\left(g^m\right).\]
Combining all of these facts, we see that
\begin{align*}
\sum_{p,q\geq 0} \left(\dim\mathcal{H}_{p,q} ^G\right) z^p w^q
&= \frac1k\sum_{p,q\geq 0} \lrp{\sum_{m = 0}^{k-1} \chi_{p,q}\left(g^m\right)} z^p w^q\\
&= \frac1k\sum_{p,q\geq 0} \lrp{\sum_{m = 0}^{k-1} \widetilde\chi_{p,q}\left(g^m\right) - \widetilde\chi_{p-1,q-1}\left(g^m\right)} z^p w^q\\
&= \frac1k\sum_{m = 0}^{k-1}\lrp{\lrp{ \sum_{p,q\geq 0} \widetilde\chi_{p,q}\left(g^m\right)z^p w^q} - z w \lrp{\sum_{p,q\geq 1}\widetilde\chi_{p-1,q-1}\left(g^m\right) z^{p-1} w^{q-1}}}\\
&= \frac1k\sum_{m = 0}^{k-1} \frac{1-zw} {\prod_{i = 1}^n\left(1-\zeta^{m\ell_i} z\right)
\left(1-\zeta^{-m\ell_i} w\right)}= \frac1k\sum_{m = 0}^{k-1} \frac{1-zw} {\prod_{i = 1}^n\left(z-\zeta^{-m\ell_i}\right)
\left(w-\zeta^{m\ell_i}\right)}.
\end{align*}
\end{proof}
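Theorem \ref{thm:genfunc} can also be verified numerically in the $n = 2$ case by comparing a truncated power series, whose coefficients are computed by brute force from Proposition \ref{prop:system}, against the closed form. The sketch below does this at an arbitrary sample point inside the domain of convergence; the truncation order $N = 35$ and the tolerance are illustrative choices.

```python
import cmath

def dim_H(k, l1, l2, p, q):
    # dim H^G_{p,q} for n = 2: brute-force count from Proposition prop:system
    return sum(
        1
        for a1 in range(p + 1)
        for b1 in range(q + 1)
        if a1 * b1 == 0
        and (l1 * (a1 - b1) + l2 * ((p - a1) - (q - b1))) % k == 0
    )

def F_series(k, l1, l2, z, w, N=35):
    # truncated power series of F_{(k; l1, l2)}(z, w)
    return sum(
        dim_H(k, l1, l2, p, q) * z ** p * w ** q
        for p in range(N)
        for q in range(N)
    )

def F_closed(k, l1, l2, z, w):
    # closed form from Theorem thm:genfunc (first expression in its proof)
    zeta = cmath.exp(2j * cmath.pi / k)
    return sum(
        (1 - z * w)
        / ((1 - zeta ** (m * l1) * z) * (1 - zeta ** (-m * l1) * w)
           * (1 - zeta ** (m * l2) * z) * (1 - zeta ** (-m * l2) * w))
        for m in range(k)
    ) / k
```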
To complete the proof that $(3)$ implies $(4)$ in Theorem \ref{thm:isospec} in the $n=2$ case, we need the following.
\begin{lemma}
\label{lem:independence}
Let $\zeta = e^{2\pi i/k}$. The set
\[\left\{\frac{1}{\left(z-\zeta^i \right)\left(w-\zeta^{-i}\right)\left(z-\zeta^j \right)\left(w-\zeta^{-j}\right)}\ \middle |\ 0\leq i\leq j\leq k-1\right\}\]
is linearly independent over $\C$.
\end{lemma}
\begin{proof}
We will label the elements of the above set as
\[f_{l,m} = \frac1{\left(z-\zeta^l\right)\left(w-\zeta^{-l}\right)\left(z-\zeta^m\right)\left(w-\zeta^{-m}\right)}.\]
First, note that the set $\left\{f_{l,l}: 0 \leq l \leq k-1\right\}$ is linearly independent and $\operatorname{span}\left\{f_{l,l}: 0 \leq l \leq k-1\right\} \cap \operatorname{span}\left\{f_{l,m}: 0 \leq l < m \leq k-1\right\} = \left\{0\right\}$, since each $f_{l,l}$ has a pole of order two along $z = \zeta^l$, while each $f_{l,m}$ with $l < m$ has only simple poles.
Now for $\left\{f_{l,m}: 0 \leq l < m \leq k-1\right\}$,
suppose there exists $a_{l,m}\in \mathbb{C}$ so that
\[\sum_{l=0}^{k-1}\sum_{m=l+1}^{k-1}a_{l,m}f_{l,m}=\sum_{l=0}^{k-1}\sum_{m=l+1}^{k-1}\frac{a_{l,m}}{\left(z-\zeta^l\right)\left(w-\zeta^{-l}\right)\left(z-\zeta^m\right)\left(w-\zeta^{-m}\right)}=0.\]
If we multiply by a factor of $\left(z^k-1\right)\left(w^k-1\right)$,
then we are left with a polynomial
\[\left(z^k-1\right)\left(w^k-1\right)\sum_{l=0}^{k-1}\sum_{m=l+1}^{k-1}\frac{a_{l,m}}{\left(z-\zeta^l\right)\left(w-\zeta^{-l}\right)\left(z-\zeta^m\right)\left(w-\zeta^{-m}\right)}=0.\]
Setting $z=\zeta^{l_0}$, $w=\zeta^{-m_0}$, where $0 \leq l_0 < m_0 \leq k-1$ implies $a_{l_0,m_0}=0$. Therefore, the set $\left\{f_{l,m}: 0 \leq l < m \leq k-1\right\}$ is linearly independent.
\end{proof}
\begin{corollary}
Let $L\left(k; \ell_1,\ell_2\right)$ and $L\left(k; \ell_1',\ell_2'\right)$ be lens spaces with groups $G$ and $G'$ respectively. If $\dim\calH^G_{p,q} = \dim\calH^{G'}_{p,q}$ for all $p,q$, then there exists a permutation $\sigma$ and an integer $a$ such that $\left(\ell_1,\ell_2\right) \equiv \left(a\ell'_{\sigma\left(1\right)},a\ell'_{\sigma\left(2\right)}\right) \mod k$.
\end{corollary}
\begin{proof}
Since $\dim\calH^G_{p,q} = \dim\calH^{G'}_{p,q}$ for all $p,q$, $F_{\left(k; \ell_1,\ell_2\right)}\left(z,w\right) = F_{\left(k; \ell_1',\ell_2'\right)}\left(z,w\right)$. Applying Theorem~\ref{thm:genfunc} and cancelling like terms, we obtain
\[\sum_{m = 0}^{k-1} \frac{1}
{\left(z-\zeta^{-m\ell_1}\right)\left(w-\zeta^{m\ell_1}\right)\left(z-\zeta^{-m\ell_2}\right)\left(w-\zeta^{m\ell_2}\right)} =
\sum_{m = 0}^{k-1} \frac{1}
{\left(z-\zeta^{-m\ell_1'}\right)\left(w-\zeta^{m\ell_1'}\right)
\left(z-\zeta^{-m\ell_2'}\right)\left(w-\zeta^{m\ell_2'}\right)}.\]
By Lemma \ref{lem:independence}, the terms of these sums are linearly independent, so we may equate each term on the left with a term on the right. By considering the $m = 1$ term on the left, there exists some $m_0$ such that
\[\frac{1}{\left(z-\zeta^{-\ell_1}\right)\left(w-\zeta^{\ell_1}\right)
\left(z-\zeta^{-\ell_2}\right)\left(w-\zeta^{\ell_2}\right)}
= \frac{1}{\left(z-\zeta^{-m_0\ell_1'}\right)\left(w-\zeta^{m_0\ell_1'}\right)
\left(z-\zeta^{-m_0\ell_2'}\right)\left(w-\zeta^{m_0\ell_2'}\right)}.\]
This implies $\left(m_0\ell_1',m_0\ell_2'\right)$ is some permutation of $\left(\ell_1, \ell_2\right)$. The claim follows after setting $a = m_0$.
\end{proof}
\subsection{(4) implies (1) and (1) implies (2)}
We begin with the following theorem which shows that (4) implies (1).
\begin{theorem}
Let $L\left(k;\ell_1,\ldots,\ell_n\right)$ and $L\left(k;\ell_1',\ldots,\ell_n'\right)$ be lens spaces. If there exists a permutation $\sigma$ and an integer $a$ such that $\left(\ell_1',\ldots,\ell_n'\right) \equiv \left(a\ell_{\sigma\left(1\right)},\ldots,a\ell_{\sigma\left(n\right)}\right) \mod k$, then $L\left(k;\ell_1,\ldots,\ell_n\right)$ and $L\left(k;\ell_1',\ldots,\ell_n'\right)$ are CR isometric.
\end{theorem}
\begin{proof}
For any permutation $\sigma$, \[\left(z_1,z_2,\dots,z_n\right) \mapsto (z_{\sigma\left(1\right)},z_{\sigma\left(2\right)}, \dots, z_{\sigma\left(n\right)})\]
is a CR isometry from $S^{2n-1}$ to itself. This induces a CR isometry from $L\left(k; \ell_1, \dots, \ell_n\right)$ to $L\left(k; \ell_{\sigma(1)}, \dots, \ell_{\sigma\left(n\right)}\right)$.
Note that $a$ must be relatively prime to $k$ by the definition of a lens space. If $g$ denotes the action corresponding to $L\left(k; \ell_1, \dots, \ell_n\right)$, then $g^a$ generates the same subgroup as $g$. Hence $L\left(k; \ell_1, \dots, \ell_n\right) = L\left(k; a\ell_1, \dots, a\ell_n\right)$, which completes the proof.
\end{proof}
Finally, we need to prove (1) implies (2) to conclude the proof of Theorem \ref{thm:isospec}. This implication follows from a more general statement that the Kohn Laplacian commutes with CR isometries and therefore isometric CR manifolds are isospectral. We refer to \cite[Section 4.4]{Canzani} for the proof in the Riemannian setting and argue similarly in the CR setting.
\section{Introduction}
The rare-earth intermetallic compounds of the general formula $RT_2X_2$, where $R$ is a rare-earth element, $T$ is a transition metal and $X$ is a $p$-block element, crystallizing in the ThCr$_2$Si$_2$-type crystal structure have been investigated extensively owing to their interesting physical properties, such as superconductivity, heavy-fermion behaviour, valence fluctuation, and magnetic ordering~\cite{steglich1979superconductivity, dung2009magnetic, thamizhavel2007anisotropic, joshi2010magnetocrystalline, drachuck2016magnetization}. A smaller number of rare-earth-based compounds with the general formula (1-2-2) crystallize in the trigonal CaAl$_2$Si$_2$-type crystal structure. The reason for their relative paucity is that the number of valence electrons should not exceed 16~\cite{klufers1984alpha, kranenberg2000structure}, a condition satisfied by relatively few compositions. However, it has been found that $R$Al$_2$X$_2$ (X = Si and Ge) compounds, where the valence electron count is 17, are exceptions to this rule~\cite{kranenberg2000structure, kranenberg2000investigations}. Kranenberg et al. attributed the stability of these compounds to the small electronegativity difference between the Al and Si (Ge) atoms.
The $R$Al$_2$X$_2$ compounds crystallizing in the CaAl$_2$Si$_2$-type structure adopt the \textit{P\={3}m1} space group (\#164). An interesting feature of this structure type is that the rare-earth atoms form a triangular lattice in the $ab$-plane, and the layers formed by the Al and Si atoms are separated by the distance $c$, the lattice parameter normal to the hexagonal plane. The $R$Al$_2$X$_2$ ($R$ = Eu and Yb; X = Si and Ge) compounds have been previously investigated in polycrystalline form~\cite{kranenberg2000structure, schobinger1989magnetic}. We have reported the physical properties of an EuAl$_2$Si$_2$ single crystal~\cite{maurya2015anisotropic}, which orders antiferromagnetically at 33~K and exhibits a substantially large magnetoresistance ($\sim$1200\% at 14~T) at 2~K for the field applied along the $c$-axis. Very recently, we reported the magnetic properties of a HoAl$_2$Ge$_2$ single crystal~\cite{matin2018single}, which undergoes a bulk antiferromagnetic transition at 6.5~K with the $ab$-plane as the easy plane of magnetization. In continuation of our studies on this series of compounds, we have successfully grown single crystals of ErAl$_2$Ge$_2$ and probed their magnetic behaviour using magnetization, electrical resistivity and heat capacity measurements.
\section{Experimental methods}
Single crystals of ErAl$_2$Ge$_2$\ were grown by the high-temperature solution growth method, taking advantage of the deep eutectic (420~$^{\circ}$C) formed by Al:Ge (72:28)~\cite{Okamoto1993}. High-purity metals of Er, Al and Ge with a starting composition of $1 : 17.5 : 7.5$ were placed in a high-quality recrystallized alumina crucible. The alumina crucible was sealed in an evacuated quartz ampoule under a partial pressure of argon gas. The pressure of the argon gas was chosen such that it did not exceed atmospheric pressure at the maximum growth temperature. The ampoule was placed in a resistive-heating box-type furnace, heated to 1050~$^{\circ}$C at a rate of 30~$^{\circ}$C/hr and held at this temperature for 20~hr to homogenize the melt. The furnace was then cooled at a rate of 1.8~$^{\circ}$C/hr down to 600~$^{\circ}$C, at which point the excess flux was removed by means of centrifuging. Well-defined shiny single crystals with typical dimensions of 4~mm~$\times$~3~mm~$\times$~1~mm were obtained. A few pieces of the single crystals were ground for powder x-ray diffraction measurements using a PANalytical X-ray machine with monochromatic Cu-K$_{\rm \alpha}$ radiation. The magnetic measurements were performed using a SQUID magnetometer (Quantum Design, USA), and the heat capacity and electrical measurements were performed using a physical property measurement system (PPMS).
\section{Results and Discussions}
\begin{figure}[b]
\includegraphics[width=0.5\textwidth]{Fig1.pdf}
\caption{(Color online) Rietveld refinement of the powder x-ray diffraction pattern of crushed single crystals of ErAl$_2$Ge$_2$. Inset shows the Laue diffraction pattern corresponding to the (001) plane. }
\label{Fig1}
\end{figure}
\subsection{X-ray studies}
The room temperature powder x-ray diffraction (XRD) pattern of ErAl$_2$Ge$_2$\ is shown in Fig.~\ref{Fig1}. All the peaks can be indexed to the trigonal CaAl$_2$Si$_2$-type crystal structure, and there are no extra peaks due to any trace impurity phase(s). The Rietveld analysis of the XRD data furnishes the lattice constants $a = 4.180$~\AA\ and $c~=~6.654$~\AA, which are in close agreement with the previously reported values~\cite{qin2008investigation}. The well-defined Laue diffraction pattern, shown in the inset of Fig.~\ref{Fig1}, confirms the good quality of the single crystal. The nearest Er-Er distance lies in the $ab$-plane and is equal to the lattice constant $a$.
\subsection{Electrical resistivity}
\begin{figure}[b]
\includegraphics[width=0.5\textwidth]{Fig2.pdf}
\caption{(Color online) Temperature dependence of electrical resistivity with current in the basal plane. The inset shows the low temperature region where the magnetic ordering is indicated by an arrow. }
\label{Fig2}
\end{figure}
The temperature dependence of the electrical resistivity from 2 to 300~K, with the current in the $ab$-plane, is shown in Fig.~\ref{Fig2}. The electrical resistivity decreases with decreasing temperature, typical of a metal, and shows an upturn below 4~K, which is due to the onset of antiferromagnetic ordering (see the magnetization data below). A similar upturn in the electrical resistivity was observed in the isostructural compound HoAl$_2$Ge$_2$~\cite{matin2018single} below its N\'{e}el temperature. The increase in the electrical resistivity in the antiferromagnetic state is usually attributed to the formation of a superzone gap, which occurs when the magnetic periodicity differs from the lattice periodicity. Because of the gap opening, the number of charge carriers decreases, which leads to the increase in the electrical resistivity. The superzone gap is observed in elemental rare-earth metals such as Dy, Er, Ho and Tm~\cite{elliott1963theory} and has also been observed in several rare-earth intermetallic compounds such as CeGe, CePd$_5$Al$_2$, UCu$_2$Sn and UNiGa~\cite{das2012anisotropic, onimaru2008giant, takabatake1998superzone, aoki1996superzone}.
\subsection{Magnetic susceptibility and isothermal magnetization}
\begin{figure}[b]
\includegraphics[width=0.5\textwidth]{Fig3.pdf}
\caption{(Color online) Temperature dependence of the magnetic susceptibility for field parallel and perpendicular to the $ab$-plane. The inset shows the inverse susceptibility plot along with the modified Curie-Weiss fit. }
\label{Fig3}
\end{figure}
\paragraph{Magnetic susceptibility} The $dc$ magnetic susceptibility (main panel) and the inverse magnetic susceptibility (inset) of ErAl$_2$Ge$_2$\ are shown in Fig.~\ref{Fig3}. The susceptibility follows the Curie-Weiss behaviour at high temperatures. There is a prominent change of slope in the susceptibility at around 4~K for fields applied both along the $c$-axis and normal to it, which is due to the antiferromagnetic transition. A large anisotropy is observed at low temperatures (below 20~K), but it weakens considerably at higher temperatures in the paramagnetic state. The susceptibility in the $ab$-plane increases more rapidly below 20~K than that along the $c$-axis, indicating that the easy axis of magnetization lies in the $ab$-plane. Similar behaviour was previously observed in HoAl$_2$Ge$_2$~\cite{matin2018single}. The inverse susceptibility data were fitted to the modified Curie-Weiss law, $\chi = \chi_0 + \frac{C}{T-\theta_{\rm p}}$, where $\chi_0$ is the temperature-independent term whose contributions come from the core electrons and the Pauli spin susceptibility of the conduction electrons. We obtain effective magnetic moments of 9.73~$\mu_{\rm B}$/Er and 9.31~$\mu_{\rm B}$/Er and paramagnetic Curie-Weiss temperatures of $-7.35$~K and $-0.39$~K for $H~\parallel~ab$-plane and $H~\parallel~c$-axis, respectively. The effective magnetic moment values are close to the Hund's-rule value of 9.59~$\mu_{\rm B}$/Er for free Er$^{3+}$. If we compare the overall magnetic susceptibility with that of HoAl$_2$Ge$_2$, the anisotropy in the paramagnetic state is relatively weaker for ErAl$_2$Ge$_2$. This can be attributed to the crystal electric field (CEF) effect. A similar behaviour is observed in R$_2$CoGa$_8$ (R = Gd-Lu), where the sign of the $B_2^0$ parameter (discussed later) changes as one moves towards the higher rare-earth side~\cite{joshi2008anisotropic}.
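For orientation, the quoted free-ion moment follows directly from the Hund's-rule ground state of Er$^{3+}$ ($g_{\rm J} = 6/5$, $J = 15/2$),
\[\mu_{\rm eff} = g^{}_{\rm J}\sqrt{J(J+1)}\,\mu_{\rm B} = \frac{6}{5}\sqrt{\frac{15}{2}\times\frac{17}{2}}\,\mu_{\rm B} \simeq 9.59~\mu_{\rm B},\]
so the fitted moments deviate from the free-ion value by less than 3\%.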
\paragraph{Magnetization} The isothermal magnetization measured at $T= 2$~K is shown in Fig.~\ref{Fig4} for field parallel to the $ab$-plane and to the $c$-axis. The magnetization increases more rapidly in the $ab$-plane at low fields and shows signs of gradual saturation as the field is increased above 10~kOe. On the other hand, for $H~\parallel~c$-axis the magnetization initially increases less rapidly at low fields and remains below the corresponding $H~\parallel~ab$ value up to a field of 85~kOe, beyond which the magnetization along the $c$-axis crosses that of the $ab$-plane, indicating a change of the easy axis at high magnetic fields. It is interesting to mention here that the calculated magnetization based on a crystalline electric field model (to be discussed later) also exhibits a cross-over, at slightly higher fields, thus qualitatively matching the experimental data, as shown in Fig.~\ref{Fig4}(b). The overall field dependence of the magnetization suggests an antiferromagnetic ordering in ErAl$_2$Ge$_2$. At 140~kOe, the magnetization attains values of $7.43~\mu_{\rm B}$/Er in the $ab$-plane and $7.54~\mu_{\rm B}$/Er along the $c$-axis. These values are lower than the saturation moment of Er$^{3+}$, which is given by $g_{\rm J}J = \frac{6}{5} \times \frac{15}{2} = 9~\mu_{\rm B}$/Er. Apparently higher fields are necessary to attain the full moment of Er$^{3+}$.
\begin{figure}[b]
\includegraphics[width=0.5\textwidth]{Fig4.pdf}
\caption{(Color online) (a) Isothermal magnetization at $T = 2$~K for $H~\parallel~(ab)$-plane and $H~\parallel~c$-axis for fields up to 14~T. (b) Simulated magnetization plots with the CEF parameters (see text for details).}
\label{Fig4}
\end{figure}
\paragraph{Crystal electric field analysis} We have applied the point charge model of crystalline electric field to the magnetic susceptibility data to get a semi-quantitative estimate of the CEF level splitting. The Er-atom in ErAl$_2$Ge$_2$\ unit cell, occupies the $1a$ Wyckoff's position with point symmetry $\bar{3}m$ (Sch\"{o}nflies symbol: $D_{3d})$ which possesses trigonal site symmetry. For the sake of convenience, we have used the CEF Hamiltonian for the hexagonal site symmetry which is given by,
\begin{equation}
\label{eqn4}
\mathcal{H}_{\rm CEF} = B_2^0{\bf O}_2^0 + B_4^0{\bf O}_4^0 + B_6^0{\bf O}_6^0 + B_6^6{\bf O}_6^6,
\end{equation}
where $B_{\rm n}^{\rm m}$ are the crystal field parameters and ${\bf O}_{\rm n}^{\rm m}$ are the Stevens operators~\cite{stevens1952matrix, hutchings1965solid}. The 16-fold $(2J +1; J = 15/2)$ degenerate level of free Er$^{3+}$ ion splits into 8 doublets in the CEF of hexagonal symmetry. The magnetic susceptibility including the molecular field contribution $\lambda$ is given by
\begin{equation}
\label{eqn5}
\chi^{-1} = \chi_{\rm CEF}^{-1} - \lambda_i,
\end{equation}
\begin{figure}[b]
\includegraphics[width=0.5\textwidth]{Fig5.pdf}
\caption{(Color online) The crystal electric field analysis on the inverse susceptibility data. The solid lines are fit to the CEF analysis (refer to text). The estimated energy levels are also shown. }
\label{Fig5}
\end{figure}
where $\chi_{\rm CEF}$ is the CEF susceptibility. The expressions for the magnetic susceptibility and magnetization based on the CEF model are given in Refs.~\cite{das2011magnetic, das2014anisotropic}. In order to analyse the inverse susceptibility in the above CEF model, we first fitted the paramagnetic inverse susceptibility to the modified Curie-Weiss expression by fixing the $\mu_{\rm eff}$ value to 9.59~$\mu_{\rm B}$/Er and subtracted the resulting $\chi_0$ value from the raw susceptibility data. Finally, we plotted the inverse of $(\chi - \chi_0)$ versus temperature and fitted the CEF susceptibility, Eq.~(\ref{eqn5}), to the renormalized data, following the procedure applied earlier to some rare-earth intermetallic compounds~\cite{takeuchi2004magnetism, mondal2018magnetocrystalline}. The solid lines in Fig.~\ref{Fig5} show the fitted inverse magnetic susceptibility based on the CEF model, with the crystal field parameters $B_2^0 = -0.062$~K, $B_4^0 = 0.001$~K, $B_6^0 = -3.9~\times~10^{-5}$~K, and $B_6^6 = 2.0~\times~10^{-5}$~K. The molecular field coefficients are $\lambda_x = -0.1$~mol/emu and $\lambda_z = -0.62$~mol/emu. The negative sign of the molecular field constants, albeit small in magnitude, supports the antiferromagnetic nature of the magnetic ordering. The corresponding energy eigenvalues of the crystal field levels are also shown in Fig.~\ref{Fig5}. In practice, various combinations of the CEF parameters $B_n^m$, which furnish widely different CEF energy eigenvalues, provide a good fit to the susceptibility data. However, the listed CEF parameters also qualitatively explain the magnetization data, as shown in Fig.~\ref{Fig4}(b). The energy eigenvalues depicted in Fig.~\ref{Fig5} provide the closest fit to the Schottky heat capacity (to be discussed later) and can be taken as first-order estimates.
From the mean field theory, one can obtain a rough estimate of the $B_2^0$ parameter which is related to the paramagnetic Weiss temperature and the exchange constant by the following relation~\cite{jensen1991rare}
\begin{equation}
k_{\rm B}\theta_a = \frac{1}{3}J(J+1)\mathcal{J}_{\rm ex}^a + \frac{2}{5}\left(J - \frac{1}{2}\right)\left(J + \frac{3}{2}\right)B_2^0,
\label{eqn6}
\end{equation}
and
\begin{equation}
k_{\rm B}\theta_c = \frac{1}{3}J(J+1)\mathcal{J}_{\rm ex}^c - \frac{4}{5}\left(J - \frac{1}{2}\right)\left(J + \frac{3}{2}\right)B_2^0.
\label{eqn7}
\end{equation}
Assuming an isotropic two-ion interaction $\mathcal{J}_{\rm ex}^a = \mathcal{J}_{\rm ex}^c = \mathcal{J}_{\rm ex}$ and using $J= 15/2$ together with the paramagnetic Curie-Weiss temperatures $\theta_{\rm p}$ obtained above, the value of $B_2^0$ is estimated to be $-0.092$~K. Given the assumptions involved in the point charge model and the limitations of the mean field theory, this value may be considered to be in fairly good agreement with the $B_2^0$ value obtained from the CEF analysis. It thus provides some additional support to our final choice of crystal field parameters and the resulting crystal field levels.
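Explicitly, subtracting Eq.~(\ref{eqn7}) from Eq.~(\ref{eqn6}) eliminates the isotropic exchange term and gives
\[B_2^0 = \frac{5k_{\rm B}\left(\theta^{}_a-\theta^{}_c\right)}{6\left(J-\frac{1}{2}\right)\left(J+\frac{3}{2}\right)} = \frac{5\times\left(-7.35+0.39\right)}{6\times 7\times 9}~{\rm K}\simeq -0.092~{\rm K},\]
where $\theta_a$ and $\theta_c$ are identified with the fitted Curie-Weiss temperatures for $H~\parallel~ab$-plane and $H~\parallel~c$-axis, respectively.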
\subsection{Heat capacity studies}
\begin{figure}[!ht]
\includegraphics[width=0.5\textwidth]{Fig6.pdf}
\caption{(Color online) Temperature variation of heat capacity of ErAl$_2$Ge$_2$ and LaAl$_2$Ge$_2$. Inset shows the temperature variation of the $4f$-derived heat capacity and the corresponding entropy of ErAl$_2$Ge$_2$ and the solid line is the calculated Schottky heat capacity based on CEF calculations.}
\label{Fig6}
\end{figure}
The temperature dependence of the specific heat capacity of single-crystalline ErAl$_2$Ge$_2$\ and of its polycrystalline non-magnetic reference LaAl$_2$Ge$_2$ in the temperature range $2-50$~K is shown in the main panel of Fig.~\ref{Fig6}. The heat capacity of the magnetic compound ErAl$_2$Ge$_2$\ is greater than that of the non-magnetic compound over the entire temperature range, due to the additional $4f$-derived contribution to the heat capacity, $C_{4f}$. A very sharp $\lambda$-like transition is observed at $T_{\rm N} = 4$~K, suggesting the bulk nature of the magnetic ordering in ErAl$_2$Ge$_2$. We have estimated the $4f$-derived contribution to the heat capacity, $C_{4f}$, by subtracting the phononic contribution (taken to be identical to that of LaAl$_2$Ge$_2$) from the heat capacity of ErAl$_2$Ge$_2$, and plotted it in the inset of Fig.~\ref{Fig6}. Above the magnetic transition, a broad peak is observed in the magnetic part of the heat capacity, which is mainly attributed to the Schottky contribution arising from the thermal population of the several low-lying crystalline electric field states as the temperature is increased. Theoretically, the Schottky heat capacity can be calculated using the energy levels obtained from the CEF fitting of the magnetic susceptibility data. The solid curve in the inset of Fig.~\ref{Fig6}, representing the CEF-derived Schottky heat capacity, is in semi-quantitative agreement with the experimentally observed values. An estimate of the entropy change with temperature has also been carried out, and it is shown in the inset of Fig.~\ref{Fig6}. The entropy increases very rapidly and attains a value of nearly 20~J/K$\cdot$mol, suggesting several low-lying CEF states.
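For completeness, the Schottky curve shown in the inset follows from the standard two-moment expression evaluated with the CEF eigenvalues $E_i$ of Fig.~\ref{Fig5} (each doublet carrying the degeneracy $g_i = 2$),
\[C_{\rm Sch}(T)=\frac{R}{\left(k_{\rm B}T\right)^{2}}\left[\frac{\sum_i g^{}_i E_i^{2}\,e^{-E_i/k_{\rm B}T}}{\sum_i g^{}_i\,e^{-E_i/k_{\rm B}T}}-\left(\frac{\sum_i g^{}_i E^{}_i\,e^{-E_i/k_{\rm B}T}}{\sum_i g^{}_i\,e^{-E_i/k_{\rm B}T}}\right)^{2}\right],\]
where $R$ is the gas constant. The corresponding entropy saturates at $R\ln(2J+1)=R\ln 16\simeq 23$~J/K$\cdot$mol, so the value of nearly 20~J/K$\cdot$mol reached by 50~K is consistent with most of the CEF levels lying within the overall splitting.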
\section{Summary}
In summary, we have successfully grown single crystals of ErAl$_2$Ge$_2$\ and studied its anisotropic physical properties. From the susceptibility and magnetization data, we infer that the easy axis of magnetization lies in the $ab$-plane. The anisotropy in the magnetic susceptibility can be qualitatively explained by our CEF analysis. It has been shown that the 16-fold ($2J+1$) degenerate level of the free Er$^{3+}$ ion splits into 8 doublets with an overall separation of just 108~K, with several levels lying at low energies. The estimated crystal field energy levels account semi-quantitatively for the experimentally observed Schottky heat capacity. The electrical resistivity revealed that a superzone gap forms in ErAl$_2$Ge$_2$\ below the antiferromagnetic transition. The exchange interaction, as reflected by the magnetic transition temperatures in the Ho and Er analogs, decreases as one moves towards the higher rare-earth side, in accordance with the de Gennes scaling. It would be interesting to study the next compound in the series, TmAl$_2$Ge$_2$, where, based on the CEF theory, one can expect to see a change of the easy axis compared to ErAl$_2$Ge$_2$; that work is planned as a future study.
\section*{References}
\section{Introduction}
\label{intro}
Achieving high efficiencies and coefficients of performance in thermoelectric nanodevices is one of the main goals of contemporary research on these effects. It is well-known that a strong energy dependence of the electronic transport is a prerequisite for efficient thermoelectric phenomena involving charge carriers, e.g. the Seebeck effect.
In a seminal paper Mahan and Sofo \cite{MS}
proposed (Kedem and Caplan \cite{Kedem} advanced related ideas in 1965)
that high values of the thermoelectric efficiency of a two-terminal electronic device are
obtained when the energy-dependent conductance
has a sharp structure
away from the nearly-common chemical potentials of the leads.
Following this proposal there were quite a few attempts to achieve
effectively narrow electronic bands, especially in nanostructures with transmission resonances
and where the enhanced scattering of phonons at interfaces also reduces the phononic heat
conductance. \cite{Ven} Examples are quantum-well superlattices and quantum wires, \cite{Hicks} and crystalline arrays of quantum dots. \cite {Cai} Likewise, the possibility to approach in nanostructures the limit of reversible processes, for which the efficiency is the highest, has been pursued, \cite{Humphrey} as well as the maximal efficiency at finite output power (see e.g. Ref. \onlinecite{Whitney} and references therein).
A different way to achieve strongly energy-dependent transport is to consider inelastic processes, in which the electrons interchange energy with a boson bath, e.g., photons or electron-hole excitations (see Fig. \ref{fig1}). The boson bath coupled to the electronic system represents the third terminal, making the setup a three-terminal one.
In such devices, in addition to the normal thermoelectric effects in the two electronic terminals, there can arise thermoelectric phenomena due to the energy transfer between the thermal terminal and the electronic ones. Physically, this is because the energy exchange between the electronic and bosonic systems induces electronic (charge and heat) currents.
Various
three-terminal thermoelectric devices and processes have been proposed: setups designed for
cooling-by-heating (CBH) processes, for which the bias voltage is kept zero and one of the electronic terminals is cooled by the thermal bath; \cite{Cleuren} three-terminal junctions based on molecular bridges, where the charge carriers interchange energy with the vibrations of the molecule forming the bridge; \cite{EIA} rectification of thermal fluctuations in chaotic cavities; \cite{Sanchez} two-site nanostructures based on inelastic phonon-assisted hopping; \cite{JHJ,JHJ1} cooling a two-dimensional electron gas
(which plays the role of the thermal bath) at low temperatures by elastic electron transitions to and from the leads; \cite{Edwards,Prance}
quantum ratchet converting the nonequilibrium noise of a nearby quantum point contact to dc current; \cite{Khrapai}
carbon nanotubes designed to extract energy from a discrete local oscillator at ultralow temperatures; \cite{Zippilli}
junctions connected to several electronic terminals; \cite{Sivan,Brandner,Mazza} and
cooling the vibrational motion by charge current. \cite{arrachea}
For a two-terminal system
converting thermal energy into work
or conversely
cooling one of the terminals by investing work, the maximal efficiency is achieved in the reversible Carnot
thermodynamic cycle.
When there are more than two terminals, one may define various efficiencies (or equivalently, coefficients of performance) and explore their limits in a reversible process. In Sec. \ref{YI} we examine, on general thermodynamic grounds, two cooling scenarios
feasible in a three-terminal setup and analyze their efficiencies for zero entropy production. Section \ref{general} defines the currents and the thermodynamic forces driving them, and uses those to re-express the coefficients of performance introduced in Sec. \ref{YI}. In these two sections we examine the coefficients of performance achieved in the corresponding reversible processes.
Things become even more exciting once a specific model is introduced (in Sec. \ref{model}), and the efficiency is analyzed allowing for `parasitic' processes in the junction, represented in our case by (unavoidable) phonon heat conductances (see Sec. \ref{IR}). We find that the three-terminal setup has interesting features. (a) When used to cool one of the electronic terminals by investing work (or alternatively an electric power) and heat from the boson bath, the working range of the device increases as the latter increases; (b) Depending on phonon conductances, the coefficient of performance for joint cooling and power production may be enhanced compared to the situation where work is invested. We summarize our results in Sec. \ref{SUM}.
\section{General thermodynamic considerations}
\label{YI}
The ubiquitous electronic thermoelectric nanodevice consists of a junction bridging two electronic terminals held at different temperatures and chemical potentials. We denote those by $ \mu_{L}$ and $T_{L}$ for the ``left" electronic terminal, and $ \mu_{R}$ and $T_{R}$ for the ``right" one. Our three-terminal setup includes in addition a thermal terminal
supplying bosons (e.g., phonons, photons, electron-hole excitations), thus allowing for inelastic transport processes of the charge carriers moving in-between the electronic terminals.
The boson bath is kept at yet another temperature, denoted $T_{T}$ (see
Fig. \ref{fig1}).
We begin by examining the three-terminal efficiency or coefficient of performance (COP) from a thermodynamic point of view, adopting a configuration similar to the cooling-by-heating (CBH) one. \cite{Cleuren}
We choose to cool the left, $L$, terminal and dump the heat into the right, $R$, one, taking
\begin{align}
T^{}_L < T^{}_R\ .
\end{align}
Quite generally, a ``double-driving" can be exploited, comprising both invested work $W_{\rm in}$ (supplied for instance, by an electric current)
and heat $Q_{T}$ (produced by the thermal reservoir at temperature $T_{T}$), to take heat $Q_{L}$ from the $L$ terminal and dump heat $-Q_{R}$ into the $R$ one. \cite{sign}
In the usual two-terminal thermoelectric cooling scenario $Q_{T}=0$ and the cooling is accomplished solely by the work $W_{\rm in}$.
The efficiency of this process when it is reversible, $\eta_{\rm cbe}$, is
\begin{align}
\eta^{}_{\rm cbe}=T^{}_{L}/(T^{}_{R}-T^{}_{L})\ .
\label{etacbe}
\end{align}
The other particular scenario is when no work is invested, $W_{\rm in}=0$, and cooling is achieved by $Q_{T}$, i.e., the CBH process, with the reversible efficiency $\eta_{\rm cbh}$
\begin{align}
\eta^{}_{\rm cbh}=\eta^{}_{\rm cbe}[1-(T_{R}/T_{T})]=\frac{1-(T_{R}^{}/T^{}_{T})}{(T^{}_{R}/T^{}_{L})-1}\ .
\label{etacbh}
\end{align}
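This reversible limit can be understood as a two-stage process: a Carnot engine operating between $T_{T}$ and $T_{R}$ converts the heat $Q_{T}$ into the work $W=Q_{T}(1-T_{R}/T_{T})$, which in turn drives a Carnot refrigerator between $T_{L}$ and $T_{R}$, extracting $Q_{L}=\eta^{}_{\rm cbe}W$ from the cold terminal. Hence
\[\eta^{}_{\rm cbh}=\frac{Q^{}_{L}}{Q^{}_{T}}=\eta^{}_{\rm cbe}\Big(1-\frac{T^{}_{R}}{T^{}_{T}}\Big)\ ,\]
in accordance with Eq.~(\ref{etacbh}).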
To interpolate between these two limits we introduce the parameter
$\alpha = W_{\rm in}/Q_T$; in the first scenario $\alpha =\infty$ and in the second $\alpha =0$. In general,
$\alpha$ depends
on the
driving forces (see Secs. \ref{model} and \ref{IR}).
\begin{figure}[ht]
\centering
\includegraphics[width=2.in]{fig1.pdf}
\caption{(Color online) The three-terminal setup. The electronic reservoirs on the left and on the right are characterized by their respective temperatures, $T_{L}$ and $T_{R}$, and chemical potentials, $\mu_{L}$ and $\mu_{R}$. The thermal (boson) terminal is held at a third temperature, $T_{T}$. The thin (red) lines indicate the bosons' flux exchanged between the electronic system and the thermal boson bath.}
\label{fig1}
\end{figure}
The cooling efficiency for the joint process, i.e., the COP, is defined by
\begin{align}
\eta^{}_{a} = \frac{Q^{}_L}{W_{\rm in}^{} + Q^{}_T} = \frac{Q_L}{-Q_R - Q_L}\ ,
\label{eta}
\end{align}
where the second equality results from energy conservation
\begin{align}
-Q^{}_R = Q^{}_L + W^{}_{\rm in} +Q^{}_T\ .
\label{ec}
\end{align}
Since the reservoirs are each in equilibrium, the corresponding entropies are
\begin{align}
S^{}_\ell = Q^{}_\ell/T^{}_\ell\ , ~~ (\ell= L,\ R,\ T)\ ,
\label{sd}
\end{align}
and the total entropy increment due to the process,
$\Delta S$, is
\begin{align}
\Delta S=S^{}_{L}+S^{}_{T}+S^{}_{R}\ .
\label{Ds}
\end{align}
In a reversible process $ \Delta S=0$; the corresponding COP, $\eta^{\rm rev}_{a}$, is then [using Eqs. (\ref{ec}), (\ref{sd}), and (\ref{Ds})]
\begin{align}
\eta^{\rm rev}_{a} =\eta_{\rm cbe} \Big [1 - \frac{T^{}_R}{T^{}_T (1+\alpha)}\Big ]\ .
\label{3t eff}
\end{align}
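For completeness, Eq.~(\ref{3t eff}) follows by writing the reversibility condition $\Delta S=0$ of Eq.~(\ref{Ds}) as $Q^{}_{L}/T^{}_{L}+Q^{}_{T}/T^{}_{T}+Q^{}_{R}/T^{}_{R}=0$ and eliminating $Q^{}_{R}$ with the help of Eq.~(\ref{ec}), which gives
\[Q^{}_{L}\Big(\frac{1}{T^{}_{L}}-\frac{1}{T^{}_{R}}\Big)=\frac{Q^{}_{T}}{T^{}_{R}}\Big(1+\alpha-\frac{T^{}_{R}}{T^{}_{T}}\Big)\ ;\]
inserting this into Eq.~(\ref{eta}), with $W^{}_{\rm in}+Q^{}_{T}=(1+\alpha)Q^{}_{T}$, reproduces Eq.~(\ref{3t eff}).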
As expected,
large values of $\alpha$ give $\eta_{\rm cbe}$, Eq. (\ref{etacbe}), while $\alpha = 0$ yields
the maximal efficiency of the cooling-by-heating process, Eq. (\ref{etacbh}).
Any finite positive $\alpha$ gives $\eta^{\rm rev} _{a}< \eta_{\rm cbe}$, as follows trivially from Eq. (\ref{3t eff}). So, we lose efficiency by adding the CBH power. On the other hand, $\eta ^{\rm rev}_{a}> \eta_{\rm cbh}$, so we gain efficiency compared to a pure CBH process.
The advantages of the double driving are that we get more cooling power, that the working region of the device increases (see Secs. \ref{model} and \ref{IR}) and that for
realistic purposes, one could use the free sun as the thermal terminal, \cite{Cleuren} which reduces the efficiency by a few percent only.
Next we consider the case where part of the invested heat $Q_{T}$ from the thermal terminal is exploited to cool the left electronic terminal and part of it is converted to useful work, $W_{\rm out}$. Energy conservation in this case yields
$Q_{T}+Q_{L}=W_{\rm out}-Q_{R}$, while the efficiency is
\begin{align}
\eta^{}_{b}=\frac{Q^{}_{L}+W^{}_{\rm out}}{Q^{}_{T}}\ .
\label{etab2}
\end{align}
For a reversible process, the efficiency of the thermal terminal, which supplies both the useful work and the cooling of the left electronic terminal is $1-T_{R}/T_{T}$, i.e.,
\begin{align}
Q^{}_{T}=\frac{W^{}_{\rm out}+W^{}_{\rm c}}{1-T^{}_{R}/T^{}_{T}}\ ,
\label{ref1}
\end{align}
where $W_{c}$ is the work used for cooling.
As for a reversible process the efficiency of cooling is $\eta_{\rm cbe}$, Eq. (\ref{etacbe}),
it follows that
\begin{align}
Q^{}_{L}=\frac{W^{}_{\rm c}}{(T^{}_{R}/T^{}_{L})-1}\ .
\label{ref2}
\end{align}
Inserting \cite{referee} the relations (\ref{ref1}) and (\ref{ref2}) into Eq. (\ref{etab2})
yields
\begin{align}
\eta^{\rm rev}_{b}=\Big (1-\frac{T^{}_{R}}{T^{}_{T}}\Big )\left[w+\eta^{}_{\rm cbe}\left(1-w\right)\right]\ ,
\end{align}
where $w$ is the fraction of the work produced in the process,
\begin{align}
w=\frac{W^{}_{\rm out}}{W^{}_{\rm out}+W^{}_{\rm c}}\ .\label{www}
\end{align}
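Explicitly, inserting Eqs.~(\ref{ref1}) and (\ref{ref2}) into Eq.~(\ref{etab2}) gives
\[\eta^{\rm rev}_{b}=\Big(1-\frac{T^{}_{R}}{T^{}_{T}}\Big)\frac{W^{}_{\rm out}+\eta^{}_{\rm cbe}W^{}_{\rm c}}{W^{}_{\rm out}+W^{}_{\rm c}}=\Big(1-\frac{T^{}_{R}}{T^{}_{T}}\Big)\left[w+\eta^{}_{\rm cbe}\left(1-w\right)\right],\]
which interpolates linearly in $w$ between the pure-cooling ($w=0$) and pure work-production ($w=1$) limits.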
Remarkably enough, the reversible efficiency increases with $w$ when $\eta_{\rm cbe}<1$ and decreases when $\eta_{\rm cbe}> 1$. We return to this point in Sec. \ref{general}. Like the ratio $\alpha=W_{\rm in}/Q_{T}$, $w$ also depends
on the
driving forces (see Secs. \ref{model} and \ref{IR}).
For completeness, we recast the above reasoning into the configuration in which the thermal terminal is cooled by both work done (e.g., by electrical current between the left and the right terminals, see Ref. \onlinecite{arrachea}) and a heat flow between $L$ and $R$, (here we take $T_T<T_R<T_L$).
The coefficient of performance for this process, [replacing Eq. (\ref{eta})] is
\begin{align}
\widetilde{\eta }= \frac{Q^{}_T}{W^{}_{\rm in} + Q^{}_L} = \frac{Q_T}{-Q^{}_R - Q^{}_T}\ ,
\label{eta'}
\end{align}
and for a reversible process [i.e., $ \Delta S=0$, see Eq. (\ref{Ds})]
\begin{align}
\widetilde{\eta}^{\rm rev} = \eta' \Big [1- \frac{T^{}_{R}}{T^{}_{L} (1+\alpha ')}\Big ]\ , \label{3t eff'}
\end{align}
where $\alpha '= W_{\rm in}/Q_L$ and
$\eta'=T_{T}/(T_{R}-T_{T})$ is the usual Carnot efficiency for cooling a terminal at temperature $T_{T}$ by moving heat to a terminal having temperature $T_{R}$,
using pure work.
When $\alpha ' \rightarrow \infty$ the reversible-process efficiency becomes simply $\eta'$, while for $\alpha ' \rightarrow 0$ the expression in the square brackets in Eq. (\ref{3t eff'}) becomes $1-(T_{R}/T_{L})$. This is just the Carnot efficiency for converting the heat moved between the left and the right terminals to work. Equation (\ref{3t eff'}) is then the reversible efficiency for CBH, as found and explained by the first of Refs. \onlinecite{Cleuren} [see their Eq. (11)].
The above is very general, {\em beyond linear-response transport}. It is straightforward to introduce the (negative) correction to the efficiency
when the total entropy production (waste) is positive, as
discussed in Sec. \ref{IR}.
\section{Currents and forces}
\label{general}
Having set the thermodynamic basis for defining efficiencies (or coefficients of performance) in a three-terminal device,
we proceed to examine those from another point of view, by considering
the currents flowing in the system and the thermodynamic forces driving them.
The thermodynamic driving forces and the currents conjugate to them can be defined unambiguously by considering the entropy production of the device.
In terms of the heat/entropy production in the thermal terminal and in the electronic $L$ and $R$ terminals, $\dot{Q}_{T}$, $\dot{Q}_{L}$, and $\dot{Q}_{R}$ respectively, the entropy production is [{\it cf.} Eqs. (\ref{sd}) and (\ref{Ds})]
\begin{align}
\Delta \dot{S}=\frac{\dot{Q}^{}_{T}}{T^{}_{T}}+\frac{\dot{Q}_{L}}{T^{}_{L}}+\frac{\dot{Q}^{}_{R}}{T^{}_{R}}\ .
\label{EP}
\end{align}
The rate of the heat produced in each of the electronic reservoirs is
$\dot{Q}_{\ell}=\dot{E}_{\ell}-\mu_{\ell}\dot{N}_{\ell}$, $\ell=L $ or $R$ ($E_{\ell}$ is the total energy of the $\ell-$th electronic reservoir, and $N_{\ell}$ is the number of charge carriers there). Energy conservation implies that
\begin{align}
\dot{Q}_{T}+\dot{Q}^{}_{L}+\dot{Q}^{}_{R}=-\mu^{}_{L}\dot{N}^{}_{L}-\mu^{}_{R}\dot{N}^{}_{R}\ ,
\label{EC}
\end{align}
while charge conservation gives $\dot{N}_{L}+\dot{N}_{R}=0$.
Since the particle current emerging from the left electronic terminal is
\begin{align}
J^{}_{L}=-\dot{N}^{}_{L}\ ,
\label{JL}
\end{align}
it is seen that the right-hand side of Eq. (\ref{EC}) is just the Joule heating in the system, $J_{L}(\mu_{L}-\mu_{R})$.
Exploiting the conservation laws,
the entropy production relation Eq. (\ref{EP})
becomes
\begin{align}
T^{}_{R}\Delta\dot{S}=J^{}_{L}(\mu^{}_{L}-\mu^{}_{R})+J^{Q}_{L}\Big (1-\frac{T^{}_{R}}{T^{}_{L}}\Big )+J^{Q}_{T}\Big (1-\frac{T^{}_{R}}{T^{}_{T}}\Big )\ .
\label{EP1}
\end{align}
Here we have chosen the temperature of the right electronic bath as the reference temperature, and introduced the heat currents leaving the left terminal
\begin{align}
J^{Q}_{L}=-\dot{Q}^{}_{L}\ ,
\label{JLQ}
\end{align}
and the one leaving the thermal one
\begin{align}
J^{Q}_{T}=-\dot{Q}_{T}\ .
\label{JTQ}
\end{align}
As usual, the entropy production Eq. (\ref{EP1}) appears as a (scalar) product of a vector consisting of the three currents, $J^{}_{L}$, $J^{Q}_{L}$, and $J^{Q}_{T}$, with a vector comprising the driving forces,
the electric one,
\begin{align}
\mu^{}_{L}-\mu_{R}^{}=|e|V\ ,
\label{V}
\end{align}
and the two thermal ones, given by the temperature difference across the electronic junction and between the electronic system and the boson bath. We shall confine ourselves to the configuration where the left electronic terminal is to be cooled and will also use the thermal terminal as the chief power supplier. Then the lowest temperature of the three is $T_{L}$ and the highest one is $T_{T}$. Accordingly, the thermal driving forces are
$(T_{R}/T_{L})-1$ and $1-(T_{R}/T_{T})$.
One can imagine various scenarios for cooling the $L-$electronic terminal in the three-terminal junction depicted in Fig. \ref{fig1}. Here, as in Sec. \ref{YI}, we focus on two specific situations. We present for each of them the relevant coefficient of performance (COP) and then examine the best value it can have, which is obtained when the cooling and the accompanying processes are reversible, i.e., the entropy production vanishes. The power(s) obtained in such a process is (are) unfortunately vanishingly small. Nonetheless, it is instructive to investigate these upper limits that are the target of many studies in thermoelectricity (an example is the proposal of Ref. \onlinecite{MS}). More realistic configurations and the corresponding coefficients of performance will be considered in Sec. \ref{IR}.
(a) As in Sec. \ref{YI}, the $L-$electronic terminal can be cooled by investing power extracted from the thermal terminal {\em and} an electric power. The
COP of this process, denoted $\eta_{a}$, is the ratio of the thermal power gained to the two invested powers, \cite{com1}
\begin{align}
\eta^{}_{a}=\frac{J^{Q}_{L}}{|e|J^{}_{L}V+J^{Q}_{T}}\ .
\label{etaa}
\end{align}
The COP attains its highest possible value in a reversible process, for which the entropy production vanishes. Indeed, upon using Eq. (\ref{EP1}) for $\Delta\dot{S}=0$
we find that in a reversible process
\begin{align}
\eta^{\rm rev}_{a}=\frac{1}{(T^{}_{R}/T^{}_{L})-1}\times\frac{|e|J^{}_{L}V+[1-(T^{}_{R}/T^{}_{T})]J^{Q}_{T}}{|e|J^{}_{L}V+J^{Q}_{T}}\ .
\label{etaar}
\end{align}
When the electric power vanishes Eq. (\ref{etaar}) reproduces the COP of the `cooling-by-heating' process in the reversible limit, \cite{Cleuren} given in Eq. (\ref{etacbh}),
while in the more mundane scenario of cooling by investing only electric power, the COP is $\eta_{\rm cbe}$ given by Eq. (\ref{etacbe}).
Obviously, when the entropy production vanishes (or is rather small) the cooling-by-heating process is less effective than cooling by electric power; the joint three-terminal COP lies in-between these two limits,
\begin{align}
\eta^{}_{\rm cbh}<\eta^{\rm rev}_{a}<\eta^{}_{\rm cbe}\ .
\label{ine}
\end{align}
The best performance
in a reversible process is thus reached upon cooling by investing electric power. However, as compared to the reversible cooling-by-heating option proposed by Cleuren {\it et al.} \cite{Cleuren} for which the electric voltage vanishes, investing electric power in addition to the thermal one improves the COP. As mentioned, the three-terminal arrangement also extends the range of the `working condition' \cite{com1} of the device as compared to a two-terminal setup, see Sec. \ref{IR}.
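As a quick numeric illustration of the ordering (\ref{ine}) (Python sketch; the temperatures and powers are illustrative, and we assume $\eta_{\rm cbe}=[(T_{R}/T_{L})-1]^{-1}$ for the two-terminal Carnot COP): Eq. (\ref{etaar}) is a weighted average of the pure-thermal and pure-electric limits, with the invested powers as weights.

```python
def eta_a_rev(P_el, P_T, T_L, T_R, T_T):
    """Reversible COP eta_a of Eq. (etaar), with P_el = |e| J_L V the
    invested electric power and P_T = J_T^Q the invested thermal power."""
    eta_cbe = 1.0 / (T_R / T_L - 1.0)
    return eta_cbe * (P_el + (1.0 - T_R / T_T) * P_T) / (P_el + P_T)

T_L, T_R, T_T = 1.0, 1.5, 10.0
eta_cbe = 1.0 / (T_R / T_L - 1.0)          # cooling by electric power only
eta_cbh = (1.0 - T_R / T_T) * eta_cbe      # cooling by heating only
mixed = eta_a_rev(P_el=1.0, P_T=1.0, T_L=T_L, T_R=T_R, T_T=T_T)
```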
(b) The left electronic terminal is cooled and at the same time electric power is being produced. Accordingly, the ratio of the powers gained to that invested, i.e., the COP $\eta_{b}$, is
\begin{align}
\eta^{}_{ b}=\frac{J^{Q}_{L}+|eVJ^{}_{L}|}{J^{Q}_{T}}\ .
\label{etab}
\end{align}
When the process is reversible
the COP of this process is
\begin{align}
\eta^{\rm rev}_{b}=\Big (1-\frac{T^{}_{R}}{T^{}_{T}}\Big )\frac{J^{Q}_{L}+|eVJ^{}_{L}|}{J^{Q}_{L}[(T^{}_{R}/T^{}_{L})-1]
+|eVJ^{}_{L}|}\ .
\label{etabr}
\end{align}
We see that the reversible value of the COP when the voltage vanishes is given by Eq. (\ref{etacbh}). On the other hand, if the electronic left bath is not cooled at all, then the device works as a `solar cell' (in the case where the thermal terminal is the sun), yielding electric power in response to the temperature difference between the electronic system and the thermal terminal. The best value of the COP for this configuration is the textbook Carnot efficiency $\eta_{\rm C}$
\begin{align}
\eta^{}_{\rm C}=1-T^{}_{R}/T^{}_{T}\ .
\label{Carnot}
\end{align}
As in case (a) above, $\eta^{\rm rev}_{b}$ is bounded in-between its two limiting values. But in contrast to case (a), which of the two is the bigger and which is the smaller depends on the temperature difference across the electronic reservoirs. When that temperature difference is significant,
$(T_{R}/T_{L})-1> 1$, i.e., $\eta_{\rm cbe}< 1$ [Eq. (\ref{etacbe})]
we find
\begin{align}
\eta^{}_{\rm C}>\eta^{\rm rev}_{b}> \eta^{}_{\rm cbh}\ ,\ \ \ \eta^{}_{\rm cbe}< 1\ .
\label{ine1}
\end{align}
When the temperature difference across the electrons is small then the inequality is reversed, i.e.,
\begin{align}
\eta^{}_{\rm cbh}>\eta^{\rm rev}_{b}> \eta^{}_{\rm C}\ ,\ \ \ \eta^{}_{\rm cbe}> 1\ .
\label{ine2}
\end{align}
At the crossing point, $\eta^{}_{\rm cbe}=1$, $\eta_{\rm cbh}=\eta_{\rm C}$, and the two bounds merge.
Examining Eqs. (\ref{ine1}) and (\ref{ine2}), we see that
when $\eta_{\rm cbe}<1$ it ``pays" to produce work (or power) in addition to cooling, while in the reverse case it does not.
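The interchange of the bounds in Eqs. (\ref{ine1}) and (\ref{ine2}) can be verified numerically from Eq. (\ref{etabr}) (Python sketch; the temperatures, the cooling power $J^{Q}_{L}$, and the harvested power $|eVJ_{L}|$ are illustrative choices):

```python
def eta_b_rev(Q_L, P_el, T_L, T_R, T_T):
    """Reversible COP eta_b of Eq. (etabr): cooling power Q_L = J_L^Q and
    harvested electric power P_el = |e V J_L|, both paid by the thermal bath."""
    return (1.0 - T_R / T_T) * (Q_L + P_el) / (Q_L * (T_R / T_L - 1.0) + P_el)

T_T = 10.0
# large temperature difference across the electrons, eta_cbe < 1:
hi_dT = eta_b_rev(Q_L=1.0, P_el=1.0, T_L=1.0, T_R=2.75, T_T=T_T)
bounds_hi = ((1.0 - 2.75 / T_T) / 1.75, 1.0 - 2.75 / T_T)   # (eta_cbh, eta_C)
# small temperature difference, eta_cbe > 1:
lo_dT = eta_b_rev(Q_L=1.0, P_el=1.0, T_L=1.0, T_R=1.5, T_T=T_T)
bounds_lo = (1.0 - 1.5 / T_T, (1.0 - 1.5 / T_T) / 0.5)      # (eta_C, eta_cbh)
# crossing point eta_cbe = 1 (T_R/T_L = 2): the two bounds merge
merged = eta_b_rev(Q_L=1.0, P_el=1.0, T_L=1.0, T_R=2.0, T_T=T_T)
```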
\section{An example of a three-terminal thermoelectric device}
\label{model}
In order to consider the coefficients of performance away from reversibility one needs to invoke a specific system and find for it the currents flowing in the setup in response to the driving forces. In the linear-response regime the relation between two vectors, the one of the currents and the one of the driving forces [{\it cf.} Eq. (\ref{EP1}) and the discussion around it] can be written in a matrix form
\begin{align}
\left [\begin{array}{c}|e|J^{}_{L} \\ \\ J^{Q}_{L} \\ \\ J^{Q}_{T} \end{array}\right ]=\frac{1}{T^{}_{R}}{\cal M}\left [\begin{array}{c} V \\ \\
-[(T^{}_{R}/T^{}_{L})-1]\\ \\ 1-(T^{}_{R}/T^{}_{T})\end{array}\right ]\ ,
\label{ON}
\end{align}
where the (3$\times$3) matrix ${\cal M}$ comprises the transport coefficients. Within the linear-response regime ${\cal M}$
does not depend on the driving forces and is determined by the thermal-equilibrium
properties of the setup. It then obeys the Onsager relations; for instance, for a system invariant to time-reversal--the case studied here--${\cal M}$ is symmetric.
One notes that the matrix ${\cal M}$ is
nonnegative definite, to ensure the positiveness, or vanishing, of the entropy production $\Delta\dot{S}$,
Eq. (\ref{EP1}). The matrix ${\cal M}$ may be singular, for example, when the first two rows are proportional to one another and then the two corresponding currents (for example, the electronic charge and thermal currents) are proportional to each other. This special situation, which was termed by
Kedem and Caplan\cite{Kedem} `strong coupling', is also behind the mechanism of Mahan and Sofo, \cite{MS} involving transport in a very narrow energy band.
This yields high values of the COP. In the case of Ref. \onlinecite{MS}, each transferred electron carries the same energy and heat. We will come back to this point below. It can also happen that the third row is proportional to the first one, and then all three currents, in the three-terminal case, are proportional to each other, which may be termed `full strong coupling'. The proportionality coefficients are determined by the specific model at hand.
We obtain the transport coefficients, i.e., the matrix ${\cal M}$, for the two-level model of Ref. \onlinecite{JHJ}, depicted in Fig. \ref{fig2}. The model exploited in Ref. \onlinecite{Cleuren} has two such two-level pairs, whose effects add in the cooling and subtract in the electrical current, causing the latter to vanish.
The setup displayed in Fig. \ref{fig2}
models a small one-dimensional nanosystem in which thermoelectric transport takes place mainly via inelastic phonon (or, more generally, boson) assisted hopping, i.e., it is assumed that this inelastic hopping is the strongly-dominant electronic channel.
This will be the case for temperatures above a certain threshold temperature, denoted in Ref. \onlinecite{JHJ} by $T_{x}$. Below that temperature
the transport is dominated by the elastic tunneling conductance. By equating the latter to the boson-assisted hopping conductance, one finds that $T_{x}$
is determined
by the energies $E_{1}$ and $E_{2}$
of the localized levels, times the ratio of the localization length of the wave functions there to the junction linear dimension. \cite{JHJ}
Any relevant inelastic transmission will be exponentially small when the temperature approaches zero.
As in previous publications on this issue \cite{Cleuren}
we discuss noninteracting quasiparticles. The main effect of the interactions is expected to be a renormalization of the model parameters, e.g., changing the energies $E_{1}$ and $E_{2}$ (see Fig. \ref{fig2})
as in the theory of Pollak \cite{Pollak} and Efros-Shklovskii \cite{ES} for the density of states.
\begin{figure}[ht]
\centering
\includegraphics[width=2.2in]{3T_EFF_FIG2.pdf}
\caption{ (Color online) Two electronic reservoirs are characterized by their respective electrochemical potentials, $\mu_{L}$ and $\mu_{R}$, and temperatures, $T_{L}$ and $T_{R}$.
The electronic transport between the two is accomplished via
the two localized levels
of energies $E_{1}$ and $E_{2}$ that are well-coupled (elastically) each to its nearby reservoir.
The left reservoir is cooled, i.e., $T_{L}<T_{R}$. The thermal reservoir (held at temperature $T_{T}$) supplies the required energy for the transport.
}
\label{fig2}
\end{figure}
In the two-level model of Ref. \onlinecite{JHJ} the transfer of the charge carriers in-between the left and right electronic terminals takes place via two localized levels, of energies $E_{1}$ and $E_{2}$ ($>E_{1}$, for concreteness).
For instance, for an electron transferred from left to right, the boson bath gives an energy
$-E_{1}$ ($E_{2}$) to the left (right) lead
and thus the bosons transfer the energy
\begin{align}
\omega =E^{}_{2}-E^{}_{1}
\label{om}
\end{align}
to the electrons. A net energy of
\begin{align}
\overline{E}=\frac{1}{2}(E^{}_{1}+E^{}_{2})
\label{eb}
\end{align}
is transferred across the electronic system, from left to right.
It follows that (see Ref. \onlinecite{JHJ} for the details of the calculation), barring, for the time being, any heat conduction by phonons present in the system, the electronic heat current is $J_{L}^{Q}=\overline{E}J^{}_{L}$ and the heat current flowing between the thermal bath and the electronic system is $J^{Q}_{T}=\omega J^{}_{L}$. The particle current $J_{L}$ is proportional to the hopping conductance of the device.
Thus, when phonon conductances are ignored, the setup is in the `full strong-coupling limit', \cite{Kedem} the
matrix ${\cal M}$ is singular, and the entropy production vanishes.
In reality, the above picture has to be generalized by adding the elastic electronic transmission and, more importantly,
the phonon heat conductances: The heat current $J_{L}^{Q}$ should be augmented by the phonons' heat flow between the two electronic terminals, and the current $J_{T}^{Q}$ by the phonons' flow between the boson bath and the electronic terminals. \cite{com2} We denote the phonon heat conductance of the electronic system by $K_{\rm P}$, and between the electronic system and the thermal terminal by $K_{\rm PP}$. For simplicity, we confine ourselves to the regime where the elastic transport of the electrons may be ignored. \cite{JHJ} Under these circumstances,
the matrix ${\cal M}$ giving the relations among the currents and the driving forces [see Eq. (\ref{ON})],
is
\begin{align}
\frac{1}{T_{R}^{}}{\cal M}=G^{}_{\rm in}&\left[\begin{array}{ccc}1 \ \ &\overline{E}/|e|\ \ &\omega/|e| \\
\overline{E}/|e|\ \ &(\overline{E}/e)^{2}\ \ &\overline{E}\omega/e^{2} \\\omega/|e|\ \ &\overline{E}\omega/e^{2}\ \ & (\omega/e)^{2}\end{array}\right ]\nonumber\\
&+\left [\begin{array}{ccc}0\ \ &0\ \ &0 \\ 0\ \ & K^{}_{\rm P}\ \ &0 \\ 0\ \ &0\ \ &K^{}_{\rm PP}\end{array}\right ]\ .
\label{ONm}
\end{align}
Here $G_{\rm in} $ is the hopping conductance of the electronic junction. The
first term on the right-hand side of Eq. (\ref{ONm}) contains the strongly-coupled part of the transport matrix. The second term there, describing the parasitic phonon conductances $K^{}_{\rm P}$ and $K^{}_{\rm PP}$,
spoils this `strong coupling' property.
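The structure of Eq. (\ref{ONm}) — a rank-one, fully strongly-coupled part plus diagonal parasitic conductances — can be made concrete with a small sketch (Python; the parameter values are arbitrary). It checks that ${\cal M}$ is symmetric, yields a nonnegative quadratic form (positive entropy production), and becomes singular when $K_{\rm P}=K_{\rm PP}=0$.

```python
def M_over_TR(G, Ebar, omega, K_P, K_PP, e=1.0):
    """Transport matrix M/T_R of Eq. (ONm): G_in times the rank-one
    strong-coupling part, plus parasitic phonon conductances on the diagonal."""
    v = [1.0, Ebar / e, omega / e]            # strong-coupling direction
    M = [[G * v[i] * v[j] for j in range(3)] for i in range(3)]
    M[1][1] += K_P
    M[2][2] += K_PP
    return M

def det3(M):
    """Determinant of a 3x3 matrix, expanded along the first row."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def quad_form(M, x):
    """x^T M x -- proportional to the entropy production for force vector x."""
    return sum(M[i][j] * x[i] * x[j] for i in range(3) for j in range(3))

M0 = M_over_TR(G=1.0, Ebar=0.5, omega=1.0, K_P=0.0, K_PP=0.0)   # full strong coupling
M1 = M_over_TR(G=1.0, Ebar=0.5, omega=1.0, K_P=0.3, K_PP=0.2)   # with parasitics
```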
The working condition \cite{com1} for cooling the left electronic terminal requires that the heat current emerging from it be positive. In our model, this amounts to
\begin{align}
&\frac{|e|V}{\overline{E}}-\Big (\frac{T^{}_{R}}{T^{}_{L}}-1\Big )+\frac{\omega }{\overline{E}}\Big (1-\frac{T^{}_{R}}{T^{}_{T}}\Big )
\geq \frac{e^{2}K^{}_{\rm P}}{\overline{E}^{2}G^{}_{\rm in}}\Big (\frac{T^{}_{R}}{T^{}_{L}}-1\Big )\ .
\label{wc}
\end{align}
Note that the phonon thermal conductance on the right-hand side, $K_{\rm P}$, is scaled by the {\em ``bare"} electronic thermal conductance of the junction, $G_{\rm in}\overline{E}^{2}/e^{2}$ [the transport coefficient contained in the $22-$element of the matrix ${\cal M}$, see Eq. (\ref{ONm})].
It is illuminating to re-examine the entropy production Eq. (\ref{EP1})
in the framework of our model in conjunction with the working condition for cooling the left electronic terminal. Using Eq. (\ref{ONm}) in the relation (\ref{ON}), and inserting the resulting currents into Eq.(\ref{EP1}) yields
\begin{align}
T^{}_{R}\Delta\dot{S}&=\frac{G^{}_{\rm in}}{e^{2}}\Big [|e|V-\overline{E}\Big (\frac{T^{}_{R}}{T^{}_{L}}-1\Big )+\omega \Big (1-\frac{T^{}_{R}}{T^{}_{T}}\Big )\Big ]^{2}\nonumber\\
&+K^{}_{\rm P}\Big (\frac{T^{}_{R}}{T^{}_{L}}-1\Big )^{2}+K^{}_{\rm PP}\Big (1-\frac{T^{}_{R}}{T^{}_{T}}\Big )^{2}\ .
\label{EPm}
\end{align}
As might have been expected, the ``parasitic" heat flows carried by the phonons [the last two terms in Eq. (\ref{EPm})] make the entropy production positive and the thermoelectric transport {\em irreversible.}
When these are ignored (or are very small) and our setup approaches the strong-coupling limit then the cooling process will be reversible provided that
$
|e|V/\overline{E}-[(T^{}_{R}/T^{}_{L})-1]+(\omega /\overline{E})[1-(T^{}_{R}/T^{}_{T})]=0$, i.e., when the heat flowing out of the left electronic terminal vanishes. In other words, when the cooling process is {\em reversible} it yields, as is always the result, {\em zero output power}.
\section{Irreversible coefficients of performance}
\label{IR}
Exploiting the transport coefficients of the specific device described in Sec. \ref{model},
the electric current leaving the left terminal is
\begin{align}
|e|J^{}_{L}=\frac{G^{}_{\rm in}\overline{E}}{|e|}\Big [\frac{|e|V}{\overline{E}}+1-
\frac{T^{}_{R}}{T^{}_{L}}+\frac{\omega}{\overline{E}}\Big (1-\frac{T^{}_{R}}{T^{}_{T}}\Big )\Big ]\ .
\end{align}
Likewise,
the electronic heat current from that terminal is
\begin{align}
J^{Q}_{L}=\frac{\overline{E}}{|e|}(|e|J^{}_{L})-
K^{}_{\rm P}\Big (\frac{T^{}_{R}}{T^{}_{L}}-1\Big )\ .
\end{align}
The working condition for the left terminal to be cooled is the positiveness of $J_{L}^{Q}$; therefore $J_{L}$ is necessarily positive. This means that as long as the bias voltage is positive, electric power is being {\em invested} in the system, while when $V$ is negative (i.e., the electric current flows against the voltage) the device {\em produces} electric power. Thus, the COP is given by $\eta_{a}$, Eq. (\ref{etaa}), when the voltage is positive and by $\eta_{b}$, Eq. (\ref{etab}), when it is negative.
Adding the expression for the heat current leaving the boson bath
\begin{align}
J^{Q}_{T}=\frac{\omega}{|e|}(|e|J^{}_{L})+
K^{}_{\rm PP}(1-\frac{T^{}_{R}}{T^{}_{T}})\ ,
\end{align}
we find that the COP's of our system are
\begin{align}
\eta^{}_{a}=\frac{B-\kappa^{}_{\rm P}(\frac{T^{}_{R}}{T^{}_{L}}-1)}{\frac{|e|V+\omega }{\overline{E}}B+\kappa^{}_{\rm PP}(1-\frac{T^{}_{R}}{T^{}_{T}})
}
\end{align}
for $V\geq 0$, and
\begin{align}
\eta^{}_{b}=\frac{(1-\frac{|e|V}{\overline{E}})B
-\kappa^{}_{\rm P}(\frac{T^{}_{R}}{T^{}_{L}}-1)}{\frac{\omega }{\overline{E}}B+\kappa^{}_{\rm PP}(1-\frac{T^{}_{R}}{T^{}_{T}})}
\label{etabfi}
\end{align}
for $V\leq 0$.
Here we have introduced for brevity the notation
\begin{align}
B=\frac{|e|V}{\overline{E}}+1-
\frac{T^{}_{R}}{T^{}_{L}}+\frac{\omega}{\overline{E}}(1-\frac{T^{}_{R}}{T^{}_{T}})\equiv \frac{e^2J^{}_L}{G^{}_{\rm in}\overline{E}}\ ,
\end{align}
and measured the phonon conductances in terms of the bare electronic heat conductance,
\begin{align}
\kappa^{}_{\rm P}=e^{2}K^{}_{\rm P}/(\overline{E}^{2}G^{}_{\rm in})\ ,\ \ \
\kappa^{}_{\rm PP}=e^{2}K^{}_{\rm PP}/(\overline{E}^{2}G^{}_{\rm in})\ .
\label{PHOC}
\end{align}
Note that the bias should exceed a certain threshold, $V_{c}$, dictated by the working condition Eq. (\ref{wc}),
\begin{align}
\frac{|e|V^{}_{c}}{\overline{E}}=
\Big (\frac{T^{}_{R}}{T^{}_{L}}-1\Big )(1+\kappa^{}_{\rm P})
-\frac{\omega}{\overline{E}}\Big (1-\frac{T^{}_{R}}{T^{}_{T}}\Big )\ .
\label{vc}
\end{align}
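Equation (\ref{vc}) is easily explored numerically (Python; the parameter values are illustrative). The sketch exhibits a negative threshold for a small parasitic conductance $\kappa_{\rm P}$ — allowing joint cooling and power harvesting — and a positive one when $\kappa_{\rm P}$ is large:

```python
def Vc_over_Ebar(T_L, T_R, T_T, omega_over_Ebar, kappa_P):
    """Threshold bias |e|V_c/Ebar of Eq. (vc); cooling requires the
    scaled bias to exceed this value."""
    return ((T_R / T_L - 1.0) * (1.0 + kappa_P)
            - omega_over_Ebar * (1.0 - T_R / T_T))

# strong thermal drive + small parasitic conductance: V_c < 0,
# so cooling can coexist with electric-power production
vc_neg = Vc_over_Ebar(T_L=1.0, T_R=1.5, T_T=10.0, omega_over_Ebar=2.0, kappa_P=0.2)
# a large phonon conductance K_P restores a positive threshold
vc_pos = Vc_over_Ebar(T_L=1.0, T_R=1.5, T_T=10.0, omega_over_Ebar=2.0, kappa_P=4.0)
```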
As mentioned,
joint cooling and energy harvesting requires negative values of the bias voltage, and hence negative values of the threshold $V_{c}$; a large enough phonon conductance $K_{\rm P}$ (the actual value depends on the model parameters) will therefore prevent such a combined action.
\begin{figure}[htp]
\centering
\includegraphics[width=7cm]{fig3.pdf}
\caption{(Color online) The slanted (blue) line is the thermodynamic efficiency of a three-terminal junction, Eq. (\ref{theta}), as a function of $(T_{R}/T_{L})-1$ for $\kappa_{\rm P}=\kappa_{\rm PP}=0$ and $\omega/\overline{E} =2$. The horizontal (orange) line is $\eta_{\rm C}$, Eq. (\ref{Carnot}); the curved (green) line is $\eta_{\rm cbh}$, Eq. (\ref{etacbh}); the latter two represent the bounds Eqs. (\ref{ine1}) and (\ref{ine2}).}
\label{fig3}
\end{figure}
It is illuminating at this point to consider the thermodynamic limit of the efficiency $\eta^{}_{ b}$, Eq. (\ref{etabfi}). Setting the thermal conductances to zero implies that the bias voltage is $V_{c}$, Eq. (\ref{vc}), and then
\begin{align}
\eta^{\rm rev}_{b}=\frac{\overline{E}}{\omega}
\Big [
1- \Big (\frac{T^{}_{R}}{T^{}_{L}}-1\Big )
+\frac{\omega}{\overline{E}}
\Big (1-\frac{T^{}_{R}}{T_{T}}\Big )\Big ]\ ,
\label{theta}
\end{align}
with $(T_{R}/T_{L})-1\leq (\omega/\overline{E})[1-(T_{R}/T_{T}) ]$ to ensure the negativeness of $V_{c}$.
We plot in Fig. \ref{fig3} this COP, together with the two bounds found in Sec. \ref{general} on it; indeed, the model-dependent $\eta^{\rm rev}_{b}$ of Eq. (\ref{theta}) lies in-between the bounds (\ref{ine1}) and (\ref{ine2}), which interchange their respective roles at $T_{R}/T_{L}=2$. Varying the model parameters does not change the figure qualitatively. As noted after Eq. (\ref{www}), the reversible efficiency increases upon generating both cooling and electrical work (i.e., increasing $w$) when $\eta_{\rm cbe}<1$ and decreases when $\eta_{\rm cbe}> 1$.
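A small numerical sketch (Python; illustrative parameter values) confirms that Eq. (\ref{etabfi}) indeed approaches Eq. (\ref{theta}) when the parasitic conductances vanish and $V\to V_{c}$, and that finite $\kappa$'s lower the COP at fixed bias:

```python
def eta_b(v, T_L, T_R, T_T, w_over_E, kP, kPP):
    """Irreversible COP eta_b of Eq. (etabfi); v = |e|V/Ebar (v <= 0),
    w_over_E = omega/Ebar, kP and kPP the scaled phonon conductances."""
    B = v + 1.0 - T_R / T_L + w_over_E * (1.0 - T_R / T_T)
    num = (1.0 - v) * B - kP * (T_R / T_L - 1.0)
    den = w_over_E * B + kPP * (1.0 - T_R / T_T)
    return num / den

T_L, T_R, T_T, wE = 1.0, 1.5, 10.0, 2.0
vc = (T_R / T_L - 1.0) - wE * (1.0 - T_R / T_T)     # threshold for kP = 0
# reversible limit: kappas -> 0 and V -> V_c (where B -> 0)
eta_lim = eta_b(vc + 1e-9, T_L, T_R, T_T, wE, kP=0.0, kPP=0.0)
eta_theta = (1.0 / wE) * (1.0 - (T_R / T_L - 1.0) + wE * (1.0 - T_R / T_T))
# parasitic conductances reduce the COP at the same bias:
eta_clean = eta_b(-0.5, T_L, T_R, T_T, wE, kP=0.0, kPP=0.0)
eta_parasitic = eta_b(-0.5, T_L, T_R, T_T, wE, kP=0.05, kPP=0.05)
```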
The figures below display the COP's $\eta_{a}$ and $\eta_{b}$ as functions of the bias voltage, the former for positive values of the bias voltage and the latter for its negative values, as explained in Sec. \ref{general}, see Eqs. (\ref{etaa}) and (\ref{etab}).
In all figures the temperature difference between the electronic $L$ and $R$ terminals is kept constant, chosen to be $T_{R}/T_{L}=3/2$ and yielding $\eta_{\rm cbe}=2$ for the Carnot coefficient of performance for refrigeration by investing work in a two-terminal setup [see Eq. (\ref{etacbe}) and the discussion in Sec. \ref{YI}]. The various curves in each figure are for different values of $1-T_{R}/T_{T}$ (explicit values are given in the captions).
The parasitic phonon conductances [in dimensionless units, see Eqs. (\ref{PHOC})] of the electronic junction ($\kappa_{\rm P}$) and between the electronic system and the thermal terminal
($\kappa_{\rm PP}$) are taken to be equal for definiteness and decrease from 4 to 0.2, whereupon the COP increases.
In all three figures we see that the `working regime', i.e., the voltage range over which cooling is possible, increases with the
driving force of the thermal terminal $1-T_{R}/T_{T}$, namely the threshold $V_{c}$ decreases.
\begin{figure}[htp]
\centering
\includegraphics[width=7cm]{FIGS-EFF_3.pdf}
\caption{(Color online) The efficiency of a three-terminal junction as a function of $|e|V/\overline{E}$ for various values of $1-T^{}_{R}/T^{}_{T}$, and $\kappa_{\rm P}=\kappa_{\rm PP}=4$, $(T^{}_{R}/T^{}_{L})-1=0.5$ and $\omega/\overline{E} =2$. The various curves in the upward direction are for $1-T_{R}/T_{T}=0$ [the lowest solid (purple) line], $0.25,\ 0.5,\ 0.75$ and 1 [the upper dotted (magenta) curve].}
\label{c}
\end{figure}
Figure \ref{c} is for $\kappa_{\rm P}=\kappa_{\rm PP}=4$; cooling exists only beyond a positive threshold of the bias voltage [see Eq. (\ref{vc})], so no joint cooling and energy harvesting is possible.
However, that is already possible for $\kappa_{\rm P}=\kappa_{\rm PP}=2$, as shown in Fig. \ref{b}. Very interestingly, the COP increases with decreasing $V<0$ and has a small peak, which is yet smaller than the one for $V>0$. What is, seemingly, most surprising is that for even smaller parasitics, e.g., $\kappa_{\rm P}=\kappa_{\rm PP} =0.2$ (Fig. \ref{a}), still away from the reversible limit,
the values of the COP at $V<0$ can be substantially larger than those for $V>0$. This means that harvesting energy may actually {\em increase the COP compared to investing energy}, a truly remarkable property of the three-terminal setup!
\begin{figure}[htp]
\centering
\includegraphics[width=7cm]{FIGS-EFF_2.pdf}
\caption{(Color online) The efficiency of a three-terminal junction as a function of $|e|V/\overline{E}$ for various values of $1-T^{}_{R}/T^{}_{T}$, and $\kappa_{\rm P}=\kappa_{\rm PP}=2$, $(T^{}_{R}/T^{}_{L})-1=0.5$ and $\omega/\overline{E} =2$. The various curves in the upward direction are for $1-T_{R}/T_{T}=0$
[the lowest solid (purple) line], $0.25,\ 0.5,\ 0.75$ and 1 [the upper dotted (magenta) curve].}
\label{b}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=7cm]{FIGS-EFF_1.pdf}
\caption{(Color online) The efficiency of a three-terminal junction as a function of $|e|V/\overline{E}$ for various values of $1-T^{}_{R}/T^{}_{T}$, and $\kappa_{\rm P}=\kappa_{\rm PP}=0.2$, $(T^{}_{R}/T^{}_{L})-1=0.5$ and $\omega/\overline{E} =2$. The various curves in the upward direction are for $1-T_{R}/T_{T}=0$
[the lowest solid (purple) line], $0.25,\ 0.5,\ 0.75$ and 1 [the upper dotted (magenta) curve].}
\label{a}
\end{figure}
The tendency of the efficiency to increase with the absolute value of the voltage, for small negative values of it, is
exemplified by Fig. \ref{C}, which displays $\eta_{b}$, Eq. (\ref{etabfi}), as a function of $(T_{R}/T_{L})-1$ for negative values of the voltage. It is seen that indeed $\eta_{b}$ increases as the absolute value of $V$ increases. However, because of the condition Eq. (\ref{vc}) necessary for the device to operate, the range of temperature differences between the thermal terminal and the electrons is reduced as well. The various curves in each of the panels of Fig. \ref{C} correspond to different values of
$1-T_{R}/T_{T}$. As $|V|$ is increased, the working condition Eq. (\ref{vc}) allows only for $T_{R}-$values which are closer and closer to $T_{T}$.
\begin{figure}
\includegraphics[width=7cm]{fig9a.pdf}
\includegraphics[width=7cm]{fig9b.pdf}
\includegraphics[width=7cm]{fig9c.pdf}\caption{(Color online) The efficiency $\eta _{b}$
as a function of the temperature difference across the electronic system, for various values of the temperature difference between the thermal reservoir and the electrons, $1-(T_{R}/T_{T})=0.25$, the dashed (brown) curve, $=0.5$, the dotted (magenta) curve, $=0.75, $ the solid (green) line and $=1.$, the dashed (blue) curve. The upper panel is for $|e|V/\overline{E}=-0.1$, the middle one is for
$|e|V/\overline{E}=-0.5$, and the lower panel is for
$|e|V/\overline{E}=-1$. The working condition Eq. (\ref{vc}) is not fulfilled for all values of $1-(T_{R}/T_{T})$ (e.g., for $0.25$), and therefore some of the curves are missing in the two lower panels. Here
$\kappa_{\rm P}=\kappa_{\rm PP}=0.05$, and $\omega/\overline{E} =2$.
}
\label{C}
\end{figure}
\section{Summary and conclusions}
\label{SUM}
We have analyzed the efficiency of two cooling processes possible in a mixed three-terminal thermoelectric junction, comprising
two electronic terminals that interchange energy with a third, thermal contact. For concreteness, we have concentrated mainly on the possibility to cool one of the electronic terminals, either by investing thermal energy extracted from the thermal terminal {\em and} electric power supplied by a bias voltage on the electrons, or by exploiting the thermal energy of the boson bath to cool the electronic terminal
{\em
and to produce } electric power. (Cooling of the thermal contact has been mentioned in Sec. \ref{YI}.)
We have found that one advantage of the three-terminal setup compared with the two-terminal one is the increase in the working regime of the device. In our case, this is manifested by the extended range of bias-voltage values for which cooling is possible. This increase is dominated by the parasitic phonon conductance; in our case it is the phonon conductance, $K_{\rm P}$, of the electronic junction, which should not be too large. With a suitable choice of $K_{\rm P}$ the threshold of the voltage becomes negative, and then the three-terminal setup cools and at the same time, produces electric power. In our model system, we find that the COP for this dual action can be {\em enhanced} compared with its value (for the same parameters) obtained when the device works just as a refrigerator. This enhancement, necessitating not-too-large parasitic thermal conductances, should become huge when the cooling is for a small cooling temperature increment, $T_R - T_L \ll T_L$.
\begin{acknowledgments}
This work was supported by the Israeli Science Foundation (ISF) and the US-Israel Binational Science Foundation (BSF). We thank J-H Jiang, J-L Pichard, D. Shahar, G. C. Tewari and G. Zeltzer for discussions on related questions.
OEW and AA thank FAPESP grant 2011/11973-4 for funding their visit to ICTP-SAIFR during February-March 2014, where part of this work was done.
\end{acknowledgments}
\section{}
\subsection{Gradient Analysis of ASL}
The negative gradients of ASL~\cite{ben2020asymmetric} with different $\gamma$ and $m$ are shown in Fig.~\ref{img:app_asl}.
Despite attempts with different hyper-parameters, the turning point remains confined to $p\in[0.8,0.9]$.
Consequently, ASL still focuses too much on the samples most likely to be missing labels~(false negatives with $p>0.5$).
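The turning point can be located numerically with the short sketch below (a simplified re-implementation of the ASL negative-loss gradient analysis, taking $L_-=-(p_m)^{\gamma}\log(1-p_m)$ with the shifted probability $p_m=\max(p-m,0)$; the margin $m=0.05$ is an illustrative choice, not the value used in the figures):

```python
import math

def asl_neg_grad(p, gamma, m):
    """|dL/dz| for the ASL negative loss L = -(p_m)^gamma * log(1 - p_m),
    p_m = max(p - m, 0), with p = sigmoid(z)."""
    pm = max(p - m, 0.0)
    if pm == 0.0:
        return 0.0
    dL_dp = (gamma * pm ** (gamma - 1) * (-math.log(1.0 - pm))
             + pm ** gamma / (1.0 - pm))
    return dL_dp * p * (1.0 - p)       # chain rule through the sigmoid

def turning_point(gamma, m, n=2000):
    """Probability at which the negative-gradient magnitude peaks."""
    grid = [i / n for i in range(1, n)]
    return max(grid, key=lambda p: asl_neg_grad(p, gamma, m))

peaks = {g: turning_point(g, m=0.05) for g in (1, 2, 3, 4)}
```

For all tested $\gamma$ the peak stays well above $p=0.5$, consistent with the observation that semi-hard negatives keep large weights.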
\begin{figure}[ht]
\subfigure{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=1.7in]{figures/appendix/asl-1}
\scriptsize (a)~ASL-$\gamma=1$
\centering
\includegraphics[width=1.7in]{figures/appendix/asl-3}
\scriptsize (c)~ASL-$\gamma=3$
\end{minipage}%
}%
\subfigure{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=1.7in]{figures/appendix/asl-2}
\scriptsize (b)~ASL-$\gamma=2$
\centering
\includegraphics[width=1.7in]{figures/appendix/asl-4}
\scriptsize (d)~ASL-$\gamma=4$
\end{minipage}%
}%
\centering
\caption{Gradient analysis of ASL loss with different $\gamma$ and $m$. ASL only down-weights extremely hard negatives~($p>0.9$), but leaves semi-hard negative samples~($p\in[0.5,0.8]$) large weights.}
\label{img:app_asl}
\end{figure}
\subsection{Additional Evaluation Metrics and Examples}
Following previous multi-label image classification research~\cite{ben2020asymmetric,cole2021multi}, mean Average Precision~(mAP) is adopted as the primary metric to evaluate model performance.
In Table~\ref{tab:app_coco0.5}, Table~\ref{tab:app_coco0.2}, Table~\ref{tab:app_cocosingle}, Table~\ref{tab:app_vocsingle}, Table~\ref{tab:app_nussingle} and Table~\ref{tab:app_openimagesingle}, respectively, we report additional evaluation results, including OF1, CF1, OP, OR, CP and CR, on different datasets.
The positive threshold is set to a commonly used value of 0.5 for all methods, except ROLE~\cite{cole2021multi}. As shown in Fig.~\ref{img:dis}~(f), the ROLE has a threshold offset~(the lowest prediction value $p_l$ is generally around $0.3$ rather than $0$). Therefore, the positive threshold is set to $\tau=(1+p_l)/2$ for a relatively fair comparison.
The results show that our proposed Hill loss and SPLC outperform existing methods. Precision~(P) and Recall~(R) are trade-off metrics, which are sensitive to the selected threshold. Thus, we should take a comprehensive look at the two metrics, instead of focusing each single metric.
More example results of our proposed methods and BCE from the COCO test set are provided in Fig.~\ref{img:example_vis_app}.
\begin{table}[ht]
\caption{Comparison of various metrics with other methods on COCO with 40\% labels left.}
\begin{center}
\setlength{\tabcolsep}{1.2mm}{
\begin{tabular}{lcccccccccccccccccccccccc}
\hline
&
\multicolumn{1}{c}{mAP} &
\multicolumn{1}{c}{CP} &
\multicolumn{1}{c}{CR} &
\multicolumn{1}{c}{CF1} &
\multicolumn{1}{c}{OP} &
\multicolumn{1}{c}{OR} &
\multicolumn{1}{c}{OF1}&
\\
\hline
BCE~(full labels) & \underline{80.3} &\underline{80.8} & \underline{70.3} & \underline{74.9} &\underline{84.3} &\underline{74.2} & \underline{78.9} \\
BCE & 70.5 &89.2 &34.4 & 45.8 &94.1 & 25.7 & 40.4\\
WAN~\cite{cole2021multi} & 72.1 &81.0 &53.2 & 62.8 &86.3 &52.0 & 64.9 \\
BCE-LS~\cite{cole2021multi} & 73.1 & 92.9 &33.5 & 44.9 &96.4 &26.2 & 41.2 \\
\hline
Loss re-weighting: \\
\hline
Focal~\cite{lin2017focal} & 71.7 &\textbf{88.9} &37.0 & 48.7 &\textbf{93.7} &28.8 & 44.0 \\
ASL~\cite{ben2020asymmetric} & 72.7 &71.4 &\textbf{65.6} & 67.7 &75.3 &\textbf{68.4} & 71.7 \\
\textbf{Hill~(Ours)} & \textbf{75.2} &80.4 &61.4& \textbf{68.6} & 85.5 &63.8 &\textbf{73.1} \\
\hline
Loss correction: \\
\hline
BCE + pseudo label & 71.5 &\textbf{88.0} &40.8 & 52.2 &\textbf{92.4} & 31.1 & 46.5 \\
ROLE~\cite{cole2021multi} & 73.7 &78.9 &60.0 &66.7 &83.6 &61.2 &70.7 \\
\textbf{\textbf{SPLC}~(Ours)} & \textbf{75.7} & 81.6 & \textbf{60.7} & \textbf{67.9} &87.7 & \textbf{63.0} & \textbf{73.3} \\
\hline
\end{tabular}
\label{tab:app_coco0.2}
}
\end{center}
\end{table}
\begin{table}[ht]
\caption{Comparison of various metrics with other methods on COCO with 75\% labels left.}
\begin{center}
\setlength{\tabcolsep}{1.2mm}{
\begin{tabular}{lcccccccccccccccccccccccc}
\hline
&
\multicolumn{1}{c}{mAP} &
\multicolumn{1}{c}{CP} &
\multicolumn{1}{c}{CR} &
\multicolumn{1}{c}{CF1} &
\multicolumn{1}{c}{OP} &
\multicolumn{1}{c}{OR} &
\multicolumn{1}{c}{OF1} &
\\
\hline
BCE~(full labels) & \underline{80.3} &\underline{80.8} & \underline{70.3} & \underline{74.9} &\underline{84.3} &\underline{74.2} & \underline{78.9} \\
BCE & 76.8 &85.1 &58.1 & 67.7 &90.1 & 58.7 & 71.1\\
WAN~\cite{cole2021multi} & 77.3 &75.0 &70.2 & 72.2 &78.2 &73.7 & 75.9 \\
BCE-LS~\cite{cole2021multi} & 78.3 & 88.2 &57.2 & 67.8 &92.1 &57.9 & 71.1 \\
\hline
Loss re-weighting: \\
\hline
Focal~\cite{lin2017focal} & 77.0 &\textbf{83.8} &59.4 & 68.4 &\textbf{88.6} &59.8 & 71.4 \\
ASL~\cite{ben2020asymmetric} & 77.9 &63.1 &\textbf{78.8} & 69.7 &64.9 &\textbf{82.2} & 72.5 \\
\textbf{Hill~(Ours)} & \textbf{78.8} &73.6 &74.4& \textbf{73.6} &76.4 &78.3 & \textbf{77.3} \\
\hline
Loss correction: \\
\hline
BCE + pseudo label & 77.1 &\textbf{84.9} &58.7 & 68.2 &\textbf{89.7} & 59.4 & 71.5 \\
ROLE~\cite{cole2021multi} & \textbf{78.4} &77.4 &70.0 &72.9 &80.9 &73.5 & \textbf{77.0} \\
\textbf{\textbf{SPLC}~(Ours)} & \textbf{78.4} & 72.6 & \textbf{75.1} & \textbf{73.2} &74.0 & \textbf{79.3} & 76.6 \\
\hline
\end{tabular}
\label{tab:app_coco0.5}
}
\end{center}
\end{table}
\begin{table}[!htbp]
\caption{Comparison of various metrics with other methods on the COCO-single label setting.}
\begin{center}
\setlength{\tabcolsep}{1.2mm}{
\begin{tabular}{lcccccccccccccccccccccccc}
\hline
&
\multicolumn{1}{c}{mAP} &
\multicolumn{1}{c}{CP} &
\multicolumn{1}{c}{CR} &
\multicolumn{1}{c}{CF1} &
\multicolumn{1}{c}{OP} &
\multicolumn{1}{c}{OR} &
\multicolumn{1}{c}{OF1} &
\\
\hline
BCE~(full labels) & \underline{80.3} &\underline{80.8} & \underline{70.3} & \underline{74.9} &\underline{84.3} &\underline{74.2} & \underline{78.9} \\
BCE & 68.6 &88.6 &33.0 & 43.8 &93.9 & 23.6 & 37.7\\
WAN~\cite{cole2021multi} & 70.2 &82.4 &47.9 & 58.0 &88.2 &43.9 & 58.6 \\
BCE-LS~\cite{cole2021multi} & 70.5 & 89.3 &30.8 & 40.9 &96.5 &23.1 & 37.3 \\
\hline
Loss re-weighting: \\
\hline
Focal~\cite{lin2017focal} & 70.2 &88.2 &36.0 & 47.0 & 93.4 & 26.6 &41.4 \\
ASL~\cite{ben2020asymmetric} & 71.8 &\textbf{90.4} &33.4 & 44.8 &\textbf{94.7} &23.7 & 37.9 \\
\textbf{Hill~(Ours)} & \textbf{73.2} &79.7 &\textbf{58.0}& \textbf{65.5} &85.3 &\textbf{58.7} & \textbf{69.5} \\
\hline
Loss correction: \\
\hline
BCE + pseudo label & 69.8 &\textbf{87.2} &38.3 & 48.9 &\textbf{92.5} & 28.0 & 43.0 \\
ROLE~\cite{cole2021multi} & 70.9 &83.3 &49.1 &57.6 &88.3 &46.8 & 61.2 \\
\textbf{\textbf{SPLC}~(Ours)} & \textbf{73.2} & 83.8 & \textbf{53.1} & \textbf{61.6} &90.1 & \textbf{53.8} & \textbf{67.4} \\
\hline
\end{tabular}
\label{tab:app_cocosingle}
}
\end{center}
\end{table}
\begin{table*}[ht]
\caption{Comparison of various metrics with other methods on the VOC-single label setting.}
\begin{center}
\setlength{\tabcolsep}{1.2mm}{
\begin{tabular}{lcccccccccccccccccccccccc}
\hline
&
\multicolumn{1}{c}{mAP} &
\multicolumn{1}{c}{CP} &
\multicolumn{1}{c}{CR} &
\multicolumn{1}{c}{CF1} &
\multicolumn{1}{c}{OP} &
\multicolumn{1}{c}{OR} &
\multicolumn{1}{c}{OF1} &
\\
\hline
BCE~(full labels) & \underline{89.0} &\underline{82.9} &\underline{84.1} & \underline{83.3} &\underline{86.4} & \underline{84.7} & \underline{85.6}\\
BCE & 85.6 &87.8 &71.6 & 77.4 &91.1 & 68.7 & 78.4\\
WAN~\cite{cole2021multi} & 87.0 &83.9 &78.4 & 80.4 &87.5 &77.7 & 82.3 \\
BCE-LS~\cite{cole2021multi} & 87.2 & 92.0 &67.7 & 75.4 &94.6 &65.9 & 77.7 \\
\hline
Loss re-weighting: \\
\hline
Focal~\cite{lin2017focal} & 86.8 &\textbf{87.2} &73.6 & 78.2 & \textbf{90.6} & 70.3 &79.1 \\
ASL~\cite{ben2020asymmetric} & 87.3 &68.4 &\textbf{87.2} & 75.9 &72.8 &\textbf{86.9} & 79.2 \\
\textbf{Hill~(Ours)} & \textbf{87.8} &85.3 &78.7& \textbf{81.1} &88.5 &79.6 & \textbf{83.8} \\
\hline
Loss correction: \\
\hline
BCE + pseudo label & \textbf{89.7} &86.5 &\textbf{81.8} & \textbf{84.0} &89.1 & \textbf{83.5} & \textbf{86.2} \\
ROLE~\cite{cole2021multi} & 89.0 &87.6 &78.2 &81.5 &90.6 &79.0 & 84.4 \\
\textbf{\textbf{SPLC}~(Ours)} & 88.1 &\textbf{88.7} & 75.0 & 80.2 &\textbf{91.9} & 75.8 & 83.0 \\
\hline
\end{tabular}
\label{tab:app_vocsingle}
}
\end{center}
\end{table*}
\begin{table*}[t]
\caption{Comparison of various metrics with other methods on the NUS-single label dataset.}
\begin{center}
\setlength{\tabcolsep}{1.2mm}{
\begin{tabular}{lcccccccccccccccccccccccc}
\hline
&
\multicolumn{1}{c}{mAP} &
\multicolumn{1}{c}{CP} &
\multicolumn{1}{c}{CR} &
\multicolumn{1}{c}{CF1} &
\multicolumn{1}{c}{OP} &
\multicolumn{1}{c}{OR} &
\multicolumn{1}{c}{OF1} &
\\
\hline
BCE~(full labels) & \underline{60.6} &\underline{62.7} &\underline{58.0} &\underline{59.1} &\underline{73.8} & \underline{73.4} &\underline{73.6}\\
BCE & 51.7 &67.8 &23.2 & 30.9 &84.0 & 20.7 & 33.3\\
WAN~\cite{cole2021multi} & 52.9 &61.5 &41.2 & 47.2 &75.1 &46.7 & 57.6 \\
BCE-LS~\cite{cole2021multi} & 52.5 & 66.9 &19.8 & 27.0 &85.6 &21.0 &33.7 \\
\hline
Loss re-weighting: \\
\hline
Focal~\cite{lin2017focal} & 53.6 &\textbf{69.1} &26.2 & 34.2 & \textbf{83.4} & 22.1 &35.0 \\
ASL~\cite{ben2020asymmetric} & 53.9 &53.5 &55.0 & 53.2 &67.7 &67.3 & 67.5 \\
\textbf{Hill~(Ours)} & \textbf{55.0} &54.7 &\textbf{56.3}& \textbf{54.1} &68.5 &\textbf{68.7} & \textbf{68.6} \\
\hline
Loss correction: \\
\hline
BCE + pseudo label & 51.8 &\textbf{66.8} &24.8 & 32.5 &\textbf{83.6} & 22.2 & 35.1 \\
ROLE~\cite{cole2021multi} & 50.6 &56.3 &31.7 &37.4 &75.7 &50.8 &60.8 \\
\textbf{\textbf{SPLC}~(Ours)} & \textbf{55.2} &56.6 & \textbf{54.0} & \textbf{52.4} & 68.4& \textbf{73.0} & \textbf{70.6} \\
\hline
\end{tabular}
\label{tab:app_nussingle}
}
\end{center}
\end{table*}
\begin{table*}[ht]
\caption{Comparison of various metrics with other methods on the Open Images-single label dataset.}
\begin{center}
\setlength{\tabcolsep}{1.2mm}{
\begin{tabular}{lcccccccccccccccccccccccc}
\hline
&
\multicolumn{1}{c}{mAP} &
\multicolumn{1}{c}{CP} &
\multicolumn{1}{c}{CR} &
\multicolumn{1}{c}{CF1} &
\multicolumn{1}{c}{OP} &
\multicolumn{1}{c}{OR} &
\multicolumn{1}{c}{OF1} &
\\
\hline
BCE &60.8 &\textbf{72.9} &30.6 &38.2 &\textbf{85.9} &14.5 &24.9 \\
Focal~\cite{lin2017focal} &62.1 &72.3 &32.3 &39.8 &85.6 &15.2 &25.8 \\
ASL~\cite{ben2020asymmetric} &62.0 &60.1 &55.5 &53.4 &69.2 &38.3 &49.3 \\
\textbf{Hill~(Ours)} &\textbf{62.7} &51.7 & \textbf{66.5} & 55.0 &59.0 & \textbf{50.5} & \textbf{54.4} \\
\textbf{\textbf{SPLC}~(Ours)} &\textbf{62.9} &61.7 &56.8 &\textbf{55.4} &71.8 & 38.2 & 49.9 \\
\hline
\end{tabular}
\label{tab:app_openimagesingle}
}
\end{center}
\end{table*}
\begin{figure*}[ht]
\flushleft
\centering
\includegraphics[width=1.0\textwidth]{figures/visualization_2.pdf}
\centering
\vspace{-1em}
\caption{More example results of our proposed methods and BCE from the COCO test set. Green and red color mean positive and missing labels respectively.}
\label{img:example_vis_app}
\end{figure*}
\section{Conclusion}
In this paper, we designed simple and robust loss functions for multi-label learning with missing labels without sacrificing training and inference efficiency.
First, we presented the observation that a portion of missing labels can be distinguished from negatives in the early stage of training with high precision.
Then, two novel approaches are proposed to alleviate the effect of missing labels by down-weighting and correcting potential missing labels, respectively.
Besides, we proposed to perform semi-hard mining on positives, which further promoted the performance.
Extensive experiments on several widely used benchmarks have demonstrated the superiority of the proposed methods.
Our methods substantially simplify the existing pipeline of MLML.
We hope the new state-of-the-art results established by our methods can serve as a starting point for future research on MLML.
\subsection{Additional Discussions and Experiments}
\subsubsection{Impact of the hyper-parameters}
Despite the introduction of extra hyper-parameters in Hill and SPLC compared with BCE, we can select them empirically as described in the method part. Here we give a more detailed analysis and answer some common questions arising in practical use.
In principle, we did not invest much effort in finding the best hyper-parameters for each dataset:
the optimal parameters searched on the COCO-40\% labels left are shared across all other datasets.
\textit{How to set $m$ of Focal margin?}
The margin~$m$ of Focal margin controls the importance of different positives. A small $m$ focuses more on hard positives,
and the focus shifts to semi-hard positives as $m$ increases. Based on our empirical evaluation in Table~\ref{tab:hyper-parameter}, we recommend $m\in[0.5,1.0]$ and set $m=1$ in all the experiments.
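As an illustration of this effect, the sketch below assumes the Focal margin positive loss $L^+=-(1-p_m)^\gamma\log(p_m)$ with the margin applied in logit space, $p_m=\sigma(z-m)$; a semi-hard positive ($p=0.5$) then receives a considerably larger gradient with $m=1$ than under the plain Focal loss ($m=0$):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def focal_margin_grad(p, gamma=2.0, m=1.0):
    """Gradient magnitude w.r.t. the logit z for the positive loss
    L+ = -(1 - p_m)^gamma * log(p_m), with shifted p_m = sigmoid(z - m)."""
    z = math.log(p / (1.0 - p))          # logit of the prediction p = sigmoid(z)
    pm = sigmoid(z - m)
    dl_dpm = -gamma * (1.0 - pm) ** (gamma - 1) * math.log(pm) \
             + (1.0 - pm) ** gamma / pm
    return dl_dpm * pm * (1.0 - pm)      # chain rule through d(p_m)/dz

# A semi-hard positive (p = 0.5) under plain Focal (m = 0) vs. Focal margin (m = 1).
semi_hard_plain = focal_margin_grad(0.5, m=0.0)
semi_hard_margin = focal_margin_grad(0.5, m=1.0)
```

With $m=1$ the margin effectively treats a prediction of $p=0.5$ as still far from the target, so its gradient is less attenuated by the $(1-p_m)^\gamma$ factor.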
\textit{How to determine the certain time of early stage for SPLC?}
SPLC is insensitive to the exact choice of when the early stage ends.
Experiments show that choosing any epoch from the 2nd to the 5th has little effect on performance~(mAP varies within 0.2), as shown in Table~\ref{tab:hyper-parameter}.
The reason is that the model recalls only a small portion of FNs~(but with high precision) at the early stage.
Therefore, we correct the labels after the first epoch in all the experiments of SPLC.
\textit{What if FNs are identified incorrectly for SPLC?}
In fact, it cannot be guaranteed that all identified FNs are correct, even in well-studied single-label tasks.
However, our methods would perform well as long as most identified FNs are correct, as demonstrated by the extensive experiments.
The error rate of false identification in the training set is below 5\% when setting a high threshold~(above 0.6), as shown in Fig.~\ref{img:observation}.
The underlying reason behind this is the distinctive features in MLML as described in Sec.~\ref{sec_observation}.
\textit{How to set the threshold of SPLC to differentiate TNs and FNs?}
The threshold $\tau$ controls the trade-off between precision and recall of FNs as shown in Fig.~\ref{img:SPLC_pr}.
$\tau$ is a relatively sensitive parameter, as shown in Table~\ref{tab:hyper-parameter}.
A higher~(stricter) threshold means less FNs are recalled but the precision is higher.
We set the threshold $\tau$ to a fixed high value of 0.6 to alleviate the incorrect identification of FNs.
The reason why $\tau$ is greater than the commonly used 0.5 is that the Focal margin focuses more on positives and recalls more missing labels compared with the BCE.
It is predictable that setting different thresholds for different classes and training stages, instead of a fixed one, would bring better accuracy at the cost of more parameters to tune.
Thus, we leave the study of adaptive and efficient thresholds for future work.
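To make the correction rule concrete, here is a minimal per-label sketch of SPLC: past the early stage, any label annotated as negative whose prediction exceeds $\tau$ is treated as a (pseudo-)positive in the loss. The positive and negative terms are written as plain log losses for readability; the actual method uses the Focal margin loss.

```python
import math

def splc_loss(p, y, tau=0.6, corrected=True):
    """Self-Paced Loss Correction for one label.
    p: predicted probability; y: annotated label (1 positive, 0 negative).
    `corrected` is False during the early stage (the first epoch in our setting).
    Plain log losses are placeholders for the Focal margin terms."""
    loss_pos = -math.log(max(p, 1e-12))        # loss if treated as positive
    loss_neg = -math.log(max(1.0 - p, 1e-12))  # loss if treated as negative
    if y == 1:
        return loss_pos
    if corrected and p > tau:                  # likely a missing label: correct it
        return loss_pos
    return loss_neg
```

For example, an unannotated label predicted at $p=0.8$ is trained as a positive once correction is enabled, while one at $p=0.4$ keeps its negative loss.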
\textit{Does the Hill loss adjust the ratio of positive and negative part by the hyper-parameters, similar to Focal loss and ASL?}
Hill loss has a fixed formulation for the positive and negative parts but outperforms Focal loss and ASL. Therefore, no hyper-parameter is needed to balance positives and negatives.
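For reference, a minimal sketch of that fixed negative part, assuming the re-weighted MSE form $(\lambda-p)\,p^2$ with $\lambda=1.5$; its gradient with respect to $p$ is $3p(1-p)$, which peaks at $p=0.5$ and vanishes for confident (possibly false) negatives without any balancing hyper-parameter:

```python
def hill_neg_loss(p, lam=1.5):
    """Hill's negative part, assumed re-weighted MSE form: (lam - p) * p^2,
    written as the quantity to minimize for a negative label."""
    return (lam - p) * p ** 2

def hill_neg_grad(p, lam=1.5):
    """d/dp of (lam - p) * p^2 = 2*lam*p - 3*p^2; with lam = 1.5 this is 3p(1-p)."""
    return 2.0 * lam * p - 3.0 * p ** 2

# The gradient peaks at p = 0.5 (semi-hard negatives) and decays toward p = 1,
# so possibly-missing labels (large p) are automatically down-weighted.
```
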
\begin{table}[t]
\begin{center}
\caption{mAP results of different margins in Focal margin, and of different epochs and thresholds in SPLC, on the COCO-40\% labels left.}
\label{tab:hyper-parameter}
\begin{tabular}{c|ccccc}
\hline
\multicolumn{6}{c}{\textbf{Different margins~$(m)$}}\\
\hline
$m$ &
\multicolumn{1}{c}{0}&
\multicolumn{1}{c}{0.5}&
\multicolumn{1}{c}{1.0}&
\multicolumn{1}{c}{1.5} &
\multicolumn{1}{c}{2.0} \\
mAP & 73.57 &75.01 &\textbf{75.15} & 74.81 & 74.53 \\
\hline
\multicolumn{6}{c}{\textbf{Different epochs}}\\
\hline
epoch & 1 &2 &3 & 4 & 5 \\
mAP &75.69 &75.63 &75.60 &75.70 &75.71 \\
\hline
\multicolumn{6}{c}{\textbf{Different thresholds~$(\tau)$}}\\
\hline
threshold & 0.5 &0.55 &0.6 & 0.65 &0.7 \\
mAP &55.84 &72.41 & \textbf{75.69} & 75.46 & 74.29 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{More experiments with different backbones}
In Table~\ref{tab:backbones}, we test the applicability of Hill and SPLC for different
backbones, by comparing the different loss functions on three representative architectures: MobilenetV3-Large~\cite{howard2019searching}, Resnet101~\cite{he2016deep} and Swin-transformer-Small~\cite{liu2021swin}.
The consistent improvements brought by Hill and SPLC demonstrate that our methods are robust to different backbones.
\begin{table}[t]
\caption{Comparison results of mAP with different backbones on COCO-40\% labels left.}
\begin{center}
\scriptsize
\begin{tabular}{lcccccc}
\hline
& \multicolumn{1}{c}{MobilenetV3}&
\multicolumn{1}{c}{Resnet101} &
\multicolumn{1}{c}{Swin-transformer}
\\
\hline
BCE~(full labels) & 76.32 &81.74 &83.31 \\
BCE & 68.45 &71.92 &76.74 \\
WAN~\cite{cole2021multi} & 69.27 & 73.58 &77.20 \\
BCE-LS~\cite{cole2021multi} &68.73 &74.41 &76.88 \\
Focal~\cite{lin2017focal} &69.46 &73.58 &76.78 \\
ASL~\cite{ben2020asymmetric} & 69.65 &75.09 &76.70 \\
\textbf{Hill~(Ours)} & \textbf{71.03} & \textbf{76.53} & \textbf{78.96}\\
\textbf{\textbf{Focal margin + SPLC}~(Ours)} & \textbf{71.10} & \textbf{77.20} &\textbf{78.19} \\
\hline
\end{tabular}
\end{center}
\label{tab:backbones}
\vspace{-2mm}
\end{table}
\subsection{Analysis of Proposed Methods}
\begin{table}[t]
\caption{Ablation study of the Hill loss on the COCO-40\% labels left. Notations ``$+$'' and ``$-$'' represent the modification of positive and negative loss respectively on the basis of BCE.}
\begin{center}
\begin{tabular}{lcccc}
\hline
&
mAP &
\multicolumn{1}{c}{OF1} &
\multicolumn{1}{c}{CF1} \\
\hline
BCE & 70.49 & 45.81 & 40.42 \\
\hline
Focal$^+$~\cite{lin2017focal} & 70.87 & 49.42 & 44.63\\
\textbf{Focal margin$^+$~(Ours)} & \textbf{71.50} & \textbf{57.66} & \textbf{58.37} \\
\hline
WAN$^-$~\cite{cole2021multi} &72.05 & 62.77 & 64.87\\
Focal$^-$~\cite{lin2017focal} &72.10 & 56.43 & 54.04\\
ASL$^-$~\cite{ridnik2021asymmetric} & 72.70 & \textbf{67.72} & 71.67\\
MSE$^-$ & 74.08 & 65.12 & 68.17\\
\textbf{Hill$^-$~(Ours)} & \textbf{74.98} & 66.56 & \textbf{71.87}\\
\hline
\textbf{Hill~(Ours)} & \textbf{75.15} & \textbf{68.56} & \textbf{73.08}\\
\hline
\end{tabular}
\end{center}
\label{tab:ablation}
\vspace{-3mm}
\end{table}
\subsubsection{Ablation study of Hill loss}
Since Hill loss is composed of positive and negative loss, we conduct the ablation study on the basis of BCE to verify the effectiveness of positive and negative parts in Hill loss as shown in Table~\ref{tab:ablation}.
For the positive part, the hard-mining method~(Focal$^+$) brings a slight gain over BCE, whereas Focal margin$^+$ brings a relatively significant improvement. This shows that semi-hard mining is more appropriate than hard mining for positives in MLML.
For the negative part, we first down-weight the whole negatives with a weight parameter termed WAN$^-$, which provides a stronger baseline than BCE with solid improvements on different metrics.
Second, we compare different re-weighting losses. Hill$^-$ surpasses Focal$^-$ and ASL$^-$ by above +2\% mAP score. The main reason is that though ASL$^-$ seeks to attenuate the effect of hard negatives, the gradient analysis in Fig.~\ref{img:posloss} illustrates that ASL$^-$ still puts too much focus on these possibly false negatives compared with Hill$^-$.
Third, the superiority of Hill$^-$ over MSE$^-$ also demonstrates the necessity of re-weighting.
The performance is further improved via the integration of Focal margin$^+$ and Hill$^-$.
\begin{figure*}[t]
\flushleft
\subfigure{
\begin{minipage}[h]{0.25\linewidth}
\centering
\includegraphics[width=1.7in]{figures/appendix/bce-full.pdf}
\scriptsize (a)~BCE-full labels
\includegraphics[width=1.7in]{figures/distribution/asl-val.pdf}
\scriptsize (e)~ASL
\end{minipage}%
}%
\subfigure{
\begin{minipage}[h]{0.25\linewidth}
\centering
\includegraphics[width=1.7in]{figures/distribution/bce-val.pdf}
\scriptsize (b)~BCE
\includegraphics[width=1.7in]{figures/appendix/role-0.2.pdf}
\scriptsize (f)~ROLE
\end{minipage}%
}%
\subfigure{
\begin{minipage}[h]{0.25\linewidth}
\centering
\includegraphics[width=1.7in]{figures/appendix/focal-0.2.pdf}
\scriptsize (c)~Focal
\includegraphics[width=1.7in]{figures/distribution/hill-val.pdf}
\scriptsize (g)~Hill~(Ours)
\end{minipage}%
}%
\subfigure{
\begin{minipage}[h]{0.25\linewidth}
\centering
\includegraphics[width=1.7in]{figures/appendix/foca-margin1-0.2.pdf}
\scriptsize (d)~Focal margin~(Ours)
\includegraphics[width=1.7in]{figures/distribution/splc-val.pdf}
\scriptsize (h)~SPLC~(Ours)
\end{minipage}%
}%
\centering
\caption{Probability distributions on the COCO test set of different loss functions. Note that only (a) is obtained from a model trained with full labels on the COCO dataset, and the rest of the figures are obtained from models trained via different methods on the COCO-40\% labels left.}
\label{img:dis}
\end{figure*}
\begin{table}[t]
\caption{Quantitative results of SPLC with different loss functions on the COCO-40\% labels left.}
\begin{center}
\scriptsize
\begin{tabular}{lccc}
\hline
Method
& \multicolumn{1}{c}{mAP}&
\multicolumn{1}{c}{CF1}&
\multicolumn{1}{c}{OF1}
\\
\hline
BCE & 70.49 & 45.81 & 40.42 \\
BCE + pseudo label & 71.46 & 52.17 & 46.48 \\
BCE + \textbf{SPLC} & \textbf{73.63} & \textbf{65.56} &\textbf{72.14} \\
\hline
Focal & 71.66 & 48.67 &44.01 \\
Focal + pseudo label & 72.67 & 52.43 & 47.68 \\
Focal + \textbf{SPLC} & \textbf{73.83} & \textbf{57.95} &\textbf{56.04} \\
\hline
ASL & 72.70 & 67.72 &71.67 \\
ASL + pseudo label & 73.75 & 68.73 & 72.06 \\
ASL + \textbf{SPLC} & \textbf{74.23} & \textbf{69.23} & \textbf{72.74} \\
\hline
Focal margin & 72.26 & 60.79 &61.58 \\
Focal margin + pseudo label & 74.10 & 65.53 & 66.17 \\
\textbf{Focal margin + SPLC} & \textbf{75.69} & \textbf{67.91} &\textbf{73.33} \\
\hline
\end{tabular}
\end{center}
\label{tab:SPLC ablation}
\vspace{-5mm}
\end{table}
\subsubsection{SPLC boosts existing losses}
Table~\ref{tab:SPLC ablation} shows that both pseudo label and SPLC methods can complement existing losses and lead to consistent improvements, while SPLC performs better than the pseudo label method in both training efficiency and accuracy.
SPLC corrects the missing labels dynamically along with the training, while pseudo label method needs an extra training process to predict the labels.
Fig.~\ref{img:SPLC_pr} explains why SPLC achieves better accuracy.
It can be observed that, during training, SPLC gradually recalls more missing labels and corrects them while maintaining precision.
By comparison, it is hard for the pseudo label method to strike a good compromise between precision and recall, since its model is trained on the noisy dataset and is not robust to the missing labels.
\begin{figure*}[t]
\flushleft
\centering
\includegraphics[width=1.0\textwidth]{figures/visualization_1.pdf}
\centering
\caption{Example results of our proposed methods and BCE from the COCO test set. Green and red color mean positive and missing labels respectively. Generally, the model trained with BCE tends to overfit to the missing labels, as their predicted probabilities are quite low.
By comparison, models trained by Hill and SPLC predict higher probabilities for missing labels, implying that they are more robust against missing labels.
We also show the results from the first and third epoch of SPLC to demonstrate the feasibility and rationality of loss correction at the early stage.
More examples are shown in Appendix B.}
\label{img:example_vis}
\end{figure*}
Moreover, Focal margin performs best when combined with SPLC, which obtains a remarkable mAP gain up to 3.43\%. The reason is that the Focal margin highlights semi-hard positives as shown in Fig.~\ref{img:posloss}, leading to more missing labels being corrected than the Focal loss.
\begin{figure}[t]
\centering
\includegraphics[width=0.40\textwidth]{figures/compare}
\caption{The precision and recall curves of missing labels during training on the COCO training set of 40\% labels left with and without SPLC. The yellow and green rectangles refer to the precision and recall of a well-trained model used for pseudo labeling with the thresholds between 0.4 and 0.6.}
\label{img:SPLC_pr}
\end{figure}
\subsubsection{Probability distribution analysis}
\label{sec:discuss}
Fig.~\ref{img:dis} illustrates the probability distributions of different methods.
As shown in Fig.~\ref{img:dis}~(g) and (h), our proposed methods clearly outperform existing works: positives and negatives are well differentiated, and the probability distributions are similar to that of BCE trained on the fully labeled COCO dataset.
Specifically, decreasing the weighting of hard negatives and correcting noisy labels mitigate the effects of mislabeled samples. Besides, paying more emphasis on semi-hard positives is conducive to the learning of positive samples.
More concretely, from Fig.~\ref{img:dis}~(a) and (b), it can be observed that many positives are incorrectly identified under the impact of missing labels.
Comparing Fig.~\ref{img:dis}~(c) and (d), we observe that more positives are recalled by the Focal margin loss than by the Focal loss, which indicates that semi-hard mining is conducive to multi-label image recognition.
As for ASL~(Fig.~\ref{img:dis}~(e)), although it demonstrates the effectiveness of down-weighting negatives, it only down-weights extremely hard negatives~($p>0.8$), leaving semi-hard negative samples~($p\in[0.5,0.8]$) with large weights.
As a result, overly large weights for hard negatives and overly small weights for easy ones lead to insufficient distinction between positives and negatives.
Despite the higher mAP obtained by ROLE compared with baseline methods, its distribution suffers from insufficient discrimination, especially for the negatives, as shown in Fig.~\ref{img:dis}~(f).
Moreover, ROLE requires additional memory to store the label matrix for online estimation, which is not negligible for large-scale datasets.
Furthermore, we give examples of the predicted probabilities from different methods in Fig.~\ref{img:example_vis}. Hill and SPLC present better robustness against missing labels, as they allocate relatively higher probability values to missing labels than BCE.
\subsection{Comparisons with Existing Methods}
\subsubsection{Different missing ratios on COCO}
Table~\ref{tab:maintable} summarizes the results of different methods on COCO under different missing ratios. As the missing ratio increases, model performance degrades and the choice of loss function becomes more significant. Though Weak Assume Negatives~(WAN) and Label Smoothing~(LS) are stronger baselines than BCE, the proposed Hill and SPLC show considerable superiority.
Specifically, for the loss re-weighting methods, Hill significantly surpasses Focal and ASL by +3.49\% and +2.45\% respectively under the setting of 40\% labels left.
For the loss correction methods, BCE + pseudo label is a two-stage method: we first train a model using BCE, then use the trained model to correct the data, and finally retrain the model. ROLE requires additional space to store the label prediction matrix. In contrast, SPLC corrects missing labels during training and needs no additional storage space. Therefore, SPLC improves the performance without any bells and whistles.
\begin{table*}[t]
\caption{Comparison of mAP, CF1 and OF1 with other methods on COCO under different missing ratios.}
\begin{center}
\begin{tabular}{lccccccccc}
\hline
& \multicolumn{3}{c}{75\% labels left}&
\multicolumn{3}{c}{40\% labels left}&
\multicolumn{3}{c}{single label}
\\
&
\multicolumn{1}{c}{mAP} &
\multicolumn{1}{c}{CF1} &
\multicolumn{1}{c}{OF1} &
\multicolumn{1}{c}{mAP} &
\multicolumn{1}{c}{CF1} &
\multicolumn{1}{c}{OF1} &
\multicolumn{1}{c}{mAP} &
\multicolumn{1}{c}{CF1} &
\multicolumn{1}{c}{OF1}
\\
\hline
BCE~(full labels) & 80.32 & 74.89 & 78.95 & - & - & - &- &- & - \\
BCE & 76.81 & 67.73 & 71.07 & 70.49 & 45.81 & 40.42 & 68.57 & 43.78 & 37.72 \\
WAN~\cite{cole2021multi} & 77.25 & 72.19 & 75.86 & 72.05 & 62.77 & 64.87 & 70.17 & 58.02 & 58.60 \\
BCE-LS~\cite{cole2021multi} & 78.27 & 67.77 & 71.10 & 73.13 & 44.92 & 41.17 & 70.53 & 40.85 & 37.26 \\
\hline
Loss re-weighting: \\
\hline
Focal~\cite{lin2017focal} & 76.95 & 68.41 & 71.39 & 71.66 & 48.67 & 44.01 & 70.19 & 47.02 & 41.37 \\
ASL~\cite{ben2020asymmetric} & 77.97 & 69.67 & 72.54 & 72.70 & 67.72 & 71.67 & 71.79 & 44.77 & 37.86 \\
\textbf{Hill~(Ours)} & \textbf{78.84} & \textbf{73.57} & \textbf{77.31} & \textbf{75.15} & \textbf{68.56} & \textbf{73.08} & \textbf{73.17} & \textbf{65.47} & \textbf{69.51} \\
\hline
Loss correction: \\
\hline
BCE + pseudo label & 77.05 & 68.18 & 71.47 & 71.46 & 52.17 & 46.48 & 69.77 & 48.91 & 42.99 \\
ROLE~\cite{cole2021multi} & 78.43 &72.90 & \textbf{77.03} & 73.67 & 66.73 & 70.65 & 70.90 &57.59 & 61.16 \\
\textbf{\textbf{Focal margin + SPLC}~(Ours)} & \textbf{78.44} & \textbf{73.18} & 76.55 & \textbf{75.69} & \textbf{67.91} & \textbf{73.33} & \textbf{73.18} & \textbf{61.55} & \textbf{67.35} \\
\hline
\end{tabular}
\end{center}
\label{tab:maintable}
\end{table*}
\begin{table}[t]
\caption{Comparison of mAP with other methods on the VOC-single label and NUS-single label datasets.}
\begin{center}
\scriptsize
\begin{tabular}{lcccccc}
\hline
& \multicolumn{1}{c}{VOC-single}&
\multicolumn{1}{c}{NUS-single}
\\
\hline
BCE~(full labels) & 89.04 & 60.63 \\
BCE & 85.55 & 51.71 \\
WAN~\cite{cole2021multi} & 87.03 & 52.89 \\
BCE-LS~\cite{cole2021multi} & 87.23 & 52.47 \\
\hline
Loss re-weighting: \\
\hline
Focal~\cite{lin2017focal} & 86.83 & 53.58 \\
ASL~\cite{ben2020asymmetric} & 87.33 & 53.92 \\
\textbf{Hill~(Ours)} & \textbf{87.75} & \textbf{55.03} \\
\hline
Loss correction: \\
\hline
BCE + pseudo label & \textbf{89.67} & 51.79 \\
ROLE~\cite{cole2021multi} & 89.00 & 50.61\\
\textbf{\textbf{Focal margin + SPLC}~(Ours)} & 88.07 & \textbf{55.18} \\
\hline
\end{tabular}
\end{center}
\label{tab:voc&nus}
\end{table}
\begin{table}[t]
\caption{Comparison of mAP with other methods on the Open Images-single label dataset.}
\begin{center}
\scriptsize
\begin{tabular}{ccccccc}
\hline
&
\multicolumn{1}{c}{BCE}&
\multicolumn{1}{c}{Focal}&
\multicolumn{1}{c}{ASL}&
\multicolumn{1}{c}{Hill} &
\multicolumn{1}{c}{SPLC}
\\
\hline
Open Images & 60.83 &62.14 &61.95 & \textbf{62.71} & \textbf{62.86} \\
\hline
\end{tabular}
\end{center}
\label{tab:openimages}
\vspace{-5mm}
\end{table}
\subsubsection{Single label on VOC, Nus-wide and Open Images}
The single positive label results on VOC and NUS are shown in Table~\ref{tab:voc&nus}.
Hill and SPLC improve mAP performance remarkably on the NUS-single label dataset, but do not achieve obvious improvements on VOC.
The reason may be that the VOC dataset only contains 20 labels, resulting in a relatively balanced ratio of positives and negatives.
To further validate the effectiveness of proposed methods on extreme multi-label classification, we conduct experiments on Open Images with 567 labels as shown in Table~\ref{tab:openimages}.
It can be observed that both Hill and SPLC achieve remarkable improvements over common loss functions on Open Images, demonstrating that the proposed methods are robust on large-scale datasets and in extreme classification cases.
\section{Experiment}
In this section, we first introduce the experimental settings. Second, we compare the proposed methods with existing classic loss functions on four multi-label datasets.
Third, an ablation study and a comprehensive analysis are given to delve into the working mechanisms of Hill and SPLC.
Finally, further discussion and experiments are provided to answer common questions arising in implementation.
\subsection{Experimental Settings}
\subsubsection{Datasets}
We conduct experiments on multiple standard multi-label benchmarks: MS-COCO~(COCO)~\cite{lin2014microsoft}, PASCAL VOC 2012~(VOC)~\cite{everingham2011pascal}, NUS-WIDE~(NUS)~\cite{chua2009nus}, and the large-scale Open Images~\cite{kuznetsova2020open}. COCO~\cite{lin2014microsoft} consists of 82,081 images with 80 classes for training and 40,137 images for test. VOC~\cite{everingham2011pascal} contains 5,717 training images with 20 classes, and additional 5,823 images are used for test.
Following \cite{ben2020asymmetric}, we collect NUS-WIDE~\cite{chua2009nus} with 119,103 training images and 50,720 test images, covering 81 classes.
Because many download URLs of Open Images~\cite{kuznetsova2020open} have expired, we were able to collect 1,742,125 training images and 37,306 test images, which contain 567 unique classes.
We construct the training sets of missing labels by randomly dropping positive labels for each training image with different ratios.
The number of remaining labels per image $n_r(x)=\lfloor n(x) \times (1-r)\rfloor+1$, where $n(x)$ refers to the number of all positive labels per image and $r$ controls the ratio of missing labels.
In particular, $r=1$ means single positive label per image, the same setting as~\cite{cole2021multi}. We report results for $r\in\{0.5, 0.8, 1\}$ respectively on COCO and single positive label ($r=1$) results on VOC, NUS and Open Images.
Note that the test sets are fully labeled.
The detailed statistics of datasets are shown in Table~\ref{tab:dataset}.
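The dropping rule above can be sketched as follows; which positives are kept per image is assumed to be sampled uniformly:

```python
import math
import random

def remaining_labels(n_pos, r):
    """Number of positive labels kept per image: floor(n(x) * (1 - r)) + 1."""
    return math.floor(n_pos * (1.0 - r)) + 1

def drop_positives(pos_indices, r, rng=random):
    """Uniformly keep n_r of the image's positive labels (assumed sampling)."""
    keep = remaining_labels(len(pos_indices), r)
    return sorted(rng.sample(pos_indices, min(keep, len(pos_indices))))

# r = 1 always keeps a single positive label per image.
```
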
\begin{table}[t]
\caption{The statistics of different datasets.}
\begin{center}
\setlength{\tabcolsep}{1mm}{
\begin{tabular}{c|cccc}
\hline
&
\multicolumn{1}{c}{samples} &
\multicolumn{1}{c}{classes} &
\multicolumn{1}{c}{labels} &
\multicolumn{1}{c}{avg.label/img}
\\
\hline
COCO-full labels & 82,081 & 80 &241,035 &2.9 \\
COCO-75\% labels left & 82,081 & 80 &181,422 &2.2 \\
COCO-40\% labels left & 82,081 & 80 &96,251 &1.2 \\
COCO-single label & 82,081 & 80 &82,081 &1.0 \\
\hline
NUS-full labels & 119,103 & 81 &289,460 &2.4 \\
NUS-single label & 119,103 & 81 &119,103 &1.0 \\
\hline
VOC-full labels & 5,823 & 20 &8,331 &1.4 \\
VOC-single label & 5,823 & 20 &5,823 &1.0 \\
\hline
Open Images-full labels & 1,742,125 & 567 & 4,422,289 & 2.54\\
Open Images-single label & 1,742,125 & 567 & 1,742,125 & 1.0 \\
\hline
\end{tabular}
}
\end{center}
\label{tab:dataset}
\end{table}
\subsubsection{Evaluation metrics}
Following previous multi-label image classification research~\cite{ben2020asymmetric,cole2021multi}, mean Average Precision~(mAP) is used as the primary metric to evaluate model performance.
We also report the overall F1-measure~(OF1) and per-category F1-measure~(CF1) with the positive threshold of 0.5.
Precision~(P) and recall~(R) results are reported in Appendix B.
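For concreteness, OF1 pools true/false positives over all classes, whereas CF1 averages per-class F1 scores; a minimal sketch at the 0.5 threshold follows (function names are ours, not from any released code).

```python
def f1(tp, fp, fn):
    """F1 from raw counts, with zero-division guards."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def of1_cf1(y_true, y_score, thres=0.5):
    """Overall F1 pools TP/FP/FN across classes; per-category F1 averages class-wise F1."""
    n_cls = len(y_true[0])
    tp, fp, fn = [0] * n_cls, [0] * n_cls, [0] * n_cls
    for t_row, s_row in zip(y_true, y_score):
        for c, (t, s) in enumerate(zip(t_row, s_row)):
            pred = 1 if s >= thres else 0
            if pred and t:
                tp[c] += 1
            elif pred and not t:
                fp[c] += 1
            elif t and not pred:
                fn[c] += 1
    of1 = f1(sum(tp), sum(fp), sum(fn))
    cf1 = sum(f1(tp[c], fp[c], fn[c]) for c in range(n_cls)) / n_cls
    return of1, cf1
```

The distinction matters for imbalanced datasets: OF1 is dominated by frequent classes, while CF1 weights every class equally.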
\subsubsection{Implementation details}
We use the Resnet50~\cite{he2016deep} pretrained on ImageNet~\cite{deng2009imagenet} as the backbone network for feature extraction. The input images are uniformly resized to $448\times448$.
The optimizer is Adam with a weight decay of 1e-4, and a cyclic learning rate schedule~\cite{smith2019super} is used with a maximum learning rate of 1e-4. Following~\cite{ben2020asymmetric}, we use the exponential moving average trick for stronger baselines.
\subsubsection{Compared methods}
We first compare our methods with three classic baselines.
Among them, Weak Assume Negatives~(WAN) and Label Smoothing~(LS)~\cite{cole2021multi} are stronger baselines than BCE in MLML.
Second, we compare the Hill loss with the loss re-weighting methods Focal~\cite{lin2017focal} and the recently proposed ASL~\cite{ben2020asymmetric}.
Third, we choose the loss correction methods, including Pseudo Label~\cite{lee2013pseudo} and Regularized Online Label Estimation~(ROLE)~\cite{cole2021multi}, to validate the effectiveness of SPLC.
For a fair comparison, we reproduce these methods using the authors' open-source code under the same dataset configuration and adopt the hyper-parameters recommended in their papers.
\section{Introduction}
\IEEEPARstart{R}{ecognizing} multiple labels of a given image, also known as multi-label image recognition, is an important and practical computer vision task, as images are intrinsically multi-labeled, {\it{e.g.}}, containing multiple objects, conditions, or scene types. In fact, a large portion of the single-label ImageNet dataset consists of images with multiple labels. It has been shown in~\cite{yun2021re} that re-labeling ImageNet~\cite{deng2009imagenet} from single-label to multi-label can yield remarkable performance improvements.
A main challenge for multi-label learning is the difficulty of collecting ``full'' labels for supervised training. When the number of categories is large, it is difficult and even impractical to annotate the full label set for each training image. As a result, images are usually annotated with partial labels, and the available training data are hence often plagued by false negatives due to mislabeling. In this context, multi-label learning in the presence of missing labels~(MLML) has attracted much research attention~\cite{yu2014large,durand2019learning,huynh2020interactive,cole2021multi}.
Generally, learning in the presence of missing labels is not straightforward, since a given negative label admits two possibilities: {\it{1)}}~it is truly negative, i.e., the corresponding class does not appear in the image; or {\it{2)}}~it is truly positive but simply not identified by the labeling mechanism.
Existing works on MLML mainly utilize label correlation or transition matrix to estimate the missing labels by modifying the network or training procedures~\cite{huynh2020interactive,chu2018deep,bucak2011multi}, which typically require extra complex architectures or training schemes.
Besides, missing labels can also be regarded as a branch of label noise, where noise-robust loss design has shown remarkable performance in single-label recognition tasks~\cite{liu2015classification,zhang2018generalized,ren2018learning,wang2019symmetric}. However, robust loss design remains underexplored in multi-label learning.
\begin{figure}[t]
\centering
\includegraphics[width=.45\textwidth]{figures/fig1}
\caption{(a) Illustration of multi-label learning with missing labels. (b) Illustration of our proposed Hill loss and self-paced loss correction~(SPLC) for negatives.}
\label{img:beginning}
\end{figure}
This work aims to provide some insights into MLML through investigating the efficacy of robust loss function.
In particular, we demonstrate that a careful design of the loss function can greatly boost the robustness against the missing labels and the performance of classification accuracy, while still maintaining a simple and efficient solution, based on standard architectures and training schemes.
This benefits from the distinctive observation and characteristic in multi-label learning.
We observe that the predicted probability from a model can be used to identify the false negatives~(missing labels) with a surprisingly high precision even at the early stage of training.
Specifically, it is well known that the multiple binary classifiers suffer from an imbalance between positives and negatives, with negatives dominating positives. Consequently, predicted positives are inclined to be true positives~(including labeled positives and missing labels) with high confidence.
Inspired by this observation, we propose two simple yet effective methods to diminish the effect of missing labels.
{\it{1)}}~The first is a new robust loss for negatives, namely the Hill loss, which is derived by re-weighting the MSE loss to alleviate the effect of false negatives.
Specifically, the Hill loss puts less weight on potential false negatives and easy ones, but more weight on middle semi-hard ones to make the training of the model insensitive~(robust) to false negatives.
{\it{2)}}~The second is a \textbf{S}elf-\textbf{P}aced \textbf{L}oss \textbf{C}orrection~(termed \textbf{SPLC}) method, which uses a loss derived from the maximum likelihood~(ML) criterion under an approximate probability distribution of the missing labels.
Different from traditional correction techniques utilizing extra networks~\cite{chen2019multi,pineda2019elucidating} or extra training procedure~\cite{bucak2011multi}, the SPLC method gradually corrects missing labels based on model prediction in an automatically self-paced manner.
Moreover, we propose the Focal margin loss to exploit semi-hard positives by a simple modification to the Focal loss~\cite{lin2017focal}.
Improvements brought by the Focal margin demonstrate that semi-hard mining is more appropriate than hard mining for positives.
Extensive experiments show that our approaches significantly outperform existing methods while introducing no extra complex architectures or training schemes.
The main contributions are summarized as follows:
\begin{itemize}
\item An interesting observation in multi-label learning that a model can identify false negatives~(missing labels) even in the early stage of training;
\item A systematic loss study on the problem of multi-label learning with missing labels that can simplify the pipeline in this field;
\item A novel Hill loss to re-weight negatives and a simple self-paced loss correction method that are robust against missing labels;
\item Comprehensive experiments that validate the effectiveness of our approaches and establish the new state-of-the-arts of loss function in MLML on multiple datasets.
\end{itemize}
\subsection{Semi-hard Mining for Positives}
In addition to dealing with negatives via Hill and SPLC, we also delicately tailor the loss for positives to exploit the feature of multi-label learning.
In fact, hard-mining methods that can yield improvement in single-label learning are not widely used in multi-label learning.
For example, the recent state-of-the-art method ASL only adopts the naive BCE loss for positives.
Here, we attempt to explain why hard-mining does not perform well in multi-label learning by the comparative analysis of Fig.~\ref{img:observation} (b) and (d).
It indicates that excessively hard mining brought by Focal loss is inappropriate for multi-label learning.
Specifically, despite reducing the number of hard positives~(low probability, below $0.2$), Focal loss pushes only about 1.5\% of the hard positives to a high probability~(above 0.5).
Meanwhile, Focal loss ignores a large proportion of semi-hard ones with moderate probability~({\it{e.g.}}, in the region $[0.3, 0.5]$).
Therefore, we seek to further emphasize semi-hard positives compared with Focal loss.
An intuitive approach is to subtract a margin from the logits. The logits are then treated as smaller values and receive larger gradients, owing to the shape of the Focal loss. Meanwhile, thanks to the characteristic ``S''-shaped curve of the sigmoid function, classes with probability in the middle region~(logit values around 0, corresponding to probabilities around 0.5) are affected most,
meaning that they are given larger gradients compared with those at both ends~(easy and hard samples).
Formally, we propose the Focal margin loss as:
\begin{equation}
L^+_{Focal\;margin} = (1-p_{m})^\gamma\log(p_{m})
\end{equation}
where $p_{m} = \sigma(x-m)$ and $m$ is a margin parameter. Focal margin loss degrades to Focal loss when $m=0$. $\gamma$ is set to 2, a commonly used value in Focal loss.
The gradients of BCE and Focal margin losses with different $m$ are shown in Fig.~\ref{img:posloss}. It can be observed that Focal margin endows larger weight on semi-hard positives compared with Focal loss. In our work, $m\in[0.5,1.0]$ is recommended to avoid the highlight of easy positives.
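The effect of the margin on semi-hard positives can be checked numerically. The sketch below (with helper names of our own choosing) compares the gradient magnitude of the plain Focal loss~($m=0$) and the Focal margin loss~($m=1$) at a semi-hard logit $x=0$, i.e., $p=0.5$:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def focal_margin_pos(x, m=1.0, gamma=2.0):
    """Positive-label Focal margin term (1 - p_m)^gamma * log(p_m), p_m = sigmoid(x - m)."""
    p_m = sigmoid(x - m)
    return (1.0 - p_m) ** gamma * math.log(p_m)

def grad_wrt_logit(f, x, eps=1e-5):
    """Central finite difference of f at logit x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# At the semi-hard point x = 0 the m = 1 variant yields a noticeably larger
# gradient than plain Focal loss (m = 0), consistent with Fig. 5.
g_focal = grad_wrt_logit(lambda x: focal_margin_pos(x, m=0.0), 0.0)
g_margin = grad_wrt_logit(lambda x: focal_margin_pos(x, m=1.0), 0.0)
```

This mirrors the gradient curves in Fig.~\ref{img:posloss}: the margin shifts the focus of the Focal weighting from hard positives toward semi-hard ones.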
Actually, the introduction of margin is widely used in the design of loss function, most notably in SVMs~\cite{cortes1995support}.
Recently, Large-Margin Softmax~\cite{liu2016large} and Arcface~\cite{deng2019arcface} have been proposed to minimize intra-class variation and at the meantime enlarge the inter-class margin by using the angular margin. This paper is the first work to perform semi-hard mining through the combination of margin and sigmoid activation.
To sum up, the Focal margin loss for positives is integrated with the Hill loss and SPLC for negatives, respectively, into our final approaches~(denoted by Hill and Focal margin + SPLC).
\subsection{Re-weighting Negatives into a ``Hill''}
A straightforward method to alleviate the effect of missing labels is to design a robust loss that is insensitive to false negatives. Generally, the prediction probability of a well-trained model should be closer to 1 than 0 for false negatives. Hence, the effect of false negatives can be mitigated by putting less weight on the prediction probability closing to 1 in the loss for negative labels. As shown in Fig.~\ref{img:negloss}, the Mean Squared Error~(MSE) loss naturally satisfies this desirable robustness property, as it has a low weight on prediction probability closing to 1. Hence, it would be more robust than the BCE loss and can yield a better performance in the presence of missing labels.
To further improve the robustness against false negatives, we propose the Hill loss by re-weighting the MSE loss as
\begin{equation}
\begin{aligned}
\mathcal{L}_{Hill}^- &= -w(p)\times MSE\\
&=-(\lambda-p){p}^2.
\end{aligned}
\end{equation}
The weighting term $w(p)=\lambda-p$ is designed to down-weight the loss for possibly false negatives. To fulfill this goal, considering a typical threshold of 0.5 for sigmoid-based binary classification, the hyper-parameter $\lambda$ can be selected to satisfy $\frac{\partial^2L_{Hill}^-}{\partial x^2}=0$ for $p=\sigma(x)=0.5$. Accordingly, the solution is given by $\lambda=1.5$, with which we obtain the Hill loss as illustrated in Fig.~\ref{img:negloss}. Clearly, it can be seen from Fig.~\ref{img:negloss} that the Hill loss has less weight than the MSE loss for $p\geq0.5$. Hence, it can be expected to achieve more robust performance against missing labels than MSE.
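The choice $\lambda=1.5$ can be verified numerically: the second derivative of the Hill loss with respect to the logit vanishes at $p=\sigma(x)=0.5$ for $\lambda=1.5$ but not for other values. A minimal sketch (helper names are ours):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hill_neg(x, lam=1.5):
    """Hill loss for a negative label: -(lambda - p) * p^2 with p = sigmoid(x)."""
    p = sigmoid(x)
    return -(lam - p) * p * p

def second_derivative(f, x, eps=1e-4):
    """Central finite-difference estimate of f''(x)."""
    return (f(x + eps) - 2 * f(x) + f(x - eps)) / (eps * eps)

# At p = 0.5 (logit x = 0) the curvature vanishes for lambda = 1.5,
# whereas e.g. lambda = 1.0 leaves a clearly non-zero curvature.
c_opt = second_derivative(lambda x: hill_neg(x, 1.5), 0.0)
c_off = second_derivative(lambda x: hill_neg(x, 1.0), 0.0)
```

Analytically, the gradient is $\partial L/\partial x = p^2(1-p)(3p-2\lambda)$, and its derivative at $p=0.5$ is proportional to $0.75-0.5\lambda$, which is zero exactly at $\lambda=1.5$.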
\begin{figure}[t]
\centering
\includegraphics[width=.40\textwidth]{figures/neg}
\caption{Gradient analysis of Hill loss~(shaped like a hill) in comparison with current loss functions. Hill loss puts more attention on semi-hard samples, while potential false negatives with high probability are delivered less emphasis.}
\label{img:negloss}
\end{figure}
Fig.~\ref{img:negloss} compares the gradients of different loss functions for negative labels. It is noteworthy that each of the ASL, MSE and Hill losses has less weight on possible false negatives compared with the BCE loss. However, ASL can still be seriously affected by missing labels, since it only down-weights the region very close to 1, {\it{e.g.}}, $p > 0.9$. In fact, a large part of missing labels has a prediction probability less than 0.9, as shown in Fig.~\ref{img:observation}. Though the ASL loss can be adjusted by the parameters in Eq.~(4), the adjustable extent is restricted, which is analyzed in detail in Appendix B.
\iffalse
\begin{algorithm}[t]
\caption{Self-Paced Loss Correction.}
\label{alg:SPLC}
\textbf{Input:}
Predicted results processed by sigmoid $p=\{p_1,p_2,...,p_k\}$, ground truth $y=\{y_1,y_2,...,y_k\}$, threshold parameter $thres$, epoch num $epo$.\\
\textbf{Output:}
$loss$
\begin{algorithmic}[1]
\STATE Define $loss^+(), loss^-()$ for positives and negatives
\STATE $loss=0$
\REPEAT
\IF{$epo>1$}
\FOR{$i=1$ to $k$}
\IF{$y_i=0$ and $p_i<thres$}
\STATE $loss \leftarrow loss^-(x_i)$
\ELSE
\STATE $loss \leftarrow loss^+(x_i)$
\ENDIF
\ENDFOR
\ENDIF
\RETURN loss
\UNTIL{Convergence}
\end{algorithmic}
\end{algorithm}
\fi
\subsection{An Observation in MLML}
\begin{figure}[t]
\subfigure{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=1.7in]{figures/distribution-bce/bce-epoch2.pdf}
\scriptsize (a)~The early stage of BCE
\centering
\includegraphics[width=1.7in]{figures/distribution-bce/focal-epoch1-0.4.pdf}
\scriptsize (c)~The early stage of Focal
\end{minipage}%
}%
\subfigure{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=1.7in]{figures/distribution-bce/bce-epochbest-overfitting.pdf}
\scriptsize (b)~The late stage of BCE
\centering
\includegraphics[width=1.7in]{figures/distribution-bce/focal-epochbest-overfitting.pdf}
\scriptsize (d)~The late stage of Focal
\end{minipage}%
}%
\centering
\caption{Probability distribution of a network prediction at the early and late stages of training on the COCO training set with 40\% labels left. (a) and (c) indicate that false negatives can be identified with a high precision but a low recall at the early training stage. (b) and (d) show that the network predictions for missing labels share a similar distribution to that for true negatives, which implies that the network over-fits to labeled positives at the late stage of training.}
\label{img:observation}
\end{figure}
Under a typical missing label setting, Fig.~\ref{img:observation} shows the probability distribution of a network prediction at the early and late stages of training with BCE and Focal loss. We make an observation:
\emph{Missing labels~(false negatives, FNs) can be distinguished from true negatives~(TNs) according to the predicted probabilities of a network at the early training stage.}
Fig.~\ref{img:observation}(a) and (c) show that although most missing labels are poorly classified at the early training stage, partial missing labels can be identified with high precision but low recall by setting a proper threshold. For example, when using the BCE loss, partial FNs can be distinguished from TNs with a threshold of 0.3 on prediction probability. Similarly, when using the Focal loss, such a threshold can be set to 0.4.
The reason behind this phenomenon is the distinctive features in multi-label learning:
{\it{1)}}~the domination of negatives over positives in multiple binary classifiers: in the multi-label setting, an image typically contains a few positives but many more negatives, which results in serious positive-negative imbalance~\cite{ben2020asymmetric};
{\it{2)}}~missing labels exacerbate the positive-negative imbalance and plague the learning of recognizing positives.
Therefore, the low ratio of positives makes the model conservative in predicting positives; once it does predict one, we can be confident the prediction is correct.
Furthermore, it can be seen from Fig.~\ref{img:observation}(b) and (d) that the network tends to over-fit at the late stage of training. As a result, the distribution of the network predictions for missing labels~(FNs) is similar to that for TNs, which makes the missing labels indistinguishable from true negatives.
That is, the network tends to prioritize learning simple patterns~(e.g., true labels) first before eventually memorizing~(over-fitting to) hard ones~(e.g., noisy/false labels).
This phenomenon accords well with the mainstream viewpoint in noise-robust learning, the \textit{memorization effect}: ``deep neural networks easily fit random labels''~\cite{zhang2021understanding} and ``deep neural networks~(DNNs) learn simple patterns first, before memorizing''~\cite{arpit2017closer}.
In light of this, it would be beneficial to exploit these distinguishable missing labels at the early stage of training rather than the late stage.
The above analysis sheds some light on how to relieve the effect of false negatives in the presence of missing labels. In the following, we propose two approaches via down-weighting and correcting potential missing labels, respectively.
\label{sec_observation}
\section{Simple and Robust Loss Design for MLML}
In this section, we first review classic losses in multi-label learning. Then, we make an observation in the scenario of missing labels, based on which the Hill loss and the SPLC method are introduced to deal with the probably mislabeled negatives. Furthermore, we make a simple modification on the positive part of the Focal loss to highlight semi-hard positives.
\subsection{Preliminary}
Multi-label classification is usually transformed into a multiple binary classification problem, in which multiple binary classifiers are trained to predict one-vs.-rest for each label independently.
Given $K$ labels, the network predicts the logit $x_i$ of the $i$-th label independently, and the probability is obtained by normalizing the logit with the sigmoid function as $p_{i} = \sigma(x_i) = \frac{1}{1+e^{-x_i}}$. Letting $y_i$ denote the label of the $i$-th class, the binary classification loss is generally given by
\begin{equation}
\mathcal{L} = - \sum_{i=1}^K \left( y_i L^+_i + (1-y_i)L^-_i \right),
\end{equation}
where $L_i^+$ and $L_i^-$ stand for the positive and negative losses of $i$-th label, respectively. For simplicity, in the sequel we ignore the subscript $i$ in $L_i^+$ and $L_i^-$ to use $L^+$ and $L^-$.
The Binary Cross Entropy~(BCE) loss is the most popular loss function in multi-label classification, which is defined as
\begin{equation}
\begin{cases}
&L_{BCE}^+ = \log(p) \\
&L_{BCE}^- = \log(1-p)
\end{cases}.
\end{equation}
Further, the Focal loss~\cite{lin2017focal} is proposed to address the problem of positive-negative imbalance and hard-mining, which is given by
\begin{equation}
\begin{cases}
&L_{Focal}^+ = \alpha_+(1-p)^\gamma\log(p) \\
&L_{Focal}^- = \alpha_-p^\gamma\log(1-p)
\end{cases},
\end{equation}
where $\gamma$ is a focus parameter, and $\alpha_+$ and $\alpha_-$ are utilized to balance positives and negatives. With $\gamma>0$, hard samples receive more attention.
More recently, ASL~\cite{ridnik2021asymmetric} is proposed to relieve positive-negative imbalance by operating differently on positives and negatives, which is defined as:
\begin{equation}
\begin{cases}
&L_{ASL}^+ = (1-p_m)^{\gamma_+}\log(p_m) \\
&L_{ASL}^- = p_m^{\gamma_-}\log(1-p_m)
\end{cases},
\end{equation}
where $\gamma_+$ and $\gamma_-$~(with $\gamma_+<\gamma_-$) are focus parameters for positive and negative labels, respectively, and $p_m=\max(p-m,0)$. The probability margin $m \geq 0$ is a tunable hyper-parameter. The ASL loss reduces the weight of easy negatives by using $\gamma_+ < \gamma_-$, and discards negatives with low predicted probability via the $m$-shifted probability.
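For concreteness, the negative-label terms of BCE, Focal, and ASL in Eqs.~(2)--(4) can be compared numerically. The sketch below uses illustrative hyper-parameters~($\gamma=2$ for Focal; $\gamma_-=4$ and $m=0.05$ for ASL), not necessarily the tuned values from the original papers:

```python
import math

def bce_neg(p):
    """BCE negative-label term: log(1 - p)."""
    return math.log(1.0 - p)

def focal_neg(p, gamma=2.0):
    """Focal negative-label term: p^gamma * log(1 - p)."""
    return p ** gamma * math.log(1.0 - p)

def asl_neg(p, gamma_neg=4.0, m=0.05):
    """ASL negative-label term with shifted probability p_m = max(p - m, 0)."""
    p_m = max(p - m, 0.0)
    return p_m ** gamma_neg * math.log(1.0 - p_m)
```

With the probability margin, ASL assigns exactly zero loss to very easy negatives~($p\leq m$) and strongly down-weights low-probability negatives, while Focal down-weights them only polynomially relative to BCE.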
\subsection{Self-Paced Loss Correction}
This subsection presents a loss correction method to alleviate the effect of missing labels. It gradually corrects potential missing labels during training, hence the name \textbf{S}elf-\textbf{P}aced \textbf{L}oss \textbf{C}orrection~(\textbf{SPLC}).
SPLC can be viewed as an approximate maximum likelihood~(ML) method, as it adopts a loss derived from the ML criterion under an approximate distribution of missing labels.
First, for the full label setting~(without missing labels), the BCE loss (Eq.~(2)) is optimal from the ML criterion under Bernoulli distribution
\begin{equation}
P_r(y) = p^y(1-p)^{1-y} =
\begin{cases}
p, &y=1\\
1-p, & y=0
\end{cases}.
\end{equation}
Meanwhile, minimizing Eq.~(2) is equivalent to minimizing the KL divergence between the label and the model output. In the presence of missing labels, for a negative label $y=0$, let $q\in[0,1]$ be the probability that its corresponding class is truly negative, and $1-q$ the probability that it is a false negative~(caused by mislabeling).
In this setting, the distribution in Eq.~(6) no longer applies, and a modified distribution is:
\begin{equation}
\begin{aligned}
P_r(y,s) &= (qp)^y {\left\{(1-p)^{s} [(1-q)p]^{1-s} \right\}}^{1-y} \\ &=
\begin{cases}
qp, &y=1\\
1-p, & y=0,s=1 \\
(1-q)p, &y=0,s=0
\end{cases},
\end{aligned}
\end{equation}
where $s\in\{0,1\}$ is an indicator variable for true and false negatives, with $s=0$ standing for a false negative.
With this distribution, the optimal loss from the ML criterion is given by
\begin{equation}
\begin{cases}
&L^+ = \log(p) \\
&L^- = s\log(1-p) + (1-s)\log(p)
\end{cases}.
\end{equation}
Owing to the labeling mechanism, it is not known a priori which negative labels are missing, {\it{i.e.}}, $s$ is not known a priori. Hence, the loss in Eq.~(8) cannot be directly used for training.
However, in practice the missing labels can be empirically identified based on the model prediction as analyzed in Sec.~\ref{sec_observation}.
Hence, for a given negative label $y=0$, we can set a threshold $\tau\in(0,1)$ to identify whether it is a truly negative or false negative based on the model prediction probability $p$. Specifically, for a given negative label, it would be a missing label with high probability if $p>\tau$. In light of this understanding, a new loss that is robust to missing labels can be designed by recasting Eq.~(8) into:
\begin{equation}
\begin{cases}
&L^+ = \log(p) \\
&L^- = \mathbb{I}(p\leq \tau)\log(1-p) + (1-\mathbb{I}(p\leq \tau))\log(p)
\end{cases},
\end{equation}
where $\mathbb{I}(\cdot)\in\{0,1\}$ is the indicator function.
Furthermore, we can combine SPLC with the generalized loss function as
\begin{equation}
\begin{cases}
&L_{SPLC}^+ = loss^+(p) \\
&L_{SPLC}^- = \mathbb{I}(p\leq \tau)loss^-(p) + (1-\mathbb{I}(p\leq \tau))loss^+(p)
\end{cases},
\end{equation}
where $loss^+(\cdot)$ and $loss^-(\cdot)$ refer to generalized loss functions for positives and negatives, respectively.
In implementation, SPLC corrects a negative label whenever its predicted probability exceeds a fixed threshold during training. It has two main advantages over the pseudo-label method: {\it{1)}}~SPLC is more efficient: in the pseudo-label method, a trained model is used to correct the labels and the network is then retrained, whereas in SPLC the network only needs to be trained once. {\it{2)}}~SPLC is more accurate: in the pseudo-label method, the network used for label correction is trained to convergence on noisy data, leading to poor capability in recognizing missing labels, whereas correcting missing labels at the early stage of training with SPLC reduces the influence of noisy data.
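A minimal per-label sketch of Eq.~(10) follows, instantiated with the BCE-style terms of Eq.~(9). The threshold value $\tau=0.6$ and the function names are illustrative; in practice one may also delay the correction past the first epoch so that early, near-random predictions are not trusted.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def splc_loss(x, y, tau=0.6, loss_pos=None, loss_neg=None):
    """Eq. (10): treat an annotated negative as a positive once p > tau."""
    p = sigmoid(x)
    loss_pos = loss_pos or (lambda p: math.log(p))
    loss_neg = loss_neg or (lambda p: math.log(1.0 - p))
    if y == 1:
        return loss_pos(p)
    # Annotated negative: correct it on the fly if the model is confident
    # that the class is actually present (a likely missing label).
    return loss_neg(p) if p <= tau else loss_pos(p)
```

Because `loss_pos` and `loss_neg` are passed in, the same correction rule combines directly with Focal margin or any other per-label loss, as in Eq.~(10).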
\iffalse
The threshold $thres$ can be adapted from the distribution of predicted results in every mini-batch. In every mini-batch, positive and negative predicted results are represented as $p_{pos}, p_{neg}$, respectively. The threshold is calculated as:
\begin{equation}
\begin{aligned}
\bar{p}_{neg} = mean(p_{neg}) \\
\epsilon_{neg} = std(p_{neg}) \\
thres = mean(p_{pos}) \\
\end{aligned}
\end{equation}
\fi
\begin{figure}[t]
\centering
\includegraphics[width=.40\textwidth]{figures/pos}
\vspace{-.5em}
\caption{Gradient analysis of Focal margin loss with different margins in comparison with BCE. Without margin~($m=0$), Focal margin degrades into Focal loss, focusing on hard samples, whereas a proper margin~($m\in[0.5,1.0]$) enables Focal margin to highlight semi-hard samples.}
\vspace{-1.5em}
\label{img:posloss}
\end{figure}
\section{Related Work}
\subsection{Multi-label Learning with Missing Labels}
Multi-label learning with incomplete labels has been a hot topic in the community of weakly-supervised learning, due to the difficulty of annotating all the ground-truth labels~\cite{liu2021emerging}.
There are two popular related research topics for incomplete labels, Partial Multi-Label Learning ~(PML)~\cite{xu2019partial, xie2021partial}, and Multi-label Learning with Missing Labels~(MLML)~\cite{yu2014large,durand2019learning}.
To avoid the ambiguity of settings, we clarify the difference in Table~\ref{tab:missing kind}.
PML requires labeling all positive labels, which possibly introduces extra false positive labels and costs more. In other words, methods designed for PML typically rely on a candidate label set that includes all the positive labels occurring in the image.
By comparison, not all positive labels are required in MLML, which is more practical due to the lower labeling cost.
In this work, we further relax the MLML setting: no exact negative labels~\cite{durand2019learning} are utilized, and the number of positive labels is not limited to one~\cite{cole2021multi}.
This makes our scenario more flexible and general for real-world applications.
\begin{table}[t]
\scriptsize
\caption{Comparison among different settings in multi-label classification. \Checkmark (resp. \XSolidBrush, \textcircled{}) means a label is present (resp. absent, unknown). Falsely marked labels are in red color. PML requires all positive labels are annotated. By comparison, MLML relaxes the requirement of annotations.}
\centering
\begin{tabular}{c|ccccc}
\hline
Settings & label1 & label2 & label3 & label4 & label5
\\
\hline
Full labels & \Checkmark &\Checkmark &\XSolidBrush &\XSolidBrush &\XSolidBrush \\
PML &\Checkmark &\Checkmark &\textcolor{red}{\Checkmark} &\textcircled{} &\textcircled{}\\
MLML &\Checkmark &\textcolor{red}{\textcircled{}} &\textcircled{} &\textcircled{} &\textcircled{}\\
\hline
\end{tabular}
\label{tab:missing kind}
\end{table}
\iffalse
\begin{table}[t]
\vspace{1em}
\scriptsize
\tablestyle{8pt}{1.5}
\begin{tabular}{c|cc|cc}
\multirow{2}{*}{Settings} & \multicolumn{2}{c|}{Positive labels}&
\multicolumn{2}{c}{Negative labels}
\\
&
\multicolumn{1}{c}{True} &
\multicolumn{1}{c|}{False} &
\multicolumn{1}{c}{True} &
\multicolumn{1}{c}{False}
\\
\hline
Full labels & \checkmark & &\checkmark & \\
Partial labels & \checkmark & \checkmark &\checkmark &\\
Missing labels & \checkmark & &\checkmark & \checkmark\\
\end{tabular}
\caption{The comparison between different settings in Multi-label classification. False means that the positives~(resp. negatives) are incorrectly labeled as negatives~(resp. positives).}
\label{tab:missing kind}
\vspace{-1em}
\end{table}
\fi
Current works on MLML mainly focus on the design of networks and training schemes. The common practice is to utilize customized networks to learn label correlations or classification confidence so as to correctly recognize missing labels. \cite{zhang2018generalized} built a generative model to approximate the distributions of observed labels, by which mislabeled samples can be recognized. \cite{durand2019learning} proposed multi-stage training procedures to correct the missing labels, utilizing curriculum-learning-based strategies with the Bayesian uncertainty strategy for missing-label correction.
This domain has seen fast progress recently, at the cost of requiring more complex methods.
In contrast, this work seeks to fulfill the potential of the loss function in MLML: it builds on standard architectures and adds no extra training or inference pipelines or time.
Besides, our methods are fully complementary to existing works and can be combined with them to further improve the performance at the cost of increasing the complexity.
\subsection{Noise-robust Learning}
Noise-robust learning is an important problem in training deep neural networks~(DNNs), since label noise is ubiquitous in real-world scenarios.
Label noise can lead DNNs to overfit due to the ``memorization effect''~\cite{zhang2021understanding,arpit2017closer}, eventually degrading their generalization performance.
Sample re-weighting and loss correction are two typical approaches in this field.
Sample re-weighting methods~\cite{liu2015classification,ren2018learning} typically reduce the contribution of noisy samples by down-weighting their losses. Many researchers~\cite{hu2019noise,zhong2019unequal} have extended this idea to face recognition for noise-robust feature learning.
Loss correction refers to the modification of the loss to compensate for the incorrect guidance provided by noisy samples.
A basic approach is the pseudo label~\cite{lee2013pseudo}, which utilizes a trained model to correct annotations and then retrain the model using the updated labels.
In~\cite{arazo2019unsupervised}, researchers found that clean and noisy samples can be distinguished from the loss distribution in noisy single-label datasets. Inspired by this observation, they proposed a mixture distribution method to correct noisy labels.
\cite{cole2021multi} proposed ROLE to jointly train classifiers and label estimator, by which the true labels can be estimated and corrected in the training phase.
Though many works on loss design achieve notable success and push forward research in noise-robust learning, most~(if not all) existing works focus on the single-label task and ignore the multi-label one.
This work validates that typical noise-robust loss functions in the single-label task~(both sample re-weighting and loss correction) can also be efficiently applied in multi-label learning.
\section{}
\subsection{Gradient Analysis of ASL}
The negative gradients of ASL~\cite{ben2020asymmetric} with different $\gamma$ and $m$ are shown in Fig.~\ref{img:app_asl}.
Despite attempts with different hyper-parameters, the turning point remains limited to $p\in[0.8,0.9]$.
Consequently, ASL still places too much weight on samples that are most likely missing labels~(false negatives with $p>0.5$).
\begin{figure}[ht]
\subfigure{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=1.7in]{figures/appendix/asl-1}
\scriptsize (a)~ASL-$\gamma=1$
\centering
\includegraphics[width=1.7in]{figures/appendix/asl-3}
\scriptsize (c)~ASL-$\gamma=3$
\end{minipage}%
}%
\subfigure{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=1.7in]{figures/appendix/asl-2}
\scriptsize (b)~ASL-$\gamma=2$
\centering
\includegraphics[width=1.7in]{figures/appendix/asl-4}
\scriptsize (d)~ASL-$\gamma=4$
\end{minipage}%
}%
\centering
\caption{Gradient analysis of ASL loss with different $\gamma$ and $m$. ASL only down-weights extremely hard negatives~($p>0.9$), but leaves semi-hard negative samples~($p\in[0.5,0.8]$) large weights.}
\label{img:app_asl}
\end{figure}
\subsection{Additional Evaluation Metrics and Examples}
Following previous multi-label image classification research~\cite{ben2020asymmetric,cole2021multi}, mean Average Precision~(mAP) is adopted as the primary metric to evaluate model performance.
In Table~\ref{tab:app_coco0.5}, Table~\ref{tab:app_coco0.2}, Table~\ref{tab:app_cocosingle}, Table~\ref{tab:app_vocsingle}, Table~\ref{tab:app_nussingle} and Table~\ref{tab:app_openimagesingle}, respectively, we report additional evaluation results, including OF1, CF1, OP, OR, CP and CR, on different datasets.
The positive threshold is set to a commonly used value of 0.5 for all methods except ROLE~\cite{cole2021multi}. As shown in Fig.~\ref{img:dis}~(f), ROLE has a threshold offset~(its lowest prediction value $p_l$ is generally around $0.3$ rather than $0$), so its positive threshold is set to $\tau=(1+p_l)/2$ for a relatively fair comparison.
The results show that our proposed Hill loss and SPLC outperform existing methods. Precision~(P) and Recall~(R) are trade-off metrics that are sensitive to the selected threshold; the two should therefore be considered jointly rather than in isolation.
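For reference, the reported metrics follow the standard per-class~(CP, CR, CF1) and overall~(OP, OR, OF1) definitions; the sketch below is a minimal NumPy implementation with the positive threshold $\tau$ as a parameter~(the function name is ours):

```python
import numpy as np

def multilabel_metrics(scores, targets, tau=0.5):
    # Per-class (C*) and overall (O*) precision/recall/F1 at threshold tau.
    pred = (scores >= tau).astype(float)
    targets = targets.astype(float)
    tp = (pred * targets).sum(axis=0)
    fp = (pred * (1.0 - targets)).sum(axis=0)
    fn = ((1.0 - pred) * targets).sum(axis=0)
    eps = 1e-12
    cp = float(np.mean(tp / (tp + fp + eps)))            # avg per-class precision
    cr = float(np.mean(tp / (tp + fn + eps)))            # avg per-class recall
    op = float(tp.sum() / (tp.sum() + fp.sum() + eps))   # overall precision
    orc = float(tp.sum() / (tp.sum() + fn.sum() + eps))  # overall recall
    return {"CP": cp, "CR": cr, "CF1": 2 * cp * cr / (cp + cr + eps),
            "OP": op, "OR": orc, "OF1": 2 * op * orc / (op + orc + eps)}
```

Raising $\tau$ trades recall for precision, which is why the two metrics should be read together.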
More example results of our proposed methods and BCE from the COCO test set are provided in Fig.~\ref{img:example_vis_app}.
\begin{table}[ht]
\caption{Comparison results of various metrics over other methods on the COCO-40\% labels left. }
\begin{center}
\setlength{\tabcolsep}{1.2mm}{
\begin{tabular}{lccccccc}
\hline
& mAP & CP & CR & CF1 & OP & OR & OF1 \\
\hline
BCE~(full labels) & \underline{80.3} &\underline{80.8} & \underline{70.3} & \underline{74.9} &\underline{84.3} &\underline{74.2} & \underline{78.9} \\
BCE & 70.5 &89.2 &34.4 & 45.8 &94.1 & 25.7 & 40.4\\
WAN~\cite{cole2021multi} & 72.1 &81.0 &53.2 & 62.8 &86.3 &52.0 & 64.9 \\
BCE-LS~\cite{cole2021multi} & 73.1 & 92.9 &33.5 & 44.9 &96.4 &26.2 & 41.2 \\
\hline
Loss re-weighting: \\
\hline
Focal~\cite{lin2017focal} & 71.7 &\textbf{88.9} &37.0 & 48.7 &\textbf{93.7} &28.8 & 44.0 \\
ASL~\cite{ben2020asymmetric} & 72.7 &71.4 &\textbf{65.6} & 67.7 &75.3 &\textbf{68.4} & 71.7 \\
\textbf{Hill~(Ours)} & \textbf{75.2} &80.4 &61.4& \textbf{68.6} & 85.5 &63.8 &\textbf{73.1} \\
\hline
Loss correction: \\
\hline
BCE + pseudo label & 71.5 &\textbf{88.0} &40.8 & 52.2 &\textbf{92.4} & 31.1 & 46.5 \\
ROLE~\cite{cole2021multi} & 73.7 &78.9 &60.0 &66.7 &83.6 &61.2 &70.7 \\
\textbf{\textbf{SPLC}~(Ours)} & \textbf{75.7} & 81.6 & \textbf{60.7} & \textbf{67.9} &87.7 & \textbf{63.0} & \textbf{73.3} \\
\hline
\end{tabular}
\label{tab:app_coco0.2}
}
\end{center}
\end{table}
\begin{table}[ht]
\caption{Comparison results of various metrics over other methods on the COCO-75\% labels left. }
\begin{center}
\setlength{\tabcolsep}{1.2mm}{
\begin{tabular}{lccccccc}
\hline
& mAP & CP & CR & CF1 & OP & OR & OF1 \\
\hline
BCE~(full labels) & \underline{80.3} &\underline{80.8} & \underline{70.3} & \underline{74.9} &\underline{84.3} &\underline{74.2} & \underline{78.9} \\
BCE & 76.8 &85.1 &58.1 & 67.7 &90.1 & 58.7 & 71.1\\
WAN~\cite{cole2021multi} & 77.3 &75.0 &70.2 & 72.2 &78.2 &73.7 & 75.9 \\
BCE-LS~\cite{cole2021multi} & 78.3 & 88.2 &57.2 & 67.8 &92.1 &57.9 & 71.1 \\
\hline
Loss re-weighting: \\
\hline
Focal~\cite{lin2017focal} & 77.0 &\textbf{83.8} &59.4 & 68.4 &\textbf{88.6} &59.8 & 71.4 \\
ASL~\cite{ben2020asymmetric} & 77.9 &63.1 &\textbf{78.8} & 69.7 &64.9 &\textbf{82.2} & 72.5 \\
\textbf{Hill~(Ours)} & \textbf{78.8} &73.6 &74.4& \textbf{73.6} &76.4 &78.3 & \textbf{77.3} \\
\hline
Loss correction: \\
\hline
BCE + pseudo label & 77.1 &\textbf{84.9} &58.7 & 68.2 &\textbf{89.7} & 59.4 & 71.5 \\
ROLE~\cite{cole2021multi} & \textbf{78.4} &77.4 &70.0 &72.9 &80.9 &73.5 & \textbf{77.0} \\
\textbf{\textbf{SPLC}~(Ours)} & \textbf{78.4} & 72.6 & \textbf{75.1} & \textbf{73.2} &74.0 & \textbf{79.3} & 76.6 \\
\hline
\end{tabular}
\label{tab:app_coco0.5}
}
\end{center}
\end{table}
\begin{table}[!htbp]
\caption{Comparison results of various metrics over other methods on the COCO-single label. }
\begin{center}
\setlength{\tabcolsep}{1.2mm}{
\begin{tabular}{lccccccc}
\hline
& mAP & CP & CR & CF1 & OP & OR & OF1 \\
\hline
BCE~(full labels) & \underline{80.3} &\underline{80.8} & \underline{70.3} & \underline{74.9} &\underline{84.3} &\underline{74.2} & \underline{78.9} \\
BCE & 68.6 &88.6 &33.0 & 43.8 &93.9 & 23.6 & 37.7\\
WAN~\cite{cole2021multi} & 70.2 &82.4 &47.9 & 58.0 &88.2 &43.9 & 58.6 \\
BCE-LS~\cite{cole2021multi} & 70.5 & 89.3 &30.8 & 40.9 &96.5 &23.1 & 37.3 \\
\hline
Loss re-weighting: \\
\hline
Focal~\cite{lin2017focal} & 70.2 &88.2 &36.0 & 47.0 & 93.4 & 26.6 &41.4 \\
ASL~\cite{ben2020asymmetric} & 71.8 &\textbf{90.4} &33.4 & 44.8 &\textbf{94.7} &23.7 & 37.9 \\
\textbf{Hill~(Ours)} & \textbf{73.2} &79.7 &\textbf{58.0}& \textbf{65.5} &85.3 &\textbf{58.7} & \textbf{69.5} \\
\hline
Loss correction: \\
\hline
BCE + pseudo label & 69.8 &\textbf{87.2} &38.3 & 48.9 &\textbf{92.5} & 28.0 & 43.0 \\
ROLE~\cite{cole2021multi} & 70.9 &83.3 &49.1 &57.6 &88.3 &46.8 & 61.2 \\
\textbf{\textbf{SPLC}~(Ours)} & \textbf{73.2} & 83.8 & \textbf{53.1} & \textbf{61.6} &90.1 & \textbf{53.8} & \textbf{67.4} \\
\hline
\end{tabular}
\label{tab:app_cocosingle}
}
\end{center}
\end{table}
\begin{table*}[ht]
\caption{Comparison results of various metrics over other methods on the VOC-single label. }
\begin{center}
\setlength{\tabcolsep}{1.2mm}{
\begin{tabular}{lccccccc}
\hline
& mAP & CP & CR & CF1 & OP & OR & OF1 \\
\hline
BCE~(full labels) & \underline{89.0} &\underline{82.9} &\underline{84.1} & \underline{83.3} &\underline{86.4} & \underline{84.7} & \underline{85.6}\\
BCE & 85.6 &87.8 &71.6 & 77.4 &91.1 & 68.7 & 78.4\\
WAN~\cite{cole2021multi} & 87.0 &83.9 &78.4 & 80.4 &87.5 &77.7 & 82.3 \\
BCE-LS~\cite{cole2021multi} & 87.2 & 92.0 &67.7 & 75.4 &94.6 &65.9 & 77.7 \\
\hline
Loss re-weighting: \\
\hline
Focal~\cite{lin2017focal} & 86.8 &\textbf{87.2} &73.6 & 78.2 & \textbf{90.6} & 70.3 &79.1 \\
ASL~\cite{ben2020asymmetric} & 87.3 &68.4 &\textbf{87.2} & 75.9 &72.8 &\textbf{86.9} & 79.2 \\
\textbf{Hill~(Ours)} & \textbf{87.8} &85.3 &78.7& \textbf{81.1} &88.5 &79.6 & \textbf{83.8} \\
\hline
Loss correction: \\
\hline
BCE + pseudo label & \textbf{89.7} &86.5 &\textbf{81.8} & \textbf{84.0} &89.1 & \textbf{83.5} & \textbf{86.2} \\
ROLE~\cite{cole2021multi} & 89.0 &87.6 &78.2 &81.5 &90.6 &79.0 & 84.4 \\
\textbf{\textbf{SPLC}~(Ours)} & 88.1 &\textbf{88.7} & 75.0 & 80.2 &\textbf{91.9} & 75.8 & 83.0 \\
\hline
\end{tabular}
\label{tab:app_vocsingle}
}
\end{center}
\end{table*}
\begin{table*}[t]
\caption{Comparison results of various metrics over other methods on the NUS-single label dataset. }
\begin{center}
\setlength{\tabcolsep}{1.2mm}{
\begin{tabular}{lccccccc}
\hline
& mAP & CP & CR & CF1 & OP & OR & OF1 \\
\hline
BCE~(full labels) & \underline{60.6} &\underline{62.7} &\underline{58.0} &\underline{59.1} &\underline{73.8} & \underline{73.4} &\underline{73.6}\\
BCE & 51.7 &67.8 &23.2 & 30.9 &84.0 & 20.7 & 33.3\\
WAN~\cite{cole2021multi} & 52.9 &61.5 &41.2 & 47.2 &75.1 &46.7 & 57.6 \\
BCE-LS~\cite{cole2021multi} & 52.5 & 66.9 &19.8 & 27.0 &85.6 &21.0 &33.7 \\
\hline
Loss re-weighting: \\
\hline
Focal~\cite{lin2017focal} & 53.6 &\textbf{69.1} &26.2 & 34.2 & \textbf{83.4} & 22.1 &35.0 \\
ASL~\cite{ben2020asymmetric} & 53.9 &53.5 &55.0 & 53.2 &67.7 &67.3 & 67.5 \\
\textbf{Hill~(Ours)} & \textbf{55.0} &54.7 &\textbf{56.3}& \textbf{54.1} &68.5 &\textbf{68.7} & \textbf{68.6} \\
\hline
Loss correction: \\
\hline
BCE + pseudo label & 51.8 &\textbf{66.8} &24.8 & 32.5 &\textbf{83.6} & 22.2 & 35.1 \\
ROLE~\cite{cole2021multi} & 50.6 &56.3 &31.7 &37.4 &75.7 &50.8 &60.8 \\
\textbf{\textbf{SPLC}~(Ours)} & \textbf{55.2} &56.6 & \textbf{54.0} & \textbf{52.4} & 68.4& \textbf{73.0} & \textbf{70.6} \\
\hline
\end{tabular}
\label{tab:app_nussingle}
}
\end{center}
\end{table*}
\begin{table*}[ht]
\caption{Comparison results of various metrics over other methods on the Open Images-single label dataset. }
\begin{center}
\setlength{\tabcolsep}{1.2mm}{
\begin{tabular}{lccccccc}
\hline
& mAP & CP & CR & CF1 & OP & OR & OF1 \\
\hline
BCE &60.8 &\textbf{72.9} &30.6 &38.2 &\textbf{85.9} &14.5 &24.9 \\
Focal~\cite{lin2017focal} &62.1 &72.3 &32.3 &39.8 &85.6 &15.2 &25.8 \\
ASL~\cite{ben2020asymmetric} &62.0 &60.1 &55.5 &53.4 &69.2 &38.3 &49.3 \\
\textbf{Hill~(Ours)} &\textbf{62.7} &51.7 & \textbf{66.5} & 55.0 &59.0 & \textbf{50.5} & \textbf{54.4} \\
\textbf{\textbf{SPLC}~(Ours)} &\textbf{62.9} &61.7 &56.8 &\textbf{55.4} &71.8 & 38.2 & 49.9 \\
\hline
\end{tabular}
\label{tab:app_openimagesingle}
}
\end{center}
\end{table*}
\begin{figure*}[ht]
\flushleft
\centering
\includegraphics[width=1.0\textwidth]{figures/visualization_2.pdf}
\centering
\vspace{-1em}
\caption{More example results of our proposed methods and BCE from the COCO test set. Green and red colors indicate positive and missing labels, respectively.}
\label{img:example_vis_app}
\end{figure*}
\section{Conclusion}
In this paper, we designed simple and robust loss functions for multi-label learning with missing labels without sacrificing training and inference efficiency.
First, we presented the observation that a portion of missing labels can be distinguished from true negatives in the early stage of training with high precision.
Then, two novel approaches were proposed to alleviate the effect of missing labels by down-weighting and correcting potential missing labels, respectively.
Besides, we proposed to perform semi-hard mining on positives, which further improves performance.
Extensive experiments on several widely used benchmarks have demonstrated the superiority of the proposed methods.
Our methods substantially simplify the existing pipeline of MLML.
We hope the new state-of-the-art results established by our methods can serve as a starting point for future research on MLML.
\subsection{Additional Discussions and Experiments}
\subsubsection{Impact of the hyper-parameters}
Despite the introduction of extra hyper-parameters in Hill and SPLC compared with BCE, they can be selected empirically as described in the method section. Here we give a more detailed analysis and answer some common questions arising in practical use.
In principle, we did not invest much effort in finding the best hyper-parameters for each dataset:
the optimal parameters searched on the COCO-40\% labels left setting are shared across all other datasets.
\textit{How to set $m$ of Focal margin?}
The margin~$m$ of Focal margin controls the relative importance of different positives: a small $m$ focuses more on hard positives,
and the focus shifts to semi-hard positives as $m$ increases. Based on our empirical evaluation in Table~\ref{tab:hyper-parameter}, we recommend $m\in[0.5,1.0]$ and set $m=1$ in all the experiments.
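A minimal sketch of the two positive-branch variants, assuming the margin enters as a logit shift before the sigmoid~(the exact formulation is given in the method section; the function names are ours):

```python
import numpy as np

def focal_pos(x, gamma=2.0):
    # Standard focal term for a positive sample: emphasizes the hardest ones.
    p = 1.0 / (1.0 + np.exp(-x))
    return -((1.0 - p) ** gamma) * np.log(np.clip(p, 1e-12, 1.0))

def focal_margin_pos(x, gamma=2.0, m=1.0):
    # Focal margin (sketch): the logit is shifted by m before the sigmoid,
    # moving the focusing weight from the hardest to semi-hard positives.
    p_m = 1.0 / (1.0 + np.exp(-(x - m)))
    return -((1.0 - p_m) ** gamma) * np.log(np.clip(p_m, 1e-12, 1.0))
```

With $m=0$ the two coincide; increasing $m$ raises the loss assigned to moderately confident positives.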
\textit{How to determine the duration of the early stage for SPLC?}
SPLC is insensitive to the exact duration of the early stage.
Experiments show that starting correction anywhere from the 2-nd to the 5-th epoch has little effect on performance~(mAP varies within 0.2), as shown in Table~\ref{tab:hyper-parameter}.
The reason is that the model recalls only a small part of FNs~(but with high precision) at the early stage.
Therefore, we correct the labels after the first epoch for all the experiments of SPLC.
\textit{What if FNs are identified incorrectly for SPLC?}
In fact, it cannot be guaranteed that all identified FNs are correct, even in well-studied single-label tasks.
However, our methods would perform well as long as most identified FNs are correct, as demonstrated by the extensive experiments.
The error rate of FN identification on the training set is below 5\% when setting a high threshold~(above 0.6), as shown in Fig.~\ref{img:observation}.
The underlying reason behind this is the distinctive features in MLML as described in Sec.~\ref{sec_observation}.
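The correction rule discussed above can be sketched as follows. For brevity, the sketch uses plain focal terms for the two branches rather than the exact Focal margin; the names and defaults are illustrative:

```python
import numpy as np

def splc_loss(p, y, tau=0.6, epoch=1, warmup=1, gamma=2.0):
    # SPLC sketch: after `warmup` epochs, an unannotated sample (y == 0)
    # whose predicted probability exceeds tau is treated as a missing
    # positive; otherwise it keeps the negative branch of the loss.
    eps = 1e-12
    pos = -((1.0 - p) ** gamma) * np.log(p + eps)
    neg = -(p ** gamma) * np.log(1.0 - p + eps)
    if epoch >= warmup:
        y = np.where((y == 0) & (p > tau), 1, y)
    return np.where(y == 1, pos, neg)
```

Before the warm-up ends, every sample keeps its annotated label; afterwards, a confident unannotated sample switches to the positive branch.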
\textit{How to set the threshold of SPLC to differentiate TNs and FNs?}
The threshold $\tau$ controls the trade-off between precision and recall of FNs as shown in Fig.~\ref{img:SPLC_pr}.
$\tau$ is a relatively sensitive parameter, as shown in Table~\ref{tab:hyper-parameter}.
A higher~(stricter) threshold means fewer FNs are recalled, but with higher precision.
We set the threshold $\tau$ to a fixed high value of 0.6 to alleviate the incorrect identification of FNs.
The reason why $\tau$ is greater than the commonly used 0.5 is that the Focal margin focuses more on positives and recalls more missing labels than BCE does.
Predictably, setting different thresholds for different classes and training stages, instead of a fixed one, would bring better accuracy at the cost of more parameters to tune.
Thus, we leave the study of adaptive and efficient thresholds for future work.
\textit{Does the Hill loss adjust the ratio of positive and negative part by the hyper-parameters, similar to Focal loss and ASL?}
Hill loss has a fixed formulation for the positive and negative parts yet outperforms Focal loss and ASL. Therefore, no hyper-parameter is needed to balance positives and negatives.
\begin{table}[t]
\begin{center}
\caption{mAP results of different margins in Focal margin, and of different epochs and thresholds in SPLC, on the COCO-40\% labels left.}
\label{tab:hyper-parameter}
\begin{tabular}{c|ccccc}
\hline
\multicolumn{6}{c}{\textbf{Different margins~$(m)$}}\\
\hline
$m$ &
\multicolumn{1}{c}{0}&
\multicolumn{1}{c}{0.5}&
\multicolumn{1}{c}{1.0}&
\multicolumn{1}{c}{1.5} &
\multicolumn{1}{c}{2.0} \\
mAP & 73.57 &75.01 &\textbf{75.15} & 74.81 & 74.53 \\
\hline
\multicolumn{6}{c}{\textbf{Different epochs}}\\
\hline
epoch & 1 &2 &3 & 4 & 5 \\
mAP &75.69 &75.63 &75.60 &75.70 &75.71 \\
\hline
\multicolumn{6}{c}{\textbf{Different thresholds~$(\tau)$}}\\
\hline
threshold & 0.5 &0.55 &0.6 & 0.65 &0.7 \\
mAP &55.84 &72.41 & \textbf{75.69} & 75.46 & 74.29 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{More experiments with different backbones}
In Table~\ref{tab:backbones}, we test the applicability of Hill and SPLC to different
backbones by comparing the loss functions on three representative architectures: MobilenetV3-Large~\cite{howard2019searching}, Resnet101~\cite{he2016deep} and Swin-transformer-Small~\cite{liu2021swin}.
The consistent improvements brought by Hill and SPLC demonstrate that our methods are robust to different backbones.
\begin{table}[t]
\caption{Comparison results of mAP with different backbones on COCO-40\% labels left.}
\begin{center}
\scriptsize
\begin{tabular}{lcccccc}
\hline
& \multicolumn{1}{c}{MobilenetV3}&
\multicolumn{1}{c}{Resnet101} &
\multicolumn{1}{c}{Swin-transformer}
\\
\hline
BCE~(full labels) & 76.32 &81.74 &83.31 \\
BCE & 68.45 &71.92 &76.74 \\
WAN~\cite{cole2021multi} & 69.27 & 73.58 &77.20 \\
BCE-LS~\cite{cole2021multi} &68.73 &74.41 &76.88 \\
Focal~\cite{lin2017focal} &69.46 &73.58 &76.78 \\
ASL~\cite{ben2020asymmetric} & 69.65 &75.09 &76.70 \\
\textbf{Hill~(Ours)} & \textbf{71.03} & \textbf{76.53} & \textbf{78.96}\\
\textbf{\textbf{Focal margin + SPLC}~(Ours)} & \textbf{71.10} & \textbf{77.20} &\textbf{78.19} \\
\hline
\end{tabular}
\end{center}
\label{tab:backbones}
\vspace{-2mm}
\end{table}
\subsection{Analysis of Proposed Methods}
\begin{table}[t]
\caption{Ablation study of the Hill loss on the COCO-40\% labels left. Notations ``$+$'' and ``$-$'' represent the modification of positive and negative loss respectively on the basis of BCE.}
\begin{center}
\begin{tabular}{lcccc}
\hline
&
mAP &
\multicolumn{1}{c}{OF1} &
\multicolumn{1}{c}{CF1} \\
\hline
BCE & 70.49 & 45.81 & 40.42 \\
\hline
Focal$^+$~\cite{lin2017focal} & 70.87 & 49.42 & 44.63\\
\textbf{Focal margin$^+$~(Ours)} & \textbf{71.50} & \textbf{57.66} & \textbf{58.37} \\
\hline
WAN$^-$~\cite{cole2021multi} &72.05 & 62.77 & 64.87\\
Focal$^-$~\cite{lin2017focal} &72.10 & 56.43 & 54.04\\
ASL$^-$~\cite{ridnik2021asymmetric} & 72.70 & \textbf{67.72} & 71.67\\
MSE$^-$ & 74.08 & 65.12 & 68.17\\
\textbf{Hill$^-$~(Ours)} & \textbf{74.98} & 66.56 & \textbf{71.87}\\
\hline
\textbf{Hill~(Ours)} & \textbf{75.15} & \textbf{68.56} & \textbf{73.08}\\
\hline
\end{tabular}
\end{center}
\label{tab:ablation}
\vspace{-3mm}
\end{table}
\subsubsection{Ablation study of Hill loss}
Since Hill loss is composed of positive and negative loss, we conduct the ablation study on the basis of BCE to verify the effectiveness of positive and negative parts in Hill loss as shown in Table~\ref{tab:ablation}.
For the positive part, the hard-mining method~(Focal$^+$) brings a slight gain over BCE, whereas Focal margin$^+$ brings a relatively significant improvement. This shows that semi-hard mining is more appropriate than hard mining for positives in MLML.
For the negative part, we first down-weight all negatives with a fixed weight~(WAN$^-$), which provides a stronger baseline than BCE, with solid improvements on different metrics.
Second, we compare different re-weighting losses. Hill$^-$ surpasses Focal$^-$ and ASL$^-$ by above +2\% mAP score. The main reason is that though ASL$^-$ seeks to attenuate the effect of hard negatives, the gradient analysis in Fig.~\ref{img:posloss} illustrates that ASL$^-$ still puts too much focus on these possibly false negatives compared with Hill$^-$.
Third, the superiority of Hill$^-$ over MSE$^-$ also demonstrates the necessity of re-weighting.
The performance is further improved via the integration of Focal margin$^+$ and Hill$^-$.
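As a reading aid, the negative branch Hill$^-$ can be sketched as a weighted MSE $(\lambda-p)p^2$ with $\lambda=1.5$~(assumed constants); its gradient with respect to the logit is then $3p^2(1-p)^2$, a ``hill'' that peaks at $p=0.5$ and vanishes for confident~(possibly mislabeled) negatives:

```python
import numpy as np

def hill_neg(p, lam=1.5):
    # Hill loss for negatives (sketch): MSE re-weighted by (lam - p).
    return (lam - p) * p ** 2

def hill_neg_grad_logit(p, lam=1.5):
    # d/dx [(lam - p) p^2] with p = sigmoid(x), i.e. dp/dx = p (1 - p).
    # For lam = 1.5 this equals 3 * p^2 * (1 - p)^2.
    return (2.0 * lam * p - 3.0 * p ** 2) * p * (1.0 - p)

ps = np.linspace(0.0, 1.0, 1001)
grad = hill_neg_grad_logit(ps)
print(f"gradient peaks at p = {ps[np.argmax(grad)]:.2f}")  # p = 0.50
```

The peak at $p=0.5$ and the decay toward $p=1$ are what make the negative branch free of balancing hyper-parameters.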
\begin{figure*}[t]
\flushleft
\subfigure{
\begin{minipage}[h]{0.25\linewidth}
\centering
\includegraphics[width=1.7in]{figures/appendix/bce-full.pdf}
\scriptsize (a)~BCE-full labels
\includegraphics[width=1.7in]{figures/distribution/asl-val.pdf}
\scriptsize (e)~ASL
\end{minipage}%
}%
\subfigure{
\begin{minipage}[h]{0.25\linewidth}
\centering
\includegraphics[width=1.7in]{figures/distribution/bce-val.pdf}
\scriptsize (b)~BCE
\includegraphics[width=1.7in]{figures/appendix/role-0.2.pdf}
\scriptsize (f)~ROLE
\end{minipage}%
}%
\subfigure{
\begin{minipage}[h]{0.25\linewidth}
\centering
\includegraphics[width=1.7in]{figures/appendix/focal-0.2.pdf}
\scriptsize (c)~Focal
\includegraphics[width=1.7in]{figures/distribution/hill-val.pdf}
\scriptsize (g)~Hill~(Ours)
\end{minipage}%
}%
\subfigure{
\begin{minipage}[h]{0.25\linewidth}
\centering
\includegraphics[width=1.7in]{figures/appendix/foca-margin1-0.2.pdf}
\scriptsize (d)~Focal margin~(Ours)
\includegraphics[width=1.7in]{figures/distribution/splc-val.pdf}
\scriptsize (h)~SPLC~(Ours)
\end{minipage}%
}%
\centering
\caption{Probability distributions on the COCO test set of different loss functions. Note that only (a) is obtained from a model trained with full labels on the COCO dataset, and the rest of the figures are obtained from models trained via different methods on the COCO-40\% labels left.}
\label{img:dis}
\end{figure*}
\begin{table}[t]
\caption{Quantitative results of SPLC with different loss functions on the COCO-40\% labels left.}
\begin{center}
\scriptsize
\begin{tabular}{lccc}
\hline
Method
& \multicolumn{1}{c}{mAP}&
\multicolumn{1}{c}{CF1}&
\multicolumn{1}{c}{OF1}
\\
\hline
BCE & 70.49 & 45.81 & 40.42 \\
BCE + pseudo label & 71.46 & 52.17 & 46.48 \\
BCE + \textbf{SPLC} & \textbf{73.63} & \textbf{65.56} &\textbf{72.14} \\
\hline
Focal & 71.66 & 48.67 &44.01 \\
Focal + pseudo label & 72.67 & 52.43 & 47.68 \\
Focal + \textbf{SPLC} & \textbf{73.83} & \textbf{57.95} &\textbf{56.04} \\
\hline
ASL & 72.70 & 67.72 &71.67 \\
ASL + pseudo label & 73.75 & 68.73 & 72.06 \\
ASL + \textbf{SPLC} & \textbf{74.23} & \textbf{69.23} & \textbf{72.74} \\
\hline
Focal margin & 72.26 & 60.79 &61.58 \\
Focal margin + pseudo label & 74.10 & 65.53 & 66.17 \\
\textbf{Focal margin + SPLC} & \textbf{75.69} & \textbf{67.91} &\textbf{73.33} \\
\hline
\end{tabular}
\end{center}
\label{tab:SPLC ablation}
\vspace{-5mm}
\end{table}
\subsubsection{SPLC boosts existing losses}
Table~\ref{tab:SPLC ablation} shows that both pseudo label and SPLC methods can complement existing losses and lead to consistent improvements, while SPLC performs better than the pseudo label method in both training efficiency and accuracy.
SPLC corrects the missing labels dynamically along with the training, while pseudo label method needs an extra training process to predict the labels.
Fig.~\ref{img:SPLC_pr} explains why SPLC achieves better accuracy.
It can be observed that, during training, SPLC gradually recalls more missing labels and corrects them while maintaining precision.
By comparison, it is hard for the pseudo label method to strike a good compromise between precision and recall, since its model is trained on noisy data and is not robust to the missing labels.
\begin{figure*}[t]
\flushleft
\centering
\includegraphics[width=1.0\textwidth]{figures/visualization_1.pdf}
\centering
\caption{Example results of our proposed methods and BCE from the COCO test set. Green and red colors indicate positive and missing labels, respectively. Generally, the model trained with BCE tends to overfit to the missing labels, as their predicted probabilities are quite low.
By comparison, models trained by Hill and SPLC predict higher probabilities for missing labels, implying that they are more robust against missing labels.
We also show the results from the first and third epoch of SPLC to demonstrate the feasibility and rationality of loss correction at the early stage.
More examples are shown in Appendix B.}
\label{img:example_vis}
\end{figure*}
Moreover, Focal margin performs best when combined with SPLC, obtaining a remarkable mAP gain of up to 3.43\%. The reason is that Focal margin highlights semi-hard positives, as shown in Fig.~\ref{img:posloss}, leading to more missing labels being corrected than with the Focal loss.
\begin{figure}[t]
\centering
\includegraphics[width=0.40\textwidth]{figures/compare}
\caption{The precision and recall curves of missing labels during training on the COCO-40\% labels left training set, with and without SPLC. The yellow and green rectangles refer to the precision and recall of a well-trained model used for pseudo labeling, with thresholds between 0.4 and 0.6.}
\label{img:SPLC_pr}
\end{figure}
\subsubsection{Probability distribution analysis}
\label{sec:discuss}
Fig.~\ref{img:dis} illustrates the probability distributions of different methods.
As shown in Fig.~\ref{img:dis}~(g) and (h), our proposed methods clearly outperform existing works: positives and negatives are well differentiated, and the probability distributions are close to that of BCE trained on the fully labeled COCO dataset.
Specifically, down-weighting hard negatives and correcting noisy labels mitigate the effect of mislabeled samples, while placing more emphasis on semi-hard positives is conducive to the learning of positive samples.
More concretely, from Fig.~\ref{img:dis}~(a) and (b) it can be observed that many positives are incorrectly identified under the impact of missing labels.
Comparing Fig.~\ref{img:dis}~(c) and (d), we observe that more positives are recalled by the Focal margin loss than by the Focal loss, which indicates that semi-hard mining is beneficial for multi-label image recognition.
As for ASL~(Fig.~\ref{img:dis}~(e)), although it demonstrates the effectiveness of down-weighting negatives, it only down-weights extremely hard negatives~($p>0.8$), leaving semi-hard negative samples~($p\in[0.5,0.8]$) with large weights.
As a result, overly large weights for hard negatives and overly small weights for easy ones lead to an insufficient distinction between positives and negatives.
Despite the higher mAP obtained by ROLE over the baseline methods, its distribution suffers from insufficient discrimination, especially for the negatives, as shown in Fig.~\ref{img:dis}~(f).
Moreover, ROLE requires additional memory to store the label matrix for online estimation, which is not negligible for large-scale datasets.
Furthermore, we give examples of the predicted probabilities from different methods in Fig.~\ref{img:example_vis}. Hill and SPLC present better robustness against missing labels, as they allocate relatively higher probability values to missing labels than BCE.
\subsection{Comparisons with Existing Methods}
\subsubsection{Different missing ratios on COCO}
Table~\ref{tab:maintable} summarizes the results of different methods on COCO under different missing ratios. As the missing ratio increases, model performance degrades and the choice of loss function becomes more significant. Though Weak Assume Negatives~(WAN) and Label Smoothing~(LS) are stronger baselines than BCE, the proposed Hill and SPLC show considerable superiority.
Specifically, for the loss re-weighting methods, Hill significantly surpasses Focal and ASL by +3.49\% and +2.45\% respectively under the setting of 40\% labels left.
For the loss correction methods, BCE + pseudo label is a two-stage method: we first train a model using BCE, then use the trained model to correct the data, and finally retrain the model. ROLE requires additional space to store the label prediction matrix. In contrast, SPLC corrects missing labels during training and needs no additional storage; it therefore improves the performance without any bells and whistles.
\begin{table*}[t]
\caption{Comparison results of mAP, CF1 and OF1 over other methods on COCO with different missing ratios.}
\begin{center}
\begin{tabular}{lccccccccc}
\hline
& \multicolumn{3}{c}{75\% labels left}&
\multicolumn{3}{c}{40\% labels left}&
\multicolumn{3}{c}{single label}
\\
&
\multicolumn{1}{c}{mAP} &
\multicolumn{1}{c}{CF1} &
\multicolumn{1}{c}{OF1} &
\multicolumn{1}{c}{mAP} &
\multicolumn{1}{c}{CF1} &
\multicolumn{1}{c}{OF1} &
\multicolumn{1}{c}{mAP} &
\multicolumn{1}{c}{CF1} &
\multicolumn{1}{c}{OF1}
\\
\hline
BCE~(full labels) & 80.32 & 74.89 & 78.95 & - & - & - &- &- & - \\
BCE & 76.81 & 67.73 & 71.07 & 70.49 & 45.81 & 40.42 & 68.57 & 43.78 & 37.72 \\
WAN~\cite{cole2021multi} & 77.25 & 72.19 & 75.86 & 72.05 & 62.77 & 64.87 & 70.17 & 58.02 & 58.60 \\
BCE-LS~\cite{cole2021multi} & 78.27 & 67.77 & 71.10 & 73.13 & 44.92 & 41.17 & 70.53 & 40.85 & 37.26 \\
\hline
Loss re-weighting: \\
\hline
Focal~\cite{lin2017focal} & 76.95 & 68.41 & 71.39 & 71.66 & 48.67 & 44.01 & 70.19 & 47.02 & 41.37 \\
ASL~\cite{ben2020asymmetric} & 77.97 & 69.67 & 72.54 & 72.70 & 67.72 & 71.67 & 71.79 & 44.77 & 37.86 \\
\textbf{Hill~(Ours)} & \textbf{78.84} & \textbf{73.57} & \textbf{77.31} & \textbf{75.15} & \textbf{68.56} & \textbf{73.08} & \textbf{73.17} & \textbf{65.47} & \textbf{69.51} \\
\hline
Loss correction: \\
\hline
BCE + pseudo label & 77.05 & 68.18 & 71.47 & 71.46 & 52.17 & 46.48 & 69.77 & 48.91 & 42.99 \\
ROLE~\cite{cole2021multi} & 78.43 &72.90 & \textbf{77.03} & 73.67 & 66.73 & 70.65 & 70.90 &57.59 & 61.16 \\
\textbf{\textbf{Focal margin + SPLC}~(Ours)} & \textbf{78.44} & \textbf{73.18} & 76.55 & \textbf{75.69} & \textbf{67.91} & \textbf{73.33} & \textbf{73.18} & \textbf{61.55} & \textbf{67.35} \\
\hline
\end{tabular}
\end{center}
\label{tab:maintable}
\end{table*}
\begin{table}[t]
\caption{Comparison results of mAP on VOC-single label and NUS-single label datasets.}
\begin{center}
\scriptsize
\begin{tabular}{lcccccc}
\hline
& \multicolumn{1}{c}{VOC-single}&
\multicolumn{1}{c}{NUS-single}
\\
\hline
BCE~(full labels) & 89.04 & 60.63 \\
BCE & 85.55 & 51.71 \\
WAN~\cite{cole2021multi} & 87.03 & 52.89 \\
BCE-LS~\cite{cole2021multi} & 87.23 & 52.47 \\
\hline
Loss re-weighting: \\
\hline
Focal~\cite{lin2017focal} & 86.83 & 53.58 \\
ASL~\cite{ben2020asymmetric} & 87.33 & 53.92 \\
\textbf{Hill~(Ours)} & \textbf{87.75} & \textbf{55.03} \\
\hline
Loss correction: \\
\hline
BCE + pseudo label & \textbf{89.67} & 51.79 \\
ROLE~\cite{cole2021multi} & 89.00 & 50.61\\
\textbf{\textbf{Focal margin + SPLC}~(Ours)} & 88.07 & \textbf{55.18} \\
\hline
\end{tabular}
\end{center}
\label{tab:voc&nus}
\end{table}
\begin{table}[t]
\caption{Comparison results of mAP on Open Images-single label datasets.}
\begin{center}
\scriptsize
\begin{tabular}{ccccccc}
\hline
&
\multicolumn{1}{c}{BCE}&
\multicolumn{1}{c}{Focal}&
\multicolumn{1}{c}{ASL}&
\multicolumn{1}{c}{Hill} &
\multicolumn{1}{c}{SPLC}
\\
\hline
Open Images & 60.83 &62.14 &61.95 & \textbf{62.71} & \textbf{62.86} \\
\hline
\end{tabular}
\end{center}
\label{tab:openimages}
\vspace{-5mm}
\end{table}
\subsubsection{Single label on VOC, Nus-wide and Open Images}
The single positive label results on VOC and NUS are shown in Table~\ref{tab:voc&nus}.
Hill and SPLC improve mAP performance remarkably on the NUS-single label dataset, but do not achieve obvious improvements on VOC.
The reason may be that the VOC dataset only contains 20 labels, resulting in a relatively balanced ratio of positives and negatives.
To further validate the effectiveness of proposed methods on extreme multi-label classification, we conduct experiments on Open Images with 567 labels as shown in Table~\ref{tab:openimages}.
It can be observed that both Hill and SPLC achieve remarkable improvements over common loss functions on Open Images, demonstrating that the proposed methods remain robust on large-scale datasets and in extreme classification cases.
\section{Experiment}
In this section, we first introduce the experimental settings. Second, we compare the proposed methods with existing classic loss functions on four multi-label datasets.
Third, an ablation study and a comprehensive analysis are given to delve into the working mechanisms of Hill and SPLC.
Finally, further discussions and experiments are provided to answer common questions arising in implementation.
\subsection{Experimental Settings}
\subsubsection{Datasets}
We conduct experiments on multiple standard multi-label benchmarks: MS-COCO~(COCO)~\cite{lin2014microsoft}, PASCAL VOC 2012~(VOC)~\cite{everingham2011pascal}, NUS-WIDE~(NUS)~\cite{chua2009nus}, and the large-scale Open Images~\cite{kuznetsova2020open}. COCO~\cite{lin2014microsoft} consists of 82,081 images with 80 classes for training and 40,137 images for test. VOC~\cite{everingham2011pascal} contains 5,717 training images with 20 classes, and additional 5,823 images are used for test.
Following \cite{ben2020asymmetric}, we collect NUS-WIDE~\cite{chua2009nus} with 119,103 training images and 50,720 test images, covering 81 classes.
Because many download URLs of Open Images~\cite{kuznetsova2020open} have expired, we were able to collect 1,742,125 training images and 37,306 test images, containing 567 unique classes.
We construct the training sets of missing labels by randomly dropping positive labels for each training image with different ratios.
The number of remaining labels per image $n_r(x)=\lfloor n(x) \times (1-r)\rfloor+1$, where $n(x)$ refers to the number of all positive labels per image and $r$ controls the ratio of missing labels.
In particular, $r=1$ means single positive label per image, the same setting as~\cite{cole2021multi}. We report results for $r\in\{0.5, 0.8, 1\}$ respectively on COCO and single positive label ($r=1$) results on VOC, NUS and Open Images.
Note that the test sets are fully labeled.
The detailed statistics of datasets are shown in Table~\ref{tab:dataset}.
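The label-dropping procedure follows directly from the formula above; in the sketch below, the clamp to at most $n(x)$ labels is our addition to guard images with few positives:

```python
import math
import random

def drop_labels(pos_labels, r, rng=random):
    # Keep n_r = floor(n * (1 - r)) + 1 positive labels per image, where r
    # controls the missing ratio; r = 1 leaves a single positive label.
    n = len(pos_labels)
    n_keep = min(math.floor(n * (1.0 - r)) + 1, n)  # guard: never exceed n
    return rng.sample(pos_labels, n_keep)
```

For an image with four positive labels, $r=1$ keeps one label, $r=0.5$ keeps three, and $r=0$ keeps all of them.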
\begin{table}[t]
\caption{The statistics of different datasets.}
\begin{center}
\setlength{\tabcolsep}{1mm}{
\begin{tabular}{c|cccc}
\hline
&
\multicolumn{1}{c}{samples} &
\multicolumn{1}{c}{classes} &
\multicolumn{1}{c}{labels} &
\multicolumn{1}{c}{avg.label/img}
\\
\hline
COCO-full labels & 82,081 & 80 &241,035 &2.9 \\
COCO-75\% labels left & 82,081 & 80 &181,422 &2.2 \\
COCO-40\% labels left & 82,081 & 80 &96,251 &1.2 \\
COCO-single label & 82,081 & 80 &82,081 &1.0 \\
\hline
NUS-full labels & 119,103 & 81 &289,460 &2.4 \\
NUS-single label & 119,103 & 81 &119,103 &1.0 \\
\hline
VOC-full labels & 5,823 & 20 &8,331 &1.4 \\
VOC-single label & 5,823 & 20 &5,823 &1.0 \\
\hline
Open Images-full labels & 1,742,125 & 567 & 4,422,289 & 2.54\\
Open Images-single label & 1,742,125 & 567 & 1,742,125 & 1.0 \\
\hline
\end{tabular}
}
\end{center}
\label{tab:dataset}
\end{table}
\subsubsection{Evaluation metrics}
Following previous multi-label image classification research~\cite{ben2020asymmetric,cole2021multi}, mean Average Precision~(mAP) is used as the primary metric to evaluate model performance.
We also report the overall F1-measure~(OF1) and per-category F1-measure~(CF1) with a positive threshold of 0.5.
Precision~(P) and recall~(R) results are reported in Appendix B.
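For reference, a minimal sketch of per-class average precision and mAP follows; conventions for ties and interpolation vary across implementations, so this is illustrative rather than the exact evaluation code used in the experiments.

```python
def average_precision(scores, labels):
    """AP for one class: mean of precision@k over the ranks k of true positives."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def mean_average_precision(score_matrix, label_matrix):
    # score_matrix[c], label_matrix[c]: per-class scores/labels over the test set
    aps = [average_precision(s, y) for s, y in zip(score_matrix, label_matrix)]
    return sum(aps) / len(aps)
```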
\subsubsection{Implementation details}
We use the Resnet50~\cite{he2016deep} pretrained on ImageNet~\cite{deng2009imagenet} as the backbone network for feature extraction. The input images are uniformly resized to $448\times448$.
We use the Adam optimizer with a weight decay of 1e-4 and a cyclic learning rate schedule~\cite{smith2019super} with a maximum learning rate of 1e-4. Following~\cite{ben2020asymmetric}, we use the exponential moving average trick for higher baselines.
\subsubsection{Compared methods}
We first compare our methods with three classic baselines.
Weak Assume Negatives~(WAN) and Label Smoothing~(LS)~\cite{cole2021multi} are selected as stronger baselines than BCE in MLML.
Second, we compare the Hill loss with loss re-weighting methods, Focal~\cite{lin2017focal} and the recently proposed ASL~\cite{ben2020asymmetric}.
Third, we choose the loss correction methods, including Pseudo Label~\cite{lee2013pseudo} and Regularized Online Label Estimation~(ROLE)~\cite{cole2021multi}, to validate the effectiveness of the SPLC.
For a fair comparison, we reproduce these methods using open-source code from the authors under the same dataset configuration and adopt the hyper-parameters recommended in their papers.
\section{Introduction}
\IEEEPARstart{R}{ecognizing} multiple labels of a given image, also known as multi-label image recognition, is an important and practical computer vision task, as images are intrinsically multi-labeled, {\it{e.g.}}, containing multiple objects, conditions, or scene types. In fact, the single-label dataset ImageNet has a large portion being composed of images with multiple labels. It has been shown in~\cite{yun2021re} that re-labeling ImageNet~\cite{deng2009imagenet} from single-label to multi-labels can yield remarkable performance improvement.
A main challenge for multi-label learning is the difficulty in collecting ``full'' labels for supervised training. In the case of a large number of categories, it is difficult and even impractical to simultaneously annotate the full label set for each training image. As a result, images are usually annotated with partial labels and, hence, available training data is often plagued by false negatives due to mislabeling. In this context, multi-label learning in the presence of missing labels~(MLML) has attracted much research attention~\cite{yu2014large,durand2019learning,huynh2020interactive,cole2021multi}.
Generally, learning in the presence of missing labels is not straightforward, since a given negative label admits two possibilities: {\it{1)}}~it is truly negative, i.e., the corresponding class does not appear in the image; {\it{2)}}~it is actually positive but was simply not identified by the labeling mechanism.
Existing works on MLML mainly utilize label correlation or transition matrix to estimate the missing labels by modifying the network or training procedures~\cite{huynh2020interactive,chu2018deep,bucak2011multi}, which typically require extra complex architectures or training schemes.
Besides, missing labels can also be regarded as a branch of label noise, where noise-robust loss design has shown remarkable performance in single-label recognition tasks~\cite{liu2015classification,zhang2018generalized,ren2018learning,wang2019symmetric}. However, robust loss design remains underexplored in multi-label learning.
\begin{figure}[t]
\centering
\includegraphics[width=.45\textwidth]{figures/fig1}
\caption{(a) Illustration of multi-label learning with missing labels. (b) Illustration of our proposed Hill loss and self-paced loss correction~(SPLC) for negatives.}
\label{img:beginning}
\end{figure}
This work aims to provide some insights into MLML through investigating the efficacy of robust loss function.
In particular, we demonstrate that a careful design of the loss function can greatly boost the robustness against the missing labels and the performance of classification accuracy, while still maintaining a simple and efficient solution, based on standard architectures and training schemes.
This benefits from the distinctive observation and characteristic in multi-label learning.
We observe that the predicted probability from a model can be used to identify the false negatives~(missing labels) with a surprisingly high precision even at the early stage of training.
Specifically, it is widely acknowledged that there exists an imbalance between positives and negatives in the multiple binary classifiers, which results in the domination of negatives over positives. Due to this fact, the predicted positives are inclined to be true positives~(including labeled positives and missing labels) with high confidence.
Inspired by this observation, we propose two simple yet effective methods to diminish the effect of missing labels.
{\it{1)}}~The first is a new robust loss for negatives, namely the Hill loss, which is derived by re-weighting the MSE loss to alleviate the effect of false negatives.
Specifically, the Hill loss puts less weight on potential false negatives and easy ones, but more weight on middle semi-hard ones to make the training of the model insensitive~(robust) to false negatives.
{\it{2)}}~The second is a \textbf{S}elf-\textbf{P}aced \textbf{L}oss \textbf{C}orrection~(termed \textbf{SPLC}) method, which uses a loss derived from the maximum likelihood~(ML) criterion under an approximate probability distribution of the missing labels.
Different from traditional correction techniques utilizing extra networks~\cite{chen2019multi,pineda2019elucidating} or extra training procedure~\cite{bucak2011multi}, the SPLC method gradually corrects missing labels based on model prediction in an automatically self-paced manner.
Moreover, we propose the Focal margin loss to exploit semi-hard positives by a simple modification to the Focal loss~\cite{lin2017focal}.
Improvements brought by the Focal margin demonstrate that semi-hard mining is more appropriate than hard mining for positives.
Extensive experiments show that our approaches can significantly outperform existing methods while introducing no extra complex architectures or training schemes.
The main contributions are summarized as follows:
\begin{itemize}
\item An interesting observation in multi-label learning that a model can identify false negatives~(missing labels) even in the early stage of training;
\item A systematic loss study on the problem of multi-label learning with missing labels that can simplify the pipeline in this field;
\item A novel Hill loss to re-weight negatives and a simple self-paced loss correction method that are robust against missing labels;
\item Comprehensive experiments that validate the effectiveness of our approaches and establish the new state-of-the-arts of loss function in MLML on multiple datasets.
\end{itemize}
\subsection{Semi-hard Mining for Positives}
In addition to dealing with negatives via Hill and SPLC, we also delicately tailor the loss for positives to exploit the feature of multi-label learning.
In fact, hard-mining methods that can yield improvement in single-label learning are not widely used in multi-label learning.
For example, the recent state-of-the-art method ASL only adopts the naive BCE loss for positives.
Here, we attempt to explain why hard-mining does not perform well in multi-label learning by the comparative analysis of Fig.~\ref{img:observation} (b) and (d).
It indicates that excessively hard mining brought by Focal loss is inappropriate for multi-label learning.
Specifically, despite the decrease in the number of hard positives~(low probability, below $0.2$), Focal loss pushes only about 1.5\% of hard positives to a high probability~(above 0.5).
Meanwhile, Focal loss ignores a large proportion of semi-hard ones with moderate probability~({\it{e.g.}}, in the region $[0.3, 0.5]$).
Therefore, we seek to further emphasize semi-hard positives compared with Focal loss.
An intuitive approach is to subtract a margin from the logits. In this manner, the logits are treated as smaller values and receive larger gradients owing to the shape of the Focal loss. Meanwhile, thanks to the characteristic ``S''-shaped curve of the sigmoid function, classes with probability in the middle region~(logit value around 0, corresponding probability around 0.5) are most heavily affected,
meaning that they are given larger gradients compared with those at both ends~(easy and hard samples).
Formally, we propose the Focal margin loss as:
\begin{equation}
L^+_{Focal\;margin} = (1-p_{m})^\gamma\log(p_{m})
\end{equation}
where $p_{m} = \sigma(x-m)$ and $m$ is a margin parameter. Focal margin loss degrades to Focal loss when $m=0$. $\gamma$ is set to 2, a commonly used value in Focal loss.
The gradients of BCE and Focal margin losses with different $m$ are shown in Fig.~\ref{img:posloss}. It can be observed that Focal margin endows larger weight on semi-hard positives compared with Focal loss. In our work, $m\in[0.5,1.0]$ is recommended to avoid the highlight of easy positives.
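The effect of the margin can be checked numerically. The sketch below evaluates the gradient magnitude of the positive Focal margin loss by finite differences at a semi-hard logit ($x=0$, i.e., $p=0.5$); with $\gamma=2$, a margin of $m=1$ yields a noticeably larger gradient than plain Focal loss ($m=0$), consistent with Fig.~\ref{img:posloss}. The function names are illustrative, not from a released implementation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def focal_margin_pos_loss(x, m=1.0, gamma=2.0):
    # Positive-label loss written as a minimized (positive) quantity:
    # -(1 - p_m)^gamma * log(p_m), with p_m = sigmoid(x - m).
    p_m = sigmoid(x - m)
    return -((1.0 - p_m) ** gamma) * math.log(p_m)

def grad(f, x, eps=1e-6):
    # central finite difference, adequate for this illustration
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# At a semi-hard logit (x = 0, i.e. p = 0.5), the margin m = 1 yields a
# larger gradient magnitude than plain Focal loss (m = 0).
g_focal = abs(grad(lambda x: focal_margin_pos_loss(x, m=0.0), 0.0))
g_margin = abs(grad(lambda x: focal_margin_pos_loss(x, m=1.0), 0.0))
```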
Actually, the introduction of margin is widely used in the design of loss function, most notably in SVMs~\cite{cortes1995support}.
Recently, Large-Margin Softmax~\cite{liu2016large} and Arcface~\cite{deng2019arcface} have been proposed to minimize intra-class variation and at the same time enlarge the inter-class margin by using an angular margin. This paper is the first work to perform semi-hard mining through the combination of a margin and the sigmoid activation.
To sum up, the Focal margin loss for positives is integrated with the Hill loss and SPLC for negatives, respectively, into our final approaches~(denoted by Hill and Focal margin + SPLC).
\subsection{Re-weighting Negatives into a ``Hill''}
A straightforward method to alleviate the effect of missing labels is to design a robust loss that is insensitive to false negatives. Generally, the prediction probability of a well-trained model should be closer to 1 than 0 for false negatives. Hence, the effect of false negatives can be mitigated by putting less weight on prediction probabilities close to 1 in the loss for negative labels. As shown in Fig.~\ref{img:negloss}, the Mean Squared Error~(MSE) loss naturally satisfies this desirable robustness property, as it has a low weight on prediction probabilities close to 1. Hence, it would be more robust than the BCE loss and can yield a better performance in the presence of missing labels.
To further improve the robustness against false negatives, we propose the Hill loss by re-weighting the MSE loss as
\begin{equation}
\begin{aligned}
\mathcal{L}_{Hill}^- &= -w(p)\times MSE\\
&=-(\lambda-p){p}^2.
\end{aligned}
\end{equation}
The weighting term $w(p)=\lambda-p$ is designed to down-weight the loss for possibly false negatives. To fulfill this goal, considering a typical threshold of 0.5 for sigmoid-based binary classification, the hyper-parameter $\lambda$ can be selected to satisfy $\frac{\partial^2L_{Hill}^-}{\partial x^2}=0$ for $p=\sigma(x)=0.5$. Accordingly, the solution is given by $\lambda=1.5$, with which we obtain the Hill loss as illustrated in Fig.~\ref{img:negloss}. Clearly, it can be seen from Fig.~\ref{img:negloss} that the Hill loss has less weight than the MSE loss for $p\geq0.5$. Hence, it can be expected to achieve more robust performance against missing labels than MSE.
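The choice $\lambda=1.5$ can be verified numerically: writing the Hill loss for negatives as the minimized quantity $(\lambda-p)p^2$ with $p=\sigma(x)$, its second derivative with respect to the logit vanishes at $p=0.5$ when $\lambda=1.5$, so the gradient peaks on semi-hard negatives. The finite-difference sketch below checks this; it is an illustration, not the training code.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hill_neg_loss(x, lam=1.5):
    # Negative-label Hill loss as a minimized quantity: (lam - p) * p^2
    p = sigmoid(x)
    return (lam - p) * p * p

def second_derivative(f, x, eps=1e-4):
    return (f(x + eps) - 2 * f(x) + f(x - eps)) / (eps * eps)

# At p = 0.5 (x = 0), lambda = 1.5 makes the curvature vanish, i.e. the
# gradient peaks there; other lambda values do not.
curvature_at_half = second_derivative(hill_neg_loss, 0.0)
```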
\begin{figure}[t]
\centering
\includegraphics[width=.40\textwidth]{figures/neg}
\caption{Gradient analysis of the Hill loss~(shaped like a hill) in comparison with current loss functions. The Hill loss puts more attention on semi-hard samples, while potential false negatives with high probability are given less emphasis.}
\label{img:negloss}
\end{figure}
Fig.~\ref{img:negloss} compares the gradients of different loss functions for negative labels. It is noteworthy that each of the ASL, MSE and Hill losses has less weight on possible false negatives compared with the BCE loss. However, ASL can still be seriously affected by missing labels, since it only down-weights the region very close to 1, {\it{e.g.}}, $p > 0.9$. In fact, a large part of missing labels has a prediction probability less than 0.9, as shown in Fig.~\ref{img:observation}. Though the ASL loss can be adjusted by the parameters in Eq.~(4), the adjustable extent is restricted, which is analyzed in detail in Appendix B.
\iffalse
\begin{algorithm}[t]
\caption{Self-Paced Loss Correction.}
\label{alg:SPLC}
\textbf{Input:}
Predicted results processed by sigmoid $p=\{p_1,p_2,...,p_k\}$, ground truth $y=\{y_1,y_2,...,y_k\}$, threshold parameter $thres$, epoch num $epo$.\\
\textbf{Output:}
$loss$
\begin{algorithmic}[1]
\STATE Define $loss^+(), loss^-()$ for positives and negatives
\STATE $loss=0$
\REPEAT
\IF{$epo>1$}
\FOR{$i=1$ to $k$}
\IF{$y_i=0$ and $p_i<thres$}
\STATE $loss \leftarrow loss^-(x_i)$
\ELSE
\STATE $loss \leftarrow loss^+(x_i)$
\ENDIF
\ENDFOR
\ENDIF
\RETURN loss
\UNTIL{Convergence}
\end{algorithmic}
\end{algorithm}
\fi
\subsection{An Observation in MLML}
\begin{figure}[t]
\subfigure{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=1.7in]{figures/distribution-bce/bce-epoch2.pdf}
\scriptsize (a)~The early stage of BCE
\centering
\includegraphics[width=1.7in]{figures/distribution-bce/focal-epoch1-0.4.pdf}
\scriptsize (c)~The early stage of Focal
\end{minipage}%
}%
\subfigure{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=1.7in]{figures/distribution-bce/bce-epochbest-overfitting.pdf}
\scriptsize (b)~The late stage of BCE
\centering
\includegraphics[width=1.7in]{figures/distribution-bce/focal-epochbest-overfitting.pdf}
\scriptsize (d)~The late stage of Focal
\end{minipage}%
}%
\centering
\caption{Probability distribution of a network prediction at the early and late stages of training on the COCO training set with 40\% labels left. (a) and (c) indicate that false negatives can be identified with a high precision but a low recall at the early training stage. (b) and (d) show that the network predictions for missing labels share a similar distribution to that for true negatives, which implies that the network over-fits to labeled positives at the late stage of training.}
\label{img:observation}
\end{figure}
Under a typical missing label setting, Fig.~\ref{img:observation} shows the probability distribution of a network prediction at the early and late stages of training with BCE and Focal loss. We make an observation:
\emph{Missing labels~(false negatives, FNs) can be distinguished from true negatives~(TNs) according to the predicted probabilities of a network at the early training stage.}
Fig.~\ref{img:observation}(a) and (c) show that although most missing labels are poorly classified at the early training stage, partial missing labels can be identified with high precision but low recall by setting a proper threshold. For example, when using the BCE loss, partial FNs can be distinguished from TNs with a threshold of 0.3 on prediction probability. Similarly, when using the Focal loss, such a threshold can be set to 0.4.
The reason behind this phenomenon is the distinctive features in multi-label learning:
{\it{1)}}~The domination of negatives over positives in multiple binary classifiers: in the multi-label setting, an image typically contains a few positives but many more negatives, which results in a serious positive-negative imbalance~\cite{ben2020asymmetric};
{\it{2)}}~missing labels exacerbate the positive-negative imbalance and further hinder the learning of positives.
Therefore, the low ratio of positives makes the model conservative in predicting positives; once it does make such a prediction, we can be confident that it is correct.
Furthermore, it can be seen from Fig.~\ref{img:observation}(b) and (d) that, at the late stage of training, the network tends to over-fitting. As a result, the distribution of the network prediction for missing labels~(FNs) is similar to that for TNs, which makes the missing labels indistinguishable from true negatives.
That is, the network tends to prioritize learning simple patterns~(e.g., true labels) first before eventually memorizing~(over-fitting to) hard ones~(e.g., noisy/false labels).
This phenomenon accords well with the mainstream viewpoint in noise-robust learning, the \textit{memorization effect}: ``deep neural networks easily fit random labels''~\cite{zhang2021understanding} and ``deep neural networks~(DNNs) learn simple patterns first, before memorizing''~\cite{arpit2017closer}.
In light of this, it would be beneficial to exploit these distinguishable missing labels at the early stage of training rather than the late stage.
The above analysis sheds some light on how to relieve the effect of false negatives in the presence of missing labels. In the following, we propose two approaches via down-weighting and correcting potential missing labels, respectively.
\label{sec_observation}
\section{Simple and Robust Loss Design for MLML}
In this section, we first review classic losses in multi-label learning. Then, we make an observation in the scenario of missing labels, based on which the Hill loss and the SPLC method are introduced to deal with the probably mislabeled negatives. Furthermore, we make a simple modification on the positive part of the Focal loss to highlight semi-hard positives.
\subsection{Preliminary}
Multi-label classification is usually transformed into a multiple binary classification problem, in which multiple binary classifiers are trained to predict one-vs.-rest for each label independently.
Given $K$ labels, the network predicts the logit $x_i$ of the $i$-th label independently; the probabilities are then obtained by normalizing the logits with the sigmoid function, $p_{i} = \sigma(x_i) = \frac{1}{1+e^{-x_i}}$. Letting $y_i$ denote the label for the $i$-th class, the binary classification loss is generally given by
\begin{equation}
\mathcal{L} = - \sum_{i=1}^K \left( y_i L^+_i + (1-y_i)L^-_i \right),
\end{equation}
where $L_i^+$ and $L_i^-$ stand for the positive and negative losses of $i$-th label, respectively. For simplicity, in the sequel we ignore the subscript $i$ in $L_i^+$ and $L_i^-$ to use $L^+$ and $L^-$.
The Binary Cross Entropy~(BCE) loss is the most popular loss function in multi-label classification, which is defined as
\begin{equation}
\begin{cases}
&L_{BCE}^+ = \log(p) \\
&L_{BCE}^- = \log(1-p)
\end{cases}.
\end{equation}
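For concreteness, Eqs.~(1)-(2) amount to summing an independent sigmoid BCE term per label. A plain-Python sketch follows, omitting the numerical-stability tricks (e.g., log-sum-exp) used in practical implementations:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def multilabel_bce(logits, targets):
    # L = -sum_i [ y_i * log(p_i) + (1 - y_i) * log(1 - p_i) ]  (Eqs. (1)-(2))
    total = 0.0
    for x, y in zip(logits, targets):
        p = sigmoid(x)
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total
```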
Further, the Focal loss~\cite{lin2017focal} is proposed to address the problem of positive-negative imbalance and hard-mining, which is given by
\begin{equation}
\begin{cases}
&L_{Focal}^+ = \alpha_+(1-p)^\gamma\log(p) \\
&L_{Focal}^- = \alpha_-p^\gamma\log(1-p)
\end{cases},
\end{equation}
where $\gamma$ is a focus parameter, and $\alpha_+$ and $\alpha_-$ are utilized to balance positives and negatives. By using $\gamma>0$, hard samples are given more attention.
More recently, ASL~\cite{ridnik2021asymmetric} is proposed to relieve positive-negative imbalance by operating differently on positives and negatives, which is defined as:
\begin{equation}
\begin{cases}
&L_{ASL}^+ = (1-p_m)^{\gamma_+}\log(p_m) \\
&L_{ASL}^- = p_m^{\gamma_-}\log(1-p_m)
\end{cases},
\end{equation}
where $\gamma_+$ and $\gamma_-$, with $\gamma_+<\gamma_-$, are focus parameters for positive and negative labels, respectively, and $p_m=\max(p-m,0)$. The probability margin $m \geq 0$ is a tunable hyper-parameter. The ASL loss reduces the weight of easy negatives by using $\gamma_+ < \gamma_-$, and discards negatives with low predicted probability via the $m$-shifted probability.
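A sketch of ASL follows. Here the probability shift is applied to the negative term only, the common convention in practice, and the parameter values ($\gamma_+=0$, $\gamma_-=4$, $m=0.05$) are typical defaults rather than values prescribed in this paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def asl_loss(logits, targets, gamma_pos=0.0, gamma_neg=4.0, m=0.05):
    # Asymmetric focusing (gamma_pos < gamma_neg); the shifted probability
    # p_m = max(p - m, 0) discards easy negatives entirely (Eq. (4)).
    total = 0.0
    for x, y in zip(logits, targets):
        p = sigmoid(x)
        if y == 1:
            total -= ((1.0 - p) ** gamma_pos) * math.log(p)
        else:
            p_m = max(p - m, 0.0)
            total -= (p_m ** gamma_neg) * math.log(1.0 - p_m)
    return total
```

Note how a confidently negative prediction ($p \le m$) contributes exactly zero loss, which is the "discarding" behavior mentioned above.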
\subsection{Self-Paced Loss Correction}
This subsection presents a loss correction method to alleviate the effect of missing labels. It gradually corrects the potential missing labels during training, so we term it \textbf{S}elf-\textbf{P}aced \textbf{L}oss \textbf{C}orrection~(\textbf{SPLC}).
SPLC can be viewed as an approximate maximum likelihood~(ML) method, as it adopts a loss derived from the ML criterion under an approximate distribution of missing labels.
First, for the full label setting~(without missing labels), the BCE loss (Eq.~(2)) is optimal from the ML criterion under Bernoulli distribution
\begin{equation}
P_r(y) = p^y(1-p)^{1-y} =
\begin{cases}
p, &y=1\\
1-p, & y=0
\end{cases}.
\end{equation}
Meanwhile, minimizing Eq.~(2) is equivalent to minimizing the KL divergence between the label and the model output. In the presence of missing labels, for a negative label $y=0$, let $q\in[0,1]$ be the probability that its corresponding class is truly negative, while $1-q$ is the probability that its corresponding class is a false negative~(caused by mislabeling).
In this setting, the distribution in Eq.~(6) no longer applies; a modified distribution is:
\begin{equation}
\begin{aligned}
P_r(y,s) &= (qp)^y {\left\{(1-p)^{s} [(1-q)p]^{1-s} \right\}}^{1-y} \\ &=
\begin{cases}
qp, &y=1\\
1-p, & y=0,s=1 \\
(1-q)p, &y=0,s=0
\end{cases},
\end{aligned}
\end{equation}
where $s\in\{0,1\}$ is an indicating variable of true and false negatives, with $s=0$ standing for a false negative.
With this distribution, the optimal loss from the ML criterion is given by
\begin{equation}
\begin{cases}
&L^+ = \log(p) \\
&L^- = s\log(1-p) + (1-s)\log(p)
\end{cases}.
\end{equation}
Due to the labeling mechanism, which negative labels are missing is not known a priori, {\it{i.e.}}, $s$ is not known a priori. Hence, the loss (Eq.~(8)) cannot be directly used for training.
However, in practice the missing labels can be empirically identified based on the model prediction as analyzed in Sec.~\ref{sec_observation}.
Hence, for a given negative label $y=0$, we can set a threshold $\tau\in(0,1)$ to identify whether it is a truly negative or false negative based on the model prediction probability $p$. Specifically, for a given negative label, it would be a missing label with high probability if $p>\tau$. In light of this understanding, a new loss that is robust to missing labels can be designed by recasting Eq.~(8) into:
\begin{equation}
\begin{cases}
&L^+ = \log(p) \\
&L^- = \mathbb{I}(p\leq \tau)\log(1-p) + (1-\mathbb{I}(p\leq \tau))\log(p)
\end{cases},
\end{equation}
where $\mathbb{I}(\cdot)\in\{0,1\}$ is the indicator function.
Furthermore, we can combine SPLC with the generalized loss function as
\begin{equation}
\begin{cases}
&L_{SPLC}^+ = loss^+(p) \\
&L_{SPLC}^- = \mathbb{I}(p\leq \tau)loss^-(p) + (1-\mathbb{I}(p\leq \tau))loss^+(p)
\end{cases},
\end{equation}
where $loss^+(\cdot)$ and $loss^-(\cdot)$ refer to generalized loss functions for positives and negatives, respectively.
In implementation, SPLC corrects a negative label whenever its predicted probability exceeds a fixed threshold during training. This offers two main advantages over the pseudo-label method: {\it{1)}}~SPLC is more efficient. In the pseudo-label method, a trained model is used to correct the labels and the network is then retrained, whereas in SPLC the network only needs to be trained once. {\it{2)}}~SPLC is more accurate. In the pseudo-label method, the network used for label correction is trained to convergence on noisy data, leading to poor capability in recognizing missing labels. In contrast, correcting missing labels at the early stage of training reduces the influence of noisy data.
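Eq.~(10) with BCE as the base losses reduces to a one-line label swap. In the sketch below, the threshold value $\tau=0.6$ is an arbitrary illustration, not a recommendation from this excerpt, and the function name is hypothetical.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def splc_loss(logits, targets, tau=0.6):
    # Eq. (10) with BCE as the base losses: a negative label whose predicted
    # probability exceeds tau is treated as a (corrected) positive.
    total = 0.0
    for x, y in zip(logits, targets):
        p = sigmoid(x)
        if y == 1 or p > tau:         # labeled positive, or corrected negative
            total -= math.log(p)
        else:                         # trusted negative
            total -= math.log(1.0 - p)
    return total
```

A confident prediction on an unlabeled class thus incurs the small positive-label loss rather than the large penalty plain BCE would assign.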
\iffalse
The threshold $thres$ can be adapted from the distribution of predicted results in every mini-batch. In every mini-batch, positive and negative predicted results are represented as $p_{pos}, p_{neg}$, respectively. The threshold is calculated as:
\begin{equation}
\begin{aligned}
\bar{p}_{neg} = mean(p_{neg}) \\
\epsilon_{neg} = std(p_{neg}) \\
thres = mean(p_{pos}) \\
\end{aligned}
\end{equation}
\fi
\begin{figure}[t]
\centering
\includegraphics[width=.40\textwidth]{figures/pos}
\vspace{-.5em}
\caption{Gradient analysis of Focal margin loss with different margins in comparison with BCE. Without margin~($m=0$), Focal margin degrades into Focal loss, focusing on hard samples, whereas a proper margin~($m\in[0.5,1.0]$) enables Focal margin to highlight semi-hard samples.}
\vspace{-1.5em}
\label{img:posloss}
\end{figure}
\section{Related Work}
\subsection{Multi-label Learning with Missing Labels}
Multi-label learning with incomplete labels has been a hot topic in the community of weakly-supervised learning, due to the difficulty of annotating all the ground-truth labels~\cite{liu2021emerging}.
There are two popular related research topics for incomplete labels, Partial Multi-Label Learning ~(PML)~\cite{xu2019partial, xie2021partial}, and Multi-label Learning with Missing Labels~(MLML)~\cite{yu2014large,durand2019learning}.
To avoid the ambiguity of settings, we clarify the difference in Table~\ref{tab:missing kind}.
PML requires labeling all positive labels, which possibly introduces extra false positive labels and costs more. In other words, methods designed for PML typically rely on a candidate label set that includes all the positive labels occurring in the image.
By comparison, not all positive labels are required in MLML, which is more practical due to the lower labeling cost.
In this work, we further relax the MLML setting, where no exact negative labels~\cite{durand2019learning} are utilized and the number of positive labels is not limited to single~\cite{cole2021multi}.
This means the setting we work on is more flexible and general in real-world applications.
\begin{table}[t]
\scriptsize
\caption{Comparison among different settings in multi-label classification. \Checkmark (resp. \XSolidBrush, \textcircled{}) means a label is present (resp. absent, unknown). Falsely marked labels are in red. PML requires that all positive labels be annotated. By comparison, MLML relaxes the annotation requirement.}
\centering
\begin{tabular}{c|ccccc}
\hline
Settings & label1 & label2 & label3 & label4 & label5
\\
\hline
Full labels & \Checkmark &\Checkmark &\XSolidBrush &\XSolidBrush &\XSolidBrush \\
PML &\Checkmark &\Checkmark &\textcolor{red}{\Checkmark} &\textcircled{} &\textcircled{}\\
MLML &\Checkmark &\textcolor{red}{\textcircled{}} &\textcircled{} &\textcircled{} &\textcircled{}\\
\hline
\end{tabular}
\label{tab:missing kind}
\end{table}
\iffalse
\begin{table}[t]
\vspace{1em}
\scriptsize
\tablestyle{8pt}{1.5}
\begin{tabular}{c|cc|cc}
\multirow{2}{*}{Settings} & \multicolumn{2}{c|}{Positive labels}&
\multicolumn{2}{c}{Negative labels}
\\
&
\multicolumn{1}{c}{True} &
\multicolumn{1}{c|}{False} &
\multicolumn{1}{c}{True} &
\multicolumn{1}{c}{False}
\\
\hline
Full labels & \checkmark & &\checkmark & \\
Partial labels & \checkmark & \checkmark &\checkmark &\\
Missing labels & \checkmark & &\checkmark & \checkmark\\
\end{tabular}
\caption{The comparison between different settings in Multi-label classification. False means that the positives~(resp. negatives) are incorrectly labeled as negatives~(resp. positives).}
\label{tab:missing kind}
\vspace{-1em}
\end{table}
\fi
Current works on MLML mainly focus on the design of networks and training schemes. The common practice is to utilize customized networks to learn label correlations or classification confidence so as to recognize missing labels correctly. \cite{zhang2018generalized} built a generative model to approximate the distribution of observed labels, by which mislabeled samples can be recognized. \cite{durand2019learning} proposed multi-stage training procedures to correct the missing labels, utilizing curriculum-learning-based strategies with the Bayesian uncertainty strategy for label correction.
This domain has seen fast progress recently, at the cost of requiring more complex methods.
In contrast, this work seeks to fulfill the potential of the loss function in MLML: it builds on standard architectures and adds no extra training or inference pipelines or time.
Besides, our methods are fully complementary to existing works and can be combined with them to further improve the performance at the cost of increasing the complexity.
\subsection{Noisy-robust Learning}
Noisy-robust learning is an important problem in training Deep Neural Networks~(DNNs) since label noise is ubiquitous in real-world scenarios.
Label noise can lead DNNs to overfit to such noise due to the ``memorization effect'' observed by \cite{zhang2021understanding,arpit2017closer}, eventually degrading generalization performance.
Sample re-weighting and loss correction are two typical approaches in this field.
Sample re-weighting methods~\cite{liu2015classification,ren2018learning} typically reduce the contribution of noisy samples by down-weighting the loss of them. Many researchers~\cite{hu2019noise,zhong2019unequal} extended this idea to face recognition for noisy-robust feature learning.
Loss correction refers to the modification of the loss to compensate for the incorrect guidance provided by noisy samples.
A basic approach is the pseudo label~\cite{lee2013pseudo}, which utilizes a trained model to correct annotations and then retrain the model using the updated labels.
In~\cite{arazo2019unsupervised}, researchers found that clean and noisy samples can be distinguished from the loss distribution in noisy single-label datasets. Inspired by this observation, they proposed a mixture distribution method to correct noisy labels.
\cite{cole2021multi} proposed ROLE to jointly train classifiers and label estimator, by which the true labels can be estimated and corrected in the training phase.
Though many works on loss design achieve notable success and push forward research in noise-robust learning, most~(if not all) existing works focus on the single-label task and ignore the multi-label task.
This work validates that typical noise-robust loss functions in the single-label task~(both sample re-weighting and loss correction) can also be efficiently applied in multi-label learning.
\section{Proof of Theorem~\ref{t1}}
The necessity is evident. We shall prove the sufficiency. Let
$(H,\tau)$ be a $\mathcal G$-separated paratopological group,
where $\mathcal G$ is $T^\flat$-stable class of topological
groups. Since the group $H$ is $\mathcal G$-separated, there
exists a group topology $\sigma$ on the group $H$ such that
$(H,\sigma)\in\mathcal G$. We shall define the topology on the
product $G=H\times T$ as follows. Let $\mathcal B_\tau$, $\mathcal
B_\sigma$ and $\mathcal B_T$ be open bases at the unit of the
groups $(H,\tau)$, $(H,\sigma)$ and $T$ respectively. For
arbitrary neighborhoods $U_\tau\in \mathcal B_\tau$, $U_\sigma\in
\mathcal B_\sigma$ and $U_T\in \mathcal B_T$ with $U_\tau\subset
U_\sigma$ put $[U_\tau, U_\sigma,U_T]=U_\tau\times\{e_T\}\cup
U_\sigma\times (U_T\bs\{e_T\})$, where $e_H$ and $e_T$ are the
units of the groups $H$ and $T$ respectively. The family of all
such $[U_\tau, U_\sigma,U_T]$ will be denoted by $\mathcal B$. Now
we verify the Pontryagin conditions for the family $\mathcal B$.
Condition 1 is trivial.
To check Condition 2 consider an arbitrary set $[U_\tau,
U_\sigma,U_T]\in\mathcal B$. There exist neighborhoods
$V_\tau\in\mathcal B_\tau$, $V_\sigma\in\mathcal B_\sigma$ such
that $V_\tau^2\subset U_\tau$, $V_\sigma^2\subset U_\sigma$ and
$V_\tau\subset V_\sigma$. Since the group $T^\#$ is discrete, there is
a neighborhood $V_T\in\mathcal B_T$ such that
$(V_T\bs\{e_T\})^2\subset U_T\bs\{e_T\}$. Then $[V_\tau,
V_\sigma,V_T]^2\subset [U_\tau, U_\sigma,U_T]$.
To verify Condition 3, consider an arbitrary point $x\in
[U_\tau,U_\sigma,U_T]\in\mathcal B$. If $x=(x_H,e_T)$, where
$x_H\in U_\tau$ then there exist neighborhoods $V_\tau\in\mathcal
B_\tau$, $V_\sigma\in\mathcal B_\sigma$ such that $V_\tau\subset
V_\sigma$, $x_HV_\tau\subset U_\tau$ and $x_HV_\sigma\subset
U_\sigma$. Then $x[V_\tau,V_\sigma, U_T]\subset [U_\tau,U_\sigma,
U_T]$. If $x=(x_H,x_T)$, where $x_H\in U_\sigma$ and $x_T\in
U_T\bs\{e_T\}$ then there exist neighborhoods $V_\tau\in\mathcal
B_\tau$, $V_\sigma\in\mathcal B_\sigma$ and $V_T\in\mathcal B_T$
such that $V_\tau\subset V_\sigma$, $x_HV_\sigma\subset U_\sigma$
and $x_TV_T\subset U_T\bs\{e_T\}$. Then $x[V_\tau, V_\sigma,
V_T]\subset [U_\tau, U_\sigma, U_T]$.
Condition 4. Let $x=(x_H,x_T)\in H\times T$ be an arbitrary
point and let $[U_\tau,U_\sigma,U_T]\in\mathcal B$. Then there are neighborhoods $V_\tau\in\mathcal B_\tau$,
$V_\sigma\in\mathcal B_\sigma$ and $V_T\in\mathcal B_T$ such that
$V_\tau\subset V_\sigma$, $x_H^{-1}V_\tau x_H\subset U_\tau$,
$x_H^{-1}V_\sigma x_H\subset U_\sigma$ and $x_T^{-1}V_Tx_T\subset
U_T$. Then $x^{-1}[V_\tau,V_\sigma, V_T]x\subset [U_\tau,U_\sigma,
U_T]$.
Hence the family $\mathcal B$ is a base of a semigroup topology on
the group $G$. Denote this semigroup topology by $\rho$. The
inclusion
$\bigcap\{[U_\tau,U_\sigma,U_T]\cdot[U_\tau,U_\sigma,U_T]^{-1}:U_\tau\in\mathcal
B_\tau, U_\sigma\in\mathcal B_\sigma, U_T\in\mathcal
B_T\}\subset\bigcap\{U_\sigma U_\sigma^{-1}\times
U_TU_T^{-1}:U_\sigma\in\mathcal B_\sigma, U_T\in\mathcal
B_T\}=\{(e_H,e_T)\}$ implies that the topology $\rho$ is
Hausdorff. Since the groups $T$ and $(H,\sigma)$ are saturated and
the group $T$ is nondiscrete, the group $(G,\rho)$ is saturated
too. According to \cite[Proposition 3]{BR1} the base at the unit
of the topology $\rho^\flat$ consists of the sets $UU^{-1}$, where
$U\in\mathcal B$. Thus the topology $\rho^\flat$ coincides with
the product topology of the groups $(H,\sigma)\times T^\flat$ and
hence $(G,\rho^\flat)\in\mathcal G$ and $H$ is a $\flat$-closed
subgroup of the group $G$.
\section{Proof of Theorem~\ref{t2}}
The ``if'' part of Theorem~\ref{t2} is trivial. To prove the
``only if'' part, suppose that $T$ and $(H,\tau)$ are
paratopological groups with the units $e_T$ and $e_H$, satisfying
the hypothesis of Theorem~\ref{t2}.
Using the Sorgenfrey property of the group $T$, choose an open
invariant neighborhood $U_0$ of the unit $e_T$ such that for any
neighborhood $U\subset T$ of $e_T$ there is a neighborhood
$U'\subset T$ of $e_T$ such that $x,y\in U$ for any elements
$x,y\in U_0$ with $xy\in U'$. By induction we can build a sequence
$\{U_n:n\in\omega\}$ of invariant open neighborhoods of $e_T$
satisfying the following conditions:
(1) $\{U_n:n\in\omega\}$ is a neighborhood base at the unit $e_T$
of the group $T$;
(2) $U_{n+1}^2\subset U_n$ for every $n\in\omega$;
(3) for every $n\in\omega$ and any points $x,y\in U_0$ the
inclusion $xy\in U_{n+1}$ implies $x,y\in U_n$;
(4) $\ol {U_{n}}^\flat\subsetneqq U_{n-1}$ for every $n\ge 1$,
where $\ol {U_{n}}^\flat$ denotes the closure of the set $U_{n}$
in the topology of $T^\flat$.
Remark that condition (3) yields
(5) $(U_0\bs U_n)U_0\cap U_{n+1}=\emptyset$ and hence $(U_0\bs
U_n)\cap U_{n+1}U_0^{-1}=\emptyset$ for all $n$. Indeed, if $x\in
U_0\bs U_n$, $y\in U_0$, and $xy\in U_{n+1}$, then (3) gives
$x,y\in U_n$, a contradiction.
\noindent Since the group $T$ is saturated, we can apply
Proposition~3 of \cite{BR1} to conclude that the set
$U_{n+2}U_0^{-1}$ is a neighborhood of the unit in $T^\flat$. Then
the set $U_{n+2}U_{n+2}U_0^{-1}\subset U_{n+1}U_0^{-1}$ is a
neighborhood of $U_{n+2}$ in $T^\flat$. This observation together
with (5) yields
(6) $\overline{U_0\bs U_n}^\flat\cap U_{n+2}=\emptyset$ for all
$n$.
It follows from our assumptions on $(H,\tau)$ that there exists a
group topology $\sigma\subset\tau$ on $H$ such that the group
$(H,\sigma)$ belongs to the class $\mathcal G$ and $(H,\tau)$ has
a neighborhood base $\mathcal B_\tau$ at the unit $e_H$ consisting
of sets, closed in the topology $\sigma$. By induction we can
build a base $\{V_n:n\in\omega\}$ of open symmetric invariant
neighborhoods of $e_H$ in the topology $\sigma$ such that
$V_{n+1}^2\subset V_n$ for every $n\in\omega$.
Consider the product $H\times T$ and identify $H$ with the
subgroup $H\times\{e_T\}$ of $H\times T$. It remains to define a
topology on $H\times T$. At first we shall introduce an auxiliary
sequence $\{W_k\}$ of ``neighborhoods'' of $(e_H,e_T)$ satisfying
the Pontryagin Conditions 1,2, and 4. For every $k\in\omega$ let
\medskip
$(\star)$\quad $ W_n=\{(e_H,e_T)\}\cup\bigcup\limits_{i>2n}
V_{ni}\times (U_{i-1}\setminus U_i)$ \smallskip
\noindent and observe that $W_{n+1}\subset W_n$ for all $n$. Let
us verify the Pontryagin Conditions 1,2,4 for the sequence
$(W_n)$.
To verify Conditions 1 and 2 it suffices to show that
$W_{n}^2\subset W_{n-1}$ for all $n\ge 1$. Fix any elements
$(x,t),(x',t')\in W_n$. We have to show that $(xx',tt')\in
W_{n-1}$. Without loss of generality, we can assume that $t,t'\ne
e_T$. In this case we may find numbers $i,i'> 2n$ with $(x,t)\in
V_{ni}\times (U_{i-1}\bs U_i)$ and $(x',t')\in
V_{ni'}\times(U_{i'-1}\bs U_{i'})$. For $j=\min\{i,i'\}$ the
Conditions (2), (5) imply
\begin{multline*} (xx',tt')\in
V_{nj-1}\times (U_{j-2}\bs U_{j+1})\subset
\bigcup_{k=j-1}^{j+1}V_{(n-1)k}\times (U_{k-1}\bs U_k)\subset \\
\bigcup_{k>2(n-1)}V_{(n-1)k}\times (U_{k-1}\bs U_k)\subset
W_{n-1}.\end{multline*}
Taking into account that both the sequences $\{U_n\}$ and
$\{V_n\}$ consist of invariant neighborhoods, we conclude that the
sets $W_n$ are invariant as well. Hence Condition 4 holds too.
Now, using the sequence $(W_n)$ we shall produce a sequence
$(O_n)$ satisfying all the Pontryagin Conditions 1--5. For every
$n\in\omega$ put $O_n=\bigcup_{i=n}^\infty W_nW_{n+1}\cdots W_i$.
Thus $W_n\supset O_{n+1}\supset W_{n+1}$ and $O_n\cap
(H\times\{e_T\})=\{(e_H,e_T)\}$ for all $n$. It is easy to see that
the sequence $\{O_n\}$ consists of invariant sets and satisfies
Pontryagin conditions 1--4. Hence the family $\{O_n\}$ is a
neighborhood base at the unit of some (not necessarily Hausdorff)
topology $\tau'$ on $G=H\times T$ turning $G$ into a
paratopological SIN-group. Applying Proposition 1.3 from
\cite{Ra1} we conclude that the family $\mathcal
B_\rho=\{OU:O\in\mathcal B_{\tau'}, U\in\mathcal B_\tau \}$ is a
neighborhood base at the unit of some (not necessarily Hausdorff)
semigroup topology $\rho$ on $G$ (here we identify $H$ with the
subgroup $H\times\{e_T\}$ in $G$). Since the topology $\rho$ is
stronger than the product topology $\pi$ of the group
$(H,\sigma)\times T^\flat$, the topology $\rho$ is Hausdorff and
$H$ is a $\flat$-closed subgroup of the group $(G,\rho)$. It
follows from the construction of the topology $\rho$ that
$\rho|H=\tau$, $\chi(G,\rho)=\chi(H)$ and $|G/H|=|T|$.
At the end of the proof we show that the paratopological group
$(G,\rho)$ is saturated and $\flat$-regular. To show that the
group $(G,\rho)$ is saturated it suffices to find for every $n\ge
1$ nonempty open sets $V\subset (H,\sigma)$ and $U\subset T$ such
that $V\times U^{-1}\subset W_n$. Taking into account that the
group $T$ is saturated and the set $U_{3n-1}\bs
\overline{U_{3n}}^\flat$ is nonempty, find a nonempty open set
$U\subset T$ such that $U^{-1}\subset U_{3n-1}\bs
\overline{U_{3n}}^\flat$. Then $V^{-1}_{3n^2}\times U^{-1}\subset
V_{3n^2}\times(U_{3n-1}\setminus U_{3n})\subset W_n$. This implies
that the group $(G,\rho)$ is saturated and
$(G,\rho^\flat)=(H,\sigma)\times T^\flat\in\mathcal G$.
The $\flat$-regularity of the group $(G,\rho)$ will follow as soon
as we prove that $\overline {W_{n}V}^\pi \subset W_{n-1}V$ for
every $n\ge 2$ and $V\in\mathcal B_\tau$. Indeed, in this case, we
shall get $$\overline{O_{n+1}V}^\flat\subset
\overline{O_{n+1}V}^\pi\subset\overline{W_nV}^\pi\subset
W_{n-1}V\subset O_{n-1}V.$$
Fix any $x\in\overline{W_nV}^\pi$. If $x\in V\times\{e_T\}$, then
$x\in W_{n-1}V$. Next, assume that $x\notin H\times\{e_T\}$. The
property (4) of the sequence $(U_k)$ implies that the point $x$
has a $\pi$-neighborhood meeting only finitely many sets $H\times
U_i$, $i\in\omega$. This observation together with $x\in
\overline{W_nV}^\pi$ and ($\star$) imply that
$x\in\overline{V_{ni}V\times (U_{i-1}\setminus U_i)}^\pi$ for
some $i>2n$. The condition (6) implies that the following chain of
inclusions holds:
\begin{multline*}
x\in\overline{V_{ni}V\times (U_{i-1}\setminus U_i)}^\pi\subset
\overline{V_{ni}V}^\sigma\times \overline{U_{i-1}\setminus
U_i}^\flat\subset V_{ni}^2V\times (U_{i-2}\setminus
U_{i+2})\subset\\ \bigcup_{j=i-1}^{i+2}
V_{ni-1}V\times(U_{j-1}\setminus U_j)\subset \bigcup_{j>
2n-2}V_{(n-1)j}V\times (U_{j-1}\setminus U_j)\subset W_{n-1}V.
\end{multline*}
Finally, assume that $x\in H\setminus V=(H\setminus
V)\times\{e_T\}$. Since the set $V$ is $\flat$-closed in $H$,
there is $m\in\omega$ such that $V_m^{-1}V_mx\cap V=\emptyset$ and
thus $V_mx\cap V_iV=\emptyset$ for all $i\ge m$. The inclusion
$x\in \overline{W_nV}^\pi$ and ($\star$) imply $$(V_m\times
U_mU_m^{-1})x\cap (V_{ni}V\times (U_{i-1}\setminus
U_i))\ne\emptyset$$ for some $i>2n$. Then $V_mx\cap
V_{ni}V\ne\emptyset$ and $U_mU_m^{-1}\cap (U_{i-1}\setminus
U_i)\ne\emptyset$. In view of Property (5) of the sequence
$(U_k)$, the latter relation implies $m\le i$. On the other hand,
the former relation together with the choice of the number $m$
yields $ni<m\le i$ which is impossible. This contradiction
finishes the proof of the inclusion $\overline{W_nV}^{\pi}\subset
W_{n-1}V$.
\section{Proof of Theorem~\ref{t3}}
Given a topological space $(X,\tau)$, Stone \cite{Sto} and Katetov
\cite{Kat} considered the topology $\tau_r$ on $X$ generated by
the base consisting of all canonically open sets of the space
$(X,\tau)$. This topology is called the {\it regularization} of
the topology $\tau$. If $(X,\tau)$ is Hausdorff then $(X,\tau_r)$
is regular and if $(X,\tau)$ is a paratopological group then
$(X,\tau_r)$ is a paratopological group too \cite[Ex.1.9]{Ra2}. If
$(G,\tau)$ is a paratopological group then $\tau_r$ is the
strongest regular semigroup topology on the group $G$ which is
weaker than $\tau$; moreover, for any neighborhood base $\mathcal
B$ at the unit of the group $(G,\tau)$ the family $\mathcal
B_r=\{\inte\ol U:U\in\mathcal B\}$ is a base at the unit of the
group $(G,\tau_r)$ \cite[p.31--32]{Ra3}. The following proposition
is quite easy and probably known.
\begin{proposition}\label{regularization} Let $(X,\tau)$ be a topological space. Then $(X,\tau)$ is
pseudocompact if and only if the regularization $(X,\tau_r)$ is
pseudocompact.
\end{proposition}
For the proof of Theorem~\ref{t3} we shall need a special
pseudocompact functionally Hausdorff semigroup topology on the
unit circle. We recall that a topological space $X$ is {\em
functionally Hausdorff\/} if continuous functions separate points
of $X$.
\begin{proposition}\label{pseudo} There is a functionally Hausdorff pseudocompact
first countable semigroup topology $\theta$ on the unit circle
$\IT$ which is not a group topology.
\end{proposition}
\begin{proof}
Let $\IT$ be the unit circle and $\chi:\IT\to\IQ$ be a
(discontinuous) group homomorphism onto the group of rational
numbers. Fix any element $x_0\in\IT$ with $\chi(x_0)=1$ and
observe that $S=\{1\}\cup\{x\in\IT:\chi(x)>0\}$ is a subsemigroup
of $\IT$. Let $\theta$ be the weakest semigroup topology on $\IT$
containing the standard compact topology $\tau$ and such that $S$
is open in $\theta$. It is easy to see that $\theta$ is
functionally Hausdorff and the sets $S\cap U$, where $1\in
U\in\tau$, form a neighborhood base of the topology $\theta$ at
the unit of $\IT$.
By Proposition~\ref{regularization}, to show that the group $(\IT,\theta)$ is
pseudocompact it suffices to verify that $\theta_r=\tau$. Since $\tau$ is a
regular semigroup topology on the group $\IT$ weaker than $\theta$, we get
$\theta_r\supset\tau$. To verify the inverse inclusion we first show that
${\overline U}^\tau={\overline U}^\theta$ for any $U\in\theta$. Since
$\tau\subset\theta$ it suffices to show that ${\overline
U}^\tau\subset{\overline U}^\theta$. Fix any point $x\in{\overline U}^\tau$ and
a neighborhood $V\in\tau$ of 1. We have to show that $x(V\cap S)\cap
U\ne\emptyset$. Pick up any point $y\in xV\cap U$. Since $U$ is open in the
topology $\theta$, we can find a neighborhood $W\in\tau$ of 1 such that
$y(W\cap S)\subset xV\cap U$. Find a number $N$ such that
$\chi(yx_0^N)>\chi(x)$ and thus $yx_0^n\in xS$ for all $n\ge N$ (we recall that
$x_0$ is an element of $\IT$ with $\chi(x_0)=1$). Moreover, since $x_0$ is
non-periodic in $\IT$, there exists a number $n\ge N$ such that $x_0^n\in
W$. Then $yx_0^n\in (yS\cap yW)\cap xS\subset (xV\cap U)\cap xS=x(V\cap S)\cap
U$. Hence $x\in {\overline U}^\theta$ and
$\overline{U}^\theta=\overline{U}^\tau$.
Then $$\inte_\theta{\ol U}^\theta=\IT \bs {\ol {\IT\bs{\ol
U}^\theta}}^\theta= \IT \bs {\ol {\IT\bs{\ol
U}^\theta}}^\tau\in\tau$$ which just yields $\theta_r\subset\tau$.
\end{proof}
Now we are able to present a {\em proof of Theorem~\ref{t3}}. The
``if'' part follows from the observation that for any Hausdorff
pseudocompact paratopological group $(G,\tau)$ its group reflexion
$G^\flat=(G,\tau_r)$ is a Hausdorff pseudocompact (and hence
totally bounded) topological group \cite{RR}.
To prove the ``only if'' part, fix a Bohr-separated abelian
paratopological group $(H,\tau)$ and let $\mathcal B_\tau$ be a
neighborhood base at the unit of the group $(H,\tau)$. It follows
that there is a group topology $\sigma'\subset\tau$ on $H$ such
that $(H,\sigma')$ is totally bounded. Let $(\hat H,\sigma)$ be
the Raikov completion of the group $(H,\sigma')$. It is clear that
$\hat H$ is a compact abelian group and $H$ is a normal dense
subgroup of $\hat H$. It follows that $\mathcal B_\tau$ is a
neighborhood base at the unit of some semigroup topology $\tau'$
on the group $\hat H$ with $\tau'|H=\tau$. Let $(\IT,\theta)$ be
the group from Proposition~\ref{pseudo}.
We shall define the topology on the product $G=\hat H\times \IT$
as follows. Let $\mathcal B_\tau$, $\mathcal B_\sigma$ and
$\mathcal B_\theta$ be the open neighborhood bases at the unit of
the groups $(H,\tau)$, $(\hat H,\sigma)$ and $(\IT,\theta)$
respectively. For arbitrary neighborhoods $U_\tau\in \mathcal
B_\tau$, $U_\sigma\in \mathcal B_\sigma$ and $U_\theta\in \mathcal
B_\theta$ with $U_\tau\subset U_\sigma$ let $[U_\tau,
U_\sigma,U_\theta]=U_\tau\times\{e_\IT\}\cup U_\sigma\times
(U_\theta\bs\{e_\IT\})$, where $e_H$ and $e_\IT$ are the units of
the groups $H$ and $\IT$ respectively. Denote by $\mathcal B$ the
family of all such $[U_\tau, U_\sigma,U_\theta]$. Repeating the
argument of the proof of Theorem~\ref{t1}, we can check that the family $\mathcal B$ is
a base of some Hausdorff semigroup topology $\rho$ on $G$. By
$\pi$ denote the topology of the product $(\hat
H,\sigma)\times(\IT,\theta_r)$. By
Proposition~\ref{regularization} to show that the group $(G,\rho)$
is pseudocompact it suffices to verify that $\rho_r\subset\pi$.
For this we shall show that ${\ol U}^\rho\supset
U_\sigma\times{\ol U_\theta}^\theta$ for every $U=[U_\tau,
U_\sigma,U_\theta]\in \mathcal B$. Let $(x_{\hat H},x_\IT)\in
U_\sigma\times{\ol U_\theta}^\theta$ and $V=[V_\tau,
V_\sigma,V_\theta]\in \mathcal B$. It suffices to show that
$\big((x_{\hat
H},x_\IT)+V_\sigma\times(V_\theta\bs\{e_\IT\})\big)\cap
U_\sigma\times(U_\theta\bs\{e_\IT\})\not=\0$. This intersection is
nonempty if and only if the intersections $(x_{\hat
H}+V_\sigma)\cap U_\sigma$ and $(x_\IT+(V_\theta\bs\{e_\IT\}))\cap
(U_\theta\bs \{e_\IT\})$ are nonempty. The first intersection is
nonempty since $x_{\hat H}\in U_\sigma$ and the second is nonempty
since $x_\IT\in{\ol U_\theta}^\theta$ and the topology $\theta$ is
non-discrete.
\section{Introduction}
After the discovery of a topological proof of
Hindman's theorem \cite{Hind} (see \cite[p.102]{HS}, \cite{H2}), topological methods became a standard
tool in the modern combinatorics of numbers, see \cite{HS},
\cite{P}. The crucial point is that any semigroup operation
defined on a discrete space $X$ can be extended to a
right-topological semigroup operation on $\beta(X)$, the Stone-\v
Cech compactification of $X$. The extension of the operation from
$X$ to $\beta(X)$ can be defined by the simple formula:
\begin{equation}\label{extension}
\mathcal A\circ\mathcal B=\big\{A\subset X:\{x\in X:x^{-1}A\in\mathcal B\}\in\mathcal A\big\}.
\end{equation}
The Stone-\v Cech compactification $\beta(X)$ of $X$ is a
subspace of the double power-set $\mathsf P^2(X)=\mathsf P(\mathsf P(X))$, which can be identified with the Cantor discontinuum $\{0,1\}^{\mathsf P(X)}$ and endowed with the compact Hausdorff topology of the Tychonoff product.
It turns out that the formula (\ref{extension}) applied to arbitrary families $\mathcal A,\mathcal B\in\mathsf P^2(X)$ of subsets of a group $X$ still defines a binary
operation $\circ:\mathsf P^2(X)\times \mathsf P^2(X)\to\mathsf P^2(X)$ that turns the double power-set $\mathsf P^2(X)$ into a compact Hausdorff right-topological semigroup that contains $\beta(X)$ as a closed subsemigroup.
The semigroup $\beta(X)$ lies in a slightly larger subsemigroup $\lambda(X)\subset\mathsf P^2(X)$ consisting of all maximal linked systems on $X$. We recall that a family $\mathcal L$ of subsets of $X$ is
\begin{itemize}
\item {\em linked} if any sets $A,B\in\mathcal L$ have non-empty intersection $A\cap B\ne\emptyset$;
\item {\em maximal linked} if $\mathcal L$ coincides with each linked system $\mathcal L'$ on $X$ that contains $\mathcal L$.
\end{itemize}
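Already for a three-element set $X$ the superextension is strictly larger than $\beta(X)$: besides the three principal ultrafilters, the family of all subsets of $X$ with at least two elements is maximal linked. The brute-force sketch below (Python; the bitmask encoding of subsets is ours) confirms that these four systems exhaust $\lambda(X)$ for $|X|=3$.

```python
# subsets of X = {0,1,2} encoded as bitmasks 1..7; the empty set (mask 0)
# can never belong to a linked family, so it is omitted
MASKS = range(1, 8)

def linked(family):
    # any two members (a member paired with itself included) must intersect
    return all(a & b for a in family for b in family)

def maximal_linked(family):
    # linked, and no further subset can be added without breaking linkedness
    return linked(family) and all(
        any(a & m == 0 for a in family) for m in MASKS if m not in family)

systems = [frozenset(m for i, m in enumerate(MASKS) if bits >> i & 1)
           for bits in range(1, 1 << 7)]
maximal = [f for f in systems if maximal_linked(f)]

assert len(maximal) == 4  # three principal ultrafilters + one extra system
```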
The space $\lambda(X)$ is
well-known in General and Categorical Topology as the {\em
superextension}
of $X$, see \cite{vM}, \cite{TZ}.
The thorough study of algebraic properties of the superextensions
of groups was started in \cite{BGN} and
continued in \cite{BG2} and \cite{BG3}. In particular, in \cite{BG3} we proved that the minimal left ideals of the superextension $\lambda(\mathbb Z)$ are metrizable topological semigroups. In this paper we shall extend this result to the superextensions $\lambda(X)$ of all finitely-generated abelian groups $X$.
The results obtained in this paper completely reveal the topological and algebraic structure of the minimal ideal and minimal left ideals of the superextension $\lambda(X)$ of a twinic group $X$. A group $X$ is defined to be {\em twinic} if it admits a left-invariant ideal $\mathcal I$ of subsets of $X$ such that for any subset $A\subset X$ with $xA\subset_\mathcal I X\setminus A\subset_\mathcal I yA$ for some $x,y\in X$ we have $xA=_\mathcal I yA$. Here the symbol $A\subset_\mathcal I B$ means that $A\setminus B\in\mathcal I$ and $A=_\mathcal I B$ means that $A\subset_\mathcal I B$ and $B\subset_\mathcal I A$. In Section~\ref{s:tg} we shall prove that the class of twinic groups contains all amenable groups and all groups with periodic commutators (in particular, all torsion groups), but does not contain the free group with two generators $F_2$.
We need to recall the notation for some standard 2-groups. By $Q_8$ we denote the group of quaternions. It is a multiplicative subgroup $\{1,i,j,k,-1,-i,-j,-k\}$ of the algebra of quaternions $\IH$ (which contains the field of complex numbers $\mathbb C$ as a subalgebra).
For every $k\in\omega$ let $C_{2^k}=\{z\in\mathbb C:z^{2^k}=1\}$ be the cyclic group of order $2^k$. The multiplicative subgroup $Q_{2^k}\subset\IH$ generated by the union $C_{2^{k-1}}\cup Q_8$ is called the {\em group of generalized quaternions}. The union $C_{2^\infty}=\bigcup_{k=1}^\infty C_{2^k}$ is called the {\em quasicyclic 2-group} and the union $Q_{2^\infty}=\bigcup_{k=3}^\infty Q_{2^k}$ is called {\em the infinite group of generalized quaternions}. By Theorem~\ref{BCQ}, a group $G$ is isomorphic to $C_{2^n}$ or $Q_{2^n}$ for some $n\in\mathbb N\cup\{\infty\}$ if and only if $G$ is a 2-group with a unique 2-element subgroup.
The following theorem, describing the structure of minimal left ideals of the superextensions of twinic groups, can be derived from Theorem~\ref{t18.11} and Proposition~\ref{p19.1}:
\begin{theorem} For each twinic group $X$ there are cardinals $q(X,C_{2^k})$, $q(X,Q_{2^k})$, $k\in\mathbb N\cup\{\infty\}$, such that
\begin{enumerate}
\item[\textup{(1)}] each minimal left ideal of $\lambda(X)$ is algebraically isomorphic to
$$Z\times\prod_{1\le k\le \infty}C_{2^k}^{\;q(X,C_{2^k})}\times\prod_{3\le k\le\infty}Q_{2^k}^{\;q(X,Q_{2^k})}$$for some semigroup $Z$ of left zeros;
\item[\textup{(2)}] each maximal subgroup of the minimal ideal of $\lambda(X)$ is algebraically isomorphic to
$$\prod_{1\le k\le \infty}C_{2^k}^{\;q(X,C_{2^k})}\times\prod_{3\le k\le\infty}Q_{2^k}^{\;q(X,Q_{2^k})}.$$
\item[\textup{(3)}] If $q(X,C_{2^\infty})=q(X,Q_{2^\infty})=0$, then
each maximal subgroup of the minimal ideal of $\lambda(X)$ is topologically isomorphic to the compact topological group
$$\prod_{1\le k<\infty}C_{2^k}^{\;q(X,C_{2^k})}\times\prod_{3\le k<\infty}Q_{2^k}^{\;q(X,Q_{2^k})}.$$
\end{enumerate}
If the group $X$ is abelian, then
\begin{enumerate}
\item[\textup{(4)}] $q(X,Q_{2^k})=0$ for every $k\in\mathbb N\cup\{\infty\}$ while $q(X,C_{2^k})$ is equal to the number of subgroups $H\subset X$ such that the quotient group $X/H$ is isomorphic to $C_{2^k}$;
\item[\textup{(5)}] for every $k\in\mathbb N$ $$q(X,C_{2^k})=\frac{|\hom(X,C_{2^k})|-|\hom(X,C_{2^{k-1}})|}{2^{k-1}},$$where $\hom(X,C_{2^k})$ is the group of homomorphisms from $X$ into $C_{2^k}$.
\end{enumerate}
\end{theorem}
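For example, for the group $X=\mathbb Z$ items (4) and (5) can be evaluated directly (we only substitute $\hom(\mathbb Z,C_{2^k})\cong C_{2^k}$ into the formula of item (5)): for every $k\in\mathbb N$
$$q(\mathbb Z,C_{2^k})=\frac{|\hom(\mathbb Z,C_{2^k})|-|\hom(\mathbb Z,C_{2^{k-1}})|}{2^{k-1}}=\frac{2^k-2^{k-1}}{2^{k-1}}=1,$$
which agrees with item (4): $2^k\mathbb Z$ is the unique subgroup $H\subset\mathbb Z$ with $\mathbb Z/H\cong C_{2^k}$. Moreover, $q(\mathbb Z,C_{2^\infty})=0$ since no quotient of $\mathbb Z$ is isomorphic to $C_{2^\infty}$, so by item (3) each maximal subgroup of the minimal ideal of $\lambda(\mathbb Z)$ is topologically isomorphic to the compact group $\prod_{1\le k<\infty}C_{2^k}$.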
\smallskip
\section{Right-topological semigroups}
In this section we recall some information from \cite{HS} related
to right-topological semigroups. By definition, a
right-topological semigroup is a topological space $S$ endowed
with a semigroup operation $\ast:S\times S\to S$ such that for
every $a\in S$ the right shift $r_a:S\to S$, $r_a:x\mapsto x\ast
a$, is continuous. If the semigroup operation $\ast:S\times S\to
S$ is (separately) continuous, then $(S,\ast)$ is a ({\em semi}-){\em topological
semigroup}. A typical example of a right-topological semigroup is the semigroup $X^X$ of all self-maps of a topological space $X$ endowed with the Tychonoff product topology and the binary operation of composition of functions.
From now on, $S$ is a compact Hausdorff right-topological
semigroup. We shall recall some known information concerning
ideals in $S$, see \cite{HS}.
A non-empty subset $I$ of $S$ is called a {\em left} (resp. {\em right})
{\em ideal\/} if $SI\subset I$ (resp. $IS\subset I$). If $I$ is both
a left and right ideal in $S$, then $I$ is called an {\em ideal}
in $S$. Observe that for every $x\in S$ the set $SxS=\{sxt:s,t\in S\}$ (resp. $Sx=\{sx:s\in S\}$, $xS=\{xs:s\in S\}$) is an ideal (resp. left ideal, right ideal) in $S$.
Such an ideal is called {\em principal}. An ideal $I\subset S$ is
called {\em minimal} if any ideal of $S$ that lies in $I$
coincides with $I$. By analogy we define minimal left and right
ideals of $S$. It is easy to see that each minimal left (resp.
right) ideal $I$ is principal. Moreover, $I=Sx$ (resp. $I=xS$) for
each $x\in I$. This simple observation implies that each minimal
left ideal in $S$, being principal, is closed in $S$. By
\cite[2.6]{HS}, each left ideal in $S$ contains a minimal left ideal.
The union $\mathsf K(S)$ of all minimal left ideals of $S$ coincides with the minimal ideal of $S$, \cite[2.8]{HS}.
All minimal left ideals of $S$ are mutually homeomorphic and all
maximal groups of the minimal ideal $\mathsf K(S)$ are algebraically
isomorphic. Moreover, if two maximal groups lie in the same
minimal right ideal, then they are topologically isomorphic.
We shall need the following known fact, see \cite[Theorem 2.11(c)]{HS}.
\begin{proposition}\label{p2.1} For any two minimal left ideals $A,B$ of a compact right-topological semigroup $S$ and any point $b\in B$ the right shift $r_b:A\to B$, $r_b:x\mapsto xb$, is a homeomorphism.
\end{proposition}
This proposition implies the following corollary, see \cite[Lemma 1.1]{BG3}.
\begin{corollary}\label{c2.2} If a homomorphism $h:S\to S'$ between two compact right-topological semigroups is injective on some minimal left ideal of $S$, then $h$ is injective on each minimal left ideal of $S$.
\end{corollary}
An element $z$ of a semigroup $S$ is called a {\em right zero}
(resp. a {\em left zero}) in $S$ if $xz=z$ (resp. $zx=z$) for all
$x\in S$. It is clear that $z\in S$ is a right (left) zero in $S$
if and only if the singleton $\{z\}$ is a left (right) ideal in
$S$.
An element $e\in S$ is called an {\em idempotent} if $ee=e$. By
Ellis's Theorem \cite[2.5]{HS}, the set $\mathsf E(S)$ of idempotents of any compact right-topological
semigroup is not empty.
For every idempotent $e$ the set
$$\mathsf H_e=\{x\in S:\exists x^{-1}\in S\;\;(xx^{-1}x=x,\;x^{-1}xx^{-1}=x^{-1},\;xx^{-1}=e=x^{-1}x)\}$$
is the largest subgroup of $S$ containing $e$.
By \cite[1.48]{HS}, for an idempotent
$e\in \mathsf E(S)$ the following conditions are equivalent:
\begin{itemize}
\item $e\in \mathsf K(S)$;
\item $\mathsf K(S)=SeS$;
\item $Se$ is a minimal left ideal in $S$;
\item $eS$ is a minimal right ideal in $S$;
\item $eSe$ is a subgroup of $S$.
\end{itemize}
An idempotent $e$ satisfying the above equivalent conditions will be called a {\em minimal idempotent} in $S$. By \cite[1.64]{HS}, for any minimal idempotent $e\in S$ the set $\mathsf E(Se)=\mathsf E(S)\cap Se$ of idempotents of the minimal left ideal $Se$ is a semigroup of left zeros, which means that $xy=x$ for all $x,y\in \mathsf E(Se)$. By the Rees-Suschkewitsch Structure Theorem (see \cite[1.64]{HS}) the map
$$\varphi:\mathsf E(Se)\times \mathsf H_e\to Se,\; \varphi:(x,y)\mapsto xy,$$ is an algebraic isomorphism of the corresponding semigroups. If the minimal left ideal $Se$ is a topological semigroup, then $\varphi$ is a topological isomorphism.
Now we see that all the information on the algebraic (and sometimes topological) structure of the minimal left ideal $Se$ is encoded in the properties of the left zero semigroup $\mathsf E(Se)$ and the maximal group $\mathsf H_e$.
\section{Acts and their endomorphism monoids}
In this section we survey the information on acts that will be widely used in this paper for describing the algebraic structure of minimal left ideals of the superextensions of groups.
Following the terminology of \cite{KKM}, by an {\em act} we understand a set $X$ endowed with a left
action $\cdot :H\times X\to X$ of a group $H$ called the {\em structure group} of the act. The action should satisfy two axioms: $1x=x$ and $g(hx)=(gh)x$ for all $x\in X$ and $g,h\in H$. Acts with the structure group $H$ will be called {\em $H$-acts} or {\em $H$-spaces}.
An act $X$ is called {\em free} if the stabilizer $\Fix(x)=\{h\in H:hx=x\}$ of each point $x\in X$ is trivial.
For a point $x\in X$ by $[x]=\{hx:h\in H\}$ we denote its {\em orbit} and by $[X]=\{[x]:x\in X\}$ the orbit space of the act $X$. More generally, for each subset $A\subset X$ we put $[A]=\{[a]:a\in A\}$.
A function $f:X\to Y$ between two $H$-acts is called {\em equivariant} if $f(hx)=hf(x)$ for all $x\in X$ and $h\in H$.
A function $f:X\to Y$ is called an {\em isomorphism} of the $H$-acts $X$ and $Y$ if it is bijective and equivariant. An equivariant self-map $f:X\to X$ is called an {\em endomorphism} of the $H$-act $X$. If $f$ is bijective, then $f$ is an {\em automorphism} of $X$.
The set $\End(X)$ of endomorphisms of an $H$-act $X$, endowed with the operation of composition of functions, is a monoid called the {\em endomorphism monoid} of $X$.
Each free $H$-act $X$ is isomorphic to the product $H\times [X]$ endowed with the action $h\cdot (x,y)=(hx,y)$. For such an act the semigroup $\End(X)$ is isomorphic to the wreath product $H\wr [X]^{[X]}$ of the group $H$ and the semigroup $[X]^{[X]}$ of all self-maps of the orbit space $[X]$.
The wreath product $H\wr A^A$ of a group $H$ and the semigroup $A^A$ of self-maps of a set $A$ is defined as the semidirect product $H^A\rtimes A^A$ of the $A$-th power of $H$ with $A^A$, endowed with the semigroup operation $(h,f)*(h',f')=(h'',f'')$ where $f''=f\circ f'$ and $h''(\alpha)=h(f'(\alpha))\cdot h'(\alpha)$ for $\alpha\in A$. For any subsemigroup $S\subset A^A$ the subset $H\wr S=\{(h,f)\in H^A\rtimes A^A:f\in S\}$ is called the {\em wreath product} of $H$ and $S$. If both $H$ and $S$ are groups, then their wreath product $H\wr S$ is a group.
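As a sanity check, the sketch below (Python; the finite instance $H=\mathbb Z_2$, $A=\{0,1\}$ and its tuple encoding are ours) verifies exhaustively that the operation $*$ just defined is associative on $H\wr A^A$ and that $H\wr S_A$ is indeed a group.

```python
from itertools import product

H = range(2)   # the group Z_2, written additively
A = (0, 1)

# elements of H ≀ A^A: pairs (h, f) with h: A -> H and f: A -> A, as tuples
W = [(h, f) for h in product(H, repeat=2) for f in product(A, repeat=2)]

def mul(x, y):
    # (h, f) * (h', f') = (h'', f ∘ f') with h''(a) = h(f'(a)) + h'(a)
    (h, f), (h2, f2) = x, y
    return (tuple((h[f2[a]] + h2[a]) % 2 for a in A),
            tuple(f[f2[a]] for a in A))

# the wreath-product operation is associative ...
assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x in W for y in W for z in W)

# ... and H ≀ S_A (bijective second coordinate) is a group
G = [x for x in W if sorted(x[1]) == [0, 1]]
e = ((0, 0), (0, 1))   # identity: trivial h-part, identity permutation
assert all(mul(e, x) == x == mul(x, e) for x in G)
assert all(any(mul(x, y) == e for y in G) for x in G)
```

In particular, for a free act with two orbits and $H=\mathbb Z_2$ this gives $|\End(X)|=|H\wr[X]^{[X]}|=16$, in accordance with Theorem~\ref{t2.1}(1).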
Observe that the maximal subgroup of $A^A$ containing the identity self-map of $A$ coincides with the group $S_A$ of all bijective functions $f:A\to A$.
\begin{theorem}\label{t2.1} Let $H$ be a group and $X$ be a free $H$-act. Then
\begin{enumerate}
\item[\textup{(1)}] the semigroup $\End(X)$ is isomorphic to the wreath product $H\wr [X]^{[X]}$;
\item[\textup{(2)}] the minimal ideal $\mathsf K(\End(X))$ of \/ $\End(X)$ coincides with the set $\{f\in\End(X):\forall x\in f(X)\; f(X)\subset[x]\}$;
\item[\textup{(3)}] each minimal left ideal of $\End(X)$ is isomorphic to $H\times [X]$ where $[X]$ is endowed with the left zero multiplication;
\item[\textup{(4)}] for each idempotent $f\in\End(X)$ the maximal subgroup $\mathsf H_f\subset\End(X)$ is isomorphic to $H\wr S_{[f(X)]}$;
\item[\textup{(5)}] for each minimal idempotent $f\in \mathsf K(\End(X))$ the maximal group $\mathsf H_f=f\cdot \End(X)\cdot f$ is isomorphic to $H$.
\end{enumerate}
\end{theorem}
\begin{proof} 1. Let $\pi:X\to[X]$, $\pi:x\mapsto[x]$, denote the orbit map and $s:[X]\to X$ be a section of $\pi$, which means that $\pi\circ s([x])=[x]$ for all $[x]\in[X]$.
Observe that each equivariant map $f:X\to X$ induces a well-defined map $[f]:[X]\to [X]$, $[f]:[x]\mapsto [f(x)]$, of the orbit spaces. Since the action of $H$ on $X$ is free, for every orbit $[x]\in[X]$ we can find a unique point $f_H([x])\in H$ such that $f\circ s([x])=(f_H([x]))^{-1}\cdot s([f(x)])$.
We claim that the map
$$\Psi:\End(X)\to H\wr [X]^{[X]},\;\; \Psi:f\mapsto (f_H,[f]),$$
is a semigroup isomorphism.
First we check that the map $\Psi$ is a homomorphism. Pick any two equivariant functions $f,g\in\End(X)$ and consider their images $\Psi(f)=(f_H,[f])$ and $\Psi(g)=(g_H,[g])$ in $H\wr[X]^{[X]}$. Consider also the composition $f\circ g$ and its
image $\Psi(f\circ g)=((f\circ g)_H,[f\circ g])$. We claim that
$$((f\circ g)_H,[f\circ g])=(f_H,[f])*(g_H,[g])=((f_H\circ[g])\cdot g_H,[f]\circ[g]).$$
The equality $[f\circ g]=[f]\circ [g]$ is clear. To prove that $(f\circ g)_H=(f_H\circ [g])\cdot g_H$, take any orbit $[x]\in [X]$. It follows from the definition of $(f\circ g)_H([x])$ that
$$
\begin{aligned}
&((f\circ g)_H([x]))^{-1}\cdot s([f\circ g(x)])=(f\circ g)\circ s([x])=f(g\circ s([x]))=\\
&f\big((g_H([x]))^{-1}\cdot s([g(x)])\big)=(g_H([x]))^{-1}\cdot f\circ s([g(x)])=\\
&(g_H([x]))^{-1}\cdot (f_H([g(x)]))^{-1}\cdot s([f\circ g(x)])=\\
&(f_H\circ [g]([x])\cdot g_H([x]))^{-1}\cdot s([f\circ g(x)])
\end{aligned}
$$which implies the desired equality $(f\circ g)_H=(f_H\circ [g])\cdot g_H$.
\smallskip
Next, we show that the homomorphism $\Psi$ is injective. Given two equivariant functions $f,g\in\End(X)$ with $(f_H,[f])=\Psi(f)=\Psi(g)=(g_H,[g])$, we need to show that $f=g$. Observe that for every orbit $[x]\in[X]$ we get
$$f(s([x]))=(f_H([x]))^{-1}\cdot s\circ [f]([x]))=(g_H([x]))^{-1}\cdot s\circ[g]([x])=g(s([x])).$$
Now for each $x\in X$ we can find a unique $h\in H$ with $x=h\cdot s([x])$ and apply the equivariantness of the functions $f,g$ to conclude that
$$f(x)=f(h\cdot s([x]))=h\cdot f(s([x]))=h\cdot g(s([x]))=g(h\cdot s([x]))=g(x).$$
Finally, we show that $\Psi$ is surjective. Given any pair $(h,g)\in H\wr [X]^{[X]}=H^{[X]}\times [X]^{[X]}$, we define an equivariant function $f\in\End(X)$ with $(h,g)=(f_H,[f])$ as follows. Given any $x\in X$ find a unique $y\in H$ with $x=y\cdot s([x])$ and let
$$f(x)=y\cdot h([x])^{-1}\cdot s(g([x])).$$
This formula determines a well-defined equivariant function $f:X\to X$ with $\Psi(f)=(h,g)$. Therefore, $\Psi:\End(X)\to H\wr[X]^{[X]}$ is a semigroup isomorphism.
\smallskip
2. Observe that the set $\mathcal I=\{f\in\End(X):\{[f(x)]:x\in X\}$ is a singleton$\}$ is a (non-empty) ideal in $\End(X)$. To show that $\mathcal I$ is the minimal ideal of the semigroup $\End(X)$, we need to check that $\mathcal I$ lies in any ideal $\mathcal J\subset \End(X)$. Take any functions $f\in\mathcal I$ and $g\in \mathcal J$. Find an orbit $[x]\in [X]$ such that $[f(z)]=[x]$ for all $z\in X$. Since the restriction $g|[x]:[x]\to[g(x)]$ is bijective and equivariant, so is its inverse $(g|[x])^{-1}:[g(x)]\to[x]$. Extend this equivariant map to any equivariant map $h:X\to X$. Then
$$f=h\circ g\circ f\in \End(X)\circ g\circ \End(X)\subset\mathcal J.$$
3. Take any idempotent $f\in \mathsf K(\End(X))$ and consider the minimal left ideal $\End(X)\cdot f$. Fix any point $z\in f(X)$ and observe that $[f(x)]=[z]$ for all $x\in X$ by the preceding item. It follows that the set $Z=f^{-1}(z)$ meets each orbit $[x]$, $x\in X$, at a single point. So, we can define a unique section $s:[X]\to Z\subset X$ of the orbit map $X\to [X]$ such that $f\circ s([X])=\{z\}$.
To each equivariant map $g\in\End(X)\cdot f$ assign a unique element $g_H\in H$ such that $g\circ s([x])=g_H^{-1}\cdot s([g(x)])$ for every orbit $[x]\in[X]$. The element $g_H$ does not depend on the orbit $[x]$: writing $g=g'\circ f$ for some $g'\in\End(X)$, we get $g\circ s([x])=g'\circ f\circ s([x])=g'(z)$ for all $[x]\in[X]$.
It is easy to check that the map
$$\Phi:\End(X)\cdot f\to H\times [X],\;\Phi:g\mapsto (g_H,[g(z)]),$$
is a semigroup homomorphism where the orbit space $[X]$ is endowed with the left zero multiplication.
\smallskip
4. Take any idempotent $f\in \End(X)$ and consider the surjective semigroup homomorphism
$\operatorname{pr}:\End(X)\to[X]^{[X]}$, $\operatorname{pr}:g\mapsto[g]$. It follows that $[f]$ is an idempotent of the semigroup $[X]^{[X]}$ and the image $\operatorname{pr}(\mathsf H_f)$ of the maximal group $\mathsf H_f$ is a subgroup of $[X]^{[X]}$. It is easy to see that the maximal subgroup $\mathsf H_{[f]}$ of the idempotent $[f]$ in $[X]^{[X]}$ coincides with $S_{[f(X)]}\cdot [f]$. The preimage $\operatorname{pr}^{-1}(\mathsf H_{[f]})$ of the maximal subgroup $\mathsf H_{[f]}=S_{[f(X)]}\cdot [f]$ is isomorphic to the wreath product $H\wr \mathsf H_{[f]}$ and hence is a group. Now the maximality of $\mathsf H_{f}$ guarantees that $\mathsf H_{f}=\operatorname{pr}^{-1}(\mathsf H_{[f]})$ and hence $\mathsf H_f$ is isomorphic to $H\wr S_{[f(X)]}$.
\smallskip
5. If $f\in \mathsf K(\End(X))$ is a minimal idempotent, then the set $[f(X)]=\{[f(x)]:x\in X\}$ is a singleton by the second item. By the preceding item the maximal group $\mathsf H_f$ is isomorphic to $H\wr S_{[f(X)]}$, which is isomorphic to the group $H$ since $[f(X)]$ is a singleton.
\end{proof}
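To illustrate the preceding theorem on a simple finite example, consider the free action of the two-element group $H=\{e,h\}$ on the four-element set $X=\{a,ha,b,hb\}$ with two orbits $[a]=\{a,ha\}$ and $[b]=\{b,hb\}$. An equivariant function $f\in\End(X)$ is uniquely determined by the (arbitrary) values $f(a),f(b)\in X$, so
$$|\End(X)|=4\cdot 4=16=|H|^{|[X]|}\cdot |[X]|^{|[X]|}=|H\wr[X]^{[X]}|,$$
in agreement with the first item. The minimal ideal $\mathsf K(\End(X))$ consists of the $8$ equivariant functions whose images lie in a single orbit, and each maximal subgroup of $\mathsf K(\End(X))$ is a copy of the group $H$, in agreement with the last item.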
For each group $X$ the power-set $\mathsf P(X)$ will be considered as an $X$-act endowed with the left action $$\cdot:X\times \mathsf P(X)\to\mathsf P(X),\;\;\cdot:(x,A)\mapsto xA=\{xa:a\in A\},$$of the group $X$.
This $X$-act $\mathsf P(X)$ and its endomorphism monoid $\End(\mathsf P(X))$ will play a crucial role in our considerations.
\section{The function representation of the semigroup $\mathsf P^2(X)$}
In this section given a group $X$ we construct a topological isomorphism
$$\Phi:\mathsf P^2(X)\to\End(\mathsf P(X))$$
called the {\em function representation} of the semigroup $\mathsf P^2(X)$ in the endomorphism monoid of the $X$-act $\mathsf P(X)$.
We recall that the double power-set $\mathsf P^2(X)=\mathsf P(\mathsf P(X))$ of the group $X$ is endowed with the binary operation
$$\mathcal A\circ\mathcal B=\big\{A\subset X:\{x\in X:x^{-1}A\in\mathcal B\}\in\mathcal A\big\}.$$
The isomorphism $\Phi$ assigns to each family $\mathcal A$ of subsets of $X$ the function
$$\Phi_\mathcal A:\mathsf P(X)\to\mathsf P(X),\;\;\Phi_\mathcal A:A\mapsto\{x\in X:x^{-1}A\in\mathcal A\},$$called the {\em function representation} of $\mathcal A$.
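For example, if $\mathcal A=\langle a\rangle=\{A\subset X:a\in A\}$ is the principal ultrafilter generated by a point $a\in X$, then
$$\Phi_{\langle a\rangle}(A)=\{x\in X:a\in x^{-1}A\}=\{x\in X:xa\in A\}=Aa^{-1},$$
so $\Phi_{\langle a\rangle}$ acts as the right shift by $a^{-1}$. In particular, $\Phi_{\langle a\rangle}\circ\Phi_{\langle b\rangle}(A)=Ab^{-1}a^{-1}=A(ab)^{-1}=\Phi_{\langle ab\rangle}(A)$, which agrees with the equality $\langle a\rangle\circ\langle b\rangle=\langle ab\rangle$ that can be derived directly from the definition of the operation $\circ$.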
In the following theorem by $e$ we denote the neutral element of the group $X$.
\begin{theorem}\label{t4.1} For any group $X$ the map $\Phi:\mathsf P^2(X)\to\End(\mathsf P(X))$
is a topological isomorphism with inverse $\Phi^{-1}:\varphi\mapsto \{A\subset X:e\in\varphi(A)\}$.
\end{theorem}
\begin{proof} First observe that for any family $\mathcal A\in\mathsf P^2(X)$ the function $\Phi_\mathcal A$ is equivariant, because
$$\Phi_{\mathcal A}(xA)=\{y\in X:y^{-1}xA\in{\mathcal A}\}=\{xz\in X:z^{-1}A \in{\mathcal A}\}=x\,\Phi_{\mathcal A}(A)$$for any $x\in X$ and $A\subset X$. Thus the map $\Phi:\mathsf P^2(X)\to\End(\mathsf P(X))$ is well-defined.
To prove that $\Phi$ is a semigroup homomorphism, take two families $\mathcal X,\mathcal Y\in \mathsf P^2(X)$ and let
$\mathcal Z=\mathcal X\circ\mathcal Y$. We need to check that
$\Phi_\mathcal Z(A)=\Phi_\mathcal X\circ\Phi_\mathcal Y(A)$ for every
$A\subset X$. Observe that
$$
\begin{aligned}
\Phi_\mathcal Z(A)&=\{z\in X:z^{-1}A\in\mathcal Z\}=\{z\in X:\{x\in
X:x^{-1}z^{-1}A\in\mathcal Y\}\in\mathcal X\}=\\
&=\{z\in X:\Phi_{\mathcal Y}(z^{-1}A)\in \mathcal X\}=\{z\in
X:z^{-1}\Phi_\mathcal Y(A)\in\mathcal X\}=\\
&=\Phi_\mathcal X(\Phi_\mathcal Y(A))=\Phi_\mathcal X\circ\Phi_\mathcal Y(A).
\end{aligned}
$$
\smallskip
To see that the map $\Phi$ is injective, take any two distinct families $\mathcal A,\mathcal B\in \mathsf P^2(X)$. Without loss of generality,
$\mathcal A\setminus \mathcal B$ contains some set $A\subset X$. It follows that
$e\in \Phi_\mathcal A(A)$ but $e\notin\Phi_\mathcal B(A)$ and hence
$\Phi_\mathcal A\ne\Phi_\mathcal B$.
\smallskip
To see that the map $\Phi$ is surjective, take any equivariant function $\varphi:\mathsf P(X)\to\mathsf P(X)$ and consider the family $\mathcal A=\{A\subset X:e\in\varphi(A)\}$.
It follows that for every $A\in\mathsf P(X)$
$$
\begin{aligned}
\Phi_{\mathcal A}(A)&=\{x\in X\colon x^{-1}A\in\mathcal A\}=\{x\in X\colon e\in \varphi(x^{-1}A)\}=\\
&=\{x\in X\colon e\in x^{-1}\varphi(A)\}=\{x\in X\colon x\in \varphi(A)\}=\varphi(A).
\end{aligned}$$
To prove that $\Phi:\mathsf P^2(X)\to\End(\mathsf P(X))\subset \mathsf P(X)^{\mathsf P(X)}$ is
continuous we first define a convenient subbase of the topology
on the spaces $\mathsf P(X)$ and $\mathsf P(X)^{\mathsf P(X)}$.
The product topology of $\mathsf P(X)$ is generated by the
subbase consisting of the sets
$$x^+=\{A\subset X:x\in A\}\mbox{ and }x^-=\{A\subset X:x\notin A\}$$ where $x\in X$. On the other hand, the product topology on $\mathsf P(X)^{\mathsf P(X)}$ is generated by the subbase consisting of the sets
$$
\langle x,A\rangle^+=\{f\in\mathsf P(X)^{\mathsf P(X)}:x\in f(A)\}\mbox{ and }
\langle x,A\rangle^-=\{f\in\mathsf P(X)^{\mathsf P(X)}:x\notin f(A)\}
$$where $A\in\mathsf P(X)$ and $x\in X$.
Now observe that the preimage
$$\Phi^{-1}(\langle x,A\rangle^+)=\{\mathcal A\in \mathsf P^2(X):x\in \Phi_\mathcal A(A)\}=\{\mathcal A\in \mathsf P^2(X):x^{-1}A\in\mathcal A\}$$is open in $\mathsf P^2(X)$. The same is true for the preimage
$$\Phi^{-1}(\langle x,A\rangle^-)=\{\mathcal A\in \mathsf P^2(X):x\notin \Phi_\mathcal A(A)\}=\{\mathcal A\in \mathsf P^2(X):x^{-1}A\notin\mathcal A\}$$which also is open in $\mathsf P^2(X)$.
Since the spaces $\mathsf P^2(X)\cong\{0,1\}^{\mathsf P(X)}$ and $\End(\mathsf P(X))\subset\mathsf P(X)^{\mathsf P(X)}$ are compact and Hausdorff, the continuity of the map $\Phi$ implies the continuity of its inverse $\Phi^{-1}$. Consequently, $\Phi:\mathsf P^2(X)\to\End(\mathsf P(X))$ is a topological isomorphism of compact right-topological semigroups.
\end{proof}
\begin{remark} The function representations $\Phi_\mathcal A$ of some families $\mathcal A\subset\mathsf P(X)$ have transparent topological interpretations. For example, if $\mathcal A$ is the filter of neighborhoods of the identity element $e$ of a left-topological group $X$ and $\mathcal A^\perp=\{B\subset X:\forall A\in\mathcal A\;\;(B\cap A\ne\emptyset)\}$, then for any subset $B\subset X$ the set $\Phi_{\mathcal A}(B)$ coincides with the interior of the set $B$, while $\Phi_{\mathcal A^\perp}(B)$ coincides with the closure of $B$ in $X$.
\end{remark}
Theorem~\ref{t4.1} is of strategic importance because it allows us to translate (usually difficult) problems concerning the structure of the semigroup $\mathsf P^2(X)$ into (usually more tractable) problems about the endomorphism monoid $\End(\mathsf P(X))$. In particular, Theorem~\ref{t4.1} implies ``for free'' that the binary operation on $\mathsf P^2(X)$ is associative and right-topological, so $\mathsf P^2(X)$ indeed is a compact right-topological semigroup.
Now let us investigate the interplay between the properties of a family $\mathcal A\in\mathsf P^2(X)$ and those of its function representation $\Phi_\mathcal A$.
Let us define a family $\mathcal A\subset\mathsf P(X)$ to be
\begin{itemize}
\item {\em monotone} if for any subsets $A\subset B\subset X$ the inclusion $A\in\mathcal A$ implies $B\in\mathcal A$;
\item {\em left-invariant} if for any $A\in\mathcal A$ and $x\in X$ we get $xA\in\mathcal A$.
\end{itemize}
Respectively, a function $\varphi:\mathsf P(X)\to\mathsf P(X)$ is called
\begin{itemize}
\item {\em monotone} if $\varphi(A)\subset\varphi(B)$ for any subsets $A\subset B\subset X$;
\item {\em symmetric} if $\varphi(X\setminus A)=X\setminus\varphi(A)$ for every $A\subset X$.
\end{itemize}
\begin{proposition}\label{p4.3} For an equivariant function $\varphi\in \End(\mathsf P(X))$ the family $\Phi^{-1}(\varphi)=\{A\subset X:e\in\varphi(A)\}$ is
\begin{enumerate}
\item[\textup{(1)}] monotone if and only if $\varphi$ is monotone;
\item[\textup{(2)}] left-invariant if and only if $\varphi(\mathsf P(X))\subset\{\emptyset,X\}$;
\item[\textup{(3)}] maximal linked if and only if $\varphi$ is monotone and symmetric.
\end{enumerate}
\end{proposition}
\begin{proof} Let $\mathcal A=\Phi^{-1}(\varphi)$.
1. If $\varphi$ is monotone, then for any sets $A\subset B$ with $A\in\mathcal A$ we get $e\in\varphi(A)\subset\varphi(B)$ and hence $B\in\mathcal A$, which means that the family $\mathcal A$ is monotone.
Now assume conversely that the family $\mathcal A$ is monotone and take any sets $A\subset B\subset X$. Note that for any $x\in X$ with $xA\in\mathcal A$ we get $xB\in\mathcal A$. Then$$\varphi(A)=\{x\in X:x^{-1}A\in \mathcal A\}\subset \{x\in X:x^{-1}B\in\mathcal A\}=\varphi(B),$$witnessing that the function $\varphi$ is monotone.
\smallskip
2. If the family $\mathcal A$ is left-invariant, then for each $A\in\mathcal A$ we get $\varphi(A)=\{x\in X:x^{-1}A\in\mathcal A\}=X$ and for each $A\notin\mathcal A$ we get $\varphi(A)=\{x\in X:x^{-1}A\in\mathcal A\}=\emptyset$.
Now assume conversely that $\varphi(\mathsf P(X))\subset\{\emptyset,X\}$. Then for each $A\in\mathcal A$ we get $e\in\varphi(A)=X$ and then for each $x\in X$, the equivariance of $\varphi$ guarantees that $\varphi(xA)=x\varphi(A)=xX=X\ni e$ and thus $xA\in\mathcal A$, witnessing that the family $\mathcal A$ is left-invariant.
\smallskip
3. Assume that the family $\mathcal A$ is maximal linked. By the maximality, $\mathcal A$ is monotone. Consequently, its function representation $\varphi$ is monotone. The maximal linked property of $\mathcal A$ guarantees that for any subset $A\subset X$ we get $(A\in\mathcal A)\Leftrightarrow (X\setminus A\notin\mathcal A)$. Then
$$
\begin{aligned}
\varphi(X\setminus A)&=\{x\in X:x^{-1}(X\setminus A)\in\mathcal A\}=\{x\in X:X\setminus x^{-1}A\in\mathcal A\}=\\
&=\{x\in X:x^{-1}A\notin\mathcal A\}=X\setminus \{x\in X:x^{-1}A\in\mathcal A\}=X\setminus \varphi(A),
\end{aligned}
$$
which means that the function $\varphi$ is symmetric.
Now assuming that the function $\varphi$ is monotone and symmetric, we shall show that the family $\mathcal A=\Phi^{-1}(\varphi)$ is maximal linked. The statement (1) guarantees that $\mathcal A$ is monotone. Assuming that $\mathcal A$ is not linked, we could find two disjoint sets $A,B\in\mathcal A$. Since $\mathcal A$ is monotone, we can assume that $B=X\setminus A$. Then $e\in\varphi(A)\cap\varphi(X\setminus A)$, which is impossible as $\varphi(X\setminus A)=X\setminus\varphi(A)$. Thus $\mathcal A$ is linked. To show that $\mathcal A$ is maximal linked, it suffices to check that for each subset $A\subset X$ either $A$ or $X\setminus A$ belongs to $\mathcal A$. Since $\varphi(X\setminus A)=X\setminus \varphi(A)$, either $\varphi(A)$ or $\varphi(X\setminus A)$ contains the neutral element $e$ of the group $X$. In the first case $A\in\mathcal A$ and in the second case $X\setminus A\in\mathcal A$.
\end{proof}
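To illustrate Proposition~\ref{p4.3}, consider the cyclic group $X=\{e,c,c^2\}$ of order 3 and the family $\triangle=\{A\subset X:|A|\ge 2\}$, which is easily seen to be maximal linked and left-invariant. Since $|x^{-1}A|=|A|$ for all $x\in X$, its function representation is given by $\Phi_\triangle(A)=X$ if $|A|\ge 2$ and $\Phi_\triangle(A)=\emptyset$ otherwise. This function is monotone and symmetric, in accordance with the third item, and takes its values in $\{\emptyset,X\}$, in accordance with the second item.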
Let us recall that the aim of this paper is the description of the structure of minimal left ideals of the superextension $\lambda(X)$ of a group $X$.
Instead of the semigroup $\lambda(X)$ it will be more convenient to consider its isomorphic copy
$$\End_\lambda(\mathsf P(X))=\Phi(\lambda(X))\subset\End(\mathsf P(X))$$ called the {\em function representation of} $\lambda(X)$.
Proposition~\ref{p4.3} implies
\begin{corollary}\label{c4.4} The function representation $\End_\lambda(\mathsf P(X))$ of $\lambda(X)$ consists of equivariant monotone symmetric functions $\varphi:\mathsf P(X)\to\mathsf P(X)$.
\end{corollary}
In order to describe the structure of minimal left ideals of the semigroup $\End_\lambda(\mathsf P(X))$ we shall look for a relatively small subfamily $\mathsf F\subset\mathsf P(X)$ such that the restriction operator
$$R_{\mathsf F}:\End_\lambda(\mathsf P(X))\to\mathsf P(X)^{\mathsf F},\;\;R_{\mathsf F}:\varphi\mapsto\varphi|\mathsf F,$$
is injective on each minimal left ideal of the semigroup $\Enl(\mathsf P(X))$.
Then the composition
$$\Phi_{\mathsf F}=R_{\mathsf F}\circ \Phi:\lambda(X)\to\mathsf P(X)^{\mathsf F}$$will be injective on each minimal left ideal of the semigroup $\lambda(X)$.
By Proposition~\ref{c2.2}, a homomorphism between semigroups is injective on each minimal left ideal if it is injective on some minimal left ideal. Such a special minimal left ideal of the semigroup $\lambda(X)$ will be found in the left ideal of the form $\lambda^\mathcal I(X)$ for a suitable left-invariant ideal $\mathcal I$ of subsets of the group $X$.
A family $\mathcal I$ of subsets of $X$ is called an {\em ideal} on $X$ if
\begin{itemize}
\item $X\notin \mathcal I$;
\item $A\cup B\in\mathcal I$ for any $A,B\in\mathcal I$;
\item for any $A\in \mathcal I$ and $B\subset A$ we get $B\in\mathcal I$.
\end{itemize}
Such an ideal $\mathcal I$ is called {\em left-invariant} (resp. {\em right-invariant}) if $xA\in\mathcal I$ (resp. $Ax\in\mathcal I$) for all $A\in\mathcal I$ and $x\in X$. An ideal $\mathcal I$ will be called {\em invariant} if it is both left-invariant and right-invariant.
The smallest ideal on $X$ is the trivial ideal $\{\emptyset\}$ containing only the empty set. The smallest non-trivial left-invariant ideal on an infinite group $X$ is the ideal $[X]^{<\omega}$ of finite subsets of $X$.
This ideal is invariant.
From now on we shall assume that $\mathcal I$ is a left-invariant ideal on a group $X$.
For subsets $A,B\subset X$ we write
\begin{itemize}
\item $A\subset_\mathcal I B$ if $A\setminus B\in\mathcal I$, and
\item $A=_\mathcal I B$ if $A\subset_\mathcal I B$ and $B\subset_\mathcal I A$.
\end{itemize}
The definition of the ideal $\mathcal I$ implies that $=_\mathcal I$ is an equivalence relation on $\mathsf P(X)$. For a subset $A\subset X$ its equivalence class $\bar{\bar A}^\mathcal I=\{B\subset X:B=_\mathcal I A\}$ is called the {\em $\mathcal I$-saturation} of $A$.
A family $\mathcal A$ of subsets of $X$ is defined to be {\em $\mathcal I$-saturated} if
$\bar{\bar A}^\mathcal I\subset\mathcal A$ for any $A\in \mathcal A$. Let us observe that a monotone family $\mathcal A\subset\mathsf P(X)$ is $\mathcal I$-saturated if and only if for any $A\in\mathcal A$ and $B\in\mathcal I$ we get $A\setminus B\in\mathcal A$.
Respectively, a function $\varphi:\mathsf P(X)\to\mathsf P(X)$ is called {\em $\mathcal I$-saturated} if $\varphi(A)=\varphi(B)$ for any subsets $A=_\mathcal I B$ of $X$.
\begin{proposition}\label{p4.5} A family $\mathcal A\subset\mathsf P(X)$ is $\mathcal I$-saturated if and only if its function representation $\Phi_\mathcal A:\mathsf P(X)\to\mathsf P(X)$ is $\mathcal I$-saturated.
\end{proposition}
\begin{proof} Assume that $\mathcal A$ is $\mathcal I$-saturated and take two subsets $A=_\mathcal I B$ of $X$. We need to show that $\Phi_\mathcal A(A)=\Phi_\mathcal A(B)$. The left-invariance of the ideal $\mathcal I$ implies that for every $x\in X$ we get $xA=_\mathcal I xB$ and hence $(xA\in\mathcal A)\;\Leftrightarrow\;(xB\in \mathcal A)$.
Then $$\Phi_\mathcal A(A)=\{x\in X:x^{-1}A\in\mathcal A\}=\{x\in X:x^{-1}B\in\mathcal A\}=\Phi_\mathcal A(B).$$
Now assume conversely that the function representation $\Phi_\mathcal A$ is $\mathcal I$-saturated and take any subsets $A=_\mathcal I B$ with $A\in\mathcal A$. Then $e\in\Phi_\mathcal A(A)=\Phi_\mathcal A(B)$, which implies that $B\in\mathcal A$.
\end{proof}
For a left-invariant ideal $\mathcal I$ on a group $X$ let $\lambda^\mathcal I(X)\subset\lambda(X)$ be the subspace of $\mathcal I$-saturated maximal linked systems on $X$ and $\Enl^\mathcal I(\mathsf P(X))\subset\Enl(\mathsf P(X))$ be the subspace consisting of $\mathcal I$-saturated monotone symmetric endomorphisms of the $X$-act $\mathsf P(X)$. It is clear that for any functions $f,g:\mathsf P(X)\to\mathsf P(X)$ the composition $f\circ g$ is $\mathcal I$-saturated provided so is the function $g$.
This trivial remark (and Lemma~\ref{maxfree} below) imply:
\begin{proposition} For any ideal $\mathcal I$ the function representation $\Phi:\lambda^\mathcal I(X)\to\Enl^\mathcal I(\mathsf P(X))$ is a topological isomorphism between the closed left ideals $\lambda^\mathcal I(X)$ and $\Enl^\mathcal I(\mathsf P(X))$
of the semigroups $\lambda(X)$ and $\Enl(\mathsf P(X))$, respectively.
\end{proposition}
The following lemma (combined with Zorn's Lemma) implies that the sets $\lambda^\mathcal I(X)$ and $\Enl^\mathcal I(\mathsf P(X))$ are not empty.
\begin{lemma}\label{maxfree} Each maximal $\mathcal I$-saturated linked system $\mathcal L$ on $X$ is maximal linked.
\end{lemma}
\begin{proof} We need to show that each set $A\subset X$ that meets all sets $L\in\mathcal L$ belongs to $\mathcal L$. We claim that $A\notin\mathcal I$. Otherwise, taking any subset $L\in\mathcal L$, we get $L\setminus A=_\mathcal I L$ and hence $L\setminus A$ belongs to $\mathcal L$, which is not possible as $L\setminus A$ misses the set $A$. Since $A\notin\mathcal I$, the $\mathcal I$-saturated family $\bar{\bar A}^\mathcal I$ is linked.
We claim that the $\mathcal I$-saturated family $\bar{\bar A}^\mathcal I\cup\mathcal L$ is linked. Assuming the converse, we would find two disjoint sets $A'\in\bar{\bar A}^\mathcal I$ and $L\in\mathcal L$. Then $L\cap A=_\mathcal I L\cap A'=\emptyset$ and hence the set $L\setminus A=_\mathcal I L$ belongs to $\mathcal L$, which is not possible as this set misses $A$.
Now we see that the family $\bar{\bar A}^\mathcal I\cup\mathcal L$, being $\mathcal I$-saturated and linked, coincides with the maximal $\mathcal I$-saturated linked system $\mathcal L$. Then $A\in \bar{\bar A}^\mathcal I\cup\mathcal L=\mathcal L$.
\end{proof}
Given a subfamily $\mathsf F\subset\mathsf P(X)$ consider the restriction operator $$R_{\mathsf F}:\mathsf P(X)^{\mathsf P(X)}\to\mathsf P(X)^{\mathsf F},\;\;R_{\mathsf F}:f\mapsto f|\mathsf F,$$ and let $\Enl(\mathsf F)=R_{\mathsf F}(\Enl(\mathsf P(X)))$ and $\Enl^\mathcal I(\mathsf F)=R_{\mathsf F}(\Enl^\mathcal I(\mathsf P(X)))$ for a left-invariant ideal $\mathcal I$ on $X$. The space $\Enl(\mathsf F)$ is compact and Hausdorff, being a continuous image of the compact space $\Enl(\mathsf P(X))$ in the Hausdorff space $\mathsf P(X)^{\mathsf F}$.
A subfamily $\mathsf F\subset\mathsf P(X)$ is called {\em $\lambda$-invariant} if $\Phi_\mathcal L(\mathsf F)\subset\mathsf F$ for each maximal linked system $\mathcal L\in\lambda(X)$. By Corollary~\ref{c4.4}, $\mathsf F$ is $\lambda$-invariant if and only if $f(\mathsf F)\subset\mathsf F$ for each equivariant monotone symmetric function $f:\mathsf P(X)\to \mathsf P(X)$.
If a family $\mathsf F\subset\mathsf P(X)$ is $\lambda$-invariant, then the space $\Enl(\mathsf F)\subset\mathsf F^{\mathsf F}$ is a compact right-topological semigroup with respect to the operation of composition of functions and the restriction operator $R_{\mathsf F}:\Enl(\mathsf P(X))\to\Enl(\mathsf F)$ is a surjective continuous semigroup homomorphism. In this case the composition
$$\Phi_{\mathsf F}=R_{\mathsf F}\circ\Phi:\lambda(X)\to\Enl(\mathsf F)$$also is a surjective continuous semigroup homomorphism and $\Enl^\mathcal I(\mathsf F)=\Phi_{\mathsf F}(\lambda^\mathcal I(X))$ is a left ideal in the semigroup $\Enl(\mathsf F)$.
In the following theorem we characterize the functions that belong to the space $\Enl^\mathcal I(\mathsf F)$ for an $\mathcal I$-saturated left-invariant symmetric subfamily $\mathsf F\subset\mathsf P(X)$.
A family $\mathsf F\subset\mathsf P(X)$ is called
{\em symmetric} if for each set $A\in \mathsf F$ the complement $X\setminus A\in\mathsf F$.
\begin{theorem}\label{t4.8} For a left-invariant ideal $\mathcal I$ on a group $X$ and an $\mathcal I$-saturated symmetric left-invariant family $\mathsf F\subset\mathsf P(X)$, a function $\varphi:\mathsf F\to\mathsf P(X)$ belongs to $\Enl^\mathcal I(\mathsf F)$ if and only if $\varphi$ is equivariant, symmetric, monotone, and $\mathcal I$-saturated.
\end{theorem}
\begin{proof} The ``only if'' part follows immediately from Corollary~\ref{c4.4}. To prove the ``if'' part, fix any equivariant monotone symmetric $\mathcal I$-saturated function $\varphi:\mathsf F\to\mathsf P(X)$ and consider the families $$\mathcal L_\varphi=\{x^{-1}A:A\in\mathsf F,\;x\in \varphi(A)\}\;\;\mbox{and}\;\;
\bar{\bar{\mathcal L}}^\mathcal I_\varphi=\bigcup_{A\in\mathcal L_\varphi}\bar{\bar A}^\mathcal I.$$
We claim that the family $\bar{\bar{\mathcal L}}_\varphi^\mathcal I$ is linked. Assuming the
converse, we could find two sets $A,B\in\mathsf F$ and two points $x\in
\varphi(A)$ and $y\in \varphi(B)$ such that $x^{-1}A\cap y^{-1}B\in\mathcal I$. Then
$yx^{-1}A\subset_{\mathcal I} X\setminus B$ and hence $yx^{-1}A\subset (X\setminus B)\cup C$ for some set $C\in\mathcal I$. Since $\mathsf F$ is symmetric and $\mathcal I$-saturated, the set $(X\setminus B)\cup C=_\mathcal I X\setminus B$ belongs to the family $\mathsf F$.
Applying to the chain of the inclusions $$yx^{-1}A\subset (X\setminus B)\cup C=_\mathcal I X\setminus B$$ the equivariant monotone symmetric $\mathcal I$-saturated function $\varphi$, we get the chain
$$yx^{-1}\varphi(A)\subset\varphi((X\setminus B)\cup C)=\varphi(X\setminus B)=X\setminus \varphi(B).$$ Then $x^{-1}\varphi(A)\subset X\setminus y^{-1}\varphi(B)$, which is
not possible because the neutral element $e$ of the group $X$
belongs to $x^{-1}\varphi(A)\cap y^{-1}\varphi(B)$.
Enlarge the $\mathcal I$-saturated linked family $\bar{\bar{\mathcal L}}_\varphi^\mathcal I$ to a maximal $\mathcal I$-saturated
linked family $\mathcal L$, which is maximal linked by Lemma \ref{maxfree} and
thus $\mathcal L\in\lambda^{\mathcal I}(X)$. We claim that $\Phi_\mathcal L|\mathsf
F=\varphi$. Indeed, take any set $A\in\mathsf F$ and observe that
$$\varphi(A)\subset\{x\in X:x^{-1}A\in\mathcal L_\varphi\}\subset \{x\in X:x^{-1}A\in\mathcal L\}=\Phi_\mathcal L(A).$$
To prove the reverse inclusion, observe that for any $x\in X\setminus \varphi(A)=\varphi(X\setminus A)$ we get $x^{-1}(X\setminus A)=X\setminus x^{-1}A\in\mathcal L_\varphi\subset\mathcal L$. Since $\mathcal L$ is linked, $x^{-1}A\notin\mathcal L$ and hence $x\notin\Phi_\mathcal L(A)$.
\end{proof}
\begin{corollary} For any symmetric left-invariant family $\mathsf F\subset\mathsf P(X)$, a function $\varphi:\mathsf F\to\mathsf P(X)$ belongs to $\Enl(\mathsf F)$ if and only if $\varphi$ is equivariant, symmetric, and monotone.
\end{corollary}
\section{Twin and $\mathcal I$-twin subsets of groups}\label{s4}
In this section we start studying twin sets, the principal technical tool of this paper.
For an abelian (more generally, twinic) group $X$, the twin subsets of $X$ form a subfamily $\mathsf T\subset\mathsf P(X)$ for which the function representation $\Phi_{\mathsf T}:\lambda(X)\to\Enl(\mathsf T)$ is injective on each minimal left ideal of the superextension $\lambda(X)$. The machinery related to twin sets will be developed in Sections~\ref{s4}--\ref{s15}, after which we shall return to studying minimal (left) ideals of the semigroups $\lambda(X)$ and $\End(X)$.
For a subset $A$ of a group $X$ consider the following three subsets of $X$:
$$
\begin{aligned}
&\Fix(A)=\{x\in X:xA=A\},\\
&\Fix^-(A)=\{x\in X:xA=X\setminus A\}, \\
&\Fix^\pm(A)=\Fix(A)\cup\Fix^-(A).
\end{aligned}$$
\begin{definition}
A subset $A\subset X$ is defined to be
\begin{itemize}
\item {\em twin} if $xA=X\setminus A$ for some $x\in X$,
\item {\em pretwin} if $xA\subset X\setminus A\subset yA$ for some points $x,y\in X$.
\end{itemize}
The families of twin and pretwin subsets of $X$ will be denoted by $\mathsf T$ and $\mathsf{pT}$, respectively.
\end{definition}
Observe that a set $A\subset X$ is twin if and only if $\Fix^-(A)$ is not empty.
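For example, in the group $X=\mathbb Z$ (written additively) the set $A=2\mathbb Z$ of even numbers is twin, since $1+A=\mathbb Z\setminus A$. In this case $\Fix(A)=2\mathbb Z$, $\Fix^-(A)=1+2\mathbb Z$, and $\Fix^\pm(A)=\mathbb Z$, so $\Fix(A)$ is a subgroup of index 2 in $\Fix^\pm(A)$, in accordance with Proposition~\ref{p5.3} below.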
The notion of a twin set has an obvious ``ideal'' version.
For a left-invariant ideal $\mathcal I$ of subsets of a group $X$, and a subset $A\subset X$ consider the following subsets of $X$:
$$
\begin{aligned}
&\I\mbox{-}\Fix(A)=\{x\in X:xA=_\mathcal I A\},\\
&\I\mbox{-}\Fix^-(A)=\{x\in X:xA=_\mathcal I X\setminus A\},\\ &\I\mbox{-}\Fix^\pm(A)=\I\mbox{-}\Fix(A)\,\cup\,\I\mbox{-}\Fix^-(A).
\end{aligned}$$
\begin{definition}
A subset $A\subset X$ is defined to be
\begin{itemize}
\item {\em $\mathcal I$-twin} if $xA=_\mathcal I X\setminus A$ for some $x\in X$,
\item {\em $\mathcal I$-pretwin} if $xA\subset_\mathcal I X\setminus A\subset_\mathcal I yA$ for some points $x,y\in X$.
\end{itemize}
The families of $\mathcal I$-twin and $\mathcal I$-pretwin subsets of $X$ will be denoted by $\mathsf T^\mathcal I$ and $\mathsf{pT}^\mathcal I$, respectively.
\end{definition}
It is clear that $\Tau^{\{\emptyset\}}=\Tau$ and $\mathsf{pT}^{\{\emptyset\}}=\mathsf{pT}$.
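The inclusion $\Tau\subset\Tau^\mathcal I$ can be strict. For example, if $X=\mathbb Z$ and $\mathcal I=[\mathbb Z]^{<\omega}$, then the set $A=2\mathbb Z\setminus\{0\}$ is $\mathcal I$-twin but not twin: the symmetric difference of the sets $1+A$ and $\mathbb Z\setminus A$ is the finite set $\{0,1\}$, so $1+A=_\mathcal I\mathbb Z\setminus A$, while a direct check shows that no shift $x+A$ is equal to $\mathbb Z\setminus A$.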
\begin{proposition}\label{p5.3}
For each subset $A\subset X$ the set $\I\mbox{-}\Fix^\pm(A)$ is a subgroup
in $X$. The set $A$ is $\mathcal I$-twin if and only if $\I\mbox{-}\Fix(A)$ is a normal subgroup of index 2 in $\I\mbox{-}\Fix^\pm(A)$.
\end{proposition}
\begin{proof} If the set $A$ is not $\mathcal I$-twin, then $\I\mbox{-}\Fix^-(A)=\emptyset$ and then $\I\mbox{-}\Fix^\pm(A)=\I\mbox{-}\Fix(A)=\{x\in X:xA=_\mathcal I A\}$ is a subgroup of $X$ by the transitivity and the left-invariance of the equivalence relation $=_\mathcal I$.
So, we assume that $A$ is $\mathcal I$-twin, which means that $\I\mbox{-}\Fix^-(A)\ne\emptyset$.
To show that $\I\mbox{-}\Fix^\pm(A)$
is a subgroup in $X$, take any two points $x,y\in \I\mbox{-}\Fix^\pm(A)$. We claim that $xy^{-1}\in\I\mbox{-}\Fix^\pm(A)$.
This is clear if $x,y\in\I\mbox{-}\Fix(A)\subset\I\mbox{-}\Fix^\pm(A)$.
If $x\in\I\mbox{-}\Fix(A)$ and $y\in\I\mbox{-}\Fix^{-}(A)$, then $xA=_\mathcal I A$, $yA=_\mathcal I X\setminus A$ and thus $A=_\mathcal I X\setminus y^{-1}A$ which implies $y^{-1}A=_\mathcal I X\setminus A$. Then $xy^{-1}A=_\mathcal I x(X\setminus
A)= X\setminus xA=_\mathcal I X\setminus A$, which means that
$xy^{-1}\in\I\mbox{-}\Fix^{-}(A)\subset\I\mbox{-}\Fix^\pm(A)$.
If $x,y\in\I\mbox{-}\Fix^{-}(A)$,
then $xA=_\mathcal I X\setminus A$, $y^{-1}A=_\mathcal I X\setminus A$. This implies that
$xy^{-1}A=_\mathcal I x(X\setminus A)=_\mathcal I X\setminus xA=_\mathcal I X\setminus (X\setminus
A)=_\mathcal I A$ and consequently $xy^{-1}\in\I\mbox{-}\Fix(A)$.
To show that $\I\mbox{-}\Fix(A)$ is a subgroup of index 2 in $\I\mbox{-}\Fix^\pm(A)$, fix any element $g\in\I\mbox{-}\Fix^-(A)$. Then for every $x\in \I\mbox{-}\Fix(A)$ we get $gxA=_\mathcal I gA=_\mathcal I X\setminus A$ and thus
$gx\in\I\mbox{-}\Fix^{-}(A)$. This yields $\I\mbox{-}\Fix^{-}(A)=g(\I\mbox{-}\Fix (A))$, which means that the subgroup $\I\mbox{-}\Fix(A)$ has index 2 in the group $\I\mbox{-}\Fix^\pm(A)$ and hence is normal in it.
\end{proof}
The following proposition shows that the family $\Tau^\mathcal I$ of $\mathcal I$-twin
sets of a group $X$ is left-invariant.
\begin{proposition}\label{p5.4}
For any $\mathcal I$-twin set $A\subset X$ and any $x\in X$ the set $xA$ is $\mathcal I$-twin and $\I\mbox{-}\Fix^-(xA)=x\,(\I\mbox{-}\Fix^-(A))\,x^{-1}$.
\end{proposition}
\begin{proof}
To see that $xA$ is an $\mathcal I$-twin set, take any $z\in\I\mbox{-}\Fix^-(A)$ and observe that $$X\setminus xA= x(X\setminus A)=_\mathcal I xzA= xzx^{-1}xA,$$ which means that $xzx^{-1}\in\I\mbox{-}\Fix^-(xA)$ for every $z\in \I\mbox{-}\Fix^-(A)$. Hence $x\,(\I\mbox{-}\Fix^-(A))\,x^{-1}\subset\I\mbox{-}\Fix^-(xA)$. Applying the same argument to the set $xA$ and the point $x^{-1}$, we obtain the reverse inclusion and hence $\I\mbox{-}\Fix^-(xA)=x\,(\I\mbox{-}\Fix^-(A))\,x^{-1}$.
\end{proof}
The preceding proposition implies that the family $\Tau^\mathcal I$ of $\mathcal I$-twin subsets of $X$ can be considered as an $X$-act with respect to the left action
$$\cdot:X\times\Tau^\mathcal I\to \Tau^\mathcal I,\quad \cdot: (x,A)\mapsto xA$$
of the group $X$. By $[A]=\{xA:x\in X\}$ we denote the orbit of an $\mathcal I$-twin set $A\in\Tau^\mathcal I$ and by $[\Tau^\mathcal I]=\{[A]:A\in\Tau^\mathcal I\}$ the orbit space.
If $\mathcal I=\{\emptyset\}$ is a trivial ideal, then we write $[\Tau]$ instead of $[\Tau^\mathcal I]$.
\section{Twinic groups}\label{s:tg}
A left-invariant ideal $\mathcal I$ on a group $X$ is called {\em twinic} if for any subset $A\subset X$ and points $x,y\in X$ with $xA\subset_\mathcal I X\setminus A\subset_\mathcal I yA$ we get $xA=_\mathcal I X\setminus A=_\mathcal I yA$. In this case the families $\mathsf{pT}^\mathcal I$ and $\Tau^\mathcal I$ coincide.
A group $X$ is defined to be {\em twinic } if it admits a twinic ideal $\mathcal I$.
It is clear that in a twinic group $X$ the intersection ${\I\!\!\I}$ of all twinic ideals is the smallest twinic ideal in $X$ called {\em the twinic ideal} of $X$. The structure of the twinic ideal ${\I\!\!\I}$ can be described as follows.
Let ${\I\!\!\I}_0=\{\emptyset\}$ and for each $n\in\omega$ let ${\I\!\!\I}_{n+1}$ be the ideal generated by the sets of the form $yA\setminus xA$ where $xA\subset_{{\I\!\!\I}_n} X\setminus A\subset_{{\I\!\!\I}_n} yA$ for some $A\subset X$ and $x,y\in X$. By induction it is easy to check that each ${\I\!\!\I}_n$ is an invariant ideal and that ${\I\!\!\I}_n\subset{\I\!\!\I}_{n+1}\subset\mathcal I$ for every twinic ideal $\mathcal I$ on $X$. Hence ${\I\!\!\I}=\bigcup_{n\in\omega}{\I\!\!\I}_n$ is a well-defined smallest twinic ideal on $X$. This ideal ${\I\!\!\I}$ is invariant.
In fact, the above inductive construction of the additive invariant family ${\I\!\!\I}$ makes sense for every group $X$; however, ${\I\!\!\I}$ is an ideal if and only if the group $X$ is twinic.
We shall say that a group $X$ has {\em trivial twinic ideal} if the trivial ideal $\mathcal I=\{\emptyset\}$ is twinic. This happens if and only if for any subset $A\subset X$ and points $x,y\in X$ with $xA\subset X\setminus A\subset yA$ we get $xA=X\setminus A=yA$. In this case the twinic ideal ${\I\!\!\I}$ of $X$ is trivial.
The class of twinic groups is sufficiently wide. In particular, it contains all amenable groups. Let us recall that a group $X$ is called {\em amenable} if it admits a Banach measure $\mu:\mathsf P(X)\to[0,1]$, i.e., a left-invariant probability measure defined on the family $\mathsf P(X)$ of all subsets of $X$. In this case the family$$\mathcal N_\mu=\{A\subset X:\mu(A)=0\}$$is a left-invariant ideal on $X$.
It is well-known that the class of amenable groups contains all abelian groups and is closed with respect to many operations over groups, see \cite{Pat}.
A subset $A$ of an amenable group $X$ is called {\em absolutely null\/} if $\mu(A)=0$ for each Banach measure $\mu$ on $X$. The family $\mathcal N$ of all absolutely null subsets is a left-invariant ideal on $X$, which coincides with the intersection $\mathcal N=\bigcap_{\mu}\mathcal N_\mu$, where $\mu$ runs over all Banach measures on $X$.
\begin{theorem} Each amenable group $X$ is twinic. The twinic ideal ${\I\!\!\I}$ of $X$ lies in the ideal $\mathcal N$ of absolute null subsets of $X$.
\end{theorem}
\begin{proof} It suffices to check that the ideal $\mathcal N$ is twinic. Take any set $A\subset X$ such that $xA\subset_{\mathcal N} X\setminus A\subset_{\mathcal N}yA$ for some $x,y\in X$. We need to show that $\mu(yA\setminus xA)=0$ for each Banach measure $\mu$ on $X$. It follows from $xA\subset_{\mathcal N} X\setminus A\subset_{\mathcal N}yA$ and the left-invariance of the Banach measure $\mu$ that
$$\mu(A)=\mu(xA)\le \mu(X\setminus A)\le\mu(yA)=\mu(A)$$and hence
$\mu(yA\setminus xA)=\mu(A)-\mu(A)=0$.
\end{proof}
Next, we show that the class of twinic groups also contains some non-amenable groups. The simplest example is the Burnside group $B(n,m)$ for $n\ge 2$ and odd $m\ge 665$. We recall that the Burnside group $B(n,m)$ is the quotient of the free group with $n$ generators by the relations $x^m=1$ imposed on all its elements $x$. Adian \cite{Ad} proved that for $n\ge 2$ and any odd $m\ge 665$ the Burnside group $B(n,m)$ is not amenable, see also \cite{Osin} for a stronger version of this result. The following theorem implies that each Burnside group, being a torsion group, is twinic. Moreover, its twinic ideal ${\I\!\!\I}$ is trivial!
\begin{theorem}\label{t6.2} A group $X$ has trivial twinic ideal ${\I\!\!\I}=\{\emptyset\}$ if and only if the product $ab$ of any elements $a,b\in X$ belongs to the subsemigroup of $X$ generated by the set $b^\pm\cdot a^\pm$ where $a^\pm=\{a,a^{-1}\}$.
\end{theorem}
\begin{proof} To prove the ``if'' part (by contraposition), assume that ${\I\!\!\I}\ne\{\emptyset\}$.
Then ${\I\!\!\I}_{n+1}\ne\{\emptyset\}={\I\!\!\I}_{n}$ for some $n\in\omega$ and we can find a subset $A\subset X$ and points $a,b\in X$ such that $a^{-1}A\subset X\setminus A\subset bA$ but $a^{-1}A\ne bA$. Consider the subsemigroup $\Fix_\subset(A)=\{x\in X:xA\subset A\}\subset X$ and observe that $b^{-1}a^{-1}\in\Fix_\subset(A)$. The inclusion $a^{-1}A\subset X\setminus A$ implies $a^{-1}A\cap A=\emptyset$ which is equivalent to $A\cap aA=\emptyset$ and yields $aA\subset X\setminus A\subset bA$. Then $b^{-1}a\in\Fix_\subset(A)$.
Now consider the chain of the equivalences
$$X\setminus A\subset bA\;\Leftrightarrow \; A\cup bA=X\;\Leftrightarrow \;
b^{-1}A\cup A=X \;\Leftrightarrow \; X\setminus A\subset b^{-1}A$$ and combine the last inclusion with $aA\cup a^{-1}A\subset X\setminus A$ to obtain $ba,ba^{-1}\in \Fix_{\subset}(A)$. Now we see that the subsemigroup $S$ of $X$ generated by the set $\{1,ba,ba^{-1},b^{-1}a,b^{-1}a^{-1}\}$ lies in $\Fix_\subset(A)$. Observe that $b^{-1}a^{-1}A\subsetneqq A$ implies $abA\not\subset A$, $ab\notin\Fix_\subset(A)\supset S$, and finally $ab\notin S$.
This completes the proof of the ``if'' part.
To prove the ``only if'' part, assume that the group $X$ contains elements $a,b$ whose product $ab$ does not belong to the subsemigroup generated by $b^\pm a^\pm$ where $a^\pm=\{a,a^{-1}\}$ and $b^\pm=\{b,b^{-1}\}$. Then $ab$ also does not belong to the subsemigroup $S$ generated by $\{1\}\cup b^\pm a^\pm$. Observe that $a^\pm S=S^{-1}a^{\pm}$ and $b^\pm S^{-1}=Sb^\pm$.
We claim that
\begin{equation}\label{sa}S\cap a^\pm S=\emptyset \mbox{ and }S\cap Sb^\pm=\emptyset.
\end{equation}
Assuming that $S\cap a^\pm S\ne \emptyset$ we would find a point $s\in S$ such that $as\in S$ or $a^{-1}s\in S$. If $as\in S$, then $bs^{-1}=b(as)^{-1}a\in bS^{-1}a\subset Sb^\pm a\subset S$ and hence $b=bs^{-1}s\in S\cdot S\subset S$. Then $a^\pm=b(b^{-1}a^\pm)\subset S\cdot S\subset S$, $b^\pm=(b^\pm a)a^{-1}\in S\cdot S\subset S$ and finally $ab\in S\cdot S\subset S$, which contradicts $ab\notin S$. By analogy we can treat the case $a^{-1}s\in S$ and also prove that $S\cap Sb^\pm=\emptyset$.
Consider the family $\mathcal P$ of all pairs $(A,B)$ of disjoint subsets of $X$ such that
\begin{itemize}
\item[(a)] $a^\pm A\subset B$ and $b^\pm B\subset A$;
\item[(b)] $S^{-1}B\subset B$;
\item[(c)] $1\in A$, $ab\in B$.
\end{itemize}
The family $\mathcal P$ is partially ordered by the relation $(A,B)\le(A',B')$ defined by $A\subset A'$ and $B\subset B'$.
We claim that the pair $(A_0,B_0)=(S\cup Sb^\pm ab,S^{-1}a^\pm\cup S^{-1} ab)$ belongs to $\mathcal P$. Indeed,
$$a^\pm A_0=a^\pm S\cup a^\pm Sb^\pm ab\subset S^{-1}a^\pm\cup S^{-1}a^\pm b^\pm ab\subset S^{-1}a^\pm\cup S^{-1} ab\subset B_0.$$By analogy we check that $b^\pm B_0\subset A_0$.
The items (b), (c) trivially follow from the definition of $A_0$ and $B_0$. It remains to check that the sets $A_0$ and $B_0$ are disjoint.
This will follow as soon as we check that
\begin{itemize}
\item[(d)] $S\cap S^{-1}a^\pm =\emptyset$,
\item[(e)] $S\cap S^{-1} ab=\emptyset$,
\item[(f)] $Sb^\pm ab\cap S^{-1}a^\pm=\emptyset$,
\item[(g)] $Sb^\pm ab\cap S^{-1} ab=\emptyset$.
\end{itemize}
The items (d) and (g) follow from (\ref{sa}). The item (e) follows from $ab\notin S\cdot S=S$. By the same reason, we get the item (f) which is equivalent to $ab\notin b^\pm S^{-1}\cdot S^{-1}a^\pm=b^\pm S^{-1} a^\pm=Sb^\pm a^\pm\subset S$.
Thus the partially ordered set $\mathcal P$ is not empty and we can apply Zorn's Lemma to find a maximal pair $(A,B)\ge(A_0,B_0)$ in $\mathcal P$. We claim that $A\cup B=X$.
Assuming the converse, we could take any point $x\in X\setminus(A\cup B)$ and put $A'=A\cup Sx$, $B'=B\cup a^\pm Sx$. It is clear that $a^\pm A'\subset B'$ and $b^\pm B'\subset A'$, $S^{-1}B'=S^{-1}B\cup S^{-1}a^\pm Sx\subset B\cup a^\pm SSx=B'$, $1\in A\subset A'$ and $ab\in B\subset B'$.
Now we see that the inclusion $(A',B')\in\mathcal P$ will follow as soon as we check that $A'\cap B'=\emptyset$. The choice of $x\notin B=S^{-1}B$ guarantees that $Sx\cap B=\emptyset$. Assuming that $a^{\pm}Sx\cap A\ne\emptyset$, we would conclude that $x\in S^{-1}a^\pm A\subset S^{-1}B\subset B$, which contradicts the choice of $x$.
Finally, the sets $Sx$ and $a^\pm Sx$ are disjoint because of the property (\ref{sa}) of $S$.
Thus we obtain a contradiction: $(A',B')\in\mathcal P$ is strictly greater than the maximal pair $(A,B)$. This contradiction shows that $X=A\cup B$ and consequently,
$aA\subset X\setminus A=B\subset bA$, which means that the set $A$ is pretwin and then $bA\setminus aA\in{\I\!\!\I}_1\subset{\I\!\!\I}$. Since $aA\subset X\setminus A$ implies $a^{-1}A\subset X\setminus A$, also $bA\setminus a^{-1}A\in{\I\!\!\I}_1$. Since $1\in A\setminus b^{-1}a^{-1}A$, we conclude that $bA\setminus a^{-1}A\ni b$ is not empty and thus ${\I\!\!\I}\ne\{\emptyset\}$.
\end{proof}
We recall that a group $X$ is {\em periodic} (or else a {\em torsion group}) if each element $x\in X$ has finite order (which means that $x^n=e$ for some $n\in\mathbb N$). We shall say that a group $X$ has {\em periodic commutators} if for any $x,y\in X$ the commutator $[x,y]=xyx^{-1}y^{-1}$ has finite order in $X$. It is interesting to note that this condition is strictly weaker than the requirement that $X$ have periodic commutator subgroup $X'$ (we recall that the commutator subgroup $X'$ consists of all finite products of commutators), see \cite{DK}.
\begin{proposition}\label{p9.4} Each group $X$ with periodic commutators has trivial twinic ideal ${\I\!\!\I}=\{\emptyset\}$.
\end{proposition}
\begin{proof} Since $X$ has periodic commutators, for any points $x,y\in X$ there is a number $n\in\mathbb N$ such that
$$xyx^{-1}y^{-1}=(yxy^{-1}x^{-1})^{-1}=(yxy^{-1}x^{-1})^{n}$$ and thus $xy=(yxy^{-1}x^{-1})^{n}\cdot yx$ belongs to the semigroup generated by the set $y^\pm\cdot x^\pm$. Applying Theorem~\ref{t6.2}, we conclude that the group $X$ has trivial twinic ideal ${\I\!\!\I}=\{\emptyset\}$.
\end{proof}
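For a concrete illustration of the identity used in this proof, consider the quaternion group $Q_8$ (whose commutators lie in the subgroup $\{-1,1\}$ and hence are periodic) and take $x=i$, $y=j$. Then
$$xyx^{-1}y^{-1}=ij(-i)(-j)=(ij)(ij)=k^2=-1=(yxy^{-1}x^{-1})^1,$$
so $xy=(yxy^{-1}x^{-1})\cdot yx=(-1)(-k)=k=ij$ indeed belongs to the subsemigroup generated by the set $y^\pm\cdot x^\pm$.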
We recall that a
group $G$ is called {\em abelian-by-finite} (resp. {\em finite-by-abelian}) if $G$ contains a
normal Abelian (resp. finite) subgroup $H\subset G$ with finite (resp. Abelian) quotient $G/H$. Observe that each finite-by-abelian group has periodic commutators and hence has trivial twinic ideal ${\I\!\!\I}$.
In contrast, each abelian-by-finite group, being amenable, is twinic, but its twinic ideal ${\I\!\!\I}$ need not be trivial. The simplest counterexample is the isometry group $\mathrm{Iso}(\mathbb Z)$ of the group
$\mathbb Z$ of integers endowed with the Euclidean metric.
\begin{example}\label{ex6.4} The abelian-by-finite group $X=\mathrm{Iso}(\mathbb Z)$ is twinic. Its twinic ideal ${\I\!\!\I}$ coincides with the ideal $[X]^{<\omega}$ of all finite subsets of $X$.
\end{example}
\begin{proof} Let $a:x\mapsto x+1$ be the translation and $b:x\mapsto -x$ be the inversion of the group $\mathbb Z$. It is easy to see that the elements $a,b$ generate the isometry group $X=\mathrm{Iso}(\mathbb Z)$ and satisfy the relations $b^2=1$ and $bab^{-1}=a^{-1}$. Let $Z=\{a^n:n\in\mathbb Z\}$ be the cyclic subgroup of $X$ generated by the translation $a$. This subgroup $Z$ has index 2 in $X=Z\cup Zb$.
First we show that the ideal $\mathcal I=[X]^{<\omega}$ of finite subsets of $X$ is twinic. Let $A\subset X$ be a subset with $xA\subset_\mathcal I X\setminus A\subset_\mathcal I yA$ for some $x,y\in X$. We need to show that $yA=_\mathcal I xA$.
We consider three cases.
\smallskip
1) $x,y\in Z$. In this case the elements $x,y$ commute.
The $\mathcal I$-inclusion $xA\subset_\mathcal I yA$ implies $y^{-1}xA\subset_\mathcal I A$.
We claim that $y^{-1}xA\supset_\mathcal I A$. Observe that the $\mathcal I$-inclusion
$xA\subset_\mathcal I X\setminus A$ is equivalent to $xA\cap A\in\mathcal I$ and to $A\cap x^{-1}A\in\mathcal I$, which implies $x^{-1}A\subset_\mathcal I X\setminus A$. By analogy, $X\setminus A\subset_\mathcal I yA$ is equivalent to $yA\cup A=_\mathcal I X$ and to $A\cup y^{-1}A=_\mathcal I X$, which implies $X\setminus A\subset_\mathcal I y^{-1}A$. Then $x^{-1}A\subset_\mathcal I X\setminus A\subset_\mathcal I y^{-1}A$ implies $yx^{-1}A\subset_\mathcal I A$ and by the left-invariance of $\mathcal I$,
$A\subset_\mathcal I xy^{-1}A=y^{-1}xA$ (we recall that the elements $x,y^{-1}$ commute).
Therefore, $y^{-1}xA=_\mathcal I A$ and hence $xA=_\mathcal I yA$.
\smallskip
2) $x\in Z$ and $y\in X\setminus Z$. Repeating the argument from the preceding case, we can show that $xA\subset_\mathcal I X\setminus A$ implies $x^{-1}A\subset_\mathcal I X\setminus A$. Then we get the chain of $\mathcal I$-inclusions:
$$xA\subset_\mathcal I X\setminus A\subset_\mathcal I yA\subset_\mathcal I y(X\setminus xA)=yx(X\setminus A)\subset_\mathcal I yxyA=_\mathcal I xA,$$where the last $\mathcal I$-equality follows from the case (1) since $x,yxy\in Z$. Now we see that $xA=_\mathcal I yA$.
\smallskip
3) $x\notin Z$. If $y\in Z$, then we can pass to the complement $C=X\setminus A$: the $\mathcal I$-inclusion $X\setminus A\subset_\mathcal I yA$ is equivalent to $yC\subset_\mathcal I X\setminus C$, and $xA\subset_\mathcal I X\setminus A$ is equivalent to $X\setminus C\subset_\mathcal I xC$. Applying the case (2) to the set $C$ and the elements $y\in Z$, $x\in X\setminus Z$, we get $yC=_\mathcal I xC$ and hence $xA=X\setminus xC=_\mathcal I X\setminus yC=yA$.

If $y\notin Z$, then $t=y^{-1}x=a^d$ for some $d\in\mathbb Z$ (the product of two elements of $X\setminus Z$ is a translation), and $xA\subset_\mathcal I yA$ implies $tA=y^{-1}xA\subset_\mathcal I A$. If $d=0$, then $x=y$ and there is nothing to prove. If $d\ne 0$, then $X$ decomposes into $2|d|$ orbits $\{t^ng:n\in\mathbb Z\}$, $g\in X$, of the left shift by $t$. On each orbit the trace of $A$ is a subset $T\subset\mathbb Z$ with $T+1\subset_{[\mathbb Z]^{<\omega}}T$, and every such $T$ coincides, up to a finite set, with $\emptyset$, with $\mathbb Z$, or with an upper half-line; in all three cases $T-1\subset_{[\mathbb Z]^{<\omega}}T$. Summing over the finitely many orbits, we get $t^{-1}A\subset_\mathcal I A$, which is equivalent to $yA\subset_\mathcal I xA$ and, together with $xA\subset_\mathcal I yA$, yields $xA=_\mathcal I yA$.
This completes the proof of the twinic property of the ideal $\mathcal I=[X]^{<\omega}$. Then the twinic ideal ${\I\!\!\I}\subset[X]^{<\omega}$. Since $[X]^{<\omega}$ is the smallest non-trivial left-invariant ideal on $X$, the equality ${\I\!\!\I}=[X]^{<\omega}$ will follow as soon as we find a non-empty set in the ideal ${\I\!\!\I}$.
To this end, consider the subset $A=\{a^{n+1},ba^{-n}:n\ge0\}\subset X$ and observe that
$X\setminus A=\{a^{-n},ba^{n+1}:n\ge0\}=bA$ witnessing that $A\in\Tau$. Observe also that $aA=\{a^{n+2},aba^{-n}:n\ge0\}=\{a^{n+2},ba^{-n-1}:n\ge0\}\subsetneqq A$ and thus $baA\subsetneqq X\setminus A=bA$. Then $\emptyset\ne bA\setminus baA\in{\I\!\!\I}_1\subset{\I\!\!\I}$ witnesses that the twinic ideal ${\I\!\!\I}$ is not trivial.
\end{proof}
Next, we present a (rather expected) example of a group which is not twinic.
\begin{example} The free group $F_2$ with two generators is not twinic.
\end{example}
\begin{proof} Assume that the group $X=F_2$ is twinic and let ${\I\!\!\I}$ be the twinic ideal of $F_2$. Let $a,b$ be the generators of the free group $F_2$. Each element $w\in F_2$ can be represented by a word in the alphabet $\{a,a^{-1},b,b^{-1}\}$. The word of the smallest length representing $w$ is called the {\em irreducible representation} of $w$. The irreducible word representing the neutral element of $F_2$ is the empty word. Let $A$ (resp. $B$) be the set of words whose
irreducible representations start with the letter $a$ or $a^{-1}$ (resp. $b$ or $b^{-1}$). Consider the subset $$C=\big\{a^{2n}w:w\in B\cup\{e\},\;n\in\mathbb Z\big\}\subset F_2$$ and observe that $abaC\subset X\setminus C=aC$. Then $aC\setminus abaC\in{\I\!\!\I}_1$ by the definition of the subideal ${\I\!\!\I}_1\subset{\I\!\!\I}$. Observe that $a^3baC\subset aC\setminus abaC$ and thus $a^3baC\in{\I\!\!\I}_1$. Then also $C\in{\I\!\!\I}_1$ and $X\setminus C=aC\in{\I\!\!\I}_1$ by the left-invariance of ${\I\!\!\I}_1$. By the additivity of ${\I\!\!\I}_1$, we finally get $X=C\cup(X\setminus C)\in{\I\!\!\I}_1\subset{\I\!\!\I}$, which is the desired contradiction.
\end{proof}
Next, we prove some permanence properties of the class of twinic groups.
\begin{proposition} Let $f:X\to Y$ be a surjective group homomorphism. If the group $X$ is twinic, then so is the group $Y$.
\end{proposition}
\begin{proof} Let ${\I\!\!\I}$ be the twinic ideal of $X$. It is easy to see that $\mathcal I=\{B\subset Y:f^{-1}(B)\in{\I\!\!\I}\}$ is a left-invariant ideal on the group $Y$. We claim that it is twinic. Given any subset $A\subset Y$ with $xA\subset_\mathcal I Y\setminus A\subset_\mathcal I yA$ for some $x,y\in Y$, let $B=f^{-1}(A)$ and observe that $x'B\subset_{\I\!\!\I} X\setminus B\subset_{\I\!\!\I} y'B$ for any points $x'\in f^{-1}(x)$ and $y'\in f^{-1}(y)$. The twinic property of the ideal ${\I\!\!\I}$ guarantees that $f^{-1}(yA\setminus xA)=y'B\setminus x'B\in{\I\!\!\I}$, which implies $yA\setminus xA\in\mathcal I$ and hence $xA=_\mathcal I Y\setminus A=_\mathcal I yA$.
\end{proof}
\begin{problem} Is a subgroup of a twinic group twinic? Is the product of two twinic groups twinic?
\end{problem}
For groups with trivial twinic ideal the first part of this problem has an affirmative solution, which follows from the characterization given in Theorem~\ref{t6.2}.
\begin{proposition}
\begin{enumerate}
\item[\textup{(1)}] The class of groups with trivial twinic ideal is closed with respect to taking subgroups and quotient groups.
\item[\textup{(2)}] A group $X$ has trivial twinic ideal if and only if any 2-generated subgroup of $X$ has trivial twinic ideal.
\end{enumerate}
\end{proposition}
\section{2-Cogroups}
It follows from Proposition~\ref{p5.3} that for a twin subset $A$ of a group $X$ the stabilizer $\Fix(A)$ of $A$ is completely determined by the subset $\Fix^-(A)$ because $\Fix(A)=x\cdot\Fix^-(A)$ for each $x\in\Fix^-(A)$. Therefore, the subset $\Fix^-(A)$ carries all the information about the pair $(\Fix^\pm(A),\Fix(A))$. The sets $\Fix^-(A)$ are particular cases of so-called 2-cogroups defined as follows.
\begin{definition} A subset $K$ of a group $X$ is called a {\em 2-cogroup} if for every $x\in K$ the shift $xK=Kx$ is a subgroup of $X$, disjoint with $K$.
By the {\em index} of a 2-cogroup $K$ in $X$ we understand the cardinality $|X/K|$ of the set $X/K=\{Kx:x\in X\}$.
\end{definition}
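The simplest example of a 2-cogroup is the set $K=2\mathbb Z+1$ of odd numbers in the (additively written) group $X=\mathbb Z$: for every $x\in K$ the shift $x+K=K+x=2\mathbb Z$ is a subgroup of $\mathbb Z$ disjoint with $K$, and the index of $K$ in $\mathbb Z$ is equal to $|\mathbb Z/K|=|\{K,K+1\}|=2$.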
2-Cogroups can be characterized as follows.
\begin{proposition}\label{p6.2} A subset $K$ of a group $X$ is a 2-cogroup in $X$ if and only if there is a (unique) subgroup $H^\pm$ of $X$ and a subgroup $H\subset H^\pm$ of index 2 such that $K=H^\pm\setminus H$ and $H=K\cdot K$.
\end{proposition}
\begin{proof} If $K$ is a 2-cogroup, then for every $x\in K$ the shift $H=xK=Kx$ is a subgroup of $X$ disjoint with $K$. It follows that $K=x^{-1}H=Hx^{-1}$. Since $x^{-1}\in x^{-1}H=K$, the shift $x^{-1}K=Kx^{-1}$ is a subgroup of $X$ according to the definition of a 2-cogroup. Consequently, $x^{-1}Kx^{-1}K=x^{-1}K$, which implies $Kx^{-1}K=K$ and $Hx^{-1}x^{-1}Hx^{-1}=Kx^{-1}K=K=Hx^{-1}$. This implies $x^{-2}\in H$ and $x^2\in H$. Consequently, $xH=x^{-1}x^2H=x^{-1}H=K=Hx^{-1}=Hx^2x^{-1}=Hx$.
Now we are able to show that $H^\pm=H\cup K$ is a group. Indeed,
$$
\begin{aligned}
(H\cup K)\cdot(H\cup K)^{-1}&\subset
HH^{-1}\cup HK^{-1}\cup KH^{-1}\cup KK^{-1}\subset\\
&\subset H\cup HHx\cup xHH\cup Hx^{-1}xH=H\cup K\cup K\cup H=H^\pm.
\end{aligned}
$$
Since $K=Hx=xH$, the subgroup $H=K\cdot K$ has index 2 in $H^\pm$.
The uniqueness of the pair $(H^\pm,H)$ follows from the fact that $H=K\cdot K$ and $H^\pm=KK\cup K$.
This completes the proof of the ``only if'' part.
To prove the ``if'' part, assume that $H^\pm$ is a subgroup of $X$ and $H\subset H^\pm$ is a subgroup of index 2 such that $K=H^\pm\setminus H$. Then for every $x\in K$ the shift $xK=Kx=H$ is a subgroup of $X$ disjoint with $K$. This means that $K$ is a 2-cogroup.
\end{proof}
Proposition~\ref{p6.2} implies that for each 2-cogroup $K\subset X$ the set $K^\pm=K\cup KK$ is a subgroup of $X$ and $KK$ is a subgroup of index 2 in $K^\pm$.
By $\mathcal K$ we shall denote the family of all 2-cogroups in $X$. It is partially ordered by the inclusion relation $\subset$ and is considered as an $X$-act endowed with the conjugating action
$$\cdot:X\times\mathcal K\to\mathcal K,\;\;\cdot:(x,K)\mapsto xKx^{-1},$$
of the group $X$.
For each 2-cogroup $K\in\mathcal K$ let $\Stab(K)=\{x\in X:xKx^{-1}=K\}$ be the stabilizer of $K$ and $[K]=\{xKx^{-1}:x\in X\}$ be the orbit of $K$.
By $[\mathcal K]=\{[K]:K\in\mathcal K\}$ we denote the orbit space of $\mathcal K$ by the action of the group $X$.
A 2-cogroup $K\in\mathcal K$ is called {\em normal} if $xKx^{-1}=K$ for all $x\in X$. This is equivalent to saying that $\Stab(K)=X$.
Since for each twin subset $A\subset X$ the set $\Fix^-(A)$ is a 2-cogroup, the function
$$\Fix^-:\mathsf T\to\mathcal K,\;\;\Fix^-:A\mapsto \Fix^-(A),$$
is well-defined and equivariant according to Proposition~\ref{p5.4}. A similar equivariant function
$$\I\mbox{-}\Fix^-:\mathsf T^\mathcal I\to\mathcal K,\;\;\I\mbox{-}\Fix^-:A\mapsto \I\mbox{-}\Fix^-(A),$$can be defined for any left-invariant ideal $\mathcal I$ on a group $X$.
Let $\wht{\mathcal K}$ denote the set of maximal elements of the partially ordered set $(\mathcal K,\subset)$.
The following proposition implies that the set $\wht{\mathcal K}$ lies in the image $\Fix^-(\Tau)$ and is cofinal in $\mathcal K$.
\begin{proposition}\label{p7.3}
\begin{enumerate}
\item[\textup{(1)}] For any linearly ordered family $\mathcal C\subset\mathcal K$ of 2-cogroups in $X$ the union $\cup\mathcal C$ is a 2-cogroup in $X$.
\item[\textup{(2)}] Each 2-cogroup $K\in\mathcal K$ lies in a maximal 2-cogroup $\wht K\in\wht\mathcal K$.
\item[\textup{(3)}] For each maximal 2-cogroup $K\in\wht\mathcal K$ there is a twin subset $A\in\Tau$ with $K=\Fix^-(A)$.
\end{enumerate}
\end{proposition}
\begin{proof} 1. Let $\mathcal C\subset\mathcal K$ be a linearly ordered family of 2-cogroups of $X$. Since each 2-cogroup $C\in\mathcal C$ is disjoint with the group $C\cdot C$ and $C=C\cdot C\cdot C$, we get that the union $K=\cup\mathcal C$ is disjoint with the union $\bigcup_{C\in \mathcal C}C\cdot C=K\cdot K$ and $K=\bigcup_{C\in\mathcal C}C=\bigcup_{C\in\mathcal C}C\cdot C\cdot C=K\cdot K\cdot K$ witnessing that $K$ is a 2-cogroup.
\smallskip
2. Since each chain in $\mathcal K$ is upper bounded, Zorn's Lemma guarantees that each 2-cogroup of $X$ lies in a maximal 2-cogroup.
\smallskip
3. Given a maximal 2-cogroup $K\in\wht{\mathcal K}$, consider the subgroups $K\cdot K$ and $K^\pm=K\cup KK$ of $X$ and choose a subset $S\subset X$ meeting each coset $K^\pm x$, $x\in X$, at a single point. Consider the set $A=KK\cdot S$ and note that $X\setminus A=KS=xA$ for each $x\in K$, which means that $K\subset\Fix^-(A)$. The maximality of $K$ guarantees that $K=\Fix^-(A)$.
\end{proof}
It should be mentioned that in general, $\Fix^-(\Tau)\ne\mathcal K$.
\begin{example} For any twin subset $A$ in the 4-element group $X=C_2\oplus C_2$ the group $\Fix(A)$ is not trivial. Consequently, each singleton $\{a\}\subset X\setminus\{e\}$ is a 2-cogroup that does not belong to the image $\Fix^-(\Tau)$.
\end{example}
A left-invariant subfamily $\mathsf F\subset\Tau$ is called
\begin{itemize}
\item {\em $\wht{\mathcal K}$-covering} if $\wht{\mathcal K}\subset\Fix^-(\mathsf F)$ (this means that for each maximal 2-cogroup $K\in\wht{\mathcal K}$ there is a twin set $A\in\mathsf F$ with $\Fix^-(A)=K$);
\item {\em minimal $\wht{\mathcal K}$-covering} if $\mathsf F$ coincides with each left-invariant $\wht{\mathcal K}$-covering subfamily of $\mathsf F$.
\end{itemize}
Proposition~\ref{p7.3}(3) implies that the family
$$\wht{\Tau}=\{A\in\Tau:\Fix^-(A)\in\wht{\mathcal K}\}$$is $\wht{\mathcal K}$-covering.
\begin{proposition} For any function $f\in\Enl(\mathsf P(X))$ the family $f(\wht{\Tau})$ is $\wht{\mathcal K}$-covering.
\end{proposition}
\begin{proof} The equivariance of the function $f$ and the left-invariance of the family $\wht{\Tau}$ imply the left-invariance of the family $f(\wht{\Tau})$. To see that $f(\wht{\Tau})$ is $\wht{\mathcal K}$-covering, fix any maximal 2-cogroup $K\in\wht{\mathcal K}$ and using Proposition~\ref{p7.3}, find a twin set $A\subset X$ with $\Fix^-(A)=K$. We claim that $\Fix^-(f(A))=\Fix^-(A)=K$. By Corollary~\ref{c4.4}, the function $f$ is equivariant and symmetric. Then for every $x\in \Fix^-(A)$, applying $f$ to the equality $xA=X\setminus A$, we obtain
$$x\,f(A)=f(xA)=f(X\setminus A)=X\setminus f(A),$$which means that $x\in\Fix^-(f(A))$ and thus $\Fix^-(A)\subset\Fix^-(f(A))$. Now the maximality of the 2-cogroup $\Fix^-(A)$ guarantees that $\Fix^-(f(A))=\Fix^-(A)$.
\end{proof}
\begin{remark} In Theorem~\ref{t17.1} we shall show that for a twinic group $X$ and a function $f\in \mathsf K\big(\Enl(\mathsf P(X))\big)$ from the minimal ideal of $\Enl(\mathsf P(X))$ the family $f(\wht{\Tau})$ is minimal $\wht{\mathcal K}$-covering.
\end{remark}
For each 2-cogroup $K\subset X$ consider the families
$$\Tau_K=\{A\in\Tau:\Fix^-(A)=K\}\mbox{ and }\Tau_{[K]}=\{A\in\Tau:\exists x\in X \mbox{ with }\Fix^-(xA)=K\}.$$
The following proposition describing the structure of minimal $\wht{\mathcal K}$-covering families can be easily derived from the definitions.
\begin{proposition}\label{p6.6} A left-invariant subfamily $\mathsf F\subset \wht{\Tau}$ is minimal $\wht{\mathcal K}$-covering if and only if for each $K\in\wht{\mathcal K}$ there is a set $A\in\mathsf F$ such that $\mathsf F\cap\Tau_{[K]}=[A]$.
\end{proposition}
\section{The characteristic group ${\mathcal H}(K)$ of a 2-cogroup $K$}\label{s8n}
In this section we introduce an important notion of the characteristic group ${\mathcal H}(K)$ of a 2-cogroup $K$ in a group $X$ and reveal the algebraic structure of characteristic groups of maximal 2-cogroups.
Observe that for each 2-cogroup $K\subset X$ its stabilizer $\Stab(K)=\{x\in X:xKx^{-1}=K\}$ contains $KK$ as a normal subgroup. So, we can consider the quotient group ${\mathcal H}(K)=\Stab(K)/KK$ called the {\em characteristic group} of the 2-cogroup $K$. Characteristic groups will play a crucial role in the description of the structure of maximal subgroups of the minimal ideal of the semigroup $\lambda(X)$.
Observe that for a normal 2-cogroup $K\in\mathcal K$ the characteristic group ${\mathcal H}(K)$ is equal to the quotient group $X/KK$.
The characteristic group ${\mathcal H}(K)$ of each maximal 2-cogroup $K\subset X$ has a remarkable algebraic property: it is a 2-group with a unique element of order 2. Let us recall that a group $G$ is called a {\em 2-group} if the order of each element of $G$ is a power of 2. Let us recall some standard examples of 2-groups.
By $Q_8=\{1,i,j,k,-1,-i,-j,-k\}$ we denote the group of quaternions. It is a multiplicative subgroup of the algebra of quaternions $\IH$. The algebra $\IH$ contains the field of complex numbers $\mathbb C$ as a subalgebra.
For each $n\in\omega$ let
$$C_{2^n}=\{z\in\mathbb C:z^{2^n}=1\}$$ be the cyclic group of order $2^n$. The multiplicative subgroup $Q_{2^n}\subset\IH$ generated by the set $C_{2^{n-1}}\cup Q_8$ is called {\em the group of generalized quaternions}, see \cite[\S5.3]{Rob}. The subgroup $C_{2^{n-1}}$ has index 2 in $Q_{2^n}$ and all elements of $Q_{2^n}\setminus C_{2^{n-1}}$ have order 4.
According to our definition, $Q_{2^n}=Q_8$ for $n\le 3$.
For $n\ge 3$ the group $Q_{2^n}$ has the presentation
$$\langle x,y\mid x^2=y^{2^{n-2}},\; x^4=1,\; xyx^{-1}=y^{-1}\rangle.$$
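For example, for $n=3$ this presentation is realized in $Q_8$ by the generators $x=j$ and $y=i$: indeed, $j^2=-1=i^2=y^{2^{n-2}}$, $j^4=1$ and $jij^{-1}=-i=i^{-1}$.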
The unions
$$C_{2^\infty}=\bigcup_{n\in\omega}C_{2^n}\mbox{ \ and \ } Q_{2^\infty}=\bigcup_{n\in\omega}Q_{2^n}$$ are called the {\em quasicyclic 2-group} and {\em the infinite group of generalized quaternions}, respectively.
\begin{theorem}\label{BCQ} A group $G$ is isomorphic to $C_{2^n}$ or $Q_{2^n}$ for some $1\le n\le\infty$ if and only if $G$ is a 2-group with a unique element of order 2.
\end{theorem}
\begin{proof} The ``only if'' part is trivial. To prove the ``if'' part, assume that $G$ is a 2-group with a unique element of order 2. Denote this element by $-1$ and let $1$ be the neutral element of $G$. If the group $G$ is finite, then by Theorem 5.3.6 of \cite{Rob}, $G$ is isomorphic to $C_{2^n}$ or $Q_{2^n}$ for some $n\in\mathbb N$. So, we assume that $G$ is infinite.
Since $-1$ is a unique element of order 2, the cyclic subgroup $\{-1,1\}$ is the maximal 2-elementary subgroup of $G$ (we recall that a group is 2-elementary if it can be written as the direct sum of 2-element cyclic groups). Now Theorem 2 of \cite{Shun} implies that the group $G$ is \v Cernikov and hence contains a normal abelian subgroup $H$ of finite index. Since $G$ is infinite, so is the subgroup $H$. Let $\tilde H$ be a maximal abelian subgroup of $G$ that contains $H$.
We claim that $H=\tilde H$ and $H$ is isomorphic to the quasicyclic 2-group $C_{2^\infty}$.
Since $H$ is a 2-group, the unique element $-1$ of the group $G$ belongs to $H$. Let $f:\{-1,1\}\to C_2$ be the unique isomorphism. Since the group $C_{2^\infty}$ is injective, by Baer's Theorem \cite[4.1.2]{Rob}, the homomorphism $f:\{-1,1\}\to C_2\subset C_{2^\infty}$ extends to a homomorphism $\bar f:\tilde H\to C_{2^\infty}$. We claim that $\bar f$ is an isomorphism. Indeed, the kernel $\bar f^{-1}(1)$ of $\bar f$ is trivial since it is a 2-group and contains no element of order 2. So, $\bar f$ is injective and then $\bar f(\tilde H)$ coincides with $C_{2^\infty}$, being an infinite subgroup of $C_{2^\infty}$. By the same reason, $\bar f(H)=C_{2^\infty}$. Consequently, $H=\tilde H$ is isomorphic to $C_{2^\infty}$.
If $G=H$, then $G$ is isomorphic to $C_{2^\infty}$. So, it remains to consider the case of non-abelian group $G\ne H$.
\begin{claim}\label{cl8.2g} For every $a\in H$ and $b\in G\setminus H$ we get $b^2=-1$ and $bab^{-1}=a^{-1}$.
\end{claim}
\begin{proof} The maximality of the abelian subgroup $H$ implies that $bx\ne xb$ for some element $x\in H$. Since $H$ is quasicyclic, we can assume that the element $x$ has order $\ge 8$ and that $a$ belongs to the cyclic subgroup $\la x\ra$ generated by $x$.
Using the fact that the maximal abelian subgroup $H$ has finite index in $G$, one can show that the group $G$ is locally finite. Consequently, the subgroup
$F=\la b,x\ra$ generated by the set $\{b,x\}$ is finite. By Theorem 5.3.6 of \cite{Rob}, this subgroup is isomorphic to $Q_{2^n}$ for some $n\ge 4$.
Analyzing the properties of the group $Q_{2^n}$ we see that $b^2=-1$ and $byb^{-1}=y^{-1}$ for all $y\in\la x\ra$. In particular, $bab^{-1}=a^{-1}$.
\end{proof}
Next, we show that the subgroup $H$ has index 2 in $G$. This will follow as soon as we show that for each $x,y\in G\setminus H$ we get $xy\in H$. Observe that for every $a\in H$ we get $xyay^{-1}x^{-1}=xa^{-1}x^{-1}=a$, which means that $xy$ commutes with each element of $H$ and hence $xy\in H$ by the maximality of $H$. Now take any elements $b\in G\setminus H$ and $q\in Q_{2^\infty}\setminus C_{2^\infty}$.
Extend the isomorphism $\bar f:H\to C_{2^\infty}$ to a map $\tilde f:G\to Q_{2^\infty}$ letting $\tilde f(bh)=q\cdot\bar f(h)$ for $h\in H$. Claim~\ref{cl8.2g} implies that $\tilde f$ is a well-defined isomorphism between $G=H\cup bH$ and $Q_{2^\infty}$.
\end{proof}
\begin{theorem}\label{t8.2} For each maximal 2-cogroup $K\in\wht{\mathcal K}$ in a group $X$ the characteristic group ${\mathcal H}(K)=\Stab(K)/KK$ is isomorphic either to $C_{2^n}$ or to $Q_{2^n}$ for some $1\le n\le\infty$.
\end{theorem}
\begin{proof} This theorem will follow from Theorem~\ref{BCQ} as soon as we check that ${\mathcal H}(K)$ is a 2-group with a unique element of order 2.
Let $q:\Stab(K)\to {\mathcal H}(K)$ be the quotient homomorphism. Take any element $x\in K$ and consider its image $d=q(x)$. Since $K=xKK$, the image $q(K)=\{d\}$ is a singleton. Taking into account that $x\notin KK$ and $x^2\in KK$, we see that the element $d$ has order 2 in ${\mathcal H}(K)$.
We claim that any other element $a$ of order 2 in ${\mathcal H}(K)$ is equal to $d$.
Assume conversely that some element $a\ne d$ of ${\mathcal H}(K)$ has order 2.
Let $C^\pm$ be the subgroup of ${\mathcal H}(K)$ generated by the elements $a,d$ and $C$ be the cyclic subgroup generated by the product $ad$. We claim that $d\notin C$. Assuming conversely that $d\in C$, we conclude that $d=(ad)^n$ for some $n\in\mathbb Z$. Then $a=add=ad(ad)^n=(ad)^{n+1}\in C$ and consequently $a=d$ (because cyclic groups contain at most one element of order 2). Therefore $d\notin C$. It is clear that $C^\pm=C\cup dC$, which means that the subgroup $C$ has index 2 in $C^\pm$.
Consider the subgroups $H^\pm=q^{-1}(C^\pm)$, $H=q^{-1}(C)$ and observe that the 2-cogroup $H^\pm\setminus H$ is strictly larger than $K$, which contradicts $K\in\wht{\mathcal K}$.
Since $d$ is a unique element of order 2 in ${\mathcal H}(K)$, the cyclic subgroup $D=\{d,d^2\}$ generated by $d$ is normal in ${\mathcal H}(K)$. Consequently, for each non-trivial subgroup $G\subset {\mathcal H}(K)$ the product $D\cdot G=G\cdot D$ is a subgroup in ${\mathcal H}(K)$. Now we see that $G$ must contain $d$. Otherwise, $dG$ would be a 2-cogroup in ${\mathcal H}(K)$ and its preimage $q^{-1}(dG)$ would be a 2-cogroup in $X$ that contains the 2-cogroup $K$ as a proper subset, which is impossible as $K$ is a maximal 2-cogroup in $X$.
Therefore each non-trivial subgroup of ${\mathcal H}(K)$ contains $d$. This implies that each element $x\in {\mathcal H}(K)$ has finite order which is a power of 2, witnessing that ${\mathcal H}(K)$ is a 2-group with a single element of order 2.
\end{proof}
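The simplest instance of this theorem: in the (additively written) group $X=\mathbb Z$ the set $K=2\mathbb Z+1$ of odd numbers is a maximal 2-cogroup with $K+K=2\mathbb Z$ and $\Stab(K)=\mathbb Z$, so its characteristic group ${\mathcal H}(K)=\mathbb Z/2\mathbb Z$ is isomorphic to $C_2=C_{2^1}$.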
\section{Twin-generated topologies on groups}\label{s9}
In this section we study so-called twin-generated topologies on groups.
The information obtained in this section will be used in Section~\ref{s18} for studying the topological structure of maximal subgroups of the minimal ideal of the superextension $\lambda(X)$.
Given a twin subset $A$ of a group $X$ consider the topology $\tau_A$ on $X$ generated by the subbase consisting of the right shifts $Ax$, $x\in X$. In the following proposition by the weight of a topological space we understand the smallest cardinality of a subbase of its topology.
\begin{proposition}\label{p10.1}
\begin{enumerate}
\item[\textup{(1)}] The topology $\tau_A$ turns $X$ into a right-topological group.
\item[\textup{(2)}] If $Ax=xA$ for all $x\in \Fix^-(A)$, then the topology $\tau_A$ is zero-dimensional.
\item[\textup{(3)}] The topology $\tau_A$ is $T_1$ if and only if the intersection $\bigcap_{a\in A}Aa^{-1}$ is a singleton.
\item[\textup{(4)}] The weight of the space $(X,\tau_A)$ does not exceed the index of the subgroup $\Fix(A^{-1})$ in $X$.
\end{enumerate}
\end{proposition}
\begin{proof} 1. It is clear that the topology $\tau_A$ is right-invariant.
2. If $Ax=xA$ for all $x\in \Fix^-(A)$, then the set $X\setminus A$ is open in the topology $\tau_A$ because $X\setminus A=xA=Ax$ for any $x\in\Fix^-(A)$. Consequently, $A$ is an open-and-closed subbasic set. Now we see that the space $(X,\tau_A)$ has a base consisting of open-and-closed subsets, which means that it is zero-dimensional.
3. If the topology $\tau_A$ is $T_1$, then the intersection $\bigcap_{a\in A}Aa^{-1}$ of all open neighborhoods of the neutral element $e$ of $X$ consists of a single point $e$.
Assuming conversely that $\bigcap_{a\in A}Aa^{-1}$ is a singleton $\{e\}$, for any two distinct points $x,y\in X$ we can find a shift $Aa^{-1}$, $a\in A$, that contains the neutral element $e$ but not $yx^{-1}$. Then the shift $Aa^{-1}x$ is an open subset of $(X,\tau_A)$ that contains $x$ but not $y$, witnessing that the space $(X,\tau_A)$ is $T_1$.
4. To estimate the weight of the space $(X,\tau_A)$, choose a subset $S\subset X$ meeting each coset $x\Fix(A^{-1})$, $x\in X$, at a single point (here $\Fix(A^{-1})=\{x\in X:xA^{-1}=A^{-1}\}$). Then the set $S^{-1}$ meets each coset $\Fix(A^{-1})x$, $x\in X$, at a single point. Since $Ax=Az$ if and only if $zx^{-1}\in\Fix(A^{-1})=\{y\in X:Ay=A\}$, the family $\{Ax:x\in S^{-1}\}$ forms a subbase of the topology $\tau_A$ and hence the weight of $(X,\tau_A)$ does not exceed $|X/\Fix(A^{-1})|$.
\end{proof}
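For instance, in the additively written group $X=\mathbb Z$ the set $A=2\mathbb Z$ is twin, since
$$\mathbb Z\setminus A=1+A,\quad\mbox{so}\quad \Fix^-(A)=1+2\mathbb Z\quad\mbox{and}\quad \tau_A=\{\emptyset,\;2\mathbb Z,\;1+2\mathbb Z,\;\mathbb Z\}.$$
In this example the intersection $\bigcap_{a\in A}(A-a)=2\mathbb Z$ is not a singleton, so the topology $\tau_A$ fails to be $T_1$, in accordance with the statement (3), while the weight of $(X,\tau_A)$ equals $2$, which agrees with the upper bound in the statement (4) given by the index of the subgroup $\Fix(-A)=2\mathbb Z$ in $\mathbb Z$.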
\begin{definition} A topology $\tau$ on a group $X$ will be called {\em twin-generated} if $\tau$ is equal to the topology $\tau_A$ generated by some twin subset $A\subset X$, i.e., $\tau$ is generated by the subbase $\{Ax:x\in X\}$.
\end{definition}
Because of Theorem~\ref{t8.2}, we shall be especially interested in
twin-generated topologies on the quasi-cyclic group $C_{2^\infty}$
and the infinite quaternion group $Q_{2^\infty}$.
First we consider some examples.
\begin{example}\label{e9.3}In the circle $\IT=\{z\in\mathbb C:|z|=1\}$ consider the twin subset $C_{\!\curvearrowleft}=\{e^{i\varphi}:0\le\varphi<\pi\}$.
\begin{enumerate}
\item[\textup{(1)}] For each $z\in\IT\setminus C_{2^\infty}$ the twin set $C_{2^\infty}\cap zC_{\!\curvearrowleft}$ generates the Euclidean topology on $C_{2^\infty}$.
\item[\textup{(2)}] For each $z\in C_{2^\infty}$ the twin set $C_{2^\infty}\cap zC_{\!\curvearrowleft}$ generates the Sorgenfrey topology on $C_{2^\infty}$. This topology turns $C_{2^\infty}$ into a paratopological group with discontinuous inversion.
\end{enumerate}
\end{example}
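Let us briefly indicate why these topologies arise. The set $C_{\!\curvearrowleft}$ is twin in $\IT$ because $\IT\setminus C_{\!\curvearrowleft}=\{e^{i\varphi}:\pi\le\varphi<2\pi\}=e^{i\pi}C_{\!\curvearrowleft}$, so $-1\in\Fix^-(C_{\!\curvearrowleft})$. For the twin set $A=C_{2^\infty}\cap zC_{\!\curvearrowleft}$ each subbasic shift $Ax$, $x\in C_{2^\infty}$, is the trace on $C_{2^\infty}$ of the half-open arc $zC_{\!\curvearrowleft}x$ of length $\pi$, and finite intersections of such arcs are half-open arcs whose endpoints lie in the coset $zC_{2^\infty}$ (here we use that $-1\in C_{2^\infty}$). If $z\in C_{2^\infty}$, then these endpoints belong to $C_{2^\infty}$ and the traces of the arcs form a base of the Sorgenfrey topology on $C_{2^\infty}$; if $z\notin C_{2^\infty}$, then the endpoints miss $C_{2^\infty}$ and the traces coincide with the traces of open arcs, which yields the Euclidean topology.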
A similar situation holds for the group $Q_{2^\infty}$. Its closure in the algebra of quaternions $\IH$ coincides with the multiplicative subgroup $\IT\cup\IT\mathbf j$ of $\IH$, where $\mathbf j\in Q_8\setminus \mathbb C$ is one of the non-complex quaternion units.
\begin{example}\label{e9.4}In the group $\IT\cup\IT\mathbf j\subset\IH$ consider the twin subset $Q_{\!\curvearrowleft}=C_{\!\curvearrowleft}\cup C_{\!\curvearrowleft}\,\mathbf j$.
\begin{enumerate}
\item[\textup{(1)}] For each $z\in\IT\setminus C_{2^\infty}$ the twin set $Q_{2^\infty}\cap zQ_{\!\curvearrowleft}$ generates the Euclidean topology on $Q_{2^\infty}$.
\item[\textup{(2)}] For each $z\in C_{2^\infty}$ the twin set $Q_{2^\infty}\cap z Q_{\!\curvearrowleft}$ generates the Sorgenfrey topology on $Q_{2^\infty}$. This topology turns $Q_{2^\infty}$ into a right-topological group with discontinuous inverse and discontinuous left shifts $l_x:Q_{2^\infty}\to Q_{2^\infty}$ for $x\in Q_{2^\infty}\setminus C_{2^\infty}$.
\end{enumerate}
\end{example}
In the following theorem by $\tau_E$ we denote the Euclidean topology on $C_{2^\infty}$.
\begin{theorem}\label{t9.5} Each metrizable right-invariant topology $\tau\supset\tau_E$ on the group $C_{2^\infty}$ (or $Q_{2^\infty}$) is twin-generated.
\end{theorem}
\begin{proof} First we consider the case of the group $C_{2^\infty}$. Let $E_0=C_{2^\infty}\cap \{e^{i\varphi}:-\pi/3<\varphi<2\pi/3\}$ be the twin subset generating the Euclidean topology $\tau_E$ on $C_{2^\infty}$ and $E_n=C_{2^\infty}\cap\{e^{i\varphi}:|\varphi|<3^{-n-1}\pi\}$ for $n\ge 1$.
For every $n\in\mathbb N$ let $\varphi_n=\sum_{k=1}^n\pi/4^k$ and observe that
$\varphi_\infty=\sum_{k=1}^\infty\pi/4^k=\pi/3$.
Let $\tau\supset\tau_E$ be any metrizable right-invariant topology on $C_{2^\infty}$. The metrizable space $(C_{2^\infty},\tau)$ is countable and hence zero-dimensional. Since $\tau\supset\tau_E$, there exists a neighborhood base $\{U_n\}_{n=1}^\infty\subset\tau$ at the unit 1 such that each set $U_n$ is closed and open in $\tau$ and $U_n\subset E_n$ for all $n\in\mathbb N$.
The interested reader can check that the twin subset
$$A=\Big(E_0\setminus\bigcup_{n=1}^\infty e^{i\varphi_n}E_n\Big)\cup \bigcup_{n=1}^\infty e^{i\varphi_n}U_n\cup \bigcup_{n=1}^\infty e^{i(\pi+\varphi_n)}(E_n\setminus U_n)$$generates the topology $\tau$.
\smallskip
Next, assume that $\tau\supset \tau_E$ is a metrizable right-invariant topology on the group $Q_{2^\infty}$. This group can be written as $Q_{2^\infty}=C_{2^\infty}\cup C_{2^\infty}\mathbf j$, where $\mathbf j\in Q_8\setminus\mathbb C$ is a non-complex quaternion unit. Since $C_{2^\infty}\in\tau_E\subset\tau$, the subgroup $C_{2^\infty}$ is open in $Q_{2^\infty}$. By the preceding case, the topology $\tau\cap \mathsf P(C_{2^\infty})$ on the group $C_{2^\infty}$ is generated by a twin set $A\subset C_{2^\infty}\setminus\{e^{i\varphi}:\varphi\in(\frac{2\pi}3,\pi)\cup(\frac{4\pi}3,\frac{5\pi}3)\}$. A simple geometric argument shows that the topology $\tau$ is generated by the twin subset $A\cup A\mathbf j$ of $Q_{2^\infty}$.
\end{proof}
\begin{problem} Are all metrizable right-invariant topologies on
$C_{2^\infty}$ and $Q_{2^\infty}$ twin-generated?
\end{problem}
\section{The characteristic group ${\mathcal H}(A)$ of a twin subset $A$}
In this section, given a twin subset $A\in\Tau$ of a group $X$ we introduce a twin-generated topology on the characteristic group ${\mathcal H}(K)$ of the 2-cogroup $K=\Fix^-(A)$.
Consider the intersection $B=A\cap \Stab(K)$ (which satisfies $B=B\cdot KK$) and the image $A'=q_A(B)$ of the set $B$ under the quotient homomorphism $q_A:\Stab(K)\to {\mathcal H}(K)=\Stab(K)/KK$. We claim that $A'$ is a twin subset of ${\mathcal H}(K)$.
Indeed, for every $x\in\Fix^-(A)=K\subset\Stab(K)$ we get
$X\setminus A=xA$ and consequently, $\Stab(K)\setminus B=xB$ and ${\mathcal H}(K)\setminus A'=zA'$ where $z=q_A(x)$.
Now it is legal to endow the group ${\mathcal H}(K)$ with the topology $\tau_{A'}$ generated by the twin subset $A'$. This topology is generated by the subbase $\{A'x:x\in {\mathcal H}(K)\}$. By Proposition~\ref{p10.1} the topology $\tau_{A'}$ turns the characteristic group ${\mathcal H}(K)$ into a right-topological group, which will be called the {\em characteristic group} of $A$ and will be denoted by ${\mathcal H}(A)$. By Proposition~\ref{p10.1}, the characteristic group ${\mathcal H}(A)$ is a $T_1$-space and its weight does not exceed the cardinality of ${\mathcal H}(A)$.
The reader should be conscious of the fact that for two twin subsets $A,B\in\Tau$ with $\Fix^-(A)=\Fix^-(B)$ the characteristic groups ${\mathcal H}(A)$ and ${\mathcal H}(B)$ are algebraically isomorphic but topologically they can be distinct, see Examples~\ref{e9.3} and \ref{e9.4}.
\section{Characterizing functions that belong to $\Enl^\mathcal I(\mathsf F)$}
In this section for a twinic ideal $\mathcal I$ on a group $X$ and a left-invariant subfamily $\mathsf F\subset\wht{\Tau}$ we characterize functions $f:\mathsf F\to\mathsf P(X)$ that belong to the space $\Enl^\mathcal I(\mathsf F)$. We recall that $\Enl^\mathcal I(\mathsf F)$ is the projection of $\Enl^\mathcal I(\mathsf P(X))$ onto the face $\mathsf P(X)^{\mathsf F}$.
\begin{theorem}\label{t11.1} For a left-invariant twinic ideal $\mathcal I$ on a group $X$ and a left-invariant subfamily $\mathsf F\subset\Tau$ a function $\varphi:\mathsf F\to\mathsf P(X)$ belongs to the space $\Enl^\mathcal I(\mathsf F)$ if and only if $\varphi$ is equivariant, $\mathcal I$-saturated, and $\Fix^-(A)\subset \Fix^-(\varphi(A))$ for all $A\in\mathsf F$.
\end{theorem}
\begin{proof} To prove the ``only if'' part, take any function $\varphi\in\Enl^\mathcal I(\mathsf F)$ and find a function $\psi\in\Enl^\mathcal I(\mathsf P(X))$ such that $\varphi=\psi|\mathsf F$. By Theorem~\ref{t4.8}, the function $\psi$ is equivariant, monotone, symmetric, and $\mathcal I$-saturated. Consequently, its restriction $\varphi=\psi|\mathsf F$ is equivariant and $\mathcal I$-saturated. Now fix any subset $A\in\mathsf F$ and take any point $x\in\Fix^-(A)$. The left-invariance of $\mathsf F$ guarantees that $X\setminus A=xA\in\mathsf F$, which means that the family $\mathsf F$ is symmetric.
Applying the equivariant symmetric function $\psi$ to the equality $xA= X\setminus A$, we get
$$x\,\varphi(A)=x\,\psi(A)=\psi(xA)=\psi(X\setminus A)=X\setminus\psi(A)=X\setminus\varphi(A)$$ and thus $x\in\Fix^-(\varphi(A))$ and $\Fix^-(A)\subset\Fix^-(\varphi(A))$.
\smallskip
To prove the ``if'' part, fix any equivariant $\mathcal I$-saturated function $\varphi:\mathsf F\to\mathsf P(X)$ such that $\Fix^-(A)\subset\Fix^-(\varphi(A))$ for all $A\in\mathsf F$.
In order to apply Theorem~\ref{t4.8}, we need to extend the function $\varphi$ to some symmetric $\mathcal I$-saturated family. This can be done as follows.
Consider the $\mathcal I$-saturization
$\bar{\bar{\mathsf F}}^\mathcal I=\bigcup_{A\in\mathsf F}\bar{\bar A}^\mathcal I$ of $\mathsf F$. Next, extend the function $\varphi$ to the function $\bar\varphi:\bar{\bar{\mathsf F}}^\mathcal I\to \mathsf P(X)$ assigning to each set $B\in\bar{\bar{\mathsf F}}^\mathcal I$ the set $\varphi(A)$ where $A\in\mathsf F\cap\bar{\bar B}^\mathcal I$. Since $\varphi$ is $\mathcal I$-saturated, the extension $\bar\varphi$ so defined is well-defined and $\mathcal I$-saturated. The equivariance of $\varphi$ implies the equivariance of its extension $\bar\varphi$.
Let us check that the function $\bar\varphi:\bar{\bar{\mathsf F}}^\mathcal I\to\mathsf P(X)$ is symmetric and monotone.
To see that $\bar\varphi$ is symmetric, take any set $B\in\bar{\bar{\mathsf F}}^\mathcal I$ and find a set $A\in\mathsf F\cap\bar{\bar B}^\mathcal I$. Fix any point $x\in \Fix^-(A)$. By our hypothesis $x\in\Fix^-(A)\subset\Fix^-(\varphi(A))$. It follows from $A=_\mathcal I B$ that $X\setminus A=_\mathcal I X\setminus B$ and hence
$$\bar\varphi(X\setminus B)=\varphi(X\setminus A)=\varphi(xA)=x\varphi(A)=X\setminus \varphi(A)=X\setminus\bar\varphi(B),$$
which means that the function $\bar\varphi$ is symmetric.
The monotonicity of $\bar\varphi$ will follow as soon as we check that $\varphi(A)=\varphi(B)$ for any sets $A,B\in\mathsf F$ with $A\subset_\mathcal I B$.
Pick points $a\in\Fix^-(A)$, $b\in\Fix^-(B)$. Since the ideal $\mathcal I$ is twinic, the chain of $\mathcal I$-inclusions $bB=X\setminus B\subset_\mathcal I X\setminus A=aA\subset_\mathcal I aB$ implies the chain of $\mathcal I$-equalities $bB=_\mathcal I X\setminus B=_\mathcal I X\setminus A=aA=_\mathcal I aB$, which yields $A=_\mathcal I B$ and $\varphi(A)=\varphi(B)$ as $\varphi$ is $\mathcal I$-saturated.
Therefore $\bar\varphi:\bar{\bar{\mathsf F}}^\mathcal I\to\mathsf P(X)$ is an equivariant symmetric monotone $\mathcal I$-saturated function defined on an $\mathcal I$-saturated left-invariant symmetric family $\bar{\bar{\mathsf F}}^\mathcal I$. By Theorem~\ref{t4.8}, $\bar\varphi$ belongs to $\Enl^\mathcal I(\bar{\bar{\mathsf F}}^\mathcal I)$ and then its restriction $\varphi=\bar\varphi|\mathsf F$ belongs to $\Enl^\mathcal I(\mathsf F)$.
\end{proof}
Let us recall that $\wht{\Tau}=\{A\subset X:\Fix^-(A)\in\wht{\mathcal K}\}$.
\begin{corollary} For a left-invariant twinic ideal $\mathcal I$ on a group $X$ the space $\Enl^\mathcal I(\wht{\Tau})$ consists of all equivariant $\mathcal I$-saturated functions $\varphi:\wht{\Tau}\to\wht{\Tau}$ such that $\Fix^-(\varphi(A))=\Fix^-(A)$ for all $A\in\wht{\Tau}$.
\end{corollary}
A similar characterization holds for functions that belong to the space $\Enl^\mathcal I(\Tau_K)$ for $K\in\wht{\mathcal K}$ (let us observe that Theorem~\ref{t4.8} is not applicable to the family $\Tau_K$ because it is not left-invariant). A function $\varphi:\Tau_K\to\Tau_K$ is {\em $\Stab(K)$-equivariant} if $\varphi(xA)=x\varphi(A)$ for all $A\in\Tau_K$ and $x\in \Stab(K)$.
\begin{proposition}\label{p11.3} For any maximal 2-cogroup $K\subset X$ and a left-invariant twinic ideal $\mathcal I$ on a group $X$ a function $\varphi:\Tau_K\to\Tau_K$ belongs to the space $\Enl^\mathcal I(\Tau_K)$ if and only if $\varphi$ is $\Stab(K)$-equivariant and $\mathcal I$-saturated.
\end{proposition}
\begin{proof} The ``only if'' part follows from Theorem~\ref{t4.8}.
To prove the ``if'' part, assume that a function $\varphi:\Tau_K\to\Tau_K$ is $\Stab(K)$-equivariant and $\mathcal I$-saturated. For any $A\in\Tau_K$ and $x\in K=\Fix^-(A)=\Fix^-(\varphi(A))$ we get $\varphi(X\setminus A)=\varphi(xA)=x\varphi(A)=X\setminus \varphi(A)$, which means that the function $\varphi$ is symmetric.
Now consider the
families $$\mathcal L_\varphi=\{x^{-1}A:A\in\Tau_K,\;x\in \varphi(A)\}\;\;\mbox{and}\;\;
\bar{\bar \mathcal L}^\mathcal I_\varphi=\bigcup_{A\in\mathcal L_\varphi}\bar{\bar A}^\mathcal I.$$
We claim that the family $\bar{\bar\mathcal L}_\varphi^\mathcal I$ is linked. To derive a contradiction, assume that there exist two sets $A,B\in\Tau_K$ and two points $x\in
\varphi(A)$ and $y\in \varphi(B)$ such that $x^{-1}A\cap y^{-1}B\in\mathcal I$. Then
$yx^{-1}A\subset_{\mathcal I} X\setminus B$. Let us show that the point $c=yx^{-1}$ belongs to the subgroup $\Stab(K)$ of $X$. Given any point $z\in K$, we need to prove that $c^{-1}zc\in K$. Taking into account that $z\in K=\Fix^-(B)=\Fix^-(A)$, we see that $cA\subset_\mathcal I X\setminus B$ implies that $$cA\subset_\mathcal I X\setminus B=zB\subset_\mathcal I zc(X\setminus A)=zczA.$$ Since the ideal $\mathcal I$ is twinic, we get $cA=_\mathcal I X\setminus B=_\mathcal I zczA$, which implies $c^{-1}zcz\in\I\mbox{-}\Fix(A)$. The maximality of the 2-cogroup $K=\Fix^-(A)\subset \I\mbox{-}\Fix^-(A)$ guarantees that $\I\mbox{-}\Fix^-(A)=K$ and $\I\mbox{-}\Fix(A)=\I\mbox{-}\Fix^-(A)\cdot\I\mbox{-}\Fix^-(A)=KK$. Therefore $c^{-1}zcz\in KK$ and $c^{-1}zc\in KKz^{-1}=K$. Now we see that $yx^{-1}=c\in\Stab(K)$. So it is legal to apply the $\Stab(K)$-equivariant $\mathcal I$-saturated function $\varphi$ to the $\mathcal I$-equality $yx^{-1}A=_\mathcal I X\setminus B$ and obtain $yx^{-1}\varphi(A)=\varphi(X\setminus B)=X\setminus\varphi(B)$.
Then $x^{-1}\varphi(A)\subset X\setminus y^{-1}\varphi(B)$, which is
not possible because the neutral element $e$ of the group $X$
belongs to $x^{-1}\varphi(A)\cap y^{-1}\varphi(B)$.
Further we continue as in the proof of Theorem~\ref{t4.8}.
\end{proof}
\section{The ${\mathcal H}(K)$-act $\Tau_K$ of a maximal 2-cogroup $K$}
In this section, given a maximal 2-cogroup $K$ in a group $X$ we study the structure of the subspace
$$\Tau_K=\{A\in\mathsf P(X):\Fix^-(A)=K\}\subset\mathsf P(X)$$of the compact Hausdorff space $\mathsf P(X)$. The latter space is naturally homeomorphic to the Cantor discontinuum $2^X$ where the ordinal $2=\{0,1\}$ is endowed with the discrete topology.
\begin{proposition}\label{p12.1} For any 2-cogroup $K\subset X$ the subspace $\Tau_K$ of\/ $\mathsf P(X)$ is homeomorphic to the Cantor discontinuum $2^{X/K^\pm}$ where $X/K^\pm=\{K^\pm x:x\in X\}$.
\end{proposition}
\begin{proof} Choose any subset $S\subset X$ that meets each coset $K^\pm x$, $x\in X$, at a single point, and consider the bijective function
$\Psi:\mathsf P(S)\to \Tau_K$ assigning to each subset $A\subset S$ the twin set $T_A=KKA\cup K(S\setminus A)$. Let us show that the function $\Psi$ is continuous. The subbase of the topology of $\Tau_K$ consists of the sets $\la x\ra^+=\{B\in \Tau_K:x\in B\}$ and $\la x\ra^-=\{B\in\Tau_K:x\notin B\}$ where $x\in X$. Observe that for every
$z\in K$ we get $\la x\ra^-=\{B\in\Tau_K:x\in X\setminus B= zB\}=\la z^{-1}x\ra^+,$ which means that the sets $\la x\ra^+$, $x\in X$, form a subbase of the topology of $\Tau_K$.
Now the continuity of the map $\Psi$ will follow as soon as we check that for every $x\in X$ the set $\Psi^{-1}(\la x\ra^+)=\{A\in\mathsf P(S):x\in T_A\}$ is open in $\mathsf P(S)$. Fix any subset $A\in \Psi^{-1}(\la x\ra^+)$ and let $s$ be the unique point of the intersection $S\cap K^\pm x$.
Consider the open neighborhood $O(A)=\{A'\in\mathsf P(S):A'\cap\{s\}=A\cap\{s\}\}$ of $A$ in the space $\mathsf P(S)$. We claim that $O(A)\subset\Psi^{-1}(\la x\ra^+)$. Fix any $A'\in O(A)$ and consider two cases:
(i) If $s\in A$, then $s\in A'$ and $x\in T_A\cap K^\pm s=KKs\subset T_{A'}$.
(ii) If $s\in S\setminus A$, then $s\in S\setminus A'$ and $x\in T_A\cap K^\pm s=Ks\subset K(S\setminus A')\subset T_{A'}$.
In both cases $\Psi(A')=T_{A'}\in\la x\ra^+$.
Now we see that $\Psi:\mathsf P(S)\to \Tau_K$, being a continuous bijective map defined on the compact Hausdorff space $\mathsf P(S)$, is a homeomorphism. It remains to observe that $\mathsf P(S)$ is homeomorphic to $2^{X/K^\pm}$.
\end{proof}
Let us observe that in general the subfamily $\Tau_K\subset\mathsf P(X)$ is not left-invariant. Indeed, for any $A\in\Tau_K$ and $x\in X$ the shift $xA$ belongs to $\Tau_K$ if and only if $K=\Fix^-(xA)=x\Fix^-(A)x^{-1}=xKx^{-1}$ if and only if $x\in\Stab(K)$. Thus the family $\Tau_K$ can be considered as an act endowed with the left action of the group $\Stab(K)$.
For any twin set $A\in\Tau_K$ its stabilizer $\Fix(A)=\{x\in X:xA=A\}$ is equal to $\Fix^-(A)\cdot\Fix^-(A)=KK$ and hence is a normal subgroup of $\Stab(K)$. This implies that the characteristic group ${\mathcal H}(K)=\Stab(K)/KK$ acts freely on the space $\Tau_K$. Therefore, we can (and will) consider the space $\Tau_K$ as a free ${\mathcal H}(K)$-act. For each set $A\in\Tau_K$ by
$$\lfloor A\rfloor=[A]\cap \Tau_K=\{xA:x\in \Stab(K)\}=\{hA:h\in {\mathcal H}(K)\}$$ we denote the orbit of $A$ in $\Tau_K$ and by $[\Tau_K]=\{\lfloor A\rfloor:A\in\Tau_K\}$ the orbit space of the ${\mathcal H}(K)$-act $\Tau_K$, endowed with the quotient topology. By
Theorem~\ref{t2.1}, the ${\mathcal H}(K)$-act $\Tau_K$ is isomorphic to $[\Tau_K]\times {\mathcal H}(K)$. In some cases the isomorphism between the ${\mathcal H}(K)$-acts $\Tau_K$ and $[\Tau_K]\times {\mathcal H}(K)$ is topological.
\begin{proposition}\label{p12.2} The orbit space $[\Tau_K]$ is a $T_1$-space if and only if the characteristic group ${\mathcal H}(K)$ is finite. In this case $[\Tau_K]$ is a compact Hausdorff space and the orbit map $q:\Tau_K\to[\Tau_K]$ has a continuous section $s:[\Tau_K]\to\Tau_K$, which implies that $\Tau_K$ is homeomorphic to the product $[\Tau_K]\times {\mathcal H}(K)$ where the (finite) group ${\mathcal H}(K)$ is endowed with the discrete topology.
\end{proposition}
\begin{proof} By Theorem~\ref{t8.2}, the characteristic group ${\mathcal H}(K)$ is at most countable. Since $\Tau_K$ is a free ${\mathcal H}(K)$-act, each orbit $\lfloor A\rfloor$, $A\in\Tau_K$, has cardinality $|\lfloor A\rfloor|=|{\mathcal H}(K)|$ and hence is at most countable. Note that the orbit $\lfloor A\rfloor$ admits a transitive action of the group ${\mathcal H}(K)$ and hence is topologically homogeneous.
If $[\Tau_K]$ is a $T_1$-space, then each orbit $\lfloor A\rfloor$, $A\in\Tau_K$, is closed in the compact Hausdorff space $\Tau_K$. Now the Baire theorem implies that $\lfloor A\rfloor$ has an isolated point and, being topologically homogeneous, is discrete. Taking into account that $\lfloor A\rfloor$ is compact and discrete, we conclude that it is finite. Consequently $|{\mathcal H}(K)|=|\lfloor A\rfloor|<\aleph_0$.
Now assume that the characteristic group ${\mathcal H}(K)$ is finite. Let $q:\Tau_K\to[\Tau_K]$ denote the orbit map. To show that the orbit space $[\Tau_K]$ is Hausdorff, pick two distinct orbits $\lfloor A\rfloor$ and $\lfloor B\rfloor$. Since ${\mathcal H}(K)$ is finite and $xA\ne yB$ for any $x,y\in {\mathcal H}(K)$, we can find two neighborhoods $O(A)$ and $O(B)$ of $A,B$ in $\Tau_K$ such that $xO(A)\cap yO(B)=\emptyset$ for all $x,y\in{\mathcal H}(K)$. Then $O(\lfloor A\rfloor)=\bigcup_{x\in {\mathcal H}(K)}xO(A)$ and $O(\lfloor B\rfloor)=\bigcup_{y\in {\mathcal H}(K)}yO(B)$ are two disjoint open ${\mathcal H}(K)$-invariant subsets in $\Tau_K$. Their images $q(O(\lfloor A\rfloor))$ and $q(O(\lfloor B\rfloor))$ are disjoint open neighborhoods of $\lfloor A\rfloor$, $\lfloor B\rfloor$ in $[\Tau_K]$, which means that the orbit space $[\Tau_K]$ is Hausdorff.
This space is compact and zero-dimensional as the image of the compact zero-dimensional space $\Tau_K$ under the open continuous map $q:\Tau_K\to[\Tau_K]$.
Using the zero-dimensionality of $[\Tau_K]$ and the finiteness of ${\mathcal H}(K)$ it is easy to construct a continuous section $s:[\Tau_K]\to\Tau_K$ of the map $q$ and prove that $\Tau_K$ is homeomorphic to ${\mathcal H}(K)\times[\Tau_K]$.
\end{proof}
Let us recall that a subfamily $\mathsf F\subset\mathsf P(X)$ is $\lambda$-invariant if $f(\mathsf F)\subset\mathsf F$ for any equivariant symmetric monotone function $f:\mathsf P(X)\to\mathsf P(X)$. For a $\lambda$-invariant subfamily $\mathsf F\subset \mathsf P(X)$ the projection
$$\Enl(\mathsf F)=\{f|\mathsf F:f\in\Enl(\mathsf P(X))\}$$ is a subsemigroup of the semigroup $\mathsf F^{\mathsf F}$ of all self-mappings of $\mathsf F$.
\begin{proposition}\label{p12.3} For any maximal 2-cogroup $K\subset X$ the family $\Tau_K$ is $\lambda$-invariant and hence $\Enl(\Tau_K)$ is a compact right-topological semigroup.
\end{proposition}
\begin{proof} Given any function $f\in\Enl(\mathsf P(X))$ and a set $A\in\Tau_K$ we need to show that $f(A)\in\Tau_K$. By Corollary~\ref{c4.4}, the function $f$ is equivariant and symmetric. Then for any $x\in K=\Fix^-(A)$ we get $xA=X\setminus A$ and hence $xf(A)=f(xA)=f(X\setminus A)=X\setminus f(A)$, which means that $x\in\Fix^-(f(A))$ and $K\subset\Fix^-(f(A))$. The maximality of the 2-cogroup $K$ guarantees that $K=\Fix^-(f(A))$ and thus $f(A)\in\Tau_K$. So, the family $\Tau_K$ is $\lambda$-invariant.
\end{proof}
\section{$\mathcal I$-incomparable and $\mathcal I$-independent families}\label{s12n}
Let $\mathcal I$ be a left-invariant ideal on a group $X$. A family $\mathsf F\subset\mathsf P(X)$ is called
\begin{itemize}
\item {\em $\mathcal I$-incomparable} if $\forall A,B\in\mathsf F$ \ $(A\subset_\mathcal I B\;\Rightarrow\;A=_\mathcal I B)$;
\item {\em $\mathcal I$-independent} if $\forall A,B\in\mathsf F$ \ $(A=_\mathcal I B\;\Rightarrow\;A= B)$.
\end{itemize}
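To compare these notions, observe that for the smallest left-invariant ideal $\mathcal I=\{\emptyset\}$ the relation $=_\mathcal I$ coincides with the usual equality of sets, so every family $\mathsf F\subset\mathsf P(X)$ is $\mathcal I$-independent, whereas the $\mathcal I$-incomparability of $\mathsf F$ means that $\mathsf F$ contains no pair of sets $A,B$ with $A\subsetneq B$.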
\begin{proposition}\label{p13.1} A left-invariant ideal $\mathcal I$ on a group $X$ is twinic if and only if the family $\mathsf{pT}^\mathcal I$ of $\mathcal I$-pretwin sets is $\mathcal I$-incomparable.
\end{proposition}
\begin{proof} First assume that the family $\mathsf{pT}^\mathcal I$ is $\mathcal I$-incomparable.
To show that the ideal $\mathcal I$ is twinic, take any subset $A\subset X$ with $xA\subset_\mathcal I X\setminus A\subset_\mathcal I yA$ for some $x,y\in X$. Then $A\in\mathsf{pT}^\mathcal I$ and also $xA,yA\in\mathsf{pT}^\mathcal I$. Since $xA\subset_\mathcal I yA$, the $\mathcal I$-incomparability of the family $\mathsf{pT}^\mathcal I$ implies that $xA=_\mathcal I yA$ and then $xA=_\mathcal I X\setminus A=_\mathcal I yA$, which means that the ideal $\mathcal I$ is twinic.
Now assume conversely that $\mathcal I$ is twinic and take two $\mathcal I$-pretwin sets $A\subset_\mathcal I B$. Since the sets $A,B$ are $\mathcal I$-pretwin, there are elements
$x,y\in X$ such that $xB\subset_\mathcal I X\setminus B$ and $X\setminus
A\subset_\mathcal I yA$. Taking into account that
$$xB\subset_\mathcal I X\setminus B\subset_\mathcal I X\setminus A\subset_\mathcal I yA\subset_\mathcal I yB,$$
and $\mathcal I$ is twinic, we conclude that $X\setminus B=_\mathcal I X\setminus
A$ and hence $A=_\mathcal I B$.
\end{proof}
\begin{corollary} For each twinic left-invariant ideal $\mathcal I$ on a group $X$ the family ${\Tau}$ of twin sets is $\mathcal I$-incomparable.
\end{corollary}
\begin{proposition}\label{p13.3} For a left-invariant ideal $\mathcal I$ on a group $X$ the family $\wht{\Tau}$ is $\mathcal I$-independent if and only if $\mathcal I\cap\wht{\mathcal K}=\emptyset$.
\end{proposition}
\begin{proof} To prove the ``only if'' part, assume that the ideal $\mathcal I$ contains some maximal 2-cogroup $K\in\wht{\mathcal K}$. Since $\mathcal I$ is left-invariant, for each $x\in K$, $KK=xK\in\mathcal I$ and hence $K^\pm=K\cup KK\in\mathcal I$.
Choose a subset $S\subset X$ that contains the neutral element $e$ of the group $X$ and meets each coset $K^\pm x$, $x\in X$, at a single point. Then $A=KKS$ and $B=KK(S\setminus\{e\})\cup K$ are two distinct twin sets with $K\subset\Fix^-(A)\cap\Fix^-(B)$. By the maximality of $K$, $K=\Fix^-(A)=\Fix^-(B)$ and hence $A,B\in\wht{\Tau}$. Since the symmetric difference $A\triangle B=KK\cup K=K^\pm\in\mathcal I$, we get $A=_\mathcal I B$, which means that the family $\wht{\Tau}$ fails to be $\mathcal I$-independent.
\smallskip
To prove the ``if'' part, assume that the family $\wht{\Tau}$ is not $\mathcal I$-independent and find two subsets $A,B\in\wht{\Tau}$ such that $A\ne B$ but $A=_\mathcal I B$. The 2-cogroup $\Fix^-(A)$ of $A$ is maximal and hence coincides with the 2-cogroup $\I\mbox{-}\Fix^-(A)\supset\Fix^-(A)$. By the same reason, $\Fix^-(B)=\I\mbox{-}\Fix^-(B)$.
The $\mathcal I$-equality $A=_\mathcal I B$ implies $\I\mbox{-}\Fix^-(A)=\I\mbox{-}\Fix^-(B)$. Denote the maximal 2-cogroup $\Fix^-(A)=\I\mbox{-}\Fix^-(A)=\I\mbox{-}\Fix^-(B)=\Fix^-(B)$ by $K$.
Then $\Fix(A)=\Fix^-(A)\cdot\Fix^-(A)=KK=\Fix(B)$ and hence $A=KKA$ and $B=KKB$. Now we see that the symmetric difference $A\triangle B=KKA\triangle KKB$ contains a subset $KKx$ for some $x\in X$. Then for any $y\in K$, we get $Kyx=KKx\subset A\triangle B\in\mathcal I$ and hence $Kyx\in\mathcal I$. Finally observe that the set $K'=x^{-1}y^{-1}Kyx$ is a maximal 2-cogroup and by the left invariance of the ideal $\mathcal I$, $K'=x^{-1}y^{-1}Kyx\in\mathcal I$. So, $\wht{\mathcal K}\cap\mathcal I\ne\emptyset$.
\end{proof}
\begin{proposition}\label{p13.4} A subfamily $\mathsf F\subset\mathsf P(X)$ is $\mathcal I$-independent for any left-invariant ideal $\mathcal I$ on $X$ if for each set $A\in \mathsf F$ the subgroup $\Fix(A)$ has finite index in $X$.
\end{proposition}
\begin{proof} Let $A,B\in\mathsf F$ be two subsets with $A=_\mathcal I B$ for some left-invariant ideal $\mathcal I$. Since the subgroups $\Fix(A)$ and $\Fix(B)$ have finite indices in $X$, their intersection $\Fix(A)\cap\Fix(B)$ also has finite index in $X$ and contains a normal subgroup $H\subset X$ of finite index in $X$, see \cite[I.Ex.9(a)]{Lang}.
Then $X=FH$ for some finite subset $F\subset X$.
Assuming that $A\ne B$, we can find a point $x\in A\triangle B$ and conclude that $xH=Hx\subset HA\triangle HB=A\triangle B\in\mathcal I$ and $X=FH\in\mathcal I$ by the left-invariance of the ideal $\mathcal I$. This contradiction completes the proof.
\end{proof}
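To illustrate Proposition~\ref{p13.4}, consider the additively written group $X=\mathbb Z$ and the family $\mathsf F=\{2\mathbb Z,\,1+2\mathbb Z\}$. The subgroups $\Fix(2\mathbb Z)=\Fix(1+2\mathbb Z)=2\mathbb Z$ have index $2$ in $\mathbb Z$, so the family $\mathsf F$ is $\mathcal I$-independent for every left-invariant ideal $\mathcal I$ on $\mathbb Z$. Indeed, the $\mathcal I$-equality $2\mathbb Z=_\mathcal I 1+2\mathbb Z$ would imply that $\mathbb Z=2\mathbb Z\,\triangle\,(1+2\mathbb Z)\in\mathcal I$, which is impossible.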
\begin{proposition}\label{p13.5} Each minimal $\wht{\mathcal K}$-covering subfamily $\widetilde{\Tau}\subset\wht{\Tau}$ is $\mathcal I$-independent for any left-invariant ideal $\mathcal I$ on $X$.
\end{proposition}
\begin{proof} Fix any two sets $A,B\in\widetilde{\Tau}$ with $A=_\mathcal I B$. Repeating the argument from the proof of Proposition~\ref{p13.3}, we can prove that $\I\mbox{-}\Fix^-(A)=\Fix^-(A)=\I\mbox{-}\Fix^-(B)=\Fix^-(B)=K$ for some maximal 2-cogroup $K\in\wht{\mathcal K}$. Since the family $\widetilde{\Tau}\ni A,B$ is minimal $\wht{\mathcal K}$-covering, the sets $A,B$ lie in the same orbit and hence $A=xB$ for some $x\in X$. It follows from $B=_\mathcal I A=xB$ that $x\in\I\mbox{-}\Fix(B)=\Fix(B)$ and thus $A=xB=B$.
\end{proof}
\section{The endomorphism monoid $\End(\Tau_K)$ of the ${\mathcal H}(K)$-act $\Tau_K$}
For any maximal 2-cogroup $K$ in a group $X$ the compact right-topological semigroup $\Enl(\Tau_K)$ is a subsemigroup of the endomorphism monoid $\End(\Tau_K)$ of the free ${\mathcal H}(K)$-act $\Tau_K$. The endomorphism monoid $\End(\Tau_K)$ is the space of all (not necessarily continuous) functions $f:\Tau_K\to\Tau_K$ that are equivariant in the sense that $f(xA)=xf(A)$ for all $A\in\Tau_K$ and $x\in \Stab(K)$. It is easy to check that $\End(\Tau_K)$ is a closed subsemigroup of the compact Hausdorff right-topological semigroup $\Tau_K^{\;\Tau_K}$ of all self-maps of the compact Hausdorff space $\Tau_K$. So, $\End(\Tau_K)$ is a compact Hausdorff right-topological semigroup that contains $\Enl(\Tau_K)$ as a closed subsemigroup.
If $\mathcal I$ is a left-invariant ideal on the group $X$, then the left ideal $\Enl^\mathcal I(\Tau_K)$ of $\Enl(\Tau_K)$ lies in the left ideal $\End^\mathcal I(\Tau_K)\subset \End(\Tau_K)$ consisting of all equivariant functions $f:\Tau_K\to\Tau_K$, which are $\mathcal I$-saturated in the sense that $f(A)=f(B)$ for all $A,B\in\Tau_K$ with $A=_\mathcal I B$.
In the following theorem we describe some algebraic and topological properties of the endomorphism monoid $\End(\Tau_K)$.
\begin{theorem}\label{t14.1} Let $K$ be a maximal 2-cogroup in a group $X$.
Then:
\begin{enumerate}
\item[\textup{(1)}] $\End^\mathcal I(\Tau_K)=\Enl^\mathcal I(\Tau_K)\subset\Enl(\Tau_K)\subset\End(\Tau_K)$ for any twinic ideal $\mathcal I$ on $X$;
\item[\textup{(2)}] $\End^\mathcal I(\Tau_K)=\End(\Tau_K)$ for any left-invariant ideal $\mathcal I$ on $X$ such that $\mathcal I\cap\wht\mathcal K=\emptyset$;
\item[\textup{(3)}] the semigroup $\End(\Tau_K)$ is algebraically isomorphic to the wreath product\newline ${\mathcal H}(K)\wr[\Tau_K]^{[\Tau_K]}$;
\item[\textup{(4)}] for each idempotent $f\in\End(\Tau_K)$ the maximal subgroup $\mathsf H_f\subset\End(\Tau_K)$\newline containing $f$ is isomorphic to ${\mathcal H}(K)\wr S_{[f(\Tau_K)]}$;
\item[\textup{(5)}] the minimal ideal $\mathsf K(\End(\Tau_K))=\{f\in\End(\Tau_K):\forall A\in f(\Tau_K),\;f(\Tau_K)\subset\lfloor A\rfloor\}$;
\item[\textup{(6)}] each minimal left ideal of the semigroup $\End(\Tau_K)$ is algebraically isomorphic to ${\mathcal H}(K)\times [\Tau_K]$ where the orbit space $[\Tau_K]$ is endowed with the left zero multiplication;
\item[\textup{(7)}] each maximal subgroup of the minimal ideal $\mathsf K(\End(\Tau_K))$ is algebraically isomorphic to ${\mathcal H}(K)$;
\item[\textup{(8)}] each minimal left ideal of the semigroup $\End(\Tau_K)$ is homeomorphic to $\Tau_K$;
\item[\textup{(9)}] for each minimal idempotent $f\in \mathsf K(\End(\Tau_K))$ the maximal subgroup\newline $\mathsf H_f=f\circ \End(\Tau_K)\circ f$ is topologically isomorphic to the twin-generated group ${\mathcal H}(A)$ where $A\in f(\Tau_K)$.
\end{enumerate}
\end{theorem}
\begin{proof} 1,2. The first statement follows from Proposition~\ref{p11.3} and the second one from Proposition~\ref{p13.3}.
3--7. Since $\Tau_K$ is a free ${\mathcal H}(K)$-act, the (algebraic) statements (3)--(7) follow from Theorem~\ref{t2.1}.
8. Given a minimal idempotent $f\in\End(\Tau_K)$, we need to prove that the minimal left ideal $\mathsf L_f=\End(\Tau_K)\circ f$ is homeomorphic to $\Tau_K\subset\mathsf P(X)$. For this fix any set $B\in f(\Tau_K)$ and observe that $f(\Tau_K)\subset \lfloor B\rfloor$ according to the statement (5). We claim that the map $$\Psi:\mathsf L_f\to\Tau_K,\;\;\Psi:g\mapsto g(B),$$is a homeomorphism. The definition of the topology (of pointwise convergence) on $\End(\Tau_K)$ implies that the map $\Psi$ is continuous.
Next, we show that the map $\Psi$ is bijective. To show that $\Psi$ is injective, fix any two distinct functions $g,h\in\mathsf L_f$ and find a set $A\in\Tau_K$ such that $g(A)\ne h(A)$. Since $f(\Tau_K)\subset \lfloor B\rfloor$, there is $x\in \Stab(K)$ such that $f(A)=xB$. Then $$xg(B)=g(xB)=gf(A)=g(A)\ne h(A)=hf(A)=h(xB)=xh(B)$$and hence $\Psi(g)=g(B)\ne h(B)=\Psi(h)$.
To show that $\Psi$ is surjective, take any subset $C\in\Tau_K$ and choose any equivariant map $\varphi:\lfloor B\rfloor\to\lfloor C\rfloor$ such that $\varphi(B)=C$. Then the function $g=\varphi\circ f$ belongs to $\mathsf L_f$ and has image $\Psi(g)=g(B)=C$ witnessing that the map $\Psi$ is surjective.
Since $\mathsf L_f$ is compact, the bijective continuous map $\Psi:\mathsf L_f\to\Tau_K$ is a homeomorphism. By Proposition~\ref{p12.1}, the space $\Tau_K$ is homeomorphic to the cube $2^{X/K^\pm}$.
\smallskip
9. Given a minimal idempotent $f\in \End(\Tau_K)$ we shall show that the maximal subgroup $\mathsf H_f=f\circ \End(\Tau_K)\circ f$ is topologically isomorphic to the characteristic group ${\mathcal H}(A)$ of any twin set $A\in f(\Tau_K)$.
We recall that ${\mathcal H}(A)$ is the characteristic group ${\mathcal H}(K)$ of the 2-cogroup $K=\Fix^-(A)$, endowed with the topology generated by the twin set $q(A\cap \Stab(K))$ where $q:\Stab(K)\to {\mathcal H}(K)=\Stab(K)/KK$ is the quotient homomorphism.
We define a topological isomorphism $\Theta_A:\mathsf H_f\to {\mathcal H}(A)$ in the following way. Given any function $g\in\mathsf H_f$, observe that $g(A)=fgf(A)\in f(\Tau_K)\subset\lfloor A\rfloor$ since $f$ is a minimal idempotent. So we can find $x\in \Stab(K)$ with $fgf(A)=x^{-1}A$. Now define $\Theta_A(g)$ as the image $q(x)=xKK=KKx$ of $x$ under the quotient homomorphism $q:\Stab(K)\to {\mathcal H}(K)={\mathcal H}(A)$.
It remains to prove that $\Theta_A:\mathsf H_f\to {\mathcal H}(A)$ is a well-defined topological isomorphism of the right-topological groups.
First we check that $\Theta_A$ is well-defined, that is, $\Theta_A(g)=q(x)$ does not depend on the choice of the point $x$. Indeed, for any other point $y\in \Stab(K)$ with $g(A)=y^{-1}A$ we get $x^{-1}A=y^{-1}A$ and thus $yx^{-1}\in\Fix(A)=K\cdot K$ where $K=\Fix^-(A)$. Consequently, $q(x)=KKx=KKy=q(y)$.
Next, we prove that $\Theta_A$ is a group homomorphism. Given two functions $g,h\in \mathsf H_f$, find elements $x_g,x_h\in \Stab(K)$ such that $h(A)=x^{-1}_hA$ and $g(A)=x^{-1}_gA$. It follows that $g\circ h(A)=g(x^{-1}_hA)=x^{-1}_hg(A)=x^{-1}_hx^{-1}_gA=(x_gx_h)^{-1}A$, which implies that $\Theta_A(g\circ h)=x_gx_hKK=\Theta_A(g)\cdot\Theta_A(h)$.
Now, we calculate the kernel of the homomorphism $\Theta_A$. Take any function $g\in \mathsf H_f$ with $\Theta_A(g)=e$, which means that $g(A)=fgf(A)=A$.
Then for every $A'\in\Tau_{K}$ we can find $x\in X$ with $f(A')=xA$ and conclude that $g(A')=fgf(A')=fg(xA)=xfg(A)=xfgf(A)=xA=f(A')$ witnessing that $g=fgf=f$. This means that the homomorphism $\Theta_A$ is one-to-one.
To see that $\Theta_A$ is onto, first observe that each element of the characteristic group ${\mathcal H}(A)$ can be written as $[y]=yKK=KKy\in {\mathcal H}(K)$ for some $y\in\Stab(K)$. Given such an element $[y]\in {\mathcal H}(A)$, consider the equivariant function $s_{[y]}:\lfloor A\rfloor\to\lfloor A\rfloor$, $s_{[y]}:zA\mapsto zy^{-1}A=zy^{-1}KKA$. Let us show that this function is well defined. Indeed, for each point $u\in X$ with $zA=uA$, we get $u^{-1}z\in\Fix(A)$ and hence, $yu^{-1}zy^{-1}\in y\Fix(A)y^{-1}=yKKy^{-1}=KK=\Fix(A)$. Then $yu^{-1}zy^{-1}A=A$ and hence $zy^{-1}A=uy^{-1}A$.
It follows from $s_{[y]}\circ f=f\circ s_{[y]}\circ f$ that the function $s_{[y]}\circ f$ belongs to the maximal group $\mathsf H_f$. Since $s_{[y]}\circ f(A)=s_{[y]}(A)=y^{-1}A$, the image $\Theta_A(s_{[y]}\circ f)=[y]$. So, $\Theta_A(\mathsf H_f)={\mathcal H}(A)$ and $\Theta_A:\mathsf H_f\to {\mathcal H}(A)$ is an algebraic isomorphism.
It remains to prove that this isomorphism is topological. Observe that for every $[y]\in {\mathcal H}(A)$ we get
$s_{[y]}\circ f(A)=s_{[y]}(A)=y^{-1}KKA=y^{-1}A$. Consequently, $x\in s_{[y]}\circ f(A)$ iff $x\in y^{-1}A$ iff $y\in Ax^{-1}$.
To see that the map $\Theta_A:\mathsf H_f\to {\mathcal H}(A)$ is continuous, take any sub-basic open set $$U_x=\{[y]\in {\mathcal H}(A):y\in Ax^{-1}\},\;\; x\in \Stab(K),$$ in ${\mathcal H}(A)$ and observe that $\Theta_A^{-1}(U_x)=\{s_{[y]}\circ f:[y]\in U_x\}=\{s_{[y]}\circ f: y\in Ax^{-1}\}=\{s_{[y]}\circ f:x\in s_{[y]}\circ f(A)\}$ is a sub-basic open set in $\mathsf H_f$. To see that the inverse map $\Theta_A^{-1}:{\mathcal H}(A)\to \mathsf H_f$ is continuous, take any sub-basic open set $V_{x,T}=\{g\in \mathsf H_f:x\in g(T)\}$ where $x\in X$ and $T\in\Tau_K$. It follows that $f(T)=x_TA$ for some $x_T\in X$. Then $$\begin{aligned}\Theta_A(V_{x,T})&
=\{[y]\in {\mathcal H}(A):x\in s_{[y]}\circ f(T)\}=\{[y]\in {\mathcal H}(A):x\in s_{[y]}(x_TA)\}=\\
&=\{[y]\in {\mathcal H}(A):x_T^{-1}x\in s_{[y]}(A)\}=\{[y]\in {\mathcal H}(A):y\in Ax^{-1}x_T\}
\end{aligned}$$
is a sub-basic open set in ${\mathcal H}(A)$.
\end{proof}
In the following proposition we calculate the cardinalities of the objects appearing in Theorem~\ref{t14.1}. We shall say that a cardinal $n\ge 1$ divides a cardinal $m\ge 1$ if there is a cardinal $k$ such that $m=k\times n$. The smallest cardinal $k$ with this property is denoted by $\frac mn$.
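For illustration (a routine side example of ours, using only the definition just given, and not needed in the sequel), the division of infinite cardinals behaves as follows:

```latex
% Division of cardinals, illustrated on \aleph_0 and 2^{\aleph_0}:
% since \aleph_0 = 2\times\aleph_0 = \aleph_0\times\aleph_0, both 2 and
% \aleph_0 divide \aleph_0, and the smallest witnessing cofactors are
\[
\frac{\aleph_0}{2}=\aleph_0,\qquad
\frac{\aleph_0}{\aleph_0}=1,\qquad
\frac{2^{\aleph_0}}{\aleph_0}=2^{\aleph_0},
\]
% the last equality holding because 2^{\aleph_0}=\aleph_0\times 2^{\aleph_0}
% while k\times\aleph_0<2^{\aleph_0} for every cardinal k<2^{\aleph_0}.
```

The third equality is the pattern used below in the computation $\frac{2^{|X/K^\pm|}}{\aleph_0}=2^{|X/K^\pm|}$.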
\begin{proposition}\label{p15.2} If $K\in\wht{\mathcal K}$ is a maximal 2-cogroup in a group $X$, then
\begin{enumerate}
\item[\textup{(1)}] $|\Tau_K|=2^{|X/K^\pm|}$;
\item[\textup{(2)}] $|{\mathcal H}(K)|\in\{2^k:k\in\mathbb N\}\cup\{\aleph_0\}$ and $|{\mathcal H}(K)|$ divides the index $|X/K|$ of $K$ in $X$;
\item[\textup{(3)}] $|[\Tau_K]|=\frac{|\Tau_K|}{|{\mathcal H}(K)|}=\frac{2^{|X/K^\pm|}}{|{\mathcal H}(K)|}$;
\item[\textup{(4)}] $|{\mathcal H}(K)|=|X/K|$ if the 2-cogroup $K$ is normal in $X$.
\end{enumerate}
\end{proposition}
\begin{proof} Choose any subset $S\subset X$ that meets each coset $K^\pm x$, $x\in X$, of the group $K^\pm=K\cup KK$ at a single point. It is clear that $|S|=|X/K^\pm|$.
\smallskip
1. The equality $|\Tau_K|=2^{|X/K^\pm|}$ follows from Proposition~\ref{p12.1}.
\smallskip
2. By Theorem~\ref{t8.2}, $|{\mathcal H}(K)|\in\{2^n:n\in\mathbb N\}\cup\{\aleph_0\}$. Since $\Stab(K)$ is a subgroup of $X$, $|{\mathcal H}(K)|=|\Stab(K)/KK|$ divides $|X/KK|=|X/K|$.
\smallskip
3. Since $\Tau_K$ is a free ${\mathcal H}(K)$-act,
$|[\Tau_K]|=\frac{|\Tau_K|}{|{\mathcal H}(K)|}$. This equality is clear if ${\mathcal H}(K)$ is finite. If ${\mathcal H}(K)$ is infinite, then $|{\mathcal H}(K)|=\aleph_0$ and the index $|X/K^\pm|$ of the group $K^\pm$ in $X$ is infinite. In this case
$|\Tau_K|=2^{|X/K^\pm|}>\aleph_0$ and thus $|[\Tau_K]|=\frac{2^{|X/K^\pm|}}{\aleph_0}=2^{|X/K^\pm|}$.
\smallskip
4. If the 2-cogroup $K$ is normal in $X$, then $\Stab(K)=X$ and ${\mathcal H}(K)=X/KK$. In this case $|{\mathcal H}(K)|=|X/KK|=|X/K|$.
\end{proof}
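To see Proposition~\ref{p15.2} at work on a concrete example (a routine verification of ours, taking for granted that the coset $K=2+4\mathbb Z$ is a maximal 2-cogroup in the additive group $X=\mathbb Z$, with $KK=4\mathbb Z$ and $K^\pm=K\cup KK=2\mathbb Z$):

```latex
% The twin sets A with \Fix^-(A)=K=2+4Z are the 4Z-periodic sets with
% A+2 = Z\setminus A, namely A_n = n+(\{0,1\}+4\mathbb Z) for n=0,1,2,3,
% and the group {\mathcal H}(K)=\mathbb Z/4\mathbb Z shifts them freely and
% transitively, so there is a single orbit.
\[
|X/K^\pm|=2,\qquad |\Tau_K|=2^{|X/K^\pm|}=4,\qquad
|{\mathcal H}(K)|=|\mathbb Z/4\mathbb Z|=4=|X/K|,
\]
\[
|[\Tau_K]|=\frac{|\Tau_K|}{|{\mathcal H}(K)|}=\frac44=1,
\]
% in accordance with items (1)--(4): the 2-cogroup K is normal
% (\mathbb Z is abelian), so \Stab(K)=\mathbb Z and
% {\mathcal H}(K)=\mathbb Z/KK=\mathbb Z/4\mathbb Z.
```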
By Theorem~\ref{t14.1}(6), for any maximal 2-cogroup $K\subset X$ each minimal left ideal of the semigroup $\End(\Tau_K)$ is algebraically isomorphic to ${\mathcal H}(K)\times[\Tau_K]$. It turns out that in some cases this isomorphism is topological. We recall that the orbit space $[\Tau_K]=\Tau_K/{\mathcal H}(K)$ is endowed with the quotient topology. By Proposition~\ref{p12.2}, the orbit space $[\Tau_K]$ is compact and Hausdorff if and only if the characteristic group ${\mathcal H}(K)$ is finite.
Since $\Tau_K$ is a compact Hausdorff space, the Tychonoff power $\Tau_K^{\;\Tau_K}$ is a compact Hausdorff right-topological semigroup (endowed with the operation of composition of functions). This semigroup contains the subsemigroup $C(\Tau_K,\Tau_K)$ consisting of all continuous maps $f:\Tau_K\to\Tau_K$. It is easy to check that the semigroup $C(\Tau_K,\Tau_K)$ is semitopological (which means that the semigroup operation is separately continuous).
We recall that a right-topological semigroup $S$ is called {\em semitopological} if the semigroup operation $S\times S\to S$ is separately continuous. If the semigroup operation is continuous, then $S$ is called a {\em topological semigroup}.
\begin{theorem}\label{t14.3} Let $K$ be a maximal 2-cogroup in a group $X$. For a minimal idempotent $f$ in the semigroup $\End(\Tau_K)$ and its minimal left ideal $\mathsf L_f=\End(\Tau_K)\circ f$ the following conditions are equivalent:
\begin{enumerate}
\item[\textup{(1)}] $\mathsf L_f$ is a topological semigroup;
\item[\textup{(2)}] $\mathsf L_f$ is topologically isomorphic to the topological semigroup $[\Tau_K]\times {\mathcal H}(K)$ where the orbit space $[\Tau_K]$ is endowed with the left zero multiplication;
\item[\textup{(3)}] $\mathsf L_f$ is a semitopological semigroup;
\item[\textup{(4)}] the left shift $l_f:\mathsf L_f\to\mathsf L_f$, $l_f:g\mapsto f\circ g$, is continuous;
\item[\textup{(5)}] $f$ is continuous;
\item[\textup{(6)}] $\mathsf L_f\subset C(\Tau_K,\Tau_K)$;
\item[\textup{(7)}] ${\mathcal H}(K)$ is finite and the idempotent band $\mathsf E(\mathsf L_f)$ of \ $\mathsf L_f$ is compact.
\end{enumerate}
\end{theorem}
\begin{proof} The implications $(2)\Rightarrow(1)\Rightarrow(3)\Rightarrow(4)$ are trivial.
\smallskip
$(4)\Rightarrow(5)$ Assume that the left shift $l_f:\mathsf L_f\to \mathsf L_f$ is continuous. We need to check that $f$ is continuous. First we show that for any set $B\in f(\Tau_K)$ the preimage $\mathcal Z=f^{-1}(B)$ is closed in $\Tau_K$. Assume conversely that $f^{-1}(B)$ is not closed and find a point $A_0\in\overline{\mathcal Z}\setminus \mathcal Z$. It follows that the set $B_0=f(A_0)$ is not equal to $B$. Let $\varphi:\lfloor B\rfloor\to \lfloor A_0\rfloor$ be the unique equivariant function such that $\varphi(B)=A_0$. Then the function $g_0=\varphi\circ f$ belongs to the minimal left ideal $\mathsf L_f$. Observe that $f\circ g_0(B)=f(A_0)=B_0\ne B$. Since the left shift $l_f$ is continuous, for the neighborhood $O(f\circ g_0)=\{h\in \mathsf L_f:h(B)\ne B\}$ of $f\circ g_0=l_f(g_0)$ there is a neighborhood $O(g_0)\subset\mathsf L_f$ such that $f\circ g\in O(f\circ g_0)$
for every $g\in O(g_0)$. It follows from the equivariance of $g_0=g_0\circ f$ and the definition of the topology (of pointwise convergence) on $\mathsf L_f\subset\Tau_K^{\;\Tau_K}$ that the point $g_0(B)=A_0$ of $\Tau_K$
has a neighborhood $O(A_0)\subset \Tau_K$ such that each function $g\in\mathsf L_f$ with $g(B)\in O(A_0)$ belongs to the neighborhood $O(g_0)$.
Since $A_0$ is a limit point of the set $\mathcal Z$, there is a set $A\in O(A_0)\cap\mathcal Z$. For this set find an equivariant function $g=g\circ f$ such that $g(B)=A$. Then $g\in O(g_0)$ and hence $f\circ g(B)\ne B$, which contradicts $g(B)=A\in f^{-1}(B)$. This contradiction proves that all preimages $f^{-1}(B)$, $B\in f(\Tau_K)$, are closed in $\Tau_K$.
Next, we show that each orbit $\lfloor A\rfloor$, $A\in\Tau_K$, is discrete. Assume conversely that some orbit $\lfloor A\rfloor$ is not discrete and consider its closure $\overline{\lfloor A\rfloor}$ in the compact Hausdorff space $\Tau_K$. The orbit $\lfloor A\rfloor$ has no isolated points, being non-discrete and topologically homogeneous. Fix any $B\in f(\Tau_K)$. By Theorems~\ref{t2.1}(2) and \ref{t8.2}, the image $f(\Tau_K)$ has cardinality $|f(\Tau_K)|=|\lfloor B\rfloor|=|{\mathcal H}(K)|\le\aleph_0$. Then we can write the compact space $\overline{\lfloor A\rfloor}$ as a countable union $$\overline{\lfloor A\rfloor}=\bigcup_{B\in f(\Tau_K)}f^{-1}(B)\cap\overline{\lfloor A\rfloor}$$ of closed subsets. By Baire's Theorem, for some $B\in f(\Tau_K)$ the set $\overline{\lfloor A\rfloor}\cap f^{-1}(B)$ has non-empty interior in $\overline{\lfloor A\rfloor}$. Since the orbit $\lfloor A\rfloor$ has no isolated points, the intersection $\lfloor A\rfloor\cap f^{-1}(B)$ is infinite. But this is not possible: if $f(xA)=B=f(yA)$, then $y^{-1}x\in\Fix(B)=KK$ and hence $xA=yA$, so the equivariant map $f$ is injective on the orbit $\lfloor A\rfloor$.
Finally, we show that for every $B\in f(\Tau_K)$ the preimage $f^{-1}(B)$ is open in $\Tau_K$. Assuming the opposite, we can find a point $A_0\in f^{-1}(B)$ that lies in the closure of the set $\Tau_K\setminus f^{-1}(B)$.
Choose any equivariant function $g_0\in\mathsf L_f$ such that $g_0(B)=A_0$ and observe that $f\circ g_0(B)=B$.
Since the orbit $\lfloor B\rfloor$ of $B$ is discrete, we can find an open neighborhood $O(B)\subset \Tau_K$ of $B$ such that $O(B)\cap\lfloor B\rfloor=\{B\}$.
This neighborhood determines a neighborhood $O(f\circ g_0)=\{g\in\mathsf L_f:g(B)\in O(B)\}$ of the function $f\circ g_0$ in $\mathsf L_f\subset\Tau_K^{\;\Tau_K}$. Since the left shift $l_f:\mathsf L_f\to\mathsf L_f$ is continuous, the function $g_0$ has a neighborhood $O(g_0)\subset\mathsf L_f$ such that $l_f(O(g_0))\subset O(f\circ g_0)$.
By the definition of the topology (of pointwise convergence) on
$\mathsf L_f$, there is a neighborhood $O(g_0(B))\subset\Tau_K$ such that each function $g\in\mathsf L_f$ with $g(B)\in O(g_0(B))$ belongs to $O(g_0)$. By the choice of the point $A_0=g_0(B)$, there is a set $A\in O(g_0(B))\setminus f^{-1}(B)$. For this set choose an equivariant function $g\in\mathsf L_f$ such that $g(B)=A$. This function $g$ belongs to $O(g_0)$ and thus $f\circ g\in O(f\circ g_0)$, which means that $f\circ g(B)=B$. But this contradicts $g(B)=A\notin f^{-1}(B)$.
Thus for each $B\in f(\Tau_K)$ the preimage $f^{-1}(B)$ is open in $\Tau_K$, which implies that the function $f:\Tau_K\to\Tau_K$ is continuous.
\smallskip
$(5)\Rightarrow(6)$ Assume that $f$ is continuous. Then for any $B\in\Tau_K$ the orbit $\lfloor B\rfloor=f(\Tau_K)$ is compact (as a continuous image of the compact space $\Tau_K$). Being a compact topologically homogeneous space of cardinality $|\lfloor B\rfloor|\le|{\mathcal H}(K)|\le \aleph_0$, the orbit $\lfloor B\rfloor=f(\Tau_K)$ is finite. Then for each $g\in \mathsf L_f$ the restriction $g|\lfloor B\rfloor$ is continuous and hence $g=g\circ f$ is continuous as the composition of two continuous maps $f$ and $g|\lfloor B\rfloor$.
\smallskip
$(6)\Rightarrow(7)$ Assume that $\mathsf L_f\subset C(\Tau_K,\Tau_K)$. Then $f$ is continuous. Repeating the argument from the preceding item, we can show that the characteristic group ${\mathcal H}(K)$ is finite. By the continuity of $f$, for every $B\in\Tau_K$ the preimage $f^{-1}(B)$ is closed in $\Tau_K$.
In the following claim, $B$ is any fixed set in $f(\Tau_K)$ and $\mathsf E(\mathsf L_f)$ stands for the idempotent band of the semigroup $\mathsf L_f$.
\begin{claim}\label{ELf} $\mathsf E(\mathsf L_f)=\{g\in\mathsf L_f:f\circ g(B)=B\}$.
\end{claim}
\begin{proof} If $g\in\mathsf L_f$ is an idempotent, then for the unique point $C\in g(\Tau_K)\cap f^{-1}(B)$ we get $C=g(C)$ and then $B=f(C)=fg(C)=fgf(C)=fg(B)$.
Now assume conversely that $g\in\mathsf L_f$ is a function with $fg(B)=B$.
Let $C=g(B)\in g(\Tau_K)$. Then $g(C)=gf(C)=gfg(B)=g(B)=C$. For every $A\in\Tau_K$ we can find $x\in X$ such that $g(A)=xC$ and then
$gg(A)=g(xC)=xg(C)=xC=g(A)$, which means that $g$ is an idempotent.
\end{proof}
Since the set $f^{-1}(B)\subset \Tau_K$ is closed and the calculation map $$c_B:\mathsf L_f\to\Tau_K,\;c_B:g\mapsto g(B),$$ is continuous, the preimage $c_B^{-1}(f^{-1}(B))$ is closed in $\mathsf L_f$. By Claim~\ref{ELf}, this preimage is equal to the idempotent band $\mathsf E(\mathsf L_f)$ of the semigroup $\mathsf L_f$.
\smallskip
$(7)\Rightarrow(2)$ Assume that the group ${\mathcal H}(K)$ is finite and the idempotent band $\mathsf E(\mathsf L_f)$ is compact. By Proposition~\ref{p12.2}, the orbit space $[\Tau_K]$ is compact, Hausdorff and zero-dimensional, and the quotient map $q:\Tau_K\to[\Tau_K]$ is continuous and open.
We claim that for every $B\in f(\Tau_K)$ the preimage $f^{-1}(B)\subset\Tau_K$ is compact. Since the idempotent band $\mathsf E(\mathsf L_f)$ is compact and the calculation map $c_B:\mathsf L_f\to\Tau_K$, $c_B:g\mapsto g(B)$, is continuous, the image $c_B(\mathsf E(\mathsf L_f))$ is compact. By Claim~\ref{ELf}, $c_B(\mathsf E(\mathsf L_f))\subset f^{-1}(B)$.
To show the reverse inclusion, fix any subset $A\in f^{-1}(B)$ and choose any equivariant map $\varphi:\lfloor B\rfloor\to\lfloor A\rfloor$ such that $\varphi(B)=A$. Then the map $g=\varphi\circ f$ belongs to $\mathsf L_f$ and is an idempotent by Claim~\ref{ELf}. Since $A=g(B)$, we see that $f^{-1}(B)\subset c_B(\mathsf E(\mathsf L_f))$ and hence $f^{-1}(B)=c_B(\mathsf E(\mathsf L_f))$ is compact.
Fix any set $B\in f(\Tau_K)$. Since $|f(\Tau_K)|=|\lfloor B\rfloor|=|{\mathcal H}(K)|<\aleph_0$, the preimage $$\mathcal Z=f^{-1}(B)=\Tau_K\setminus \bigcup_{B\ne A\in\lfloor B\rfloor}f^{-1}(A)$$ is open-and-closed in $\Tau_K$. Since the compact space $\mathcal Z$ meets each orbit $\lfloor A\rfloor$, $A\in\Tau_K$, at a single point, the restriction $q|\mathcal Z:\mathcal Z\to[\Tau_K]$, being continuous and bijective, is a homeomorphism. So, it suffices to prove that $\mathsf L_f$ is topologically isomorphic to $\mathcal Z\times {\mathcal H}(K)$ where the space $\mathcal Z$ is endowed with the left zero multiplication. Define an isomorphism $\Phi:\mathcal Z\times {\mathcal H}(K)\to \mathsf L_f$ assigning to each pair $(Z,x)\in \mathcal Z\times {\mathcal H}(K)$ the function $g_{Z,x}\circ f$ where $g_{Z,x}:\lfloor B\rfloor\to \lfloor Z\rfloor$ is the unique equivariant function such that $g_{Z,x}(B)=x^{-1}Z$. It is easy to check that $\Phi$ is a topological isomorphism between $\mathcal Z\times {\mathcal H}(K)$ and $\mathsf L_f$.
\end{proof}
In the following proposition we prove the existence of continuous or discontinuous minimal idempotents in the semigroup $\End(\Tau_K)$.
Let us recall that for a left-invariant ideal $\mathcal I$ on a group $X$ by $\End^\mathcal I(\Tau_K)$ we denote the left ideal in $\End(\Tau_K)$ consisting of all equivariant $\mathcal I$-saturated functions.
\begin{proposition}\label{p14.5} Let $\mathcal I$ be a left-invariant ideal on a group $X$ and assume that a maximal 2-cogroup $K\subset X$ has finite characteristic group ${\mathcal H}(K)$. Then the semigroup $\End^\mathcal I(\Tau_K)$ contains:
\begin{enumerate}
\item[\textup{(1)}] a continuous minimal idempotent if $Kx\notin \mathcal I$ for all $x\in X$;
\item[\textup{(2)}] no continuous function if $Kx\in \mathcal I$ for all $x\in X$;
\item[\textup{(3)}] no discontinuous function if (and no discontinuous minimal idempotent only if) for each $A\in\Tau_K$ the set $\bar{\bar A}^\mathcal I\cap\Tau_K$ is open in $\Tau_K$.
\end{enumerate}
\end{proposition}
\begin{proof} By Proposition~\ref{p12.2}, the orbit space $[\Tau_K]$ is compact, Hausdorff, and zero-dimensional and the orbit map $q:\Tau_K\to[\Tau_K]$ has a continuous section $s:[\Tau_K]\to\Tau_K$. Then $\mathcal Z=s([\Tau_K])$ is a closed subset of $\Tau_K$ that meets each orbit $\lfloor A\rfloor$, $A\in\Tau_K$, at a single point. Pick any $B\in\mathcal Z$ and define a continuous minimal idempotent $f:\Tau_K\to\Tau_K$ letting $f(xZ)=xB$ for each $x\in {\mathcal H}(K)$ and $Z\in\mathcal Z$.
\smallskip
1. Assuming that $Kx\notin \mathcal I$ for all $x\in X$, we shall show that the function $f$ is $\mathcal I$-saturated and hence belongs to $\End^\mathcal I(\Tau_K)$. Given any sets $A,A'\in\Tau_K$ with $A=_\mathcal I A'$, we need to show that $f(A)=f(A')$. We shall prove more: $A=A'$. Assume conversely that $A\ne A'$ and find a point $x\in A\triangle A'$ in the symmetric difference $A\triangle A'=(A\setminus A')\cup(A'\setminus A)$. Since $KKA=\Fix(A)A=A$ and $KKA'=\Fix(A')A'=A'$, we get $KKx\subset A\triangle A'\in\mathcal I$ and then for every $y\in K$, we get $Kyx=KKx\in\mathcal I$, which contradicts our assumption.
\smallskip
2. Now assume that $Kx\in\mathcal I$ for all $x\in X$. We shall prove that no function $g\in\End^\mathcal I(\Tau_K)$ is continuous. For this we show that for each $A\in\Tau_K$ the set $\bar{\bar A}^\mathcal I\cap\Tau_K$ is dense in $\Tau_K$.
Given any set $C\in\Tau_K$ and a neighborhood $O(C)$ of $C$ in $\Tau_K$, we need to find a set $B\in O(C)$ such that $B=_\mathcal I A$. By the definition of the topology on $\Tau_K\subset\mathsf P(X)$, there is a finite subset $F\subset X$ such that $O(C)\supset\{B\in\Tau_K:B\cap F=C\cap F\}$. Now we see that the set $B=(A\setminus K^\pm F)\cup(K^\pm F\cap C)\in\Tau_K$ belongs to the neighborhood $O(C)$ and $B=_\mathcal I A$ because $A\triangle B\subset K^\pm F\in\mathcal I$. Assuming that some $\mathcal I$-saturated equivariant function $g:\Tau_K\to\Tau_K$ is continuous, we conclude that the preimage $g^{-1}(g(A))\supset\bar{\bar A}^\mathcal I\cap\Tau_K$ coincides with $\Tau_K$, being a closed dense subset of $\Tau_K$. So, $g$ is constant. Since the action of the (non-trivial) group ${\mathcal H}(K)$ on $\Tau_K$ is free, the constant map $g$ cannot be equivariant.
\smallskip
3. If for every $A\in\Tau_K$ the set $\bar{\bar A}^\mathcal I\cap\Tau_K$ is open in $\Tau_K$, then each $\mathcal I$-saturated function is locally constant and hence continuous. So, $\End^\mathcal I(\Tau_K)$ contains no discontinuous function.
Now assuming that for some $A\in\Tau_K$ the set $\mathcal A=\bar{\bar A}^\mathcal I\cap\Tau_K$ is not open in $\Tau_K$, we shall construct a discontinuous minimal idempotent in $\End^\mathcal I(\Tau_K)$. Take any minimal idempotent $f\in\End^\mathcal I(\Tau_K)$. If $f$ is discontinuous, we are done. So assume that $f$ is continuous and fix any set $B\in f(\Tau_K)$. By Theorem~\ref{t14.1}(5), the image $f(\Tau_K)=\lfloor B\rfloor$ is finite. So, the preimage $\mathcal Z=f^{-1}(B)$ is open-and-closed in $\Tau_K$.
Take any $x\in\Stab(K)\setminus KK$ and consider the subset $\mathcal Z'=(\mathcal Z\setminus \mathcal A)\cup x\mathcal A$, which is not closed in $\Tau_K$ as $\mathcal A$ is not open in $\mathcal Z$. Then the $\mathcal I$-saturated minimal idempotent $g:\Tau_K\to\Tau_K$ defined by $g(yZ)=yB$ for $y\in {\mathcal H}(K)$ and $Z\in\mathcal Z'$ is discontinuous (because $g^{-1}(B)=\mathcal Z'$ is not closed in $\Tau_K$).
\end{proof}
\begin{corollary}\label{c14.6} For a maximal 2-cogroup $K\subset X$ and a left-invariant ideal $\mathcal I$ on $X$ the following conditions are equivalent:
\begin{enumerate}
\item[(1)] each minimal left ideal of $\End^\mathcal I(\Tau_K)$ is a topological semigroup;
\item[(2)] each minimal left ideal of $\End^\mathcal I(\Tau_K)$ is a semitopological semigroup;
\item[(3)] for each $A\in\Tau_K$ the set $\bar{\bar A}^\mathcal I\cap\Tau_K$ is open in $\Tau_K$.
\end{enumerate}
If the ideal $\mathcal I$ is right-invariant, then the conditions (1)--(3) are equivalent to
\begin{enumerate}
\item[(4)] $K$ has finite index in $X$;
\item[(5)] the semigroup $\End(\Tau_K)$ is finite.
\end{enumerate}
\end{corollary}
\begin{proof} The equivalence $(1)\Leftrightarrow(2)\Leftrightarrow(3)$ follows from Theorem~\ref{t14.3} and Proposition~\ref{p14.5}.
Now assume that the left-invariant ideal $\mathcal I$ is right-invariant.
$(3)\Rightarrow(4)$ Assume that for each $A\in\Tau_K$ the set
$\bar{\bar A}^\mathcal I\cap\Tau_K$ is open in $\Tau_K$. Then it is also closed in $\Tau_K$, being the complement of the union of the open sets $\bar{\bar B}^\mathcal I\cap \Tau_K$ with $B\ne_\mathcal I A$. By Proposition~\ref{p14.5}, $Kx\notin\mathcal I$ for some $x\in X$. Since the ideal $\mathcal I$ is right-invariant, $Kx\notin\mathcal I$ for all $x\in X$. We claim that $\bar{\bar A}^\mathcal I\cap\Tau_K=\{A\}$. Assuming that $\bar{\bar A}^\mathcal I\cap\Tau_K$ contains a set $B$ distinct from $A$, we can find a point $x\in A\triangle B$. Since $\Fix(A)=KK=\Fix(B)$, we get $KKx\subset KKA\triangle KKB=A\triangle B\in\mathcal I$ and thus for any point $z\in K$, we arrive at the absurd conclusion $Kzx=KKx\in\mathcal I$. Since the singleton $\bar{\bar A}^\mathcal I\cap\Tau_K=\{A\}$ is open in the space $\Tau_K$, which is homeomorphic to $2^{X/K^\pm}$, the index of the group $K^\pm$ in $X$ is finite and so is the index of $K$ in $X$.
\smallskip
The implications $(4)\Rightarrow(5)\Rightarrow(1)$ are trivial.
\end{proof}
\section{The semigroup $\Enl(\Tau_K)$}\label{s15}
In the preceding section we studied the continuity of the semigroup operation on minimal left ideals of the semigroup $\End(\Tau_K)$. In this section we shall be interested in the continuity of the semigroup operation on the semigroup $\Enl(\Tau_K)\subset\End(\Tau_K)$. This will be done in a more general context of upper subfamilies $\mathsf F\subset\Tau$.
We define a family $\mathsf F\subset\Tau$ to be {\em upper} if for any twin set $A\in\mathsf F$ and a twin subset $B\subset X$ with $\Fix^-(A)\subset\Fix^-(B)$, we get $B\in\mathsf F$.
Let us remark that $\wht\Tau$ is an upper subfamily of $\Tau$ while $\Tau_{K}$ is a minimal upper subfamily of $\Tau$ for every $K\in\wht{\mathcal K}$.
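The second part of this remark admits a short verification (a routine check of ours, using the maximality of $K$ and the fact that $\Fix^-(B)$ of a twin set $B$ is a 2-cogroup): a 2-cogroup containing the maximal 2-cogroup $K$ must coincide with $K$, which yields

```latex
% \Tau_K is upper: for A\in\Tau_K and a twin set B with
% \Fix^-(A)\subset\Fix^-(B), the 2-cogroup \Fix^-(B) contains the
% maximal 2-cogroup K=\Fix^-(A), whence
\[
A\in\Tau_K,\ \ \Fix^-(A)\subset\Fix^-(B)\ \Longrightarrow\
\Fix^-(B)=K\ \Longrightarrow\ B\in\Tau_K.
\]
% \Tau_K is a minimal upper family: any non-empty upper subfamily
% of \Tau_K containing some A must contain every B\in\Tau_K,
% since \Fix^-(A)=K=\Fix^-(B).
```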
\begin{proposition} Each upper subfamily $\mathsf F\subset\Tau$ is symmetric and $\lambda$-invariant. Consequently, $\Enl(\mathsf F)$ is a compact right-topological semigroup.
\end{proposition}
\begin{proof} To prove that $\mathsf F\subset\Tau$ is symmetric, given any set $A\in\mathsf F$ choose a point $x\in\Fix^-(A)$. By Proposition~\ref{p5.4}, $\Fix^-(xA)=x\,\Fix^-(A)\,x^{-1}=\Fix^-(A)$ and hence $X\setminus A=xA\in\mathsf F$.
To see that $\mathsf F$ is $\lambda$-invariant, we need to show that $\varphi(\mathsf F)\subset\mathsf F$ for any function $\varphi\in\Enl(\mathsf P(X))$. By Corollary~\ref{c4.4}, the function $\varphi$ is symmetric and left-invariant. Then for each $A\in\mathsf F$ and $x\in\Fix^-(A)$ we get $x\varphi(A)=\varphi(xA)=\varphi(X\setminus A)=X\setminus\varphi(A)$ and hence $x\in\Fix^-(\varphi(A))$. Since $\Fix^-(A)\subset\Fix^-(\varphi(A))$, the set $\varphi(A)$ belongs to $\mathsf F$ by the definition of an upper family.
\end{proof}
\begin{theorem}\label{t16.1} For an upper subfamily $\mathsf F\subset\Tau$ the following conditions are equivalent:
\begin{enumerate}
\item[\textup{(1)}] $\Enl(\mathsf F)$ is a topological semigroup;
\item[\textup{(2)}] $\Enl(\mathsf F)$ is a semitopological semigroup;
\item[\textup{(3)}] for each twin set $A\in\mathsf F$ the subgroup $\Fix(A)$ has finite index in $X$.
\end{enumerate}
\end{theorem}
\begin{proof} $(3)\Rightarrow(1)$ Assume that for each twin set $A\in\mathsf F$ the stabilizer $\Fix(A)$ has finite index in $X$.
To show that the semigroup operation $\circ :\Enl(\mathsf F)\times\Enl(\mathsf F)\to\Enl(\mathsf F)$ is continuous, fix any two functions $f,g\in\Enl(\mathsf F)$ and a neighborhood $O(f\circ g)$ of their composition. We should show that the functions $f,g$ have neighborhoods $O(f),O(g)\subset \Enl(\mathsf F)$ such that $O(f)\circ O(g)\subset O(f\circ g)$.
We lose no generality assuming that the neighborhood $O(f\circ g)$ is of sub-basic form: $$O(f\circ g)=\{h\in\Enl(\mathsf F): x\in h(A)\}$$
for some $x\in X$ and some twin set $A\in\mathsf F$.
Let $B=g(A)$. It follows from $f\circ g\in O(f\circ g)$ that $x\in f\circ g(A)=f(B)$.
Let $O(f)=\{h\in\Enl(\mathsf F):x\in h(B)\}$.
The definition of a neighborhood $O(g)$ is a bit more complicated.
By our hypothesis, the stabilizer $\Fix(A)$ has finite index in $X$.
Let $S\subset X$ be a (finite) subset meeting each coset $\Fix(A)\,z$, $z\in X$, at a single point. Consider the following open neighborhood of $g$ in $\Enl(\mathsf F)$:
$$O(g)=\{g'\in\Enl(\mathsf F):\forall s\in S\;\;(s\in B\;\Leftrightarrow s\in g'(A))\}.$$
We claim that $O(f)\circ O(g)\subset O(f\circ g)$.
Indeed, take any functions $f'\in O(f)$ and $g'\in O(g)$.
By Theorem~\ref{t11.1}, $\Fix^-(A)\subset \Fix^-(g'(A))$ and hence $\Fix(A)\subset\Fix(g'(A))$. Then $g'(A)=\Fix(A)\cdot(S\cap g'(A)) =\Fix(A)\cdot (S\cap B)=B$ and thus $x\in f'(B)=f'\circ g'(A)$ witnessing that $f'\circ g'\in O(f\circ g)$.
\smallskip
The implication $(1)\Rightarrow(2)$ is trivial.
\smallskip
$(2)\Rightarrow (3)$ Assume that (3) fails, that is, $X$
contains a twin subset $T_0\in\mathsf F$ whose stabilizer $\Fix(T_0)$ has infinite index in $X$. We shall show that the semigroup operation on $\Enl(\mathsf F)$ is not separately continuous.
Then the subgroup $H=\Fix^\pm(T_0)$ also has infinite index in $X$. By Theorem 15.5 of \cite{P}, $X\ne FHF$ for any finite subset $F\subset X$.
\begin{lemma}\label{l16.2} There are countable sets $A,B\subset X$ such that
\begin{enumerate}
\item[\textup{(1)}] $xB\cap yB=\emptyset$ for any distinct $x,y\in A$;
\item[\textup{(2)}] $|AB\cap Hz|\le 1$ for all $z\in X$;
\item[\textup{(3)}] $e\in A$, $AB\cap H=\emptyset$.
\end{enumerate}
\end{lemma}
\begin{proof} Let $a_0=e$ and $B_{<0}=\{e\}$. Inductively we shall construct sequences $A=\{a_n:n\in\omega\}$ and $B=\{b_n:n\in\omega\}$ such that
\begin{itemize}
\item $b_n\notin A_{\le n}^{-1}HA_{\le n}B_{<n}$ where $A_{\le n}=\{a_i:i\le n\}$ and
$B_{<n}=\{e\}\cup\{b_i:i< n\}$;
\item $a_{n+1}\notin HA_{\le n}B_{\le n}B_{\le n}^{-1}$.
\end{itemize}
Since $X\ne FHF$ for any finite subset $F\subset X$, the choice of the points $b_n$ and $a_{n+1}$ at the $n$-th step is always possible. It is easy to check that the sets $A,B$ satisfy the conditions (1)--(3) of the lemma.
\end{proof}
The properties (2), (3) of the set $AB$ allow us to enlarge $AB$ to a subset $S$ that contains the neutral element of $X$ and meets each coset $Hz$, $z\in X$, at a single point. Observe that each subset $E\subset S$ generates a twin subset $$T_E=\Fix(T_0)\cdot E\cup \Fix^-(T_0)\cdot (S\setminus E)$$ of $X$ such that $\Fix^-(T_0)\subset\Fix^-(T_E)$ and hence $T_E\in\mathsf F$.
\begin{lemma}\label{l16.3} There is a free ultrafilter $\mathcal B$ on $X$ and a family of subsets $\{U_a:a\in A\}\subset \mathcal B$ such that
\begin{enumerate}
\item[\textup{(1)}] $\bigcup_{a\in A}U_a\subset B$;
\item[\textup{(2)}] the set $U=\bigcup_{a\in A}aU_a$ has the property $B\not\subset x^{-1}U\cup y^{-1}U$ for every $x,y\in A$;
\item[\textup{(3)}] for every $V\in\mathcal B$ the set $\{a\in A:aV\subset U\}$ is finite.
\end{enumerate}
\end{lemma}
\begin{proof} Let $A=\{a_n:n\in\omega\}$ and $B$ be the sets constructed in Lemma~\ref{l16.2}. For every $n\in\omega$ put
$A_{\le n}=\{a_i:i\le n\}$.
Let $B_{<0}=\{e\}$ and inductively, for every $n\in\omega$ choose an element
$b_n\in B$ so that
$$b_n\notin A_{\le n}^{-1}A_{\le n}B_{<n}\mbox{ \ where \ }B_{<n}=\{b_i:i<n\}.$$
For every $n\in\omega$ let $B_{\ge n}=\{b_{i}:i\ge n\}$. Let also
$B_{2\omega}=\{b_{2n}:n\in\omega\}$.
Let us show that for any distinct numbers $n,m$ the intersection
$a_nB_{\ge n}\cap a_mB_{\ge m}$ is empty.
Otherwise there would exist two numbers $i\ge n$ and $j\ge m$ such that
$a_nb_{i}=a_mb_{j}$.
It follows from $a_n\ne a_m$ that $i\ne j$. We lose no generality
assuming that $j>i$.
Then $a_nb_{i}=a_mb_{j}$ implies that
$$b_{j}=a_m^{-1}a_nb_{i}\in A_{\le j}^{-1}A_{\le j}B_{<j},$$ which contradicts
the choice of $b_{j}$.
Let $\mathcal B\in\beta(X)$ be any free ultrafilter such that
$B_{2\omega}\in\mathcal B$ and $\mathcal B$
is not a P-point in $\beta(X)\setminus X$. To get such an ultrafilter, take $\mathcal B$ to be a
cluster point of any
countable subset of $\beta(B_{2\omega})\setminus B_{2\omega}\subset \beta(X)$. Using the fact
that $\mathcal B$ fails
to be a P-point, we can take a decreasing sequence of subsets
$\{V_n:n\in\omega\}\subset\mathcal B$ of $B_{2\omega}$
having no pseudointersection in $\mathcal B$. The latter means that
for every $V\in\mathcal B$
the almost inclusion $V\subset^* V_n$ (which means that $V\setminus
V_n$ is finite) holds only
for finitely many numbers $n$.
For every $a=a_n\in A$ let $U_a=V_n\cap B_{\ge n}$. We claim that
the ultrafilter $\mathcal B$,
the family $(U_a)_{a\in A}$, and the set $U=\bigcup_{a\in A}a U_a=\bigcup_{n\in\omega}a_n(V_n\cap B_{\ge n})$ satisfy the requirements of
the lemma.
First, we check that $B\not\subset a_n^{-1}U\cup a_m^{-1}U$ for all $n\le m$. Take any odd number $k>m$. We claim that $b_k\notin a_n^{-1}U\cup a_m^{-1}U$. Otherwise, $b_{k}\in a^{-1}_n a_i(V_i\cap B_{\ge i})\cup a_m^{-1}a_i(V_i\cap B_{\ge i})$ for some $i\in \omega$ and hence $b_{k}=a_n^{-1}a_i b_j$ or $b_{k}=a^{-1}_ma_ib_j$ for some even $j\ge i$. If $k>j$, then both the equalities are
forbidden by the choice of $b_{k}\notin A_{\le k}^{-1}A_{\le k}B_{<k}\supset\{a^{-1}_na_ib_j,a^{-1}_ma_ib_j\}$. If $k<j$, then those
equalities are forbidden by the choice of
$b_j\notin
A_{\le j}^{-1}A_{\le j}B_{<j}\supset\{a_i^{-1}a_nb_k, a^{-1}_ia_mb_k\}$. Therefore, $B\not\subset a_n^{-1}U\cup a_m^{-1}U$.
Next, given arbitrary $V\in\mathcal B$ we show that the set $A'=\{a\in A:aV\subset U\}$ is finite. By the choice of the sequence $(V_n)$, the set $F=\{a_n:V\cap
B_{2\omega}\subset^* V_n\}$ is finite. We claim that $A'\subset F$. Indeed, take any $a_n\in A'$. It follows from $a_nV\subset
U=\bigcup_{a\in A}aU_a$ and $a_nB\cap \bigcup_{i\ne n}a_i B=\emptyset$ that $$a_n(V\cap
B_{2\omega})\subset^*a_n(V_n\cap B_{\ge n})\subset a_n V_n$$ and hence $a_n\in F$.
\end{proof}
Let $\mathcal A$ be any free ultrafilter on $X$ containing the set $A$ and observe that $U=\bigcup_{n\in\omega}a_n(V_n\cap B_{\ge n})\in\mathcal A\circ\mathcal B$. Let $\alpha=\Phi_{\mathsf F}(\mathcal A)$ and $\beta=\Phi_{\mathsf F}(\mathcal B)$ be the function representations of the ultrafilters $\mathcal A$ and $\mathcal B$, respectively. We claim that the left shift $l_\alpha:\Enl(\mathsf F)\to\Enl(\mathsf F)$, $l_\alpha:f\mapsto \alpha\circ f$, is discontinuous
at $\beta$. Since $U\subset AB\subset S$, we can consider the twin set $$T=\Fix(T_0)\cdot U\cup \Fix^-(T_0)\cdot (S\setminus U)$$ and observe that $T\in \mathcal A\circ\mathcal B$. Consequently, $\alpha\circ\beta(T)=\{x\in X:x^{-1}T\in\mathcal A\circ\mathcal B\}$ contains the neutral element, which implies that $O(\alpha\circ\beta)=\{f\in\Enl(\mathsf F):e\in f(T)\}$ is a neighborhood of $l_\alpha(\beta)=\alpha\circ\beta$ in $\Enl(\mathsf F)$.
Assuming that $l_\alpha$ is continuous at $\beta$, we can find a neighborhood $O(\beta)\subset\Enl(\mathsf F)$ of $\beta$ such that $l_\alpha(O(\beta))\subset O(\alpha\circ\beta)$. Since $\mathsf F$ is left-invariant, we can assume that $O(\beta)$ is of the basic form:
$$O(\beta)=\{f\in\Enl(\mathsf F):e\in\bigcap_{i=1}^n f(T_i)\}$$ for some twin sets $T_1,\dots,T_n\in\mathsf F$. It follows from $\beta\in O(\beta)$ that $e\in\beta(T_i)$ and thus $T_i\in\mathcal B$ for every $i\le n$.
According to Lemma~\ref{l16.3}(3), the set $F=\{a\in A:B\cap \bigcap_{i=1}^n T_i\subset a^{-1}U\}$ is finite.
We claim that the family $\mathcal L=\{T_1,\dots, T_n\}\cup\{X\setminus x^{-1}T:x\in A\setminus F\}$ is linked.
Since the sets $T_1,\dots,T_n$ belong to the ultrafilter $\mathcal B$ and hence pairwise intersect, the linkedness of $\mathcal L$ will follow as soon as we check that
\begin{itemize}
\item[(i)] $T_i\cap (X\setminus x^{-1}T)\ne\emptyset$ for any $i\le n$ and $x\in A\setminus F$;
\item[(ii)] $(X\setminus x^{-1}T)\cap (X\setminus y^{-1}T)\ne\emptyset$ for all $x,y\in A$.
\end{itemize}
Item (i) is equivalent to $T_i\not\subset x^{-1}T$ for $x\in A\setminus F$. Assuming conversely that $T_i\subset x^{-1}T$, we successively obtain $xT_i\subset T$, $S\cap xT_i\subset S\cap T=U$, and finally $B\cap T_i\subset x^{-1}S\cap T_i\subset x^{-1}U$, which contradicts $x\notin F$.
Item (ii) is equivalent to $x^{-1}T\cup y^{-1}T\ne X$ for $x,y\in A$. Assume conversely that $x^{-1}T\cup y^{-1}T=X$ for some $x,y\in A$.
It follows from $xB\subset S$ that $xB\cap T=xB\cap U$ and thus $B\cap x^{-1}T=B\cap x^{-1}U$. Similarly, $B\cap y^{-1}T=B\cap y^{-1}U$. Consequently, $$B=B\cap X=B\cap (x^{-1}T\cup y^{-1}T)=B\cap (x^{-1}U\cup y^{-1}U)\ne B$$ according to Lemma~\ref{l16.3}(2). This contradiction completes the proof of the linkedness of $\mathcal L$.
Being linked, the family $\mathcal L$ can be enlarged to a maximal linked system $\mathcal C\in\lambda(X)$. It follows from $T_1,\dots,T_n\in\mathcal L\subset\mathcal C$ that the twin representation $\gamma=\Phi_{\mathsf F}(\mathcal C)$ belongs to the neighborhood $O(\beta)$ and consequently, $\alpha\circ\gamma\in O(\alpha\circ\beta)$, which means that $T\in\mathcal A\circ \mathcal C$. The latter is equivalent to $A'=\{x\in X:x^{-1}T\in\mathcal C\}\in\mathcal A$. On the other hand, $X\setminus A'=\{x\in X:X\setminus x^{-1}T\in\mathcal C\}$ contains the set $A\setminus F\in\mathcal A$ and thus $X\setminus A'\in\mathcal A$, which is a contradiction.
\end{proof}
\begin{theorem}\label{t16.4} If the group $X$ is twinic, then for an upper subfamily $\mathsf F\subset\Tau$ the following conditions are equivalent:
\begin{enumerate}
\item[\textup{(1)}] $\Enl(\mathsf F)$ is metrizable;
\item[\textup{(2)}] $\Enl(\mathsf F)$ is a metrizable topological semigroup;
\item[\textup{(3)}] $\mathsf F$ is at most countable.
\end{enumerate}
\end{theorem}
\begin{proof} We shall prove the implications $(3)\Rightarrow(2)\Rightarrow(1)\Rightarrow(3)$.
$(3)\Rightarrow(2)$. Assume that the family $\mathsf F$ is at most countable. We claim that for each twin subset $T\in\mathsf F$ the 2-cogroup $K=\Fix^-(T)$ has finite index in $X$. Otherwise, the subgroup $K^\pm=KK\cup K$ also has infinite index in $X$ and then $|\mathsf F|\ge|\Tau_K|=2^{|X/K^\pm|}\ge 2^\omega>\aleph_0$.
So, $\Fix(T)$ has finite index in $X$ and the implication $(3)\Rightarrow(1)$ of Theorem~\ref{t16.1} guarantees that $\Enl(\mathsf F)$ is a topological semigroup. Now we show that this semigroup is metrizable. First observe that for every $T\in\mathsf F$ the set $\Enl(\{T\})=\{\varphi|\{T\}:\varphi\in\Enl(\mathsf P(X))\}$ has finite cardinality
$$|\Enl(\{T\})|=|\{\varphi(T):\varphi\in\Enl(\mathsf P(X))\}|\le| \{A\in\Tau:\Fix(A)\supset \Fix(T)\}|.$$ Since the family $\mathsf F$ is countable, the space $\Enl(\mathsf F)\subset\prod_{T\in\mathsf F}\Enl(\{T\})$ is metrizable, being a subspace of the countable product of finite discrete spaces.
\smallskip
The implication $(2)\Rightarrow(1)$ is trivial.
\smallskip
$(1)\Rightarrow(3)$. Assuming that the family $\mathsf F$ is not countable, we shall show that the space $\Enl(\mathsf F)$ is not metrizable. We consider two cases.
\smallskip
(a) For some twin set $T\in\mathsf F$ the stabilizer $\Fix(T)$ has infinite index.
Then we can find an infinite set $S\subset X$ that intersects each coset $\Fix^\pm(T)x$, $x\in X$, at a single point. As we already know, for each subset $E\subset S$ the set
$$T_E=\Fix(T)\cdot E\cup \Fix^-(T)\cdot(S\setminus E)$$ belongs to the family $\mathsf F$. Now take any two distinct ultrafilters $\mathcal U,\mathcal V\in\beta(S)\subset\beta(X)$ and consider their function representations $f_\mathcal U=\Phi_{\mathsf F}(\mathcal U)$ and $f_\mathcal V=\Phi_{\mathsf F}(\mathcal V)$. Since $\mathcal U\ne\mathcal V$, there is a subset $E\subset S$ such that $E\in\mathcal U\setminus\mathcal V$. It follows that $T_E\in\mathcal U$ and $T_{S\setminus E}\in\mathcal V$, which implies $T_E\notin \mathcal V$ and hence $e\in f_\mathcal U(T_E)\setminus f_\mathcal V(T_E)$. This means that $f_\mathcal U\ne f_\mathcal V$ and consequently, $|\Enl(\mathsf F)|\ge|\beta(S)|\ge 2^{\mathfrak c}$, which implies that the compact space $\Enl(\mathsf F)$ is not metrizable (because each metrizable compact space has cardinality $\le\mathfrak c$).
\smallskip
(b) For each $T\in\mathsf F$ the subgroup $\Fix(T)$ has finite index in $X$.
Then each set $T\in\mathsf F$ has finite orbit $[T]=\{xT:x\in X\}$.
Consider the smallest left-invariant family $\bar{\mathsf F} =\bigcup_{T\in\mathsf F}[T]$ that contains $\mathsf F$. By Proposition~\ref{p13.4}, the family $\bar{\mathsf F}$ is $\{\emptyset\}$-independent. Since each orbit $[T]$, $T\in\bar{\mathsf F}$, is finite and $\mathsf F$ is uncountable, the orbit space
$[\bar{\mathsf F}]=\{[T]:T\in\mathsf F\}$ also is uncountable. It follows from Theorem~\ref{t11.1} that the space $\Enl(\bar{\mathsf F})$ is homeomorphic to the product $\prod_{[T]\in[\bar{\mathsf F}]}\Enl([T])$ where each space $\Enl([T])$ contains at least two equivariant functions: identity $i:[T]\to[T],\; i:A\mapsto A$ and antipodal $\alpha:[T]\to[T]$, $\alpha:A\mapsto X\setminus A$. Since the orbit space $[\bar{\mathsf F}]$ is uncountable, the product $\prod_{[T]\in[\bar{\mathsf F}]}\Enl([T])$ is non-metrizable and so is its topological copy $\Enl(\bar{\mathsf F})$.
It remains to observe that the restriction map $R:\Enl(\bar{\mathsf F} )\to\Enl(\mathsf F)$ is injective and thus a homeomorphism. Indeed, given two distinct equivariant functions $f,g\in \Enl(\bar{\mathsf F})$, we can find a set $A\in\bar{\mathsf F}$ with $f(A)\ne g(A)$. Since $[A]\cap \mathsf F\ne\emptyset$, there is $x\in X$ such that $xA\in\mathsf F$. Then $f(xA)=xf(A)\ne xg(A)=g(xA)$ and thus $f|\mathsf F\ne g|\mathsf F$.
\end{proof}
The following proposition characterizes groups containing only
countably many twin subsets. Following \cite{BGN}, we define a
group $X$ to be {\em odd} if each element $x\in X$ has odd order.
\begin{proposition}\label{p16.5}
The family $\Tau$ of twin subsets of a group $X$ is at most
countable if and only if each subgroup of infinite index in $X$ is
odd.
\end{proposition}
\begin{proof}
Assume that each subgroup of infinite index in $X$ is odd. We
claim that for every $A\in\Tau$ the subgroup $\Fix(A)$ has finite
index in $X$. Take any point $c\in\Fix^-(A)$ and consider the
cyclic subgroup $c^\mathbb Z=\{c^n:n\in\mathbb Z\}$ generated by $c$. The
subgroup $c^\mathbb Z$ has finite index in $X$, being non-odd. Since
$c^{2\mathbb Z}=\{c^{2n}:n\in\mathbb Z\}\subset\Fix(A)$, we conclude that
$\Fix(A)$ also has finite index in $X$.
Next, we show that the family $\Tau$ is at most
countable. This is trivially true if $\Tau=\emptyset$. If
$\Tau\ne\emptyset$, then we can take any $A\in\Tau$ and choose a
point $c\in\Fix^-(A)$. The cyclic subgroup $c^\mathbb Z$ generated by
$c$ is not odd and hence has finite index in $X$. Consequently,
the group $X$ is at most countable. Now it remains to check that
for every $x\in X$ the set $\Tau_x=\{A\in\Tau: x\in\Fix^-(A)\}$ is
finite. If the set $\Tau_x$ is not empty, then the cyclic subgroup
$x^\mathbb Z$ generated by $x$ is not odd and hence has finite index in
$X$. Consider the subgroup $x^{2\mathbb Z}$ of index 2 in $x^\mathbb Z$. It is
clear that $x^{2\mathbb Z}\subset\Fix(A)$ for every $A\in\Tau_x$. Let $S\subset X$ be a finite
set containing the neutral element of $X$ and meeting each coset
$x^{2\mathbb Z} z$, $z\in X$, at a single point. It follows from
$x^{2\mathbb Z}\subset\Fix(A)$ that $A=x^{2\mathbb Z}\cdot(S\cap A)$ and
consequently $|\Tau_x|\le 2^{|S|}<\infty$.
\smallskip
Now assume that some subgroup $H$ of infinite index in $X$ is not
odd. Then $H$ contains an element $c\in H$ such that the sets
$c^{2\mathbb Z}=\{c^{2n}:n\in\mathbb Z\}$ and
$c^{2\mathbb Z+1}=\{c^{2n+1}:n\in\mathbb Z\}$ are disjoint. The union
$c^{2\mathbb Z}\cup c^{2\mathbb Z+1}$ coincides with the cyclic subgroup
$c^\mathbb Z$ of $H$ generated by $c$. Find a set $S\subset X$ that
intersects each coset $c^\mathbb Z x$, $x\in X$, at a single point.
Since $c^\mathbb Z\subset H$ has infinite index in $X$, the set $S$ is
infinite. Now observe that for every $E\subset S$ the union
$$T_E=c^{2\mathbb Z}\cdot E\cup c^{2\mathbb Z+1}\cdot(S\setminus E)$$ is a
twin set with $c\in \Fix^-(T_E)$. Consequently,
$\Tau\supset\{T_E:E\subset S\}$ has cardinality
$$|\Tau|\ge|\{T_E:E\subset S\}|\ge |2^S|\ge \mathfrak
c>\aleph_0.$$
\end{proof}
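To illustrate Proposition~\ref{p16.5}, we consider two simple abelian examples, written additively; they are ours and are not used elsewhere in the text. In the group $X=\mathbb Z$ the unique subgroup of infinite index is the trivial subgroup $\{0\}$, which is odd, so the family $\Tau$ is at most countable. This can also be seen directly: if $c+A=\mathbb Z\setminus A$ for some $c\in\mathbb Z$, then $c\ne 0$ and $2c+A=c+(\mathbb Z\setminus A)=\mathbb Z\setminus(c+A)=A$, so $A$ is a union of cosets of $2c\mathbb Z$ and for each $c$ there are at most $2^{|2c|}$ such sets. On the other hand, the group $X=\mathbb Z\times\mathbb Z$ contains the subgroup $H=\mathbb Z\times\{0\}$ of infinite index, which is not odd since its nonzero elements have infinite order. Taking $c=(1,0)$ and the transversal $S=\{0\}\times\mathbb Z$ of the subgroup $c^\mathbb Z=H$, the construction from the proof produces continuum many pairwise distinct twin sets
$$T_E=\big(2\mathbb Z\times\{0\}+E\big)\cup\big((2\mathbb Z+1)\times\{0\}+(S\setminus E)\big),\quad E\subset S.$$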
Now we shall apply the above results to the minimal upper subfamilies $\Tau_{K}$ with $K\in\wht{\mathcal K}$.
By Theorem~\ref{t14.1}(1), for a maximal 2-cogroup $K$ in a group $X$ minimal left ideals of $\End(\Tau_K)$ are metrizable if and only if $|X/K|\le\aleph_0$. The metrizability of the whole semigroup $\End(\Tau_K)$ is equivalent to $|X/K|<\aleph_0$.
\begin{theorem}\label{t16.6} For a maximal 2-cogroup $K$ of a group $X$ the following conditions are equivalent:
\begin{enumerate}
\item[\textup{(1)}] $\Enl(\Tau_K)$ is metrizable;
\item[\textup{(2)}] $\Enl(\Tau_K)$ is a semitopological semigroup;
\item[\textup{(3)}] $\Enl(\Tau_K)$ is a finite semigroup;
\item[\textup{(4)}] $\Enl(\Tau_K)$ is isomorphic to $C_{2^k}\wr m^m$ or $Q_{2^k}\wr m^m$ for some $1\le k\le m<\infty$;
\item[\textup{(5)}] $K$ has finite index in $X$.
\end{enumerate}
\end{theorem}
\begin{proof}
The implications $(1)\Rightarrow(2)\Rightarrow(5)$ follow from Theorems~\ref{t16.4} and \ref{t16.1}.
\smallskip
$(5)\Rightarrow(4)$. Assume that $K$ has finite index in $X$. Then the characteristic group ${\mathcal H}(K)$ of $K$ is finite and hence is isomorphic to $C_{2^k}$ or $Q_{2^k}$ for some $k\in\mathbb N$, see Theorem~\ref{t8.2}. Also the set $\Tau_K$ is finite and so is the orbit space $[\Tau_K]$.
By Theorem~\ref{t14.1}(3), the semigroup $\Enl(\Tau_K)$ is isomorphic to ${\mathcal H}(K)\wr[\Tau_K]^{[\Tau_K]}$ and the latter semigroup is isomorphic to $C_{2^k}\wr m^m$ or $Q_{2^k}\wr m^m$ for $m=|[\Tau_K]|$.
\smallskip
The implications $(4)\Rightarrow(3)\Rightarrow(1)$ are trivial.
\end{proof}
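For a concrete instance of Theorem~\ref{t16.6} (this illustration is ours, written additively), take $X=\mathbb Z$ and $K=2\mathbb Z+1$. Then $K+K=2\mathbb Z$ is disjoint from $K$ and $K\cup(K+K)=\mathbb Z$, so $K$ is a maximal 2-cogroup of finite index in $\mathbb Z$. A twin set $A$ with $\Fix^-(A)=K$ satisfies $1+A=\mathbb Z\setminus A$ and $2+A=A$, which leaves only two possibilities:
$$\Tau_K=\{2\mathbb Z,\;2\mathbb Z+1\}.$$
Since $1+2\mathbb Z=2\mathbb Z+1$, this is a single orbit, and an equivariant function of $\Tau_K$ is determined by its value on $2\mathbb Z$. Hence $\Enl(\Tau_K)$ consists of the identity and the antipodal map $A\mapsto\mathbb Z\setminus A$ and is a two-element group isomorphic to $C_2$, in agreement with condition (4) for $k=m=1$, as $C_{2}\wr 1^1\cong C_2$.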
\section{Constructing nice idempotents in the semigroup $\Enl(\mathsf P(X))$}
In this section we prove the existence of some special idempotents in
the semigroup\break $\Enl(\mathsf P(X))$. These idempotents will help us to describe the structure of the minimal ideals of the semigroups $\Enl(\mathsf P(X))$ and $\lambda(X)$ in Theorem~\ref{t17.1} and Corollary~\ref{c13.2}.
In this section we assume that $\mathcal I$ is a left-invariant ideal in a group $X$.
We recall that $\mathsf{pT}^\mathcal I$ and $\Tau^\mathcal I$ denote the families of $\mathcal I$-pretwin and $\mathcal I$-twin subsets of $X$, respectively. A function $f:\mathsf F\to\mathsf P(X)$ defined on a subfamily $\mathsf F\subset \mathsf P(X)$ is called {\em $\mathcal I$-saturated} if $f(A)=f(B)$ for any sets $A=_\mathcal I B$ in $\mathsf F$.
\begin{proposition}\label{p16.1} There is an idempotent $e_{\mathcal I}\in\Enl(\mathsf P(X))$ such that
\begin{enumerate}
\item[\textup{(1)}] $e_{\mathcal I}(\mathsf P(X)\setminus \mathsf{pT}^\mathcal I)\subset\{\emptyset,X\}$;
\item[\textup{(2)}] $e_\mathcal I|\mathsf{pT}^\mathcal I=\mathrm{id}|\mathsf{pT}^\mathcal I$;
\item[\textup{(3)}] the function $e_\mathcal I$ restricted to $\mathsf P(X)\setminus\mathsf{pT}^\mathcal I$ is $\mathcal I$-saturated.
\end{enumerate}
\end{proposition}
\begin{proof} Consider the family $\inv[N]{}^\mathcal I_2(X)\subset\mathsf P^2(X)$ of left invariant $\mathcal I$-saturated linked systems on $X$, partially ordered by the inclusion relation. This set is not empty because it contains the invariant $\mathcal I$-saturated linked system $\{X\setminus A:A\in\mathcal I\}$. By Zorn's Lemma, the partially ordered set $\inv[N]{}^\mathcal I_2(X)$ contains a maximal element $\mathcal L$, which is a maximal invariant $\mathcal I$-saturated linked system on the group $X$. By the maximality, the system $\mathcal L$ is monotone.
Now consider the family
$$\mathcal L^\perp=\{A\subset X:\forall L\in\mathcal L\;\;(A\cap L\ne\emptyset)\}.$$
\begin{claim}\label{cl12.2} $\mathcal L^\perp\setminus\mathcal L\subset\mathsf{pT}^\mathcal I$.
\end{claim}
\begin{proof} Fix any set $A\in\mathcal L^\perp\setminus \mathcal L$. First we check that $xA\cap A\in\mathcal I$
for some $x\in X$. Assuming the converse, we would conclude that
the family
$\mathcal A=\{A'\subset X:\exists x\in X\;\;(A'=_\mathcal I xA)\}$ is invariant, $\mathcal I$-saturated and linked, and so is the union $\mathcal A\cup\mathcal L$, which is not possible by the maximality of $\mathcal L$. So, there is $x\in X$ with $xA\cap A\in\mathcal I$, which is equivalent to
$xA\subset_\mathcal I X\setminus A$.
Next, we find $y\in X$ such that $A\cup yA=_\mathcal I X$, which is equivalent to $X\setminus A\subset_\mathcal I yA$.
Assuming that no such point $y$ exists, we conclude that $xA\cup yA\neq_{\mathcal I} X$ for any
$x,y\in X$. Then $(X\setminus
xA)\cap (X\setminus yA)=X\setminus (xA\cup yA)\notin\mathcal I$, which
means that the family $\mathcal B=\{B\subset X:\exists x\in X\;\;(B=_\mathcal I X\setminus xA)\}$ is invariant $\mathcal I$-saturated and linked. We claim that $X\setminus A\in\mathcal L^\perp$.
Assuming the converse, we would conclude that
$X\setminus A$ misses some set $L\in\mathcal L$. Then $L\subset A$ and
hence $A\in\mathcal L$, which is not the case. Thus $X\setminus
A\in\mathcal L^\perp$. Since $\mathcal L$ is invariant and $\mathcal I$-saturated, $\mathcal B\subset\mathcal L^\perp$ and consequently, the union $\mathcal B\cup\mathcal L$, being an invariant $\mathcal I$-saturated linked system, coincides with $\mathcal L$. Then $X\setminus A\in\mathcal L$, which contradicts $A\in\mathcal L^\perp$. This contradiction shows that $X\setminus A\subset_\mathcal I yA$ for some $y\in X$.
Since $xA\subset_\mathcal I X\setminus A\subset_\mathcal I yA$, the set $A$ is $\mathcal I$-pretwin.
\end{proof}
Consider the function representation $\Phi_{\mathcal L}:\mathsf P(X)\to\mathsf P(X)$ of $\mathcal L$. By Propositions~\ref{p4.3} and \ref{p4.5}, the function $\Phi_\mathcal L$ is equivariant, monotone, $\mathcal I$-saturated, and $\Phi_\mathcal L(\mathsf P(X))\subset\{\emptyset,X\}$.
It is clear that the function $e_\mathcal I:\mathsf P(X)\to\mathsf P(X)$ defined by
$$e_\mathcal I(A)=\begin{cases}A&\mbox{if $A\in\mathsf{pT}^\mathcal I$},\\
\Phi_\mathcal L(A)&\mbox{otherwise}
\end{cases}
$$
has properties (1)--(3) of Proposition~\ref{p16.1}. It is also clear that $e_\mathcal I=e_\mathcal I\circ e_\mathcal I$ is an idempotent.
We claim that $e_\mathcal I\in\Enl(\mathsf P(X))$. By Corollary~\ref{c4.4}, we need to check that $e_\mathcal I$ is equivariant, monotone and symmetric. The equivariance of $e_\mathcal I$ follows from the equivariance of the maps $\Phi_\mathcal L$ and $\mathrm{id}$.
To show that $e_\mathcal I$ is monotone, take any two subsets $A\subset B$ of $X$ and consider four cases.
\smallskip
1) If $A,B\notin\mathsf{pT}^\mathcal I$, then $e_\mathcal I(A)=\Phi_\mathcal L(A)\subset\Phi_\mathcal L(B)=e_\mathcal I(B)$ by the monotonicity of the function representation $\Phi_\mathcal L$ of the monotone family $\mathcal L$.
\smallskip
2) If $A,B\in\mathsf{pT}^\mathcal I$, then $e_\mathcal I(A)=A\subset B=e_\mathcal I(B)$.
\smallskip
3) $A\in\mathsf{pT}^\mathcal I$ and $B\notin\mathsf{pT}^\mathcal I$. We claim that $B\in\mathcal L$. Assuming that $B\notin\mathcal L$ and applying Claim~\ref{cl12.2}, we get $B\notin\mathcal L^\perp$. Then $B$ does not intersect some set $L\in\mathcal L$ and then $A\cap L=\emptyset$. It follows that the set $X\setminus A\supset L$ belongs to the maximal invariant $\mathcal I$-saturated linked system $\mathcal L$ and so does the set $yA\supset_\mathcal I X\setminus A$ for some $y\in X$ (which exists as $A\in\mathsf{pT}^\mathcal I$). By the left-invariance of $\mathcal L$, we get $A\in\mathcal L$, which contradicts $X\setminus A\in\mathcal L$ and the linkedness of $\mathcal L$. This contradiction proves that $B\in\mathcal L$. In this case $e_\mathcal I(A)=A\subset X=\Phi_\mathcal L(B)=e_\mathcal I(B)$.
\smallskip
4) $A\notin \mathsf{pT}^\mathcal I$ and $B\in\mathsf{pT}^\mathcal I$. In this case we prove that $A\notin\mathcal L$. Assuming conversely that $A\in\mathcal L$, we get $B\in\mathcal L$. Since $B\in\mathsf{pT}^\mathcal I$, there is a point $x\in X$ with $xB\subset_\mathcal I X\setminus B$. Since $\mathcal L$ is left-invariant, monotone and $\mathcal I$-saturated, we conclude that $X\setminus B\in\mathcal L$ which contradicts $B\in\mathcal L$. Thus $A\notin\mathcal L$ and $e_\mathcal I(A)=\Phi_\mathcal L(A)=\emptyset\subset e_\mathcal I(B)$.
\smallskip
Finally, we show that the function $e_\mathcal I$ is symmetric. If $A\in \mathsf{pT}^\mathcal I$, then $X\setminus A\in\mathsf{pT}^\mathcal I$ and then $e_\mathcal I(X\setminus A)=X\setminus A=X\setminus e_\mathcal I(A)$.
Next, assume that $A\notin\mathsf{pT}^\mathcal I$. If $A\in\mathcal L$, then $X\setminus A\notin \mathcal L$ by the linkedness of $\mathcal L$. In this case $e_\mathcal I(X\setminus A)=\emptyset=X\setminus X=X\setminus e_\mathcal I(A)$.
If $A\notin\mathcal L$, then by Claim~\ref{cl12.2}, $A\notin\mathcal L^\perp$ and thus $A$ is disjoint with some set $L\in\mathcal L$, which implies that $X\setminus A\in\mathcal L$. Then $e_\mathcal I(X\setminus A)=\Phi_\mathcal L(X\setminus A)=X=X\setminus\emptyset=X\setminus\Phi_\mathcal L(A)=X\setminus e_\mathcal I(A)$.
\end{proof}
Our second special idempotent depends on a subfamily $\widetilde{\Tau}$ of the family
$$\wht{\Tau}=\{A\in\Tau:\Fix^-(A)\in\wht{\mathcal K}\}$$of twin sets with maximal 2-cogroup.
\begin{theorem}\label{t16.3} If the ideal $\mathcal I$ is twinic, then for any $\mathcal I$-independent $\wht{\mathcal K}$-covering subfamily $\widetilde{\Tau}\subset \wht{\Tau}$ there is an idempotent ${e_{\widetilde{\Tau}}}\in\Enl^\mathcal I(\mathsf P(X))$ such that
\begin{enumerate}
\item[\textup{(1)}] ${e_{\widetilde{\Tau}}}(\mathsf P(X)\setminus \Tau^\mathcal I)\subset\{\emptyset,X\}$;
\item[\textup{(2)}] ${e_{\widetilde{\Tau}}}(\Tau^\mathcal I)=\widetilde{\Tau}$;
\item[\textup{(3)}] ${e_{\widetilde{\Tau}}}|\{\emptyset,X\}\cup\widetilde{\Tau}=\mathrm{id}$.
\end{enumerate}
\end{theorem}
\begin{proof} Let $e_\mathcal I:\mathsf P(X)\to\{\emptyset,X\}\cup\mathsf{pT}^\mathcal I$ be the idempotent from Proposition~\ref{p16.1}. Since the ideal $\mathcal I$ is twinic, $\mathsf{pT}^\mathcal I=\Tau^\mathcal I$. The idempotent ${e_{\widetilde{\Tau}}}$ will be defined as the composition ${e_{\widetilde{\Tau}}}=\varphi\circ e_\mathcal I$ where
$\varphi:\{\emptyset,X\}\cup\Tau^\mathcal I\to\{\emptyset,X\}\cup\widetilde{\Tau}$ is an equivariant $\mathcal I$-saturated function such that
\begin{enumerate}
\item[\textup{(1)}] $\varphi\circ\varphi=\varphi$;
\item[\textup{(2)}] $\varphi|\{\emptyset,X\}\cup\widetilde{\Tau}=\mathrm{id}$;
\item[\textup{(3)}] $\varphi(\Tau^\mathcal I)\subset\widetilde{\Tau}$;
\item[\textup{(4)}] $\I\mbox{-}\Fix^-(A)\subset\Fix^-(\varphi(A))$ for all $A\in\Tau^\mathcal I$.
\end{enumerate}
To construct such a function $\varphi$, consider the family $\F$ of all possible functions $\varphi:D_\varphi\to\{\emptyset,X\}\cup\widetilde{\Tau}$ such that
\begin{itemize}
\item[\textup{(a)}] $\{\emptyset,X\}\cup\widetilde{\Tau}\subset D_\varphi\subset\{\emptyset,X\}\cup\Tau^\mathcal I$;
\item[\textup{(b)}] the set $D_\varphi$ is left-invariant;
\item[\textup{(c)}] $\varphi$ is equivariant and $\mathcal I$-saturated;
\item[\textup{(d)}] $\varphi|\{\emptyset,X\}\cup\widetilde{\Tau}=\mathrm{id}$;
\item[\textup{(e)}] $\I\mbox{-}\Fix^-(A)\subset\Fix^-(\varphi(A))$ for all $A\in D_\varphi$.
\end{itemize}
The family $\F$ is partially ordered by the relation
$\varphi\le\psi$ defined by $\psi|D_\varphi=\varphi$.
The set $\F$ is not empty because it contains the identity function $\mathrm{id}$ of $\{\emptyset,X\}\cup\widetilde{\Tau}$, which is $\mathcal I$-saturated because of the $\mathcal I$-independence of the family $\widetilde{\Tau}$.
By Zorn's Lemma, the family $\F$ contains a maximal element $\varphi:D_\varphi\to\{\emptyset,X\}\cup\widetilde{\Tau}$. We claim that $D_\varphi=\{\emptyset,X\}\cup\Tau^\mathcal I$. Assuming the converse, fix a set $A\in \Tau^\mathcal I\setminus D_\varphi$ and define a family $D_\psi=D_\varphi\cup\{xA:x\in X\}$. Next, we shall extend the function $\varphi$ to a function $\psi:D_\psi\to\{\emptyset,X\}\cup\widetilde{\Tau}$.
We consider two cases.
1) Assume that $A=_\mathcal I B$ for some $B\in D_\varphi$. Then also $xA=_\mathcal I xB$ for all $x\in X$. In this case we define the function $\psi:D_\psi\to\{\emptyset,X\}\cup\widetilde{\Tau}$ assigning to each set $C\in D_\psi$ the set $\varphi(D)$ where $D\in D_\varphi$ is any set with $D=_\mathcal I C$. It can be shown that the function $\psi:D_\psi\to\{\emptyset,X\}\cup\widetilde{\Tau}$
belongs to the family $\F$, which contradicts the maximality of $\varphi$.
2) Assume that $A\ne_\mathcal I B$ for all $B\in D_\varphi$. By Proposition~\ref{p7.3}, the 2-cogroup $\I\mbox{-}\Fix^-(A)$ lies in a maximal 2-cogroup $K\in\wht{\mathcal K}$.
Since the family $\widetilde{\Tau}$ is $\wht{\mathcal K}$-covering, there is a twin set $B\in\widetilde{\Tau}$ such that $\Fix^-(B)=K$. In this case define the function $\psi:D_\psi\to\{\emptyset,X\}\cup\widetilde{\Tau}$ by the formula
$$\psi(C)=\begin{cases} \varphi(C)&\mbox{if $C\in D_\varphi$};\\
xB&\mbox{if $C=_\mathcal I xA$ for some $x\in X$}.
\end{cases}
$$
If $xA=_\mathcal I yA$ for some $x,y\in X$, then $y^{-1}x\in \I\mbox{-}\Fix^-(A)\subset K=\Fix^-(B)$ and thus $xB=yB$, which means that the function $\psi$ is well-defined and $\mathcal I$-saturated. Also it is clear that $\psi$ is equivariant and hence belongs to the family $\F$, which is forbidden by the maximality of $\varphi$.
Thus the maximal function $\varphi$ is defined on $D_\varphi=\{\emptyset,X\}\cup\Tau^\mathcal I$ and we can put ${e_{\widetilde{\Tau}}}=\varphi\circ e_\mathcal I$ where $e_\mathcal I:\mathsf P(X)\to\{\emptyset,X\}\cup\mathsf{pT}^\mathcal I=\{\emptyset,X\}\cup\Tau^\mathcal I$ is the idempotent constructed in Proposition~\ref{p16.1}. It follows from the properties of the functions $\varphi$ and $e_\mathcal I$ that the function ${e_{\widetilde{\Tau}}}$ is equivariant and $\mathcal I$-saturated. Since the ideal $\mathcal I$ is twinic, the family $\Tau^\mathcal I=\mathsf{pT}^\mathcal I$ is $\mathcal I$-incomparable (by Proposition~\ref{p13.1}) and hence the monotonicity of the function $\varphi$ follows automatically from its $\mathcal I$-saturated property.
Then ${e_{\widetilde{\Tau}}}$ is monotone as the composition of two monotone functions.
By Corollary~\ref{c4.4}, $e_{\widetilde{\Tau}}\in\Enl(\mathsf P(X))$.
\end{proof}
Theorem~\ref{t16.3} and Proposition~\ref{p13.5} imply:
\begin{corollary}\label{c16.4} If the ideal $\mathcal I$ is twinic, then for each minimal $\wht{\mathcal K}$-covering family $\widetilde{\Tau}\subset\wht{\Tau}$ there is an idempotent ${e_{\widetilde{\Tau}}}\in\Enl^\mathcal I(\mathsf P(X))$ such that
\begin{enumerate}
\item[\textup{(1)}] ${e_{\widetilde{\Tau}}}(\mathsf P(X)\setminus \Tau^\mathcal I)\subset\{\emptyset,X\}$;
\item[\textup{(2)}] $e_{\widetilde{\Tau}}(\Tau^\mathcal I)=\widetilde{\Tau}$;
\item[\textup{(3)}] ${e_{\widetilde{\Tau}}}|\{\emptyset,X\}\cup\widetilde{\Tau}=\mathrm{id}$.
\end{enumerate}
\end{corollary}
\section{The minimal ideal of the semigroups
$\lambda(X)$ and $\Enl(\mathsf P(X))$}\label{s17}
In this section we apply Corollary~\ref{c16.4} to describe the structure of the minimal ideals of the semigroups $\lambda(X)$ and $\Enl(\mathsf P(X))$.
\begin{theorem}\label{t17.1} For a twinic group $X$ a function $f\in\Enl(\mathsf P(X))$ belongs to the minimal ideal $\mathsf K(\Enl(\mathsf P(X)))$ of the semigroup $\Enl(\mathsf P(X))$ if and only if the following two conditions hold:
\begin{enumerate}
\item[\textup{(1)}] the family $f(\wht{\Tau})$ is minimal $\wht{\mathcal K}$-covering;
\item[\textup{(2)}] $f(\mathsf P(X))\subset\{\emptyset,X\}\cup f(\wht\Tau)$.
\end{enumerate}
\end{theorem}
\begin{proof} Let $\widetilde{\Tau}\subset\wht{\Tau}$ be a minimal $\wht\mathcal K$-covering left-invariant family and $e_{\widetilde{\Tau}}\in\Enl(\mathsf P(X))$ be an idempotent satisfying the conditions (1)--(3) of Corollary~\ref{c16.4}. By Propositions~\ref{p13.1} and \ref{p13.5}, the family $\widetilde{\Tau}$ is $\mathcal I$-incomparable and $\mathcal I$-independent for any twinic ideal $\mathcal I$ on $X$.
\smallskip
To prove the ``if'' part of the theorem, assume that $f$ satisfies the conditions (1), (2).
To show that $f$ belongs to the minimal ideal $\mathsf K(\Enl(\mathsf P(X)))$, it suffices for each $g\in\Enl(\mathsf P(X))$ to find $h\in\Enl(\mathsf P(X))$ such that $h\circ g\circ f=f$.
The minimality and the left-invariance of the $\wht{\mathcal K}$-covering subfamily $f(\wht{\Tau})$ imply that the equivariant function $\psi=e_{\widetilde{\Tau}}\circ g|\{\emptyset,X\}\cup f(\wht{\Tau}):\{\emptyset,X\}\cup f(\wht{\Tau})\to\{\emptyset,X\}\cup\widetilde{\Tau}$ is bijective. So, we can consider the inverse function $\psi^{-1}:\{\emptyset,X\}\cup \widetilde{\Tau}\to\{\emptyset,X\}\cup f(\wht{\Tau})$ such that $\psi^{-1}\circ \psi=\mathrm{id}|\{\emptyset,X\}\cup f(\wht{\Tau})$.
This function is equivariant, symmetric, and monotone because so is $\psi$ and the family $\widetilde{\Tau}$ is $\mathcal I$-incomparable and $\mathcal I$-independent.
Then the function $\varphi=\psi^{-1}\circ e_{\widetilde{\Tau}}:\mathsf P(X)\to\{\emptyset,X\}\cup f(\wht{\Tau})$ is well-defined and belongs to $\Enl^{\mathcal I}(\mathsf P(X))$ by Corollary~\ref{c4.4}. Since
$$(\varphi\circ e_{\widetilde{\Tau}})\circ g\circ f=\psi^{-1}\circ e_{\widetilde\Tau}\circ e_{\widetilde\Tau}\circ g\circ f=\psi^{-1}\circ e_{\widetilde\Tau}\circ g\circ f=\psi^{-1}\circ \psi\circ f=f,$$ the function $f$ belongs to the minimal ideal of the semigroup $\Enl(\mathsf P(X))$.
\smallskip
To prove the ``only if'' part, take any function $f\in \mathsf K(\Enl(\mathsf P(X)))$ and for the idempotent $e_{\widetilde{\Tau}}\in\Enl(\mathsf P(X))$ find a function $g\in\Enl(\mathsf P(X))$ such that $f=g\circ e_{\widetilde{\Tau}}\circ f$.
Now the properties (1), (2) of the function $f$ follow from the corresponding properties of the idempotent $e_{\widetilde{\Tau}}$.
\end{proof}
Since the superextension $\lambda(X)$ of a group $X$ is topologically isomorphic to the semigroup $\Enl(\mathsf P(X))$, Theorem~\ref{t17.1} implies the following description of the minimal ideal $\mathsf K(\lambda(X))$ of $\lambda(X)$.
\begin{corollary}\label{c13.2} For a twinic group $X$ a maximal linked system $\mathcal L\in\lambda(X)$ belongs to the minimal ideal $\mathsf K(\lambda(X))$ of the superextension $\lambda(X)$ if and only if its function representation $\Phi_\mathcal L$ satisfies two conditions:
\begin{enumerate}
\item[\textup{(1)}] the family $\Phi_\mathcal L(\wht{\Tau})$ is minimal $\wht{\mathcal K}$-covering;
\item[\textup{(2)}] $\Phi_\mathcal L(\mathsf P(X))\subset\{\emptyset,X\}\cup \Phi_\mathcal L(\wht\Tau)$.
\end{enumerate}
\end{corollary}
\section{Minimal left ideals of superextensions of twinic groups}\label{s18}
After elaborating the necessary tools in Sections~\ref{s4}--\ref{s17}, we now return to describing the structure of minimal left ideals of the superextension $\lambda(X)$ of a twinic group $X$. In this section we assume that $X$ is a group.
Our first aim is to show that if $X$ is twinic, then the restriction operator $R_{\wht\Tau}:\Enl(\mathsf P(X))\to\Enl(\wht{\Tau})$ is injective on all minimal left ideals of the semigroup $\Enl(\mathsf P(X))$. Since $\wht{\Tau}=\bigcup_{K\in\wht{\mathcal K}}\Tau_{K}$, Proposition~\ref{p12.3} implies that the family $\wht{\Tau}$ is $\lambda$-invariant and hence $\Enl(\wht\Tau)$ is a compact right-topological semigroup. For each
left-invariant ideal $\mathcal I$ on the group $X$ the semigroup $\Enl(\wht\Tau)$ contains a left ideal $\Enl^\mathcal I(\wht\Tau)$ consisting of all left-invariant monotone $\mathcal I$-saturated functions, see Theorem~\ref{t4.8}. If $\mathcal I$ is a twinic ideal with $\mathcal I\cap\wht\mathcal K=\emptyset$, then the family $\wht\Tau$ is $\mathcal I$-independent (see Proposition~\ref{p13.3}) and hence $\Enl^\mathcal I(\wht{\Tau})=\Enl(\wht{\Tau})$.
\begin{proposition}\label{p18.1} If the group $X$ is twinic, then the restriction operator $$R_{\wht{\Tau}}:\Enl(\mathsf P(X))\to\Enl(\wht{\Tau}),\;R_{\wht\Tau}:f\mapsto f|\wht\Tau,$$
is injective on each minimal left ideal of the semigroup $\Enl(\mathsf P(X))$. If ${\I\!\!\I}\cap\wht{\mathcal K}=\emptyset$, then for some idempotent $ e_{\wht\Tau}\in\Enl^{\I\!\!\I}(\mathsf P(X))$ the restriction $R_{\wht{\Tau}}|\Enl(\mathsf P(X))\circ e_{\wht\Tau}$ is a topological isomorphism between the principal left ideal $\Enl(\mathsf P(X))\circ e_{\wht\Tau}$ and $\Enl(\wht{\Tau})$.
\end{proposition}
\begin{proof} Let $\widetilde{\Tau}\subset\wht{\Tau}$ be any minimal $\wht{\mathcal K}$-covering left-invariant subfamily and let $\mathcal I$ be a left-invariant twinic ideal on the group $X$ (such an ideal exists because $X$ is twinic). By Proposition~\ref{p13.5}, the family $\widetilde{\Tau}$ is $\mathcal I$-independent. By Theorem~\ref{t16.3},
there is an idempotent $e_{\widetilde{\Tau}}\in\Enl^\mathcal I(\mathsf P(X))$ such that $e_{\widetilde{\Tau}}(\mathsf P(X)\setminus\Tau^\mathcal I)\subset\{\emptyset,X\}$ and $e_{\widetilde{\Tau}}(\Tau^\mathcal I)=\widetilde{\Tau}\subset\wht{\Tau}$. The latter property of $e_{\widetilde{\Tau}}$ implies that the restriction operator $R_{\wht{\Tau}}$ is injective on the principal left ideal $\Enl(\mathsf P(X))\circ e_{\widetilde{\Tau}}$ and consequently, is
injective on each minimal left ideal of the semigroup $\Enl(\mathsf P(X))$ according to Corollary~\ref{c2.2}.
If ${\I\!\!\I}\cap\wht{\mathcal K}=\emptyset$, then by Proposition~\ref{p13.3} the family $\wht{\Tau}$ is ${\I\!\!\I}$-independent and we can repeat the above argument for the idempotent $e_{\wht{\Tau}}$.
\end{proof}
Now let us look at the structure of the semigroup $\Enl(\wht{\Tau})$.
Observe that $\wht{\Tau}=\bigcup_{[K]\in[\wht{\mathcal K}]}\Tau_{[K]}$, where $\Tau_{[K]}=\{A\subset X:\Fix^-(A)\in[K]\}$, $[\wht\mathcal K]=\{[K]:K\in\wht\mathcal K\}$ and $[K]=\{xKx^{-1}:x\in X\}$ for $K\in\wht\mathcal K$.
It follows that the restriction operators $R_{\Tau_{[K]}}:\Enl(\wht{\Tau})\to\Enl(\Tau_{[K]})$, $[K]\in[\wht{\mathcal K}]$, compose an injective semigroup homomorphism
$$R_{\Tau_{\![\wht\mathcal K]}}:\Enl(\wht{\Tau})\to\prod_{[K]\in[\wht{\mathcal K}]}\Enl(\Tau_{[K]}),\;\;R_{\Tau_{\![\wht\mathcal K]}}:\varphi\mapsto (\varphi|\Tau_{[K]})_{[K]\in[\wht{\mathcal K}]}.$$
Theorem~\ref{t11.1} implies
\begin{lemma}\label{l18.2} For any twinic ideal $\mathcal I$ on the group $X$ we get $R_{\Tau_{\![\wht\mathcal K]}}(\Enl^\mathcal I(\wht{\Tau}))=
\prod_{[K]\in[\wht{\mathcal K}]}\Enl^\mathcal I(\Tau_{[K]}).$
\end{lemma}
Next, we study the structure of the semigroups $\Enl^\mathcal I(\Tau_{[K]})$ for $[K]\in[\wht{\mathcal K}]$.
\begin{lemma}\label{l18.3} For any maximal 2-cogroup $K\in\wht\mathcal K$ the restriction map
$$R_{\Tau_K}:\Enl(\Tau_{[K]})\to\Enl(\Tau_K),\;\;R_{\Tau_K}:\varphi\mapsto \varphi|\Tau_K,$$is a topological isomorphism.
\end{lemma}
\begin{proof} Because of the compactness of the semigroup $\Enl(\Tau_{[K]})$ it suffices to check that the restriction operator $R_{\Tau_K}:\Enl(\Tau_{[K]})\to \Enl(\Tau_K)$
is one-to-one. Given two distinct functions $f,g\in\Enl(\Tau_{[K]})$, find a twin set $A\in\Tau_{[K]}$ such that $f(A)\ne g(A)$. Since $\Fix^-(A)\in[K]$, there is a point $x\in X$ such that $\Fix^-(xA)=x\,\Fix^-(A)\,x^{-1}=K$. By Proposition~\ref{p5.4}, $xA\in\Tau_K$ and $f(xA)=xf(A)\ne xg(A)=g(xA)$ witnessing that $f|\Tau_K\ne g|\Tau_K$.
\end{proof}
A subfamily $\widetilde\mathcal K\subset\wht\mathcal K$ is called a {\em $[\wht\mathcal K]$-selector} if $\widetilde\mathcal K$ has one-point intersection with each orbit $[K]=\{xKx^{-1}:x\in X\}$, $K\in\wht\mathcal K$. In the following theorems we assume that $\widetilde\mathcal K\subset\wht\mathcal K$ is a $[\wht\mathcal K]$-selector.
The preceding discussion culminates in the following theorem, which can be considered the main result of this paper.
\begin{theorem} Given a $[\wht\mathcal K]$-selector $\widetilde\mathcal K\subset\wht\mathcal K$, consider the operator $$R_{\widetilde\mathcal K}:\Enl(\mathsf P(X))\to\prod_{K\in\widetilde\mathcal K}\End(\Tau_K),\;\;
R_{\widetilde\mathcal K}:f\mapsto (f|\Tau_K)_{K\in\widetilde\mathcal K}.$$If $\mathcal I$ is a left-invariant twinic ideal on $X$, then
\begin{enumerate}
\item[\textup{(1)}] $R_{\widetilde\mathcal K}\big(\Enl^\mathcal I(\mathsf P(X))\big)=\prod_{K\in\widetilde\mathcal K}\End^\mathcal I(\Tau_K)$;
\item[\textup{(2)}] the operator $R_{\widetilde\mathcal K}$ maps isomorphically each minimal left ideal of the semigroup $\Enl(\mathsf P(X))$ onto some minimal left ideal of the semigroup $\prod_{K\in\widetilde\mathcal K}\End(\Tau_K)$;
\item[\textup{(3)}] If $\mathcal I\cap\wht\mathcal K=\emptyset$, then for some idempotent $\hat e\in\Enl^\mathcal I(\mathsf P(X))$ and the principal left ideal $\mathsf L_{\hat e}=\Enl(\mathsf P(X))\circ\hat e$ the restriction
$$R_{\widetilde\mathcal K}|\mathsf L_{\hat e}:\mathsf L_{\hat e}\to \prod\limits_{K\in\widetilde\mathcal K}\End(\Tau_K)$$ is a topological isomorphism.
\end{enumerate}
\end{theorem}
\begin{proof} Write the operator $R_{\widetilde\mathcal K}$ as the composition $R_{\widetilde\mathcal K}=R^{\wht\Tau}_{\widetilde\mathcal K}\circ R_{\wht\Tau}$ of two operators:
$$R_{\wht\Tau}:\Enl(\mathsf P(X))\to\Enl(\wht{\Tau}),\;\; R_{\wht\Tau}:f\mapsto f|\wht\Tau,$$and
$$R^{\wht\Tau}_{\widetilde\mathcal K}:\Enl(\wht{\Tau})\to\prod_{K\in\widetilde\mathcal K}\Enl(\Tau_K),\;\;R_{\widetilde\mathcal K}^{\wht\Tau}:f\mapsto (f|\Tau_K)_{K\in\widetilde\mathcal K}.$$
By Lemma~\ref{l18.3}, the operator $R^{\wht\Tau}_{\widetilde\mathcal K}$ is injective.
\smallskip
1. It follows from Lemmas~\ref{l18.2} and \ref{l18.3} that
$$R_{\widetilde\mathcal K}(\Enl^\mathcal I(\mathsf P(X)))=R^{\wht\Tau}_{\widetilde\mathcal K}(\Enl^\mathcal I(\wht{\Tau}))=\prod_{K\in\widetilde\mathcal K}\End^\mathcal I(\Tau_K).$$
2. To prove the second item, fix any function $f\in \mathsf K(\Enl(\mathsf P(X)))$ and consider the minimal left ideal $\mathsf L_f=\Enl(\mathsf P(X))\circ f$. We need to show that $R_{\widetilde\mathcal K}(\mathsf L_f)$ is a minimal left ideal in $\prod_{K\in\widetilde\mathcal K}\End(\Tau_K)$.
To this end, pick any function $g\in \mathsf K(\Enl^\mathcal I(\mathsf P(X)))$ and consider the minimal left ideal $\mathsf L_g=\Enl(\mathsf P(X))\circ g=\Enl^\mathcal I(\mathsf P(X))\circ g$.
By Proposition~\ref{p18.1}, the operator $R_{\wht\Tau}:\Enl(\mathsf P(X))\to\Enl(\wht{\Tau})$ is injective on each minimal left ideal. Consequently, the operator $R_{\widetilde\mathcal K}=R_{\widetilde\mathcal K}^{\wht\Tau}\circ R_{\wht\Tau}$ is also injective on each minimal left ideal of the semigroup $\Enl(\mathsf P(X))$. In particular, $R_{\widetilde\mathcal K}$ is injective on the minimal left ideals $\mathsf L_f$ and $\mathsf L_g$. Since $\mathsf L_g$ is a minimal left ideal of the semigroup $\Enl^\mathcal I(\mathsf P(X))$, its image $R_{\wht\Tau}(\mathsf L_g)$ is a minimal left ideal of the semigroup $\Enl^\mathcal I(\wht\Tau)$.
Since, by Lemmas~\ref{l18.2} and \ref{l18.3}, $R^{\wht\Tau}_{\widetilde\mathcal K}$ maps the semigroup $\Enl^\mathcal I(\wht\Tau)$ isomorphically onto $\prod_{K\in\widetilde\mathcal K}\End^\mathcal I(\Tau_K)$, the image $R_{\widetilde\mathcal K}(\mathsf L_g)$ is a minimal left ideal of the semigroup
$\prod_{K\in\widetilde\mathcal K}\End^\mathcal I(\Tau_K)$. Since the latter semigroup is a left ideal in $\prod_{K\in\widetilde\mathcal K}\End(\Tau_K)$, the image $R_{\widetilde\mathcal K}(\mathsf L_g)$ remains a minimal left ideal of the semigroup $\prod_{K\in\widetilde\mathcal K}\End(\Tau_K)$.
This minimal left ideal is equal to the product $\prod_{K\in\widetilde\mathcal K}\mathsf L_{g_K}$ where $g_K=g|\Tau_K$ and $\mathsf L_{g_K}=\End(\Tau_K)\circ g_K$.
Because of the compactness of $\mathsf L_g$, the operator $R_{\widetilde\mathcal K}$ maps isomorphically the minimal left ideal $\mathsf L_g$ onto the minimal left ideal $\prod_{K\in\widetilde\mathcal K}\mathsf L_{g_K}$ of the semigroup $\prod_{K\in\widetilde\mathcal K}\End(\Tau_K)$.
Now let us look at the minimal left ideal $\mathsf L_f$. By Proposition~\ref{p2.1}, the right shift $r_f:\mathsf L_g\to\mathsf L_f$, $r_f:h\mapsto h\circ f$, is a homeomorphism. So, there is a function $\gamma\in \Enl(\mathsf P(X))$ such that $f=\gamma\circ g\circ f$.
For every $K\in\widetilde\mathcal K$ consider the restrictions $f_K=f|\Tau_K$ and $\gamma_K=\gamma|\Tau_K$, which belong to the semigroup $\End(\Tau_K)$.
It follows from $f=\gamma\circ g\circ f$ that $f_K=\gamma_K\circ g_K\circ f_K$. Since $g_K\in \mathsf K(\End(\Tau_K))$, we conclude that $f_K$ also belongs to the minimal ideal $\mathsf K(\End(\Tau_K))$.
Then $\mathsf L_{f_K}=\End(\Tau_K)\circ f_K$ and $\mathsf L_{g_K}=\End(\Tau_K)\circ g_K$ are minimal left ideals in $\End(\Tau_K)$.
By Proposition~\ref{p2.1}, the right shift $r_{f_K}:\mathsf L_{g_K}\to\mathsf L_{f_K}$, $r_{f_K}:h\mapsto h\circ f_K$, is a homeomorphism. The homeomorphisms $r_{f_K}$, $K\in\widetilde\mathcal K$, compose a homeomorphism
$$r_{f_{\widetilde\mathcal K}}:\prod_{K\in\widetilde\mathcal K}\mathsf L_{g_K}\to \prod_{K\in\widetilde\mathcal K}\mathsf L_{f_K},\;\;r_{f_{\widetilde\mathcal K}}:(h_K)_{K\in\widetilde\mathcal K}\mapsto (h_K\circ f_K)_{K\in\widetilde\mathcal K}.$$
Now consider the commutative diagram
$$\xymatrix{
\mathsf L_f\ar[r]^-{R_{\widetilde\mathcal K}|\mathsf L_f}&\prod_{K\in\widetilde\mathcal K}\mathsf L_{f_K}\\
\mathsf L_g\ar[u]^{r_f}\ar[r]_-{R_{\widetilde\mathcal K}|\mathsf L_g}&\prod_{K\in\widetilde\mathcal K}\mathsf L_{g_K}\ar[u]_{r_{f_{\widetilde\mathcal K}}}
}$$Since the maps $r_f$, $r_{f_{\widetilde\mathcal K}}$, and $R_{\widetilde\mathcal K}|\mathsf L_g$ are homeomorphisms, so is the map $R_{\widetilde\mathcal K}|\mathsf L_f$. Consequently, the operator $R_{\widetilde\mathcal K}$ maps isomorphically the minimal left ideal $\mathsf L_f=\Enl(\mathsf P(X))\circ f$ onto the minimal left ideal $\prod_{K\in\widetilde\mathcal K}\mathsf L_{f_K}=\prod_{K\in\widetilde\mathcal K}\End(\Tau_K)\circ(f|\Tau_K)$ of the semigroup $\prod_{K\in\widetilde\mathcal K}\End(\Tau_K)$.
\smallskip
3. Assume that $\mathcal I\cap\wht\mathcal K=\emptyset$. In this case $\Enl^\mathcal I(\wht\Tau)=\Enl(\wht\Tau)$ by Proposition~\ref{p13.3}. By Proposition~\ref{p18.1}, for some idempotent $\hat e\in\Enl^\mathcal I(\mathsf P(X))$ the operator $R_{\wht\Tau}$ maps isomorphically the principal left ideal $\mathsf L_{\hat e}=\Enl(\mathsf P(X))\circ\hat e$ onto $\Enl^\mathcal I(\wht\Tau)=\Enl(\wht\Tau)$. By Lemma~\ref{l18.2}, the operator $R^{\wht\Tau}_{\widetilde\mathcal K}:\Enl^\mathcal I(\wht\Tau)\to\prod_{K\in\widetilde\mathcal K}\End^\mathcal I(\Tau_K)=\prod_{K\in\widetilde\mathcal K}\End(\Tau_K)$ is an isomorphism. So, $R_{\widetilde\mathcal K}$ maps isomorphically the principal left ideal $\mathsf L_{\hat e}$ onto $\prod_{K\in\widetilde\mathcal K}\End(\Tau_K)$.
\end{proof}
Since the function representation $\Phi:\lambda(X)\to\Enl(\mathsf P(X))$, $\Phi:\mathcal L\mapsto\Phi_\mathcal L$, is a topological isomorphism, the preceding theorem implies:
\begin{corollary}\label{c18.5} Given a $[\wht\mathcal K]$-selector $\widetilde\mathcal K\subset\wht\mathcal K$, consider the continuous semigroup homomorphism $$\Phi_{\widetilde\mathcal K}:\lambda(X)\to\prod_{K\in\widetilde\mathcal K}\End(\Tau_K),\;\;
\Phi_{\widetilde\mathcal K}:\mathcal L\mapsto (\Phi_\mathcal L|\Tau_K)_{K\in\widetilde\mathcal K}.$$If the group $X$ is twinic, then
\begin{enumerate}
\item[\textup{(1)}] $\Phi_{\widetilde\mathcal K}\big(\lambda^{\I\!\!\I}(X)\big)=\prod_{K\in\widetilde\mathcal K}\End^{\I\!\!\I}(\Tau_K)$;
\item[\textup{(2)}] the homomorphism $\Phi_{\widetilde\mathcal K}$ maps isomorphically each minimal left ideal of the semigroup $\lambda(X)$ onto some minimal left ideal of the semigroup $\prod_{K\in\widetilde\mathcal K}\End(\Tau_K)$;
\item[\textup{(3)}] If ${\I\!\!\I}\cap\wht\mathcal K=\emptyset$, then for some idempotent $\mathcal E\in\lambda^{\I\!\!\I}(X)$ and the principal left ideal $\mathsf L_{\mathcal E}=\lambda(X)\circ\mathcal E$ the restriction
$$\Phi_{\widetilde\mathcal K}|\mathsf L_{\mathcal E}:\mathsf L_{\mathcal E}\to \prod\limits_{K\in\widetilde\mathcal K}\End(\Tau_K)$$ is a topological isomorphism.
\end{enumerate}
\end{corollary}
\begin{corollary}\label{c18.6} If the group $X$ is twinic, then each minimal left ideal of $\lambda(X)$ is topologically isomorphic to a minimal left ideal of $\prod_{K\in\widetilde\mathcal K}\End(\Tau_K)$ and each minimal left ideal of $\prod_{K\in\widetilde\mathcal K}\End^{\I\!\!\I}(\Tau_K)$ is topologically isomorphic to a minimal left ideal of $\lambda^{\I\!\!\I}(X)$.
\end{corollary}
\begin{proof} Corollary~\ref{c18.5}(2) implies that each minimal left ideal of $\lambda(X)$ is topologically isomorphic to a minimal left ideal of $\prod_{K\in\widetilde\mathcal K}\End(\Tau_K)$. Now assume that $\mathsf L$ is a minimal left ideal of the semigroup $\prod_{K\in\widetilde\mathcal K}\End^{\I\!\!\I}(\Tau_K)$. It follows from Corollary~\ref{c18.5}(1) that the preimage $\Phi_{\widetilde\mathcal K}^{-1}(\mathsf L)$ is a left ideal in $\lambda^{\I\!\!\I}(X)$ and hence a left ideal in $\lambda(X)$. This left ideal contains some minimal left ideal $\mathsf L_\lambda$ whose image $\Phi_{\widetilde\mathcal K}(\mathsf L_\lambda)$, being a left ideal contained in the minimal left ideal $\mathsf L$, coincides with $\mathsf L$. By Corollary~\ref{c18.5}(2), the map $\Phi_{\widetilde\mathcal K}|\mathsf L_\lambda:\mathsf L_\lambda\to\mathsf L$ is injective and, by the compactness of $\mathsf L_\lambda$, is a topological isomorphism.
\end{proof}
\begin{theorem}\label{t18.7} Let $X$ be a twinic group, $\widetilde\mathcal K\subset\wht{\mathcal K}$ be a $[\wht\mathcal K]$-selector, and $\mathcal E\in\lambda(X)$ be a minimal idempotent.
\begin{enumerate}
\item[\textup{(1)}] The maximal subgroup $\mathsf H_\mathcal E=\mathcal E\circ\lambda(X)\circ\mathcal E$ has the following properties:
\begin{enumerate}
\item[\textup{(a)}] $\mathsf H_\mathcal E$ is algebraically isomorphic to $\prod_{K\in\widetilde\mathcal K}{\mathcal H}(K)$;
\item[\textup{(b)}] $\mathsf H_\mathcal E$ is topologically isomorphic to $\prod_{K\in\widetilde\mathcal K}{\mathcal H}(A_K)$ for any twin sets $A_K\in\Phi_\mathcal E(\Tau_K)$, $K\in\widetilde\mathcal K$;
\item[\textup{(c)}] $\mathsf H_\mathcal E$ is a compact topological group if and only if ${\mathcal H}(K)$ is finite for every $K\in\wht\mathcal K$.
\end{enumerate}
\item[\textup{(2)}] The minimal left ideal $\mathsf L_\mathcal E=\lambda(X)\circ\mathcal E$ has the following properties:
\begin{enumerate}
\item[\textup{(d)}] $\mathsf L_\mathcal E$ is topologically isomorphic to the minimal left ideal $\prod_{K\in\widetilde\mathcal K}\End(\Tau_K)\circ(\Phi_\mathcal E|\Tau_K)$;
\item[\textup{(e)}] $\mathsf L_\mathcal E$ is homeomorphic to $\prod_{K\in\widetilde\mathcal K}\Tau_K$, which is homeomorphic to the Cantor discontinuum $\prod_{K\in\widetilde\mathcal K}2^{X/K^\pm}$;
\item[\textup{(f)}] $\mathsf L_\mathcal E$ is algebraically isomorphic to $\prod_{K\in\widetilde\mathcal K}{\mathcal H}(K)\times[\Tau_K]$, where the orbit space $[\Tau_K]$ of the ${\mathcal H}(K)$-act $\Tau_K$ is endowed with the left zero multiplication;
\item[\textup{(g)}] $\mathsf L_\mathcal E$ is a topological semigroup iff $\mathsf L_\mathcal E$ is a semitopological semigroup iff each restriction
$\Phi_\mathcal E|\Tau_K$, $K\in\widetilde\mathcal K$, is a continuous function iff the maximal subgroup $\mathsf H_\mathcal E$ and the idempotent band $E(\mathsf L_\mathcal E)$ of $\mathsf L_\mathcal E$ are compact iff $\mathsf L_\mathcal E$ is topologically isomorphic to
$\prod_{K\in\widetilde\mathcal K}{\mathcal H}(K)\times[\Tau_K]$.
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{proof} Let $\Phi_\mathcal E\in\Enl(\mathsf P(X))$ be the function representation of the minimal idempotent $\mathcal E\in \mathsf K(\lambda(X))$. For every $K\in\widetilde\mathcal K$ let $f_K=\Phi_\mathcal E|\Tau_K$ and $\mathsf L_{f_K}=\End(\Tau_K)\circ f_K$ be the principal left ideal in $\End(\Tau_K)$, generated by the function $f_K$. By Corollary~\ref{c18.5}, the minimal left ideal $\mathsf L_\mathcal E$ is topologically isomorphic to a minimal left ideal of the semigroup $\prod_{K\in\widetilde\mathcal K}\End(\Tau_K)$. This minimal ideal contains $(f_K)_{K\in\widetilde\mathcal K}$ and hence is equal to the product $\prod_{K\in\widetilde\mathcal K}\mathsf L_{f_K}$. This proves the statement (d) of the theorem. Now all the other statements follow from Theorems~\ref{t14.1} and \ref{t14.3}.
\end{proof}
Theorem~\ref{t18.7}(b) is complemented by the following theorem.
\begin{theorem}\label{t18.8} Let $\widetilde\mathcal K\subset\wht\mathcal K$ be a $[\wht\mathcal K]$-selector. If the group $X$ is twinic, then for any twin sets $A_K\in\Tau_K$, $K\in\widetilde\mathcal K$, the minimal ideal $\mathsf K(\lambda(X))$ of $\lambda(X)$ contains a maximal subgroup $\mathsf H_\mathcal E$, which is topologically isomorphic to $\prod_{K\in\widetilde\mathcal K}{\mathcal H}(A_K)$.
\end{theorem}
\begin{proof} In the semigroup $\prod_{K\in\widetilde\mathcal K}\mathsf K(\End^{\I\!\!\I}(\Tau_K))$ choose a family of functions $(f_K)_{K\in\widetilde\mathcal K}$ such that $f_K(\Tau_K)\subset\lfloor A_K\rfloor$ for all $K\in\widetilde\mathcal K$. This can be done in the following way. For every $K\in\widetilde\mathcal K$ first choose any minimal idempotent $g_K\in \mathsf K(\End^{\I\!\!\I}(\Tau_K))$. By Theorem~\ref{t14.1}(5), $g_K(\Tau_K)\subset\lfloor B_K\rfloor$ for some twin set $B_K\in\Tau_K$. Since $\Tau_K$ is a free ${\mathcal H}(K)$-act, we can choose an equivariant function $\varphi:\lfloor B_K\rfloor \to\lfloor A_K\rfloor$. Then the composition $f_K=\varphi\circ g_K$ is ${\I\!\!\I}$-saturated and has the required property: $f_K(\Tau_K)\subset\lfloor A_K\rfloor$.
Consider the minimal left ideal $\mathsf L_{f_{\widetilde\mathcal K}}=\prod_{K\in\widetilde\mathcal K}\End(\Tau_K)\circ f_K$ and let $\mathsf L=\Phi_{\widetilde\mathcal K}^{-1}(\mathsf L_{f_{\widetilde\mathcal K}})\subset\lambda(X)$ be its preimage under the map $\Phi_{\widetilde\mathcal K}$. By Corollary~\ref{c18.5}(1), $\Phi_{\widetilde\mathcal K}(\mathsf L)=\mathsf L_{f_{\widetilde\mathcal K}}$. Now let $\mathsf K(\mathsf L)$ be the minimal ideal of the left ideal $\mathsf L$. The image $\Phi_{\widetilde\mathcal K}(\mathsf K(\mathsf L))$, being a left ideal in $\mathsf L_{f_{\widetilde\mathcal K}}$, coincides with $\mathsf L_{f_{\widetilde\mathcal K}}$. So, we can find a maximal linked system $\mathcal L\in \mathsf K(\mathsf L)$ such that $\Phi_{\widetilde\mathcal K}(\mathcal L)=(f_K)_{K\in\widetilde\mathcal K}$. By Theorem~\ref{t18.7}(b), the maximal subgroup $\mathsf H_\mathcal L$ is topologically isomorphic to $\prod_{K\in\widetilde\mathcal K}{\mathcal H}(A_K)$.
\end{proof}
\begin{proposition}\label{p18.9} If $X$ is a twinic group, then each minimal left ideal of $\lambda(X)$ is a topological semigroup if and only if each maximal 2-cogroup $K\subset X$ has finite index in $X$.
\end{proposition}
\begin{proof} Let $\widetilde\mathcal K\subset\wht\mathcal K$ be a $[\wht\mathcal K]$-selector.
If each maximal 2-cogroup $K\subset X$ has finite index in $X$, then the set $\Tau_K$ is finite and hence the semigroup $\End(\Tau_K)$ is finite. Consequently, $\prod_{K\in\widetilde\mathcal K}\End(\Tau_K)$ is a compact topological semigroup and so is each minimal left ideal of this semigroup. By Corollary~\ref{c18.5}(2), each minimal left ideal of the semigroup $\lambda(X)$ is a topological semigroup.
If some maximal 2-cogroup in $X$ has infinite index, then Corollary~\ref{c14.6} implies that some minimal left ideal in $\prod_{K\in\widetilde\mathcal K}\End^{\I\!\!\I}(\Tau_K)$ is not a topological semigroup. By Corollary~\ref{c18.6}, some minimal left ideal of the semigroup $\lambda(X)$ is not a topological semigroup.
\end{proof}
\begin{proposition}\label{p18.10} For a twinic group $X$ with ${\I\!\!\I}\cap\wht\mathcal K=\emptyset$ the following conditions are equivalent:
\begin{enumerate}
\item[\textup{(1)}] some minimal left ideal of $\lambda(X)$ is a topological semigroup;
\item[\textup{(2)}] each maximal subgroup of $\lambda(X)$ is a topological group;
\item[\textup{(3)}] some maximal subgroup of $\lambda(X)$ is compact;
\item[\textup{(4)}] the characteristic group ${\mathcal H}(K)$ is finite for each maximal 2-cogroup $K\subset X$.
\end{enumerate}
\end{proposition}
\begin{proof} $(1)\Rightarrow(3)$ If some minimal left ideal of $\lambda(X)$ is a (necessarily compact) topological semigroup, then each maximal subgroup of this minimal ideal is a compact topological group.
$(3)\Rightarrow(4)$ If some maximal subgroup of $\mathsf K(\lambda(X))$ is compact, then by Theorem~\ref{t18.7}(c), each characteristic group ${\mathcal H}(K)$, $K\in\wht\mathcal K$, is finite.
$(4)\Rightarrow(1)$ If each characteristic group ${\mathcal H}(K)$, $K\in\wht\mathcal K$, is finite, then Proposition~\ref{p14.5}(1) and Theorem~\ref{t14.3} guarantee that the semigroup $\prod_{K\in\widetilde\mathcal K}\End^{\I\!\!\I}(\Tau_K)$ contains a minimal left ideal, which is a topological semigroup. By Corollary~\ref{c18.6}, this minimal left ideal is topologically isomorphic to some minimal left ideal of $\lambda(X)$.
$(4)\Rightarrow(3)$ If each characteristic group ${\mathcal H}(K)$, $K\in\wht\mathcal K$, is finite, then Theorem~\ref{t18.7}(c) guarantees that each maximal subgroup of $\mathsf K(\lambda(X))$ is a compact topological group.
$(2)\Rightarrow(4)$ Assume that for some maximal 2-cogroup $K_\infty\in\wht\mathcal K$ the characteristic group ${\mathcal H}(K_\infty)$ is infinite. Replacing $K_\infty$ by a conjugate 2-cogroup, we can assume that $K_\infty\in\widetilde\mathcal K$. By Theorem~\ref{t8.2}, the group ${\mathcal H}(K_\infty)$ is isomorphic to $C_{2^\infty}$ or $Q_{2^\infty}$. In both cases, by Theorem~\ref{t9.5}, there is a twin set $A_\infty\in\Tau_{K_\infty}$ whose characteristic group ${\mathcal H}(A_\infty)$ is not a topological group. Choose a minimal idempotent $f_{\widetilde\mathcal K}=(f_K)_{K\in\widetilde\mathcal K}\in \prod_{K\in\widetilde\mathcal K}\End^{\I\!\!\I}(\Tau_K)$ such that $f_{K_\infty}(\Tau_{K_\infty})\subset\lfloor A_\infty\rfloor$.
For every $K\in\widetilde\mathcal K$ choose any twin set $A_K\in f_K(\Tau_K)$ so that $A_K=A_{\infty}$ if $K=K_\infty$.
By Corollary~\ref{c18.6}, there is a minimal idempotent $\mathcal E\in\lambda^{\I\!\!\I}(X)$ such that $\Phi_{\widetilde\mathcal K}(\mathcal E)=f_{\widetilde\mathcal K}$. By Theorem~\ref{t18.7}(b), the maximal subgroup $\mathsf H_\mathcal E=\mathcal E\circ\lambda(X)\circ\mathcal E$ is topologically isomorphic to $\prod_{K\in \widetilde\mathcal K}{\mathcal H}(A_K)$. This subgroup is not a topological group as it contains an isomorphic copy of the right-topological group ${\mathcal H}(A_\infty)$, which is not a topological group.
\end{proof}
Now let us write Corollary~\ref{c18.5} and Theorem~\ref{t18.7} in a form more convenient for calculations.
For every group $G\in\{C_{2^k},Q_{2^k}:k\in\mathbb N\cup\{\infty\}\}$ denote by $q(X,G)$ the number of all orbits $[K]\in[\wht{\mathcal K}]$ such that for some (equivalently, every) 2-cogroup $K\in[K]$ the characteristic group ${\mathcal H}(K)$ is isomorphic to $G$.
\begin{theorem}\label{t18.11} For each twinic group $X$
there is a cardinal $m$ such that
\begin{enumerate}
\item[\textup{(1)}] each minimal left ideal of $\lambda(X)$ is algebraically isomorphic to the semigroup
$$2^m\times \prod_{1\le k\le \infty} C_{2^k}^{\;q(X,C_{2^k})}\times \prod_{3\le k\le\infty} Q_{2^k}^{\;q(X,Q_{2^k})}$$
where the Cantor discontinuum $2^{m}$ is endowed with the left zero multiplication;
\item[\textup{(2)}] If $q(X,C_{2^\infty})=q(X,Q_{2^\infty})=0$ and ${\I\!\!\I}\cap\wht\mathcal K=\emptyset$, then some minimal left ideal of $\lambda(X)$ is topologically isomorphic to the compact topological semigroup
$$2^m\times \prod_{1\le k<\infty} C_{2^k}^{\;q(X,C_{2^k})}\times \prod_{3\le k<\infty} Q_{2^k}^{\;q(X,Q_{2^k})}.$$
\item[\textup{(3)}] each maximal subgroup of the minimal ideal $\mathsf K(\lambda(X))$ of $\lambda(X)$ is algebraically isomorphic to the group
$$\prod_{1\le k\le\infty} C_{2^k}^{\;q(X,C_{2^k})}\times\prod_{3\le k\le\infty} Q_{2^k}^{\;q(X,Q_{2^k})}.$$
\item[\textup{(4)}] If $q(X,C_{2^\infty})=q(X,Q_{2^\infty})=0$, then each maximal subgroup of the minimal ideal $\mathsf K(\lambda(X))$ of $\lambda(X)$ is topologically isomorphic to the compact topological group
$$\prod_{1\le k<\infty} C_{2^k}^{\;q(X,C_{2^k})}\times\prod_{3\le k<\infty} Q_{2^k}^{\;q(X,Q_{2^k})}.$$
\end{enumerate}
\end{theorem}
\begin{proof} 1. Fix any $[\wht\mathcal K]$-selector $\widetilde\mathcal K\subset\wht\mathcal K$. For every $K\in\widetilde\mathcal K$ put $m_K=|X/K^\pm|$, where $X/K^\pm=\{K^\pm x:x\in X\}$, if the index of $K$ in $X$ is infinite, and $m_K=|X/K^\pm|-\log_2|{\mathcal H}(K)|$ otherwise. It follows that $|[\Tau_K]|=\frac{|\Tau_K|}{|{\mathcal H}(K)|}=2^{m_K}$ and that $[\Tau_K]$ is homeomorphic to the Cantor cube $2^{m_K}$ if the characteristic group ${\mathcal H}(K)$ is finite. Let $m=\sum_{K\in\widetilde\mathcal K}m_K$.
By Theorem~\ref{t18.7}(f), any minimal left ideal $\mathsf L$ of $\lambda(X)$ is algebraically isomorphic to the semigroup $\prod_{K\in\widetilde\mathcal K}{\mathcal H}(K)\times [\Tau_K]$, where the orbit spaces $[\Tau_K]$ are endowed with the left zero multiplication. By Theorem~\ref{t8.2}, for every $K\in\widetilde\mathcal K$ the characteristic group ${\mathcal H}(K)$ is isomorphic to $C_{2^k}$ or $Q_{2^k}$ for some $k\in\mathbb N\cup\{\infty\}$. According to the definition, for $k\in\{1,2\}$ the group $Q_{2^k}$ is isomorphic to the cyclic group $C_{2^k}$, which is why the quaternionic factors in the products below appear only for $k\ge3$.
By the definition of the number $q(X,G)$, for any group $G\in\{C_{2^k},Q_{2^k}:k\in\mathbb N\cup\{\infty\}\}$ we get $q(X,G)=|\{K\in\widetilde\mathcal K:{\mathcal H}(K)\cong G\}|$, where $\cong$ denotes group isomorphism.
Now we see that
$$\mathsf L\cong \prod_{K\in\widetilde\mathcal K}{\mathcal H}(K)\times [\Tau_K]
\cong\prod_{K\in\widetilde\mathcal K}{\mathcal H}(K)\times 2^{m_K}\cong
\prod_{1\le k\le\infty}C_{2^k}^{\;q(X,C_{2^k})}\times \prod_{3\le k\le\infty}Q_{2^k}^{\;q(X,Q_{2^k})}\times 2^m.$$
\smallskip
2. If $q(X,C_{2^\infty})=q(X,Q_{2^\infty})=0$, then for every $K\in\widetilde\mathcal K$ the characteristic group ${\mathcal H}(K)$ is finite and the orbit space $[\Tau_K]$ is a zero-dimensional compact Hausdorff space. In this case the space $\Tau_K$ is homeomorphic to $[\Tau_K]\times {\mathcal H}(K)$. If
$K$ has finite index in $X$, then the orbit space $[\Tau_K]$ has cardinality $2^{m_K}$ and hence is homeomorphic to the finite cube $2^{m_K}$. If $K$ has infinite index in $X$, then the space $\Tau_K$ is homeomorphic to the Cantor cube $2^{m_K}$ by Proposition~\ref{p12.1}. It follows from the topological equivalence of $\Tau_K$ and $[\Tau_K]\times {\mathcal H}(K)$ that $[\Tau_K]$ is a retract of the Cantor cube $\Tau_K$ and each point of $[\Tau_K]$ has character $m_K$. Now Shchepin's characterization of Cantor cubes \cite{Shch} implies that the space $[\Tau_K]$ is homeomorphic to the Cantor cube $2^{m_K}$. Then the product $\prod_{K\in\widetilde\mathcal K}[\Tau_K]$ is homeomorphic to the Cantor cube $2^m=\prod_{K\in\widetilde\mathcal K}2^{m_K}$.
If ${\I\!\!\I}\cap\wht\mathcal K=\emptyset$, then for every $K\in\widetilde\mathcal K$ the endomorphism monoid $\End^{\I\!\!\I}(\Tau_K)$ contains a continuous minimal idempotent $f_K$ according to Proposition~\ref{p14.5}(1). By Theorem~\ref{t14.3}, the minimal left ideal $\mathsf L_{f_K}=\End^{\I\!\!\I}(\Tau_K)\circ f_K$ is topologically isomorphic to the compact topological semigroup ${\mathcal H}(K)\times[\Tau_K]$, where the space $[\Tau_K]$ is endowed with the left zero multiplication. By (the proof of) Corollary~\ref{c18.6}, the minimal ideal $\mathsf K(\lambda(X))$ contains a maximal linked system $\mathcal E$ such that the minimal left ideal $\mathsf L_\mathcal E=\lambda(X)\circ\mathcal E$ is topologically isomorphic to the minimal left ideal $\prod_{K\in\widetilde\mathcal K}\mathsf L_{f_K}$, which is topologically isomorphic to the compact topological semigroups
$$
\prod_{K\in\widetilde\mathcal K}{\mathcal H}(K)\times[\Tau_K]\mbox{ \ and \ }\prod_{1\le k< \infty}C_{2^k}^{\;q(X,C_{2^k})}\times \prod_{3\le k<\infty}Q_{2^k}^{\;q(X,Q_{2^k})}\times 2^m.$$
\smallskip
3. By Theorem~\ref{t18.7}(b) each maximal subgroup $H$ of the minimal ideal $\mathsf K(\lambda(X))$ is topologically isomorphic to the right topological group $G=\prod_{K\in\widetilde\mathcal K}{\mathcal H}(A_K)$ for some twin sets $A_K\in\Tau_K$, $K\in\widetilde\mathcal K$. The latter right-topological group is algebraically isomorphic to the group
$$\prod_{1\le k\le \infty}C_{2^k}^{\;q(X,C_{2^k})}\times\prod_{3\le k\le\infty}Q_{2^k}^{\;q(X,Q_{2^k})}.$$
4. If $q(X,C_{2^\infty})=q(X,Q_{2^\infty})=0$, then all characteristic groups ${\mathcal H}(K)$, $K\in\widetilde\mathcal K$, are finite and then the group $G$ is topologically isomorphic to the compact topological group
$$\prod_{1\le k<\infty}C_{2^k}^{\;q(X,C_{2^k})}\times\prod_{3\le k<\infty}Q_{2^k}^{\;q(X,Q_{2^k})}.$$
\end{proof}
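To illustrate Theorem~\ref{t18.11}, consider the simplest example: the two-element group $X=C_2$. Its unique maximal 2-cogroup is the singleton $K=X\setminus\{e\}$, for which $KK=\{e\}$, so the characteristic group ${\mathcal H}(K)=X/KK$ is isomorphic to $C_2$; hence $q(X,C_2)=1$ and all the other numbers $q(X,G)$ vanish. Moreover, $K^\pm=X$, so $|\Tau_K|=2^{|X/K^\pm|}=2$ and the orbit space $[\Tau_K]$ is a singleton, so that $m=0$. Theorem~\ref{t18.11} then says that each minimal left ideal and each maximal subgroup of $\mathsf K(\lambda(C_2))$ is isomorphic to the group
$$2^0\times C_2^{\;q(X,C_2)}=C_2,$$
in agreement with the obvious fact that $\lambda(C_2)$ consists of the two principal ultrafilters and hence is isomorphic to $C_2$.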
\section{The structure of the superextensions of abelian groups}
In this section we describe the structure of the superextensions of abelian groups; in this case some results of the preceding section can be simplified. Throughout this section we assume that $X$ is an abelian group. By Theorem~\ref{t6.2}, $X$ is twinic and has trivial twinic ideal ${\I\!\!\I}=\{\emptyset\}$. Let us recall that for a group $G$ by $q(X,G)$ we denote the number of orbits $[K]\in[\wht{\mathcal K}]$ such that for some (equivalently, every) 2-cogroup $K\in[K]$ the characteristic group ${\mathcal H}(K)$ is isomorphic to the group $G$. It is clear that $q(X,Q_{2^k})=0$ for all $k\in\mathbb N\cup\{\infty\}$. On the other hand, the numbers $q(X,C_{2^k})$ can be easily calculated using the following proposition.
\begin{proposition}\label{p19.1} If $X$ is an abelian group, then for every $k\in\mathbb N\cup\{\infty\}$ the cardinal $q(X,C_{2^k})$ is equal to the number of subgroups $H\subset X$ such that the quotient group $X/H$ is isomorphic to $C_{2^k}$. If $k\in\mathbb N$, then
$$q(X,C_{2^k})=\frac{|\hom(X,C_{2^k})|-|\hom(X,C_{2^{k-1}})|}{2^{k-1}},$$
where $\hom(X,C_{2^k})$ is the group of all homomorphisms from $X$ to $C_{2^k}$.
\end{proposition}
\begin{proof} Since each maximal 2-cogroup $K\subset X$ is normal, each orbit $[K]\in[\wht{\mathcal K}]$ consists of a single maximal 2-cogroup. Consequently, $q(X,H)$ is equal to the number of maximal 2-cogroups $K\subset X$ whose characteristic group ${\mathcal H}(K)=\Stab(K)/KK=X/KK$ is isomorphic to $H$. In other words, $q(X,H)$ is equal to the cardinality of the set
$$\wht{\mathcal K}_H=\{K\in\wht{\mathcal K}:X/KK\cong H\},$$
where $\cong$ stands for the group isomorphism.
Let $\mathcal G_H$ be the set of all subgroups $G\subset X$ such that the quotient group $X/G$ is isomorphic to $H$.
The proposition will be proved as soon as we check that the function
$$f:\wht{\mathcal K}_H\to\mathcal G_H,\;\;f:K\mapsto KK,$$ is bijective.
To show that $f$ is injective, take any two maximal 2-cogroups $K,C$ with $KK=f(K)=f(C)=CC$. The quotient group $X/KK=X/CC$, being isomorphic to $H$, contains a unique element of order 2. Since $K$ and $C$ are elements of order 2 of the quotient group $X/KK=X/CC$, we conclude that $K=C$.
To show that $f$ is surjective, take any subgroup $G\in\mathcal G_H$. The quotient group $X/G$ is isomorphic to $H$ and thus contains a unique element $K$ of order 2. This element $K$ is a maximal 2-cogroup such that $f(K)=KK=G$.
\smallskip
To prove the second part of the proposition, observe that for a subgroup $H\subset X$ the quotient group $X/H$ is isomorphic to $C_{2^k}$ if and only if $H$ coincides with the kernel of some epimorphism $f:X\to C_{2^k}$. Observe also that two epimorphisms $f,g:X\to C_{2^k}$ have the same kernel if and only if $g=\alpha\circ f$ for some automorphism $\alpha$ of the group $C_{2^k}$. The group $C_{2^k}$ has exactly $2^{k-1}$ automorphisms, each determined by the image of the generator $a=e^{i\pi 2^{-k+1}}$ of $C_{2^k}$ in the 2-cogroup $aC_{2^{k-1}}$. A homomorphism $h:X\to C_{2^k}$ is an epimorphism if and only if $h(X)\not\subset C_{2^{k-1}}$. Consequently, $$q(X,C_{2^k})=\frac{|\hom(X,C_{2^k})\setminus \hom(X,C_{2^{k-1}})|}{2^{k-1}}.$$
\end{proof}
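For example, let $X=C_2\times C_4$. Since $|\hom(X,C_{2^k})|=\gcd(2,2^k)\cdot\gcd(4,2^k)$, we get $|\hom(X,C_2)|=4$ and $|\hom(X,C_{2^k})|=8$ for all $k\ge2$. By Proposition~\ref{p19.1},
$$q(X,C_2)=\frac{4-1}{2^0}=3,\quad q(X,C_4)=\frac{8-4}{2^1}=2,\quad q(X,C_{2^k})=\frac{8-8}{2^{k-1}}=0\mbox{ \ for \ }k\ge3,$$
which agrees with the fact that the group $C_2\times C_4$ has exactly three subgroups of index 2 and exactly two subgroups $H$ with quotient group $X/H$ isomorphic to $C_4$.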
\begin{theorem}\label{t19.2} If $X$ is an abelian group, then
\begin{enumerate}
\item[\textup{(1)}] each maximal subgroup of the minimal ideal $\mathsf K(\lambda(X))$ is algebraically isomorphic to $\prod\limits_{1\le k\le\infty}C_{2^k}^{\;q(X,C_{2^k})}$.
\item[\textup{(2)}] each minimal left ideal of $\lambda(X)$ is
homeomorphic to the Cantor cube $(2^\omega)^{q(X,C_{2^\infty})}\times\prod\limits_{1\le k<\infty}(2^{2^{k-1}})^{q(X,C_{2^k})}$ and is algebraically isomorphic to the semigroup $$\prod_{1\le k\le \infty}(C_{2^k}\times Z_k)^{q(X,C_{2^k})}$$where the cube $Z_k=2^{2^{k-1}-k}$ (equal to $2^{\omega}$ if $k=\infty$) is endowed with the left zero multiplication.
\item[\textup{(3)}] the semigroup $\lambda(X)$ contains a principal left ideal, which is algebraically isomorphic to the semigroup
$$\prod_{1\le k\le \infty}(C_{2^k}\wr Z_k^{Z_k})^{q(X,C_{2^k})}.$$
\end{enumerate}
\end{theorem}
\begin{proof} Since $X$ is abelian, each 2-cogroup $K\in\wht\mathcal K$ is normal in $X$ and hence has the one-element orbit $[K]=\{xKx^{-1}:x\in X\}$. Then the family $\wht\mathcal K$ is a unique $[\wht\mathcal K]$-selector. Since $\Stab(K)=X$, the characteristic group ${\mathcal H}(K)=\Stab(K)/KK$ is equal to the quotient group $X/KK$ and is abelian. By Theorem~\ref{t8.2}, ${\mathcal H}(K)$ is isomorphic to the (quasi)cyclic 2-group $C_{2^n}$ for some $n\in\mathbb N\cup\{\infty\}$.
Consequently, $\wht\mathcal K=\bigcup_{1\le n\le\infty}\wht\mathcal K_n$, where $\wht\mathcal K_n$ is the subset of $\wht\mathcal K$ consisting of all maximal 2-cogroups $K$ whose characteristic group ${\mathcal H}(K)=X/KK$ is isomorphic to the group $C_{2^n}$. By the definition of the numbers $q(X,G)$, we get $q(X,C_{2^n})=|\wht\mathcal K_n|$ for all $n\in\mathbb N\cup\{\infty\}$.
\smallskip
1. By Theorem~\ref{t18.7}(a), each maximal subgroup of $\mathsf K(\lambda(X))$ is algebraically isomorphic to $\prod_{K\in\wht\mathcal K}{\mathcal H}(K)$ and the latter group is isomorphic to $$\prod_{1\le n\le\infty}C_{2^n}^{|\wht\mathcal K_n|}= \prod_{1\le n\le\infty}C_{2^n}^{\;q(X,C_{2^n})}.$$
\smallskip
2. By Theorem~\ref{t18.7}(f), each minimal left ideal of $\lambda(X)$ is algebraically isomorphic to $\prod_{K\in\wht\mathcal K}({\mathcal H}(K)\times[\Tau_K])$, where the orbit spaces $[\Tau_K]$ are endowed with the left zero multiplication. Let $Z_\infty=2^\omega$ and $Z_n=2^{2^{n-1}-n}$ for every $n\in\mathbb N$. The cubes $Z_n$, $n\in\mathbb N\cup\{\infty\}$, are endowed with the left zero multiplication. We claim that $|[\Tau_K]|=|Z_n|$ for every $n\in\mathbb N\cup\{\infty\}$ and every $K\in\wht\mathcal K_n$. If $n$ is finite, then $|X/K^\pm|=\frac12|X/KK|=\frac12|{\mathcal H}(K)|=2^{n-1}$ and
$$|[\Tau_K]|=\frac{|\Tau_K|}{|{\mathcal H}(K)|}=\frac{2^{|X/K^\pm|}}{2^n}=2^{2^{n-1}-n}=|Z_n|.$$
If $n$ is infinite, then the quotient group ${\mathcal H}(K)=X/KK$ is isomorphic to $C_{2^\infty}$ and then $|X/K^\pm|=\omega$. By Proposition~\ref{p12.1}, the space $\Tau_K$ is homeomorphic to the Cantor cube $2^\omega$ and hence has cardinality of continuum. Since the group ${\mathcal H}(K)$ is countable, the orbit space $[\Tau_K]$ also has cardinality of continuum and hence $|[\Tau_K]|=|2^\omega|=|Z_\infty|$.
Now we see that $\prod_{K\in\wht\mathcal K}({\mathcal H}(K)\times[\Tau_K])$ is algebraically isomorphic to $\prod_{1\le k\le\infty}(C_{2^k}\times Z_k)^{q(X,C_{2^k})}$.
\smallskip
3. Since $X$ has trivial twinic ideal, ${\I\!\!\I}\cap\wht\mathcal K=\emptyset$ and by Corollary~\ref{c18.5}(3) and Theorem~\ref{t14.1}(3), the semigroup $\lambda(X)$ contains a principal left ideal that is algebraically isomorphic to the semigroup
$\prod_{K\in\wht\mathcal K}{\mathcal H}(K)\wr [\Tau_K]^{[\Tau_K]}$, which is algebraically isomorphic to $\prod_{1\le k\le \infty}(C_{2^k}\wr Z_k^{Z_k})^{q(X,C_{2^k})}$.
\end{proof}
The following theorem characterizes the groups $X$ for which the algebraic isomorphisms in Theorem~\ref{t19.2} are topological.
\begin{theorem}\label{t19.3} For an abelian group $X$ the following conditions are equivalent\textup{:}
\begin{enumerate}
\item[\textup{(1)}] The group $X$ admits no homomorphism onto the quasicyclic 2-group $C_{2^\infty}$.
\item[\textup{(2)}] Each maximal subgroup in the minimal ideal $\mathsf K(\lambda(X))$ of $\lambda(X)$ is topologically isomorphic to the compact topological group
$\prod\limits_{k\in\mathbb N}C_{2^k}^{\;q(X,C_{2^k})}$.
\item[\textup{(3)}] Each maximal subgroup in $\mathsf K(\lambda(X))$ is a topological group.
\item[\textup{(4)}] Some maximal subgroup of $\mathsf K(\lambda(X))$ is compact.
\item[\textup{(5)}] Each minimal left ideal of $\lambda(X)$ is topologically isomorphic to the compact topological semigroup
$$\prod\limits_{k\in\mathbb N}(C_{2^k}\times Z_k)^{q(X,C_{2^k})}$$where the finite cube $Z_k=2^{2^{k-1}-k}$ is endowed with the left zero multiplication.
\item[\textup{(6)}] $\lambda(X)$ contains a principal left ideal, which is topologically isomorphic to the compact topological semigroup
$$\prod\limits_{k\in\mathbb N}(C_{2^k}\wr Z_k^{Z_k})^{q(X,C_{2^k})}.$$
\end{enumerate}
\end{theorem}
\begin{proof} Let the subfamilies $\wht\mathcal K_n\subset\wht\mathcal K$ and the Cantor cubes $Z_n=2^{2^{n-1}-n}$, $n\in\mathbb N\cup\{\infty\}$, be defined as in the proof of Theorem~\ref{t19.2}. Then $q(X,C_{2^n})=|\wht\mathcal K_n|$.
\smallskip
$(1)\Rightarrow(5,6)$ If $X$ admits no homomorphism onto $C_{2^\infty}$, then
$q(X,C_{2^\infty})=0$ and $\wht\mathcal K_\infty=\emptyset$. In this case $\wht\mathcal K=\bigcup_{n\in\mathbb N}\wht\mathcal K_n$. For every $n\in\mathbb N$ and $K\in\wht\mathcal K_n$ the characteristic group ${\mathcal H}(K)$ is isomorphic to $C_{2^n}$ and the orbit space $[\Tau_K]$ is homeomorphic to the cube $Z_n$. By Theorem~\ref{t14.1}(3), the endomorphism monoid $\End(\Tau_K)$ is (topologically) isomorphic to ${\mathcal H}(K)\wr[\Tau_K]^{[\Tau_K]}$ and the latter semigroup is topologically isomorphic to $C_{2^n}\wr Z_n^{\,Z_n}$.
By Theorem~\ref{t18.7}(g), each minimal left ideal of $\lambda(X)$ is topologically isomorphic to $$\prod_{K\in\wht\mathcal K}({\mathcal H}(K)\times[\Tau_K])=\prod_{n\in\mathbb N}\prod_{K\in\wht\mathcal K_n}({\mathcal H}(K)\times[\Tau_K])$$ and the latter semigroup is topologically isomorphic to the compact topological semigroup $$\prod\limits_{k\in\mathbb N}(C_{2^k}\times Z_k)^{q(X,C_{2^k})}.$$
By Corollary~\ref{c18.5}(3) and Theorem~\ref{t14.1}(3), the semigroup $\lambda(X)$ contains a principal left ideal that is topologically isomorphic to the compact topological semigroup $\prod_{K\in\wht\mathcal K}{\mathcal H}(K)\wr[\Tau_K]^{[\Tau_K]}=\prod_{n\in\mathbb N}\prod_{K\in\wht\mathcal K_n}{\mathcal H}(K)\wr[\Tau_K]^{[\Tau_K]}$, which is topologically isomorphic to the compact topological semigroup
$$\prod\limits_{k\in\mathbb N}(C_{2^k}\wr Z_k^{Z_k})^{q(X,C_{2^k})}.$$
\smallskip
The implications $(5)\Rightarrow(2)\Rightarrow(3)$ are trivial.
\smallskip
$(3)\Rightarrow(1)$ If the group $X$ admits a homomorphism onto $C_{2^\infty}$, then the family $\wht\mathcal K$ contains a 2-cogroup $K_\infty$ whose characteristic group ${\mathcal H}(K_\infty)$ is isomorphic to $C_{2^\infty}$.
It follows from Example~\ref{e9.3}(2) that for some twin set $A_{K_\infty}\in\Tau_{K_\infty}$ the twin-generated group ${\mathcal H}(A_{K_\infty})$ is not a topological group. Now choose a sequence $(A_K)_{K\in\wht\mathcal K}\in\prod_{K\in\wht\mathcal K}\Tau_K$ of twin sets such that $A_K=A_{K_\infty}$ if $K=K_\infty$. Then the right-topological group $\prod_{K\in\wht\mathcal K}{\mathcal H}(A_K)$ is not a topological group. By Corollary~\ref{c18.6}, this right-topological group is topologically isomorphic to some maximal subgroup of the minimal ideal $\mathsf K(\lambda(X))$. So, $\mathsf K(\lambda(X))$ contains a maximal subgroup, which is not a topological group.
\smallskip
$(6)\Rightarrow(4)$ If $\lambda(X)$ contains a principal left ideal, which is a compact topological semigroup, then this ideal contains a minimal left ideal of $\lambda(X)$, which is also a compact topological semigroup. Any maximal subgroup of this minimal left ideal is a compact topological group.
\smallskip
$(4)\Rightarrow(1)$ If $\mathsf K(\lambda(X))$ contains a compact maximal subgroup, then by Theorem~\ref{t18.7}(c), each characteristic group ${\mathcal H}(K)$, $K\in\wht\mathcal K$, is finite and hence $q(X,C_{2^\infty})=0$.
\end{proof}
Finally, we shall characterize abelian groups whose superextension contains metrizable minimal left ideals. The characterization involves the notion of the free rank and 2-rank, see \cite[\S16]{Fu} or \cite[\S4.2]{Rob}.
Let us recall that a subset $A\not\ni e$ of an abelian group $G$ with neutral element $e$ is called {\em independent} if for any disjoint subsets $B,C\subset A$ the subgroups $\la B\ra$ and $\la C\ra$ generated by $B,C$ intersect in the trivial subgroup. The cardinality of a maximal independent subset $A\subset G$ that consists of elements of infinite order (resp. of order that is a power of 2) is called the {\em free rank} (resp. the {\em 2-rank}) of $G$ and is denoted by $r_0(G)$ (resp. $r_2(G)$).
\begin{theorem}\label{t19.4} For an abelian group $X$ the following conditions are equivalent:
\begin{enumerate}
\item[\textup{(1)}] each minimal left ideal of $\lambda(X)$ is metrizable;
\item[\textup{(2)}] the family $\wht\mathcal K$ of maximal 2-cogroups is at most countable;
\item[\textup{(3)}] the group $X$ admits no epimorphism onto the group $C_{2^\infty}\oplus C_{2^\infty}$ and $X$ has finite ranks $r_0(X)$ and $r_2(X)$.
\end{enumerate}
\end{theorem}
\begin{proof} $(1)\Leftrightarrow(2)$ By Theorem~\ref{t19.2}(2), each minimal left ideal is homeomorphic to the cube $$(2^\omega)^{q(X,C_{2^\infty})}\times \prod_{1\le k<\infty}(2^{2^{k-1}})^{q(X,C_{2^k})},$$ which is metrizable if and only if $|\wht\mathcal K|=\sum_{1\le k\le\infty}q(X,C_{2^k})\le\aleph_0$.
\smallskip
For the proof of the equivalence $(2)\Leftrightarrow(3)$ we need two lemmas. We define a group $G$ to be {\em $\wht\mathcal K$-countable} if the family of maximal 2-cogroups in $G$ is at most countable.
\begin{lemma}\label{l19.5} Each subgroup and each quotient group of a $\wht\mathcal K$-countable group is $\wht\mathcal K$-countable.
\end{lemma}
\begin{proof} Assume that a group $G$ is $\wht\mathcal K$-countable.
To prove that any subgroup $H\subset G$ is $\wht\mathcal K$-countable, observe that by Proposition~\ref{p7.3}(2), each 2-cogroup $K\subset H$ can be enlarged to a maximal 2-cogroup $\bar K$ in $G$. The maximality of $K$ in $H$ guarantees that $K=\bar K\cap H$. This implies that the number of maximal 2-cogroups in $H$ does not exceed the number of maximal 2-cogroups in $G$.
To prove that any quotient group $G/H$ of $G$ by a normal subgroup $H\subset G$ is $\wht\mathcal K$-countable, observe that for each maximal 2-cogroup $K\subset G/H$ the preimage $q^{-1}(K)$ under the quotient homomorphism $q:G\to G/H$ is a maximal 2-cogroup in $G$. This implies that the number of maximal 2-cogroups in $G/H$ does not exceed the number of maximal 2-cogroups of $G$.
\end{proof}
For a group $G$ with neutral element $e$ and a set $A$ by $$\oplus^A G=\{(x_\alpha)_{\alpha\in A}\in G^A:|\{\alpha\in A:x_\alpha\ne e\}|<\aleph_0\}$$ we denote the direct sum of $|A|$ many copies of $G$.
\begin{lemma}\label{l19.6} The groups $\oplus^\omega C_2$, $\oplus^\omega \mathbb Z$ and $C_{2^\infty}\times C_{2^\infty}$ are not $\wht\mathcal K$-countable.
\end{lemma}
\begin{proof} Observe that for any abelian group $X$ the number $q(X,C_2)$ is equal to the number of subgroups having index 2 and is equal to the number of non-trivial homomorphisms $h:X\to C_2$.
Each (non-empty) subset $A\subset \omega$ determines a (non-trivial) homomorphism $$h_A:\oplus^\omega C_2\to C_2,\;\;h_A:(x_i)_{i\in\omega}\mapsto \prod_{i\in A}x_i.$$
For any distinct subsets $A,B\subset\omega$ the homomorphisms $h_A$ and $h_B$ are distinct. Consequently, for the group $X=\oplus^\omega C_2$, the family $\wht\mathcal K$ of maximal 2-cogroups has cardinality $|\wht\mathcal K|\ge |\hom(X,C_2)|=2^\omega$ and hence this group is not $\wht\mathcal K$-countable.
Since $\oplus^\omega C_2$ is a quotient group of $\oplus^\omega\mathbb Z$, the latter group is not $\wht\mathcal K$-countable.
Finally, we show that the group $X=C_{2^\infty}\times C_{2^\infty}$ is not $\wht\mathcal K$-countable. It is well-known (see \cite[\S43]{Fu}) that the quasicyclic group $C_{2^\infty}$ has uncountable automorphism group $\mathrm{Aut}(C_{2^\infty})$.
For any automorphism $h:C_{2^\infty}\to C_{2^\infty}$ its graph $\Gamma_h=\{(x,h(x)):x\in C_{2^\infty}\}$ is a subgroup of $X=C_{2^\infty}\times C_{2^\infty}$ such that the quotient group $X/\Gamma_h$ is isomorphic to $C_{2^\infty}$. Consequently, $q(X,C_{2^\infty})\ge|\mathrm{Aut}(C_{2^\infty})|>\aleph_0$ and hence the group $X=C_{2^\infty}\times C_{2^\infty}$ is not $\wht\mathcal K$-countable.
\end{proof}
Now we are able to prove the equivalence $(2)\Leftrightarrow (3)$.
The implication $(2)\Rightarrow(3)$ follows from Lemmas~\ref{l19.5} and \ref{l19.6}.
To prove the implication $(3)\Rightarrow(2)$ of Theorem~\ref{t19.4}, assume that an abelian group $X$ has finite free and 2-ranks and $X$ admits no homomorphism onto the group $C_{2^\infty}\oplus C_{2^\infty}$. By Proposition~\ref{p19.1}, the cardinality of the set $\wht\mathcal K$ of maximal 2-cogroups in $X$ is equal to the cardinality of the family $\mathcal H$ of subgroups $H\subset X$ such that the quotient group $X/H$ is isomorphic to $C_{2^k}$ for some $1\le k\le\infty$.
So, it suffices to prove that $|\mathcal H|\le\aleph_0$.
Consider the subgroup $X_{\mathrm{odd}}\subset X$ consisting of the elements of odd order. Since $X$ has finite free and 2-ranks, so does the quotient group $X/X_{\mathrm{odd}}$. Then the quotient group $Y=X/X_{\mathrm{odd}}$ is at most countable (because it contains no non-trivial elements of odd order and has finite free and 2-ranks). Let $q:X\to Y$ be the quotient homomorphism.
Let $\mathcal M$ be the family of maximal independent subsets consisting of elements of infinite order in the group $Y=X/X_{\mathrm{odd}}$.
Since the free rank of $Y$ is finite, each (independent) set $M\in\mathcal M$ is finite and hence $\mathcal M$ is at most countable.
For each $M\in\mathcal M$ consider the free abelian subgroup $\la M\ra\subset Y$ generated by $M$. Let $G_M=Y/\la M\ra$ be the quotient group and $q_M:Y\to G_M$ be the quotient homomorphism. The maximality of $M$ implies that $G_M$ is a torsion group. Since the free and 2-ranks of the group $Y$ are finite, the quotient group $G_M$ has finite 2-rank. The group $G_M$ is the direct sum $G_M=O_M\oplus D_M$ of the subgroup $O_M$ of elements of odd order and the maximal 2-subgroup $D_M\subset G_M$. Let $p_M:G_M\to D_M=G_M/O_M$ be the quotient homomorphism.
We claim that the group $D_M$ has at most countably many subgroups.
Since $D_M$ is a quotient group of $X$ and $X$ admits no homomorphism onto the group $(C_{2^\infty})^2$, the group $D_M$ also admits no homomorphism onto $(C_{2^\infty})^2$. Two cases are possible.
1) The group $D_M$ contains no subgroup isomorphic to $C_{2^\infty}$. In this case Pr\"ufer's Theorem~17.2 of \cite{Fu} guarantees that $D_M$ is a direct sum of cyclic 2-groups. Since $D_M$ has finite 2-rank, the number of summands is finite and hence the group $D_M$ is finite. Then $D_M$ has finitely many subgroups.
2) The group $D_M$ contains a subgroup $D\subset D_M$ isomorphic to $C_{2^\infty}$. Being divisible, the subgroup $D$ is complemented in $D_M$, which means that $D_M=D\oplus F$ for some subgroup $F\subset D_M$. Since $D_M$ admits no homomorphism onto $(C_{2^\infty})^2$, the subgroup $F$ contains no subgroup isomorphic to $C_{2^\infty}$ and hence is finite by the preceding case. Taking into account that the quasicyclic 2-group $D$ has countably many subgroups, we conclude that the group $D_M=D\oplus F$ also has countably many subgroups.
In both cases the family $\mathcal D_M$ of subgroups of $D_M$ is at most countable. Then the family $\mathcal H_{M}=\{(p_M\circ q_M\circ q)^{-1}(H):H\in \mathcal D_M\}$ also is at most countable. It remains to check that $\mathcal H\subset\bigcup_{M\in\mathcal M}\mathcal H_M$.
Fix any subgroup $H\in\mathcal H$. By the definition of $\mathcal H$, the quotient group $X/H$ is a 2-group, which implies $X_{\mathrm{odd}}\subset H$.
Then $H=q^{-1}(H_Y)$ where $H_Y=q(H)$. Let $M$ be a maximal independent subset of $H_Y$ that consists of elements of infinite order. Since $Y/H_Y=X/H$ is a torsion group, the set $M$ is maximal in $Y$ and hence belongs to the family $\mathcal M$. It follows that $\la M\ra\subset H_Y$ and hence $H_Y=q_M^{-1}(H_M)$ where $H_M=q_M(H_Y)\subset G_M$. Since $G_M/H_M=Y/H_Y=X/H$ is a 2-group, the subgroup $H_M$ contains the subgroup $O_M$ of elements of odd order in $G_M$. Then $H_M=p_M^{-1}(H_D)$ where $H_D=p_M(H_M)\subset D_M$. Since $H_D\in\mathcal D_M$, we conclude that the subgroup $H=(p_M\circ q_M\circ q)^{-1}(H_D)$ belongs to the family $\mathcal H_M$.
\end{proof}
\section{Compact reflexions of groups}\label{s18}
In this section $X$ is an arbitrary group.
Up to this point our strategy in describing the minimal left ideals of the semigroup $\lambda(X)$ has consisted in finding a relatively small subfamily $\mathsf F\subset\mathsf P(X)$ such that the function representation $\Phi_{\mathsf F}:\lambda(X)\to\Enl(\mathsf F)$ is injective on all minimal left ideals of $\lambda(X)$. Now we shall simplify the group $X$ while leaving the minimal left ideals of $\lambda(X)$ unchanged.
We shall describe three such simplifying procedures. One of them is the factorization of $X$ by the subgroup $$\mathrm{Odd}=\bigcap_{K\in\wht\mathcal K}KK.$$ Here we assume that $\mathrm{Odd}=X$ if the set $\wht\mathcal K$ is empty.
The following proposition explains the choice of the notation for the subgroup $\mathrm{Odd}$. We recall that a group $G$ is called {\em odd} if each element of $G$ has odd order.
\begin{proposition}\label{p20.1} $\mathrm{Odd}$ is the largest normal odd subgroup of $X$. If $X$ is abelian, then $\mathrm{Odd}$ coincides with the set of all elements having odd order in $X$.
\end{proposition}
\begin{proof}
The normality of the subgroup $\mathrm{Odd}=\bigcap_{K\in\wht\mathcal K}KK$ follows from the fact that $xKx^{-1}\in\wht\mathcal K$ for every $K\in\wht\mathcal K$ and $x\in X$. Next, we show that the group $\mathrm{Odd}$ is odd. Assuming the converse, we could find an element $a\in\mathrm{Odd}$ such that the sets $a^{2\mathbb Z}=\{a^{2n}:n\in\mathbb Z\}$ and $a^{2\mathbb Z+1}=\{a^{2n+1}:n\in\mathbb Z\}$ are disjoint. Then the 2-cogroup $a^{2\mathbb Z+1}$ of $X$ can be enlarged to a maximal 2-cogroup $K\in\wht\mathcal K$. It follows that $a\in K\subset X\setminus KK$ and thus $a\notin\mathrm{Odd}$, which is a contradiction.
It remains to prove that $\mathrm{Odd}$ contains any normal odd subgroup $H\subset X$. It suffices to check that for every maximal 2-cogroup $K\in\wht\mathcal K$ the subgroup $H\subset X$ lies in the group $KK$. Let $K^\pm=K\cup KK$. Since the subgroup $H$ is normal in $X$, the sets $KKH=HKK$ and $K^\pm H=HK^\pm$ are subgroups. We claim that the sets $KH=HK$ and $KKH=HKK$ are disjoint.
Assuming that $KH\cap KKH\ne\emptyset$, we can find a point $x\in K$ such that $x\in KKH$. Since $KK=xK$, there are points $z\in K$ and $h\in H$ such that $x=xzh$. Then $z=h^{-1}\in K\cap H$.
Now consider the cyclic subgroup $z^{2\mathbb Z}=\{z^{2n}:n\in\mathbb Z\}$. Since $z\in K$, the subgroup $z^{2\mathbb Z}$ does not intersect the set $z^{2\mathbb Z+1}=\{z^{2n+1}:n\in\mathbb Z\}$. On the other hand, since $H$ is odd, there is an integer $n\in\mathbb Z$ with $z^{2n+1}=z^0\in z^{2\mathbb Z+1}\cap z^{2\mathbb Z}$.
This contradiction shows that $KH$ and $KKH$ are disjoint. Consequently, the subgroup $KKH$ has index 2 in the group $K^\pm H$ and hence $KH=K^\pm H\setminus KKH$ is a 2-cogroup in $X$ containing $H$. The maximality of $K$ in $\mathcal K$ guarantees that $K=KH$ and hence $H\subset KK$.
\end{proof}
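For example, for the cyclic group $X=C_{12}$ the subgroup $\mathrm{Odd}$ coincides with the unique subgroup of order 3 and the quotient group $X/\mathrm{Odd}\cong C_{12}/C_3$ is isomorphic to the cyclic 2-group $C_4$.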
The quotient homomorphism $q_{\mathrm{odd}}:X\to X/\mathrm{Odd}$ generates a continuous semigroup homomorphism $\lambda(q_{\mathrm{odd}}):\lambda(X)\to\lambda(X/\mathrm{Odd})$.
The following theorem was proved in \cite[3.3]{BG3}.
\begin{theorem}\label{t20.2} The homomorphism $\lambda(q_{\mathrm{odd}}):\lambda(X)\to\lambda(X/\mathrm{Odd})$ is injective on each minimal left ideal of $\lambda(X)$.
\end{theorem}
Next, we define two compact topological groups called the first and second profinite reflexions of the group $X$. To define the first profinite reflexion, consider the family $\N$ of all normal subgroups of $X$ with finite index in $X$. For each subgroup $H\in\N$ consider the quotient homomorphism $q_H:X\to X/H$. The diagonal product of those homomorphisms determines the homomorphism $q:X\to\prod_{H\in\N}X/H$ of $X$ into the compact topological group $\prod_{H\in\N}X/H$. The closure of the image $q(X)$ in $\prod_{H\in\N}X/H$ is denoted by $\bar X$ and is called the {\em first profinite reflexion} of $X$.
The second profinite reflexion $\bar X_2$ is defined in a similar way with help of the subfamily $$\N_2=\Big\{\bigcap_{x\in X}xKKx^{-1}:K\in\wht\mathcal K,\; |X/K|<\aleph_0\Big\}$$ of $\N$. The quotient homomorphisms $q_H:X\to X/H$, $H\in\N_2$, compose a homomorphism $q_2:X\to\prod_{H\in\N_2}X/H$. The closure of the image $q_2(X)$ in $\prod_{H\in\N_2}X/H$ is denoted by $\bar X_2$ and is called the {\em second profinite reflexion} of $X$. Since $\mathrm{Ker}(q_2)=\bigcap\N_2\supset\bigcap_{K\in\wht{\mathcal K}}KK\supset \mathrm{Odd}$, the homomorphism $q_2:X\to\bar X_2$ factorizes through the group $X/\mathrm{Odd}$ in the sense that there is a unique homomorphism $q_{\mathrm{even}}:X/\mathrm{Odd}\to \bar X_2$ such that $q_2=q_\mathrm{even}\circ q_\mathrm{odd}$.
Thus we get the following commutative diagram:
$$
\xymatrix{
X\ar[r]^-{q_{\mathrm{odd}}}\ar[dr]^{q_2}\ar[d]^{q} & {X/\mathrm{Odd}}\ar[d]^{q_\mathrm{even}}\\
\bar X \ar[r]_-{\operatorname{pr}} &\bar X_2
}
$$
Applying to this diagram the functor $\lambda$ of superextension we get the diagram
$$
\xymatrix{
\lambda(X)\ar[r]^-{\lambda(q_{\mathrm{odd}})}\ar[dr]^{\lambda(q_2)}\ar[d]^{\lambda(q)} & {\lambda(X/\mathrm{Odd})}\ar[d]^{\lambda(q_\mathrm{even})}\\
\lambda(\bar X) \ar[r]_-{\lambda(\operatorname{pr})} &\lambda(\bar X_2)
}
$$
In this diagram $\lambda(\bar X)$ and $\lambda(\bar X_2)$ are the superextensions of the compact topological groups $\bar X$ and $\bar X_2$. We recall that the superextension $\lambda(K)$ of a compact Hausdorff space $K$ is the closed subspace of the second exponent $\exp(\exp(K))$ that consists of the maximal linked systems of closed subsets of $K$, see \cite[\S2.1.3]{TZ}.
\begin{theorem}\label{t18.3} If each maximal 2-cogroup $K$ of a twinic group $X$ has finite index in $X$, then the homomorphism $\lambda(q_2):\lambda(X)\to\lambda(\bar X_2)$ is injective on each minimal left ideal of $\lambda(X)$.
\end{theorem}
\begin{proof} The injectivity of the homomorphism $\lambda(q_2)$ on a minimal left ideal $\mathsf L$ of $\lambda(X)$ will follow as soon as for any distinct maximal linked systems $\mathcal A,\mathcal B\in\mathsf L$ we find a subgroup $H\in\mathcal N_2$ such that $\lambda q_H(\mathcal A)\ne\lambda q_H(\mathcal B)$. Fix any $[\wht\mathcal K]$-selector $\widetilde\mathcal K\subset\wht\mathcal K$.
By Corollary~\ref{c18.5}, the homomorphism $\Phi_{\widetilde{\Tau}}:\lambda(X)\to\prod_{K\in\widetilde\mathcal K}\Enl(\Tau_K)$, $\Phi_{\widetilde\Tau}:\mathcal L\mapsto (\Phi_\mathcal L|\Tau_K)_{K\in\widetilde\mathcal K}$ is injective on the minimal left ideal $\mathsf L$. Consequently, $\Phi_\mathcal A|\Tau_K\ne \Phi_{\mathcal B}|\Tau_K$ for some $K\in\widetilde\mathcal K$ and we can find a set $T\in\Tau_K$ such that $\Phi_\mathcal A(T)\ne \Phi_{\mathcal B}(T)$.
Since the 2-cogroup $K$ has finite index in $X$, the normal subgroup $H=\bigcap_{x\in X}xKKx^{-1}$ has finite index in $X$ and belongs to the family $\mathcal N_2$. Consider the finite quotient group $X/H$ and let $q_H:X\to X/H$ be the quotient homomorphism. Since $H\subset KK$, the set $T=KKT$ coincides with the preimage $q_H^{-1}(T')$ of some twin set $T'$ in $X/H$. Since each translate $xT$, $x\in X$, also is $H$-saturated, the equality $\lambda q_H(\mathcal A)=\lambda q_H(\mathcal B)$ would imply $\Phi_\mathcal A(T)=\Phi_{\mathcal B}(T)$. Consequently, $\lambda q_H(\mathcal A)\ne \lambda q_H(\mathcal B)$.
\end{proof}
\begin{remark} For each finite abelian group $X$ the group $X/\mathrm{Odd}$ is a 2-group. For non-commutative groups it is not always true: for the group $X=A_4$ of even permutations of the set $4=\{0,1,2,3\}$ the group $X/\mathrm{Odd}$ coincides with $X$, see Section~\ref{s21.5}. Also $X/\mathrm{Odd}$ coincides with $X$ for any simple group.
\end{remark}
\section{Some examples}
Now we consider the superextensions of some concrete groups.
\subsection{The infinite cyclic group $\mathbb Z$} In order to compare the algebraic properties of the semigroups $\lambda(\mathbb Z)$ and $\beta(\mathbb Z)$ let us recall a deep result of E.~Zelenyuk \cite{Zel} (see also \cite[\S7.1]{HS}) who proved that each finite subgroup in the subsemigroup $\beta(\mathbb Z)\subset\lambda(\mathbb Z)$ is trivial. It turns out that the semigroup $\lambda(\mathbb Z)$ has a totally different property.
\begin{theorem}\label{t21.1}
\begin{enumerate}
\item[\textup{(1)}] The semigroup $\lambda(\mathbb Z)$ contains a principal left ideal topologically isomorphic to $\prod\limits_{k=1}^\infty C_{2^k}\wr Z_k^{Z_k}$ where $Z_k=2^{2^{k-1}-k}$.
\item[\textup{(2)}] Each minimal left ideal of $\lambda(\mathbb Z)$ is topologically isomorphic to $2^\omega\times \prod_{k=1}^\infty C_{2^k}$ where the Cantor cube $2^\omega$ is endowed with the left-zero multiplication.
\item[\textup{(3)}] Each maximal group of the minimal ideal $\mathsf K(\lambda(\mathbb Z))$ is topologically isomorphic to $\prod\limits_{k=1}^\infty C_{2^k}$.
\item[\textup{(4)}] The semigroup $\lambda(\mathbb Z)$ contains a topologically isomorphic copy of each second countable profinite topological semigroup.
\end{enumerate}
\end{theorem}
\begin{proof} The group $\mathbb Z$ is abelian and hence has trivial twinic ideal according to Theorem~\ref{t6.2}. It is easy to see that $q(\mathbb Z,C_{2^k})=1$ for all $k\in\mathbb N$, while $q(\mathbb Z,C_{2^\infty})=0$.
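Indeed, the unique subgroup $H\subset\mathbb Z$ with quotient group $\mathbb Z/H$ isomorphic to $C_{2^k}$ is the subgroup $H=2^k\mathbb Z$, so $q(\mathbb Z,C_{2^k})=1$; and since every quotient group of $\mathbb Z$ is cyclic while the group $C_{2^\infty}$ is not, $q(\mathbb Z,C_{2^\infty})=0$.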
\smallskip
1. By Theorem~\ref{t19.3}(6), the semigroup $\lambda(\mathbb Z)$ contains a principal left ideal that is topologically isomorphic to $\prod_{k=1}^\infty C_{2^k}\wr Z_k^{Z_k}$ where $Z_k=2^{2^{k-1}-k}$.
\smallskip
2. By Theorem~\ref{t19.3}(5), each minimal left ideal $\mathsf L$ of $\lambda(\mathbb Z)$ is topologically isomorphic to $\prod_{k=1}^\infty (C_{2^k}\times Z_k)$ where each cube $Z_k=2^{2^{k-1}-k}$ is endowed with the left zero multiplication. It is easy to see that the left zero semigroup $\prod_{k=1}^\infty Z_k$ is topologically isomorphic to the Cantor cube $2^\omega$ endowed with the left zero multiplication. Consequently, $\mathsf L$ is topologically isomorphic to $2^\omega\times \prod_{k=1}^\infty C_{2^k}$.
\smallskip
3. The preceding item implies that each maximal group of the minimal ideal $\mathsf K(\lambda(\mathbb Z))$ is topologically isomorphic to $\prod\limits_{k=1}^\infty C_{2^k}$.
4. The fourth item follows from the first item and the following well-known fact, see \cite[I.1.3]{CP}.
\end{proof}
\begin{lemma}\label{emb} Each semigroup $S$ is algebraically isomorphic to a subsemigroup of the semigroup $A^A$ of all self-maps of a set $A$ of cardinality $|A|\ge |S^1|$ where $S^1$ is $S$ with attached unit.
\end{lemma}
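Let us recall the idea of the proof of this classical fact: each element $s\in S$ determines the left shift $l_s:S^1\to S^1$, $l_s:x\mapsto sx$, and the map $s\mapsto l_s$ is an injective homomorphism of $S$ into the semigroup $(S^1)^{S^1}$ of self-maps of $S^1$, which embeds into $A^A$ for any set $A$ of cardinality $|A|\ge|S^1|$.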
\subsection{The (quasi)cyclic 2-groups $C_{2^n}$}
For a cyclic 2-group $X=C_{2^n}$ the number $q(X,C_{2^k})$ is given by
$$q(X,C_{2^{k}})=\begin{cases}1&\mbox{if $k\le n$,}\\
0&\mbox{otherwise}.
\end{cases}
$$
Indeed, for every $k\le n$ the unique subgroup $H\subset C_{2^n}$ with quotient group $C_{2^n}/H$ isomorphic to $C_{2^k}$ is the subgroup $H\cong C_{2^{n-k}}$, while the finite group $C_{2^n}$ admits no homomorphism onto $C_{2^k}$ for $k>n$. Applying Theorem~\ref{t19.3} we get:
\begin{theorem}For every $n\in\mathbb N$
\begin{enumerate}
\item[\textup{(1)}] The semigroup $\lambda(C_{2^n})$ contains a principal left ideal isomorphic to $\prod\limits_{k=1}^n C_{2^k}\wr Z_k^{Z_k}$ where $Z_k=2^{2^{k-1}-k}$.
\item[\textup{(2)}] Each minimal left ideal of $\lambda(C_{2^n})$ is isomorphic to $\prod_{k=1}^n (C_{2^k}\times Z_k)$ where each cube $Z_k=2^{2^{k-1}-k}$ is endowed with the left zero multiplication.
\item[\textup{(3)}] Each maximal group of the minimal ideal $\mathsf K(\lambda(C_{2^n}))$ is isomorphic to $\prod\limits_{k=1}^n C_{2^k}$.
\item[\textup{(4)}] The semigroup $\lambda(C_{2^n})$ contains an isomorphic copy of each semigroup $S$ of cardinality $|S|<2^{2^{n-1}-n}$.
\end{enumerate}
\end{theorem}
The superextension $\lambda(C_{2^\infty})$ has even more interesting properties.
\begin{theorem}\label{t21.4}
\begin{enumerate}
\item[\textup{(1)}] minimal left ideals of the semigroup $\lambda(C_{2^\infty})$ are not topological semigroups;
\item[\textup{(2)}] each minimal left ideal of $\lambda(C_{2^\infty})$ is homeomorphic to the Cantor cube $2^\omega$ and is algebraically isomorphic to $\mathfrak c\times (C_{2^\infty})^\omega$ where the cardinal $\mathfrak c=2^{\aleph_0}$ is endowed with left zero multiplication;
\item[\textup{(3)}] the semigroup $\lambda(C_{2^\infty})$ contains a principal left ideal, which is algebraically isomorphic to $(C_{2^\infty}\wr \mathfrak c^\mathfrak c)^\omega$;
\item[\textup{(4)}] $\lambda(C_{2^\infty})$ contains an isomorphic copy of each semigroup of cardinality $\le \mathfrak c$;
\item[\textup{(5)}] each maximal subgroup of the minimal ideal $\mathsf K(\lambda(C_{2^\infty}))$ of $\lambda(C_{2^\infty})$ is algebraically isomorphic to $(C_{2^\infty})^\omega$;
\item[\textup{(6)}] each maximal subgroup of the minimal ideal $\mathsf K(\lambda(C_{2^\infty}))$ is topologically isomorphic to the countable product $\prod_{n=1}^\infty (C_{2^\infty},\tau_n)$ of quasicyclic 2-groups endowed with twin-generated topologies;
\item[\textup{(7)}] for any twin-generated topologies $\tau_n$, $n\in\mathbb N$, on $C_{2^\infty}$ the right-topological group $\prod_{n=1}^\infty(C_{2^\infty},\tau_n)$ is topologically isomorphic to a maximal subgroup of $\mathsf K(\lambda(C_{2^\infty}))$.
\end{enumerate}
\end{theorem}
\begin{proof} Since each proper subgroup of $C_{2^\infty}$ is finite, the family $\wht{\mathcal K}$ of maximal 2-cogroups is countable and hence can be enumerated as $\wht\mathcal K=\{K_n:n\in\omega\}$. Each maximal 2-cogroup $K\in\wht{\mathcal K}$ has infinite index and its characteristic group ${\mathcal H}(K)$ is isomorphic to $C_{2^\infty}$.
\smallskip
1. The equivalence $(1)\Leftrightarrow (2)$ of Theorem~\ref{t19.3} implies that no minimal left ideal of $\lambda(C_{2^\infty})$ is a topological semigroup.
\smallskip
2,3,5. The statements (2), (3) and (5) follow from Theorem~\ref{t19.2}.
\smallskip
4. The fourth item follows from the third one because each semigroup $S$ of cardinality $|S|\le\mathfrak c$ embeds into the semigroup $\mathfrak c^{\mathfrak c}$ according to Lemma~\ref{emb}.
\smallskip
6. By Theorem~\ref{t18.7}(b), each maximal subgroup $G$ in the minimal ideal $\mathsf K(\lambda(C_{2^\infty}))$ is topologically isomorphic to the product $\prod_{K\in\wht\mathcal K}{\mathcal H}(A_K)$ of the structure groups of suitable twin subsets $A_K\in\Tau_K=\Tau_{[K]}$, $K\in\wht\mathcal K$. For each maximal 2-cogroup $K\in\wht{\mathcal K}$ the structure group ${\mathcal H}(A_K)$ is just $C_{2^\infty}$ endowed with a twin-generated topology.
\smallskip
7. Now assume conversely that $\tau_n$, $n\in\mathbb N$, are twin generated topologies on the quasicyclic group $C_{2^\infty}$. For every $n\in\mathbb N$ find a twin subset $A_n\in\Tau_{K_n}$ whose structure group ${\mathcal H}(A_n)$ is topologically isomorphic to $(C_{2^\infty},\tau_n)$. By
Theorem~\ref{t18.8}, the product $\prod_{n=1}^\infty {\mathcal H}(A_n)$ is topologically isomorphic to some maximal subgroup of $\mathsf K(\lambda(C_{2^\infty}))$.
\end{proof}
\begin{remark} Theorems~\ref{t21.4}(7) and \ref{t9.5} imply that among maximal subgroups of the minimal ideal of $\lambda(C_{2^\infty})$ there are:
\begin{itemize}
\item Raikov complete topological groups;
\item incomplete totally bounded topological groups;
\item paratopological groups, which are not topological groups;
\item semitopological groups, which are not paratopological groups.
\end{itemize}
\end{remark}
\subsection{The groups of generalized quaternions $Q_{2^n}$}
We start with the quaternion group $Q_8=\{\pm 1,\pm\mathbf i,\pm\mathbf j,\pm\mathbf k\}$. It contains 3 cyclic subgroups of order 4
corresponding to 4-element maximal 2-cogroups: $K_1=Q_8\setminus \langle\mathbf i\rangle$, $K_2=Q_8\setminus \langle\mathbf j\rangle$, $K_3=Q_8\setminus \langle\mathbf k\rangle$. The characteristic groups of those 2-cogroups are isomorphic to $C_2$. The trivial subgroup of $Q_8$ corresponds to the maximal 2-cogroup $K_0=\{-1\}$ whose characteristic group coincides with $Q_8$. By Proposition~\ref{p15.2}, we get
$$|[\Tau_{K_0}]|=\frac{|\Tau_{K_0}|}{|{\mathcal H}(K_0)|}=\frac{2^{|X/K_0^\pm|}}{|Q_8|}=2$$ and $|[\Tau_{K_i}]|=1$ for $i\in\{1,2,3\}$.
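Indeed, for every $i\in\{1,2,3\}$ we have $K_i^\pm=K_i\cup K_iK_i=Q_8$, so
$$|[\Tau_{K_i}]|=\frac{|\Tau_{K_i}|}{|{\mathcal H}(K_i)|}=\frac{2^{|Q_8/K_i^\pm|}}{|C_2|}=\frac{2^1}{2}=1.$$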
By Theorem~\ref{t18.7}(2), each minimal left ideal of the semigroup $\lambda(Q_8)$ is isomorphic to
$$(Q_8\times 2)\times (C_2\times 1)^3 =2\times Q_8\times C_2^{\;3}.$$
\smallskip
Next, given any finite number $n\ge 3$ we consider the generalized quaternion group $Q_{2^{n+1}}$. Maximal 2-cogroups in $Q_{2^{n+1}}$ are of the following form:
$$K_0=\{-1\},\; K_1=Q_{2^{n+1}}\setminus C_{2^n} \mbox{ \ and \ }K_{k,x}=\{1,x\}\cdot (C_{2^{k}}\setminus C_{2^{k-1}})$$ for $2\le k\le n$ and $x\in Q_{2^{n+1}}\setminus C_{2^n}$.
It follows that ${\mathcal H}(K_0)=Q_{2^{n+1}}$, ${\mathcal H}(K_1)=C_2$ and ${\mathcal H}(K_{k,x})=C_2$.
Also $$|[\Tau_{K_0}]|=\frac{|\Tau_{K_0}|}{|{\mathcal H}(K_0)|}=
\frac{2^{|Q_{2^{n+1}}/K_0^\pm|}}{|Q_{2^{n+1}}|}=\frac{2^{2^n}}{2^{n+1}}=
2^{2^n-n-1},$$
$$|[\Tau_{K_1}]|=\frac{|\Tau_{K_1}|}{|{\mathcal H}(K_1)|}=\frac{2^{|Q_{2^{n+1}}/K_1^\pm|}}{|C_2|}=\frac{2^1}{2}=1,$$and
$$|[\Tau_{K_{k,x}}]|=\frac{|\Tau_{K_{k,x}}|}{|{\mathcal H}(K_{k,x})|}=
\frac{2^{|Q_{2^{n+1}}/K_{k,x}^\pm|}}{|C_2|}=\frac{2^{2^{n+1}/2^{k+1}}}{2}=
2^{2^{n-k}-1}.$$
It is easy to check that two 2-cogroups $K_{k,x}$ and $K_{k,y}$ are conjugated if and only if $xy^{-1}\in C_{2^{n-1}}$. Taking any elements $x,y\in Q_{2^{n+1}}\setminus C_{2^n}$ with $xy^{-1}\notin C_{2^{n-1}}$, we conclude that the family $$\widetilde\mathcal K=\{K_0,K_1,K_{k,x},K_{k,y}:2\le k\le n\}$$ is a $[\wht\mathcal K]$-selector. Applying Theorems~\ref{t18.7}, \ref{t14.1}(3) and Corollary~\ref{c18.5}(3), we get:
\begin{theorem}\label{t21.6} Let $n\ge 2$ be a finite number. Then
\begin{enumerate}
\item[\textup{(1)}] each minimal left ideal of the semigroup $\lambda(Q_{2^{n+1}})$ is isomorphic to $$Q_{2^{n+1}}\times 2^{2^n-n-1}\times C_2\times \prod_{k=2}^{n}(C_2\times 2^{2^{n-k}-1})^2,$$
where the cubes $2^{2^n-n-1}$ and $2^{2^{n-k}-1}$ are endowed with the left zero multiplication;
\item[\textup{(2)}] each maximal subgroup of the minimal ideal $\mathsf K(\lambda(Q_{2^{n+1}}))$ is isomorphic to\newline $Q_{2^{n+1}}\times C_2^{\;2n-1}$.
\end{enumerate}
\end{theorem}
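For $n=2$ the formula from Theorem~\ref{t21.6}(1) agrees with the description of the minimal left ideals of $\lambda(Q_8)$ given above:
$$Q_8\times 2^{2^2-3}\times C_2\times\prod_{k=2}^{2}(C_2\times 2^{2^{0}-1})^2=2\times Q_8\times C_2^{\;3}.$$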
The infinite group $Q_{2^\infty}$ of generalized quaternions has a similar structure. This group contains the following maximal 2-cogroups:
$$K_0=\{-1\},\; K_1=Q_{2^\infty}\setminus C_{2^\infty},\;\mbox{ and }\;K_{k,x}=\{1,x\}\cdot (C_{2^{k}}\setminus C_{2^{k-1}})$$where $k\ge 2$ and $x\in Q_{2^\infty}\setminus C_{2^\infty}$. For these 2-cogroups we get
$${\mathcal H}(K_0)=Q_{2^\infty},\; {\mathcal H}(K_1)=C_2,\;\mbox{ and }\; {\mathcal H}(K_{k,x})=C_2$$and
$$|[\Tau_{K_0}]|=\mathfrak c,\; |[\Tau_{K_1}]|=1,\;\mbox{ and }\;|[\Tau_{K_{k,x}}]|=\mathfrak c.$$
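The equality $|[\Tau_{K_0}]|=\mathfrak c$ is proved by analogy with the infinite case in the proof of Theorem~\ref{t19.2}: the 2-cogroup $K_0=\{-1\}$ has infinite index in the countable group $Q_{2^\infty}$, so the space $\Tau_{K_0}$ has cardinality $2^{|Q_{2^\infty}/K_0^\pm|}=2^\omega=\mathfrak c$ while the group ${\mathcal H}(K_0)=Q_{2^\infty}$ is countable, and hence the orbit space $[\Tau_{K_0}]$ has cardinality $\mathfrak c$; the same argument applies to the 2-cogroups $K_{k,x}$.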
Any two 2-cogroups $K_{k,x}$, $K_{k,y}$ are conjugated. Then for any $b\in Q_{2^\infty}\setminus C_{2^\infty}$ the family $\widetilde\mathcal K=\{K_0,K_{k,b}:k\in\mathbb N\}$ is a $[\wht\mathcal K]$-selector. By analogy with Theorem~\ref{t21.4} we can prove:
\begin{theorem}\label{t21.7} For the group $Q_{2^\infty}$
\begin{enumerate}
\item[\textup{(1)}] minimal left ideals of the semigroup $\lambda(Q_{2^\infty})$ are not topological semigroups;
\item[\textup{(2)}] each minimal left ideal of the semigroup $\lambda(Q_{2^\infty})$ is homeomorphic to the Cantor cube and is algebraically isomorphic to $$Q_{2^\infty}\times C_2^{\;\omega}\times\mathfrak c,$$ where the cardinal $\mathfrak c$ is endowed with the left zero multiplication;
\item[\textup{(3)}] the semigroup $\lambda(Q_{2^\infty})$ contains a principal ideal isomorphic to $$(Q_{2^\infty}\wr \mathfrak c^{\mathfrak c})\times C_2\times (C_2\wr\mathfrak c^{\mathfrak c})^\omega;$$
\item[\textup{(4)}] $\lambda(Q_{2^\infty})$ contains an isomorphic copy of each semigroup of cardinality $\le \mathfrak c$;
\item[\textup{(5)}] each maximal subgroup of the minimal ideal $\mathsf K(\lambda(Q_{2^\infty}))$ is topologically isomorphic to $(Q_{2^\infty},\tau)\times C_2^{\;\omega}$ where $\tau$ is a twin-generated topology on $Q_{2^\infty}$;
\item[\textup{(6)}] for any twin-generated topology $\tau$ on $Q_{2^\infty}$ the right-topological group $(Q_{2^\infty},\tau)\times C_2^{\;\omega}$ is topologically isomorphic to a maximal subgroup of $\mathsf K(\lambda(Q_{2^\infty}))$.
\end{enumerate}
\end{theorem}
\begin{remark} Theorems~\ref{t21.7}(6) and \ref{t9.5} imply that among maximal subgroups of the minimal ideal of $\lambda(Q_{2^\infty})$ there are:
\begin{itemize}
\item Raikov complete topological groups,
\item incomplete totally bounded topological groups,
\item right-topological groups, which are not left-topological groups,
\item semitopological groups, which are not paratopological groups.
\end{itemize}
\end{remark}
\subsection{The dihedral 2-groups $D_{2^n}$}
By the {\em dihedral group} $D_{2n}$ of even order $2n$ we understand any group with presentation
$$\la a,b\mid a^n=b^2=1, \; bab^{-1}=a^{-1}\ra.$$
It can be realized as the group of symmetries of a regular $n$-gon.
So, $D_{2n}$ is a subgroup of the orthogonal group $O(2)$.
The group $D_{2n}$ contains the cyclic subgroup $C_n=\la a\ra$ as a subgroup of index 2. The subgroup of all elements of odd order is normal in $D_{2n}$ and hence coincides with the maximal normal odd subgroup $\mathrm{Odd}$. By Theorem~\ref{t20.2}, the superextension $\lambda(D_{2n})$ is isomorphic to the superextension $\lambda(D_{2n}/\mathrm{Odd})$ of the quotient group $D_{2n}/\mathrm{Odd}$. The latter group is isomorphic to the dihedral group $D_{2^k}$ where $2^k$ is the maximal power of 2 that divides $2n$. Therefore it suffices to consider the superextensions of the dihedral 2-groups $D_{2^k}$.
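This reduction is purely arithmetic: one only needs the 2-part of $2n$. A minimal sketch (plain Python; the helper name `two_part` is ours) illustrates it on a few dihedral groups.

```python
def two_part(m: int) -> int:
    """Largest power of 2 dividing the positive integer m."""
    k = 1
    while m % 2 == 0:
        m //= 2
        k *= 2
    return k

# lambda(D_{2n}) is isomorphic to lambda(D_{2^k}) with 2^k = two_part(2n):
# D_24 (2n = 24) reduces to D_8, while D_20 (2n = 20) reduces to D_4.
assert two_part(24) == 8
assert two_part(20) == 4
assert two_part(6) == 2   # D_6 reduces to D_2 = C_2
```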
By the {\em infinite dihedral 2-group} we understand the union $$D_{2^\infty}=\bigcup_{k\in\mathbb N}D_{2^k}\subset O(2).$$
It contains the quasicyclic 2-group $C_{2^\infty}$ as a normal subgroup of index 2.
Now we analyze the structure of the superextension $\lambda(D_{2^{n}})$ for finite $n\ge 1$. The maximal 2-cogroups in $D_{2^{n}}$ are of the following form:
$$K_0=D_{2^{n}}\setminus C_{2^{n-1}} \mbox{ \ and \ } K_{k,x}=\{1,x\}\cdot(C_{2^k}\setminus C_{2^{k-1}})$$where
$1\le k<n$ and $x\in K_0=D_{2^{n}}\setminus C_{2^{n-1}}$.
The characteristic groups of these maximal 2-cogroups are isomorphic to the 2-element cyclic group $C_2$. Also
$$|[\Tau_{K_0}]|=1\mbox{ and }|[\Tau_{K_{k,x}}]|=\frac{|2^{D_{2^{n}}/K_{k,x}^\pm}|}{|{\mathcal H}(K)|}=2^{2^{n-k}-1}$$ for all $1\le k<n$ and $x\in K_0$.
Let $b\in D_{2^{n}}\setminus C_{2^{n-1}}$ be any element and let $a$ be a generator of the cyclic subgroup $C_{2^{n-1}}\subset D_{2^{n}}$.
One can check that two 2-cogroups $K_{k,x}$ and $K_{k,y}$ are conjugate if and only if $x^{-1}y\in C_{2^{n-2}}$.
Therefore the family
$$\widetilde\mathcal K=\{K_0,K_{k,b},K_{k,ab}:1\le k<n\}$$ is a $[\wht\mathcal K]$-selector.
Applying Theorems~\ref{t18.7}, \ref{t14.1} and Corollary~\ref{c18.5}(3), we get:
\begin{theorem}\label{t21.9} For every $n\in\mathbb N$
\begin{enumerate}
\item[\textup{(1)}] The semigroup $\lambda(D_{2^{n}})$ contains a principal left ideal isomorphic to $C_2\times \prod\limits_{k=1}^{n-1} (C_2\wr Z_k^{\;Z_k})^2$ where $Z_k=2^{2^{n-k}-1}$.
\item[\textup{(2)}] Each minimal left ideal of $\lambda(D_{2^{n}})$ is isomorphic to $C_2\times \prod_{k=1}^{n-1} (C_{2}\times Z_k)^2$ where the cubes $Z_k$ are endowed with the left zero multiplication.
\item[\textup{(3)}] Each maximal subgroup of the minimal ideal $\mathsf K(\lambda(D_{2^{n}}))$ is isomorphic to $C_2^{\;2n-1}$.
\item[\textup{(4)}] The semigroup $\lambda(D_{2^{n}})$ contains an isomorphic copy of each semigroup $S$ of cardinality $|S|<2^{2^{n-1}-1}$.
\end{enumerate}
\end{theorem}
The superextension of the infinite dihedral 2-group $D_{2^\infty}$ has quite interesting properties. All maximal subgroups of the minimal ideal $\mathsf K(\lambda(D_{2^\infty}))$ are compact topological groups. On the other hand, the semigroup $\lambda(D_{2^\infty})$ contains both minimal left ideals that are topological semigroups and minimal left ideals that are not.
\begin{theorem} For the group $D_{2^\infty}$
\begin{enumerate}
\item[\textup{(1)}] each minimal left ideal of the semigroup $\lambda(D_{2^\infty})$ is homeomorphic to the Cantor cube $2^\omega$ and is algebraically isomorphic to the compact topological semigroup $C_2^{\;\omega}\times 2^\omega$ where the Cantor cube $2^\omega$ is endowed with the left zero multiplication;
\item[\textup{(2)}] each maximal subgroup of the minimal ideal $\mathsf K(\lambda(D_{2^\infty}))$ is topologically isomorphic to the compact topological group $C_2^{\;\omega}$;
\item[\textup{(3)}] $\lambda(D_{2^\infty})$ contains a minimal left ideal, which is topologically isomorphic to the compact topological semigroup
$C_2^{\;\omega}\times 2^\omega;$
\item[\textup{(4)}] $\lambda(D_{2^\infty})$ contains a minimal left ideal, which is not a semitopological semigroup;
\item[\textup{(5)}] the semigroup $\lambda(D_{2^\infty})$ contains a principal ideal isomorphic to $C_2\times (C_2\wr \mathfrak c^{\mathfrak c})^\omega;$
\item[\textup{(6)}] $\lambda(D_{2^\infty})$ contains an isomorphic copy of each semigroup of cardinality $\le \mathfrak c$.
\end{enumerate}
\end{theorem}
\begin{proof} First note that by Theorem~\ref{t6.2} the torsion group $X=D_{2^\infty}$ is twinic and has trivial twinic ideal.
The maximal 2-cogroups in $D_{2^{\infty}}$ are of the following form:
$$K_0=D_{2^{\infty}}\setminus C_{2^\infty} \mbox{ \ and \ } K_{k,x}=\{1,x\}\cdot(C_{2^k}\setminus C_{2^{k-1}})$$where
$k\in\mathbb N$ and $x\in K_0$.
The characteristic groups of these maximal 2-cogroups are isomorphic to the 2-element cyclic group $C_2$. Consequently, for any twin set $A\in\wht\Tau$ its characteristic group ${\mathcal H}(A)$ is topologically isomorphic to $C_2$. Observe that
$$|[\Tau_{K_0}]|=1\mbox{ and }|[\Tau_{K_{k,x}}]|=2^\omega$$ for all $k\in\mathbb N$ and $x\in K_0$. Since the characteristic group $H(K_{k,x})=C_2$ is finite, the orbit space $[\Tau_{K_{k,x}}]$ is a compact Hausdorff space, homeomorphic to the Cantor cube $2^\omega$.
One can check that any two 2-cogroups $K_{k,x}$ and $K_{k,y}$ are conjugate.
Therefore for any $b\in D_{2^\infty}\setminus C_{2^\infty}$ the family
$\widetilde\mathcal K=\{K_0,K_{k,b}:k\in\mathbb N\}$ is a $[\wht\mathcal K]$-selector.
\smallskip
1. By Theorem~\ref{t18.7}(e) and Proposition~\ref{p12.1}, each minimal left ideal of $\lambda(X)$ is homeomorphic to the product $\prod_{K\in\widetilde\mathcal K}\Tau_K$, which is homeomorphic to $2^{X/K_0^\pm}\times\prod_{k\in\mathbb N}2^{X/K_{k,b}^\pm}$. The latter space is homeomorphic to the Cantor cube $2^\omega$.
By Theorem~\ref{t18.7}(e), each minimal left ideal of $\lambda(X)$ is algebraically isomorphic to $\prod_{K\in\widetilde\mathcal K}{\mathcal H}(K)\times [\Tau_K]$ and the latter semigroup is isomorphic to $C_2^\omega\times 2^\omega$ where the Cantor cube $2^\omega$ is endowed with the left zero multiplication.
\smallskip
2. Taking into account that each characteristic group ${\mathcal H}(A)$, $A\in\wht\Tau$, is topologically isomorphic to $C_2$ and applying Theorem~\ref{t18.7}(b), we conclude that each maximal subgroup in the minimal ideal $\mathsf K(\lambda(X))$ is topologically isomorphic to the compact topological group $C_2^\omega$.
\smallskip
3. Since each characteristic group ${\mathcal H}(K)$, $K\in\wht\mathcal K$, is finite (being isomorphic to $C_2$), Proposition~\ref{p18.10} implies that some minimal left ideal of $\lambda(X)$ is a topological semigroup, which is topologically isomorphic to the compact topological semigroup $$\prod_{K\in\widetilde\mathcal K}{\mathcal H}(K)\times [\Tau_K]={\mathcal H}(K_0)\times[\Tau_{K_0}]\times\prod_{k\in\mathbb N}({\mathcal H}(K_{k,b})\times[\Tau_{K_{k,b}}])$$ by Theorem~\ref{t18.7}(g). The latter topological semigroup is topologically isomorphic to $C_2^{\;\omega}\times 2^\omega$.
\smallskip
4. Since the maximal 2-cogroups $K_{k,b}$, $k\in\mathbb N$, have infinite index in $D_{2^\infty}$, Proposition~\ref{p18.9} implies that the semigroup $\lambda(D_{2^\infty})$ contains a minimal left ideal, which is not a semitopological semigroup.
\smallskip
5. By Corollary~\ref{c18.5}(3) and Theorem~\ref{t14.1}(3), the semigroup $\lambda(D_{2^\infty})$ contains a principal left ideal that is algebraically isomorphic to the semigroup $\prod_{K\in\widetilde\mathcal K}({\mathcal H}(K)\wr[\Tau_K]^{[\Tau_K]})$, which is isomorphic to $C_2\times (C_2\wr \mathfrak c^{\mathfrak c})^\omega$.
\smallskip
6. By the preceding item, $\lambda(D_{2^\infty})$ contains a subsemigroup isomorphic to the semigroup $\mathfrak c^{\mathfrak c}$ of all self-mappings of the continuum $\mathfrak c$. By Lemma~\ref{emb}, the latter semigroup contains an isomorphic copy of each semigroup of cardinality $\le\mathfrak c$.
\end{proof}
\subsection{Superextensions of finite groups of order $<16$}\label{s21.5}
Theorem~\ref{t19.3} and Proposition~\ref{p19.1} give us an algorithmic way of calculating the minimal left ideals of the superextensions of finitely generated abelian groups. For non-abelian groups the situation is a bit more complicated. In this section we shall describe the minimal left ideals of the superextensions of finite groups $X$ of order $|X|<16$.
In fact, Theorem~\ref{t20.2} helps us to reduce the problem to studying superextensions of groups $X/\mathrm{Odd}$. The group $X/\mathrm{Odd}$ is trivial if the order of $X$ is odd. So, it suffices to check non-abelian groups of even order. If $X$ is a 2-group, then the subgroup $\mathrm{Odd}$ of $X$ is trivial and hence $X/\mathrm{Odd}=X$. Also the subgroup $\mathrm{Odd}$ is trivial for simple groups.
The next table describes the structure of minimal left ideals of the superextensions of groups $X=X/\mathrm{Odd}$ of order $|X|\le 15$. In this table $\mathcal E$ stands for a minimal idempotent of $\lambda(X)$, which generates the principal left ideal $\lambda(X)\circ\mathcal E$ and lies in the maximal subgroup $H(\mathcal E)=\mathcal E\circ\lambda(X)\circ\mathcal E$.
Below the cubes $2^n$ are considered as semigroups of left zeros.
\vskip20pt
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\phantom{$|^|_|$}$X$\phantom{$|^|_|$} & $|E(\lambda(X)\circ\mathcal E)|$ & $\mathcal E\circ\lambda(X)\circ\mathcal E$ & $\lambda(X)\circ\mathcal E$\cr
\hline
$C_2$&\phantom{$|^|_|$}$1$\phantom{$|^|_|$}& $C_2$ &$C_2$\cr
\hline
$C_4$&\phantom{\large$o^|$}$1$\phantom{\large $o^|$} & $C_2\times C_4$&$C_2\times C_4$\cr
$C_2^{\;2}$&\phantom{\Large$o_|$}$1$\phantom{\Large$o_|$}& $C_2^{\;3}$ &$C_2^{\;3}$\cr
\hline
$C_2^{\;3}$&\phantom{\Large$o^|$}$1$\phantom{\Large $o^|$}& $C_2^{\;7}$ & $C_2^{\;7}$ \cr
$C_2\oplus C_4$&$1$ & $C_2^{\;2}\times C_4^{\;2}$ & $C_2^{\;3}\times C_4^{\;2}$\cr
$C_8$&\phantom{\Large$o_|$}$2$\phantom{\Large$o_|$}& $C_2\times C_4\times C_8$ & $2\times C_2\times C_4\times C_8$\cr
\hline
$D_8$&\phantom{\Large$o^|$}$2$\phantom{\Large $o^|$}& $C_2^{\;5}$& $2^2\times C_2^{\;5}$\cr
$Q_8$&\phantom{\Large$o_|$}$2$\phantom{\Large$o_|$}& $C_2^{\;3}\times Q_8$& $2\times C_2^{\;3}\times Q_8$\cr
\hline
$A_4$&\phantom{\large$o^|_|$}$2^6$\phantom{\large $o^|_|$}& $C^{\;3}_2$ & $2^6\times C^{\;3}_2$\cr
\hline
\end{tabular}
\end{center}
\medskip
For abelian groups the entries of this table are calculated with the help of Theorem~\ref{t19.3} and Proposition~\ref{p19.1}. Let us illustrate this with the example of the group $C_2\oplus C_4$.
By Proposition~\ref{p19.1}, for the group $X=C_2\oplus C_4$ we get
\begin{itemize}
\item $q(X,C_2)=|\hom(X,C_2)|-|\hom(X,C_1)|=2\cdot 2-1=3$;
\item $q(X,C_4)=\frac12(|\hom(X,C_4)|-|\hom(X,C_2)|)=\frac12(2\cdot 4-2\cdot 2)=2$;
\item $q(X,C_{2^k})=0$ for $k>2$.
\end{itemize}
Then each minimal left ideal of $\lambda(C_2\oplus C_4)$ is isomorphic to $$(C_2\times 2^{2^{1-1}-1})^{q(X,C_2)}\times(C_4\times 2^{2^{2-1}-2})^{q(X,C_4)}=(C_2\times 2^0)^3\times (C_4\times 2^0)^2=C_2^{\;3}\times C_4^{\;2}.$$
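These counts are small enough to verify by brute force: a homomorphism $C_2\oplus C_4\to C_m$ is determined by the images $x,y$ of the two generators, subject to $2x=0$ and $4y=0$ in $\mathbb Z/m$. In the Python sketch below the function name `hom_count` is ours, and the divisor $2^{k-1}$ in the last step is extrapolated from the two displayed formulas.

```python
def hom_count(m: int) -> int:
    """Number of homomorphisms C_2 + C_4 -> C_m (additive notation):
    pairs (x, y) in (Z/m)^2 with 2x = 0 and 4y = 0."""
    return sum(1 for x in range(m) for y in range(m)
               if (2 * x) % m == 0 and (4 * y) % m == 0)

h1, h2, h4, h8 = hom_count(1), hom_count(2), hom_count(4), hom_count(8)
assert (h1, h2, h4) == (1, 4, 8)   # matches 1, 2*2 and 2*4 above

q2 = h2 - h1                       # q(X, C_2) = 3
q4 = (h4 - h2) // 2                # q(X, C_4) = 2
q8 = (h8 - h4) // 4                # q(X, C_8) = 0, so q(X, C_{2^k}) = 0 for k > 2
assert (q2, q4, q8) == (3, 2, 0)
```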
Next, we consider the non-abelian groups. In fact, the groups $Q_8$ and $D_8$ have been treated in Theorems~\ref{t21.6} and \ref{t21.9}. So, it remains to consider the alternating group $A_4$.
This group has order 12, contains a normal subgroup isomorphic to $C_2\times C_2$, and contains no subgroup of order 6. This implies that all 2-cogroups of $A_4$ lie in $C_2\times C_2$ and consequently $A_4$ contains 3 maximal 2-cogroups. Each maximal 2-cogroup $K\subset A_4$ contains two elements and has characteristic group ${\mathcal H}(K)$ isomorphic to $C_2$. Since $|X/K^\pm|=3$,
Proposition~\ref{p15.2} guarantees that $|[\Tau_K]|=2^{|X/K^\pm|}/|{\mathcal H}(K)|=2^{3-1}=2^2$.
Applying Theorem~\ref{t18.7}, we see that each minimal left ideal of the semigroup $\lambda(A_4)$ is isomorphic to $(C_2\times 2^2)^3=2^6\times C_2^{\;3}$.
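The arithmetic in this last computation can be double-checked mechanically; the following sketch (plain Python, using only the numbers quoted above) confirms that the cardinalities match.

```python
# Orbit space size from Proposition p15.2: |[T_K]| = 2^{|X/K^pm|} / |H(K)|
orbit = 2 ** 3 // 2          # |X/K^pm| = 3, |H(K)| = |C_2| = 2
assert orbit == 4            # i.e. the cube 2^2

# Three maximal 2-cogroups, each contributing a factor C_2 x 2^2, so the
# minimal left ideal has cardinality (|C_2| * orbit)^3 = 512,
# matching |2^6 x C_2^3| = 2^6 * 2^3.
assert (2 * orbit) ** 3 == 2 ** 6 * 2 ** 3 == 512
```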
\section{Some Open Problems}
\begin{problem} Describe the structure of (minimal left ideals) of superextensions of the simple groups $A_n$ for $n\ge 5$.
\end{problem}
\begin{problem} Describe the structure of (minimal left ideals) of superextensions of the finite groups of order 16.
\end{problem}
Since the free group $F_2$ with two generators is not twinic, the results obtained in this paper cannot be applied to this group.
\begin{problem} What can be said about the structure of the superextension $\lambda(F_2)$ of the free group $F_2$?
\end{problem}
\begin{problem} Investigate the permanence properties of the class of twinic groups. Is this class closed under taking subgroups? Under products?
\end{problem}